Monday, April 27, 2009

The AI Singularity

The AI Singularity has been defined as a future point in history when machine intelligence will surpass human intelligence. A technological transition event of this magnitude has been compared to a cosmological “black hole” with an “event horizon” beyond which nothing can be known.

The idea of the AI Singularity fails philosophically because it rests on the following implicit clause: “I am not smart enough to know what smarter is”. This clause is built into the very act of “recognizing” a hyper-intelligence. The acid test of passing through the AI “event horizon” is not that you or I would be flabbergasted by the amazing smartness of machines, or of human-machine fusion. The acid test is being unable to comprehend anything at all! Smartness beyond the AI Singularity event horizon is defined as so vast that no ordinary human mind can even realize it is there. It is like asking a chimp to realize how much smarter a human is. The chimp cannot possibly know that; the question, for the chimp, is meaningless. Equally meaningless would be the question for the human of the future crossing the event horizon of the AI Singularity. For all we know, there might exist today - or there might have existed for centuries, or millennia - intelligences higher than ours. Perhaps we live in a “Matrix” world created by intelligences smarter than us. Perhaps we crossed the AI Singularity event horizon many centuries ago, or last year. But we can never possibly know that.

The AI Singularity thus reduces to the “brain-in-a-vat” argument. However, a brain-in-a-vat cannot possibly know that it is a brain-in-a-vat, because the statement “I am a brain-in-a-vat” is self-refuting: if it were true, the speaker’s words “brain” and “vat” could refer only to features of the simulated world, never to the vat itself, so the claim could never truthfully say what it means. Therefore, to claim such a thing is nonsense.

The AI Singularity also fails scientifically. In order to have any sensible discussion of intelligence - human, machine, or otherwise - we need a theory of intelligence, which we do not currently have. Major questions remain open, the most significant of which is the relationship between mind and brain. Until such scientific problems have been clearly defined, researched, and reduced to a testable causal explanatory model, we cannot even begin to imagine “machine intelligence”. We can, of course (and we do), design and develop machines that perform complex tasks in uncertain environments. But it would be a leap of extreme faith even to compare these machines with “minds”.

The AI Singularity fails sociologically too. It is a version of transhumanism, which rests on a feeble and debatable model of human progress. It ignores an enormous corpus of ideas and data relating to human psychological and cultural development. It assumes a value system of ever better, faster, stronger, longer - a series of superlatives that reflects a social system of intense competition. However, not all social systems are built on competition. Indeed, the most successful ones are built on collaboration. In a social context of collaborative reciprocity, superlatives act against the common good. To imagine a society willing to invest its resources in maximizing the intelligence of its individual members would be to imagine a society bent on self-annihilation. To put it simply: if I am the smartest person in the world, if I can solve any problem that comes my way, if I can be happy by myself - why should I need you?
