X-Message-Number: 25526
Date: Wed, 12 Jan 2005 09:44:37 -0800
Subject: AI and the Singularity
From: <>

Many people fantasize thusly:

1. General AI is a laudable goal whose attainment will herald the 
arrival of the singularity (or something of a similarly grand nature);

2. Humans will have to upgrade their brains and network their 
brains with others in order to compete in future economies;

3. Advanced beings will be like gods, and we will be completely at 
their mercy unless we too advance ourselves.

All of these are nonsense. Let me begin with Fantasy #1.

By 'general AI', we of course mean intelligence like our own, which 
evolved to optimize our reproductive fitness. That is what it does 
best. Everything else in the world, it does poorly. Humans are bad 
at math, they are bad at crunching numbers, they are bad at playing 
chess, they are bad at virtually everything they do. Why? BECAUSE 
they have 'general AI'.

The development of so-called general-purpose AI will be great for 
creating believable computer characters. However, it won't solve 
any significant problems whatsoever, and it will have only a 
slightly broader range of applications than humans themselves. The 
GAI will be as bad as humans are; it will merely be faster, or have 
more memory, than a human.

The significant engineering breakthroughs of the future will be 
made not by GAI, but by highly specific 'AI': for example, 
hardware/software that can perform QM simulations, derive new 
theorems, or design a faster processor. Such hardware/software will 
be extraordinarily useful, but it would be a stretch to call it 
sentient.

This should not be a surprise. A being like us will share our 
limitations. Other things will have their own limitations. It is 
simply not possible to do well at everything: a jack of all trades 
is a master of none. My handheld calculator can do things my highly 
evolved brain can never do, because of the limitations of my design.

A consequence of this is the absurdity of Fantasy #2. Matthew and 
others imagine people pressured to 'upgrade' their brains, to 
network together to solve certain problems, all because of 
competition. This is one of the most ridiculous things I have ever 
heard. Do you think GAI, or an enhanced GAI, can ever perform QM 
simulations better than a dedicated machine, designed with nothing 
in mind other than performing QM simulations? Do you think GAI can 
ever perform advanced routing better than a dedicated machine? Do 
you think GAI can ever write software better than a tool designed 
to do nothing but write software?

Clearly, dedicated design will always be more efficient than GAI. 
Sure, I can perform QM calculations with pencil and paper, but it 
would take me a lifetime to compute what my computer can do in a 
minute. The only way I could ever compete with a dedicated design 
is to BECOME a dedicated design, or to attach dedicated hardware to 
my skull. But then, why bother? Why attach the hardware to my head, 
if I can leave it sitting by my desk?

The economy of the future won't involve humans, even super-duper 
humans. It will involve purpose-specific machines, created to do 
one thing and do it very well. GAI will have only niche 
applications, and none of those applications will depend on its 
supposed intelligence.

This leaves me with Fantasy #3. The easiest rebuttal of this 
fantasy is the observation that it is easier to destroy than to 
create. Even today, a nuclear bomb that fits inside my house can 
destroy my city, and it will destroy everything in it, 'upgraded' 
humans and 'nanobots' alike. The future will contain even more such 
destructive technologies. Humans and their successors, if they are 
to survive at all, must do so by cooperation. There will be no 
super-advanced race of post-humans that decides the fate of everyone.

Other observations are nearly as important: (1) More intelligence, 
of the kind we already have, is not likely to increase short-term 
survival, only long-term survival. (2) Humans are already smart 
enough to use technology. A squirrel isn't smart enough to take a 
gun, point it at a human, and shoot. But a human is. Humans have 
reached a threshold level of intelligence at which they can 
understand and use any technology, provided the interface is 
designed with them in mind. This means a human equipped with 
technology should have short- and long-term survival prospects as 
good as, or nearly as good as, those of an 'upgraded' human, when 
considering threats from other beings.

Best Regards,

Richard B. R.
