X-Message-Number: 25499
References: <>
From: Peter Merel <>
Subject: The Singularity Is A Fantasy
Date: Mon, 10 Jan 2005 00:35:19 +1100

Tim Freeman writes,

> "Drexler" isn't a citation.

Engines of Creation (EOC), Chapter 5 - Drexler's description of The 
Singularity:

"[...] assembler-built AI systems will bring still swifter automated 
engineering, evolving technological ideas at a pace set by systems a 
million times faster than a human brain. The rate of technological 
advance will then quicken to a great upward leap: in a brief time, many 
areas of technology will advance to the limits set by natural law. In 
those fields, advance will then halt on a lofty plateau of achievement. 
This transformation is a dizzying prospect. Beyond it, if we survive, 
lies a world with replicating assemblers, able to make whatever they 
are told to make, without need for human labor. Beyond it, if we 
survive, lies a world with automated engineering systems able to direct 
assemblers to make devices near the limits of the possible, near the 
final limits of technical perfection."

> Let's get clear on what the task is.  I've asked you to cite a
> credible nanotech apologist who says you need real AI to command,
> control, and orient a single robot in an environment as complex and
> demanding as that of an assembler.

That's a strawman of your own making. I've already stipulated that a 
single assembler in a controlled environment may need no such thing. 
The point is that orienting, commanding, and controlling a single 
assembler in a natural environment a la Drexler - or coherently 
commanding and controlling a millionty-billionty of the same - are 
diamond-hard engineering problems with no obvious solution. The 
feasibility of an assembler does not demonstrate the feasibility of 
real AI; consequently The Singularity, Drexler's "great upward leap", 
remains as fantastic as real AI itself.

I'm suggesting, then, that real AI is actually precluded by the 
Turing/Von Neumann computational paradigm. Complexity theory gives us 
plenty of mathematical proof that this paradigm cannot tractably 
handle combinatorial explosion - whole classes of problems whose cost 
grows exponentially with their size. And that complexity is inherent 
in all of the Drexlerian command and control scenarios. Since Moore's 
law refers specifically to the power of T/VN machines, nothing 
suggests The Singularity is on the way any time soon.
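
To put rough numbers on that combinatorial point, here's a 
back-of-envelope sketch in Python. Every figure in it is an assumption 
of mine, invented purely to show the shape of the scaling, not drawn 
from any actual design:

    # Back-of-envelope: joint state space of a bot swarm vs. Moore's law.
    # All quantities below are illustrative assumptions, not measurements.
    import math

    bots = 10**6           # a modest Drexlerian swarm (assumed size)
    states_per_bot = 8     # coarse orientations per bot (a generous simplification)

    # Each added bot multiplies the joint state space, so its log grows linearly:
    doublings = bots * math.log2(states_per_bot)   # log2(8**bots) = 3,000,000
    years_of_moore = doublings * 1.5               # ~18 months per doubling

    print(f"joint state space ~= 2**{doublings:,.0f}")
    print(f"Moore's-law years to gain that speedup: {years_of_moore:,.0f}")

Even under these cartoon assumptions, a controller that must reason 
over the swarm's joint configurations faces a space Moore's law can't 
touch; only a paradigm that avoids enumerating combinations escapes it.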

> Given the variety of artificial food
> already available to eat I doubt you're claiming that food synthesis
> is a hard problem.

It is hard if you cast it in Drexlerian terms - the machine that sits 
on top of your fridge and turns your lawn clippings and old tires into 
filet mignon. I certainly grant we may develop non-Drexlerian means to 
synthesize food.

> they aren't on the critical path to real
> AI so problems with these devices don't support your argument that
> there's an unresolvable dependency loop at the beginning of the
> scenario.

Until you can show me a classical-scale robot that can, say, navigate 
blind across a roomful of billiard balls fired at random times and 
random velocities, the end-product devices remain fantasy. Until you 
can digitally program an ant's nest to cooperate in spelling out 
Drexler's name in ants, these scenarios remain fantasy.
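
Anyone who doubts the blind-crossing challenge can simulate a toy 
version of it. The sketch below is mine; the room size, ball count, 
speeds, and collision radius are all arbitrary assumptions, and it 
only estimates how often a blind straight-line crossing gets struck, 
not how a robot might succeed at one:

    # Monte Carlo sketch of the blind-crossing thought experiment.
    # Geometry and counts are invented purely for illustration.
    import random

    def crossing_fails(n_balls=20, room=10.0, robot_speed=1.0,
                       ball_speed=2.0, radius=0.3, dt=0.01):
        """One blind straight-line crossing; True if the robot is struck."""
        balls = [[random.uniform(0, room), random.uniform(0, room),
                  random.uniform(-ball_speed, ball_speed),
                  random.uniform(-ball_speed, ball_speed)]
                 for _ in range(n_balls)]
        x, y = 0.0, room / 2           # robot crosses left to right, blind
        while x < room:
            x += robot_speed * dt
            for b in balls:
                b[0] = (b[0] + b[2] * dt) % room   # wrap around the room edges
                b[1] = (b[1] + b[3] * dt) % room
                if (b[0] - x) ** 2 + (b[1] - y) ** 2 < radius ** 2:
                    return True
        return False

    trials = 1000
    hits = sum(crossing_fails() for _ in range(trials))
    print(f"blind crossings struck: {100 * hits / trials:.0f}%")

With even a score of balls in the room the blind crossing fails most 
of the time - and that's a robot with one job, in two dimensions, with 
nothing else to coordinate.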

The dependency loop between implementing Drexler's programs and real AI 
isn't my argument anyway - it's the fundamental mechanism of The 
Singularity itself. I have stipulated that there may be paths to real 
AI other than nanotech, in which case the loop may be resolvable. But 
by 2030? What leads anyone to expect that?

> The dependencies are: Someone builds an assembler that works in a
> controlled environment.  The assembler builds more assemblers, still
> in a controlled environment.  The group of assemblers builds a large
> computer, still in a controlled environment.

Fine to here.

> The large computer is
> either programmed with uploads (which requires new neurobiology) or
> programmed with custom code (which requires new AI).

To deploy your uploads you need to distinguish them from their 
biological substrate - a new engineering problem with no articulated 
solution. We've already touched on the scale of the problem with real 
AI under Turing/Von Neumann. Vinge's 2005-2030 estimate assumes these 
problems just naturally work themselves out under Moore's law. For the 
reasons above, I'm suggesting there is no rational basis for expecting 
this.

> Nowhere in this scenario do we need to orient an assembler in a
> challenging environment.

An assembler is just a nanobot programmed to create a copy of itself; I 
grant that's sloppy wording on my part, but I reckon if you can't 
orient, command, or control the one, then you can't do it for the 
other.

To give a human-scale analogy: a few years ago the Blue Angels 
aerobatics team flew into the ground - the whole team, all killed. It 
came out at the inquiry that the pilots could never handle all the 
different vectors required to orient themselves to the horizon and to 
their team members at once. So just one pilot watched the horizon 
while the rest of the team oriented themselves relative to him. The 
lead pilot lost the horizon, and that was the end of them.

If a dozen humans can't orient themselves when their lives depend on 
it, what suggests that your large computer can do it for millions of 
bots interacting with trillions of other molecules? What suggests that 
orienting, commanding, and controlling even one nanobot in such an 
environment is practicable at all?
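
For the bookkeeping alone, with numbers I'm inventing purely for 
scale:

    # Bookkeeping for swarm command and control, under loudly invented numbers.
    bots = 10**6                 # assumed swarm size
    updates_per_second = 10**9   # guessed collision timescale at nano dimensions
    ambient_molecules = 10**12   # guessed environment the swarm contends with

    # Minimum state traffic if each bot must stay oriented in its surroundings:
    observations = bots * updates_per_second
    print(f"{observations:.1e} bot-state updates per second")   # 1.0e+15
    # ...and that's before tracking any of the 10**12 ambient molecules.

Ten to the fifteenth state updates per second, before the controller 
does any reasoning at all, seems a fair measure of the gulf between 
"an assembler works on a bench" and "a swarm works in the wild".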

Peter Merel.
