X-Message-Number: 25530
References: <>
From: Peter Merel <>
Subject: The Singularity Is A Fantasy - replies to Donaldson & Soreff
Date: Thu, 13 Jan 2005 23:07:00 +1100

Thomas Donaldson alludes to Rodney Brooks' subsumption architecture and 
behavior based programming model. While Brooks' approach has been 
influential in robotics and simulated robotics, there seems to be no 
obvious way to generalize it into a paradigm for learning systems, 
Artificial Life, or similar "real AI". Last I heard, he'd gone back to 
the drawing board on Artificial Life in particular - trying, like his 
AI forebears, to get less heuristic and more general. We should 
certainly wish him luck.

Jeffrey Soreff writes,

> A pity that failures of hyped technologies don't get more widely 
> publicized.

I still have the 4-volume "Handbook of AI". It's on my bookshelf next 
to the works of J.B. Rhine, Lord Korzybski, and Erich Raspe.

> Yes, NP-hard problems are almost certainly exponential (though,
> as of the last I'd heard, NP!=P still hasn't actually been _proved_).

That's because Turing's model of computation includes infinities. 
Infinity, in any model, indicates a frame problem.

> That said, the bulk of NP-hard problems have some sort of
> approximation which is _not_ exponential.  I work in CAD, and we
> routinely compute approximate solutions to many NP-hard problems.

Yes, and similarly in many other problem domains. There are, for 
example, TSP solutions for upwards of 13,000 cities. Global optima are 
seldom necessary for productive work in any problem domain, and 
simplifying assumptions are often passable. Yours truly once wrote a 
wonderful train scheduling algorithm for a now thankfully defunct 
startup. You'd be surprised at how well you can optimize a schedule if 
you assume each station has an infinite number of platforms ...
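To illustrate the point about approximation - a sketch of my own, not 
any particular CAD or scheduling algorithm: a greedy nearest-neighbour 
heuristic for the TSP runs in O(n^2) time rather than exponential, and 
produces a tour that is merely good enough rather than globally optimal.

```python
import math
import random

def nearest_neighbour_tour(cities):
    """Greedy nearest-neighbour heuristic for the TSP.

    O(n^2) rather than exponential; the price is that the tour is
    'good enough', not a global optimum.
    """
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        # Always hop to the closest unvisited city.
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    """Total length of the closed tour (returns to the start city)."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

if __name__ == "__main__":
    random.seed(1)
    cities = [(random.random(), random.random()) for _ in range(200)]
    tour = nearest_neighbour_tour(cities)
    print(len(tour), round(tour_length(cities, tour), 2))
```

Exact solvers have handled instances of 13,000+ cities, but only with 
years of cleverness; the heuristic above gets a usable answer in 
milliseconds.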

Nevertheless it is fair to say that scaling with combinations and 
dimensions is the major limiting factor in human science and 
technology. Most of our databases are incapable of dealing effectively 
with more than 4 dimensions. You can index more dimensions than that, 
but woe betide you if you want to do a join!
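To put a number on that scaling complaint - a toy illustration of mine, 
not any real database's indexing scheme: a uniform grid index with a 
fixed per-axis resolution needs resolution^dimensions cells, so the 
cost explodes exponentially as dimensions are added.

```python
# Cells in a uniform grid index: resolution ** dimensions.
# At 10 cells per axis, 4 dimensions is manageable; 16 is hopeless.
def grid_cells(resolution, dimensions):
    return resolution ** dimensions

for d in (2, 4, 8, 16):
    print(d, grid_cells(10, d))
# 2 -> 100, 4 -> 10000, 8 -> 100000000, 16 -> 10000000000000000
```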

In CAD, GIS, and other spatial modelling, the famous bugbears are 
multidimensional scaling, interpolation, and integration. There are 
lots of tricks that enable modelers to get by, for models of limited 
complexity, but there are other problems that can be addressed only by 
years of time on a supercomputer, and many more that we simply don't 
attempt to solve at all.

In scientific domains it's worth observing that almost all of our laws 
involve first and second order differentials and either linear or 
squared powers. There is a smattering of cubed variables, but that's 
about it. If there are physical laws in operation that involve more 
complexity than this, we're blind to 'em.

>       As far as I know, no one has been able to
>       do "common sense" reasoning robustly, at any speed.

This was the origin of the classical formal systems frame problem. If 
we say painting changes an object's color and moving changes its 
position, formal logic concludes nothing about an object's position 
after it has been painted. Formal logic systems can't axiomatize the 
commonsense law of inertia - that the properties of everyday processes 
are usually not perturbed unless operated upon. So formal logic can't 
tell that painting doesn't usually cause a real change in an object's 
position.
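The point can be made concrete with a toy sketch - my own illustration, 
not any particular formal system: if each action's effect axiom lists 
only what it changes, nothing licenses any conclusion about the 
properties it doesn't mention, and a frame rule has to be bolted on by 
hand.

```python
# Naive effect axioms: each action states only what it changes.
# Without a frame rule, unmentioned properties are logically unknown
# after the action - the classical frame problem in miniature.

def paint(state, colour):
    # Effect axiom: paint changes 'colour'. It says nothing about
    # 'position', so position simply vanishes from what we can infer.
    return {'colour': colour}

def move(state, position):
    # Likewise: move says nothing about 'colour'.
    return {'position': position}

def paint_with_frame(state, colour):
    # The commonsense fix: explicitly carry forward every property
    # the action does not operate upon.
    new = dict(state)
    new['colour'] = colour
    return new

block = {'colour': 'red', 'position': (0, 0)}
print(paint(block, 'blue'))             # position is lost
print(paint_with_frame(block, 'blue'))  # position survives
```

The frame rule works here only because the toy world enumerates every 
property in advance; deciding which properties are relevant in an open 
world is exactly the epistemological version of the problem discussed 
below.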

The logical frame problem was adequately solved for formal systems - 
again in small domains. In the eighties, metaphysicians like Dennett 
and Fodor famously realized that frame problems occur in human logic as 
well. In fact they're implicit whenever there's ambiguity in 
determining which elements of a description are relevant to a process 
of inference. As far as I know, no AI researcher has demonstrated an 
answer to this epistemological frame problem. As far as I can tell, 
most of them simply wish it away.

> My _guess_ is that once hardware substantially faster that the full
> bandwidth of a human brain is routinely available, AI researchers
> will probably find ideas which do permit human-equivalent AI, but we
> aren't there yet, and I have no idea how long this might take.

This is what Drexler asserts too. I'm suggesting that both T/VN and 
connectionist computing paradigms are poor bases for modelling 
biological intelligence. The T/VN paradigm is poor because of its 
reliance on procedure and state, neither of which is empirically 
evident in biological behaviors. The connectionist paradigm is poor 
because it provides no means of abstracting the intelligence of more 
than one NN into a more general NN. I assert without proof that the 
scaling failure of the entire field of AI has resulted from the fact 
that no one has concerned themselves with this computational frame of 
intelligence - they just assume Church/Turing equivalence of biological 
systems and continue to bark up a tree that isn't there.

Peter Merel.