X-Message-Number: 25540
Date: Fri, 14 Jan 2005 16:57:25 +0100
From: Henri Kluytmans <>
Subject: Re: Singularity a Fantasy

I wrote :

> has enough computational capacity to use evolutionary methods
> to create AI neural networks in time scales of several years.

Peter Merel replies :

>Existing connectionist computing paradigms have universally 
>failed to scale. 

Indeed, the standard supervised training algorithms seem to be 
very limited in their use. But using genetic methods to evolve 
neural network weights is much less limited. The performance of 
genetic methods does not depend on the internal complexity of the 
system being evolved: as long as there is a way to determine the 
fitness of a candidate, a genetic method should work. 
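
To make that concrete, here is a minimal sketch (in Python, my own 
illustration, not something from the original discussion) of evolving 
the weights of a tiny feed-forward network with a genetic method on an 
assumed toy task (XOR). The algorithm never inspects the network's 
internals; it only uses a fitness score:

    # Minimal sketch: evolving the weights of a tiny feed-forward net
    # with a genetic method.  Only a fitness score is needed; the
    # algorithm never inspects the internal structure of the network.
    import math, random

    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x):
        # 2 inputs -> 2 hidden units -> 1 output; 9 weights in w
        h1 = math.tanh(w[0]*x[0] + w[1]*x[1] + w[2])
        h2 = math.tanh(w[3]*x[0] + w[4]*x[1] + w[5])
        return math.tanh(w[6]*h1 + w[7]*h2 + w[8])

    def fitness(w):
        # Higher is better: negative squared error on the toy task
        return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

    def mutate(w):
        return [wi + random.gauss(0, 0.3) for wi in w]

    pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)     # selection ...
        pop = pop[:10] + [mutate(random.choice(pop[:10]))
                          for _ in range(40)]   # ... and mutation
    print("best fitness found:", max(fitness(w) for w in pop))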

And, you're right, current genetic methods do seem to have limitations. 
But my statement referred to a time in the future (maybe even more 
than 40 years from now). (I do not foresee full-blown MNT being 
developed within the next 10 years.) There will come a time when the 
current limitations of genetic methods have been overcome. Nature 
shows us that genetic methods can work to create intelligent neural 
networks.

>No one has been able to demonstrate non-biological neural 
>networks that reproduce the behaviors of even the simplest animals. 

I know of some virtual "animals" with simulated neural networks that 
have been evolved to show certain behaviors of biological animals.
One research project focused on generating movement in different media.
Another one tried to generate predator-prey and food-competition 
behaviors.

And to me they seemed to reproduce certain behavioral aspects of real 
animals quite clearly. 

Of course, current simulated neural networks do not mimic all functional 
aspects of biological neurons, but some time in the future we should have 
complete models that include all aspects (including the growing of new 
connections). The question then is which functional aspects are critical 
for creating an intelligent neural network.

>Neural Networks are notoriously vulnerable to combinatorial 
>complexity and no one has demonstrated a paradigm wherein the 
>intelligence of one NN can be combined with the intelligence 
>of another to any constructive purpose. 

Hugo de Garis' PhD thesis was about neural networks evolved by 
genetic methods and used as building blocks, whereby some NN blocks 
are used to control others. He also made simulations of neural 
networks running on top of cellular automata; these simulated neural 
nets can also grow new connections.
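
For illustration only (this is my own toy sketch in Python, not de 
Garis' actual design): the building-block idea amounts to one small 
"controller" net choosing which "worker" net handles the current 
input, and all of the modules can be evolved with the same 
fitness-only method described above.

    # Toy sketch of neural-net modules as building blocks: a small
    # "controller" net picks which "worker" net processes the input.
    import math, random

    def make_net(n_in, n_out):
        # Random single-layer net: one weight row (plus bias) per output
        return [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                for _ in range(n_out)]

    def run(net, x):
        return [math.tanh(sum(w*v for w, v in zip(row, x + [1.0])))
                for row in net]

    controller = make_net(2, 2)            # decides which module to use
    workers = [make_net(2, 1), make_net(2, 1)]

    def modular_output(x):
        gate = run(controller, x)          # controller's "votes"
        chosen = gate.index(max(gate))     # pick the strongest vote
        return run(workers[chosen], x)[0]  # delegate to that worker net

    print(modular_output([0.5, -0.3]))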

>These are known in the abstract as NP-Hard problems. The commonality to 
>them is that you can mathematically prove you can't solve them in 
>polynomial time on T/VN computers. 

The computational time needed to solve such problems only explodes 
(grows faster than any polynomial) when you insist on finding the best 
solution possible. There is no such explosion in computing time if you 
are only looking for a good solution (which can be an approximation of 
the best solution).
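
A standard illustration of that difference (my own example, in Python): 
for the travelling-salesman problem, exhaustive search for the truly 
shortest tour blows up factorially with the number of cities, while a 
simple nearest-neighbour heuristic runs in roughly n^2 steps and 
returns a good, though usually not optimal, tour.

    # Exact vs. "good enough" on an NP-hard problem (travelling salesman).
    # Brute force examines (n-1)! tours; the nearest-neighbour heuristic
    # needs only about n^2 steps but is not guaranteed to be optimal.
    import itertools, math, random

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def tour_length(pts, order):
        return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def brute_force(pts):                  # exact, exponential blow-up
        return min(((0,) + p
                    for p in itertools.permutations(range(1, len(pts)))),
                   key=lambda o: tour_length(pts, o))

    def nearest_neighbour(pts):            # approximate, polynomial time
        unvisited, order = set(range(1, len(pts))), [0]
        while unvisited:
            nxt = min(unvisited, key=lambda j: dist(pts[order[-1]], pts[j]))
            unvisited.remove(nxt)
            order.append(nxt)
        return order

    cities = [(random.random(), random.random()) for _ in range(9)]
    print("optimal tour :", tour_length(cities, brute_force(cities)))
    print("greedy tour  :", tour_length(cities, nearest_neighbour(cities)))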

>Now it may be that orienting a nanobot or coordinating the 
>activities of a millionty billionty nanobots may not entail 
>NP-Hard problems. But that would be extremely unlikely - in 
>every other problem domain the bloody things are everywhere. 

Maybe it could be an NP-hard problem when you want to build an 
object using nanobots and you want to do it in the theoretically 
shortest possible time, or using the theoretically smallest possible 
amount of energy, etc.

But it is certainly not an NP-hard problem if you're satisfied with 
something less than the optimal way to do it.

I do not understand why you think it will be so hugely 
complex to coordinate the activities of a large number of
nanobots. Most objects you want to make contain a large 
number of repetitive components. 
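
As a toy illustration of why repetition keeps the coordination simple 
(my own example; the lattice dimensions are arbitrary): if the target 
structure is a regular lattice, every nanobot can compute its own work 
site from nothing but its index, so the "global plan" is a few lines 
of shared code rather than a separate schedule per robot.

    # Toy sketch: coordinating many identical units building a repetitive
    # structure.  Each unit derives its own target cell from its index;
    # no per-robot plan and no central scheduler are needed.
    def work_site(robot_id, lattice=(1000, 1000, 1000)):
        nx, ny, nz = lattice
        x = robot_id % nx
        y = (robot_id // nx) % ny
        z = robot_id // (nx * ny)
        return (x, y, z)                 # the cell this robot fills

    # A billion robots, each running the same tiny program:
    print(work_site(0))                  # -> (0, 0, 0)
    print(work_site(123456789))          # -> (789, 456, 123)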

Furthermore, you do not even have to use nanobots to make objects 
with MNT. Lately, exponential manufacturing methods are favored. This 
means making a big device by first making nanometer-size components 
and then assembling these very small components into bigger ones, 
and so on, until at the last step the final device is assembled from 
several large components. No nanobots are used in this method; the 
object is made by an assembly system of manipulator arms of different 
sizes.
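
The arithmetic behind this (an illustrative calculation of my own; the 
factor of ten per stage is an assumption) shows why only a handful of 
stages is needed: going from nanometre-size parts to a metre-size 
product at ten times the linear size per stage takes about nine 
stages, since 10^9 nm = 1 m.

    # Illustrative arithmetic for staged ("exponential") assembly:
    # if every stage joins parts into components ~10x larger, how many
    # stages are needed from 1 nm building blocks to a 1 m product?
    part_size_nm = 1              # nanometre-scale building blocks
    product_size_nm = 10**9       # a one-metre product, in nanometres
    growth_per_stage = 10         # linear size gain per stage (assumed)

    size_nm, stages = part_size_nm, 0
    while size_nm < product_size_nm:
        size_nm *= growth_per_stage
        stages += 1
    print(stages)                 # -> 9 stages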

>Clark relates a suggestion by Drexler that AI be created by simulating 
>a natural environment and then letting life and sentience evolve. Well, 
>sure, we see sentient life evolving all over the universe all the time. 
>The Fermi paradox is just a figment of unimaginative minds. :-)

I happen to support the assumption that no extraterrestrial 
technological civilisations exist in our visible universe.

About some possible reasons for the Fermi paradox :

Probably there are a number of thresholds on the path from dead matter 
to intelligent life that are difficult for evolution to cross. This 
could be the first step (to single-cell life forms), or the step from 
there to multicellular life forms, or somewhere else.

Or maybe in nature there is no great evolutionary pressure in the 
direction of real intelligence (i.e. intelligence like humans have). 

Or it could be that there are certain trade-offs that make the path to 
real intelligence difficult to cross. 

And there can also be external factors that make the creation of 
intelligent life by evolution difficult, for example disturbances 
caused by events originating in the solar system or in the galaxy 
(e.g. supernovae, stars passing close by that disrupt the Oort cloud, 
a big planet in a close orbit, a sun that shines irregularly, ...).

But in an artificial evolution, all these factors that make it 
difficult for intelligent life to evolve in the real universe can be 
bypassed. We don't have to start from scratch: we can begin with a 
neural network and evolve from there. We can also apply evolutionary 
pressures that favor the evolution of intelligence.

>Seriously, what you've quoted is an obfuscated assertion that if you 
>throw enough hardware at Genetic Algorithms / Evolutionary Computing 
>your system will magically wake up. 

It has been done before. It seems very unlikely that it can't be 
done again. Nature shows that evolution works, but there seem 
to be some barriers where evolution has slowed down.

But as I said before, we don't have to start from scratch as nature did. 
We want neural networks that exhibit intelligent behavior, so we could 
start with simple neural networks, for example ones having the 
complexity and behavioral characteristics of insects. Of course, we do 
not have those yet, but we're talking about more than 30 years in the 
future. Furthermore, MNT capabilities can be used as a perfect tool to 
analyse biological neurons at the molecular level. So if it happens 
that science still does not know the detailed working principles of 
biological neurons by then, which seems very unlikely to me, it will 
take only a short time to determine them using MNT-based tools.

On the other hand, the same tools can then be used to scan not just a 
single biological neuron to determine its exact functional behavior, 
but the whole network at once. A complete biological neural network of 
an insect can be analysed in molecular detail (if required) and 
simulated in an artificial neural network. Starting from this network, 
it could then be evolved further. But when this can be done, why not 
use an intelligent neural network to begin with, i.e. just scan a 
human brain?

>What makes GA/EC any more likely of emergent sentience than, say a 
>massive heuristic search, or a massive simulated annealing project? 

Because evolutionary methods do not mind the complexity of a system: 
as long as a fitness can be determined, they can be applied.

Grtz,
>Hkl
