X-Message-Number: 25564
Date: Mon, 17 Jan 2005 13:45:25 +0100
From: Henri Kluytmans <>
Subject: Re: Singularity induced by MNT

I wrote :

>> [...] Because evolutionary methods do not mind the complexity of a 
>> system.

Peter Merel replies :

>If this assertion were correct we could do away with the supercomputers 
>and the quantum computing research and use EC/GA for almost all our 
>computing needs. In fact EC/GAs have performance / time complexity 
>bounds and tradeoffs, same as any algorithm. The main advantage of 
>EC/GA is their applicability to black-box searching. 

Yes, of course you are right here. What I meant was that evolutionary 
methods should (in principle) always be able to evolve a system 
toward improved fitness.

For many (probably most) computing tasks, evolutionary methods cost 
too much in computational resources; they are interesting only for a 
subset of problems. Moreover, in most cases evolutionary methods will 
not find the optimal solution, only an approximation, and often an 
approximation is not what we want.
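To make the point concrete, here is a minimal sketch (my own toy example, not anyone's production method) of a genetic algorithm on the classic "one-max" problem. It illustrates both claims above: fitness reliably improves over generations, but the result is only likely to be near the optimum, not guaranteed to reach it.

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=60):
    """Minimal genetic algorithm: bit-string genomes, tournament
    selection, single-point crossover, per-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: the fitter of two random individuals.
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            # Single-point crossover.
            cut = random.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < 0.02) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "One-max": fitness is the number of 1-bits; the optimum is all ones.
best = evolve(fitness=sum)
print(sum(best))  # usually close to 16, but not guaranteed to reach it
```

Note that the algorithm only ever sees fitness values, never the structure of the problem — the black-box property Peter mentions — which is exactly why problem-specific methods beat it whenever such structure is known.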

I wrote :

>> Nature shows us that genetic methods can work to create
>> intelligent neural networks.

Peter Merel replies :

>As far as we know Nature managed the trick only once, and it took over 
>10 billion years on a quantum computer the size of the universe to do 
>[...]
Why do you call the universe a quantum computer?

Does it perform quantum computations?

And even if it did, it would be at least a very, very inefficient computer.

>Consequently Nature actually "shows us" that genetic methods are 
>unlikely to create intelligent 
>neural networks within the lifetime of the solar system.

I gave several possible reasons why nature seems to have been able 
to create intelligent neural networks only once.

But nature has shown that genetic methods are, in principle, able 
to create intelligent neural networks. Therefore I assume that in 
the future we will be able to use genetic methods to do the same. 
I do not claim that exactly the same genetic algorithms as today's 
will be used; they will probably be improved versions. 
And because we can start with a neural network to begin with 
(not with dead matter, as nature did), I think we can do it much, 
much faster. Furthermore, we can do it far more efficiently than 
nature did: in a virtual world, for example, we do not have to 
simulate the quantum-mechanical behavior of every molecule.

Therefore comparisons with a hypothetical computational capacity 
of the universe as a quantum computer are absurd.

I would like to know: what is your guess for the computational 
capacity of a single human neuron?

(I.e., how much computational capacity does an artificial brain 
need to be able to simulate the human brain? Let's assume we had 
a perfect computational model of the biological neuron. I think 
in the Drexler calculation it was estimated at 10^17 to 10^18 
8-bit calculations per second.)
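As a back-of-the-envelope answer to my own question: dividing Drexler's whole-brain range by the commonly cited figure of roughly 10^11 neurons in a human brain (an assumed round number, not from the original calculation) gives a per-neuron estimate:

```python
# Divide Drexler's whole-brain estimate by an assumed neuron count
# (~10^11, a commonly cited round figure) to get a per-neuron number.
brain_ops_per_sec = (1e17, 1e18)   # Drexler's range, 8-bit ops/s
neuron_count = 1e11                # assumed human neuron count

per_neuron = tuple(ops / neuron_count for ops in brain_ops_per_sec)
print(per_neuron)  # roughly 10^6 to 10^7 8-bit ops/s per neuron
```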

>Now you may object that, with a working model before us, we can speed 
>this up rather a lot. And I'd agree that seems likely. But if that's 
>the case you can forget the genetic methods, because the main advantage 
>of genetic methods is their ability to perform black box search. If you 
>know a lot about some problem domain you can devise *much* more 
>efficient methods to search it than GA/EC. 

Yes, I agree.

>> One research project was focused on generating movement in different 
>> mediums.
>> Another one was trying to generate predator-prey and food competition
>> behaviors.

>I think it's plain I meant all the behaviors of a single animal. If you 
>just want one behavior you can generate both movement and predator/prey 
>competition using nothing fancier than the logistic map and a sharp 
>[...]
I am quite certain it would be feasible to simulate all the behaviors 
of an animal like the nematode worm today, but I think it has not been 
done yet; I guess it wouldn't be very interesting. (The nematode worm 
has only about 300 neurons.) What animal would be of sufficient 
complexity for you?
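For readers unfamiliar with Peter's logistic-map remark: the logistic map x -> r*x*(1-x) is a one-line population model that, depending on the parameter r, settles to equilibrium or produces chaotic boom-and-bust swings reminiscent of predator/prey dynamics. A quick sketch:

```python
def logistic(x, r):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1.0 - x)

def trajectory(x0, r, steps):
    """Iterate the map, returning the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# r = 2.5: the population settles to a stable equilibrium x* = 1 - 1/r.
settled = trajectory(0.2, 2.5, 200)[-1]
print(round(settled, 4))  # 0.6

# r = 3.9: chaotic regime with wide, irregular boom-and-bust swings.
chaotic = trajectory(0.2, 3.9, 200)
print(min(chaotic[50:]), max(chaotic[50:]))  # a broad range of values
```

This illustrates his point that a single "behavior" can come from something very simple — which is precisely why it says little about evolving all the behaviors of a whole animal.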

>> Hugo de Garis's PhD thesis was about using neural networks evolved
>> by genetic methods as building blocks, whereby some NN blocks are
>> used to control others. He also made simulations of neural nets on
>> top of cellular automata. These simulated neural nets can also grow
>> new connections.

>Despite decades of beavering away at "building brains", De Garis 
>doesn't appear to have shown any significant results. 

Maybe you're right; he hasn't created anything significant yet. 
But that doesn't mean we can now conclude that evolutionary 
methods cannot be used to create artificial intelligent neural nets.

>When his initial 
>efforts ran into a combinatorial complexity wall he switched to an FPGA 
>implementation, and when this project ended without empirically 
>significant results in 2001 he switched track to quantum computing. 

Hmm, yes, I see. He's doing a research project on using quantum logic 
to find the best neural-network parameters. Interesting, but of course 
this will remain purely theoretical work for many years.

>> [...] But it is certainly no NP-hard problem if you're satisfied
>> with less than the most optimal way to do it.

>That such approximations are adequate for wild 
>MNT orientation, etc, does not follow.

We are drifting off-track here. Nowhere did I state that genetic 
methods will be used for guiding nanobots in the wild. 
We were discussing whether evolutionary methods can be used for 
evolving intelligent neural nets. And evolving artificial 
neural nets was used as an example of a way to create artificial 
super-intelligent beings, which would then initiate (a kind of) 
singularity. With MNT we would easily be able to create the 
necessary computing capacity for doing this.

But as I stated before with advanced MNT we could just as well scan 
and simulate (i.e. upload) a complete human brain to start with.

>> I do not understand why you think it will be so hugely
>> complex to coordinate the activities of a large number of
>> nanobots. Most objects you want to make contain a large
>> number of repetitive components.
>The same can be said for solving large QM problems. The same particles 
>over and over again ... each with interdependent position, velocity, 
>energy, ... whammo, combinatorial explosion. You understand this is one 
>of those problems in the basket we don't try to use supercomputers on. 
>Perhaps orienting nanobots in Brownian environments will be easier. Now 
>tell me why.

To get back on track, we were talking about coordinating the activities 
of a large number of nanobots (let's say 10^14) to make a macroscopic 
object. You state that the complexity of doing this would equal that 
of solving NP-hard problems. I think that coordinating nanobots is 
not that difficult.

There are ways to manufacture objects using MNT without nanobots. 
Therefore it is not necessary to show that coordinating nanobots 
to manufacture a macroscopic object is feasible. However, I 
will do so anyway ... (Sorry, this will follow in a later post.)
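One intuition for why repetition tames the coordination problem (my own toy sketch of a broadcast-style control scheme, not a worked-out design): if most of the object consists of identical components, a single instruction stream can be broadcast to every nanobot at once, so the control traffic scales with the length of the instruction sequence rather than with the number of bots.

```python
# Hypothetical broadcast control scheme: every bot executes the same
# instruction stream, so control cost grows with the number of
# instructions, not with the number of bots.

def broadcast_build(instruction_stream, n_bots):
    """Return the control cost and output of one broadcast run."""
    steps = len(instruction_stream)
    # One transmission serves all bots simultaneously: signals sent
    # equals the stream length, while components built equals n_bots.
    return {"signals_sent": steps, "components_built": n_bots}

# A short repeated cycle builds one component per bot.
plan = ["fetch block", "bond at site A", "step forward"] * 1000
result = broadcast_build(plan, n_bots=10**14)
print(result["signals_sent"])       # 3000 control signals in total
print(result["components_built"])   # 10^14 components, one per bot
```

Of course this ignores error handling and non-repetitive regions, but it shows why the combinatorics need not explode with the bot count.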

>> Furthermore you do not even have to use the nanobot way to make 
>> objects using MNT.

>I never suggested you can't make assemblers, create factories from 
>them, and manufacture objects.

But you were using the possible infeasibility of coordinating nanobots 
as an argument that MNT would not work ...

>And that's exactly my argument against your Clark/Drexler post. If only 
>one sentience has evolved, we need not hold our breath for 10^38 mops. 
>Or 10^38^38. The universe is a *MUCH* bigger and *MUCH* older quantum 
>computer than a paltry city-block-full of nanobots. Heck, make your 
>large computer out of the sun and you'll still have orders of magnitude 
>times solar lifetimes to wait it out.

I explained this above. If you call the universe a computer, it is a 
very, very inefficient one. I think comparing the universe to a 
quantum computer, and estimating its computational capacity by 
assuming it performs quantum logic to determine the behavior of 
every single atom (or every fundamental particle), is absurd.

>> We can begin with with a neural network and evolve from there. [...] 
>> MNT capabilities
>> can be used as a perfect tool to analyse biological neurons at a 
>> molecular level.

>With wild-navigating nanoprobes we can map a sentient brain to produce 
>real AI adequate to create wild-navigating nanoprobes ... assumes the 
>[...]

I didn't state anything like this. This circular statement is entirely 
your own construction.
For creating MNT capabilities we don't need artificial intelligence.
We just need to create the first molecular assembling device. It 
doesn't even need to have an internal computer to control it. 
The assembler device could be something like a microscopic manipulator 
arm (on the order of 100 nanometers in size), and the supporting 
devices to deliver the building blocks, energy, and control signals.
The computer to control the manipulator arm can be macroscopic 
in size. With an MNT assembling system like this, we could start to 
build more advanced MNT manufacturing systems. (And eventually 
replicating nanobots too.)

More advanced MNT would also result in devices able to disassemble 
(at first only solid) molecular structures. Thus we could take 
some neural tissue and freeze it, take some samples from it that 
contain perfectly undamaged frozen neurons, and disassemble those 
samples while still frozen solid. Then we would have the exact 
molecular composition of some neurons. From there I think we should 
be able to deduce an exact functional model of a biological neuron, 
or at least of those functional aspects relevant to its functioning 
as a processor in the network (the brain). But maybe we will already 
have such a model by that time; research trying to establish such a 
model is currently under way. And who knows, maybe we already know 
enough of the functional aspects of neurons right now, and the more 
detailed aspects that we do not yet know are not that important.

But never mind; what I wanted to make clear :

-No artificial intelligence is required for creating MNT capabilities.
 (No nanobots needed.)

-Advanced MNT could analyse anything (including biological samples) at a 
 molecular scale. (No nanobots needed.)

-With MNT we could create very large-capacity computing systems at 
 low prices.

Because of those last two points, we should (in principle at least) 
be able to upload a human brain and run it about a million times 
faster. It seems likely that human brains running (and evolving) 
a million times faster would initiate a kind of singularity.
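To put that factor in perspective, a simple calculation (my own illustration, using the 10^6 speedup assumed above):

```python
# Subjective time gained by an upload running 10^6 times faster
# than real time.
speedup = 10**6
wall_clock_days = 1
subjective_years = wall_clock_days * speedup / 365.25

print(round(subjective_years))  # ~2738 subjective years per real day
```

Every real day would give such a mind millennia of subjective thinking time, which is why even a modest community of sped-up uploads could plausibly compress centuries of research into weeks.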


Rate This Message: http://www.cryonet.org/cgi-bin/rate.cgi?msg=25564