X-Message-Number: 15328
Date: Fri, 12 Jan 2001 07:29:48 -0500
From: Thomas Donaldson <>
Subject: to Mike Perry and others

Hi again!

Once more about Turing machines and brains, this time for Mike Perry
again.

First of all, I would consider the THEORETICAL possibility of imitating
a brain with a single computer to be quite useless unless we could
somehow make it a real possibility. I am quite willing to accept such
a theoretical possibility, but think that it means virtually nothing.
Sure, if we were Gods and could create whatever amount of matter, time,
and space we needed, then it might have some reality, but we are not
Gods and never will be.

Second, the comments about polynomial growth are wrong. I must admit that
my previous comment on this question wasn't at all clear to many, and I
therefore apologize. But here is a bit more which may clarify the
issue: the exponentiality comes not from the creation of new neurons,
but from the number of connections which they allow. These connections
would presumably not be limited to neurons descended from a single
ancestor neuron, but would also involve connecting neurons descended
from quite different ancestors. Remember the structure of neurons here:
the neurites, branches of an axon, may extend quite far, while dendrites
remain relatively close to the cell body ... though far if measured
against the size of the cell body itself. Moreover, even if an individual
new connection is short, it may complete a much longer pathway which
brings together the connections of two different neurons which
themselves connect to others ... and so allow the creation of connections
over major distances in the brain.

Given a set of N neurons, the number of possible ways of connecting them
grows at least as fast as N!. If we look at M sets of N neurons, then the
total number of possible connection patterns is greater than M! * N!.
(Recall again that axons can go quite far. I will send a message which
completes these calculations later.) This is where the exponentials come
into play.
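As an illustrative sketch (not the completed calculation promised above), the combinatorics can be checked directly: among N neurons there are C(N,2) = N(N-1)/2 possible pairwise connections, which is only polynomial, but since each pair may independently be connected or not, the number of possible wiring patterns is 2^C(N,2), which outgrows N! and any polynomial. The function names below are my own, chosen just for this example:

```python
from math import comb, factorial

def pairwise_connections(n):
    # Distinct unordered neuron pairs: C(n, 2) = n(n-1)/2 (polynomial in n).
    return comb(n, 2)

def wiring_patterns(n):
    # Each pair is independently connected or not: 2^C(n, 2) possible networks.
    return 2 ** comb(n, 2)

for n in (4, 8, 12):
    print(f"N={n:2d}  pairs={pairwise_connections(n):3d}  "
          f"N!={factorial(n):12d}  patterns=2^{comb(n, 2)}")
```

Even at N = 12 the number of possible wiring patterns is 2^66, already far beyond 12!; this is the sense in which the growth is exponential rather than polynomial.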

As I have said repeatedly, the major problem comes from TIME. If we cannot
build a human-like brain with a single computer without using up all
the matter available to us, its theoretical possibility means very 
little. It continues to mean very little no matter how many believe
in its THEORETICAL possibility. To do computing, or even to exist as
human beings (or their successors), we must deal with what is actually
possible. In that sense we must deal with very large parallel computers
composed of small processors, and theory about the behavior of Turing
machines means nothing at all. And if, in addition, such machines as
ourselves don't fit Turing's model AT ALL, not even in theory, then
we'd do well to abandon it and try to make models which fit ...
non-Turing "computers", if such a notion does not offend too badly.

		Best and long long life for all,

			Thomas Donaldson
