X-Message-Number: 9323
From: Thomas Donaldson <>
Subject: Re: CryoNet #9293 - #9301
Date: Fri, 20 Mar 1998 00:28:32 -0800 (PST)

To Mr. Metzger, once again:

My, you seem upset! I will tell you my own background. I have a PhD in
mathematics. My thesis was on nonlinear elliptic partial differential
equations.
I have written a number of papers on this subject and also on linear PDE's,
not necessarily elliptic. In 1985 I decided to get involved with the computer
industry, and found parallel processing to be a field in which my knowledge
of algorithms and analysis could be used to produce interesting parallel
software.

I also read a lot about many things. I have never, however, had a formal 
course in computer science. I will also add (though you are clearly not a
subscriber to PERIASTRON) that for the last 9 years, and even before that,
I have learned as much as I could about what we know about brains, memory,
identity, etc. I report these things in PERIASTRON. Fundamentally I became
interested in this subject because, one way or another, we will need to 
understand how brains work to do the repairs needed for current cryonics
patients, to improve how our brains work afterwards, and even to emulate
them or simulate them.

As someone who has followed parallel processing for more than 10 years now,
I will say that I could hardly fail to notice how various tricks borrowed
from parallel processing have been used to improve the speed of processing
chips. That's what MMX does, and many other such tricks are used inside
the chips themselves. Naturally all that design comes nowhere near matching
a frankly parallel computer with (say) 1024 processors.
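
(For readers who want a concrete picture, here is a rough software analogy,
in Python with the NumPy library, of the data-parallel idea that MMX-style
hardware applies to packed integers. It is only an illustrative sketch of
the idea, not a claim about how any particular chip is wired inside.)

    import numpy as np

    a = np.arange(8, dtype=np.int16)
    b = np.arange(8, dtype=np.int16)

    # The strictly sequential picture: one addition at a time.
    sequential = [int(x) + int(y) for x, y in zip(a, b)]

    # The data-parallel picture: one operation applied to a whole packet of
    # 16-bit integers at once (NumPy merely stands in for the hardware idea).
    parallel = a + b

    print(sequential)         # [0, 2, 4, 6, 8, 10, 12, 14]
    print(parallel.tolist())  # the same answer, computed as one packed operation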

As for Turing machines etc., I have certainly read about them. I will point
out, as I have already done, that the classical Turing machine with an
infinite tape can no more exist than could a Siegelmann machine. If we
really wanted to model REAL computing, we would have to use finite
Turing machines, with finite tapes. Naturally such a machine can in no 
way do all the computing possible (and I'm not bringing in any oracles,
either): just give it a problem which requires more than the amount of
tape it has. On top of that, even a machine with an infinite tape will
be unbearably slow on many computations we want to do. Fundamentally I
think that we should work on other models of computation; the Turing 
machine has become inadequate. Sure, it can do anything, but given that
it requires an infinite tape to do that, it cannot exist in the first
place. And yes, it was my interest in parallel computing which led me
to think this.

Naturally a finite Turing machine cannot do even everything an ideal one
can -- oracles aside (oracles are a side issue). Just give it a problem
which requires a longer tape than it has.
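
(To make the point concrete, here is a toy finite Turing machine in Python,
my own little sketch and nothing Mr. Metzger proposed. It runs an ordinary
transition table but gives up the moment the head runs off its fixed-length
tape, which is exactly the fate of any problem too big for that tape.)

    def run_finite_tm(transitions, tape, state="start", head=0, max_steps=10000):
        """transitions maps (state, symbol) -> (new_state, new_symbol, move),
        with move being -1, 0 or +1.  The tape is a fixed-length list."""
        for _ in range(max_steps):
            if state == "halt":
                return tape
            state, tape[head], move = transitions[(state, tape[head])]
            head += move
            if head < 0 or head >= len(tape):
                raise RuntimeError("ran off the finite tape: problem too big")
        raise RuntimeError("step limit reached")

    # A unary incrementer: walk right over the 1s, turn the first blank into a 1.
    inc = {("start", "1"): ("start", "1", +1),
           ("start", "_"): ("halt",  "1",  0)}
    print(run_finite_tm(inc, list("111_")))   # ['1', '1', '1', '1']
    # run_finite_tm(inc, list("1111"))        # raises: the tape is one cell too short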

As for recursion, if you define recursion so that it requires a stack,
you have a fundamental problem even worse than that of not being parallel.
For the size of the stack will then limit the amount of recursion you 
can do, even in normal computers. Of course the ideal Turing machine,
with an infinite tape, can no doubt create an infinite stack and then
proceed happily. I won't quote any authorities here, but a slightly
broader definition would work better: an algorithm is recursive if it
repeatedly applies a simple function to the result of previously applying the
function. Clearly Mr. Metzger likes stacks and has decided, perhaps with
authorities behind him, that this is iteration rather than recursion.
The important point here is that we repeatedly use our result, not that
we use a stack. (If function calls in a system are implemented not with a
stack but by some other method, does that mean the system cannot do
recursion? Yes, I know that processors almost always use a stack when a
subroutine or function is called. However, other arrangements are
imaginable.)
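
(Here is a small Python sketch of the distinction I have in mind -- my own
illustration, not anyone's official definition. Both functions repeatedly
apply a function to the result of the previous application; only the first
needs a call stack to do it.)

    def apply_repeatedly_recursive(f, x, n):
        """Each pending call occupies a frame on the call stack."""
        if n == 0:
            return x
        return apply_repeatedly_recursive(f, f(x), n - 1)

    def apply_repeatedly_loop(f, x, n):
        """The same repeated application, with no stack growth at all."""
        for _ in range(n):
            x = f(x)
        return x

    double = lambda v: 2 * v
    print(apply_repeatedly_recursive(double, 1, 10))   # 1024
    print(apply_repeatedly_loop(double, 1, 10))        # 1024
    # Push n past the interpreter's recursion limit (about 1000 by default in
    # CPython) and only the first version fails: the stack, not the
    # mathematics, is what runs out.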

I wasn't interested in arguing about words or recursion, in any case. The
point I was originally making was that doing things in sequence, as a 
model of how the world works, is a very poor model. Sure, if you have
an ideal Turing machine, with an infinite tape, it will indeed provide
some kind of model of the world. Some of us want to be immortal for other
reasons, but I suppose to a devotee of Turing machines one advantage
would be that you could use your Turing machine for everything, and just
sit and wait for its answer. If we want to model the world, we should use
(at a minimum) a parallel computer, since so many things are happening
concurrently in the world. For some reason, this seems to have upset
Mr. Metzger. And despite all the arguments about stacks, etc., I don't
believe he's really said anything relevant to the main point I made.

As someone with a background in differential equations, and who has done
actual research in that field, I can hardly fail to notice chaos. Chaos,
technically, occurs when even small changes in your initial conditions
cause an ever-growing divergence between your solutions. Yes, there are
lots of cases in which chaos does NOT happen, and this is good: it makes
it very easy to model the behavior of the world (in those cases) with a
computer. However we still run into problems in which it DOES occur,
weather prediction being an example. Yes, we can use a computer to
predict the weather, but not for very long. And yes, also, I was
explicitly claiming that brains (the entire system of neurons connected
together, at least for mammalian brains, even the simplest ones) are very
likely to show chaos, even more than the weather does.
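
(For anyone who wants to see the divergence with their own eyes, here is a
small Python experiment using the logistic map, a standard textbook example
of chaos and of course not a model of anything neural. Two starting values
that differ by one part in a billion part company within a few dozen steps.)

    def logistic(x, r=4.0):
        # the logistic map at r = 4, a standard chaotic example
        return r * x * (1.0 - x)

    x, y = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
    for step in range(1, 61):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print("step %2d: difference = %.3e" % (step, abs(x - y)))
    # The gap roughly doubles every step until it is as large as the values
    # themselves: tiny initial differences, enormous final ones.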

One problem is that if your initial conditions differ from realistic ones
in some unphysical way, then your result will diverge in ways which natural
phenomena simply don't show. You won't get a passable simulation unless you
start with "realistic" initial conditions: but here lies a problem. Digital
initial conditions are not realistic, so you start with several strikes
against you. That is why I would prefer, if I wished to build a brain, to
use analog parts. (And yes, if you allow numbers with arbitrarily many
digits, then you can be digital, too. The problem is that you will still
diverge; it will just take a little longer.)
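
(The same toy map makes the point about digits. In the Python sketch below
the only difference between the two runs is how many digits the machine
keeps; that alone is enough to make the trajectories part company, and
keeping more digits only postpones the parting.)

    import numpy as np

    def run(x0, dtype, steps=40):
        # iterate the logistic map, carrying every value in the given precision
        x = dtype(x0)
        history = []
        for _ in range(steps):
            x = dtype(4.0) * x * (dtype(1.0) - x)
            history.append(float(x))
        return history

    single = run(0.4, np.float32)   # roughly 7 significant digits
    double = run(0.4, np.float64)   # roughly 16 significant digits
    for step in (10, 20, 30, 40):
        gap = abs(single[step - 1] - double[step - 1])
        print("step %d: gap between the two runs = %.3e" % (step, gap))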

And now we come to neurons. First of all, they show no obvious 
resemblance to computers, and they are also (individually) much slower.
(This does not mean that brains must be similarly slow --- remember
parallelism). Some properties of neurons have been modelled as electrical
circuits, but neurons use not only electrical circuits but chemical
circuits too. Chemical circuits (I believe the name is even appropriate:
students of nervous systems, and even of biochemistry, have begun using
ideas from circuit theory to describe and explain the complex chemical
processes which happen in all our cells, neurons included) do behave
differently from electrical circuits. First of all, chemicals diffuse.
Second, there are many more kinds of chemicals than there are electrons
and holes (the absence of electrons). This allows much richer signalling,
at the cost of slower signalling. Since neurons do signal electrically,
the fact that they also use many different biochemicals suggests to me
that evolution has found the combination superior to purely electrical
signals. This difference alone tells me that you'll have to use much more
complex electrical circuits to emulate a neuron than you would if neurons
were simple electrical processors.
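
(For concreteness, here is the kind of purely electrical caricature I mean:
a bare "leaky integrate-and-fire" unit in Python, with parameters I made up
for illustration. My point is precisely that a model this simple leaves out
the chemical side altogether -- no diffusion, no second messengers, no
biochemical circuit at all.)

    def integrate_and_fire(input_current, dt=0.001, tau=0.02, v_rest=-70.0,
                           v_threshold=-55.0, v_reset=-70.0):
        """A toy membrane: the voltage leaks toward v_rest, is pushed up by
        the input, and 'spikes' (then resets) when it crosses the threshold."""
        v = v_rest
        spike_times = []
        for step, drive in enumerate(input_current):
            v += (dt / tau) * (v_rest - v) + drive
            if v >= v_threshold:
                spike_times.append(step * dt)
                v = v_reset
        return spike_times

    # A constant drive yields a perfectly regular spike train -- already a
    # hint that something is missing, since real neurons are rarely so tidy.
    print(integrate_and_fire([1.0] * 200)[:5])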

Furthermore, even neurons show self repair. That they normally show
less of it than some other cells does not mean they totally lack that
ability. The abilities involved may even be central to memory itself,
since formation of new connections and abandonment of old ones is the
current best idea of how we store our true long term memories.

They do not seem to me to be simple "devices" at all, and the fact
that neuroscientists are still studying the behavior of individual neurons
suggests that we cannot simply decide that they can be realistically
modelled by some simple set of electrical devices. Sure, we could
no doubt write a program which would, for a while, act like a neuron.
It's far from obvious that this program could emulate a neuron for
a long time (chaos again: it's turned out that only the simplest
phenomena don't show chaos). 

Furthermore, an argument that we can use digital devices to emulate
ANYTHING requires more than just the observation that (say) it
occurs in a discrete set of states. Those states, over time, may 
approach closer and closer to one another. They may also be incommensurable,
which means in this case that we can find NO number (let us suppose for
the moment that the state can be characterized by a single number!) such
that every state can be expressed as a multiple of that number. One
major problem with a digital system with a fixed size of "real number",
i.e. numbers expressed with a fixed number of digits, is that incommensurability
means that over time the difference between two states might become
arbitrarily small. Sure, we can always increase the size of our real
numbers, but they will always remain finite, with accuracy growing less
and less as they become larger (or closer to zero). Digital systems
BREED chaos: even if you don't have it when you look at the physical
system itself, you have to devise your algorithms so that the simple
inaccuracy of your model will not cause it to blow up. In practice,
this is done by limiting the time span over which your computations
are to approximate the phenomena.
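
(Here is the accuracy point in a few lines of Python, version 3.9 or later
for math.ulp. The gap between one representable 64-bit "real" and the next
grows with the size of the number, so two genuinely different states can
end up indistinguishable once digitized.)

    import math

    for x in (1.0, 1.0e6, 1.0e12):
        # math.ulp(x) is the spacing between x and the next representable double
        print("spacing of representable numbers near %g: %.3e" % (x, math.ulp(x)))

    a = 1.0e12
    b = a + 1.0e-5        # a genuinely different state...
    print(a == b)         # ...which the 64-bit representation cannot tell apart: True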

I would say, myself, that incommensurable states provide the real
basis for models with infinite accuracy. We can play with them intellectually
even if we could never use a computer to develop them. Mr. Metzger
gave me the example of a digital scene, no doubt done with oodles of pixels.
Yes, we could be fooled by such a scene. But those oodles of pixels start
to become a severe computational burden if the scene ceases to be static
and starts to move. 
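
(A back-of-the-envelope sketch, with figures I am simply assuming for
illustration, of why the burden grows once the scene starts to move.)

    # Assumed, purely illustrative figures:
    width, height = 1024, 768     # pixels per frame
    frames_per_second = 30        # a modestly moving scene
    ops_per_pixel = 100           # assumed cost of recomputing one pixel

    pixels_per_frame = width * height
    ops_per_second = pixels_per_frame * frames_per_second * ops_per_pixel
    print("%d pixels per frame" % pixels_per_frame)      # 786432
    print("%d operations per second" % ops_per_second)   # 2359296000, over two billion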

I won't claim that this is a complete discussion of my ideas on these
issues, and I'm sure that Mr. Metzger will come up with further laughter.

As for counterexamples, they remain important. The use of a counterexample
is to show that one way of thinking has a problem in it. And a
counterexample providing a machine (not a Turing machine) capable of doing
calculations which Turing machines cannot, and one which does not simply
assume an oracle (yes, that's cheating!), gives one more reason to believe 
that Turing machines fail to catch some very important things about both
thinking and computing.
   
			Best and long long life,

				Thomas Donaldson
