X-Message-Number: 9296
Subject: More Crud about Computers and Brains
Date: Mon, 16 Mar 1998 11:58:01 -0500
From: "Perry E. Metzger" <>

> From: Thomas Donaldson <>
> Date: Sun, 15 Mar 1998 15:06:31 -0800 (PST)
> 

> I do hope you have read the Siegelmann paper. Some of your statements made me
> wonder about that. I claim NOTHING about the device it discusses except that
> it provides an example of a computer which is not a Turing computer. That is,
> it is a counterexample in the mathematical sense.

Gak.

A "counterexample in the mathematical sense" is worthless.

"Assume I have a faster than light device" is not a "counterexample"
to Special Relativity. "Assume I have a device which physics prohibits 
the construction of" is not a counterexample to the Church-Turing
Thesis. Sure, I can "mathematically assume" away any problem. That
doesn't make the exercise a counterexample.

Hell, theoretical computer scientists played this game long before
you. They call their "assume"s "Oracles". "Assume we have an oracle
for the Halting Problem. What can we then compute?" exercises were
conducted by lots of people forty years ago. They are interesting
discussions, but not disproofs of the Church-Turing Thesis.

> Furthermore, both you and Mike Perry have a rather truncated idea of what 
> neurons do.

You've been studying my brain under a microscope and know what my
ideas on the subject are?

> I really wish that you would go and learn a bit of neuroscience
> before you got involved in this discussion.

I wish you would, too.  I note that you spent a lot of time in one of
your messages flaming on about the "continuous" nature of perception
-- and seem to have stopped discussing that after I pointed out that
neurons put out monolevel pulses, not continuous signals. Perhaps we
could get back to that, or at least get a concession of the point from 
you.

> As to the complexity of neurons, to be brief, each one works more like a 
> small computer than like a single chip in a bigger one.

It would be more accurate to say that each neuron is the equivalent of
a small (by modern standards) cluster of logic gates.  Neurons are (at
best) the equivalent of small embedded microcontrollers.

(BTW, many computers are made up of multiple processors -- your "chip
vs. computer" metaphor doesn't work very well.)

Anyway, I think you still don't get the point I've been making, so
I'll make it again.

We now understand most of how neurons work -- the chemical reactions
involved, the way that neurotransmitters fit into the receptor sites
on the surfaces of the membrane, everything. There are details we are
missing, but they are rapidly being filled in. Sure, by 18th or 19th
century standards, we are talking about a complicated device, but we
don't live in the 19th century any more.

We hear lots of Sturm und Drang from you and others about the
mysterious, almost numinous qualities of the neurons, and how they are
possibly based on these non-Turing equivalent principles, but the
problem for you is that we know how they work, and there aren't any
niches for that behavior to hide in.

Penrose is far more pathetic than anything you have claimed, of
course. He argues with a straight face that some sort of quantum
effects must be at work -- and of course, we know most of how neurons
work, and there isn't so much as a quantum tunnelling phenomenon
involved, let alone some sort of quantum computation being
performed. We are talking about fairly straightforward (by modern
standards!) chemistry here.

Given that we know how the things work, the claim that somehow
mystical non-TM equivalent computations are going on becomes hard to
swallow. Imagine someone claiming that automobile engines operate off
of thought power or some such crud, when we know how they *do*
operate?

There isn't any place left for the "mystery" to hide. We've seen the
neurons, and they aren't magic. They are just machines.

A lot of your discussion was based on the bizarre premise that somehow
the positions of every molecule and atom in a neuron were materially
relevant to its behavior -- which of course they are not. (I mention
this because you claim that all such positions are somehow relevant to
the neuron's state.) I need not model the orientation of every water
molecule in an axon to know pretty well that once triggered I get a
pretty easily characterized signal transmission behavior down that
axon. Hell, we can now explain pretty well how each type of serotonin
receptor works, and even show the individual positions of the atoms in
them. The devices in question aren't that complicated -- at least, as
I emphasize, by current standards.

> So far as I am aware, neurons still have far more connections with
> other neurons than any single chip,

That's a canard.

You are correct that the brain doesn't look like a computer in its
architecture. That has never been a claim that we've made, however. It 
would, in fact, be rather shocking if the brain looked anything like a 
modern computer. The brain is not a computer. No one would argue it is 
a computer. We are arguing it can be SIMULATED by a computer. This is
a different characteristic.

Just because a brain looks nothing like a computer in its components
does not change the fact that one could potentially simulate a brain
with a computer. The mere fact that the computer looks nothing like an
aircraft engine doesn't change the fact that I can profitably simulate
the aircraft engine with the computer. Anyone staring inside the
computer looking for "aircraft engine like components" would be rather
disappointed -- as would anyone looking inside the aircraft engine for
computer like components.

By your argument, of course, the aircraft engine shouldn't be
characterizable with a computer -- after all, it's "analog", the gas
flow through it is "chaotic", the engine looks nothing like a computer
component, etc., etc. -- but of course, this doesn't stop engineers at
Boeing from doing just the thing you claim couldn't be done.

You will, of course, argue that the simulation isn't perfect -- that
if I ran a real aircraft engine next to the simulated one, after a
few moments the airflows through the real one would not be identical
to those in the simulated one. As I repeatedly note, however, the
airflows in a second real engine wouldn't be identical to those in the
first, either. The question is only whether the simulation is "good
enough", not whether it is an exact prediction of behavior. Given that 
the analog systems can't exactly duplicate each other, it is a bit
much to request that the digital system do something the analog system 
cannot do.

> and show a variation in response according to their 
> electrical and chemical inputs which is much larger than (say) a single
> processor. Not only that, but the "random jitteriness" you allude to 
> in their behavior has turned out to be part of the processes by which they
> work with other neuron.

On all of this, I say "so what?"

All of it is simulatable. You haven't yet described something that
can't be effectively simulated.

> It is not just a "mistake", it's part of their design. 

They aren't "designed", Mr. Donaldson. They arose by evolution. The
brain has no "design" in the sense that a desk or a car has.

> As to recursiveness, I can cite you computer papers which claim that is one
> feature of Turing machines. Remember recursive languages?

Mr. Donaldson, could you please learn something about computer science 
BEFORE commenting?

Yes, you are correct that Turing Machines are capable of expressing
the recursively enumerable languages. That doesn't mean they express
their algorithms recursively. Turing machines don't even have explicit
iteration constructs, although they are certainly capable of iteration.

Iteration and recursion are, of course, mathematically equivalent, but 
the fact remains that Turing Machines have no stacks and thus do not
perform recursion directly (although one might map a recursive
algorithm onto a Turing Machine quite straightforwardly.)
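
To make the iteration/recursion equivalence concrete, here is a quick
sketch in the same sort of lisp-like notation I use below. The names
are mine and the example proves nothing deep -- it just shows the same
function written once recursively (riding on the implicit call stack)
and once as a plain loop that needs no stack at all:

(define fact-rec                ; recursive factorial -- uses the
  (lambda (n)                   ; implicit call stack
    (if (= n 0)
        1
        (* n (fact-rec (- n 1))))))

(define fact-iter               ; iterative factorial -- a simple loop
  (lambda (n)                   ; carrying an accumulator, no stack
    (let loop ((i n) (acc 1))
      (if (= i 0)
          acc
          (loop (- i 1) (* i acc))))))

Both compute exactly the same function; only the expression of the
algorithm differs.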

You are making the typical mistake a layman who's learned very little
about a field might make. "Gee, this bond has a ten year term. This
Trust Unit has terms prohibiting resale. The word 'term' occurs in
both, so that must mean that the Trust Unit is like a bond and
eventually matures!"

Just because you are smart, Mr. Donaldson, doesn't mean you understand 
a field without studying it.

> As for executing a recursive algorithm in a parallel machines, they
> are not parallel as presented. Usually a bit of thought will yield
> ANOTHER algorithm which is NOT recursive but comes to the same
> result.

You don't seem to understand the distinction between REPRESENTATION of 
an algorithm and its TRANSLATION.

When you write a program in any high-level language capable of
invoking recursion, the program gets TRANSLATED into machine code. In
general, machine code does not have any notion of recursion in it --
it has primitives that manipulate registers and memory locations in
various ways, and that is about it. On a machine architecture like a
PDP-6 or (if I remember correctly) IBM 370, there aren't even any
stack pointers per se, and the stack on which recursive function
invocations get made is cobbled together from operations conducted on
an "ordinary" machine register. (Modern machines tend to have
dedicated stack pointers, but this is a convenience, and a fairly
recent one at that.)

There are *NO* machines that humans build that are naturally suited to 
the execution of recursive algorithms. ALL of them are translated, if
only by adding an explicit stack to sit in for the implicit one.
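
To make that translation concrete, here is a sketch, again in a
lisp-like notation and purely as an illustration (it is not the output
of any particular compiler). The first version recurses; the second
does the identical computation with an explicit stack -- just a list
we push onto and pop from -- sitting in for the implicit call stack:

;; sum the numbers at the leaves of a nested list, recursively
(define tree-sum
  (lambda (t)
    (cond ((null? t) 0)
          ((pair? t) (+ (tree-sum (car t)) (tree-sum (cdr t))))
          (else t))))

;; the same computation, recursion replaced by an explicit stack
(define tree-sum-stack
  (lambda (t)
    (let loop ((stack (list t)) (total 0))
      (cond ((null? stack) total)        ; stack empty: done
            ((pair? (car stack))         ; pop a pair, push both halves
             (loop (cons (car (car stack))
                         (cons (cdr (car stack)) (cdr stack)))
                   total))
            ((null? (car stack))         ; pop an empty list
             (loop (cdr stack) total))
            (else                        ; pop a number, add it in
             (loop (cdr stack)
                   (+ total (car stack))))))))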

As for parallelism, executing a recursive algorithm in parallel is
usually no harder a translation task than executing it iteratively.

Take the following silly recursive algorithmic definition for the
Fibonacci Series (in a lisp-like language):

(define fib
	(lambda (x)
		(cond ((= 1 x) 1)
		      ((= 2 x) 1)
		      (#t (+ (fib (- x 1)) (fib (- x 2)))))))

Now, the bulk of the most obvious execution of this (inefficient, but
who cares) algorithm is going to be conducted in the last expression
of the conditional, which recursively invokes the fib function
twice. Note that these invocations are potentially utterly independent
-- there is nothing preventing a translator from scheduling the
execution of the (fib (- x 1)) expression on one processor and the
(fib (- x 2)) expression on a second processor. Many recursive
algorithms are similarly "obvious" in their parallelization.
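
As a sketch of what such a translation might look like, assume a
`future'/`touch' pair of primitives in the style of parallel Lisps
such as MultiLisp -- the names and exact semantics vary from system to
system, and this is only an illustration of the idea, not the output
of any real compiler:

;; parallel fib: the two recursive calls share no state, so one can
;; be handed to another processor while the current one keeps going.
;; Assumed primitives: (future <thunk>) starts evaluating, possibly on
;; another processor; (touch <future>) waits for and returns the value.
(define pfib
  (lambda (x)
    (cond ((= 1 x) 1)
          ((= 2 x) 1)
          (#t (let ((left  (future (lambda () (pfib (- x 1)))))
                    (right (pfib (- x 2))))
                (+ (touch left) right))))))

Nothing clever is going on: the translator merely notices that the two
subexpressions are independent and schedules them separately.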

> I was hardly claiming that to be impossible, since I've done it lots
> of times myself.

I must say that I find it hard to believe you've done particularly
much on a deep level with computers. You don't seem to understand them
at much more than an educated layman's level.  Hubris, of course,
makes you think that this qualifies you as an expert. You've written a
few programs in a couple of languages and thus believe you understand
something about the field.

> However to claim that two algorithms are the SAME because they reach
> the same result completely misunderstands the notion of algorithm.

No algorithm expressed in a high level language is *ever* directly
executed -- all are translated into "equivalent" algorithms in a
different notation so that they can be executed. I'm not sure how your 
pontifical pronouncement on the nature of algorithms fits in here, but 
I thought I'd mention it.

> Do Turing machines have stacks? (Not all recursive algorithms need them,
> anyway).

How, Mr. Donaldson, do you represent a recursive function invocation
with neither a stack nor a translation into an iterative algorithm?

We are dying to hear how.

So are the legions of Fortran programmers of the last 40 years who've
been forced to build stacks out of arrays to do recursion, btw. They'd 
all like to know how they could have avoided this.

> Well, it should be easy to write the Turing machine's software
> so that it creates a stack, if you want one.

Sure. You can make a Turing Machine simulate anything. That is the
point of the Church-Turing Thesis. However, Turing Machines don't
*naturally* operate in a recursive mode. You claimed that they do.

"Easy" is relative. Getting TMs to perform recursive algorithms is
very clumsy. You claimed otherwise, of course.

Indeed, Turing Machines are pretty damned crippled in most ways. They
don't, for example, possess random access, which means whole classes
of algorithms aren't naturally expressible on them. (That is one
reason people who study the time complexity of real algorithms use a
model of computation frequently referred to as a RAM rather than using
a TM -- real computers possess random access.)

> You seem to be confusing one particular kind of implementation of a 
> recursive algorithm with all such implementations. (Note my wording here!).
> If you need a stack then create one.

Mr. Donaldson, simply present me with a recursive algorithm that can
be naturally expressed with neither a stack (explicit or implicit via
function invocation) nor by converting it into an iterative algorithm
and I'll be happy.

We're waiting.

> You commit a second error when you claim that (admittedly I may have 
> misunderstood you) your computer can do several things at once though it
> is not parallel. As you know, it does this by jumping from one job to
> another.

Actually, that is not what I am referring to at all. All modern
processors have "pipelined execution" and similar tricks. I suggest
you learn a bit about that topic.

> Unfortunately, a lot of the speed of current single processors is wasted 
> in trying to look like they're doing lots of things at once.

If you are referring to context switching in multiprogrammed
environments, I suggest you find some figures to back up that claim
you've just made. I think you'll find you are highly incorrect in that
regard -- in fact, the machines are far more efficient at using their
processors BECAUSE they are multiprogrammed.

> For that matter, the point about weather prediction was hardly that it 
> was impossible, but that it required great speed --- which our computers
> most definitely have, at least comparatively.

So your point is?

> And parallelism is a good way to get great speed: no matter how fast your
> single processor is, if you can join 100 of them together to work on 
> a job, you'll get something faster than any single one (though you do
> have to remember that you're programming 100 processors rather than just
> one, and choose your algorithms accordingly).

And your point is?

> I will assume that you have heard of chaos, though your response shows
> no sign of that.

Your response shows no sign of coping with the point I've made
repeatedly on this.

We are not trying to PREDICT the behavior of a human. We are trying to 
SIMULATE a human.

Sure, nonlinearities in the system will lead to a machine simulating a 
human rapidly diverging from that human. On the other hand, an exact
copy of your brain would also diverge from your brain, and just as
quickly. Analog systems are noisy. We all know that. The question is
not "can I predict Thomas Donaldson's behavior perfectly forever". It
is "can I produce a simulated Thomas Donaldson that no one could
distinguish from the original provided they couldn't look inside the
skull to see if there was a computer inside."

> Weather predictions require such computer power exactly
> because their accuracy decreases with time. You may have done elementary
> DEs in college;

I did some elementary ones, yes. One usually starts with the simple
ones before going on to complex systems of DEs and PDEs.

Just because you're an ignoramus who pretends to understand topics
you haven't studied doesn't mean that I am.

> most of the DEs you studied were very simple,

Thank you for telling me about what I studied in school,
Mr. Donaldson.

I've probably forgotten more math than most people ever learn.

> Several major features have made biological devices superior to
> computers so far. The first and most obvious one is self-repair: any
> device into which I was read must have at least equal abilities at
> self-repair, and ideally far superior such abilities.

I see no reason why we could not produce self-repairing computers in
the future if we chose to.

> Furthermore, as structures brains are much more complex, with many
> more processors, than any computer so far built.

Human brains don't contain "processors". Trying to pretend a brain is
designed like a computer is stupid. Brains are not built like modern
computers. They don't have "RAM", they don't have "CPUs", etc.

The point is only that computers could SIMULATE a brain, not that a
brain is a computer. As I noted, an aircraft engine is not equivalent
to a computer, and yet a computer can simulate an aircraft engine.

Your point about complexity is perfectly true. It will be perhaps ten
to fifteen years before our computers have the same complexity as a
brain. Your point, therefore, is what?

> Among their interesting features is that they aren't laid out on
> flat boards like computers, but are essentially 3-dimensional.

Actually, modern computers are not two dimensional, either. Circuits
in a modern printed circuit board are typically dozens of layers deep,
and cross over each other. Furthermore, the interconnection of many
machines is in fact robustly three dimensional even in current designs
-- take the arrangement of the circuits in a Cray.

Certainly this is grotesquely trivial compared to a brain. The brain
is far more complex than *CURRENT* computers, and many orders of
magnitude more complex in its three dimensional wiring. However, the
reason for this is not that computers cannot be built in three
dimensional wiring -- it is that we simply don't do so yet, because
manufacturing such devices in bulk would be difficult.

Given a robust nanotechnology, however, it is almost certain that
computer devices *would* be three dimensional in their wiring.

> These features make them much more compact, and allow them to do
> important tasks such as recognition and interpretation which
> computers are currently struggling with.

Computers currently struggle with these tasks because they are still
primitive. The human brain had a few billion years to arise. The
machines have only had fifty years. Give them another couple of
decades and see where they have gotten.

> For that matter, computers cannot withstand the range of
> temperatures and environments that biological people can.

Ah, no.

Human beings are far more constrained in their operating temperatures
than modern embedded systems. I can easily build a machine capable of
operating smoothly from -40C to 70C, and at pressures from near zero
PSI to several atmospheres. I defy you to find a human capable of
operating in those ranges.

> Of course, our "medical" problems will become different, but
> we will not escape them simply by becoming software.  

Being software is not something to do simply because the machines are
more durable. It is something to do because software is more
*flexible*.

Perry
