X-Message-Number: 9332
Subject: Is Thomas Donaldson just a Markov Chain?
Date: Sat, 21 Mar 1998 19:35:17 -0500
From: "Perry E. Metzger" <>

> From: Thomas Donaldson <>
> Date: Sat, 21 Mar 1998 00:43:10 -0800 (PST)
> 
> To Perry Metzger et al:
> 
> You seem drunk on Turing machines, despite the plain fact that they have
> proven inadequate models even of significant computation.

Huh?

1) I didn't even bring up Turing Machines. You're the one who's been
bringing the things up repeatedly. You (implicitly) claim that neurons
can somehow perform non-Turing equivalent computations, but haven't
provided us with evidence.

2) Turing Machines are a maximal model of real world computation. No
one has ever demonstrated a constructable machine that could compute
things Turing Machines cannot. They are also mathematically
tractable. As such, I can't find any evidence that they are
"inadequate" models of computation. They are, in fact, the model of
computation used in theoretical computer science to this day, in
preference over other (equivalent) models such as Church's lambda
calculus. Every text on theoretical computer science that I know of
uses Turing Machines as the mathematical model of computation. If they 
are "inadequate models even of significant computation" then I invite
you to provide a better one. I am sure that the community will happily 
adopt it.

> Not only that, but you also show a remarkable ignorance of the
> mathematics involved in any kind of simulation.

I've written numerous simulations of real world devices on many
occasions. Perhaps that was possible only because I was ignorant --
if I knew as much as you did, perhaps I would have been unable to
write those simulations.

> The Siegelmann counterexample, alone, suggests to me
> that some caution should be used when thinking about just what the range of
> the turing machine model might be.

There is no counterexample, any more than Halting Problem oracles are
a 'counterexample'.

Might I note that, for the fourth message in a row, you've ignored the 
entire point of why people use turing machines as a model of
computation? I could quote my entire long comparison of the Turing
Machine to the equivalent device in thermodynamics, the Carnot Cycle
Engine, and the explanation of the uses of "Maximal Models", but it
would be boring to do so, especially as it is unlikely that you would
bother to read it, any more than you read it last time, or read my
comments explaining why we use Turing Machines the last several times.

I contend, in fact, that you are a Markov Chain, not a person, and
that you simply regurgitate the same patterns of text over and over
again, without taking any input whatsoever. If I'm wrong, perhaps you
could then bother to respond to my comments.
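
For anyone unfamiliar with the term: a Markov chain just emits each
word based on which words tended to follow it in old text, taking no
input from the conversation whatsoever. A toy sketch in Python (the
word table is made up purely for illustration):

```python
import random

# A toy first-order Markov chain over words: each word is followed by one
# of its recorded successors, chosen at random. Note that nothing here
# reads any input -- the chain just regurgitates old patterns of text.
transitions = {
    "turing":     ["machines", "machines", "model"],
    "machines":   ["are", "cannot"],
    "are":        ["inadequate"],
    "inadequate": ["models"],
    "models":     ["of"],
    "of":         ["computation"],
}

def babble(start, length, seed=0):
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        successors = transitions.get(words[-1])
        if not successors:  # dead end: nothing ever followed this word
            break
        words.append(rng.choice(successors))
    return " ".join(words)

print(babble("turing", 6))
```

Run it as many times as you like: the output always looks vaguely like
the old text, and never responds to anything.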

> (And for Mr. Metzger in particular, it does not help that we can add
> more tape when needed.

You've utterly missed the point of Turing Machines. See my comments above.

> (Again for Mr. Metzger: don't pride yourself too much on your knowledge of
> computing. My own books have been packed away, but I looked it up elsewhere:
> the formal definition of recursion does not require a stack).

As I recall, the last time you tried to define recursion, you came up
with a description of iteration. You didn't mention self-invocation of 
a function even once, and that is the core of the whole thing.

Perhaps I'm ignorant, but at least I know how to define "recursion"
accurately.
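
To make the distinction concrete, here is factorial both ways --
illustrative Python, not anything from our earlier exchange. The
recursive version invokes itself; the iterative version loops with an
accumulator and never does:

```python
def factorial_recursive(n):
    # Recursion: the function invokes itself on a smaller argument.
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Iteration: a loop and an accumulator; no self-invocation anywhere.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial_recursive(6) == factorial_iterative(6) == 720
```

The formal definition doesn't mention a stack, true -- but
self-invocation is what makes the first of these recursion and the
second mere iteration.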

> First of all, until we fully understand how brains work, it is not
> sufficient to simply say that neurons are electrochemical machines.

We are missing details of how the brain works, just as we are missing
details in genetics. However, we know enough about genetics that the
big picture (DNA, mRNA, tRNA, ribosomes, etc.) is well
understood. It is true that details (like the reason introns are often
so long, etc.) are missing, but we know enough that it is unlikely
that suddenly people are going to discover something huge that we
didn't know about before, like, say, a non-DNA/RNA genetic storage
medium, as there isn't enough "unexplored space" for such things to
hide in.

Similarly, the big picture in how neurons work is understood. We know
how signals are sent down them, how neurotransmitters are released,
what all the neurotransmitters are, how many of the receptors for
neurotransmitters work, what sort of dendrites and axons one finds in
neurons, etc, etc. We are missing details, true, but they aren't
"huge" details. It is unlikely in the extreme, for instance, that
neurons are involved in some sort of magical quantum computations the
way Penrose contends they might -- there isn't enough "unexplored
space" for such things to hide in.

> Tell me about the electrical reactions IN DETAIL, and the chemical reactions
> IN EQUAL DETAIL and also the interactions between them IN DETAIL.

Actually, I thought we had most of the information on that. We know,
for example, how the sodium/potassium pump stuff works in grotesque
detail at this point -- down to the molecular structures in the
membrane that make it possible. Some of the stuff we've been getting
in recent years -- stuff like the molecular structure of the receptors
for many neurotransmitters, and the mechanisms of their operation down
to the atom -- seems like just the sort of thing you are asking for.

Sure, we are missing details. However, there isn't anyplace for "big
stuff" to hide in any more.

> WE know far less about simulating brains than we know about
> simulating airplane wings or the weather.

True enough, but people have been working hard on simulating airplane
wings and the weather for forty years, and there aren't machines good
enough yet to simulate full human brains. I suspect we could easily
simulate a full C. Elegans at this point, though, given enough cash
and will. The thing only has, what, 950 odd cells so far as I
remember. It would be a stretch for our equipment, though.

However, computers get better every day, and are improving
exponentially. Twenty years ago, making a machine that could beat the
world champion at Chess even once would have been unthinkable, but now 
it is on the edge of what we can do.

> Not only that, but unless brains have very special features (say,
> for instance, that they are literally digital, but other features
> would work as well) then any simulation of them will inevitably
> involve chaos, divergence, and failure after only a short
> time. Merely being electrochemical machines is far from enough.

Why do you assume that "chaos" is important here?

I've repeatedly explained -- through perhaps six messages now -- the
concept of statistical functional equivalence, and why that more or
less nullifies the entire issue of "chaos" that you bring to bear as
though it were garlic and computers were vampires. I'll note (as I've
done repeatedly) that you've never paid attention to my comments on
that, or even so much as given a hint that you've read the argument.

Perhaps, as I noted, you ignore these repeated explanations because
you aren't a conscious entity. If you are, I'd appreciate enough
acknowledgment of what I've said to make a reasonable judgement to
that effect.

> Given the complexity of human brains, it seems very unlikely to me that 
> we won't find chaos in our simulation.

Yes?

As I've noted, that doesn't matter, because we aren't trying to
produce a prediction of the behavior of an analog system, only to
produce a functional equivalent to that system. As I note above, I've
been describing this in detail, over and over, message after
message.
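
A concrete illustration of the point (my own toy example, using the
logistic map as a stand-in for any chaotic system): perturb the
starting state by one part in a billion and the trajectory diverges
completely, yet the long-run statistics of the two runs agree. That is
the statistical sense of equivalence that survives the chaos:

```python
# The logistic map x -> 4x(1-x) is about the simplest chaotic system
# there is: trajectories from nearby starting points diverge
# exponentially fast.
def logistic_trajectory(x, steps):
    out = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = logistic_trajectory(0.2, 100_000)
b = logistic_trajectory(0.2 + 1e-9, 100_000)

# Pointwise, the two runs end up wildly different ("chaos")...
assert max(abs(x - y) for x, y in zip(a, b)) > 0.5

# ...yet their long-run statistics -- here, the time-averaged mean --
# agree closely. Neither run "predicts" the other, but either is a
# statistically equivalent stand-in for the other.
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
assert abs(mean_a - mean_b) < 0.01
```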

> Basically, to claim that we will ever simulate a human being seems to me
> to be next to the claim that we will someday completely understand the 
> universe.

How about this as a little test, Mr. Donaldson.

I contend that you haven't been reading my messages.

I'll continue this discussion if you will...

1) Acknowledge this comment at the head of your next message by
quoting it in your next message, and

2) Explain why the "statistical functional equivalence" argument I've
repeatedly explained in prior messages isn't reasonable. I'm not
looking for a GOOD argument, mind you -- just enough of one to give me 
a hint that I'm talking to a person and not a Markov Chain
regurgitating previous messages.

> There is a problem in simply assuming that we can add more memory if our
> Turing machine runs short. To do so requires that we be able to predict the
> length of the computation it has been working on, which (if I understand
> rightly) is an unsolvable problem. 

Not that this is important, but no, you're wrong.

Imagine I have a Turing Machine with a finite length of steel tape. I
rig it so that if it needs to go past the end of the tape, it freezes
until more tape is added. I then go out, get myself a few hundred
more miles of Handy Dandy Alan(TM) Brand Turing Machine Steel Tape,
weld it on the end, and let the machine continue on until it hits the
end of that tape again or completes its computation.
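
In software the welding is even easier. Here's a toy sketch (the
little three-rule machine is made up for illustration) showing that
the tape can be grown at the instant the head runs off the end -- no
prediction of the computation's length is required at any point:

```python
# A tiny Turing-machine simulator whose tape grows on demand. The length
# of the computation is never predicted; blank cells are "welded on"
# only at the instant the head runs off the end of the tape.

# (state, symbol) -> (symbol to write, head move, next state)
# A made-up machine that writes three 1s and halts.
program = {
    ("A", 0): (1, +1, "B"),
    ("B", 0): (1, +1, "C"),
    ("C", 0): (1, +1, "HALT"),
}

def run(program, start="A"):
    tape, head, state = [0], 0, start
    while state != "HALT":
        write, move, state = program[(state, tape[head])]
        tape[head] = write
        head += move
        while head >= len(tape):  # ran off the right end:
            tape.append(0)        # weld on more blank tape
        while head < 0:           # ran off the left end:
            tape.insert(0, 0)
            head += 1
    return tape

print(run(program))  # -> [1, 1, 1, 0]
```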

> Any serious theory of computing, which aims to deal with real computation
> rather than purely theoretical computation, will have to use finite machines.

We have theories of computation based on finite machines. If you knew
anything about theoretical computer science, you would know about the
theory of Finite Automata, and what they are capable of. One also
tends to learn the theory of other sorts of limited automata, like the 
limits of the so-called Push-Down Automata, or PDAs (which, unlike
Finite Automata and Turing Machines, have different characteristics
when you compare the deterministic and non-deterministic versions of
the machines.)
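
The classic separation is easy to show concretely -- illustrative
Python, simulating the machines directly. A finite automaton has only
its fixed states for memory, while a push-down automaton adds a stack,
which is exactly what languages like balanced parentheses require:

```python
# Finite automata vs. push-down automata in miniature.

def dfa_even_ones(s):
    # A two-state DFA accepting bit strings with an even number of 1s.
    # Its entire memory is which of the two states it is in.
    state = "even"
    for ch in s:
        if ch == "1":
            state = "odd" if state == "even" else "even"
    return state == "even"

def pda_balanced(s):
    # A deterministic PDA (simulated with an explicit stack) accepting
    # balanced parentheses -- a language no finite automaton can accept,
    # since the nesting depth is unbounded.
    stack = []
    for ch in s:
        if ch == "(":
            stack.append(ch)
        elif ch == ")":
            if not stack:
                return False
            stack.pop()
    return not stack

assert dfa_even_ones("1011") is False   # three 1s: odd
assert pda_balanced("(()())") is True
assert pda_balanced("(()") is False
```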

> Considering the florid growth of computing, a decision to limit
> ourselves to finite machines will probably limit even our theories
> of computation at worst by a trivial amount. And in return, we would
> have a much deeper idea of what is really possible.

As I noted, we already have that theory, and it is well explored.

As I also noted, you have yet to explain *why* any of this has
anything to do with neurons, since you haven't presented any
information which would lead us to believe neurons perform any sort of 
computations which are non-Turing equivalent.

Perry
