X-Message-Number: 8588
Date: Fri, 12 Sep 1997 14:09:25 -0400
From: "John P. Pietrzak" <>
Subject: Setting the Limits
References: <>

Thomas Donaldson wrote:
> As for the notion of equivalence, I do hope that John Pietrzak read
> and listened to what I said. There is no single definition of
> equivalence, there are many depending on what you want to do. The
> ideas of Turing about computers use a broad definition of equivalence.
> Practical issues raise the validity of much narrower definitions.

Uhm, let's see, Cryonet #8549?  I'll re-read it to make sure.
But really, the Turing Machine is as abstract as it is for a purpose:
it sacrifices many specific real-world aspects of modern digital
processors in order to gain wide applicability.  The TM captures the
notion of computation in practically any digital processor currently
conceivable.  Whenever you _can_ actually prove something about the
properties of a Turing Machine, you can be pretty well certain that
the result should hold over all possible computers.
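To make the abstraction concrete, here's a minimal sketch of a Turing
Machine simulator (names, the transition-table encoding, and the example
machine are all mine, not anything from the literature).  The point is
how little machinery the TM model needs, which is exactly why results
about it carry over so broadly:

```python
def run_tm(delta, tape, state="start", blank="_", max_steps=10_000):
    """Run transition table delta on a string tape; return the final tape.

    delta maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left) or +1 (right).  The machine halts when no
    transition is defined for the current (state, symbol) pair.
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        symbol = tape[head] if head < len(tape) else blank
        if (state, symbol) not in delta:
            break  # halt: no rule applies
        state, written, move = delta[(state, symbol)]
        if head == len(tape):
            tape.append(blank)  # extend the tape on demand
        tape[head] = written
        head = max(0, head + move)
    return "".join(tape).rstrip(blank)

# Example machine: flip every bit left to right, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flip, "10110"))  # -> 01001
```

That's the whole model: a finite rule table and an unbounded tape.
Everything a digital processor does reduces to something of this shape.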

[ Later on here, on the Turing Test, I've just gotta disagree: ]
> 3. The original Turing test, with limited means by which the
>    interrogator could even talk with the computer/person on the other
>    side, fails because it does not take in the full range of behavior
>    a human being can show.

But, what would you do with the additional range of behavior?

>    Issues of intelligence and its meaning, while I certainly agree
>    that they are far more vague than most people think, aren't the
>    central problem with it.

W.R.T. the TT, there are no issues of intelligence and its meaning.
The TT says _nothing_ about intelligence.  It's a test of mimicry.  If
you have a *completely indistinguishable* mimic of a human, you still
don't know if it's intelligent or not, because the TT doesn't say that
mimicry = intelligence!
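A caricature of the point, in code (the canned replies are purely my
illustration): a bare lookup table produces conversational output, and
the TT can only judge that output.  Whatever is, or isn't, going on
inside the box is invisible to the test:

```python
# A pure lookup-table "mimic".  The Turing Test compares input/output
# behavior only; nothing here could be called intelligence, yet the
# test has no way to see past the responses.
CANNED = {
    "hello": "Hi there!",
    "are you human?": "Of course.  Are you?",
}

def mimic(question):
    """Answer from the table; deflect anything unrecognized."""
    return CANNED.get(question.strip().lower(), "Interesting.  Tell me more.")

print(mimic("Are you human?"))  # -> Of course.  Are you?
print(mimic("What is 2+2?"))    # -> Interesting.  Tell me more.
```

Scale the table up far enough (impractically far, of course) and the
interrogator has nothing left to distinguish; the test still hasn't
told you anything about intelligence.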

>    The central problem is that it operates only with symbols.
>
>    Yes, folks, deep down our brains do not work with symbols.

Ok, hang on here.  Number one, why do you say symbols are insufficient?
Depending on your definition of intelligence, they may be just fine;
if you can pass information from one being to another which indicates
intelligence, you should be able to use symbols (indeed, have to use
symbols) to do it.

Number two, deep down, we still don't understand our brains.  It's
perhaps somewhat arrogant to write off any given mechanism, symbols
included, as something the brain simply doesn't use.

>    [...] when we use symbols, we know their MEANING, which ultimately
>    cannot be given with other symbols.

At this point we reach full-fledged philosophical theorizing.  You've
stated that symbols are not the bottom-level construct, that they are
grounded in meaning.  Therefore, you must be saying that meaning is
the bottom-level construct, right?  Some undefined substance that
resides somewhere inside our minds?  Or is meaning itself grounded in
something else; and if so, how would it differ from symbols?  (Of
course, this is starting to reach well beyond my area of expertise...)

>    For that matter, human beings (and other devices like them) will
>    be subject to the Turing limits.

Careful now, the implications of this statement may go against some
of what you have stated earlier.  I too believe that, ultimately,
humans are bound by the constraints of Turing machines.  However,
this also means that I must believe that humans can be simulated,
perfectly, on a Turing Machine, or any other equivalent digital
processor (given sufficient time and storage capacity).  No other
futuristic gadgetry need apply.  However, this is an issue of belief
rather than science right now, and I suspect most people wouldn't agree
with me here.


John
