X-Message-Number: 11085
From: Thomas Donaldson <>
Subject: More on the Turing Test and its inadequacies
Date: Thu, 14 Jan 1999 00:31:03 +1100 (EST)

To Mike Perry:

I may criticise your book when I get it and read it. But yes, I am
interested in what you have to say.

A bit more about the Turing test (the one in which we decide whether we
are talking with a computer or with a human being): as commonly stated
it suffers from a MAJOR fault, which may turn out to be much deeper than it
first seems. The fault is that it tests language using language
alone. I summarize this problem in a review (as yet unpublished) I wrote
for CRYONICS: if we work only within language, we cannot present the
person on the other side with any other KIND of test. Can he, she, or it
recognize a real, live alligator? Or an earthworm, for that matter? Can
he/she/it tell when the US national anthem is being played (normally, or
on a calliope, or on a didgeridoo)? 

Human beings, and I would hope any creature/device we may someday build to
emulate even some of the simple features of human beings, do not just work
with symbols. They know how to apply those symbols to things in the real
world. Any creature/device we may someday build must do the same or fail
completely at one of the simplest things human beings do. Yet the Turing
test never leaves the world of symbols at all. You can ask the
person/computer about alligators, and may even get a long description of
them (*), but you cannot show that person/computer a real, live alligator
and ask him/her/it what it is. 

(Yes, this argument comes from Searle, but I'm stating it differently. And
I'll add that ultimately it is an empirical question: is our normal
language so all-encompassing that merely by knowing its symbols and their
relationships we'll be able to use it to identify real earthworms or
alligators? Though I do admit to lots of skepticism about the outcome of
such a test.)

I would add, similarly to Ettinger, that the fact that very simple programs 
can actually fool a significant number of people should start to make us
suspicious of the merits of this test. Just who is the ideal person who
will not be fooled by a sufficiently clever program which, internally,
utterly fails to work like a human being at all? 

As you will notice, I do not think the failure of Turing's Test means that
we will never be able to build devices which emulate human beings. It may
mean that such devices turn out not to be computers, but that's a separate
question, even though many want to think they are the same.

			Best and long long life to all,


				Thomas Donaldson

(*) Not every human being could give a LONG verbal description of an 
    alligator, even though more humans can recognize one if they see it
    than can give such a description.
