X-Message-Number: 7956
From:  (Thomas Donaldson)
Subject: Re: CryoNet #7941 - #7944
Date: Thu, 27 Mar 1997 00:17:13 -0800 (PST)

To Mr. Dratzman:

I don't know how much of the discussion you have followed, but there are a
couple of points I would make.

Going backwards, the Turing Test has faults. Its main fault is that it is
based far too much on language rather than behavior. (I'm hardly surprised
that some people, using it, concluded that humans were computers --- for
exactly that reason.)

Clearly the Turing Test also assumes that the computer (or person) shares
a language with the person on the other side. That is, they have something
in common. If we are allowed to admit that somehow the computer and I
(the interrogator?) have something in common, here is a test which I would
take much more seriously. I would require that the computer, or something
guided by it by some means, be able not only to converse with me but to
take several walks through city streets and perhaps also through large
parks, and we would both discuss what we had seen. At the end of these
walks I would take it home with me and we would both prepare a small 
snack. If the computer can't eat what I eat, then perhaps there is some
other thing which it or its "moving version" might do --- say, oil itself,
or plug itself into a power outlet, or whatever. Naturally we would 
continue our conversation. Then I would bid it goodbye and it would find
its own way back. (It's OK if it has to ask me for a map first).

If it can do all of these things, and everything they imply, then I 
would seriously consider it to be aware. The point of this test, unlike
the Turing Test, is that it must not only interact with me, but also
interact with the world, in a serious way, not as a game. The city we
walked through would not be virtual, nor would the park. 

I will summarize what I am saying. Because everything we do is real
rather than symbolic (when stripped down to its basics), we must interact
with a world which cannot be fully defined by any amount of symbols or theory.
Clearly some systems of interacting atoms can do this, and some are aware.
But then trees are systems of interacting atoms, and they are not aware.
I am saying first of all that requiring that such a system be aware imposes
some conditions on its structure. Those conditions are much stronger than
simply the ability to manipulate symbols --- which is exactly what the
objects we normally call computers can do, and do very well. It must be
able to interact with the real world; and if we are talking about 
computer intelligence rather than simple awareness, it must do so at least
as well as a human being. 

The problem with symbols is that they have no INTRINSIC meaning to anyone.
Some other person must be present to interpret the symbols produced by
a computer program. That interpretation, one way or another, connects 
them somehow to something in the world... but it is not the computer which
does that, it is its user. Not only that, but symbols by their nature
can only have a floppy, fuzzy, changeable interpretation. God never made
a dictionary, we did. And ultimately words cannot be defined by other 
words. (One problem with the Turing Test is that it plays on our human
tendency to react to symbols --- that is, the language produced by the 
person or computer on the other side --- quite automatically, as if they
automatically meant something to the object/person producing them).

And I'm actually making a strong statement here: not about computers ---
I think that this part of what I am saying should be trivial, once we
liberate ourselves from the notion that symbols are real independent of
us -- but about awareness. WE are structured in a particular way which 
allows us to interact with the world. So are mammals. And I am saying that
a brain which can do that as well as we do, or even similarly, will also
produce awareness. In animals this may mean awareness without
intelligence --- though compared to most animals, a rat is really quite bright.

Perhaps as we explore the universe further we will find creatures which
are not aware, but can still function in the world. That will be interesting.
But we did NOT make them when we made computers. 

As for artificial intelligence or artificial awareness, I have no problem
with either. My problem is with the notion that we could program computers
to have either. Robots, though, are a separate question, and the more
independent we can make those robots, the more awareness they may have.

			Long long life,

				Thomas Donaldson

Rate This Message: http://www.cryonet.org/cgi-bin/rate.cgi?msg=7956