X-Message-Number: 8655
From: Thomas Donaldson <>
Subject: Re: CryoNet #8626 - #8630
Date: Thu, 2 Oct 1997 22:29:25 -0700 (PDT)

Hi again!

Some comments for John Pietrzak:

1. First of all, neural nets do NOT learn categories. They learn to respond
   in a similar way to a set of inputs. It is the human beings who train them,
   or observe them, who use categories to describe what they are doing.

   This is actually what we ourselves do in the early stages of learning a 
   language. We don't so much get a verbal definition of a word as learn to
   recognize its instances, which is not the same thing. Later on, we may have
   many synonyms and so play around with verbal definitions. Consider chairs.
   Chairs can have many forms, and even a given chair may look different 
   depending on the light, who is sitting in it, and other such contexts. Yet
   we learn to recognize a chair.
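   That point can be made concrete with a toy sketch (my own illustration,
   not from the original discussion): a tiny perceptron trained on a few
   made-up "chair-like" feature vectors. The net itself stores nothing but
   weights; the word "chair", and the feature names, exist only in the
   comments we humans write around it. The features and example vectors
   below are invented for illustration.

   ```python
   # A minimal perceptron, trained to respond similarly to a set of inputs.
   # The category label "chair" appears nowhere in the net -- only weights do.

   def train(examples, epochs=50, lr=0.1):
       """Classic perceptron rule over (features, target) pairs."""
       w = [0.0] * len(examples[0][0])
       b = 0.0
       for _ in range(epochs):
           for x, t in examples:
               y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
               err = t - y
               w = [wi + lr * err * xi for wi, xi in zip(w, x)]
               b += lr * err
       return w, b

   def respond(w, b, x):
       return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

   # Hypothetical features: [has_seat, has_back, leg_count/4, is_soft]
   examples = [
       ([1, 1, 1.0, 0], 1),   # wooden chair
       ([1, 1, 1.0, 1], 1),   # padded chair
       ([1, 0, 0.75, 0], 0),  # stool: responds differently
       ([0, 0, 1.0, 0], 0),   # table
   ]
   w, b = train(examples)
   # A chair variant never seen in training still evokes the same response:
   print(respond(w, b, [1, 1, 0.75, 1]))  # → 1
   ```

   Nothing in the trained weights "contains" the category; we recognize
   instances, and the naming is ours.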

2. I do not intend to defend everything Searle has said. However, that is not
   the same as denying that some of what he has said gets to the heart of the
   problem with the Turing Test.

   That problem is simply that recognition is fundamental to everything we do,
   and cannot be explained purely by algorithms, no matter how complex. First
   we learn words without being able to define them other than by pointing 
   at instances. THEN we learn how to define them with other words. Searle's
   Chinese Room is an attempt to show this problem: sure, I know lots of 
   symbols, and can even answer one train of symbols with another. But as for
   knowing what they MEAN, in the sense that I could apply them to say 
   something about the real world, I'm helpless.

   And if we examine the setup of the Turing Test, there is no point in it
   at which the Interrogator (the human being) presents the Device not with
   words but with some real object. It's not set up so that the Interrogator
   and the Device can take a stroll in the park. When you bring in such 
   possibilities you cease to talk about the Turing Test at all. 

3. OK, it's true that we may someday devise even better means than neural 
   nets to do what neural nets can do. Fine. I wasn't making a universal 
   statement, just discussing the kinds of devices we know now and what they
   could do.

   I will add, though, that I differ strongly from you as to whether we will
   SOON have "simpler devices able to do the same things that neural nets 
   do now". Since you have not provided one such device, this is a point on
   which we'll simply have to agree to differ, and given cryonics, we may
   come back to it 100 years from now and see just who had the best 
   approximation to truth.

   I will also say that quantum computers probably won't help --- though 
   perhaps we could make quantum neural nets. The problem in chess, or 
   any game, is that of searching through a very large set of possibilities
   WHICH COULD BE DESCRIBED IN ADVANCE. Neural nets provide a means to get
   an answer, and one which is at least approximately suitable, to inputs which
   HAVE ** NOT ** BEEN DESCRIBED IN ADVANCE, and may even be quite beyond the
   imagination of those who designed the neural net. (No, I'm not against
   quantum computers either, and hope we can build them.)
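   The contrast can be sketched in a few lines (again my own illustration,
   with invented values): a lookup table --- the limiting case of search over
   possibilities described in advance --- simply fails on anything unforeseen,
   while even a fixed linear unit returns *some* approximate answer for any
   input vector. The "opening book" entries and the weights below are
   hypothetical.

   ```python
   # Search/lookup: every handled position must be described in advance.
   book = {"e4": "e5", "d4": "d5"}   # hypothetical opening book

   def table_reply(move):
       return book[move]             # raises KeyError on anything unforeseen

   # A pre-trained linear unit: weights assumed learned elsewhere.
   weights = [0.4, -0.2, 0.1]        # invented values for illustration

   def net_reply(x):
       s = sum(w * xi for w, xi in zip(weights, x))
       return 1 if s > 0 else 0

   # An input vector never described in advance still gets an answer:
   print(net_reply([9.0, 3.2, -1.5]))  # → 1
   try:
       table_reply("c4")
   except KeyError:
       print("table has no answer for an unforeseen input")
   ```

   Whether that answer is any good is a separate question; the point is only
   that the net degrades gracefully where the enumerated search has nothing
   to say.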

One major point. Several times in this conversation I have said that I'm
perfectly happy with the notion that we might build a DEVICE that could 
not only pass the Turing Test but even respond like a human being in the
real world. I am arguing not about that possibility but about whether, first,
the Turing Test provides an adequate test of whether we have constructed such
a device, and second, whether that device would or would not qualify as a
COMPUTER, whatever a computer is supposed to be.

Any set of axiomatised concepts basically provides a little game with which
we can play and possibly draw conclusions which hold within its definitions
and axioms. We know now that such systems cannot encompass all mathematics,
or all that can be known. One fundamental reason for that is simply that 
such systems have no connection to the world itself. We, human beings,
decide to use them to make predictions and statements about the world. When
we do so we make assumptions about how they correspond with real objects ---
which assumptions, by their nature, CANNOT take account of everything that
might be said about these objects and their interrelations. If we know what
we are doing, such systems have proven very useful, but they should never
be identified with the world itself. They are constructions we have made
so that we can predict and try to understand, and exist only in our heads.

So that is what I have to say about Turing Tests and neural nets. And oh yes,
I'd say that if we wanted to make a DEVICE which might do what human beings
do, the first design choice I would make would be to construct most of its
brain from neural nets. Not all, but nearly all.

			Long long life,

				Thomas Donaldson
