X-Message-Number: 8236
From:  (Thomas Donaldson)
Subject: Re: CryoNet #8229 - #8232
Date: Thu, 22 May 1997 22:04:47 -0700 (PDT)

Hi again!

To Perry Metzger:

You claim, then, that "consciousness", the word, is nonsense. Well, if you
have no idea what I mean, I'm sorry. 

The test I proposed was not a global, always-working test of consciousness,
nor did I claim it was. I said only that it was a way of testing the presence of
consciousness in human beings. No doubt it would work for other mammals too,
and as the animals grew more and more distant from us it would work less and
less well. It seems to me that if I have such a test for one species, we can
at least consider the notion for others. Yes, we may need to set up different
tests, but that's for later. 

Now as for definitions: in the first place, I don't think that's the
way to proceed. We form a definition once we get some kind of empirical hold
on what we're trying to find. We can already see some good signs of whether
or not someone is conscious, given that they are human beings. A
conscious human being responds to events quite differently from an unconscious
one. A conscious human being, for instance, will respond if you try to speak
to him/her in a normal tone of voice. That is not true of an unconscious
person. There are a variety of other behaviors which distinguish conscious
people from unconscious ones.

There is also the issue of how this consciousness appears to a conscious
individual: me, for instance. I know that I might sometimes feign
unconsciousness, and given that I am not hooked up to the proper lab
equipment, I might even succeed in doing so. I also know that generally,
when I would say that I am conscious, I also fit the normal tests for
consciousness, such as being aware of the events around me.

Why don't I believe in starting off with definitions? Because we might
discover that our notion of consciousness is somehow inadequate. An example
of the kind of thing that might happen comes from the study of memory: we
now recognize several different kinds of memory, not just one. There was a
time when neuroscientists thought there was only one kind. Right now,
"consciousness" is one of those words which has no "official" definition,
even though many have ideas about what it might mean. Most words don't
really have noncircular definitions: you did not learn English from reading
the dictionary alone. You are being quite disingenuous when you claim you
don't know what I mean.

Finally: you say that these ideas need to be falsifiable, and I set up a 
scheme in which we might falsify one notion of consciousness. You then tell
me that I'm foolish. Just who is foolish here? Certainly, if we did the tests
I describe on someone, and he gave every external appearance of being 
conscious but there was no area in his brain which was always active when he
appeared conscious, that would falsify this notion of consciousness. It would
also be an interesting experiment, and it would tell us something about how
our brains work. It would be even more interesting if we found such areas in 
some people but not in others.

One problem with unthinking reliance on Popper, or on any other philosophical
account of science, is that no philosopher has actually done science. In
practice, we try to form hypotheses which are testable (and thus, ultimately,
falsifiable). 

Most important, we do not start with definitions. We end with them. No notion
we use starts out fully defined. For instance, gravity means something quite
different in Newton's theory and in Einstein's. So does mass. This does not
mean that "mass"
is a meaningless word. If we aim to understand how our brains work, then
consciousness will ultimately become part of a theory of how human brains
work. It may not be quite the consciousness we think of now, any more than
mass means quite the same thing in general relativity as it meant to 
Newton, but that hardly means it is meaningless. Physicists set up a 
rough correspondence and use it frequently, and they are frank about
what they are doing. Before both Newton and Einstein, Galileo had yet another
notion. In the fuzzy way in which words really exist outside dictionaries,
we've had a common notion of "mass" all along, without any ability to 
define it independently of our theories. Consciousness is the same.

As for intelligence and machines: perhaps I misunderstood Searle, but I do
not find him simply foolish. I have already given a test for consciousness
and intelligence in a computer: I ask for more than just the ability to 
play with words, wanting IN ADDITION an ability to deal with objects in
the world. The Chinese Room really suggests that the ability to carry
on a conversation is only a small part, and quite possibly one which might
be imitated on a computer with no more awareness than any other computer
(such as, for instance, Deep Blue). If a device cannot deal with the world
outside of language (know when to duck, how to cross the street, how to
pick up objects), then it fails MY test for consciousness. The Turing Test
is far too fixated on language alone, when most of our experience does
not involve language. For that matter, I'd probably accept the computer
as conscious if I could pass it photographs of paintings (a Gauguin,
a Renoir, a Rubens, a Rembrandt) and it could tell me what was in them. 
But note that passing it photos isn't part of the Turing Test as originally
set up.

			Long long life,

				Thomas Donaldson
