X-Message-Number: 11828
Date: Wed, 26 May 1999 19:02:00 -0400
From: Daniel Crevier <>
Subject: zombies, ducks and consciousness.

I was surprised at the vehemence with which Mr. Ettinger replied
to my arguments in message #11800. He states, for example, that:

>Mr. Crevier (among others) takes an arbitrary stance and fails to 
>label his postulates as such.

And later:

>At least, Dr. Perry understands my arguments, and 
>agrees that his isomorphism postulate is only that, and does not claim 
>certainty but only expresses leanings, as do I.

I had said:

>There cannot be one kind of mechanisms (the A's) that make physical 
>systems just behave as if they were conscious, and another kind (the
>B's) that make them really conscious.

All right, maybe I should have qualified this and said something like:
"The argument I am examining states that there cannot be one kind of
mechanisms...". This is what you do in scientific papers, and I had
hoped to be allowed to dispense with it in the informal context of
this forum.

On the other hand Mr. Ettinger replied:

>Yes there can. The computer predicts or describes behavior, and a book 
>written by a computer could do the same, and a robot controlled by the 
>computer or the book would behave as if conscious.

I don't see any maybes there either, and that position sure looks like
a lot more than a mere "leaning".

Mr. Ettinger counters me with propositions, the first of which states:

>1. We do not yet understand the physical basis of consciousness.

and continues:

>Prop (1) by itself ought to be enough to warn against any assumption
>that any inorganic system, let alone a computer, could be conscious.

I don't follow: if you don't know what causes X, how can you be
prejudiced one way or the other? In particular, if you see a
being that displays all the appearances of having X, wouldn't it be
logical, in the absence of reasons to the contrary, to grant that
this being has X? In other words, if appearances are against you,
the onus of proof is on you.

Part of the misunderstanding may reside in a misapprehension of how
strong the appearances can be, and in a belief that one can easily
fake X with smoke and mirrors. Indeed Mr. Ettinger states about his 
second proposition that
>we already know ... that a computer could fool at least 
>some of the people some of the time, and programs with that 
>capability already exist.

Well, I don't know of anyone who has been fooled yet in a controlled
experiment. Remember: we are talking about a being whose behavior
is *absolutely indistinguishable* from what it would be
if it were equipped with X. To use the duck analogy, which Mr. 
Ettinger is fond of, suppose you are presented with a "duck" that not
only looks and quacks like a duck, but also swims and flies like a duck.
It eats and excretes like one, mates and lays eggs, which eventually
hatch into small "ducks" that grow into full-fledged "ducks".
It would take *very* solid grounds to the contrary to claim that this
"duck" is not a duck.

A "conscious" computer would present us with a very similar situation.
It could see, hear and talk, and control a robot body.
It could hold a conversation with you and express heartfelt opinions. 
It might even feel upset if you contradicted it too much, as happens
in human discussion groups ;). You could become friends with it. 
It could be part of a work team involving several people, and enter 
into the usual office politics. Everyone in the team would testify 
to its apparent consciousness. Discussing things with it could prompt
it to come up with original ideas that might eventually grow into
full-fledged worldviews or theories.

Mr. Ettinger admits that such a computer could exist. I think it will,
in twenty to thirty years, at which time the discussion we are having
will become much more than academic. 

If you don't know what causes consciousness, you don't have any grounds
for denying that this computer is conscious. The arbitrary stance
(because, I'll venture to say, that is what it is) that it takes an
additional or a different mechanism to cause consciousness is not 
justified. In fact, even stating that there is a "physical basis 
for consciousness" (other than the physical processes required to 
make one behave as if conscious) is an enormous unlabeled postulate 
in itself. This is so because (and here I'll repeat myself because 
this seems to be the only way to get the point across) if such an 
additional mechanism were required, then it would be possible to 
make systems that act *exactly* as if conscious but are not conscious. 
Thus, adding the additional mechanism would, by definition, have no 
observable effect whatsoever. Believing that something that can have 
no effect must exist is an act of pure faith.

There are other points in Mr. Ettinger's reply with which I 
disagree, but this posting is getting too long and I'll reply
to them some other time. I would like to add a word about 
isomorphism, though. Mr. Ettinger seems to think that, contrary to
Mike Perry, who understands the word maybe :), I have an
incontrovertible belief that it is a sufficient condition for
consciousness. In truth I hold no such belief. Nor do I think that it
is a logical implication of my position. For example, an inert book merely
describing a mind could not be conscious, because no such book could 
ever behave as if conscious.

I'm sorry if anything I said previously was offensive to Mr. Ettinger.
He is a witty and worthwhile advocate, and I enjoy our discussions.
I hope we'll continue to have them in good humor.

Daniel Crevier, Ph.D.
