X-Message-Number: 17933
Date: Wed, 14 Nov 2001 19:49:52 -0800
From: Dave Shipman <>
Subject: Re: The Old Consciousness Thing

Friends,

I am grateful for the responses to my first cryonet posting on what must be 
a perennial topic for this group. It's one of those discussions that just 
keeps going around and around, so apologies to long-time cryonet veterans. 
I guess my basic point is that I think consciousness should be taken 
seriously.

Robert Ettinger (#17914) points out that the substrate necessary for 
consciousness may require "specialized physics or biophysics..., 
conceivably possible only in organic matter". A purely computational 
substrate may not hack it. That's why I used terms like "intelligent 
artifacts" and "artificial minds" instead of something like "conscious 
computers". To sustain consciousness, the "intelligent artifact" may well 
have to include organic machinery to do the job. If it needs to be made out 
of meat, then we'll make it out of meat (presumably artificial meat). My 
own suspicion is that algorithmic execution alone will be adequate. But who 
knows?

Charles Platt (#17916) articulates well the problems faced by anyone trying 
to create a conscious machine. (By the way, Charles, the Alcor Forum is 
great! Don't be discouraged by a delayed response from the readers. People 
do appreciate this kind of information and your efforts to bring it to us.) 
But I don't think we should give up on understanding consciousness or deny 
its existence just because it is problematic.

There are two reasons why I care about whether a machine is conscious or 
not. First, I think we have ethical responsibilities toward conscious 
machines. For example, we should not leave the robot's "pain test circuit" 
on for long periods of time, especially if the dial is set to 
"Excruciating". And many have asked whether it is ethical to turn a 
conscious machine off. Is that equivalent to murder? But I believe the more 
important question may be whether we should turn it on in the first place. 
I think the responsibilities are similar to those we have in bringing a 
child into the world. Before we build robot factories churning out 
conscious entities by the billions, we should think about what we are 
doing.

The second reason I care about machine consciousness has to do with 
uploading. I personally like the idea of becoming superhuman. But I really 
don't want to be uploaded unless and until I'm reasonably convinced that 
I'll actually "be there". Real conscious me, not just a behavioral 
simulation. Even if it's a behavioral simulation that posts cryonet 
messages about qualia.

Which brings us to the heart of Charles' criticism. How can we know whether 
another person, animal, or machine is conscious if we can't tell from its 
behavior or any other physical manifestation? Well, simply put, we cannot. 
So do we give up? I say no. The situation is no worse than for other 
postulates whose truth we can never know for certain. For example, we 
can't even be sure the physical world really exists; all we have are our 
own conscious impressions of it, which may be false. But in order not to get 
stuck in a skeptical tar pit, we assume that physical reality does exist 
and that it is more or less what it appears to be and just go on from 
there. Similarly, we do not know whether there are other minds or if it is 
just us alone in an otherwise purely mechanical mindless world. But again, 
rather than become mired in pointless solipsism, we assume that other 
people are indeed conscious just like us. This is a reasonable assumption 
since 1) they behave like us, and 2) they have brains very similar to ours. 
The second point is crucial. Whatever the mechanism is that makes us 
conscious, we assume it is also at work in other human brains. Ditto for 
other mammals, but with less certainty, and for the lower animals, with 
much less certainty. So we go ahead and assume that other people are 
conscious and that we can believe them when they tell us what their 
experiences feel like. Then we can correlate these experiences with 
neurological observations. Hopefully, we will arrive at some hypotheses 
about the nature of consciousness. We can then go on to test these and 
puzzle through the results and eventually, after a sufficient number of 
iterations and maybe a scientific revolution or two, come to some 
conclusions we feel comfortable with. Is it absolute knowledge? No. But it 
seems the best we can do under the circumstances. In any case, we shouldn't 
simply presume that machine consciousness will or won't arise on its own. 
These questions demand intensive and difficult scientific inquiry into the 
nature of consciousness itself. And we shouldn't give up just because the 
problem is hard. It's too important.

	-- Dave Shipman
