X-Message-Number: 21397
Date: Wed, 12 Mar 2003 23:27:52 -0700
From: Mike Perry <>
Subject: Symbols and Qualia

Robert Ettinger writes:

>Thomas' previous point was much stronger--the simple fact that we will almost
>certainly learn the anatomy/physiology of feeling, and finally understand the
>physical basis of qualia.

I too think we will almost certainly learn the physical basis of qualia 
(states or conditions implying consciousness). When we do, I think it 
almost as certain that we will be able to design systems that behave as if 
qualia were present. These new systems will isomorphically simulate the 
internal workings of systems we already accept as having true qualia, 
while actually lacking at least some important element or component of 
those systems. The question will then be whether we should regard the new 
systems as also having true qualia or as just unconscious imitations. It 
does not follow that, just because our new systems are different, we must 
consider them unconscious imitations. Instead, we must ask what would be a 
rational basis for deciding the question one way or the other, if such a 
basis could be found. In particular, if someone were to claim that a 
simulating system is conscious, how would you "prove" him wrong? I (as one 
example of what Ettinger calls an upmorphist) have expressed the thought 
that, in any reasonable sense, such a "proof" will be impossible in 
principle.

>The basic weakness of the upmorphist position is
>that it is a strategy of surrender, of accepting permanent ignorance and
>helplessness.

I don't see it that way; rather, I see it as a sober recognition of a 
certain possibility, one that at this point seems likely to me but, I 
admit, could prove untrue. In any case, it is a legitimate subject of 
rational inquiry. The possibility involves a "computer"--I call it that 
for want of a better term, though it may differ in many respects from 
today's machines. This device, in its internal workings, would crunch bits 
or otherwise operate with what we would call "symbols." Yet it would be 
able, isomorphically, to simulate a system with qualia down to a very deep 
level, and would in all the usual, behavioral respects seem to have 
consciousness and feeling. The possibility I recognize is that there will 
be no good argument that such a system does *not* in fact have the 
consciousness and emotion it seems to exhibit, but is really only an 
unconscious imitation. I don't accept as a good argument the blanket 
assertion that, because it operates only with symbols, it cannot be 
conscious. I would ask for justification: why is it that a system that 
works by processing symbols could not be conscious?
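
To make the notion of isomorphic simulation concrete, here is a minimal 
toy sketch in Python (my own illustration; the names leaky_unit and 
symbolic_step are invented for the example, and nothing here is meant as a 
serious model of a brain). One system updates its state with ordinary 
floating-point arithmetic; a second reproduces the same state trajectory, 
step for step, by manipulating only discrete symbols (exact fractions). 
Outwardly the two are indistinguishable, though their inner workings 
differ:

    from fractions import Fraction

    def leaky_unit(state, inp):
        """One update step of the 'original' system: a leaky
        integrator computed with floating-point arithmetic."""
        return 0.9 * state + inp

    def symbolic_step(state, inp):
        """The same update rule, carried out purely by manipulating
        discrete symbols (exact rationals) -- a different mechanism
        that mirrors the original's trajectory step for step."""
        return Fraction(9, 10) * state + Fraction(inp)

    inputs = [1, 0, 2, 0, 0]
    f_state = 0.0            # state of the floating-point system
    s_state = Fraction(0)    # state of the symbol-processing simulator

    for x in inputs:
        f_state = leaky_unit(f_state, x)
        s_state = symbolic_step(s_state, x)
        # Externally the two agree (up to rounding error), though
        # their internal workings differ.
        print(f"original={f_state:.6f}  symbolic={float(s_state):.6f}")

Of course, whether such step-for-step agreement carried down to a much 
deeper level would bring qualia along with it is exactly the question at 
issue.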

I think I can see one reason many people *feel* that a symbol-processing 
system could not be conscious: we can design very simple systems of this 
sort, and also rather complicated ones, that do not appear conscious even 
though they show some of the expected features. But we can ask: should we 
think of them as totally unconscious, or as just having a low level of 
consciousness? If their level is low but nonzero (and I favor this view), 
it opens the door to systems of the same basic type (symbol-processing), 
but of greater sophistication, having higher levels of consciousness, 
until, say, the human level is finally reached or surpassed.

Mike Perry
