X-Message-Number: 11762
Date: Sun, 16 May 1999 01:25:30 -0700
From: Mike Perry <>
Subject: Consciousness issues, freezing damage.

Bob Ettinger writes,

>Another reminder: If a computer can be conscious, so can a book. If 
>isomorphism is everything--and this is indeed the real, basic premise of
>Mr.  Crevier's school of thought--then the principle of isomorphism should
>apply  to time as well as space and matter. If nothing is essential except
>eventual  generation of the right sets of numbers, then the pages of a
>book can  correspond to the successive states of a Turing computer, and
>therefore to a  living person. I think this is a valid
>reductio-ad-absurdum.

We have had discussions on this recently, and I may have created a
misleading impression of what I think. Having thought about it more, my
views may have shifted slightly, but they are still basically the same, and
I'll try to clarify them. No, I don't agree that "if a computer can be
conscious, so can a book." Not every isomorphism is valid. (I don't think
this is a "basic premise of Mr. Crevier's school of thought" either.) I'll
also drop any talk of "relative consciousness" and the like--I think that is
too confusing. I'll just say that no, I don't think a book is conscious as
we normally understand consciousness.

But let's look at the isomorphism issue. If "not every isomorphism is
valid," we are left wondering which ones are and which aren't. By valid I
mean, basically, that if a certain system is conscious, then an isomorphic
system is (guaranteed) conscious too. If I read it right, strong AI
advocates think of the brain as a type of computational device that could be
described as a finite state machine. (This is how I see it anyway, though it
is a *very* complex device, regarded in this way.) In principle you can
decide whether two finite state machines are equivalent by matching (what
you hope are) corresponding states and checking that the two behave
identically. At the quantum level, it is estimated that the human brain
undergoes up to about 4x10^53 state changes a second, so such a check isn't
practical, only possible "in principle." (Maybe you wouldn't have to go down
to the quantum level, however.) But the kind of isomorphism that would apply
to two running systems (like a brain and a robot's onboard computer) would
be one in which time is modeled as time, and not, for example, as a page
number in a book. Thus real activity must map to real activity, not just to
a record that never budges. I have never heard of a strong AI advocate who
would not concede this, though I (and probably the others too) am generous
in what I count as "activity"--flowing electrons are a form of activity,
just like walking and running.
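
(For concreteness, here is a rough sketch, in Python, of the kind of
state-matching check I have in mind for two small, fully specified machines.
It is purely illustrative--the machine encoding and the function name are my
own invention, and of course nothing like this is feasible for anything
brain-sized.)

    from collections import deque

    def equivalent(m1, m2, start1, start2, alphabet):
        # m1, m2: dicts mapping state -> (output, {symbol: next_state}).
        # The machines behave equivalently if every input sequence, fed to
        # both starting from start1 and start2, produces the same outputs.
        seen = set()
        queue = deque([(start1, start2)])
        while queue:
            s1, s2 = queue.popleft()
            if (s1, s2) in seen:
                continue
            seen.add((s1, s2))
            out1, trans1 = m1[s1]
            out2, trans2 = m2[s2]
            if out1 != out2:
                return False            # observable behavior differs
            for sym in alphabet:        # follow transitions in lockstep
                queue.append((trans1[sym], trans2[sym]))
        return True

    # Example: a two-state blinker and a relabeled copy are equivalent.
    blink_a = {0: ('off', {'tick': 1}), 1: ('on', {'tick': 0})}
    blink_b = {'x': ('off', {'tick': 'y'}), 'y': ('on', {'tick': 'x'})}
    print(equivalent(blink_a, blink_b, 0, 'x', ['tick']))   # True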

So, why this "discriminatory" policy on isomorphisms, while still allowing a
great many? For me, it comes down to simple intuition. If, in the future, I
were to meet a robot that looked human, and whose brain, while made not of
protoplasm but of silicon, say, nevertheless had structures in its hardware
or software that functioned exactly like all the essential components of a
human brain, and if it behaved indistinguishably from a human, I would grant
it the benefit of the doubt and concede that it probably had genuine
emotions. I'm not saying these are the most general conditions under which I
would think of the robot as human, either; but on the other hand, a pile of
notes or a book is not something I could converse with, nor could it
initiate its own actions, etc. So once again I would not view such an entity
as conscious or human.

Basically I am lenient about what I would think of as conscious (again
limited to active systems), to the point that I consider how a system's
components function more important than what sort of matter it is made of.
Someone who disagrees may feel that a construct not made of flesh could
never be conscious, even if it behaved as if conscious *and* its internal
activity matched up isomorphically. But my reaction is that, given a
suitably designed robot, there would (probably) *never* be any way, even in
principle, to decide whether it "really" had feelings or was just doing an
unconscious imitation. In a case like this, either position you take would
"fit the facts." I am not a solipsist (though solipsism also "fits the
facts"), nor would I be a biological solipsist here.

On the freezing damage issue, I am in basic agreement with Bob. I think the
damage we've seen is cause for worry, but I see grounds for optimism too.
I'm all for research to improve our techniques, yet also strongly in favor
of freezing now, if faced with death. It's still the best choice you've got.

Brook Norton writes,

>This person would appear aware
>from the outside... would hold a normal conversation... would do EXACTLY
>what the original person would have done.  But its a zombie.  A closer
>look with x-ray would show that the brain was gone... a chip in its
>place... no awareness being generated.  
>
How do we know a chip can't generate awareness? I'll grant that a chip that
simulates an oscillating rod doesn't physically oscillate itself, but you
haven't shown that your implied analogy is valid.

Mike Perry
