X-Message-Number: 14857
Date: Sun, 05 Nov 2000 22:17:50 -0700
From: Mike Perry <>
Subject: Turing Tome Is Alive (In Some Sense)

Bob Ettinger, #14842:

"Let me repeat my Turing Tome counterexample, in part, with a slightly 
different emphasis. Imagine a huge book, containing code for a person and his 
lifetime (or a large segment of it, including his environment). Is the book 
alive? Does it have feelings? Is anything happening?

"It must be alive and feeling, if you believe that isomorphism is everything. 
And you can't escape by saying the program must be running in an active 
computer. If isomorphism is good enough for space and for matter, why isn't 
it good enough for time? ..."

This issue has come up before, as hinted, and it does pose a tough challenge
for us uploading advocates, one I didn't entirely meet last time around. I
won't claim to have solved all the problems now either, but I can offer some
further thoughts that, for me at least, bolster confidence that the uploading
idea is basically correct, and that isomorphism, while maybe not
"everything," is enough to carry the day.

In the first place, we might ask whether a record like the one Bob describes
should be considered "alive" (though not necessarily conscious)--so that,
for example, to destroy it would be murder. And we'd have to answer yes, in
the sense that a person is described there, much as a record of a frozen
person, or the frozen remains themselves, would amount to a living person we
could reanimate. This would be true, at least, if the "code" referred to
contains an exact, straightforward description of a person. If it is merely
a procedure for generating such a description, the issue perhaps gets far
stickier. I suppose you could say the number pi is that sort of thing,
because somewhere in its stream of digits there is very likely such a
description. Let's pass over this for now. (I didn't say I had all the
problems solved.)

One issue of great significance, and probably a practical issue within a
relatively short time, is whether a system that *does* function over time in
a manner that seems conscious should be considered "really" conscious or
just faking it. In appropriate circumstances, with time modeled more or less
as actual time, I would not hesitate to call such a system really conscious,
even if it is made of silicon and metal rather than flesh and blood.
Furthermore, such a system, if it is conscious, is clearly so *in my
universe*, or relative to me. If instead all I had was a very long,
completely static description of the system's behavior, running to many
pages in some format, with time modeled, say, by page number, it would then
be reasonable to say the system does not exhibit consciousness relative to
me. But we can still ask whether the system might be conscious in some
sense.

To shed some light on this, let's consider a thought experiment, admittedly
a bit farfetched, but not logically impossible, I think. We have two
parallel universes; one is our own, and the other is just like ours except
that its time flows orthogonally to ours (as opposed, say, to backward or
forward). In both universes the same things are happening, but relative to
either one, time is standing still in the other, so "there is no
consciousness" there. An outside observer, though, reports that since the
universes are so similar, it is not possible to say that consciousness
resides in one but not the other; i.e., both must have consciousness (since
ours does). I could accept an argument like this, so that under some
thinkable circumstances consciousness would be present in a system that is
static relative to my time. But still there would be no consciousness
relative to me, and my consciousness would not be able to interact with this
other consciousness.

In practical terms, though, I doubt that an issue like this will come up for
a very long time, if ever. If we have a sufficiently detailed description of
a person, including thoughts, memories, even behavior over a period of time,
then the information paradigm demands we treat it as a potentially living
person, much as in the case of a well-frozen body. We couldn't just destroy
the information with impunity, even if it couldn't protest. On the other
hand, we shouldn't have to worry about issues like civil rights that pertain
to active processes only (that is, until our person is put in an active
state). But we will have to face the issue eventually (if not already) of
whether some active computer systems have real feelings that ought to be
respected, or whether they can be considered unfeeling automata, no matter
how much they tell us otherwise. Here again I would vote for feeling as the
default assumption. And if we could simulate a human with sufficient
fidelity in a non-meat device, I would consider it a real, feeling person,
not an unfeeling imitation.

Mike Perry
