X-Message-Number: 27581
Date: Sun, 05 Feb 2006 19:12:22 -0500
From: Francois <asimov_orion@videotron.ca>
Subject: Definitions of identity and computer chips

Discussions of the nature of identity have resumed of late, and I have
had some new thoughts on the subject. I think the problem we are facing here
is that we are arguing about two different things which we are confusing
with each other. Therefore, one side's arguments are seen as invalid by the
other side, and vice versa. But the arguments are valid when applied to the
notions they aim to prove.

I think Robert's mathematical definition of identity is probably perfectly
valid, although I do not have the mathematical training to fully understand
it. For the sake of argument, I will simply accept it as such. What,
however, does it actually prove? Well, it proves something I would call
Objective Identity. It proves that indeed, objects A and B cannot be the
same, however perfectly we make B a copy of A. At the very least, we still
have two distinct objects we call A and B, and in no circumstances can A
become B or B become A. That's Objective Identity. There is however a second
kind of identity which I would call Subjective Identity. It is a property of
sentient entities. Humans, and possibly many animals, fall into that
category. Subjective Identity is the internal perception of being a certain
individual and not another. Memories, feelings, desires, personality traits,
the general shape, sounds, smells, and other attributes of one's body all
contribute to the creation of that sense of self. This is really what we
want to preserve when we are talking about prolonging life.
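The distinction can be sketched in a few lines of Python (my own illustration, not part of the original discussion): two objects can be equal in every attribute, which is all Subjective Identity asks for, while still being two distinct objects, which is all Objective Identity denies.

```python
import copy

# A toy stand-in for a person's attributes: memories, traits, and so on.
sam_a = {"name": "Sam", "memories": ["boarded the ship", "entered the transmitter"]}

# A perfect copy, attribute for attribute.
sam_b = copy.deepcopy(sam_a)

print(sam_a == sam_b)  # True  -- identical contents: Subjective Identity preserved
print(sam_a is sam_b)  # False -- two distinct objects: Objective Identity differs
```

The `==` test plays the role of Subjective Identity and the `is` test that of Objective Identity; both answers are correct, because they answer different questions.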

Let's revisit a philosophical problem I believe was submitted on cryonet some
time ago. Individual A is on an out of control spaceship about to burn up in
Earth's atmosphere, and there is nothing he can do about it. Aboard the ship
is a device that can read an object's atomic structure and transmit that
information to a receiving device on the ground. It uses the information to
build a perfect replica of the object out of its own reserves of atoms. In
desperation, individual A enters the transmitting device, which dutifully
performs its intended function. On the ground, the receiving device builds a
perfect replica of individual A; let's call it individual B. The process is
completed before the spaceship is destroyed. Individual B walks out of the
device totally unharmed, while individual A is killed when the spaceship
breaks up in the atmosphere and is totally incinerated. Did individual A
escape the doomed ship or not?

Well, according to Robert's definition of identity, of course not.
Individual A was killed in the crash, and what walks out of the replicating
machine is individual B, a clearly different individual. From the point of
view of Objective Identity, this is trivially obvious and cannot be
contested. However, from the point of view of Subjective Identity,
Individual A did escape. Let's say that individual A called himself Sam.
Individual B will therefore also call himself Sam. He has Sam's memories,
feelings, desires and personality traits. His body is atomically identical
to Sam's body. If the receiving device was at an appropriate location, he
could have walked out, looked up at the sky and witnessed the fiery
destruction of the spaceship, a spaceship he would have remembered having
been on only moments ago. Individual A dies, individual B comes into
existence, but the person who calls himself Sam lives on. Sam's opinion is
the only one that matters, and a person calling itself Sam clearly exists in
Individual B just like it existed in individual A. Sam did escape the
spaceship.

This can be illustrated even more convincingly by one of Robert's own
examples. He evoked the possibility of taking an old 8086 chip and offering
it to be converted into a modern Pentium IV, only to dismantle it and
build a new Pentium IV out of different atoms. The 8086 itself, of
course, would not have survived the procedure, but we can approach the
problem from a very different angle. Suppose we have a sentient program
running on that old 8086. It is not a human brain simulation or any kind of
simulation, it is an actual pure software program, an assemblage of
heuristic algorithms, routines and subroutines, something that behaves very
much like the Doctor program on Star Trek Voyager. It has routines that
receive and interpret input from digital cameras, which allows it to see,
routines that understand and interpret speech, which allows it to understand
people talking to it, routines that can create meaningful sentences in
response, which allows it to hold intelligent conversations with people,
and so on. The global network of interactions between all these routines is what
ultimately gives sentience to the program.

We now tell that program that we will transfer it to another machine, one
that uses a Pentium IV. The program is turned off, saved on the 8086
machine's hard drive, and uploaded to the hard drive of the Pentium IV
machine. The old 8086 machine is then unplugged and thrown in the trash. The
sentient program is now dead. It is then loaded into the Pentium IV machine
and started up again. The program has been brought back to life and as far
as it's concerned, nothing much has happened, except that it now finds
itself running on a much better machine than before. The Objective Identity
of the program is no longer the same, but its Subjective Identity has not
changed.
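The transfer just described is, in essence, ordinary serialization. A minimal sketch in Python (the "program" here is a toy dictionary of state, purely for illustration): the state is saved, the original object is destroyed, and a new, distinct object with identical state is restored.

```python
import pickle

# A toy stand-in for the sentient program's complete state.
state = {"routines": ["vision", "speech", "conversation"],
         "memory": ["ran on the 8086"]}

saved = pickle.dumps(state)    # save the program to the 8086 machine's drive
del state                      # the old machine is unplugged and trashed

restored = pickle.loads(saved)  # load and restart on the Pentium IV machine
restored["memory"].append("now running on the Pentium IV")
print(restored["memory"])       # → ['ran on the 8086', 'now running on the Pentium IV']
```

The restored object is objectively a different one, built from different bytes in a different process, yet from the inside its state is continuous with what was saved, which is exactly the Subjective Identity claim above.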

Francois