X-Message-Number: 22284
Date: Sat, 02 Aug 2003 00:25:17 -0700
From: Mike Perry <>
Subject: Brain Simulations

Robert Ettinger, #22275, in responding to Tim Freeman's position favoring 
emotions as computational properties, considers the more general question 
of whether a system that is isomorphic to a person, in a suitable, detailed 
sense, is thereby a person in its own right, with full consciousness and 
feeling. He brings up, as a possible counterexample, a written description 
of the person, considered as a time-evolving phenomenon. As a thought 
experiment we can imagine a complete description at the quantum level, 
including all interactions of particles in the brain and the rest of the 
body, extending over a considerable time, decades or (for us immortalists) 
centuries or millennia. Our description also extends to a large amount of 
surrounding space. But all this is just one big, static record, even though 
time and space, we assume, are accurately modeled. So how can a static 
record have or contain consciousness? My answer is to invoke what I call a 
frame of reference. The static record describes a world--a significant 
portion of a universe--in which events are happening. Among the things 
happening are all the biographical details of the person in question. 
Relative to that world, then, it is reasonable to say that there is indeed 
a conscious person with feeling. But that world is not our world, so we are 
not forced to a conclusion that the described person is conscious as we 
usually understand it. Our frame of reference differs--so a different rule 
applies.
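
To make the notion of a static record concrete, here is a small sketch of my 
own devising (in Python, and not anything from Ettinger's posting): it runs a 
toy cellular-automaton "world" for a few steps and saves every generation. 
The saved list is inert data, yet relative to its own internal time index it 
describes a complete evolving history--a miniature of the frame-of-reference 
point above.

# A toy "world" (elementary cellular automaton, rule 110) and a complete
# static record of its history. The record is inert, but indexed by its
# own internal time it describes events happening.
RULE = 110

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

world = [0] * 15 + [1] + [0] * 15   # initial condition: one live cell
record = [world]                    # the static record begins here
for _ in range(10):
    world = step(world)
    record.append(world)

for row in record:                  # display the frozen history
    print("".join(".#"[c] for c in row))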

But now let's suppose that we have a system that *is* active in our world, 
a robot whose brain is a careful simulation of a human's yet differs in 
some material way, not being made of protoplasm. The exact details of this 
simulation are unimportant and I'll gloss over them, except to note that 
the robot brain behaves isomorphically to a human brain in something 
approaching real time. This robot, then, is clearly part of our world and 
shares our frame of reference. It certainly seems to be conscious and have 
feeling, just like any human. Is it conscious, and does it feel, or is it some zombie that 
is just imitating these attributes? A contention I've long made when this 
subject has come up (as it does periodically) is that there may be no way 
in principle to tell. Either hypothesis might fit all the facts that will 
ever be observable. But the same might be said of the hypothesis that there are 
other persons besides oneself, as against solipsism. True, the anti-solipsist 
can invoke the fact that brains of different individuals are physically 
similar in that they are protoplasmic, which would not apply to the robot. 
Still, to claim the robot must be a zombie strikes me as basically a 
solipsist argument. My natural reaction, if actually confronted with a 
human-seeming robot, would be to grant it the benefit of the doubt instead. 
Unless I were aware of a compelling argument that it could not be a person, 
I would accept it as one. By implication, then, I would be affirming a 
conviction that emotions are indeed computational in nature (since quantum 
mechanics, the substrate level of the simulation, is itself simulable 
computationally). This I would take as a working hypothesis, not a dogma, 
but something I would not reject unless forced by the evidence.
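
As a footnote on that parenthetical claim: that quantum mechanics is simulable 
computationally, at least in principle and however slowly for large systems, 
can be shown in miniature. The Python sketch below (illustrative only, 
assuming nothing beyond the standard numpy and scipy libraries) evolves a 
single two-state quantum system under the Schrodinger equation and prints the 
resulting measurement probabilities:

# Evolve one qubit: psi(t) = exp(-iHt) psi(0), with hbar = 1.
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # Hamiltonian (Pauli-X)
psi0 = np.array([1.0, 0.0], dtype=complex)  # start in state |0>

for t in np.linspace(0.0, np.pi, 5):
    psi = expm(-1j * H * t) @ psi0          # unitary time-evolution operator
    p0, p1 = np.abs(psi) ** 2               # measurement probabilities
    print(f"t={t:.2f}  P(|0>)={p0:.3f}  P(|1>)={p1:.3f}  norm={p0 + p1:.3f}")

The state vector stays normalized throughout, as unitary evolution requires. 
Scaling such a computation up to a brain is of course wildly impractical 
today, which is why the point matters in principle rather than in practice.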

To go now to another posting on brain simulation, Thomas Donaldson, #22276, 
writes:

>It's easy to SAY that we might then try simulating atoms, or
>individual neurons, or etc. Actually doing so would be impossible
>or tremendously hard if we want to make a brain out of them.
>You can say anything you want; the really important question
>(if you don't want to just do philosophy) is whether or not
>you can DO IT.

I think it will be a while before we can simulate or closely approximate a 
human brain by any method whatever; philosophy at least is something we 
*can* do at this point. And the idea of simulating things at the level of 
atoms (or subatomic particles) does have an important implication in 
principle, as Francois's posting, #22268, suggests. It is that we don't 
have to have a system that physically alters itself the way brains do 
(growing new neurons for instance) to simulate a brain--we can do it in 
software. Whether a practical system like this can be developed remains to 
be seen. I for one am optimistic, though clearly we have a long way to go.
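
To illustrate why structural change is no obstacle in software: growing a new 
neuron, in a simulation, amounts to nothing more than enlarging a data 
structure. A minimal hypothetical sketch (my own toy example, not a serious 
brain model):

# Structural change as data-structure mutation: "growing a neuron" just
# means adding a row and a column to a table of connection weights.
import random

class Network:
    def __init__(self, size):
        # each neuron holds a connection weight to every neuron
        self.weights = [[random.gauss(0.0, 1.0) for _ in range(size)]
                        for _ in range(size)]

    def grow_neuron(self):
        # enlarge the table: no hardware changes, only more numbers
        n = len(self.weights)
        for row in self.weights:
            row.append(random.gauss(0.0, 1.0))
        self.weights.append([random.gauss(0.0, 1.0) for _ in range(n + 1)])

net = Network(3)
net.grow_neuron()
print(len(net.weights), "neurons after growth")   # -> 4

The hardware running this never alters itself; only the stored description 
does, which is the whole point of doing it in software.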


Mike Perry
