X-Message-Number: 32920
From: 
Date: Sun, 10 Oct 2010 05:30:24 -0400 (EDT)
Subject: response to Perry

Mike's message below.
 
>Any attempt to have two copies of a person running in parallel would  
>surely diverge very quickly
 
I don't see the relevance here. I was talking about an original and a  
simulation, not two copies.
 
>A computer when running looks pretty much like any other piece of  
>matter at the subatomic level. So how do you decide it definitely has  
>no consciousness but some other type of device, a meat brain, say,  
>definitely does, if both are doing similar things?
 
We can't definitely decide whether a simulation is conscious until, at 
minimum, we can characterize consciousness in terms of anatomy and physiology. 
But even if the simulation closely mimics the aspects or events in the 
brain that give rise to consciousness, there is still the ontological 
problem--in what sense, and to what degree, can a collection of symbols 
"be" the thing described?
 
Remember, we have described brain structure and function, and then used 
this description to construct another description, the simulation.
 
Robert Ettinger       
 
Message #32918
Date: Sat, 09 Oct 2010 02:56:31 -0700
From: Mike Perry <>
Subject: Simulation and Consciousness
References: <>

At 02:00 2010-10-08,   wrote:
>[...]
>Reason 3. Time intervals in the  computer and in  life.
>
>[...]
>
>Now, assume the original lives on, while his simulation is being run on the
>computer. The simulation "lives" like a film with frames at non-zero
>intervals. The original lives in some fashion not presently
>understood--possibly in a continuous fashion with no gaps, or possibly
>jumping each time to an appreciably different state with nothing in
>between. Even in the latter case, however, it is exceedingly unlikely
>that the intervals between successive states would be the same for the
>original and for the simulation. Hence, it seems to me, the simulation
>cannot be faithful to the original. Again, we can't know yet how
>important the differences may be, but there will surely be differences.
>

Any attempt to have two copies of a person running in parallel would 
surely diverge very quickly. This would also hold if you just had two 
meat copies. In effect you would get two individuals with a common 
past--that's how I view it. Each of the copies would be a legitimate 
continuer of the one original, on more-or-less equal footing. On the 
other hand, if a person could be simulated in a classical computer, it 
would open the possibility that two such computers could run in 
lockstep and thus behave exactly the same. A question can be raised 
whether this sort of thing will be possible for a classical computer 
in any practical sense. (The time requirement may be too great.) I 
think the answer presently is unknown. A quantum computer uses 
unpredictability in an essential way and would not repeat its 
behavior exactly on successive runs.
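[The lockstep claim rests on classical determinism: given the same 
program, the same initial state, and the same inputs, a classical 
computer repeats its behavior exactly on every run. A toy sketch of 
this property (a seeded pseudo-random walk, nothing like an actual 
brain simulation; all names here are illustrative only):

```python
import random

def run_simulation(seed, steps=5):
    """Toy deterministic 'simulation': from the same seed and initial
    state, every run produces the exact same sequence of states."""
    rng = random.Random(seed)  # seeded PRNG -> fully reproducible
    state = 0.0
    history = []
    for _ in range(steps):
        # pseudo-random update, but identical across runs with equal seeds
        state = state * 0.5 + rng.random()
        history.append(round(state, 6))
    return history

# Two "computers" started from identical initial conditions stay in lockstep:
run_a = run_simulation(seed=42)
run_b = run_simulation(seed=42)
print(run_a == run_b)  # True: the two runs never diverge
```

A genuinely unpredictable (e.g. quantum) source of randomness would 
break this reproducibility, which is the contrast drawn above.]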

Robert Ettinger also says:

>My question is not whether a simulation could be "intelligent" but whether
>it could have feeling or life-as-we-know-it. This is not a matter of
>accuracy of simulation, but whether a mere collection of symbols can have
>consciousness. It seems very clear to me that it cannot.

A computer when running looks pretty much like any other piece of 
matter at the subatomic level. So how do you decide it definitely has 
no consciousness but some other type of device, a meat brain, say, 
definitely does, if both are doing similar things? One possibility is 
that unpredictability will turn out to be essential for efficient 
operation. In that case you won't have the same "manipulation of 
symbols" as in present-day computers, even though the process would 
still be "computational" in some sense. Perhaps, then, both the 
uploaders and their opponents will feel somewhat vindicated.

MP


