X-Message-Number: 55
From arpa!Xerox.COM!merkle.pa Tue Jan 24 13:28:51 PST 1989
Received: from Salvador.ms by ArpaGateway.ms ; 24 JAN 89 13:29:09 PST
Date: Tue, 24 Jan 89 13:28:51 PST
From: 
Subject: CRYONICS: Matter, Consciousness, and the like
To: <kqb%>
Message-ID: <>
Status: RO

Robert Ettinger recently wrote an article titled 'The Turing Tape and 
Clockwork People'  in 'The Immortalist'  (Vol. 19, No. 7, July 1988).   
Ettinger's conclusion was:  'If even a few of those very bright 
downloaders will realize that work should come before play, maybe 
real immortalism will get some much-needed help.' 

There followed a spirited series of letters.

The next paragraph is a brief plug for two books that  introduce and 
clarify many of the philosophical issues involved.  The letter that 
follows was originally sent to 'The Immortalist' (and also to 
'Cryonics') in the hope of clarifying some of the issues being 
debated so vigorously; I thought it might be of interest to readers 
of the Cryonics mailing list.

There are countless philosophical works discussing almost 
every aspect of consciousness -- two of which I have read and 
enjoyed.  'The Mind's I'  (by Douglas R. Hofstadter and Daniel C. 
Dennett, Bantam Books 1981) is a very entertaining introduction to 
many of the puzzles and issues involved.  It has been highly 
acclaimed by The New York Times Book Review,  the Washington 
Post, and many others.  Kirkus Reviews accurately described it as 
'philosophical fun and games of a very high order'.  The second book, 
'Matter and Consciousness' (by Paul M. Churchland, MIT Press; now 
available in a new, 1988 edition which updates the older 1984 
edition) is an upper-division undergraduate-level introduction to 
the philosophy of mind.  It provides broad and even coverage of 
the many theories and ideas about how the mind and brain interact, 
in a well-written and readable format.

What follows is a series of questions that will hopefully reduce the 
heat and increase the light in future discussions of uploading.

The first question deals broadly with the relationship between the 
laws of physics and the human brain.  It is:

1.)   Are the ultimate laws of physics the same both inside and 
outside the human brain?  That is, is there something 'special' about 
the human brain that makes its behavior fundamentally different 
from the rest of the universe?

This question carefully refers to 'the ultimate laws of physics' rather 
than the current laws.  This avoids tedious digressions about their 
completeness and accuracy and focuses instead on the fundamental 
question -- is there something unique about the human brain that 
makes it forever unpredictable in terms of any  laws of physics?   
While QED (quantum electrodynamics) is a remarkably accurate 
theory that fully accounts for all the known behavior of matter 
under the conditions that hold in the human brain (and a wide 
variety of other circumstances), it is still possible to argue that 
current physical theories are incomplete (a statement that most 
physicists will support) and that a new unified theory might 
somehow shed new light on the behavior of the human brain (a 
remarkably tenuous claim: how the behavior of particles in a high-
energy accelerator would alter our understanding of the basic 
biochemistry that governs the human brain is at best unclear).

This question also completely avoids any reference to consciousness.  
Whether physical law explains consciousness is simply not 
considered.  All that is addressed is whether or not 
physical law explains the observed behavior of the human brain.  
This avoids a second fertile area for misunderstanding and confusion.

A 'no' answer to this question almost completely blocks further 
discussion based on the use of physical law.  Essentially, it is a 
declaration that modern Western science is fundamentally 
inadequate in dealing with the human brain and so makes it difficult 
to draw any further conclusions that will be generally accepted.

It is safe to say that almost all scientists studying consciousness, 
awareness, or neuroscience will answer 'yes' to this question.


The second and more difficult question is:


2.) Is it possible to computationally model the physical behavior of 
the brain without any significant deviation between the 
computational model and the physical reality, given sufficiently 
large computational resources?

Again, we carefully avoid questions of 'consciousness'.   We also don't 
say how much computer power is 'sufficiently large'.  Finally, we 
introduce the tricky idea of a 'significant deviation'.

A computational model of a physical system will fail to precisely 
predict the behavior of that system down to the motion of the last 
electron for two reasons:  quantum mechanics is fundamentally 
random in nature, and any computational model has an inherent 
limit to its precision.  The former implies that we can at best predict 
the probable future course of events, not the actual future course of 
events.  The latter is even worse -- we cannot precisely predict even 
the probable course of future events.   A good example of this second 
point is the weather --  weather prediction more than a week or two 
into the future might well be inherently impossible given any  error 
in the initial conditions or computations.  Any error at all (rounding 
off to a mere million digits of accuracy) will eventually result in 
gross errors between the actual events and the events predicted by 
the computational model.  The model predicts sunshine next Tuesday, 
and we get rain.  This kind of error cannot be avoided.
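
To make this concrete, here is a minimal sketch (in Python, my own 
illustration rather than anything from the original letter) of how a 
tiny rounding difference in a chaotic system grows into a gross, 
obvious deviation -- the same mechanism that defeats long-range 
weather prediction:

# Two runs of the chaotic logistic map whose starting points differ
# by one part in 10**15 -- roughly double-precision rounding error.
x, y = 0.4, 0.4 + 1e-15
for step in range(100):
    x = 3.9 * x * (1.0 - x)   # logistic map in its chaotic regime
    y = 3.9 * y * (1.0 - y)
    if abs(x - y) > 0.1:      # the deviation is now gross and obvious
        print("trajectories visibly diverge at step", step)
        break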

Any computational model of the human brain will almost certainly 
deviate from the behavior of the original -- eventually in some gross 
and detectable fashion.  If I decide that it doesn't matter which of 
two courses of action I follow and allow myself to choose on a whim, 
then it seems plausible that some slight influence might cause a 
computational model of my brain to select the opposite course.  But is 
this difference 'significant'?  Given that our model is highly accurate 
for short periods of time and that any deviations are either random 
or represent the accumulation of slight errors, does it matter that the 
behavior of the model and of the original eventually deviate in some 
gross and obvious fashion?

We can view this another way:  the human brain, as a physical 
system, is already subject to a variety of outside and essentially 
random influences caused by (among other things):  temperature 
fluctuations in the environment; microwaves, light, and other 
electromagnetic radiation;  cosmic rays; neutrinos; gravitational 
forces; last night's dinner; the humidity of the air; thermal noise; etc.   
If the errors in our computational model are smaller than these 
influences, and if in particular they are smaller than random thermal 
fluctuations, do we really care about the difference?  Is it 
'significant'?  The human brain can and does continue to function 
reasonably well in the presence of gross perturbations (the death of 
many neurons, for example) yet  this does not detract from our 
consciousness or life -- I don't die even if tens of thousands of 
neurons do.  In fact, I usually don't even notice the loss.  The rather 
small errors that we are in principle required to tolerate in a 
computational model seem small by contrast.
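
For a sense of scale, the characteristic energy of a thermal 
fluctuation is kT.  A back-of-the-envelope computation (in Python; 
the constants are standard, but the illustration is mine, not the 
author's):

# Thermal noise energy scale at human body temperature.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T_body = 310.0                # body temperature (~37 C) in kelvin
kT = k_B * T_body             # about 4.3e-21 J, or ~0.027 eV
print("thermal energy scale: %.2e J (%.3f eV)" % (kT, kT / 1.602e-19))

Every molecular degree of freedom in the brain is continually jostled 
by energies of this order, so a model whose numerical errors fall 
below this level is lost in noise the brain already tolerates.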

It would seem, in principle, that a computational model of the human 
brain can successfully model all the 'significant' behavior -- where 
we tolerate a small amount of 'insignificant' deviation between the 
model and the original.  This 'insignificant' deviation can be made 
smaller than the deviation caused by random thermal noise (at least 
in principle -- remember we assumed 'sufficient' computational 
power).  We continue to avoid any discussion of 'consciousness' -- we 
are merely arguing that it is possible to build a computational model 
of the brain's behavior that is as accurate as a real brain subjected 
to random particle-level variations of the same magnitude as thermal 
fluctuations.

A 'no' answer implies some basic mechanism in the brain is so 
sensitive that 'computational noise' must inherently substantially 
disrupt it.  This seems very unlikely, given the much greater physical 
noise that we already tolerate.


Finally, we turn to a question about consciousness!

3.)  Given that the answer to both the first and second questions is 
'yes', is such a computational model conscious?

The question is largely unanswerable because we have no adequate 
definition of 'consciousness'.   Even worse, many view consciousness 
as being inherently subjective and therefore any 'objective' 
definition (verifiable by others) is impossible.   We illustrate the 
quandary in the following paragraphs.

First, we imagine that a flesh-and-blood person and their 
computational model are both before us -- and that the 
computational model has been provided with a sufficiently realistic 
body that neither we nor the model know which is which.  We do not 
ask 'can we distinguish between the model and the original' for we 
already know the answer:  no.  Given that we have answered 'yes' to 
both the first and second questions, then it is possible in principle to 
build a computational model that we cannot distinguish from the 
original by any test (assuming we cannot predict thermal noise).  
Therefore, it is necessarily completely futile to conduct any test, ask 
any question, or try in any fashion to 'trick' the computational model 
into revealing its 'true' nature -- we know in advance this can't be 
done.

What, then, can we do?  The subjective experience of the model is, by 
definition, not available for our examination.  The objective data 
shows no significant behavioral deviation between the model and the 
original.  Any definition of 'consciousness' that rests on behavioral 
considerations will necessarily conclude that both the model and the 
original are conscious to the same degree.  Any definition that 
depends solely on subjective experience has already postulated that 
the needed information is unavailable, and therefore that the 
subjective state of both the model and the original is unknowable by 
anyone else.  We must know the definition of consciousness before 
we can answer the question -- and once we define it, the answer is 
either obviously 'yes' or forever unknowable.

I have a very powerful subjective feeling that I'm 'conscious' --  
would a computer model feel the same?  Would anyone (other than 
the model) know (or care) if it didn't?  If it didn't have the same 
feeling of consciousness it wouldn't be able to tell anyone about 
this -- because it was programmed to faithfully imitate an original which 
did  think it was conscious, and so the model would tell anyone who 
asked that it was  conscious.  By subjective standards I have no real 
reason to believe anyone else is conscious -- for I have no first-hand 
experience of your consciousness.  Although you claim to be 
conscious, such claims cannot be accepted as evidence of actual 
consciousness (unless we are then willing to accept the claims of our 
computational model).   Yet I believe that other humans are 
conscious -- is this merely blind faith?

This topic is considered much more extensively in 'Matter and 
Consciousness', particularly in chapter 4, 'The Epistemological 
Problem', which considers both 'The Problem of Other Minds' and 
'The Problem of Self-Consciousness'.

Finally, we ask a question whose answer might actually affect the 
real world!

4.)  Given that the answer to the first, second, and third questions is 
'yes', is it possible to construct such a computational model in 
practice?

Modeling the behavior of every single electron in the human brain 
will take LOTS of computer power.  It might even be impossible to 
build a big enough computer to do this.  This, however, is not an 
answer but simply a statement that a particular method of modeling 
the brain might not work.  An obvious question to ask is whether 
some other  method would  work -- for example, a computational 
model based on the behavior of individual neurons and synapses 
might prove both satisfactory and feasible.  There are roughly 
10**11 neurons, and even more roughly 10**15 synapses.  These are 
large numbers.  However, when we consider that a single cubic 
centimeter can hold well over 10**18 molecular-size gates, then a 
computational model based on the behavior of neurons seems 
plausible.
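
The arithmetic behind that plausibility claim is short (sketched in 
Python; the allowance of 1000 gates per synapse is my own assumption, 
purely for illustration):

# Rough sizing of a neuron-level model of the brain.
neurons = 10**11              # approximate neuron count
synapses = 10**15             # even rougher synapse count
gates_per_cc = 10**18         # molecular-size gates per cubic cm

gates_needed = synapses * 1000   # grant 1000 gates to model each synapse
print("cubic centimeters required:", gates_needed / gates_per_cc)  # -> 1.0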

Before using such a 'simplified' model we must return to the question 
of what is a 'significant difference'.   Clearly, such a model ignores a 
great deal of the chemistry and biology of the human brain -- can it 
still capture those elusive things we call 'consciousness' and 'self'?   If 
such a model walked up to us and struck up a conversation, what 
criteria would we use for deciding if it was conscious?  Even if we 
decide the model is conscious, is it the 'same' person as the original?  
If we use behavioral criteria, could we distinguish between the 
model's behavior and that of the original?  Our model is now based 
on a host of assumptions about the behavior of individual neurons -- 
how they work, how they interact, how they change.   Are these 
assumptions all correct?  If we've made an error, would we be able 
to tell?  If we could tell, would we care?  Would the model  care?

And even if the answers to all these questions were acceptable, 
many more questions would remain.   Do these computer models 
break down a lot?  Does society at large regard them as real people 
with real rights, or as funny computer programs that can be turned 
off when they start acting oddly?  Has everyone else bought 
'Advanced Mark XXIII Quantum Brains', now available at discount 
prices?   Or were the last three people who attempted uploading shot 
and killed for 'crimes against nature'?

Fortunately, the utility of cryonic suspension does not depend on the 
answers to these questions.  It seems highly probable that at least 
one method for reversing cryonic suspension will prove feasible and 
generally acceptable (an excellent candidate is molecular repair via 
nanomachines).  It also seems clear that we have inadequate 
information at the present time to determine the 'best' method, 
taking into account the broad range of technical, philosophical, and 
societal possibilities that confront us.  At the moment, it seems 
prudent to delegate our choice to the best judgement of those 
dedicated individuals who we sincerely hope will still be tending our 
dewars when restoration becomes feasible.

Once we are again able to make our own decisions we will face a 
wide range of choices -- and we will hopefully have both the means 
and the wisdom to make them successfully.  At the very least, we 
will know very much more than we do today.
