X-Message-Number: 12575
From: 
Date: Fri, 15 Oct 1999 17:11:18 EDT
Subject: again--thinking, feeling etc

We have waltzed around these topics many times over many years, and the 
relevance to present-day cryonics is limited, but since new people always 
seem interested I'll touch a couple of bases once more.

There is no agreed definition of intelligence, but surely we should consider 
goal-directed activity and adaptive capability as part of what is necessary, 
if not sufficient. By those criteria, all life has some degree of 
intelligence, as do many of our artifacts.
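
(By way of illustration only--a toy sketch of my own, in Python, not a 
claim about any particular device--even a thermostat-like controller 
shows both marks: it pursues a goal, its setpoint, and adjusts its 
response to what it observes. Nobody would credit it with feeling.)

    class Thermostat:
        """A toy goal-directed, mildly adaptive controller."""
        def __init__(self, target):
            self.target = target   # the goal: a setpoint temperature
            self.gain = 1.0        # response strength, adapted from experience

        def step(self, measured_temp):
            error = self.target - measured_temp
            # Adapt: if the room keeps running far from the goal,
            # respond more strongly next time.
            self.gain += 0.1 * abs(error)
            return max(0.0, self.gain * error)   # heater output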

Feeling--the capacity for subjective experiences--is a completely different 
story. The physical or anatomical/physiological nature of a "quale" or 
feeling or experience is still unknown. It is perhaps the most important 
question--though probably not the most profound--in all of science. 
All questions of *value* rest ultimately on this.

We don't know at what stage in organic evolution feeling arose. We don't know 
whether the simpler organisms have any awareness at all or are just 
"automatons."

We know by direct experience that we have feelings, and since other people 
seem very similar to us, in anatomy and behavior, we are justified in 
assuming they are also aware. When we learn the explicit objective nature of 
feeling, modulated standing waves or whatever, we will be able to verify, by 
various scanning or other procedures, that the internal life of other people 
is of the same nature as ours.

(In my lexicon, only beings with feeling, with subjectivity, can have 
awareness or consciousness. Obviously words are used in different ways by 
different people, and sometimes at different times by the same people. If a 
machine reacts to you in an "appropriate" manner, you might be justified in 
saying it is "aware" of your presence or actions, but that "awareness" would 
not be a subjective experience.)

We assume machines do not have feelings because we have no reason to think 
they do. 

Machines could, of course, be dangerous, and could be "enemies" with or 
without feeling or consciousness. There already exist machines and computer 
programs that are goal-directed and have adaptive capabilities, but which we 
have no reason to believe have any subjective life. In Saberhagen's 
"Berserker" stories the mechs were of this character. 

Asimov invented his "Three Laws of Robotics" to protect humans against 
robots. The First Law: "A robot may not injure a human being or, through 
inaction, allow a human being to come to harm." Nonsense, of course. There 
is--as far as I can see--no way such a "law" could be programmed reliably 
and unambiguously.
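
(To see why, consider what a naive attempt would look like. The sketch 
below is purely hypothetical--my own illustration, in Python, not 
anything Asimov or any robot-builder ever specified--and every predicate 
it depends on is left unimplemented, because nobody knows how to define 
them unambiguously. That is exactly where such a "law" breaks down.)

    # A naive, hypothetical attempt to encode the First Law.  The control
    # structure is trivial; the predicates it needs are not.
    def causes_harm(action, human):
        # Harm of what kind?  Physical injury only?  Economic loss?
        # Distress?  A small harm now that prevents a larger one later?
        raise NotImplementedError("no unambiguous definition of 'harm'")

    def allows_harm_by_inaction(action, human, world):
        # Requires forecasting every consequence of everything the robot
        # does *not* do -- an open-ended prediction no program can
        # complete reliably.
        raise NotImplementedError("cannot be computed reliably")

    def first_law_permits(action, humans, world):
        return not any(causes_harm(action, h) or
                       allows_harm_by_inaction(action, h, world)
                       for h in humans)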

A machine only does what it is programmed to do (as do we, though our 
programming is of a different type), but in the case of an adaptive system 
there is no way the programmer can anticipate everything the machine will 
do--and that is even disregarding simple errors, mechanical failures, or 
accidental input.
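
(Again a purely illustrative sketch of my own, in Python: the programmer 
writes only the rule by which the machine updates itself; the behavior it 
ends up with is determined by the data it happens to meet, which no 
programmer can enumerate in advance.)

    import random

    class AdaptiveChooser:
        """Learns which action pays off best from whatever feedback arrives."""
        def __init__(self, actions, exploration=0.1):
            self.values = {a: 0.0 for a in actions}   # learned estimates
            self.counts = {a: 0 for a in actions}
            self.exploration = exploration

        def choose(self):
            # Mostly exploit the best estimate so far; occasionally explore.
            if random.random() < self.exploration:
                return random.choice(list(self.values))
            return max(self.values, key=self.values.get)

        def learn(self, action, reward):
            # Incremental average of the rewards actually observed.
            self.counts[action] += 1
            n = self.counts[action]
            self.values[action] += (reward - self.values[action]) / n

    # Which action it comes to prefer is fixed by the stream of rewards
    # from its environment, not by anything written above.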

I'm not a meat chauvinist, and if my sister wants to marry a robot I'll send 
a wedding present. I look forward to becoming part "robot" myself one day. 
(Well, I already am, if you want to look at it that way--only a small portion 
of my brain, the "self circuit," is "alive.") But we have so far absolutely 
no evidence whatsoever for feeling or consciousness anywhere in the universe 
outside carbon-based organisms.

Unfortunately, some people are turned off cryonics by this kind of stuff--not 
because they think it is nonsense, but because they are afraid it isn't. They 
fear a future of radical change. On the other hand, some are turned on, and 
relish the chance to see and experience wonders that the lemmings forfeit. 

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
