X-Message-Number: 12733
From: 
Date: Sat, 6 Nov 1999 17:01:05 EST
Subject: goals & gizmos

I'm afraid Thomas Donaldson continues to misjudge the criteria for "goals" 
and "independence," and especially for "feeling."

>The important issue is whether or not the actions prescribed by a program in 
>a robot controlled by a computer have been written out by someone else or 
>arise from the computer totally independently.

I think Perry, Crevier and others have already pointed out that this is NOT 
an important issue. Every system, including each of us, came into existence 
as the result of a chain of events over which we had no control, and our 
control remains extremely limited. If some super-alien had designed and built 
me, instead of my developing in the traditional way, that would in no way 
affect my "independence" or lack of it.  

>For this to happen you need not only a computer but a robot capable of 
>acting in the world, first of all.

Again, there are easy counter-examples. If an oil-refinery computer is hooked 
into the refinery hardware, it really refines oil (causes it to be refined); 
if it is merely hooked into a simulator for test purposes it only deals in 
symbols at both ends. But the computer and its program don't know what is at 
the interface, so the nature of the interface cannot have any bearing on the 
"purpose" or "independence" of the computer/program system, nor on the 
"feeling" in the system or its lack.
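The refinery point can be put in code. Here is a minimal, purely hypothetical Python sketch (the plant model, class names, and numbers are all invented for illustration): the controller logic is written once and runs identically whether its read/write calls reach real hardware or a pure simulation, so nothing inside the program can reflect which one is at the interface.

```python
# Hypothetical illustration: the controller only sees numbers in and
# numbers out, so it cannot distinguish a real plant from a simulator.

class SimulatedRefinery:
    """Stands in for the real plant; the controller can't tell the difference."""
    def __init__(self, temperature=350.0):
        self.temperature = temperature

    def read_temperature(self):
        return self.temperature

    def set_heater(self, level):
        # Crude invented plant model: heat input raises temperature a bit.
        self.temperature += 0.5 * level


def control_step(plant, setpoint=400.0, gain=0.1):
    """One step of a simple proportional controller.

    `plant` could equally be a driver object wired to real hardware;
    the controller's code and "purpose" are the same either way.
    """
    error = setpoint - plant.read_temperature()
    plant.set_heater(gain * error)
    return error


plant = SimulatedRefinery()
errors = [control_step(plant) for _ in range(20)]
# The tracking error shrinks step by step, whether or not any real oil
# was ever heated.
print(abs(errors[-1]) < abs(errors[0]))  # → True
```

The design point is that the interface (real vs. simulated) is chosen entirely outside the controller, which is exactly why it can carry no weight in judging the system's "independence" or "feeling."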

>my "program", if you wish to call it that, came from myself, though it was 
>affected by many other events.

"From myself"? What does that mean? Your initial program (genotype) was 
"imposed" by something prior to yourself, and then you developed by 
interaction with the environment. A computer system can do the same, whether 
the initial program was written by a human, by another computer, or by an 
alien, or just happened as a rare random event.

At another point, Thomas shifts ground a bit and says:

>If we design our robot so that the program is on ROM, then we'd have to do 
>major surgery to change it, and to that degree it approaches independence. 
>And even if it's on ROM, but its program has it coming to us for any 
>changes, then it also fails to be independent.

So now he is speaking of DEGREES of "independence," depending on vulnerability 
or reliance on a support system. Certainly one can choose to speak that way, 
but in that case what distinguishes us from robots? We too are dependent, in 
varying degrees.

>no matter how powerful its processors are, that does not alone provide any 
>goals toward which it will act.

"Power" has little or nothing to do with it, and Thomas fails to be clear on 
"goals." Goal-directed behavior (reasonably construed) is NOT the same as 
feeling, nor necessarily accompanied by feeling.
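The distinction can be made concrete with a toy sketch (purely illustrative; every name here is invented): a trivial program whose overt behavior has a definite tendency toward a target state. An outside observer could reasonably describe the target as a "goal," yet there is no feeling anywhere in the mechanism.

```python
# A toy "goal-seeking" agent: its behavior tends toward a target state,
# which looks purposive from outside, but nothing here feels or wants.

def seek(position, target, step=1):
    """Move one step toward `target`; externally goal-directed, internally mindless."""
    if position < target:
        return position + step
    if position > target:
        return position - step
    return position

pos = 0
trajectory = [pos]
for _ in range(10):
    pos = seek(pos, 7)
    trajectory.append(pos)
print(trajectory)  # → [0, 1, 2, 3, 4, 5, 6, 7, 7, 7, 7]
```

The agent steadily approaches 7 and then holds there — goal-directed behavior, reasonably construed, with no feeling attached.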

>its goals remain symbolic only.

Here is a profound difference between humans and some types of robot: 

The robot can have goals that are "real" in the sense that the program 
results in overt actions with a definite tendency to produce special effects 
as externally observed. Yet the robot knows and feels nothing.

But in humans, the felt internal goals are necessarily symbolic in the first 
instance, in the sense that they consist of electrochemical changes in the 
brain which may or may not tend to cause corresponding changes in the 
environment. Thus we can have feeling and goals without external effects; 
while robots can have external effects, which look like goals, without 
internal feelings or wants.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
