X-Message-Number: 12738
Date: Sat, 06 Nov 1999 22:31:02 -0700
From: Mike Perry <>
Subject: "Real" creatures and robot "feeling."

Thomas Donaldson, #12720, writes: 

> If we design
>a robot such that it invents its own program, then again we have a case of
>independent goals, ...

Good; I have no problem with that. And having "independent goals" apparently
characterizes a "real" creature--I'll assume that here. However, a program a
robot could invent could also be invented by a human. So, let's imagine two
scenarios. In one, a certain robot "lives its life" for a period of time,
and a program is developed (or develops) within it, starting, say, from a
much simpler "seed" (since the robot would have to start with something,
much as a baby could be said to start with a "program" put in from the
outside). Next, say there is some human tinkerer somewhere, who knows
nothing about this robot or its program, but obtains a robot of the same
model, with only the seed program, and purposefully designs an "upgrade"
program. This, by an unlikely but not impossible coincidence, just happens to
agree in every detail with the first, robot-invented program. Are we to say
that the first robot is a "real" creature but the second, now identical one
is not?

Bob Ettinger, #12726, says:

>More basically, other recent posters have just assumed, in effect, that the 
>ACT of doing something is the SAME THING as the FEELING that may accompany 
>(or precede or follow) it.

I wouldn't go that far (nor does Bob). The same act could certainly produce
a different feeling, as in eating when you are hungry vs. when you are not.

>Does a robot have a degree of primitive feeling just because it has a 
>goal-seeking design? I can think of no reason to assert such a thing, except 
>the apparent need some people have to defend the "strong AI" position as an 
>article of faith. (The "strong AI" position, as I understand it, is 
>epitomized by the claim that a furnace thermostat "thinks" and "feels" at a 
>primitive level.) 
>
>Look yet again at the robot that seeks an electric outlet that will allow it 
>to recharge itself. Notice that I did NOT say, "in order to recharge itself." 
>The robot knows nothing, feels nothing, and has no purpose in any ordinary 
>sense. It anticipates nothing, remembers nothing (although a more advanced 
>robot might), and in the act of recharging it experiences nothing. To claim 
>that the ACT of being recharged is the same as the FEELING of being recharged 
>is just blowing smoke, abusing language, and totally ignoring major features 
>of our own experience.   
>
I think it would be necessary to look in more detail at what is going on
within the robot, how its "awareness" of its need for a recharge is
manifested, etc. But over a broad range of conditions I can see a parallel
with a natural, biological organism experiencing hunger, and thus it seems
natural to attribute a degree of feeling to the robot too.
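To make the parallel concrete, here is a toy sketch (entirely my own
illustration; the names and numbers correspond to nothing in the robot
under discussion): the robot's whole "awareness" of its need for a
recharge reduces to a threshold comparison and a motion rule, much as a
thermostat reduces to a comparison. Whether anything over and above this
mechanism deserves the word "feeling" is exactly the question at issue.

```python
def step_toward(position, outlet, step=1.0):
    """Move one step toward the outlet -- pure mechanism, no purpose implied."""
    if position < outlet:
        return min(outlet, position + step)
    return max(outlet, position - step)

def seek_outlet(position, outlet, charge, threshold=0.2):
    """Walk toward the outlet while charge is low; return the path and charge.

    The threshold test below is the entire 'hunger' of this toy robot.
    """
    path = [position]
    while charge < threshold and position != outlet:
        position = step_toward(position, outlet)
        path.append(position)
    if position == outlet:
        charge = 1.0  # the ACT of recharging, with no claim about any FEELING
    return path, charge
```

Run with a low charge and the robot walks to the outlet and recharges;
run with an adequate charge and it stays put. The behavior is all there,
which is why the dispute has to be about what, if anything, accompanies it.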

>Finally, again consider time-binding. WHEN do you have an experience? Surely 
>not at a mathematical instant of time, if time is continuous; not at an atom 
>of time, if time is quantized. A single subjective impression apparently must 
>stretch over many milliseconds. It probably encompasses a relatively large 
>region of the brain, so it is also space-binding. 
>
"Having an experience" might best be regarded as a fuzzy concept, in some
appropriate sense. On the other hand, and this is what I think, a great
many, but still finite number, of definite, quantum events involving
discrete state changes in the system that is ourselves, may add up to what
we call an experience.

>Feeling--subjective experience based on qualia--is a distinct physical or 
>biological phenomenon. It cannot be dealt with by philosophy or word games; 
>it can only be understood and dealt with by experiment and theory in the 
>usual manner--despite its unique character and the entanglement with cognition 
>and representation. But "philosophy" can help show the way.
>

I agree with this, but I also think certain basic insights are provided by
things that already are known (or at any rate seem rock-solidly verified).
Mainly, we are all constructs made of "unfeeling" atoms, yet we feel.
Feeling must therefore be an emergent property, not something "innate": a
device can be built that has feeling but can be subdivided into components
that do not. Or perhaps we should say that the components do have some sort
of rudimentary quanta of feeling, which add up to the much more serious
level we call "real" feeling when they are suitably arranged into a larger
system. Something like this point was made a few nights ago on a PBS show on
robots. The reporter was raising a question that has been raised often here
too, whether robots that exhibited certain characteristics of feeling were
"really" experiencing anything or just imitating it unconsciously. And the
answer given by a researcher was along the lines I've just given, i.e. we
are machines too, and to say "yes, we feel," but "no, they don't, not in the
slightest degree," was presumptuous and probably wrong--at least that was my
take on it.

Mike Perry
