X-Message-Number: 12683
From: "Scott Badger" <>
References: <>
Subject: still more on feelings and goals
Date: Sun, 31 Oct 1999 10:10:44 -0600

Thomas Donaldson concludes that there are exceptions (in the form of
pathologies) but maintains that, by and large, all feelings are associated
with goals, even if those goals are unconscious.  That's a difficult argument
to counter.

If you want to say that an amoeba moving away from a noxious stimulus or a
robot with a simple program that allows it to seek an electrical outlet is
experiencing an emotion, then we're just using different definitions.
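
To make the robot example concrete, here is a minimal sketch (my own hypothetical illustration, not from any source in this thread) of the kind of simple program I have in mind: a bare stimulus-response rule that sends the robot toward an outlet when its battery runs low. Nothing here looks like an emotion unless you define emotion that broadly.

```python
# Hypothetical sketch: a robot's "seek an electrical outlet" behavior as a
# plain stimulus-response rule. No internal feeling state is involved.

def choose_action(battery_level, threshold=0.2):
    """Pick an action based solely on the sensed battery level (0.0-1.0)."""
    if battery_level < threshold:
        return "seek_outlet"    # drive toward the nearest charger
    return "continue_task"      # otherwise, carry on with the current task

print(choose_action(0.1))  # low battery -> seek_outlet
print(choose_action(0.9))  # plenty of charge -> continue_task
```

Whether you call the low-battery branch "hunger" or just a threshold test is exactly the definitional question at issue.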

For another perspective on this issue, I refer you to a recent review of
Rosalind Picard's Affective Computing (1997, MIT Press) at about.com.

"A computer," she says, "can be said to have emotion if it has the following
five components that are present in healthy human emotional systems:" [my
interpretation is in square brackets]

1. System has behavior that appears to arise from emotions. [i.e., we will
say "It must be feeling happy/sad/etc. because it just did X," where X
cannot be explained on the basis of simple rationality.]

2. System has fast "primary" emotional responses to certain inputs. [i.e.,
it will duck if it suddenly senses an object hurtling toward it, without
first analyzing the object.]

3. System can cognitively generate emotions . . . [i.e., it can assess a
situation as happy/sad/etc., and then put itself in a state of feeling.]

4. System can have an emotional experience . . . [i.e., it must be able to
detect the presence of emotion in a situation, give the emotion a name, and
then feel the emotion itself.]

5. The system's emotions interact with other processes that imitate human
cognitive and physical functions. [i.e., its emotion-processing circuits must
be connected to and interact with its sensors and its memory banks, just as
our emotions are linked to our memories and sensory perceptions.]
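
The five components read like an architecture checklist, so here is a rough skeleton (entirely my own hypothetical construction; the method names are illustrative, not Picard's) showing how components 2 through 5 might map onto parts of a program. Component 1 is different in kind: it is a judgment an outside observer makes about the system's behavior, not something you code in directly.

```python
# Hypothetical skeleton mapping Picard's components 2-5 to stub methods.
# All names are illustrative assumptions, not from Affective Computing.

class AffectiveAgent:
    def __init__(self):
        self.state = "neutral"   # current named emotional state (components 3 and 4)
        self.memory = []         # memory store linked to emotion (component 5)

    def reflex(self, stimulus):
        # Component 2: fast "primary" response with no prior analysis.
        if stimulus == "incoming_object":
            return "duck"
        return None

    def appraise(self, situation):
        # Component 3: cognitively generate an emotion by assessing a situation.
        self.state = "happy" if situation == "good_news" else "sad"
        return self.state

    def experience(self):
        # Component 4: detect and name its own current emotion.
        return f"I feel {self.state}"

    def remember(self, event):
        # Component 5: emotion interacts with memory -- events are stored
        # tagged with the emotional state in effect at the time.
        self.memory.append((event, self.state))

agent = AffectiveAgent()
print(agent.reflex("incoming_object"))   # ducks before analyzing the object
agent.appraise("good_news")
print(agent.experience())                # names the emotion it generated
```

Whether stubs like these would count as "having" the components, or merely simulating them, is of course the deeper question Picard's checklist raises.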

Noting that children acquire and learn to control emotions over time and
through social interaction, and that we will therefore likely need to give
machines the same innate tools to acquire and learn to control their own
emotions, she appears to recognize implicitly that emotion will emerge
given the right conditions; that we do not need to program in every smile
and every teardrop.

She acknowledges that if a designer were to let a computer evolve its own
emotions, non-human emotions could emerge that we would be unable to
recognize. In the same vein (though this is a different point), because a
machine's physiology differs from a human's, it is unlikely that a machine
could ever "feel" emotions the way we do. It has no gut, for instance, and is
therefore unlikely to experience the feeling of being hit in the stomach on
receipt of really bad news.

Complete article at:


... Dr. Doug Lenat's Cyc project aims to imbue an AI system (Cyc) with
rational, knowledge-based, intellectual powers, but no emotion. In a chapter
he contributed to Hal's Legacy, Dr. Lenat wrote: "[E]motions . . . are not
useful for integrating information, making decisions . . . ."

Excerpts from Hal's Legacy available at:



Back to me...

Picard argues that emotions are an essential element of intelligence and
that truly intelligent robots will require them as well.  She raises some
interesting points about emotional computers.  Not only can we expect them
to better understand (be in touch with) their emotions, we can also expect
them to be in better control of them, and this will enhance their
decision-making systems relative to ours.  In addition, we can expect them to
be better at identifying our emotional states, which could make it relatively
easy for them to manipulate us.  She suggests that it may be impossible to
program robots to have only positive emotions toward humans, since emotions,
like consciousness, may be emergent phenomena.

In response to Kennita's question regarding relevance, I would suggest that
it is unlikely that we will be able to live indefinitely in our current
biological form.  Eventually, we're going to need a more durable system, and
it's important to know whether we're going to want or need to take our
emotional systems along with us, since they seem to be more tied to our
bodies than our cognitive systems are.  Also, even if AI isn't developed by
the time we deanimate, I think we can expect AIs to be around when we are
reanimated.  Aren't some of the opinion that successful reanimation is a
problem that AI is likely to solve before humans do?

Best regards,

Scott Badger

Rate This Message: http://www.cryonet.org/cgi-bin/rate.cgi?msg=12683