X-Message-Number: 11703
Date: Sat, 08 May 1999 14:53:14 -0400
From: Daniel Crevier <>
Subject: consciousness in simulated characters, again.
References: <>

In message #11691, Mike Perry gave the following answer to Tom
Donaldson about what characteristics make a virtual object in a
computer program aware or not.

>I can't give necessary and sufficient conditions, guaranteed valid, but
>it's not hard to give a hypothetical example, invoking as usual some 
>future possibilities. Think of a robot that can see and hear and speak,
>who responds like a human, *and* whose internal processing corresponds
>isomorphically to the processing in our own brains when we are 
>conscious and doing things. Such a construct has a strong claim to 
>being conscious in my view, and even at a human level.  

I think it's possible to be a little more specific. The classical
position in philosophy holds that consciousness implies the ability to
represent and reason about one's own mental states. For example, a
conscious being should be able to explain the reasons for its actions.
A classic example of this in computer science is the program SHRDLU,
written by Terry Winograd at MIT in the early 1970s. It manipulated
simulated geometric objects at the request of a human user, and could
answer questions about its motivations, as the following dialogue
excerpt illustrates. (The program's statements are capitalized.)

Why did you pick up [the green pyramid]?

TO GET RID OF IT.

Why did you do that?

TO CLEAR OFF THE RED CUBE.

Why did you clear off the red cube?

TO PUT IT ON A LARGE GREEN CUBE.

Why did you do that?

TO STACK UP THE RED CUBE, A LARGE RED BLOCK AND 
A LARGE GREEN CUBE.

Why did you do that?

BECAUSE YOU ASKED ME TO.

SHRDLU may thus have been the first conscious computer program, even
though the details of its programming were definitely not homomorphic
to the human brain. We can, however, surmise that it was much less
conscious than a human being, because its internal states, and the
kind of reasoning it could perform about them, were simpler than our
own by many orders of magnitude.
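The mechanism behind such "why" answers is simple in principle. SHRDLU
itself was written in Lisp and Micro-Planner and was far more
elaborate, but a minimal sketch in Python (with invented names, purely
for illustration) shows the general idea: record the goal tree that
produced each action, and answer a "why" question by walking one step
up it.

# Toy sketch of "why"-answering via a recorded goal tree.
# All names are invented for illustration; the real SHRDLU
# was written in Lisp and Micro-Planner.

class Goal:
    def __init__(self, text, parent=None):
        self.text = text      # e.g. "clear off the red cube"
        self.parent = parent  # the goal this one serves, if any

def why(goal):
    # Answer a "why" question by walking one step up the goal
    # tree; a goal with no parent came directly from the user.
    if goal.parent is None:
        return "BECAUSE YOU ASKED ME TO."
    return "TO " + goal.parent.text.upper() + "."

# The goal tree behind the dialogue quoted above:
stacking = Goal("stack up the red cube, a large red block "
                "and a large green cube")
clearing = Goal("clear off the red cube", stacking)
riddance = Goal("get rid of the green pyramid", clearing)

print(why(riddance))  # TO CLEAR OFF THE RED CUBE.
print(why(clearing))  # TO STACK UP THE RED CUBE, ...
print(why(stacking))  # BECAUSE YOU ASKED ME TO.

The point is not the code but the data structure: the program has an
explicit record of its own motivations that it can inspect and report
on.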

By the way, most of the so-called 'intelligent agents' that are all
the rage in AI nowadays are not conscious by this measure. For
example, programs of the Eliza type, which aim at passing a simplified
version of the Turing test (see
http://www.loebner.net/Prizef/loebner-prize.html),
are just bags of tricks with no representation of their internal
states, even though they can at times make halfway sensible
conversation.
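For contrast, here is a minimal Python sketch of an Eliza-style
responder (the patterns are invented for illustration, after
Weizenbaum's general idea). It just rewrites the user's words with
canned rules; there is no goal tree, no history, nothing in it one
could ask "why" about.

import re
import random

# Canned pattern -> reply rules, invented for illustration.
# Note the program stores no goals, no history, no model of
# itself -- nothing it could be asked "why" about.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["Tell me more about feeling {0}."]),
    (r"\bwhy\b", ["Why do you ask?"]),
]

def respond(sentence):
    for pattern, replies in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please go on."  # default when no trick applies

print(respond("I am worried about my computer"))
# e.g. "Why do you say you are worried about my computer?"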

Daniel Crevier
