X-Message-Number: 11507
From: 
Date: Sun, 4 Apr 1999 11:03:45 EDT
Subject: intelligence, consciousness etc

For Mike Perry and anyone else interested, let me try again, very briefly, to 
indicate why both extreme camps--strong AI people and meat chauvinists--are 
wrong.

I begin by showing why both camps are, in part, correct.

Searle's famous "Chinese Room" was intended to dramatize the fact that the 
ability to carry on an apparently reasonable conversation does not prove 
intelligence. His model was not the best, but he was absolutely right. A 
purely brute-force program of instructions, or automated Chinese Room 
("Answer string Qn with string An") could be designed that would fool most of 
the people most of the time, if anyone wanted to bother investing the time 
and effort. 
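To make this concrete, here is a minimal sketch of such a brute-force room, written in Python; the question/answer strings are purely hypothetical examples of my own. It is nothing but string lookup, with no representation of meaning anywhere:

# A brute-force "Chinese Room" as a lookup table. The entries are hypothetical;
# a serious attempt would need a vast number of them, but the principle is the same.
CANNED_ANSWERS = {
    "How are you?": "Fine, thank you. And you?",
    "What is your favorite color?": "Blue, of course.",
}

def chinese_room(question):
    # "Answer string Qn with string An": match the incoming symbols, emit the
    # stored outgoing symbols. Nothing in here understands either string.
    return CANNED_ANSWERS.get(question, "Could you rephrase that?")

print(chinese_room("How are you?"))   # looks like conversation; knows nothing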

Strong AI people note that, e.g., a computer that directs the operations of 
an oil refinery does not merely manipulate symbols that are empty of meaning. 
It does not just simulate oil refining; it actually causes oil to be refined. 
There is two-way interaction with the environment.

Likewise, the most extreme AI people say that even a simple furnace 
thermostat thinks. Of course, it has only two thoughts--anthropomorphizing, 
either "It's too cold in here" or "It's not too cold in here." Yes, these 
really are thoughts, however primitive--if we agree that thought is any kind 
of goal-directed information processing. 
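A toy sketch of that thermostat (the setpoint is an arbitrary assumption of mine) makes the two "thoughts" explicit:

SETPOINT_F = 68.0   # arbitrary goal temperature

def thermostat(room_temp_f):
    # The device's entire mental life: two "thoughts", each tied to an action.
    if room_temp_f < SETPOINT_F:
        return "It's too cold in here."       # so switch the furnace on
    return "It's not too cold in here."       # so leave the furnace off

print(thermostat(62.0))
print(thermostat(71.0))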

Along with many others, I prefer to go beyond that and require that 
"intelligence" include evolution or learning capacity as well as 
goal-directedness and problem-solving. 

Now let's see in what ways both camps (at least their typical spokesmen) are 
wrong.

Searle says a computer only manipulates symbols with no semantic content, so 
knows nothing and understands nothing, regardless of its usefulness. But the 
most important communications signals in our brains are also just symbols. 
Yet they are correlated with physical conditions or events in the external 
world (or in other parts of our brains or bodies), and through two-way 
interaction or trial-and-error feedback we can exploit these symbols to 
build a meaningful representation of the world. 
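As a rough illustration (a sketch with a made-up two-button environment, not a model of the brain), here is how trial-and-error feedback can give an otherwise empty internal symbol a reliable correlation with the external world:

import random

ACTIONS = ["press_red", "press_green"]   # two internal symbols, empty at first

def environment(action):
    # Hidden rule the agent never sees directly: only the green button pays off.
    return 1.0 if action == "press_green" else 0.0

value = {a: 0.0 for a in ACTIONS}
count = {a: 0 for a in ACTIONS}

for trial in range(200):
    # Mostly exploit the symbol that currently looks best; explore now and then.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    reward = environment(action)              # two-way interaction with the world
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]   # running average

print(value)   # "press_green" ends near 1.0: the symbol now tracks reality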

It is also enlightening here to remember what might happen with attempted 
communication, say by radio or TV, with intelligent entities on a distant 
planet. We receive nothing but symbols--and yet ingenious plans have been 
devised whereby, without any Rosetta Stone, we could learn the aliens' 
language, just by examining the patterns of information. (Or we could teach 
them ours.) I believe this parallel is valid and powerfully convincing.

Therefore ordinary computers do indeed have the potential of becoming 
intelligent, as defined above, and Searle's denial of this is wrong.

But when we turn to consciousness and feeling, it is another matter. Here 
Searle et al. are right and the strong AI people are wrong. This becomes clear 
when we take a hard look at the "hard problem" of consciousness--the 
anatomy/physiology of qualia--roughly, "feelings" or subjective conditions.

Mike Perry has written:

>Actually (and I may as well admit to being a strong-AI person, which I am),
the thought that occurred to me is that, indeed a simulated stomach does not
do the same thing as a real stomach (except relative to a simulated
environment). I'll concede that it isn't really doing digestion! But, on the
other hand, suppose we simulated a human brain. If our simulated brain is
composing poetry or solving a math problem, then clearly the simulation is
*really* doing these things and not just "simulating doing them". If our
simulated brain is experiencing consciousness, is this "real" consciousness
or not? I.e. is consciousness more like digestion or more like problem
solving, or neither? I would vote for it being "real" consciousness in any
case. Subjective experiences are not tied to our physical world in the same
way as chemical processes like digestion.<

I think the last sentence is wrong. I don't want to repeat some previous long 
discussions, but it is reasonably clear to me that feeling is specifically 
physical and not just a pattern of information or the processing of 
information. It probably involves time binding, which is ruled out in Turing 
computers. I.e., the "self circuit" or "subjective circuit" involves 
something like a standing wave in the brain that includes an appreciably 
extended region both in space and time; feelings or experiences are 
modifications of that standing wave.  Without that physical construct and 
REAL TIME correlations, there are no qualia, no consciousness, no LAWKI (Life 
As We Know It). A simulation of that wave would no more constitute feeling 
than a modem transmission of a scan of a photo of a rose would constitute a 
rose. 

In due course the experimentalists will verify or disprove this hypothesis 
(if I can call something so vague an "hypothesis"). This will not solve all 
the "philosophical" problems of criteria of survival--it may even make them 
harder--but it will prick a few bubbles.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
