X-Message-Number: 11534
Date: Sat, 10 Apr 1999 01:13:31 -0700
From: Mike Perry <>
Subject: Re: Chinese Closet, Feeling

Bob Ettinger, #11529, writes

>Instead of the Chinese Room--a bad metaphor--look instead at the Chinese 
>Closet. This is a very small Chinese Room, with nothing inside except a 
>primitive computer using one simple rule and a modest data store: "Answer 
>string Qn with string An." In a very short conversation, it might answer 
>appropriately, and thus give the impression of intelligence; yet clearly, by 
>any ordinary criterion, it understands nothing. Despite a plausible short 
>conversation, it emulates nothing, not even an idiot. From this point of 
>view, I believe, Searle was right.
>
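The one rule Bob describes ("answer string Qn with string An") is just a lookup table. A minimal sketch, in Python, with hypothetical placeholder question/answer pairs:

```python
# Sketch of the "Chinese Closet": one simple rule plus a modest data store.
# The question/answer pairs here are hypothetical placeholders.
closet = {
    "How are you?": "Fine, thank you.",
    "What is your name?": "I am the Chinese Closet.",
}

def answer(question):
    # The one rule: look the question string up; return the stored answer.
    # No understanding is involved -- an unknown question gets no reply.
    return closet.get(question, "")

print(answer("How are you?"))  # -> Fine, thank you.
```

A short conversation drawn from the stored pairs looks appropriate, yet the program plainly understands nothing, which is the point of the example.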
It is my understanding that the Chinese Room (or Closet) is not supposed to
be limited in what it can do, any more than a human is. If you start putting
limitations on it, sure, I'll grant that you can get to a level where it does
not "understand" as we humans do. But when the limitations are lifted, under
appropriate conditions we might well say it understands.

>[Mike Perry:] As for the more general point that Bob makes, basically, that
>feeling is not necessarily captured in an emulation, I would again bring up
>the possibility, which we have no way of disproving, that we are right now
>in an emulation of some sort,
>

>This "possibility" exists only if we concede ahead of time the very point at 
>issue!

I would say the possibility exists as long as we do not conclude that it is
impossible for an emulation to have feeling; i.e., we are not required to
concede the point at issue, only to allow that we haven't disproved it.

> Further, as I have noted at more length previously, there IS in 
>principle the possibility of determining by experiment whether we are 
>simulations. A simulation cannot anticipate new discoveries in science; the 
>program can only include what was known when the program was written, and 
>deductions therefrom.

Depends on the program, wouldn't you say? Why couldn't a programmer (an
advanced nonhuman intelligence) design a program that would incorporate new
discoveries for us to make in the course of our activities? We would then
make discoveries new *to us* at least, and thus, as far as we could tell,
really new. Alternatively, such a program could be halted from the outside
from time to time and carefully modified to add more possible "new"
discoveries for us to make, keeping pace with any "real" new discoveries.
Another point is that we may *eventually* be able to determine that we are
in an emulation (a good simulation), but we certainly haven't done so yet.

> Therefore there is an insuperable natural barrier for a 
>simulation that does not exist for a "real" entity.

Again, this barrier is removed if we allow a "real" entity to have an
ongoing interaction with the emulation.

>I think Mike has overlooked my point that real time correlations may be 
>essential to feeling, and these cannot exist in Turing computers.  Feeling 
>may depend on time-binding and space-binding constructs.
>
>
>[Mike Perry:] I do think, once again, that there is
>no way to resolve this issue by scientific "proof" or "almost-proof" as we
>usually understand it. 
>
>And again I point out that, once we understand the anatomy and physiology of 
>feeling in mammals, this will tell us for sure what is sufficient for 
>feeling, and it may well also give us clues as to what is necessary.
>
I agree with this, but still think there could be unresolvable issues as to
whether a system that seems to have feeling "really" has it or not--though
personally I expect to give the benefit of the doubt in the "unresolvable"
cases.

Mike Perry
