X-Message-Number: 8086
Subject: CRYONICS The remainder of the Chinese Room
Date: Wed, 16 Apr 1997 10:45:18 -0400
From: "Perry E. Metzger" <>

> From: 
> Subject: Chinese feeling
> 
> Still another thought experiment, which may help highlight problems with the
> info paradigm:
> 
> Searle's Chinese Room example fails to convince the die-hard info
> people that the room as a whole doesn't understand Chinese.

Largely because Searle's argument is crap. If you read the whole
paper, he goes so far as to argue that a robot that behaved exactly as
you and I do could be built but would not be "conscious", as though
consciousness were a magic fluid that only neurons could have.

> However, let's focus on feeling rather than "understanding," and use
> a routine computer-type simulation, as follows.

Let me take a step back and note what I consider to be the primary
inanity of Searle's argument.

A simulated hurricane, Searle argues, is not a hurricane. However,
what is the distinction between "simulating" adding 2+2 and actually
doing it? In what sense is a "simulation" of a computation different
from an actual computation? In no way that I can see.
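The point can be made concrete with a toy sketch (the instruction set
here is made up, purely for illustration): an interpreter that
"simulates" a little machine adding 2+2 is, in every observable sense,
just adding 2+2.

```python
def run(program):
    """Step through a toy accumulator machine, one instruction at a time.

    This 'simulates' a computer -- yet the arithmetic it performs is
    perfectly real.
    """
    acc = 0
    for op, arg in program:
        if op == "LOAD":   # put a value in the accumulator
            acc = arg
        elif op == "ADD":  # add a value to the accumulator
            acc += arg
    return acc

# "Simulating" the addition of 2 and 2:
result = run([("LOAD", 2), ("ADD", 2)])
print(result)  # 4 -- indistinguishable from "actually" computing 2 + 2
```

There is no test that could tell the "simulated" sum apart from a
"real" one; for computation, the simulation and the thing simulated
coincide.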

> Needless to say, the operator has little or no idea of what his calculations
> signify in ordinary descriptive/predictive terms; and his progress will be
> EXCEEDINGLY slow compared with real-time phenomena. Nevertheless, the info
> people, if they are consistent, will have to claim that this system somehow
> not only understands what is (would be) going on, but has the appropriate
> FEELING or subjectivity--that it actually EXPERIENCES events, with qualia
> essentially like yours and mine. If some new pencil marks simulate the birth
> of a child, for example, the system will feel love (which further pencil
> marks will duly simulate) etc.

And?

> They admit it isn't the operator that has the experience, nor the pencils,
> nor the paper, nor the squiggles on the paper, old or new; but the system as
> a whole does. There isn't any "seat" of feeling, they claim; the feeling is
> just an "emergent" phenomenon distributed over space and time (or perhaps
> over phase space).

And?

People treat this as somehow shocking. Of course, if you argue
otherwise, then you have to argue that neurons are somehow themselves
conscious or aware of what they are doing, or at least that some
subset of them are, at which point, you've merely displaced the
problem. Eventually, you end up arguing that the individual molecules
inside the neurons are somehow conscious and aware of what *they* are
doing, and on to the atoms and subatomic particles.

At some point you have to stop and agree, yes, this is a property of
the whole system, and not of a component.

We are used to this idea in everyday life. Would a car with half of
its components removed, and no ability to move, still function as a
car? It might still look like a car, but without the engine nobody
would claim it was drivable.

However, otherwise sane people have trouble with the idea that your
brain functions AS A WHOLE, and that the whole is needed for
consciousness to exist. These are people who would never search for
the "seat of car-ness" amidst the cylinders and valves, looking for
the magic "car-ness part", and who can easily see that it is the whole
system that moves and carries passengers. They would have no trouble
understanding that a computer functions as a whole, and would never
search for the "computer-ness component" amidst the circuit boards,
struggling to see how the whole thing could be a computer when its
parts are pretty much useless on their own. Yet the same insight about
the brain escapes them.

I mean, it's obvious to most people that "car-ness" or "computer-ness"
are emergent properties of the whole assembled automobile or whole
assembled computer (possibly still functioning to some extent if you
remove an unneeded component like the case on the monitor or the
cigarette lighter, but more or less requiring the whole). Yet these
same people have trouble with the idea that "consciousness" might be a
whole-brain phenomenon: requiring more or less the whole brain,
sometimes functioning even if components you can "live without" are
damaged, but not if too much is damaged.

> But how do they get around the time problems? In the real world, many events
> are simultaneous, not sequential. Furthermore, we have placed no restrictions
> on the operator's work schedule; he may work fast (for him), or slowly, or
> intermittently with lots of R & R breaks. WHEN does the system feel a quale?

Who cares? We aren't even naive enough to believe in simultaneity in
the "real world" any more (Einstein having banished it).

Simulations operate in simulated time. If you simulate ten minutes in
the existence of a hurricane, "when does the hurricane happen"?
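A sketch of the point, using a made-up toy "weather model" (the names
and numbers are illustrative, not from any real simulation): the
simulation keeps its own clock, and the operator's wall-clock pace --
fast, slow, or interrupted -- never appears in the result.

```python
import time

def simulate(steps, dt=60.0):
    """Advance a toy simulation; sim_time is the simulation's OWN clock."""
    sim_time = 0.0
    for _ in range(steps):
        sim_time += dt   # 60 simulated seconds elapse per step
        time.sleep(0)    # the operator may pause here for any real-world
                         # duration; sim_time is unaffected either way
    return sim_time

print(simulate(10))  # 600.0 simulated seconds, however long the run took
```

Asking "when does the simulated hurricane happen?" in wall-clock terms
is asking the wrong clock.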

> What happens during the operator's breaks?  What happens
> (subjectively) during the progress of a calculation or notation?

What happens if I put you in a cryochamber? What happens to a
simulated hurricane if I stop my computer?

> What happens if the operator just stops after a while and never
> completes a sequence of calculations?

What happens if I put you in a cryochamber and never take you out?
What happens to the windows on your computer display if I shut the
computer off?

Really, these are trivial questions.

> True enough, partly similar questions arise in the ordinary course of natural
> events in the brain--but only PARTLY similar. One of the dissimilarities is
> that, in the brain, we DO allow simultaneities. 

You can run a "parallel brain simulation" just as easily as a
non-parallel one, and one of the lessons of automata theory is that
you can simulate any parallel system more slowly with a serial system.

You can choose to simulate a hurricane, an inherently parallel
phenomenon, with a serial or a parallel computer. When do
"simultaneous" events occur in the simulation? Quit looking at the
real world's clock and look at the clock of the simulation instead.
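One way to see this is a standard double-buffering trick (sketched
here with a one-dimensional XOR cellular automaton, chosen only for
brevity): a serial loop that reads exclusively from the old state
buffer produces exactly the same result as a truly parallel,
"simultaneous" update of every cell.

```python
def step(cells):
    """One 'simultaneous' tick of a 1-D XOR automaton on a ring.

    Every new cell is computed from the OLD buffer only, so this serial
    loop is indistinguishable from a parallel update of all cells at once.
    """
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

state = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    state = step(state)   # each call is one tick of *simulated* time
print(state)  # [1, 0, 1, 0, 1, 0, 1]
```

The "simultaneous" events happen at tick 1, tick 2, tick 3 of the
simulation's clock; how long each serial pass takes in the real world
is invisible inside the simulation.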

> Another dissimilarity is that, in the brain, qualia can be causes as well as
> effects. In the simulation (as it would have to be written with current
> knowledge) what is the story?

Why would this be different in the "simulation"?

> In the world, effect flows inevitably from cause, with no time outs
> and no exceptions allowed. In the simulation, the operator can
> pause, or stop,

So what?

> or he might make a mistake

Yeah, and there is a finite probability of one of your neurons
misfiring, or being made to misfire. The simulated brain will "feel"
exactly what you would if the same thing happened.

It seems inane to me that a materialist might go hunting for the "seat
of automotiveness", pardon, the "automotive seat", pardon, the "seat
of consciousness".

Perry
