X-Message-Number: 8106
Subject: CRYONICS More tilting at windmills
Date: Fri, 18 Apr 1997 11:59:50 -0400
From: "Perry E. Metzger" <>

> From: 
> 
> The info people claim "emergence" explains the (contended) presence of
> consciousness and feeling in a simulation, e.g. the Chinese Room or
> variations thereof.

No. Throughout the rest of your three messages posted yesterday, you
show a profound misunderstanding of the concept, at least as I mean it.

The point of the automobile and computer analogies is that you have a
large, complicated system, whose operation isn't obvious to the
uninitiated, that works AS A WHOLE to produce a behavior.

There is nothing magical about the way a car engine works, and nothing
magical about how a human brain works.

The "automobileness" of an "automobile" comes from the action of the
large number of parts, working as a whole. The same thing holds with a
computer. The same thing holds with a brain.

Searching for the "automobileness part" is silly because there is
none. Certainly there are many CRITICAL parts. Take out a couple of
spark plugs -- hardly big things -- and the car starts functioning
very badly if at all. Are these the seat of "automobileness"? Are the
valves? Is anything?

The car is a SYSTEM. No individual part of the car is the whole
car. You can't find "automobileness" in any individual part. Somehow,
though, put together, they *become* a car. This isn't mystical -- it's
just obvious. A car is a *complex system*. The "carness" is an
abstract property we, as thinking beings, assign to this collection of
properly assembled parts.  Saying it is a property of the whole and
not of the parts isn't mystical -- it is just a reflection of the fact
that no one part is the car, but that with a critical set of parts
missing, the thing won't function as a car any more, even though those
individual parts are useless on their own and have no "carness" about
them.

Similarly, a "mind" is the result of a massively complex set of
neurons put together in a specific way. There is no "consciousness
circuit" just as there is no "automobileness part" in the car. The
"mind-ness" of a working brain is just like the "automobile-ness" of a
properly put together set of automotive parts.  Saying it is a
property of the whole and not of the parts isn't mystical -- it is
just a reflection of the fact that no one set of parts making up your
brain is the whole mind, but that with a critical set of parts
missing, the thing won't function as a mind any more, even though
those individual parts are useless on their own and have no "mindness"
about them.
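The same point can be made concrete with a toy example of my own (not anything anyone in this thread has built): a NAND gate doesn't "add", but wire four of them into an XOR, two more into an AND, and the pair becomes a half adder. The "adding" lives in the wiring, not in any individual gate. A minimal sketch in Python:

```python
def nand(a, b):
    """A single gate. It does not 'add' anything."""
    return 0 if (a and b) else 1

def xor(a, b):
    # XOR built from four NAND gates.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_gate(a, b):
    # AND built from two NAND gates.
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    # No gate here is the "seat of addition"; adding is a
    # property of the assembled whole.
    return xor(a, b), and_gate(a, b)   # (sum bit, carry bit)

print(half_adder(1, 1))  # (0, 1): binary 1 + 1 = 10
```

Pull any one gate out and the "adding" vanishes, yet you will search the gates in vain for an "addition circuit".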

> Sometimes they go into condescending detail to explain
> how your brain thinks even though it is made of unthinking neurons, and the
> neurons of unthinking molecules, etc. But "emergence" is not a magic wand
> that "explains" everything,

You've totally missed the point of what is meant.

> Now the info folk claim, with COMPLETE vagueness, that "somehow"
> feeling and consciousness emerge in the simulation. They assert this
> while understanding virtually NOTHING about feeling and
> consciousness in animals, and with HUGE gaps in our understanding of
> the laws of physics as well as biology.

Hey, you get to assert that there is somewhere this mystical
"self-circuit" that possesses this magic property that gives the brain
"self consciousness" -- even though you know virtually nothing about
how brains work.

Most of those reading this message are laymen and not technical
specialists in how complex computers work. As it happens, I'm a
specialist in how computer systems work, and understand them from the
physics that makes the smallest semiconductor part work all the way up
to how operating systems and applications programs
function.

As laymen, most people reading this probably won't be able to tell you
exactly why a machine might fail if you pull out a particular part,
the way that I could, but most of them who aren't ignorant mystics can
probably tell you that no individual part constitutes the "seat of
computerness", but that you'd better damn well have them all running
if you are going to experience enough "computerness" to have something
to do your word processing on.

> Concerning "Chinese Feeling," Metzger says that, if the operator makes a
> mistake, that might be equivalent to a neuron misfiring. NOT AT ALL. The
> misfiring of a neuron is only a "mistake" from our point of view; in the
> operation of our universe it is natural and indeed inevitable. In the Chinese
> simulation, however, a mistake by the operator is equivalent to a CHANGE IN
> THE LAWS OF NATURE, or at the very least a break in historical continuity; it
> comprises the introduction of new and inconsistent data. It might even crash
> the whole system.

And? What does this mean? Of what philosophical significance is it?

> Of course, the info folk could answer that a simulation is a simulation only
> while it is a valid simulation, so my question is irrelevant. But that is not
> quite right. Since they claim that simulated events are just as "real" as the
> original, it becomes necessary to explain what events in our world correspond
> to any possible events in the simulation OR IN AN IMPERFECT SIMULATION. 

Is a "simulation" [sic] of adding 2+2 different from "actually" adding
2+2?  If the "simulated" [sic] computation develops a problem, why is
this particularly more interesting than "reality" developing a
problem, like a bullet going into your head, or someone flipping a
switch that sends random electrical signals from a previously
implanted device cascading around your head? Yes, we know delicate
systems can be disrupted easily. So what?
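To push on the 2+2 point with a toy sketch of my own (a hypothetical construction, not anyone's claim in this thread): here addition is carried out by blind symbol shuffling on strings, exactly the sort of thing the clerk in the Chinese Room does, and the answer is the same answer.

```python
def to_symbols(n):
    # Encode a number as a Peano-style string: 2 -> "SS0"
    return "S" * n + "0"

def shuffle_add(a, b):
    # "Simulated" addition: pure symbol manipulation, no
    # arithmetic anywhere -- just string rearrangement.
    return a.replace("0", "") + b

def from_symbols(s):
    # Decode the result back to a number.
    return s.count("S")

result = from_symbols(shuffle_add(to_symbols(2), to_symbols(2)))
print(result)  # 4 -- the same answer "actually" adding 2+2 gives
```

If the clerk botches a shuffle, the computation goes wrong, just as your addition goes wrong if someone jostles your neurons. Nothing philosophically deep follows from either.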

> It seems a bit ironic to me that info folk typically seem to think of
> themselves as materialists, yet believe we are essentially immaterial, being
> at core just patterns of information and processing of information. An info
> person seems to believe that a system that just manipulates symbols in an
> appropriate manner (simulating himself) would BE himself. 

No -- it could no more be me than my twin brother can be me, or than
two identical looking glasses are "the same". Shoot my twin, shatter
one glass, the other remains untouched.

However, if you are asking "would a perfect simulation of my brain,
hooked up to all the motor and sensory nerves in a clone of me which
had its brain removed, be functionally identical to the point where it
would not (without an x-ray) be able to tell the difference itself",
the answer I have to give to that is "yes".

> We are asked to accept that it doesn't matter if the processing is erratic in
> rate or has interruptions; the simulated people will not notice--just as we,
> supposedly, would not notice if we were repeatedly stopped and started by
> some kind of super beam-me-up machine. Nevertheless, with the beam-me-up
> machine, presumably we actually have experiences only in the intervals when
> we are assembled and functioning. When does a simulation have an experience?

This is inane. You accuse the rest of us of believing in an
"immaterial" thing, and yet you obviously are speaking of the brain or
the simulation as if it can have an experience apart from itself.

To illustrate, let's say that I take the two hemispheres of your brain,
cut them apart, separate them by a few feet, and use fiendishly
complicated sensors and stimulators placed on the nerve stubs on both
sides to "reconnect" the halves. Let's ask the silly space-like
equivalent of your time-like question: "where" are you after I do
this? In the left half? In the right half? Hovering mystically in
between? 

Let's take this further, and get out the theoretical "simulation
machine", only we will have our computer simulate only the right half
of the Ettinger brain. We then take the "real" [sic] right half, toss
it aside, and attach all those fiendishly complicated sensors and
stimulators to the output ports of the computer instead of to the left
half of "your" [sic] brain.

Now, "where" is Mr. Ettinger now? Is he "half conscious" since half of
him is a "mere simulation"?

Let's even attach the other half of the biological brain to its own
simulator of the opposite hemisphere. "Where" is Mr. Ettinger? Is he
dead? Is his "self circuit" functioning? If there was only a "self
circuit" in one hemisphere and not the other, is only one of the two
resulting "Ettingers" conscious even though both behave identically to
an outside observer?

If instead of using two hemispheres I broke your brain down into 100
pieces, kept connected but separate, and slowly replaced them,
one by one, with simulators, when does "Ettinger" cease to exist,
having been replaced by a mere "simulation"? Perhaps the "self
circuit" can be located in one magical piece of the brain, and all the
rest replaced with functionally identical materials, and you will
somehow "still be there", but if I were to replace that and leave the
rest biological you somehow wouldn't "be there" and all that would be
in place would be a "mere simulation" even though no one else could
tell the difference?
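The piece-by-piece replacement can itself be sketched as a toy program (my own construction, with trivial stand-ins for "brain fragments"): swap each biological part for a functionally identical "simulator" one at a time, and the behavior of the whole never changes at any step.

```python
def run(parts, signal):
    # The "whole": feed the signal through every fragment in order.
    for part in parts:
        signal = part(signal)
    return signal

# Three "biological" fragments (trivial stand-ins).
biological = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
# Functionally identical "simulators" of each fragment.
simulators = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

parts = list(biological)
before = run(parts, 10)
for i in range(len(parts)):
    parts[i] = simulators[i]          # replace one fragment
    assert run(parts, 10) == before   # an outside observer sees no change

print(run(parts, 10))  # 19, identical to the all-biological system
```

At no step is there a behavioral discontinuity where "the real thing" could be said to vanish, which is exactly the problem for the "self circuit" position.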

If I take your 100 biological brain fragments, carefully
life-supported, and scatter them (connected with radio links) throughout a
large building, "where" is "Ettinger"? Kind of a meaningless
question. "Ettinger" is a label we attach to a functioning whole. The
parts conducting "Ettinger" are all around us, but given that there is
no seat of "Ettingerness", at best we can say "The parts performing
the 'Ettinger' function are at this point contained entirely within
building 2." You can't point to the "Ettingerness" though.

So, when we conduct our simulation of "Ettinger" on paper instead of
in "real time", you ask "when is the entity experiencing?"  Well, why
is this any different from asking: given that it takes a while for
electrical signals to cross the brain we call "Ettinger", how can we
pinpoint the "moment" of experience in biological "Ettinger" any
better than in the paper-and-bored-clerk version of "Ettinger"? I
mean, at what moment do you "experience" seeing a flower? Can I nail
it down to the nanosecond?

At best, you can bound it in a pretty damn huge space of time, even if
there really is such a thing as "experience" -- "the experience took
place over this ten second interval bounded here and here in
time". So, if you need to, you can say "The paper-and-bored-clerk
'Ettinger' experienced seeing a flower between March 12, 1998
and December 18th, 70481".

> If the simulation is on something like a Turing tape, or other serial
> computer, then at a minimum the simulated person cannot have an experience
> until all of the relevant changes in the data store have been made.

How does this differ from the contents of your squishy skull?

> At the moment (in the operator's world, our world) that the operator
> completes his notations, does the simulation then (according to an observer
> in our world) feel the experience?

When do you "feel" an experience? Is there some sort of mystical
"feeling circuit" that goes with the "self circuit", and when we
stimulate it you "feel"?

Reminds me of the elephants holding up the world. The whole world, you
know, is on the back of an elephant. That elephant, of course, needs
to be supported, so it's on the back of a bigger elephant. When does it
end? "It's elephants all the way down".

The "self circuit", as you know, is your seat of "self
consciousness". How does it work? Well, presumably it has a smaller
"self circuit" inside it, see! "It's self circuits all the way
down"...

> This is difficult to envision, for many reasons. For one thing, why
> should it matter whether the last notation is actually put on paper?

Rather than draw the logical conclusion and assume that there isn't a
"moment" when you feel something inside your own squishy head, you
have chosen to maintain your paradigm even at the expense of forcing
yourself into a meaningless question.

Reminds me of the first guy to ask whether the parallel postulate was
correct. Of course, he "knew" that it had to be, so when he set out
trying to construct a proof by contradiction, he kept on deriving more
and more theorems, all of which were happily self-consistent. Rather
than become the father of non-Euclidean geometry, however, he just
threw his hands up and said "look how silly this all looks".

In the same way, Mr. Ettinger "knows" that the homunculus theory is
true -- that is, that there is a Cartesian theater inside his brain in
which his "consciousness" (or should I say "soul"?) watches the
activities of the outside world much as someone watches a film at the
movie house. Therefore, since he "knows" there must be a single
"moment" where you experience something, and since that idea looks
rather silly if you simulate a mind on paper, he concludes "well,
this is so silly sounding, we had better disbelieve that we could have
simulated the mind on paper".

At no point, of course, does he give a good reason why doing the
simulation would be physically impossible.

> Some info folk have tried to ridicule my implied contention that
> every feature of a system must have a "seat" in the system or its
> functions,

Yup.

> Of course a wheel is just a collection of atoms, and its wheelness
> is an emergent property of the system, of the atoms collectively and
> not individually.  The info folk are right in this--nothing profound
> or difficult in the concept.  But the POINT is that wheelness is a
> definite, specific, easily discerned, esily described, and easily
> understood property.

Ah, I see you are a Platonist.

> "Car-ness" is a concept less clear-cut; there is a less clear distinction
> between "cars" and "non-cars."

That's because cars are complicated.

The more complicated things become, the less clear cut they are. Is
this a "chair" or is it a "sofa"? How can I possibly tell? The
definitions are arbitrary.

The more complicated something becomes, the harder it is to define
precisely what its required properties are.

> Nevertheless, we could proceed the same way with "car-ness" as
> with"wheelness" and find a reasonably clear and concise way to
> assign "car-ness" to appropriate parts/functions of the auto.

Please do so and come back to us later.

Or, to be even nastier about it, please tell me how to classify
this large padded piece of furniture. Is it a "sofa", or is it a
"chair"? I'm having some trouble figuring this out because my Platonic
Essence Detector seems to be broken.

> Of course, there is no particular NEED to look for a "seat of
> car-ness;" the concept is handled well enough implicitly. We know
> how cars work; there isn't a problem.

I'll bet you don't know how your computer works. (I do, probably down
to the minutest detail, because I'm a very knowledgeable specialist,
but it is very likely that you know almost nothing about the subject
and couldn't tell me any of the crucial details that would be required
to reproduce the thing.) Somehow, though, even though you don't
understand your computer -- even though you probably couldn't tell me
very much about it -- you haven't found it necessary to assign the
"computerness" to any particular part of it.

Where is the "Seat of Windowing" on your computer, I wonder. I mean,
since your computer is displaying windows, there must be some "Window
Circuit" in it, right?

> With consciousness and feeling, there IS a problem. We do NOT know
> the nature or origin of feeling or its offshoot, consciousness.

We don't know a lot about them, but we know enough to suspect that the
materialist hypothesis makes more sense than its alternatives.

Take a car. Even if you didn't know how it worked, really, you could
conduct some crude experiments. Fire a bullet into a door, and it
still runs. Fire one into the engine compartment, and it starts acting
"funny". You can take small chunks of metal out of the engine and have
the car only act "funny", but if you take large chunks out, it stops
functioning, though it might still "seem" okay from the outside.

We know, for example, that you remain conscious if I put a bullet into
your kidney, but not if I put one into your brain. We know that you
can probably remain pretty much the same person even if we cause
damage to small amounts of brain tissue, but if we damage larger areas
you start acting "funny", and if we take out chunks of your brain you
stop working.

This looks pretty much like every complicated device, really.

> At extremes of info-freakery (sorry), on the other hand, we find
> denial of any need to seek a seat of feeling; feeling and
> consciousness are claimed just to "emerge" from almost any system,
> so there and forget it.

Not from *any* system, just as "carness" won't emerge from any random
collection of metal.

If you put together the right parts the right way, though, you get
something that has the "automotiveness" you seek.

Similarly, if you put together signaling systems the right way, you
can get "IBM Computerness", or, in all likelihood, with sufficient
amounts of properly put together very complicated signaling systems,
"mindness". However, this "mindness" property, although it emerges
from a collection of parts, is no more magic than finding the
"autoness" property in a collection of properly assembled parts.

> However, the info people include some of the brightest and best, and
> we do need to straighten them out

Thank you for your concern for our mental welfare.

> We also need to inspire the experimentalists a bit in their search
> for the seat of feeling, this search being so far mostly without any
> clear understanding that qualia represent a separate and distinct
> problem from those of sensory signal transmission and computation
> and storage.

And here you are, claiming that *we* try to make assumptions without
understanding of the underlying processes.

A friend of mine, many years ago when large mainframe computers still
ruled the earth and mammalian PCs were not yet in vogue, asked the
following question. "If you gave someone a drill and a voltmeter, do
you think they'd be able to figure anything interesting out about a
running DecSystem-20?"

No amount of probing would likely reveal the complex software involved
in things like the virtual memory subsystem, or even that there was
such a thing in the first place.

The human brain is far more complicated than any human built
computation system. Using the Dec 20 analogy again, we are still at
the "hmmm... spinning piece of metal... hmmm... I notice that when the
console lights flash I get voltage across these pins... hmmm... I
notice that when the machine is busy this piece of metal over this
spinning piece of metal flies back and forth a lot more" stage of the
game. We barely know what the grossest features of the system are, if
that. We have a long way to go before we even learn what it is that
allows the system as a whole to function as a conscious entity.

Perry
