X-Message-Number: 14989
Date: Wed, 22 Nov 2000 00:43:06 -0700
From: Mike Perry <>
Subject: Consciousness Issues

Thomas Donaldson, #14966, says:

>So far no one has actually produced a proof or a reference to a 
>proof that a system like a brain could be imitated by a computer (+
>of course necessary peripherals). 

Brains are systems of particles interacting according to the laws of quantum
mechanics. Such systems, when acting over finite scales of time and
distance, behave as finite state machines, which are within the powers of
Turing machines to emulate. See, for example, the discussion in Tipler, "Can
a Machine Be Intelligent?" in *The Physics of Immortality*, pp. 20-44, esp.
30-37 where he discusses Bekenstein bounds and ties in his arguments to
Turing machines. So there is one reference, though perhaps not the best.
Deutsch considers this issue too in *The Fabric of Reality* where he notes
that there is a quantum analog of the universal Turing machine. And there is
Seth Lloyd's paper on the universal quantum simulator (*Science*, 23 Aug.
1996: 1073-79). No doubt there are many other such references. 
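To illustrate the point about finite state machines (a sketch of my own, not from any of the references above): any finite-state system can be stepped forward by an ordinary program, which is the sense in which a Turing machine can emulate it. The states and transition table here are purely illustrative.

```python
def run_fsm(transitions, state, inputs):
    """Step a deterministic finite state machine through a sequence of inputs."""
    for symbol in inputs:
        state = transitions[(state, symbol)]  # look up the next state
    return state

# Toy example: a parity checker over a binary input stream.
parity = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}
print(run_fsm(parity, "even", [1, 0, 1, 1]))  # → "odd"
```

The claim about brains is of course much stronger--that over finite scales of time and distance they too have only finitely many distinguishable states--but the emulation step itself is this simple.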

I can't speak as an expert on these matters (including the reduction of quantum
behavior to that of a finite-state machine), but according to my research the
quantum computer is less powerful, in the sense of (sometimes) less efficient,
than what is called the nondeterministic Turing machine. This is
a magical, theoretical device that can spawn copies of itself and its
tape as it executes. The copies are then free to pursue independent,
parallel lines of computation and can make further self-copying copies at
will. So you get tremendous additional computational power, efficiency-wise.
A quantum computer, though weaker, is like this too, able in effect to
generate independent copies of itself (or to act as if it did), and thus is
more powerful, efficiency-wise, than classical computational devices,
including ordinary computers and the Turing machine (Tm). But beyond
efficiency you gain nothing in terms of the actual computations you can do. An
ordinary Tm on a one-dimensional tape can do the same computations as the
full-blooded nondeterministic device, and similarly the quantum computer,
though it may take far longer. So on this basis we would have to conclude that
a Tm could emulate a brain, at least in the sense of an isomorphism, which
means it could talk to us and so forth. This, of course, may have no practical
significance, given the enormous times that would be needed. But it does have
some philosophical significance. It says, in effect, that consciousness and
other brain attributes are not something mystical, nor do they (as far as we
know) rest on as-yet undiscovered physics or still unknown mathematics.
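The deterministic simulation of a nondeterministic device can be sketched concretely (again my own toy illustration, not a real Turing machine): where the nondeterministic machine branches into several successor configurations at each step, a deterministic simulator recovers the same answer by exploring every branch, e.g. breadth-first--the same computations, just potentially far slower.

```python
from collections import deque

def nondet_reaches(start, branch, accept, max_steps):
    """Deterministically simulate a nondeterministic search:
    does any branch reach an accepting configuration within max_steps?"""
    frontier = deque([(start, 0)])  # FIFO queue -> breadth-first exploration
    while frontier:
        config, steps = frontier.popleft()
        if accept(config):
            return True
        if steps < max_steps:
            for nxt in branch(config):  # enumerate all nondeterministic choices
                frontier.append((nxt, steps + 1))
    return False

# Toy problem: nondeterministically "guess" a 3-bit string matching a target.
target = (1, 0, 1)
branch = lambda c: [c + (0,), c + (1,)] if len(c) < 3 else []
print(nondet_reaches((), branch, lambda c: c == target, 3))  # → True
```

The nondeterministic machine would find the target in three "lucky" steps; the deterministic simulation may visit every branch, and in general the slowdown is exponential in the depth of branching.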

>Since our brains do both more and
>less than just calculations, no proof that Turing machines can do
>any arbitrary CALCULATION shows that Turing machines can imitate
>brains. 

To my thinking, you could "imitate" a brain by calculating, even if in some
sense brains are not just calculating. You simply have a computation running
that maintains the necessary correspondence with the quantum states of the
system you are imitating. Is this a reasonable notion of "imitation"? At least
we can imagine that an imitated brain in this sense would be able to direct
the actions of a physical device much as a real brain could, by supplying the
necessary inputs over time, and (again) to communicate as a brain does.
(Of course, our imitation may only be able to run in slowed-down time; that is
an important practical consideration, and I'm not claiming that a Tm could do
what a brain does in real time or be a practical device.)
...
>As for Turing machines, the simplest major problem is that human
>beings and other animals with brains must NECESSARILY do everything
>in real time, in the real universe. The universe does not obligingly
>slow down if you think slowly. The possibility of creating some other
>computer world in which everything goes slowly enough that a single
>slow computer could imitate a brain simply fails to deal with the
>real world. 
>

Again, I'm not denying this.

Pat Clancy, #14969:

>Quantum considerations do not support the brain being a Turing machine; in 
>fact quite the opposite. A Turing machine cannot implement quantum reality;

See my remarks above.
 
>to the extent that quantum effects play a part in the function of the mind
>(and noone knows to what extent that might be), a Turing machine would be
>ruled out.

No, I disagree with this, but it is an interesting question whether quantum
effects are important in the functioning of the brain, in a way that is not
found in the classical computers we now have. Whatever the case, it seems
unlikely that we could not do the same thing a brain is doing in some kind of
artificial device that we can build. My hunch is that there are such devices
that are clearly not meat brains--time will tell.

Next, Robert Ettinger in #s 14956, 14962, critiques the idea of uploading.
An uploader might claim that a computational device must be conscious
because it isomorphically models some other system we already consider
conscious, e.g. a human or animal brain (supposing of course that such
modeling is possible, including internal states as well as output). Normally
we think of such a device executing some sort of a program over time and
interacting with its environment in ways that seem to involve consciousness.
But other "systems" that clearly do not seem conscious could also achieve
the necessary isomorphism. We could, for example, have a big book (the
Turing Tome) that records the atomic configuration of the brain of a human
at closely spaced points in time--this could be isomorphic yet totally
static. What right, Ettinger asks, do we have to exclude this case, but
still include as conscious a robot just because it *seems* to be conscious? 

And the obvious rejoinder (as Lee Corbin most recently noted, though it has
been noted before) is that, again, the robot in its actions constitutes a
process unfolding over *time*--that is what makes it special and appropriate
to consider on a different footing than the static record. Then the
question is: why? Why, in particular, is it okay to ignore differences
in material (protoplasm vs. whatever the robot brain is made of) and other
details of implementation, so long as you have isomorphism, but not
similarly okay to ignore time? What is so special about time? I've tried to
address this before, but since the question keeps coming up, I'll try again,
hoping maybe it will come out clearer. I think there are two reasons to treat
time as a special case.

The first is what I call a frame of reference issue. I am a conscious being
who undergoes experiences over something I call time, and my "time" is
essentially the same as that of other beings I observe and consider
conscious, i.e., others in my frame of reference. Similarly, the robot
behaving over time (again, my time) would be within my frame of reference.
To be within my frame of reference in this sense does not require special
materials or special spatial configurations, but it does require coexistence
in time. Time must be modeled as time, and not as, e.g., page number in a
book. I don't similarly feel that other features of the system must be modeled
in special ways, so long as I could talk to or otherwise interact with the
system in question. (This also means its behavior must depend causally on
what happened at earlier times; that too is clearly a frame of
reference requirement.) If I could interact appropriately, I would feel
intuitively that I was dealing with a conscious system and would grant it
the benefit of the doubt.

If I could not do this, at least in principle, I would not consider the
system to be conscious *in my frame of reference,* though possibly conscious
in some other sense. Thus I can imagine there might be other universes with
systems inaccessible to me, which still might be said to exhibit
consciousness. We might also conjure up a system in our own universe that
would fail the frame of reference test but would isomorphically model
consciousness: a certain pattern here, a certain other pattern over there,
outside the light cone of "here", and so on. Other such possibilities are
the Turing Tome, systems based on digits of pi, etc. Clearly I could not
communicate in any meaningful sense with these, which then are outside my
frame of reference. Whether we should consider them conscious in some sense
or not, I don't think the information-based notion of consciousness is
seriously threatened.

And here is another reason, from my point of view, that time is
fundamentally more important than space or materials and needs to be modeled
pretty much as what it is, rather than as something else like spatial location
or page number. I'm not equally interested in all systems that can be
considered conscious (or at least considered to model consciousness), but
preferentially in those that can, in addition, be considered *immortal*. This
means there has to be an infinite amount of the timelike element, however it
is modeled. In a universe such as ours, you could not achieve that with a
Turing Tome or other such static record--the record would have to be infinite,
an impossibility. Instead, an infinity of conscious states could only occur in
a process unfolding over time.

Mike Perry
