X-Message-Number: 11691
Date: Fri, 07 May 1999 01:54:17 -0700
From: Mike Perry <>
Subject: Replies mainly to Thomas Donaldson

Thomas Donaldson, #11680:

>This is an answer to Mike Perry:
>You seem to believe that the notion that we are emulations in a computer
>already has some value, even though there is no experimental evidence for
>that idea. I will be stronger here: just what difference does this notion
>have from the belief that there is a God (or gods)? Those who like
>theology are welcome to it, but it does not become less theological if
>we believe in a Programmer rather than a God.

What I think is that, in view of many-worlds and the multiverse, emulations
actually are happening somewhere, but for various reasons are very unlikely
possibilities that we can virtually discount. In other words, we can
consider ourselves to be in an emulation with probability weight epsilon,
epsilon very small but nonzero. The same can be said about the existence of
a Programmer, which each emulation must have (almost always anyway). The
notion of an emulation, though, is useful to make a philosophical point,
since in my view it *is* at least barely possible (i.e. does actually happen
in the multiverse, though rarely). As for a God, that (we have to admit I
think) is at least a logical possibility too, but it does not have the
relevance to philosophical positions I think are important (such as strong
AI). And, moreover, the *absence* of a God *is* relevant because, for
example, I think we must put our trust in science and reason rather than a
putative higher power, and the evidence against the existence of God seems
substantial to me, so I will argue against this possibility rather than for it.

Should I then discount the possibility of an emulation in the same way that
I discount the possibility of God (though I don't really do that
dogmatically either)?  Well, the main reason I do consider the emulation
possibility is to argue the case that consciousness and feeling are emergent
properties that depend on discrete changes within a system, rather than a
continuous "flow" of some sort, or requiring other elements or components,
as some have argued (Bob Ettinger for example). It seems to be an effective
device for that, if one keeps in mind the thought-experimental nature of the
scenario as I imagine it.

>Again you do not believe that a character in a book can be conscious. And
>you do give a reason: such characters are not interacting with anything.

and cannot interact, by their nature as part of a static record.

>Suppose I have a program which contains, first, an emulation of the
>world, and second, one or more emulated people living in that emulation.
>If I am not running the program at time 1, but running it at time 0,
>are these emulated people conscious only when I run the program?

Yes, in my view (I'll overlook possible complications that could arise if
many-worlds is taken into account).

> I'll
>point out that I did not just ask if characters in books were conscious,
>but also asked for an answer to the question of whether or not they
>were ONCE conscious.

To address that question (which somehow slipped my attention): characters in
a book had to be created somehow, and you have to consider this process of
creation. It is possible the novelist based his creation on real people, or
had a dream in which he played the role of one or more of his characters, or
otherwise "lived" the lives he is depicting--certainly these things will
happen. So the possibility that they were once conscious in some fashion is
open.  More generally, we can go to many-worlds, and well, you can see where
that will lead, I think. All histories, etc. But in our world at least we
can assume that fictional characters did not actually walk the earth as
themselves, as we usually understand it.

>Basically, since our emulation of the world cannot be very good, it 
>seems to me that we cannot consider virtual beings in that emulation as

We don't know what computers of the future will be able to do. To me,
though, a "not very good" emulation is no emulation.

>You seem to believe that ANY character in a computer emulation
>(even just the kind in a computer game) must have awareness because
>that character is in some sense active.

I wouldn't state it that strongly. But more or less, a thingy that behaves
as if conscious, and has internal processing of the right sort (which
probably we don't fully understand the requirements for, but will understand
I think) can be said to have awareness *at some level*. It may be a quite
rudimentary level, but nonzero, nonetheless.

> I will say that in such cases
>you do not have an active character, you have a program which gives you
>images and statements.

But isomorphically, you probably have an active character too. Think of the
different ways that entities can be defined mathematically. 

> The same happens no matter how complex your
>program and your virtual characters. What are your boundaries? Why is
>it that a virtual machine in a computer game (ie. a car or an airplane)
>is not conscious, but the characters are?

I won't say it *couldn't* be conscious, but it doesn't have to be,
certainly. The world as a whole isn't conscious, but it supports beings who
are. In principle, a computer could be like this too. But I think the point
you are driving at is that, if "all you have is a program which gives you
images and statements" that could never amount to something with real
awareness, no matter how sophisticated. I see it differently, at least in
the case that the "images and statements" are clearly modeling a being with
recognizable features. You would need to check your isomorphisms.

>Please explain just what characteristics make a virtual object in a 
>computer program aware or not.

I can't give necessary and sufficient conditions, guaranteed valid, but it's
not hard to give a hypothetical example, invoking as usual some future
possibilities. Think of a robot that can see and hear and speak, who
responds like a human, *and* whose internal processing corresponds
isomorphically to the processing in our own brains when we are conscious and
doing things. Such a construct has a strong claim to being conscious in my
view, and even at a human level.

Now, this robot may be capable of motion too, but let's suppose there is a
breakdown in that system, and also it is blinded and muted, but it still has
appropriate connections so you can communicate with it via your keyboard,
and it can answer through your monitor screen. In effect, the robot
personality has now become a "virtual object in a computer," but otherwise
things are as before. So, the robot now responds to you in much the same way
Turing imagined for his imitation game. *However,* in view of what I've
assumed about the robot's internal processing, it isn't simply an arbitrary
device playing the imitation game, but one that we still have reasonable, a
priori grounds to attribute human-level awareness and consciousness to. (So
we don't have to worry over the problem of how much awareness/consciousness
should be associated with successful playing of the imitation game alone.)
Beyond that, it's a matter of deciding how many features you can disable or
otherwise modify and still have a system with *some* level of awareness. 

From #11681:
>If you want to do theology, you are welcome to do it. I have no way to
>argue against theology. Most of your posting in this Cryonet basically
>justifies the notion that we might be virtual beings in a computer
>I will note, however, that you say that you do not believe in this

I think of it as occupying only a very small "probability slice" in the
multiverse, but see my comments above.

> As I've been saying, it's very important that a device with
>awareness interact with the real world; and you suggest that this is
>what you want. Great! I may still disagree with you on just how soon
>and how easily we will be able to make any such device, but we may have
>started to agree on what is IN PRACTICE needed for awareness. 

I think we are in basic agreement, though not perfect agreement. (Life would
be dull if everybody perfectly agreed.)

Bob Ettinger, #11683, had some interesting thoughts with which I largely,
though not entirely, agree; one remark I'll comment on:

> ... neither do I 
>agree with Dr. Perry's apparent assumption that anything goal-directed and 
>adaptable is necessarily in some degree conscious. (Turing Test revisited.) I 
>think consciousness is distinct from other characteristics of life, and 
>possibly a relatively late arrival in evolutionary terms. We won't know for 
>sure until we understand the anatomical/physiological basis of awareness at 
>least in some animals.  

This would seem to argue for the possibility that some creatures that are
goal-directed and adaptable are not conscious (yes?). I wonder if this
includes, for example, such relatively primitive organisms as the sea snails
that have been used in learning experiments. (These critters have neurons
much like our own, as do most other animals beyond the single-cell stage,
and they are clearly goal-directed and adaptable.) If that is so, it does
clash with my view, which is to ascribe some level of awareness to such
creatures. As for the anatomical/physiological basis of awareness, well, we
do understand quite a bit about it, especially in the simpler animals like
the snail (if you grant that they have awareness). "Machinery all the way
down," as far as I can see, and resting on quantum mechanics, in which it
would seem that it is the discrete state changes (sudden jumps) that are
fundamental.

Mike Perry
