X-Message-Number: 7940
Date:  Tue, 25 Mar 97 01:42:26 
From: Mike Perry <>
Subject: Symbols, meaning, consciousness

[Original Date sent:        Sat, 22 Mar 1997 17:38:25]

Bob Ettinger wrote (#7912):

>We KNOW that ALREADY robots (computer programs) 
>exist that, to a limited extent, can converse like people, 
>and sometimes fool people; and we also know they have 
>no slightest consciousness. It is OBVIOUS that, a little 
>further down the road, there will be programs, even if 
>only similar but larger and faster ones, that could fool 
>most of the people most of the time. What more need be 
>said?

I wouldn't credit computer programs that now converse 
with people with having "very much consciousness." On 
the other hand, if we consider other computer programs, 
e.g. neural nets, it does not seem obvious to me that ALL 
computer programs today have "no slightest consciousness"
(and it's worth noting that Bob is not claiming here that
all programs lack any consciousness). In fact I
might credit some computer programs of today with at least 
a dim level of consciousness. To back this up I'd have to 
define what I mean, etc. which is beyond the scope of this 
message, but maybe my point will be granted for the sake 
of argument. If that *were* granted, then it would be a 
plausible argument that we ought to be able to *increase* 
the level or quality of consciousness, and then the idea of a 
human-equivalent consciousness in a machine does not 
seem so farfetched.

On the other hand, suppose it is held that there is some 
*fundamental* dichotomy at work, that "no machine can 
exhibit the slightest amount of consciousness because 
machines just manipulate symbols." A person upholding 
this view would have to argue that machines, *no matter 
how complex*, must forever be unconscious--assuming, 
again, that they are the symbol-crunching kind, i.e. digital 
devices. Although we don't have a machine (one of our 
own, artificial construction) today that conclusively refutes 
the "no-consciousness" theory, I think the "dichotomy" is 
quite dubious. It sounds suspiciously like the following 
argument: "Atoms are not conscious. Interacting atoms are 
therefore not conscious. Humans are interacting atoms. 
Humans therefore are not conscious." A human, in fact, is a 
system consisting of about 10^28 interacting atoms, a large 
number. That many interacting atoms clearly *can* exhibit 
consciousness. So, I rather suspect, can a sufficiently large 
system that "just manipulates symbols." In fact this is 
explicitly guaranteed if you believe a human is a finite state 
machine, but even if you reject that argument, you still 
have to account for the emergence of consciousness from 
unconscious components.

Bob also says,

>Well, an emulation of your deceased dog might be `as 
>good as' the original, and for that matter a similar puppy 
>might be almost as good--maybe better. So
>what? That says nothing whatever about the question of 
>survival.

To me, the survival of a being through an emulation seems 
straightforward. Any construct that started as an emulation 
of me, right now, might then go its separate way from me, 
but it would still represent a continuer of what I was at the 
start of the emulation. "I"--the pre-emulation version of me,
would survive in the emulation, as well as through the non-
emulation, "natural" me. (Or if only the emulation was left, "I"
would still survive.) 

>Again: The criterion is not what your reaction or intuition 
>is, but what it OUGHT to be, and we do not yet have an 
>adequate basis for reaching a conclusion.

It's a tough issue as to "what it ought to be"--for one thing, 
we can ask, "what ought to be" our criteria for deciding 
"what it ought to be." My posting on identity and survival 
(#7919) attempts to address this question somewhat, 
though it is only a small beginning. 


Let's go on. Thomas Donaldson (#7915) wrote:

>The problem with a simulation of Mike is that it is a 
>symbolic representation of Mike, and has meaning only so 
>far as the symbols used have meaning. Ultimately it is 
>human beings who attach meaning to those symbols. I 
>find it very difficult to believe that any system which does 
>no more than modify symbols can produce awareness IN 
>THE SYSTEM OF SYMBOLS IT IS MODIFYING.   

As my discussion above might suggest, I don't see that 
much difference between atoms and "symbols." An atom to 
me (in a particular energy state, say) seems to be essentially 
a kind of symbol. (To which we might add the chemical 
bonds between atoms, etc., all of which, however, can be 
described in symbols.) Atoms can be modified too (by 
changing the energy levels, forming different chemical 
bonds, etc., all of which could again be described by
modifying symbols in an appropriate data structure). Are we to
"find it very difficult to believe that any system which does 
no more than modify atoms can produce awareness IN THE 
SYSTEM OF ATOMS IT IS MODIFYING"? We have 
awareness. So there would have to be some fundamental 
difference between atoms and "symbols"--I don't see it.
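To make the parallel concrete, here is a minimal sketch of an atom's 
state held as symbols in a data structure, so that "modifying the atom" 
(raising its energy level, forming a bond) is literally modifying 
symbols. The field names and the simplistic rules are my own 
illustrative choices, not a physical model:

```python
# A minimal sketch: an atom's state represented as symbols in a data
# structure. Changing the atom's state means changing those symbols.
# The fields and operations here are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Atom:
    element: str            # symbol naming the element, e.g. "H"
    energy_level: int = 0   # symbol recording its energy state
    bonds: list = field(default_factory=list)  # symbols for its bonds

def excite(atom, levels=1):
    """Modify the symbol recording the atom's energy state."""
    atom.energy_level += levels

def bond(a, b):
    """Record a chemical bond by modifying symbols in both structures."""
    a.bonds.append(b.element)
    b.bonds.append(a.element)

h = Atom("H")
o = Atom("O")
excite(h)       # "modifying the atom" = modifying a symbol
bond(h, o)      # likewise for forming a bond
```

Nothing in the sketch distinguishes "modifying atoms" from "modifying 
symbols"--which is just the point of the argument above.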

I'll also raise the issue of whether symbols only have 
meaning to those who have attached it. Can symbols in-
stead have a kind of "universal meaning" that ought to be 
understandable to any sufficiently advanced intelligent 
being, even if totally alien to the culture that produced 
them? I think in fact that we could create messages that 
ought to be intelligible to any intelligent extraterrestrials 
that happen upon them--attempts to do this have already 
been made (for probes we have sent beyond the solar 
system for instance). Or just think of a simple bit stream, 
broadcast into space, that encoded the prime numbers in a 
straightforward base-2 format, maybe with lots of repeti-
tions to make it easier to follow. Here we would be choos-
ing which patterns of symbols to send, but not attaching the 
meaning they have.
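Such a stream is easy to sketch. The following Python fragment builds 
one along the lines just described--the first few primes in plain base 
2, marked off by a separator and repeated. The "00" separator and the 
repetition count are my own illustrative assumptions, not any actual 
probe protocol:

```python
# A sketch of the bit stream described above: the first few primes,
# each written in plain base 2, joined by a separator and repeated
# to make the pattern easier to follow. The "00" separator and the
# repetition scheme are illustrative assumptions, not a real protocol.

def primes(n):
    """Return the first n primes by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def prime_bitstream(n_primes=5, repetitions=3, separator="00"):
    """Encode each prime in base 2, join with a separator, repeat."""
    block = separator.join(format(p, "b") for p in primes(n_primes))
    return separator.join([block] * repetitions)

print(prime_bitstream())
```

We choose which bits to send, but the primality of the numbers encoded 
is not something we attach--any sufficiently advanced recipient could 
recover it.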

Thomas also says, "...we don't really know that even the 
universe (digital or not!) is finite (think of the physicists
proposing many other universes as a way to reconcile GR 
and quantum theory)." I completely agree that "we don't 
know the universe is finite" and hope it isn't, i.e. I hope 
space and time are unlimited in a reasonable sense. One 
possibility, if our universe does prove finite, is that 
many-worlds will provide other universes, some of which are 
infinite; or, if all must be finite, the totality of universes 
may still prove infinite. Either way, there would be further 
possibilities for immortal survival, if viewed the right way.

Mike Perry

http://www.alcor.org
