X-Message-Number: 21460
Date: Mon, 24 Mar 2003 07:48:14 -0500
From: Thomas Donaldson <>
Subject: CryoNet #21401 - #21406

For Mike Perry:

Do not confuse what I will be saying here with what Ettinger says,
though I do think he has a point.

The problem with computer awareness is that computers are more
like books than like living things. Books are symbolic: if you do
not know how to read a book, it remains a stack of paper with
meaningless marks on the pages. Computers, likewise, run programs
whose meaning depends on the background of the person using them.
And I note that books are not self-aware in any but the most
speculative of senses. As purely symbolic entities, programs and
the computers which run them cannot be aware... no matter how
involved a program may be. It is WE who attribute such properties
to them, because they may behave like a creature that is aware.

And given tools like fMRI, it is illegitimate to restrict our
comparison of a computer program with a human being to external
behavior alone. (That may once have seemed reasonable, but it is
reasonable no longer.) The point is not that the internal
activities of a creature must be identical to our own, but that
we should somehow find awareness in those activities, just as we
have begun to find it in human brains.

If you wish to be philosophical, you may argue as much as you
want about whether the awareness we might find in your brain,
using tools that look into living, working brains, really
corresponds with the awareness you feel. But then, when you fall
asleep or act groggy, you're really aware of ... what?

Nor is this an argument against the possibility that we might
build creatures that are aware. It is an argument that we cannot
base such creatures on computers; but then computers are hardly
the only kind of machine in existence. 

            Best wishes and long long life to all,

                      Thomas Donaldson
