X-Message-Number: 8242
Date: Sun, 25 May 1997 19:40:10 -0400 (EDT)
Subject: misc.

Glancing over some recent Cryonets, a few comments:

1. Perry Metzger (# 8221) says that I claim "you can always 'tell' you are in
a simulation," and implies I have not offered any scientific test of the
hypothesis that one is or is not living in a simulation.

I did NOT say you can always tell, or anything very close to that--although I
have said I think it fairly likely that consciousness can exist only on a
physical organic substrate, in which case it could not exist in a simulation,
nor even in silicon.

I DID offer one possible experimental test of the simulation hypothesis. If
you run a lot of physical experiments and never encounter any
surprises--nothing not implied by known rules--then probably you are in a
simulation, since in the real world the fundamental rules are not fully known
and there will be experimental surprises.
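The arithmetic behind this test can be made concrete. Here is a toy Bayesian reading of it--my own framing, with purely hypothetical priors and surprise rates, not numbers from the original argument: if the real world yields occasional experimental surprises but a simulation built from fully known rules yields none, then a long surprise-free run of experiments shifts the odds toward "simulation."

```python
# Toy Bayesian update for the "no surprises" test.
# All numbers below are illustrative assumptions, not claims.

prior_sim = 0.5            # assumed prior odds of being in a simulation
p_surprise_real = 0.01     # assumed chance per experiment of a surprise in the real world
p_surprise_sim = 0.0       # inside a simulation of fully known rules: no surprises

n = 500                    # experiments run, none of them surprising

# Likelihood of observing zero surprises in n experiments under each hypothesis
like_real = (1 - p_surprise_real) ** n
like_sim = (1 - p_surprise_sim) ** n

# Bayes' rule: posterior probability of the simulation hypothesis
posterior_sim = (prior_sim * like_sim) / (
    prior_sim * like_sim + (1 - prior_sim) * like_real
)

print(round(posterior_sim, 3))  # well above the 0.5 prior
```

A long surprise-free record is far more probable under the simulation hypothesis, so the posterior climbs toward certainty as n grows.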

2. Will Dye (#8225) made a good point about the testability of a
hypothesis--that testability in the future, or in principle, is not always
easy to decide. As a crude example:

A tribe of monkeys lives in a large cavern, with an unreachable and invisible
hole in the high roof. A monkey philosopher asks: is there anything outside
the cavern? Another philosopher says the question is meaningless, since there
is no way to test it.

Later eras and better brains may make many "untestable" things testable.
Whether something is falsifiable IN PRINCIPLE is not always clear. Matters of
"principle" may in due course come to be matters of practice. 

Whether something is falsifiable by any CONCEIVABLE means depends, obviously,
on what you are capable of conceiving. I learned long ago that the world is
not limited by my ability to conceive.

3. On the question of whether one could live out his (normally expected) life
as a simulation:

I have suggested several reasons for skepticism, including the simulated
scientist. One which seems to have aroused no response is the "bug" problem
in simulations and subsimulations:

Most complex programs have bugs, at least at first, and sometimes there are
bugs that do not come to light until after long use. Real programmers write
bugs, and simulated programmers will also. What effect a bug written in a
simulation will have on parent simulations, or on the original program and
computer, I suppose depends on the nature of the bug; but all instructions,
at any level of simulation, ultimately must affect the operation of the
original, physical computer. Thus it seems to me almost inevitable that, in a
cascade of simulations, bugs will cause crashes--at least in some simulations
and their successors, and perhaps also in preceding levels, including the original.
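The mechanism can be sketched in miniature (my own toy illustration, not anything from the original argument): each simulation level executes its child level as ordinary computation on the same underlying hardware, so an uncaught bug at the innermost level unwinds every parent level above it.

```python
# Toy sketch: nested simulation levels as nested function calls.
# A "bug" at the deepest level propagates up and takes down every
# enclosing level, surfacing at the outermost (physical) machine.

def run_level(depth, max_depth):
    """Run one simulation level; the deepest level contains a bug."""
    if depth == max_depth:
        raise RuntimeError(f"bug at simulation level {depth}")
    # ... this level's own simulated physics would run here ...
    run_level(depth + 1, max_depth)  # host the next level down

try:
    run_level(0, 3)
except RuntimeError as e:
    crashed = str(e)  # the fault reached the outermost level

print(crashed)
```

Nothing at an intermediate level can run except as operations of the level beneath it, which is the point of the argument: there is no "firewall" between a sub-simulation's bug and its hosts unless one is deliberately engineered.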

4. Definitions and tests of consciousness:

Thomas Donaldson and many others have suggested ways of experimentally
investigating consciousness, and this is going on apace, although still only
in very early stages.

The brain is not a black box; it can be investigated experimentally--even, in
many cases, without injuring the subject. To claim that consciousness (in
mammals) has no observable consequences or mechanisms is just not true. And
UNTIL we understand the mechanisms of consciousness in mammals, it is
premature to guess whether artifacts could be conscious.

5. Searle's claim is that syntax alone cannot result in intelligence [let
alone consciousness], and therefore no computer can ever be intelligent,
since it has only syntax and not semantics; it merely manipulates symbols
without the slightest understanding of what the symbols represent.

I don't think this claim should be elevated to the status of axiom. I am
inclined to think Mike Perry may be right--that a sufficiently large and
sophisticated artificial language might be uniquely relatable to the real
world. For example, you might start out with a list of numbers corresponding
to the atomic numbers; then pairs of numbers relating atomic numbers and mass
numbers of the most common isotope. You now already have a start on relating
symbols to reality....If the language becomes large and complex enough, it
might serve UNIQUELY to anchor syntax to semantics...There could easily be
obstacles I haven't thought of, perhaps insurmountable ones. But Searle has
gone too far with his axiom.
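The anchoring idea above can be sketched in miniature. In this hypothetical illustration (the table and function names are mine), each symbol is paired with physical constants--atomic number and the mass number of the most common isotope--so that any candidate interpretation of the symbols is constrained by the numbers themselves:

```python
# Hypothetical sketch of anchoring symbols to reality via physical
# constants: (atomic number Z, mass number A of most common isotope).

elements = {
    "H":  (1, 1),    # hydrogen: Z=1, A=1
    "He": (2, 4),    # helium:   Z=2, A=4
    "C":  (6, 12),   # carbon:   Z=6, A=12
    "O":  (8, 16),   # oxygen:   Z=8, A=16
}

def consistent(interpretation):
    """A candidate reading must reproduce every (Z, A) pair."""
    return all(interpretation.get(sym) == za for sym, za in elements.items())

print(consistent(elements))  # the intended reading passes
```

As the table grows--adding isotope pairs, compounds, spectra--fewer and fewer alternative reinterpretations remain viable, which is the sense in which a large enough language might pin syntax to semantics uniquely.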

On the other hand, the claim of Searle and many others, that a system could
act "intelligently" without genuine understanding and without feeling, is
obviously correct.

6. On the general question of criteria of identity and criteria of survival:

It baffles me that so many people are determined to assert or accept their
preferred criteria with no proof. The fact is that so far we have neither
real experiments nor thought experiments to justify any firm decision. In
addition, some of the thought experiments may turn out to be impossible in
principle, for a variety of reasons. 

Those who claim a duplicate or near-duplicate would be you, and survival of a
duplicate would constitute your survival, have courage but no clear evidence,
only an article of faith that "your pattern of information is you." It is
possible that several adults are more like me now than my childhood self is
like my present self; make of that what you will. There are also endless
problems with postulated creation of "duplicates" of you not as you are now,
but as you were many years ago or as you might become many years or centuries
from now.

Some say the duplicate is "over there" and I am "here" and therefore the
duplicate is not me. They are certainly right in the sense that there are two
or more self circuits involved, separate systems with separate (although
similar) feelings. But this observation does not begin to touch the logic or
philosophy of criteria of identity or of survival (not the same). We need at
a minimum to understand the detailed anatomy and physiology of feeling, and
probably the physics of spacetime as well--and maybe still other things we
haven't thought about.

Robert Ettinger
