X-Message-Number: 8296
From: "Peter C. McCluskey" <>
Date: Sun, 8 Jun 1997 20:50:08 -0700
Subject: subsim speeds, bugs, etc.
References: <>

I fell way behind in reading this list, so I'm very late in replying,
but Mr. Ettinger's fallacies are such that a late reply seems better
than none.

>Message #8242
>From: 
>Date: Sun, 25 May 1997 19:40:10 -0400 (EDT)

>I DID offer one possible experimental test of the simulation hypothesis. If
>you run a lot of physical experiments and never encounter any
>surprises--nothing not implied by known rules--then probably you are in a
>simulation, since in the real world the fundamental rules are not fully known
>and there will be experimental surprises.

 A belief you apparently acquired by observing a world you assume to
be nonsimulated. I will assume this is circular reasoning unless you
can produce a theory of how many surprises to expect that is independent
of what we observe in this (possibly simulated) world.

>I have suggested several reasons for skepticism, including the simulated
>scientist. One which seems to have aroused no response is the "bug" problem
>in simulations and subsimulations:
>
>Most complex programs have bugs, at least at first, and sometimes there are
>bugs that do not come to light until after long use. Real programmers write
>bugs, and simulated programmers will also. What effect a bug written in a
>simulation will have on parent simulations, or on the original program and
>computer, I suppose depends on the nature of the bug; but all instructions,
>at any level of simulation, ultimately must affect the operation of the
>original, physical computer. Thus it seems to me almost inevitable that, in a
>cascade of simulations, bugs will cause crashes--at least in some simulations
>and their successors, and perhaps also in preceding levels including the
>original.

 It is fairly easy to write a simulation such that no behavior of the
simulated entities can crash it: keep the rules simple, program in a few
laws such as quantum mechanics, start with a big bang, and the probability
of the inhabitants observing a crash between year 10,000,000,000 and year
10,000,000,050 is effectively zero.
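
 To make this concrete, here is a minimal sketch in Python (a toy
cellular automaton of my own choosing, not anything Mr. Ettinger
proposed; the point is only that an update rule which is a total
function of the state cannot be crashed by anything the simulated
patterns do):

    def step(grid):
        # One tick of Conway's Life on a toroidal grid of 0/1 cells.
        # The rule is defined for every possible grid state, so no
        # configuration the patterns reach can raise an error here.
        rows, cols = len(grid), len(grid[0])
        def live_neighbors(r, c):
            return sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
        return [[1 if live_neighbors(r, c) == 3
                   or (grid[r][c] and live_neighbors(r, c) == 2) else 0
                 for c in range(cols)]
                for r in range(rows)]

    grid = [[0] * 8 for _ in range(8)]
    grid[1][2] = grid[2][3] = grid[3][1] = grid[3][2] = grid[3][3] = 1
    for _ in range(100):
        grid = step(grid)   # no starting pattern can make this crash
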
 Even if there is a bug that causes simulation level X to crash, so what?
Start it over from scratch or from a backed-up checkpoint; with enough
trial and error you'll eventually be able to reach any state you want.
The typical entity in such a simulation will observe no evidence of crashes.
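
 And a sketch of the restart-from-checkpoint idea (step, crashed and
perturb are hypothetical placeholders, and the checkpoint interval is
an arbitrary number I picked):

    import copy

    def run(initial, step, crashed, perturb, checkpoint_every=1000):
        # Run a simulation forever, rolling back to the last good
        # checkpoint whenever a crash condition is detected.
        state = copy.deepcopy(initial)
        checkpoint = copy.deepcopy(state)
        ticks = 0
        while True:
            state = step(state)
            ticks += 1
            if crashed(state):
                # Roll back, perturbing slightly so a deterministic
                # step() doesn't just replay the identical crash. The
                # inhabitants of the restarted run carry no memory of
                # the crashed branch.
                state = perturb(copy.deepcopy(checkpoint))
                continue
            if ticks % checkpoint_every == 0:
                checkpoint = copy.deepcopy(state)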

>6. On the general question of criteria of identity and criteria of survival:
>
>It baffles me that so many people are determined to assert or accept their
>preferred criteria with no proof. The fact is that so far we have neither

 What is there to prove? If I assert that software which passes behavioral
tests X, Y and Z is me regardless of what it "feels", that's an assertion
about what I value being propagated into the future, not a claim about any
natural phenomenon.

>Message #8198
>From: 
>Date: Sat, 10 May 1997 12:14:36 -0400 (EDT)

>First of all, remember that a simulation is not necessarily slower than the
>system simulated. In fact, we often use simulations precisely because they
>are much faster; and AI people often say that an electronically simulated
>person could live a lifetime while a flesh and blood person is blowing his
>nose. I suspect the question of how fast the first simulation would run,
>relative to the real world, is complex and difficult and dependent on many
>unknown factors. Perhaps it is not unreasonable to guess that the first
>simulation could run at the same speed as the real world, or faster.
>
>But if that is true, then the first subsimulation must also run at least
>as fast as the first simulation, its "parent." Why? Because the first
>simulation (supposedly) DOESN'T KNOW it is a simulation, and therefore its
>inhabitants reason that THEIRS is the real world, and a simulation (our
>subsimulation) must run just as fast. If a simulated person lives faster than
>a real person, then a subsimulated person must live faster than a simulated
>person, etc.
>
>Unless I have missed something, then, the successive subsimulations should
>(to fulfill the requirements) NOT run more slowly. But the original computer
>cannot keep up with this demand, and therefore the system breaks down--i.e.,
>fails to maintain its intended function, even though the real hardware keeps
>ticking away. I think "grinding to a halt" is close enough to express this
>condition.

 This is an interesting proof of something, but it doesn't prove what you
claim. Most simulations that people actually run are faster than what they
are simulating because they take a lot of shortcuts and only simulate the
features they find interesting. A perfect simulation is normally much slower
than the original (the toy arithmetic below makes the compounding slowdown
concrete). If I can produce an adequate simulation of myself that runs
faster than I do, there are several hypotheses which could explain it:
a) I'm not in a simulation.
b) I'm in an atypically inefficient simulation.
c) I'm in a typical simulation, but my idea of an adequate simulation
 of myself is less stringent than that of a typical creator of simulations
 (I do in fact observe that my standards in this regard are less stringent
  than those of many people on this list).
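
 The toy arithmetic I promised (the speed fractions are my assumptions,
purely illustrative):

    def relative_speed(f, levels):
        # Speed of the deepest of `levels` nested simulations relative
        # to the top level, if each level runs at fraction f of its
        # parent's speed.
        return f ** levels

    for f in (0.5, 0.9, 1.0):
        print(f, [round(relative_speed(f, n), 4) for n in range(5)])
    # f = 0.5 gives [1.0, 0.5, 0.25, 0.125, 0.0625]: the slowdown
    # compounds with depth. Only f >= 1 at every level would satisfy
    # the requirement that each subsimulation run at least as fast as
    # its parent, and a fixed host cannot supply that at every depth.
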
-- 
------------------------------------------------------------------------
Peter McCluskey |                        | "Don't blame me. I voted
 | http://www.rahul.net/pcm | for Kodos." - Homer Simpson
 | http://www.quote.com     | 
