X-Message-Number: 14165
Date: Mon, 24 Jul 2000 15:15:42 -0400
From: James Swayze <>
Subject: Stirring the pot
References: <>

I know the following may stir the pot a little, but I think the guy
had some interesting and useful things to say. I saved this post from
the Nanotech list some time ago. All credit goes to its author, Wayne
Rad. I agree with some but not all of what he has to say. You may
judge for yourself.




          [nanotech] Hello. And why uploading debate is unresolvable (long)
          Date:  Mon, 03 Apr 2000 02:20:32 GMT
          From: "Wayne Rad" <>

Hi.  I'm new to the nanotech list.  This is my first message,
so hello everybody, how are you?

And I'd like to comment on all the discussion of "uploading" that
has been taking place.

I have found it a bit discouraging that the list is filled with
mostly philosophical discussion.

This is a nanotechnology discussion list, and there are a lot
of other issues I'd be interested in.  Such as:

+ how do we design nanotechnological systems?
+ books, chemistry simulation software, and other learning tools?
+ what are the benefits and dangers of nanotechnology, and how do
we deal with them?
+ what are the laws of physics, and how can we apply them to
determine the limits of what nanotechnology can do?

These are issues that I don't feel competent to discuss, because
I'm still learning about them.

Ok, now I'd like to address the question of "uploading".
I believe that we can argue about "uploading" all day and never
come to a consensus because we are trying to resolve a philosophical
paradox from quantum physics that cannot be resolved!  Let
me say that again: I believe the philosophical issue of
"uploading" and "consciousness" is fundamentally unsolvable.

In quantum physics, there is something known as the observer
paradox.  I like to refer to it as the paradox of consciousness.
It is particularly well exemplified in the Schroedinger's Cat
thought experiment.

The Schroedinger's Cat paradox isn't a real paradox, in the sense
of being a puzzle with two possible answers that are equally logical.
It's just a way of pointing out a philosophical difficulty in
quantum mechanics.

What we do is we put a cat in a box with a loaded gun, and the gun
is hooked up to some quantum event which will decide whether the
cat lives or dies. Let's use electron spin as the thing we measure.
Ok, let's say we wait for an electron from the electron source,
and then measure its spin, which, due to the way we measure it,
can either be "up" or "down". If its spin is "up", the cat gets
killed. If the spin is "down", we let the cat live. So, after this
happens, but before we open the box and look inside, we ask the
question: Is the cat alive or dead?
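
The procedure above is easy to sketch in code -- though note that a
classical simulation like this (the 50/50 spin source is my assumption)
can only model the statistics of repeated trials, never the superposed
state itself, which is exactly what makes the paradox untestable:

```python
import random

def run_trial():
    # measure the electron's spin: "up" or "down" with equal probability
    spin = random.choice(["up", "down"])
    # the gun fires on "up"; the cat survives on "down"
    cat = "dead" if spin == "up" else "alive"
    return spin, cat

# before opening the box, all we can state is the distribution:
# roughly half of all trials end "alive" and half "dead"
trials = [run_trial() for _ in range(10000)]
alive = sum(cat == "alive" for _, cat in trials) / len(trials)
print(f"fraction alive: {alive:.2f}")
```

Each individual trial, of course, has a definite outcome as soon as the
code runs -- the simulation silently takes the cat's point of view, which
is the whole problem.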

The answer given by quantum mechanics is quite clear: the cat is
in a superposed state, both alive and dead at the same time. The
trouble is, the cat itself probably sees things a little differently.
Which brings up the question, who is the "observer" in this
experiment, the thing which measures the spin of the electron,
or we who open up the box to see if the cat is dead? Or does
the consciousness of the cat itself count? Who, ultimately, is
the "observer"? I may consider myself to be an outside observer
when observing the rest of the world, but to the rest of the
world, I'm just like the cat, part of what gets observed. Does
this mean that something I observe isn't "real" until I observe
another observer observing it? What other observer? Is a fly an
observer? Or maybe I'm the only real observer in the universe,
and when I die the universe ceases to exist.

Taken to the limit, this means that the universe bifurcates
at each and every subatomic interaction. But the equations of
quantum mechanics continue to work just fine, even with no answer
at all to this strange paradox.  The equations continue to predict
the outcome of every experiment correctly.  They don't predict
what can't be observed. (And people always try to model in their
heads what things like photons and electrons are doing when they
can't be observed.)

Solipsism could be true. Solipsism is basically a religious belief,
that, life is just a dream, or imagination. That would mean,
everything I've experienced in my life, from the day I was born,
is imaginary.  Including all the books I've read about biology
and physics, including all the technology I've used, Moore's
Law, reading messages off this list -- everything.  And I have no
way to prove this isn't the case, that what I experience as
"reality" is "real".  You will run into this question if you
ever study near death experiences, the experiences had by
thousands of people who "die" and are brought back to life.
Are NDE's "real"?  Since you can't establish that regular
reality is "real", you can't say whether NDE's are real.

Physicists are familiar with the observer paradox, and in practice,
they ignore it, because it's just a "philosophical" paradox, and
has no effect on the actual use of physics.  Because of this, I
would argue that the existence of this paradox should have no
effect on our discussions of future predictions and
technological possibilities.

When you get right down to it, how do I know all of you, and
all other humans, are not mere intelligent machines?  How do I
know you have "sentience"?  I don't.  In my world, I am the
observer, and the only one.  Might I experience the existence
of superintelligent machines in the future, might I interact
with them?  It's possible, and fully consistent with the known
laws of physics, and even the observer paradox does not cause
any problem.

By now you've probably noticed, I've written all this in first person.
That's because I'm the only observer I know of for certain.  So
I can't really speak of what your experience in life is like.  Or
a machine's.  I can theorize, I can postulate, but I can't actually
know.  I know only my own observations.

Now what this has to do with the concept of "uploading" should be
pretty obvious.  I'll continue the logic, in first-person.  Suppose
superintelligent machines come along, and everybody I know starts
"uploading".  So many people that, after a year, say, everybody I
know has uploaded.  And I ask them (I see their faces on screens or
something, or maybe they have a robotic face, I don't know), "Gee,
did uploading really work?"  And they all say, "Yes of course!  It's
a fine experience.  One minute, you're in your body, and the next, you're
in the machine and you can think really fast and clearly!"  And
I say "What about your body?" and they say "Oh that was discarded,
no problem."  Well, it doesn't matter HOW many people "upload",
I still don't know what *I* will experience.  Will I experience
being transferred into a machine?  Or will I see a copy of myself
show up in a machine, and then experience my own death?  No matter
what the previously "uploaded" people say, there is no possible
answer to this.

Now, you could change the procedure.  You could say, ok, instead of
doing the upload all at once, we'll just do it one neuron at a time.
We'll follow the procedure where, one by one, we replace the
neurons in your brain with electronic circuitry.  Does this
change the argument in any fundamental way?

I don't think so, but, it does raise an interesting point.  Which
is that the atoms in your brain are continuously being
replaced anyway. In fact, given enough years, every single atom
in the body gets replaced.  Which just goes to show you that
"you" are not the matter that you are made of, "you" are
the information pattern.

In the end, "consciousness" is all a big mystery, and I expect
it will stay that way "forever".

All of these questions, everything from "does Schroedinger's cat
think he's dead or alive?" to "are souls created when babies are
born?" to "is a near death experience 'real'?" to "is a copy of me
in a machine still me?" are unsolvable.  Because they all reduce,
in one way or another to one question: who/what is the observer?

Hopefully by now I've permanently resolved all the debate
regarding "uploading" :)

By the way, this has been referred to as Hans Moravec's procedure,
but it was not Moravec's idea.  The first person to think of this
was Zenon Pylyshyn in roughly 1980.  (At least, this is what
Douglas Hofstadter told me yesterday).

I want to make one more point about uploading, before I go.  And
that is: (according to me) uploading is not compelling from the
standpoint of evolutionary theory.  I understand the desire of
an individual human to achieve immortality (survival instinct).
But individual humans are the result of the process of evolution,
and immortalizing humans in circuitry is not something I would
expect the process of evolution to create.

When we talk about things like superintelligent AI and the
singularity, we are postulating the emergence of a new medium
for evolution. That medium is electronics -- computer memory
in whatever form, RAM, hard disks, etc.  This is happening already
to some extent. Linux is a piece of software that is replicated
in source-code form.  With Microsoft, replication occurs at
the memetic level, with Microsoft programmers looking at other
people's programs, learning the ideas they are based on, and
re-implementing them for Microsoft.  With Linux, replication
occurs at the source code level, thus making the code itself
the central repository of technological knowledge.

What we want to know is, can this process become independent of
the previous levels of evolution that precede it?  Evolution
through human language still depends on DNA-based organisms
called humans, so the process of technological evolution is
still dependent on the DNA-based process of evolution.  With
the development of machines that are themselves intelligent,
and electronic storage as the medium for evolution to store
its designs on, and replicate them on, this seems possible.  We
are seeing this already, with the development of neural networks
and genetic algorithms.  Notice here, how the new medium has
very different properties from the old.  With DNA, the whole
organism has to replicate (which generally includes the process
of sexual recombination).  With memetic evolution, the ideas have
to go from one person to another.  With electronic evolution,
millions of designs can be tested in an electronic simulation.
They exist as designs, as information, in a single machine.  In
addition, software can be replicated quickly to millions of
machines. So the concept of an "organism" seems to have changed
again.  Is it hardware?  Is it software?  Is it the network?  Hard
to say.
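
The point about testing many designs in simulation can be made concrete
with a toy genetic algorithm -- a minimal sketch of the general
technique, not any particular system; the target string, population
size, and single-character mutation scheme are all arbitrary choices
for illustration:

```python
import random

TARGET = "nanotech"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    # number of positions where s matches the target string
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # replace one random character with a random letter
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def evolve(pop_size=100, generations=1000):
    # start from a population of random strings
    pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return pop[0], gen
        # selection: keep the top half, refill with mutated survivors
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return pop[0], generations

best, gen = evolve()
print(f"best design: {best!r} after {gen} generations")
```

Nothing here replicates a whole "organism" -- thousands of candidate
designs exist only as strings in memory, are scored, culled, and varied,
which is the property of the electronic medium being described.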

At this point, however, the machines still depend on human beings
to build and maintain them.  One could imagine a future where
all jobs are done by machines, except the jobs of building and
maintaining machines.  But this is where nanotechnology could
change everything.  If the machines can replicate themselves,
build themselves, and design themselves, then humans exit the
picture completely.

This brings up the question of whether the machines would directly
*conflict* with humans, whether they would occupy the same
evolutionary niche.  Or could machines and humans peacefully
coexist?

Human intelligence is, I believe, what gives humans their
evolutionary advantage.  If you look around the globe, other
species are going extinct by the thousands.  Why? Usually
because their food, their habitat, or some other resource they
need is getting used up by humans.

This is what evolution is designed to do.  Natural selection
produces organisms that are good at acquiring whatever resources
are necessary to survive and reproduce.  The logic is sort of
inverse: any organism that doesn't reproduce and acquire maximal
control of resources will get beaten by another that does, and
driven to extinction. We humans like to think we
are so smart and so moral and so "above" evolution.  But this
is hubris.  We are products of evolution. Our intelligence
was created not to transcend evolution, but to be better at it.
The extinction of so many other species bears testimony to how
much better we are.  In fact, it is estimated that humans use
up 40% of the terrestrial net primary productivity of the
earth! (Net primary productivity is a term used by ecologists;
it means the total amount of solar energy converted into
biochemical energy through plant photosynthesis, minus the
energy needed by those plants for their own life processes. )

If machines did compete directly with humans in our niche, how
would that affect us?  One scenario I can imagine is that employment
would simply disappear.  I work as a computer programmer.  If
a machine could do my job, why would my manager hire me?  He won't.
But why would his manager hire him?  He wouldn't.  The machines,
after a certain point, would always be faster, cheaper, and more
reliable than human beings.  And if he doesn't use the machines,
his competition will, and put him out of business.  You can
continue this logic all the way up the chain to the CEO.  So a
company of the future will be a CEO and thousands of machines.

This generally raises the question of what would motivate the
machines. Why can't we simply control them?  We created them
after all.

What motivates humans is evolution.  We seek to gain the resources
we need to survive and reproduce, and to do so with maximum
effectiveness.  People always object, if this is the case, why
don't we know it?  Why don't we have any conscious thoughts of
trying to maximize our inclusive genetic fitness?  Why isn't it
everybody's goal?  My answer is that we're just pieces of software.
When I run Microsoft Word, for example, it seems to understand
all sorts of things about documents.  It is a document expert.
But it has no idea it was created by a bunch of geeks in Redmond,
Washington.  Similarly, we need have no idea of evolution or
natural selection for the process to work.  I think the obvious
power of the human sex drive speaks for itself.  Your average
teenager doesn't care why their sex drive exists; they just act on it.

I have come to the conclusion that, ultimately, it doesn't matter
whether the machines are self-controlled or human-controlled.
It doesn't matter because the machines will have the same goals
either way, and they will be the goals dictated by evolution.
Either they will want to seize control of all the
available resources for their own use, or for those of their
human masters.  The fact that the technology will probably be
widely distributed across the globe ensures that somebody will
tell their machine to make them as rich as possible, or some other
nasty selfish motive.  And like the collapse of the USSR, which
tried to artificially defy the rules of evolution, I can
imagine humans living in a machine-supported welfare state for
a period of time, but not indefinitely.

The main difference is that humans will slow the process down a bit.
But with all the work going on, even now, with evolutionary
algorithms in computers, I think it's much more likely that the
process will spiral out of human control.

Human beings evolved from lesser creatures, but do not exist
to serve their interests.  I know the analogy isn't great, but it
makes the point.

Once the process of evolution becomes independent of humans,
then will we "upload"?  Are we going to try to stuff humans into
the process -- by scanning and translating ourselves
into software, or by replacing neuron by neuron, or whatever?
This is just not compelling from an evolutionary point of view.

It seems to me that the "environment" in which artificial neural
networks and genetic algorithms operate is not at all suitable
for a human being, which evolved over millions of years in
a physical world.  I can imagine *fragments* of human intelligence,
such as being able to interpret vision from cameras, or the
ability to interpret spoken language, being useful to the machines.

So I expect, either:
1) nobody will ever do the uploading procedure (for any reason,
perhaps because the machines won't cooperate), and machines will
make humans obsolete in the physical environment,
2) people will do the uploading procedure, but machines will
out-compete and destroy the virtual humans in the electronic environment, or
3) people will do the uploading procedure, then modify their
"consciousness" by adding so much technology that they are no
longer even remotely recognizable as human beings.

For any of these to happen, you must accept three key assumptions:
1) that machines will develop human-level intelligence,
2) that machines will be able to manipulate the physical world
(so they can get their own material and energy without
human involvement), and
3) that machine advancement is driven by evolution by natural
selection, the same process that created humans.


