X-Message-Number: 5265
From: 
Date: Sun, 26 Nov 1995 12:44:37 -0500
Subject: singularity

There will be at least two Great Divides in human history in the relatively
near future (the next couple of centuries at most). 

One will be the transition to "immortality" (indefinitely extended life), in
which cryonics may or may not play a substantial part.

The other will be the "singularity," discussed most recently on this net by
John Clark and others--the point at which self-improving AI machines can
assume control of their own development and grow, perhaps exponentially, in
intellect and possibly even in physical organization and fabrication
capabilities. If uploading proves possible, then "we" may become such
machines.

There are many interesting possibilities related to this putative
singularity. The most obvious is that the fustest gets the mostest, or maybe
the fustest gets everything. This is because, if two or more racers have the
same constant acceleration, then the first to start not only remains ahead
permanently but steadily widens his lead in position, while keeping a
permanent head start in velocity (a quick sketch of the arithmetic follows).
The sci-fi scenario here is that Dr. X, who develops the first such machine,
can become Supreme Potentate of the Known Universe if he chooses. (It won't
matter what precautions his employers take; his machine will tell him how to
outwit or overpower them, or will do so itself.)
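
For concreteness, here is a minimal sketch of that racing arithmetic, in
LaTeX notation. Suppose both racers have the same constant acceleration a,
and racer 2 starts a time \Delta t after racer 1; then for t \ge \Delta t:

    x_1(t) = \tfrac{1}{2} a t^2, \qquad x_2(t) = \tfrac{1}{2} a (t - \Delta t)^2

    x_1 - x_2 = a \, \Delta t \left( t - \tfrac{\Delta t}{2} \right)
        % the gap in position grows without bound

    v_1 - v_2 = a \, \Delta t
        % the gap in velocity is a fixed head start

If instead capability compounds exponentially at some rate k (an
illustrative assumption, but closer to the self-improvement scenario
above), say x_i(t) = e^{k(t - t_i)} with t_1 = 0 and t_2 = \Delta t, then
the leader's absolute lead

    x_1 - x_2 = e^{k t} \left( 1 - e^{-k \, \Delta t} \right)

itself grows exponentially, so the head start widens in position and
velocity alike.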

Notice that I said "possibility." There is no assurance that accelerations
will be fixed or equal.

Another possibility is that, as Brad Templeton suggests, the uploaded may
have no interest in the frozen, and may doom them either by indifference or
by making changes in the world that are incompatible with their revival or
continued maintenance.

There are many other possibilities. Personally, I think it is extremely
significant that no superbeings have (as far as I know) manifested themselves
to us. This, along with other features of our world, makes it virtually
certain that the underlying nature of things, if we ever discover it, will
prove extremely strange. And quite likely the universe is not user-friendly.

Getting back to the disappearance of money and economics as we know them, I
agree that this will probably happen. At some point, each individual may have
a private machine, or machine extension of the person, that can do almost
unlimited thinking and fabrication; then "communities" may exist mainly for
reasons of defense and perhaps social intercourse. There could still be
situations, at least for certain time intervals, when goals of construction
or research or defense demand more resources than any individual commands,
and then we would be back to trading in some sense. Maybe real estate on the
surface of the earth, non-virtual, will still be prized, among other things.

The main point concerns values. All economics, and indeed all life, revolves
around our underlying values or concerns--and what these are or ought to be
is basically unknown, currently dealt with on an evolutionary, accidental, or
ad hoc basis.

The fundamental task of philosophy is to enlighten us as to what we ought to
do--"ought" on a strict scientific/logical basis, rigorously developed from
impeccable premises. No philosopher has ever come close. In the next couple
of years I hope to finish my project of coming a bit closer.

Robert Ettinger

