X-Message-Number: 15946
Date: Sun, 25 Mar 2001 13:46:48 -0500
From: Brian Atkins <>
Subject: The reality of our unending precarious situation (Swayze)
References: <>

Excuse the poor formatting, please.

James Swayze wrote:
> Sabine wrote:
> > 

> > In my husband's and my opinion, humans becoming super humans without the
> > control of a Friendly AI is the recipe for disaster:
> 

> You seem to be missing some of the points I made about the alternative I hope
> for to machine AI.

Not really. Actually, we have been through this all before, years ago... The
problem here is that you have a very unrealistic view of what the near future
holds. See below.


> There's every reason to believe we'll be much more moral and peaceful by the
> time and because of becoming super human. Please recall what I said about
> appreciating individual lives more because of their newly added value. I also
> feel, and it's been said by others here, that if people believed this life
> was capable of perpetuity they would feel they had more to lose with death.
> They'd take fewer risks like crime, self abuse and war.

James, I still don't have a clear understanding of what your "ideal future"
looks like. From what I can tell it consists of all people still living in
biological bodies forever. Not gonna happen. Once you can move your mind
to a more robust substrate, death becomes less of a worry, and the ability
to act without the possibility of retribution increases. In other words, I
can upload myself to an extremely small nanocomputer (perhaps 1cm^2 or smaller),
make 10,000 copies of myself, and distribute them all throughout random rocks
in the asteroid belt. Then I launch my war on Earth. And all this just gets
worse as the intelligence of minds increases. More below.

> 

> Your apparent need for a beneficent parental AI supreme being is akin to and
> reminds me of the argument religionists apply where they claim we are without
> morals if there is no god. This simply is not so. We have morals out of
> realization of the benefit of cooperation. We also realize that we are much
> alike and so empathy to a large degree rules our morals.

See, you are completely missing the point (to be blunt). Our goal of having
a Sysop is not to "enforce morals". It is to provide very basic (mostly
physical) protection, the kind of protection that is required in a solar
system where individuals have full power over such things as nanotechnology.
99.9999% of the people in the solar system may develop perfect morals in the
future as you hope for, but it only takes one to turn the planet into a
big blob of grey goo. Go read the science-fiction book Bloom to see what I
mean.

How do you prevent disasters like this once these extreme technologies are
available? When no one needs anyone else to survive?

Secondarily, how do you get around the fact that AI is an inevitability in
the near-future world with near-infinite computing power? In that kind of
future, a teenager in his basement can evolve a superhuman evil AI overnight
by mistake, and before anyone can do anything it's game over. The AI problem
does not ever go away as long as we have computers capable of running one.
This problem must be faced, and it must be gotten right the first time.

Are you going to tell me that all technologies advanced beyond a certain
point must be totally restricted, a la Bill Joy? (If you say that, I don't
think you can call yourself an Extropian.) Even if you think that, how do
you propose to make it real? It would require a Big Brother-esque situation.

If you say that you want a more anarchic future where everyone is totally
"free" to do what they want, how do you propose to prevent the eventual
disaster?

From what we can tell there are two supergoals: 1) as much freedom as possible
for everyone, and 2) total safety for everyone. The best way to balance
these goals so that you provide as much as possible of both is a Sysop.
This can be condensed down to one supergoal: Friendliness.

> 
> > Even if I'm super human myself, having to deal with a whole bunch of moody,
> > warm-fuzzy, violent, confused (maybe even crazy), pretentious super humans
> > is not what I'm wishing for.
> 
> Let's take these presumptions on one at a time shall we?
> 

> moody--As many have said here and in other forward-looking groups, we are
> working towards the eventual elimination of mood disorders. Why assume no
> progress will be made at making us all healthy in this regard? For whatever
> natural moodiness remains, there will be no end of distractions to entertain
> us. May I suggest the following reading as something to tickle the
> imagination regarding these and more issues: http://www.hedweb.com/hedab.htm

And what about the individuals who refuse treatment? Are you going to force
everyone to be treated just so you can feel safe? Remember, in the future it
only takes one madman to cause BIG trouble.

> 

> warm-fuzzy--I don't get this one, frankly. Could you further elaborate on
> the evils of being warm and fuzzy? If I may presume, I get from it a sense
> of disdain toward our meat shell. Forgive me if

It's not about bodies, it is about minds. Minds with warm fuzzy thinking that
have evolved to think in certain ways that could be very dangerous at this
point in our species' history. Humans almost wiped themselves out with nukes;
do you want to tell me that we will do better with even more advanced and
more _distributed_ technologies?


> violent--Like I said before, I believe reasons for violence will diminish if
> not entirely disappear. If we are linked as I described, we would be able to
> instantly upload to the mutual network any image of violence being done to
> us and the perpetrators' identities. Acts of violence would be difficult to
> hide and too costly personally to commit.

So when your body is infected with a microscopic grey goo nanobot, you can
transmit images to all your friends of your hand getting eaten. Violence
in the future is not via guns, fists, or other silly stuff.

> 

> confused--Confused? Hardly! Why would we be confused as our intellect and
> knowledge increase? Confusion comes from misunderstanding. How could we be
> deficient in understanding if we have all current knowledge available to us
> instantly and an increased capacity to comprehend that knowledge?

James, the majority of the people on this planet seem to believe in some kind
of supreme being, even though there is no proof. People believe in stupid
stuff, they form cults, and otherwise act in confused ways. Are you going to
force everyone in your ideal future to become unconfused?

> 

> crazy--Certainly you can see the trends of medical knowledge and power to
> cure. As we learn the genome and the proteome and tease at every nook and
> cranny of the human being, we will be able to eliminate, I believe, all
> diseases. Someday I believe we'll be able to "blueprint" human beings.
> Crazy will cease to be a concern.

It is unlikely this will cease to be a concern in the next 10 to 50 years,
which is the most critical period in all of human history. Get real.

> 

> pretentious--I'm not certain how to respond to this one. Certainly we'll
> still be individual enough to have pretentious people, but I fail to see how
> it is a major threat.

Think of your favorite dictators.

> 
> > Even if I can't be killed/erased/terminated, the quality of my life would
> > be rather bad as I'd have to spend precious time struggling with and
> > fighting off super human emperor wannabes.
> 

> I know I run the risk of including myself in the following when I confess my
> distrust of Super AI and the danger of our extinction at its purposes, but I
> must express my feeling that apocalyptic memes run needlessly rife in our
> culture. I hate that most Sci-fi entertainment is

Do you still believe after reading this that there is zero chance of a
human/posthuman causing an apocalypse in the near future if there are no
protections? If so, you are living in a fantasy world. Why not go out and
hand all your neighbors nuclear weapons right now then?

> based on apocalypse. Every view of the future is dark and sinister. Mega
> corporations or evil, power-hungry governments will enslave everyone. Aliens
> will eat us. Nuclear war or power station meltdown will transform us all
> into trogs. Sci-fi is the mythos of our time. The previous mythos was
> religion. Unfortunately the armageddon apocalypse of religion has carried
> over not only to the new mythos but to society in general. Most people
> expect the future is likely to be horrible. I run into it everywhere. I
> can't talk some people out of it. I think all the literature from the Bible
> to modern Sci-fi has so entered our language that people seem incapable of
> seeing a bright future, and indeed, in my opinion, unconsciously seek to
> self-fulfill the doom prophecies.

Well, I am sorry that the future isn't what you wished for. Reality does seem
to have a habit of intruding...

> 

> "fightingoff super human emperor wannabe's"? Can you not see it any other way?
We

Plan for the worst, and all that...

Personally I want to live forever, and if I end up getting killed off by some
nanotech accident or attack I would be extremely pissed off (right before I
died).

> We can't achieve a bright and wonderful future ourselves without some
> parental beneficent Super AI? I strongly disagree. Not only that, but even
> if it were beneficent, I feel it would be the worst thing for us. Human
> beings, even transhuman beings, need struggles. We need problems to solve.
> Furthermore, we need to solve them without meddling and smothering parents.
> I hope the association I just made to how we best raise children came
> through loud and clear.

Oh, give the parental metaphor a rest, ok? The Sysop scenario is more akin to
creating an operating system for the Universe. You can do everything you want,
except the short list of evil actions, which will literally be impossible.
The Sysop will not appear in front of you and chastise you for trying to
shoot an Amish farmer on Old Earth; your gun will simply fail to fire when
you pull the trigger.
-- 
Brian Atkins
Director, Singularity Institute for Artificial Intelligence
http://www.singinst.org/
