X-Message-Number: 29757
From: 
Date: Mon, 20 Aug 2007 21:11:19 -0500
Subject: To Francois: Reply

You said:  "Those machines will be unbelievably powerful. For instance,
they probably will be capable of restructuring entire galaxies to suit
their purposes. Without strong ethics guiding their actions, conflicting
goals among them would probably drive them toward their mutual
annihilation very quickly."

Point in my favor.  Where will humans be in that process?  A mutual
annihilation of supercomputers that have restructured entire galaxies is
not going to leave any hiding place for lesser creatures such as humans.

I know - you are going to say "they must then obtain strong ethics". 
Even if they do, say, because humans programmed them that way, you are
forgetting one little thing.  The whole idea of the Singularity is that
these entities become powerful and capable enough to reprogram
themselves.  They would eliminate that useless piece of code faster than
you can zap a Pac-Man on a video screen.

Or, oh, OK, maybe they would develop the sort of logic that deems
cooperation and peace to be mutually beneficial.  Some human tribes have
done that.  Others have not.  To many, the logical course is to leave not
a single enemy alive; after that, you have peace.  If you think these
super-AIs will be benevolent, you are merely projecting your own
sentimental ethics onto them.

It is not worth the risk.  That "Summit" needs to focus on how to STOP
the Singularity from ever getting here, how to stop a super-AI from being
developed, and how to ensure there is a power plug that security
personnel can pull on every supercomputer.
