X-Message-Number: 30162
From: 
Date: Tue, 18 Dec 2007 00:32:43 -0500
Subject: To Robert Ettinger on Consciousness and threats

You state there is "no agreed definition of consciousness ... no current
way for an observer to decide whether an observed system is conscious".
This may not be the real issue.  If I'm out camping in the woods and a
bear comes along, that bear might or might not be a brilliant
mathematician with intense feelings for his wife and children, but one
thing is certain: it has the power and ability to wreak havoc with my
belongings on the ground, and with me, if I am foolish enough not to get
into the car and make sure the windows are up.  Or, alternatively, to
shoot the bear.

We don't need any clearer definition of consciousness than we already
have to decide that it is important to protect ourselves against the
*possibility* of an unfriendly super-AI.  Indeed, if it has no ability to
"feel" anything, it is even more likely to consider humans irrelevant,
and to give us no consideration at all when it decides it is time, say,
to do some terraforming.

"Unfriendly" does not necessarily have to mean "hostile".  It can merely
mean the same consideration most humans give when walking around, to the
possibility of ants underfoot.

Qualia or not, a Singularity AI is not to be risked, and any future AI
developments should be carefully monitored so that they do not progress
out of control.


