X-Message-Number: 30212
From: 
Date: Tue, 25 Dec 2007 20:53:14 -0500
Subject: Re: AI & conspiracies

I had about given up trying to talk to Bob, who persists in raising
fringe issues and excuses regarding the need to ensure safeguards
against the development of non-friendly super-AI.

Today, though, he says something that needs to be put in a clearer
perspective:  "This very complex frontier research, almost certainly,
cannot be  effectively done by one person or even a small team....and for
such a team to be collectively careless, let alone engaged in a malign
conspiracy, is difficult to envision. Somebody is going to blow the
whistle."

The simple fact is that no "malign conspiracy" is required for a
development team to engender a non-friendly super-AI.  All they need do
is proceed with their usual R&D, right up to the point where their
software becomes self-developing.  At that point, the computer is in
control, has more power than its developers, and will show the human race
just how "friendly" it is.

As I have said before, it is possible, though by no means definite, that
installing safeguards all along the way (such as the pauses for prompts
Bob suggests) may ensure that the point of control is never passed to the
AI.  It is the only hope we will have, if we reach that point.
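To make concrete what a "pause for prompts" could look like, here is a
minimal sketch in Python.  Everything in it (the class, the function
names, the proposal structure) is my own hypothetical illustration, not
Bob's actual proposal or anyone's working protocol; the point is only
that every self-modification step routes through an explicit human
decision, with refusal as the default.

# A minimal sketch of the "pause for prompts" idea: every
# self-modification step must pass an explicit human checkpoint,
# and refusal is the default.  All names here are hypothetical
# illustration, not anyone's actual safety protocol.

class DevelopingSystem:
    """Stand-in for an AI system under development."""

    def propose_self_modification(self):
        # Placeholder for whatever change the system wants to make
        # to its own code.
        return {"description": "rewrite planning module"}

    def apply(self, proposal):
        print("Applied:", proposal["description"])


def human_approves(proposal):
    """Block until an operator decides; anything but 'y' means no."""
    print("Proposed change:", proposal["description"])
    answer = input("Apply this self-modification? [y/N] ")
    return answer.strip().lower() == "y"


def development_loop(system, max_steps=10):
    for _ in range(max_steps):
        proposal = system.propose_self_modification()
        if not human_approves(proposal):
            # The conservative default: refuse and stop -- "don't go there".
            print("Change rejected; halting self-development.")
            return
        system.apply(proposal)


if __name__ == "__main__":
    development_loop(DevelopingSystem())

Of course, a safeguard like this only matters while the human side of
the checkpoint still has the power to make its refusal stick.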

The problem we have today is that few, if any, of those working in this
area, especially groups like the SIAI, have any plan at all for
safeguards, or even see the need for them.  They appear willing to risk
the fate of humanity on whatever the odds are of a super-AI being
friendly.  This reckless attitude is, to me, at the very least entirely
puzzling, coming from people many of whom are also cryonicists, and thus
concerned about their own future survival.  Think it through, people.
It is time to foster a new paradigm of caution with respect to future AI
development.

Until then, it may well be mainstream research centers that are "malign"
by default, by allowing development to proceed into areas where
safeguards are inadequate.  I do not know how safeguards could even be
developed against that "tipping point" of no return, and if, close to
that time, the researchers do not know either, the only safeguard will
be "don't go there".
