X-Message-Number: 30154
Date: Mon, 17 Dec 2007 02:05:58 -0500
Subject: Comment on "singularity" by Bob Ettinger

First, allow me to compliment you on your creative posting of three
points enumerated in reverse order. I suspect this reflects your sense
of humor. As for all the extra spaces in your text, and the obviously
missing special characters such as quote marks, I would guess that your
word processor is not compatible with CryoNet's editing software. It
was readable enough, though.

And I'm delighted that you, the Founder/Father of Cryonics, appear to be
in agreement with me regarding the probable outcome of an uncontrolled
Singularity AI. As you state: "It is true that IF a computer were
sufficiently intelligent and motivated and independent and capable of
self-modification, it would be impossible to control, because it would
necessarily have communication, and it could accomplish whatever it
wanted by persuasion, however physically limited it might initially be."
It is also obvious to me that it could achieve mobility and the
capability to manufacture just about anything it wanted.

If every cryonicist could get past the hurdle of understanding that, we
would be most of the way toward demanding SAFEGUARDS against such
self-independence. No matter how often the promoters of the Singularity
say "oh, we hope it will be a FRIENDLY AI," they are merely engaging in
wishful thinking and psychological denial regarding the real possibility
of a threat to human existence on this planet.

In your second point (presented second!) you state that "the original
human programmers will almost certainly want to retain control or
otherwise protect themselves against a possible monster". I would say
maybe, but not if the projects are run by people like those who run the
SIAI (Singularity Institute for Artificial Intelligence), who merely
point out a "risk" and propose no safeguard against that risk.
Apparently the future of flesh-and-blood humanity is a risk they are
eager to take.

In your first point (presented third) you disagree with Asimov's laws of
robotics, yet follow them up with the idea that a computer would break
down under logic errors arising from such laws; I simply disagree with
both. Besides, as far as I know, nobody out there is proposing any kind
of safeguard against an intelligent supercomputer asserting its power
over lowly humans. Further in this point, you question the emergence of
computer consciousness. I truly hope you are right, but do we really
know? Unless we do, it would be reckless not to impose credible
safeguards against such a development. Could you give us pointers to
where you wrote on this before (CryoNet posts, chapters in your books,
wherever)?
I don't see how "slaving" it to a human brain would necessarily keep the
human in charge if it is indeed self-developing. Do you really think it
would?

In your third point (presented first) you swallow the argument of
somebody who says "Well, I don't think there will be a Singularity AI
for xxx years" (usually some figure over a century), as if that were a
logical reason for not being concerned about it at all. My view is that
the longer we have, the more time we have to prepare; but since we don't
really know how long it will be, we had best get going on it already. We
have no details on how such an emergence will arise, so we don't really
know. You, of course, say you have details on how it will NOT arise; I
look forward to reading them, and apologize if I missed them somewhere
along the road.
