X-Message-Number: 30176
Date: Thu, 20 Dec 2007 03:16:07 -0500
Subject: Re: "evil" AI & consciousness


Hello again, Mr. Ettinger.  You wrote:

"My point is that a system  without feeling, without subjectivity, cannot
have motives in the sense that we  do. It
doesn't want anything or fear anything. It can only have programmed 
goals, states to attempt to reach or to avoid, which is very different."

You just described my ferocious grizzly bear, in better detail than I
did, except that it does "want" something, just as a programmed machine
would.  Some other species of bear fear humans and will run, unless they
are very hungry or trapped.  If that grizzly bear is coming at you,
though, believe it: he has a programmed goal, and fears nothing.  You
can shoot him and he will still come at you, unless you break bone.

"These goals must be very explicit and unambiguous. Any attempt by the
programmer to paint with a broad brush will inevitably result in
freeze-up, and trying to foresee all future possibilities in detail is..."

Many animals, when frustrated by ambiguous programming, will just run
around aimlessly until they find something that seems to match their
programming.  That grizzly bear is one example: he comes after you to
take a big bite out of wherever he can.  Something akin to this already
happens in computers: programs running haphazardly until they end up
damaging your hard drive or crashing the operating system.
"In any case, to repeat myself, when some programmer thinks he is near a
super-intelligent program, he will build in safeguards, e.g. in certain
situations requiring a pause for external input. That there is little
present effort to do this simply reflects the fact that such programs
are nowhere on the..."

He will??  Maybe you would, because you are a nice fellow.  But some
folks, like those who run the SIAI, don't seem to believe in any
safeguards at all.  If they do, why don't they ever talk about them? 

But let's assume for a moment that any programmer would, as you say,
build in a pause for external input when he thinks he is nearing a
super-intelligent program.  You still cannot conclude that no one is
close to achieving a super-AI, because the programmer may not recognize
when that condition has been met.  We certainly do not know ahead of
time what algorithms are necessary to spawn such intelligence.  The
other factor is publicity.  You cannot claim to know what "present
effort to do this" actually exists, because the general public is not
privy to all ongoing research projects in any field, much less this one.
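For what it's worth, the "pause for external input" safeguard Mr. Ettinger describes could be sketched in a few lines of Python.  Everything here is my own illustration, not any real system's design: the program only halts for a human on actions the programmer flagged ahead of time, which is exactly the weakness I am pointing at above; the programmer may never flag the situation that matters.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical unit of work the program wants to perform."""
    description: str
    high_impact: bool  # flagged by the programmer ahead of time

def run_with_safeguard(actions, approve):
    """Execute actions, pausing for external input on flagged ones.

    `approve` stands in for the external input: a callable (in a real
    system, a human at a console) that inspects a flagged action and
    returns True to proceed or False to veto it.
    """
    executed = []
    for action in actions:
        if action.high_impact and not approve(action):
            continue  # paused, and vetoed by the human
        executed.append(action.description)
    return executed

# Example: the human vetoes every flagged action.  Note that an
# un-flagged but dangerous action would sail right through.
plan = [Action("log a message", False),
        Action("rewrite own goal function", True)]
print(run_with_safeguard(plan, approve=lambda a: False))
# prints ['log a message']
```

The sketch makes the gap visible: the safeguard is only as good as the programmer's foresight in deciding which actions count as high-impact.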



