X-Message-Number: 30173
References: <>
From: Kennita Watson <>
Subject: Re: To Kennita on Future AI and what it may lead to
Date: Wed, 19 Dec 2007 11:07:54 -0800

From: 
> I tend to agree with you on the diversity in which future AI will
> develop.  However, you overlook the outcome of such development, the
> Singularity, which by definition is a point in time when an AI (or yes,
> multiple ones working together) "rapidly accelerate technological
> progress beyond the capability of human beings to participate
> meaningfully in said progress" (a Wikipedia author's wording).

"The outcome" and "a point in time" presume (I think)
an inevitable "hard takeoff" in which a small number
of AIs (maybe one) will (to put the Wikipedia wording
more colloquially) "leave humans in the dust".  I
doubt many parts of this.  1) I think there will be many
"outcomes":  AI will recognize the value of preserving,
diversity, including humans and their cultures, better
than we do, and real intelligence doesn't want an
"outcome", since that implies a "winner", and in an
evolving universe, there is no static end state.
2) Progress will accelerate, but along the same curve
as always ("soft takeoff") rather than with a spike.
3) I think there will be millions of AIs, or more, with
motivations as diverse as those of humans.
4) Technological progress per se is not bad, and
unaugmented humans may be its beneficiaries rather
than its victims.

> We can speculate that there will probably be wars between sentient
> machines, challenging each other on increase in intelligence and
> capability.

I think it's our human perspective that translates
"challenge" to "war", either between machines or
between machines and humans.  It's an argument for
wanting AIs to be unlike humans -- we may not
understand their motivations, but that may be a good
thing, since they will presumably win any game we
understand, and we don't want to lose our game.
>
> Regardless, we cannot run the risk of whatever machine comes out on
> top, being "unfriendly" towards flesh and blood entities.  Well, there
> are a few people who would prefer to be uploaded regardless, but they
> do not speak for me, nor would I associate with them.

I don't think risk can be avoided.  Any self-modifying
entity (including a human) can modify itself to something
you don't like.  I admit a preference for being
augmented rather than uploaded; I certainly don't see
myself uploading to a machine that is unfriendly to
humans, whether augmented or not, unless the only
alternative were destruction.  Any "me" that would
do such a thing wouldn't be me, so "I" would unfortunately
have already been destroyed in such a case.  I'll try
not to get painted into such a corner.
>
> Open source for AI development?  I agree with you there.  All we need
> is some company like Micro$oft developing their proprietary AI program
> that leads to the Singularity - then we are truly and indeed all done
> for.  Legislation for safeguards would also probably not work, as some
> maniac somewhere in the world would develop the key to super-AI
> regardless.  IMO the only hope for staving off disaster is to promote
> awareness of the problem (then hope for the best that people will do
> the right thing, ha).  When I see places, yea even the WTA, doing
> that, I might even donate.
>
"People" sounds like "all people".  Only some people,
maybe only a few, are needed; most will do nothing, nor
even have the least idea what can/should be done
(assuming there is a "should").  Awareness of "the
problem" is most likely to foment panic and the kind of
useless safeguards you mention, which would probably
hinder research that we *want* to happen.

(Pardon me while I have a talk with myself:  I have
been writing this *one* message for a solid hour.  Was
it worth it?  To me or to anyone?  Almost 10% of my waking
day?  What *would* it be worth, and how could I hold
myself to it, keeping in mind that there is the rest of
CryoNet to read, never minding any other mailing lists?
<despair>  My only hope seems to be not to respond to
email messages, no matter how valuable I think what I have
to say might be.  Do most people just hold their tongues
when they disagree, or do they agree, or do they hope that
someone (like me?) will speak up?  And there goes another
ten minutes.  Give up and move on.)

Live long and prosper,
Kennita
--
Vote Ron Paul for President in 2008 -- Save Our Constitution!
Go to RonPaul2008.com, and search "Ron Paul" on YouTube
