X-Message-Number: 15917
Date: Thu, 22 Mar 2001 11:18:44 -0500
From: Sabine Atkins <>
Subject: Re: is there a dichotomy here?

>Message #15911
>From: "john grigg" <>
>Subject: is there a dichotomy here?
>Date: Thu, 22 Mar 2001 03:13:58 
>
>Sabine Atkins replied:
>This is a false dichotomy. As far as SIAI is concerned, the superintelligent 
>AI we are planning to build will be a protector and a facilitator. It is also 
>intended to prevent us from doing harm to each other and ourselves.
>(end)
>
>I see, what we all need is a friendly and near all-powerful AI to watch over 
>us!  Heil big brother AI! ;)  Have you read "The Humanoids" series by Jack 
>Williamson? lol  Hey, they did protect humanity in the way we protect 
>animals in zoos...


Heil? Big Brother? I know you were just joking, but this is too serious for me: 
these words should not be associated with the AI we want to build. Big Brother 
didn't protect the people from harm; it inflicted harm on them when they refused 
to obey. Sorry for my lack of humor here. And for me the word "Heil" is connected 
with incredible injustice and the horrific murder of millions of people. Hardly 
a word I'd want to address a Friendly AI with.

Also, a Friendly AI would not put us in cages. It would support us in becoming 
superintelligent (and superFriendly :-) transhumans.

>
>I realize Eliezer's intent is to design a self-upgrading AI which would have 
>at the core of its programming the instructions to be benevolent toward 
>humanity.  BUT, a machine that can upgrade and transform itself may overcome 
>that programming, especially if paranoid, unreliable, and violent humans 
>confuse and provoke it.  The classic "HAL goes nuts" scenario may not be so 
>fictional a half-century or less from now.


Brian (my husband and chairman of SIAI) says: "HAL was a subhumanly intelligent 
AI when it came to his morality and even his logical thinking toward humans. But 
we are more interested in building an AI that will eventually achieve 
greater-than-human abilities in those areas. Once we validate that the Friendly 
AI has those abilities, we can let it continue growing."

>Sabine continues:
>In my husband's and my opinion, humans becoming superhumans without the 
>control of a Friendly AI is a recipe for disaster: Even if I'm superhuman 
>myself, having to deal with a whole bunch of moody, warm-fuzzy, violent, 
>confused (maybe even crazy), pretentious superhumans is not what I'm 
>wishing for. Even if I can't be killed/erased or terminated, the quality of 
>my life would be rather bad, as I'd have to spend precious time struggling 
>with and fighting off superhuman emperor wannabes.
>(end)
>
>Professor Xavier and the whole X-Men team really could commiserate with 
>you!  All those pesky supervillains hellbent on chaos and conquest can wear 
>you down after a while! ;)

hehe :-)

>I think a sysop AI system which oversees everything and enforces the law in 
>a way that does not violate our liberties (or eventually blow up in our 
>faces) is at least in theory a good idea, considering the alternative scenario 
>you gave.  But making it a reality will be the very challenging part.  And 
>at a certain point you may have a really hard time turning it off...

Eliezer's online document on Friendly AI will address these issues :-)

>It is of course easy to be somewhat negative about the incredible 
>undertaking your Singularity Institute has embarked on, rather than being a 
>tough-minded optimist.  I wish you all the very best and hope things turn out 
>as you envision.


Thank you very much, John :-)!  We're looking forward to seeing you at Extro-5.

>
>best wishes,
>
>John
>
--
Sabine Atkins  
http://www.posthuman.com/
--
Singularity Institute for 
Artificial Intelligence 
http://singinst.org/

