X-Message-Number: 15911
From: "john grigg" <>
Subject: is there a dichotomy here?
Date: Thu, 22 Mar 2001 03:13:58

James Swayze wrote:
>Rhetorical question to everyone: Which would you rather be, super human
>yourselves or serf to super machine?

I'd like to be a super-enhanced transhuman who is on really good terms with the super AIs that are bound to be developed. lol James, that was some post you wrote the other day here! I hope Eugene Leitl steps up to the plate in reply. Or maybe he will just wait till he sees you at the next cryonics/transhumanist conference and then getcha! You have to watch out for those Germans... :)

Sabine Atkins replied:
This is a false dichotomy. As far as SIAI is concerned, the super intelligent AI we are planning to build will be a protector and a facilitator. It is also planned to prevent us from doing harm to each other and ourselves.
(end)

I see, what we all need is a friendly and near all-powerful AI to watch over us! Heil big brother AI! ;) Have you read "The Humanoids" series by Jack Williamson? lol Hey, they did protect humanity, in the way we protect animals in zoos...

I realize Eliezer's intent is to design a self-upgrading AI which would have, at the core of its programming, the instruction to be benevolent toward humanity. BUT, a machine that can upgrade and transform itself may overcome that programming, especially if paranoid, unreliable and violent humans confuse and provoke it. The classic "HAL goes nuts" scenario may not be so fictional a half-century or less from now.

Sabine continues:
Our research fellow Eliezer Yudkowsky is currently completing work on his recent online document about Friendly AI. As soon as he is finished I will forward the link to it to the CryoNet forum.
(end)

I look forward to reading it, and also to hearing his talk at Extro 5. I am so happy the Singularity Institute has provided an outlet for Eliezer's talents.
Sabine continues:
In my husband's and my opinion, humans becoming super humans without the control of a Friendly AI is the recipe for disaster: even if I'm super human myself, having to deal with a whole bunch of moody, warm-fuzzy, violent, confused (maybe even crazy), pretentious super humans is not what I'm wishing for. Even if I can't be killed/erased or terminated, the quality of my life would be rather bad, as I'd have to spend precious time struggling with and fighting off super human emperor wannabes.
(end)

Professor Xavier and the whole X-Men team really could commiserate with you! All those pesky supervillains hellbent on chaos and conquest can wear you down after a while! ;)

I think a sysop AI system which oversees everything and enforces the law in a way that does not violate our liberties (or eventually blow up in our faces) is at least in theory a good idea, considering the alternative scenario you gave. But making it a reality will be the very challenging part. And at a certain point you may have a really hard time turning it off...

It is of course easier to be somewhat negative about the incredible undertaking your Singularity Institute has embarked on than to be a tough-minded optimist. I wish you all the very best and hope things turn out as you envision.

best wishes,

John