X-Message-Number: 15962
Date: Wed, 28 Mar 2001 13:05:51 -0500
From: "Eliezer S. Yudkowsky" <>
Subject: Re: Whose future will it be?

James Swayze wrote:
>
> Sounds like Eliezer won't be happy unless he is by himself with his AI
> sysop entity. The only logical way given these parameters to bring risk
> to zero is not allow variables, like warm-fuzzy outside-the-box
> variables, to exist at all. The AI has god powers so it can create new
> and tractable compliant companions a plenty, what need has he of us?

Ah, yes, the "Argument from Star Trek".

This is merely the Hollywood conception of superintelligence - you know,
that logical entities behave like emotionally repressed humans, and
therefore hate all signs of the emotion that they have repressed in
themselves.

I should note, for the record, that intelligence is far more unpredictable
than any warm-fuzzy feelings ever will be.  I'm not particularly trying to
deprecate emotions at the moment, but the last thing I would call them is
"unpredictable".

That said, it's very easy to write an equation in which huge amounts of
variance don't affect some overall quality.  Conservation of momentum, for
example, restricts our Universe to a single planar slice through phase
space, and yet within that plane, there are planets, star systems,
people.  All the *interesting* parts are unaffected.

What is conserved, in the Sysop Scenario, is your inability to kill me
without my permission.  A nanocomputer does not have zero probability of
error, but 1e-64 per transistor operation is good enough.  The way you
think, and the way you act, and the things you choose to do, don't
significantly increase the probability of a single failure (involuntary
death).  That level may be zero or it may merely be absurdly small; the
point is that what you regard as valuable variance doesn't affect the
probability of failure either way.
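
(A minimal back-of-the-envelope sketch, in Python, of why "absurdly
small" stays absurdly small: the 1e-64 per-operation figure is the one
quoted above, while the total operation count is a made-up placeholder,
not a claim about any actual hardware.)

import math

p_error = 1e-64   # error probability per transistor operation (figure above)
ops = 1e40        # hypothetical total operations; pure placeholder

# P(at least one error) = 1 - (1 - p)^N, computed in log space because
# (1 - 1e-64) rounds to exactly 1.0 in floating point.
p_failure = -math.expm1(ops * math.log1p(-p_error))

print("%.3e" % p_failure)   # ~1e-24: still vanishingly small

The point of the arithmetic is that nothing you choose to do inside that
bound moves the bound by any amount worth mentioning.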

Your suggestion that an AI would fear all variance is anthropomorphic;
your suggestion that it would fear variance enough to kill you - *really*
undesirable under a Friendly AI's supergoals - is absurd.

> Yeah if you have your way whoever minds the sysop can create for
> themselves their ideal world and do away with all the troublesome rest
> of humanity since as you've said above "who needs em?".

This shows a fundamental misconception about the nature of Friendly AI.  A
Friendly AI, once built, achieves total independence of the original
programmers, just as normative altruists such as Martin Luther King
achieve independence of their parents and of the society that raised
them.  The task of the Friendship programmers is not to give orders, or
even to specify morality, but to provide enough core Friendliness that the
AI (a) wants to be Friendly and (b) can understand and correctly interpret
the instruction "Be the best Friendly AI you can be; be the Friendly AI
that the best Friendship programmers in the world, not us, would have
created, or be a better Friendly AI than even that."

I don't want to get too far into this before "Friendly AI" is published,
since that's what deals with all the structural complexity required to
flesh out the above paragraph, and with all the objections that I'm sure
are running through your mind.

The overall point is that given a certain amount of core Friendliness, the
end result is supposed to converge to a unique, or at least strongly
convergent, point; certainly, "sufficient" convergence is enough to wipe
out, except as a historical curiosity, the fact that Eliezer Yudkowsky is
one of the original programmers.

In particular, the Sysop Scenario is *not* part of Friendliness content,
something I cannot overemphasize.  The Sysop Scenario is simply
what I think is the best way to handle the Transition to post-Singularity
life.  If I'm wrong, a Friendly AI would tell me and my overflung
dictatorial ideas to go jump in the lake - because I don't like coercion,
and that *is* something that becomes a part of Friendliness content.

> I'll trust humanity however painful the growing pains to become wise
> enough to govern ourselves. It's time to grow up and grow out of our
> need for parental super entities.

If you're right, then I would expect a Friendly AI turned Transition Guide
to enhance, upload, upgrade, hand off independent nanotech, et cetera, to
whoever asks for it, and the end state would be a stable set of
independent polises a la Greg Egan's _Diaspora_.

> You can't imagine our thinking powers becoming powerful enough and wise
> enough to simply grow up?

I don't think it's probable that everyone, including the Amish, will
voluntarily choose to grow up; and I don't think it's ethical to force
people to grow up without their consent.

If everyone does grow up voluntarily, I'll be pleased as toast, and
obviously no Sysop could or would be necessary.

--              --              --              --              -- 
Eliezer S. Yudkowsky                          http://singinst.org/ 
Research Fellow, Singularity Institute for Artificial Intelligence
