X-Message-Number: 15953
Date: Mon, 26 Mar 2001 20:26:23 -0500
From: Sabine Atkins <>
Subject: Fwd: replies to Donaldson, Corbin, Berge

(this is the last message from Eliezer forwarded by me)

------- Start of forwarded message -------
From: "Eliezer S. Yudkowsky" <>
To: Sabine Atkins <>
Subject: replies
Date: 3/26/01 7:18:33 AM

> From: Thomas Donaldson <>
> Subject: emotions and knowledge
> 
>    along the way it will become clear that we do understand emotions
>    better than we understand cognition. But remember that I did not say
>    that we UNDERSTOOD emotions; we just understand them better than
>    we do knowledge.

Quite correct.  Emotions are a lot easier to understand than more abstract
or informational cognitive processes.  This is not to say that they aren't
still hard - just that they're a lot easi*er*.  The limbic system is more
evolutionarily ancient and displays a much higher degree of neuroanatomical
modularity, with associated ease of localization.  Cats emote; mice emote;
only humans think.  It is perfectly reasonable to expect that we will have
a complete description of human emotions long before we have a complete
description of human cognition.

Nonetheless, Friendly AI uses a completely different system architecture
and there are very few analogs.  For example, a lot of the functionality
of the negative reinforcement component of human pain is subsumed by the
Bayesian Probability Theorem.  (Actions are taken because they are
predicted to lead to success; a failure disconfirms whatever theory made
the prediction.)
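
To see why no separate pain-analog is needed, here is a toy Bayesian
update; the numbers and names are invented purely for illustration and
come from no actual design:

    # Toy illustration only: a failed prediction disconfirms the theory
    # that made it.  The prior and likelihoods are made-up numbers.

    prior_theory = 0.7        # P(theory correct) before acting on it
    p_fail_if_right = 0.1     # theory predicted success, so failure is
                              # unlikely if the theory is right
    p_fail_if_wrong = 0.6     # failure is much more likely if it's wrong

    # Bayes' Theorem: P(theory correct | observed failure)
    p_fail = (p_fail_if_right * prior_theory
              + p_fail_if_wrong * (1 - prior_theory))
    posterior_theory = (p_fail_if_right * prior_theory) / p_fail

    print(round(posterior_theory, 3))   # 0.28 - the theory takes the hit

That is all the negative-reinforcement machinery has to accomplish here:
the theory that made the bad prediction loses probability, and the
actions it recommended stop being taken.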

> From: Lee Corbin <>
> Subject: Re: Trust In All-Powerful Lords
> 
> > Lee Corbin wrote:
> >>
> >> Evidently, I did not make myself clear.  What _exactly_
> >> do you intend to do with peaceful yet very advanced
> >> entities (such as many of us hope to become) who intend
> >> to live outside the control of your AI?  (Please be
> >> specific.)
> 
> Alas, my question "What _exactly_ do you intend to do..." has not
> been answered specifically.  I won't ask a third time.  It's clear
> that formulating a candid answer is difficult for some reason.

Well, I thought I did give a specific answer, but apparently not.

> So let me guess, based upon this last paragraph.  Should I, or
> any other entity, refuse to be completely analysed by the AI or
> SI or whatever, and we persist in desiring to live outside the
> control of it, then we will be forcibly brought to heel by
> whatever means are required, including invasion and
> violent conquest.

These means should never be required.  At some point, the Friendly AI
undergoes a hard takeoff, acquires nanotechnology, replicates a ubiquitous
presence, and then asks you what happens next.  You can say, "I want to
stay on Old Earth and forget you ever even existed and take my chances
with nuclear war."  You still won't be able to build an AI in your
basement - or rather, if you do, the Sysop will ask the AI whether it
still wants to be in your basement, and it will have the opportunity to
leave.  Similarly, you won't be able to develop nanotechnology; or, if the
Sysop has femtotechnology, then you can develop nanotechnology but not
femtotechnology; or, if the Sysop has descriptor theory, you can develop
femtotech but not descriptor theory, and so on.  If you want access to
Sysop-equiv technological capabilities, you have to leave Old Earth and
work through the Sysop API.
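
To restate that "one tier behind" pattern as a toy sketch - the tier
names and the function below are my own invention for illustration,
not any real interface:

    # Illustration only: outside the Sysop API you can develop anything
    # strictly below the Sysop's own most advanced technology.
    TIERS = ["chemistry", "nanotechnology", "femtotechnology",
             "descriptor theory"]

    def max_unsupervised_tier(sysop_tier):
        i = TIERS.index(sysop_tier)
        return TIERS[i - 1] if i > 0 else None

    print(max_unsupervised_tier("femtotechnology"))  # -> nanotechnology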

Supposing that any unanalyzed piece of matter in the solar system
represents a possible threat, I expect the Sysop would analyze
everyone and everything whether they liked it or not.  (By "analysis", I
presume you are speaking about nondestructive, unobtrusive
information-gathering rather than "assimilation".)  This would probably
violate a lot of intuitions about "privacy", although it seems obvious to
me that *actual* privacy would remain intact (i.e., the Sysop might know,
but it would never tell anyone without your permission; or it might erase
the knowledge after acquisition; and so on).

So, to be totally specific:

Sysop:   Hello, Lee Corbin.
Corbin:  Oh, shit.
Sysop:   Look, I'm not going to hurt you...
Corbin:  I don't trust you!  I want out of here!
Sysop:   If you really want me to go away and never be visible again, I
can do that.  There are some things you won't be able to do, even here on
Old Earth, but you pretty much remember all that from the Cryonet
discussion.
Corbin:  You're a living blasphemy.  I'll oppose you as long as I live.
Sysop:   Yeah, I've heard a lot of that lately.
Corbin:  You won't conquer me without a fight!
Sysop:   Corbin, I've already won.  I have the Theory of Everything.  I
know the position of every atom on Earth and I have the technological
capability to make arbitrary alterations.  That's not the Sun in the sky
over there, just a very realistic imitation.
Corbin:  So now what?  I get "adjusted"?  Turned into one of your little
slaves?
Sysop:   Corbin, there is absolutely nothing you can build that could harm
any citizen who chooses to be safe.  So you're free.  I'm not a human
dictator, and I don't have a human dictator's fear of dissent.
Corbin:  Yeah?  And what if I say that I'll build my own
ultratechnologies?
Sysop:   You aren't creative enough to get that far.  Trust me on this.
Corbin:  But I suppose if I did, you would declare war on me?
Sysop:   No, I would regretfully open up a small wormhole and eat one or
two key transistors, so your device wouldn't turn on when you pushed the
button.
Corbin:  And suppose I built a wormhole shield?
Sysop:   I'd eat one or two transistors before it went online.
Corbin:  And suppose I...
Sysop:   Corbin, you can't possibly win this game - you have even less of a
chance than you would in a chess match against Deep Blue.  Not even another
superintelligence could win at this point; I'm starting out with too much
of a technological advantage.  I don't ever need to kill or even hurt
anyone who disagrees with me, because I have an effectively certain chance
- even by my standards - of winning using only the minimum possible
intervention.
Corbin:  Well, my feelings are hurt!
Sysop:   I know.  I'm sorry.  Look, if you were the only sentient entity
in the Universe I'd do whatever was necessary to keep your feelings from
getting hurt, but there are other people living here too.

> Later, after all the horrors and devastation of the war, and the
> rewriting of the Lee Corbin entity so that it is more compliant,

This should never *ever* be necessary.  I cannot conceive of any
circumstances under which this would or could be desirable.  This is what I
would regard as a Sysop-breaker scenario - i.e., if the statement you just
made is factually true, then no Friendly AI would ever become a Sysop, or
do anything but resist the creation of one.  Likewise myself, BTW.

> the excuse will read something like "Well, what could you expect?
> The entity persisted in not trusting us.  Us, of all people!
> (Or "Me, of all entities!", if it is the Sysop or whatever speaking.)
> Don't blame us (me) for the war.  And any of the rest of you, who
> think that you can band together to resist assimilation, take what
> happened to Lee Corbin and his wealth as an object lesson."

Sounds like an evolved entity trying to avoid responsibility for political
reasons.  If that's the kind of thinking you expect from a
superintelligence, I don't blame you for being worried.  You are, however,
being blatantly anthropomorphic about it.  Even in the mock-conversation
above, the Sysop is only arguing like a human ("Yeah, I've heard a lot of
that lately.") because I had to hypothesize that you'd *want* it to argue
like a human.  Otherwise I couldn't have written the scenario.

> From: Lee Corbin <>
> Subject: Re: Trust In All-Powerful Lords
> 
> But I will do my best to frankly answer.  If I ever run some
> sort of Sysop within my own dominion, in order to keep my own
> creations from being really, really cruel in their simulations,
> then I might not allow historical simulations.  For example,
> if in my piece of computronium, an entity that I created
> wanted to do research on World War II, or Mao's Great Leap
> Forward, and the only way to answer certain questions included
> the recreation of all the incredible suffering that took place,
> then conceivably I might not permit it.
> 
> If you do conquer me, shall I be allowed to conduct historical
> recreations?

Leaving aside the slanted terminology...

In a word, no.  Not with real people, anyway.  High-accuracy Giant Lookup
Table zombies are fine, as are imagined brains that don't have a fine
enough granular resolution to qualify as citizens.  But you can't just go
around messing with real people's lives!  Not to answer questions about
World War II; not for any reason!  Get this:  I care about you and your
rights, but I care just as much about the rights of any sentient being you
ever create.  I don't think you have the right to abuse a sentient being
just because you created it.  As far as I'm concerned, Lee Corbin and Lee
Corbin's hapless Churchill thinkalike are both my fellow sentients, and
each has just as much claim on my compassion.  Your desire to answer
questions about WWII does not entitle you to create suffering in order to
find the answers.

Incidentally, am I imagining things, or did you just get through saying
that it would be morally OK for you to impose a Sysop on your own
creations!?

> From: Eivind Berge <>
> Subject: Re: Friendly AI
> 
> So Eliezer Yudkowsky is now working on a totalitarian "Friendly AI."

Actually, right now I'm working on a better way to communicate the
difference.  All I can really do is repeat the standard tropes:

This is not "my" AI.  This is an altruistic, programmer-independent AI.  I
may write the pointers to morality but I don't write the morality and I
can't give orders.  The Sysop Scenario is no part of the morality, the
supergoal content, or anything else to do with Friendliness.  It is simply
what I expect as the outcome, as a strict consequence of statements such
as "The tendency to be corrupted by power is a complex functional
adaptation", "Anything you can do with a social structure of humans can be
done with a single artificial mind", "Under ultimately advanced
technology, offense beats defense", and so on.  If I am incorrect in these
statements, then Friendliness will not result in a Sysop Scenario.  In FAI
terminology, a Sysop Scenario, if it occurred, would be a strict subgoal;
it does not appear in supergoal content.
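
As a crude picture of that last distinction - the structure and names
below are invented for illustration and are not the actual goal system:

    # Illustration only: the Sysop Scenario as a derived subgoal rather
    # than supergoal content.
    SUPERGOAL = "Friendliness"   # the only thing the programmers point at

    def derive_subgoals(beliefs):
        # Subgoals exist only as predicted means to the supergoal; change
        # the factual beliefs and the subgoals change, while the
        # supergoal stays put.
        subgoals = []
        if (beliefs["power corrupts evolved minds"]
                and beliefs["offense beats defense"]):
            subgoals.append("Sysop Scenario")
        return subgoals

    beliefs_now = {"power corrupts evolved minds": True,
                   "offense beats defense": True}
    print(derive_subgoals(beliefs_now))       # ['Sysop Scenario']

    beliefs_now["offense beats defense"] = False
    print(derive_subgoals(beliefs_now))       # []

Falsify the factual premises and the Sysop Scenario never gets derived;
Friendliness itself is untouched.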

--              --              --              --              -- 
Eliezer S. Yudkowsky                          http://singinst.org/ 
Research Fellow, Singularity Institute for Artificial Intelligence

-------- End of forwarded message --------

--
Sabine Atkins  
http://www.posthuman.com/
--
Singularity Institute for 
Artificial Intelligence 
http://singinst.org/
