X-Message-Number: 15957
Date: Tue, 27 Mar 2001 23:52:03 -0800
From: Lee Corbin <>
Subject: Re: Trust In All-Powerful Lords

Eliezer S. Yudkowsky wrote a long reply to my post. (Thanks
to Sabine Atkins for having facilitated the exchanges, by
the way.)  Primarily, it consisted of the assertion that
the AI that they're working on---or the one that they hope
takes over the solar system---will be so advanced that it
will take control effortlessly, and that real violence
won't be necessary.

Some of the dialog, though not very good or realistic,
was nonetheless revealing:

> Corbin:  You're a living blasphemy.  I'll oppose you as long as I live.

(Why would I say that to a deity?)

> Sysop:   Yeah, I've heard a lot of that lately.
> Corbin:  You won't conquer me without a fight!

(A lot of the rest of the dialog was equally inane.)

> Sysop:   Corbin, I've already won.  I have the Theory of
> Everything.  I know the position of every atom on Earth
> and I have the technological capability to make arbitrary
> alterations.

In this fantasy, the AI is so overpowering that it even
knows the position (and momentum, I suppose) of every 
atom on Earth!

Well, that wouldn't be so hard to get used to!  All my
Christian ancestors had to deal with an Omnipotent Being
who wasn't even as nice as Eliezer's is going to be. :-)

No, the real point here, and what's been at stake in all
these exchanges, is this:  how successful are Professor
Yudkowsky and the others going to be in soliciting help
for their grand project?

No one on Cryonet, so far as I know, hopes that they'll
get an AI going that can take over the solar system. 
Sentiment here, which I share, is that a certain type
of anarchy would be both more realistic and preferable:
that hundreds or thousands of separate sovereignties,
perhaps federated together under evolved rules, would
come to dominate in any particular region.

The history of revolutionary movements is instructive.
Should the Singularity Institute ever get close to
achieving its aim, you can be sure that hundreds of
other groups---many of them present-day governments or
corporations---will also be hard at work towards the
same goal.  And competition between them, and the usual
mutual distrust that grows into hatred, will be the
rule.  I shall be both refuted and pleased if a miracle
occurs, and they all agree on the same Friendliness
Rules to animate their AIs.  (I don't blame them for
trying to write such rules---sounds like a good exercise.
I guess that I just object to the usual kind of gleeful
ruthlessness that always attends utopian projects.
"The state will have complete control", "you don't have
a chance", "I've already won. I have the Theory of
Everything", etc.)

> Incidentally, am I imagining things, or did you just
> get through saying that it would be morally OK for you
> to impose a Sysop on your own creations!?

Interesting.  An exceedingly bright man, Mr. Yudkowsky
doesn't intuit that what I do to my property is any
different from what he, or this AI I suppose, does to
me.  This bespeaks a cultural chasm.  Since I would
presumably have the right to run my creatures as slowly
as I like, perhaps giving some of them only a finite
amount of run time, their AI might feel justified in
running me as slowly as it likes (for the sake of higher
projects, of course!).  Either that, or it might demand
that if I make a creature in my own space, I'm obligated
from then on to devote some fraction of my resources to
it.  We see here how failure to appreciate PRIVATE
PROPERTY, and failure to appreciate LEAVING OTHER PEOPLE
ALONE, leads to complications, to say the least.

Lee Corbin
