X-Message-Number: 30158
From: "James Clement" <>
Subject: Why help the WTA? (was: To James Clement: A reply, and a ques...
Date: Mon, 17 Dec 2007 10:08:33 -0800

Flavonoid wrote: To James Clement: A reply, and a question

The WTA offers autographed copies of "The Singularity Is Near" to our
donors primarily because it's already a bestseller, particularly among
our crowd, and because it offers deep insights into the convergence of
biotech, nanotech, and artificial intelligence.  I have heard Bruce
Sterling's speech on why a Singularity is not coming, as well as Ray
Kurzweil's and Vernor Vinge's speeches on, respectively, why it is and
what may happen if it doesn't (all three are available as podcasts on
the Long Now's website: http://www.longnow.org/projects/seminars/).  I
tend to agree with Vernor Vinge that there are so many ways to
eventually reach a Singularity that it's most likely going to occur.
That said, the WTA is not a Singularitarian society (a great many of
our members don't agree with some or all of Ray's views), and our own
focus has very little to do with the Singularity beyond promoting
organizations such as the Lifeboat Foundation and SIAI, which (see
below) are dedicated to limiting our existential risks and encouraging
a "Friendly AI" future.

I don't know how much time you've spent on the Singularity Institute
for Artificial Intelligence (SIAI) website.  I've been fortunate enough
to attend their Singularity Summit and to get to know both Eliezer
Yudkowsky (a Research Fellow of the SIAI) and Tyler Emerson (the
Executive Director of the SIAI).  The Institute's mission is stated as:
"In the coming decades, humanity will likely create a powerful
artificial intelligence. The Singularity Institute for Artificial
Intelligence (SIAI) exists to confront this urgent challenge, both the
opportunity and the risk."  From what I've seen, the SIAI is almost
entirely dedicated to the "risk" side of the equation.  Certainly 90%
of the AI discussion and Q&A at the Singularity Summit had to do with
the risks of creating unfriendly AI and how best to avoid doing so.
The SIAI would probably go so far as to endorse banning Strong AI
research if the "Friendly" characteristic could not be assured.

As you may know, some individuals who have thought about the subject
have concluded that they'd prefer to see an AI, rather than a human,
become the first posthuman intelligence, because we already know that
we're "wired" for many behaviors which are not necessarily benevolent.
In fact, on page 599 of Ray's book, The Singularity Is Near, he states:
"See Singularity Institute, http://www.singinst.org...  Yudkowsky
formed the Singularity Institute for Artificial Intelligence (SIAI) to
develop "Friendly AI," intended to "create cognitive content, design
features, and cognitive architectures that result in benevolence,"
before near-human or better-than-human AIs become possible...."

Likewise, at the WTA's TransVision 2007 conference, we had several
individuals talk about the risk of unfriendly AI.  The principal
speaker on this subject was Eliezer Yudkowsky; however, Philippe Van
Nedervelde of the Lifeboat Foundation also spoke briefly on the
existential risks of Strong AI.  Our co-founder, Nick Bostrom, has
stated: "Our approach to existential risks cannot be one of
trial-and-error. There is no opportunity to learn from errors. The
reactive approach - see what happens, limit damages, and learn from
experience - is unworkable. Rather, we must take a proactive approach.
This requires foresight to anticipate new types of threats and a
willingness to take decisive preventive action and to bear the costs
(moral and economic) of such actions."  This is why the WTA supports
the Lifeboat Foundation.

As I understand it, your other question was why an organization such as
the World Transhumanist Association should exist - why not just fund
the particular individual organizations (nanotech, cryonics, biotech,
AI) directly?  My answer is that there are numerous well-funded Neocon,
Bioconservative, and Luddite groups which are spending tens of millions
of dollars each year to spread their memes that all such technologies
are "playing God" and should be banned.  The WTA serves as an
international umbrella organization to educate the public about all of
these technologies, discuss the ethical issues, and facilitate their
development.  To date, we have over 4,700 members.  As an umbrella
organization that discusses all of these subjects, we're able to
cross-pollinate people interested in one field with many other fields.
Thus, although someone might come to us to learn more about AI, they'll
get exposed to life-extension, cryonics, and nanotech too.  Through our
student outreach and our digital magazine we hope to do the same thing
on a bigger scale - encouraging people to learn about, fund, and/or
work in these fields.  I've been on the CryoNet list for several years
and continually hear people talk about how to market cryonics to the
public - well, IMHO, the WTA is one such way.

Best regards,

James Clement
www.transhumanism.org
