X-Message-Number: 3951
From: Ralph Merkle <>
Subject: Uploading
Date: Sat, 4 Mar 1995 18:31:56 PST

I agree with Robin's central thrust: it's not clear that the fine
points of philosophy will have a major impact on the outcome.

Thomas Donaldson said:
>Nor do I expect such partial brains to stand up and ask for any rights. Who
>would want to make them with the will to do that? We use our machines to do
>what WE want, not what THEY want.

Hans Moravec.  And enough others that blocking the development of such
beings would likely require an extremely vicious campaign of terror, which
would still be unlikely to work.

In many respects, I share Thomas's concerns on this subject: we build
computers to better the human condition.  Creating a world where biological
humans are shunted to the scrap heap is an undesirable thing to do.  At
the same time, it is unreasonable to assume that computers won't become
*much* smarter than us.  And if Hans Moravec elects to build an artificial
brain modeled directly on his own, what should we do?  Shoot him and
destroy his "mindchild?"  I for one think we should respect the rights of
such a being just as we respect the rights of our fellow humans (well,
actually, I hope we do better than that....)

What choice do we have?  By recognizing and protecting the rights of all
intelligent beings we can hope to create a world where everyone (and
everything) can live, if not in perfect harmony, at least in safety: knowing
that their rights will be respected regardless of who (or what) they
find themselves dealing with.  If we don't, we risk the creation of a
new (and very bright) underclass that will presumably dislike its
inferior status and strive to alter it.  We might not like the solutions
that such minds cook up.

Or we could seek to repress the development of such autonomous artificial
minds.  Everywhere.  For all time.  This strikes me as somewhat difficult.
