X-Message-Number: 948
Date: 03 Jul 92 02:17:36 EDT
From: Thomas Donaldson <>
Subject: Re: cryonics: #941 - #943

I note that Tim Freeman replied to my short message and said very
little about the longer message, which discussed the economic issues at
much greater length (and gave a better explanation than my own rather
brief version).

I still think that Freeman's argument is weak. Why should there be a 
tragedy of the commons at all? The CEO is responsible to the SHAREHOLDERS,
who, we notice, are human beings. If a CEO Robot is invented (certainly
over the concentrated opposition of every human CEO, and every human who
has ever dreamed of being a CEO), that robot will remain owned by and 
(somehow) under the control of the shareholders. Whatever it does, it will
not act against the shareholders' interests. In other words, at some stage
in the chain of command, humans would still exercise actual control.

This is not an argument that completely autonomous robots cannot, by the
laws of physics, be built. This is an argument that NO ONE will find it in
their interest to build one. Robots and computers are our TOOLS; we use
them to satisfy our own desires. Of course, desires and knowledge can't
really be separated: so when we use these robots as tools, we must also
learn about the problems for which we use them. This situation is very much
the same as what goes on now with our technology: when we learn how to use
a computer program (say for CAD-CAM) we still need to know something of
what we want, and something of what is possible. And a CAD-CAM program
running on a powerful computer ALREADY has abilities which we can't match.
(That's why we use the program rather than our own hands and brains.)

Even if no such computers are built, we must still think about accidents. But
even though accidents could cause a lot of damage, the INDEPENDENT WILL of
another being would still be lacking in our tools. We'd just have an
accident, different in kind from those that happen now with our tools,
but still an accident rather than a rebellion. HUMAN slaves rebel. The
worst that a machine can do is malfunction.

This issue isn't really very new. If human beings had all decided on mass
suicide, they could have done so thousands of years ago. Clearly they have
not, since we are here now; and I don't expect any change in this basic
drive. No one will decide to commit the elaborate suicide of building a
machine that would proceed to outcompete him or her; those who do want to
commit suicide have far easier means available to them.

Of course, we will probably also change ourselves over time, and so 
metamorphose into beings with more abilities (of one kind or another) 
than we have now. And any ability of a machine (after all, designed and
built by us in the first place) could be incorporated into ourselves. But
that is not the essential reason why machines cannot take over. Only human
slaves can do that, since we would not build machines with any will of their
own at all.
				Best
					Thomas Donaldson
