X-Message-Number: 12720
From: Thomas Donaldson <>
Subject: some explanations and examples
Date: Sat, 6 Nov 1999 00:39:06 +1100 (EST)

Some replies, since they seem necessary:

For Daniel Crevier: 
Please understand that I am NOT claiming that building independent
creatures is impossible. I am drawing some important distinctions. The
central issue is whether the actions prescribed by the program of a
computer-controlled robot were written out by someone else or arise
from the computer entirely on its own.

First of all, for this to happen you need not only a computer but a
robot capable of acting in the world. Most programs on computers are
written by human beings (yes, they used computers to help them, but
that is hardly the same thing as a program written entirely by a
computer). The behavior of a robot controlled by such a program is
therefore controlled by the author of the program, and the author is
the person responsible.

The mere fact that the author of such a program may not be able to predict
just what it will do is no more surprising than with any other device. The
author of course wants the program to do some things, within limits, but
total prediction is out of the question even with simple programs,
especially ones that respond to different data.
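
As a small illustration (a toy sketch in Python; the names and numbers
are invented purely for illustration, not taken from any real robot),
even a two-line rule cannot be "predicted" by its author without
knowing the data it will meet:

    import random

    def controller(sensor_reading):
        # Every rule here was written out in advance by the author.
        if sensor_reading > 0.5:
            return "turn left"
        return "turn right"

    # The world supplies data the author never sees in advance.
    for reading in [random.random() for _ in range(5)]:
        print(reading, "->", controller(reading))

The author prescribed every action the program can take, yet the actual
sequence of actions depends entirely on the readings it receives.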

In my own case, yes, I was INFLUENCED by many people, but CONTROLLED by 
no one, and influence in any case is hardly the same as outright 
prescription; i.e., my "program", if you wish to call it that, came from
myself, though it was affected by many other events. Not only that, but
because it grew up in just that way, it is NOT a symbolic entity at all,
any more than an auto engine is symbolic. (Yes, we could no doubt
eventually DESCRIBE it symbolically, but that does not change the fact
that it is not symbolic itself.)

Finally, my problems with building a robot able to learn and act quite
independently of us are twofold. First, no matter how powerful its
processors are, processing power alone provides no goals toward which
the robot will act. Second, if we were to provide such a robot with
independent goals, we would be doing something quite dangerous to
ourselves. And if we were to stop just short of that, and still write
the programs by which it acted, then WE would be the ones responsible
for any destruction it might cause, not the computer-robot itself, which
would have no independent goals but only those we put into it.

For Mike Perry:
Yes, that's exactly what I'm saying, though we have to be careful about
just what a "program" is. If it's one we wrote and can change, then 
the robot has no independent goals, only those we wrote into its program.
If we design our robot so that the program is on ROM, then we'd have
to do major surgery to change it, and to that degree it approaches
independence. Yet even if the program is on ROM, if it directs the robot
to come to us for any changes, the robot again fails to be independent.
If we design a robot so that it invents its own program, then we do have
a case of independent goals, but if we place too many constraints on
that invention, independence fails once more. And if its goals consist
solely of the desire to do what we ask it to do, even when that involves
giving us advice about what to ask, it again lacks independence, and
likely awareness too. (What do present computers do but just that?)
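
To make that last case concrete, here is a toy sketch (again in Python,
with names invented purely for illustration). Its "goals" are just a
table we wrote, and even its request for a new goal routes back to its
owner:

    AUTHOR_GOALS = ["recharge when battery is low", "map the room"]

    class ObedientRobot:
        def __init__(self, goals):
            self.goals = list(goals)  # written into it by us, not chosen by it

        def request_goal_change(self, ask_owner):
            # The program directs the robot to come to us for any change,
            # so even a "new" goal still originates with the owner.
            self.goals.append(ask_owner())

    robot = ObedientRobot(AUTHOR_GOALS)
    robot.request_goal_change(lambda: "fetch the newspaper")
    print(robot.goals)

Nothing in that sketch originates with the robot; every goal, including
the mechanism for acquiring new ones, was written out by its author.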

To apply my test, you look first at independently living creatures, the
kind we see all around us. No person gave them their goals, and no one
designed them (even if we've gone out and modified some of them
genetically, that is hardly a complete design). How closely does our
robot approximate such creatures in its behavior?

And there's also a second test: who is really responsible for the actions
of this computer-robot? If it acts independently of us, not only at that
moment but throughout its history, then it counts as an independent
living creature just like the others (the fact that it is a robot is
irrelevant). If we write its program, we are causing its behavior. (And
yes, the computer program which beat a chess champion was NOT an
independent intelligence: the authors of that program are the ones
responsible for beating that chess champion.)

Most of all, the goals of an independent creature may be expressed
symbolically, but symbolic goals do not thereby become real goals. Once
we put a compiled program into ROM, the ROM itself ceases to be
symbolic; but if we don't use a ROM, we have a computer specifically
designed to do whatever its program prescribes for it. Its goals remain
symbolic only, though they are also the real goals of the people who
wrote its program.

And I hope that these words will at least explain what distinctions I
am drawing, and why.

			Best and long long life to all,

				Thomas Donaldson
