X-Message-Number: 17977
Date: Mon, 19 Nov 2001 21:37:05 -0700
From: Mike Perry <>
Subject: Non-independence of Values

Thomas Donaldson (*17967) wrote:
>And don't
>argue that partial independence is enough: I'm not talking about
>abstract free will but about the ability of the robot to make
>decisions to suit its own values rather than our own.

In a sense, Thomas, I'd say you're right: we don't want machines 
making decisions based on values entirely different from our own. But we 
don't want that in people either. If the future world is to be a happy one, 
I think a certain dependence or convergence of values must prevail among 
all, or at least all sufficiently advanced, beings. We should all be 
benevolent, loving, kind, and considerate, for instance, which implies some 
sharing of values. The space of "reasonable" values is simply not large 
enough to support unlimited independence. Within those constraints, 
though, I can see artificial devices having a will of their own.

Mike Perry
