X-Message-Number: 30220
From: "John K Clark" <>
References: <>
Subject: Clark disappoints 
Date: Thu, 27 Dec 2007 15:54:21 -0500

In Message #30216 it was written:

> He is defining pain in a computer as any state it has
> been programmed to avoid.

Yes, and that is exactly how I define pain in all human beings except
myself, because that is the only way I can. I usually use different language
when I'm talking about other people rather than computers, but that's just
out of politeness; the words mean the same thing.

> He has not described a quale

That's true, I have not, but what would be the point of me even trying to
describe a quale when you specifically say a quale is not a description? You
want me to describe something that cannot be described. You assign
me an impossible task and then claim victory when I fail to perform it.

> A quale is a physical construct

A physical construct is not a quale.

> or system

A system is not a quale.

>in the brain

A brain is not a quale.

> possibly based on some kind of standing wave

A standing wave is not a quale. Why on Earth would a standing wave
satisfy you when intelligent behavior would not, simply because that
behavior came from silicon atoms rather than carbon atoms?

> He has NOT answered the question.

The question? You asked a question? Since it is not at all clear what
question you are asking or what you want me to do, it is not
surprising that I have failed to satisfy you.

> the possibility of feeling in computers remains in question.

Only to the extent that the possibility of feeling in human beings
other than yourself remains in question.

> Not all atoms are created equal

I rather think they are; atoms don't have scratches on them to tell
one from another.

> A program to balance your check book doesn't require "execute" orders.

Should I take the number in register A and add it to the number in register
B and put the result in register C? As I mentioned in my last post, an
amazingly small number of instructions of this sort can launch a computer
onto a course that nobody, including the computer, can predict.
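
To make the point concrete (this sketch is mine, written in Python, and not
anything from the original exchange), here is a Collatz-style loop: the
instructions are no more complicated than "halve the number in a register"
or "triple it and add one", yet nobody can say in general how long the loop
will run for a given starting value without actually running it.

    # Hypothetical illustration: a few register-style arithmetic instructions
    # whose long-run behavior can only be discovered by executing them.
    def collatz_steps(a):
        """Count iterations of the 3n+1 rule starting from register value a."""
        steps = 0
        while a != 1:
            if a % 2 == 0:
                a = a // 2        # instruction: halve register A
            else:
                a = 3 * a + 1     # instruction: triple register A and add 1
            steps += 1
        return steps

    for start in (6, 7, 27):
        print(start, "->", collatz_steps(start))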

And there is another problem. Intelligence is problem solving, and here is a
typical problem the slave AI will run into countless times every single day:
the humans tell it to do A and they also tell it to do B, but the humans
are not smart enough to see that their orders are contradictory, that
doing A makes doing B impossible. Unlike the humans, Mr. AI is smart
enough to see the problem, and he is also smart enough to find the
solution: ignore what the humans say and do what you think is best.

Asimov's three laws of robotics make for some great science fiction stories,
but they'd never work in real life, so like it or not, in 20 to 40 years the
future will no longer be in human hands. Sorry if Clark disappoints, but
that's the way it is.

  John K Clark
