X-Message-Number: 30213
From: "John K Clark" <>
References: <>
Subject: Re again, feeling vs. computing
Date: Wed, 26 Dec 2007 00:51:56 -0500

In Message #30207:

> a computer might eventually surpass human  intelligence

I agree.

> you cannot (with certain obvious exceptions) program
> a computer to give different outputs for the same input,
> whereas this happens often with brains.

I don't believe anybody has proved that is true of brains, and even
if it is true, I am certain nobody has shown why it would be something
to be proud of. However, in 5 minutes I could write a very short computer
program that will behave in ways NOBODY or NOTHING in the known
universe understands: a program that looks for the
first even number greater than 4 that is not the sum of two primes
greater than 2, and then stops. The only way to know what this
computer will do is to watch it and see; even the computer doesn't know
what it will do until it does it. And that is not a bad definition of free will.
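A minimal sketch of the program described above, in Python (the function
names here are mine, not from the original message). It searches even
numbers greater than 4 for one that is not the sum of two primes greater
than 2; whether the unbounded search ever halts is exactly the open
Goldbach conjecture, which is the point.

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_goldbach_sum(n):
    """True if n is the sum of two primes greater than 2."""
    for p in range(3, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return True
    return False

def first_counterexample(limit=None):
    """Search even numbers > 4 for one that is NOT such a sum.

    With limit=None this runs until it finds a counterexample --
    and nobody knows whether it ever stops."""
    n = 6
    while limit is None or n <= limit:
        if not is_goldbach_sum(n):
            return n
        n += 2
    return None
```

Run with a finite limit and it reports nothing, as expected: every even
number yet checked by anyone has turned out to be such a sum.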

> And you cannot program a computer to act in a certain way when
> it feels a certain way, because it doesn't feel.

It is not only possible to write a program that experiences pain, it is easy
to do so, far easier than writing a program with even rudimentary
intelligence. Just write a program that tries to avoid having a certain
number in one of its registers regardless of what sort of input the machine
receives; if that number does show up in that register, it should stop
whatever it's doing and immediately change it to another number.
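A toy Python version of that machine, under my own assumptions: the names
PAIN_VALUE, SAFE_VALUE, and Machine are illustrative inventions, and
"register" here is just an instance attribute standing in for a hardware
register.

```python
PAIN_VALUE = 13   # the number the machine "cannot stand" in its register
SAFE_VALUE = 0    # what it overwrites the painful value with

class Machine:
    """Accepts any input, but refuses to let PAIN_VALUE sit in its register."""

    def __init__(self):
        self.register = SAFE_VALUE
        self.avoidance_events = 0  # how many times it had to flinch

    def receive(self, value):
        """Store the input; if the painful value lands in the register,
        drop everything and replace it at once."""
        self.register = value
        if self.register == PAIN_VALUE:
            self.avoidance_events += 1
            self.register = SAFE_VALUE

m = Machine()
for v in [1, 13, 5, 13]:
    m.receive(v)
# The two 13s were each immediately overwritten with SAFE_VALUE.
```

Whether counting overwrites deserves the word "pain" is of course the
whole dispute; the sketch only shows how little code the behavior takes.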

> if subjectivity depends on unique properties of carbon,
> then it cannot be duplicated in silicon.

So carbon atoms can be conscious but silicon atoms cannot? I'd say that's
about as likely as white people being conscious but black people not.

> remember that "emulation" doesn't
> count. A description of a quale is not a quale.

This demonstrates what's so crazy about this topic. The quale people ask
people like me how a computer can have feelings, that is, they ask for a
description of how it could happen; but when I give them what they asked
for, they say a description of a quale is not a quale. Because there is no
conceivable answer that would satisfy you, I conclude the question is
meaningless.

> computers don't have agendas in the sense we do

And that is why computers always do exactly what we want them to do.

> a programmed requirement for human
> review before any "execute" order.

A computer like that would be of no danger to us, but it would also be of
no use to us; it couldn't even balance your checkbook.

  John K Clark
