X-Message-Number: 30207
From: 
Date: Mon, 24 Dec 2007 11:37:57 EST
Subject: again, feeling vs. computing

Flavonoid wrote in part:
 
>R.E. asserts that an animal is "not programmed in the way that a computer
>is. The computer is language-based and digital, which is very
>different."  And he appears to conclude based on this assertion, that
>therefore no computer can be programmed with a true intelligence
>surpassing that of humans. 
 
Clearly he is not really paying attention. I have said repeatedly that, in 
principle, a computer might eventually surpass human intelligence, if we 
define intelligence to mean prediction or description or problem solving. In 
fact, in some areas computers have already surpassed humans.
 
Also:
 
> I see no reason to believe a computer could not be programmed 
>in the same manner that human/animal brains are. 
 
The reason, yet again, is that brains and digital computers are VERY 
DIFFERENT, and in fact we already KNOW that a computer could not be 
programmed in the same way that brains are. For a simple example, you cannot 
(with certain obvious exceptions) program a computer to give different 
outputs for the same input, whereas this happens often with brains. And you 
cannot program a computer to act in a certain way when it feels a certain 
way, because it doesn't feel. You cannot, for the foreseeable future, even 
program it to act in a certain way when it computes that a human would feel 
in a certain way, because we don't yet understand feeling.
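
To make the first point concrete, here is a minimal sketch in Python (the 
names are illustrative only, not from any real system). A deterministic 
function always returns the same output for the same input; the "obvious 
exceptions" are things like retained internal state or an explicit random 
source:

import random

def square(x):
    # A deterministic program: the same input always yields the same output.
    return x * x

class Counter:
    # An "obvious exception": retained internal state makes output vary.
    def __init__(self):
        self.calls = 0
    def respond(self, x):
        self.calls += 1
        return x + self.calls

def noisy(x):
    # Another exception: an explicit random source.
    return x + random.random()

print(square(3), square(3))        # 9 9: identical outputs
c = Counter()
print(c.respond(3), c.respond(3))  # 4 5: different outputs, same input
print(noisy(3), noisy(3))          # different again, by design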
 
> He of course does not admit "feeling" [in computers] either.  


It shouldn't be much of a strain to understand that, since we cannot yet 
characterize feeling (subjectivity) in physical terms, it is premature to 
assume that this will ever be available to a digital algorithmic computer. 
As a crude example, if subjectivity depends on unique properties of carbon, 
then it cannot be duplicated in silicon. (And remember that "emulation" 
doesn't count. A description of a quale is not a quale.)
 
He goes on to mention some current research on the capabilities of neurons, 
e.g. that touching a single neuron can induce sensation. Not relevant to the 
problem of subjectivity. Sensation can result from a touch to the skin or a 
photon on the retina, but this information, while useful, does not answer the 
basic question of the nature of qualia.
 
He also continues to assume that, somehow, the dangerous computer will have 
its own agenda, and might be "hostile" or "indifferent." Yet again, computers 
don't have agendas in the sense we do, and are always indifferent to our 
value judgments, except as those can be expressed in very specific, detailed, 
and unambiguous terms, which means the computer is "satisfied" when it 
registers a particular set of numbers at a particular address, or one of a 
particular set of sets.
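
In concrete terms, that kind of "satisfaction" is nothing more than a test 
on stored values. A crude sketch (the addresses and values are invented 
purely for illustration):

# "Satisfaction" as a bare predicate over memory contents.
memory = {0x1000: 42, 0x1004: 7}

# The goal is met when the values at certain addresses match one of
# an enumerated set of acceptable combinations ("a particular set of sets").
ACCEPTABLE = {(42, 7), (42, 8)}

def satisfied(mem):
    return (mem[0x1000], mem[0x1004]) in ACCEPTABLE

print(satisfied(memory))  # True: no desire, no agenda, just a comparison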
 
He also ignored my suggestions for simple controls which surely any 
programmer would want. One example was a programmed requirement for human 
review before any "execute" order. Another possibility would be a requirement 
for specific predictions of some of the results of an execute order. 
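
A minimal sketch of both controls, assuming a console prompt stands in for 
the human reviewer (everything here is hypothetical, named only for 
illustration):

def human_review(order):
    # Control 1: require explicit human approval before any "execute" order.
    answer = input(f"Approve order '{order}'? (yes/no): ")
    return answer.strip().lower() == "yes"

def execute(order, predicted_results):
    # Control 2: the program must supply specific predictions of the
    # order's results before it is allowed to run.
    if not predicted_results:
        raise ValueError("No predictions supplied; refusing to execute.")
    if not human_review(order):
        raise PermissionError("Order rejected by human reviewer.")
    print(f"Executing: {order}")

# execute("open valve 3", ["tank pressure drops below 2 atm within 60 s"])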
 
He also ignored Kennita's note that there will be lots of frontier computers 
and programmers, and most of those will value life and be alert to danger. 
He complains that not enough is being done presently to face the dangers, 
ignoring the fact, which seems obvious to me, that we are leagues and leagues 
away from singularity-level computers.
 
R.E.
 


