X-Message-Number: 14937
Date: Thu, 16 Nov 2000 18:36:50 -0800
From: Lee Corbin <>
Subject: Re: Simulating People and Animals

Pat Clancy wrote

>There's really no reason to think that the functioning
>of the mind is an _algorithm_, which is what is required
>to make it implementable as a computer program.

According to the technical definition of "algorithm", algorithms
halt!  An "algorithm" is used in most texts to refer to a process
that delivers a definite output upon the receipt of a definite
input and then stops.  In more recent books, some authors have
loosened this usage, but most (e.g. Penrose) continue to say
"algorithmic behavior" when they speak in general of what
programs can do.  A person or a program may not merely provide a
mathematically certain output, but may instead tell you to go
stuff it, or behave in some other unpredictable way.  The behavior
of the first robots that seem like people will be like this.
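
To make the distinction concrete, here is a minimal sketch in
Python (my own toy example, with invented names): the first
function is an algorithm in the strict sense, since it maps a
definite input to a definite output and halts; the second is an
interactive process that keeps responding for as long as you care
to talk to it, with no single final output to deliver.

    # An algorithm in the strict sense: definite input, definite
    # output, and it halts.
    def gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    # An interactive process: it answers each input as it arrives,
    # but it computes no single input-to-output function and has
    # no final answer to deliver.
    def interactive_session():
        while True:
            command = input("> ")
            if command == "quit":
                return
            print("you said: " + command)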

An operating system is a good example of a program that 
technically isn't an algorithm (by this definition).  An
OS offers you a response for each possible input you provide
it, but it's capable of much more.  It has a long memory,
so to speak.  I'm sure that you can easily imagine an interactive
robot which obeys commands and perhaps even gives some uppity
back talk from time to time, just the way that operating systems
seem to.  Yet you still believe that in the course of millions of
years of development, computer programs can never imitate humans
in any way whatsoever?
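
That "long memory" can be sketched the same way (again my own toy
illustration, nothing more): because the process carries state from
one exchange to the next, its reply depends on the entire history
of inputs, not merely on the current one.

    # A toy process with a long memory: its reply depends on the
    # whole history of inputs, not just the latest one.
    class StatefulResponder:
        def __init__(self):
            self.history = []

        def respond(self, command):
            self.history.append(command)
            times = self.history.count(command)
            if times > 2:
                # a little uppity back talk
                return "you have told me '%s' %d times now" % (command, times)
            return "ok: " + command

    r = StatefulResponder()
    print(r.respond("ls"))   # ok: ls
    print(r.respond("ls"))   # ok: ls
    print(r.respond("ls"))   # you have told me 'ls' 3 times now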

Please imagine a life-like robot that responds to all of the
world's stimuli unpredictably. (Unpredictable, of course, unless
you run a simulation of the same program.) The burden is still
upon you to say why a tremendous amount of emergent behavior from
an extremely complex set of programs cannot mimic animals or humans.
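
The parenthetical point, that a program can be perfectly
deterministic and still unpredictable in practice, is easy to
illustrate (my own toy example in Python, not a claim about
robots): a chaotic map reproduces the same trajectory every time
you re-run the same program from the same starting point, yet
there is no practical shortcut to guessing where it ends up.

    # Deterministic yet practically unpredictable: the logistic map.
    # Re-running the same program with the same x0 reproduces the
    # trajectory exactly (the "simulation of the same program"), but
    # an observer who cannot re-run it has no shortcut for
    # predicting the final value.
    def logistic(x0, steps, r=3.99):
        x = x0
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    print(logistic(0.2, 1000))        # the same value on every run
    print(logistic(0.2000001, 1000))  # a nearby start diverges wildly
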
You bring up Dreyfus' old book; believe me, many of us on the other
side, e.g. Dennett, Hofstadter, and many, many others, do not find
his arguments convincing.  And as for Penrose, as wonderful as his
books are (I am perpetually re-reading them), his views about
mysterious goings-on of microtubules and whatnot have been panned
by numerous people, e.g., Ralph Merkle.  As Marvin Minsky said, to
paraphrase, "Roger Penrose is so intelligent that there are only two
things in the world that he does not in principle thoroughly
understand. One is quantum mechanics, and the other is consciousness.
So who can blame him for thinking that they must be somehow
intimately related?" 

>So, even the prey-recognition or locomotion capabilities of a 
>"primitive" animal are still beyond the most sophisticated computer,
>whereas chess grandmasters are in serious trouble. This is a big
>clue that computers aren't the right things to implement minds.

I never said that they were!  Our claim is not that Turing machines
are the _best_ way to implement artificial intelligence.  (Perhaps
whatever way AI is eventually implemented will turn out to be
Turing-equivalent, but that is a different argument.)

>The key test is the so-called Turing test - the artificial 
>mind along with a set of real people are "behind a curtain"
>answering questions, and if you can't pick out the AM then
>it passes the test (i.e. it's "equivalent" to a real person).
>I just don't think a computer will ever pass that test.

Can you guess what the giveaway difference would be?  Again, after
millions of years of development of programs by humans and other
programs, what tell-tale clue could still be present?  Must it
write a sonnet (to use one of Turing's examples)?  Why in principle
can't a robot sing and dance?  Even after just fifty years of
development, they're very good at some things.  Why is there some
sort of vague barrier that forever prevents them from doing other
things?

Lee Corbin
