X-Message-Number: 14959
Date: Sun, 19 Nov 2000 12:05:20 -0800
From: Lee Corbin <>
Subject: Re: Simulating People and Animals

In Message #14944, Pat Clancy wrote:
>An algorithm is just a list of instructions that guides the
>action of the machine; it may halt or not halt.  I can write
>a one-line program that never halts; if you prefer not to
>call this an algorithm, that is really just semantics.
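
(No quarrel with the one-liner; in Python, for instance, the entire
program can be

    while True: pass    # loops forever; it never halts

and it is certainly still a program, whatever we decide to call it.)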

I apologize for being picky about terminology, but often our
thinking is indeed influenced by our choice of terms and the
unconscious associations that result.  Sometimes semantics
is important.  The term "algorithm" in particular is very
troublesome.  For example, I think that Penrose's entire analysis
of the implications of Gödel's proof for "what we can know to be
true", as well as his analysis of the requirements for
consciousness, is flawed precisely because of what "algorithm" and
"algorithmic behavior" mean to him.

I submit that it is very misleading to characterize the behavior
of extremely complex programs as "algorithmic".  A program doing
very complicated analyses as a function of its input, of its own
internal state, and of a tremendous number of flags and fluid
stored data representations simply does not fit the notion that
"algorithm" is likely to bring to mind.  And I conclude that
the same goes for "algorithmic".

Since a human being is a physical object, one could just as easily
say that its behavior is "algorithmic":  it sits in the world and
generates behavior that is a function of its present state, the
internal data that it draws upon, and the input that it receives
from the world.  So I am likely always to respond that we can do
without this term.
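
To make the parallel concrete, here is a toy sketch (the names and
the schema are my own, invented purely for illustration, and are
nobody's actual theory of mind) of the only sense in which a
program, or a person, is "algorithmic": behavior as a function of
present state, stored data, and input.

    def step(state, memory, percept):
        # Next state is a function of present state, stored data,
        # and the input just received from the world.
        state = hash((state, percept, len(memory))) % 2**32
        memory.append(percept)          # internal data drawn upon later
        return state, "response %d to %r" % (state % 100, percept)

    state, memory = 0, []
    for percept in ["light", "sound", "touch"]:
        state, behavior = step(state, memory, percept)
        print(behavior)

The schema is so general that it fits a thermostat, an operating
system, or a human being equally well, which is exactly why the
label tells us so little.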

>An operating system is a program; if you prefer that term to
>"algorithm" that's fine, [thanks!] I see no difference in the
>main argument.  There are light-years of difference between an
>"uppity" robot that has a mind, and an operating system that
>gives you cryptic but entirely pre-determined error messages. 

But one can retort that determinism entails that all of our
actions are "pre-determined" from some vantage point.  Determinism
has been held by very large numbers of profound thinkers: some have
believed in God (Einstein was one), some follow Hugh Everett and
David Deutsch in saying that the quantum wave function never
collapses, and many others have been scientists and philosophers
from many disciplines.  Our familiarity with relatively simple
programs does
at first suggest to us that we humans, in our present state, will
always be smart enough to identify the exact cause of each "error
message" or other signal from a program.  But even now it's sometimes
almost impossible.  In the future, it may be no more possible to know
exactly why a program or robot said something than it is presently
possible to know why a human said something.  To be sure, there is a
reason for everything under the sun, but with some physical machines
(e.g. animals) it's just too difficult to determine what it is, and
programs of the future will be similar.
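
A toy illustration of that point (the program and its four-word
"vocabulary" are invented here for the example, and resemble no
real system):

    import hashlib

    def reply(stimulus, steps=1000000):
        # Fully deterministic: the same stimulus always yields the
        # same word, yet the choice depends on the whole chain of
        # internal states that precedes it.
        state = hashlib.sha256(stimulus.encode()).digest()
        for _ in range(steps):
            state = hashlib.sha256(state).digest()
        return ["yes", "no", "maybe", "ask me later"][state[0] % 4]

There is a reason the program picks the word it picks, but the
shortest explanation anyone can give of that reason is a replay of
the million steps that produced it.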

>> Please imagine a life-like robot that responds to all of the
>> world's stimuli unpredictably. (Unpredictable, of course, unless
>> you run a simulation of the same program.) The burden is still
>> upon you to say why a tremendous amount of emergent behavior from
>> an extremely complex set of programs cannot mimic animals or humans.

>No, actually the burden is on you to show why any set of programs
>_should_ be able to do this.  So far no one has shown it.

Sorry; by "say" I meant only "to suggest".  Neither of us, obviously,
can strictly _show_ anything of the kind.  (I am amused by how the
language we employ tends to escalate.)  I suggested above why we
ought to expect programs eventually to be able to engage in
arbitrarily complex behaviors, act unpredictably, and display
emergent behavior: namely, that an action taken by a person or a
word spoken is only a response by an exceedingly complicated
physical object, i.e., a human being.
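
A standard illustration of that expectation (the example is a
textbook one, not mine) is a one-dimensional cellular automaton such
as Wolfram's Rule 110, in which every cell obeys the same fixed
three-neighbor rule:

    def rule110_step(cells):
        # Each cell's next value is a fixed function of itself and
        # its two neighbors; the number 110 encodes the truth table.
        n = len(cells)
        return [(110 >> (4 * cells[(i - 1) % n]
                         + 2 * cells[i]
                         + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 31 + [1]                # start from a single live cell
    for _ in range(16):
        print("".join(".#"[c] for c in row))
        row = rule110_step(row)

Every step is completely determined, yet so far as anyone knows
there is no general shortcut for saying what the pattern will do,
short of simulating it.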

>> Can you guess what would be the give-away difference [in the Turing
>> test]?  Again, after millions of years of development of programs...
>> what tell-tale clue could still be present?  Why is there some vague
>> barrier that forever prevents them from doing other things?
>
>I think computers can do maybe the first 80% of the behaviour simulation 
>task, and then as they try to conquer that last and hardest 20%, the 
>difficulties grow _exponentially_ until they become, in effect, infinite
>(for a Turing machine). I think that when a correct substrate for an
>artificial mind is found, it will possibly be that last 20% which was
>impossible for a Turing machine that will be the _easiest_ for the new
>whatever-it-is.

Thanks for the answer.  But what sort of query (if we are indeed
talking about the Turing test, and I'll assume that we were) would
elicit this last twenty percent?  Any idea?  Or do you figure that
people will "just know",
but not be able to say what made them suspicious that the program was
not human?  And remember, this is after millions of years of refinement,
and vast improvements over what is possible today.

Lee Corbin
