X-Message-Number: 8529
Date: Wed, 03 Sep 1997 09:59:29 -0400
From: "John P. Pietrzak" <>
Subject: Re: Digital Shakespeare
References: <>

John K. Clark wrote:
>         [I wrote:]
>         >It's wonderful that such a simple device [a computer] can do
>         >so much.
> 
> Nothing simple about it. If something, anything, is complex then it
> must be made of many parts; if not, it wouldn't be complex. If the
> parts are still pretty complex then, for the same reason, they must
> be made of many subparts.  Eventually you will come to a point where
> the sub-sub-sub-parts are pretty damn simple, an on-off switch is
> pretty damn simple, however I don't conclude from that the universe
> contains nothing complex.

Ah, I'm obviously using the wrong definition of simple here.  Certainly,
the machine itself has changed over time into a large agglomeration of
many small parts working together, so physically it isn't simple; but
what it *does* is still very simple.  In the end, the computer is just
a glorified state machine.  No matter what fancy programming language
you use, no matter what tools or algorithms you have, when you attempt
to encode an idea into a computer, you'll find yourself restricted in
the same way that a classic Turing Machine is restricted: your machine
may be faster, but it _can't do anything new_.  From a programming
point of view, it is (basically) equivalent to the simplest computers.
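To make the point concrete, here's a sketch of my own (a toy in Python, nothing from any real system): however fancy the language, what the machine ultimately executes is a table of state transitions, like a classic Turing machine's.

```python
# Minimal Turing machine simulator (illustrative only).
# rules maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_tm(rules, tape, state="start", pos=0, blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit of a binary string, then halt.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "N"),
}
print(run_tm(flip, "10110"))  # -> 01001
```

A faster machine runs the same table more quickly; it doesn't get a new kind of table.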

[on genetic algorithms]
> Genetic computer programs would have a huge advantage over evolution
> because they could have the ability to inherit acquired
> characteristics, something nature never figured out how to do.

Oh, really? :)  What are you talking about here, self-modifying code?
Most of the GA work I've seen uses the standard "generate & test"
system; you take a population of algorithms, test them, use the
winners as the basis for the next generation of algorithms (combining
their features, plus adding some percentage of "mutation" if desired),
and iterate the test again.  I've not seen a lot of work dealing
with GA + learning, or anything of that sort.  (Seems like an awfully
hard thing to pull off.)
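The generate & test loop I describe above can be sketched in a few lines of Python (my own toy example, evolving a bitstring toward all ones; the problem and parameters are purely hypothetical):

```python
import random

random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    # "Test" step: on this toy problem, fitness is just the count of 1-bits.
    return sum(genome)

def crossover(a, b):
    # Combine features of two winners at a random cut point.
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit with small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # test the population
        winners = pop[:pop_size // 2]          # keep the winners
        # Next generation: winners plus recombined, mutated offspring.
        pop = winners + [
            mutate(crossover(random.choice(winners), random.choice(winners)))
            for _ in range(pop_size - len(winners))
        ]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Note that nothing an individual "learns" during its test survives except through selection; that's the sense in which standard GAs still don't inherit acquired characteristics.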

>       >Right now, I believe humans have a distinct advantage in  the
>       >storage capacity, power consumption, and portability of their
>       >internal memory store. :) )
> 
> If that were true nobody would bother making computers because they'd
> have no use; people do.

Ah, but you see, I _don't_ keep my programs on a computer, I keep my
programs on a hard disk.  The hard drive is "external storage" to both
me and my computer.  If you want to talk about the entire computer
(motherboard + hard disk) as a unit, you should allow me to describe
the human system as brain + external storage as well.

> 30 years ago everybody said that playing Chess or doing Calculus
> required intelligence, they don't say that now and we both know why.
> Now they say true intelligence is whatever a computer can't yet do as
> well as a person and the definition keeps shrinking.

Indeed.  This problem was there from the beginning of AI.  But, in fact,
it's been a problem in science from the beginning.  It's why Galileo
had such trouble with the Church, it's why people argue so vehemently
against evolution.  "Intelligence" is one of the last bastions of
human egocentricity, that which "makes us better than other things".

>         >Oh Ghod, how I hate the Turing Test.  [...] his test manages
>         >to entirely miss the point of intelligence.
> 
> To me it always seemed strange to hear people say that the Turing Test
> is not a test of consciousness, but to say it's not even involved in
> intelligence is utterly bizarre.  I have a few questions.

Shoot.  :)

> I'm sure you've met people you consider brilliant and people you
> consider morons, how do you tell one from the other?

I can't.  In fact, I've met people who are brilliant but act like
morons, and people who seem intelligent but suddenly do something I
find stupid.  Thomas Jefferson wrote about freedom and kept slaves:
is he brilliant or a moron?  (Closer to home, I know of a group of
people who realize that 100% of all people born more than, say, about
122 years ago have died, and yet they hope to live much longer than
that.  How intelligent would you think these people are?  Careful now,
I'm one of them...)

> The physicist Stephen Hawking is perhaps the world's greatest
> authority on General Relativity and Black Holes,

(maybe...  I always wonder a little about scientists who cash in on
their discoveries)

> he is also paralyzed except for one finger of one hand, he uses that
> finger to tap out (literally digital) messages, that's how he writes
> his scientific papers and books and performs all communication with
> the outside world. Do you think Stephen Hawking is intelligent and if
> so why?

Intelligent?  Certainly.  Because of the Turing Test?  Certainly not.
More on why below.

> Einstein was not intelligent, he just talked, wrote, and behaved like he
> was.  Do you see anything crazy about the preceding sentence?

Absolutely, it seems crazy because it's just what the Turing Test
espouses.  The Turing Test says basically, "if it talks like a human,
it's intelligent."  Therefore:

 - All non-human life on earth (currently and through all of history)
   is unintelligent (none of it could talk like a human).

 - If I'm the one using the test to determine intelligence, only people
   who speak English could pass the test.

 - Finally, the person you attempt to use the test on must be willing
   to be tested; it is of no use in determining the intelligence of
   those who are unwilling.

The real problem here is that there are many different definitions of
intelligence.  Some definitions describe activities or abilities; others
blatantly tie intelligence to human beings.  The Turing Test goes after
a particular set of abilities: the ability to present properly formatted
linguistic data over a terminal, and the ability to encode content in
that linguistic data which would "seem appropriate" to have been made
by a human.  This test is heavily weighted towards the "properties of
a human" definition, rather than the "intelligent activities" definiton.

I prefer my definition of intelligence to include aspects of
intentionality and competence.  Language is a wonderful thing, but
any creature that can deal with the world around it in a successful
manner has something to be said for it.  If the way in which it
deals with the world involves knowledge of past events and reasoning
by analogy to future events, I generally find that sufficient reason
to award some amount of "intelligence" to it.

(I remember watching an episode of NOVA, showing a group of gorillas
stripping the leaves from a small branch and using it to fish termites
out of a hole.  Obviously, no intelligence going on there...)

[On analog computers]
> A ten year old home computer could easily reproduce the output of any
> "analog computer" ever built,

Absolutely!  People never built significant analog machines, because
they were so darned hard to create and use.  The hardware was intimately
tied to the task you wanted to perform.  Perhaps most significantly,
there was never any analog equivalent to the "Universal Turing Machine",
a machine which could be proven able to simulate any other Turing
machine (including itself).  Until something like that is developed,
every analog computer remains unique.
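And the reproduction itself is easy.  The bread-and-butter job of an analog computer was wiring integrators together to solve a differential equation; a digital machine gets the same output by stepping the integration numerically.  A sketch of my own (Python, simple Euler stepping, a hypothetical damped oscillator x'' = -x - 0.1*x'):

```python
# Digital stand-in for an analog computer solving x'' = -x - 0.1*x'.
# Each line below plays the role of one analog component.

def simulate(x=1.0, v=0.0, dt=0.001, steps=10_000):
    for _ in range(steps):
        a = -x - 0.1 * v   # summing amplifier: combine the feedback terms
        v += a * dt        # first integrator: acceleration -> velocity
        x += v * dt        # second integrator: velocity -> position
    return x               # position after steps*dt = 10 time units

print(simulate())
```

Make dt smaller and the digital trace approaches the analog one as closely as you like; no rewiring required to solve a different equation, just a different line of code.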

John
