X-Message-Number: 8579
Date: Thu, 11 Sep 1997 11:02:07 -0400
From: "John P. Pietrzak" <>
Subject: Re: Digital Shakespeare
References: <>

John K Clark wrote:
>         >[I wrote:]
>         >What more do you want!!!!
> 
> I want the number.

You're not going to get it.

[ On an algorithm to determine Pi ]
> So you're saying no matter how many simple things you put together
> the whole never gets one bit more complex and performing a trillion
> easy tasks is just as easy as doing one.

So, you're saying that no matter how few simple things it takes to
describe an algorithm, every iteration of the algorithm increases its
complexity.  For example:

FOR i := 1 TO 2 DO BEGIN ... END;

is less complex than

FOR i := 1 TO 20 DO BEGIN ... END;

In fact, the second piece of code is 10 times more complicated!!!!
Man, I guess

WHILE TRUE DO BEGIN ... END;

must be *infinitely* complex.  It's warping my brain just to think
about it.
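
To make the point concrete, here is a complete little Pascal program,
a sketch of my own (the name PiDemo and the constant N are just
illustrative), which approximates Pi with the Leibniz series.  Its
text is the same handful of lines whether N is 20 or 20 million:

PROGRAM PiDemo;
{ The program's text has a fixed size no matter how large N gets;
  extra iterations add work, not description. }
CONST
  N = 1000000;  { number of terms; change freely }
VAR
  i: LongInt;
  sum, sign: Real;
BEGIN
  sum := 0.0;
  sign := 1.0;
  { Leibniz series: Pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... }
  FOR i := 0 TO N - 1 DO BEGIN
    sum := sum + sign / (2 * i + 1);
    sign := -sign
  END;
  writeln('Pi is approximately ', 4.0 * sum)
END.

The only part that grows when you crank up the iterations is the
handful of digits it takes to write N, and that grows like the
logarithm of the iteration count, not the count itself.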

[ The question of connectionism -- how people have knowledge in common ]
> You and I live in the same world with the same laws of logic and
> physics, these laws put pressure on evolution and rendered most
> potential mechanisms nonviable, the remaining tiny minority, life,
> thus have certain things in common. And we have more, we're even of
> the same species so our minds work in more or less the same way.

In other words, much of our common knowledge is arrived at by evolution.
Evolution's effect on us is via our genes, not examples shown to us.
Therefore, our common knowledge comes not from a trained neural net,
but rather from some other structure.

Thank you.  That's what I was trying to get at.

> If a purely connectionist person can't tell the difference between A
> and not A then I'm not a purely connectionist person.

Correct.  (Which means you shouldn't assume that your concepts of
intelligence and complexity are based solely upon examples which
trained your neural net.  It is quite likely that there is a
significant instinctual basis to much of your knowledge & opinion.)

[ On Einstein-like intelligence ]
>        >No, sir, I DON'T associate intelligence with mathematics or
>        >physics.
> 
> So I could talk philosophy with you, write good novels and bad love
> poems, make up jokes you thought were funny, solve calculus problems
> and even come up with a theory as great as Einstein's and you still
> would not think I was intelligent unless I could prove to you I was
> made of meat and not silicon.

I didn't say that.  As I noted several messages ago, my interest
in intelligence is to find a good, universally applicable definition.
Novels, poems, and theories are symptoms of intelligence, not causes.

In any case, silicon has already begun to write novels, poems, and
theories.  Thus, some aspects of intelligence are already encoded in
these machines. However, other aspects of meat haven't been seen in
silicon yet.  You may be satisfied with the intelligence of automated
theorem provers, but I want more.

> Not too long ago many said much the same thing about black people.

And this is a VERY good point.  The southern United States promulgated
a particular definition of intelligence for political and social
reasons, not scientific ones.  We need to do better than that, and
concepts like the Turing Test are not helping.  Unfortunately,
labelling people with differing levels of intelligence is still of
political and social importance; people are afraid to dig very deep
into the subject.

And, therefore, when we talk about intelligence, we still talk about
rockets and chess and scientific theories and grand symphonies.  Always
the result, never the process.  Researchers in AI in the 60s basically
took this view, creating software which would achieve these results;
and they *succeeded*!  They made successful chess and checkers programs;
they made automated theorem provers which proved novel theorems.  And
they discovered what no-one wanted to know, that you could do those
things with an automaton.  They also discovered that there was a lot
more to being a human (or even any other sort of animal) than it took
to play chess; none of their algorithms would scale up to the real
world.  So, they had to switch gears...

Let's face it: AI frightens people.  So long as the little metal box
is producing pretty graphics or performing boring, time-consuming
calculations, it's safe.  Even if it starts talking in an intelligent
manner, or producing brilliant theories, it's still pretty safe.
However, the basic research in AI today goes well beyond all these
things: people are investigating just what is required of a being to
act in a competent manner in the real world.  Eventually they will
succeed: after all, they have billions and billions of Earthly examples
to follow.

Lots of doomsday Sci-Fi talks about when that happens, and the final
war between humans and their progeny occurs.  But that's total BS.
The real fear is that we will, at that point, finally have the tools
to completely understand _ourselves_.  We'll know for certain what
our own limits are, and that our lives here are, in fact, what everyone
already knows in their heart but hopes to deny: that this _is_ all that
there is.

(Ahem.  Sorry to go spastic there.  I now return you to your regularly
scheduled Cryonet posting.)

[ On JKC's concept of intelligence ]
> If these things are not intelligence they are certainly something,
> let's call it Attribute X. Personally I don't give a hoot in hell
> about intelligence, I'm only interested in Attribute X.

Good for you.  You go play with your Attribute X.  Personally, I'm not
interested in the glorification of a purely arbitrary category.

[ A digression into the philosophy about philosophy ]
> If your philosophy has nothing to do with everyday life or how the
> world works, if your philosophy is only good for arguing about your
> philosophy then it would be no different than the rules of poker
> which are only good for playing poker. I think my philosophy is more
> than a game.

In the end, all that philosophers do is come up with rules and axioms,
things which can be used to explain (and perhaps predict) the world
around them (or, perhaps, some imaginary worlds).  Philosophy _is_ a
game, one where you try to play the role of the entire universe.  If
you succeed, your rules are able to predict accurately.  If you fail,
an experiment eventually proves you wrong (if your philosophy even
allows experiments to occur).  In any case, since philosophy often
attempts to define exactly what the real world is, it's hard to make
it all that accountable to the real world.

[ On Eliza and the TT ]
> The Turing Test does not produce facts, it produces opinions; you wish
> we had something better and I do too but we don't, we can only work
> with what we have.

Or, we can try to produce something better.

> If the tester is an idiot then you have the opinion of an idiot, if
> the tester is a genius then you have the opinion of a genius, but far
> more important than either of these is the opinion of John K Clark.

You were saying, your philosophy is "more than a game"?


John
