X-Message-Number: 8627
Date: Fri, 26 Sep 1997 11:36:19 -0400
From: "John P. Pietrzak" <>
Subject: I was programmed before birth....
References: <>

Thomas Donaldson wrote:
> What I want to do here is comment on one claim made by John Pietrzak:
> that we must somehow have some innately coded means to tell various
> things about our environment: such as whether someone with whom we
> are speaking is intelligent.

Good!  Someone else has recognized that I'm going out on a limb here
and taking a rather unpopular point of view. :)

> This is a pure assumption and need not be true. Certainly we cannot
> conclude it is true simply because we go around making such
> judgements. Neural nets can LEARN about the world, and so far as our
> brains are assemblies of neural nets then there is no reason (on
> simple computing grounds) to believe that we have ANY inborn ability
> to understand a concept or perceive others in the world.

It's more than an assumption.  Consider: the neural net is a system
for learning, but what it learns (assuming we're talking about NNs as
understood in computer science today) is categories.  _All_ that it
learns is categories.

However, I'm currently writing a paragraph of text in response to your
Cryonet message.  Although I can categorize your message into words
and sentences and ideas, and I can categorize my writing in the same
way, where in the world of the NN do I find the structure which
initiated these tasks?  A NN can react, but it cannot act.
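
To make that concrete, here's a toy sketch (in Python, and purely my own
illustration, not anything anyone has proposed as a model of the brain)
of a NN in the computer-science sense: a little perceptron that learns to
sort points into two categories.  Once trained, it does exactly one thing
when asked, and nothing at all when it isn't asked:

  import random

  def train_perceptron(examples, epochs=50, rate=0.1):
      """examples: list of (feature_vector, label) pairs, label in {0, 1}."""
      n = len(examples[0][0])
      weights, bias = [0.0] * n, 0.0
      for _ in range(epochs):
          for features, label in examples:
              activation = bias + sum(w * x for w, x in zip(weights, features))
              error = label - (1 if activation > 0 else 0)
              weights = [w + rate * error * x for w, x in zip(weights, features)]
              bias += rate * error
      return weights, bias

  def categorize(weights, bias, features):
      # All the trained net can do: assign its input to one of two categories.
      return 1 if bias + sum(w * x for w, x in zip(weights, features)) > 0 else 0

  # Toy data, invented for the example: label a point 1 when x + y > 1.
  data = []
  for _ in range(200):
      x, y = random.random(), random.random()
      data.append(((x, y), 1 if x + y > 1.0 else 0))

  w, b = train_perceptron(data)
  print(categorize(w, b, (0.9, 0.8)))   # prints the learned category (here, 1)

Notice that everything above is reaction: categorize() runs only when
something outside it hands it an input.  Nothing in the net decides to go
write a paragraph.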

I'm not saying that I don't think there are NNs in the brain; I actually
believe the significant majority of human thought occurs through NNs
as currently understood.  I just don't believe that they are everything
that there is.  And if there is more than an NN, it's likely that it
doesn't involve learning, or at least not learning in the same way
that the NN works.

> I'd go so far as to say that the ASSUMPTION that such things must
> exist is a remainder of ideas from AI which tried to do such tasks as
> recognition without use of neural nets in any form.

Recognition, as well as many other categorization tasks, is performed
much more efficiently with NNs than with other structures.  However, AI
didn't just try to understand a variety of tasks; it was going after
the concept of Intelligence itself.  As such, much of the research that
did go on (and is still going on) is of value in understanding how
all this works.

At any rate, this isn't an assumption, it's an observation.

> There ARE reasons to believe that some of our responses are innate.
> [...]  Note here that I am referring to studies on real brains,
> without making any assumptions about innateness in general.

Neither am I.  You can study brains by looking at their tissue, but
you can also study how people work by looking at what they do and how
they act.

> I have actually raised this issue before, in the context of the
> Turing Test.  Most words we do not learn from definitions. We learn
> what they mean by seeing or hearing them used in particular contexts.
> And since our consciousness is quite sequential, it's clear that all
> the activity in our brain's neural nets goes on outside our
> consciousness.

Wow, and you're complaining about me making assumptions!  You've just
assumed you know (1) exactly how we learn words, (2) how our
consciousness works, and (3) how NNs work with our consciousness.
I do also believe that we tie most linguistic structures to concepts
via contextual relationships, although I don't believe that that means
the concepts themselves are undefined.  I actually do believe that
consciousness is NOT sequential (and most of the other voices in my
head right now agree with me on this point ;) ).  Considering that,
I do think that NN use occurs within our consciousness as well, at
least to some extent.  At any rate, we're both talking about beliefs
here.

> It bears on the Turing test precisely because it means
> that any computer not equipped with neural nets will fail that test:

And this is another unsupported assumption, and a much worse one than
the one above.  Not only do you suggest here that NNs are the only way
to implement a sufficient learning system (there are certainly many,
many ways to learn things in this world!), but you've also assumed that
you need the capabilities of a NN to solve this particular problem.
Personally, I believe that programs with much less sophisticated systems
than a true NN will be beating the TT regularly in the near future.
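
To give an idea of what I mean by "much less sophisticated systems",
think of something in the spirit of Weizenbaum's old ELIZA program: a
handful of pattern-matching rules and canned rewrites.  Here's a toy
Python sketch (the rules and phrases are invented by me for illustration;
whether tricks this cheap could ever really fool a judge is, of course,
exactly what's in dispute):

  import re

  # Each rule is (pattern, response template); the first match wins.
  RULES = [
      (r"\bI need (.*)",  "Why do you need {0}?"),
      (r"\bI am (.*)",    "How long have you been {0}?"),
      (r"\bbecause (.*)", "Is that the real reason?"),
      (r".*\?$",          "Why do you ask?"),
  ]

  def respond(utterance):
      for pattern, template in RULES:
          match = re.search(pattern, utterance, re.IGNORECASE)
          if match:
              return template.format(*match.groups())
      return "Tell me more."

  print(respond("I am worried about the Turing Test"))
  # -> "How long have you been worried about the Turing Test?"

No learning, no categories, no NN; just string rewriting chosen in
advance by the programmer.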

> I believe this is also what Searle meant when he came up with his
> Chinese room problem: sure, a computer can manipulate symbols. It's
> whether or not it knows what they MEAN that is important, and
> knowledge of what symbols mean comes from experience with use of them
> in the real world ---- which cannot be programmed, but might be
> trained (into a device which some might consider a computer).

Nah, what Searle _really_ meant is that AI got all his touchy-feely
human-centric ideals in a tizzy.  What his example boils down to is:
do you want to believe that a competent automaton is intelligent or
not?  You can say he's talking about meaning somewhere, but his
example isn't really set up to deal with that: it's just another
black-box example, with a few (silly) structures stuck inside the box
as an illustration of how it might work (essentially, straw men for him
to tear down, although I don't believe he did all that good a job of it).

Let's talk about meaning, then.  Where would meaning exist in a Searle
box?  You've got the slips of paper for input and output.  You've got
the magic book.  You've got the human.  In this system, the slips of
paper initiate and conclude the activities of the box.  The human
performs the actions taken by the box.  The magic book indicates which
tasks are to be performed when.  Where in this system are choices
made?  Nowhere, actually; it's just running tasks chosen when it was
constructed.  Then how can the box possibly act like a real human?
Obviously, because we've forgotten to include a part of the black box,
that being the individual or individuals who put it together in the
first place (assuming that the magic book is even possible to
construct).
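
If it helps, here is the Searle box reduced to a toy Python sketch.  The
entries are invented for illustration, and a real "magic book" would have
to be unimaginably larger, but the point survives: every choice lives in
the table, and the table was written before the room ever ran.

  # The "magic book": a lookup table written, in advance, by whoever
  # built the room.
  MAGIC_BOOK = {
      "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I'm fine, thanks."
      "你会说中文吗?": "当然会.",        # "Do you speak Chinese?" -> "Of course."
  }

  def searle_room(slip_of_paper):
      # The person inside performs this step without understanding a word.
      return MAGIC_BOOK.get(slip_of_paper, "请再说一遍.")  # "Please say that again."

  print(searle_room("你好吗?"))   # -> "我很好, 谢谢."

The room only reacts; the choices were all made by its builders.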

What if the Chinese Room could do more than produce accurate textual
responses?  What if it could laugh, cry, make love, have children, grow
old with a spouse?  If I can make a magic book of responses to slips of
paper, why can't I make a magic book for any of the other activities
performed by human beings?  Searle comes no closer to understanding
anything about intelligence with his room than anybody else does.  He
just creates a convenient structure in which to place his angst about
the way science is starting to uncover the limits of humanity.


John
