X-Message-Number: 8617
From:  (Thomas Donaldson)
Subject: Re: CryoNet #8600 - #8604
Date: Sat, 20 Sep 1997 13:20:34 -0700 (PDT)

Hi again!

Here I am again, answering one of the older Cryonets. But we'll see.

What I want to do here is comment on one claim made by John Pietrzak: that
we must somehow have some innately coded means of telling various things
about our environment, such as whether someone with whom we are speaking is
intelligent.

This is a pure assumption and need not be true. Certainly we cannot conclude
it is true simply because we go around making such judgements. Neural nets
can LEARN about the world, and insofar as our brains are assemblies of
neural nets, there is no reason (on simple computing grounds) to believe
that we have ANY inborn ability to understand a concept or perceive others
in the world. I'd go so far as to say that the ASSUMPTION that such things
must exist is a holdover from the kind of AI which tried to do tasks such as
recognition without use of neural nets in any form. A neural net LEARNS; it
isn't programmed. Just what the limits of that learning are for brains
remains unknown, but the fact that we can learn ideas without definitions
for them does not imply that those ideas are in any way innate.
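
To make the "learns, not programmed" point concrete, here is a tiny sketch
of my own (added for illustration, not part of the original post; the data
and every name in it are invented): a single artificial neuron, in Python,
that picks up a simple concept purely from labeled examples -- the concept
itself is never written into the program as a rule or definition.

import random

# Toy training data: the concept "x + y > 1" is never stated to the net;
# it only ever sees example points together with their labels.
examples = [((x, y), 1 if x + y > 1.0 else 0)
            for x, y in [(random.random(), random.random())
                         for _ in range(200)]]

w = [0.0, 0.0]   # weights, adjusted by learning
b = 0.0          # bias, adjusted by learning

def predict(p):
    return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights whenever a prediction is wrong.
for _ in range(50):
    for p, label in examples:
        error = label - predict(p)
        w[0] += 0.1 * error * p[0]
        w[1] += 0.1 * error * p[1]
        b    += 0.1 * error

right = sum(predict(p) == label for p, label in examples)
print("learned weights:", w, "bias:", b)
print("training accuracy:", right / len(examples))

Of course a single neuron is a caricature of a brain, but the point stands:
nowhere above is the concept itself coded, only a procedure for learning it
from examples.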

There ARE reasons to believe that some of our responses are innate. Those
reasons come from brain scans and careful observation, together with
observation of other primates. For instance, unlike most animals, we have a 
special area in our brain devoted to processing language. This turns out to
be different from the area devoted to processing sound: brain scans of deaf
people using sign language show that the same language area lights up, even
though no sound is involved. Just
what CONTENT (innate or other) may lie in this area, if it has any, isn't
presently clear. Moreover, as with many mammals, we quickly learn to 
recognize our mothers, and from there go on to recognize other animals of
our species. Note here that I am referring to studies on real brains, without
making any assumptions about innateness in general. (I will even say that
higher nonhuman primates can understand language, though not as well as we
do. That may mean they have their own primitive versions of language, or
that they are bright enough to use other brain circuits to reach a meager
understanding.)

I have actually raised this issue before, in the context of the Turing Test.
Most words we do not learn from definitions; we learn what they mean by
seeing or hearing them used in particular contexts. And since our
consciousness is quite sequential, it's clear that all the activity in our
brain's neural nets goes on outside our consciousness. This bears on the
Turing Test precisely because it means that any computer not equipped with
neural nets will fail that test: it knows only the verbal definitions of the
words it uses, and might be trapped by a tester who describes an object or
a feeling without ever naming it and asks the machine what it is. (And
remember that it's easy to program that for a few objects, but to do so for
ALL the objects we meet in our ordinary lives becomes a massive project --
all the more so because many different descriptions of the same thing may
be valid while differing greatly from one another.)
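
As an equally toy illustration of learning meaning from context rather than
from definitions (again my own sketch, with an invented four-sentence
"corpus"; any real use would need enormously more text), here is a program
that comes to treat two words as similar solely because they keep similar
company -- no definition of any word is stored anywhere:

from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
    "the dog chased the ball",
]

# For each word, count the words that appear within two positions of it.
context = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                context[word][words[j]] += 1

def similarity(a, b):
    # Cosine similarity between two words' context-count vectors.
    ca, cb = context[a], context[b]
    dot = sum(ca[k] * cb[k] for k in ca)
    na = sum(v * v for v in ca.values()) ** 0.5
    nb = sum(v * v for v in cb.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" come out more alike than "cat" and "mat", simply because
# they occur in similar contexts; nothing like a definition was supplied.
print(similarity("cat", "dog"))   # close to 1.0
print(similarity("cat", "mat"))   # noticeably lower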

I believe this is also what Searle meant when he came up with his Chinese
room problem: sure, a computer can manipulate symbols. It's whether or not
it knows what they MEAN that is important, and knowledge of what symbols
mean comes from experience with their use in the real world -- which cannot
be programmed, but might be trained (into a device which some might consider
a computer. BUT if you consider such a device a computer, then you're very
very close to deciding that our brains are also computers, whereupon the
Turing Test totally loses its meaning. Tell me then just what a computer
is...).

			Best and long long life,

				Thomas Donaldson
