X-Message-Number: 15468
Date: Sun, 28 Jan 2001 11:37:19 -0700
From: Mike Perry <>
Subject: Brains, Connections, Imitation

Thomas Donaldson, #15461, says (in part):
...
>2. Once more on neurons, brains, and computers: basically we do not 
>   have a situation in which we have some large subset of our neurons
>   which connect totally to one another. Our connections take up only
>   a subset of those possible.

It occurs to me that you can model the absence of a connection by allowing
variable connection strengths and simply assigning a zero strength between
two neurons. Thus you would have all N^2 connections, but only some of them
active, or effectively present.
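To make this concrete, here is a minimal sketch in Python (the matrix
representation, the random weights, and the 80% figure are all just
illustrative assumptions on my part, not anything from the neuroscience):

import numpy as np

N = 5  # number of neurons (illustrative only)
rng = np.random.default_rng(0)

# Full N^2 weight matrix: entry W[i, j] is the strength of the
# connection from neuron j to neuron i; zero means "no connection".
W = rng.normal(size=(N, N))

# Deactivate most of the possible connections by zeroing them out.
absent = rng.random((N, N)) < 0.8   # roughly 80% absent (assumed)
W[absent] = 0.0

print(np.count_nonzero(W), "of", N * N, "possible connections active")
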
...
>   Moreover, our fine connections are not stable (recent discoveries
>   verify this). 

No problem modeling this either: just allow connection strengths to vary
over time, and in particular to go to zero, and from zero to nonzero, to
model the breaking or atrophy of connections and the forming of new ones.
And don't forget that these changes could be probabilistic, i.e.
unpredictable, in your computer just as in real life.
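Again a minimal sketch, continuing the same weight-matrix picture (the
prune/sprout probabilities here are made-up numbers, just to show the
mechanism):

import numpy as np

rng = np.random.default_rng(1)
N = 5
# Start with a sparse random weight matrix (illustrative).
W = np.where(rng.random((N, N)) < 0.2, rng.normal(size=(N, N)), 0.0)

def step(W, p_prune=0.01, p_sprout=0.005):
    # One time step: an existing connection may atrophy (strength
    # drops to zero), and an absent one may form (zero to nonzero),
    # each with some small probability -- unpredictable, as in life.
    W = W.copy()
    nonzero = W != 0
    prune = nonzero & (rng.random(W.shape) < p_prune)
    sprout = ~nonzero & (rng.random(W.shape) < p_sprout)
    W[prune] = 0.0
    W[sprout] = rng.normal(size=np.count_nonzero(sprout))
    return W

for t in range(100):
    W = step(W)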

>That is why the number of possible connections is
>   closer to 2^N than N^2... though the number N here is hardly
>   well known, as yet. 

Once again, with variable connection strengths, you could just assume N^2
connections all the time (except when N itself changes, which the computer
could also model).
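And when N itself changes, the model just grows the matrix. A sketch (the
choice to start the new neuron with all-zero, i.e. absent, connections is
my assumption):

import numpy as np

def add_neuron(W):
    # Grow an N x N weight matrix to (N+1) x (N+1); the new neuron
    # begins with all connection strengths at zero, i.e. unconnected.
    return np.pad(W, ((0, 1), (0, 1)), constant_values=0.0)

W = np.zeros((3, 3))
W = add_neuron(W)   # W.shape is now (4, 4)
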
...
>   The value of parallel computing here is that living in the real
>   world we must do several different "mental" processes all at the
>   same time. Not interleaving them, but really at the same time.

I don't deny (or doubt) that parallel computing *may* be essential for any
*practical* or real-time imitation of a real brain; but that's not the issue
here (or shouldn't be).
...
>   sequential computer. And that's why I think Turing neglected
>   timing, and timing is important enough that it should NOT be
>   neglected.
>
Turing didn't neglect timing, but living at the dawn of the computer era, he
was naturally focused on the more fundamental issues of computation theory,
rather than the finer points. I'm sure he realized that, even though a
parallel computer might be much faster, it still couldn't do any computation
a sequential device couldn't do. His main construct, the Turing machine, is
sequential, in the first instance, not because he couldn't envision it being
parallel, but because a sequential device is easier to work with theoretically.
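A toy illustration of that equivalence: a synchronous, "all neurons at
once" update can be reproduced exactly by a sequential loop, so long as
the loop reads only a frozen copy of the old state (the tanh activation
is just an arbitrary choice for the example):

import numpy as np

def synchronous_update(state, W):
    # Semantically, every neuron updates at the same instant. A
    # sequential machine gets the identical result by computing the
    # new values one at a time, reading only the old state.
    new_state = np.empty_like(state)
    for i in range(len(state)):
        new_state[i] = np.tanh(W[i] @ state)  # reads old values only
    return new_state

state = synchronous_update(np.ones(4), np.eye(4))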

Perhaps it's worth remarking here that I do recognize that Turing's
formulation of a simple, sequential device has its limitations as a
conceptual tool. Sometimes it's better to think in terms of analog devices,
and probability may play a very significant role that is not easily captured
in Turing's main formalism, which is also deterministic. (In particular, all
this could be important in some AI work I am now trying to get involved in,
using a newly acquired fast computer.) But this doesn't necessarily
overturn what Turing accomplished (or Church before him, who I think was
really the first to come up with the notion of an effectively computable
function). If we try to assert that brains are "not computers or Turing
machines" we must be careful of the sense in which that is to be taken. We
don't want to make the claim (without substantial supporting evidence) that
we have found a new, non-Turing procedure or computation we can effectively
do, or even, I would say, that such an outcome seems likely.

Mike Perry
