X-Message-Number: 33452
From: 
Date: Thu, 10 Mar 2011 01:50:04 EST
Subject: Thomas Donaldson on How We Are Not Computers

How We Are Not Computers, And What That Means



By  Thomas K. Donaldson



    This article, while not a book review, depends strongly on two
particular books, "Artificial Life" (ed. CG Langton, 1989) and "Brain
Organization and Memory" (ed. JL McGaugh, NM Weinberger, G Lynch, 1990).
My discussion comes from my own thinking, but one article from each book
has particularly influenced me.  From "Artificial Life," S. Hameroff and
his colleagues argue (p. 521) that we should not take nerve cells as simple
computational units.  Even single cells have complex abilities, and nerve
cells, along with the signals passed between them, carry far more
complexity than a stream of single bits; Hameroff's article reinforces that
point.  Second, the article by WJ Freeman and CA Skarda ("Representations:
Who Needs Them?", p. 375, Brain Organization and Memory) helped me clarify
my own thoughts about brain models.





The computer model



    The idea of brains as computers has become very popular.  It lies
behind the idea that someday we will upload (download?) ourselves into
other computers somehow better than the wetware one we work in now.  It
also underlies a lot of mainstream theory about how our brains function:
cryonicists who believe in a computer analogy have many other scientists to
point to for support.  If our Selves are computer programs, uploading
becomes trivial.  Before anything else, we need to clarify just what a
computer program and a computer are supposed to be.  For the purpose of
this article, I shall consider programmability as the main feature
distinguishing computers from other objects.  A program is a combination of
instructions and data which controls the operation of the object.  If that
object is a computer, then means exist by which a wide variety of different
programs can (at different times) control its operation.
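
    To make that definition concrete, here is a minimal, purely
illustrative sketch in Python (the three-instruction set is invented for
the example): a tiny interpreter whose behavior is controlled entirely by
whatever list of instructions and data it is handed, contrasted with a
device whose single behavior is fixed in advance.

# A toy "programmable object" in the sense above.  The instruction set
# (PUSH, ADD, PRINT) is invented purely for illustration.

def run(program):
    """Interpret a program: a list of (instruction, argument) pairs."""
    stack = []
    for op, arg in program:
        if op == "PUSH":      # place a piece of data on the stack
            stack.append(arg)
        elif op == "ADD":     # combine the top two items
            stack.append(stack.pop() + stack.pop())
        elif op == "PRINT":   # emit the top item
            print(stack[-1])
        else:
            raise ValueError(f"unknown instruction: {op}")

# The same object (the interpreter) can be controlled, at different times,
# by a wide variety of different programs:
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])   # prints 5
run([("PUSH", "hello"), ("PRINT", None)])                         # prints hello

# A device with one program burned into its ROM, by contrast, always does
# the same single thing; no program can be handed to it from outside.
def burned_in_device():
    print(5)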



    Clearly devices (and living things) fulfill this criterion to greater
or lesser degrees.  Even within the class normally accepted as computers,
some may be unable to perform a set of instructions because memory capacity
is too small.  Performance of instructions, in general, requires not just
the ability to compute but peripherals such as monitors, printers, and disk
drives.  And some devices often thought (perhaps loosely) to be computers
cannot run a wide variety of instructions (e.g. embedded devices, each with
a single program burned into its ROM).  So no object can lie at the
(theoretical) end of the computer side of the rainbow.  At the other end
lie objects totally unable to perform any separate instructions; rocks or
stars fit that description very well.  Perhaps some living things do also.



    The analogy of our brains as computers suggests many common ideas.
Behind it lies an image of human brains as objects which are all,
fundamentally, identical.  They differ only in the programs they are
running; these programs are Persons, everything that makes you You.



    Again, computer programs operate on Data.  Data is always a symbolic
representation of some part of the real world.  By the computer analogy,
then, when we remember our home town we do so by forming a symbolic
representation of it in our brains.  We must distinguish the symbolic
representations we use in talking to one another from those in our brains;
most mammals are quite inarticulate, but somehow find their way around
their environment.  A computer analogy for their brains would suggest that
they too have such symbolic representations.  If our brains do work as
computers, such representations become essential to their operation.



    Furthermore, in principle we might devise ways to transfer specific
memories from one person to another.  The language of the symbolism may
differ from person to person, but with the proper translations that becomes
a detail.  A computer analogy implies that training of any kind might
someday be compressed into a few hours on a "brain trainer."  It also leads
us to pay special attention to these supposed symbolic representations.
The ability to manipulate symbols becomes identified with intelligence
itself.  This body of ideas about how our brains work ties in very well
with efforts such as Chomsky's to explain our spoken language as a form of
translation from a private symbolic representation to a public one.



    It has become, in short, the dominant image of brain operation in the
late 20th Century.  Dominance, however, does not make it correct.





Another and different analogy



    We can already see signs of another quite different idea.  I shall call
this model for memory the Growth Model.



    One major theme in the study of memory has been the idea that learning
(even in adults) involves the very same processes by which our brains
develop from embryos.  That is, remembering something long-term means that
our brains have formed new physical connections between neurons; these
connections persist by the same processes maintaining our physical form.
This idea easily answers one major question about long-term memories: why
are they so durable?  No one knows a way to destroy a memory without
physically destroying neurons involved in it.



    Processes of development also include healing.  True, neurons in adult
primates may divide only rarely (although some experiments suggest the
contrary).  But healing in our brains includes more than simple division:
it can involve massive rearrangement of circuits, as in recent (unplanned)
work on monkeys whose nerve connections to their hands had been severed.
Development involves not only passage of signals between neurons which
already touch one another, but chemical signals causing growth of a
dendrite or axon toward a neuron not within "touching distance."



    Any dominant analogy creates an impression that no other possibility
can even exist.  Yet, if followed out, the developmental hypothesis just
sketched above suggests very different conclusions about how our brains
work.



    First, long-term memory forms when our brains grow a set of new
connections.  These connections would contain new synapses.  Hence
(contrary to other estimates based on a computer analogy) to estimate our
capacity for new memories we must do more than count nerve connections.  A
maximum capacity would still exist, reached when present connections left
no room for more.  How that might happen remains an open question: perhaps
simple crowding, or perhaps single neurons can support only a limited
number of connections.



    Furthermore, our long-term memories would consist of the connection
patterns that have grown up between our neurons.  They would not be "coded"
into our brains, in any sense of "code."  On a gross level our brains do
resemble one another; but if we found a way to look at a brain closely
enough to distinguish memories, we would find those memories identical to
the connectivity itself.



    This model resembles the neural net computers that computer scientists
now use, successfully, to build machines that solve problems our brains
handle easily.  Neural net computers don't store their memories in any one
connection, but in the pattern of all their connections.  In this way they
resemble our brains.  But neural nets start with a fixed set of possible
connections, some of which are turned on, others off.  Unlike neural nets,
brains would form memories by growing new connections.  Disused connections
would disappear.
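
    As a concrete illustration of how a memory can live in the pattern of
all connections rather than in any one of them, here is a small
Hopfield-style network sketched in Python (it assumes the numpy library;
the sizes and patterns are arbitrary).  Note, though, that its set of
possible connections is fixed in advance, which is exactly where the Growth
Model parts company with such nets.

import numpy as np

# A minimal Hopfield-style associative memory: memories live in the whole
# weight matrix (the pattern of all connections), not in any single one.

def train(patterns):
    """Build a weight matrix from +1/-1 patterns (Hebbian outer-product rule)."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)      # every connection gets a share of every memory
    np.fill_diagonal(w, 0)       # no self-connections
    return w / len(patterns)

def recall(w, probe, steps=10):
    """Recover a stored pattern from a noisy or partial probe."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(w @ s)       # each unit responds to all of its connections
        s[s == 0] = 1
    return s

# Two "memories" of eight units each.
memories = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
w = train(memories)

noisy = memories[0].copy()
noisy[0] *= -1                   # corrupt one unit
print(recall(w, noisy))          # recovers the first memory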



    Our thinking would also proceed totally without any symbolic relation
to the world.  Instead, our nerve cells have grown connections so that
their total response to any outside event deals with it successfully.
(Note that neural net computers, too, do not use any symbolism in their
computation: in this way they follow brains.)



    Such systems resist any easy transference of "programs" from one to
another.  In that sense they aren't computers.  (Although we certainly can
imagine some massive intervention which reconnects an entire brain.  As
before, "computerness" is a matter of degree.)  Nor could we make learning
easier simply by separating out a set of skills and knowledge and then
reading it into our own brains, for no single memory can be separated from
any other.  And even if someone else could follow all the excitations in
your brain, for every neuron and synapse, they would need a long prior
period of observation to read off from them just what it was that you were
thinking, other than in the very broadest sense (that is: whether you are
sleeping or awake, whether you are thinking about food or sex, whether you
are afraid, and so on).





Directions for use



    If our brains follow the Growth Model, some common ideas about possible
improvements would require rethinking.  Simple transference of our Selves
from one body to another ("Uploading") raises far more problems.
Improvements in learning ability, or transference of particular skills or
knowledge, do also.  But here are some ways towards the same aim.





Preservation of alternative copies



    This technology should interest every cryonicist.  By storing
inactivated copies of ourselves we can survive total destruction of our
main, living copy.



    Even if we are not computers, our structure might perfectly well be
stored in a computer system.  Graphs give the main data structures needed.
We would store each connection with additional information (just what isn't
fully known yet: transmitters used, its age, and possibly other items).
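
    For concreteness, here is one way such a graph might be laid out in
software.  This is a Python sketch only; the transmitter and age fields
come from the parenthetical list above, while the strength field and all of
the names are illustrative assumptions.

from dataclasses import dataclass, field

# A sketch of the graph described above: neurons as nodes, synapses as
# directed edges carrying whatever extra information turns out to matter.

@dataclass
class Synapse:
    target: int             # identifier of the neuron this connection reaches
    transmitter: str        # e.g. "glutamate"
    age: float              # age of the connection, in years
    strength: float = 1.0   # assumed extra attribute, purely illustrative

@dataclass
class Neuron:
    synapses: list = field(default_factory=list)

class StoredBrain:
    """An inactivated copy: pure structure, nothing running."""

    def __init__(self):
        self.neurons = {}   # neuron id -> Neuron

    def add_connection(self, source, syn):
        self.neurons.setdefault(source, Neuron()).synapses.append(syn)
        self.neurons.setdefault(syn.target, Neuron())   # ensure the target exists

stored_copy = StoredBrain()
stored_copy.add_connection(1, Synapse(target=2, transmitter="glutamate", age=3.5))
stored_copy.add_connection(2, Synapse(target=3, transmitter="GABA", age=0.2))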



    The practical problem to solve for such a system is how to read out
brain connectivity as rapidly as possible.  Fast read-out rates let us
update our stored copies frequently.  If you have not been updated in the
last 10 years, then any destructive accident would mean a loss of 10 years.
One idea would be to add a system of "watchdog molecules" to each synapse;
these would shed copies of themselves constantly.  The shed copies might
then be gathered together to find out how your brain has changed.
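
    Whatever the read-out mechanism turns out to be, the bookkeeping on the
storage side is simple.  Below is a rough Python sketch, entirely
illustrative and with an assumed representation (a map from source-target
neuron pairs to each connection's attributes), of updating a stored copy
from a fresh read-out by recording only what has changed rather than
rewriting everything.

# Update a stored copy from a fresh read-out, keeping only the differences.
# Both arguments map (source neuron, target neuron) -> attribute dictionary.

def update_stored_copy(stored, fresh):
    added   = {k: fresh[k] for k in fresh.keys() - stored.keys()}
    removed = set(stored.keys() - fresh.keys())
    changed = {k: fresh[k] for k in fresh.keys() & stored.keys()
               if fresh[k] != stored[k]}

    stored.update(added)
    stored.update(changed)
    for k in removed:
        del stored[k]
    return added, removed, changed   # the news since the last update

stored = {(1, 2): {"transmitter": "glutamate", "age": 3.5}}
fresh  = {(1, 2): {"transmitter": "glutamate", "age": 4.5},   # connection aged
          (2, 3): {"transmitter": "GABA", "age": 0.1}}        # new connection
print(update_stored_copy(stored, fresh))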





Increase in memory capacity



    No one yet has faced this problem, but at some stage it will arise.  I
believe the most likely response by our unmodified brains would be to
forget the least-used information, rather than simply to stop learning
(some neural nets already do this).  In the Growth Model, synapses between
neurons would disappear.
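
    A rough sketch of that policy follows (in Python, and not taken from
any particular published net): every connection carries a usage count, and
once an assumed capacity ceiling is reached, the least-used connections are
the first to disappear.

CAPACITY = 4       # an arbitrary small ceiling, just for the example

connections = {}   # (source, target) -> number of times the connection was used

def use(src, dst):
    """Strengthen (or grow) a connection, pruning the least used if over capacity."""
    connections[(src, dst)] = connections.get((src, dst), 0) + 1
    while len(connections) > CAPACITY:
        least_used = min(connections, key=connections.get)
        del connections[least_used]       # the disused connection disappears

for pair in [(1, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]:
    use(*pair)

print(connections)   # (1, 2), used twice, survives; a once-used connection is gone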



    Basically, increased capacity requires an unwieldy increase in storage
space.  Miniaturizing brain circuits (while keeping the same connectivity
system as before) only puts off the problem.  True, you might separate your
"extra brain" from your main brain.  But even if your "extra brain"
connects with your main brain at the speed of light, your memory will fade
significantly if you so much as go to the Moon.
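
    The point about the Moon is simply the speed of light.  A quick check
of the numbers, using the usual mean Earth-Moon distance of about 384,000
km:

# Even a lunar "extra brain" connected at the speed of light answers slowly.
distance_m = 3.84e8    # mean Earth-Moon distance, about 384,000 km
c = 3.0e8              # speed of light in vacuum, m/s

one_way = distance_m / c
round_trip = 2 * one_way
print(f"one way: {one_way:.2f} s, round trip: {round_trip:.2f} s")
# Roughly 1.3 s each way, about 2.6 s to ask for and receive a memory:
# an eternity next to the millisecond timescales of neural signalling.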



    We can still add off-line storage space, relearning older memories when
needed.  Relearning, of course, involves growing new connections.



    Note that the same problem arises with neural net computers.  One other
point needs stress: despite the limits, by miniaturization and larger
brains we might increase our capacity by at least a factor of 10.  That is
still very worthwhile.







Increase in learning speed



    Learning, in this model, involves growth, which takes time, energy, and
materials.  Already our brains burn a large share of the calories we eat,
as high as 40%.  To cut down energy and materials expense, we might first
use miniaturization.  After that, we might add a special cooling system,
perhaps an extension of our present cerebrospinal fluid.  Our blood would
bring more materials and also take away excess heat.  For temporary periods
of learning at very high rates, we might also imagine special "Learning
Stations" to rearrange our brains, allowing not only increased blood flow
but cooling solutions.



    What about increasing "intelligence"?  Just what does "intelligence"
consist of?  Besides ourselves and other animals, we now have computers,
capable of spectacular feats of processing on some problems and spectacular
stupidity on others.  Even other animals can do processing we cannot
(dolphins and sonar, for instance).



    The lesson of these examples is that many different kinds of brain
processing exist.  In the end, we will want some increase in our learning
ability, but will rely on many different systems for other kinds of
processing.  These would connect to us in detachable ways, more or less
like present computers.  We may even develop special interfaces to attach
to them, much as our hands attach to our machines; but they would still
remain apart from us.  Hands were a good idea and remain so.  (The problem
with making any kind of ability a permanent part of yourself is that you
may not always use it: one more piece of baggage.  With too many additions
you grow too fat, metaphorically and actually.)





Why not move ourselves over into computers?



    Some would say that by doing this we would become essentially
different, and so lose our Selves and our personality.  That may be so,
although I know of no logical or experimental means to find an answer.
Instead I will discuss one major practical reason why a growth model may
have prevailed in brain design.



    I shall discuss only neural net computers since, so far, they alone can
do some of the kinds of learning needed.  Suppose, then, a neural net
computer with the same capabilities as our brains in all respects.
Neurologists have classified about 100 different processing regions in our
brains, each one a neural (sub)net.  The advantage of a neural net computer
(with fixed connections) over a brain would be that all connections had
been grown in advance.  (Even at the start this "advantage" may mean
little: short-term memory allows temporary learning until growth has
finished.  That may even be its explanation.)



    With 10^13 neurons in each processing region, each neuron needs 10^13
synapses to make all possible connections.  (Neurons now have a maximum of
about 1000 connections, within a factor of 4.)  Let each connection cover
only 1 nm^2 of surface.  Total area of all connections becomes 10,000 m^2,
or 100 meters on a side, for only one neuron.  A brain designed this way
carries along one billion (10^9) times the mass it actually uses.  What
about virtual connections?  That merely turns one kind of unused capacity
into another (virtual connections use other neurons to transfer impulses,
invisibly to sender and receiver).  Virtual connections may not even work:
direct connections between neurons must exist for a reason.  (This is an
argument valid for both silicon and protoplasm.)
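
    Taking the figures above at face value, the overhead of growing every
possible connection in advance is easy to check.  A back-of-the-envelope
Python sketch follows; the numbers are the ones quoted in the text, with
the stated factor of 4 applied to the 1000 connections per neuron.

# Cost of pre-growing every possible connection, using the figures above.

neurons_per_region = 1e13                # neurons in one processing region
synapses_needed    = neurons_per_region  # one synapse to every other neuron
synapses_actual    = 4e3                 # about 1000 connections, within a factor of 4

overhead = synapses_needed / synapses_actual
print(f"unused-to-used connections per neuron: about {overhead:.1e}")
# Prints about 2.5e+09: the pre-wired design carries on the order of a
# billion times the connections (and hence mass) it ever actually uses.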



    If we try to limit connectivity a priori we find another problem.  By
limiting possible connectivity, we limit the kinds of connections our
brains can make.  Given that all new connections form on a background of
the old, this means a limitation on connections between responses.  Brains
with fixed connectivity, then, will lack adaptability.



    These two factors, combined, may tell us why our brains operate by
growing new connections rather than staying solely with the old.  Yes,
growth takes longer.  But it may also support our mental flexibility, which
still far exceeds that of any computer and may remain so.  Perhaps someday
we will modify ourselves for even more flexibility.  We can see now how to
do so.




