X-Message-Number: 26143
From: 
Date: Sat, 7 May 2005 01:50:41 EDT
Subject: Uploading technology (2.iii.1) large neural nets.

Uploading technology (2.iii.1) large neural nets.
 
Here I start to look at large neural networks. A first remark: the brain is
not an ice cube with the same properties and functions everywhere. Not all of
its parts are of equal interest for uploading, and some can be treated as black
boxes. For example, the retina is a direct extension of the central nervous
system; do we need to copy it? The eye has some 125 million rod and cone cells,
but the optic nerve carries far fewer links to the brain. One function of the
retina, beyond turning light into an electrochemical signal, is to compress the
data. So must we copy that, or use something like a CCD chip, a conventional
data compressor and an adapter to send the signal to the brain?
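
To put a rough number on that compression: the text quotes only the 125 million
photoreceptors; the optic nerve fiber count of roughly one million is my own
assumption (the commonly cited order of magnitude), used here just to show the
scale of the reduction.

  # Order-of-magnitude of the retina's data reduction. The 125 million figure
  # is quoted above; the ~1 million optic nerve fibers is an assumed round
  # number, not a figure from the text.
  photoreceptors = 125_000_000
  optic_nerve_fibers = 1_000_000   # assumed order of magnitude

  print(f"compression factor ~ {photoreceptors / optic_nerve_fibers:.0f} : 1")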
 
Because we understand the retina and what it does, we are free to choose the
best solution, not simply the biological one. For example, artificial retinas
have been built with some hundreds of cells; it would be a big task to build
one with 125 million elements. Today, large CCDs are in the 20-million-element
range, and even if they are somewhat slow, they need only 3 - 4 electronics
generations, 5 years or so, to reach the eye's capacity. So, at least at first,
the CCD is the winner, and it is best to concentrate on the adapter, itself a
neural net with only a distant link to a biological copy.
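
A quick back-of-the-envelope check of that generation count, under the
assumption (mine, not stated in the text) that each electronics generation
roughly doubles the CCD element count:

  import math

  # How many doublings take a 20-million-element CCD to the eye's 125 million
  # cells? One doubling per generation is an assumed rule of thumb.
  eye_cells = 125_000_000
  ccd_today = 20_000_000

  doublings = math.log2(eye_cells / ccd_today)
  print(f"doublings needed: {doublings:.1f}")   # about 2.6, so ~3 generations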
 
Other brain parts deal with body functions and are interesting for uploading
only as black boxes producing an output. There is no need to simulate them
neuron by neuron, dendritic spine by dendritic spine. If we understand the
input and the output, a simplified simulator can be used.
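
As a toy illustration of this black-box idea (the module and its transfer
curve below are invented for illustration, not taken from real physiology),
a body-regulation centre could be reduced to a measured input-output relation:

  # A brain region we only need at the input/output level: no per-neuron
  # state, just a fitted or measured transfer function.
  class BlackBoxModule:
      def __init__(self, transfer):
          self.transfer = transfer       # measured input -> output relation

      def step(self, inputs):
          return self.transfer(inputs)

  # Hypothetical example: a respiratory drive rising with blood CO2 level.
  breathing_center = BlackBoxModule(lambda co2: max(0.0, 2.0 * (co2 - 40.0)))
  print(breathing_center.step(45.0))     # -> 10.0 (arbitrary units)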
 
Next come elements of intermediate interest, such as the olfactory unit. There
are something like one million pyramidal cells, all of them linked by
excitatory connections, and maybe as many inhibitory GABA cells. The input,
coming from the olfactory bulb, seems distributed at random over the first
pyramidal cell layer. So why simulate them all independently? One way to reduce
the computing load is to create "super neurons", each working as the averaged
sum of a sub-network. Another solution is to weight the output of each neuron
as if there were many interactions with nearby similar neurons (1).
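
Here is a minimal sketch of the "super neuron" reduction, assuming simple
rate-based units and a 100-to-1 grouping factor (both are my own illustrative
choices, not figures from the text):

  import numpy as np

  # Replace blocks of similar neurons, each receiving random olfactory-bulb
  # input, by one unit carrying the averaged activity of its block.
  rng = np.random.default_rng(0)
  n_full, group = 10_000, 100            # 10,000 neurons -> 100 super neurons

  weights = rng.normal(0.0, 1.0, n_full) # random input weight per neuron
  bulb_input = rng.random()              # shared drive from the olfactory bulb

  full_rates = np.tanh(weights * bulb_input)                 # detailed model
  super_rates = full_rates.reshape(-1, group).mean(axis=1)   # averaged blocks

  print(super_rates.shape)               # (100,): 100x fewer units to simulate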
 
Yet another, more interesting solution is to use a "crossword grid". In one
grid cell there may be 1,000 to 2,000 neurons simulated using the Langevin
equation; recall that this equation has a deterministic differential part and a
random one. All connections leaving the left side of a cell are mapped back
onto the right side, and this works the other way too; the top and bottom sides
are handled in the same way. The trick is that when a link crosses a cell
border, the random part of the Langevin equation is altered. Assume we compute
the random function 1,000 times: in cell 1 the first result is used; if an axon
sub-element goes into cell 2, result 2 of the random function is applied, and
so on. So the output of a single neuron will be something like a "state
superposition" with one differential part and 1,000 random elements. This will
work as 1,000 different neurons.
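
Below is a numerical sketch of that scheme: the deterministic part is
integrated once, the random part is drawn 1,000 times (one draw per grid cell),
and a connection simply reads the random stream of the cell it ends in, with
wrap-around at the grid border. The particular Langevin form
dv = ((I - v)/tau) dt + sigma dW and all parameter values are my own
illustrative choices, not the author's equations.

  import numpy as np

  # One template neuron behaving as K different neurons: one shared
  # deterministic trajectory plus K independently accumulated noise terms.
  rng = np.random.default_rng(1)
  K = 1000                              # grid cells = random streams
  dt, tau, sigma, steps = 0.001, 0.02, 0.5, 500
  external_input = 1.0

  v_det = 0.0                           # shared differential part
  noise_acc = np.zeros(K)               # one random part per cell

  for _ in range(steps):
      v_det += ((external_input - v_det) / tau) * dt
      noise_acc += sigma * np.sqrt(dt) * rng.normal(0.0, 1.0, K)

  v = v_det + noise_acc                 # "superposition": 1 drift + K noises

  def output_seen_from_cell(cell_index):
      # A link running past the last cell wraps around to the first one.
      return v[cell_index % K]

  print(output_seen_from_cell(2))       # an axon ending in cell 2 reads stream 2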
 
The auditory processing area has a very structured input domain: each neuron
receives a well-defined and localized input, so it cannot be treated as a
"crossword grid"; each neuron is different. A good solution in one domain may
be a bad one in another.
 
Yvan Bozzonetti.

