X-Message-Number: 27631
From: 
Date: Fri, 17 Feb 2006 15:16:39 EST
Subject: Uploading (3.v.1) The gap junction neuron.

Uploading (3.v.1) The gap junction neuron.

Gap junctions may leak potential from one neuron body to another, bringing the
second neuron up to some percentage of the potential in the first. If ten
percent of the roughly 300 neurons in a microcolumn are at firing level, they
can bring nearly all the others to fire as well. This is why it is so
difficult to separate one neuron from the others in a microcolumn. For lower
potentials, there may be a noticeable effect at the scale of the dendrite
domain; such a domain is assumed to be the elementary unit of gap junction
activity.

For uploading, gap junctions are of particular interest. Indeed, assume there
is a "blank brain": how would one load a person into it? This must be done in
two steps: first, by adjusting the network geometry, and second, by loading
the right sensitivity value at each synapse. The second step can be done as a
training process using a firing threshold adjusted by the gap junction
potential. So gap junctions are the programming entries of the network.

The number and kind of gap junctions do not evolve rapidly; in a first system
they could even be a fixed property of the network. The reason is that the
original structure is copied from a biological configuration, whereas an
evolved one would rest on the running of the electronic system. At least for
the first generations, such systems are only approximate models, so it may
not be wise to allow too much flexibility and the possibility of evolving far
away from what could be done in a biological brain.

The full gap junction network must be modeled and run at the microcolumn
level, and from that, a "leak" potential is derived from each neuron to each
other. If there are 300 neurons, this is a big 300 x 300 matrix. In fact, a
simplified system can be produced from it. Assume the maximum potential shift
is 50 mV and the maximum coupling is 10 percent; then one neuron can't induce
more than 5 mV in a nearby one. If the noise level is .1 mV, there can't be
more than 50 distinguishable interaction levels, from .2 to 10 percent. This
can be represented as a 6-bit number (2 to the power 6 = 64). A local effect
is then computed as a sum of 300 products, each the product of a neuron
potential and a 6-bit coupling percentage. This has to be done at the
dendrite domain level, not for the full neuron. On the other hand, nearby
dendrite domains of different neurons in the same area may undergo a similar
or identical gap junction effect; in fact, in a given microcolumn, the number
of domains may be smaller than the number of neurons.
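As a rough sketch of this simplified scheme (the matrix entries here are
random placeholders, not measured couplings; only the sizes and the 6-bit
quantization follow the figures above):

```python
import numpy as np

N = 300                  # neurons in a microcolumn
MAX_SHIFT_MV = 50.0      # maximum potential shift
MAX_COUPLING = 0.10      # 10 percent maximum coupling
NOISE_MV = 0.1           # noise floor

# Distinguishable coupling levels: 5 mV span / 0.1 mV noise = 50,
# which fits in a 6-bit code (2**6 = 64 levels).
levels = int(MAX_SHIFT_MV * MAX_COUPLING / NOISE_MV)

rng = np.random.default_rng(0)
# Quantized coupling matrix: each entry is a 6-bit code in 0..50,
# standing for a coupling fraction between 0 and 10 percent.
coupling_codes = rng.integers(0, levels + 1, size=(N, N), dtype=np.uint8)
np.fill_diagonal(coupling_codes, 0)   # no self-coupling

potentials_mv = rng.uniform(0.0, MAX_SHIFT_MV, size=N)

# Decode the 6-bit codes and compute the local effect on each neuron:
# a sum of 300 products, neuron potential times coupling percentage.
coupling = coupling_codes.astype(np.float64) / levels * MAX_COUPLING
leak_mv = coupling @ potentials_mv
```

In a hardware version the decode step would disappear: the 6-bit codes would
feed small multipliers directly, which is what makes the representation cheap.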


Gap junction induced potentials have an interest beyond the neuron
simulation: because they define the firing threshold of the neuron, if that
threshold is too high there is little probability of an effective firing. So
it may be a waste of time and computing resources to simulate that neuron
precisely at that instant. A rough estimate of its activity can be deduced
from nearby neurons, and that may be sufficient to update the gap junction
potential. A neuron fires something like ten times per second, so being able
to predict when is a big economy: from the Shannon sampling theorem we would
have to simulate it every millisecond, while with this smart choice the work
may be reduced by a factor of 100.
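The economy factor works out as follows (the rates are the ones assumed
above):

```python
# A fixed 1 ms (Shannon-rate) schedule versus an event-driven one that only
# simulates a neuron precisely around its predicted firing times.

FIRING_RATE_HZ = 10        # a neuron fires about ten times per second
SHANNON_STEP_S = 0.001     # 1 ms steps -> 1000 updates per second
DURATION_S = 1.0

dense_updates = int(DURATION_S / SHANNON_STEP_S)      # 1000 per second
predicted_updates = int(FIRING_RATE_HZ * DURATION_S)  # ~10, one per firing

economy = dense_updates // predicted_updates
print(economy)  # 100
```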

This could bypass the need for a metaneuron level and allow a brain to be
simulated directly from its elementary neurons. More precisely, the
metaneuron approximation would be reserved for some particular functions not
linked to consciousness, such as low-level signal processing. A single chip
such as the Virtex-4 SX35 could simulate more than ten million neurons in
real time. An ASIC derivative with up to 20 million electronic gates would
reach the 300 million neuron mark; fewer than 20 such chips would simulate a
full neocortex.
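The chip-count arithmetic, taken at the figures quoted above (the per-chip
capacities are the text's estimates, not benchmarked numbers):

```python
neurons_per_asic = 300_000_000   # 300 million neurons per ASIC derivative
chips = 20                       # "fewer than 20" chips

# Implied neocortex scale reachable with a small rack of such ASICs.
capacity = neurons_per_asic * chips
print(capacity)  # 6000000000
```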


We now have a good idea of what could be done with current technologies.
Even without ASICs, current FPGAs could display a power well beyond what is
found in an insect, for example. A bee has 800,000 neurons and a small mammal
100 million; the chip is not far from the second. Animals have a tremendous
capacity for recognizing patterns, whether visual, auditory or otherwise.
Potential applications are numerous, and this must be a strong incentive to
implement such a neural system in an FPGA. On the other hand, we may have to
build a first-generation brain scanner before we can wire such a network
correctly.

Y. Bozzonetti.

