X-Message-Number: 28557
From:
Date: Sat, 7 Oct 2006 01:40:31 EDT
Subject: Uploading technology (1.iv.3) MRI Brain Reader 4

Uploading technology (1.iv.3) MRI Brain Reader 4

In the preceding message (#28547), I argued for the use of electron resonance imaging. If it is so good, why is it not in general use today? I think the problem is historical: MRI was developed in the 1970s-80s. It was difficult then to get magnetic fields homogeneous at the sub-milligauss level, so the gradient field had to be in the gauss-per-inch range to be both manageable and give good picture definition. The continuous field could not then fall below one kilogauss or so. One kilogauss in electron magnetic resonance implies a frequency near ten gigahertz, which was difficult for the amplifier electronics. On the other hand, nuclear magnetic resonance allowed the use of nearly off-the-shelf video amplifiers found in any TV set. So there was no incentive to use ERI, and there is not much today outside brain reading.

Direct-to-home satellite TV technology has since put on the market amplifiers and components suited to GHz work. There are off-the-shelf amplifiers up to 45 GHz, and ASIC circuits can be ordered up to 100 GHz and beyond. The technology used is the so-called Pseudomorphic High Electron Mobility Transistor (PHEMT).

The second problem, field homogeneity, remains as before. A field can be produced with a homogeneity of one part per million over the volume of interest in MRI. If the main field is one tesla, that is 10,000 gauss, it can be smoothed to .01 gauss. Because one gauss translates into nearly ten MHz in electron magnetic resonance and the pixel spectrum width is 1 kHz, the field sensitivity is .0001 gauss. So the pixels are one hundred times smaller than the homogeneity scale of the magnetic field. If a picture is taken in the X,Y plane, there will be many displacements in the third, Z, direction. A 3-dimensional picture can then no longer be built as a stack of flat 2-D views.

Thirty years ago, that was the end of the story. Today there is one way around it: if the field roughness is stable, that is, it remains the same over time or evolves slowly in a predictable way, the instrument can be calibrated. For example, a set of straight fibers can be imaged; the picture will look wavy because of the field inhomogeneities. The waviness is then a measure of the picture curvature, of how it departs from flatness. Doing that for the neighborhood of each pixel gives a map of the field's distorting effect. Taking then a 3-D picture of any object, the true position of each voxel can be computed back (a minimal sketch of this remapping step is given below). This process is called deconvolution; it is rather simple at the elementary level, but asks for massive computing power.

That power is on the market today; it was not 30 years ago. There are specialized circuits called Digital Signal Processors (DSP) able to crunch up to 500 billion operations per second on such a problem. Even for them, this is not a small task. Assume one pixel, or rather voxel because this is 3-D, is 10 nanometers on a side and the full picture is 10 cm (4") on a side. There will be ten million voxels in a row and 10^21 of them in the picture. If each needs 100 operations, the computing load is 10^23, or 100,000 billion billion operations. 200 DSPs would take some 30 years to finish the job. The most advanced DSPs have phase-locked loop (PLL) problems, i.e. synchronization difficulties, beyond 230 devices chained together; that is why I have assumed a round limit of two hundred of them.
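To fix ideas on the remapping step mentioned above, here is a minimal Python sketch. It is only an illustration under my own assumptions: a toy 64-voxel cube instead of the real volume, and a made-up per-column Z-shift map standing in for the map that would come out of the straight-fiber calibration:

import numpy as np

# Toy volume: 64 voxels on a side instead of ten million (illustration only).
N = 64
measured = np.random.rand(N, N, N)               # distorted volume as recorded

# Calibration result: for each (x, y) column, by how many voxels the field
# error shifts a slice along Z. Here a smooth made-up map; a real one would
# be measured by imaging straight fibers and reading off their waviness.
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
dz = np.rint(2.0 * np.sin(2 * np.pi * x / N) * np.cos(2 * np.pi * y / N)).astype(int)

# Elementary "deconvolution": for every voxel of the true grid, look up where
# it was actually recorded and copy it back to its proper position.
corrected = np.empty_like(measured)
for k in range(N):                               # loop over true Z positions
    src = np.clip(k + dz, 0, N - 1)              # source slice for each column
    corrected[:, :, k] = measured[x, y, src]     # per-column lookup

A real correction would interpolate between voxels rather than take the nearest one, so a cost of the order of the hundred operations per voxel assumed above seems plausible.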
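The load estimate itself can be reproduced with a few lines of the same kind; every number below is one quoted in the text, nothing is added:

# Brute-force estimate with the figures quoted above.
voxel = 10e-9                                 # 10 nm voxel edge
side = 0.10                                   # 10 cm (4") field of view
ops_per_voxel = 100
dsp_rate = 500e9                              # 500 billion operations/s per DSP
n_dsp = 200

voxels_per_row = side / voxel                 # 1e7
total_voxels = voxels_per_row ** 3            # 1e21
total_ops = total_voxels * ops_per_voxel      # 1e23
seconds = total_ops / (n_dsp * dsp_rate)      # 1e9 s
print(seconds / (3600 * 24 * 365))            # about 31.7 years

Re-running the last two lines with the 10^19 synapse-only voxel count discussed next gives about 10^7 seconds, i.e. the four months quoted below.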
The full resolving power is not needed everywhere; it is useful only at the synaptic boutons, and there may be 10,000 of them per neuron and 10 billion neurons. One synapse can be defined with 100,000 voxels. That sums up to 10^19 voxels, "only" one percent of the preceding brute-force estimate. The computing time then falls to 4 months. What remains could be pictured at the micrometer scale, giving an economy of one million to one on the voxel number. Even if this volume includes 99 percent of the full picture, it accounts for only 99/1,000,000 = .01% of the computing load.

So, brain reading with ERI looks like a possibility with current technology, even if the computing demand is at the leading edge of DSP capabilities.

Yvan Bozzonetti.