X-Message-Number: 19392
Date: Tue, 2 Jul 2002 15:37:24 EDT
Subject: Intensity Interferometer 5

Content-Type: text/plain; charset="US-ASCII"
Content-Transfer-Encoding: 7bit

Intensity interferometer 5.

In Intensity Interferometer 4 (ii4), I gave a sketch of what a sound-based ii could do. I pointed out that sounds are made of phonons, one of the many excitons found in matter. For someone who wants to make an advance in the field without putting in too much money, a good idea could be to study and compare the different possibilities opened up by the various exciton fields.

Different excitons call for different detectors. Beyond that, there is a common trunk for handling and processing the data: a hardware part and then a software part. The software is completely indifferent to the nature of the input signal: photon, phonon, plasmon, and so on. It doesn't matter whether we are trying to get a picture of a neutron star, the inside of the Earth, or a frozen brain. A second step, after looking at all known excitons, would be to identify application domains for each potential system. An ii web site could be the start of a link between potential users, from astronomers to geophysicists to medical researchers.
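As a concrete illustration of that signal-agnostic software trunk, here is a minimal sketch of the core computation an ii performs on any pair of sampled intensity streams, whatever exciton produced them. (The function name intensity_correlation is my own, for illustration; a real pipeline would add time lags, bandwidth corrections, and noise statistics.)

```python
import numpy as np

def intensity_correlation(i1, i2):
    """Normalized zero-lag intensity correlation <I1*I2> / (<I1><I2>).

    Works on any sampled intensity stream -- photon counts, squared
    phonon amplitudes, plasmon signals -- the arithmetic does not care
    what kind of detector produced the samples.
    """
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    return (i1 * i2).mean() / (i1.mean() * i2.mean())
```

Photon counts from two photometers or digitized phonon intensities could be fed in unchanged; only the detector-specific front end differs.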

Then there must be an active project in the ii domain. To my knowledge, the Australian Narrabri instrument has been the only ii in the world for at least 20 years, and it no longer functions today. Because astronomy is the sole domain where this technology has effectively been put to practical use, I think the new project must start from there. That will suppress all sterile talk about the theory being flawed, it can't work, and so on. The objective of the new project would be to test the hardware and software of the common trunk, so it must be defined with an eye on a coming phonon brain reader. It must start from what has been done and make a significant but manageable increment. I think the effort must go into making pictures with an ii. This will run against a current misconception and define the battleground.

Here is what such a project could be: the astronomical imaging intensity interferometer (i-cube) would produce 2-D pictures with a definition near 100 x 100 pixels. The sampling frequency would be 300 MHz, or a one-meter sample wave. This is the limit for common high-speed photometers, at least as I know them, and it is also the maximum frequency that could be used on a phonon system. The observing wavelength would be 1,000 times smaller, or one millimeter. This value is chosen so that observations are not too long and, above all, because it is the value used on a brain reader, so the software could be transposed directly from star to brain picture building. Using the GPS and the coming Galileo system, it should be possible to pinpoint the position of each light detector with meter-scale precision over large distances. I suggest a 1,000 x 1,000 km observing ground with one observing position every 10 km (one for each pixel in the final picture). The resolving power would be equal to that of an optical telescope 500 m in diameter. Because of the low sampling frequency, this is a long-tail instrument with poor energy efficiency; only bright objects such as neutron stars or black hole environments could be seen (no Earth-like planets). The angular resolution would be 10^-9 radian, or 10,000 km at 1 light-year; a Jupiter-scale object could be just resolved at the distance of the nearest stars.
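The figures quoted above hang together, and can be checked with a few lines of arithmetic (rounded constants: c = 3 x 10^8 m/s, visible light taken at 500 nm, 1 light-year taken as 9.46 x 10^15 m):

```python
c = 3.0e8                                  # speed of light, m/s
f_sample = 300e6                           # 300 MHz sampling
sample_wavelength = c / f_sample           # -> 1 m "sample wave"
obs_wavelength = sample_wavelength / 1000  # -> 1 mm observing wavelength
baseline = 1000e3                          # 1,000 km observing ground
theta = obs_wavelength / baseline          # -> 1e-9 radian resolution
optical = 500e-9                           # visible light, ~500 nm
equiv_telescope = optical / theta          # -> ~500 m optical mirror
light_year = 9.46e15                       # meters
spot = theta * light_year                  # -> ~10,000 km at 1 light-year
```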

One observational run would take 3 seconds; a deeper observation with 10 times the resolving power would expand that time to nearly one hour (each experiment would be 10 times longer, and the statistics would call for 100 times more experiments). If the light collectors were of the solar-furnace kind used at Narrabri, they would be cheap, but an imaging system with 10,000 pixels would require as many of them, and as many electronics systems; the cost would be huge. A way out of this problem is to use movable instruments. There could be only 100 instruments, moved each day to a new location; they would come back to the same point after 100 stops. So there is a problem: one observing run lasts from 3 seconds to one hour at most, but after that the whole system must be packed up and moved 10 km (6 mi) away. Making one picture then takes at least 3 months (not counting days with cloud cover).
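The movable-instrument schedule works out as follows, a back-of-the-envelope tally of the figures above (variable names are mine, for illustration):

```python
# One observing position every 10 km on a 1,000 x 1,000 km ground.
positions = 100 * 100                  # 10,000 grid positions in all
instruments = 100                      # movable light collectors
stops_each = positions // instruments  # -> 100 stops per instrument
days = stops_each                      # one run and one 10 km move per day
# -> ~100 clear days, i.e. at least 3 months once cloudy days are added.
```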

One solution would be to invest more in the light collector. If it were a more or less classical telescope, it would produce a picture of the observed field. Optical fibers could then pick up the point sources; each beam at a fiber exit would be swept along a CCD row by a rotating mirror, which would give the system its time resolution. The next CCD row would be devoted to another beam from another fiber. Similar systems in spectrographs may have up to 2,000 fibers observing as many objects; there could be 1,000 fibers here, in round numbers. That would not accelerate the production of a single picture, but 1,000 would be built at the same time. If the fiber entries can be moved in the observing field, more objects could be scanned. The telescope may then be a robotic one, aiming at many fields in the course of an observation run. In one night, 100 fields could be targeted; in each one, ten fiber settings could be used, each observing 1,000 objects. After 3 months, a batch of one million pictures would be released. The system moving the fiber entries in the field of view would be very akin to the scanning system on a phonon-based ii.

The next problem is: who would pay for such a system?

Yvan Bozzonetti.



