X-Message-Number: 5273
Date: Mon, 27 Nov 1995 13:48:42 +0100 (MET)
From: Eugene Leitl <>
Subject: #5254: Re: CryoNet #5245 - #5251 [Thomas Donaldson] (intelligence substrate)

On Fri, 24 Nov 1995 11:55:52 -0800 (PST) (Thomas Donaldson) wrote:

>
> Hi!
>
> And here we go again:
>
> Mr. John Clark argues that I am only raising "engineering" difficulties. As
> readers of my original message know, I pointed out that not all materials
> provided equally good substrates either for computing or for brains. It seems
> to me that "engineering" difficulties, when we become really serious, count
> at least as much as other problems. That's why I mentioned them. Yes, with
> considerable effort we can make computers out of a wider variety of substances
> than silicon. Silicon won out precisely because it did not require all that
> effort, and made something that worked better, to boot. Just what materials
> we might use to make neural nets that behave as our brains do remains an open
> question.

Silicon by itself is not necessarily a bad substrate for cognition; however, photolithography is constrained to be a two-dimensional technique. With true 3d Si circuitry unattainable and the die size limited, the constraints are very noticeable. Switching speed is more than adequate, but connectivity is severely limited (both by geometry and by fanout), and the power dissipation per logical operation is considerable, which in turn limits integration density and switching speed. All in all, this bodes ill for silicon photolithography.

What I'd like to add: all graphs of integration density, memory size and processing speed are exponential (they fit a straight line very well when plotted on a logarithmic scale). However, at least one metric is a fake: the performance as measured in MFlops (meaningless floating point operations per second). Virtually all major computer scientists tend to estimate the brain's computing capacity in MFlops (nothing wrong with that, apart from the fact that their estimates tend to be orders of magnitude too low). Then they extrapolate the straight line way into the future until it reaches the same height as the brain estimate, read off the corresponding year, and voila! the human equivalent is supposed to be reached in 2015 or 2035 or whenever it is supposed to be.

There are two basic faults in this reasoning. The weaker one is that a substrate change would introduce at least a discontinuity, unless a massive parallel programme to mature an alternative substrate (which takes considerable time) is started long before the saturation point becomes visible. Since there are currently no noticeable attempts to develop molecular substrates, particularly 3d ones, we might face a development discontinuity of 10-25 years sometime in the near future.

The stronger objection is that the MFlops growth rate is a fake metric. Oh, 'tis true that a certain machine can do some 100 MFlops, but these are _local_ MFlops. Apart from the fact that the transistor count vs. MFlops plot has lately started to speak a different language, the growth of nonlocal memory bandwidth has so far been very limited. The trend towards maspar fine-grain machines with high/scalable internode communication bandwidth has been reversed lately (whatever happened to Thinking Machines?), so the future prospects do look gloomy indeed. The recent emergence of radical new technologies like CAMs, the quantum dot array computer etc. might offer a solution, though.
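To make the criticized extrapolation concrete, here is a minimal Python sketch of the straight-line-on-log-paper argument. The sample points and the 1e8 MFlops "brain equivalent" figure are my own invented placeholders, not taken from any actual survey:

# Naive Moore's-law extrapolation of the kind criticized above.
# All numbers below are illustrative assumptions only.
import math

samples = [(1975, 1e0), (1985, 1e2), (1995, 1e4)]   # (year, peak MFlops), assumed

# least-squares fit of log10(MFlops) = a*year + b
xs = [year for year, _ in samples]
ys = [math.log10(mflops) for _, mflops in samples]
n = len(samples)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

brain_mflops = 1e8                     # assumed brain estimate (probably too low)
year = (math.log10(brain_mflops) - b) / a
print("naive crossover year: %.0f" % year)   # -> 2015 for these made-up numbers

Note that nothing in such a fit knows about substrate saturation or nonlocal memory bandwidth, which is exactly what the two objections above are about.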
> And note, moreover, that I specifically did not say that Mr. Clark was "wrong".
> I said that the materials involved needed to be considered or else he was
> raising what was, in PRACTICAL terms, a meaningless abstraction.
>
> Moreover, Mr. Clark still shows the arrogance I have come to expect of those
> who believe in Nanotechnology. We learned to fly by careful study of birds,

[ heavy chanting in the background; flickering light of firebrands through dense sleet; pitiful cries for mercy ]

Smash them! Burn 'em! A hempen noose for those infamous nanotech heretics!

(Erm.)

> not the small birds but of large ones such as albatrosses, which spend much
> of their time gliding. And the entire notion of neural nets, which has proven
> to be more and more useful in real engineering applications (whether or not
> these "artificial neural nets" imitate real neural circuits, or even claim to)
> came from study of how brains might work. Yes, mammals (humans included)

NNs, particularly in the form of dedicated finite-automaton-network engines, are certainly here to stay. BTW, the reason why all von Neumann machines perform very poorly on NNs is not a random fact. A high-end off-the-shelf PC or a workstation is roughly equivalent to 50-100 realtime biological neurons, _whatever MFlops the CPU has_ (within reasonable lower limits, of course). An average insect has some millions of them. To say it again: MFlops and vector multiplications/s are the wrong metric for NN performance estimation.
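A back-of-envelope sketch in Python of why the memory bus, not the MFlops rating, sets the limit. Every constant here is an assumption of mine (a generic mid-1990s workstation and a generic rate-model neuron), not a measurement:

# Rough check of the "50-100 realtime neurons per workstation" claim.
# All figures are assumptions for illustration only.

synapses_per_neuron = 1e4      # assumed biological average
updates_per_second  = 100.0    # effective update rate of a realtime neuron
flops_per_synapse   = 2.0      # multiply + accumulate
bytes_per_synapse   = 4.0      # one single-precision weight fetched from DRAM

peak_mflops         = 200.0    # assumed peak CPU rate
sustained_mbytes_s  = 50.0     # assumed sustained main-memory bandwidth

flops_per_neuron = synapses_per_neuron * updates_per_second * flops_per_synapse
bytes_per_neuron = synapses_per_neuron * updates_per_second * bytes_per_synapse

neurons_flop_bound      = peak_mflops * 1e6 / flops_per_neuron
neurons_bandwidth_bound = sustained_mbytes_s * 1e6 / bytes_per_neuron

print("FLOP-bound estimate:      %d neurons" % neurons_flop_bound)        # ~100
print("bandwidth-bound estimate: %d neurons" % neurons_bandwidth_bound)   # ~12

The weight table does not fit in cache, so the lower, bandwidth-bound figure is the one that counts; doubling the CPU's MFlops rating changes it not at all.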
> send impulses on their nerves are far less than the speed of electrical
> impulses on wire. Yet right now there is work in progress on making organic
> conductors, with some success. I personally believe that the reason our
> nerve impulses are so "slow" is not because of any physical limit, but simply
> because they do their job well enough as it is (their job, of course, is to
> manage our limbs and body, both of which have physical restraints on how fast
> they can move. Right now, if our nerves worked faster, their speed would be
> quite useless because nothing else would match it).

Apart from the "evolution being conservative" bit, there are at least two reasons why nature still uses this supposedly inferior membrane-depolarisation signaling technique. The first is power dissipation. An adult's brain burns roughly 20 W; considering the integration density and the performance (once again, the brain is an excellent computer), this is an extremely good value. (According to recent research results, brain size is bottlenecked by the metabolic power available during fetal morphogenesis, a significant insight.) The second might be a lower susceptibility to EM noise: organisms can tolerate quite stupendous currents without suffering much damage. Another reason might be the higher connectivity achievable with active signal transfer, connectivity being a major problem for semiconductors.

> Naturally, as parallel systems, far more parallel than anything yet built,
> our eyes (for instance) can match any electrical device in their speed of
> reaction. That is because they are highly parallel, not because individual
> impulses proceed faster.

It is quite instructive to read Moravec's treatise on retina performance ("Mind Children", appendix). Apart from the fact that his MFlops should be considered parallel MFlops, his estimate appears realistic (but only for the retina, not for the cortex).

> I would agree with him when he says that we can probably design ourselves
> "better". Immortalism is part of that belief. Just what materials we might
> use remains an open question. And of course, as biochemical creatures,
> we are basically instances of nanobehavior if not nanotechnology. That is

We _are_ instances of weak nanotechnology. Meaning: we are the living proof of the existence of weak nanotech. Strong (Drexlerian) nanotech lacks such proof so far. Numerical data is a hint, not a proof.

> exactly what enzymes and cofactors do. Not only that, but despite all the
> noise in some circles, nanotechnology (except for its currently major
> branch, known as BIOCHEMISTRY) has so far been little but theory. Biochemistry
> however has been moving forward very rapidly.
>
> I too am not disinterested in theoretical issues. I would be interested in
> knowing some other way to produce the features that our brain has, and do
> well enough to match real brains in practical things. Clearly the only means
> NOW to do that involves neurons; it would be of great interest to find other
> materials, not least because, even if we don't create brains faster than our
> own, we may devise materials capable of surviving environments that our
> current biological substrate cannot. But one essential point that study of
> parallel computing has led me to is that the older model of a computer fails:
> not because it is "wrong", but because speed of computation is and will be
> a major important factor if we really want to use computers for more than
> intellectual games.

The question is less one of substrate than of architecture. (If the substrate is not instrumental in implementing the architecture, it is a bad one, of course.)

> As for the possibility of uploading ourselves into a computer which would then
> provide a substitute reality for us, rather than actually dealing with the real
> one all around us, I would say that (IF that is the only reason for uploading)
> we already have means to do much the same. If you want to escape the world,
> try opium, or speed, or any one of a number of different drugs. If you use

The drug "VR" is perceived in an altered state of perception. Only psychotomimetics produce anything visual, and then without any control on the part of the user, i.e. one has no choice in designing it. The productivity is limited in most cases. The experience is not permanent and tends to damage one's wetware. Messing with one's reward centers (e.g. with opiates) is virtual suicide.

> them you won't even CARE how fast you are. And how do we tell reality from
> a dream? Reality has this tendency to produce events that are not just
> unexpected, but that we would not have imagined in thousands of years of
> dreaming. Sometimes those events are very uncomfortable, sometimes they are
> delightful. That is what happens with reality.

The artificial reality can be an exact (within implementation limits) replica of the physical reality. However, it need not remain one. This is its greatest plus. Natural reality does not support cognition and action all that well.

-- Eugene

>
> Best and long long life,
>
> Thomas Donaldson
>
> ----------------------------------------------------------------------