X-Message-Number: 4326
From:
Date: Mon, 1 May 1995 15:51:34 -0700
Subject: "SCI.CRYONICS" Neural nets

At least four contributors to CryoNet (Keith F. Lynch, John Clark, Eugen Leitl, and Paul Wakfer) have claimed that neural nets are computationally universal, i.e. that any recursive function can be computed by a suitable neural net. I have asked to see a citation to the literature that demonstrates this claim; no one has supplied one yet. I believe that is because the claim is FALSE.

The concept of a neural net traces back to the original paper of McCulloch and Pitts [1]. The logician Stephen Cole Kleene abstracted from their ideas the notion of a finite automaton, and proceeded to describe exactly which sets of inputs are accepted by such automata: they are just the regular sets of strings over the input alphabet [2]. Finite automata are far weaker than Turing machines, because they do not have any scratch memory (like the infinite tape on a Turing machine). The class of regular sets is much simpler than the class of recursively enumerable sets accepted by Turing machines.

The same applies to neural nets. A given neural net has a finite amount of memory for computation, which is fixed in advance. To accept arbitrary recursively enumerable sets, a machine must have arbitrarily large amounts of scratch memory available. Neural nets don't.

A physical computer, such as the PC on my desk, is also a finite automaton and thus cannot compute every recursive function. It can compute every recursive function only under the idealization that I live next to a memory factory, and whenever the machine signals "Out of memory", I can add more RAM and continue the computation.

An analogous procedure with a neural net would be to add more neurons during the course of the computation. However, depending upon how it is done, that could be a far more fundamental change to the neural net than adding RAM is to a conventional computer.
It could hardly count as an inessential modification unless some uniform scheme were given in advance specifying how such neurons are to be added, and with what weights.

Art Quaife
Trans Time, Inc.

References

[1] McCulloch, W.S. and Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5: 115-133 (1943).

[2] Kleene, S.C. Representation of Events in Nerve Nets and Finite Automata. In Automata Studies, C.E. Shannon and J. McCarthy, editors. Princeton: Princeton University Press (1956).
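P.S. The finite-memory distinction above can be made concrete with a short sketch (an editorial illustration, not from the cited papers; the function names are my own). A finite automaton carries only one of finitely many states, however long its input; by contrast, recognizing even a simple non-regular set such as { a^n b^n : n >= 0 } requires a counter that can grow without bound.

```python
def accepts_even_as(s: str) -> bool:
    """DFA over {a, b} accepting the regular set of strings
    with an even number of 'a's. Its entire memory is one
    variable ranging over TWO states, fixed in advance."""
    state = 0                 # 0 = even parity, 1 = odd parity
    for ch in s:
        if ch == 'a':
            state = 1 - state # flip parity; 'b' leaves state alone
    return state == 0

def accepts_anbn(s: str) -> bool:
    """Recognizes { a^n b^n }, which no finite automaton can:
    the counter below is unbounded scratch memory."""
    count = 0
    i = 0
    while i < len(s) and s[i] == 'a':
        count += 1            # grows with the input -- a DFA has no such store
        i += 1
    while i < len(s) and s[i] == 'b':
        count -= 1
        i += 1
    return i == len(s) and count == 0

print(accepts_even_as("abab"))  # True: two 'a's
print(accepts_anbn("aaabbb"))   # True
print(accepts_anbn("aab"))      # False
```

A fixed neural net is in the same position as the two-state machine: its memory is bounded in advance, so it can at best decide regular sets, not arbitrary recursively enumerable ones.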