X-Message-Number: 3645
From: Peter Merel <>
Subject: CRYONICS - Alive & Intelligent.
Date: Mon, 9 Jan 1995 16:54:12 +1100 (EST)

Robert Ettinger writes,

>Just as the ancient philosophers aspired beyond their capabilities, so do
>those who try to make final decisions on the basis of current knowledge.

It's better to discuss the issues than to wait for some perfect enlightenment
to occur. There are a fierce number of philosophies available, and even
someone who asserts a total lack of faith, who reserves their philosophical
preference, is still making a philosophical statement by that reservation.

That said, there's also a fierce amount of tail-chasing performed by
philosophers in general, resulting in little or nothing of use except a
sensation of giddiness. I doubt that the subject pertains to this list at
all, except in the matter of producing philosophies within which cryonics is
feasible and desirable.

>For that matter, I have never denied the POSSIBILITY that inorganic systems,
>including "computers" of some kind, could have feeling. What I have always
>insisted upon is that there CANNOT be such an ASSUMPTION, for the very simple
>reason that we do not yet know the physical basis of feeling (the "self
>circuit.")

We don't even know if there _is_ a physical basis for feeling. Feeling might
simply be an emergent behaviour of sufficiently complex neural networks, a
statistical phenomenon of some sort. Alternatively, as I've suggested, the
whole idea of self might be nothing more than a psychological abstraction,
with no more physical basis than the psychological abstraction of a chair,
or, for that matter, of a unicorn.

>For newcomers--and maybe some old hands need reminding too--a description or
>an analog of a thing is not the thing, and will not NECESSARILY be able to
>perform ALL the functions of the thing SIMULTANEOUSLY.

The issue of simultaneity is surely a matter of implementation, not of
modelling. An emulation will be as good as the hardware that implements it.

>One can describe and
>predict the behavior of a hydrogen atom with a computer, or with pencil and
>paper for that matter; but neither a computer nor a notebook can substitute
>for a hydrogen atom. It is conceivable that "information is everything" in
>SOME sense, but it is also possible, even likely, that ONLY a hydrogen atom
>can contain ALL the information about itself in the volume and mass of a
>hydrogen atom.

If you accept the modern interpretations of quantum mechanics, even a
hydrogen atom cannot contain ALL the information needed to determine its
behaviour. Cf. Aspect's empirical test of Bell's inequality. There was quite
a good recent Scientific American popularisation of Bohm's work that might
make a nice starting point for such enquiries.

Now, depending on what observables you're interested in, there is quite an
active area of research in quantum physics concerned with producing
artificial micro- and macro-scale quantum systems that mimic the properties
of atoms and other naturally occurring quantum systems; quantum dots and
such. I daresay that any observable property of a hydrogen atom could be
emulated by such systems with sufficient ingenuity.
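For what it's worth, the "describe and predict with a computer" half of the
point is easy to show. Here's a minimal sketch, in Python, of a program that
predicts the Bohr energy levels of hydrogen and a couple of its emission
lines. It's purely my own toy illustration, not anything from Ettinger's
post, and it's deliberately a caricature of the full quantum description:

    # A toy "description" of the hydrogen atom: Bohr energy levels and the
    # wavelengths of a couple of emission lines. It predicts behaviour; it
    # does not substitute for the atom.

    RYDBERG_EV = 13.605693    # hydrogen ground-state binding energy, in eV
    HC_EV_NM   = 1239.84      # Planck's constant times c, in eV*nm

    def energy_level(n):
        """Bohr-model energy of level n, in eV (negative means bound)."""
        return -RYDBERG_EV / n**2

    def emission_wavelength_nm(n_upper, n_lower):
        """Wavelength of the photon emitted in the n_upper -> n_lower drop."""
        delta_e = energy_level(n_upper) - energy_level(n_lower)   # in eV
        return HC_EV_NM / delta_e

    print("Lyman-alpha  (2->1): %.1f nm" % emission_wavelength_nm(2, 1))
    print("Balmer-alpha (3->2): %.1f nm" % emission_wavelength_nm(3, 2))

It prints roughly 121.5 nm and 656 nm, which is a perfectly serviceable
description of part of the atom's behaviour - and it is plainly not a
hydrogen atom, which is exactly the distinction at issue.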
>By the same token, it is conceivable that ONLY an organic
>brain can host a self.

This is a pretty wild leap - has Bozzonetti been forging Ettinger articles?
:-)

>Turing's tape is the paradigm general purpose computer, which can in
>principle perform any calculation that any computer--serial or parallel--can.
>But it can't do two things at once, which may well be essential for a self,
>as it is for many objects and systems in the real world. For those who rely
>on intuition to any extent, just try to imagine a Turing Tape as being a
>"person."

I'm afraid that you're arguing computability when you ought to be arguing
complexity here. The general Turing machine does one thing at a time, but
there is a great deal of theory to do with which problems are soluble in what
amounts of time on parallel computing devices. It is well known that parallel
computation is easily emulated on a single-threaded device. If you've ever
used a UNIX computer then you've taken advantage of a simple implementation
of such an emulation.
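To make that concrete, here is a toy round-robin scheduler - again my own
illustration, in Python - that interleaves two tasks on a single sequential
processor, which is, in miniature, all that a time-sharing operating system
does:

    # "Parallelism" emulated on one thread of control: a round-robin
    # scheduler that advances each task one step at a time.

    def counter(name, steps):
        for i in range(steps):
            print("%s: step %d" % (name, i))
            yield                      # voluntarily hand the processor back

    def run(tasks):
        """Give each task one step in turn until all of them finish."""
        while tasks:
            task = tasks.pop(0)
            try:
                next(task)             # run the task's next step
                tasks.append(task)     # not finished; back of the queue
            except StopIteration:
                pass                   # finished; drop it

    run([counter("A", 3), counter("B", 3)])
    # The output interleaves A and B, although only one thing ever happens
    # at any instant.

Nothing here happens simultaneously, yet from the point of view of tasks A
and B they run concurrently; whether that kind of emulation is "good enough"
for a self is precisely the question at issue.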
True parallelism is much harder to do, but it is presently being addressed.
There has been some recent work on true parallelism based on quantum dots -
I read about it in New Scientist, which I don't have handy, but the gist of
it is that an appropriately sized array of quantum dots might solve certain
problems that defeat every known serial algorithm - prime factorisation, for
instance, which is in NP though not actually known to be NP-complete - in a
dramatically smaller number of steps. I'll try to dig the issue number out
if you're interested.

As to imagining a general Turing machine implementation of "a person", I
don't see why you feel it is more preposterous than an implementation based
on any other Turing-equivalent computing device.

>Second, I have repeatedly and emphatically insisted that intelligence is not
>the stuff of life or humanity or consciousness: the central role goes to
>FEELING, the capacity to experience pleasure and pain and the like, the
>subjective condition.

Which, again, might not have a physical basis at all. Let me pose a question
for you: do you believe that, given sufficient intelligence, this "FEELING"
could not be emulated?

>seem to think)--whereas if memories etc. survived WITHOUT the self circuit,
>EVERYTHING would be lost, YOU would be gone, and in fact there would be NO
>ONE there.

So then all that need be done is to construct a blank "self-circuit" to host
the recorded memories. Or are you suggesting that this "self-circuit"
maintains some sort of stateful information beyond what we think of as
memories?

>Mr. Platt failed to debunk my proof that the Turing Test is neither necessary
>nor sufficient to prove humanity.

Whoever suggested that the Turing test was necessary or sufficient?

>It is NOT intelligence we are trying to prove. EVERY computer/program has
>intelligence of some kind and degree. We have computers NOW that are more
>intelligent in some areas than people--but I don't think anyone claims they
>are conscious. Surely you must, then, admit that even if the computer/program
>is so bright it fools everybody, it still MIGHT not be conscious. It is
>possible to be intelligent (in most senses of the word) yet unconscious; it
>is also possible to be conscious and stupid.

I think you're tying yourself into knots here. Before you can attribute
consciousness to a mechanism, you have to have some criterion, based upon the
observable behaviour of the device, that you are willing to say is sufficient
to prove consciousness. But you've said that there are no such criteria - it
can only follow that the question of whether or not a device is conscious is
itself meaningless.

For my money, before you prove that the device is conscious, you must first
prove that it is alive. And before you do that, you must produce a criterion
whose satisfaction is sufficient to prove "life".

I know of no such criteria, but I'd like to propose one here in the hope that
it might give us something more concrete to debate.

As far as I can see, the whole focus of the question is arsey-versey. The
quality of "life" does not inhere in mechanism, but in the functions of
mechanism in relation to its environment. It is readily apparent that fire is
not a living entity, and yet fire is an important stage in the life-cycle of
certain plants - Australian eucalypts, for example. To take another example,
there are many simple mechanisms - viruses and prions - that are completely
inert and lifeless except when they are taken up by living cells. In fact, we
might say that all organic molecules are unliving except when they are taken
up by living cells. Yet living cells are entirely composed of these same
molecules.

What, then, causes us to think of these molecules as being alive when they
serve the purposes of a living cell, but dead when they do not? It can only
be the _process_ that the cell is conducting! It is not the mechanisms that
are alive, but the processes that they implement.

When is a process alive? I propose that it is alive when it is part of a
mathematical group of processes, any combination of which is equivalent to
another member of the same group. I suspect that this definition is a little
dodgy, but I think that it is heading in the right direction. One of its
advantages is that it would permit us to describe living processes that
involve many different organisms. Indeed, if we regard only the processes as
living, rather than their components, then we could map out organisms as nexi
of various living processes, and thereby describe entire ecosystems and their
interrelations in group-theoretic terms.

How, then, would we go about defining intelligence? If the elements of a
living process are themselves dead, then shouldn't we presume that the
elements of an intelligent, or, better, a conscious process are themselves
unconscious? I'm not certain that consciousness can be defined analogously to
my definition of life above...

>The point is, for the umpteenth time, we do not yet know what underlies
>consciousness and feeling--and no respondent can prove he has it just by
>making conversation. Of course it is possible that, at some time in the
>future, we might encounter a machine or a life form of questionable
>consciousness, and have a hard time finding criteria for decision. That
>doesn't change anything.

I can only suggest that we have already found it, and it is us. Humans are
capable of the most profoundly stupid, inhumane and unconscious actions.
Perhaps consciousness is like the unicorn - a very pretty myth, but not
something you can find in reality?

>[...]
>About Mr. Bozzonetti: I am fascinated by his writings, and impressed by his
>apparent breadth of information and imagination (not to mention humor). I
>don't have competence in most of these areas, and perhaps, as others suggest,
>most of his ideas are wrong. Maybe he's just blowing smoke. We'll see. I
>can't help hoping he has a nugget or two in there somewhere.

Hmm, another piece of evidence suggesting a forgery :-)

--
Internet:                    | Accept Everything. |
http://www.usyd.edu.au/~pete | Reject Nothing.    |