X-Message-Number: 8244
From:
Date: Mon, 26 May 1997 12:17:55 -0400 (EDT)
Subject: robots & sentience

In my Searle comments yesterday, my focus was too narrow for the context. To allow the possibility of semantics in robots, it is not necessary to invoke Dr. Perry's universal language. If a robot were equipped with sensors and effectors--even indirectly--it might be able to develop semantics in more or less the same way we do, by interplay with the environment, as many have noted. Except for the self circuit, our own internal signals are only symbols, which acquire meaning, at least in part, by environmental interaction. So again, Searle seems clearly wrong in asserting as an axiom that a computer-like robot can have only syntax and not semantics.

However, yet again, this does not change certain facts that the radical info folk are reluctant to face:

a) FEELING (subjectivity) is the sine qua non of life as we know it, and it may also be necessary for full semantics or full sentience.

b) We do not yet understand the anatomical/physiological basis of feeling in mammals, and until we do it is premature to assume it can exist on other than an organic substrate.

c) The "information paradigm" is only a postulate, with a limited degree of plausibility, not a known fact. The map is not the territory; isomorphism is not necessarily enough, and COMPLETE isomorphism may not always be possible, even in principle.

d) References to solipsism are red herrings, as are claims that subjective phenomena are inherently and permanently private and not objectively verifiable by outside observers. It is a reasonable presumption that there are no black boxes on any relevant scale; every system can in principle be examined and studied in its internal workings, although not necessarily by means currently available and not necessarily by non-destructive methods.

e) The Turing Test is neither necessary nor sufficient to prove sentience or the lack thereof.
Although it may be possible (not yet proven either way) for a robot to be fully sentient, it is CLEARLY possible for a robot to SEEM sentient without being sentient. Some computers/programs can ALREADY fool some of the people some of the time, and future ones--even non-sentient brute-force computers--will certainly be able to fool most of the people most of the time.

f) "Intelligence" is not the same as what we usually mean by "sentience," and intelligence of various kinds and degrees can exist without life and without sentience. In its narrow field, Deep Blue is more "intelligent" than any human, as judged by results. But humans may be more intelligent at chess than Deep Blue if we look not just at results but also at "productivity," the relation of work to results. Per unit of processing, humans may do better--although this isn't really clear, since the human procedures may depend on cerebral structures and functions that are much more complex than those of the computer. In any case, even if goal-seeking and adaptability are included in the definition of intelligence (with self-preservation one kind of goal-seeking), computers have already been built with such qualities, yet no one claims they are sentient--except the people who claim atoms and electrons are sentient. "People" are defined primarily by feeling, not intelligence.

Again, forgive the rambling and ramshackle structure.

Robert Ettinger