X-Message-Number: 12957
From: 
Date: Fri, 17 Dec 1999 21:36:54 EST
Subject: Crevier's robots

[In re arguments in favor of machine consciousness, Daniel Crevier wrote the following (inter alia), and I have interspersed comments in brackets. (On another occasion I may discuss the not-so-simple reasons that the question is important.)]

Computers can simulate any physical system, [I'm willing to stipulate that for the sake of argument, but we don't really know it.] and therefore a brain. Such a simulation would behave as if conscious, and claim to be conscious.

["Behave as if conscious"? What does that mean? If it means anything, presumably it means that the machine, somehow given working connections and fitted into a human skull in place of the organic brain, would send the same signals to the rest of the body that the brain would have done. In that case, of course, the "person" would be indistinguishable from an ordinary person except by inspecting its innards (which would not necessarily involve invasion, since some types of scanner can simply read natural radiation from the specimen).]

- Argument from the uploading thought experiment. You could turn your brain into a computer bit by bit, and ensure the continuity of your conscious experience.

[Not necessarily. If feeling is defined or characterized by a special physical mechanism, then changing or removing that mechanism may result in sudden loss of consciousness. There are many examples of very small changes having very large effects. Also, this reminds us of Corbin's frog or Parfit's Greta Garbo. If Corbin were very gradually changed into a frog, or Parfit into Greta Garbo, would the end result still be Corbin or Parfit? Continuity, even if truly achievable, is not the same as identity.]

- Argument from the unobservability of 'true' consciousness. If no difference can be observed between a conscious being and a simulation of one, does it make sense to talk about such a difference?

[Impossible premise.
A simulation can always, in principle, be distinguished from an original. If it isn't distinguishable even in principle (other than by location, as in the case of two electrons), then it isn't a simulation; it is the real thing. And of course "indistinguishability" by an outside observer, e.g. by conversation, is meaningless; such an observer can always be mistaken.]

- Argument from evolution or entropy. What survival advantage would a 'truly' conscious being enjoy over a perfect simulation of one?

[Wrong question. Perhaps Dr. Crevier meant to ask what the advantage of feeling is over the same behavior without feeling. But even that is misleading. The evolutionary point is that feeling may allow fast-and-dirty reaction better than other systems do: more responses with less circuitry.]

- Argument from solipsism. If we were surrounded by apparently conscious machines, doubting their consciousness would be a form of solipsism.

[No. I don't doubt that you are conscious, but I would doubt the consciousness of a clanking robot if I saw one tomorrow. That isn't solipsism, and with more and better robots it still would not be solipsism. (And please remember I say "doubt," not "deny.")]

- Argument from artificial intelligence: the utilitarian view of consciousness. Consciousness seems to be just a set of behaviorally efficient mechanisms.

["Seems" to whom? This is really just asserting your premise as a conclusion.]

- The orchestral chord argument, or the informational origins of qualia. How the 'redness of red' could be the result of information processing.

[This is unclear, and it may cut the other way. In any case, being the result of information processing is not the same as being identical with information processing.]

- The Cartesian theater argument, or why appearances can be deceiving. Our consciousness is not what it seems to be, and qualia may not be either.

[The Cartesian theater is a red herring.
And the deceptiveness of appearances is a very good argument for skepticism about the consciousness of robots.]

- The Russian dolls argument. Why a true account of consciousness will always disappoint.

[If you are talking about nested dolls or homunculi, it's a false analogy. In any case, how do you know that a true account will disappoint, since you don't have a true account yet?]

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org