X-Message-Number: 30168
From:
Date: Wed, 19 Dec 2007 00:36:34 EST
Subject: "evil" AI & consciousness

Flavonoid wrote in part:

>You state there is "no agreed definition of consciousness ... no current
>way for an observer to decide whether an observed system is conscious".
>This may not be the real issue. If I'm out camping in the woods and a
>bear comes along, that bear might or might not be a brilliant
>mathematician with intense feelings for his wife and children, but one
>thing is sure, it has the power and ability to wreak havoc with my
>belongings on the ground, and with me if I am stupid enough not to get
>into the car and make sure the windows are up. Or, alternatively, shoot
>the bear.

In other words, he questions the relevance of consciousness in computers to the potential dangers of powerful computers. Let me try to clarify:

The potential danger of intelligent computers is that they might have motives resulting in choices inimical to us. My point is that a system without feeling, without subjectivity, cannot have motives in the sense that we do. It doesn't want anything or fear anything. It can only have programmed goals, states to attempt to reach or to avoid, which is very different. These goals must be very explicit and unambiguous. Any attempt by the programmer to paint with a broad brush will inevitably result in freeze-up, and trying to foresee all future possibilities in detail is hopeless.

In any case, to repeat myself, when some programmer thinks he is near a super-intelligent program, he will build in safeguards, e.g. requiring a pause for external input in certain situations. That there is little present effort to do this simply reflects the fact that such programs are nowhere on the horizon.

Robert Ettinger
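[Editor's note: a minimal sketch of the "explicit programmed goal" plus "pause for external input" safeguard described above. All names here (is_high_stakes, run_with_safeguard, the action strings) are hypothetical illustrations, not any real system's API; it shows only the general shape of such a check, under the assumption that "certain situations" can be recognized by an explicit predicate.]

    # Sketch: an agent loop whose goals are explicit, enumerable states,
    # and which pauses for human approval in flagged situations.
    # All identifiers are illustrative, not from any real system.

    def is_high_stakes(action: str) -> bool:
        """Placeholder for the explicit, unambiguous rule that marks
        'certain situations' requiring a pause for external input."""
        return action.startswith("irreversible:")

    def execute(action: str) -> None:
        print(f"executing: {action}")

    def run_with_safeguard(planned_actions: list[str]) -> None:
        for action in planned_actions:
            if is_high_stakes(action):
                # The safeguard itself: stop and wait for a human decision.
                answer = input(f"Approve '{action}'? [y/N] ")
                if answer.strip().lower() != "y":
                    print(f"skipped: {action}")
                    continue
            execute(action)

    if __name__ == "__main__":
        # A goal expressed as explicit steps, one of them flagged.
        run_with_safeguard(["log status", "irreversible:delete backups"])

The point of the sketch is the contrast Ettinger draws: the program does not "want" anything; it mechanically pursues enumerated states, and the pause is just one more explicit rule, which is why broad-brush goal specifications are so hard to get right.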