X-Message-Number: 11687
Date: Thu, 6 May 1999 10:30:21 EDT
Subject: isomorphism responses

Mike Perry writes:

>I've argued that space-binding of interior elements [in a computer], of the
>type Bob Ettinger imagines, should not be necessary for the device to
>exhibit true consciousness, this being basically the position of strong AI
>advocates like

With all due respect, even though some of them are much smarter than I am, I 
don't think the strong AI people have "argued" this--they have merely 
asserted it.

>However, it seems that the isomorphism idea, if we push it far
>enough, would require us to recognize a static artifact as "conscious." For
>example, we could record the successive internal states of an active device
>we consider conscious, and mathematically, the unmoving record would be
>isomorphic to the real behavior!

As I said earlier, it is encouraging that Dr. Perry recognizes this.

>After thinking about this, I came to the conclusion that there are really
>two issues here (at least): (1) consciousness as a property of active
>constructs in our universe, and (2) consciousness in a more general sense,
>supposing we allow for such possibilities as other universes besides ours,
>mathematical "worlds," etc.

Even postulating the possibility of (2), it seems to me, requires an 
unjustified leap resting on tacit assumptions. Unless you simply accept as an 
axiom that isomorphism is everything, I see no reason to assume, even 
tentatively, that a book could be conscious.

>Now, if we want to go beyond this, and consider systems whose time component
>does not correspond to our actual time, well, I think this is possible too,
>but then we are in the realm of systems whose consciousness is not
>"consciousness" in *our* world. A static record is like this. If you think
>of the record as simply a model of happenings over time, with time itself
>modeled by "page number" or some other reasonable subdivision--fine. It
>would, under appropriate conditions, be reasonable to regard some recorded
>entity as "conscious" relative to the domain in which it is expressed, say a
>human whose brain activity is captured in detailed fashion in some recorded
>form. The existence of such a record, however, would not require us to
>consider the recorded human as conscious in *our* world. So I think that the
>notion of extending consciousness through isomorphism can be defended,
>provided we recognize that, if pushed far enough, we will have to recognize
>a certain relativity principle too. In the most general sense, in deciding
>whether a system should be considered conscious, we have to consider *in
>what domain* it might be conscious, which involves a frame of reference, and
>how time is being modeled.

I see no merit in this notion of consciousness relative to a frame of 
reference. If we say, e.g., that the right kind of book is conscious relative 
to the world of libraries or symbolic records, we have only removed the 
question from the realm of the verifiable. We have made the statement true by 
definition, and clarified nothing.

He goes on to say that a physical construct or analog--robot or inorganic 
alien--should be accepted as conscious if it behaves more or less as we do. 
In practice, I agree. If they behave as though conscious, then it would 
certainly be dangerous in many ways to treat them as unconscious, even though 
they might be.

Robert Ettinger
Cryonics Institute
Immortalist Society
