X-Message-Number: 12989
Date: Thu, 23 Dec 1999 20:07:49 -0700
From: Mike Perry <>
Subject: Re: Being There

Here are some comments on Bob Ettinger's posting "Being There" (#12981).

In the first place, he goes through a thought experiment in which parts
of your brain are replaced bit by bit with electronic devices. "You"
report that the devices "feel exactly the same" as the original tissue,
so "you" have no problem (presumably) in discarding the biological tissue
in favor of the electronic devices. We are assuming, of course, that the
devices in question really do function as intended. But opponents of
strong AI will argue that it is not sufficient if, under such a
hypothetical change in hardware, you *say* you feel fine and still pass
every test of functionality, observable emotional response, and so on,
because changing your hardware in this way could result in an entity that
only imitates consciousness and feeling, however perfectly, while
actually having not a bit of either. To me, an argument like this suffers
from the same sort of weakness as the solipsist's argument. Such
arguments might never be disproved, yet I would reject them. We could
ask, for example, whether a person just like you but with reverse
chirality (left-handed molecules changed to right-handed and vice versa)
would be "really" conscious or just an imitation, and go from there.

>
>Second (yes, this is partly redundant), the thought experiment dodges the 
>question of time and space relationships. I'll omit repeating the reasons 
>here, but it is reasonably clear that awareness (based on feeling) must bind 
>time and space. A computer--especially a serial computer--does not do this. 
>According to the hard-core strong-AI people, it doesn't matter. Even a 
>basic, low-tech Turing computer--a strip of paper moving back and forth with 
>marks being written and erased on its squares--would feel and think, they 
>claim.

This is just what I think, and I believe the objections can be answered,
perhaps better than I have managed to do in previous postings.
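
To make Bob's "strip of paper" picture concrete, here is a minimal
Turing-machine sketch in Python--my own illustration, not anything from
his posting, with the machine, its rules, and all the names invented for
the occasion. It does nothing but append one mark to a block of marks,
yet it has the same three ingredients--a tape, a head, and a finite rule
table--that the strong-AI claim says would suffice, in principle:

def run_turing_machine(rules, tape, state="start"):
    # A tape of marked/blank squares, a head that steps left or right,
    # and a finite table of rules, just as Bob describes.
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, " ")           # blank squares read as " "
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                     # write (or erase) a mark
        pos += 1 if move == "R" else -1       # move the head one square
    return tape

# Rules: (state, symbol read) -> (symbol to write, head move, next state).
# This toy machine performs a unary "add one".
rules = {
    ("start", "1"): ("1", "R", "start"),      # scan right over the marks
    ("start", " "): ("1", "R", "halt"),       # first blank: mark it, halt
}

tape = {0: "1", 1: "1", 2: "1"}               # input: three marks
print(run_turing_machine(rules, tape))        # output: four marks

The gap between this and an advanced intelligence is astronomical, of
course, but that is a gap of scale, not of kind.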

> One must admire the audacity of the thought, but not the stubbornness 
>that refuses to admit its weaknesses.
>

I'll admit that there are difficulties, but one is simply that we are
talking about thought experiments here: actually implementing an advanced
intelligence on a Turing machine (or even on one of our top-of-the-line
computers) is far outside our powers today. That, however, is not an
argument against the principle of the thing. A being with whom I could
converse, and who seemed intelligent and full of emotion, I would be
inclined to accept as such, even if implemented in silicon or some other
nonbiological form, and with sequential rather than parallel processing.
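
On the sequential-versus-parallel point, the standard trick is worth a
small sketch (again mine, purely illustrative, with invented names): if a
serial loop computes every unit's next state from the old states before
committing any of them, its results are indistinguishable from a fully
parallel, simultaneous update.

def step(states, neighbors):
    # Read only the OLD states; commit all the new ones together.
    # Serial, one-at-a-time execution then gives exactly the outcome
    # of a parallel update.
    new = {}
    for unit, links in neighbors.items():
        votes = sum(states[n] for n in links)    # old states only
        new[unit] = 1 if 2 * votes >= len(links) else 0
    return new                                   # swap in all at once

# A toy three-unit "network"; each unit adopts the majority of its
# neighbors (ties go to 1).
neighbors = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
states = {"a": 1, "b": 0, "c": 1}
for _ in range(3):
    states = step(states, neighbors)
print(states)                                    # {'a': 1, 'b': 1, 'c': 1}

The timing of the individual operations differs, but the computed history
of states does not, which is the sense in which I would say serial
hardware can stand in for parallel.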

>There are even deeper questions. David Deutsch and others tend to believe 
>that one should not speak cavalierly about what is possible "in 
>principle"--that there is  no disjuncture between the physically possible and 
>the logically possible. Again,  remains to be seen, and not soon either.
>

In general, I use "in principle" to mean "ought to be achievable eventually,
though we may not be able to do so yet." 

>
>I realize I have said all this before, many times, but I am searching for 
>phraseology that will be more effective. Hope springs eternal.
>

I have had some new thoughts on the issues previously raised, and I think
I now know better ways of addressing them--more on that later, if there
is enough interest.

Mike Perry 
