X-Message-Number: 11871
From: 
Date: Wed, 2 Jun 1999 02:05:43 EDT
Subject: The Quest for the Holy Quale, Part I

THE QUEST FOR THE HOLY QUALE-PART I

First, concerning usefulness, relevance, boredom, and repetition of the 
discussions on consciousness and related matters:

As Prof. Hirsch noted, there is really no significant imposition on those who 
are not interested; all they have to do is glance at the subject line (or the 
writer, perhaps) and skip it. What's the big deal? New readers might be turned 
off by too much material that is boring or incomprehensible, but again 
they (and others) may come to realize the relevance, and some will even be 
intrigued by the discussion. Further, if all the "philosophical" and 
tangential material were omitted, Cryonet might dwindle to very little, if 
past is prelude. Finally, those who think they have better material are 
always free to post it.

The relevance is something we have not conveyed very well, I think. It isn't 
just a question of persuading Uploaders not to put all their money on that 
dogmeat nag. There is also a fairly direct connection to the practice of 
cryonics (and also to value systems). The cryonics connection goes as follows.

If cryopreservation could only save the information related to genetics, and 
nothing of the individual's uniquely personal brain configuration, then it 
would be useless. If it could save the uniquely individual information, then 
it would be worthwhile, even if nothing else were saved. Therefore it is 
potentially of vital importance--even from the narrow perspective of 
cryonics--to understand the anatomy and physiology of consciousness, and 
perhaps of memory also. These questions must ultimately be answered by the 
experimentalists, but speculations of theorists can be helpful.  

I also remind readers that, in the recent Canadian Cryonics News, Ben Best 
reports that 21CM researchers think some organs, such as eyes and bones, may 
remain very difficult to perfuse, even with the new CPAs and ice blockers, 
and therefore fully perfected cryopreservation of a whole person, or even a 
whole head, may well be decades away, not just years. So even if brain 
cryopreservation is perfected, how will they know it or prove it?

Obviously there are many methods of bioassay, electroprobes, EEGs and other 
scans, and some of these have already been successful to some extent--cf. on 
our web site Pichugin's demonstration of coordinated electrical discharges 
among networks of neurons in pieces of rabbit brains after cryopreservation. 
Nothing of this kind can be fully convincing, as far as I can see. Evidence 
of success, yes; proof, no. And as long as we do not know the detailed 
mechanisms of memory and consciousness, how can we be sure they have 
survived, if the whole animal (or at least the whole head) is not available 
alive?

So I repeat: if we learn the detailed anatomy/physiology of consciousness 
(and secondarily of memories), and can show that this is retained or regained 
after cryopreservation, then we will have achieved something important even 
in the narrow context of cryonics.

So let's look again at the question of consciousness. We must be patient, 
because it really is very difficult, subtle and complex, and it has confused 
some of the world's greatest thinkers, and still does. 

I take issue first with those who, explicitly or implicitly, say that 
consciousness is just an emergent property of any sufficiently complex 
information processing system, or that it is basically computational and 
closely akin to cognition, or that it primarily involves one subsystem 
looking at another subsystem or looking at a representation of itself. 
Instead, I propose that the most basic element of consciousness is 
feeling--the capacity for subjective experiences or qualia. Feeling and 
cognition certainly interact, but feeling is the more basic. Subjectivity is 
the ground of being, the sine qua non of life as we know it (LAWKI). 

Intelligence is neither necessary nor sufficient for feeling; feeling is 
neither necessary nor sufficient for intelligence (even though feeling can 
enhance the efficiency of responses to stimuli, and therefore be favored in 
evolution). 

So perhaps the most basic question in biology is the mechanism of a quale. My 
suggestion is that the capacity for feeling--for subjective 
experience--resides in what I call the "self circuit," defined simply as the 
portion(s) or aspect(s) of the brain or its functions that permits or gives 
rise to feeling. By definition, we know it exists; whether the label is 
useful remains to be seen. I suggest that the self circuit is something like 
a standing wave (electromagnetic, chemical, whatever) that exists over a 
non-zero interval of space and time. A modulation of the wave is a quale. 
(The quale does not "represent" a feeling; the quale IS the feeling.) 

(As a crude partial analogy, your central self is like a radio carrier wave, 
and your feelings or subjective impressions or qualia are the modulations of 
the wave. The wave is a physical phenomenon inside your brain.)
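
Purely to make the analogy concrete--and not offered as a model of any actual 
brain mechanism--here is a minimal numerical sketch of amplitude modulation: a 
steady carrier stands in for the ongoing "self circuit," and a brief, slow 
envelope riding on it stands in for a single quale. All frequencies and signal 
shapes are arbitrary assumptions for illustration only.

    import numpy as np

    # Illustration of the carrier-wave analogy (arbitrary, hypothetical numbers).
    fs = 10_000                      # samples per second
    t = np.arange(0, 1.0, 1 / fs)    # one second of time

    # The steady "carrier" plays the role of the persistent self circuit.
    carrier_freq = 100.0             # Hz
    carrier = np.sin(2 * np.pi * carrier_freq * t)

    # A brief, slow envelope plays the role of a single "modulation" (a quale).
    modulation = 0.5 * np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))

    # Amplitude modulation: the carrier persists; the modulation rides on it.
    signal = (1.0 + modulation) * carrier

The point of the picture is only this: the carrier goes on continuously, while 
each modulation is a transient physical change of that same wave, not a 
separate "representation" of anything.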

Whether this particular suggestion is right or wrong is not even very 
important. What is important is the recognition that there is a specific, 
unique, physical structure or mechanism underlying feeling, which the 
experimentalists must seek and find. It is not just some hand-waving 
"emergent" phenomenon of computation or interaction.

When we learn the mechanism of the self circuit and qualia, that will not 
automatically solve all the "philosophical" problems of criteria of identity 
and survival. There may still remain questions of continuity and duplication, 
for example. But it will be a huge step forward, in some ways the most 
important achievement in the history of science, and possibly crucial for 
cryonics. 

Now let's look again at the issue of consciousness in an ordinary computer, 
which we can think of as a Turing Tape. Uploaders think there is no 
reasonable doubt that a computer, or an emulated person "in" a computer, can 
be conscious. 

It isn't hard to see why they might (at least initially) think so. After all, 
in principle the computer could describe or predict the most detailed 
behavior of the person. (In reality it could not, now or ever, but in the 
context of our thought experiment the postulate is permissible.) This implies 
among other things that, given the right tools and some ancillary 
programming, the computer could BUILD a person. Could the creator then be 
inferior to its creature? In a sense the computer CONTAINS the person; must 
it not then be everything the person is, and more? 

All such verbiage means next to nothing--just language traps. After all, WE 
(as most of us believe) were created by blind forces of nature, and yet in 
some ways we are superior to the rest of nature. We have to keep our eye on 
the ball, and be very sure we are addressing precisely the point at issue and 
not just blowing smoke.

It is very easy to prove that a computer, however "intelligent" it may be, is 
not necessarily conscious. The simplest proof is just the reminder that 
"intelligent" but unconscious computers already exist--e.g. the chess program 
Deep Blue, which has grandmaster capability yet is merely a brute-force 
program with a few flourishes. "Expert system" programs exist that may 
diagnose medical symptoms better than most physicians, yet again are no more 
conscious than a dictionary. Conversation programs exist that can fool some 
of the people some of the time. Surely anyone can see that, projecting into 
the future, programs and computers have unlimited potential for producing 
impressive results, including goal-seeking and adaptive behavior, even 
without the slightest hint of consciousness. 

But what about the case where the computer predicts or describes or 
"contains" the detailed behavior, including the "self circuit," of a person? 
Could the computer, or the emulated person "in" the computer, still be 
unconscious?

Yes, because consciousness might, and probably does, require that more than 
one thing happen at the same time, and our computer is sequential. The 
Uploaders customarily respond that a sequential computer is still "universal" 
and can do anything that another computer, including a parallel computer, can 
do--but that isn't true. It is only "universal" in the sense of eventually 
producing the same "results"--the same descriptions of sequences of states, 
i.e. the same sets of numbers. 

A sequential computer (basically a Turing Tape) cannot work in time--not in 
real time, and not in scaled-down time. In the various texts discussing the 
Turing computer, there is not even any mention of the physical mechanisms of 
reading, writing, and moving the tape. The essence of the computer is just 
that it uses a program and an initial data store to grind out successive sets 
of numbers corresponding to the successive internal states of the machine. The 
final product is just that sequence of sets of numbers, which by an agreed 
symbolism can then be construed as the answer to some question, such as a 
past or future history. 
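
To make that concrete, here is a minimal sketch (with an invented toy update 
rule, not anything drawn from the texts mentioned above) of a purely 
sequential machine in this sense: a program plus an initial data store, 
applied one step at a time, whose entire product is nothing but the ordered 
sequence of state descriptions it passes through.

    # Hypothetical toy example: a sequential machine whose only output is
    # the ordered list of states it grinds through, one step at a time.

    def run_sequential(program, state, steps):
        """Apply `program` to `state` one step at a time; return the history."""
        history = [state]
        for _ in range(steps):
            state = program(state)   # exactly one thing happens per step
            history.append(state)
        return history                # the "result": a sequence of sets of numbers

    # Toy program: each step computes a new pair of numbers from the old pair.
    toy_program = lambda s: (s[1], s[0] + s[1])   # a Fibonacci-like update

    states = run_sequential(toy_program, (0, 1), steps=5)
    # states == [(0, 1), (1, 1), (1, 2), (2, 3), (3, 5), (5, 8)]

Whatever meaning is later assigned to those numbers by an agreed symbolism, 
nothing in the run itself happens at the same time as anything else; the 
machine only ever produces one state description after another.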

Part II tomorrow or sometime, unless my better judgment prevails.

Robert Ettinger
Cryonics Institute
Immortalist Society
http://www.cryonics.org
