X-Message-Number: 32932
Date: Wed, 13 Oct 2010 14:41:37 -0700 (PDT)
From: 2Arcturus <>
Subject: experimental validation of uploading


Sorry - I meant to respond to this in a more timely fashion, but I couldn't...

I think you're right, Daniel. Experiments with neural prosthetics are going to 
tell us a lot, and provide evidence as to whether machines and digital 
processing can substitute for some parts of the brain. As you noted, that 
evidence is already coming in, and it is coming in in the affirmative.

Of course, anti-uploaders might fall back and argue that the primary visual 
cortex, while it interfaces with the part of the brain that gives rise to 
consciousness, is not itself the part of the brain that gives rise to 
consciousness.


The trouble with extending the experiment to include more and more of the brain 
is that anti-uploaders would say that at some (undefined) point, the crucial 
part of the brain that supports consciousness would get swapped out for a 
device that merely supports a "philosophical zombie"/p-zombie. They might say 
- well, the subject is reporting being conscious, but that is just the 
p-zombie glibly misrepresenting itself as being conscious.

I am convinced the anti-uploaders' survival/doubling issue is a simple 
philosophical error, but the issue of whether machines can support 
consciousness the way the brain does is a little more challenging.

Part of the problem is that we have no science of how the brain supports 
consciousness. I am convinced that someday we will have one, because in theory 
it should be possible to test the brains of living human subjects at any 
scale, from brain-area down to the molecular level - for example, by 
temporarily silencing or reversibly modifying certain areas and seeing what 
the effects on consciousness are, as reported by the subjects. This should 
bring about a whole new field of science within neuroscience and allow at 
least some provisional theories about how matter such as the brain relates to 
subjective consciousness and how it gives rise to it.

This is the kind of information that could be brought to bear on the issue of 
consciousness "in silicon" - not just the Turing test, but a test that does 
look inside the box and examines how a purported mind works and what it is 
doing (structure and dynamics), and how that plausibly relates to inferred 
consciousness. I have a suspicion that truly simulating verisimilitudinous 
consciousness (at the Turing-test level) in a non-conscious system, especially 
by a sequential formalism, would be much harder than simply building an 
actually conscious system. Surprise - nature, the blind watchmaker, discovered 
the shortcut to making a conscious-looking animal: make a conscious animal! 
Even if a p-zombie were possible, we might be able to discern that it was a 
p-zombie by looking inside its 'brain' and noticing the vast, circumlocutory 
complexity of its attempts to simulate consciousness, compared with the 
simpler, more direct way of the brain.

Of course, even a science of consciousness, an "objective/subjective" science, 
wouldn't answer radical (paranoid?) skeptics who would reject all the evidence 
of the new science by doubting the subjective consciousness of the test 
subjects, and of every other living human being except themselves. After all, 
the only evidence we have of subjective consciousness is our own, by 
definition. The critics would have to decide where to draw the line between 
suspecting that everyone except themselves is a p-zombie and believing that 
anything that looks conscious is conscious (the 'strong AI'/Turing-test 
approach that Searle rails against).


And this is why I don't think it's 'even money' whether machine consciousness 
is possible. Believing that it is possible is simply extending the belief that 
what the brain does to make consciousness is something procedurally possible 
in the real world, that it can be understood, and that it does not bear some 
unique relationship to some unknown characteristic of the brain, supported in 
some unknown area of the brain. Conscious animals evolved from inanimate 
matter, in the stuff of inanimate matter. Occam's razor puts the burden of 
proof on those who would suppose, without warrant, that there is something 
unknown and magical, some tertium quid/quintessence, that lets a brain make 
consciousness and keeps anything else, including a machine, from making it. 
What is it, and why should we think so? Since I can't think of anything, or of 
a reason for it, I don't suppose that - although of course I admit there is a 
lot left to learn.

>>>

Message #32895
From: Daniel Crevier <>
References: <>
Subject: experimental validation of uploading
Date: Fri, 01 Oct 2010 12:28:37 -0400

I'd like to come back to my posting No. 32877 of September 28. It didn't get 
any reaction, so let me try to add some zest to it. I think that posting 
shows that the theory that uploading preserves consciousness is falsifiable: 
it can be determined experimentally whether the uploaded person is still 
conscious, and not a zombie.

The experiment I suggest is the following. Do only a partial uploading of a 
subject: replace only the primary visual cortex with a digital circuit. It 
has been shown that loss of this area leaves stroke victims consciously 
unaware of visual information, even if some visual processing still seems to 
occur in other parts of the brain. So if there is a short list of brain areas 
intimately related to consciousness, then the primary visual cortex is very 
much on it. Note also that loss of this brain area leaves all other mental 
abilities intact: the subjects can still think, talk, and report on their 
internal states. If this brain area is digitized, then one of two things 
could happen.

First, the subjects could report that they still have normal vision. 
According to uploaders, this is the expected outcome, since we assume that 
the digitized part of the brain will interface with the rest in exactly the 
same way as the pre-existing biological one.

Alternatively, the subject could report that he or she has become blind. 
This should be the outcome expected by anti-uploaders, since according to 
them the digital circuitry lacks whatever magic is required to induce 
consciousness.

Right now, this can't be done in practice, but we'll get there. The evidence 
so far, though, is pretty much in favor of uploaders: for example, retinas 
are made of neurons, and electronic replicas have been made; subjects were 
quite aware of their inputs.

So, after all, belief in uploading may not be a matter of personal choice or 
values. It may be objectively verifiable.

Any comments?

Daniel Crevier 



   