X-Message-Number: 8247
Date: Mon, 26 May 1997 21:42:48 -0700 (PDT)
From: John K Clark <>
Subject: Observable Consequences Of Consciousness

In #8242, on Sun, 25 May 1997, it was written:

	>If you run a lot of physical experiments and never encounter any        
	>surprises
	  
That would be very surprising.


	>then probably you are in a simulation, since in the real world the         
	>fundamental rules are not fully known and there will be experimental         
	>surprises.


That does not follow. Millions of people know the fundamental rules of Chess, 
yet the implications of those simple rules constantly surprise them; that's 
why they still play the game.
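
To see why knowing the rules doesn't kill the surprises, here is a toy 
sketch in Python (my own illustration, nothing to do with Chess programs): 
the logistic map is a rule you can state in one line and know completely, 
yet two starting points that differ only in the seventh decimal place soon 
end up in completely different places, so even the person who wrote the 
rule is surprised by where it goes.

    # A one-line rule, completely known in advance, whose long-run
    # behavior still surprises. Illustrative sketch only.
    def step(x, r=3.9):
        # The entire "rule book": x -> r * x * (1 - x)
        return r * x * (1.0 - x)

    x, y = 0.5, 0.5000001   # two nearly identical starting positions
    for _ in range(40):
        x, y = step(x), step(y)
    print(x, y)             # after 40 steps they disagree completely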
					   


	>On the question of whether one could live out his (normally expected)
	>life as a simulation: I have suggested several reasons for skepticism,
	>including the simulated scientist. One which seems to have aroused no
	>response is the "bug" problem in simulations and subsimulations:
	   

I have heard more than one brilliant person say that the deeper they get into 
Quantum Mechanics the less sense it seems to make. Could that be a bug? 
Intel would say it's not a bug, it's a feature.
		

	>The brain is not a black box; it can be investigated experimentally         


The brain can be investigated experimentally, but consciousness cannot be, 
not directly.


	>To claim that consciousness (in mammals) has no observable         
	>consequences or mechanisms is just not true.
 

Although I can't prove it, I think you're correct: if consciousness had no 
observable consequences, then Evolution would not be interested in it and 
Science could not explain why we ever developed it. However, I am surprised 
and pleased to hear you say this; you've been saying for years that even if 
you observed a robot acting just like a person, you would still have no 
reason for thinking it was conscious unless its brain was made of meat and 
not silicon.


	>UNTIL we understand the mechanisms of consciousness in mammals, it         
	>is premature to guess whether artifacts could be conscious.


If by "understand" you mean "know with certainty" then it will always be 
premature to say whether artifacts could be conscious.
		    

	>Searle's claim is that syntax alone cannot result in intelligence         
	>[let alone consciousness], and therefore no computer can ever be         
	>intelligent, since it has only syntax and not semantics; it merely         
	>manipulates symbols with no slightest understanding of what the         
	>symbols represent.  
	     

Yep, that's what the man says.
			 

	>I don't think this claim should be elevated to the status of axiom. 
			 

I certainly agree.
			 

	>I am inclined to think Mike Perry may be right--that a sufficiently         
	>large and sophisticated artificial language might be uniquely         
	>relatable to the real world. 
			 

I rather doubt that, and it's not needed. We didn't figure out things about 
the world just by manipulating abstract symbols; we have senses that connect 
our mental symbols to external reality, and I see no reason a robot couldn't 
do the same thing.
			 

	>the claim of Searle and many others, that a system could act        
	>"intelligently" without genuine understanding  and without feeling,         
	>is obviously correct.
			 

That is far from obvious to me, and it is even further from obvious that 
feeling is a more difficult problem to solve than intelligence. Look at 
History: Evolution found feeling far easier to obtain than intelligence; 
it developed it hundreds of millions of years earlier.

As for understanding, Deep Blue doesn't understand much, but it does 
understand Chess; if not, then the word "understand" has no meaning to me.
			 

	>Those who claim a duplicate or near-duplicate would be you, and         
	>survival of a duplicate would constitute your survival, have courage         
	>but no clear evidence, only an article of faith that "your pattern         
	>of information is you." 
			 

My problem is that the only alternative to the "your pattern of information 
is you" theory is the "soul" theory of the holy rollers. If you can suggest 
a third alternative I'd love to hear it, and I don't even demand you prove it.


	>Some say the duplicate is "over there" and I am "here" and therefore         
	>the duplicate is not me. 
	       

But there is no asymmetry in the situation; the other fellow would be saying 
exactly the same thing: "the duplicate is over there and I am here and 
therefore the duplicate is not me." Of course I have no way of knowing who is 
the "duplicate" and who is the "original", and I don't even see how the words 
make any sense if you're talking about making copies at the quantum level.
						

					     John K Clark    

