X-Message-Number: 5877
From:  (Thomas Donaldson)
Subject: Re: CryoNet #5784 - #5790
Date: Mon, 4 Mar 1996 21:42:49 -0800 (PST)

Hi again!

Well, I seem to be getting some attention here, once more.

On the decline of civilizations, Mr. Merel produced some interesting examples.
We note, for instance, that the Toltecs had successors (sure, we may have
SUCCESSORS too, but that doesn't mean we simply vanished). The Mayans are
actually still there, in Yucatan. For reasons archaeologists have not yet
worked out, they abandoned their old cities before the Spanish ever appeared
(and the Spanish, and later the Mexican government, had a great deal of
trouble with them: they were far harder to conquer than the Aztecs, and as
late as 1830 they rebelled seeking their independence). The closest example of
a group which regressed would be the Easter Islanders; Mr. Merel might also
have mentioned the aboriginals of Tasmania. One problem these peoples had
which most did not was not so much the exhaustion of resources as their small
numbers: it takes a certain minimum population to keep any animal population
going indefinitely (the toy simulation below illustrates this). Many Pacific
islands, once populated by Polynesians, were empty by the time Europeans
reached them: not because those populations simply ran out of resources, but
because the islands could sustain only populations too small to persist.
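Since that minimum-population claim is quantitative, a toy sketch may make it
concrete. What follows is my own illustration, not anything from Mr. Merel's
examples: a simple Galton-Watson branching process in Python, with every
parameter (offspring distribution, horizon, trial count) chosen arbitrarily.
Each individual leaves 0, 1, or 2 offspring with equal probability, i.e. bare
replacement on average, so the only thing that differs between runs is the
founding size.

    import random

    def survives(founders, generations=100, rng=random):
        """One run of a Galton-Watson branching process: each individual
        leaves 0, 1, or 2 offspring with equal probability (mean 1, i.e.
        bare replacement). Returns True if any descendants remain after
        the given number of generations."""
        n = founders
        for _ in range(generations):
            if n == 0:
                return False                    # lineage extinct
            # Sum offspring over every individual alive this generation.
            n = sum(rng.randrange(3) for _ in range(n))
        return n > 0

    def survival_rate(founders, trials=200):
        """Fraction of independent runs still alive at the horizon."""
        return sum(survives(founders) for _ in range(trials)) / trials

    if __name__ == "__main__":
        # Identical demographics everywhere; only founding size differs.
        for founders in (5, 25, 125, 625):
            print(f"{founders:4d} founders -> "
                  f"{survival_rate(founders):.0%} survive 100 generations")

Under these made-up numbers, a handful of founders nearly always dies out
within the horizon, while several hundred nearly always persist: that is the
sense in which a population can be "too small" even with resources to spare.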


On the issue of "intelligence", whatever it may be, it seems to me that in
order for anyone to design a machine which will be "intelligent", the first
thing they will need is some better idea of just what "intelligence" is to
mean. Without such an idea, they will certainly produce an INTERESTING
machine, but whether it is intelligent or not will not be accepted by many,
maybe not even by its designer. Was the program defeated by Kasparov
intelligent or not? And for those among us who write software: how often has
someone come to you and asked you to write a program to do X, and when you
ask them to be more precise about what X is, the only thing they can say is
that if your program does X, they will know it?

I made this point about "intelligence" specifically because it's supposed to
be one thing which we might redesign ourselves to have more of. I'm not 
unhappy with the idea of redesigning ourselves, but I become very unhappy 
with the notion of redesigning ourselves so that we will do or have more X,
when we can't even specify what X is. Greater memory capacity, yes, that is
far easier to specify (and all of the drugs claimed to increase "intelligence"
actually increase memory abilities). Whether or not a greater memory,
combined with lots of experience, will produce "intelligence" is an
interesting but open question.

As for the points I made about responsibility for the machine, and Kasparov:
I distinguish causality from responsibility. His parents, if they are still
alive, may well feel proud of their son, but it was still Kasparov, not his
genes, his parents, or his schooling, who defeated the designers of
Deep Blue. His parents were partly responsible for Kasparov. Kasparov was
responsible for defeating Deep Blue's designers. Responsibility relates to
causality in that it belongs, first, to creatures able to make choices, and
second, to an immediately causative choice, not to more distant ones such as
Kasparov's parents' decision to have a child. If we don't make such a
distinction, just how do we work out such things as whom to pay when
we buy something? The shopkeeper? The owner of the shop? The manufacturer
of the object? The people who made the machines needed to make the object?
Causation fans out into many different events; responsibility focuses on
just one creature or person, the one who made the ultimate choice. (Yes,
I know this seems longwinded, but I didn't raise the issue, and to deal
with it I see no other way than to be longwinded. SORRY.)

			Best and long long life,

				Thomas Donaldson
