X-Message-Number: 4108
Date: Wed, 29 Mar 1995 23:42:01 -0800
From: John K Clark <>
Subject: SCI.CRYONICS Minsky on Penrose

I asked Marvin Minsky if I could send this to Cryonet and he
agreed.  It's long, but I thought it excellent.  Interestingly, he
thinks that consciousness is easier to achieve than intelligence.

 John K Clark   

======================================================================

From:  (Marvin Minsky)


Conscious Machines
Marvin Minsky
M.I.T.

Published in "Machinery of Consciousness", Proceedings, National Research
Council of Canada, 75th Anniversary Symposium on Science in Society, June
1991.  I don't have the final publication date.


Many people today insist that no machine could really think.  "Yes,"
they say, "machines can do many clever things.  But all of that is
based on tricks, just programs written by people to make those
machines obey preconceived rules.  The results are useful enough --
but nowhere in those cold machines is there any feeling, meaning, or
consciousness.  Those computers simply have no sense that anything is
happening."

They used to say the same about automata vis-a-vis animals.  "Yes,
those robots are ingenious, but they lack the essential spark of
life."  Biology then, and psychology now: each was seen to need some
essence not mechanical.

The world of science still is filled with mysteries.  We're still not
sure of how the Sun produces all its heat.  We do not know precisely
where our early ancestors evolved.  We can't yet say to what extent
observing violence leads to crime.  But questions like those do not
evoke assertions of futility.  We can try harder to detect more
neutrinos, find more fossils, or perform thorough surveys.  However,
in certain areas of thought, more people take a different stance about
the nature of our ignorance.  They proceed to work hard -- not
toward finding answers, but toward trying to show that there are none.
Thus Roger Penrose's book [1] tries to show, in chapter after chapter,
that human thought cannot be based on any known scientific principle.

I have already written a book [2] that discusses various attempts to
show that men are not machines, but mainly works to demonstrate how
the contrary might well be so.  You might object that no one has time
to read all such books, so why can't I just summarize?  And that's
what this essay is all about: that certain things are too complex to
summarize!  This includes the mechanisms of highly evolved organisms
and, especially, the workings of their nervous systems.  It also
includes the highly evolved systems that we call cultures and
societies.  And especially, it includes what we call consciousness.
In particular, consider the problem of describing the brain in detail
-- in view of the fact that it is the product of tens of thousands of
different genes.  We can certainly see the attractiveness of proposing
to get around all that stuff, simply by postulating some novel "basic"
principle by which our minds are animated by some vital force or
essence we call Mind, or Consciousness, or Soul.

That tendency is not confined to religion and philosophy.  The same
approach pervades our everyday psychology.  We speak of making
decisions by exercising 'freedom of will'; or by finding what
something 'means', or of discovering truths by means of 'intuition'.
But none of those terms explains very much; each only serves to name
another set of mysteries.

The situation is different in Physics.  Consider the whirlpools that
form when water flows down drains.  When a scientist says that this
can be explained by the Conservation of Momentum, that's very
different from attributing it to some convenient Whirlpool God --
because precisely the very same mathematical rule can be used to
explain and predict a vast range of other phenomena, with a precision
and lack of exception found in no other realm of ideas.  That
principle apparently applies to everything in our universe and,
because of its singularly good performance, we regard this sort of
"fundamental" or "unified" principle as an ideal prototype of how to
account for mysterious phenomena.  But one can carry that quest too
far by only seeking new basic principles instead of attacking the real
detail.  This is what I see in Penrose's quest for a new basic
principle of physics that will account for consciousness.

The trouble is that this approach does not work well for systems whose
behavior has evolved through the accretion of many different
mechanisms, over the course of countless years.  For example, in
physiology, the excretion of excess potassium in the urine occurs
because our ancestors evolved elaborate systems of receptors and
transport mechanisms, along with intricate machinery for controlling
them.  This is understood so well today that no one feels that
there's any need to postulate a separate, special principle for the
Conservation of Potassium.  Progress in this area is no longer news
for biology because we have seen two hundred years of great success
accrued from working out details.  Since Harvey, Darwin and Pasteur,
the idea of a Vital Force has nearly vanished from biology.  Why is it
still so much a part of present-day psychology?

I'll argue that vitalism still persists because we're only starting to
find a way to understand the brain.  (I see this as the irony of
Penrose's book, because the path toward understanding lies in the
flood of new ideas that began to grow half a century ago, alongside
the emergence of computers in the 1950s -- including the work of
Turing in 1936, McCulloch and Pitts in 1943, and the hundreds of
thinkers who joined them afterward.)  Yet Penrose takes the other
side, and argues that the abilities of human mathematicians to
discover new mathematical truths cannot be explained on the basis of
anything a machine could do.  He argues in [1], p110, that this kind
of thinking must be based on "insights that cannot be systematized --
and, indeed, must lie outside any algorithmic action!"  He bases this
on the assumption that any thinking machine we build for attempting to
discover knowledge about mathematics must itself be based on some
absolutely consistent logical foundation -- that is, one that cannot
possibly produce any type of logical contradiction or inconsistency.
This is the same assumption used in Gödel's celebrated
'incompleteness theorem'.  Penrose's application of this idea to
psychology is due, as Penrose notes, to J.R. Lucas, in Philosophy,
36, pp120-4, 1961.

Consistency

It seems to me that all of this stands upon a single and simple
mistake.  It overlooks the possibility, as my colleague Drew McDermott
once remarked, of including systems "that are mistaken about
mathematics to some degree, or systems that can change their minds."
By inadvertently ruling such machines out, you've simply begged the
question whether human mathematicians can be kinds of machines --
because people do indeed change their minds, and can indeed be
mistaken about some parts of mathematics.  An entire generation of
logical philosophers has thus wrongly tried to force their theories of
mind to fit the rigid frames of formal logic.  In doing that, they cut
themselves off from the powerful new discoveries of computer science.
Yes, it is true that we can describe the operation of a computer's
hardware in terms of simple logical expressions.  But no, we cannot
use the same expressions to describe the meanings of that computer's
output -- because that would require us to formalize those
descriptions inside the same logical system.  And this, I claim, is
something we cannot do without violating that assumption of
consistency.
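
To make McDermott's point concrete, here is a minimal sketch in Lisp
(an illustration only -- not anything McDermott or Penrose proposed)
of a reasoner that is permitted to be wrong and to change its mind:
it adopts beliefs freely, and when a new belief contradicts an old
one it simply retracts the old one, instead of demanding global
consistency in advance.

    (defvar *beliefs* '())

    (defun contradicts-p (p q)
      "True when one belief is exactly the negation of the other."
      (or (equal p (list 'not q))
          (equal q (list 'not p))))

    (defun believe (p)
      "Adopt P, first retracting any existing belief that P contradicts."
      (setf *beliefs*
            (remove-if (lambda (q) (contradicts-p p q)) *beliefs*))
      (push p *beliefs*))

    ;; The system can hold a mistaken belief and later revise it:
    (believe '(prime 91))         ; wrong, but adopted anyway
    (believe '(not (prime 91)))   ; 91 = 7 * 13, so the old belief goes
    *beliefs*                     ; => ((NOT (PRIME 91)))

Such a machine is "mistaken about mathematics to some degree", yet it
keeps working -- which is exactly the kind of system that the
assumption of a fixed, consistent formal basis rules out.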

If you are not a logician, then you might wonder what all the fuss is
about.  "What could possibly be wrong with logical consistency?  Who
wants those contradictions, anyway?"  The trouble with this is that
the problem is worse than it looks: paradoxes start to turn up as soon
as you permit your machine to use ordinary common-sense reasoning.
For example, troubles appear as soon as you try to speak about your
own sentences, as in "this sentence is false" or "this statement has
no proof" or in "this barber shaves all persons who don't shave
themselves."  The trouble is that when you permit "self reference" you
can quickly produce absurdities.  Now you might say, "Well then, why
don't we redesign the system so that it cannot refer to itself?"  The
answer is that the logicians have never found a way to do this without
either getting into worse problems, or else producing a system too
constrained to be useful.

Then what do ordinary people do?  So far as we know they scarcely use
any logic at all.  The studies made by the great child psychologist
Jean Piaget suggest that the abilities required for manipulating
formal expressions are not reliably available to children until their
second decade, if ever.  And even as a mathematician, I cannot
recognize the psychology Penrose describes.  When doing mathematics,
my mind is filled with many things non-logical.  I imagine examples
based on gears and levers, I imagine conversations that might reveal
to me what Andrew Gleason or Dana Scott might do in the same
situation, or I imagine explaining my solution to a student and
discovering something wrong with it.  There's little sign of consistency
in any of that experience.  Nor is that famous 'intuition' really a
privileged route to the truth, because although the answer seems to
come with a feeling of certainty, later it's likely to turn out to be
wrong.

Perhaps the most important aspect of how humans work is the way in
which we ask ourselves (not necessarily by using words) what problems
we have seen before that most closely resemble the present case, and
how we managed to deal with them.  For those were where we made our
mistakes and then sometimes managed to learn from them.  And notice
that in doing so, we somehow must employ some capabilities for
retrieving and then manipulating some descriptions of some of our
earlier mental activities.  Now, notice how self-referent this is.
Often when you work on a problem you consider doing some certain thing
-- but before you actually carry that out, you often inquire about
yourself, about whether you will actually be able to carry it through.
Solving problems isn't merely applying rules of inference to axioms.
It involves making heuristic assessments about which aspects of the
problem are essential, and which of one's own abilities might be
adequate for dealing with them.  Then, whatever happens next arouses
various feelings and memories of situations that seem similar
and of methods that might be appropriate.  Is this done by some kind of
non-physical magic, or is it accomplished, as I maintain, by the huge
and complex collection of knowledge-base representations and
pattern-matching processes that we all regard as 'common sense'?
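
As a minimal sketch of the sort of retrieval just described (an
illustration only, with made-up problem descriptions and a crude
feature-overlap measure of resemblance), consider how a program might
find the remembered case that most resembles a new problem:

    (defun similarity (case-1 case-2)
      "Count the features that two problem descriptions share."
      (length (intersection case-1 case-2 :test #'equal)))

    (defun most-similar-case (problem memory)
      "Return the (CASE . SOLUTION) pair whose CASE best matches PROBLEM."
      (reduce (lambda (best next)
                (if (> (similarity problem (car next))
                       (similarity problem (car best)))
                    next
                    best))
              memory))

    ;; Recall how the most similar earlier problem was handled:
    (most-similar-case '(stuck lid jar kitchen)
                       '(((stuck lid jar)       . run-hot-water)
                         ((stuck zipper jacket) . rub-with-soap)
                         ((flat tire bicycle)   . patch-kit)))
    ;; => ((STUCK LID JAR) . RUN-HOT-WATER)

Real common sense surely uses far richer descriptions and far subtler
matching than this; the point is only that nothing about the operation
itself requires magic.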

Now it happens that when we do such things, we often find that we talk
to ourselves about what we're doing.  And when we thus "refer to
ourselves" we sometimes speak of being conscious or aware.  I think it
no coincidence that Penrose feels that this, too, is something
present-day science cannot explain.  Could this result from
just that fear of inconsistency and self-reference?  Indeed, Penrose
sometimes speaks of a "reflection principle" with something
resembling awe: "The type of 'seeing' that is involved in a reflection
principle requires a mathematical insight that is not the result of
the purely algorithmic operations that could be coded into some formal
mathematical system (p110)."  In my opinion this is just a mistake!
He appears to assume that when this is applied to humans, the word
"consistent" can be freely inserted between 'some' and 'mathematical'
-- as though people possess some marvelous gift whereby they can tell
which assertions are true.  But in view of the many mistakes we all
make, I see no compelling evidence that anyone has any such direct
access to truth.  All we can depend upon (including the power of
formal proof) is based on our experience.  I think.  And in any case
there really is no problem at all in programming a computer to perform
that sort of reflective operation. Indeed John McCarthy has pointed
out that forming a Gödel sentence from a proof-predicate expression
(which is the basis of the Lucas-Penrose argument) requires no more
than a one-line LISP program.  So in my view Penrose and many other
philosophers have put the problem upside down: the difficulty is not
with making algorithms that can do reflection -- which is easy for
machines, but with consistency -- which is hard for people.  In
summary, there is no basis for assuming that humans are consistent --
nor is there any basic obstacle to making machines use inconsistent
forms of reasoning.
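
To illustrate how mechanically easy that reflective step is, here is a
sketch of the diagonal construction (not McCarthy's actual one-liner,
which is not reproduced here), with PROVABLE standing in for whichever
proof-predicate expression one has in mind:

    (defun diagonalize (f)
      "F is a one-argument lambda expression, given as a list.  Apply it
      to the term (DIAGONALIZE 'F), which denotes this very result."
      (funcall (coerce f 'function)
               (list 'diagonalize (list 'quote f))))

    (defvar *goedel-sentence*
      (diagonalize '(lambda (self)
                      (list 'not (list 'provable self)))))
    ;; *GOEDEL-SENTENCE* is the formula
    ;;   (NOT (PROVABLE (DIAGONALIZE '(LAMBDA (SELF) ...)))),
    ;; and the DIAGONALIZE term inside it denotes that very formula --
    ;; a sentence that says, in effect, "I am not provable."

The reflective step, in other words, is mechanically trivial; what is
hard is the consistency that the Lucas-Penrose argument must assume.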

Consciousness

Even the most technically sophisticated people maintain that whatever
consciousness might be, it has a quality that categorically places it
outside the realm of science, namely, a subjective character that
makes it utterly private and unobservable.  Why do so many people feel
that consciousness cannot be explained in terms of anything science
can presently do?

Instead of arguing about that issue, let's try to understand the
source of that skeptical attitude.  I have found that many people
maintain that even if a machine were programmed to behave in a manner
indistinguishable from a person, it still could not have any
subjective experience.  Now isn't that a strange belief -- considering
that unless you were a machine yourself, how could you possibly know
such a thing?  As for 'subjectivity,' consider that @i[talking] about
consciousness is a common, objective form of behavior.  Therefore, any
machine that suitably simulated a human brain would have to produce
that behavior.  Then, wouldn't it be curious for our artificial entity
to falsely claim to have consciousness?  For if it had no such
experience, then how could it possibly know what to say?  Of course a
classic question in philosophy is asking for proof that our friends
have minds; perhaps they are merely unfeeling machines.  But then one
must ask how they'd know how to lie.

In any case, we have much the same problem with ourselves; try asking
a friend to describe what having consciousness is like.  Good luck!
Most likely you'll hear only the usual patter about knowing oneself and
being aware, of sensing one's place in the universe, and so on.  Why
is explaining consciousness so dreadfully hard?  I'll argue that this
is something of an illusion, because consciousness is actually easier
to describe than most other aspects of mind; indeed, our problem is a
far more general one, because our culture has not developed suitable
tools for discussing and describing thinking in general.  This leads
to what I see as a kind of irony; it is widely agreed that there are
"deep philosophical questions" about subjectivity, consciousness,
meaning, etc. But people have even less to say about questions they'd
consider more simple:

        How do you know how to move your arm?
        How do you choose which words to say?
        How do you recognize what you see?
        How do you locate your memories?
        Why does Seeing feel different from Hearing?
        Why does Red look so different from Green?
        Why are emotions so hard to describe?
        What does "meaning" mean?
        How does reasoning work?
        How do we make generalizations?
        How do we get (make) new ideas?
        How does Commonsense reasoning work?
        Why do we like pleasure more than pain?
        What are pain and pleasure, anyway?

We never discuss these in everyday life, or bring them up in our
children's schools.  An alien observer might even conclude that those
Earth-people seem to have a strong taboo against thinking about
thinking.  It seems to me that this is because our traditional views
of psychology were so mechanistically primitive that we simply had no
useful ways to even begin to discuss such things. This is why I find
such irony in the arguments of those who reject the new mechanistic
concepts of psychology -- the new ideas about computational processes
that promise at last to supply us with adequate descriptions of these
complex processes.

The science of Psychology, as we know it today, is scarcely one
hundred years old.  Why did humanity wait so long before the emergence
of thinkers like Freud, Piaget, and Tinbergen?  I think the answer
lies in the fact that the brain is not merely a kind of machine, but
one that is far more complex than anything ever imagined before.  The
pivotal notion provided by those three pioneers was that the mind has
many parts.  A person doesn't simply See by "looking out" through the
eyes.  Instead, vision involves many different processes, cooperating,
competing, being promoted and inhibited by other processes, being
managed and regulated by yet others.  You cannot simply 'recognize' a
telephone, because that is scarcely at all a matter of vision;
instead, you have to "re-cognize" it -- that is, the input has to
somehow activate some memory representations of a device with a
certain kind of structure (handset and dial, say) coupled with a
certain functional disposition (to hold to the mouth and ear for
communication purposes).  This is nothing like the sorts of unitary
concepts found both in commonsense and philosophy, e.g., of a platonic
ideal of a telephone, or some sort of model inside the head.

In
recent years we've learned much more about the complexity of the
brain.  It now appears that perhaps fully half of our entire genetic
endowment is involved in constructing our nervous systems.  This would
suggest that the brain is nothing like a single large-scale neural
net; instead, it would have even more parts than the skeletomuscular
system -- which can be seen to have hundreds of functional parts.  If
you examine the index of a book on neuroanatomy, you will find the
names of several hundred different organs of the brain.  A good
fraction of those are already known to have psychologically distinct
functions.  To pursue the analogy a little further, note that the
skeletal anatomies of animals have been known for millennia, but only
in rather recent years have scientists understood the mechanics of
locomotion and its various gaits; that had to wait until scientists
learned more about the mechanics of forces and materials.  Similarly,
mechanistic theories of psychology may have to wait even longer for
adequate conceptual tools because the 'mechanics' of heuristic
computation could turn out to be more complex than those of physics.

Before these new ideas emerged with the era of complex
information-processing computer models, such models were not
considered convincing -- perhaps because there were no feasible
experiments.  I don't mean to say that there was no progress at all
before computers, only that there was precious little.  Freud himself
was one of the first to conceive of "neural-net-like" machines -- only
no one would listen to him except Fliess.  Later came the astounding
insights of Post, Gödel, and Turing, followed by those of Rashevsky's
group, McCulloch and Pitts, and Grey Walter's simple yet somewhat
life-like mini-robots.  But significant progress began only in the
1950s when more serious models could be conceived, tested, and
discarded in days or weeks instead of years.  Soon the researchers in
Artificial Intelligence discovered a wide variety of ways to make
machines do pattern recognition, learning, problem solving, theorem
proving, game-playing, induction and generalization, and language
manipulation, to mention only a few.  To be sure, no one of those
programs seemed much like a mind, because each one was so specialized.
But now we're beginning to understand that there may be no need to
seek either any single magical "unified theory" or any single,
hitherto unknown "fundamental principle" -- because thinking may
instead be the product of many different mechanisms, competing as much
as cooperating, and generally unperceived and unsuspected in the
ordinary course of our everyday thought.

What has all this to do with consciousness?  Well, consider what
happened in biology.  Before the 19th century there seemed to be no
alternative to the concept of "vitality" -- that is, the existence of some
sort of life-force.  There simply seemed no other way to explain all
the things that animals do.  But then, as scientists did their work,
they gradually came to see no need for a "unified theory" of life.
Each living thing performed many functions, but it slowly became clear
that each of them had a reasonably separate explanation!  For the most
part each separate function was served by a different and specialized
organ!  Thus the lungs oxygenate the blood, while the heart pumps it
to other organs.  The nucleus reproduces all the organs' structural
information, while the ribosomes translate those codes into proteins,
which then fold themselves into their working configurations.  For
some time that subsequent self-configuration appeared to entail a
mystery.  It seemed natural to assume that those configurations were
based on a uniform energy-minimizing mechanism -- but simulations did
not bear this out.  This appears not to be so; instead, each protein
has had to evolve this property on its own.  (A
random string of peptides cannot usually manage it.)  Conclusion:
There is no central principle, no basic secret of life.  Instead, what
we have are huge organizations, painfully evolved, that manage to do
what must be done by hook or crook, by whatever has been found to
work.

Why not assume the same for the mind?  (I could have said the brain,
instead -- but in my view minds are simply what brains do.)  Why else
would our brains contain so many hundreds of organs?  Of course there
are many old arguments against localization of brain-functions because
it seemed that often a mind still works when some of its brain has
been lost.  One answer to that is to argue that many functions are
accomplished in multiple ways, not only to provide resistance to some
injuries, but perhaps more important, because no particular way is
likely to be always reliable.  To be sure, there still seem to be some
mental phenomena that have not yet been shown to "organ-ized".  So
there is still some room for theories about mechanisms that are not so
localized.  But now, I maintain, it is time for "insulationism" to
take its place along with, and in complementary opposition to,
connectionism.

Then what might be the functions and the organs of what we call
consciousness?  To discuss this, we'll have to agree on what we're
talking about -- so I'll use the word consciousness to mean the
organization of the different ways we have for knowing what is happening
inside our minds, our bodies, and in the world outside.  Here is my
thesis; some people may find it too radical:

        We humans do not possess much consciousness.
        That is, we have very little natural ability to sense
        what happens within and outside ourselves.

In short, much of what is commonly attributed to consciousness is
mythical -- and this may in part be what has led people to think that
the problem of consciousness is so very hard. My view is quite the
opposite: that some machines are already potentially more conscious
than are people, and that further enhancements would be relatively
easy to make.  However, this does not imply that those machines would
thereby, automatically, become much more intelligent.  This is because
it is one thing to have access to data, but another thing to know how
to make good use of it.  Knowing how your pancreas works does not make
you better at digesting your food.  So consider now: to what extent
are you aware?  How much do you know about how you walk?  It is
interesting to tell someone about the basic form of biped locomotion:
you move in such a way as to start falling, and then you extend your
leg to stop that fall.  Most people are surprised at this, and seem to
have no idea which muscles are involved; indeed, few people even know
which muscles they possess.  In short, we are not much aware of what
our bodies do.  We're even less aware of what goes on inside our
brains.

Similarly we can ask to what extent we're aware of the words we
speak.  At first one thinks, "yes, I certainly can remember that I
just pronounced "the words we speak."  But to what extent are we aware
of the process that produced those particular words?  Why, barely at
all!  We have to employ linguists for lifetimes of research even to
discover the simplest aspects of the language production process.

Finally, I can ask you questions like, "Can you tell me what you are
thinking about?"  The answers to such questions are hard to interpret.
The listener might list the names of some subjects or concerns that
were recently in mind, and sometimes can describe a bit of the trains
of thought that led to them.  These kinds of answers clearly feed upon
memories of recent brain-activities.  But every such answer seems
incomplete, as though the act of probing into any one of those
memories interferes with subsequently reaching any other ones.  In any
case, I cannot think of any aspect of consciousness that could operate
without making use of short-term memories, and this suggests that the
term 'consciousness' is usually used in connection with whatever
processes brains use for accessing memories of their recent states.

This raises the question of the extent to which such memories might
really exist inside our brains.  Clearly there is a problem: if the
same neural network has been used recently for only a single purpose,
then it may still contain substantial information about what it
recently did.  But if it was used for several things, then most of
those traces will have been overwritten -- unless some special
hardware has been evolved for maintaining such records.  For a modern
computer, there is much less of a problem with this because we can
write programs to store such records inside the machine's 'general
purpose memory".  Of course, there will be ultimate limits on the size
of such records, but not on the nature of their contents.  For
example, most LISP language systems allow the user to specify that all
the activations of an arbitrary set of program-components will have
traces stored recursively.  If you specify enough of this before you
run your program, then subsequently you'll be able to find out
everything it did -- and even to simulate running it backwards.
However, as we've already said, having such access does not by itself
enable the machine to make a good interpretation of those records.
Certainly some degree of consciousness -- in the sense of access
to such records -- is necessary for a person (or machine) to be
intelligent.  But even a large degree of such 'consciousness' would
not by itself yield intelligence.
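
To make that record-keeping concrete, here is a minimal sketch -- an
illustration only, not the built-in TRACE facility of any particular
Lisp system -- of wrapping a named function so that every activation
is stored for later inspection:

    (defvar *history* '() "Recent activations, most recent first.")

    (defun record-calls (name)
      "Wrap the function named NAME so each activation is remembered."
      (let ((original (symbol-function name)))
        (setf (symbol-function name)
              (lambda (&rest args)
                (let ((result (apply original args)))
                  (push (list name :args args :result result) *history*)
                  result)))))

    (defun square (x) (* x x))
    (record-calls 'square)
    (square 7)           ; => 49
    (first *history*)    ; => (SQUARE :ARGS (7) :RESULT 49)

A machine with such records has, in the limited sense used here, some
access to what it has recently been doing; whether it can interpret
those records usefully is the separate and harder question.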

So this finally leads us to some really important questions about the
uses of consciousness.  It seems entirely clear to me that
consciousness is useful.  It can't be what some philosophers
claim: some sort of useless metaphysical accessory.  On the contrary,
there are important ways to exploit short term memories.  For example,
one has to keep out of loops -- that is, avoid repeating an unsuccessful
action many times -- which requires knowing what already has been
done.  Also, after one has successfully solved a difficult problem,
one wants to "assign credit" to those actions that actually helped.
This may involve a good deal of analysis -- in effect, thinking about
what you've recently done -- which clearly requires good records.
Furthermore, such evaluations must be done on various scales; did you
waste the last few moments, and why; or did you waste an entire year?
(Why do we use the term consciousness only about the shorter-term
memories?)  On each such scale, you'd better have an adequate array of
memories.  Otherwise you cannot intelligently revise your plans,
adjust your strategies, take stock of your resources, and in many
other ways maintain some control over your future.  On how many such
time-scales do we work, and how many different mechanisms are involved
with each?  Because we're living in the early times of psychology, no
one can yet answer such questions.  Clearly it is time to begin to
seek constructive ways to study them.  To do this we should prepare
ourselves for coping with complexity, because it seems unlikely that
so many different functions can emerge from a single, completely new
principle.
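
Here is a minimal sketch (an illustration only) of how loop-avoidance
and credit assignment both depend on such records: a searcher that
consults a memory of what it has already tried, so as to stay out of
loops, and that returns the successful sequence of steps -- exactly
the record needed afterward for assigning credit.

    (defun search-with-memory (start goal-p moves)
      "Depth-first search that never revisits a state; it returns the
      successful path -- the record needed later for assigning credit."
      (let ((visited (make-hash-table :test #'equal)))
        (labels
            ((try (state path)
               (cond ((funcall goal-p state) (reverse (cons state path)))
                     ((gethash state visited) nil)  ; seen before: skip it
                     (t (setf (gethash state visited) t)
                        (some (lambda (next) (try next (cons state path)))
                              (funcall moves state))))))
          (try start '()))))

    ;; Example: reach 10 from 1 by doubling or by adding one.
    (search-with-memory 1
                        (lambda (n) (= n 10))
                        (lambda (n) (when (< n 10)
                                      (list (* 2 n) (+ n 1)))))
    ;; => (1 2 4 8 9 10)

The record-keeping itself is easy; deciding which of those remembered
steps actually deserved the credit is where the harder thinking lies.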

Then what is the alternative?  We'll simply have to face the fact
that our many-hundred-organ-ed brain is not a useless luxury.  By the
time of your birth the brain contains hundreds of specialized
agencies, and by the time that you're an adult, most of those systems
have probably grown through dozens of stages of development.  Now at
various times in those first few years, some of those systems create
the most supremely useful of all fictions, namely, that the unwritten
novel that constitutes your life is centered on a principal
protagonist -- one that you conceive of as your consciousness, like an
actual person inside your head!  Some sections of [2] describe in more
detail why this illusion is so useful in life; indeed, in effect, it
makes itself true.  But the point of all this is to emphasize that
none of those old simplistic concepts from the past -- those spirits,
souls, and essences -- can help us with that modern task of
understanding how all those different resources are constructed,
operated, and managed.  Surely they work to a large extent as a
partially cooperative parallel system -- but also, surely, they are
largely controlled (much as Dennett suggests in [3]) by one or several
sequentially controlled systems, which in turn are assembled from
smaller parts.  The first sentence in my book [2], attributed to
Einstein, is "Everything should be made as simple as possible, but not
simpler."  The first step to take toward doing that is to exorcise
those Spirits from Psychology.

[1] The Emperor's New Mind, Roger Penrose
[2] The Society of Mind, Marvin Minsky

