X-Message-Number: 104
From arpa!Xerox.COM!merkle.pa Mon Jun 26 20:02:45 PDT 1989
Received: from Cabernet.ms by ArpaGateway.ms ; 26 JUN 89 20:02:49 PDT
Date: Mon, 26 Jun 89 20:02:45 PDT
Subject: Re: CRYONICS #102 - Re: Brain Computational Memory Limits
In-reply-to: "'s message of Mon, 26 Jun 89 19:41 EDT"
Message-ID: <>

Thomas Donaldson <> said:

>I read Ralph Merkle's paper with some interest. If I examined his figures
>closely I might find reasons to doubt his estimate by as much as a factor of
>10, but that is trivia. However his paper does (implicitly) raise a central
>issue much more important: what is the SIGNIFICANCE of his computational
>estimate, anyway?

There are several implications of such a number.  The most important
is simply that we will have hardware in the not-too-distant future
capable of outperforming the human brain.  By a large margin.  By
margins so large that the prospect is daunting.

Thomas objects that we will also need the software.  This is true.  He
objects that we will need to know how to connect a large number of gates
into a device effectively able to perform the computations the brain can do
(the computer "architecture").  This is also true.

Both these problems will be solved.  At a minimum, when we have a
technology capable of imaging the human brain with molecular precision, we
can determine the neuronal structure and function in full detail.  We have
already determined what the retina does with fairly high accuracy.  With
further effort, we should at least be able to determine how the isolated
parts of the brain work.  Even if we are unable to see the "grand pattern,"
if we can analyze local structure we can then duplicate its local function.
Then, we can connect these pieces together in the "grand pattern" even
without understanding exactly how it works.  The "grand pattern" might
continue to be mysterious, but we would have produced human abilities in a
machine nonetheless.

It seems quite likely that something more interesting will happen.  We will
probably solve the problems of AI.  We can do so directly (without
"cheating" and looking at the existing biological solutions) or we can do
so indirectly.  The second approach is more likely.  In this approach, we
combine everything we can guess or learn about the function of the
biological system with everything we can guess or learn about machine
intelligence by simply trying to write intelligent programs.  As we learn
more on either front, we gain a better and more coherent picture of the
whole.  Eventually, we will learn a significant amount about both, and
combine that understanding into a very powerful thinking system.

I am not impressed with the "Frankenstein's Monster" fear that such a
system will turn against its masters.  I am concerned that such a system
will do more or less what it was designed and built to do.  It is this
second concern that seems more significant.  The existence of the human
brain sets lower bounds on what can be accomplished by a system of given
complexity.  We will be able to create systems of much greater complexity,
therefore they will be able to do at least the following:

1.)  Solve the same problems faster.  The basic time delay in biological
systems is one millisecond, while the basic time delay in computers of the
future will be less than (and probably quite a bit less than) one
nanosecond.  Thus, we will be able to build devices that solve the
problems human beings solve today, but over one million times faster.

2.)  Solve many instances of the same problem, independently and in
parallel.  Again, a factor of a million or more seems likely.
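As a sanity check, the serial speedup follows directly from the ratio of
the two switching delays quoted above.  A minimal sketch in Python, with
the delays expressed in integer picoseconds (a unit choice of mine, so the
arithmetic stays exact):

```python
# Back-of-envelope check of the serial speedup argument.
# The figures (~1 ms neuronal delay, ~1 ns gate delay) are the ones
# quoted in the text; picosecond units are chosen here to keep
# the division exact.

neuron_delay_ps = 1_000_000_000  # ~1 ms basic delay in biological systems
gate_delay_ps = 1_000            # ~1 ns basic delay assumed for future hardware

serial_speedup = neuron_delay_ps // gate_delay_ps
print(f"serial speedup: {serial_speedup:,}x")  # prints "serial speedup: 1,000,000x"
```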

Thomas argues that a specific computer, designed for a specific problem,
will solve that specific problem faster and better than a different
computer of similar complexity, but which was designed to solve a different
problem.  Hence, "benchmarks" are irrelevant.  A computer that runs a
particular benchmark very quickly as compared with another "slower"
computer, might run a different benchmark very slowly.  This is true, but
largely beside the point.  He also argues that much of what is known about
conventional serial computation will not prove useful in a parallel
computation.  This is almost certainly true for some range of
computational problems.

However, if a task is already performed by the human brain, then we know a
lower bound on how well that task can be performed by a device specifically
designed to solve that task.  A device with one million times the raw
computational capacity of the human brain, specifically designed to solve a
problem already solved by the human brain, will do so one million times
more quickly, or solve a million instances of the problem while the human
could solve only one.

A number of specific possibilities will no doubt come to mind.

Is computational power a forgotten concept?  No.  If I have more gates,
operating faster, I can compute more things more quickly.  How to arrange
the gates to best advantage to solve a particular problem will continue to
be a problem of great interest -- but if I have a million times as many
gates, operating a million times faster, then the device I build will
compute at least a million times faster and can do so a million times in
parallel while your device can finish but one problem.  (Assuming I can
peek at your design and steal the good ideas you use.  Which, in the case
of the human brain, we will be able to do at some point.)
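The gates-times-speed argument reduces to a one-line calculation.  The two
million-fold ratios below are the ones used in the text; treating raw
capacity as their product is a back-of-envelope assumption, not a measured
figure:

```python
# Sketch of the "more gates, operating faster" argument: raw capacity
# scales as (number of gates) x (switching rate).  Both million-fold
# factors come from the text and are illustrative.

gate_ratio = 1_000_000   # a million times as many gates
speed_ratio = 1_000_000  # each gate a million times faster

# Serially: one problem solved a million times faster.
# In parallel: a million independent instances, each still a million
# times faster than the brain.
capacity_ratio = gate_ratio * speed_ratio
print(f"raw capacity ratio: {capacity_ratio:.0e}")  # prints "raw capacity ratio: 1e+12"
```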

The significance?  We will build such devices, and use them to accomplish
ends of our devising.  We will build them, and set them some tasks.

What tasks?  Perhaps we need to think about this.  What ends?  Perhaps we
should be careful.  And, perhaps most vexing, WHO will set these tasks?  It
might not be us.  It might not even be someone we like very much.  What
then?

Perhaps it would help if we thought about these things before they
happened.  Of course, to think about such things, we must first be aware
that they are possible.

Hence my paper.
