X-Message-Number: 0019.3
Subject: The Technical Feasibility of Cryonics; Part #3

Newsgroups: sci.cryonics
From:  (Ralph Merkle)
Subject: The Technical Feasibility of Cryonics; Part #3
Date: 22 Nov 92 21:15:56 GMT

The Technical Feasibility of Cryonics

PART 3 of 5.


Ralph C. Merkle
Xerox PARC
3333 Coyote Hill Road
Palo Alto, CA 94304

A shorter version of this article appeared in:
Medical Hypotheses (1992)  39, pages 6-16.


Even if information theoretic death has not occurred, a frozen brain is 
not a healthy structure.  While repair might be feasible in principle, 
it would be comforting to have at least some idea about how such repairs 
might be done in practice.  As long as we assume that the laws of 
physics, chemistry, and biochemistry with which we are familiar today 
will still form the basic framework within which repair will take place 
in the future, we can draw well founded conclusions about the 
capabilities and limits of any such repair technology.

     The Nature of This Proposal

To decide whether or not to pursue cryonic suspension we must answer one 
question:  will restoration of frozen tissue to a healthy and functional 
state ever prove feasible?  If the answer is "yes," then cryonics will 
save lives.  If the answer is "no," then it can be ignored.  As 
discussed earlier, the most that we can usefully learn about frozen 
tissue is the type, location and orientation of each molecule.  If this 
information is sufficient to permit inference of the healthy state with 
memory and personality intact, then repair is in principle feasible.  
The most that future technology could offer, therefore, is the ability 
to restore the structure whenever such restoration was feasible in 
principle.  We propose that just this limit will be closely approached 
by future advances in technology.

It is unreasonable to think that the current proposal will in fact form 
the basis for future repair methods for two reasons:

First, better technologies and approaches are likely to be developed.  
Necessarily, we must restrict ourselves to methods and techniques that 
can be analyzed and understood using the currently understood laws of 
physics and chemistry.  Future scientific advances, not anticipated at 
this time, are likely to result in cheaper, simpler or more reliable 
methods.  Given the history of science and technology to date, the 
probability of future unanticipated advances is good.

Second, this proposal was selected because of its conceptual simplicity 
and its obvious power to restore virtually any structure where 
restoration is in principle feasible.  These are unlikely to be design 
objectives of future systems.  Conceptual simplicity is advantageous 
when the resources available for the design process are limited.  Future 
design capabilities can reasonably be expected to outstrip current 
capabilities, and the efforts of a large group can reasonably be 
expected to allow analysis of much more complex proposals than 
considered here.

Further, future systems will be designed to restore specific individuals 
suffering from specific types of damage, and can therefore use specific 
methods that are less general but which are more efficient or less 
costly for the particular type of damage involved.  It is easier for a 
general-purpose proposal to rely on relatively simple and powerful 
methods, even if those methods are less efficient.

Why, then, discuss a powerful, general purpose method that is 
inefficient, fails to take advantage of the specific types of damage 
involved, and which will almost certainly be superseded by future 
advances?

The purpose of this paper is not to lay the groundwork for future 
systems, but to answer a question: under what circumstances will 
cryonics work?  The value of cryonics is clearly and decisively based on 
technical capabilities that will not be developed for several decades 
(or longer).  If some relatively simple proposal appears likely to work, 
then the value of cryonics is established.  Whether or not that simple 
proposal is actually used is irrelevant.  The fact that it could be used 
in the improbable case that all other technical progress and all other 
approaches fail is sufficient to let us decide today whether or not 
cryonic suspension is of value.

The philosophical issues involved in this type of long range technical 
forecasting and the methodologies appropriate to this area are addressed 
by work in "exploratory engineering."[1]  The purpose of exploratory 
engineering is to provide lower bounds on future technical capabilities 
based on currently understood scientific principles.  A successful 
example is Konstantin Tsiolkovsky's forecast around the turn of the 
century that multi-staged rockets could go to the moon.  His forecast 
was based on well understood principles of Newtonian mechanics.  While 
it did not predict when such flights would take place, nor who would 
develop the technology, nor the details of the Saturn V booster, it did 
predict that the technical capability was feasible and would eventually 
be developed.  In a similar spirit, we will discuss the technical 
capabilities that should be feasible and what those capabilities should 
make possible.

Conceptually, the approach that we will follow is simple:

1.)     Determine the coordinates and orientations of all major molecules, 
and store this information in a data base.

2.)      Analyze the information stored in the data base with a computer 
program which determines what changes in the existing structure should 
be made to restore it to a healthy and functional state.

3.)     Take the original molecules and move them, one at a time, back to 
their correct locations.

The reader will no doubt agree that this proposal is conceptually simple 
and remarkably powerful, but might be concerned about a number of 
technical issues.  The major issues are addressed in the following 
sections.

An obvious inefficiency of this approach is that it will take apart and 
then put back together again structures and whole regions that are in 
fact functional or only slightly damaged.  Simply leaving a functional 
region intact, or using relatively simple special case repair methods 
for minor damage would be faster and less costly.  Despite these obvious 
drawbacks, the general purpose approach demonstrates the principles 
involved.  As long as the inefficiencies are not so extreme that they 
make the approach infeasible or uneconomical in the long run, then this 
simpler approach is easier to evaluate.

     Overview of the Brain.

The brain has a volume of 1350 cubic centimeters (about one and a half 
quarts) and a weight of slightly more than 1400 grams (about three 
pounds).  The smallest normal human brain weighed 1100 grams, while the 
largest weighed 2050 grams [30, page 24].  It is almost 80% water by 
weight.  The remaining 20% is slightly less than 40% protein, slightly 
over 50% lipids, and a few percent of other material[16, page 419].  
Thus, an average brain has slightly over 100 grams of protein, about 175 
grams of lipids, and some 30 to 40 grams of "other stuff".

     How Many Molecules

If we are considering restoration down to the molecular level, an 
obvious question is: how many molecules are there?  We can easily 
approximate the answer, starting with the proteins.  An "average" 
protein molecule has a molecular weight of about 50,000 amu.  One mole 
of "average" protein is 50,000 grams (by definition), so the 100 grams 
of protein in the brain is 100/50,000 or .002 moles.  One mole is 6.02 x 
10^23 molecules, so .002 moles is 1.2 x 10^21 molecules.

We proceed in the same way for the lipids (lipids are most often used to 
make cell membranes) - a "typical" lipid might have a molecular weight 
of 500 amu, which is 100 times less than the molecular weight of a 
protein.  This implies the brain has about 175/500 x 6.02 x 10^23 or 
about 2 x 10^23 lipid molecules.

Finally, water has a molecular weight of 18, so there will be about 1400 
x 0.8/18 x 6.02 x 10^23 or about 4 x 10^25 water molecules in the brain.  
In many cases a substantial percentage of water will have been replaced 
with cryoprotectant during the process of suspension; glycerol at a 
concentration of 4 molar or more, for example.  Both water and glycerol 
will be treated in bulk, and so the change from water molecules to 
glycerol (or other cryoprotectants) should not have a significant impact 
on the calculations that follow.

These numbers are fundamental.  Repair of the brain down to the 
molecular level will require that we cope with them in some fashion.
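The three counts above are easy to check.  A few lines of arithmetic 
reproduce them (the "average" molecular weights are the rough working 
assumptions stated above, not measured values):

```python
AVOGADRO = 6.02e23  # molecules per mole

protein_mass = 100.0   # grams of protein in an average brain
lipid_mass = 175.0     # grams of lipids
brain_mass = 1400.0    # grams, total
water_fraction = 0.8   # the brain is almost 80% water by weight

protein_mw = 50_000.0  # amu, "average" protein
lipid_mw = 500.0       # amu, "typical" lipid
water_mw = 18.0        # amu

protein_molecules = protein_mass / protein_mw * AVOGADRO  # ~1.2 x 10^21
lipid_molecules = lipid_mass / lipid_mw * AVOGADRO        # ~2 x 10^23
water_molecules = brain_mass * water_fraction / water_mw * AVOGADRO  # ~4 x 10^25
```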

     How Much Time

Another parameter whose value we must decide is the amount of repair 
time per molecule.  We assume that such repair time includes the time 
required to determine the location of the molecule in the frozen tissue 
and the time required to restore the molecule to its correct location, 
as well as the time to diagnose and repair any structural defects in the 
molecule.  The computational power required to analyze larger-scale 
structural damage - e.g., this mitochondrion has suffered damage to its 
internal membrane structure (so-called "flocculent densities") - should 
be less than the power required to analyze each individual molecule.  An 
analysis at the level of sub-cellular organelles involves several orders 
of magnitude fewer components and will therefore require correspondingly 
less computational power.  Analysis at the cellular level involves even 
fewer components.  We therefore neglect the time required for these 
additional computational burdens.  The total time required for repair is 
just the sum over all molecules of the time required by one repair 
device to repair that molecule  divided by the number of repair devices.  
The more repair devices there are, the faster the repair will be.  The 
more molecules there are, and the more time it takes to repair each 
molecule, the slower repair will be.
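The relationship just described can be written as a one-line formula. 
The sketch below assumes the work divides perfectly evenly across 
identical repair devices working in parallel:

```python
def total_repair_time(per_molecule_times, n_devices):
    """Total repair time: the sum of per-molecule repair times,
    divided by the number of devices working in parallel."""
    return sum(per_molecule_times) / n_devices

# Toy example: 1000 molecules at 100 seconds each, shared by 10 devices.
t = total_repair_time([100.0] * 1000, 10)  # 10,000 seconds
```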

The time required for a ribosome to manufacture a protein molecule of 
400 amino acids is about 10 seconds[14, page 393], or about 25 
milliseconds to add each amino acid.  DNA polymerase III can add an 
additional base to a replicating DNA strand in about 7 milliseconds[14, 
page 289].  In both cases, synthesis takes place in solution and 
involves significant delays while the needed components diffuse to the 
reactive sites.  The speed of assembler-directed reactions is likely to 
prove faster than current biological systems.  The arm of an assembler 
should be capable of making a complete motion and causing a single 
chemical transformation in about a microsecond[85].  However, we will 
conservatively base our computations on the speed of synthesis already 
demonstrated by biological systems, and in particular on the slower 
speed of protein synthesis.

We must do more than synthesize the required molecules - we must analyze 
the existing molecules, possibly repair them, and also move them from 
their original location to the desired final location.  Existing 
antibodies can identify specific molecular species by selectively 
binding to them, so identifying individual molecules is feasible in 
principle.  Even assuming that the actual technology employed is 
different it seems unlikely that such analysis will require 
substantially longer than the synthesis time involved, so it seems 
reasonable to multiply the synthesis time by a factor of a few to 
provide an estimate of time spent per molecule.  This should, in 
principle, allow time for the complete disassembly and reassembly of the 
selected molecule using methods no faster than those employed in 
biological systems.  While the precise size of this multiplicative 
factor can reasonably be debated, a factor of 10 should be sufficient.  
The total time required to simply move a molecule from its original 
location to its correct final location in the repaired structure should 
be smaller than the time required to disassemble and reassemble it, so 
we will assume that the total time required for analysis, repair and 
movement is 100 seconds per protein molecule.

     Temperature of Analysis

Warming the tissue before determining its molecular structure creates 
definite problems: everything will move around.  A simple solution to 
this problem is to keep the tissue frozen until after all the desired 
structural information is recovered.  In this case the analysis will 
take place at a low temperature.  Whether or not subsequent operations 
should be performed at the same low temperature is left open.  A later 
section considers the various approaches that can be taken to restore 
the structure after it has been analyzed.

     Repair or Replace?

In practice, most molecules will probably be intact - they would not 
have to be either disassembled or reassembled.  This should greatly 
reduce repair time.  On a more philosophical note, existing biological 
systems generally do not bother to repair macromolecules (a notable 
exception is DNA - a host of molecular mechanisms for the repair of this 
molecule are used in most organisms).  Most molecules are generally used 
for a period of time and then broken down and replaced.  There is a slow 
and steady turnover of molecular structure - the atoms in the roast beef 
sandwich eaten yesterday are used today to repair and replace muscles, 
skin, nerve cells, etc.  If we adopted nature's philosophy we would 
simply discard and replace any damaged molecules, greatly simplifying 
molecular "repair".

Carried to its logical conclusion, we would discard and replace all  the 
molecules in the structure.  Having once determined the type, location 
and orientation of a molecule in the original (frozen) structure, we 
would simply throw that molecule out without further examination and 
replace it.   This requires only that we be able to identify the 
location and type of individual molecules.  It would not be necessary to 
determine if the molecule was damaged, nor would it be necessary to 
correct any damage found.  By definition, the replacement molecule would 
be taken from a stock-pile of structurally correct molecules that had 
been previously synthesized, in bulk, by the simplest and most 
economical method available.

Discarding and replacing even a few atoms might disturb some people.  
This can be avoided by analyzing and repairing any damaged molecules.  
However, for those who view the simpler removal and replacement of 
damaged molecules as acceptable, the repair process can be significantly 
simplified.  For purposes of this paper, however, we will continue to 
use the longer time estimate based on the premise that full repair of 
every molecule is required.  This appears to be conservative.   (Those 
who feel that replacing their atoms will change their identity should 
think carefully before eating their next meal!)

     Total Repair Machine Seconds

We shall assume that the repair time for other molecules is similar per 
unit mass.  That is, we shall assume that the repair time for the lipids 
(which each weigh about 500 amu, 100 times less than a protein) is about 
100 times less than the repair time for a protein.  The repair time for 
one lipid molecule is assumed to be 1 second.  We will neglect water 
molecules in this analysis, assuming that they can be handled in bulk.

We have assumed that the time required to analyze and synthesize an 
individual molecule will dominate the time required to determine its 
present location, the time required to determine the appropriate 
location it should occupy in the repaired structure, and the time 
required to put it in this position.  These assumptions are plausible 
but will be considered further when the methods of gaining access to and 
of moving molecules during the repair process are considered.

This analysis accounts for the bulk of the molecules - it seems unlikely 
that other molecular species will add significant additional repair 
time.

Based on these assumptions, we find that we require 100 seconds x 1.2 x 
10^21 protein molecules + 1 second times 2 x 10^23 lipids, or 3.2 x 
10^23 repair-machine-seconds.  This number is not as fundamental as the 
number of molecules in the brain.  It is based on the (probably 
conservative) assumption that repair of 50,000 amu requires 100 seconds.  
Faster repair would imply repair could be done with fewer repair 
machines, or in less time.
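The total follows directly from the per-molecule estimates above; a 
short check of the arithmetic (100 seconds per protein and 1 second per 
lipid are the assumed values, not measurements):

```python
protein_molecules = 1.2e21
lipid_molecules = 2.0e23

t_protein = 100.0  # seconds per protein (biological synthesis speed x10 margin)
t_lipid = 1.0      # seconds per lipid (scaled by mass: 500 vs 50,000 amu)

machine_seconds = (protein_molecules * t_protein
                   + lipid_molecules * t_lipid)  # ~3.2 x 10^23
```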

     How Many Repair Machines

If we now fix the total time required for repair, we can determine the 
number of repair devices that must function in parallel.  We shall 
rather arbitrarily adopt 10^8 seconds, which is very close to three 
years, as the total time in which we wish to complete repairs.

If the total repair time is 10^8 seconds, and we require 3.2 x 10^23 
repair-machine-seconds, then we require 3.2 x 10^15 repair machines for 
complete repair of the brain.   This corresponds to 3.2 x 10^15 / (6.02 
x 10^23) or 5.3 x 10^-9 moles, or 5.3 nanomoles of repair machines.  If 
each repair device weighs 10^10 to 10^11 amu, then the total weight of 
all the repair devices is 53 to 530 grams: a few ounces to just over a 
pound.

Thus, the weight of repair devices required to repair each and every 
molecule in the brain, assuming the repair devices operate no faster 
than current biological methods, is about 4% to 40% of the total mass of 
the brain.

By way of comparison, there are about 10^14 cells[44, page 3] in the 
human body and each cell has about 10^7 ribosomes[14, page 652] giving 
10^21 ribosomes.  Thus, there are five to six orders of magnitude more 
ribosomes in the human body than the number of repair machines we 
estimate are required to repair the human brain.
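These figures can be reproduced directly (the ribosome comparison uses 
the cited counts of roughly 10^14 cells and 10^7 ribosomes per cell):

```python
AVOGADRO = 6.02e23
machine_seconds = 3.2e23   # total repair-machine-seconds, from above
repair_time = 1e8          # seconds: very close to three years

devices = machine_seconds / repair_time       # 3.2 x 10^15 repair machines
nanomoles = devices / AVOGADRO * 1e9          # ~5.3 nanomoles of devices

ribosomes = 1e14 * 1e7     # cells in the body x ribosomes per cell = 10^21
ratio = ribosomes / devices  # ~3 x 10^5: far more ribosomes than devices
```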

It seems unlikely that either more or larger repair devices are 
inherently required.  However, it is comforting to know that errors in 
these estimates of even several orders of magnitude can be easily 
tolerated.  A requirement for 530 kilograms of repair devices (1,000 to 
10,000 times more than we calculate is needed) would have little 
practical impact on feasibility.  Although repair scenarios that involve 
deployment of the repair devices within the volume of the brain could 
not be used if we required 530 kilograms of repair devices, a number of 
other repair scenarios would still work - one such approach is discussed 
in this paper.  Given that nanotechnology is feasible, manufacturing 
costs for repair devices will be small.  The cost of even 530 kilograms 
of repair devices should eventually be significantly less than a few 
hundred dollars.  The feasibility of repair down to the molecular level 
is insensitive to even large errors in the projections given here.


We now turn to the physical deployment of these repair devices.  That 
is, although the raw number of repair devices is sufficient, we must 
devise an orderly method of deploying these repair devices so they can 
carry out the needed repairs.

     Other Proposals: On-board Repair

We shall broadly divide repair scenarios into two classes:  on-board and 
off-board.  In the on-board scenarios, the repair devices are deployed 
within the volume of the brain.  Existing structures are disassembled in 
place, their component molecules examined and repaired, and rebuilt on 
the spot.  (We here class as "on-board" those scenarios in which the 
repair devices operate within the physical volume of the brain, even 
though there might be substantial off-board support.  That is, there 
might be a very large computer outside the tissue directing the repair 
process, but we would still refer to the overall repair approach as "on-
board").  The on-board repair scenario has been considered in some 
detail by Drexler[18].  We will give a brief outline of the on-board 
repair scenario here, but will not consider it in any depth.  For 
various reasons, it is quite plausible that on-board repair scenarios 
will be developed before off-board repair scenarios.

The first advantage of on-board repair is an easier evolutionary path 
from partial repair systems deployed in living human beings to the total 
repair systems required for repair of the more extensive damage found in 
the person who has been cryonically suspended.  That is, a simple repair 
device for finding and removing fatty deposits blocking the circulatory 
system could be developed and deployed in living humans[2], and need not 
deal with all the problems involved in total repair.  A more complex 
device, developed as an incremental improvement, might then repair more 
complex damage (perhaps identifying and killing cancer cells) again 
within a living human.  Once developed, there will be continued pressure 
for evolutionary improvements in on-board repair capabilities which 
should ultimately lead to repair of virtually arbitrary damage.  This 
evolutionary path should eventually produce a device capable of 
repairing frozen tissue.

It is interesting to note that "At the end of this month [August 1990], 
MITI's Agency of Industrial Science and Technology (AIST) will submit a 
budget request for ¥30 million ($200,000) to launch a 'microrobot' 
project next year, with the aim of developing tiny robots for the 
internal medical treatment and repair of human beings.  ... MITI is 
planning to pour ¥25,000 million ($170 million) into the microrobot 
project over the next ten years..."[86].  Iwao Fujimasa said their 
objective is a robot less than .04 inches in size that will be able to 
travel through veins and inside organs[17, 20].  While substantially 
larger than the proposals considered here, the direction of future 
evolutionary improvements should be clear.

A second advantage of on-board repair is emotional.  In on-board repair, 
the original structure (you) is left intact at the macroscopic and even 
light microscopic level.  The disassembly and reassembly of the 
component molecules is done at a level smaller than can be seen, and 
might therefore prove less troubling than other forms of repair in which 
the disassembly and reassembly processes are more visible.  Ultimately, 
though, correct restoration of the structure is the overriding concern.

A third advantage of on-board repair is the ability to leave functional 
structures intact.  That is, in on-board repair we can focus on those 
structures that are damaged, while leaving working structures alone.  If 
minor damage has occurred, then an on-board repair system need make only 
minor repairs.

The major drawback of on-board repair is the increased complexity of the 
system.  As discussed earlier, this is only a drawback when the design 
tools and the resources available for the design are limited.  We can 
reasonably presume that future design tools and future resources will 
greatly exceed present efforts.  Developments in computer aided design 
of complex systems will put the design of remarkably complex systems 
within easy grasp.

In on-board repair, we might first logically partition the volume of the 
brain into a matrix of cubes, and then deploy each repair device in its 
own cube.  Repair devices would first get as close as possible to their 
assigned cube by moving through the circulatory system (we presume it 
would be cleared out as a first step) and would then disassemble the 
tissue between them and their destination.  Once in position, each 
repair device would analyze the tissue in its assigned volume and perform 
any repairs required.
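For a rough sense of scale (this extrapolation is mine, not a figure 
given in the text): dividing the brain's roughly 1350 cubic centimeters 
among the 3.2 x 10^15 repair devices estimated earlier gives each device 
a cube well under a micron on a side.

```python
brain_volume_cm3 = 1350.0
devices = 3.2e15  # repair machines, from the earlier estimate

cube_volume = brain_volume_cm3 / devices   # ~4.2 x 10^-13 cm^3 per device
cube_side_cm = cube_volume ** (1.0 / 3.0)  # ~7.5 x 10^-5 cm
cube_side_nm = cube_side_cm * 1e7          # ~750 nanometers per side
```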

     The Current Proposal: Off-Board Repair

The second class of repair scenarios, the off-board scenarios, allows 
the total volume of repair devices to greatly exceed the volume of the 
human brain.

The primary advantage of off-board repair is conceptual simplicity.  It 
employs simple brute force to ensure that a solution is feasible and to 
avoid complex design issues.  As discussed earlier, these are virtues in 
thinking about the problem today but are unlikely to carry much weight 
in the future when an actual system is being designed.

The other advantages of this approach are fairly obvious.  Lingering 
concerns about volume and heat dissipation can be eliminated.  If a ton 
of repair devices should prove necessary, then a ton can be provided.  
Concerns about design complexity can be greatly reduced.  Off-board 
repair scenarios do not require that the repair devices be mobile - 
simplifying communications and power distribution, and eliminating the 
need for locomotor capabilities and navigational abilities.  The only 
previous paper on off-board repair scenarios was by Merkle[101].

Off-board repair scenarios can be naturally divided into three phases.  
In the first phase, we must analyze the structure to determine its 
state.  The primary purpose of this phase is simply to gather 
information about the structure, although in the process the disassembly 
of the structure into its component molecules will also take place.  
Various methods of gaining access to and analyzing the overall structure 
are feasible - in this paper we shall primarily consider one approach.

We shall presume that the analysis phase takes place while the tissue is 
still frozen.  While the exact temperature is left open, it seems 
preferable to perform analysis prior to warming.  The thawing process 
itself causes damage and, once thawed, continued deterioration will 
proceed unchecked by the mechanisms present in healthy tissue.  This 
cannot be tolerated during a repair time of several years.  Either 
faster analysis or some means of blocking deterioration would have to be 
used if analysis were to take place after warming.  We will not explore 
these possibilities here (although this is worthwhile).  The temperature 
at which the other phases take place is left open.

The second phase of off-board repair is determination of the healthy 
state.  In this phase, the structural information derived from the 
analysis phase is used to determine what the healthy state of the tissue 
had been prior to suspension and any preceding illness.  This phase 
involves only computation based on the information provided by the 
analysis phase.

The third phase is repair.  In this phase, we must restore the structure 
in accordance with the blue-print provided by the second phase, the 
determination of the healthy state.

     Intermediate States During Off-Board Repair

Repair methods in general start with frozen tissue, and end with healthy 
tissue.  The nature of the intermediate states characterizes the 
different repair approaches.  In off-board repair the tissue undergoing 
repair must pass through three highly characteristic states, described 
in the following three paragraphs.

The first state is the starting state, prior to any repair efforts.  The 
tissue is frozen (unrepaired).

In the second state, immediately following the analysis phase, the 
tissue has been disassembled into its individual molecules.  A detailed 
structural data base has been built which provides a  description of the 
location, orientation, and type of each molecule, as discussed earlier.  
For those who are concerned that their identity or "self" is dependent 
in some fundamental way on the specific atoms which compose their 
molecules, the original molecules can be retained in a molecular "filing 
cabinet."  While keeping physical track of the original molecules is 
more difficult technically, it is feasible and does not alter off-board 
repair in any fundamental fashion.

In the third state, the tissue is restored and fully functional.

By characterizing the intermediate state which must be achieved during 
the repair process, we reduce the problem from "Start with frozen tissue 
and generate healthy tissue" to "Start with frozen tissue and generate a 
structural data base and a molecular filing cabinet.  Take the 
structural data base and the molecular filing cabinet and generate 
healthy tissue."  It is characteristic of off-board repair that we 
disassemble the molecular structure into its component pieces prior to 
attempting repair.
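The intermediate state can be pictured as a simple data structure: one 
record per molecule giving its type, position, and orientation, plus an 
index into the molecular "filing cabinet."  The sketch below is 
bookkeeping only; all names and field choices are illustrative 
assumptions, not details from the text:

```python
from dataclasses import dataclass

@dataclass
class MoleculeRecord:
    molecule_type: str   # e.g. "tubulin" or "phosphatidylcholine"
    position: tuple      # (x, y, z) coordinates in the frozen tissue
    orientation: tuple   # e.g. Euler angles describing how it sits
    cabinet_slot: int    # where the original molecule is stored

# The analysis phase fills a database of such records; the repair
# phase reads it back and returns each molecule to its place.
database = [
    MoleculeRecord("example-protein", (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0),
]
```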

As an example, suppose we wish to repair a car.  Rather than try and 
diagnose exactly what's wrong, we decide to take the car apart into its 
component pieces.  Once the pieces are spread out in front of us, we can 
easily clean each piece, and then reassemble the car.  Of course, we'll 
have to keep track of where all the pieces go so we can reassemble the 
structure, but in exchange for this bookkeeping task we gain a 
conceptually simple method of ensuring that we actually can get access 
to everything and repair it.  While this is a rather extreme method of 
repairing a broken carburetor, it certainly is a good argument that we 
should be able to repair even rather badly damaged cars.  So, too, with 
off-board repair.  While it might be an extreme method of fixing any 
particular form of damage, it provides a good argument that damage can 
be repaired under a wide range of circumstances.

          Off-Board Repair is the Best that can be Achieved

Regardless of the initial level of damage, regardless of the functional 
integrity or lack thereof of any or all of the frozen structure, 
regardless of whether easier and less exhaustive techniques might or 
might not work, we can take any frozen structure and convert it into the 
canonical state described above.  Further, this is the best that we can 
do.  Knowing the type, location and orientation of every molecule in the 
frozen structure under repair and retaining the actual physical 
molecules (thus avoiding any philosophical objections that replacing the 
original molecules might somehow diminish or negate the individuality of 
the person undergoing repair) is the best that we can hope to achieve.  
We have reached some sort of limit with this approach, a limit that will 
make repair feasible under circumstances which would astonish most 
people today.

One particular approach to off-board repair is divide-and-conquer.  This 
method is one of the technically simplest approaches.  We discuss this 
method in the following section.


     Divide-and-Conquer

Divide-and-conquer is a general purpose problem-solving method 
frequently used in computer science and elsewhere.  In this method, if a 
problem proves too difficult to solve it is first divided into sub-
problems, each of which is solved in turn.  Should the sub-problems 
prove too difficult to solve, they are in turn divided into sub-sub-
problems.  This process is continued until the original problem is 
divided into pieces that are small enough to be solved by direct 
methods.

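The recursion can be sketched generically.  This is the standard 
computer-science formulation; it models only the logical structure of 
the method, not the physical fracturing described below:

```python
def divide_and_conquer(piece, simple_enough, solve_directly, split):
    """Recursively halve a problem until each piece is directly solvable."""
    if simple_enough(piece):
        return [solve_directly(piece)]
    left, right = split(piece)
    return (divide_and_conquer(left, simple_enough, solve_directly, split)
            + divide_and_conquer(right, simple_enough, solve_directly, split))

# Toy usage: "analyze" a list by splitting it until single elements remain.
result = divide_and_conquer(
    [1, 2, 3, 4],
    simple_enough=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0],
    split=lambda p: (p[:len(p) // 2], p[len(p) // 2:]),
)
```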
If we apply divide-and-conquer to the analysis of a physical object - 
such as the brain - then we must be able to physically divide the object 
of analysis into two pieces and recursively apply the same method to the 
two pieces.   This means that we must be able to divide a piece of 
frozen tissue, whether it be the entire brain or some smaller part, into 
roughly equal halves.  Given that tissue at liquid nitrogen temperatures 
is already prone to fracturing, it should require only modest effort to 
deliberately induce a fracture that would divide such a piece into two 
roughly equal parts.  Fractures made at low temperatures (when the 
material is below the glass transition temperature) are extremely clean, 
and result in little or no loss of structural information.   Indeed, 
freeze fracture techniques are used for the study of synaptic 
structures.  Hayat [40, page 398] says "Membranes split during freeze-
fracturing along their central hydrophobic plane, exposing 
intramembranous surfaces.  ...  The fracture plane often follows the 
contours of membranes and leaves bumps or depressions where it passes 
around vesicles and other cell organelles.  ... The fracturing process 
provides more accurate insight into the molecular architecture of 
membranes than any other ultrastructural method."  It seems unlikely 
that the fracture itself will result in any significant loss of 
structural information.

The freshly exposed faces can now be analyzed by various surface 
analysis techniques.  A review article in Science, "The Children of the 
STM,"  supports the idea that such surface analysis techniques can 
recover remarkably detailed information.  For example, optical 
absorption microscopy "...generates an absorption spectrum of the 
surface with a resolution of 1 nanometer [a few atomic diameters]."  
Science quotes Kumar Wickramasinghe of IBM's T. J. Watson Research 
Center as saying: "We should be able to record the spectrum of a single 
molecule" on a surface. Williams and Wickramasinghe said [51] "The 
ability to measure variations in chemical potential also allows the 
possibility of selectively identifying subunits of biological 
macromolecules either through a direct measurement of their chemical-
potential gradients or by decorating them with different metals.  This 
suggests a potentially simple method for sequencing DNA."   Several other 
techniques are discussed in the Science article.  While current devices 
are large, the fundamental physical principles on which they rely do not 
require large size.  Many of the devices depend primarily on the 
interaction between a single atom at the tip of the STM probe and the 
atoms on the surface of the specimen under analysis.  Clearly, 
substantial reductions in size in such devices are feasible[ft. 18].

High resolution optical techniques can also be employed.  Near field 
microscopy, employing light with a wavelength of hundreds of nanometers, 
has achieved a resolution of 12 nanometers (much smaller than a 
wavelength of light).  To quote the abstract of a recent review article 
on the subject:  "The near-field optical interaction between a sharp 
probe and a sample of interest can be exploited to image, 
spectroscopically probe, or modify surfaces at a resolution (down to ~12 
nm) inaccessible by traditional far-field techniques.  Many of the 
attractive features of conventional optics are retained, including 
noninvasiveness, reliability, and low cost.  In addition, most optical 
contrast mechanisms can be extended to the near-field regime, resulting 
in a technique of considerable versatility.  This versatility is 
demonstrated by several examples, such as the imaging of nanometric-
scale features in mammalian tissue sections and the creation of 
ultrasmall, magneto-optic domains having implications for high-density 
data storage.  Although the technique may find uses in many diverse 
fields, two of the most exciting possibilities are localized optical 
spectroscopy of semiconductors and the fluorescence imaging of living 
cells."[111].  Another article said: "Our signals are currently of such 
magnitude that almost any application originally conceived for far-field 
optics can now be extended to the near-field regime, including:  
dynamical studies at video rates and beyond; low noise, high resolution 
spectroscopy (also aided by the negligible auto-fluorescence of the 
probe); minute differential absorption measurements; magnetooptics; and 
superresolution lithography."[100].

     How Small are the Pieces?

The division into halves continues until the pieces are small enough to 
allow direct analysis by repair devices.  If we presume that division 
continues until each repair device is assigned its own piece to repair, 
then there will be 3.2 x 10^15 pieces, one for each repair device.  If 
the 1350 cubic centimeter volume of the brain is divided into this many 
cubes, each such cube would be about .75 microns (750 nanometers) on a 
side.  Each cube could then be directly analyzed (disassembled into its 
component molecules) by a repair device during our three-year repair 
period.

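As a quick check of this arithmetic, in Python (all figures are those 
used above):

```python
# 1350 cm^3 of brain divided among 3.2e15 repair devices, one piece each.
brain_volume_nm3 = 1350 * (1e7) ** 3   # 1 cm = 1e7 nm, so cm^3 -> nm^3
pieces = 3.2e15
side_nm = (brain_volume_nm3 / pieces) ** (1.0 / 3.0)
# side_nm comes out at almost exactly 750 nanometers (.75 microns)
```
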
One might view these cubes as the pieces of a three-dimensional jig-saw 
puzzle, the only difference being that we have cheated and carefully 
recorded the position of each piece.  Just as the picture on a jig-saw 
puzzle is clearly visible despite the fractures between the pieces, so 
too the three-dimensional "picture" of the brain is clearly visible 
despite its division into pieces[ft. 19].

     Moving Pieces

There are a great many possible methods of handling the mechanical 
problems involved in dividing and moving the pieces.  It seems unlikely 
that mechanical movement of the pieces will prove an insurmountable 
impediment, and therefore we do not consider it in detail.  However, for 
the sake of concreteness, we outline one possibility.  Human arms are 
about 1 meter in length, and can easily handle objects from 1 to 10 
centimeters in size (.01 to .1 times the length of the arm).  It should 
be feasible, therefore, to construct a series of progressively shorter 
arms which handle pieces of progressively smaller size.  If each set of 
arms were ten times shorter than the preceding set, then we would have 
devices with arms of:  1 meter, 1 decimeter, 1 centimeter, 1 millimeter, 
100 microns, 10 microns, 1 micron, and finally .1 microns or 100 
nanometers.  (Note that an assembler has arms roughly 100 nanometers 
long).  Thus, we would need to design 8 different sizes of manipulators.  
At each succeeding size the manipulators would be more numerous, and so 
would be able to deal with the many more pieces into which the original 
object was divided.  Transport and mechanical manipulation of an object 
would be done by arms of the appropriate size.  As objects were divided 
into smaller pieces that could no longer be handled by arms of a 
particular size, they would be handed to arms of a smaller size.

If it requires about three years to analyze each piece, then the time 
required both to divide the brain into pieces and to move each piece to 
an immobile repair device can reasonably be neglected.   It seems 
unlikely that moving the pieces will take a significant fraction of 
three years.

     Memory Requirements

The information storage requirements for a structural data-base that 
holds the detailed description and location of each major molecule in 
the brain can be met by projected storage methods.  DNA has an 
information storage density of about 10^21 bits/cubic centimeter.  
Conceptually similar but somewhat higher density molecular "tape" 
systems that store 10^22 bits/cubic centimeter [1] should be quite 
feasible.  If we assume that every lipid molecule is "significant" but 
that water molecules, simple ions and the like are not, then the number 
of significant molecules is roughly the same as the number of lipid 
molecules[ft. 20] (the number of protein molecules is more than two 
orders of magnitude smaller, so we will neglect it in this estimate).  
The digital description of these 2 x 10^23 significant molecules 
requires 10^25 bits (assuming that 50 bits are required to encode the 
location and description of each molecule).  This is about 1,000 cubic 
centimeters (1 liter, roughly a quart) of "tape" storage.  If a storage 
system of such capacity strikes the reader as infeasible, consider that 
a human being has about 10^14 cells[44, page 3] and that each cell 
stores 10^10 bits in its DNA[14].  Thus, every human that you see is a 
device which (among other things) has a raw storage capacity of 10^24 
bits - and human beings are unlikely to be optimal information storage 
devices.

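Restating the storage estimate as a calculation (all figures are from 
the text):

```python
significant_molecules = 2e23    # roughly the number of lipid molecules
bits_per_molecule = 50          # location plus description of each one
total_bits = significant_molecules * bits_per_molecule    # 10^25 bits
tape_bits_per_cm3 = 1e22        # projected molecular "tape" density [1]
storage_cm3 = total_bits / tape_bits_per_cm3  # ~1000 cm^3, i.e. 1 liter
```
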
A simple method of reducing storage requirements by several orders of 
magnitude would be to analyze and repair only a small amount of tissue 
at a time.   This would eliminate the need to store the entire 10^25 bit 
description at one time.  A smaller memory could hold the description of 
the tissue actually under repair, and this smaller memory could then be 
cleared and re-used during repair of the next section of tissue.

     Computational Requirements

The computational power required to analyze a data base with 10^25 bits 
is well within known theoretical limits[9,25,32].  It has been seriously 
proposed that it might be possible to increase the total computational 
power achievable within the universe beyond any fixed bound in the 
distant future[52, page 658].  More conservative lower bounds to nearer-
term future computational capabilities can be derived from the 
reversible rod-logic molecular model of computation, which dissipates 
about 10^-23 joules per gate operation when operating at 100 picoseconds 
at room temperature[85].  A wide range of other possibilities exist.  
Likharev proposed a computational element based on Josephson junctions 
which operates at 4 K and in which energy dissipation per switching 
operation is 10^-24 joules with a switching time of 10^-9 seconds[33, 
43].  Continued evolutionary reductions in the size and energy 
dissipation of properly designed NMOS[113] and CMOS[112] circuits should 
eventually produce logic elements that are both very small (though 
significantly larger than Drexler's proposals) and which dissipate 
extraordinarily small amounts of energy per logic operation.  
Extrapolation of current trends suggests that energy dissipations in the 
10^-23 joule range will be achieved before 2030[31, fig. 1].  There is no 
presently known reason to expect the trend to stop or even slow down at 
that time[9,32].

Energy costs appear to be the limiting factor in rod logic (rather than 
the number of gates, or the speed of operation of the gates).  Today, 
electric power costs about 10 cents per kilowatt hour.  Future costs of 
power will almost certainly be much lower.  Molecular manufacturing 
should eventually sharply reduce the cost of solar cells and increase 
their efficiency close to the theoretical limits.  With a manufacturing 
cost of under 10 cents per kilogram[85] the cost of a one square meter 
solar cell will be less than a penny.  As a consequence the cost of 
solar power will be dominated by other costs, such as the cost of the 
land on which the solar cell is placed.  While solar cells can be placed 
on the roofs of existing structures or in otherwise unused areas, we 
will simply use existing real estate prices to estimate costs.  Low cost 
land in the desert southwestern United States can be purchased for less 
than $1,000 per acre.  (This price corresponds to about 25 cents per 
square meter, significantly larger than the projected future 
manufacturing cost of a one square meter solar cell).  Land elsewhere in 
the world (arid regions of the Australian outback, for example) is much 
cheaper.  For simplicity and conservatism, though, we'll simply adopt 
the $1,000 per acre price for the following calculations.  Renting an 
acre of land for a year at an annual price of 10% of the purchase price 
will cost $100.  Incident sunlight above the atmosphere provides a 
maximum of 1,353 watts per square meter, or 5.5 x 10^6 watts per acre.  
Making allowances for inefficiencies in the solar cells, atmospheric 
losses, and losses caused by the angle of incidence of the incoming 
light reduces the actual average power production by perhaps a factor of 
15 to about 3.5 x 10^5 watts.  Over a year, this produces 1.1 x 10^13 
joules or 3.1 x 10^6 kilowatt hours.  The land cost $100, so the cost 
per joule is 0.9 nanocents and the cost per kilowatt hour is 3.3 
millicents.  Solar power, once we can make the solar cells cheaply 
enough, will be several thousand times cheaper than electric power is 
today.  We'll be able to buy over 10^15 joules for under $10,000.

While the energy dissipation per logic operation estimated by 
Drexler[85] is about 10^-23 joules, we'll content ourselves with the 
higher estimate of 10^-22 joules per logic operation.  Our 10^15 joules 
will then power 10^37 gate operations: 10^12 gate operations for each 
bit in the structural data base or 5 x 10^13 gate operations for each of 
the 2 x 10^23 lipid molecules present in the brain.
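
Restating the energy budget as a calculation:

```python
energy_budget_j = 1e15        # what $10,000 of future solar power buys
joules_per_gate_op = 1e-22    # conservative; Drexler's figure is 1e-23
total_gate_ops = energy_budget_j / joules_per_gate_op   # 10^37 operations
ops_per_database_bit = total_gate_ops / 1e25            # 10^12 per bit
ops_per_lipid_molecule = total_gate_ops / 2e23          # 5e13 per molecule
```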

It should be emphasized that in off-board repair warming of the tissue 
is not an issue because the overwhelming bulk of the calculations and 
hence almost all of the energy dissipation takes place outside the 
tissue.   Much of the computation takes place when the original 
structure has been entirely disassembled into its component molecules.

     How Much Is Enough?

Is this enough computational power?  We can get a rough idea of how much 
computer power might be required if we draw an analogy from image 
recognition.  The human retina performs about 100 "operations" per 
pixel, and the human brain is perhaps 1,000 to 10,000 times larger than 
the retina.  This implies that the human image recognition system can 
recognize an object after devoting some 10^5 to 10^6 "operations" per 
pixel.  (This number is also in keeping with informal estimates made by 
individuals expert in computer image analysis).  Allowing for the fact 
that such "retinal operations" are probably more complex than a single 
"gate operation" by a factor of 1000 to 10,000, we arrive at 10^8 to 
10^10 gate operations per pixel - which is well below our estimate of 
10^12 operations per bit or 5 x 10^13 operations per molecule.
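
The chain of estimates multiplies out as follows:

```python
retina_ops_per_pixel = 100
brain_vs_retina = (1000, 10000)          # brain size relative to retina
gate_ops_per_retina_op = (1000, 10000)   # complexity of a "retinal op"

low = retina_ops_per_pixel * brain_vs_retina[0] * gate_ops_per_retina_op[0]
high = retina_ops_per_pixel * brain_vs_retina[1] * gate_ops_per_retina_op[1]
# low is 10^8 and high is 10^10 gate operations per pixel, comfortably
# below the 10^12 operations available per bit of the structural database
```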

To give a feeling for the computational power this represents, it is 
useful to compare it to estimates of the raw computational power of the 
human brain.   The human brain has been variously estimated as being 
able to do 10^13[50], 10^15 or 10^16[114] operations a second (where 
"operation" has been variously defined but represents some relatively 
simple and basic action)[ft. 21].  The 10^37 total logic operations will 
support 10^29 logic operations per second for three years, which is the 
raw computational power of something like 10^13 human beings (even when 
we use the high end of the range for the computational power of the 
human brain).  This is 10 trillion human beings, or some 2,000 times 
more people than currently exist on the earth today.  By present 
standards, this is a large amount of computational power.  Viewed 
another way, if we were to divide the human brain into tiny cubes that 
were about 5 microns on a side (less than the volume of a typical cell), 
each such cube could receive the full and undivided attention of a 
dedicated human analyst for a full three years.
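
The comparison works out as follows (three years is taken as roughly 
9.5 x 10^7 seconds):

```python
total_gate_ops = 1e37
three_years_s = 3 * 3.15e7                       # ~9.5e7 seconds
ops_per_second = total_gate_ops / three_years_s  # ~1e29 ops per second
brain_ops_per_second = 1e16                      # high end of the estimates
human_equivalents = ops_per_second / brain_ops_per_second  # ~1e13 brains
```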

The next paragraph analyzes memory costs, and can be skipped without 
loss of continuity.

This analysis neglects the memory required to store the complete state 
of these computations.  Because this estimate of computational abilities 
and requirements depends on the capabilities of the human brain, we 
might require an amount of memory roughly similar to the amount of 
memory required by the human brain as it computes.  This might require 
about 10^16 bits (10 bits per synapse) to store the "state" of the 
computation.  (We assume that an exact representation of each synapse 
will not be necessary in providing capabilities that are similar to 
those of the human brain.  At worst, the behavior of small groups of 
cells could be analyzed and implemented by the most efficient method, 
e.g., a "center surround" operation in the retina could be implemented 
as efficiently as possible, and would not require detailed modeling of 
each neuron and synapse.  In point of fact, it is likely that algorithms 
that are significantly different from the algorithms employed in the 
human brain will prove to be the most efficient for this rather 
specialized type of analysis, and so our use of estimates derived from 
low-level parts-counts from the human brain is likely to be very 
conservative).  For 10^13 programs each equivalent in analytical skills 
to a single human being, this would require 10^29 bits.  At 100 cubic 
nanometers per bit, this gives 10,000 cubic meters.  Using the cost 
estimates provided by Drexler[85] this would be an uncomfortable 
$1,000,000.  We can, however, easily reduce this cost by partitioning 
the computation to reduce memory requirements.   Instead of having 10^13 
programs each able to "think" at about the same speed as a human being, 
we could have 10^10 programs each able to "think" at a speed 1,000 times 
faster than a human being.  Instead of having 10 trillion dedicated 
human analysts working for 3 years each, we would have 10 billion 
dedicated human analysts working for 3,000 virtual years each.  The 
project would still be completed in 3 calendar years, for each computer 
"analyst" would be a computer program running 1,000 times faster than an 
equally skilled human analyst.  Instead of analyzing the entire brain at 
once, we would instead logically divide the brain into 1,000 pieces each 
of about 1.4 cubic centimeters in size, and analyze each such piece 
fully before moving on to the next piece.

This reduces our memory requirements by a factor of 1,000 and the cost 
of that memory to a manageable $1,000.
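
Restating the memory arithmetic (the $100 per cubic meter figure is 
implied by the $1,000,000 quoted above for 10,000 cubic meters):

```python
analysts = 1e13
bits_per_analyst = 1e16        # ~10 bits per synapse
nm3_per_bit = 100
total_m3 = analysts * bits_per_analyst * nm3_per_bit / 1e27  # nm^3 -> m^3
cost_full = total_m3 * 100.0   # 10,000 m^3 at ~$100/m^3: ~$1,000,000

partition_factor = 1000        # 10^10 analysts, each running 1000x faster
cost_partitioned = cost_full / partition_factor              # ~$1,000
```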

It should be emphasized that the comparisons with human capabilities are 
used only to illustrate the immense capabilities of 10^37 logic 
operations.  It should not be assumed that the software that will 
actually be used will have any resemblance to the behavior of the human 
brain.

     More Computer Power

In the following paragraphs, we argue that even more computational power 
will in fact be available, and so our margins for error are much larger.

Energy loss in rod logic, in Likharev's parametric quantron, in properly 
designed NMOS and CMOS circuits, and in many other proposals for 
computational devices is related to speed of operation.  By slowing down 
the operating speed from 100 picoseconds to 100 nanoseconds or even 100 
microseconds we should achieve corresponding reductions in energy 
dissipation per gate operation.  This will allow substantial increases 
in computational power for a fixed amount of energy (10^15 joules).  We 
can both decrease the energy dissipated per gate operation (by operating 
at a slower speed) and increase the total number of gate operations (by 
using more gates).   Because the gates are very small to start with, 
increasing their number by a factor of as much as 10^10 (to 
approximately 10^27 gates) would still result in a total volume of 100 
cubic meters (recall that each gate plus overhead is about 100 cubic 
nanometers).  This is a cube less than 5 meters on a side.  Given that 
manufacturing costs will eventually reflect primarily material and 
energy costs, such a volume of slowly operating gates should be 
economical and would deliver substantially more computational power per 
joule.

We will not pursue this approach here for two main reasons.  First, 
published analyses use the higher 100 picosecond speed of operation and 
10^-22 joules of energy dissipation[85].  Second, operating at 10^-22 
joules at room temperature implies that most logic operations must be 
reversible and that less than one logic operation in 30 can be 
irreversible.  Irreversible logic operations (which erase information) 
must inherently dissipate at least kT x ln(2) for fundamental 
thermodynamic reasons.  The average thermal energy of a single atom or 
molecule at a temperature T (measured in degrees K) is approximately kT 
where k is Boltzmann's constant.  At room temperature, kT is about 4 x 
10^-21 joules.  Thus, each irreversible operation will dissipate almost 
3 x 10^-21 joules.  The number of such operations must be limited if we 
are to achieve an average energy dissipation of 10^-22 joules per logic 
operation.

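The one-in-30 figure follows directly from these constants (kT is taken 
as 4 x 10^-21 joules, as in the text):

```python
import math

kT = 4e-21                       # joules, room temperature
erase_cost = kT * math.log(2)    # ~2.8e-21 J per irreversible bit erasure
average_budget = 1e-22           # target dissipation per logic operation
max_irreversible_fraction = average_budget / erase_cost
# about 0.036: at most roughly one operation in 30 may be irreversible
```
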
While it should be feasible to perform computations in which virtually 
all logic operations are reversible (and hence need not dissipate any 
fixed amount of energy per logic operation)[9,25,32,53], current 
computer architectures might require some modification before they could 
be adapted to this style of operation.  By contrast, it should be 
feasible to use current computer architectures while at the same time 
performing a major percentage (e.g., more than 99%) of their logic 
operations in a reversible fashion.

Various electronic proposals show that almost all of the existing 
combinatorial logic in present computers can be replaced with reversible 
logic with no change in the instruction set that is executed[112, 113].  
Further, while some instructions in current computers are irreversible 
and hence must dissipate at least kT x ln(2) joules for each bit of 
information erased, other instructions are reversible and need not 
dissipate any fixed amount of energy if implemented correctly.  
Optimizing compilers could then avoid using the irreversible machine 
instructions and favor the use of the reversible instructions.  Thus, 
without modifying the instruction set of the computer, we can make most 
logic operations in the computer reversible.

Further work on reversible computation can only lower the minimum energy 
expenditure per basic operation and increase the percentage of 
reversible logic operations.  A mechanical logic proposal by the 
author[105] eliminates most mechanisms of energy dissipation; it might 
be possible to reduce energy dissipation to an extraordinary and 
unexpected degree in molecular mechanical computers.  While it is at 
present unclear how far the trend towards lower energy dissipation per 
logic operation can go, it is clear that we have not yet reached a limit 
and that no particular limit is yet visible.

We can also expect further decreases in energy costs.   By placing solar 
cells in space the total incident sunlight per square meter can be 
greatly increased (particularly if the solar cell is located closer to 
the sun) while at the same time the total mass of the solar cell can be 
greatly decreased.  Most of the mass in earth-bound structures is 
required not for functional reasons but simply to insure structural 
integrity against the forces of gravity and the weather.  In space both 
these problems are virtually eliminated.  As a consequence a very thin 
solar cell of relatively modest mass can have a huge surface area and 
provide immense power at much lower costs than estimated here.

If we allow for the decreasing future cost of energy and the probability 
that future designs will have lower energy dissipation than 10^-22 
joules per logic operation, it seems likely that we will have a great 
deal more computational power than required.  Even ignoring these more 
than likely developments, we will have adequate computational power for 
repair of the brain down to the molecular level.

     Chemical Energy of the Brain

Another issue is the energy involved in the complete disassembly and 
reassembly of every molecule in the brain.  The total chemical energy 
stored in the proteins and lipids of the human brain is quite modest in 
comparison with 10^15 joules.  When lipids are burned, they release 
about 9 kilocalories per gram.  (Calorie-conscious dieters are actually 
counting "kilocalories" - so a "300 Calorie Diet Dinner" really has 
300,000 calories or 1,254,000 joules).  When protein is burned, it 
releases about 4 kilocalories per gram.  Given that there are 100 grams 
of protein and 175 grams of lipid in the brain, this means there is 
almost 2,000 kilocalories of chemical energy stored in the structure of 
the brain, or about 8 x 10^6 joules.  This is less than one part in 
10^8 of the 10^15 joules that one person can reasonably purchase in the 
future.  It seems unlikely that the construction of the 
human brain must inherently require substantially more than 10^7 joules 
and even more unlikely that it could require over 10^15 joules.  The 
major energy cost in repair down to the molecular level appears to be in 
the computations required to "think" about each major molecule in the 
brain.

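Restating the comparison as a calculation (4,184 joules per 
kilocalorie):

```python
protein_g, lipid_g = 100, 175
kcal = protein_g * 4 + lipid_g * 9   # 1975 kilocalories, "almost 2,000"
joules = kcal * 4184                 # ~8.3e6 J of structural chemical energy
margin = 1e15 / joules               # the energy budget exceeds it >10^8-fold
```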