X-Message-Number: 4405
Date: Mon, 15 May 1995 23:14:03 +0200 (MET DST)
From: Eugen Leitl <>
Subject: RandomRamblingsII cont'd

One final effort: the (strange) attractors in CA spaces.

In physics, the concept of _state_ or _phase space_ is used to
visualize the temporal _evolution_ of a system. The total state
of a system at time t is entirely represented by a single point
in n-space, e.g. 6N-dimensional (three position coordinates and
three momentum coordinates for each particle) for N
unconstrained particles. A _Hamiltonian_ (the rule in our
terminology) defines a vector field, a mapping on that space.
Put differently, the Hamiltonian is an _energy gradient_, a
pathway along which a system slides towards a (local) minimum. In our
version of finite binary integer CAs with quantized time, each
vector at t is mapped by the rule to the future vector at t+1.
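
To make that mapping concrete, here is a minimal sketch (rule 30
and the 7-cell vector are illustrative choices of mine, nothing
fixed by the text): one application of an elementary binary CA
rule, taking the state vector at t to the state vector at t+1.

```python
# One time step of an elementary binary CA: the rule maps the
# state vector at t to the state vector at t+1, with periodic
# boundaries. Rule 30 is an arbitrary example choice.

def ca_step(state, rule=30):
    n = len(state)
    nxt = []
    for i in range(n):
        # 3-cell neighbourhood index: left, centre, right (wrapping)
        idx = (state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]
        nxt.append((rule >> idx) & 1)  # look the new cell up in the rule table
    return tuple(nxt)

print(ca_step((0, 0, 0, 1, 0, 0, 0)))  # → (0, 0, 1, 1, 1, 0, 0)
```
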

Nothing new, so far. Now consider an initial _region_, a
contiguous block of vectors. Each element of the block at t gets
mapped by the rule to another set of elements at t+1. Obviously,
the _volume_, the total number of cells in a given region,
cannot grow: at most, it can shrink or remain constant. By the
Liouville theorem, a true Hamiltonian system's region volume
remains invariant (constant) during temporal evolution (note
that the CA rule is a superset of a Hamiltonian, as it _also_
allows volume reduction). This has profound implications: in a
majority of systems the elements of a contiguous region diverge
with each evolution step; losing their coherence, they get
spread over increasingly larger volumes. The extreme case of
this system behaviour is referred to as _ergodic_. The "gas"
phase type CAs are excellent specimens of this behaviour. As a
minimal disturbance in the starting vector produces a maximal
deviation within a few iterations, the gaseous CAs are thus an
excellent illustration of the "butterfly" effect.
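
The butterfly effect can be watched directly. The following
sketch (rule 30 on 16 cells, both assumptions of mine) flips a
single cell in the starting vector and tracks the Hamming
distance between the two evolutions:

```python
# Two state vectors differing in one cell are iterated under the
# chaotic rule 30; their Hamming distance shows the minimal
# disturbance spreading over the vector. Rule 30 and 16 cells
# are example choices.

def ca_step(state, rule=30):
    n = len(state)
    return tuple((rule >> ((state[(i - 1) % n] << 2)
                           | (state[i] << 1)
                           | state[(i + 1) % n])) & 1
                 for i in range(n))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

a = tuple(1 if i == 8 else 0 for i in range(16))
b = tuple(1 if i in (4, 8) else 0 for i in range(16))  # one extra cell flipped
print(hamming(a, b))                                   # → 1
for _ in range(8):
    a, b = ca_step(a), ca_step(b)
print(hamming(a, b))  # the deviation has spread across the vector
```
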

Another important theorem states that exactly one _trajectory_
passes through each point of the state space. The _period_ of
the trajectory, the number of transitions until the randomly
walking vector hits upon itself by chance, is _very_ long in
high-dimensional CAs (there are a great many directions to go).
This property, coupled with the lack of attractors/high path
entropy at equidistributed state-space visit probability, makes
this class of CAs particularly attractive for random number
generation.

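
The period claim can be checked on a toy system. This sketch
(rule 30 on 16 cells, states packed into an integer, all
parameters my illustrative picks) follows one trajectory until
the vector hits a state it has visited before:

```python
# Follow one trajectory until a state repeats: the steps before
# the first revisited state are the transient, the rest is the
# cycle (the _period_). Rule 30 on 16 cells is an example choice.

def ca_step(s, rule=30, n=16):
    nxt = 0
    for i in range(n):
        idx = ((((s >> ((i + 1) % n)) & 1) << 2)
               | (((s >> i) & 1) << 1)
               | ((s >> ((i - 1) % n)) & 1))
        nxt |= ((rule >> idx) & 1) << i
    return nxt

def trajectory(s):
    seen = {}
    t = 0
    while s not in seen:
        seen[s] = t
        s = ca_step(s)
        t += 1
    return t - seen[s], seen[s]      # (period, transient length)

period, transient = trajectory(0b1011001011100101)
print(period, transient)
```
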
A truly ergodic system performs a random walk through its state
space with the probability of visiting each point being
equidistributed. Additionally, the system's evolution velocity
(the fraction of transition length over time step) along its
trajectory, using the Euclidean distance metric (square root of
the sum of squares of the elements' differences), gets
increasingly evened out with increasing size of the space; the
mean translational velocity is roughly constant. The bigger
the system, the less detectable is the individual's contribution
to the whole picture, an obvious parallel to the "billiard
table" ideal gas picture. Though subsystem components may
experience vigorous Brownian motion, the whole system stands
rock firm.

One should remark, however, that for
Morse-modulated-potential-type ideal gas boxes there _are_
constraints, introduced by the shape of the potential
function. One can calculate some kind of crazy
Schroedinger-square, telling which points are likely to be
visited and which not.

A system can also be seen as a state-space ballet of interacting
subsystems of smaller dimensionality, choreographed by a
higher-order rule. In particular, binary subsystem
clustering may prove constructive in implementation/analysis.

The dull_1 (solid) CAs remain trapped in single points or
constrained to tiny volumes, ping-ponging to and fro. If we
start to melt the solid rule, thus drawing nearer and nearer to
the liquid phase edge, the traversed regions become larger and
larger, the paths more and more complex. A liquid system may
stay liquid for a long while or make an expedition into the
chaos region, either to stay there indefinitely (there is no way
to tell) or to return, eventually. It might also freeze out,
remaining trapped in a solid forever.

A typical liquid CA will also show special points (typically,
many), called _attractors_, around which the system revolves on
complex trajectories. In a finite system, several convergent
paths will congeal into a single one, which might eventually be
trapped in a single point, the attractor. (This would never
happen in a continuous system, of course.)

The entire volume belonging to converging trajectories is called
the _basin of attraction_, in analogy to basins of drainage in a
mountainous massif. A system, released within its basin of
attraction, will remain there forever, unless disturbed by an
outside force. Two basins of attraction meet at a ridge, called
a _separatrix_, a region with one degree of freedom fewer than
the number of variables in the system.
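
In a finite CA these basins can be mapped out exhaustively. A
small illustrative sketch (rule 110 on 8 cells, 256 states in
total, parameters chosen by me) labels each attractor by the
smallest state on its cycle and counts the basin sizes:

```python
from collections import Counter

N, RULE = 8, 110   # example parameters: 2**8 = 256 total states

def ca_step(s):
    nxt = 0
    for i in range(N):
        idx = ((((s >> ((i + 1) % N)) & 1) << 2)
               | (((s >> i) & 1) << 1)
               | ((s >> ((i - 1) % N)) & 1))
        nxt |= ((RULE >> idx) & 1) << i
    return nxt

def attractor_of(s):
    # iterate until the trajectory repeats, then label the
    # attractor (the cycle) by its smallest member state
    seen = set()
    while s not in seen:
        seen.add(s)
        s = ca_step(s)
    cycle, t = {s}, ca_step(s)
    while t != s:
        cycle.add(t)
        t = ca_step(t)
    return min(cycle)

basins = Counter(attractor_of(s) for s in range(2 ** N))
for label, size in basins.most_common(3):
    print(f"attractor {label:3d}: basin of {size} states")
```

Every state belongs to exactly one basin, so the basin sizes sum
to the full state-space volume.
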

Instead of being a point, an attractor might be a loop, a toroid
(or, of course, its higher-dimensional analogues) or any such
thing. Moreover, there is a class of _strange_ or _chaotic_
attractors. Their main property is that two very close systems
let loose within the basin remain on the attractor surface but
float forth _diverging from each other_. Tiny initial deviations
get amplified with time. For obvious reasons, real systems
featuring this strange-attractor (or ergodic) property are
virtually noncomputable, as they would require infinite
precision both at the measurement and the simulation stage. Both
are unattainable. The main distinction between ergodic systems
and those containing strange attractors is that a strange
attractor/its basin is tiny in relation to the total state-space
size.

By now you must think I am strangely attracted to those
attractors, but these beasts are actually useful. There is
_heavy_ experimental/theoretical evidence (both direct and
indirect) that the brain uses state spaces and generic mappings
(coordinate transforms) to represent and process information
(see Churchland for an excellent introduction). In particular,
the existence of attractors in the EEG has been ascertained.
Most, maybe all, parts of the machinery of the brain do
calculations utilizing trajectories and attractors in state
space, according to Churchland et al.

Actually, I believe any system (including us) which does any
kind of computation/life must be an edge-of-chaos one to be
efficient. Our computers, alas, reside quite firmly in the solid
region.

4.2 What Are They Good For?

A good question. Are CAs just another irrelevant theoretical
construct? I think not.

- CAs are great for applied cryptography, both for rnd-stream
  and public-key. Sadly, CAs are largely unknown among
  cryptographers as yet.

- CAs are great for efficient modeling (e.g. fluid dynamics),
  reducing the amount of necessary computation by many orders of
  magnitude. A modest fluid dynamic package will run on a
  PC in realtime. (I've seen some). Systems with purely
  local interactions (essentially all systems fall within
  this class) and (best) a noticeable number of constraints are
  prime candidates for CA modeling. Sadly again, most scientists
  haven't heard about CAs yet.

- neural nets (NNs) are special cases of CAs, which are
  special cases of automata networks (ANs). Knowledge transfer
  seems to be possible.

- CAs and ANs are great for self-organization and non-linearity
  science. Intuitive reasoning and/or a well-developed
  body of mathematical machinery is readily available.

- CAs can help us to understand embryonic morphogenesis.

- CAs are great toy systems for artificial life (AL).

- A family of CAs is computationally universal. They compare
  favourably (in time/space-complexity) to the Turing machine/von
  Neumann architectures. CAs might be computationally equivalent
  to Turing engines, but not in terms of time complexity. Their
  implementation in silicon is straightforward. Achievable
  computing power with the same transistor resources is several
  orders of magnitude higher than current architectures. Being
  redundant arrays of identical, extremely primitive elements,
  they are the major candidates for molecular electronics
  implementations. Achievable integration density and computation
  performance are of all architectures the highest theoretically
  attainable.

- CAs are interesting candidates for universe theories. They
  mimic a lot (reversibility of the time arrow, particles,
  light speed, anthropic principle, edge of chaos theories,
  information conservation, etc.) of the real-world properties.

4.3 A Recipe for a CA XOR-Rnd-Stream Cryptosystem.

- Choose a CA which does very good rands. Most CAs from the gas
  region will do nicely.
- Define all the boundary conditions. Code it.
- Choose a random space vector as initial key. Give the key to
  your partner via a safe channel.
- XOR your data stream with the CA output stream to obtain the
  cypher text. Your partner does the same to retrieve the
  plaintext. Of course, this also works the other way round.
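
The recipe fits in a few lines. This is a hedged toy sketch of
it (rule 30 with the centre cell tapped as output is my choice
of a gas-region rand source, and the key below is made up), not
a vetted cryptosystem:

```python
# A rule-30 CA as keystream generator, XORed with the data
# stream. The key is the initial CA state; sender and receiver
# run the identical CA, so XORing twice restores the plaintext.

def rule30_keystream(key_bits, nbytes):
    state = list(key_bits)           # the secret initial space vector
    n = len(state)
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            byte = (byte << 1) | state[n // 2]   # tap the centre cell
            state = [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
                     for i in range(n)]          # rule 30 update
        out.append(byte)
    return bytes(out)

def xor_crypt(data, key_bits):
    ks = rule30_keystream(key_bits, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1]
cipher = xor_crypt(b"attack at dawn", key)
print(xor_crypt(cipher, key))  # → b'attack at dawn'
```

Encryption and decryption are the same operation, which is
exactly the "works also the other way round" property above.
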

Why bother?

CAs form a _family_ of random generators, which are
well-behaved, produce excellent rands and run fast even on
von Neumann machines. Internet nodes will be able to crypt
TCP/IP packets individually via a different crypt for each
target node. Routine eavesdropping will become much more
difficult this way, as the cryptoattacker will have to know the
right crypt method for each packet. Since crypt is transparent
(hidden somewhere in the lower OSI levels), top-level
applications could (and should) use a different/better
cryptosystem, such as:

4.4 A Recipe for a CA Public Key Cryptosystem.

- Search CA (automaton network space, actually)  space (_not_
  rule space) for a cryptography-suitable automaton family. Choose
  one member. Implement it in C, give away code. Invite people to
  try cryptoattacks.

- Fill the rule table with very good rands. Do _not_ use compiler
  built-in rands. A hardware rnd generator or another CA may do.

- Construct the inverse rule.

- Decide which rule table is to be your public key. Publish it
  on the net. Keep the other in your safe.

All data sent to you, transformed n times (you must agree upon
how many) with your public key, can only be decoded by applying
your private key n times.

All data encoded by your private key can only be read with
your public key.
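
For illustration only, here are the mechanics of the recipe
(rule table, inverse rule, n-fold application) in miniature.
With a byte-wide table the inverse is trivially computable from
the public table, so this toy has no trapdoor security at all;
the point of the recipe is an automaton family whose inverse is
hard to find. All names and parameters below are my own:

```python
import random

# NB: the text warns against built-in rands; random.Random is
# used here only to make the demo reproducible.
rng = random.Random(42)
table = list(range(256))
rng.shuffle(table)                   # "public key": a random rule table
inverse = [0] * 256
for i, v in enumerate(table):
    inverse[v] = i                   # "private key": the inverse rule

def apply_rule(data, rule, n):
    # transform the data n times with the given rule table
    for _ in range(n):
        data = bytes(rule[b] for b in data)
    return data

msg = b"meet me at the lab"
cipher = apply_rule(msg, table, 5)       # encode with public key, n = 5
print(apply_rule(cipher, inverse, 5))    # → b'meet me at the lab'
```

Applying the inverse table the same agreed number of times
undoes the transform, which is the two-inverse-keys behaviour
described above.
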

You want to know whether the sender is the person they purport
to be? Choose a big pseudo-random number (best CA rnds, again),
encode it with your friend's public key and send the cypher to
the pretender. The pretender codes the number with their private
key, then sends it back to you. You decode it using your
friend's public key. Is the result the same random number you
chose initially? If yes: Pretender=Friend. Bingo! No?
Pretender=bad guy. Call the cops.
Why bother, again?

Robust trapdoor cryptosystems with two inverse keys are not
exactly frequent in cryptology. The only one I know of is based
on the factoring of large numbers, and is supposed to be
breakable with a finite effort (and might become even more
fragile with further advances in number theory). Moreover, it
takes a lot of horsepower to compute. Using CAs might come in
quite handy here.

4.5 Two Simulations  \
4.5.1 Fluid Dynamics  \ though extremely interesting,
4.5.2 Protein Folding / these chapters are omitted due
                     /  to lack of relevance to uploading.
                        If there are doubts that CAs can be
                        instrumental for large-scale
                        scientific simulations, I will
                        gladly elaborate.
--- to be continued ----------------------------------------------


[1]  Roger Penrose, "The Emperor's New Mind", Vintage (1990).
[2]  Stephen Hawking, "A Brief History of Time", Bantam (1989).
[3]  Hans Moravec, "Mind Children", Harvard University Press (1988).
[4]  Stuart A. Kauffman, "The Origins of Order", Oxford University
     Press (1993).
[5]  Gregoire Nicolis, Ilya Prigogine (ed.), "Die Erforschung des
     Komplexen", Piper (1987)
[6]  Dawkins, "The Blind Watchmaker", (1986)
[7]  R. Vollmar, "Algorithmen in Zellularautomaten", B.G. Teubner
     Stuttgart (1979).
[8]  G. E. Gorelik, "Rasmernost' Prostranstva", Isdatel'stvo
     Moskovskogo Universiteta (1983).
[9]  Andrew Hodges, "Alan Turing: the enigma", Vintage (1992).
[10] Koch/Segev (ed.), "Methods in Neuronal Modeling", MIT Press,
     2nd printing (1990)
[11] Yoh-Han Pao, "Adaptive Pattern Recognition and Neural Networks",
     Addison Wesley (1989).
[12] Igor Aleksander (ed.), "Neural Computer Architectures",
     North Oxford Academic (1989)
[13] Paul M. Churchland, "Representation and high-speed computation
     in neural networks", in "The Foundations of Artificial
     Intelligence: A Sourcebook", Derek Partridge and Yorick Wilks
     (eds.), Cambridge University Press (1990).
[14] P.W. Atkins, "Physical Chemistry", Oxford University Press (1987)
