IBRI Research Report #36 (1987, 1989)



Robert C. Newman
Biblical Theological Seminary
Interdisciplinary Biblical Research Institute
Hatfield, Pennsylvania

Copyright © 1987, 1989 by Robert C. Newman. All rights reserved.


The minimal complexity needed for life is examined by assuming that the simplest living thing is a self-reproducing automaton. The work of Von Neumann, Codd and Langton in designing mathematical models for such automata is briefly reviewed, and Langton's very simple self-reproducing automaton is described in detail. The complexity of Langton's automaton strongly suggests that life is designed rather than accidental.


Although the author is in agreement with the doctrinal statement of IBRI, it does not follow that all of the viewpoints espoused in this paper represent official positions of IBRI. Since one of the purposes of the IBRI report series is to serve as a preprint forum, it is possible that the author has revised some aspects of this work since it was first written. 

ISBN 0-944788-36-X

Self-Reproducing Automata and the Origin of Life


In recent decades a number of scientists have been actively seeking to demonstrate that life arose from inanimate matter by purely natural processes.1 Most of their work has consisted of studying the biochemistry of simple life forms in order to propose steps by which the necessary level of complexity could be reached by chemical reactions, whether of greater or lesser likelihood.

To facilitate such proposals, various assumptions have been made about the nature of the earth's early atmosphere, about energy sources necessary to drive the needed reactions, and about specialized environments in which inorganic chemicals might be converted to simple organics such as sugars, amino acids and nucleotides. Some work has also been done in seeking to polymerize such simple organics to produce the very complex biopolymers such as proteins and the nucleic acids DNA and RNA which are the crucial biochemicals of living cells. The results to date have not been particularly persuasive.2

The great complexity of biopolymers such as DNA and proteins has made it difficult to understand how present-day life chemistry could have been produced by purely natural processes. The jump in complexity from simple organics to such biopolymers is much too large to be explained by a series of purely random processes. In response to this difficulty, those who envision a strictly naturalistic origin of life claim that some much smaller molecular arrangements may have existed which were capable of self-reproduction, and which were formed by prebiological selection processes. Such molecules then evolved by mutation and natural selection -- a process viewed as a powerful way of producing order in an otherwise random situation -- to the large, complex proteins and nucleic acids used today. These smaller molecules were eventually rendered obsolete by their more complex descendants and so passed off the scene.

In this paper, we do not intend to examine this claim directly by a discussion of biochemistry. Instead we wish to investigate the mathematical complexity of self-reproduction. For those who propose a naturalistic origin of life, a definition of a living thing as a self-reproducing machine should be satisfactory, even if others regard such a definition as too simple. What, then, can we find out mathematically about the simplest structure that can reproduce itself? How complex is such a structure? What is the likelihood that such a structure might have formed randomly in the available time and space our universe provides? Such questions lead us to the mathematical theory of self-reproducing automata.


A survey of the literature indicates that the first person to design a mathematically feasible machine which reproduces itself was John von Neumann (1903-1957). A native of Hungary, von Neumann came to the U.S. in 1930 after earning his doctorate in mathematics at the University of Budapest. In 1931 he became a professor at Princeton University and two years later a member of the Institute for Advanced Study there. Von Neumann was unusual among mathematicians in being interested in applications of all sorts and in being able to communicate easily with scientists and engineers. During World War 2 he was active in military applications of mathematics, and later served on the U.S. Atomic Energy Commission.

Although von Neumann invented the mathematical theory of games and made important contributions to both ergodic theory and the mathematical foundations of quantum mechanics, we are here interested in his pioneering work in computers, which eventually led to his mathematical theory of self-reproducing automata. Von Neumann got into computers through the problem of solving non-linear partial differential equations. Here he came to realize that mathematics had reached a stalemate in trying to obtain general solutions to such equations, but that specific cases could be solved numerically. These solutions could then be used as a guide to theorizing about general solutions.

Since numerical solutions to such equations are quite time-consuming when done by hand, von Neumann did some of the earliest work on electronic computers. He was a consultant on several early models (ENIAC, EDVAC and JOHNNIAC), suggesting improvements in physical design and memory. He also invented the idea of using flow charts for program design, and pioneered the concept of using one language for the computer and another for the programmer. In working out a theory for automatic control of computers by internal programs, he was led to a mathematical theory of automata.

In all, von Neumann produced five works on automata: (1) "General and Logical Theory of Automata," written in 1948 and published in 1951; (2) "Theory and Organization of Complicated Automata," five lectures given in 1949; (3) "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Parts," 1952; (4) "Theory of Automata: Construction, Reproduction, Homogeneity," 1952-53; and (5) The Computer and the Brain, written 1955-56 and published in 1958. Items (1) and (3) are included in volume 5 of von Neumann's collected works;3 item (5) was published as a separate book;4 items (2) and (4), in which we are particularly interested, were published posthumously as The Theory of Self-Reproducing Automata.5

Though von Neumann intended eventually to design a mathematical model of reproduction that would realistically simulate a simple living organism, he lived long enough only to sketch out details for a much more abstract automaton which would make a copy of itself. This automaton was completed (with a few minor corrections) by Arthur Burks. Let us look briefly at von Neumann's automaton as a preparation for our further discussion of a much simpler automaton.

Von Neumann imagined his self-reproducing automaton as an organized collection of small automata which can be represented as individual computer chips filling a large two-dimensional plane. These chips are rectangular and linked to each of their four nearest neighbors. Each chip can be in any one of 29 different states, which include a quiescent or "turned-off" state, several "warm-up" states for converting the quiescent state into functional states, and finally several functional states which would transmit information in various directions or serve as junctions for information transmission.

Von Neumann's self-reproducing automaton was constructed by imposing certain initial state-values on a set of chips at time t = 0 in such a way that this set of chips would subsequently send out a construction arm and convert a nearby quiescent set of chips into a copy of itself. This automaton was rather complex, requiring an array of some 300 x 500 chips for the memory control unit, something similar for the construction unit, and a "tail" of some 150,000 chips to store information on the details of the automaton to be built.6

Von Neumann's self-reproducing automaton is notable, like early computers, mainly for showing that such a machine is feasible. Its enormous complexity, however, hardly gives encouragement to those who hope that a self-reproducing machine might happen by itself.


In the years since von Neumann's proposal, various attempts have been made to simplify his automaton. E. F. Codd,7 for instance, was able to design an automaton in which each of the constituent computer chips needed only eight states instead of von Neumann's 29. Yet Codd's automaton was still as complex as a typical electronic computer, hardly likely to happen by chance.

Recently Christopher Langton8 has proposed a drastic simplification of von Neumann's automaton, following up on some ideas suggested by Codd. Langton notes that the automata of von Neumann and Codd were unnecessarily complex because each was designed to be able to make any kind of automaton (depending on the information stored in the machine's long tail). Thus each made copies of itself only as a special case of its capabilities as a universal constructor. Langton notes that nothing so complicated as this is necessary, since the living things we are trying to mimic only make copies of themselves, not of drastically different life forms. Langton thus abandons the requirement that the automaton be able to make other kinds of automata and seeks the simplest machine that will make only a copy of itself. In what follows, let us go into some detail in order to appreciate Langton's simple self-reproducing automaton.

Following Codd, Langton represents the array of computer chips mathematically as a two-dimensional array of numbers, one for each chip, which specify the state in which each chip is presently functioning. Zero is used to represent the chip in its quiescent state, and the numbers 1 through 7 represent the other functional states. The automaton can thus be represented mathematically as a two-dimensional matrix of numbers which change with each unit of time as the machine functions.

The state of a particular chip at time t is calculated from its own state at the previous time-step t-1 as well as the states of its neighbors at t-1. What the seven functional states of the chip actually do in the machine can be chosen by the designer as he selects the set of "transition rules" which govern how each state changes with time. Following Codd, Langton defines state 1 as an element of a datapath. State 2 is used for elements of sheath material which protect the datapath. States 1 and 2 work somewhat like a nerve cell, which has a central core transmitting its signal protected by a sheath to keep the signal from straying, or like an insulated electric wire (see fig. 1).

The remaining numbers, now just 3 through 7, give us five signals which can be defined to tell the automaton to carry out various functions. Typically these signals will move along a datapath of 1's. The direction of movement of a signal is specified by making each signal a pair of digits, the leading digit being one of the numbers 3 through 7, the following digit a zero.

2 2 2 2 2 2
1 1 1 1 1 1
2 2 2 2 2 2
Figure 1

Figure 2 shows a 7 signal which will move to the right one step for each unit of time.

2 2 2 2 2 2
1 0 7 1 1 1
2 2 2 2 2 2
Figure 2
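The signal movement shown in figure 2 can be sketched in a few lines of Python. This is an illustrative toy, not Langton's actual rule table: the handful of hypothetical rules below merely move a 7-0 signal rightward along a sheathed datapath, unlisted neighborhoods default to the quiescent state 0, and for brevity the sheath cells are held fixed rather than governed by table rules (the real automaton also uses rotation-symmetric rule keys, which this sketch omits).

```python
# hypothetical rule table: keys are (center, top, right, bottom, left)
# states, values are next states; anything unlisted decays to quiescent 0
rules = {
    (0, 2, 7, 2, 1): 1,   # the trailing 0 is refilled by the path behind it
    (1, 2, 1, 2, 7): 7,   # the path cell ahead of the signal becomes 7
    (1, 2, 1, 2, 1): 1,   # ordinary path cells persist
    (1, 2, 1, 2, 0): 1,
    (1, 2, 0, 2, 1): 1,
    (1, 2, 0, 2, 0): 1,
    # no entry for center state 7: the 7 decays to 0, the signal's new tail
}

def step(grid, rules):
    """One synchronous time-step over the whole array."""
    h, w = len(grid), len(grid[0])
    get = lambda x, y: grid[y][x] if 0 <= x < w and 0 <= y < h else 0
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c = grid[y][x]
            if c == 2:                      # sheath held fixed in this sketch
                new[y][x] = 2
                continue
            key = (c, get(x, y - 1), get(x + 1, y), get(x, y + 1), get(x - 1, y))
            new[y][x] = rules.get(key, 0)   # unlisted neighborhoods -> 0
    return new

grid = [[2] * 6, [1, 0, 7, 1, 1, 1], [2] * 6]
grid = step(grid, rules)
print(grid[1])   # [1, 1, 0, 7, 1, 1] -- the 7-0 signal moved one cell right
```

Repeated calls to step() carry the signal one cell to the right per time unit, exactly as in the figure.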

Datapaths form junctions where a path splits in two. See fig. 3.

2 1 2
2 1 2
2 2 2 1 2 2 2
1 1 1 1 1 1 1
2 2 2 2 2 2 2
Figure 3

When a signal approaches such a junction coming (say) from the left, it will split into two copies of the signal, one following each of the other datapaths. See fig. 4.

2 1 2
2 1 2
2 2 2 1 2 2 2
1 0 7 1 1 1 1
2 2 2 2 2 2 2

time t

2 1 2
2 1 2
2 2 2 1 2 2 2
1 1 0 7 1 1 1
2 2 2 2 2 2 2

t + 1

2 1 2
2 1 2
2 2 2 7 2 2 2
1 1 1 0 7 1 1
2 2 2 2 2 2 2

t + 2

Figure 4

Given these features, we can design a simple device which will send a periodically repeating signal along a datapath. We simply make a closed loop with an open "tail" in one corner. See fig. 5.

2 2 2 2 2 2 2
2 1 1 1 1 1 2
2 1 2 2 2 1 2
2 1 2   2 1 2
2 1 2 2 2 1 2 2 2
2 1 1 0 7 1 1 1 1
2 2 2 2 2 2 2 2 2
Figure 5


The device shown will continually circulate its 7-0 signal counterclockwise around the small loop (or square, if you prefer), sending a 7-0 signal down the path to the right once every time the signal in the loop comes round to the junction at the lower right corner.

All this was noticed by Codd. It was the genius of Langton to observe that in this one simple device we already have the makings of a self-reproducing machine, without a lot of additional complexity.

Suppose the 7-0 signal is so defined that, when it strikes the end of a datapath, it lengthens the datapath by one unit. See fig. 6.

    2 2 2
    0 7 1 2
    2 2 2

    time t 

    2 2 2
    1 0 7 2
    2 2 2

    t + 1 

    2 2 2
    1 1 1 1
    2 2 2

    t + 2

    2 2 2 2
    1 1 1 1 2
    2 2 2 2

    t + 3 

Figure 6

Now suppose the 4-0 signal is so defined that a pair of such signals striking the end of a datapath causes the path to make a left-hand corner. It turns out that we need to use another state (say 3) as an intermediate step in this maneuver, leaving us with just two states, 5 and 6, for anything else we want to do.

At this point we have the machinery necessary to send a repeating signal around a loop of the proper size such that the signal can extend the loop's arm (or tail) n units and then make the arm turn left. If we have the signal go around the loop four times, it will make the arm fold itself around into another loop, and we can design this new loop to be the same size as the original loop.

With some judicious choices for the transition rules (remember, these govern how a chip changes state at the next time-step on the basis of its previous state and the state of its four neighbors), we can design the two remaining signals, 5 and 6, to arise when a signal going around the new loop collides with a signal coming from the old loop. The 5 and 6 move away from the collision area in opposite directions. The 5 disconnects the "daughter" loop from the "mother" and causes the mother to form a new arm at the next corner counterclockwise from the original arm, where it will then begin the process of forming another daughter loop. The 6 makes a new arm on the daughter loop, so that it can begin to form a "grand-daughter."

Doubtless it took Langton a lot of experimenting to find the simplest such machine that worked, but he eventually came up with a loop ten units on a side, possessing a five-unit arm and storing its construction information in a sequence of six 7-0 signals followed by two 4-0 signals. This machine, with the signals properly located at time t=0 (see fig. 7), will first extend its arm six units and then make a left corner. By then the information will have cycled around the loop once and come back again, extending the arm in the new direction six units and turning it left again.

      2 2 2 2 2 2 2 2
    2 1 7 0 1 4 0 1 4 2
    2 0 2 2 2 2 2 2 0 2
    2 7 2         2 1 2
    2 1 2         2 1 2
    2 0 2         2 1 2
    2 7 2         2 1 2
    2 1 2 2 2 2 2 2 1 2 2 2 2 2
    2 0 7 1 0 7 1 0 7 1 1 1 1 1 2
      2 2 2 2 2 2 2 2 2 2 2 2 2
Figure 7

In 35 time units the arm turns its first corner, in 70 its second, in 105 its third corner. The new loop closes back on itself at t=124, separates into two loops at t=128, sets up the mother loop to start making another daughter at t=147, and has the first daughter in the same state as the mother originally was (t=0) at t=151.

Having programmed my small personal computer in the Basic language to apply Langton's transition rules to his initial loop, I can testify that it works! Due to limited memory space and a slower computer, I did not run the program past t=153 to see the birth of third and fourth generation loops as Langton did. He provides an interesting discussion of how the loops "die" when they run out of space to make further daughters, for which I refer you to his original paper. A copy of my program with transition rules and starting loop is provided in the appendix to this paper should you wish to adapt it to your computer and experiment with it yourself. It is much more sophisticated than the public-domain computer game called "Life."
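Langton's transition rules are rotation-symmetric: one stored rule serves all four orientations of a neighborhood. The program in the appendix exploits this by encoding the four neighbors (top, right, bottom, left) as a four-digit number and looking each rule up under the smallest of its four cyclic rotations. A Python sketch of that canonicalization (the four-digit encoding mirrors the appendix program's convention):

```python
def canonical(t, r, b, l):
    """Smallest of the four cyclic rotations of a TRBL neighborhood code."""
    rots = [(t, r, b, l), (r, b, l, t), (b, l, t, r), (l, t, r, b)]
    return min(1000 * p + 100 * q + 10 * s + u for (p, q, s, u) in rots)

# all four orientations of the same neighborhood share one canonical code
print(canonical(2, 1, 2, 7), canonical(1, 2, 7, 2))   # 1272 1272
```

Storing each rule only under its canonical code cuts the rule table roughly by a factor of four.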


So, Langton has brilliantly demonstrated that something mimicking life can be designed. Does this indicate that life arose by chance or by design? To get some feel for the answer to this question, we need to analyze the complexity of his automaton.

Clearly, Langton's device is at or near the minimum complexity for self-reproduction of any meaningful sort. The information stored in the data train is very small because of the loop's four-sided symmetry. Even so, to specify the particular initial state of the automaton, we must indicate the state of n chips at t=0. Thinking of the automaton as a rectangular array of 10 by 15 chips, n=150. Or, we can ignore chips outside the structure itself, and make n=110. Ignoring the chips inside the "doughnut hole" of the loop, n=94. Finally, ignoring all the zero-state chips, we can get n down to 86. For each case but the last, each chip specified could be in any one of eight states, giving 8 to the nth power combinations. For the last, each could be in any one of seven, or 7 to the nth power combinations. Table One, below, in its middle column, gives the number N of possible combinations by which the chips could be specified in each case.

                              Table One

          n          Number of              Number of Combinations
                     Combinations (N)       with Transition Rules (N')

         150         3 x 10^135             5 x 10^295
         110         2 x 10^99              5 x 10^259
          94         8 x 10^84              1 x 10^245
          86         5 x 10^72              2 x 10^233

Thus, to produce the complexity of the initial state of the Langton automaton, we must search through some 5 times 10 to the 72nd power combinations for the one which works. Or, allowing for the four rotations through 90 degrees, about 10 to the 72nd power combinations.

But before we attempt to estimate how likely the random formation of such a functional automaton is, we should note that most of the complexity of this device has been swept under the rug, so to speak, in the details of the transition rules. These rules can be simplified slightly from those listed in Langton's Table One by eliminating all rules which put the chip in state zero and specifying instead that all rules not listed give a zero result. Even doing this, there are 190 rules producing the other seven states. To produce these randomly, we would need to search 7 to the 190th power combinations, about 4 times 10 to the 160th. Combining this with the complexity of the initial state calculated above, we obtain the results N' in the right-hand column of our Table One, above. Again, allowing for rotations, we have about 5 times 10 to the 232nd combinations.
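These counts are easy to check with arbitrary-precision integers; the short Python sketch below reproduces the orders of magnitude for the easiest case (n = 86):

```python
def order(n):
    """floor(log10 n) for a positive integer n, via its digit count."""
    return len(str(n)) - 1

initial = 7 ** 86          # initial-state combinations (86 non-zero chips)
rules   = 7 ** 190         # 190 transition rules, each one of 7 non-zero states
total   = initial * rules  # combined complexity N' = 7^276

print(order(initial), order(rules), order(total))   # 72 160 233
```

The three orders of magnitude match the 5 x 10^72, ~10^160, and 2 x 10^233 figures quoted above.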

The number of combinations listed for the easiest case (n=86) corresponds to the number of 276-letter words (276=86+190) one can make with a seven-letter alphabet. How likely is it that such a number of combinations would be successfully searched in the history of the universe? To make such a calculation, we have to make some assumptions concerning how many objects are being searched and how rapidly.

Since it is the biochemical situation to which we will eventually want to apply our result, let us try to mimic it in a very rough way. Let us suppose that the seven letters correspond to seven common elements occurring in life chemicals and the words correspond to strange "molecules" formed of their combinations. (Please be advised that we are not doing any real chemistry here.) Let us take hydrogen as state zero and leave it out of the calculation. Let us ignore helium as unreactive and call the seven elements carbon, nitrogen, oxygen, sulfur, phosphorus, potassium and sodium. This calculation is going to be very sloppy, so let us give each element the same abundance, namely that of carbon, the most common of the seven, which has a cosmic abundance about .0003 that of hydrogen.9

Suppose all these elements within a given volume of the universe are forming only 276-atom chains and that they are forming new combinations as rapidly as a carbon atom can move to a new chain, say a distance of 10 Angstroms, at standard temperature, 300 degrees Kelvin. (Need I point out that all these assumptions are fantastically favorable?)

Then using Boltzmann's equation to calculate the speed of carbon atoms at 300 degrees Kelvin (8 x 10 to the 4th cm/sec), new chains will form at the rate of R = 8 x 10 to the 11th per second per chain. In the visible universe (actually the universe out to the Hubble radius) there are about 2 times 10 to the 12th galaxies averaging 10 to the 11th stars each.10 Taking our sun as an average star, it has a mass of 2 x 10 to the 33rd grams. This amounts to about 4 x 10 to the 56th grams of hydrogen within the Hubble radius. Assuming each of the seven elements being used is as common as carbon, the mass of elements reacting is about 8 x 10 to the 53rd grams. Using Avogadro's number and an average gram atomic weight for our elements of 24, this will give about 2 x 10 to the 76th atoms reacting in our chains or, dividing by the number of atoms per chain (276), about 7 x 10 to the 73rd chains. The time to form the total number of possible combinations is then:

               time = (number of combinations N') / (number of chains x rate)

                    = (5 x 10^232) / [(7 x 10^73)(8 x 10^11)]

                    = 10^147 seconds

                    = 3 x 10^139 years

That is a long time! We can easily convert this into a probability that this will happen in our universe in its twenty billion year history merely by dividing the age of the universe by the time given above. The result is

              probability = 10^-129
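The whole chain of estimates can be rechecked step by step. All inputs below are the paper's own (deliberately generous) assumptions, in cgs units:

```python
from math import sqrt, log10

k = 1.38e-16              # Boltzmann constant, erg/K
T = 300.0                 # temperature, K
m = 12 * 1.66e-24         # mass of a carbon atom, g

v = sqrt(3 * k * T / m)   # thermal speed, ~8e4 cm/s
R = v / 1e-7              # chain-joining rate: one 10-Angstrom hop per event

mass_H  = 2e12 * 1e11 * 2e33     # hydrogen within the Hubble radius, g
mass_rx = 7 * 3e-4 * mass_H      # seven elements, each at carbon's abundance
atoms   = mass_rx / 24 * 6.02e23 # Avogadro's number, mean atomic weight 24
chains  = atoms / 276            # grouped into 276-atom chains

N_prime = 5e232                  # combinations with rules, 4 rotations allowed
t_years = N_prime / (chains * R) / 3.156e7
prob    = 2e10 / t_years         # chance within twenty billion years

print(round(log10(t_years)), round(log10(prob)))   # 139 -129
```

The printed orders of magnitude reproduce the 3 x 10^139 years and 10^-129 probability given above.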


Such a probability is astronomically small. There are estimated to be only about 10 to the 81st elementary particles in the universe.11 Suppose one of these particles were marked and we were set the task of randomly selecting this particle from among all the others in the universe with no way of telling it from the others. The chance of successfully locating such a particle would be 10 to the 48th times larger than the chance formation of a molecule with the organized complexity of the Langton automaton.

Unless there is some flaw in these calculations, I see only two possible responses. (1) Maybe life is designed after all. This has been the response of several formerly agnostic scientists recently.12 (2) Perhaps the simplest self-replicating machine is really much simpler than the one we have analyzed. To which I respond, "Please design some such machine so that we can take your answer seriously."

Zoologist Richard Dawkins, author of The Selfish Gene, believes scientists should choose a naturalistic model for the origin of life even if the model predicts it is very unlikely to happen. He suggests that since there may be as many as 10 to the 20th earth-like planets in the universe, and since it took about one billion years for life to show up on earth, we should admit any model in which the formation of life on a given earth-like planet has a probability of only 1 in 10 to the 20th.13 According to our calculations here, the probability of forming a simple self-replicating molecule is more than 10 to the 100th power smaller than Dawkins' threshold. It seems to me that we are looking at very strong evidence that life is designed.



  Program in S-Basic to emulate Langton's self-reproducing
  automaton as described in Christopher G. Langton,
  "Self-Reproduction in Cellular Automata," PHYSICA 10D
  (1984), pp 135-144.  Mutations may be simulated by
  modification of the initial array A(X,Y) (external file
  "AUTO") or by changing transition rules (external file
                                Dr. Robert C. Newman
                                Biblical Seminary
                                200 N. Main Street
                                Hatfield, PA 19440
                                March, 1987
DIM INTEGER A(40,40) TR(8,77) TRBL(8,77) Z(40,40)
TIME=0
OPEN #2; "RULES"  REM transition-rule file; name assumed here
REM Read in Transition Rules TRBL(I,J), TR(I,J)
FOR I=0 TO 7
        J=0
100       INPUT3 #2; TRBL(I,J), TR(I,J), TRBL(I,J+1), TR(I,J+1),
   TRBL(I,J+2), TR(I,J+2), TRBL(I,J+3), TR(I,J+3), TRBL(I,J+4),
   TR(I,J+4), TRBL(I,J+5), TR(I,J+5), TRBL(I,J+6), TR(I,J+6)
          IF TRBL(I,J+6)=7777 THEN 130 ELSE 120
120       J=J+7
          GOTO 100
130     NEXT I
        CLOSE #2
        REM Read in array A(X,Y)
        OPEN #2; "AUTO"
        FOR Y=0 TO 39
          INPUT3 #2; A(0,Y), A(1,Y), A(2,Y), A(3,Y), A(4,Y),
  A(5,Y), A(6,Y), A(7,Y), A(8,Y), A(9,Y), A(10,Y), A(11,Y),
  A(12,Y),  A(13,Y), A(14,Y), A(15,Y), A(16,Y),A(17,Y), A(18,Y),
  A(19,Y), A(20,Y), A(21,Y), A(22,Y), A(23,Y), A(24,Y), A(25,Y),
  A(26,Y), A(27,Y), A(28,Y), A(29,Y), A(30,Y), A(31,Y), A(32,Y),
  A(33,Y), A(34,Y), A(35,Y), A(36,Y), A(37,Y), A(38,Y), A(39,Y)
        NEXT Y
        REM Print array A(X,Y)
500     PRINT #1; "COMPUTER SIMULATION OF LANGTON'S AUTOMATON"

        PRINT #1; , , "TIME = "; TIME
        PRINT #1
        PRINT #1
        FOR Y=0 TO 39
          FOR X=0 TO 39
            IF A(X,Y)=0 THEN 510 ELSE 520
510         PRINT #1; "  ";
            GOTO 530
520         PRINT #1; A(X,Y);
530       NEXT X
          PRINT #1; CHR$(13); CHR$(10);
        NEXT Y
        PRINT #1; CHR$(12);
        REM Given array A(X,Y), calculate successor Z(X,Y)
        REM Main X,Y Loop
750     FOR Y=1 TO 38
          FOR X=1 TO 38
          REM Assign center and neighbors
          C=A(X,Y)
          T=A(X,Y-1)
          R=A(X+1,Y)
          B=A(X,Y+1)
          L=A(X-1,Y)
          REM Four cyclic combinations of T,R,B,L
          N1=1000*T+100*R+10*B+L
          N2=1000*R+100*B+10*L+T
          N3=1000*B+100*L+10*T+R
          N4=1000*L+100*T+10*R+B
          REM Selecting N with lowest value
          IF N1<=N2 THEN 1000 ELSE 1030
1000      IF N1<=N3 THEN 1010 ELSE 1060
1010      IF N1<=N4 THEN 1020 ELSE 1080
1020      M=N1
          GOTO 1100
1030      IF N2<=N3 THEN 1040 ELSE 1060
1040      IF N2<=N4 THEN 1050 ELSE 1080
1050      M=N2
          GOTO 1100
1060      IF N3<=N4 THEN 1070 ELSE 1080
1070      M=N3
          GOTO 1100
1080      M=N4
          REM Look up transition value
1100      CASE C OF
                0: I=0
                1: I=1
                2: I=2
                3: I=3
                4: I=4
                5: I=5
                6: I=6
                7: I=7
          J=0
1200      IF TRBL(I,J)=M THEN 1210 ELSE 1220
1210      Z(X,Y)=TR(I,J)
          GOTO 1250
1220      IF TRBL(I,J)>M THEN 1230 ELSE 1240
1230      Z(X,Y)=0
          GOTO 1250
1240      J=J+1
          GOTO 1200
1250      NEXT X
        NEXT Y
        REM End main X,Y loop
        REM Replace old A(X,Y) by new
1300    FOR Y=1 TO 38
          FOR X=1 TO 38
            A(X,Y)=Z(X,Y)
          NEXT X
        NEXT Y
        TIME=TIME+1
        GOTO 500


[Forty lines of forty numbers each, separated by commas;
somewhere in the middle, with room to grow, locate the original
automaton; the rest of the numbers are zeros.]




                                                Produced for IBRI
                                                       PO Box 423
                                               Hatfield, PA 19440
                                      Write or fax (215) 368-7002
                                             for our free catalog

(April, 1989)

Since Research Report #36 was printed in 1987, this article was published (in a slightly edited form) in Perspectives on Science and Christian Faith 40, 1 (March 1988), 24-31.

In the March 1989 issue of the same journal, Dr. John Byl, of the Department of Mathematical Sciences at Trinity Western University, Langley, British Columbia, took up the challenge offered in my conclusion and designed a much simpler self-reproducing automaton than that of Langton. See his letter, PSCF 41, 1 (March 1989), 26-29, for details.

Briefly, Byl has designed a cellular automaton with simplified structure and transition rules which reproduces in only 25 time-steps.

[Byl's 12-cell initial configuration appeared here; see the diagram in his letter.]
With an array of only 12 cells, and with 36 special transition rules plus 7 default rules, Byl applies my estimates for the probability of such an automaton arising by chance in the known universe and gets a formation timespan of only 5 x 10^-45 sec, as against my value of 3 x 10^139 years for the Langton automaton. This would seem to make the random production of a self-reproducing automaton quite likely somewhere in the history of our vast universe. Byl has made an important step forward in the search for the simplest possible self-reproducing automaton, but the conclusion regarding the ease of its formation does not follow.
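Byl's timespan can be reproduced with the same machinery used in the main calculation, taking his published count of about 6 x 10^42 combinations (discussed under item 4 below), dividing by the four rotations, and regrouping the same reacting atoms into 55-atom chains:

```python
from math import log10

N_byl  = 6e42 / 4           # Byl's combination count, divided by 4 rotations
chains = 2.1e76 / 55        # the same reacting atoms, now in 55-atom chains
R      = 7.9e11             # chain-reshuffling rate per chain per second

t_sec = N_byl / (chains * R)
print(round(log10(t_sec)))  # -44, i.e. a formation time of ~5e-45 seconds
```

This recovers Byl's 5 x 10^-45 sec figure before any of the concessions below are withdrawn.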

Realizing that the Langton automaton was quite unlikely, I made a number of very generous concessions in the probability calculation to simplify it and to avoid haggling. In the interests of realism (though not wishing to appear stingy) I must take some of these back.

1. It was assumed that all relevant atoms in the universe were already in 276-link chains (or, for the Byl automaton, 55-link chains). This is certainly not the case. The actual number of 55-atom (or larger) molecules is an astronomically small fraction of the atoms involved. As yet I do not know how to calculate this fraction.

2. It was assumed that these chains were trading atoms in such a way as only to make new combinations. This will probably not make more than an order of magnitude difference in the result.

3. It was assumed that these traded atoms were moving at a speed appropriate for a temperature of 300 degrees Kelvin (about 80° F). But few of the atoms in the universe are in such a temperature regime. Those in much colder regions will be moving around much more slowly, so that fewer combinations will be formed. In any case, life would not survive in such areas even if it could form, and it is not likely there would be much transport from such regions to warmer regions, as the mass movement is nearly all in the opposite direction (outward from stars). On the other hand, those atoms in much hotter regions will have much faster atomic motions, but that very motion will disrupt any long-chain molecules. It seems best to restrict our calculations to that fraction of matter in "life zones" around stars. Taking our solar system as an average, this fraction amounts to the ratio:

f = M_earth / M_sun = 3 x 10^-6.

Thus the fraction of atoms making such combinations is reduced by a factor of about three hundred thousand.

Here on earth, it is only the material near the surface that is in a temperature/pressure regime for life to function. This fraction of the total earth's mass is something like that of a thin shell at the earth's surface (say 1 to 6 miles thick), which gives us a further reduction of 10^-3 to 2 x 10^-4.

4. It appears that an error was made in calculating the complexity of the Langton automaton which was also carried over to the Byl model. The transition rules were represented as one digit per rule (the result), but in fact a label is necessary for each rule to identify it. In Byl's automaton, each of the seven default rules needs one digit (the current value of the cell) to distinguish among them. The non-default transition rules depend upon the current values of the four neighboring cells, which thus require a four-digit label for each. Adding in this complexity raises the number of combinations from Byl's value of 6 x 10^42 (page 28 of his article) to 2 x 10^173. Without even taking back the concessions discussed in items 1-3 above, this gives a formation time of 3 x 10^79 years, and random formation again appears to be out of the question.

I would appreciate correspondence from readers on possible improvements to this model calculation, as I believe the determination of minimum complexity for any reasonable analogues to life is desirable in working through the basic question of life's origin.

Robert C. Newman
Biblical Theological Seminary
200 N. Main Street
Hatfield, PA 19440


1 See, e.g., Cyril Ponnamperuma, The Origins of Life (New York: Dutton, 1972); Stanley L. Miller and Leslie E. Orgel, The Origins of Life on Earth (Englewood Cliffs, NJ: Prentice-Hall, 1974); John Farley, The Spontaneous Generation Controversy from Descartes to Oparin (Baltimore: Johns Hopkins, 1977); Sidney W. Fox and Klaus Dose, Molecular Evolution and the Origins of Life (New York: Marcel Dekker, 1977).

2 See, e.g., Charles B. Thaxton, Walter L. Bradley and Roger L. Olsen, The Mystery of Life's Origin: Reassessing Current Theories (New York: Philosophical Library, 1984); Robert Shapiro, Origins: A Skeptic's Guide to the Creation of Life on Earth (New York: Summit Books, 1986).

3 John von Neumann, Collected Works, A. H. Taub, ed. (6 vols.; New York: Pergamon Press, 1961-63).

4 John von Neumann, The Computer and the Brain (New Haven, CT: Yale University Press, 1958).

5 John von Neumann, Theory of Self-Reproducing Automata, ed. and completed by Arthur W. Burks (Urbana, IL: University of Illinois Press, 1966). The biographical data on von Neumann come from this source.

6 Theory of Self-Reproducing Automata gives no size information for the whole machine. Burks gives 337 x 547 for the Memory Control Unit on p. 261 and notes (p. 280) that von Neumann never finished the Construction Unit. An earlier popular discussion of von Neumann's machine is given in John Kemeny, "Man Viewed as a Machine," Scientific American 192 (April 1955): 58-67; reprinted in Mathematics in the Modern World (San Francisco: W. H. Freeman and Co., 1968), chap. 50. Kemeny's size estimate, 80 x 400 chips with a 150,000 chip tail, is certainly too small for the final functional version.

7 E. F. Codd, Cellular Automata (New York: Academic Press, 1968).

8 Christopher G. Langton, "Self-Reproduction in Cellular Automata," Physica 10D (1984): 135-144.

9 See, e.g., the relative cosmic abundances in The McGraw-Hill Encyclopedia of Astronomy (1983), p. 108.

10 Calculations based on information from Martin Harwit, Astrophysical Concepts (New York: Wiley, 1973), pp. 42, 61.

11 Give or take a few orders of magnitude; see our number above for the total amount of hydrogen and multiply by Avogadro's number, times two to count the electrons; this is a few orders of magnitude larger if the neutrinos and photons are included.

12 See, e.g., Fred Hoyle and Chandra Wickramasinghe, Evolution from Space (New York: Simon and Schuster, 1981); Dean H. Kenyon, Foreword to Thaxton, Bradley and Olsen, Mystery of Life's Origin.

13 Richard Dawkins, The Blind Watchmaker (New York: Norton, 1986), pp. 143-146.