Artificial Life and Cellular Automata

 

                                                               Robert C. Newman

 

Introduction

 

Artificial Life (AL) is a rather new scientific discipline, which didn't really get going until the 1980s.[1]  Unlike biology, it seeks to study life not out in nature or in the laboratory, but in the computer.[2]  AL seeks to mimic life mathematically, and especially to generate known features of life from basic principles (Langton, 1989b, pp 2-5).  Some of the more gung-ho specialists in AL see themselves as creating life in the electronic medium (Ray, 1994, p 180); others think they are only imitating it (Harnad, 1994, pp 544-49).  Without addressing this particular question, theists can at least agree that life does not have to be manifested in biochemistry.

 

Those who believe in metaphysical naturalism ("the Cosmos is all that is, or ever was, or ever will be") must presume a purely non-supernatural origin and development of life, unguided by any mind.  For metaphysical naturalism, no other kind of causality really exists.  Theists, by contrast, believe that a mind (God) is behind it all, however He worked.  Perhaps God created matter with built-in capabilities for producing life; perhaps He imposed on matter the information patterns characteristic of living things; perhaps He used some combination of these two.

 

Current naturalistic explanations of life may generally be characterized by three basic claims.  First, that life arose here on earth or elsewhere without any intelligent oversight: a self-reproducing system somehow assembled itself.  Second, that the (essentially blind) Darwinian mechanism of mutation and natural selection, which then came into play, was so effective that it produced all the variety and complexity we see in modern life-forms.  Third, that the time taken for the assembly of the first self-reproducer was short enough, and the rate at which mutation and natural selection operate is fast enough, to account for the general features of the fossil record and such particulars as the "Cambrian explosion."  A good deal of AL research seems aimed at establishing one or more of these claims.

 

What sort of world do we actually live in?  The "blind-watchmaker" universe of metaphysical naturalism, or one structured by a designing mind?  It is to be hoped that research in AL can provide some input for answering this question.

 

Meanwhile, the field of AL is already large and is rapidly growing larger.  I have neither the expertise nor the space to give a definitive picture of what is happening there.  Here we will try to whet your appetite and provide some suggestions for further research by looking briefly at several proposals from AL to see how they fare in light of the naturalistic claims mentioned above.  First, we shall look at the cellular automata devised by von Neumann, Codd, Langton, Byl and Ludwig, both as regards the origin of significant self-reproduction and the question of how life might develop from these.  Second, we will sketch Mark Ludwig's work on computer viruses, which he suggests are the nearest thing to artificial life that humans have yet devised.  Third, we will examine one of Richard Dawkins' programs designed to simulate natural selection.  Fourth, we will look at Thomas Ray's "Tierra" environment, which seeks to explore the effects of mutation and natural selection on a population of electronic creatures.

 

Cellular Automata

 

Beginning nearly half a century ago, long before there was any discipline called AL, computer pioneer John von Neumann sought to investigate the question of life's origin by trying to design a self-reproducing automaton.  This machine was to operate in a very simplified environment, to see just what was involved in reproduction.  For the building blocks of this automaton, von Neumann decided on computer chips fixed in a rigid two-dimensional array rather than biochemicals swimming in a three-dimensional soup.  [In practice, his machine was to be emulated by a single large computer doing the work of the many small computer chips.]

 

Each computer chip is identical, but can be made to behave differently depending on which of several operational states it is currently in.  Typically we imagine the chips as wired to their four nearest neighbors, each chip identifying its current state via a number on a liquid crystal display like that on a wristwatch.  The chips change states synchronously in discrete time-steps rather than continuously.  The state of each chip for the next time-step is determined from its own current state and those of its four neighbors, using a set of transition rules specified by the automaton's designer.
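
To make this update scheme concrete, here is a minimal sketch in Python of one synchronous time-step over a four-neighbor (von Neumann) neighborhood.  The states, the single transition rule, and all names here are invented for illustration; von Neumann's actual rule set was vastly larger.

    from typing import Dict, List, Tuple

    State = int
    # Hypothetical transition table: (center, north, east, south, west) -> next
    # center state.  Patterns not listed leave the cell unchanged.
    RULES: Dict[Tuple[State, State, State, State, State], State] = {
        (0, 0, 1, 0, 0): 1,   # example: a cell turns on when only its east neighbor is on
    }

    def step(grid: List[List[State]]) -> List[List[State]]:
        """One synchronous time-step; cells beyond the edge count as state 0."""
        h, w = len(grid), len(grid[0])
        def at(r: int, c: int) -> State:
            return grid[r][c] if 0 <= r < h and 0 <= c < w else 0
        return [[RULES.get((at(r, c), at(r - 1, c), at(r, c + 1),
                            at(r + 1, c), at(r, c - 1)), grid[r][c])
                 for c in range(w)]
                for r in range(h)]

    grid = [[0, 0, 0],
            [0, 0, 1],
            [0, 0, 0]]
    grid = step(grid)   # the center cell now turns on: grid[1][1] == 1

Because every cell consults only the previous time-step's states, the whole array updates "at once," just as described above.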

The idea in the design of a self-reproducing automaton is to set up an initial array of states for some group of these chips in such a way that they will turn a neighboring set of chips into an information channel, and then use this channel to "build" a copy of the original array nearby.

 

Von Neumann, in the late '40s and early '50s, attempted to design such a system (called a cellular automaton) that could construct any automaton from the proper set of encoded instructions, so that it would make a copy of itself as a special case.  But he died in 1957 before he could complete his design, and it was finished by his associate Arthur Burks (von Neumann, 1966).  Because of its complexity (some 300x500 chips for the memory control unit, about the same for the constructing unit, and an instruction "tape" of some 150,000 chips) the machine von Neumann designed was not built.[3]

 

Since von Neumann's time, self-reproducing automata have been greatly simplified.  E. F. Codd (1968) reduced the number of states needed for each chip from 29 to 8.  But Codd's automaton was also a "universal constructor," able to reproduce any cellular automaton, including itself.  As a result, it was still about as complicated as a computer.

 

Christopher Langton (1984) made the real breakthrough to simplicity by modifying one of the component parts of Codd's automaton and from it producing a really simple automaton (shown below) that reproduces itself in 151 time-steps.  It reproduces by extending its arm (bottom right) by six units, turning left, extending it six more units, turning left, extending six more, turning left a third time, extending six more, colliding with the arm near its beginning, breaking the connection between mother and daughter, and then making a new arm for each of the two automata.  Langton's automaton, by design, will not construct other kinds of cellular automata as von Neumann's and Codd's would.  His device consisted of some 10x15 chips, including an instruction tape of 33 chips, plus some 190 transition rules.

[Figure omitted: Langton's self-reproducing loop automaton]

Just a few years later, John Byl (1989a, b) simplified Langton's automaton further (see below), with an even smaller automaton that reproduced in just 25 time-steps.  Byl's automaton consisted of an array of 12 chips (of which 4 or 5 could be counted as the instruction tape) and 43 transition rules.

[Figure omitted: Byl's smaller self-reproducing automaton]

Most recently, Mark Ludwig (1993, pp. 107-108) has apparently carried this simplification to its limit with a minuscule automaton that reproduces in just 5 time-steps.  This automaton consists of 4 chips, only one of which is the instruction "tape," and some 22 transition rules.

[Figure omitted: Ludwig's minimal self-reproducing automaton]

It is interesting to note that the information contained in each of these self-reproducing automata may be divided into three parts: (1) the transition rules, (2) the geometry of the chips, and (3) the instruction tape.  The transition rules, which tell us how state succeeds state in each chip, somewhat resemble the physics or chemistry of the environment in the biological analogue.  The geometry of the automaton would correspond to the structure of a biological cell.  The instructions resemble the DNA.  Thus these automata have a division of information which corresponds to that found in life as we know it on earth.  In both cases self-reproduction depends not only on an instruction set, but also upon the structure of the reproducer and the nature of the physical realm in which it operates.

 

For the von Neumann and Codd automata, since they are universal constructors, the size of the machine and its instructions is enormous!  One could not seriously entertain a naturalistic origin of life if the original self-reproducing system had to have anything like this complexity.

The smaller automata look much more promising, however.  Perhaps a self-reproducing biochemical system at this level of complexity could have arisen by a chance assembly of parts.  In a previous paper (Newman, 1988) I suggested that the random formation of something as complex as the Langton automaton (even with very generous assumptions) was out of the question in our whole universe in the 20 billion years since the big bang, as the probability of formation with all this space and time available is only 1 chance in 10^129.

 

In response to Byl's proposed automaton, I found it necessary (Newman, 1990a) to retract some of the generosity given to Langton, but by doing so found that even Byl's automaton had only 1 chance in 10^69 of forming anywhere in our universe since the big bang.

 

Ludwig's automaton looks so simple as to be a sure thing in a universe as vast and old as ours.  Indeed, by the assumptions used in doing my probability calculation for Byl's automaton, we would have a Ludwig automaton formed every 7 x 10^-15 seconds in our universe.

 

However, an enormously favorable assumption is contained in this calculation: that all the carbon in the universe is tied up in 92-atom molecules which exchange material to try out new combinations as quickly as an atom can move the length of a molecule at room temperature.  If, however, we calculate the expected fraction of carbon that would actually be found in 92-atom polymers throughout our universe, the expected time between formations of a Ludwig automaton in our universe jumps to about 10^86 years!  Thus it would still not be wise to put one's money on the random formation of self-reproduction even at this simple level.

 

Besides the problem of formation time, the physics (transition rules) of these smaller automata was specially contrived to make the particular automaton work, and it is probably not good for anything else.  Since the automata of Langton, Byl and Ludwig were not designed to be universal constructors, self-reproduction typically collapses under any mutation in the instructions.  To avoid this, the constructing mechanism in any practical candidate for the first self-reproducer will have to be much more flexible, so that it can continue to construct copies of itself while it changes.

 

The physics of such automata could be made more general by going back toward the larger number of states used in von Neumann's automaton.  Langton, for instance, has a signal for extending a data path in his information tape, but none for retracting one; a signal for a left-turn, but none for a right-turn.  These could be included rather easily by adding additional chip states to his eight, thus making the physics more flexible.  Of course, this would significantly increase the number of transition rules and the consequent complexity of his automaton.

 

This, obviously, makes self-reproduction even less likely to have happened by chance.  But it would also help alleviate the problem that these simpler automata don't have a big enough vocabulary in their genetic information systems to do anything but a very specialized form of self-reproduction, and they have no way to expand this vocabulary which was designed in at the beginning.  This problem seems to me a serious one for the evolution of growing levels of complexity in general.

 

As for building an automaton that is more general in its constructing abilities and not tied to a particular physics especially contrived for it, Karl Sigmund (1993, pp. 27-39) has described an attempt by John Conway to use the environment of his game of "Life" as a substrate on which to design a universal constructor.  Conway succeeds in doing so, but the result is outrageously complex, back in the league with von Neumann's and Codd's automata.

 

We should be able to design a somewhat general self-reproducing automaton on a substrate not especially designed for it.  This would be a good project for future research.[5]  We would then have a better handle on what complexity appears to be minimal for significant self-reproduction, and on the likelihood that it could occur by chance in a universe such as ours.

 

The environment in which each of these self-reproducing automata operates is empty, in the sense that nothing else is around and happening.  By design, the sea of unused cells is quiescent.  This is certainly unlike the scenario imagined for the origin of biochemical life.  What will happen to our automata if they are bumped by or run into other objects in their space?  Are they too fragile to be real candidates for the hypothetical original self-reproducer?  The Langton automaton certainly is.  By running the program with a "pimple" placed on the surface of the automaton (i.e., with the structure touched by a single cell in any of the states 1-7), we find that the automaton typically "crashes" in about 30 time-steps (the time taken for the data to cycle once around the loop).  It appears that the automaton is very fragile or "brittle" rather than robust.  This would certainly not be satisfactory in a real-life situation.

 

Computer Viruses

 

Mark Ludwig, PhD in elementary particle physics, proprietor of American Eagle Publications, and author of The Little Black Book of Computer Viruses (1991), has written a very stimulating book entitled Computer Viruses, Artificial Life and Evolution (1993).  Ludwig argues that computer viruses are really much closer to artificial life than anything else humans have produced so far,[4] especially in view of the fact that such viruses have gotten loose from their creators (or been set loose) and, like the biochemical viruses for which they are named, are fending for themselves rather successfully in a hostile environment.

 

Like the various cellular automata we discussed above, computer viruses have the ability to reproduce themselves.  In addition, they can typically hide themselves from "predators" (antivirus programs) by lurking inside the instructions of some regular computer program which they have "infected."  They may also function as parasites, predators, or just clever annoyances as they ride programs from disk to disk, computer to computer, and user to user.  Some viruses (by design or not) damage files or programs in a computer's memory; others just clutter up memory or diskettes, or send humorous and irksome messages to the computer screen.

 

So far as I know, no one claims that computer viruses arose spontaneously in the memories of computers.  But how likely would it be for something as complex as a simple virus to form by chance in the computer environment?

 

Early in 1993, Ludwig sponsored "The First International Virus Writing Contest," awarding a prize for the shortest virus that could be designed having certain rather minimal functions (Ludwig, 1993, pp 319-321).  He provides the code (computer program) for the virus that was grand prize winner and for several runners-up, plus a sample virus which he sent out with the original announcement of the contest (Ludwig, 1993, pp 322-331).  These programs all turned out to be over 100 bytes in length.

 

Ludwig calculates for the shortest of these (101 bytes) that there are 10^243 possible files of length 101 bytes.  If we could get all the 100 million PC users in the world to run their machines full-time with a program that generates nothing but random 101-byte sequences at 1000 files per second, then in 10 years the probability of generating this particular virus is 2 x 10^-224 (ibid., pp 254-5).  If they ran for the whole history of the universe, the probability would be 4 x 10^-214.  If all the elementary particles in our universe were converted into PCs generating 1000 random 101-byte files per second, the probability of forming this particular virus would be 6 x 10^-110 (ibid., p 255).  Obviously our universe does not have the probabilistic resources to generate this level of order by random assembly!
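
Ludwig's arithmetic is easy to check.  A back-of-envelope sketch in Python, using the figures just quoted (the constants come from the text; the seconds-per-year rounding is mine):

    SECONDS_PER_YEAR = 3.15e7
    pcs, files_per_sec, years = 1e8, 1e3, 10

    trials = pcs * files_per_sec * years * SECONDS_PER_YEAR  # about 3e19 files
    possible = 256.0 ** 101       # possible 101-byte files, about 1.7e243
    print(trials / possible)      # about 2e-224, matching Ludwig's figure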

 

Ludwig then discusses two much smaller programs.  One is a rather crude virus of 42 bytes, which just copies itself on top of all the programs in a computer's directory.  He notes that one might just expect to form this virus in the history of the universe if all those elementary particles were PCs cranking out 1000 random 42-byte files per second, but that if one only had the 100 million PCs and ten years for the job, the probability would be only 4 x 10^-81 (ibid., pp 254-5).  This would improve to 8 x 10^-71 if one had the time since the big bang to work with.

 

The smallest program Ludwig works with is not a virus, since it cannot make copies of itself that are saved to disk, but only copies that remain in memory as long as the computer is running.  This program is only 7 bytes long.  It could easily be formed in ten years with 100 million PCs turning out 1000 7-byte sequences per second, but it would take a single computer about 2.5 million years to do so.

 

It is doubtful that this is the long-sought self-reproducer that will show life arose by chance.  The actual complexity of this program is considerably greater than 7 bytes, because it uses the copying routine provided by the computer.  The environment provided for computer viruses is much more helpful for self-reproduction than is the biochemical environment.

 

As in the case of cellular automata, we see that a random search for self-reproduction (before mutation and natural selection can kick in) is an extremely inefficient way to reach even very modest levels of organized complexity; but for naturalism, that is the only path available.

 

Ludwig also considers forming a virus by accidental mutation of an existing computer program (Ludwig, 1993, pp 259-263).  This is an interesting discussion, but it tells us more about how a biochemical virus might get started in a world which already has a lot of life than it does about how life might get started in abiotic circumstances.

 

Dawkins' "Weasel" Program

 

Richard Dawkins claims that there is no need for a mind behind the universe.  Random processes, operating long enough, will eventually produce any level of order desired: "Give enough monkeys enough time, and they will eventually type out the works of Shakespeare."

 

If indeed we grant that we live in a universe totally devoid of mind, then something like this must be true.  And granting this, if we broaden our definition of "monkey" sufficiently to include anthropoid apes, then it has already happened!  An ape evolved into William Shakespeare, who eventually wrote (and whose descendants typed) his immortal works!

 

But seriously, this is merely to beg the question.  As Dawkins points out (Dawkins, 1987, pp 46-47), the time required to reasonably expect a monkey to type even one line from Shakespeare (say, "Methinks it is like a weasel" from Hamlet) would be astronomical.  To get any significant level of order by random assembly of gibberish is out of the question in a universe merely billions of years old and a similar number of light-years across.

 

But Dawkins (who, after all, believes our universe was devoid of mind until mind evolved) claims that selection can vastly shorten the time necessary to produce such order.  He programs his computer to start with a line of gibberish the same length as the target sentence above, and shows how the target may be reached by selection in a very short time.

 

Dawkins accomplishes this (ibid., pp 46-50) by having the computer make a random change in the original gibberish and test it against the target sentence, selecting the closer approximation at each step and then starting the next step with the selected line.  For instance, starting with the line:

 

WDLTMNLTDTJBSWIRZREZLMQCO P

 

Dawkins' computer reaches its target in just 43 steps or "generations."  In two other runs starting with different gibberish, the same target is reached in 64 and 41 generations.

 

This is impressive, but it doesn't tell us much about natural selection.  A minor problem with Dawkins' program is that he has designed it to converge far more rapidly than real mutation and selection would.  I devised a program SHAKES (Newman, 1990b) which allows the operator to enter any target sentence plus a line of gibberish of the same length.  The computer then randomly chooses any one of the characters in the line of gibberish, randomly chooses what change to make in that character, and then tests the result against the target.  If the changed line is closer to the target than it was before the change, it replaces the previous gibberish.  If not, the previous version remains.  Dawkins did something like this, but his version closes on its target far more rapidly.  For instance, his version moves from

 

METHINKS IT IS LIKE I WEASEL

 

to

 

METHINKS IT IS LIKE A WEASEL

 

in just three generations (Dawkins, 1987, p 48).  I suspect that what Dawkins has done is that once the computer gets a particular character right, it never allows mutation to work on that character again.  That is certainly not how mutation works!  My version took several hundred steps to move across a gap like the one above, because the mutation both had to occur randomly at the right spot in the line and had to randomly find a closer letter to put in that place.  My runs typically took over a thousand steps to converge on the target from the original gibberish.
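
For readers who want to experiment, here is a minimal Python sketch of the mutation-selection scheme just described; it is my reconstruction from the description of SHAKES above, with the alphabet and the closeness score as assumptions.

    import random

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    TARGET = "METHINKS IT IS LIKE A WEASEL"

    def closeness(s: str) -> int:
        """Count positions that already match the target."""
        return sum(a == b for a, b in zip(s, TARGET))

    line = "".join(random.choice(ALPHABET) for _ in TARGET)
    steps = 0
    while line != TARGET:
        pos = random.randrange(len(line))      # random position to mutate...
        mutant = line[:pos] + random.choice(ALPHABET) + line[pos + 1:]
        if closeness(mutant) > closeness(line):  # ...kept only if strictly closer
            line = mutant
        steps += 1
    print(steps)   # typically runs to a few thousand steps with these choices

Because the mutation must both hit the right position and draw the right character, convergence takes on the order of a thousand steps or more, consistent with the runs reported above.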

But a far more serious problem with Dawkins' simulation is that real mutation and natural selection don't have a template to aim at, unless we live in a designed universe (see Ludwig, 1993, pp 256-259).  A better simulation would be an open-ended search for an unspecified but meaningful sentence, something like my program MUNSEL (Newman, 1990b).  This program makes random changes in the length and the characters of a string of letters, without a template guiding it to some predetermined result.  Here a randomizing function either adds a letter or space to one end of the string, or changes one of the existing letters or spaces to another.  This is intended to emulate the action of mutation in changing the nucleotide bases in a DNA molecule or the amino acids in a protein.

 

In this program, natural selection is simulated by having the operator manually respond as to whether or not the resulting string consists of nothing but English words.  If it does, the mutant survives (is retained); if it doesn't, the mutant dies (is discarded).  This could be done more efficiently (and would allow for much longer computer runs) if one programmed the computer to use a spell-checker from a word-processing program to make these decisions instead of a human operator.
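
A sketch of that automated variant, with a tiny word list standing in for the spell-checker; the word list, mutation probabilities, and names are my assumptions, not Newman's actual MUNSEL code.

    import random

    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    WORDS = {"a", "i", "an", "at", "it", "is", "to", "cat", "eat", "tea"}

    def viable(s: str) -> bool:
        """Selection: the string survives only if it parses into English words."""
        parts = s.split()
        return len(parts) > 0 and all(w in WORDS for w in parts)

    def mutate(s: str) -> str:
        """Either extend the string by one character or change one character."""
        if random.random() < 0.5 or not s:
            return s + random.choice(ALPHABET)
        pos = random.randrange(len(s))
        return s[:pos] + random.choice(ALPHABET) + s[pos + 1:]

    line = "a"
    for _ in range(10000):
        mutant = mutate(line)
        if viable(mutant):   # the mutant survives; otherwise it is discarded
            line = mutant
    print(line)

Note that there is no target string anywhere in this loop: selection judges only whether the mutant is "meaningful," which is the point of the open-ended design.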

 

Even more stringent requirements might be laid on the mutants to simulate the development of higher levels of order.  For instance, the operator might specify that each successful mutant conform to English syntax, and then that it make sense on larger and larger size-scales.  This would give us a better idea of what mutation and natural selection can do in producing the higher levels of organization that would be necessary if macroevolution is really to work.

 

Ray's"Tierra" Environment

 

One of the most interesting and impressive attempts at the computer simulation of evolution I have seen so far is the ongoing experiment called "Tierra," constructed by Thomas Ray at the University of Delaware (Ray, 1991).  Ray designed an electronic organism: a small computer program which copies itself.  In this it resembles cellular automata and particularly computer viruses.  It differs from these in that it lives in an environment ("the soup," also designed by Ray) which explicitly includes both mutation and a natural competition between organisms.

 

To avoid problems that can arise when computer viruses escape captivity, the soup is a "virtual computer," a simulated computer within the real one, so the programs are not actually roaming around loose in the computer's memory.  For most of Ray's runs, the soup contains 60,000 bytes, equivalent to 60,000 instructions.  This will typically accommodate a population of a few hundred organisms, so the dynamics will be those of a small, isolated population.

 

To counter the problem of fragility or brittleness mentioned in our discussion of cellular automata, Ray invented his own computer language.  This "Tierran" language is more robust than the standard languages, so it is not as easily disrupted by mutations.  It is a modification of the very low-level assembly language used by programmers, with two major differences: (1) it has very few commands, only 32 (compare the assembly language for 486 computers, with nearly 250 commands [Brumm, 1991, 136-141]); and (2) it addresses other locations in memory by the use of templates rather than address numbers, a feature modeled on the biochemical technique by which molecules "find" each other.  The program is set up so the operator can vary the maximum distance that an organism will search to locate a needed template.
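
Template addressing can be illustrated with a short sketch.  In Tierra, templates are built from two no-op instructions and a jump lands on the nearest complementary pattern; the bit-level encoding and function below are my simplified assumptions, not Ray's actual code.

    from typing import List, Optional

    def find_template(soup: List[int], start: int, template: List[int],
                      max_dist: int) -> Optional[int]:
        """Search outward from `start`, up to max_dist cells in each
        direction, for the complement of `template` (0s and 1s swapped).
        Returns the matching position, or None if nothing is in range."""
        target = [1 - b for b in template]
        n, k = len(soup), len(template)
        for dist in range(1, max_dist + 1):
            for pos in (start - dist, start + dist):   # look both directions
                if 0 <= pos <= n - k and soup[pos:pos + k] == target:
                    return pos
        return None

    soup = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
    print(find_template(soup, start=0, template=[0, 1, 1], max_dist=8))  # -> 2

Because matching is by pattern rather than by absolute address, code that gets shifted around in memory by mutation can still find its landmarks, which is exactly the robustness Ray was after.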

 

Ray starts things off by introducing a single organism into the soup.  There it begins to multiply, the mother and resulting daughter organisms taking turns at copying themselves until they have nearly filled the available memory.  Once the level of fullness passes 80%, a procedure kicks in which Ray calls "the Reaper."  This keeps the soup from overcrowding by killing off organisms one by one, working down from the top of a hit list.  An organism at birth starts at the bottom of this list and moves upward as it ages, but will move up even faster if it makes certain errors in copying.  Alternatively, it can delay moving upward somewhat if it successfully negotiates a couple of difficult procedures.
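
In outline, the Reaper's hit list behaves like a queue ordered by age and error count.  A minimal sketch of that bookkeeping (the data structures and the one-slot swap on error are my assumptions about the mechanism, not Ray's implementation):

    from dataclasses import dataclass

    @dataclass(eq=False)
    class Organism:
        size: int   # memory the organism occupies in the soup

    reaper_list: list = []        # index 0 is the top of the hit list

    def on_birth(org: Organism) -> None:
        reaper_list.append(org)   # newborns enter at the bottom

    def on_copy_error(org: Organism) -> None:
        i = reaper_list.index(org)    # copying errors push an organism upward
        if i > 0:
            reaper_list[i - 1], reaper_list[i] = reaper_list[i], reaper_list[i - 1]

    def reap(used: int, capacity: int) -> int:
        """Once the soup passes 80% full, kill from the top of the list;
        returns the memory still in use after reaping."""
        while used > 0.80 * capacity and reaper_list:
            used -= reaper_list.pop(0).size
        return used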

 

The master computer which runs the simulation allows each organism to execute its own instructions in turn.  The turn for each organism can be varied in different runs of the experiment, so as to make this allowance some fixed number of instructions per turn, or dependent on the size of the organism so as to favor larger creatures, favor smaller ones, or be size-neutral.

 

Ray introduces mutation into the system by fiat, and can change the rate of mutation from zero (to simulate ecological situations on a timescale much shorter than the mutation rate) up to very high levels (at which the whole population perishes).

 

One form of mutation is designed to simulate that from cosmic rays.  Binary digits are flipped at random locations in the soup, most of which will be in the organisms' genomes.  The usual rate which Ray sets for this is one mutation for every 10,000 instructions executed.

 

Another form of mutation is introduced into the copying procedure.  Here a bit is randomly flipped during reproduction (typically one for every 1,000 to 2,500 instructions transferred from mother to daughter).  This rate is of similar magnitude to the cosmic ray mutation.

A third source of mutation Ray introduces is a small level of error in the execution of instructions, making their action slightly probabilistic rather than strictly deterministic.  This is intended to simulate occasional undesired reactions in the biochemistry (Ray, 1994, p 187).  Ray does not specify the rate at which error is introduced by this channel.
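
The two bit-flip channels are simple to picture in code.  A sketch, assuming one flip per call (the caller would apply the rates quoted above); the 5-bit instruction width is from the text, while the function boundaries are mine:

    import random

    def cosmic_ray(soup: bytearray) -> None:
        """Background mutation: flip one random bit somewhere in the soup
        (Ray's usual rate: one such flip per 10,000 instructions executed)."""
        i = random.randrange(len(soup))
        soup[i] ^= 1 << random.randrange(5)   # Tierran instructions are 5 bits

    def copy_cell(soup: bytearray, src: int, dst: int,
                  p_flip: float = 1 / 1500) -> None:
        """Copy-time mutation: transfer one instruction mother-to-daughter,
        occasionally flipping a bit (one per 1,000-2,500 transfers)."""
        b = soup[src]
        if random.random() < p_flip:
            b ^= 1 << random.randrange(5)
        soup[dst] = b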

 

Ray's starting organism consists of 80 instructions in the Tierran language, each instruction being one byte (of 5 bits) long.  The organism begins its reproduction cycle by reading and recording its length, using templates which mark the beginning and end of its instruction set.  It then allocates a space in the soup for its daughter, and copies its own instructions into the allocated space, using other templates among its instructions for the needed jumps from place to place in its program (subroutines, loops, etc.).  It ends its cycle by constituting the daughter a separate organism.  Because the copying procedure is a loop, the original unmutated organism actually needs to execute over 800 instructions to complete one full reproduction.  Once there are a number of organisms in the soup, this may require an organism to use several of its turns to complete one reproduction.
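
The cycle just described reduces to four steps: self-examination, allocation, a copy loop, and division.  A Python analogue of those steps (the markers and soup layout are stand-ins of my own, not Tierran code):

    BEGIN, END = "<", ">"    # stand-ins for the begin/end templates

    def reproduce(soup: list, me: int) -> int:
        start = soup.index(BEGIN, me)     # locate own templates and
        end = soup.index(END, start)      # record own length
        length = end - start + 1
        daughter = len(soup)              # "allocate" space in the soup
        for i in range(length):           # copy loop: one cell per pass,
            soup.append(soup[start + i])  # hence ~10 executed steps per cell
        return daughter                   # constitute the daughter separate

    soup = [BEGIN, "a", "b", END]
    child = reproduce(soup, 0)
    print(child, soup)   # 4 ['<', 'a', 'b', '>', '<', 'a', 'b', '>']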

 

Ray has now run this experiment many times, on his own personal computer and on much faster mainframe computers, with some runs going for billions of instructions.  (With 300 organisms in the soup, 1 billion instructions would typically correspond to some four thousand generations.)  Ray has seen organisms both much larger and much smaller than the original develop by mutation, and some of these have survived to do very well in the competition.

 

Ray has observed the production of parasites, which have lost the instructions for copying themselves, usually due to a mutation in a template that renders it useless.  These are sterile in isolation, but in the soup they can often use the copy procedure of a neighbor by finding its template.  This sort of mutant typically arises in the first few million instructions executed in a run (less than 100 generations after the soup fills).  Longer runs have produced (1) organisms with some resistance to parasites; (2) hyper-parasites, which cause certain parasites to reproduce the hyper-parasite rather than themselves; (3) social hyper-parasites, which can reproduce only in communities; and (4) cheaters, which take advantage of the social hyper-parasites.  All of these Ray would classify as microevolution.

 

Under the category of macroevolution, Ray mentions one run, with selection designed to favor large-sized organisms, which produced apparently open-ended size increase and some organisms longer than 23 thousand instructions.

 

Ray notes two striking examples of novelty produced in his Tierra simulations: (1) an unusual procedure one organism uses to measure its size, and (2) a more efficient copying technique developed in another organism by the end of a 15-billion-instruction run.  In the former, the organism, having lost the template that locates one end of its instructions, makes do by using a template located in the middle and multiplying that length by two to get the correct length.  In the latter, the copying loop has become more efficient by copying three instructions per loop instead of just one, saving the execution of several steps.

 

With size-neutral selection, Ray has found periods of stasis punctuated by periods of rapid change.  Typically, the soup is first dominated by organisms with lengths on the order of 80 bytes for the first 1.5 billion instructions executed.  Then it comes to be dominated by organisms 5 to 10 times larger, in just a few million more instructions.  In general it is common for the soup to be dominated by one or two size-classes for long periods of time.  Inevitably, however, that will break down into a period (often chaotic) in which no size dominates and sometimes no genotypes are breeding true.  This is followed by another period of stasis, with one or two other size-classes now dominating.

 

Ray's results are impressive.  But what do they mean?  For the origin of life, not much.  Ray has not attempted to simulate the origin of life, and his creatures, at 80 bytes in length, are complex enough to be very unlikely to form by chance.  Each byte in Tierran has 5 bits, or 32 combinations, so there are 32^80 combinations for an 80-byte program, which is 2 x 10^120.  Following Ludwig's scheme of using all the earth's 100 million PCs to generate 1000 80-byte combinations per second, we would need 7 x 10^100 years for the job.  If all 10^90 elementary particles were turned into computers generating combinations, it would still take 7 x 10^10 years, several times the age of the universe.  Not a likely scenario, but one might hope a shorter program that could permanently start reproduction might kick in much earlier.

 

What about the type of evolution experienced in the Tierra environment?  Is it such that we would expect to reach the levels of complexity seen in modern life in the available timespan?  This is not easy to answer.  The Tierra simulation is typically run with a very high rate of mutation, perhaps on the order of 1 in 5,000, counting all three sources of mutation.  Copying errors in DNA are more like 1 in a billion (Dawkins, 1987, p 124), some 200,000 times smaller.  Thus we get a lot more variation in a short time and many more mutations per generation per instruction.  Ray justifies this by claiming that he is emulating the hypothetical RNA world before the development of the more sophisticated DNA style of reproduction, in which a much higher level of mutation is to be expected.  Besides, for the sake of simulation, you want to have something to study within the span of reasonable computer times.  All this is true, but there is also the danger of simulating a world that is far more hospitable to evolution than ours is (see the remark of Pattee and Ludwig's discussion in Ludwig, 1993, pp 162-164).

 

The consequences of mutation also seem considerably less drastic in Tierra, making that world especially favorable for evolution.  No organism in Tierra dies before it gets a shot at reproducing, whereas dysfunction, disease, predators and accidents knock off lots of organisms (fit or not) before they reproduce in our world.  This effectively raises the mutation rate in Tierra still higher while protecting against some of its dangers, and increases the chance that an organism may be able to hop over a gap of dysfunction to land on an island of function.

 

In Tierra, the debris from killed organisms remains in the environment.  But instead of being a danger to living organisms, as it so often is in our world, the debris is available as instructions for parasites whose programs are searching the soup for templates.  This enormously raises the effective mutation rate for parasites, producing something rather like sexual reproduction in a high-mutation environment.

 

Tierran organisms have rather easy access to the innards of other organisms.  The program design allows them to examine and read the instructions of their neighbors, but not to write over them.  The organisms are designed to be able to search some distance in either direction from their location to find a needed template.  This is typically set at 200-400 instructions, but on some runs has been as high as 10,000, giving access to one-third of the entire environment!  This feature is not used by any organism whose original templates are intact, but it provides the various types of parasites with the opportunity to borrow genetic material from up to one-third of the creatures in Tierra, and probably permits many of them to escape their parasitic lifestyle with a whole new set of genes.

 

The innards themselves, whether part of a living or dead organism, are all nicely usable instructions.  Every byte in each organism is an instruction, and once an organism has inhabited a particular portion of the soup, its instructions are left behind after its death until written over by the instructions of another organism inhabiting that space at a later time.

 

The Tierran language is very robust; every mutation of every byte produces a mutant byte which makes sense within the system.  Gibberish arises only in the random arrangement of these bytes, not in any of the bytes themselves.  Thus the Tierran language cannot help but have meaning at the level of words.  The real test for macroevolution in Tierra, then, will be how successful it is in producing meaning at the level of sentences, and so far this does not appear impressive.

 

There is a need for someone with facility in reading assembly language to take a look at the Tierran mutants, to see what evolved programs look like.  How do they compare with the programs found in the DNA of biochemical life?  Are they comparable in efficiency, in elegance, in function?  Does the DNA in our world look as though biochemical life has followed a history similar to that of these Tierran creatures?

 

Thomas Ray's experiment needs to be continued, as it is a sophisticated procedure for demonstrating what an evolutionary process can actually accomplish.  But the details of its design need to be continually revised to make it more and more like the biochemical situation.  Ray has shown that the Tierra environment can occasionally produce apparent design by accident.  Can it produce enough of this to explain the proliferation and sophistication of apparent design we actually have in biochemical life on earth?

 

In a more recent paper, Ray has begun an attempt to mimic multicellular life (Thearling and Ray, 1994).  So far, they have been unable to produce organisms in which the cells are differentiated.  And they have skipped the whole problem of how to get from unicellular to multicellular life.

 

One might wish to say that the Tierran language is too restricted to be able to accomplish all the things that have happened in the history of life on earth.  But Maley (1994) has shown that the Tierran language is computationally complete: it is equivalent to a Turing machine,[6] so that in principle it can accommodate any function that the most sophisticated computer can perform.  Of course, it might take an astronomically longer time to accomplish this than a really good computer would, but that brings us back to the question of whether simulations might be more efficient or less efficient than biochemistry at producing the sort of organization we actually find in nature.  Until we can answer this, it will be hard to use AL to prove that life in all its complexity could or could not have arisen in our universe in the time available.

 

Conclusions

 

We've made a rather rapid (and incomplete) tour of some of the things that are happening in Artificial Life research.  The field is growing and changing rapidly, but in just a few years we should have a better handle on the questions of how complex self-reproduction is and what random mutation and natural selection are capable of accomplishing.  At the moment, things don't look too good for the "Blind Watchmaker" side.

 

The definition of self-reproduction is somewhat vague, and it can be made much too easy (compared to the biochemical situation) in some computer simulations by riding on the copying capabilities of the host computer and its language.  We need to model something that is much more similar to biochemistry.

 

A self-reproducing automaton apparently needs to be much closer to a universal constructor than the simplest self-reproducers that have been proposed.  In order not to collapse immediately when subjected to any mutation, it must be far more robust.  It must be able to continue to construct itself as it changes from simple to complex.  In fact, it must somehow change both itself and its instructions in synchronism in order to survive (continue to reproduce) and develop the levels of complexity seen in biochemical life.  This is a tall order indeed for any self-reproducer that could be expected to form in a universe as young and as small as ours is.  Of course, it can certainly be done if we have an infinite number of universes and an infinite time-span to work with, but there is no evidence that points in this direction.

 

In biochemical life, multicellular organisms have a rather different way of functioning than do single-celled creatures, and a very different way of reproducing.  Clearly, some real change in technique was introduced at the point of transition from one to the other, that is, at the Cambrian explosion.  So far, nothing we have seen in computer simulations of evolution looks capable of the things that happened then.

 

At the moment, AL looks more like an argument for design in nature than for a universe without it.

 


Bibliography

 

Ackley, D. H. and Littman, M. L.: 1994. A Case for Lamarckian Evolution. In Langton, 1994, pp 3-9.

Adami, C. and Brown, C. T.: 1994. Evolutionary Learning in the 2D Artificial Life System "Avida." In Brooks and Maes, 1994, pp 377-381.

Adami, C.: 1995. On Modeling Life. Artificial Life 1:429-438.

Agarwal, P.: 1995. The Cell Programming Language. Artificial Life 2:37-77.

Bonabeau, E. W. and Theraulaz, G.: 1994. Why Do We Need Artificial Life? Artificial Life 1:303-325.

Brooks, R. A. and Maes, P. (eds.): 1994. Artificial Life IV. Cambridge, MA: MIT Press.

Brumm, P., Brumm, D., and Scanlon, L. J.: 1991. 80486 Programming. Blue Ridge Summit, PA: Windcrest.

Byl, J.: 1989a. Self-Reproduction in Small Cellular Automata. Physica D 34:295-299.

Byl, J.: 1989b. On Cellular Automata and the Origin of Life. Perspectives on Science and Christian Faith 41(1):26-29.

Codd, E. F.: 1968. Cellular Automata. New York: Academic.

Dawkins, R.: 1987. The Blind Watchmaker. New York: Norton.

Dennett, D.: 1994. Artificial Life as Philosophy. Artificial Life 1:291-292.

Dewdney, A. K.: 1989. The Turing Omnibus. Rockville, MD: Computer Science Press.

Fontana, W., et al.: 1994. Beyond Digital Naturalism. Artificial Life 1:211-227.

Harnad, S.: 1994. Artificial Life: Synthetic vs. Virtual. In Langton, 1994, pp 539-552.

Koza, J. R.: 1994. Artificial Life: Spontaneous Emergence of Self-Replicating and Evolutionary Self-Improving Computer Programs. In Langton, 1994, pp 225-262.

Langton, C. G.: 1984. Self-Reproduction in Cellular Automata. Physica D 10:135-144.

Langton, C. G. (ed.): 1989a. Artificial Life. Reading, MA: Addison-Wesley.

Langton, C. G.: 1989b. Artificial Life. In Langton, 1989a, pp 1-47.

Langton, C., Taylor, C., Farmer, D., and Rasmussen, S. (eds.): 1991. Artificial Life II. Redwood City, CA: Addison-Wesley.

Langton, C. G. (ed.): 1994. Artificial Life III. Reading, MA: Addison-Wesley.

Ludwig, M.: 1991. The Little Black Book of Computer Viruses. Tucson, AZ: American Eagle Publications.

Ludwig, M.: 1993. Computer Viruses, Artificial Life and Evolution. Tucson, AZ: American Eagle Publications. Current address: American Eagle Publications, PO Box 1507, Show Low, AZ 85901.

Maley, C. C.: 1994. The Computational Completeness of Ray's Tierran Assembly Language. In Langton, 1994, pp 503-514.

Neumann, J. von: 1966. The Theory of Self-Reproducing Automata, ed. A. W. Burks. Urbana, IL: University of Illinois.

Newman, R. C.: 1988. Self-Reproducing Automata and the Origin of Life. Perspectives on Science and Christian Faith 40(1):24-31.

Newman, R. C.: 1990a. Automata and the Origin of Life: Once Again. Perspectives on Science and Christian Faith 42(2):113-114.

Newman, R. C.: 1990b. Computer Simulations of Evolution. MS-DOS diskette of computer programs. Hatfield, PA: Interdisciplinary Biblical Research Institute.

Pesavento, U.: 1995. An Implementation of von Neumann's Self-Reproducing Machine. Artificial Life 2(4). In press.

Ray, T. S.: 1991. An Approach to the Synthesis of Life. In Artificial Life II, ed. C. Langton, C. Taylor, J. D. Farmer and S. Rasmussen, pp 371-408. Redwood City, CA: Addison-Wesley.

Ray, T. S.: 1994. An Evolutionary Approach to Synthetic Biology: Zen and the Art of Creating Life. Artificial Life 1:179-209.

Shanahan, M.: 1994. Evolutionary Automata. In Brooks and Maes, 1994, pp 388-393.

Sigmund, K.: 1993. Games of Life: Explorations in Ecology, Evolution, and Behaviour. New York: Oxford.

Sipper, M.: 1995. Studying Artificial Life Using a Simple, General Cellular Model. Artificial Life 2:1-35.

Spafford, E. H.: 1994. Computer Viruses as Artificial Life. Artificial Life 1:249-265.

Taylor, C. and Jefferson, D.: 1994. Artificial Life as a Tool for Biological Inquiry. Artificial Life 1:1-13.

Thearling, K. and Ray, T. S.: 1994. Evolving Multi-Cellular Artificial Life. In Brooks and Maes, 1994, pp 283-288.

 



1. See Langton (1989b, pp 6-21) for the "prehistory" of artificial life studies.

2. Taylor and Jefferson (1994, pp 1-4) would define artificial life more broadly, to include synthetic biochemistry and robotics; so too Ray (1994, pp 179-80).

3. A forthcoming article (Pesavento, 1995) announces the recent implementation of von Neumann's machine.

4. See also Spafford (1994).

5. Some helpful attempts in this direction have been made by Adami and Brown (1994) and Shanahan (1994).

6. See Dewdney (1989) for a discussion of Turing machines.