Uncommon Descent Serving The Intelligent Design Community

Selensky, Shallit, & Koza vs artificial life simulations


It’s always a pleasure to host Dr Selensky’s thoughtful contributions. Here, he tackles the subject of artificial life simulations and the implications of modelling environments, assumptions, algorithms, etc.:

____________________________

On Modelling Spontaneous Emergence of Programs and Their Evolutionary Self-Improvement

Evgeny Selensky

Some time ago, I left a comment on Jeffrey Shallit’s blog. We exchanged a couple of phrases. In particular, I referred Jeffrey to the requirement for computational halting in models of artificial life. I praised David Abel’s work, which put him off. In response, Jeffrey recommended that I “read and understand” the following article. I have done my homework now. However, I’d like to post here at UD rather than visit Jeffrey’s blog again. Jeffrey is of course welcome to respond to my note here. My thoughts about the paper follow.

John R. Koza: Spontaneous Emergence of Self-Replicating and Evolutionarily Self-Improving Computer Programs. In: Proceedings of the Third Workshop on Artificial Life (ALIFE-3), 1993. [Link.]

The article, a quite old one now, proposes a mathematical model that, the author believes, justifies an affirmative answer regarding both:

  1. the possibility of spontaneous generation of self-replicating code; and
  2. its subsequent evolutionary self-correction.

The author analyses the possibility of spontaneous generation of computer code that can replicate itself from one or two parent programs as well as evolutionarily self-correct given enough primitive computational elements at the start. It is assumed that the primitive computational elements interact and collectively are Turing complete. Among other things, elements can form agglomerations.

A view of the minimal requisites of biological self-replication

Mathematical modelling allows one to create a virtual world with its own laws. We therefore need to be careful when applying conclusions from mathematical modelling to the real world; any such conclusions must be taken with a pinch of salt. Even to model something that can allegedly spontaneously emerge, one needs an intelligently set-up experimental (computational) environment: a programmer (sometimes even a whole team), a complex computational system, and intelligent, active control of the experiment (broadly speaking, the knowledge of how and when to change the algorithm’s parameter settings and how to guide the search for solutions).

Ironically, the relevant branch of computer science is called artificial life. Its objective is to demonstrate that biological life is just a simple consequence of a dozen or two automatic rules given some basic assumptions (it all depends, of course, on what the assumptions are); and that, once such rules are defined and applied to an initial population of self-reproducing organisms, everything else is settled. Well, yes in terms of cellular automata; probably not in terms of such rules being responsible for all observed biological complexity. In theory, there is no upper bound on the complexity of things that evolution can produce (at least I am not aware of any, other than natural limitations such as the amount of available resources, mutation and reproduction rates, etc.). And yet experience shows that unless one guides the search from outside, nothing genuinely novel and functionally complex comes out of it. Having said this, let’s have a look at what the article has to say.
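The “dozen or two automatic rules” point can be made concrete. Conway’s Game of Life, the best-known cellular automaton, is fully specified by two update rules, and once the rules and the starting configuration are fixed, everything the automaton will ever do is settled. A minimal sketch, offered purely as illustration (it is not from Koza’s paper):

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation. live: set of (x, y) live cells.
    Two rules decide everything: a dead cell with exactly 3 live
    neighbours is born; a live cell with 2 or 3 neighbours survives."""
    neigh = Counter((x + dx, y + dy)
                    for (x, y) in live
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    return {cell for cell, n in neigh.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2: once the rules and the initial
# state are chosen, its entire future is determined.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Note that even here the “world” does nothing until an intelligent agent chooses both the rules and the initial configuration, which is the author’s point.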

A von Neumann kinematic self-replicator

As usual in such papers, the trick is in the power of the starting assumptions and methodology. That evolution occurs in reality is nothing new: as soon as there is an initial population of open-ended self-replicating systems, its evolutionary behaviour is inevitable. Heritable differences in the characteristics of individuals, together with selective pressure from the environment, will cause differential reproduction. Fine. However, the question is to what extent we can rely on the conclusions of this or similar computational studies in matters concerning real phenomena such as biological evolution, and how general those conclusions are in real life.

The model kicks off from a starting configuration (the so-called primordial sea of primitive computational elements), given a Turing complete set of functions encoded by the elements. At any instant, a single program is searching for other programs among its neighbours. The objective is to find self-reproducing programs able to self-improve over generations. The search looks through those programs in the sea that are neighbours of the program currently executing; the neighbourhood of a program changes over time. A neighbour program is selected to execute next if it matches parts of the currently running program. The search terminates as soon as all programs have been run. In practice, however, the process has to be stopped after a few generations because the computational resources get exhausted. The total number of programs may grow over time. Overall, the model actually uses intelligent, artificially guided search instead of blind Darwinian search.
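The loop just described can be caricatured in a few lines. Everything below (the alphabet, program sizes, the matching rule, the neighbourhood shape, the step budget) is invented for illustration; this is a toy sketch of the kind of template-matched, resource-bounded search the paper describes, not Koza’s actual implementation:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# A toy "primordial sea": programs are strings over a tiny alphabet.
ALPHABET = "ABCD"
SEA = ["".join(random.choice(ALPHABET) for _ in range(8)) for _ in range(50)]

def neighbours(i, radius=3):
    """Indices of programs adjacent to program i in the (1-D) sea."""
    lo, hi = max(0, i - radius), min(len(SEA), i + radius + 1)
    return [j for j in range(lo, hi) if j != i]

def matches(current, candidate):
    """A neighbour qualifies if it contains some 3-symbol substring of
    the currently executing program (a stand-in for template matching)."""
    return any(current[k:k + 3] in candidate for k in range(len(current) - 2))

MAX_STEPS = 1000  # explicit resource budget: as the paper concedes, the
                  # run must be cut off once computational resources run out
steps, i, executed = 0, 0, set()
while steps < MAX_STEPS and len(executed) < len(SEA):
    executed.add(i)                                    # "run" program i
    nxt = [j for j in neighbours(i) if matches(SEA[i], SEA[j])]
    i = random.choice(nxt) if nxt else random.randrange(len(SEA))
    steps += 1
```

Even this caricature requires the experimenter to fix the alphabet, the matching rule, the neighbourhood structure and the step budget in advance, which illustrates the point about guided rather than blind search.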

[Overnight, let me insert an overview of the metabolic framework of cell-based life, using a compressed form of the typical wall-sized chart, to illustrate what has to be answered to:]

[Also, let me expand the top left corner, zooming in on Protein Synthesis:]

Protein Synthesis (HT: Wiki Media)

[And while we are at it, here is Yockey’s summary of the implied communication system involved, an irreducibly complex entity:]

Yockey’s analysis of protein synthesis as a code-based communication process

Now the details:

  • The paper assumes the presence of a primordial sea composed of a given number of primitive computational elements that model some basic functions of organic molecules. Although the presence of a population of primitive computational elements is taken for granted, one should not forget that physical laws are not the same as the rules of operation of a computer. The former have physical necessity underpinning them, whereas the latter rest on an arbitrary logical (i.e. non-physical) correspondence between symbol (a physical token) and meaning (a physical effect). We cannot assume, therefore, that the physical or chemical properties of matter (e.g. the properties of carbon or oxygen) are analogous to the behaviour of a computer. This is the weakest point of any paper of this sort, simply because drawing such an analogy between physical laws and logical rules is unwarranted. As the American philosopher Alex Rosenberg put it, in our world no clump of matter is about any other clump of matter. On the contrary, information translation, whereby one configuration of matter represents another configuration of matter, is the unique property of two kinds of systems: (a) natural human languages and artificial linguistic systems such as mathematics or programming languages, and (b) biological systems. Consequently, the most important question is how the first population of information translators (i.e. physical embodiments of the data-protocol-interpreter tuple) came about. Clearly, computer code generated by some basic routine of a computational element must be syntactically correct to be interpreted correctly; otherwise the generated ‘code’ is not code in the true sense but meaningless gibberish.
  • Next, the modified (generated) code needs to terminate in reasonable time, hence the need for a halting oracle, which I alluded to in the exchange with Jeffrey. In the paper, a limit on the number of nested loops is imposed explicitly: “If the program fails to terminate for any reason within a reasonable number of steps or the symbols […] do not form a valid program”, the program is terminated by force. Recall Gregory Chaitin’s meta-life model [Chaitin 2011], where the programmer essentially acted as a halting oracle and the computational resources were not considered limited; its critique can be found in [Ewert et al. 2013], and more informal accounts are here and here. Coming back to John Koza’s model: the efficiency of the search is, consequently, implicitly biased in favour of desired states. Note that syntactic rules are checked explicitly, which is also a means of implicitly biasing the experiments towards desired results.
  • There exist different types of complexity in relation to biopolymers (and consequently to the primitive computational elements of the model the paper proposes) [Abel, 2011]:
    • random sequence complexity (RSC) exemplified by a string of symbols whereby every symbol is equiprobable at each position in the string; this type of sequence represents chaotic structures of stochastic origin that are unconstrained by natural law or voluntary act of an agent;
    • ordered sequence complexity (OSC), whereby a string of symbols represents structures patterned either by natural regularities or by some statistically weighted means (but, again, not by deliberate choice on the part of an agent);
    • functional sequence complexity (FSC) representing prescription whereby each symbol in a sequence is a result of a specific purposeful selection to encode a function. Functional sequences are physically arbitrary (indifferent, energy-degenerate) and are constrained by choices from an agent. It has been shown that only human languages, mathematics, computer code or biological polymers exhibit statistically significant levels of FSC [Abel 2011].
  • Clearly, the paper models the process of generating strings that exhibit high levels of FSC. The (Turing complete) set of primitive elements is selected such that they can already perform the basic functions. What the search is set to achieve is creating their agglomerations, in a carefully controlled manner, by collecting an initial pool of self-replicators with the “right” properties. In fact, the paper demonstrates a kind of game using a set of basic rules with a clearly defined goal: self-reproduction. Evolution in nature does not select for a future function; it selects from among existing functions in the context of a population of self-reproducing organisms competing for limited resources. Actually, even that is a metaphor, because active selection is not what happens in nature. What is called selection in nature is rather a consequence of differential reproduction under the given environmental conditions. And so there is no active control as such in real life. In the model, however, active control is everything. There is consequently a significant mismatch between the model and reality. Careful control and stages of selection of individual programs are in place to make sure that something presented as spontaneous emergence happens. Simply saying that this is analogous to what takes place in nature is not convincing.
  • In addition to that, in real environments processes are accompanied by noise, which the model does not take into account.
  • In the model, a function is evolutionarily gained by inclusion of a new neighbouring computational element from the sea, based on matching the selection criteria at time t. What the paper calls a newly gained evolutionary function may not be novel. For example, if the program before such inclusion had one function of type A and has acquired another function of the same type A as a result of inclusion from the sea, the second one is still regarded as new, even though it may not contribute any actual functional novelty. This inclusion only makes sense in the model in view of an explicit future goal state, self-reproduction, for which the model selects.
  • In practice, if the search for solutions is to be of any practical value, the programmer needs to guide the search process. Models relying on blind search are not efficient in practice. “Genetic algorithms are widely cited as demonstrating the power of natural selection to produce biological complexity […] Despite their merits, genetic algorithms establish nothing of the sort. Such algorithms succeed not through any intrinsic property of the search algorithm, but rather through incorporating sources of information derived from the programmer’s prior knowledge” [Ewert et al. 2012]. This knowledge is measured as active information that heuristically organizes the search for solutions. Heuristics are rules of thumb that embody expert knowledge of how the search should be organized based on previous experience of solving similar problems.
  • Natural selection is nowhere near intelligent selection in efficiency. Natural selection is most efficient and sensitive when selective pressure acts with respect to a very limited number of traits (optimization criteria). The problem of the sensitivity of the selection model in the paper is not addressed. The real questions are: how much active information is passed from the programmer to the algorithm (i.e. information about where in the search space the likelihood of finding “good” solutions is higher), and how it is done in each case. This paper is no exception. Active information is a measure of the extent to which natural selection acting on a population of organisms in real environments is implicitly replaced with artificial, intelligent selection by the programmer. The best example of this occurring is genetic algorithms, where:
    • The optimization (fitness) function is given explicitly (not so in nature);
    • The programmer can measure with extremely high accuracy the fitness gradient and then can actively control the search based on the value of the gradient (not so in nature);
    • The search is controlled by creating desired neighbourhoods via particular compositions of mutation operators and by filtering out, early on, those no-hoper candidate solutions that the programmer knows would only adversely affect the next population and waste computational resources (nature cannot do this; as I said, in nature noise is everywhere).
    • Finally, we cannot assume that nature has unlimited computational resources.
  • In a word, it is hard to put it better than the author did himself: the controller exists outside of the individuals. Consequently, the proposed model is a model of intelligent, artificial selection, not of Darwinian evolution, because it uses explicit control by the experimenter. Specifying which rules the elements should encode actively informs the search algorithm how and what to search. In particular, the programmer decides when, after the next element is taken from the neighbourhood, it should be executed and the results analyzed, and when the search should only check symbolically whether the element has the appropriate structure. Similar rules are in place to control when the program being executed starts searching its neighbourhood for other programs. If a neighbour program matches a template, the currently executing program can be replaced with a larger program that is an agglomeration of the current one and its neighbour. As expected, the size of programs will tend to grow exponentially. Since computational resources are necessarily limited, some filtering of programs based on size and other parameters was in place.
  • As I said, I agree with the author that because resources are limited, the existence of a population of differing, self-reproducing, open-ended systems automatically entails selection pressure and consequently evolution. Nonetheless, the scope of evolutionary movement in the phase space will be limited in practice by a whole range of key biological factors. The practical limits of evolvability are therefore determined by what is biologically feasible. As a result, the areas in the phase space of a living system that are evolutionarily reachable from a given initial point are necessarily limited. Therefore, instead of a continent of function, biological systems are confined to evolve within islands or pockets of function whose borders cannot be crossed. The islands may well change over time, and some may perhaps even merge or new ones may form. But that does not resolve the problem of practical evolvability limits in principle.
  • The success of the experimental setup depends on a sequence of computational runs to obtain an initial pool of programs capable of self-reproducing. The requirement to have a sequence of runs to get to a starting point is not realistic. Nature will not wait, or run experiments in parallel in order to select the best and cleanest intermediate results, simply because nature has no decision-making or goal-setting capabilities. Therefore I think that, even assuming the possibility of spontaneous emergence of a replicator, getting an initial pool of replicators that can trigger evolutionary growth of functional complexity is as hard a problem as the one the model purports to solve.
  • As far as demonstrating that the model can learn, I have less of a concern: Darwinian evolution is able to explain, to some degree, the adaptation of an existing function in the real world. And yet the model still achieves this by intelligent setup/coding. The explicit rules controlling which individuals of a population are allowed to produce offspring are presented in the paper. The objective function the organisms need to emulate is given explicitly. Clearly, the more accurately a software organism emulates the given function, the better chance it has to survive and produce offspring, which is fine. In reality, though, there will again be a lot of noise and a lot less sensitivity/efficiency. The sieve of natural selection at the individual level is very coarse: it is either a ‘yes’ (reproduction has been achieved) or a ‘no’ (reproduction has not been achieved, for whatever reason).
  • The author notes time and time again that if computational resources were not limited, the model would not have to exercise fine control over the experimental setup. However, in reality biological resources are limited and therefore we cannot expect the same level of control and sensitivity from natural selection as is assumed or implemented in the model.
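The RSC/OSC distinction drawn above can be illustrated with per-symbol Shannon entropy (though not FSC, which no simple statistic captures). The example strings below are hypothetical and are not taken from Abel’s book:

```python
import math
from collections import Counter

def entropy_per_symbol(s):
    """Shannon entropy of string s, in bits per symbol."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

# Hypothetical example strings:
random_like = "GATTCAGCTAAGCTTACGATGCCA"  # near-equiprobable symbols (RSC-like)
ordered     = "ATATATATATATATATATATATAT"  # law-like repetition (OSC-like)

assert entropy_per_symbol(random_like) > entropy_per_symbol(ordered)
# Caveat: a functional (FSC) sequence can have the same entropy as a
# random one, so entropy alone cannot distinguish function from noise.
```

This is precisely why FSC requires an appeal to function rather than to statistics alone: the measure above sees only symbol frequencies, not meaning.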

Summing up, the paper exhibits the capabilities of careful experimental design and control. It models artificial selection and learning. This is no surprise: what you build into your model determines what you get out of it. The question is, as usual, to what extent it reflects reality. Design is written all over it.

Further Reading

  1. David Abel (2011): The First Gene.
  2. Gregory Chaitin (2011): To a Mathematical Theory of Evolution and Biological Creativity.
  3. Winston Ewert, William Dembski, Robert J. Marks II (2012): Climbing the Steiner Tree—Sources of Active Information in a Genetic Algorithm for Solving the Euclidean Steiner Tree Problem. BIO-Complexity.
  4. Winston Ewert, William Dembski, Robert J. Marks II (2013): Active Information in Metabiology. BIO-Complexity.

_____________________________

Considerable food for thought, as usual. END

Comments
KF @17: "The utter absence of informed objection in this technical science oriented thread to date is perhaps its most significant feature." 5 months later things remain unchanged: politely dissenting interlocutors lack serious arguments to discuss scientific issues.
Dionisio, September 21, 2017, 02:59 AM PST
EugeneS, Very interesting comment @18. Thanks. BTW, did somebody say 'designed'? :)
Dionisio, September 21, 2017, 02:55 AM PST
Of interest is a very recent publication about semantically closed evolving systems:
Semantic closure demonstrated by the evolution of a universal constructor architecture in an artificial chemistry. Edward B. Clark, Simon J. Hickinbotham, Susan Stepney. http://rsif.royalsocietypublishing.org/content/14/130/20161033
While the experiment is very interesting, the experimental setup 'in silico' is based, again, on experimenter involvement. As a consequence, what is actually demonstrated is not what is written 'on the tin'. The phrase "demonstrated by the evolution" in the title could be understood in the sense of emergence rather than preservation. Semantic closure is what the experiment starts from (and preserves), not what emerges via evolution. The authors describe a precursor of their system as one "explicitly designed to support the UCA" (UCA = universal constructor architecture). Emphasis added. This is all expected, of course.
EugeneS, September 21, 2017, 02:29 AM PST
The utter absence of informed objection in this technical science oriented thread to date is perhaps its most significant feature.
kairosfocus, April 25, 2017, 01:25 AM PST
Dr. Selensky @11:
BTW, I think that ‘guided evolution’ is an oxymoron: as soon as there is guidance, it is not evolution.
Exactly. Certainly not "evolution" as understood by every textbook, academic institution proclamation, the media, and scientists at large. Either intelligent guidance is required or it isn't.
Genetic algorithms model intelligent selection, not evolution.
Genetic algorithms are like any other computer program: you get out what you put into them. No-one disputes that if you have a relatively flat functional landscape, with rewards at regular intervals leading toward an end goal, you can get there. Such GAs have no ability to challenge real-world concepts like irreducible complexity and specified complexity (this means you too, Avida). It doesn't matter what neat stuff your GA produces; if it doesn't model biological reality, then it is an exercise in irrelevance.
The level of [functional] complexity achievable via evolution is noise compared to what genuine biological novelty is (as you move between classes or phyla).
Well said.
Eric Anderson, April 22, 2017, 07:41 PM PST
KF, thank you for correcting the error @7 as indicated @8. BTW, very insightful comment by Dr Selensky @11.
Dionisio, April 21, 2017, 08:22 AM PST
Origenes, KF: My pleasure.
EugeneS, April 21, 2017, 07:05 AM PST
Re Selensky, excellent thoughts. KF
kairosfocus, April 21, 2017, 05:35 AM PST
Dr Selensky @11 Thank you for clarifying your position regarding the power of evolution.
Origenes, April 21, 2017, 03:59 AM PST
Origenes, Thanks. Here is my view. Evolution is a real phenomenon. It is just the dynamics of self-reproducing open-ended systems. However, I don't believe that evolution (i.e. blind search, whereby the only thing an evolving system can possibly do is distinguish between goal states and non-goal states) is responsible for all the observed biological complexity. BTW, I think that 'guided evolution' is an oxymoron: as soon as there is guidance, it is not evolution. Guided search is not blind! Genetic algorithms use various forms of intelligently guided search, i.e. knowledge of what to do next and where to go. This is the primary reason why they are successful (the other reasons being the landscape characteristics: the landscape should (a) exist in the neighbourhood and (b) be smooth enough for a GA to succeed; additional constraints kill local search). Genetic algorithms model intelligent selection, not evolution. Roughly speaking, all evolution can do is adapt an already existing biological function to changes in the environment. A (complex) bio-function is necessarily of intelligent origin. For evolution to even start you need information translation (and therefore intelligence). Evolution is incapable of selecting for as yet non-existent future function. In reality, it can probably achieve something 'novel', but largely due to loss of functional information (for example, antibiotic resistance). The level of complexity achievable via evolution is noise compared to what genuine biological novelty is (as you move between classes or phyla).
EugeneS, April 21, 2017, 02:05 AM PST
Selensky: That evolution occurs in reality is nothing new: as soon as there is an initial population of open-ended self-replicating systems, its evolutionary behaviour is inevitable. Heritable differences in characteristics of individuals will necessitate a selective pressure from the environment, which will cause differential reproduction. Fine.
For clarity, I would like to know what is being said here. Is Selensky saying that evolution from microbe to man is feasible or "inevitable"? I guess that is not what he is saying. He may very well hold that differential reproduction ("evolution") doesn't find anything interesting, but, again, some clarity would be welcome.
Origenes, April 20, 2017, 04:03 PM PST
Simulations and modeling of biological systems should be done from a top-down perspective instead of the reductionist bottom-up approach that is sometimes seen out there.
Dionisio, April 20, 2017, 09:17 AM PST
Error @7: The word 'undersigned' should be removed from that text. Sorry for any confusion this error may have caused.
Dionisio, April 20, 2017, 09:14 AM PST
KF, A couple of observations on modeling and simulations: First: How accurate can our simulations and modeling be if we still lack so much knowledge of the functionality of biological systems? Still, they should be done and updated as more data is added. Second: Humans looked at birds and designed airplanes, looked at fish and designed submarines, and now look at the extracellular matrix (ECM) to design biomaterials. Does that mean our most advanced designs are inspired by [snip] things that just popped up by chance? Are we serious?
Dionisio, April 20, 2017, 05:04 AM PST
KF, This technical thread requires what we humans --including your politely dissenting interlocutors-- naturally lack: seriousness, humility, honesty to face the evidential facts with open-mindedness and thinking out of wrongly preconceived boxes. Perhaps that's a reason why you don't see here the folks that are so loudly active in other threads. Philosophical discussions attract more trolls because they can BS off topic a little longer than in a more technical discussion like this.
Dionisio, April 20, 2017, 02:24 AM PST
PS: I decided to add some details on metabolism, protein synthesis and Yockey's inferred communication network, by way of illustrations that highlight the FSCO/I to be accounted for on the RW case, OoL.
kairosfocus, April 20, 2017, 12:21 AM PST
VERRRRRRY interesting to see the silence of the objectors on the science front. Surely, they should have a ready answer to these issues on sim worlds vs the real one and the origin of cell-based metabolic, self-replicating life? After all, this issue is the ROOT of the Darwinist tree of life. KF
kairosfocus, April 20, 2017, 12:12 AM PST
Nice breakdown, Dr. Selensky. Simulation of the god-of-the-gaps; or, should I say, god-in-the-gaps, as the gaps are the only place this djinn of theirs is allowed to dwell. "Artificial life" is basically an abstract field of software-based engineering. They create algorithms that emulate a human designer's ability to search a design space; perhaps salt it with some arbitrary, token challenging factors carefully chosen to put the cost metrics in a good-looking neighborhood while avoiding actually constraining desired results; then marvel at the "power of evolution". Some of these results are actually brilliant; potential engineering breakthroughs, even; but the narratives are wacky. These people have their heads on backwards.
LocalMinimum, April 19, 2017, 09:44 AM PST
KF, Thank you for posting this interesting contribution by Dr. Selensky.
Dionisio, April 19, 2017, 04:16 AM PST
Dr Selensky on AI sims vs the requisites of biological life.
kairosfocus, April 19, 2017, 03:59 AM PST
