Sorry if this post is aimed a bit at computer programmers; I trust that everyone else can still grasp the overall picture.
Evolutionists claim that all it takes for evolution to work is simply “a population of replicators, random variations on them, and a competition for survival or resources”.
Today we will try to partially lay out how to simulate such a process on a computer. First off, we need the replicators, i.e. digital programs able to self-reproduce. In informatics jargon, a computer program able to self-reproduce, i.e. to produce as output a copy of its own source code, is called a “quine”. Therefore, in a sense, a quine is a little, minimal digital “bio-cell”. You can write a quine in any programming language, and within the same language you can do it in various ways. Here I will examine a quine written in the Perl language by Tushar Samant; it is but one of many examples.
Its source code is the following:
$a='X'; print map "\$a='$a'; $_, q($_)", q(print map "\$a='$a'; $_, q($_)")
If you have Perl installed on your computer, you can easily verify that running this script prints the script itself on the screen. If you redirect the output to a file, that file will be a perfect copy of the file that generated it.
Von Neumann mathematically proved that a self-reproducing automaton must contain a symbolic description or representation of itself and a constructor (see my previous post). Our quine likewise contains a symbolic description of itself (the code on the right, the quoted “q(print map…)”), while the code on the left (the first “print map”) is the constructor, the operation applied to the description in order to output the whole quine. Some say that, in this way, the quine necessarily works in a somewhat “self-referential” mode. (About “quines”, self-reference, automata, meta-languages and artificial intelligence I suggest reading Douglas Hofstadter’s book “Gödel, Escher, Bach”, 1979.)
Why have I highlighted in the source code a “neutral” zone in blue and a “critical” zone in red? (If the colors are not visible, the neutral zone is essentially the value assigned to the $a variable, the “X”; the critical zone is the rest of the code.) This distinction holds not only for this quine but for almost all quines (and, in a sense, even for almost all computer programs, or any system in general). Random variations in the red zone destroy the self-reproducing function. By contrast, most variations in the neutral zone cause no malfunction. If, for example, we change the value of the $a variable from “X” to, say, “fb_M+hF6.oia7-jj”, we get a bigger script, but it continues to self-replicate.
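To make the two parts visible, here is the same quine laid out on two lines with my own comments (for exposition only: the added comments and line breaks are not part of the quine, and with them the script prints the one-line original rather than itself):

$a='X';
print map "\$a='$a'; $_, q($_)",        # constructor: the operation applied to the description, rebuilding the full source
q(print map "\$a='$a'; $_, q($_)");     # symbolic description: a quoted copy of the constructor's own code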
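To make this concrete, a neutral-zone variation is simply a different value for $a; for instance, this mutant of the quine above still prints an exact copy of itself:

$a='fb_M+hF6.oia7-jj'; print map "\$a='$a'; $_, q($_)", q(print map "\$a='$a'; $_, q($_)")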
Now, let’s imagine that we want to develop a small evolution simulator on our computer. We could set up an initial number of these quines and let them self-reproduce to obtain a growing population. Then we could apply random variations, generation by generation, to their neutral zones. Next we would have to write into our evolution simulator a “fitness function” suited to this digital environment. A first simple idea could be to establish that only the bigger quines survive (a toy sketch of this is given below). However, such an evolution simulation would be very poor. The variations inside the digital organisms would be trivial, and surely no new organization would arise. Moreover, the fitness function is poorly specified, because what matters is only the quantitative size of the quines, how “fat” they are, so to speak. Certainly no genuinely different organism would arise.
Therefore, if we want to test the above evolutionist claim, we could imagine a more complicated fitness function, based for instance on the concept of predation (just a suggestion; a rough sketch of such a selection step is given below). The organisms that are somehow able to “eat” parts of other organisms are fitter to survive. They are the “predators”, while the organisms that get eaten are the “victims”, who necessarily die. This would be similar to what happens in nature through Darwinian selection. We could also think of a selection based on a competition for resources.
At this point the question is: what variations are necessary to transform our initial quines into evolved predators or resource seekers? No random variation can produce such an increase in organization because, as seen above, almost all random variations in the red zone are fatal, while the variations in the blue zone are neutral (the little test sketched below illustrates this). Transforming our quines into predators or resource seekers is not impossible, but one has to increase the organization of the critical zone in a substantial manner: new source code has to be written in the red zone, while changes in the blue zone are useless. The predation macro-function needs sub-functions: movement, enemy detection, fight… Analogously, the resource-seeker function needs movement, resource detection, import of resources…
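Just to fix ideas, here is a minimal sketch of such a “bigger quines survive” simulator. It is a toy of my own (names like @population are hypothetical), and it tracks only the neutral-zone payload of each organism, i.e. the value of $a, since that is the only part we are allowed to vary:

#!/usr/bin/perl
# Toy sketch: size-based selection acting only on the neutral zone of the quines.
use strict; use warnings;

my @population = ('X') x 10;                 # each organism reduced to its $a payload

for my $generation (1 .. 20) {
    my @offspring;
    for my $payload (@population) {
        my $child = $payload;
        if (rand() < 0.5) {
            $child .= chr(97 + int(rand(26)));                        # random insertion in the neutral zone
        } elsif (length($child) > 1) {
            substr($child, int(rand(length $child)), 1) = '';         # random deletion in the neutral zone
        }
        push @offspring, $payload, $child;                            # parent plus mutated copy
    }
    # "Fitness function": only the ten biggest quines survive.
    @population = (sort { length($b) <=> length($a) } @offspring)[0 .. 9];
}

print "fattest survivor carries a payload of ", length($population[0]), " characters\n";

As anticipated, nothing interesting happens: the payloads simply grow longer, while the critical zone, where any real novelty would have to appear, is never touched.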
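For what it is worth, here is how one such predation-based selection step might look in toy form. This is entirely my illustration; the grid positions and the “predator” flag are hypothetical attributes the simulator would have to attach to each replicator:

#!/usr/bin/perl
# Toy sketch of one predation round: a predator "eats" any non-predator on the same cell.
use strict; use warnings;

my @organisms = map { +{ x => int(rand(10)), y => int(rand(10)), predator => (rand() < 0.3 ? 1 : 0) } } 1 .. 20;

my %claimed;
$claimed{"$_->{x}_$_->{y}"} = 1 for grep { $_->{predator} } @organisms;   # cells occupied by predators

my @survivors = grep { $_->{predator} or not $claimed{"$_->{x}_$_->{y}"} } @organisms;  # victims on those cells die

printf "%d of %d organisms survive this predation round\n", scalar @survivors, scalar @organisms;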
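This can even be checked empirically. The little script below is my own test harness, not part of the quine: it applies one random point mutation to the quine source, runs the mutant in a fresh perl process (it assumes a Unix-like shell with perl on the PATH), and counts how many mutants still reproduce themselves exactly. By the argument above, only mutations hitting the neutral “X” should survive:

#!/usr/bin/perl
# Rough test: how many single-character mutants of the quine still self-replicate?
use strict; use warnings;
use File::Temp qw(tempfile);

my $quine = q{$a='X'; print map "\$a='$a'; $_, q($_)", q(print map "\$a='$a'; $_, q($_)")};

my ($alive, $trials) = (0, 200);
for (1 .. $trials) {
    my $mutant = $quine;
    substr($mutant, int(rand(length $mutant)), 1) = chr(33 + int(rand(94)));   # one random point mutation
    my ($fh, $file) = tempfile(SUFFIX => '.pl', UNLINK => 1);
    print {$fh} $mutant;
    close $fh;
    my $out = `perl $file 2>/dev/null`;                     # run the mutant on its own
    $alive++ if defined $out and $out eq $mutant;           # "alive" only if it reprints itself exactly
}
print "$alive of $trials mutants still self-replicate\n";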
To keep our discourse simple, as an example, I modified the initial quine with a simple, very rudimentary movement sub-function (which serves both higher-level functions): now the replicator can perform a random walk on a grid, with steps of 1 unit in 8 possible directions. To do that I used the $p variable to store the X/Y information (where the replicator is located on the grid at a given time). The result could be something like this:
$a='X';$p=q(500_500);$e=q(($x, $y) = split /_/, $p;$x+=int(rand(2))*(-1)**int(rand(2));$y+=int(rand(2))*(-1)**int(rand(2));$p =~ s/\d+_\d+/${x}_${y}/;);eval $e;print map "\$a='$a'; \$p='$p'; \$e='$e'; eval '$e'; $_, q($_)", q(print map "\$a='$a'; \$p='$p'; \$e='$e'; eval '$e'; $_, q($_)")
With this modification the automaton is still able to self-replicate and, if introduced into a suitable evolution simulator (which I have not programmed so far), it moves on a grid. Notice, however, that both the constructor and the symbolic description have changed.
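For readability, here is the movement sub-function packed into the $e string, unfolded and commented (inside the replicator it must of course stay on one line, within the q(...)):

($x, $y) = split /_/, $p;                     # unpack the current grid position from the "X_Y" string
$x += int(rand(2)) * (-1) ** int(rand(2));    # step of -1, 0 or +1 along the X axis
$y += int(rand(2)) * (-1) ** int(rand(2));    # step of -1, 0 or +1 along the Y axis
$p =~ s/\d+_\d+/${x}_${y}/;                   # write the new coordinates back into $p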
All this leads us directly to what I call the “quine dilemma” of unguided evolution. If random variations are harmless or neutral (blue zone), they create no new organization. If evolution has to create complex functional novelties, new organization, it must operate in the red zone and thereby necessarily becomes potentially destructive. To speak of a “dilemma” here is euphemistic: this dilemma is worse than Hamletic, because de facto it is a show-stopper for evolution. The quine dilemma holds in computer programming as in biology. In fact, in the lab you can crash cellular replication by introducing random variations into a cell. Needless to say, this dilemma has a lot to do with the experimental fact that unicellular organisms grown in the lab haven’t yet evolved into… frogs or butterflies (e.g. Lenski’s work).
I like to cite Larry Wall, the computer scientist who invented Perl, who sums it up best: “The potential for greater good goes right along with the potential for greater evil”. Larry said that in the context of software development, but mutatis mutandis it also holds in general, biology included. In short, no power without risk.
I said “biology included” because evolutionists might object that biological replicators have no “quine” problem, because the information for new organization (on which random variation operates) is decoupled from the information for construction. This claim is fully illogical, because the information for new organization is the information for construction, what else could it be? An organism is constructed according to assembly instructions; if you want a different organism, you have to modify them. No decoupling is possible between instructions and organism, because the latter is the direct product (bit by bit) of the former. No decoupling is possible between cause and effect.
To sum up, the initial claim that evolution needs only “a population of replicators, random variations on them and a competition” is only a hope, because even in simple replicators it crashes against basic conceptual obstacles, one of which is precisely the “quine” dilemma.