Uncommon Descent Serving The Intelligent Design Community

# Writing Computer Programs by Random Mutation and Natural Selection


The first computer program every student writes is called a “Hello World” program. It is a simple program that prints “Hello World!” on the screen when executed. In the course of writing this bit of code one learns about using the text editor, and compiling, linking and executing a program in a given programming environment.

Here’s a Hello World program in the C programming language:

#include <stdio.h>

int main(void)
{
    printf("Hello World!\n");
    return(0);
}

This program contains 66 non-whitespace text characters. The C language uses almost every character on the keyboard, but to be generous in my calculations I'll assume that we need only the 26 lower-case alphabetic characters. How many 66-character combinations are there? The answer is 26 raised to the 66th power, or 26^66. That's roughly 2.4 x 10^93 (10^93 is 1 followed by 93 zeros).

To get a feel for this number, it is estimated that there are about 10^80 subatomic particles in the known universe, so there are as many 66-character combinations in our example as there are subatomic particles in 10 trillion universes. There are about 4 x 10^17 seconds in the history of the universe, assuming that the universe is 13 billion years old.

What is the probability of arriving at our Hello World program by random mutation and natural selection? How many simpler precursors are functional, what gaps must be crossed to arrive at those islands of function, and how many simultaneous random changes must be made to cross those gaps? How many random variants of these 66 characters will compile? How many will link and execute at all, or execute without fatal errors? Assuming that our program has already been written, what is the chance of evolving it into another, more complex program that will compile, link, execute and produce meaningful output?

I can’t answer these questions, but this example should give you a feel for the unfathomable probabilistic hurdles that must be overcome to produce the simplest of all computer programs by Darwinian mechanisms.

Now one might ask, What is the chance of producing, by random mutation and natural selection, the digital computer program that is the DNA molecule, not to mention the protein synthesis machinery and information-processing mechanism, all of which is mutually interdependent for function and survival?

The only thing that baffles me is the fact that Darwinists are baffled by the fact that most people don’t buy their blind-watchmaker storytelling.

DaveScot This species concept is widely challenged because it is not widely applicable. It doesn't work for bacteria, hermaphrodites, and so on. Good examples exist in the wild, at world scale, regarding amphibians and their parasites (involuntary reproductive isolation via geographical isolation; not yet published, I have only seen it presented at conferences), and in seabirds at a local scale (http://scholar.google.fr/scholar?num=20&hl=fr&lr=&cluster=15528891299310585283). Some papers dealing with mosquito species complexes in Africa talk about reproductive isolation in sympatry. You could also find examples of sympatric speciation in parasites (there is a paper by McCoy in Trends in Parasitol., "What is sympatry?", that you should definitely read). finchy
finchy There are lots of examples of speciation in the lab that could be given. Drosophila is relatively easy to coax into voluntary reproductive isolation. However, under the biological definition of species they must be involuntarily isolated, i.e., not cross-fertile, producing at best sterile hybrids. As I recall there's at least one example of that too in Drosophila, although it's difficult to tell whether cross-infertility is absolute or merely greatly reduced. trrll should have been able to give a well-documented and widely published example that we could look at specifically. DaveScot
@DaveScot
What novel species were created in the laboratory evolution you mention and how was it determined these never evolved in nature before? I’ll need links to support your claims this time. This will be your last comment until you successfully support those claims so don’t even bother with anything else.
I've already read such papers. This one (http://cat.inist.fr/?aModele=afficheN&cpsidt=14390342) talks about interaction-induced speciation. Just send me an email if you want a PDF. Also, I have experimental/modelling papers on similar systems (I'm afraid trrll is at least partially correct in his affirmations), I'll gladly give you PDF if you feel like reading (maths are pretty hard, though). finchy
Maybe you think of 'function' as a sum of small functions (print, for, etc.), whereas biological function is primarily determined by structure, that is, by the 'message' itself, in which 'print' and 'prind' would be equivalent because of redundancy, chemical equivalence between amino acids, and so on. I'm afraid your algorithm/genetic-material comparison is far too simple. That would be the one argument against your point of view… finchy
[...] at Telic Thoughts Bradford resurrected a discussion based on my UD essay, Writing Computer Programs by Random Mutation and Natural Selection. In reference to the quote, “The set of truly functional novel situations is so small in [...] Mathematics and Darwinism — Plus a Math Problem to Solve | Uncommon Descent
[...] Descent blog entry I had come across some time ago and subsequently forgotten. Gil Dodgen wrote Writing Computer Programs by Random Mutation and Natural Selection. There were a number of interesting comments and numbered among the commenters was at least one [...] Getting With the Program - Telic Thoughts
[...] readers might also like to check out my essays on the obstacles presented by combinatoric explosion, and the willingness of Darwinists to accept storytelling as fact, with absolutely no analytical [...] Gil’s Involvement With The EIL | Uncommon Descent
Hi Dave Re 58: I suspect there is an underlying dynamic: many people are not fully aware of a key problem with the logic of implication. As my old math prof, Harald Niederreiter, was fond of putting it: Ex falso quodlibet. That is, implication is such that an in fact false antecedent can give rise to true consequents, so that implication is not at all tantamount to equivalence. But it can also give rise to false consequents, and that is why empirical refutation is so important in the real world. Modelling uses this intentionally: we set up a "simplified" analogue for reality and use it to "predict" consequences; then, if we are confident in the models, we believe and act on the results of that process. But why should we trust the models? ANS: since no model is better than its assumptions, input data and algorithms [GIGO . . .], we first look for plausibility there. Then we validate, i.e. test the model against the empirical world. If it survives long enough, we trust it even where we cannot trace it. But a "simplified" analogue is of necessity strictly FALSE to fact. (The point is, we test it to be confident of its robustness. And of course that is precisely what has happened with the nuke reactor modelling, which is based on a lot of serious physics and empirical observation over decades and hundreds of cases of reactors. Even so, sometimes things go wrong, as at Sellafield, Chernobyl and Three Mile Island.) Observe as well how in the linked case there is a built-in targeting of improvements and a scan across a candidate list of designs which are then promoted based on performance metrics. This is intelligently directed, artificial testing [maybe with some Monte Carlo runs on parameters], not at all natural selection. And therein lieth the fallacy. Oddly, there is credible evidence that there were natural reactors in appropriate ore bodies. 
Imagine randomly setting up parameters that spontaneously get from that natural process to a sophisticated improved PBMR reactor -- every intervening "design" being functional and safe from meltdowns etc.! BTW, evolution is not to be confused with NDT-style macro-evolution, which has to start with some sort of realistic prebiotic soup model and credibly get to the first functional life form. I have lower confidence in getting to life and then to major body plans and thus to the biodiversity we observe, than in the above natural-reactor-to-PBMR evolution by computer simulation! At least for the latter we know that a natural reactor is not improbable once we have the ore concentration. (And a nuke reactor is far, far more structurally simple than a DNA-based life form.) So, it is entirely possible that von Neumann accepted that evolutionary mechanisms account for the development of life but had very low confidence that they were NDT based. [His threshold for a self-replicating automaton is in fact very high! Think about a machine that has to have in it the blueprint for itself and the self-assembling machines that then create itself from that blueprint . . . where did the self-assembling machines to interpret and implement the blueprint come from? The language for the blueprint? Etc., etc.? I think I see either an infinite regress, or else that somewhere at some point some very sophisticated things were set up externally -- the notion that they could set themselves up by chance and natural regularities simply rapidly exhausts probabilistic resources -- which was the original "Hello World" point way back up there. All the red herrings dragged across the track to lead out to conveniently combustible strawmen put up by the evo mat advocates at PT etc. notwithstanding. And worse, the brightly burning strawmen so hopefully ignited by the Thumbsters have been rapidly doused before they could cloud and poison the atmosphere. Cf my always linked. 
H'mm: is that why Denise was talking to Bill about the pay raise for the ever so useful Thumbsters?] GEM of TKI kairosfocus
I'm not interested in computer simulations unless they're modeling something that can be tested in the real world to verify the model. Nuclear weapons are tested in computer simulations. The simulations are known to accurately model the weapons because the simulation results were compared to reality and found to be accurate. I doubt you can point me to a computer simulation that models bacterial evolution and comes out with testable results of new species. What novel species were created in the laboratory evolution you mention and how was it determined these never evolved in nature before? I'll need links to support your claims this time. This will be your last comment until you successfully support those claims so don't even bother with anything else. DaveScot
Is that a completely vacuous positive claim or do you have in mind some way of testing it?
Of course. The prediction that evolution frequently leads to different outcomes from the same starting point is readily tested (and indeed, has been repeatedly tested) in computer simulations, as well as small scale laboratory evolution experiments with microorganisms. trrll
trrll If evolution were run again from the same starting conditions, it would produce completely different biology Is that a completely vacuous positive claim or do you have in mind some way of testing it? Your days are numbered here. You accuse us of making vacuous claims, then happily churn them out in great number yourself. I can't abide a hypocrite. DaveScot
The thrust of the argument is that the probability of mutation and selection achieving a predefined target is impossibly low. This is directly refuted by Dawkins's "Methinks it is like a weasel" experiment. Of course, neither is a model of evolution, because natural selection does not converge upon a predefined target, but rather optimizes the achievement of a set of goals that are defined by the fundamental laws of nature. If evolution were run again from the same starting conditions, it would produce completely different biology, whereas Dawkins's program always produces the same output string. However, Dawkins's exercise does refute the probabilistic argument in the form proposed, proving that the independent-probability assumption does not correctly calculate the probability of achieving a predefined target by mutation and selection. The fallacy of the argument is twofold: 1) It uses an incorrect probabilistic model. While getting to a preselected target by independent simultaneous mutation is a very low probability event, getting to the same target by stepwise mutation and selection is a very high probability event. 2) It is guilty of "painting the target after the arrow has struck." For example, if you shuffle a deck of cards, the specific order of cards is an astronomically low probability event. Yet it is clearly achievable, because in fact the number of acceptable sequences (in this case, all of them) is equally high. So a valid probabilistic calculation of evolution would have to consider not merely the probability that natural selection would produce life as we know it today, but the probability that it would produce any form of life. trrll
The topic was also covered at: http://intelligent-sequences.blogspot.com/2006/06/generating-multiple-codes-through.html Paul pk4_paul
Re #52: I way oversimplified the problem in order to make my point. See Stu's comment #35. GilDodgen
Perhaps this has been said (I haven't had time to read all the comments), but this simple C program also requires a prior intelligence, namely the stdio.h library. Without it, the program would not be able to print out the phrase "Hello World!"; it would do nothing. This means that random mutation and natural selection would first have to create (somehow) that library before this other randomly created program would function properly. A far greater hurdle than the one initially posed. cjanicek
Here is Berlinski's direct quote from the interview: "John Von Neumann, one of the great mathematicians of the 20th century, just laughed at Darwinian theory. He hooted at it." I presume that Berlinski is not making this up. GilDodgen
Spent a bit of time googling for quotes...unfortunately "von neumann" and "evolution" and "darwinism" and "neo-darwinism" and "origin of life" only led to articles discussing such topics and rarely anything Von Neumann said himself. I did see several other people briefly mention that Von Neumann scoffed at Darwinism...but no sources for these assertions. Although I did find this one quote which was credited to him: "I shudder at the thought that highly purposive organizational elements, like the protein, should originate in a random process." Patrick

Re #46. I can't get the Berlinski link to work - but a quick Google reveals this passage on von Neumann (sorry about the length): (http://mayet.som.yale.edu/coopetition/vN.html)

[blockquote]
Von Neumann designed a self-replicating automaton that could use information to create progeny, even progeny of increasing complexity. He concluded that there is a "completely decisive property of complexity," a "minimum level . . . below which automata are degenerative (can only produce less complex automata than themselves) but above which some automata can produce equally or more complex progeny." Moreover, von Neumann elaborated on the nature of this threshold, above which "open-ended complication" or "emergent evolution" could occur. The automaton had to have the capacity to act on symbolically represented information--specifically, a symbolic description of itself. "Self-replication would then be possible if the universal constructor is provided with its own description as well as a means of copying and transmitting this description to the newly constructed machine."

The self-reproducing automaton, therefore, must have two components which are wholly distinct from one another--the machine and its description. A key insight in von Neumann's analysis of self-reproduction is this "categorical distinction between a machine and a description of a machine." The description of the machine is symbol, while the machine is matter--but for the reproduction to be successful, the description must not only be followed, but must also be duplicated. The description itself thus performs two distinct functions: "On the one hand, it has to serve as a program, a kind of algorithm that can be executed during the construction of the offspring. On the other hand, it has to serve as passive data, a description that can be duplicated and given to the offspring." It was several years later that Watson and Crick would discover DNA, the instructions for living automata. They discovered that, astonishingly, DNA does indeed perform these two functions. It encodes the instructions for making the appropriate enzymes and proteins for a cell, and also unwinds and duplicates itself before a cell divides: "With admirable economy, evolution has built the dual nature of the genetic material into the structure of the DNA molecule itself."

[/blockquote]

Doesn't sound like someone who thinks evolution is laughable.

"With admirable economy, evolution has built the dual nature of the genetic material into the structure of the DNA molecule itself." Where did von Neuman say this? This appears to contradict his assertion that below a certain complexity replicators are degenerative. How does he posit the first replicator of sufficient complexity, which he posits requires both a working copy of the replicator plus a coded set of instructions describing how to construct another replicator, came to be? -ds Mark Frank
RE: #45: Hello, Mung! If I recall correctly, RBH works for a firm that uses genetic (and perhaps other) algorithms to solve problems for clients. If these algorithms are indeed solving practical problems confronting people in medicine or industry, then I wish blessing upon RBH and his co-workers in their endeavors. May their research bear fruit and make people happy with any clever solutions they apparently produce. As theoretical constructs of how living things really evolve, however, these algorithms are fatally flawed and rigged (consciously or unconsciously) to produce agreeable results. Anyone who makes a living solely to design and show off mathematical "vindications" of Darwinism will gain nothing but a paycheck. Such shams contain no currency otherwise. Best regards, apollo230 apollo230
Es58: "...also, John Von Neumann, responsible for major computer architecture, I believe, expressed himself against this Darwinian outlook" David Berlinski comments in this interview http://www.theapologiaproject.org/media/berlinski.ram that Von Neumann, one of the greatest mathematicians and computer scientists of the 20th century, found modern Darwinian theory laughable. GilDodgen
I wonder if that is the same RBH that claims that cost is not a problem for evolution because his GAs show that evolution happens just fine and is unrestricted by cost issues. When asked what the unit of cost is in his GAs, he had no answer. Yeah, that's the same RBH. He's smart, intelligent, but biased, blind, and unwilling to admit when he is wrong or to correct the deficiencies in his arguments. IOW, typical PT material. Mung
Gil, Stu Harris responded: The compiler runs on a complex specified operating system, which runs on complex hardware which is composed of metallurgical, mineral, and plastic complexities that ..... well, you get the picture. Just to belabor the operating-system component a little further: there is, at some level, every function supplied by an OS, including memory management, task management, database management; task management includes scheduling and loading (probably some kind of linking); not to mention the management of the hardware that stores/retrieves the data; multi-tasking, multi-programming, multi-processing (on a REALLY massive scale), real-time management of an extraordinarily fine-tuned nature... I find it encouraging to find the name of Fred Brooks on the Discovery group of dissenters, b/c he was a chief engineer of the IBM 360/370 OS, one of the earliest major products, and would have a real appreciation for what goes into setting this stuff up for the first time, not even just getting it from others. Also, John Von Neumann, responsible for major computer architecture, I believe, expressed himself against this Darwinian outlook. es58
Avida, Game of Life, Tierra, the PT-thumb-referenced Hello World artificial (as in intelligent) selection demonstration. Any others? What do any of these have to do with naturalistic, mechanistic, unintelligent, mindless evolution?
ID has no problem accepting "microevolution", where organisms can be modified slightly and adapt to their environment.
ID also has no problem accepting common descent. But what does any of this have to do with RM+NS as conceived by Darwinists? Mung
I take one step back from my previous comment. The "interesting behaviour" seen in the Game of Life does not proceed beyond the original givens, which are the reproduction, movement and dying properties inherent in the world it inhabits. All of the patterns, interactions etc. seen after the initial conditions are set never exceed in "useful" information what is there to begin with. SCheesman
I have to disagree with a great many of the points made about Conway's Life game. The results are "interesting" only in the same way that a kaleidoscope is interesting, or the fabulous fractal properties of some simple mathematical functions are interesting. They are unexpected, or esthetically interesting. There is no front-loading of intelligence in any of these things. If in fact it did produce something of true novelty (like tomorrow's price of gold, or the set of prime numbers), I would have to agree it would be a significant breakthrough. Once again, you should not confuse complexity (which is relatively simple to produce in infinite quantities) with specified complexity. The Life game is a good example of a programme with no "hidden" front-loading or teleological behaviour, and its few simple rules do not betray that spirit. SCheesman
"the environment can play the same role as an artificial 'selector'." What a beautiful specimen of begging the question. BK
"This is the difference between having an *intelligent selector* that can intelligently and immediatly decide on fate of a mutation and a *natural selection* which is powerless in picking intermediates which are not naturally selectable." Farshad, I think you're on to something here. But I would propose just the opposite: the *natural selector* can evaluate only the immediate fitness of the next letter in the sequence, without regard to the target sentence. It would do so based only on the physical characteristics of the letter (what else would it have to go on?). Thus, an L would be nearly as fit as a T, but not even a close substitute for an S. Fitness based on anything but physical characteristics is not allowed, since the generator must be "blind" to any ultimate target. Now what are the chances that the program will generate a meaningful sentence in a limited number of iterations? (I know this analogy is weakened by the fact that we have a small number of letters from which to select.) The *intelligent selector* operates on a different principle, which cannot be explained by the physical characteristics of each letter. (This is Polanyi's "tacit dimension.") Yes, each letter selected has distinct physical characteristics, but the letters are actually not selected by those characteristics. Lutepisc

"It seems ConwayÃ¢â‚¬â„¢s Life game was intelligently designed to produce interesting results. It is a type of front loading within the laws of nature. This may be one way the Intelligent Designer id it.

Are we saying that Conway was not an intelligent designer?"

This seems to be a standard type of comment here. Whenever someone produces a model that can explain how complexity might arise, then it can always be argued that the person who developed the model is intelligent and therefore it doesn't count. That sure is a fail-safe strategy.

Perhaps it's a standard type of comment because it's a standard type of flaw in the models. -ds Raevmo
I wrote: "Avida, Weasel and other simulations are desperate attempts of Darwinists to prove evolutionary mechanisms of RM+NS using computer simulations. In heart of all these simulations you see a fitness functions that *intelligently* selects what should be selected and what should be not. The *designers* of all of these softwares subtly feed their system with external intelligence which normally is not available in the nature." PvM at PT responded: "The level of cognitive dissonance (and strawmen) continue. Still missing the part that involves selection by the environment. Weasel, as well as the Ã¢â‚¬ËœHello WorldÃ¢â‚¬â„¢ example specify the fitness landscape a-priori but that is mostly a strawman argument. The real dissonance arises when arguing that a fitness function requires intelligent designers avoiding the simple fact that the environment can play the same role as an artificial Ã¢â‚¬ËœselectorÃ¢â‚¬â„¢." Guys at PT are insisting in deluding themselves that environment can do the same job as an *intelligently designed* fitness function can do in an evolutionary simulation. It seems that they have no understanding of intermediate steps that are not selectable by natural means. The fitness function in a computer simulation has the power to *foresee*. When there is a slight modification, the simulation can intelligently detect if it is another step in a predetermined pathway leading to something better or not. However, nature cannot forsee anything. In real environment all of the slight modifications in each step should yield an increase in reproductive output of the organism. Now assume a mutant bird is evolving and we are transiting from step(n) to step(n+1) in wing developement stage. If a mutation slightly expands the effective area of the wings then an *intelligent fitness function* can select it because the function can foresee the fact that slightly larger wings can lead to completed wings in future generations. 
Evidently, the fitness function inherits the intelligence of its programmer who knows what a completed wing should really look like. On the contrary, in real environment the aerodynamical improvement gained by that slight modification would be insignificant or none at all. Its effect on reproductive output would be negligible and nature could not even notice its existance. Go further and add some environmental noise and our slight modification will be totally ignored by the environment. This is the difference between having an *intelligent selector* that can intelligently and immediatly decide on fate of a mutation and a *natural selection* which is powerless in picking intermediates which are not naturally selectable. Farshad
DaveScot
Gil, Your "Hello World" program's existence is even more wildly improbable to arrive at by chance and selection than you have stated. It is written in C. This presumes some BNR definition of the C language to begin with. How did that come about by chance and selection? It assumes a compiler written in some other language that can compile and interpret the C code. How did that come about by chance and selection? The compiler runs on an complex specified operating system, which runs on complex hardware which is composed of metallurgical, mineral, and plastic complexities that ..... well you get the picture. Multiply the probability of a "Hello World" program by the probability of the pyramid below it and you can come close to the un-stateable improbability of the world your program is trying to greet. Stu Harris www.theidbookstore.com StuartHarris
And don't forget to put a couple rounds from a Glock nine through the motherboard once in a while to simulate meteor strikes that cause global extinctions. :lol: I kill me sometimes. DaveScot
This is all so incredibly naive. All these programs have fitness functions, which are explicitly a direction given by an intelligent agent. Nature has no fitness function. Nature, or rather Darwin's version of nature, doesn't give a flying flop if anything is alive or not. In fact any student of nature knows that the rule is utter sterility. Everywhere we look, other than the thin skin of the planet Earth, is a completely sterile environment, and nature is as happy as a clam with nothing alive. So get rid of all fitness evaluations in these so-called simulations of evolution and see what happens. :roll: -ds DaveScot
2 simple questions: 1. Does running this generator ALWAYS result in a program which displays Hello World? Which leads to... 2. If it truly is based upon the principles of RM+NS, why couldn't it generate a program that displays a large variety of pithy comments including "I AM DAVESCOT/GOD! FEAR ME PUNY HUMAN!"? (And if the program ever did display that I'd start begging Dave not to do anything naughty while hacking my PC. ;) ) I'd really like to see the source code for this Hello World program generator. If it's anything like the "Methinks it is like a weasel" program then I propose a simple "survivability check": if after an iteration the resulting program fails to compile and execute, then the variations that were "favored" don't survive to the next iteration. Oh, what's that you say? The generator no longer produces anything? Reality sucks, don't it. Patrick
Raevmo: "What about ConwayÃ¢â‚¬â„¢s game of life?" Conway's Game of Life is law-based:
What mathematicians call functions, and what scientists prefer to call laws cannot explain the origin of CSI. The problem is that laws are deterministic and thus cannot yield contingency, without which there can be no information. The problem with laws is that they invariably yield only a single live possibility. Take a computer algorithm that performs addition. Let us say the algorithm has a correctness proof so that it performs additions correctly. Given the input data 2 + 2, can the algorithm output anything but 4? Computer programs are wholly deterministic. They allow for no contingency and thus can generate no information. At best, therefore, laws shift information around, or lose it, as when data gets compressed. What laws cannot do is produce contingency; and without contingency they cannot generate information, to say nothing of complex specified information. -- William A. Dembski, Intelligent Design, p. 165.
Given arbitrary starting patterns, Conway's Game of Life just creates pretty patterns. [Interestingly, it's possible to use it as a universal Turing machine, but that requires intelligent design:
von Neumann...had realized -- and proved -- that a universal Turing machine (a Turing machine that can compute any computable function at all) could in principle be "built" in a two-dimensional world. Conway and his students also set out to confirm this with their own exercise in two-dimensional engineering. It was far from easy, but they showed how they could "build" a working computer out of simpler Life forms. Glider streams can provide the input-output "tape," for instance, and the tape reader can be some huge assembly of eaters, gliders, and other bits and pieces. What does this machine look like? Poundstone calculates that the whole construction would be on the order of 10^13 cells or pixels. Displaying a 10^13 pixel pattern would require a video screen about 3 million pixels across at least. -- Daniel C. Dennett, Darwin's Dangerous Idea]
j
apollo230: "Gil, PandaÃ¢â‚¬â„¢s Thumb has responded to you with their own thread:" I am pleased that the Panda's Thumb crowd has created a thread designed to refute my arguments. This is indicative of the fact that a very sensitive nerve has been stricken, and that they are in a state of desperation to defend the indefensible. GilDodgen
Raevmo: "What about Conway's game of life? There is no externally imposed "fitness function", just a few very simple rules that govern the interaction between neighboring cells. Yet I believe it has been shown that there are configurations that increase in complexity forever." The same might be said of the Mandelbrot set, or the spread of frost across a window. The complexity seems to increase without limit, but in fact it is a mirage. These are all contingent situations: the game of life proceeds from a few simple rules and the initial conditions, and the results never differ, given the same starting point. There is never an increase in useful (specified) information. The crystal gets larger, but nothing truly informative ever occurs. Novelty without function is noise. The set of truly functional novel situations is so small in comparison with the total possible number of situations that they will never occur, which is the point of the original post. SCheesman
"The basic idea is to start with a simple configuration of counters (organisms), one to a cell, then observe how it changes as you apply Conway's "genetic laws" for births, deaths, and survivals. Conway chose his rules carefully, after a long period of experimentation, to meet three desiderata: There should be no initial pattern for which there is a simple proof that the population can grow without limit. There should be initial patterns that apparently do grow without limit. There should be simple initial patterns that grow and change for a considerable period of time before coming to an end in three possible ways: fading away completely (from overcrowding or becoming too sparse), settling into a stable configuration that remains unchanged thereafter, or entering an oscillating phase in which they repeat an endless cycle of two or more periods." http://www.ibiblio.org/lifepatterns/october1970.html It seems Conway's Life game was intelligently designed to produce interesting results. It is a type of front loading within the laws of nature. This may be one way the Intelligent Designer did it. Are we saying that Conway was not an intelligent designer? idnet.com.au
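Since the rules are spelled out in the quote above (birth on exactly three live neighbors, survival on two or three), one generation of Conway's Life can be sketched in a few lines of C. This is an illustrative toy on a small bounded grid of assumed dimensions, not Conway's or Poundstone's implementation (the real game is played on an unbounded plane):

```c
#include <string.h>

#define W 8
#define H 8

/* Count the live neighbors of cell (y, x) on a bounded W x H grid. */
static int neighbors(const int g[H][W], int y, int x)
{
    int n = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++) {
            if (dy == 0 && dx == 0) continue;
            int ny = y + dy, nx = x + dx;
            if (ny >= 0 && ny < H && nx >= 0 && nx < W)
                n += g[ny][nx];
        }
    return n;
}

/* One generation of Conway's "genetic laws":
   a dead cell is born with exactly 3 neighbors,
   a live cell survives with 2 or 3 neighbors. */
void step(int g[H][W])
{
    int next[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int n = neighbors(g, y, x);
            next[y][x] = (n == 3) || (g[y][x] && n == 2);
        }
    memcpy(g, next, sizeof next);
}
```

A horizontal "blinker" (three live cells in a row) illustrates the third desideratum: under these rules it enters an oscillating phase, flipping between horizontal and vertical every generation.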

A gentleman named RBH has posted the following on a Panda's Thumb thread mounted to respond to Gil's original post:

http://www.pandasthumb.org/archives/2006/06/evolution_of_co_2.html

My personal critiques of his statements are in parentheses:

RBH:
"Discussions of computer models of evolutionary processes typically dissolve into confusion due to the failure to carefully distinguish between two kinds of models that differ in the information used to calculate fitness.

1. Models with global fitness calculations. These are Dawkinsian METHINKSITISLIKEAWEASEL sorts of models, where the fitness of a replicator is calculated as the distance of its phenotype from some target phenotype. The fitness equation "knows" the target state, and replicators are more or less fit (and therefore survive to replicate and/or recombine) based on relative similarity (e.g. the Hamming distance) to that target state. These kinds of models are not models of biological evolution, and claims that they are such models flatly misconstrue biological evolution. However, they are useful in demonstrating the power of cumulative selection, which is all Dawkins sought to do with his METHINKS illustration. He explicitly said that the METHINKS program was not a model of evolution, but only of cumulative selection and its power to transform tiny probability into high probability. Creationists have consistently and persistently misconstrued that program since it was published, and Dodgen's post is yet another example of that misconstrual."

(the Dawkins algorithm aims to converge a random string drawn from a 26-letter alphabet to the target phrase METHINKSITISLIKEAWEASEL in a series of trials. It does so quite rapidly because each trial is compared to the target phrase and the matching letters are retained for the next round. Since when do Darwinistic processes have targets?

Additionally, RBH claims that these algorithms are not models of biological evolution but "only of cumulative selection and its power to transform tiny probability into high probability". One of Darwinian theory's fundamental claims is that undirected biological evolution has the power to climb such "Mount Improbables" (Dawkins' phrase). All RBH has done really is to change the spelling of "biological evolution" to "cumulative selection" in his phrasing - apollo230)

RBH:

(here the claim is that the trader's phenotype would not be the target of the proposed algorithm, but rather the set of properties that would comprise a good trader. Seems to me that in algorithm-land, there is no real distinction between the ideal trader and the properties of such a creature. These targets are effectively identical (correct me if I am wrong). And again, there is a defined target - not allowed under Darwinistic rules. - apollo230)

RBH:
"Biological evolution is an algorithm of the second sort. The algorithm does not "know" a target phenotype in order to determine fitness on the basis of similarity to that target phenotype. Rather, the algorithm of biological evolution "knows" only locally determined fitness, where fitness is "calculated" implicitly as survival and relative reproductive success of the actual replicators in the population in a specific environment composed of physical variables and biological variables (conspecifics and other species).
As a consequence, any algorithm that incorporates a fitness calculation that refers to some phenotype (or genotype) not currently in the population is not a model of biological evolution. Biological evolution "knows" what's better or worse in the current population only by virtue of the differential survival and reproduction of the members of that population; it does not "know" an optimal phenotype or genotype toward which it should evolve."

(the "locally determined fitness" is itself a target, and these algorithms are made to conveniently converge onto these destinations through "strictly random processes" - never mind the computer code that (behind the figurative curtain) converges the process to the target.

Another fundamental flaw here is that to simulate actual biological evolution one must use actual genetic systems and actual creatures. The notion that flesh-and-blood animals can be reduced to computer code and still be called creatures effectively confuses virtual reality with the authentic biosphere.

The Darwinists have used carefully designed computer games to declare victory for RM/NS on some tiny little speck - a microchip. Then, through mental sleight-of-hand, they expand this dwarfish province into a grand macrocosm called Earth. - apollo230)

Then, through mental sleight-of-hand, they expand this dwarfish province into a grand macrocosm called Earth.

Well sure. And why shouldn't they? These are the same guys that extrapolate the mechanism that causes finch beaks to enlarge or moth wings to darken into a mechanism able to cause bacteria to become baboons. If you're willing to make that leap of the imagination there's not many leaps you won't make. I think Dawkins may be in hot water for copyright infringement. METHINKSITISLIKEAWEASEL is ripped off from the game show "Wheel of Fortune". -ds

apollo230
VOICEofREASON: "All that was asked was that the computer program create the Hello World program through RM + NS. The program does EXACTLY this." Actually, it must have used automated artificial selection, not natural selection. :lol: ------ Mark Frank: "Natural selection requires that each step be functional but it does *not* require it to be a step towards a predetermined goal. That's the whole point." Exactly. And when there's no predetermined goal, complex specified information isn't generated. j
What about Conway's game of life? There is no externally imposed "fitness function", just a few very simple rules that govern the interaction between neighboring cells. Yet I believe it has been shown that there are configurations that increase in complexity forever. Besides the game of life, there are plenty of simulations without external fitness functions that show how genetic information increases over time, the only external input being a random number generator, and without any predefined goals. In other words, there is no doubt that RM+NS *can* create novelty. Why is this so hard to accept? I thought ID had no problem accepting "microevolution", where organisms can be modified slightly and adapt to their environment. If you accept that, then you have to accept that genomes can increase in complexity. Adaptation means that the genome incorporates information about the environment, almost by definition. If a genome in that sense copies information from the environment, it could be argued there is no net increase in information in the system as a whole. So conservation of information, everybody happy. Raevmo
Darwinists are not able (or willing) to recognize design either in nature or in their algorithms. apollo230
Avida, Weasel and other simulations are desperate attempts by Darwinists to prove the evolutionary mechanism of RM+NS using computer simulations. At the heart of all these simulations you see a fitness function that *intelligently* selects what should be selected and what should not. The *designers* of all of these programs subtly feed their systems with external intelligence which is not normally available in nature. All similar simulations that claim to correctly mimic nature are using faulty logic. They are deluding themselves into believing that they could invent a perpetual motion machine, while remaining unaware that they are subtly feeding the machine with an external source of energy. In this case the perpetual motion machine is the simulation of Darwinian evolution and the *external energy feed* is the intelligence that the programmer puts into the fitness function of the simulation. In reality there is no such source of intelligence available in nature that could replace the fitness functions we see in software simulations. Consequently, there is no simulation of this kind that can expand its complexity forever. All of these simulations are designed to converge to a certain predefined goal. For Dawkins' program the convergence limit was the sentence "Methinks it is like a weasel". The convergence limit is directly related to the amount of *intelligence* that the fitness function inherits from its programmer. Farshad
I vaguely remember studying genetically modified bacteria that were used to 'digest' toxic spills. Their metabolic pathways were engineered so that they could digest chemicals like toluene; they were introduced to a toxic site, would clean it up, and then the introduced population would die out. They died out due to having to constantly produce proteins that could no longer break down their target chemical, as it had been used up. The burden of producing an unused protein was too much and spelled the end, even though the Frankenstein bugs could use other sources of nutrition. The point is that every point mutation alters the message and alters the product. Any computer simulation must have a high degree of resolution: for example, one that takes into account changes in protein 3D shape through addition, subtraction, or substitution of amino acids; redundancy of the genetic code; and interaction of a new product with all the various existing proteins in a cell, just to name a few - never mind multicellular organisms! I think this type of resolution is not currently available, and it would be needed before boasting that a simulation can do the things Avida apparently claims. WormHerder

GilDodgen wrote:
"I would be curious to see the intimate details of the Panda's Thumb program. I'll bet dollars to donuts that the programmer cheated by defining intermediate fitness goals with the Hello World program in mind. RM+NS in the natural world can't work this way, because it is undirected and without a goal. It is not just blind, but comatose."

If you read Crepeau's paper, you'll see that he is not trying to model real-world natural selection. He is investigating the feasibility of evolving machine language software by random mutation and artificial selection. It's hardly fair to accuse him of cheating "by defining intermediate fitness goals with the Hello World program in mind," when the whole point of the experiment was to openly favor those variations that came closer to producing the "Hello World" string.

Regarding the claims that the program "smuggles" in teleology, the only two fitness criteria it uses are 1) the Hamming distance of the output string from the goal string of "Hello world", and 2) the length of the program producing the string. Nothing in these fitness criteria tells the genetic algorithm how to find solutions to the problem, and in fact different runs will come up with different solutions. It is simply incorrect to argue that the solution is somehow implicit in the fitness criterion and is thereby being smuggled in.

1) the Hamming distance of the output string from the goal string of "Hello world" Oh is that all. And here I thought it was being directed towards a certain goal like "Hello World". Does the program get to buy vowels? Does Vanna White flip the letters over when the program guesses? -ds zapatero
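The first fitness criterion named above, the Hamming distance of the output from the goal string, is easy to state in code. A minimal C sketch; how unequal lengths are handled here (each extra character counts as one unit of distance) is my assumption for illustration, not something taken from Crepeau's paper:

```c
#include <stddef.h>
#include <string.h>

/* Character-wise Hamming distance between two strings.  Positions
   beyond the shorter string's end each count as one unit of distance
   (an assumed convention for unequal lengths). */
size_t hamming(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    size_t shorter = la < lb ? la : lb;
    size_t longer  = la < lb ? lb : la;
    size_t d = longer - shorter;          /* penalty for length mismatch */
    for (size_t i = 0; i < shorter; i++)
        if (a[i] != b[i])
            d++;                           /* penalty per mismatched char */
    return d;
}
```

A genetic algorithm of the kind described would rank candidate outputs by this distance, with zero meaning an exact match to "Hello world".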
johnnyb - Have a look at Ray's "Tierra" a-life package. This may meet the criteria you're looking for. It would be interesting to see if his results were replicable using a standard instruction set (e.g. x86), a bare-bones microkernel (e.g. L4) and a minimal supervisor process (to cause random changes to memory locations and provide periodic process/memory dumps). Patrick Caldon
"Re #15. I know very little about these programs but my guess is that you are asking them to be a complete simulation of evolution when all they are doing is illustrating some aspect of how complexity can be achieved through trial and success. After all the example that kicked off this thread was hardly an accurate representation of life and yet you felt that there were lessons to be learned from it. Comment by Mark Frank - June 11, 2006 @ 3:28 pm" Here's what AVIDA claims to do: http://dllab.caltech.edu/avida/about.shtml "Avida is an auto-adaptive genetic system designed primarily for use as a platform in Digital or Artificial Life research. The Avida system is based on concepts similar to those employed by the tierra program developed by Tom Ray. In lay terms, Avida is a digital world in which simple computer programs mutate and evolve. More technically, it is a population of self-reproducing strings with a Turing-complete genetic basis subjected to Poisson-random mutations. The population adapts to the combination of an intrinsic fitness landscape (self-reproduction) and an externally imposed (extrinsic) fitness function provided by the researcher. By studying this system, one can examine evolutionary adaptation, general traits of living systems (such as self-organization), and other issues pertaining to theoretical or evolutionary biology and dynamic systems. The power of Avida is that it gives us a controllable digital system in which to study the theories of evolutionary biology. Often, we can study elements of evolutionary theory that are difficult or impossible in biological systems." russ
Mark, Isn't the point that even the very best programs we can write cannot create even moderate complexity through trial and success without stacking the deck in their favor? Then imagine how the complexity of life, which is 10 to a very large exponent more complex than any of the outputs of these computer programs, could ever arise without the deck also being stacked - say, by an intelligent designer. jerry
"I know very little about these programs but my guess is that you are asking them to be a complete simulation of evolution when all they are doing is illustrating some aspect of how complexity can be achieved through trial and success." The point is that even if they are a small example of one aspect, the aspect we are asking about is the ability to create specified complexity ex nihilo. I haven't read the methodology on the PT paper yet, but yes, in almost everything the teleology is invariably being snuck in somewhere. I appreciated the use of standard hardware, though. Interestingly, though, no genetic algorithm (that I know of) has yet been attempted where the genetic algorithm itself could be the subject of mutation. johnnyb
Re #15. I know very little about these programs but my guess is that you are asking them to be a complete simulation of evolution when all they are doing is illustrating some aspect of how complexity can be achieved through trial and success. After all the example that kicked off this thread was hardly an accurate representation of life and yet you felt that there were lessons to be learned from it. Mark Frank
Re #12: I understand that natural selection requires each step to be functional but not a step toward a predetermined goal. That is my point about why these programs are invalid. Teleology is invariably smuggled into the algorithms. Goals and fitness criteria suitable to reach them are predefined (although sometimes subtly), and intermediate islands of "function" are rigged to be easily reachable by trial and error. The bottom line is that the theory of random mutation and natural selection is dead as an explanation for the origin of life's complexity, diversity, information content and functionally integrated machinery. Actually, it isn't even a theory; it's wildly wishful speculation that flies in the face of common sense and hopelessly huge improbabilities. There isn't a shred of evidence that RM+NS has the creative power attributed to it. This is not science. But it's all the Darwinists have, so it will be defended to the death by any means available, no matter what. GilDodgen
Go away, Blipey (VOICEofREASON). You were one of the first people I put on the blacklist here and nothing has changed. Even before I knew it was you again I didn't approve anything you wrote under this new alias. You are wasting your time. I don't read past the first line of anything you write anymore and no one else sees it at all. DaveScot
Hi Mark, I agree; that's why I think the program is too simplistic - each step must represent a functional possible endpoint, and each change must also lead to the possibility of differing functions (as demonstrated by co-option and differentiation), because we do not know when the process will terminate. I was asking if that had been factored into the analogy - I don't think it has, and so it would need more complexity, leading to a lower probability. WormHerder
Re #10. I think it is the other way round. Natural selection requires that each step be functional but it does *not* require it to be a step towards a predetermined goal. That's the whole point. Mark Frank
Hi Guys, Correct me please if I am wrong (I'm sure you will), but the 'Hello World' program analogy is inadequate for the task of modeling naturalistic evolutionary theory; it is way too simplistic. EACH change of the program must demonstrate a step towards the predetermined goal of the program. This is difficult enough, but within naturalistic evolutionary theory, each individual change, no matter how small, must not only be a step in the right direction (retrospectively) but also be functional, and a possible end in itself, as the process is apparently blind. How do you factor usefulness at each step into this analogy, given that the end of the process is unknown? Is it enough to say that each change towards 'Hello World' is analogous to each functional change in naturalistic evolutionary theory? Every small change would possibly indicate a different function (co-option), so I think more complexity needs to be added to this program, which of course would result in lower probability. WormHerder
I would be curious to see the intimate details of the Panda's Thumb program. I'll bet dollars to donuts that the programmer cheated by defining intermediate fitness goals with the Hello World program in mind. RM+NS in the natural world can't work this way, because it is undirected and without a goal. It is not just blind, but comatose. Check out my comments on question-begging computer simulations here: https://uncommondesc.wpengine.com/index.php/archives/802 and Eric Anderson's article here: http://evolutiondebate.info/BitByte.pdf GilDodgen
Hello, Dave! I did not mean to say that Panda's Thumb's response was effective or substantive. I was only giving Gil a friendly "heads-up" that PT had noticed his post and was responding to him. If one uses the Law of Conservation of Information as a book-keeping device (per William), all these over-achieving evolutionary algorithms are indeed shown to be front-loaded one way or another. (Garbage in, garbage out) or (intelligence in, intelligence out) are indeed the only two fundamental designs algorithms can have. Neo-Darwinists would have us follow the Wizard of Oz's immortal admonition when we evaluate these algorithmic "vindications" of Darwinism: "DON'T LOOK AT THAT MAN BEHIND THE CURTAIN!" :) apollo230
[troll]

How is Gil's claim anything more than your incomplete assessment of PvM's?
It is okay to have Gil put up empty posts, but not anyone else?
Which, of course, brings us to the question of whether or not Pim's post is actually empty.

All that was asked was that the computer program create the Hello World program through RM + NS. The program does EXACTLY this. If you would like to define the parameters of selection, you are free to do so, of course. Now, to be fair, so is anyone else. It would be nice if Pim told us what the parameters of his program were. It would also be nice if you, DaveScot, would present what you think the parameters should be. Then we look at each set and see how well they model a real-life situation.

Either way, the argument you have is not with the program itself, but with what fitness criteria were used. This, at best, highlights problems with an experimenter, and not with the experiment itself.

Now, axe this perfectly legitimate post, which is both lucid and polite. Of course, you aren't looking for lucid posts, or posts that mean anything in general. You are looking to have your ass licked.

VOICEofREASON
But nightlight, have you heard, the Neo-Darwinians have the TRUTH! Therefore, they deserve priority seating at the center of the Great Speck! (make sure your sarcasm-meter is on!) Best regards, apollo230 :)) apollo230

Gil, Panda's Thumb has responded to you with their own thread:

http://www.pandasthumb.org/archives/2006/06/evolution_of_co_2.html

Best regards,
apollo230

The response is empty. Pim Van Meurs cites a program (written by intelligent agents I presume) that can create a "Hello World" program from some unspecified genetic algorithm. The way this is accomplished is not disclosed and if it were disclosed I'm sure we'd find the program is cheating by sneaking information in via the filter which ranks the "fitness" of the intermediate outputs. This is nothing more than trial and error. Finding solutions by trial and error is nothing new. Building upon partial successes is an obvious extension of the process. This process of trial and error, and building upon partial successes, is likely as old as humanity itself. Trial and error and the choosing of partially successful intermediate solutions requires purpose and direction. These are supplied by the programmers of the so-called genetic algorithms. -ds apollo230
-- quote: Now one might ask, What is the chance of producing, by random mutation and natural selection, the digital computer program that is the DNA molecule, not to mention the protein synthesis machinery and information-processing mechanism, all of which is mutually interdependent for function and survival? -------- It seems even more striking to ask what are the chances of random mutations & natural selection not just building the DNA with its biochemical properties, but building it so that this DNA then gathers and arranges large clusters of other molecules in just such a clever way that these clusters then end up moving together still other molecules from around the Earth, from the air down to the depths under the mountains, arranging them into computers and writing programs for them, from 'Hello World' through the rest of computer software, along with arranging yet other molecules into the rest of science and technology. The neo-Darwinians claim they can 'scientifically' encircle and delimit, somehow, a particular very, very tiny space-time & matter-energy speck of this whole vast process (which, as all agree, is manifestly an intelligent process), then declare the movements of this particular speck to be the sole seat of all of the "intelligence" of the entire process, while the rest of the process merely operates by 'dumb luck'. Further, this tiny intelligent speck, the sole seat of all of the intelligence, as luck would have it, happens to contain (presumably, at its very center) the neo-Darwinians themselves. Yeah, sure. nightlight
SCheesman makes an interesting point, but the thrust of my post is to reveal how quickly combinatorics create insurmountable probabilistic barriers, even in simple systems. Actually, the problem is much worse than I describe, because before you can even write a Hello World program you need the computing hardware, the programming language and its conventions, the editor, libraries of precompiled subroutines, header files, and the compiler and linker. Living things not only need DNA code, but a whole lot of tightly functionally integrated machinery to make use of it. GilDodgen
SCheesman -- I think that the issue is that the cell has to solve BOTH problems, not to mention creating the underlying machine to execute the problem. johnnyb
The only thing that baffles me is the fact that Darwinists are baffled by the fact that most people don't buy their blind-watchmaker storytelling. There is a great thread at FreeRepublic (http://www.freerepublic.com/focus/f-news/1300661/posts) concerning a 2003 speech by Michael Crichton. It includes:
There is no such thing as consensus science. If it's consensus, it isn't science. If it's science, it isn't consensus. Period. In addition, let me remind you that the track record of the consensus is nothing to be proud of. Let's review a few cases. In past centuries, the greatest killer of women was fever following childbirth. One woman in six died of this fever. In 1795, Alexander Gordon of Aberdeen suggested that the fevers were infectious processes, and he was able to cure them. The consensus said no. In 1843, Oliver Wendell Holmes claimed puerperal fever was contagious, and presented compelling evidence. The consensus said no. In 1849, Semmelweis demonstrated that sanitary techniques virtually eliminated puerperal fever in hospitals under his management. The consensus said he was a Jew, ignored him, and dismissed him from his post. There was in fact no agreement on puerperal fever until the start of the twentieth century. Thus the consensus took one hundred and twenty five years to arrive at the right conclusion despite the efforts of the prominent "skeptics" around the world, skeptics who were demeaned and ignored. And despite the constant ongoing deaths of women.
tribune7
I completely agree with the main thrust of the argument here, as someone who writes scientific software for a living. However, for the purpose of the argument, you might want to consider re-casting it in terms of the available operators and arguments (e.g. "printf()", "return", "int", etc.), not the letters which spell them out. For instance, the "return" operator is one among "N" possible operators in the "C" language, so the probability of its appearance on the last line, as opposed to any other operator, would be about 1/N, assuming all operators were equally probable. To this you'd have to factor in the odds of it NOT appearing anywhere else, which would cause premature termination. The chance of spelling "return" out of the available letters seems to me a completely different problem. Perhaps it would be better to consider the compiled code instead, once the C representations of the arguments have been replaced by their more fundamental units, or even to look at the binary representation of the executable. In any case, any programmer relying on Darwinian processes to write better code would experience rapid, unnatural de-selection. SCheesman