Uncommon Descent: Serving The Intelligent Design Community

Writing Computer Programs by Random Mutation and Natural Selection


The first computer program every student writes is called a “Hello World” program. It is a simple program that prints “Hello World!” on the screen when executed. In the course of writing this bit of code, one learns to use the text editor and to compile, link, and execute a program in a given programming environment.

Here’s a Hello World program in the C programming language:


#include <stdio.h>

int main(void)
{
    printf("Hello World!\n");
    return(0);
}

This program comprises 66 non-white-space text characters. The C language uses almost every character on the keyboard, but to be generous in my calculations I’ll assume we need only the 26 lower-case alphabetic characters. How many 66-character combinations are there? The answer is 26 raised to the 66th power, or 26^66. That’s roughly 2.4 x 10^93 (10^93 is 1 followed by 93 zeros).

To get a feel for this number: there are an estimated 10^80 subatomic particles in the known universe, so there are as many 66-character combinations in our example as there are subatomic particles in 10 trillion universes. By comparison, there are only about 4 x 10^17 seconds in the history of the universe, assuming that the universe is 13 billion years old.
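
For readers who want to check the arithmetic, a few lines of C reproduce it. This is a minimal sketch; double precision is adequate here because 26^66 is far below the overflow threshold of roughly 1.8 x 10^308.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double combos = pow(26.0, 66.0);              /* 26 choices for each of 66 slots */
    double particles = 1e80;                      /* estimated particles in the known universe */
    double seconds = 13e9 * 365.25 * 24 * 3600;   /* ~4.1e17 seconds in 13 billion years */

    printf("26^66                    = %.2e\n", combos);             /* ~2.44e93 */
    printf("combinations / particles = %.2e\n", combos / particles); /* ~2.4e13, i.e. trillions */
    printf("seconds in 13 Gyr        = %.2e\n", seconds);
    return 0;
}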

What is the probability of arriving at our Hello World program by random mutation and natural selection? How many simpler precursors are functional, what gaps must be crossed to arrive at those islands of function, and how many simultaneous random changes must be made to cross those gaps? How many random variants of these 66 characters will compile? How many will link and execute at all, or execute without fatal errors? Assuming that our program has already been written, what is the chance of evolving it into another, more complex program that will compile, link, execute and produce meaningful output?

I can’t answer these questions, but this example should give you a feel for the unfathomable probabilistic hurdles that must be overcome to produce the simplest of all computer programs by Darwinian mechanisms.
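
Though these questions resist exact analysis, one of them invites a crude experiment. The sketch below is hypothetical and assumes a POSIX-style system with a cc compiler on the PATH: it applies a single random point mutation to the Hello World source and counts how many mutants still compile. Anyone curious can extend it to several simultaneous mutations and watch how the survival rate changes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* The six-line Hello World program, stored as a string. */
static const char *source =
    "#include <stdio.h>\n"
    "int main(void)\n"
    "{\n"
    "    printf(\"Hello World!\\n\");\n"
    "    return(0);\n"
    "}\n";

int main(void)
{
    srand((unsigned)time(NULL));
    size_t len = strlen(source);
    int trials = 500, compiled = 0;

    for (int t = 0; t < trials; t++) {
        char mutant[256];
        strcpy(mutant, source);
        /* One random point mutation: overwrite one position with a
           random lower-case letter, matching the 26-letter assumption. */
        mutant[rand() % len] = (char)('a' + rand() % 26);

        FILE *f = fopen("mutant.c", "w");
        if (!f) return 1;
        fputs(mutant, f);
        fclose(f);

        /* A mutant "survives" only if it still compiles. */
        if (system("cc -o mutant mutant.c 2>/dev/null") == 0)
            compiled++;
    }
    printf("%d of %d single-character mutants compiled\n", compiled, trials);
    return 0;
}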

Now one might ask: what is the chance of producing, by random mutation and natural selection, the digital computer program that is the DNA molecule, not to mention the protein-synthesis machinery and information-processing mechanisms, all of which are mutually interdependent for function and survival?

The only thing that baffles me is that Darwinists are baffled that most people don’t buy their blind-watchmaker storytelling.

Comments
DaveScot: This species concept is widely challenged because it is not widely applicable. It doesn't work for bacteria, hermaphrodites, and so on. Good examples exist in the wild at a global scale, involving amphibians and their parasites (involuntary reproductive isolation via geographical isolation; it is not yet published, I have only seen it presented at conferences), and in seabirds at a local scale (http://scholar.google.fr/scholar?num=20&hl=fr&lr=&cluster=15528891299310585283). Some papers dealing with mosquito species complexes in Africa discuss reproductive isolation in sympatry. You can also find examples of sympatric speciation in parasites (there is a paper by McCoy in Trends in Parasitology titled "What is sympatry?" that you should definitely read).
finchy
September 23, 2008 at 1:54 AM PDT
finchy: There are lots of examples of speciation in the lab that could be given. Drosophila is relatively easy to coax into voluntary reproductive isolation. However, under the biological definition of species they must be involuntarily isolated, i.e., not cross-fertile, producing at best sterile hybrids. As I recall there's at least one example of that too in Drosophila, although it's difficult to tell whether cross-infertility is absolute or merely greatly reduced. trrll should have been able to give a well-documented and widely published example that we could look at specifically.
DaveScot
September 22, 2008 at 10:04 AM PDT
@DaveScot
"What novel species were created in the laboratory evolution you mention and how was it determined these never evolved in nature before? I'll need links to support your claims this time. This will be your last comment until you successfully support those claims so don't even bother with anything else."
I've already read such papers. This one (http://cat.inist.fr/?aModele=afficheN&cpsidt=14390342) discusses interaction-induced speciation. Just send me an email if you want a PDF. I also have experimental and modelling papers on similar systems (I'm afraid trrll is at least partially correct in his claims); I'll gladly send PDFs if you feel like reading (the maths are pretty hard, though).
finchy
September 22, 2008 at 4:21 AM PDT
Maybe you think of 'function' as a sum of small functions (print, for, etc.), whereas biological function is primarily determined by structure, that is, by the 'message' itself, in which 'print' and 'prind' would be equivalent because of redundancy, chemical equivalence between amino acids, and the like. I'm afraid your algorithm/genetic-material comparison is far too simple. That would be the one argument against your point of view.
finchy
September 22, 2008 at 4:16 AM PDT
[...] at Telic Thoughts Bradford resurrected a discussion based on my UD essay, Writing Computer Programs by Random Mutation and Natural Selection. In reference to the quote, “The set of truly functional novel situations is so small in [...]
Mathematics and Darwinism — Plus a Math Problem to Solve | Uncommon Descent
September 21, 2008 at 7:56 PM PDT
[...] Descent blog entry I had come across some time ago and subsequently forgotten. Gil Dodgen wrote Writing Computer Programs by Random Mutation and Natural Selection. There were a number of interesting comments and numbered among the commenters was at least one [...]
Getting With the Program - Telic Thoughts
September 20, 2008 at 8:52 PM PDT
[...] readers might also like to check out my essays on the obstacles presented by combinatoric explosion, and the willingness of Darwinists to accept storytelling as fact, with absolutely no analytical [...]
Gil’s Involvement With The EIL | Uncommon Descent
December 2, 2007 at 8:39 PM PDT
Hi Dave

Re 58: I suspect there is an underlying dynamic: many people are not fully aware of a key problem with the logic of implication. As my old math prof, Harald Niederriter, was fond of putting it: ex falso quodlibet. That is, implication is such that an in fact false antecedent can give rise to true consequents, so that implication is not at all tantamount to equivalence. But it can also give rise to false consequents, and that is why empirical refutation is so important in the real world.

Modelling uses this intentionally: we set up a "simplified" analogue for reality and use it to "predict" consequences; then, if we are confident in the models, we believe and act on the results of that process. But why should we trust the models? ANS: since no model is better than its assumptions, input data and algorithms [GIGO . . .], we first look for plausibility there. Then, we validate, i.e. test the model against the empirical world. If it survives long enough, we trust it even where we cannot trace it. But a "simplified" analogue is of necessity strictly FALSE to fact. (The point is, we test it to be confident of its robustness. And of course that is precisely what has happened with the nuke reactor modelling, which is based on a lot of serious physics and empirical observation over decades and hundreds of cases of reactors. Even so, sometimes things go wrong, as at Sellafield, Chernobyl and Three Mile Island.)

Observe as well how in the linked case there is a built-in targeting of improvements and a scan across a candidate list of designs which are then promoted based on performance metrics. This is intelligently directed, artificial testing [maybe with some Monte Carlo runs on parameters], not at all natural selection. And therein lieth the fallacy. Oddly, there is credible evidence that there were natural reactors in appropriate ore bodies. Imagine randomly setting up parameters that spontaneously get from that natural process to a sophisticated improved PBMR reactor -- every intervening "design" being functional and safe from meltdowns etc.!

BTW, evolution is not to be confused with NDT-style macro-evolution, which has to start with some sort of realistic prebiotic soup model and credibly get to the first functional life form. I have lower confidence in getting to life and then to major body plans and thus to the biodiversity we observe, than in the above natural-reactor-to-PBMR evolution by computer simulation! At least for the latter we know that a natural reactor is not improbable once we have the ore concentration. (And a nuke reactor is far, far more structurally simple than a DNA-based life form.)

So, it is entirely possible that von Neumann accepted that evolutionary mechanisms account for the development of life but had very low confidence that they were NDT-based. [His threshold for a self-replicating automaton is in fact very high! Think about a machine that has to have in it the blueprint for itself and the self-assembling machines that then create itself from that blueprint . . . where did the self-assembling machines to interpret and implement the blueprint come from? The language for the blueprint? Etc., etc.? I think I see either an infinite regress, or else that somewhere at some point some very sophisticated things were set up externally -- the notion that they could set themselves up by chance and natural regularities simply rapidly exhausts probabilistic resources -- which was the original "Hello World" point way back up there.

All the red herrings dragged across the track to lead out to conveniently combustible strawmen put up by the evo mat advocates at PT etc. notwithstanding. And worse, the brightly burning strawmen so hopefully ignited by the Thumbsters have been rapidly doused before they could cloud and poison the atmosphere. Cf my always linked. H'mm: is that why Denise was talking to Bill about the pay raise for the ever so useful Thumbsters?]

GEM of TKI
kairosfocus
July 29, 2007 at 4:06 AM PDT
I'm not interested in computer simulations unless they're modeling something that can be tested in the real world to verify the model. Nuclear weapons are tested in computer simulations. The simulations are known to accurately model the weapons because the simulation results were compared to reality and found to be accurate. I doubt you can point me to a computer simulation that models bacterial evolution and comes out with testable results of new species. What novel species were created in the laboratory evolution you mention and how was it determined these never evolved in nature before? I'll need links to support your claims this time. This will be your last comment until you successfully support those claims so don't even bother with anything else.
DaveScot
September 16, 2006 at 10:35 AM PDT
"Is that a completely vacuous positive claim or do you have in mind some way of testing it?"
Of course. The prediction that evolution frequently leads to different outcomes from the same starting point is readily tested (and indeed has been repeatedly tested) in computer simulations, as well as in small-scale laboratory evolution experiments with microorganisms.
trrll
September 16, 2006 at 10:17 AM PDT
trrll: "If evolution were run again from the same starting conditions, it would produce completely different biology."
Is that a completely vacuous positive claim or do you have in mind some way of testing it? Your days are numbered here. You accuse us of making vacuous claims, then happily churn them out in great number yourself. I can't abide a hypocrite.
DaveScot
September 16, 2006 at 12:47 AM PDT
The thrust of the argument is that the probability of mutation and selection achieving a predefined target is impossibly low. This is directly refuted by Dawkins's "Methinks it is like a weasel" experiment. Of course, neither is a model of evolution, because natural selection does not converge upon a predefined target, but rather optimizes the achievement of a set of goals that are defined by the fundamental laws of nature. If evolution were run again from the same starting conditions, it would produce completely different biology, whereas Dawkins's program always produces the same output string. However, Dawkins's exercise does refute the probabilistic argument in the form proposed, proving that the independent-probability assumption does not correctly calculate the probability of achieving a predefined target by mutation and selection. The fallacy of the argument is twofold: 1) It uses an incorrect probabilistic model. While getting to a preselected target by independent simultaneous mutation is a very low probability event, getting to the same target by stepwise mutation and selection is a very high probability event. 2) It is guilty of "painting the target after the arrow has struck." For example, if you shuffle a deck of cards, the specific order of cards is an astronomically low probability event. Yet it is clearly achievable, because in fact the number of acceptable sequences (in this case, all of them) is equally high. So a valid probabilistic calculation of evolution would have to consider, not merely the probability that natural selection would produce life as we know it today, but the probability that it would produce any form of life.
trrll
September 15, 2006 at 11:53 PM PDT
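
For readers who have not seen it, here is a minimal sketch in C of the cumulative-selection loop trrll describes above. The parameters (100 offspring per generation, a 5% per-character mutation rate, a 27-character alphabet of capitals plus space) are illustrative assumptions, not Dawkins's originals. Note that the score function compares candidates against a hard-coded target string; that built-in foresight is exactly what other commenters in this thread object to.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POP 100        /* offspring bred per generation */
#define MUT_RATE 0.05  /* per-character mutation probability */

static const char *target = "METHINKS IT IS LIKE A WEASEL";
static const char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

/* Fitness: how many characters match the (hard-coded) target. */
static int score(const char *s, size_t len)
{
    int hits = 0;
    for (size_t i = 0; i < len; i++)
        if (s[i] == target[i]) hits++;
    return hits;
}

int main(void)
{
    srand((unsigned)time(NULL));
    size_t len = strlen(target);
    char parent[64], child[64], best[64];

    for (size_t i = 0; i < len; i++)  /* start from a random string */
        parent[i] = alphabet[rand() % 27];
    parent[len] = '\0';

    for (int gen = 0; score(parent, len) < (int)len; gen++) {
        int best_score = -1;
        for (int k = 0; k < POP; k++) {   /* breed POP mutant copies */
            strcpy(child, parent);
            for (size_t i = 0; i < len; i++)
                if (rand() / (double)RAND_MAX < MUT_RATE)
                    child[i] = alphabet[rand() % 27];
            if (score(child, len) > best_score) {  /* keep the fittest */
                best_score = score(child, len);
                strcpy(best, child);
            }
        }
        strcpy(parent, best);
        printf("gen %4d: %s\n", gen, parent);
    }
    return 0;
}

Run repeatedly, this loop typically converges in a few dozen generations, versus the roughly 27^28 (about 1.2 x 10^40) expected trials for blind simultaneous search, which is the contrast trrll draws.
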
The topic was also covered at: http://intelligent-sequences.blogspot.com/2006/06/generating-multiple-codes-through.html
Paul
pk4_paul
June 14, 2006 at 10:02 PM PDT
Re #52: I way oversimplified the problem in order to make my point. See Stu's comment #35.
GilDodgen
June 14, 2006 at 7:47 PM PDT
Perhaps this has been said (I haven't had time to read all the comments), but this simple C program also requires a prior intelligence, namely the stdio.h header and the standard I/O library behind it. Without it, the program would not be able to print out the phrase "Hello World!"; it would do nothing. This means that random mutation and natural selection would first have to create (somehow) that library before this other randomly created program would function properly. A far greater hurdle than the one initially posed.
cjanicek
June 14, 2006 at 1:11 PM PDT
http://www.newscientisttech.com/article.ns?id=dn9302&feedId=online-news_rss20
Patrick
June 13, 2006 at 12:01 PM PDT
Here is Berlinski’s direct quote from the interview: “John Von Neumann, one of the great mathematicians of the 20th century, just laughed at Darwinian theory. He hooted at it.” I presume that Berlinski is not making this up.
GilDodgen
June 13, 2006 at 8:34 AM PDT
Spent a bit of time googling for quotes...unfortunately "von neumann" and "evolution" and "darwinism" and "neo-darwinism" and "origin of life" only led to articles discussing such topics and rarely anything von Neumann said himself. I did see several other people briefly mention that von Neumann scoffed at Darwinism...but no sources for these assertions. I did, however, find this one quote credited to him: "I shudder at the thought that highly purposive organizational elements, like the protein, should originate in a random process."
Patrick
June 13, 2006 at 12:35 AM PDT

Re #46: I can't get the Berlinski link to work, but a quick Google reveals this passage on von Neumann (sorry about the length): (http://mayet.som.yale.edu/coopetition/vN.html)

Von Neumann designed a self-replicating automaton that could use information to create progeny, even progeny of increasing complexity. He concluded that there is a "completely decisive property of complexity," a "minimum level . . . below which automata are degenerative (can only produce less complex automata than themselves) but above which some automata can produce equally or more complex progeny." Moreover, von Neumann elaborated on the nature of this threshold, above which "open-ended complication" or "emergent evolution" could occur. The automaton had to have the capacity to act on symbolically represented information--specifically, a symbolic description of itself. "Self-replication would then be possible if the universal constructor is provided with its own description as well as a means of copying and transmitting this description to the newly constructed machine."

The self-reproducing automaton, therefore, must have two components which are wholly distinct from one another--the machine and its description. A key insight in von Neumann's analysis of self-reproduction is this "categorical distinction between a machine and a description of a machine." The description of the machine is symbol, while the machine is matter--but for the reproduction to be successful, the description must not only be followed, but must also be duplicated. The description itself thus performs two distinct functions: "On the one hand, it has to serve as a program, a kind of algorithm that can be executed during the construction of the offspring. On the other hand, it has to serve as passive data, a description that can be duplicated and given to the offspring." It was several years later that Watson and Crick would discover DNA, the instructions for living automata. They discovered that, astonishingly, DNA does indeed perform these two functions. It encodes the instructions for making the appropriate enzymes and proteins for a cell, and also unwinds and duplicates itself before a cell divides: "With admirable economy, evolution has built the dual nature of the genetic material into the structure of the DNA molecule itself."

Doesn't sound like someone who thinks evolution is laughable.

["With admirable economy, evolution has built the dual nature of the genetic material into the structure of the DNA molecule itself." Where did von Neumann say this? It appears to contradict his assertion that below a certain complexity replicators are degenerative. How does he suppose the first replicator of sufficient complexity, one that he posits requires both a working copy of the replicator and a coded set of instructions for constructing another, came to be? -ds]
Mark Frank
June 12, 2006 at 10:59 PM PDT
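
Von Neumann's "categorical distinction between a machine and a description of a machine," where the description is both executed and copied, has a well-known small-scale analogue in programming: the quine, a program that prints its own source. Here is a minimal C sketch (comments are omitted from the program itself, since any comment would break the self-copy):

#include<stdio.h>
const char*s="#include<stdio.h>%cconst char*s=%c%s%c;%cint main(void){printf(s,10,34,s,34,10,10);return 0;}%c";
int main(void){printf(s,10,34,s,34,10,10);return 0;}

The string s plays the description's dual role: it is interpreted as a template for constructing the output (the "program" function) and it is also copied verbatim into that output via %s (the "passive data" function). None of this, of course, settles where such an arrangement comes from, which is the -ds question above.
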
RE #45: Hello, Mung! If I recall correctly, RBH works for a firm that uses genetic (and perhaps other) algorithms to solve problems for clients. If these algorithms are indeed solving practical problems confronting people in medicine or industry, then I wish blessings upon RBH and his co-workers in their endeavors. May their research bear fruit and make people happy with any clever solutions they apparently produce. As theoretical constructs of how living things really evolve, however, these algorithms are fatally flawed and rigged (consciously or unconsciously) to produce agreeable results. Anyone who makes a living solely to design and show off mathematical "vindications" of Darwinism will gain nothing but a paycheck. Such shams contain no currency otherwise. Best regards, apollo230
apollo230
June 12, 2006 at 5:03 PM PDT
Es58: “...also, John Von Neumann, responsible for major computer architecture, I believe, expressed himself against this Darwinian outlook.”
David Berlinski comments in this interview (http://www.theapologiaproject.org/media/berlinski.ram) that von Neumann, one of the greatest mathematicians and computer scientists of the 20th century, found modern Darwinian theory laughable.
GilDodgen
June 12, 2006 at 3:28 PM PDT
I wonder if that is the same RBH who claims that cost is not a problem for evolution because his GAs show that evolution happens just fine and is unrestricted by cost issues. When asked what the unit of cost is in his GAs, he had no answer. Yeah, that's the same RBH. He's smart and intelligent, but biased, blind, and unwilling to admit when he is wrong or to correct the deficiencies in his arguments. IOW, typical PT material.
Mung
June 12, 2006 at 2:55 PM PDT
Gil, Stu Harris responded: "The compiler runs on a complex specified operating system, which runs on complex hardware which is composed of metallurgical, mineral, and plastic complexities that ... well, you get the picture."
Just to belabor the operating-system component a little further: there is, at some level, every function supplied by an OS, including memory management, task management, and database management; task management includes scheduling and loading (probably some kind of linking); not to mention the management of the hardware that stores and retrieves the data, plus multi-tasking, multi-programming, and multi-processing (on a REALLY massive scale), with real-time management of an extraordinarily fine-tuned nature. I find it encouraging to see the name of Fred Brooks on Discovery's list of dissenters, because he was a chief engineer of the IBM 360/370 OS, one of the earliest major products of its kind, and would have a real appreciation for what goes into setting this stuff up for the first time, not just inheriting it from others. Also, John von Neumann, responsible for major computer architecture, I believe, expressed himself against this Darwinian outlook.
es58
June 12, 2006 at 2:48 PM PDT
Avida, Game of Life, Tierra, the PT-thumb-referenced Hello World artificial (as in intelligent) selection demonstration. Any others? What do any of these have to do with naturalistic, mechanistic, unintelligent, mindless evolution?
ID has no problem accepting “microevolution”, where organisms can be modified slightly and adapt to their environment.
ID also has no problem accepting common descent. But what does any of this have to do with RM+NS as conceived by Darwinists?
Mung
June 12, 2006 at 2:31 PM PDT
I take one step back from my previous comment. The "interesting behaviour" seen in the Game of Life does not proceed beyond the original givens, which are the reproduction, movement, and dying properties inherent in the world it inhabits. All of the patterns, interactions, etc. seen after the initial conditions are set never exceed in "useful" information what is there to begin with.
SCheesman
June 12, 2006 at 12:09 PM PDT
I have to disagree with a great many of the points made about Conway's Life game. The results are "interesting" only in the same way that a kaleidoscope is interesting, or the fabulous fractal properties of some simple mathematical functions are interesting. They are unexpected, or aesthetically interesting. There is no front-loading of intelligence in any of these things. If in fact it did produce something of true novelty (like tomorrow's price of gold, or the set of prime numbers), I would have to agree it would be a significant breakthrough. Once again, you should not confuse complexity (which is relatively simple to produce in infinite quantities) with specified complexity. The Life game is a good example of a programme with no "hidden" front-loading or teleological behaviour, and its few simple rules do not betray that spirit.
SCheesman
June 12, 2006 at 11:49 AM PDT
“the environment can play the same role as an artificial ‘selector’.” What a beautiful specimen of begging the question.
BK
June 12, 2006 at 11:08 AM PDT
"This is the difference between having an *intelligent selector* that can intelligently and immediatly decide on fate of a mutation and a *natural selection* which is powerless in picking intermediates which are not naturally selectable." Farshad, I think you're on to something here. But I would propose just the opposite: the *natural selector* can evaluate only the immediate fitness of the next letter in the sequence, without regard to the target sentence. It would do so based only on the physical characteristics of the letter (what else would it have to go on?). Thus, an L would be nearly as fit as a T, but not even a close substitute for an S. Fitness based on anything but physical characteristics is not allowed, since the generator must be "blind" to any ultimate target. Now what are the chances that the program will generate a meaningful sentence in a limited number of iterations? (I know this analogy is weakened by the fact that we have a small number of letters from which to select.) The *intelligent selector* operates on a different principle, which cannot be explained by the physical characteristics of each letter. (This is Polanyi's "tacit dimension.") Yes, each letter selected has distinct physical characteristics, but the letters are actually not selected by those characteristics.Lutepisc
June 12, 2006 at 5:22 AM PDT

"It seems Conway’s Life game was intelligently designed to produce interesting results. It is a type of front loading within the laws of nature. This may be one way the Intelligent Designer id it.

Are we saying that Conway was not an intelligent designer?"

This seems to be a standard type of comment here. Whenever someone produces a model that can explain how complexity might arise, then it can always be argued that the person who developed the model is intelligent and therefore it doesn't count. That sure is a fail-safe strategy.

Perhaps it's a standard type of comment because it's a standard type of flaw in the models. -ds Raevmo
June 12, 2006 at 3:11 AM PDT
I wrote: "Avida, Weasel and other simulations are desperate attempts of Darwinists to prove evolutionary mechanisms of RM+NS using computer simulations. In heart of all these simulations you see a fitness functions that *intelligently* selects what should be selected and what should be not. The *designers* of all of these softwares subtly feed their system with external intelligence which normally is not available in the nature." PvM at PT responded: "The level of cognitive dissonance (and strawmen) continue. Still missing the part that involves selection by the environment. Weasel, as well as the ‘Hello World’ example specify the fitness landscape a-priori but that is mostly a strawman argument. The real dissonance arises when arguing that a fitness function requires intelligent designers avoiding the simple fact that the environment can play the same role as an artificial ‘selector’." Guys at PT are insisting in deluding themselves that environment can do the same job as an *intelligently designed* fitness function can do in an evolutionary simulation. It seems that they have no understanding of intermediate steps that are not selectable by natural means. The fitness function in a computer simulation has the power to *foresee*. When there is a slight modification, the simulation can intelligently detect if it is another step in a predetermined pathway leading to something better or not. However, nature cannot forsee anything. In real environment all of the slight modifications in each step should yield an increase in reproductive output of the organism. Now assume a mutant bird is evolving and we are transiting from step(n) to step(n+1) in wing developement stage. If a mutation slightly expands the effective area of the wings then an *intelligent fitness function* can select it because the function can foresee the fact that slightly larger wings can lead to completed wings in future generations. Evidently, the fitness function inherits the intelligence of its programmer who knows what a completed wing should really look like. On the contrary, in real environment the aerodynamical improvement gained by that slight modification would be insignificant or none at all. Its effect on reproductive output would be negligible and nature could not even notice its existance. Go further and add some environmental noise and our slight modification will be totally ignored by the environment. This is the difference between having an *intelligent selector* that can intelligently and immediatly decide on fate of a mutation and a *natural selection* which is powerless in picking intermediates which are not naturally selectable.Farshad
June 12, 2006 at 2:49 AM PDT
