Uncommon Descent Serving The Intelligent Design Community

The Darwinist and the computer programmer


Actually, the available computing hardware is enormously powerful and software technologies are very sophisticated. Given this fortunate state of technological advance in informatics, many phenomena and processes in many fields are successfully simulated on computers. Airplane pilots and astronauts routinely learn their jobs in dedicated simulators, and complex processes, such as weather forecasting and atomic explosions, are simulated on computers.

Question: why hasn't Darwinian unguided evolution yet been computer simulated? I wonder why evolutionists haven't simulated it, so as to prove to us that Darwinism works. As is known, experiments on evolution in vitro have failed, so perhaps experiments in silico would work. Why don't evolutionists show us, in a computer, the development of new biological complexity by simulating random mutations and selection on self-reproducing digital organisms?

Here I offer my answer; you are then free to provide your own. I will do it in the format of an imaginary dialogue. Let's suppose a Darwinist meets a computer programmer and asks him to develop a simulation program for Darwinian evolution.

Programmer (P): “What’s your problem? I can program whatever you want. What we need is a detailed description of the phenomenon and a correct model of the process.”

Darwinist (D): "I would like to simulate biological evolution, the process by which one species transforms into another by means of random mutations and natural selection."

P: “Well, I think first off we need a model of an organism and its development, or something like that”.

D: “We have a genotype (containing the heritable information, the genome, the DNA) and its product, the phenotype”.

P: "I read that DNA is a long sequence of four symbols. We could model it as a long string of characters. Strings of characters, and operations on them, are easy for computers to manipulate. Just an idea."

D: “Good, it is indeed unguided variations on DNA that drive evolution.”

P: "Ok, if you want, after modeling the genome we can perform any unguided variation on the DNA character strings: permutations, substitutions, translations, insertions, deletions, imports, exports, pattern scrambling, whatever you like. We have very good pseudo-random number generators to simulate these operations."
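(Aside for the reader: a minimal sketch, in Python, of the kind of string-level unguided variation the programmer has in mind. The mutation operations, event count, and names are purely illustrative, not part of any real simulation package.)

```python
import random

BASES = "ACGT"

def mutate(genome, n_events=5):
    """Apply n_events unguided edits (substitutions, insertions, deletions) to a genome string."""
    g = list(genome)
    for _ in range(n_events):
        op = random.choice(("substitute", "insert", "delete"))
        if not g:                      # guard against an empty genome
            g.append(random.choice(BASES))
            continue
        pos = random.randrange(len(g))
        if op == "substitute":
            g[pos] = random.choice(BASES)
        elif op == "insert":
            g.insert(pos, random.choice(BASES))
        else:                          # delete
            del g[pos]
    return "".join(g)

# Ten rounds of blind variation on a toy genome.
genome = "ACGT" * 10
for _ in range(10):
    genome = mutate(genome)
print(genome)
```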

D: "Cool. Indeed, those unintelligent variations produce the transformations of the phenotypes, which is what is called 'evolution'."

P: "Hmm… wait, just a question. There is something not perfectly clear to me. To write the instructions that output the phenotype from the genotype, I also need a complete model of the phenotype and a detailed description of how it arises from the genotype. You see, the computer wants everything in the form of sequences of 0s and 1s; it is not enough to send it generic commands."

D: "The genotype determines the genes, and in turn the genes are recipes for proteins. Organisms are basically made of proteins."

P: "Organisms are made of proteins the way buildings are made of bricks, aren't they? It seems to me that such definitions are an extremely simplistic and reductive way of considering organisms and buildings. Neither is a mere 'container' of proteins or bricks, like potatoes in a bag. What seems entirely missing is the process of construction from proteins to organism (whereas in the case of bricks and buildings that process is perfectly known)."

D: "To be honest, I don't know in detail how the phenotype comes from the genotype… actually, no one on earth does."

P: "Really? You know, in my damn job one has to specify all instructions and data perfectly, in a formal language that allows no equivocation. It is somewhat mathematical. If you are unable to specify precisely the phenotypic model and the process driving the construction of the phenotype from the genotype, I cannot program the simulation of evolution for you. What we would eventually obtain would be less than a toy and would have no explanatory value with respect to the biological reality (by the way, I assure you that, by contrast, even computer games are serious works, where everything is perfectly specified and programmed, down to the bit and pixel level, believe me)… Sorry… I don't want to be indiscreet, but how can Darwinists claim with such certainty that variations in a process produce certain results if they know little about the models and nothing about the process involved in the first place?"

D: _no-answer_

The above short dialogue between the Darwinist and the programmer shows us something. There are two worlds: the world of informatics, where all instructions and data must be perfectly specified and must pass checks, otherwise nothing works; and the world of just-so stories, where statements may be equivocal and even inconsistent and have to pass no check at all. Evolutionism belongs to the latter world. As the programmer politely noted, evolutionism presumes to claim that variations on a process produce specific results when the process itself is unknown and unspecified. In other words, why – to put it à la Sermonti – does a fly, and not a horse, arise from the genome of a fly? If they cannot answer that basic question, how can they claim that unguided variations on genomes produced the 500 million past and living species?

This fundamental incoherence and oversimplification can "work" in Darwin's world, but it stops at the outset in the logical world of informatics. This is one of the reasons why a convincing and complete computer simulation of Darwinian evolution has not been performed so far, however much Darwinians would like to have one.

P.S. Thanks to Mung for the suggestion about the topic of this post.

Comments
arlin #63 When asking "why hasn't Darwinian unguided evolution yet been computer simulated?" I obviously meant "computer simulated realistically and successfully". I know that "scientists have been simulating evolution for decades". But those simulations have been either unrealistic, unsuccessful, or both.niwrad
November 26, 2013 at 10:45 AM PST
Chastising scientists for refusing to simulate evolution would be like chastising engineers for refusing to design an electric car. Anyone with half a brain can verify in a few minutes that the premise is untrue. Scientists have been simulating evolution for decades. One of the advantages of simulations is the capacity to explore the behavior of models too complex to solve mathematically. I think some of the most interesting models are ones that use RNA folding. For instance, see The Ascent of the Abundant: How Mutational Networks Constrain Evolution by Cowperthwaite, et al. Unlike traditional models in population & evolutionary genetics that ignore development, models based on RNA folding implement a folding algorithm. The RNA is encoded by a gene (which could be RNA or DNA), its folded structure is computed, and then its fitness is computed based on some function of the folded structure. Add in mutation, recombination, etc., and you have the basis of a sophisticated simulation. Results from such models are not simply reiterating old Darwinian ideas about selection. The title of the paper cited above invokes a concept of abundance in state-space, similar to an argument made earlier by Stuart Kauffman. This is not fundamentally a Darwinian argument. By the way, Behe is mistaken to suggest that there has been no analysis of constructive neutral evolution. Apparently he just read a newsy piece by Lukes, et al without bothering to read the original piece by Stoltzfus, 1999, in which "constructive neutral evolution" was first proposed. This included simulations of an evolutionary model that later (due to an independent proposal by Force, Lynch, et al) became known as the DDC (duplication-degeneration-complementation) or "neutral subfunctionalization" model. Today you can't read any of the gene duplication literature without coming across this model. The original papers in 1999 & 2000 have received thousands of citations in the scientific literature.arlin
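(For readers who want a concrete picture of the genotype-to-phenotype-to-fitness loop arlin describes, here is a minimal Python sketch. The fold() function below is a trivial stand-in that just counts complementary end-pairings; a real model such as the one in the cited paper would call an actual RNA secondary-structure algorithm, and the fitness function and parameters here are purely hypothetical.)

```python
import random

BASES = "AUCG"
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}

def fold(seq):
    """Toy 'folding': count base pairs formed by folding the sequence back on itself.
    A real simulation would compute an RNA secondary structure here."""
    return sum((seq[i], seq[-1 - i]) in PAIRS for i in range(len(seq) // 2))

def fitness(seq, target_pairs=15):
    """Fitness as closeness of the folded structure to a target number of pairs (hypothetical)."""
    return -abs(fold(seq) - target_pairs)

def point_mutate(seq, rate=0.01):
    """Per-base point mutation."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

# Mutation/selection loop over a population of RNA sequences.
population = ["".join(random.choice(BASES) for _ in range(50)) for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:50]                                   # truncation selection
    population = [point_mutate(random.choice(parents)) for _ in range(100)]
```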
November 26, 2013 at 09:43 AM PST
equate65:
Off topic but just had a quick question: Are symbiogenesis, epigenetics, and saltationism part of the Modern evolutionary synthesis?
Not really off topic at all. We're wondering what it would take to simulate evolution in a computer. Whether it's even possible. And certainly we'd want to ask if these need to be taken into consideration in any simulation. But to answer your question, no. To neo-Darwinism, aka the Modern Synthesis, development is a black box. It's a theory about how genes spread through populations, not a theory about how phenotypes are derived from genotypes.Mung
November 8, 2013 at 11:13 AM PST
Let us assume, for the sake of argument, that the genome drives the construction of the organism, i.e. that the genome is a set of assembly instructions for embryo development. (Personally I believe that is reductive; in my opinion there are many unknown levels of organizational direction above the genome.) Usually instructions contain symbolic references. An error in a symbolic reference might be more devastating than a direct error in the material result. Example: suppose an embryonic instruction reads "at time X add Y cardiac cells to zone Z of the heart" and a mutation causes an error in the last word, changing "heart" to "brain". Y cardiac cells would then go into the brain, where they would likely behave as a cancer. My point is that reasoning in terms of instructions doesn't reduce the danger of mutations/errors; quite the contrary. Mutations/errors in the instructions are even more dangerous than direct errors in the final molecules. Bottom line: from an informatics point of view, Darwinism is even more absurd than it appears from other perspectives.niwrad
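(A minimal sketch of the point about symbolic references, under the toy assumption that a developmental instruction is a short string of tokens; the instruction format and token names are invented purely for illustration.)

```python
import random

# Toy "developmental program": each instruction names a time, an action, a count, and a target zone.
instructions = [
    "t=3 add 200 cardiac_cells to heart",
    "t=5 add 500 neurons to brain",
]

def mutate_token(instruction, program):
    """Replace one randomly chosen token with a token drawn from elsewhere in the program,
    modeling an error in a symbolic reference rather than in the built material itself."""
    tokens = instruction.split()
    pool = [tok for line in program for tok in line.split()]
    tokens[random.randrange(len(tokens))] = random.choice(pool)
    return " ".join(tokens)

# A single-token error such as 'heart' -> 'brain' redirects the whole instruction.
print(mutate_token(instructions[0], instructions))
```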
November 8, 2013 at 04:59 AM PST
I suppose the degree of harm caused by a mutation depends on where it happens in the genetic code. It is certain that the genome is organized hierarchically and that most DNA sequences are used for regulatory (control) purposes. A mutation in a regulatory sequence high in the hierarchy is likely to have severely deleterious, if not fatal, consequences. It's a good thing that error-correcting mechanisms are in place, otherwise no living organism would survive. This is an insurmountable problem for Darwinists because the evolutionary process depends on random mutations, but, if given free rein, truly random mutations would quickly destroy the organism.Mapou
November 7, 2013 at 11:30 PM PST
This may mean that some (many) of such random mutations may not degrade the Program directly (or make it immediately crash).
This is precisely the limitation of knockout experiments. Furthermore, even with catastrophic changes, the change will not appear catastrophic until the particular routine is called. This is extremely common with complex technologies, and we see it all the time with our computers, our cars, and so on. Finally, there is the issue of redundancy. If we knock out two of the gyroscopes on the Hubble Telescope and it still works, does it mean those two gyroscopes served no purpose? Of course not. There are lots of ways that a particular mutation can be harmful, but the harm can lie dormant or be hidden for a time. Indeed, it is quite possible that a reasonable fraction of the allegedly neutral mutations could turn out to be "quietly harmful" rather than purely neutral.Eric Anderson
November 7, 2013 at 11:08 PM PST
InVivoVeritas:
. . . there is a chance that changing those amino acid positions – with similar ones – may still have negative side effects – possibly far removed from the place and time of change.
Well said. I've raised this point in the past as well, and I think it is worth remembering, at least in the back of our minds. Most of the time I'm willing to grant, for purposes of discussion and for assessing probabilities, that many substitutions will be neutral. However, the fact remains that we do not know whether all or even most of these allegedly neutral substitutions are indeed neutral. There is a whole host of downstream operations that could, potentially, be affected by the substitution. The translation process itself often involves multiple snips and concatenations, and error-correction mechanisms. So a substitution in a nucleotide base may end up being neutral, not because it was initially neutral, but because the translation process picked up the change and corrected it. More intriguing would be if the translation process picked up the change and acted differently as a result -- a different cut, a different concatenation, etc. Furthermore, if amino acids can come in different forms (we're barely starting to grasp some of the quantum effects in the cell) or be affected by added tags, then there could be other downstream effects. For example, the protein folding process is not, contrary to frequent assertions, simply an automatic process; it is a moderated process, with its own protocols and error-detection and correction mechanisms. Do we know whether there are any changes in the folding process, the completed fold, or post-folding error detection and destruction with particular nucleotide substitutions? Additionally, there may be stretches of DNA that are involved in multiple processes and/or with multiple reading frames. In those cases, we can't assume that the mutations would be 100% neutral. Anyway, just throwing some possible ideas out for consideration. I do agree the genetic code seems relatively robust to perturbation, and it might indeed be the case that many nucleotide substitutions are 100% neutral and invisible to the organism. But it is perhaps too soon and our knowledge too limited to allow for such a confident early assertion.Eric Anderson
November 7, 2013 at 11:01 PM PST
Mapou at #50
The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully. Living system simulations must take this into account, in my opinion.
You are right to say that there are very significant differences between computer programs (computing systems) and biological organisms. I have a few comments on this thought, though.
* I am sure that there are many "parallel" (micro) resources available in biological systems, such that a partial failure can be masked by unaffected resources.
* At the macro scale there are still real "heart failures", "kidney failures" or strokes.
* The "qualitative similarities" between the Simulator (the Programming Artifacts in this Proposal) and the Simulated are:
- both are Irreducibly Complex Systems (made of a large number of interacting components or sub-systems that are precisely coordinated);
- it is logically similar for the two that any (or at least many) "mutations" in a perfect, harmoniously and finely tuned system may affect (compromise?) that nice working and cooperation between parts;
- my previous comment at #56 identified certain mutations that can also induce graceful degradation.
It may be that these qualitative similarities between the Simulator and the Simulated convey a reasonable level of realism to the Simulation.InVivoVeritas
November 7, 2013 at 07:33 PM PST
Jguy at #46
If I'm not mistaken, about half of amino acid positions in proteins can be substituted with another amino acid – especially if it has at least similar chemical traits (e.g. hydrophobic or hydrophilic and/or whatever other generic property) … of course, that only applies to coding regions of DNA. So, assuming that is the general rule, would you be able to change 50% of bits in compiled code without crashing the program? I'm not sure how important that is to be representative. I have not thought a lot about it.
Several points here:
* I am not a biologist, but there is a chance that changing those amino acid positions - with similar ones - may still have negative side effects - possibly far removed from the place and time of change. It is hard to be sure of anything in biology, except that most probably things are as they are for a very good (at least initial) reason.
* When talking about the Chess Program (CP) binary executable, we should assume that this binary contains not only the executable code proper but also the CP database of moves and known strategies, and also any other configuration and metadata information (the structure of the chess board, the description of valid moves for each piece, etc.). It is known that a key element of the success of Chess Programs is, among other things, an extensive "chess knowledge" database.
* Now, when a "random mutation" is injected, it may fall into the "database space" of the CP binary or into its "configuration space". This may mean that some (many) of such random mutations may not degrade the Program directly (or make it immediately crash). If the random change modifies a "chess move" in the database that is seldom used (or not used in the sequence of games of that particular mutated Program), this may imply a "graceful degradation" of that program - which can be judged somewhat similar to what you mentioned. If the ratio of Program Space to Database Space in the Binary is (let's say) 1/4, then 80% of mutations may not be immediately pernicious.InVivoVeritas
November 7, 2013 at 07:15 PM PST
JGuy @53:
Another maybe: if the system is more resilient with multiple threads running – which I think is a point that could still be debated – wouldn't this mean the selection aspect of the process would be even more unlikely to identify a beneficial effect(?) – when the organism is compared to rivals.
This is an interesting point. There is clearly parallel computing going on,* both inside cells and between cells. And yes, that allows for some robustness (one cell dies, for example, and the whole organism doesn't come to a screeching halt). But it does make it even more challenging to (a) get a mutation captured across the board in those individual places where it is needed/relevant, and (b) get things integrated across the whole. One thing we can say with near certainty based on our experience with technology is that increasing levels of functional, coordinated complexity make it harder, not easier, for any given change to work seamlessly across the whole. The whole point of modularity is to try and deal with the escalating cascade of complexity that would otherwise attain. ----- * Parallel computing in the sense of various instruction sets being executed at multiple locations simultaneously clearly occurs pervasively throughout the organism. Parallel computing in the sense of multithreading is, I believe, still more of an open question. Arguably, one could say that a form of simple multithreading occurs when, for example, multiple transcriptions from a large DNA section are occurring simultaneously. One might also perhaps argue that the creation of multiple amino-acid-to-protein strands from a single mRNA transcript is a form of multithreading. Nevertheless, I don't know if we could call these true multithreading events or if there even is true multithreading occurring with molecular machines. That would be a remarkable achievement if it does happen! (Incidentally, we need to distinguish between true multithreading and the existence of protocol hierarchies, such as when the cellular reproduction mechanism initiates a temporary shutdown of transcription activity so that the DNA can be faithfully copied. The latter is more of a break-in override protocol, than true multithreading.)Eric Anderson
November 7, 2013 at 06:51 PM PST
Off topic but just had a quick question: Are symbiogenesis, epigenetics, and saltationism part of the Modern evolutionary synthesis?equate65
November 7, 2013 at 05:21 PM PST
Mapou @ 50 I don't think it matters. You're really only looking for hopeful beneficial mutations, adding them together... and seeing whether there is such a detectable step-wise path to higher complexity and function, or whether there is not. Perhaps that's too simplistic, but that's how it seems to me. Another maybe: if the system is more resilient with multiple threads running - which I think is a point that could still be debated - wouldn't this mean the selection aspect of the process would be even more unlikely to identify a beneficial effect(?) - when the organism is compared to rivals.JGuy
November 7, 2013 at 05:00 PM PST
SirHamster:
The value of comparing computer software to living organisms is not that they’re close equivalents; it’s that the known-human-design is much simpler than life and provides a floor for the minimum amount of “work” needed to accomplish what the more complex design does.
Well said.Eric Anderson
November 7, 2013 at 04:52 PM PST
The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully.
Parallel computing is more difficult than serial computing. The lack of fault tolerance in serial computing is a conscious trade-off of cost vs. reliability rather than a penalty of serial computing. Typical computing environments aren't hazardous to computers. Satellites and spacecraft are some areas where there is hardware/software hardening to guarantee functionality despite adverse environments. The value of comparing computer software to living organisms is not that they're close equivalents; it's that the known-human-design is much simpler than life and provides a floor for the minimum amount of "work" needed to accomplish what the more complex design does. While some of the graceful degradation observed in life may be a function of the molecules ("harmless" amino acid substitutions), a substantial part is from system "design" which is a function of the information encoded in the system, and not of the molecular properties of the materials. (ex: DNA checking/repairing molecules)SirHamster
November 7, 2013 at 04:43 PM PST
InVivoVeritas et al, The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully. Living system simulations must take this into account, in my opinion.Mapou
November 7, 2013 at 04:14 PM PST
ppps. And I would not think you want to make it so that it just tunes existing functions... trial and error can find settings that are finer tuned... you need to allow it to look for novel functions (i.e. new complex information)JGuy
November 7, 2013 at 04:12 PM PST
pps. To illustrate better... in such a chess program experiment it doesn't seem you really need to modify the skeleton of the chess program... rather, it seems you just need to modify the function(s) the program uses to calculate the value of possible positions.JGuy
November 7, 2013 at 04:10 PM PST
p.s. I don't think the entire program needs to be modified. You could simply consider a set of methods and logic rules that act as building blocks... enough primitives to build almost any logical process. For example, if I link an AND function and an OR function it will not error out. You just get useless output... that is, if it isn't helping - in this case, to beat other chess programs.JGuy
November 7, 2013 at 04:06 PM PST
... b. At the Executable Binary level. This is much simpler to accomplish – and still preserve a reasonable analogy with "random mutations" in the DNA of a cell/organism. ... If I'm not mistaken, about half of amino acid positions in proteins can be substituted with another amino acid - especially if it has at least similar chemical traits (e.g. hydrophobic or hydrophilic and/or whatever other generic property) ... of course, that only applies to coding regions of DNA. So, assuming that is the general rule, would you be able to change 50% of bits in compiled code without crashing the program? I'm not sure how important that is to be representative. I have not thought a lot about it.JGuy
November 7, 2013 at 04:01 PM PST
Jguy at #43 I thought about at what level the Random Mutations are to be "injected":
a. At the Programming Language Level (Perl, C, Java, etc.). This is a non-trivial problem because the "Mutator" may need to become "Programming Language Aware" and replace/modify one or a group of Language Statements with another group that - although it may not make sense from the point of view of "what they need to accomplish" - must still allow: 1. a successful compilation of the "mutated" Chess Program (CP); 2. a successful build of the CP; 3. a successful execution (start) of the CP.
b. At the Executable Binary level. This is much simpler to accomplish - and still preserve a reasonable analogy with "random mutations" in the DNA of a cell/organism.
I think that the Proposed Simulator, far from being perfect, is still a Reasonable Approach - one that can be defended as I tried in my comment above at #44. I also believe that this Proposal - as it is - because it carries a Strong Analogy with the simulated target in essential aspects, may provide us with a good "projection" and understanding of the Enormity (and logical Impossibility) of the task that Darwinian Evolution pretends to be able to achieve.InVivoVeritas
November 7, 2013 at 02:18 PM PST
Niwrad, Thanks for your interesting topic and for your comments on my entry. The question is: why should we not think that a living organism (maybe orders of magnitude more complex than our Chess Program) is also sensitive to, and mostly negatively affected by, random mutations? We know for sure that a biological organism functions efficiently and precisely and is a very complex composition of interacting parts that, together, as a system of sub-systems, metabolizes successfully, replicates successfully, etc. Why should a random mutation of any of its sub-systems not (most probably) negatively affect "the order" and "the working plan" that it uses (no matter how this "working plan" came to be)? I propose that my simulation model is quite adequate from this point of view. It's quite probable that a few or repeated mutations of the Chess Program binary will crash the Program - but not the Operating System - which, if well designed, should be isolated from application crashes or failures. The computer should not crash either. It is true that we can speculate that biological organisms are more resilient in defending/protecting against random mutations - I speculate this just because they may have very complex defensive mechanisms. The fundamental questions for the proposed Evolution Simulator are:
Q1. Does it simulate random mutations reasonably well?
Q2. Does it simulate natural selection (and the fight for survival) reasonably well?
Q3. Does it minimally emulate an Irreducibly Complex System (Behe) (ICS), which we know a living organism really is? Our Chess Program definitely is an ICS.
Q4. Does it provide "tuning knobs" to allow playing out various "Simulation Scenarios"? Yes, by changing various parameters: the length in bits of a Random Mutation; the number of Mutations before each Game (or set of Games); the number of losses before the Framework declares an individual Chess Program "dead"; etc.InVivoVeritas
November 7, 2013 at 02:05 PM PST
Niwrad, Maybe a variation of his proposal written in a scripting language would work, therefore no O/S crashes... and/or... mutations aren't at the bit level, but maybe at the byte level (well, that might crash easily)... or at the expression level. That is, mutate by substituting in valid random expressions - forcing valid lexical structure (e.g. syntax, coding rules... semantics). It may still crash, but not the system. So, instead of bits being your primitive mutations, you move it up a level. It would still be impossible for it to evolve new beneficial logical functions, I think (not to be confused with defining a function... which in programming can be one line of code).JGuy
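(A minimal sketch of the kind of expression-level mutation JGuy describes, applied to a toy position-evaluation expression built from a whitelist of primitives, so every mutant stays syntactically valid. The feature names and primitive set are hypothetical.)

```python
import random
import operator

# Whitelisted primitives: mutation only swaps among these, so every mutant remains valid.
OPS = [operator.add, operator.sub, operator.mul, min, max]
TERMINALS = ["material", "mobility", "king_safety", 1, 2, 5]

def random_expr(depth=2):
    """Build a random expression tree from the whitelisted primitives."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def mutate_expr(expr):
    """Replace a randomly chosen subtree with a fresh, valid subtree."""
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr()
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate_expr(left), right)
    return (op, left, mutate_expr(right))

def evaluate(expr, features):
    """Interpret the expression against a dict of position features."""
    if isinstance(expr, tuple):
        op, left, right = expr
        return op(evaluate(left, features), evaluate(right, features))
    return features[expr] if isinstance(expr, str) else expr

position = {"material": 3, "mobility": 12, "king_safety": -1}
scorer = random_expr()
for _ in range(5):
    scorer = mutate_expr(scorer)
print(evaluate(scorer, position))   # mutants never crash, but are rarely better evaluators
```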
November 7, 2013 at 01:44 PM PST
InVivoVeritas Interesting idea. However, consider that binary executables, such as those we find in our computers, are usually very fragile under random variation. In practice just a few random bit mutations crash the code, and, depending on the program and the operating system, they could even halt the computer. So you can bet that the outcome of your CPFS simulation would be Hypothesis B: Darwinian Evolution is false. Luckily, biological codes are more robust than... Windows and Office, from this point of view. But this of course doesn't mean that random variations on them can create new organization, as neo-Darwinism claims.niwrad
November 7, 2013 at 12:04 PM PST
Here is a brief proposal to Simulate Evolution using Programming Artifacts.
A. Let's consider a well-performing Chess Program (CP) – one that, let's say, usually wins chess games against human chess masters.
B. Let's make relatively easy modifications to the CP so that two instances of the Chess Program can play against each other until one wins or a draw is declared.
C. Let's consider a Population of Chess Programs (PCP) where initially all CPs in the Population are identical copies of the same Chess Program under discussion. Each copy of the CP has a unique identity and an individual "evolution life" that will be described further down.
D. Let's create a Chess Program Fight and Survival (CPFS) programmed Framework (F) by which:
a. each individual CP, CP(i), can play a chess game with another individual CP, CP(k), selected randomly by the Framework F;
b. the result of a game increases the Loss Count (LC) recorded for the losing CP;
c. in case of a draw the loss count stays unchanged for the two CPs;
d. after a defined Dying Threshold (DT) of losses (let's say 20 losses) recorded for a CP, that CP "dies" and exits the Chess Program Fight and Survival – after its "life", "game losses" and "demise" are carefully recorded by the Framework for that particular (individual) CP.
E. The "evolution" is represented by "random mutations" in a particular CP.
a. In this context it is proposed that a single "random mutation" consists in changing the value of N consecutive bits of the executable binary of the CP to a random value (RV), also of N consecutive bits, starting from a randomly selected offset (O) counted from the beginning of the CP executable binary.
b. The Framework (F) will "inject" a pre-determined number of such "random mutations" (let's say 10) into each individual CP after every (let's say) 5 games.
c. In case one or more "random mutations" make an individual CP non-responsive (i.e. it does not respond in the time granted for a chess move), the Framework F will record a loss for that individual CP.
d. Similarly, if an "evolved" individual CP is not even able to start a chess game (or to provide the expected response at the start of a chess game), the Framework (F) records a loss for that individual CP (and might even declare it "dead" even if the "Dying Threshold" of losses has not been reached by that CP).
F. The Chess Program Fight and Survival (CPFS) competition will go on until only 0.01% of the original population remains "alive" (having avoided death by consistently beating other "less/more evolved" individuals).
G. Half of the Population of Chess Programs (PCP) will not be subjected to "random mutations" and will preserve their original executable binary code unaltered during the whole Chess Program Fight and Survival (CPFS) competition.
H. Hypothesis A: If Darwinian Evolution is true (and works), then it is expected that NONE of the CPs that were spared the "evolution" (i.e. were not subjected to random mutations) will be among the 0.01% surviving population of CPs.
I. Hypothesis B: If Darwinian Evolution is false (i.e. does not work), then it is expected that:
a. all CPs in the surviving 0.01% population are from the "non-mutated" population;
b. more so: it is expected that by the time the original PCP has halved during the competition, the large majority of surviving individual CPs are from the "non-mutated" population.
NOTES:
• A lot of variations on this theme can be imagined and played out.
• Although this is not a simulation at the level of the biological, chemical, and organizational details of actual organisms (who can dream of such a simulation? it is not possible), I contend that it captures and emulates quite realistically what "evolution" is expected to achieve:
• by subjecting to random mutations a very complex system that is known to function originally with precision and effectiveness (in winning chess games). Similarly, it is legitimately assumed that the evolution of biological life started only on a "self-replicating", high-precision machinery substrate.
• The mutations are random and there is no "design" involved in "guiding the evolution" in any way (all values are random).
• There is a fairly representative competition and a fair fight for survival – and there are expectations as legitimate as in Darwinian Evolution that the "most evolved" CPs win the Chess Program Fight and Survival (CPFS).InVivoVeritas
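(A highly simplified Python skeleton of the CPFS framework sketched above, for illustration only: the play_game stub stands in for the real chess engines, the seed "binary" is random bytes rather than an actual executable, and all parameters are placeholders.)

```python
import random

DYING_THRESHOLD = 20        # losses before a program "dies"
MUTATIONS_PER_ROUND = 10    # random mutations injected after every 5 games
MUTATION_BITS = 8           # length N of each mutated bit run

def mutate_binary(binary, n_bits=MUTATION_BITS):
    """Overwrite N consecutive bits at a random offset with random bit values."""
    start = random.randrange(len(binary) * 8 - n_bits)
    for bit in range(start, start + n_bits):
        byte, offset = divmod(bit, 8)
        if random.random() < 0.5:
            binary[byte] ^= (1 << offset)

class ChessProgram:
    def __init__(self, ident, binary, mutable):
        self.ident = ident
        self.binary = bytearray(binary)
        self.mutable = mutable          # half the population is a non-mutated control group
        self.losses = 0
        self.games = 0

def play_game(a, b):
    """Placeholder: in the real framework the two binaries would actually play chess against
    each other; here a random winner is returned so the skeleton runs end to end."""
    return random.choice((a, b))

seed = bytes(random.getrandbits(8) for _ in range(4096))    # stand-in for the CP executable
population = [ChessProgram(i, seed, mutable=(i % 2 == 0)) for i in range(100)]

while len(population) > 1:
    a, b = random.sample(population, 2)
    winner = play_game(a, b)
    loser = b if winner is a else a
    loser.losses += 1
    for cp in (a, b):
        cp.games += 1
        if cp.mutable and cp.games % 5 == 0:
            for _ in range(MUTATIONS_PER_ROUND):
                mutate_binary(cp.binary)
    population = [cp for cp in population if cp.losses < DYING_THRESHOLD]

print("survivors:", [cp.ident for cp in population])
```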
November 7, 2013 at 01:03 AM PST
Why not build a semi-complex replicating program that copies itself, and place it in a virtual environment where it has access to program bits, bytes or whatever…and competes with others. Perhaps, even include in the replicator program, a 3D representation. It seems to me, that this would at least test the creative power of RM + NS. My predication is that the code will end up smaller than the original replicator… not a replicator with more novel & more complex survival features (physical traits or or behaviors).
A random search will eventually find something, given enough time and if allowed to run indefinitely. But if extinction is modeled, any reasonable computer simulation will show what a dead end RM + NS is - because a few bad mutations will cripple the self-replication, ending the experiment. If there is no possibility of failure, the simulation tells us nothing about real-world results, where failures are known to be common (extinct species).SirHamster
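(A deliberately minimal Python sketch of the kind of replicator-with-extinction bookkeeping being discussed: genomes are plain strings, the "replication routine" is just a marker prefix, and there are no beneficial mutations, repair, or selection for function, so this only illustrates the setup, not a biological conclusion.)

```python
import random

CORE = "COPY"            # stands in for the replication routine; if corrupted, no offspring
GENOME_LEN = 40
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def make_ancestor():
    return CORE + "".join(random.choice(ALPHABET) for _ in range(GENOME_LEN - len(CORE)))

def replicate(genome, error_rate=0.005):
    """Copy with per-character errors; copying only succeeds while the core routine is intact."""
    if not genome.startswith(CORE):
        return None          # replication machinery broken: this lineage produces no offspring
    return "".join(random.choice(ALPHABET) if random.random() < error_rate else c
                   for c in genome)

population = [make_ancestor() for _ in range(50)]
for generation in range(1000):
    offspring = []
    for g in population:
        child = replicate(g)
        if child is not None:
            offspring.append(child)
    if not offspring:
        print("extinct at generation", generation)
        break
    population = offspring
else:
    print("still alive after 1000 generations:", len(population), "replicators")
```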
November 6, 2013 at 09:58 AM PST
After reading through this thread, I was bothered by one curious detail. Even supposing the Programmer could create the simulation, there is still the problem that the simulation requires a designer in order to run. The program would have to have specified rules, such as what new strings of code qualify as "living" and functional. This also raises the issue of the initial organism. Is it pre-designed, or must we expect it to emerge from the simulation? If it is expected to emerge, at what point can one distinguish an output representing the inorganic from one representing the organic? If a method is used, such as introducing new packets of information (to represent, perhaps, early atmospheric changes, etc.) and results are seen, then we are still only showing that observation, with a carefully controlled "randomization", led to the result. It seems silly that NDEs entertain the simulation idea. Any simulation would require a designer creating a simulation favorable to life, as life cannot emerge from chaos.TSErik
November 6, 2013 at 08:47 AM PST
corrected link: Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle – "Haldane's Ratchet" – Christopher L. Rupe and John C. Sanford – 2013 http://www.creationicc.org/abstract.php?pk=293 bornagain77
November 6, 2013 at 03:39 AM PST
Calling all Darwinists, where is your best population genetics simulation? - September 12, 2013 Excerpt: So Darwinists, what is your software, and what are your results? I’d think if evolutionary theory is so scientific, it shouldn’t be the creationists making these simulations, but evolutionary biologists! So what is your software, what are your figures, and what are your parameters. And please don’t cite Nunney, who claims to have solved Haldane’s dilemma but refuses to let his software and assumptions and procedures be scrutinized in the public domain. At least Hey was more forthright, but unfortunately Hey’s software affirmed the results of Mendel’s accountant. https://uncommondescent.com/evolution/icc-2013-calling-all-darwinists-where-is-youre-best-population-genetics-simulation/ Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory - 2008 Abstract: Evolutionary genetic theory has a series of apparent “fatal flaws” which are well known to population geneticists, but which have not been effectively communicated to other scientists or the public. These fatal flaws have been recognized by leaders in the field for many decades—based upon logic and mathematical formulations. However population geneticists have generally been very reluctant to openly acknowledge these theoretical problems, and a cloud of confusion has come to surround each issue. Numerical simulation provides a definitive tool for empirically testing the reality of these fatal flaws and can resolve the confusion. The program Mendel’s Accountant (Mendel) was developed for this purpose, and it is the first biologically-realistic forward-time population genetics numerical simulation program. This new program is a powerful research and teaching tool. When any reasonable set of biological parameters are used, Mendel provides overwhelming empirical evidence that all of the “fatal flaws” inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified—with a degree of certainty which should satisfy any reasonable and open-minded person. http://www.icr.org/i/pdf/technical/Using-Numerical-Simulation-to-Test-the-Validity-of-Neo-Darwinian-Theory.pdf Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle - "Haldane's Ratchet" - Christopher L. Rupe and John C. Sanford - 2013 Excerpt: We then perform large-scale experiments to examine the feasibility of the ape-to-man scenario over a six million year period. We analyze neutral and beneficial fixations separately (realistic rates of deleterious mutations could not be studied in deep time due to extinction). Using realistic parameter settings we only observe a few hundred selection-induced beneficial fixations after 300,000 generations (6 million years). Even when using highly optimal parameter settings (i.e., favorable for fixation of beneficials), we only see a few thousand selection-induced fixations. This is significant because the ape-to-man scenario requires tens of millions of selective nucleotide substitutions in the human lineage. Our empirically-determined rates of beneficial fixation are in general agreement with the fixation rate estimates derived by Haldane and ReMine using their mathematical analyses. We have therefore independently demonstrated that the findings of Haldane and ReMine are for the most part correct, and that the fundamental evolutionary problem historically known as "Haldane's Dilemma" is very real. 
Previous analyses have focused exclusively on beneficial mutations. When deleterious mutations were included in our simulations, using a realistic ratio of beneficial to deleterious mutation rate, deleterious fixations vastly outnumbered beneficial fixations. Because of this, the net effect of mutation fixation should clearly create a ratchet-type mechanism which should cause continuous loss of information and decline in the size of the functional genome. We name this phenomenon "Haldane's Ratchet". http://creationicc.org/more.php?pk=46 Here is a short sweet overview of Mendel's Accountant: When macro-evolution takes a final, it gets an "F" - Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory (Mendel's Accountant) Excerpt of Conclusion: This (computer) program (Mendel’s Accountant) is a powerful teaching and research tool. It reveals that all of the traditional theoretical problems that have been raised about evolutionary genetic theory are in fact very real and are empirically verifiable in a scientifically rigorous manner. As a consequence, evolutionary genetic theory now has no theoretical support—it is an indefensible scientific model. Rigorous analysis of evolutionary genetic theory consistently indicates that the entire enterprise is actually bankrupt. http://radaractive.blogspot.com/2010/06/god-versus-darwin-when-macro-evolution.html A bit more detail on the history of the junk DNA argument, and how it was born out of evolutionary thought, is here: Functionless Junk DNA Predictions By Leading Evolutionists http://docs.google.com/View?id=dc8z67wz_24c5f7czgm as to 'drift: Thou Shalt Not Put Evolutionary Theory to a Test - Douglas Axe - July 18, 2012 Excerpt: "For example, McBride criticizes me for not mentioning genetic drift in my discussion of human origins, apparently without realizing that the result of Durrett and Schmidt rules drift out. Each and every specific genetic change needed to produce humans from apes would have to have conferred a significant selective advantage in order for humans to have appeared in the available time (i.e. the mutations cannot be 'neutral'). Any aspect of the transition that requires two or more mutations to act in combination in order to increase fitness would take way too long (>100 million years). My challenge to McBride, and everyone else who believes the evolutionary story of human origins, is not to provide the list of mutations that did the trick, but rather a list of mutations that can do it. Otherwise they're in the position of insisting that something is a scientific fact without having the faintest idea how it even could be." Doug Axe PhD. http://www.evolutionnews.org/2012/07/thou_shalt_not062351.html Michael Behe on the theory of constructive neutral evolution - February 2012 Excerpt: I don’t mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. - Michael Behe https://uncommondescent.com/evolution/michael-behe-on-the-theory-of-constructive-neutral-evolution/bornagain77
November 6, 2013 at 03:01 AM PST
Mung #19
Another problem with the OP is that it attempts to turn a strength of programming and simulation (abstraction) into a weakness in evolutionary theory. There’s no justification for this. It’s like saying we can’t take every element of an organism’s ecology and put it into a computer therefore evolution is false. It just doesn’t follow.
The OP doesn't affirm that evolution is false because, and only because, it has not been computer simulated. Evolution is false for countless other reasons; a computer simulation of evolution would simply add one more. The OP simply asks "why hasn't Darwinian unguided evolution yet been computer simulated?", and it has received many interesting answers. Among them I particularly like the following by drc466 #30:
Actually, Darwinism would not be that difficult to simulate in a computer program, but no Darwinist will ever do it because it will show what they don’t want to see – mutations kill.
niwrad
November 6, 2013 at 12:27 AM PST
Mung #19
One problem with the OP is that it creates a straw-man version of neo-Darwinism. In neo-Darwinism, how one gets from genotype to phenotype is irrelevant.
Does Darwinism claim to be the cause of the construction of all organisms or not? (If it doesn't, we can all go home.) To whatever claims to be the cause of the construction, the construction is not irrelevant. If I claim to be a builder and a client who wants a building asks me how I build, I cannot answer "the construction is irrelevant".niwrad
November 6, 2013 at 12:11 AM PST
