Today the available hardware computing power is enormous and software technologies are sophisticated and powerful. Given this fortunate state of technological advance in informatics, phenomena and processes in many fields are successfully simulated on computers. Airplane pilots and astronauts routinely learn their jobs in dedicated simulators, and complex processes, such as weather systems and atomic explosions, are simulated on computers.
Question: why hasn’t Darwinian unguided evolution yet been computer simulated? I wonder why evolutionists haven’t yet simulated it, so as to prove to us that Darwinism works. As is known, experiments on evolution in vitro have failed, so maybe experiments in silico would work. Why don’t evolutionists show us, in a computer, the development of new biological complexity by simulating random mutations and selection acting on self-reproducing digital organisms?
Here I offer my answer; you are then free to provide your own. I will do it in the form of an imaginary dialogue. Let’s suppose a Darwinist meets a computer programmer and asks him to develop a program simulating Darwinian evolution.
…
Programmer (P): “What’s your problem? I can program whatever you want. What we need is a detailed description of the phenomenon and a correct model of the process.”
Darwinist (D): “I would like to simulate biological evolution, the process by which one species transforms into another, by means of random mutations and natural selection”.
P: “Well, I think first off we need a model of an organism and its development, or something like that”.
D: “We have a genotype (containing the heritable information, the genome, the DNA) and its product, the phenotype”.
P: “I read that DNA is a long sequence of four symbols. We could model it as a long string of characters. Strings of characters, and operations on them, are easy for computers to manipulate. Just an idea.”
D: “Good, it is indeed unguided variations on DNA that drive evolution.”
P: “Ok, if you want, after modeling the genome we can perform on the DNA character strings any unguided variation: permutations, substitutions, translations, insertions, deletions, imports, exports, pattern scrambling, whatever you like. We have very good pseudo-random number generators to simulate these operations”.
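P (sketching): “Something like this, just to give you the idea. A toy in Python, where the alphabet and the particular operator set are merely illustrative:”

```python
import random

ALPHABET = "ACGT"  # the four DNA symbols

def substitute(genome, rng=random):
    """Replace one randomly chosen symbol with a random symbol."""
    i = rng.randrange(len(genome))
    return genome[:i] + rng.choice(ALPHABET) + genome[i + 1:]

def insert(genome, rng=random):
    """Insert a random symbol at a random position."""
    i = rng.randrange(len(genome) + 1)
    return genome[:i] + rng.choice(ALPHABET) + genome[i:]

def delete(genome, rng=random):
    """Delete one randomly chosen symbol."""
    i = rng.randrange(len(genome))
    return genome[:i] + genome[i + 1:]

def scramble(genome, rng=random):
    """Shuffle a randomly chosen substring (a crude 'pattern scrambling')."""
    i, j = sorted(rng.sample(range(len(genome) + 1), 2))
    chunk = list(genome[i:j])
    rng.shuffle(chunk)
    return genome[:i] + "".join(chunk) + genome[j:]

genome = "ACGTACGTACGT"
for op in (substitute, insert, delete, scramble):
    genome = op(genome)
print(genome)
```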
D: “Cool. Indeed, those unintelligent variations produce the transformations of phenotypes that we call ‘evolution'”.
P: “Hmm… wait, just a question. There is one thing not perfectly clear to me. To write the instructions that output the phenotype from the genotype, I also need a complete model of the phenotype and a detailed description of how it arises from the genotype. You see, the computer wants everything in the form of sequences of 0s and 1s; it is not enough to send it generic commands”.
D: “The genotype determines the genes, and in turn the genes are recipes for proteins. Organisms are basically made of proteins.”
P: “Organisms are made of proteins the way buildings are made of bricks, are they? It seems to me that these definitions are an extremely simplistic and reductive way of considering organisms and buildings. Neither is a simple “container” of proteins/bricks, like potatoes in a bag. It seems to me that the process of construction from proteins to organisms is entirely missing (whereas it is perfectly known in the case of bricks and buildings)”.
D: “To be honest, I don’t know in detail how the phenotype comes from the genotype… actually no one on earth does.”
P: “Really? You know, in my damn job one has to specify all instructions and data perfectly, in a formal language that doesn’t allow equivocation. It is somewhat mathematical. If you are unable to specify the phenotypic model and the process driving the construction of the phenotype from the genotype, I cannot program the simulation of evolution for you. What we would eventually obtain would be less than a toy and would have no explanatory value compared to the biological reality (by the way, I assure you that, by contrast, all computer games are serious works, where everything is perfectly specified and programmed down to the bit and pixel level, believe me)… Sorry… I don’t want to be indiscreet, but how can Darwinists claim with such certainty that variations in a process produce certain results if they know little of the models and nothing of the process involved in the first place?”
D: _no-answer_
…
The above short dialogue between the Darwinist and the programmer shows us one thing. There are two worlds: the world of informatics, where all instructions/data must be perfectly specified and have to pass checks, otherwise the business doesn’t work; and the world of just-so stories, where statements may be equivocal and even inconsistent and have to pass no check. Evolutionism belongs to the latter kind of world. As the programmer politely noted, evolutionism presumes to claim that variations on a process produce specific results when the process itself is unknown and unspecified. In other words, why, to put it à la Sermonti, does a fly, and not a horse, arise from the genome of a fly? If they cannot answer that basic question, how can they claim that unguided variations on genomes produced even the 500 million past and living species?
This fundamental incoherence and simplism can “work” in Darwin’s world, but it stops at the outset in the logical world of informatics. This is one of the reasons why a convincing and complete computer simulation of Darwinian evolution has not been performed so far, however much Darwinians would like to have one.
P.S. Thanks to Mung for the suggestion about the topic of this post.
Some notes on trying (and failing) to model organisms realistically with computers:
Related notes:
Dr. Stephen Meyer comments at the end of the preceding video,,,
Which would be like trying to understand a battleship by modeling the interactions of its molecules. One can only hope to “understand” such entities by hypothesizing the purpose of the designed macro-feature and reverse engineering it. Nothing useful can be gleaned from the materialist approach; only the assumption that the macro-feature was designed and purposefully engineered offers a worthwhile, actionable investigatory pathway.
Given an algorithm to translate genotype to phenotype, you would need to model biochemistry – and that, to me, seems impossible to model in a computer program.
Why not build a semi-complex replicating program that copies itself, place it in a virtual environment where it has access to program bits, bytes or whatever, and have it compete with others? (A toy sketch of this idea appears after this comment.) Perhaps even include a 3D representation in the replicator program.
It seems to me, that this would at least test the creative power of RM + NS.
My prediction is that the code will end up smaller than the original replicator… not a replicator with more novel and more complex survival features (physical traits or behaviors).
p.s. perhaps not impossible, but biochemistry seems as though it would be far too computationally intensive to be practical.
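To make JGuy’s idea concrete, here is a toy version in Python, with everything deliberately oversimplified: each “replicator” is a string, mutation is per-character substitution/insertion/deletion, and “survival” means the string still compiles as a valid expression – a crude stand-in for “the code still runs”. All names, rates and the survival criterion are illustrative assumptions, nothing like real biochemistry:

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits + " +-*/()"

def mutate(code, rate=0.01, rng=random):
    """Apply per-character substitutions, deletions and insertions."""
    out = []
    for ch in code:
        r = rng.random()
        if r < rate:                      # substitute
            out.append(rng.choice(ALPHABET))
        elif r < 2 * rate:                # delete
            continue
        else:
            out.append(ch)
        if rng.random() < rate:           # insert after this character
            out.append(rng.choice(ALPHABET))
    return "".join(out)

def viable(code):
    """Survival criterion: still a syntactically valid expression."""
    try:
        compile(code, "<replicator>", "eval")
        return True
    except SyntaxError:
        return False

def run(seed="(1 + 2) * (3 + 4)", pop_cap=100, generations=50):
    population = [seed]
    for _ in range(generations):
        offspring = [mutate(c) for c in population for _ in range(2)]
        population = [c for c in offspring if viable(c)][:pop_cap]
        if not population:
            break
    return population

survivors = run()
print(len(survivors), "survivors; sample:", survivors[:3])
```

JGuy’s prediction then becomes directly checkable in the toy: do the survivors trend toward shorter, trivially valid strings, or toward longer, more structured ones?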
Darwinists have tried and tried. Dr. Dawkins thought he had a great “METHINKS IT IS LIKE A WEASEL” program and even sold it to unwitting followers for $10! When nothing worked, they declared ‘evolution has no goal’ and are still complaining that probability is being misused by IDists. When there is no goal, no process, no system to follow, there can be no model. The closest thing that can model aimless evolution is a stochastic process, but what do you model when there is no aim?
JGuy @3:
Yeah, this is what Darwinists have claimed to do with evolutionary algorithms like Avida. Unfortunately, the devil is in the details, and when you have a very easily-achievable result, with the digital “organism” being carefully led up the back side of Mount Improbable, it is not particularly surprising that you get some directional change, which is touted by the Wizard of Oz as confirmation of the theory. The problem with things like Avida is that they don’t simulate anything in the real world, so we can have digital organisms “mutating” and “developing” all we want and it teaches us precisely nothing about whether evolution would work in real biology.
The only way to model evolution is to have a very good handle on what is involved. And no-one has anything even approaching a solid idea as to what is required to turn creature A into creature B.
Furthermore, what is to be simulated is even questionable. For example, no-one knows whether fiddling with DNA is even in principle capable of forming a new creature (apart from minor allele traits between members of the same species). So even if such an event were simulated in silico (which we know wouldn’t work, but let’s assume for purposes of discussion that it did), it still would not confirm that it is relevant to actual organisms in the real world.
The difference between modeling evolution and, say, the flight-simulator training niwrad refers to is that in the latter case we have a very good sense of the factors involved and how they interact with each other (aerodynamics, thrust, weight ratios, wind speed, vectors, and so on); we have precise mathematical calculations and well-defined parameters.
We have nothing even approaching this in evolutionary theory. As of 2013 the idea continues to consist of little more than vague generalizations and hypothetical assertions. There is no comprehensive list of parameters; not even close. There are no well-defined equations that state that if x occurs, y will be the outcome. All we have is a blanket assertion, void of all relevant details, that if something occurs then something else will result.
—–
Now, having said all that, I do agree that there is great value in using computer models to deal with very specific aspects of biological interactions. But unfortunately it is practically impossible, given our current state of knowledge and technology, to adequately model biological systems. And even if the simulation didn’t work, the Darwinist would simply say “Well, all that shows is that it didn’t happen with this particular set of parameters. It must have happened some other way.”
I suppose developmental biology, molecular biology, biochemistry, physiology and ecology should all be discarded, since the processes underlying these sciences can’t be simulated at the level of detail you require?
BTW, whatever happened to that blogger here whose every post was an ode to how awesome the physics simulation he used was?
wd400
No, developmental biology, molecular biology, biochemistry, physiology and ecology shouldn’t be discarded even if they are not computer simulated, insofar as they provide descriptions of facts, solid data and possibly sensible hypotheses related to those facts/data.
The case of Darwinism is entirely different, because it is only a hypothesis, not a fact. Worse yet, Darwinism is an absurd and contradictory hypothesis, contrary to all principles and all evidence.
So, your position is that a science that explains the way changes in genotype and environment manifest themselves in phenotype (developmental biology and quant. genetics) but can’t model that process is fine. Likewise, a science that explains the way organisms interact with each other and abiotic parts of their environment but can’t model that process in detail (ecology) is fine too.
But in order to build a theory that includes the results of developmental biology, quant. genetics and ecology, we need to model those processes down to the individual atoms? And you call “Darwinism” absurd and contradictory?
wd400 claims that Darwinism is a,,,
Yet the actual fact of the matter is that,,
wd400 claims that Darwinism is a,,,
Actually, I didn’t.
And wd400, since Darwinism doesn’t, and IMHO can’t possibly, explain how ‘changes in genotype and environment manifest themselves in phenotype’, you support Darwinism why exactly?
Try and think things through…
For the record
I’m not a Darwinist.
Evolutionary biology doesn’t explain how changes in genotype and environment manifest themselves in phenotype (that’s the domain of developmental biology and quantitative genetics).
It does require that some genetic changes alter the phenotype of their carriers.
It is obviously true that some genetic changes alter the phenotype of their carriers.
“I’m not a Darwinist.”
Really??? Well, blow me over with a feather. All of a sudden I’ve lost all interest in anything else in this thread. Please do tell!
There’s no great revelation in that statement. Like many evolutionary biologists, I tend to emphasize non-Darwinian mechanisms (drift, sub-functionalisation, etc.) because there are large amounts of data that purely Darwinian evolution can’t explain (notably, the preponderance of junk DNA in many eukaryote genomes, a comment which I’m sure will set you off on another round of link spam…)
“the preponderance of junk DNA in many eukaryote genomes”
LOL, yep, you’re a Darwinist alright! You may deny it to save face, but only a Darwinist would ever claim that!
Against my better judgement, one last comment.
Try and create a Darwinian (i.e. selection-focused) explanation for junk DNA…
One problem with the OP is that it creates a straw-man version of neo-Darwinism.
In neo-Darwinism, how one gets from genotype to phenotype is irrelevant.
The Changing Role of the Embryo in Evolutionary Thought: Roots of Evo-Devo
wd400:
Easy. It’s not under selection, that’s what allows it to accumulate.
If it doesn’t accumulate, it’s evolution. If it does accumulate, it’s evolution. Ain’t modern evolutionary theory grand!
Think of the programming language as the genotype and the program itself as the phenotype. How one gets from the genotype to the phenotype is termed development.
In programming, changes to the programming language may or may not have an effect on a program. e.g., for compiled languages, the program may or may not require re-compilation.
Consider a theory of how programs change over time. Imagine such a theory that focuses only on the programs and the programming languages. That’s neo-Darwinism.
Easy. It’s not under selection, that’s what allows it to accumulate.
Right… so that’s non-Darwinian, because it’s something that happens without selection. It’s also an idea that allows us to make predictions about what future data will look like. We know, for instance, that selection is stronger when effective population sizes are larger. In this way, you might predict that, all else being equal, organisms with large effective population sizes will have smaller (less junk-ridden) genomes…
Mung @ 19
Are you primarily referring to or considering the statement in the OP: “If they cannot answer that basic question, how can they claim that unguided variations on genomes produced even the 500 million past and living species?”
Another problem with the OP is that it attempts to turn a strength of programming and simulation (abstraction) into a weakness in evolutionary theory. There’s no justification for this. It’s like saying that because we can’t take every element of an organism’s ecology and put it into a computer, evolution is false. It just doesn’t follow.
That said, I appreciate the questions raised in the OP, but I suggest that they need some refinement.
It still seems to me there should be some way to test the most basic claim that NS + RM can develop new information in the form of novel complex functions. I don’t think real biochemistry needs to be simulated to disprove Darwinism. One of the most important features of the simulation is simply to ensure that it is sterile of the programmer’s intelligence – i.e., to keep outside information from leaking into the simulation.
wd400
You seem to imply that non-functional DNA is useless junk.
If so, is it not possible that ‘junk DNA’ is being stored in a dormant state for a reason? Perhaps, like a savings account, it is being kept in reserve in order to preserve and assure greater evolutionary potential.
Since we now know that organisms can manipulate their DNA arrangements to adapt, etc. (à la James Shapiro), it seems quite premature to jump to the conclusion that non-functional DNA is an accumulation of waste.
Littlejohn,
It’s always possible to make a post-hoc justification to rescue a favored hypothesis – but (James Shapiro notwithstanding) there is no evidence for this, and it’s very hard to imagine how something like this could possibly work.
The only place Junk DNA really exists is in the imagination of neo-Darwinists!
Actually, Darwinism would not be that difficult to simulate in a computer program, but no Darwinist will ever do it, because it would show what they don’t want to see – mutations kill.
To simulate Darwinism, you would simply write a program that has all of the elements of the simplest possible life form – self-contained code to replicate and metabolise, code to interpret that code, code to execute its own code for replication/interpretation, etc. Then place it in a virtual machine where that code competes against other copies of that same code for the resources needed to continue existing. Then you would allow purely random modifications of the code itself during replication, permitting every type of mutation Darwinists believe exists – deletions, modifications, duplications, etc. No direction allowed – neither the code nor the VM may contain any code that arbitrarily picks “winning” or “losing” code, beyond the code’s own ability to keep competing for resources. And there must be no restrictions on the types of mutations that can occur – any change to any section of the code must be allowed.
Then set it loose and see what happens. Any programmer knows what will happen when you allow mutations (a.k.a. errors) to occur randomly in code. The code breaks.
Of course, a realistic simulation would be much more stringent. You’d have to create a vm with resources, and then randomly inject bits and bytes and wait for a piece of self-replicating code to magically appear. Yeah, right.
(This is a great thread because it targets the biggest vulnerability of the theory of evolution, in my opinion.)
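One crude way to put drc466’s “the code breaks” claim to a quick test: measure how often a single random character change leaves a real program even syntactically valid – a necessary (and far from sufficient) condition for it to still run. A Python sketch, using the script’s own source as the guinea pig; the trial count is arbitrary:

```python
import random

def mutants_that_still_compile(source, trials=1000, rng=random):
    """Flip one random character per trial and count how many mutants
    still compile. Compiling is a much lower bar than running correctly,
    so this overestimates mutational robustness."""
    ok = 0
    for _ in range(trials):
        i = rng.randrange(len(source))
        mutant = source[:i] + chr(rng.randrange(32, 127)) + source[i + 1:]
        try:
            compile(mutant, "<mutant>", "exec")
            ok += 1
        except (SyntaxError, ValueError):
            pass
    return ok

with open(__file__) as f:
    src = f.read()
print(mutants_that_still_compile(src), "of 1000 mutants still compile")
```

Note that many of the surviving mutants will only have a comment or a string literal altered; even so, compiling says nothing about whether the program still plays its role.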
drc466 @29, you hit the nail on the head. The very thing that supposedly drives innovation in Darwinian evolution is what kills it dead before it gets a chance to do anything.
It never ceases to amaze me how some of the most brilliant people on earth actually believe this cr@p. It’s either a case of mass stupidity or mass cowardice, or both. Worse, the stupidity is blatant and in your face. The same can be said about materialism.
wd400 makes a valid point that organisms are subject to various mechanisms that do not depend directly on the classical RM+NS mechanism (drift, for example). And I agree that there are good mathematical models that can be brought to bear relating to population genetics.
However, RM+NS is still considered to be the primary avenue of biological change. More importantly, regardless of whether something results from, say, drift, the original source of the biological novelty is still allegedly what essentially amounts to a random event.
So, yes, the NS part of the equation is problematic because it may not function perfectly to preserve or to discard. But the much worse problem is the RM part. Does it really have the capacity to create what we see around us?
It doesn’t matter whether we are relying on natural selection, neutral mutations, genetic drift, sexual selection or otherwise to preserve something. The real question for evolutionists is: What is your evidence that these random changes can do all this work of creating?
That is what needs to be modeled. It can’t be modeled in even semi-comprehensive detail, because too many particulars are still unknown. But I do agree it can be modeled perhaps in a simple fashion (such as that suggested by commenters above). And it is found utterly wanting.
wd400 #27
If I am not mistaken, the immune systems of mammals and other animals use programmed DNA rearrangements to produce antibodies. This evidence might help you imagine how intrinsic genetic manipulation is likely exploited by other bio-systems.
More than that, how many other organelles, cells, tissues, organs, and body-plan structures and/or components would you consider to be composed of large volumes of waste, and why should we expect the genome to break the pattern of precise optimization of resources that we seem to find at every other level of organization?
Just imagine junk DNA as packets of evolutionary potential, just waiting to be activated or utilized when the time is right.
Just imagine junk DNA as packets of evolutionary potential, just waiting to be activated or utilized when the time is right.
… and accruing mutations (which are universally bad news according to many IDists…) while they wait.
Mung #19
Does Darwinism claim to be the cause of the construction of all organisms, or not? (If it doesn’t, we can all go home.) To whatever claims to be the cause of the construction, the construction is not irrelevant.
If I claim to be a builder and a client who wants a building asks me how I build, I cannot answer “the construction is irrelevant”.
Mung #19
The OP doesn’t assert that evolution is false because, and only because, it has not been computer simulated. Evolution is false for countless other reasons anyway. A computer simulation of evolution would simply add one more. The OP simply asks “why hasn’t Darwinian unguided evolution yet been computer simulated?”, and it has received many interesting answers. Among them I particularly like the following by drc466 #30:
Calling all Darwinists, where is your best population genetics simulation? – September 12, 2013
Excerpt: So Darwinists, what is your software, and what are your results? I’d think if evolutionary theory is so scientific, it shouldn’t be the creationists making these simulations, but evolutionary biologists! So what is your software, what are your figures, and what are your parameters. And please don’t cite Nunney, who claims to have solved Haldane’s dilemma but refuses to let his software and assumptions and procedures be scrutinized in the public domain. At least Hey was more forthright, but unfortunately Hey’s software affirmed the results of Mendel’s accountant.
http://www.uncommondescent.com.....imulation/
Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory – 2008
Abstract: Evolutionary genetic theory has a series of apparent “fatal flaws” which are well known to population geneticists, but which have not been effectively communicated to other scientists or the public. These fatal flaws have been recognized by leaders in the field for many decades—based upon logic and mathematical formulations. However population geneticists have generally been very reluctant to openly acknowledge these theoretical problems, and a cloud of confusion has come to surround each issue.
Numerical simulation provides a definitive tool for empirically testing the reality of these fatal flaws and can resolve the confusion. The program Mendel’s Accountant (Mendel) was developed for this purpose, and it is the first biologically-realistic forward-time population genetics numerical simulation program. This new program is a powerful research and teaching tool. When any reasonable set of biological parameters are used, Mendel provides overwhelming empirical evidence that all of the “fatal flaws” inherent in evolutionary genetic theory are real. This leaves evolutionary genetic theory effectively falsified—with a degree of certainty which should satisfy any reasonable and open-minded person.
http://www.icr.org/i/pdf/techn.....Theory.pdf
Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle – “Haldane’s Ratchet” – Christopher L. Rupe and John C. Sanford – 2013
Excerpt: We then perform large-scale experiments to examine the feasibility of the ape-to-man scenario over a six million year period. We analyze neutral and beneficial fixations separately (realistic rates of deleterious mutations could not be studied in deep time due to extinction). Using realistic parameter settings we only observe a few hundred selection-induced beneficial fixations after 300,000 generations (6 million years). Even when using highly optimal parameter settings (i.e., favorable for fixation of beneficials), we only see a few thousand selection-induced fixations. This is significant because the ape-to-man scenario requires tens of millions of selective nucleotide substitutions in the human lineage.
Our empirically-determined rates of beneficial fixation are in general agreement with the fixation rate estimates derived by Haldane and ReMine using their mathematical analyses. We have therefore independently demonstrated that the findings of Haldane and ReMine are for the most part correct, and that the fundamental evolutionary problem historically known as “Haldane’s Dilemma” is very real.
Previous analyses have focused exclusively on beneficial mutations. When deleterious mutations were included in our simulations, using a realistic ratio of beneficial to deleterious mutation rate, deleterious fixations vastly outnumbered beneficial fixations. Because of this, the net effect of mutation fixation should clearly create a ratchet-type mechanism which should cause continuous loss of information and decline in the size of the functional genome. We name this phenomenon “Haldane’s Ratchet”.
http://creationicc.org/more.php?pk=46
Here is a short sweet overview of Mendel’s Accountant:
When macro-evolution takes a final, it gets an “F” – Using Numerical Simulation to Test the Validity of Neo-Darwinian Theory (Mendel’s Accountant)
Excerpt of Conclusion: This (computer) program (Mendel’s Accountant) is a powerful teaching and research tool. It reveals that all of the traditional theoretical problems that have been raised about evolutionary genetic theory are in fact very real and are empirically verifiable in a scientifically rigorous manner. As a consequence, evolutionary genetic theory now has no theoretical support—it is an indefensible scientific model. Rigorous analysis of evolutionary genetic theory consistently indicates that the entire enterprise is actually bankrupt.
http://radaractive.blogspot.co.....ution.html
A bit more detail on the history of the junk DNA argument, and how it was born out of evolutionary thought, is here:
Functionless Junk DNA Predictions By Leading Evolutionists
http://docs.google.com/View?id=dc8z67wz_24c5f7czgm
As to ‘drift’:
Thou Shalt Not Put Evolutionary Theory to a Test – Douglas Axe – July 18, 2012
Excerpt: “For example, McBride criticizes me for not mentioning genetic drift in my discussion of human origins, apparently without realizing that the result of Durrett and Schmidt rules drift out. Each and every specific genetic change needed to produce humans from apes would have to have conferred a significant selective advantage in order for humans to have appeared in the available time (i.e. the mutations cannot be ‘neutral’). Any aspect of the transition that requires two or more mutations to act in combination in order to increase fitness would take way too long (>100 million years).
My challenge to McBride, and everyone else who believes the evolutionary story of human origins, is not to provide the list of mutations that did the trick, but rather a list of mutations that can do it. Otherwise they’re in the position of insisting that something is a scientific fact without having the faintest idea how it even could be.” Doug Axe PhD.
http://www.evolutionnews.org/2.....62351.html
Michael Behe on the theory of constructive neutral evolution – February 2012
Excerpt: I don’t mean to be unkind, but I think that the idea seems reasonable only to the extent that it is vague and undeveloped; when examined critically it quickly loses plausibility. The first thing to note about the paper is that it contains absolutely no calculations to support the feasibility of the model. This is inexcusable. – Michael Behe
http://www.uncommondescent.com.....evolution/
corrected link:
Using Numerical Simulation to Better Understand Fixation Rates, and Establishment of a New Principle – “Haldane’s Ratchet” – Christopher L. Rupe and John C. Sanford – 2013
http://www.creationicc.org/abstract.php?pk=293
After reading through this thread, I was bothered by one curious detail. Even supposing the programmer could create the simulation, there is still the problem that the simulation requires a designer in order to run.
The program would have to have specified rules, such as which new strings of code qualify as “living” and functional. This also raises the issue of the initial organism. Is it pre-designed, or must we expect it to emerge from the simulation? If it is expected to emerge, at what point can one distinguish an output representative of the inorganic versus the organic?
If a method is used, such as introducing new packets of information (to represent, perhaps, early atmospheric changes, etc.), and results are seen, then we are still only showing that observation, with a carefully controlled “randomization”, led to the result.
It seems silly that NDEs entertain the simulation idea. Any simulation would require a designer creating a simulation favorable to life, as life cannot emerge from chaos.
A random search will eventually find something, given enough time and if allowed to run indefinitely.
But if extinction is modeled, any reasonable computer simulation will show what a dead end RM + NS is – because a few bad mutations will cripple the self-replication, ending the experiment.
If there is no possibility of failure, the simulation tells us nothing about real world results, where failures are known to be common (extinct species).
Here is a brief proposal to Simulate Evolution using Programming Artifacts.
A. Let’s consider a well-performing Chess Program (CP) – one that, let’s say, usually wins chess games against human chess masters.
B. Let’s make relatively easy modifications to the CP so that two instances of the Chess Program can play against each other until one wins or a draw is declared.
C. Let’s consider a Population of Chess Programs (PCP) where initially all CPs in the Population are identical copies of the same Chess Program under discussion. Each copy of the CP has a unique Identity and an individual “evolution life” that will be described further down.
D. Let’s create a Chess Program Fight and Survival (CPFS) programmed Framework (F) by which:
a. each individual CP, CP(i), can play a chess game against another individual CP, CP(k), selected randomly by the Framework F;
b. the result of a game increases the Loss Count (LC) recorded for the losing CP.
c. In case of a draw the loss counts stay unchanged for the two CPs.
d. After a defined Dying Threshold (DT) of losses (let’s say 20 losses) recorded for a CP, that CP “dies” and exits the Chess Program Fight and Survival – after its “life”, “game losses” and “demise” are carefully recorded by the Framework for that particular (individual) CP.
E. The “evolution” is represented by “random mutations” in a particular CP.
a. In this context it is proposed that a single “random mutation” consists of changing the value of N consecutive bits of the CP’s executable binary – starting from a Randomly selected Offset (O) counted from the beginning of the binary – to a random value (RV) of the same N bits.
b. The Framework (F) will “inject” a pre-determined number of such “random mutations” (let’s say 10) into each individual CP after every (let’s say) 5 games.
c. In case one or more “random mutations” make an individual CP non-responsive (i.e., it does not respond in the time granted for a chess move), the Framework F will record a loss for that individual CP.
d. Similarly, if an “evolved” individual CP is not even able to start a chess game (or to provide the expected response at the start of a chess game), the Framework (F) records a loss for that individual CP (and might even declare it “dead” even if the “Dying Threshold” of losses was not reached by that CP).
F. The Chess Program Fight and Survival (CPFS) competition will go on until only 0.01% of the original population remains “alive” (having avoided death by consistently beating other “less/more evolved” individuals).
G. Half of the Population of Chess Programs (PCP) will not be subjected to “random mutations” and will preserve their original executable binary code unaltered during the whole Chess Program Fight and Survival (CPFS) competition.
H. Hypothesis A: If Darwinian evolution is true (and works), then it is expected that NONE of the CPs that were spared the “evolution” (i.e., were not subjected to random mutations) will be among the surviving 0.01% of the population of CPs.
I. Hypothesis B: If Darwinian evolution is false (i.e., does not work), then it is expected that:
a. all CPs in the surviving 0.01% population are from the “non-mutated” population;
b. more so: by the time the original PCP has halved during competition, the large majority of surviving individual CPs will be from the “non-mutated” population.
NOTES:
• A lot of variations on this theme can be imagined and played out.
• Although this is not a simulation at the level of the biological, chemical, and organizational details of actual organisms (who can dream of such a simulation? it is not possible), I claim that it captures and emulates quite realistically what “evolution” is expected to achieve:
• It affects, through random mutations, a very complex system that is known to function, originally, with precision and effectiveness (in winning chess games). Similarly, it is legitimately assumed that the evolution of biological life started only on a “self-replicating”, high-precision machinery substrate.
• The mutations are random and there is no “design” involved in “guiding the evolution” in any way (all values are random).
• There is a fairly representative competition and a fair fight for survival – and the expectation that the “most evolved” CPs win the Chess Program Fight and Survival (CPFS) is as legitimate here as it is in Darwinian evolution.
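For what it’s worth, the bookkeeping of this proposal is easy to write down. Below is a minimal Python skeleton of the CPFS Framework – loss counts, the dying threshold, bit-level mutation at a random offset, and the protected non-mutated half – in which play_game is only a stub: a real implementation would replace it with a harness that actually runs the two binaries as chess engines. Every parameter value is illustrative:

```python
import random

N_BITS = 8                # bits overwritten per "random mutation" (illustrative)
MUTATIONS_PER_ROUND = 10  # mutations injected after each round of games
GAMES_PER_ROUND = 5
DYING_THRESHOLD = 20      # losses before a CP "dies"

class CP:
    """One individual chess program: its 'binary' plus bookkeeping."""
    def __init__(self, ident, binary, mutable):
        self.ident = ident
        self.binary = bytearray(binary)  # stand-in for the executable image
        self.mutable = mutable           # the protected half is never mutated
        self.losses = 0
        self.games = 0

def mutate(cp, rng=random):
    """Set N_BITS consecutive bits, at a random offset, to random values."""
    start = rng.randrange(len(cp.binary) * 8 - N_BITS)
    for bit in range(start, start + N_BITS):
        byte, pos = divmod(bit, 8)
        if rng.random() < 0.5:
            cp.binary[byte] ^= 1 << pos

def play_game(a, b, rng=random):
    """Stub: returns the loser, or None for a draw. A real framework would
    launch both binaries as engines and record non-responsiveness or
    failure to start as a loss, per points E.c and E.d above."""
    r = rng.random()
    if r < 0.1:
        return None
    return a if r < 0.55 else b

def cpfs(pop_size=1000, binary_size=4096, survivor_fraction=0.0001):
    base = random.randbytes(binary_size)  # all CPs start identical (Py 3.9+)
    alive = [CP(i, base, mutable=(i % 2 == 1)) for i in range(pop_size)]
    # stop at the target fraction (or when too few remain to pair up)
    while len(alive) > max(2, pop_size * survivor_fraction):
        a, b = random.sample(alive, 2)
        loser = play_game(a, b)
        if loser is not None:
            loser.losses += 1
        for cp in (a, b):
            cp.games += 1
            if cp.mutable and cp.games % GAMES_PER_ROUND == 0:
                for _ in range(MUTATIONS_PER_ROUND):
                    mutate(cp)
        alive = [cp for cp in alive if cp.losses < DYING_THRESHOLD]
    return alive

survivors = cpfs()
print(sum(cp.mutable for cp in survivors), "mutated CPs among", len(survivors))
```

The final line tallies how many of the survivors came from the mutated half, which is exactly the quantity Hypotheses A and B make opposite predictions about.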
InVivoVeritas
Interesting idea. However, consider that binary executables, like those we find in our computers, are usually very fragile under random variations. In practice, just a few random bit mutations crash the code and, depending on the program and the operating system, could even halt the computer. So you can bet that the outcome of your CPFS simulation would be Hypothesis B: Darwinian evolution is false.
Luckily, biological codes are, from this point of view, more robust than… Windows and Office. But this of course doesn’t mean that random variations on them can create new organization, as neo-Darwinism claims.
Niwrad,
Maybe a variation of his proposal written in a scripting language would work, so there would be no O/S crashes… and/or the mutations wouldn’t be at the bit level but maybe at the byte level (well, that might crash easily)… or at the expression level.
That is, mutate by substituting in valid random expressions – forcing a valid lexical structure (e.g. syntax, coding rules… semantics). It may still crash, but not the system.
So, instead of bits being your primitive mutations, you move it up a level. It would still be impossible for it to evolve new beneficial logical functions, I think (not to be confused with defining a function, which in programming can be one line of code).
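To make the expression-level idea concrete: mutate the program’s abstract syntax tree rather than its raw bits, so every mutant is guaranteed to be syntactically valid. A minimal Python sketch (requires Python 3.9+ for ast.unparse); the operator pool and the sample “position evaluation” expression are invented for illustration:

```python
import ast
import random

# Pool of binary operators the mutator may substitute in (illustrative).
OPERATORS = [ast.Add, ast.Sub, ast.Mult]

def mutate_expression(source, rng=random):
    """Parse an expression, swap one randomly chosen binary operator for
    a random one, and unparse. The result always parses, although its
    behavior may be arbitrarily changed."""
    tree = ast.parse(source, mode="eval")
    binops = [n for n in ast.walk(tree) if isinstance(n, ast.BinOp)]
    if binops:
        rng.choice(binops).op = rng.choice(OPERATORS)()
    return ast.unparse(tree)

# A toy stand-in for a chess engine's position-evaluation formula.
original = "3 * material + 2 * mobility - king_exposure"
print(mutate_expression(original))
# e.g. "3 * material + 2 * mobility * king_exposure"
```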
Niwrad,
Thanks for your interesting topic and for your comments on my entry.
The question is: why should we not think that a living organism (maybe orders of magnitude more complex than our Chess Program) is also sensitive to, and mostly negatively affected by, random mutations?
We know for sure that a biological organism functions efficiently and precisely, and is a very complex composition of interacting parts that together, as a system of sub-systems, metabolizes successfully, replicates successfully, etc. Why should a random mutation of any of its sub-systems not (most probably) negatively affect “the order” and “the working plan” that it uses (no matter how this “working plan” came to be)?
I propose that my simulation model is quite adequate from this point of view.
It’s quite probable that a few, or repeated, mutations of the Chess Program binary will crash the Program – but not the Operating System, which, if well designed, should be isolated from application crashes or failures. The computer itself should not crash either.
It is true that we can speculate that biological organisms are more resilient in defending/protecting against random mutations – I speculate this simply because they may have very complex defensive mechanisms.
The fundamental questions for the proposed Evolution Simulator are:
Q1. Does it simulate random mutations reasonably well?
Q2. Does it simulate natural selection (and the fight for survival) reasonably well?
Q3. Does it minimally emulate an Irreducibly Complex System (ICS) (Behe), which we know a living organism really is? Our Chess Program definitely is an ICS.
Q4. Does it provide “tuning knobs” to allow playing out various “Simulation Scenarios”?
Yes, by changing various parameters: the length in bits of a random mutation; the number of mutations before each game (or set of games); the number of losses before the Framework declares an individual Chess Program “dead”; etc.
Jguy at #43
I thought about the level at which the random mutations are to be “injected”:
a. At the programming-language level (Perl, C, Java, etc.). This is a non-trivial problem because the “Mutator” may need to become “programming-language aware” and replace/modify one language statement, or a group of them, with another group that – although it may not make sense from the point of view of what it needs to accomplish – must still allow:
1. A successful compilation of the “mutated” Chess Program (CP)
2. A Successful Build of the CP.
3. A Successful Execution (Start) of the CP.
b. At the Executable Binary level. This is much simpler to accomplish – and still preserves a reasonable analogy with “random mutations” in the DNA of a cell/organism.
I think that the proposed Simulator, far from being perfect, is still a reasonable approach – one that can be defended, as I tried to do in my comment above at #44.
I also believe that this proposal, as it is – because it carries a strong analogy with the simulated target in essential aspects – may provide us with a good “projection” and understanding of the enormity (and logical impossibility) of the task that Darwinian evolution claims to be able to achieve.
…
b. At the Executable Binary level. This is much simpler to accomplish – and still preserves a reasonable analogy with “random mutations” in the DNA of a cell/organism.
…
If I’m not mistaken, about half of the amino acid positions in proteins can be substituted with another amino acid – especially one with at least similar chemical traits (e.g. hydrophobic or hydrophilic, and/or whatever other generic property)… of course, that only applies to coding regions of DNA.
So, assuming that is the general rule, would you be able to change 50% of the bits in compiled code without crashing the program?
I’m not sure how important that is to be representative. I have not thought a lot about it.
p.s. I don’t think the entire program needs to be modified. You could simply consider a set of methods and logic rules that act as building blocks… enough primitives to build almost any logical process. For example, if I link an AND function and an OR function, it will not error out. You just get useless output… that is, if it isn’t helping – in this case, to beat other chess programs.
p.p.s. To illustrate better: in such a chess-program experiment it doesn’t seem you really need to modify the skeleton of the chess program; rather, it seems you just need to modify the function(s) by which the program calculates the value of possible positions.
p.p.p.s. And I would not think you want to make it so that it just tunes existing functions… trial and error can find settings that are more finely tuned… you need to allow it to look for novel functions (i.e. new complex information).
InVivoVeritas et al,
The problem with using computer software as an analogue for living organisms is that software is algorithmic, i.e., it is a sequential chain of instructions. Break any link in the chain and the entire program crashes. Living organisms, by contrast, are non-algorithmic parallel computing systems and, as such, are very fault tolerant. A malfunction in one or even many components will rarely cause a catastrophic failure. They tend to degrade gracefully.
Living system simulations must take this into account, in my opinion.
Parallel computing is more difficult than serial computing. The lack of fault tolerance in serial computing is a conscious trade-off of cost vs. reliability rather than a penalty of serial computing. Typical computing environments aren’t hazardous to computers.
Satellites and spacecraft are some areas where there is hardware/software hardening to guarantee functionality despite adverse environments.
The value of comparing computer software to living organisms is not that they’re close equivalents; it’s that the known human design is much simpler than life and provides a floor for the minimum amount of “work” needed to accomplish what the more complex design does.
While some of the graceful degradation observed in life may be a function of the molecules (“harmless” amino acid substitutions), a substantial part comes from the system “design”, which is a function of the information encoded in the system, and not of the molecular properties of the materials (e.g. DNA checking/repairing molecules).
SirHamster:
Well said.
Mapou @ 50
I don’t think it matters. You’re really only looking for hopeful beneficial mutations, adding them together… and seeing whether there is a detectable step-wise path to higher complexity and function, or whether there is not. Perhaps that’s too simplistic, but that’s how it seems to me.
Another maybe: if the system is more resilient with multiple threads running (which I think is a point that could still be debated), wouldn’t this mean the selection aspect of the process would be even less likely to identify a beneficial effect when the organism is compared to rivals?
Off topic, but I just had a quick question: are symbiogenesis, epigenetics, and saltationism part of the modern evolutionary synthesis?
JGuy @53:
This is an interesting point. There is clearly parallel computing going on,* both inside cells and between cells. And yes, that allows for some robustness (one cell dies, for example, and the whole organism doesn’t come to a screeching halt).
But it does make it even more challenging to (a) get a mutation captured across the board in those individual places where it is needed/relevant, and (b) get things integrated across the whole.
One thing we can say with near certainty, based on our experience with technology, is that increasing levels of functional, coordinated complexity make it harder, not easier, for any given change to work seamlessly across the whole. The whole point of modularity is to try to deal with the escalating cascade of complexity that would otherwise obtain.
—–
* Parallel computing in the sense of various instruction sets being executed at multiple locations simultaneously clearly occurs pervasively throughout the organism.
Parallel computing in the sense of multithreading is, I believe, still more of an open question. Arguably, one could say that a form of simple multithreading occurs when, for example, multiple transcriptions from a large DNA section are occurring simultaneously. One might also perhaps argue that the creation of multiple amino acid chains (proteins) from a single mRNA transcript is a form of multithreading.
Nevertheless, I don’t know if we could call these true multithreading events or if there even is true multithreading occurring with molecular machines. That would be a remarkable achievement if it does happen!
(Incidentally, we need to distinguish between true multithreading and the existence of protocol hierarchies, such as when the cellular reproduction mechanism initiates a temporary shutdown of transcription activity so that the DNA can be faithfully copied. The latter is more of a break-in override protocol, than true multithreading.)
Jguy at #46
Several points here:
* I am not a biologist, but there is a chance that changing those amino acid positions – even to similar ones – may still have negative side effects, possibly far removed from the place and time of the change. It is hard to be sure of anything in biology, except that most probably things are as they are for a very good (at least initial) reason.
* When talking about the Chess Program (CP) binary executable, we should assume that this binary contains not only the executable code proper but also the CP’s database of moves and known strategies, plus any other configuration and metadata information (the structure of the chess board, the description of valid moves for each piece, etc.). It is known that a key element of the success of chess programs is, among other things, an extensive “chess knowledge” database.
* Now, when a “random mutation” is injected, it may land in the “database space” of the CP binary or in its “configuration space”. This means that some (many) such random mutations may not degrade the Program directly (or make it immediately crash). If the random change modifies a “chess move” in the database that is seldom used (or not used in the sequence of games of that particular mutated Program), this may imply a “graceful degradation” of that program – which can be judged quite similar to what you mentioned. If the ratio of program space to database space in the binary is (let’s say) 1/4, then 80% of mutations may not be immediately pernicious.
Mapou at #50
You are right to say that there are very significant differences between computer programs (computing systems) and biological organisms. I have a few comments on this thought, though.
* I am sure that there are many “parallel” (micro) resources available in biological systems, so that a partial failure can be masked by unaffected resources.
* At the macro scale there are still real “heart failures”, “kidney failures” or strokes.
* The “qualitative similarities” between the Simulator (Programming Artifacts in this Proposal) and the Simulated are:
– both are Irreducibly Complex Systems (made of a large number of interacting components or sub-systems that are precisely coordinated);
– it is logically similar for the two that any (or at least many) “mutations” in a perfect, harmoniously and finely tuned system may affect (compromise?) the smooth working and cooperation of its parts;
– my previous comment at #56 identified certain mutations that can also induce graceful degradation.
These qualitative similarities between the Simulator and the Simulated may convey a reasonable level of realism to the Simulation.
InVivoVeritas:
Well said. I’ve raised this point in the past as well, and I think it is worth remembering, at least in the back of our mind.
Most of the time I’m willing to grant for purposes of discussion and for assessing probabilities that many substitutions will be neutral.
However, the fact remains that we do not know if all or even most of these allegedly neutral substitutions are indeed neutral.
There is a whole host of downstream operations that could potentially be affected by the substitution. The translation process itself often involves multiple snips and concatenations, and error-correction mechanisms. So a substitution in a nucleotide base may end up being neutral, not because it was initially neutral, but because the translation process picked up the change and corrected it.
More intriguing would be if the translation process picked up the change and acted differently as a result — a different cut, a different concatenation, etc.
Furthermore, if amino acids can come in different forms (we’re barely starting to grasp some of the quantum effects in the cell) or be affected by added tags, then there could be other downstream effects. For example, the protein-folding process is not, contrary to regular assertions, simply an automatic process; it is a moderated process, with its own protocols and error-detection mechanisms. Do we know whether there are any changes in the folding process, the completed fold, or post-folding error detection and destruction with particular nucleotide substitutions?
Additionally, there may be stretches of DNA that are involved in multiple processes and/or with multiple reading frames. In those cases, we can’t assume that the mutations would be 100% neutral.
Anyway, just throwing some possible ideas out for consideration. I do agree the genetic code seems relatively robust to perturbation, and it might indeed be the case that many nucleotide substitutions are 100% neutral and invisible to the organism. But it is perhaps too soon and our knowledge too limited to allow for such a confident early assertion.
This is precisely the limitation of knockout experiments.
Furthermore, even with catastrophic changes, the change will not appear catastrophic until the particular routine is called. This is extremely common with complex technologies, and we see it all the time with our computers, our cars, and so on.
Finally, there is the issue of redundancy. If we knock out two of the gyroscopes on the Hubble Telescope and it still works, does it mean those two gyroscopes served no purpose? Of course not.
There are lots of ways that a particular mutation can be harmful, but the harm can lie dormant or be hidden for a time. Indeed, it is quite possible that a reasonable fraction of the allegedly neutral mutations could turn out to be “quietly harmful” rather than purely neutral.
I suppose the degree of harm caused by a mutation depends on where it happens in the genetic code. It is certain that the genome is organized hierarchically and that most DNA sequences are used for regulatory (control) purposes. A mutation in a regulatory sequence high in the hierarchy is likely to have severely deleterious, if not fatal, consequences.
It’s a good thing that error-correcting mechanisms are in place, otherwise no living organism would survive. This is an insurmountable problem for Darwinists, because the evolutionary process depends on random mutations; but, if given free rein, truly random mutations would quickly destroy the organism.
Let us assume, for the sake of argument, that the genome drives the construction of the organism, i.e. the genome is a set of assembly instructions for embryo development. (Personally I believe that is reductive; IMO there are many unknown levels of organizational direction acting upon the genome.)
Usually instructions contain symbolic links. An error in a symbolic link can be more devastating than a direct error at the level of the material result. Example: suppose an embryonic instruction reads “at time X add Y cardiac cells to the Z zone of the heart”, and a mutation causes an error in the last word, changing “heart” to “brain”. Then Y cardiac cells would go into the brain, where they would likely behave as a cancer.
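In programming terms the point is easy to demonstrate. A toy sketch in Python (the “instruction” format and the body model are invented for illustration): a single-token error in the symbolic link misroutes the whole action instead of merely degrading it:

```python
# The "body" and the instruction format below are illustrative inventions.
body = {"heart": [], "brain": []}

def execute(instruction):
    """Carry out 'add `count` cardiac cells to organ `target`'."""
    body[instruction["target"]].extend(["cardiac cell"] * instruction["count"])

execute({"target": "heart", "count": 3})   # correct instruction
execute({"target": "brain", "count": 3})   # one-word "mutation" in the link
print(body)  # cardiac cells now sit in the brain as well
```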
What I mean is that reasoning in terms of instructions doesn’t reduce the danger of mutations/errors. Quite the contrary: mutations/errors in the instructions are even more dangerous than direct errors in the final molecules.
Bottom line: from an informatics point of view, Darwinism is even more absurd than it appears from other perspectives.
equate65:
Not really off topic at all. We’re wondering what it would take to simulate evolution in a computer. Whether it’s even possible. And certainly we’d want to ask if these need to be taken into consideration in any simulation.
But to answer your question, no. To neo-Darwinism, aka the Modern Synthesis, development is a black box. It’s a theory about how genes spread through populations, not a theory about how phenotypes are derived from genotypes.
Chastising scientists for refusing to simulate evolution would be like chastising engineers for refusing to design an electric car. Anyone with half a brain can verify in a few minutes that the premise is untrue. Scientists have been simulating evolution for decades.
One of the advantages of simulations is the capacity to explore the behavior of models too complex to solve mathematically. I think some of the most interesting models are ones that use RNA folding. For instance, see The Ascent of the Abundant: How Mutational Networks Constrain Evolution by Cowperthwaite, et al. Unlike traditional models in population & evolutionary genetics that ignore development, models based on RNA folding implement a folding algorithm. The RNA is encoded by a gene (which could be RNA or DNA), its folded structure is computed, and then its fitness is computed based on some function of the folded structure. Add in mutation, recombination, etc., and you have the basis of a sophisticated simulation.
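For readers who want something concrete, here is a drastically simplified toy of that genotype-to-phenotype-to-fitness pipeline: the “fold” below is mere complementary pairing of a midpoint hairpin, nothing like the real secondary-structure algorithms such models use, and every parameter is illustrative:

```python
import random

BASES = "ACGU"
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def hairpin_pairs(rna):
    """Toy 'fold': bend the sequence back on itself at the midpoint and
    count complementary base pairs."""
    half = len(rna) // 2
    return sum((a, b) in PAIRS for a, b in zip(rna[:half], rna[::-1][:half]))

def mutate(rna, rng=random, rate=0.02):
    """Per-site substitution at the given rate."""
    return "".join(rng.choice(BASES) if rng.random() < rate else b for b in rna)

def evolve(length=40, pop_size=200, generations=100, rng=random):
    pop = ["".join(rng.choice(BASES) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=hairpin_pairs, reverse=True)   # fitness = pair count
        parents = pop[:pop_size // 2]               # truncation selection
        pop = [mutate(rng.choice(parents)) for _ in range(pop_size)]
    return max(pop, key=hairpin_pairs)

best = evolve()
print(best, hairpin_pairs(best))
```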
Results from such models are not simply reiterating old Darwinian ideas about selection. The title of the paper cited above invokes a concept of abundance in state-space, similar to an argument made earlier by Stuart Kauffman. This is not fundamentally a Darwinian argument.
By the way, Behe is mistaken to suggest that there has been no analysis of constructive neutral evolution. Apparently he just read a newsy piece by Lukes et al. without bothering to read the original piece by Stoltzfus, 1999, in which “constructive neutral evolution” was first proposed. This included simulations of an evolutionary model that later (due to an independent proposal by Force, Lynch, et al.) became known as the DDC (duplication-degeneration-complementation) or “neutral subfunctionalization” model. Today you can’t read any of the gene-duplication literature without coming across this model. The original papers in 1999 and 2000 have received thousands of citations in the scientific literature.
arlin #63
When asking “why hasn’t Darwinian unguided evolution yet been computer simulated?” I obviously meant “computer simulated realistically and successfully”.
I know that “scientists have been simulating evolution for decades”. But those simulations have been either unrealistic, or unsuccessful, or both.