Uncommon Descent Serving The Intelligent Design Community

The Simulation Wars


I’m currently writing an essay on computational vs. biological evolution. The applicability of computational evolution to biological evolution tends to be suspect because one can cook the simulations to obtain any desired result. Still, some of these evolutionary simulations seem more faithful to biological reality than others. Christoph Adami’s AVIDA, Tom Schneider’s ev, and Tom Ray’s Tierra fall on the “less than faithful” side of this divide. On the “reasonably faithful” side I would place the following three:

Mendel’s Accountant: mendelsaccount.sourceforge.net

MutationWorks: www.mutationworks.com

MESA: www.iscid.org/mesa

Comments
Pendulum: "Well, if you don't accept that the fitness function subsumes all interactions of a phenotype and its environment, I can understand why you feel somewhat isolated."

I definitely don't accept that.

"Simulations such as Tierra and ECHO may be more to your liking, as these don't use an explicit fitness function."

I have been interested in Tierra exactly because of that, but as far as I understand it is not so different from the others, because even if the fitness function is not explicit, the environment still seems to measure specific properties. But I could be wrong.

"But I think a GP system that created radio antenna designs is just as valid according to your criteria. The fitness of each antenna arises spontaneously from its interaction with the laws of physics."

I would like to know more about those systems; I will try to read something.

Going back to the general problem, my idea is simple enough:
1) NS is defined as a mechanism of necessity.
2) The selection has to be "natural", IOW it has to be a consequence of the interaction between replicator and environment, and of the intrinsic functions of the replicator and their variations.
3) Fitness is fitness. In NS, it is measured only by the capacity of the replicator to survive and replicate, and by nothing else.
4) The modifications of fitness (acquisition of new functions) in the RV + NS model are due to RV. The new information accruing from RV is completely "unexpected" by the environment, and the only measure of that information takes place at the level of survival and replication, however it happens.
5) Any simulation of NS must have the same characteristics: the replicators and the environment will be digital systems (after all, it is a simulation), and the variation can be targeted at different values and modalities, but the selection must take place in exactly the same way: out of random variation, the digital replicators have to develop new information which confers on them new functions capable of better exploiting the existing digital environment. The environment must be totally blind to that. The new functions must be true functions, spontaneously giving a survival or replication advantage to the digital replicator, and must not in any way be "recognized" by the system for other characteristics.
gpuccio
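A minimal sketch of what an environment-blind simulation in the spirit of criterion 5 might look like (a toy, Tierra-style loop; every parameter here is a hypothetical value, not taken from any published system): replicators copy themselves at a rate set only by their own genome length, while the environment does nothing but hand out energy uniformly and reap at random when its fixed capacity is exceeded. No fitness function is ever evaluated, yet shorter, faster-copying genomes come to dominate.

```python
import random

random.seed(42)

POP_CAP = 200   # fixed "memory" the environment provides (hypothetical value)
STEPS = 2000

def spawn(genome):
    return {"genome": genome, "energy": 0.0}

def mutate(genome, rate=0.05):
    # blind copy errors: occasional insertion or deletion of one instruction
    g = list(genome)
    if random.random() < rate and len(g) > 1:
        del g[random.randrange(len(g))]
    if random.random() < rate:
        g.insert(random.randrange(len(g) + 1), random.randrange(32))
    return g

population = [spawn([random.randrange(32) for _ in range(50)]) for _ in range(20)]

for _ in range(STEPS):
    for org in list(population):
        org["energy"] += 1.0   # the environment doles out energy uniformly
        if org["energy"] >= len(org["genome"]):  # copying costs 1 energy per instruction
            org["energy"] = 0.0
            population.append(spawn(mutate(org["genome"])))
    # the environment acts only as a passive filter: when "memory" is full,
    # organisms are reaped at random -- no property is recognized or scored
    while len(population) > POP_CAP:
        population.pop(random.randrange(len(population)))

avg_len = sum(len(o["genome"]) for o in population) / len(population)
print(f"survivors: {len(population)}, mean genome length: {avg_len:.1f}")
```

This only sketches the shape of such a setup; it is not a demonstration that an environment-blind system can generate new functional complexity, which is precisely what gpuccio predicts it cannot do.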
March 27, 2009, 02:44 AM PDT
skeech: a passive role is a role just the same. What I mean is that the environment has no role in creating or modeling the information which determines fitness, while the information already existing in the replicator, its functions (especially survival and replication itself) and the random variation in those functions are the real source of the new information. The selection itself is not made by the environment, but is a consequence of the interaction between the replicator and the environment, and of how good the functions of the replicator are in that environment. That's what I mean by saying that the environment has a passive role. Moreover, it should be obvious that the environment is totally blind to the replicator, and has no relationship with its necessities or potentialities, if not as a passive consequence of the fact that the replicator has to be fit for the environment (be it because it was designed for it, or because it has adapted to it). No theory I know (except perhaps the extreme forms of TE) really assigns an "active" role to the environment (in the sense which I have tried to explain). It's the replicator which is functional, not the environment. It's the replicator which has rules of necessity to be able to survive and replicate: the environment is just permissive or not permissive to the existence of the replicators. If NS has to be considered a mechanism of necessity (and that's usually the way it is considered), then it's the replicator which creates the rules, because it's its survival and replication which generates the output. We can say that the replicator self-selects itself according to the random modifications of its environment and to the variations/adaptations it can exhibit in response, and always according to the basic laws of necessity intrinsic in life and replication. A wolf eating a rabbit is just part of an interaction. In itself, it does not select anything. It's the rabbit adapting to wolves which self-selects itself for survival.
gpuccio
March 27, 2009, 01:57 AM PDT
gpuccio, Well, if you don't accept that the fitness function subsumes all interactions of a phenotype and its environment, I can understand why you feel somewhat isolated. Simulations such as Tierra and ECHO may be more to your liking, as these don't use an explicit fitness function. But I think a GP system that created radio antenna designs is just as valid according to your criteria. The fitness of each antenna arises spontaneously from its interaction with the laws of physics.
Pendulum
March 26, 2009, 09:41 PM PDT
"NS is more an effect of the replicator than of the environment, and the environment plays only a passive role."

That makes no sense. The environment has a huge effect on natural selection. Would polar bears be white if snow were black? Would insects be camouflaged if all of their predators were sightless? When a rabbit (a replicator) is eaten by a wolf (part of its environment), is the environment acting passively?
skeech
March 26, 2009, 09:34 PM PDT
Pendulum: "The program knows the target, but the population does not know the target."

That's what makes, IMO, all EAs silly insofar as they are used as even a vague simulation of NS. I have debated this principle many times here and, although I seem to be rather alone in believing it, I still stick to it. The idea is simple: NS is a kind of selection where the fitness arises on its own in some environment: it's not that the environment "recognizes" it, as many seem to think. NS is more an effect of the replicator than of the environment, and the environment plays only a passive role. The fundamental principle is therefore that the environment must know nothing of the replicator, or of the principles which determine its fitness. In other words, there must be no fitness function programmed in advance. What I mean is that fitness must arise of itself, "on its own bootstraps": any other situation, where fitness is in some way "recognized", has nothing to do with NS, and is a form of intelligent selection. The weasel is just a form of trivial IS. Other EAs are smarter, but IS they still are, all of them, including Zachriel's plays with words and phrases. This difference between "real fitness" and "recognized fitness" is fundamental, and yet constantly overlooked. Fitness has to be fitness, and not adherence to a pre-ordained function where the "fitness" is artificially conceded because the system recognizes a target which it is programmed to recognize. That is not fitness, but only the intelligent recognition of a pattern by observation and measurement of it.

I have suggested a couple of times the only kind of simulation which would really simulate NS: any simulation where the replicators, population and mutation are in some way controlled, but the selection is not. In other words, a simulation which should generate, in a natural computer environment, replicators which "spontaneously" improve their fitness in the environment, profiting from the natural rules of the environment, while the environment does not actively recognize any fitness function, but just acts as a passive filter for spontaneous fitness. And such a simulation should be able to generate a significant amount of functional complexity in the replicators. That would be a simulation of NS. And, IMO, it would never work. In the meantime, I have nothing against simulations of IS, provided it is understood that they are good simulations of ID, and not of NS. Indeed, intelligent selection is probably a tool which can easily allow the implementation of ID, together with guided or targeted variation. As I have argued elsewhere, that's the only kind of implementation of ID by partial random search of which we have examples both in nature (antibody maturation) and in human simulation of it (protein engineering).
gpuccio
March 26, 2009, 09:00 PM PDT
The reason for this simulation is that one of the current theories of evolution is that part of the DNA is non-coding and is essentially mutating away because of its non-use. Eventually a small number of these non-coding DNA sections becomes useful. A demonstration that a random process can produce something close to a functional protein over time, or that even after millions of iterations it was never able to get close to one, would also be interesting.

So I had this idea to see if some form of mutation could ever lead to something useful. Suppose one took a random string of DNA, or maybe a repeating sequence that is typical in a genome, that is 240 nucleotides long. Nothing magic about 240 except it would represent a protein of 80 amino acids. Then take a set of, say, 1000 proteins of length 90-100. I don't know how many exist of exactly that length, but maybe the 1000 proteins could be generated by taking sub-lengths of slightly longer proteins. Then mutate the string, say two or three nucleotides at a time to represent a certain time period, determine the protein, and compare it to each of the 1000 proteins starting at 5 proteins in. Determine a measure for how close the mutated protein is to any of the sample proteins and keep score. After x iterations, see how similar it is to any of the functional protein polymers. The mutation could be of several different types, such as an insertion, deletion, SNP or something else. The similarity measure would assess whether two comparable amino acids have similar chemical properties or not and adjust the distance measure accordingly. The new protein could be shifted up or down against each of the 1000 targeted polymers to see if some type of shift would improve the distance measure, or closeness, of the base protein with each of the targeted proteins. Do this a large number of times to see if any of the functional proteins, or subsections of these functional proteins, could ever be approached.

There are a number of issues, but a basic one is something I do not know much about: how long would each iteration take, since the basic polymer would have to be compared to say 1000 other polymers maybe 10 times each (shift the frame up by 5 and down by 5), so there would be 10,000 comparisons at each iteration? Maybe I am dreaming about how fast computers are these days and this type of experiment would need a supercomputer. A second issue is that 1000 is a small subset of potential proteins, but this could be remedied by including more proteins in the target set. A third issue is how similar two different amino acids are. Some are very similar and some are very different, so the distance measure has to reflect this. There are probably a lot of other details that I haven't dreamed of, but the purpose would be to see if a random process can approach anything useful. It is always possible that the mutated DNA leads to a useful protein, just not one in the set of known ones that are in the simulation. Since I know very little about proteins, there may be some way of determining if a particular protein could ever fold into a potentially useful shape just by its amino acid sequence. This could be a second measure, and possibly a way of pre-screening potential sequences for additional mutations or comparison to a larger number of proteins. There is no selection in this, because the theory says selection does not begin till the potential polymer becomes useful. When a particular iteration became potentially useful, then maybe some selection could be included.
jerry
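A rough sketch of the scheme jerry outlines, with stand-ins for the parts that need real data: the target set here is random (where jerry would use 1000 real proteins), the "genetic code" is a hash of each codon (a faithful version would use the standard codon table and handle stop codons), and the chemical-similarity score is a crude four-class grouping where a real run would use a substitution matrix such as BLOSUM62. Sizes are scaled down so it runs in seconds.

```python
import random

random.seed(1)
BASES = "ACGT"
AAS = "ACDEFGHIKLMNPQRSTVWY"
# crude chemistry classes: hydrophobic, polar, charged, special
GROUPS = [set("AVLIMFWC"), set("STNQYG"), set("DEKRH"), set("P")]
CLASS_OF = {aa: i for i, grp in enumerate(GROUPS) for aa in grp}

def translate(dna):
    # stand-in genetic code: hash each codon to one of 20 residues
    # (a faithful version would use the standard codon table)
    return "".join(AAS[hash(dna[i:i + 3]) % 20] for i in range(0, len(dna) - 2, 3))

def similarity(a, b):
    # 1 point for an identical residue, 0.5 for one in the same chemical class
    return sum(1.0 if x == y else (0.5 if CLASS_OF[x] == CLASS_OF[y] else 0.0)
               for x, y in zip(a, b))

def best_score(query, target, max_shift=5):
    # jerry's "shift the frame up by 5 and down by 5": try offsets both ways
    return max([similarity(query[s:], target) for s in range(max_shift + 1)] +
               [similarity(query, target[s:]) for s in range(1, max_shift + 1)])

# stand-in target set: 100 random "proteins" of length 90-100
targets = ["".join(random.choice(AAS) for _ in range(random.randint(90, 100)))
           for _ in range(100)]

dna = "".join(random.choice(BASES) for _ in range(240))  # 240 nt -> 80 residues
best = 0.0
for _ in range(50):                # each iteration: two or three point mutations
    for _ in range(random.randint(2, 3)):
        p = random.randrange(len(dna))
        dna = dna[:p] + random.choice(BASES) + dna[p + 1:]
    best = max(best, max(best_score(translate(dna), t) for t in targets))

print(f"best similarity reached: {best:.1f} (max possible 80.0)")
```

On the speed question: even at jerry's full scale (1000 targets, about 11 shifts, 80-residue comparisons), an iteration is on the order of a million cheap character comparisons, which an ordinary desktop handles in well under a second in a compiled language. No supercomputer is needed.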
March 26, 2009, 08:19 PM PDT
gpuccio @ 11, That Weasel had a string hard-wired into the fitness function is not important. Dr. Dembski makes this point very clearly in his discussion of MESA, where he points out that MESA is hard-wired to optimize for a string of 0s, and that this is exactly what Weasel does. The program knows the target, but the population does not know the target.
Pendulum
March 26, 2009, 08:06 PM PDT
DonaldM, As much interest as there is in EC, I don't get the impression that much of it is directed at simulating natural evolution. Lots of people just want to reap the benefits of an approach that works. If you showed this group an algorithm that was based on baraminology and worked faster than EC, or solved otherwise intractable problems, they'd buy it. Other scientists who are trying to simulate nature have to pick and choose which facet of nature to explore. John Holland's ECHO is trying to do something very different from MESA. Neither system attempts to understand interactions of entities reproducing at vastly different timescales, even though a big part of our genome is dead viruses. Bottom line: there are not enough scientists, and there is a lot of great science waiting to be done. A personal supercomputer costs under $10K from Dell. This is science the Discovery Institute can and should be sponsoring.
Pendulum
March 26, 2009, 07:57 PM PDT
Dawkins didn't intend to tell us anything about real biology; didn't you just read the Dawkins quote that R0b provided? He intended to demonstrate a principle that in other contexts might apply to real biology, and that is true, because it's a powerful principle that has been proven useful in many fields. It's not Dawkins, or those of us here discussing Dawkins' view of Weasel, who are trying to make more of Weasel than it is.
hazel
March 26, 2009, 06:49 PM PDT
Huh. It just occurred to me from R0b's comment above why Joseph (in the latching thread) misunderstands the notion of "cumulative" selection. Cumulative in TBW means that the total phrase is closer to the target, not that each letter is. So when a 28-letter target is matched by 15 letters, a progeny that matches 16 letters will be a cumulative advance even if a particular letter reverts. In short, Dawkins's use of "cumulative" implies non-latching of individual letters.
David Kellogg
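A tiny illustration of that reading of "cumulative" (the parent and child strings are invented for the example): fitness counts total matching positions, so a child can lose a previously correct letter and still be fitter overall.

```python
TARGET = "METHINKS IT IS LIKE A WEASEL"

def matches(s):
    # "cumulative" fitness in TBW's sense: total positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

parent = "METHINKS IT IS " + "X" * 13    # first 15 positions correct
child  = "MEQHINKS IT IS LI" + "X" * 11  # the T reverts, but L and I are gained

print(matches(parent), matches(child))   # prints "15 16"
```

The child scores 16 against the parent's 15 and so would be selected, even though one of its previously correct letters "unlatched" — exactly the behavior David Kellogg describes.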
March 26, 2009, 06:31 PM PDT
Rob in #12. I really think you miss the point. Dawkins's program doesn't demonstrate anything at all about biological reality. Nothing. In TBW he cleverly tries to avoid the problems inherent in natural selection by inventing a new term, cumulative selection. But what has he actually explained? Nothing. His clever phrase tells us absolutely nothing about how evolution actually brought about the multi-variations of life forms that exist on this wonderful planet. He could have called it phlophertophby selection and it would have explained just as much. Neither his book nor his program tells us anything about biological reality. It's a rhetorical gimmick and little else, all his caveats notwithstanding.
DonaldM
March 26, 2009, 06:24 PM PDT
gpuccio:
The weasel algorithm, whatever the details of its working, has one distinguishing feature: the program already knows the phrase it is looking for. And if that seems silly, well, it is. Just ask Dawkins: he will certainly give many complex arguments for his utilization of such a silly model, but silly it remains anyway.
Actually, it has at least two distinguishing features: #1 A completely artificial fitness function, as you point out. #2 An algorithm that uses cumulative selection to maximize fitness. Dawkins' explicitly stated purpose is to illustrate cumulative selection (#2 above). If you don't like the fitness function (#1 above), you can replace it with one you like better. It makes no difference to his illustration. Dawkins was very careful to point out that his target-based fitness function has no analogue in nature, lest there be any confusion:
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal.
His caveats seem to have fallen on deaf ears.
R0b
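For concreteness, a sketch of the cumulative-selection loop R0b describes. The population size and mutation rate here are guesses, since Dawkins did not publish his parameters; nothing latches — every letter of every child is free to mutate, and only the whole-phrase score is compared.

```python
import random

random.seed(0)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP, RATE = 100, 0.05   # guessed settings; Dawkins never published his

def fitness(s):
    # the artificial, target-based part -- swap this out and the
    # cumulative-selection illustration is unchanged
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # every position may mutate; correct letters are NOT locked in place
    return "".join(random.choice(ALPHABET) if random.random() < RATE else c
                   for c in s)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
for gen in range(1, 5001):
    # each generation: breed POP mutant copies, keep the best-scoring one
    current = max((mutate(current) for _ in range(POP)), key=fitness)
    if current == TARGET:
        break
print(f"generation {gen}: {current}")
```

With these settings the phrase is typically reached in well under a thousand generations, against roughly 27^28 possibilities for single-step random search — the contrast Dawkins's illustration is meant to convey.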
March 26, 2009, 04:40 PM PDT
uoflcard: "My question is, does nature really select for parts of an effective protein, for example? Don't you need the entire code (in the right context) before it can be of any use?"

The answer to that is easy. Nature cannot select for parts of an effective protein (unless they are functional themselves). And yes, you need the entire code, or at least the minimum code which can give the necessary and selectable function. That's, very simply, why the Darwinian model cannot work. The weasel algorithm, whatever the details of its working, has one distinguishing feature: the program already knows the phrase it is looking for. And if that seems silly, well, it is. Just ask Dawkins: he will certainly give many complex arguments for his utilization of such a silly model, but silly it remains anyway. Just a reminder: nature is not supposed to know what it is searching for. Indeed, nature is not supposed even to search for anything. The "only" thing nature is supposed to do is to provide thousands and thousands of complex functional proteins, to connect them in complex organized networks, and many other smart things which I will not state for brevity, but all of that blindly, and finally never know that such an astonishing task was accomplished.
gpuccio
March 26, 2009, 02:37 PM PDT
Sorry, on a totally different (and for once non-argumentative) note, how come some of the threads have fewer comments than displayed on the UD main page? E.g., "Message Theory – A Testable ID Alternative to Darwinism – Part 2" says 21 comments, but only 8 are displayed. And why are the comments closed? That's why I posted here, as this thread is not busy. Thanks!
eintown
March 26, 2009, 01:42 PM PDT
to uoflcard: The answer is no. Excluding letters that are correct from mutating is not what Weasel does. This has been discussed at length, starting here. You might start by reading my post #320.
hazel
March 26, 2009, 01:08 PM PDT
This is somewhat off-topic, as it has to do with phylogenetic software rather than evolutionary simulations, but what can be said about the Phylogenetic Tree of Mixed Drinks?
anonym
March 26, 2009, 12:38 PM PDT
A couple of questions, one for Mr. Dembski (although it could be answered by others) and one for anyone in the know:

#1) What types of things can make an evolution simulation "less than faithful" to biological reality?

#2) I was looking briefly at MutationWorks. It tries to form the phrase "METHINKS IT IS LIKE A WEASEL", supposedly with biological evolutionary techniques. In each generation, the letters that aren't what they are supposed to be are changed while the ones that are correct stay the same. So just looking at the first word, if your first generation produces XITIWWKQ, then it will keep the T and K and change the others, until finally it reaches METHINKS. My question is, does nature really select for parts of an effective protein, for example? Don't you need the entire code (in the right context) before it can be of any use?
uoflcard
March 26, 2009, 12:37 PM PDT
Pendulum
I agree that evolutionary algorithms can give widely divergent results, given different parameter settings and data sets. I think that is why it is important to be open source, and publish parameter settings, etc., so that work is reproducible.
I think you've hit on one of the major issues with these types of programs: no one knows what the correct algorithm ought to be, or what the actual parameters are supposed to be, to model biological evolution. What seems to be missing in all of these sorts of studies is any sort of mapping of the computational onto the biological, or vice versa. I recall that in the Avida study referred to in the OP, Lenski et al. defended the way they put their model together as mirroring "exactly what evolution requires." But exactly what evolution requires was precisely the point at issue! What was missing was a correspondence of the computational program to biological reality. That problem seems to be endemic to all these studies, which is one reason why there's such a diversity of models, parameters and outcomes.
DonaldM
March 26, 2009, 12:29 PM PDT
I agree that evolutionary algorithms can give widely divergent results, given different parameter settings and data sets. I think that is why it is important to be open source, and publish parameter settings, etc., so that work is reproducible. Should we wait for a longer excerpt or summary of this paper in the near future? I think from all the comments recently on GA-related topics, it is clear that many people here are interested in this topic and would like to hear your thoughts.
Pendulum
March 26, 2009, 11:39 AM PDT
Fantastic, I look forward to it; I'll keep an eye out! Oh, and Godspeed regarding the opposition at Baylor and the general Darwinist opposition that seeks to muffle your progress. I have confidence that the facts and logic supporting the ID position will eventually overcome the authority of the establishment. It has to.
PaulN
March 26, 2009, 10:16 AM PDT
In answer to the two previous questions, there was a conference back in April 2000 that I helped organize, titled THE NATURE OF NATURE. It was sponsored through Baylor's then Michael Polanyi Center, which I directed (the center itself was dismantled right after the conference because of agitation by Darwinists in and outside Baylor; see here for the rise and fall of that center). Because the center was shut down, no conference proceedings were ever published ... until now. Those conference papers, all updated, as well as a number of new ones, will be part of an anthology that will be appearing later this year (the volume will be titled THE NATURE OF NATURE). The essay described in this post, coauthored with Bob Marks, will be appearing in that volume.
William Dembski
March 26, 2009, 10:03 AM PDT
Excellent. I'd love to be able to read it; is it going to be publicly available when complete?
PaulN
March 26, 2009, 09:33 AM PDT
Is this essay your forthcoming paper or related to it?
tragic mishap
March 26, 2009, 08:56 AM PDT