Uncommon Descent Serving The Intelligent Design Community

The Simulation Wars

Categories: Intelligent Design

I’m currently writing an essay on computational vs. biological evolution. The applicability of computational evolution to biological evolution tends to be suspect because one can cook the simulations to obtain any desired result. Still, some of these evolutionary simulations seem more faithful to biological reality than others. Christoph Adami’s AVIDA, Tom Schneider’s ev, and Tom Ray’s Tierra fall on the “less than faithful” side of this divide. On the “reasonably faithful” side I would place the following three:

Mendel’s Accountant: mendelsaccount.sourceforge.net

MutationWorks: www.mutationworks.com

MESA: www.iscid.org/mesa

Comments
virital = virtualJT
March 27, 2009 at 07:58 PM PST
hazel wrote [37]: to JT: I was using latching and non-latching in the sense that they were used in that other thread. Sometimes we talked about explicit latching, in which there is a rule that prohibits correct letters from mutating, and implicit latching (or what kf has called quasi-latching) to mean what you have been meaning by latching. Given that the distinction between the two is related to the distinction I mentioned - random vs. non-random with respect to fitness - it would seem best for there to be a consistent usage of the word latching. OK. To return to the mutationworks example (from the OP): they say that Dawkins's program latches letters into place. The mutationworks people say their own simulation does not latch like Dawkins's, and that this is why their implementation is more realistic and why Dawkins always wins. But the mutationworks simulation does latch - whether they realize it or not. It latches the original configuration, because any other mutation that can occur is rejected in favor of the original configuration (save one and only one target config out of the entire space). Really, I think latching should mean any fixation of a trait, and artificial latching should mean any process that models latching by means of a single individual (instead of a population) and the actual prevention of negative mutations. But latching of some type (real or virital or artificial or actual) does occur in nature. The question (which the mutationworks people, for example, do not address) is why only two configurations should latch - the original config and one specific distant target config - as happens in the mutationworks simulation. Why can't there be multiple intermediate configs that "latch"?JT
March 27, 2009 at 07:57 PM PST
Yes, that is exactly what I and others have explained on the other thread. And, as I will point out again (because it doesn't seem like anyone wants to respond), all letters need to have the possibility of mutation, because mutation is random with respect to fitness: that is a critical part of the model. And as JT points out (and I have pointed out elsewhere), the reason an occasional mutation of a correct letter has no net effect is that such mutations are detrimental to survival, so a phrase with a mutated correct letter is very unlikely to be the most fit individual in a generation. So I agree: what is difficult to understand about this?hazel
March 27, 2009 at 07:35 PM PST
Jerry wrote [43]: Explicit latching as it is defined here more closely resembles reality. Once something becomes functional, here the matching to the desired letter, it will tend to be conserved by natural selection. But it is not in Dawkins’ program so it is Dawkins’ thinking that is buggy. My guess is that Dawkins tried the latching first and found it to be too easy so he changed the program but unwittingly went to poor evolution to do so. I honestly don't know what you're talking about. The way Dawkins' antagonists have characterized and implemented the algorithm, there is only one individual in the population. What does that have to do with reality? In that scenario, all you can do is artificially prevent certain beneficial letters from changing in that one individual. But with an actual population, and beneficial mutations overtaking the entire population (a simplifying assumption as well, admittedly, but certainly comprehensible), any detrimental mutations in certain individuals (to already beneficial changes) will be swamped by the rest of the population that did not have those mutations. In my own implementation, a population of 500 and a mutation rate of even 5% means that on any given iteration there will be, on average, 25 mutations of a beneficial letter. But there are 475 individuals that did not have this detrimental mutation, so it is highly unlikely that the winning candidate for that iteration [generation] will have the detrimental mutation in question. What is difficult to understand about this?JT
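The population scheme JT describes can be sketched in a few lines of Python. This is my own minimal construction, not JT's actual program: 500 offspring per generation, a 5% per-letter mutation rate, and the single fittest offspring becomes the next parent. No letter is ever explicitly latched, yet correct letters are conserved, because an offspring that reverts one is almost never the fittest of 500.

```python
import random

# A minimal sketch of the population-based Weasel variant JT describes
# (my construction; parameter values follow his comment above).

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 500
RATE = 0.05  # each letter, correct or not, mutates with 5% probability

def fitness(phrase):
    # Number of letters matching the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase):
    # Mutation is random with respect to fitness: correct letters are
    # just as likely to change as incorrect ones.
    return "".join(random.choice(ALPHABET) if random.random() < RATE else c
                   for c in phrase)

def run(seed=0, max_generations=10000):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(1, max_generations + 1):
        offspring = [mutate(parent) for _ in range(POP_SIZE)]
        # An offspring that reverts an already-correct letter is almost
        # never the fittest of 500, so correct letters are conserved by
        # selection alone, with no explicit latching rule.
        parent = max(offspring, key=fitness)
        if parent == TARGET:
            return generation
    return None  # safety cap; not expected to be reached
```

Running this converges on the target in a modest number of generations even though every letter remains mutable throughout, which is the "swamping" effect JT appeals to.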
March 27, 2009 at 07:23 PM PST
And just to emphasize (as others have): intermediate functionality need not be directly associated with only one specific distant target (subcomponents of an eye, for example, could be used elsewhere - I don't believe anyone has successfully ruled this out).JT
March 27, 2009 at 07:09 PM PST
KF wrote [32]: "Had there been a serious functionality constraint - which was what Hoyle stipulated to begin with, Weasel would wander around hopelessly in the sea of non-function, unable to find a shore of function to hill-climb." That's only because you're defining "Methinks it is a weasel" as the only functional string. Presumably you accept that string as being functional. If you're just defining every other string of letters as nonfunctional, then obviously there won't be any intermediates that can be preserved. 1,000 bits is credibly unreachable by our observed cosmos, and first life credibly needs 600 k bits; [NOTE: the following, up to the dashed line, is a digression from the subject of simulations] If you're talking about first life (or a first RNA strand, or whatever), then obviously that cannot come about by Darwinian selection. Everyone would agree with that. But does that logically necessitate that it just poofed into existence instantaneously via Divine fiat? Or does it also allow that it inexplicably coalesced bit by bit, but only because it was being guided by the unseen hand of Providence, a Providence exercising some sort of non-physical fundamental force of nature called "intelligence"? But what about when a human is formed via the automatic process of epigenesis, starting from something that looks entirely different - an embryonic cell? Is that an automated, mechanized process, or is it also guided to completion in real time by some "Intelligence"? Obviously it's the former - it results from a mechanized, blind physical process. Now you're saying, "But that embryonic cell is a program for a human being!" And granted, that is indeed the case. But any set of physical contingencies that results in a certain outcome (plus any associated natural laws) is also a program to generate whatever outcome it generates. There is simply no getting around that.
So why couldn't there be some set of preexisting physical conditions in the universe, prior to RNA, that resulted in RNA coming into existence? What law rules that out? Of course you're saying, "Well, whatever that thing is that caused RNA, it couldn't have come into existence by blind chance either, because that would be like saying an embryonic cell could come into existence by blind chance." And again, granted, that is a fact. But saying those physical causes of RNA would have to be caused by intelligence basically explains nothing, because intelligence isn't actually defined as such in I.D. In fact, any sort of definition of it is implicitly ruled out, since it is asserted that intelligence is nondeterministic. If something is nondeterministic, that means no description exists to accurately characterize (and thus predict) its behavior. And to say, "Well, we know that human intelligence, for example, is this mysterious nondeterministic thing" is merely begging the question. But to return to the point where I think you and I implicitly agree: any physical cause for RNA would be no more probable than RNA itself to occur by blind chance, and furthermore any physical cause proposed for RNA would just push back what needs to be explained. So I think we both agree on that. In fact, any set of physical conditions and laws that resulted in the formation of RNA would equate to RNA, just as if f(x) = y, then f(x) equates to y, and just as an embryonic cell (plus epigenetic machinery) equates to a human being. But just as an embryonic cell doesn't look anything like a human being, any set of physical contingencies that resulted in RNA might not look anything like RNA (or life, or animals, or human beings). Such contingencies could be diffuse and disparate and indirect and remote, but could still collectively result in RNA. What law rules that out? Nothing.
And certainly you're just pushing back what needs to be explained, but intelligence isn't an explanation, so let's just say instead that at some point in the regression you hit something that has always existed (and thus did not need to be caused by anything). ------------------------------------------------------- Sorry about that long digression, but there was actually another point I wanted to make, in the context of evolution and simulations, regarding your comment above: 1,000 bits is credibly unreachable by our observed cosmos, and first life credibly needs 600 k bits; I assume you realize that if there are functional intermediates, then your 1,000-bit limit on what's reachable by blind chance goes out the window. If the intermediates in Dawkins's Weasel are functional, then your 1,000 bits are irrelevant - right? Consider the example of monkeys with typewriters, but with some modifications: You have a bunch of monkeys with access to a hat filled with a couple of thousand words or so - all words in Hamlet. A monkey can grab a word out of the hat and tack it onto the beginning or end of a sequence of words in a sentence. Once it reaches, say, 10 words, a human comes in and reads it, and if it's not a 10-word phrase from Hamlet, he picks up all the words, throws them back into the hat, and the monkeys have to start over. Now, presumably we could consider any 10-word phrase from Hamlet functional (you considered the Weasel sentence functional). Certainly any 10-word phrase from Hamlet could be termed "sublime poetry" or maybe the "work of genius". But before discussing that further, let it be noted that in the above scenario the monkeys will never generate a ten-word phrase from Hamlet, as the odds against them are 1 in (2000^10)/10000 [assuming there are 10,000 10-word sequences in Hamlet]. However, what if the rule is "preserve any n-word sequence from Hamlet and reject any additional word not resulting in an n-word sequence from Hamlet"?
Obviously the monkeys will eventually hit some 10-word sequence from Hamlet. Certainly a vivid metaphor can be painted with just a couple of words, and a phrase needn't be a full ten words to be memorable: "A rose by any other name smells just as sweet." So preserving any sequence from Hamlet seems justified. [In the context of evolution, think of each additional mutation as either yielding a viable organism or not.] What if instead of Hamlet it were a biology textbook, or perhaps the rule "any valid English sentence"? But concerning nature: let's say some biological entity exists but its origin is unknown. Suppose that entity is functional - say it's an eye, or maybe a heart or hand or whatever. Without regard to its origin, there is a reason why this entity's complex physical configuration results in a certain function, and why that function conveys certain advantages on its possessor. Plunk that entity down in a certain context, and presumably reality itself will dictate how the entity's complex configuration confers on it certain advantages: "This part of the entity interacts with this part and this part [etc.], and the result is such-and-such function." So reality itself is parsing a biological sentence and saying, "OK, that's valid - keep it," or "That makes no sense - get rid of it." Now of course I am anthropomorphizing "reality" or "nature", but perhaps that is fundamentally unavoidable. Reality is, after all, making such discriminations. And if such an unavoidable view of reality or nature seems to confer on it some sort of intelligence, then maybe that is something we have to live with (I.D.ists shouldn't have any problem with that). In any case, I think you would have to say that God and Reality equate. You could never say that God exists in reality, because that would imply that reality was a more transcendent concept than God. So anyway, the idea would be that God is reality, i.e., God is the environment, God is nature.
In reality, in nature, certain things can exist (or persist) and certain things cannot; i.e., certain things are viable and certain things are not. If Man, for example, is what ultimately persists (hypothetically), then that tells you something about the eternal nature of reality. And there is probably a more direct transition to a proof of God via this path for someone more deft than me. But to reiterate a point from much earlier in this post: ...any set of physical contingencies that results in a certain outcome (plus any associated natural laws) is also a program to generate whatever outcome it generates...any physical cause for RNA [for example] would be no more probable than RNA itself to occur by blind chance, and furthermore any physical cause proposed for RNA would just push back what needs to be explained. So I think we both agree on that. In fact, any set of physical conditions and laws that resulted in the formation of RNA would equate to RNA, just as if f(x) = y, then f(x) equates to y, and just as an embryonic cell (plus epigenetic machinery) equates to a human being. In the case of preserving Hamlet, we have Hamlet as a preexisting template. In the case of English-language sentences, it is something able to recognize and parse English sentences. In such environments, "functionality" can very easily be built up under time constraints nowhere near combinatorial intractability. What is nature or reality able to parse? What are the legal sentences in that context? What does that tell us about the "intelligence" of reality or nature? Now, I'm just repeating myself and others. I don't necessarily want to attempt an even lengthier defense of the above if it's not already clear and compelling on its own. Just trying to add to the discussion a bit.JT
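The word-hat scenario above is easy to simulate. The following Python sketch is entirely my own construction: a three-phrase toy corpus stands in for Hamlet, words are appended only at the end (a simplification of the "either end" rule), and the cumulative rule keeps any partial sequence that is still a prefix of some corpus phrase, rejecting only the offending word. The blind version, which must draw a full valid 10-word phrase in one go against odds on the order of 1 in (2000^10)/10000, is omitted.

```python
import random

# Toy cumulative word search (my construction; CORPUS is a stand-in
# for the full text of Hamlet).

CORPUS = [
    "to be or not to be that is the question",
    "though this be madness yet there is method in it",
    "brevity is the soul of wit so it is said",
]
PHRASES = [tuple(p.split()) for p in CORPUS]
WORDS = sorted({w for p in PHRASES for w in p})  # the "hat" of words
N = 10  # target phrase length

def is_partial_phrase(seq):
    # True if seq is a prefix of at least one corpus phrase.
    return any(p[:len(seq)] == seq for p in PHRASES)

def cumulative_search(seed=0, max_draws=100000):
    # Draw words at random; keep each one only if the growing sequence
    # remains a partial phrase. Returns the number of draws needed to
    # build a full N-word phrase.
    rng = random.Random(seed)
    seq = ()
    for draws in range(1, max_draws + 1):
        candidate = seq + (rng.choice(WORDS),)
        if is_partial_phrase(candidate):
            seq = candidate
            if len(seq) == N:
                return draws
    return None  # safety cap
```

With 22 distinct words in this toy hat, each extension takes on the order of 22 draws, so a full 10-word phrase typically appears within a few hundred draws: preserved intermediates collapse a combinatorially intractable search into a short one, which is JT's point.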
March 27, 2009 at 07:00 PM PST
What, precisely, is wrong with letting me use HTML to specify an ordered list? You might as well turn off the "preview" if it's not a preview. -------------- To make the Weasel program into Wandering Weasel: 1. Initialize the target randomly. 2. Mutate the target at the end of each generation. 3. Let the program run many generations. 4. In each generation, output the current target along with the current parent and its fitness.Sal Gal
March 27, 2009 at 06:53 PM PST
For those of you working with Weasel implementations, make the following modifications to obtain a Wandering Weasel program: 1. Initialize the target randomly. 2. Mutate the target at the end of each generation. 3. Let the program run many generations. 4. Output the current target along with the current parent in each generation. To study the behavior empirically, you will need to do, say, 100 runs for various settings of the three parameters I identified in my previous comment. In data analysis, you should plot, for each combination of parameter values, the mean and median fitness of parents in the 100 runs, as a function of generation. Dispersion of fitness (error bars, standard deviation values) is also of interest.Sal Gal
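Sal Gal's four modifications can be sketched directly. The Python below is my own minimal reading of the recipe; the offspring count and the two mutation rates are illustrative parameter choices, not values given in the comment.

```python
import random

# A sketch of the four-step "Wandering Weasel" recipe above.
# OFFSPRING, CHILD_RATE, and TARGET_RATE are illustrative assumptions.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
LENGTH = 28
OFFSPRING = 100     # offspring per generation
CHILD_RATE = 0.04   # per-letter mutation rate in reproduction
TARGET_RATE = 0.01  # per-letter mutation rate when copying the target

def rand_string():
    return "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def mutate(s, rate):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s, target):
    # Number of positions where s matches the current target.
    return sum(a == b for a, b in zip(s, target))

def wandering_weasel(generations=200, seed=0):
    random.seed(seed)
    target = rand_string()                      # 1. random initial target
    parent = rand_string()
    history = []
    for _ in range(generations):                # 3. run many generations
        children = [mutate(parent, CHILD_RATE) for _ in range(OFFSPRING)]
        parent = max(children, key=lambda c: fitness(c, target))
        # 4. record target, parent, and fitness for this generation
        history.append((target, parent, fitness(parent, target)))
        target = mutate(target, TARGET_RATE)    # 2. mutate the target
    return history
```

In typical runs the parent's fitness climbs from near zero and then tracks the drifting target at a high but imperfect level; collecting 100 such histories per parameter setting gives the mean/median fitness curves Sal Gal asks for.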
March 27, 2009 at 06:41 PM PST
"Implicit latching" -- the term itself -- reflects gross misunderstanding of the evolution strategy. The Weasel program should have no termination criterion. That is, it should not stop itself, just as evolution does not stop itself. Replace the fitness function in Dawkins' example with one that draws a new target sentence uniformly at random whenever the argument (the sentence passed to the function by the main program) matches the current target sentence perfectly. The upshot is that the evolution strategy (ES) has to start all over when it obtains a target. Dealing with this time-varying fitness function requires no change whatsoever to the (non-terminating) ES. Statistically, the behavior of the ES in going from the Hamlet sentence to the new target is identical to that in going from the random initial parent to the Hamlet sentence. The ES "latches" the Hamlet sentence no more than it does the very first parent of the run. (Note that there are "self-adapting" ESs that adjust mutation "step size" dynamically. Any "latching" is in reduction of the expected distance of the offspring from the parent. There is no such reduction in Dawkins' ES, inasmuch as the mutation rate is constant.) A more subtle, but considerably more interesting, approach would be to mutate the target sentence in each generation. Consider the entropy of the n-th target T_n conditioned on the n-th parent P_n, H(T_n | P_n), with P_1 and T_1 drawn uniformly at random. This is a measure of how much information you have to be given, on average, to know the target when you already know the parent. An objective measure of success for the ES is H(T_n | P_n) decreasing in n (perhaps reaching some minimum). Clearly the ability of the ES to track the moving target depends on the number of offspring, the mutation rate in reproduction, and the mutation rate in copying the target from one generation to the next. 
The reason I focus on reduction of entropy is that Dawkins evidently was thinking of accumulation of information when he wrote the strange term cumulative selection. I've just described how to illustrate accumulation of information "about" a randomly-initialized target that may drift to any point whatsoever in the search space.Sal Gal
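The measure H(T_n | P_n) can be made concrete. The sketch below is my own plug-in estimator under an extra simplifying assumption that is mine, not Sal Gal's: letters at different positions are treated as independent, and when parent and target letters disagree the target letter is taken as uniform over the other 26 symbols. The phrase entropy is then just the phrase length times a per-letter entropy determined by the agreement probability q.

```python
import math

# Plug-in estimator for H(T_n | P_n), in bits, under a per-letter
# independence assumption (my simplification).

ALPHABET_SIZE = 27  # A-Z plus space

def per_letter_entropy(q):
    # Entropy of one target letter given the parent letter, when they
    # agree with probability q and mismatches are uniform over the
    # remaining 26 symbols.
    if q == 1.0:
        return 0.0
    if q == 0.0:
        return math.log2(ALPHABET_SIZE - 1)
    return (-q * math.log2(q)
            - (1 - q) * math.log2((1 - q) / (ALPHABET_SIZE - 1)))

def phrase_entropy(agreements, length):
    # agreements: per-run counts of matching positions at generation n,
    # pooled across runs to estimate q.
    q = sum(agreements) / (len(agreements) * length)
    return length * per_letter_entropy(q)
```

A sanity check: with q = 1/27 (knowing the parent tells you nothing about the target) the per-letter value is exactly log2(27) bits, and with q = 1 it is 0; H(T_n | P_n) decreasing in n is then precisely the "accumulation of information" about the moving target that Sal Gal describes.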
March 27, 2009 at 06:24 PM PST
jerry, No, non-latching resembles reality. Mutations take place at the level of the gene, and are random. Selection takes place at the level of the organism, and is related to fitness.David Kellogg
March 27, 2009 at 03:58 PM PST
Explicit latching as it is defined here more closely resembles reality. Once something becomes functional, here the matching to the desired letter, it will tend to be conserved by natural selection. But it is not in Dawkins' program so it is Dawkins' thinking that is buggy. My guess is that Dawkins tried the latching first and found it to be too easy so he changed the program but unwittingly went to poor evolution to do so.jerry
March 27, 2009 at 03:52 PM PST
Great - now I get it, and I agree.hazel
March 27, 2009 at 03:01 PM PST
hazel, Pendulum's joke is a play on the catch-phrase "It's not a bug, it's a feature!", which is commonly heard in arguments among engineers over whether a hardware design or computer program is doing what it's supposed to. Explicit latching is a bug, because it fails to conform to Dawkins' original description of the Weasel program's intent and because latching violates the principle that mutations should be random with respect to fitness. So-called "implicit latching" (which as David points out really means "non-latching") is the way the program is supposed to work. It's not a bug, it's a feature! KF's complaint about "implicit latching that rewards non-functional but closer population members" is therefore, to use a couple of his favorite phrases, "distractive" and a "red herring".skeech
March 27, 2009 at 02:40 PM PST
I don't get this. I think this is wrong, but the smiley face makes it seem like a joke. I'm confused.hazel
March 27, 2009 at 12:07 PM PST
Pendulum, Possibly. I think "implicit latching" is a way to avoid saying "non-latching."David Kellogg
March 27, 2009 at 11:45 AM PST
hazel @ 37, The difference between explicit and implicit latching is that one is a bug, and the other is a feature. :)Pendulum
March 27, 2009 at 11:14 AM PST
to JT: I was using latching and non-latching in the sense that they were used in that other thread. Sometimes we talked about explicit latching, in which there is a rule that prohibits correct letters from mutating, and implicit latching (or what kf has called quasi-latching) to mean what you have been meaning by latching. Given that the distinction between the two is related to the distinction I mentioned - random vs. non-random with respect to fitness - it would seem best for there to be a consistent usage of the word latching.hazel
March 27, 2009 at 11:05 AM PST
gpuccio writes:
A wolf eating a rabbit is just part of an interaction. In itself, it does not select anything. It’s the rabbit adapting to wolves which self-selects itself for survival.
Yes, that's it. Some rabbits select themselves to survive, and others select themselves to be eaten. The wolves are purely passive. *rolls eyes* gpuccio, it's really quite simple. In evolution, whether an individual survives and reproduces depends on both the individual and the environment. In an evolutionary simulation, whether an individual survives and reproduces depends on both the individual and the fitness function. You seem to be straining at gnats in order to avoid admitting the obvious parallels between the two. Why is that?skeech
March 27, 2009 at 10:50 AM PST
KF @32, Why bring up Hoyle? Weasel was not a response to anything by Hoyle. Considering Weasel's extremely limited didactic goals, I'm amazed how much people have obsessed over it. And had trouble admitting that they made mistakes understanding it. Perhaps we should blame Dawkins for thinking so little of his example that his explanation was too skimpy.Pendulum
March 27, 2009 at 10:48 AM PST
DonaldM, your experience with targeting Bach reinforces SJ Gould's "rewind the tape of life" comment at the end of "Wonderful Life". As you saw, each time you got something different - perhaps something beautiful, but not Bach.Pendulum
March 27, 2009 at 10:28 AM PST
Speaking of these sorts of programs: another program that could approach the evolutionary-algorithm problem from a different angle can be found in music. My hobby happens to be messing around in my home recording studio, which uses MIDI (Musical Instrument Digital Interface) extensively. Back in the late '80s/early '90s, THE computer for music applications was, believe it or not, the Atari ST. A brilliant programmer, Emile Tobenfeld, developed a music sequencing program called Dr. T's KCS (for Keyboard Controlled Sequencer). Besides being able to do multi-track MIDI recording, the program had a Programmable Variations Generator (PVG). The PVG was the first (as I recall) algorithmic music generator (AGM), and I've never encountered anything like it since. There are other AGMs out there, but nothing close to what the PVG could do. Here's what it does:
Variations can be programmed to be Consecutive (give me 16 variations on this theme using the original as the basis for each) or Evolving (give me 16 variations on this theme basing each variation on the preceding one). The PVG consists of ten pages of functions (over 500 of them) divided into a series of logical groups. These are:
Changes: The introduction of new elements (via random or deterministic selection)
Signed: Size and weight plus direction
Gaussian: Statistical control of changes
Constant: Size and weight but NOT direction
Swap/Copy: The rearrangement of existing data in a sequence, or between two sequences, using random selection
Set Values: The selection of data at random that can be mapped to any set value. Any configuration of data is possible.
Global 1: Provides transposition, inversion, erasure and deletion
Global 2: Maps specified data to set values
Split/Pattern: An extension of the Global Protection function; it permits important characteristics of a sequence, particularly interval patterns, to be defined as a protection "template", with the varied material split from the original to form new material
Ornaments: The addition of adjacent or simultaneous data, with up to 18 different additions available at one transformation
Add Controllers: Similar to Ornaments. Used to add controller, program, aftertouch and pitch-bend data
Vary Controllers: Similar to Changes. Used to vary controller, program, aftertouch and pitch-bend data
Macros: Up to 16 of the above presets can be combined to operate simultaneously or sequentially on a sequence, with control over each preset's range and direction of reading
An additional function, In-Betweens, appears if the PVG is called from Open Mode; it permits two sequences to be "morphed" from one to the other. In addition, the Master Editor provides functions that don't easily fit into the PVG's environment.
Of particular note is the Pitch Map - select any pitch on any channel and map it to a new pitch and/or channel: this can also be done recursively. What the PVG does for the composer is to allow them to create their own tools that can be made to emulate virtually any conceivable compositional or pre/post-production MIDI editing process. For example, much composition requires "pre-processing", the manipulation of existing material via the user's own criteria to form new material - counterpoint is a good example of this. Practically any "rule" for extracting thematic material can be created or otherwise mimicked in the PVG: the musical devices of counterpoint, such as inversion, rotation, augmentation, diminution and reflection, can be programmed and applied to any aspect of the music - other compositional procedures are just as easily created. The PVG can also be used as an "ideas" generator: in short, KCS is a tremendous grab-bag of customizable tools suitable for both top-down and bottom-up composition and editing.
I think this program might have some direct applications to what is being discussed here. Back when I was running my studio off the ST, I remember trying to set up algorithms to see if a target sequence (four measures of something from Bach, let's say) could be generated starting with a boring three-octave quarter-note chromatic scale. I ran hundreds of permutations...but never once did it home in on something from Bach (or Mozart or Beethoven). Every permutation was unique musically...and some were even quite usable. But I've often wished I could run the program again in light of discussions like these about evolutionary algorithms. I think there would be some interesting applications.DonaldM
March 27, 2009 at 10:03 AM PST
JT: Heading out after a morning on phone calls, emails and slide presentations. On way out the door . . . latching in the context of Weasel has to do with either explicit letter by letter partitioned search, or to do with implicit latching that rewards non-functional but closer population members. Had there been a serious functionality constraint -- which was what Hoyle stipulated to begin with, Weasel would wander around hopelessly in the sea of non-function, unable to find a shore of function to hill-climb. Hill climbing begs the key question: ORIGIN of bio-function based on complex, specific information. (That is why we keep stressing FSCI: 1,000 bits is credibly unreachable by our observed cosmos, and first life credibly needs 600 k bits; with novel body plans at phylum or sub phylum level weighing in at 10's - 100's of M bits.) Optimisation/diversification/loss of already achieved function is not even an issue. GEM of TKIkairosfocus
March 27, 2009 at 08:23 AM PST
[30]: OK, once (in 24) I did use latching in the sense of artificially prohibiting detrimental mutations (which should be apparent from context), but elsewhere I meant "latching" as it could actually be expected to occur in reality through population dynamics. (I will be off for at least several hours.)JT
March 27, 2009 at 08:02 AM PST
Hazel wrote [just now in the Weasel thread]: "Non-latching implies that mutation is random in respect to fitness. Latching implies that mutation is not random in respect to fitness." Just need to clarify that in this thread (starting in 24), I personally have been using "latching" to mean merely a beneficial trait not changing (or changing rarely) in a species once it becomes fixated. Thus it does not generally change even though one or a few individuals in a population experience a mutation for that trait. Of course, mutationworks et al. have written their own version of the Weasel algorithm that operates on only one individual (instead of a population), and wherein latching is caused by expressly prohibiting mutations that are not "beneficial" in that one individual. But I didn't mean latching in this highly contrived and artificial sense. Rather, I mean merely the obvious fact that traits can become fixated and extremely difficult to change, unless some other very beneficial trait emerges.JT
March 27, 2009 at 07:54 AM PST
28 cont. My point would be that even the mutationworks simulation is latching, since it's implied that no intermediate can take hold. They can't possibly mean that all those "neutral" intermediates, as they term them, are overtaking the entire population. So they're saying, in effect, that the original config is latched and stays latched until one and only one target out of 4^6 is hit upon by chance in one go, and then that and only that replaces the original config, which remained latched against every other conceivable sequence generated by mutations. TO REITERATE: MUTATIONWORKS LATCHES AS WELL, BUT ONLY THE ORIGINAL AND TARGET CONFIG - THE IMPLICATION IS THAT NO OTHER CONCEIVABLE CONFIG CAN LATCH. DOES THAT SOUND REALISTIC?JT
March 27, 2009 at 06:38 AM PST
I had an epiphany as to how to explain the whole latching issue (maybe): Take a perfectly adapted species - say we don't know how it originated. Now say a detrimental mutation happens at some gene in one individual of that species. Everyone here will presumably agree that that mutation will disappear very quickly. So the bit affected by this mutation was "latched" into place, preventing a permanent change - but only by virtue of population dynamics, because one detrimental mutation in one individual will not be enough to take hold in a population. But someone will counter that even a single beneficial mutation in one individual could not take hold in a population. Then you're assuming that NO change of any kind is even possible - so why the charade of calculating how many hundreds of millions of years it takes to get a specific 6-character sequence (as in the mutationworks simulation)?JT
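The population-dynamics point can be illustrated with a small Wright-Fisher-style simulation. This is my own toy model, not anyone's published code; the population size and selection coefficient are arbitrary illustrative values. A single detrimental mutant is almost always lost within a few generations, so the original configuration is "latched" by population dynamics alone, with no explicit rule against detrimental change.

```python
import random

# Toy Wright-Fisher-style model of a single deleterious mutant
# (my sketch; N and S are illustrative assumptions).

N = 100   # population size
S = 0.1   # fitness cost: mutant fitness 0.9 vs. wild-type 1.0

def mutant_survives(generations=100, seed=None):
    rng = random.Random(seed)
    mutants = 1  # one individual carries the detrimental mutation
    for _ in range(generations):
        if mutants == 0:
            return False  # the mutation has disappeared
        if mutants == N:
            return True   # fixation (very rare for a deleterious allele)
        # Each of the N offspring picks a parent in proportion to fitness.
        w_mut = mutants * (1 - S)
        p = w_mut / (w_mut + (N - mutants))
        mutants = sum(rng.random() < p for _ in range(N))
    return mutants > 0

def loss_fraction(trials=200, seed=0):
    # Fraction of trials in which the single mutant is lost.
    rng = random.Random(seed)
    return sum(not mutant_survives(seed=rng.random())
               for _ in range(trials)) / trials
```

Running `loss_fraction()` shows the mutant vanishing in the overwhelming majority of trials, which is the "latching by population dynamics" being described.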
March 27, 2009 at 06:25 AM PST
gpuccio @ 22: 3) Fitness is fitness. In NS, it is measured only by the capacity of the replicator to survive and replicate, and by nothing else. Comparing your criteria for fidelity to biological evolution, this is the only one where there might be some conceptual mismatch. My understanding of what you are saying here is that fitness can only be measured after the entity is dead, when we can add up its total contribution to the genetic content of the ongoing simulation. Is that correct? If so, then I understand that from your perspective 'fitness function' is a misnomer. "Environmental scoring function" might be more precise. Fitness in the post hoc sense can be tallied up after the modules that carry out selection and reproduction are done. You may also be interested in the 'agent-based modeling' approach to simulation, such as the Sugarscape model used by Axtell and Epstein in Growing Artificial Societies.Pendulum
March 27, 2009 06:24 AM PST
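One way to see the distinction is a toy sketch (entirely a construction for this discussion; the agents, the energy variable, and the survival rule are all made up) in which no fitness function is ever evaluated up front, and "fitness" is only the reproductive output tallied after the fact:

```python
import random

def run_generation(population, rng):
    """Each agent is (agent_id, energy). There is no explicit fitness
    function: a local environmental rule decides reproduction, and
    realized fitness is just the offspring count tallied afterwards."""
    next_gen = []
    realized_fitness = {}
    for agent_id, energy in population:
        kids = 0
        while kids < 3 and rng.random() < energy:  # made-up environmental rule
            kids += 1
            child_energy = min(1.0, max(0.0, energy + rng.uniform(-0.1, 0.1)))
            next_gen.append((agent_id, child_energy))
        realized_fitness[agent_id] = kids
    return next_gen, realized_fitness

rng = random.Random(2)
population = [(i, rng.random()) for i in range(10)]
next_gen, realized_fitness = run_generation(population, rng)
# realized_fitness is the post hoc tally: fitness measured only by what
# actually survived and replicated, in the sense gpuccio's definition suggests
```

Here "fitness" is a bookkeeping result of selection and reproduction, not an input to them, which is the distinction between a post hoc tally and an "environmental scoring function."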
As far as the MESA project goes, it looks like Dr. Dembski is directly associated with it, so I'll have to study that carefully. Actually, I had assumed that those three examples of "reasonably faithful" simulations of evolution he mentioned would be from evo-theorists. I certainly did not expect that none of them would be. Maybe "reasonably faithful" was meant with respect to the I.D. cause. The first one is from ReMine - I should look at that one as well, but there's nothing on the website itself giving any sort of usable overview of the algorithm. But as far as mutationworks goes - "You are the weakest link. Goodbye."
JT
March 27, 2009 05:43 AM PST
[23] cont. [mutationworks.com] So you have a 6-character sequence, where each character can take one of four values. The assumption they make for the simulation is that between the starting point and the target there are no advantageous intermediates. So it becomes simply a 4^6 exhaustive search. However, if there were advantageous intermediates, then when such an intermediate occurred it would overwhelm a population, and thus any detrimental mutations would be highly unlikely to revert it (thus virtual latching as a result of population dynamics). They say only one offspring per generation, but if the mutation rate is as low as they say, there would be plenty of time for an advantageous intermediate to duplicate repeatedly, thus overwhelming any isolated mutations away from it. (Probably the reason they're only assuming one offspring per generation is that they erroneously make the same assumption for Dawkins' weasel.) But basically they're just assuming that no advantageous intermediates exist, so any intermediate can change back at the same rate as anything else, and thus it's just an exhaustive (non-cumulative) search. And then their other point is that the supposedly low rate of mutation means it would take hundreds of millions of years for this 6-character sequence to occur.
JT
March 27, 2009 05:29 AM PST
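To make the contrast concrete, here is a sketch (again illustrative, with a made-up alphabet and target) of what happens once advantageous intermediates are allowed: a mutant is kept whenever it matches the target at least as well as the current sequence, so each improvement is "virtually latched" by selection, and the search finishes in tens of generations rather than the thousands an exhaustive 4^6 search implies.

```python
import random

ALPHABET = "ACGT"
TARGET = "ACGTAC"  # hypothetical 6-character target

def score(seq):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def cumulative_search(rng):
    """Single lineage, one point mutation per generation; a mutant is
    accepted only if it is at least as close to the target as before.
    Improvements are never lost: virtual latching through selection."""
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while current != TARGET:
        gens += 1
        pos = rng.randrange(len(TARGET))
        mutant = current[:pos] + rng.choice(ALPHABET) + current[pos + 1:]
        if score(mutant) >= score(current):
            current = mutant
    return gens

rng = random.Random(3)
runs = [cumulative_search(rng) for _ in range(100)]
mean_gens = sum(runs) / len(runs)
print(mean_gens)  # typically a few dozen generations, not thousands
```

No letter is ever explicitly frozen here; reversions are merely rejected because they score worse, which is the "virtual latching as a result of population dynamics" described above, collapsed to a single lineage for simplicity.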
So mutationworks.com was one "simulation" personally recommended by Dr. Dembski in the OP. At the website they present their "simulation" in a contest with Dawkins' weasel, and the point is to show that Dawkins' weasel always wins. Thus, apparently, the whole purpose of the website is to illustrate the ostensible unreality of the weasel algorithm. The "Significance of Simulation" page says, "Dawkins' simulation has letters that never change once they are right. Nucleotides, by contrast, are never immune to mutation." So this website that Dembski is endorsing also says that letters are latched into place. This is one of the purported reasons they give as to why Dawkins' weasel always beats their own simulation. However, an even more crucial reason for their own simulation's failure can be found on the initial page: "Your lineage begins with a single asexually reproducing organism that leaves one descendent. This pattern is repeated for all generations thereafter. All prior mutations are preserved in your lineage. [emphasis added]" So all mutations are preserved from generation to generation - nothing is rejected, evidently. This website is basically a joke - they have Dawkins' image pasted up there like the boogie man, and each time you hit the "Next Point Mutation" button, it generates new sarcastic and/or humorous comments.
JT
March 27, 2009 04:13 AM PST
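For comparison, here is what the site's characterization of Dawkins' program ("letters that never change once they are right") looks like when implemented literally. This is an explicit-latching sketch, an assumption based on that quoted description; Dawkins' actual published version instead mutates copies of the phrase and selects the closest, with no letters frozen.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def latched_weasel(rng):
    """Explicit latching: every wrong letter is redrawn each generation,
    but a letter is frozen forever the moment it matches the target."""
    current = [rng.choice(ALPHABET) for _ in TARGET]
    gens = 0
    while "".join(current) != TARGET:
        gens += 1
        for i, t in enumerate(TARGET):
            if current[i] != t:  # correct letters are never touched
                current[i] = rng.choice(ALPHABET)
    return gens

gens = latched_weasel(random.Random(4))
print(gens)  # locks in the whole phrase within a few hundred generations
```

Against a lineage that preserves every mutation and rejects nothing, a latching searcher like this will of course always win, which is the mismatch the comment is pointing out.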