Uncommon Descent Serving The Intelligent Design Community

The Simulation Wars


I’m currently writing an essay on computational vs. biological evolution. The applicability of computational evolution to biological evolution tends to be suspect because one can cook the simulations to obtain any desired result. Still, some of these evolutionary simulations seem more faithful to biological reality than others. Christoph Adami’s AVIDA, Tom Schneider’s ev, and Tom Ray’s Tierra fall on the “less than faithful” side of this divide. On the “reasonably faithful” side I would place the following three:

Mendel’s Accountant: mendelsaccount.sourceforge.net

MutationWorks: www.mutationworks.com

MESA: www.iscid.org/mesa

Comments
One more thing, kairosfocus:
So, from the outset, Weasel is a misleading, question-begging icon of evolution.
Who made it an icon of evolution? Do you think that evolutionary biologists give a fig about Weasel? It's a BASIC program in a 1986 pop science book, for crying out loud. It's the IDists who keep propping it up, which they're welcome to do, but they should at least prop up the right algorithm.
R0b
April 15, 2009 at 03:29 PM PST
kf in 370 writes:
And, while you may wish to point out that those who created explicitly latched weasels thereafter tended to not bother with the step of clustering trial runs in groups and picking the "best of group" to promote to the next stage, that makes no MATERIAL difference to the explicitly latched case.
1. Good. kf answered my question: he does know that Dembski et al. just used one child per generation, and did not pick the best child out of a population. 2. Dismissing this as "not bothering" to select from a population because it makes no "MATERIAL difference" is rather mind-boggling. Selection from a population is the MAIN point of the exercise. I'm rather flabbergasted to find this out.
hazel
April 15, 2009 at 01:38 PM PST
I think kairosfocus’s intent is that we suffer brain damage as we beat our heads against our desks.
I suspect you are right. It seems we are playing a game of "Last Man Standing"! :)
Alan Fox
April 15, 2009 at 01:18 PM PST
kairosfocus:
The latter for instance runs very long, very fast with a LOT of reversions and re-advances, producing an odd winking effect. The former very credibly latches or at the very least quasi-latches, with implicit latching being the most plausible mechanism. The latter simply does not show latching type behaviour.
I think kairosfocus's intent is that we suffer brain damage as we beat our heads against our desks.
R0b
April 15, 2009 at 12:01 PM PST
kairosfocus:
Question-begging loaded assertion embedded in the question.
We all, including yourself, agree that the "explicit latching" interpretation is erroneous, and yet it's question-begging for me to take that as a given? My apologies. I'll henceforth refer to it as the allegedly erroneous interpretation.
And, while you may wish to point out that those who created explicitly latched weasels thereafter tended to not bother with the step of clustering trial runs in groups and picking the "best of group" to promote to the next stage, that makes no MATERIAL difference to the explicitly latched case.
You offered Heng/Green's "explicit latching" as evidence that the mistake was reasonable and natural. This is a valid data point only if they made the mistake independent of Truman and Dembski. I submit that they did not. The point of Weasel was to illustrate cumulative selection, as Dawkins reminds us throughout his description of the program; and he describes the algorithm as selecting a winner from the progeny in each generation. Are we to suppose that Truman, Dembski, and Heng/Green all independently missed the point of Weasel and supposed that it involved no selection? And that they all independently came up with the idea that incorrect letters should mutate every time, in obvious contradiction to Dawkins' output? Isn't it more likely that Dembski simply accepted Truman's characterization, and Heng/Green followed suit?
Not to mention, you are not addressing the force of the actual statements by Mr Dawkins circa 1986, as most recently excerpted at 285. Understanding Mr Dawkins to be describing an explicitly latched program is a reasonable and natural reading.
You quoted a lot of Dawkins in 285. In which of those statements is "understanding Mr Dawkins to be describing an explicitly latched program is a reasonable and natural reading"?
As for the idea that various runs of explicitly latched weasels that differ from a given 1986 run in line 1 to line 2 makes the programs materially different, that seems just a tad overdrawn. The Weasel run c 1986 is one run. The rest are similarly selected runs. They will naturally differ at points.
The first two generations of Dawkins' output are almost identical. In Truman etc.'s output, we see the exact opposite. Are you saying that this discrepancy is due to undersampling (we looked at only one run of each version)? If so, did Dawkins' two sequences just happen to be almost identical, or did Truman etc.'s two sequences just happen to be completely different? As to your repeated criticism that Weasel ducks the issues of abiogenesis and isolated islands of functionality, you're absolutely right. It ducks them in the same way that software ducks the issue of electrical power. Weasel assumes the existence of life and gradual evolutionary pathways. Maybe you think that the latter doesn't exist. Maybe you think that a species can't, over time, gradually improve its speed or camouflage or vision. But if you don't subscribe to this brand of selective hyperskepticism, and you're willing to admit the existence of life and of at least some gradual evolutionary pathways, then you're admitting that Weasel has some point of analogy to biology.
R0b
April 15, 2009 at 11:42 AM PST
From Wesley Elsberry, Ph.D., quoted from here: Since [KF] seems to forget what he has claimed from time to time, let's see what he said that he is on the hook for:
Under other cases, as the pop size and mutation rates make multiple letter mutations more and more likely, letter substitutions and multiple new correct letter champions will emerge and begin to dominate the runs. This, because the filter that selects the champions rewards mere proximity without reference to function; as Mr Dawkins so explicitly stated. So, to put up a de-tuned case as if it were the sort of co-tuned latching case as we see in the 1986 runs, is a strawman fallacy.
[KF] was claiming there that runs that demonstrate the property of summary output that shows no loss of a correct base have to have the population size and mutation rate "tuned" for that outcome. He also claims that the expectation changes as one runs up the population size and mutation rate. Certainly [KF] is correct that increasing the mutation rate increases the proportion of outcomes where the summary output would show loss of a correct base somewhere. But [KF] is completely wrong that increasing the population size will also do that. Notice also that [KF] is concerned about what applies to the 1986 results. This means that we should pay attention to all the clues given concerning those results, including the distribution of generations to convergence reported there. Those will exclude broad ranges of parameter settings as not matching the expected values, and make other parameter settings more likely. The data I've been presenting is relevant to these various claims. The claim of "tuning" of parameters implies that there is a narrow region where one finds an expectation that summary output for three runs would not show loss of a correct base. Just to stave off foaming at the mouth on [KF]'s part, here is an expression of his giving just that:
Under reasonably accessible co-tuning of program mutation rates, population sizes and this filter, we will see that once a letter goes correct, from generation to generation, the champions will preserve the correct letters due to that co-tuning.
But the data show no such "tuning" was ever required. Population size as a variable shows that increasing population size leads to a lowered expectation that summary output will show the loss of a correct base, and that also holds for sequentially considered best candidates from the generations of runs. I presented some results yesterday showing that. Here are some more "weasel" results at 10000 runs per parameter set. Earlier, Gordon Mullings seemed to often use a mutation rate of 0.05 as a basis for discussion, so these use that relatively high mutation rate and vary the population size.
Runs=10000, PopSize=00050, MutRate=0.05000, Gen. min=044, Gen. max=566, Gen sum=1394077, Gen. avg.=139.4, Runs with losses=6202, Runs with dlosses=5351
Runs=10000, PopSize=00100, MutRate=0.05000, Gen. min=032, Gen. max=237, Gen sum=782130, Gen. avg.=78.2, Runs with losses=3945, Runs with dlosses=2841
Runs=10000, PopSize=00200, MutRate=0.05000, Gen. min=024, Gen. max=138, Gen sum=484796, Gen. avg.=48.5, Runs with losses=2481, Runs with dlosses=1364
Runs=10000, PopSize=00250, MutRate=0.05000, Gen. min=023, Gen. max=120, Gen sum=427161, Gen. avg.=42.7, Runs with losses=2181, Runs with dlosses=1073
Runs=10000, PopSize=00300, MutRate=0.05000, Gen. min=022, Gen. max=099, Gen sum=389581, Gen. avg.=39.0, Runs with losses=1974, Runs with dlosses=920
Runs=10000, PopSize=00500, MutRate=0.05000, Gen. min=018, Gen. max=066, Gen sum=312730, Gen. avg.=31.3, Runs with losses=1562, Runs with dlosses=557
Runs=10000, PopSize=01000, MutRate=0.05000, Gen. min=017, Gen. max=042, Gen sum=251991, Gen. avg.=25.2, Runs with losses=1214, Runs with dlosses=340
Runs=10000, PopSize=02000, MutRate=0.05000, Gen. min=016, Gen. max=032, Gen sum=216359, Gen. avg.=21.6, Runs with losses=1016, Runs with dlosses=226
Runs=10000, PopSize=10000, MutRate=0.05000, Gen. min=013, Gen. max=021, Gen sum=172933, Gen. avg.=17.3, Runs with losses=664, Runs with dlosses=94
Even with the high mutation rate of 0.05, where the distribution of generations most closely matches the three reported runs by Dawkins, the odds of any one result showing loss of a correct base are always less than 1 in 4. In order to get near having even an expectation that 1 in 2 summary outputs would show loss of a correct base, one would have to couple the relatively high 0.05 mutation rate with a population size in the 50s. Here are some runs to try to find the 1 in 2 expectation population size for mutation rate at 0.05:
Runs=1000, PopSize=00040, MutRate=0.05000, Gen. min=062, Gen. max=532, Gen sum=175140, Gen. avg.=175.1, Runs with losses=695, Runs with dlosses=618
Runs=1000, PopSize=00050, MutRate=0.05000, Gen. min=047, Gen. max=426, Gen sum=139932, Gen. avg.=139.9, Runs with losses=636, Runs with dlosses=555
Runs=1000, PopSize=00052, MutRate=0.05000, Gen. min=052, Gen. max=440, Gen sum=131848, Gen. avg.=131.8, Runs with losses=590, Runs with dlosses=508
Runs=1000, PopSize=00054, MutRate=0.05000, Gen. min=045, Gen. max=364, Gen sum=131937, Gen. avg.=131.9, Runs with losses=584, Runs with dlosses=484
Runs=1000, PopSize=00056, MutRate=0.05000, Gen. min=050, Gen. max=485, Gen sum=124665, Gen. avg.=124.7, Runs with losses=563, Runs with dlosses=472
Runs=1000, PopSize=00058, MutRate=0.05000, Gen. min=050, Gen. max=343, Gen sum=122644, Gen. avg.=122.6, Runs with losses=584, Runs with dlosses=472
Runs=1000, PopSize=00060, MutRate=0.05000, Gen. min=050, Gen. max=292, Gen sum=116839, Gen. avg.=116.8, Runs with losses=565, Runs with dlosses=465
It looks to be bracketed by population sizes of 52 and 54 when the relatively high mutation rate of 0.05 is applied. Notice the average generations required and how it is over twice the largest number of generations reported by Dawkins for his three runs in 1986, making it unreasonable to assert that Dawkins' runs might have used such a set of parameters. For a more reasonable mutation rate of 1/28 ~= 0.0357, here is data showing where we would expect 1 in 2 summary outputs to demonstrate loss of a correct base:
Runs=10000, PopSize=00028, MutRate=0.03571, Gen. min=074, Gen. max=834, Gen sum=2398174, Gen. avg.=239.8, Runs with losses=5931, Runs with dlosses=5413
Runs=10000, PopSize=00030, MutRate=0.03571, Gen. min=059, Gen. max=804, Gen sum=2228602, Gen. avg.=222.9, Runs with losses=5631, Runs with dlosses=5076
Runs=10000, PopSize=00032, MutRate=0.03571, Gen. min=072, Gen. max=722, Gen sum=2110742, Gen. avg.=211.1, Runs with losses=5463, Runs with dlosses=4896
To get to an *expectation* of as *low* as 1 in 8 that three summary outputs should *not* show loss of a correct base using a reasonable mutation rate, the population size has to be about 30. What does that do to the distribution of generations to convergence? It shows an average generations to convergence over three times as high as the longest run Dawkins reported in 1986. Mullings again had it wrong: the results reported by Dawkins in 1986 do not admit of a set of conditions that would lead us to *expect* three summary outputs to necessarily show loss of a correct base. [KF]:
In certain cases, the latching of the letters is practically all but certain. This is what on preponderance of evidence happened in 1986 in ch 3 of TBW, and in the NS run. Indeed, there we can see that of 300+ positions that could change, 200+ show that letters, once correct, stay that way, and none are seen to revert. Such a large sample provided by the man who in the same context exults in how progress is "cumulative" is clearly representative.
The issue is that there is no "latching"; "latching" requires a mechanism to protect correct bases from mutation. [KF] has gone to a lot of rhetorical trouble to make it appear that he has not had to retreat from earlier claims that correct bases were protected from mutation and that that protection was what allowed "weasel" to converge. Nor is it so that "weasel" preserves correct bases only "in certain cases", which is just another way [KF] expresses his bizarre "tuning" argument. Summary output from three runs is not "a large sample". Output from a thousand or more runs is a large sample, and those large samples show that it is the case where one would expect summary output to show loss of a correct base that has to be carefully tuned, requiring higher mutation rates and small population sizes, both of which are decidedly contrary to the situation that applies in biology. [KF]:
In short, implicit latching. AND in your case, with tearaway run to the target through multiple beneficial mutations on a probabilistically and empirically implausible model. [Think about the skirts of the mutations distribution and what happens with 500 iterations per generation of 5% per letter odds of mutation.]
What about population size 500 and mutation rate 0.05? The data given above show that it is unremarkable, simply another point intermediate in expectation between the smaller and larger population sizes that bracket it, and in no way showing that "skirts" on a distribution magically make the expectation of loss of a correct base in summary output go up with population size. It simply shows that the case that such an expectation *goes down* as population size increases is supported. Biological population sizes are only rarely so small as the numbers that we are talking about here. And nowhere are biological mutation rates as high as what we are talking about here. Empirically, choosing a small mutation rate, one that yields an expectation that a daughter copy will have one or fewer mutations, is perfectly reasonable. Empirically, choosing a large population size makes sense. [KF]'s assertions that the three summary outputs from Dawkins in 1986 should be *expected* to show loss of a correct base with reasonable parameter settings go against the empirical data. Even an unreasonably high mutation rate of 0.05 does not lead to that expectation for population sizes that yield a reasonable distribution for the number of generations reported for those three runs. Tuning is not necessary for the case of having reduced expectation that summary output will show loss of a correct base, but tuning is required to get to parameters that lead to a strong expectation that summary output will show loss of a correct base. Once you obtain parameters leading to such an expectation, though, the resulting distribution of generations to convergence demonstrates that such a parameter set cannot reasonably be ascribed to use by Dawkins in 1986.
Alan Fox
April 15, 2009 at 08:24 AM PST
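For onlookers who want to check this kind of tally for themselves, here is a minimal Python sketch of the experiment described above: a plain weasel (a population of mutated copies of the current champion, with the copy closest to the target promoted) run repeatedly while counting how many runs ever lose a correct letter from one champion to the next. It is a reconstruction from the descriptions in this thread, not Elsberry's C++ code, and the run counts and parameter values are illustrative only.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(phrase, rate):
    # Each letter independently has probability `rate` of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def score(phrase):
    return sum(a == b for a, b in zip(phrase, TARGET))

def run(pop_size, rate):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    champions = [parent]
    while parent != TARGET:
        children = [mutate(parent, rate) for _ in range(pop_size)]
        parent = max(children, key=score)  # proximity filter: best of generation
        champions.append(parent)
    return champions

def has_loss(champions):
    # True if any correct letter reverts between successive champions.
    return any(p == t and c != t
               for prev, cur in zip(champions, champions[1:])
               for p, c, t in zip(prev, cur, TARGET))

if __name__ == "__main__":
    for pop in (50, 250, 1000):
        runs = [run(pop, 0.05) for _ in range(50)]
        losses = sum(has_loss(ch) for ch in runs)
        avg = sum(len(ch) - 1 for ch in runs) / len(runs)
        print(f"pop={pop}: avg generations={avg:.1f}, runs with losses={losses}/50")

On the pattern of the figures quoted above, the count of runs with losses should fall as the population size rises, not climb.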
To Joseph: your metaphor that the unselected children in each generation are "aborted" is wrong. The children are born, but never reproduce. To kairosfocus: do you understand R0b's point that the TDMHG versions he studied don't even have a generation of children from which the best is selected, but rather just have each parent produce a single child who becomes the next parent?
hazel
April 15, 2009 at 07:16 AM PST
PPS: And, in a context where just this morning I was remarking on the wave of spamming in my email, the persistent insistent violation of my privacy by Anti Evo is a HIGHLY material issue of civility. The insistence on slandering design thought as being synonymous with creationism has had well known -- and intended -- unjustifiable public policy and career consequences. In short Mr Kellogg is indulging in enabling behaviour. That is sad, and sadly telling.
kairosfocus
April 15, 2009 at 07:01 AM PST
PS: Nor is it irrelevant or a red herring to point out yet again the unanswered challenge to originate functionally specific complex bio-information, the precise question that Weasel avoids by question-begging and distraction.
kairosfocus
April 15, 2009 at 06:53 AM PST
Onlookers: Notice that the just above happens on a day when I have taken pains, with examples, to point out the gap between the mathematics of varying Weasel type text strings and the effects of a proximity filter on a population of such variants, in further substantiation of why I consider that Weasel 1986 is very different in its behaviour from Weasel 1987 and other similar programs. [The latter for instance runs very long, very fast with a LOT of reversions and re-advances, producing an odd winking effect. The former very credibly latches or at the very least quasi-latches, with implicit latching being the most plausible mechanism. The latter simply does not show latching type behaviour.] As to the case presented by Rob, I have shown why I reject his conclusions, just the opposite of a closed mind. I disagree for stated reasons; I have not simply dismissed his case out of closed-mindedness. GEM of TKI
kairosfocus
April 15, 2009 at 06:49 AM PST
Wow, kairosfocus, your "response" to R0b (which I just read a little more closely) demonstrates conclusively that on the issue of Weasel, your beliefs are impervious to evidence. That response should be saved for posterity and put on display under "rhetorical evasion."
David Kellogg
April 15, 2009 at 05:21 AM PST
kairosfocus, as usual, a great deal of your response is nonresponsive, chock full of red herrings about evolutionary materialism and the inevitable strawman distractions about privacy, civility, etc. I'll let others take apart your specific points -- I'm too exhausted to deal with the onslaught of verbiage. Your writing is so noisy there is barely an audible signal. I will object to this:
Frequent reversions a la 1987 will appear as the likelihood of such substitutions rises,
You still don't get it. The 1987 video showed frequent reversions because it showed all the mutated phrases, not just the winning ones. The difference between 1986 and 1987 is an effect of the display only.
David Kellogg
April 15, 2009 at 05:16 AM PST
PPPS: For the 4% odds per letter of mutation case, the probability of a given pop member containing no changes is ~ 32%. The onward probability of a population of 50 having NO zero-mutation cases is thus 0.68^50 ~ 4.2 * 10^-9. So, we can safely assume that in any case where a substitution occurs, there will by overwhelming probability be a zero mutation case for it to compete with. The decisive issue will be the proximity filter, and if that is based on the binary value of the letters and space character, the result may be a very complex function of what the newly correct letter is, what the newly incorrect letter is, and what the original string was; not to mention remaining distance to target.
kairosfocus
April 15, 2009 at 05:03 AM PST
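The arithmetic in the PPPS is easy to verify. A few lines of Python (my own illustration, not code from any weasel implementation) reproduce both figures: the chance that a single 28-letter copy escapes mutation entirely, and the chance that a whole generation contains no such zero-change copy.

L = 28  # letters in "METHINKS IT IS LIKE A WEASEL"

for rate, pop in [(0.04, 50), (0.05, 50), (0.10, 50)]:
    p_unchanged = (1 - rate) ** L               # one copy comes through with no mutations
    p_no_unchanged = (1 - p_unchanged) ** pop   # a population with NO such copy
    print(f"rate={rate:.2f}, pop={pop}: "
          f"P(copy unchanged)={p_unchanged:.3f}, "
          f"P(no unchanged copy)={p_no_unchanged:.2e}")

For rate 0.04 and pop 50 this prints roughly 0.32 and a few parts in 10^9, in line with the ~ 32% and ~ 4.2 * 10^-9 quoted above.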
PPS: I should of course add that in the just presented we also see the preservation effect, i.e. the no-change case "often" wins. (We are in a low intrinsic probability of change per letter regime, here 4%, i.e. ~ 1 letter per member shifts on average and about 1/3 will have no change. That is, a generational sample of size N is likely to have a significant proportion of no-change candidates [~N/3 for 4%] so the likely winners will be no change, then a further letter goes correct, then a substitution [double letter change, one to correct the other to incorrect], then such a substitution with an additional correct letter; because of the impact of the proximity to target filter. As N rises as well, the likelihood that a generational sample will contain one or more of the substitution cases rises -- BTW, this is not the same as choosing the champion. [Why: An interesting issue is when we have no-change cases AND substitutions (likely to occur where a substitution happens for such a case as is under examination); which will likely have the same distance metric: either a preference for substitutions or a lottery among the least distance candidates may have to be invoked to decide. Such a decision will materially affect the incidence of substitutions and no-change cases in the resulting run of champions . . . or, should that be, roughly, queen bees?] So, what we see in such a run is not simply a matter of the mathematics of what changes and what does not change per the mathematical odds of letters changing. [For instance, suppose that the program sets a module: if distance to target is not superior among the mutants, pass the current champion to the next generation; this would produce NO substitution cases. By contrast, choosing a case that has the same distance but is different from the current champion would pass the maximum number of substitutions, and a lottery would be intermediate, trending to pass few of the relatively rare substitutions. O/p's in the three cases would be significantly different in terms of the characteristics of the runs of champions. And, of course proximity to target metrics can be composed to do these things in varying degrees, automatically, since the ASCII codes of letters etc. have bit values correlated to the sequence of the alphabet.])
kairosfocus
April 15, 2009 at 04:22 AM PST
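The tie-breaking question raised in the PPS above is easy to make concrete. Here is a small Python sketch of the three policies kairosfocus describes for choosing among candidates tied at the best proximity score; the policy names are mine, and this is an illustration of the idea rather than anyone's actual Weasel code.

import random

def pick_champion(parent, children, score, policy):
    candidates = children + [parent]
    best = max(score(c) for c in candidates)
    tied = [c for c in candidates if score(c) == best]
    if policy == "keep_incumbent":
        # Pass the current champion on unless a child strictly improves;
        # this policy produces NO substitution cases at all.
        return parent if score(parent) == best else random.choice(tied)
    if policy == "prefer_new":
        # Choose a tied phrase different from the champion whenever one exists;
        # this passes the maximum number of substitutions.
        different = [c for c in tied if c != parent]
        return random.choice(different) if different else parent
    # "lottery": a uniform draw among all tied candidates, the intermediate case.
    return random.choice(tied)

Runs of champions produced under the three policies would differ exactly as the comment says: no substitutions, maximal substitutions, or an intermediate trickle.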
PS: It would be useful for onlookers to play with the Atom implementation, where we can see events live for ourselves, generation by generation. Run the 4%, 50-pop version a few times. You will see that there are occasional reversions [colour goes back to black from reddish], reflecting a quasi-latched condition on such runs. There will be abundant cases of non-reversion to select "good" ["cumulative"] runs from too, i.e. the runs latch implicitly. In short, as the effective number of samples rises (here, through multiplied runs), we do see the appearance of substitutions etc., from the far skirt. Here is a case in point, on 4%, 50 pop/gen, after some runs to catch a case:
130. METHKNKS IT IS LIME A WEASEL
131. METHKNKS IT IS LIME A WEASEL
132. MRTHANKS IT IS LIKE A WEASEL
133. MBTHANKS IT IS LIKE A WEASEL
134. MBTHANKS IT IS LIKE A WEASEL
135. MBTHANKS IT IS LIKE A WEASEL
Notice the substitution effect. Implicit latching places a probabilistic barrier, not an absolute one. Some runs will latch, others will quasi-latch, as a result. And if your "good" runs are those with "cumulative" progress, you are going to be likely to showcase the former. Frequent reversions a la 1987 will appear as the likelihood of such substitutions rises, which will require more sample size [to be more likely to get skirt members in the sample] and/or higher mutation rates [to get more multi-letter mutations and fewer zero-letter mutation cases].
kairosfocus
April 15, 2009 at 01:56 AM PST
2] you have strongly suggested that the selections from TBW should show reversions if such were possible. You here confuse what is POSSIBLE with what is sufficiently probable to be likely to be credibly observable under relevant typical or likely to be selected circumstances, especially for a showcased "good" run of "cumulative selection." [Cf the example of the avalanche of rocks spontaneously forming the phrase "Welcome to Wales." The difference is the foundation stone of the 2nd law of thermodynamics in statistical mechanics. It is also the foundation of the issue posed by FSCI to the claimed spontaneous origin of life and major body plans. As outlined above.] And by the way, a single example shows that a possibility is real. AND, I have given the dynamical context in which that reality takes place: double mutations with substitution, which I have exemplified.

3] Rob, 365: is the erroneous interpretation of Dawkins' Weasel really "reasonable" and "natural"? Question-begging loaded assertion embedded in the question. (AKA the fallacy of the complex question.) The only decisive evidence that Weasel 1986 did not explicitly latch is credible code. Which, 23 years later, is not likely to be forthcoming; though it would be welcome. It is on preponderance of evidence based on trusting the reported statement of Mr Dawkins c 2000 that leads to the conclusion that the Weasel 1986 was most likely implicitly, not explicitly, latched. So far as the raw evidence goes, Weasel 1986 [thanks to Apollos' point that an explicitly latched program can also show reversions . . . ] can be accounted for on BOTH mechanisms. And, while you may wish to point out that those who created explicitly latched weasels thereafter tended to not bother with the step of clustering trial runs in groups and picking the "best of group" to promote to the next stage, that makes no MATERIAL difference to the explicitly latched case. (Not to mention, you are not addressing the force of the actual statements by Mr Dawkins circa 1986, as most recently excerpted at 285. Understanding Mr Dawkins to be describing an explicitly latched program is a reasonable and natural reading. Which was my point.) As for the idea that various runs of explicitly latched weasels differing from a given 1986 run in line 1 to line 2 make the programs materially different, that seems just a tad overdrawn. The Weasel run c 1986 is one run. The rest are similarly selected runs. They will naturally differ at points. What is crucially consistent is that runs of 40 or so and 60 or so gens to target are consistent with a good explicitly latched run. (And of course implicit latching will act in much the same way as explicit latching; at least so far as the run of champions is concerned. And, Joseph is right: the champions are the only real children per generation -- the rest are abortions. The champions are selected on mere proximity, not functionality.) On typos and the like on "nonsense phrases," there is little or no consequence one way or another.

4] GLF: Tone and substance are one thing . . . Tone has to do with serious moral issues: violation of privacy, slanderous conflation of Design thought and creationism and general uncivil conduct. Such are even more important and of much wider general impact than matters of substance on the particular narrow issue engaged, for uncivil conduct undermines the ability of our civilisation to thrive and prosper; i.e. its sustainability. (And yes, this is an allusion to Kant's Categorical imperative: uncivil conduct is civilisation-damaging conduct. Right now, Internet incivility is a growing menace to our civilisation.) Substance does have to do with facts and reasoning related thereto. So, the inference that by raising issues of "tone and substance," I have failed to address data is at best careless and revealing. And in fact I have already pointed out the specific regime of the distribution I have been addressing all along, i.e. as discussed above on observability of the zero change case, which has a key influence on the circumstances where the proximity filter applies and will advance while preserving previously correct letters. And, onlookers, I provided actual cases that illustrate and exemplify what I am talking about. I should think that these would count as "data," should they not? Especially, as the published runs of 1986 would be showcased "good" ones illustrative of the power of "cumulative selection." So good in fact that they show no reversions at all in 200 cases where letters go correct and have in principle the opportunity to revert, of 300 cases of changeable letters. Just that little bit TOO good . . .

5] new information like this . . . I addressed this case yesterday. I have given further details this morning, which should suffice for onlookers to see that the crucial point is that there is a regime where what I am speaking of will happen, with significant reproducibility; especially relative to the showcasing of "good" runs of "cumulative selection." [That is, a near synonym to the root ideas behind "ratcheting" and "latching."] As I have already shown from 234 on. So, you may indeed push the observed numbers sufficiently far that you have say 1000 runs * 100 member pop per gen * 50/gen typ ~ 5 mn observed mutants [including maybe 1/3 which will be zero change]. But that does not undermine that you will get the case of "good" runs of cumulative selection that will latch implicitly. As I have shown. GEM of TKI
kairosfocus
April 15, 2009 at 12:52 AM PST
Onlookers (And DK et al): We must set the above exchanges in context (as there is a significant tangential tendency in this issue): evolutionary materialism faces a critical challenge to account for the functionally specific, complex information in life, from credible first life [~ 600 - 1,000 k bits] on to major body plans [10's to 100's of M bits, dozens of times over]. Weasel, in the end, sadly, ducks that challenge, rather than answering it. For, it fails utterly to address the need to get to a credible threshold of functionality before hill-climbing mechanisms such as random variations plus natural selection can be applied. Worse, it reverts to artificial, foresighted, targetted selection as a "substitute" for NS, even though from the outset Mr Dawkins knew that this was "misleading." So, from the outset, Weasel is a misleading, question-begging icon of evolution. One that should have long since been retired to the hall of shame of such icons, next to the Haeckel embryo drawings and the Piltdown and Nebraska man fossils etc.

The sustained failure to address that key issue is thus the backdrop against which we need to assess the obsession of the Anti Evo advocates and fellow travellers with trying to show me wrong on the observation that Weasel c. 1986 evidently latched its o/p, and that there are two reasonable mechanisms for it, explicit latching [search is by the letter] and implicit latching [search is by the phrase, but as pop size, per letter mutation rates and the proximity to target filter interact, weasel latches]. Of these, on preponderance of evidence, implicit latching best accounts for Weasel c 1986, that evidently latches; and, Weasel c 1987 which does not. (To get to the latter, it is likely that the parameters listed were detuned enough that no-change cases become effectively unobservable or sufficiently so that we see regular reversions of correct letters, i.e. non-latching. For the former, so long as the per letter mutation rate is low enough that no-change cases are observable enough, and the pop size is below a threshold where the far harder to observe double change with substitution cases or substitutions with a third letter going correct are seen, then we will often enough see latching, then at the threshold quasi-latching, that appearance of such in a showcased "good" run of "cumulative selection" is not unlikely. As an idea on observability of the zero change "mutants," for 5% per letter mutation rate, 0.95^28 ~ 24% [4% giving 32%], for 10%, 0.9^28 ~ 5%, for 15% 0.85^28 ~ 1%, for 25% 0.75^28 ~ 0.03%, for 50%, 0.5^28 ~ 3.7*10^-7%. Thus, for instance: 0.03% of 1,000 ~ 0.3, while 24% of 50 ~ 12. That is sufficient, given the "nearest is picked as next champion" filter, the rest -- as Joseph aptly said -- being aborted. What is happening, visually, is that the more or less bell shaped distribution is being pulled away to the right from the origin as the odds of per letter mutation grow, so the 0-change case is increasingly becoming a tail member.)

In short, we have a molehill issue [on which Joseph and I are right] being made into a mountain, distracting attention from the basic fact that one has to get to the shores of Isle Improbable to then try to climb up to its peak, Mt Improbable. And, Isle Improbable is incredibly isolated in the Sea of Non-Function, so that if you do not have an intelligently prepared chart, it is hard indeed to find it by what amounts to random search. Now, on a few points that seem worth a further remark:

1] DK, 364: Dr. Elsberry's data show that "the number of runs [showing] any loss of a correct base from parent to best candidate in the following generation" — that is, where a case exhibiting reversion is the best choice — decreases as the population increases.

The highlighted findings relate to a regime of behaviour utterly distinct from the one I have addressed, as summarised just above and shown from 234 supra with reference to the start-point, namely start from the implicitly latched case then allow pop size to grow enough to make the substitution effect easily observable; while the no-change case is still likely to be present. Under those circumstances we will see latching, then quasi-latching, then disappearance of latching. And, if one seeks "good cases" of cumulative selection, one will showcase precisely that. So, even where on multiplying number of runs exceedingly -- say 5 mn total sample size so that 10% is 1/2 million of which 100 - 500 observations of reversions of one species or another are all less than 0.1% -- one sees that one is in a quasi-latched case, a selected "good" run will of course implicitly latch. I am also well aware of the fact that the binomial is a highly peaked distribution under interesting circumstances [the flipped coins case is used to illustrate statistical thermodynamics principles], so that it is hard to see the upper and lower far skirts once they are both sufficiently in evidence; i.e. the bulk overwhelms the skirt. (Thus, typical samples will tend to cluster to the mean. Multiplying the number of cases of runs is a way of greatly expanding effective sample size so that what is not likely to be evident in any one case becomes observable on the multiplication of cases, as just noted. For comparison to the real world of origin of life and of body plans, the UPB threshold is set off making things that are searched for on the gamut of the cosmos' search resources unobservable, i.e. about 1 in 10^150. First life and major body plans by that threshold are unobservable based on non-intelligent search strategies. Of this issue, Mr Dawkins c. 1986 decided that "single step" changes of odds about 1 in 10^40 could be dismissed and a cumulative change where the odds of any one letter going correct on random change are 1 in 27, substituted. And, he did so for cases where he knew that a search of 1 in 10^40 was infeasible for his search resources. In short the key question is begged from the outset of creating Weasel type programs.) [ . . . ]
kairosfocus
April 15, 2009 at 12:51 AM PST
I think R0b just counted the horse's teeth :)
Alan Fox
April 14, 2009 at 02:37 PM PST
Wow R0b! Very interesting post!
Alan Fox
April 14, 2009 at 02:35 PM PST
kairosfocus:
Per Weasel 1986, absent the testimony as reported to us in recent days, explicit latching is a very reasonable understanding. (Indeed, the Monash University biologists naturally understood it that way until Mr Elsberry "corrected" them.)
You have made this claim repeatedly in this thread, but is the erroneous interpretation of Dawkins' Weasel really "reasonable" and "natural"? Let's look a little closer. Three parties have published the erroneous interpretation:
- Truman, in 12/1998
- Dembski, starting (I think) in 9/1999
- Heng/Green at Monash, in 2007
The striking thing is that all three parties have contradicted Dawkins on the same points in the same way, and it's not limited to the latching question. Let's compare Dawkins' description of Weasel to the erroneous algorithm, which I'll call TDMHG (Truman/Dembski/Marks/Heng/Green).
- Dawkins says that multiple progeny are produced in every generation. In TDMHG, only one child is produced.
- Dawkins says that a winning phrase is chosen from each generation. This makes no sense in TDMHG, since each generation consists of only one phrase.
- Dawkins repeatedly says that he's illustrating cumulative selection. In TDMHG, there is no competition and no selection.
- Dawkins says that the sequences reproduce "with a certain chance of random error - 'mutation' - in the copying". In TDMHG, incorrect letters are guaranteed to mutate, while correct letters are guaranteed to not mutate.
- The most obvious difference is in the respective outputs. Dawkins reports the first two generations from one of his runs as follows:
WDLDMNLT DTJBKWIRZREZLMQCO P
WDLTMNLT DTJBSWIRZREZLMQCO P
(The second 'D' in the first sequence is omitted in TBW -- a typo.) In contrast, here are the first two generations as reported by Truman:
WDLTMNLT DTJBKWIRZREZLMQCO P
SEE SNXD ETHAIYGSWCWVFCQCQMZ
and from a run of Heng/Green's applet:
tynsaue voledpljhuradvlyatvla
cqgrnfuskiprnorcasm vpyvbcpyp
and from a run of Marks/Dembski's script:
YMIHOOSYFKLTT JVZUHTSKMEDONZ
OPNHSJLWTKBRHQY CQDIJJOEPGLC
Dawkins' 1st two generations are almost identical, while the first two generations in TDMHG are almost completely different. How, then, is it "reasonable" and "natural" to conclude that TDMHG is the same as Dawkins' algorithm? How could all three parties independently come up with the same algorithm and fail to notice that its description and output are manifestly different from Dawkins'? My guess is that Truman made the original gaffe, Dembski followed Truman, and Heng/Green followed Dembski & Truman. (BTW, I've decompiled the applet from Monash and verified that the algorithm is the same as the one implemented by the EvoInfo Lab, both of which match the description and results reported by Truman 1998 and the description by Dembski 1999.)
R0b
April 14, 2009 at 11:58 AM PST
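R0b's contrast is easy to render in code. Below is a short Python sketch of the two algorithms as he describes them: a Dawkins-style weasel (many progeny per generation, the one closest to the target selected) and the TDMHG variant (a single child per generation, correct letters frozen, incorrect letters forced to change). Both are reconstructions from the descriptions in this thread, not anyone's published source.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def dawkins_step(parent, pop=100, rate=0.04):
    # Many mutated progeny; the one closest to the target wins the generation.
    def mutate(p):
        return "".join(random.choice(CHARS) if random.random() < rate else c
                       for c in p)
    children = [mutate(parent) for _ in range(pop)]
    return max(children, key=lambda c: sum(a == b for a, b in zip(c, TARGET)))

def tdmhg_step(parent):
    # One child; correct letters never mutate, incorrect letters always do.
    return "".join(c if c == t else random.choice(CHARS.replace(c, ""))
                   for c, t in zip(parent, TARGET))

start = "".join(random.choice(CHARS) for _ in TARGET)
print("start:  ", start)
print("Dawkins:", dawkins_step(start))  # nearly identical to the start phrase
print("TDMHG:  ", tdmhg_step(start))    # almost completely different

This reproduces the diagnostic R0b points to: successive Dawkins generations differ by a letter or so, while successive TDMHG generations share only the already-correct letters.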
kairosfocus, what you wrote was this:
But equally, from the statistics involved, as the population size grows enough relative to mutation rates, skirts will begin to tell, and the program will first quasi-latch, with occasional reversions, then eventually as the far skirt begins to dominate the filter on sufficient population per the per letter mutation rate, we will see multiple reversions and the like, i.e. latching vanishes.
Dr. Elsberry's data show that "the number of runs [showing] any loss of a correct base from parent to best candidate in the following generation" -- that is, where a case exhibiting reversion is the best choice -- decreases as the population increases. To the extent that I can slog through your prose, you have said that it increases. Further, you have strongly suggested that the selections from TBW should show reversions if such were possible. Here too, the data say otherwise. Your comment [234], in which you place so much confidence, shows single runs. Dr. Elsberry presents the results from a total of 15,000 runs. You refer to Dr. Elsberry's program as "Weasel-like," but I think there is a much more reasonable candidate for that label.
David Kellogg
April 14, 2009 at 08:18 AM PST
hazel:
Is it true or not, in your opinion, that in Weasel it is possible for one of the children in a generation who isn’t the best fit and doesn’t get selected to have a letter reversal?
There is only one child per generation. The rest are abortions. And what those abortions were is irrelevant as they were not discussed, described nor illustrated in TBW.
Joseph
April 14, 2009 at 07:13 AM PST
Onlookers (and Mr Kellogg): You can simply cf the runs above from 234 on in this thread, to see that the point I actually made is valid; being logically-dynamically credible and empirically demonstrated. (I don't doubt that you may find all sorts of oddities in behaviour of Weasel-like pgms, as set up to do all sorts of things by those who write them. To cite such to "prove" that I am wrong in what I assert about a specific data set published in TBW 1986, and as shown to be further plausible on the dynamics of substitution required to get a reversion, and as further shown by recreative examples, is simply out of order. Nor have I EVER "assert[ed]" what Mr Elsberry attributes to me in the cite above: [kf]'s assertion that increasing population size N increased the likelihood that a set of three runs sampled at ten-generation intervals would show any loss of a correct base in there. This is the strawman fallacy at its worst, putting words that don't belong there in my mouth. What I have said, with reasons given, is that as mutation rate, pop size and choose- slightest- increment- in- proximity filter are detuned from a latched condition as I have shown, there will appear reversions in a context of substitutions. [Why: (i) a significant proportion of "mutants" in a given gen are no-change; (ii) so a selected new champion must have at least that proximity, (iii) if a reversion occurs it must be compensated for, (iv) thus we will see reversion + substitution. Also, (v) to be likely to see far-tail pop members like in iii just past, you need to push up pop numbers sufficiently that these will begin to appear as observable, i.e. we see that on LOLN, (vi) N*p -> ~1 from the low side. Simple logic based on acting dynamics and patterns.] This, as anyone can see by scrolling up to 234 on, I have specifically empirically demonstrated with instances on successive runs of Atom's recreation of a proximity filter Weasel. And yes, the above are successive runs starting from the out of the box default condition.)

Moreover, the underlying issue with Weasel and kin is that they duck the fundamental issue that targetted search without reference to a realistic threshold of functionality is not relevant to the actual challenge of getting to first life that credibly has 600 kbits of DNA and onward to dozens of novel body plans that require 10's - 100's of M bits.

As for Mr Elsberry, sadly, the latest excerpts from his fulminations abundantly show from both his tone and substance that men of such ilk are uncivil and damaging to our culture and institutions of science and science education. We would be prudent to heed the caution that to be forewarned is to be forearmed. "A word to the wise . . . " GEM of TKI
kairosfocus
April 14, 2009 at 06:37 AM PST
kairosfocus, I'm relaying a comment by Dr. Elsberry. Some of the formatting is lost, and kairosfocus's proper name has been removed. The link is provided at the end for those who need the original formatting. The following is from Dr. Elsberry: For once, [kf] had something useful to contribute to the discussion when he mentioned that models that didn't match up to the empirical data were bad. So to see whether [kf]'s assertion held up, that increasing population size N increased the likelihood that a set of three runs sampled at ten-generation intervals would show any loss of a correct base in there, I coded a faster "weasel" in C++ and used it to gather statistics on sets of 1000 runs per parameter set. What gets reported is the number of runs with the same settings, the population size, the mutation rate, the minimum number of generations to convergence, the maximum number of generations to convergence, the average number of generations to convergence, the number of runs where any loss of a correct base from parent to best candidate in the following generation was seen, and, finally, the number of runs where a correct base was lost when comparing the [0,1,10,...,floor(gen/10)] best candidates sequentially. That last figure divided by the number of runs is the proportion of runs expected to show a loss of a correct base in the sort of output Dawkins put in print in 1986.
Code Sample
Runs=1000, PopSize=00100, MutRate=0.03704, Gen. min=038, Gen. max=231, Gen. avg.=78.5, Runs with losses=273, Runs with dlosses=189
Runs=1000, PopSize=00200, MutRate=0.03704, Gen. min=028, Gen. max=136, Gen. avg.=48.8, Runs with losses=149, Runs with dlosses=86
Runs=1000, PopSize=00250, MutRate=0.03704, Gen. min=026, Gen. max=085, Gen. avg.=42.6, Runs with losses=124, Runs with dlosses=52
Runs=1000, PopSize=00300, MutRate=0.03704, Gen. min=024, Gen. max=091, Gen. avg.=39.6, Runs with losses=131, Runs with dlosses=67
Runs=1000, PopSize=01000, MutRate=0.03704, Gen. min=020, Gen. max=041, Gen. avg.=26.2, Runs with losses=72, Runs with dlosses=20
Runs=1000, PopSize=10000, MutRate=0.03704, Gen. min=015, Gen. max=022, Gen. avg.=18.2, Runs with losses=41, Runs with dlosses=6
As one can see, the proportion of expected times a reduced output selection like that used by Dawkins in 1986 should show a loss of a correct base is nowhere over 0.2 (less than 1 in 5), and decreases with increasing population size N, exactly the opposite of [kf]'s assertion. Nor do large population sizes fit with the generation numbers reported by Dawkins. By N=1000, the maximum number of generations is equal to the smallest reported generation in a completed run by Dawkins, and all of Dawkins' runs are outside the range when N=10000. It appears from the distribution of generations that 250 is a likely estimate for the population size N that may have been used by Dawkins given the three reported runs with 41, 43, and 64 generations to convergence. At that population size, the result above shows that the estimate of the proportion of runs that would yield a visible loss of a correct base in the best candidates reported in the fashion Dawkins used is only 0.052, or just over 1 in 20 runs. Even interpolating between the figures for the adjacent population sizes does not increase that estimate by enough to make anyone suspect that a set of three outputs would necessarily show the loss of a correct base.
I should note that Dawkins only reported the best candidate from generation 1 once in The Blind Watchmaker, but I have made that more expansive pattern the basis for my stats, thus being more generous to the assertion made by [kf] than strictly required. What about the effect of mutation rate? Fixing N=250 and varying the mutation rate gives the following results:
Code Sample
Runs=1000, PopSize=00250, MutRate=0.02000, Gen. min=027, Gen. max=106, Gen. avg.=48.8, Runs with losses=44, Runs with dlosses=29
Runs=1000, PopSize=00250, MutRate=0.03000, Gen. min=028, Gen. max=105, Gen. avg.=44.0, Runs with losses=101, Runs with dlosses=51
Runs=1000, PopSize=00250, MutRate=0.04000, Gen. min=025, Gen. max=140, Gen. avg.=42.6, Runs with losses=152, Runs with dlosses=69
Runs=1000, PopSize=00250, MutRate=0.05000, Gen. min=024, Gen. max=096, Gen. avg.=42.7, Runs with losses=217, Runs with dlosses=98
Runs=1000, PopSize=00250, MutRate=0.06000, Gen. min=025, Gen. max=101, Gen. avg.=43.6, Runs with losses=303, Runs with dlosses=165
Runs=1000, PopSize=00250, MutRate=0.07000, Gen. min=025, Gen. max=135, Gen. avg.=46.0, Runs with losses=379, Runs with dlosses=214
Runs=1000, PopSize=00250, MutRate=0.08000, Gen. min=024, Gen. max=132, Gen. avg.=48.7, Runs with losses=537, Runs with dlosses=309
Runs=1000, PopSize=00250, MutRate=0.09000, Gen. min=024, Gen. max=131, Gen. avg.=52.9, Runs with losses=614, Runs with dlosses=402
Runs=1000, PopSize=00250, MutRate=0.10000, Gen. min=024, Gen. max=156, Gen. avg.=58.0, Runs with losses=730, Runs with dlosses=514
While Dawkins does not directly address mutation rate in his discussion of "weasel", he does do so in relation to his biomorph program, noting that a mutation rate that produces one altered gene per offspring is "very high" and "unbiological". This indicates that Dawkins would have been unlikely to have picked a mutation rate over 1/28 ~= 0.0357 in "weasel" runs. The runs with 0.02 <= u <= 0.037 all show the expected proportion of runs showing a loss visible in summary output of 1 in 19 or less. Yes, empirical data counts, [kf]. Too bad you didn't bother to check before mouthing off in ignorance. **** link
David Kellogg
April 14, 2009 at 06:00 AM PST
Hazel: First, let's look at the relevant passage on slightest increment towards the target phrase in its context again:
The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . .
We are operating in a digital context, so the slightest increment in a "nonsense phrase" is the letter. Multiply by a case where -- per the 1986 runs as published (cf. simulation runs above, too) -- no-change wins the generational champion contest about 1/2 the time, and single step changes dominate the rest. That is, we know that the population of mutant "nonsense phrases" (i.e. functionality is not being factored in . . .) has a high proportion of 0-change and single change cases. Of these, some will sometimes have an advantageous change and so we see about half the time, no change wins, and most of the rest, one letter increment to target wins. Beyond that, when the mut rate, sampled per gen pop size rise enough, we occasionally have substitution double mutations showing up, and other rarer outcomes might win very occasionally. So, we see implicit latching under certain circumstances, and beyond it a gradual relaxation of latching. AND, that the slightest increment to target is in fact the letter, which under implicit latching will be preserved by overwhelming force of probabilities. You raised a second point, claiming "You don't get the simpler bit." Sorry, but I cannot let this one pass without corrective comment. For, the overall context is a very loaded and polarised one, recall Mr Dawkins' "ignorant, stupid, insane or wicked" jibe. And, in the context of this particular subject, I have been personally subjected to extremely uncivil -- and to date unacknowledged and unapologised for -- personal attacks by too many on your side of the issue, including violation of privacy, gleeful citation of attacks by a journalist blood slandering Christians as morally equivalent to islamIST terrorists, and in that context enabling public lewdness.

1 --> First, the issue is not that I do not understand what is going on with weasel 1986: above in this thread we can see how my analysis and predictions (which were derided and worse when made) were substantiated by current simulations; plainly much to the discomfort of those who had earlier so gleefully derided what they viewed as my ignorance and gross errors.

2 --> Nor do I fail to understand what you are saying on which of the two approaches is "simpler" i.e. explicit or implicit latching. I DISAGREE, for given reasons.

3 --> One of those reasons is, quite simply, that the very concept of implicit latching is plainly much harder to understand than explicit latching -- as the exchanges for weeks have shown. (It is precisely because my background leads me to understand the issue of dominance of a distribution's tail in physical behaviour that I was poised to see that this could happen here, in the first place. For instance, think about the emergence of conduction in a semiconductor or an insulator, e.g. what happens to its electrical conductivity as glass is heated up, due to a small proportion of high energy electrons jumping the band gap from valence to conduction bands.)

4 --> Linked to that, a core issue in design theory is that we have discrete state entities that form configuration spaces, in which islands of function or other special interest are deeply isolated. So, we see the challenge of search spaces and overwhelming improbability as an effective barrier. (Again, this ties into statistical thermodynamics, the underlying principle of the second law, and phase space concepts. Cf. App 1 of my always linked.)

5 --> In this context, we can look again at the evidence: Mr Dawkins presented excerpts of runs, mostly at the tenth generations, with some 200+ of 300+ letters in a position to revert from correct status refusing to revert in the samples. With no counter-instances.

6 --> You will doubtless recall how stoutly my use of the law of large numbers and the associated issue of fluctuations was resisted, and even derided, in inferring from the evidence that this is strong empirical evidence of latching. (There was even an attempt to assert that the sample size was too small to make such a conclusion; some even going so far as to assert that since 6 or 8 or so generations were samples, we were dealing with small samples. [Their silence over the past several days in the face of the current runs is sadly eloquent, on both substance and attitude.])

7 --> Further to this, we have Mr Dawkins' remarks on cumulative selection, proximity filtering on the slightest increment to target, etc etc. So, it is natural to understand him as reading the target on a letterwise basis and making a letterwise random search such that once a letter hits the target, it is explicitly locked down. And, given the "toy example" context, that is a legitimate understanding. [I object to the toy example, but because it subverts the key Hoylean challenge: threshold of function, giving a very misleading impression of the probability of getting to the shores of islands of function.]

8 --> Moreover, once you are doing a distance to target metric on letters, you will naturally have the data in hand to latch letters, making a shadow register with certain letter positions masked off. It is then trivial to scan the phrase to be mutated letterwise and insert the comparison with a mask register, to then see if the case allows the letter to be subjected to random mutation at a probability of say 4%. (That is what flag registers in CPUs are for and what branching in programs is about.)

9 --> So, explicit latching is natural to do on reading the published o/p data and commentary context, and is easy to conceive programmatically, as not only Mr Royal Truman [odd, I had always thought this was a pseudonym, looks like he must be a Jamaican or something like that . . . ] and Messrs Dembski and Marks did, but also the Monash U folks. It is also RELATIVELY justifiable on a toy example frame of thought. [It is not justifiable in the context of its misleading nature, especially as an exercise in public education.]

10 --> By contrast, implicit latching is far subtler, and requires co-tuning of pop per gen, mutation rates and proximity filter. (I in fact think that the published runs in TBW etc c. 1986 were what was then thought by Mr Dawkins to be "good" runs to showcase, being especially illustrative of "cumulative" selection. But, the very fact of latching being evident was a signpost highlighting the fundamental flaw of Weasel: it is targetted search that ducks rather than addresses fairly the threshold of complexity issue raised by Mr Hoyle and others.)

11 --> In that context, it is on the strength of the reported statement of 2000 by Mr Dawkins that he did not explicitly latch Weasel, that I have concluded that the best explanation on preponderance of the overall evidence is implicit latching.

12 --> But the bottomline is clear: Weasel is fundamentally and irretrievably misleading and should never have been used as an example of the power of mutation and selection to achieve complex functionality step by step.

And, until there is a solid answer to the issue of getting to the shorelines of islands of function, Weasel and kin are question-begging exercises, not serious evidence of the plausibility of materialistic evolution to account for the origin of life and its body plan level diversity. GEM of TKI
kairosfocus
April 14, 2009 at 02:35 AM PST
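Point 8 above describes explicit latching in hardware terms. For concreteness, here is a minimal Python sketch of that mechanism, with a Boolean list standing in for the shadow register kairosfocus describes; it illustrates the concept, not any published Weasel implementation.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def latched_step(phrase, mask, rate=0.04):
    # mask[i] is True once position i has matched the target; latched
    # positions are masked off from mutation, so they can never revert.
    out = []
    for i, c in enumerate(phrase):
        if not mask[i] and random.random() < rate:
            c = random.choice(CHARS)
        mask[i] = mask[i] or (c == TARGET[i])
        out.append(c)
    return "".join(out)

phrase = "".join(random.choice(CHARS) for _ in TARGET)
mask = [c == t for c, t in zip(phrase, TARGET)]
gens = 0
while phrase != TARGET:
    phrase = latched_step(phrase, mask)
    gens += 1
print(f"explicitly latched run converged in {gens} generations")

Every run of such a program shows the "cumulative" appearance of the 1986 output, since a reversion is impossible by construction.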
Atom @357
We select for the Anagram set, which by definition contains the target, rather than for the target directly
Thank you for the explanation. My next question is, why the heck are you doing that? Does this model some real world process? JJ
JayM
April 13, 2009 at 12:27 PM PST
Correction to my last post: You will eventually find it, usually with better than unassisted blind search performance, since the target IS an anagram,...
Atom
April 13, 2009 at 09:12 AM PST
JayM, If you read my preceding post, you see what results I'm referring to. We select for the Anagram set, which by definition contains the target, rather than for the target directly. We expect, and the results confirm, that this will lead to performance that is better (at finding the target itself) than blind unassisted search, yet is not as good as the Proximity Reward matrix, which encodes more target specific information than the Anagram Reward matrix does. Sorry if my "results" posts were a little confusing, but people (including myself) aren't used to thinking of fitness functions in terms of how much information about the target they encode. The results speak for themselves. And feel free to run some experiments yourself and post your results. That's what the GUI is for, so that everyone can code their own fitness functions, compare the performance of different reward matrices and run their own experiments. Later you guys, I'm off to the Four Corners. KF, thanks and I'll let her know! Atom
PS Final query count updated (I left it running....) 774,366,900 queries with the target not found. (It was found repeatedly for "BITS" and "HELLO" with much better performance than unassisted blind search, so don't complain "If you're selecting for anagrams, you'll never find the target!" That is incorrect. You will eventually it, since the target IS an anagram, and since you are including some target specific information, namely what letters the target contains. That target specific information is the basis for increased search performance.)
Atom
April 13, 2009 at 08:56 AM PST
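For readers trying to follow the anagram discussion, here is a rough Python sketch of the two reward schemes Atom is comparing: a proximity reward, which scores letterwise closeness to the target itself, and an anagram reward, which scores only how much of the target's letter multiset a phrase uses. The function names are mine, and this illustrates the idea rather than the GUI's actual reward matrices.

from collections import Counter

TARGET = "METHINKS IT IS LIKE A WEASEL"

def proximity_reward(phrase):
    # Encodes position-by-position information about the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def anagram_reward(phrase):
    # Encodes only which letters the target contains, not where they go.
    need, have = Counter(TARGET), Counter(phrase)
    return sum(min(have[c], n) for c, n in need.items())

scrambled = "WEASEL A LIKE IS IT METHINKS"
print(proximity_reward(scrambled))  # low: few letters in the right place
print(anagram_reward(scrambled))    # maximal: same letters, full anagram score

Because many phrases besides the target score maximally under the anagram reward, a search guided by it narrows the space less than a proximity-guided search does, which fits the intermediate performance Atom reports.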
Joseph, here's a simple question: Is it true or not, in your opinion, that in Weasel it is possible for one of the children in a generation who isn't the best fit and doesn't get selected to have a letter reversal?
hazel
April 13, 2009 at 07:44 AM PST
hazel:
The phrase “however slightly” refers to choosing the best child out of the whole population of children.
It's called selection.
It doesn’t say that some of the children may in fact not be an improvement, or may even be less fit.
Selection. Then when one adds the word "cumulative" to it, and provides a target, the inference of ratcheting is clear.
Joseph
April 13, 2009 at 06:43 AM PST