
The Simulation Wars


I’m currently writing an essay on computational vs. biological evolution. The applicability of computational evolution to biological evolution tends to be suspect because one can cook the simulations to obtain any desired result. Still, some of these evolutionary simulations seem more faithful to biological reality than others. Christoph Adami’s AVIDA, Tom Schneider’s ev, and Tom Ray’s Tierra fall on the “less than faithful” side of this divide. On the “reasonably faithful” side I would place the following three:

Mendel’s Accountant: mendelsaccount.sourceforge.net

MutationWorks: www.mutationworks.com

MESA: www.iscid.org/mesa

Comments
One more thing, kairosfocus:
So, from the outset, Weasel is a misleading, question-begging icon of evolution.
Who made it an icon of evolution? Do you think that evolutionary biologists give a fig about Weasel? It's a BASIC program in a 1986 pop science book, for crying out loud. It's the IDists who keep propping it up, which they're welcome to do, but they should at least prop up the right algorithm. R0b
kf, in 370, writes:
And, while you may wish to point out that those who created explicitly latched weasels thereafter tended to not bother with the step of clustering trial runs in groups and picking the "best of group" to promote to the next stage, that makes no MATERIAL difference to the explicitly latched case.
1. Good. kf answered my question: he does know that Dembski et al. just used one child per generation, and did not pick the best child out of a population. 2. Dismissing this as "not bothering" to select from a population because it makes no "MATERIAL difference" is rather mind-boggling. Selection from a population is the MAIN point of the exercise. I'm rather flabbergasted to find this out. hazel
I think kairosfocus’s intent is that we suffer brain damage as we beat our heads against our desks.
I suspect you are right. It seems we are playing a game of "Last Man Standing"! :) Alan Fox
kairosfocus:
The latter for instance runs very long, very fast with a LOT of reversions and re-advances, producing an odd winking effect. The former very credibly latches or at the very least quasi-latches, with implicit latching being the most plausible mechanism. The latter simply does not show latching type behaviour.
I think kairosfocus's intent is that we suffer brain damage as we beat our heads against our desks. R0b
kairosfocus:
Question-begging loaded assertion embedded in the question.
We all, including yourself, agree that the "explicit latching" interpretation is erroneous, and yet it's question-begging for me to take that as a given? My apologies. I'll henceforth refer to it as the allegedly erroneous interpretation.
And, while you may wish to point out that those who created explicitly latched weasels thereafter tended to not bother with the step of clustering trial runs in groups and picking the "best of group" to promote to the next stage, that makes no MATERIAL difference to the explicitly latched case.
You offered Heng/Green's "explicit latching" as evidence that the mistake was reasonable and natural. This is a valid data point only if they made the mistake independent of Truman and Dembski. I submit that they did not. The point of Weasel was to illustrate cumulative selection, as Dawkins reminds us throughout his description of the program; and he describes the algorithm as selecting a winner from the progeny in each generation. Are we to suppose that Truman, Dembski, and Heng/Green all independently missed the point of Weasel and supposed that it involved no selection? And that they all independently came up with the idea that incorrect letters should mutate every time, in obvious contradiction to Dawkins' output? Isn't it more likely that Dembski simply accepted Truman's characterization, and Heng/Green followed suit?
Not to mention, you are not addressing the force of the actual statements by Mr Dawkins circa 1986, as most recently excerpted at 285. Understanding Mr Dawkins to be describing an explicitly latched program is a reasonable and natural reading.
You quoted a lot of Dawkins in 285. Which of those statements makes "understanding Mr Dawkins to be describing an explicitly latched program" a "reasonable and natural reading"?
As for the idea that various runs of explicitly latched weasels that differ from a given 1986 run in line 1 to line 2 makes the programs materially different, that seems just a tad overdrawn. The Weasel run c 1986 is one run. The rest are similarly selected runs. They will naturally differ at points.
The first two generations of Dawkins' output are almost identical. In Truman etc.'s output, we see the exact opposite. Are you saying that this discrepancy is due to undersampling (we looked at only one run of each version)? If so, did Dawkins' two sequences just happen to be almost identical, or did Truman etc.'s two sequences just happen to be completely different? As to your repeated criticism that Weasel ducks the issues of abiogenesis and isolated islands of functionality, you're absolutely right. It ducks them in the same way that software ducks the issue of electrical power. Weasel assumes the existence of life and gradual evolutionary pathways. Maybe you think that the latter doesn't exist. Maybe you think that a species can't, over time, gradually improve its speed or camouflage or vision. But if you don't subscribe to this brand of selective hyperskepticism, and you're willing to admit the existence of life and of at least some gradual evolutionary pathways, then you're admitting that Weasel has some point of analogy to biology. R0b
From Wesley Elsberry, Ph.D., quoted from here: Since [KF] seems to forget what he has claimed from time to time, let's see what he said that he is on the hook for:
Under other cases, as the pop size and mutation rates make multiple letter mutations more and more likely, letter substitutions and multiple new correct letter champions will emerge and begin to dominate the runs. This, because the filter that selects the champions rewards mere proximity without reference to function; as Mr Dawkins so explicitly stated. So, to put up a de-tuned case as if it were the sort of co-tuned latching case as we see in the 1986 runs, is a strawman fallacy.
[KF] was claiming there that runs that demonstrate the property of summary output that shows no loss of a correct base have to have the population size and mutation rate "tuned" for that outcome. He also claims that the expectation changes as one runs up the population size and mutation rate. Certainly [KF] is correct that increasing the mutation rate increases the proportion of outcomes where the summary output would show loss of a correct base somewhere. But [KF] is completely wrong that increasing the population size will also do that. Notice also that [KF] is concerned about what applies to the 1986 results. This means that we should pay attention to all the clues given concerning those results, including the distribution of generations to convergence reported there. Those will exclude broad ranges of parameter settings as not matching the expected values, and make other parameter settings more likely. The data I've been presenting is relevant to these various claims. The claim of "tuning" of parameters implies that there is a narrow region where one finds an expectation that summary output for three runs would not show loss of a correct base. Just to stave off foaming at the mouth on [KF]'s part, here is an expression of his giving just that:
Under reasonably accessible co-tuning of program mutation rates, population sizes and this filter, we will see that once a letter goes correct, from generation to generation, the champions will preserve the correct letters due to that co-tuning.
But the data show no such "tuning" was ever required. Population size as a variable shows that increasing population size leads to a lowered expectation that summary output will show the loss of a correct base, and that also holds for sequentially considered best candidates from the generations of runs. I presented some results yesterday showing that. Here are some more "weasel" results at 10000 runs per parameter set. Earlier, Gordon Mullings seemed to often use a mutation rate of 0.05 as a basis for discussion, so these use that relatively high mutation rate and vary the population size.
Runs=10000, PopSize=00050, MutRate=0.05000, Gen. min=044, Gen. max=566, Gen sum=1394077, Gen. avg.=139.4, Runs with losses=6202, Runs with dlosses=5351
Runs=10000, PopSize=00100, MutRate=0.05000, Gen. min=032, Gen. max=237, Gen sum=782130, Gen. avg.=78.2, Runs with losses=3945, Runs with dlosses=2841
Runs=10000, PopSize=00200, MutRate=0.05000, Gen. min=024, Gen. max=138, Gen sum=484796, Gen. avg.=48.5, Runs with losses=2481, Runs with dlosses=1364
Runs=10000, PopSize=00250, MutRate=0.05000, Gen. min=023, Gen. max=120, Gen sum=427161, Gen. avg.=42.7, Runs with losses=2181, Runs with dlosses=1073
Runs=10000, PopSize=00300, MutRate=0.05000, Gen. min=022, Gen. max=099, Gen sum=389581, Gen. avg.=39.0, Runs with losses=1974, Runs with dlosses=920
Runs=10000, PopSize=00500, MutRate=0.05000, Gen. min=018, Gen. max=066, Gen sum=312730, Gen. avg.=31.3, Runs with losses=1562, Runs with dlosses=557
Runs=10000, PopSize=01000, MutRate=0.05000, Gen. min=017, Gen. max=042, Gen sum=251991, Gen. avg.=25.2, Runs with losses=1214, Runs with dlosses=340
Runs=10000, PopSize=02000, MutRate=0.05000, Gen. min=016, Gen. max=032, Gen sum=216359, Gen. avg.=21.6, Runs with losses=1016, Runs with dlosses=226
Runs=10000, PopSize=10000, MutRate=0.05000, Gen. min=013, Gen. max=021, Gen sum=172933, Gen. avg.=17.3, Runs with losses=664, Runs with dlosses=94
Even with the high mutation rate of 0.05, where the distribution of generations most closely matches the three reported runs by Dawkins, the odds of any one result showing loss of a correct base are always less than 1 in 4. In order to get near having even an expectation that 1 in 2 summary outputs would show loss of a correct base, one would have to couple the relatively high 0.05 mutation rate with a population size in the 50s. Here are some runs to try to find the 1 in 2 expectation population size for mutation rate at 0.05:
Runs=1000, PopSize=00040, MutRate=0.05000, Gen. min=062, Gen. max=532, Gen sum=175140, Gen. avg.=175.1, Runs with losses=695, Runs with dlosses=618
Runs=1000, PopSize=00050, MutRate=0.05000, Gen. min=047, Gen. max=426, Gen sum=139932, Gen. avg.=139.9, Runs with losses=636, Runs with dlosses=555
Runs=1000, PopSize=00052, MutRate=0.05000, Gen. min=052, Gen. max=440, Gen sum=131848, Gen. avg.=131.8, Runs with losses=590, Runs with dlosses=508
Runs=1000, PopSize=00054, MutRate=0.05000, Gen. min=045, Gen. max=364, Gen sum=131937, Gen. avg.=131.9, Runs with losses=584, Runs with dlosses=484
Runs=1000, PopSize=00056, MutRate=0.05000, Gen. min=050, Gen. max=485, Gen sum=124665, Gen. avg.=124.7, Runs with losses=563, Runs with dlosses=472
Runs=1000, PopSize=00058, MutRate=0.05000, Gen. min=050, Gen. max=343, Gen sum=122644, Gen. avg.=122.6, Runs with losses=584, Runs with dlosses=472
Runs=1000, PopSize=00060, MutRate=0.05000, Gen. min=050, Gen. max=292, Gen sum=116839, Gen. avg.=116.8, Runs with losses=565, Runs with dlosses=465
It looks to be bracketed by population sizes of 52 and 54 when the relatively high mutation rate of 0.05 is applied. Notice the average generations required and how it is over twice the largest number of generations reported by Dawkins for his three runs in 1986, making it unreasonable to assert that Dawkins' runs might have used such a set of parameters. For a more reasonable mutation rate of 1/28 ~= 0.0357, here is data showing where we would expect 1 in 2 summary outputs to demonstrate loss of a correct base:
Runs=10000, PopSize=00028, MutRate=0.03571, Gen. min=074, Gen. max=834, Gen sum=2398174, Gen. avg.=239.8, Runs with losses=5931, Runs with dlosses=5413
Runs=10000, PopSize=00030, MutRate=0.03571, Gen. min=059, Gen. max=804, Gen sum=2228602, Gen. avg.=222.9, Runs with losses=5631, Runs with dlosses=5076
Runs=10000, PopSize=00032, MutRate=0.03571, Gen. min=072, Gen. max=722, Gen sum=2110742, Gen. avg.=211.1, Runs with losses=5463, Runs with dlosses=4896
To get to an expectation as low as 1 in 8 that three summary outputs should NOT show loss of a correct base using a reasonable mutation rate, the population size has to be about 30. What does that do to the distribution of generations to convergence? It shows an average generations to convergence over three times as high as the longest run Dawkins reported in 1986. Mullings again had it wrong: the results reported by Dawkins in 1986 do not admit of a set of conditions that would lead us to expect three summary outputs to necessarily show loss of a correct base. [KF]:
In certain cases, the latching of the letters is practically all but certain. This is what on preponderance of evidence happened in 1986 in ch 3 of TBW, and in the NS run. Indeed, there we can see that of 300+ positions that could change, 200+ show that letters, once correct, stay that way, and none are seen to revert. Such a large sample provided by the man who in the same context exults in how progress is "cumulative," is clearly representative.
The issue is that there is no "latching"; "latching" requires a mechanism to protect correct bases from mutation. [KF] has gone to a lot of rhetorical trouble to make it appear that he has not had to retreat from earlier claims that correct bases were protected from mutation and that that protection was what allowed "weasel" to converge. Nor is it so that "weasel" preserves correct bases only "in certain cases", which is just another way [KF] expresses his bizarre "tuning" argument. Summary output from three runs is not "a large sample". Output from a thousand or more runs is a large sample, and those large samples show that it is the case where one would expect summary output to show loss of a correct base that has to be carefully tuned, requiring higher mutation rates and small population sizes, both of which are decidedly contrary to the situation that applies in biology. [KF]:
In short, implicit latching. AND in your case, with tearaway run to the target through multiple beneficial mutations on a probabilistically and empirically implausible model. [Think about the skirts of the mutations distribution and what happens with 500 iterations per generation of 5% per letter odds of mutation.]
What about population size 500 and mutation rate 0.05? The data given above show that it is unremarkable, simply another point intermediate in expectation between the smaller and larger population sizes that bracket it, and in no way showing that "skirts" on a distribution magically make the expectation of loss of a correct base in summary output go up with population size. It simply supports the case that such an expectation goes down as population size increases. Biological population sizes are only rarely so small as the numbers that we are talking about here. And nowhere are biological mutation rates as high as what we are talking about here. Empirically, choosing a small mutation rate, one that yields an expectation that a daughter copy will have one or fewer mutations, is perfectly reasonable. Empirically, choosing a large population size makes sense. [KF]'s assertions that the three summary outputs from Dawkins in 1986 should be expected to show loss of a correct base with reasonable parameter settings go against the empirical data. Even an unreasonably high mutation rate of 0.05 does not lead to that expectation for population sizes that yield a reasonable distribution for the number of generations reported for those three runs. Tuning is not necessary for the case of having reduced expectation that summary output will show loss of a correct base, but tuning is required to get to parameters that lead to a strong expectation that summary output will show loss of a correct base. Once you obtain parameters leading to such an expectation, though, the resulting distribution of generations to convergence demonstrates that such a parameter set cannot reasonably be ascribed to use by Dawkins in 1986. Alan Fox
To Joseph: your metaphor that the unselected children in each generation are "aborted" is wrong. The children are born, but never reproduce. To kairosfocus: do you understand R0b's point that the TDMHG versions he studied don't even have a generation of children from which the best is selected, but rather just have each parent produce a single child who becomes the next parent? hazel
PPS: And, in a context where just this morning I was remarking on the wave of spamming in my email, the persistent, insistent violation of my privacy by Anti Evo is a HIGHLY material issue of civility. The insistence on slandering design thought as being synonymous with creationism has had well known -- and intended -- unjustifiable public policy and career consequences. In short, Mr Kellogg is indulging in enabling behaviour. That is sad, and sadly telling. kairosfocus
PS: Nor is it irrelevant or a red herring to point out yet again the unanswered challenge to originate functionally specific complex bio-information, the precise question that weasel avoids by question-begging and distraction. kairosfocus
Onlookers: Notice that the just above happens on a day when I have taken pains, with examples, to point out the gap between the mathematics of varying Weasel type text strings and the effects of a proximity filter on a population of such variants, in further substantiation of why I consider that Weasel 1986 is very different in its behaviour from Weasel 1987 and other similar programs. [The latter for instance runs very long, very fast with a LOT of reversions and re-advances, producing an odd winking effect. The former very credibly latches or at the very least quasi-latches, with implicit latching being the most plausible mechanism. The latter simply does not show latching type behaviour.] As to the case presented by Rob, I have shown why I reject his conclusions, just the opposite of a closed mind. I disagree for stated reasons; I have not simply dismissed it out of closed-mindedness. GEM of TKI kairosfocus
Wow, kairosfocus, your "response" to R0b (which I just read a little more closely) demonstrates conclusively that on the issue of Weasel, your beliefs are impervious to evidence. That response should be saved for posterity and put on display under "rhetorical evasion." David Kellogg
kairosfocus, as usual, a great deal of your response is nonresponsive, chock full of red herrings about evolutionary materialism and the inevitable strawman distractions about privacy, civility, etc. I'll let others take apart your specific points -- I'm too exhausted to deal with the onslaught of verbiage. Your writing is so noisy there is barely an audible signal. I will object to this:
Frequent reversions a la 1987 will appear as the likelihood of such substitutions rises,
You still don't get it. The 1987 video showed frequent reversions because it showed all the mutated phrases, not just the winning ones. The difference between 1986 and 1987 is an effect of the display only. David Kellogg
PPPS: For the 4% odds per letter of mutation case, the probability of a given pop member containing no changes is ~ 32%. The onward probability of a population of 50 having NO zero-mutation cases is thus 0.68^50 ~ 4.2 * 10^-9. So, we can safely assume that in any case where a substitution occurs, there will by overwhelming probability be a zero mutation case for it to compete with. The decisive issue will be the proximity filter, and if that is based on the binary value of the letters and space character, the result may be a very complex function of what the newly correct letter is, what the newly incorrect letter is, and what the original string was; not to mention remaining distance to target. kairosfocus
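[Editor's note: the arithmetic in the comment above is easy to check. The short Python sketch below is my own, not code from the thread; it simply reproduces the two quoted figures under the stated assumptions of a 28-letter phrase, a 4% per-letter mutation rate, and a population of 50.]

# Sketch of the probability arithmetic above.
# Assumed parameters: 28-letter phrase, 4% per-letter mutation rate, population of 50.
PHRASE_LEN = 28
MUT_RATE = 0.04
POP_SIZE = 50

# Probability that a single mutant copy has no changed letters.
p_no_change = (1 - MUT_RATE) ** PHRASE_LEN        # ~0.319, i.e. ~32%

# Probability that NONE of the 50 copies in a generation is a zero-mutation copy.
p_no_zero_case = (1 - p_no_change) ** POP_SIZE    # ~4.6e-9 (the ~4.2e-9 above rounds 0.681 to 0.68)

print(f"P(single copy unchanged)          = {p_no_change:.3f}")
print(f"P(no unchanged copy in pop of 50) = {p_no_zero_case:.2e}")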
PPS: I should of course add that in the just presented we also see the preservation effect, i.e. the no-change case "often" wins. (We are in a low intrinsic probability of change per letter regime, here 4%, i.e. ~1 letter per member shifts on average and about 1/3 will have no change. That is, a generational sample of size N is likely to have a significant proportion of no-change candidates [~N/3 for 4%], so the likely winners will be no change, then a further letter goes correct, then a substitution [a double letter change, one letter to correct, the other to incorrect], then such a substitution with an additional correct letter; because of the impact of the proximity to target filter. As N rises as well, the likelihood that a generational sample will contain one or more of the substitution cases rises -- BTW, this is not the same as choosing the champion. [Why: An interesting issue is when we have no-change cases AND substitutions (likely to occur where a substitution happens for such a case as is under examination); which will likely have the same distance metric: either a preference for substitutions or a lottery among the least distance candidates may have to be invoked to decide. Such a decision will materially affect the incidence of substitutions and no-change cases in the resulting run of champions . . . or, should that be, roughly, queen bees?] So, what we see in such a run is not simply a matter of the mathematics of what changes and what does not change per the mathematical odds of letters changing. [For instance, suppose that the program sets a module: if distance to target is not superior among the mutants, pass the current champion to the next generation; this would produce NO substitution cases. By contrast, choosing a case that has the same distance but is different from the current champion would pass the maximum number of substitutions, and a lottery would be intermediate, trending to pass few of the relatively rare substitutions. O/p's in the three cases would be significantly different in terms of the characteristics of the runs of champions. And, of course proximity to target metrics can be composed to do these things in varying degrees, automatically, since the ASCII codes of letters etc. have bit values correlated to the sequence of the alphabet.]) kairosfocus
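[Editor's note: the tie-breaking point in the bracketed example above can be made concrete. The Python sketch below is a hypothetical illustration, not anyone's actual Weasel code: distance() is a simple letters-from-target metric, and the three policy names are my own labels for the three rules the comment describes (pass the current champion unless a child is strictly closer, prefer an equally close but changed string, or hold a lottery among the tied candidates).]

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"

def distance(s):
    """Proximity metric: number of positions that differ from the target."""
    return sum(a != b for a, b in zip(s, TARGET))

def pick_champion(parent, children, policy):
    """Select the next champion under one of three tie-breaking rules;
    'children' includes any zero-mutation copies of the parent."""
    best = min(distance(c) for c in children)
    tied = [c for c in children if distance(c) == best]
    if policy == "keep_parent" and best >= distance(parent):
        return parent                 # pass the champion unless a child is strictly closer:
                                      # substitution cases never become champions
    if policy == "prefer_new":
        changed = [c for c in tied if c != parent]
        if changed:
            return changed[0]         # same-distance changed strings win: maximum substitutions surface
    return random.choice(tied)        # lottery: an intermediate number of substitutions surfaces

Run with the same parent and children, the three policies yield visibly different runs of champions, which is the point being made about why some Weasel recreations show substitutions in their champions and others do not.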
PS: It would be useful for onlookers to play with the Atom implementation, where we can see events live for ourselves, generation by generation. Run the 4%, 50-pop version a few times. You will see that there are occasional reversions [colour goes back to black from reddish], reflective of a quasi-latched condition on such runs. There will be abundant cases of non-reversion to select "good" ["cumulative"] runs from too, i.e. the runs latch implicitly. In short, as the effective number of samples rises (here, through multiplied runs), we do see the appearance of substitutions etc. from the far skirt. Here is a case in point, on 4%, 50 pop/gen, after some runs to catch a case:
130. METHKNKS IT IS LIME A WEASEL
131. METHKNKS IT IS LIME A WEASEL
132. MRTHANKS IT IS LIKE A WEASEL
133. MBTHANKS IT IS LIKE A WEASEL
134. MBTHANKS IT IS LIKE A WEASEL
135. MBTHANKS IT IS LIKE A WEASEL
Notice the substitution effect. Implicit latching places a probabilistic barrier, not an absolute one. Some runs will latch, others will quasi-latch, as a result. And if your "good" runs are those with "cumulative" progress, you are going to be likely to showcase the former. Frequent reversions a la 1987 will appear as the likelihood of such substitutions rises, which will require more sample size [to be more likely to get skirt members in the sample] and/or higher mutation rates [to get more multi-letter mutations and fewer zero-letter mutation cases]. kairosfocus
2] you have strongly suggested that the selections from TBW should show reversions if such were possible. You here confuse what is POSSIBLE with what is sufficiently probable to be likely to be credibly observable under relevant typical or likely to be selected circumstances, especially for a showcased "good" run of "cumulative selection." [Cf the example of the avalanche of rocks spontaneously forming the phrase "Welcome to Wales." The difference is the foundation stone of the 2nd law of thermodynamics in statistical mechanics. It is also the foundation of the issue posed by FSCI to the claimed spontaneous origin of life and major body plans. As outlined above.] And by the way, a single example shows that a possibility is real. AND, I have given the dynamical context in which that reality takes place: double mutations with substitution, which I have exemplified.

3] Rob, 365: is the erroneous interpretation of Dawkins' Weasel really "reasonable" and "natural"? Question-begging loaded assertion embedded in the question. (AKA the fallacy of the complex question.) The only decisive evidence that Weasel 1986 did not explicitly latch is credible code. Which, 23 years later, is not likely to be forthcoming; though it would be welcome. It is on preponderance of evidence based on trusting the reported statement of Mr Dawkins c 2000 that leads to the conclusion that Weasel 1986 was most likely implicitly, not explicitly, latched. So far as the raw evidence goes, Weasel 1986 [thanks to Apollos' point that an explicitly latched program can also show reversions . . . ] can be accounted for on BOTH mechanisms. And, while you may wish to point out that those who created explicitly latched weasels thereafter tended to not bother with the step of clustering trial runs in groups and picking the "best of group" to promote to the next stage, that makes no MATERIAL difference to the explicitly latched case. (Not to mention, you are not addressing the force of the actual statements by Mr Dawkins circa 1986, as most recently excerpted at 285. Understanding Mr Dawkins to be describing an explicitly latched program is a reasonable and natural reading. Which was my point.) As for the idea that various runs of explicitly latched weasels differing from a given 1986 run in lines 1 to 2 make the programs materially different, that seems just a tad overdrawn. The Weasel run c 1986 is one run. The rest are similarly selected runs. They will naturally differ at points. What is crucially consistent is that runs of 40 or so and 60 or so gens to target are consistent with a good explicitly latched run. (And of course implicit latching will act in much the same way as explicit latching; at least so far as the run of champions is concerned. And, Joseph is right: the champions are the only real children per generation -- the rest are abortions. The champions are selected on mere proximity, not functionality.) On typos and the like in "nonsense phrases," there is little or no consequence one way or another.

4] GLF: Tone and substance are one thing . . . Tone has to do with serious moral issues: violation of privacy, slanderous conflation of Design thought and creationism, and general uncivil conduct. Such are even more important and of much wider general impact than matters of substance on the particular narrow issue engaged, for uncivil conduct undermines the ability of our civilisation to thrive and prosper; i.e. its sustainability.
(And yes, this is an allusion to Kant's Categorical imperative: uncivil conduct is civilisation-damaging conduct. Right now, Internet incivility is a growing menace to our civilisation.) Substance does have to do with facts and reasoning related thereto. So, the inference that by raising issues of "tone and substance," I have failed to address data is at best careless and revealing. And in fact I have already pointed out the specific regime of the distribution I have been addressing all along, i.e. as discussed above on observability of the zero change case, which has a key influence on the circumstances where the proximity filter applies and will advance while preserving previously correct letters. And, onlookers, I provided actual cases that illustrate and exemplify what I am talking about. I should think that these would count as "data," should they not? Especially as the published runs of 1986 would be showcased "good" ones illustrative of the power of "cumulative selection." So good in fact that they show no reversions at all in 200 cases where letters go correct and have in principle the opportunity to revert, of 300 cases of changeable letters. Just that little bit TOO good . . .

5] new information like this . . . I addressed this case yesterday. I have given further details this morning, which should suffice for onlookers to see that the crucial point is that there is a regime where what I am speaking of will happen, with significant reproducibility; especially relative to the showcasing of "good" runs of "cumulative selection." [That is, a near synonym to the root ideas behind "ratcheting" and "latching."] As I have already shown from 234 on. So, you may indeed push the observed numbers sufficiently far that you have, say, 1000 runs * 100 member pop per gen * 50 gens typ ~ 5 mn observed mutants [including maybe 1/3 which will be zero change]. But that does not undermine that you will get the case of "good" runs of cumulative selection that will latch implicitly. As I have shown. GEM of TKI kairosfocus
Onlookers (And DK et al): We must set the above exchanges in context (as there is a significant tangential tendency in this issue): evolutionary materialism faces a critical challenge to account for the functionally specific, complex information in life, from credible first life [~ 600 - 1,000 k bits] on to major body plans [10's to 100's of M bits, dozens of times over]. Weasel, in the end, sadly, ducks that challenge, rather than answering it. For, it fails utterly to address the need to get to a credible threshold of functionality before hill-climbing mechanisms such as random variations plus natural selection can be applied. Worse, it reverts to artificial, foresighted, targetted selection as a "substitute" for NS, even though from the outset Mr Dawkins knew that this was "misleading." So, from the outset, Weasel is a misleading, question-begging icon of evolution. One that should have long since been retired to the hall of shame of such icons, next to the Haeckel embryo drawings and the Piltdown and Nebraska man fossils etc.

The sustained failure to address that key issue is thus the backdrop against which we need to assess the obsession of the Anti Evo advocates and fellow travellers with trying to show me wrong on the observation that Weasel c. 1986 evidently latched its o/p, and that there are two reasonable mechanisms for it: explicit latching [search is by the letter] and implicit latching [search is by the phrase, but as pop size, per letter mutation rates and the proximity to target filter interact, weasel latches]. Of these, on preponderance of evidence, implicit latching best accounts for Weasel c 1986, which evidently latches; and Weasel c 1987, which does not.

(To get to the latter, it is likely that the parameters listed were detuned enough that no-change cases become effectively unobservable, or sufficiently so that we see regular reversions of correct letters, i.e. non-latching. For the former, so long as the per letter mutation rate is low enough that no-change cases are observable enough, and the pop size is below a threshold where the far harder to observe double change with substitution cases, or substitutions with a third letter going correct, are seen, then we will often enough see latching and, at the threshold, quasi-latching, so that the appearance of such in a showcased "good" run of "cumulative selection" is not unlikely. As an idea on observability of the zero change "mutants": for a 5% per letter mutation rate, 0.95^28 ~ 24% [4% giving 32%]; for 10%, 0.9^28 ~ 5%; for 15%, 0.85^28 ~ 1%; for 25%, 0.75^28 ~ 0.03%; for 50%, 0.5^28 ~ 3.7*10^-7%. Thus, for instance: 0.03% of 1,000 ~ 0.3, while 24% of 50 ~ 12. That is sufficient, given the "nearest is picked as next champion" filter, the rest -- as Joseph aptly said -- being aborted. What is happening, visually, is that the more or less bell shaped distribution is being pulled away to the right from the origin as the odds of per letter mutation grow, so the 0-change case is increasingly becoming a tail member.)

In short, we have a molehill issue [on which Joseph and I are right] being made into a mountain, distracting attention from the basic fact that one has to get to the shores of Isle Improbable to then try to climb up to its peak, Mt Improbable. And, Isle Improbable is incredibly isolated in the Sea of Non-Function, so that if you do not have an intelligently prepared chart, it is hard indeed to find it by what amounts to random search.

Now, on a few points that seem worth a further remark:

1] DK, 364: Dr.
Elsberry's data show that "the number of runs [showing] any loss of a correct base from parent to best candidate in the following generation" -- that is, where a case exhibiting reversion is the best choice -- decreases as the population increases. The highlighted findings relate to a regime of behaviour utterly distinct from the one I have addressed, as summarised just above and shown from 234 supra with reference to the start-point: namely, start from the implicitly latched case, then allow pop size to grow enough to make the substitution effect easily observable, while the no-change case is still likely to be present. Under those circumstances we will see latching, then quasi-latching, then disappearance of latching. And, if one seeks "good cases" of cumulative selection, one will showcase precisely that. So, even where, on multiplying the number of runs exceedingly -- say a 5 mn total sample size, so that 10% is 1/2 million, of which 100 - 500 observations of reversions of one species or another are all less than 0.1% -- one sees that one is in a quasi-latched case, a selected "good" run will of course implicitly latch.

I am also well aware of the fact that the binomial is a highly peaked distribution under interesting circumstances [the flipped coins case is used to illustrate statistical thermodynamics principles], so that it is hard to see the upper and lower far skirts once they are both sufficiently in evidence; i.e. the bulk overwhelms the skirt. (Thus, typical samples will tend to cluster to the mean. Multiplying the number of cases of runs is a way of greatly expanding effective sample size, so that what is not likely to be evident in any one case becomes observable on the multiplication of cases, as just noted. For comparison to the real world of origin of life and of body plans, the UPB threshold is set off making things that are searched for on the gamut of the cosmos' search resources unobservable, i.e. about 1 in 10^150. First life and major body plans by that threshold are unobservable based on non-intelligent search strategies. Of this issue, Mr Dawkins c. 1986 decided that "single step" changes of odds about 1 in 10^40 could be dismissed, and a cumulative change, where the odds of any one letter going correct on random change are 1 in 27, substituted. And, he did so for cases where he knew that a search of 1 in 10^40 was infeasible for his search resources. In short, the key question is begged from the outset of creating Weasel type programs.) [ . . . ] kairosfocus
I think R0b just counted the horses teeth :) Alan Fox
Wow R0b! Very interesting post! Alan Fox
kairosfocus:
Per Weasel 1986, absent the testimony as reported to us in recent days, explicit latching is a very reasonable understanding. (Indeed, the Monash University biologists naturally understood it that way until Mr Elsberry "corrected" them.)
You have made this claim repeatedly in this thread, but is the erroneous interpretation of Dawkins' Weasel really "reasonable" and "natural"? Let's look a little closer. Three parties have published the erroneous interpretation:
- Truman, in 12/1998
- Dembski, starting (I think) in 9/1999
- Heng/Green at Monash, in 2007
The striking thing is that all three parties have contradicted Dawkins on the same points in the same way, and it's not limited to the latching question. Let's compare Dawkins' description of Weasel to the erroneous algorithm, which I'll call TDMHG (Truman/Dembski/Marks/Heng/Green).
- Dawkins says that multiple progeny are produced in every generation. In TDMHG, only one child is produced.
- Dawkins says that a winning phrase is chosen from each generation. This makes no sense in TDMHG, since each generation consists of only one phrase.
- Dawkins repeatedly says that he's illustrating cumulative selection. In TDMHG, there is no competition and no selection.
- Dawkins says that the sequences reproduce "with a certain chance of random error - 'mutation' - in the copying". In TDMHG, incorrect letters are guaranteed to mutate, while correct letters are guaranteed to not mutate.
- The most obvious difference is in the respective outputs. Dawkins reports the first two generations from one of his runs as follows:
WDLDMNLT DTJBKWIRZREZLMQCO P
WDLTMNLT DTJBSWIRZREZLMQCO P
(The second 'D' in the first sequence is omitted in TBW -- a typo.) In contrast, here are the first two generations as reported by Truman:
WDLTMNLT DTJBKWIRZREZLMQCO P
SEE SNXD ETHAIYGSWCWVFCQCQMZ
and from a run of Heng/Green's applet:
tynsaue voledpljhuradvlyatvla
cqgrnfuskiprnorcasm vpyvbcpyp
and from a run of Marks/Dembski's script:
YMIHOOSYFKLTT JVZUHTSKMEDONZ
OPNHSJLWTKBRHQY CQDIJJOEPGLC
Dawkins' first two generations are almost identical, while the first two generations in TDMHG are almost completely different. How, then, is it "reasonable" and "natural" to conclude that TDMHG is the same as Dawkins' algorithm? How could all three parties independently come up with the same algorithm and fail to notice that its description and output are manifestly different from Dawkins'? My guess is that Truman made the original gaffe, Dembski followed Truman, and Heng/Green followed Dembski & Truman. (BTW, I've decompiled the applet from Monash and verified that the algorithm is the same as the one implemented by the EvoInfo Lab, both of which match the description and results reported by Truman 1998 and the description by Dembski 1999.) R0b
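[Editor's note: to make R0b's contrast concrete, here is a minimal Python sketch of the two readings as they are described in this thread. It is illustrative only; the parameter values are assumptions, not Dawkins' actual 1986 settings. weasel_dawkins() follows Dawkins' published description (many mutant progeny per generation, the one closest to the target selected as the next parent); weasel_tdmhg() is the TDMHG reading (one child per generation, incorrect letters always re-randomised, correct letters frozen, no selection).]

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def weasel_dawkins(pop_size=100, mut_rate=0.04):
    """Cumulative selection as Dawkins describes it: each generation breeds
    pop_size mutant copies and the copy closest to the target becomes the parent."""
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    gen = 0
    while parent != TARGET:
        progeny = ["".join(random.choice(CHARS) if random.random() < mut_rate else c
                           for c in parent)
                   for _ in range(pop_size)]
        parent = max(progeny, key=score)   # the selection step: winner of the generation
        gen += 1
    return gen

def weasel_tdmhg():
    """The TDMHG reading: one child per generation, every incorrect letter is
    re-randomised, every correct letter is frozen -- explicit latching, no selection."""
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    gen = 0
    while parent != TARGET:
        parent = "".join(c if c == t else random.choice(CHARS)
                         for c, t in zip(parent, TARGET))
        gen += 1
    return gen

if __name__ == "__main__":
    print("Dawkins-style generations:", weasel_dawkins())
    print("TDMHG-style generations:  ", weasel_tdmhg())

Even a single run of each makes R0b's output point visible: the Dawkins-style parent changes by only a letter or two from one generation to the next, while the TDMHG child scrambles every incorrect position every generation.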
kairosfocus, what you wrote was this:
But equally, from the statistics involved, as the population size grows enough relative to mutation rates, skirts will begin to tell, and the program will first quasi-latch, with occasional reversions, then eventually as the far skirt begins to dominate the filter on sufficient population per the per letter mutation rate, we will see multiple reversions and the like, i.e. latching vanishes.
Dr. Elsberry's data show that "the number of runs [showing] any loss of a correct base from parent to best candidate in the following generation" -- that is, where a case exhibiting reversion is the best choice -- decreases as the population increases. To the extent that I can slog through your prose, you have said that it increases. Further, you have strongly suggested that the selections from TBW should show reversions if such were possible. Here too, the data say otherwise. Your comment [234], in which you place so much confidence, shows single runs. Dr. Elsberry presents the results from a total of 15,000 runs. You refer to Dr. Elsberry's program as "Weasel-like," but I think there is a much more reasonable candidate for that label. David Kellogg
hazel:
Is it true or not, in your opinion, that in Weasel it is possible for one of the children in a generation who isn’t the best fit and doesn’t get selected to have a letter reversal?
There is only one child per generation. The rest are abortions. And what those abortions were is irrelevant, as they were not discussed, described, nor illustrated in TBW. Joseph
Onlookers (and Mr Kellogg): You can simply cf the runs above from 234 on in this thread, to see that the point I actually made is valid; being logically-dynamically credible and empirically demonstrated. (I don't doubt that you may find all sorts of oddities in behaviour of Weasel-like pgms, as set up to do all sorts of things by those who write them. To cite such to "prove" that I am wrong in what I assert about a specific data set published in TBW 1986, and as shown to be further plausible on the dynamics of substitution required to get a reversion, and as further shown by recreative examples, is simply out of order. Nor have I EVER "assert[ed]" what Mr Elsberry attributes to me in the cite above: "[kf]'s assertion that increasing population size N increased the likelihood that a set of three runs sampled at ten-generation intervals would show any loss of a correct base in there." This is strawman fallacy at its worst, putting words that don't belong there in my mouth. What I have said, with reasons given, is that as mutation rate, pop size and the choose- slightest- increment- in- proximity filter are detuned from a latched condition as I have shown, there will appear reversions in a context of substitutions. [Why: (i) a significant proportion of "mutants" in a given gen are no-change; (ii) so a selected new champion must have at least that proximity, (iii) if a reversion occurs it must be compensated for, (iv) thus we will see reversion + substitution. Also, (v) to be likely to see far-tail pop members like in iii just past, you need to push up pop numbers sufficiently that these will begin to appear as observable, i.e. we see that on LOLN, (vi) N*p -> ~1 from the low side. Simple logic based on acting dynamics and patterns.] This, as anyone can see by scrolling up to 234 on, I have specifically empirically demonstrated with instances on successive runs of Atom's recreation of a proximity filter Weasel. And yes, the above are successive runs starting from the out of the box default condition.) Moreover, the underlying issue with Weasel and kin is that they duck the fundamental issue that targetted search without reference to a realistic threshold of functionality is not relevant to the actual challenge of getting to first life that credibly has 600 kbits of DNA and onward to dozens of novel body plans that require 10's - 100's of M bits. As for Mr Elsberry, sadly, the latest excerpts from his fulminations abundantly show from both his tone and substance that men of such ilk are uncivil and damaging to our culture and institutions of science and science education. We would be prudent to heed the caution that to be forewarned is to be forearmed. "A word to the wise . . . " GEM of TKI kairosfocus
kairosfocus, I'm relaying a comment by Dr. Elsberry. Some of the formatting is lost, and kairosfocus's proper name has been removed. The link is provided at the end for those who need the original formatting. The following is from Dr. Elsberry: For once, [kf] had something useful to contribute to the discussion when he mentioned that models that didn't match up to the empirical data were bad. So to see whether [kf]'s assertion that increasing population size N increased the likelihood that a set of three runs sampled at ten-generation intervals would show any loss of a correct base in there held up, I coded a faster "weasel" in C++ and used it to gather statistics on sets of 1000 runs per parameter set. What gets reported is the number of runs with the same settings, the population size, the mutation rate, the minimum number of generations to convergence, the maximum number of generations to convergence, the average number of generations to convergence, the number of runs where any loss of a correct base from parent to best candidate in the following generation was seen, and, finally, the number of runs where a correct base was lost when comparing the [0,1,10,...,floor(gen/10)] best candidates sequentially. That last figure divided by the number of runs is the proportion of runs expected to show a loss of a correct base in the sort of output Dawkins put in print in 1986.
Code Sample
Runs=1000, PopSize=00100, MutRate=0.03704, Gen. min=038, Gen. max=231, Gen. avg.=78.5, Runs with losses=273, Runs with dlosses=189
Runs=1000, PopSize=00200, MutRate=0.03704, Gen. min=028, Gen. max=136, Gen. avg.=48.8, Runs with losses=149, Runs with dlosses=86
Runs=1000, PopSize=00250, MutRate=0.03704, Gen. min=026, Gen. max=085, Gen. avg.=42.6, Runs with losses=124, Runs with dlosses=52
Runs=1000, PopSize=00300, MutRate=0.03704, Gen. min=024, Gen. max=091, Gen. avg.=39.6, Runs with losses=131, Runs with dlosses=67
Runs=1000, PopSize=01000, MutRate=0.03704, Gen. min=020, Gen. max=041, Gen. avg.=26.2, Runs with losses=72, Runs with dlosses=20
Runs=1000, PopSize=10000, MutRate=0.03704, Gen. min=015, Gen. max=022, Gen. avg.=18.2, Runs with losses=41, Runs with dlosses=6
As one can see, the proportion of expected times a reduced output selection like that used by Dawkins in 1986 should show a loss of a correct base is nowhere over 0.2 (less than 1 in 5), and decreases with increasing population size N, exactly the opposite of [kf]'s assertion. Nor do large population sizes fit with the generation numbers reported by Dawkins. By N=1000, the maximum number of generations is equal to the smallest reported generation in a completed run by Dawkins, and all of Dawkins' runs are outside the range when N=10000. It appears from the distribution of generations that 250 is a likely estimate for the population size N that may have been used by Dawkins given the three reported runs with 41, 43, and 64 generations to convergence. At that population size, the result above shows that the estimate of the proportion of runs that would yield a visible loss of a correct base in the best candidates reported in the fashion Dawkins used is only 0.052, or just over 1 in 20 runs. Even interpolating between the figures for the adjacent population sizes does not increase that estimate by enough to make anyone suspect that a set of three outputs would necessarily show the loss of a correct base.
I should note that Dawkins only reported the best candidate from generation 1 once in The Blind Watchmaker, but I have made that more expansive pattern the basis for my stats, thus being more generous to the assertion made by [kf] than strictly required. What about the effect of mutation rate? Fixing N=250 and varying the mutation rate gives the following results:
Code Sample
Runs=1000, PopSize=00250, MutRate=0.02000, Gen. min=027, Gen. max=106, Gen. avg.=48.8, Runs with losses=44, Runs with dlosses=29
Runs=1000, PopSize=00250, MutRate=0.03000, Gen. min=028, Gen. max=105, Gen. avg.=44.0, Runs with losses=101, Runs with dlosses=51
Runs=1000, PopSize=00250, MutRate=0.04000, Gen. min=025, Gen. max=140, Gen. avg.=42.6, Runs with losses=152, Runs with dlosses=69
Runs=1000, PopSize=00250, MutRate=0.05000, Gen. min=024, Gen. max=096, Gen. avg.=42.7, Runs with losses=217, Runs with dlosses=98
Runs=1000, PopSize=00250, MutRate=0.06000, Gen. min=025, Gen. max=101, Gen. avg.=43.6, Runs with losses=303, Runs with dlosses=165
Runs=1000, PopSize=00250, MutRate=0.07000, Gen. min=025, Gen. max=135, Gen. avg.=46.0, Runs with losses=379, Runs with dlosses=214
Runs=1000, PopSize=00250, MutRate=0.08000, Gen. min=024, Gen. max=132, Gen. avg.=48.7, Runs with losses=537, Runs with dlosses=309
Runs=1000, PopSize=00250, MutRate=0.09000, Gen. min=024, Gen. max=131, Gen. avg.=52.9, Runs with losses=614, Runs with dlosses=402
Runs=1000, PopSize=00250, MutRate=0.10000, Gen. min=024, Gen. max=156, Gen. avg.=58.0, Runs with losses=730, Runs with dlosses=514
While Dawkins does not directly address mutation rate in his discussion of "weasel", he does do so in relation to his biomorph program, noting that a mutation rate that produces one altered gene per offspring is "very high" and "unbiological". This indicates that Dawkins would have been unlikely to have picked a mutation rate over 1/28 ~= 0.0357 in "weasel" runs. The runs with 0.02 <= u <= 0.037 all show the expected proportion of runs showing a loss visible in summary output of 1 in 19 or less. Yes, empirical data counts, [kf]. Too bad you didn't bother to check before mouthing off in ignorance. **** link David Kellogg
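[Editor's note: Dr. Elsberry's C++ program is not reproduced in the comment, but the bookkeeping he describes can be reconstructed. The Python sketch below is my own approximation, not his code: it runs a population-based weasel, records generations to convergence, whether any parent-to-champion step lost a correct letter ("losses"), and whether such a loss is visible when only the champions at generations 0, 1, 10, 20, ... and the final generation are compared ("dlosses"), i.e. summary output in the style Dawkins printed in 1986.]

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def correct(s):
    return sum(a == b for a, b in zip(s, TARGET))

def lost_base(a, b):
    """True if some position correct in a is no longer correct in b."""
    return any(x == t and y != t for x, y, t in zip(a, b, TARGET))

def one_run(pop_size, mut_rate):
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    champions = [parent]
    loss = False
    while parent != TARGET:
        progeny = ["".join(random.choice(CHARS) if random.random() < mut_rate else c
                           for c in parent)
                   for _ in range(pop_size)]
        child = max(progeny, key=correct)
        loss = loss or lost_base(parent, child)
        parent = child
        champions.append(parent)
    # Summary output in the style of TBW 1986: generations 0, 1, 10, 20, ..., final.
    sampled = champions[:2] + champions[10::10] + [champions[-1]]
    dloss = any(lost_base(a, b) for a, b in zip(sampled, sampled[1:]))
    return len(champions) - 1, loss, dloss

def stats(runs, pop_size, mut_rate):
    results = [one_run(pop_size, mut_rate) for _ in range(runs)]
    gens = [g for g, _, _ in results]
    print(f"Runs={runs}, PopSize={pop_size}, MutRate={mut_rate:.5f}, "
          f"Gen. min={min(gens)}, Gen. max={max(gens)}, Gen. avg.={sum(gens) / runs:.1f}, "
          f"Runs with losses={sum(l for _, l, _ in results)}, "
          f"Runs with dlosses={sum(d for _, _, d in results)}")

if __name__ == "__main__":
    stats(runs=200, pop_size=250, mut_rate=1 / 27)   # modest run count; Elsberry used 1000+ per set

Sweeping pop_size with this sketch should reproduce the qualitative pattern in the tables above: the proportion of runs with losses visible in summary output falls as population size grows and rises as the mutation rate grows.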
Hazel: First, let's look at the relevant passage on the slightest increment towards the target phrase in its context again:
The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . .
We are operating in a digital context, so the slightest increment in a "nonsense phrase" is the letter. Multiply by a case where -- per the 1986 runs as published (cf. simulation runs above, too) -- no-change wins the generational champion contest about 1/2 the time, and single step changes dominate the rest. That is, we know that the population of mutant "nonsense phrases" (i.e. functionality is not being factored in . . .) has a high proportion of 0-change and single change cases. Of these, some will sometimes have an advantageous change, and so we see that about half the time no change wins, and most of the rest of the time a one letter increment to target wins. Beyond that, when the mutation rate and sampled per-generation pop size rise enough, we occasionally have substitution double mutations showing up, and other rarer outcomes might win very occasionally. So, we see implicit latching under certain circumstances, and beyond it a gradual relaxation of latching. AND, that the slightest increment to target is in fact the letter, which under implicit latching will be preserved by overwhelming force of probabilities.

You raised a second point, claiming "You don't get the simpler bit." Sorry, but I cannot let this one pass without corrective comment. For, the overall context is a very loaded and polarised one; recall Mr Dawkins' "ignorant, stupid, insane or wicked" jibe. And, in the context of this particular subject, I have been personally subjected to extremely uncivil -- and to date unacknowledged and unapologised for -- personal attacks by too many on your side of the issue, including violation of privacy, gleeful citation of attacks by a journalist blood-slandering Christians as morally equivalent to Islamist terrorists, and in that context enabling public lewdness.

1 --> First, the issue is not that I do not understand what is going on with Weasel 1986: above in this thread we can see how my analysis and predictions (which were derided and worse when made) were substantiated by current simulations; plainly much to the discomfort of those who had earlier so gleefully derided what they viewed as my ignorance and gross errors.

2 --> Nor do I fail to understand what you are saying on which of the two approaches is "simpler," i.e. explicit or implicit latching. I DISAGREE, for given reasons.

3 --> One of those reasons is, quite simply, that the very concept of implicit latching is plainly much harder to understand than explicit latching -- as the exchanges for weeks have shown. (It is precisely because my background leads me to understand the issue of dominance of a distribution's tail in physical behaviour that I was poised to see that this could happen here, in the first place. For instance, think about the emergence of conduction in a semiconductor or an insulator, e.g. what happens to its electrical conductivity as glass is heated up, due to a small proportion of high energy electrons jumping the band gap from valence to conduction bands.)

4 --> Linked to that, a core issue in design theory is that we have discrete state entities that form configuration spaces, in which islands of function or other special interest are deeply isolated. So, we see the challenge of search spaces and overwhelming improbability as an effective barrier. (Again, this ties into statistical thermodynamics, the underlying principle of the second law, and phase space concepts. Cf. App 1 of my always linked.)
5 --> In this context, we can look again at the evidence: Mr Dawkins presented excerpts of runs, mostly at the tenth generations, with some 200+ of 300+ letters in a position to revert from correct status refusing to revert in the samples. With no counter-instances.

6 --> You will doubtless recall how stoutly my use of the law of large numbers and the associated issue of fluctuations was resisted, and even derided, in inferring from the evidence that this is strong empirical evidence of latching. (There was even an attempt to assert that the sample size was too small to make such a conclusion; some even going so far as to assert that since 6 or 8 or so generations were sampled, we were dealing with small samples. [Their silence over the past several days in the face of the current runs is sadly eloquent, on both substance and attitude.])

7 --> Further to this, we have Mr Dawkins' remarks on cumulative selection, proximity filtering on the slightest increment to target, etc. So, it is natural to understand him as reading the target on a letterwise basis and making a letterwise random search such that once a letter hits the target, it is explicitly locked down. And, given the "toy example" context, that is a legitimate understanding. [I object to the toy example, but because it subverts the key Hoylean challenge: threshold of function, giving a very misleading impression of the probability of getting to the shores of islands of function.]

8 --> Moreover, once you are doing a distance to target metric on letters, you will naturally have the data in hand to latch letters, making a shadow register with certain letter positions masked off. It is then trivial to scan the phrase to be mutated letterwise and insert the comparison with a mask register, to then see if the case allows the letter to be subjected to random mutation at a probability of say 4%. (That is what flag registers in CPUs are for and what branching in programs is about.)

9 --> So, explicit latching is natural to do on reading the published o/p data and commentary context, and is easy to conceive programmatically, as not only Mr Royal Truman [odd, I had always thought this was a pseudonym; looks like he must be a Jamaican or something like that . . . ] and Messrs Dembski and Marks did, but also the Monash U folks. It is also RELATIVELY justifiable on a toy example frame of thought. [It is not justifiable in the context of its misleading nature, especially as an exercise in public education.]

10 --> By contrast, implicit latching is far subtler, and requires co-tuning of pop per gen, mutation rates and proximity filter. (I in fact think that the published runs in TBW etc. c. 1986 were what was then thought by Mr Dawkins to be "good" runs to showcase, being especially illustrative of "cumulative" selection. But the very fact of latching being evident was a signpost highlighting the fundamental flaw of Weasel: it is targetted search that ducks rather than addresses fairly the threshold of complexity issue raised by Mr Hoyle and others.)

11 --> In that context, it is on the strength of the reported statement of 2000 by Mr Dawkins that he did not explicitly latch Weasel, that I have concluded that the best explanation, on preponderance of the overall evidence, is implicit latching.

12 --> But the bottomline is clear: Weasel is fundamentally and irretrievably misleading and should never have been used as an example of the power of mutation and selection to achieve complex functionality step by step.
And, until there is a solid answer to the issue of getting to the shorelines of islands of function, Weasel and kin are question-begging exercises, not serious evidence of the plausibility of materialistic evolution to account for the origin of life and its body plan level diversity. GEM of TKI kairosfocus
Atom @357
We select for the Anagram set, which by definition contains the target, rather than for the target directly
Thank you for the explanation. My next question is, why the heck are you doing that? Does this model some real world process? JJ JayM
Correction to my last post: You will eventually find it, usually with better than unassisted blind search performance, since the target IS an anagram,... Atom
JayM, If you read my preceding post, you see what results I'm referring to. We select for the Anagram set, which by definition contains the target, rather than for the target directly. We expect, and the results confirm, that this will lead to performance that is better (at finding the target itself) than blind unassisted search, yet is not as good as the Proximity Reward matrix, which encodes more target specific information than the Anagram Reward matrix does. Sorry if my "results" posts were a little confusing, but people (including myself) aren't used to thinking of fitness functions in terms of how much information about the target they encode. The results speak for themselves. And feel free to run some experiments yourself and post your results. That's what the GUI is for, so that everyone can code their own fitness functions, compare the performance of different reward matrices and run their own experiments. Later you guys, I'm off to the Four Corners. KF, thanks and I'll let her know! Atom PS Final query count updated (I left it running....) 774,366,900 queries with the target not found. (It was found repeatedly for "BITS" and "HELLO" with much better performance than unassisted blind search, so don't complain "If you're selecting for anagrams, you'll never find the target!" That is incorrect. You will eventually it, since the target IS an anagram, and since you are including some target specific information, namely what letters the target contains. That target specific information is the basis for increased search performance.) Atom
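[Editor's note: for readers trying to picture the difference between the two reward matrices Atom mentions, here is a hypothetical Python sketch. The two scoring functions are my own guesses at the idea, not the code behind Atom's GUI: the proximity score rewards letters correct in the correct position (more target-specific information), while the anagram score rewards only having the target's letters in any order, so every member of the anagram set, including the target, scores maximally (less target-specific information).]

from collections import Counter

TARGET = "METHINKS IT IS LIKE A WEASEL"

def proximity_score(candidate):
    """Letters matching the target at the same position."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def anagram_score(candidate):
    """Letters shared with the target regardless of position (multiset overlap)."""
    return sum((Counter(candidate) & Counter(TARGET)).values())

# A perfect anagram of the target that is not the target itself:
shuffled = "WEASEL A LIKE IS IT METHINKS"
print(proximity_score(shuffled), anagram_score(shuffled))   # -> 3 28

A search rewarded only by anagram_score can wander the whole anagram set and stumble on the exact target only by chance, which is consistent with Atom's report of better-than-blind but still very slow performance on the full 28-character phrase.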
Joseph, here's a simple question: Is it true or not, in your opinion, that in Weasel it is possible for one of the children in a generation who isn't the best fit and doesn't get selected to have a letter reversal? hazel
hazel:
The phrase “however slightly” refers to choosing the best child out of the whole population of children.
It's called selection.
It doesn’t say that some of the children may in fact not be an improvement, or may even be less fit.
Selection. Then when one adds the word "cumulative" to it, and provides a target, the inference of ratcheting is clear. Joseph
1) The phrase "however slightly" refers to choosing the best child out of the whole population of children. It doesn't say that some of the children may in fact not be an improvement, or may even be less fit. 2) You don't get the simpler bit. I'll give up on that one. hazel
PS: Joseph, right as rain. As Mr Dawkins -- I STILL cannot just put the name down, seems so abrupt -- said, cf 285: The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . That "however slightly" can only refer to a letterwise increment. And then multiply that by the published runs that show no reversions in a sample of 300+ with 200+ that in principle could revert. Mix in cumulative selection etc etc. Ratcheting and latching drop out as very reasonable understandings of what was put down on record in 1986. Subsequent cases that do not show that behaviour reflect significantly different approaches to doing Weasel, whether by a different algorithm or -- as in the BBC Horizon case -- by most likely varying mut rate and pop size so that quasi-latching or non-latching emerge. [BACK to work . . .] kairosfocus
Hazel Why didn't you simply continue on with the same cite from no 343, point 4:
Not at all. [. . . continuing . . .] What I am saying — and have said many times now — is that the explicit latch mechanism searches per letter, and the implicitly latched case searches on the phrase as a whole and may exhibit implicit letter latching due to interaction effects of the pop size, mut rate and proximity-to-target filter. In the former case, once the letters hit home, they are locked off by a masking filter. In the latter, once the phrase hits home, it is locked off by an implicit masking filter, which is explicitly the halting subroutine. (And, I guess that term dates me . . . )
Per-letter searches are conceptually simpler to develop and execute. As the programmers among us have both said and done -- notice that partitioned search tends to come first, then the more elaborate ones. (Back to following up on PMBOK vs PRINCE2 vs PCM . . . paying for my sins doubtless.) GEM of TKI kairosfocus
Atom @342
690,301,700 queries and no target found using replication, mutation, and selection, but for Anagrams of that phrase.
I'm a little confused by this. Are you saying that random mutation followed by non-random selection can't find anagrams, or are you saying that selecting for anagrams doesn't result in finding the target string? The first sounds like a bug. The second sounds like exactly what one would expect. Why would you keep running once you found an anagram? You can't select for one thing and expect another. JJ JayM
Joseph @347
I have also noticed that YOU cannot provide a reference from TBW which states that with CS once a matching letter is found it is allowed to be lost.
On the contrary, I have noted on several occasions that the important term is "random mutation." From The Blind Watchmaker:
It now ‘breeds from’ this random phrase. It duplicates it repeatedly, but with a certain chance of random error - ‘mutation’ - in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.
There is no way to interpret this as preventing letters from changing once they are correct. You are purely and simply incorrect on this point. Joseph @349
IOW take what Dawkins says before and after, then take a look at the outputs and the inference of a ratcheting process is clear.
In other words, ignore the clear text of the description of the Weasel program and do whatever is necessary to avoid admitting error. Since you have difficulty understanding Dawkins, I suggest you try Jeremiah 5:21 instead.
Again that is the whole purpose behind cumulative selection.
You have missed the whole point that Dawkins was making, namely that the cumulative selection is the result of mutation that is random with respect to fitness combined with reproduction that is dependent on fitness. JJ JayM
JayM:
The full text of the description of the Weasel program is excerpted in my comment 182 above. Please point out, using that excerpt, how anyone can reasonably assume explicit latching (or ratcheting) based on that description.
As I have already told you, the relevant phrases are before and after the "weasel" program (in TBW). There is a sentence after the "weasel" illustration which talks of "slight improvements". Not only that, there isn't one reversal to be found in the printed results of the program. IOW take what Dawkins says before and after, then take a look at the outputs, and the inference of a ratcheting process is clear. Again, that is the whole purpose behind cumulative selection. Joseph
When I asked kf, “Are you saying that the “explicitly latched partitioned search” does NOT have a generation size, a mutation rate, and a filter?”, he answered
Not at all.
Good - I didn’t think you thought that. So why did you write, at 323,
So I simply repeat: That approach which simply takes Mr Dawkins’ TBW at face value and does a straightforward letterwise, explicitly latched partitioned search is conceptually and programmatically simpler to do than one that has to balance per generation size, mutation rates and a filter towards proximity to the target.
Both the explicit and implicit cases are conceptually identical in all respects except for the one difference that we have agreed on:
Explicit: if the letter is incorrect, p(mut) = p; if the letter is correct, p(mut) = 0
Implicit: p(mut) = p
If this is the only conceptual difference in the program, why is explicit simpler than implicit? hazel
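For readers following the p(mut) comparison, the two per-letter rules can be written side by side as a small Python sketch; the uniform replacement and the 27-character alphabet are assumptions for illustration only.

    import random

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def mutate_implicit(letter, target_letter, p, rng=random):
        # Implicit case: every letter faces the same mutation probability p,
        # whether or not it already matches the target.
        return rng.choice(ALPHABET) if rng.random() < p else letter

    def mutate_explicit(letter, target_letter, p, rng=random):
        # Explicit case: a correct letter is masked off (p(mut) = 0);
        # only an incorrect letter faces the mutation probability p.
        if letter == target_letter:
            return letter
        return rng.choice(ALPHABET) if rng.random() < p else letter

On this reading, the explicit rule is the implicit rule plus one extra comparison per letter, which is the point hazel is pressing.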
JayM, The whole purpose of cumulative selection is that once something is found the search for it is over. Pure and simple- just like you. I have also noticed that YOU cannot provide a reference from TBW which states that with CS once a matching letter is found it is allowed to be lost. Until you do that you don't have anything. Joseph
David Kellog
Joseph, your proposed test is mildly interesting. In any such test, though, subjects should read The Blind Watchmaker first.
Why? Joseph
OOPS: rage = rare . . . kairosfocus
PS: Feebish, glance up at 297, and see if that helps. The basic point is that we are looking at Weasel 1986, which shows an o/p where in 200+ of 300+ potentially changeable places, once a letter goes right, it stays right. The simplest way to view that is to see it as being a letterwise search that locks in successful letters; which is why Dembski et al use "ratcheting." However, Mr Dawkins, per recent reports, did not do that.
It is also the case that you can latch IMPLICITLY. For that to happen, observe first that in the published runs c. 1986 we saw 40+ and 60+ generations of champions [selected off being simply closest to target] before target was hit, i.e. no-change won about 1/2 the time, and 1-change dominated the rest; with no sampled cases of reversion. Now, the random mutation of letters with odds of say 4% is a binomially distributed probability, with 0-change a significant chance for a 28-letter "nonsense phrase." Odds are actually against any single letter change being to the correct letter, but the odds in favour make it very observable. But, as we look at possibilities for multiple letters changing, we see a tail emerging in the distribution. The odds of a double mutation are low, and the odds of the double muts being both correct are far lower: product of the three odds. Similarly, while the odds of a change affecting a presently correct letter and making it go incorrect are as a rule higher, this has to be coupled with the odds of getting a presently incorrect letter to go correct if such a mutant is to have a chance to win against the no-change members of the population; the champion-selecting filter is off proximity to target. And, odds of such a substitution with a further advance to the target in which a third letter goes correct are even lower yet. So, we will under certain circumstances see no change much of the time and single-step advances the rest of the time. This is the implicitly latched case. [Play around with Atom's simulation to see this in action. Cf runs above.]
In other cases, where we have big enough odds of mutation per letter, and/or a large enough pop that what would otherwise be unlikely to come up in a sample will be more likely to occur [for sample size N and probability of occurrence p, observability in a sample improves as N*p --> ~ 1 from the lower side; i.e. the law of large numbers as applied to chances of seeing at least once], then we begin to see substitutions and onward substitutions with advances, as 2-letter change and 3-letter change champions. Such champions will have the effect that we will no longer see latching, as once a letter reverts to incorrect status, it will on average take dozens of gens for it to go correct. At first, we would see rare reversions [quasi-latching] and then such reversions would become common [non-latching].
On preponderance of evidence, the published Weasel runs of 1986 were implicitly latched. The videotaped run of 1987 which featured on BBC Horizon was non-latching. The best inference is that it was detuned -- probably for videographic impact. kairosfocus
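The arithmetic behind that no-change / single-change argument can be checked directly. The sketch below assumes each letter is mutated independently with probability p and, when mutated, is redrawn uniformly from a 27-character alphabet; the 4% rate, 28-letter phrase, 20 currently correct letters, and population of 50 are illustrative values, not Mr Dawkins' actual settings.

    def per_generation_expectations(p=0.04, length=28, correct=20, alphabet=27, pop=50):
        # Rough expected counts, per generation of `pop` children, of the two
        # child types that matter for the latching argument.
        q_adv = p / alphabet                   # an incorrect letter lands on its target letter
        q_rev = p * (alphabet - 1) / alphabet  # a currently correct letter is changed away
        incorrect = length - correct
        p_no_reversion = (1 - q_rev) ** correct
        p_some_advance = 1 - (1 - q_adv) ** incorrect
        clean_advance = p_some_advance * p_no_reversion        # advance with no reversion
        substitution = p_some_advance * (1 - p_no_reversion)   # advance and reversion together
        # The N*p rule of thumb: an event type becomes observable in a generation
        # as pop * probability approaches 1 from below.
        return pop * clean_advance, pop * substitution

    print(per_generation_expectations())

Varying p and pop in a sketch like this is one way to see where no-change and single-step champions dominate and where reversion-carrying champions start to become observable.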
Footnotes:
1] Ratcheting: As used by Dembski et al, relates to Weasel 1986, and to cumulativeness in the context of latching of o/p. This can be implemented explicitly or implicitly. Beyond a threshold of pop-mut rate, quasi-latching takes over [rare reversions]; then there will be no evident tendency to latch as such. Given evident o/p latching on "good" published runs circa 1986, Weasel latched the o/p. Explicit latching is a viable explanation, but is not the "best" per the statement reported of Mr Dawkins that he did not explicitly latch.
2] DK, 336: "neither Hoyle nor Lewontin have anything to do with how Weasel works" Just the opposite. Weasel (as Wiki testifies, in yet another admission against interest as it sought to justify what Weasel did) was designed to make it SEEM that Hoyle's challenge to the evo mat scenarios for origins had been answered. This, unfortunately, was by injecting active targetted information into the process of search. Thus, Weasel has inherent intelligent design embedded, in a context -- recall, "BLIND watchmaker" [which ALSO makes Paley relevant] -- that supposedly tries to show that intelligence is not required to get to complex functional entities. In so doing, it further failed to address the issue of how you credibly get TO the shores of functionality, per the search space implied by the info basis for function, before resorting to hill-climbing cumulative selection. And, Mr Lewontin's remarks show how, by inserting evo mat at the outset, institutional science tends to defy common sense logic, and fundamentally begs the question, censoring out obviously relevant alternatives. So, the attempted deflections above are distractive, not cogent on the merits.
3] DK, 335: The selected phrase is likely to contain previously correct letters but does not have to." The phrase is what is selected. Of course it's likely to contain letters that were correct before. Here we see the failure to understand that a probabilistic barrier is a barrier, though it is not an enforced explicit one. Similarly -- and the context of statistical thermodynamics is specifically relevant -- there is nothing in principle that PREVENTS "heat" [roughly: random, molecular scale thermal motion] flowing from cold bodies to hot ones. And as a matter of fact, that happens all the time. Just, by the balance of probabilities under the circumstances, the NET flow is from hot to cold. For a sufficiently large body on observable times and scales, the flow is sufficiently certain that classical thermodynamics saw an observable regularity, hence the classical forms of the 2nd law of thermodynamics. In the case of Weasel under relevant circumstances, the zero-change and single-change mutations dominate the champion selection process. The probabilities of getting substitutions, and substitutions with further advances to target, are sufficiently low that, for the relevant mutation rates and generational pop sizes, the latter are effectively unobservable. So, we see the cases as published where Weasel implicitly latches.
4] Hazel, 332: Are you saying that the "explicitly latched partitioned search" does NOT have a generation size, a mutation rate, and a filter? Not at all. What I am saying -- and have said many times now -- is that the explicit latch mechanism searches per letter, and the implicitly latched case searches on the phrase as a whole and may exhibit implicit letter latching due to interaction effects of the pop size, mut rate and proximity-to-target filter.
In the former case, once the letters hit home, they are locked off by a masking filter. In the latter, once the phrase hits home, it is locked off by an implicit masking filter, which is explicitly the halting subroutine. (And, I guess that term dates me . . . )
5] AF, cite in 328: Thanks for confirming further the accuracy of the cites discussed in 285 and 88 above. Here is my remark on the section you cite:
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. [--> if you know from the outset that an exercise in public education is "misleading in important ways," why do you still insist on using it? --> Other than, it is the intent to make plausible on the rhetoric what would on the merits be implausible?] One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, [--> He knows -- from the outset -- that promotion to generation champion based on proximity without reasonable criteria of functionality is misleading in important ways!!!!!!!!!!!!!] the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. [--> he knows that this artificially selected, targetted search without reference to functionality is irrelevant to the issues over the origins of information rich systems in life] Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, [ --> That is he knows that he has used artificial selection off proximity to a desired future state, not natural selection based on differential functionality, begging the question of origin of function --> thus, the underlying question of the BLIND Watchmaker creating complex information rich functionality at the threshold of realistic function is being ducked and begged . . . ducking Hoyle's Q and that asked by ID] although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [--> that is, he knows right from get-go, that he has begged the question bigtime, but he obviously thought his rhetoric would work. --> From abundant evidence, that is all too well -- albeit cynically [I doubt that "weasel" is an accident; this paragraph being an exercise in weasel words] — judged.]
Care to comment on that? _______________ ATOM, keep up the good work. And give The Luminous One a special greeting from us all (still an excellent proof of aesthetically excellent design!). Enjoy your vac! GEM of TKI kairosfocus
Final Query count for "Methinks it is like a Weasel" Anagram Search: 690,301,700 queries and no target found using replication, mutation, and selection, but for Anagrams of that phrase. Compare this to the 12,200 queries it takes on average to find the same phrase using a Proximity Reward Fitness function with the same setup (population size 100, mutation rate 10%) and the huge number of queries necessary for blind search. Atom Atom
333 Joseph "04/12/2009 9:35 am Cumulative selection, as described and illustrated via the "weasel" program in TBW, is a ratcheting process. Isn't that what the Dembski/ Marks paper refers to it as?" After skimming through this very long thread, I'm not sure it's a ratcheting process as much as it is a demi-ratcheting process. Kairosfocus could be helpful here, perhaps, if he could flesh out some of the ideas he has sketched out thus far. feebish
PS I'll be gone next week traveling to the Grand Canyon and Colorado. I probably won't have net access (since we'll be on the road) so I'm sorry if I won't be able to answer questions that come up. Just keep playing with the GUI and let me know via email if you guys find any more bugs in the Beta version. Atom Atom
Hey guys, I updated Weasel this morning, fixing a small bug I found with one of the modes (Partially-Neutral: CRC32) and adding the "Partially-Neutral: Simple Sum Distance" fitness function I described above as a standard mode.
Also, for Anagram Search to eventually find your target you will need to tailor the mutation rate so that you get at least two letters changed per child, since an Anagram cannot change only one letter and still be an Anagram (unless the change randomly replaces a letter with the same letter). So if you only change one letter at a time starting from one anagram, the mutated string will always be less fit, since it will no longer be an anagram. Therefore, you'll want to create a "swapping" effect, where two letters in the Anagram can swap places, and eventually you will find the target, if your target space is small enough.
Some preliminary results with Anagram Search: The four letter phrase "BITS" has an unassisted random search median of 368,000 queries, but Anagram Search found the phrase with a median of 193,000 queries after 1,000 runs. The five letter word "HELLO" has an unassisted random search median of 9,950,000 queries, whereas Anagram Search found the phrase with a median of 804,500 after 1,000 runs. The eight letter phrase "METHINKS" has a blind search median of 196,000,000,000 queries, but Anagram Search appears to have a median of about 200,000,000 queries (the length of runs is insufficient so far to accurately estimate, so this is a guess based on the few runs completed.) The 28 letter phrase "METHINKS IT IS LIKE A WEASEL" has an unassisted search median of 8.30 x 10^39 queries, and an Anagram Search of over 500 million queries has yet to find the target. If it eventually does, I'll post those results.
The other results confirm my intuition: The set of Anagrams is smaller than the set of all possible strings, and since the Anagram set contains the target phrase, an Anagram Search will eventually find the target by selecting only for anagrams, and do so faster than an unassisted search would. However, since we are limiting the amount of active information encoded in our reward matrix (rewarding based on Anagram distance rather than distance to the target, therefore introducing a level of uncertainty), it takes longer than Proximity Reward Search. Since Anagram Search selects based on the Anagram subset, a small mutation rate and large enough population size will ensure that selection keeps your search close to the Anagram subspace, where your target resides. This keeps the "relevant" search space smaller than the total search space, hence the improvement in search performance.
It appears that the more information about your target you use in your fitness function, the easier it is to find your target; and the more you limit the target-specific information present, the more closely your search will behave like unassisted blind search. These results are consistent with that hypothesis. Atom Happy Easter everyone! Atom
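For anyone wanting to experiment along these lines outside the GUI, here is one plausible way to code an anagram-based reward and the two-letter swap mutation described above. The scoring rule and the helper names are guesses at the idea, not the GUI's actual internals.

    import random
    from collections import Counter

    TARGET = "METHINKS IT IS LIKE A WEASEL"

    def anagram_distance(candidate, target=TARGET):
        # How far the candidate's letter bag (counted with multiplicity) is from
        # the target's; 0 means the candidate is an anagram of the target.
        want, have = Counter(target), Counter(candidate)
        return sum((want - have).values()) + sum((have - want).values())

    def anagram_fitness(candidate):
        # Higher is better; every anagram of the target (including the target itself) ties for best.
        return -anagram_distance(candidate)

    def swap_mutate(phrase, rng=random):
        # Swap two positions, so an anagram stays an anagram (the "swapping" effect above).
        i, j = rng.sample(range(len(phrase)), 2)
        chars = list(phrase)
        chars[i], chars[j] = chars[j], chars[i]
        return "".join(chars)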
Joseph @331
As for people wanting me to supply quotes from TBW- Been there, done that and now the book is back at the library. But tomorrow I will go back and order it again- it isn’t at the local so they will get it from another library in their network. Ya see I had already made my case using the book but I know I can do it again but I need the book to do so.
If you've already made such a strong case using direct quotes from the book, you could, you know, just provide a reference to those posts. JJ JayM
Joseph, your proposed test is mildly interesting. In any such test, though, subjects should read The Blind Watchmaker first. David Kellogg
kairosfocus, Happy Easter! That Lewontin quote never gets old, does it? Nor does referring to Hoyle. Of course, neither Hoyle nor Lewontin have anything to do with how Weasel works, but they're always fun to mention. David Kellogg
Clive [322], I should have been clearer. I should have said "The selected phrase is likely to contain previously correct letters but does not have to." The phrase is what is selected. Of course it's likely to contain letters that were correct before. It just doesn't have to. Letters will revert because mutation is random, but the phrase that is selected for the next generation (that is, the best phrase among the mutated phrases) is not likely to have a reverted letter. David Kellogg
Joseph @333
Wasn’t the paper peer-reviewed which should mean that someone checked their references- one of which being TBW?
Peer review is the beginning, not the culmination, of the assessment of a paper. If the mistake is allowed to persist until publication, there will no doubt be refutations published.
Take 100 educated people- scientists- and have them read the Dembski/ Marks paper. But these people cannot have any knowledge of this specific debate- no pre-biases. Then have them read “The Blind Watchamker”- at least the relevant chapter- and have each one decide if the paper’s inference on the ratcheting properties of cumulative selection is correct.
The full text of the description of the Weasel program is excerpted in my comment 182 above. Please point out, using that excerpt, how anyone can reasonably assume explicit latching (or ratcheting) based on that description. Please directly address Dawkins' use of the term "random mutation" to describe the creation of progeny.
I would bet the majority agrees with the inference.
I would bet that you are incapable of addressing the plain text of The Blind Watchmaker. JJ JayM
Cumulative selection, as described and illustrated via the "weasel" program in TBW, is a ratcheting process. Isn't that what the Dembski/ Marks paper refers to it as? Wasn't the paper peer-reviewed, which should mean that someone checked their references- one of which being TBW? The following would be a good test- Take 100 educated people- scientists- and have them read the Dembski/ Marks paper. But these people cannot have any knowledge of this specific debate- no pre-biases. Then have them read "The Blind Watchmaker"- at least the relevant chapter- and have each one decide if the paper's inference on the ratcheting properties of cumulative selection is correct. I would bet the majority agrees with the inference. Joseph
At 323, kf writes:
So I simply repeat: That approach which simply takes Mr Dawkins’ TBW at face value and does a straightforward letterwise, explicitly latched partitioned search is conceptually and programmatically simpler to do than one that has to balance per generation size, mutation rates and a filter towards proximity to the target.
Are you saying that the "explicitly latched partitioned search" does NOT have a generation size, a mutation rate, and a filter? My understanding is that the general form of both the explicit and implicit case is exactly the same:
1) The current parent produces a population of size N of children
2) Each child is formed by subjecting each letter in the parent phrase to the possibility of mutation
3) All the children are compared to the target to see how many correct letters they have
4) The child with the most correct letters becomes the parent of the next generation (with some kind of rule to break ties)
5) The process is repeated until we have a child which has all the correct letters.
Is this a correct summary of the general form of the program for both the explicit and implicit case? hazel
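hazel's five steps translate almost line for line into code. The sketch below is offered only as a neutral rendering of that summary; the population size, mutation rate, alphabet and tie-breaking rule (first best child wins) are illustrative choices, not anyone's published settings.

    import random

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    TARGET = "METHINKS IT IS LIKE A WEASEL"

    def score(phrase):
        # Step 3: count the letters that match the target, position by position.
        return sum(a == b for a, b in zip(phrase, TARGET))

    def weasel(pop_size=100, mut_rate=0.05, seed=None):
        rng = random.Random(seed)
        parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while parent != TARGET:                       # Step 5: repeat until all letters are correct
            generation += 1
            children = ["".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                                for c in parent)      # Steps 1-2: N children, per-letter mutation
                        for _ in range(pop_size)]
            parent = max(children, key=score)         # Step 4: the best child becomes the next parent
        return generation

Making this "explicit" in the sense debated above would mean changing only the per-letter mutation test so that letters already matching the target are skipped.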
As for people wanting me to supply quotes from TBW- Been there, done that and now the book is back at the library. But tomorrow I will go back and order it again- it isn't at the local so they will get it from another library in their network. Ya see I had already made my case using the book but I know I can do it again but I need the book to do so. Joseph
hazel:
b) Programs, including Atom’s, that show that explicit latching is not necessary to produce the kind of results shown in BWM
You have issues- serious issues. I have NEVER said anything about explicit latching. As a matter of fact I said that the latching takes place given the proper parameters. And that is a fact. Joseph
hazel, With cumulative selection- according to how it is portrayed in TBW- once something useful is found the search for it is over. With a partitioned search- same thing. After writing about the "weasel" program Dawkins made it clear about CS and "slight improvements". Now ONLY an intelligent designer could have a reversal as a slight improvement, because designers can figure out that future issues may force a reversal, but that the reversal can then be used to get an improvement. There isn't anything in TBW which states nor implies that with cumulative selection, once something is found it can then be lost such that it has to be found again. Also, the watchmaker that is cumulative natural selection has never been observed in nature to do anything more than provide slight variations that fit perfectly well within the creation framework of variations within a kind. Joseph
Happy Easter to all! BTW, from page 50 of "The Blind Watchmaker": [Weasel] is misleading in important ways. One of these is that, in each generation of selective "breeding", the mutant "progeny" were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival, or more generally, reproductive success. If, after the aeons, what looks like progress to some distant goal, seems, with hindsight, to have been achieved, this is always an incidental consequence of many generations of short-term selection. The "watchmaker" that is cumulative natural selection is blind to the future and has no long-term goal. Alan Fox
And, as the Easter Sun rises: A happy Easter to all. GEM of TKI kairosfocus
PS: Mr Lewontin immediately goes on to a telling further remark: The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.
a --> At best, this is utterly ignorant of the fact that the major founders of modern science were precisely such theists who believed in the miraculous. They saw God as the Creator who made an intelligible and orderly cosmos that we can understand, being gifted by him with minds matched to our world. So, science started out as "thinking God's [creative and sustaining] thoughts after him."
b --> In that context, miracles are not chaotic, whimsical or arbitrary: God, for good reason, acts in extraordinary ways in the world, and this action points to himself as our loving Lord and Saviour. For such miracles to stand out as signposts, they REQUIRE a predictable general order to creation. [Thus for instance the signpost significance of a resurrection of a certain crucified Saviour, witnessed by 500+ eyewitnesses as 1 Cor 15:1 - 11 outlines in the first written record of the C1 church's testimony circa 35 - 38 AD; a signpost significance that among other things tells us, as Paul said in Acts 17 to the Athenian elites on Mars Hill, that we are eternally accountable before God. And, are therefore called to change our thinking and behaviour in that light of evidence. That is, to repent.]
c --> Thus, the suppression of the relevant history of scientific thought and debates is a disservice to our understanding of even science, much less our worldview options.
d --> And, since the very founders of modern science predominantly practiced in a specifically theistic context, we have no good grounds for the idea that such a theistic view is inherently antithetical to the scientific outlook or to rationality.
e --> But if one can be induced to think that science = evolutionary materialism and that rationality = rationalism, then the confusion is easy to account for.
f --> The solution is just as plain: a little instruction in the true history of science and the real range of credible scientific approaches would at once stop the rot. (Newton's general Scholium to the Principia would be a good start point . . . and a real look at what Mr Paley actually argues in his much derided book would be a good stopping point along the way too, as would be an examination of Malthus' work and the consequences of Malthusianism in say Ireland in the 1840's. [After all, it IS the backdrop of Darwin's theory, is it not?])
g --> But, that might not fit the preferred outcomes of certain powerful agendas exemplified by the attitude of Mr Lewontin . . . To which I respond: truth and fairness are much more important than agendas. GEM of TKI kairosfocus
10 --> Therefore, the real issue is, what sort of search is credibly able to get us to those shores?
11 --> And, remember, we have found that the technology of life rests on information-rich bio-polymer molecules working together in an integrated information system that exhibits: digital data storage in data structures, codes for the data and for working with them, codes in computer languages, step by step processes that physically instantiate algorithms, and self-replication capacity [i.e. the blueprint and self-assembly mechanism are also part of the system].
12 --> Such an entity is deeply dependent on functionally specific, complex information well beyond the search capacity of our whole cosmos, and is irreducibly complex as well. (If you doubt the latter, take components out of your PC at random and/or change its software at random, and see how well it works.)
13 --> Indeed, we have not stumbled across Paley's stone vs. watch in a field, but instead a COMPUTER in the heart of cell-based life. (And, BTW, when we brush aside the many strawman mischaracterisations and dismissals, then actually read Paley, we will see that 200+ years ago he raised some serious questions on the matter, including the point that self-replication implies further complexity and functionality, not less. [Indeed, he actually discussed the issue of a self-replicating watch being found in that field and what that would really call forth by way of best explanation . . . ])
14 --> So, the real issue has long been on the table, but has for 150 years now been cleverly distracted from and ducked through what in the end turns out to be a grand begging of the question. Well, the time for question-begging is over, long since over.
15 --> First, we have plainly seen for thousands of years three well known mechanisms for phenomena: [a] mechanical forces/dynamics, [b] undirected stochastic contingency (chance) and [c] purposefully directed contingency [design]. They can act together, but we routinely separate these factors as aspects in our analyses of circumstances, objects and events. (Just think about how a die falls to the ground and tumbles, then comes to rest with a particular reading. The fall is dynamical per gravitation. The tumbling and coming to a reading is chance or design, depending on whether it is fair or loaded. [And believe it or not, there has been much pretense that this is unclear and/or controversial in recent months at UD. I'd say that if one is sufficiently confused not to be able to understand something as simple and familiar as a tossed die, then that does not speak well of the explanatory power of one's materialistic worldview.])
16 --> There has been no credible fourth alternative, and after 2,300+ years of waiting, we cannot take unredeemed promissory notes seriously that there is a fourth factor. Those who suggest such plainly now have the burden to warrant their claims.
17 --> Dynamical necessity of course starts from initial conditions [which may exhibit contingency] and unfolds the future by forces of necessity. If Condition X occurs under forces F, then consequence C(t) will flow forth, reliably and with low contingency relative to X, per whatever differential equations apply. (In cases of sensitive dependence on initial conditions or quantum uncertainty etc, the issue is not the predictability of dynamics but the uncertainty or physical unobservability to us (or "nature") of the initial conditions.
[Which includes Schrodinger's poor poisoned cat.])
18 --> Undirected stochastic contingency has a definite signature: outcomes are orderly at the next level: statistical distributions, once we can identify the way in which the underlying patterns act.
19 --> In that context -- pace much distractive digital ink spilled over recent weeks -- MOST reasonably large samples will pick up the bulk patterns of the distributions, and as the size of samples grows, we will see tails [or other isolated target zones] playing a role as sample size N goes large enough that the relative statistical weight of the target zone, p [often in the guise of a probability metric], will act thusly: N*p --> ~ 1. That is, the target zone is now credibly observable.
20 --> The law of large numbers, in short, is legitimate, relevant and important [which is not even controversial in any other context of consequence . . . ].
21 --> Indeed, the heart of the complex specified information challenge to the chance + necessity view of our world, is that the whole cosmos acting as a sampler is insufficient to get us to that threshold of observability for the relevant functionally specific, complex information-rich phenomena of life.
22 --> In a nutshell, a contingent system that has 1,000 bits of information storage capacity will have a config space of 2^1,000 ~ 10^301. Since the number of quantum states of our observed cosmos of some 10^80 atoms is about 10^150, the whole universe acting as a search engine can only sample up to about 1 in 10^151 of the config space of just 1,000 bits. That means that something that functions based on such information of at least that threshold is in effect unobservable per chance and/or necessity. At least, on the gamut of the observed universe. [And this last, onlookers, is an invitation to go to the multiverse concept, which immediately implies a resort to the unobserved and inescapably metaphysical; not scientific.]
23 --> But in fact, the OBSERVED life forms [those who put up RNA worlds etc must justify such EMPIRICALLY] start at 500 - 1,000 k bits worth of DNA. And, knockout studies etc strongly suggest that 300,000 4-state elements is a lower limit for independent life. That is, we are looking at 600,000 bits, or about 10^180,617 configs, as the realistic threshold for life. This is utterly beyond any reasonable chance + necessity based search capability of our cosmos. [And if one infers to a law of nature that has "life" written on it: [a] where is the independent empirical warrant for such, and [b] would that not be very suggestive about the designed character of the laws of our universe?]
24 --> This is what gives so much bite to the challenge of the tornado in a junkyard spontaneously forming a jumbo jet put up by the late, great Sir Fred Hoyle.
25 --> The precise challenge that Weasel distracts attention from by begging the question.
26 --> The precise challenge that, if it cannot be cogently answered, utterly demolishes the plausibility of the chance + necessity in prebiotic soups --> life models of evo mat.
27 --> The same challenge that decisively undercuts the pre-existing life + RV + NS --> body plan level bio-diversification macroevolutionary models.
28 --> And yet, we all know, per massive experience and observation, that intelligent designers exist and are ROUTINELY capable of generating directed contingencies that create information-rich functional systems that go well beyond the 1,000 bit threshold.
29 --> So, intelligent design of FSCI is well warranted empirically.
Indeed, in every case where we directly know the causal story, FSCI is the product of such design. For instance, just look at the posts in this thread and blog, or the PCs we use to access it.
30 --> Thus, we are properly entitled to use FSCI as a reliable sign of such design, and we routinely do so every time we infer to authors, not lucky noise, as the best explanation for posts in this thread etc.
31 --> All of this is reasonably accessible to an intelligent high school student [much less College professors and members of the US National Academy of Sciences, etc], so why do we see the sort of question-begging Weasel represents [most recently exposed at 285] used to stoutly resist the obvious and well warranted?
32 --> ANS: Because of what Mr Lewontin [a member of said NAS] has so openly confessed: "It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door."
________________
In short, folks, the root issue is not whether or not Weasel latches explicitly or implicitly. It is not even whether or not the evidence points to the o/p of Weasel circa 1986 being latched, which it plainly does. The root issue is that Weasel is a case study on how science is being subverted from being an unfettered search for the truth about our world based on evidence, to being a tool of manipulative rhetoric and advocacy for evolutionary materialistic atheism and associated secularist, radical relativist and [a]moral-cultural agendas, frankly to the ruin of our civilisation. And science education is the means by which this captivity of science to an agenda is imposed on the general public. So, on this Resurrection Celebration Day, let us rise up and throw off the shackles of mental slavery! GEM of TKI kairosfocus
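The round figures quoted in points 22-23 are easy to verify; the short check below only reproduces the orders of magnitude claimed there and takes the ~10^150-state figure as given.

    from math import log10

    # Order-of-magnitude checks for the figures in points 22-23 above.
    print(round(1000 * log10(2)))      # 301    -> 2^1,000 is roughly 10^301
    print(round(600000 * log10(2)))    # 180618 -> 2^600,000 is roughly 9.9 x 10^180,617
    # Taking ~10^150 states as the whole-cosmos sample size, the 1,000-bit space
    # exceeds it by 301 - 150 = 151 orders of magnitude, i.e. about 1 part in 10^151.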
Onlookers (and Mr Kellogg et al): Pardon some direct remarks, preparatory to getting the discussion back on track: I am pretty well convinced at this stage that most of what Mr Kellogg is saying is distractive, rather than substantial. I in particular note that to date he has not cogently addressed the primary issue with Weasel, whether in 1986 form or in whatever newer forms: it is an example of foresighted, targetted search that illustrates intelligent design not any reasonable BLIND watchmaker. As such, it ducks the essential issue posed by Mr Hoyle and many others. Namely, there is no probabilistically credible materialistic pathway to first life and body plan level innovations, once we have identified the functional, tightly integrated, information-rich complexity of the nanotech of life. In that context, Weasel -- sadly -- has since 1986 been a grand exercise in question-begging, and the many personalities-tainted rhetorical attacks here and else where over latching -- especially in light of the refusal to address the cogency of the responses and demonstrations -- have been a grand red herring exercise. A red herring led out to strawman mischaracterisations that were soaked in ad hominems and ignited to cloud and poison the atmosphere. (The latest recirculated one being complaints that I am long and unclear, when in fact the real truth is that it takes time and words to answer objections at a responsible level.] Onlookers, let's set the record straight (yet again). And, pardon the necessity of a bit of a 101 level tutorial; which will have a certain unavoidable length. First, though, we can -- yet again [see what, sadly, I mean about failing to responsibly address cogent, easily accessible evidence . . . ? (Cf. 313 just above) ] -- roll the tape back to December last -- yet again -- where you will see the issues I raised as primary and as associated:
[107:]the problem with the fitness landscape [i.e. as envisioned for the biological world] is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. [Notice how this has never been seriously addressed: getting to body plan no 1, with credibly 600 k bits or so of bio-information as the threshold of functionality, i.e a config space ~ 10^180,617; and onward for body plans requiring 10's - 100's of mega bits of increments of functional information] So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy. But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [Notice, what is central to the issue, right from the outset.] [111, excerpted paragraph used by GLF in his threadjack:] Weasel [i.e. as published in 1986] sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. [If you doubt this, simply observe the o/p . . . ] That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
Now, per 101 . . .
1 --> It should be very plain that the latching concept was there from the outset, as an explanatory concept primarily relative to the o/p behaviour evident from Weasel 1986.
2 --> The terms explicit and implicit latching were descriptive summaries [so is Mr Dembski's ratcheting] of that concept.
3 --> Then, when issues over mechanisms came up, I posited three: T1 -- random search, T2 -- explicit latching, and T3 -- implicit latching. (And, by the lucky noise principle, random variations can in logical principle mimic any pattern in the world; the issue is that when the odds against the lucky noise alternative are sufficiently long, it becomes implausible. This is a premise of science and that common sense reason that says we live in an intelligible world, not a chaos in which all happens by accident and/or by blind necessity. Down that chance + necessity only road lies self referential incoherence of the deepest and most irretrievably self-defeating kind. [For more details, kindly read appendix 7, my always linked.])
4 --> All of this, Mr Kellogg knows, or should know. So -- sadly -- his objection above is objection for the sake of objection and/or -- which would be even worse -- intended to annoy and to create distractions from the issues on the primary merits, and even the secondary issue, through tactics of repeated red herrings, strawmen, and ad hominems.
5 --> To deal with that, we should realise that those who repeatedly distract attention from a subject and the framework of evidence and reasoning that point where it is heading do so as a rule because they may be deeply confused and/or fearful and/or deeply hostile to where that subject is heading.
6 --> In this case, evolutionary materialism is a deeply held view that dominates the academy and many other influence or power centres of our civilisation, and gives comfort to radical secularism and its fellow travellers in their cultural-moral agendas that they are "scientific." But in fact, science is being held captive to the censorship of evolutionary materialistic philosophy, as Mr Lewontin's confession in the NY Review of Books, 1997, so abundantly documents, and as has in recent years been enforced upon science education, courtrooms and even parliaments. Let's remind ourselves:
Our willingness to accept scientific claims that are against common sense is the key to an understanding of the real struggle between science and the supernatural. We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [NY review of books, 1997]
7 --> So, we must determine to restore science to being an unfettered (but intellectually and ethically responsible) pursuit of the truth about our world in light of the empirical evidence.
8 --> In fact, the truth is that evo mat has no good -- empirically well warranted -- origin of life account, once we see the incredible complexity and information basis for cell based life, multiplied by the fine-tuned complexity of the physics that underlies such a cosmos that facilitates such life. Similarly, it has no good account for the origin of body plan level biodiversity that elaborates on the complexities of such life. [Cf my always linked for a reasonable 101 level discussion of this.]
8 --> Thirdly, it has no good grounds for the credibility of our minds and consciences, i.e. it is self referentially absurd at its core. [Cf Appendix 7 of the always linked for a 101 level discussion on this. (And just perhaps this sort of thing, and the sort of thing that appears in the correctives and the glossary at UD, are why we see the sort of rhetorical tactics that have been evident in recent weeks. The future of our civilisation is at stake, and who holds the academic high ground is a big part of it. Worse, some pretty indefensible positions have been taken by the evo mat magisterium, and have been defended by tactics that will not stand the light of day in too many instances.)]
9 --> In that context, we can see why the Hoylean challenge was ducked through a question-begging exercise. The issue is -- pace Wiki's attempted defense of Weasel etc. -- not whether or not RV + NS as a BLIND watchmaker can climb hills to the peaks of Mt Improbable; the issue is to first credibly get to the shorelines of the island of function on which the mountain sits. For, the islands of function are tiny, hard-to-find dots in a vast sea of plainly non-functional configs. [ . . . ] kairosfocus
Hazel: Pardon a few direct words: you are beginning to recycle already long since adequately answered objections. So I simply repeat: That approach which simply takes Mr Dawkins' TBW at face value and does a straightforward letterwise, explicitly latched partitioned search is conceptually and programmatically simpler to do than one that has to balance per generation size, mutation rates and a filter towards proximity to the target. The very length of the exchange on the latter is a strong enough proof of that. So is the fact that, consistently, the first resort is to do a partitioned search program, and the implicitly latched, quasi-latched etc cases come later, and often with apparently much more program development effort. GEM of TKI
PS: Clive, Mr Kellogg et al have a problem with seeing that we are dealing with a tailed distribution [the binomial] for the letters, with a significant fraction being no-change under the relevant odds of random change per letter. So, with a population N sufficiently large to bring out the bulk pattern of the distribution, but small enough that the double and triple mutation cases required for the proximity filter to pick up substitutions of a newly correct letter for a letter that reverts to incorrect status as champion, of probabilities p, will be PRACTICALLY impossible [effectively unobservable, N*p -/-> ~ 1], the Weasel program will implicitly latch as no-change and single-step-change champions dominate the runs to target; and just beyond that range quasi-latching will occur, as only very rarely will we see the relevant reversions and substitutions. Not even the provision of actual runs of such an implicitly latched case [cf above thanks to Atom] has proved sufficient to move them from their preconceptions. At the root, the problem is that the law of large numbers, on observable patterns of samples from a population, is fatal to their underlying evolutionary materialism. kairosfocus
David Kellogg, "That phrase is likely to conserve correct letters but does not have to in any particular generation." I'm sorry David, this makes no sense to me. The phrase conserves letters? Clive Hayden
hazel, Thanks for that clarification. So there is a target phrase that is being approximated to. I see. The program is making an effort at getting these letters in their correct places in order to make the phrase, right? Clive Hayden
kairosfocus insists that he distinguished between explicit and implicit latching from "the outset." Certainly those terms were used from the outset of this thread, but the discussion precedes this thread by some months (and, in typical kairosfocus fashion, by tens of thousands of words -- over 10,000 from kairosfocus in this thread alone!). Those who examine that earlier history, however, will find that the term latching arose from the primeval soup of words sometime after kairosfocus was asked to back up a claim that Weasel searched for and fixed correct letters. The species L. explicitus and L. implicitus evolved sometime after that, with L. implicitus giving rise to the closely related subspecies (some would say it's the same species) L. quasitus. David Kellogg
Atom wrote that one (or maybe Apollos), but they had confused other factors into their response. Let's not dig up the past. How is
if the letter is incorrect, p(mut) = p; if the letter is correct, p(mut) = 0
simpler than
p(mut) = p?
hazel
hazel: The programmers among us have testified to that fact, that an explicitly latched version of Weasel is simpler than an implicit one. And, remember that we do need to do a bit of parameter tuning through runs to get the implicit latching effect too. GEM of TKI kairosfocus
Onlookers (and DK): Let's clarify.
1 --> Is there o/p latching in Weasel 1986? [Me -- per LOLN, yes! DK et al originally, no -- Hazel being a notable exception. Evidence of Atom's sims: very probably so.]
2 --> Are there various ways to explain such o/p latching, e.g. by explicit and implicit latching? (Yes -- but DK et al thought that mechanism T3, IMPLICIT latching, was non-latching. [Period. Onlookers cf 39 and 41 supra.])
3 --> Do explicit letterwise and implicit latching differ? (Obviously: the former partitions letterwise and explicitly masks off successful guesses, letter by letter. The latter -- by definition -- arrives at o/p latching [what was to be explained . . . ] by co-tuning of pop per generation, mutation rate and proximity-only filter, in light of the binomial statistical distribution under LOLN. [Far-skirt members are regularly or reasonably expected to show up only when there is enough in a sample to make that reasonable. Criterion: N*p --> ~ 1 or so.])
4 --> So why is it now a big thing to pretend and argue as though the undersigned needs to concede that this difference [which I pointed out from the outset in defining the two mechanisms] exists?
5 --> ANS: Simple, onlookers: We have now demonstrated implicit latching by actual simulation, so there can be no debate on its reality and credibility. This moves beyond arguments to what is more persuasive and hard to deny: directly observable fact. Facts that in fact underscore that a much objected to argument was correct all along.
6 --> So, do we see a serious response to the now confirmed balance on the merits? (Sadly, no. History is being re-written counterfactually before our eyes.)
7 --> So let's roll the tape briefly:
DK, 39: I think “implicit latching” is a way to avoid saying “non-latching.” SK, 41: So-called “implicit latching” (which as David points out really means “non-latching”) . . . DK, 315: kairosfocus, again I say: what you call “implicit latching” means non-latching at the mutation level. Can you bring yourself to say that? Say it yourself: There is no latching at the mutation level.
8 --> In short, what was never in dispute about how implicit latching works [i.e. why it is implicit as opposed to explicit!] is now trotted out as a qualification ex post facto, and put up rhetorically as though I would have to now concede it. Sad. And sadly revealing. GEM of TKI kairosfocus
KF writes, "Additionally, it [explicit latching] is the easiest to develop an algorithm for and to code." NO! NO! NO! In 136, I wrote, and you eventually agreed, that the only difference between the two cases is
In the implicit case, for each letter: p(mut) = p
In the explicit case, for each letter: if the letter is incorrect, p(mut) = p; if the letter is correct, p(mut) = 0
How in the world can you say that the explicit case is simpler when it includes a longer conditional statement where the implicit case has a simple one-line rule? I dare you to respond to just this one question, preferably in under 300 words. Take this on as a challenge, kairosfocus - an exercise in restraint. Don’t mention any other topic, all of which you have posted on at least a dozen times, and just address this issue. Good luck. hazel
kairosfocus, again I say: what you call "implicit latching" means non-latching at the mutation level. Can you bring yourself to say that? Say it yourself: There is no latching at the mutation level. Feels good, doesn't it? I maintain that "implicit latching" is an obscuring term that can do nothing to explain how the program (a) does not latch at the mutation level, and (b) works anyway. Throughout this discussion, critics of Weasel have repeatedly failed to understand it because they have failed to understand in what sense it (however crudely) models evolution. To do what it says, Weasel must model:
1. Random variation at the level of the gene
2. Nonrandom selection at the level of the individual
3. Competition among individuals in a population.
The only form of Weasel that satisfies these is one where the letters don't latch. Why make up the term "implicit latching"? The only thing it does is confuse points (1) and (2). Repeat: in Weasel, mutation is and remains random at the level of the letter. That's the only thing that makes sense. David Kellogg
PS: I need to amplify a point. I should add that the best explanation, per the evidence in hand, for the difference between the published Weasel program o/p of 1986 and the videotaped run of 1987 remains that Mr Dawkins modified the program. Specifically, on the model of implicit latching circa 1986, the best explanation for the 1987 program -- which behaves radically differently from the 1986 o/p [i.e. reversions are frequent, as opposed to credibly absent, and it seems to run for a much larger number of generations] -- is that the parameters for population size and mutation rate were shifted in ways that detune the program. [Cf the various runs I did above.] And that is not an "accusation" -- much less an unwarranted accusation -- on my part; it is a reasonable explanation on evidence of radically different behaviour. kairosfocus
h --> I have, finally, ALWAYS said that the latching question is a secondary defect of weasel, but has significance as it is so obvious and points to the PRIMARY defect: Weasel is a case of targetted search that fails to properly address the need to get to the shores of islands of complex information rich function, in a sea of non-function. i --> Indeed, from the original thread where this all came up last December, I stated [and as was cited repeatedly in the current multi thread exchange, including at 64, 177 and 268 supra, so DK has little excuse for this]:
[107:]the problem with the fitness landscape [i.e. as envisioned for the biological world] is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. [Notice how this has never been seriously addressed: getting to body plan no 1, with credibly 600 k bits or so of bio-information as the threshold of functionality, i.e a config space ~ 10^180,617; and onward for body plans requiring 10's - 100's of mega bits of increments of functional information] So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy. But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [Notice, what is central to the issue, right from the outset.] [111, excerpted paragraph used by GLF in his threadjack:] Weasel [i.e. as published in 1986] sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. [If you doubt this, simply observe the o/p . . . ] That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
3] Hazel, 304: There is no mention [in TBW] of an explicit latching rule. There are statements that pretty well support that understanding, as may be seen from looking at 285 and Joseph's remarks. Per Weasel 1986, absent the testimony as reported to us in recent days, explicit latching is a very reasonable understanding. (Indeed, the Monash University biologists naturally understood it that way until Mr Elsberry "corrected" them.) And of course the point is, that while latching is very much a secondary defect of the Weasel 1986 program, it points like a signpost to the primary defect: targetted search without reference to a reasonable threshold of functionality. Intelligent design, with foresight built in, being presented in a rhetorical context that seeks to make a claimed BLIND -- non foresighted -- watchmaker seem credible. In short, the entire exercise in 1986 was fundamentally misleading. [Cf 285 for why I say that.] I think that a better expenditure of effort on the part of those who have laboured long and hard to try to justify it, would be to acknowledge that Weasel is fundamentally misleading, was known to be so from the outset [cf 285 above] and should never have been used. It should be withdrawn -- and, frankly, apologised for. 4] Atom, 311: In nature, the "current" fitness function is based on the organism itself, the random environmental factors, the other organisms in its biosphere, etc. It is a function of many inputs. So for a specific organism, which is just a "permutation" of possible traits, we can assign a "fitness" value to it: more or less how many organisms it successfully brings into the next breeding generation. Perhaps the "fitness function" at the time can assign two different values for the same "permutation", which is where random events come into play, but that can just be modelled as moving between two similar fitness reward matrices. Physically instantiated fitness functions change over time, sometimes in complex ways, and can be density dependent, etc. Yes. I just note that the further complication is that this starts from a functioning organism and body plan. The first question is to get TO that functioning body plan, from the very first one. And in the light of the credible degree of complexity that obtains, e.g. as measured by the bit capacity of the DNA, about 600 k bits. And, onward, to get to new body plans, 10's - 100's of mega bits is credible. This, in my considered opinion [cf my always linked], is the central problem/weakness with the whole Darwinist scheme for macroevolution and the associated spontaneous origin of life models. GEM of TKI kairosfocus
Onlookers: It seems that some further remarks are needed, in light of onward fairly sharpish comments by esp. DK, and some telling associated silences on key points that were up to recent days central objections [e.g. on law of large numbers]. Of course, let us first underscore: the MAIN problem with Weasel is that it is targetted search, which does not properly reckon with the need to first achieve complex functionality before one can hill-climb to optimal function, so it ducks the Hoylean challenge. That is, natural selection etc. are not capable of creating function that does not exist; they can only describe what happens when populations with differential fitness to environment interact, i.e. the currently fittest survive and reproduce themselves. Weasel, as Mr Dawkins knew from the outset, sadly, is and was always fundamentally misleading. [Cf my remarks on his statements at 88 and as reproduced at 285; which document precisely what I just claimed.] A few footnotes on further comments: 1] DK, 305: What people here call "implicit latching" turned out to be trivial once it was clear what you were talking about (that is, no latching at the mutation level) Of course, a main objection maintained for weeks was that there was no latching in Weasel 1986's o/p, much less anywhere else. And, my adverting to the significance of the Law of Large Numbers to infer o/p latching in the data as published by Mr Dawkins in 1986 was supposedly illustrative of my ignorance. VANISHED, unacknowledged, as though it never were, now that thanks to Atom, we have been able to replicate the circumstances and show that this is a very reasonable conclusion on the published o/p circa 1986. (But in fact, had the import of LOLN been acknowledged from the outset, this would not have ever been an issue.) Next, I have for weeks been repeatedly very, very explicit and specific that the issue is to explain the Weasel 86 o/p latching, for which mechanisms T2 -- EXPLICIT, and T3 -- IMPLICIT latching were reasonable candidates. On the data in hand circa 1986, explicit latching is a very reasonable interpretation, and it is the simplest way to account for what was said in the text as can be seen at 285. Additionally, it is the easiest to develop an algorithm for and to code. Also, if we conceive of the target on a letterwise basis, achieving it by random search per letter with proximity filtering is not materially different from doing so on a phrasewise basis. And, indeed the letterwise interpretation fits in very well with "cumulative selection" that "chooses the one which, however slightly, most resembles the target phrase." However, on being told that c. 2000 Mr Dawkins said that he did not explicitly latch on a per letter basis, we have accepted that T3, implicit latching, is the best explanation, and we have been able to materially replicate the situation again courtesy Atom. (The existence and capacity of such implicit latching was also objected to; it was hardly a trivial matter. Indeed, DK is on record, among others, as saying, in effect, that implicit latching = non-latching, specifically, that it was allegedly a way to AVOID saying non-latching. [Cf 39 and e.g. Skeech's response at 41].) It is thus fair comment to say that the above has not been a "trivial" exercise. Implicit latching is real -- currently observed, and [as described and predicted by the undersigned and Joseph] it arises from the interaction of per letter mutation rate, population size, and a filter that rewards mere proximity.
And, of course, it is precisely because it arises from interaction across three factors that it is different from use of a mask whereby, once letters achieve their target, they will be preserved directly and explicitly. In short, the excerpt being commented on is an attempt to word a concession as if it were a victory. Sad. 2] explicit latching radically misconstrues what the program was trying to show . . . . The accusation for years was that Weasel explicitly fixed letters. kairosfocus insisted this was the best explanation, accused Dawkins of modifying the program between 1986 and 1987, and only backed down (kind of) when told Dawkins never used explicit latching. Recently he's said the issue is unimportant, but it sure was important for him earlier. Highly misleading, ad hominem-laced mischaracterisation of what has happened over the past weeks: a --> It is a very reasonable, natural and simple understanding of Weasel 86 o/p, that it latches letters; especially given where 200+ of 300+ potentially changeable sampled letters go right then stay right. b --> That this is so was stoutly resisted for weeks and, now that it has been abundantly vindicated thanks to Atom's simulation, there is a studied silence on the matter, especially of the significance of the law of large numbers. (Just scroll up and look at the way my darts and charts thought experiment was objected to.) c --> On looking at the evidently latched o/p, the question that arises is mechanism, and when that came up as an issue, I proposed two: T2 (explicit) and T3 (implicit) latching. [T1 of course, was the "select all at once" option, which in principle [per lucky noise] can replicate any other pattern of champions in a run; in praxis, it most likely will never achieve the target in any reasonable length of time -- its relevance lying in that the 28 letter phrase is well below the reasonable threshold of first life function, 600 k bits. In short, what Mr Dawkins dismissed as "single step" mutation is the most biologically credible case of getting to FIRST function. Hill climbing can only begin when you already have function.] This of course sets up an inference to best, empirically anchored explanation exercise. d --> As I will give more details on in responding to Hazel, the simplest explanation on the published run excerpts and statements of Mr Dawkins circa 1986 [cf 285 supra and focus on the implications of rewarding the slightest increment in proximity multiplied by remarks on cumulative selection and on the pattern of o/p where we see not one reversion of a letter after hitting target], is that the program explicitly latched. e --> However, implicit latching was still a feasible mechanism [as an alternative explanation], and when a statement was presented to us that Mr Dawkins c 2000 denies explicitly latching the program, it has become the best explanation on preponderance of evidence. f --> This is, of course, not a definitive proof. (Only credible code would be decisive; e.g. Apollos has been able to show that an explicitly latched version of Weasel can show reversions, once they are written in.) g --> Nor is such a response to evidence per the provisionality of empirical reasoning properly characterised as making [presumably, ill-founded] accusations. [ . . . ] kairosfocus
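For onlookers who would rather see the mechanism under dispute than argue over labels, here is a minimal sketch of the kind of program at issue: per-letter mutation with no mask on correct letters, and promotion of the closest "progeny" phrase each generation. It is not Dawkins' code; the 50-member population and 4% per-letter rate are simply figures mentioned elsewhere in this thread, and the reversion counter is added only to track the latching question.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(phrase):
    # Proximity filter: number of positions that already match the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate):
    # Every position may mutate -- there is no explicit mask on correct letters.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(pop_size=50, rate=0.04, max_gen=100000, seed=None):
    random.seed(seed)
    champ = "".join(random.choice(ALPHABET) for _ in TARGET)
    reversions = 0
    for gen in range(1, max_gen + 1):
        correct_before = [i for i, c in enumerate(champ) if c == TARGET[i]]
        progeny = [mutate(champ, rate) for _ in range(pop_size)]
        champ = max(progeny, key=score)
        # A "reversion" here means a champion letter that matched the target
        # last generation but no longer does.
        reversions += sum(1 for i in correct_before if champ[i] != TARGET[i])
        if champ == TARGET:
            return gen, reversions
    return max_gen, reversions

gens, revs = weasel(seed=1)
print(f"target reached in {gens} generations with {revs} champion-line reversions")
```

With settings in this neighbourhood the champion line typically shows few or no reversions even though every letter remains free to mutate; raising the mutation rate or shrinking the population makes reversions routine, which is the parameter interaction being argued over above.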
iskim labmildew, You seem to be getting ahead of the argument, at least in terms of what is presented in the Weasel 2.0 documentation. First, there is a formal/mathematical point and later a physical/biological interpretation. My goal with Weasel is to explore the formal structure of fitness space in hopes of perhaps later shedding light on the physical implications for biology. But lots of work and experiments need to be done before we get to that step. I should begin by explaining my nomenclature more fully. As you know, a function simply takes a set of inputs and maps it to a set of outputs. In our case, we have a set of string permutations as inputs, and we can assign each permutation a "fitness" value between 0 and F. Let's start simple, with a two bit string and a two value fitness function. This creates a mapping, which I call a Reward Matrix (since we're rewarding strings based on this value and since it is a matrix, similar to a truth-table row in Symbolic Logic. You can also refer to the rows as "reward vectors", which may be more descriptive, but for now I'll refer to each row as a reward matrix, since you can write them in two-dimensional matrix form as well.) The complete table for two bit strings is this:

11 10 01 00   Function
---------------------------------
 0  0  0  0   (2)f0
 0  0  0  1   (2)f1
 0  0  1  0   (2)f2
 0  0  1  1   (2)f3
 0  1  0  0   (2)f4
 0  1  0  1   (2)f5
 0  1  1  0   (2)f6
 0  1  1  1   (2)f7
 1  0  0  0   (2)f8
 1  0  0  1   (2)f9
 1  0  1  0   (2)f10
 1  0  1  1   (2)f11
 1  1  0  0   (2)f12
 1  1  0  1   (2)f13
 1  1  1  0   (2)f14
 1  1  1  1   (2)f15

As you can see from the table, for a two bit string with two possible fitness values, you have exactly 2^(2^2) = 16 possible reward matrices. (Each row represents the mapping of the inputs to a set of outputs, using one fitness function.) I labeled the fitness functions as follows: (fitness_values_base) f Vector_Value. The first part in parentheses is the number of possible fitness values, 0 to F. In our case it is (2). Then "f" signifies it is a fitness function, and the number that follows is the reward matrix number in that base. For example, (2)f5 would mean: take the base-2 representation of 5, which is 0101, and this is the output row of assigned fitness values. This notation works for all your permutation numbers and bases; if your input string gets more characters, you simply write them in alphabetical order, with the "lowest" string permutation in the far right column. (In our case 00 is the lowest.) If we want to specify the base of our input string in the notation, we could do so like this: (4,2)f0. If not, then we assume that all values to the left of the most significant bit will be zero. For example, when looking at the (2)f0 row for a two bit string (0000), this is identical to the (2)f0 row of a four bit binary string, except there are zeros for the leftmost, higher order bits (00000000). So noting the base of the input string is unnecessary, though you can note it. My notation scheme is arbitrary, but it allows us to talk about specific fitness functions and the "fitness space" in a formal way. We can get any function, for example (4)f15, and construct the unique reward matrix row from this: 0033. Now that we see all the possible fitness value assignments, we can ask "How many of these improve our search? How many of them hinder it?" This is what Weasel 2.0 allows us to begin to explore (and some other sims in the works will help explore in an easier manner.) So far the problem is strictly formal/mathematical and calls for numerical and/or empirical analysis.
Now, you bring up the question of physically instantiated fitness functions. This is a different question and is beyond what my work at this point is ready to fully address. But in general, given any number of discrete inputs, there are a fixed number of possible fitness value assignments, if your fitness values are discrete and bound by a limit. (For example, 0 through F.) In nature, the "current" fitness function is based on the organism itself, the random environmental factors, the other organisms in its biosphere, etc. It is a function of many inputs. So for a specific organism, which is just a "permutation" of possible traits, we can assign a "fitness" value to it: more or less how many organisms it successfully brings into the next breeding generation. Perhaps the "fitness function" at the time can assign two different values for the same "permutation", which is where random events come into play, but that can just be modelled as moving between two similar fitness reward matrices. Physically instantiated fitness functions change over time, sometimes in complex ways, and can be density dependent, etc. So at each moment, we can see that one of the possible fitness functions will be instantiated, and will assign the corresponding fitness values to organisms ("permutations") present. (This is not entirely correct, since the organism itself does the reproducing, thus the "assignment" of fitness value, but the environment and other organisms play the part in controlling how successfully the organism can survive and reproduce.) Sticking with our metaphor, you examine a particular insect and the shape of its reproductive organ and see that it must have a certain shape to be able to successfully copulate with members of the opposite sex; so at that time, given the input of that organism's traits (permutation), and the traits of females as additional input, that should be assigned a high fitness value. However, if the shape of the female becomes different, then that same trait permutation will get a low fitness value. So fitness functions change over time and the same trait can be assigned a high value or a low value depending on the fitness function in place at that time (which is dependent on the other inputs, the laws of physics, etc.) So far I have not disagreed with you, I have just clarified. So are some reward matrices uninstantiateable? Perhaps. This is an empirical question that should be explored. If, however, we are limited in the number of physically instantiateable fitness functions, then that may directly limit what traits we can effectively select for, if not every fitness function allows us to have a successful search for a target. The more primary question is this: How many of the possible fitness functions allow for success in a specific search? If we find over and over again that only a few fitness functions will help us find our target (namely those with high information about the specific target), then this would have obvious implications for biology. As of right now, the more basic research hasn't been done (as far as I know.) I simply don't know the answer to a lot of those questions, but I'm beginning to investigate them. Hopefully Weasel will allow others to begin exploring and experimenting with me. Atom PS Sorry for the long post PPS I refer to the output matrix rather than simply talking about fitness functions, since for any input there are an infinite number of functions that can map it to that same output. (Just keep adding useless parts to your function/circuit.
This is why we do circuit reduction, since a simpler circuit may be able to achieve the same logic, meaning map the same input to the same output.) Talking about the mappings themselves, therefore, is more precise. Atom
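For readers who want to play with Atom's bookkeeping, here is a short sketch of the enumeration he describes, restricted to binary input strings. The function name and layout are my own illustrative choices, not part of Weasel 2.0; only the (base)fN labelling convention is taken from the comment above.

```python
from itertools import product

def reward_matrices(string_bits=2, fitness_base=2):
    # Enumerate every assignment of a fitness value in {0 .. fitness_base-1}
    # to every input permutation, i.e. every "reward matrix" row in Atom's notation.
    inputs = ["".join(bits) for bits in product("10", repeat=string_bits)]
    inputs.sort(reverse=True)  # Atom lists "11" first and "00" in the far right column
    n_rows = fitness_base ** (2 ** string_bits)
    for row_number in range(n_rows):
        # Write row_number in base `fitness_base`, padded to one digit per input.
        digits, n = [], row_number
        for _ in inputs:
            digits.append(n % fitness_base)
            n //= fitness_base
        digits.reverse()
        label = f"({fitness_base})f{row_number}"
        yield label, dict(zip(inputs, digits))

for label, row in reward_matrices():
    print(label, row)  # 16 rows for a 2-bit string with 2 fitness values
```

Running it reproduces the sixteen (2)f0 through (2)f15 rows of the table above, with the "11" column first and "00" last; raising fitness_base or string_bits shows how quickly the space of possible reward matrices grows.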
Clarification: "It" in the "It preserves" [309] refers to Weasel, not to Clive's question. David Kellogg
Clive [307], your question makes no sense. It preserves the closest phrase among the progeny. That phrase is likely to conserve correct letters but does not have to in any particular generation. That's how Weasel works. David Kellogg
Clive, I don't know whether you have been paying close attention to the discussion or not, but everyone agrees that it knows what is a correct letter by comparing each phrase to the target phrase. There has never been any controversy about this aspect of the situation. The issue has been about the details of how and when in the program this happens. hazel
David Kellogg, "Of course it will tend to conserve correct letters. Again, that was part of the point." How does it know what is a correct letter? How does it know to "conserve" any letter? It seems to me that if you didn't already have the whole phrase in mind, you wouldn't get any correct letters, because you would have nothing to approximate towards "conserving." Clive Hayden
Correction: for "focus" read "insistence" -- "I submit that the longstanding insistence on what we're now calling explicit latching demonstrates that people didn't understand what Dawkins was saying." Obviously those who wanted to correct it had to focus on it to do so. David Kellogg
What people here call "implicit latching" turned out to be trivial once it was clear what you were talking about (that is, no latching at the mutation level). That's what makes the program work: it moves the phrase (not the letters) toward the target at the selection level. Why was refuting explicit (mutation-level) latching important? Because explicit latching radically misconstrues what the program was trying to show. Explicit latching both (a) runs counter to the plain sense of the BW text, and (b) misrepresents the function of this sort of (admittedly minor) exercise. The accusation for years was that Weasel explicitly fixed letters. kairosfocus insisted this was the best explanation, accused Dawkins of modifying the program between 1986 and 1987, and only backed down (kind of) when told Dawkins never used explicit latching. Recently he's said the issue is unimportant, but it sure was important for him earlier. I think it became unimportant when he had to give up the claim of explicit latching. I submit that the longstanding focus on what we're now calling explicit latching demonstrates that people didn't understand what Dawkins was saying. Now, I don't always agree with Dawkins: in the fights between Dawkins and Gould, I was usually more on Gould's side. But obscurity is not one of his faults. Whatever else he is, Dawkins is a clear writer. On his worst day he writes more clearly than kairosfocus at his best. I submit that ID proponents' long-term insistence on explicit latching and the current obfuscation over "implicit latching" suggest a greater failure of understanding. In short, Weasel is hard for you to understand as an analogy for evolution because you don't understand evolution. I wish it were otherwise. I wish the problems were merely ones of interpretation. But I think your repeated attention to the "gnat" of Weasel obscures the big conceptual log you are ignoring. David Kellogg
Joseph -
1. Dembski and Marks assumed a latching rule was in place
2. Here is what people have provided to show that a latching rule is not what Dawkins used
a) Direct quotes from TBW where Dawkins describes the process. There is no mention of an explicit latching rule.
b) Programs, including Atom's, that show that explicit latching is not necessary to produce the kind of results shown in TBW
c) Quotes from elsewhere in the book (See 263) that show that Dawkins knows that mutation is random in respect to fitness.
With this said, how can you possibly say,
To date not one person has provided anything that would show that the inference made by Dembski and Marks is wrong.
Clearly people have provided a great deal of evidence that Dembski was wrong. You can say that you don't accept the arguments provided, but you can't possibly say that no one has provided any. hazel
Joseph Appreciated. Just, we need to maintain an example of civility, if we are to have a reasonably merit based forum. [Believe you me there are any number who would want to trot up crying hypocrite at us if they could.] GEM of TKI PS: Hazel: per the substance of what we have from 1986, Joseph is right. It is because of additional information, i.e. a report of testimony, that I take implicit latching on preponderance of evidence; which does not explicitly lock up letters, but the dynamics I just explained -- again -- to DK show why implicit latching will latch. So, Joseph is right again on that mechanism: once a letter goes right under the right circumstances it will not normally be knocked off its perch -- you have to push out into the skirts far enough for that to happen with quasi-latching. (Notice, the preponderance conclusion is not at all "beyond reasonable doubt" or "to moral certainty" or the like strongest degree of empirical warrant.) kairosfocus
KF, My apologies, I do not have your patience. As a hockey player I just want to jam. But that is me... Joseph
To date not one person has provided anything that would show that the inference made by Dembski and Marks is wrong.”
I have. Repeatedly. And others have.
Can you reproduce those quotes? I haven't read any.
You keep asking whether people have read the book, and challenging people to provide quotes that support their position.
That is how it goes.
I challenge you to provide a direct quote, from the book, that makes you think that the program explicitly locks letters so that once a “best fit” phrase is selected in a generation the correct letters in that phrase will always be passed to its children (Dawkins calls them progeny), without exception.
I take it you have reading comprehension issues. That was never my point. My point is and always has been that with cumulative selection once something (useful) is found the search for it is (essentially) over. And that is what a partitioned search is. Joseph
Joseph writes, “To date not one person has provided anything that would show that the inference made by Dembski and Marks is wrong.” I have. Repeatedly. And others have. You keep asking whether people have read the book, and challenging people to provide quotes that support their position. I challenge you to provide a direct quote, from the book, that makes you think that the program explicitly locks letters so that once a “best fit” phrase is selected in a generation the correct letters in that phrase will always be passed to its children (Dawkins calls them progeny), without exception. Please quote the exact text, with page numbers. Thanks. hazel
PS: Joseph, thanks on the substantial matter, but please . . . on words. kairosfocus
Joseph @283
I challenged anyone to provide the reference quote from TBW which would show that with a cumulative selection process once a matching letter is found the program keeps looking for it. To date not one person has provided anything that would show that the inference made by Dembski and Marks is wrong.
That statement does not reflect the facts of the matter. David Kellogg has posted a link to a line by line refutation of your claim. Further, several people, including myself, have noted the key lines from The Blind Watchmaker:
It now 'breeds from' this random phrase. It duplicates it repeatedly, but with a certain chance of random error - 'mutation' - in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.
This is the essence of Dawkins' description. There is no reasonable interpretation of these words that suggests that letters are "latched." In fact, the term "random error" suggests exactly the opposite. Joseph @284
Your selective quoting of TBW didn’t capture what he stated after-
I quoted the entire description of the Weasel program. No quote mining, nothing "selective."
That being that cumulative selection is a process of slight improvements- however slight.
Please provide the quote to which you are referring and explain how it is pertinent to this discussion. Dawkins' words about the Weasel algorithm are very clear, and they show that your interpretation is untenable.
IOW Jay you are being very deceptive which is to be expected.
Someone is certainly trying to be deceptive here, on that we agree. Joseph @286
How can you blame Dawkins for NOT mentioning things that his program was NOT doing, or for not having the foresight to anticipate that Dembski would mistakenly make incorrect assumptions about the program.
Did you read the book? He makes it clear that once something is found the search is over.
Provide the quote supporting your claim or retract it. After all, you wouldn't want to be seen as lacking in intellectual integrity or being deceptive, would you? JJ JayM
DK: Ignoring for the moment a long list of issues you have to address in light of the runs in evidence above, I think you need to look a bit closer:
The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL.
1 --> We are dealing with discrete state things, where the element is the 27-state character, from the set {A, B, C, . . . Z, * [for space]} 2 --> In that context, the slightest increment possible is that there is a one-letter improvement in proximity to the target. 3 --> Next, we are dealing with runs where we take 40+ and 60+ as published, which means that about 1/2 the time, no-change wins. [That no-change may win, we can see from the runs of close proxies to Weasel c. 1986 on implicit latching above.] 4 --> We may freely infer that no-change cases are frequent in the underlying per generation populations of "mutant NONSENSE phrases" from which the champions come; at least for the relevant published runs and close enough cases. 5 --> Thus only improvements of one or more letters, or substitutions [cf. cases in point above] may win by competition with the no-change cases. (As well they may occur together: substitution plus improvement.) 6 --> However, we are dealing with a Bell or reverse J distribution. So, multiple correct mutants will be relatively rare, far skirt cases, and though substitutions are possible, they too will be rare, comparatively. 7 --> So for a certain range, we will see latching, and as pop size and/or mutation rate rises, we can then see quasi-latched cases, then much less cumulative cases. [In one run it was amusing to see how, several times, a letter would go right while another in parallel lost its correct state, and then the program would flounder until the next substitution, on and on till finally we got the last letter in place.] 8 --> In short, letterwise latching is there in the text of TBW, just look carefully in light of the fact that we are dealing with a discrete state, i.e. digital context. GEM of TKI kairosfocus
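A back-of-envelope calculation may help onlookers weigh points 3 to 7 above. The sketch below treats letters as independent and uses the 4% per-letter rate and 50-member population mentioned elsewhere in the thread, with 14 of the 28 letters already correct as an arbitrary midpoint; all of those figures are assumptions for illustration, not numbers taken from Dawkins.

```python
def generation_odds(mu=0.04, pop=50, length=28, correct=14, alphabet=27):
    # Rough, independent-letter approximations for one generation of the
    # no-mask program. A mutation re-draws uniformly from the alphabet, so a
    # hit on a correct letter actually changes it with probability
    # (alphabet - 1) / alphabet, and a hit on an incorrect letter fixes it
    # with probability 1 / alphabet.
    p_lose_letter = mu * (alphabet - 1) / alphabet
    p_fix_letter = mu / alphabet
    p_copy_keeps_all = (1 - p_lose_letter) ** correct
    p_copy_gains_one = 1 - (1 - p_fix_letter) ** (length - correct)
    # Chance that some copy in the population keeps every correct letter AND
    # improves -- the kind of copy a mere-proximity filter tends to promote.
    p_clean_improver = 1 - (1 - p_copy_keeps_all * p_copy_gains_one) ** pop
    # Chance that no copy in the population improves at all, so an unchanged
    # (or reverted) copy wins the generation.
    p_no_improver = (1 - p_copy_gains_one) ** pop
    return p_copy_keeps_all, p_clean_improver, p_no_improver

print(generation_odds())
```

With those inputs, roughly a third of generations produce no improving copy at all, roughly 45% produce a copy that both keeps every correct letter and gains at least one more, and any single copy keeps all of its correct letters a bit over half the time -- the sort of regime in which the champion line can look latched even though nothing is masked.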
David Kellogg:
Of course it will tend to conserve correct letters. Again, that was part of the point.
And that is what I have been saying. Therefore the inference is once a matching letter is found the search for it is (essentially) over. In a partitioned search once a matching letter is found the search for it is (essentially) over. IOW I fail to see the big issue that has the anti-IDists pissing themselves. Joseph
Of course it will tend to conserve correct letters. Again, that was part of the point. David Kellogg
The program does it as a matter of course given the proper parameters. That is the whole purpose behind cumulative selection- once something is found the search for it is over.
Why would a program not halt when it hits its goal?
A program halts when it is supposed to or if there is a bug in the system or program. Ya see once the target phrase is reached the search is over- just as I have been saying. Otherwise I will have to infer that you know as much about this as you do about nested hierarchies- and you have proven you don’t know anything about NH.
I freely admit I am no expert.
Yes I know.
Why waste the A-team
You need all the help you can get. Joseph
David:
Joseph, the program selects for the closest phrase. It does not select for letters.
LoL!!! It's the LETTERS which bring the output closer to the target. And given the proper parameters once a matching letter is found it will never change. That is the purpose of cumulative selection- once something is found the search for it is over. That is the difference between CS and random selection.
Joseph
Right Dawkins did not fix the letters.
Not even implicitly? In a quasi fashion?
The program does it as a matter of course given the proper parameters. That is the whole purpose behind cumulative selection- once something is found the search for it is over.
Why would a program not halt when it hits its goal?
If you think otherwise please provide the relevant quote or quotes from TBW.
As I don't...
Otherwise I will have to infer that you know as much about this as you do about nested hierarchies- and you have proven you don’t know anything about NH.
I freely admit I am no expert. Why waste the A-team Alan Fox
Mr Fox, the issue is not that I have cited quotes from Wiki’s attempted defense of Mr Dawkins [which quotes have been publicly inadvertently confirmed for onlookers by one from your side . . .], but the cites themselves and what they ever so painfully obviously mean.
The quotes you quote are accurate. I asked if you had read the book, as there is much more in chapter 3 than is quoted in Wiki. It might give you more context, and prevent you from drawing erroneous conclusions from extracts. So, I take it you haven't read TBW, then. Alan Fox
Addendum: That "Weasel" tends to keep correct letters while selecting only for closer phrases is, of course, the pedagogical point. It shows the interaction of two processes, one random (mutation) and one nonrandom (selection). David Kellogg
Joseph, the program selects for the closest phrase. It does not select for letters. That is utterly clear in the 1986 text of TBW:
The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL. (Emphasis added)
I have no idea how anybody got the idea that it selected for correct letters: not from the 1986 text. However, once arrived at, that idea got fixed -- explicitly latched, one might say -- in the minds of Dawkins's opponents. David Kellogg
Alan, Right Dawkins did not fix the letters. The program does it as a matter of course given the proper parameters. That is the whole purpose behind cumulative selection- once something is found the search for it is over. If you think otherwise please provide the relevant quote or quotes from TBW. Otherwise I will have to infer that you know as much about this as you do about nested hierarchies- and you have proven you don't know anything about NH. Joseph
PS: What an issue to have to deal with on a day where one of several money quotes is a Roman Governor dismissively saying "What is truth?" even as he embarks on known injustice. kairosfocus
hazel:
How can you blame Dawkins for NOT mentioning things that his program was NOT doing, or for not having the foresight to anticipate that Dembski would mistakenly make incorrect assumptions about the program.
Did you read the book? He makes it clear that once something is found the search is over. That is the purpose of cumulative selection. Joseph
Mr Fox: I have used the citations of Ch 3 TBW, in Wikipedia as below, as this will prove to be a case of admissions on two levels against interest -- [1] by Mr Dawkins the author and again [2] by Wikipedia who cite him in trying to deflect criticism [NB: the person who typed out a long cite from the same chapter corroborates the accuracy of the Wiki cites]: Excerpting 88 supra with my comments in sq brackets on points, emphases put in again, probably a bit different from in 88: ________________ I don’t know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [--> that is, he KNEW that the issue is want of search resources to access complex functionality, which is Hoyle's challenge] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence ‘Methinks it is like a weasel’, and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . [--> Biosystems often have DNA of storage capacity comparable to Shakespeare's corpus, i.e he knew he was making a toy example pointing away from the challenge. The red herring has begun to drag away from the trail of truth.] We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. [--> And in the real world, what is a credible incidence of mutations,and what fraction of these are credibly beneficial? --> What fraction give rise to novel body plans? With what empirical basis? --> And, that starts with the first body plan, including the DNA - RNA - ribosome enzyme programmable, algorithmic information processing system in the cell] The computer examines the mutant nonsense phrases, [--> the issue of getting to shores of functionality has just been begged without even a pause to note on what that shift in focus does to the relevance of Weasel to OOL and origin of body plans --> Namely it means Weasel is now of zero relevance to the issue Hoyle et al raised: getting TO complex function based on information rich molecules] the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . [ --> targetted search rewarding mere proximity without any credible threshold of function --> Ideas of fitness functions are therefore irrelevant, and equivocate off proximity to target vs the sort of algorithmic functionality DNA etc [including of course epigenetic structures . . DNA underestimates the info required . .. ] drives for first life and major body plans –> targetted and with programmed choice, so foresighted] The exact time taken by the computer to reach the target doesn’t matter. [--> oh yes it does, as the realistic threshold would credibly never get done in any reasonable time, much less a lunch time] If you want to know, it completed the whole exercise for me, the first time, while I was out to lunch. It took about half an hour. (Computer enthusiasts may think this unduly slow. The reason is that the program was written in BASIC, a sort of computer baby-talk. 
When I rewrote it in Pascal, it took 11 seconds.) Computers are a bit faster at this kind of thing than monkeys, but the difference really isn’t significant. [--> Distractive] What matters is the difference between the time taken by cumulative selection, [--> thus, ratcheting and latching, as observed in the 1986 o/p . . . and decidedly not in the 1987 o/p --> cumulative, programmed selection that ratchets its way to a target, rewarding the slightest improvement in proximity of "nonsense phrases," without regard to realistic thresholds of function . . . ] and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: [--> Strawmanised form of the key objection: Mr Dawkins is ducking the issue of getting to shorelines of functionality] about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . [--> he KNOWS -- or, should know (which is worse) — that a realistic threshold of functionality is combinatorially so explosive that the search is not reasonable –> but good old Will the Spear shaker with feather pen in hand probably tossed it off in a couple of minutes by intelligent design –> So he is pointing away from the most empirically credible explanation of FSCI] Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. [--> if you know from the outset that an exercise in public education is "misleading in important ways," why do you still insist on using it? --> Other than, it is the intent to make plausible on the rhetoric what would on the merits be implausible?] One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, [--> He knows -- from the outset -- that promotion to generation champion based on proximity without reasonable criteria of functionality is misleading in important ways!!!!!!!!!!!!!] the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. [--> he knows that this artificially selected, targetted search without reference to functionality is irrelevant to the issues over the origins of information rich systems in life] Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, [ --> That is he knows that he has used artificial selection off proximity to a desired future state, not natural selection based on differential functionality, begging the question of origin of function --> thus, the underlying question of the BLIND Watchmaker creating complex information rich functionality at the threshold of realistic function is being ducked and begged . . . ducking Hoyle's Q and that asked by ID] although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [--> that is, he knows right from get-go, that he has begged the question bigtime, but he obviously thought his rhetoric would work. --> From abundant evidence, that is all too well -- albeit cynically [I doubt that "weasel" is an accident; this paragraph being an exercise in weasel words] — judged.] 
____________________ The cited words are so credibly beyond dispute, and make the point once their meaning is brought out by proper emphasis and comment. Weasel is indefensibly highly misleading and should not ever have been used. ESPECIALLY IN AN EDUCATIONAL CONTEXT, AS THOSE IN NEED OF EDUCATION ARE BY DEFINITION IGNORANT OF THE ISSUES AT STAKE AND THE SUBTLETIES OF THE DIFFERENT SIDES OF CONTROVERSIES. Weasel is an exercise in clever indoctrination, not legitimate education. It should be retired to the hall of shameful misleading icons of evolutionary materialism, right next to Haeckel's embryo drawings and Piltdown man [how many PhDs were done on misleading plaster casts of an obvious fraud?]. Not to mention the pig's tooth that played so effective a role in the infamous Scopes monkey trial. And, if this is a yardstick of what is going on, the Texas etc education authorities and parents all over our planet have a RIGHT to be outraged, not merely "concerned." Mr Fox, the issue is not that I have cited quotes from Wiki's attempted defense of Mr Dawkins [which quotes have been publicly inadvertently confirmed for onlookers by one from your side . . .], but the cites themselves and what they ever so painfully obviously mean. So, please deal with the issue, not the distractive red herrings and strawmen. GEM of TKI kairosfocus
BTW JayM, Your selective quoting of TBW didn't capture what he stated after- That being that cumulative selection is a process of slight improvements- however slight. IOW Jay you are being very deceptive which is to be expected. Joseph
JayM, I challenged anyone to provide the reference quote from TBW which would show that with a cumulative selection process once a matching letter is found the program keeps looking for it. To date not one person has provided anything that would show that the inference made by Dembski and Marks is wrong. Joseph
Alan Fox, I read the book- twice now. And by reading the book Dembski and Marks were correct- that being once a matching letter is found the search for it is over. Joseph
Sorry to press, Mr. M., but are your comments at 88 based on the book or what Wiki says about the book? Joe managed to get a copy, which I am hoping he will read. Alan Fox
BTW I am originally from Warwickshire, England but now live in the Languedoc in France. Re 1 and 2. Fair enough. I adopt the principle that having nothing worth stealing is the best protection against theft. Re 3. Are we clear now? Dawkins did not fix letters (because he did not need to). Dr Elsberry confirmed this with Dawkins way back in 2000. Re 4, I expect others with programming expertise are able to comment more usefully than I. Zachriel would be one example, who I believe is unable to comment here. Alan Fox
Mr Fox: Again, kindly observe my remarks on the way the Weasel program [and its kin] can have strong but misleading rhetorical impact that can then be obfuscated and shielded from challenge by using artful qualifications. That gap between the direct impact of what is headlined and shown vividly -- e.g. on BBC Horizon in 1987 -- and what is qualified using what is now known as weasel words, is a well known misleading rhetorical tactic. And even in this thread, we see cases where people are being misled even though Weasel is artificial not natural selection and it begs the question of getting to shores of function before climbing to hilltops of optimised function by hill climbing algorithms. Weasel was KNOWN from the outset to be "misleading" on the BLIND watchmaker claimed to be the focus of TBW, as 88 quotes. So, if something is misleading it should not be used, especially in a supposedly educational context. And that is a serious question to be answered by your side: why was Weasel used if it was known to be misleading and question-begging? (Cumulative selection -- if it is to be relevant to the real world -- must first address getting to complex, information rich function by chance + necessity without intelligent intervention. But then, that is the general challenge raised by ID in recent decades, and just as consistently ducked. [Onlookers, the massive evidence of observation is that FSCI is the product of design in cases where we know its origin, not just its replication. Try the origin of the texts in posts in this thread as a case in point of meaningful, code-bearing, functional digital data strings.]) Sadly, with the very long list of misleading icons of evolutionary materialism out there, this is hardly a new issue. GEM of TKI kairosfocus
Mr. M., Re point 5. Do I take it that you are critiquing TBW on the basis of the Wikipedia entry? It may be worth reading chapter 3 of TBW, as I am sure I recall you pointing out the doubtful quality of Wikipedia entries. Alan Fox
I don't get this at all. In [235], kairosfocus shouts:
SO SOON AS IT WAS REPORTED THAT MR ELSBERRY HAS PASSED ON TESTIMONY THAT MR DAWKINS DID NOT EXPLICITLY LATCH WEASEL 1986, I AND OTHERS HAVE ACCEPTED THAT; AND WE HAVE INFERRED THAT WEASEL 1986'S O/P IS THEN BEST EXPLAINED ON IMPLICIT LATCHING
I actually missed that both because kf's definitions of "implicit latching" have been hard to pin down and because he writes so much, and so little of it is to the point. Consider most recently:
Through Weasel 1986, my and Joseph’s analyses of the patterns that the statistics of mutant pops and of selection filters acting on same, it is highly evident that the mechanisms to create such evident latching of o/p can be based on BOTH explicit and implicit latching. [The latter being the better explanation on the overall cluster of evidence inclusive of Mr Dawkins' reported testimony of 2000 that he did not explicitly latch Weasel 1986.]
Aside from the laughable idea that Joseph analyzed anything on this thread, we have here the kairosfocus effect, in which the one relatively clear thing kairosfocus has written previously is now re-muddied. We saw this over and over in the discussion with hazel, where kf could not even bring himself to say "yes" or "no" to a simple question about mutation with respect to fitness. (An aside: Why shout through caps in the quote above? Maybe because in this thread alone, kairosfocus has written upwards of 13,000 words after mentioning that the issue wasn't worth discussing more. When you write that much, and so much is hard to pin down or beside the point, you have to shout to be heard over yourself.) David Kellogg
Mr Fox: 1] You are perhaps in different circumstances than I am. As a matter of fact I found out from previous situations that my name seemed to trigger spam, so I stopped using it. I make no broader claims than what holds for me; other than to note on a recent incident in your presidential election cycle [assuming you are American] which shows that there is a definite problem of Internet vandalism on the part of those likely to be associated with your views. Indeed, it was such vandalism [including deliberate use of rather vulgar language -- which I have objected to when I have seen it at UD] that led me to close off a free comments policy on my own blog. (And FYI, in recent weeks I have seen a rise in unwelcome email visitors (thankfully not a sudden, overwhelming surge); though also there was one acquaintance of old whose new name almost made me spam list him. Then, I remembered that there was someone with that name out there . . . (The good news on this is that it seems the spam filters of this world are working better today.)) 2] Even if I were to be wrong on this, it is a generally accepted principle of the Internet that there is a respect for privacy, especially given the problem of spam, and the associated ones of vandalism, identity theft and fraud; though I have taken other precautions against such that make me less vulnerable; e.g. I still refuse to use a personal Credit Card -- we can get away with using debit cards of one type or another here in the Caribbean. (I almost need not mention that "outing" of ID proponents is a prelude to expelling them. Just, that will not work in my case.) In short a set of fairly serious duties of care have been violated by advocates for your side. 3] There was a statement in the second thread in this chain in recent weeks, reporting from Mr Elsberry and onward to Mr Dawkins circa 2000; to the effect that he did not explicitly latch his Weasel 1986. On seeing it, I have accepted that it tips the balance of evidence on the Weasel 1986 case to IMPLICIT latching as the overall best current explanation; with detuning (perhaps to give good video footage), not a full change of algorithm, then explaining 1987. 4] Implicit latching does not require deliberate intent, just fiddling with parameters over a few to several runs, to get what seem to be "good" results. [E.g. I am fairly sure that Atom's 4%, 50/gen default was not a set of numbers that was just pulled out of the air at random. Similarly, you can see that I pushed the numbers towards those that would pull in more and more of the far skirt cases, and saw the effects I expected.] 5] I await your comments on 88 above. ____________ GEM of TKI kairosfocus
9] Underlying all of this is the basic failure of Weasel; it is targetted search... Everyone agrees with this! Dawkins never claimed anything else!
...— and by Mr Dawkins’ own admission...
Dawkins stated this upfront: "it is really a bit of a cheat".
— it is not a good analogy to the BLIND watchmaker, natural selection, that is the champion of the book of that title.
Nor was it ever intended to be an analogy for evolution by natural selection of variations in populations of living organisms. Dawkins never claimed it was. On the contrary, he was careful to state that it wasn't.
Alan Fox
Oops scuse HTML error! Alan Fox
Mr. M., Are you saying that Dawkins does say his letters are fixed? Is he saying it explicitly? Do you have a quote? Or is it implicit somewhere? Perhaps you can clarify? Alan Fox
An update on an experiment. I published my email address here a few weeks ago. Since then I have received no unsolicited or junk mail on that address! Mr. M., even publishing your valid email here does not generate spam. It may be due to traffic level, however. Alan Fox
Mr Fox: 1] Kindly cf supra, 88; as has been repeatedly pointed out. [You will find there the excerpts from TBW used by Wiki in an attempt to justify Mr Dawkins' argument. So, the matter constitutes a telling admission against interest, once my parenthetical notes are used to bring out the underlying issues. I find it interesting how after many days there is no serious dealing with that evidence as I (and Joseph independently) have put it on the table and as necessary pointed to it again and again.] 2] I think there are also a few matters of serious (and sometimes ad hominem laced) assertions against Joseph and myself and our case, made by yourself, those at Anti Evo [as I gather, your associates], and by your co-belligerents in this thread above, that need to be reckoned with in light of some specific experimental evidence over the past 24 or so hours. (Not to mention, insistent violations of my privacy.) 3] High on this agenda, is the point that it is now clear beyond reasonable dispute that Weasel 86 latched the o/p: once a letter went correct, we have further excellent reason to infer from the evidence in hand, that it stayed that way. My reasoning about the law of large numbers, in particular, is strongly supported by the way it logically consistently led to correct -- and plainly unexpected by objectors and detractors -- predictions of further evidence. [And BTW, much of the debating on specifics of filters is irrelevant: as long as the filter gives a proximity metric capable of showing letterwise progress, the same basic results will follow.] 4] Through Weasel 1986, and my and Joseph's analyses of the patterns produced by the statistics of mutant pops and of selection filters acting on same, it is highly evident that the mechanisms to create such evident latching of o/p can be based on BOTH explicit and implicit latching. [The latter being the better explanation on the overall cluster of evidence inclusive of Mr Dawkins' reported testimony of 2000 that he did not explicitly latch Weasel 1986.] 5] We have -- thanks to Atom -- demonstrated, empirically, and posted above: implicit latching, quasi-latching, the double mutation substitution effect, associated reversions of letters, impact of higher mutation rates and of bigger per generation populations, etc. Several of these mechanisms were disputed or dismissed [too often with ad hominems], but are now demonstrated empirically, beyond reasonable dispute. 6] In short, the model in which generational champions are used to make populations of variants based on random variations of letters with a certain probability of mutation per letter, and filtered by a mere proximity filter leading to the next champion, credibly accounts for the Weasel o/p of 1986 and 87. 7] So, we see that o/p latching is credible in Weasel 1986, and that on preponderance of evidence implicit latching is a viable mechanism to explain it. 8] Similarly, we see that de-tuning of the parameters leads to quasi latching and to other effects as the pop size and mutation rate rises or falls. [Scroll up and see the data on runs.] 9] Underlying all of this is the basic failure of Weasel; it is targetted search without reference to a reasonable threshold of first function and incremental function. As such -- and by Mr Dawkins' own admission -- it is not a good analogy to the BLIND watchmaker, natural selection, that is the champion of the book of that title.
10] However, because of the difference in popular level impact between a computer simulation and the qualifications attached thereto in a book over several pages, the rhetorical impact of Weasel is to improperly persuade many that so-called cumulative selection suffices to answer the Hoylean challenge to get TO shores of islands of function before hill climbing can begin by NS etc. 11] In fact, Weasel and kin are a case of question begging leading to distraction from, deflection of, and onward dismissal of a serious challenge. It is yet another misleading icon of evolutionary materialism -- one of a sadly long list. ___________ Now, onlookers, I cannot force Mr Fox et al to deal with the evidence fairly and squarely, but we can draw our own conclusions for ourselves about (a) the want of quality of their case on the merits, (b) the underlying want of quality of the way they have approached the issues, and (c) the question of a sadly plain want of dealing with others fairly and civilly on the part of too many evolutionary materialists and their fellow travellers. In turn, such issues are a warning to us, if we are interested in the health of science, science education and of general discussion of serious matters. Which are necessary things if our civilisation is to have a healthy future. GEM of TKI kairosfocus
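The de-tuning point (8 above) is easy to probe directly. The sketch below is my own illustration, not anyone's published code: it reruns the simple no-mask Weasel under a few population/mutation-rate settings and counts how often a champion letter that matched the target later stops matching. The particular settings and the generation cap are arbitrary choices.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def run(pop, rate, max_gen=20000):
    # One run with no letter masking; returns generations used (or the cap if
    # the target was never reached) and how many champion letters reverted.
    champ = "".join(random.choice(ALPHABET) for _ in TARGET)
    reversions = 0
    for gen in range(1, max_gen + 1):
        was_right = [c == t for c, t in zip(champ, TARGET)]
        progeny = ["".join(random.choice(ALPHABET) if random.random() < rate else c
                           for c in champ) for _ in range(pop)]
        champ = max(progeny, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        reversions += sum(1 for i, ok in enumerate(was_right)
                          if ok and champ[i] != TARGET[i])
        if champ == TARGET:
            return gen, reversions
    return max_gen, reversions

random.seed(2)
for pop, rate in [(50, 0.04), (200, 0.04), (10, 0.04), (50, 0.10)]:
    gens, revs = run(pop, rate)
    print(f"pop={pop:<4} rate={rate:.2f}  generations={gens:<6} reversions={revs}")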
Scuse typos Alan Fox
GEM quoting Joe!
2 –> Joseph, 246: The deception is that people think that cumulative selection is a real thing because of what Dawkins wrote. Prezactly.
Let's have the quote, then. What did Dawkins write that misleads anyone over whether letters are fixed in his "Weasel" program? What he actually wrote about "Weasel" is in chapter 3 of TBW (1986 edition) starting on page 43. See above for sections already quoted. I can find nothing to suggest correct letters were fixed and prevented from "mutating" further. As Hazel says:
How can you blame Dawkins for NOT mentioning things that his program was NOT doing, or for not having the foresight to anticipate that Dembski would mistakenly make incorrect assumptions about the program.
Do you not see, Mr M., how this undermines your credibility? Alan Fox
PPS: I should comment on the champion selection function wars. So long as a function rewards mere proximity to target [without reference to credible complex functional information] and can detect a one-letter increment in "progress", the pattern where pop size interacts with the mutation rate and the tail of the distribution to push towards the target will happen. And that targetted search will illustrate exactly the key failure of Weasel to address the need to first get to the shores of islands of function in the config space, before applying hill-climbing methods. As I noted in the original December 08 thread at 107 and 111:
[107:] the problem with the fitness landscape [i.e. as envisioned for the biological world] is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. [Notice how this has never been seriously addressed: getting to body plan no 1, with credibly 600 k bits or so of bio-information as the threshold of functionality, i.e a config space ~ 10^180,617; and onward for body plans requiring 10's - 100's of mega bits of increments of functional information] So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy. But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [111, excerpted paragraph used by GLF in his threadjack:] Weasel [i.e. as published in 1986] sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. [If you doubt this, simply observe the o/p . . . ] That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
kairosfocus
Onlookers: An interesting result overnight, nuh? A few comments: 1 --> H, 263: as already pointed out, the material point is not whether there was or was not a random-mutation sub process in the search [in any case, on any serious consideration of Weasel 1986, random search was used]; but, whether the target was defined letterwise or phrasewise. (In explicit latching the definition is letterwise, in implicit, phrasewise. Again cf. 88 above.) 2 --> Joseph, 246: The deception is that people think that cumulative selection is a real thing because of what Dawkins wrote. Prezactly. 3 --> JayM, 255: Kindly observe no 88 above. Your claims were anticipated -- not merely answered -- in this thread. 4 --> iskim labmildew: Excellent alternative and more credible term. My discussions above hint that something like proximity metric is more accurate. 5 --> H, 263: no locking of letters -- insofar as this means that on preponderance of evidence we accept that no EXPLICIT latching was used, that is acceptable. However, searching for emanations of penumbras of the weasel words in the text of TBW in order to claim that the most natural reading of that text and o/p was that there was no explicit letterwise latching is a case of trying to fly in the face of obvious facts. [Cf the natural reading of the Monash University people (who support "your" side), as I have pointed out previously.] One cannot stop that, but one can point out that it is an attempt to justify the indefensible, cf 88 above. 6 --> GP, 244: Descent with modification in the presence of selection pressures has a meaning only when some selectable modification has been created. The tornado argument shows the implausibility that anything selectable of reasonable complexity may ever be generated by RV. The problem with darwinian theory is that it states, assumes, believes, imposes, but never proves that selectable complex traits can come out of RV, or in alternative be deconstructed as a gradual accumulation of smaller selectable steps. Prove that, or at least show a credible model of that, and your theory will begin to be at least debatable. Bullseye, as usual! __________________ Atom; keep the good stuff coming! (Thanks in advance.) GEM of TKI PS: There is an attempt in the above linked anti evo thread -- which at long last respects my privacy at least in the first few posts -- to twist my remarks above, on what I have long since highlighted at UD, that there is o/p latching evident in Weasel 1986 runs as published, that may be explained by mechanisms T2 [explicit latch] and T3 [implicit latch], into a claimed admission of "defeat." Pathetic. (Anti evo folks, please, the strawman of your making [that conflates latching of o/p with only explicit latching as mechanism] has never been accurate. I have pointed out that on a natural reading of the o/p of 1986 and the remarks of Mr Dawkins, o/p latching is established beyond reasonable doubt. The material issue is mechanisms, and T2 vs T3 is to be assessed on inference to best explanation. On the TBW evidence circa 1986, T2 is a very natural reading. It is on testimony from CRD, via Mr Elsberry, that T3 is preferred on preponderance of evidence -- despite some odd points that stick out on comparing the videotaped run of BBC Horizon c. 1987, which does not latch or quasi-latch.) kairosfocus
Atom, Thanks for the response. The terms seem to be off, though. How can a "fitness function" be so called if it is false to fact? Wolpert and MacReady used "cost function," which at least doesn't try to say that any particular cost function corresponds to reality. The issue to track is what happens in nature. Dr. Dembski's position is that algorithms, functions, and natural law have problems generating CSI, and advances the argument that searching for just the right fitness function is a problem. But if, say, the lock that I discussed last time opens to some "011" attribute of an organism, and each individual from a population gets to try such a lock itself, how is there any "search" for a "fitness function?" Before one says that this isn't applicable in biology, I should say that some insects utilize genitals structured like wards and keys. That would seem to have some of the flavor of the locking example I've been referring to. The one and only thing that happens is that organisms who have the "011" property get in, and the ones that don't, don't. The organisms trying the lock never select a fitness function. They just either can or cannot utilize whatever lies behind the lock. They never encounter a situation where the wrong property unlocks it (as might happen with 254 of the 256 possible "fitness functions"), or the right property fails to unlock it (as happens in 128 out of 256 of them). Isn't this a problem for Dembski's argument? iskim labmildew
Atom @263
I am not on Uncommon Descent a lot (people sometimes think I’ve left the site, since I don’t post for long intervals sometimes) so condensing the main points and posting them here would help me, at least.
If you're not here often, how would summarizing Zachriel's points here help you? Just go interact with him where he's allowed to participate. I suspect the two of you would get along well. Those of us in the peanut gallery would certainly benefit. JJ JayM
Someone has brought the following quotes from TBW to my attention: Pg. 307
It is only if you define 'random' as meaning 'no general bias towards bodily improvement' that mutation is truly random.
Pg. 312
Mutation is not systematically biased in the direction of adaptive improvement, ...
Since Dawkins' description of Weasel includes the phrase "but with a certain chance of random error - 'mutation' - in the copying...", I think anyone who has followed the overall argument of the book would know that when Dawkins said "random error - 'mutation' - in the copying" he would of course be referring to "'random' as meaning 'no general bias towards bodily improvement'". Hence, no locking of letters: as clearly shown earlier, only if you have no latching of letters at the mutation level is the mutation random with respect to the fitness function. QED hazel
David Kellogg, I did have a look for myself earlier today, but I am not on Uncommon Descent a lot (people sometimes think I've left the site, since I don't post for long intervals sometimes) so condensing the main points and posting them here would help me, at least. Also, I'm sure that others who don't frequent AntiEvo would appreciate you (or a volunteer) posting anything that is relevant and interesting so we can discuss it. hazel, Thanks. I appreciate people taking the time to help me with beta testing and hopefully we all benefit in the end. Atom Atom
I appreciate your outlook, Atom. Debugging, like all revision processes, improves a product, and collaborative work with others "beta testing" a program can be valuable. I find it refreshing that you aren't put off by responses from people who might be seen as critics, and are in fact able to stay focused on the actual details of the work rather than any larger differences you all might have. hazel
Atom, as much as I enjoy being Zachriel's secretary, it's probably more useful to see the discussion over there. There's less chance of error (mutation?) in transcription. Plus, nobody there thinks you're a "dummy," as you put it. David Kellogg
iskim labmildew, You bring up an interesting discussion point. In your example, you say that the fitness functions can possibly assign a high fitness value (1) to a configuration that does not correspond to the target we are hoping to reach (namely, 011) and it may assign a low fitness value to the target (011 gets assigned 0 in some functions.) This is correct, which is why it is important to select the correct fitness function for what we are hoping to select for. A lot of the possible fitness functions will hinder our search by providing negative active information (to borrow the EIL phrase.) The examples you give illustrate this. To counter this, we could choose a fitness function that "aligns" with our target, meaning, encodes some information about the target we are hoping to reach, so that it can guide the search along. If we pick a function that contributes negative active information, we can actually see the search perform worse than random unassisted search. This is an area ripe for empirical research and I'm hoping that the Weasel Ware 2.0 software allows us to begin exploring this. Since users can code their own custom fitness functions, we can make progress exploring in a way that benefits everyone. So all of the possible fitness functions / reward matrices are actually relevant, they just may not be good for our search. The fitness function does not tell you if a target has been reached or not; it only gives a numeric value allow you to order your organisms. This is a key distinction. There is a lot to discuss on this topic, but this post is too long so I'll stop right here. A proper treatment of this subject would require at least a paper; for my part, I'm developing the tools and designing some of the experiments in the meantime. Atom Atom
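To make the aligned-versus-neutral contrast above concrete, here is a minimal stand-alone sketch in plain JavaScript. It is not the Weasel Ware code; all names (hammingError, neutralError, evolve) and parameter values are illustrative assumptions. It runs the same mutate-and-select loop twice, once with a proximity-aligned error function and once with a target-neutral ASCII-sum error, so the only difference between the two runs is how much information about the target the reward signal carries.

var TARGET = "METHINKS IT IS LIKE A WEASEL";
var ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

function randomChar() {
  return ALPHABET.charAt(Math.floor(Math.random() * ALPHABET.length));
}

function randomString(n) {
  var s = "";
  for (var i = 0; i < n; i++) s += randomChar();
  return s;
}

// Copy a string with a per-letter chance of random error ("mutation").
function mutate(s, rate) {
  var out = "";
  for (var i = 0; i < s.length; i++) {
    out += (Math.random() < rate) ? randomChar() : s.charAt(i);
  }
  return out;
}

// Aligned reward: error = number of letters that differ from the target.
function hammingError(s) {
  var e = 0;
  for (var i = 0; i < s.length; i++) {
    if (s.charAt(i) !== TARGET.charAt(i)) e++;
  }
  return e;
}

// Target-neutral reward: error depends only on the string's own ASCII sum,
// so it carries no letterwise information about the target.
function neutralError(s) {
  var sum = 0;
  for (var i = 0; i < s.length; i++) sum += s.charCodeAt(i);
  return Math.abs(Math.sin(sum)) * s.length;
}

// Generational champion scheme: keep the lowest-error string seen so far.
function evolve(errorFn, maxGens, popSize, rate) {
  var champ = randomString(TARGET.length);
  for (var g = 1; g <= maxGens; g++) {
    var best = champ;
    for (var k = 0; k < popSize; k++) {
      var child = mutate(champ, rate);
      if (errorFn(child) < errorFn(best)) best = child;
    }
    champ = best;
    if (champ === TARGET) return g; // generations needed to hit the target
  }
  return -1; // target not reached within maxGens
}

console.log("aligned reward:", evolve(hammingError, 1000, 50, 0.04));
console.log("neutral reward:", evolve(neutralError, 1000, 50, 0.04));

On the aligned reward the run typically closes on the target within a few hundred generations; on the neutral reward it has no realistic prospect of doing so, which is one way of seeing the point about information-poor reward matrices.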
Joseph writes,
So the bottom line is stop blaming Dembski and Marks for Dawkins’ sloppiness and deception.
How can you blame Dawkins for NOT mentioning things that his program was NOT doing, or for not having the foresight to anticipate that Dembski would mistakenly make incorrect assumptions about the program? Dembski is a Ph.D. mathematician. With just a little analysis of his own he should have known that locking letters in place was not needed to account for the published data, and therefore the fact that such locking was not mentioned should have been enough to prevent Dembski from assuming it was there. You just can't blame someone for not explicitly stating all the things they are not doing just to keep someone else from jumping to a wrong conclusion. hazel
Joseph @252
I have asked for quotes FROM THAT BOOK that would refute the logic behind that inference yet not one has been produced.
Not so. David Kellogg has repeatedly posted a link to a web page that does exactly what you request. That page makes it very clear that no reasonable reading of The Blind Watchmaker would lead to the conclusion of explicit latching. JJ JayM
Joseph, As much as I'd like to be a perfect programmer, I know that any code I produce will initially have some bugs in it. I want to eliminate any bugs anyone finds, so that everyone can have a useful simulator for whatever experiments they want to run. If people at Antievo pinpoint bugs in the GUI, I would like to acknowledge that and fix them for everyone as soon as possible. I don't care if they think I'm a dummy for having bugs; I've worked with coders for years, so I know that pretty much everyone has bugs when they code. I'm more interested in having a good working GUI. :) Atom Atom
Atom, The "fitness function" space discussion is interesting. Are you using "fitness function" in the way that Wolpert and MacReady use "cost function"? I'm not sure what the value is of the extra 25 thousand or so "fitness functions" beyond the 40 thousand possible orderings of 8 alternatives. What if we take as an example Dembski's favorite sort of fitness functions, the ones where the "target" gets the sole high value and everything else gets zip? Then, F=1, and the number of "fitness functions" is 2^8 = 256. The problem would be like a lock that only unlocks when three switches are in "011" or off-on-on position, and you are looking to unlock the device by flipping switches. What is the upshot of that? Just one of those 256 "fitness functions" corresponds to the case where the "011" string gets assigned 1 and the rest get assigned 0, so in just one case does the "fitness function" say that the lock unlocks with "011" and stays locked with all the other positions. 128 of those "fitness functions" actually assign your target string, "011," a value of 0, so the "fitness function" tells you that the lock is still locked even though the switches are in the "011" position. 254 of them also assign some other string or strings a value of 1, so the "fitness function" tells you that the lock is unlocked even when the switches are not in the "011" position. Doesn't it seem strange to you to be talking about selecting from "fitness functions" that don't actually apply to the situation at hand? iskim labmildew
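For what it is worth, the counts in the lock example can be checked by brute force. The following throwaway JavaScript snippet (mine, purely illustrative) enumerates all 2^8 = 256 ways of assigning a 0/1 value to the eight three-bit strings and tallies the cases described above.

var strings = ["000", "001", "010", "011", "100", "101", "110", "111"];
var targetIndex = strings.indexOf("011");
var onlyTargetRewarded = 0, targetGetsZero = 0, someOtherGetsOne = 0;

for (var mask = 0; mask < 256; mask++) {
  // Bit i of mask is the value this "fitness function" assigns to strings[i].
  var values = [];
  for (var i = 0; i < 8; i++) values.push((mask >> i) & 1);
  var ones = values.reduce(function (a, b) { return a + b; }, 0);

  if (values[targetIndex] === 1 && ones === 1) onlyTargetRewarded++;
  if (values[targetIndex] === 0) targetGetsZero++;
  if (ones - values[targetIndex] > 0) someOtherGetsOne++;
}

console.log(onlyTargetRewarded); // 1   -- only "011" is assigned a 1
console.log(targetGetsZero);     // 128 -- the target itself is assigned 0
console.log(someOtherGetsOne);   // 254 -- at least one non-target string gets a 1

The printed values match the 1, 128 and 254 quoted in the comment.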
Joseph @252
Again I reference “The Blind Watchmaker” as did Dembski and marks. From reading that reference the logical inference is that once a letter is found the search for it is over which essentially means it is latched in place.
This is simply not the case. I posted the full Weasel excerpt from The Blind Watchmaker in message 182 and invited you to point out exactly how you could come to your erroneous conclusion. You did not respond. Since the text is still up there, please show how anything that Dawkins wrote could remotely be construed to suggest explicit latching of characters. JJ JayM
Atom:
Please relay anything useful and relevant here.
That's a joke, right? ;) Joseph
Atom @251
Please relay anything useful and relevant here.
Atom, You should consider participating there, as well. It would be more efficient than David Kellogg copying and pasting. Plus, Zachriel has always struck me as a courteous and intelligent person. Your conversations would no doubt be mutually beneficial. JJ JayM
hazel:
Yes, what is settled, thank goodness, is that Dembski was wrong when he assumed in the paper he and Marks wrote (I can't find a link right now) that letters were explicitly latched.
We can agree to disagree. Again I reference "The Blind Watchmaker" as did Dembski and Marks. From reading that reference the logical inference is that once a letter is found the search for it is over, which essentially means it is latched in place. I have asked for quotes FROM THAT BOOK that would refute the logic behind that inference, yet not one has been produced. So the bottom line is stop blaming Dembski and Marks for Dawkins' sloppiness and deception. Joseph
David Kellogg, Please relay anything useful and relevant here. Thanks! Atom Atom
I have fixed a bug in Proximity Reward Search that affected population sizes of "1", and updated the code on the site. Thanks to Bill for checking the GUI and finding the bug. Everyone else, please continue to report any bugs you find. I can only fix the bugs I'm aware of. Atom Atom
Addendum, Here is another simple fitness/evaluation function you can try:

aError = Math.abs(this.stringSum(target) - this.stringSum(a));
bError = Math.abs(this.stringSum(target) - this.stringSum(b));

This one bases fitness on the distance between the target's ASCII sum and the string's ASCII sum, providing indirect information about the target. This differs from CRC32 in that the ASCII sum will always be similar for similar strings (as far as I know), so it should be a smooth function without surprises. How do you guys think this search will perform? Atom Atom
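For anyone who wants to try the snippet above outside the GUI, here is a self-contained version. Note that stringSum is a helper inside Weasel Ware; it is re-implemented here as a plain function so the sketch runs on its own, and the surrounding comparator is mine, not the EIL code.

// Plain re-implementation of the GUI's stringSum helper: ASCII sum of a string.
function stringSum(s) {
  var sum = 0;
  for (var i = 0; i < s.length; i++) sum += s.charCodeAt(i);
  return sum;
}

// Return whichever of two candidate strings has an ASCII sum closer to the target's.
function fitterByAsciiSum(a, b, target) {
  var aError = Math.abs(stringSum(target) - stringSum(a));
  var bError = Math.abs(stringSum(target) - stringSum(b));
  return (aError <= bError) ? a : b;
}

var target = "METHINKS IT IS LIKE A WEASEL";
var scrambled = target.split("").reverse().join("");      // same letters, wrong order
var allAs = new Array(target.length + 1).join("A");       // a string of 28 letter A's
console.log(fitterByAsciiSum(scrambled, allAs, target));

One thing this makes obvious: any rearrangement of the target's letters has exactly the same ASCII sum, so the scrambled string above scores a perfect zero error. The measure says something about the target's letter content but nothing about letter order, which is a concrete way of seeing how "indirect" the information it provides really is.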
KF, You're very welcome. There are more simulations coming and more experiments to be done, and it is all my pleasure to help settle questions empirically. gpuccio, Weasel Ware 2.0 also contains fitness functions / reward matrices that are not based on the target, or are only based on some property of the target, to greater or lesser degrees. In a word, you can now alter the amount of information the reward matrix contains about the target, and see how the search performs. For example, Proximity Neutral Search using the Partially Neutral: CRC32 method will select based not on proximity to the target string, but based on the CRC32 checksum's distance from the target's CRC32 checksum. (This gives less information about the target, since multiple strings can have the same CRC32 checksum and since the CRC32 algorithm will contain surprises in fitness values of similar strings, mimicking the existence of small genomic changes having large effects.) Or you can also use the Simple Sum method, which takes the ASCII sum of the strings and uses that as a basis for fitness, thus not containing any information about the target but still using fitness values, replication, mutation and selection. You will see first hand what happens when you limit the amount of information you encode in the reward matrix, or increase it. KF's "Advanced" example demonstrates this interactively, as you alter the grouping length in real time. Most importantly, everyone can now do experiments and design their own fitness functions, basing fitness on whatever criteria they like. I gave a few examples to get you started. Atom PS If anyone codes a cool fitness function, please post it here. Hopefully I can get the admins to come up with a post dedicated to interesting user-created fitness functions and their effect on weasel, if people are interested. Atom
Alan, The deception is that people think that cumulative selection is a real thing because of what Dawkins wrote. Joseph
And if Dawkins was aware from the start that it was “a bit of a cheat”, that is another point I will consider in my general opinion of him.
Not "aware", gpuccio, he explicitly stated this. There was and is no deception on Dawkins' part. Alan Fox
faded_Glory (#216): So, I am happy we agree about what the weasel is not. And if Dawkins was aware from the start that it was "a bit of a cheat", that is another point I will consider in my general opinion of him. And it is certainly a good example of the general moral notion that cheats do not bear good results. I must disagree with you that the point about needing prior knowledge "has actually been addressed many times." It depends on what you mean by "addressed". The fact remains that so called simulations of NS abound, and that not one of them is free from any prior knowledge of the results to be obtained. In other words, not one of them is a simulation of NS, while all of them are simulations of IS, in different forms. If you agree on that, I will agree that you have "addressed" my point, but I still would wonder why so many darwinists are spending time making good simulations of ID while pretending (sometimes admitting they are cheating) that they are in some way simulations of RV + NS. They are not. All existing simulations are simulations of some form of ID. No wonder that some of them are very successful! You say: "Evolution has no goal, but that does not prevent it from generating solutions that are adapted to their environment." That's exactly what should be proven. That's exactly what we in ID believe (and prove) to be false. It's strange how darwinists, when they are short of arguments, just resort to simple statements of their beliefs, as if that could solve any problem. You say: "The main flaw in the tornado argument is that it ignores descent with modification in the presence of selection pressures." That's because the tornado argument is about RV, not NS. It shows what RV can accomplish: practically nothing. The impotence of NS to select from that nothing can be addressed separately in many other ways. So, the tornado argument is very good, but it addresses only a part of the theory. The other part (NS) can be shown false separately, using the criticism of RV as an engine of selectable information, which is so well exemplified in the tornado argument (but which can be specified much more rigorously in other ways). You say: "The main flaw in the tornado argument is that it ignores descent with modification in the presence of selection pressures." It is not a flaw. Descent with modification in the presence of selection pressures has a meaning only when some selectable modification has been created. The tornado argument shows the implausibility that anything selectable of reasonable complexity may ever be generated by RV. The problem with darwinian theory is that it states, assumes, believes, imposes, but never proves that selectable complex traits can come out of RV, or, alternatively, be deconstructed as a gradual accumulation of smaller selectable steps. Prove that, or at least show a credible model of that, and your theory will begin to be at least debatable. But strangely, all darwinists become evasive at this point, and change the subject. You say: "As a criticism of Darwinian evolution it misses its target by miles. Small stepwise tornadoes will indeed not result in an airplane. Do you really think anyone believes they do?" There is no limit to what darwinists seem to be able to believe. And you know, small stepwise tornadoes will not even generate parts of an airplane. Or anything selectable as a part of an airplane beyond the absolutely trivial. Can you understand that? 
The argument shows (not in detail, but intuitively, like all metaphors: but a detailed treatment of the problem has been given many times in ID) that RV cannot generate anything functional and selectable beyond a very low threshold of complexity; and that complex functional systems (like an airplane) cannot obviously be deconstructed as a random assemblage of smaller trivial parts. gpuccio
PS: Not explicitly latched. kairosfocus
Hazel On my way out on a wet day. A few notes: 1] Absent credible code circa 1986, we do not have definitive information to decide on whether the weasel o/p circa 1986 latched explicitly or implicitly. (It remains possible that Weasel 1986 was explicitly latched, while Weasel 1987 was not, on strict accounting of the facts and possibilities in evidence; especially on the record of the published information in TBW and new Scientist. It is not on these facts but a report that an inference to implicit latching was reverted to.) 2] We do, however, have more than adequate information to conclude to moral certainty that the o/p was latched. (We have for a long time, just the data runs today show this beyond reasonable dispute by replicating the pattern and tendencies inferred per law of large numbers.) 3] On being informed of a personal declaration that Weasel 86 was not latched, we have accepted this as establishing that the best explanation of the o/p on the preponderance of evidence -- note the third, increasingly weakened degree of warrant being used here -- was implicitly latched. 4] The Atom sims (thanks again Atom) and my runs above suffice to show beyond reasonable doubt that the implicit latching is demonstrated to be real, as is quasi latching, as is the pattern of dynamics associated with larger rates and/or populations, up to and including substitutions. 5] Further to this, the weeks- long parade of dismissals, distortions and claims of major error on my part -- starting with GLF, and going on to many others in this thread, at Anti Evo and elsewhere, including Mr Elsberry -- are shown to themselves be based on error, beyond reasonable doubt. 6] For, as a scan up will show, the pattern of dynamics and predictions I made have been substantiated, up to the still unobserved triple mutation effect, which I suspect that on the maxing out of population, I have not gone far enough to see it. 7] Now, let us see how the same ones who so stridently and confidently argued, asserted and declared that Joseph and I were grossly wrong and ignorant or worse, will now correct the record, and address the many ad hominems and uncivil actions they have undertaken. 8] And, lastly, H, this thread and the previous ones have not in the main been about Mr Dembski, but about Joseph and the undersigned. _____________ GEM of TKI kairosfocus
Yes, what is settled, thank goodness, is that Dembski was wrong when he assumed in the paper he and Marks wrote (I can't find a link right now) that letters were explicitly latched. Even now the Weasel Ware page says,
In the search proposed by Dr. Dawkins, letters are chosen randomly. For each letter, we can envision spinning a roulette wheel and randomly selecting a letter. Once a letter hits at a location, we keep it.
Perhaps this will now be fixed, and Dembski will not make this assumption again if he writes more about Weasel. hazel
PPPPPs; Forgot the substitution effect. 239, case F: _____________ 105. MEWHINKS IT ISRLIKE A WEASEL 106. METHINKS IT ISRLIKE A WNASEL [ . . . ] 296. MEGHINKS IT IS LIKE A WEASEL 297. METHINKS IT IS LIKE A WEQSEL ___________ Two double mutations, with a 1-letter advance compensated by a 1-letter reversion. As also predicted. GEM of TKI kairosfocus
PPPPS; Case F, 999, 25%: _____________ 1. SSLHNNJAPJTDIIMALWGOTNLGZ TF 2. SUTHNNJAPJTDIPMALHY VNLGC TF 3. SJTHYNJA JTDIPMALHY ONLGA TN 4. MJTHYNJA JTDIPMAIHY ONLGA TO 5. MQTHINJA JT IGMAIHF KNLMA TO [ . . . ] 25. MLTHINK IT WE LWKN A WEASEL 26. METHINK KIT WY LWKN A WEASEL 27. METHINKKIIT JY LWKN A WEASEL 28. METHINKJIIT JY LWKN A WEASEL 29. METHINKJYIT JY LGKE A WEASEL 30. METHINKJYIT JY LGKE A WEASEL [ . . . ] 105. MEWHINKS IT ISRLIKE A WEASEL 106. METHINKS IT ISRLIKE A WNASEL 107. METHINKS IT ISRLIKE A WNASEL 108. METHINKS IT ISRLIKE A WNASEL 109. METHINKS IT ISRLIKE A WNASEL 110. METHINKS IT ISRLIKE A WNASEL [ . . . ] 295. MEGHINKS IT IS LIKE A WEASEL 296. MEGHINKS IT IS LIKE A WEASEL 297. METHINKS IT IS LIKE A WEQSEL 298. METHINKS IT IS LIKE A WEQSEL 299. METHINKS IT IS LIKE A WEQSEL 300. METHINKS IT IS LIKE A WEFSEL 301. METHINKS IT IS LIKE A WEFSEL 302. METHINKS IT IS LIKE A WEFSEL 303. METHINKS IT IS LIKE A WEFSEL 304. METHINKS IT IS LIKE A WEFSEL 305. METHINKS IT IS LIKE A WEFSEL 306. METHINKS IT IS LIKE A WEFSEL 307. METHINKS IT IS LIKE A WEFSEL 308. METHINKS IT IS LIKE A WEFSEL 309. METHINKS IT IS LIKE A WEFSEL 310. METHINKS IT IS LIKE A WEFSEL 311. METHINKS IT IS LIKE A WEFSEL 312. METHINKS IT IS LIKE A WEFSEL 313. METHINKS IT IS LIKE A WEFSEL 314. METHINKS IT IS LIKE A WEFSEL 315. METHINKS IT IS LIKE A WEFSEL 316. METHINKS IT IS LIKE A WEFSEL 317. METHINKS IT IS LIKE A WEFSEL 318. METHINKS IT IS LIKE A WEASEL _________________ Reversions aplenty and hard to close the deal. GEM of TKI kairosfocus
PPPS: Case E; 999/gen, 16%: __________________ 1. BPPMIFLKYSMEIJWTOYEBNORYPQCI 2. Y PMINLKYSMAIJWTZKEBNORYPQCI 3. Y PMINLKYSMAIS TZKEBNORYPQCZ 4. Y TMINLKYSMAIS DYKEUNORYPQCL 5. Y TUINLKBSMAIS LRKE NORYPQCL 6. R TKINLKUSMAISJLIKE S MYPQCL 7. RETKINLKUSMAISJLIKE S MVPQCL 8. RETKINLDUSMDISJLIKE S WVPQOL 9. METKINLDUSMDISJLIKE S WVPQRL 10. METKINYSUMTDISJLIKE S WYPQRR 11. METKINYSUMTDIS LIKE S WYPEBR 12. METLINYSUMTDIS LIKE S WYPEBL 13. METLINYSUITDIS LIKE S WYPEBL 14. METLINYSUITDIS LIKE S WMAEBL 15. METLINCSUITDIS LIKE A WMAEBL 16. METLINKSUITDIS LIKE A WMAEBL 17. METUINKSUITDIS LIKE A WMAEEL 18. METUINKSUITDIS LIKE A WEAEEL 19. METUINKS ITDIS LIKE A WEAEEL 20. METUINKS IT IS LIKE A WEAEEL 21. METOINKS IT IS LIKE A WEAEEL 22. METOINKS IT IS LIKE A WEAEEL 23. METOINKS IT IS LIKE A WEAKEL 24. METOINKS IT IS LIKE A WEAKEL 25. METOINKS IT IS LIKE A WEAKEL 26. METOINKS IT IS LIKE A WEAKEL 27. METHINKS IT IS LIKE A WEAKEL 28. METHINKS IT IS LIKE A WEAKEL 29. METHINKS IT IS LIKE A WEAKEL 30. METHINKS IT IS LIKE A WEAKEL 31. METHINKS IT IS LIKE A WEAKEL 32. METHINKS IT IS LIKE A WEAKEL 33. METHINKS IT IS LIKE A WEAKEL 34. METHINKS FT IS LIKE A WEASEL 35. METHINKS FT IS LIKE A WEASEL 36. METHINKS IT IS LIKE A WEASEL __________________ Late reversion. GEM of TKI kairosfocus
PPS: Case D: 999/gen [maxed out], 8%: ________________ 1. MMCJXLTPPCNATTMLKDXOBDKMBJQX 2. MMCJXL PPCNATT LKDXOBDKMAJQX 3. MMCJXL PPCNATT LKDXOB KMAJUX 4. MECJXL PPCLATT LIDXOB KMAJUX 5. MECJXL PPWPOVS LIDXOB WMAJUX 6. MECLXL PPWPOVS LIDXOY WMAJUL 7. MECLXL SPWPOVS LIDXOY WMAJUL 8. MECLXL SJWPOIS LITXOY WMAJUL 9. MECLXL SJWP IS LIZXOY WTASUL 10. MECLXL S WP IS LIZAOY WTASUL 11. MECLXL S IP IS LIZAOY WTASEL 12. MECLXL S IT IS LIZAOY WTASEL 13. MECLXL S IT IS LIKNOY WTASEL 14. MECLXL S IT IS LIKEOY WTASEL 15. MECHXL S IT IS LIKE Y WUASEL 16. METHXZ S IT IS LIKE Y WUASEL 17. METHXZ S IT IS LIKE A WUASEL 18. METHKN S IT IS LIKE A WUASEL 19. METHKN S IT IS LIKE A WEASEL 20. METHIN S IT IS LIKE A WEASEL 21. METHINKS IT IS LIKE A WEASEL __________________ Here we see speeding up of run to the target. GEM of TKI kairosfocus
PS: Run C, 500 /gen, 8% mutation rate: _______________ 1. QB NRQWFVIDGVT FLOPLWCGHLIJM 2. MB NRQWFVIDVVT FLOPLW GHLIJV 3. MB NRQWFVIDVVT FLOPNW GZLIEV 4. ME NRQWFVITVVT FLOPNW GZLIEV 5. ME NRQWFVITVVT LLOPNW GZLBEV 6. MEXNRQWFVITVTT LLKPNW GZLBEV 7. MEXNRQKFVITVTT LLKPTW GZLBEV 8. MEXNRRKFVITVTT LLKPTW CXLBEL 9. MEXNRRKFVIT TT LLKETW CXYBEL 10. MEXNRIKFVIT TT LLKETW CEYBEL 11. MEXNRIKFVIT TT LLKETA CEYBEL 12. MEXNRIKFVIT TT LLKETA CEABEL 13. MEXNRIKFVIT TT LIKERA REABEL 14. MEXNRIKFVIT TS LIKERA REABEL 15. MEKNRIKFVIT TS LIKERA REASEL 16. MEKNRIKSVIT TS LIKERA REASEL 17. MEKNIIKSVIT TS LIKERA REASEL 18. MEKNINKSVIT TS LIKERA REASEL 19. MEKNINKSVIT IS LIKERA REASEL 20. MEKNINKS IT IS LIKEDA REASEL 21. MEKNINKS IT IS LIKE A REASEL 22. MEKHINKS IT IS LIKE A REASEL 23. MEKHINKS IT IS LIKE A REASEL 24. METHINKS IT IS LIKE A REASEL 25. METHRNKS IT IS LIKE A WEASEL 26. METHRNKS IT IS LIKE A WEASEL 27. METHHNKS IT IS LIKE A WEASEL 28. METHHNKS IT IS LIKE A WEASEL 29. METHHNKS IT IS LIKE A WEASEL 30. METHHNKS IT IS LIKE A WEASEL 31. METHHNKS IT IS LIKE A WEASEL 32. METHHNKS IT IS LIKE A WEASEL 33. METHHNKS IT IS LIKE A WEASEL 34. METHHNKS IT IS LIKE A WEASEL 35. METHINKS IT IS LIKE A WEASEL _____________ Observe the reversion at 24/25, and the time it took to recover. indeed, the reverted letter was the last one to go correct inthe end. Case C exhibits Quasi latching, with letter reversion and recovery, with a high pop size per gen and a high per letter mutation rate. As predicted, but derided and dismissed. GEM of TKI kairosfocus
3] AF, 217: “Weasel” did not fix correct letters. Will you accept Professor Dawkins’ confirmation that this is so? Pardon a direct word or two: Mr F, when will you [and your friends over at Anti Evo] stop misrepresenting what I have said? I have for several weeks and threads here at UD highlighted that there is an observed latching of the o/p of weasel 1986. I then proposed that we explain that by two mechanisms, T2: letterwise explicitly latched search, and T3: implicit latching that works off co-tuned mut rate and pop size together with the mere proximity filter. Much digital ink has been wasted in trying to say that implicit latching is not latching, etc. The above printoffs should suffice to answer that decisively. More importantly, SO SOON AS IT WAS REPORTED THAT MR ELSBERRY HAS PASSED ON TESTIMONY THAT MR DAWKINS DID NOT EXPLICITLY LATCH WEASEL 1986, I AND OTHERS HAVE ACCEPTED THAT; AND WE HAVE INFERRED THAT WEASEL 1986'S O/P IS THEN BEST EXPLAINED ON IMPLICIT LATCHING. (The credibility of this explanation has been now abundantly and directly confirmed; thanks to Atom's public spirited effort.) This has been repeatedly stated, in this thread and elsewhere, including to you above. I therefore find it disappointing to see you at this late stage trying to insist or imply that I have not done so. 4] Crater, 219: Alan, you are rather late to the party. This has been pointed out numerous times to KF over the several weeks this latching/non-latching discussion has been going on. Please see the directly above. You too are seriously misrepresenting me -- and in your case, not for the first time. Also, I must note to you that it is strictly correct to refer to a gentleman as Mr, regardless of academic titles, in preference to just using his given or surname (which I find in the first place a bit familiar, and in the second somewhat abrupt). To do so is not at all a matter of disrespect. 5] error read the above printoffs, and run your own sims. I have plainly made no material error. 6] Joseph, 220: As I said according to Dawkins once something is found the search for it is over. THAT is the whole premise behind cumulative selection. And THAT is how Dawkins portrayed it in TBW. Correct. And, we have been plainly vindicated. 7] Hazel, 225: fitness function A proximity reward metric and algorithm are precisely not a BLIND watchmaker fitness function. And that is the main fault with Weasel. 8] DK, 230: I’m not going to edit everything for your delicate sensibility on the off chance you might get the vapors. The conflation of design thought with biblical creationism is a slander, and often a calculated one. I asked you (and others) -- very properly -- to refrain from propagating that common slander here. In your presentation of Mr Elsberry's arguments, you replicated that slander -- a slander you yourself have resorted to in this thread. I therefore noted (again) correctively. For good reason. Contra your "vapours" dismissive remark; which reveals a want of concern for truth and fairness on your part. Which just happen to be duties of civil care. 9] The bulk of your response to Dr. Elsberry was non-responsive. I would have thought you would have attacked his math with fewer words and more calculations — more than none, at least. The basic problem -- as I have pointed out previously -- with Dr Elsberry's remarks was CONCEPTUAL. On the GIGO principle, no mathematical model or algorithm is any better than its relevance, its input data, assumptions and logical/dynamical structure. 
I summarised the basic error yesterday. Today, thanks to Atom, I present empirical data that show that I am right on the material points. And that is enough for any reasonable onlooker. _____________ Atom, thanks again. GEM of TKI kairosfocus
GP, other participants and onlookers, Further footnotes. But first, a big thank-you to Atom for providing a good test-bed. And BTW, out of the box, at 50 members per generation and with 4% per letter mutation rate, proximity reward search latched or so close to latched on my first run, as makes no difference. You will also see predominance of both no-change cases and of the single step advances, just as Joseph and I have remarked on. Finally, I did a 500 pop at 4% run as run no 2. In 31 gens it hit target, i.e the tail effect shows up. QED. _______________ RUN A: 50/gen, 4% per letter mut rate: 1. HIMMITFEBTIYEVJHKWLSQZBWWZHW 2. MIMMITFEBTIYEVJHKWLSQZBWWZHW 3. MIMMITFEBTIYEVJHKWL QZBWWZHW [ . . . ] 27. MEIFINKE IT DS KGKL A VEXJXT 28. MEIFINKE IT DS KGKL A VEXIXT 29. MEIFINKE IT DS KGKL A VEXIXT 30. MEIFINKE IT DS LGKL A VEXIXT 31. MEIFINKE IT DS LGKL A VEXIXT 32. MEIFINKE IT DS LGKL A VEXIXT 33. MEIFINKE IT DS LGKF A VEXZXT 34. MEIFINKE IT DS LGKF A VEXZXT 35. MEIHINKE IT DS LGKF A VEXZXT 36. MEIHINKE IT DS LNKF A VEXZXT 37. MEIHINKE IT VS LNKF A VEXZXT 38. METHINKE IT VS LNKF A VEXZXT 39. METHINKE IT VS LNKF A VEXZXT 40. METHINKE IT VS LNKF A VEXZXT 41. METHINKE IT VS LNKF A VEXZXT 42. METHINKE IT VS LNKF A VEXZXT 43. METHINKE IT VS LIKF A VEXZBT 44. METHINKE IT VS LIKF A VEXZBT 45. METHINKE IT VS LIKF A WEXZBT 46. METHINKE IT VS LIKF A WEXZBT [ . . . ] 62. METHINKS IT GS LIKK A WEXSBG 63. METHINKS IT GS LIKK A WEXSBG 64. METHINKS IT NS LIKK A WEXSBG 65. METHINKS IT NS LIKK A WEXSBG 66. METHINKS IT NS LIKK A WEXSBG 67. METHINKS IT NS LIKK A WEXSBG [ . . . ] 120. METHINKS IT IS LIKE A WEASEG 121. METHINKS IT IS LIKE A WEASEG 122. METHINKS IT IS LIKE A WEASEG 123. METHINKS IT IS LIKE A WEASEG 124. METHINKS IT IS LIKE A WEASEG 125. METHINKS IT IS LIKE A WEASEG 126. METHINKS IT IS LIKE A WEASEG 127. METHINKS IT IS LIKE A WEASEG 128. METHINKS IT IS LIKE A WEASEG 129. METHINKS IT IS LIKE A WEASEG 130. METHINKS IT IS LIKE A WEASEG 131. METHINKS IT IS LIKE A WEASEL _________________ RUN B, 500 pop/gen, 4% per letter mut rate: 1. MEL LSI YHXMAJLMDGMVKTSKGW 2. MEL LSI YHXIAJLMDNMVKTSKGW 3. MEL LSI YHXISJLMDNMJKTSKGW 4. MEL LSI YHXISJLMDN JKTSKGW 5. MEL LNI YHXISJLDDN JKTSKGW 6. MEL LNI YHXISJLDDN JKTEKGW 7. MEL LNB BHXISJLDDN JKTEKGE 8. MEL LNB BHXISJLIDN JKTEKGE 9. MEL LNB BHXISJLIDN JKTEKSE 10. MEL LNB BHXISJLIDN JKTEKSEL 11. MEL LNK BHXISJLIDN JKTEKSEL 12. MEL LNK BHXIS LIDN JKTEKSEL 13. MET LNKV BHXIS LIDN JKTEKSEL 14. MET LNKV BHXIS LIDN AKTEKSEL 15. MET LNKV BHXIS LIDE AKFEKSEL 16. MET LNKV BHXIS LIKE AKFEKSEL 17. MET LNKS BHXIS LIKE AKFEKSEL 18. MET LNKS BH IS LIKE AKFEKSEL 19. MET LNKS BH IS LIKE AKFEKSEL 20. MET LNKS BH IS LIKE AKWEKSEL 21. MET INKS BH IS LIKE AKWEKSEL 22. MET INKS BH IS LIKE AKWEKSEL 23. MET INKS BH IS LIKE AKWEKSEL 24. MET INKS IH IS LIKE AKWEKSEL 25. MET INKS IH IS LIKE A WEKSEL 26. MET INKS IH IS LIKE A WEASEL 27. MET INKS IH IS LIKE A WEASEL 28. METHINKS IH IS LIKE A WEASEL 29. METHINKS IH IS LIKE A WEASEL 30. METHINKS IH IS LIKE A WEASEL 31. METHINKS IT IS LIKE A WEASEL ______________ The matter in the main now settled, on a few points: 1] GP, 214:the weasel argument in itself is completely silly as even generically related to the problem of biological evolution, unless Dawkin’s intent was to show that intelligent, designed selection is a very powerful tool. Which, I believe, was not probably his purpose. So, his bringing about that argument, and using it for so long, has anyway been an act of cognitive dissonance Correct. 
2] I must say that I don’t understand fully the necessity of my ID friends to concentrate so much on the “latching” issue. In fact, GP, we have tried to emphasise the main issue that you have, but the Anti Evo folks seem to have thought -- plainly from the above, in gross error -- that they had a point where they could attack and discredit. So, the focus you saw was responsive, not primary. But now, the matter is plainly settled beyond reasonable doubt by Atom's kind provision of software open to anyone. [ . . . ] kairosfocus
(notes cont...) Does the fitness function we use matter? ================================== There is no way to code for all the possible Reward Matrices created by all the different possible fitness functions, so the above are just a tiny sampling. Some (like the Proximity Reward matrix) will help a search. Others will not, or may even hinder it. For a string of length n, where each position can have one of m characters, the possible number of string permutations is m ^ n. If we assign each of those permutations a value between 0 and F (in other words, give it a "fitness" value), we get (F + 1) ^ (m ^ n) distinct matrices, which can be generated by as many different fitness functions. To put real numbers to this, assume we have a binary string of three bits that we are searching for. (Let's say we're looking for the string 011.) Since we can have two possible characters (0 or 1) and our string is three bits, we have 2 ^ 3 = 8 possible permutations we'd have to search through. (This is our Search Space.) If we now want to use a fitness function to help find the string, we can decide to limit ourselves to fitness functions that assign values to the three bit strings that are between 0 and 3. Therefore, we now have 4 ^ (2 ^ 3) = 65,536 unique fitness/reward matrices and as many fitness functions that output a unique reward matrix for the string permutations. We must therefore choose our fitness function wisely from the 65,536 available, since not every fitness function will help us in our search. Since Weasel Ware allows you to try a variety of different fitness functions you can see for yourself that it isn't the power of selection that allows a difficult search to be successful, but the choosing of an information-rich fitness matrix that aligns closely enough with your target. Using just any fitness reward matrix (such as one that is neutral to the target) will by no means guarantee success, even if mutation, reproduction and selection are available, as they are in the Proximity Neutral Search. USAGE NOTES: ============ "Offspring" in the Proximity Reward and Proximity Neutral controls how many strings per generation are generated based on the current best string. Mutation Rate is the per-letter mutation rate for offspring strings. The default of 4.0% will change roughly one letter in a 28-letter string each replication, on average. Mutli Run Mode will allow you to test a given search strategy over a fixed number of runs. The "View Results" button will then show the results in both HTML display and CSV display, for pasting into a spread-sheet analysis program such as Excel. Atom's Notes: ========== Any errors in interpretation or understanding in the version 2.0 release are strictly mine, are unintentional, and should not reflect negatively on the EIL or EIL staff. If you do find any errors in the implementation, please let me know by contacting me via my website (atomthaimmortal.com, the contact form.) If you do choose to contact me, please be as friendly as you would in person. Atom
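The arithmetic in the three-bit example above is easy to confirm directly; the following two-function snippet (illustrative only, not part of Weasel Ware) reproduces the 8 and 65,536 figures for m = 2, n = 3, F = 3.

// m = alphabet size, n = string length, F = maximum fitness value.
function searchSpaceSize(m, n) {
  return Math.pow(m, n); // number of distinct strings
}
function rewardMatrixCount(m, n, F) {
  return Math.pow(F + 1, Math.pow(m, n)); // (F + 1) ^ (m ^ n) distinct reward matrices
}

console.log(searchSpaceSize(2, 3));      // 8
console.log(rewardMatrixCount(2, 3, 3)); // 65536

For the 28-character, 27-letter Weasel alphabet the same formula blows up far past anything enumerable, which is why sampling and hand-chosen fitness functions are the practical way to explore the space.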
Here goes the notes, since browsers may not word-wrap the text file... Changes in Version 2.0: ================= - Two new Searches: Proximity Reward Search and Proximity Neutral Search - Multi-Runs Mode: Useful for running the GUI many times in a row, for gathering data on average perfromance. - Sortable Searches: You can now reorder the searches, based on which searches you're interested in viewing. Just grab the search by its label and move up or down. - Disabling of Searches: You can disable searches, so that you spend your CPU cycles on only the searches you are interested in. - Proximity Neutral Search - New Reward Functions: Simple Sum, CRC32, Wave Interference, Partially Neutral: CRC32 Based, Partially Neutral: Anagram, Custom Fitness Function mode (where you can code your own fitness function to run in the GUI.) Weasel Ware 2.0 Highlights: ===================== To latch or not to latch?: Version 2.0 contains both a Partitioned Search, that uses information about the target to freeze correct letters in place, as well as a Proximity Reward Search, that does not freeze letters, but uses information about the target to assign values to candidate strings. Weasel Ware now allows you to run both types of searches side by side. Proximity Neutral Search: Since choosing a reward function rich in active information, such as a simple Proximity Reward function, is unrealistic for biological purposes, we have a Proximity Neutral Search available. This search can use fitness functions with varying levels of active information. There are fitness functions that hardly reference the target string at all, other than using information about the target length (such as CRC32, Simple Sum and Wave Interference) and there are those that use some information about the target string to further narrow the search space. These functions (prefixed with "Partially Neutral: ") do narrow the relevant search space, but not to the extent of a Proximity Reward function. For example, the Anagram fitness function ranks all Anagrams of the target string (which by definition will include the target string) as the highest rank, and will rank all other strings based on how many letters they contain that the target does not, or vice versa. (In other words, based on their distance from the Anagram subset.) In this way, it quickly converges on a subspace of only n! options from the full set of m^n options, where n is the string length and m is the number of letters in the currently used alphabet. For an english word ten characters long, we reduce the relevant search space from 205,891,132,094,649 options (27^10, the 27th "letter" being the space symbol) to only 3,628,800 options (10!), which we'd then have to search through using random methods. (Though even here we can further intelligently tailor our mutation rate and population size to aid our search.) The available Reward (Fitness) Functions for Proximity Neutral Search are: - Simple Sum: This function creates a reward mapping based on the ASCII sum of the characters of the string. It plugs this sum into a sin() function and takes the absolute value to get a number between 0.0 and 1.0, which it then multiplies by the string length. This value becomes the number of "errors" in the string. The string with the lowest number of errors is then chosen. - CRC32: This function uses a CRC32 checksum of the string to get a 32-bit integer, which it then plugs into a sin() function and multiplies against the string length, similar to the Simple Sum method. 
- Wave Interference: This function uses the interference pattern created by two sin waves of different wavelength and phase to get the value for each string. The lowest value represents the best string. - Partially Neutral CRC32: This function uses the distance between the CRC32 checksum of the string compared to the target string. It then selects the string with the smallest CRC32 distance from that target. - Partially Neutral Anagram: See discussion above. Selects based on distance from being an anagram of the target string. - Custom Fitness Function: In addition to the above mentioned functions, users can test any other fitness function they can come up with by using the Custom Fitness Function mode. A button labeled "Edit Custom Code" appears when that mode is selected, where users can enter valid javascript to assign two values used for comparison: aError and bError. Whichever has the lowest value becomes the "fittest" string, and is selected. The user has access to any standard javascript function and can also code their own functions, as the default examples show. (The default code loaded is a javascript snippet for a Proximity Reward function.) The two strings the users will compare are available as variables "a" and "b" in the javascript code, and the target string is simply the variable "target". The user's code will be evaluated using eval() and if an exception occurs it will be trapped and a message displayed to the user in red text. (On exceptions, aError and bError are both assigned a fitness value of 0.) (to be cont...) Atom
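As a concrete reading of the "Partially Neutral: Anagram" description above, here is one way such a distance could be computed. This is my own sketch of the idea, not the EIL implementation: the error counts, with multiplicity, the letters by which a candidate's letter bag differs from the target's, so every anagram of the target (including the target itself) scores zero.

// Tally how many times each character appears in a string.
function letterCounts(s) {
  var counts = {};
  for (var i = 0; i < s.length; i++) {
    var c = s.charAt(i);
    counts[c] = (counts[c] || 0) + 1;
  }
  return counts;
}

// Distance from the anagram subset: surplus letters plus missing letters.
function anagramError(candidate, target) {
  var a = letterCounts(candidate), t = letterCounts(target), error = 0, c;
  for (c in a) error += Math.max(0, a[c] - (t[c] || 0)); // letters the target does not have
  for (c in t) error += Math.max(0, t[c] - (a[c] || 0)); // letters the candidate is missing
  return error;
}

console.log(anagramError("WEASEL", "WEASEL")); // 0 -- the target itself
console.log(anagramError("LEASEW", "WEASEL")); // 0 -- an anagram scores just as well
console.log(anagramError("WEASEL", "WESSEL")); // 2 -- an A traded for an extra S

Selection under such a function can only push the population toward the n! anagram subset; picking the right member of that subset is then left to chance, which is the sense in which the function is only "partially" informative about the target.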
Gentlemen, Weasel Ware 2.0 is now available for a sneak preview here: http://eil.digitalrelics.com/weasel It now features both latching and unlatched versions (Partitioned Search and Proximity Reward Search), custom fitness functions via the Proximity Neutral Search, re-orderable searches, multi-run mode (for doing large scale experiments), search disabling (to focus only on certain searches), and much, much more. Please take a look and relate any errors you notice. KF, You'll be pleased to know that your "grouped letters" mode is also available via the Custom Fitness Functions, as the "Advanced" Example. (You need to click on the "Tips for Creating Custom Fitness Functions", then "Examples" to find it...you can load it into the editor by clicking the button provided, then hitting "Update".) Your intuition was correct, search performance drastically degrades when grouping 2, 3 or more letters. An overview of the changes is available here: Changes in version 2.0 (textfile) This will be put on on the EIL site soon, so feel free to test and run some experiments in the meantime. Again, if you notice any errors, please let me know. Atom Atom
kairosfocus, I edited the post I forwarded so your name doesn't appear. That's the limit of what I might do. I'm not going to edit everything for your delicate sensibility on the off chance you might get the vapors. Buy some smelling salts. The bulk of your response to Dr. Elsberry was non-responsive. I would have thought you would have attacked his math with fewer words and more calculations -- more than none, at least. David Kellogg
Joseph said: 'The only thing the “weasel” program demonstrates is that Dawkins can write a computer program. It doesn’t do anything else.' It riles an awful lot of people :) faded_Glory
I don't know if there is a direct quote or not, and I grant you that initially I myself more or less assumed that letters were fixed once found. However, upon reflection and considering the context of the entire argument in TBW, it is much more reasonable to let all letters mutate regardless of the fitness level of the phrase. Only in that way will the software mimic 'mutations random wrt fitness'. Published examples of Weasel programs that operate that way show clearly that the target phrase is reached quite quickly, making the point that Dawkins intended all along - cumulative selection vastly outperforms random selection. At this stage I really am unsure of what the discussion is all about. Everybody seems to agree that 'explicit latching', i.e. a partitioned search, is not required for the process to work. So why are we arguing? faded_Glory
Joseph @219
As I said according to Dawkins once something is found the search for it is over.
Please provide a quote from the Weasel discussion in The Blind Watchmaker that supports this claim. If you read Dawkins' actual explanation of the Weasel program, there is nothing to suggest that letters are not subject to mutation once correct. In fact, the term "random mutation" is used. Either defend your statement with reference to what the Dawkins actually wrote or have the integrity to retract it. JJ JayM
The only thing the "weasel" program demonstrates is that Dawkins can write a computer program. It doesn't do anything else. Joseph
kf writes,
...And, that is compounded by a slanderous conflation of design thought and Creationism — which, DK, I specifically requested that you refrain from using here.
I'm not sure that your request means much, kf. I could request that you never again write about how Weasel is faulty because the fitness function is merely comparing to a phrase and not anything functional, because we all know that and have acknowledged that to death, but you are under no obligation to honor my request. This is an internet discussion forum, and people just take what other people have the time and inclination to give. hazel
faded_glory:
What is found and kept and improved upon is the phenotype, i.e. the phrases.
The phrase is the phenotype.
Not the genes, the letters.
There aren't any genes, only letters in the "weasel" program.
Those keep mutating happily away in the background, oblivious to how close the winning phenotype of each generation is to the target.
Please provide the relevant quote or quotes from TBW. This discussion is in reference to TBW only. Joseph
Alan Fox:
There was nothing in Dawkins' "Weasel" program that fixed correct letters.
One more time for the willfully ignorant:
1- Given a target
2- Given a survival qualification of "closest to the target"
3- Given a small enough mutation rate AND
4- Given a large enough sample size
The output will NEVER be less than the input. That means the letters are fixed just because of the programming. And that appears to be the whole point behind cumulative selection- that you don't keep looking for what has already been found. Now, if you can, provide the quotes from TBW that refute that premise. Joseph
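The four "givens" above are easy to test empirically. Here is a quick stand-alone sketch (plain JavaScript; the 500-member population and 4% per-letter rate are illustrative choices of mine, not Dawkins' published settings) that counts how often the generational champion ever comes out with fewer correct letters than its parent.

var TARGET = "METHINKS IT IS LIKE A WEASEL";
var ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";

function randomChar() {
  return ALPHABET.charAt(Math.floor(Math.random() * ALPHABET.length));
}
// Copy with a per-letter chance of random error; no letter is ever locked.
function mutate(s, rate) {
  var out = "";
  for (var i = 0; i < s.length; i++) {
    out += (Math.random() < rate) ? randomChar() : s.charAt(i);
  }
  return out;
}
// Number of letters currently matching the target.
function correctCount(s) {
  var n = 0;
  for (var i = 0; i < s.length; i++) {
    if (s.charAt(i) === TARGET.charAt(i)) n++;
  }
  return n;
}

var champ = "";
for (var i = 0; i < TARGET.length; i++) champ += randomChar();

var generations = 0, setbacks = 0;
while (champ !== TARGET && generations < 10000) {
  generations++;
  var best = null;
  for (var k = 0; k < 500; k++) {            // large sample size per generation
    var child = mutate(champ, 0.04);         // small per-letter mutation rate
    if (best === null || correctCount(child) > correctCount(best)) best = child;
  }
  if (correctCount(best) < correctCount(champ)) setbacks++; // output worse than input?
  champ = best;
}
console.log("generations:", generations, "setbacks:", setbacks);

With these settings, setbacks comes out 0 in run after run: nothing in the code freezes letters, but the combination of a low mutation rate and a large filtered sample makes a step backward in the champion vanishingly unlikely, which is the implicitly latched behaviour being argued over. Shrink the population or push the rate up and setbacks start to appear.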
Joseph, What is found and kept and improved upon is the phenotype, i.e. the phrases. Not the genes, the letters. Those keep mutating happily away in the background, oblivious to how close the winning phenotype of each generation is to the target. faded_Glory
Alan Fox:
I asked Douglas Axe if he had successfully applied the explanatory filter in a biological context. He said “No”. What was wrong with the question?
For starters Dr Axe stated:
I have in fact confirmed that these papers add to the evidence for ID. I concluded in the 2000 JMB paper that enzymatic catalysis entails "severe sequence constraints". The more severe these constraints are, the less likely it is that they can be met by chance. So, yes, that finding is very relevant to the question of the adequacy of chance, which is very relevant to the case for design. In the 2004 paper I reported experimental data used to put a number on the rarity of sequences expected to form working enzymes. The reported figure is less than one in a trillion trillion trillion trillion trillion trillion. Again, yes, this finding does seem to call into question the adequacy of chance, and that certainly adds to the case for intelligent design.--Douglas Axe
(for the original Evolution News and Views article go HERE) Therefore you should have asked: "Dr Axe, according to this quote (provide him the quote and the context) you said these papers add to the evidence for ID because you have determined that chance cannot account for the sequence specificity. How did you make that determination?" Ya see, Alan, the EF first eliminates chance and necessity; then, if specificity is met along with complexity, design is inferred. So just going by his quote, if he didn't use the EF explicitly, he certainly used something very EF-like. However I don't expect you to understand that. Joseph
Ummm, in TBW Dawkins implies that not only is what is found kept, it is also improved upon, however slightly. Alan Fox chimes in with:
The reason letters tend to stay largely unchanged is because the closest match to target is selected at each generation.
IOW Alan you don't understand my point. But anyway I challenge anyone to find the quote or quotes from TBW which would demonstrate that cumulative selection is a lost and found mechanism. As I said according to Dawkins once something is found the search for it is over. THAT is the whole premise behind cumulative selection. And THAT is how Dawkins portrayed it in TBW. So if you can provide the relevant quote or quotes that refute that. Joseph
It was intended to demonstrate the power of cumulative selection as opposed to random selection
Really, Alan, you are rather late to the party. This has been pointed out numerous times to KF over the several weeks this latching/non-latching discussion has been going on. His insistence in not conceding this point, along with his continued use of Mr. when referring to Doctor Elsberry, has led me to the conclusion that he is not going to acknowledge any error on his part. Onlookers, even sympathetic ones such as me, have long since noted. crater
Sure, Weasel is not an example of unguided evolution...
Which Dawkins himself makes very clear from the outset. It was intended to demonstrate the power of cumulative selection as opposed to random selection. "It is really a bit of a cheat" -Richard Dawkins. Alan Fox
Mr M, There was nothing in Dawkins' "Weasel" program that fixed correct letters. The reason letters tend to stay largely unchanged is that the closest match to the target is selected at each generation. This is a simple matter of fact. "Weasel" did not fix correct letters. Will you accept Professor Dawkins' confirmation that this is so? In any case, Dr Elsberry (and others, see links) have also demonstrated that latching is unnecessary. "Failed to make his case" is farcically denying the obvious. Alan Fox
gpuccio, Sure, Weasel is not an example of unguided evolution. We can repeat this endlessly but I'm not sure why, because it was never presented as such, least of all by Dawkins who explicitly stated that it 'is a bit of a cheat'. I am at a loss why some people spend so much time and so many words to erect and tear down this strawman of their own making. The other point about needing prior knowledge has actually been addressed many times. Evolution has no goal, but that does not prevent it from generating solutions that are adapted to their environment. The main flaw in the tornado argument is that it ignores descent with modification in the presence of selection pressures. As a criticism of Darwinian evolution it misses its target by miles. Small stepwise tornadoes will indeed not result in an airplane. Do you really think anyone believes they do? faded_Glory
faded_Glory: Ah, I forgot... And anyway, the weasel in no way "counters the ‘tornado in a junkyard’ argument against evolution". The tornado is not supposed to know in advance what it has to build in the junkyard. That remains by far the main difference which has to be countered. Even small, stepwise tornadoes in a junkyard, given a very long time, would never build up an airplane without any prior knowledge of what an airplane is. When has Dawkins, or anybody else, countered this argument? gpuccio
faded_Glory: I have not followed the discussion about latching because I am scarcely interested in it. I think hazel's summary at 205 pretty well captures the substance of the matter. Least of all am I interested in showing whether Dawkins was smart or not when he wrote those words in TBW: I already think that he is not smart at all, but for much more serious reasons. My idea is that the weasel argument in itself is completely silly even as something generically related to the problem of biological evolution, unless Dawkins' intent was to show that intelligent, designed selection is a very powerful tool. Which, I believe, was probably not his purpose. So, his bringing up that argument, and using it for so long, has anyway been an act of cognitive dissonance, even if I am glad of how much that helped us in ID. In the same way, to be fair, I must say that I don't fully understand the need of my ID friends to concentrate so much on the "latching" issue. Finally, even if I have some minor problems with your terminology, I could well accept your statement that: "Weasel demonstrates that a process of steady-state random mutation of 'genes' (letters) coupled with selection of offspring 'organisms' (phrases) is far more likely to hit the target (a particular pre-defined phrase) than random selection of all letters in one generation." provided we add the following: "Weasel demonstrates that a process of steady-state random mutation of 'genes' (letters) coupled with intelligent selection of offspring 'organisms' (phrases) by means of a previous knowledge of the final target is far more likely to hit the target (a particular pre-defined phrase) than random selection of all letters in one generation." gpuccio
3] Increasing N, the population size, does not make it more likely that the best candidate in a generation will show a loss of a correct base from its parent, it makes it less likely. On the contrary, when per letter mutation rate, population size and nearest-to-target filtering [with no reference to functionality -- NB: "nonsense phrases"] interact, under certain circumstances it will be possible for letter substitutions, and letter substitutions with advances in a third letter, to occur. The relevant circumstances are that sufficient mutated phrases -- samples from the overall population of 10^40 configs -- are sampled in a generation that the far skirt/tail becomes a material factor in the generations, sufficiently so that enough double and triple mutation cases happen that the cases just outlined become reasonably likely to be sampled by the mutation mechanism. A threshold for that is that Np --> 1 or so, i.e. we are now in a sufficiently large sample population [N] that low probability cases as described are reasonably likely to be observed. [Thus, the force of my darts and charts illustration, where of course the areas of the stripes in the bell or reverse-J curve are proportional to the odds of getting hit. The peak is far more likely to be hit than the tail. But if enough "drops" hit the chart and scatter more or less evenly, eventually you will see hits on the tail as a reasonable expectation.]

And, the underlying assumption here is that EXPLICIT latching of the correct letters undergoing mutation -- i.e. in the previous champion -- is forbidden. So, ANY and all of the 28 letters face the same probability of mutation, but one that is low enough that the no-change case will be material, and that 1-change, 2-change, 3-change etc. will peak then tail off. Then, when the filter acts, it selects the closest to target -- not a random process but a Hamming-style distance metric on the digital space or the like. (For ease of insight, let us say that an incorrect letter is a 1, and a correct one is a 0, so that the metric is a number that runs from 28 to 0.) In such a situation, most 1-letter changes will be the same as or farther back than the previous champion. A relative few -- based on the 1 in 27 odds of letters being correct on random change -- will possibly advance. So, if the population is in a moderate range and the rate is small enough, we will see no change winning fairly frequently, and single steps forward dominating the rest; which fits with the 1/2 or so cases being no change and the rest being dominated by single steps to target, so that 40+ to 60+ gens are "good" showcased runs.

Under these circumstances, double mutations of the previous champion will be rarer, and triples rarer still. But as the number in the generation of mutants rises, far-skirt cases will begin to appear, and eventually we will see substitutions [which would cause reversions while preserving distance to target] and substitutions with an advance [which would cause a step forward beyond where the previous champion was]. Such changes would then appear in the champions' march, and we would see letters reverting and the like -- i.e. first quasi-latching [rare shifts], then complete breakdown of latching -- especially in the earlier stages. Once we are close to target and MOST letters are correct, the population dynamics will shift significantly, as most single letter mutations would be reversions, and substitutions and substitutions with steps forward will be harder to achieve. In short, Mr Elsberry has failed to make his case.
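To make the no-change / 1-change / 2-change breakdown concrete, here is a minimal sketch of the binomial arithmetic (the per-letter rate of 0.02 and the 50 children per generation are purely illustrative assumptions, not settings published for Weasel):

from math import comb

L = 28     # letters in the target phrase
u = 0.02   # illustrative per-letter mutation rate (an assumption, not a published value)
N = 50     # illustrative number of children per generation (also an assumption)

def p_changes(k):
    # Binomial probability that one copied phrase carries exactly k letter changes
    return comb(L, k) * (u ** k) * ((1 - u) ** (L - k))

for k in range(4):
    pk = p_changes(k)
    # N * pk is the expected number of such children in a generation (the "Np" figure above)
    print(f"k={k}: P(exactly k changes) = {pk:.4f}, expected per generation = {N * pk:.1f}")

With these illustrative values the no-change case carries a bit over half the per-child probability mass and multi-letter changes sit out in the tail; raising u or N shifts that balance, which is just the co-tuning point at issue.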
________________ GEM of TKI kairosfocus
Onlookers: It must first be pointed out that the evo mat advocates at the often linked site have inadvertently told us a lot about how such would use power in institutions and the general community -- and what we see is a clear warning to our civilisation of the peril it is in at such hands. We have been warned; if we are paying attention. Second, I must -- again -- thank Joseph for his kind intervention. He is of course right, but it is clear that all we can do is make the evident truth on the merits a matter of record for onlookers. Now, there are some points that are worth remarking on:

1] fG, 211: [Weasel] counters the 'tornado in a junkyard' argument against evolution . . . Actually, it simply begs the question of the need to achieve a minimal threshold of complex functionality [~ 600 - 1,000 bits worth of DNA; MUCH more than is in the Weasel sentence] before cell based life can emerge and before novel major body plans [~ 10's - 100's of megabits] can emerge. Weasel works by INTELLIGENT design, carrying out a targeted search that makes no reference to the need for adequate functionality before hill climbing can be a reasonable strategy. That is acknowledged by Mr Dawkins in the qualifications -- or, is that: weasel words -- he makes in Ch 3 of TBW, but the fact that he still used Weasel plainly reflected his judgement that it would be rhetorically effective nevertheless in distracting attention from the cogency of the Hoylean objection: that there is a credible threshold of complex function to be crossed before hill-climbing mechanisms such as natural selection can reasonably be held to operate. It also illustrates how a confessedly "misleading" [cf 88 above] simulation can be all too rhetorically effective. In short, Weasel is yet another misleading icon of evolutionary materialism. So, let us be warned.

2] Mr Elsberry, as cited DK, 202: the misuse of "latching" FYI, Mr Elsberry: I (among others) first used this term and similar terms such as "ratcheting," specifically, to refer to the way the o/p of Weasel 1986 [not 1987, not various neo-weasels since and not the various quasi-weasels] acts: across 200+ of 300+ letters that could in principle vary, once a letter becomes correct, it stays that way on the o/p. You don't have any right to twist this around to assert that we are "misusing" the terms that describe what was evidently happening with the o/p of Weasel in 1986. That is a strawman fallacy, with an ad hominem component. And that is compounded by a slanderous conflation of design thought and Creationism -- which, DK, I specifically requested that you refrain from using here. On the contrary, you have a duty of care to represent those you wish to criticise fairly and accurately. This -- in the teeth of easily accessible information to the contrary -- you have failed to do.

Back on the substantial issue: as I pointed out at 183 above, it is utterly implausible for the o/p of Weasel 1986 to be such that reversions occur but are always offstage, so that the observed o/p latching is only apparent, an artifact of sampling. For, the required patterns of behaviour are simply utterly probabilistically implausible. The simplest, best explanation of the o/p -- the march of generational champions -- as published by Mr Dawkins at that time, is that it is latched. Thus, the real question is the mechanism to explain it, per empirically based inference to best explanation. Weeks ago now, once this issue was on the table, I put forth T2 -- explicit -- and T3 -- implicit -- mechanisms.
Of these, the implicit mechanisms are based on the interaction of the proximity-only filter with the mutation rate and population size. Specifically, the number of generations to target indicates that no-change wins the generation championship about 1/2 the time in Weasel 1986, and that the rest of the time one-step changes predominate. These are pretty direct inferences from the printouts. They entail that the double-step substitution mutation, where a correct letter reverts and another substitutes, or the triple mutation, where a substitution plus an advance occurs as another letter goes correct, are too far out in the skirt to materially affect the march of champions to the target, as published. All of this is consistent with the pattern of interaction between filter, population per generation and per letter mutation rates as described, under the rubric: implicit latching as a mechanism for the observed o/p latching. There is no misuse of terms -- at least, on my part. [ . . . ] kairosfocus
Weasel demonstrates that a process of steady-state random mutation of 'genes' (letters) coupled with selection of offspring 'organisms' (phrases) is far more likely to hit the target (a particular pre-defined phrase) than random selection of all letters in one generation. It counters the 'tornado in a junkyard' argument against evolution. That is all it does, no more and no less. Why does anybody actually have a problem with that? faded_Glory
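For onlookers who want to try this for themselves, a minimal non-latching Weasel can be sketched in a few lines of Python. The mutation rate and population size below are illustrative assumptions (no such settings are given in the passage quoted from the book elsewhere in this thread); every letter, correct or not, stays subject to mutation, and the closest match to the target is simply carried forward each generation:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 27 characters: the capitals plus a space
MUTATION_RATE = 0.04                      # illustrative assumption, not a published value
POP_SIZE = 100                            # illustrative assumption, not a published value

def score(phrase):
    # Proximity to the target: number of matching positions (no notion of "function")
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(parent):
    # Every position, correct or not, has the same chance of mutating: no latching
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    children = [mutate(parent) for _ in range(POP_SIZE)]
    parent = max(children, key=score)     # keep the closest match, even if it slipped back
    if generation % 10 == 0:
        print(generation, parent)
print(generation, parent)

Whether a correct letter ever reverts in the printed line of champions depends on the particular rate and population chosen, which is exactly what the rest of this thread argues over.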
Looking back, I now regret going along with "implicit" and "quasi-" latching. It would have been better to have used "latching" for just one thing - the situation where an explicit rule keeps a correct letter from ever being subject to mutation again - and found a different word to refer to the relatively steady progress towards the target. I'll know better next time, if there is a next time. hazel
Link broken in #208. Link to Zachriel's "weasel" program. The site JayM links to in 207 also gives a lucid explanation. Alan Fox
Dawkins' "weasel" didn't latch. We'll see if Professor Dawkins wishes to confirm this yet again. "Weasel" does not need to latch. This is amply demonstrated by Zachriel here. @ Joe I asked Douglas Axe if he had successfully applied the explanatory filter in a biological context. He said "No". What was wrong with the question? Alan Fox
Joseph @206
In the original sense Dembski et al. referenced "The Blind Watchmaker" for this "latching". And going by TBW there isn't anything to indicate otherwise. Cumulative selection as described and illustrated by the "weasel" program- in TBW- is a latching process.
This has been repeatedly shown to not be the case. David Kellogg has posted copious, clear refutations of this claim. He even posted a link to a site that goes through the text of The Blind Watchmaker and creates the Weasel program from Dawkins' own words. Rather than simply repeat your baseless assertions, why don't you go through the same exercise and show how it is even remotely possible to come to your erroneous conclusion? JJ JayM
hazel:
In the original Dembski sense, latching meant explicitly fixing correct letters so they were no longer subject to mutation, and that is the meaning we should have stuck with.
In the original sense Dembski et al. referenced "The Blind Watchmaker" for this "latching". And going by TBW there isn't anything to indicate otherwise. Cumulative selection as described and illustrated by the "weasel" program- in TBW- is a latching process. No need to go looking for what you have already found. Had he said cumulative selection was a lost and found mechanism then that wouldn't quite illustrate how nature can be a designer. "Here is nature, not only blind but also clumsy." wouldn't quite grab the readers' attention nor would it help make a case for TBW. So by reading the referenced material Dembski et al.'s inference of latching looks OK. Joseph
I concur with most (maybe all) of what faded glory has said. Much of the confusion has been because we have been unable to settle on some definitions. In the beginning, "latching" meant that the program contained a rule that said that once a letter was correct in a parent, it would never change - it would not even be subject to the possibility of mutating. Once correct, the letter was fixed, or latched. Dembski's explanation in the paper mentioned way above assumed Weasel used latching in this sense of the word. The alternative to this is non-latching: correct letters can indeed mutate. Everyone agrees that in the long run letters that are correct stay correct, not because they are latched but because the selection process moves us slowly towards the target - which is the idea the program was written to illustrate. Because of this, kairosfocus added the phrases implicit latching or quasi-latching to describe this second idea. That is: non-latching at the mutation level leads to implicit latching at the selection level. Everyone, even kairosfocus, accepts that this is true. However, confusion about the terminology just keeps muddying the waters. Arguing about what is "really" latching is fruitless. In the original Dembski sense, latching meant explicitly fixing correct letters so they were no longer subject to mutation, and that is the meaning we should have stuck with. hazel
Alan Fox, You are not even addressing my argument. I take it that means you don't understand it. IOW you think that your lack of understanding is some sort of refutation. IOW you are right. Trying to have a discussion with you is a waste of time because you cannot think. But anyway:
Weasel did/does not latch because it does not need to latch.
If a target is given, AND the qualification for the survivor is "closest to the target", AND the mutation rate is small enough, AND the sample size is large enough, the output will NEVER be less than the input. IOW latching happens because it was designed into the program, given the target, the survivor qualification, a small mutation rate AND a large sample size. And yes, I remember Doug Axe. Last I knew he didn't respond unequivocally and you didn't ask the proper questions. Joseph
Joe, I know it is a complete waste of time to attempt to get you to think about anything objectively, but... Weasel did/does not latch because it does not need to latch. Run Zachriel's version. Letters can and do revert. But because the closest match to the target phrase is chosen in each generation, the target is found without recourse to latching. Latching is unnecessary. Dawkins did not use latching! I can ask him to confirm it if you like! Remember Doug Axe. He responded unequivocally to a clarification. Alan Fox
kairosfocus, I'll relay something from Dr. Elsberry. You write:
But equally, from the statistics involved, as the population size grows enough relative to mutation rates, skirts will begin to tell, and the program will first quasi-latch, with occasional reversions, then eventually as the far skirt begins to dominate the filter on sufficient population per the per letter mutation rate, we will see multiple reversions and the like, i.e. latching vanishes.
Emphasis added: Dr. Elsberry responds as follows (the rest of this post is his with minor edits):
Ignoring the misuse of "latching" (I suppose I can always-link, too), the above is precisely wrong. Increasing N, the population size, does not make it more likely that the best candidate in a generation will show a loss of a correct base from its parent, it makes it less likely. Let's recap the math one more time for the terminally obtuse:
Probability of a candidate changing a parent's correct base to an incorrect base:

PCandidate_C2I = 1 - (1 - (u * (K - 1) / K))^C

Probability that a population will have at least one candidate that preserves all the correct bases from the parent of the previous generation:

PPopulation_C2C = 1 - (PCandidate_C2I)^N

Notice the power of N in there. As the population increases, the chance that the best candidate in each generation will change a correct base to an incorrect one falls off sharply, achieving the teensy-tiny reaches of small probability otherwise beloved of IDC advocates very quickly. We don't see changes of correct characters in the output of best candidates per generation in Dawkins' "The Blind Watchmaker" because it is by far the most probable outcome of a run of an accurate (and thus non-latching) "weasel" with a reasonable population size and a reasonable mutation rate. For the case where N=50, u=0.05, and the best candidate from the previous generation had 27 correct bases, the probability that the best candidate in the new generation still has all those bases correct is 1.0 - (0.73614)^50 = 0.999999777
For [kf]'s assertion to be true, it would have to be the case that the value being subtracted from 1.0 would have to become larger with increasing N. Given that the term is a probability raised to the power of N, though, anybody with a thimbleful of knowledge (that a probability lies in the range [0 .. 1]) will recognize that can never be the case. It is the case that it grows smaller with increasing N, and thus the likelihood that the best candidate in each generation retains all the correct bases from its parent increases as N increases. [kf], please do avoid the betting games based on probability estimation. It is evident that these would do your wallet severe harm.
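The figures quoted above are easy to check; a short sketch in Python, using the same values as the quoted calculation (u = 0.05, K = 27, N = 50, and 27 correct letters in the parent), reproduces them:

u, K, N, C = 0.05, 27, 50, 27   # values taken from the calculation quoted above

# Probability that a single child corrupts at least one of the parent's C correct letters
p_candidate_c2i = 1 - (1 - u * (K - 1) / K) ** C
# Probability that at least one child in the population keeps all C correct letters
p_population_c2c = 1 - p_candidate_c2i ** N

print(round(p_candidate_c2i, 5))    # prints 0.73614
print(round(p_population_c2c, 9))   # prints 0.999999777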
David Kellogg
KF said: 'The real problem is that Weasel is a targeted search that rewards mere proximity to the target, without requiring achievement of realistically improbable functionality first before hill climbing is permitted to proceed. As a result, it begs the question of getting to shores of function in the sea of relevant but non-functional configurations, and in so begging the question gives the misleading impression that the issue is not serious. But, I think a few rumours are beginning to get out into the sanctum of the peer reviewed literature, on what is really going on...' That Weasel is a targeted search and as such a 'bit of a cheat' has been known since 1986, from the moment that Dawkins himself qualified the programme in The Blind Watchmaker. The functionality issue seems misplaced. In terms of Weasel, functionality is defined as closeness to the target, so clearly phrases that better resemble the target are more functional than others. I still think you are blaming Weasel for not fulfilling objectives it never was meant for. All it purports to do is refute the notion that 'evolution is improbable because it requires many changes to happen at once' (tornado in a junkyard). That was its aim and it achieves it nicely. It had no further goals. faded_Glory
elsberry sez:
"Latching" would require an internal mechanism with knowledge of "correct" states and the ability to protect "correct" states from mutational processes.
That is false. Given a target, a small enough mutation rate and a large enough sample size, latching will always take place. It is inevitable.
That would be counter to what we know of biology, and, indeed, Dawkins himself thought that ascribing "latching" would be didactically wrong.
That is why Dawkins admitted the program was not a reflection of biology. Keep getting your "knowledge" through elsberry. You will never "know" much of anything. Joseph
Alan Fox:
==> Dawkins wrote “Weasel” to illustrate a point in a popular book. Whether “Weasel” is flawed has no bearing on the case for the theory of evolution.
What point was Dawkins trying to illustrate?
2==> The “Weasel” program did not latch. Wesley Elsberry has categorically confirmed this on Dawkins’ behalf.
It does latch for all the reasons I have already provided: 1- given a target, 2- a small enough mutation rate, AND 3- a large enough sample size, you will NEVER see a reversal
3==> The “Weasel” program did not need to latch.
It just does it as a matter of programming. Joseph
I am not sure what the objection is against implicit latching. Isn't the entire point of Weasel to show that the random initial phrase evolves over successive generations into the target phrase? How would that ever be accomplished unless over time more and more correct letters are found and preserved? Objecting to implicit latching amounts to objecting that the software works as intended! Regarding the 1986 published samples, these are samples of the winners from several generations, not unbiased samples from the entire population of each generation. Clearly the winners will have a much higher probability of showing correct letters than random members of the population. After all, having a significant number of correct letters is what helps make them winners in the first place! Hardly remarkable, nor evidence of sleight of hand. faded_Glory
5] The fundamental flaw with Weasel and kin . . . IS NOT LATCHING! The real problem is that Weasel is a targeted search that rewards mere proximity to the target, without requiring achievement of realistically improbable functionality first before hill climbing is permitted to proceed. As a result, it begs the question of getting to shores of function in the sea of relevant but non-functional configurations, and in so begging the question gives the misleading impression that the issue is not serious. But, I think a few rumours are beginning to get out into the sanctum of the peer reviewed literature, on what is really going on:
To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory [172, 173]. Evolution has no goal [174, 175]. Evolution provides no steering toward potential computational and cybernetic function [4, 6-11] . . . . At the same time, we have spent much of the last century arguing to the lay community that we have proved the current biological paradigm. Unfortunately, very few in the scientific community seem critical of this indiscretion. One would think that if all this evidence is so abundant, it would be quick and easy to falsify the null hypothesis put forward above. If, on the other hand, no falsification is forthcoming, a more positive thesis might become rather obvious by default. Any positive pronouncement would only be labeled metaphysical by true-believers in spontaneous self-organization. Those same critics would disingenuously fail to acknowledge the purely metaphysical nature of the current Kuhnian paradigm rut [178]. A better tact [SIC -- should be tack] is to thoroughly review the evidence. Let the reader provide the supposedly easy falsification of the null hypothesis. Inability to do so should cause pangs of conscience in any scientist who equates metaphysical materialism with science . . . . While proof may be evasive, science has an obligation to be honest about what the entire body of evidence clearly suggests. We cannot just keep endlessly labeling abundant evidence of formal prescription in nature “apparent.” The fact of purposeful programming at multiple layers gets more “apparent” with each new issue of virtually every molecular biology journal [179-181].[Abel, 2009]
______________ I think the above notes should suffice to show the onlooker where the balance of the argument really lies on its merits. And, if you doubt me, go get some graph paper, a pair of scissors, some backing, notebook, calculator with stats features etc -- don't forget a dart or two [esp those neat mini-darts] -- and some time to try out the darts and charts exercise. That will teach you more, more surely and confidently, than rivers of digital ink and pro-weasel rhetoric. GEM of TKI kairosfocus
Onlookers: A few brief notes:

1] Zachriel's neo-Weasel is irrelevant to the issue of Weasel 1986. Z's neo-weasel may, like the 1987 version, not latch, but that has little or nothing to do with the import of those 200 latched letters from 300 that could in principle change in the 1986 o/p. Cf above at 183 to see what would have to happen for the o/p to revert and correct while producing apparently latched o/p letters. It is overwhelmingly improbable that the published 1986 runs did not latch [probably via implicit mechanisms].

2] Mr Elsberry on latching: Apart from insisting on violations of my privacy and on the ID = creationism slander [Onlookers: note again how the trend of uncivil conduct by evo mat advocates is ever so predictably seen . . . ], he is failing to observe the distinction between explicit and implicit latching, so is tilting at a strawman misrepresentation; as Mr Kellogg is above. I repeat: if you want me to respond to the specific claims being made, you will have to summarise the case here at UD, without slanders and privacy violations. That is a basic insistence on civil discussion of matters on the merits, and it is significant that just that is what the evolutionary materialism advocates and fellow travellers seem unable or unwilling to do. I will thematically respond in outline to what seem to be the major points, per the above comments.

3] LOLN and darts and charts: Again, I have given an illustration of how, as sample points scattered at random mount in numbers from one to a few to a few dozen, they as a rule become more and more representative of the bulk patterns of a trend or distribution. That is the practical (and very useful) import of the law of large numbers. Then, as the sample points get large enough, the far skirts ["tails"] will tend to show up, as Np --> a reasonable fraction of 1, i.e. as the number of sample points and the relevant distribution probability give us reason to expect to see at least one point. That is not controversial; it is a simple enough exercise that you can do yourself, by making up a bell-chart [as a typical case in point] and throwing or dropping darts on it from a range where they will more or less scatter evenly across its face. Or, you could pour grains of sand out on it. If you do it by darts, you can record the points by coordinates [draw up the chart on graph paper with marked axes], and by number in sequence. That way you can record trend-related data that can then be plotted on, say, a +/- 3-sigma sequence chart. So, we can relate apparently static distributions to trends, using statistical process control techniques. (In fact, the point of the little darts and charts thought exercise is the exact point of the hypothesis testing approach that looks at whether or not a point is in the far skirts of a null hyp of chance based on one or another bell shaped distribution. Up to a certain confidence level, on the effective sample size, you are sufficiently unlikely to get into the far-skirt region that one infers to an alternative hyp, which usually is in context some species or other of intentional action. AS A BIOLOGIST, MR ELSBERRY MUST HAVE DONE THAT LEVEL OF STATISTICS, SO HE MUST KNOW THIS. Interested parties can look at my linked discussion of the Caputo case, in Appendix 6 of the always linked. Here is a nice simple intro. Of course there are the usual debates and disputes over any academic issue where rules of thumb and conventions are used, but the fact is that hyp testing dominates the real world, for some very good reasons.)
Relevance to our case is of course that the binomial distribution applicable to letter mutation per a certain probability p is going to be a skewed bell or a more or less reverse-J, i.e. skirted, distribution. Implicitly latched cases -- relevant to Weasel circa 1986 -- happen because the balance of mutation rates and per generation pop size is such that few far-skirt members will appear, so that the odds of a substitution and reversion between the 1986 samples will be negligible by the time the run of champions -- on mere proximity to target without reference to realistic function [hence Mr Dawkins' "nonsense phrases"] -- hits the weasel sentence.

4] How that works for Weasel 1986 (not 1987, and not any neo-Weasels and quasi-weasels out there that do not latch): By the data published in 1986, and by the considerations in 183 above, that is what Weasel circa 1986 was, beyond reasonable doubt. (It seems I need to again link on the fallacy of selective hyperskepticism, and its fellow traveller, endless objectionism.) Back on point: in Weasel circa 1986 as published, the no-change members of the population were selected as champion about 1/2 the time, and the next most common case, which dominated the output, was a single-letter-goes-correct mutation. That is, no-change and single letter mutations were the bulk of the distribution. Double letter mutations that substituted another correct letter for a reverting one are an order of magnitude or more less likely, and to then have the reverted letter correct itself was even less likely by another order of magnitude or so. And, to lock in the likelihood of winning the championship by having a third letter go correct at the same time pushes us out even further into the skirt. In short, implicit latching is not so hard to understand, if one is willing to take the issue up on the merits objectively. [ . . . ] kairosfocus
BTW Mr. M, I forgot to ask... Did you try Zachriel's version of "Weasel"? And I am beginning to wonder whether you think it is possible to win an argument by attrition. :) Alan Fox
Mr. M, Others with more time and expertise have responded to you but it seems moderation has intervened. In particular, Dr. Elsberry has responded. I am afraid I didn't give much attention to your latest comments, relying on Hazel's assessment that there was no new content. Alan Fox
KF writes, "I trust that sufficient has been said for the record." God, I hope so. To paraphrase a common saying, if you can't find anything new to say, don't say it. hazel
kairosfocus,
If Mr Elsberry is prepared to state that Weasel did not IMPLICITLY latch, then he has a case to make, and one against stiff issues of evidence.
Oh come now. Dr. Elsberry has made such a case, you have been invited to respond, and you have not done so. He provided a definition showing why latching runs counter to Dawkins's declared pedagogical goals, a precise statistical explanation for "weasel," and a shredding (his words, but they're true) of your faulty appeal to the law of large numbers. The math has been done, but you haven't dealt with it. David Kellogg
PPS: Hot, just published, peer reviewed paper that hits hard on all the bases raised in this thread and previous ones, by Abel. Just one clutch of excerpts (note the tongue in cheek ironies of tone . . . ):
Attempts to relate complexity to self-organization are too numerous to cite [4, 21, 169-171]. Under careful scrutiny, however, these papers seem to universally incorporate investigator agency into their experimental designs. To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally-occurring physicodynamic complexity. Evolutionary algorithms, for example, must be stripped of all artificial selection and the purposeful steering of iterations toward desired products. The latter intrusions into natural process clearly violate sound evolution theory [172, 173]. Evolution has no goal [174, 175]. Evolution provides no steering toward potential computational and cybernetic function [4, 6-11] . . . . At the same time, we have spent much of the last century arguing to the lay community that we have proved the current biological paradigm. Unfortunately, very few in the scientific community seem critical of this indiscretion. One would think that if all this evidence is so abundant, it would be quick and easy to falsify the null hypothesis put forward above. If, on the other hand, no falsification is forthcoming, a more positive thesis might become rather obvious by default. Any positive pronouncement would only be labeled metaphysical by true-believers in spontaneous self-organization. Those same critics would disingenuously fail to acknowledge the purely metaphysical nature of the current Kuhnian paradigm rut [178]. A better tact [SIC -- should be tack] is to thoroughly review the evidence. Let the reader provide the supposedly easy falsification of the null hypothesis. Inability to do so should cause pangs of conscience in any scientist who equates metaphysical materialism with science . . . . While proof may be evasive, science has an obligation to be honest about what the entire body of evidence clearly suggests. We cannot just keep endlessly labeling abundant evidence of formal prescription in nature “apparent.” The fact of purposeful programming at multiple layers gets more “apparent” with each new issue of virtually every molecular biology journal [179-181].
Muy interesante . . . kairosfocus
KF:
If Mr Elsberry is prepared
Since we know your interest in making sure everything is stated accurately, KF, you probably should stop referring to Wesley Elsberry as Mr. and use his earned honorific of Dr. crater
OOPS: 10's - 100's of millions of bits worth of mutations to create novel body plans. kairosfocus
For the record: Onlookers, it is clear that this thread has had some positive effect, but also that Weasel has proved itself to be a telling example of a misleading icon of evolutionary materialism. For, it is precisely a capital illustration of refusing to acknowledge the force of the point that once we deal with complex, information-rich functionality, we first need to credibly get to the shores of islands of function within the available probabilistic/search resources, before we can get to hill-climbing algorithms. And so, the basic point that observed cell-based life seems to have a threshold at about 600 - 1,000 k bits of information shows that there is no credible pathway from prebiotic environments to first life, on the gamut of the observed cosmos. [This, I discuss in my always linked, section B.] (And, it also illustrates that the application of unrealistically high rates of beneficial mutations will give the illusion that such hill climbing is more likely to succeed than it is. In the real world, double simultaneous mutations are hard to get, and triples are a practical barrier. Incrementally getting the 10's - 100's of millions of bits worth of mutations needed to create novel body plans is harder still.)

A few [hopefully] wrap-up observations on points raised:

1] fG, 185: Weasel shows that there are other ways to find complex solutions than creating entire phrases at random, and therefore such objections are without merit. I believe this is really all that Dawkins wanted to illustrate with the programme. As he himself says, setting a pre-defined target is 'cheating' and is one reason why Weasel does not actually model biological evolution, but just one element of the proposed Darwinian process . . . Weasel does so by ignoring the issue that before one can hill-climb up Mt Improbable, one has first got to get to the shores of said island. And, on the precise point that Weasel would have to be illustrative, Dawkins has had to admit that it bears no reasonable resemblance to natural selection, the proposed BLIND watchmaker. In short, we see here in a nutshell the point of just how rhetorically misleading Weasel is.

2] AF, 186: Whether "Weasel" is flawed has no bearing on the case for the theory of evolution. Weasel -- along with a long string of other misleading icons -- illustrates that all too often, the evidence used to persuade the public [and even students] of the apparent credibility of evolutionary materialism is highly misleading. When a theory -- across decades -- is sold to the public and students in large part based on a clutch of misleading icons, that does not speak well of its underlying degree of warrant.

3] The "Weasel" program did not latch. Wesley Elsberry has categorically confirmed this on Dawkins' behalf. The o/p of Weasel circa 1986, as published, is in a latched condition beyond reasonable dispute. (Onlookers, cf 169 above for why I say that. You will see that of 300+ letters that could change -- well beyond the point where the law of large numbers lends credibility to the representativeness of a sample [not to mention the well known practice of showcasing typical "good" results in scientific or general audience publications; what most likely happened here is that the obvious o/p latching was a warning flag that was not spotted as a flaw in the rhetoric until after the fact . . . ] -- the only ones that do change are the ones that are not "on target." Once a letter is on target, for 200+ cases it stays there, in some cases all the way through a run.
And, as discussed in 183, it is simply not credible probabilistically for this to be seen in the sampled o/p's if a mutation rate high enough to drive the sort of multiple-letter changes required to make reversions and recoveries happen so rapidly were at work. Especially if -- per 40+ and 60+ gens to target -- no change was winning the in-generation championship race about 1/2 the time.) For the o/p to show 200+ cases where there is evident latching while in fact, all along between sample points, reversions and corrections are going on strains the limits of reasonable probabilities, as I have summarised at 183. There is therefore a question of mechanisms to explain that o/p latching, and the two credible candidates are: explicit and implicit latching. Moreover, so far as the reported record above and in previous threads shows, Mr Elsberry has confirmed that Mr Dawkins has stated that Weasel did not EXPLICITLY latch. On that, I and others have accepted that per preponderance of evidence, Weasel circa 1986 was, per best explanation of its o/p pattern, implicitly latched. This, relative to co-tuning of population size per generation and mutation rates in a context that rewards mere proximity of "nonsense" -- i.e. non-functional -- phrases. [Thus is begged the Hoylean challenge that underlies Weasel's context: getting TO shores of islands of complex functionality before hill-climbing can reasonably be applied. And dismissing this as "single step" change etc is an evasion of the point.] Therefore, to make the sort of blanket statement that Mr Fox has in that context is an improper appeal to authority at minimum. If Mr Elsberry is prepared to state that Weasel did not IMPLICITLY latch, then he has a case to make, and one against stiff issues of evidence. No, one cannot simply say that Weasel circa 1986 -- on its o/p as sampled -- was not EXPLICITLY latched as to mechanism, and that it was therefore also not implicitly latched. (Mr Kellogg's unfounded assertions to the contrary notwithstanding.) Mr Fox, I would appreciate not being misrepresented on this; here or elsewhere.

4] The "Weasel" program did not need to latch. This underscores to me that Mr Fox evidently has not done a sufficient examination of the case he is objecting to before making adverse comments. FYI: No one asserts that implicit latching will hold for all sets of values of population and per letter mutation rates. (Indeed, that is precisely how the difference between the published behaviour circa 1986 and the videotaped run on BBC Horizon circa 1987 -- note the jump in time -- is explained. SO, MR FOX HERE HAS INADVERTENTLY CONFIRMED THAT HE IS FOLLOWING MR KELLOGG'S ATTEMPT TO INSIST THAT LATCHING MECHANISMS ONLY INCLUDE EXPLICIT ONES. This is utterly without warrant, and misrepresents what has been argued by the undersigned and others, to the point of being a plain strawman fallacy.) In fact, under relevant circumstances, Weasel will latch as evidently happened in 1986. (Please note again that in the published cases runs to target were 40+ and 60+ gens long, i.e. no change won the generation proximity championship about 1/2 the time. Under those circumstances, the multiple mutation skirts plainly did not dominate the filter. So, with a high enough proportion of no-change cases, and probably also only 1-change cases coming up with sufficient probability to make a difference, the program will latch, implicitly.
For, long before enough population members will come up to make the multiple-correct-mutation or substitution skirt cases show up, the target will be hit. Again, this is tied to unrealistic beneficial mutation rates and the use of a proximity filter, without reference to a threshold of reasonable function.) But equally, from the statistics involved, as the population size grows enough relative to mutation rates, skirts will begin to tell, and the program will first quasi-latch, with occasional reversions, then eventually as the far skirt begins to dominate the filter on sufficient population per the per letter mutation rate, we will see multiple reversions and the like, i.e. latching vanishes. All of this has been explained in detail above and in previous threads, repeatedly, under the term IMPLICIT LATCHING (and QUASI-LATCHING). There is therefore no good reason why such an objection should be seen at this late stage. Mr Fox, your "summary" is highly misleading; and that without any reasonable justification. Please, do better than this, next time.

__________________

I trust that sufficient has been said for the record. GEM of TKI

PS: JayM 182: Cf 88 above. Reflect on the force of "cumulative selection" and "[t]he computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase" in light of the context of the sampled o/p of the program. The most natural reading of this is that there was a letterwise partitioned search in which, once letters hit, they were locked as successful. A further telling point on this is that this was the "natural" understanding of the Monash University -- pro Darwinist -- team, who came to the issue fresh; as they acknowledge, they had to be "corrected" by Mr Elsberry to line up with the standard Darwinist line on Weasel. (NB: Even in the implicit case, generally speaking, lockup happens once the phrase is hit: the program in effect masks off the target in either case; the only question is whether per letter or per phrase.) kairosfocus
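Since the disagreement ultimately turns on what actually happens at given settings, it can be put to a direct test. Below is a minimal measuring harness (the mutation rate and population sizes are illustrative assumptions; the actual 1986 settings are treated as unknown throughout this thread) that runs a non-latching Weasel to completion and counts how often a champion is wrong at a position where its predecessor was correct:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def champion_reversions(u, n):
    # One non-latching run; count positions where the old champion was correct
    # but the new champion is not.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    reversions = 0
    while parent != TARGET:
        children = ["".join(random.choice(ALPHABET) if random.random() < u else c
                            for c in parent)
                    for _ in range(n)]
        champ = max(children, key=lambda s: sum(a == b for a, b in zip(s, TARGET)))
        reversions += sum(p == t and c != t for p, c, t in zip(parent, champ, TARGET))
        parent = champ
    return reversions

for n in (20, 50, 200, 1000):
    runs = [champion_reversions(u=0.05, n=n) for _ in range(10)]
    print(f"population {n}: mean reversions per run = {sum(runs) / len(runs):.2f}")

Either expectation about how reversions in the champion line scale with population size can be checked directly against the printout.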
Well, kairosfocus, I don't see that you said anything new, and I've made and defended my points with specifics and clarity, I think, so there is no sense in repeating myself. At this point we'll have to let the onlookers decide who has made the best case for the various points we've discussed. And thanks to Alan for his summary. hazel
Three points, Mr. M, (Onlookers, please note!) 1==> Dawkins wrote "Weasel" to illustrate a point in a popular book. Whether "Weasel" is flawed has no bearing on the case for the theory of evolution. 2==> The "Weasel" program did not latch. Wesley Elsberry has categorically confirmed this on Dawkins' behalf. 3==> The "Weasel" program did not need to latch. Anyone can download Zachriel's "Weasel" demo and see for themselves how there can be occasional reversions. There is even a handy counter included. Alan Fox
I haven't posted here before but I'm a regular lurker, so greetings to all contributors! I want to thank the participants in this Weasel discussion for their detailed analysis of the program. I read The Blind Watchmaker a long time ago and I am aware of the ongoing discussions about the Weasel examples in the ID debate. I admit that I have never really thought in great detail about how the algorithm works, and in all fairness I more or less automatically assumed that it is so successful in finding the target because it explicitly latches correct letters. Because of the analysis here I now understand that explicit latching is not necessary at all, and that in fact it makes no sense in the context of what Weasel purports to illustrate: how the interplay between random mutations (of letters) and selection (of phrases) over many generations in a population finds a solution much, much quicker than single-step creation of entire phrases could ever do. So thanks for clarifying that! A long-standing objection to the Darwinian model of evolution is that the probability of it creating complex features is extremely small. These objections are often based on single-step assumptions (tornado-in-a-junkyard models). Weasel shows that there are other ways to find complex solutions than creating entire phrases at random, and therefore such objections are without merit. I believe this is really all that Dawkins wanted to illustrate with the programme. As he himself says, setting a pre-defined target is 'cheating' and is one reason why Weasel does not actually model biological evolution, but just one element of the proposed Darwinian process - random mutation of genes and selection of the fittest of the resulting progeny. faded_Glory
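The contrast between single-step and cumulative selection is easy to put in numbers. A 28-character phrase over a 27-character alphabet (26 capitals plus the space) has 27^28 possible configurations, so a single-step random draw has on the order of one chance in 10^40 of hitting the target, while the cumulative runs quoted from the book arrive in a few dozen generations. A two-line check of that arithmetic:

configs = 27 ** 28                           # possible 28-character phrases
print(f"configurations: {configs:.3e}")      # about 1.2e+40
print(f"single-step chance: {1 / configs:.1e}")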
3] H, 179: If one just naively looks at the published results in BWM and doesn't think about the process that has been described to produce them, then one can say, "Hey, it looks like once a letter is set, it never changes." But if one thinks about the process, and is aware that the published data is a highly selective subset of data that is a result of a process that moves towards a target, and is not a randomly distributed variable, then one realizes that the sample is entirely inadequate on its own to draw any conclusion about explicit vs implicit latching. Cf the above, on what is required in an implicit latching case for the 200+ samples to show apparent latching while all along there are reversions and corrections going on "offstage." (And of course the relevant population from the beginning is not the generations as a whole but the march of published champions. It is for this population that o/p latching is credibly observed, and it is thus this population whose peculiarities need to be explained. This population is not to be presumed unrepresentative of itself. It is the evidence to be explained on inference to best explanation.) It is plainly far more plausible that the program latches, on considering the actual required processes to give the imagined offstage reversion and recovery, as discussed above. (That which is strictly logically possible is often so implausible probabilistically relative to a simpler explanation that for good reason we choose the simpler one.)

4] Explicit latching means latching at the mutation level. Implicit latching means non-latching at the mutation level. Not at all. In an explicitly latched case, members of a generation would compete on proximity, by which the champion is selected. Letters in the champion that are newly correct would then be added to the mask. Plainly, latching is based on population, but is letterwise.

_________________

It should be plain that the idea that Weasel does not latch at all is riddled with implausibilities. The idea that, per the 1986 data and statements by Mr Dawkins, explicit latching is not a simple explanation and a very logical way to understand him is equally implausible. Such is the import of the discussion on rewarding even the least progress towards the target, the remarks on cumulative progress, and the published data that shows evident latching of letters leading to ratcheting progress to target. It is on the direct statement that he chose instead not to explicitly latch letters that, on the resulting preponderance of evidence, it is concluded that implicit latching -- based on co-tuning of population size and mutation rates (both biologically very implausible) with the all-important selection on mere proximity without requiring reasonable functionality -- becomes the best overall explanation. GEM of TKI kairosfocus
All: One of the subtler aspects of the Weasel issue is that it reveals the force of the underlying challenge to get to complex functional information, and how deeply that challenge has been underestimated for many years by the evolutionary materialist establishment. For instance, above there was the suggestion that, on a calculation of confidence levels, one could in effect dismiss the idea that latching on the o/p is the best explanation for the pattern in the o/ps published by Mr Dawkins in 1986. But, this overlooked some interesting little points:
a --> To have 200+ cases in which EVERY reversion is reversed by the time of the next sample, one of two things would more or less have to happen:
b --> Option a: phase 1: a substitution whereby another letter becomes correct to replace the "lost" letter, and an advance to the target, would more or less have to happen [to win the champion contest]; then phase 2: within the same frame, a reverse substitution to the reversed letter, and a further advance [to again win the contest]. This requires two favourable triple mutations of rather special sorts.
c --> Option b: only the substitution occurs, but the want of advance is "covered" either by no no-change cases, and/or by fortuitously being picked as winner by a tie-breaker sub-module. This, for both phase 1 and phase 2. [Of course a blended mode of the options is possible.]
d --> Such requires so high a rate of mutations that it would then be utterly unlikely that we would not see incomplete, phase-1-only cases, with 200 letters in the o/p in a latched condition.
e --> It is a much simpler explanation that the o/p appears latched for the very excellent reason that the algorithm has a mechanism that locks, either explicitly or implicitly. And of these, on the o/p and Mr Dawkins' discussion in 1986 only, explicit latching, i.e. letterwise partitioned search, is the simpler.
So, we may now see that there is a failure to address comparative difficulties before putting up objections to the explanatory power of the observations that the o/p credibly latches and that its best explanation is a mechanism that does that, explicitly or implicitly. Look at the latest remarks, on points:

1] H, 179: the only necessary difference between the explicit and implicit cases, as we have agreed on, is the additional rule in the explicit case that if the letter is correct, p(mut) = 0. Of course, this omits that the population size and mutation rates then have to be co-tuned to achieve implicit latching. It also omits that the information needed to put in the line is already present, and that the line of code or two to do so is not hard to code; indeed it is simpler than co-tuning.

2] in the implicit case mutation is random with respect to fitness, and in the explicit case it is not. Here, the insistence on a misleading "standard" term -- here a case of improper appeal to modesty in the face of claimed authority that fails to reckon with the point that authorities are no better than their facts, assumptions and reasoning -- has led the argument astray. There is no fitness, only proximity to target without reference to function. So, the issue is whether the target was conceived as a phrase or on a letterwise basis. And if "function" is reduced to matching the target, rewarding the smallest increment in progress to target, it makes sense to latch individual letters that hit the target. On this, the random search process is happening on a letterwise basis, and once letters hit home, ta da . . . they have achieved optimal function. Only, the concept of function and the rates of "beneficial" mutation being implicitly used beg the question of the Hoylean challenge to get to credible body-plan based life functionality on the observed capacity of chance + necessity only. [ . . . ] kairosfocus
kairosfocus @177
First, just from Mr Dawkins’ description, it is pretty plain that understanding Weasel circa 1986 as latching per the published runs is a legitimate understanding.
No, it isn't. The page you refuse to read demonstrates this very clearly. I certainly can't see how you can draw your conclusion from what Dawkins actually wrote. Here is the relevant text from The Blind Watchmaker pages 47-8 (excerpted under Fair Use):
So much for single-step selection of random variation. What about cumulative selection; how much more effective should this be? Very very much more effective, perhaps more so than we at first realize, although it is almost obvious when we reflect further. We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before:

WDLMNLT DTJBKWIRZREZLMQCO P

It now 'breeds from' this random phrase. It duplicates it repeatedly, but with a certain chance of random error - 'mutation' - in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL. In this instance the winning phrase of the next 'generation' happened to be:

WDLTMNLT DTJBSWIRZREZLMQCO P

Not an obvious improvement! But the procedure is repeated, again mutant 'progeny' are 'bred from' the phrase, and a new 'winner' is chosen. This goes on, generation after generation. After 10 generations, the phrase chosen for 'breeding' was:

MDLDMNLS ITpSWHRZREZ MECS P

After 20 generations it was:

MELDINLS IT ISWPRKE Z WECSEL

By now, the eye of faith fancies that it can see a resemblance to the target phrase. By 30 generations there can be no doubt:

METHINGS IT ISWLIKE B WECSEL

Generation 40 takes us to within one letter of the target:

METHINKS IT IS LIKE I WEASEL

And the target was finally reached in generation 43. A second run of the computer began with the phrase:

Y YVMQKZPFfXWVHGLAWFVCHQXYOPY,

passed through (again reporting only every tenth generation):

Y YVMQKSPFTXWSHLIKEFV HQYSPY
YETHINKSPITXISHLIKEFA WQYSEY
METHINKS IT ISSLIKE A WEFSEY
METHINKS IT ISBLIKE A WEASES
METHINKS IT ISJLIKE A WEASEO
METHINKS IT IS LIKE A WEASEP

and reached the target phrase in generation 64. In a third run the computer started with:

GEWRGZRPBCTPGQMCKHFDBGW ZCCF

and reached METHINKS IT IS LIKE A WEASEL in 41 generations of selective 'breeding'.
Can you please show how any possible reading of the textual description (not your inferences from the very limited sample output) could even suggest explicit latching? JJ JayM
kairosfocus writes,
Second, the samples of 200+ letters that on observation latch beyond reasonable doubt on the champions is reason per LOLN, to conclude that the most reasonable mechanisms at work are explicit or implicit latching ones. Of these, and per the published evidence circa 1986, explicit latching is the simplest explanation. It is on the reported testimony of Mr Dawkins, that I have accepted that implicit latching per preponderance of evidence is the best explanation of Weasel 1986.
No, the simplest explanation is NOT explicit latching, for two reasons, one practical and one philosophical: a) practical reason: the only necessary difference between the explicit and implicit cases, as we have agreed on, is the additional rule in the explicit case that if the letter is correct, p(mut) = 0. This rule is not necessary in the implicit case. It is surely simpler to not have the rule than to have the rule. b) philosophical reason: in the implicit case mutation is random with respect to fitness, and in the explicit case it is not. Also, you invoke the "samples of 200+ letters" and state that, of the possible mechanisms of explicit or implicit latching, "per the published evidence circa 1986" explicit latching is the simplest explanation. Ironically enough, in 170 you write,
2 –> In that light, the mere presentation of statistical calculations that ignore that context is therefore worthless, save as a means to distract attention from the matter on the merits:
And yet your arguments about sampling fall prey to exactly what you warn against: they are out of context and therefore worthless. If one just naively looks at the published results in BWM and doesn't think about the process that has been described to produce them, then one can say, "Hey, it looks like once a letter is set, it never changes." But if one thinks about the process, and is aware that the published data is a highly selective subset of data that is a result of a process that moves towards a target, and is not a randomly distributed variable, then one realizes that the sample is entirely inadequate on its own to draw any conclusion about explicit vs implicit latching. And last, for the record: Explicit latching means latching at the mutation level. Implicit latching means non-latching at the mutation level. hazel
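In code, the difference hazel describes really does come down to a single extra condition. A minimal sketch of the two mutation rules side by side (the 0.04 rate is an illustrative assumption; everything else in a Weasel program - breeding a population from the parent and keeping the closest match to the target - can be identical in the two cases):

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
U = 0.04   # illustrative per-letter mutation rate

def mutate_unlatched(parent):
    # Non-latching (the "implicit" case): every position is equally subject to mutation
    return "".join(random.choice(ALPHABET) if random.random() < U else c
                   for c in parent)

def mutate_explicitly_latched(parent):
    # Explicit latching: the one extra rule -- a letter already matching the target never mutates
    return "".join(c if c == t
                   else (random.choice(ALPHABET) if random.random() < U else c)
                   for c, t in zip(parent, TARGET))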
kairosfocus, One more relay from Zachriel at anti-evo (because I know you won't go there). He is direct and clear.
kairosfocus: It is on the reported testimony of Mr Dawkins, that I have accepted that implicit latching per preponderance of evidence is the best explanation of Weasel 1986.
Zachriel: It took many threads, hundreds of comments, tens of thousands of words (and years in the case of Dembski) to reach an understanding of how Dawkins' Weasel works—but you still miss the essential point. The children in each generation may very well exhibit reversions in letters. There is *no* latching.
kairosfocus: Kindly note: implicit latching as a mechanism to explain o/p latching in the run of champions is NOT non-latching, regardless of what you want to assert.
Zachriel: That's the whole point of Weasel. Letters that match the target tend to become fixed across succeeding generations—even though there is no direct selection by letter.
kairosfocus: And of course I have never said that Weasel “has” to latch or has to latch explicitly to work in the sense of getting to target ...
Zachriel: The problem is that you don't understand evolution, so you still have no idea what Dawkins was trying to show. This is fundamental. Contrary to your strawman, in Dawkins' Weasel, mutation is random with respect to the fitness function. As Dembski et al. have had trouble absorbing this simple point, even when explained repeatedly, why should we expect them to have any insight into the limitations of such a simulation?
kairosfocus: In that light, the mere presentation of statistical calculations that ignore that context is therefore worthless ...
Zachriel: Yes, that was my point concerning your faulty appeal to statistical sampling. David Kellogg
kairosfocus [178], I have not said that ID is "creationism in a cheap tuxedo." I'm just saying that the confusion between the two is made possible by ID. That's why I suggested the term "creationish." If ID wants to avoid the confusion, it should reject creationism outright and quit working hand-in-hand with creationists in publications. The insistence of (some) ID proponents that they are entirely separate viewpoints is regularly belied by their practice. For a start, you might quit citing creationists in your own work. As for the sampling issue, I give up. People who have a far better understanding of statistics and of evolutionary computing than either of us (for example, Zachriel) have tried to explain where you're wrong. Both Zachriel and hazel have been clear and patient. For a moment it looked like hazel and you had reached some sort of agreement. Alas, no. So I give up. David Kellogg
PPS: Mr Kellogg: I repeat, I have refused to deal with slanderous incivility from the outset. For excellent reason; and YOU know or should know that design theory is quite distinct from Creationism -- i.e. your remarks just above, sadly, indicate you are joining in the "creationism in a cheap tuxedo" slander. Worse, up to date -- after several threads and dozens of explicit statements on the point -- you seemingly cannot even get what I have pointed out, repeatedly, straight:
[DK, 173:] No latching there, certainly not explicit latching: yet it will tend to show the same results that kairosfocus insists must be due to explicit latching.
Mr Kellogg, this is truly sad. How many times do I have to point out that:
a --> we OBSERVE evident latching of the o/p on the run of champions?
b --> That this calls out for explanations based on mechanisms?
c --> That there are two credible ones: T2, explicit latching and T3, implicit latching?
d --> That on the evidence of the showcased o/p of 1986 and the statements made by Mr Dawkins then, the simplest explanation is T2, explicit latching?
e --> Or, that on the reported testimony that he did not explicitly latch even in 1986, the best explanation on preponderance of evidence is IMPLICIT latching due to co-tuning of [unrealistically high] beneficial mutation rates, population sizes and the [question-begging] mere proximity without reference to complex information based functionality filter?
f --> Indeed, I have more than once explicitly corrected you on this very point, e.g. above when you tried to assert that "implicit latching" means "non-latching."
g --> Zachriel's neo-Weasel, per your remarks, exhibits implicit latching behaviour, which is one of the two mechanisms I have taken pains to point out and explain. Kindly note: implicit latching as a mechanism to explain o/p latching in the run of champions is NOT non-latching, regardless of what you want to assert.
h --> Do you not understand that the persistent, agenda-serving distortion of what another person says, twisting it into the opposite of what he said is uncivil misrepresentation?
i --> FYI: I have NEVER said that latching on o/p's always requires explicit latching of letters.
j --> FYFI: Just the opposite, I have put forth two mechanisms, pointing out that implicit latching with sufficient detuning becomes quasi-latching then complete breakdown of latching behaviour as the increased presence of far skirt multiple letter correct cases [due to unrealistic mutation rates -- cf on what it takes to see beneficial double mutations in real world genomes] and letter substitution cases with the unrealistic proximity filter causes a different behaviour to emerge.
Now, also, you have for some time now had an open invitation to present the argument here without uncivil behaviour and have not. That tells me that the argument you and others have linked is not particularly strong in the context of the excerpts you may re-read at 88 above; which constitute telling admissions against interest by both Mr Dawkins and Wikipedia. _________________ PPPS: As to Zachriel's statistical calculations, my comment is simple -- barking up the wrong tree:
1 --> Mr Dawkins made some pretty strong contextual remarks, per 88 above, that STRONGLY indicate that the published runs circa 1986 were very typical not atypical o/p; as is a commonplace in reporting scientific results -- showcasing "good" data. (Kindly note that socio-institutional context of what typical praxis is. That creates a very strong presumption on what was done in TBW ch 3 and New Scientist circa 1986. In short, it is those who argue against the import of the natural sense of what was said and showcased who have a stiff burden of proof to meet. I refuse to go along with selective hyperskepticism that pretends otherwise.)
2 --> In that light, the mere presentation of statistical calculations that ignore that context is therefore worthless, save as a means to distract attention from the matter on the merits: Weasel per the 1986 o/p obviously latches on the run of champions.
3 --> And of course I have never said that Weasel "has" to latch or has to latch explicitly to work in the sense of getting to target, only that the OBSERVED latching per 1986 points to the REAL problem with it: targeted search without a realistic functionality threshold that rewards mere proximity.
4 --> So, it is just the opposite of a BLIND watchmaker at work. It is a misleading icon of evolutionary materialism, and, sadly, what you have just presented illustrates just how effective it is at being misleading.
5 --> Strawman mischaracterisations of what I have had to say and rebuttals to what I have NOT said, simply underscore the point that Weasel and its modern derivatives and kin, are in the end fundamentally a rhetorical exercise in ducking the challenge of credibly getting to complex bio-information on the gamut of our observed cosmos without intelligent direction. Sadly. GEM of TKI kairosfocus
Hazel: First, just from Mr Dawkins' description, it is pretty plain that understanding Weasel circa 1986 as latching per the published runs is a legitimate understanding. As not only Marks-Dembski and Royal Truman et al show, but Monash University's pro-Darwin site. All, as long since discussed in adequate detail. Second, the samples of 200+ letters that on observation latch beyond reasonable doubt on the champions is reason per LOLN, to conclude that the most reasonable mechanisms at work are explicit or implicit latching ones. Of these, and per the published evidence circa 1986, explicit latching is the simplest explanation. It is on the reported testimony of Mr Dawkins, that I have accepted that implicit latching per preponderance of evidence is the best explanation of Weasel 1986. In that context, Weasel as videotaped for BBC circa 1987 is best seen as a de-tuned for video version, which shows fairly regular reversions; by sharp contrast with what one has the perfect right to see as the representative o/p for "good" runs, circa 1986 as published by Mr Dawkins. In both cases -- and as Mr Fox, below you, shows -- I again need to underscore, the material point on Weasel and the like since 1986, is that they were put up in the context of the Hoylean challenge to get TO shores of functional complexity in the configuration space of organic molecules starting with pre-biotic environments; and also the need for novel body plans. But, since Weasel and close kin are targeted searches that reward mere proximity without reference to achieving complex function before allowing incremental hill-climbing, they are fundamentally flawed and misleading. In short, as Mr Fox et al should understand, you have to credibly get TO the shores of Island Improbable before you can climb to the mountaintops by your favourite hill climbing algorithm. (And for that even 1000 bits of information is such that our observed cosmos acting as search engine will be unable to get through more than 1 in 10^150 part of configs before using up its search resources. First life credibly starts at 600,000 bits [> 10^180,000 configs] and body plan innovation credibly requires increments of 10's - 100's of mega bits of information. Thus, origin of functionally specific, complex bio-information is a major roadblock to the materialist origins stories we are being told. And Weasel ducks rather than answers the question.) And, that is what I pointed out from December last, highlighting the o/p latching effect as illustrative of the fundamental flaw in Weasel and kin. The second of the excerpts below is the paragraph that was taken out of context to try to support what has turned into a campaign to give the rhetorical impression that the main issue we have had is latching, and that this reflects a misunderstanding of the issue. On the contrary, it is only a signpost pointing to the real issue -- Weasel et al are yet another misleading icon of evolutionary materialist philosophy imposed in the guise of origins science. Re-excerpting, as in 64 point 4 above [Please note, Mr Fox]:
[107:] the problem with the fitness landscape is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy. But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [111, excerpted paragraph used by GLF in his threadjack several threads back now:] Weasel [of course, in context, circa 1986] sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
GEM of TKI _____________ PS: Mr Fox: On fair comment, you need to read the context of the discussion before making adverse comments, as already noted above. In particular, cf. 88 above, and the remarks on the REAL root problem with Weasel and kin, as reiterated above in this comment. kairosfocus
For a confidence interval of 2% (letters from 0 to 26 ± 0.5), and a confidence level of 95%, we need to sample 70% of a population of 1000. (Interestingly, we only need to sample about 2400 in a population of a million or a billion for the same level of confidence. This is why a drop of blood containing trillions of particles can represent the composition of all the blood.) Anyone can see that Weasel doesn't require latching to work. With reasonably large populations, the best of the brood will only occasionally show a letter reversion. A typical sample of ten Mother Weasels will show the same results that kairosfocus insists must be due to latching. The appeal to sampling is obviously faulty as it is contrary to simple observation.
Zachriel
It is a shame that Zachriel and Kairosfocus could not communicate directly, if that is possible (having observed Hazel's efforts). Zachriel's Weasel Alan Fox
I don't think you get the sampling issue, and aren't going to, so I'll drop it. hazel
Hazel; Follow up points:
1] 0.1 %
I of course cited the blood sampling paradigm to highlight that relative scale of sample, once the absolute scale is adequate, is irrelevant. And I gave a reason for that, too. One that as I pointed out earlier, you agreed with in the end. It is one that is backed up by the force of sampling theory: once samples are reasonably randomly or evenly scattered [and not pathologically correlated to the dynamics], and once the numbers become reasonable, they will pick out a picture of what is going on. So, the blood sample at 0.1% analogy is apt. For, we have a couple of good reasons to see that a 300+ sample of the letters will be representative of the Weasel runs circa 1986.
2] the 8 phrases Dawkins’ printed
What is relevant is, as just shown, the 300 sampled letters, with 200 showing latching. And, he did not print 8 phrases. Just go count: 15 x 28 = 420 letters; and letters is what is relevant. Of these, 200+ show the latching of the o/p. That from 300+ that count as variable.
3] We know, however, that the 9600 phrases (assuming N = 150 for illustration’s sake) are absolutely NOT the same.
In the relevant aspect, they suffice to show a strong trend. There are two readily identifiable classes of letters, those that match and those that do not. As the run progresses, those that do not vary at random sharply, and once they find the correct value, they stop varying, per the evidence. There are 200+ instances of the latter in action, without counter-instance of reversion. And, that is a context that -- cf 88 onlookers -- explicitly speaks of cumulative progress to target. So, we have every reason to infer that the progress of champions as illustrated is typical of "good" runs circa 1986.
4] you can object each time I use the word “fitness” if you wish, but it is the standard term and I’m not going to abandon it
When a "standard" is misleading, it should be changed. Here, there is no fitness, just proximity without reference to function. Thus, Weasel fails to illustrate NS or any other credible blind watchmaker. Worse, it dodges the issue that the Hoylean challenge was to get TO shores of functionality in vastly beyond astronomical configuration seas of non-function. Hill-climbing to optimise function is by comparison a mere afterthought. GEM of TKI kairosfocus
kairosfocus, I'm relaying something from Zachriel in response to this from you:
Not at all. We see 300+ samples of letters, of which 200+ are in a go-correct then stay correct condition. There are NO observed reversions.
Zachriel writes:
For a confidence interval of 2% (letters from 0 to 26 ± 0.5), and a confidence level of 95%, we need to sample 70% of a population of 1000. (Interestingly, we only need to sample about 2400 in a population of a million or a billion for the same level of confidence. This is why a drop of blood containing trillions of particles can represent the composition of all the blood.) Anyone can see that Weasel doesn't require latching to work. With reasonably large populations the best of the brood will only occasionally show a letter reversion. A typical sample of ten Mother Weasels will show the same results that kairosfocus insists must be due to latching. The appeal to sampling is obviously faulty as it is contrary to simple observation.
FYI, the link is to an Excel file from which you can run Zachriel's Weasel. Check it out! No latching there, certainly not explicit latching: yet it will tend to show the same results that kairosfocus insists must be due to explicit latching. David Kellogg
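For what it is worth, figures like "70% of a population of 1000" and "about 2400 in a population of a million or a billion" are what the standard sample-size formula with a finite-population correction gives for a 2% margin of error at 95% confidence, under the usual worst-case assumption p = 0.5. A small sketch of that calculation follows; it only shows where such numbers can come from, not whether this formula is the right tool for the Weasel dispute:

def sample_size(N, e=0.02, z=1.96, p=0.5):
    # n0 = z^2 * p * (1 - p) / e^2, then the finite-population correction
    n0 = z * z * p * (1 - p) / (e * e)
    return n0 / (1 + (n0 - 1) / N)

for N in (1_000, 1_000_000, 1_000_000_000):
    n = sample_size(N)
    print(f"population {N:>13,}: sample ~{n:,.0f} ({100 * n / N:.1f}%)")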
kairosfocus [169], it remains the case that you engaged in the very kind of rhetorical dismissal which you lament. Further, there was no slander in the opening sentence. The author connected ID and creationism, but that's just an argument. Indeed, many ID advocates, by rejecting common descent, are in fact creationists. Further, the first "ID textbook," Of Pandas and People, was authored by two creationists. Also, many ID advocates, including you, routinely cite avowed creationists in support of their claims. The lines seem easy enough to draw. ID proponents may ally with creationists, co-author with creationists, and be creationists themselves, but ID opponents are not allowed to say that ID is a form of creationism? How about this? I once heard a secular Jewish comedian say that she wasn't a Jew, she was just "Jew-ish." Perhaps those on the evolution side should refer to ID not as creationism but as "creation-ish." :-) David Kellogg
Not much time this morning, but let me point out that Kairosfocus’s example of taking 0.1% of your blood as a sample is most assuredly NOT analogous to looking at the 8 phrases Dawkins’ printed as a sample of all the children in the 64 generations of the run. We have good, empirical reasons for thinking that the composition of the blood is the same throughout the body, so taking any small fraction of it will reflect the composition of the whole. If I have 1000 white balls in a bag, sampling one will be sufficient. We know, however, that the 9600 phrases (assuming N = 150 for illustration’s sake) are absolutely NOT the same. Because of the process of picking the best child from each generation, even if we printed each parent for every generation, and not each tenth, we would not have a random variable. The whole point of the program is to demonstrate that while the mutation process itself is random, the results are not random because of the action of the fitness function. So it is not even remotely accurate to compare displaying the 0.1% sample of phrases of most fit children after every 10th generation to drawing 0.1% of your blood for a blood test. (And kf, you can object each time I use the word “fitness” if you wish, but it is the standard term and I’m not going to abandon it. I’ve made it clear - gpuccio said so ;) - that I know that the general term does not necessarily imply anything about biological functional fitness.) hazel
PS: On "naive", I simply point out that the text and the print runs show what would in other contexts be fairly conclusive evidence. It is because of the direct statement that is reported that I have gone with implicit over explicit latching to explain 1986. Absent that, explicit latching is the best explanation on the evidence of TBW, ch 3. As the initial Monash University understanding [of a pro-Darwinist team] also substantiates. Natural, not naive. kairosfocus
Onlookers (and participants): Further follow up on points. For record. 1]DK, 167: Don’t like the first sentence? Refuse to consider the argument. From this, you would note get a clear understanding that I objected to uncivil conduct to the point of slander in the opening sentence. Nor, that I stated in so objecting that those who use the article will need to present the substantial case here -- without uncivil language -- if they want my response. (Which no-one has evidently thought it fit to do.) Nor, that, e.g. at 88 above, I have outlined step by step why I have concluded from Mr Dawkins' words as I do. 2] Hazel, 168: The law of large numbers is about the behavior of a random variable: it says that the observed average value of a set of observations will more closely approach the expected value of the variable as the number of observations increase. Not quite. LOLN is about the behaviour of credibly random samples from a population. Namely, that there is a reasonably strong tendency for the samples to be representative once they have adequate numbers. Subtle, but that is where the rest of the analysis goes off the rails. 3] it says that the observed average value of a set of observations will more closely approach the expected value of the variable as the number of observations increase. Nope, it is broader than that. E.g. there is a reason why the average of samples tends to population avg. For, random -- in principle, equiprobable [there are variations . . . ] -- samples tend to reflect the distribution of the population, once you have enough of them. So, on analysing a population probabilistically so that a certain subset is fraction p of the pop, a good sample of size N will tend to have fraction Np being from the relevant subset. As a result, once Np is a reasonable number, you will expect to see subsets appearing in the sampled population. (Which in turn is the basis for my remarks about far-skirt members. Think about the darts and chart illustration/thought expt.) 4] the strings in BWM are NOT random variables. They are a product of a process that selects for fitness - they are not instances of a random variable. First, I again object, for good reason, to the insistence on a very flawed term, "fitness." The context is just the opposite: selecting on mere proximity without reference to fucntionality. note Mr Dawkins' "nonsense phrases." Next, one way to get a representative cross section of a process and to infer to its dynamics is to sample it at regular intervals uncorrelated with the credible process dynamics; but of course within the bandwidth thereof, on the good old Sa-freq is at least 2f rule. [That is how for instance a CD works, or digital video or a digital storage oscilloscope. [I need not go on into details on phase shifts and sampling rise times for transients. In short, we see here different domains of sampling at work: telecomms and instrumentation and control, as well as broader physical sciences are highly relevant contexts in which a whole world of sampling is also done. FYI, H, I used to regularly set a lab exercise very similar to the dart and chart one as the very first lab exercise for physics students doing in effect first year of a 4-year pgm college physics; making them do various sampling population analyses and 3-sigma control chart exercises. In turn that was based on and extended my own very first university physics lab exercise.]) So, the Weasel samples circa 1986 can indeed be representative of the trends in "good" runs of champions, circa 1986. 
And, the relevant population is that of the letters within the champions. Mr Dawkins' description in TBW, ch 3 at that time, underscored that the published runs were in fact representative of "good" runs. Cf 88 above. 5] the sample size here is very small. Dawkins shows 8 members of a 64 generation run, including the first and last. Not at all. We see 300+ samples of letters, of which 200+ are in a go-correct then stay correct condition. There are NO observed reversions. All of that in a context that positively enthuses over cumulative selection and progress to the target. 6] If we use a population N = 150, then there have been 150 x 64 = 9600 phrases, of which we only see 8, which is less than 0.1%. This is an insufficient sample . . . Many relevant populations are of continuous variables [between any two distinct values, you can find another valid member of the population] or quasi-continuous variables [i.e. very fine-grained discrete behaviour we approximate as effectively continuous; e.g. the origin of gas pressure in molecular collisions], or of indefinitely large numbers of actualised or potential events. That is, the population is in effect empirically infinite. But, through reasonable samples we can be quite confident of picking up patterns in the overall population. Thanks to LOLN. For instance, consider a blood sample of a few cc's, say 5. Typical humans have ~ 5 litres of blood, i.e. the sample size is about 1 in 1,000. Blood constituents are not a fundamentally random population, being driven by body processes (though of course there will be fluctuations in any one 5 ml sample). Samples are as a rule taken at a given convenient site, and are around 0.1% or less. But, they sufficiently reliably reflect the general patterns of our blood to be routinely used in diagnostics; including on matters of life and death. In short I think onlookers will see here why I cite this to illustrate what is to my mind a case of selective hyperskepticism. 7] we have agreed, I think, that in the implicit latching case the probability of a child with a mutated correct letter being the most fit in the generation is extremely low, and since Dawkins is only showing a sample of best fit children every 10 generations, there is an extremely low probability that that set of data would show a letter reversal. In short, in the end, you agree with my analysis that the samples will correctly reflect the implicitly latched behaviour. 8] within the limits of reasonable probabilities, Dawkins data is just as likely to be the result of non-latching (implicit) as it is of explicit latching. Latching of o/p as credibly observed is explained by two possible latching mechanisms: explicit, or implicit. Implicit latching is latching, not "non-latching." 9] Dawkins said nothing about latching. Let's guess, from 88 above: Cumulative selection, rewarding the smallest increment in proximity, etc? Not to mention, publishing the following o/p pattern, on pp. 47 - 48 of TBW, and the second again in New Scientist that same year, 1986:
WDL*MNLT*DTJBKWIRZREZLMQCO*P
WDLTMNLT*DTJBSWIRZREZLMQCO*P
MDLDMNLS*ITJISWHRZREZ*MECS*P
MELDINLS*IT*ISWPRKE*Z*WECSEL
METHINGS*IT*ISWLIKE*B*WECSEL
METHINKS*IT*IS*LIKE*I*WEASEL
METHINKS*IT*IS*LIKE*A*WEASEL
Y*YVMQKZPFJXWVHGLAWFVCHQXYPY
Y*YVMQKSPFTXWSHLIKEFV*HQYSPY
YETHINKSPITXISHLIKEFA*WQYSEY
METHINKS*IT*ISSLIKE*A*WEFSEY
METHINKS*IT*ISBLIKE*A*WEASES
METHINKS*IT*ISJLIKE*A*WEASEO
METHINKS*IT*IS*LIKE*A*WEASEP
METHINKS*IT*IS*LIKE*A*WEASEL
_____________ GEM of TKI kairosfocus
Hi David, now that we’ve gotten clear on the fundamental difference between the explicit and implicit cases, I’d like to go back and respond to something you said at 135:
kairosfocus, since the 1986 “observations” are a highly biased sample (by the very nature of the experiment) from a population of unknown size, you can conclude precisely nothing about latching from it.
kairosfocus has claimed that the law of large numbers supports his argument that the data in BWM leads one to the conclusion that Dawkins used an explicit latching routine, but I don’t think the law of large numbers really applies to the situation. The law of large numbers is about the behavior of a random variable: it says that the observed average value of a set of observations will more closely approach the expected value of the variable as the number of observations increases. First of all, the strings in BWM are NOT random variables. They are a product of a process that selects for fitness - they are not instances of a random variable. Secondly, the sample size here is very small. Dawkins shows 8 members of a 64 generation run, including the first and last. We don’t know the generation population size, but other people’s programs show that populations in the range of 100 - 200 produce results similar to Dawkins. If we use a population N = 150, then there have been 150 x 64 = 9600 phrases, of which we only see 8, which is less than 0.1%. This is an insufficient sample even if the children were truly instances of a random variable. And last, since we have agreed, I think, that in the implicit latching case the probability of a child with a mutated correct letter being the most fit in the generation is extremely low, and since Dawkins is only showing a sample of best fit children every 10 generations, there is an extremely low probability that that set of data would show a letter reversal. So within the limits of reasonable probabilities, Dawkins' data is just as likely to be the result of non-latching (implicit) as it is of explicit latching. The law of large numbers really has nothing to do with this, I think. And since Dawkins said nothing about latching, and since non-latching is random in respect to fitness and explicit latching is not, I see no reason (other than a lack of understanding or a bias) for thinking that explicit latching is the “natural” interpretation of the data. It may be the naive interpretation, and therefore natural to some, but it is not the best interpretation if one thinks both about the probabilities and about Dawkins' perspective. hazel
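One way to make the sampling point concrete is to run a non-latching (implicit) Weasel and compare how many champion-to-champion reversions occur in total with how many would be visible in a printout of only every tenth champion. A rough sketch, with the population size and mutation rate assumed for illustration (values in the range people have been discussing):

import random
import string

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def mutate(phrase, rate):
    # every letter has the same chance of mutating, correct or not
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def score(phrase):
    return sum(a == b for a, b in zip(phrase, TARGET))

def run(pop_size=150, rate=0.04, max_gens=1000):
    # one non-latching run; returns the champion of every generation
    champ = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    champs = [champ]
    for _ in range(max_gens):
        if champ == TARGET:
            break
        champ = max((mutate(champ, rate) for _ in range(pop_size)), key=score)
        champs.append(champ)
    return champs

def reversions(champions):
    # positions where one champion matched the target but the next shown did not
    return sum(p == t and c != t
               for prev, cur in zip(champions, champions[1:])
               for p, c, t in zip(prev, cur, TARGET))

random.seed(1)
total_all, total_sampled = 0, 0
for _ in range(50):
    champs = run()
    total_all += reversions(champs)            # champion to champion, every generation
    total_sampled += reversions(champs[::10])  # only every 10th champion, as printed
print("reversions between consecutive champions:", total_all)
print("reversions visible when only every 10th champion is shown:", total_sampled)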
kairosfocus [161], I have very little time today but wanted to respond to this, your response to a request to evaluate Patrick May's walk-through of the passage in TBW as a guide to coding:
Mr Kellogg et al lost me in that linked, in the very first sentence, on slanderous incivility.
Observers, that is a prime example of what kairosfocus calls "rhetorical dismissal." Don't like the first sentence? Refuse to consider the argument. David Kellogg
Thank you, gpuccio. I'm glad you found what I wrote clear and simple. One of my goals in discussions is for people, even if they are in disagreement, to be at least clear about what they disagree about. :) hazel
Hazel: I was on my way somewhere else but passed by. A few points:
1] 162: “Fitness function” is the standard phrase used for that part of the program which evaluates candidates to see which are passed on to the next generation.
I am objecting to the term used, its denotation and the inextricably attached connotation; precisely because of rhetorical impact. "Fitness" cannot evade the import of function in a context. We are dealing with mere proximity to a target, of non-functional "nonsense phrases."
2] No one - Dawkins himself nor anyone else - has ever claimed that matching the target string modeled biologically functional fitness
Mr Dawkins set his up to answer a challenge, from Hoyle and others, on the problem of achieving complex bio-functionality. He did so by arguing that cumulation of micro-increments in function was enough. Then, he provided a case study of target proximity search, without reference to function. Of course I am well aware of his qualifying words and disclaimers -- cf 88 above -- but I am also aware that the rhetorical impact of the example will still go through. Indeed, the very choice of the phrase that highlighted the term "weasel" strongly hints that Mr Dawkins intended the example to make its point by taking advantage of the difference between the example by computer and the qualifying words. Indeed, his qualifications include that he understood the example to be misleading on the issue of natural selection -- the precise point at stake. Do you see why I am not at all amused? And, why alluding to qualifications in a context where a misleading example is being headlined [and thus having its rhetorical impact], is NOT tilting at strawmen?
3] in order to mask the correct letters you have to have consulted the target phrase, and stored that information on a letter by letter case as additional information about the phrase.
As I pointed out, the issue is when you mask, and whether it is by letter or by phrase, not if you mask. BOTH explicit and implicit latched versions of Weasel are on the wrong side of the issue of functionality as an a priori of any meaningful natural selection analogy.
4] in the letterwise case the mutation function knows which letters to not mutate, so mutation is not entirely random in respect to fitness, and in the “target as a whole” case the mutation function mutates irrespective of whether the letters are right, so mutation is random in respect to fitness.
Again, I object to the use of "fitness" -- including the context of its definition and standardisation. Onward, the point is that in both cases functionality is ignored in Weasel, and mere proximity is rewarded. Whether that leads to masking done letter by letter or phrase as a whole, makes little difference. In both cases, Weasel is fundamentally misleading. On the narrow technical point of probabilities, the situation is that if letters are explicitly latched, once they hit the target per letter, it locks off further search. If they are not explicitly latched, pop size and [unrealistic] mutation rates and probabilities of being "good" latch progress, until the phrase is filled in and the mask blocks further mutations. In either case, Weasel's performance depends on being fundamentally distinct from the world of living things and proto-living things. And, to present such an example in a book on BLIND watchmakers, is thus misleading. Seriously so. GEM of TKI kairosfocus
hazel and others: Excuse the intrusion, I have not followed the discussion because I was not specially interested. Just wanted to thank hazel for this clear definition: "“Fitness function” is the standard phrase used for that part of the program which evaluates candidates to see which are passed on to the next generation. The word “fitness” does not necessarily have to mean biological fitness, and it doesn’t have to mean functional fitness. The word is a very general word that refers to how well an entity meets whatever criteria is present in the program under discussion." I do like it. It is clear and simple. And that is exactly the reason why I believe that all simulations using a fitness function are simulating some form of Intelligent Selection, and never Natural Selection. To simulate NS, as I have many times stated, no fitness function must be present. Fitness has to be true functional fitness, and must be sufficient to guarantee a reproductive advantage in the system "of its own", and not because it is "recognized" by some pre-programmed function in the system. I know of no simulation of NS. gpuccio
to kairosfocus. “Fitness function” is the standard phrase used for that part of the program which evaluates candidates to see which are passed on to the next generation. The word “fitness” does not necessarily have to mean biological fitness, and it doesn’t have to mean functional fitness. The word is a very general word that refers to how well an entity meets whatever criteria is present in the program under discussion. In this case, fitness refers to how many correct letters are in the phrase. That’s all. You keep arguing about issues that are not issues. No one - Dawkins himself nor anyone else - has ever claimed that matching the target string modeled biologically functional fitness. To mix metaphors, you keep tilting at a strawman of your own making. Then you write,
Then, the mutation module per se has no “knowledge” of the target in any case. All it would know on an explicit case is that some letters are masked off. turn off mask, and with the right pop and rate, you are at implicit case.
Yes, but in order to mask the correct letters you have to have consulted the target phrase, and stored that information on a letter-by-letter basis as additional information about the phrase. Of course if you turn the mask off you get the implicit case, but that is exactly what I said. The difference is whether the mutation function does or does not have information about the target on a letter-by-letter basis. If the mutation function does have such information, in the form of a mask or flag having been stored with the letter, then the mutation function is not entirely random in respect to fitness. And, you write,
And, on the implicit case, when the target phrase has been hit, the whole is masked off at once from further mutations. The difference is whether you define hitting the target as a whole or as a letterwise case. When to mask, not if.
Of course when the target phrase is found, there are no more mutations, because the program quits. And, yes the difference is whether you define “hitting the target as a whole or as a letterwise case.” Again, in the letterwise case the mutation function knows which letters to not mutate, so mutation is not entirely random in respect to fitness, and in the “target as a whole” case the mutation function mutates irrespective of whether the letters are right, so mutation is random in respect to fitness. I agree that “letter by letter” and “phrase as a whole” is another way of highlighting the essential difference between explicit and implicit. hazel
kairosfocus @161
7] Jay M 159, David Kellogg posted a link in another thread that shows pretty convincingly that Dawkins’ text in The Blind Watchmaker cannot reasonably be read to suggest explicit latching. Mr Kellogg et al lost me in that linked, in the very first sentence, on slanderous incivility.
"Slanderous incivility"? The linked page equates ID with creationism, very briefly (with reference to the famous changes in Of Pandas and People). Rude? Perhaps. Unnecessary? Certainly. Hardly a reason to ignore the real issue raised.
As to what Mr Dawkins said circa 1986, and what it naturally means, cf 88 above for my latest citation and comments. You will see that explicit latching, for good reason, is a very natural understanding.
In the article you refuse to read the author quotes the full text regarding the weasel algorithm from TBW and goes through line-by-line building a program directly from Dawkins' own words. The text of the book is readily available via Google. Could you also go through line-by-line and show where Dawkins' explanation (not his sample output, but his actual explanatory text) could be interpreted to specify explicit latching? I've now re-read it myself several times and cannot see any way to support that contention. JJ JayM
Onlookers and participants: Further follow up on points of note:
1] Hazel, 155: felt that we were saying the same thing in different ways, but I wanted to make sure.
On the cited point, yes. On material context, note my remarks above, and below.
2] I also face a “hostile audience”
This site (for all its flaws and troubles) bears little material resemblance to the likes of Anti Evo, et al. An audience that disagrees is one thing; the sort of routine contempt, dismissive rhetoric laced with that, and general nastiness I have seen at sites such as the above named, are beyond the pale of basic civility. Underneath, we hear the distinct echo of Mr Dawkins' notorious claim that those who differ with his evolutionary materialism [especially if influenced by a religious perspective] are "ignorant, stupid, insane or wicked." And, we see that backed up by abusive magisterial power of major institutions, and expressed in question-begging censorship and hijacking of science in service to a highly controversial worldview and its agenda: materialism. Only where there is a willingness to address matters on the merits can we have serious progress. Which is why I have in latter days principally dialogued with you on the Weasel matter. But, I have to always remember the hostile onlookers. [Which, inter alia, is why I have to repeatedly underscore such matters as the import of the actual pattern of o/p's and discussion thereof by Mr Dawkins circa 1986.]
3] DK, 157: Focusing on the winners is only relevant in examining what you are calling “implicit latching.”
First, apology appreciated. (You will note my own, where it seems I inadvertently used language that while intended to be on the merits seems to have been overly pointy.) Generation champions, in the context of the conditions of Weasel c. 1986, and especially Mr Dawkins' remarks on cumulative progress, are actually telling us a lot about the population of the runs. For instance, if showcased "good" runs are taking 40+ and 60+ generations to hit target, then we know that no-change is winning ~ 50% of the time. That means that the mechanism strongly tends to preserve letters already on-target. Multiply by 200+ cases of letters once hit, never being seen to revert; leading to strong runs as a dominant characteristic of samples of over 300 letters in principle capable of changes. That is, the evidence is that steps forward are preserved, i.e. cumulative progress to target, just as described by Mr Dawkins. Thus, there is good reason to infer on the runs as published and the surrounding commentary that on simplest explanation, letters were explicitly latched on hitting their individual target. It is on remarks reported circa 2000, and just recently, that implicit latching becomes a better explanation of what Mr Dawkins did in 1986, on preponderance of evidence. (But, per the remarks of 1986, a letterwise partitioned version of Weasel is a legitimate version, one of the many possible Weasels.)
4] The Law of Large Numbers.
The underlying point in LOLN is that "large enough" samples of a population, that are on reasonably credible grounds not unduly biased, will reflect the population as a whole. The illustration I have used in this discussion is to draw up a bell-chart slit into even stripes and place it on a floor, then drop darts more or less evenly onto it. One hit could be anywhere. A few will be all over the place, but as we get to about two dozen, we will begin to see that the numbers of hits in stripes will more and more reflect the fraction of the overall area in the bands. That is -- as fractional area of such stripes on a bell curve [or the like] is a probability metric -- if probability is p, and we have N samples, the fraction in a zone of probability p will trend more and more to Np [its "expected value"] as N rises. This is why the observed fraction of N samples in a band, f/N, tends to the value p. It is also why the average of a large enough sample will tend to the population's average, up to the classic distribution of sampling means: for, "Avg" = SUM [p_i x value_i], for sub-populations of probability p_i each. It is why fluctuations "often" tend to go as root-N, so to double precision one needs to quadruple sample size, etc. In short, reasonably large samples -- and 300+ is a rather good case, on the whole of that -- will with high likelihood reflect the behaviour of the relevant population as a whole. And, skirt-catching needs big enough samples that it becomes reasonable to see far-skirt values in the sample.
5] What is the expected probability that a correct letter [i.e. circa 1986] will revert in the Weasel program?
On 200+ samples of such letters from runs, without exception, nearly -- effectively -- zero. The basis (up to now I thought this needed no explicit expansion . . . ) is that Expected Value, EV = N.p, while as N rises, Observed Value, OV --> EV. Then, in our case: N = 200+, and OV = 0. So p --> 0.
6] you can’t know what to expect unless you know the population size and the mutation rate.
On the very contrary, we have before us samples from "good" showcased runs circa 1986, of the relevant pop, of generation champions. They show the very strong appearance of latching, and on LOLN, we may very reasonably infer to latching on the o/p, as just explained and shown. The issue is mechanism, and from the original thread, I have pointed to explicit and implicit latching as reasonable mechanisms. It is on explicit reported testimony that implicit latching becomes the best explanation on preponderance of evidence.
7] Jay M 159, David Kellogg posted a link in another thread that shows pretty convincingly that Dawkins’ text in The Blind Watchmaker cannot reasonably be read to suggest explicit latching.
Mr Kellogg et al lost me in that linked, in the very first sentence, on slanderous incivility. Whatever emanations of penumbras of the text may have been brought into play to make you think that the text of TBW ch 3, circa 1986, cannot reasonably be understood as saying that the best explanation for Weasel on that text is explicit latching, I simply point to the Monash University as an outside group sympathetic to Mr Dawkins' views. (Mr Elsberry had to explicitly "correct" them by saying that Mr Dawkins did not latch explicitly.) As to what Mr Dawkins said circa 1986, and what it naturally means, cf 88 above for my latest citation and comments. You will see that explicit latching, for good reason, is a very natural understanding.
8] Hazel, 160: In the implicit case, the mutation function does not depend on and has no knowledge of the target phrase or any other details of the fitness function.
First, I must insist: a target proximity function that rewards mere closeness without reference to current functionality in any meaningful sense -- observe Mr Dawkins' "nonsense phrases" -- is NOT a "fitness function." And, this is the main reason why Weasel fails to be a reasonable presentation of the power of natural selection, which may only reward difference of current functionality. Had a biologically reasonable threshold of such function been put in place, Weasel would have failed directly -- as Mr Dawkins admitted in so many words, though he did not discuss the implications of a search space of 1 in 10^40 [27^28] vs 1 in 10^180,000 [4^300,000] for even reasonable first life. Then, the mutation module per se has no "knowledge" of the target in any case. All it would know on an explicit case is that some letters are masked off. turn off mask, and with the right pop and rate, you are at implicit case. And, on the implicit case, when the target phrase has been hit, the whole is masked off at once from further mutations. The difference is whether you define hitting the target as a whole or as a letterwise case. When to mask, not if.
9] it is correct to say that in the explicit case, mutation is not random in respect to fitness.
Weasel, quite explicitly [cf 88 supra], dodges the issue of fitness, i.e. of credible functionality and associated combinatorial complexity. So, in neither the explicit nor the implicit latching case can one correctly suggest that mutation is in any wise related to "fitness." Mutation is related to letters, and then a filter looks for proximity to a target. In the implicit case, it locks off further mutations on hitting the whole phrase. In the explicit case, it does so letterwise. In neither case do we see any serious assessment of first having to get to function so that relative fitness can be a properly material consideration. Thus, the build-up to an inference on divergent letterwise probability is irrelevant. The key fallacy has long since been made. And, interpreting mask-off on a letterwise vs a phrase-wise basis leaves BOTH on the wrong side of the relevant fallacy, of rewarding non-functionality on mere proximity, down to the letterwise level. GEM of TKI kairosfocus
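The dart-and-stripes illustration is easy to reproduce numerically. A small sketch, with the stripe boundaries and sample sizes chosen arbitrarily, showing the observed fraction f/N of hits in one stripe of a bell curve approaching that stripe's probability p as N grows:

import math
import random

def normal_cdf(x):
    # cumulative distribution of the standard normal
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# probability mass of one "stripe" of the bell curve (boundaries are arbitrary)
lo, hi = 0.5, 1.0
p = normal_cdf(hi) - normal_cdf(lo)

random.seed(0)
for n in (25, 100, 1000, 10000):
    hits = sum(lo <= random.gauss(0, 1) < hi for _ in range(n))
    print(f"N = {n:>5}: hits = {hits:>4}, f/N = {hits / n:.3f}, N*p = {n * p:.1f} (p = {p:.3f})")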
Well, kairosfocus and I seemed to have cleared this one point up, as he wrote in 149, "Hazel: There is no material difference between us on the substantial matters, once we see how explicit and implicit latching can work." This was in respect to my point that the essential logical difference between the explicit and implicit latching situations is in the mutation function. Kf had written, "Yes, in an implicit case, P(mut) is the same whether or no a let[t]er has already hit target. Yes, the p(mut) for non latched letters in explicit latchi[n]g is different from that of latched ones ….," and I had written,
Implicit: for each letter, p(mut) = p Explicit: for each letter, if letter is incorrect, p(mut) = p if letter is correct, p(mut) = 0
With that said and agreed upon, I'd like to return to a previous point I had made, which will be clearer now that we have clarified this essential difference between the implicit and explicit cases. I claim that it is accurate to say:
In the implicit case, mutation is random in respect to fitness. In the explicit case, mutation is not random in respect to fitness.
Let me explain more about why the above is correct. In the implicit case, the mutation function does not depend on and has no knowledge of the target phrase or any other details of the fitness function. Every letter always has the same probability of mutating irrespective of whether it is correct or not. Mutation is random - the only factor being the mutation rate p that is applied uniformly to all letters at all times. Mutations happen entirely independently of any effect the mutation or lack thereof may have on fitness. This is why it is correct to say that In the implicit case, mutation is random in respect to fitness. In the explicit case, the mutation function is dependent upon and influenced by the fitness function, because for each letter it must reference the target string to see which of the two rules to apply: if the letter is incorrect, p(mut) = p or if the letter is correct, p(mut) = 0. In this case, if a letter is subject to mutation (by being incorrect), the probability that it will mutate is random, and so is the probability that it will mutate to the correct letter. But whether a letter is subject to possible mutation is not random: that is determined by comparing the letter to the target string. This is why it is correct to say that in the explicit case, mutation is not random in respect to fitness. hazel
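In code, the two rules described above differ by a single comparison; a minimal sketch (the alphabet and rate are assumptions, and the target parameter appears in both signatures only to make the contrast visible):

import random
import string

ALPHABET = string.ascii_uppercase + " "

def mutate_implicit(phrase, target, rate=0.05):
    # implicit case: p(mut) = rate for every letter, correct or not;
    # the target argument is deliberately never consulted
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def mutate_explicit(phrase, target, rate=0.05):
    # explicit case: p(mut) = 0 for letters that already match the target,
    # p(mut) = rate otherwise -- the target must be consulted letter by letter
    return "".join(c if c == t else
                   (random.choice(ALPHABET) if random.random() < rate else c)
                   for c, t in zip(phrase, target))

The implicit version never reads its target argument at all; the explicit version must compare each position against the target before deciding whether that position is even eligible to mutate.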
kairosfocus @150
Hazel, the evidence on the o/p and Mr Dawkins’ discussion of it circa 1986 strongly supports that there are very, very few if any such reversions. Indeed, it at minimum strongly suggests that there are none. That is why, absent taking the testimony that there was not explicit latching at work, explicit latching is a very viable and natural explanation of what was displayed and what was said about it.
David Kellogg posted a link in another thread that shows pretty convincingly that Dawkins' text in The Blind Watchmaker cannot reasonably be read to suggest explicit latching. Can you go similarly step-by-step through Dawkins' description and show how explicit latching is a "viable and natural" explanation? JJ JayM
Joseph [156], are you stuck on the difference between climb and climbing or between mountain climbing and other mountain sports? The difference is trivial in either case. Anyway, modifying the Google search, this is from a climbing teacher's journal:
Thursday Apr 20 Climbing 30:00 [3] teaching anchor class at hammond pond. about half hour of cumulative climbing
And here's a climber (or maybe a biker) writing about his sport watch:
I love my Suunto. It keeps track of cumulative climbing, so if you are going up and down (lot of PUD's) it sorts out the "up" part. It is pretty amazing what the cumulative climb can show. Anyway... I am trying to compare the change in pressure due to a front compared to a change due to 100 feet of altitude change. Are they about the same? Just guessing.
David Kellogg
Moderators, this failed to post earlier. Could you post it please? kairosfocus [144], I apologize for saying anything that might be taken to impugn your motives or integrity. Let me focus on two issues: the lottery example, and the Law of Large Numbers as it relates to total population. 1. The lottery example. You write:
here the relevant population is the generation champions, which is where the o/p latching was observed in the first place. This is a study of lottery winners, not the overall population, and the point of IMPLICIT latching as an explanatory mechanism is that the o/p will latch based on how the lottery is run
No. In explicit latching, the relevant population is the whole population. The question is not whether correct letters that revert are ever selected (that is, "win" the lottery), but whether they ever revert at all. Focusing on the winners is only relevant in examining what you are calling "implicit latching."
The Law of Large Numbers says that in repeated, independent trials with the same probability p of success in each trial, the chance that the percentage of successes differs from the probability p by more than a fixed positive amount, e > 0, converges to zero as the number of trials n goes to infinity, for every positive e.
Wolfram Math World puts it more generally:
A "law of large numbers" is one of several theorems expressing the idea that as the number of trials of a random process increases, the percentage difference between the expected and actual values goes to zero.
What is the expected probability that a correct letter will revert in the Weasel program? You haven't given such a probability. Why? Because you can't know what to expect unless you know the population size and the mutation rate. Therefore, you can't say anything about latching from the examples in TBW. David Kellogg
David Kellogg, I see that the word context still eludes you. And still nothing about mountain climbers using the term "cumulative climbing". Your issues are not my problem. Joseph
kairosfocus, you write,
Hazel: There is no material difference between us on the substantial matters, once we see how explicit and implicit latching can work.
Thank you. I felt that we were saying the same thing in different ways, but I wanted to make sure. You also write,
Unlike you, i have to bear in mind a hostile audience fraction who will gleefully extract what they can find to caption as an occasion for rhetorical dismissal. They already have done so. Repeatedly. Please try to understand that.
I would like to address this issue, as one of my main interests is how people with differing perspectives can constructively communicate with each other. I also face a "hostile audience", in that my overall perspective is different from the prevailing perspective at this forum, and I often have my points met with "rhetorical dismissal." However, I prefer to not think of that as hostile, and I definitely prefer to not respond with hostility and rhetoric: I believe pretty strongly that I should do unto others as I would have them do unto me rather than doing to others what they do to me. Two wrongs don't make a right. And to make a less platitudinous point, I believe that when I am met with behavior that I think is wrong, that is even more reason for me to try to behave well: if the other person is behaving poorly, then I need to behave twice as well in order to make up for their shortcomings. So when I am met with rhetorical dismissal or other non-constructive responses from people who disagree with me, my response is just to stay positively focused on the immediate issues. And last, you write,
PPS: Hazel, the evidence on the o/p and Mr Dawkins’ discussion of it circa 1986 strongly supports that there are very, very few if any such reversions. Indeed, it at minimum strongly suggests that there are none. That is why, absent taking the testimony that there was not explicit latching at work, explicit latching is a very viable and natural explanation of what was displayed and what was said about it.
This is an example of something that you don't need to bother saying to me, because I have not been discussing this issue, nor been interested in it, for a very long time, and I've said that to you a number of times. I wish you could hear that, and limit your responses to me to topics that are currently on the table between us rather than continuing to repeat points that are not currently on the table. That also makes for more productive communication. hazel
Joseph, you write:
Climbers do NOT refer to that as “cumulative climbing”
A simple Google search for the phrase "cumulative climb" and the word "mountain" demonstrates that this is incorrect. Here are some examples:
The orphaned Cataloochee pavement starts at Sal Patch Gap (3580') and descends into the valley. At mile 3 the pavement crosses Cataloochee Creek (2600') and is joined from the right by the gravel road coming 2 miles from Mt Sterling Rd. The pavement continues up the valley along the creek and past the campground. The road passes by several old settlements before turning to gravel (mile 5) and terminating at mile 6 (2860'). The loop is 7 miles with a cumulative climb of 1000'.
Here's another:
This hike will follow the MST south on the Shut-in Trail to the Sleepy Gap Overlook for lunch, and return on the same trail. Cumulative climb is about 1600 feet. Grade is mostly moderate. Nice views of the French Broad. First meeting place: Ingle’s, US 25N, Hendersonville. Second meeting place: Biltmore Square Mall parking lot, near McDonalds.
(I've hiked that one.) Here's one for biking:
Day 2: Bled – Ribcev Laz (Bohinj Lake) (40 km, cumulative climb 700 m). Uphill to Pokljuka high plateau. From there you cycle descending down to the Bohinj valley, place of unique beauty of nature and tiny villages in Alpine valley.
And another biking one:
The first day of this 5 day duathlon involved running 30km from close to Everest base camp to the largest town in the area, Namche bazaar. Along the way the competitors would drop over 2500m but would also climb a cumulative total of over 800m.
David Kellogg
American Heritage Dictionary: cu·mu·la·tive, adj.
1. Increasing or enlarging by successive addition.
2. Acquired by or resulting from accumulation.
3. Of or relating to interest or a dividend that is added to the next payment if not paid when due.
4. Law a. Supporting the same point as earlier evidence: cumulative evidence. b. Imposed with greater severity upon a repeat offender: cumulative punishment. c. Following successively; consecutive: cumulative sentences.
5. Statistics a. Of or relating to the sum of the frequencies of experimentally determined values of a random variable that are less than or equal to a specified value. b. Of or relating to experimental error that increases in magnitude with each successive measurement.
GEM of TKI kairosfocus
hazel:
A process can be cumulative and at the same time you can occasionally lose some of what you have, which is different than what you said.
Only if you re-define the word "cumulative". And that appears to be what evolutionists always want to do - redefine words to suit their needs. Joseph
hazel:
Two examples I have used: when climbing a mountain, you occasionally go downhill for a while.
Climbers do NOT refer to that as "cumulative climbing".
When accumulating savings, occasionally you have less money than you did the month before.
If you ever have less than before then it is NOT an example of cumulative savings. And again perhaps Dawkins should use the term "back-n-forth selection". But if he did that then he could never illustrate his point that selection can account for something. Joseph
Joseph: You are of course materially correct, but I suspect that all you can really hope for is that the correction of the record here will make sure that onlookers can see the holes in the endlessly recycled objections. GEM of TKI
PS: I also suggest that with truly large per-generation populations, enough of the skirts will show up that multiple-mutation cases, ever so rare as they are, will break through and win the championship on mere proximity. That is why I speak of co-tuned mutation rates and population sizes.
PPS: Hazel, the evidence on the o/p and Mr Dawkins' discussion of it circa 1986 strongly supports that there are very, very few if any such reversions. Indeed, it at minimum strongly suggests that there are none. That is why, absent taking the testimony that there was not explicit latching at work, explicit latching is a very viable and natural explanation of what was displayed and what was said about it. kairosfocus
Hazel: There is no material difference between us on the substantial matters, once we see how explicit and implicit latching can work. Unlike you, I have to bear in mind a hostile audience fraction who will gleefully extract what they can find to caption as an occasion for rhetorical dismissal. They already have done so. Repeatedly. Please try to understand that. GEM of TKI kairosfocus
A process can be cumulative and at the same time you can occasionally lose some of what you have, which is different from what you said. Two examples I have used: when climbing a mountain, you occasionally go downhill for a while. When accumulating savings, occasionally you have less money than you did the month before. hazel
kellogg:
It just occurred to me from ROb’s comment above why Joseph (in the latching thread) misunderstands the notion of “cumulative” selection.
Nice bald accusation.
Cumulative in TBW means that the total phrase is closer to the target, not that each letter is.
Exactly. And given a target, a small enough mutation rate and a large enough sample size, the selected offspring will never be farther away from the target than the parent. So when a 28-letter target is matched by 15 letters, a progeny that matches 16 letters will be a cumulative advance even if a particular letter reverts.
So you are saying that at least one offspring received three mutations? One that flipped a correct letter and two others that matched the target? So much for the gradual change that Dawkins was trying to illustrate. And so much for small mutation rates.
In short, Dawkins's use of "cumulative" implies non-latching of individual letters.
Not according to his description and illustration in TBW. In TBW Dawkins uses "weasel" to illustrate cumulative selection. "Cumulative" means "increasing by successive additions". INCREASING BY SUCCESSIVE ADDITIONS. "Ratchet" means to "move by degrees in one direction only". Increasing by additions means to move by degrees in one direction only. Dawkins NEVER mentions that one or more steps can be taken backward. He never says anything about regression. Therefore cumulative selection is a ratcheting process as described and illustrated by the "weasel" program in TBW. That is, once a matching letter is found the process keeps it there. No need to search for what is already present. Translating over to nature, this would be taken to mean that once something useful is found it is kept and improved on. IOW it is not found, lost, and found again this time with improvements. By reading TBW that doesn't fit what Richard is saying at all. And he never states that he uses the word "cumulative" in any other way but "increasing by successive additions". How can a process be "cumulative" and at the same time allow you to keep losing what you have? Joseph
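As an arithmetic aside that may help readers weigh this exchange: how often a child carries three or more letter changes is just a binomial question. A rough Python sketch, assuming purely for illustration a 5% per-letter rate, a 28-letter phrase, and 100 offspring per generation (none of these figures comes from the 1986 program):

from math import comb

u, L, N = 0.05, 28, 100   # assumed per-letter mutation rate, phrase length, offspring per generation

def p_exactly(k):
    # Probability that a single child has mutation events at exactly k of its L positions.
    return comb(L, k) * u**k * (1 - u)**(L - k)

p_three_or_more = 1 - sum(p_exactly(k) for k in range(3))
print(round(p_three_or_more, 3))       # about 0.16 per child for these parameters
print(round(p_three_or_more * N, 1))   # about 16 such children expected per generation of 100

Whether such a child also happens to revert a correct letter while gaining two new matches is a much rarer and more specific event; the sketch only shows how common multi-change offspring are under these assumed settings.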
Kairosfocus writes,
Hazel, I have already pointed out that the very term latching implies that once a latched letter hits the target, its probability of further change falls to effectively zero. (A bit more than that in the case of quasi-latching, and also that of explicit latching that triggers post target shifting.)
And I’ll point out, and have pointed out several times before, that I understand that. What I don’t understand is why you keep telling me things that we already agree upon. kf writes,
These are why we need credible code to make a definitive conclusion beyond the preponderance of evidence.) I have also pointed out that (i) this is well warranted by Mr Dawkins' statements c. 1986 as already cited and remarked on, and that (ii) locking up on a letter by letter basis is not in principle different from locking up on the basis of hitting the phrase.
And I have said, repeatedly, that I am not interested in, or discussing, the historical problem of what Dawkins did or didn’t do, nor am I interested in specific implementations or interpretations of others such as Apollos (I know nothing about what he did.) I am interested in the pure logic, and the programming implementation of that logic, of the basic difference between the explicit and implicit latching cases. A few days ago, you wrote,
Yes, in an implicit case, p(mut) is the same whether or not a letter has already hit target. Yes, the p(mut) for non-latched letters in explicit latching is different from that of latched ones ….
So, if I were writing code, or if you were, for both an explicit latching version and an implicit latching version, could we write, based on what you wrote above,
Implicit: for each letter, p(mut) = p
Explicit: for each letter,
    if letter is incorrect, p(mut) = p
    if letter is correct, p(mut) = 0
Does this capture the essential, fundamental logical difference between the two situations? hazel
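For illustration only, here is a minimal Python sketch of the two mutation rules just described; the 5% rate, the 27-character alphabet, and the target phrase are assumptions chosen for the example, not a reconstruction of anyone's actual Weasel code:

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # assumed 27-character alphabet
TARGET = "METHINKS IT IS LIKE A WEASEL"    # assumed 28-letter target phrase
P_MUT = 0.05                               # assumed per-letter mutation rate

def mutate_implicit(phrase):
    # Implicit case: every letter, correct or not, has the same chance p of changing.
    return "".join(random.choice(ALPHABET) if random.random() < P_MUT else c
                   for c in phrase)

def mutate_explicit(phrase):
    # Explicit case: a letter that already matches the target is never touched (p = 0).
    return "".join(c if c == t
                   else (random.choice(ALPHABET) if random.random() < P_MUT else c)
                   for c, t in zip(phrase, TARGET))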
PS: On sample size. Onlookers, that was raised and properly answered in the original thread where the issue was raised by GLF in a threadjack attempt. Each sample point provides 28 letter samples per snapshot. Latching is evident in the succession of letters, not the phrase as a whole. DK is simply again trying to find a way to suggest that a sample of 300+ letters that could change, with runs of latched letters for 200+ of them, is below the LOLN threshold. Recycling already adequately answered objections. Shall we call it "objecting in circles" to avoid being overly direct? [Is that phrasing acceptable, Mr Hayden?] kairosfocus
Mr Kellogg: I -- for excellent reason -- take serious exception to the following remark, which directly implies that I am lying, in a context where I have repeatedly given the Law of Large Numbers [LOLN, henceforth] grounds for my conclusions, over three threads now:
[DK, 140:] You know the sample is unrepresentative, but you persist. You don’t know the population size, but you persist . . .
1 --> As I have repeatedly pointed out and explained, even where the population at large is indefinitely large, a sufficiently large sample -- hence LOLN -- will as a rule be representative thereof. [And here the relevant population is the generation champions, which is where the o/p latching was observed in the first place. This is a study of lottery winners, not the overall population, and the point of IMPLICIT latching as an explanatory mechanism is that the o/p will latch based on how the lottery is run. So, it is a distraction to advert to the possibility that within a generation there may indeed be members where the letters that latch in the run of champions are not latched. Due to the way champions are selected, that is of no EFFECTIVE consequence, as has been repeatedly highlighted and explained. That is, so long as the pop is small enough and the per-letter mutation rate is sufficiently low relative to that, so that a significant number of zero-change and one-change members are present, a Weasel program will latch or at worst quasi-latch. This is because far-skirt multiple-change members that substitute one good letter for a reverted one will be too rare to show up significantly in the runs of Weasel before it hits the target. And, when the parameters are shifted to allow that substitution effect to trigger reversions, we will see first quasi-latching, then also cases of multiple-letter jumps towards the target as the skirt comes into play, leading, relatively speaking, to a tearaway rush to the target. The reported 500, 5% cases that run to target in about 20 - 30 gens show that case aptly. It is also possible to have versions of Weasel that converge extra slowly, 1000+ gens, which will show reversions etc., as Apollos inadvertently demonstrated through an error in his program. (Contrast: in the published 1986 runs, 40+ and 60+ gens were used for showcased "good" runs; that is, no-change won the generation championship about 1/2 the time.) All this has been repeatedly pointed out, over three threads now.]
2 --> While there are pathological cases, it should be abundantly plain that sampling in the main at every tenth generation of champions will not correlate with any reasonable Weasel algorithm, and
3 --> in addition, Mr Dawkins' statements on "cumulative" progress and the like lend further reason to believe that the published excerpts of runs circa 1986 were representative of performance on good runs at that time.
4 --> It is a longstanding statisticians' rule of thumb that 20 - 30 is more or less the range where "big enough" allows LOLN to begin to kick in.
5 --> I have also pointed out the significance of strong runs in a trend.
_______________
In short, I have warranted my conclusions. To date, I find no indication that you have seriously interacted with the sampling issue lurking in the LOLN. And yet, you are willing to draw quite serious conclusive and dismissive inferences. I find it further interesting that the same issue is precisely the underlying point in the concept of Complex Specified Information and its relevant subset, FSCI. Namely, a search that is random or otherwise equivalent [cf. Dembski-Marks on active information and the cost of search, as well as the search for a search] will be so overwhelmingly dominated by the typical configurations -- on the gamut of the search resources of the observed cosmos as a whole -- that it will be maximally unlikely to find islands of function requiring 500 - 1,000 or more bits of capacity to store the used information.
Thus, onlookers: we see -- again -- where the selectively skeptical objection to one thing leads, step by step, to a point where we see that one has an inconsistency in his or her scheme of warrant. CONCLUSION: It is plain that there is good reason to see that the published runs circa 1986 were representative of what were thought to be "good" runs at that time. Since then, Weasel [circa 1987] and neo-Weasel programs have as a rule been carefully set up NOT to latch. The reason is that the obvious latching -- 200 out of 300 changeable letters without a single exception -- led to the recognition of the key flaw in the program: targeted search, without reference to functionality of relevant complexity. So, Weasel is not a good illustration of the powers of any BLIND watchmaker, as it is an example of intelligently designed, targeted, foresighted search. Of intelligent design, in fact. GEM of TKI kairosfocus
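To make the population-statistics argument above easy to check, a back-of-envelope Python sketch; the 5% rate, population of 100 per generation, and 28-letter phrase are assumed parameters for illustration, not figures recovered from the 1986 program:

u, N, L = 0.05, 100, 28   # assumed per-letter mutation rate, offspring per generation, phrase length

p_exact_copy = (1 - u) ** L                            # a given child reproduces its parent unchanged
p_generation_has_copy = 1 - (1 - p_exact_copy) ** N    # at least one unchanged child in the generation

print(round(p_exact_copy, 3))    # about 0.24
print(p_generation_has_copy)     # effectively 1.0 for these parameters

Under these assumed settings virtually every generation contains an unchanged copy of the parent, so the champion's letter count cannot drop below the parent's; a correct letter can revert in the winning child only if offset by at least as many new matches elsewhere.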
Hazel: I have already pointed out that the very term latching implies that once a latched letter hits the target, its probability of further change falls to effectively zero. (A bit more than that in the case of quasi-latching, and also that of explicit latching that triggers post target shifting. These are why we need credible code to make a definitive conclusion beyond the preponderance of evidence.) I have also pointed out that (i) this is well warranted by Mr Dawkins' statements c. 1986 as already cited and remarked on, and that (ii) locking up on a letter by letter basis is not in principle different from locking up on the basis of hitting the phrase. Given the partly hostile audience context here and elsewhere (which I need to always keep in mind), that extra bit is very important. GEM of TKI kairosfocus
Mr Hayden (and others): My apologies. My intent was to advert to the fallacy of the closed mind; not to make a personal attack or to slight a person. I strictly intended to speak to the issue on the merits. GEM of TKI kairosfocus
David Kellogg, and Kairosfocus, Kairosfocus: Don't call folks closed-minded. David: "Finally, you accuse me of close-minded repetition, yet your posts in this thread and the previous one have repeated the same nonsense over and over" Don't call folk's work nonsense. Both of you need to tone down the slighting and jabbing at each other. Clive Hayden
kairosfocus [138], you are wrong in so many ways I scarcely know where to start. You say that I am providing
Capital illustration of trying to make something seem so by closed-minded repetition of selectively hyperskeptical claims
Not at all. I am merely saying what is true: that you can make no inference about trends when you have a highly biased sample from a population of unknown size. You know the sample is unrepresentative, but you persist. You don't know the population size, but you persist. I once heard a five-year-old claim that "Most people are pink." He'd seen a sizable sample of people in his life -- certainly a great deal more than 200 -- but the sample he had observed was highly biased, and the world population was greater than he knew. It is as though you are drawing conclusions about the economy based on a survey of people who won the lottery! Further, you have the sample size wrong. Each letter is not a member of the population: each phrase is. You can't sample below the level of the 28-letter phrase. Those are the "individuals" in the population. So your sample size is not 300, not 200, but 7 in one case and 8 in the other. Finally, you accuse me of close-minded repetition, yet your posts in this thread and the previous one have repeated the same nonsense over and over, for over 5000 words in this thread alone. David David Kellogg
kairosfocus, you again responded to me on many issues that I didn’t bring up. I don’t understand why you do that. To summarize: You wrote, in 132,
Yes, in an implicit case, p(mut) is the same whether or not a letter has already hit target. Yes, the p(mut) for non-latched letters in explicit latching is different from that of latched ones ….
I wrote, in 136, (shortened a bit)
Let p = the mutation probability and let p(mut) = the probability that a particular letter in a particular phrase will mutate.
In the implicit case, for each letter: p(mut) = p
In the explicit case, for each letter:
    if the letter is incorrect, p(mut) = p
    and if the letter is correct, p(mut) = 0
I think what I wrote is just agreeing with you, but is a little more specific about what p(mut) is in each case. Does what I wrote agree with what you wrote? Would you be so kind as to answer this question in a single paragraph or two? I’m not arguing any of the other points you’ve made, so I don’t think it’s necessary to bring them up again. hazel
2] Hazel, 136: I don't believe this is an issue that needs to be brought up anymore, at least not with me.
Unfortunately, this is exactly the key issue that needs the most emphasis in the teeth of a barrage of attempts to dismiss or ignore or bury it. Weasel is targeted search that rewards proximity without regard to a threshold of function. It is invalid as a claimed or implied example of a BLIND watchmaker. That is what shows what was going on in 1986, and it remains the problem with ever so much of Darwinian advocacy in the classroom and the popular media today. Worse, the want of basic realism of all too many computer simulations of "evolution" in the research literature shows that the problem of misleading computer icons of evolution is still with us. [Do I need to point out that once you can set up an algorithm, you can make a PC do almost anything and often make it look "real" enough? Only reality is reality! Simulations are not experiments! Computer projections/models are not FACTS -- contrary to what the Al Gores of this world may think. GIGO!!! Models and theories must be validated against reality, and logic, and alternative assumptions, and must be seen as at best only provisionally supported, not true beyond any reasonable doubt. Ask the ghosts of Ptolemy, Galileo and Newton.]
3] "the probability of mutations of particular letters for members of the population."
On this, I object both that the individual probability of mutation is unrealistically large and that the proportion of "good" possible mutations among these is also far too large. I also add that the information capacity in question is far too small. Such points are highly relevant because we are in a search-space context, and the cumulative effect is to give a highly misleading impression of the relative ease of finding islands of functionality.
4] In the explicit case, for each letter: if the letter is incorrect, p(mut) = p; if the letter is correct, p(mut) = 0
Save for the case where, as Apollos showed, we can program in mutations in explicitly latched cases [this is why only code, credible code, is demonstrative], that would be correct and has never been in dispute. After all, "explicit latching" plainly means that correct letters -- save as specifically programmed to do so -- will not revert. The temptation, though, is to then ignore the material context: letters arrive at that state because they have been mutated at random and rewarded on hitting the target. That is, the target is partitioned letter-wise instead of seen as a whole. You will note that in by far and away most versions of Weasel out there, once the target is hit, no further mutations occur. An explicitly latched case would "simply" do the same basic thing, but on a letter by letter basis.
5] Re GLF (et al): I ask you once more. How many letters total were in the run? It was not 300. It was not 200. How many letters were in the population as a whole? And of the 300+ letters shown, what section of the total population did they represent? Could there have been a sampling bias there?
What is happening here is that, since these objections were answered with reference to the law of large numbers and sampling theory several threads ago, now in the archives of UD, they are being recirculated with selectively hyperskeptical assertions as though they were not cogently answered long since.
Onlookers: GLF et al. have NEVER been able to cogently address the issue that there is such a thing as the law of large numbers [which is foundational to statistical sampling, a generally accepted practice]. That is, the objection is selectively hyperskeptical, as the samples in evidence do not point where they wish to go -- and, obviously, had it fit their agenda, they would have accepted the same results without a blink. Moreover, they have no good answer to the observation that there is reason to see that when Mr Dawkins showcased the o/p samples circa 1986 as "good results," we have every reason to take his statements that the o/ps showed "cumulative" progress to target at face value. Such circling back to already answered objections, sadly, is a hallmark of the fallacy of the closed mind. [I sampled this case only to illustrate the root problem, and to show why I no longer take GLF and ilk seriously; save as saddening examples of the rising tide of intellectually irresponsible and even uncivil conduct that threatens our liberties.] GEM of TKI kairosfocus
Follow-up remarks:
1] Re DK, 135: the 1986 "observations" are a highly biased sample (by the very nature of the experiment) from a population of unknown size, you can conclude precisely nothing about latching from it.
Capital illustration of trying to make something seem so by closed-minded repetition of selectively hyperskeptical claims, backed up by refusal to look seriously at how Mr Dawkins himself described the typical trends, i.e. as cumulative, strong trends (not to mention his overall confessions, as I summarised above at 88 from easily accessible excerpts of the discussion by those trying to justify what he did).
I have long since pointed out that once we have a reasonably sufficient sample of a population or a trend/timeline pattern and no reason to infer bias, MOST samples are representative of the population and/or trends, especially those outcomes that dominate it statistically. This is called the law of large numbers, and it is the foundation of sampling theory. So far as the population of possible outcomes of Weasel-type exercises is concerned, 28 27-state elements have 27^28 ~ 10^40 states, which is a known size and in theory one that could be enumerated. But long before that, we will as a rule get a good enough look by sampling the population. Meaningful phrases in English, much less a unique one, will be vanishingly rare compared to nonsense, non-functional ones. That is why there is a challenge to get to shores of function. Mr Dawkins sought to overcome this by using a targeted search mechanism, but undercut its validity decisively by choosing to use a proximity-rewarding search technique that makes no reference to functionality.
Now, on picking up the trends as shown in the published o/p circa 1986: as a good rule of thumb, for linear trends not masked by excessive noise [including curvilinear], 5 - 9 samples are often enough to get a pretty good picture, though I am more comfortable with 20 - 30 if you have the time to get that, with further concentrations at knees. [But, too, data logging and automated collection of thousands of data points are not always feasible, nor necessary.] For populations in general, 20 - 30 is a good enough point where the law of large numbers comes into force. (That is the point where the layman's "law of averages" has a point, though the layman's view is often riddled with fallacies on reversion to the mean etc.: the idea that if there has been a run of heads, a tail is overdue is not correct. But, if I were to see a coin you cannot directly inspect tossed 10 times and come up heads every time [odds on a fair coin being about 1 in 1,000], I would not be prepared to bet against it being a double-header. TRENDS, especially strong ones that lead to runs in data, are often very revealing.)
In the case of the 1986 Weasel, we have 300+ points where letters could change, and of these 200+ show letters going correct then staying that way, across dozens of generations, sampled by and large every tenth generation. There are NO cases of observed reversions: a strong run indeed. By sharpest contrast, the 1987 and later renditions of neo-Weasel consistently show fairly frequent reversions. The striking difference is best explained by explicit or implicit latching in 1986, with implicit being the best on preponderance of evidence. 1987, on this model, was de-tuned so it shows reversions and winking.
Neo-Weasel programs, extending the model, are designed not to fall into the now-known trap of unpersuasively showing a dominant feature that spotlights what is wrong with the whole exercise: latching points to the invalidating significance of the proximity-reward search pattern, which does not reckon with the search-resources implications of the threshold of complex functionality manifested by real life forms. (This is also the reason why the same ilk objects so strongly to the otherwise obvious and easily exemplified concept of functionally specific complex information and its known cause.) As to the claim that the 1986 sample is "biased," we see that Mr Dawkins chose to showcase it. He and his supporters have to live with the consequences of such showcasing; and indeed, his further descriptions strongly suggest that this was typical of what he then considered "good" results. Then the objections rolled in, and there was a shift to emphasising non-latching variations or at most quasi-latching variations. [ . . . ] kairosfocus
kairosfocus, first you write,
The basic problem -- as I have noted since December -- is, the program is pursuing a target, using a basic strategy that rewards mere proximity, with no reference to function. That means that it in effect suggests that it is rewarding differential function, but it is not. It is not a BLIND watchmaker.
I agree with this, and think it is reasonably well stated. I think most people here agree with this. I don’t believe this is an issue that needs to be brought up anymore, at least not with me. Then you write,
Once that is in, questions on the probability of mutations of particular letters for members of the population are effectively moot.
For me, this is not a moot issue - this is the issue I’m interested in, because I’m interested in the programming logic. So, given that I accept your point in the first quote above, I’d like to continue the discussion about “the probability of mutations of particular letters for members of the population.” You write,
Yes, in an implicit case, p(mut) is the same whether or not a letter has already hit target. Yes, the p(mut) for non-latched letters in explicit latching is different from that of latched ones ....
Good. I agree with this also. In fact, I think we could write the following. Let the mutation probability be p. For instance, many people have been using p = 5% in their examples. Also let us use, as you did, p(mut) as the probability that a particular letter in a particular phrase will mutate.
In the implicit case, for each letter: p(mut) = p
In the explicit case, for each letter:
    if the letter is incorrect, p(mut) = p
    if the letter is correct, p(mut) = 0
Do you agree that this adequately describes the difference between p(mut) in the implicit and explicit cases? hazel
kairosfocus, since the 1986 "observations" are a highly biased sample (by the very nature of the experiment) from a population of unknown size, you can conclude precisely nothing about latching from it. David Kellogg
George, "Then why is 99.999999999999999999999+% and more of the universe empty of life? That is the question you were asked. If the universe was designed for life, why is it basically empty of life? Your avoidance of that question has been noted." How much life would you deem necessary, percentage-wise, in order to consider the universe designed for life? And what universe would you be using by comparison to say that this universe is not designed for life? Usually the argument that aliens do exist is used as an argument that our existence is nothing special, and so is the argument that there are no aliens. Folks need to choose which argument they'll advance against ID, and not use both. I see both arguments trotted out as "evidence" against ID, but they're logically contradictory. And secondly, if we had an idea of a healthy universe, then we could call ours sick. But we have no such knowledge, for the sample of the universe is 1. Clive Hayden
George L Farquhar, "Is there some reason I was put into moderation? Clive?" So I can keep an eye on you :) Clive Hayden
Hazel: The basic problem -- as I have noted since December -- is, the program is pursuing a target, using a basic strategy that rewards mere proximity, with no reference to function. That means that it in effect suggests that it is rewarding differential function, but it is not. It is not a BLIND watchmaker. Once that is in, questions on the probability of mutations of particular letters for members of the population are effectively moot.
Yes, in an implicit case, p(mut) is the same whether or not a letter has already hit target. Yes, the p(mut) for non-latched letters in explicit latching is different from that of latched ones [which will only mutate again if that is written in, as Apollos demonstrated]. But, in context, we are seeking to explain what was observed in 1986, and what was said about it. What was observed is 200+ latched letters in a sample of 300+. No exceptions. What was said was, cumulative progress that rewards the slightest increment in proximity to target. Taken together, that more than legitimates understanding Weasel c. 1986 to be explicitly latched, letterwise partitioned search, as Monash U showed -- until Mr Elsberry "corrected" them. It is on the further statement that Mr Dawkins did not explicitly latch Weasel in 1986 that I have accepted that the best explanation per preponderance of evidence, a few oddities still sticking out, is implicit latching. And, we agree that there is a way to do it that, with detuning, also explains the 1987 videotaped o/p.
Mr Dawkins' major problems are not with latching, but with setting up a foresighted watchmaker as if that answered to the challenge of a BLIND watchmaker. That is Weasel's fundamental failing, and it is why that program should be withdrawn, with an explanation of why it was misleading. GEM of TKI kairosfocus
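To illustrate the distinction being pressed in the comment above, proximity-rewarding scoring versus scoring gated on function, a small Python sketch; the function test is a hypothetical placeholder, not something taken from Weasel or from any published code:

TARGET = "METHINKS IT IS LIKE A WEASEL"   # assumed target phrase

def proximity_fitness(phrase):
    # Rewards any increase in closeness to the target, even for nonsense phrases.
    return sum(a == b for a, b in zip(phrase, TARGET))

def function_gated_fitness(phrase, is_functional):
    # Hypothetical alternative: no credit at all until an independent functionality
    # test is passed; only then does closeness to the target count for anything.
    return proximity_fitness(phrase) if is_functional(phrase) else 0

Which of these two reward shapes a simulation uses is exactly the point at issue in the comment above.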
It seems that long comments get held up in moderation but short ones go right through. What a pity! Is there some reason I was put into moderation? Clive? George L Farquhar
These are some comments I posted some time ago which did not appear at the time. Kairosfocus:
Weasel has no legitimate illustrative or didactic role and only serves to distract from the Hoylean challenge
Then why do you suppose Dembski and Marks have a section of their evoinfo website dedicated to it?
Plainly, for 150 years now, Darwinism has had no real answer to this question. And, it is the key question.
If the question is "how did life arise" then no, there is "no real answer". Nobody knows. Do you know, Kairosfocus? What insight does your position bring to the table?
Darwinism, once we move beyond minor micro-changes to already functioning organisms, is an empirical failure, a massive one; now increasingly sustained by the powers of an orthodoxy, complete with its own magisterium.
Is ID research being suppressed in the Caribbean too then?
In certain cases, the latching of the letters is practically all but certain. This is what on preponderance of evidence happened in 1986 in ch 3 of TBW, and in the NS run.
Is "practically certain" different to explicit latching then? That would be different to your original statement, right? Let's remind ourselves of what you said originally:
Weasel sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met.
And so we return to sampling bias. You said:
Indeed, there we can see that of 300+ positions that could change, 200+ show that letters, once correct stay that way, and none are seen to revert.
I ask you once more. How many letters total were in the run? It was not 300. It was not 200. How many letters were in the population as a whole? And of the 300+ letters shown, what section of the total population did they represent? Could there have been a sampling bias there?
Such a large sample provided by the man who in the same context exults in how progress is “cumulative,” is clearly representative.
How many letters are in the population as a whole? David Kellogg has noted that Wesley Elsberry has argued that the law of large numbers works against you. You have not responded to the mathematical argument on its merits. And I have also pointed out that the data are highly non-representative because they are the products of a large selection bias (the best sample from each generation provided). This was obvious from the text of TBW. Will you now respond to the mathematical argument on its merits? Or continue to claim a hollow victory? Will you address the bias issue or continue to simply ignore it and proclaim victory?
For the record, I have already long since pointed out that such o/p letter latching as samples published in 1986 show beyond reasonable doubt, can be implicitly achieved, not just explicitly so.
The maths says you have not. I find it interesting that you won't defend your position mathematically yet claim it's correct because of the maths. Very interesting.
So, to put up a de-tuned case as if it were the sort of co-tuned latching case we see in the 1986 runs is a strawman fallacy. One that has been used in recent weeks over and over again. I note: the 1987 o/p is materially different from the 1986 one, and we have two viable mechanisms to explain the difference.
As has been explained several times, the difference is down to the fact that in the video the entire population was shown. And yet you persist with your misrepresentation.
All of this has been pointed out, and the meaning of “implicit latching” has long since been explained across three threads and hundreds of posts now in recent weeks.
And this is the critical point. You say that "implicit latching" proves your point - yet the solution is a mathematical one and you refuse to address it. The probability of a letter changing once it is correct has been detailed several times, linked to several times. Yet you pretend that it simply does not exist? Why?
Indeed, the best explanation for our cosmos, in light of the factors and patterns we see, is that it is the product of design, powerful, elegant design too. And, it is COSMOLOGICAL design that points to an extra-cosmic, powerful and intelligent, artistically creative designer.
Then why is 99.999999999999999999999+% and more of the universe empty of life? That is the question you were asked. If the universe was designed for life, why is it basically empty of life? Your avoidance of that question has been noted.
but of course, one is free to shut one’s eyes to the obvious, and to dismiss the powerful testimony of the cosmos in which we live,
The obvious fact is that the only life in the universe that we are aware of is on this planet. If the cosmos was designed for life, where is it all?
Repeat: Weasel ducks the real challenge — getting TO shores of function, not hill climbing to more or less optimal function.
What's your answer to the origin of life, Kairosfocus?
Weasel begs the question of first having to get to shores of function before differential selection based on degree of performance can properly be taken into account.
What's your answer to the origin of life, Kairosfocus?
And, by his own admission, that was to get away from the inconvenient fact that even in a toy example, a realistic functionality threshold would have been beyond the probabilistic search resources in the computer.
What's your answer to the origin of life, Kairosfocus? George L Farquhar
Have the problems with comments not showing been resolved? A number of mine have not yet appeared! George L Farquhar
kf writes, "The imposed rule, that whichever member is closest to target will be champion — without reference to functionality — has the effect of latching, once the pop statistics are such that at least preservation of current position is very likely." Yes, I know that, and have agreed to that multiple times. Then, when I wrote, "Re: in this case the mutation routine never changes - every letter has a p% chance of mutating every time: it is the interaction with the rest of the system that produces the latching," you wrote,
The problem is, that being a member of the population in each generation is of secondary import.
But I would like to discuss this question that is of secondary import, given that we are in agreement about that which is of primary import. Just looking at the mutation part of the situation, and not at the rest at all (which we are in agreement about), do you see anything wrong in saying this:
In explicit latching, the mutation function knows about the target phrase because it only considers incorrect letters for mutation, but in implicit latching the mutation function itself does not know about the target phrase, and thus considers all letters, correct or incorrect, as subject to possible mutation.
Do you see anything wrong with the above? hazel
Hazel: Re: "in this case the mutation routine never changes - every letter has a p% chance of mutating every time: it is the interaction with the rest of the system that produces the latching."
The problem is that being a member of the population in each generation is of secondary import. The imposed rule, that whichever member is closest to target will be champion -- without reference to functionality -- has the effect of latching, once the population statistics are such that at least preservation of the current position is very likely. Also, on the sort of interpretation I deprecated earlier this AM, on seeing each letter as a function, the case can easily be viewed as optimisation in place, so latching on a hit for the letters is "reasonable" and "justified" . . . you got to the target by random variation and cumulative selection. [And, recall, the overwhelming import of the text in 1986, including the published runs, shows latching, for which the simplest explanation is ratcheting based on explicit latching. I have gone with implicit latching as the better explanation of the 1986 results on the report that Mr Dawkins says he did not explicitly latch.]
And, on the "tearaway run to target" in the run no. 1 case, Dawkins was clearly in the zone where multiple correct letters will crop up reasonably frequently and will be selected for by the imposed rule. (BTW, Jerry, it seems that this is a further data point on o/p latching -- i.e. he comes across as implying that the observed progress to target was cumulative and inexorable.) GEM of TKI kairosfocus
Many people, as has been discussed here and elsewhere, have gotten results similar to those in the BWM with a mutation rate of 5% and populations around 100 or 200. I don't believe there are any other necessary parameters. hazel
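For concreteness, a minimal Python sketch of the kind of run being described, using the 5% rate and a population of 100 mentioned above; the alphabet, seeding and stopping rule are assumptions, and this is not presented as a reconstruction of Dawkins' own code:

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"
P_MUT, POP = 0.05, 100   # assumed per-letter mutation rate and offspring per generation

def mutate(phrase):
    # No letter is ever frozen: each position has the same chance of changing.
    return "".join(random.choice(ALPHABET) if random.random() < P_MUT else c
                   for c in phrase)

def score(phrase):
    # Proximity-only scoring: count of positions matching the target.
    return sum(a == b for a, b in zip(phrase, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    children = [mutate(parent) for _ in range(POP)]
    parent = max(children, key=score)   # champion chosen on proximity alone
    if generation % 10 == 0 or parent == TARGET:
        print(generation, parent)       # sample the champions, e.g. every tenth generation

Whether a correct letter ever visibly reverts in the sampled champions under settings like these is exactly the behaviour being argued over in this thread.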
Yesterday, I found my copy of The Blind Watchmaker, so I decided to read the chapter about the Weasel simulation. In Dawkins's first example in The Blind Watchmaker, after 20 generations, 20 of the 28 characters were found. Anyone have any idea of what population size, mutation rate or other parameters were programmed in to get so quick a result? jerry
Ah, progress. Re kf @ 123 First, when I wrote, “I will agree with kairosfocus about how implicit latching works,” he wrote, “Thus, after many cycles, finis on that point. Spirals do progress.” Actually, I’ve always agreed about the cause of implicit latching - I don’t think that has been a point of contention for me. kf writes,
Up to the point of letters being correct, there is little difference between (a) explicit and (b) implicit cases. In the former, once a letter is correct, it is removed from the search list, as it has attained its target . . . i.e. partitioned, letter by letter targeted search. In the latter, an interaction -- note that point (in systems, interaction is a major source of emergent phenomena) -- between per-letter mutation rates, the per-generation sample of the population [total of which is 10^40 or so] and the choosing of champions on mere proximity without reference to functionality -- note Dawkins' "mutant nonsense phrases" -- causes lock-up of already successful letters. This, without an explicit per-letter rule.
Good summary - I agree with all this also. Now, let me say this slightly differently. In explicit latching, the mutation function - that which goes along and decides whether to mutate each letter in the phrase - is influenced by or has knowledge of the target string because once a letter is correct the mutation function is instructed to leave it as it is. That is, the mutation function is no longer totally random - the chances of a letter mutating are p% (whatever mutation rate is being used) if the letter is incorrect but 0% if it is correct. In implicit latching, the latching is an emergent property of the interaction among such things as the mutation rate, the population, and the fitness routine which just counts how many correct letters a phrase has in respect to the target and then decides which phrase becomes the new parent. Note that in this case the mutation routine never changes - every letter has a p% chance of mutating every time: it is the interaction with the rest of the system that produces the latching. Therefore, the following distinction can be made: In explicit latching, mutation is not entirely random in respect to fitness: As the phrase gets more fit, fewer letters can mutate. In the implicit case, mutation is random in respect to fitness: no matter how fit the phrase is, every letter has the same chance of mutating every single time. Of course, because of the implicit latching, correct letters which mutate very, very seldom survive because of what you describe in the quote above. But the mutation routine itself does not know that - it just mutates without regard to or knowledge of what the rest of the process is going to do. Given that I absolutely understand what you are saying about the causes of explicit and implicit latching, do you agree with the distinction I am making in the above two paragraphs? hazel
Onlookers (and current participants): Looking at onward exchanges, one would overlook that at what is currently 88 above, Dawkins' own words have laid out the matter clearly enough. Weasel is plainly highly "misleading" [the word is Mr Dawkins'] on the central issue, fails to address the origin of functional bio-information, uses a filter that simply addresses proximity to target without function [thus it actually is a form of intelligent design, as opposed to the BLIND watchmaker of the title of the 1986 book] and should be acknowledged as such and moved on from. However, there are some points that do need to be picked up and commented on. (I note to Jerry at 95 that I am writing for the record, showing the reductio ad absurdum plainly going on on the other side of this exchange.)
On points:
1] Hazel, 90: I will agree with kairosfocus about how implicit latching works.
Thus, after many cycles, finis on that point. Spirals do progress.
2] I am also not at all interested in the history or larger purpose (or non-purpose) of Dawkins's program.
Alas, from December on, that has been the key point: Weasel, as 88 shows, exists for a sadly rhetorical purpose, and works by in effect a subtle bait and switch that uses context to make its persuasive point, all the while, even more sadly, using -- here it comes -- weasel words to cover its tracks. (That concatenation of target phrase and rhetorical strategy chills me to the bones.)
3] in explicit latching, the mechanism is a rule about letters that is invoked in the mutation function, and in the implicit case, the mechanism involves members of the generation being selected, on probabilistic grounds, by the fitness function.
First, recall, latching is in the first instance an observed o/p behaviour. The remaining question is mechanism, for which two candidates are credible. Up to the point of letters being correct, there is little difference between (a) explicit and (b) implicit cases. In the former, once a letter is correct, it is removed from the search list, as it has attained its target . . . i.e. partitioned, letter by letter targeted search. In the latter, an interaction -- note that point (in systems, interaction is a major source of emergent phenomena) -- between per-letter mutation rates, the per-generation sample of the population [total of which is 10^40 or so] and the choosing of champions on mere proximity without reference to functionality -- note Dawkins' "mutant nonsense phrases" -- causes lock-up of already successful letters. This, without an explicit per-letter rule. In short, up to the insertion of a "fitness function," you are close to correct. (And of course, as the coupling just described is relaxed, first we see occasional reversions, then we see regular flickings back, and also the emergence of multiple-letter jumps in numbers of letters that are correct.)
4] DK, 92: please look at a few of the Weasel programs created by Zachriel, Patrick May, and others that claim not to use latching
The answer to that is simple. First, simply look at the o/p of these neo-Weasels out there, by contrast with that linked for the original circa 1986. Latching is in the first instance recognised from the program's behaviour, a la the 1986 o/p vs 1987. (Explicitness vs implicitness is a question of mechanisms to account for the observed o/p behaviour of the program as published in 1986 and discussed esp. in my Dec remarks.)
I will (given the way the issues have been raised since 1986) guarantee that in by far and away most cases presented by evolutionary materialists and Darwinists more generally since 1987 on, such programs -- absent cases where they allow you to set parameters in the co-tuned range -- will most emphatically NOT latch. For, that was the obvious red flag that highlighted what was wrong with Weasel from 1986. (The Monash case is the obvious exception, and it was duly "corrected" by Mr Elsberry.) But such neo-Weasels collectively are simply yet another bait and switch. The point I have always made is, as Mr Dawkins confessed in his weasel words on Weasel, circa 1986: Weasel is targeted search that rewards mere proximity, not functionality. As such it cannot be a reasonable illustration of the power of natural selection as a BLIND watchmaker. (For, NS is about differential PRESENT function.) And, the unanswered problem has always been to get to the islands of function per chance + necessity alone, not whether hill climbing once on such an island is more or less plausible. In short, the real question posed by the late, great Sir Fred Hoyle and many others has been repeatedly begged and distracted from for 23 years.
5] Hazel, 96: I am trying to have an honest discussion about limited, specific aspects of the topic. I am just trying to be clear about one simple point - one that has nothing to do with the history of Weasel or its application, or lack thereof, to evolution. Why should kairosfocus, you, or anyone else feel that there is something wrong or threatening about this?
Sadly, H, you are very much the exception. And, unfortunately, in answering you, I have to bear in mind the Anti Evo folks, there and here. So, like it or lump it [and I find it distasteful], I have to make it clear from immediate context that there is something fishy when they quote-mine. [To illustrate: Look at how they pounced on and trumpeted an ad hominem-laced dismissal in the J'can media that the Gleaner had to publish a corrective over; a dismissal that indulged in blood slander and served as enabling rhetoric for public lewdness. Then, when I blew the whistle on it, there was no serious accountability over that. And, that in a context of privacy violation. (BTW, once a real name is used these days, our friendly swindlers out there are perfectly capable of opening accounts etc. in your name and doing nasty things with that identity theft.)]
6] SG, 118: I have argued that intelligence is a hypothetical construct.
SG, your very first datum is that you are a conscious, choosing, thinking and acting, embodied agent, one who may sometimes make errors but is often right. Such as when you eat because you are hungry, choosing what you eat. Such as when you make sure no car is coming when you cross the road. If that is a mere invisible, unobservable construct, then all else falls apart, including the claimed observations of the external world and the associated much vaunted 3rd person, onlooker perspective. [Hint: observation is not to be equated to the 3rd person view, as we are capable of partly self-transcending reflection.] Is this not a plain case of self-referential incoherence?
7] if you feed programs in a Turing-complete language into the explanatory filter, Rice's theorem makes things messy . . . . E.T. can say absolutely nothing about the purpose or function of your program.
The linked reference of course continues to beg the same point, as though we do not personally and collectively instantiate semiotic agents who have a track record of showing behaviours and creating artifacts that manifest aspects with distinct, empirically reliable signs of intelligence. Take for instance: There is no way to infer that a text is a program, let alone a unique machine for which the text is a program. Sounds impressive, but for the fact that we routinely do just that all the time. (And BTW, this reflects a significant part of the issue over FSCI. FIRST, observe function -- not just strings or what have you of symbols or glyphs in the abstract, but symbols in action; then note that it is based on high-capacity storage of information, i.e. is specified by function, is complex and is informational. Per experience such FSCI is a reliable sign of intelligence, e.g. even your post, much less observed algorithmic function.) In short, sadly, self-referential incoherence yet again.
8] 121: I'm not saying that the string of characters [i.e. the methinks sentence] should function as a sentence. The environment pays off a particular character (trait) in each of 28 dimensions (positions). In essence, the simulated organism may function in as many as 28 different ways.
This of course boils down to partitioned search, and to an absurdly question-beggingly low and unjustifiable threshold of function and probability [1 of 27], one that Mr Dawkins explicitly rejects in his direct statement "nonsense phrases." In short, SG here attempts to justify precisely a letterwise partitioned search, which was the occasion of so much heated dismissal over the course of three threads to date in recent weeks. Sigh.
_______________
Plainly, Weasel fails to illustrate the BLIND watchmaker of the title of Mr Dawkins' 1986 book, and the attempts to justify Weasel and the various neo-Weasels end up inadvertently underscoring the fundamental problem of evolutionary materialism. Namely, it has no credible account for the origin of functionally specific complex biological information. And, as 88 above shows, that was plain from 1986 [providing one read the text with a suitably critical eye], 23 years ago. GEM of TKI kairosfocus
JT, 117 refers to 112. Having taught the theory of computation several times, and AI many times, I had to slip in a response. Good night. Sal Gal
JT,
In the Weasel program, if the goal were "Me thinks it is a _______" and the blank could be filled in with any English noun, that would seem to make reaching a target easier.
It is easier.
I realize that in the WW, you say you mutate the target at random, but I’m thinking that what would be functional in nature wouldn’t be random.
Recall from 63 that I'm not saying that the string of characters should function as a sentence. The environment pays off a particular character (trait) in each of 28 dimensions (positions). In essence, the simulated organism may function in as many as 28 different ways.
But presumably there would be multiple N-bit string that were functional, that is there would be some programmatic description of what is a functional target.
You've guessed another modification I had waiting in the wings. With multiple environmental targets and a larger population, it is possible for subpopulations to track multiple targets simultaneously.
And it occurs to me that an individual target would not have to be an “exceedingly remote island of functionality” as for example an individual target need not be compressible.
There could be linked positions in targets. In other words, two targets could be constrained always to match one another in certain positions. To relate this to biology, consider that there may be multiple environmental niches for a species, and while they may differ in some ways, they do not necessarily drift independently. Sal Gal
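A tiny sketch of the "linked positions" idea just described, assuming, purely hypothetically, two 28-character targets constrained to agree at a handful of chosen positions; nothing here is taken from any existing program:

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
LENGTH = 28
LINKED = {0, 7, 19}   # hypothetical positions at which the two targets must agree

def linked_targets():
    a = [random.choice(ALPHABET) for _ in range(LENGTH)]
    b = [random.choice(ALPHABET) for _ in range(LENGTH)]
    for i in LINKED:
        b[i] = a[i]   # constrain target B to match target A at the linked positions
    return "".join(a), "".join(b)

target_a, target_b = linked_targets()
print(target_a)
print(target_b)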
I've said before that to me, intelligence has to do with behavioral complexity, and also the degree of perceptual discrimination and acuity. So that if you have a program that does almost the same thing regardless of input (for example, possibly because the program can only "see" the first n bits of any input), then it is relatively unintelligent. Or if you have a program that sees a much larger percentage of its input, and furthermore the output it generates is highly variable, not trivially deducible from the input via any simple program for example, then that program is more intelligent. [also off-topic] JT
SG [118]: What remarks of mine are you referring to? (I think I would agree that "intelligence is a hypothetical construct".) JT
JT [off-topic], I have pointed out that if you feed programs in a Turing-complete language into the explanatory filter, Rice's theorem makes things messy. And I have argued that intelligence is a hypothetical construct. Sal Gal
"multiple N-bit string" = "multiple N-bit strings" JT
SG: I realize that in the WW, you say you mutate the target at random, but I'm thinking that what would be functional in nature wouldn't be random. But presumably there would be multiple N-bit string that were functional, that is there would be some programmatic description of what is a functional target. And it occurs to me that an individual target would not have to be an "exceedingly remote island of functionality" as for example an individual target need not be compressible. JT
OK, I can think of an answer: Just make the goal "Me thinks it is a", that is, presumably you can make the target as short or long as you want. JT
[113]:
JT:The simulation has only one target, a string of all 0’s (more about that in a minute).
SalGal:In both programs, it is irrelevant how the target is initialized. Any is as hard to locate as any other.
In the Weasel program, if the goal were "Me thinks it is a _______" and the blank could be filled in with any English noun, that would seem to make reaching a target easier. If MESA makes the weasel program more accurate by making it more difficult (by variable coupling for example), shouldn't it be more accurate wrt attributes that make it easier to reach a target as well? JT
JT, I hoped someone would bring up MESA. The reason I have harped on the Wandering Weasel is that it is similar in flavor to MESA. The fitness function in MESA is static, but noisy. The Wandering Weasel fitness function varies randomly in time. In MESA, there is the question of how coupling affects optimization speed. In the Wandering Weasel program, there is the question of how the "match" of mutation rate to the rate of environmental change affects tracking of the environment. There is also the question of the impact of self-adaptation of mutation rate on tracking of the environment. It took very little change to the Weasel program to obtain a system comparable in complexity and interest to MESA. I'll note that I believe that information gain in the Wandering Weasel is analytically tractable -- in the case of fixed mutation rate, anyway.
The simulation has only one target, a string of all 0’s (more about that in a minute).
In both programs, it is irrelevant how the target is initialized. Any is as hard to locate as any other.
I don’t see what purpose it serves to make fitness, or an aspect of fitness, random.
Actually, there is randomness in selection, and you can model that by adding random quantities to fitness and making selection deterministic. Sal Gal
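A rough Python sketch of the two variations described in this exchange, a drifting ("wandering") target and a noisy payoff; the drift rate and noise range are arbitrary illustrative values, and this is not Sal Gal's actual code:

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
target = list("METHINKS IT IS LIKE A WEASEL")   # assumed starting target

def drift_target(rate=0.01):
    # "Wandering" environment: each target position itself changes with a small probability.
    for i in range(len(target)):
        if random.random() < rate:
            target[i] = random.choice(ALPHABET)

def noisy_fitness(phrase, noise=1.0):
    # Static-but-noisy payoff: matches against the current target plus a random term,
    # so deterministic selection on this score behaves like stochastic selection.
    matches = sum(a == b for a, b in zip(phrase, target))
    return matches + random.uniform(-noise, noise)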
[111 cont.] Note: I have always considered the I.D. conception of intelligence a fundamental flaw, and that has nothing to do with playing the devil's advocate. I had a graduate-level course in the Theory of Computability and I realized the applicability of it to the evolution debate. It never occurred to me that it might be unreasonable in the slightest to view humans, the universe, cognition, and evolution strictly in a TM-computable framework. It would have seemed to me at the time to be a self-evidently valid approach to both sides of the debate. Well, the I.D. conception of intelligence is diametrically at odds with such a view. So I do sincerely believe there has been something fundamentally wrong in the I.D. conception from the outset. JT
Sal Gal: See the Scholarpedia article on ES’s. Thanks, I will do that. ------------------- To all - Admittedly, I have aligned with the adversaries of I.D. to a great extent in this forum. Actually, I started many, many years ago trying to prove evolution wrong, but if you spend enough time researching the enemy and get immersed in their terminology and way of understanding the world, you eventually make a transition and identify with them as well. (A remote variant of the Stockholm Syndrome, I guess.) And also, if someone else is getting all the glory for ostensibly proving evolution wrong, you might as well try to take them down a peg or two. I don't think that makes me a troll. If the name of the forum were "Evolution - the Only True Science," my sentiments might be different. The post I made in 109 is serious, for example, and I would assume and hope that someone in this forum is familiar enough with MESA to address it. JT
I just finished reading the overview of MESA written by Dr. Dembski. MESA was one of the simulations he asked us in the OP to consider, and no one else in this thread has discussed it yet. I have only read the overview and don't have more time to devote to it now, so if Dr. Dembski or anyone else familiar with MESA wants to counter my observations below, go ahead.

The simulation has only one target, a string of all 0's (more about that in a minute). The only types of modification possible in the simulation either make achieving the target more difficult or affect it randomly. In the latter case, there is a "Fitness Perturbation" option in the simulation. You can set a "Fitness Perturbation Range" k, but that just causes the fitness function for each individual to be affected randomly within that range. I don't see what purpose it serves to make fitness, or an aspect of fitness, random. So you have something that is fit according to some criterion, and you say, "Let's just randomly change its fitness level, so that some significant portion of its fitness is not tied to anything identifiable at all." What sort of conclusions could you draw from such a simulation? The other fitness option, "Binomial Fitness Perturbation," also revolves around a "binomial random variate." The other type of modification is coupling of variables, which always makes achieving the target more difficult.

So once again, the only modifications allowed in the simulation either make it more difficult to achieve the target or affect it randomly. If the simulation's intent is to demonstrate the untenability of evolution, then why do the only options available make it more difficult? Why wouldn't there be any options to make it easier (at least for testing purposes)? The one option I am thinking of is the possibility of multiple targets. Suppose the word "ostensibly" occurs by blind chance in some context, and someone observes that the probability of that happening is 26^-10. They have made a fundamental error, because what is noteworthy is not that specific string of characters, but rather that it is a word from the English language. So you would have to modify the odds to account for the number of words in English of that length (as we would be just as amazed at any word of that length occurring by chance). Of course, Dr. Dembski knows this and has undoubtedly addressed this topic more systematically elsewhere. So then why does the MESA simulation have only one target (all 0's)? Obviously, multiple targets would drastically increase the odds, and presumably someone running the simulation would want to consider that. It's what evo-theorists themselves repeatedly point out in this forum, for example: that there isn't only one potential target. So why is such an option left out of the MESA simulation? JT
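To put rough numbers on the multiple-targets point: with a 26-letter alphabet, one specific 10-letter string has probability 26^-10, while hitting any member of a larger set of acceptable 10-letter words improves the odds by the size of that set. A quick check (the 20,000-word figure is invented purely for illustration):

```python
# One fixed 10-letter target vs. any of an assumed 20,000 acceptable words.
one_word = 26.0 ** -10
any_word = 20_000 * one_word
print(f"one fixed word      : {one_word:.2e}")   # about 7.1e-15
print(f"any of 20,000 words : {any_word:.2e}")   # about 1.4e-10
```

Still astronomically small for a blind draw, but the principle is the one raised above: the more outcomes that count as hits, the better the odds, and, as noted, MESA offers no option for modeling that.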
JT, It is easy to arrange for self-adaptation of the mutation rate in the evolution strategy. See the Scholarpedia article on ES's. The upshot is that you, the programmer, can set the mutation rate to a large value, and it generally will go to a small value as the parent approaches the target sentence in the Weasel problem. Fine-tuning of the initial mutation rate is not required. In my Wandering Weasel variant, the mutation rate generally will go to a value that "matches" the mutation rate of the environment. In other words, the best setting of the mutation rate in reproduction depends upon how rapidly the environment changes. There is no need for the programmer to "smuggle in" information about the rate of environmental change to initialize the mutation rate for reproduction. The ES generally can adapt the mutation rate to the rate of environmental change. I'm not making this stuff up. As I've said before, there's a huge base of theory and practice for evolution strategies. Sal Gal
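A sketch of what that self-adaptation can look like on the static Weasel problem: each child carries its own per-letter mutation rate, obtained by nudging the parent's rate log-normally, and the rate is inherited along with the string. The constants, bounds, and comma-style selection below are illustrative choices, not anyone's published implementation:

```python
import math
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"
TAU = 0.3                      # learning rate for the log-normal rate update

def mutate(s, rate):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def self_adaptive_weasel(brood_size=50, start_rate=0.5, max_gens=5000):
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    rate = start_rate
    floor = 1.0 / len(TARGET)          # keep roughly one mutation per child
    for gen in range(1, max_gens + 1):
        brood = []
        for _ in range(brood_size):
            # the child's rate is the parent's rate times a log-normal factor
            child_rate = min(0.9, max(floor, rate * math.exp(TAU * random.gauss(0, 1))))
            brood.append((mutate(parent, child_rate), child_rate))
        # comma selection: the best child replaces the parent, rate and all
        parent, rate = max(brood, key=lambda pair: score(pair[0]))
        if gen % 25 == 0 or parent == TARGET:
            print(gen, score(parent), round(rate, 3))
        if parent == TARGET:
            break

if __name__ == "__main__":
    self_adaptive_weasel()
```

On a typical run the printed rate drifts downward toward its floor as the parent closes in on the target, with no hand-tuning of the starting rate required, which is the behaviour described above.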
Clive, I was asking about genetic entropy. Some people believe that ever since The Fall, evolutionary change has only destroyed information supplied by the Designer. A maxim from Coleridge's Biographia Literaria I try to keep in mind: "until you understand a writer's ignorance, presume yourself ignorant of his understanding." I know nothing about what you know and don't know. When I write of statistical information, I have stuff like entropy, conditional entropy, relative entropy, and mutual information in mind. See Wikipedia. Sorry, but I'm not flying by the seat of my pants when I talk about information, and much of what I say will not make sense if you do not know the formal definitions. Sal Gal
Off topic but I hope someone can help me out. I would like to know how electro-magnetic radiation is handled by the body; i.e. do we have a radiation firewall imbedded in our cells? Can anyone point the way here? Oramus
jerry, you write:
Keep on typing and we can expect a sonnet or two by the end of the millennium. Maybe quicker if you hire a couple monkeys.
As it happens I've published a small amount of poetry in literary journals. Only a bit of it (this for example) deliberately uses randomization as a technique. But it's worth noting that many writers and artists over the past century and longer have deliberately used randomness as a generative technique: James Joyce, John Cage (both in his poetry and his music), Louis Zukofsky, and Italo Calvino are just a few names that come to mind. My point is that in literature, chance can be valuable and can even enhance meaning (sometimes a "targeted" meaning, sometimes meanings outside the writer's control). David Kellogg
"Hey! A mutation that adds functionality! My mistake adds meaning and improves the sentence." Keep on typing and we can expect a sonnet or two by the end of the millennium. Maybe quicker if you hire a couple monkeys. jerry
JT,
How difficult would it be for nature to create something that was flat on the bottom and more curved on the top?
Interestingly, the evolution strategy was first used, back in the mid-1960's, for "evolving optimal shapes of minimal drag bodies in a wind tunnel." On a couple of occasions, I've heard one of the inventors of the ES, Hans-Paul Schwefel, say that he couldn't think of any other way to solve the problem. The Weasel program implements an ES. "Weasel program" is actually a misnomer. It would be more appropriate to say that Dawkins applied an ES to the Weasel problem. Focusing on this toy problem is absurd. ES's have been applied successfully to an enormous range of real-world problems. There is a sizable body of theoretical results on simple forms of the ES. Sal Gal
Clive, Thanks---it looks as if several others lost posts at around the same time, so the server move seems a likely cause. madsen
Sal Gal, What is my position on what? Do I think that organisms accumulate "statistical information"? If I had any idea what you mean by this, I might be able to answer. Are alleles statistics? What new environments are humans pushing into? Flats and condominiums? To be honest, I can't make much sense of most of what you wrote, because it isn't very clear on the particular points, and the particular points are not unpacked and explained before a new point is taken up. The map is not clear and we keep taking detours. Clive Hayden
madsen, I personally have not deleted any comments at all, except one from David Kellogg asking to be taken out of moderation. We switched UD to a new server, so that may explain these strange occurrences. Clive Hayden
jerry, One of my posts also vanished this morning. Hopefully some explanation will be forthcoming. madsen
madsen, My post went up at 7:47 this morning on a laptop as I was eating breakfast. I went upstairs about an hour later to our office, used a second computer, and saw that my post was gone. I went back to the laptop and there it was on the browser, just as I had left it. I saved the file and took a screen shot to make sure I wasn't dreaming. I then refreshed the browser and the comment that had supposedly been deleted was still there. I opened a new window and a new copy of the page and it was still there. I refreshed again and it was still there, but not on the other computer, which I now had side by side with the laptop. I closed the browser on the laptop and reopened everything, and the page was still there on the laptop but not on the desktop computer. This has never happened before; I frequently refresh the screen when I am logged on to see if something new has come up, and it always updates correctly. So while I understand the cache often has pages saved, when I refresh they always change to the updated version, but not this time. The odd thing was that the page kept reappearing with my deleted comment on it, as if it were trying to fool me into thinking it was still there. I then turned the computer off, and after rebooting, the page with my comment on it was gone, replaced by the one with other comments that was on the other computer. Very strange. But I do have the screen shot and the old saved file still on the laptop. jerry
It seems more posts have gone missing. Is this due to the spamming problem, or have they been deleted by a moderator? madsen
Hazel, By all means go right ahead. No one is asking anyone to stop posting. All I am suggesting is that many posts don't deserve a reply. But some people feel they must reply to everything, and I am saying that is often a waste of time. Others feel that if there is no reply to their comment, then they have somehow won the argument. And many of the posts are not meant to inform or to have a conversation, and those are the types of comments that should not be answered. jerry
to Jerry: I am not making up nonsense, and I am not trying to "thwart, deflect, [or] distort." I am trying to have an honest discussion about limited, specific aspects of the topic. I am just trying to be clear about one simple point - one that has nothing to do with the history of Weasel or its application, or lack thereof, to evolution. Why should kairosfocus, you, or anyone else feel that there is something wrong or threatening about this? And in particular, if you don't think the topic is interesting or worthwhile, why do you keep posting about it? hazel
This comment was deleted after it was posted at 7:47 as comment #89 this morning. I assume someone found something objectionable in it. Here is what I wrote earlier. Kairosfocus, you wrote “There is actually very little point in onward extension of the issues over minutiae of Weasel” There is nothing but truth in that. But you then wrote about 3000 words on the subject. You are being baited, and by responding to the bait you are feeding the Alice in Wonderland world this new group of critics is trying to create. They will make up nonsense, and you feel you must refute every bit of nonsense they write. For example, two have already written about artificial selection or breeding as part of this discussion. My suggestion is that you and the rest generally ignore them. As I said, they are not here to discuss or learn or have an honest debate. They are here to thwart, deflect, distort and then revel in their own absurdity. It is easy to determine when an honest discussion is going on and when it isn’t. I suggest politely saying goodbye when it becomes obvious that one is not being held. They will claim all sorts of victory with their last comments, and they will claim their objections to ID or anything else are not being answered, but attempting to answer each irrational and farcical thing they say only feeds them. Watch what they say to this comment that I am making here; if you or anyone else tries to answer them, they will silently know they have won. ---------------- It is interesting to wonder why it was deleted, but I will keep posting it till I am told why it shouldn't be posted. jerry
KF: Under reasonably accessible co-tuning of program mutation rates, population sizes and this filter, we will see that once a letter goes correct, from generation to generation, the champions will preserve the correct letters due to that co-tuning. In the explicit case, a per-letter distance metric specifically locks the letter once it hits the target. In the implicit case, the use of a mere proximity metric co-tuned to population size and mutation rates sets up that there are some zero-mutation members of the population, and so only members that have at least that many correct letters will advance. I may not be reading you correctly, but you seem to want to suggest the following: that what Dawkins has done is contrive something via "co-tuning" of various parameters (mutation rates, population sizes, etc.) to merely give the appearance of something ("latching") which can actually be accomplished via a much simpler method (design (?)). In essence, you're saying what Dawkins has done is a parlor trick, accomplished through an intricate formula. But as far as "co-tuning" goes, there is no such intricate balancing of parameters necessary in the Dawkins version. Both the explicit latching version and the Dawkins version employ a certain mutation rate. The mutation rate in Dawkins' case doesn't require some specific value; just any reasonable value will do. The only difference between the explicit version and the Dawkins version is that the latter actually has a population of animals, so that when a mutation occurs it's occurring to only one or a few individuals in that population. That's how it would happen in nature, right? In the explicit latching case, there's only one individual - it would be like saying any negative mutation in a population by necessity had to overtake the entire population, and you have to introduce this agent into the picture to say, "Nope, we can't have a mutation happen there - that's not allowed." (W/apologies if I've misunderstood what you're saying). In essence, on dozens of parameters of crucial importance to the cosmos as we see it, we live in a universe that is knife’s edge balanced to facilitate the existence of the kind of carbon-chemistry, aqueous-medium, cell-based life we observe and experience. Scale and constitution of the observed cosmos are directly connected to the existence of the sort of galaxy in which we live, and its having a habitable zone, such as our local spur between two major spiral arms and a bit over 1/2 way from the central core to the rim. In turn, it seems there are a few dozen more interesting finely tuned factors that have led to our having a habitable planet such as our own on which we live. Indeed, the best explanation for our cosmos, in light of the factors and patterns we see, is that it is the product of design, powerful, elegant design too. And, it is COSMOLOGICAL design that points to an extra-cosmic, powerful and intelligent, artistically creative designer. Is life's emergence and development traceable to preexisting and coexisting physical conditions in the universe? If it is, then we don't have to think of life being designed any more than a newborn baby is designed. I mean, I personally understand the sentiment for someone to look at a newborn baby and see the Hand of God at work - I would say it's a valid sentiment. But it doesn't explain from an operational standpoint why a baby has his mother's eyes or his father's nose or his uncle's baldness gene. (Actually I think the "Hand of God" would be the universe.)
The implication regarding the probability of getting the necessary parameters in the universe for life speaks for itself. But as I've said many times, "Design" or "Intelligent Design" as conceived in I.D., as a nondeterministic third force distinct from either chance or necessity, is incoherent and does not explain anything. And also, you talk about God being "creative" and "artistic," and to me this really lessens God. "Form follows function" has been an ethos in the Western World for two centuries. Beauty in architecture, for example, comes from matching to the environment. Analysis of an environment is an iterative observational activity, and admittedly also a simulational activity. It is also objective and impersonal. It is constrained by environmental factors. When I think of "artistic" people, I think of neurosis, self-indulgence, insularity, emotionalism, vanity, etc. JT
typo: I mean 'thick fog,' but 'think fog' has its own richness. Hey! A mutation that adds functionality! My mistake adds meaning and improves the sentence. David Kellogg
I think a different question may provide a way to cut through the think fog of kairosfocus's writing. Here it is: kairosfocus, please look at a few of the Weasel programs created by Zachriel, Patrick May, and others that claim not to use latching: do those programs work by what you call "implicit latching"? David Kellogg
Sal Gal [63]: The Weasel program makes better sense if you stop thinking of the simulated organisms as genotypes, but instead as phenotypes making 28 predictions of the environment. The count of matching characters in the fitness function is then the total payoff for correct predictions. Dawkins specified an environmental sequence of symbols and held it constant to provide a clear illustration of information accrual through selection. Just wanted to say I personally wasn't ignoring your post. The significance of what you're saying above just now fully hit me: If a sequence of characters representing an approximation of the weasel sentence is thought of as phenotypic characteristics of an organism (as opposed to a genetic recipe), it's easier to think of incremental improvements in the utility of such a sequence. So, I was thinking just now about one aspect of the environment that, say, birds are able to exploit: 3-D space. Land animals are only able to exploit essentially 2 dimensions. And as it happens there is this invisible substance called air occupying large amounts of 3-D space, and it's possible for this substance to be grabbed and climbed (like, say, a tree). And what is it that enables air to be climbed? Well, for humans at least, a very simple innovation - the airfoil: something that is completely flat on the bottom and slightly curved on the top. That is the only thing necessary to create lift, and its principle is simple: when air moves over a wing, the air pressure on the bottom of the wing is greater than on the top because of the slight difference in shape between the two, causing lift. It took the Wright Bros. forever to discover it, but it's an extremely simple device. This is what enabled flight (plus deflectable control surfaces on the wings and tail). How difficult would it be for nature to create something that was flat on the bottom and more curved on the top? What would incremental improvements in flying control do for an organism's ability to, say, catch insects, escape predators, or gain access to exploitable environments physically remote from potential competitors? It seems that what enables air to be climbed is a simpler device than what enables a tree to be climbed. (Not that any of this pertains specifically to what you were talking about.) JT
Oops again - I forgot to check the formatting before I hit submit: here kf's quote is correctly shown: At 84, when David Kellogg wrote to kairosfocus, “your comment helps me understand what you mean by ‘implicit latching.’ You mean ‘non-latching,’” kf again gave a long explanation about explicit and implicit latching. I will agree with kairosfocus about how implicit latching works. I am also not at all interested in the history or larger purpose (or non-purpose) of Dawkins' program. I just want some clarification of this one little point. To kairosfocus: In explicit latching, the letters latch because the mutation function includes a rule that says once a letter is fixed, it cannot mutate again. In implicit latching, the letters latch because the fitness function invariably does not select for phrases in which correct letters have mutated. In kf’s words,
In the explicit case, a per-letter distance metric specifically locks the letter once it hits the target. In the implicit case, the use of a mere proximity metric co-tuned to population size and mutation rates sets up that there are some zero-mutation members of the population, and so only members that have at least that many correct letters will advance. Since it is hard to get to a double that substitutes a new letter for an old that has reverted, on probabilistic grounds, the correct letters will overwhelmingly latch in such cases.
So we are in agreement, I think, that the mechanisms that cause the latching are different in the two cases: in explicit latching, the mechanism is a rule about letters that is invoked in the mutation function, and in the implicit case, the mechanism involves members of the generation being selected, on probabilistic grounds, by the fitness function. Is this correct, kf? I think it would be useful, and I certainly would appreciate it, if your response, if any, would address just the subject and not the many others that you often include in your replies. Thanks hazel
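For concreteness, the two mechanisms described above can be put side by side in a toy sketch (illustrative parameters; neither function is anyone's published program). In the first, the mutation step itself refuses to touch letters that are already correct; in the second, every letter is equally free to mutate and only proximity-based choice of the generation champion conserves what is correct:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate_free(s, rate):
    # every position may mutate, correct or not
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def mutate_latched(s, rate):
    # explicit latching: positions that already match the target never change
    return "".join(c if c == t else
                   (random.choice(ALPHABET) if random.random() < rate else c)
                   for c, t in zip(s, TARGET))

def run(mutator, pop_size=100, rate=0.04):
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    gens = 0
    while score(parent) < len(TARGET):
        gens += 1
        children = [mutator(parent, rate) for _ in range(pop_size)]
        parent = max(children, key=score)   # champion chosen by proximity only
        if gens > 100000:                   # safety cap for badly tuned parameters
            break
    return gens

if __name__ == "__main__":
    random.seed(1)
    print("explicitly latched:", run(mutate_latched), "generations")
    print("selection only    :", run(mutate_free), "generations")
```

With parameters in this neighbourhood both versions reach the target, the explicitly latched one sooner; push the per-letter rate much higher and the selection-only version begins to show reversions and slows down.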
SG: Here is what Mr Dawkins was trying to convey, circa 1986, courtesy of Wikipedia, in trying to justify what he did. I add my notes on points, and emphases: _________________ I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [--> that is, he KNEW that the issue is want of search resources to access complex functionality, which is Hoyle's challenge] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . [--> Biosystems often have DNA of storage capacity comparable to Shakespeare's corpus, i.e., he knew he was making a toy example pointing away from the challenge. The red herring has begun to drag away from the trail of truth.] We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. [--> And in the real world, what is a credible incidence of mutations, and what fraction of these are credibly beneficial? --> What fraction give rise to novel body plans? With what empirical basis? --> And, that starts with the first body plan, including the DNA - RNA - ribosome enzyme programmable, algorithmic information processing system in the cell] The computer examines the mutant nonsense phrases, [--> the issue of getting to shores of functionality has just been begged, without even a pause to note what that shift in focus does to the relevance of Weasel to OOL and the origin of body plans --> Namely, it means Weasel is now of zero relevance to the issue Hoyle et al raised: getting TO complex function based on information-rich molecules] the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . [--> targeted search rewarding mere proximity without any credible threshold of function --> Ideas of fitness functions are therefore irrelevant, and equivocate off proximity to target vs the sort of algorithmic functionality DNA etc [including of course epigenetic structures . . DNA underestimates the info required . . . ] drives for first life and major body plans --> targeted and with programmed choice, so foresighted] The exact time taken by the computer to reach the target doesn't matter. [--> oh yes it does, as the realistic threshold would credibly never get done in any reasonable time, much less a lunch time] If you want to know, it completed the whole exercise for me, the first time, while I was out to lunch. It took about half an hour. (Computer enthusiasts may think this unduly slow. The reason is that the program was written in BASIC, a sort of computer baby-talk. When I rewrote it in Pascal, it took 11 seconds.) Computers are a bit faster at this kind of thing than monkeys, but the difference really isn't significant. [--> Distractive] What matters is the difference between the time taken by cumulative selection, [--> thus, ratcheting and latching, as observed in the 1986 o/p . . . 
and decidedly not in the 1987 o/p --> cumulative, programmed selection that ratchets its way to a target, rewarding the slightest improvement in proximity of nonsense phrases, without regard to realistic thresholds of function . . . ] and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: [--> Strawmanised form of the key objection: Mr Dawkins is ducking the issue of getting to shorelines of functionality] about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . [--> he KNOWS -- or, should know [which is worse] -- that a realistic threshold of functionality is combinatorially so explosive that the search is not reasonable --> but good old Will with feather pen in hand probably tossed it off in a couple of minutes by intelligent design --> So he is pointing away from the most empirically credible explanation of FSCI] Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. [--> if you know from the outset that an exercise in public education is misleading in important ways, why do you still insist on using it? --> Other than that it is the intent to make plausible on the rhetoric what would on the merits be implausible?] One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, [--> He knows -- from the outset -- that promotion to generation champion based on proximity without reasonable criteria of functionality is misleading in important ways] the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. [--> he knows that this artificially selected, targeted search without reference to functionality is irrelevant to the issues over the origins of information-rich systems in life] Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, [--> That is, he knows that he has used artificial selection off proximity to a desired future state, not natural selection based on differential functionality, begging the question of origin of function --> thus, the underlying question is being ducked and begged] although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [--> that is, he knows right from the get-go that he has begged the question big-time, but he obviously thought his rhetoric would work. --> From abundant evidence, that is all too well -- albeit cynically [I doubt that "weasel" is an accident; this paragraph being an exercise in weasel words] -- judged.] ___________________________ In short, Weasel is an exercise in manipulative rhetoric, not education, and certainly not science. GEM of TKI kairosfocus
PS: I hereby second (with slight amendments) Jerry's nomination of Weasel [and of the various neo-Weasels] for the growing list of display items for the walls of the hall of infamy in the gallery of misleading icons of evolutionary materialism:
I think the whole discussion should be abandoned and the Weasel program put in a black hole [the list of misleading icons of evolutionary materialism that too often appear in textbooks, the popular media, the Internet and the blogosphere] where it rightly belongs, only to be resurrected to show why it is useless and not to be emulated. I have said all this discussion is folly because the program is nonsense.
[Moderators, do you think the time has come to host such a virtual hall of infamy here at UD?] kairosfocus
Clive, Please read 45, 47, 63, and 83. IDers are so focused on parsing the Weasel program that they have forgotten, if they ever knew, what Dawkins was trying to convey. The specification of the target is irrelevant to the point he illustrates with the program. That the target is constant is also irrelevant. I have described trivial modifications to the program that make this obvious. In other words, I have removed what IDers usually object to. The program still accumulates statistical information as it runs, but the only way to see this is to run the program many times. The Weasel program, under interpretation in line with Darwinian evolutionary theory, essentially says that organisms express themselves in 28 dimensions, and that the environment "pays off" exactly one of 27 possible traits in each of those dimensions. In the Wandering Weasel program, I initialize the Markovian environment uniformly at random because most IDers feel that there is something remarkable in setting it to "Methinks it is like a weasel." But the initialization is actually irrelevant. Set the environment to the "weasel" line, and you will see, as the program runs, the sentence wander over the space of length-28 sequences of letters and spaces. There is a weasel, it does wander, and the evolution strategy does track it.
And do we have a better way of “getting across” something that shouldn’t be “gotten across”?
Are you saying that populations of organisms do not accrue statistical information? There is strong evidence that beneficial alleles have entered the human genome rapidly since humans began pushing into a wide range of new environments. One of the authors of Mendel's Accountant, John C. Sanford, says that genetic entropy does not permit this. What's your position? Sal Gal
4] My thinking is, some event is designed by whatever causes precipitated its occurrence. That volcanic eruption for example was designed and characterized by whatever physical forces precipitated it and constrained it. Orwellian, manipulative, language-corrupting, destructive newspeak. Here, the term "design" -- which is well established as a term and is abundantly exemplified in light of the fact that we live in a technological civilisation -- is being wrenched to try to claim that it means just what it does not mean. Am H Dict:
de·sign (dĭ-zīn′) v. de·signed, de·sign·ing, de·signs
v.tr.
1. a. To conceive or fashion in the mind; invent: design a good excuse for not attending the conference. b. To formulate a plan for; devise: designed a marketing strategy for the new product.
2. To plan out in systematic, usually graphic form: design a building; design a computer program.
3. To create or contrive for a particular purpose or effect: a game designed to appeal to all ages.
4. To have as a goal or purpose; intend.
5. To create or execute in an artistic or highly skilled manner.
v.intr.
1. To make or execute plans.
2. To have a goal or purpose in mind.
3. To create designs.
5] Even in the case of human design, there is a context of culture, of necessity, of existing technology that must be considered to fully account for the emergence of new technology. Basic logic, 101: P => Q means that P is sufficient for Q, and Q is necessary for P. That is, NOT[P AND NOT-Q]. However, this is NOT at all the same as saying that P is equivalent to Q. That is, Q does not determine P; it only constrains it. There are constraints that may well influence how a design is carried through, but they do not determine the design. For instance, the rules of spelling and grammar constrain a good English sentence, but they do not determine it. The "rules" of electronics [based on device physics and circuit and network theory] constrain designs, but that does not determine the architecture of a microprocessor, e.g. a Pentium or an Athlon or a good old 6800 or 8080 or even a 1971-era 4004. Such a gross error as just captioned, sadly, shows how the evolutionary materialistic, chance + necessity view simply cannot handle the most patent fact of all: we are conscious, reasoning, deciding, acting creatures. It reduces to absurdity, over and over again, as a direct result. JT, when your reductio has reached absurdum, not once but over and over again, it is high time to rethink your key first plausibles. 6] Is the remaining 99.99999999999999999999999999999% of the universe essentially garbage? Why does it exist? First, have you ever seriously stargazed outside the zone of city lights? (If so, the notion of the beautiful, subtle, intricate cosmos in which we live being possibly garbage should never cross your mind.) Have you acquainted yourself with the science of cosmological origins, and the associated inference to design in light of fine-tuning per the most credible scientific model of cosmological origins? In essence, on dozens of parameters of crucial importance to the cosmos as we see it, we live in a universe that is knife's edge balanced to facilitate the existence of the kind of carbon-chemistry, aqueous-medium, cell-based life we observe and experience. Scale and constitution of the observed cosmos are directly connected to the existence of the sort of galaxy in which we live, and its having a habitable zone, such as our local spur between two major spiral arms and a bit over 1/2 way from the central core to the rim. In turn, it seems there are a few dozen more interesting finely tuned factors that have led to our having a habitable planet such as our own on which we live. And, it is also interesting that such a planet in such a solar system in such a galactic habitable zone is also by those same factors set up for inviting investigation of the glories and intricacies of the universe. Indeed, the best explanation for our cosmos, in light of the factors and patterns we see, is that it is the product of design, powerful, elegant design too. And, it is COSMOLOGICAL design that points to an extra-cosmic, powerful and intelligent, artistically creative designer. But of course, one is free to shut one's eyes to the obvious, and to dismiss the powerful testimony of the cosmos in which we live, making up stories to make such eye-shutting seem reasonable. S/he, however, is not thereafter free to escape the absurd consequences that flow from that -- including the notion that the glorious cosmos we can so easily gaze upon in wonder could be discussed in the same context as "garbage." 7] If recognizable sequences from Hamlet are preserved, probabilistic resources don't enter into it. 
The issue is of course to get TO those sequences, without smuggling in active information. On that, issues of probabilistic resources constraining realistic search are very relevant, as Mr Dawkins himself admitted in 1986. And as was explicitly noted. Repeat: Weasel ducks the real challenge -- getting TO shores of function, not hill climbing to more or less optimal function. 8] Hazel, 76: mutation happens independent of selection for fitness. By Mr Dawkins' own admission, onlookers, Weasel selected "nonsense phrases" -- i.e., non-functional ones -- for PROXIMITY to "target," not for fitness. Weasel begs the question of first having to get to shores of function before differential selection based on degree of performance can properly be taken into account. And, by his own admission, that was to get away from the inconvenient fact that even in a toy example, a realistic functionality threshold would have been beyond the probabilistic search resources in the computer. And dismissive references to "propaganda" and attempts to put up "neo-Weasel Mark 156, 157, . ." do not change that basic, sadly abject failure on the merits. GEM of TKI kairosfocus
Onlookers: There is actually very little point in onward extension of the issues over minutiae of Weasel and/or the latest neo-Weasel, the real matters on the merits having long since been settled. Namely, out of Mr Dawkins' own mouth, Weasel is utterly unconnected to the real issues and begs the question of origin of complex, functional bio-information. Weasel has no legitimate illustrative or didactic role and only serves to distract from the Hoylean challenge to get TO shores of bio-function on molecular noise in pre-biotic environments, and/or to get to major body plans that require huge increments of functionally integrated information. Plainly, for 150 years now, Darwinism has had no real answer to this question. And, it is the key question. Darwinism, once we move beyond minor micro-changes to already functioning organisms, is an empirical failure, a massive one; now increasingly sustained by the powers of an orthodoxy, complete with its own magisterium. However, a few points will be remarked on, for the further record, no. 72 being particularly revealing: 1] Madsen, 66: In the weasel program, assuming “implicit latching”, do you believe that a correct letter has a different probability of mutating than an incorrect letter? Observe -- as I have already explicitly stated (and summarised above) -- Madsen, that the Weasel, from 1986, explicitly rewards increments to the target, without reference to function. Under reasonably accessible co-tuning of program mutation rates, population sizes and this filter, we will see that once a letter goes correct, from generation to generation, the champions will preserve the correct letters due to that co-tuning. So, under that co-tuning, per population member in a generation, letters may see the same odds of mutation, i.e., your question is misdirected. For, because of the non-functionality proximity filter acting with the rates and pop size, once a letter goes correct, it will pass down in the champions' line of succession with much higher probability than letters that are not correct. In certain cases, the latching of the letters is all but certain. This is what on preponderance of evidence happened in 1986 in ch 3 of TBW, and in the NS run. Indeed, there we can see that of 300+ positions that could change, 200+ show that letters, once correct, stay that way, and none are seen to revert. Such a large sample, provided by the man who in the same context exults in how progress is "cumulative," is clearly representative. 2] DK, 69: your comment helps me understand what you mean by “implicit latching.” You mean “non-latching.” Stubborn insistence on corrected error, in the teeth of explanation of the correction. Latching is an o/p observation based on the actual runs, Mr Kellogg. The issue is mechanism to get there. For the record, I have already long since pointed out that such o/p letter latching as the samples published in 1986 show beyond reasonable doubt can be implicitly achieved, not just explicitly so. In the explicit case, a per-letter distance metric specifically locks the letter once it hits the target. In the implicit case, the use of a mere proximity metric co-tuned to population size and mutation rates sets up that there are some zero-mutation members of the population, and so only members that have at least that many correct letters will advance. Since it is hard to get to a double that substitutes a new letter for an old that has reverted, on probabilistic grounds, the correct letters will overwhelmingly latch in such cases. 
Under other cases, as the pop size and mutation rates make multiple letter mutations more and more likely, letter substitutions and multiple new correct-letter champions will emerge and begin to dominate the runs. This, because the filter that selects the champions rewards mere proximity without reference to function, as Mr Dawkins so explicitly stated. So, to put up a de-tuned case as if it were the sort of co-tuned latching case as we see in the 1986 runs, is a strawman fallacy. One that has been used in recent weeks over and over again. I note: the 1987 o/p is materially different from the 1986 one, and we have two viable mechanisms to explain the difference. On preponderance of evidence -- specifically, given Mr Dawkins' reported claim that he did not explicitly latch the 1986 runs as published -- implicit latching is the best explanation for the 1986 patterns. Likewise, de-tuning leading to letter reversions is the best explanation for the 1987 run as videotaped. All of this has been pointed out, and the meaning of "implicit latching" has long since been explained across three threads and hundreds of posts now in recent weeks. There are utterly no grounds -- and there is no excuse -- for this latest objection. 3] JT, 72: there would be some complete set of deterministic factors that came together at some point in time to cause that volcano to erupt. That set of factors would equate to a program causing volcanic eruption. Some aspects of that program came into existence by chance, presumably, I'm not denying that. I of course live with an erupting volcano as a near neighbour. I can assure you, regrettably, there is no observed algorithm -- coded in any observed computer language, stored in any observed storage medium, and run on any observed hardware -- that can be hacked to shut it down. The volcano is a dynamical situation in the real world, not a computer simulation. The two are utterly distinct and should not be confused or equated. Moreover, the essence of a program is a linguistically coded algorithm, which implements a process through purposefully decided, step-by-step sequences of actions and choices, usually involving loops of iterations and data structures, instantiated on hardware. Programs are not mechanical forces acting on materials and structures in nature. They are artifacts of design, exhibiting both functionally specific complex information and irreducible complexity. Dynamical situations in nature -- and a volcano is a nonlinear, complex, sensitively dependent physical entity -- exhibit mechanical forces and chance circumstances, not signs of design (directed contingency). [ . . . ] kairosfocus
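The co-tuning claim is easy to probe numerically: fix the population size, vary the per-letter mutation rate, and count how often a new generation champion loses a letter that the previous champion had correct. A rough sketch, with illustrative parameters (exact counts will vary from run to run):

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def champion_reversions(pop_size=50, rate=0.04, max_gens=2000):
    """Run one proximity-selected search; count champion-line reversions."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    reversions = 0
    for _ in range(max_gens):
        champ = max((mutate(parent, rate) for _ in range(pop_size)), key=score)
        # a reversion: correct in the old champion, wrong in the new one
        reversions += sum(p == t and c != t
                          for p, c, t in zip(parent, champ, TARGET))
        parent = champ
        if parent == TARGET:
            break
    return reversions

if __name__ == "__main__":
    for rate in (0.02, 0.05, 0.2, 0.5):
        print(f"per-letter rate {rate}: {champion_reversions(rate=rate)} reversions")
```

At low rates the champion line shows essentially no reversions (the quasi-latched regime); as the rate climbs, reversions and substitutions become routine and the run may never finish inside the generation cap.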
jerry,
There is no accumulation of information through selection that I have ever seen so how could I present a better way to an example I am not aware of.
Wow. Even Dembski allows that chance-and-necessity can give rise to 400 or fewer bits of complex specified information. I am utterly baffled by your response. A cattle breeder selects individuals to mate, but does not control the variation in calves. Suppose she does muscle biopsies and selects the leaner individuals in her herd for mating. There is much more information in the herd than the outcomes of the biopsies, but the breeder ignores it. The herd becomes leaner over time. The breeder knows only the outcome -- not the physiological changes that yield the outcome. The "how to be lean" information that enters the herd as a consequence of selection does not come from the "intelligence" of the breeder. That statistical information can be gained through iteration of reproduction-with-variation and selection is certain. Illustrating how this kind of information gain through trial and error differs from obtaining the combination of a lock through trial and error is worthwhile. Sal Gal
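The breeder scenario is easy to simulate: give each animal a partly heritable 'leanness' score, breed only from the leaner half, and the herd mean moves generation by generation even though the selector never sees the physiology behind the score. A minimal sketch, with every number invented for illustration:

```python
import random

def offspring(parents, herd_size, env_sd=1.0):
    # each calf gets the average of two random parents' values plus noise
    return [(sum(random.sample(parents, 2)) / 2) + random.gauss(0, env_sd)
            for _ in range(herd_size)]

def breed_for_leanness(generations=20, herd_size=200, keep_fraction=0.5):
    herd = [random.gauss(0, 1) for _ in range(herd_size)]    # leanness scores
    for g in range(generations):
        herd.sort(reverse=True)                              # leanest first
        parents = herd[: int(herd_size * keep_fraction)]     # truncation selection
        herd = offspring(parents, herd_size)
        print(g, round(sum(herd) / len(herd), 2))            # herd mean trends upward

if __name__ == "__main__":
    random.seed(0)
    breed_for_leanness()
```

The printed herd mean trends steadily upward; nothing about 'how to be lean' was specified anywhere, only which animals got to breed.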
Sal Gal, "No one here has evinced a willingness to take on the Wandering Weasel. My best guess is that you don’t understand it. And if you don’t understand the consequences of such a small modification to the Weasel program, why would you suggest that Dawkins should have given a different illustration? Do you have a better way of getting across the idea of accumulation of information through selection? If so, then out with it." Would you please update me about what exactly the "Wandering Weasel" is? At first it was "Methinks it is like a weasel." Are we sure that it is a weasel now? and that it is wandering around? And do we have a better way of "getting across" something that shouldn't be "gotten across"? Why would we attempt a positive correction of a flawed analogy explaining alchemy? Not only is the analogy flawed, the thing it is supposed to analogize is flawed. It's one misconception heaped on another. I'm sure that there will be much less comprehensibility the more modifications that are introduced into something that in it's simplest form is incomprehensible. Clive Hayden
"Do you have a better way of getting across the idea of accumulation of information through selection? If so, then out with it." There is no accumulation of information through selection that I have ever seen so how could I present a better way to an example I am not aware of. Maybe you should present an/the instance. We would all be interested. It would be a first, since I am not aware of even Dawkins ever doing so. jerry
jerry,
The fact that people persist on this says more about their purpose here than anything else.
Yes, I totally agree. Given that, who has persisted for years in constructing complicated "refutations" (based on misunderstanding) of a simple illustrative program? And what would that say about their purpose? David Kellogg
jerry,
Now that we have established that, it can be put in the black hole that I recommended and forgotten about.
I will not be forgetting anytime soon the IDiocy of attacking for years a pop-sci illustration -- no, a misunderstanding of the illustration -- of an aspect of the theory of evolution. Dembski and Marks evidently have committed to publishing a peer-reviewed paper that falsely attributes "partitioned search" to Dawkins, and the Weasel program will go into a black hole no sooner than their paper does. Various parties explained their misunderstanding to them long ago. And Bob Marks' inclusion of the Weasel Ware propaganda at his Evolutionary Informatics Lab website makes the whole affair surreal. It is absurd to suggest that Dawkins was out to prove anything with the Weasel program. He was, as an authority on evolutionary theory, illustrating for a mass readership a belief of the scientific mainstream. Dembski's rhetorical strategy has been to attack the illustration and leave it to naive readers to conclude that he has refuted scientific theory. There is absolutely nothing wrong with giving a simple illustration of a belief that in fact has very strong support. I have shown that it is trivial to turn the Weasel program into a much stronger illustration. But the Wandering Weasel is also more difficult to understand. (And I have in mind three more modifications that will make the program more realistic, each at a cost of decreased comprehensibility.) No one here has evinced a willingness to take on the Wandering Weasel. My best guess is that you don't understand it. And if you don't understand the consequences of such a small modification to the Weasel program, why would you suggest that Dawkins should have given a different illustration? Do you have a better way of getting across the idea of accumulation of information through selection? If so, then out with it. Sal Gal
Good grief, jerry -- denying what's obvious to everybody is only going to make things worse. skeech
"But it doesn’t model real evolution well, which Dawkins himself pointed out." Now that we have established that, it can be put in the black hole that I recommended and forgotten about. "In comment #69, you reversed yourself " Whoa, when desperate, claim the other person fails to answer an inane question or contradicts himself. Your question has been answered in what I have said and I have not contradicted myself. A contradiction only exists in your mind and is probably due to your failure to read things carefully. Latching is closer to reality than the non latching scenario set up in the Weasel program. An even closer to a reality scenario is one that would only rarely eliminate the latching and I mean rare but in terms of the simulation it would probably only extend the simulation a few steps. The parameters of the program is nowhere close to reality so trying to salvage it by suggesting which of latching, almost latching and no latching is best, really misses the point. Instead of a beauty contest, we have the Weasel Ugly contest. Which of the very, very, very ugly incompetent inappropriate programs is the least ugly. If you want to fight over this, be my guest. Give it a rest and move on to something of substance. The fact that people persist on this says more about their purpose here than anything else. Many of the comments people make really don't deserve an answer. If they were sincere, the questions and approach would be quite different. jerry
Good job, Skeech. A key idea is that mutation happens independent of selection for fitness. First the phrase is subject to mutation according to rules that know nothing of the criteria for selection, and then, once that is done, the fitness function determines whether that phrase, as part of a generation of phrases, actually survives. hazel
jerry, I notice that you avoided answering madsen's question, which was:
But now I’m confused as to what your complaint with the weasel algorithm actually is. You seem to be saying that the algorithm should be modified to include a bias in favor of preserving correct letters. However, in all the discussion that has gone on here, I can’t remember anyone citing a single case of letter reversion, while using what are considered “realistic” parameters. In fact, the apparent lack of reversion was what got these threads started in the first place. Why would the program need additional tweaks in order to prevent something which happens so rarely anyway?
I can understand your desire to change the subject, since you have in fact contradicted yourself. In comment #43, you wrote:
Explicit latching as it is defined here more closely resembles reality.
In comment #69, you reversed yourself and concluded that the bias for preserving correct letters belonged in the selection step, not the mutation step. That is exactly what we've been trying to get you to see for half of this thread. I'm glad you finally understand. skeech
jerry,
I have said all this discussion is folly because the program is nonsense. Do you really think the Weasel program has any value?
Well, I think it does have some value as an extremely simple demonstration of the power of mutation and selection. But it doesn't model real evolution well, which Dawkins himself pointed out. madsen
"But now I’m confused as to what your complaint with the weasel algorithm actually is. You seem to be saying that the algorithm should be modified to include a bias in favor of preserving correct letters. " No, I think the whole discussion should be abandoned and the Weasel program put in a black hole where it rightly belongs only to be resurrected to show why it is useless and not to emulated. I have said all this discussion is folly because the program is nonsense. Do you really think the Weasel program has any value? Have you run the two Monash version of Dawkins programs, one of which I was told is a good replication of the original? The other is a latched version. I find the persistence of this silliness the most interesting thing about this discussion. Now that I understand that the Weasel program is nonsense, we can point to this discussion to shorten any further discussions in the future. That is the whole value of this thread. A way to short circuit further inane discussions down the road. jerry
KF [64,65]:
JT: 9] any set of physical contingencies that result in a certain outcome (plus any associated natural laws) are also a program to generate whatever outcome it is they generate.
Nope. There are [a] undirected, stochastic contingencies, and [b] directed contingencies, some of which may be [c] set up in programmed, algorithmic systems. Programs that have to work on dynamical entities will [d] use the forces and materials of nature to bend them into structures fitted to the intent of the designer. Simple, easily observed empirical facts.
My point was as follows: Take a volcano for example - there would be physical conditions that came into existence that triggered its eruption. Some of those conditions would be relatively permanent, for example the size, shape and location of the volcanic mountain in question. Other factors that eventually precipitated the volcanic eruption would possibly be chance events of some sort or another. But there would be some complete set of deterministic factors that came together at some point in time to cause that volcano to erupt. That set of factors would equate to a program causing volcanic eruption. Some aspects of that program came into existence by chance, presumably, I'm not denying that.
(And, BTW, what observed or predicted empirical evidence leads to the conclusion that life could be so written into the laws of nature, as an explanation claimed to be superior to the obvious: design by designers.)
My thinking is, some event is designed by whatever causes precipitated its occurrence. That volcanic eruption for example was designed and characterized by whatever physical forces precipitated it and constrained it. What sort of lava flow was it - that would be designed by the necessary precipitating physical causes, not an "Intelligent Agent" for example. Even in the case of human design, there is a context of culture, of necessity, of existing technology that must be considered to fully account for the emergence of new technology. "Necessity is the mother of invention." (not "Intelligent Agency".) Why would this gargantuan universe exist if it was not integral to Man's creation? From a Biblical perspective, aren't we sort of the end point? Is the remaining 99.99999999999999999999999999999% of the universe essentially garbage? Why does it exist?
12] the monkey will eventually hit some 10-word sequence from Hamlet.
Just as, by chance he will eventually reach Shakespeare’s full corpus. The issue is — again — probabilistic resources to get to functionality
But if recognizable sequences from Hamlet are preserved, probabilistic resources don't enter into it. JT
jerry, At least we agree that bias should not be placed in the mutation step. But now I'm confused as to what your complaint with the weasel algorithm actually is. You seem to be saying that the algorithm should be modified to include a bias in favor of preserving correct letters. However, in all the discussion that has gone on here, I can't remember anyone citing a single case of letter reversion, while using what are considered "realistic" parameters. In fact, the apparent lack of reversion was what got these threads started in the first place. Why would the program need additional tweaks in order to prevent something which happens so rarely anyway? madsen
"In other words, in which step of the algorithm should the bias of preserving correct letters be placed—in the mutation step, or the selection step? Probably in the selection step and by the way the selection is nonsense so to try and shore up this part is contributing more to the ludicrousness of this program. So it is should be highly unlikely that an offspring without all the correct letters should be eligible for selection. That is why latching makes more sense but almost latching could be argued as even more sense so that only rare offspring that do not have all the letters would be available to reproduce. But the real world is quite different. Evolution seems to show stasis and that is what latching is about and it is due to the conservative aspects of natural selection. Over time there seems to be something added to species and that is what the theory of punctuated equilibrium is all about. Part of the genome lays fallow till a new capability is available through mutation. Few but the real die hards ascribe to the traditional Darwinian process of small changes over time to functional elements. So latching is much closer to reality than non latching. Dawkins is one of those die hards so this program as well as Dawkins should be thrown under the bus and everyone should move on to the new paradigm. Which by the way is also just as unproven as the Darwinian one of small functional changes. The Gould paradigm is of small non functional changes suddenly leading to new capabilities. So if there is any teaching value in the Weasel program it is that "The King Is Dead"; "Long Live the King." But unfortunately the new king is also as weak as Henry's sons. Not quite still born but not strong to last till maturity. jerry
kairosfocus [64], your comment helps me understand what you mean by "implicit latching." You mean "non-latching." Let me explain. You write:
In the implicit latching case, a co-tuned blend of population size [vs probabilities of multiple mutations], random changes on population members, preservation of the original champion in a significant fraction of the generation and rewarding of mere proximity to target causes latching that is implicit but very real. This is because it is practically — not theoretically — impossible for a double mutation that simultaneously gives a good new letter and reverts an old correct letter to win as champion in the current generation.
The added emphasis shows that you know that letters are not latched but subject to random mutation. What do you mean by "implicit latching," then? I believe you mean that selection will tend to conserve correct letters. Well, okay. So what? Nobody suggested otherwise. Mutation is still random on the letters. As for your claim that selection for a letter reversion is theoretically possible but will not happen practically, this is simply not true. I've run several non-latching Weasel programs and found consistent if rare examples. I even posted an entire run in the other thread and pointed out the reversion. So you're wrong there. In short, by your own words, "implicit latching" simply means that beneficial mutations tend to be conserved when selection happens. And that's always been what Dawkins has claimed. David David Kellogg
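For readers who want to check this behaviour for themselves, here is a minimal non-latching Weasel sketch in Python, in the spirit of the runs David Kellogg describes above. The population size and per-letter mutation rate are illustrative assumptions, not settings taken from Dawkins; the counter tallies letters that were correct in the parent but wrong in the winning offspring, i.e. the rare reversions at issue.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100      # assumed number of offspring per generation
MUT_RATE = 0.04     # assumed per-letter mutation probability

def score(s):
    # count letters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent):
    # every letter, correct or not, is free to mutate
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation, reversions = 0, 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(POP_SIZE)]
    best = max(offspring, key=score)
    # letters correct in the parent but wrong in the new champion
    reversions += sum(p == t and b != t
                      for p, b, t in zip(parent, best, TARGET))
    parent = best
    generation += 1

print(f"reached target in {generation} generations; "
      f"{reversions} correct letters reverted in winning offspring along the way")

Varying POP_SIZE and MUT_RATE shows how the frequency of such reversions depends on the co-tuning of parameters discussed throughout these threads.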
jerry,
If you read what I said, I never said that once a letter reaches function it can never disappear. Just that its disappearance should be rare and not the equivalent of the mutation rate, because there will be a strong bias by natural selection to preserve a functional state.
Ok, let me see if I'm understanding: Are you saying (in weasel terms) that correct letters should have a lower chance of mutation than incorrect letters? Or are you saying that correct letters in the parent should very rarely revert to incorrect letters in the "best" offspring? In other words, in which step of the algorithm should the bias of preserving correct letters be placed---in the mutation step, or the selection step? madsen
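To make the question concrete, here is a small Python sketch (the function names and rates are illustrative assumptions) showing the two places such a bias could live: in the mutation step, where correct letters are simply never touched, or nowhere at all, leaving conservation of correct letters entirely to the selection step.

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

# Option 1: bias in the MUTATION step ("explicit latching") --
# letters that already match the target are never allowed to change.
def mutate_latched(parent, rate=0.04):
    return "".join(c if c == t or random.random() > rate
                   else random.choice(ALPHABET)
                   for c, t in zip(parent, TARGET))

# Option 2: no bias in mutation at all -- every letter can change --
# and correct letters persist only because the SELECTION step keeps
# the offspring closest to the target.
def mutate_unbiased(parent, rate=0.04):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)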
"If you believe that latching is more realistic in actual biology, how are functioning genes prevented from mutating?" They are not. I never said they weren't. Read what I said. They are just conserved by natural selection and it is not realistic that a population would be replaced by one that does not have a functional element. But the program treats this outcome as likely for several of the generations. It is the program that is nonsense. If you read what I said, I never said that once a letter reaches function, it should not be prevented from disappearing. Just that it should be rare and not the equivalent of the mutation rate because there will be a strong bias by natural selection to preserve a functional state. Latching is a way of doing that but if one wants to get more realistic then make the chance of it being switched out much lower than the mutation rate. However, absolute latching is more realistic than the chance of the mutation rate switching it out. I bet Dawkins did the latching first and found out the example was too fast and consequently not believable so he went for the even less realistic version just because it searches longer. But in either case the example has no value except to prolong irrelevant discussion about it for people who have too much time on their hands. "Deleterious mutations do happen in reality — just ask anyone with cystic fibrosis. The non-latching case, which allows deleterious mutations, is therefore more realistic than the latching case. Again, please slow down and think this through before responding." No one said they didn't happen. They just do not take over the population in one generation under any plausible scenario just because Richard Dawkins programs it as such. So the latching case is more realistic. See the discussion above. All this discussion is due to the ineptness of this program as both an example of evolution and as a teaching tool. What it is, is a propaganda tool. And a lot of people fell for it. So look at Dawkins as a master of propaganda and not as someone well versed on evolution. As I said above he would make a good used car salesman. So smile while you are being conned by Sir Richard's gobbledegook. jerry
kairosfocus, In the weasel program, assuming "implicit latching", do you believe that a correct letter has a different probability of mutating than an incorrect letter? I'm looking for a short answer, a few sentences at most. madsen
8] If you're talking about first life (or first RNA strand or whatever) then obviously that cannot come about by Darwinian selection. Everyone would agree with that. But does that logically necessitate that it just poofed into existence instantaneously via Divine fiat?

I will ignore the ad hominem laced inference to Creationism, especially as from the very outset of the modern design theory, Thaxton et al stated explicitly that OOL by design, per empirical evidence of want of thermodynamic credibility in abiotic environments, does not directly implicate an extracosmic designer. (Forrest et al have been setting up convenient -- and dishonest -- strawmen to knock over.)

But the main point is that you here implicitly acknowledge the force of the Hoylean challenge. The FSCI in first life has to be accounted for, and there is no empirically credible, spontaneous hill-climbing algorithm -- no BLIND watchmaker -- to get there. (And onlookers, in Section A of my always linked I discuss how Shapiro and Orgel mutually destroy RNA world and metabolism-first imagined scenarios for chemical evo. In Section B, I take apart the wreckage, and back it up with Appendix A on the relevant thermodynamics, building on . . . Hoyle's tornado in a junkyard.)

Similarly, body plan level architectures are highly complex, functionally deeply and multiply integrated -- and so credibly highly irreducibly complex [observe Jerry, where I think Behe's IC comes most seriously into play] -- starting from the observed flexible-program, flexible-data-storage, molecular nanotech computer in the heart of the cell. And, major body plans similarly brim with FSCI-rich, irreducibly complex, multiply and subtly functionally integrated systems. Think: autonomous robots that are self replicating, based on intelligent polymer molecular nanotech information systems. That's what I find myself looking at, from my applied physicist's perspective, and that screams: design. (And, BTW, see why AI does not faze me, once we look at Derek Smith's two-tier control processor for MIMO servo systems, as I discuss in App 7 of the always linked? [Do you see that I am heading to reverse engineering life systems, to forward engineer a new generation of REALLY smart systems? Eventually, systems that can be loaded into embryonic robots launched across space to seed new planets and even solar systems, "eating" local resources, then organising and [partly] terraforming them to set up for colonisation? Starting with the Moon, Mars, Ceres and possibly some of those big Jovian moons? THAT'S what lurks in Design Theory.])

9] any set of physical contingencies that result in a certain outcome (plus any associated natural laws) are also a program to generate whatever outcome it is they generate.

Nope. There are [a] undirected, stochastic contingencies, and [b] directed contingencies, some of which may be [c] set up in programmed, algorithmic systems. Programs that have to work on dynamical entities will [d] use the forces and materials of nature to bend them into structures fitted to the intent of the designer. Simple, easily observed empirical facts. Since you mention physical programs, perhaps you suggest life is written into the laws of the cosmos, so that what looks contingent is not. This leads straight to the cosmological inference to design on a grand scale.
(And, BTW, what observed or predicted empirical evidence leads to the conclusion that life could be so written into the laws of nature, as an explanation claimed to be superior to the obvious: design by designers.)

10] If those intermediates in Dawkins' Weasel are functional, then your 1000 bits are irrelevant - right?

As the Spartans once famously replied: IF. In fact, on evidence, 10^40 configs is a toy example relative to the initial state of function of the type of entities we really need to get to, and Dawkins -- not GEM of TKI or WmAD or Royal Truman etc -- acknowledged that the intermediates were NON-FUNCTIONAL. Again: ". . . nonsense phrases." No antecedent in a hypothetical inference, no warrant for inferring the consequent on that basis.

11] Consider the example of Monkeys with Typewriters, but with some modifications: You have a bunch of monkeys that have access to a hat filled with a couple of thousand words or so - all words in Hamlet. They can go and grab a word out of the hat and then . . .

This is of course a rabbit trail leading away from the material case on the table circa 1986. But also it brings up a most interesting issue: the challenge is to get to the initially functional entities by spontaneous processes, then assemble them into integrated systems. The polymers of life [and some of their monomers] are multiply thermodynamically implausible for any credible prebiotic spontaneously formed environment. (This, I discuss in Section B of my always linked.) In a Weasel case, if the threshold of getting to relevant words first and then assembling them in grammatically proper order is set, it combinatorially -- and algorithmically -- explodes.

12] the monkey will eventually hit some 10 word sequence from Hamlet.

Just as, by chance he will eventually reach Shakespeare's full corpus. The issue is -- again -- probabilistic resources to get to functionality. Which is still unmet.

13] 49: with an actual population, and beneficial mutations overtaking the entire population (a simplifying assumption as well admittedly but certainly comprehensible) any detrimental mutations in certain individuals (of already beneficial changes) will be swamped by the rest of the population that did not have these mutations.

In short, implicit latching. AND in your case, with a tearaway run to the target through multiple beneficial mutations on a probabilistically and empirically implausible model. [Think about the skirts of the mutations distribution and what happens with 500 iterations per generation of 5% per letter odds of mutation.] Cf Behe's Edge of Evo and the realistic odds of double mutations to get a benefit. Of course, this at best overlooks the point that I have ALWAYS spoken in the context of a population of mutants, for both explicit and implicit latching. In the case of something like Apollos' runs, that is equivalent to taking every ten or whatever number of mutants catches your fancy, and picking the best of that cluster to move ahead. The simplest implementation would be to pick a champion, then do a mutant and compare, setting up the candidate best next champion from the comparison, noting distance to target. Repeat the mutation on the existing champion n times, and the resulting best approach to target becomes the next current champion. Repeat till convergence hits target. There is no material difference between this and doing the same on one individual at a time, then incrementing the count. (A sketch of this serial procedure is appended just after this comment.)
14] hazel, 50: all letters need to have the possibility of mutation because mutation is random in respect to fitness: that is a critical part of the model.

Post-facto spinning and ink cloud spewing to make a fast getaway. As I pointed out this morning in the other thread, this has no good warrant from TBW, ch 3. Dawkins circa 1986, by his statements, abundantly warrants a per letter partitioned search, latching approach. All else on this is, in the end, squid ink-cloud obfuscation.

15] the fact that occasionally a correct letter mutates doesn't have a net effect

This would reflect, of course, the impact of implicit [quasi-]latching due to co-tuning of mutation rates, population size and the proximity to target filter. Which is what I have long since pointed out, in the first thread where this issue was raised. Many days ago now.

16] Skeech, 55: Latching is equivalent to preventing deleterious mutations.

Not at all, it is primarily the effect of rewarding on proximity to target without reference to function. As I must repeat yet again: until you have achieved function, no credible natural selection on differential function across variation can be discussed. Weasel begs the Hoylean question by imposing effectively no -- or an implausibly simplistic -- criterion of function in more modern versions. And, as noted, Dawkins acknowledged, in 1986, that his implementation rewards non-functionality on proximity to target. Jerry, in 59, aptly sums up the matter:
Natural selection would select the progeny with the functional traits and eliminate those when the traits were not conserved. A functional state is treated as equivalent to a non functional state by Dawkins’ Weasel program which is very bad evolutionary theory by the high genius of evolution. [But, Jerry, fair comment: onward language gets a little testy.]
17] SG, 62: what Dawkins' was illustrating depends in no way on "targeted search," "partitioned search," or "implicit locking."

The only thing that Mr Dawkins succeeds in "illustrating" is that intelligent design based on targetted search can effectively scan an otherwise resource-wise unsearchable config space. It does so through active information. Again, Am H Dictionary:
il·lus·trate, v. il·lus·trat·ed, il·lus·trat·ing, il·lus·trates. v.tr. 1. a. To clarify, as by use of examples or comparisons: The editor illustrated the definition with an example sentence. b. To clarify by serving as an example or comparison: The example sentence illustrated the meaning of the word. 2. To provide (a publication) with explanatory or decorative features: illustrated the book with colorful drawings. 3. Obsolete To illuminate. v.intr. To present a clarification, example, or explanation.
GEM of TKI kairosfocus
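As a concrete illustration of the serial "champion plus n mutants" procedure described under point 13 above, here is a minimal Python sketch. The value of n and the mutation rate are assumed for illustration; nothing here is taken from Dawkins' text or from any particular implementation discussed in the thread.

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def proximity(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

champion = "".join(random.choice(ALPHABET) for _ in TARGET)
n = 50  # assumed number of mutants tried per round
while champion != TARGET:
    candidate = champion                  # the champion is retained if no mutant beats it
    for _ in range(n):                    # generate n mutants of the current champion
        trial = mutate(champion)
        if proximity(trial) > proximity(candidate):
            candidate = trial             # keep the best-so-far of this round
    champion = candidate                  # promote the best of the round
print(champion)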
H'mm: Several points; pardon the need for a string of correctives, as it is plain that a spin game is being played out over in the Darwinian blogosphere. So, let us note for the record:

1] Pendulum, 35: Weasel was not a response to anything by Hoyle.

Not so. At the turn of the 80's Hoyle [and Wickramasinghe] had raised the issue of the complexity of life, and the question was framed in terms of the odds of creating a cell -- say, the odds against the cluster of enzymes originating by chance being something like 1 in 10^40,000. This set the context for debates [and is part of the back-story on the Thaxton et al book that launched the modern design movement, TMLO], and Weasel was at its core an attempt to say that what is utterly improbable as a single step is feasible if we look at baby steps. Excerpting Wiki's overview of the Weasel Program:
Dawkins intends this example to illustrate a common misunderstanding of evolutionary change, i.e. that DNA sequences or organic compounds such as proteins are the result of atoms "randomly" combining to form more complex structures. In these types of computations, any sequence of amino acids in a protein will be extraordinarily improbable (this is known as Hoyle's fallacy). Rather, evolution proceeds by hill climbing.
The highlighted quote shows precisely the question-begging at stake. The DNA-RNA-ribosome-enzyme system constitutes a stored program and stored data computer, physically instantiated. Such entities are known to be irreducibly complex. Until one arrives at the threshold of elements to achieve physical functionality, one has nothing. And, until one has functionality, one cannot credibly address differential functionality in environments to get to natural selection, the blind watchmaker hill-climbing engine proposed.

Further, the spontaneous synthesis of relevant informational macromolecules is a question not of biology but of statistical thermodynamics -- an expertise of Hoyle [who is the precise person who, in this exact context, raised the analogy of a tornado in a junkyard in Seattle spontaneously forming a 747], and not one of Dawkins. (Cf my own discussion in my always linked to see what I am pointing to, App 1, esp note 6.) So, by diverting attention from the need to get TO shores of functionality by focussing on hill climbing within islands of already existing functionality, Dawkins' Weasel is an exercise in question-begging the Hoylean challenge. The real fallacy is Dawkins' q-begging. Hoyle is right, and sitting on a point of his Nobel Prize-equivalent winning expertise. (Hint: the late, great Sir Fred Hoyle may be wrong on a point of theory, but he is not going to be grossly, simplistically wrong; his errors will be both interesting and deeply revealing on what is going on. That is why, regardless of differences, he is one of my personal intellectual heroes. Most recently, I was looking at his magnetic braking of proto-solar system disks model. That led me into looking at issues on Faraday disk generator empirical behaviour that I had never noticed before, and which are glossed over in the textbooks on electromagnetism and electromagnetics.)

2] Weasel's extremely limited didactic goals

Weasel's goals were more rhetorical than didactic. And, it is not misunderstanding to take plain words and outputs at their direct and obvious import. The o/p of Weasel circa 1986 plainly -- on a law of large numbers sample -- credibly latches. Two models for that latching were proposed: explicit and implicit. The former is overwhelmingly justified by the statements Mr Dawkins made in his text. It is on further statements that not even the 1986 version was explicitly latched that the implicit latching becomes the better explanation. And, in that light, the 1987 video o/p becomes a clear case of detuning for video effect. Misunderstanding is not the material issue at stake. What is, is that Weasel from the outset was precisely not BLIND watchmaker in action, but foresighted, targetted search that rewards non-functional configs on mere proximity. And, that in a context where threshold of function was the precise issue at stake from the outset.

3] DK, 39: I think "implicit latching" is a way to avoid saying "non-latching."

Strawman. In the implicit latching case, a co-tuned blend of population size [vs probabilities of multiple mutations], random changes on population members, preservation of the original champion in a significant fraction of the generation and rewarding of mere proximity to target causes latching that is implicit but very real. This is because it is practically -- not theoretically -- impossible for a double mutation that simultaneously gives a good new letter and reverts an old correct letter to win as champion in the current generation.
(Just as, on statistical thermodynamics with an entity of sufficient size, significant fluctuations away from the 2nd law of thermodynamics as classically stated are practically -- not theoretically -- impossible. And, yes DK, stat thermo-d considerations were in mind for me from the outset.)

Under somewhat relaxed conditions [higher mutation rates coupled to big enough populations], the probability of seeing that rather special double mutation will rise. And, as the odds of per letter mutation rise sufficiently, facilitating that, the odds of no mutation at all fall. So, we can see cases where a substituting mutation will occasionally win: remember, the closest to target, regardless of want of functionality, wins. Thus, quasi-latching. Then, as the conditions are further relaxed, triple etc mutation cases become more common. In some of these cases, even more substitutions will happen, and as well, novel multiply correct letters begin to emerge. Implicit latching vanishes, and we see much more flicking back, no strong preservation of current letters, and occasional leaps of multiple letter advance. (Some rough numbers behind this co-tuning are sketched just after this comment.)

4] Skeech, 41: KF's complaint about "implicit latching that rewards non-functional but closer population members" is therefore, to use a couple of his favorite phrases, "distractive" and a "red herring".

Spin. I have just -- again, onlookers -- explained why this is VERY relevant. And, the original issue is that Weasel is precisely not blind watchmaker. Back to that December thread, comments 107 and 111:
[107:] the problem with the fitness landscape is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy. But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [111, excerpted paragraph used by GLF in his threadjack:] Weasel sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
See what is being desperately spun away from, onlookers? Notice, too, the evident fact of ratcheting and thus the implication of explicit or implicit latching? And, notice the issue being skipped over?

5] SG, 44: "Implicit latching" -- the term itself -- reflects gross misunderstanding of the evolution strategy. The Weasel program should have no termination criterion. That is, it should not stop itself, just as evolution does not stop itself.

Ad hominem laced, smoke-cloud-emitting, burning strawman. Weasel as a matter of fact has a target [on achieving of which it terminates], and as a matter of observed o/p fact, circa 1986, that target worked on a per letter basis. "Latching" is in that context and does not reflect a misunderstanding.

6] the fitness function in Dawkins' example

Dawkins uses a TARGET, and proximity thereto, not functionality-based differential fitness. Fact, acknowledged by him in TBW, ch 3: "The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL." Of course, that "however slightly" implies that a one-letter advantage is enough, justifying a letter by letter partitioned search interpretation.

7] JT, 47: you're defining "Methinks it is a weasel" as the only functional string. Presumably you accept that string as being functional. If you're just defining every other string of letters as nonfunctional, then obviously there won't be any intermediates that can be preserved.

Kindly examine what Mr Dawkins stated in TBW, ch 3, as just excerpted. It is HE who accepts that the phrases in question are non-functional, "nonsense phrases." That is, I am making no redefinitions; just taking him at his word and the linguistic context at its natural meaning: only a correct sentence is a correct sentence, and only a correct word is a word. [In algorithmic and data storage contexts, just one incorrect letter or character can cause havoc, e.g. the infamous comma that forced NASA to abort a rocket launch. I also recall the case of a computer trying to subtract Jones from Smith, and causing a social security system (in South Africa, if I recall correctly) to crash.] And indeed, the only reasonable threshold of function is that he has cut down from monkeys reproducing all of Shakespeare by random typing [on probabilistic resources grounds] to one sentence, and in that one sentence, he admits that trying to get to it "single step" is far beyond the credible reach of a computer on the gamut of the observed universe:
What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed.
It turns out that 27^28 ~ 1.2*10^40, which is well below the 500 - 1,000 bit threshold of complexity [~ 10^150 - 10^301 configs] at which FSCI becomes a material issue. In turn, 1,000 bits is well below the sort of reasonable threshold for first cellular life, 600 k bits; as well as for major body plan innovations, ~ 10's - 100's of M bits. And that is Hoyle's context, and that of my reasonable extension to bio-diversity origination. [ . . . ] kairosfocus
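To put rough numbers on the co-tuning argument referenced above (point 3), here is a short back-of-envelope calculation in Python. Every parameter value is an illustrative assumption, and the probabilities are approximate (they neglect letters elsewhere in the string that "mutate" to themselves); the point is only to show how the balance between unchanged copies, single gains, and simultaneous gain-plus-loss offspring shifts as the mutation rate and population size are varied.

# Rough numbers behind the co-tuning ("implicit latching") argument above.
# All parameter values are illustrative assumptions.
L = 28        # letters in the target phrase
A = 27        # alphabet size (26 letters plus space)
u = 0.04      # assumed per-letter mutation probability
N = 100       # assumed offspring per generation
k = 20        # assumed letters already correct at this stage of a run

p_unchanged = (1 - u) ** L                                # copy identical to parent
p_lose_one  = k * u * ((A - 1) / A) * (1 - u) ** (L - 1)  # spoils exactly one correct letter
p_gain_one  = (L - k) * u * (1 / A) * (1 - u) ** (L - 1)  # fixes exactly one wrong letter
p_swap      = (k * (L - k) * u ** 2 * ((A - 1) / A) * (1 / A)
               * (1 - u) ** (L - 2))                      # one letter lost AND one gained

for label, p in [("unchanged copy", p_unchanged),
                 ("one correct letter lost", p_lose_one),
                 ("one wrong letter gained", p_gain_one),
                 ("simultaneous loss and gain", p_swap)]:
    print(f"{label:28s} per offspring {p:.4f}   expected per generation {N * p:.2f}")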
JT, My point is that what Dawkins' was illustrating depends in no way on "targeted search," "partitioned search," or "implicit locking." Populations sometimes accrue information that reduces the uncertainty of the environment, even though the environment is changing. As David Fogel has written, following Wirt Atmar closely, "Selection operates to minimize species' behavioral surprise — that is, predictive error — as they are exposed to successive symbols." The symbols he refers to represent environmental circumstances. The Weasel program makes better sense if you stop thinking of the simulated organisms as genotypes, but instead as phenotypes making 28 predictions of the environment. The count of matching characters in the fitness function is then the total payoff for correct predictions. Dawkins specified an environmental sequence of symbols and held it constant to provide a clear illustration of information accrual through selection. In the Wandering Weasel program, obtained through slight changes to the Weasel program, I illustrate accrual of information when the environment is a random process -- a Markov process -- that hops from sequence to sequence of symbols. The Wandering Weasel program reduces the entropy of the environment -- the simulated environment is essentially external to the program -- for a wide range of parameter settings. This holds even if we make the length of sequences large and set the parameters to make the probability of ever perfectly matching the current sequence very low. Sal Gal
jerry writes:
Absolute nonsense. You have it backwards. Deleterious mutations are eliminated from the offspring in normal evolution not allowed to prevail.
jerry, Slow down and think this through. Deleterious mutations "prevail" in neither the latching nor the non-latching paradigm. Both match biological reality in that respect. The difference between the two is that in the latching case, deleterious mutations are prevented from happening in the first place; in the non-latching case, they are allowed to happen but are weeded out of the population by selection. Deleterious mutations do happen in reality -- just ask anyone with cystic fibrosis. The non-latching case, which allows deleterious mutations, is therefore more realistic than the latching case. Again, please slow down and think this through before responding. skeech
jerry, If you believe that latching is more realistic in actual biology, how are functioning genes prevented from mutating? madsen
"Latching is equivalent to preventing deleterious mutations. Therefore a realistic simulation should not latch, but should instead allow deleterious mutations to be filtered out by the fitness function." Absolute nonsense. You have it backwards. Deleterious mutations are eliminated from the offspring in normal evolution not allowed to prevail. Natural selection would select the progeny with the functional traits and eliminate those when the traits were not conserved. A functional state is treated as equivalent to a non functional state by Dawkins' Weasel program which is very bad evolutionary theory by the high genius of evolution. Latching is a more realistic outcome. This whole discussion is as I said folly because the example is very bad evolution and very bad pedagogy. But people follow this idiot Dawkins like he is a prophet. Dawkins would make a good used car sales man because he has sold a bunch of junk to people in the last 30 years. And they smile when they buy it. "To latch or not to latch, that is the folly." jerry
Sal Gal [44]: “Implicit latching” — the term itself — reflects gross misunderstanding of the evolution strategy. What about "relative latching" - referring to the relative fixation of a trait. Haven't cockroaches been around for millions of years without much change? JT
Sorry, I didn't see your post 44. JT
Sal Gal 45-46: I may see where you're going with this "Wandering weasel" program, but what are you inferring (or implying) from it (And where are you going with that wandering weasel)? BTW- Where is the preview button. JT
jerry:
In evolution, the one thing that natural selection tends to do is conserve so if one wants to approach reality in any sense then the loss of information within an iteration of a functional subpart is extremely unlikely.
Evolution conserves useful traits not by preventing deleterious mutations but by filtering them out of the population via selection. Latching is equivalent to preventing deleterious mutations. Therefore a realistic simulation should not latch, but should instead allow deleterious mutations to be filtered out by the fitness function.
How difficult is that to understand.
How difficult is this to understand? skeech
But the mutationworks simulation does latch - whether they realize it or not. It latches the original configuration, because any other mutations that can occur are rejected in favor of the original configuration (save one and only one target config out of the entire space). To clarify, mutationworks has all these neutral mutations happening (they call virtually everything neutral), and they say those are preserved. But how can they mean all those neutral mutations are actually overtaking the entire population? They cannot mean that. And then any other neutral mutation also overtakes the entire population? So really the mutationworks simulation means one of two things: EVERYTHING LATCHES [because every neutral mutation overtakes the population] or NOTHING LATCHES but the original config and only one distant target. But at any rate, the mutationworks simulation most definitely latches as well. JT
"So I agree: what is difficult to understand about this?" The whole process is folly as I said above. The artificial fitness function is nonsense even if it is meant to be a pedagogical process. In evolution, the one thing that natural selection tends to do is conserve so if one wants to approach reality in any sense then the loss of information within an iteration of a functional subpart is extremely unlikely. A more intelligent way would be to make the loss of this functionality very rare. And latching is the easy way out but but more reasonable then to let it mutate out like it meant nothing. How difficult is that to understand. Now it turns out that by programming that latching effect into this absurd example one gets a less absurd simulation but it is extremely trivial one because it reaches an answer very quickly. It is like we are some how fooled that the longer simulation must be more real life and that is nonsense. There is no relation to reality here which is why it is absurd that it has taken over 400 comments over three threads to discuss it. People do have too much time on their hands. They should go out and get a beer or something. From what I understand the offspring population size generated at each iteration should be the same for each type of simulation. The only difference is whether a letter is latched or not. If that is wrong, then why? I am willing to learn. jerry
hazel wrote [37]: to JT: I was using latching and non-latching in the sense that they were used in that other thread. Sometimes we talked about explicit latching in which there is a rule that prohibits correct letters from mutating and implicit (or what kf has called quasi-latching) to mean what you have been meaning by latching. Given that the distinction between the two is related to the distinction I mentioned: random vs non-random in respect to fitness, it would seem best for there to be a consistent usage of the word latching.

OK. To return to the mutationworks example (from the OP): they say that Dawkins' program latches letters into place. But the mutationworks people say their own simulation does not latch like Dawkins', and that that's why their implementation is more realistic and Dawkins always wins. But the mutationworks simulation does latch - whether they realize it or not. It latches the original configuration, because any other mutations that can occur are rejected in favor of the original configuration (save one and only one target config out of the entire space).

Really, I think latching should mean any fixation of a trait, and artificial latching should mean any process that models latching by means of a single individual (instead of a population) and the actual prevention of negative mutations. But latching of some type (real or virtual or artificial or actual) does occur in nature. The question (which the mutationworks people, for example, do not address) is why only two individuals should latch - the original config and one specific distant target config - as happens in the mutationworks simulation. Why can't there be multiple intermediate configs that "latch"? JT
Yes, that is exactly what I and others have explained on the other thread. And, as I will point out again (because it doesn't seem like anyone wants to respond), all letters need to have the possibility of mutation because mutation is random in respect to fitness: that is a critical part of the model. And as JT points out (and I have pointed out elsewhere), the fact that occasionally a correct letter mutates doesn't have a net effect is because such mutations are detrimental to survival, so a phrase which has a mutated correct letter is very unlikely to be the most fit individual in a generation. So I agree: what is difficult to understand about this? hazel
Jerry wrote [43]: Explicit latching as it is defined here more closely resembles reality. Once something becomes functional, here the matching to the desired letter, it will tend to be conserved by natural selection. But it is not in Dawkins' program so it is Dawkins' thinking that is buggy. My guess is that Dawkins tried the latching first and found it to be too easy so he changed the program but unwittingly went to poor evolution to do so.

I honestly don't know what you're talking about. The way Dawkins' antagonists have characterized and implemented the algorithm, there is only one individual in the population. What does that have to do with reality? In that scenario all you can do is artificially prevent certain beneficial mutations from changing in that one individual. But with an actual population, and beneficial mutations overtaking the entire population (a simplifying assumption as well admittedly but certainly comprehensible), any detrimental mutations in certain individuals (of already beneficial changes) will be swamped by the rest of the population that did not have these mutations. In my own implementation, a population of 500 and a mutation rate of even 5% means that on any given iteration there will be on average 25 mutations of a beneficial letter. But there are 475 individuals that did not have this detrimental mutation, so the chances are highly unlikely for the winning candidate for that iteration [generation] to have the detrimental mutation in question. What is difficult to understand about this? JT
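A quick check of the figures JT quotes, under the reading his numbers imply, namely a 5% chance that a given offspring spoils one of its already-correct letters (an assumption; JT does not say whether the rate is per letter or per string):

# Reading the 5% as the chance that a given offspring spoils one of its
# already-correct letters (an assumption -- the rate could instead be per letter),
# the expectations JT cites fall out directly.
POP = 500
P_BAD = 0.05

expected_bad = POP * P_BAD          # about 25 offspring per generation carry a reversion
expected_clean = POP * (1 - P_BAD)  # about 475 do not

print(expected_bad, expected_clean)
# For a reverted copy to become the next parent it would still have to
# out-score all ~475 clean copies, which is why the winning candidate in a
# generation almost never carries the detrimental change.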
And just to emphasize (what others have as well) intermediate functionality need not be directly associated with only one specific distant target (subcomponents of an eye for example could be used elsewhere - don't believe anyone has successfully ruled this out). JT
KF wrote [32]: "Had there been a serious functionality constraint - which was what Hoyle stipulated to begin with, Weasel would wander around hopelessly in the sea of non-function, unable to find a shore of function to hill-climb."

That's only because you're defining "Methinks it is a weasel" as the only functional string. Presumably you accept that string as being functional. If you're just defining every other string of letters as nonfunctional, then obviously there won't be any intermediates that can be preserved.

"1,000 bits is credibly unreachable by our observed cosmos, and first life credibly needs 600 k bits;"

[NOTE: the following up to the dashed line is a digression from the subject of simulations]

If you're talking about first life (or first RNA strand or whatever) then obviously that cannot come about by Darwinian selection. Everyone would agree with that. But does that logically necessitate that it just poofed into existence instantaneously via Divine fiat? Or does it allow also that it inexplicably coalesced bit by bit, but only because it was being guided by the unseen hand of Providence, a Providence exercising some sort of non-physical fundamental force of nature called "intelligence"?

But what about when a human is formed via the automatic process of epigenesis, starting from something that looks entirely different - an embryonic cell. Is that an automated mechanized process, or is that also guided to completion in real time via some "Intelligence"? Obviously it's the former - it resulted from a mechanized blind physical process. Now you're saying, "But that embryonic cell is a program for a human being!" And granted that is indeed the case. But any set of physical contingencies that result in a certain outcome (plus any associated natural laws) are also a program to generate whatever outcome it is they generate. There is simply no getting around that. So why couldn't there be some set of preexisting physical conditions in the universe prior to RNA that resulted in RNA coming into existence? What law rules that out?

Of course you're saying, "Well whatever that thing is that caused RNA, it couldn't have come into existence by blind chance either, because it would be like saying an embryonic cell could come into existence by blind chance." And again, granted, that is a fact. But saying those physical causes of RNA would have to be caused by intelligence basically explains nothing, because intelligence isn't actually defined as such in I.D. And in fact any sort of definition of it is implicitly ruled out, as it is asserted that intelligence is nondeterministic. If something is nondeterministic, that means that no description exists to accurately characterize (and thus predict) its behavior. And to say, "Well we know that human intelligence for example is this mysterious nondeterministic thing" - well, that is merely begging the question.

But to return to the point where I think you and I implicitly agree - any physical cause for RNA would be no more probable than RNA itself to occur by blind chance, and furthermore any physical cause proposed for RNA would just be pushing back what needs to be explained. So I think we both agree on that. In fact any set of physical conditions and laws that resulted in the formation of RNA would in fact equate to RNA, just as if f(x) = y, then f(x) equates to y, and just as an embryonic cell (+ epigenetic machinery) equates to a human being.
But just as an embryonic cell doesn't look anything like a human being, any set of physical contingencies that resulted in RNA might not look anything like RNA (or life or animals or human beings). Such contingencies could be diffuse and disparate and indirect and remote, but could still collectively result in RNA. What law rules that out - nothing. And certainly you're just pushing back what needs to be explained, but intelligence isn't an explanation, so at some point in the regression let's just say instead you hit something that has always existed (and thus did not need to be caused by anything).

-------------------------------------------------------

Sorry about that long digression, but there was actually another point I wanted to make in the context of evolution and simulations regarding your above comment: "1,000 bits is credibly unreachable by our observed cosmos, and first life credibly needs 600 k bits". I assume you realize that if they're functional intermediates then your 1000 bits limit of what's reachable by blind chance goes out the window. If those intermediates in Dawkins' Weasel are functional, then your 1000 bits are irrelevant - right?

Consider the example of Monkeys with Typewriters, but with some modifications: You have a bunch of monkeys that have access to a hat filled with a couple of thousand words or so - all words in Hamlet. They can go and grab a word out of the hat and then tack it on to either the beginning or end of a sequence of words in a sentence. Once it reaches say 10 words, a human comes in and reads it, and if it's not a 10 word phrase from Hamlet, then he picks up all the words and throws them back into the hat and the monkeys have to start over.

Now presumably we could consider any 10 word phrase from Hamlet as functional (you considered the Weasel sentence functional). Certainly any 10 word phrase from Hamlet could be termed "sublime poetry" or maybe the "work of genius". But before discussing that further, let it be noted that in the above scenario the monkeys will never generate a ten word phrase from Hamlet, as the odds against them are 1 in (2000^10)/10000 [assuming there are 10,000 ten-word sequences in Hamlet]. However, what if the rule is "preserve any n-word sequence from Hamlet and reject any additional word not resulting in an n-word sequence from Hamlet"? Obviously the monkey will eventually hit some 10 word sequence from Hamlet.

Certainly a vivid metaphor can be painted with just a couple of words together. And the following phrase is not ten words either: "A rose by any other name smells just as sweet." So preserving any sequence from Hamlet seems justified. [In the context of evolution, think about an additional mutation being a viable organism or not.] What if instead of Hamlet it were a biology textbook, or perhaps the rule "any valid English sentence"?

But concerning nature, let's say some biological entity exists but its origin is unknown. Now supposing that entity is functional - let's say it's an eye, or maybe a heart or hand or whatever. Without regard to its origin, there is a reason why this entity's complex physical configuration results in a certain function and why that function conveys on its possessor certain advantages. Plunk that entity down in a certain context, and presumably reality itself will dictate how the entity's complex configuration confers on it certain advantages: "This part of the entity interacts with this part and this part [etc.] and the result is such and such function."
So reality itself is parsing a biological sentence and saying, "OK, that's valid - keep it" or "That makes no sense - get rid of it." Now of course I am anthropomorphizing "reality" or "nature", but perhaps that is fundamentally unavoidable. Reality is, after all, making such discriminations. And if such an unavoidable view of reality or nature seems to confer on it some sort of intelligence, then maybe that is something we have to live with (I.Dists shouldn't have any problem with that).

In any case, I think you would have to say that God and Reality equate. You could never say that God exists in reality, because it would imply that reality was a more transcendent concept than God was. So anyway, the idea would be God is reality, i.e. God is the environment, God is nature. In reality, in nature, certain things can exist (or persist) and certain things cannot, i.e. certain things are viable and certain things are not. If Man for example is what ultimately persists (hypothetically), then that tells you something about the eternal nature of reality. And there is probably a more direct transition to a proof of God via this path for someone more deft than me.

But to reiterate a point from much earlier in this post: any set of physical contingencies that result in a certain outcome (plus any associated natural laws) are also a program to generate whatever outcome it is they generate; any physical causes for RNA [for example] would be no more probable than RNA itself to occur by blind chance, and furthermore any physical cause proposed for RNA would just be pushing back what needs to be explained. So I think we both agree on that. In fact any set of physical conditions and laws that resulted in the formation of RNA would in fact equate to RNA, just as if f(x) = y, then f(x) equates to y, and just as an embryonic cell (+ epigenetic machinery) equates to a human being.

In the case of preserving Hamlet, we have Hamlet as a preexisting template. In the case of English language sentences, it is something able to recognize and parse English sentences. In such environments "functionality" can very easily be built up under time constraints nowhere near combinatorial intractability. What is nature or reality able to parse - what are legal sentences in that context? What does that tell us about the "intelligence" of reality or nature?

Now, I'm just repeating myself and others. I don't necessarily want to attempt an even more lengthy defense of the above if it's not already clear and compelling on its own. Just trying to add to the discussion a bit. JT
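A toy sketch in Python of the two regimes contrasted in the monkeys-with-a-hat thought experiment above, using a stand-in vocabulary and a single made-up ten-word "line" rather than the real Hamlet text, and simplifying the rule to extending a matching prefix; all of that is assumption for illustration only.

import random

# Toy stand-in: a 2,000-word vocabulary and one made-up ten-word "line"
# (both assumptions; the real Hamlet text is beside the point here).
VOCAB = [f"word{i}" for i in range(2000)]
LINE = VOCAB[:10]   # pretend this is a ten-word phrase from Hamlet

# Regime 1 (single-step selection): draw ten words, check the whole phrase,
# start over on failure.  Odds per attempt are roughly 1 in 2000**10, so no
# simulation will ever see a success.

# Regime 2 (cumulative selection, simplified to prefix extension): keep any
# growing prefix of the line, throw back any word that does not extend it.
def cumulative_draws():
    phrase, draws = [], 0
    while len(phrase) < len(LINE):
        draws += 1
        word = random.choice(VOCAB)
        if word == LINE[len(phrase)]:   # this word extends the matching prefix
            phrase.append(word)
    return draws

print("draws needed with cumulative selection:", cumulative_draws())
# Expected draws are roughly 10 * 2000 = 20,000, versus about 2000**10
# (around 10**33) attempts in the single-step regime.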
What, precisely, is wrong with letting me use HTML to specify an ordered list? You might as well turn off the "preview" if it's not a preview. -------------- To make the Weasel program into Wandering Weasel: 1. Initialize the target randomly. 2. Mutate the target at the end of each generation. 3. Let the program run many generations. 4. In each generation, output the current target along with the current parent and its fitness. Sal Gal
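A sketch of those four modifications bolted onto a generic Weasel loop, for anyone who wants to try it; the population size and the two mutation rates below are assumed values, not anything Sal Gal specifies.

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
LENGTH = 28
POP_SIZE = 100        # assumed offspring per generation
CHILD_RATE = 0.04     # assumed per-letter mutation rate in reproduction
TARGET_RATE = 0.02    # assumed per-letter mutation rate when copying the target
GENERATIONS = 500     # run "many generations"; there is no stopping rule

def rand_string():
    return "".join(random.choice(ALPHABET) for _ in range(LENGTH))

def mutate(s, rate):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def fitness(s, target):
    return sum(a == b for a, b in zip(s, target))

target = rand_string()                     # 1. initialize the target randomly
parent = rand_string()
for gen in range(GENERATIONS):             # 3. let the program run many generations
    offspring = [mutate(parent, CHILD_RATE) for _ in range(POP_SIZE)]
    parent = max(offspring, key=lambda s: fitness(s, target))
    print(gen, target, parent, fitness(parent, target))  # 4. report target, parent, fitness
    target = mutate(target, TARGET_RATE)   # 2. mutate the target at the end of each generation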
For those of you working with Weasel implementations, make the following modifications to obtain a Wandering Weasel program:

1. Initialize the target randomly.
2. Mutate the target at the end of each generation.
3. Let the program run many generations.
4. Output the current target along with the current parent in each generation.

To study the behavior empirically, you will need to do, say, 100 runs for various settings of the three parameters I identified in my previous comment. In data analysis, you should plot, for each combination of parameter values, the mean and median fitness of parents in the 100 runs, as a function of generation. Dispersion of fitness (error bars, standard deviation values) is also of interest. Sal Gal
"Implicit latching" -- the term itself -- reflects gross misunderstanding of the evolution strategy. The Weasel program should have no termination criterion. That is, it should not stop itself, just as evolution does not stop itself. Replace the fitness function in Dawkins' example with one that draws a new target sentence uniformly at random whenever the argument (the sentence passed to the function by the main program) matches the current target sentence perfectly. The upshot is that the evolution strategy (ES) has to start all over when it obtains a target. Dealing with this time-varying fitness function requires no change whatsoever to the (non-terminating) ES. Statistically, the behavior of the ES in going from the Hamlet sentence to the new target is identical to that in going from the random initial parent to the Hamlet sentence. The ES "latches" the Hamlet sentence no more than it does the very first parent of the run. (Note that there are "self-adapting" ESs that adjust mutation "step size" dynamically. Any "latching" is in reduction of the expected distance of the offspring from the parent. There is no such reduction in Dawkins' ES, inasmuch as the mutation rate is constant.) A more subtle, but considerably more interesting, approach would be to mutate the target sentence in each generation. Consider the entropy of the n-th target T_n conditioned on the n-th parent P_n, H(T_n | P_n), with P_1 and T_1 drawn uniformly at random. This is a measure of how much information you have to be given, on average, to know the target when you already know the parent. An objective measure of success for the ES is H(T_n | P_n) decreasing in n (perhaps reaching some minimum). Clearly the ability of the ES to track the moving target depends on the number of offspring, the mutation rate in reproduction, and the mutation rate in copying the target from one generation to the next. The reason I focus on reduction of entropy is that Dawkins evidently was thinking of accumulation of information when he wrote the strange term cumulative selection. I've just described how to illustrate accumulation of information "about" a randomly-initialized target that may drift to any point whatsoever in the search space. Sal Gal
jerry, No, non-latching resembles reality. Mutations take place at the level of the gene, and are random. Selection takes place at the level of the organism, and is related to fitness. David Kellogg
Explicit latching as it is defined here more closely resembles reality. Once something becomes functional, here the matching to the desired letter, it will tend to be conserved by natural selection. But it is not in Dawkins' program so it is Dawkins' thinking that is buggy. My guess is that Dawkins tried the latching first and found it to be too easy so he changed the program but unwittingly went to poor evolution to do so. jerry
Great - now I get it, and I agree. hazel
hazel, Pendulum's joke is a play on the catch-phrase "It's not a bug, it's a feature!", which is commonly heard in arguments among engineers over whether a hardware design or computer program is doing what it's supposed to. Explicit latching is a bug, because it fails to conform to Dawkins' original description of the Weasel program's intent and because latching violates the principle that mutations should be random with respect to fitness. So-called "implicit latching" (which as David points out really means "non-latching") is the way the program is supposed to work. It's not a bug, it's a feature! KF's complaint about "implicit latching that rewards non-functional but closer population members" is therefore, to use a couple of his favorite phrases, "distractive" and a "red herring". skeech
I don't get this. I think this is wrong, but the smiley face makes it seem like a joke. I'm confused. hazel
Pendulum, Possibly. I think "implicit latching" is a way to avoid saying "non-latching." David Kellogg
hazel @ 37, The difference between explicit and implicit latching is that one is a bug, and the other is a feature. :) Pendulum
to JT: I was using latching and non-latching in the sense that they were used in that other thread. Sometimes we talked about explicit latching in which there is a rule that prohibits correct letters from mutating and implicit (or what kf has called quasi-latching) to mean what you have been meaning by latching. Given that the distinction between the two is related to the distinction I mentioned: random vs non-random in respect to fitness, it would seem best for there to be a consistent usage of the word latching. hazel
gpuccio writes:
A wolf eating a rabbit is just part of an interaction. In itself, it does not select anything. It’s the rabbit adapting to wolves which self-selects itself for survival.
Yes, that's it. Some rabbits select themselves to survive, and others select themselves to be eaten. The wolves are purely passive. *rolls eyes* gpuccio, it's really quite simple. In evolution, whether an individual survives and reproduces depends on both the individual and the environment. In an evolutionary simulation, whether an individual survives and reproduces depends on both the individual and the fitness function. You seem to be straining at gnats in order to avoid admitting the obvious parallels between the two. Why is that? skeech
KF @32, Why bring up Hoyle? Weasel was not a response to anything by Hoyle. Considering Weasel's extremely limited didactic goals, I'm amazed how much people have obsessed over it. And had trouble admitting that they made mistakes understanding it. Perhaps we should blame Dawkins for thinking so little of his example that his explanation was too skimpy. Pendulum
DonaldM, your experience with targeting Bach reinforces SJ Gould's "rewind the tape of life" comment at the end of "Wonderful Life". As you saw, each time you got something different, perhaps something beautiful, but not Bach. Pendulum
Speaking of these sorts of programs, another program that could approach the evolutionary algorithm problem from a different angle can be found in music. My hobby happens to be messing around in my home recording studio, which utilizes MIDI (Musical Instrument Digital Interface) extensively. Back in the late 80's/early 90's THE computer for music applications was, believe it or not, the Atari ST. A brilliant programmer, Emile Tobenfeld, developed a music sequencing program called Dr. T's KCS (for Keyboard Controlled Sequencer). Besides being able to do multi-track MIDI recording, the program had a Programmable Variations Generator (PVG). The PVG was the first (as I recall) ever algorithmic music generator (AMG) and I've never encountered anything like it since. There are other AMG's out there, but nothing close to what PVG could do. Here's what it does:
Variations can be programmed to be Consecutive (give me 16 variations on this theme using the original as the basis for each) or Evolving (give me 16 variations on this theme basing each variation on the preceding one). The PVG consists of ten pages of functions (over 500 of them) divided into a series of logical groups - these are:

Changes: The introduction of new elements (via random or deterministic selection)
Signed: Size and weight plus direction
Gaussian: Statistical control of changes
Constant: Size and weight but NOT direction
Swap/Copy: The rearrangement of existing data in a sequence, or between two sequences, using random selection
Set Values: The selection of data at random that can be mapped to any set value. Any configuration of data is possible.
Global 1: Provides transposition, inversion, erasure and deletion.
Global 2: Maps specified data to set values
Split/Pattern: An extension of the Global Protection function, it permits important characteristics of a sequence, particularly interval patterns, to be defined as a protection "template" and the varied material split from the original to form new material
Ornaments: The addition of adjacent or simultaneous data, with up to 18 different additions available at one transformation
Add Controllers: Similar to Ornaments. Used to add controller, program, aftertouch and pitch bend data
Vary Controllers: Similar to Changes. Used to vary controller, program, aftertouch and pitch bend data
Macros: Up to 16 of the above presets can be combined to operate simultaneously or sequentially on a sequence, with control over each preset's range and direction of reading

An additional function appears if PVG is called from Open Mode - In-Betweens, which permits two sequences to be "morphed" from one to the other. In addition, the Master Editor provides functions that don't easily fit into PVG's environment. Of particular note is the Pitch Map - select any pitch on any channel and map it to a new pitch and/or channel: this can also be done recursively.

What the PVG does for the composer is to allow them to create their own tools that can be made to emulate virtually any conceivable compositional or pre/post-production MIDI editing process. For example, much composition requires "pre-processing", the manipulation of existing material via the user's own criteria to form new material - counterpoint is a good example of this. Practically any "rule" for extracting thematic material can be created or otherwise mimicked in the PVG: the musical devices of counterpoint, such as inversion, rotation, augmentation, diminution and reflection, can be programmed and applied to any aspect of the music - other compositional procedures are just as easily created. The PVG can also be used as an "ideas" generator: in short, KCS is a tremendous grab-bag of customizable tools suitable for both top-down and bottom-up composition and editing.
I think this program might have some direct applications to what is being discussed here. Back when I was running my studio off the ST, I remember trying to set algorithms to see if a target sequence (4 measures of something from Bach let's say) could be generated starting with a boring 3 octave quarter note chromatic scale. I ran hundreds of permutations...but never once did it hone in on something from Bach (or Mozart or Beethoven). Every permutation was unique musically...and some were even quite usable. But I've often wished I could run the program again in light of discussions like these about evolutionary algorithms. I think there would be some interesting applications. DonaldM
JT: Heading out after a morning on phone calls, emails and slide presentations. On way out the door . . . latching in the context of Weasel has to do with either explicit letter by letter partitioned search, or to do with implicit latching that rewards non-functional but closer population members. Had there been a serious functionality constraint -- which was what Hoyle stipulated to begin with, Weasel would wander around hopelessly in the sea of non-function, unable to find a shore of function to hill-climb. Hill climbing begs the key question: ORIGIN of bio-function based on complex, specific information. (That is why we keep stressing FSCI: 1,000 bits is credibly unreachable by our observed cosmos, and first life credibly needs 600 k bits; with novel body plans at phylum or sub phylum level weighing in at 10's - 100's of M bits.) Optimisation/diversification/loss of already achieved function is not even an issue. GEM of TKI kairosfocus
[30]: OK once in 24 I did use latching in the sense of artificially prohibiting detrimental mutations (which should be apparent by context) but elsewhere I meant "latching" as it could actually be expected to occur in reality through population dynamics. (I will be off at least for several hours.) JT
Hazel wrote [just now in the weasel thread]: "Non-latching implies that mutation is random in respect to fitness. Latching implies that mutation is not random in respect to fitness." Just need to clarify that in this thread (starting in 24), I personally have been using "latching" to mean merely a benfecial trait not changing (or changing rarely) in a species once it becomes fixated. So thus it does not generally change even though one or a few individuals in a population experiences a mutation for that trait. Of course, mutationworks et. al. have written their own version of the weasel algorithm that only operates on one individual, (instead of a population) and wherein latching is caused by expressly prohibiting mutations where it is not "beneficial" in that one individual. But I didn't mean latching in this highly contrived and artificial sense. Rather, I mean merely the obvious fact that traits can become fixated and extremely difficult to change, unless some other very beneficial trait emerges. JT
28 cont. My point would be that even the mutation works simulation is latching, as its implied that no intermediate can take hold. They can't possibly mean that all those "neutral" intermediates as they term them is overtaking the entire population. So therefore, they're saying the original config is latched and stays latched until one and only one target out of 4^6 is hit upon by chance in one go, and then that and that only replaces the original config which remained latched for every other conceivable sequence generated by mutations. TO REITERATE: MUTATIONWORKS LATCHES AS WELL. BUT ONLY THE ORIGINAL AND TARGET CONFIG - THE IMPLICATION IS THAT NO OTHER CONCIEVABLE CONFIG CAN LATCH. DOES THAT SOUND REALISTIC? JT
I had an epiphany as to how to explain the whole latching issue (maybe): Take a perfectly adapted species - say we don't know how it originated. But say a detrimental mutation happens at some gene in one individual of that species. Everyone here presumably will agree that that mutation will dissapear very quickly. So the bit effected by this mutation was "latched" into place preventing a permenant change, but only by virtue of population dynamics, because one detrimental mutation in one individual will not be enough to take hold in a population. But someone will counter - even a single beneficial mutation in one individual could not take hold in a population. But then you're assuming that NO change of any kind is even possible, so why the charade of calculating how many hundreds of millions of years to get a specific 6 character sequence (as in the mutationworks simulation.) JT
gpuccio @ 22, 3) Fitness is fitness. In NS, it is measure only by the capacity of the replicatore to survive and repèlicate, and by nothing else. Comparing your criteria for fidelity to biological evolution, this is the only one where there might be some conceptual mismatch. My understanding of what you are saying here is that fitness can only be measured after the entity is dead, and we can add up its total contribution to the genetic content of the ongoing simulation. Is that correct? If so, then I understand that from your perspective 'fitness function' is a misnomer. "Environmental scoring function" might be more precise. Fitness in the post hoc sense can be tallied up after the modules that carry out selection and reproduction are done. You may also be interested in the 'agent based modeling' approach to simulation, such the Sugarscape model used by Axtell and Epstein in Growing Artificial Societies. Pendulum
As far as the MESA project, it looks like Dr. Dembski is directly associated with that so I'll have to study that carefully. Actually, I had assumed that those three examples of "reasonably faithful" simulations of evolution he mentioned would be from evo-theorists. I certainly did not expect none of them would be. Maybe it was meant "reasonbly faithful" to the I.D. cause. The first one is from ReMine - I should look at that one as well, but there's nothing on the website itself giving any sort of useable overview of the algorithm. But as far as mutationworks - "You are the weakest link. Goodbye." JT
[23] cont. [mutationworks.com] So you have a 6 character sequence, where each character can have one of four values. The assumption they make for the simulation is that between the starting point and the target, there are no advantageous intermediates. So it becomes simply a 4^6 exhaustive search. However, if there were advantageous intermediates, then if such an intermediate occured it would overwhelm a population, and thus any detrimental mutations would be higly unlikely to revert something back (thus virtual latching as a result of population dymanics). They say only one offspring per generation but if the mutation rate is as low as they say, there would be plenty of time for an advantageous intermediate to duplicate repeatedly, thus overwhelming any isolated mutations away from this advantageous intermediate. (Probably the reason the'yre only assuming one offspring per generation is because they erroneously make the same assumption for Dawkin's weasel). But basically they're just assuming that no advantageous intermediates exist so any intermediate can change back at the same rate as anything else, and thus its just an exhaustive (non-cumulative) search. And then their other point is the supposed low rate of mutation means it would takes 100's of millions of years for this 6 character sequence to occur. JT
So mutationworks.com was one "simulation" personally recommended by Dr. Demski in the OP. At the website they present their "simulation" in a contest with the Dawkin's weasel and it is to show that Dawkin's weasel always wins. And thus apparently the whole purpose of the website is illustrate the ostensible unreality of the weasel algorithm. In the"Signficance of Simluation" page it says, "Dawkins' simulation has letters that never change once they are right. Nucleotides, by contrast are never immune to mutation." So, this website that Dembski is endorsing also says that letters are latched into place. This is one of the purported reason they're giving as to why Dawkin's weasel always beats they're own simulation. However, an even more crucial reason for their own simulation's failure can be found on the initial page: "Your lineage begins with a single asexually reproducing organism that leaves one descendent. This pattern is repeated for all generations thereafter. All prior mutations are preserved in your lineage. [empasis added]." So all mutations are preserved from generation to generation - nothing is rejected evidently. This website is basically a joke - they have Dawkins image pasted up there like the boogie man, and each time you hit the "Next Point Mutation" button, it generates new sarcastic and/or humorous comments. JT
Pendulum: "Well, if you don’t accept that the fitness functioni subsumes all interactions of a phenotype and its environment, I can understand why you feel somewhat isolated." I definitely don't accept that. "Simulations such as Tierra and ECHO may be more to your liking, as these don’t use an explicit fitness function." I have been interested in Tierra exactly because of that, but as far as I understand it is not so different from the others because even if the fitness function is not explicit, the environment still seems to measure specific properties. But I could be wrong. "But I think a GP system that created radio antenna designs is just as valid according to your criteria. The fitness of each antenna arises spontaneously from its interaction with the laws of physics." I would like to know more about those systems, I will try to read something. Going back to the general problem, my idea is simple enough: 1) NS is defined as a mechanism of necessity 2) The selection has to be "natural", IOW it has tobe a consequence of the interaction between replicator and the environment, and of the intrinsic functions of the replicator and their variations. 3) Fitness is fitness. In NS, it is measure only by the capacity of the replicatore to survive and repèlicate, and by nothing else. 4) The modifications of fitness (acquisition of new functions) in the RV + NS model are due to RV. The new information accruing from RV is comnpletely "unexpected" by the environment, and the only measure of that information takes place at the level of survival and relication, however it happens. 5) Any simulation of NS must have the same characteristics: the replicators and the environment will be digital systems (after all, it is a simulation), the vairation can be targeted at different values and modalities, but the selection must take pace in exactly the same way: out of random variation, the digital replicators have to develop new information which confers to them new functions capable of exploting better the existing dogotal environment. The environment must be totally blind to that. The new functions must be true functions, spontaneously giving survival or replication advantage to the digital replicator, and must not in any way be "recognized" by the system for other characteristics. gpuccio
skeech: a passive role is a role just the same. What I mean is that the environment has no role in creating or modeling the information which determines fitness, while the information already existing in the replicator, its functions (especially survival and replication itself) and the random variation in that function are the real source of the new information. The selection itself is not made by the environment, but is a consequence of the interaction between the replicator and the environment, and of how good the functions of the replicator are in that environment. That's what I mean by saying that the environment has a passive role. Moreover, it should be obvious that the environment is totally blind to the replicator, has no relationship with its necessities or potentialities, if not as a passive consequence of the fact that the replicator has to be fit for the environment (be it because it was designed for it, or because it has adapted to it). No theory I know (except perhaps the extreme forms of TE) really assigns an "active" role to the environment (in the sense which I have tried to explain). It's the replicator which is functional, not the environment. It's the replicator which has rules of necessity to be able to survive and replicate: the environment is just permissive or not permissive to the existence of the replicators. If NS has to be considered a mechanism of necessity (and that's usually the way ot is considered), then it's the replicator who creates the rules, because it's its survival and replication which generates the output. We can say that the replicator self-selects itself according to the random modificaions of its environment and to the variations/adaptations it can exhibit in response, and always according to the basic laws of necessity intrinsic in life and replication. A wolf eating a rabbit is just part of an interaction. In itself, it does not select anything. It's the rabbit adapting to wolves which self-selects itself for survival. gpuccio
gpuccio, Well, if you don't accept that the fitness functioni subsumes all interactions of a phenotype and its environment, I can understand why you feel somewhat isolated. Simulations such as Tierra and ECHO may be more to your liking, as these don't use an explicit fitness function. But I think a GP system that created radio antenna designs is just as valid according to your criteria. The fitness of each antenna arises spontaneously from its interaction with the laws of physics. Pendulum
NS is more an effect of the replicator than of the environment, and the environment plays only a passive role.
That makes no sense. The environment has a huge effect on natural selection. Would polar bears be white if snow were black? Would insects be camouflaged if all of their predators were sightless? When a rabbit (a replicator) is eaten by a wolf (part of its environment), is the environment acting passively? skeech
Pendulum: "The program knows the target, but the population does not know the target." That's what makes, IMO, all EAs silly in the measure that they are used as a even vague simulation of NS. I have debated this principle many times here and, although I seem to be rather alone in believing that, I still stick to it. The idea is simple: NS is a kind of selection where the fitness arises of its own in some environment: it's not that the environment "recognizes" it, as many seem to think. NS is more an effect of the replicator than of the environment, and the environment plays only a passive role. The fundamental principle is therefore that the environment must know nothing of the replicator, or of the principles which determine its fitness, In other words, there must be no fitness function programmed in advance. What I mean is that fitness must arise of itself, "on its bootstraps": any other situation, where fitness is in some way "recognized", has nothing to do with NS, and is a form of intelligent selection. The weasel is just a form of trivial IS. Other EAs are smarter, but still IS they are, all of them, including Zachriel's plays with words and phrases. This fundamental difference between "real fitness" and "recognized fitness" is fundamental, and yet constantly overlooked. Fitness has to be fitness, and not the adherence to a pre-ordained function where the "fitness" is artificially conceded because the system recognizes a target which it is programmed to recognized. That is not fitness, but only the intelligent recognition of a pattern by observation and measurement of it. I have suggested a couple of times the only kind of simulation which would really simulate NS: any simulation where the replicator and population and mutation are in some way controlled, but the selection is not. In other way, a simulation which should generate, in a natural computer environnment, replicators which "spontaneously" improve their fitness in the environment, profiting of the natural rules of the envirnment, while the environment does not actively recognize any fitness function, but just acts as a passive filter for spontaneous fitness. And such a simulation should be able to generate a significant generation of functional complexity in the replicators. That would be a simulation of NS. And, IMO, would never work. In the meantime, I have nothing against simulations of IS, provided it is understood that they are good simulations of ID, and not of NS. Indeed, intelligent selection is probably a tool which can easily allow the implementation of ID, together with guided or targeted variation. As I have argued elsewhere, that's the only kind of implementation of ID by partial random search of which we have examples both in nature (antibody maturation) and in human simulation of it (protein engineering). gpuccio
The reason for this simulation is that one of the current theories of evolution is that part of the DNA is non coding and is essentially mutating away because of its non use. Eventually a small number of these non coding DNA sections becomes useful. A demonstration that a random process can produce something close to a functional protein over time or that even after millions of iterations it was never able to get close to one would also be interesting. So I had this idea to see if some form of mutation could ever lead to something useful. Suppose one took a random string of DNA or maybe a repeating sequence that is typical in a genome that is 240 nucleotides long. Nothing magic about 240 except it would represent a protein of 80 amino acids. Then take a set of say a 1000 proteins of length 90-100. I don't know how many exist of exactly that length but maybe the 1000 proteins could be generated by taking sub lengths of slightly longer proteins. Then mutate the string, say two or three nucleo tides at a time to represent a certain time period and determine the protein and compare it to each of the 1000 proteins starting at 5 proteins in. Determine a measure for how close the mutated protein is to any of the sample proteins and keep score. After x iterations see how the similarity is to any of the functional protein polymers. The mutation could be of several different types such as an insertion, deletion, SNP or something else. The similarity would assess whether two comparable amino acids have similar chemical properties or not and adjust the distance measure accordingly. The new protein could be shifted up or down each of the 1000 targeted polymers to see if some type of shift would improve the distance measure or closeness of the base protein with each of the targeted proteins. Do this a large number of times to see if any of the functional proteins or subsections of these functional proteins could ever be approached. There are a number of issues but a basic one is something I do not know much about. How long would each iteration take since the basic polymer would have to be compared to say a 1000 other polymers maybe 10 times each (shift the frame up by 5 and down by 5 so there would be 10,000 comparisons at each iteration. Maybe I am dreaming on how fast computers are these days and this type of experiment would need a super computer. A second issue is that 1000 is a small subset or potential proteins but this could be remedied by including more proteins in the target set. A third issue is how similar are two different amino acids. Some are very similar and some are very different so the distance measure has to reflect this. There are probably a lot of other details that I haven't dreamed of but the purpose would be to see if a random process can approach anything useful. It is always possible that the mutated DNA leads to a useful protein but just not in the set of known ones that are in the simulation. Since I know very little about proteins, there may be some way of determining if a particular protein could ever fold in to a potentially useful shape just by its amino acid sequence. This could be a second measure and possibly a way of pre screening potential sequences for additional mutations or comparison to a larger number of proteins. There is no selection in this because the theory says selection does not begin till the potential polymer becomes useful. When a particular iteration became potentially useful, then maybe some selection could be included. jerry
gpuccio @ 11, That Weasel had a string hard wired into the fitness function is not important. Dr. Dembski makes this point very clearly in his discussion of MESA, where he points out that MESA is hard wired to optimise for a sting of 0s, and that this is exactly what Weasel does. The program knows the target, but the population does not know the target. Pendulum
DonaldM, As much interest as there is in EC, I don't get the impression that much of it is directed at simulating natural evolution. Lots of people just want to reap the benefits of an approach that works. If you showed this group an algorithm that was based on baraminology and worked faster than EC, or solved otherwise intractable problems, they'd buy it. Other scientists that are trying to simulate nature have to pick and choose which facet of nature to explore. John Holland's ECHO is trying to do something very different from MESA. Neither system atempts to understand interactions of entities reproducing at vastly different timescales, even though a big part of our genome is dead viruses. Bottom line - there are not enough scientists, and there is a lot of great science waiting to be done. A personal supercomputer costs under $10K from Dell. This is science the Discovery Institute can and should be sponsoring. Pendulum
Dawkins didn't intend to tell us anything about real biology - didn't you just read the Dawkins' quote that R0b provided? He intended to demonstrate a principle that in other contexts might apply to real biology - that is true, because it's a powerful principle that has been proven to be useful in many fields. It's not Dawkins or those of us here discussing Dawkins' view of Weasel that are trying to make more of Weasel than it is. hazel
Huh. It just occurred to me from ROb's comment above why Joseph (in the latching thread) misunderstands the notion of "cumulative" selection. Cumulative in TBW means that the total phrase is closer to the target, not that each letter is. So when a 28 letter target is matched by 15 letters, a progeny that matches 16 letters will be an cumulative advance even if a particular letter reverts. In short, Dawkins's use of "cumulative" implies non-latching of individual letters. David Kellogg
Rob in #12. I really think you miss the point. Dawkins's program doesn't demonstrate anything at all about biological reality. Nothing. In TBW he cleverly tries to avoid the problems inherent with natural selection by inventing a new term cumulative selection. But what has he actually explained? Nothing. His clever phrase tells absolutely nothing about how evolution actually brought about the multi-variations of life forms that exist on this wonderful planet. He could have called phlophertophby selection and it would explained just as much. Niether his book nor his program tell us anything about biological reality. Its a rhetorical gimmick and little else, all his caveats not-with-standing. DonaldM
gpuccio:
The weasel algorithm, whatever the details of its working, has one distinguishing feature: the program already knows the phrase it is looking for. And if that seems silly, well, it is. Just ask Dawkins: he will certainly give many complex arguments for his utilization of such a silly model, but silly it remains anyway.
Actually, it has at least two distinguishing features: #1 A completely artificial fitness function, as you point out. #2 An algorithm that uses cumulative selection to maximize fitness. Dawkins' explicit stated purpose is to illustrate cumulative selection (#2 above). If you don't like the fitness function (#1 above), you can replace it with one you like better. It makes no difference to his illustration. Dawkins was very careful to point out that his target-based fitness function has no analogue in nature, lest there be any confusion:
Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal.
His caveats seem to have fallen on deaf ears. R0b
uoflcard: "My question is, does nature really select for parts of an effective protein, for example? Don’t you need the entire code (in the right context) before it can be of any use?" The answer to that is easy. Nature cannot select for parts of an effective protein (unless they are functional themselves). And yes, you need the entire code, or at least the minimum code which can give the necessary and selectable function. That's, very simply, why the darwinian model cannot work. The weasel algorithm, whatever the details of its working, has one distinguishing feature: the program already knows the phrase it is looking for. And if that seems silly, well, it is. Just ask Dawkins: he will certainly give many complex arguments for his utilization of such a silly model, but silly it remains anyway. Just a reminder: nature is not supposed to know what it is searching for. Indeed, nature is not supposed even to search for anything. The "only" thing nature is supposed to do is to provide thousands and thousands of complex functional proteins, to connect them in complex organized networks, and many other smart things which I will not state for brevity, but all of that blindly, and finally never know that such an astonishing task was accomplished. gpuccio
Sorry, on a totally different (and for once non argumentative) note, how come some of the threads have less comments then displayed on the UD mainpage? E.g: Message Theory – A Testable ID Alternative to Darwinism – Part 2 (says 21 comments) but only 8 are displayed. And why are the comments closed? Thats why posted here, as this thread is not busy. Thanks! eintown
to uoflcard The answer is no. Excluding letters that are correct from mutating is not what Weasel does. This has been discussed at length starting here You might start by reading my post #320. hazel
This is somewhat off-topic, as it has to do with phylogenetic software rather than evolutionary simulations, but what can be said about the Phylogenetic Tree of Mixed Drinks? anonym
A couple questions, one for Mr. Dembski (although it could be answered by others) and one for anyone in the know: #1) What types of things can make an evolution simulation "less than faithful" to biological reality? #2) I was looking briefly at MutationWorks. It tries to form the phrase "METHINKS IT IS LIKE A WEASEL", supposedly with biological evolutionary techniques. In each generation, the letters that aren't what they are supposed to be are changed while the ones that are correct stay the same. So just looking at the first word, if your first generation produces: XITIWWKQ Then it will keep the T and K and change the others, until finally it reaches: METHINKS My question is, does nature really select for parts of an effective protein, for example? Don't you need the entire code (in the right context) before it can be of any use? uoflcard
Pendulum
I agree that evolutionary algorithms can give widely divergent results, given different parameter settings and data sets. I think that is why it is important to be open source, and publish parameter settings,etc. so that work is reproducible.
I think you've hit on one of the major issues with these types of programs: no one knows what the correct algorithm ought to be or what the actual parameters are supposed to be to model biological evolution. What seems to be missing in all of these sorts of studies is any sort of mapping of the computational onto the biological or vice-versa. I recall in the Avida study referred to in the OP that Lenski et.al. defended the way they put their model together as mirroring "exactly what evolution requires." But exactly what evolution requires was precisely the point at issue! What was missing was a correspondance of the computational program to biological reality. That problem seems to be endemic to all these studies, which is one reason why there's such a diversity of models, parameters and outcomes. DonaldM
I agree that evolutionary algorithms can give widely divergent results, given different parameter settings and data sets. I think that is why it is important to be open source, and publish parameter settings,etc. so that work is reproducible. Should we wait for a longer excerpt or summary of this paper in the near future? I think from all the comments recently on GA related topics, it is clear that many people here are interested in this topic, and would like to hear your thoughts. Pendulum
Fantastic, I look forward to it, I'll keep an eye out! Oh, and God speed, regarding the opposition at Baylor and general Darwinist opposition that seeks to muffle your progress. I have confidence that the facts and logic supporting the ID position will eventually overcome the authority of the establishment. It has to. PaulN
In answer to the two previous questions, there was a conference back in April 2000 that I helped organize titled THE NATURE OF NATURE. It was sponsored through Baylor's then Michael Polanyi Center, which I directed (the center itself was dismantled right after the conference because of agitation by Darwinists in and outside Baylor -- see here for the rise and fall of that center). Because the center was shut down, no conference proceedings were ever published ... until now. Those conference papers, all updated, as well as a number of new ones will be part of an anthology that will be appearing later this year (the volume will be titled THE NATURE OF NATURE). The essay described in this post, coauthored with Bob Marks, will be appearing in that volume. William Dembski
Excellent. I'd love to be able to read it, is it going to be publicly available when complete? PaulN
Is this essay your forthcoming paper or related to it? tragic mishap

Leave a Reply