Uncommon Descent Serving The Intelligent Design Community

The Simulation Wars


I’m currently writing an essay on computational vs. biological evolution. The applicability of computational evolution to biological evolution tends to be suspect because one can cook the simulations to obtain any desired result. Still, some of these evolutionary simulations seem more faithful to biological reality than others. Christoph Adami’s AVIDA, Tom Schneider’s ev, and Tom Ray’s Tierra fall on the “less than faithful” side of this divide. On the “reasonably faithful” side I would place the following three:

Mendel’s Accountant: mendelsaccount.sourceforge.net

MutationWorks: www.mutationworks.com

MESA: www.iscid.org/mesa

Comments
kairosfocus, I'm relaying something from Zachriel in response to this from you:
Not at all. We see 300+ samples of letters, of which 200+ are in a go-correct then stay correct condition. There are NO observed reversions.
Zachriel writes:
For a confidence interval of 2% (letters from 0 to 26 ± 0.5), and a confidence level of 95%, we need to sample 70% of a population of 1000. (Interestingly, we only need to sample about 2400 in a population of a million or a billion for the same level of confidence. This is why a drop of blood containing trillions of particles can represent the composition of all the blood.) Anyone can see that Weasel doesn't require latching to work. With reasonably large populations the best of the brood will only occasionally show a letter reversion. A typical sample of ten Mother Weasels will show the same results that kairosfocus insists must be due to latching. The appeal to sampling is obviously faulty as it is contrary to simple observation.
FYI, the link is to an Excel file from which you can run Zachriel's Weasel. Check it out! No latching there, certainly not explicit latching: yet it will tend to show the same results that kairosfocus insists must be due to explicit latching.David Kellogg
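A minimal Python sketch of the kind of non-latching Weasel described here (the alphabet, population size, and mutation rate are illustrative assumptions, not Zachriel's spreadsheet or Dawkins' 1986 parameters). Every letter of every child is free to mutate, and the script counts how often a correct letter in the champion line actually reverts:

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100    # assumed children per generation
MUT_RATE = 0.04   # assumed per-letter mutation probability

def mutate(parent):
    # No latching: every position may mutate, whether or not it already matches.
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else ch
                   for ch in parent)

def score(phrase):
    # Mere proximity to the target: count of matching positions.
    return sum(a == b for a, b in zip(phrase, TARGET))

def run():
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = reversions = 0
    while parent != TARGET:
        generation += 1
        children = [mutate(parent) for _ in range(POP_SIZE)]
        best = max(children, key=score)
        # Count correct letters in the champion line that slip back this generation.
        reversions += sum(p == t != b for p, b, t in zip(parent, best, TARGET))
        parent = best
        if generation % 10 == 0:
            print(f"gen {generation:3d}: {parent}")
    print(f"hit target in {generation} generations; "
          f"{reversions} letter reversions among the champions")

if __name__ == "__main__":
    run()

Whether any reversions appear in the champion line of a given run depends on the assumed population size and mutation rate, which is the very point being argued over in this thread.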
April 3, 2009 at 05:48 AM PDT
kairosfocus [169], it remains the case that you engaged in the very kind of rhetorical dismissal which you lament. Further, there was no slander in the opening sentence. The author connected ID and creationism, but that's just an argument. Indeed, many ID advocates, by rejecting common descent, are in fact creationists. Further, the first "ID textbook," Of Pandas and People, was authored by two creationists. Also, many ID advocates, including you, routinely cite avowed creationists in support of their claims. The lines seem easy enough to draw. ID proponents may ally with creationists, co-author with creationists, and be creationists themselves, but ID opponents are not allowed to say that ID is a form of creationism? How about this? I once heard a secular Jewish comedian say that she wasn't a Jew, she was just "Jew-ish." Perhaps those on the evolution side should refer to ID not as creationism but as "creation-ish." :-)David Kellogg
April 3, 2009 at 05:16 AM PDT
Not much time this morning, but let me point out that Kairosfocus’s example of taking 0.1% of your blood as a sample is most assuredly NOT analogous to looking at the 8 phrases Dawkins printed as a sample of all the children in the 64 generations of the run. We have good, empirical reasons for thinking that the composition of the blood is the same throughout the body, so taking any small fraction of it will reflect the composition of the whole. If I have 1000 white balls in a bag, sampling one will be sufficient. We know, however, that the 9600 phrases (assuming N = 150 for illustration’s sake) are absolutely NOT the same. Because of the process of picking the best child from each generation, even if we printed each parent for every generation, and not each tenth, we would not have a random variable. The whole point of the program is to demonstrate that while the mutation process itself is random, the results are not random because of the action of the fitness function. So it is not even remotely accurate to compare displaying the 0.1% sample of phrases of most fit children after every 10th generation to drawing 0.1% of your blood for a blood test. (And kf, you can object each time I use the word “fitness” if you wish, but it is the standard term and I’m not going to abandon it. I’ve made it clear - gpuccio said so ;) - that I know that the general term does not necessarily imply anything about biological functional fitness.)hazel
April 3, 2009 at 05:09 AM PDT
PS: On "naive" I simply point out that the text and the print runs show what would in other contexts be fairly conclusive evidence. It is because of the direct statement that is reported that I have gone with implicit over explicit latching to explain 1986. Absent that, explicit latching is the best explanation on the evidence of TBW, ch 3. As the initial Monash University understanding [of a pro-Darwinist team] also substantiates. Natural, not naive.kairosfocus
April 3, 2009 at 12:42 AM PDT
Onlookers (and participants): Further follow up on points. For record.

1] DK, 167: Don’t like the first sentence? Refuse to consider the argument. From this, you would not get a clear understanding that I objected to uncivil conduct to the point of slander in the opening sentence. Nor, that I stated in so objecting that those who use the article will need to present the substantial case here -- without uncivil language -- if they want my response. (Which no-one has evidently thought it fit to do.) Nor, that, e.g. at 88 above, I have outlined step by step why I have concluded from Mr Dawkins' words as I do.

2] Hazel, 168: The law of large numbers is about the behavior of a random variable: it says that the observed average value of a set of observations will more closely approach the expected value of the variable as the number of observations increases. Not quite. LOLN is about the behaviour of credibly random samples from a population. Namely, that there is a reasonably strong tendency for the samples to be representative once they have adequate numbers. Subtle, but that is where the rest of the analysis goes off the rails.

3] it says that the observed average value of a set of observations will more closely approach the expected value of the variable as the number of observations increases. Nope, it is broader than that. E.g. there is a reason why the average of samples tends to the population avg. For, random -- in principle, equiprobable [there are variations . . . ] -- samples tend to reflect the distribution of the population, once you have enough of them. So, on analysing a population probabilistically so that a certain subset is fraction p of the pop, a good sample of size N will tend to have about Np members from the relevant subset. As a result, once Np is a reasonable number, you will expect to see such subsets appearing in the sampled population. (Which in turn is the basis for my remarks about far-skirt members. Think about the darts and chart illustration/thought expt.)

4] the strings in BWM are NOT random variables. They are a product of a process that selects for fitness - they are not instances of a random variable. First, I again object, for good reason, to the insistence on a very flawed term, "fitness." The context is just the opposite: selecting on mere proximity without reference to functionality. Note Mr Dawkins' "nonsense phrases." Next, one way to get a representative cross section of a process and to infer to its dynamics is to sample it at regular intervals uncorrelated with the credible process dynamics; but of course within the bandwidth thereof, on the good old rule that the sampling frequency is at least 2f. (That is how, for instance, a CD works, or digital video, or a digital storage oscilloscope. I need not go on into details on phase shifts and sampling rise times for transients. In short, we see here different domains of sampling at work: telecomms and instrumentation and control, as well as the broader physical sciences, are highly relevant contexts in which a whole world of sampling is also done. FYI, H, I used to regularly set a lab exercise very similar to the dart and chart one as the very first lab exercise for physics students doing in effect the first year of a 4-year college physics programme, making them do various sampling population analyses and 3-sigma control chart exercises. In turn that was based on and extended my own very first university physics lab exercise.) So, the Weasel samples circa 1986 can indeed be representative of the trends in "good" runs of champions, circa 1986.
And, the relevant population is that of the letters within the champions. Mr Dawkins' description in TBW, ch 3 at that time, underscored that the published runs were in fact representative of "good" runs. Cf 88 above.

5] the sample size here is very small. Dawkins shows 8 members of a 64 generation run, including the first and last. Not at all. We see 300+ samples of letters, of which 200+ are in a go-correct then stay correct condition. There are NO observed reversions. All of that in a context that positively enthuses over cumulative selection and progress to the target.

6] If we use a population N = 150, then there have been 150 x 64 = 9600 phrases, of which we only see 8, which is less than 0.1%. This is an insufficient sample . . . Many relevant populations are of continuous variables [between any two distinct values, you can find another valid member of the population] or quasi-continuous variables [i.e. very fine-grained discrete behaviour we approximate as effectively continuous; e.g. the origin of gas pressure in molecular collisions], or of indefinitely large numbers of actualised or potential events. That is, the population is in effect empirically infinite. But, through reasonable samples we can be quite confident of picking up patterns in the overall population. Thanks to the LOLN. For instance, consider a blood sample of a few cc's, say 5. Typical humans have ~ 5 litres of blood, i.e. the sample is about 1 in 1,000, or 0.1%. Blood constituents are not a fundamentally random population, being driven by body processes (though of course there will be fluctuations in any one 5 ml sample). Samples are as a rule taken at a given convenient site, and are at or below 0.1%. But, they sufficiently reliably reflect the general patterns of our blood to be routinely used in diagnostics; including on matters of life and death. In short I think onlookers will see here why I cite this to illustrate what is to my mind a case of selective hyperskepticism.

7] we have agreed, I think, that in the implicit latching case the probability of a child with a mutated correct letter being the most fit in the generation is extremely low, and since Dawkins is only showing a sample of best fit children every 10 generations, there is an extremely low probability that that set of data would show a letter reversal. In short, in the end, you agree with my analysis that the samples will correctly reflect the implicitly latched behaviour.

8] within the limits of reasonable probabilities, Dawkins' data is just as likely to be the result of non-latching (implicit) as it is of explicit latching. Latching of o/p as credibly observed is explained by two possible latching mechanisms: explicit, or implicit. Implicit latching is latching, not "non-latching."

9] Dawkins said nothing about latching. Let's guess, from 88 above: Cumulative selection, rewarding the smallest increment in proximity, etc? Not to mention, publishing the following o/p pattern, on pp. 47 - 48 of TBW, and the second again in New Scientist that same year, 1986:
WDL*MNLT*DTJBKWIRZREZLMQCO*P
WDLTMNLT*DTJBSWIRZREZLMQCO*P
MDLDMNLS*ITJISWHRZREZ*MECS*P
MELDINLS*IT*ISWPRKE*Z*WECSEL
METHINGS*IT*ISWLIKE*B*WECSEL
METHINKS*IT*IS*LIKE*I*WEASEL
METHINKS*IT*IS*LIKE*A*WEASEL

Y*YVMQKZPFJXWVHGLAWFVCHQXYPY
Y*YVMQKSPFTXWSHLIKEFV*HQYSPY
YETHINKSPITXISHLIKEFA*WQYSEY
METHINKS*IT*ISSLIKE*A*WEFSEY
METHINKS*IT*ISBLIKE*A*WEASES
METHINKS*IT*ISJLIKE*A*WEASEO
METHINKS*IT*IS*LIKE*A*WEASEP
METHINKS*IT*IS*LIKE*A*WEASEL
_____________ GEM of TKIkairosfocus
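For anyone who wants to check the tallies being argued over against the excerpts just quoted, here is a short Python sketch that compares consecutive quoted lines of each run (transcribed from the comment above) and counts how many already-correct letters stay correct and how many revert. The published excerpts are spaced roughly ten generations apart, so this says nothing about the unprinted intermediate generations:

TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

RUN_1 = [
    "WDL*MNLT*DTJBKWIRZREZLMQCO*P",
    "WDLTMNLT*DTJBSWIRZREZLMQCO*P",
    "MDLDMNLS*ITJISWHRZREZ*MECS*P",
    "MELDINLS*IT*ISWPRKE*Z*WECSEL",
    "METHINGS*IT*ISWLIKE*B*WECSEL",
    "METHINKS*IT*IS*LIKE*I*WEASEL",
    "METHINKS*IT*IS*LIKE*A*WEASEL",
]
RUN_2 = [
    "Y*YVMQKZPFJXWVHGLAWFVCHQXYPY",
    "Y*YVMQKSPFTXWSHLIKEFV*HQYSPY",
    "YETHINKSPITXISHLIKEFA*WQYSEY",
    "METHINKS*IT*ISSLIKE*A*WEFSEY",
    "METHINKS*IT*ISBLIKE*A*WEASES",
    "METHINKS*IT*ISJLIKE*A*WEASEO",
    "METHINKS*IT*IS*LIKE*A*WEASEP",
    "METHINKS*IT*IS*LIKE*A*WEASEL",
]

def transitions(excerpts):
    # For each pair of consecutive excerpts, count letters that were already
    # correct and stayed correct, versus letters that reverted to incorrect.
    stayed = reverted = 0
    for earlier, later in zip(excerpts, excerpts[1:]):
        for e_ch, l_ch, t_ch in zip(earlier, later, TARGET):
            if e_ch == t_ch:
                if l_ch == t_ch:
                    stayed += 1
                else:
                    reverted += 1
    return stayed, reverted

for label, run in (("run 1", RUN_1), ("run 2", RUN_2)):
    stayed, reverted = transitions(run)
    print(f"{label}: {stayed} correct letters stayed correct, {reverted} reverted")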
April 3, 2009 at 12:25 AM PDT
Hi David, now that we’ve gotten clear on the fundamental difference between the explicit and implicit cases, I’d like to go back and respond to something you said at 135:
kairosfocus, since the 1986 “observations” are a highly biased sample (by the very nature of the experiment) from a population of unknown size, you can conclude precisely nothing about latching from it.
kairosfocus has claimed that the law of large numbers supports his argument that the data in BWM leads one to the conclusion that Dawkins used an explicit latching routine, but I don’t think the law of large numbers really applies to the situation. The law of large numbers is about the behavior of a random variable: it says that the observed average value of a set of observations will more closely approach the expected value of the variable as the number of observations increases. First of all, the strings in BWM are NOT random variables. They are a product of a process that selects for fitness - they are not instances of a random variable. Secondly, the sample size here is very small. Dawkins shows 8 members of a 64 generation run, including the first and last. We don’t know the generation population size, but other people’s programs show that populations in the range of 100 - 200 produce results similar to Dawkins. If we use a population N = 150, then there have been 150 x 64 = 9600 phrases, of which we only see 8, which is less than 0.1%. This is an insufficient sample even if the children were truly instances of a random variable. And last, since we have agreed, I think, that in the implicit latching case the probability of a child with a mutated correct letter being the most fit in the generation is extremely low, and since Dawkins is only showing a sample of best fit children every 10 generations, there is an extremely low probability that that set of data would show a letter reversal. So within the limits of reasonable probabilities, Dawkins' data is just as likely to be the result of non-latching (implicit) as it is of explicit latching. The law of large numbers really has nothing to do with this, I think. And since Dawkins said nothing about latching, and since non-latching is random in respect to fitness and explicit latching is not, I see no reason (other than a lack of understanding or a bias) for thinking that explicit latching is the “natural” interpretation of the data. It may be the naive interpretation, and therefore natural to some, but it is not the best interpretation if one thinks both about the probabilities and about Dawkins' perspective.hazel
April 2, 2009 at 07:51 PM PDT
kairosfocus [161], I have very little time today but wanted to respond to this, your response to a request to evaluate Patrick May's walk-through of the passage in TBW as a guide to coding:
Mr Kellogg et al lost me in that linked, in the very first sentence, on slanderous incivility.
Observers, that is a prime example of what kairosfocus calls "rhetorical dismissal." Don't like the first sentence? Refuse to consider the argument.David Kellogg
April 2, 2009 at 06:53 AM PDT
Thank you, gpuccio. I'm glad you found what I wrote clear and simple. One of my goals in discussions is for people, even if they are in disagreement, to be at least clear about what they disagree about. :)hazel
April 2, 2009 at 06:19 AM PDT
Hazel: I was on my way somewhere else but passed by. A few points:

1] 162: “Fitness function” is the standard phrase used for that part of the program which evaluates candidates to see which are passed on to the next generation. I am objecting to the term used, its denotation and the inextricably attached connotation; precisely because of rhetorical impact. "Fitness" cannot evade the import of function in a context. We are dealing with mere proximity to a target, of non-functional "nonsense phrases."

2] No one - Dawkins himself nor anyone else - has ever claimed that matching the target string modeled biologically functional fitness. Mr Dawkins set up his example to answer a challenge, from Hoyle and others, on the problem of achieving complex bio-functionality. He did so by arguing that cumulation of micro-increments in function was enough. Then, he provided a case study of target proximity search, without reference to function. Of course I am well aware of his qualifying words and disclaimers -- cf 88 above -- but I am also aware that the rhetorical impact of the example will still go through. Indeed, the very choice of the phrase that highlighted the term "weasel" strongly hints that Mr Dawkins intended the example to make its point by taking advantage of the difference between the example by computer and the qualifying words. Indeed, his qualifications include that he understood the example to be misleading on the issue of natural selection -- the precise point at stake. Do you see why I am not at all amused? And, why alluding to qualifications in a context where a misleading example is being headlined [and thus having its rhetorical impact], is NOT tilting at strawmen?

3] in order to mask the correct letters you have to have consulted the target phrase, and stored that information on a letter by letter case as additional information about the phrase. As I pointed out, the issue is when you mask, and whether it is by letter or by phrase, not if you mask. BOTH explicit and implicit latched versions of Weasel are on the wrong side of the issue of functionality as an a priori of any meaningful natural selection analogy.

4] in the letterwise case the mutation function knows which letters to not mutate, so mutation is not entirely random in respect to fitness, and in the “target as a whole” case the mutation function mutates irrespective of whether the letters are right, so mutation is random in respect to fitness. Again, I object to the use of "fitness" -- including the context of its definition and standardisation. Onward, the point is that in both cases functionality is ignored in Weasel, and mere proximity is rewarded. Whether that leads to masking done letter by letter or phrase as a whole makes little difference. In both cases, Weasel is fundamentally misleading. On the narrow technical point of probabilities, the situation is that if letters are explicitly latched, once they hit the target per letter, it locks off further search. If they are not explicitly latched, pop size and [unrealistic] mutation rates and probabilities of being "good" latch progress, until the phrase is filled in and the mask blocks further mutations. In either case, Weasel's performance depends on being fundamentally distinct from the world of living things and proto-living things. And, to present such an example in a book on BLIND watchmakers is thus misleading. Seriously so.

GEM of TKIkairosfocus
April 2, 2009 at 06:08 AM PDT
hazel and others: Excuse the intrusion, I have not followed the discussion because I was not specially interested. Just wanted to thank hazel for this clear definition: "“Fitness function” is the standard phrase used for that part of the program which evaluates candidates to see which are passed on to the next generation. The word “fitness” does not necessarily have to mean biological fitness, and it doesn’t have to mean functional fitness. The word is a very general word that refers to how well an entity meets whatever criteria is present in the program under discussion." I do like it. It is clear and simple. And that is exactly the reason why I believe that all simulations using a fitness function are simulating some form of Intelligent Selection, and never Natural Selection. To simulate NS, as I have many times stated, no fitness function must be present. Fitness has to be true functional fitness, and must be sufficient to guarantee a reproductive advantage in the system "of its own", and not because it is "recognized" by some pre-programmed function in the system. I know of no simulation of NS.gpuccio
April 2, 2009 at 06:06 AM PDT
to kairosfocus. “Fitness function” is the standard phrase used for that part of the program which evaluates candidates to see which are passed on to the next generation. The word “fitness” does not necessarily have to mean biological fitness, and it doesn’t have to mean functional fitness. The word is a very general word that refers to how well an entity meets whatever criteria is present in the program under discussion. In this case, fitness refers to how many correct letters are in the phrase. That’s all. You keep arguing about issues that are not issues. No one - Dawkins himself nor anyone else - has ever claimed that matching the target string modeled biologically functional fitness. To mix metaphors, you keep tilting at a strawman of your own making. Then you write,
Then, the mutation module per se has no “knowledge” of the target in any case. All it would know on an explicit case is that some letters are masked off. turn off mask, and with the right pop and rate, you are at implicit case.
Yes, but in order to mask the correct letters you have to have consulted the target phrase, and stored that information on a letter by letter case as additional information about the phrase. Of course if you turn the mask off you get the implicit case, but that is exactly what I said. The difference is whether the mutation function does or does not have information about the target on a letter-by-letter basis. If the mutation function does have such information, in the form of a mask or flag having been stored with the letter, then the mutation function is not entirely random in respect to mutation. And, you write,
And, on the implicit case, when the target phrase has been hit, the whole is masked off at once from further mutations. The difference is whether you define hitting the target as a whole or as a letterwise case. When to mask, not if.
Of course when the target phrase is found, there are no more mutations, because the program quits. And, yes the difference is whether you define “hitting the target as a whole or as a letterwise case.” Again, in the letterwise case the mutation function knows which letters to not mutate, so mutation is not entirely random in respect to fitness, and in the “target as a whole” case the mutation function mutates irrespective of whether the letters are right, so mutation is random in respect to fitness. I agree that “letter by letter” and “phrase as a whole” is another way of highlighting the essential difference between explicit and implicit.hazel
April 2, 2009 at 05:10 AM PDT
kairosfocus @161
7] Jay M 159, David Kellogg posted a link in another thread that shows pretty convincingly that Dawkins’ text in The Blind Watchmaker cannot reasonably be read to suggest explicit latching. Mr Kellogg et al lost me in that linked, in the very first sentence, on slanderous incivility.
"Slanderous incivility"? The linked page equates ID with creationism, very briefly (with reference to the famous changes in Of Pandas and People). Rude? Perhaps. Unnecessary? Certainly. Hardly a reason to ignore the real issue raised.
As to what Mr Dawkins said circa 1986, and what it naturally means, cf 88 above for my latest citation and comments. You will see that explicit latching, for good reason, is a very natural understanding.
In the article you refuse to read, the author quotes the full text regarding the weasel algorithm from TBW and goes through it line by line, building a program directly from Dawkins' own words. The text of the book is readily available via Google. Could you also go through it line by line and show where Dawkins' explanation (not his sample output, but his actual explanatory text) could be interpreted to specify explicit latching? I've now re-read it myself several times and cannot see any way to support that contention. JJJayM
April 2, 2009 at 04:09 AM PDT
Onlookers and participants: Further follow up on points of note:

1] Hazel, 155: felt that we were saying the same thing in different ways, but I wanted to make sure. On the cited point, yes. On material context, note my remarks above, and below.

2] I also face a “hostile audience” This site (for all its flaws and troubles) bears little material resemblance to the likes of Anti Evo, et al. An audience that disagrees is one thing; the sort of routine contempt, dismissive rhetoric laced with that, and general nastiness I have seen at sites such as the above named are beyond the pale of basic civility. Underneath, we hear the distinct echo of Mr Dawkins' notorious claim that those who differ with his evolutionary materialism [especially if influenced by a religious perspective] are "ignorant, stupid, insane or wicked." And, we see that backed up by the abusive magisterial power of major institutions, and expressed in question-begging censorship and hijacking of science in service to a highly controversial worldview and its agenda: materialism. Only where there is a willingness to address matters on the merits can we have serious progress. Which is why I have in latter days principally dialogued with you on the Weasel matter. But, I have to always remember the hostile onlookers. [Which, inter alia, is why I have to repeatedly underscore such matters as the import of the actual pattern of o/p's and discussion thereof by Mr Dawkins circa 1986.]

3] DK, 157: Focusing on the winners is only relevant in examining what you are calling “implicit latching.” First, apology appreciated. (You will note my own, where it seems I inadvertently used language that, while intended to be on the merits, seems to have been overly pointy.) Generation champions, in the context of the conditions of Weasel c. 1986, and especially Mr Dawkins' remarks on cumulative progress, are actually telling us a lot about the population of the runs. For instance, if showcased "good" runs are taking 40+ and 60+ generations to hit target, then we know that no-change is winning ~ 50% of the time. That means that the mechanism strongly tends to preserve letters already on-target. Multiply by 200+ cases of letters once hit, never being seen to revert; leading to strong runs as a dominant characteristic of samples of over 300 letters in principle capable of changes. That is, the evidence is that steps forward are preserved, i.e. cumulative progress to target, just as described by Mr Dawkins. Thus, there is good reason to infer on the runs as published and the surrounding commentary that on simplest explanation, letters were explicitly latched on hitting their individual target. It is on reported remarks circa 2000, and just recently reported, that implicit latching becomes a better explanation of what Mr Dawkins did in 1986, on preponderance of evidence. (But, per the remarks of 1986, a letterwise partitioned version of Weasel is a legitimate version, one of the many possible Weasels.)

4] The Law of Large Numbers. The underlying point in LOLN is that "large enough" samples of a population, that are on reasonably credible grounds not unduly biased, will reflect the population as a whole. The illustration I have used in this discussion is to draw up a bell-chart split into even stripes and place it on a floor, then drop darts more or less evenly onto it. One hit could be anywhere.
A few will be all over the place, but as we get to about two dozen, we will begin to see that the numbers of hits in stripes will more and more reflect the fraction of the overall area in the bands. That is -- as fractional area of such stripes on a bell curve [or the like] is a probability metric -- if probability is p, and we have N samples, the number of hits in a zone of probability p will trend more and more to Np [its "expected value"] as N rises. This is why the observed fraction of N samples in a band, f/N, tends to the value p. It is also why the average of a large enough sample will tend to the population's average, up to the classic distribution of sampling means: for, "Avg" = SUM [p_i x value_i], for sub-populations of probability p_i each. It is why fluctuations "often" tend to go as root-N, so to double precision one needs to quadruple the sample size, etc. In short, reasonably large samples -- and 300+ is a rather good case, on the whole of that -- will with high likelihood reflect the behaviour of the relevant population as a whole. And, skirt-catching needs big enough samples that it becomes reasonable to see far-skirt values in the sample.

5] What is the expected probability that a correct letter [i.e. circa 1986] will revert in the Weasel program? On 200+ samples of such letters from runs, without exception, nearly -- effectively -- zero. The basis (up to now I thought this needed no explicit expansion . . . ) is that the Expected Value, EV = Np, while as N rises, the Observed Value, OV --> EV. Then, in our case: N = 200+, and OV = 0. So p --> 0.

6] you can’t know what to expect unless you know the population size and the mutation rate. On the very contrary, we have before us samples from "good" showcased runs circa 1986, of the relevant pop, of generation champions. They show the very strong appearance of latching, and on LOLN, we may very reasonably infer to latching on the o/p, as just explained and shown. The issue is mechanism, and from the original thread, I have pointed to explicit and implicit latching as reasonable mechanisms. It is on explicit reported testimony that implicit latching becomes the best explanation on preponderance of evidence.

7] Jay M 159, David Kellogg posted a link in another thread that shows pretty convincingly that Dawkins’ text in The Blind Watchmaker cannot reasonably be read to suggest explicit latching. Mr Kellogg et al lost me in that linked, in the very first sentence, on slanderous incivility. Whatever emanations of penumbras of the text may have been brought into play to make you think that the text of TBW ch 3, circa 1986, cannot reasonably be understood as saying that the best explanation for Weasel on that text is explicit latching, I simply point to the Monash University team as an outside group sympathetic to Mr Dawkins' views. (Mr Elsberry had to explicitly "correct" them by saying that Mr Dawkins did not latch explicitly.) As to what Mr Dawkins said circa 1986, and what it naturally means, cf 88 above for my latest citation and comments. You will see that explicit latching, for good reason, is a very natural understanding.

8] Hazel, 160: In the implicit case, the mutation function does not depend on and has no knowledge of the target phrase or any other details of the fitness function. First, I must insist: a target proximity function that rewards mere closeness without reference to current functionality in any meaningful sense -- observe Mr Dawkins' "nonsense phrases" -- is NOT a "fitness function."
And, this is the main reason why Weasel fails to be a reasonable presentation of the power of natural selection, which may only reward differences of current functionality. Had a biologically reasonable threshold of such function been put in place, Weasel would have failed directly -- as Mr Dawkins admitted in so many words, though he did not discuss the implications of a search space of 1 in 10^40 [27^28] vs 1 in 10^180,000 [4^300,000] for even reasonable first life. Then, the mutation module per se has no "knowledge" of the target in any case. All it would know on an explicit case is that some letters are masked off. Turn off the mask, and with the right pop and rate, you are at the implicit case. And, on the implicit case, when the target phrase has been hit, the whole is masked off at once from further mutations. The difference is whether you define hitting the target as a whole or as a letterwise case. When to mask, not if.

9] it is correct to say that in the explicit case, mutation is not random in respect to fitness. Weasel, quite explicitly [cf 88 supra], dodges the issue of fitness, i.e. of credible functionality and associated combinatorial complexity. So, in neither the explicit nor the implicit latching case can one suggest correctly that mutation is in any wise related to "fitness." Mutation is related to letters, and then a filter looks for proximity to a target. In the implicit case, it locks off further mutations on hitting the whole phrase. In the explicit case, it does so letterwise. In neither case do we see any serious assessment of first having to get to function so that relative fitness can be a properly material consideration. Thus, the build-up to an inference on divergent letterwise probability is irrelevant. The key fallacy has long since been made. And, interpreting mask-off on a letterwise vs a phrase-wise basis are BOTH on the wrong side of the relevant fallacy, of rewarding non-functionality on mere proximity, down to the letterwise level.

GEM of TKIkairosfocus
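The dart-and-chart thought experiment described above can be mimicked in a few lines of Python; the band boundaries and sample sizes below are arbitrary illustrative choices, not anything from the original lab exercise. The only point shown is that the observed count of hits in a band of probability p tends toward the expected value Np as N grows:

import random
from math import erf, sqrt

BANDS = [(-3, -2), (-2, -1), (-1, 0), (0, 1), (1, 2), (2, 3)]  # stripes, in sigma units

def band_probability(lo, hi):
    # Probability that a standard normal value falls in [lo, hi).
    cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return cdf(hi) - cdf(lo)

for n in (25, 250, 2500, 25000):                  # number of "darts"
    hits = [random.gauss(0, 1) for _ in range(n)]
    print(f"N = {n}")
    for lo, hi in BANDS:
        observed = sum(lo <= x < hi for x in hits)
        expected = n * band_probability(lo, hi)   # Np, the expected value
        print(f"  band [{lo:+d},{hi:+d}): observed {observed:6d}, expected {expected:8.1f}")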
April 2, 2009 at 01:42 AM PDT
Well, kairosfocus and I seemed to have cleared this one point up, as he wrote in 149, "Hazel: There is no material difference between us on the substantial matters, once we see how explicit and implicit latching can work." This was in respect to my point that the essential logical difference between the explicit and implicit latching situations is in the mutation function. Kf had written, "Yes, in an implicit case, P(mut) is the same whether or no a let[t]er has already hit target. Yes, the p(mut) for non latched letters in explicit latchi[n]g is different from that of latched ones ….," and I had written,
Implicit: for each letter, p(mut) = p
Explicit: for each letter,
    if letter is incorrect, p(mut) = p
    if letter is correct, p(mut) = 0
With that said and agreed upon, I'd like to return to a previous point I had made, which will be clearer now that we have clarified this essential difference between the implicit and explicit cases. I claim that it is accurate to say:
In the implicit case, mutation is random in respect to fitness. In the explicit case, mutation is not random in respect to fitness.
Let me explain more about why the above is correct. In the implicit case, the mutation function does not depend on and has no knowledge of the target phrase or any other details of the fitness function. Every letter always has the same probability of mutating irrespective of whether it is correct or not. Mutation is random - the only factor being the mutation rate p that is applied uniformly to all letters at all times. Mutations happen entirely independently of any effect the mutation or lack thereof may have on fitness. This is why it is correct to say that In the implicit case, mutation is random in respect to fitness. In the explicit case, the mutation function is dependent upon and influenced by the fitness function, because for each letter it must reference the target string to see which of the two rules to apply: if the letter is incorrect, p(mut) = p or if the letter is correct, p(mut) = 0. In this case, if a letter is subject to mutation (by being incorrect), the probability that it will mutate is random, and so is the probability that it will mutate to the correct letter. But whether a letter is subject to possible mutation is not random: that is determined by comparing the letter to the target string. This is why it is correct to say that in the explicit case, mutation is not random in respect to fitness.hazel
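A minimal sketch of the distinction hazel draws here, with the two rules written out as separate Python mutation functions (the alphabet, target, and rate are illustrative assumptions; note that only the explicit version ever consults the target):

import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"
P_MUT = 0.05  # assumed per-letter mutation rate

def mutate_implicit(phrase):
    # Implicit case: every letter mutates with probability p, correct or not.
    return "".join(random.choice(ALPHABET) if random.random() < P_MUT else ch
                   for ch in phrase)

def mutate_explicit(phrase):
    # Explicit case: the mutation step consults the target and skips
    # letters that are already correct (p(mut) = 0 for them).
    return "".join(ch if ch == t
                   else (random.choice(ALPHABET) if random.random() < P_MUT else ch)
                   for ch, t in zip(phrase, TARGET))

In the implicit version, any apparent latching of the printed champions has to come from the selection step; in the explicit version it is built into the mutation step itself.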
April 1, 2009 at 01:54 PM PDT
kairosfocus @150
Hazel, the evidence on the o/p and Mr Dawkins’ discussion of it circa 1986 strongly supports that there are very, very few if any such reversions. Indeed, it at minimum strongly suggests that there are none. That is why, absent taking the testimony that there was not explicit latching at work, explicit latching is a very viable and natural explanation of what was displayed and what was said about it.
David Kellogg posted a link in another thread that shows pretty convincingly that Dawkins' text in The Blind Watchmaker cannot reasonably be read to suggest explicit latching. Can you go similarly step-by-step through Dawkins' description and show how explicit latching is a "viable and natural" explanation? JJJayM
April 1, 2009 at 12:22 PM PDT
Joseph [156], are you stuck on the difference between climb and climbing or between mountain climbing and other mountain sports? The difference is trivial in either case. Anyway, modifying the Google search, this is from a climbing teacher's journal:
Thursday Apr 20 Climbing 30:00 [3] teaching anchor class at hammond pond. about half hour of cumulative climbing
And here's a climber (or maybe a biker) writing about his sport watch:
I love my Suunto. It keeps track of cumulative climbing, so if you are going up and down (lot of PUD's) it sorts out the "up" part. It is pretty amazing what the cumulative climb can show. Anyway... I am trying to compare the change in pressure due to a front compared to a change due to 100 feet of altitude change. Are they about the same? Just guessing.
David Kellogg
April 1, 2009 at 11:11 AM PDT
Moderators, this failed to post earlier. Could you post it please? kairosfocus [144], I apologize for saying anything that might be taken to impugn your motives or integrity. Let me focus on two issues: the lottery example, and the Law of Large Numbers as it relates to total population. 1. The lottery example. You write:
here the relevant population is the generation champions, which is where the o/p latching was observed in the first place. This is a study of lottery winners, not the overall population, and the point of IMPLICIT latching as an explanatory mechanism is that the o/p will latch based on how the lottery is run
No. In explicit latching, the relevant population is the whole population. The question is not whether correct letters that revert are ever selected (that is, "win" the lottery), but whether they ever revert at all. Focusing on the winners is only relevant in examining what you are calling "implicit latching." 2. The Law of Large Numbers. Here it is:
The Law of Large Numbers says that in repeated, independent trials with the same probability p of success in each trial, the chance that the percentage of successes differs from the probability p by more than a fixed positive amount, e > 0, converges to zero as the number of trials n goes to infinity, for every positive e.
Wolfram Math World puts it more generally:
A "law of large numbers" is one of several theorems expressing the idea that as the number of trials of a random process increases, the percentage difference between the expected and actual values goes to zero.
What is the expected probability that a correct letter will revert in the Weasel program? You haven't given such a probability. Why? Because you can't know what to expect unless you know the population size and the mutation rate. Therefore, you can't say anything about latching from the examples in TBW.David Kellogg
April 1, 2009 at 09:17 AM PDT
David Kellogg, I see that the word context still eludes you. And still nothing about mountain climbers using the term "cumulative climbing". Your issues are not my problem.Joseph
April 1, 2009 at 09:10 AM PDT
kairosfocus, you write,
Hazel: There is no material difference between us on the substantial matters, once we see how explicit and implicit latching can work.
Thank you. I felt that we were saying the same thing in different ways, but I wanted to make sure. You also write,
Unlike you, I have to bear in mind a hostile audience fraction who will gleefully extract what they can find to caption as an occasion for rhetorical dismissal. They already have done so. Repeatedly. Please try to understand that.
I would like to address this issue, as one of my main interests is how people with differing perspectives can constructively communicate with each other. I also face a "hostile audience", in that my overall perspective is different from the prevailing perspective at this forum, and I often have my points met with "rhetorical dismissal." However, I prefer to not think of that as hostile, and I definitely prefer to not respond with hostility and rhetoric: I believe pretty strongly that I should do unto others as I would have them do unto me rather than doing to others what they do to me. Two wrongs don't make a right. And to make a less platitudinous point, I believe that when I am met with behavior that I think is wrong, that is even more reason for me to try to behave well: if the other person is behaving poorly, then I need to behave twice as well in order to make up for their shortcomings. So when I am met with rhetorical dismissal or other non-constructive responses from people who disagree with me, my response is just to stay positively focused on the immediate issues. And last, you write,
PPS: Hazel, the evidence on the o/p and Mr Dawkins’ discussion of it circa 1986 strongly supports that there are very, very few if any such reversions. Indeed, it at minimum strongly suggests that there are none. That is why, absent taking the testimony that there was not explicit latching at work, explicit latching is a very viable and natural explanation of what was displayed and what was said about it.
This is an example of something that you don't need to bother saying to me, because I have not been discussing this issue, nor been interested in it, for a very long time, and I've said that to you a number of times. I wish you could hear that, and limit your responses to me to topics that are currently on the table between us rather than continuing to repeat points that are not currently on the table. That also makes for more productive communication.hazel
April 1, 2009 at 08:51 AM PDT
Joseph, you write:
Climbers do NOT refer to that as “cumulative climbing”
A simple Google search for the phrase "cumulative climb" and the word "mountain" demonstrates that this is incorrect. Here are some examples:
The orphaned Cataloochee pavement starts at Sal Patch Gap (3580') and descends into the valley. At mile 3 the pavement crosses Cataloochee Creek (2600') and is joined from the right by the gravel road coming 2 miles from Mt Sterling Rd. The pavement continues up the valley along the creek and past the campground. The road passes by several old settlements before turning to gravel (mile 5) and terminating at mile 6 (2860'). The loop is 7 miles with a cumulative climb of 1000'.
Here's another:
This hike will follow the MST south on the Shut-in Trail to the Sleepy Gap Overlook for lunch, and return on the same trail. Cumulative climb is about 1600 feet. Grade is mostly moderate. Nice views of the French Broad. First meeting place: Ingle’s, US 25N, Hendersonville. Second meeting place: Biltmore Square Mall parking lot, near McDonalds.
(I've hiked that one.) Here's one for biking:
Day 2: Bled – Ribcev Laz (Bohinj Lake) (40 km, cumulative climb 700 m). Uphill to Pokljuka high plateau. From there you cycle descending down to the Bohinj valley, place of unique beauty of nature and tiny villages in Alpine valley.
And another biking one:
The first day of this 5 day duathlon involved running 30km from close to Everest base camp to the largest town in the area, Namche bazaar. Along the way the competitors would drop over 2500m but would also climb a cumulative total of over 800m.
David Kellogg
April 1, 2009 at 07:29 AM PDT
Am H Dict: cu·mu·la·tive, adj.
1. Increasing or enlarging by successive addition.
2. Acquired by or resulting from accumulation.
3. Of or relating to interest or a dividend that is added to the next payment if not paid when due.
4. Law a. Supporting the same point as earlier evidence: cumulative evidence. b. Imposed with greater severity upon a repeat offender: cumulative punishment. c. Following successively; consecutive: cumulative sentences.
5. Statistics a. Of or relating to the sum of the frequencies of experimentally determined values of a random variable that are less than or equal to a specified value. b. Of or relating to experimental error that increases in magnitude with each successive measurement.
GEM of TKIkairosfocus
April 1, 2009 at 06:33 AM PDT
hazel:
A process can be cumulative and at the same time you can occasionally lose some of what you have, which is different than what you said.
Only if you re-define the word "cumulative". And that appears to be what evolutionists always want to do - redefine words to suit their needs.Joseph
April 1, 2009 at 06:29 AM PDT
hazel:
Two examples I have used: when climbing a mountain, you occasionally go downhill for a while.
Climbers do NOT refer to that as "cumulative climbing".
When accumulating savings, occasionally you have less money than you did the month before.
If you ever have less than before then it is NOT an example of cumulative savings. And again perhaps Dawkins should use the term "back-n-forth selection". But if he did that then he could never illustrate his point that selection can account for something.Joseph
April 1, 2009 at 06:27 AM PDT
Joseph: You are of course materially correct, but I suspect that all you can really hope for is that the correction of the record here will make sure that onlookers can see the holes in the endlessly recycled objections. GEM of TKI

PS: I also suggest that with truly large per-generation populations, sufficient of the skirts will show up that multiple mutation cases will break through and will make the multiple mutation cases that are ever so rare win the championship match on mere proximity. That is why I speak of co-tuned mutation rates and population sizes.

PPS: Hazel, the evidence on the o/p and Mr Dawkins' discussion of it circa 1986 strongly supports that there are very, very few if any such reversions. Indeed, it at minimum strongly suggests that there are none. That is why, absent taking the testimony that there was not explicit latching at work, explicit latching is a very viable and natural explanation of what was displayed and what was said about it.kairosfocus
April 1, 2009 at 06:21 AM PDT
Hazel: There is no material difference between us on the substantial matters, once we see how explicit and implicit latching can work. Unlike you, I have to bear in mind a hostile audience fraction who will gleefully extract what they can find to caption as an occasion for rhetorical dismissal. They already have done so. Repeatedly. Please try to understand that. GEM of TKIkairosfocus
April 1, 2009 at 06:09 AM PDT
A process can be cumulative and at the same time you can occasionally lose some of what you have, which is different than what you said. Two examples I have used: when climbing a mountain, you occasionally go downhill for a while. When accumulating savings, occasionally you have less money than you did the month before.hazel
April 1, 2009 at 06:06 AM PDT
kellogg:
It just occurred to me from ROb’s comment above why Joseph (in the latching thread) misunderstands the notion of “cumulative” selection.
Nice bald accusation.
Cumulative in TBW means that the total phrase is closer to the target, not that each letter is.
Exactly. And given a target, a small enough mutation rate and a large enough sample size, the selected offspring will never be farther away from the target than the parent. So when a 28 letter target is matched by 15 letters, a progeny that matches 16 letters will be a cumulative advance even if a particular letter reverts. So you are saying that at least one offspring received three mutations? One that flipped a correct letter and two others that matched the target? So much for the gradual change that Dawkins was trying to illustrate. And so much for small mutation rates. In short, Dawkins’s use of “cumulative” implies non-latching of individual letters. Not according to his description and illustration in TBW. In TBW Dawkins uses “weasel” to illustrate cumulative selection. “Cumulative” means “increasing by successive additions”. INCREASING BY SUCCESSIVE ADDITIONS. “Ratchet” means to “move by degrees in one direction only”. Increasing by additions means to move by degrees in one direction only. Dawkins NEVER mentions that one or more steps can be taken backward. He never says anything about regression. Therefore cumulative selection is a ratcheting process as described and illustrated by the “weasel” program in TBW. That is, once a matching letter is found the process keeps it there. No need to search for what is already present. Translating over to nature this would be taken to mean once something useful is found it is kept and improved on. IOW it is not found, lost, and found again this time with improvements. By reading TBW that doesn’t fit what Richard is saying at all. And he never states that he uses the word “cumulative” in any other way but “increasing by successive additions”. How can a process be “cumulative” and at the same time allow you to keep losing what you have?Joseph
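To put rough numbers on this exchange: under the usual per-letter mutation model, the chance that a single child carries three or more mutated positions depends heavily on the assumed rate, and having that many mutated positions says nothing about whether they land on the particular letters in question. The rate and population size below are assumptions for illustration, not Dawkins' published parameters:

from math import comb

L = 28      # letters in the target phrase
U = 0.05    # assumed per-letter mutation rate
POP = 150   # assumed children per generation

def p_exactly(k):
    # Binomial probability that a child has exactly k mutated positions.
    return comb(L, k) * U**k * (1 - U)**(L - k)

p_three_or_more = 1 - sum(p_exactly(k) for k in range(3))
print(f"P(a given child has 3+ mutated positions): {p_three_or_more:.4f}")
print(f"Expected such children per generation:     {POP * p_three_or_more:.1f}")
# Having 3+ mutated positions is necessary but far from sufficient for the
# specific event described (one reversion plus two new exact matches).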
April 1, 2009 at 05:20 AM PDT
Kairosfocus writes,
Hazel, I have already pointed out that the very term latching implies that once a latched letter hits the target, its probability of further change falls to effectively zero. (A bit more than that in the case of quasi-latching, and also that of explicit latching that triggers post target shifting.)
And I’ll point out, and have pointed out several times before, that I understand that. What I don’t understand is why you keep telling me things that we already agree upon. kf writes,
These are why we need credible code to make a definitive conclusion beyond the preponderance of evidence.) I have also pointed out that (i) this is well warranted by Mr Dawkins’ statements c. 1986 as already cited and remarked on, and that (ii) locking up on a letter by letter basis is not in principle different from locking up on the basis of hitting the phrase.
And I have said, repeatedly, that I am not interested in, or discussing, the historical problem of what Dawkins did or didn’t do, nor am I interested in specific implementations or interpretations of others such as Apollos (I know nothing about what he did.) I am interested in the pure logic, and the programming implementation of that logic, of the basic difference between the explicit and implicit latching cases. A few days ago, you wrote,
Yes, in an implicit case, P(mut) is the same whether or no a let[t]er has already hit target. Yes, the p(mut) for non latched letters in explicit latchi[n]g is different from that of latched ones ….
So, if I were writing code, or if you were, for both an explicit latching version and an implicit latching version, could we write, based on what you wrote above,
Implicit: for each letter, p(mut) = p
Explicit: for each letter,
    if letter is incorrect, p(mut) = p
    if letter is correct, p(mut) = 0
Does this capture the essential, fundamental logical difference between the two situations?hazel
April 1, 2009 at 04:59 AM PDT
PS: On sample size. Onlookers, that was raised and properly answered in the original thread where the issue was raised by GLF in a threadjack attempt. Sample points generate 28-letter samples per snapshot. Latching is evident on the succession of letters, not the phrase as a whole. DK is simply again trying to find a way to suggest that a sample of 300+ letters that could change, with runs of latched letters for 200+ of them, is below the LOLN threshold. Recycling already adequately answered objections. Shall we call it "objecting in circles" to avoid being overly direct? [Is that phrasing acceptable, Mr Hayden?]kairosfocus
April 1, 2009 at 03:30 AM PDT
Mr Kellogg: I -- for excellent reason -- take serious exception to the following remark, which directly implies that I am lying, in a context where I have repeatedly given the Law of Large Numbers [LOLN, henceforth] grounds for my conclusions, over three threads now:
[DK, 140:] You know the sample is unrepresentative, but you persist. You don’t know the population size, but you persist . . .
1 --> As I have repeatedly pointed out and explained, even where the population at large is indefinitely large, a sufficiently large sample -- hence LOLN -- will as a rule be representative thereof. [And here the relevant population is the generation champions, which is where the o/p latching was observed in the first place. This is a study of lottery winners, not the overall population, and the point of IMPLICIT latching as an explanatory mechanism is that the o/p will latch based on how the lottery is run. So, it is a distraction to advert to the possibility that within the generation, there may indeed be members where the letters that latch in the run of champions are not latched. Due to the way champions are selected, that is of no EFFECTIVE consequence, as has been repeatedly highlighted and explained. That is, so long as the pop is small enough and the per-letter mutation rate is sufficiently low relative to that, so that a significant number of zero-change and only-one-change members are present, a Weasel program will latch or at worst quasi-latch. This is because far-skirt multiple-change members that substitute one good letter for a reverted one will be too rare to show up significantly in the runs of Weasel before it hits the target. And, when the parameters are shifted to allow that substitution effect to trigger reversions, we will see first quasi-latching, then also cases of multiple letter jumps towards the target as the skirt comes into play; leading, relatively speaking, to a tearaway rush to the target. The reported 500, 5% cases that run to target in about 20 - 30 gens show that case aptly. It is also possible to have versions of Weasel that converge extra slowly, 1000+ gens, which will show reversions etc., as Apollos inadvertently demonstrated through an error in his program. (Contrast that in the published 1986 runs, 40+ and 60+ gens were used for showcased "good" runs. That is, no-change won the generation championship about 1/2 the time.) All this has been repeatedly pointed out, over three threads now.]

2 --> While there are pathological cases, it should be abundantly plain that sampling in the main at every tenth generation of champions will not correlate with any reasonable Weasel algorithm, and

3 --> in addition, Mr Dawkins' statements on "cumulative" progress and the like lend further reason to believe that the published excerpts of runs circa 1986 were representative of performance on good runs at that time.

4 --> It is a longstanding statisticians' rule of thumb that 20 - 30 is more or less the range where "big enough" allows LOLN to begin to kick in.

5 --> I have also pointed out the significance of strong runs in a trend.

_______________

In short, I have warranted my conclusions. To date, I find no indication that you have seriously interacted with the sampling issue lurking in the LOLN. And yet, you are willing to draw quite serious conclusive and dismissive inferences. I find it further interesting that the same issue is precisely the underlying point in the concept of Complex Specified Information and its relevant subset, FSCI. Namely, a search that is random or otherwise equivalent [cf Dembski-Marks on active information and cost of search as well as search for a search] will be so overwhelmingly dominated by the typical configurations -- on the gamut of the search resources of the observed cosmos as a whole -- that it will be maximally unlikely to find islands of function requiring 500 - 1,000 or more bits of capacity to store the used information.
Thus, onlookers: we see -- again -- where the selectively skeptical objection to one thing leads, step by step, to a point where we see that one has an inconsistency in his or her scheme of warrant.

CONCLUSION: It is plain that there is good reason to see that the published runs circa 1986 were representative of what were thought to be "good" runs at that time. Since then, Weasel [circa 1987] and neo-Weasel programs have been as a rule carefully set up NOT to latch. The reason is that the obvious latching -- 200 out of 300 changeable letters without a single exception -- led to the recognition of the key flaw in the program: targeted search, without reference to functionality of relevant complexity. So, Weasel is not a good illustration of the powers of any BLIND watchmaker, as it is an example of intelligently designed, targeted, foresighted search. Of intelligent design, in fact.

GEM of TKIkairosfocus
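The "co-tuned population size and mutation rate" point argued over in this comment can be made concrete with a short table. For an assumed 28-letter phrase and a few illustrative parameter choices (none of them Dawkins' published settings), the expected numbers of no-change, one-change, and multi-change children per generation work out as follows; which regime a given Weasel sits in depends entirely on these two settings:

L = 28  # letters in the phrase

def change_profile(pop, u):
    # Expected numbers of children with 0, 1, and 2+ mutated letters
    # in one generation of size pop at per-letter rate u (binomial model).
    p0 = (1 - u) ** L
    p1 = L * u * (1 - u) ** (L - 1)
    return pop * p0, pop * p1, pop * (1 - p0 - p1)

for pop in (50, 200, 1000):
    for u in (0.01, 0.05):
        zero, one, multi = change_profile(pop, u)
        print(f"pop {pop:4d}, rate {u:.2f}: "
              f"no-change {zero:7.1f}, one-change {one:7.1f}, multi-change {multi:7.1f}")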
April 1, 2009 at 03:13 AM PDT