
PZ Myers Does It Again


PZ Myers has, once again, railed at his blog Pharyngula against something he doesn’t understand. Hi PZ! Notice that he doesn’t actually address the content of Dr. Dembski and Dr. Marks’ paper, which you can read here: Conservation of Information in Search: Measuring the Cost of Success, published by the IEEE. Given his argument, he doesn’t know how to measure the cost of success, yet claims that Dr. Dembski doesn’t understand selection. A bit of advice, PZ: the argument presented by Dr. Dembski and Dr. Marks is very sophisticated, PZ; your mudslinging isn’t, PZ; you need to step it up, PZ. I know this new stuff isn’t ez, but you may want to consider a response that has actual content, PZ. Your argument against this peer-reviewed paper is still in its infancy, or, more accurately, still in the pharyngula stage, embryonic in its development.

Since evolution of the kind PZ subscribes to cannot be witnessed, the argument has moved to genetic algorithms, now that computational power makes it possible to explore the question in simulation, and the IEEE is an entirely appropriate place to publish on that subject. We’re not going anywhere; we’ll give him time to catch up and educate himself on the paper’s actual content. And if/when he does, maybe he’ll write another blog post, and possibly one with active information, that is, actual information, or else his argument will never reach its target.
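For readers who want a concrete handle on the quantity the paper is built around, here is a minimal sketch, my own illustration rather than anything taken from the paper's text, of the two bookkeeping formulas Dembski and Marks work with: endogenous information, which measures how hard the unassisted search is, and active information, which measures how much an assisted search outperforms it. The example probabilities are assumptions chosen only to make the arithmetic visible.

```python
import math

def endogenous_information(p_blind):
    """Difficulty of the unassisted search: I_Omega = -log2(p)."""
    return -math.log2(p_blind)

def active_information(p_blind, q_assisted):
    """Gain supplied by the search structure: I_plus = log2(q / p)."""
    return math.log2(q_assisted / p_blind)

# Illustrative numbers: a 28-character target over a 27-letter alphabet,
# and an assumed assisted-search success probability of 0.5.
p = (1 / 27) ** 28
q = 0.5
print(endogenous_information(p))   # roughly 133 bits
print(active_information(p, q))    # roughly 132 bits
```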

Comments
KF: In your posts at 193 and 195 you provide a series of examples that demonstrate exactly what I and others have been arguing. I am going to try a slightly different tack to see if it makes any difference. You state this about the 1986 output of WEASEL:
On balance of evidence, it was probably achieved implicitly through matching of pop per generation, mutation rate per letter and filter characteristics.
This is correct and precisely what we have been arguing. It seems to describe the algorithm Dawkins described, but with some unnecessary new terminology. Note that this output does not require that the algorithm locks any correct letters out of the mutation process - now recall Dembski's description:
Consider the L = 28 character phrase ... Two of the letters {E, S} are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished. For the incorrect letters, we select 26 new letters and obtain ...
Notice how the algorithm explicitly requires that the correct letters are removed from the search (the randomisation) process, as evidenced by both his phrasing and the highlighted letters. In the process you describe there is no such requirement. This is the issue we have been addressing, at least as far as the latching point goes. Merely choosing to call the observation of a behaviour an 'implicit latch' does not really deal with the point - Dembski and Marks explicitly describe a mechanism that prevents correct letters from ever reverting; Dawkins describes a mechanism that allows any letter to randomly change. In order to produce the latching described by D and M you have to explicitly include it in the code; with Dawkins' algorithm you do not need this extra mechanism. It is worth noting also that from Dawkins' description you cannot have a mutation rate of zero, or of one hundred percent. If it is zero then they are not 'mutant progeny', they are identical clones, and if it is one hundred percent then they are also not mutant progeny (they inherit nothing, they are just new randomly generated strings). Now can you answer this question: Is a latch that doesn't always latch the same as a latch that will always latch? Or more specifically: Is something whose observed output sometimes looks like it latches the same as a device that latches? ---- Moving on to your more recent comments:
Since this is an official product of the EIL, it can fairly be said to be the Dembski-Marks model of Weasel, and it covers the ballpark.
Absolutely not; Atom's system can be configured to reproduce the WEASEL algorithm, and it can also be configured to reproduce the Dembski/Marks algorithm along with a host of different algorithms. The mere fact that Atom's application requires re-configuring in order to meet the published criteria for each algorithm (and to produce all the results you have generated) indicates that they are two different algorithms. Remember - a computer programme is typically a collection of many algorithms. --- Finally, you still have not addressed the issue of a population, or the issue of randomisation and locking vs equal probability of mutation (as noted above). Dawkins' description only holds if you have a mutation rate between zero and one hundred percent. Dembski and Marks demand a mutation rate of one hundred percent for incorrect letters and zero for correct letters. Can you now answer this question (yes or no will do): Is an algorithm that implies a mutation rate between zero and one hundred percent the same as one that requires a rate of zero or one hundred percent depending on the letter being examined? Remember, the issue is whether Dembski and Marks have portrayed WEASEL accurately. Now that we appear to be agreeing that Dawkins' algorithm does not require a specifically designed latching mechanism to produce the observed behaviour, we seem to be close to agreeing that their algorithms differ on this point, so can you now address the issue of populations and mutation.BillB
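To make the contrast BillB is drawing concrete, here is a minimal sketch, mine rather than code from either the paper or Dawkins, of the two update rules being compared: a partitioned-search step that keeps correct letters and redraws every incorrect one, and a Dawkins-style step in which every position mutates with the same small probability whether or not it is currently correct. The alphabet, target, and 5% rate are the examples already used in this thread.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"   # '*' stands in for the space
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def partitioned_step(parent, target):
    """Partitioned-search update: correct letters are kept (ratcheted),
    every incorrect letter is replaced by a fresh random draw."""
    return "".join(p if p == t else random.choice(ALPHABET)
                   for p, t in zip(parent, target))

def dawkins_style_step(parent, rate=0.05):
    """Dawkins-style update: every position, correct or not, mutates
    with the same small per-letter probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else p
                   for p in parent)

# Starting from the first query string quoted from the paper:
start = "SCITAMROFN*IYRANOITULOVE*SAM"
print(partitioned_step(start, TARGET))   # the correct 'E' and 'S' always survive
print(dawkins_style_step(start))         # any letter, correct or not, may change
```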
August 26, 2009
02:13 AM PDT
kf, you still go on about this latching stuff that has long ago been sorted: your rhetorical trick of calling "non latching" algorithms "implicitly latching" is long known. Also, whatever the EIL has on its website is not really under discussion. The question is whether the Dembski and Marks paper misrepresents Dawkins' Weasel. And the answer is yes. I will copy my arguments again, so that I can again watch you ignore them (while still typing a few thousand words). So, let's compare the examples given by Dembski and Marks and by Dawkins again, shall we?
1. Correct letters
Correct letters don't stay fixed in the algorithm as intended by Dawkins. Dembski and Marks fix correct letters explicitly. I think everything has been said about this latching behaviour. I will just add again that the Dawkins version is much more representative of biological evolution. The fixation of correct letters that can be seen when high population numbers and low mutation rates are used is just what was to be demonstrated by Weasel: the power of mutation/selection.
2. Incorrect letters
Dembski and Marks replace *every* wrong letter with a new random letter. This means that subsequent search results are completely different at the beginning:
1: SCITAMROFN*IYRANOITULOVE*SAM
2: OOT*DENGISEDESEHT*ERA*NETSIL
Dawkins' algorithm works in a completely different way: from the parent search string he computes a population of daughter strings which are exact copies except for a fixed (and low) mutation rate per letter:
1: WDLTMNLT*DTJBKWIRZREZLMQCO*P
2: WDLTMNLT*DTJBKSIRZREZLMQCO*P
This is of course much more in line with biological evolution. Blue Lotus makes the same point and you evade it by talking about an algorithm from the EIL. I am talking about the paper of Dembski and Marks. Instead of a "Proximity Reward Search" Dembski presents it as a "Partitioned Search" in the paper. The funny thing is that the differences are immediately visible in the EIL software.
3. Population
This is related to point 2: Dembski and Marks have a population size of one (not really a population at all). From a parent string exactly one daughter string is computed. There is no selection involved! Dawkins generates a large population of daughter strings and selects the best one as the parent string for the next generation. Again, while it is an extremely simplified model of evolution, it at least models the selection part.
Summary: The two algorithms are completely different in almost every aspect. The one that Dawkins said he used (and everybody can reproduce the results easily) is a much better model of biological evolution:
- Correct letters are not fixed: mutation rate is independent of resulting fitness. Dawkins: ca. 5% for every letter; Dembski and Marks use an extremely unrealistic rate of 0%/100% for correct/incorrect letters.
- Selection is modelled.
- The effect of population sizes is modelled in the Dawkins version.
- The correct Weasel can follow a moving target.
So, once again, the algorithms are completely different and the latching behaviour is only a small part of this difference (but maybe the one that can be misrepresented most easily).Indium
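For anyone who wants to check Indium's points 2 and 3 directly, here is a short sketch of a Dawkins-style weasel with a real population and selection of the best child each generation. Dawkins did not publish code, so the parameters here (27-character alphabet, population of 100, 5% per-letter mutation rate) are the assumptions discussed in this thread, not his actual settings.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def mutate(parent, rate):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(pop_size=100, rate=0.05):
    """Population of mutated copies each generation; the child closest to
    the target becomes the next parent. No letter is ever locked, so a
    correct letter can (rarely) revert."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET:
        children = [mutate(parent, rate) for _ in range(pop_size)]
        parent = max(children, key=fitness)   # selection step
        generation += 1
    return generation

print(weasel())   # generations to reach the target; commonly well under 200
```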
August 26, 2009
01:50 AM PDT
PPPS: Onlookers: BillB's fishy funnel model is of course an intelligently designed simulation, which has in it a lot of already built-in functionality, as is implicit in robotic fish with solar panels swimming around. That is, it begs the key questions of the origin of basic information-rich, complex functionality, which is relevant to BOTH origin of life and origin of novel body plans and major features otherwise. Until you get to the shoreline of an island of complex function by chance variation and natural selection, hill-climbing is irrelevant as an algorithm to illustrate the power of evolutionary mechanisms. For, per argument, we are perfectly willing to concede hill climbing for the moment. The issue is getting from the sea of non-function to the shorelines of islands of function within the credible resources of the cosmos as we observe it [if 10^150 random walk moves are too small a fraction of the config space, the proposal is not credible; and 1,000 bits of required info storage for function will put you well into that territory], as has been repeatedly stated and just as repeatedly dodged.kairosfocus
August 26, 2009
01:41 AM PDT
PPS: BL, you are confusing a pedagogical example of what partitioning is and does [p. 1055, IEEE paper] with showcased "good" runs of Weasel c. 1986. Guavas and soursops. EIL DOES have a program that implements in parallel runs of Weasels, with adjustable parameters. That is where the runs I put up above come from [and you will see runs that bracket the 40 - 60 "good runs" produced by CRD c 1986: 130+ for a 4%, 50 run and 31 for a 500 pop run]. On the original page I have one run at 22, when rates and pop are pushed to extremes. Another ran to something like 2500, most of that being in the tail end as odd effects kept it bouncing around. You will see that as parameters shift, different population (mutation distribution tail . . . recall the filters reward the closest per CRD's specification, not the average on distance to target ) effects dominate -- indeed, my own analysis suggests that different factors and dynamics dominate as a run progresses.kairosfocus
August 26, 2009
01:30 AM PDT
PS: BL: While I know there is a new set of "standard" Darwinist idea hit-man talking points on this, they rest on the usual strawman distortion. (And note that while much digital ink has been spilled on a side issue, the main point from 78, the implications of the new phase of development of ID theory, is being distracted from. And unsurprisingly, the idea hit-men want to demonise and dismiss those they object to. But meanwhile we now have a tool for explaining the impact of intelligence on search: active information fits horses for courses and gives a substantial gain on the average/yardstick random walk search. A gain that can in some cases be quantified and turned into an information metric. Moreover, as this is in an area of interest to engineers, the power games played out over the past decade to lock ID research out of official "Science" journals are suddenly irrelevant.) I should also immediately note that the version of Weasel used in the demo runs as printed off again above was Atom's adjustable weasel. This is of course produced under the auspices of the M & D EIL, and is hosted by them. One part does a strict random walk. One, in parallel, an explicitly partitioned search. The third, an adjustable parameter search based on set pop no, set per letter mut rate and I believe now filter type and even clustering of groups of letters 1, 2, 4 etc [at my request]. Since this is an official product of the EIL, it can fairly be said to be the Dembski-Marks model of Weasel, and it covers the ballpark. Why not try a few runs yourself and see the in-parallel results? M & D, on p. 1055 of the recent IEEE paper, are doing a very different thing from CRD's development of a program, or even their own hosting of the adjustable Weasel; so, one must not compare guavas and soursops. For, on p. 1055, M & D are analysing the probability implications of partitioned search, in the context of the impact of active information on search. (The evidence that Weasel 1986 credibly exhibits this class of search is accessible in 78 and 134 above. Remember, cumulative, ratcheting, latching of already correct letters in a search is achievable explicitly or implicitly, as shown by actual o/p. Once the observable ratcheting effect occurs, partitioning has been achieved and the analysis applies regardless of whether the pathway was explicit or implicit.)kairosfocus
August 26, 2009
01:20 AM PDT
KF
And, beyond reasonable doubt Weasel 1986 was partitioned as is shown by the evident latching on o/p.
And this is why my question is so important to me. If there is no difference between Weasel 1986 and Dembski/Marks 2009, then why are the outputs across generations so wildly divergent for Dembski/Marks and almost identical for Weasel 1986?
Fallacy of the closed mind as a manifestation of cognitive dissonance.
Then I have high hopes you will address my query.Blue Lotus
August 26, 2009
01:12 AM PDT
Clive: This scenario below may help (or not). It is something I plan to build, or at the very least write as a simulation one day. It is, I think, an embodiment of R0b's C example in 211 and could, if one were so inclined, be said to contain 'implicit targets' and 'implicit fitness functions'.
------
You have a large, tall tank of water with some lights shining in at points near the top, and a funnel at the bottom. The tank is a few meters around and maybe five meters tall. You have some fairly simple robotic fish - they have motorised tails and fins so they can, if correctly controlled, swim around the tank (assume they are waterproof BTW). Each has a solar cell and some light sensors and a small power store (a battery or capacitor) so that, when they are close enough to the light, their power store will accumulate energy which can then power their brains, and allow the motors to move. There is sufficient light near the top of the tank so that a fish can power its various parts, and trickle charge its energy store, using only the energy coming from its solar panel. Each also has a small embedded computer to control all its parts, and a device for reprogramming the chip wirelessly over a short distance (probably by an infra-red link). Each is slightly less than neutrally buoyant - it will slowly sink.
----
You start by programming each robot fish (maybe you have about 30 of them) with a randomly configured neural network programme, and then you drop them into the tank. The fish start to sink. Maybe some of them twitch a bit; it depends on how their random brains work. When a fish reaches the funnel at the bottom it is sucked out and transported up to the top of the tank. On the way it is re-programmed with a new neural network, one which is a slightly mutated copy of a programme from a randomly selected fish that is still in the tank. (You have kept a record of all the random brains you generated at the start.) The 'new' fish is then dropped in at the top of the tank (and your records are updated).
---
In simple terms, when a fish falls to the bottom it 'dies' and (solely because of the finite number of fish we can reasonably make) this allows one of the still 'living' fish to reproduce. Any idea what will happen when you let the system run for a few months? What ought to happen is that the fish's brains will 'evolve' to allow them to swim towards the light, which provides them with power, which keeps them moving, and stops them sinking, and stops them dying. The longer a fish can do this the greater the probability that it will 'reproduce' and the greater the number of offspring it will have - these offspring will 'inherit' some of the traits that helped the 'parent' survive for long enough to reproduce. Now the important questions: Where is the target? Are the fish (or rather their brains) gathering 'information' from their environment? And a note to KF in case he misunderstands: This example is designed to deal SOLELY with the concepts of fitness functions and targets. It is constructed to serve this purpose and this purpose alone. It is NOT about how this simulacrum of self-replicating agents came to exist, or how biological life first arose - it assumes self-replicators exist, as we can observe in nature.BillB
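A rough toy of the selection rule BillB describes, purely my own sketch: the 'brains' are just weight vectors and the survival measure is a placeholder rather than any physics, but the loop captures the point being made, namely that a fish that 'dies' is replaced by a mutated copy of a random survivor, and no target string or explicit fitness target appears anywhere.

```python
import random

N_FISH = 30
BRAIN_SIZE = 16   # placeholder for the neural-network weights

def random_brain():
    return [random.uniform(-1, 1) for _ in range(BRAIN_SIZE)]

def mutate(brain, sigma=0.1):
    return [w + random.gauss(0, sigma) for w in brain]

def time_before_sinking(brain):
    """Placeholder: in the real tank this would be measured behaviour
    (how long the controller keeps the fish near the light); here it is
    an arbitrary stand-in so the loop runs."""
    return sum(brain) + random.gauss(0, 1)

tank = [random_brain() for _ in range(N_FISH)]
for _ in range(10_000):
    # The fish that sinks first 'dies'...
    dead = min(range(N_FISH), key=lambda i: time_before_sinking(tank[i]))
    # ...and re-enters with a mutated copy of a random survivor's brain.
    parent = random.choice([i for i in range(N_FISH) if i != dead])
    tank[dead] = mutate(tank[parent])
```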
August 26, 2009
01:10 AM PDT
Onlookers:
1] Re Rob @ 210: Once partitioning is effectively achieved in the o/p run of generational champions as they move to the target on mere proximity, the M & D analysis of the probability constraints of cumulative, partitioned ratcheting (thus latched) search applies, as does the discussion on active information. And, beyond reasonable doubt Weasel 1986 was partitioned as is shown by the evident latching on o/p. On balance of evidence, it was probably achieved implicitly through matching of pop per generation, mutation rate per letter and filter characteristics.
2] Re Indium @ 209: Simply repeats the same already corrected errors, starting with the denial of the obvious latching in the showcased Weasel runs of 1986 and in CRD's enthusiastic discussion of same. [Cf. e.g. 78 above.] Fallacy of the closed mind as a manifestation of cognitive dissonance.
3] Re BillB: Still, sadly, beyond the pale of responsible, civil discussion. G'day. GEM of TKIkairosfocus
August 26, 2009
01:00 AM PDT
kairosfocus, Since my post at #196 there have been several long posts. Could I refer you back to it? Would you be able to answer the question, which in summary is: If the two methods (M+D, Dawkins) are the same but for latching, then why do the outputs of each differ so much? I.e., in Dawkins' printed runs, generations one and two are almost identical, while generations one and two of Dembski/Marks are totally different. Why is that the case? And why does that not indicate that the two methods differ?Blue Lotus
August 26, 2009
12:57 AM PDT
I took the string SCITAMROFN*IYRANOITULOVE*SAM and calculated a next generation using Dawkins's algorithm with populations of 10, 50 and 100, and mutation rates of .04, .05 and .1. The tenth string in the list is the second generation given in the paper of Marks and Dembski. The differences from the first generation are in bold face:
1. SCITAMROFN*IYRANOIEULOVE*SAM
2. SCITAMROFN*IYRANOITULOGE*SAM
3. ECITAMRI*N*IYZANOITULOVE*SAM
4. SCITAMROFN*IYRANOITUL*VE*SAM
5. SCITAMROFN*IYRANOITULOVE*SEM
6. SCITAMOOLNOIYRAMOITULOVE*SEM
7. SCITANROFN*IYYANOITULOVE*SAM
8. SCITIMROFN*JYRANOITULOVE*SAM
9. SCITAMROFN*ICRHNOITSLOWE*SAV
10. OOT*DENGISEDESEHT*ERA*NETSIL
Can anyone spot a difference in the design of the strings? Anyone? KF? Anyone?DiEb
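A small sketch, mine rather than DiEb's code, that makes the comparison quantitative: count how many of the 28 positions change between the paper's first and second query, and compare that with a Dawkins-style child of the same string at an assumed 5% per-letter rate.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
FIRST  = "SCITAMROFN*IYRANOITULOVE*SAM"   # first query quoted from the paper
SECOND = "OOT*DENGISEDESEHT*ERA*NETSIL"   # second query quoted from the paper

def diff_count(a, b):
    return sum(x != y for x, y in zip(a, b))

# Dembski/Marks step: positions changed between their two queries.
print(diff_count(FIRST, SECOND))   # 26 of 28 positions differ

# A Dawkins-style child of the same string at a 5% per-letter rate.
child = "".join(random.choice(ALPHABET) if random.random() < 0.05 else c
                for c in FIRST)
print(diff_count(FIRST, child))    # typically 0 to 3 positions differ
```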
August 26, 2009
12:52 AM PDT
KF, Thank you for answering my question at 165:
Do you honestly believe that when a woman wears clothes that you disapprove of, an accusation of rape made by her should be disregarded?
I am glad to hear that you do not believe this. When you said:
She discredited herself in the courtroom by showing up in stilettos, stockings and a tight micro-mini.
You gave no indication that this was not just your own opinion, so I am relieved to hear that it was actually someone else's. Needless to say a simple "No, this is not what I believe" would have been sufficient, and more polite.BillB
August 26, 2009
12:09 AM PDT
Clive @ 185, I'll try to lay this out more explicitly. Consider three different fitness measures for cumulative selection: A) Similarity to a long-term fixed ideal. B) The outcome of a dice roll (rolled every time fitness is evaluated). C) A function of time, calculated using a complex combination of feedback and feedforward loops. Option A is WEASEL and B is "anything goes," but those aren't the only options. Unlike A, C has no long-term target, and unlike B, it doesn't produce random noise. A and C both produce results that single-step selection cannot.R0b
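One way to read R0b's three options in code, as a toy of my own; in particular the drifting criterion used for option C is purely illustrative and not R0b's construction, it only shows a fitness measure that changes over time without being random noise.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"

def fitness_A(s, ideal):
    """Option A: similarity to a fixed long-term ideal (WEASEL-style)."""
    return sum(a == b for a, b in zip(s, ideal))

def fitness_B(s):
    """Option B: 'anything goes' -- a fresh dice roll at every evaluation."""
    return random.random()

def fitness_C(s, t):
    """Option C (toy version): the rewarded pattern drifts deterministically
    with time t, so there is no fixed long-term target, yet the measure is
    not random noise and cumulative selection can still track it."""
    drifting = "".join(ALPHABET[(i + t) % len(ALPHABET)] for i in range(len(s)))
    return sum(a == b for a, b in zip(s, drifting))
```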
August 25, 2009
04:14 PM PDT
kairosfocus:
And, once we have effectively partitioned search, the D & M observations and calculations are applicable. That is, BOTH explicitly latched and implicitly latched searches follow the math of partitioned search.
No. D&M's math applies only to "explicitly latched" searches. The math assumes one offspring per generation and 100% mutation rate. "Implicit latching" cannot occur under those conditions, so the math does not apply to Dawkins' WEASEL or any other "implicitly latched" search.R0b
August 25, 2009
02:35 PM PDT
I am amazed. I have never experienced something like this. Watching it is funny, but being part of such a conversation is simply amazing. Everybody should try this once in a while. So, I will start again, just for the fun of watching you, kf, avoiding my points again completely:
The wording in the Blind Watchmaker gives no hint of latching. A video of Dawkins presenting the algorithm shows no latching. Dawkins says there is no latching. Latching is not needed for the algorithm to work (as kf has replicated). The algorithm is more complicated when it uses latching. Explicit latching is not something biologists would implement when modelling evolution: mutation rate is supposed to be independent of the resulting fitness. The only argument FOR latching I have seen is the fact that no mutation of correct letters is shown in the BW tables, which is easily explained by the fact that only the best members of a few generations were shown. There is no reason to believe one should see fitness-reducing mutations in this case.
So, let's compare the examples given by Dembski and Marks and by Dawkins again, shall we?
1. Correct letters
Correct letters don't stay fixed in the algorithm as intended by Dawkins. Dembski and Marks fix correct letters explicitly. I think everything has been said about this latching behaviour. I will just add again that the Dawkins version is much more representative of biological evolution.
2. Incorrect letters
Dembski and Marks replace *every* wrong letter with a new random letter. This means that subsequent search results are completely different at the beginning:
1: SCITAMROFN*IYRANOITULOVE*SAM
2: OOT*DENGISEDESEHT*ERA*NETSIL
Dawkins' algorithm works in a completely different way: from the parent search string he computes a population of daughter strings which are exact copies except for a fixed (and low) mutation rate per letter:
1: WDLTMNLT*DTJBKWIRZREZLMQCO*P
2: WDLTMNLT*DTJBKSIRZREZLMQCO*P
This is of course much more in line with biological evolution.
3. Population
This is related to point 2: Dembski and Marks have a population size of one (not really a population at all). From a parent string exactly one daughter string is computed. There is no selection involved! Dawkins generates a large population of daughter strings and selects the best one as the parent string for the next generation. Again, while it is an extremely simplified model of evolution, it at least models the selection part.
Summary: The two algorithms are completely different in almost every aspect. The one that Dawkins said he used (and everybody can reproduce the results easily) is a much better model of biological evolution:
- Correct letters are not fixed: mutation rate is independent of resulting fitness. Dawkins: ca. 5% for every letter; Dembski and Marks use an extremely unrealistic rate of 0%/100% for correct/incorrect letters.
- Selection is modelled.
- The effect of population sizes is modelled in the Dawkins version.
- The correct Weasel can follow a moving target.
So, once again, the algorithms are completely different and the latching behaviour is only a small part of this difference. And still you insist that the two algorithms are equivalent? Did I extract this correctly from your recent word-salads?Indium
August 25, 2009
02:12 PM PDT
PS: Onlookers, let's snip out the first few sentences of Indium at 14 and again see what their accuracy is in light of what has been shown:
_____________
>> The wording in the Blind Watchmaker gives no hint of latching.
--> False, cf 78: cumulative selection that rewards the slightest increment to target is about latching
--> Similarly, the print runs as shown show evident latching
--> first claim is a demonstrated falsehood
A video of Dawkins presenting the algorithm shows no latching.
--> Algor o/p c 1986 is the relevant o/p being discussed, and it credibly shows latching
--> that of 1987 was demonstrably quite different, and we have across this thread shown why that is likely to be so: detuning of the parameters, which materially affects behaviour.
Dawkins says there is no latching.
--> I believe his reported claim was that there was no EXPLICIT latching.
--> We have demonstrated implicit latching to be not only possible but instantiated
Latching is not needed for the algorithm to work
--> In the abstract, this is quite true: explicitly latched, implicitly latched, implicitly quasi-latched and fully unlatched runs are all very possible, and all will eventually converge on target.
--> In practice, CRD in 1986 showcased runs that latched and glowingly described their cumulative progress to target
--> And, lest we get lost on this side issue: the fundamental problem with Weasel is that it shows targetted, artificially selected progress on mere proximity without reference to function
--> And, relevant to M & D, the program does not CREATE novel information, it simply replicates a stored -- thus already existing -- target.
--> That is, it is based on active information and the advantage it creates for the particular search in view. >>
_______________
We could go on, but the point is made.kairosfocus
August 25, 2009
02:10 PM PDT
DL: You are simply wrong. Demonstrably so. Indeed, demonstratedly so. Denial will not change that. Good day. GEM of TKIkairosfocus
August 25, 2009
01:59 PM PDT
kairosfocus#203 I fear you have Morton's Demon sitting on your shoulder. Any objective reader of this and past related threads can see that Dembski and Marks have mischaracterized Dawkins' algorithm. Your protracted attempts to distract from this core issue, succinctly summarized by Indium at #14 (among several others), suggest that further discussion of this topic with you will be fruitless.DeLurker
August 25, 2009
01:56 PM PDT
DL: Pardon, your cognitive dissonance is also showing. Yours is another case of conceding the case on the undeniable merits while claiming victory. Kindly look at the cases reproduced at 193 and 195 [and long since linked, onlookers], comparing with the published Weasel o/p c. 1986. You will see that once letters go correct, they are credibly retained in certain ACTUAL cases, and you can see that this is an effect of pop sizes and mut rates that cause there to be at least one unchanged member in the pop per gen almost all the time, so that once there is a dominance of single mutations otherwise, we will see holds or single step advances, i.e. latching shows up implicitly based on the set-up of the algor and its parameters. And, once we have effectively partitioned search, the D & M observations and calculations are applicable. That is, BOTH explicitly latched and implicitly latched searches follow the math of partitioned search. And, D & M do not give a detailed algor; they describe what partitioned search is and then analyse its mathematical consequences. As to whether Weasels will move towards set targets, that is irrelevant to the point. As to the mantra that M & D have somehow "misrepresented" Dawkins' algor, saying it 50 times does not make it true in the teeth of the easily accessible evidence. They have drawn from the CRD-claimed cumulative approach to target, have looked at the published o/p and have assessed on the effect, partitioning of the search into [1] an active search part and [2] an already home part. The only clear inaccuracy I can therefore see is yours, I am afraid; so kindly refrain from projecting such to me. GEM of TKIkairosfocus
August 25, 2009
01:29 PM PDT
YD: Please read 78 above on just what the concept of active information and associated concepts do for ID. Then, revisit your comment just now, please. GEM of TKIkairosfocus
August 25, 2009
01:14 PM PDT
Comparison of M&D (Marks and Dembski) and D (Dawkins):
Fitness Function
M&D: position of correct letters
D: number of correct letters
Mutation Rate
M&D: up to 100%
D: works best with 4% - 5%
Monotony
M&D: it's impossible that a current generation is less fit than the previous one
D: it's improbable that a current generation is less fit than the previous one
Population
M&D: 1
D: >1 (it is worse than random search on a population of 1...)
kf, I give up, you are right, both M&D and D are describing the same algorithm!DiEb
August 25, 2009
01:14 PM PDT
Onlookers: I see Indium is now indulging in the tactic of conceding a case on the merits while claiming victory. That is sad. Let's review the basic case, as can be seen from the long since linked:
1 --> Weasel c 1986 is, by CRD's admission in BW, targetted, cumulatively progressive search that rewards mere proximity to a preset target built into the program, without reference to a reasonable threshold of functionality.
2 --> As such, it is irrelevant to the claimed blind watchmaker, random variation and natural selection, and it does not create de novo information. So, as CRD concedes, the program is fundamentally misleading, and it is my considered opinion, on the evidence of what CRD said and the impact it has plainly had for 23 years, that it serves only the rhetorical purpose of creating the IMPRESSION that small increments in changes in genomes can account for complex function.
3 --> One remarkable feature of the showcased 1986 runs is that they seem to latch the letters once they go correct.
4 --> On the evidence, one reasonable explanation thereof is EXPLICIT latching, as say by use of a mask register (which can also double as a distance to target metric).
5 --> Another is IMPLICIT latching due to co-tuning of pop size, mut rate per letter and filter characteristics.
6 --> On Mr Elsberry's report, we have accepted that c. 2000 CRD claimed that Weasel 1986 was not explicitly latched, leaving the implicit latching model as the best explanation on preponderance of evidence.
7 --> Strong and even strident objections were made above to the idea of implicit latching, and attempts were made -- are still being made or hinted at -- to suggest that implicit latching is a non-phenomenon, regardless of evidence. And, to suggest that Weasel is a credible simple model of what chance variation and natural selection -- Dawkins' blind watchmaker -- can do.
8 --> One hopes that at last the force of the correctives will be taken to heart, and that the resort on the part of Darwinists to distractions, distortions and demonisation of those who differ with them will be abandoned. GEM of TKIkairosfocus
August 25, 2009
01:11 PM PDT
kairosfocus#197
And since BOTH explicitly latched and implicitly latched searches will have this partitioned search effect [and will follow the same associated mathematics],
It would be more accurate to simply admit your error and use the term "unlatched" rather than "implicitly latched." Be that as it may, it is by no means obvious that the two, very different, algorithms "follow the same associated mathematics." Consider the case of a varying target, which is much more biologically realistic. The partitioned search described by Dembski and Marks will fail as soon as the target changes with respect to any latched letter. The Weasel algorithm described by Dawkins will begin to converge on the new target. Quite different behavior because they are different algorithms.
the discussion of the M & D “algorithm” that you make such heavy weather of is tilting at a windmill of your own manufacture.
You seem not to understand the importance placed on accuracy in the peer reviewed literature. Dembski and Marks have misrepresented Dawkins' algorithm. While it might be minor in terms of its impact on their conclusions, it does warrant a public correction.DeLurker
August 25, 2009
01:09 PM PDT
Jerry,
I guess you are right. Everybody here now agrees that Dawkins used a meaningless algorithm (WEASEL) on a meaningless topic (cumulative selection) to fill up part of his book with drivel. I am glad we are finally all on the same page.
Except apparently D & M, who maintain that their critique of weasel amounts to a pro-ID argument.yakky d
August 25, 2009
01:08 PM PDT
"Yes, kf, I guess everybody knows quite well how weasel works, thanks. I am happy that you are so excited about how well Weasel latches without explicit latching. Dawkins pedagogical intention seems to work well for you!" I guess you are right. Everybody here now agrees that Dawkins used a meaningless algorithm (WEASEL) on a meaningless topic (cumulative selection) to fill up part of his book with drivel. I am glad we are finally all on the same page. The question is if this part of his book is drivel, is the rest of it? Is everything that he says, drivel? Would anyone buy a used car from him?jerry
August 25, 2009
12:56 PM PDT
Indium: All M & D do in the published paper is discuss the effect of ratcheted, latched searches. Cf the text, p. 1055: ____________ >> Partitioned search [12] is a “divide and conquer” procedure best introduced by example. Consider the L = 28 character phrase METHINKS*IT*IS*LIKE*A*WEASEL. (19) Suppose that the result of our first query of L = 28 characters is SCITAMROFN*IYRANOITULOVE*SAM. (20) Two of the letters {E, S} are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished. For the incorrect letters, we select 26 new letters and obtain OOT*DENGISEDESEHT*ERA*NETSIL. (21) Five new letters are found, bringing the cumulative tally of discovered characters to {T, S, E, *, E, S, L}. All seven characters are ratcheted into place. The 19 new letters are chosen, and the process is repeated until the entire target phrase is found . . . >> _______________ Observe:
Partitioned search [12] is a “divide and conquer” procedure best introduced by example . . . . Two of the letters {E, S} are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished . . . . Five new letters are found, bringing the cumulative tally of discovered characters to {T, S, E, *, E, S, L}. All seven characters are ratcheted into place. The 19 new letters are chosen, and the process is repeated until the entire target phrase is found.
I remind, the printed off runs above demonstrate that implicitly latched search is a real, observable phenomenon. And since BOTH explicitly latched and implicitly latched searches will have this partitioned search effect [and will follow the same associated mathematics], the discussion of the M & D "algorithm" that you make such heavy weather of is tilting at a windmill of your own manufacture. GEM of TKIkairosfocus
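For what it is worth, the procedure quoted above is straightforward to run exactly as described: one string per query, correct letters kept, every wrong letter redrawn. The sketch below is my own, not the authors' code, and it redraws wrong letters from the full 27-character alphabet for simplicity; with those assumptions it typically finds the 28-letter target in on the order of a hundred queries.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def partitioned_search(start):
    """Correct letters are 'ratcheted into place'; every incorrect letter
    is replaced by a fresh random draw on each query."""
    current, queries = start, 1
    while current != TARGET:
        current = "".join(c if c == t else random.choice(ALPHABET)
                          for c, t in zip(current, TARGET))
        queries += 1
    return queries

start = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
print(partitioned_search(start))   # typically on the order of 100 queries
```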
August 25, 2009
12:52 PM PDT
Yes, kf, I guess everybody knows quite well how Weasel works, thanks. I am happy that you are so excited about how well Weasel latches without explicit latching. Dawkins' pedagogical intention seems to work well for you! The point, however, is that Dembski and Marks describe a completely different algorithm. No population, no selection. Just a 100% mutation rate of incorrect letters from a single string.Indium
August 25, 2009
12:39 PM PDT
kairosfocus, Thank you for taking the time to type that lengthy run to illustrate your point. I have but a single, simple question.
Finally, I did a 500 pop at 4% run as run no 2. In 31 gens it hit target, i.e the tail effect shows up.
Would it be possible for you to do the same with Dembski's version and show the output also? My question is related to that, you see: what I don't understand currently is why the string in generation 2 of Weasel
1. HIMMITFEBTIYEVJHKWLSQZBWWZHW 2. MIMMITFEBTIYEVJHKWLSQZBWWZHW
is almost identical to generation one. From the output I've seen of Dembski's version, the generation 1 and 2 strings are totally different. If they are the same, apart from the fact that one latches and one does not, would it be possible for you to explain to me why the strings across a single generation show very little change in "Dawkins' Weasel" but appear to be essentially random strings in "Dembski's latching Weasel"? I would not expect such different outputs if the only difference was that one latched the correct letters when found. Thanks in advance.Blue Lotus
August 25, 2009
12:30 PM PDT
While we are at it, from later in the same earlier UD thread, at 236, a quasi-latched case:
+++++++++++
>> Run C, 500 /gen, 8% mutation rate:
_______________
1. QB NRQWFVIDGVT FLOPLWCGHLIJM
2. MB NRQWFVIDVVT FLOPLW GHLIJV
3. MB NRQWFVIDVVT FLOPNW GZLIEV
4. ME NRQWFVITVVT FLOPNW GZLIEV
5. ME NRQWFVITVVT LLOPNW GZLBEV
6. MEXNRQWFVITVTT LLKPNW GZLBEV
7. MEXNRQKFVITVTT LLKPTW GZLBEV
8. MEXNRRKFVITVTT LLKPTW CXLBEL
9. MEXNRRKFVIT TT LLKETW CXYBEL
10. MEXNRIKFVIT TT LLKETW CEYBEL
11. MEXNRIKFVIT TT LLKETA CEYBEL
12. MEXNRIKFVIT TT LLKETA CEABEL
13. MEXNRIKFVIT TT LIKERA REABEL
14. MEXNRIKFVIT TS LIKERA REABEL
15. MEKNRIKFVIT TS LIKERA REASEL
16. MEKNRIKSVIT TS LIKERA REASEL
17. MEKNIIKSVIT TS LIKERA REASEL
18. MEKNINKSVIT TS LIKERA REASEL
19. MEKNINKSVIT IS LIKERA REASEL
20. MEKNINKS IT IS LIKEDA REASEL
21. MEKNINKS IT IS LIKE A REASEL
22. MEKHINKS IT IS LIKE A REASEL
23. MEKHINKS IT IS LIKE A REASEL
24. METHINKS IT IS LIKE A REASEL
25. METHRNKS IT IS LIKE A WEASEL
26. METHRNKS IT IS LIKE A WEASEL
27. METHHNKS IT IS LIKE A WEASEL
28. METHHNKS IT IS LIKE A WEASEL
29. METHHNKS IT IS LIKE A WEASEL
30. METHHNKS IT IS LIKE A WEASEL
31. METHHNKS IT IS LIKE A WEASEL
32. METHHNKS IT IS LIKE A WEASEL
33. METHHNKS IT IS LIKE A WEASEL
34. METHHNKS IT IS LIKE A WEASEL
35. METHINKS IT IS LIKE A WEASEL
_____________
Observe the reversion at 24/25, and the time it took to recover. Indeed, the reverted letter was the last one to go correct in the end. Case C exhibits Quasi latching, with letter reversion and recovery, with a high pop size per gen and a high per letter mutation rate. >>
+++++++++++
We may add that the case also shows a substitution effect at 24/25, one of the tail of distribution effects discussed. GEM of TKIkairosfocus
August 25, 2009
12:26 PM PDT
kf, nobody disputes that Weasel is a targetted search. I dispute that Dembski and Marks use the correct algorithm in their paper. Except for the latching issue (which I said is rather minor) you have chosen not to comment on my points so far.Indium
August 25, 2009
12:07 PM PDT
Footnote: The longstanding linked implicit latching and/or quasi latching runs, from the just linked: +++++++++++++ >> . . . out of the box, at 50 members per generation and with 4% per letter mutation rate, proximity reward search latched or so close to latched on my first run, as makes no difference. You will also see predominance of both no-change cases and of the single step advances, just as Joseph and I have remarked on. Finally, I did a 500 pop at 4% run as run no 2. In 31 gens it hit target, i.e the tail effect shows up. QED. _______________ RUN A: 50/gen, 4% per letter mut rate: 1. HIMMITFEBTIYEVJHKWLSQZBWWZHW 2. MIMMITFEBTIYEVJHKWLSQZBWWZHW 3. MIMMITFEBTIYEVJHKWL QZBWWZHW [ . . . ] 27. MEIFINKE IT DS KGKL A VEXJXT 28. MEIFINKE IT DS KGKL A VEXIXT 29. MEIFINKE IT DS KGKL A VEXIXT 30. MEIFINKE IT DS LGKL A VEXIXT 31. MEIFINKE IT DS LGKL A VEXIXT 32. MEIFINKE IT DS LGKL A VEXIXT 33. MEIFINKE IT DS LGKF A VEXZXT 34. MEIFINKE IT DS LGKF A VEXZXT 35. MEIHINKE IT DS LGKF A VEXZXT 36. MEIHINKE IT DS LNKF A VEXZXT 37. MEIHINKE IT VS LNKF A VEXZXT 38. METHINKE IT VS LNKF A VEXZXT 39. METHINKE IT VS LNKF A VEXZXT 40. METHINKE IT VS LNKF A VEXZXT 41. METHINKE IT VS LNKF A VEXZXT 42. METHINKE IT VS LNKF A VEXZXT 43. METHINKE IT VS LIKF A VEXZBT 44. METHINKE IT VS LIKF A VEXZBT 45. METHINKE IT VS LIKF A WEXZBT 46. METHINKE IT VS LIKF A WEXZBT [ . . . ] 62. METHINKS IT GS LIKK A WEXSBG 63. METHINKS IT GS LIKK A WEXSBG 64. METHINKS IT NS LIKK A WEXSBG 65. METHINKS IT NS LIKK A WEXSBG 66. METHINKS IT NS LIKK A WEXSBG 67. METHINKS IT NS LIKK A WEXSBG [ . . . ] 120. METHINKS IT IS LIKE A WEASEG 121. METHINKS IT IS LIKE A WEASEG 122. METHINKS IT IS LIKE A WEASEG 123. METHINKS IT IS LIKE A WEASEG 124. METHINKS IT IS LIKE A WEASEG 125. METHINKS IT IS LIKE A WEASEG 126. METHINKS IT IS LIKE A WEASEG 127. METHINKS IT IS LIKE A WEASEG 128. METHINKS IT IS LIKE A WEASEG 129. METHINKS IT IS LIKE A WEASEG 130. METHINKS IT IS LIKE A WEASEG 131. METHINKS IT IS LIKE A WEASEL _________________ RUN B, 500 pop/gen, 4% per letter mut rate: 1. MEL LSI YHXMAJLMDGMVKTSKGW 2. MEL LSI YHXIAJLMDNMVKTSKGW 3. MEL LSI YHXISJLMDNMJKTSKGW 4. MEL LSI YHXISJLMDN JKTSKGW 5. MEL LNI YHXISJLDDN JKTSKGW 6. MEL LNI YHXISJLDDN JKTEKGW 7. MEL LNB BHXISJLDDN JKTEKGE 8. MEL LNB BHXISJLIDN JKTEKGE 9. MEL LNB BHXISJLIDN JKTEKSE 10. MEL LNB BHXISJLIDN JKTEKSEL 11. MEL LNK BHXISJLIDN JKTEKSEL 12. MEL LNK BHXIS LIDN JKTEKSEL 13. MET LNKV BHXIS LIDN JKTEKSEL 14. MET LNKV BHXIS LIDN AKTEKSEL 15. MET LNKV BHXIS LIDE AKFEKSEL 16. MET LNKV BHXIS LIKE AKFEKSEL 17. MET LNKS BHXIS LIKE AKFEKSEL 18. MET LNKS BH IS LIKE AKFEKSEL 19. MET LNKS BH IS LIKE AKFEKSEL 20. MET LNKS BH IS LIKE AKWEKSEL 21. MET INKS BH IS LIKE AKWEKSEL 22. MET INKS BH IS LIKE AKWEKSEL 23. MET INKS BH IS LIKE AKWEKSEL 24. MET INKS IH IS LIKE AKWEKSEL 25. MET INKS IH IS LIKE A WEKSEL 26. MET INKS IH IS LIKE A WEASEL 27. MET INKS IH IS LIKE A WEASEL 28. METHINKS IH IS LIKE A WEASEL 29. METHINKS IH IS LIKE A WEASEL 30. METHINKS IH IS LIKE A WEASEL 31. METHINKS IT IS LIKE A WEASEL ______________ The matter in the main now settled . . . >> ++++++++++++ Trust this helps clear the air. Notice too how a 500 pop sample size seems to have sped up the process wonderfully. And with the side issue settled, we can address the fact that Weasel 1986 is targetted search that rewards proximity not function, and so is irrelevant to the real challenge, apart from showing the impact of active information. GEM of TKIkairosfocus
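Anyone curious how population size and mutation rate trade off, and how often a previously correct letter in the line of champions actually reverts, can check the qualitative pattern with a few lines of code. This is not the EIL applet and not anyone's published program; it is a rough sketch under the assumptions used in this thread (27-character alphabet, best-of-population selection), so only the general trend should be read from it.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def run(pop_size, rate):
    """Dawkins-style weasel. Returns (generations, reversions), where
    'reversions' counts every previously correct letter of the champion
    that goes wrong again in a later generation."""
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = reversions = 0
    while parent != TARGET:
        children = ["".join(random.choice(ALPHABET) if random.random() < rate else c
                            for c in parent)
                    for _ in range(pop_size)]
        best = max(children, key=fitness)
        reversions += sum(p == t and b != t
                          for p, b, t in zip(parent, best, TARGET))
        parent = best
        generations += 1
    return generations, reversions

for pop_size, rate in [(50, 0.04), (500, 0.04), (500, 0.08)]:
    print(pop_size, rate, run(pop_size, rate))
```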
August 25, 2009
11:48 AM PDT