Uncommon Descent Serving The Intelligent Design Community

The Original WEASEL(s)


On August 26th of last month, Denyse O’Leary posted a contest here at UD asking for the original WEASEL program(s) that Richard Dawkins used back in the mid-1980s to show how Darwinian evolution works. Although Denyse’s post has generated 377 comments (thus far), none of the entries could reasonably be thought to be Dawkins’s originals.

It seems that Dawkins used two programs, one in his book THE BLIND WATCHMAKER, and one for a video that he did for the BBC (here’s the video-run of the program; fast forward to 6:15). After much beating the bushes, we finally heard from someone named “Oxfordensis,” who provided the two PASCAL programs below, which we refer to as WEASEL1 (corresponding to Dawkins’s book) and WEASEL2 (corresponding to Dawkins’s BBC video). These are by far the best candidates we have received to date.

Unless Richard Dawkins and his associates can show conclusively that these are not the originals (either by providing originals in their possession that differ, or by demonstrating that these programs in some way fail to perform as required), we shall regard the contest as closed, offer Oxfordensis his/her prize, and henceforward treat the programs below as the originals.


WEASEL1:

Program Weasel;

Type
  Text = String[28];

(* Define Parameters *)
Const
  Alphabet: Text = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
  Target: Text = 'METHINKS IT IS LIKE A WEASEL';
  Copies: Integer = 100;

Function RandChar: Char;
(* Pick a character at random from the alphabet string *)
Begin
  RandChar := Alphabet[Random(27) + 1];
End;

Function SameLetters(New: Text; Current: Text): Integer;
(* Count the number of letters that are the same *)
Var
  I: Integer;
  L: Integer;
Begin
  L := 0;
  I := 0;
  While I <= Length(New) do
  Begin
    If New[I] = Current[I] Then
      L := L + 1;
    I := I + 1;
  End;
  SameLetters := L;
End;

Var
  Parent: Text;
  Child: Text;
  Best_Child: Text;
  I: Integer;
  Best: Integer;
  Generation: Integer;

Begin
  Randomize; (* Initialize the Random Number Generator *)

  (* Create a Random Text String *)
  Parent := '';
  For I := 1 to Length(Target) do
  Begin
    Parent := Concat(Parent, RandChar)
  End;
  Writeln(Parent);

  (* Do the Generations *)
  Generation := 1;
  While SameLetters(Target, Parent) <> Length(Target) + 1 do
  Begin
    (* Make Copies *)
    Best := 0;
    For I := 1 to Copies do
    Begin
      (* Each Copy Gets a Mutation *)
      Child := Parent;
      Child[Random(Length(Child)) + 1] := RandChar;
      (* Is This the Best We've Found So Far? *)
      If SameLetters(Child, Target) > Best Then
      Begin
        Best_Child := Child;
        Best := SameLetters(Child, Target);
      End;
    End;
    Parent := Best_Child;
    (* Inform the User of any Progress *)
    Writeln(Generation, ' ', Parent);
    Generation := Generation + 1;
  End;
End.
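For readers without a Turbo Pascal compiler, the algorithm of WEASEL1 can be sketched in modern Python. This is a hypothetical re-implementation for illustration only, not Dawkins's code, and it ignores Turbo Pascal's length-byte quirk (index 0 of a Pascal string holds its length, which is why the Pascal loop tests against Length + 1). Each generation, 100 copies of the parent each receive exactly one mutation at a random position, and the highest-scoring copy becomes the next parent.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space, as in the Pascal Const
TARGET = "METHINKS IT IS LIKE A WEASEL"


def same_letters(a, b):
    """Count positions at which the two strings agree."""
    return sum(1 for x, y in zip(a, b) if x == y)


def weasel1(copies=100, seed=None):
    """Run a WEASEL1-style search; return the generation count, or None if capped."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    for generation in range(1, 10_001):  # safety cap, for illustration only
        best_score, best_child = -1, parent
        for _ in range(copies):
            # Each copy gets exactly one mutation at a random position
            # (which may redraw the same character, as in the Pascal code).
            pos = rng.randrange(len(TARGET))
            child = parent[:pos] + rng.choice(ALPHABET) + parent[pos + 1:]
            score = same_letters(child, TARGET)
            if score > best_score:
                best_score, best_child = score, child
        parent = best_child
        if parent == TARGET:
            return generation
    return None
```

Note that because `>` (not `>=`) selects the champion, the first child to reach the best score wins ties, just as in the Pascal listing.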

WEASEL2:

PROGRAM WEASEL;
USES
  CRT;

(* RETURN A RANDOM LETTER *)
FUNCTION RANDOMLETTER : CHAR;
VAR
  NUMBER : INTEGER;
BEGIN
  NUMBER := RANDOM(27);
  IF NUMBER = 0 THEN
    RANDOMLETTER := ' '
  ELSE
    RANDOMLETTER := CHR( ORD('A') + NUMBER - 1 );
END;

(* MEASURE HOW SIMILAR TWO STRINGS ARE *)
FUNCTION SIMILARITY(A : STRING; B : STRING) : INTEGER;
VAR
  IDX : INTEGER;
  SIMCOUNT : INTEGER;
BEGIN
  SIMCOUNT := 0;
  FOR IDX := 0 TO LENGTH(A) DO
  BEGIN
    IF A[IDX] = B[IDX] THEN
      SIMCOUNT := SIMCOUNT + 1;
  END;
  SIMILARITY := SIMCOUNT;
END;

FUNCTION RANDOMSTRING(LEN : INTEGER) : STRING;
VAR
  I : INTEGER;
  RT : STRING;
BEGIN
  RT := '';
  FOR I := 1 TO LEN DO
  BEGIN
    RT := RT + RANDOMLETTER;
  END;
  RANDOMSTRING := RT;
END;

VAR
  X : INTEGER;
  TARGET : STRING;
  CURRENT : STRING;
  OFFSPRING : STRING;
  TRIES : LONGINT;
  FOUND_AT : INTEGER;
BEGIN
  RANDOMIZE;
  CLRSCR;

  WRITELN('Type target phrase in capital letters');
  READLN(TARGET);

  (* PUT SOME STRING ON THE SCREEN *)
  TEXTCOLOR(GREEN);
  GOTOXY(1, 6);
  WRITELN('Target');
  GOTOXY(10, 6);
  WRITELN(TARGET);

  TEXTCOLOR(BLUE);
  GOTOXY(1, 13);
  WRITELN('Darwin');

  TEXTCOLOR(BLUE);
  GOTOXY(1, 19);
  WRITELN('Random');

  TEXTCOLOR(WHITE);
  GOTOXY(1, 25);
  WRITE('Try number');

  (* PICK A RANDOM STRING TO START DARWIN SEARCH *)
  CURRENT := RANDOMSTRING(LENGTH(TARGET));

  (* RUN THROUGH MANY TRIES *)
  FOUND_AT := 0;
  FOR TRIES := 1 TO 100000 DO
  BEGIN
    (* Darwin *)
    OFFSPRING := CURRENT;
    OFFSPRING[ 1 + RANDOM(LENGTH(OFFSPRING)) ] := RANDOMLETTER;

    GOTOXY(10, 13);
    WRITELN(OFFSPRING, ' ');

    IF( SIMILARITY(OFFSPRING, TARGET) >= SIMILARITY(CURRENT, TARGET) ) THEN
      CURRENT := OFFSPRING;

    IF( (SIMILARITY(CURRENT, TARGET) = LENGTH(TARGET)) AND (FOUND_AT = 0) ) THEN
    BEGIN
      (* TELL THE USER WHAT WE FOUND *)
      FOUND_AT := TRIES;
      GOTOXY(1, 15);
      TEXTCOLOR(BLUE);
      WRITELN('Darwin');
      TEXTCOLOR(WHITE);
      GOTOXY(9, 15);
      WRITELN('reached target after');
      GOTOXY(37, 15);
      TEXTCOLOR(BLUE);
      WRITELN(FOUND_AT);
      WRITE('tries');
      TEXTCOLOR(WHITE);

      GOTOXY(1, 21);
      TEXTCOLOR(BLUE);
      WRITE('Random');
      TEXTCOLOR(WHITE);
      WRITELN(' would need more than ');
      TEXTCOLOR(BLUE);
      WRITELN('1000000000000000000000000000000000000000');
      TEXTCOLOR(WHITE);
      WRITE('tries');
    END;

    (* Random *)
    GOTOXY(10, 19);
    WRITELN(RANDOMSTRING(LENGTH(TARGET)), ' ');

    GOTOXY(27, 25);
    WRITE(TRIES, ' ');
  END;

  GOTOXY(1, 20);
End.
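Stripped of its CRT screen handling, WEASEL2's search has no generations at all: each try mutates one randomly chosen position of a single string, and keeps the result whenever it matches the target at least as well as before. A hypothetical Python sketch of that core loop (again not the original program, and ignoring the Pascal length-byte quirk in SIMILARITY):

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # the space plays the role of RANDOM(27) = 0


def similarity(a, b):
    """Count positions at which the two strings agree."""
    return sum(1 for x, y in zip(a, b) if x == y)


def weasel2(target, max_tries=100_000, seed=None):
    """Hill-climb toward `target`; return the try at which it is reached, else None."""
    rng = random.Random(seed)
    current = "".join(rng.choice(ALPHABET) for _ in range(len(target)))
    for tries in range(1, max_tries + 1):
        # One offspring per try, with a single randomly chosen position rewritten.
        pos = rng.randrange(len(target))
        offspring = current[:pos] + rng.choice(ALPHABET) + current[pos + 1:]
        # Accept ties as well as improvements, mirroring the Pascal ">=" test.
        if similarity(offspring, target) >= similarity(current, target):
            current = offspring
        if current == target:
            return tries
    return None
```

The `>=` acceptance means the current string can drift among equally good strings, but its similarity score never decreases, which is why this single-copy scheme still converges.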

Comments
Moseph: I have no wish to go down the latest red herring track, having shown that the likely form of the original Weasel is compatible with implicit latching and ratcheting. I have also long since -- April 9, 2009 -- shown that on the per-letter mutant understanding, implicit latching-ratcheting is demonstrated. In short, the evidence of the published, showcased Weasel 1986 runs is now accounted for, on (i) targeted, proximity-rewarding search, and (ii) the apparent latching, or at least quasi-latching, effect in the showcased runs. The first of these directly implies a fundamental dis-analogy to the claimed natural process, which has to have arrival of complex, information-based function before we can get to selection on competitive reproductive success of sub-populations. The second is simply a minor puzzle that has significance only insofar as it helps point to the first. GEM of TKI PS: On the latest side issue, I have offered evidence and principles that, on preponderance of evidence, point to the probable author of the program, c. 1986. This is not an absolute proof (and it is on a secondary or even tertiary matter), but it is enough for a prudential decision. PPS: As to the inference to and snide accusation of child abuse, this is of course without evidence. The person offering it as if it were comparable to the case that, on balance of evidence, W1 and W2 are probably authentic should reflect on the point noted in 42 [which also shows that a per-child-phrase interpretation is unexpectedly subtly and deeply compatible with the statement in BW], in light of the further provenance that credibly is Oxford [where one would expect materials originating in Oxford to be] and the nature as a c. 1980s "simple" Pascal program implementing Weasel-like algorithms [not a popular exercise at that time], compatible with the 1986 showcased result for W1, and the distinctive features of the 1987 BBC Horizon video for W2.
I note that someone above says that on 1980s TurboPascal, the programs behave as expected [recent composition being fairly unlikely]. Mr Elsberry's response is that it is not biological enough -- but targeted search rewarding non-functional phrases on mere proximity [as is explicitly acknowledged as a "cheat"] is fundamentally a-biological to begin with. Mr Dawkins' statement so far seems to be "I have no recollection." Let's see if more evidence emerges and which way it will tip or re-tip the balance: e.g. a clear repudiation by Mr Dawkins (which will have to resolve the "I do not remember" above). PPPS: Dieb, I pointed out that EIL hosts several different algorithms, and that a case where we move from two correct to seven correct letters in the first step is rather unlikely to come from a real run of a real algorithm. It is, however, very compatible with a didactic illustration; though in principle it is logically possible.

kairosfocus
September 22, 2009, 12:44 PM PDT
Dieb, I'm not sure if you're joking or not, but the example is obviously made up. (Read the sequences backwards.) But you are correct that the algorithm is described in the text, and any chance of ambiguity is eliminated by the math.

R0b
September 22, 2009, 12:12 PM PDT
kf & Mr. Dembski: kf claims - re the example which is stated on p. 1055 of your (and R. Marks's) paper - that
M[arks] & D[embski] do not describe an algorithm, they give an unrealistic illustration.
while I think that your description is clear enough to implement an actual program, and that the example you gave is a run of such a program. Could you please tell us who is right?

DiEb
September 22, 2009, 11:25 AM PDT
Based on the comments of AndrewFreeman, I personally lean toward these programs being genuine, although I don't lean very far in that direction. I think that, contrary to the last sentence of Dawkins' response, there are more things that he could say about it. For instance, he could tell us whether it was he or someone else who coded the program seen in the BBC video.

R0b
September 22, 2009, 10:32 AM PDT
Mr Dembski, Given Mr Dawkins' comments and Wesley R. Elsberry's comments (as posted by him publicly, which I reproduce below as linking to the site where they appear has been forbidden by the site moderator) have you now decided that these 2 versions of Weasel are to be treated as the originals or not?
I sent a response to a few of the folks on the list, including Dembski and Dawkins: The first seems unlikely due to the section following: " (* Each Copy Gets a Mutation *)" Putting mutation on a per-copy basis rather than per-base would be rather unlike the biology. The second shares the same fault, though coded somewhat differently: " (* Darwin *) OFFSPRING := CURRENT; OFFSPRING[ 1 + RANDOM(LENGTH(OFFSPRING)) ] := RANDOMLETTER; " The fact remains that "weasel" implementations were not based on "partitioned search" as claimed in Dembski and Marks' recent paper, a point that Dembski implicitly concedes by his attempted elevation of these two programs without provenance, and further, that other "weasel" style programs can illustrate the point at argument in "The Blind Watchmaker" while allowing a small finite chance of mutation at every base or symbol in the generation of new candidates. Wesley
Moseph
September 22, 2009, 10:01 AM PDT
I've sent the two programs to Richard Dawkins so that he can either confirm or disconfirm their authenticity. I heard back from him. The relevant portion of his email for this discussion reads: "I cannot confirm that either of them is mine. They don't look familiar to me, but it is a long time ago. I don't see what more I can say."

William Dembski
September 22, 2009, 09:25 AM PDT
Kairosfocus
But mere announcements and claims that the code above is not CRD’s code are not enough for that.
Erm, I'm sorry but there is no evidence whatsoever that the code above was written by CRD. None whatsoever. The claim that it *is* his code is the one which needs to be supported.
Unless Richard Dawkins and his associates can show conclusively that these are not the originals
Unless Kairosfocus can conclusively show that he does not beat small children, then we can only assume that he does so, and on a regular basis, and with considerable relish. See, it's not really a fair way to make a point, is it?

Moseph
September 22, 2009, 08:03 AM PDT
Kairosfocus
Also on what I have seen of Weasel C 1986 as described in BW, there is in fact [cf above] no specification that there is a per letter application of a probability filter to mutate.
Then what specification was given in BW regarding per-letter mutation rates? Is it or is it not your position that each candidate string can only have a single letter mutated, then, as described in BW?

Moseph
September 22, 2009, 07:56 AM PDT
Dieb: You have shown, according to your own data, that something like 1 in 200 runs at the 100-population level will NOT show an implicit latching effect. (This fits in with the rising odds of at least one no-change backstop child being present per generation as population size rises. [And yes, there are various holes in the algorithm underlying the presented program -- "uncovered" possible cases.]) Thus, the sort of runs showcased in BW and New Scientist are very observable under the circumstances in the newly posted W1. So also, with the desire being to showcase "cumulative selection," and those odds, what would be the likelihood that a run that did not latch implicitly would be chosen? My guess: not very high at all. GEM of TKI PS: As to the claimed divergences of algorithms etc., what I will say is that the descriptions to date in BW etc. do not sufficiently specify an algorithm to exclude the sort of program we are seeing. And, once we are looking at something that is so likely to latch [implicitly] and so to ratchet with rather low odds of slipping on the "dog," an analysis on latching is a reasonable thing to do. Similarly, the set of algorithms at EIL makes it rather strawmannish to insist that the IEEE paper is presenting a "the" M & D algorithm on p. 1055.

kairosfocus
September 22, 2009, 07:42 AM PDT
Moseph: You will see that all that was needed to explain was that in some runs, we will get implicit latching and ratcheting to target of generational champions. There are two candidate mechanisms, explicit and implicit latching-ratcheting. Implicit latching and ratcheting just has to be sufficiently common to be observable and in a context where it would be likely showcased if observed. And BW c 1986 is such a context. Also on what I have seen of Weasel C 1986 as described in BW, there is in fact [cf above] no specification that there is a per letter application of a probability filter to mutate. A per phrase application of a single mut would work on the description I have in hand. Indeed, zooming in, I find the following excerpt very interesting:
>>it [Weasel] duplicates it [the seed phrase for a generation] repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying . . . >> a --> duplication of the seed phrase looks literally accurate to the code for W1 b --> Application of a certain chance of error to the copying suggests on closer inspection a phrase-wise mutation event c --> The showcased runs show as well that about 1/2 the time, no change wins, in a context where we should likely see about 1 in 50 gens uncovered by a no-change backstop at 100 per gen, which would be what blocks reversion for the other 49/50 or so in the gen champs. d --> And single step advances predominate otherwise, in a context where the code above would impose no more than one such change per mutant. e --> So some runs at rates sufficient to be observable for repeated runs, will implicitly latch, once pop is set to a reasonable level.
Of course, if you have information that tells us otherwise, credibly, that shifts the balance on the evidence. But mere announcements and claims that the code above is not CRD's code are not enough for that. And, the above code (with appropriate population levels, the mutation rate now being fixed as reasonably low) will implicitly latch often enough to show the point, with quasi-latching predominating otherwise. GEM of TKI

kairosfocus
September 22, 2009, 07:20 AM PDT
Kairosfocus, It seems to me this set of Weasels are not "the" Weasel for a few simple reasons. Mutation is on a per-copy basis rather than a per-base basis. The original Weasel as described in TBW allows for mutations in every base during the generation of new characters. These Weasels do not. An interesting analysis here: http://dieben.blogspot.com/ Dieb notes
So, even with a generation size of one hundred children, one in two hundred runs will show that this algorithm doesn't latch - it's not what W. Dembski and R. Marks describe as a partitioned search.
So, not Weasel after all then...

Moseph
September 22, 2009, 06:20 AM PDT
Okay: Late for the party -- busy elsewhere (and my ISP seems to prioritise phone over Internet service on quality . . . ). Looks like someone from it seems Oxford has in fact won the Contest 10. Let's see, therefore, where we have come out (and of course the below is subject to correction, esp. on my reading of the Pascal Code!): 1] It seems we have two credible "original" Weasels, thanks to an anonymous donor at it seems Oxford. 2] Provenance is thus about right, chain of custody is reasonable, and there are no signs of obvious fraud, so on the Ancient Documents Rule -- failing credible explanation otherwise [i.e burden of disproof is now on those who would reject the programs] -- it seems on preponderance of evidence these are the right "original" pgms, PASCAL version at least. (The BASIC version would be interesting . . . ) 3] Surprise -- not -- TWO versions, W1 seems to be what was in the book (and NewScientist) and W2 seems to be the version in the 1987 BBC Horizon video. 4] W2 is indeed significantly different from W1 (despite many expectations to the contrary on the part of Darwinists), and has an entirely different dynamic, one that is set up for video; as was suspected. (And W2 does not seem to have generational clustering.) --> Diverse performance is accounted for . . . 5] As expected, both W1 and W2 are targetted search, rewarding plainly non-functional strings on mere proximity to target. Target-proximity is what is "fitness" in W1 and W2; measured on a letter-wise comparison to the target:
W1: [Set target:] Target:Text=’METHINKS IT IS LIKE A WEASEL’; [ . . . . ] [Measure proximity:] (* Is This the Best We’ve Found So Far? *) If SameLetters(Child, Target) > Best Then Begin Best_Child:=Child; Best:=SameLetters(Child, Target); End; End; Parent:=Best_Child; (* Inform the User of any Progress *) Writeln(Generation, ‘ ‘, Parent); Generation:=Generation+1; End; End. W2: [Set target:] CLRSCR; WRITELN(’Type target phrase in capital letters’); READLN(TARGET); (* PUT SOME STRING ON THE SCREEN *) TEXTCOLOR(GREEN); GOTOXY(1, 6); WRITELN(’Target’); GOTOXY(10, 6); WRITELN(TARGET); [ . . . . ] [Measure proximity:] (* MEASURE HOW SIMILAR TWO STRINGS ARE *) FUNCTION SIMILARITY(A : STRING; B : STRING) : INTEGER; VAR IDX : INTEGER; SIMCOUNT : INTEGER; BEGIN SIMCOUNT := 0; FOR IDX := 0 TO LENGTH(A) DO BEGIN IF A[IDX] = B[IDX] THEN SIMCOUNT := SIMCOUNT + 1; END; SIMILARITY := SIMCOUNT; END;
6] Thus, "fitness" a la Weasel, is completely dis-analogous to fitness of life forms: life forms must function on highly complex, algorithmically specific information at cellular levels, to live and reproduce; and life forms per NDT do not cumulatively progress to a preset optimum target point. 7] Thus, Dawkins' acknowledgement of a "cheat." (And, W1 and W2 gain over what he dismissed as "single-step selection" [i.e. what Hoyle, Schurtzenberger et al have pointed to: need for complex information based function before fitness "arrives"] by using active, designer-input target information and a measure of hotter-colder.) 8] Thus also, Weasels W1 and W2 both fall under the principal concern that as targetted searches that reward mere proximity in the absence of complex function, they are fundamentally dis-analogous to any claimed capability of Darwinian evolution by chance variation plus probabilistic culling on relative fitness. 9] Now also, W1 plainly does not explicitly latch and ratchet on already successful letters. (W2 will not latch at all, but this is already known to be different from the showcased runs c 1986.] 10] However, W1 seems set up to do two things: (i) to give a generation of size 100, and (ii) to force just one mutation per member of the population. (Recall, 1 of 27 times, a mutation will return the same original value.) 11] This means that odds are about 98% that there will be non-changed members of the pop, and that if the best is that, it will be passed down to the next generation as this gen's champion and seed for the next gen. 12] Already, double or triple etc mutation effects have been eliminated by the algorithm, so in at least some runs we will likely see preservation of achieved characters plus increments of one character. [The typo suggestion seems good enough on the claimed double change.] 13] In short, implicitly latched runs are possible and if these are seen c. 
1986 as giving "best" results on "cumulative selection," they would credibly be showcased. 14] Similarly, quasi-latched runs are possible, with occasional (relatively infrequent) reversions. {I think this case will predominate in the pop of runs.} 15] On W1 (the relevant case for the showcased o/p c. 1986), far from latched runs are unlikely. _______________ Thus, after months we see that while explicit latching was credibly not used in Weasel c 1986, implicit latching is a possible explanation of the showcased runs of Weasel c 1986, and that quasi-latched runs (ratcheting with occasional slips) are likely to predominate in the population of runs of the program. Perhaps we could get some sample runs from the gentleman with 1980's era Turbo Pascal? GEM of TKI PS: To remind us, here are the showcased runs c 1986: _________________ >> We may conveniently begin by inspecting the published o/p patterns circa 1986, thusly [being derived from Dawkins, R, The Blind Watchmaker , pp 48 ff, and New Scientist, 34, Sept. 25, 1986; p. 34 HT: Dembski, Truman]: 1 WDL*MNLT*DTJBKWIRZREZLMQCO*P 2? WDLTMNLT*DTJBSWIRZREZLMQCO*P 10 MDLDMNLS*ITJISWHRZREZ*MECS*P 20 MELDINLS*IT*ISWPRKE*Z*WECSEL 30 METHINGS*IT*ISWLIKE*B*WECSEL 40 METHINKS*IT*IS*LIKE*I*WEASEL 43 METHINKS*IT*IS*LIKE*A*WEASEL 1 Y*YVMQKZPFJXWVHGLAWFVCHQXYPY 10 Y*YVMQKSPFTXWSHLIKEFV*HQYSPY 20 YETHINKSPITXISHLIKEFA*WQYSEY 30 METHINKS*IT*ISSLIKE*A*WEFSEY 40 METHINKS*IT*ISBLIKE*A*WEASES 50 METHINKS*IT*ISJLIKE*A*WEASEO 60 METHINKS*IT*IS*LIKE*A*WEASEP 64 METHINKS*IT*IS*LIKE*A*WEASEL >> ________________ PPS: And, this is Mr Dawkins' commentary in BW (with my remarks in parentheses): ____________ >> It [Weasel] . . . begins by choosing a random sequence of 28 letters [which is of course by overwhelming probability non-functional] … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. 
The computer examines the mutant nonsense [= non-functional] phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target [so, targetted search] phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection [cumulative implies progress by successive additions, and in the context of the showcased gives rise to the implications of latching and ratcheting, which is -- your blanket denial in the face of frequently presented detailed evidence notwithstanding (so you either know or should know better) -- demonstrated to happen implicitly as well as explicitly], and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection [dismissive, question-begging reference to the requirement of function for selection] . . . more than a million million million times as long as the universe has so far existed [i.e. acknowledges the impact of intelligently injected purposeful, active info on making the otherwise practically impossible becvome very feasible] . . . . Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection [in more ways than one!], it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection . . . In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. >> ____________kairosfocus
September 22, 2009, 06:12 AM PDT
Oops, CJYman beat me to it, once again...

Joseph
September 21, 2009, 05:01 PM PDT
Cumulative selection implies a target. Otherwise cumulative is meaningless.

Joseph
September 21, 2009, 05:00 PM PDT
Hello R0b, You state: "WEASEL is like Darwinian evolution in one respect (selection acts cumulatively) and unlike it in another respect (there is a long-term target). Dawkins is very careful to point this out. I hope we can all agree that that one aspect of something can be illustrated without all aspects being illustrated." I see exactly what you are saying here, and to a point I do agree. It does seem that Dawkins was merely showing the difference between cumulative selection and a random search. To me, that is a trivial/obvious observation. Yes, he was showing that cumulative selection, which is supposed to be one of the driving forces in Darwinian evolution, performs better than random search. However, as to "one aspect of something can be illustrated without all aspects being illustrated," I have to disagree in this case. The question still remains: "will cumulative selection operate without a long-term target; and if so, for how long will cumulative selection operate without that long-term target?" If Dawkins is defining Darwinian evolution as inherently without a target, then he is going to have to show that cumulative selection can operate without a target to even show that cumulative selection can indeed be a part of Darwinian evolution. IOW, does Darwinian evolution, being defined as cumulative selection without a target, even exist? Any evidence, anyone?

CJYman
September 21, 2009, 03:57 PM PDT
SteveB, WEASEL is like Darwinian evolution in one respect (selection acts cumulatively) and unlike it in another respect (there is a long-term target). Dawkins is very careful to point this out. I hope we can all agree that that one aspect of something can be illustrated without all aspects being illustrated. And I find it interesting that some ID proponents have no problem with the idea that the mainstream model of evolution is targetless, when such a position renders the work of Marks and Dembski irrelevant to biology.

R0b
September 21, 2009, 03:18 PM PDT
The theory:
"Adopting this view of the world means accepting not only the processes of evolution, but also the view that the living world is constantly evolving, and that evolutionary change occurs without any ‘goals.’ The idea that evolution is not directed towards a final goal state has been more difficult for many people to accept than the process of evolution itself.” (Life: The Science of Biology by William K. Purves, David Sadava, Gordon H. Orians, & H. Craig Keller, (6th ed., Sinauer; W.H. Freeman and Co., 2001), pg. 3.)
“The ‘blind’ watchmaker is natural selection. Natural selection is totally blind to the future....” (Richard Dawkins quoted in Biology by Neil A. Campbell, Jane B. Reese. & Lawrence G. Mitchell (5th ed., Addison Wesley Longman, 1999), pgs. 412-413.)
“Nothing consciously chooses what is selected. Nature is not a conscious agent who chooses what will be selected.... There is no long term goal, for nothing is involved that could conceive of a goal.” (Evolution: An Introduction by Stephen C. Stearns & Rolf F. Hoeckstra, pg. 30 (2nd ed., Oxford University Press, 2005).)
The application of the theory:
We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL. ... What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed. (Richard Dawkins, quoted at http://en.wikipedia.org/wiki/Weasel_program)
Clearly, goals and targets are explicitly not allowed in evolutionary theory... unless Dawkins wants to write a computer program that won't work without one. I suppose the defense of any point of view is relatively easy if the apologist has the freedom to abandon his presuppositions whenever they become inconvenient. This is the real issue; all the arcane programming details are irrelevant.

SteveB
September 21, 2009, 01:11 PM PDT
"The burden of proof is on Dawkins to show that these aren't the originals." Um, how do you figure that? He never claimed they were. And it doesn't matter anyway. The Weasel program was designed to show that . . . . oh, nevermind! :-) :-) We're never going to settle this!

ellazimm
September 21, 2009, 08:03 AM PDT
Expected number E[N] of correct letters after ten iterations (mutation probability µ = 4%):

pop. size    E[N]
       10    4.02
       20    5.89
       50    8.83
      100   10.67
      200   11.74
      500   12.82

I hope that these numbers fit AndrewFreeman's observations.

DiEb
September 21, 2009, 04:31 AM PDT
AndrewFreeman, interesting observation: To come up with some numbers, I modeled the weasel in its various forms as a Markov chain. Here are the probabilities for getting at most 10 characters right in 10 generations, using a mutation rate of 4%:

pop. size    probability
       10    99.99%
       20    99.68%
       50    87.60%
      100    46.44%
      200    15.23%
      500     2.85%

Here, I allowed for a random first string. So, even with 200 children in a generation, this event should occur fairly regularly.

DiEb
September 21, 2009, 02:28 AM PDT
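DiEb's Markov-chain figures can be cross-checked with a brute-force Monte Carlo sketch of the per-letter-mutation model he describes. The modelling assumptions here are mine, not DiEb's stated ones: a 28-letter target, each letter independently rewritten with probability µ to a uniformly random character from the 27-character alphabet, and the best child of each generation kept.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"


def _score(s):
    """Letters matching TARGET."""
    return sum(1 for x, y in zip(s, TARGET) if x == y)


def correct_after(generations, pop_size, rate, rng):
    """Run one simulated search and return the match count after `generations`."""
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(generations):
        children = [
            "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(children, key=_score)
    return _score(parent)


def p_at_most(k, generations=10, pop_size=100, rate=0.04, runs=300, seed=0):
    """Monte Carlo estimate of P(at most k letters correct after `generations`)."""
    rng = random.Random(seed)
    hits = sum(correct_after(generations, pop_size, rate, rng) <= k for _ in range(runs))
    return hits / runs
```

With a few hundred simulated runs, the estimate for a population of 100 should land in the broad vicinity of DiEb's 46.44%, though the exact figure depends on details of the mutation model (for example, whether a "mutation" may redraw the same character).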
200 children per generation combined with a 4% mutation rate does produce a similar number of generations to converge as recorded in the book. However, in all of my runs of that scenario more than 10 characters are fixed within ten generations. As shown earlier, the runs in the book fix no more than ten characters in ten generations. I suspect that it's highly improbable to remain below the ten-generation, ten-changes limit when multiple mutations are included. Early on, multiple mutations should have an advantage, since most of the letters are incorrect and they have double the chance of finding a correct letter.

AndrewFreeman
September 20, 2009, 07:06 PM PDT
DieB: You suggest a combination of 10 children with a 4% mutation rate to explain the video's performance. Given both of our results that would be a long run for those parameters. As I'm understanding your comment, you suggest that he ran the program repeatedly during the interview to get a long run. Are you suggesting that the BBC filmed lots of short runs of the program and then just kept the one run that did what Dawkin's wanted? I find that dubious... The Weasel2 program is different from the Weasel1 program in that it has no generations. Instead it generate a single child and either accepts or rejects it. (As a side note to the powers that be here, posting code where the whitespace has been eliminated is evil and not to be tolerated). The result is that there is a lack of parameters to tweak. There is no number of generations to pick or a mutation rate to specify. Nevertheless, my reimplementation of it (available upon request as before) seems to come up with a similar number of tries as the video records. Actual analysis would be good, but I'm a coder not a mathematician.AndrewFreeman
September 20, 2009, 06:59 PM PDT
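AndrewFreeman's description of WEASEL2 (no generations; a single child at a time, accepted or rejected) can be sketched as a simple hill climber. This is a hypothetical reconstruction, not the listed Pascal: it assumes the child differs from the parent in one randomly chosen letter and is accepted whenever it matches the target in at least as many positions.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(s):
    # number of positions matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def weasel2_style():
    # count "tries" until the target is reached, one child per try
    parent = [random.choice(ALPHABET) for _ in TARGET]
    tries = 0
    while score(parent) < len(TARGET):
        child = parent[:]
        child[random.randrange(len(child))] = random.choice(ALPHABET)  # one-letter change
        tries += 1
        if score(child) >= score(parent):  # accept ties as well as improvements
            parent = child
    return tries

runs = [weasel2_style() for _ in range(20)]
print(sum(runs) / len(runs))  # on the order of a few thousand tries, comparable to the ~2485 in the video
```

Because a change that loses a correct letter is always rejected, the climb is monotone and the loop terminates; the try count depends only on chance, with no population size or mutation rate to tune, matching AndrewFreeman's point about the lack of parameters.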
I had a friend run both WEASELs in 1980s Turbo Pascal, and the programs worked just fine. The burden of proof is on Dawkins to show that these aren't the originals.
kibitzer
September 20, 2009, 04:18 PM PDT
I think the statements put forward by Diffaxial do support a multi-mutation interpretation, but only weakly. Dawkins wasn't precise about the details of his algorithm throughout, so it's not really a surprise if that imprecision continues here. In any case, the procedure described is still followed in the code. I think the lack of any multiple mutations in the data (as shown by my earlier comment) is considerably stronger evidence.

It is true that the first child to improve will end up being selected. But ties have to be broken in some arbitrary way regardless. It is true that the loop could be exited early, but I fail to see why what could have been done is relevant. At the end of the day, the code still goes through all of the progeny and selects the one most similar to the target.
AndrewFreeman
September 20, 2009, 12:19 PM PDT
AndrewFreeman, thanks for taking the time to port the code and run it. Either the code is genuine, or someone has put a lot of thought into this hoax.
R0b
September 20, 2009, 06:39 AM PDT
In the text Dawkins states, "The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL" (pp. 47-48, emphasis in the original). Again on page 49: "There is a big difference, then, between cumulative selection (in which each improvement, however slight, is used as a basis for future building), and single-step selection..."

These passages suggest that there are degrees of resemblance possible between target and child, from "slight" to greater than slight. However, as coded above with a single-letter mutation, in any one generation there is only one possible degree of increased resemblance (one additional letter matches the target). The code above therefore seems unfaithful to the text.

This matters less, but it also occurs to me that the probability that a given child will be selected in a given generation reflects the order in which the children are generated and examined. In WEASEL1 the first child to demonstrate improvement becomes the winner. Therefore the generation/examination loop may be exited as soon as a single improved child is generated/detected. That again seems quite at odds with the intent of the text. (In WEASEL2 ties go to the child examined later.)

Coding a version in which every letter is exposed to some probability of mutation solves the first of these problems. Such a version would also require that every child be examined in every generation.
Diffaxial
September 20, 2009, 05:50 AM PDT
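Diffaxial's suggested fix, exposing every letter to some probability of mutation, takes only a few lines; the 4% rate below is just an illustrative value. With per-letter mutation a single child can change in several positions at once, so children can resemble the target to varying degrees, from "slight" to greater than slight.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.04):
    # every position independently has `rate` chance of being redrawn from the alphabet
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "METHINKS IT IS LIKE A WEASEL"
# count how many positions changed in each of 10,000 mutant children
diffs = [sum(a != b for a, b in zip(parent, mutate(parent))) for _ in range(10000)]
print(max(diffs))  # multi-letter mutants show up routinely at 4% over 28 positions
```

Contrast this with the single-letter mutation in WEASEL1 above, where every child differs from the parent in at most one position.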
BillB:
I can’t imagine why Dawkins would limit mutation so it can only EVER apply to one letter – it doesn’t seem very biological.
If you want biological, then you have to keep the mutation rate below 1%. And with a mutation rate below 1% and a population of 100, we will observe a partitioned search.
Joseph
September 20, 2009, 05:40 AM PDT
--O'Leary: as far as I can see, no one is entitled to your prize... But though the discussion at your thread didn't recover Dawkins's original program, it gave interesting insights into the workings of the algorithms described by Dembski and Marks in their paper on the one hand, and by Dawkins in his book on the other.
DiEb
September 19, 2009, 10:49 PM PDT
--AndrewFreeman: I calculated the expected length of a run, in generations, for some combinations of population size and mutation probability (standard deviation in brackets):

size      4%                5%                one mut.
 10       1,305 (924)       12,461 (12,140)   477,456 (477,303)
 20         326 (121)          341 (140)          754 (652)
 30         222 (80)           223 (84)           168 (90)
 40         170 (60)           170 (63)           101 (38)
 50         139 (49)           140 (51)            79 (25)
 60         119 (41)           120 (42)            67 (19)
 70         105 (35)           105 (37)            60 (16)
 80          93 (31)            94 (32)            54 (14)
 90          85 (28)            86 (29)            50 (13)
100          79 (25)            79 (26)            48 (12)
200          49 (14)            49 (14)            35 (8)
300          40 (10)            40 (10)            32 (6)
400          35 (8)             35 (9)             30 (6)
500          32 (7)             32 (8)             30 (6)

1. 4%-5% is the best rate of mutation; values outside this interval will produce longer runs.
2. For his interview, Dawkins needed the program to run for ~2000 generations. This could be achieved by the combination (10 children, 4% mutation rate). But I suppose that Dawkins just fooled around a little bit with his program to get an optimal number of runs, i.e., the program was running during the length of his interview...
3. I'm glad to see that your numbers agree with mine...
4. For the book, the number of children was 100-200, not fifty, as I said earlier. Sorry. That is, if Dawkins used the algorithm which most people think he described...
DiEb
September 19, 2009, 10:41 PM PDT
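DiEb's table can be spot-checked by simulation. The sketch below is a hypothetical reconstruction under the same assumptions as the Markov model (per-letter mutation, best child of the population becomes the next parent, random initial string); it estimates the run length for 100 children at a 4% rate, which the table puts at about 79 generations (sd 25).

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def matches(s):
    # number of positions where s agrees with the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate):
    # every letter independently has `rate` chance of being redrawn from the alphabet
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def run_length(pop_size, rate):
    # generations until the target is matched exactly, starting from a random string
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while matches(parent) < len(TARGET):
        generation += 1
        parent = max((mutate(parent, rate) for _ in range(pop_size)), key=matches)
    return generation

runs = [run_length(100, 0.04) for _ in range(15)]
print(sum(runs) / len(runs))  # should land in the neighborhood of the table's 79
```

If DiEb's model keeps the parent in the selection pool rather than forcing a child to replace it, the averages would shift slightly, but the order of magnitude is the same.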
The video makes no reference to generations. Instead it counts up to some 2485 "tries". These don't seem likely to be the same as the generations from the book, and the program in the video doesn't seem to be the same as the one in the book. All my code is available upon request (just to make sure nobody is asking for it in ten years). I'd post it here, but I'm pretty sure this comment system would destroy the whitespace in my Python code, making it a useless gesture.
AndrewFreeman
September 19, 2009, 09:57 PM PDT