
The Original WEASEL(s)


On August 26th of last month, Denyse O'Leary posted a contest here at UD asking for the original WEASEL program(s) that Richard Dawkins was using back in the late 1980s to show how Darwinian evolution works. Although Denyse's post has generated 377 comments thus far, none of the entries could reasonably be thought to be Dawkins's originals.

It seems that Dawkins used two programs, one in his book THE BLIND WATCHMAKER, and one for a video that he did for the BBC (here’s the video-run of the program; fast forward to 6:15). After much beating the bushes, we finally heard from someone named “Oxfordensis,” who provided the two PASCAL programs below, which we refer to as WEASEL1 (corresponding to Dawkins’s book) and WEASEL2 (corresponding to Dawkins’s BBC video). These are by far the best candidates we have received to date.

Unless Richard Dawkins and his associates can show conclusively that these are not the originals (either by providing originals in their possession that differ, or by demonstrating that these programs in some way fail to perform as required), we shall regard the contest as closed, offer Oxfordensis his/her prize, and henceforward treat the programs below as the originals.

WEASEL1 and WEASEL2 are reproduced below:

WEASEL1:

Program Weasel;

Type
  Text=String[28];

(* Define Parameters *)
Const
  Alphabet:Text='ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
  Target:Text='METHINKS IT IS LIKE A WEASEL';
  Copies:Integer=100;

Function RandChar:Char;
(* Pick a character at random from the alphabet string *)
Begin
  RandChar:=Alphabet[Random(27)+1];
End;

Function SameLetters(New:Text; Current:Text):Integer;
(* Count the number of letters that are the same *)
Var
  I:Integer;
  L:Integer;
Begin
  L:=0;
  I:=0;
  While I<=Length(New) do
  Begin
    If New[I]=Current[I] Then L:=L+1;
    I:=I+1;
  End;
  SameLetters:=L;
End;

Var
  Parent:Text;
  Child:Text;
  Best_Child:Text;
  I:Integer;
  Best:Integer;
  Generation:Integer;

Begin
  Randomize; (* Initialize the Random Number Generator *)

  (* Create a Random Text String *)
  Parent:='';
  For I:=1 to Length(Target) do
  Begin
    Parent:=Concat(Parent, RandChar)
  End;
  Writeln(Parent);

  (* Do the Generations *)
  Generation:=1;
  While SameLetters(Target, Parent) <> Length(Target)+1 do
  Begin
    (* Make Copies *)
    Best:=0;
    For I:=1 to Copies do
    Begin
      (* Each Copy Gets a Mutation *)
      Child:=Parent;
      Child[Random(Length(Child))+1]:=RandChar;
      (* Is This the Best We've Found So Far? *)
      If SameLetters(Child, Target) > Best Then
      Begin
        Best_Child:=Child;
        Best:=SameLetters(Child, Target);
      End;
    End;
    Parent:=Best_Child;
    (* Inform the User of any Progress *)
    Writeln(Generation, ' ', Parent);
    Generation:=Generation+1;
  End;
End.
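[A note on the loop bounds above, for readers puzzled by the Length(Target)+1 test: a Turbo Pascal String[28] stores its length in the byte at index 0, and SameLetters starts its scan at I:=0, so the two length bytes (always equal here, both strings being 28 characters long) are counted as a match as well. A perfect match therefore scores 29, i.e. Length(Target)+1. A minimal illustration, assuming any Turbo Pascal compatible compiler:]

Program LenByte;
Var S:String[28];
Begin
  S:='METHINKS';
  Writeln(Ord(S[0]));  (* prints 8: the length lives in the byte at index 0 *)
End.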

WEASEL2:

PROGRAM WEASEL;
USES
  CRT;

(* RETURN A RANDOM LETTER *)
FUNCTION RANDOMLETTER : CHAR;
VAR
  NUMBER : INTEGER;
BEGIN
  NUMBER := RANDOM(27);
  IF NUMBER = 0 THEN
    RANDOMLETTER := ' '
  ELSE
    RANDOMLETTER := CHR( ORD('A') + NUMBER - 1 );
END;

(* MEASURE HOW SIMILAR TWO STRINGS ARE *)
FUNCTION SIMILARITY(A : STRING; B : STRING) : INTEGER;
VAR
  IDX : INTEGER;
  SIMCOUNT : INTEGER;
BEGIN
  SIMCOUNT := 0;
  FOR IDX := 0 TO LENGTH(A) DO
  BEGIN
    IF A[IDX] = B[IDX] THEN
      SIMCOUNT := SIMCOUNT + 1;
  END;
  SIMILARITY := SIMCOUNT;
END;

FUNCTION RANDOMSTRING(LEN : INTEGER) : STRING;
VAR
  I : INTEGER;
  RT : STRING;
BEGIN
  RT := '';
  FOR I := 1 TO LEN DO
  BEGIN
    RT := RT + RANDOMLETTER;
  END;
  RANDOMSTRING := RT;
END;

VAR
  X : INTEGER;
  TARGET : STRING;
  CURRENT : STRING;
  OFFSPRING : STRING;
  TRIES : LONGINT;
  FOUND_AT : INTEGER;
BEGIN
  RANDOMIZE;
  CLRSCR;

  WRITELN('Type target phrase in capital letters');
  READLN(TARGET);

  (* PUT SOME STRING ON THE SCREEN *)
  TEXTCOLOR(GREEN);
  GOTOXY(1, 6);
  WRITELN('Target');
  GOTOXY(10, 6);
  WRITELN(TARGET);

  TEXTCOLOR(BLUE);
  GOTOXY(1, 13);
  WRITELN('Darwin');

  TEXTCOLOR(BLUE);
  GOTOXY(1, 19);
  WRITELN('Random');

  TEXTCOLOR(WHITE);
  GOTOXY(1, 25);
  WRITE('Try number');

  (* PICK A RANDOM STRING TO START DARWIN SEARCH *)
  CURRENT := RANDOMSTRING(LENGTH(TARGET));

  (* RUN THROUGH MANY TRIES *)
  FOUND_AT := 0;
  FOR TRIES := 1 TO 100000 DO
  BEGIN
    (* Darwin *)
    OFFSPRING := CURRENT;
    OFFSPRING[ 1 + RANDOM(LENGTH(OFFSPRING)) ] := RANDOMLETTER;

    GOTOXY(10, 13);
    WRITELN(OFFSPRING, ' ');

    IF( SIMILARITY(OFFSPRING, TARGET) >= SIMILARITY(CURRENT, TARGET) ) THEN
      CURRENT := OFFSPRING;

    IF( (SIMILARITY(CURRENT, TARGET) = LENGTH(TARGET)) AND (FOUND_AT = 0) ) THEN
    BEGIN
      (* TELL THE USER WHAT WE FOUND *)
      FOUND_AT := TRIES;
      GOTOXY(1, 15);
      TEXTCOLOR(BLUE);
      WRITELN('Darwin');
      TEXTCOLOR(WHITE);
      GOTOXY(9, 15);
      WRITELN('reached target after');
      GOTOXY(37, 15);
      TEXTCOLOR(BLUE);
      WRITELN(FOUND_AT);
      WRITE('tries');
      TEXTCOLOR(WHITE);

      GOTOXY(1, 21);
      TEXTCOLOR(BLUE);
      WRITE('Random');
      TEXTCOLOR(WHITE);
      WRITELN(' would need more than ');
      TEXTCOLOR(BLUE);
      WRITELN('1000000000000000000000000000000000000000');
      TEXTCOLOR(WHITE);
      WRITE('tries');
    END;

    (* Random *)
    GOTOXY(10, 19);
    WRITELN(RANDOMSTRING(LENGTH(TARGET)), ' ');

    GOTOXY(27, 25);
    WRITE(TRIES, ' ');
  END;

  GOTOXY(1, 20);
End.
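[Both listings are Turbo Pascal. Assuming a reasonably compatible compiler they should build as-is -- e.g. Free Pascal in its Turbo Pascal mode, fpc -Mtp weasel1.pas, offered as an untested suggestion rather than a vetted recipe; WEASEL2 additionally needs the CRT unit (CLRSCR, GOTOXY, TEXTCOLOR), which Free Pascal supplies. Note the structural difference between the two: WEASEL1 breeds Copies=100 mutant children per generation and keeps the best, while WEASEL2 makes a single mutant per try and keeps it whenever it is at least as close to the target (the >= comparison), so its current string can never move away from the target. Note also that WEASEL2's SIMILARITY, like SameLetters in WEASEL1, starts its scan at index 0 and so counts the always-matching length bytes; its success test SIMILARITY = LENGTH(TARGET) therefore appears able to fire with one letter still unmatched, a quirk readers may want to verify for themselves.]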

Comments
Mr CJYman, I thought the meaning of 'explicit' was clear. If you look at the code of Weasel, you'll find the target string. Even in Weasels that let you type in the target, it is in memory. But there is no target design in the antenna example. There is only measuring efficiency, and ranking that against other designs in the population. So the point of my post was that there are interesting problems that are more complicated than hill climbing smoothly towards a fixed target, and evolutionary algorithms can still solve them, contra a dismissive wave of the hand. With respect to antenna design, this particular group of researchers was either interested in building better antennas, or thought antenna design was a hard problem for humans, and therefore a good test problem for GP. Other research is not interested in getting useful results, but simply in understanding the limits of EAs. I'm sure there is an 'edge of evolution', and books like David Goldberg's 'The Design of Innovation' explore it. Nakashima
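[To make the distinction Nakashima draws concrete, here is a minimal hill-climbing sketch in the same Turbo Pascal idiom as the listings above. It is not from any commenter, and the Fitness function is a deliberately made-up stand-in for the physics evaluator a real antenna GA would call. Nothing in the program stores a target configuration; selection acts only on the measured score:]

Program NoTarget;
Const
  N=16;            (* genome length: 16 tunable parameters *)
  Copies=100;      (* offspring per generation *)
Type
  Genome=Array[1..N] of Real;
Var
  Parent,Child,BestChild:Genome;
  I,J,G,K:Integer;
  Best,F:Real;

(* Stand-in for a physics evaluator: scores a design, stores no "answer" *)
Function Fitness(Var X:Genome):Real;
Var I:Integer; S:Real;
Begin
  S:=0;
  For I:=1 to N do S:=S-Abs(Sin(X[I])-X[I]/10);  (* arbitrary landscape *)
  Fitness:=S;
End;

Begin
  Randomize;
  For I:=1 to N do Parent[I]:=Random*10;     (* random initial design *)
  For G:=1 to 200 do
  Begin
    Best:=Fitness(Parent);
    BestChild:=Parent;
    For K:=1 to Copies do
    Begin
      Child:=Parent;
      J:=Random(N)+1;
      Child[J]:=Child[J]+(Random-0.5);       (* one small mutation *)
      F:=Fitness(Child);
      If F>Best Then Begin Best:=F; BestChild:=Child; End;
    End;
    Parent:=BestChild;                       (* keep the best of the brood *)
  End;
  Writeln('Best score found: ',Fitness(Parent):0:4);
End.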
Nakashima: "The point of getting 'some results' is that it happens without an explicit target, contra what many here and elsewhere are saying is necessary." Not sure how you are using the term "explicit"; however, as per the antenna example, there definitely is a target. That target is an efficient antenna. In this case, the target was a specific function instead of a form. The programmers knew what function they wished to achieve and programmed the constraints to achieve that function, and without that function of an efficient antenna the form (the exact shape of the antenna) wouldn't have been discovered. The point is that absent the foresight of the programmers to achieve a specific end function, there would be no 'some results.' CJYman
Onlookers: I have been busy elsewhere on other matters for the past week or so. I came back by to see where the thread went. SA has put his finger on the key issue: the ORIGIN of functional complex, specific information is what has to be accounted for. And, both Weasel and the more modern GA's do not address that. In effect they start within the shores of an island of function, without first credibly getting us to those shores in a very large config space well beyond the scanning ability of the resources of the atoms of the observed cosmos. Remember, that starts at 500 - 1,000 bits as a rule of thumb. To see the force of that, think about the requisites for a von Neumann self-replicator:
1 --> A code system, with symbols and combinational rules that specify meaningful and functional as opposed to "nonsense" strings. [Situations where every combination has a function are irrelevant.]
2 --> A storage unit with blueprint or tape mechanism that encodes the specifications and at least implies the assembly instructions.
3 --> A reader that then drives associated implementation machines that actually carry out the replication.
4 --> A source of required parts (i.e. a pre-existing reservoir and/or a metabolic subsystem to make parts out of easily accessible environmental resources).
This is an irreducibly complex set of core elements, i.e., remove any one and self-replicational functionality vanishes. It also specifies an island of functional organisation, as not just any combination of any and all generic parts will achieve the relevant function. That is why the randomly varied "genes" in a GA string are irrelevant. For, absent the independent reader and translator into action, the strings have no function. And, the process of reading and converting into a functional behaviour and/or metric is plainly intelligently designed in all cases of GA's on record. We could go on and on, but the point is plain enough. GEM of TKI kairosfocus
They must do nothing less to lend any support to the hypothesis of increased complexity via RM+NS. Otherwise they’re a parlor trick. (Or an easier way of designing better antenna surfaces.)
The "hypothesis of increased complexity" is a term exclusive to the mode of thinking upon which Intelligent Design is based and is irrelevant with respect to fitness adaptation, microevolution. Cabal
Mr ScottAndrews, The point of getting 'some results' is that it happens without an explicit target, contra what many here and elsewhere are saying is necessary. That consistent misunderstanding of the necessity of targets to EAs has been the genesis of much discussion here! Nakashima
Evolutionary algorithms for antenna design are essentially an automation of a trial-and-error process, testing various forms and improving upon them. It's a substitution of brute computing power for human effort. And fine, it gets some results. I'd be really curious to see if any of these "evolved" antennas, on their own, achieved any sort of innovation, such as motors to orient themselves toward a signal, circuitry to enhance the signal, or some relays. They must do nothing less to lend any support to the hypothesis of increased complexity via RM+NS. Otherwise they're a parlor trick. (Or an easier way of designing better antenna surfaces.) ScottAndrews
kairosfocus, " and (ii) the key begged question, again is to get to shores of complex functionality sufficient for further hill climbing to be relevant,...." This is an important issue, that you raise several times in your post. It is important because it represents a fundamental misconception about evolutionary theory. Accepting for the sake of argument that "shores of complex functionality" actually exist, there is no need for evolutionary mechanisms to find them. Living creatures that reproduce already have a successful genome. Evolutionary mechanisms, such as those simulated in programs like ev, don't need to find a viable point in genome space -- they're already at one and are simply exploring nearby points. Abiogenesis is an interesting topic, but it is distinct from evolutionary theory. Given this, the rest of your response does not address the core question. Where, exactly, does the "active information" get injected into ev? Rasputin
kairosfocus, "5] "The amount of information in the genomes of the final population is much higher than that in the initial population, using only simple evolutionary mechanisms." Again, Schneider's Ev is discussing a pre-programmed context that assigns functions, sets up hill-climbing algorithms and gives particular meaning to digital strings according to certain symbol and rule conventions,..." You need to read the paper more carefully. Schneider's ev is a simulation of a subset of known evolutionary mechanisms applied to a known biological system. The only "meaning" assigned to any digital strings is that which reflects real world chemistry. With even a very simple set of mechanisms, Schneider demonstrated the ability to evolve significant amounts of information. Equally importantly, his simulation results are consistent with the empirical evidence resulting from his research on real biological systems. That's very strong support for the ability of evolutionary mechanisms to transfer information from an environment to subsequent populations. "... and inter alia measures Shannon information, which -- as a metric of info-carrying or storing capacity -- is irrelevant to the issue of origin of algorithmically functional, complex specified information in a context of first life or novel body plans." Shannon information is a standard, well-understood metric. Schneider explains how and why it is appropriate in his thesis. After a quick re-review of that thesis, I suspect that any rigorously defined, objective, quantitative measure of information could be used. The fact is that the amount of information in the sequence patterns at a binding site evolves to be equal to the amount of information required to locate the number of such sites within the genome. "Remember, too (as was already pointed out but ignored): Shannon information for a given symbol string length peaks for non-functional flat random code,...." That is immaterial in this context. If you read the ev paper and Schneider's thesis, you will see that the important measurement is the relationship between the amount of information in a binding site sequence and the amount of information required to locate a binding site. "6] "If you read the paper, you’ll see that the fitness landscape itself is constantly changing." Irrelevant: (i) the "fitness landscape" is MAPPED and ALGORITHMICALLY PROCESSED at any given time (to get the hill-climbing by differential fitness metric values),..." No, it is not. Read the thesis. "7] "ev does it" Ev does not create its global algorithmic functionality ab initio from undirected chance plus necessity, but from an intelligent programmer." You are again mistaking what is being simulated. ev shows that a small subset of known evolutionary mechanisms is sufficient to transfer information from the environment to subsequent populations, without any need for intelligent intervention. Rasputin
kairosfocus, "3] "Life ‘knows’ the target, it is ‘aware’ of the target, i.e. it detects when it is pointing closer to or farther from the ‘target’, i.e. increasing or decreasing in fitness." See the point? The issue is not to improve already functioning life forms and body plans, but to first get to them, in light of the entailed complex, functionally specific information basis for such life." That is the issue for theories of abiogenesis. It is not the issue for evolutionary theory. Evolutionary theory explains how populations change over time, given the existence of self-replicating entities. Rasputin
kairosfocus, "Antenna theory and Genetic Algorithms used to design novel antennas, are based on a deeply established theory of the function of such antennas {based on Maxwell's Electromagnetism], programmed into the simulation by its designers. And, that is the precise source of the relevant active information." It's important to be clear on exactly what is being simulated in these types of genetic algorithms. Typically there are two primary components: a population generator and a fitness evaluator. In the case of the antenna GA, the fitness evaluator uses standard, real world physics to determine the performance of the design represented by each member of the current population. The laws of physics themselves are not being simulated. The population generator implements a subset of known evolutionary mechanisms. At a minimum, the likelihood of a particular gene making it into the next generation will be related to the fitness of the individuals in the current population with that gene (stochastically, in some selection algorithms). Some type of mutation is also required. Other mechanisms such as cross-over may be used. The simulation, therefore, is of the evolutionary mechanisms themselves. Claiming that the laws of physics are providing the "active information" is, as I noted previously, equivalent to recognizing that the evolutionary mechanisms being simulated are capable of transferring information about the environment to subsequent populations. Again, this is what we observe in actual biological systems, with no intelligent intervention required. I'll respond to some of your other points separately in the interests of keeping each post readable. Rasputin
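[For concreteness on "stochastically, in some selection algorithms": the textbook version is fitness-proportionate (roulette-wheel) selection, sketched below in the same Pascal idiom as the listings. This is an illustration of the general technique, not code from ev or from the antenna GA, and it assumes all fitness values are non-negative:]

Const PopSize=100;
Type FitArray=Array[1..PopSize] of Real;

(* Pick an index with probability proportional to its fitness *)
Function PickParent(Var Fit:FitArray):Integer;
Var Total,R,Acc:Real; I:Integer;
Begin
  Total:=0;
  For I:=1 to PopSize do Total:=Total+Fit[I];
  R:=Random*Total;        (* a uniform point on the cumulative "wheel" *)
  Acc:=0;
  PickParent:=PopSize;    (* fallback for floating-point edge cases *)
  For I:=1 to PopSize do
  Begin
    Acc:=Acc+Fit[I];
    If R<Acc Then
    Begin
      PickParent:=I;
      Exit;
    End;
  End;
End;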
PS: On the source of active information in Ev, it is not irrelevant to excerpt from the page linked by R above: ______________ >> The Ev program was written in Pascal, which is a good language for which there is an open source compiler. However, Pascal compilers are not often set up on computers, so this limits experimentation with Ev [NB: computer simulations and modelling are NOT empirical, real-world experiments, but easily lead people to believe what they see on the screen . . . a problem ever since Weasel] to the few people willing to download a Pascal compiler and to set up Ev. In contrast, an open source version of Ev written in Java and available from Source Forge could be used in schools all across the world to help educate students in the precise mechanisms of evolution . . . >> _______________ The source of the relevant active information should be clear enough, and of course it inadvertently illustrates the empirical limits on evolutionary mechanisms. kairosfocus
Onlookers: The remarks overnight simply sustain my points on: (i) increasingly tangential issues, and (ii) the degree of strawmannishness in the objections. As I have no intention to embark on a yet further set of tangential exchanges [it has been something like nine months, folks], the substantial matters plainly having been settled as I have already summarised, I will simply make some notes for record: 1] GA's, targets, fitness landscapes and antennas: Antenna theory and Genetic Algorithms used to design novel antennas are based on a deeply established theory of the function of such antennas [based on Maxwell's Electromagnetism], programmed into the simulation by its designers. And, that is the precise source of the relevant active information. 2] Fitness and complex function: Again, life forms are based on self-replicating cells, and reproduce. To do so they must implement highly complex function sufficient to implement a von Neumann replicator [code, blueprint storage, reader, effector] with associated metabolism to provide energy and materials. For first life and for novel major body plans, until one accounts for the origin of such complex function from in effect chance -- natural selection is a culler on differential function, not an innovator -- discussing hill climbing on comparative "fitness" within islands of function is mere question-begging. This has of course been pointed out in the context of the weasel debates from the outset, not just in my remarks of last December; but in fact such is directly (albeit inadvertently) implied by CRD's remarks of 1986, especially his remarks on rewarding "nonsense phrases" on increments of proximity to target. 3] "Life 'knows' the target, it is 'aware' of the target, i.e. it detects when it is pointing closer to or farther from the 'target', i.e. increasing or decreasing in fitness." See the point? The issue is not to improve already functioning life forms and body plans, but to first get to them, in light of the entailed complex, functionally specific information basis for such life. 4] "is your contention that natural selection is an invalid concept, that even microevolution is impossible, that the designer is responsible for all species adaptability?" Strawman, in the teeth of an always linked, immediately accessible discussion of the issue: origin of functionally specific, complex information as the basis for cell-based life forms. So-called natural selection is not the issue: probabilistic culling on sub-populations of life forms with variations in an environment is a reasonable and significantly empirically supported concept. But, culling does not explain origin of relevant variations. Similarly, variability of already functioning life forms is not the issue; origin of such functionality based on complex digital, algorithmic information -- and for good reason connected to the number of states accessible to the ~10^80 atoms of the observed universe, I have used the threshold of 500 - 1,000 bits for the border of enough complexity -- is. As for so-called micro-evolution, it is not an issue across any significant view on biological variability, including young earth creationism. [Cabal should consult the Weak Argument Correctives.] 5] "The amount of information in the genomes of the final population is much higher than that in the initial population, using only simple evolutionary mechanisms."
Again, Schneider's Ev is discussing a pre-programmed context that assigns functions, sets up hill-climbing algorithms and gives particular meaning to digital strings according to certain symbol and rule conventions, and inter alia measures Shannon information, which -- as a metric of info-carrying or storing capacity -- is irrelevant to the issue of origin of algorithmically functional, complex specified information in a context of first life or novel body plans. Remember, too (as was already pointed out but ignored): Shannon information for a given symbol string length peaks for non-functional flat random code, as the metric H = - SUM pi log pi is highest for that case -- precisely what will not happen for a real world code. A high level of Shannon information can therefore easily correlate with non-function. That is, organised, algorithmic functionality is on a different dimension than order-randomness, which is what Fig. 4 in the Abel et al paper airily dismissed just above highlights. Nor is that insight new to them, as for instance Thaxton et al by 1984 in Ch 8 of TMLO summarise on three different types of symbol strings in light of Orgel, Yockey, Wickens and Polanyi as follows:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . .
2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides).
3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.
(Onlookers, this is the crucial point of breakdown of communication on this matter. Darwinists are evidently typically blind to the wider context of code-based digital, algorithmically functional entities and the related distinction between order, randomness and functionally specific complex organisation. And, once we get to quite modest quantities of functional information, we run into cosmically insuperable search or configuration spaces: 1,000 bits -- less than 150 bytes -- specifies 10^301 configs, or more than ten times the square of the number of quantum states of the 10^80 atoms of our observed cosmos across its thermodynamically credible lifespan, about 50 million times the run from the typical estimates of the duration since the Big Bang, 13.7 BY. 150 bytes is grossly too small to create the sort of von Neumann self-replicators we see in even the simplest more or less independent life forms, which start out with ~ 600 - 1,000 k bits of storage space. And, major body plans run from ~ 10 - 100+ million bits.) 6] "If you read the paper, you'll see that the fitness landscape itself is constantly changing." Irrelevant: (i) the "fitness landscape" is MAPPED and ALGORITHMICALLY PROCESSED at any given time (to get the hill-climbing by differential fitness metric values), and (ii) the key begged question, again, is to get to shores of complex functionality sufficient for further hill climbing to be relevant, where (iii) the shores in question for real life systems require self-replication, i.e. von Neumann replicators with codes, algorithms, storage of blueprints etc, readers and effectors backed up by metabolism to obtain required energy and materials. 7] "ev does it" Ev does not create its global algorithmic functionality ab initio from undirected chance plus necessity, but from an intelligent programmer. 8] "Where, exactly, is the "active information" being inserted? If your answer is "from the simulated environment" then you are recognizing that the evolutionary mechanisms used in the simulation can transfer information about the environment to subsequent populations. This is what we observe in actual biological systems, with no intelligent intervention required." Active information relates here to the challenge of getting to the shores of an island of function in a large config space dominated by seas of non-function, as can be shown to relate to any significant digitally coded context. Cf. Marks and Dembski in the just linked:
Conservation of information theorems [15], [44], especially the No Free Lunch Theorems (NFLT's) [28], [51], [52], show that without prior information about the search environment or the target sought, one search strategy is, on average, as good as any other. Accordingly, the difficulty of an unassisted -- or blind -- search problem [9] is fixed and can be measured using what is called its endogenous information. The amount of information externally introduced into an assisted search can then be measured and is called the active information of the search [33]. Even moderately sized searches are virtually certain to fail in the absence of information concerning the target location or the search space structure. Knowledge concerning membership in a structured class of problems, for example, can constitute search space structure information [50] . . . . All but the most trivial searches require information about the search environment (e.g., smooth landscapes) or target location (e.g., fitness measures) if they are to be successful. Conservation of information theorems [15], [28], [44], [51], [52] show that one search algorithm will, on average, perform as well as any other and thus that no search algorithm will, on average, outperform an unassisted, or blind, search. But clearly, many of the searches that arise in practice do outperform blind unassisted search. How, then, do such searches arise and where do they obtain the information that enables them to be successful? . . . . Define an assisted search as any procedure that provides more information about the search environment or candidate solutions than a blind search. The classic example of an assisted search is the Easter egg hunt in which instead of saying "yes" or "no" to a proposed location where an egg might be hidden, one says "warmer" or "colder" as distance to the egg gets smaller or bigger. This additional information clearly assists those who are looking for the Easter eggs, especially when the eggs are well hidden and blind search would be unlikely to find them. Information about a search environment can also assist a search. A maze that has a unique solution and allows only a small number of "go right" and "go left" decisions constitutes an information-rich search environment that helpfully guides the search . . . . What is the source of active information in a search? Typically, programmers with knowledge about the search (e.g., domain expertise) introduce it. But what if they lack such knowledge? Since active information is indispensable for the success of the search, they will then need to "search for a good search." In this case, a good search is one that generates the active information necessary for success . . . under general conditions, the difficulty of the "search for a good search," as measured against an endogenous information baseline, increases exponentially with respect to the active information needed for the original search.
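[A worked instance of endogenous information for the weasel case itself: a single blind draw of 28 characters from a 27-symbol alphabet hits METHINKS IT IS LIKE A WEASEL with probability p = 27^-28, about 8 x 10^-41, i.e. an endogenous information of -log2 p = 28 log2 27, roughly 133 bits. Since 27^28 is about 1.2 x 10^40, this is plausibly also the rationale behind WEASEL2's hard-coded banner that "Random" would need more than 10^39 tries.]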
Ev is about moving around within such an island that is tectonically active in effect, based on domain expertise. To get to the initial algorithmic functionality of Ev, Mr Schneider did a lot of highly intelligent design, coding and development. Ev did not come from sampling random noise spewed onto a hard disk using say a Zener noise source. So, the random element in Ev is based on a wider intelligently designed context that uses quite constrained random search in a friendly search environment/landscape as a means to an end, a known technique of intelligent designers. GEM of TKI kairosfocus
kairosfocus, "Recall -- and this specifically goes back to December last year -- the core challenge of evolutionary algorithms in general is to [without undue inadvertent injection of active information by investigators] create complex, information-rich function ab initio from plausible initial conditions [pre-life (~600 - 1,000 k bits), previous to existence of a novel body plan (10 - 100+ M bits)] without pre-loading key intelligently derived info about the overall topography of the fitness landscape and/or goals and their location." Thomas Schneider's ev does exactly that. The amount of information in the genomes of the final population is much higher than that in the initial population, using only simple evolutionary mechanisms. "As touching Ev etc, these have come up in the various discussions over the years, and unfortunately tend to fall under similar problems, i.e. not accounting adequately for the origin of the required level of complex functionality within the search resources of the observed cosmos, and they tend to embed implicit or explicit knowledge of the overall fitness landscape,..." This is not the case with ev. If you read the paper, you'll see that the fitness landscape itself is constantly changing. "...often working within an assumed island of function to carry out hill-climbing. The problem that is decisive is to get to the shores of such islands of function in the extremely large config spaces implied by the digital information in DNA etc, without intelligent direction." And yet, ev does it. "Ev, from the paper you cite -- starting with the abstract and culminating in the conclusion, runs into the problem that Shannon information (a metric of channel and memory transfer or storage capacity) is inadequate to define algorithmic functionality, as say Abel et al discuss in this 2005 paper; cf esp. Fig 4 and associated discussion on OSC, RSC and FSC." That paper is long on assertions and unnecessary jargon and short on mathematical support for their arguments. Schneider explains why Shannon Information is an appropriate measure and shows how it accrues through simple evolutionary mechanisms in his simulation. Why, exactly, do you disagree? "On p. 1058 of their recent IEEE paper, Marks and Dembski observe about the general problem with evolutionary algorithms as follows: . . . In short, inadvertent injection of active information that gives a considerable gain over reasonable capacity of random walk searches in large config spaces, is the critical flaw that consistently dogs evolutionary simulations from Weasel to today's favourites such as Ev, Avida, etc." The ev simulation implements simple evolutionary mechanisms for breeding and selection, without an explicit target or static environment. It shows that those mechanisms can create Shannon Information and it corresponds well to the empirical evidence of the real biological systems that were the topic of Schneider's PhD thesis. Where, exactly, is the "active information" being inserted? If your answer is "from the simulated environment" then you are recognizing that the evolutionary mechanisms used in the simulation can transfer information about the environment to subsequent populations. This is what we observe in actual biological systems, with no intelligent intervention required. Rasputin
Cabal:
Or is your contention that natural selection is an invalid concept, that even microevolution is impossible, that the designer is responsible for all species adaptability?
Natural selection doesn't "do" much of anything. It has never been observed to do what evolutionists say it has done. And when it has been studied it has been shown, on average, to contribute to just 16% of the variation. IOW there are factors that are obviously more prevalent than NS. Joseph
With designing antennas the target is known - not the antenna, but what the antenna must be able to do. Joseph
Kairosfocus, presupposing that you do know and understand what evolutionary theory predicts about the cumulative effect on fitness by random mutations, do you think that you could devise a functional algorithm truthfully simulating the same process? BTW, I still am unable to understand why algorithms for design of antennae, where the target is not known, are not examples of a similar process, i.e. selection for fitness where the target is not known, only the landscape. Just as in real life, the fitness landscape is the template against which all the parameters affecting a species' survival coefficient are tested. The outcome determines the species' degree of reproductive success. I don't think I am saying anything false when I attempt saying the same thing in other words: Life 'knows' the target, it is 'aware' of the target, i.e. it detects when it is pointing closer to or farther from the 'target', i.e. increasing or decreasing in fitness. In short: The target is not the target phrase or whatever we use to represent an imaginary target, the target is fitness. WRT latching - I have not made, nor do I intend to make, an in-depth study of the Weasel algorithm; it seems to me however that there's got to be an effect we may conceive of as latching, but what it really is, is of course the result of an increase in fitness; whatever serves to augment fitness will of course be preserved. That is after all the purpose of the entire exercise, simulating life. Or is your contention that natural selection is an invalid concept, that even microevolution is impossible, that the designer is responsible for all species adaptability? Cabal
PS: Onlookers, note too how the above exercise in straining at a gnat over Weasel while swallowing a camel on enfolded active info in evolutionary simulations ever since Weasel distracts attention from this key, well-warranted conclusion of the M & D paper. kairosfocus
Rasputin: Kindly note that I was responding to a specific proposal by Cabal, on rewriting Weasel. Recall -- and this specifically goes back to December last year -- the core challenge of evolutionary algorithms in general is to [without undue inadvertent injection of active information by investigators] create complex, information-rich function ab initio from plausible initial conditions [pre-life (~600 - 1,000 k bits), previous to existence of a novel body plan (10 - 100+ M bits)] without pre-loading key intelligently derived info about the overall topography of the fitness landscape and/or goals and their location. Weasel 1986 fails the test by rewarding non-functional "nonsense phrases" on their making increments in proximity that "however slightly, most resemble[ . . .]" the defined and located target, the Weasel sentence; by in effect using the known target and nonsense phrase locations to create warmer-colder signals. This becomes critically dis-analogous to the claimed dynamics of chance variation and natural selection of complexly functional (reproducing, so von Neumann replicator; irreducibly requiring: code, stored blueprint, reader, effector, metabolic support to provide materials and energy) life forms. It is also significantly less complex than the credible real-world info generation challenges. This thread is strictly about the provision of credible code c 1986. Secondarily, there has been a continuation of various objections to and concerning the observed behaviour of showcased o/p c 1986: apparent latching and ratcheting to target. It is clear from general discussion and from the probable code that Weasel c 1986 shows implicit latching as a reflection of its use of targetting and reward of mere non-functional proximity. As touching Ev etc, these have come up in the various discussions over the years, and unfortunately tend to fall under similar problems, i.e. not accounting adequately for the origin of the required level of complex functionality within the search resources of the observed cosmos, and they tend to embed implicit or explicit knowledge of the overall fitness landscape, often working within an assumed island of function to carry out hill-climbing. The problem that is decisive is to get to the shores of such islands of function in the extremely large config spaces implied by the digital information in DNA etc, without intelligent direction. Ev, from the paper you cite -- starting with the abstract and culminating in the conclusion -- runs into the problem that Shannon information (a metric of channel and memory transfer or storage capacity) is inadequate to define algorithmic functionality, as say Abel et al discuss in this 2005 paper; cf esp. Fig 4 and associated discussion on OSC, RSC and FSC. (The 2009 review paper here will provide a survey and guide to a considerable body of relevant literature, including of course the Durston et al metrics; which build on Shannon uncertainty to put it in the context of specific functionality. My 101 level intro here may help onlookers understand Shannon info [including average info per symbol in messages aka entropy aka uncertainty] and its relationship to functionally specific complex info.) For instance, peak Shannon info metric values for a given string length will be for a strictly random data string [as it has very low redundancy], when in fact algorithmic functionality will -- per the inherent structure and requisites of functional language and code -- have redundancy, e.g.
as a rule, symbols will not be equiprobable in a real code or language [think of E vs X in English]. A random string will have peak Shannon info while failing to rise above the floor of non-function. On p. 1058 of their recent IEEE paper, Marks and Dembski observe about the general problem with evolutionary algorithms as follows:
Christensen and Oppacher [7] note the "sometimes-outrageous claims that had been made of specific optimization algorithms." Their concern is well founded. In computer simulations of evolutionary search, researchers often construct a complicated computational software environment and then evolve a group of agents in that environment. When subjected to rounds of selection and variation, the agents can demonstrate remarkable success at resolving the problem in question. Often, the claim is made, or implied, that the search algorithm deserves full credit for this remarkable success. Such claims, however, are often made as follows: 1) without numerically or analytically assessing the endogenous information that gauges the difficulty of the problem to be solved and 2) without acknowledging, much less estimating, the active information that is folded into the simulation for the search to reach a solution.
In short, inadvertent injection of active information that gives a considerable gain over reasonable capacity of random walk searches in large config spaces, is the critical flaw that consistently dogs evolutionary simulations from Weasel to today's favourites such as Ev, Avida, etc. (And, no, I am not interested in a further long tangential discussion on details of such programs. This thread has had a specific purpose, one long since achieved, and a major cluster of tangents has already been addressed.) GEM of TKI kairosfocus
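[To pin down the Shannon point made in the comment above: the metric at issue is H = - SUM pi log2 pi, taken over the symbol frequencies of a string, and it is maximised by a flat random distribution regardless of meaning. A small illustrative function in the same Pascal idiom as the listings, not part of any weasel program:]

(* Shannon entropy, in bits per symbol, of the letter frequencies in S.  *)
(* A long uniform-random string over 27 symbols scores near log2(27),    *)
(* about 4.75 bits, whether or not it means anything: capacity, not      *)
(* function.                                                              *)
Function EntropyBits(Var S:String):Real;
Var
  Count:Array[Char] of LongInt;
  I:Integer;
  C:Char;
  P,H:Real;
Begin
  For C:=#0 to #255 do Count[C]:=0;
  For I:=1 to Length(S) do Count[S[I]]:=Count[S[I]]+1;
  H:=0;
  For C:=#0 to #255 do
    If Count[C]>0 Then
    Begin
      P:=Count[C]/Length(S);
      H:=H-P*Ln(P)/Ln(2);       (* log2 via natural log *)
    End;
  EntropyBits:=H;
End;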
kairosfocus, "All you have to do is write a program that without using the target sentence and a distance to target metric, reliably achieves it in several dozens to several hundreds of generations, showing implicit [quasi-]latching-ratcheting as it converges on target by a real, current functionality anchored metric of fitness." You seem to be asking for a simulation of evolution that doesn't have a fixed target, correct? If so, have you seen Thomas Schneider's ev? It seems to meet your criteria. Rasputin
Cabal: All you have to do is write a program that without using the target sentence and a distance to target metric, reliably achieves it in several dozens to several hundreds of generations, showing implicit [quasi-]latching-ratcheting as it converges on target by a real, current functionality anchored metric of fitness. I would love to see the result. GEM of TKI kairosfocus
Instead of discussing ancient versions of Weasel, wouldn't it be possible to write a version consistent with the principles of evolution, for the sole purpose of demonstrating the effect of selection for fitness? That's the question, isn't it? That random mutations and natural selection can cause adaptation to a fitness landscape? And even allow for adaptation to a changing landscape too, simulated by a target string subject to changes over time? I believe competent programmers can write Weasel programs in a very short time, say a couple of hours? (I might need a couple of days, but I haven't been doing any programming for many years.) Cabal
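[Cabal's moving-landscape variant needs only a few lines grafted into WEASEL1's generation loop. A sketch, untested: DriftRate is a made-up parameter, and the graft relies on Turbo Pascal typed constants such as Target being writable. Each generation the target itself may mutate one letter, so the population chases a moving optimum; with drift the Length(Target)+1 exit test may never be met, so a cap on generations would be prudent.]

Const DriftRate=2;  (* made-up: percent chance per generation that the target drifts *)

(* inside WEASEL1's generation loop, just before "Make Copies": *)
If Random(100) < DriftRate Then
  Target[Random(Length(Target))+1] := RandChar;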
Onlookers: It is clear that the objection strategy continues to be ever increasing degrees of tangentiality. I have called attention yesterday to the main issues with Weasel in general, and the key turning points of the already tangential debates over whether or not Weasel c 1986 latched and ratcheted, and how that could have been done. Similarly, the particular focus for this thread has been addressed, and it seems that on balance -- per inference to best, factually anchored (and provisional) explanation -- we now credibly have in hand the original Weasel code. The substantial conclusion is that Weasel c 1986 showed implicit latching and ratcheting to target, that Weasel c 1987 was materially different (so the video is not a good counterpoint to the conclusion) and that in each case as targetted search rewarding non-functional nonsense phrases on merely being closer to target, weasel is fundamentally disanalogous to the claimed blind watchmaker, chance variation and natural selection across competing populations. Indeed, Weasel is an inadvertent demonstration of intelligent design using targets and/or so-called fitness landscapes and optimisation by hill-climbing techniques. As touching Dr Dembski et al., it has been pointed out that while their analysis on p 1055 of the IEEE paper is based on a constructed example and a particular model of variation as a weasel tracks to target, that does not change anything material about the reality of implicit latching, that similar to explicit latching it ratchets to target, and that either of them could account for the mere facts c 1986: the excerpted runs and the description. On subsequent reported statements by CRD, and the above probable programs, we can see that Weasel credibly exhibited implicit latching-ratcheting. And, EIL, sponsored by M&D, present a cluster of algorithms covering ways in which the o/p of 1986 could have been achieved explicitly or implicitly or even by sheer random chance. (It is noteworthy that objectors claiming that the EIL analysis in the IEEE paper etc caricatures the Dawkins original weasel -- which they cannot provide and show unique characterisation of from the 1986 BW text -- characteristically do not reckon with that range of algorithms and the implications of observing that latching (an inferred behaviour from 1986 run outputs) can be achieved explicitly and implicitly.) G'day GEM of TKI kairosfocus
--kf 1) W. Dembski and R. Marks may have sponsored a cluster of algorithms reflecting the options on Weasel. But in their paper, they speak about one of these weaseloids, and W. Dembski states:
Our critics will immediately say that this really isn’t a pro-ID article but that it’s about something else (I’ve seen this line now for over a decade once work on ID started encroaching into peer-review territory). Before you believe this, have a look at the article. In it we critique, for instance, Richard Dawkins METHINKS*IT*IS*LIKE*A*WEASEL (p. 1055). Question: When Dawkins introduced this example, was he arguing pro-Darwinism? Yes he was. In critiquing his example and arguing that information is not created by unguided evolutionary processes, we are indeed making an argument that supports ID.
To talk about the algorithm presented in their paper isn't strawmannish, it's natural. And so the main point stands: They are not critiquing Dawkins's example; thus, they are not necessarily making an argument for ID. 2) As has also been discussed repeatedly, partitioning of the letters cumulatively into found and not-yet-found groups can happen explicitly or implicitly, and with all sorts of possible mutation rates or patterns on the letters in Weasel's nonsense phrases. But an implicitly latching search is not a partitioned search as described by W. Dembski and R. Marks in their paper. And so, their math doesn't apply. Or can you show otherwise? Just show me the math! DiEb
Onlookers: Plainly, there is little more substantial for Darwinists to object to in this thread. I note briefly: 1] Messrs Dembski and Marks -- as previously, repeatedly, noted -- have sponsored a cluster of algorithms reflecting the options on Weasel, so it is strawmannish to construe them as making up just one algorithm, which can be cast as diverse from Dawkins' original Weasel. [Which, strictly speaking we do not know to this day, as for instance, Mr Dawkins claims that he does not recall whether or not W1 and W2 above were the originals. Recall, he has not published his original program source code, only a description that will fit with both explicit and implicit ratcheting patterns.] 2] As has also been discussed repeatedly, partitioning of the letters cumulatively into found and not-yet-found groups can happen explicitly or implicitly, and with all sorts of possible mutation rates or patterns on the letters in Weasel's nonsense phrases. GEM of TKI kairosfocus
kairosfocus:
After all, ID theory is not God and Dembski is not its inerrant prophet.
Then why have you gone to so much trouble to try to interpret Dembski's words such that "Mr Dembski's overall description of the behaviour of the Weasel 1986 algorithm is generally correct"? (Considering that the point of WEASEL is to illustrate cumulative selection, how can a version that does not involve selection be considered generally correct?)
I should note, on a point of material context, that up to the day where on April 9th I demonstrated implicit latching (thanks to Atom's adjustable weasel), it had been hotly disputed by multiple Darwinist objectors that such was possible, for weeks and weeks, across multiple threads -- complete with the most demeaning personalities. Nowhere above do we find acknowledgement of such inconvenient facts in the haste to say well we all agree that implicit latching [or, what you call implicit latching . . . ] is possible as a program pattern of behaviour (which of course implies an underlying process or mechanism). Similarly, when I proposed on law of large numbers that the showcased output of 1986 was on balance of evidence probably latched, this was sharply contested. Subsequently, that objection, too, has vanished without trace or acknowledgement that the balance on the merits in the end was not on the side of the Darwinist objectors.
I'm thoroughly confused. What "Darwinist objectors" ever disputed the fact that Dawkins' WEASEL exhibits implicit latching? It was you who sided with Dembski and Marks' description, which cannot be interpreted as implicit latching. Recall:
In this successfully peer-reviewed paper, on p. 5, they briefly revisit Dawkins' Weasel, showing the key strategy used in the targetted search: E. Partitioned Search Partitioned search [12] is a "divide and conquer" procedure best introduced by example. Consider the L = 28 character phrase METHINKS*IT*IS*LIKE*A*WEASEL (19) Suppose the result of our first query of L = 28 characters is SCITAMROFNI*YRANOITULOVE*SAM (20) Two of the letters, {E,S}, are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished. For the incorrect letters, we select 26 new letters and obtain OOT*DENGISEDESEHT*ERA*NETSIL (21) Five new letters are found bringing the cumulative tally of discovered characters to {T,S,E,*,E,S,L}. All seven characters are ratcheted into place. Nineteen new letters are chosen and the process is repeated until the entire target phrase is found. Now, beyond reasonable dispute, making a test case of such a famous example would not have passed peer review if it had been a misrepresentation.
(Emphasis in original.) R0b
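[For comparison with W1, a minimal sketch of the partitioned search Dembski and Marks describe above -- an illustration of their verbal description, not code from their paper. Once a letter matches, it is frozen; only the unmatched positions are redrawn on each query:]

Program Partitioned;
Const
  Alphabet:String[27]='ABCDEFGHIJKLMNOPQRSTUVWXYZ*';
  Target:String[28]='METHINKS*IT*IS*LIKE*A*WEASEL';
Var
  Guess:String;
  I,Queries:Integer;
  Done:Boolean;
Begin
  Randomize;
  Guess:=Target;                           (* right length, then randomized *)
  For I:=1 to Length(Target) do Guess[I]:=Alphabet[Random(27)+1];
  Queries:=0;
  Repeat
    Queries:=Queries+1;
    Done:=True;
    For I:=1 to Length(Target) do
      If Guess[I]<>Target[I] Then          (* matched letters stay frozen *)
      Begin
        Guess[I]:=Alphabet[Random(27)+1];  (* redraw only unmatched letters *)
        If Guess[I]<>Target[I] Then Done:=False;
      End;
  Until Done;
  Writeln('Found in ',Queries,' queries');
End.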
--kf,
All of this, it seems, is now to be swept away with the claim that there was no evidence that supported explicit latching as a possible mechanism, and that all along the evidence supported implicit latching, as though that is a triumph of the Darwinist objectors!
No, all of this is swept away as you failed to show how the math of the paper of Dembski and Marks is applicable to Dawkins's weasel. DiEb
Onlookers: Lest we forget: this thread started as a discussion of what seems to be the code for Weasel c 1986-7. These programs (to which, it seems, objections have been abandoned, as opposed to conceded) support the claims -- hotly contested when first made -- that: 1] Weasel, c. 1986, shows latching-ratcheting behaviour, in the implicit form (thus on the balance of evidence implicit latching is the best explanation for the showcased runs and commentary in BW, c 1986); 2] Weasel c 1987 (per the BBC Horizon video previously confidently cited as "proof" that Weasel c 1986 did not latch) was distinctly different as a program; 3] That Weasel c 1986 (and 1987 too) was a case of targetted search that rewarded mere proximity increments of non-functional "nonsense phrases" to the target, using warmer-colder signalling; thus: 4] Weasel 1986 was fundamentally dis-analogous to the claimed BLIND watchmaker process, i.e. chance variation and natural selection across competing sub-populations. These formerly hotly contested and dismissed or even derided points have plainly been abundantly vindicated. That is a significant achievement. However, all that this seemingly means for too many objectors is that the grounds for objecting should be shifted (I say this, on grounds that what we see here at UD is -- on sadly all too abundant evidence on many topics -- likely to be a milder form of the objections being made elsewhere). So, after increasing degrees of tangentiality, the process of inference to best explanation on evidence that comes in bit by bit is dismissed with claims that Weasel showed no evidence that latching could have been explicit. This was already answered, and only a brief summary will be required: 1 --> Inference to best, empirically based explanation is relative to evidence and is not a proof; indeed there is a counterflow between the direction of implications (from Explanation to empirical data) and the direction of empirical support (from observed facts to explanations). 2 --> Thus such abductive explanations -- a characteristic feature of science -- are inescapably provisional and subject to a matter of relative degree of support rather than absolute decision. (That's another way of saying that scientific work -- often implicitly -- embeds trust in faith-points at various levels up to and sometimes including core worldview presuppositions, by the way.) 3 --> In the case of the data on Weasel c 1986, three logically possible explanations cover the data and are live options: T1 -- pure chance, T2 -- explicit latching, T3 -- implicit latching. (This presumes that we have already accepted another previously hotly contested and dismissed point: the excerpted, showcased runs c 1986 support the inference that these runs exhibited latching of correct letters in generational champions.) 4 --> While pure chance is strictly possible, it is vastly improbable and so is soon enough eliminated relative to the other two options. 5 --> Explicit latching (as the months of strident objections to the possibility of implicit latching show) is conceptually simpler and -- per various reports of programmers seeking to replicate weasel -- is "easier" to code for. Thus, on initial balance of evidence per the facts of 1986, it was the "likely" explanation.
6 --> c. 2000 and beyond, indirect reports from Mr Dawkins to the effect that the original Weasel 1986 was not explicitly latched [and note, video of Weasel c 1987 was often cited by Darwinist objectors in claimed substantiation] tipped the balance in favour of T3, implicit latching, so soon as attention was drawn to them back in March. 7 --> This concept was hotly objected to, in a virtual firestorm of often highly heated objections, which only began to die down after the EIL Weasel GUI allowed demonstrations to be posted at UD as at April 9th. 8 --> Subsequently, a contest to produce the credible original Weasel c 1986 was mounted, and it now seems that we have two credible candidates: W1 for 1986, and W2 for 1987. [If these prove to be credibly correct (as the trend seems to be) we may reasonably conclude as already noted, i.e. Weasel c 1986 implicitly latched.] _____________ All of this, it seems, is now to be swept away with the claim that there was no evidence that supported explicit latching as a possible mechanism, and that all along the evidence supported implicit latching, as though that is a triumph of the Darwinist objectors! That, onlookers, speaks volumes. G'day GEM of TKI kairosfocus
--kf, sorry, I thought the paper of W. Dembski and R. Marks was the key point of all the threads of the last couple of days. I have to say that your opinion of latching/ratcheting etc. is of little consequence to me, but the statements of W. Dembski and R. Marks carry some weight. Therefore it was important for me to stress the point that the algorithm/example of their paper labeled Partitioned Search isn't the algorithm described by R. Dawkins in his book The Blind Watchmaker - and so, that the premise of W. Dembski's reasoning at this very website (criticizing R. Dawkins's weasel => criticizing evolution) doesn't hold. And this is absolutely independent of a discussion of the merits of Dawkins's weasel... DiEb
My apologies. The only "evidence" that TBW has an explicit latching mechanism is that kairosfocus, Dembski, and U Monash thought it did, until corrected. Yet thousands of other readers had no problem in understanding what duplication "with a certain chance of random error - 'mutation' - in the copying" meant. You appear to be unable to re-assess the evidentiary value of the latching behavior in light of your own data on Proximity Reward Searches. Sad, really. I was hoping you might produce some actual evidence. But no. So it remains true that all the evidence points to an implicit latching weasel (iyw). Given that all the evidence points to an implicit latching weasel and no evidence points to an explicit latching mechanism, would it be unreasonable to assume, as of May 2009, an explicit latching mechanism? A simple yes or no will suffice, kairosfocus, but I don't want to subject you to undue strain. As a first step, let's see if you can keep your reply under 1,000 words. BTW, I do enjoy your continued use of the word "mechanism" to describe "behavior", "partitioned" as a synonym for "latched", and your determined efforts to avoid DiEb's incredibly simple point: D&M describe a "divide and conquer" search, with the appropriate math. TBW weasel, including the code shown above, cannot be such a search. DNA_Jock
PS: I should note, on a point of material context, that up to the day where on April 9th I demonstrated implicit latching (thanks to Atom's adjustable weasel), it had been hotly disputed by multiple Darwinist objectors that such was possible, for weeks and weeks, across multiple threads -- complete with the most demeaning personalities. Nowhere above do we find acknowledgement of such inconvenient facts in the haste to say well we all agree that implicit latching [or, what you call implicit latching . . . ] is possible as a program pattern of behaviour (which of course implies an underlying process or mechanism). Similarly, when I proposed on law of large numbers that the showcased output of 1986 was on balance of evidence probably latched, this was sharply contested. Subsequently, that objection, too, has vanished without trace or acknowledgement that the balance on the merits in the end was not on the side of the Darwinist objectors. kairosfocus
Onlookers: Observe -- yet again -- the ever increasing degree of tangentiality just above, coupled to studious non-addressing of that which is central about weasel per the description in 1986 and the showcased examples. A few remarks are in order: 1] Dembski vs divide and conquer etc.: My remarks are primarily on the Weasel output and description c 1986 and its context, not on parsing what Mr Dembski may or may not have said. (After all, ID theory is not God and Dembski is not its inerrant prophet. That is I find a subtle, unwarranted implication of blind adherence to holy writ in too much of the above remarks by darwinist objectors, that needs to be corrected. Mr Dembski is a scientist-philosopher-mathematician, one of a long and distinguished tradition, but he is finite, fallible and fallen just as the rest of us are.) Now, as one who has had to program systems and as one who has had to plan things, I am familiar with hierarchical breakdowns of problems into sub-problems whose solutions when put together appropriately will solve the overall problem. And, that is what, in essence, a divide and conquer approach is. The key issue on ratcheting-latching and partitioning is different in emphasis, and we need to get back to it: incremental progress to target by preserving correct letters in current generation champions (explicitly or implicitly) and then getting new ones by a guess and test procedure. When applied, we see cumulative progress to target, and this has been demonstrated for both explicit and implicit latching as a component of ratcheting. Demonstrated since April 9. And, demonstrated to observably happen with varying per letter mutation rates and pop sizes so that selection to demonstrate and showcase cumulative progress to target easily accounts for the showcased runs and associated descriptions. The EFFECT of such ratcheting progress is that the overall problem of getting to the target -- and note again it is targetting and warmer-colder search strategies that reward non-functional phrases that make all weasels fundamentally dis-analogous to claimed mechanisms for evolution, thus misleading on the claimed prowess of BLIND watchmakers -- is divided up on a letter-wise basis. So, the divide and conquer effect is achieved through the ratcheting, cumulatively progressive action. 2] W1 doesn't fit this definition. And so, you just redefine "divide and conquer . . . W1 is credibly -- on balance of evidence -- the likely original Weasel 1986. It shows implicit latching, thus ratcheting and cumulative letter by letter progress to target. And that is my main concern. In showing this pattern of behaviour on an implicit mechanism of latching of generational champions, it exhibits letterwise progress splitting up the overall problem into an effectively letterwise one. Whether that fits or does not fit someone's particular definition out there is irrelevant to my concern. Especially when -- pardon my directness -- such tangential minutiae effectively constitute straining at a gnat while swallowing a camel. 3] Given the W1 code (and our endless discussions around "implicit latching"), and Atom's Proximity Reward Search, it is clear that an algorithm that lacks an explicit latching mechanism may show latching behavior. On this we are all agreed. Armed with this knowledge, what "evidence" is there that supports the hypothesis that TBW has an explicit latching mechanism? What evidence is there that argues against its being a Proximity Reward Search?
The only evidence seems to be that you and Dembski once thought that TBW was explicitly latched This is a case of twisting words from one context to another, to fit a wider rhetorical pattern. I have spoken in the context of empirical evidence and weight on balance. On balance of evidence -- and matters of fact are not settle-able to mathematical certainty -- W1 is the probable original Weasel, c. 1986. W1 shows implicit latching in a way that is related to Atom's adjustable weasel. Consequently the best current explanation -- note: the opposite in logic to a proof -- of the data and description in BW etc. is that W1 was the original Weasel and that CRD showcased latched runs in the range of 40 - 60 or so generations. Further to this, implicit latching is of course a mechanism, which accounts for cumulative progress to target. On the personalities side, it should be noted that a comparison of the record will show that I set the discussion in the context of the try 1, 2, 3 cluster of alternatives: T1 -- pure random chance, T2 -- explicit latching, T3 -- implicit latching. T1 can account for the data on a logical-possibility basis, as random chance can mimic anything else in a contingent situation, but it is not probable. T2, on the direct factual evidence of 1986 in BW etc., is the simplest, most direct account of the behaviour of Weasel. It is on the injection of indirectly communicated testimony that T3 becomes the better explanation on balance. W1 and W2 above further underscore this. Onlookers, observe the sharp contrast between the actual pattern of discussion and the tellingly ad hominem-laced caricature I have excerpted above. 4] Given that all the evidence is on the side of "implicit latching", how about we stipulate that TBW is a type of Proximity Reward Search that shows latching behavior. Then we can move on. Notice, again, the difference between empirically anchored provisional inference to best explanation -- the essence of science as process -- and the caricature that is again presented, and capped off with an accusation of "whining." There is, of course, no "all the evidence" pointing to just one possible explanation, given the points in 3 just above. On best explanation on balance of evidence from several directions -- and the only effectively certain evidence is the state of the text of BW c. 1986 -- T3, implicit latching, is the best estimate of the state of Weasel c. 1986, and W1 is the best candidate to be that original Weasel. 5] It is impossible for implicit latching to behave that way. Changing all of the incorrect letters would require a 100% mutation rate, which would entail the correct letters not being preserved at all. So it is a fact that Dembski was referring to explicit latching. The algorithm that he describes is a square peg that cannot be stuffed into the round hole of implicit latching. Again, tangentiality dominates, while the principal issue lies unaddressed. I am but little concerned as to what Mr Dembski's estimates of the state of Weasel were at any given time -- save to note that T2, on the direct factual evidence of BW c. 1986, is a better explanation than T1 or T3. Indeed, that is why the Monash U Australia biology folks initially understood Weasel as an explicitly latched, cumulative, ratcheting search, and had to be "corrected" by Mr Elsberry on that. And, for T2-type algorithms, a 100% mutation rate may be analytically convenient [especially for purposes of illustration], but is irrelevant to the achievement of cumulative progress to target. 
(To see this point in action, simply imagine W1 above -- only and no more than a single letter mutates in each child -- with the augment of forcing the seed into the population of children with certainty, not just high probability. The backstop dog or pawl is now certainly present, and the filter will at least preserve the seed's state of progress to target with certainty. Latching has now been built in explicitly. In the implicit case, as we can see in the original post, the W1 algorithm in effect relies on the odds of getting through such a case, which are fairly high. Consequently, on DiEb's calculation, 199 or so times out of 200 we will see implicit latching with a population of 100 per generation, and those who have run it say the pattern is to run to target in about 40 - 60 generations.) An implicitly latched mechanism can achieve the same effect of cumulative progress, and on adding the indirectly reported testimony of CRD c. 2000 (which we were made aware of sometime in March this year) and applying the ancient documents rule to the W1 source code provided by Oxfordensis, T3 is the best current explanation. In short, the issue is best explanation on a cumulative body of evidence of various kinds. Evidence with quite different degrees of weight and type, but evidence relevant to the conclusion that on balance of evidence, W1 -- an implicitly latching, ratcheting mechanism program -- is the most likely source of the showcased runs of 1986 in BW etc. _____________ And all of the above points back to the real issue, the elephant in the middle of the room that so many so often avoid discussing: Weasel and kin are fundamentally dis-analogous to the claimed power of CRD's BLIND watchmaker [chance variation plus natural selection, illustrative of undirected chance + necessity], but in fact inadvertently illustrate that intelligences can use constrained random variation and artificial selection on explicit or implicit knowledge of a target or the landscape of an objective function, to achieve a design. That is, these are inadvertent examples of intelligent design and its power to account for even evolutionary patterns of development. (That should be no surprise to one who knows that technologies evolve through intelligent direction in a competitive environment.) GEM of TKI kairosfocus
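[Editor's note: for concreteness, here is a minimal sketch of the modification described just above -- W1's generation loop with the seed forced into the child pool, so that the anti-reverse backstop is certain rather than merely probable. This is illustrative code of my own, not Dawkins's and not the Oxfordensis listing; the straightforward termination test also replaces W1's idiosyncratic one.]

Program WeaselElitist;
(* Sketch: W1 with the unmutated parent injected into each generation,
   turning the probabilistic backstop into an explicit latch *)
Type
  Phrase = String[28];
Const
  Alphabet : Phrase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
  Target : Phrase = 'METHINKS IT IS LIKE A WEASEL';
  Copies : Integer = 100;
Function RandChar : Char;
Begin
  RandChar := Alphabet[Random(27) + 1];
End;
Function SameLetters(A : Phrase; B : Phrase) : Integer;
Var
  I, L : Integer;
Begin
  L := 0;
  For I := 1 To Length(A) Do
    If A[I] = B[I] Then L := L + 1;
  SameLetters := L;
End;
Var
  Parent, Child, Best_Child : Phrase;
  I, Best, Generation : Integer;
Begin
  Randomize;
  Parent := '';
  For I := 1 To Length(Target) Do
    Parent := Concat(Parent, RandChar);
  Generation := 1;
  While SameLetters(Parent, Target) < Length(Target) Do
  Begin
    (* Seed the pool with the unmutated parent: the filter can now
       never pick a champion worse than the current one *)
    Best_Child := Parent;
    Best := SameLetters(Parent, Target);
    For I := 1 To Copies Do
    Begin
      Child := Parent;
      Child[Random(Length(Child)) + 1] := RandChar; (* one mutation, as in W1 *)
      If SameLetters(Child, Target) > Best Then
      Begin
        Best_Child := Child;
        Best := SameLetters(Child, Target);
      End;
    End;
    Parent := Best_Child;
    Writeln(Generation, ' ', Parent);
    Generation := Generation + 1;
  End;
End.

[Since Best_Child starts as the parent itself and only strictly better children displace it, a correct letter can never revert; removing those two initialisation lines in favour of Best := 0 recovers W1's merely implicit backstop.]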
Here is my take on these weasels: in short, W1 would fit Dawkins's algorithm, while W2 is something different. I can't exclude the possibility that W2 was used in the video for the BBC, but I'd prefer it if it wasn't: it's just a randomized version of the hangman's game (Optimization by Mutation With Elitism). And it is (implicitly) latching as hell, in the sense that a correct letter will not be changed in future generations.... DiEb
DNA_Jock is more charitable than I. I think that kairosfocus also needs to acknowledge the incontrovertible fact that Dembski was not talking about implicit latching in his descriptions of WEASEL. The algorithm described in the Nature of Nature conference, No Free Lunch, the EIL Weasel Math page, and the IEEE paper is as follows: For each iteration, the correct letters are held fixed, and all of the incorrect letters are randomly altered. This is a fact. Each of the above sources either makes the above bolded statement explicitly or presents math that entails it. It is also a fact that the above bolded sentence cannot refer to implicit latching. It is impossible for implicit latching to behave that way. Changing all of the incorrect letters would require a 100% mutation rate, which would entail the correct letters not being preserved at all. So it is a fact that Dembski was referring to explicit latching. The algorithm that he describes is a square peg that cannot be stuffed into the round hole of implicit latching. R0b
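[Editor's note: to make the contrast concrete, here is a minimal sketch of the search R0b's bolded sentence describes -- correct letters held fixed, every incorrect letter redrawn at random on each query. It is my own illustrative reading of that description, not a transcription of any code published by Dembski or Marks.]

Program PartitionedSketch;
Type
  Phrase = String[28];
Const
  Alphabet : Phrase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
  Target : Phrase = 'METHINKS IT IS LIKE A WEASEL';
Var
  Current : Phrase;
  I, Query : Integer;
  Done : Boolean;
Begin
  Randomize;
  Current := Target; (* only to set the string length; every character is overwritten next *)
  For I := 1 To Length(Target) Do
    Current[I] := Alphabet[Random(27) + 1];
  Query := 0;
  Repeat
    (* One query: every still-incorrect letter is redrawn with
       replacement; every correct letter is held fixed -- the latch *)
    Done := True;
    For I := 1 To Length(Target) Do
      If Current[I] <> Target[I] Then
      Begin
        Current[I] := Alphabet[Random(27) + 1];
        If Current[I] <> Target[I] Then Done := False;
      End;
    Query := Query + 1;
    Writeln(Query, ' ', Current);
  Until Done;
End.

[On this scheme a correct letter can never revert, by construction, and each query proposes only strings from the shrinking sub-space DiEb describes below; in W1, by contrast, the single mutation can land on any position of the champion, correct or not.]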
DNA Jock, Using Dawkins's description of cumulative selection and the illustration of CS using "weasel", the only inference one gets is that once a matching letter is found, the search for it is over. IOW, there isn't anything in what he says or illustrates (in TBW) that would demonstrate found letters can change. Joseph
kairos - My offer from post 84 stands:
Here’s an idea: you stipulate that TBW Weasel does not contain an explicit latching mechanism, and we’ll all agree that it can, given certain parameters, show latching behavior. Then we can all move on to discussing the ‘distant target’ issue.
You keep repeating that TBW is probably an algorithm with 'implicit latching', based on the "preponderance of evidence". I am puzzled. Given the W1 code (and our endless discussions around "implicit latching"), and Atom's Proximity Reward Search, it is clear that an algorithm that lacks an explicit latching mechanism may show latching behavior. On this we are all agreed. Armed with this knowledge, what "evidence" is there that supports the hypothesis that TBW has an explicit latching mechanism? What evidence is there that argues against its being a Proximity Reward Search? The only evidence seems to be that you and Dembski once thought that TBW was explicitly latched (Joseph's tortured interpretation of the word 'cumulative' is not evidence of explicit latching, after all). Given that all the evidence is on the side of "implicit latching", how about we stipulate that TBW is a type of Proximity Reward Search that shows latching behavior. Then we can move on. It is quite frankly hilarious for you to whine about distractions while you refuse to acknowledge the obvious. DNA_Jock
--kf if we want to go back, we should revisit this statement which started the current outbreak of weasels:
P.S. Our critics will immediately say that this really isn’t a pro-ID article but that it’s about something else (I’ve seen this line now for over a decade once work on ID started encroaching into peer-review territory). Before you believe this, have a look at the article. In it we critique, for instance, Richard Dawkins METHINKS*IT*IS*LIKE*A*WEASEL (p. 1055). Question: When Dawkins introduced this example, was he arguing pro-Darwinism? Yes he was. In critiquing his example and arguing that information is not created by unguided evolutionary processes, we are indeed making an argument that supports ID.
My point - and that of many others on these threads - is: Dembski and Marks weren't talking about Dawkins's example. So, whether they were indeed making an argument that supports ID is entirely unsettled (ex falso quodlibet). Mainly, I'm saying that the math developed by Dembski and Marks isn't applicable to the weasel algorithms as they are commonly understood, and - explicitly - that the program W1 doesn't even fit the description in the paper of a "divide and conquer" procedure. DiEb
--kf I gave you a link to a commonly accepted definition of "divide and conquer" in the field of algorithms. I showed how this definition is applicable to the algorithm/example of Marks and Dembski in the paper under discussion. I explained that W1 doesn't fit this definition. And so, you just redefine "divide and conquer"... Or is there any source which backs up your version ("In particular, in this context, the "divide and conquer" approach takes in partitioning in the sense of a catch and keep bin or net if you will")? Could you provide a short hint? DiEb
PS: Observe, too, that we see a repeated pattern of increasing degrees of tangentiality relative to the primary matter, and that as we move to further and further tangents, personalities are increasingly either explicit or lurking just below the surface of the discussion. Instead, let's get back to the point: W1 is the probable original Weasel 1986, and it clearly shows targeting, warmer-colder signals to non-functional stage-by-stage phrases, and implicit latching, so it -- or whatever actual Weasel 1986 program there is -- is fundamentally dis-analogous to the claimed mechanisms of evolution, and is certainly not a good example of a claimed BLIND watchmaker in action. Even poor Mr Paley had a better point 200+ years ago than is commonly allowed today: a mechanism with self-replication is manifesting a high degree of complex function which needs to be explained in light of empirically credible mechanisms for origin of complex integrated function. kairosfocus
Onlookers: Let's go right back to the beginning, first of all: that which is primarily and fundamentally "misleading" -- I here cite Mr Dawkins himself -- in the whole Weasel discussion is Weasel itself. For, Weasel (and we now have credible original program code that underscores it) is utterly dis-analogous to the claimed "blind watchmaker" -- I don't have time here to do more than say that poor Mr Paley has been even more strawmannised: do you know he discussed, for instance, the implications of discovering a self-replicating watch . . . in Chs 1 - 2 of his key book? That is, he understood and seriously addressed the issue of origin of complex integrated function -- of chance variation and natural selection: ________________ >> It [Weasel c. 1986] . . . begins by choosing a random sequence of 28 letters ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense [= non-functional, i.e. utterly dis-analogous to organisms that must function to a high degree to reproduce themselves] phrases, the 'progeny' [nonsense phrases are precisely the opposite of "progeny," so the scare quotes are a subtle admission of how misleading the whole exercise is] of the original phrase, and chooses [i.e. programmed, artificial selection, utterly dis-analogous to natural selection] the one which, however slightly, most resembles the target phrase [i.e. there is a built-in target, so Weasel does not create complex info out of mere noise but has it inserted from the beginning, as W1 and W2 show], METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection [so, CRD knew in 1986 that the injection of targeted search rewarding non-functional phrases on being slightly "warmer" on distance to target made all the difference, and managed to dismiss the crucial issue of functionality with a rhetorical flourish: "single-step selection"]: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection . . . In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [TBW, Ch 3, as cited by Wikipedia, various emphases and colours added.] >> ___________________ You will go back over the past months or the thread above in vain for a serious and frank facing of the implications of the above by the many Darwinist objectors here, at Evo info [which is the Darwinist site that began the now nearly year-long push to debate the issue of latching, through one of its representatives, as I have noted here], or elsewhere. 
So, before we do anything else, we must first see that any attempt on the part of those who refuse to face the above frankly to tag Mr Dembski or others with label-words like "misleading" or "distorting" becomes little more than a turnabout-accusation rhetorical stratagem designed to try to drag us down to the level of immoral equivalency where Weasel has been since 1986. Worse, observe that the accusers above again -- sadly, predictably -- fail to address the fact that EIL, sponsored by Marks and Dembski, has for many months hosted a cluster of algorithms addressing the different aspects of and approaches to the Weasel issue, ranging from an algorithm that shows the challenge of getting to the shores of complex function without artificial assistance [comparable to CRD's "single-step selection"], to explicitly latched algorithms, to an adjustable proximity reward algor that demonstrated the reality of implicit latching, and onward to others that seek to create reward functions that do not reward mere proximity. (And note, EIL makes its code explicitly available; to date we do not have an acknowledged original Weasel 1986 program, though we now have reasonably good reason to infer that W1 is the version that created the showcased runs of 1986 and W2 the BBC Horizon video.) In that context, the fact that on p. 1055 of the IEEE paper M & D discuss partitioning in terms of what I have called "catch and keep," and happen to use (for purposes discussed above on reading the reversed text of the phrases used illustratively) a case with a 100% mutation rate, is being used to set up an ad hominem-laced strawman. For, it has long since been demonstrated that explicit latching does not require 100% mutation rates, and it has also been demonstrated that implicit latching -- as has been ever so often described and explained above and in my discussion in App 7 of the always linked -- is possible and has actually been demonstrated here at UD since April 9th. In short, since April 9th, more than enough information has been available to answer the real secondary issue on the merits, namely that an implicit latching mechanism -- again note AmHD [when your quarrel is with a respectable dictionary, that should be a warning]: An instrument or a process, physical or mental, by which something is done or comes into being -- is real and would account pretty well for the showcased Weasel runs of 1986. That should have settled the matter on the merits, apart from the tertiary issue of getting credible original Weasel code. That last point seems to have now been covered, in this thread. (Cf onward discussion, here.) Now, with the above in hand, let us pick up a few illustrative points from commentary overnight: 1] Dieb, 100: M&D talk about a divide-and-conquer procedure. W1 doesn't fit this description. EIL, of course, hosts a cluster of algorithms that cover the bases. And, in describing partitioned search, the key point they make is that once a letter in a generational champion goes correct, the search for it is over. That can be achieved explicitly or implicitly, under a fairly wide array of circumstances. Circumstances that EIL covers in the various algors at the already linked GUI page. We must bear this in mind when we consider the arguments and points made below. In particular, in this context, the "divide and conquer" approach takes in partitioning in the sense of a catch and keep bin or net, if you will. 
2] Rob, 101: regardless of how broadly one defines "partitioned", the fact remains that Dembski has been mischaracterizing Dawkins' WEASEL algorithm for a long time, even after being corrected. But in fact, partitioning means what it means, and ratcheting means what it means -- and latching through an antireverse pawl or dog is a part of that -- and cumulative progress to target means what it means. All as long since repeatedly and carefully discussed in more than adequate detail. The EIL cluster of algors, which antedates the actual publication of the IEEE paper by several months, shows a considerable range of algors that give explicitly and implicitly latched, ratcheting behaviour, quasi-latched [ratchets that sometimes slip] behaviour, and clearly non-latched behaviour. In that context the above, sadly, becomes little more than an ad hominem-laced strawman used in a turnabout attack. 3] The EIL website presents the same math as the IEEE paper, specifically saying that it refers to the "partitioned search used by Dr. Dawkins". As discussed repeatedly, it's impossible to reconcile that math with the description and output of the algorithm in TBW, as well as the Pascal code that Dembski now considers to be Dawkins'. The reference -- note it is not given in context [and that the highly relevant further context of the cluster of algors at the EIL GUI is not mentioned] -- is:
First, let's look at partitioned search used by Dr. Dawkins. Assuming uniformity, the probability of successfully identifying a specified letter with sample replacement at least once in Q queries is . . .
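[Editor's note: the equation itself is elided in the quote as given. On the quoted description -- uniformity over an alphabet of N = 27 characters, sampling with replacement, success "at least once in Q queries" -- the standard probability such a description points to would be 1 - (1 - 1/N)^Q. That is my reconstruction from the quoted words, not a verbatim transcription of the EIL page.]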
The first statement is clearly an empirically based inference that the showcased Weasel runs of 1986 show partitioning, in the sense that once a letter in a generational champion in these runs goes correct, it does not revert. This holds true without exception for a sample of 200+ letters that potentially could revert, from 300+ letters total. That is, it is a dominant and striking behaviour pattern of the run outputs. Next, EIL proceeds to a particular model: "Assuming uniform probability . . . " What follows is indeed the same model given in the IEEE paper. However, we may also see -- through links on the very same page that take you to live versions of the GUI -- that as soon as EIL proceeds to computer modelling, they present a cluster of various algorithms (and models, as a direct consequence), inviting participative comparison and exploration of various flavours of Weasel ware. So, while I can sympathise with desiring editing of the EIL math discussion page or the IEEE paper, I cannot agree that M & D are, in any reasonable sense of the term, willfully misleading. 4] This is how Dembski described it at the "Nature of Nature" conference . . . . This matches the IEEE paper and the EIL website, but not TBW. The excerpt (and remember the context above):
But consider next Dawkins’ reframing of the [origin of complex, functionally specific bio-information] problem. In place of pure chance [and until you attain to complex, specific info based function, a BLIND watchmaker cannot bring to bear differential success at function to climb the hill to optimal performance so this is a fundamental dis-analogy to the generally proposed evolutionary mechanisms . . . i.e. the primary issue; note how this is again not addressed by Darwinist objectors], he considers the following evolutionary algorithm: (1) Start with a randomly selected sequence of 28 capital Roman letters and spaces (that’s the length of METHINKS IT IS LIKE A WEASEL); (2) randomly alter all the letters and spaces in the current sequence [poorly phrased, and the point where D does invite a rhetorical wedge; but the point from the contextual reference and showcased runs is plainly that letters are subject to random alteration, and applicable mutation rates from the EIL models and elsewhere can plainly vary] that do not agree with the target sequence; [that is, in context of what was to be explained -- the showcased 1986 runs -- once a letter in a generational champion goes correct it is credibly preserved thereafter; and there is in the context of the showcased runs plainly no irrevocable commitment to impose a 100% mut rate per incorrect letter, though he does mathematically model such a case -- so again cf the cluster of EIL algors] (3) whenever an alteration happens to match a corresponding letter in the target sequence, leave it and randomly alter only those remaining letters that still differ from the target sequence. [that is, partition the search so that when a letter becomes correct in a generational champion, it is preserved in succeeding champions until the target is hit. ]
When I compare the above with the showcased runs of Weasel 1986, in the contexts as noted in parentheses, I find that Mr Dembski's overall description of the behaviour of the Weasel 1986 algorithm is generally correct, noting that Weasel 1986, as described and showcased in BW, does not necessarily show 100% mutation per incorrect letter in each child in each generation. [Note again the significance of the cluster of algors at EIL on this.] Moreover, the W1 program above, the best candidate to be the 1986 algorithm, ratchets through an implicit latching mechanism, as has also been described repeatedly in this thread, and similar to the explanation and examples over the past months. Observe also the rhetorical pattern of shifting from one topic to the next, ending up in personalities: [1] Weasel 1986's problem of targeted search with a built-in target, to [2] the long debate over latching and incidental issues, to now [3] Dembski the claimed mischaracteriser. Rob, a note to you: I would take your points over Mr Dembski and his phrasing or 100% mutation-rate model far more seriously if I saw a balancing, serious and frank facing of the primary issues with Weasel. But, over many months now, I do not. Think about the message that is sending me and others, in the context of the all too commonly seen Darwinist debater tactic of distraction, distortion, demonisation and dismissal. 5] CIT, 102: If retention of any letter is simply a result of the initial conditions, then how can the program itself be considered latching? This conflates two distinct issues: ratcheting-latching action, and using explicit mechanisms to do this. A program, generally speaking, takes in inputs, processes them per algorithms, data structures and coding, and produces outputs. The main mechanism of the program thus lies in its processing. And, where that processing is in material part driven by the values of key parameters, then setting such parameters is necessarily a major part of the mechanism: how 'tweredun. In the case of weasels that show latching-ratcheting action, it has been shown that this can happen in two distinct ways: explicit setting up of a masking mechanism that blocks reversion of correct letters, and implicit interactions that, through the presence of no-change cases [or the like] in mutant populations, provide a backstop that blocks reversion when the filter is applied to choose a new generational champion. The showcased 1986 runs show that in fact the no-change case [or the comparable side-step case where an incorrect letter shifts to a different incorrect value . . . ] won the generational championship race roughly half the time. In short, this is a major behaviour of the program as implemented c. 1986. So, "simply" clearly misses the key point. GEM of TKI kairosfocus
kairosfocus, #99
Above, we should not lose focus on the actual matter to be explained: what is actually to be explained on the secondary issue of latching-ratcheting is that for generation champions, once a correct letter has been identified -- due to the action of the explicit or implicit latching mechanism as a component of the ratcheting action -- the search for it is effectively over. In the case of particular interest, implicit latching and ratcheting, once the population dynamics on the seed are such that there is a high enough probability of no-change cases being present in the child population for a generation [and the odds of double mutations etc. are low also], the filter will reject anything that does not at least preserve advances to date.
If retention of any letter is simply a result of the initial conditions, then how can the program itself be considered latching? camanintx
kairosfocus, regardless of how broadly one defines "partitioned", the fact remains that Dembski has been mischaracterizing Dawkins' WEASEL algorithm for a long time, even after being corrected. This fact can't be denied simply by interpreting the IEEE paper in a certain way. Consider: - The EIL website presents the same math as the IEEE paper, specifically saying that it refers to the "partitioned search used by Dr. Dawkins". As discussed repeatedly, it's impossible to reconcile that math with the description and output of the algorithm in TBW, as well as the Pascal code that Dembski now considers to be Dawkins'. - This is how Dembski described it at the "Nature of Nature" conference:
But consider next Dawkins' reframing of the problem. In place of pure chance, he considers the following evolutionary algorithm: (1) Start with a randomly selected sequence of 28 capital Roman letters and spaces (that's the length of METHINKS IT IS LIKE A WEASEL); (2) randomly alter all the letters and spaces in the current sequence that do not agree with the target sequence; (3) whenever an alteration happens to match a corresponding letter in the target sequence, leave it and randomly alter only those remaining letters that still differ from the target sequence.
This matches the IEEE paper and the EIL website, but not TBW. - The same description is repeated almost verbatim in his book No Free Lunch. R0b
--kf have a look at #86 and #90: M&D talk about a divide-and-conquer procedure. W1 doesn't fit this description. DiEb
Onlookers: Above, we should not lose focus on the actual matter to be explained: what is actually to be explained on the secondary issue of latching-ratcheting is that for generation champions, once a correct letter has been identified -- due to the action of the explicit or implicit latching mechanism as a component of the ratcheting action -- the search for it is effectively over. In the case of particular interest, implicit latching and ratcheting, once the population dynamics on the seed are such that there is a high enough probability of no-change cases being present in the child population for a generation [and the odds of double mutations etc. are low also], the filter will reject anything that does not at least preserve advances to date. Indeed, in the showcased 40+ and 60+ generation runs of 1986, the no-change case evidently won the generation-champion race about half the time. Plainly, this is a significant, though somewhat subtle, effect. (If we started with no correct letters and stepped just one letter closer per generation, we would hit target in 28 steps.) So, while individual members of the population can vary all over the place, the presence of the no-change cases, acting with the dominance otherwise of single-step advances, will create implicit latching and ratcheting to target. All of this corrective effort on a side issue is probably best justified by highlighting how it teaches us to think dynamically and to recognise the reality of probabilistic barriers as a real-world mechanism. Which is not without general significance for the wider design issue. So, let us study how the proponents of the different points of view handle the question of mechanisms, where mechanisms are important and may not necessarily be obvious or explicit. GEM of TKI kairosfocus
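[Editor's note: a quick back-of-the-envelope figure for the no-change backstop under W1's settings (one mutated position per child, 27-character alphabet, 100 children per generation) -- my own arithmetic, offered for illustration: each child reverts to the parent's letter with probability 1/27, so the chance that at least one child in a generation is an unchanged copy of the parent is 1 - (26/27)^100, roughly 0.98. And even when no exact copy appears, any child whose single mutation lands on an already-incorrect position still preserves the parent's score, so the effective backstop is stronger still.]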
Excellent Joseph, we are finally getting somewhere. Let X denote the statement "once a letter is found, the search for that letter is over". With your daughter's help, we have determined that X may be false for TBW Weasel. Can X be false for the search described on p1055? Only after you have answered that easy question may we explore the issue of X appearing to be true, when in actuality it is false... DNA_Jock
PS: Similarly, on partitioning. To partition is not a mystery in language, or in context: once letters in the showcased 1986 Weasel runs go correct, they stay that way; and that can happen explicitly or implicitly, as has been demonstrated. Further to this, EIL has put up a cluster of Weasel algors [and that is a highly relevant fact], showing the two patterns, so there is no one "the" weasel algorithm a la M & D that can be strawmannised into a cartoonish contrast to Dawkins' work. Resorting to the rhetorical device of speaking as though the facts are not so is revealing that, again, the real facts are not in your favour. kairosfocus
DNA-J: I have cited, for weeks now, the reason why I have used "mechanism" as I have -- and this is a fairly standard and entirely appropriate usage. I again did so just now, as a simple scroll-up will show. You may choose to act as though those unwelcome-to-you facts are not so, but that evidently willful refusal is quite revealing that the balance on the merits is not in your favour. Please, think again. GEM of TKI kairosfocus
DNA Jock, The answer is "most likely"- that is according to my daughter's magic 8 ball. Joseph
Joseph So, in TBW Weasel, the search for a found letter is "for all intents and purposes" over.
If you omit this qualifier, is your statement still true?
Please answer Alice's question DNA_Jock
kf - Still with the "mechanism is a synonym for behavior", H.D.! (see kf's 233 on prev thread) Talk about recycling cogently answered objections, indeed. You completely failed to respond to my point re the meaning of "partitioned search". And no, 'partitioned' is not a synonym for 'latched', H.D. A search algorithm may be latched but not partitioned (a conversation we have had previously, you will recall), or it may be partitioned but not latched (useful if the target is moving). p1055 describes a partitioned search. TBW weasel is not partitioned. The PASCAL under discussion will show impressive latching behavior, almost every time, but it is not a partitioned search. DNA_Jock
DNA Jock:
I am troubled by the word “essentially”.
And you want me to comfort you? OK then- DNA Jock, do not be troubled by the word "essentially". It cannot do you any harm and its meaning can be easily ascertained. But anyway in context "essentially" means "for all intents and purposes". Joseph
DNA-J: You know or should know that we have been discussing the 1986 showcased runs, and a mechanism that, when population sizes, per-letter mutation rate and filter are mutually adapted, will produce a good enough fraction of runs showing the pattern as showcased in 1986. For instance, in the as-printed version of W1 above, runs will latch implicitly 199 out of 200 times. That is good enough to account for what was needed. GEM of TKI PS: Onlookers, I find it interesting that back in March-April, the attack was that the runs as showcased did not show good reason to believe they latched. Notice how that talking point has now disappeared. Now that we have credible code for Weasel 1986 [and the video runs of 1987], it should be clear that the inference to latching was well supported, and that the implicit latching is real enough. kairosfocus
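[Editor's note: a cross-check on that 199-out-of-200 figure -- my own rough arithmetic, not DiEb's derivation. In W1 a champion can regress only if every one of the 100 children mutates a currently correct letter to a different letter. With k letters correct, that has probability ((k/28) * (26/27))^100 per generation; even at k = 27 this is (26/28)^100, about 6 * 10^-4, and it is far smaller for lower k. Since a run typically spends about eight generations waiting for the last letter (the per-generation chance of fixing it is 1 - (1 - 1/756)^100, about 0.12), the cumulative risk of a slip comes out near 8 * 6 * 10^-4, i.e. roughly 1 in 200 -- consistent with the quoted figure.]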
--kf D&M write
Partitioned search is a divide and conquer procedure best introduced by example.
They don't write
Partitioned search is a latching procedure best introduced by example.
You may call W1 implicitly latching, but you can't call it a divide and conquer procedure. DiEb
Joseph wrote:
The way Dawkins describes cumulative selection and illustrates it with "weasel", it is a divide and conquer strategy -- that is, once a correct letter is found/matched, the search for it is essentially over.
I am troubled by the word "essentially". Does it mean "if the target string is sufficiently short and the generation size is sufficiently large and the mutation rate is not too high"? If you omit this qualifier, is your statement still true? Alice DNA_Jock
PS: I think I need to underscore: the latching and ratcheting in view are those of generational champions. So, it is that behaviour that needs to be explained, including the description that the effect of the process is to "catch and keep" on-target letters as they are successively guessed, creating cumulative progress to target. kairosfocus
DNA-J: First, latching behaviour is produced by a latching mechanism, which may be explicit or implicit, as has already been pointed out to you. And, a mechanism -- again, onlookers! -- is: An instrument or a process, physical or mental, by which something is done or comes into being. [AmHD] If you took time to actually follow up on the roots of the matter, you would soon see that the heated and prolonged exchanges over latching, etc., over several months are secondary to the primary issue, and have actually been distractive -- as in red herrings and ad hominem-soaked strawmen -- as posed by objectors to the primary point: Weasel, by CRD's confession, is fundamentally dis-analogous to any reasonable mechanism for chance variation and natural selection. Generational champions are the focus of attention for the discussion for a very simple reason: it is these that were showcased in the CRD runs of 1986. So, it is their evidently latched behaviour that needs to be explained. And, explicit and implicit mechanisms are both able to do that. We now know, on the preponderance of evidence, that the likely original Weasel -- W1 above -- exhibits implicit latching. And since old, long since cogently answered objections are now being recycled, it seems that the matter is pretty well settled on the merits. Just, the answer on the merits does not sit well with champions of the blind watchmaker thesis. GEM of TKI kairosfocus
For instance: A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type, until these become simple enough to be solved directly. I'll use the example of D&M to identify the sub-problems:

1. Find the correct string out of the 1.19 * 10^40 strings matching ............................

2. sub-problem 1: Find the correct string, using the 1.64 * 10^37 strings matching …………………..E.S.. sub-problem 2: forget all strings not matching …………………..E.S..

3. sub-problem 1: Find the correct string, using 1.14 * 10^30 strings matching ..T……….S….E._..E.S.L sub-problem 2: forget all strings not matching ..T……….S….E._..E.S.L

At each step, the space of feasible solutions is reduced considerably: we're in fact looking for a sub-string in a smaller space. The complement of this smaller space is discarded. So, now try to apply this definition to weasel1: What are the sub-problems? Each sub-problem should correspond to a sub-space of strings. Which are the sub-spaces? DiEb
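[Editor's note on where those counts come from, for the record: with a 27-character alphabet, there are 27^28, about 1.19 * 10^40, strings of length 28 in all; fixing the 2 correct letters of the first mask leaves 27^26, about 1.64 * 10^37, candidates; and fixing the 7 letters of the second mask leaves 27^21, about 1.14 * 10^30.]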
DNA Jock, The "weasel" program doesn't need to contain explicit programming for latching. It does it as a matter of course. The way Dawkins describes cumulative selection and illustrates it with "weasel", it is a divide and conquer strategy -- that is, once a correct letter is found/matched, the search for it is essentially over. Joseph
kairosfocus My irony meter broke when you started complaining about the effort expended to dismiss the concept of implicit latching. Latching, smatching. Here's an idea: you stipulate that TBW Weasel does not contain an explicit latching mechanism, and we'll all agree that it can, given certain parameters, show latching behavior. Then we can all move on to discussing the 'distant target' issue. In the meantime, your defense of D&M's use of TBW as a citation for a "Partitioned Search" goes ever further off the rails. You are now claiming that the word 'partitioned' somehow refers to the separation of letters in generational champions into those that are an improvement vs those that are not.
Partitioning — AmHD: “The act or process of dividing something into parts . . . To divide into parts, pieces, or sections” –in effect finds a way to put the letters of generational champions — what was to be explained, per the showcased examples of 1986! — into two bins: the ones already on target [that will not slip back] and the ones not yet on target [which will vary until they can be put into the on-target bin]. [emphasis in original]
But TBW Weasel retains phrases, not letters. And this is not what "partitioning" means. D&M describe, quite unambiguously (albeit with a sophomoric example), a 'divide and conquer' search, and give the math (eqn22) for a 'divide and conquer' search without generational champions, where the search for each element is independent of the search for the other elements. And they call this search a "partitioned search" in keeping with the accepted use of this term by the rest of the world. Obviously, TBW Weasel is not a 'divide and conquer' search, so you, in your Quixotic defense of the uncharacteristically silent authors, are reduced to Humpty Dumpty land.
8 –> It is in this context that I have objected to trying to turn partitioning to a synonym for the sort of algor that may be interpreted from the didactic example and associated calculation on p 1055 of the IEEE paper.
"There's glory for you!" -- H. Dumpty DNA_Jock
DiEB:
“divide and conquer” implies that at each step we can exclude a set of strings from being possible solutions:
What do you base that on? Or is it just another strawman? But anyway, as I said, if you read TBW, the way Dawkins describes cumulative selection and illustrates it with "Weasel", it is a partitioned search -- that is, once you find what you are looking for, the search for it is over. Joseph
OOPS: 100% mutation rates, not 10%. kairosfocus
Dieb: Is the only possible case where a "divide and conquer," "ratchet[ed]" search shows "cumulative" progress to target -- thus, "partition[ing]" -- one in which the rate of mutation per incorrect letter is 100%? The answer is obvious: no. Partitioning -- AmHD: "The act or process of dividing something into parts . . . To divide into parts, pieces, or sections" -- in effect finds a way to put the letters of generational champions -- what was to be explained, per the showcased examples of 1986! -- into two bins: the ones already on target [that will not slip back] and the ones not yet on target [which will vary until they can be put into the on-target bin]. Catch and keep, not catch and release. Such can be accomplished by explicit mechanisms, and by implicit mechanisms. All of which has long since been shown. Also, W1 above, which is on balance of evidence likely to be one of the original versions of Weasel in question, manifests the implicit latching pattern. So, I think it is fair comment to say that you have missed the forest for the trees. Yes, the example and calculation M & D provide on p. 1055 of the IEEE paper are relevant to a case where non-correct letters undergo 10% mutation rates. At the same time, their lab, EIL, hosts several versions of Weasel that show explicit and implicit latching of letters in generational champions and consequent ratcheting, cumulative progress to target; which last is what CRD enthused over in his remarks on the performance of the original Weasel algorithm. And since CRD did not provide the actual algorithm or program, we must consider how this can be done. It turns out, many ways that fit with the description and examples c. 1986 in BW. Of these, the implicitly latched versions are on balance of evidence the correct general class. And, on the balance of evidence discussed in this thread, W1 in the original post is a credible candidate to be the actual original Weasel responsible for the showcased runs. W1, as discussed, shows implicit latching of generational champions. In short, regardless of pros and cons on the M & D discussion on p. 1055 of the IEEE paper, what needed to be explained in the first place, as a secondary matter, has been adequately explained. And, on the primary matter concerning Weasel, it would be far more relevant to the central issues at stake if Darwinists were to attend, with 1/100 of the effort expended on trying to dismiss the concept of implicit latching and ratcheting, to the issue that Weasel, from the outset, has been fundamentally dis-analogous to the claimed process of chance variation and natural selection and what that can achieve. For Weasel plainly and admittedly rewards non-functional nonsense phrases on mere increments in proximity to a target, betraying its fundamental question-begging: what was to be explained is . . . not . . . [a] that functionality based on complex information can vary through random events affecting the information, and that in a competitive population, that may lead to shifts in relative abundance of varieties . . . but instead . . . [b] the origin of complex, information-based function, for both first life and body-plan level bio-diversity. When such a central issue should be on the table, but instead every tangential side-issue imaginable is being discussed and debated, that tells me a lot about the true state of the case on the merits. And, not to the benefit of Darwinism, nor to the credit of Darwinists so committed to distractive side issues. GEM of TKI kairosfocus
Dembski and Marks say:
Partitioned search is a “divide and conquer” procedure best introduced by example.
Apparently the introduction via example isn't good enough, as we seem to disagree on what such a search entails. "Divide and conquer" implies that at each step we can exclude a set of strings from being possible solutions: in the example of Dembski and Marks, all strings which don't fit .......................E.S.. after the first generation, and all strings which don't fit ..T..........S....E._..E.S.L after the second generation. That is, we can decide for a string, before the evaluation of the fitness function, whether it is a possible solution or not. Therefore, we are reducing our search space at each step, as only possible solutions will be presented to the oracle. Does Dawkins's algorithm work in this way? Does weasel1 work in this way? Whether they are quasi-latching or whatever, they are not dividing-and-conquering. Reexamine Dawkins's example in his book - after 20 generations we get the phrase 20 MELDINLS IT ISWPRKE Z WECSEL So, will only strings of the form ME..IN.S_IT_IS........WE.SEL be proposed to the oracle? If you have a look at weasel1, the answer for this program is emphatically no: especially if the generations are big, many strings not fitting this pattern will be examined: the search space isn't partitioned at each step. If you understand "divide and conquer" in this context in another way, please elaborate... Until then, I can only repeat: Dawkins's algorithm isn't a "divide and conquer" algorithm as introduced by Dembski's and Marks's example. DiEb
PS: Joseph, I forgot: latching, ratcheting, cumulative progress to target [which is what CRD enthused over in 1986] and partitioning of the targeted search are all causally connected. Thanks for the reminder. kairosfocus
Dieb: A modification of the approach will show one way that implicit latching can be achieved. Recall: the essential feature of implicit latching and ratcheting is that if we have a per-letter mutation rate and population that are matched to an appropriate selection filter, we will see that implicit latching will become observable in at least some runs if: 1 --> To high probability, no-change cases appear in the population of children of a given seed. (Not very hard to achieve, as 1 of 27 times a mutation comes back to being the original letter. And, in other cases, with a low enough probability of mutation per letter in a child phrase, some members of a generation will have no letters so chosen and will be equal to the original.) 2 --> This creates the ratcheting action's required antireverse. And, antireverse playing out in a population run is what is at the heart of latching, ratcheting and partitioning: once a letter goes correct, the search for it is effectively over, i.e. the search is effectively letterwise. That is, it is locked in or latched by the antireverse effect. In other words, latching, ratcheting and partitioning are causally interconnected. [And, onlookers, this has been pointed out over and over and over again; there is a persistent strawmannising of what partitioning etc. means.] 3 --> Once single-step advances then dominate the behaviour on the filter, we will have a pattern of implicit latching in at least some runs. (Quasi-latching with occasional slips will pop up otherwise.) 4 --> This has of course been demonstrated here at UD ever since April 9th. 5 --> As also discussed above in this thread at 39 ff, the W1 algorithm -- W2 is decidedly different, not showing generational clustering -- has a backstop and actually enforces single-letter advances only. With a population of 100 per generation, the odds of showing implicit latching of generation champions are, according to your calculation, something like 199 out of 200. 6 --> Now, on p. 1055 of the IEEE paper, M & D presented a mathematical analysis for a case where 100% of non-correct letters mutate. (We now know part of why, as this gives them the opportunity to present an interwoven code.) 7 --> By the alchemy of strawmannising rhetoric, this has been transmuted into "the" M & D algorithm, to be triumphalistically contrasted with what Dawkins did. Problems: (i) EIL actually presents -- ever since April (long before the IEEE paper was published) -- a cluster of algorithms covering the range of reasonable interpretations of the Weasel description [and the comparative all-at-once search], and (ii) Dawkins' description and showcased runs c. 1986 are compatible with two whole families of algorithms: explicitly and implicitly latched targeted searches. 8 --> It is in this context that I have objected to trying to turn partitioning into a synonym for the sort of algor that may be interpreted from the didactic example and associated calculation on p. 1055 of the IEEE paper. 9 --> Similarly, Weasel, as a targeted search that rewards decidedly non-functional "nonsense phrases" on mere proximity to target, is fundamentally dis-analogous to the proposed Darwinian mechanism of chance variation and natural selection across competing reproducing populations. 10 --> For, surely, being a viable, reproducing life form matched to a particular environment is a necessary condition for CV + NS to even be a factor. And, that requires origin of complex function. 
11 --> Which brings us back to the core challenge to the Darwinian synthesis ever since the Wistar consultation of 1966: origin of complex function. Until you have a means to credibly and with sufficient probability create a 747 in a junkyard by a tornado, you have no basis for originating a von Neumann replicator with metabolic action [first life], and you have no mechanism to onward create complex body-plan level novelty. 12 --> And after the various rhetorical dodges, objections and turnabout tactics are discounted, the bottom line remains: evolutionary materialism, the reigning orthodoxy, has no viable mechanism for the origin of required complex function in the context of metabolic von Neumann self-replicators.
[Recall, we need metabolic machines to create parts, we need blueprints, we need coding schemes, we need code readers, we need organised clusters of effector machinery. And at just 1,000 bits as a threshold for FSCI, the atomic resources of the observed cosmos across its credible lifespan are simply not adequate to give us a credible scan across the number of possible configs. Where 1,000 bits -- less than 150 bytes -- is hopelessly inadequate storage to set up a VNR.]
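[Editor's note: the arithmetic behind that threshold claim, as I read the figures used in this thread: 1,000 bits admit 2^1000, about 1.07 * 10^301, configurations, while 10^80 atoms, each changing state at an assumed 10^45 times per second (roughly the Planck-time rate), over 10^25 s, run through at most 10^80 * 10^45 * 10^25 = 10^150 states -- about 1 in 10^151 of the configurations, hence the "1 in 10^150" figure. The 10^45 per-second rate is implied by, not stated in, the comment above.]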
------------- GEM of TKI kairosfocus
Except that as explained and illustrated by Dawkins, cumulative selection is a partitioned search. Nothing you say will change that. Joseph
On the main issue, the acknowledged targeted search on increments in non-functional proximity to target suffices to show that Weasels are fundamentally dis-analogous to the claimed context of chance variation and natural selection. And, as Dawkins himself admitted, this is a bit of a "cheat" and fundamentally "misleading." Weasel should never have been used, and in fact inadvertently demonstrates the power of intelligent design to use targeting to overcome the search space challenge of getting to complex function. So, the issue in the main seems more or less settled.
1. Dawkins's weasel highlights some aspects of chance variation and natural selection, but of course not all of them: that would be too much to ask of any short algorithm. 2. Weasel demonstrates cumulative selection, and it is used to this effect. Like any man-made algorithm it is designed, yet it uses evolutionary techniques. 3. For me, the issue in the main was: Does the algorithm exemplified in the paper of Dembski and Marks represent the algorithm described by Dawkins in "The Blind Watchmaker", and therefore, is the math of the paper applicable to it? This issue surely is settled: it does not, and it is not. DiEb
--kf,
Eqn 22 p 1055 of the IEEE paper was discussed previously. It as it stands relates to one scenario; which exists in the context of a fairly large cluster of various Weasel algorithms.
Eq 22 may be applicable to a fairly large cluster of various Weasel algorithms, but it doesn't seem to apply to the larger cluster of what you call implicitly latching weasels. Or can you formulate an equivalent equation for weasel1? That the equation isn't applicable to weasel1 shows again that the algorithm described by Dawkins isn't the Partitioned Search as exemplified by Marks and Dembski. DiEb
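[Editor's note: for what it is worth, a rough analogue for W1 can be sketched -- my own back-of-the-envelope model, ignoring the rare slips: with k letters correct, a child improves exactly when its single mutation lands on one of the 28 - k incorrect positions and draws the target letter, probability (28 - k)/(28 * 27) = (28 - k)/756. A generation of 100 children then advances with probability P_k = 1 - (1 - (28 - k)/756)^100, so the expected run length is about the sum of 1/P_k over the remaining values of k, which for a typical random start comes out near 48 generations -- consistent with the 40 - 60 generation runs discussed above, but a visibly different expression from the per-letter, sample-with-replacement math of the paper.]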
Onlookers: A few notes. 1] Weasels: Dieb has raised interesting points on M & D's further approaches to the general Weasel question. Eqn 22, p. 1055 of the IEEE paper, was discussed previously. It, as it stands, relates to one scenario, which exists in the context of a fairly large cluster of various Weasel algorithms. Since the reality of implicit latching has been demonstrated, and since the reasonably likely original weasels fall under this ambit, the secondary issue on latching-ratcheting in the showcased runs of 1986 has been cogently answered. On the main issue, the acknowledged targeted search on increments in non-functional proximity to target suffices to show that Weasels are fundamentally dis-analogous to the claimed context of chance variation and natural selection. And, as Dawkins himself admitted, this is a bit of a "cheat" and fundamentally "misleading." Weasel should never have been used, and in fact inadvertently demonstrates the power of intelligent design to use targeting to overcome the search-space challenge of getting to complex function. So, the issue in the main seems more or less settled. 2] Moseph, 70: What KF wants is a lab experiment that's set up to run all on its own that will generate life spontaneously. Putting aside the fact for now that even if that did happen he'd claim "investigator interference", it should be obvious to all that the only experiment that could possibly perform as he requires is one that has already happened, and it was a one-off. The Earth, billions of years ago, was the experiment. Much else is rather crudely distractive, distorting or ad hominemistic, so we will ignore it for the moment. The above excerpt, however, captures the essential problem: an experiment set up to replicate credible early-earth or similar conditions is, by M's acknowledgement, not likely to go anywhere. [Apart from the specific designed and intelligent intervention of investigators, which would inadvertently show the capacity of intelligent design. Rather like Weasel.] But, functionally specific, complex, algorithm-implementing information is routinely seen to produce functional entities that are beyond the credible reach of chance circumstances and blind mechanical forces on the gamut of our observed cosmos. In particular, a metabolism-implementing, self-replicating life form will have had to implement a von Neumann replicator. This is an extension of known technologies or approaches: coded blueprint storage, code, code reader, organised effectors, support units to ingest and transform environmental inputs to feed the process with required input materials. So, M has inadvertently supported the thesis that such FSCI -- and a descriptive term and acronym take legitimacy from that, not from whoever uses it [cf. here from the WACs on its roots in 1970's - 1980's OOL researcher discussions by e.g. Orgel, Yockey and Wickens] -- is only empirically credible on intelligence. But, that cuts across the a priori intent to assume or assert that life originated spontaneously through chance + necessity, so M proceeds to assert that anyway. 3] . . . it was spontaneous? Onlookers, "spontaneous" here indicates "it just happened" when in fact it's rather unlikely it did. As noted, networks of autocatalyzing chemicals no doubt had a part to play. There was little spontaneous about it, except that KF would like to make you think it happened by magic. In fact, KF has the version of events where things happened in an instant, from nothing. 
His intelligent designer swooped in and made the first cell whole. Ad hominem-laced strawman, ignited to cloud and confuse the issue, while polarising the atmosphere. "Spontaneous" in the relevant -- and fairly obvious -- sense means: Arising from a natural inclination or impulse and not from external incitement or constraint [AmHD]. That is, I have described in a nutshell the abiogenetic models that trace to chance + necessity creating life by themselves without intelligent constraint or direction. Autocatalysis, without accounting for the origin of genetic coding, metabolism and the like to get to a VN Replicator, is assertion rather than evidence. Similarly, I have made no reference to magic, just to the fact that there are two serious alternatives on the table for OOL:
(i) spontaneous formation under whatever favoured prelife models are applied, (ii) intelligent -- not magical or supernatural as such [OOL on earth on intelligent design would only imply the existence of someone at the relevant time with the technology to do a VN replicator on carbon-polymer molecular technology] -- cause acting to form FSCI and associated designed organisation.
As to required time vs. an "instant," the threshold for FSCI and the unlikelihood of its spontaneous origination is something like this: using up the 10^80 atoms of our observed cosmos from a reasonable big bang event and running forward for 10^25 s, the thermodynamically credible lifetime, we would only be sampling a maximum of 1 in 10^150 of the available states of 1,000 bits. 10^25 s is something like 50 million times the run to date on the usual cosmological timeline of 13.7 BY, and 1,000 bits, or about 130 bytes, is far and away inadequate to store the blueprint, algorithms and data structures required. Intelligent designers are known to routinely produce FSCI well beyond that level, and within rather brief timespans compared to 10^25 s. So, the real issue is being -- predictably -- dodged, distorted, derided and dismissed: inference to best causal explanation across chance, necessity and intelligence, on empirical evidence. 4] And therefore it must be the case the intelligent designer made it happen. So, tell us about that KF? Rather than launch into another monologue about shores of function etc etc just write down what you know for a fact about how life was created by the intelligent designer. M here tries to convert inference to best explanation on empirical evidence into an a priori assertion. Strawman distortion, again. And, instead of addressing the search space challenge squarely and fairly, he wishes to dismiss it. That should tell the astute onlooker the balance of the matter on the merits pretty well. 5] KF: setting up a string data structure — this is the de facto fundamental data structure — that has two layer significance, one reading forwards [the five-letter increment in Weasel] plus a backwards reading expression in English is — quite literally — interwoven multi-layer coding. M: Have you proven that such is impossible to evolve? No. Are you a biologist who's proven that such is impossible to evolve? No. So what is your point? Of course, I referred to the example on p. 1055 of the M & D IEEE paper, where on reversing (on a hint from Rob to Dieb) we may see, cf. 39 above:
20: mas-evolutionary-informatics ORIG: SCITAMROFNI?YRANOITULOVE?SAM. [has two initially correct Weasel sentence members]
21: Listen-are-these-designed-too ORIG: OOT?DENGISED?ESEHT?ERA?NETSIL. [adds 5 newly correct Weasel sentence members]
Folks, we have here the first winner of our annual "Welcome to Wales" lucky noise onion award! Random chance of course can always, in logical possibility, account for any apparently meaningful sequence of glyphs. But how many reasonable people, confronted by such evidence -- in the context of the known provenance of the EIL (it is not even a biological context, and the context shows exactly what was described: multilayer codes with interweaving . . . which of course just happens to also be present in DNA) -- will accept that the strings in question happened by chance rather than intent? On what grounds? In short: the problem here is that M plainly has not seriously reckoned with inference to best empirically based explanation in the context of islands of complex function in large configuration spaces. And that, sadly, tells us all we really need to know. ____________ GEM of TKI kairosfocus
Onlookers: Observe the beginnings of a quiet little drift away from the focal issue for the thread. Guess why. GEM of TKI kairosfocus
BTW, I added some thoughts on the algorithm Random Mutation (p. 1056) here. DiEb
Moseph:
Yet none of what you wrote pertains to anything that we know about the first replicator.
We don't know anything about the first replicator. And living organisms are much more than replicators. IOW, Moseph, you don't have anything. What we do know demonstrates it takes agency involvement just to get two nucleotides. It also takes agency involvement to create a nucleotide sequence to catalyze ONE bond. IOW, the blind watchmaker scenario for the OoL is in very bad shape. Joseph
Kairosfocus
Pardon, but your side-issue is becoming ever more evidently just that: distractive.
You are under no obligation to respond.
And, if you insist, kindly tell us about what we KNOW per empirical observations on simplest or first self-replicators
According to you, you know all about them. According to you it's known how complex they are. According to you even the number of bits that make it up is known, more or less. According to you the fitness landscape at the time is known. So what more can I add to what you've already stated as fact?
or current spontaneously -- no undue intelligent direction, please, including e.g. selecting homochiral solutions, esp without potentially cross-interfering reactants, and without trapping-out -- formed cases under credible early earth/early solar system warm pond or deep sea vent or cometary head conditions etc
To untangle this for onlookers, what KF means is "Show me an experiment set up to generate a self replicator where the experiment has not been set up at all". All "no undue intelligent direction" means is that any such experiment conducted can always be refuted by the simple tactic of "but it was set up by intelligent agents and therefore is invalid". It seems that KF is unaware that nature itself can provide environments with homochiral solutions and without potentially cross-interfering reactants. If those conditions are replicated in the lab then, as far as KF is concerned, you've proven his point for him. Onlookers, is this who you want to take your lead from on this? Someone who has already stated from the outset that any experimental methodology conducted outside of his parameters is invalid, when KF has not seen the inside of a lab since he was at university.
showing metabolic activity and genetic self-replication.
Do you think that if I had achieved self-replication in an organic lifeform from scratch, one that self-assembled, I'd be wasting my time talking to the likes of you? No, my Nobel would await. Again, onlookers, it's another rhetorical trick from KF: ask for something that you know does not exist and claim the non-delivery of such as a victory.
Autocatalysing molecules etc that do not spontaneously form and express code for metabolic machines and operations will not do.
If you were handing out grants then perhaps your restrictions would make some sense. As it stands, you can reject the results of such experiments if you like, but as already mentioned, if you want to ignore entire hierarchies of complexity that may have led to the first replication then be my guest. Onlookers, the reason KF wishes to ignore such things as autocatalysing molecules and networks is because he can then talk about the first cell as if it appeared all at once, and the probability of that is of course very low. What is more probable is that a network sits atop a network which sits atop a network, leading to the first replicator. KF wants to consider the first replicator on its own and claim how unlikely it is to exist without intelligent guidance. When you don't consider anything but the first cell then of course it's unlikely. Another rhetorical device.
Set up all the autocatalysing RNA strings you want. You will then have to get to such strings that CODE — including both algorithms and data structures in the context of spontaneous origin of such languages — for metabolisms
But I thought you knew all about the first replicator? Perhaps then you can tell us how your model gets to CODE, algorithms and data structures? How the language of metabolism arose under your model?
[including the known complex intermediaries known as enzymes] and associate themselves with readers and effectors, then account for a shift to DNA world, all without undue investigator interference.
What KF wants is a lab experiment that's set up to run all on its own that will generate life spontaneously. Putting aside the fact for now that even if that did happen he'd claim "investigator interference," it should be obvious to all that the only experiment that could possibly perform as he requires is one that has already happened, and it was a one-off. The Earth, billions of years ago, was the experiment. It's not reproducible without a) a pre-biotic earth, b) a lab the size of the pre-biotic earth. So, onlookers, although KF's claims seem reasonable on their face, in fact he is asking for the impossible. And on past form, he'd reject it even if it came to pass.
Of course, all of these patterns exhibit targeting and associated purposeful construction of complex multi-part entities that are irreducible on function as the entities.)
Of course they do.
As for the evolutionary materialistic magic of claimed spontaneous co-optation of parts that happen to be lying around,
The only person claiming the use of magic is you.
and resulting spontaneous emergence of complex functional structures through cumulative progress [each step being functional in itself! . . . talk about a Lewontinian just-so story!],
Are you a biologist by trade then?
have you ever tried to fit a claimed substitute/souped-up electronically active part for a car? (Mechanical, electrical and electronic/informational compatibility all have to be present. This is not at all a given.)
Cars do not sexually reproduce. You can make a part for a car that fits no other car in the world. Somewhat of a different story with biological entities, would you not agree?
Have you ever seen a house built up from parts in a hardware store hit by a hurricane?
No real biologist thinks that cells were made in such a way. Again, another rhetorical device to fool the unsophisticated onlooker into thinking that there is some science and probability behind KF's words. As cells do not arise in such a way, and nobody has ever claimed that they do (even for the first replicator), what relevance does your comment have, KF?
We do know that FSCI-bearing entities are routinely set up by intelligence, and for sixty years we have known how a self-replicating machine would have to be organised.
Nobody uses FSCI except you. Machines are not organic. Machines did not evolve from simpler machines via sexual reproduction, mutation and selection.
It is just a matter of technology to actually build one by intelligence. (E.g. We have self-diagnostic cars already, I would love a self-maintaining one, or close to self-maintaining one.)
You have provided no proof, other than your incredulity, that cells were designed.
And, setting up a string data structure — this is the de facto fundamental data structure — that has two layer significance, one reading forwards [the five-letter increment in Weasel] plus a backwards reading expression in English is — quite literally — interwoven multi-layer coding.
Have you proven that such is impossible to evolve? No. Are you a biologist who's proven that such is impossible to evolve? No. So what is your point?
Onlookers, this issue shows just how pernicious is the misleading impression created by Weasel through rewarding non-functional partially correct phrases on mere proximity to target.
Richard Dawkins' own words have been reproduced for you often enough on this topic that even you should have listened by now. Dawkins never claimed the example was without serious flaws; he acknowledged that it is not a great example precisely because it has a fixed target. Onlookers, don't be fooled by these misrepresentations. Simply obtain a copy of "The Blind Watchmaker" and see for yourself. The fact that you continue to harp on this as if you'd discovered a secret is telling. If you had a substantial criticism you'd have made it already. Despite the fact you've been corrected on this multiple times, you continue to repeat it. For shame!
It is the claimed spontaneous arrival of functionality based on complex, specific information that needs to be explained cogently, not the capability of cumulative intelligent design based on targets and warmer-colder hints on guesses.
And that's being explained. By people who know what they are doing. What you need to explain is how the intelligent designer made it happen. Why don't you leave the actual research (oh, you are) to the professionals? And again (another question for you to ignore) could you please tell me who is arguing that
claimed spontaneous arrival of functionality
it was spontaneous? Onlookers, "spontaneous" here indicates "it just happened" when in fact it's rather unlikely it did. As noted, networks of autocatalyzing chemicals no doubt had a part to play. There was little spontaneous about it, except that KF would like to make you think it happened by magic. In fact, KF has the version of events where things happened in an instant, from nothing. His intelligent designer swooped in and made the first cell whole. It's amazing how much you want to talk about the way you insist it did not happen, is it not, KF? I mean, there are many more ways it did not happen than did. You claim to know how it did happen. Why not talk about that for a change?
PPS: A von Neumann replicator is plainly irreducibly complex: no blueprint, no coded info and instructions. No code, no capability to communicate instructions or descriptions and specifications. No reader, no way to make sense of same. No effector, no ability to use the same to do the task in hand. No functionally correct spatio-temporal organisation corresponding to the instructions and data, and there will be no way for components to work together to achieve the process. No metabolism, no resources to do all of that. (Sub-components may in turn exhibit a similar irreducibility.)
And therefore it must be the case the intelligent designer made it happen. So, tell us about that, KF? Rather than launch into another monologue about shores of function etc. etc., just write down what you know for a fact about how life was created by the intelligent designer. Then we can compare that against the "just so stories" coming out of the labs of actual scientists researching the origin of life and see who has the most credibility. I'm waiting. Moseph
-kf, could you tell us how to apply eq. 22 to W1? Thanks! DiEb
Moseph: Pardon, but your side-issue is becoming ever more evidently just that: distractive. And, if you insist, kindly tell us about what we KNOW per empirical observations on simplest or first self-replicators [or current spontaneously -- no undue intelligent direction, please, including e.g. selecting homochiral solutions, esp without potentially cross-interfering reactants, and without trapping-out -- formed cases under credible early earth/early solar system warm pond or deep sea vent or cometary head conditions etc] showing metabolic activity and genetic self-replication. Autocatalysing molecules etc that do not spontaneously form and express code for metabolic machines and operations will not do. (That's a strawmannish bait and switch. Set up all the autocatalysing RNA strings you want. You will then have to get to such strings that CODE -- including both algorithms and data structures in the context of spontaneous origin of such languages -- for metabolisms [including the known complex intermediaries known as enzymes] and associate themselves with readers and effectors, then account for a shift to DNA world, all without undue investigator interference. Of course, all of these patterns exhibit targeting and associated purposeful construction of complex multi-part entities that are irreducible on function as the entities.) As for the evolutionary materialistic magic of claimed spontaneous co-optation of parts that happen to be lying around, and resulting spontaneous emergence of complex functional structures through cumulative progress [each step being functional in itself! . . . talk about a Lewontinian just-so story!], have you ever tried to fit a claimed substitute/souped-up electronically active part for a car? (Mechanical, electrical and electronic/informational compatibility all have to be present. This is not at all a given.) Have you ever seen a house built up from parts in a hardware store hit by a hurricane? We do know that FSCI-bearing entities are routinely set up by intelligence, and for sixty years we have known how a self-replicating machine would have to be organised. It is just a matter of technology to actually build one by intelligence. (E.g. We have self-diagnostic cars already, I would love a self-maintaining one, or close to self-maintaining one.) And, setting up a string data structure -- this is the de facto fundamental data structure -- that has two layer significance, one reading forwards [the five-letter increment in Weasel] plus a backwards reading expression in English is -- quite literally -- interwoven multi-layer coding. GEM of TKI PS: Onlookers, this issue shows just how pernicious is the misleading impression created by Weasel through rewarding non-functional partially correct phrases on mere proximity to target. It is the claimed spontaneous arrival of functionality based on complex, specific information that needs to be explained cogently, not the capability of cumulative intelligent design based on targets and warmer-colder hints on guesses. PPS: A von Neumann replicator is plainly irreducibly complex: no blueprint, no coded info and instructions. No code, no capability to communicate instructions or descriptions and specifications. No reader, no way to make sense of same. No effector, no ability to use the same to do the task in hand. No functionally correct spatio-temporal organisation corresponding to the instructions and data, and there will be no way for components to work together to achieve the process.
No metabolism, no resources to do all of that. (Sub-components may in turn exhibit a similar irreducibility.) kairosfocus
Atom: Okay, so it is Marks and Dembski. (That collaboration is getting seriously devious in their mathematically tinged devices.) No guzum ;) GEM of TKI PS: Dieb. You spotted more than I did. It is now quite clear that the example was a set-up illustration, and in the hidden context of a design inference on two-layer interwoven coding, one read backwards, the other front-ways. So, it served more than one didactic purpose! (And, odds become irrelevant in such a clearly decisional context. I would not like to calculate the odds of generating that particular doubly functionally specified string by chance! 1-in-70 or so odds are also not particularly easily observed, though it is possible: a "couple [= two] of runs" would most likely not be enough to see such a jump in the very first generation from the parent seed. As well, they have given a definition of ratcheting and illustrated it with what is plainly a didactic example, then gone on to a calculation for a simple case; they have not presented an algorithm as such.) Similarly, the calculation they gave can be extended to the sort of generational-seed-as-child-backstopped [or, is "dogged" better suited here -- this is the pawl in the ratchet], single-step-advance-child-ratcheting case that is relevant for implicit latching, by a "fairly simple" adjustment. This simple dynamical model then extends to the case where some advances are missed due to various masking effects. kairosfocus
-kf, R0b & W. Dembski 1: I had spotted the EVOLUTION in the first phrase, but hadn't thought much of it, as you can start the algorithm with any phrase. I didn't spot the DESIGN, and frankly, I didn't expect the second string to be designed... 2: ... though the probability to gain five or more correct letters starting from this first string is less than 1.5%, it could have been observed after a couple of runs 3: but perhaps R. Marks and W. Dembski didn't actually implement the algorithm. That could explain the lack of an accompanying picture, which the other examples have. 4: If Mr. Dembski is reading this: Why did you choose µ=0.00005 for fig. 2? Of course you wanted to be able to apply the equation of your appendix, but the expected number of generations for this parameter is ~55,500, while the best choice needs only ~10,600. And while the error of your elegant estimate is only 0.25% in the first case, at 2.4% it's not too bad in the second case either (at least according to my calculations). DiEb
kf, Hey, haven't been keeping up with UD posts. Wish I had thought of the codes, but can't claim credit for it. So no guzum, please sir. :) Atom Atom
kairosfocus, their math states that the probability of hitting the target in Q iterations is (1-(1-(1/N))^Q)^L. That math is correct only if an iteration consists of holding the correct letters fixed and randomly changing all incorrect letters. I see no way for those conditions to obtain in an "implicit partitioning" scenario. R0b
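(To make R0b's numbers easy to check: the sketch below is an illustration of the expression exactly as he states it -- not code from the M & D paper -- evaluated for the usual Weasel parameters, N = 27 letters and L = 28 positions. It uses Exp/Ln since standard Pascal has no power operator.)

Program PartitionedOdds;
(* Sketch only: evaluates (1 - (1 - 1/N)^Q)^L, the chance that a
   partitioned search has matched all L letters within Q queries. *)
Const
  N = 27; (* alphabet size *)
  L = 28; (* phrase length *)
Var
  Q: Integer;
  PLetter, PPhrase: Real;
Begin
  Q := 10;
  While Q <= 60 Do
  Begin
    (* Chance that a given letter has been fixed by query Q *)
    PLetter := 1.0 - Exp(Q * Ln(1.0 - 1.0 / N));
    (* Chance that all L letters have been fixed *)
    PPhrase := Exp(L * Ln(PLetter));
    Writeln('Q = ', Q:3, '  P = ', PPhrase:8:5);
    Q := Q + 10;
  End;
End.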
But in every case we are orders of magnitude beyond the 1,000 bit threshold where the resources of the cosmos are hopelessly inadequate to provide enough configs to have a credible chance of getting to a bit of FSCI.
Yet none of what you wrote pertains to anything that we know about the first replicator. Do you really think that looking at currently existing unicellular life tells us a single thing about the first replicator? To give you an example. If you looked at the first motorcar would you conclude that the first motorcar was representative of the first example of locomotion? Would you knock out parts of that first car and conclude when it stopped working "ah - this is the minimum number of parts needed to travel"? Or rather would it be the case that the car was the pinnacle of a different hierarchy of technology, and only by looking at that hierarchy could you come to a full understanding of how the first car came to be. And so it is with the first replicator. By looking only at today's "car" you miss the hierarchy hidden away that allowed the car to come into being. It's then ironic that you say
Note, we here deal with empirical reality, not just-so RNA world or imaginary subsea vent sulphur worlds etc: provide observed life forms carrying out independent metabolism much below that band and I will accept them.
Yet empirical reality notes you know nothing at all about the first replicator. And what do you mean by "imaginary subsea vent sulphur worlds"? Such worlds cut off from the sun are not imagination, as much as you might wish it. They are empirical reality. And what does it matter what you accept as a possibility? You are not actively researching in this space, you are not involved except as a bystander. So, be convinced or not it matters not at all. To recap. You know nothing about the first replicator and can only provide "just so" stories about it where you insist it must have a given level of complexity that is impossible to occur naturally, despite only knowing about currently existing organisms and not knowing (nor caring it seems) about such possibilities as undersea vent environments, rich in energy and chemical mixing.
AND, 150 bytes is way too narrow to code blueprints, code for replicating algorithms AND for metabolism — even with the sort of code layering trick that M & D pulled on us all.
Only you could call some words spelled out backwards a "code layering trick". And again, you talk about bytes, "replicating algorithms AND for metabolism", as if you know something about the conditions surrounding the first replicator. You do not. Moseph
Moseph: I do not have much time just now. 1 --> We look at unicellular life and look for when knockout studies lead to disintegration of life function. 300 - 500 k bases drops out; just double to get bits. 2 --> This is an observational hard point, and the parasitic forms down to about 200 k bits show that below the threshold we run into problems of not doing all the biochem work required. 3 --> Note, we here deal with empirical reality, not just-so RNA world or imaginary subsea vent sulphur worlds etc: provide observed life forms carrying out independent metabolism much below that band and I will accept them. 4 --> But in every case we are orders of magnitude beyond the 1,000 bit threshold where the resources of the cosmos are hopelessly inadequate to provide enough configs to have a credible chance of getting to a bit of FSCI. 5 --> AND, 150 bytes is way too narrow to code blueprints, code for replicating algorithms AND for metabolism -- even with the sort of code layering trick that M & D pulled on us all. 6 --> So, kindly keep the eye on the ball. ___________ ATOM: You haven't confessed yet, and I might just pull up that old bun dem food guzum! ;) GEM of TKI kairosfocus
Rob: You give an interesting historical note. Basic problem: partitioning can happen implicitly or explicitly, i.e. it is equivalent to ratcheting and associated latching. (Partitioning looks at the issue from the facet where, once a letter goes correct in a given generation champ, it is effectively in the correct bin and does not fall back out. Latching looks at the lock on the bin, and ratcheting at the way new letters are dropped in.) GEM of TKI kairosfocus
Correction @ 58 -- I think the paper actually goes back to 2007, but May 2008 is the earliest dated copy that I have. R0b
Kairosfocus
First life took ~ 600 – 1,000 k bits of genetic info
How on earth can you possibly know that? Why are your error bars so high? 600 to 1,000? What can you do to reduce those error bars? What is your methodology? If you are basing this on currently existing life and using some sort of process to work backwards billions of years in time, then how exactly do you know when you've got it right? As nobody whatsoever knows anything at all about "first life", I find it amazing that you can proclaim such with complete confidence. Tell me, Kairosfocus, do you have a sample of this "first life"? You seem to know so much about it.... If you are in fact basing your claims on what is currently known about simple lifeforms and the amount of information in them, then are your claims not just "just so" stories with no basis in fact? Exactly like the "just so" stories about evolution routinely decried here. If you know so much about the origin of life, why don't you write up a proposal and get the Biologic Institute to research it? Prove that it is impossible without intelligent design. You seem to have already proven the case in your own mind, now why not try and convince some other people outside the little circle here? I.e. actual scientists. Of course, if you don't believe your ideas would stand up to some serious scrutiny then perhaps it's best you don't do that. Moseph
kairosfocus, recall that the original WeaselWare consisted only of the partitioned search. Atom coded it according to Marks and Dembski's understanding of WEASEL. It was in response to comments on this forum that Atom created WeaselWare 2.0 this past spring. Note also that the paper in question preceded WeaselWare 2.0 by about a year. The May 2008 version of M&D's paper has a section on partitioned search that is essentially identical to that of the current version, with the same reference to TBW. And note, finally, that the EIL's Weasel math page (not written by Atom) still says that Dawkins' algorithm is a partitioned search, and presents the same math as in the paper. In summary, it's clear that M&D think (or thought) that Dawkins' algorithm in TBW was, in fact, the partitioned search that they describe in their paper. R0b
kairosfocus
EIL presented a cluster of alternatives covering the bases, including explicit and implicit latched cases and cases that show the single-step approach.
Which one represents a non-partitioned search with no "explicit" letter latching? Which is the Weasel you get if you implement it as described, line by line, from TBW, adding nothing that is not described? Which one is it? Is it there? Moseph
Moseph: 1] EIL presented a cluster of alternatives covering the bases, including explicit and implicit latched cases and cases that show the single-step approach. Guess which ones converge in a reasonable time, and why. 2] I and others are long since on record, in, say, the WACs and no. 28 above, that a useful rule-of-thumb heuristic threshold is 500 - 1,000 bits as the border of functionally specific information not credibly achievable by chance-based processes on the gamut of the observed cosmos. First life took ~ 600 - 1,000 k bits of genetic info, if it is at all comparable to observed simple life forms not dependent on other life forms for vital nutrients. Novel body plans take, on evidence, 10 - 100+ million bits. (You would do well to read the WACs.) 3] To understand this, 1,000 bits corresponds to ~ 10^301 configs, or ten times the square of the number of states the atoms of the observed universe would go through across their thermodynamically credible lifespan, or ~ 50 million times longer than the timeline back to the usual date for the big bang. (And only a very small portion of the cosmos will form zones fitted for emergence of life.) 4] In short, I am not making a "probability" challenge but a "search tantamount to no search" challenge. The cosmos simply will not exhaust sufficient of these states to be significantly different from a search of zero scope. So, 500 - 1,000 bits is a reasonable upper threshold for CV -- to get to the variation in the first instance. 5] For 1st life that is about 150 bytes of information, barely enough to sneeze and certainly not enough to write out a blueprint for a self-replicating von Neumann machine's parts, much less the code to execute the replication. GEM of TKI kairosfocus
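(A quick check of the arithmetic in 3]: 1,000 bits means 2^1,000 configurations, and 2^1,000 = 10^(1,000 x log10 2) ≈ 10^301.03, i.e. about 1.07 x 10^301 -- the ~10^301 figure quoted.)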
PS: Rob, my point on M & D of EIL is that, given the EIL GUI's multiple algors, trying to pin them down to just one algor claimed to be fundamentally incompatible with the description of Weasel c. 1986 is just a bit strawmannish. You will note I started from how that claimed "run" looked fishy [save as a didactic illustration], and you have given me reason to see that it was really and truly fishy indeed. kairosfocus
Kairosfocus
anything reasonably claim-able for chance variation and natural selection of living, reproducing forms of life.
What can reasonably be claimed for chance variation and natural selection? For you, what is the limit? Moseph
Kairosfocus
I pointed out that EIL hosts several different algors,
Thank you for the tip. Could you tell me which one represents "Weasel" as described in TBW? Moseph
Rob: I owe you one. Appreciated. Weird: manipulating spaces, reading in reverse from p. 1055 of the IEEE paper:
21: Listen-are-these-designed-too
20: mas-evolutionary-informatics
Never saw that before! (Talk about functionally specific and complex information that is contextually responsive . . . And, of course, I am not exactly a word search fan. How much time did M & D spend on planting that little interwoven meaning time-bomb? ATOM: Fess up!!!! [Or I will work a "guzum" to make the Luminous One burn your dinner for a week! ;) Just joking! Give her our love.]) There is enough present to see that they do math on a 100% mut rate on non-correct letters, and they of course are speaking of a context in which correct letters latch. As we have seen and demonstrated practically, once we have no-change cases with high enough odds in the generation, and single-change cases dominate otherwise, a significant fraction of the runs will latch implicitly and ratchet to the target due to the proximity-reward filtering process that selects the next generation champion. The side issues are to all intents and purposes over. And on the main one, a targeted search rewarding increments on mere non-functional proximity is fundamentally dis-analogous to anything reasonably claim-able for chance variation and natural selection of living, reproducing forms of life. GEM of TKI kairosfocus
Moseph: I have no wish to go down the latest red herring track, having shown that the likely form of the original Weasel is compatible with implicit latching and ratcheting. I have also long since -- April 9, 2009 -- shown that on the per-letter mutant understanding, implicit latching-ratcheting is demonstrated. In short, the evidence of the published, showcased Weasel 1986 runs is now accounted for: on (i) targeted, proximity-rewarding search, and (ii) the apparent latching or at least quasi-latching effect in the showcased runs. The first of these directly implies a fundamental dis-analogy to the claimed natural process, which has to have arrival of complex information-based function before we can get to select on competitive reproductive success of sub-populations. The second is simply a minor puzzle that has significance only insofar as it helps point to the first. GEM of TKI PS: On the latest side issue, I have offered evidence and principles on preponderance of evidence that point to the probable author of the program, c. 1986. This is not an absolute proof (and it is on a secondary or even tertiary matter), but it is enough for prudential decision. PPS: As to the inference to and snide accusation of child abuse, this is of course without evidence. The person offering it, as if it were comparable to the case that on balance of evidence W1 and W2 are probably authentic, should reflect on the point noted in 42 [which also shows that a per-child phrase interpretation is unexpectedly subtly and deeply compatible with the statement in BW], in light of the further provenance that credibly is Oxford [where one would expect materials originating in Oxford to be] and the nature as a c. 1980's "simple" Pascal program implementing Weasel-like algorithms [not a popular exercise at that time], compatible with the 1986 showcased result for W1 and the distinctive features of the 1987 BBC Horizon programme video for W2. I note that someone above says that in 1980s Turbo Pascal, the programs behave as expected [recent composition being fairly unlikely]. Mr Elsberry's response is that it is not biological enough -- but targeted search rewarding non-functional phrases on mere proximity [as is explicitly acknowledged as a "cheat"] is fundamentally a-biological to begin with. Mr Dawkins' statement so far seems to be "I have no recollection." Let's see if more evidence emerges, and which way it will tip or re-tip the balance: e.g. a clear repudiation by Mr Dawkins (which will have to resolve the "I do not remember" above). PPPS: Dieb, I pointed out that EIL hosts several different algors, and that a case where we move from two correct to seven correct letters in the first step is rather unlikely to come from a real run of a real algorithm. It is, however, very compatible with a didactic illustration; though in principle it is logically possible. kairosfocus
Dieb, I'm not sure if you're joking or not, but the example is obviously made up. (Read the sequences backwards.) But you are correct that the algorithm is described in the text, and any chance of ambiguity is eliminated by the math. R0b
kf & Mr. Dembski kf claims - re the example which is stated on p. 1055 of your (and R. Marks's) paper - that
M[arks] & D[embski] do not describe an algorithm, they give an unrealistic illustration.
while I think that your description is clear enough to implement an actual program, and that the example you gave is a run of such a program. Could you please tell us who is right? DiEb
Based on the comments of AndrewFreeman, I personally lean toward these programs being genuine, although I don't lean very far in that direction. I think that, contrary to the last sentence of Dawkins' response, there are more things that he could say about it. For instance, he could tell us whether it was he or someone else who coded the program seen in the BBC video. R0b
Mr Dembski, Given Mr Dawkins' comments and Wesley R. Elsberry's comments (as posted by him publicly, which I reproduce below as linking to the site where they appear has been forbidden by the site moderator) have you now decided that these 2 versions of Weasel are to be treated as the originals or not?
I sent a response to a few of the folks on the list, including Dembski and Dawkins: The first seems unlikely due to the section following: " (* Each Copy Gets a Mutation *)" Putting mutation on a per-copy basis rather than per-base would be rather unlike the biology. The second shares the same fault, though coded somewhat differently: " (* Darwin *) OFFSPRING := CURRENT; OFFSPRING[ 1 + RANDOM(LENGTH(OFFSPRING)) ] := RANDOMLETTER; " The fact remains that "weasel" implementations were not based on "partitioned search" as claimed in Dembski and Marks' recent paper, a point that Dembski implicitly concedes by his attempted elevation of these two programs without provenance, and further, that other "weasel" style programs can illustrate the point at argument in "The Blind Watchmaker" while allowing a small finite chance of mutation at every base or symbol in the generation of new candidates. Wesley
Moseph
I've sent the two programs to Richard Dawkins so that he can either confirm or disconfirm their authenticity. I heard back from him. The relevant portion of his email for this discussion reads: "I cannot confirm that either of them is mine. They don't look familiar to me, but it is a long time ago. I don't see what more I can say." William Dembski
Kairosfocus
But mere announcements and claims that the code above is not CRD’s code are not enough for that.
Erm, I'm sorry but there is no evidence whatsoever that the code above was written by CRD. None whatsoever. The claim that it *is* his code is the one which needs to be supported.
Unless Richard Dawkins and his associates can show conclusively that these are not the originals
Unless Kairosfocus can conclusively show that he does not beat small children, then we can only assume that he does so, and on a regular basis, and with considerable relish. See, it's not really a fair way to make a point, is it? Moseph
Kairosfocus
Also on what I have seen of Weasel c. 1986 as described in BW, there is in fact [cf. above] no specification that there is a per-letter application of a probability filter to mutate.
Then what specification was given in BW regarding per-letter mutation rates? Is it or is it not your position that each candidate string can only have a single letter mutated, then, as described in BW? Moseph
Dieb: You have shown, according to your own data, that something like 1 in 200 runs at the 100-population level will NOT show an implicit latching effect. (This fits in with the rising odds of at least one no-change backstop child being present per generation as pop size rises. [And yes, there are various holes in the algor underlying the presented program -- "uncovered" possible cases.]) Thus, the sort of runs showcased in BW and New Scientist are very observable on the circumstances in the newly posted W1. So also, with the desire being to showcase "cumulative selection," and those odds, what would be the likelihood that a run that did not latch implicitly would be chosen? My guess: not very high at all. GEM of TKI PS: As to the claimed divergences of algorithms etc., what I will say is that the descriptions to date in BW etc. do not sufficiently specify an algorithm to exclude the sort of program we are seeing. And, once we are looking at something that is so likely to latch [implicitly] and so to ratchet with rather low odds of slipping on the "dog," an analysis on latching is a reasonable thing to do. Similarly, the set of algors at EIL makes it rather strawmannish to insist that the IEEE paper is presenting a "the" M & D algorithm on p. 1055. kairosfocus
Moseph: You will see that all that needed explaining was that in some runs we will get implicit latching and ratcheting to target of generational champions. There are two candidate mechanisms, explicit and implicit latching-ratcheting. Implicit latching and ratcheting just has to be sufficiently common to be observable, and in a context where it would likely be showcased if observed. And BW c. 1986 is such a context. Also on what I have seen of Weasel c. 1986 as described in BW, there is in fact [cf. above] no specification that there is a per-letter application of a probability filter to mutate. A per-phrase application of a single mutation would work on the description I have in hand. Indeed, zooming in, I find the following excerpt very interesting:
>> it [Weasel] duplicates it [the seed phrase for a generation] repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying . . . >>
a --> Duplication of the seed phrase looks literally accurate to the code for W1.
b --> Application of a certain chance of error to the copying suggests, on closer inspection, a phrase-wise mutation event.
c --> The showcased runs show as well that about 1/2 the time, no change wins, in a context where we should likely see about 1 in 50 generations uncovered by a no-change backstop at 100 per generation, which would be what blocks reversion for the other 49/50 or so in the gen champs [the arithmetic is checked just after this comment].
d --> And single-step advances predominate otherwise, in a context where the code above would impose no more than one such change per mutant.
e --> So some runs, at rates sufficient to be observable over repeated runs, will implicitly latch, once pop is set to a reasonable level.
Of course if you have information that tells us otherwise, credibly, that shifts the balance on the evidence. But mere announcements and claims that the code above is not CRD's code are not enough for that. And, the above code (with appropriate pop levels, the mut rate now being fixed as reasonably low) will implicitly latch often enough to show the point, with quasi-latching predominating otherwise. GEM of TKI kairosfocus
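(The "about 1 in 50" figure in point c above can be checked directly, on the one-forced-mutation-per-copy reading of W1: a copy comes through unchanged only when the randomly drawn replacement letter happens to equal the letter it overwrites, probability 1/27. So the chance that all 100 copies in a generation are changed -- i.e. that no unchanged "backstop" child exists -- is (26/27)^100 ≈ e^(-100/27) ≈ 0.023, roughly one generation in 43.)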
Kairosfocus, It seems to me this set of Weasels is not "the" Weasel, for a few simple reasons. Mutation is on a per-copy basis rather than a per-base basis. The original Weasel as described in TBW allows for mutations in every base during the generation of new candidates. These Weasels do not. An interesting analysis here http://dieben.blogspot.com/ Dieb notes
So, even with a generation size of one hundred children, one in two hundred runs will show that this algorithm doesn't latch - it's not what W. Dembski and R. Marks describe as a partitioned search.
So, not Weasel after all then... Moseph
Okay: Late for the party -- busy elsewhere (and my ISP seems to prioritise phone over Internet service on quality . . . ). Looks like someone, it seems from Oxford, has in fact won Contest 10. Let's see, therefore, where we have come out (and of course the below is subject to correction, esp. on my reading of the Pascal code!): 1] It seems we have two credible "original" Weasels, thanks to an anonymous donor, it seems at Oxford. 2] Provenance is thus about right, chain of custody is reasonable, and there are no signs of obvious fraud, so on the Ancient Documents Rule -- failing credible explanation otherwise [i.e. the burden of disproof is now on those who would reject the programs] -- it seems on preponderance of evidence these are the right "original" pgms, PASCAL version at least. (The BASIC version would be interesting . . . ) 3] Surprise -- not -- TWO versions: W1 seems to be what was in the book (and New Scientist) and W2 seems to be the version in the 1987 BBC Horizon video. 4] W2 is indeed significantly different from W1 (despite many expectations to the contrary on the part of Darwinists), and has an entirely different dynamic, one that is set up for video; as was suspected. (And W2 does not seem to have generational clustering.) --> Diverse performance is accounted for . . . 5] As expected, both W1 and W2 are targeted search, rewarding plainly non-functional strings on mere proximity to target. Target-proximity is what "fitness" is in W1 and W2, measured on a letter-wise comparison to the target:
W1: [Set target:]

Target:Text='METHINKS IT IS LIKE A WEASEL';

[ . . . . ]

[Measure proximity:]

(* Is This the Best We've Found So Far? *)
If SameLetters(Child, Target) > Best Then
Begin
Best_Child:=Child;
Best:=SameLetters(Child, Target);
End;
End;
Parent:=Best_Child;
(* Inform the User of any Progress *)
Writeln(Generation, ' ', Parent);
Generation:=Generation+1;
End;
End.

W2: [Set target:]

CLRSCR;
WRITELN('Type target phrase in capital letters');
READLN(TARGET);
(* PUT SOME STRING ON THE SCREEN *)
TEXTCOLOR(GREEN);
GOTOXY(1, 6);
WRITELN('Target');
GOTOXY(10, 6);
WRITELN(TARGET);

[ . . . . ]

[Measure proximity:]

(* MEASURE HOW SIMILAR TWO STRINGS ARE *)
FUNCTION SIMILARITY(A : STRING; B : STRING) : INTEGER;
VAR
IDX : INTEGER;
SIMCOUNT : INTEGER;
BEGIN
SIMCOUNT := 0;
FOR IDX := 0 TO LENGTH(A) DO
BEGIN
IF A[IDX] = B[IDX] THEN
SIMCOUNT := SIMCOUNT + 1;
END;
SIMILARITY := SIMCOUNT;
END;
6] Thus, "fitness" a la Weasel is completely dis-analogous to fitness of life forms: life forms must function on highly complex, algorithmically specific information at cellular levels to live and reproduce; and life forms per NDT do not cumulatively progress to a preset optimum target point. 7] Thus, Dawkins' acknowledgement of a "cheat." (And, W1 and W2 gain over what he dismissed as "single-step selection" [i.e. what Hoyle, Schützenberger et al have pointed to: the need for complex information-based function before fitness "arrives"] by using active, designer-input target information and a measure of hotter-colder.) 8] Thus also, Weasels W1 and W2 both fall under the principal concern that, as targeted searches that reward mere proximity in the absence of complex function, they are fundamentally dis-analogous to any claimed capability of Darwinian evolution by chance variation plus probabilistic culling on relative fitness. 9] Now also, W1 plainly does not explicitly latch and ratchet on already successful letters. (W2 will not latch at all, but this is already known to be different from the showcased runs c. 1986.) 10] However, W1 seems set up to do two things: (i) to give a generation of size 100, and (ii) to force just one mutation per member of the population. (Recall, 1 of 27 times, a mutation will return the same original value.) 11] This means that odds are about 98% that there will be non-changed members of the pop, and that if the best is that, it will be passed down to the next generation as this gen's champion and seed for the next gen. 12] Already, double or triple etc. mutation effects have been eliminated by the algorithm, so in at least some runs we will likely see preservation of achieved characters plus increments of one character. [The typo suggestion seems good enough on the claimed double change.] 13] In short, implicitly latched runs are possible, and if these were seen c. 1986 as giving "best" results on "cumulative selection," they would credibly be showcased. 14] Similarly, quasi-latched runs are possible, with occasional (relatively infrequent) reversions. {I think this case will predominate in the pop of runs.} 15] On W1 (the relevant case for the showcased o/p c. 1986), far-from-latched runs are unlikely. _______________ Thus, after months, we see that while explicit latching was credibly not used in Weasel c. 1986, implicit latching is a possible explanation of the showcased runs of Weasel c. 1986, and quasi-latched runs (ratcheting with occasional slips) are likely to predominate in the population of runs of the program. Perhaps we could get some sample runs from the gentleman with 1980's era Turbo Pascal? GEM of TKI PS: To remind us, here are the showcased runs c. 1986: _________________ >> We may conveniently begin by inspecting the published o/p patterns circa 1986, thusly [being derived from Dawkins, R, The Blind Watchmaker, pp 48 ff, and New Scientist, 34, Sept. 25, 1986, p. 34; HT: Dembski, Truman]:

1 WDL*MNLT*DTJBKWIRZREZLMQCO*P
2 WDLTMNLT*DTJBSWIRZREZLMQCO*P
10 MDLDMNLS*ITJISWHRZREZ*MECS*P
20 MELDINLS*IT*ISWPRKE*Z*WECSEL
30 METHINGS*IT*ISWLIKE*B*WECSEL
40 METHINKS*IT*IS*LIKE*I*WEASEL
43 METHINKS*IT*IS*LIKE*A*WEASEL

1 Y*YVMQKZPFJXWVHGLAWFVCHQXYPY
10 Y*YVMQKSPFTXWSHLIKEFV*HQYSPY
20 YETHINKSPITXISHLIKEFA*WQYSEY
30 METHINKS*IT*ISSLIKE*A*WEFSEY
40 METHINKS*IT*ISBLIKE*A*WEASES
50 METHINKS*IT*ISJLIKE*A*WEASEO
60 METHINKS*IT*IS*LIKE*A*WEASEP
64 METHINKS*IT*IS*LIKE*A*WEASEL >>

________________ PPS: And, this is Mr Dawkins' commentary in BW (with my remarks in parentheses): ____________ >> It [Weasel] . . . begins by choosing a random sequence of 28 letters [which is of course by overwhelming probability non-functional] … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense [= non-functional] phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target [so, targeted search] phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection [cumulative implies progress by successive additions, and in the context of the showcased runs gives rise to the implications of latching and ratcheting, which is -- your blanket denial in the face of frequently presented detailed evidence notwithstanding (so you either know or should know better) -- demonstrated to happen implicitly as well as explicitly], and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection [dismissive, question-begging reference to the requirement of function for selection] . . . more than a million million million times as long as the universe has so far existed [i.e. acknowledges the impact of intelligently injected purposeful, active info on making the otherwise practically impossible become very feasible] . . . . Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection [in more ways than one!], it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection . . . In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. >> ____________ kairosfocus
Oops, CJYman beat me to it -- once again... Joseph
Cumulative selection implies a target. Otherwise cumulative is meaningless. Joseph
Hello ROb, You state: "WEASEL is like Darwinian evolution in one respect (selection acts cumulatively) and unlike it in another respect (there is a long-term target). Dawkins is very careful to point this out. I hope we can all agree that that one aspect of something can be illustrated without all aspects being illustrated." I see exactly what you are saying here and to a point I do agree. It does seem that Dawkins was merely showing the difference between cumulative selection and a random search. To me, that is a trivial/obvious observation. Yes, he was showing that cumulative selection, which is supposed to be one of the driving forces in Darwinian evolution, performs better than random search. However, as to "one aspect of something can be illustrated without all aspects being illustrated," I have to disagree in this case. The question still remains ... "will cumulative selection operate without a long term target; and if so, for how long will cumulative selection operate without that long term target?" If Dawkins is defining Darwinian evolution as inherently without a target, then he is going to have to show that cumulative selection can operate without a target to even show that cumulative selection can indeed be a part of Darwinian evolution. IOW, does Darwinian evolution, being defined as cumulative selection without a target, even exist? Any evidence anyone? CJYman
SteveB, WEASEL is like Darwinian evolution in one respect (selection acts cumulatively) and unlike it in another respect (there is a long-term target). Dawkins is very careful to point this out. I hope we can all agree that one aspect of something can be illustrated without all aspects being illustrated. And I find it interesting that some ID proponents have no problem with the idea that the mainstream model of evolution is targetless, when such a position renders the work of Marks and Dembski irrelevant to biology. R0b
The theory:
"Adopting this view of the world means accepting not only the processes of evolution, but also the view that the living world is constantly evolving, and that evolutionary change occurs without any ‘goals.’ The idea that evolution is not directed towards a final goal state has been more difficult for many people to accept than the process of evolution itself.” (Life: The Science of Biology by William K. Purves, David Sadava, Gordon H. Orians, & H. Craig Keller, (6th ed., Sinauer; W.H. Freeman and Co., 2001), pg. 3.)
“The ‘blind’ watchmaker is natural selection. Natural selection is totally blind to the future....” (Richard Dawkins quoted in Biology by Neil A. Campbell, Jane B. Reese. & Lawrence G. Mitchell (5th ed., Addison Wesley Longman, 1999), pgs. 412-413.)
“Nothing consciously chooses what is selected. Nature is not a conscious agent who chooses what will be selected.... There is no long term goal, for nothing is involved that could conceive of a goal.” (Evolution: An Introduction by Stephen C. Stearns & Rolf F. Hoekstra, pg. 30 (2nd ed., Oxford University Press, 2005).)
The application of the theory:
We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL. … What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed. (Richard Dawkins, quoted at http://en.wikipedia.org/wiki/Weasel_program)
Clearly, goals and targets are explicitly not allowed in evolutionary theory... unless Dawkins wants to write a computer program that won't work without one. I suppose the defense of any point of view is relatively easy if the apologist has the freedom to abandon his presuppositions whenever they become inconvenient. This is the real issue; all the arcane programming details are irrelevant. SteveB
"The burden of proof is on Dawkins to show that these aren’t the originals." Um, how do you figure that? He never claimed they were. And it doesn't matter anyway. The Weasel program was designed to show that . . . . oh, nevermind! :-) :-) We're never going to settle this! ellazimm
Expected number E[N] of correct letters after ten iterations (mut. prob. µ = 4%):

pop. size   E[N]
 10          4.02
 20          5.89
 50          8.83
100         10.67
200         11.74
500         12.82

I hope that these numbers fit Andrew Freeman's observations. DiEb
--AndrewFreeman, interesting observation: To come up with some numbers, I modeled the Weasel in its various forms as a Markov chain. Here are the probabilities for getting at most 10 characters right in 10 generations, using a mutation rate of 4%:

pop. size   probability
 10         99.99%
 20         99.68%
 50         87.60%
100         46.44%
200         15.23%
500          2.85%

Here, I allowed for a random first string. So, even with 200 children in a generation, this event should occur fairly regularly. DiEb
200 children per generation combined with a 4% mutation rate does produce a similar number of generations to convergence as recorded in the book. However, in all of my runs of that scenario, more than 10 characters are fixed within ten generations. As shown earlier, the runs in the book fix no more than ten characters in ten generations. I suspect that it's highly improbable to remain below the ten-generation, ten-changes limit when multiple mutations are included. Early on, multiple mutations should have an advantage, since most of the letters are incorrect and they have double the chance of finding a correct letter. AndrewFreeman
DiEb: You suggest a combination of 10 children with a 4% mutation rate to explain the video's performance. Given both of our results, that would be a long run for those parameters. As I'm understanding your comment, you suggest that he ran the program repeatedly during the interview to get a long run. Are you suggesting that the BBC filmed lots of short runs of the program and then just kept the one run that did what Dawkins wanted? I find that dubious... The Weasel2 program is different from the Weasel1 program in that it has no generations. Instead it generates a single child and either accepts or rejects it. (As a side note to the powers that be here, posting code where the whitespace has been eliminated is evil and not to be tolerated). The result is that there is a lack of parameters to tweak. There is no number of generations to pick or a mutation rate to specify. Nevertheless, my reimplementation of it (available upon request, as before) seems to come up with a similar number of tries as the video records. Actual analysis would be good, but I'm a coder, not a mathematician. AndrewFreeman
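(For readers who want to see what such a no-generations scheme looks like, here is a minimal sketch -- a reconstruction for illustration only, not Dawkins's or Oxfordensis's code. Each try mutates one random letter of the current string, and the child is kept whenever it matches the target at least as well, ties going to the newer string; the counter reports tries, not generations, which is the quantity being compared against the video.)

Program HillClimbWeasel;
(* Sketch only: WEASEL2-style search with no generations. Each try
   mutates one random letter of the current string and keeps the
   child only if it matches the target at least as well. *)
Const
  Alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
  Target = 'METHINKS IT IS LIKE A WEASEL';

Function Score(S: String): Integer;
(* Count the letters agreeing with the target *)
Var
  I, N: Integer;
Begin
  N := 0;
  For I := 1 To Length(Target) Do
    If S[I] = Target[I] Then N := N + 1;
  Score := N;
End;

Var
  Current, Child: String;
  I, Tries: Integer;
Begin
  Randomize;
  (* Random starting string *)
  Current := '';
  For I := 1 To Length(Target) Do
    Current := Current + Alphabet[Random(27) + 1];
  Tries := 0;
  While Score(Current) < Length(Target) Do
  Begin
    Child := Current;
    Child[Random(Length(Child)) + 1] := Alphabet[Random(27) + 1];
    If Score(Child) >= Score(Current) Then
      Current := Child; (* ties go to the newer child *)
    Tries := Tries + 1;
    Writeln(Tries, ' ', Current);
  End;
End.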
I had a friend run both WEASELs in 1980s Turbo Pascal, and the programs worked just fine. The burden of proof is on Dawkins to show that these aren't the originals. kibitzer
I think the statements put forward by Diffaxial do support a multi-mutation interpretation, but only weakly. Dawkins wasn't precise about the details of his algorithm throughout, so it's not really a surprise if that imprecision continues here. In any case, the procedure described is still followed in the code. I think the lack of any multiple mutations in the data (as shown by my earlier comment) is considerably stronger evidence. It is true that the first child to improve will end up being selected. But ties have to be broken in some arbitrary way regardless. It is true that the loop could be exited early, but I fail to see how what could have been done is relevant. At the end of the day, the code still goes through all of the progeny and selects the one most similar to the target. AndrewFreeman
AndrewFreeman, thanks for taking the time to port the code and run it. Either the code is genuine, or someone has put a lot of thought into this hoax. R0b
In the text Dawkins states, "The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL" (pp. 47-48, emphasis in the original). Again on page 49: "There is a big difference, then, between cumulative selection (in which each improvement, however slight, is used as a basis for future building), and single-step selection..." These passages suggest that there are degrees of resemblance possible between target and child - from "slight" to greater than slight. However, as coded above with a single-letter mutation, in any one generation there is only one possible degree of increased resemblance (one additional letter matches the target). The code above therefore seems unfaithful to the text. This matters less, but it also occurs to me that the probability that a given child will be selected in a given generation reflects the order in which the children are generated and examined. In WEASEL1 the first child to demonstrate improvement becomes the winner. Therefore the generation/examination loop may be exited as soon as a single improved child is generated/detected. That again seems quite at odds with the intent of the text. (In WEASEL2, ties go to the child examined later.) Coding a version in which every letter is exposed to some probability of mutation solves the first of these problems. It would also require that every child be examined in every generation. Diffaxial
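A sketch of the variant Diffaxial proposes: every letter exposed to some probability of mutation, and every child generated and examined in every generation, so that a child can gain (or lose) more than one matching letter at a time. The 4% rate and 100 copies are illustrative guesses, not figures from the book:

import random

ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ '
TARGET = 'METHINKS IT IS LIKE A WEASEL'

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(parent, rate=0.04):
    # Every letter is exposed to the same small probability of mutating.
    return ''.join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = ''.join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # Every child is generated and examined in every generation.
    children = [mutate(parent) for _ in range(100)]
    best = max(children, key=score)
    # The improvement can be 0, 1, 2 or more letters -- or negative.
    print(generation, score(best) - score(parent), best)
    parent = best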
BillB:
I can’t imagine why Dawkins would limit mutation so it can only EVER apply to one letter – it doesn’t seem very biological.
If you want biological, then you have to keep the mutation rate below 1%. And with a mutation rate below 1% and a population of 100, we will observe a partitioned search. Joseph
--O'Leary, as far as I can see, no one is entitled to your prize... But though the discussion at your thread didn't recover Dawkins's original program, it gave interesting insights into the workings of the algorithms described by Dembski and Marks in their paper on the one hand, and by Dawkins in his book on the other. DiEb
--AndrewFreeman, I calculated the expected number of generations needed to reach the target for some combinations of population size and mutation probability (standard deviation in brackets). Sorry about the format.

size        4%                5%                one mut.
  10   1,305 (924)      12,461 (12,140)   477,456 (477,303)
  20     326 (121)         341 (140)           754 (652)
  30     222 (80)          223 (84)            168 (90)
  40     170 (60)          170 (63)            101 (38)
  50     139 (49)          140 (51)             79 (25)
  60     119 (41)          120 (42)             67 (19)
  70     105 (35)          105 (37)             60 (16)
  80      93 (31)           94 (32)             54 (14)
  90      85 (28)           86 (29)             50 (13)
 100      79 (25)           79 (26)             48 (12)
 200      49 (14)           49 (14)             35 (8)
 300      40 (10)           40 (10)             32 (6)
 400      35 (8)            35 (9)              30 (6)
 500      32 (7)            32 (8)              30 (6)

1. 4%-5% is the best mutation rate; values outside this interval will produce longer runs.

2. For his interview, Dawkins needed the program to run for ~2000 generations. This could be achieved by the combination (10 children, 4% mutation rate). But I suppose that Dawkins just fooled around a little bit with his program to get an optimal run length, i.e., the program was running for the length of his interview...

3. I'm glad to see that your numbers agree with mine...

4. For the book, the number of children was 100-200, not fifty, as I said earlier. Sorry. That is, if Dawkins used the algorithm which most people think he described... DiEb
The video makes no reference to generations. Instead it counts up to some 2485 "tries". These don't seem likely to be the same as the generations from the book. The program in the video doesn't seem to be the same as the one in the book. All my code is available upon request. (Just to make sure nobody is asking for it in ten years.) I'd post it here, but I'm pretty sure this comment system would destroy the whitespace in my Python code, making it a useless gesture. AndrewFreeman
"I can’t imagine why Dawkins would limit mutation so it can only EVER apply to one letter – it doesn’t seem very biological." An argument from lack of imagination? Interesting... Granted its not very "biological"; however, it is simpler to code. This version uses one line of code to change a random letter. The most straightforward method to implement the multiple-mutation scheme would involve a loop over all the characters. Certainly not impossible or even difficult for an experience coder. Nevertheless, it would be simpler not to. I don't think Dawkins was attempting to produce an accurate model of biology here. All he was attempting to demonstrate was the power of cumulative selection over non-cumulative selection. A more accurate mutation scheme really wouldn't have been useful. AndrewFreeman
I coded a quick and dirty version of both the OP algorithm and a similar version that uses the proposed mutation-rate-per-letter scheme.

One mutation per child with 100 children (as per the OP): 40-60 generations, which seems consistent with the examples in the text.
4% mutation rate with 50 children (as per DiEb): 90-120 generations.
5% mutation rate with 50 children (as per DiEb): 120-150 generations.
4% mutation rate with 10 children (as per DiEb): 600-2000 generations.
4% mutation rate with 200 children per generation: 40-60 generations.

The given algorithm appears to produce similar convergence times, whereas it takes a larger population than has previously been proposed to get the same range with the multiple-mutation scheme. AndrewFreeman
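AndrewFreeman's script isn't posted here, but the OP scheme is easy to restate. A rough Python equivalent of WEASEL1 - a sketch, not his actual code - with one mutated letter per child, 100 copies, and ties broken in favor of the earliest child, as in the Pascal:

import random

ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ '
TARGET = 'METHINKS IT IS LIKE A WEASEL'
COPIES = 100

def same_letters(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = ''.join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    best, best_child = -1, None
    for _ in range(COPIES):
        # Each copy gets exactly one mutation at a random position.
        child = list(parent)
        child[random.randrange(len(child))] = random.choice(ALPHABET)
        child = ''.join(child)
        # Strict inequality: ties go to the earliest child, as in the Pascal.
        if same_letters(child) > best:
            best, best_child = same_letters(child), child
    parent = best_child
    print(generation, parent)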
Under the OP's algorithm, we'd expect at most one letter changed per generation. On the other hand, if multiple mutations are possible, we'd expect at least some generations where multiple letters were fixed. The first run, showing every tenth generation and the number of letters fixed in that interval (assuming that I've copied the strings over correctly; I'm using a Python script to help make sure I'm counting the changes correctly):

WDLDMNLT DTJBKWIRZREZLMQCO P : ancestor
WDLDMNLT DTJBKWIRZREZLMQCO P : 9
MDLDMNLS ITJISWHRZREZ MECS P : 10
MELDINLS IT ISWPRKE Z WECSEL : 6
METHINGS IT ISWLIKE B WECSEL : 4

The second run:

Y YVMQKZPFJXWVHGLAWFVCHQYOPY : ancestor
Y YVMQKZPFJXWVHGLAWFVCHQYOPY : 9
Y YVMQKSPFTXWSHLIKEFV HQYSPY : 10
YETHINKSPITXISHLIKEFA WQYSEY : 7
METHINKS IT ISSLIKE A WEFSEY : 3
METHINKS IT ISBLIKE A WEASES : 2
METHINKS IT ISJLIKE A WEASEO : 3

Both runs peak at 10 letters fixed in ten generations, which would seem to lend support to the idea of having one mutation per child. AndrewFreeman
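The counting script itself isn't shown above, so this is a guess at it rather than AndrewFreeman's actual code, but a plausible stand-in is only a few lines:

# Count the positions at which two equal-length strings differ.
def changed(a, b):
    return sum(x != y for x, y in zip(a, b))

# For example, the first run's ancestor against its tenth generation:
print(changed('WDLDMNLT DTJBKWIRZREZLMQCO P',
              'MDLDMNLS ITJISWHRZREZ MECS P'))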
Ah, I hadn't noticed the similarity between the non-cumulative selection run and the starting points for the cumulative selection runs. There seem to be a number of typos in the strings. The first generation of the first run is, as mentioned, too short. The first generation of the second run is too long; it seems to have an extra X. So it is not too much of a stretch to suggest an additional typo. For the string to have had a D in that location, switched to a T, and then gone back to a D seems rather unlikely. AndrewFreeman
AndrewFreeman:
The two generations in question:

1 WDLMNLT DTJBKWIRZREZLMQCO P
2 WDLTMNLT DTJBSWIRZREZLMQCO P

However, something's not quite right, since the first one is shorter than it should be… If we assume it is a typographical error, where a T is missing from generation 1, only one difference remains.
If you look at the top of page 47, you'll see the first generation of Dawkins' non-cumulative selection run. The sequence is the same as the first generation of the cumulative selection run, with a 'D' in the position of the missing letter. It seems that the most likely explanation is that Dawkins used the same random seed in both runs, but the 'D' was accidentally omitted when the original sequence of the cumulative run was transcribed. Of course, it may be that the 'T' at the top of page 48 should be a 'D' -- another typo. This would be supported by the fact that there is a 'D' in that position after 10 and 20 generations. That scenario seems very plausible to me, and in that case, the algorithms in the OP could indeed be accurate reflections of the algorithm in TBW. R0b
For Cannuckian Yankee at 14: No, unfortunately. The raggedy copy wouldn't help in this situation. I have a bunch of them going to local libraries now. The general idea of the contest is that I must provide mint copies from the publisher to the winner. Some people even want me to sign them, when I had nothing to do with the book ... I will solve this problem by acquiring winner certificates at the local office supply store shortly. Anyway, if you know any publishers of books of interest, pester them to be generous to us. O'Leary
OT Denyse, "...and solicit new prizes from publishers hit by the recession." and "...so if you have a brilliant idea, get it in soon." I have an extra copy (a little raggedy) of Darwin's Black Box. Will that help? :) CannuckianYankee
To DiEB at 6: "As Both comments and pings are currently closed at O’Leary’s thread, there won’t be any additional comments over there. That’s a pity" Well, it would be a pity if you didn't need to declare a winner from nearly 400 comments, and still judge other contests and put up more contests, and solicit new prizes from publishers hit by the recession. I think our mod closes comments after four weeks, so if you have a brilliant idea, get it in soon. O'Leary
Dr Dembski, Of these two original programs, which uses explicit latching ('86) and which uses quasi-latching ('87)? steve_h
BTW, I did some math for the latching behavior of this program: have a look here. DiEb
As a professional programmer with well over a million lines of Pascal code written, I can say unequivocally that computationally this program is trivial. I recently spent a few hours writing a grandchild-generator program that simulated the distribution of grandparental chromosomes, and it was longer and more involved than this. By the way, the program showed that if you wanted your chromosomes to get much further than your grandchildren, you needed to have at least four children, all making grandchildren. It makes you understand grandparentally-arranged marriages between cousins, which keep those grandparental chromosomes around much longer, but which were recently shown to have no elevated risk of inbreeding. Nevertheless, good luck pulling that off here. Interstelar Bill
Even this algorithm is not necessarily latching: it could be that all members of a generation have a change for the worse - that's especially probable if most letters are correct already. Only kairosfocus's implicit latching is prevented... DiEb
Wait, wait, wait. In his book Dawkins CLEARLY reproduced only some of his program-generated output, whereas in the TV program more could be seen. AND . . . you're trusting this person "Oxfordensis"? It was just a program to demonstrate that one aspect of evolution was reasonable AND . . . . "Oxfordensis"? Dr Dembski . . . you've got to provide us with more evidence than this, surely! At least tell us why YOU think these programs are believable. They don't even look like they were written by the same person!! Have you compiled these to see if they come close to producing the output observed? :-) Don't blow this guys . . . PZ will be all over you if you do!! ellazimm
Ah, I think I see what's going on, very clever. With a low mutation rate on the real WEASEL you can get an average of one mutation per offspring, which will produce the result that AndrewFreeman posted. Fixing the "mutation rate" so it can only ever be one letter per offspring is very different from giving each letter a low probability of mutating. Giving each letter a random chance of mutating, and selecting the right mutation rate, may average out at one letter change per offspring but, like any average measure, the actual results can vary from moment to moment - you can have an offspring with three mutated letters; it is just more unlikely than an offspring with one or none. WEASEL1 above doesn't allow this. Dawkins' output only gives the first two consecutive parents followed by every tenth parent, which should average out at one mutation per offspring, so there is not enough info in his results to tell the difference. Where the WEASEL1 above is really clever is that, by fixing the mutation to one letter per offspring, you create a latching mechanism, because a reversion of a correct letter can never be balanced by another correct letter being found. In this way you always preserve offspring that don't show reversions, unlike the real WEASEL, where this is a probabilistic outcome dependent on the choice of mutation rate and population size. I can't imagine why Dawkins would limit mutation so it can only EVER apply to one letter - it doesn't seem very biological. BillB
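BillB's point can be made concrete with a little arithmetic: at a per-letter rate of 4% (DiEb's estimate, used here only for illustration) over 28 letters, the number of mutations per offspring is binomially distributed, so zero-mutation and multi-mutation offspring are both common:

from math import comb

n, p = 28, 0.04  # 28 letters, 4% chance of mutation per letter
for k in range(4):
    # Binomial probability of exactly k mutated letters in one offspring.
    print(k, round(comb(n, k) * p**k * (1 - p)**(n - k), 3))

# Roughly 32% of offspring carry no mutation at all, 37% carry exactly one,
# and about 31% carry two or more -- quite unlike a scheme that forces
# exactly one mutation per offspring.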
3. IMO Dawkins used the same algorithm in his book and the video; he just changed the parameters: in his book, the population size seems to be fifty, while for the video, it seems to be ten - and both times, a mutation probability of 4%-5% seemed to be used. DiEb
1. As both comments and pings are currently closed at O'Leary's thread, there won't be any additional comments over there. That's a pity. 2. In these 377 comments, at least the consensus emerged that the algorithm in the paper of Dembski and Marks (described by kairosfocus as an unrealistic illustration) is not the algorithm of Dawkins's book. DiEb
R0b:
Regardless, it looks like we’ve finally gotten past the idea that Dawkins’ algorithm is a partitioned search.
It just acts like one. Joseph
The two generations in question:

1 WDLMNLT DTJBKWIRZREZLMQCO P
2 WDLTMNLT DTJBSWIRZREZLMQCO P

However, something's not quite right, since the first one is shorter than it should be... If we assume it is a typographical error, where a T is missing from generation 1, only one difference remains. AndrewFreeman
BillB is right. Both of the above programs mutate exactly one letter per offspring, but the results reported at the top of page 48 in TBW show two changes occurring between the first and second generations. Regardless, it looks like we've finally gotten past the idea that Dawkins' algorithm is a partitioned search. R0b
Although I'm still brushing up on my PASCAL, it looks like WEASEL1 is wrong: it only mutates one letter for each offspring, whereas each letter of each offspring should have a probability of mutating. I'll try and feed it into a compiler if I can resurrect one, but I really can't see this ever reproducing the results in TBW. BillB
Well, I can manage a contest but don't know code. So naturally, this development pleases me. If we got 377 comments for Contest 10, I really must acquire more prizes, which I am now trying to do. I will judge Contest 9 next week, and am delighted to think I would be no use with Contest 10, as that gives me time to post more contests and fish for more prizes. I don't think the Contest 10 question will ever really be solved without the absolute real genuine original code. But if Oxfordensis is entitled to the prize, that individual must e-mail me a valid postal address at oleary@sympatico.ca O'Leary
