Uncommon Descent Serving The Intelligent Design Community

The Original WEASEL(s)


On August 26th of last month, Denyse O’Leary posted a contest here at UD asking for the original WEASEL program(s) that Richard Dawkins was using back in the late 1980s to show how Darwinian evolution works. Although Denyse’s post has generated 377 comments (thus far), none of the entries could reasonably be thought to be Dawkins’s originals.

It seems that Dawkins used two programs, one in his book THE BLIND WATCHMAKER, and one for a video that he did for the BBC (here’s the video-run of the program; fast forward to 6:15). After much beating the bushes, we finally heard from someone named “Oxfordensis,” who provided the two PASCAL programs below, which we refer to as WEASEL1 (corresponding to Dawkins’s book) and WEASEL2 (corresponding to Dawkins’s BBC video). These are by far the best candidates we have received to date.

Unless Richard Dawkins and his associates can show conclusively that these are not the originals (either by providing originals in their possession that differ, or by demonstrating that these programs in some way fail to perform as required), we shall regard the contest as closed, offer Oxfordensis his/her prize, and henceforward treat the programs below as the originals.

The two programs, WEASEL1 and WEASEL2, are listed below.

WEASEL1:

Program Weasel;

Type
  Text=String[28];

(* Define Parameters *)
Const
  Alphabet:Text='ABCDEFGHIJKLMNOPQRSTUVWXYZ ';
  Target:Text='METHINKS IT IS LIKE A WEASEL';
  Copies:Integer=100;

Function RandChar:Char;
(* Pick a character at random from the alphabet string *)
Begin
  RandChar:=Alphabet[Random(27)+1];
End;

Function SameLetters(New:Text; Current:Text):Integer;
(* Count the number of letters that are the same *)
Var
  I:Integer;
  L:Integer;
Begin
  L:=0;
  I:=0;
  While I<=Length(New) do
  Begin
    If New[I]=Current[I] Then L:=L+1;
    I:=I+1;
  End;
  SameLetters:=L;
End;

Var
  Parent:Text;
  Child:Text;
  Best_Child:Text;
  I:Integer;
  Best:Integer;
  Generation:Integer;

Begin
  Randomize; (* Initialize the Random Number Generator *)

  (* Create a Random Text String *)
  Parent:='';
  For I:=1 to Length(Target) do
  Begin
    Parent:=Concat(Parent, RandChar)
  End;
  Writeln(Parent);

  (* Do the Generations *)
  Generation:=1;
  While SameLetters(Target, Parent) <> Length(Target)+1 do
  Begin
    (* Make Copies *)
    Best:=0;
    For I:=1 to Copies do
    Begin
      (* Each Copy Gets a Mutation *)
      Child:=Parent;
      Child[Random(Length(Child))+1]:=RandChar;
      (* Is This the Best We've Found So Far? *)
      If SameLetters(Child, Target) > Best Then
      Begin
        Best_Child:=Child;
        Best:=SameLetters(Child, Target);
      End;
    End;
    Parent:=Best_Child;
    (* Inform the User of any Progress *)
    Writeln(Generation, ' ', Parent);
    Generation:=Generation+1;
  End;
End.
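For readers who would rather experiment without hunting down a Turbo Pascal compiler, the logic of the listing above (make 100 copies, give each exactly one random-character mutation, promote the best copy to be the next parent) can be sketched in modern Python. This is our own hedged re-implementation for illustration, not Dawkins's or Oxfordensis's code; all names are ours, and we sidestep the Pascal length-byte quirk by comparing strings directly.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"
COPIES = 100  # offspring per generation, as in the listing

def same_letters(a, b):
    """Count positions where the two strings agree."""
    return sum(x == y for x, y in zip(a, b))

def mutate(s, rng):
    """Replace exactly one randomly chosen character, as each copy gets."""
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

def weasel1(rng=None, max_generations=20000):
    """Best-of-N cumulative selection toward TARGET."""
    rng = rng or random.Random()
    parent = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    generation = 0
    while parent != TARGET and generation < max_generations:
        generation += 1
        # The best of the mutated copies becomes the next parent,
        # whether or not it beats the current parent.
        parent = max(
            (mutate(parent, rng) for _ in range(COPIES)),
            key=lambda child: same_letters(child, TARGET),
        )
    return parent, generation

if __name__ == "__main__":
    phrase, gens = weasel1(random.Random(1))
    print(gens, phrase)
```

Note that, as in the Pascal, the best copy replaces the parent whether or not it actually beats it, so a generation can in principle slip backwards; nothing explicitly "latches" correct letters.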

WEASEL2:

PROGRAM WEASEL;
USES
CRT;

(* RETURN A RANDOM LETTER *)
FUNCTION RANDOMLETTER : CHAR;
VAR
NUMBER : INTEGER;
BEGIN
NUMBER := RANDOM(27);
IF NUMBER = 0 THEN
RANDOMLETTER := ' '
ELSE
RANDOMLETTER := CHR( ORD('A') + NUMBER - 1 );
END;

(* MEASURE HOW SIMILAR TWO STRINGS ARE *)
FUNCTION SIMILARITY(A : STRING; B : STRING) : INTEGER;
VAR
IDX : INTEGER;
SIMCOUNT : INTEGER;
BEGIN
SIMCOUNT := 0;

FOR IDX := 0 TO LENGTH(A) DO
BEGIN
IF A[IDX] = B[IDX] THEN
SIMCOUNT := SIMCOUNT + 1;
END;
SIMILARITY := SIMCOUNT;
END;

FUNCTION RANDOMSTRING(LEN : INTEGER) : STRING;
VAR
I : INTEGER;
RT : STRING;
BEGIN
RT := '';
FOR I := 1 TO LEN DO
BEGIN
RT := RT + RANDOMLETTER;
END;
RANDOMSTRING := RT;
END;

VAR
X : INTEGER;
TARGET : STRING;
CURRENT : STRING;
OFFSPRING : STRING;
TRIES : LONGINT;
FOUND_AT : INTEGER;
BEGIN
RANDOMIZE;

CLRSCR;

WRITELN('Type target phrase in capital letters');
READLN(TARGET);
(* PUT SOME STRING ON THE SCREEN *)
TEXTCOLOR(GREEN);
GOTOXY(1, 6);
WRITELN('Target');

GOTOXY(10, 6);
WRITELN(TARGET);

TEXTCOLOR(BLUE);

GOTOXY(1,13);
WRITELN('Darwin');

TEXTCOLOR(BLUE);
GOTOXY(1,19);
WRITELN('Random');

TEXTCOLOR(WHITE);
GOTOXY(1, 25);

WRITE('Try number');

(* PICK A RANDOM STRING TO START DARWIN SEARCH *)
CURRENT := RANDOMSTRING(LENGTH(TARGET));

(* RUN THROUGH MANY TRIES *)
FOUND_AT := 0;
FOR TRIES := 1 TO 100000 DO
BEGIN

(* Darwin *)
OFFSPRING := CURRENT;
OFFSPRING[ 1 + RANDOM(LENGTH(OFFSPRING)) ] := RANDOMLETTER;

GOTOXY(10,13);
WRITELN(OFFSPRING, ' ');

IF( SIMILARITY(OFFSPRING, TARGET) >= SIMILARITY(CURRENT, TARGET) ) THEN
CURRENT := OFFSPRING;

IF( (SIMILARITY(CURRENT, TARGET) = LENGTH(TARGET)) AND (FOUND_AT = 0) ) THEN
BEGIN
(* TELL THE USER WHAT WE FOUND *)
FOUND_AT := TRIES;
GOTOXY(1, 15);
TEXTCOLOR(BLUE);
WRITELN('Darwin');
TEXTCOLOR(WHITE);
GOTOXY(9, 15);
WRITELN('reached target after');
GOTOXY(37, 15);
TEXTCOLOR(BLUE);
WRITELN(FOUND_AT);
WRITE('tries');
TEXTCOLOR(WHITE);

GOTOXY(1, 21);
TEXTCOLOR(BLUE);
WRITE('Random');
TEXTCOLOR(WHITE);
WRITELN(' would need more than ');
TEXTCOLOR(BLUE);
WRITELN('1000000000000000000000000000000000000000');
TEXTCOLOR(WHITE);
WRITE('tries');
END;

(* Random *)
GOTOXY(10, 19);
WRITELN(RANDOMSTRING(LENGTH(TARGET)), ' ');

GOTOXY(27,25);
WRITE(TRIES, ' ');
END;

GOTOXY(1, 20);
End.
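WEASEL2 works differently from WEASEL1: each try breeds a single mutant and keeps it only if it scores at least as well as the current string. Stripped of the CRT screen handling, its loop can be sketched in Python; again this is our own hedged re-implementation for illustration, with our own names, not the original.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def random_letter(rng):
    """WEASEL2's RANDOMLETTER: 0 maps to space, 1..26 to A..Z."""
    n = rng.randrange(27)
    return " " if n == 0 else chr(ord("A") + n - 1)

def similarity(a, b):
    """Count positions where the two strings agree."""
    return sum(x == y for x, y in zip(a, b))

def weasel2(target, tries=100000, rng=None):
    """Single-offspring hill climb: keep the mutant iff it is no worse."""
    rng = rng or random.Random()
    current = "".join(random_letter(rng) for _ in range(len(target)))
    found_at = 0
    for t in range(1, tries + 1):
        i = rng.randrange(len(current))
        offspring = current[:i] + random_letter(rng) + current[i + 1:]
        # Accept the offspring only if it scores at least as well;
        # a strictly worse mutant is always rejected.
        if similarity(offspring, target) >= similarity(current, target):
            current = offspring
        if current == target and found_at == 0:
            found_at = t
    return current, found_at

if __name__ == "__main__":
    phrase, found = weasel2("METHINKS IT IS LIKE A WEASEL")
    print(found, phrase)
```

Because a strictly worse offspring is always rejected, a correct letter can never be lost once gained, so the display appears to "latch" even though no position is ever explicitly frozen.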

Comments
Mr CJYman, I thought the meaning of 'explicit' was clear. If you look at the code of Weasel, you'll find the target string. Even in Weasels that let you type in the target, it is in memory. But there is no target design in the antenna example. There is only measuring efficiency, and ranking that against other designs in the population. So the point of my post was that there are interesting problems that are more complicated than hill climbing smoothly towards a fixed target, and evolutionary algorithms can still solve them, contra a dismissive wave of the hand. With respect to antenna design, this particular group of researchers was either interested in building better antennas, or thought antenna design was a hard problem for humans, and therefore a good test problem for GP. Other research is not interested in getting useful results, but simply in understanding the limits of EAs. I'm sure there is an 'edge of evolution', and books like David Goldberg's 'Design of Innovation' explore it.Nakashima
October 10, 2009, 06:40 AM PST
Nakashima: "The point of getting 'some results' is that it happens without an explicit target, contra what many here and elsewhere are saying is necessary." Not sure how you are using the term "explicit"; however, as per the antenna example, there definitely is a target. That target is an efficient antenna. In this case, the target was a specific function instead of a form. The programmers knew what function they wished to achieve and programmed the constraints to achieve that function, and without that function of an efficient antenna the form (the exact shape of the antenna) wouldn't have been discovered. The point is that absent the foresight of the programmers to achieve a specific end function, there would be no 'some results.'CJYman
October 10, 2009, 05:13 AM PST
Onlookers: I have been busy elsewhere on other matters for the past week or so. I came back by to see where the thread went. SA has put his finger on the key issue: the ORIGIN of functional complex, specific information is what has to be accounted for. And, both Weasel and the more modern GA's do not address that. In effect they start within the shores of an island of function, without first credibly getting us to those shores in a very large config space well beyond the scanning ability of the resources of the atoms of the observed cosmos. Remember, that starts at 500 - 1,000 bits as a rule of thumb. To see the force of that, think about the requisites for a von Neumann self-replicator:
1 --> A code system, with symbols and combinational rules that specify meaningful and functional as opposed to "nonsense" strings. [Situations where every combination has a function are irrelevant.]
2 --> A storage unit with blueprint or tape mechanism that encodes the specifications and at least implies the assembly instructions.
3 --> A reader that then drives associated implementation machines that actually carry out the replication.
4 --> A source of required parts (i.e. a pre-existing reservoir and/or a metabolic subsystem to make parts out of easily accessible environmental resources).
This is an irreducibly complex set of core elements, i.e., remove any one and self-replicational functionality vanishes. It also specifies an island of functional organisation, as not just any combination of any and all generic parts will achieve the relevant function. That is why the randomly varied "genes" in a GA string are irrelevant. For, absent the independent reader and translator into action, the strings have no function. And, the process of reading and converting into a functional behaviour and/or metric is plainly intelligently designed in all cases of GA's on record. We could go on and on, but the point is plain enough. GEM of TKIkairosfocus
October 10, 2009, 04:23 AM PST
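The combinatorial figures in kairosfocus's comment are easy to check mechanically. A quick Python sanity check of the quoted exponents (the 10^80-atom figure is the commenter's estimate and is simply taken as given here):

```python
from math import log10

def config_exponent(bits):
    """Base-10 exponent of 2**bits, the number of distinct
    configurations of a bit string of the given length."""
    return bits * log10(2)

# 500 bits -> about 10^150.5 configs; 1,000 bits -> about 10^301.
print(round(config_exponent(500), 1))
print(round(config_exponent(1000), 1))

# 2^1000 indeed exceeds the square of the commenter's 10^80 atom count.
print(2 ** 1000 > (10 ** 80) ** 2)
```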
They must do nothing less to lend any support to the hypothesis of increased complexity via RM+NS. Otherwise they’re a parlor trick. (Or an easier way of designing better antenna surfaces.)
The "hypothesis of increased complexity" is a term exclusive to the mode of thinking upon which Intelligent Design is based and is irrelevant with respect to fitness adaptation, microevolution.Cabal
October 2, 2009, 12:53 AM PST
Mr ScottAndrews, The point of getting 'some results' is that it happens without an explicit target, contra what many here and elsewhere are saying is necessary. That consistent misunderstanding of the necessity of targets to EAs has been the genesis of much discussion here!Nakashima
October 1, 2009, 07:44 PM PST
Evolutionary algorithms for antenna design are essentially an automation of a trial-and-error process, testing various forms and improving upon them. It's a substitution of brute computing power for human effort. And fine, it gets some results. I'd be really curious to see if any of these "evolved" antennas, on their own, achieved any sort of innovation, such as motors to orient themselves toward a signal, circuitry to enhance the signal, or some relays. They must do nothing less to lend any support to the hypothesis of increased complexity via RM+NS. Otherwise they're a parlor trick. (Or an easier way of designing better antenna surfaces.)ScottAndrews
October 1, 2009, 07:28 AM PST
kairosfocus, "and (ii) the key begged question, again is to get to shores of complex functionality sufficient for further hill climbing to be relevant,...." This is an important issue that you raise several times in your post. It is important because it represents a fundamental misconception about evolutionary theory. Accepting for the sake of argument that "shores of complex functionality" actually exist, there is no need for evolutionary mechanisms to find them. Living creatures that reproduce already have a successful genome. Evolutionary mechanisms, such as those simulated in programs like ev, don't need to find a viable point in genome space -- they're already at one and are simply exploring nearby points. Abiogenesis is an interesting topic, but it is distinct from evolutionary theory. Given this, the rest of your response does not address the core question. Where, exactly, does the "active information" get injected into ev?Rasputin
October 1, 2009, 07:20 AM PST
kairosfocus, "5] "The amount of information in the genomes of the final population is much higher than that in the initial population, using only simple evolutionary mechanisms." Again, Schneider's Ev is discussing a pre-programmed context that assigns functions, sets up hill-climbing algorithms and gives particular meaning to digital strings according to certain symbol and rule conventions,..." You need to read the paper more carefully. Schneider's ev is a simulation of a subset of known evolutionary mechanisms applied to a known biological system. The only "meaning" assigned to any digital strings is that which reflects real world chemistry. With even a very simple set of mechanisms, Schneider demonstrated the ability to evolve significant amounts of information. Equally importantly, his simulation results are consistent with the empirical evidence resulting from his research on real biological systems. That's very strong support for the ability of evolutionary mechanisms to transfer information from an environment to subsequent populations. "... and inter alia measures Shannon information, which -- as a metric of info-carrying or storing capacity -- is irrelevant to the issue of origin of algorithmically functional, complex specified information in a context of first life or novel body plans." Shannon information is a standard, well-understood metric. Schneider explains how and why it is appropriate in his thesis. After a quick re-review of that thesis, I suspect that any rigorously defined, objective, quantitative measure of information could be used. The fact is that the amount of information in the sequence patterns at a binding site evolves to be equal to the amount of information required to locate the number of such sites within the genome. "Remember, too (as was already pointed out but ignored): Shannon information for a given symbol string length peaks for non-functional flat random code,...." That is immaterial in this context. 
If you read the ev paper and Schneider's thesis, you will see that the important measurement is the relationship between the amount of information in a binding site sequence and the amount of information required to locate a binding site. "6] "If you read the paper, you’ll see that the fitness landscape itself is constantly changing." Irrelevant: (i) the "fitness landscape" is MAPPED and ALGORITHMICALLY PROCESSED at any given time (to get the hill-climbing by differential fitness metric values),..." No, it is not. Read the thesis. "7] "ev does it" Ev does not create its global algorithmic functionality ab initio from undirected chance plus necessity, but from an intelligent programmer." You are again mistaking what is being simulated. ev shows that a small subset of known evolutionary mechanisms is sufficient to transfer information from the environment to subsequent populations, without any need for intelligent intervention.Rasputin
October 1, 2009, 07:19 AM PST
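For readers following the ev exchange: Schneider's claim compares two quantities, Rsequence (the information content of the aligned binding-site sequences) and Rfrequency (the information needed to locate that many sites in a genome). A simplified Python sketch of both, with Schneider's small-sample correction omitted and a made-up toy alignment (the numbers here are illustrative only, not from ev):

```python
from math import log2
from collections import Counter

def r_sequence(sites):
    """Information in an aligned set of DNA binding sites: for each
    position, 2 bits (the maximum for a 4-letter alphabet) minus the
    observed entropy of that column.  Small-sample correction omitted."""
    total = 0.0
    for column in zip(*sites):
        n = len(column)
        h = -sum((c / n) * log2(c / n) for c in Counter(column).values())
        total += 2.0 - h
    return total

def r_frequency(genome_length, n_sites):
    """Information needed to locate n_sites positions in a genome."""
    return log2(genome_length / n_sites)

# Toy alignment of four 4-base sites (hypothetical data).
sites = ["ACGT", "ACGA", "ACGT", "ACTT"]
print(r_sequence(sites))
print(r_frequency(4096, 16))
```

Schneider's result is that under selection Rsequence evolves toward Rfrequency; the sketch only shows how the two numbers are computed.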
kairosfocus, "3] "Life ‘knows’ the target, it is ‘aware’ of the target, i.e. it detects when it is pointing closer to or farther from the ‘target’, i.e. increasing or decreasing in fitness." See the point? The issue is not to improve already functioning life forms and body plans, but to first get to them, in light of the entailed complex, functionally specific information basis for such life." That is the issue for theories of abiogenesis. It is not the issue for evolutionary theory. Evolutionary theory explains how populations change over time, given the existence of self-replicating entities.Rasputin
October 1, 2009, 07:17 AM PST
kairosfocus, "Antenna theory and Genetic Algorithms used to design novel antennas, are based on a deeply established theory of the function of such antennas [based on Maxwell's Electromagnetism], programmed into the simulation by its designers. And, that is the precise source of the relevant active information." It's important to be clear on exactly what is being simulated in these types of genetic algorithms. Typically there are two primary components: a population generator and a fitness evaluator. In the case of the antenna GA, the fitness evaluator uses standard, real world physics to determine the performance of the design represented by each member of the current population. The laws of physics themselves are not being simulated. The population generator implements a subset of known evolutionary mechanisms. At a minimum, the likelihood of a particular gene making it into the next generation will be related to the fitness of the individuals in the current population with that gene (stochastically, in some selection algorithms). Some type of mutation is also required. Other mechanisms such as cross-over may be used. The simulation, therefore, is of the evolutionary mechanisms themselves. Claiming that the laws of physics are providing the "active information" is, as I noted previously, equivalent to recognizing that the evolutionary mechanisms being simulated are capable of transferring information about the environment to subsequent populations. Again, this is what we observe in actual biological systems, with no intelligent intervention required. I'll respond to some of your other points separately in the interests of keeping each post readable.Rasputin
October 1, 2009, 07:16 AM PST
PS: On the source of active information in Ev, it is not irrelevant to excerpt from the page linked by R above: ______________ >> The Ev program was written in Pascal, which is a good language for which there is an open source compiler. However, Pascal compilers are not often set up on computers, so this limits experimentation with Ev [NB: computer simulations and modelling are NOT empirical, real-world experiments, but easily lead people to believe what they see on the screen . . . a problem ever since Weasel] to the few people willing to download a Pascal compiler and to set up Ev. In contrast, an open source version of Ev written in Java and available from Source Forge could be used in schools all across the world to help educate students in the precise mechanisms of evolution . . . >> _______________ The source of the relevant active information should be clear enough, and of course it inadvertently illustrates the empirical limits on evolutionary mechanisms.kairosfocus
October 1, 2009, 06:04 AM PST
Onlookers: The remarks overnight simply sustain my points on: (i) increasingly tangential issues, and (ii) the degree of strawmannishness in the objections. As I have no intention to embark on a yet further set of tangential exchanges [it has been something like nine months, folks], the substantial matters plainly having been settled as I have already summarised, I will simply make some notes for record:

1] GA's, targets, fitness landscapes: Antenna theory and Genetic Algorithms used to design novel antennas are based on a deeply established theory of the function of such antennas [based on Maxwell's Electromagnetism], programmed into the simulation by its designers. And, that is the precise source of the relevant active information.

2] Fitness and complex function: Again, life forms are based on self-replicating cells, and reproduce. To do so they must implement highly complex function sufficient to implement a von Neumann replicator [code, blueprint storage, reader, effector] with associated metabolism to provide energy and materials. For first life and for novel major body plans, until one accounts for the origin of such complex function from in effect chance -- natural selection is a culler on differential function, not an innovator -- discussing hill climbing on comparative "fitness" within islands of function is mere question-begging. This has of course been pointed out in the context of the weasel debates from the outset, not just in my remarks of last December; but in fact such is directly (albeit inadvertently) implied by CRD's remarks of 1986, especially his remarks on rewarding "nonsense phrases" on increments of proximity to target.

3] "Life 'knows' the target, it is 'aware' of the target, i.e. it detects when it is pointing closer to or farther from the 'target', i.e. increasing or decreasing in fitness." See the point?
The issue is not to improve already functioning life forms and body plans, but to first get to them, in light of the entailed complex, functionally specific information basis for such life.

4] "is your contention that natural selection is an invalid concept, that even microevolution is impossible, that the designer is responsible for all species adaptability?" Strawman, in the teeth of an always linked, immediately accessible discussion of the issue: origin of functionally specific, complex information as the basis for cell-based life forms. So-called natural selection is not the issue: probabilistic culling on sub-populations of life forms with variations in an environment is a reasonable and significantly empirically supported concept. But, culling does not explain origin of relevant variations. Similarly, variability of already functioning life forms is not the issue; origin of such functionality based on complex digital, algorithmic information -- and for good reason connected to the number of states accessible to the ~10^80 atoms of the observed universe, I have used the threshold of 500 - 1,000 bits for the border of enough complexity -- is. As for so-called micro-evolution, it is not an issue across any significant view on biological variability, including young earth creationism. [Cabal should consult the Weak Argument Correctives.]

5] "The amount of information in the genomes of the final population is much higher than that in the initial population, using only simple evolutionary mechanisms." Again, Schneider's Ev is discussing a pre-programmed context that assigns functions, sets up hill-climbing algorithms and gives particular meaning to digital strings according to certain symbol and rule conventions, and inter alia measures Shannon information, which -- as a metric of info-carrying or storing capacity -- is irrelevant to the issue of origin of algorithmically functional, complex specified information in a context of first life or novel body plans.
Remember, too (as was already pointed out but ignored): Shannon information for a given symbol string length peaks for non-functional flat random code, as the metric (-SUM pi log pi) is highest for that case -- precisely what will not happen for a real world code. A high level of Shannon information can therefore easily correlate with non-function. That is, organised, algorithmic functionality is on a different dimension than order-randomness, which is what Fig. 4 in the Abel et al paper airily dismissed just above highlights. Nor is that insight new to them, as for instance Thaxton et al by 1984 in Ch 8 of TMLO summarise on three different types of symbol strings in light of Orgel, Yockey, Wickens and Polanyi as follows:
1. [Class 1:] An ordered (periodic) and therefore specified arrangement: THE END THE END THE END THE END Example: Nylon, or a crystal . . . .
2. [Class 2:] A complex (aperiodic) unspecified arrangement: AGDCBFE GBCAFED ACEDFBG Example: Random polymers (polypeptides).
3. [Class 3:] A complex (aperiodic) specified arrangement: THIS SEQUENCE OF LETTERS CONTAINS A MESSAGE! Example: DNA, protein.
(Onlookers, this is the crucial point of breakdown of communication on this matter. Darwinists are evidently typically blind to the wider context of code-based digital, algorithmically functional entities and the related distinction between order, randomness and functionally specific complex organisation. And, once we get to quite modest quantities of functional information, we run into cosmically insuperable search or configuration spaces: 1,000 bits -- less than 150 bytes -- specifies 10^301 configs, or more than ten times the square of the number of quantum states of the 10^80 atoms of our observed cosmos across its thermodynamically credible lifespan, about 50 million times the run from the typical estimates of the duration since the Big Bang, 13.7 BY. 150 bytes is grossly too small to create the sort of von Neumann self-replicators we see in even the simplest more or less independent life forms, which start out with ~ 600 - 1,000 k bits of storage space. And, major body plans run from ~ 10 - 100+ million bits.)

6] "If you read the paper, you'll see that the fitness landscape itself is constantly changing." Irrelevant: (i) the "fitness landscape" is MAPPED and ALGORITHMICALLY PROCESSED at any given time (to get the hill-climbing by differential fitness metric values), and (ii) the key begged question, again, is to get to shores of complex functionality sufficient for further hill climbing to be relevant, where (iii) the shores in question for real life systems require self-replication, i.e. von Neumann replicators with codes, algorithms, storage of blueprints etc, readers and effectors backed up by metabolism to obtain required energy and materials.

7] "ev does it" Ev does not create its global algorithmic functionality ab initio from undirected chance plus necessity, but from an intelligent programmer.

8] "Where, exactly, is the "active information" being inserted?
If your answer is "from the simulated environment" then you are recognizing that the evolutionary mechanisms used in the simulation can transfer information about the environment to subsequent populations. This is what we observe in actual biological systems, with no intelligent intervention required." Active information relates here to the challenge of getting to the shores of an island of function in a large config space dominated by seas of non-function, as can be shown to relate to any significant digitally coded context. Cf. Marks and Dembski in the just linked:
Conservation of information theorems [15], [44], especially the No Free Lunch Theorems (NFLT's) [28], [51], [52], show that without prior information about the search environment or the target sought, one search strategy is, on average, as good as any other. Accordingly, the difficulty of an unassisted -- or blind -- search problem [9] is fixed and can be measured using what is called its endogenous information. The amount of information externally introduced into an assisted search can then be measured and is called the active information of the search [33]. Even moderately sized searches are virtually certain to fail in the absence of information concerning the target location or the search space structure. Knowledge concerning membership in a structured class of problems, for example, can constitute search space structure information [50] . . . . All but the most trivial searches require information about the search environment (e.g., smooth landscapes) or target location (e.g., fitness measures) if they are to be successful. Conservation of information theorems [15], [28], [44], [51], [52] show that one search algorithm will, on average, perform as well as any other and thus that no search algorithm will, on average, outperform an unassisted, or blind, search. But clearly, many of the searches that arise in practice do outperform blind unassisted search. How, then, do such searches arise and where do they obtain the information that enables them to be successful? . . . . Define an assisted search as any procedure that provides more information about the search environment or candidate solutions than a blind search. The classic example of an assisted search is the Easter egg hunt in which instead of saying "yes" or "no" to a proposed location where an egg might be hidden, one says "warmer" or "colder" as distance to the egg gets smaller or bigger.
This additional information clearly assists those who are looking for the Easter eggs, especially when the eggs are well hidden and blind search would be unlikely to find them. Information about a search environment can also assist a search. A maze that has a unique solution and allows only a small number of "go right" and "go left" decisions constitutes an information-rich search environment that helpfully guides the search . . . . What is the source of active information in a search? Typically, programmers with knowledge about the search (e.g., domain expertise) introduce it. But what if they lack such knowledge? Since active information is indispensable for the success of the search, they will then need to "search for a good search." In this case, a good search is one that generates the active information necessary for success . . . under general conditions, the difficulty of the "search for a good search," as measured against an endogenous information baseline, increases exponentially with respect to the active information needed for the original search.
Ev is about moving around within such an island that is tectonically active in effect, based on domain expertise. To get to the initial algorithmic functionality of Ev, Mr Schneider did a lot of highly intelligent design, coding and development. Ev did not come from sampling random noise spewed onto a hard disk using say a Zener noise source. So, the random element in Ev is based on a wider intelligently designed context that uses quite constrained random search in a friendly search environment/landscape as a means to an end, a known technique of intelligent designers. GEM of TKIkairosfocus
October 1, 2009, 05:53 AM PST
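The Marks and Dembski definitions quoted in the comment above can be turned into a short worked example. For a blind uniform search over an alphabet of 27 characters and a 28-character target, p = (1/27)^28, so the endogenous information is 28 * log2(27), about 133 bits; an assisted search that succeeds with probability q carries active information log2(q/p). A minimal Python sketch using those definitions (the function names are ours, not from the paper):

```python
from math import log2

def endogenous_information(alphabet_size, length):
    """I_Omega = -log2(p) for a blind uniform search,
    where p = (1/alphabet_size)**length."""
    return length * log2(alphabet_size)

def active_information(p_blind, q_assisted):
    """I_+ = log2(q/p): the information an assisted search adds
    over blind search, per the quoted definition."""
    return log2(q_assisted / p_blind)

I_omega = endogenous_information(27, 28)
print(round(I_omega, 1))  # bits needed to specify the weasel phrase blindly
```

A weasel-style search succeeds with probability close to 1, so on these definitions its active information is essentially the whole ~133 bits; where that information comes from is exactly what the commenters dispute.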
kairosfocus, "Recall -- and this specifically goes back to December last year -- the core challenge of evolutionary algorithms in general is to [without undue inadvertent injection of active information by investigators] create complex, information-rich function ab initio from plausible initial conditions [pre-life (~600 - 1,000 k bits), previous to existence of a novel body plan (10 - 100+ M bits)] without pre-loading key intelligently derived info about the overall topography of the fitness landscape and/or goals and their location." Thomas Schneider's ev does exactly that. The amount of information in the genomes of the final population is much higher than that in the initial population, using only simple evolutionary mechanisms. "As touching Ev etc, these have come up in the various discussions over the years, and unfortunately tend to fall under similar problems, i.e. not accounting adequately for the origin of the required level of complex functionality within the search resources of the observed cosmos, and they tend to embed implicit or explicit knowledge of the overall fitness landscape,..." This is not the case with ev. If you read the paper, you'll see that the fitness landscape itself is constantly changing. "...often working within an assumed island of function to carry out hill-climbing. The problem that is decisive is to get to the shores of such islands of function in the extremely large config spaces implied by the digital information in DNA etc, without intelligent direction." And yet, ev does it. "Ev, from the paper you cite -- starting with the abstract and culminating in the conclusion -- runs into the problem that Shannon information (a metric of channel and memory transfer or storage capacity) is inadequate to define algorithmic functionality, as say Abel et al discuss in this 2005 paper; cf esp. Fig 4 and associated discussion on OSC, RSC and FSC."
That paper is long on assertions and unnecessary jargon and short on mathematical support for their arguments. Schneider explains why Shannon Information is an appropriate measure and shows how it accrues through simple evolutionary mechanisms in his simulation. Why, exactly, do you disagree? "On p. 1058 of their recent IEEE paper, Marks and Dembski observe about the general problem with evolutionary algorithms as follows: . . . In short, inadvertent injection of active information that gives a considerable gain over reasonable capacity of random walk searches in large config spaces, is the critical flaw that consistently dogs evolutionary simulations from Weasel to today's favourites such as Ev, Avida etc." The ev simulation implements simple evolutionary mechanisms for breeding and selection, without an explicit target or static environment. It shows that those mechanisms can create Shannon Information and it corresponds well to the empirical evidence of the real biological systems that were the topic of Schneider's PhD thesis. Where, exactly, is the "active information" being inserted? If your answer is "from the simulated environment" then you are recognizing that the evolutionary mechanisms used in the simulation can transfer information about the environment to subsequent populations. This is what we observe in actual biological systems, with no intelligent intervention required.Rasputin
September 30, 2009 at 11:08 AM PST
Cabal:
Or is your contention that natural selection is an invalid concept, that even microevolution is impossible, that the designer is responsible for all species adaptability?
Natural selection doesn't "do" much of anything. It has never been observed to do what evolutionists say it has done. And when it has been studied it has been shown, on average, to contribute to just 16% of the variation. IOW, there are factors that are obviously more prevalent than NS.
Joseph
September 30, 2009 at 4:38 AM PST
With designing antennas the target is known - not the antenna, but what the antenna must be able to do.
Joseph
September 30, 2009 at 4:35 AM PST
Kairosfocus, presupposing that you do know and understand what evolutionary theory predicts about the cumulative effect on fitness of random mutations, do you think that you could devise a functional algorithm truthfully simulating the same process?

BTW, I am still unable to understand why algorithms for the design of antennae, where the target is not known, are not examples of a similar process, i.e. selection for fitness where the target is not known, only the landscape. Just as in real life, the fitness landscape is the template against which all the parameters affecting a species' survival coefficient are tested. The outcome determines the species' degree of reproductive success.

I don't think I am saying anything false when I attempt to say the same thing in other words: Life 'knows' the target, it is 'aware' of the target, i.e. it detects when it is pointing closer to or farther from the 'target', i.e. increasing or decreasing in fitness. In short: the target is not the target phrase or whatever we use to represent an imaginary target; the target is fitness.

WRT latching - I have not made, nor do I intend to make, an in-depth study of the Weasel algorithm. It seems to me, however, that there has got to be an effect we may conceive of as latching, but what it really is, is of course the result of an increase in fitness; whatever serves to augment fitness will of course be preserved. That is, after all, the purpose of the entire exercise: simulating life.

Or is your contention that natural selection is an invalid concept, that even microevolution is impossible, that the designer is responsible for all species' adaptability?
Cabal
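Cabal's point, that preservation of correct letters can be an emergent effect of selection for fitness rather than an explicit lock, can be made concrete. Below is a minimal cumulative-selection Weasel sketch in Python (the candidate originals above are Pascal; Python is used here only for brevity). The population size and per-letter mutation rate are illustrative assumptions of mine, not values from Dawkins. No letter is ever frozen: every position of every child may mutate each generation, yet correct letters tend to persist in the generational champions because losing one usually lowers fitness. That is the "implicit latching" discussed in this thread.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"
COPIES = 100     # offspring per generation (assumed; BW gives no figure)
MUT_RATE = 0.04  # per-letter mutation probability (assumed)

def fitness(s):
    # number of letters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng):
    # every position is free to change; nothing is explicitly latched
    return "".join(rng.choice(ALPHABET) if rng.random() < MUT_RATE else c
                   for c in s)

def weasel(rng):
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        children = [mutate(parent, rng) for _ in range(COPIES)]
        parent = max(children, key=fitness)  # breed from the closest child
        generation += 1
    return generation
```

With these settings a seeded run typically reaches the target in on the order of a hundred generations; printing the champions shows that a correct letter can occasionally be lost and regained, so the "latching" is statistical, not mechanical.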
September 30, 2009 at 1:24 AM PST
PS: Onlookers, note too how the above exercise in straining at a gnat over Weasel, while swallowing a camel on enfolded active info in evolutionary simulations ever since Weasel, distracts attention from this key, well-warranted conclusion of the M & D paper.
kairosfocus
September 29, 2009 at 10:20 PM PST
Rasputin: Kindly note that I was responding to a specific proposal by Cabal, on rewriting Weasel.

Recall -- and this specifically goes back to December last year -- the core challenge of evolutionary algorithms in general is to [without undue inadvertent injection of active information by investigators] create complex, information-rich function ab initio from plausible initial conditions [pre-life (~600 - 1,000 k bits), previous to existence of a novel body plan (10 - 100+ M bits)] without pre-loading key intelligently derived info about the overall topography of the fitness landscape and/or goals and their location.

Weasel 1986 fails the test by rewarding non-functional "nonsense phrases" on their making increments in proximity that "however slightly, most resemble[ . . .]" the defined and located target, the Weasel sentence; in effect using the known target and nonsense-phrase locations to create warmer-colder signals. This becomes critically dis-analogous to the claimed dynamics of chance variation and natural selection of complexly functional (reproducing, so von Neumann replicator; irreducibly requiring: code, stored blueprint, reader, effector, metabolic support to provide materials and energy) life forms. It is also significantly less complex than the credible real-world info generation challenges.

This thread is strictly about the provision of credible code c 1986. Secondarily, there has been a continuation of various objections to and concerning the observed behaviour of showcased o/p c 1986: apparent latching and ratcheting to target. It is clear from general discussion and from the probable code that Weasel c 1986 shows implicit latching as a reflection of its use of targeting and reward of mere non-functional proximity.

As touching Ev etc, these have come up in the various discussions over the years, and unfortunately tend to fall under similar problems, i.e. not accounting adequately for the origin of the required level of complex functionality within the search resources of the observed cosmos, and they tend to embed implicit or explicit knowledge of the overall fitness landscape, often working within an assumed island of function to carry out hill-climbing. The problem that is decisive is to get to the shores of such islands of function in the extremely large config spaces implied by the digital information in DNA etc, without intelligent direction.

Ev, from the paper you cite -- starting with the abstract and culminating in the conclusion -- runs into the problem that Shannon information (a metric of channel and memory transfer or storage capacity) is inadequate to define algorithmic functionality, as say Abel et al discuss in this 2005 paper; cf esp. Fig 4 and associated discussion on OSC, RSC and FSC. (The 2009 review paper here will provide a survey and guide to a considerable body of relevant literature, including of course the Durston et al metrics, which build on Shannon uncertainty to put it in the context of specific functionality. My 101-level intro here may help onlookers understand Shannon info [including average info per symbol in messages, aka entropy aka uncertainty] and its relationship to functionally specific complex info.)

For instance, peak Shannon info metric values for a given string length will be for a strictly random data string [as it has very low redundancy], when in fact algorithmic functionality will -- per the inherent structure and requisites of functional language and code -- have redundancy; e.g. as a rule, symbols will not be equiprobable in a real code or language [think of E vs X in English]. A random string will have peak Shannon info while failing to rise above the floor of non-function.

On p. 1058 of their recent IEEE paper, Marks and Dembski observe about the general problem with evolutionary algorithms as follows:
Christensen and Oppacher [7] note the "sometimes-outrageous claims that had been made of specific optimization algorithms." Their concern is well founded. In computer simulations of evolutionary search, researchers often construct a complicated computational software environment and then evolve a group of agents in that environment. When subjected to rounds of selection and variation, the agents can demonstrate remarkable success at resolving the problem in question. Often, the claim is made, or implied, that the search algorithm deserves full credit for this remarkable success. Such claims, however, are often made as follows: 1) without numerically or analytically assessing the endogenous information that gauges the difficulty of the problem to be solved and 2) without acknowledging, much less estimating, the active information that is folded into the simulation for the search to reach a solution.
In short, inadvertent injection of active information that gives a considerable gain over the reasonable capacity of random-walk searches in large config spaces is the critical flaw that consistently dogs evolutionary simulations, from Weasel to today's favourites such as Ev, Avida, etc. (And, no, I am not interested in a further long tangential discussion on the details of such programs. This thread has had a specific purpose, one long since achieved, and a major cluster of tangents has already been addressed.) GEM of TKI
kairosfocus
September 29, 2009 at 10:07 PM PST
kairosfocus, "All you have to do is write a program that without using the target sentence and a distance to target metric, reliably achieves it in several dozens to several hundreds of generations, showing implicit [quasi-]latching-ratcheting as it converges on target by a real, current functionality anchored metric of fitness."

You seem to be asking for a simulation of evolution that doesn't have a fixed target, correct? If so, have you seen Thomas Schneider's ev? It seems to meet your criteria.
Rasputin
September 29, 2009 at 7:06 AM PST
Cabal: All you have to do is write a program that without using the target sentence and a distance to target metric, reliably achieves it in several dozens to several hundreds of generations, showing implicit [quasi-]latching-ratcheting as it converges on target by a real, current functionality anchored metric of fitness. I would love to see the result. GEM of TKI
kairosfocus
September 29, 2009 at 5:31 AM PST
Instead of discussing ancient versions of Weasel, wouldn't it be possible to write a version consistent with the principles of evolution, for the sole purpose of demonstrating the effect of selection for fitness? That's the question, isn't it? That random mutations and natural selection can cause adaptation to a fitness landscape? And even allow for adaptation to a changing landscape too, simulated by a target string subject to changes over time? I believe competent programmers can write Weasel programs in a very short time, say a couple of hours? (I might need a couple of days, but I haven't done any programming for many years.)
Cabal
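Cabal's suggestion of a target string subject to changes over time is easy to prototype. The sketch below (Python; all parameter values, the drift interval, and the function name are illustrative choices of mine) reuses a standard mutate-and-select loop but alters one letter of the target at fixed intervals, so selection always acts against the current "environment" rather than a fixed endpoint.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
COPIES, MUT_RATE, DRIFT_EVERY = 100, 0.04, 100  # illustrative settings

def matches(s, target):
    return sum(a == b for a, b in zip(s, target))

def mutate(s, rng):
    return "".join(rng.choice(ALPHABET) if rng.random() < MUT_RATE else c
                   for c in s)

def tracking_weasel(generations, rng):
    target = "METHINKS IT IS LIKE A WEASEL"
    parent = "".join(rng.choice(ALPHABET) for _ in target)
    for gen in range(generations):
        if gen and gen % DRIFT_EVERY == 0:
            # the "fitness landscape" drifts: one target letter changes
            i = rng.randrange(len(target))
            target = target[:i] + rng.choice(ALPHABET) + target[i + 1:]
        children = [mutate(parent, rng) for _ in range(COPIES)]
        parent = max(children, key=lambda c: matches(c, target))
    return matches(parent, target), len(target)
```

Because re-convergence after a one-letter drift takes far fewer generations than the drift interval, the population stays close to the moving target throughout the run.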
September 29, 2009 at 5:23 AM PST
Onlookers: It is clear that the objection strategy continues to be ever increasing degrees of tangentiality.

I called attention yesterday to the main issues with Weasel in general, and the key turning points of the already tangential debates over whether or not Weasel c 1986 latched and ratcheted, and how that could have been done. Similarly, the particular focus for this thread has been addressed, and it seems that on balance -- per inference to best, factually anchored (and provisional) explanation -- we now credibly have in hand the original Weasel code.

The substantial conclusion is that Weasel c 1986 showed implicit latching and ratcheting to target, that Weasel c 1987 was materially different (so the video is not a good counterpoint to the conclusion), and that in each case, as targeted search rewarding non-functional nonsense phrases on merely being closer to target, Weasel is fundamentally disanalogous to the claimed blind watchmaker, i.e. chance variation and natural selection across competing populations. Indeed, Weasel is an inadvertent demonstration of intelligent design using targets and/or so-called fitness landscapes and optimisation by hill-climbing techniques.

As touching Dr Dembski et al, it has been pointed out that while their analysis on p. 1055 of the IEEE paper is based on a constructed example and a particular model of variation as a weasel tracks to target, that does not change anything material about the reality of implicit latching: similar to explicit latching, it ratchets to target, and either of them could account for the mere facts c 1986 -- the excerpted runs and the description. On subsequent reported statements by CRD, and the above probable programs, we can see that Weasel credibly exhibited implicit latching-ratcheting. And EIL, sponsored by M&D, present a cluster of algorithms covering ways in which the o/p of 1986 could have been achieved explicitly or implicitly, or even by sheer random chance.

(It is noteworthy that objectors claiming that the EIL analysis in the IEEE paper etc caricatures the Dawkins original Weasel -- which they cannot provide, nor show to be uniquely characterised by the 1986 BW text -- characteristically do not reckon with that range of algorithms, or with the implications of observing that latching (a behaviour inferred from the 1986 run outputs) can be achieved explicitly and implicitly.)

G'day, GEM of TKI
kairosfocus
September 29, 2009 at 4:01 AM PST
--kf,

1) W. Dembski and R. Marks may have sponsored a cluster of algorithms reflecting the options on Weasel. But in their paper, they speak about one of these weaseloids, and W. Dembski states:
Our critics will immediately say that this really isn’t a pro-ID article but that it’s about something else (I’ve seen this line now for over a decade once work on ID started encroaching into peer-review territory). Before you believe this, have a look at the article. In it we critique, for instance, Richard Dawkins METHINKS*IT*IS*LIKE*A*WEASEL (p. 1055). Question: When Dawkins introduced this example, was he arguing pro-Darwinism? Yes he was. In critiquing his example and arguing that information is not created by unguided evolutionary processes, we are indeed making an argument that supports ID.
To talk about the algorithm presented in their paper isn't strawmannish, it's natural. And so the main point stands: they are not critiquing Dawkins's example; thus, they are not necessarily making an argument for ID.

2) As has also been discussed repeatedly, partitioning of the letters cumulatively into found and not-yet-found groups can happen explicitly or implicitly, and with all sorts of possible mutation rates or patterns on the letters in Weasel's nonsense phrases. But an implicitly latching search is not a partitioned search as described by W. Dembski and R. Marks in their paper. And so, their math doesn't apply. Or can you show otherwise? Just show me the math!
DiEb
September 28, 2009 at 11:46 PM PST
Onlookers: Plainly, there is little more of substance for Darwinists to object to in this thread. I note briefly:

1] Messrs Dembski and Marks -- as previously, repeatedly, noted -- have sponsored a cluster of algorithms reflecting the options on Weasel, so it is strawmannish to construe them as making up just one algorithm which can be cast as diverse from Dawkins's original Weasel. [Which, strictly speaking, we do not know to this day; for instance, Mr Dawkins claims that he does not recall whether or not W1 and W2 above were the originals. Recall, he has not published his original program source code, only a description that will fit with both explicit and implicit ratcheting patterns.]

2] As has also been discussed repeatedly, partitioning of the letters cumulatively into found and not-yet-found groups can happen explicitly or implicitly, and with all sorts of possible mutation rates or patterns on the letters in Weasel's nonsense phrases.

GEM of TKI
kairosfocus
September 28, 2009 at 11:18 PM PST
kairosfocus:
After all, ID theory is not God and Dembski is not its inerrant prophet.
Then why have you gone to so much trouble to try to interpret Dembski's words such that "Mr Dembski's overall description of the behaviour of the Weasel 1986 algorithm is generally correct"? (Considering that the point of WEASEL is to illustrate cumulative selection, how can a version that does not involve selection be considered generally correct?)
I should note, on a point of material context, that up to the day where on April 9th I demonstrated implicit latching (thanks to Atom's adjustable weasel), it had been hotly disputed by multiple Darwinist objectors that such was possible, for weeks and weeks, across multiple threads — complete with the most demeaning personalities. Nowhere above do we find acknowledgement of such inconvenient facts in the haste to say well, we all agree that implicit latching [or, what you call implicit latching . . . ] is possible as a program pattern of behaviour (which of course implies an underlying process or mechanism). Similarly, when I proposed on the law of large numbers that the showcased output of 1986 was on balance of evidence probably latched, this was sharply contested. Subsequently, that objection, too, has vanished without trace or acknowledgement that the balance on the merits in the end was not on the side of the Darwinist objectors.
I'm thoroughly confused. What "Darwinist objectors" ever disputed the fact that Dawkins' WEASEL exhibits implicit latching? It was you who sided with Dembski and Marks' description, which cannot be interpreted as implicit latching. Recall:
In this successfully peer-reviewed paper, on p. 5, they briefly revisit Dawkins's Weasel, showing the key strategy used in the targeted search:

E. Partitioned Search

Partitioned search [12] is a "divide and conquer" procedure best introduced by example. Consider the L = 28 character phrase

METHINKS*IT*IS*LIKE*A*WEASEL (19)

Suppose the result of our first query of L = 28 characters is

SCITAMROFNI*YRANOITULOVE*SAM (20)

Two of the letters, {E, S}, are in the correct position. They are shown in a bold font. In partitioned search, our search for these letters is finished. For the incorrect letters, we select 26 new letters and obtain

OOT*DENGISEDESEHT*ERA*NETSIL (21)

Five new letters are found, bringing the cumulative tally of discovered characters to {T, S, E, *, E, S, L}. All seven characters are ratcheted into place. Nineteen new letters are chosen and the process is repeated until the entire target phrase is found.

Now, beyond reasonable dispute, making a test case of such a famous example would not have passed peer review if it had been a misrepresentation.
(Emphasis in original.)
R0b
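The "partitioned search" excerpt quoted above translates directly into code. The following Python sketch is my reconstruction from the paper's verbal description (not the authors' code): matched positions are never revisited, and only the unmatched ones are redrawn on each query, which is the deterministic ratcheting that distinguishes it from a population-based Weasel.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def partitioned_search(rng):
    """Run one partitioned search; return the number of queries used."""
    phrase = [rng.choice(ALPHABET) for _ in TARGET]
    queries = 1
    while any(c != t for c, t in zip(phrase, TARGET)):
        for i, t in enumerate(TARGET):
            if phrase[i] != t:
                # only unmatched positions are redrawn; matched letters
                # are "ratcheted into place" and never change again
                phrase[i] = rng.choice(ALPHABET)
        queries += 1
    return queries
```

Since each unmatched position is an independent draw with success probability 1/27 per query, a run typically finishes in roughly a hundred queries, but only because every query consults the target letter by letter.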
September 28, 2009 at 2:37 PM PST
--kf,
All of this, it seems, is now to be swept away with the claim that there was no evidence that supported explicit latching as a possible mechanism, and that all along the evidence supported implicit latching, as though that is a triumph of the Darwinist objectors!
No, all of this is swept away because you failed to show how the math of the paper of Dembski and Marks is applicable to Dawkins's weasel.
DiEb
September 28, 2009 at 12:44 AM PST
Onlookers: Lest we forget, this thread started as a discussion of what seems to be the code for Weasel c 1986-7. These programs (objections to which, it seems, have been abandoned rather than conceded) support the claims -- hotly contested when first made -- that:

1] Weasel, c. 1986, shows latching-ratcheting behaviour, in the implicit form (thus, on the balance of evidence, implicit latching is the best explanation for the showcased runs and commentary in BW, c 1986);

2] Weasel c 1987 (per the BBC Horizon video previously confidently cited as "proof" that Weasel c 1986 did not latch) was a distinctly different program;

3] Weasel c 1986 (and 1987 too) was a case of targeted search that rewarded mere proximity increments of non-functional "nonsense phrases" to the target, using warmer-colder signalling; thus:

4] Weasel 1986 was fundamentally dis-analogous to the claimed BLIND watchmaker process, i.e. chance variation and natural selection across competing sub-populations.

These formerly hotly contested and dismissed or even derided points have plainly been abundantly vindicated. That is a significant achievement. However, all that this seemingly means for too many objectors is that the grounds for objecting should be shifted. (I say this on grounds that what we see here at UD is -- on sadly all too abundant evidence on many topics -- likely to be a milder form of the objections being made elsewhere.) So, after increasing degrees of tangentiality, the process of inference to best explanation on evidence that comes in bit by bit is dismissed with claims that Weasel showed no evidence that latching could have been explicit.

This was already answered, and only a brief summary is required:

1 --> Inference to best, empirically based explanation is relative to evidence and is not a proof; indeed, there is a counterflow between the direction of implications (from explanation to empirical data) and the direction of empirical support (from observed facts to explanations).

2 --> Thus such abductive explanations -- a characteristic feature of science -- are inescapably provisional and a matter of relative degree of support rather than absolute decision. (That's another way of saying that scientific work -- often implicitly -- embeds trust in faith-points at various levels, up to and sometimes including core worldview presuppositions.)

3 --> In the case of the data on Weasel c 1986, three logically possible explanations cover the data and are live options: T1 -- pure chance, T2 -- explicit latching, T3 -- implicit latching. [This presumes that we have already accepted another previously hotly contested and dismissed point: the excerpted, showcased runs c 1986 support the inference that these runs exhibited latching of correct letters in generational champions.]

4 --> While pure chance is strictly possible, it is vastly improbable and so is soon enough eliminated relative to the other two options.

5 --> Explicit latching (as the months of strident objections to the possibility of implicit latching show) is conceptually simpler and -- per various reports of programmers seeking to replicate Weasel -- is "easier" to code for. Thus, on the initial balance of evidence per the facts of 1986, it was the "likely" explanation.

6 --> C 2000 and beyond, indirect reports from Mr Dawkins to the effect that the original Weasel 1986 was not explicitly latched [and note, video of Weasel c 1987 was often cited by Darwinist objectors in claimed substantiation] tipped the balance in favour of T3, implicit latching, as soon as attention was drawn to them back in March.

7 --> This concept was hotly objected to, in a virtual firestorm of often highly heated objections, which only began to die down after the EIL Weasel GUI allowed demonstrations to be posted at UD as at April 9th.

8 --> Subsequently, a contest to produce the credible original Weasel c 1986 was mounted, and it now seems that we have two credible candidates: W1 for 1986, and W2 for 1987. [If these prove to be credibly correct (as the trend seems to be) we may reasonably conclude as already noted, i.e. Weasel c 1986 implicitly latched.]

_____________

All of this, it seems, is now to be swept away with the claim that there was no evidence that supported explicit latching as a possible mechanism, and that all along the evidence supported implicit latching, as though that is a triumph of the Darwinist objectors! That, onlookers, speaks volumes. G'day, GEM of TKI
kairosfocus
September 27, 2009 at 11:57 PM PST
--kf, sorry, I thought the paper of W. Dembski and R. Marks was the key point of all the threads of the last couple of days. I have to say that your opinion of latching/ratcheting etc. is of little consequence to me, but the statements of W. Dembski and R. Marks carry some weight. Therefore it was important for me to stress the point that the algorithm/example of their paper labeled Partitioned Search isn't the algorithm described by R. Dawkins in his book The Blind Watchmaker - and so, that the premise of W. Dembski's reasoning at this very website (criticizing R. Dawkins's weasel => criticizing evolution) doesn't hold. And this is absolutely independent of a discussion of the merits of Dawkins's weasel...
DiEb
September 27, 2009 at 2:42 PM PST
My apologies. The only "evidence" that TBW has an explicit latching mechanism is that kairosfocus, Dembski, and U Monash thought it did, until corrected. Yet thousands of other readers had no problem in understanding what duplication "with a certain chance of random error - 'mutation' - in the copying" meant. You appear to be unable to re-assess the evidentiary value of the latching behavior in light of your own data on Proximity Reward Searches. Sad, really. I was hoping you might produce some actual evidence. But no. So it remains true that all the evidence points to an implicit latching weasel (iyw). Given that all the evidence points to an implicit latching weasel and no evidence points to an explicit latching mechanism, would it be unreasonable to assume, as of May 2009, an explicit latching mechanism? A simple yes or no will suffice, kairosfocus, but I don't want to subject you to undue strain. As a first step, let's see if you can keep your reply under 1,000 words. BTW, I do enjoy your continued use of the word "mechanism" to describe "behavior", "partitioned" as a synonym for "latched", and your determined efforts to avoid DiEb's incredibly simple point: D&M describe a "divide and conquer" search, with the appropriate math. TBW weasel, including the code shown above, cannot be such a search.
DNA_Jock
September 27, 2009 at 7:23 AM PST
PS: I should note, on a point of material context, that up to the day where on April 9th I demonstrated implicit latching (thanks to Atom's adjustable weasel), it had been hotly disputed by multiple Darwinist objectors that such was possible, for weeks and weeks, across multiple threads -- complete with the most demeaning personalities. Nowhere above do we find acknowledgement of such inconvenient facts in the haste to say well, we all agree that implicit latching [or, what you call implicit latching . . . ] is possible as a program pattern of behaviour (which of course implies an underlying process or mechanism). Similarly, when I proposed on the law of large numbers that the showcased output of 1986 was on balance of evidence probably latched, this was sharply contested. Subsequently, that objection, too, has vanished without trace or acknowledgement that the balance on the merits in the end was not on the side of the Darwinist objectors.
kairosfocus
September 27, 2009 at 4:17 AM PST