Uncommon Descent Serving The Intelligent Design Community

The Simulation Wars


I’m currently writing an essay on computational vs. biological evolution. The applicability of computational evolution to biological evolution tends to be suspect because one can cook the simulations to obtain any desired result. Still, some of these evolutionary simulations seem more faithful to biological reality than others. Christoph Adami’s AVIDA, Tom Schneider’s ev, and Tom Ray’s Tierra fall on the “less than faithful” side of this divide. On the “reasonably faithful” side I would place the following three:

Mendel’s Accountant: mendelsaccount.sourceforge.net

MutationWorks: www.mutationworks.com

MESA: www.iscid.org/mesa

Comments
jerry,
There is no accumulation of information through selection that I have ever seen so how could I present a better way to an example I am not aware of.
Wow. Even Dembski allows that chance-and-necessity can give rise to 400 or fewer bits of complex specified information. I am utterly baffled by your response. A cattle breeder selects individuals to mate, but does not control the variation in calves. Suppose she does muscle biopsies and selects the leaner individuals in her herd for mating. There is much more information in the herd than the outcomes of the biopsies, but the breeder ignores it. The herd becomes leaner over time. The breeder knows only the outcome -- not the physiological changes that yield the outcome. The "how to be lean" information that enters the herd as a consequence of selection does not come from the "intelligence" of the breeder. That statistical information can be gained through iteration of reproduction-with-variation and selection is certain. It is worthwhile to illustrate how this kind of information gain through trial and error differs from obtaining the combination of a lock through trial and error.
Sal Gal
March 29, 2009 at 12:33 AM PDT
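The breeder analogy can be made concrete with a few lines of code. The sketch below is only a toy model with assumed numbers (herd of 200, leanest half kept each generation, offspring varying randomly around a parent's value); it is not anyone's published simulation, but it shows the mean of the selected trait rising even though the variation itself is blind to the breeder's goal.

```python
import random

def breed_for_leanness(herd_size=200, generations=30, keep_fraction=0.5):
    """Toy truncation selection: each animal carries a hidden 'leanness'
    score; the breeder measures only that score, keeps the leanest half,
    and breeds from it. All numbers are illustrative assumptions."""
    herd = [random.gauss(0.0, 1.0) for _ in range(herd_size)]
    for _ in range(generations):
        herd.sort(reverse=True)                          # leanest first
        parents = herd[:int(herd_size * keep_fraction)]  # selection acts on the measurement alone
        # Offspring vary randomly around a parent's value; the variation
        # itself is blind to what the breeder wants.
        herd = [random.choice(parents) + random.gauss(0.0, 1.0)
                for _ in range(herd_size)]
    return sum(herd) / herd_size

if __name__ == "__main__":
    print("Mean leanness after selection:", round(breed_for_leanness(), 2))
```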
Sal Gal, "No one here has evinced a willingness to take on the Wandering Weasel. My best guess is that you don't understand it. And if you don't understand the consequences of such a small modification to the Weasel program, why would you suggest that Dawkins should have given a different illustration? Do you have a better way of getting across the idea of accumulation of information through selection? If so, then out with it." Would you please update me about what exactly the "Wandering Weasel" is? At first it was "Methinks it is like a weasel." Are we sure that it is a weasel now, and that it is wandering around? And do we have a better way of "getting across" something that shouldn't be "gotten across"? Why would we attempt a positive correction of a flawed analogy explaining alchemy? Not only is the analogy flawed, the thing it is supposed to analogize is flawed. It's one misconception heaped on another. I'm sure that there will be much less comprehensibility the more modifications that are introduced into something that in its simplest form is incomprehensible.
Clive Hayden
March 29, 2009 at 12:19 AM PDT
"Do you have a better way of getting across the idea of accumulation of information through selection? If so, then out with it." There is no accumulation of information through selection that I have ever seen so how could I present a better way to an example I am not aware of. Maybe you should present an/the instance. We would all be interested. It would be a first, since I am not aware of even Dawkins ever doing so.jerry
March 28, 2009 at 8:39 PM PDT
jerry,
The fact that people persist on this says more about their purpose here than anything else.
Yes, I totally agree. Given that, who has persisted for years in constructing complicated "refutations" (based on misunderstanding) of a simple illustrative program? And what would that say about their purpose?
David Kellogg
March 28, 2009 at 8:15 PM PDT
jerry,
Now that we have established that, it can be put in the black hole that I recommended and forgotten about.
I will not be forgetting anytime soon the IDiocy of attacking for years a pop-sci illustration -- no, a misunderstanding of the illustration -- of an aspect of the theory of evolution. Dembski and Marks evidently have committed to publishing a peer-reviewed paper that falsely attributes "partitioned search" to Dawkins, and the Weasel program will go into a black hole no sooner than their paper does. Various parties explained their misunderstanding to them long ago. And Bob Marks' inclusion of the Weasel Ware propaganda at his Evolutionary Informatics Lab website makes the whole affair surreal. It is absurd to suggest that Dawkins was out to prove anything with the Weasel program. He was, as an authority on evolutionary theory, illustrating for a mass readership a belief of the scientific mainstream. Dembski's rhetorical strategy has been to attack the illustration and leave it to naive readers to conclude that he has refuted scientific theory. There is absolutely nothing wrong with giving a simple illustration of a belief that in fact has very strong support. I have shown that it is trivial to turn the Weasel program into a much stronger illustration. But the Wandering Weasel is also more difficult to understand. (And I have in mind three more modifications that will make the program more realistic, each at a cost of decreased comprehensibility.) No one here has evinced a willingness to take on the Wandering Weasel. My best guess is that you don't understand it. And if you don't understand the consequences of such a small modification to the Weasel program, why would you suggest that Dawkins should have given a different illustration? Do you have a better way of getting across the idea of accumulation of information through selection? If so, then out with it.
Sal Gal
March 28, 2009 at 7:17 PM PDT
Good grief, jerry -- denying what's obvious to everybody is only going to make things worse.
skeech
March 28, 2009 at 3:12 PM PDT
"But it doesn’t model real evolution well, which Dawkins himself pointed out." Now that we have established that, it can be put in the black hole that I recommended and forgotten about. "In comment #69, you reversed yourself " Whoa, when desperate, claim the other person fails to answer an inane question or contradicts himself. Your question has been answered in what I have said and I have not contradicted myself. A contradiction only exists in your mind and is probably due to your failure to read things carefully. Latching is closer to reality than the non latching scenario set up in the Weasel program. An even closer to a reality scenario is one that would only rarely eliminate the latching and I mean rare but in terms of the simulation it would probably only extend the simulation a few steps. The parameters of the program is nowhere close to reality so trying to salvage it by suggesting which of latching, almost latching and no latching is best, really misses the point. Instead of a beauty contest, we have the Weasel Ugly contest. Which of the very, very, very ugly incompetent inappropriate programs is the least ugly. If you want to fight over this, be my guest. Give it a rest and move on to something of substance. The fact that people persist on this says more about their purpose here than anything else. Many of the comments people make really don't deserve an answer. If they were sincere, the questions and approach would be quite different.jerry
March 28, 2009 at 2:00 PM PDT
Good job, Skeech. A key idea is that mutation happens independently of selection for fitness. First the phrase is subject to mutation according to rules that know nothing of the criteria for selection, and then, once that is done, the fitness function determines whether that phrase, as part of a generation of phrases, actually survives.
hazel
March 28, 2009 at 12:54 PM PDT
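hazel's two-step description (mutation first, blind to the target; selection on the fitness score afterward) maps directly onto a minimal non-latching Weasel sketch. The 5% per-letter mutation rate and 100 offspring per generation below are assumptions for illustration only; the thread never settles what parameters the 1986 program actually used.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 27 characters
MUTATION_RATE = 0.05                      # assumed per-letter mutation probability
POPULATION_SIZE = 100                     # assumed offspring per generation

def mutate(phrase):
    """Mutation step: every letter may change, blind to the target."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in phrase)

def fitness(phrase):
    """Selection criterion: number of letters matching the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def run():
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(POPULATION_SIZE)]
        parent = max(offspring, key=fitness)   # selection step: best of the litter
        generation += 1
    return generation

if __name__ == "__main__":
    print("Reached the target in", run(), "generations")
```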
jerry, I notice that you avoided answering madsen's question, which was:
But now I’m confused as to what your complaint with the weasel algorithm actually is. You seem to be saying that the algorithm should be modified to include a bias in favor of preserving correct letters. However, in all the discussion that has gone on here, I can’t remember anyone citing a single case of letter reversion, while using what are considered “realistic” parameters. In fact, the apparent lack of reversion was what got these threads started in the first place. Why would the program need additional tweaks in order to prevent something which happens so rarely anyway?
I can understand your desire to change the subject, since you have in fact contradicted yourself. In comment #43, you wrote:
Explicit latching as it is defined here more closely resembles reality.
In comment #69, you reversed yourself and concluded that the bias for preserving correct letters belonged in the selection step, not the mutation step. That is exactly what we've been trying to get you to see for half of this thread. I'm glad you finally understand.
skeech
March 28, 2009 at 12:24 PM PDT
jerry,
I have said all this discussion is folly because the program is nonsense. Do you really think the Weasel program has any value?
Well, I think it does have some value as an extremely simple demonstration of the power of mutation and selection. But it doesn't model real evolution well, which Dawkins himself pointed out.
madsen
March 28, 2009 at 12:22 PM PDT
"But now I’m confused as to what your complaint with the weasel algorithm actually is. You seem to be saying that the algorithm should be modified to include a bias in favor of preserving correct letters. " No, I think the whole discussion should be abandoned and the Weasel program put in a black hole where it rightly belongs only to be resurrected to show why it is useless and not to emulated. I have said all this discussion is folly because the program is nonsense. Do you really think the Weasel program has any value? Have you run the two Monash version of Dawkins programs, one of which I was told is a good replication of the original? The other is a latched version. I find the persistence of this silliness the most interesting thing about this discussion. Now that I understand that the Weasel program is nonsense, we can point to this discussion to shorten any further discussions in the future. That is the whole value of this thread. A way to short circuit further inane discussions down the road.jerry
March 28, 2009 at 11:46 AM PDT
KF [64,65]:
JT:9] any set of physical contigencies that result in a certain outcome, (plus any associated natural laws) are also a program to generate whatever outcome it is they generate.
Nope. There are [a] undirected, stochastic contingencies, and [b] directed contingencies, some of which may be [c] set up in programmed, algorithmic systems. Programs that have to work on dynamical entities will [d] use the forces and materials of nature to bend them into structures fitted to to the intent of the designer. Simple, easily observed empirical facts.
My point was as follows: Take a volcano, for example: there would be physical conditions that came into existence that triggered its eruption. Some of those conditions would be relatively permanent, for example the size, shape, and location of the volcanic mountain in question. Other factors that eventually precipitated the volcanic eruption would possibly be chance events of some sort or another. But there would be some complete set of deterministic factors that came together at some point in time to cause that volcano to erupt. That set of factors would equate to a program causing the volcanic eruption. Some aspects of that program came into existence by chance, presumably; I'm not denying that.
(And, BTW, what observed or predicted empirical evidence leads to the conclusion that life could be so written into the laws of nature, as an explanation claimed to be superior to the obvious: design by designers.)
My thinking is, some event is designed by whatever causes precipitated its occurrence. That volcanic eruption, for example, was designed and characterized by whatever physical forces precipitated it and constrained it. What sort of lava flow was it - that would be designed by the necessary precipitating physical causes, not an "Intelligent Agent" for example. Even in the case of human design, there is a context of culture, of necessity, of existing technology that must be considered to fully account for the emergence of new technology. "Necessity is the mother of invention." (not "Intelligent Agency".) Why would this gargantuan universe exist if it was not integral to Man's creation? From a Biblical perspective, aren't we sort of the end point? Is the remaining 99.99999999999999999999999999999 percent of the universe essentially garbage? Why does it exist?
12] the monkey will eventually hit some 10 word sequence from Hamlet.
Just as, by chance he will eventually reach Shakespeare’s full corpus. The issue is — again — probabilistic resources to get to functionality
But if recognizable sequences from Hamlet are preserved, probabilistic resources don't enter into it.
JT
March 28, 2009 at 11:11 AM PDT
jerry, At least we agree that bias should not be placed in the mutation step. But now I'm confused as to what your complaint with the weasel algorithm actually is. You seem to be saying that the algorithm should be modified to include a bias in favor of preserving correct letters. However, in all the discussion that has gone on here, I can't remember anyone citing a single case of letter reversion, while using what are considered "realistic" parameters. In fact, the apparent lack of reversion was what got these threads started in the first place. Why would the program need additional tweaks in order to prevent something which happens so rarely anyway?
madsen
March 28, 2009 at 10:36 AM PDT
"In other words, in which step of the algorithm should the bias of preserving correct letters be placed—in the mutation step, or the selection step? Probably in the selection step and by the way the selection is nonsense so to try and shore up this part is contributing more to the ludicrousness of this program. So it is should be highly unlikely that an offspring without all the correct letters should be eligible for selection. That is why latching makes more sense but almost latching could be argued as even more sense so that only rare offspring that do not have all the letters would be available to reproduce. But the real world is quite different. Evolution seems to show stasis and that is what latching is about and it is due to the conservative aspects of natural selection. Over time there seems to be something added to species and that is what the theory of punctuated equilibrium is all about. Part of the genome lays fallow till a new capability is available through mutation. Few but the real die hards ascribe to the traditional Darwinian process of small changes over time to functional elements. So latching is much closer to reality than non latching. Dawkins is one of those die hards so this program as well as Dawkins should be thrown under the bus and everyone should move on to the new paradigm. Which by the way is also just as unproven as the Darwinian one of small functional changes. The Gould paradigm is of small non functional changes suddenly leading to new capabilities. So if there is any teaching value in the Weasel program it is that "The King Is Dead"; "Long Live the King." But unfortunately the new king is also as weak as Henry's sons. Not quite still born but not strong to last till maturity.jerry
March 28, 2009 at 9:48 AM PDT
kairosfocus [64], your comment helps me understand what you mean by "implicit latching." You mean "non-latching." Let me explain. You write:
In the implicit latching case, a co-tuned blend of population size [vs probabilities of multiple mutations], random changes on population members, preservation of the original champion in a significant fraction of the generation and rewarding of mere proximity to target causes latching that is implicit but very real. This is because it is practically — not theoretically — impossible for a double mutation that simultaneously gives a good new letter and reverts an old correct letter to win as champion in the current generation.
The added emphasis shows that you know that letters are not latched but subject to random mutation. What do you mean by "implicit latching," then? I believe you mean that selection will tend to conserve correct letters. Well, okay. So what? Nobody suggested otherwise. Mutation is still random on the letters. As for your claim that selection for a letter reversion is theoretically possible but will not happen practically, this is simply not true. I've run several non-latching Weasel programs and found consistent if rare examples. I even posted an entire run in the other thread and pointed out the reversion. So you're wrong there. In short, by your own words, "implicit latching" simply means that beneficial mutations tend to be conserved when selection happens. And that's always been what Dawkins has claimed. David
David Kellogg
March 28, 2009 at 8:55 AM PDT
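The rarity of reversions that David Kellogg reports can be checked directly by counting the generations in which the chosen champion has fewer correct letters than its parent. The sketch below uses assumed parameters: with 100 offspring per generation such reversions are vanishingly rare (the "implicit latching" effect under discussion), while with a much smaller population, say 10, they do turn up occasionally.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(phrase, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def fitness(phrase):
    return sum(a == b for a, b in zip(phrase, TARGET))

def count_reversions(pop_size, runs=20):
    """Count generations whose chosen champion has fewer correct letters
    than its parent, i.e. a reversion that selection let through."""
    reversions = generations = 0
    for _ in range(runs):
        parent = "".join(random.choice(ALPHABET) for _ in TARGET)
        while parent != TARGET:
            best = max((mutate(parent) for _ in range(pop_size)), key=fitness)
            if fitness(best) < fitness(parent):
                reversions += 1
            parent = best
            generations += 1
    return reversions, generations

if __name__ == "__main__":
    for pop in (10, 100):
        r, g = count_reversions(pop)
        print(f"population {pop}: {r} reversions in {g} generations")
```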
jerry,
If you read what I said, I never said that once a letter reaches function, it should not be prevented from disappearing. Just that it should be rare and not the equivalent of the mutation rate because there will be a strong bias by natural selection to preserve a functional state.
Ok, let me see if I'm understanding: Are you saying (in weasel terms) that correct letters should have a lower chance of mutation than incorrect letters? Or are you saying that correct letters in the parent should very rarely revert to incorrect letters in the "best" offspring? In other words, in which step of the algorithm should the bias of preserving correct letters be placed---in the mutation step, or the selection step?
madsen
March 28, 2009 at 8:00 AM PDT
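madsen's either/or question can be stated compactly in code. In the sketch below (same assumed 27-character alphabet and 5% rate as the earlier sketches), `mutate_biased` puts the preservation bias in the mutation step by giving already-correct letters a much smaller assumed mutation probability; placing the bias in the selection step instead requires no change to mutation at all, as the closing comment notes.

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "

def mutate_biased(phrase, target, rate=0.05, correct_rate=0.001):
    """Bias placed in the mutation step: letters that already match the
    target mutate at a much lower (assumed) rate than incorrect ones."""
    out = []
    for c, t in zip(phrase, target):
        r = correct_rate if c == t else rate
        out.append(random.choice(ALPHABET) if random.random() < r else c)
    return "".join(out)

# Bias placed in the selection step requires no change to mutation at all:
# every letter mutates at the same rate, and correct letters persist only
# because the best-of-generation filter almost never favours a child that
# has lost a match. Setting correct_rate=0 above would be outright latching.
```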
"If you believe that latching is more realistic in actual biology, how are functioning genes prevented from mutating?" They are not. I never said they weren't. Read what I said. They are just conserved by natural selection and it is not realistic that a population would be replaced by one that does not have a functional element. But the program treats this outcome as likely for several of the generations. It is the program that is nonsense. If you read what I said, I never said that once a letter reaches function, it should not be prevented from disappearing. Just that it should be rare and not the equivalent of the mutation rate because there will be a strong bias by natural selection to preserve a functional state. Latching is a way of doing that but if one wants to get more realistic then make the chance of it being switched out much lower than the mutation rate. However, absolute latching is more realistic than the chance of the mutation rate switching it out. I bet Dawkins did the latching first and found out the example was too fast and consequently not believable so he went for the even less realistic version just because it searches longer. But in either case the example has no value except to prolong irrelevant discussion about it for people who have too much time on their hands. "Deleterious mutations do happen in reality — just ask anyone with cystic fibrosis. The non-latching case, which allows deleterious mutations, is therefore more realistic than the latching case. Again, please slow down and think this through before responding." No one said they didn't happen. They just do not take over the population in one generation under any plausible scenario just because Richard Dawkins programs it as such. So the latching case is more realistic. See the discussion above. All this discussion is due to the ineptness of this program as both an example of evolution and as a teaching tool. What it is, is a propaganda tool. And a lot of people fell for it. So look at Dawkins as a master of propaganda and not as someone well versed on evolution. As I said above he would make a good used car salesman. So smile while you are being conned by Sir Richard's gobbledegook.jerry
March 28, 2009 at 7:31 AM PDT
kairosfocus, In the weasel program, assuming "implicit latching", do you believe that a correct letter has a different probability of mutating than an incorrect letter? I'm looking for a short answer, a few sentences at most.
madsen
March 28, 2009 at 7:04 AM PDT
8] If you’re talking about first life (or first RNA strand or whatever) then obviously that cannot come about by Darwinian selection. Everyone would agree with that. But does that logically necessitate that it just poofed into existence instantaneously via Divine fiat? I will ignore the ad hominem laced inference to Creationism, especially as from the very outset of the modern design theory, Thaxton et al stated explicitly that OOL by design per empirical evidence of want of thermodynamic credibility in abiotic environments, does not directly implicate an extracosmic designer. (Forrest et al have been setting up convenient -- and dishonest -- strawmen to knock over.) But the main point is that you here implicitly acknowledge the force of the Hoylean challenge. The FSCI in first life has to be accounted for, and there is no empirically credible, spontaneous hill climbing algorithm -- no BLIND watchmaker -- to get there. (And onlookers, in Section A my always lined I discuss how Shapiro and Orgel mutually destroy RNA world and metabolism first imagined scenarios for chemical evo. In section B, I take apart the wreckage, and back it up with Appendix A on the relevant thermodynamics, building on . . . Hoyle's tornado in a junkyard.) Similarly, body plan level architectures are highly complex, functionally deeply and multiply integrated -- and so credibly highly irreducibly complex [observe Jerry, where I think Behe's IC comes most seriously into play] -- starting from the observed flexible program flexible data storage, molecular nanotech computer in the heart of the cell. And, major body plans similarly brim with FSCi-rich, irreducibly complex, multiply and subtly functionally integrated systems. Think: autonomous robots that are self replicating, based on intelligent polymer molecular nanotech information systems. That's what I find myself looking at, from my applied physicist's perspective, and that screams: design. (And, BTW, see why AI does not faze me, once we look at Derek Smith's two-tier control processor for MIMO servo systems, as I discuss in App 7 the always linked? [Do you see that I am heading to reverse engineering life systems, to forward engineer a new generation of REALLY smart systems? Eventually, systems that can be loaded into embryonic robots launched across space to seed new planets and even solar systems, "eating" local resources, then organising and [partly] terraforming them to set up for colonisation? Starting with the Moon, Mars, Ceres and possibly some of those big Jovian moons? THAT'S what lurks in Design Theory.]) 9] any set of physical contigencies that result in a certain outcome, (plus any associated natural laws) are also a program to generate whatever outcome it is they generate. Nope. There are [a] undirected, stochastic contingencies, and [b] directed contingencies, some of which may be [c] set up in programmed, algorithmic systems. Programs that have to work on dynamical entities will [d] use the forces and materials of nature to bend them into structures fitted to to the intent of the designer. Simple, easily observed empirical facts. Since you mention physical programs, perhaps you suggest life is written into the laws of he cosmos, so that what looks contingent is not. This leads straight to the cosmological inference to design on a grand scale. 
(And, BTW, what observed or predicted empirical evidence leads to the conclusion that life could be so written into the laws of nature, as an explanation claimed to be superior to the obvious: design by designers.) 10] If those intermediates in Dawkin’s Weasel are functional, then your 1000 bits are irrelevant - right? As the Spartans once famously replied: IF. In fact, on evidence, 10^40 configs is a toy example relative to the initial state of function of the type of entities we really need to get to, and Dawkins -- not GEM of TKI or WmAD or Royal Truman etc -- acknowledged that he intermediates were NON-FUNCTIONAL. Again: ". . . nonsense phrases." No antecedent in a hypothetical inference, no warrant for inferring the consequent on that basis. 11] Consider the example of Monkeys with Typewriters, but with some modifications: You have a bunch of monkeys that have access to a hat filled with a couple of thousand words or so - all words in Hamlet. They can go and grab a word out of the hat and then . . . This is of course a rabbit trail leading away from the material case on the table circa 1986. But also it brings up a most interesting issue: the challenge is to get to the initially functional entities by spontaneous processes then assemble into integrated systems. The polymers of life [and some of their monomers] are multiply thermodynamically implausible for any credible prebiotic spontaneously formed environment. (This, I discuss in Section B my always linked.) In a Weasel case, if the threshold of getting to relevant words first and then assembling them in grammatically proper order is set, it combinatorially -- and algorithmically -- explodes. 12] the monkey will eventually hit some 10 word sequence from Hamlet. Just as, by chance he will eventually reach Shakespeare's full corpus. The issue is -- again -- probabilistic resources to get to functionality. Which is still unmet. 13} 49: with an actual population, and beneficial mutations overtaking the entire population (a simplifying assumption as well admittedly but certainly comprehensible) any detrimental mutations in certain individuals (of already benficial changes) will be swamped by the rest of the population that did not have these mutations. In short, implicit latching. AND in your case, with tearaway run to the target through multiple beneficial mutations on a probabilistically and empirically implausible model. [Think about the skirts of the mutations distribution and what happens with 500 iterations per generation of 5% per letter odds of mutation.] Cf Behe's Edge of Evo and the realistic odds of double mutations to get a benefit. Of course, this at best overlooks the point that I have ALWAYS spoken in the context of a population of mutants, for both explicit and implicit latching. In the case of something like Apollos' runs, that is equivalent to taking every ten or whatever number of mutants catches your fancy, and picking the best of that cluster to move ahead. The simplest implementation would be to pick a champion, then do a mutant and compare setting up the candidate best next champion from the comparison, noting distance to target. Repeat the mutation on the existing champion n times, and the resulting best approach to target becomes the next current champion. Repeat till convergence hits target. There is no material difference between this and doing the same on one individual at a time, then incrementing the count. 
14] hazel, 50: all letters need to have the possibility of mutation because mutation is random in respect to fitness: that is a critical part of the model. Post-facto spinning and ink cloud spewing to make a fast getaway. As I pointed out this morning in the other thread, this has no good warrant from TBW, ch 3. Dawkins circa 1986, by his statements, abundantly warrants a per letter partitioned search, latching approach. All else on this is, in the end, squid ink-cloud obfuscation. 15] the fact that occasionally a correct letter mutates doesn’t have a net effect This would reflect, of course the impact of implicit [quasi-]latching due to co-tuning of mutation rates, population size and the proximity to target filter. Which is what I have long since pointed out, in the first thread where this issue was raised. Many days ago now. 16] Skeech, 55: Latching is equivalent to preventing deleterious mutations. Not at all, it is primarily the effect of rewarding on proximity to target without reference to function. As I must repeat yet again: until you have achieved function, no credible natural selection ion differential function across variation can be discussed. Weasel begs the Hoylean question by imposing effectively no or an implausibly simplistic criterion of function in more modern versions. And, as noted, Dawkins acknowledged that in 1986, that his implementation rewards non-functionality on proximity to target. Jerry, in 59 aptly sums up the matter:
Natural selection would select the progeny with the functional traits and eliminate those when the traits were not conserved. A functional state is treated as equivalent to a non functional state by Dawkins’ Weasel program which is very bad evolutionary theory by the high genius of evolution. [But, Jerry, fair comment: onward language gets a little testy.]
17] SG, 62: what Dawkins was illustrating depends in no way on "targeted search," "partitioned search," or "implicit locking." The only thing that Mr Dawkins succeeds in "illustrating" is that intelligent design based on targetted search can effectively scan an otherwise resource-wise unsearchable config space. It does so through active information. Again, Am H Dictionary:
il·lus·trate v. il·lus·trat·ed, il·lus·trat·ing, il·lus·trates v.tr. 1. a. To clarify, as by use of examples or comparisons: The editor illustrated the definition with an example sentence. b. To clarify by serving as an example or comparison: The example sentence illustrated the meaning of the word. 2. To provide (a publication) with explanatory or decorative features: illustrated the book with colorful drawings. 3. Obsolete To illuminate. v.intr. To present a clarification, example, or explanation.
GEM of TKI
kairosfocus
March 28, 2009 at 5:43 AM PDT
H'mm Several points; pardon the need for a string of correctives, as it is plain that a spin game is being played out over in the Darwinian blogosphere. So, let us note for the record: 1] Pendulum, 35: Weasel was not a response to anything by Hoyle. Not so. At the turn of the 80's Hoyle [and Wickramasinghe] had raised the issue of the complexity of life, and the question was captioned by the odds of creating a cell per say the odds against the cluster of enzymes originating by chance being something like 1 in 10^40,000. This set he context for debates [and is part of the back-story on the Thaxton et al book that launched the modern design movement, TMLO], and Weasel was in its core an attempt to say that what is utterly improbable as a single step, is feasible if we look at baby steps. Excerpting Wiki on their overview on the Weasel Program:
Dawkins intends this example to illustrate a common misunderstanding of evolutionary change, i.e. that DNA sequences or organic compounds such as proteins are the result of atoms "randomly" combining to form more complex structures. In these types of computations, any sequence of amino acids in a protein will be extraordinarily improbable (this is known as Hoyle's fallacy). Rather, evolution proceeds by hill climbing.
The highlighted quote shows precisely the question-begging at stake. The DNA- RNA- Ribosome-Enzyme system constitutes a stored program and stored data computer, physically instantiated. Such entities are known to be irreducibly complex. Until one arrives at the threshold of elements to achieve physical functionality, one has nothing. And, until one has functionality, one cannot credibly address differtne4ial functionality in environments to get to natural selection, the blind watchmaker hill-climbing engine proposed. Further, the spontaneous synthesis of relevant informational macromolecules is a question not of biology but statistical thermodynamics -- an expertise of Hoyle [who is the precise person who in this exact context raised the question of he analogy of a tornado in a junkyard in Seattle spontaneously forming a 747], and not one of Dawkins. (Cf my own discussion in my always linked to see what I am pointing to, App 1, esp note 6.] So, by diverting attention from the need to get TO shores of functionality by focussing on hill climbing within islands of already existing functionality, Dawkins' Weasel is an exercise in question- begging the Hoylean challenge. The real fallacy is Dawkins' q-begging. Hoyle is right, and sitting on a point of his Nobel Prize-equivalent winning expertise. (Hint: the late, great Sir Fred Hoyle may be wrong on a point of theory, but he is not going to be grossly, simplistically wrong; his errors will be both interesting and deeply revealing on what is going on. That is why regardless of differences he is one of my personal intellectual heroes. Most recently, I was looking at his magnetic braking of proto-solar system disks model. That led me into looking at issues on Faraday disk generator empirical behaviour that I had never noticed before, and which are glossed over in the textbooks on electromagnetism and electromagnetics.) 2] Weasel’s extremely limited didactic goals Weasel's goals were more rhetorical than didactic. And, it is not misunderstanding to take plain words and outputs at their direct and obvious import. The o/p of weasel circa 1986 plainly on a law of large numbers sample, credibly latches. Two models for that latching were proposed: explicit and implicit. the former is overwhelmingly justified by the statements Mr Dawkins made in his text. It is on further statements that not even the 1986 version was explicitly latched that the implicit latching becoems the better explanation. And, in that light, the 1987 video o/p becomes a clear case of detuning for video effect. Misunderstanding is not the material issue at stake. What is, is that Weasel from the outset, was precisely not BLIND watchmaker in action, but foresighted, targetted search that rewards non-functional configs on mere proximity. And, that in a context where threshold of function was the precise issue at stake from the outset. 3] DK, 39: I think “implicit latching” is a way to avoid saying “non-latching.” Strawman. In the implicit latching case, a co-tuned blend of population size [vs probabilities of multiple mutations], random changes on population members, preservation of the original champion in a significant fraction of the generation and rewarding of mere proximity to target causes latching that is implicit but very real. This is because it is practically -- not theoretically -- impossible for a double mutation that simultaneously gives a good new letter and reverts an old correct letter to win as champion in the current generation. 
(Just as, on statistical thermodynamics with an entity of sufficient size, significant fluctuations away from the 2nd law of thermodynamics as classically stated are practically -- not theoretically -- impossible. And, yes DK, stat thermo-d considerations were in mind for me from the outset.) Under somewhat relaxed conditions [higher mutation rates coupled to big enough populations], the probability of seeing that rather special double mutation will rise. And, as the odds of per letter mutation rise sufficiently, facilitating that, the odds of no mutation at all fall. So, we can see cases where a substituting mutation will occasionally win: remember the closest to target, regardless of want of functionality, wins. Thus, quasi-latching. Then, as the conditions are further relaxed, triple etc mutation cases become more common. In some of these cases, even more substitutions will happen, and as well, novel multiply correct letters begin to emerge. Implicit latching vanishes, and we see much more of flicking back, no strong preservation of current letters, and occasional leaps of multiple letter advance. 4] Skeech, 41: KF’s complaint about “implicit latching that rewards non-functional but closer population members” is therefore, to use a couple of his favorite phrases, “distractive” and a “red herring”. Spin. I have just -- again, onlookers -- explained why this is VERY relevant. And,t he original issue is that Weasel is precisely not blind watchmaker. Back to that December thread, comments 107 and 111:
[107:] the problem with the fitness landscape is that it is flooded by a vast sea of non-function, and the islands of function are far separated one from the other. So far in fact — as I discuss in the linked in enough details to show why I say that — that searches on the order of the quantum state capacity of our observed universe are hopelessly inadequate. Once you get to the shores of an island, you can climb away all you want using RV + NS as a hill climber or whatever model suits your fancy. But you have to get TO the shores first. THAT is the real, and too often utterly unaddressed or brushed aside, challenge. [111, excerpted paragraph used by GLF in his threadjack:] Weasel sets a target sentence then once a letter is guessed it preserves it for future iterations of trials until the full target is met. That means it rewards partial but non-functional success, and is foresighted. Targetted search, not a proper RV + NS model.
See what is being desperately spun away from, onlookers? Notice, too, the evident fact of ratcheting and thus implication of explicit or implicit latching? And, notice the issue bing skipped over? 5] SG, 44: “Implicit latching” — the term itself — reflects gross misunderstanding of the evolution strategy. The Weasel program should have no termination criterion. That is, it should not stop itself, just as evolution does not stop itself. Ad hominem laced, smoke- cloud- emitting, burning strawman. Weasel as a matter of fact has a target [on achieving of which it terminates], and as a matter of observed o/p fact, circa 1986, that target worked on aper letter basis. "Latching" is in that context and does not reflect a misunderstanding. 6] the fitness function in Dawkins’ example Dawkins uses a TARGET, and proximity thereto, not functionality-based differential fitness. Fact, acknowledged by him in TBW, ch 3: "The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL." Of course, that "however slightly" implies that a one-letter advantage is enough, justifying a letter by letter partitioned search interpretation. 7] JT, 47: you’re defining “Methinks it is a weasel” as the only functional string. Presumably you accept that string as being functional. If you’re just defining every other string of letters as nonfunctional, then obviously there won’t be any intermediates that can be preserved. Kindly examine what Mr Dawkins stated in TBW, ch 3, as just excerpted. It is HE who accepts that the phrases in question are non-functional, "nonsense phrases." That is, I am making no redefinitions; just taking him at his word and the linguistic context at its natural meaning: only a correct sentence is a correct sentence, and only a correct word is a word. [In algorithmic and data storage contexts, just one incorrect letter or character can cause havoc, e.g. the infamous comma that forced NASA to abort a rocket launch. I also recall the case of a computer trying to subtract Jones from Smith, and causing a social security system in was it South Africa to crash.] And indeed,the only reasonable threshold of function is that he has cut down form monkeys reproducing all of Shakespeare by random typing [on probabilistic resources grounds] to one sentence, and in that one sentence, he admits that trying to get to it "single step" is far beyond the credible reach of a computer on the gamut of the observed universe:
What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed.
It turns out that 27^28 ~ 1.2*10^40, which is well within the 500 - 1,000 bit threshold of complexity [~ 10^150 - 10^301 configs] at which FSCI becomes a material issue. In turn, 1,000 bits is well below the sort of reasonable threshold for first cellular life, 600 k bits; as well as for major body plan innovations, ~ 10's - 100's of M bits. And, that is Hoyle's context and of my reasonable extension to bio-diversity origination. [ . . . ]
kairosfocus
March 28, 2009 at 5:42 AM PDT
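kairosfocus's claim that a champion which simultaneously gains one letter and reverts another is "practically, not theoretically, impossible" can be given rough numbers. The figures below (5% per-letter mutation, 27-character alphabet, 14 letters already correct) are assumptions for illustration; the actual 1986 parameters are not established anywhere in the thread.

```python
# Rough per-child probability, under assumed parameters, of a mutant that
# both gains at least one new correct letter and loses at least one
# previously correct letter in the same generation.
p = 0.05          # assumed per-letter mutation probability
alphabet = 27     # 26 letters plus space
n_correct = 14    # letters already correct (illustrative mid-run state)
n_wrong = 28 - n_correct

p_gain = 1 - (1 - p / alphabet) ** n_wrong                        # some wrong letter becomes correct
p_revert = 1 - (1 - p * (alphabet - 1) / alphabet) ** n_correct   # some correct letter is lost

print(f"P(gain)   per child ~ {p_gain:.4f}")
print(f"P(revert) per child ~ {p_revert:.4f}")
print(f"P(both)   per child ~ {p_gain * p_revert:.4f}")
# Whether such a mixed mutant ever becomes champion also depends on its
# siblings: among, say, 100 offspring there is usually at least one that
# gains without losing, and it outranks the mixed mutant.
```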
JT, My point is that what Dawkins was illustrating depends in no way on "targeted search," "partitioned search," or "implicit locking." Populations sometimes accrue information that reduces the uncertainty of the environment, even though the environment is changing. As David Fogel has written, following Wirt Atmar closely, "Selection operates to minimize species' behavioral surprise — that is, predictive error — as they are exposed to successive symbols." The symbols he refers to represent environmental circumstances. The Weasel program makes better sense if you stop thinking of the simulated organisms as genotypes, but instead as phenotypes making 28 predictions of the environment. The count of matching characters in the fitness function is then the total payoff for correct predictions. Dawkins specified an environmental sequence of symbols and held it constant to provide a clear illustration of information accrual through selection. In the Wandering Weasel program, obtained through slight changes to the Weasel program, I illustrate accrual of information when the environment is a random process -- a Markov process -- that hops from sequence to sequence of symbols. The Wandering Weasel program reduces the entropy of the environment -- the simulated environment is essentially external to the program -- for a wide range of parameter settings. This holds even if we make the length of sequences large and set the parameters to make the probability of ever perfectly matching the current sequence very low.
Sal Gal
March 28, 2009 at 12:09 AM PDT
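Sal Gal describes the Wandering Weasel only in words, so the following is a speculative reconstruction of the idea, not his program. The assumed environmental process is the simplest one that fits his description: each generation, with some small probability, one position of the target string is redrawn at random, and the population must keep tracking the moving target.

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def wandering_weasel(hop_prob=0.02, generations=2000, pop_size=100, rate=0.05):
    """Speculative sketch of a 'Wandering Weasel': the target string itself
    drifts as a simple Markov process, and we track how well each
    generation's champion matches the current target."""
    target = list(TARGET)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    history = []
    for _ in range(generations):
        # Environment step: occasionally one position of the target is redrawn.
        if random.random() < hop_prob:
            target[random.randrange(len(target))] = random.choice(ALPHABET)
        current = "".join(target)
        score = lambda phrase: sum(a == b for a, b in zip(phrase, current))
        # Ordinary Weasel step against the current (moving) target.
        offspring = ["".join(random.choice(ALPHABET) if random.random() < rate else c
                             for c in parent)
                     for _ in range(pop_size)]
        parent = max(offspring, key=score)
        history.append(score(parent))
    return history   # match counts stay high despite the drift

if __name__ == "__main__":
    h = wandering_weasel()
    print("mean matches over the last 500 generations:", sum(h[-500:]) / 500)
```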
jerry writes:
Absolute nonsense. You have it backwards. Deleterious mutations are eliminated from the offspring in normal evolution not allowed to prevail.
jerry, Slow down and think this through. Deleterious mutations "prevail" in neither the latching nor the non-latching paradigm. Both match biological reality in that respect. The difference between the two is that in the latching case, deleterious mutations are prevented from happening in the first place; in the non-latching case, they are allowed to happen but are weeded out of the population by selection. Deleterious mutations do happen in reality -- just ask anyone with cystic fibrosis. The non-latching case, which allows deleterious mutations, is therefore more realistic than the latching case. Again, please slow down and think this through before responding.
skeech
March 27, 2009 at 10:48 PM PDT
jerry, If you believe that latching is more realistic in actual biology, how are functioning genes prevented from mutating?
madsen
March 27, 2009 at 10:24 PM PDT
"Latching is equivalent to preventing deleterious mutations. Therefore a realistic simulation should not latch, but should instead allow deleterious mutations to be filtered out by the fitness function." Absolute nonsense. You have it backwards. Deleterious mutations are eliminated from the offspring in normal evolution not allowed to prevail. Natural selection would select the progeny with the functional traits and eliminate those when the traits were not conserved. A functional state is treated as equivalent to a non functional state by Dawkins' Weasel program which is very bad evolutionary theory by the high genius of evolution. Latching is a more realistic outcome. This whole discussion is as I said folly because the example is very bad evolution and very bad pedagogy. But people follow this idiot Dawkins like he is a prophet. Dawkins would make a good used car sales man because he has sold a bunch of junk to people in the last 30 years. And they smile when they buy it. "To latch or not to latch, that is the folly."jerry
March 27, 2009 at 9:44 PM PDT
Sal Gal [44]: "Implicit latching" — the term itself — reflects gross misunderstanding of the evolution strategy. What about "relative latching" - referring to the relative fixation of a trait? Haven't cockroaches been around for millions of years without much change?
JT
March 27, 2009 at 9:08 PM PDT
Sorry, I didn't see your post 44.
JT
March 27, 2009 at 9:05 PM PDT
Sal Gal 45-46: I may see where you're going with this "Wandering Weasel" program, but what are you inferring (or implying) from it? (And where are you going with that wandering weasel?) BTW, where is the preview button?
JT
March 27, 2009 at 8:43 PM PDT
jerry:
In evolution, the one thing that natural selection tends to do is conserve so if one wants to approach reality in any sense then the loss of information within an iteration of a functional subpart is extremely unlikely.
Evolution conserves useful traits not by preventing deleterious mutations but by filtering them out of the population via selection. Latching is equivalent to preventing deleterious mutations. Therefore a realistic simulation should not latch, but should instead allow deleterious mutations to be filtered out by the fitness function.
How difficult is that to understand.
How difficult is this to understand?
skeech
March 27, 2009 at 8:32 PM PDT
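The definitional point skeech is making can be pinned down in one function each. In the sense used throughout this thread, a latched mutation step never touches a letter that already matches the target, so a deleterious change to it cannot occur; the non-latching variant lets every letter mutate and relies on selection to discard the losses. Both functions below use the same assumed 5% rate as the earlier sketches.

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "

def mutate_latched(phrase, target, rate=0.05):
    """Latched: letters already matching the target are frozen outright,
    so a deleterious change to them simply cannot happen."""
    return "".join(c if c == t
                   else (random.choice(ALPHABET) if random.random() < rate else c)
                   for c, t in zip(phrase, target))

def mutate_unlatched(phrase, rate=0.05):
    """Non-latching: every letter may change; selection has to weed out
    the occasional loss of a correct letter."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)
```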
But the mutationworks simulation does latch - whether they realize it or not. It latches the original configuration, because any other mutations that can occur are rejected in favor of the original configuration (save one and only one target config out of the entire space). To clarify, mutationworks has all these neutral mutations happening (they call virtually everything neutral), and they say those are preserved. But how can they mean all those neutral mutations are actually overtaking the entire population? They cannot mean that. And then any other neutral mutation also overtakes the entire population? So really the mutationworks simulation means one of two things: EVERYTHING LATCHES [because every neutral mutation overtakes the population] or NOTHING LATCHES but the original config and only one distant target. But at any rate, the mutationworks simulation most definitely latches as well.
JT
March 27, 2009 at 8:06 PM PDT
"So I agree: what is difficult to understand about this?" The whole process is folly as I said above. The artificial fitness function is nonsense even if it is meant to be a pedagogical process. In evolution, the one thing that natural selection tends to do is conserve so if one wants to approach reality in any sense then the loss of information within an iteration of a functional subpart is extremely unlikely. A more intelligent way would be to make the loss of this functionality very rare. And latching is the easy way out but but more reasonable then to let it mutate out like it meant nothing. How difficult is that to understand. Now it turns out that by programming that latching effect into this absurd example one gets a less absurd simulation but it is extremely trivial one because it reaches an answer very quickly. It is like we are some how fooled that the longer simulation must be more real life and that is nonsense. There is no relation to reality here which is why it is absurd that it has taken over 400 comments over three threads to discuss it. People do have too much time on their hands. They should go out and get a beer or something. From what I understand the offspring population size generated at each iteration should be the same for each type of simulation. The only difference is whether a letter is latched or not. If that is wrong, then why? I am willing to learn.jerry
March 27, 2009 at 8:05 PM PDT
