Uncommon Descent: Serving The Intelligent Design Community

We’re not in Kansas Anymore


I hesitate to draw attention to a blog called Thoughts from Kansas, written by Josh Rosenau (a grad student completing a doctorate in the department of Ecology and Evolutionary Biology at the University of Kansas), because I don’t think it makes accurate arguments and it doesn’t deserve to be promoted, even in a rebuttal. The blog amounts to inaccurate, prideful digs at ID and reminisces over a paper he wrote about what he perceives to be the legal and social history of Intelligent Design:

The paper’s title, “Leap of Faith: Intelligent Design after Dover” is a reference both to the chalky cliffs of the English Channel, to the town in which ID itself took a fall, and to the politically and economically suicidal effects of pushing creationism into public schools. Along the way, I was able to work in some other subtle digs at ID, including this summary of the recent history of the ID movement…

Way to work in subtle digs; it’s obvious he’s an unbiased academic who is only concerned with presenting the truth. Of course, ID has no basis in creationism; it is not concerned with any holy writ as a guide to its discipline. I’ve never read anything about “specified or irreducible complexity” in any sacred text, nor encountered them in any religious observance.

William Dembski, once heralded on a book jacket as “the Isaac Newton of Information Theory,” has been reduced to rewriting and analyzing toy computer programs originally written for a TV series and popular books in the 1980s by biologist Richard Dawkins as trivial demonstrations of the power of selection. Dembski explained his poor record of publication in peer-reviewed scientific literature by saying, “I’ve just gotten kind of blasé about submitting things to journals where you often wait two years to get things into print. And I find I can actually get the turnaround faster by writing a book and getting the ideas expressed there. My books sell well.” Alas, they don’t convince mathematicians of his mathematical arguments…

Apparently Rosenau isn’t aware of the peer-reviewed IEEE publications from Drs. Dembski and Marks, Winston Ewert, and George Montañez, originating at their Evolutionary Informatics Lab.

And Dr. Dawkins’ toy needed to be exposed as a farce, because a farce doesn’t illustrate anything except by deceit, and deceit is not an illustration. And alas, the Oxford mathematician John Lennox endorses Dr. Dembski’s mathematics. If you want to write a legal paper for the “lawyerly set”, at least get the story right. The rest of the paper is more of the same: a disconnected cluster of arguments that reads like a brainstorm (concerned with quantity of arguments over quality) and could only persuade the uninformed.

Comments
Your argument seems to be that since there is more than one fitness landscape, then there are no fitness landscapes
My own understanding is that there are no destinations, only survivors on the journey. But since Laughable made the point: itatsi, in demo mode, can be switched from one language landscape to another in the middle of a run, and the new fitness function will seamlessly begin shaping the population to look like the new language. Before anyone makes the obvious point, the language landscape is much more forgiving than the biological landscape. Nearly any random string has embedded substrings that appear in words. It's a game, not biology. But it makes the point that an evolutionary algorithm does not need a fixed target.
Petrushka
April 14, 2010 at 10:34 AM PDT
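A minimal sketch of the fitness-swap idea in Petrushka's comment above, assuming nothing about itatsi's internals; the bit-string genome and the two toy scoring functions (count_ones, count_alternations) are illustrative stand-ins, not the program's actual language model:

    # Sketch: an evolutionary loop whose fitness function is swapped mid-run,
    # illustrating that the search needs no fixed target.
    import random

    def count_ones(genome):
        return sum(genome)

    def count_alternations(genome):
        return sum(1 for a, b in zip(genome, genome[1:]) if a != b)

    def evolve(generations=200, length=40, offspring=20, mut_rate=0.05):
        parent = [random.randint(0, 1) for _ in range(length)]
        for gen in range(generations):
            # Swap the fitness landscape halfway through the run.
            fitness = count_ones if gen < generations // 2 else count_alternations
            children = [
                [1 - bit if random.random() < mut_rate else bit for bit in parent]
                for _ in range(offspring)
            ]
            parent = max(children, key=fitness)  # keep whichever child scores best now
        return parent

    if __name__ == "__main__":
        random.seed(0)
        final = evolve()
        print("ones:", count_ones(final), "alternations:", count_alternations(final))

The loop never sees a target string; it only keeps whichever child the current fitness function scores highest, and after the swap the same machinery begins shaping the population toward the new criterion.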
Laughable,
To say that there is one ‘fitness landscape’ that the evolutionary search is working on is a bogus notion.
Your argument seems to be that since there is more than one fitness landscape, then there are no fitness landscapes...
Clive Hayden
April 14, 2010 at 09:18 AM PDT
DiEb @ 40:
I don’t think that the definition of active information is very helpful, as it only looks at the size of an underlying set Ω but not on the functions – which are generally in the focus of the NFLT.
WinstonEwert @ 45
The active information is calculated from the relative probabilities of search algorithms succeeding so I’m confused by your statement here.
In your paper, you are only looking at asymptotically perfect algorithms, i.e., algorithms which actually find the target. For those algorithms, the active information equals the endogenous information, a fancy way to state that it is lb(|Ω|), so it depends only on the underlying set and is the same for all algorithms.
DiEb
April 14, 2010 at 07:37 AM PDT
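For readers following the exchange, the algebra behind DiEb's remark, in the Dembski-Marks notation as I understand it (lb denotes the base-2 logarithm; the uniform single-target assumption in the last step is mine):

    I_\Omega = -\log_2 p, \qquad I_S = -\log_2 q, \qquad
    I_{+} = I_\Omega - I_S = \log_2 \frac{q}{p}

Here p is the probability that blind (uniform random) search finds the target and q is the probability that the assisted search does. If the algorithm always succeeds, then q = 1 and I_+ = I_Ω; if in addition the target is a single element of Ω and blind search samples uniformly, then p = 1/|Ω| and I_+ = log_2|Ω| = lb(|Ω|), the same value for every such algorithm, which is the dependence on the underlying set alone that DiEb is pointing out.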
Atom, you must realize that in the material world the 'fitness landscape' varies significantly from place to place and even from time to time. The fitness landscape in, say, LA would be quite different from 500m underwater in the Atlantic ocean. Generally people, and say, small mammals, moths and bacteria, are the most fit creatures in a fitness landscape of an urbanised area. Put them 500m under the ocean, though, and their fitness/survival rate drops to about 10 minutes. As a more realistic example of changing fitness landscapes that could be used in terms of early evolution, take for instance: (a) 1 mile under the sea, (b) 1 foot under the sea, (c) floating on top of the ocean and (d) on the beach. All four of these places will have completely different fitness landscapes due to oceanic pressure, exposure to oxygen, exposure to light, etc. These fitness landscapes will also change depending on the availability of resources and the presence of predators. The same life form placed at each of these 4 locations will 'hill climb' to the nearest local 'hill' over time to better suit that environment (if it survived at all). To say that there is one 'fitness landscape' that the evolutionary search is working on is a bogus notion. Dawkins used the Weasel program as a simple example of how a cumulative search works faster than a complete random search; he did not use the Weasel program as a complete model of how the biological world operates.
Laughable
April 13, 2010 at 06:46 PM PDT
Petrushka, I've tried to make the same point. As a computer science research program, creating metrics for comparing search procedures may have some interest. But I don't see it edging closer to invalidating evolution as a process. What happened to the claim that evolutionary algorithms work as well as they do because an intelligent agent smuggled in the information? Which of the researchers whose names are on these papers will raise their hand and say, "I did it. That algorithm only works as well as it does because of me." Which of them did the smuggling when they ran Avida? If design is the thing to get to, eventually, why put so much importance on the fitness function? Does the Designer design in nature by the subtractive process of killing? If I were trying to prove design is important, I'd be focusing on the representation and the variation operators.
Nakashima
April 13, 2010 at 03:46 PM PDT
...as Winston pointed out, these simulations and toy problems use carefully selected fitness functions amenable to the type of search we are performing. In short, we choose the type of landscape we need for success then assume nature has this exact type of landscape.
The only absolutely necessary characteristic of a biological landscape -- in order for some level of evolution to occur -- is that not everything that differs from its parent dies. Aside from that, the efficiency of evolution is irrelevant. We already know that evolution often fails to produce the changes necessary to avoid extinction. We also know that some seemingly simple islands are difficult or impossible to reach. Tame foxes with valuable, soft fur, for example. Which is why I question the metaphor of target. When children who are slightly different from their parents manage to live and reproduce, the population changes. It may be because the children are more fit, or it may be simply because they survive.
Petrushka
April 13, 2010 at 03:15 PM PDT
Atom, Hans was a sock puppet that I've banned many times previously, so I banned him or her again. Sorry for letting him/her ramble on. :)
Clive Hayden
April 13, 2010 at 02:52 PM PDT
Hans, I'm not trying to walk you through the paper. I'm only trying to help you ask the right questions. The paper deals with the Hamming oracle as an information source. It shows that this information can be extracted with varying levels of efficiency and that it is a LARGE source of information. That is the main gist, as Winston pointed out earlier. As to questions regarding biological systems: it is not our claim that these hand-crafted programs represent accurately how unguided nature is supposed to operate. Others make these claims. However, as Winston pointed out, these simulations and toy problems use carefully selected fitness functions amenable to the type of search we are performing. In short, we choose the type of landscape we need for success, then assume nature has this exact type of landscape. However, the type of landscape we need to achieve success in a search cannot simply be assumed to be given to us. Many types of landscapes are possible; many of them (the majority) are not helpful. Now here's the point: narrowing down the set of possible fitness landscapes to only those that allow quick convergence requires target knowledge. We need to know which target we're searching for to select a fitness landscape that will work. You've grasped that in your last comment, asking "what is the specific target?" The correlation between fitness landscape and target location is essential. Furthermore, such a selection incurs an information cost. How much? Greater than or equal to the amount of active information you can extract from the landscape. Therefore, evolutionary algorithms, as presented in the toy problems, are not sources of unlimited, free functional information. They simply extract the information we provide, with varying levels of efficiency. That's the relevance to ID.
Atom
April 13, 2010 at 02:47 PM PDT
Will any and every fitness landscape / fitness function give an evolutionary search an advantage over blind random sampling? Or are some fitness landscapes better than others, in terms of convergence to a given target?
I think perhaps that "search" is the wrong metaphor. I wrote itatsi in part to see if a sufficiently knowledgeable oracle could lead toward islands of high fitness. One of the interesting observations that resulted is that different languages present different landscapes. French, for example, seems to have many words connected to other words by one or two character mutations. German does not. I assume that biological structures share this characteristic. Some are close and some distant. Behe seems to assert that some gaps cannot be traversed. My thought on this is that RMNS doesn't always find a target. Species go extinct, even though from a purely physical standpoint there would seem to be solutions. Perhaps something as simple as a change in behavior. I also wonder if some of the structures we do see might be accidents, that might never be repeated if the tape were rewound. The metaphor would be the lottery winner. Looking back, the odds of any particular person winning, or any particular structure evolving, might seem insurmountable. If we attempted to replicate the evolution of a complex structure it might never happen. But that does not mean it didn't happen once.
Petrushka
April 13, 2010 at 01:05 PM PDT
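A rough way to probe the "connectedness" observation in Petrushka's comment above for any particular language. The word-list path is hypothetical, and restricting mutations to equal-length single-letter substitutions is my simplification; real edit moves would also include insertions and deletions:

    # Sketch: estimate how densely a word list is connected by single-letter
    # substitutions. "wordlist.txt" is a hypothetical one-word-per-line file.
    from collections import defaultdict

    def load_words(path="wordlist.txt"):
        with open(path) as f:
            return {w.strip().lower() for w in f if w.strip().isalpha()}

    def neighbour_counts(words):
        # Bucket words by wildcard patterns to avoid comparing every pair.
        buckets = defaultdict(set)
        for w in words:
            for i in range(len(w)):
                buckets[w[:i] + "_" + w[i + 1:]].add(w)
        counts = {w: 0 for w in words}
        for group in buckets.values():
            for w in group:
                counts[w] += len(group) - 1  # other words one substitution away
        return counts

    if __name__ == "__main__":
        words = load_words()
        counts = neighbour_counts(words)
        isolated = sum(1 for c in counts.values() if c == 0)
        print("words:", len(words), "isolated:", isolated,
              "mean neighbours:", sum(counts.values()) / max(len(counts), 1))

Running it against word lists for different languages would give a crude, checkable version of the French-versus-German comparison.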
In all honesty, Winston has been rather lucid in his description of the core problem...
I'm having trouble understanding the core problem and how it relates to biology. It seems to me that the biological oracle is omniscient in the sense that biochemistry dictates whether an organism lives after its DNA mutates, and the ecosystem selectively prunes populations. The relevant question is the one raised by Behe, and that is, can you get there from here. Assuming there can be two stable configurations, one having some enhanced functionality, are all the intermediate configurations viable? In biology, that strikes me as an experimental question. Efficiency doesn't seem to be the issue. Viability does.
Petrushka
April 13, 2010 at 12:47 PM PDT
Hans, Thank you for answering my question. So one search, using a fixed set of parameters (mutation rate, population size, etc.) and a specific fitness function, can find a target quickly, whereas a second search using the same algorithm and parameters but a different fitness function will not converge to the target at all. What differs, and makes the difference in search performance, appears to be having a landscape amenable to hill-climbing, with our specific target located near the apex of a hill. Now we must ask: what fraction of all possible fitness landscapes have this form, namely, one suitable to finding our specific target? It isn't enough for it to simply be a smooth single-hill landscape, because the maximum could be at a point other than our target, leading our search away from the target rather than towards it. So how many "good" fitness functions (as a fraction of total possible landscapes of fixed size) are there for a given target? This bears directly on whether we can just pick a landscape at random from the space of all (size-limited) fitness landscapes and expect to get one usable for our search. I will let you answer before continuing.
Atom
April 13, 2010 at 12:35 PM PDT
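One way to put a rough number on Atom's question for a toy-sized space. The 8-bit search space, the steepest-ascent climber, and the uniformly random landscapes are choices made purely for illustration, not anything taken from the paper under discussion:

    # Sketch: sample random fitness landscapes over n-bit strings and ask how
    # often a steepest-ascent hill climber, started at random points, ends on
    # one pre-specified target string.
    import random

    def hill_climb(fitness, n, start):
        current = start
        while True:
            best = max((current ^ (1 << i) for i in range(n)), key=lambda x: fitness[x])
            if fitness[best] <= fitness[current]:
                return current
            current = best

    def fraction_of_helpful_landscapes(n=8, trials=2000, starts=20):
        target = random.randrange(2 ** n)
        helpful = 0
        for _ in range(trials):
            fitness = [random.random() for _ in range(2 ** n)]  # one random landscape
            hits = sum(hill_climb(fitness, n, random.randrange(2 ** n)) == target
                       for _ in range(starts))
            if hits > 0:
                helpful += 1
        return helpful / trials

    if __name__ == "__main__":
        random.seed(1)
        print("fraction of random landscapes that ever led to the target:",
              fraction_of_helpful_landscapes())

Under independent random fitness values the target is even a local maximum in only about one landscape in nine (it must beat all eight of its neighbours), and the climber must also start in its basin, so the measured fraction comes out small; landscapes that reliably funnel a climber to one pre-specified point are the exception.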
Atom, I'm not sure of the relevance of your question. Are you suggesting that the designer acts by changing the fitness landscape rather than interacting with the organism itself? To answer your question, however, I would say "No" and "Yes".
Hans Fritzsche
April 13, 2010 at 12:10 PM PDT
Hans Fritzsche, Allow me to ask a preliminary question: Will any and every fitness landscape / fitness function give an evolutionary search an advantage over blind random sampling? Or are some fitness landscapes better than others, in terms of convergence to a given target?
Atom
April 13, 2010 at 12:00 PM PDT
Atom, At the risk of derailing this thread:
You don’t need the mutations to be directed, as long as you have enough offspring, a suitable mutation rate, and most importantly an information-rich oracle (fitness landscape). Hill-climbing, gradient-ascent searches need a suitable fitness landscape to have any advantage over random sampling searches, and such suitable landscapes are quantifiably rare in the space of possible fitness landscapes.
There seems to be no requirement for an intelligent designer anywhere there. Where do you propose that the intelligent designer comes into play? After all, that is the point of your paper, this website and the claims made about the paper by some of the authors, i.e. that there is a requirement for an intelligent designer somewhere in the process. Where/when is that requirement satisfied? I've looked over the EIL but it's not clear to me why these papers are "ID supporting" papers. Could you clarify?
Hans Fritzsche
April 13, 2010 at 11:49 AM PDT
Petrushka, You are free to argue about the "rather significant claims made on this website" with those who made them, on the threads in which they made them. I, on the other hand, would rather discuss the issues raised on this thread regarding carefully selected fitness landscapes and information costs associated with landscape bias. In all honesty, Winston has been rather lucid in his description of the core problem and I don't really need to add much to his points. It's just that I'm rather enjoying this thread so I'd rather allow it to stay focused on the current topic. Thanks for your cooperation.
Atom
April 13, 2010 at 11:01 AM PDT
Everyone here is focusing on the “information richness” (really, target proximity correlation) of the fitness landscapes/oracle.
Focusing on the oracle's intelligence ignores some rather significant claims made on this website, if not on this thread. Are you asserting that ID is completely comfortable with random variation plus selection, provided the selector is intelligent? I could have sworn that many ID supporters assert that there are occasions when living things produce a favorable mutation as a result of need. Either by computing the necessary adaptive mutation (front loading) or by preserving some environmentally induced adaptation (epigenetics?).
Petrushka
April 13, 2010 at 10:33 AM PDT
Petrushka wrote:
So the itatsi oracle knows how close a random string is to a word, and it can also select the string from a population that is closest to being a word.
Which is exactly Winston's point. You've conceded the argument without realizing it. Directed versus non-directed mutations are a secondary issue. You don't need the mutations to be directed, as long as you have enough offspring, a suitable mutation rate, and most importantly an information-rich oracle (fitness landscape). Hill-climbing, gradient-ascent searches need a suitable fitness landscape to have any advantage over random sampling searches, and such suitable landscapes are quantifiably rare in the space of possible fitness landscapes. This is the issue under discussion. Everyone here is focusing on the "information richness" (really, target proximity correlation) of the fitness landscapes/oracle. Re-read Winston's posts and you may see why your response is irrelevant to the discussion at hand.
Atom
April 13, 2010 at 08:13 AM PDT
Atom @51 and Winston, I believe you are crossing an abstraction layer with your point about location. As Nakashima says @47,
We don’t! We only reward what worked in the last incremental time step because we have no knowledge of whether a distant target exists or not. The inner loop of the algorithm doesn’t know what previous good population members looked like. It only knows the relative fitnesses of current members.
As part of the mechanism of the search, only the level of fitness is returned. This is the only thing evolution is concerned with. In a real evolutionary scenario, the fitness function is free to test any characteristic it chooses to, on a generation-by-generation basis. In a computer model, we could swap fitness functions each generation. This means you cannot know in advance what will be most fit.
Toronto
April 13, 2010 at 08:13 AM PDT
Winston’s claim is that Itatsi doesn’t just look at whether or not substrings occur, but also assesses fitness on the basis of whether or not those substrings appear in places similar to the places they appear in real words (in a statistical sense).
The cardinal rule of this kind of simulation is that variation is blind. There's nothing about the biological world that requires the oracle to be blind or unintelligent. Darwin compared natural selection to artificial selection, and artificial selection undeniably employs an intelligent oracle. So the itatsi oracle knows how close a random string is to a word, and it can also select the string from a population that is closest to being a word. But nothing is conveyed back to the mutation generator to help it make hopeful mutations. In fact, the actual population wanders around rather than homing in on any single word. Indeed, the simple ploy of occasionally killing off the best word and selecting the second best prevents itatsi from getting stuck on a single target. It continues to make new words. Another thing it does is make pronounceable strings that aren't in its dictionary, or in any dictionary. If you google these strings you will find that most of them have been used somewhere by someone, because humans love making new words. Not only does itatsi make novel words that are islands between dictionary words, it "knows" when a randomly generated non-word will be pronounceable and interesting to speakers of the language. It knows because the string has good genes.
Petrushka
April 13, 2010 at 08:05 AM PDT
Petrushka wrote:
I think you misrepresent what the oracle is doing. It communicates nothing about locations. It merely grades by fitness.
You seem to be subtly claiming that giving information about fitness can in no way communicate information about location, which is incorrect. If I am trying to select, out of the group of all Americans, a person who lives in my neighborhood, I can simply make fitness a function of locality (namely proximity to my neighborhood), then select based on, guess what, fitness! That "fitness" information is really proximity information. Hamming oracles work in a similar way. Winston's claim is that Itatsi doesn't just look at whether or not substrings occur, but also assesses fitness on the basis of whether or not those substrings appear in places similar to the places they appear in real words (in a statistical sense). If that is in fact what Itatsi is doing, then Winston's criticism is completely valid. Remember, just because we claim that an oracle is simply returning "fitness" information does not mean that the fitness is not strongly correlated to (or based on) some other trait that we are trying to optimize, like locality/proximity.
Atom
April 13, 2010 at 07:04 AM PDT
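A small sketch of Atom's point that a Hamming oracle's "fitness" is location information in disguise: using nothing but fitness queries, the hidden phrase can be read off one position at a time. The alphabet, the Weasel-style target, and the probing scheme are illustrative, not the extraction algorithms analyzed in the paper:

    # Sketch: a Hamming "fitness" oracle leaks the target's location. Using only
    # fitness queries (never reading TARGET directly), recover it exactly.
    import string

    ALPHABET = string.ascii_uppercase + " "
    TARGET = "METHINKS IT IS LIKE A WEASEL"  # known only to the oracle

    def fitness(candidate):
        # Higher is better: number of positions matching the hidden target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def extract_target(length):
        base = ["A"] * length
        recovered = []
        for i in range(length):
            # Probe every character at position i; the best score reveals it.
            best = max(ALPHABET, key=lambda c: fitness(base[:i] + [c] + base[i + 1:]))
            recovered.append(best)
        return "".join(recovered)

    if __name__ == "__main__":
        print(extract_target(len(TARGET)))  # prints the hidden phrase

Nothing is ever returned except fitness scores, yet the scores pin down the target completely, which is the sense in which the "fitness" is really proximity information.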
Mr Ewert,
Having a useable fitness landscape is key.
Key to what?
Nakashima
April 12, 2010 at 08:37 PM PDT
Whoops, something in my wireless keyboard submitted a post while I was correcting it. Continuing where the error occurred: That is conceptually similar to differential reproductive success. The only thing the oracle does is choose which individual continues its line and which ones fail to reproduce. The children are all random variants of the selected child from the previous generation. Nothing about the oracle changes the way mutations occur. I do not think it is relevant to consider how "intelligent" an oracle is. The natural ecosystem can be pretty intelligent. Predators are pretty good at selecting the weakest. The only rule that can't be broken is the one forbidding the oracle from influencing how variants are produced. I think you might have misconstrued itatsi as a partitioned search.
Petrushka
April 12, 2010 at 08:05 PM PDT
I think if you want to have something that parallels a “natural oracle”, you need something which does not have a fitness function deliberately constructed to tell me about where in the space the targets live.
I think you misrepresent what the oracle is doing. It communicates nothing about locations. It merely grades by fitness. That is conceptually differential reproductive success, which
Petrushka
April 12, 2010 at 07:52 PM PDT
That’s true, but I don’t see that it is conceptually unlike a natural oracle that causes differential success of phenotypes.
The example you present, Itatsi, is rewarding correct pairs of letters taking into account the positions of those pairs. Put simply, it is rewarding looking statistically like the target. It is clever. For the analogy to work, it would be like having proteins which are rewarded for looking statistically like useful proteins. I think if you want to have something that parallels a "natural oracle", you need something which does not have a fitness function deliberately constructed to tell me about where in the space the targets live.
Nakashima, The search algorithm has very little notion of state, but the fitness landscape needs to have the correct form for it to work. ONEMAX and WEASEL have really nice forms. A random fitness function will eliminate the advantage of hill climbing. A suitably defined deceptive fitness function will cause terrible performance on the part of hill climbing. Having a useable fitness landscape is key.
WinstonEwert
April 12, 2010 at 06:03 PM PDT
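A sketch of the contrast Winston draws above between a landscape with a nice form and a random one, using the same steepest-ascent climber on both. OneMax, the 16-bit space, and the random lookup table are illustrative choices, not anything from the paper:

    # Sketch: one hill climber, two landscapes over 16-bit strings.
    import random

    N = 16

    def onemax(x):
        return bin(x).count("1")

    def hill_climb(fitness, start):
        current = start
        while True:
            best = max((current ^ (1 << i) for i in range(N)), key=fitness)
            if fitness(best) <= fitness(current):
                return current
            current = best

    if __name__ == "__main__":
        random.seed(2)
        start = random.randrange(2 ** N)
        # Nice form: every one-bit improvement on OneMax points at the optimum.
        print("OneMax reached global optimum:", onemax(hill_climb(onemax, start)) == N)
        # Random landscape: the climber stalls on the first local peak it finds.
        table = [random.random() for _ in range(2 ** N)]
        end = hill_climb(lambda x: table[x], start)
        print("Random landscape reached global optimum:", table[end] == max(table))

On OneMax the climber reaches the unique optimum from any starting point; on the random table it halts on a local peak and essentially never on the global one, which is the sense in which a random fitness function eliminates the hill climber's advantage.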
Mr Ewert,
The essence of hill climbing is that we can somehow measure whether or not we are moving towards the target. Hamming distances make that easy. But how are we to know we are moving towards a target?
We don't! We only reward what worked in the last incremental time step because we have no knowledge of whether a distant target exists or not. The inner loop of the algorithm doesn't know what previous good population members looked like. It only knows the relative fitnesses of current members. I agree that Weasel, like the OneMax toy problem used more commonly in EAs (and by Dr Dembski previously in MESA), is a problem where the variation operators and genotype representation combine to define a hill-climbing behavior. But fitness landscapes do not need to be continuous, everywhere differentiable, etc. As you said, they can be deceptive (an area studied in depth by Dr David Goldberg of UIUC).
Nakashima
April 12, 2010 at 12:34 PM PDT
In order to obtain your fitness function you are going to need to either analyze a dictionary (i.e., the list of targets) or inject your own knowledge of common word fragments.
That's true, but I don't see that it is conceptually unlike a natural oracle that causes differential success of phenotypes. On the other hand, it is still an analogy, so I wouldn't make grandiose claims for it.
Additionally, I don’t think your proposal will actually work. If we reward common substrings we’ll probably get a word containing several of these substrings which do not go together and thus get stuck in a local optimum.
I have an example program in mind.
Petrushka
April 12, 2010 at 11:43 AM PDT
I don’t think that the definition of active information is very helpful, as it only looks at the size of an underlying set Ω but not on the functions
The active information is calculated from the relative probabilities of search algorithms succeeding so I'm confused by your statement here.
Have a look at a third set of functions of size |Ω| which I will call eben’s spoilers:
I think it's been clearly established that the NFLT will break for many subsets of the possible fitness functions. What are you attempting to accomplish by demonstrating an example? Your fitness function has been engineered to take me to the target directly. That's a pretty clear example of information being injected.
I have to disagree. ES and other EA would still work better than random generate and test even in the presence of very noisy fitness functions
Under the NFLT that success only derives from making accurate assumptions about the fitness function. Noise isn't the issue here. The question is whether or not there will be a hill to climb. The Hamming oracle puts a very simple hill in, which can be easily climbed. But in other functions there could be so many hills that attempting to climb them won't get you any success. Or perhaps the hills are deceptively placed and actually take you away from a good solution. The essence of hill climbing is that we can somehow measure whether or not we are moving towards the target. Hamming distances make that easy. But how are we to know we are moving towards a target?
What if, instead of a target, the oracle considered all dictionary words to have value, and selected children for survival based on having substrings that appear most often in actual words?
What you propose is essentially measuring closeness to a collection of words. In order to obtain your fitness function you are going to need to either analyze a dictionary (i.e., the list of targets) or inject your own knowledge of common word fragments. In other words, to obtain active information you'll need prior knowledge about the targets embedded into the fitness function/oracle. Additionally, I don't think your proposal will actually work. If we reward common substrings we'll probably get a word containing several of these substrings which do not go together and thus get stuck in a local optimum. We can fix these problems, but only by providing more intelligent input into the algorithm or fitness function.
WinstonEwert
April 12, 2010 at 11:17 AM PDT
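To illustrate the deceptively placed hills Winston mentions, here is a standard trap-style function, in the spirit of the deceptive problems Goldberg studied rather than an example from the paper: the all-ones string is the global optimum, but every local gradient rewards removing ones, so a hill climber is steered to the opposite corner of the space.

    # Sketch: a deceptive "trap" landscape over 12-bit strings.
    N = 12

    def ones(x):
        return bin(x).count("1")

    def trap(x):
        # All-ones scores highest; every other string scores better with fewer ones.
        k = ones(x)
        return N + 1 if k == N else N - 1 - k

    def hill_climb(fitness, start):
        current = start
        while True:
            best = max((current ^ (1 << i) for i in range(N)), key=fitness)
            if fitness(best) <= fitness(current):
                return current
            current = best

    if __name__ == "__main__":
        start = 0b101010101010  # six ones, six zeros
        end = hill_climb(trap, start)
        print("climber ended with", ones(end), "ones; the global optimum has", N)

The climber dutifully follows the local slope down to the all-zeros string, the point farthest from the true optimum.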
#37:
Relative reproductive success is way different than measuring how close I am to a target.
What if, instead of a target, the oracle considered all dictionary words to have value, and selected children for survival based on having substrings that appear most often in actual words? You could consider substrings to be alleles, and the children that have the best set of alleles get the highest fitness score.
Petrushka
April 12, 2010 at 09:56 AM PDT
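A sketch of the "substrings as alleles" fitness Petrushka proposes, scoring a candidate string by how common its trigrams are across a word list. The tiny in-line word list stands in for a real dictionary, and the choice of trigrams and an unweighted sum are my simplifications; note that, as Winston argues above, building the table already consults the very words the search is supposed to find:

    # Sketch: score strings by how often their trigrams occur in a word list.
    from collections import Counter

    def trigram_table(words):
        counts = Counter()
        for w in words:
            for i in range(len(w) - 2):
                counts[w[i:i + 3]] += 1
        return counts

    def substring_fitness(candidate, counts):
        # Sum of how often each of the candidate's trigrams occurs in real words.
        return sum(counts[candidate[i:i + 3]] for i in range(len(candidate) - 2))

    if __name__ == "__main__":
        words = ["weasel", "measles", "teasel", "easel", "season"]  # toy dictionary
        table = trigram_table(words)
        for s in ["easelon", "xqzkwvp"]:
            print(s, substring_fitness(s, table))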
I’m aware that review is far less stringent than would be the case if publishing in a journal.
I raised this point with respect to the original post (since part of the argument in the post hinges on providing credentials for Bill Dembski, et al), but my comment was excluded by the moderators. I do think it is important to distinguish between the impact of peer-reviewed publications in biological research journals and those in less stringent instruments with far less editorial participation by biologists (like the IEEE).
spot48
April 12, 2010 at 09:53 AM PDT
Reviewers commented on the paper and gave recommendations concerning it, so yes, it was peer reviewed. I'm aware that review is far less stringent than would be the case if publishing in a journal. Seeing as I have not yet had the honor of having my work published in a journal, you should ask somebody else for any details on how exactly they differ.
WinstonEwert
April 12, 2010 at 08:55 AM PDT