38 Replies to “Behe refuted yet again”

  1. dacook says:

    This (from the explanation) says it all:
    “…game I created just for this purpose”

  2. Atom says:

    I don’t know, in 936 generations, I got rapidly deteriorating fitness, and by that generation every last offspring had a fitness score of 0.

    Yet they still reproduced like normal and went on with business as usual.

  3. Atom says:

    Also, (somebody correct me if I’m wrong)

    Isn’t this thing not even SC? From my back-of-the-envelope calculation, it is a matrix of 10 by 10 squares, each with one of five possible values (four letters or a blank space). This gives us a complexity per solution of:

    5^100

    …which is far short of 10^150…
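    Atom’s arithmetic checks out; a quick calculation (a sketch, assuming the 10×10 board with five states per square described above) puts the configuration space well below Dembski’s 10^150 universal probability bound:

```python
import math

# 10x10 board, each square one of 5 states (four letters or blank)
states_per_square = 5
squares = 10 * 10

bits = squares * math.log2(states_per_square)     # information content in bits
orders = squares * math.log10(states_per_square)  # powers of ten

print(f"5^100 is about 10^{orders:.1f} ({bits:.1f} bits)")
print(f"the 10^150 bound is about {150 / math.log10(2):.0f} bits")
```

    So the whole board space is roughly 232 bits, well under the 498-bit equivalent of 10^150.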

  4. JGuy says:

    Personally, I’ve been having more of an educational experience with the Panda-Monium file on UD.

    http://www.uncommondescent.com.....monium.swf

    Still trying to beat level seven.

  5. WinstonEwert says:

    Alright, who can expound on the method used to achieve “IC”?

  6. JGuy says:

    Winston:
    Exactly the point… there’s a method.

    meth·od –noun
    1. a procedure, technique, or way of doing something, esp. in accordance with a definite plan: There are three possible methods of repairing this motor.
    -Random House Unabridged Dictionary, copyright Random House, Inc. 2006.

  7. bFast says:

    WinstonEwert, “Alright, who can expound on the method used to achieve “IC”?”

    I haven’t analyzed the source code of this particular project terribly closely. However, if two “mutations” are required to achieve a given result, then the pair of mutations would technically, barely, be considered “IC”. This would be achieved as follows: a “mutation” happens that “at least does no harm”, so it is permitted to continue — then a second mutation happens that completes the “IC” scenario.

    Now, if the number of possible “mutations” is extremely limited (a search space of 5^100 board states), this scenario can happen fairly regularly. If the number of possible “mutations” is huge, the chance of getting a matched pair becomes really low. Further, Behe’s recently published paper shows that it is possible, in bacteria (short lifespan, smaller genome), to get 2-component IC once in a blue moon. Getting 3-component IC is much harder, and 4 gets into the zone of ridiculous. The number of “matching” mutations that would be required to assemble a bacterial flagellum from known components is, like, 20. Such is the difference between “a little bit IC” (2 components) and the “very IC” flagellum.
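    bFast’s scaling argument can be sketched numerically. Assuming a hypothetical per-replication probability p for each specific mutation and an assumed population size N (illustrative numbers only, not from Behe’s paper), a k-part combination requiring all k specific mutations together has an expected waiting time growing as 1/(N·p^k):

```python
# Sketch of the matched-mutation scaling argument (illustrative numbers).
# p: assumed probability of one specific mutation per replication
# N: assumed number of replications per generation
# A k-part IC system needing k specific mutations together arises in the
# population with per-generation probability roughly N * p**k.

def expected_generations(k, p=1e-8, N=1e9):
    """Rough expected generations before a k-mutation combination appears."""
    return 1.0 / (N * p ** k)

for k in (2, 3, 4):
    print(f"{k}-component: ~{expected_generations(k):.0e} generations")
```

    With these illustrative numbers, each extra matched component multiplies the waiting time by 1/p, which is the qualitative point: 2-component IC is rare but reachable, while 4-component is effectively out of range.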

  8. Patrick says:

    bfast,

    Color me lazy (I did search a little), but where is Behe’s new paper?

    EDIT:

    Here it is

    http://www.proteinscience.org/.....type=HWCIT

    Discussed here:

    http://www.evolutionnews.org/2.....l_s_1.html

    http://www.evolutionnews.org/2.....s_k_8.html

  9. Joseph says:

    Winston:
    Alright, who can expound on the method used to achieve “IC”?

    Me, me- pick me- I can do it!

    The method used to achieve IC is- ready- DESIGN!

    That is how it always is. Can IC “evolve”? Sure, if it was designed to.

    Any questions?

  10. dopderbeck says:

    It does seem more than a little dumb to design a program capable of producing an irreducibly complex system in order to demonstrate that IC systems don’t require design.

  11. bFast says:

    Patrick, the paper I am referring to is the one that Behe was drilled on in the Judge Jones case.

  12. WinstonEwert says:

    I’m afraid I must disagree with everyone.

    Firstly, you object to simulations of evolution by virtue of the fact they require design.

    Let us suppose that I were to simulate clouds in a computer program. Would that mean the result of the simulation would somehow be invalid, and derived from my intelligence, because I designed the simulation?

    The fact is, the computer simulation simulates a set of rules. Either that set of rules corresponds to the natural situation or it does not. You cannot disqualify the entire class of evolutionary simulations.

    Secondly, you may argue that a certain number of mutations can be chained together by random chance to produce a very small IC system. But the IC system here is made up of at least 5 parts, beyond any reasonable reach of random chance.

    Thirdly, you may argue that the designer was specifically attempting to evolve IC. True enough, but that doesn’t change the outcome of the simulation. If the simulation is a reasonable analogue of a Darwinian situation, then the intentions of the simulation’s creator are irrelevant.

    Now, I am not a Darwinist, just to be clear. However, I do not think those arguments hold sufficient water.

    The simulation simulates a changing environment in the form of the threshold. There is nothing wrong with this; environments do change, and what once was viable may not be now. However, the system that works now also worked then. It may not work well enough now, but that’s not the point. The system does not fully fail.

  13. TomT says:

    This is not irreducible complexity.

    Here’s the definition of IC the author of the program gave:

    1. It has non-zero fitness (see below for an explanation of fitness)
    2. It has at least five “parts”.
    3. Removal of any of its parts results in fitness dropping to zero.

    Here’s how Behe defines IC:
    “By irreducibly complex I mean a single system composed of several well-matched parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning”

    The program is set up so that points are given for a ball dropping into box five. There are only ten boxes; this is not that hard to do by trial and error. BTW, I think there is a bug in the program that causes the fitness to go to zero after several hundred generations.

  14. WinstonEwert says:

    “BTW, I think there is a bug in the program that causes the fitness to go to zero after several hundred generations.”

    Not exactly. The program makes the “environment” harder and harder to live in; this way, eventually an “organism” comes up for which the reduced-complexity version would fail in the current environment. And that’s declared IC.

    But not always: in certain circumstances the environment will become so harsh that nothing survives, leaving you with only fitness-0 organisms.
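    The mechanic Winston describes can be sketched as a selection loop with a rising survival threshold. This is a minimal sketch under assumed fitness and mutation functions, not the program’s actual code:

```python
import random

# Minimal sketch of the rising-threshold mechanic described above.
# fitness() and mutate() are stand-ins for the real game's logic.

def fitness(organism):
    return sum(organism)  # placeholder for the ball-dropping score

def mutate(organism):
    child = organism[:]
    i = random.randrange(len(child))
    child[i] = random.randint(0, 4)  # five states per square, as in the game
    return child

population = [[random.randint(0, 4) for _ in range(10)] for _ in range(50)]
threshold = 0
for generation in range(200):
    # Selection: only organisms at or above the current threshold reproduce.
    survivors = [o for o in population if fitness(o) >= threshold]
    if not survivors:
        break  # the harsh-environment extinction Winston notes
    population = [mutate(random.choice(survivors)) for _ in range(50)]
    threshold += 1  # the environment gets steadily harder
```

    When the threshold outpaces what mutation can supply, no organism qualifies, which matches the all-zero-fitness outcome reported earlier in the thread.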

  15. bFast says:

    WinstonEwert, I generally agree with you on the philosophical position that simulations are not ipso facto invalid. However, the 5^100 issue, as far as I am concerned, is a big deal. If the game board were broadened to 20×20, how much harder would it be to obtain IC? Further, though I wouldn’t negate the value of simulations on their face, the simulation at best produces a philosophical case, not a practical case for any particular claim of biological IC.

  16. WinstonEwert says:

    The point is that 5^100 is way higher than the number of board layouts that existed in the simulation. That is, it has performed considerably better than a random blind search.

    In this particular simulation a 20×20 board wouldn’t affect it much. It would probably leave most of the area blank, only actually using a narrow strip from its source to destination.
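    Winston’s point can be made concrete: even generous assumptions about how many boards the simulation ever evaluates leave it sampling a vanishing fraction of the 5^100 space. The population and generation counts below are illustrative assumptions, not taken from the program:

```python
import math

# Illustrative (assumed) figures for boards evaluated by the simulation.
population = 100
generations = 1000
boards_examined = population * generations  # 1e5 boards in total

space_log10 = 100 * math.log10(5)  # log10 of 5^100, about 69.9
fraction = math.log10(boards_examined) - space_log10  # log10 of coverage

print(f"fraction of the space examined: about 10^{fraction:.0f}")
```

    A hundred thousand boards against a 10^70 space is coverage of about one part in 10^65, so finding anything at all is down to the fitness function guiding the search, not to sampling.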

  17. Borne says:

    I’m not a genetic algorithm expert so I won’t pretend to have done an exhaustive evaluation of the program.

    But, as a professional IT worker I don’t think it is a valid “simulation” at all.

    In fact I wouldn’t even call this a simulation.

    It comes nowhere near the complexity of a real-life system. It is vastly oversimplified, as usual from Darwinists.

    And IC is not something that can be obtained by a parts count plus a fitness equation! IC implies more than just some parts + fitness. It implies useful functionality and genetic “meaning”. Nor does the program actually attempt to make anything with functional meaning.

    IC also often implies “coherent concurrency” i.e. 2 or more molecular machines working in harmony and in a mutually beneficent way – which is an NDE killer all by itself!

    Coherent concurrency is not something that CAN evolve. It requires functional synchronization – something RS + NS can’t do. It requires planning which implies intelligence.
    ———–
    The source code is fairly well structured, but the coder makes the mistake of naming classes using verbs.

    Objects (Java classes) are not actions but actors: nouns, not verbs. A minor fault, but important, as it reveals at the least a lack of understanding of what Objects are in OOP, and at worst a faulty object-oriented application.

    Notice also that the author directs us to talkorigins for more info on IC – hardly the place to go!!

    Next please…

  18. WinstonEwert says:

    Do you think it’s a problem to look at a simplified version of a situation (as long as you take that into account)?

    While the simulation does not have quite an accurate notion of IC, do you not think we can check for systems in evolutionary simulations which would qualify as IC? And if we cannot find them, use that as evidence?

    “The source code is fairly well structured but the coder makes the mistake of naming classes using verbs.”
    Yes, but completely beside the point.

  19. Doug says:

    The simplified element looks like a kind of regression analysis. In that sense, there is no way to discount the variables he left out. They are just too hard to standardize.

    Silly Darwinist, Tricks are for kids. La La la la la … I can’t heeeeear you.

  20. nullasalus says:

    “Firstly, you object to simulations of evolution by virtue of the fact they require design.”

    Actually, I think this is an interesting philosophical issue to address.

    Is it possible to create a computer simulation of evolution that eradicates design? I don’t think so – only because by creating the simulation, you illustrate how even an evolution devoid of miracles, and capable of achieving supposed IC and complex/intelligent life, can ultimately be… designed.

    That’s what I think the real strength of the ID concept is, one I wish was expounded on more. Proving design may be an impossible task – but it’s equally hard to disprove it, or prove ‘no design’. Both are ultimately philosophical questions. The difference is that one of them is treated more as science than philosophy – and that must end.

  21. TomT says:

    I’m having second thoughts. After a closer look at the program, it is something like irreducible complexity. If one part can be removed that reduces the fitness to zero, then the parts must be well matched. However, the program is unrealistic because there is only one evolutionary goal: get a ball into box five. Even at zero fitness the organisms get to keep trying. It would be as if blood clotting were the only evolutionary goal and organisms got to keep trying until some solution for blood clotting was found.

    Here’s my question: when an IC system in the program was found, how many generations did it tend to last? IC would not be an evolutionary advantage if non-IC solutions were available (as is the case in this program). If removing one part were to reduce fitness to zero, then the organism would be more susceptible to harmful mutations. In biological terms it would be more constrained.

    Furthermore, Behe’s argument is not that IC can’t evolve, as if it were a logical impossibility, but that it is unlikely to have evolved. In order for an IC system to evolve it must take an indirect route, and the more systems found to be IC, the less likely that Darwinian evolution is a probable explanation. If I were to watch this program more closely, I’m sure the results would be similar: all the IC systems got there indirectly.

  22. inunison says:

    I agree with nullasalus, but even if we disregard problems with computer simulations, they still have to meet the following conditions to show that evolutionary mechanisms can produce an IC system. Angus Menuge lists five:

    1 Availability. Among the parts available for recruitment to form the IC system, there would need to be ones capable of performing the highly specialized tasks, even though all of these parts serve some other function or no function.

    2 Synchronization. The availability of these parts would have to be synchronized so that at some point they are all available at the same time.

    3 Localization. The selected parts must all be made available at the same “construction site” at the time they are needed.

    4 Coordination. The parts must be coordinated in just the right way to produce relevant and functional assembly.

    5 Interface compatibility. The parts must be mutually compatible and capable of properly “interacting”.

    I fail to see how this simulation meets these conditions. Maybe someone can help me there.

  23. Joseph says:

    Winston,

    In order to correctly simulate anything you have to have a complete understanding of it.

    And Behe’s argument is NOT that IC can’t evolve – it is all about the mechanism. IOW, IC can evolve if it was designed to do so. It is against all reason to think that culled genetic accidents can bring about IC.

    ID is NOT anti-evolution. All ID says is that the blind watchmaker does NOT have sole dominion over the processes.

    What needs to be demonstrated is that a system that does not function until all the parts are manufactured and properly configured (including command & control) can arise via stochastic, ie blind-watchmaker-type, processes.

    If natural selection only acts on that which exists, IC at least appears to go against that premise (in a materialistic, anti-ID scenario).

  24. DaveScot says:

    Evidently the program has some more evolving to do because it doesn’t work at all on my PC.

  25. bFast says:

    I’m smelling a rat with this thing. Why is it increasing in difficulty? Could it be that this sim is playing with loaded dice?

    Here’s what I am guessing is happening. As the “difficulty” rises, a solution is being trained into the organism. The desired IC is, at some point, the same complexity, but without the irreducibility. At some point the five steps that will become irreducible (removal of any one drops fitness to 0) are called for, but the factor of irreducibility isn’t yet in the sim.

    This would be the same as me saying to my child, “Make your bed, brush your teeth, get dressed, brush your hair, have breakfast, and you will get a reward,” if I start out by training each activity, so that “make your bed” and “brush your teeth” each get a reward, but then diminish the number of rewards given out over time.

    Such would surely be an unnatural and “rigged” scenario.

  26. Patrick says:

    I also agree with Winston that merely saying “the program was designed” is not a standalone argument. The results of GAs are more of an argument in themselves.

    Depending on the design of the GA there appears to be a complexity threshold that can be reached. This is primarily determined by the environment and the constraints imposed on the randomize and fitness functions. Earlier on UD I challenged a Darwinist to refine his GA so that it could produce 500 informational bits (a requirement to be considered CSI). If I remember correctly he was stuck around 80 informational bits. I suggested several other approaches for his GA. Designed correctly, I wouldn’t be surprised if his program could eventually reach 500 informational bits. But those designs are likely to incorporate obvious front-loading, whereas a more generalized approach is likely to stay stuck.

    GAs are not general-purpose. They have to be crafted with a purpose and constrained within certain parameters in order to reach the goal. Biological reality often has very little to do with the refined design of GAs, which are more like automated “trial and error” programs. What this program does tell us is that intelligence combined with Darwinian mechanisms can produce results in artificially constrained environments…but that does nothing to alter biological reality. So as we learn the limitations of GAs, we might learn more about the limitations of Darwinian mechanisms acting on biology.

    In the past on UD I’ve also noted my belief that in special scenarios, under certain conditions, IC/CSI “might” be produced within biology. (I say might since I haven’t seen this occur with CSI yet. And apparently Behe’s paper shows this to be the case with limited forms of IC.) When a certain threshold of complexity is reached via design, it may be possible for additional “emergent complexity” to be generated, depending on how the system was designed (plasticity in the language and the formulation of base classes of information). I know Bill thinks that CSI cannot come about in such a manner, but what if under VERY limited circumstances it can? If that’s the case it might be a good idea to develop a “fallback position” in the literature, just in case there are very limited instances where CSI can be generated inside an already complex system without intelligence. Instead of being seen as “scrambling to cover our butts”, we’d just point to the literature and say “this wasn’t completely unexpected”. In that case the design-detection methods would have to be refined to account for these limited instances.

  27. a5b01zerobone says:

    Does the IC Evolver really show that complexity can rise without intelligence?

    I have a background in the humanities so this is all way above my head.

    Thanks guys

  28. the wonderer says:

    Shouldn’t a simulation also simulate reality? Manipulating electrons in a cyberworld does not jibe with the three-dimensional world of the whole atom.

    The order of three-dimensional complexity alone is staggering.

  29. Imaginer says:

    I think this simulation is useful in helping ID folks understand what is actually MEANT by irreducible complexity. Dembski often presents “specified complexity” as an overarching term which encompasses IC, but I see IC and SC as more like overlapping circles – there are things which are IC which are NOT SC; among them, this program.

    At least, this is true if you define IC as narrowly as the Behe definition referenced above – that is, if you remove a part, the machine flops. A big part of irreducible complexity is (or should be) COMPLEXITY, which this simulation does not convincingly produce (has it demonstrated the production of IS – irreducible simplicity?). I don’t have any numbers to back this up, but on an intuitive level, the complexity present in the simulation is fully a function of the behavior of the parts and the ball. Whether a configuration of those parts exists that maximally delivers balls into slot 5 seems a fairly simple problem (any number of random configurations might do the trick).

    Complexity is not an arrangement of existing basketball players to best beat the other team. Complexity is inventing the game of basketball in the first place. Maybe if the simulation could arbitrarily rewrite the Java in the functionality of a game piece? Still thinking about this one…

    This is a good mental exercise!

  30. chunkdz says:

    This sim is poorly representative of biological IC, because biological IC isn’t just a matter of adding a protein to an existing structure. Bio IC must include the assembly mechanisms that put the pieces together. Often these mechanisms themselves appear to be IC.

  31. franky172 says:

    If I remember correctly he was stuck around 80 informational bits.
    I wasn’t “stuck” on anything :). The original question asked whether a GA could develop words faster than blind chance. I thought the GA I developed did so quite well. (See: http://www.duke.edu/~pat7/publ.....rdPt2.html)

    I suggested several other approaches for his GA.
    By the way, I did get around to implementing your suggestion where words matching other words at any specific points would be considered “fit” – so “haxxmxn” would have a fitness of 4, since it matches “hangman” in 4 spots. I ran the GA for 100 iterations, and it never found a true word of any significant length – in fact, since there was no limit to the number of characters that could be included in a string, many of the character strings exceeded 100 letters but were just pure gibberish from an English point of view.
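    franky172’s fitness rule, as described, can be sketched like this. This is a guess at the details; the dictionary and scoring are assumptions, not his actual code:

```python
def positional_fitness(candidate, dictionary):
    """Best per-position letter match between candidate and any dictionary word.

    Matches are counted position-by-position, as in the 'haxxmxn' vs.
    'hangman' example above; a partial match earns partial fitness even
    though the candidate is not a real word.
    """
    best = 0
    for word in dictionary:
        matches = sum(1 for a, b in zip(candidate, word) if a == b)
        best = max(best, matches)
    return best

# The example from the comment: h-a-x-x-m-x-n vs. h-a-n-g-m-a-n
print(positional_fitness("haxxmxn", ["hangman"]))  # 4
```

    Because any near-word scores, the fitness surface is much smoother than a whole-words-only rule, but nothing pushes a near-word the rest of the way to being a real word, which fits the gibberish-strings outcome described above.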

    Designed correctly, I wouldn’t be surprised if his program could eventually reach 500 informational bits.
    Other phrase-generation GAs have generated significant portions of text that far exceed 106 letters (the approximate length of a “500-bit string”, assuming a language of 26 characters and uniform probabilities on the individual characters).
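    The 106-letter figure checks out: at log2(26) bits per uniformly chosen letter, 500 bits corresponds to roughly 106 characters:

```python
import math

bits_per_letter = math.log2(26)  # uniform 26-letter alphabet
letters_for_500_bits = 500 / bits_per_letter

print(f"{bits_per_letter:.2f} bits/letter -> {letters_for_500_bits:.1f} letters")
```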

  32. Patrick says:

    I wasn’t “stuck” on anything 🙂 .

    Sorry, I didn’t mean that as an insult. What I meant was that the design of your initial implementation tended to produce results around there and couldn’t get any further, not that you yourself couldn’t design anything better.

    The original question asked if a GA could develop words faster than blind chance.

    At least in regards to ID I never saw that as an issue to disagree with. Designed properly, I’d certainly assume a GA would be faster than a blind search! But I take it that the reason you bring this up is because that was your primary objective?

    By the way, I did get around to implementing your suggestion…it never found a true word of any significant length

    That’s about what I expected. My original point in suggesting it was to show how a GA with looser constraints might not reach the goal even if it was designed with the intention of hitting the goal using Darwinian mechanisms. As in, “depending on the design of the GA there appears to be a complexity threshold that can be reached”. Still, I’m a little surprised it “never” hit anything similar to the original program. Run it for more iterations?

    Original comment:

    http://www.uncommondescent.com.....ment-86316

    On a side note, here was your original prediction:

    I imagine it will result in longer strings being found, since there are significantly more “livable” strings under this condition.

    I’m curious; why were you originally optimistic?

    Other phrase generation GAs have generated significant portions of text that far exceed 106 letters

    You’ve mentioned that several times before. Could you provide a link to the relevant programs so others can see what you’re talking about? I’m curious to see how they were designed. Probably would make a more interesting topic than the IC program.

  33. franky172 says:

    Patrick
    Sorry,…
    Please, don’t be sorry – I was just kidding 🙂

    Still, I’m a little surprised it “never” hit anything similar to the original program. Run it for more iterations?
    I can certainly try running it for more iterations. Right now the results I’m talking about are just stored in my head from a very small number of runs of the GA – so take them with a grain of salt. I should have some mental and computer flops to spare soon, actually, so maybe I can give it another shot. Maybe within a week?

    I imagine it will result in longer strings being found, since there are significantly more “livable” strings under this condition.
    I’m curious; why were you originally optimistic?

    Well, for a few reasons.
    First, a common problem in genetic algorithms is that the fitness solution space is highly sparse (i.e. the fitness surface is very often identically equal to 0). The modification you suggested should result in a much smoother fitness surface, which is typically “good” for GAs. Second, one of the problems with the original GA is homogeneity – that is, the population can sometimes, at an early iteration, find one good, say, 5-letter word, which then proceeds to take over the population and stifle other more promising words. Then the GA goes nowhere (this is another common problem in GAs). I also thought that your suggestion would mitigate these effects somewhat.

    As it turns out, neither of these played a significant role in the experiments with your fitness function so far; instead, I found that changing the fitness definition as you described resulted in too flat a landscape, so that there is actually very little pressure for a string to be a word – i.e. “bxd” is as good a word as “thxx”. But I will try to run the code some more to see if I can reach any other conclusions. It had kind of dropped off my radar screen.

    You’ve mentioned that several times before. Could you provide a link to the relevant programs so others can see what you’re talking about?

    I believe that the author of the software is kind of a persona non grata round these parts, but if you google “phrasenation”, it will be the first hit. If you are interested in discussing it, I can try to re-create the software in MATLAB so we can easily consider alternate fitness function formulations, but that may take more than one week 🙂

  34. Patrick says:

    Oh, yes…I’ve run into Zachriel’s stuff before. For everyone else’s benefit, here are the rules his program runs by:

    http://www.zachriel.com/mutagenation/Contents.asp

    We will use these rules to determine how many random mutations and recombinations (with selection) are required to evolve long meaningful English phrases. (Note: the Original Rules are a subset of the Extended Rules.)

    * STRING: a sequence of letters. To be valid and remain extant, it must form an English word or phrase, e.g. “DOG”.

    * WORD VALIDATION: Words must appear in Merriam-Webster to be considered a valid word for our purposes.

    * POPULATION: a collection of valid strings, e.g. “DOG”, “ZEBRA” or “CAT IN THE HAT”.

    (Figure on the linked page: all the mutants of the single-letter word, “O”.)

    * POINT-MUTATIONS: Change from any letter to any other letter, e.g. “BIND” to “BAND”; or the addition of any letter to the beginning or ending of a string, e.g. “LIMES” to “SLIMES”, or “HONE” to “HONEY”; or the insertion of any letter at any point in the string, e.g. “LAD” to “LAID”; or the deletion of any letter at any point in the string, e.g. “LIKES” to “LIES”. Every single possible point-mutation must be considered, but if it forms a non-valid string, it is automatically de-selected.

    * SNIPPETS: Any contiguous section of a string, in whole or in part, e.g. “PPE” from “SLIPPERY”. If the snippet forms a valid string, it can become a member of the population, e.g. “LIP” from “SLIPPERY”. The remainder of the string, minus the snipped portion, can also become a member of the population if it forms a valid string, e.g. snipping “IPPER” from “SLIPPERY” leaves “SLY”. Every single possible snippet and remainder must be considered, but if it forms a non-valid string, it is automatically de-selected. All snippets, valid or not, must be considered for insertions.

    * INSERTIONS: An insertion is made by taking any snippet of any string and inserting it into any valid string at any place in that string, e.g. “RAV” from “BRAVERY” can be inserted into “TELLING” to form “TRAVELLING”. Every single possible insertion must be considered, but if it forms a non-valid string, it is automatically de-selected.

    * SELECTION: Besides automatic selection, at the end of each generation, we can de-select any strings we choose leaving a pool of “beneficial” strings. We can cull the herd.
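    The point-mutation rule quoted above can be sketched as a neighbor generator. This is a sketch; the toy dictionary is a stand-in for the Merriam-Webster check the rules specify:

```python
import string

def point_mutants(word, dictionary):
    """All valid strings one point-mutation away, per the rules quoted above.

    Covers substitution, insertion (including at either end), and deletion;
    anything not in `dictionary` is automatically de-selected.
    """
    candidates = set()
    letters = string.ascii_uppercase
    for i in range(len(word)):
        for c in letters:
            candidates.add(word[:i] + c + word[i + 1:])  # substitution
        candidates.add(word[:i] + word[i + 1:])          # deletion
    for i in range(len(word) + 1):
        for c in letters:
            candidates.add(word[:i] + c + word[i:])      # insertion
    candidates.discard(word)
    return sorted(c for c in candidates if c in dictionary)

# Toy dictionary standing in for Merriam-Webster (an assumption):
words = {"BIND", "BAND", "BOND", "BIN", "BID"}
print(point_mutants("BIND", words))  # ['BAND', 'BID', 'BIN', 'BOND']
```

    Every mutant is enumerated and invalid strings are dropped, mirroring the “automatically de-selected” clause of the rules.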

    Well-designed constraints considering the intended goal. If I remember aright it’s about the same level as the “methinks like a weasel” approach.

  35. franky172 says:

    I’m not sure that I understand the problem here – is there a particular step that you do not like? If you can be more specific about which parts of the algorithm you think are information-bearing or “cheating”, perhaps we can explore alternate approaches. To summarize, here are the shortened rules:

    1) Words must be full words to have any fitness.
    2) Use point-mutations and insertions in cross-over.
    3) Strings can be de-selected. (It’s not clear how this is done.)

    If I remember aright it’s about the same level as the “methinks like a weasel” approach.
    I do not believe this is the case – in “methinks like a weasel”, organisms were judged based on their distance from a pre-defined string; no such string exists here, and only full strings are considered fit.

  36. Patrick says:

    I do not believe this is the case – in “methinks like a weasel”, organisms were judged based on their distance from a pre-defined string; no such string exists here, and only full strings are considered fit.

    Let me rephrase, then…it’s between “methinks like a weasel” and your original approach. Funny thing is, that is what I originally wrote, but I edited it to be “about the same level” since I hadn’t looked at Zachriel’s program in a long while.

  37. marsCubed says:

    First post here, hi all. Programmer/art lecturer/science hobbyist.

    Isn’t the flaw with this program similar to the flaw in the ID argument that the time taken for genetic material to assemble in solution is unfeasibly long? I.e., the flaw is that in Nature there is more than one of each component part; more than one test tube.

    In Nature there are vast oceans. Some combinations are not allowed (laws of physics), etc. ID’s ‘impossibly long time’ required to generate life’s proteins turns out to be about a week when the calculation is scaled up to Earth’s oceans.

    In other words, this program may need to be run trillions of times to yield valid figures, by which time computers may have evolved too (work it out statistically). Thus far it only seems to demonstrate that in increasingly harsh environments organisms will die out unless they have a large, diverse gene pool, or adapt in time.

    In Nature chemistry happens for a reason, e.g. an undersea volcano. Gradients (a kind of niche where working replicators may survive) also exist. It is in such contexts of planet-wide chemical gradients that life may have begun, and there may have been vast amounts of it: chemical potentials for which the environment for early replication may have been very kind, with evolution driven by factors such as mechanical integrity, not increasingly harsh as in what appears to be a poorly conceived program.
    It is a good idea; I just don’t think it has been properly thought out (I’ve read the comments, and will read the code soon and come back). I am far from convinced that its conclusions will scale well to what it’s supposed to represent.
