Uncommon Descent Serving The Intelligent Design Community

Behe’s “Multiple mutations needed for E. coli”



An interesting paper has just appeared in the Proceedings of the National Academy of Sciences, “Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli”. (1) It is the “inaugural article” of Richard Lenski, who was recently elected to the National Academy. Lenski, of course, is well known for conducting the longest, most detailed “lab evolution” experiment in history, growing the bacterium E. coli continuously for about twenty years in his Michigan State lab. For the fast-growing bug, that’s over 40,000 generations!

I discuss Lenski’s fascinating work in Chapter 7 of The Edge of Evolution, pointing out that all of the beneficial mutations identified from the studies so far seem to have been degradative ones, where functioning genes are knocked out or rendered less active. So random mutation much more easily breaks genes than builds them, even when it helps an organism to survive. That’s a very important point. A process which breaks genes so easily is not one that is going to build up complex coherent molecular systems of many proteins, which fill the cell.

In his new paper Lenski reports that, after 30,000 generations, one of his lines of cells has developed the ability to utilize citrate as a food source in the presence of oxygen. (E. coli in the wild can’t do that.) Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it’s not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a “citrate permease” which can transport citrate from outside the cell through the cell’s membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1)

Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.)

The major point Lenski emphasizes in the paper is the historical contingency of the new ability. It took trillions of cells and 30,000 generations to develop it, and only one of a dozen lines of cells did so. What’s more, Lenski carefully went back to cells from the same line he had frozen away after evolving for fewer generations and showed that, for the most part, only cells that had evolved at least 20,000 generations could give rise to the citrate-using mutation. From this he deduced that a previous, lucky mutation had arisen in the one line, a mutation which was needed before a second mutation could give rise to the new ability. The other lines of cells hadn’t acquired the first, necessary, lucky, “potentiating” (1) mutation, so they couldn’t go on to develop the second mutation that allows citrate use. Lenski argues this supports the view of the late Stephen Jay Gould that evolution is quirky and full of contingency. Chance mutations can push the path of evolution one way or another, and if the “tape of life” on earth were re-wound, it’s very likely evolution would take a completely different path than it has.

I think the results fit a lot more easily into the viewpoint of The Edge of Evolution. One of the major points of the book was that if only one mutation is needed to confer some ability, then Darwinian evolution has little problem finding it. But if more than one is needed, the probability of getting all the right ones grows exponentially worse. “If two mutations have to occur before there is a net beneficial effect — if an intermediate state is harmful, or less fit than the starting state — then there is already a big evolutionary problem.” (4) And what if more than two are needed? The task quickly gets out of reach of random mutation.
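The exponential penalty described here can be made concrete with a toy calculation. In this sketch, p is a generic order-of-magnitude figure for a bacterial point mutation rate, not a number taken from the book or from Lenski's paper, and the mutations are assumed independent:

```python
def joint_mutation_probability(p: float, k: int) -> float:
    """Chance that k specific, independent mutations are all present
    before any of them is individually beneficial."""
    return p ** k

# p is a generic order-of-magnitude point-mutation rate per replication,
# chosen for illustration only
p = 1e-8
for k in (1, 2, 3):
    print(f"k={k}: about {joint_mutation_probability(p, k):.0e} per replication")
```

Each additional required mutation multiplies the already tiny probability by another factor of p, which is why the argument says the task "quickly gets out of reach" as k grows.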

To get a feel for the clumsy ineffectiveness of random mutation and selection, consider that the workers in Lenski’s lab had routinely been growing E. coli all these years in a soup that contained a small amount of the sugar glucose (which they digest easily), plus about ten times as much citrate. Like so many cellular versions of Tantalus, for tens of thousands of generations trillions of cells were bathed in a solution with an abundance of food — citrate — that was just beyond their reach, outside the cell. Instead of using the unreachable food, however, the cells were condemned to starve after metabolizing the tiny bit of glucose in the medium — until an improbable series of mutations apparently occurred. As Lenski and co-workers observe: (1)

“Such a low rate suggests that the final mutation to Cit+ is not a point mutation but instead involves some rarer class of mutation or perhaps multiple mutations. The possibility of multiple mutations is especially relevant, given our evidence that the emergence of Cit+ colonies on MC plates involved events both during the growth of cultures before plating and during prolonged incubation on the plates.”

In The Edge of Evolution I had argued that the extreme rarity of the development of chloroquine resistance in malaria was likely the result of the need for several mutations to occur before the trait appeared. Even though the evolutionary literature contains discussions of multiple mutations (5), Darwinian reviewers drew back in horror, acted as if I had blasphemed, and argued desperately that a series of single beneficial mutations certainly could do the trick. Now here we have Richard Lenski affirming that the evolution of some pretty simple cellular features likely requires multiple mutations.

If the development of many of the features of the cell required multiple mutations during the course of evolution, then the cell is beyond Darwinian explanation. I show in The Edge of Evolution that it is very reasonable to conclude they did.

References

1. Blount, Z.D., Borland, C.Z., and Lenski, R.E. 2008. Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli. Proc. Natl. Acad. Sci. USA 105:7899-7906.

2. Hall, B.G. 1982. Chromosomal mutation for citrate utilization by Escherichia coli K-12. J. Bacteriol. 151:269-273.

3. Pos, K.M., Dimroth, P., and Bott, M. 1998. The Escherichia coli citrate carrier CitT: a member of a novel eubacterial transporter family related to the 2-oxoglutarate/malate translocator from spinach chloroplasts. J. Bacteriol. 180:4160-4165.

4. Behe, M.J. 2007. The Edge of Evolution: the search for the limits of Darwinism. Free Press: New York, p. 106.

5. Orr, H.A. 2003. A minimum on the mean number of steps taken in adaptive walks. J. Theor. Biol. 220:241-247.

Comments
Based upon his comments brembs must be new to this debate. His statements about what ID proponents believe are truly off the wall. And the topics he brings up have been addressed many times over. Go back to lurking, brembs. Read some more and once you have something interesting to say perhaps come back.

Patrick
July 4, 2008, 06:45 AM PST
Oh and I forgot one other thing: species. What do IDers mean by species? Scientists can't really agree on what a species is, can IDers?

One common, but by no means uncontested, species definition refers to reproductive isolation. According to this definition, speciation has been observed a few times, even in animals (Drosophila comes to mind), but also in plants (grass species on islands). Mind you that certain breeds of dogs would classify as species in this sense, as they would not be able to reproduce with each other naturally (Mastiff and Chihuahua, for example), which is exactly why it is contested. Another definition regards the fertility/survival of hybrid offspring. However, a mule is a largely sterile hybrid, so according to this definition evolution resulting in horses and donkeys would be absolutely impossible for IDers, even though they're so similar. So maybe a species is something that somehow looks different from another? There are many species that nobody denies are very different species (even different orders!) but look very similar, to the extent that even specialists can't tell them apart easily. So that makes no sense either.

What to do? I have a very practical solution: species in the sense of the biblical "kinds"! IDers could use it to refer to higher-up clades such as kingdom, phylum, class, order, or family. I suggest IDers use some of the latter to mean species (kinds). This would be very practical, because then they can always retreat to the next higher level when one level has been shown experimentally :-) "No, no, it's really kind/family/order/class/etc. what I meant with 'species', really!" :-)

brembs
June 19, 2008, 02:26 AM PST
Just found this thread. Only read the first few of the comments and then searched for these keywords:

- conjugation
- horizontal gene transfer
- gene duplication
- nondisjunction
- polyploidy

And couldn't find a single comment with these readily observable genetic phenomena, all of which increase the amount of DNA in an organism and most of which are survived just fine. I really can't assume there are IDers out there who seriously propose that some magic man added some chromosomes so that the king crab gets 208, then takes away some so the fruit fly only gets 8, and then again adds some such that humans get 46, and then adds some more so the camel gets 60? What are you guys saying, that an increase in DNA hasn't been seen? Genes/chromosomes duplicate all the time and then acquire mutations which change their information. Come on, that's genetics 101, not rocket science.

Likewise, what's "beneficial" or "good"? You mean in the lab or in the wild? Under stress or under perfect conditions? Hot or cold? Island or mainland? Freshwater or saltwater? In yeast, 60% of all knocked-out genes do not show any effect in the lab; in mice it's still 30% of genes. In the genome age we know that many if not most gene knock-outs (depending on the organism) have comparatively small, incremental effects, many of which are buffered out. It's called degeneracy (no, not redundancy and not the degeneracy of today's youth, but degenerate as in the degenerate code). So instead of inventing nonsensical nomenclature for mutations, IDers should try to explain degeneracy: if there are intelligent designers out there changing things around in the DNA every once in a while, why is there degeneracy? That would be so totally unnecessary and impractical. The only reason I can see to put degeneracy in the genetic system would be to trick someone into believing the system actually evolved, but I'm sure you can come up with better reasons why degeneracy was chosen by the intelligent designer.

Evolution predicts degeneracy. How does ID predict degeneracy?

brembs
June 19, 2008, 01:49 AM PST
http://creationontheweb.com/content/view/5827
In 1988, Richard Lenski, Michigan State University, East Lansing, founded 12 cultures of E. coli and grew them in a laboratory, generation after generation, for twenty years (he deserves some marks for persistence!). The culture medium had a little glucose but lots more citrate, so once the microbes consumed the glucose, they would continue to grow only if they could evolve some way of using citrate. Lenski expected to see evolution in action. This was an appropriate expectation for one who believes in evolution, because bacteria reproduce quickly and can have huge populations, as in this case. They can also sustain higher mutation rates than organisms with much larger genomes, like vertebrates such as us.2 All of this adds up, according to neo-Darwinism, to the near certainty of seeing lots of evolution happen in real time (instead of imagining it all happening in the unobservable past). With the short generation times, in 20 years this has amounted to some 44,000 generations, equivalent to some million years of generations of a human population (but the evolutionary opportunities for humans would be far, far less, due to the small population numbers limiting the number of mutational possibilities; the much larger genome, which cannot sustain a similar mutation rate without error catastrophe, i.e. extinction; and sexual reproduction, which means there is a 50% chance of failing to pass on a beneficial mutation).

As noted elsewhere (see ‘Giving up on reality’), Lenski seemed to have given up on ‘evolution in the lab’ and resorted to computer modelling of ‘evolution’ with a program called Avida (see evaluation by Dr Royal Truman, Part 1 and Part 2, which are technical papers). Indeed, Lenski had good reason to abandon hope. He had calculated1 that all possible simple mutations must have occurred several times over but without any addition of even a simple adaptive trait.
Lenski and co-workers now claim that they have finally observed the hoped-for evolution in the lab.

The science: what did they find?

In a paper published in the Proceedings of the National Academy of Sciences, Lenski and co-workers describe how one of 12 culture lines of their bacteria has developed the capacity for metabolizing citrate as an energy source under aerobic conditions.3 This happened by the 31,500th generation. Using frozen samples of bacteria from previous generations they showed that something happened at about the 20,000th generation that paved the way for only this culture line to be able to change to citrate metabolism. They surmised, quite reasonably, that this could have been a mutation that paved the way for a further mutation that enabled citrate utilization. This is close to what Michael Behe calls ‘The Edge of Evolution’—the limit of what ‘evolution’ (non-intelligent natural processes) can do. For example, an adaptive change needing one mutation might occur every so often just by chance. This is why the malaria parasite can adapt to most antimalarial drugs; but chloroquine resistance took much longer to develop because two specific mutations needed to occur together in the one gene. Even this tiny change is beyond the reach of organisms like humans with much longer generation times.4 With bacteria, there might be a chance for even three coordinated mutations, but it’s doubtful that Lenski’s E. coli have achieved any more than two mutations, so they have not even reached Behe’s edge, let alone progressed on the path to elephants or crocodiles.

Now the popularist treatments of this research (e.g. in New Scientist) give the impression that the E. coli developed the ability to metabolize citrate, whereas it supposedly could not do so before.
However, this is clearly not the case, because the citric acid, tricarboxylic acid (TCA), or Krebs, cycle (all names for the same thing) generates and utilizes citrate in its normal oxidative metabolism of glucose and other carbohydrates.5 Furthermore, E. coli is normally capable of utilizing citrate as an energy source under anaerobic conditions, with a whole suite of genes involved in its fermentation. This includes a citrate transporter gene that codes for a transporter protein embedded in the cell membrane that takes citrate into the cell.6 This suite of genes (operon) is normally only activated under anaerobic conditions.

So what happened? It is not yet clear from the published information, but a likely scenario is that mutations jammed the regulation of this operon so that the bacteria produce citrate transporter regardless of the oxidative state of the bacterium’s environment (that is, it is permanently switched on). This can be likened to having a light that switches on when the sun goes down—a sensor detects the lack of light and turns the light on. A fault in the sensor could result in the light being on all the time. That is the sort of change we are talking about.

Another possibility is that an existing transporter gene, such as the one that normally takes up tartrate,3 which does not normally transport citrate, mutated such that it lost specificity and could then transport citrate into the cell. Such a loss of specificity is also an expected outcome of random mutations. A loss of specificity equals a loss of information, but evolution is supposed to account for the creation of new information; information that specifies the enzymes and cofactors in new biochemical pathways, how to make feathers and bone, nerves, or the components and assembly of complex motors such as ATP synthase, for example. However, mutations are good at destroying things, not creating them.
Sometimes destroying things can be helpful (adaptive),7 but that does not account for the creation of the staggering amount of information in the DNA of all living things. Behe (in The Edge of Evolution) likened the role of mutations in antibiotic resistance and pathogen resistance, for example, to trench warfare, whereby mutations destroy some of the functionality of the target or host to overcome susceptibility. It’s like putting chewing gum in a mechanical watch; it’s not the way the watch could have been created.

Much ado about nothing (again)

Behe is quite right; there is nothing here that is beyond ‘the edge of evolution’, which means it has no relevance to the origin of enzymes and catalytic pathways that evolution is supposed to explain.
Patrick
June 13, 2008, 08:46 PM PST
Here is my scenario of what happened: A mutation occurred at around 20,000 generations, and the citrate-eating ability appeared when one of the bacteria bearing that mutation had a different mutation at around 31,500 generations.

IMO the first mutation was very unusual or rare because (1) it apparently took about nine years to occur (44,000 generations in 20 years is about 2,200 generations per year) and (2) it apparently appeared in only one of twelve lines of bacteria, even though all twelve lines were descended from a single individual. I think that the second mutation is a fairly common one, because it was often expressed again in populations restarted from the frozen preserved samples of 20,000 generations or later, and the reason why this second mutation took so long to be expressed the first time -- about 11,500 generations (from the 20,000th to the 31,500th), or 5 years -- was that bacteria with the preliminary first mutation were scarce, because the preliminary first mutation conferred no advantage in survival. After the preliminary first mutation occurs, appearance of the citrate-eating ability would be just a matter of time if the second mutation were a common one.

Also, I am disturbed by numerous claims that the results of this study refute the ideas of Michael Behe -- IMO that is not the case.

Larry Fafarman
June 13, 2008, 08:00 AM PST
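The two-step scenario in the comment above (a rare, neutral "potentiating" mutation followed by a common second one) can be sketched as a toy simulation. The population size and both rates below are invented for illustration and are vastly smaller than Lenski's real numbers; carriers of the first mutation persist but do not spread, since the mutation is assumed neutral:

```python
import random

def generations_until_cit_plus(pop_size, p_potentiate, p_actualize, rng):
    """Toy two-step model: count generations until some cell that already
    carries the rare neutral first mutation picks up the common second one."""
    carriers = 0  # cells holding the neutral "potentiating" mutation
    gen = 0
    while True:
        gen += 1
        # non-carriers may acquire the rare potentiating mutation
        carriers += sum(rng.random() < p_potentiate
                        for _ in range(pop_size - carriers))
        # any carrier may now acquire the common "actualizing" mutation
        if any(rng.random() < p_actualize for _ in range(carriers)):
            return gen

rng = random.Random(0)
runs = [generations_until_cit_plus(500, 1e-4, 1e-3, rng) for _ in range(20)]
print("generations to Cit+ across 20 runs:", min(runs), "to", max(runs))
```

Even with a common second mutation, the waiting time is dominated by how long it takes carriers of the neutral first mutation to accumulate, which matches the comment's reasoning about the long delay between generation 20,000 and 31,500.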
So IOW, it should be clearly evident why average SAT scores, the catfish populations in manmade lakes, and the popularity of eastern European folk music are all determined by the exact same fundamental force.

JunkyardTornado
June 11, 2008, 11:13 PM PST
One other comment - 'no change' could also mean changes in a "don't care" region of a word containing an instruction. There are often "don't care" regions to optimize logic, if I'm not mistaken.

JunkyardTornado
June 11, 2008, 10:26 PM PST
DaveScot (#88)

I tested Solitaire using the exact same procedure as before - 20 random bit changes followed by testing after each one. Solitaire is 34,064 bytes long whereas my own application was over 500K. So obviously you're bound to hit more errors in a program less than 1/10 the size. This is less analogous to biology, where the size and complexity would be much greater. The result: 12 good, 8 bad. The bad were generally program crashes.

This time I also used a disassembler and compared the altered assembly to the original. For good tests where there were actual source code changes, '-' indicates the original code and '+' indicates the code that replaced it. If it says 'no change' below, that means there was no change to the actual code. However, the distinction between code and data is completely artificial. The data in a tiny application like this will generally be resources, i.e. the data indicating the layout of dialogs, the styles of buttons, and the other controls they contain. If that data is mangled, buttons are liable to be misplaced or not function as intended, dialogs will be the wrong size, etc. So "no change" below just means no assembly source code changes (and I wasn't inclined this time to go hunting down what the resource changes were).

As far as the test procedure, Solitaire only has one menu with the options 'deal', 'undo', 'options' and 'deck', and also the help menu. So I was pretty much running through all the functionality in every test.

What does it mean when there are source code changes and no malfunction is apparent? It could often mean that the code isn't being hit. As you pointed out, only something like a code traversal analyzer will guarantee that all code is being traversed. But the point is, it is extremely difficult to identify any malfunction after a random code change, such that a concerted effort with sophisticated tools is necessary to identify it.
Of course, actually changing, say, a less-than sign to a greater-than sign in code that you actually end up executing would probably have a negative impact. But the point is, you're not likely to hit it, even in a tiny program like this. In a very large program, it's even less likely - much less likely. At least that's my conclusion. Then there are of course code changes you can actually hit where it wouldn't make any difference (e.g. increasing the size of a buffer).

Here's an incomplete instruction list:

CALL  // call subroutine
JE    // jump if equal
JG    // jump if greater
DEC   // decrement
PUSH  // push value on the stack
MOV   // move value into a register
CMP   // compare values

The following were the results, in order (G = good; B = bad):

G  -CALL sol.01002188
   +DB E8
   +DB 86
   +DB DD
   +DB FF
   +DB 7F
G  no change
B
G  +JE SHORT sol.01002087
   +ADD BYTE PTR DS:[ECX],AL
G  no change
B
G  -JG sol.01001B9A
   +DB 0F
   +DB 8F
   +DD sol.010001F8
B* // "Deal" becomes "De'l"
G  -PUSH sol.01007010
   +PUSH sol.01006010
G  no change
G  -DEC ESI
   +DEC EDI
B
B
G  -MOV DWORD PTR DS:[10071EC],EAX
   +MOV DWORD PTR DS:[10071EE],EAX
G  no change
G  no change
B
G  -CMP EAX,6
   +CMP EAX,7
B
B

JunkyardTornado
June 11, 2008, 10:14 PM PST
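The comment's observation that most random flips produce no visible malfunction can be illustrated with a toy geometric model. The image size below matches the Solitaire figure quoted above, but the assumption that the test path only exercises about 3 KB of it is invented for illustration:

```python
import random

def fraction_flips_hitting_live_code(image_size, live_ranges, trials, rng):
    """Estimate how often a uniformly random bit flip lands in a byte
    range that the functional test actually executes."""
    hits = 0
    for _ in range(trials):
        offset = rng.randrange(image_size)  # byte struck by the flip
        if any(lo <= offset < hi for lo, hi in live_ranges):
            hits += 1
    return hits / trials

rng = random.Random(1)
# 34,064-byte image; pretend the manual test path touches only ~3 KB of it
frac = fraction_flips_hitting_live_code(34_064, [(0x1000, 0x1C00)], 2000, rng)
print(f"roughly {frac:.0%} of random flips touch code the test executes")
```

The smaller the fraction of the image the test actually traverses, the more "no change" outcomes a bit-flip experiment will report, which is the comment's point about large programs.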
Anyone interested in enlightening the folks over at Daily Tech? http://www.dailytech.com/Evolution+in+Escherichia+Coli+Bacterium+Observed+During+Lab+Tests/article12045.htm

Right now, they are mostly interested in arguing about Spaghetti Monsters and Biblical calculations of Pi, but there may be a few sensible ones who will listen.

DavidV
June 11, 2008, 04:30 PM PST
junkyard

"Really, the more skilled a programmer is the more he moves away from highly restrictive programming environments."

You're confusing the environment with the language, and in a modern programming environment you can use the same tools for a wide variety of languages. I'm quite proficient in assembly language for a number of processors including the 80x86 family and a number of embedded processors. There's nothing less restrictive than assembler. C is okay and I've used it a lot, but there's no reason not to use C++, as you can do anything with it that you can with C and it has options that C doesn't provide. Those are really the only three languages I've used much, but literally millions and millions of lines with them, beginning in the 1970's.

P.S. I misread your first comment about compilers. You didn't make the error I thought you did. Sorry about that. Still, it would be a good test, as there are no blind spots. You really don't know how much of the code in any arbitrary executable is being executed in any arbitrary functional test, nor do you have any way of knowing how much of the executable is critical code and how much is non-critical data. That's why the tool I first mentioned was developed. However, if you change a bit of the source code (anything other than a comment) you DO know the compiler will traverse the change, and I'd still bet dollars against donuts that random bit flipping in the source will cause a compiler to spit out an error or warning more often than not. I don't believe the test you described can be used to draw any conclusions or make any points, because there are just too many unknowns in it.

DaveScot
June 11, 2008, 01:03 PM PST
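The wager in the comment above (that random changes to source code usually trip the compiler) can be spot-checked in miniature. This is my own transposition, not DaveScot's setup: it uses Python's built-in compile() as a stand-in for a C compiler and mutates single characters rather than bits, on a small invented function:

```python
import random

# a small, hypothetical source snippet to mutate
SOURCE = """
def total(values):
    result = 0
    for v in values:
        result = result + v
    return result
"""

def mutate_one_char(src: str, rng: random.Random) -> str:
    """Replace one random character with a random printable character."""
    i = rng.randrange(len(src))
    return src[:i] + chr(rng.randrange(33, 127)) + src[i + 1:]

rng = random.Random(7)
trials = 200
rejected = 0
for _ in range(trials):
    try:
        compile(mutate_one_char(SOURCE, rng), "<mutant>", "exec")
    except SyntaxError:  # includes IndentationError and TabError
        rejected += 1
print(f"{rejected} of {trials} mutants rejected at compile time")
```

Many mutants are rejected outright, though a fair share survive (e.g. a letter swapped inside an identifier still parses), which is roughly the distinction both commenters are circling: syntax checking catches structural damage but says nothing about whether the surviving code still behaves correctly.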
With that many generations in Lenski's lab, I'm surprised that I haven't heard whether the work has confirmed "Muller's Ratchet" (an important component of genetic entropy theory). I just posted something on HJ Muller, the pioneer of "Muller's Ratchet".

scordova
June 11, 2008, 12:26 PM PST
DaveScot (#84)

My only point was, a high-level compiled language is a sophisticated end-user application designed to facilitate the programming process for HUMANS specifically. There is all sorts of handholding by a compiler to guarantee the human doesn't make inadvertent errors in the programming process, doing things which he didn't actually intend to do. None of this has anything to do with what will run, and run efficiently, on a computer. Really, the more skilled a programmer is the more he moves away from highly restrictive programming environments. C, as you know, allows you to bypass all its protection mechanisms, and the more you know what you're doing the more a compiler's constant interference becomes a major hindrance.

JunkyardTornado
June 11, 2008, 09:35 AM PST
Patrick (#82): These are the key findings in the two respective experiments:
[1] "These findings reveal what is called a 'kinetic selection' mechanism for Pol II, which is like many polymerases," he said. "That is, the active site in one condition has a similar affinity for both correct and incorrect NTPs. However, because of motion within the active site--in this case the action of the trigger loop--catalytic activity in the active site proceeds much faster with the correct NTP than with the incorrect NTP. The trigger loop is mobile, and only when it is positioned properly in response to a correct substrate can it really function. [2] "Normally, when an NTP diffuses into the active site of the polymerase, the trigger loop closes behind it like a door, long enough for the polymerase to perform the chemistry to add the NTP to the end of the RNA chain," he said. "If the NTP is incorrect, there is a tendency for this door to stay open for a longer time, which means that the NTP has a chance to diffuse out of the active site before the polymerase can proceed to chemistry.
In the first, the trigger loop is moving around, and as such the correct NTP lines up with it more quickly than an incorrect one. In the second experiment the trigger loop "door" tends to stay open longer with an incorrect NTP, allowing it to diffuse out. What strikes me is the probabilistic, imprecise nature of these control mechanisms, wherein the capture of incorrect NTPs is not strictly prohibited, but merely controlled by keeping it below a certain threshold.

JunkyardTornado
June 11, 2008, 09:14 AM PST
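The kinetic-selection mechanism quoted in the comment above can be reduced to a back-of-envelope competition between catalysis and escape. The rate constants below are invented for illustration, not measured Pol II values; the point is only how large discrimination falls out of two modest rate differences:

```python
def incorporation_probability(k_cat: float, k_off: float) -> float:
    """Chance a bound NTP is incorporated (rate k_cat) before it escapes
    (rate k_off), treating the two as competing first-order processes."""
    return k_cat / (k_cat + k_off)

# invented rates: catalysis is fast for the correct NTP, while the "door"
# stays open longer and escape dominates for the incorrect one
p_correct = incorporation_probability(k_cat=100.0, k_off=1.0)
p_wrong = incorporation_probability(k_cat=0.1, k_off=10.0)
print(f"correct NTP: {p_correct:.3f}, incorrect NTP: {p_wrong:.4f}, "
      f"discrimination about {p_correct / p_wrong:.0f}x")
```

This is exactly the "probabilistic, not strictly prohibited" character the comment notes: the wrong NTP is still incorporated sometimes, just far less often, because the error rate is set by competing rates rather than an absolute gate.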
junkyard

From your response I guess you don't really know what a compiler is or does. Compilers don't crash when they encounter errors - they typically gracefully abort the compilation and describe the location and type of error encountered. Errors come in various levels of criticality. Some are fatal and halt compilation while others are just warnings and compilation continues. Better compilers will send you straight into the editing environment at the point where a fatal error was picked up so you can fix it that much faster.

It's been a while since I've done any programming. There are probably bells and whistles now that I haven't seen before. I thought color-coded source files were the best thing since buttered bread the first time I used one - they make it really easy to spot syntax errors as you're typing, because the colors don't look right, so you don't even have to waste time letting the compiler find simple syntax errors.

DaveScot
June 11, 2008, 09:04 AM PST
Paul Giem

You anticipated what I was going to write about next - the selectivity of the cell wall. In vitro the cell wall can be far less selective in what is allowed through, since all it sees is the ingredients in the agar, which are strictly controlled and few, and it doesn't have to fend off or compete with any other living thing except its own kind, since the cultures contain nothing but E. coli. In vivo the mutation(s) allowing citrate transport into the cell could allow a whole host of nasty things that aren't citrate to enter the cell as well.

It's still a real yawner though. Anyone who knows anything about long-term clonal tissue culture knows that microorganisms are rather proficient at adapting to different nutrients in the agar recipe. Twenty years is remarkable only in that it wasn't 20 days, weeks, or months instead. There's probably a toxic molecule in the wild that resembles citrate and would kill the citrate eaters, so the cell was rather well protected against allowing it to enter. Just a guess - but hey, that's what Darwinian evolution is all about - guesswork. I just don't go so far as presenting my guesses as fact, demanding that they be taught as facts to high schoolers, and asking the courts to protect my guesswork from criticism in public schools.

DaveScot
June 11, 2008, 08:20 AM PST
As far as compilers and high-level languages, I don’t think nature is such that if one tiny thing is out of place it just shuts down and refuses to do anything. It will take what you give it and attempt to do something, which is what a computer processor does, I think.
On that topic: http://www.sciencedaily.com/releases/2008/06/080605120701.htm

Patrick
June 11, 2008, 08:07 AM PST
The whole point of Behe’s new book was to try to find experimental evidence for exactly what Darwinian mechanisms are capable of. On the other hand we have speculative indirect stepwise pathway scenarios, but so far the OBSERVED “edge of evolution” doesn’t allow these models to be feasible. But this “edge” is an estimate based upon a limited set of data, which in turn “might” mean the estimated “edge” is far less than the maximum capable by Darwinian mechanisms. If Darwinists would bother to do further experiments they might see if this “edge” could in reality be extended. Then if this newly derived “edge” is compatible with these models, then so be it (though I’ll add the caveat that the “edge” might be better for Darwinism only in limited scenarios). In the meantime they’re just assuming the “edge” allows for it.

Even worse, unless I failed to notice the news, the very first detailed, testable (and potentially falsifiable) model for the flagellum is yet to be fully completed (I realize there are people working on producing one), so a major challenge of Behe's first book is yet to be refuted, never mind the new book. Darwinists should stop pretending they have the current strongest explanation. I’ll fully acknowledge they’re currently formulating a response in the form of continued research, new models, and such, but the mere fact is that they’re missing all the major parts to their explanation. This might change in the future, but it may not. Or at least the situation hasn't changed based upon this recent conversation where I asked for the functional intermediates in the indirect stepwise pathway to be named... and was never answered. Comment #203 summarizes that discussion, and should be read at full length, but I thought this was the kicker:
if the final function is reached through hundreds of intermediate functions, all of them selected and fixed, where are all those intermediates now? Why do we observe only the starting function (bacteria with T3SS) and the final function (bacteria with flagella)? Where are the intermediates? If graduality and fixation are what happens, why can’t we observe any evidence of that? In other words, we need literally billions of billions of molecular intermediates, which do not exist. Remember that the premise is that each successive step has greater fitness than the previous. Where are those steps? Where are those successful intermediates? Were they erased by the final winner, the bacterium with the flagellum? But then, why can we still easily observe the ancestor without flagella (and, obviously, without any of the amazing intermediate and never observed functions)?
gpuccio was also gracious enough to assume the T3SS as a starting point. Dave pointed this out long ago:
The gist of it, as I recall, is that the t3ss appears on a small number of bacteria that prey on eukaryotes. It’s a weapon used to inject toxins into the prey. In the meantime the flagellum appears on a large number of bacteria that don’t prey on eukaryotes. Thus saying that the t3ss predates the flagellum is like saying that anti-aircraft missiles predate aircraft. Non sequitur. Flagella were useful to bacteria before eukaryotes appeared but a t3ss would be useless before then. The reasonable conclusion is that the T3SS devolved from the flagellum rather than the flagellum evolving from the T3SS. This scenario, which actually makes sense in the context of a tree of life beginning with bacteria, is also congruent with what we actually observe in nature today - useful things devolving from something more complex.
dmso74 is apparently parroting the chosen line of the Darwinian community:
Lenski's experiment is also yet another poke in the eye for anti-evolutionists, notes Jerry Coyne, an evolutionary biologist at the University of Chicago. "The thing I like most is it says you can get these complex traits evolving by a combination of unlikely events," he says. "That's just what creationists say can't happen."
What a nice PR strategy. Assert that their opponents are making certain claims they are not, then blow away that fake claim. AKA strawman. Yet we're never given the space to defend ourselves against such outrageous tactics. For example, Darwinists were previously accusing Behe of ignoring pyrimethamine resistance in malaria as an example of cumulative selection. In fact, Behe doesn’t deny the existence of cumulative selection, nor does he fail to mention pyrimethamine as an example. Behe actually spends more than a full page discussing pyrimethamine resistance. Here is a small portion of what Behe wrote about it in The Edge of Evolution:
Although the first mutation (at position 108 of the protein, as it happens) grants some resistance to the drug, the malaria is still vulnerable to larger doses. Adding more mutations (at positions 51, 59, and a few others) can increase the level of resistance.
Explaining how he covered cumulative selection, Behe writes in his Amazon blog:
I discuss gradual evolution of antifreeze resistance, resistance to some insecticides by "tiny, incremental steps, amino acid by amino acid, leading from one biological level to another," hemoglobin C-Harlem, and other examples, in order to make the critically important distinction between beneficial intermediate mutations and detrimental intermediate ones.
So the "ignoring cumulative selection in an indirect pathway" argument is a complete strawman. Behe’s position is that the creative power of cumulative selection is extremely limited, is not capable of traversing necessary pathways that are potentially tens and hundreds of steps long, and he backs up this position with real-world examples of astronomical populations getting very limited results with it. This is something the critics don’t really address. The fact that the opponents of Behe’s book find the need to repeatedly lie and misrepresent the book (Carroll and Miller) or avoid the subject matter altogether (Dawkins) shows exactly how good Behe’s book is.

In spite of having more reproductive events every year than mammals have had in their entire existence, malaria has not evolved the ability to reproduce below 68 degrees. Nick Matzke’s explanation for this was that “in cold regions all the mosquitoes (and all other flying insects) die when the temperature hits freezing.” Think about it. Malaria cannot reproduce below 68 degrees Fahrenheit. Water freezes at 32 degrees Fahrenheit.

To illustrate how out to lunch Musgrave and Smith are on Behe’s Edge of Evolution, on page 143 Behe writes that the estimated number of organisms needed to create one new protein-to-protein binding site is 10^20. Further down the page, Behe notes that the population size of HIV is, surprise, within that range. So according to Behe’s own thesis, HIV should be able to evolve a new protein-to-protein binding site. So along come Smith and Musgrave, point out a mutation clearly within Behe’s thesis, and then declare victory when in fact they have not contradicted Behe at all.

How about an actual example where a more complex organism is less fit than its simpler counterpart? Depends on the complexity being looked at, does it not? Let’s take a look at TO’s example of people with “monkey tails”. I have no problem calling that “complexity” in a generalized sense.
As in, not CSI, but a continuation of a process beyond its normal termination. I’m not sure what positive effects they do have. From what I remember they’re not articulated and cannot serve as an additional limb. But I’m pretty sure they’d act as the opposite of a peacock’s feathers (which, BTW, have their own issues), dramatically reducing those individuals’ chances of reproducing. Ditto goes for additional/non-functioning mammary nipples and other examples that turn off the opposite sex. The situation is complicated enough that there can’t be blanket statements. There can be increments in complexity where the tradeoff is more positive than negative. But that’s why ID doesn’t have blanket statements… there is a complexity threshold. And that’s why Behe is trying to find an “edge of evolution”. While an estimate has been arrived at, I don’t think that “edge” has been found yet. Personally I think it “might” be greater than where some ID proponents envision it to be. Perhaps the "true edge" is around 6 steps in an indirect stepwise pathway. But I could be wrong. The perspective of ARN:
There are several observations that should be made before reaching general conclusions. The first relates to the machinery needed to metabolise citrate. The system to do this is already largely in place, but one enzyme is lacking. This is the comment from Mike Behe: "Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it's not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a "citrate permease" which can transport citrate from outside the cell through the cell's membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, "The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions." Consequently, it is at least worth asking the question whether the E. coli bacterium had, in the past, lost the ability to metabolise citrate and what we are now seeing is a restoration of that damaged system. If this were the case, we should not be talking about "a major evolutionary innovation" but rather about the way complex systems can be impaired by mutations. ... it demonstrates a major problem for those evolutionists who want to claim Darwinism can achieve major transformations. These mutations are not only rare, they are also useless without the pre-existence of a biochemical system that can turn the products of mutation into something beneficial.
The Edge of Evolution is an estimate and it was derived from the limited positive evidence for Darwinian processes that we do possess. This estimate would of course be adjusted when new evidence comes into play or abandoned altogether if there is positive evidence that Darwinian processes are capable of large scale constructive positive evolution (or at least put in another category if it’s ID-based[foresighted mechanisms]). The bulk of the best examples of Darwinian evolution are destructive modifications like passive leaky pores (a foreign protein degrading the integrity of HIV’s membrane) and a leaky digestive system (P. falciparum self destructs when its system cannot properly dispose of toxins it is ingesting, so a leak apparently helps) that have a net positive effect under limited/temporary conditions (Behe terms this trench warfare). I personally believe that given a system intelligently constructed in a modular fashion (the system is designed for self-modification via the influence of external triggers) that Darwinian processes may be capable of more than this, but we do not have positive evidence for this concept yet. But that’s foresighted non-Darwinian evolution in any case, and even if there are foresighted mechanisms for macroevolution they might be limited in scope. We’re talking basic engineering here. When the code is pleiotropic you have to have multiple concurrent changes that work together to produce a functional result. Hundreds of simple changes adding up over deep time to produce macroevolution are not realistic. And, yes, I’m aware that the modular design of the code can allow for SOME large-scale changes, especially noticeable with plants, but this is not uniform. Nor is it usually coherent (cows with extra legs hanging from their bodies, humans with extra mammary glands or extensions of their vertebrae [tails], flies with eyes all over). Nor non-destructive for that matter. And whence came the modularity?
And we're looking for CONSTRUCTIVE BENEFICIAL mutations that produce macroevolution. Darwinists cannot even posit a complete hypothetical pathway! Previous discussions about the EoE:
Ken Miller, the honest Darwinist
Do the facts speak for themselves
ERV's challenge to Michael Behe
Darwinist Predictions
P. falciparum - No Black Swan Observed
PBS Airs False "Facts"
The main point remains: at this time Darwinism does not have a mechanism observed to function as advertised. Should we continue research on proposed engines of variation? Definitely. When Edge of Evolution was released I believe I said that would make a good follow-up (considering each proposed mechanism one by one, and of course their cumulative effect).
Patrick
June 11, 2008 at 07:42 AM PST
JT, teleportation does not happen. I'll grant though that the theoretical foundation is better supported than, say, Darwinian evolution. hee hee.
tribune7
June 11, 2008 at 07:16 AM PST
It is a pity that dmso74 (69) isn't around to defend him/herself (see DaveScot 70). However, in fairness to DaveScot, dmso74 did make a rather egregious error. It is conceded by all sides that beneficial mutations are only a minor problem, solvable by most bacteria in one or at most a few culture plates. Two sequential beneficial steps could happen in two, or at most a few, culture plates. The problem comes when there are neutral steps in between. That is where the difficulties begin, where improbabilities rapidly swamp the ability of the organism to evolve. dmso74 writes,
This was at least a 3-step process: one neutral “potentiating” mutation, one weakly beneficial mutation and one strongly beneficial mutation (that may have been a multiple mutation).
If dmso74 is right, there is only one known neutral step, so this is roughly equivalent to a two-step problem in probability. We will await the determination of whether it was a 3-step process, or whether there were more (or fewer) steps involved, and whether any further steps were beneficial or neutral, but dmso74 is in error when he says,
Behe clearly states in his book that 2 steps is the “edge”, so we are already over that edge and may be well over it.
What Behe said is that 2 neutral steps is the edge. It is tempting to say that it is surprising to me why dmso74 said
which, again, is why it is surprising to me that Behe chose this paper as support for his hypothesis, when it is clear evidence against it.
But it is not honest. When one's goal is more to protect one's theories and attack competing theories than to understand those competing theories, misunderstanding of those theories can be expected. It is easier to create strawmen than to admit that one's opponent may have a point, when one views him/her as an opponent. Dmso74 had already given evidence that he/she had such difficulties. There is a further prediction by Behe, perhaps soft, but nevertheless a prediction. When the dust has settled, we will probably see that some transport protein that used to be able to transport some other substrate across the cell membrane now either has had its specificity changed so that it allows citrate across also, or has been switched over to citrate completely. In fact, that protein is probably a passive transport protein, which would be detrimental to the organism if it lived in an environment lacking in citrate, as citrate would then leak out instead of in. Thus we probably have something analogous to trench warfare here. That is, machinery is being broken rather than fixed, and it is only a special environment that makes the breaking advantageous. Another analogy would be blind cave fish, which only outcompete normal fish in a cave. It would be a blow to ID if the gene were duplicated first, then one of the genes were to mutate to allow active transport of citrate, complete with some kind of promoter molecule. But I'm not holding my breath on that one.
Paul Giem
June 11, 2008 at 07:06 AM PST
tribune7: I believe that's what the assertion was. If there's an obvious reason why it's not conceptually the same thing, it's not apparent to me. Evidently others think it's too stupid to even merit a comment.
JunkyardTornado
June 11, 2008 at 06:53 AM PST
JT-- No it wasn't a joke. Isn't that what happens? Are you saying teleportation happens?
tribune7
June 11, 2008 at 06:10 AM PST
dmso74 (#69): "This was at least a 3-step process: one neutral “potentiating” mutation, one weakly beneficial mutation and one strongly beneficial mutation (that may have been a multiple mutation)." Where did you get that idea from? I think that in the paper the mutation(s) have not been characterized, but the indirect evidence seems to point to a double mutation, the first one neutral, the second functional. I paste here the abstract of the work: "The role of historical contingency in evolution has been much debated, but rarely tested. Twelve initially identical populations of Escherichia coli were founded in 1988 to investigate this issue. They have since evolved in a glucose-limited medium that also contains citrate, which E. coli cannot use as a carbon source under oxic conditions. No population evolved the capacity to exploit citrate for >30,000 generations, although each population tested billions of mutations. A citrate-using (Cit+) variant finally evolved in one population by 31,500 generations, causing an increase in population size and diversity. The long-delayed and unique evolution of this function might indicate the involvement of some extremely rare mutation. Alternately, it may involve an ordinary mutation, but one whose physical occurrence or phenotypic expression is contingent on prior mutations in that population. We tested these hypotheses in experiments that "replayed" evolution from different points in that population's history. We observed no Cit+ mutants among 8.4 × 10^12 ancestral cells, nor among 9 × 10^12 cells from 60 clones sampled in the first 15,000 generations. However, we observed a significantly greater tendency for later clones to evolve Cit+, indicating that some potentiating mutation arose by 20,000 generations. This potentiating change increased the mutation rate to Cit+ but did not cause generalized hypermutability. Thus, the evolution of this phenotype was contingent on the particular history of that population.
More generally, we suggest that historical contingency is especially important when it facilitates the evolution of key innovations that are not easily evolved by gradual, cumulative selection." So, why are you speaking of at least three mutations?
gpuccio
June 11, 2008 at 06:07 AM PST
No it wasn't a joke. Isn't that what happens?
JunkyardTornado
June 11, 2008 at 06:03 AM PST
JT "It seems like basic physical motion is a teleportation machine. You’re at one place, and then an instant later you’re no longer there and every molecule in your body is reconstructed precisely at another location." Obviously this is a joke, isn't it?
kairos
June 11, 2008 at 05:59 AM PST
ba77: "Somehow, this Star Trek type teleporter must generate and transmit a parts list and blueprint of the object being teleported. This information could be used in reconstructing the object at its final destination ... For example, just how much information would it take to describe how every molecule of a human body is put together? In a human body, millimeter accuracy isn’t nearly good enough. A molecule a mere millimeter out of place can mean big trouble in your brain and most other parts of your body. Then how can a person lose 50% of their brain and still think perfectly well. "A good teleportation machine must be able to put every atomic molecule back in precisely its proper place. That much information, Braunstein calculated, would require a billion trillion desktop computer hard drives, or a bundle of CD-ROM disks that would take up more space than the moon. The atoms in a human being are the equivalent to the information mass of about a thousand billion billion billion bits. Even with today’s top technology, this means it would take about 30 billion years to transfer this mass of data for one human body from one spot to another. That’s twice the age of the universe. " ---------- It seems like basic physical motion is a teleportation machine. You're at one place, and then an instant later you're no longer there and every molecule in your body is reconstructed precisely at another location. And it doesn't make any difference whether the object being teleported by nature is a rock, an airplane or a human being. The exact same mechanism is used, and objects are dematerialized at one location and some time later are found rematerialized perfectly at another location. You imply that this would be the most mind-boggling incredible feat that could ever be accomplished, and yet basic physical reality accomplishes it all the time. If it can do that on its own, what wouldn't it be capable of? Just an observation in case you care to comment.
JunkyardTornado
June 11, 2008 at 05:47 AM PST
#71 bornagain77 Probably I didn't explain my thought well. I strongly claim that almost all DNA is non-junk and poly-functional, and that it holds a hyper-huge amount of information, far greater than anything it is conceivable RM+NS could ever reach. I have only stated that it is not necessary for the super-hyper-hyper-hyper-huge information needed to characterize a given human body to actually be compressed into the DNA code. In other words, provided that the DNA code contains all that is needed to grow and sustain a human life, the "execution" (please excuse me for the computing analogy) of this code will produce the particular raw information in a pretty automatic way (let's for simplicity not take epigenetic factors into account). In other words the final raw information is very, very huge, but the coding information that allows it to be produced deterministically is "only" huge. Let us consider your example about teletransportation: "A good teleportation machine must be able to put every atomic molecule back in precisely its proper place. Thus the demands are far greater for DNA than for the bridge blueprint" In your problem certainly, but we are not considering a generic teletransportation system; we are considering a "living organism" teletransportation system, and this puts the problem in another way. If the machinery needed to grow an organism is available at the destination point, all that is needed is:
1. decoding, at the starting point, all the DNA of the organism to be "dispatched";
2. sending all the DNA code as a digital file (i.e. some billions of bits);
3. using the received information at the destination point to create a "clone" of the organism by growing it.
"Indeed, I have seen top evolutionists do their damndest to obfuscate the requirements for this level of information so as to preserve their beloved "Junk DNA"." I completely agree with this observation, but to debunk Darwinism it is not at all necessary to think that 10^30 bits are needed.
NDE is defeated by the 10^10 bits needed for DNA.
kairos
June 11, 2008 at 05:17 AM PST
kairos, I slightly disagree with this statement: BUT in any case the overall design information needed to actually implement the bridge hasn’t to be equal to the huge raw information that would be required to characterize correctly the position and composition of every atom or molecule of the bridge. An atom out of place on a bridge is no big deal for the bridge, yet: In a human body, millimeter accuracy isn’t nearly good enough. A molecule a mere millimeter out of place can mean big trouble in your brain and most other parts of your body. A good teleportation machine must be able to put every atomic molecule back in precisely its proper place. Thus the demands are far greater for DNA than for the bridge blueprint. Although DaveScot brought up a valid objection, a while back, when he said that much of the information for construction would have to exist separately from the DNA in the cell. Nevertheless, for illustration purposes, I believe the Braunstein teleportation example is very clear in pointing out the basic outline of the staggering level of complexity we are dealing with in life. A staggering level of complexity that evolutionists completely ignore and are oblivious to. Indeed, I have seen top evolutionists do their damndest to obfuscate the requirements for this level of information so as to preserve their beloved "Junk DNA".
bornagain77
June 11, 2008 at 04:28 AM PST
dmso74 hasn't read The Edge of Evolution and is either making things up about what's in it or is parroting fallacious sources. He was warned to stop, ignored the warning, and is now no longer with us.
DaveScot
June 11, 2008 at 03:51 AM PST
Paul Giem et al.: "On the other hand, it does seem like standard evolutionary theory is at some risk as well. If this turns out to be a 3-neutral-step process, or especially a 2-neutral-step process, then the edge of evolution will be demonstrated to be where Behe says it is, and far too close to where an organism started to account for the variety of life as we know it. Perhaps more importantly, that would be experimental evidence, which is supposed to have more weight in science than theory does." This was at least a 3-step process: one neutral "potentiating" mutation, one weakly beneficial mutation and one strongly beneficial mutation (that may have been a multiple mutation). Behe clearly states in his book that 2 steps is the "edge", so we are already over that edge and may be well over it. Which, again, is why it is surprising to me that Behe chose this paper as support for his hypothesis, when it is clear evidence against it.
dmso74
June 11, 2008 at 02:35 AM PST
#56 gpuccio Thanks for your clarification. Your point of view is exactly what I meant.
kairos
June 11, 2008 at 02:09 AM PST