Multiple mutations needed for E. coli
An interesting paper has just appeared in the Proceedings of the National Academy of Sciences, “Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli”. (1) It is the “inaugural article” of Richard Lenski, who was recently elected to the National Academy. Lenski, of course, is well known for conducting the longest, most detailed “lab evolution” experiment in history, growing the bacterium E. coli continuously for about twenty years in his Michigan State lab. For the fast-growing bug, that’s over 40,000 generations!
I discuss Lenski’s fascinating work in Chapter 7 of The Edge of Evolution, pointing out that all of the beneficial mutations identified from the studies so far seem to have been degradative ones, where functioning genes are knocked out or rendered less active. So random mutation much more easily breaks genes than builds them, even when it helps an organism to survive. That’s a very important point. A process which breaks genes so easily is not one that is going to build up complex coherent molecular systems of many proteins, which fill the cell.
In his new paper Lenski reports that, after 30,000 generations, one of his lines of cells has developed the ability to utilize citrate as a food source in the presence of oxygen. (E. coli in the wild can’t do that.) Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it’s not some exotic chemical the bacterium has never seen before). However, the wild bacterium lacks an enzyme called a “citrate permease” which can transport citrate from outside the cell through the cell’s membrane into its interior. So all the bacterium needed to do to use citrate was to find a way to get it into the cell. The rest of the machinery for its metabolism was already there. As Lenski put it, “The only known barrier to aerobic growth on citrate is its inability to transport citrate under oxic conditions.” (1)
Other workers (cited by Lenski) in the past several decades have also identified mutant E. coli that could use citrate as a food source. In one instance the mutation wasn’t tracked down. (2) In another instance a protein coded by a gene called citT, which normally transports citrate in the absence of oxygen, was overexpressed. (3) The overexpressed protein allowed E. coli to grow on citrate in the presence of oxygen. It seems likely that Lenski’s mutant will turn out to be either this gene or another of the bacterium’s citrate-using genes, tweaked a bit to allow it to transport citrate in the presence of oxygen. (He hasn’t yet tracked down the mutation.)
The major point Lenski emphasizes in the paper is the historical contingency of the new ability. It took trillions of cells and 30,000 generations to develop it, and only one of a dozen lines of cells did so. What's more, Lenski carefully went back to cells from the same line he had frozen away after evolving for fewer generations and showed that, for the most part, only cells that had evolved at least 20,000 generations could give rise to the citrate-using mutation. From this he deduced that a previous, lucky mutation had arisen in the one line, a mutation which was needed before a second mutation could give rise to the new ability. The other lines of cells hadn't acquired the first, necessary, lucky, "potentiating" (1) mutation, so they couldn't go on to develop the second mutation that allows citrate use. Lenski argues this supports the view of the late Stephen Jay Gould that evolution is quirky and full of contingency. Chance mutations can push the path of evolution one way or another, and if the "tape of life" on earth were re-wound, it's very likely evolution would take a completely different path than it has.
I think the results fit a lot more easily into the viewpoint of The Edge of Evolution. One of the major points of the book was that if only one mutation is needed to confer some ability, then Darwinian evolution has little problem finding it. But if more than one is needed, the probability of getting all the right ones grows exponentially worse. “If two mutations have to occur before there is a net beneficial effect — if an intermediate state is harmful, or less fit than the starting state — then there is already a big evolutionary problem.” (4) And what if more than two are needed? The task quickly gets out of reach of random mutation.
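To make the arithmetic concrete, here is a toy calculation; the per-mutation figure is an assumed round number for illustration, not a measured rate from Lenski's experiment:

```python
# Toy illustration of the scaling described above. The per-mutation figure is
# an assumed round number (roughly 1 cell in 10^8 carries any given specific
# point mutation); it is used only to show how fast the odds worsen, and is
# not a measured rate from the Lenski experiment.
p_one = 1e-8  # assumed chance that one specific required mutation is present in a cell

for k in range(1, 5):
    p_all = p_one ** k  # all k required mutations present together, with no selective help in between
    print(f"{k} required mutation(s): about 1 cell in {1 / p_all:.0e}")
```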
To get a feel for the clumsy ineffectiveness of random mutation and selection, consider that the workers in Lenski’s lab had routinely been growing E. coli all these years in a soup that contained a small amount of the sugar glucose (which they digest easily), plus about ten times as much citrate. Like so many cellular versions of Tantalus, for tens of thousands of generations trillions of cells were bathed in a solution with an abundance of food — citrate — that was just beyond their reach, outside the cell. Instead of using the unreachable food, however, the cells were condemned to starve after metabolizing the tiny bit of glucose in the medium — until an improbable series of mutations apparently occurred. As Lenski and co-workers observe: (1)
“Such a low rate suggests that the final mutation to Cit+ is not a point mutation but instead involves some rarer class of mutation or perhaps multiple mutations. The possibility of multiple mutations is especially relevant, given our evidence that the emergence of Cit+ colonies on MC plates involved events both during the growth of cultures before plating and during prolonged incubation on the plates.”
In The Edge of Evolution I had argued that the extreme rarity of the development of chloroquine resistance in malaria was likely the result of the need for several mutations to occur before the trait appeared. Even though the evolutionary literature contains discussions of multiple mutations (5), Darwinian reviewers drew back in horror, acted as if I had blasphemed, and argued desperately that a series of single beneficial mutations certainly could do the trick. Now here we have Richard Lenski affirming that the evolution of some pretty simple cellular features likely requires multiple mutations.
If the development of many of the features of the cell required multiple mutations during the course of evolution, then the cell is beyond Darwinian explanation. I show in The Edge of Evolution that it is very reasonable to conclude they did.
References
1. Blount, Z.D., Borland, C.Z., and Lenski, R.E. 2008. Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli. Proc. Natl. Acad. Sci. USA 105:7899-7906.
2. Hall, B.G. 1982. Chromosomal mutation for citrate utilization by Escherichia coli K-12. J. Bacteriol. 151:269-273.
3. Pos, K.M., Dimroth, P., and Bott, M. 1998. The Escherichia coli citrate carrier CitT: a member of a novel eubacterial transporter family related to the 2-oxoglutarate/malate translocator from spinach chloroplasts. J. Bacteriol. 180:4160-4165.
4. Behe, M.J. 2007. The Edge of Evolution: The Search for the Limits of Darwinism. New York: Free Press, p. 106.
5. Orr, H.A. 2003. A minimum on the mean number of steps taken in adaptive walks. J. Theor. Biol. 220:241-247.
“But if more than one is needed, the probability of getting all the right ones grows exponentially worse. “If two mutations have to occur before there is a net beneficial effect — if an intermediate state is harmful, or less fit than the starting state — then there is already a big evolutionary problem.””
You are correct that if the intermediate state (after one random mutation) is harmful, then the probability that a second mutation will correct it (to produce a more fit individual) is exponentially worse.
However, you don’t mention the converse logic. That is, if the first mutation produces a slightly more fit individual, then the probability is exponentially better for that second mutation to make an even more fit individual.
There is absolutely no evidence to support your assumption that intermediate species are inherently less fit than previous generations. Probability theory states that generation of a less fit individual in this first random mutation is just as likely as generation of a more fit individual.
broadbill
You obviously haven’t read “The Edge of Evolution” for if you had you’d know there is indeed abundant evidence to presume that fitness is compromised in these situations. It’s called “trench warfare”. I’ll take you off moderation after you’ve demonstrated a willingness to read something before you comment on it.
Is this Lenski the guy who promised, a year or more ago, that he had observed something truly extraordinary in his lab that was going to provide overwhelming proof of Darwinian evolution? Citrate digestion is a real yawner. If that's all it is, it's really pathetic and is really a world class example of the lack of any compelling evidence for RM+NS as a mechanism capable of driving creative evolution.
Suppose that the probability of getting the mutations necessary to utilize citrate is 1/30000 generations. Now supposing that instead of just subjecting the cells to an inordinate amount of citrate, what if the environment contained an inordinate amount of 10 different compounds the cells couldn’t utilize, where the probability of the mutations necessary for any one of them averaged the same as for citrate – 1/30000. Then we could expect the time it took for the cells to be able to utilize any one of them would be 1/10 the time it took for the cells to be able to utilize just citrate, so in other words, a couple of years or so.
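As a quick sanity check of that reasoning (using the assumed 1-in-30,000-per-generation figure, which is an illustration, not a number from the paper):

```python
# Quick check of the reasoning above. The 1-in-30,000-per-generation figure is
# the commenter's assumption, used here only for illustration.
p = 1.0 / 30000  # assumed per-generation chance of evolving the use of one compound
n = 10           # number of unusable compounds in the hypothetical medium

# For one target the expected wait is 1/p generations. With n independent
# targets, the per-generation chance of a first success on *any* of them is
# 1 - (1 - p)**n, so the expected wait shrinks roughly n-fold.
print(f"expected wait, 1 target:  {1 / p:,.0f} generations")
print(f"expected wait, {n} targets: {1 / (1 - (1 - p) ** n):,.0f} generations")
```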
“No population evolved the capacity to exploit citrate for >30,000 generations, although each population tested billions of mutations” [from Lenski paper abstract]
Who knows how many abilities the cells generated after billions of mutations that were not detected because the experiment wasn’t set up to detect them.
Junkyard Tornado wrote:
“Who knows how many abilities the cells generated after billions of mutations that were not detected because the experiment wasn’t set up to detect them.”
Possibly. But not likely. Remember, the E. coli already had the ability to use the citrate. They just didn't have a way to efficiently get it into the cell. In other words, if we look at the overall cellular machinery needed for this new function, we've already spotted them the ball on the two-yard line and it took 30,000 generations to even get across the goal line. Then millions of bugs were encouraged — all but begged — by the unusual conditions of the experiment to see if they could make it the last tiny step.
This doesn’t mean that novel results aren’t possible. They are, as Behe discusses in his book with malarial resistance. But it shows yet again just how impotent RM+NS is.
broadbill:
the question is quite simple: if E. coli becomes able to feed on citrate after, say, 2 or 3 mutations, that means that until all of them have happened, E. coli can’t feed on citrate. Otherwise we would observe a strain which can moderately feed on citrate, then another one which is better suited to do that, and so on, each one separated from the previous one by a single mutation.
I don’t think that’s what has been observed here. At present we don’t know how many mutations were necessary before the citrate could enter the cell, but if the hypothesis that at least two were necessary is right, that means that the first mutation still had no effect on permeability to citrate.
I agree with you that the first mutation need not be harmful: it could just have been neutral, as many mutations probably are. But the point is that it was not useful in itself. Indeed, if it had been useful, it would have been quickly selected, fixed and expanded, and the second mutation would have quickly followed, instead of having to wait for such a long time.
The fact is: a harmful mutation is probably selected against, and has very little chance of surviving long enough to "receive" the second necessary mutation, let alone of being fixed and expanded.
A neutral mutation, on the other hand, is not usually selected. So, it stays confined to just a single individual line. A lot of time is needed so that a second coordinated mutation can happen in that same single line, because the probabilities are extremely low. For three coordinated mutations, with each of the two intermediates unselected, the probabilities probably begin to be too low even for bacteria.
Let's remember that there is theoretically a way that a neutral mutation could be fixed: genetic drift. But that reasoning is misleading. Genetic drift, to the extent it can happen, is completely random. Any single mutation could be fixed by genetic drift, and that means that no single mutation has a special probability of being fixed.
So in the end, if we need two mutations to be present together at some time in one bacterial line, and none of them can be selected if it happens alone, then the probability for the combined mutation is the product of the two probabilities, and it becomes increasingly lower for each new necessary independent, unselected mutation. That’s why 2 or 3 is the empirical limit observed by Behe and, very likely, also by Lenski.
The old model of single-step pathways, where each mutation is patiently selected, is only a myth. If it were true, we would easily observe it at work, at least in rapidly reproducing organisms such as bacteria and protozoa. Instead, it is never observed; there is no detailed example of it in the empirical realm. The reason is simple: no real new function can be built with a nontrivial number of single mutations, each of them creating a gradual increase of function. It's only a myth, which lives only in the fertile minds of darwinists, and not in the real world.
Eric Anderson: "Remember, the E. coli already had the ability to use the citrate. They just didn't have a way to efficiently get it into the cell"
Was this ability just part of a generic ability to process various food sources? IOW, it's not clear to me whether there was a separate, distinct capability internally tailored to utilize citrate specifically.
"Now, wild E. coli already has a number of enzymes that normally use citrate and can digest it (it's not some exotic chemical the bacterium has never seen before)."
My point, if it wasn’t clear, was the following:
The human digestive system can already digest various things like meat, fruit, and vegetables via the same digestive mechanism (I’m assuming). Suppose you were on a desert island where some rare species of plant predominated that was perfectly edible for humans, except for one specific compound it contained which was highly toxic. If the humans there developed resistance to this toxin via mutations, it would be irrelevant to observe, “Well, humans already had the ability to digest this rare species of plant, and only the toxin kept them from doing so.” The ability was generic.
JunkyardTornado, yes that is the statement that I understood to mean that E. coli already had the ability to use citrate.
Reposting from other thread, per Bill’s instruction:
———
I presume this is the same Richard Lenski who was involved with the Avida silly business? One of the stated purposes of Avida was to show “how complex functions can originate by random mutation and natural selection.”
Boy, it seems a lot harder to evolve novel features in real life than it did with that slick computer program! 🙂
It will be interesting to watch this further and see what they ultimately determine was the source of this new ability. Based on the track record, I’ve got to believe that Behe’s intuition about the insignificance of the result is likely spot on.
BTW, for those keeping score, Behe is making a real, albeit softly stated, prediction in his Amazon post. We’ll see who ends up being right.
I particularly love the irony here, as Avida — in my view — inadvertently provided support for Behe's idea of irreducible complexity. Now Lenski (with E. coli) will likely end up demonstrating empirically what Behe has been arguing in Edge of Evolution.
JunkyardTornado, sorry it looks we were posting at the same time, so I didn’t get your entire thought the first time around.
I understand your point, but I don’t think that is a fair analogy. Behe says that E. coli has enzymes that normally use and can digest citrate. The challenge seemed to be getting enough of it into the cell.
I’m just going off of the brief info we have at this point. It sounds like Lenski is still working to determine what in fact occurred, so we’ll have to wait and see.
My money is with Behe though on this one . . .
“The ability was generic.”
Just realized that was a pun.
That was a real knee-slapper.
The critical ratio of beneficial to harmful mutations is variously reported as lower than 1 in 10,000, or lower than 1 in one million.
This factor dominates all other parameters.
See discussion
P. falciparum – No Black Swan Observed, especially bornagain77's post 82
See especially
Respected Cornell geneticist rejects Darwinism in his recent book
Genetic Entropy & the Mystery of the Genome, by John Sanford (October 2005)
Eh? Can you give me the references that say this? The last review I saw was putting them at a few percent. Still rare, but much more common than you’re suggesting.
p38 of Genetic Entropy doesn't discuss the proportion of advantageous mutations.
The bottom line in this entire debate is that there is simply no way that random changes of any kind can account for either the information content or the highly sophisticated machinery of the cell, whether filtered by selection (natural or otherwise) or not. To believe in such a conjecture in light of what is now known about biological reality is to believe in the equivalent of the possibility of constructing a perpetual-motion machine. The orthodox Darwinian mechanism of mutation/stochastic genetic change filtered by selection is the greatest get-something-for-nothing scam in the history of science.
Yet, this absurd conjecture is presented as “established science,” about which there is no controversy.
Forty thousand generations in human history takes us back about half a million years, assuming an optimistic 12.5 years per generation, with a few million individuals instead of trillions in the case of E. coli. Presumably, the same mechanism that gave E. coli citrate capability turned a primitive simian ancestor into Beethoven and Fermat, with orders of magnitude fewer probabilistic resources.
Darwinists seem to have conveniently forgotten or ignored their junior high school math education. The only thing that bewilders me is the fact that they are bewildered by the fact that most people don’t buy their fantasies.
Aagh. On the preview, the link was appending the second double-quote to the link.
This is the last review. The preview still looks wrong, but we’ll see.
Let me make sure I get Behe’s logic right:
1)Multiple mutations are needed for evolution, and are so wildly improbable that evolution can’t occur.
2)Lenski observed an evolutionary event in a relatively short amount of time (think about 20 years in the context of the earth’s history) that, he speculated, may have been caused by (wildly improbable) multiple mutations.
3)Multiple mutations are needed for evolution, and are so wildly improbable that evolution can’t occur.
I’m sure you’ll fill me in on what I’m missing here.
#18
“Let me make sure I get Behe’s logic right:”
Indeed this seems your problem 🙂
“1)Multiple mutations are needed for evolution, and are so wildly improbable that evolution can’t occur.”
What does "evolution" mean for you? If you had read EoE you would know that what is at stake here is the possibility of any decent macroevolution. And although both Lenski's data and Behe's argument about malaria resistance are well under that edge, we have evidence that even this trivial (trench warfare) evolution is indeed extremely rare.
“2)Lenski observed an evolutionary event in a relatively short amount of time (think about 20 years in the context of the earth’s history) that, he speculated, may have been caused by (wildly improbable) multiple mutations.”
See above; what you call "evolutionary event" is a very trivial one; a mere slight modification of an already very complex biochemical system
“3)Multiple mutations are needed for evolution, and are so wildly improbable that evolution can’t occur.”
Again see above.
“I’m sure you’ll fill me in on what I’m missing here.”
That's all folks. But if you had read more about Behe's arguments beforehand, you wouldn't have asked.
@Dave Scott-
I'll admit I haven't read Edge of Evolution but I was commenting on the paraphrase of it in this blog entry. I think my criticism still stands. If I need to go back to the book to have my comment addressed, then the author of this blog entry is incompletely quoting his source (his own book in this instance).
@gpuccio
As Dave Scott has pointed out, I haven’t read Edge of Evolution so I’ll refrain from commenting on that.
However, I don't think your statement regarding single mutations being a myth is true in the case of citrate utilization. In these E. coli, the citrate utilization machinery was already there; the citrate just needed a way to get into the cell. In this case, a single mutation in the citrate permease gene may very well have caused the phenotype. We will have to see future work by this laboratory to see if this is the case.
Yes, several distinct mutations in a gene leading to the production of a novel biochemical pathway would indeed be rare, but utilization of an already existing pathway via a single mutation in a rate-limiting enzyme isn't.
You could also have "hijacking" of existing biochemical pathways to use new substrates. The ability of that enzyme to use the new substrate could be caused by a single mutation.
“Multiple mutations are needed for evolution, and are so wildly improbable that evolution can’t occur.”
Nope, that’s not what he’s saying. He specifically talks about an evolutionary change in the development of Malaria’s resistance to drugs in which two ‘simultaneous’ changes must have taken place.
Too tired to summarize Behe right now. Have you read Edge of Evolution?
These links, which DLH referenced in 14, work and are worth the read:
Observation of evolution in bacteria
http://www.answersingenesis.or.....vation.asp
excerpt:
One strain had a mutation in a gene for the enzyme glycerol kinase which is important in the first step of glycerol breakdown. This mutation reduced the ability of glycerol kinase to be inhibited by fructose-1,6-bisphosphate (FBP). FBP is important in limiting the rate at which glycerol is catabolized. This is important since a side reaction during glycerol breakdown results in the production of a metabolite which is toxic at high concentrations. No gain of information took place as required by evolution, only loss leading to dysregulation of this pathway. In the wild, versus the rather comfy lab environment, this could be extremely detrimental.
Argument: Some mutations are beneficial
http://www.answersingenesis.or.....apter5.asp
excerpt:
In the process of defending mutations as a mechanism for creating new genetic code, they attack a straw-man version of the creationist model, and they have no answer for the creationists’ real scientific objections. Scientific American states this common straw-man position and their answer to it.
10. Mutations are essential to evolution theory, but mutations can only eliminate traits. They cannot produce new features.
On the contrary, biology has catalogued many traits produced by point mutations (changes at precise positions in an organism’s DNA)—bacterial resistance to antibiotics, for example. [SA 82]
This is a serious misstatement of the creationist argument. The issue is not new traits, but new genetic information. In no known case is antibiotic resistance the result of new information. There are several ways that an information loss can confer resistance, as already discussed. We have also pointed out in various ways how new traits, even helpful, adaptive traits, can arise through loss of genetic information (which is to be expected from mutations).
Mutations that arise in the homeobox (Hox) family of development-regulating genes in animals can also have complex effects. Hox genes direct where legs, wings, antennae, and body segments should grow. In fruit flies, for instance, the mutation called Antennapedia causes legs to sprout where antennae should grow. [SA 82]
Once again, there is no new information! Rather, a mutation in the hox gene (see next section) results in already-existing information being switched on in the wrong place.1 The hox gene merely moved legs to the wrong place; it did not produce any of the information that actually constructs the legs, which in ants and bees include a wondrously complex mechanical and hydraulic mechanism that enables these insects to stick to surfaces.2
These abnormal limbs are not functional, but their existence demonstrates that genetic mistakes can produce complex structures, which natural selection can then test for possible uses. [SA 82]
Amazing—natural selection can ‘test for possible uses’ of ‘non-functional’ (i.e., useless!) limbs in the wrong place. Such deformities would be active hindrances to survival.
—
Only one thing I can add to this is that it is commonly known that the parent strain of bacteria will consistently be more fit for survival than the mutant strain when the two are compared in the original environment.
Is Antibiotic Resistance evidence for evolution?
http://www.godtube.com/view_vi.....e30ff85177
This very simple demonstration of "demonstrable" evolution in a native environment has never been shown. YET….
If evolution were actually true you would naturally expect the "random" mutations of bacteria to, every so often, just randomly develop a complexity in the native environment that surpasses the parent strain's complexity and thus surpasses the parent strain's ability to survive. Yet this has never been observed! Why must evolutionists always allude to some "shady" characteristic that is, in DaveScot's words, "a real yawner" when they should have countless examples of evolution of complexity in native environments for bacteria that would be unambiguous in its proof?
The truth is that they will never demonstrate a gain in complexity for any life-form, for the "unmatched" integrated complexity of the information in a life form prevents this from happening. As well, I point out that this can be inferred from first principles of science, whereas evolution must ignore first principles of science.
broadbill
I’ll admit I haven’t read Edge of Evolution
No admission was required. I wrote that it was obvious you hadn’t read it. It was a statement not a question.
In this case, a single mutation in the citrate permease gene may very well have caused the phenotype.
Again, if you'd read The Edge of Evolution you would know better than to write that. The spontaneous single point mutation rate of E. coli vs. the size of its genome, coupled with the vast number of individuals in each generation, assures us that it will test (over and over and over again) all possible single point mutations in each generation. If a single point mutation conferred as much benefit as being able to utilize 90% of the available food supply while peers without the mutation starve, then that mutant strain would take over the population. The very first culture plate would almost certainly produce citrate eaters.
The math is incontrovertible and is well laid out in Behe's book.
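A rough, order-of-magnitude sketch of the kind of calculation being gestured at here; every input (genome size, per-base mutation rate, population size, duration) is an assumed round figure, not a number taken from the paper:

```python
# Rough order-of-magnitude sketch of this argument. Every input is an assumed
# round figure (not a number from the Lenski paper).
genome_bp         = 4.6e6   # approximate E. coli genome size, base pairs
mu_per_bp         = 1e-10   # assumed point-mutation rate per base pair per replication
new_cells_per_day = 5e8     # assumed new genomes produced per flask per daily cycle
days              = 4500    # ~30,000 generations at roughly 6-7 generations per day

# Size of the single-substitution search space (each site can change to 3 other bases):
possible_substitutions = genome_bp * 3
print(f"distinct single point mutations possible: ~{possible_substitutions:.1e}")

# Chance that one *specific* single-base substitution occurs in one replication:
p_specific = mu_per_bp / 3

occurrences = p_specific * new_cells_per_day * days
print(f"expected occurrences of any one specific point mutation over the experiment: ~{occurrences:.0f}")
# With these assumptions every possible single point mutation gets sampled
# dozens of times per population, so a strongly beneficial single mutation
# would be expected to appear, and sweep, early on.
```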
kairos,
My point is that if Behe is arguing that multiple mutations are too rare to allow evolution of the cell, then it is odd that he is using an example of a multiple mutation (maybe) occurring and, however “trivial” you may consider the mechanism, causing a large increase in fitness through selection (i.e. evolution).
Furthermore, the Lenski article also shows through a nice series of experiments that this evolutionary event was preceded by intermediate steps, which Behe also considers costly and extremely rare.
So he more or less seems to be using a clear example of white to argue black.
dmso74
It’s obvious you have not read The Edge of Evolution either. Why do you people insist on criticizing things you know nothing about? Don’t you realize it makes you look ignorant and lazy?
DaveScot,
I read the book about 3 months ago, after following the back-and-forth over the evolution of the vpu protein in HIV. Can you tell me how I am mis-interpreting it? As I understood it, Behe argues that point mutations are not capable of creating the types of changes needed for evolution, and the low probability of getting multiple changes (either simultaneously or through a pattern of intermediates with lowered fitness) dramatically lowers the odds of beneficial phenotypic changes occurring.
This paper, however, shows that a) a series of intermediates with slightly lower or identical fitness and b) (maybe) multiple mutations appears to have led to beneficial change that was strongly directionally selected for. How does this not contradict Behe’s argument?
p.s. Behe limits the number of mutational changes that can occur to two; however, this paper argues that more than one "potentiating" change occurred early (20,000 generations) that allowed the later beneficial mutation to occur. If the latter was more than a single mutation, then this goes beyond Behe's "EoE" threshold.
DLH: “The critical ratio of beneficial to harmful mutations is variously reported as lower than 1 in 10,000, or lower than 1 in one million. This factor dominates all other parameters.
…
The primary thing that is crushing to the evolutionary theory is this fact. Of the random mutations that do occur, and have manifested traits in organisms that can be measured, at least 999,999 out of 1,000,000 (99.9999%) of these mutations to the DNA have been found to produce traits in organisms that are harmful and/or fatal to the life-form having the mutation! (Sanford; Genetic Entropy page 38)
…
I maintain that their one-in-a-million estimate for beneficial mutations is flawed and that ALL mutations to a genome will be found to be harmful/fatal when using a correct measure of fitness/information.
——————
Thought I’d run a simple test which has no doubt been done before. I have this application I’ve written which is 529,875 bytes long. That’s optimized, and stripped of all debug and symbolic information. (And of course it accesses various 3rd-party DLL’s as well.) I ran this test where I would change a bit of the program at random (chosen by random number generator) and run the program through a battery of tests.
The application enables the design and building of web pages with various novel graphical effects and textures. So, my test was as follows: I would change a bit at random in the executable file (actually a .DLL) and then do the following: Open a preexisting RTF file. Open an owner-drawn menu three times, each time choosing a different graphical theme for the open file. Then I would change the font of the file. Then I opened another dialog that performed a certain function and test that twice. Finally I would open the dialog to build a web page with the new changes, and from within that open up a subdialog for editing images in the document, close that dialog, and finally hit ‘build’ to build the new webpage. Then I would hit ‘view’ to bring up the new web page in a web browser.
I had decided in advance to perform this test 20 times with 20 different one-bit random changes to the executable. The results: 2 program crashes, 1 malfunction, and in the other 17 cases, no discernable effect whatsoever.
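For anyone who wants to try something similar, here is a minimal sketch of such a one-bit "mutation" test; the file names are placeholders, and the actual test battery described above was run by hand:

```python
import random
import shutil

# Minimal sketch of the one-bit "mutation" test described above. The file
# names are placeholders; the original test used a ~530 KB application DLL,
# and the pass/fail battery was run by hand rather than automated here.
ORIGINAL = "app.dll"         # hypothetical binary under test
MUTANT   = "app_mutant.dll"  # copy that receives one flipped bit

def flip_random_bit(src, dst):
    """Copy src to dst and invert one randomly chosen bit of the copy."""
    shutil.copyfile(src, dst)
    with open(dst, "r+b") as f:
        data = bytearray(f.read())
        i = random.randrange(len(data))   # pick a random byte
        mask = 1 << random.randrange(8)   # pick a random bit within it
        data[i] ^= mask
        f.seek(0)
        f.write(data)
    return i, mask

for trial in range(20):
    byte_index, bit_mask = flip_random_bit(ORIGINAL, MUTANT)
    # ...launch MUTANT, run it through the test battery by hand, and record
    # the outcome: crash / malfunction / no discernible effect.
    print(f"trial {trial + 1}: flipped bit {bit_mask:#04x} of byte {byte_index}")
```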
Relevant? Irrelevant?
broadbill:
You say:
“However, I don’t think your statement regarding single mutations being a myth is true ”
But where did I say anything like that? What I said is completely different.
I cite here from my post:
“The old model of single step pathways, where each mutation is patiently selected, is only a myth.”
As you can see, it is not single mutations which are a myth (IMO), but "pathways" where each single step is selected for function gain. Maybe I did not express myself clearly, I apologize.
I have clearly stated that single mutations are perfectly accessible to all living beings, especially bacteria. Indeed, specific single mutations can happen quite often in bacteria. In another thread, I roughly calculated the probability of a specific mutation in the E. coli genome at 1:(3*4.7 million), that is about 1 : 10^7 for each mutational event. That's not very low, for a common and fast-replicating organism like E. coli.
So, if a specific single mutation can give an indirect benefit, usually by slightly modifying an existing function, it can be selected. All the well documented examples we know of RM and NS are of that kind. Most of them imply single mutations, and almost all of them are examples of indirect advantage, derived from partial degradation of an existing function, selected by special aggressive conditions in the environment (like antibiotics).
I have also said explicitly that two coordinated mutations, that is two mutations which have to be simultaneously present before there is a function gain, are another matter: here the two probabilities multiply, and for E. coli the probability of any specific set of two mutations becomes about 1 : 10^14, which is much lower. Still, that is in the range of bacteria in a reasonable time (decades), while not so much in the range of, for instance, mammals. Indeed, Behe puts his "edge" for undirected evolution more or less there.
But I want to be more generous. I can accept that, very rarely, specific 3-mutation sets can be attained in bacteria, and selected if they confer a gain of function. Here the probability becomes 1 : 10^21, and we are already at a really problematic order of magnitude, but you know, luck happens. In bacteria or protozoa, at least, it could happen, although very very rarely. There is no point even discussing higher forms of life here.
But that's all. If we add more necessary mutations to our set, we are out. That would no longer be luck. That would have to be design.
So, if we want to obtain by single independent random mutations a specific set of, say, 10 mutations, the probability would become, in E. coli, 1 : 10^70, and we are out of any reasonable model (I know, we are still not at Dembski’s UPB, which is 1 : 10^150, but I really think we are already out, and I mean really out).
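Spelled out as arithmetic (using the rounded 10^-7 figure above; these are assumptions, not measured rates):

```python
# The arithmetic above, spelled out. p_single is the rounded 10^-7 figure
# from this comment (an assumption, not a measured rate).
p_single = 1e-7  # ~1 / (3 * 4.7 million), chance of one specific mutation per mutational event

for k in (1, 2, 3, 10):
    print(f"{k:2d} coordinated, unselected mutation(s): about 1 in {p_single ** -k:.0e}")
```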
Obviously, there is the alternative possibility that the single steps are selected. That would dramatically make everything infinitely simpler (although not necessarily easy). We would no longer have to multiply probabilities, because the expansion of each mutation to all the population would make the probability the same for each new mutation (the previous ones having been fixed). In other terms, it could be done, if we really could justify that kind of single-step fixation.
But we can't. That's what I meant with my argument about the myth. There is no known single pathway of, say, 10 different mutations, which leads to a definite function gain, and where each of the 9 intermediates is gradually fitter. Even more difficult would be to have such a pathway from one function to a completely different one. Even more difficult would be to have a pathway of, say, 50 mutations (or, as we discussed in detail in the previously mentioned thread, of 490 mutations).
That is the myth: that the functional landscape can be traversed by specific pathways, where you have a "stepping stone" of higher function at each single mutation, or, if we want to be generous, and if we are discussing bacteria, at every 2-3 mutation distance.
That's not true. That's the myth I was alluding to. There is no example of that, neither theoretical nor empirical. There is no trace of the billions of functional intermediates that such a scenario would imply.
In other words, that scenario is simply false.
JunkyardTornado (#26):
I think it is relevant. It shows that neutral mutations are quite common, and harmful ones common. Which, IMO, is perfectly true also in biological information.
The real problem comes with beneficial mutations, of course…
junkyardtornado
re; no discernable effect whatsoever
Offhand I’d say you aren’t traversing much of the code in the test. There are tools that help you determine how much of the code has actually been traversed. I’m familiar with this one but it may not be applicable in your environment.
http://www.compuware.com/produ.....c.htm#code
Junkyard you stated:
I had decided in advance to perform this test 20 times with 20 different one-bit random changes to the executable. The results: 2 program crashes, 1 malfunction, and in the other 17 cases, no discernable effect whatsoever.
Relevant? Irrelevant?
Better watch out Junkyard, if evolution is as true as evolutionists assure us it is, you will soon surely discover how to allow computer programs to write themselves. Thus putting many well paid software programmers out of jobs. (You might even crash the entire economy) I might add, when you discover the correct evolutionary process for computer program writing, they (the programs) will write themselves with a level of complexity that we will not be able to understand since the complexity in genomes is currently far beyond man’s grasp to understand. (ENCODE; Bill Gates)
gpuccio: As far as the 17 neutral changes – of course, nothing, I think, is really neutral. Each of these I would say had some marginal effect on program performance (in terms of speed or space usage), either good or bad. It just wasn't discernable from the standpoint of fitness (if user perception is the metric gauging fitness). Furthermore, to assume that I as a designer was so infallible that any change to my code would necessarily be at least a marginal decrease in performance would not be realistic.
I hadn’t decided before the test what would indicate program improvement, so the test was really only detecting neutral or harmful mutations. The test did not indicate the rarity of beneficial mutations.
DaveScot:
As far as the traversal of code, if I were to devote another hour or two to this I should probably do a code coverage analysis using that tool. I know that at least 80% of the source files were being hit.
Undoubtedly some of those neutral changes probably were doing something pretty harmful. But I was able to perform the program’s primary function without incident. Compare that to getting to breeding age without incident. In another thread, scordova talked about how even obviously beneficial mutations may never have a discernable impact. The same can be said for obviously harmful mutations.
Not an exhaustive test, though, in just ninety minutes.
It is very hard to establish neutrality in deeply redundant systems. You can knock out 1 of the 5 navigation systems of a Space Shuttle, but because of the quintuple redundancy, there is no obvious effect on the behavior.
Biological systems are deeply redundant, and system behaviors of the redundant system resist easy characterization by selection. See: Airplane Magnetos.
What is true of knockout approaches is true of deleterious mutations in regions of deep redundancy.
We shouldn’t presume that lack of immediate effects on fitness are necessarily indicative that a mutation is truly neutral.
junkyard
Still doesn’t sound right but without knowing the code I don’t have much to go on. A lot of the DLL could be data where a flipped bit might only cause some subtle error – a misspelled word or a pixel that is out of place.
Try this instead. Randomly flip a bit in the source code instead of the machine code (but make sure it doesn’t land in a comment). I’d bet dollars against donuts in more cases than not you won’t even be able to test the executable because you’ll get a compiler error from the altered source code. One thing you can be sure of is that the compiler will traverse every bit of the source code that isn’t a comment.
DaveScot:
“Still doesn’t sound right but without knowing the code I don’t have much to go on. A lot of the DLL could be data where a flipped bit might only cause some subtle error – a misspelled word or a pixel that is out of place.”
You could say the same about random changes to the genome – 20 changes at random are likely to be subtle errors (a pixel out of place, etc.) with no real impact.
If statically allocated strings were being altered I would have reported misspellings. Also, if there are statically allocated arrays and the like, it's not like there will be strings of 0's or something in the actual executable file. The data segment will be allocated at program startup. The exe will contain code, resources like dialogs, as well as statically allocated strings, etc.
Anyone could perform this test. Just bring up some executable in a hex editor (download something from cnet.com) and start making random changes. However, they would have to be actually random.
Will get back to this in a bit, however.
Or if anybody is interested in seeing a demo of my program, that could possibly be arranged as well.
Dave Scot:
I recall you were involved with hardware design or assemblers or something, so you probably have better intuition even than me as to why you can start changing bits around in machine code and be unscathed. I just think in terms like: if a word in hardware is n bits and your instruction set only requires n-m bits, then the top m bits of any instruction are being ignored. I'm sure there are many other factors as well. On substantive areas, if you allocate a dynamic array of 0xACDF bits (whatever that is in decimal) and a random bit change increases the size to 0xBCDF, then no harm. Also, I'm sure when you compile something, the compiler generates all sorts of boilerplate machine code, generalized for a number of potential scenarios, only a handful of which may ever materialize in one particular program. This code could also be mangled without making any difference. Scordova mentioned redundancy in the genome, so if there is redundancy there, in the form of junk DNA or whatever, obviously that can be mangled and not make a difference.
As far as compilers and high-level languages, I don’t think nature is such that if one tiny thing is out of place it just shuts down and refuses to do anything. It will take what you give it and attempt to do something, which is what a computer processor does, I think.
Don't know how much light this analysis sheds.
The scenario I tested says whatever it says.
Of interest to topic:
Pages 20-21, Genetic Entropy; Sanford:
Are there truly neutral nucleotide positions? True neutrality can never actually be demonstrated experimentally (it would require infinite sensitivity). However, for reasons we will get into later, some geneticists have been eager to minimize the functional genome, and wanted to regulate the vast bulk of the genome to “junk DNA”. So mutations in such DNA would be assumed to be entirely neutral. However actual findings relentlessly keep expanding the size of the functional genome, while the presumed “junk DNA” keeps shrinking. In just a few years, many geneticists have shifted from believing that less than 3% of the total genome is functional, to believing that more than 30% is functional – and that fraction is still growing. As the functional genome expands, the likelihood of neutral mutations shrinks. Moreover, there are strong theoretical reasons for believing there is no truly neutral nucleotide position. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding and nucleosome binding. If a nucleotide carries absolutely zero information, it is then by definition slightly deleterious – as it slows cell replication and wastes energy. Just as there are really no truly beneficial neutral letters in a encyclopedia, there are probably no truly neutral nucleotide sites in the genome. Therefore there is no way to change any given site, without some biological effect – no matter how subtle. Therefore, while most sites are probably “nearly neutral”, very few, if any, should be absolutely neutral.
Since Dr. Sanford made this crushing critique in 2005, the human genome has now been shown to be virtually 100% severely poly-functional, by ENCODE, with no "junk DNA" regions. Thus this principle is devastating to evolutionary theory (but boy do they do a song and dance around it!)
This "complex interwoven (poly-functional) network" throughout the entire DNA code makes the human genome severely poly-constrained to random mutations (Sanford; Genetic Entropy, 2005; page 141). This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code. Thus even though a random mutation to DNA may be able to change one part of an organism for the better, it is now proven much more likely to harm many other parts of the organism that depend on that one particular part being as it originally was. Since evolution was forced, by the established proof of Mendelian genetics, to no longer view the whole organism as the unit natural selection works upon, but to view the organism as a collection of multiple independent genes that can be selected or discarded as natural selection sees fit, this "complex interwoven network" finding is extremely bad news, if not absolutely crushing, for the "Junk DNA" population genetics scenario of evolution (modern neo-Darwinian synthesis) developed by Haldane, Fisher and Wright (page 52 and 53: Genetic Entropy: Sanford 2005)!
http://www.genome.gov/25521554
BETHESDA, Md., Wed., June 13, 2007 -” An international research consortium today published a set of papers that promise to reshape our understanding of how the human genome functions. The findings challenge the traditional view of our genetic blueprint as a tidy collection of independent genes, pointing instead to a complex network in which genes, along with regulatory elements and other types of DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood.”
http://www.boston.com/news/glo.....ed/?page=1
“The science of life is undergoing changes so jolting that even its top researchers are feeling something akin to shell-shock. Just four years after scientists finished mapping the human genome – the full sequence of 3 billion DNA “letters” folded within every cell – they find themselves confronted by a biological jungle deeper, denser, and more difficult to penetrate than anyone imagined.”
Since everyone is performing these fantastic mathematical computations, I will simply conjure up a point backed up by third grade mathematics, and then return to my seat in the back of the class, and take the short bus home after the bell rings.
30,000 generations of e. coli, and presto, we get a new constructive function, the ability to utilize citrate. Now, 30,000 advanced primate generations, at 20 years per generation, equals 600,000 years. So, a billion years equals less than 2,000 generations.
In this drop in the chronological bucket, is it feasible that the features and abilities, particularly mental, could have evolved through undirected genetic variation and natural selection?
And for a final point, if we took our kids, surrounded them with a tiny bit of pizza and ice cream, and enormous amounts of broccoli and other assorted veggies, would they mutate fast enough to eat the veggies before the “good” stuff runs out and they starve? Or go the direction of the Great Lizards?
Ekstasis: “Now, 30,000 advanced primate generations, at 20 years per generation, equals 600,000 years. So, a billion years equals less than 2,000 generations.”
???
It seems like a billion years would equal 50 million generations.
BA77:
Pages 20-21, Genetic Entropy; Sanford
…
“Are there truly neutral nucleotide positions? True neutrality can never actually be demonstrated experimentally (it would require infinite sensitivity). “
Let’s take that as a given.
“However, for reasons we will get into later, some geneticists have been eager to minimize the functional genome, and wanted to regulate the vast bulk of the genome to “junk DNA”. So mutations in such DNA would be assumed to be entirely neutral”.
It seems certain that is not the case. No one wants to establish junk-dna as being in some state of unprovable platonic neutrality – only that any benefit or detriment it has is marginal at best.
“If a nucleotide carries absolutely zero information, it is then by definition slightly deleterious – as it slows cell replication and wastes energy”
The issue isn't how much information it contains. Junk DNA could be a function that contained a lot of information, but a function that was hardly ever used, if at all, for example because a new environment made it completely unnecessary. So you have an organism dragging around this functional code that is never used or accessed in any way. So if it's something that's never used, mutations could make it nonfunctional and the organism would never care or know the difference (thus vestigial organs).
Also, no one would consider dead code in a program as an unacceptable waste of energy because of the cost to replicate it. “Keep it in – maybe sometime down the road we’ll debug it and be able to use it.”
“there are really no truly beneficial neutral letters in a encyclopedia, there are probably no truly neutral nucleotide sites in the genome.”
Is someone trying to make the case that there are letters that are “beneficial” and literally neutral at the same time?
At any rate…
"Let's see… these sentences here in the entry for zebra – do they really clarify anything? Let's go around the room… opinions? Where's Dave? Didn't he put this in? He's not in today? Better not change anything till we hear from him."
Since Dr. Sanford made this crushing critique in 2005…
I would say more like self-evident and maybe slightly vacuous.
http://www.genome.gov/25521554: “DNA sequences that do not code for proteins, interact in overlapping ways not yet fully understood…”
http://www.boston.com/news/glo.....ed/?page=1: “a biological jungle deeper, denser, and more difficult to penetrate than anyone imagined”
BA77: This means the DNA code is now much more severely limited in its chance of ever having a hypothetical beneficial mutation since almost the entire DNA code is now proven to be intimately connected to many other parts of the DNA code”
All the above sources indicated (esp. boston.com) is increasing ignorance concerning the genome.
dmso74
In (18) you are able to handle the concept of multiple versus single mutations. In (27) you are able to handle the concept of two mutations. If you read gpuccio (29), perhaps you will be able to handle the concept of three mutations and four mutations. Then you will be able to understand what Behe was saying in The Edge of Evolution.
What is being done here is using ID as a predictive theory. Instead of starting with the “knowledge” that there are no designers in natural history, and that therefore RV&NS had to be able to produce the variety of life we see because nothing else was around, and thus that an exact match exists between change over standard geologic time and the capabilities of RV&NS, ID starts out by allowing for the possibility that RV&NS is not the only creative force in biology. Thus it makes sense to ask the question, how much of what we see can be accounted for on the basis of RV&NS? That requires quantification. That means using numbers, like 1, 2, 3, 4, and so on. It also means using exponents, and probability theory (areas where Darwin was weak 😉 ).
There are two ways to proceed when developing a theory. The first is to use mathematical constructs of the theory. That is what has been done here. What is being said is that one specific single mutation is within easy reach of a single plate of bacteria, as long as it isn’t too devastating. (Even there, it is within reach; it simply won’t survive). However, two mutations is much harder, and puts us close to the edge of where mutations can get us during the lifetime of a researcher. Three mutations require considerable luck, and four mutations would be the equivalent of winning the lottery on one try. Five mutations? Fugeddaboudit. Twenty-seven mutations? We have now (at least in E. coli) passed the UPB, even allowing for more time and more bacteria. And we haven’t even gotten one protein for the flagellum, let alone 35 (or is it 60 with the promoters, etc.?).
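To put rough numbers on that scaling, here is a quick sketch using the roughly 1e-7 per-specific-mutation figure gpuccio assumed in 29 (an assumption, not a measurement):

```python
# Quick check of the scaling described above, using the roughly 1e-7 figure
# assumed in comment 29 for one specific E. coli point mutation (an assumed
# value, not a measurement).
P_SINGLE = 1e-7
UPB = 1e-150  # Dembski's universal probability bound

for k in (1, 2, 3, 4, 5, 27):
    joint = P_SINGLE ** k   # k required mutations, none individually selected
    note = "  <- past the UPB" if joint < UPB else ""
    print(f"{k:2d} unselected mutations: {joint:.0e}{note}")
```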
The second way to develop a theory is to test it. The straightforward ID prediction is that any experimentally demonstrated change in the genome of E. coli will be limited to two, or at the most 3 separate steps, unless the steps can be demonstrated to be sequentially more fit. That’s a scientific prediction, if you will, which would seem to make ID a scientific theory in the Popperian sense.
Behe noted that chloroquine resistance required 2 separate mutations to happen. That was within the edge of evolution, but barely. Lenski has apparently demonstrated a multiple-step change. It will be interesting to see precisely how many of those steps it took, and whether any of them were in fact advantageous in the medium. If it took 4 steps, all of which were neutral or slightly disadvantageous, then current ID theory will have to be severely modified or abandoned.
On the other hand, it does seem like standard evolutionary theory is at some risk as well. If this turns out to be a 3-neutral-step process, or especially a 2-neutral-step process, then the edge of evolution will be demonstrated to be where Behe says it is, and far too close to where an organism started to account for the variety of life as we know it. Perhaps more importantly, that would be experimental evidence, which is supposed to have more weight in science than theory does.
It will be very interesting to see precisely what mutations were required to allow E. coli to utilize citrate in their environment, and how advantageous or disadvantageous those steps were.
To all:
Let’s give up this nonsense of there being no neutral mutations. To claim that CUU instead of CUC in a protein coding sequence is somehow more deleterious (both coding for leucine), or vice versa, except in very special circumstances, is crazy. Furthermore, if all genetic changes are deleterious, then it follows that there must have at one time been, or at least been able to be, a perfect man (and woman). What was his (and her) skin color? Shape of nose? Distribution of body hair? Straightness of hair? Can we really say that more fat around the eyelids is more or less fit? Give me a break.
Finally, it is at least theoretically possible that beneficial mutations could happen in the real world. If a mutant bacterium mutates back to the original, would that not be a beneficial mutation? I can understand the argument that such mutations should be rare, but to call them impossible seems to be going a bit too far.
To illustrate the principle of poly-functionality and thus poly-constraint on a genome;
Craig Venter talks about the Mycoplasma genitalium bacterium, the smallest bacterium known, in DaveScot's video here:
Craig Venter – 18 months to 4th generation biofuels
http://www.uncommondescent.com.....-biofuels/
Venter, reservedly, talks in the video of the interwoven complexity of the bacterium that prevents the genome from being reduced much below approximately 500 genes. (Note: this is a somewhat higher and more accurate figure than previous estimates.)
Such as this previous study indicates:
“An earlier study published in 1999 estimated the minimal gene set to fall between 265 and 350. A recent study making use of a more rigorous methodology estimated the essential number of genes at 382.”
John I. Glass et al., "Essential Genes of a Minimal Bacterium," Proceedings of the National Academy of Sciences, USA 103 (2006): 425-30.
So if we were to get a proper "beneficial mutation" in a polyfunctional genome of 500 interdependent genes, then instead of the infamous "Methinks it is like a weasel" single-function information problem for Darwinists, we would actually be encountering something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford.
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S
Which is translated:
THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS.
This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter we may get a new meaning for a single reading read any one way, as in Dawkins' weasel program, but we will consistently destroy the other 3 readings of the message with the new mutation.
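A quick way to verify the four-readings claim for the square above (this checks only the word puzzle itself, nothing biological):

```python
# Check that the Sator square reads the same four ways: rows left-to-right,
# rows right-to-left (bottom-up), columns top-to-bottom, and columns
# bottom-to-top. This only verifies the word puzzle; it is an analogy, not
# a biological claim.
square = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

rows          = square
rows_reversed = [row[::-1] for row in square[::-1]]
cols          = ["".join(row[i] for row in square) for i in range(5)]
cols_reversed = [col[::-1] for col in cols[::-1]]

print(rows == rows_reversed == cols == cols_reversed)  # True: all four readings agree
```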
This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations.
The puzzle I listed is only poly-fuctional to 4 elements, as stated earlier the minimum genome is poly-constrained to approximately 500 elements (genes). For Darwinist to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order!
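Just to make that poly-constraint concrete, here is a little toy sketch of my own (not from Sanford’s book) in Python; it checks the four readings of the square and then “mutates” a single letter:
# Toy illustration: the SATOR square reads identically four ways
# (rows forward, rows backward from the bottom, columns downward,
# columns upward from the right).
square = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]

def readings(sq):
    rows = list(sq)
    rows_back = [r[::-1] for r in sq[::-1]]
    cols = ["".join(r[i] for r in sq) for i in range(5)]
    cols_back = [c[::-1] for c in cols[::-1]]
    return [rows, rows_back, cols, cols_back]

orig = readings(square)
assert all(r == orig[0] for r in orig)   # all four readings agree

# "Mutate" one off-diagonal letter: SATOR -> SBTOR
mutant = [list(r) for r in square]
mutant[0][1] = "B"
mutant = ["".join(r) for r in mutant]
mut = readings(mutant)
broken = sum(r != mut[0] for r in mut[1:])
print(f"One letter change: the mutated reading survives, but {broken} of the other 3 readings are destroyed")
One single-letter “mutation” may still give a readable line in one direction, but it wrecks the other readings, which is the whole point of poly-constraint.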
I believe 20,000 generations is not the relevant measure, but rather the number of reproductive events. It just takes one positive reproductive mutation to theoretically permeate the population.
The number of reproductive events adds a few zeros to the exponent of the opportunities the bacteria have had to evolve.
The number of reproductive events for primates, or for any multicellular animal, is quite small compared to what has happened in Lenski’s lab.
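For a rough sense of scale, here is a back-of-envelope in Python; the figures are assumed round numbers of my own, not Lenski’s exact ones:
# Back-of-envelope with assumed round numbers (not Lenski's exact figures):
# a daily 100-fold regrowth gives roughly log2(100) ~ 6.6 generations per day,
# and on the order of 5e8 net new cells are grown per flask per day.
gens_per_day = 6.6
new_cells_per_day = 5e8
generations = 30000
bacterial_events = (generations / gens_per_day) * new_cells_per_day
print(f"E. coli: ~{bacterial_events:.0e} reproductive events over {generations} generations")

# Compare an assumed ancestral primate population of ~1e5, one birth per
# individual per generation, over the same number of generations.
primate_events = 1e5 * generations
print(f"Primates: ~{primate_events:.0e} births over the same number of generations")
On those assumptions the bacteria get on the order of a thousand times more reproductive events than the primates do over the same generation count.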
My apologies for the earlier math; let me rephrase. It took 30,000 generations to make one relatively minor functional change in E. coli. 30,000 generations of advanced primates is roughly equal to 600,000 years. Now, as mentioned, selecting for multiple changes simultaneously can get very dicey. Advanced primates, transitioning to full humanity, would require thousands or millions of functional changes, particularly in the areas of the brain, would it not?
How many 600,000-year cycles do we have to play with in order to fit with actual timeframes, and is this at all within the realm of reason?
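Spelling out that arithmetic (20 years per primate generation and the commonly cited ~6 million years since the human/chimp split are my assumed round numbers):
years_per_generation = 20
generations = 30000
cycle_years = generations * years_per_generation      # 600,000 years, as stated
window_years = 6_000_000                               # assumed time available
print(cycle_years, "years per cycle;", window_years // cycle_years, "such cycles to play with")
That is only about ten such cycles, which is exactly the question being raised.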
Ekstasis: “if we took our kids, surrounded them with a tiny bit of pizza and ice cream, and enormous amounts of broccoli and other assorted veggies, would they mutate fast enough to eat the veggies before the “good” stuff runs out and they starve?”
Maybe the following looks into that:
Vegetable acceptance by infants: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2366040
#24 dmso74
In #29 and #45, gpuccio and Paul Giem did reply to your point.
#29 gpuccio
“As you can see. it is not single mutations which are a myth (IMO), but “pathways” where each single step is selected for function gain. Maybe I did not express myself clearly, I apologize.”
I did understand your point, and obviously I agree.
“I have clearly stated that single mutations are perfectly accessible to all living beings, especially bacteria. Indeed, specific single mutations can happen quite often in bacteria.”
And this is clearly argued by Behe in EoE.
“I have also said explicitly that two coordinated mutations, that is two mutations which have to be simultaneously present before there is a function gain, are another matter: here the two probabilities multiply, and for E. coli the probability of any specific set of two mutations becomes about 1 : 10^14, which is much lower. Still, that is in the range of bacteria in a reasonable time (decades), while not so much in the range, for instance, of mammals. Indeed, Behe puts more or less there his “edge” for undirected evolution.”
Precisely. And he stated so very clearly. I don’t understand how someone who claims to have read EoE could reasonably not know this.
“But I want to be more generous. I can accept that, very rarely, specific 3-mutation sets can be attained in bacteria, and selected if they confer a function gain. Here the probability becomes 1 : 10^21, and we are already in a really problematic order of magnitude, but you know, luck happens. In bacteria or protozoa, at least, it could happen, although very very rarely. It’s not even worth discussing higher forms of life here.”
That’s right, although obviously NDEers are constrained to discuss application to higher forms of life.
“But that’s all. If we add more necessary mutations to our set, we are out. That would no more be luck. That would have to be design.”
Or at least even the fiercest anti-IDer should, to be intellectually honest, admit: “OK, you’re right; RM+NS don’t work here; I don’t know.”
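Just to spell out the multiplication being described (the per-mutation figure below is simply back-solved from the 1 : 10^14 quoted above for a coordinated pair; it is not an independently measured rate):
# Coordinated (simultaneous) mutations multiply their individual probabilities.
p_single = 1e-7     # implied by the 1:10^14 figure for a specific pair (assumption)
for k in (1, 2, 3, 4):
    print(f"{k} coordinated mutation(s): about 1 in {p_single**-k:.0e}")
# -> 1e+07, 1e+14, 1e+21, 1e+28: each additional required mutation adds about
#    seven orders of magnitude, which is where the "edge" argument comes from.
If, instead, each step could be selected and fixed on its own, the probabilities would no longer multiply, which is exactly the scenario discussed next.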
“Obviously, there is the alternative possibility that the single steps are selected. That would dramatically make everything infinitely simpler (although not necessarily easy). We would no longer have to multiply probabilities, because the expansion of each mutation to all the population would make the probability the same for each new mutation (the previous ones having been fixed). In other terms, it could be done, if we really could justify that kind of single-step fixation.”
This is the only point where I don’t agree with you. Certainly step-by-step selection would yield more reasonable chances, but it’s not true that this would “make everything infinitely simpler”; in fact we would still have very low chances. Moreover, your following statement is very generous: “(although not necessarily easy)”. Let us consider that, even in the most favourable conditions, the Haldane paradox seems to put a very severe constraint on the actual occurrence of any reasonable evolutionary pathway.
That is the myth: that the function landscape can be traversed by specific pathways, where you have a “stepping stone” of higher function at each single mutation or, if we want to be generous, and if we are discussing bacteria, at every 2-3 mutation distance.
“There is no trace of the billions of functional intermediates that such a scenario would imply.
In other words, that scenario is simply false.”
That’s IMHO the final empirical proof that what seems theoretically impossible was impossible in the real world too.
Junkyard, I read and reread your post in 44, and it seems to me that you are trying to argue for large unused sequences in a genome. I think you are well aware that this is an evolutionary/materialistic presupposition, and I think you also know this is not the way the cutting-edge evidence is going. But that is OK. If you want to continue to hold onto your belief in “vast swaths” of DNA that are currently non-functional but awaiting future assignment, be my guest. Myself, I will await further work along the ENCODE lines and expect to see the Theistic ID position validated in stunning fashion for its postulation of a loss of CSI with each divergence of a sub-species from a parent species.
#46 bornagain77
“something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford.
S A T O R
A R E P O
T E N E T
O P E R A
R O T A S”
Only for the sake of precision, I add that this famous palindrome has, to the best of our knowledge, always been associated with Christians. However, it is also found in the reversed form:
rotas
opera
tenet
arepo
sator
In particular, this is the form in which it was written in the buried town of Pompeii (this is the reason why we are sure the palindrome dates from before 79 AD).
“The puzzle I listed is only poly-functional to 4 elements; as stated earlier, the minimum genome is poly-constrained to approximately 500 elements (genes). For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life is absurd in the highest order!”
That’s right. And this is the reason why the more science shows that DNA is not “junk” and is polyfunctional, the more Darwinian ideas look like mere faith in chance.
Kairos,
I wrote the following article in response to people continually debating with me that DNA consists mostly of “junk DNA”.
The Wonder of DNA
To illustrate the complexity and wonder in the DNA of man, let’s look at some of the work of Samuel Braunstein, a quantum physicist at the Weizmann Institute in Israel.
Braunstein was asked to present a talk to the science-fiction club in Rehovot. What better topic, he thought, than quantum teleportation? Because of the limitations imposed by the laws of physics on ever teleporting any material object, Braunstein suggested the secret to teleportation would lie not in transporting people or material objects, but in teleporting the molecular information about whatever was to be teleported. Somehow, this Star Trek-type teleporter must generate and transmit a parts list and blueprint of the object being teleported. This information could be used in reconstructing the object at its final destination. Presumably, the raw materials would be available there to reconstruct it. Naturally this process raises a lot of questions that the scriptwriters for Star Trek never answered. For example, just how much information would it take to describe how every molecule of a human body is put together?
In a human body, millimeter accuracy isn’t nearly good enough. A molecule a mere millimeter out of place can mean big trouble in your brain and most other parts of your body. A good teleportation machine must be able to put every atom and molecule back in precisely its proper place. That much information, Braunstein calculated, would require a billion trillion desktop computer hard drives, or a bundle of CD-ROM disks that would take up more space than the moon. The atoms in a human being are equivalent to an information content of about a thousand billion billion billion bits. Even with today’s top technology, this means it would take about 30 billion years to transfer this mass of data for one human body from one spot to another. That’s twice the age of the universe. “It would be easier,” Braunstein noted, “to walk.”
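For what it’s worth, here is the raw arithmetic behind the comparison I am about to make; Braunstein’s ~10^30-bit figure is as quoted above, and 2 bits per base pair for the 3 billion base pairs of raw DNA sequence is the only other assumption:
# Rough arithmetic for the comparison (assumed round numbers).
base_pairs = 3e9
dna_bits = 2 * base_pairs                 # ~6e9 bits of raw, uncompressed sequence
dna_gb = dna_bits / 8 / 1e9               # ~0.75 GB, a fraction of one hard drive
atomic_bits = 1e30                        # "a thousand billion billion billion bits"
print(f"DNA sequence: ~{dna_gb:.2f} GB; the atom-by-atom description is ~{atomic_bits/dna_bits:.0e} times larger")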
Yet the DNA of man contains the parts list and blueprint of how all these trillions upon trillions of protein molecules go together in just 3 billion base pairs of DNA code. As well, the DNA code contains the “self-assembly instructions” that somehow tell all these countless trillions of protein molecules how to put themselves together into the wonder of a human body. Yet far from the billion-trillion computer hard drives calculated by Braunstein, these 3 billion letters of information in the DNA of man could easily fit onto the single hard drive of the computer I’m writing this article on, with plenty of room left to spare! That ratio of a billion trillion hard drives reduced to one hard drive is a truly astonishing amount of data compression, far exceeding anything man is capable of. It is abundantly clear that all the required information for exactly how all the protein molecules of man are put together is somehow ingeniously encrypted in some kind of “super code” in the DNA of man. Amazingly, many evolutionary scientists “used” to say the majority of DNA that didn’t directly encode for proteins (genes) was leftover “junk” DNA from man’s falsely presumed evolutionary past. Now this blatantly simple-minded view of the required complexity that is inherent in the DNA of man has been solidly overturned. In June 2007, an international research consortium named ENCODE published a huge body of preliminary evidence that gives a glimpse into the world of DNA’s complexity. This is a quote from a Science Daily article about the landmark study.
In a group paper published in the June 14, 2007 issue of Nature and in 28 companion papers published in the June issue of Genome Research, the ENCyclopedia Of DNA Elements (ENCODE) consortium, which is organized by the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health (NIH), reported results of its exhaustive, four-year effort to build a parts list of all biologically functional elements in 1 percent of the human genome. Carried out by 35 groups from 80 organizations around the world, the research served as a pilot to test the feasibility of a full-scale initiative to produce a comprehensive catalog of all components of the human genome crucial for biological function. The ENCODE consortium’s major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active. The new data indicate the genome contains very little unused sequences and, in fact, is a complex, interwoven network. In this network, genes are just one of many types of DNA sequences that have a functional impact.
The revelation of a complex interwoven network is a major blow to evolutionists. Now bear in mind, this is only a “feasibility study” of 1% of the Genome. The interwoven complexity is sure to be multiplied exponentially as the effort extends to decipher the remaining 99% of the DNA. This preliminary study, of how DNA is actually encoded, clearly indicates that most, if not the entire 100%, of the DNA is “poly-functional”. Poly-functional simply means the DNA exhibits extreme data compression in its character. “Poly-functional” DNA sequences will exhibit several different meanings on several different levels. For instance, if you were to write a (very large) book similar to the DNA code, you could read many parts of the book normally and it would have one meaning, you could read the same parts of the book backwards and it would have another completely understandable meaning. Yet then again, a third equally coherent meaning would be found by reading every other letter of the same parts. A fourth level of meaning could be found by using a simple encryption program to get yet another meaning. A fifth and sixth level of meaning could be found in the way you folded the parts of the book into specific two and three dimensional shapes. Please bear in mind, this is just the very beginning of the mind bending complexity scientists are finding in the DNA code. Indeed, a study by Trifonov in 1989 has shown that probably all DNA sequences in the genome encrypt for up to 12 different codes of encryption!! No sentence, paragraph, book or computer program man has ever written comes close to that staggering level of poly-functional encryption we find in the DNA code of man. Here is a quote on the poly-functional nature of the DNA from renowned Cornell Geneticist and inventor Dr. John Sanford from his landmark book, “Genetic Entropy”:
There is abundant evidence that most DNA sequences are poly-functional, and therefore are poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs, read in opposite directions i.e. Both DNA strands are transcribed ( Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins). Some sequences encode for different proteins based upon alternate mRNA splicing. Some sequences serve simultaneously for protein-encoding and also serve as internal transcriptional promoters. Some sequences encode for both a protein coding, and a protein-binding region. Alu elements and origins-of-replication can be found within functional promoters and within exons. Basically all DNA sequences are constrained by isochore requirements (regional GC content), “word” content (species-specific profiles of di-, tri-, and tetra-nucleotide frequencies), and nucleosome binding sites (i.e. All DNA must condense). Selective condensation is clearly implicated in gene regulation, and selective nucleosome binding is controlled by specific DNA sequence patterns – which must permeate the entire genome. Lastly, probably all sequences do what they do, even as they also affect general spacing and DNA-folding/architecture – which is clearly sequence dependent. To explain the incredible amount of information which must somehow be packed into the genome (given that extreme complexity of life), we really have to assume that there are even higher levels of organization and information encrypted within the genome. For example, there is another whole level of organization at the epigenetic level (Gibbs 2003). There also appears to be extensive sequence dependent three-dimensional organization within chromosomes and the whole nucleus (Manuelides, 1990; Gardiner, 1995; Flam, 1994). Trifonov (1989), has shown that probably all DNA sequences in the genome encrypt multiple “codes” (up to 12 codes).
Dr. John Sanford (PhD in Genetics; inventor of the biolistic “gene gun” process; holder of over 25 patents. In addition to the gene gun, Sanford invented both pathogen-derived resistance and genetic immunization. If you ate today, you probably ate some food that has been touched by his work in manipulating the genetics of food crops!)
Though the ENCODE consortium is about to undertake the task of deciphering the remaining 99% of the human genome, I firmly believe that they, and all their super-computers, are soon to be dwarfed by the sheer and awesome complexity with which that much required information is encoded into the three billion letters of the DNA code of man. As a sidelight to this, it takes the most powerful super-computer in the world an entire year just to calculate how a single 100-amino-acid protein sequence will fold into a 3-dimensional shape from its 1-dimensional starting point. Needless to say, this impressive endeavor by ENCODE to decipher the entire genome of man will be very, very interesting to watch. Hopefully ENCODE’s research will enable doctors to treat a majority of the over 3,500 genetic diseases (mutational disorders) that afflict man without having to fully understand all that apparent complexity in the DNA of man.
The only source for purely evolutionary change to DNA that is available to the atheistic evolutionists is the natural selection of copying errors that occur to DNA. This is commonly known as natural selection of random mutations to DNA. What evolutionists fail to ever mention is that natural selection is actually just the totally random selection of some hypothetical beneficial mutation that has never actually been clearly demonstrated to occur in the laboratory. For all practical purposes, all random mutations to DNA that have been observed in the laboratory (we are talking millions of observations here) are either clearly detrimental or slightly detrimental to the organism having the mutation. All mutations that are deemed to be somewhat beneficial to the organism, such as the antibiotic resistance of bacteria, turn out to involve loss of function in the genome. In fact, at least 99.9999% of the copying errors that do occur to DNA are shown to be detrimental to the organism having the mutation (Gerrish and Lenski, 1998). Evolution assumes a high level of beneficial flexibility for DNA. But alas for the atheistic evolutionists, the hard evidence of science indicates an astonishingly high level of integrity in the DNA code! A code which Bill Gates, the founder of Microsoft, states is far, far more complex than any computer code ever written by man.
Sometimes a mutation to the DNA is found to be the result of a “complex feedback” of preexisting information that seems to be somewhat beneficial to the organism at the macroscopic level (such as lactase persistence). Yet, even in these extremely rare examples of “beneficial” mutations, the beneficial mutation in question never shows a violation of what is termed “Genetic Entropy”. Genetic Entropy is a fundamental principle of science which means that functional information in the DNA cannot increase “above the level of the parent species” without an outside source of intelligence putting the information into the DNA. To be absolutely clear about this, evolutionists have never proven a violation of genetic entropy in the laboratory (Sanford, Genetic Entropy, 2005); thus they have never even proven a gain in information in the DNA of organisms above the level of the parent species, and thus they have never conclusively proven evolution as a viable theory at the molecular level in the first place! To make matters worse for the evolutionists, even if a purely beneficial random mutation were ever to occur, it would be of absolutely no use to the evolutionary scenario, for it would be swallowed in a vast ocean of slightly detrimental mutations. Yet evolutionists act like evolution has been conclusively proven on the molecular level many times over.
DNA is extremely resilient in its ability to overcome copying errors, yet, as stated earlier, evolutionary scientists claim that the copying errors that do occasionally slip through are what are ultimately responsible for the sheer and awesome complexity we find in the DNA code of man. Contrary to their materialistic beliefs, mutations do not create stunning masterpieces!
As well, the overwhelming “slightly detrimental” nature of all observed mutations to DNA in the laboratory has been thoroughly established by Dr. J.C. Sanford, in his book “Genetic Entropy”. He shows in his book that there is indeed a slightly negative effect for the vast majority of mutations. These slightly detrimental mutations are not readily apparent at the macroscopic level of the organism. These slightly negative mutations accumulate over time in all higher species, since they are below the power of natural selection to remove them from a genome. These “slightly negative” mutations accumulate in a higher species until “genetic meltdown” occurs in the species. Indeed, if mutation rates for higher species have stayed similar to what they currently are throughout the history of complex life on earth, then genetic meltdown is the most reasonable cause for the numerous mysterious extinctions in the fossil record. Over 90% of extinctions in the fossil record have occurred by some unknown natural mechanism. The average time to “mysterious extinction” is rather constant, at about 4 million years per species in the fossil record (Van Valen, “A new evolutionary law,” 1973).
http://www.nap.edu/openbook.ph.....8;page=117
Mysterious extinctions which are not part of any known major natural catastrophes in the history of the earth. I would like to point out that since the laws of physics have been clearly proven to have remained stable throughout the history of the universe, there is no compelling reason to suspect that the naturally occurring mutations to DNA have changed significantly from their present rate for any prolonged period of time. Thus the “genetic meltdown theory” is surprisingly strong as the solution to the fairly “constant rate” of mysterious extinctions of higher life-forms in the fossil record.
I’ll end my paper with a bit of trivia. The capacity of the DNA molecule to store information is so efficient that all the information needed to specify an organism as complex as man weighs less than a few thousand-millionths of a gram. The information needed to specify the design of all species of organisms that have ever existed on earth (a number estimated to be one billion) could easily fit into a teaspoon, with plenty of room left to spare for every book that has ever been written on the face of the earth. Obviously, I am just barely touching the surface of the complexity that is apparent in the DNA of man. Yet even from this superficial examination, we find truly golden nuggets of astonishing evidence that we are indeed the handiwork of Almighty God.
Psalm 139:14
I will praise You, for I am fearfully and wonderfully made;
bornagain77: “Junkyard, I read and reread your post in 44, and it seems to me that you are trying to argue for large unused sequences in a genome. I think you are well aware that this is an evolutionary/materialistic presupposition, and I think you also know this is not the way the cutting-edge evidence is going. But that is OK. If you want to continue to hold onto your belief in “vast swaths” of DNA that are currently non-functional but awaiting future assignment, be my guest. Myself, I will await further work along the ENCODE lines and expect to see the Theistic ID position validated in stunning fashion for its postulation of a loss of CSI with each divergence of a sub-species from a parent species.”
So the cutting-edge evidence is indicating what, exactly? That the genome is a bunch of unmodular spaghetti code, so that if you make a change in one place, it’s liable to break something else in some other remote area of the code? And furthermore, this proves it was all designed in advance by a disembodied intelligent designer?
The general theme of the Boston.com article seems to be ignorance:
“As for the remaining 95 percent of the genome? “There’s this weird lunar landscape of stuff we don’t understand,” Lander said. “No one has a handle on what matters and what doesn’t.”
“No one knows what all that extra RNA is doing. It might be regulating genes in absolutely essential ways. Or it may be doing nothing of much importance: genetic busywork serving no real purpose.”
Scordova was talking above about code redundancy in the genome, so that you can change something without adverse effects appearing.
Dave Scot mentioned the necessity of code-traversal tools for program testing. IOW, even with systematic testing there are liable to be vast swaths of code that you never even hit. You could put whatever garbage you want in those sections and it wouldn’t make any difference. You tend to code for contingencies that never materialize. If this code never ends up being used, it might as well be junk. Then, to use the example of E. coli, it could have the ability to use some nutrient, but then be in an environment where that nutrient is never present, so the ability is never used. Then some mutation happens to it and that ability is broken, but it doesn’t make any difference given its current environment. So to me, there’s junk DNA all over the place. If there are more rigorous papers than the Boston.com article that come right out and say junk DNA is an erroneous concept, then you can point me to them if I need to know about them.
kairos:
thank you for the comments.
When I wrote “make everything infinitely simpler”, I was referring just to the probability count, which does indeed change dramatically in that scenario, because you no longer have to multiply the single probabilities. That’s a point which is often misunderstood, but really, if you allow for totally efficient selection and expansion of each mutation, we are no longer in the realm of randomness. We are rather in the realm of intelligent selection, like in the “Methinks it is like a weasel” example, and we do know that, under those conditions, the probabilities, although not high, are empirically affordable.
I don’t think that’s any form of concession to Darwinism: the fact remains that such a kind of selection is possible only if you know in advance the information to be selected (like Dawkins in the Shakespeare example). And the deconstruction of complex information into single-bit variation with continuous function increase is obviously impossible. And even if one example existed where it is possible (and I really think it does not exist), how could anyone conceive that it should be possible for all complex information? That would be, indeed, a weird new law of nature, or of logic, of which nobody has ever had any hint! Just think: we have tens of thousands of different proteins even in a single mammal, most of them very different from one another, with different 3D structures and lots of different domains and specific active sites. And all that variety of information should be “linked” by efficient single-bit pathways with ordered function increase? I think the kindest name for that scenario is “bullshit”…
When I wrote “although not necessarily easy”, I was exactly thinking of Haldane’s dilemma, which you very appropriately cite, and of all the other improbabilities and impossibilities which would make it really “hard” to guarantee effective selection and expansion, even if the “pathways”, which don’t exist, did really exist.
Junkyard,
Post 54 is especially for you buddy.
bornagain:
you write, “This is a quote from a Science Daily article about the landmark study.” But what follows does not contain quotation marks, block quotes or any references that can be checked. In this section you say,
“The ENCODE consortium’s major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active. The new data indicate the genome contains very little unused sequences and, in fact, is a complex, interwoven network”
Was this all a quote from the Science Daily article?
BornAgain77: “a good teleportation machine must be able to put every atomic molecule back in precisely its proper place. That much information, Braunstein calculated, would require a billion trillion desktop computer hard drives, or a bundle of CD-ROM disks that would take up more space than the moon.
…
The capacity of the DNA molecule to store information is so efficient that all the information needed to specify an organism as complex as man weighs less than a few thousand-millionths of a gram. The information needed to specify the design of all species of organisms that have ever existed on earth (a number estimated to be one billion) could easily fit into a teaspoon with plenty of room left to spare for every book that has ever been written on the face of earth. Obviously, I am just barely touching the surface of the complexity that is apparent in the DNA of man. Yet even from this superficial examination, we find truly golden nuggets of astonishing evidence that we are indeed the handiwork of Almighty God.
Psalm 139:14 I will praise You, for I am fearfully and wonderfully made;”
————–
(Psa 19:1-8) The heavens declare the glory of God; the skies proclaim the work of his hands. Day after day they pour forth speech; night after night they display knowledge. There is no speech or language where their voice is not heard. Their voice goes out into all the earth, their words to the ends of the world. In the heavens he has pitched a tent for the sun, which is like a bridegroom coming forth from his pavilion, like a champion rejoicing to run his course. It rises at one end of the heavens and makes its circuit to the other; nothing is hidden from its heat. The law of the LORD is perfect, reviving the soul. The statutes of the LORD are trustworthy, making wise the simple. The precepts of the LORD are right, giving joy to the heart. The commands of the LORD are radiant, giving light to the eyes.
The above passage is significant to me for the following:
Intelligent Design has historically been stated in reference to the two following observations (among others): that law is not sufficient to generate CSI, and that there are not enough particle interactions in the universe to account for the emergence of CSI. So in short, the supposed impotence of law and of the heavens are the two pillars of ID thought. The above passage talks about the marvels of the heavens and then, seemingly abruptly and with no apparent connection, starts talking about the Law, saying that it even gives light to the eyes.
In your paper, ba77, you talked about how there are multiple levels of meaning in the human genome. Well, there are multiple levels of meaning in the Bible as well. The Old Testament juridical law may be in view in some general sense in the above passage. It is ironic to me, however, in light of ID’s denigration of A) the law and B) the heavens, that those two same concepts would be exalted in the way that they are in the above passage, in direct connection to one another and also in connection to man.
And my point is, while man and the universe are the “handiwork” of God in a certain sense, God does not have literal hands, and the question is, was there a physical intermediary of laws and the universe that in effect served as God’s hands and can also serve quite completely as a proximal explanation for man’s existence? And if not, why on earth does the physical universe exist?
ba77: “As well, the overwhelming “slightly detrimental” nature of all observed mutations to DNA in the laboratory has been thoroughly established by Dr. J.C. Sanford, in his book “Genetic Entropy”…
Mysterious extinctions which are not part of any known major natural catastrophes in the history of the earth. I would like to point out that since the laws of physics have been clearly proven to have remained stable throughout the history of the universe, there is no compelling reason to suspect that the naturally occurring mutations to DNA have changed significantly from their present rate for any prolonged period of time. Thus the “genetic meltdown theory” is surprisingly strong as the solution to the fairly “constant rate” of mysterious extinctions of higher life-forms in the fossil record.”
This part of your discussion seemed to be compelling, I’ll have to admit.
Junkyard,
Sidenote:
The overwhelming stability of bacteria through thousands upon thousands of generations strongly indicates that there is no part of the genome (junk DNA) for evolution to play with, i.e., junk DNA is ruled out by a straightforward test of genome flexibility!
Junkyard: I have to admit that your response in 59 seems a bit vague to me. But I’ll try to take this last point:
you stated:
And my point is, while man and the universe are the “handiwork” of God in a certain sense, God does not have literal hands, and the question is, was there a physical intermediary of laws and the universe that in effect served as God’s hands and can also serve quite completely as a proximal explanation for man’s existence? And if not, why on earth does the physical universe exist?
Just how did God Almighty implement the design of the universe?
Though completely amateur in my effort, I will lay out what I have so far:
There are foundational Theistic claims for the characteristics of Almighty God. These characteristics are:
Omnipotent
Omnipresent
Transcendent
Eternal and
Omniscient
His Eternal characteristic has basic plausible empirical confirmation in special relativity, with time, as we know it, coming to a complete stop at the speed of light. Thus, since all foundational sub-atomic matter was constructed from energy at the Big Bang, this indicates that all matter arose from some eternal “timeless” dimension of energy.
The fact that the basic universal laws are precisely the same everywhere we look in the universe, and have held exceedingly unchanged over the entire age of the universe (save for the proposed inflation), gives tentative indication that the universal constants are indeed independent and transcendent of any proposed material basis, and also gives tentative confirmation for the omnipresent and transcendent characteristics of God.
And finally, Quantum Teleportation experiments by Dr Zeilinger,
Spooky action and beyond
http://www.signandsight.com/features/614.html
actually proves the transcendence and dominion of “information” over the material/energy realm and makes God’s omniscient (all knowing) and omnipotent (all powerful) characteristics plausible with how our reality is actually constructed.
That is to say, quantum teleportation establishes beyond any reasonable doubt that “transcendent information” does not arise from energy/matter, as materialism presupposes, but in fact “transcendent information” is completely dominant over energy/matter (material) itself and is therefore, by force of overwhelming logic, foundational and primary to the energy/matter it dominates.
Thus you have all the basic postulated Theistic characteristics of Almighty God tentatively to strongly confirmed by the current empirical evidence of physics.
In fact Dr. Zeilinger goes so far as to quote scripture to explain information’s foundational role in our reality:
http://www.metanexus.net/Magaz.....fault.aspx
excerpt:
In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: “In the beginning was the Word.”
ba77:
I am reading the Anton Zeilinger paper now and note the following:
“That’s right. I call that the two freedoms: first the freedom of the experimenter in choosing the measuring equipment – that depends on my freedom of will; and then the freedom of nature in giving me the answer it pleases.”
Zeilinger has a default assumption of free will to begin with, not based at all on the conclusions of any of his research. This is significant because my understanding is that a huge amount of the paradox and mysticism surrounding quantum phenomena immediately disappears if you do not assume free will. IOW, there are hyperdeterministic interpretations of quantum theory that do not entail “observers” influencing events by the power of their free will. But I’m in over my head as well.
The very next question of the interviewer addresses this point and I had not read it before my above post. I just read Zeilinger’s comment on free will and responded. Here is the interviewer’s remark:
I’d like to come back to these freedoms. First, if you assumed there were no freedom of the will – and there are said to be people who take this position – then you could do away with all the craziness of quantum mechanics in one go.
ba77: “And finally, Quantum Teleportation experiments by Dr Zeilinger,
Spooky action and beyond
http://www.signandsight.com/features/614.html
actually proves the transcendence and dominion of “information” over the material/energy realm and makes God’s omniscient (all knowing) and omnipotent (all powerful) characteristics plausible with how our reality is actually constructed.
———-
Interviewer:“I’d like to come to the second freedom: the freedom of nature. You said that for example the velocity or the location of a particle are only determined at the moment of the measurement, and entirely at random.”
Zeilinger:“I maintain: it is so random that not even God knows the answer.”
The fact that quantum events defy time and space in the first place is another strong indication that our reality’s ultimate basis is founded in a “higher dimension”. Thus another strong confirmation of a primary Theistic postulation.
To get back to the dominion of information over this reality: we can only determine that information is in fact dominant over and primary to energy/matter when we entangle particles, yet there is no reason to presuppose that every energy/matter particle, whether entangled or not, does not have a dominant/primary information signature at its basis. I.e., it is naturally reasonable to presume that the foundational and primary information of every energy/matter particle in the universe exists continuously in the primary transcendent realm; i.e., it is never reasonable to assume the information does not exist, since it is foundational and primary to the material realm in the first place. Thus, since it is reasonable to presume the information of every particle exists prior to the existence of the particle, it is reasonable to presume that the infinite mind of God has knowledge of every material/energy particle prior to its existence, no matter how random it is. To presuppose there is no infinite mind of God is to presuppose no overarching structure, i.e., it is to presuppose chaos as the foundation of reality.
#54 ba77
Thanks for the contribution. I strongly think that future discoveries about “non-junk-ness” of DNA will more and more vindicate ID.
Only, I’m not completely convinced by the direct correlation you’ve drawn between the huge information needed to characterize a human body (or whichever higher animal, for that matter) and the compression of the DNA code.
Certainly DNA is highly polyfunctional and embeds many nested coding levels, but it doesn’t have to code ALL, nor even most, of the information content a body requires.
After all, this is a common concept in any design. For example, let us consider the design plan of a highly complex artifact, e.g. a bridge. The design plan at the higher level contains only the major information about the bridge’s position, structure and composition. The work plan, instead, has to contain much more information, to enable field engineers to implement the bridge by controlling the work of a large number of workers and machines.
BUT in any case the overall design information needed to actually implement the bridge doesn’t have to be equal to the huge raw information that would be required to correctly characterize the position and composition of every atom or molecule of the bridge.
This is only to state that we cannot strictly speak of a direct compression of the raw information of the human body into the DNA code.
#56 gpuccio
Thanks for your clarification. Your point of view is exactly what I meant
Paul Giem et al.:
“On the other hand, it does seem like standard evolutionary theory is at some risk as well. If this turns out to be a 3-neutral-step process, or especially a 2-neutral-step process, then the edge of evolution will be demonstrated to be where Behe says it is, and far too close to where an organism started to account for the variety of life as we know it. Perhaps more importantly, that would be experimental evidence, which is supposed to have more weight in science than theory does.”
This was at least a 3-step process: one neutral “potentiating” mutation, one weakly beneficial mutation and one strongly beneficial mutation (that may have been a multiple mutation). Behe clearly states in his book that 2 steps is the “edge”, so we are already over that edge and may be well over it.
Which, again, is why it is surprising to me that Behe chose this paper as support for his hypothesis, when it is clear evidence against it.
dmso74 hasn’t read The Edge of Evolution and is either making things up about what’s in it or is parroting fallacious sources. He was warned to stop, ignored the warning, and is now no longer with us.
kairos,
I slightly disagree with this statement:
BUT in any case the overall design information needed to actually implement the bridge doesn’t have to be equal to the huge raw information that would be required to correctly characterize the position and composition of every atom or molecule of the bridge.
An atom out of place on a bridge is no big deal for the bridge, yet:
In a human body, millimeter accuracy isn’t nearly good enough. A molecule a mere millimeter out of place can mean big trouble in your brain and most other parts of your body. A good teleportation machine must be able to put every atomic molecule back in precisely its proper place.
Thus the demands are far greater for DNA than for the bridge blueprint.
Although DaveScot brought up a valid objection a while back, when he said that much of the information for construction would have to exist separately from the DNA in the cell, nevertheless, for illustration purposes, I believe the Braunstein teleportation example is very clear in pointing out the basic outline of the staggering level of complexity we are dealing with in life. A staggering level of complexity that evolutionists completely ignore and are oblivious to. Indeed, I have seen top evolutionists do their damnedest to obfuscate the requirements for this level of information so as to preserve their beloved “junk DNA”.
#71 bornagain77
Probably I didn’t explain my thought well. I strongly claim that almost all DNA is non-junk and poly-functional, and that it holds hyper-huge information, very much greater than anything it is conceivable RM+NS could ever reach. I have only stated that it is not necessary that the super-hyper-hyper-hyper-huge information needed to characterize a given human body actually be compressed into the DNA code. In other words, provided that the DNA code contains all that is needed to grow and sustain a human life, the “execution” (please excuse me for the computing analogy) of this code will produce the particular raw information in a pretty automatic way (let’s, for simplicity, not take into account epigenetic factors). In other words, the final raw information is very, very huge, but the coding information that allows it to be deterministically produced is “only” huge. Let us consider your example about teleportation:
“A good teleportation machine must be able to put every atomic molecule back in precisely its proper place. Thus the demands are far greater for DNA than for the bridge blueprint”
For your problem, certainly; but we are not considering a generic teleportation system, rather a “living organism” teleportation system, and this puts the problem in another way. If the machinery needed to grow an organism is available at the destination point, all that is needed is:
1. decoding at the starting point all the DNA of the organism to be “dispatched”
2. sending all the DNA code as a digital file (i.e. some billion bits)
3. using at the destination point the received information to create a “clone” of the organism by growing it.
“Indeed, I have seen top evolutionists do their damndest to obfuscate the requirements for this level of information so as to preserve their beloved “Junk DNA”.”
I completely agree with this observation, but to debunk Darwinism it is not at all necessary to think that 10^30 bits are needed. NDE is defeated by the ~10^10 bits needed for DNA.
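To put those two numbers side by side, here is a trivial sketch; the 1 Gbit/s link speed is just an arbitrary illustrative assumption:
# Transmission-time comparison at an assumed 1 Gbit/s link (illustrative only).
link_bps = 1e9
dna_bits = 1e10            # "some billion bits" for the DNA file, per step 2 above
raw_bits = 1e30            # the atom-by-atom figure quoted earlier in the thread
seconds_per_year = 3.15e7
print(f"DNA file: ~{dna_bits/link_bps:.0f} seconds")
print(f"Atom-by-atom description: ~{raw_bits/link_bps/seconds_per_year:.0e} years")
Seconds versus tens of trillions of years: the point stands either way, which is why the ~10^10 figure is all that is needed for the argument.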
ba77: “Somehow, this Star Trek type teleporter must generate and transmit a parts list and blueprint of the object being teleported. This information could be used in reconstructing the object at its final destination
…
For example, just how much information would it take to describe how every molecule of a human body is put together?
In a human body, millimeter accuracy isn’t nearly good enough. A molecule a mere millimeter out of place can mean big trouble in your brain and most other parts of your body.
Then how can a person lose 50% of their brain and still think perfectly well?
“A good teleportation machine must be able to put every atomic molecule back in precisely its proper place. That much information, Braunstein calculated, would require a billion trillion desktop computer hard drives, or a bundle of CD-ROM disks that would take up more space than the moon. The atoms in a human being are the equivalent to the information mass of about a thousand billion billion billion bits. Even with today’s top technology, this means it would take about 30 billion years to transfer this mass of data for one human body from one spot to another. That’s twice the age of the universe. “
———-
It seems like basic physical motion is a teleportation machine. You’re in one place, and then an instant later you’re no longer there and every molecule in your body is reconstructed precisely at another location. And it doesn’t make any difference whether the object being “teleported” by nature is a rock, an airplane or a human being. The exact same mechanism is used, and objects are dematerialized at one location and some time later are found rematerialized perfectly at another location. You imply that this would be the most mind-boggling, incredible feat that could ever be accomplished, and yet basic physical reality accomplishes it all the time. If it can do that on its own, what wouldn’t it be capable of?
Just an observation in case you care to comment.
JT
“It seems like basic physical motion is a teleportation machine. You’re at one place, and then an instant later you’re no longer there and every molecule in your body is reconstructed precisely at another location.”
Obviously this is a joke, isn’t it?
No it wasn’t a joke. Isn’t that what happens?
dmso74 (#69):
“This was at least a 3-step process: one neutral “potentiating” mutation, one weakly beneficial mutation and one strongly beneficial mutation (that may have been a multiple mutation).”
Where did you get that idea from? I think that in the paper the mutation(s) have not been characterized, but the indirect evidence seems to point to a double mutation, the first one neutral, the second functional. I paste here the abstract of the work:
“The role of historical contingency in evolution has been much debated, but rarely tested. Twelve initially identical populations of Escherichia coli were founded in 1988 to investigate this issue. They have since evolved in a glucose-limited medium that also contains citrate, which E. coli cannot use as a carbon source under oxic conditions. No population evolved the capacity to exploit citrate for >30,000 generations, although each population tested billions of mutations. A citrate-using (Cit+) variant finally evolved in one population by 31,500 generations, causing an increase in population size and diversity. The long-delayed and unique evolution of this function might indicate the involvement of some extremely rare mutation. Alternately, it may involve an ordinary mutation, but one whose physical occurrence or phenotypic expression is contingent on prior mutations in that population. We tested these hypotheses in experiments that “replayed” evolution from different points in that population’s history. We observed no Cit+ mutants among 8.4 x 10^12 ancestral cells, nor among 9 x 10^12 cells from 60 clones sampled in the first 15,000 generations. However, we observed a significantly greater tendency for later clones to evolve Cit+, indicating that some potentiating mutation arose by 20,000 generations. This potentiating change increased the mutation rate to Cit+ but did not cause generalized hypermutability. Thus, the evolution of this phenotype was contingent on the particular history of that population. More generally, we suggest that historical contingency is especially important when it facilitates the evolution of key innovations that are not easily evolved by gradual, cumulative selection.”
So, why are you speaking of at least three mutations?
JT– No it wasn’t a joke. Isn’t that what happens?
Are you saying teleportation happens?
tribune7:
I believe that’s what the assertion was. If there’s an obvious reason why it’s not conceptually the same thing, it’s not apparent to me. Evidently others think it’s too stupid to even merit a comment.
It is a pity that dmso74 (69) isn’t around to defend him/herself (see DaveScot 70). However, in fairness to DaveScot, dmso74 did make a rather egregious error.
It is conceded by all sides that beneficial mutations are only a minor problem, solvable by most bacteria in one or at most a few culture plates. Two sequential beneficial steps could happen in two, or at most a few, culture plates.
The problem comes when there are neutral steps in between. That is where the difficulties begin, where improbabilities rapidly swamp the ability of the organism to evolve.
dmso74 wrote that this was “at least a 3-step process: one neutral “potentiating” mutation, one weakly beneficial mutation and one strongly beneficial mutation.”
If dmso74 is right, there is only one known neutral step, so this is roughly equivalent to a two-step problem in probability. We will await the determination of whether it was a 3-step process, or whether there were more (or fewer) steps involved, and whether any further steps were beneficial or neutral, but dmso74 is in error when he says that Behe “clearly states in his book that 2 steps is the ‘edge’.”
What Behe said is that 2 neutral steps is the edge.
It is tempting to say that it is surprising to me that dmso74 would say such a thing. But that would not be honest. When one’s goal is more to protect one’s own theories and attack competing theories than to understand those competing theories, misunderstanding of those theories is to be expected. It is easier to create strawmen than to admit that one’s opponent may have a point, when one views him/her as an opponent. Dmso74 had already given evidence that he/she had such difficulties.
There is a further prediction by Behe, perhaps soft, but nevertheless a prediction. When the dust has settled, we will probably see that some transport protein that used to be able to transport some other substrate across the cell membrane now either has had its specificity changed so that it allows citrate across also, or has been switched over to citrate completely. In fact, that protein is probably a passive transport protein, which would be detrimental to the organism if it lived in an environment lacking in citrate, as citrate would then leak out instead of in. Thus we probably have something analogous to trench warfare here. That is, machinery is being broken rather than fixed, and it is only a special environment that makes the breaking advantageous. Another analogy would be blind cave fish, which only outcompete normal fish in a cave.
It would be a blow to ID if the gene were duplicated first, then one of the genes were to mutate to allow active transport of citrate, complete with some kind of promoter molecule. But I’m not holding my breath on that one.
JT, teleportation does not happen.
I’ll grant though that the theoretical foundation is better supported than, say, Darwinian evolution. hee hee.
The whole point of Behe’s new book was to try and find experimental evidence for exactly what Darwinian mechanisms are capable of. On the other hand we have speculative indirect stepwise pathway scenarios but so far the OBSERVED “edge of evolution” doesn’t allow these models to be feasible. But this “edge” is an estimate based upon a limited set of data which in turn “might” mean the estimated “edge” is far less than the maximum capable by Darwinian mechanisms. If Darwinists would bother to do further experiments they may see if this “edge” could in reality be extended. Then if this new derived “edge” is compatible with these models then so be it (though I’ll add the caveat that the “edge” might be better for Darwinism only in limited scenarios). In the meantime they’re just assuming the “edge” allows for it. Even worse, unless I failed to notice the news, the very first detailed, testable (and potentially falsifiable) model for the flagellum is yet to be fully completed (I realize there are people working on producing one) so a major challenge of Behe’s first book is yet to be refuted, never mind the new book.
Darwinists should stop pretending they have the current strongest explanation. I’ll fully acknowledge they’re currently formulating a response in the form of continued research, new models, and such but the mere fact is that they’re missing all the major parts to their explanation. This might change in the future, but it may not.
Or at least the situation hasn’t changed based upon this recent conversation where I asked for the functional intermediates in the indirect stepwise pathway to be named…and was never answered. Comment #203 summarizes that discussion, and should be read at full length, but I thought this was the kicker:
gpuccio was also gracious enough to assume the T3SS as a starting point. Dave pointed this out long ago:
dmso74 is apparently parroting the chosen line of the Darwinian community:
What a nice PR strategy. Assert that their opponents are making certain claims that they are not, then blow away those fake claims. AKA a strawman. Yet we’re never given the space to defend ourselves against such outrageous tactics.
For example, Darwinists were previously accusing Behe of ignoring pyrimethamine resistance in malaria as an example of cumulative selection. In fact, Behe doesn’t deny the existence of cumulative selection, nor does he omit a mention of pyrimethamine as an example. Behe actually spends more than a full page discussing pyrimethamine resistance. Here is a small portion of what Behe wrote about it in The Edge of Evolution.
Explaining how he covered cumulative selection, Behe writes in his Amazon blog:
So the “ignoring cumulative selection in an indirect pathway” argument is a complete strawman. Behe’s position is that the creative power of cumulative selection is extremely limited, is not capable of traversing necessary pathways that are potentially tens and hundreds of steps long, and he backs up this position with real world examples of astronomical populations getting very limited results with it. This is something the critics don’t really address.
The fact that the opponents of Behe’s book find the need to repeatedly lie and misrepresent the book (Carroll and Miller) or avoid the subject matter altogether (Dawkins) shows exactly how good Behe’s book is. In spite of having more reproductive events every year than mammals have had in their entire existence, malaria has not evolved the ability to reproduce below 68 degrees. Nick Matzke’s explanation for this was that “in cold regions all the mosquitoes (and all other flying insects) die when the temperature hits freezing.” Think about it. Malaria cannot reproduce below 68 degrees Fahrenheit. Water freezes at 32 degrees Fahrenheit.
To illustrate how out to lunch Musgrave and Smith are on Behe’s Edge of Evolution: on page 143 Behe writes that the estimated number of organisms needed to create one new protein-to-protein binding site is 10^20. Further down the page, Behe notes that the population size of HIV is, surprise, within that range. So according to Behe’s own thesis, HIV should be able to evolve a new protein-to-protein binding site. Along come Smith and Musgrave, who point to a mutation clearly within the scope of Behe’s thesis and then declare victory, when in fact they have not contradicted Behe at all.
How about an actual example where a more complex organism is less fit than its simpler counterpart? It depends on the complexity being looked at, does it not? Let’s take a look at TO’s example of people with “monkey tails”. I have no problem calling that “complexity” in a generalized sense: not CSI, but a continuation of a process beyond its normal termination. I’m not sure what positive effects they have. From what I remember they’re not articulated and cannot serve as an additional limb. But I’m pretty sure they’d act as the opposite of a peacock’s feathers (which, BTW, have their own issues), dramatically reducing those individuals’ chances of reproducing. Ditto for additional or non-functioning nipples and other examples that turn off the opposite sex.
The situation is complicated enough that there can’t be blanket statements. There can be increments in complexity where the tradeoff is more positive than negative. But that’s why ID doesn’t make blanket statements…there is a complexity threshold. And that’s why Behe is trying to find an “edge of evolution”. While an estimate has been arrived at, I don’t think that “edge” has been found yet. Personally I think it might be greater than where some ID proponents envision it to be. Perhaps the “true edge” is around 6 steps in an indirect stepwise pathway. But I could be wrong.
The perspective of ARN:
The Edge of Evolution is an estimate, and it was derived from the limited positive evidence for Darwinian processes that we do possess. The estimate would of course be adjusted when new evidence comes into play, or abandoned altogether if there is positive evidence that Darwinian processes are capable of large-scale constructive positive evolution (or at least put in another category if it turns out to be ID-based [foresighted mechanisms]). The bulk of the best examples of Darwinian evolution are destructive modifications with a net positive effect under limited or temporary conditions (Behe terms this trench warfare): passive leaky pores (a foreign protein degrading the integrity of HIV’s membrane) and a leaky digestive system (P. falciparum self-destructs when its system cannot properly dispose of the toxins it ingests, so a leak apparently helps). I personally believe that, given a system intelligently constructed in a modular fashion (a system designed for self-modification via the influence of external triggers), Darwinian processes may be capable of more than this, but we do not have positive evidence for that concept yet. In any case that would be foresighted, non-Darwinian evolution, and even if there are foresighted mechanisms for macroevolution they might be limited in scope.
We’re talking basic engineering here. When the code is pleiotropic you have to have multiple concurrent changes that work together to produce a functional result. Hundreds of simple changes adding up over deep time to produce macroevolution are not realistic. And, yes, I’m aware that the modular design of the code can allow for SOME large-scale changes, especially noticeable with plants, but this is not uniform. Nor is it usually coherent (cows with extra legs hanging from their bodies, humans with extra mammary glands or extensions of their vertebrae [tails], flies with eyes all over). Nor non-destructive, for that matter. And whence came the modularity? And we’re looking for CONSTRUCTIVE, BENEFICIAL mutations that produce macroevolution. Darwinists cannot even posit a complete hypothetical pathway!
Previous discussions about the EoE:
Ken Miller, the honest Darwinist
Do the facts speak for themselves
ERV’s challenge to Michael Behe
Darwinist Predictions
P.falciparum – No Black Swan Observed
PBS Airs False “Facts”
The main point remains: at this time Darwinism does not have a mechanism observed to function as advertised. Should we continue research on proposed engines of variation? Definitely. When Edge of Evolution was released I believe I said that would make a good follow-up (considering each proposed mechanism one by one, and of course their cumulative effect).
On that topic:
http://www.sciencedaily.com/re.....120701.htm
Paul Giem
You anticipated what I was going to write about next: the selectivity of the cell wall. In vitro the cell wall can be far less selective about what is allowed through, since all it sees are the ingredients in the agar, which are strictly controlled and few, and it doesn’t have to fend off or compete with any other living thing except its own kind, since the cultures contain nothing but E. coli. In vivo, the mutation(s) allowing citrate transport into the cell could allow a whole host of nasty things that aren’t citrate to enter the cell as well.
It’s still a real yawner though. Anyone who knows anything about long-term clonal tissue culture knows that microorganisms are rather proficient at adapting to different nutrients in the agar recipe. Twenty years is remarkable only in that it wasn’t 20 days, weeks, or months instead. There’s probably some toxic molecule in the wild that resembles citrate and would kill the citrate eaters, so the bug’s digestive enzyme repertoire was rather well protected against letting it enter the cell. Just a guess, but hey, that’s what Darwinian evolution is all about: guesswork. I just don’t go so far as presenting my guesses as fact, demanding that they be taught as facts to high schoolers, and asking the courts to protect my guesswork from criticism in public schools.
junkyard
From your response I guess you don’t really know what a compiler is or does. Compilers don’t crash when they encounter errors; they gracefully abort the compilation and describe the location and type of error encountered. Errors come in various levels of criticality: some are fatal and halt compilation, while others are just warnings and compilation continues. Better compilers will drop you straight into the editing environment at the point where a fatal error was detected so you can fix it that much faster. It’s been a while since I’ve done any programming, and there are probably bells and whistles now that I haven’t seen. I thought color-coded source files were the best thing since sliced bread the first time I used them: they make it really easy to spot syntax errors as you’re typing, because the colors don’t look right, so you don’t even have to waste time letting the compiler find simple syntax errors.
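To make the warning-versus-error distinction concrete, here’s a trivial C fragment, purely for illustration (invented for this comment, not taken from any real project). With warnings enabled (e.g. -Wextra on gcc) the signed/unsigned comparison typically draws only a warning and you still get a working executable; referencing an undeclared name, by contrast, is a hard error and the compiler refuses to produce one.

#include <stdio.h>

int main(void)
{
    unsigned int count = 10;
    int i = -1;

    /* Warning-level issue: comparing signed and unsigned. The compiler
       flags it (with warnings enabled) but still builds the program.
       Because i is converted to a huge unsigned value, the branch is
       NOT taken, which is exactly why the warning exists. */
    if (i < count)
        printf("i is less than count\n");

    /* A fatal error, by contrast, would be something like
       printf("%d\n", y); with 'y' never declared: the compiler reports
       the location and kind of error and emits no executable. */

    return 0;
}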
Patrick (#82):
These are the key findings in the two respective experiments:
In the first, the trigger loop is moving around, and as a result the correct NTP lines up with it more quickly than an incorrect one does. In the second experiment, the trigger-loop “door” tends to stay open longer with an incorrect NTP, allowing it to diffuse away. What strikes me is the probabilistic, imprecise nature of these control mechanisms: the capture of an incorrect NTP is not strictly prohibited, merely kept below a certain threshold.
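Just to illustrate that point to myself, here’s a toy kinetic-selection sketch (my own construction; the rate constants are invented and are not from either paper). A candidate NTP either gets locked in by trigger-loop closure or diffuses away; the correct NTP is assumed to close much faster, the incorrect one to linger with the “door” open. The result is exactly the kind of leaky filter described above: the error rate is pushed down, never to zero.

/* Toy model of probabilistic NTP selection. All rates are made up. */
#include <stdio.h>
#include <stdlib.h>

/* Chance that a candidate is incorporated before it diffuses away,
   given competing first-order rates for closure and diffusion. */
static double p_incorporate(double k_close, double k_diffuse)
{
    return k_close / (k_close + k_diffuse);
}

int main(void)
{
    const double k_close_correct   = 50.0; /* closes quickly on correct NTP  */
    const double k_close_incorrect = 0.5;  /* "door" lingers open on wrong   */
    const double k_diffuse         = 10.0; /* rate of diffusing back out     */
    const double frac_incorrect    = 0.75; /* 3 of 4 arriving NTPs are wrong */
    const int    trials            = 1000000;

    long correct_in = 0, incorrect_in = 0;
    srand(42);

    for (int i = 0; i < trials; i++) {
        int wrong = (rand() / (double)RAND_MAX) < frac_incorrect;
        double p = wrong ? p_incorporate(k_close_incorrect, k_diffuse)
                         : p_incorporate(k_close_correct, k_diffuse);
        if ((rand() / (double)RAND_MAX) < p) {
            if (wrong) incorrect_in++;
            else       correct_in++;
        }
    }

    printf("incorporated: %ld correct, %ld incorrect (error rate %.3f)\n",
           correct_in, incorrect_in,
           (double)incorrect_in / (double)(correct_in + incorrect_in));
    return 0;
}

With these made-up numbers the wrong nucleotide still slips through a noticeable fraction of the time; the mechanism only biases the competition, it doesn’t forbid the mistake, which is the point.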
DaveScot (#84)
My only point was that a high-level compiled language is a sophisticated end-user tool designed to facilitate the programming process for HUMANS specifically. There is all sorts of hand-holding by the compiler to guarantee the human doesn’t make inadvertent errors in the programming process, doing things he didn’t actually intend to do. None of this has anything to do with what will run, and run efficiently, on a computer.
Really, the more skilled a programmer is the more he moves away from highly restrictive programming environments. C, as you know, allows you to bypass all of its protection mechanisms, and the more you know what you’re doing, the more a compiler’s constant interference becomes a major hindrance.
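For what it’s worth, here’s the sort of thing I mean, as a trivial sketch of my own (nothing language-lawyerly intended): C will happily let you poke at the raw bytes of one type as another, or discard const, where a more restrictive environment would simply refuse.

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Reinterpret a float's raw bits as an integer. memcpy is the
       well-defined way to do it; C doesn't stop you. Assumes the usual
       32-bit float and unsigned int. */
    float f = 1.0f;
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);
    printf("1.0f is stored as 0x%08X\n", bits); /* 0x3F800000 on IEEE-754 */

    /* Casting away const is another protection C lets you throw off.
       (Actually writing through p would be undefined; the cast itself
       compiles without complaint.) */
    const int locked = 5;
    int *p = (int *)&locked;
    (void)p;

    return 0;
}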
With that many generations in Lenski’s lab, I’m surprised that I haven’t heard whether the work has confirmed “Muller’s Ratchet” (an important component of genetic entropy theory).
I just posted something on HJ Muller, the pioneer of “Muller’s Ratchet”.
junkyard
Really, the more skilled a programmer is the more he moves away from highly restrictive programming environments.
You’re confusing the environment with the language; in a modern programming environment you can use the same tools for a wide variety of languages. I’m quite proficient in assembly language for a number of processors, including the 80x86 family and a number of embedded processors, and there’s nothing less restrictive than assembler. C is okay and I’ve used it a lot, but there’s no reason not to use C++, since you can do anything with it that you can with C and it gives you options C doesn’t provide. Those are really the only three languages I’ve used much, but literally millions and millions of lines with them, beginning in the 1970s.
P.S. I misread your first comment about compilers. You didn’t make the error I thought you did. Sorry about that. Still, it would be a good test, as there are no blind spots. You really don’t know how much of the code in any arbitrary executable is being executed in any arbitrary functional test, nor do you have any way of knowing how much of the executable is critical code and how much is non-critical data. That’s why the tool I first mentioned was developed. However, if you change a bit of the source code (anything other than a comment) you DO know the compiler will traverse the change, and I’d still bet dollars to donuts that random bit flipping in the source will cause a compiler to spit out an error or warning more often than not. I don’t believe the test you described can be used to draw any conclusions or make any points, because there are just too many unknowns in it.
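If anyone wants to actually run that bet, here’s a rough harness of the kind I have in mind, sketched off the top of my head (file names and the compile command are placeholders, and the bookkeeping is left out): copy a source file, flip one random bit in the copy, invoke the compiler on it, and see whether the compiler objects. With -Werror even warnings show up as a nonzero exit status.

/* Flip one random bit in a copy of a C source file, then try to compile
   the copy. Placeholders throughout; adjust paths and compiler to taste. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const char *src  = "victim.c";      /* hypothetical file under test */
    const char *copy = "victim_mut.c";  /* mutated copy                 */

    FILE *in = fopen(src, "rb");
    if (!in) { perror(src); return 1; }
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    rewind(in);

    unsigned char *buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, in) != (size_t)size) {
        fclose(in);
        return 1;
    }
    fclose(in);

    /* pick a random byte and bit and flip it (good enough for a sketch) */
    srand((unsigned)time(NULL));
    long byte = rand() % size;
    int  bit  = rand() % 8;
    buf[byte] ^= (unsigned char)(1u << bit);

    FILE *out = fopen(copy, "wb");
    if (!out) { perror(copy); free(buf); return 1; }
    fwrite(buf, 1, (size_t)size, out);
    fclose(out);
    free(buf);

    /* nonzero status means the compiler complained (errors, or warnings
       promoted to errors by -Werror) */
    int status = system("cc -Wall -Werror -c victim_mut.c -o victim_mut.o");
    printf("flipped bit %d of byte %ld; compile status: %d\n",
           bit, byte, status);
    return 0;
}

Run it in a loop and tally how often the status is nonzero versus how often the mutated file still compiles cleanly.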
Anyone interested in enlightening the folks over at daily tech?
http://www.dailytech.com/Evolu.....e12045.htm
Right now, they are mostly interested in arguing about Spaghetti monsters and Biblical calculations of Pi, but there may be a few sensible ones who will listen.
DaveScot (#88)
I tested Solitaire using the exact same procedure as before: 20 random bit changes, each followed by testing. Solitaire is 34,064 bytes long, whereas my own application was over 500K, so obviously you’re bound to hit more errors in a program less than a tenth the size. That makes it less analogous to biology, where the size and complexity would be much greater. The result: 12 good, 8 bad. The bad outcomes were generally program crashes.
This time I also used a disassembler and compared the altered assembly to the original. For good tests where there were actual code changes, ‘-’ indicates the original code and ‘+’ indicates the code that replaced it. If it says ‘no change’ below, that means there was no change to the actual code. However, the distinction between code and data is completely artificial. The data in a tiny application like this will generally be resources, i.e. the data describing the layout of dialogs, the styles of buttons, and the other controls they contain. If that data is mangled, buttons are liable to be misplaced or not function as intended, dialogs will be the wrong size, etc. So ‘no change’ below just means no assembly source code changes (and I wasn’t inclined this time to go hunting down what the resource changes were).
As far as the test procedure goes, Solitaire has only one menu, with the options ‘deal’, ‘undo’, ‘options’ and ‘deck’, plus the help menu. So I was pretty much running through all the functionality in every test.
What does it mean when there are code changes but no malfunction is apparent? It often means the changed code isn’t being hit. As you pointed out, only something like a code-traversal analyzer will guarantee that all code is being traversed. But the point is that it is extremely difficult to identify any malfunction after a random code change, such that a concerted effort with sophisticated tools is necessary to identify it. Of course, actually changing, say, a less-than sign to a greater-than sign in code that you do end up executing would probably have a negative impact. But you’re not likely to hit it, even in a tiny program like this; in a very large program it’s much less likely. At least that’s my conclusion. Then there are of course code changes you can actually hit where it wouldn’t make any difference (e.g. increasing the size of a buffer).
The following were the results in order (G-good; B-bad):
Here’s an incomplete instruction list:
CALL //call subroutine
JE //jump if equal
JG //jump if greater
DEC //decrement
PUSH //push value on the stack
MOV //move value into a register
CMP //compare values
———————–
1. G: -CALL sol.01002188 → +DB E8, +DB 86, +DB DD, +DB FF, +DB 7F
2. G: no change
3. B
4. G: +JE SHORT sol.01002087, +ADD BYTE PTR DS:[ECX],AL
5. G: no change
6. B
7. G: -JG sol.01001B9A → +DB 0F, +DB 8F, +DD sol.010001F8
8. B* //”Deal” becomes “De’l”
9. G: -PUSH sol.01007010 → +PUSH sol.01006010
10. G: no change
11. G: -DEC ESI → +DEC EDI
12. B
13. B
14. G: -MOV DWORD PTR DS:[10071EC],EAX → +MOV DWORD PTR DS:[10071EE],EAX
15. G: no change
16. G: no change
17. B
18. G: -CMP EAX,6 → +CMP EAX,7
19. B
20. B
One other comment:
‘no change’ could also mean changes in a “don’t care” region of a word containing an instruction. If I’m not mistaken, there are often “don’t care” regions there to optimize logic.
So, IOW, it should be clearly evident why average SAT scores, the catfish populations in man-made lakes, and the popularity of Eastern European folk music are all determined by the exact same fundamental force.
Here is my scenario of what happened:
A mutation occurred at around 20,000 generations, and the citrate-eating ability appeared when one of the bacteria bearing that mutation picked up a different mutation at around 31,500 generations. IMO the first mutation was very unusual or rare, because (1) it apparently took about nine years to occur (44,000 generations in 20 years is about 2,200 generations per year) and (2) it apparently appeared in only one of the twelve lines of bacteria, even though all twelve lines were descended from a single individual. I think the second mutation is a fairly common one, because it was often expressed again in populations restarted from the frozen samples of generation 20,000 or later. The reason this second mutation took so long to be expressed the first time (about 11,500 generations, from the 20,000th to the 31,500th, or roughly 5 years) is that bacteria carrying the preliminary first mutation were scarce, since that first mutation conferred no survival advantage on its own. Once the preliminary first mutation has occurred, the appearance of the citrate-eating ability is just a matter of time if the second mutation is a common one.
Also, I am disturbed by numerous claims that the results of this study refute the ideas of Michael Behe; IMO that is not the case.
http://creationontheweb.com/content/view/5827
Just found this thread. I only read the first few comments and then searched for these keywords:
– conjugation
– horizontal gene transfer
– gene duplication
– nondisjunction
– polyploidy
And couldn’t find a single comment with these readily observable genetic phenomena, all of which increase the amount of DNA in an organism and most of which are survived just fine.
I really can’t believe there are IDers out there who seriously propose that some magic man added chromosomes so that the king crab gets 208, then took some away so the fruit fly gets only 8, then added some back so humans get 46, then added still more so the camel gets 60. Are you guys saying that an increase in DNA has never been seen? Genes and chromosomes duplicate all the time and then acquire mutations that change their information. Come on, that’s genetics 101, not rocket science.
Likewise, what counts as “beneficial” or “good”? In the lab or in the wild? Under stress or under perfect conditions? Hot or cold? Island or mainland? Freshwater or saltwater? In yeast, 60% of all knocked-out genes show no effect in the lab; in mice it’s still 30% of genes. In the genome age we know that many if not most gene knock-outs (depending on the organism) have comparatively small, incremental effects, many of which are buffered out. It’s called degeneracy (no, not redundancy, and not the degeneracy of today’s youth, but degenerate as in the degenerate code). So instead of inventing nonsensical nomenclature for mutations, IDers should try to explain degeneracy: if there are intelligent designers out there changing things around in the DNA every once in a while, why is there degeneracy? That would be totally unnecessary and impractical. The only reason I can see to put degeneracy into the genetic system would be to trick someone into believing the system actually evolved, but I’m sure you can come up with better reasons why degeneracy was chosen by the intelligent designer.
Evolution predicts degeneracy. How does ID predict degeneracy?
Oh and I forgot one other thing: species. What do IDers mean by species? Scientists can’t really agree on what a species is, can IDers?
One common, but by no means uncontested, species definition refers to reproductive isolation. According to this definition, speciation has been observed a few times, even in animals (Drosophila comes to mind) but also in plants (grass species on islands). Mind you, certain breeds of dogs would qualify as separate species in this sense, since they would not be able to reproduce with each other naturally (Mastiff and Chihuahua, for example), which is exactly why the definition is contested.
Another definition looks at the fertility or survival of hybrid offspring. However, a mule is a largely sterile hybrid, so according to this definition evolution resulting in horses and donkeys would be absolutely impossible for IDers, even though they’re so similar.
So maybe a species is something that somehow looks different from other species? There are many species that nobody denies are very different species (even different orders!) but that look very similar, to the extent that even specialists can’t easily tell them apart. So that makes no sense either. What to do?
I have a very practical solution: species in the sense of the biblical “kinds”! IDers could use it to refer to higher-up clades such as kingdom, phylum, class, order, or family.
I suggest IDers use some of the latter to mean species (kinds). This would be very practical, because then they can always retreat to the next higher level once evolution at one level has been shown experimentally 🙂
“No, no, it’s really kind/family/order/class/etc. what I meant with ‘species’, really!” 🙂
Based upon his comments, brembs must be new to this debate. His statements about what ID proponents believe are truly off the wall, and the topics he brings up have been addressed many times over. Go back to lurking, brembs. Read some more, and once you have something interesting to say, perhaps come back.