Uncommon Descent Serving The Intelligent Design Community

Ken Miller may face more embarrassing facts, Behe’s DBB vindicated


[slight correction and update follows]

[update: I had previously taken a secondhand quotation of Ken Miller from Paul Nesselroade here (2003 Wedge Update). Below I give a quote by Miller from a better source document here: Life’s Grand Design.

However, the more accurate quotation still demonstrates that Miller made an embarrassing assessment of the architecture of DNA. I apologize to him for the inaccuracy; it was not, however, a material inaccuracy, as can be seen in the quotation from his website.

The greater burden is on Miller to retract his mistaken ideas from the public sphere. Further, he owes an apology to many in the ID movement for the misrepresentations and errors of fact he has promoted over the years, continues to promote, and has even advanced under oath.]

the designer made serious errors, wasting millions of bases of DNA on a blueprint full of junk and scribbles.

Ken Miller, 1994

Empirical evidence 13 years later is putting some egg on Miller’s face. [Remember, Ken Miller is the guy who made misrepresentations under oath in the Dover trial.]

Encyclopedia Of DNA: New Findings Challenge Established Views On Human Genome

The new data indicate the genome contains very little unused sequences and, in fact, is a complex, interwoven network. In this network, genes are just one of many types of DNA sequences that have a functional impact. “Our perspective of transcription and genes may have to evolve,” the researchers state in their Nature paper, noting the network model of the genome “poses some interesting mechanistic questions” that have yet to be answered.

Other surprises in the ENCODE data have major implications for our understanding of the evolution of genomes, particularly mammalian genomes. Until recently, researchers had thought that most of the DNA sequences important for biological function would be in areas of the genome most subject to evolutionary constraint — that is, most likely to be conserved as species evolve. However, the ENCODE effort found about half of functional elements in the human genome do not appear to have been obviously constrained during evolution, at least when examined by current methods used by computational biologists.

According to ENCODE researchers, this lack of evolutionary constraint may indicate that many species’ genomes contain a pool of functional elements, including RNA transcripts, that provide no specific benefits in terms of survival or reproduction.

Behe, 10 years ago in Darwin’s Black Box (DBB), suggested that junk DNA may not be junk after all. Behe has been vindicated by the facts; Miller has been refuted.

Finally, there is at least one other interesting fact in this article: “the ENCODE effort found about half of functional elements in the human genome do not appear to have been obviously constrained during evolution”. This means these designs are NOT attributable to natural selection. Features in the genome have been shown not to be likely products of “slight successive modifications”. How I love science!

Comments
[...] Ken Miller [...] Has the progress of science vindicated Mike Behe or Ken Miller? « Wintery Knight
[...] Ken Miller, 1994 [...] Nature “writes back” to Behe Eight Years Later | Uncommon Descent
Does anyone know where in Darwin's Black Box Behe suggests "junk DNA" functionality (what pages)? Thanks! bioinformaticist
"...harder to explain with MET than with ID", I meant to write. (I wish this forum had a way to edit a post.) mike1962
DaveScot, "It’s hard to reconcile a load of junk in a brand spanking new design." Maybe, but not impossible. Maybe the length of a particular sequence matters, but not the particular nucleotides. Like packaging "peanuts." Or maybe the values of a particular segment are something like informational dross produced as a byproduct of cellular division because of some otherwise logistical problem in sloughing it off during the duplication. Until we fully understand the entire process, we probably won't understand what the junk is. One thing seems obvious though- conserved sequences of junk between species with hundreds of millions of years of evolution from their alledged divergence is harder to explain with MET and ID. mike1962
Can we measure the information capacity of that portion of the genome which is actually used? We can at least estimate it. In the case of the human genome it is about 2 per cent… Richard Dawkins
OUCH! scordova
Consider the term "junk DNA." ... Design encourages scientists to look for function where evolution discourages it. -- William Dembski, “Intelligent Science and Design,” First Things, Vol. 86:21-27 (October 1998) Genomes are littered with nonfunctional pseudogenes, faulty duplicates of functional genes that do nothing, while their functional cousins (the word doesn't even need scare quotes) get on with their business in a different part of the same genome. And there's lots more DNA that doesn't even deserve the name pseudogene. It, too, is derived by duplication, but not duplication of functional genes. It consists of multiple copies of junk, 'tandem repeats', and other nonsense which may be useful for forensic detectives but which doesn't seem to be used in the body itself... Can we measure the information capacity of that portion of the genome which is actually used? We can at least estimate it. In the case of the human genome it is about 2 per cent... -- Richard Dawkins, "The Information Challenge," The Skeptic, vol. 18, no. 4 (December 1998) j
I have said in the past that the existence of junk does not imply the artifact was un-designed. I maintain that position. I have owned cars that had broken, non-functioning parts. I worked on spaceships that were expected to have (over time, due to extreme conditions in space) various systems failures in the course of their missions. The functionality of the "junk DNA" was suspected for at least 2 reasons: 1. The creationists needed an explanation for the appearance of common descent (formally speaking this was not an ID reason at all) 2. ID proponents who accepted universal common ancestry sensed the patterns were well designed and non-random (beginning with Michael Denton's "Biochemical Echo of Typology" and then various other studies thereafter). What proceeded from ID theory was the intuition to presume design, stated at one point by Bill Dembski:
But design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term "junk DNA." Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as "junk" merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how "non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development." Design encourages scientists to look for function where evolution discourages it. (William Dembski, "Intelligent Science and Design," First Things, Vol. 86:21-27 (October 1998))
I personally was interested in the "broken part" interpretation of design. Because if parts are broken, this suggests they can be somehow "fixed" and this would have medical implications.... The ID proponents in general had less to lose with the junk-being-junk interpretation, but the creationists had quite a bit at stake in the controversy. On the other hand, if large amounts of "junk DNA" were not junk it would make Haldane's dilemma very acute and it would also destroy large amounts of Neutral Theory and the Molecular Clock Hypothesis. Further, it reveals that biology is orders of magnitude more complex than what we even thought only 10 years ago. These polyconstrained architectures are complex beyond imagination. scordova
bfast: "He held that ID would fall if junk DNA proved to be junk. This is surely the PRIMARY prediction of ID." I completely disagree. Life could have been designed billions of years ago with no intervention since then and a lot of junk could have accumulated since that time even as the planned phylogenesis was unfolding. If you drive your car long enough without cleaning the windshield you get a lot of bugs on it but that doesn't mean the car wasn't designed or no longer able to do what it was designed to do. Where junk DNA doesn't play well is in special creation in the recent past. It's hard to reconcile a load of junk in a brand spanking new design. DaveScot
Regarding the 12 CODES, I had an interesting exchange with PZ Myers here. PZ complains about poly-constrained codes:
Where ever did you get these ideas? this elaborate fiction you've erected is otherwise nonsensical.
Yet this supposed "fiction" is turning out to be fact. PZ couldn't believe Darwinism would make such structures; therefore, they don't exist. But let me quote a greater scientist than PZ, renowned Cornell geneticist John Sanford, from the landmark ID book Genetic Entropy:
There is abundant evidence that most DNA sequences are poly-functional, and therefore are poly-constrained. This fact has been extensively demonstrated by Trifonov (1989). For example, most human coding sequences encode for two different RNAs, read in opposite directions (i.e. both DNA strands are transcribed; Yelin et al., 2003). Some sequences encode for different proteins depending on where translation is initiated and where the reading frame begins (i.e. read-through proteins). Some sequences encode for different proteins based upon alternate mRNA splicing. Some sequences serve simultaneously for protein-encoding and also serve as internal transcriptional promoters. Some sequences encode for both a protein coding, and a protein-binding region. Alu elements and origins-of-replication can be found within functional promoters and within exons. Basically all DNA sequences are constrained by isochore requirements (regional GC content), "word" content (species-specific profiles of di-, tri-, and tetra-nucleotide frequencies), and nucleosome binding sites (i.e. all DNA must condense). Selective condensation is clearly implicated in gene regulation, and selective nucleosome binding is controlled by specific DNA sequence patterns - which must permeate the entire genome. Lastly, probably all sequences do what they do, even as they also affect general spacing and DNA-folding/architecture - which is clearly sequence dependent. To explain the incredible amount of information which must somehow be packed into the genome (given the extreme complexity of life), we really have to assume that there are even higher levels of organization and information encrypted within the genome. For example, there is another whole level of organization at the epigenetic level (Gibbs 2003). There also appears to be extensive sequence dependent three-dimensional organization within chromosomes and the whole nucleus (Manuelides, 1990; Gardiner, 1995; Flam, 1994). Trifonov (1989), has shown that probably all DNA sequences in the genome encrypt multiple "codes" (up to 12 codes). John Sanford
scordova
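Two of the genome-wide constraints Sanford lists above, regional GC content (isochores) and "word" content (di-, tri-, and tetra-nucleotide frequencies), are straightforward to compute for any sequence. Below is a minimal illustrative sketch in Python; the example sequence is invented, and the code is offered only to make the idea of overlapping, sequence-wide constraints concrete.

from collections import Counter

def gc_content(seq):
    # Fraction of bases that are G or C (the "isochore requirement" constraint).
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def word_profile(seq, k):
    # Frequencies of overlapping k-mers ("word" content: k = 2, 3, 4, ...).
    seq = seq.upper()
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Invented example sequence, purely for illustration.
s = "ATGGCGTACGTTAGCCGTACGGATCCGTA"
print(round(gc_content(s), 2))       # regional GC content
print(word_profile(s, 2))            # di-nucleotide "word" profile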
This is the death blow for Darwinists. The DNA will continue to astound researchers as more of the complexity is revealed. Dr. Sanford quotes a preliminary study that indicates the encryption of the DNA is up to 12 CODES thick. To quote the Psalmist, "I will praise you for I am fearfully and wonderfully made!" bornagain77
The original quote had some errors, but it has now been repaired. Apologies to Ken Miller for the earlier misquotation. Unfortunately for him, the corrected quotation even more clearly demonstrates the gravity of his error. scordova
Ed Darrell, The full essay was here: Life's Grand Design. There were two versions, one for PBS and one for Technology Review. You probably got the PBS version. Nesselroade's citation, it turns out, was correct. Salvador scordova
Thanks for pointing out my error. It was secondhand from Paul Nesselroade. Cornelius Hunter gives the correct reference via Behe's DBB, pages 225-226. scordova
Mr. Cordova, in the article I found, under the title you cite, by Kenneth Miller, he says something quite different from how you quote him. May I ask where you got the quote? edarrell
[...] 1. Ken Miller is the guy who has taken various bruisings from scientific evidence and continues his misrepresentations and story telling as he did under oath in the Dover trial. [See: Ken Miller may face more embarrassing facts, Behe’s DBB vindicated and Ken Miller caught making factually incorrect statements under oath] [...] Ken Miller, the honest Darwinist | Uncommon Descent
Trying to check out Miller's statement, as you cite it in the opening of the thread. What is "Life's Grand Design?" A book? An article? I'm not finding it at Amazon, or anywhere else. Help! edarrell
[...] [NOTES] 1. These recent developments are in addition to the egg on Ken Miller’s face and laurels for Michael Behe. [See: Ken Miller may face more embarrassing facts, Behe’s DBB vindicated] [...] Zuck is out of luck, marsupial findings vindicate Behe, Denton, Hoyle | Uncommon Descent
We must bear in mind that the most modern design paradigms in engineering today do not deal with deeply redundant, self-healing, self-assembling architectures that operate with high levels of internal quantum noise. At the quantum level there is substantially more noise to deal with than in, say, macroscopically engineered artifacts. Today's engineers tend to think linearly: cause, effect. In contrast, modern-day nano-molecular engineering is having to confront non-linear thinking where at every step of the way something is presumed to go wrong because of quantum effects. This gives a totally different mode of how to build computational architectures. I got a taste of this building satellite software, where you actually presumed, at the end of a module execution, that your software may not have executed properly due to cosmic radiation. To compensate for this, one had what looked, to the uninitiated, like an absolutely horrendous amount of convolution and apparently meaningless "waste". Not so. I think it premature to call certain processes garbage. Biology may be telling us that this sort of convoluted architecture is what works best at the nano-molecular level. Art2 is expressing his prejudice that biology is an accident. Modem signals sound like accidents to the uninitiated ear too.... scordova
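The satellite-software approach described above (assume any given module execution may have been corrupted, and compensate with apparent "waste") resembles standard fault-tolerance patterns. Purely as an illustration of that style of design, and not as the actual flight software the commenter worked on, here is a toy triple-modular-redundancy sketch in Python:

from collections import Counter

def run_with_tmr(module, *args):
    # Run the same computation three times and take the majority result.
    # To an observer expecting straight-line code, the two "extra" runs look
    # like waste; under a fault model where any single run may be corrupted
    # (e.g. by a radiation-induced bit flip), they are what makes the answer
    # trustworthy.
    results = [module(*args) for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority agreement; re-run or fail safe")
    return value

# Hypothetical module, invented for the example.
def attitude_adjustment(angle):
    return angle * 0.5

print(run_with_tmr(attitude_adjustment, 10.0))   # 5.0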
Art, re: genomic garbage collection. If it's true that cells devote enormous resources to unproductive activities, it's just another nail in Natural Selection's coffin. The cell lines you mention, like all living cell lines, have been undergoing selection for (ostensibly) billions of years. NS works to make cell lines more fit. If the huge amount of garbage collection is true, that's awfully less fit. The garbage is most efficiently dealt with by removal from the genome rather than from the transcriptome after considerable resources have been devoted to transcription. If true, then it would be adverse to the veracity of both natural selection and interventionist theories of intelligent design, pretty much only leaving a genome front-loaded billions of years ago as the best explanation, with genetic entropy over billions of years explaining the gross inefficiency found today. That's fine with me as I already believe front loading is the best explanation, but still I'd be rather surprised if the garbage collection in the transcriptome is really garbage collection rather than some misunderstood critical process. As I said, the acid test would be to artificially eliminate the suspected genomic garbage from the genome so that it never reaches the transcriptome and see if there's any adverse effect from the knockout on the GM organism. Be sure to let me know when that's been done; otherwise I think talking about transcriptome garbage collection is little more than woolgathering. DaveScot
Art: "There really is nothing remarkable about this region." Bona fide researchers in this area of expertise, such as Arend Sidow, disagree with you. What can I say? Should I put more stock in an anonymous blog commenter or in people like Arend Sidow? No disrespect meant, but I'm going with Sidow's opinion over yours. See my recent article (I wrote it with you in mind) quoting Sidow's reaction to the experiment which you don't think means anything. DaveScot
Hi DaveScot, Two points. First, about that mouse genic "desert": the original report was made before other genomes were available for comparison. Which is probably why you see the more recent and less, um, provocative stance from the author(s). If you take the time to get lost in the genome browser (for some of us, it's more addictive than computer games), you will find that this region is, for all practical purposes, drifting randomly. This is apparent if you look at the nature and distribution of the mutational changes, and if you look at the conservation in other, more distantly-related genomes. There really is nothing remarkable about this region. As far as garbage disposals, it may be easier for the crew here to think in terms of an analogy. Grant, for the moment (I can hear the snickers...), that Microsoft Word is a lean, mean package of, say, 100 MB of executable code. The concept of "junk DNA" as commonly held would translate to an interspersing of this code with some 10 GB of stuff - lines that just sit there and are ignored. What the trf4 paper (and others like it) suggests is something far more bizarre. Not only would the Word code be interspersed with 100-fold more "junk", it would actually be interrupted by or wrapped around 100 times as much executable code. Code that is running all the time, occupying 100 times more processor resources than the actual program, but producing no useful output (in fact, no output at all). Art2
#26
I don’t think the researchers are suggesting anything that implies (at least in their minds) anything like front loading. What they almost certainly had in mind was a system that itself built up over evolutionary time, where it was advantageous to organisms to carry along a “warehouse” or evolutionary tool kit in order to better deal with new situations.
Agree. I didn't say that this was their intention. It's the existence of a toolkit that isn't a speculation; what is a speculation is their idea that the toolkit evolved by RM+NS. What I mean is just that, since the toolkit could not reasonably have evolved by RM+NS, the front-loading hypothesis is now more plausible, IN SPITE of their NDE speculations. kairos
DaveScot,
We don’t care about where evolution will take living things 100 million years from now. All we really care about where evolution will take them next week, next year, or within the next 100 years. That limits any practical application of evolutionary theory to no more than the generation of varieties or sub-species.
This is a great point and fires gaping holes in the 'Darwinism is vital to medicine' argument. In one breath it is argued that we can't test Darwinism because it takes too long (Behe's 10^20 malarial cells just aren't enough) and is necessarily outside of our field of observation. In the next it is argued that if you don't agree with the historical conjecture about that which happens too slowly, too long ago and always somewhere else, then you can't practice science or medicine. Charlie
Art, by the way, garbage disposal in software engineering is called garbage collection. The fact that automated garbage collection algorithms were discovered in cellular machinery comes as no surprise to design pundits familiar with human-designed information processing and control systems. Most everything useful in human-designed machinery has parallels in biological machinery. Presuming that biological systems were designed by an intelligent agency similar to human design engineers is a fruitful heuristic for discovery whether true or not. Even more fruitful is the converse. The nanomolecular technology employed by living cells far surpasses anything we've invented on our own recognizance. For example, a bacterium was recently discovered living in the gut of surgeonfish that is so large it's visible to the naked eye. Why is it so large, one might ask? It's because it takes that much room to harbor a trillion (yes, Virginia, that's 1,000,000,000,000) DNA base pairs. Human engineers at present can only dream about having such information storage density as that dumb little bacterium. DaveScot
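For a rough sense of scale, the trillion-base-pair figure quoted above converts to conventional storage units as follows. The 2 bits per base pair is an idealized theoretical maximum, and the whole calculation is back-of-envelope only.

# Idealized conversion; the 1e12 base-pair figure is the one quoted in the comment above.
base_pairs = 1_000_000_000_000
bits = base_pairs * 2            # 2 bits per base pair (theoretical maximum)
print(f"about {bits / 8 / 1e9:.0f} GB packed into a single cell")   # about 250 GB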
Art, I don't make up the definitions of what conserved sequences are and are not. The non-coding sequences in question are classed as highly conserved (not ultra-conserved) and the conservation is on a par with biologically active protein-coding sequences that mice and men share. If I recall correctly, the 1000+ sequences in question were at least 100 base pairs long with exact matches ranging from a minimum of 70% to a maximum of 95%. Although there is no standard writ in granite, the authors of the research consider any sequence matches over 95% to be ultra-conserved. There were no ultra-conserved 100bp regions in the 1.5mbp block of deleted DNA (actually two blocks that added up to 1.5mbp were deleted). I understand that knockouts have been done on ultra-conserved non-coding DNA and the results revealed biological activity for those regions. There are precious few ultra-conserved non-coding regions between mice and men, so there's not a lot to be discovered there. The researchers expressed amazement that no biologic activity was detected in the experiment, and there's no mystery why: highly conserved regions are axiomatic with biologic activity (hence selection value, hence conservation). It no longer appears to be axiomatic that conservation means activity and activity means conservation. That fundamental rule of NeoDarwinian evolution is now broken. In an honest world this would have been earth-shaking news, but we're talking about evolutionary dogma, and in that world inconvenient facts are quietly buried. Last night I went to the trouble of finding more recent published research from the same author to see what investigative avenue he took upon the findings in question. To my great surprise he simply wrote off the results under the rubric that different regions of DNA drift at different rates and those highly conserved non-coding regions with no evident biologic activity simply drifted much slower than expected. Par for the course. NeoDarwinian evolutionary dogmatists long ago stopped trying to test the theory and just assume it to be true. The lead researcher, instead of trying to verify his speculative explanation of different rates of neutral drift, is trying to find a reliable method of discriminating between functional and non-functional non-coding DNA. He's doing this by running non-coding genomic comparisons between humans and vertebrates presumably more distant than mice, such as fish. As an engineer I understand this tack. He's looking for practical discoveries. Finding that highly conserved non-coding DNA between mice and men is no indicator of biologic activity does nothing to cure disease. He's interested in discovering functional non-coding DNA so that knockout experiments will reveal the function and hopefully lead to identification and treatment of genetic disorders caused by random mutations to those functional regions. That said, as far as discovery for discovery's sake goes, it's a cop-out. It seems no one is interested in why a Darwinian prediction failed; everyone is instead only concerned about how to work around the failed prediction so that medically relevant research can be pursued. As we all know but as some of us won't admit, the theory that evolution proceeds by chance & necessity instead of design has no bearing at all on anything with practical application. Genomes are what they are no matter how they got that way, and evolution proceeds so slowly that only microevolutionary change is relevant to anything practical.
I couldn't invent a more adverse finding for the veracity of natural selection than that sequence conservation over 100 million years doesn't imply biologic activity in the conserved sequence. But the truth of the matter is that no one cares why a prediction made by natural selection failed so miserably. Practical timeframes for evolution are measured in no more than a few human generations. We don't care about where evolution will take living things 100 million years from now. All we really care about is where evolution will take them next week, next year, or within the next 100 years. That limits any practical application of evolutionary theory to no more than the generation of varieties or sub-species. No one worries about mice maybe evolving into a predatory species with nuclear weapons and a score to settle against mankind millions of years in the future. No one worries about bacteria evolving into something substantially different. All we worry about are microevolutionary changes. So there's no reason to figure out why a major prediction of natural selection failed, but rather just a concern about how to work around it. Isn't that just precious! DaveScot
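Since the 70% and 95% cutoffs come up repeatedly in this exchange, here is a minimal sketch of how percent identity over an aligned window is computed, using the thresholds as stated in the comment above (they are the commenter's figures, not a formal standard, and the toy sequences are invented):

def percent_identity(a, b):
    # Fraction of matching positions between two equal-length aligned sequences.
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

def classify(pid):
    # Labels per the cutoffs cited in the comment above (not a formal standard).
    if pid > 0.95:
        return "ultra-conserved"
    if pid >= 0.70:
        return "highly conserved"
    return "not conserved"

# Invented 20-bp toy alignment, purely for illustration.
mouse = "ATGGCGTACGTTAGCCGTAC"
human = "ATGGCGTACGATAGCCGTAC"
pid = percent_identity(mouse, human)
print(f"{pid:.0%} identity -> {classify(pid)}")   # 95% identity -> highly conserved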
magnan, that's about the gist of things. What's really cool about the trf4 poly(A) polymerase is that it resembles, in its action, similar enzymes found in bacteria. This turns one particular "fact" that most students learn on its head - RNA polyadenylation is seen in all domains of life (not just the eukaryotic nucleus and cytoplasm), and its primordial function is not that which we learn of in school (translation and mRNA stabilization). Rather, it is all about breakdown, turnover, degradation. (Sorry for evangelizing about garbage disposals - they're really quite fascinating.) Art2
#15 (Art2): From the paper you cited: "Our data strongly support the existence of a posttranscriptional quality control mechanism limiting inappropriate expression of genetic information." You seem to be suggesting that the research cited in the "Encyclopedia of DNA" article is being wrongly interpreted by careless science journalists and overly speculative researchers. Actually, the transcribed RNA from sequences formerly considered "junk" isn't functional in any way, whether or not it is overlapping. It is still just garbage that is later disposed of by the polymerases mentioned in the paper you cited. Is this your thesis? magnan
#21 (kairos): "I don’t think that this is a speculation; they have simply stated a scientific hypotesis to try to explain datai within the current paradigm. But the result could be even more favourable for ID. After actual evidence of high % of useful DNA (first score) the researcher themselves are actually suggesting that DNA seems more and more FRONT LOADED (second score). Am I wrong?" The article writer refers to this thinking as speculation, though I later dignify it by calling it a hypothesis. I don't think the researchers are suggesting anything that implies (at least in their minds) anything like front loading. What they almost certainly had in mind was a system that itself built up over evolutionary time, where it was advantageous to organisms to carry along a "warehouse" or evolutionary tool kit in order to better deal with new situations. magnan
idnet.com.au, why do you think it is too early to know if the process I described is "correct"? Lots of work has been done on this mechanism in the past several years, and it seems to be very well-supported. DaveScot, I would encourage you to look at the specifics of the mouse experiment you described. I think you'll find that the "conservation" that is being trumpeted isn't all that conserved, and in fact has all the hallmarks of a region that is diverging by random drift. (Of course, that is apart from the genes that have subsequently been identified in the so-called gene deserts.) To help out, here is a link to the genome browser page. To look through the respective genomes, use the appropriate pull-down menus. Choose the latest assembly. One of the mouse chr 3 gene deserts: chr3:146,629,000-149,000,000 The corresponding human genome position: chr1:82,073,954-84,900,793 Cut and paste exactly as given. There are lots of display options that appear after the picture. Explore and enjoy. As far as the paper whose abstract I posted, the general theme is something that most people tend to ignore. But it shouldn't be ignored. Turnover of proteins and RNA is a very important mechanism of regulation and, as I indicate here, central in the overall scheme of genomic "homeostasis". Art2
Mung states: "In another thread I raised the question of whether genetic entropy itself was not a major factor in the evolution of species. I don't recall receiving a response, but here the claim is made explicitly. Evolution occurs because of genetic entropy, not in spite of it. Yet that is the opposite of what the 'genetic entropy' originator argues in his book." To get a full grasp of how Genetic Entropy allows for limited beneficial adaptation away from a parent species, let's look at the problems facing us with our food crops. For instance, man has artificially selected for higher-yielding corn for hundreds if not thousands of years. Now the corn yield increase has leveled off. You could say we have "almost" reached a wall as far as yield increase is concerned for the corn by artificial selection. Yet at the same time these corn crops are more susceptible to a single disease wiping out an entire crop since the variability of the corn is now severely limited... i.e., the parent species of the sub-species has much more information for variability in its genome than the sub-species or purebreds do. This loss of information away from the parent species is an example of evolution at a cost: "Genetic Entropy". I hope I have made this clear for you, Mung. If not, ask me and I will try to make it clearer for you. bornagain77
Art2, You infer that the transcribed elements from areas of unknown function are accidentally transcribed and likely to be immediately chopped up for reuse. I think it is too early to know whether you are correct. Dave, The experiment deleting the non-coding sequences in rats found no problems. This does not mean that they tested for the function that the area coded for. Just say it is an area that ensures that rats don't eat a specific form of toxin. Would the lab rats who were not given an opportunity to eat the toxin demonstrate any difference from other lab rats who had that ability preserved? The same may go for specific immunogens. If the ability to fight rodent malaria is conferred by that segment, and they didn't test the rats with malaria, how would they detect the problem? There are innumerable experiments that would need to be done to be able to truly conclude that one set of rats was equal to the other set. Ultimately, releasing the cut-down rat version into a competitive setting would be an important test. idnet.com.au
Art re transcriptome and unused transcripts The proof of the pudding would be in deleting the DNA sequences that are suspected of having no use and seeing what effect if any it has on the organism. There may well be critical situations where the transcripts seen to be otherwise rapidly disposed of are instead used for some purpose and the transcript simply hasn't been observed in that situation. There sure are a growing number of anomalies that don't make sense in light of natural selection and non-coding DNA. I'm reminded of the experiment where 1000 non-coding highly conserved sequences between mice and men were knocked out of the mouse and there was no observed effect from it on the GM mice. If not selection then what acted to conserve those sequences for 100 million years? If selection conserved them why was there no observed effect on the mice when the conserved sequences were removed? DaveScot
#10
The Science Daily article quotes the researchers with the first speculation on this: “According to ENCODE researchers, this lack of evolutionary constraint may indicate that many species’ genomes contain a pool of functional elements ...
I don't think that this is a speculation; they have simply stated a scientific hypothesis to try to explain the data within the current paradigm. But the result could be even more favourable for ID. After actual evidence of a high % of useful DNA (first score), the researchers themselves are actually suggesting that DNA seems more and more FRONT LOADED (second score). Am I wrong? kairos
[...] In yet another proof that it is good to maintain a degree of humility when drawing conclusions from the current body of scientific knowledge, it turns out that junk DNA isn’t junk after all.    [...] 4Simpsons Blog Weekly roundup «
Hi Sal, What is the impact of this on the whole "genetic entropy" argument? I mean, if genetic entropy isn't taking place in the "junk DNA," where is it taking place?
This is a further nail in the coffin for evolution for this only greatly strengthens the already unimpeached principle of Genetic Entropy of genomes. It is now virtually certain that genome-wide information was introduced at the level of parent species with limited radiation of sub-species out from the parent species due to front loaded information that still obeys the principle of genetic entropy.
In another thread I raised the question of whether genetic entropy itself was not a major factor in the evolution of species. I don't recall receiving a response, but here the claim is made explicitly. Evolution occurs because of genetic entropy, not in spite of it. Yet that is the opposite of what the "genetic entropy" originator argues in his book. Mung
Nothing like looking at the data: http://research.nhgri.nih.gov/ENCODEdb/ Art2
Behe's DBB conclusions about IC should be obvious to anyone who has dealt with even the most trivial real-world engineering problems. I'm reading The Edge of Evolution, and I must admit that I am somewhat mystified by the fact that such a book should need to be written. When the information-based nature of living systems was elucidated half a century ago, the entire Darwinian paradigm concerning biological innovation (i.e., stochastic processes filtered by natural selection) should have been discarded as obviously inadequate. GilDodgen
I keyed in on this statement too: The ENCODE consortium's major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. Especially this part: "these transcripts extensively overlap one another." This overlapping nature, in conjunction with the poly-functional nature of the genome, potentially raises the Complex Specified Information filter of Dembski up to the level of the whole genome. At least that is the impression I get from a first glance at the evidence. bornagain77
For what it's worth, it would probably help if discussants here looked at the paper, and especially at the regions of the human genome that were selected for analysis. Also, the paper by Wyers et al. (2005) - Cryptic Pol II Transcripts Are Degraded by a Nuclear Quality Control Pathway Involving a New Poly(A) Polymerase, Cell 121, 725-737 - might cause some here to pause and re-think things. The abstract: ---------------------- Since detection of an RNA molecule is the major criterion to define transcriptional activity, the fraction of the genome that is expressed is generally considered to parallel the complexity of the transcriptome. We show here that several supposedly silent intergenic regions in the genome of S. cerevisiae are actually transcribed by RNA polymerase II, suggesting that the expressed fraction of the genome is higher than anticipated. Surprisingly, however, RNAs originating from these regions are rapidly degraded by the combined action of the exosome and a new poly(A) polymerase activity that is defined by the Trf4 protein and one of two RNA binding proteins, Air1p or Air2p. We show that such a polyadenylation-assisted degradation mechanism is also responsible for the degradation of several Pol I and Pol III transcripts. Our data strongly support the existence of a posttranscriptional quality control mechanism limiting inappropriate expression of genetic information. --------------------- I've been going on about the garbage disposal for some years. It's reports like this that make my ideas seem, well, almost sensible. Art2
Looks like Nick Matzke and Larry Moran have egg on their face as well.
Larry Moran: Junk DNA is the DNA in your genome that has no function. Much of it accumulates mutations in a pattern that's consistent with random genetic drift implying strongly that the sequences in junk DNA are unimportant. In fact, the high frequency of sequence change (mutation plus fixation) is one of the most powerful bits of evidence for lack of function. ... Nick Matzke: Good post Larry.
Jehu
From the article:
The ENCODE consortium's major findings include the discovery that the majority of DNA in the human genome is transcribed into functional molecules, called RNA, and that these transcripts extensively overlap one another. This broad pattern of transcription challenges the long-standing view that the human genome consists of a relatively small set of discrete genes, along with a vast amount of so-called junk DNA that is not biologically active.
Wow! Jehu
It should be noted that they have now finally proven that most if not all of the DNA is not only functional but is also poly-functional; thus, by logic, it becomes much more severely constrained against any mutations to the DNA whatsoever (the chance of a beneficial mutation to a poly-functional code is much, much less than the chance of a beneficial mutation to a mono-functional code). This is a further nail in the coffin for evolution for this only greatly strengthens the already unimpeached principle of Genetic Entropy of genomes. It is now virtually certain that genome-wide information was introduced at the level of parent species with limited radiation of sub-species out from the parent species due to front loaded information that still obeys the principle of genetic entropy. bornagain77
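The intuition in the comment above (the more overlapping functions a sequence carries, the fewer changes it can tolerate) can be phrased as a toy model: if a random point mutation is acceptable to one functional constraint with probability p, and a site sits under k independent overlapping constraints, the chance it is acceptable to all of them is roughly p^k. The independence assumption and the numbers below are purely illustrative, not figures from the research under discussion.

# Toy model only: assumes the k overlapping constraints act independently.
p = 0.1   # hypothetical chance a random point mutation is tolerated by ONE function
for k in (1, 2, 3, 4):
    print(f"{k} overlapping function(s): ~{p**k:.0e} chance the change is tolerated")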
Predictions of non-functionality of "junk DNA" were made by Susumu Ohno (1972), Richard Dawkins (1976), Crick and Orgel (1980), Pagel and Johnstone (1992), and Ken Miller (1994), based on evolutionary presuppositions. By contrast, predictions of functionality of "junk DNA" were made on teleological bases by Michael Denton (1986, 1998), Michael Behe (1996), John West (1998), William Dembski (1998), Richard Hirsch (2000), and Jonathan Wells (2004). These Intelligent Design predictions are being confirmed; e.g., ENCODE's June 2007 results show substantial functionality across the genome in such "junk" DNA regions, including pseudogenes. These predictions are further detailed in: Junk DNA at Research Intelligent Design. http://www.researchintelligentdesign.org http://www.researchintelligentdesign.org/wiki/Junk_DNA Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. The ENCODE Project Consortium, Nature 447, 799-816 (14 June 2007) doi:10.1038/nature05874 http://www.nature.com/nature/journal/v447/n7146/pdf/nature05874.pdf DLH
Unfortunately, as we have long observed, Darwinism is unfalsifiable. The editor of Nature and the researchers writing these papers obviously assumed these findings are easily accommodated into MET. Otherwise they would not have been published. Because of this, the new data will be blithely explained as usual with various "just so" stories. The Science Daily article quotes the researchers with the first speculation on this: "According to ENCODE researchers, this lack of evolutionary constraint may indicate that many species' genomes contain a pool of functional elements, including RNA transcripts, that provide no specific benefits in terms of survival or reproduction. As this pool turns over during evolutionary time, researchers speculate it may serve as a "warehouse" for natural selection by acting as a source of functional elements unique to each species and of elements that perform the similar functions among species despite having sequences that appear dissimilar." If this is a valid general hypothesis, how do the appropriate items from the "warehouse" just happen to be made available at the right time for natural selection? Even if all 2.7 billion base pairs or so are supposed to be available separately to express in every generation, there would still need to be a very elaborate filing and correlation mechanism that would also have to have evolved by RM & NS. "The ENCODE effort found about half of functional elements in the human genome do not appear to have been obviously constrained during evolution." If they are described as not constrained, gene studies presumably showed a lot of (random?) changes in sequence in these regions between organisms belonging to different orders, classes, families, etc. having times of origination tens or hundreds of millions of years apart. If this large bulk of functional but nonconserved DNA had originated by natural selection, we would see an obvious build-up of changes to "conserved" elements. This was not the case, so natural selection apparently wasn't involved. I wonder why Behe didn't refer to these research results in his new book. Unfortunately this large release of papers came out just after it was published. magnan
Jeredl, "bFast, why would design detection fail if “junk DNA” were, indeed, junk?" This is Denton's contention, expressed clearly in "Nature's Destiny". I do not necessarily agree with him. However, there does seem to be something not very strategic about creating massive amounts of useless pseudoinformation. "Junk ain't junk" is certainly an ID prediction -- now a validated ID prediction. You and I might both contend that it is not a necessary prediction of ID, but Denton expressed it as exactly such. If you really want to understand his case in the matter, please feel free to read his case rather than my summary. bFast
"The new data indicate the genome contains very little unused sequences and, in fact, is a complex, interwoven network." This is devastating to neo-Darwinism. Complex, interwoven network...like the neurons in our brains? That means DNA sequences are highly dependent to one another, and "evolution" certainly requires highly correlated, systemic or systematic, non-random changes taking place at various places at the same time. Irreducibly complex? But,
“the ENCODE effort found about half of functional elements in the human genome do not appear to have been obviously constrained during evolution”
I don't understand how the above finding leads to the following conclusion. Could you explain?
This means these designs are NOT attributable to natural selection. Features in the genome have been shown not to be likely products of “slight successive modifications”.
MatthewTan
We can add TalkOrigins and Richard Dawkins to the list of losers from this recent discovery. See: DNA researcher Andras Pellionisz gives favorable review to a shredding of Dawkins and TalkOrigins
Richard Dawkins writes: And there’s lots more DNA that doesn’t even deserve the name pseudogene. It, too, is derived by duplication, but not duplication of functional genes. It consists of multiple copies of junk, “tandem repeats”, and other nonsense which may be useful for forensic detectives but which doesn’t seem to be used in the body itself.
scordova
[update] I made a correction to my post; the quote I wished to highlight was:
the ENCODE effort found about half of functional elements in the human genome do not appear to have been obviously constrained during evolution
scordova
bFast, why would design detection fail if "junk DNA" were, indeed, junk? jaredl
The fact that Ken Miller is the author of the most popular high school textbook and is working on a college-level edition should make him suspect as a spokesperson. His income from his textbooks is probably over 200k a year, and if he once told the truth about evolution, it would all disappear. jerry
I point out why certain functional systems will be nearly if not completely invisible to natural selection. See: Airplane magnetos, contingency designs, and reasons ID will prevail scordova
In Nature's Destiny, Michael Denton, Behe's mentor, was much stronger on this point. He held that ID would fall if junk DNA proved to be junk. This is surely the PRIMARY prediction of ID. Anybody who respects the predict-and-prove model of scientific theories must, in my opinion, give ID a seat at the table of science. Further, today Haldane's dilemma has become ten times the dilemma that it was prior to this finding. It is my opinion that at this point the ID community should claim "scientific theory" status -- and can the "hypothesis" bit. bFast
By the way, Miller is author of a widely used high school biology textbook. Isn't that reassuring? (NOT)! scordova
