Uncommon Descent Serving The Intelligent Design Community

The Sound of Circular Reasoning Exploding


Alternate Title: Of Mice and Men and Evolutionary Dogma

“There has been a circular argument that if it’s conserved it has activity.” Edward Rubin, PhD, Senior Scientist, Genomics Division Director, Lawrence Berkeley National Laboratory

Recent experiments have caught a central tenet of NDE (neo-Darwinian evolution) in a failed prediction. Large swaths of junk DNA (non-coding, with no known function) were found to be highly conserved between mice and men. A central tenet of NDE is that unexpressed (unused) genomic information is subject to relatively rapid corruption from chance mutations: if it's unused it won't do any harm when it mutates, and if it's unused long enough it gets peppered with mutations into random oblivion. If mice and men had a common ancestor many millions of years ago and they still have highly conserved DNA in common, the story follows that all the conserved DNA must have an important survival value.

A good experiment to figure out what unknown purpose the conserved non-coding pieces serve would be to cut them out of the mouse genome and see what kind of damage it does to the mouse. So it was done. Big pieces of junk DNA containing a thousand highly conserved regions common to mice and men were chopped out of the mouse. To everyone's amazement the mouse was as healthy as a horse (so to speak). The researchers were amazed because they were confident NDE predicted some kind of survival-critical function, and none was found.

This is a good avenue for positive ID research. If any of those regions are being preserved because they could be of important use in the future… well, that would pretty much blow a hole in the good ship NDE the size of the one that sank the Titanic. Maybe not that big, but it would be taking on water – natural selection can't plan for the future. Planning for the future with genomic information is a central tenet of the ID front loading hypothesis. Lack of any known means of conserving non-critical genetic information is the major objection lobbed at the front loading hypothesis. Evidently there is a means after all.

Life goes on without ‘vital’ DNA

16:30 03 June 2004
Exclusive from New Scientist Print Edition.
Sylvia Pagán Westphal, Boston

To find out the function of some of these highly conserved non-protein-coding regions in mammals, Edward Rubin’s team at the Lawrence Berkeley National Laboratory in California deleted two huge regions of junk DNA from mice containing nearly 1000 highly conserved sequences shared between human and mice.

One of the chunks was 1.6 million DNA bases long, the other one was over 800,000 bases long. The researchers expected the mice to exhibit various problems as a result of the deletions.

Yet the mice were virtually indistinguishable from normal mice in every characteristic they measured, including growth, metabolic functions, lifespan and overall development. “We were quite amazed,” says Rubin, who presented the findings at a recent meeting of the Cold Spring Harbor Laboratory in New York.

He thinks it is pretty clear that these sequences have no major role in growth and development. “There has been a circular argument that if it’s conserved it has activity.”

Use the link above for the full article.

Comments
GeoMor: "Percent identity is a nonlinear function of the time passed, and you need a differential equation to model it." Mathematicians, pah! If you set up the array that I described, it will automatically bump into mutated items, and mutate 'em again. Voila, advanced differential calculus. You can drive yourself crazy trying to calculate the trajectory of a football in a stiff breeze, or you can ask a quarterback to throw the darn thing. bFast
Correction to that last post. Rubin's team only found 1 function out of 10 sequences of >90% identity and 400 bp that were conserved across humans, rodents, chickens and frogs. There is no evidence that the function they did find conferred a selective advantage. Jehu
GeoMor,
How many 100-bp windows are there in the genome? A lower bound is 3e9/100, i.e. let’s not even let the windows overlap. So how many 100-bp windows in the genome would we expect to be preserved at better than 70% identity by chance? 3e9/100*1.8e-4 = 5,400. At 40% divergence, it’s 83,000. Again, both lower bounds because we considered only disjoint windows.
Thanks for the input. But help me out here. The mouse genome is 2.5 Gb, so your equation should be 2.5e9/100*1.8e-4 = 4,500. So we would expect about 4,500 100-bp sequences with 70% homology by chance, covering roughly 0.018% of the genome? So there will be a small number of falsely conserved sequences. Rubin's team found 1,243 with >70% homology and 100 bp in only two stretches of DNA between four exons. I take it there are more of these sequences than would be anticipated by chance.
The trouble with trying to get percent identity/divergence from this is that as you let this process run, on each iteration you have a higher chance of changing a base that you already changed, leading to no decrease (and possibly an increase) in the percent identity.
Not if you use the 47% figure, because that is based on the hard observation of comparing the human and mouse genomes. Things get really interesting when you also consider that Rubin's team could not find function for 5 sequences of >90% identity and 180 bp, or for only 1 of 10 sequences of >90% identity and 400 bp, that were conserved across humans, rodents, chickens and frogs. There is no evidence that the function they did find conferred a selective advantage. Jehu
Here's how to calculate the significance of the conservation. Assume that by whatever figures you want to use, you expect by chance a certain percentage of bases to have changed, say 47% divergence (53% identity), whatever you'd like to use. Now, what is the probability that a 100-bp window will be preserved at 70% identity? You have 100 bases and fewer than 30 of them have to have changed, where each one has a 47% chance to have diverged. If I flip a coin a hundred times, and it has a 47% chance of coming up tails, what's the probability I get fewer than 30 tails? You compute this using the cumulative binomial distribution. There is no simple closed-form formula, but you can plug it into Matlab, etc. At 47% divergence, the chance of this happening for a 100-bp window is 1.8e-4.

How many 100-bp windows are there in the genome? A lower bound is 3e9/100, i.e. let's not even let the windows overlap. So how many 100-bp windows in the genome would we expect to be preserved at better than 70% identity by chance? 3e9/100 * 1.8e-4 = 5,400. At 40% divergence, it's 83,000. Again, both are lower bounds because we considered only disjoint windows.

May I once again stress that I was never arguing that many conserved sequences were not deleted in the Nobrega experiment; this calculation arose in a side discussion. But since my math was being questioned, I thought I'd bring home the bacon.

Finally, let me caution against the calculation several of you have been doing (and which, for simplicity's sake, I also used in a previous comment) of X subs/site/yr times Y years to get a percent divergence. This calculation becomes increasingly inaccurate for larger numbers of years. The mutation rate describes a process where you randomly pick a position in the genome and change it to another base.
The trouble with trying to get percent identity/divergence from this is that as you let this process run, on each iteration you have a higher chance of changing a base that you already changed, leading to no decrease (and possibly an increase) in the percent identity. Percent identity is a nonlinear function of the time passed, and you need a differential equation to model it. I am not sure off the top of my head where (back in time) the linear approximation that we've been using becomes really bad. GeoMor
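GeoMor's cumulative-binomial figure is easy to check directly. Below is a short Python sketch (Python is my substitution for the Matlab GeoMor suggests) using the numbers from the comment: 47% per-site divergence, 100-bp disjoint windows, a 3e9-bp genome.

```python
from math import comb

def binom_cdf(k_max, n, p):
    """Exact P(X <= k_max) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

# A 100-bp window is ">70% identity" if fewer than 30 bases diverged,
# where each base diverges independently with probability 0.47.
p_window = binom_cdf(29, 100, 0.47)

n_windows = 3_000_000_000 // 100   # lower bound: disjoint windows only
expected = n_windows * p_window

print(f"P(one window conserved by chance) = {p_window:.2e}")
print(f"expected conserved windows by chance = {expected:,.0f}")
```

The result lands in the same ballpark as the 1.8e-4 and 5,400 figures in the comment; swapping in 2.5e9 for Jehu's mouse-genome size scales the window count down accordingly.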
bFast
Jehu, let me question your figures. We seem to have two sets of numbers flying around. Could the 2 x 10^-9 be an “average rate for all DNA” where the 2.1% per mil be the rate for “junk” DNA? I have independently seen the latter number measured off of “junk” DNA, so that’s what I am suspecting.
2.22 x 10^-9 is the average mutation rate for mammals. I am not sure how accurate that number is because it is from 2001. The numbers I have been giving for neutral substitution are from Nature's mouse genome article. (It's free.) http://www.nature.com/nature/journal/v420/n6915/full/nature01262.html Jehu
Jehu, let me question your figures. We seem to have two sets of numbers flying around. Could the 2 x 10^-9 be an "average rate for all DNA" where the 2.1% per mil be the rate for "junk" DNA? I have independently seen the latter number measured off of "junk" DNA, so that's what I am suspecting. bFast
That's right, I meant bFast. Sorry. Jehu
That would be bFast ".... right?" bFast
DaveScot
2.2% per million years * 140 million years, right?
Although I am guilty of giving that number earlier in the thread, it is apparently actually much lower. According to Nature the mutation rate in the mouse is 4 x 10^-9 per year per base pair and in the human is 2 x 10^-9 per year per base pair, so those two mutation rates have to be combined. The time of the human/mouse divergence is 70-90 million years, which resulted in neutral divergence of 47%. The exons of coding genes in mouse and human have an average of 85% identity. The introns average 69%. The so-called "promoter regions", which are the poorly defined 200 base pairs just before and after a coding gene, have an identity of between 70-75%. I am not sure how significant 70% identity between mouse and human is. However, when you toss a chicken in there the time of divergence goes way back and it is much harder to justify it by random chance. Jehu
I've been thinking of a simple simulation that would rule out the "segments might just have not been mutated" argument. (If I can find a few hours I will code it.) The code would create a million-element (tuneable) array, and randomly mutate its values between a value of 1 and 4. Let sufficient mutations happen to account for random mutations over the years allotted (2.2% per million years * 140 million years, right?). At that point, the array can be swept to see what the longest segment that was apparently unmutated to the 70th percentile would be. I bet the longest segment will be about 30 elements long. I will be shocked if the array shows a thousand segments averaging 100 elements in length -- shocked, shocked! bFast
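A minimal Python sketch of the simulation bFast describes, scaled down to 100,000 elements for speed (the hit rate of 2.2% per Myr times 140 Myr and the 70%/100-element thresholds come from the thread; the rest is my own framing):

```python
import random

random.seed(42)  # reproducible run

N = 100_000            # tuneable array size (bFast proposed a million)
HITS = 0.022 * 140     # 2.2% per Myr * 140 Myr ~ 3.1 expected hits per site

orig = [random.randint(1, 4) for _ in range(N)]
seq = orig[:]

# Apply HITS*N point mutations at random positions; a site can be hit
# more than once, and a hit restores the original base 1 time in 4.
for _ in range(int(HITS * N)):
    seq[random.randrange(N)] = random.randint(1, 4)

matches = [int(a == b) for a, b in zip(orig, seq)]
identity = sum(matches) / N

# Longest run of consecutive unmutated sites.
longest = run = 0
for m in matches:
    run = run + 1 if m else 0
    longest = max(longest, run)

# Disjoint 100-element windows conserved at >= 70% identity.
conserved = sum(1 for i in range(0, N - 99, 100)
                if sum(matches[i:i + 100]) >= 70)

print(f"overall identity: {identity:.1%}")
print(f"longest exact-match run: {longest}")
print(f"100-element windows at >=70% identity: {conserved}")
```

On these assumptions identity saturates near 25% plus a small unhit fraction (rather than falling linearly, which is GeoMor's multiple-hit point), and no 100-element window stays 70% conserved by chance, so a thousand such surviving windows would indeed be striking.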
DaveScot:
a useless mutation can become fixed simply because that cell line was strong for some other reason or just lucky. The point here isn’t how the CNGs became fixed. It’s how they were conserved for tens or hundreds of millions of years without being blasted into unrecognizability by random mutations.
In true "junk dna" mutations randomness determines when a mutation becomes fixed. However, if we look at the amount of benefit that a mutation must offer so that it does become fixed, we can guess that the converse is also true, that a mutation that offers that much disadvantage would keep it from becoming fixed. If DNA is actually ultra-sensitive to any advantage/disadvantage, then any mutation offering even a very slight disadvantage would not be fixed. The result would be that that segment of DNA would be conserved. Mesk, over on Telic Thoughts, argues that it is quite reasonable for the mice in question to pass the tests provided and still have sufficient deleterious function account for the conservation that is observed. However, he suggests that a 5 year study on about 1000 mice, may make a very convincing case that the mice have suffered no deleterious effects. He then goes on to describe the expected response from the scientific community (assuming that no deleterious effects are found) as frantic. I do like the balance the Mesk is offering in his discussion on TT. bFast
Sparc:
Thus one can hardly claim that ultra-conserved sequences lack functions.
Jehu's quote from Nature:
What explains the correlation among these many measures of genome divergence? It seems unlikely that direct selection would account for variation and co-variation at such large scales (about 5 Mb) and involving abundant neutral sites taken from ancestral transposon relics.
What Jehu alludes to, and what Sparc's quote actually points out, is the fact that irrespective of the question concerning conserved sequences and function there seem to be TWO conserving processes going on, one that can be plausibly attributed to NS, and one that defies this connection.

But there's more to it than that. EVEN IF NS is responsible for both such processes (thus, granting the argument), the "enhancers" (per Pennacchio) are nevertheless more conserved than the "genes" themselves. These enhancers "regulate" gene function/expression. Hence, it would seem, using the standard thesis regarding conservation of sequences, that 'regulatory' functions are more critical to survival than are "coding" functions (i.e., proteins/enzymes).

Dave, I'm going to toss this over to your area of expertise, programming, but doesn't this all suggest that "genes" are less critical in, so to speak, setting up genetic programming than are the "enhancers/regulators" themselves? E.g., wouldn't a problem with a branching node be more fatal to the proper functioning of a program than a subroutine it calls? IOW, in programming you'd fix the switching problems first before you started looking at the individual subroutines the branching node was calling.

Looked at this way, it makes gene expression look somewhat secondary, almost peripheral, to genetic programming, which is really what we see in nature, as in the geographical radiation of species. And doesn't that imply that NS is necessarily almost decoupled from phenotypic variation? And doesn't that imply that Darwin was completely wrong, given that he bases his theory on the link between phenotypic variation and selection? PaV
DaveScot:
I’m a little surprised the background rate is only 2:1 given that the reproductive cycle in mice is closer to 20:1 when compared to men. The smaller deviance might be due to humans continuing to reproduce for decades and their gametes acquiring age-related mutations.
I think it is simply a failed prediction of NDE. I know there was a strong prediction that generation time would correlate with the mutation rate. That has not turned out to be the case.
It should be mentioned that the neutral theory isn’t complete as not all genes exhibit synonymous substitution at the same rate which made molecular clock calibration into a cottage industry.
If NDE were true I would expect to see uniform drift across the genome with a close correlation to generation times, exceptions would only be made for genes under selective pressure that cannot tolerate neutral mutations as well. The Nature Mouse Genome issue made this comment:
What explains the correlation among these many measures of genome divergence? It seems unlikely that direct selection would account for variation and co-variation at such large scales (about 5 Mb) and involving abundant neutral sites taken from ancestral transposon relics.
Jehu
I'm a little surprised the background rate is only 2:1 given that the reproductive cycle in mice is closer to 20:1 when compared to men. The smaller deviance might be due to humans continuing to reproduce for decades and their gametes acquiring age-related mutations. In other words, every new mouse is made by combining young gametes produced by a young animal, whereas many humans are produced from gametes decades old (eggs) or recently produced from decades-old cells (sperm).

At any rate I have no quarrel with Rubin's 0.46 rate of bp mutation. That rate can be easily established by synonymous substitutions in codon sequences in critically expressed genes based upon the neutral theory. It should be mentioned that the neutral theory isn't complete, as not all genes exhibit synonymous substitution at the same rate, which made molecular clock calibration into a cottage industry.

Where I think GeoMor mostly went wrong is in saying that serendipity would account for thousands of 100bp sequences appearing conserved. I'm sure someone must have done a statistical analysis to show that is untrue and Rubin based his sequence length selection on that analysis. DaveScot
DaveScot, Thanks for the clarification. I think that actual divergence between mouse and human is slightly more than 40%. Here is what Nature reported.
[W]e estimate that neutral divergence has led to between 0.46 and 0.47 substitutions per site (see Supplementary Information). Similar results are obtained for any of the other published continuous-time Markov models that distinguish between transitions and transversions (D. Haussler, unpublished data). Although the model does not assign substitutions separately to the mouse and human lineages, as discussed above in the repeat section, the roughly twofold higher mutation rate in mouse (see above) implies that the substitutions distribute as 0.31 per site (about 4 x 10^-9 per year) in the mouse lineage and 0.16 (about 2 x 10^-9 per year) in the human lineage.
Jehu
A background rate of 2.2*10^-9/site/year means that any given nucleotide can be expected to mutate into a different one in roughly 500 million years. With both human and mouse diverging at this rate it means any nucleotide that was the same in both can be expected to be different after 250 million years (it'll change in either mice or men). Therefore we should expect sequence divergence at the rate of about 10% every 25 million years. After 90 million years we'd expect unconserved regions to have diverged by close to 40% (60% similarity). GeoMor's assertion that we should find thousands of 100bp sequences more than 70% conserved by serendipity is wrong. 100bp is sufficient to eliminate virtually all localized deviations from the expected average. Rubin's team isn't stupid. He chose these percentages and sequence lengths because they are well over the threshold of being purposely conserved. He was amazed because no purpose in 1000 of them became immediately apparent when they were removed. DaveScot
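The linear extrapolation in the comment above, and GeoMor's earlier warning about it, can both be made concrete with the Jukes-Cantor correction for repeat hits (my choice of substitution model; the thread doesn't name one):

```python
from math import exp

MU = 2.2e-9   # substitutions/site/year in each lineage (thread's figure)
T = 90e6      # years since the mouse-human split (thread's figure)

# Naive linear estimate: the two lineages diverge independently.
linear_divergence = 2 * MU * T

# Jukes-Cantor: repeat hits make observed identity decay exponentially
# toward a floor of 25%, so observed divergence grows sub-linearly.
jc_identity = 0.25 + 0.75 * exp(-8 * MU * T / 3)
jc_divergence = 1 - jc_identity

print(f"linear estimate:       {linear_divergence:.1%} diverged")
print(f"Jukes-Cantor estimate: {jc_divergence:.1%} diverged")
```

At 90 Myr the two estimates already differ by several points (roughly 40% versus roughly 31%), and the gap widens rapidly at chicken-scale divergence times, which is where the linear shortcut becomes, in GeoMor's words, "really bad".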
sparc, you wrote:
In a recent paper (Pennacchio L.A. et al. (2006): In vivo enhancer analysis of human conserved non-coding sequences. Nature 444(7118):499-502) Rubin’s group presents evidence that 45% of ultra-conserved sequences they have analyzed “functioned reproducibly as tissue-specific enhancers of gene expression at embryonic day 11.5”. Thus one can hardly claim that ultra-conserved sequences lack functions.
But what about the other 55%? You see, nobody is claiming that all ultra-conserved sequences lack function. Notice my post #74 where I pointed out that no function was found in 25% of the tested ultra-conserved sequences. That implicitly states that function was found in 75% of them. Here's the kicker: according to NDE, 100% of ultra-conserved sequences should have function. So 25% with no function is significant in itself. Jehu
fifth I think the research for ID is to find what mechanism is conserving these 1000 CNGs. If natural selection is what is conserving them then they must have an important function critical to survival. There is no known molecular mechanism other than natural selection that can conserve sequence information like this. The front loading hypothesis of ID predicts a conservation mechanism other than natural selection. DaveScot
GeoMor,
By those exact numbers, you’d expect most of the genome to be greater than 70% identity between mouse and the human-mouse ancestor. Just multiply 2.22e-9 subs/site/yr by 90Myr = 0.2 subs/site or 80% identity. Between human and mouse (140-180Myr of divergence), you’d expect tens of thousands of 100-bp sequences to have 70% or better identity in a 3e9-bp genome.
Not exactly. That is 2.22e-9 per lineage, not combined. And that figure is the supposed mammalian average, not the alleged human- or mouse-specific substitution rate. See the Nature article for better info on human/mouse divergence.
Anyway, I’m not trying to argue that there were not many “truly” conserved sequences in the deleted regions — ... Again, NDE’s prediction here remains that the conserved deleted sequences are functional and have conferred a selective advantage
I agree, it has clearly been predicted that these sequences were conserved and therefore have function. However, I am curious to know the statistical significance of a sequence >100 bp at >70% identity. Jehu
Sparc, GeoMor already mentioned the paper you did in the commentary. I mistakenly called the CNGs referenced in the blog article ultra-conserved. They are in fact highly conserved, at between 70% and 95% sequence match. Rubin explicitly stated none of these 1000 crossed over into ultra-conserved, which (by the definition Rubin used) are over 95%. So the article you mention doesn't address any of the thousand CNGs referenced here. The claim that no function has been found for these thousand(!) highly conserved sequences is still a valid claim. DaveScot
Mouse Genome Factoid Round-Up! Nature has their December 5, 2002, Mouse Genome Special Issue posted on the internet for free. http://www.nature.com/nature/mousegenome/index.html It has lots of great information that is highly relevant to this thread. Here are some significant facts as reported back in 2002.
* The mouse genome is about 14% smaller than the human genome (2.5 Gb compared with 2.9 Gb). The difference probably reflects a higher rate of deletion in the mouse lineage.
* Over 90% of the mouse and human genomes can be partitioned into corresponding regions of conserved synteny, reflecting segments in which the gene order in the most recent common ancestor has been conserved in both species.
* At the nucleotide level, approximately 40% of the human genome can be aligned to the mouse genome. These sequences seem to represent most of the orthologous sequences that remain in both lineages from the common ancestor, with the rest likely to have been deleted in one or both genomes.
* The neutral substitution rate has been roughly half a nucleotide substitution per site since the divergence of the species, with about twice as many of these substitutions having occurred in the mouse compared with the human lineage.
* By comparing the extent of genome-wide sequence conservation to the neutral rate, the proportion of small (50-100 bp) segments in the mammalian genome that is under (purifying) selection can be estimated to be about 5%. This proportion is much higher than can be explained by protein-coding sequences alone, implying that the genome contains many additional features (such as untranslated regions, regulatory elements, non-protein-coding genes, and chromosomal structural elements) under selection for biological function.
Jehu
Jehu (#43): It has been over 2 years since the paper was published and I can’t find any evidence that anybody has even suggested a function. DaveScot (#53) I’m very surprised that a lot more research into this hasn’t been undertaken in the intervening two+ years.
In a recent paper (Pennacchio L.A. et al. (2006): In vivo enhancer analysis of human conserved non-coding sequences. Nature 444(7118):499-502) Rubin’s group presents evidence that 45% of ultra-conserved sequences they have analyzed “functioned reproducibly as tissue-specific enhancers of gene expression at embryonic day 11.5”. Thus one can hardly claim that ultra-conserved sequences lack functions. sparc
The longer the DNA string, the higher the percentage conserved, or the longer the time involved, the worse it is for the Darwinist. The immediate reproductive benefit must be very high to offset the cost of conservation here. It seems to me that in order for such a long section of code to be conserved for so long you need to have a much higher immediate reproductive advantage than .0001%. fifthmonarchyman
PaV says: Does this constitute an ID experiment? I say: you bet it does. And a cheap one at that; I think it could be conducted purely by computer. So much of the data has already been collected, and so much of it is purely mathematical and not open to interpretation. We already have:
* the background mutation rate
* the theoretical number of generations between the two species
* the minimum information content in the ultra-conserved DNA
* the minimum benefit necessary for natural selection to conserve the strand in question
It looks like all you need to do is devise a formula and plug in the numbers. We might even be able to use Dembski's filter. This is all over my head but it should be easy for the math geeks here. fifthmonarchyman
I should mention that this experiment was brought up at Panda's Thumb quite a while back, and they just turned their noses up at it. http://www.pandasthumb.org/archives/2005/11/we_are_as_worms.html Go to post #60967. PaV
Jehu: "But how much benefit must a mutation confer in order for it to be fixed?" I was asking myself the same question. I think it's actually quite low. However, this kind of knock-out experiment, combined with what we're finding about about siRNA and regulatory functioning of such RNA's, I believe, throws all of the mathematical basis for the Modern Synthessi for a loop. The mathematics developed, of course, when it was thought that "genes" entirely determined organisms. Well, it appears there's much more to phenotypes than the simple expression of genotypes (understood as the "coding" portions of DNA). This is a brave new world. Dave Scot: "You’ve got the right idea though. Insects probably have CNGs like these in common with arthropods and those would be two good candidates for comparison with much faster life cycles than vertebrates. " Does this constitute an ID experiment? :) PaV
jehu It doesn't have to confer any benefit to become fixed. Benefit helps it become fixed, but a useless mutation can become fixed simply because that cell line was strong for some other reason or just lucky. The point here isn't how the CNGs became fixed. It's how they were conserved for tens or hundreds of millions of years without being blasted into unrecognizability by random mutations. Unless some important and somewhat immediate function can be found for all these CNGs, the inevitable conclusion is there's something other than natural selection at work conserving them. The front loading hypothesis predicts that some mechanism for conservation other than natural selection must exist to conserve DNA with no immediate purpose; otherwise information stored for future use would be destroyed before it was actually employed. DaveScot
Dave, thanks. I actually had the opportunity to see the Rosetta Stone up close in the UK, led by a professor of history on a walking tour. It was a fascinating walk back through time.

re: testing... what about sea urchins? I've been fascinated by sea urchins ever since I found they're in the same deuterostome group as us humans. To me, that is as weird as voles. I don't remember anyone posting the following info, but I thought it just as interesting in relation to comparative testing. And it appears to have been a favorite for some time now. Sea urchins share 7,000 genes with humans, including those linked to Parkinson's and Alzheimer's. Plus, "Another surprise is that this spiny creature with no eyes, nose or ears has genes involved in vision, hearing and smell in humans" http://www.eurekalert.org/pub_releases/2006-12/uocf-sug120706.php

I wonder how they match up with the mouse and human genome regions tested now? And what is predicted by evolution? "Sea urchins are echinoderms, marine animals that originated more than 540 million years ago." Well, I just found this: 70% of the sea urchin genome matches the human genome, in comparison to 40% of the fruit fly's. The project was coordinated by Baylor, btw, with over 200 scientists working worldwide. http://www.livescience.com/animalworld/061109_urchin_relatives.html

The tree is looking rather bushy. This does not compute to a normal mind, does it? I'm being asked to believe a sea urchin matches better with me than a fly, which is not something I want to resemble anyway. But the absurdity of current genome statistics tells us this does not compute at all in terms of morphology. At least the fly has fully expressed legs and eyes. Flash presentation on the sea urchin and other papers: http://www.sciencemag.org/cgi/content/summary/sci;314/5801/938a

I don't see how NDE can get away from the bushy aspect of multiple lifeforms springing up out of sea and land. Michaels7
bFast
My discussions with Mesk over at telicthoughts (he’s clearly an evolutionist, but he’s got some humility in his bones, and a scientist’s curiosity) suggest this is not so. He suggests that if a mutation has a deleterious effect as small as 0.0001%, it will not fix in the population. If it doesn’t fix in the population, it will be weeded out.
But how much benefit must a mutation confer in order for it to be fixed? Also, what does .0001% deleterious mean? Does it mean it prevents 1 specimen in 10,000 from reproducing? If that is what it means, I don't believe the figure. I would want to see some experimental data to support it and not just circular reasoning that attempts to justify what we observe in the genome with NDE or "evo-devo" theory. Jehu
Fifth, mice breed like rabbits. They made 250 of the things before they published these findings. However, Mesk over on TT suggests that the real test would be a more natural environment than a lab. I would suggest taking knock-out mice, and non-knock-out mice of similar breeding, and putting them in an old barn with a cat and an owl. Let's see if after a year or two the non-knock-outs dominate over the knock-outs. That would be an interesting study that even I could do -- given grant money, of course. Hmmm, I think I know where I can get the barn. bFast
fifth Bacteria don't have junk DNA so this really doesn't apply to them. Fruit flies don't have these CNGs in common with vertebrates. Fish and birds have a number of them in common with mammals. You've got the right idea though. Insects probably have CNGs like these in common with other arthropods, and those would be two good candidates for comparison, with much faster life cycles than vertebrates. That said, there's probably more of practical interest to be found in figuring out what these CNGs are doing in mammals, as it could lead to finding cures for disease and genetic disorders in humans. DaveScot
"He suggests that if a mutation has a deleterious effect as small as 0.0001%, it will not fix in the population." Fine we experiment with bacteria or fruit flies instead of mice and this can still be settled quickly. In the mean time I would expect to see special reports on Fox News and in The New York Times. This is a big deal. fifthmonarchyman
fifthmonarchyman The plain consensus is that any conserved DNA sequence between organisms that diverged in the distant past must have an "important" function. Greater or lesser conservation of conserved sequences in the same two species implies either greater or lesser tolerance to mutation while still maintaining function, or greater or lesser importance, or both. Ultra-conserved DNA implies both importance and intolerance to change.

The sequences in question here are highly conserved but not ultra-conserved, if by ultra-conserved we mean greater than 95% similarity in sequence. These are not coding genes so we don't know if synonymous substitutions are possible or not. It would appear not, as the sequences exhibit far more absolute similarity than do coding genes between mice and men.

If just a few highly conserved regions were knocked out and nothing happened to the mouse it wouldn't be a big deal, but in this case a thousand highly conserved regions were knocked out and nothing happened. If there isn't important function associated with these regions then there's a mechanism other than natural selection at work conserving DNA sequences. The jury is still out on what naturally selectable advantage all these conserved bits have, but it's looking rather grim right now for finding anything important. DaveScot
bfast "Immediate" in an evolutionary sense can be a number of generations. The difference between a day and a thousand years is next to nothing on an evolutionary timescale. Thus the difference between a mutation that causes spontaneous abortion and one that causes a genetic disorder that impairs but doesn't kill the organism will both be "weeded out" in an eyeblink of evolutionary time. DaveScot
m7 "1) Are regulatory functions uniform in mice and men?" To a large extent, but less so than coding genes. "2) Are large areas of non-coded regions part of any search space besides the copying process which is essential to reproductive inheritance?" IIUC the answer is yes. "3) Besides number of base pairs deleted without function in the mice. Is it not also another problem for possible search space issues by regulatory functions? Or is the search simply halted or abbreviated by a specific type of "stop codon" in a particular location?" If something is badly broken in a gene or its regulatory region such that it is either no longer expressed or kills the organism when it does express, then searching, if I understand what you are saying, necessarily halts, as part of the "search" is testing new formulations or expression patterns. "4) What is the economic cost of efficiency considered for keeping and copying such large spaces of non-coded regions if they serve no purpose?" The cost is size of the cell and speed of reproduction. For organisms that rely on great numbers of progeny produced in a small amount of time (prokaryotes) the economic cost of DNA baggage is extreme, and they use just about every last bit of sequence and squeeze it down to the smallest possible size. Eukaryotes appear to have a lot more flexibility in that regard. Some simple organisms have much more DNA than we do. Amoeba dubia, a huge single-celled organism, has 200 times more DNA than we do. The plants and animals with huge amounts of DNA all seem to thrive just fine with all that baggage. There's a readily identifiable pattern we know about for genes (the genetic code) which makes them easy to understand. There is no genetic code for non-coding regions that we know about, so we don't have a very good idea yet how they operate. The genetic code is like a Rosetta Stone for coding genes. For prokaryotes that's really all we need. 
Prokaryotes have virtually no non-coding regions in comparison to eukaryotes. For eukaryotes we need to find more Rosetta Stones to decipher what's going on in non-coding regions. The coding gene is king in the prokaryote world. In our world, variations in coding gene expression driven by non-coding instructions are king. DaveScot
fifthmonarchyman
If I understand Evo-Devo correctly in order for something to be functional it must provide immediate reproductive advantage. No other function counts.
From my discussions with Mesk over at telicthoughts (he's clearly an evolutionist, but he's got some humility in his bones, and a scientist's curiosity), this is not so. He suggests that if a mutation has a deleterious effect as small as 0.0001%, it will not fix in the population. If it doesn't fix in the population, it will be weeded out. Contrary to what you have stated, he shows that disease-producing mutations, for instance, can persist for a significant number of generations, but still find themselves weeded out of the genome. bFast
Dave, Some simple questions from the peanut gallery. I don't understand how all the regulatory functions work with regards to non-coding regions. Geneticists have discovered some non-coded areas do get expressed, by different mechanisms in the human genome, correct? And some are tied to disease. So... 1) Are regulatory functions uniform in mice and men? 2) Are large areas of non-coded regions part of any search space besides the copying process which is essential to reproductive inheritance? 3) Besides number of base pairs deleted without function in the mice. Is it not also another problem for possible search space issues by regulatory functions? Or is the search simply halted or abbreviated by a specific type of "stop codon" in a particular location? 4) What is the economic cost of efficiency considered for keeping and copying such large spaces of non-coded regions if they serve no purpose? I comprehend the copying mechanism for transfer during reproduction matters of inheritance. But it appears regulatory functions are still quite an enigma, or they're growing in scope and complexity. From a programmatic perspective, I can entertain the thought of code compressed and packaged, but left unused until the correct licensing key is utilized to unlock more code. From the viewpoint of someone unfamiliar with this process, they'd see the code and think it is "junky" and inefficient. From a practical perspective as a delivery mechanism from the intelligent coder, however, it is not. When a user decides to license a new program, the key they purchase unlocks the existing code delivered earlier in compressed format, which then unfolds and communicates with existing internal and external programs, updates according to the environmental surroundings (PC type, hardware, MS version) and distributes itself. That is strictly a commercial perspective. But, then... to extend this to nature. A DNA "program" is delivered in multiple areas across distant shores. 
The core has components unfold as different nutrients allow them to proceed. Each unfolds differently based upon the input factors of surrounding environments. So, color, length, height, beak size, etc., are all reactive to the environmental factors as to how the "coded" regions unfold. Whereas the non-coded regions exist as contingencies, to unfold more in the future given new input. But even this does not explain some issues like sexual differences. Like the horned beetle: male and female both need the horn at birth, but the female loses it afterwards. Hat tip: creationsafaris.com. So, the last question that must be asked: why is this not perceived as "vestigial" DNA? Michaels7
“the function of ultra-conserved elements is the subject of a raging debate.” OK, if that is true we need to hold the opposition to some clear definitions of what is required for super-conserved DNA to be considered functional. The Darwinist does not have the same freedom that ID does in ascribing function to a section of DNA. If I understand Evo-Devo correctly, in order for something to be functional it must provide immediate reproductive advantage. No other function counts. This sort of thing could easily be tested. Simply put the knockout organism in a controlled environment with wild stock and see what happens. The function or lack thereof could be determined in one generation. After all, natural selection only sees the present generation and cannot look to the future. Any “function” not seen in the first generation is by definition “telic” and we win. This sort of experiment could be done with frogs or fish or anything else with super-conserved DNA. We could know the results in a matter of months, not years. It almost sounds like a high school science project. fifthmonarchyman
Unlikely but it should be considered. A retrovirus that can jump from mice to men should have little problem jumping to any mammal. One that can jump from birds to men (re avian flu virus) even less problem infecting a wide range of mammals.
From what I've read on the subject retroviruses can have target preferences so I'd consider that an open possibility for explaining the homology present here. Patrick
"But once we deeply imbibe the fundamental truth that an organism is a tool of DNA, rather than the other way around, the idea of 'selfish DNA' becomes compelling, even obvious." It's really that easy to drink in... Thus is the bottom-up, materialist strategy unleashed in a survival-of-the-fittest DNA mode. As multi-cellular organisms, we serve DNA, ultimately some tube worm I suspect that morphed into FSM. Then it is reasonable we should kneel to the high priest of DNA, like Dawkins, and become drunk with his glorifying the spirit of DNA. Now I know where Dawkins's authority comes from. Our bodies, our minds, our souls serve Gaia-FSM, DNA goddess of scientism, and the high Priest of Genetics, well, in this case, a Zoologist. Michaels7
DaveScot #87, I did not at all dismiss the possibility of gene flow. I said that conservation would leave a signature that gene flow would not. If you recall from Denton's discussion of the cytochrome C gene (if not, you really need to read that book), the mutational distance between organisms produces a (near) perfect mapping of the phylogenic tree. (Chimp is close to man, mouse is somewhat different, tree is more different, amoeba is way different.) If we had been attacked by rabid mosquitoes, say 3 million years ago, the phylogenic tree would still not show up. Chimp DNA would be as different from man's as the mouse's is. #88, I think that there are two factors in the level of conservation. Certainly the importance of the gene is one of them. However, I think that the "precision" of the gene is actually a greater factor. If one gene is highly important (an essential ATP synthase gene, for instance) and another gene is much less important (say it allows us to see color), and if the former gene works just as well with a variety of mutations while the latter gene ceases function no matter what the change, the latter will drift more slowly. fifthmonarchyman, "what am I missing." If Mesk, over on Telic Thoughts, is correct, then on this issue, as on the ultra-conserved gene question, "the function of ultra-conserved elements is the subject of a raging debate." We must not let 'em forget this one, but science does have to wiggle and squirm a bit before they are content that they have explored every avenue to save their baby. I think that's fair. bFast
Yes, I'd like to know what's going on, too. Douglas
What am I missing? It looks to me like this is the death blow to Neo-Darwinism. This is the bacterial flagellum times one thousand. Yet no other ID website that I know of has picked up on it yet, and no one on the other side has seriously tried to discredit this. What is going on? fifthmonarchyman
ds: "Yes, it does indeed follow when the theory of natural selection is applied to the observations. This is the basis of natural selection. The less critical the sequence is to survival the less selection pressure to preserve it, the more critical the more pressure." Wait. Richard Dawkins now says:
Cows and pea plants (and, indeed, all the rest of us) have an almost identical gene called the histone H4 gene. The DNA text is 306 characters long... Cows and peas differ from each other in only two characters out of these 306. We don't know exactly how long ago the common ancestor of cows and peas lived, but fossil evidence suggests that it was somewhere between 1,000 and 2,000 million years ago... Histone H4 is vitally important for survival. It is used in the structural engineering of chromosomes... The histone gene's conservatism over the aeons is exceptional by genetic standards. Other genes change at a higher rate, presumably because natural selection is more tolerant of variations in them. For instance, genes coding the proteins known as fibrinopeptides change in evolution at a rate that closely approximates the basic mutation rate. This probably means that mistakes in the details of these proteins (they occur during the clotting of blood) don't matter much for the organism. Haemoglobin genes have a rate that is intermediate between histones and fibrinopeptides. Presumably natural selection's tolerance of the errors is intermediate. Haemoglobin is doing an important job in the blood, and its details really matter; but several alternative variants of it seem capable of doing the job equally well. (The Blind Watchmaker (1986), p. 123-125.)
j
j I'm glad you added the laughing icon. At least one fatal flaw in applying selfish genes to this problem is that the sequences in question aren't genes. DaveScot
geomor: "The point of my last comment was that, merely from the observation that sequence X is conserved at higher percent identity than sequence Y, it does not follow that sequence X must have a 'more important' function than sequence Y, and vice versa." Yes, it does indeed follow when the theory of natural selection is applied to the observations. This is the basis of natural selection. The less critical the sequence is to survival, the less selection pressure to preserve it; the more critical, the more pressure. If you don't know that, I question whether you should be contributing to this thread. DaveScot
bfast re horizontal gene flow Unlikely but it should be considered. A retrovirus that can jump from mice to men should have little problem jumping to any mammal. One that can jump from birds to men (re avian flu virus) even less problem infecting a wide range of mammals. DaveScot
Richard Dawkins has the resolution of our dilemma. This DNA doesn't need to have anything to do with function in the real world, just so long as it's selfish!:
While awaiting evidence for and against [other hypotheses], the thing to notice in the present context is that they are hypotheses made in the traditional mold; they are based on the idea that DNA, like any other aspect of an organism, is selected because it does the organism some good. The selfish DNA hypothesis is based on an inversion of this assumption: phenotypic characters are there because they help DNA to replicate itself, and if DNA can find quicker and easier ways to replicate itself, perhaps bypassing conventional phenotypic expression, it will be selected to do so. Even if the editor of Nature (Vol. 285, p. 604, 1980) goes a bit far in describing it as 'mildly shocking', the theory of selfish DNA is in a way revolutionary. But once we deeply imbibe the fundamental truth that an organism is a tool of DNA, rather than the other way around, the idea of 'selfish DNA' becomes compelling, even obvious. (The Extended Phenotype (1982), p. 158.)
:lol: j
DaveScot, What you wrote about synonymous substitutions sounds right to me. I'm not sure exactly what message we were to take from it. The point of my last comment was that, merely from the observation that sequence X is conserved at higher percent identity than sequence Y, it does not follow that sequence X must have a "more important" function than sequence Y, and vice versa. Jehu, | With a mutation rate of 2.22 x 10^-9 per year per base pair, the chances of having sequences of 100 bp with 70% identity by chance are slim, especially after 70 - 90 million years. By those exact numbers, you'd expect most of the genome to be greater than 70% identity between mouse and the human-mouse ancestor. Just multiply 2.22e-9 subs/site/yr by 90 Myr = 0.2 subs/site, or 80% identity. Between human and mouse (140-180 Myr of divergence), you'd expect tens of thousands of 100-bp sequences to have 70% or better identity in a 3e9-bp genome. That one is a little more complicated to work out. Anyway, I'm not trying to argue that there were not many "truly" conserved sequences in the deleted regions -- this discussion was a sidetrack about why we need to keep sequencing genomes in order to distinguish the conserved sequence with greater precision, so that we know what we're going after. Again, NDE's prediction here remains that the conserved deleted sequences are functional and have conferred a selective advantage. The Nobrega paper is certainly some evidence against this prediction, and that's fair to point out for the time being. To investigate the question further, we'll need a better general understanding of the biological mechanisms that involve conserved non-coding sequences, which is more or less what most genome scientists are working on these days. We might succeed or we might fail. GeoMor
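As a sanity check, GeoMor's back-of-envelope arithmetic can be sketched in a few lines (a hypothetical illustration, not from the thread; the substitution rate and divergence times are the ones quoted above, and the Jukes-Cantor correction is a standard textbook model added here for comparison):

```python
import math

# Figures quoted in GeoMor's comment: ~2.22e-9 substitutions per site
# per year, ~90 Myr from the human-mouse ancestor to mouse, hence
# ~180 Myr of total branch length separating human from mouse.
RATE = 2.22e-9

d = RATE * 90e6              # ~0.20 expected substitutions per site
naive_identity = 1 - d       # ~0.80, GeoMor's figure (ignores multiple hits)

def jc_identity(d):
    """Expected fraction of identical sites under the Jukes-Cantor
    model, which allows multiple hits and chance re-matches."""
    return 0.25 + 0.75 * math.exp(-4.0 * d / 3.0)

one_branch = jc_identity(d)            # ~0.82 vs. the common ancestor
pairwise = jc_identity(RATE * 180e6)   # ~0.69 human vs. mouse
```

On either version of the model, a run-of-the-mill neutral sequence lands near 70-80% identity, which is GeoMor's point: a 70% bar alone cannot separate selection from background.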
#72, Dave, When I initially posted here I was compared to a "crackpot" for thinking there were libraries in the genome. But then, I was taking it one step farther, seeing them as active, functional libraries carried along with each species as it interacts with the environment. And mostly what we see now is deleterious. What we started with was a fully functional unfolding plan. What we see today is a hodge-podge of plans and mutations eating away at the original blueprint. We see this most pronounced through the embryo development of crack babies who have to forever be put on some form of amphetamine to keep them calm and focused. It is an immediate impact of "evolution" if ToE advocates want to call it such. Reading Spetner's notes re: repositories in Not by Chance, and now this, is encouraging to say the least. Michaels7
But as I suspect you know it doesn’t quite work out that way because different proteins acquire synonymous substitutions at different rates in the same species. There’s a whole cottage industry sprung up trying to calibrate molecular clocks so they agree with one another.
Would that be the special pleading industry? Jehu
Jehu:
I expect they will argue that some sequences are not subject to random genetic drfit like the other sequences.
And if they can come up with a GOOD explanation, fine enough. I've rolled dice often enough to suggest that they have their work cut out for them. However, they have had two years. They need to get to work to save their baby!! bFast
geomor Your spin isn't working. A thousand sites ranging from average to very highly conserved had the plug pulled and nothing happened. Deal with it. Me, you, Rubin and most of those on this thread understand synonymous substitutions in coding genes and what that means for comparative purposes. You underestimate the difference in coding genes because of them. There are 64 possible codes for 20 amino acids. Take an example of a protein with two monomers:
monomer A - can be either code 1, 2, or 3 through degeneracy
monomer B - can be either code 4, 5, or 6 through degeneracy
Only 2 unique polymers can be generated: AB and BA. Yet there are 9 unique codes that can generate each polymer. If you compare two of those codes through sequence data alone you can get a 0% match among the 9, yet they generate exactly the same protein. Taking into account synonymous mutations in comparing protein coding sequences is a necessity; otherwise they'd be all over the map and wouldn't make any sense at all. However, synonymous substitutions in coding genes should (in theory) be quite accurate molecular clocks for divergence when the same protein is compared between species. But as I suspect you know, it doesn't quite work out that way, because different proteins acquire synonymous substitutions at different rates in the same species. There's a whole cottage industry sprung up trying to calibrate molecular clocks so they agree with one another. *Edited to correct typo in second monomer list. DaveScot
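DaveScot's abstract monomer example can be made concrete with real codons (a toy sketch of code degeneracy; serine is chosen here only because it is one of the most degenerate amino acids, with two unrelated codon families):

```python
from itertools import product

# The six real codons for serine, split across two codon families:
SERINE = ["UCU", "UCC", "UCA", "UCG", "AGU", "AGC"]

def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# A Ser-Ser dipeptide can be spelled 6 x 6 = 36 different ways:
spellings = ["".join(p) for p in product(SERINE, repeat=2)]

# Two of those spellings share not a single base, yet both encode
# exactly the same protein -- a 0% sequence match:
zero_match = identity("UCGUCG", "AGUAGU")   # 0.0
```

This is the point about why raw percent identity understates the similarity of coding genes: synonymous substitutions can erode sequence identity without touching the protein at all.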
I am waiting for the special pleading from the Darwinists to start. I expect they will argue that some sequences are not subject to random genetic drift like the other sequences. Jehu
Finally, to the problem of distinguishing conserved sequences. You can construct a mathematical model of sequence evolution and work out how many sequences of length N you expect to be preserved just by chance in a 3-billion base genome. With sophisticated models and only two mammalian genomes, the number is quite large even for N on the order of 100. There is a great figure on this in the original mouse genome paper, which is the basis for the oft-repeated claim that about 5% of the human genome is under selection. It’s not that there is only 5% of the sequence shared: it’s much higher than that. 5% is the excess amount of conservation above what you would expect by chance. We need more mammalian genomes to continue intersecting (a crude terminological approximation there) to distinguish what is really under selection.
Basically you are saying the identity might be a coincidence and the genes are not actually conserved. Well, I respect the fact that at least you are taking a shot at it. However, my literature search hasn't shown anything where it is suggested that sequences of a minimum of 100 bp and 70% identity might be the result of homoplasy instead of homology. At this point I don't find your position credible. With a mutation rate of 2.22 x 10^-9 per year per base pair, the chances of having sequences of 100 bp with 70% identity by chance are slim, especially after 70 - 90 million years. It gets even worse when the gene has > 90% identity, is > 400 bp long, and is conserved in chicken, frog, mouse, and human, as some of the sequences were in the Nature article. I have to conclude that this is strong evidence that highly conserved sequences can have no function. And apparently, as discussed above, even ultraconserved sequences can have no function. Jehu
The conceptual principle that "more important function means better conserved" is basically right. However, it is not necessarily the case that "importance of function" correlates with conservation as measured by sequence percent identity (or similar metrics based solely on the sequence), which is how the conserved elements have been defined. For example, many transcription factors bind DNA through physical contact at several locations spatially separated by perhaps 3-12 bases. This means that the DNA is constrained to be a certain sequence at several "footprints", interleaved with stretches where there may be only very weak constraints. Even on the footprints, DNA-binding proteins can be more or less picky about what the sequence there has to be; they may need certain bases to be only A or C, for example, and the binding efficiency is pretty equivalent. These are just a few examples of a general principle, which indicates that strength of selection need not correlate with sequence identity. To pick up the TF example, a modestly important factor may only bind a very specific sequence, which will therefore be preserved at high identity, while a crucial factor may bind a rather degenerate sequence, which is therefore free to vary somewhat. In this case, "percent identity" is dictated both by strength of selection on the factor's activity and the biophysics of the factor itself, and you can't gauge the former without specifically accounting for the latter. TFs are just one example here of a very general principle; two protein-coding genes, for example, need only be about 70% identical at the DNA sequence level to code for exactly the same protein, because of degeneracy in the genetic code. We can write some programs that sort of distinguish what is conserved and what is not, but without specific mechanistic knowledge of what processes use the sequences, we cannot correlate sequence identity with importance of function. 
Thus, NDE's prediction is only that there is a function for the conserved sequences that confers "sufficient" selective advantage, without any claim about how advantageous it is. Finally, to the problem of distinguishing conserved sequences. You can construct a mathematical model of sequence evolution and work out how many sequences of length N you expect to be preserved just by chance in a 3-billion base genome. With sophisticated models and only two mammalian genomes, the number is quite large even for N on the order of 100. There is a great figure on this in the original mouse genome paper, which is the basis for the oft-repeated claim that about 5% of the human genome is under selection. It's not that there is only 5% of the sequence shared: it's much higher than that. 5% is the excess amount of conservation above what you would expect by chance. We need more mammalian genomes to continue intersecting (a crude terminological approximation there) to distinguish what is really under selection. This is all very much more complicated than I've explained it, but I hope it is interesting. GeoMor
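GeoMor's "expected by chance" argument can be illustrated with a crude binomial null model (a hypothetical sketch, not the thread's own math; the models in the mouse genome paper GeoMor cites are far more sophisticated than independent-site coin flips):

```python
import math
from math import comb

def p_window_clears_bar(n, bar, p):
    """Probability that an n-bp aligned window shows at least `bar`
    fractional identity when each site independently matches with
    probability p (a simple binomial null model)."""
    k = math.ceil(bar * n)
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# At ~70% background identity (roughly the neutral expectation for
# alignable human-mouse sequence), a random 100-bp window clears a
# 70% bar about half the time -- so windows merely meeting that bar
# are abundant by chance alone:
common = p_window_clears_bar(100, 0.70, 0.70)   # ~0.54

# A 95% bar ("ultra-conserved" by the stricter definitions) is
# essentially never met by chance under the same null:
rare = p_window_clears_bar(100, 0.95, 0.70)     # < 1e-8
```

This is the sense in which the oft-quoted 5% figure counts only the excess conservation above the chance expectation, and why sequencing more genomes tightens the estimate.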
bFast,
There is no way on God's green earth that the RM+NS model can support "ultraconserved" for over half a billion years when there is no function (fish/amphibian common ancestor * 2). This data seems to be clearly understood within the halls of science. Why are they not admitting that the theory so badly does not fit the data! This is a conspiracy that makes the Galileo thing look pretty insignificant.
I agree this data contradicts the NDE paradigm and I don't see how "evo-devo" can account for it either. IMO the data is shocking. I think the only available answer for Darwinists is to disbelieve the results, not unlike the situation with the discovery of soft tissue in a dinosaur bone. I found a discussion outline sheet for a Ph.D. course in genomics at Stanford. It seems to question whether the gene knockouts were really achieved but makes no mention of the conservation of nonfunctioning genes. I wonder if it came up in the class? I would have liked to have heard that discussion. http://www.stanford.edu/class/bio203/NobregaPollard.pdf Jehu
Jehu:
Those two studies tested in vivo a total of 36 ultraconserved non-coding sequences and did not find function in 9 of them, or 25% of the ultraconserved sequences.
Wow! I don't care how many ultra-conserved sequences have function; if any of the ultraconserved sequences do not have function, RM+NS is DOOMED!! There is no way on God's green earth that the RM+NS model can support "ultraconserved" for over half a billion years when there is no function (fish/amphibian common ancestor * 2). This data seems to be clearly understood within the halls of science. Why are they not admitting that the theory so badly does not fit the data! This is a conspiracy that makes the Galileo thing look pretty insignificant. bFast
bFast: "What I really want to see is a mouse where one of the 'ultra-conserved' non-coding segments is knocked out. If you can knock out the ultra-conserved stuff too, it'll just be more nails in the ol' coffin." The authors address your question in that paper: "The small fraction of elements conserved across several vertebrates but not fish with enhancer activity in this study (1 of 15) contrasts with the results obtained when human-fish conserved non-coding sequences were previously tested with the same in vivo assay. In those studies a significant fraction of human-fish conserved non-coding sequences present in gene deserts were shown to be functional, with 5 of 7 elements in one study and 22 of 29 in a second study." ofro
In regards to ultraconserved sequences, the Nature article makes reference to two previous studies that could not find a function for ultraconserved sequences conserved between human and fish. Those two studies tested in vivo a total of 36 ultraconserved non-coding sequences and did not find function in 9 of them, or 25% of the ultraconserved sequences. That is really something. Jehu
DaveScot, very interesting. The world sure looks brighter thru the eyes of engineers. mike1962
mike re comment 65 Or perhaps a much bigger library was there in the beginning and as phylogeny unfolded according to a PLAN the plans were reduced so that the radiation terminated in organisms with little potential for further diversification. Or maybe in the vast number of living things not sequenced, salamanders (for instance) with a genomes many times the size of mice and men might have all the plans in them for all the major taxonomic groups that followed them, with genome size being reduced as potential for further diversification was reduced. Maybe those are living repositories. Gene sequencing is still so expensive no one I've read has sequenced a genome with an enigmatic c-value. Is evolution still happening today beyond the generation of closely related species and sub-species? Nobody knows. Evolution works too slowly to confirm that. DaveScot
PaV, there is a bunch of evolutionary biology that is not challenged by these findings. Evo-Devo, particularly, is not a very RM+NS-bound topic. There is nothing about these findings that challenges common descent. These findings only address one core issue -- the role of random mutation and natural selection in evolution. The problem with this core issue is that the only alternative even close to the table is, well, telic. DaveScot, back in post #47, you suggested the possibility of "horizontal DNA flow" explaining these findings. If this were so, then the "preserved" data would not present the phylogenic tree. This could be checked easily enough. If in these regions the chimp and the human are more highly preserved than the mouse and human (throw in a couple of other test points, a dog maybe, for good measure), then this possibility would be ruled out. bFast
Lurker: As I posted earlier, I think that the (no longer Darwinists) evolutionists have moved onto "evo-devo" (Allen MacNeil), and will probably say (as did MacNeil) that the trick is in how all of this genetic stuff "develops" over time and how they need a theory of developmental biology. The next step is to simply say that Darwin talked about the importance of "embryology", and that's what "evo-devo" is all about anyways. And they'll simply go on their merry way........ PaV
Lurker, cut the defeatist attitude, we've got 'em tight with this one. Either function must be found, or preservation happens despite no natural selection. That's all. Their best response to this one is to shut up an' hope it goes away. bFast
Just as NDE theory has morphed to account for all situations - evolution is gradual, except when it's rapid or static - I suspect the theory will just morph again without ever admitting defeat. Maybe they'll call it the "New and Improved NDE Theory (now with twice the dogma)", or perhaps "NDE Theory v12.0". Lurker
geomor Point taken on ultra-conserved. I note Rubin said none were ultra-conserved. Is there a standard or consensus definition of what % identity is "ultra-conserved"? I did a quick look and found as low as 87% called ultra-conserved. The CNGs deleted by Rubin were noted as at least 70% identity in all of them, but a figure for what qualifies as ultra-conserved wasn't specified. Some definitions of ultra-conserved are 99%. What cutoff did Rubin use? Rubin refers to Bejerano for this, and that's a bottom cutoff of 95% for "ultra-conserved". The graphs in the Nature report of the % matchup have very many of the discrete sequences apparently reaching 90% or more but none quite touching the 95% match. So it appears that "ultra-conserved" depends on who you ask. Clearly quite a few sequences Rubin deleted border on even the stricter definitions of ultra-conserved. If it were only one or even ten of the more highly conserved sequences, one might only puzzle over it and not be really surprised. When it's hundreds of very highly conserved bits plus more hundreds at genomic-normal conservation (70%), it's startling in the extreme. Rubin was amazed, as well he should have been. DaveScot
...and if they haven’t been productive you’re going to be demoted to lurker status. I resemble that remark. ;) [moderator] smartass :razz: Lurker
"If mice and men had a common ancestor many millions of years ago and they still have highly conserved DNA in common, the story follows that all the conserved DNA must have an important survival value." Or perhaps (gasp) common descent is a crock, and some designer(s) came up with all the body plans, and shared various components from a "library", and the "junk" just happened to be in the library as filler, etc. Think OOP. Boy would I love to toy with that library. Seems like it would be fun populating a planet with various lifeforms that interact. Hmm, I wonder if the "angels" (read: extraterrestrial brainiacs) did that on this planet? Seems I read in the Talmud that this is exactly what happened. Hmm. mike1962
No wonder the Modern Evolutionary Synthesis is being superseded. Hmmm, along with the vole info posted here before (by Gil?)... http://www.medicalnewstoday.com/medicalnews.php?newsid=51906 This is turning into a conspiracy against science! Monty Python was right all along; our world was built by mice! Michaels7
DaveScot, they're reelin' and a-wrigglin' -- or at least they should be. bFast
geomor "NDE does not imply it" Natural selection surely implies that the more important the sequence, the better it is conserved. "The nonfunctionality of these sequences has not been proven, and NDE's prediction very bluntly remains that they are functional, despite the evident fact that they are not essential." You bet the prediction is blunt. I fail to see why in the first paragraph you said NDE doesn't imply this. "Functional" is understated. Natural selection predicts the more functional, the better preserved. The preservation of the mouse/man regions is exceedingly high. By every measure natural selection predicts mutations in these regions to be grossly intolerable. Rubin's amazement, I'm sure, wasn't exaggerated or unjustified. A correspondingly vital function is indicated. So far no one has found it. This is a profound issue to resolve. "Well, we are still sequencing genomes just to identify all the conserved sequence in mammalian genomes, let alone track down what every last bit does." The mouse and human genomes are completely sequenced. I recently did a light survey of genomic analysis software to see what it costs to get into the business of data-mining the genome banks. It ain't much. There's no shortage of software that can compare the mouse and human genomes to isolate highly conserved sequences, eliminate the known non-junk DNA, and measure the distance from other known functional DNA. These regions were described as genetic deserts of junk DNA; IIRC one swath was a million base pairs. With resources like BLAST running in distributed-processing server farms and the software free to download (much of it open source), there doesn't seem to be any problem finding conserved sequences. After all, Rubin found a thousand highly conserved sequences with no known function this way, and others have already surveyed at least a dozen mammals, fish, and Drosophila to see the extent of it. IIRC there was fish/mammal conservation but no Drosophila/mammal conservation.
I'd say this deserves a database of its own like the c-value enigma and of course the more genomes surveyed for mysteriously conserved junk DNA the better we should be able to get a handle on its purpose. I wrote elsewhere about this (could've sworn it was here though) the cost of sequencing 3 billion base pairs with a tolerable degree of accuracy for data mining is down close to $100,000 as reported in a recent article in Scientific American. The Archon X Prize goes to whoever gets it down to $10,000 and the U.S. gov't is funding a program to get a human genome down to $1000. That's about as much as an MRI scan. In short, affordable to a vast number of people as a diagnostic tool for medicine. An effort I fully support. DaveScot
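The comparison described above (isolate stretches of at least 100 bp at 70%-plus identity from a pairwise alignment) boils down to a sliding-window identity scan. Here is a toy sketch of that scan, assuming an ungapped, pre-aligned sequence pair; a real pipeline would first produce the alignment with BLAST or a whole-genome aligner, and the window/identity thresholds are just the figures cited in this thread:

```python
def conserved_windows(seq_a, seq_b, win=100, min_identity=0.70):
    """Scan an ungapped pairwise alignment and report every window of
    `win` bases whose identity meets `min_identity` -- the 100 bp / 70%
    criterion mentioned in the thread.  Returns (start, identity) pairs."""
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    match = [a == b for a, b in zip(seq_a, seq_b)]  # True counts as 1
    hits = []
    running = sum(match[:win])  # matches in the first window
    for i in range(len(match) - win + 1):
        if i > 0:  # slide: add the entering base, drop the leaving one
            running += match[i + win - 1] - match[i - 1]
        if running / win >= min_identity:
            hits.append((i, running / win))
    return hits

# Two identical 120 bp stretches: all 21 possible windows qualify
print(len(conserved_windows("A" * 120, "A" * 120)))  # 21
```

Reporting discrete conserved elements rather than overlapping windows would just mean merging adjacent hits, but the core of the data-mining is this simple.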
Sabre, that's how I see it. Chunkdz, if the thing is found in frogs, we're dealing with a mere 500 million years (250M ancestor to frog, 250M ancestor to mouse.) But whether it's 140M or 500M, in this case it doesn't make a hair of difference -- 20M might help. Douglas chimes in -- how 'bout 6000! bFast
bFast,
So these sequences are even more conserved between mice and men than protein-coding genes.
It appears to me that they have a different code than the protein-expressing "central dogma" genes; this code does not have the alternative spellings that are acceptable under the "central dogma." I should also point out that function in gene regulation has been demonstrated for many of these CNGs. Jehu
bfast, For the record, the article defines ultra-conserved as being conserved across mammals and fish. However, there were some large sequences conserved among birds, mammals, and frogs with 90% identity. From the article:
From the MU19 desert we picked five human-mouse conserved elements representing the most conserved sequences between these species (more than 180 bp, 90% identity) for the in vivo assay. The ten elements chosen from desert MMU3 (more than 400 bp, 90% identity) included all five sequences that are conserved across humans, rodents, chicken and frog, and five that are conserved across humans, rodents and chicken only.
It should be noted that of those 15 sequences, they found a small function in only one. I have mentioned that function twice before in this thread already, and suffice it to say there is no indication it provides selective benefit. Jehu
The paper indicates that some of the conserved sequences were conserved in frog genomes as well. Does this mean that we are talking about hundreds of millions of years of conservation, not just 70-90 million years? I'd like to see knockout experiments on frogs using the relevant sequences. chunkdz
DaveScot, perhaps the scientists in question suspect the answer such additional research would yield/confirm, and don't want to face it. Many people who suspect their cherished spouses are unfaithful will go to great lengths to avoid knowing for certain. So too might it be for a cherished belief. sabre
DaveScot, let me keep this discussion painfully honest; after all, we've got 'em this time, and we don't need to bungle it. The original publication, link in post #4, says:
Together, the two selected regions contain 1,243 human–mouse conserved non-coding elements (more than 100 base pairs (bp), 70% identity), also similar to genome averages, whereas no ultraconserved elements9 or sequences conserved to fish (more than 100 bp, 70% identity) are present.
If 70% conserved is the average, I'm still very interested to know how conserved the most conserved region is. bFast
The deleted sequences were not ultraconserved. They explicitly avoided deleting ultraconserved sequences. The deleted sequences fall into a category of conserved sequences less than "ultra" :-) GeoMor
darth "To immediately conclude that this is proof of genetic planning for the future is premature. That does not help convince people of the idea of ID." I said nothing about proof. That's a straw man. I explicitly suggested that this would be a good line of research for ID to undertake. I'm going to review the productivity of your previous comments here and if they haven't been productive you're going to be demoted to lurker status. Update: Darth is a new commenter who joined just yesterday. I've added him to the moderation list for now. I expect better than straw men in critical commentary, Darth. Don't do it again. DaveScot
sparc What you say is obvious, and any competent researcher would attempt to simulate a natural pathogenic environment for the modified mice. It's been over two years. No explanations for function in the missing DNA have been confirmed. How very interesting. Keep in mind these sequences aren't just conserved. They're described as ULTRA-conserved. More so than protein-coding genes. Now it might be that protein-coding genes are just more resilient in tolerating sequence variation without compromising their function, but that's still just a hypothetical, and the true nature of all this remains a mystery. In response to your description of a gene knockout where a mouse lived happily ever after unless exposed to certain pathogens, I'd respond that this knockout wasn't one isolated gene but over a million base pairs encompassing a thousand discrete conserved sequences. It's hardly comparable to a single coding-gene knockout in that regard. Furthermore, I read in some of the other articles that were dug up that a good fraction of these conserved sequences are present in fish and at least twelve other mammals. What pathogens are common between fish, mice, and men? I'm very surprised that a lot more research into this hasn't been undertaken in the intervening two-plus years. DaveScot
To be fair, the knocked-out DNA might be part of, say, a genetic repair mechanism such that the result of its deletion would be cumulative over many generations of mice, guaranteeing that the variant line absent the DNA would be driven to extinction. The repair mechanism would of course be shared by mice and men through common ancestry. One would hope that the variant line has been preserved in the lab so that it can be observed over the long haul for the emergence of serious problems. I should think that was almost certainly the case, as it's so obvious. Given there's been over two years for the line to breed and no subsequent reports of ailments in the lineage, there's probably nothing definitive in that regard so far. There's also an uncontrolled variable in such a line of inquiry, namely inbreeding. Because the deletion was performed on only a few individuals to make them homozygous for it, the known detrimental effects of inbreeding can be expected to show up. Distinguishing between inbreeding ailments and deletion ailments could be very problematic. DaveScot
Patrick I thought about instinctual behavior before I wrote the article. I ran into three problems seemingly without resolution under the natural selection paradigm. 1) What instincts are shared between mice and men? 2) Which of those common instincts are so important to survival that they'd be ultra-conserved more than coding proteins? The suckling instinct came to mind. 3) Why did the mice not show any adverse effects on losing instincts which must be so important? A baby mouse unable to suckle would die, and the loss of the instinct would be easily noted by anyone closely observing the knockout mice. DaveScot
bfast Great work in digging up more dirt! This should have been a proverbial shot heard 'round the world two years ago, and instead the implication for natural selection seems to have been by and large quietly disregarded. Mayhap anyone with a vested interest is worried about being Sternberged for being too candid about the implication for natural selection. If previous patterns of response are repeated, a chance worshipper will alert Dr. Rubin that he's being "quote-mined by intelligent design creationists in support of their pseudo-science" :razz: and ask what he has to say in response to them. DaveScot
troutmac The win for ID is that there's a mechanism for ultra-conservation of DNA with no immediate survival value. The basic premise underlying natural selection is that DNA with no immediate selection value is subject to rapid random mutation and DNA with immediate survival value will be conserved (i.e. mutations in it kill or cripple the host so the mutation doesn't propagate in the population). This is a huge failed prediction of natural selection. The front loading hypothesis of ID predicted a mechanism of conserving unexpressed genomic information. I've been blogging about that prediction for over a year. This is vindication most sweet. DaveScot
bfast Maybe it's supposed to be one of those trade secret things. Gould said it was a trade secret of paleontology that the fossil record doesn't support gradualism. Maybe there's now a trade secret in genomics that conserved sequences aren't necessarily important to immediate survival. By the way, I read in another article here http://www.panspermia.org/nongenseq.htm
The faithfulness of conservation which this study observed in the CNGs is unprecedented. The most highly conserved ones have a nucleotide substitution rate, across the studied mammals, that is less than half that of protein-coding genes.
So these sequences are even more conserved between mice and men than protein-coding genes. Trade secret indeed. DaveScot
Sal I kept going over possible ways this could be redundancy and it just didn't play out. The story goes that these bits are SO important that in case they get corrupted a backup set takes over. So what's protecting the backup set from corruption? We keep returning to the unavoidable fact that these highly conserved sequences were deleted and nothing bad happened. A redundancy mechanism would have to engender some kind of corrective action when a backup is corrupted - like killing the organism. It just doesn't add up. The only thing I've been able to think of so far that fits the NDE model is horizontal DNA flow. Some very recent event, a retrovirus perhaps, caused these sequences to be inserted in both mouse and human genomes. But other researchers have discovered more or less the same sequences in a dozen different mammals. Whatever the answer, it's going to be interesting, and at this point it sure looks like a big KABOOM for the circular reasoning "if DNA is important for survival it's conserved and if DNA is conserved it's important for survival". DaveScot
GeoMor:
No one is running from the principle that “conservation implies function” because it has proven so scientifically fruitful in understanding how our genomes operate and evolve.
As I posted on another thread, I think it's possible that "evolutionists" are running away from it. Prof. Allen MacNeill, for example, has said something to the effect of: "The Modern Synthesis is dead. Long live evo-devo." Conserved sequence = essential function is certainly part of the Modern Synthesis. It seems that maybe some scientists see what I saw in reading the article two years ago: Darwinism (the Modern Synthesis) is dead. Only the "obits" haven't carried the news. Not yet, anyway. PaV
Me: “Obviously, since Evolution is true, the experiment does not cast doubt on Evolution, but merely shows that mice and men are very closely related.” bFast: "You may read back in post 2 that I mention that these findings are consistent with a young earth. I’m still an old earther on acounta all that other evidence flyin’ in from everywhere." I saw where you mentioned that about a young Earth. (We'll convert you, yet, so be ready.) And, my post was more directed at those who subscribe to Neo-Darwinian Theory. Although, now that you mention it, I thought I had noticed some buzzing noises - I guess I assumed they were mosquitos. ;) Douglas
GeoMor
The paper demonstrates that a mouse with only about 99.9% of its genome does not show major abnormalities in its development, physiology or lifespan in the laboratory environment.
You are mischaracterizing the results. The paper showed that after the deletion of 1,243 conserved sequences of at least 100 base pairs in over 200 mice, there was no evidence of the deletions causing any phenotypic difference between the modified mice and the wild type.
You are basically correct that “Natural selection has no mechanism for conserving sequences without function”
"Basically correct"? As if to imply there are exceptions? Are you aware of any exceptions? Is it not absolutely correct?
The nonfunctionality of these sequences has not been proven, and NDE’s prediction very bluntly remains that they are functional, despite the evident fact that they are not essential.
The evident fact is that they have no function at all, save one sequence that has a function that does not appear to be essential. If you think these sequences have a function, please indicate where the investigators could have looked for function and they did not. It seems to me they were very thorough, doing 108 tissue samples for gene expression.
NDE’s only prediction based on sequence conservation is that they must confer some “sufficient” selective advantage to the species (as you have noted).
Of course, some function is not enough; it must confer a selective benefit. However, the conclusion of the authors of the paper in question was that 1,242 of those sequences did not have a function, much less a selective benefit.
There is an interesting puzzle here, but not some massive problem. No one is running from the principle that “conservation implies function” because it has proven so scientifically fruitful in understanding how our genomes operate and evolve.
Under NDE conservation has to do better than "imply" function, it has to guarantee it because NDE predicts that sequence without function will not be conserved. Jehu
"What is news is that nonfunctioning sequences are conserved." The paper demonstrates that a mouse with only about 99.9% of its genome does not show major abnormalities in its development, physiology or lifespan in the laboratory environment. It is true that, based on sequence conservation in the deleted regions, many biologists may have expected such gross effects to be visible, and that's why it was a surprising result. But while we, as optimists, may have expected this, NDE does not imply it. You are basically correct that "Natural selection has no mechanism for conserving sequences without function". The nonfunctionality of these sequences has not been proven, and NDE's prediction very bluntly remains that they are functional, despite the evident fact that they are not essential. NDE's only prediction based on sequence conservation is that they must confer some "sufficient" selective advantage to the species (as you have noted). So why have there not been massive efforts to search for the more subtle function that NDE predicts these sequences must have? Well, we are still sequencing genomes just to identify all the conserved sequence in mammalian genomes, let alone track down what every last bit does. The question will be addressed eventually. In the meantime, it will certainly fester, and it is not inappropriate to point this out. There is an interesting puzzle here, but not some massive problem. No one is running from the principle that "conservation implies function" because it has proven so scientifically fruitful in understanding how our genomes operate and evolve. GeoMor
darth314,
That piece of “junk” removed might have a function that was just not tested under the described conditions.
It is possible. But the conclusion of the paper was that they did not have function. They expected to find function and made a significant effort to find it, but could not. It has been over 2 years since the paper was published and I can't find any evidence that anybody has even suggested a function. The over 200 mice they produced showed no phenotypic effects. So I see no real evidence that the sequences provide selective benefit.
To immediately conclude that this is proof of genetic planning for the future is premature. That does not help convince people of the idea of ID.
Nobody is concluding that it is proof of planning for the future. But do you have a hypothesis of how and why natural selection would conserve all of these sequences that have no function? Jehu
That piece of "junk" removed might have a function that was just not tested under the described conditions. To immediately conclude that this is proof of genetic planning for the future is premature. That does not help convince people of the idea of ID. darth314
GeoMor, you said,
This was certainly a surprising result, and clearly makes everyone examine in more detail the assumption that conservation implies function. It should first be noted that the study did identify significant changes in the expression patterns of two genes near the deleted DNA, in the brain and in the heart.
The article does not say the changes were significant; it says they were "detectable." Only 2 of 108 samples (12 tissues for 9 genes) had any difference in gene expression. The article does not conclude that this is evidence of causality. It is true that out of 1,243 sequences they found one sequence that reproducibly drove beta-galactosidase expression in mammary glands and abdominal muscle tissue. However, there is no evidence that this function is enough to provide selective benefit.
This is not to say that it was anything less than very surprising that the mice did fine against a large battery of tests; that is, after all, why Nature published it. It certainly makes genomicists and evolutionary biologists raise their eyebrows, but it hasn't made them run for the hills. That is because, in spite of a few surprises like this one, the NDE prediction that conserved sequences are functional has largely held up, and has proven to be a tremendously powerful heuristic for discovering new mammalian biology.
It is not news that many conserved sequences have function, or that looking for conserved sequences is a way of finding functional sequences. That has already been pointed out in this thread. What is news is that nonfunctioning sequences are conserved. Natural selection has no mechanism for conserving sequences without function. That is the point. Jehu
This was certainly a surprising result, and clearly makes everyone examine in more detail the assumption that conservation implies function. It should first be noted that the study did identify significant changes in the expression patterns of two genes near the deleted DNA, in the brain and in the heart. They also did show that one of the most conserved elements in the deleted regions does act as an expression enhancer element. Perhaps partially for these reasons, the authors were careful to note that "It is possible—even likely—that the animals carrying the megabase-long genomic deletions do harbour abnormalities undetected in our assays, which might affect their fitness in some other timescale or setting than those assayed in this study." This is not to say that it was anything less than very surprising that the mice did fine against a large battery of tests; that is, after all, why Nature published it. It certainly makes genomicists and evolutionary biologists raise their eyebrows, but it hasn't made them run for the hills. That is because, in spite of a few surprises like this one, the NDE prediction that conserved sequences are functional has largely held up, and has proven to be a tremendously powerful heuristic for discovering new mammalian biology. Just a few recent examples that come to mind were the discovery of an RNA gene expressed in brain based on its accelerated evolution in the human lineage compared to other vertebrates (Pollard & Haussler, 2006), the identification of a conserved enhancer mechanism and its evolutionary origins (Bejerano & Haussler, 2006), and the demonstration that many of the "ultraconserved" elements act as enhancer elements during embryonic development (Pennacchio & Rubin, 2006). One nice thing about this story is that the same PI (Rubin) has published on both sides of the story, this study openly questioning the function of conserved elements and several others demonstrating them.
It shows that there's at least one prominent scientist who keeps an open mind towards dogmatic questions :-) GeoMor
What would be the hypothesis for instinct genes from mice being conserved in humans?
Common instinctual behavior used by both during the earlier stages of life? Who knows; maybe these mice suffered from mental problems or a deficiency in intelligence that went unnoticed. I fully admit to throwing ideas around. The reason I brought up instinct at all was that it jumped out at me as one thing that wasn't tested for (unless that testing came under a category such as "overall development"). Or perhaps this information really is related to front-loading. I'd say it's too early to say with any certainty, so other possibilities can be explored. Patrick
Douglas, "Obviously, since Evolution is true, the experiment does not cast doubt on Evolution, but merely shows that mice and men are very closely related." You may read back in post 2 that I mention that these findings are consistent with a young earth. I'm still an old earther on acounta all that other evidence flyin' in from everywhere. However, even if this were to challenge the age question, the real question being asked here is whether the NDE paradigm can support this data. If we are going to look at what NDE says "should" happen, we must begin with NDE's expected timetable. Jehu, "Even in the megabase gene deletion experiment we have been talking about, one of the sequences did have a function of regulating gene expression." Please expound. What I really want to see is a mouse where one of the "ultra-conserved" non-coding segments is knocked out. If you can knock out the ultra-conserved stuff too, it'll just be more nails in the ol' coffin. TRoutMac
Am I really out of line to think that within ten years Darwinism will be yesterday’s papers?
That's how I see it. I am still somewhat puzzled that this data has been around for a year and a half and it hasn't made the front page of the Washington Post. There is only one explanation that I can find -- the halls of biological science are not filled with enough honest scientists who are prepared to admit that their foundational theory, their "fact", is toast! BTW, after googling this article I have found nobody who has even ventured an explanation. The scientific community MUST come up with an explanation, or yell UNCLE! bFast
Dave, I guess your argument is biased. Caging mice for experimental work requires maintaining constant housing conditions (e.g. temperature, humidity, food, day/night cycles). In addition, breeding is normally carried out in specific genetic backgrounds (e.g. Balb/c, C57BL/6 or 129SV) and researchers make some effort to prevent genetic drift in their colonies. Thus, lack of phenotypic consequences in mouse mutants doesn't mean that a sequence is not functional. E.g. the IL-10 knockout mouse lives happily if it is kept under pathogen-free conditions. However, if kept under less rigid hygienic conditions it will develop gut inflammation similar to Crohn's disease in man. In addition, severity of the disease depends on the genetic background and the sex of the affected animals. Since animal houses cannot provide the several-magnitudes-higher variability of the natural habitat, since there is no free choice in breeding for caged animals, and since allele frequencies are fixed in mouse colonies, it is hardly ever possible to prove that a sequence doesn't have any function, irrespective of whether you're investigating coding or non-coding sequences. sparc
Thank you late_model, Jehu and bFast for the explanations. I understand it much better now. Turns out I made the same mistake the Darwinists made with regard to this "junk DNA", which is that I jumped to the conclusion that just because we can't see a function NOW, we will never find a function, or there never will be one. You would think I'd know better. And I understand better that this non-coding DNA should have deteriorated into honest-to-goodness junk by now, and that because it remains unchanged, the mutations are not happening, which runs counter to NDE theory. Pretty amazing. Am I really out of line to think that within ten years Darwinism will be yesterday's papers? It really seems like these folks are on the slippery slope, 'specially when the best arguments they make are as ridiculous as that article by Jack Woodall about the large blue butterfly. Funny stuff! Thanks again! TRoutMac
(The extra "cows" is not a typo, by the way.) Douglas
"Does anybody know off hand when how long ago man and mouse diverged? How random should DNA be if it is submitted to unselected random mutation?" Obviously, since Evolution is true, the experiment does not cast doubt on Evolution, but merely shows that mice and men are very closely related. All those progressively evolving images of fish to ape to man should probably now be fish to cows to whales to cows to ape to mice to man. Douglas
Patrick,
I'd still like to see these mice tested for notable loss of instinctual behavior. Many tests were run but from what I've read it's not been directly addressed.
What would be the hypothesis for instinct genes from mice being conserved in humans? Jehu
bFast,
Jehu, thanks for the panspermia link. I would love to find that these "ultra-conserved" DNA segments prove to be "not essential".
At least some of them most likely regulate gene expression; I am not sure if that would make them "essential". Even in the megabase gene-deletion experiment we have been talking about, one of the sequences did have a function of regulating gene expression. Apparently the absence of the gene didn't affect the mouse's health, and therefore it is hard to argue it is "essential" or even say for sure that it provides a selective benefit. That does raise another issue. How do these genes regulate gene expression? What is the code they use? Is it different than the code of the central dogma? Is there a second DNA code for gene regulation as opposed to gene expression? I find this very interesting:
A newly observed class of genomic elements of unknown function are 100% conserved, with no insertions or deletions, among the human, rat and mouse genomes. There are over 5,000 such segments longer than 100 base-pairs, and 481 longer than 200 bp. Many of them are also present in the chicken and dog genomes, at 95% and 99% identity, respectively. Many of them are also found in fish. They are widely distributed across the human genome, sometimes overlapping with the exons of genes for RNA processing, sometimes in introns or near regulatory genes, and sometimes far from any gene. The international team who discovered them calls the 481 longest ones "ultraconserved elements."
http://panspermia.org/whatsne33.htm#040507 Jehu
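The quoted definition (100% identity across human, rat and mouse, with no insertions or deletions, for runs longer than 100 or 200 bp) translates directly into a scan for maximal identical runs over a multiple alignment. Here is a toy sketch of that scan, assuming gap-free, already-aligned sequences (real ultraconserved-element surveys work from whole-genome alignments):

```python
def ultraconserved_runs(seqs, min_len=200):
    """Return (start, end) spans where every aligned sequence in `seqs`
    is identical for at least `min_len` consecutive bases -- the
    "ultraconserved element" definition quoted above."""
    n = len(seqs[0])
    assert all(len(s) == n for s in seqs), "sequences must be pre-aligned"
    runs, start = [], None
    for i in range(n + 1):  # the extra step flushes a run at the end
        same = i < n and all(s[i] == seqs[0][i] for s in seqs)
        if same and start is None:
            start = i                    # a new identical run begins
        elif not same and start is not None:
            if i - start >= min_len:     # keep it only if long enough
                runs.append((start, i))
            start = None
    return runs
```

Dropping `min_len` to 100 would correspond to the "over 5,000 such segments longer than 100 base-pairs" figure in the quote, while 200 corresponds to the 481 longest elements.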
I'd still like to see these mice tested for notable loss of instinctual behavior. Many tests were run but from what I've read it's not been directly addressed. Patrick
Jehu, thanks for the panspermia link. I would love to find that these "ultra-conserved" DNA segments prove to be "not essential". BTW, where have the evolutionist loud-mouths gone? C'mon you guys, please explain to us why this evidence is no big deal. I dare ya! bFast
There are a number of species of plants in eastern North America (including skunk cabbage) of which populations are also found in a certain area in eastern Asia. These widely separated populations on two different continents are not only morphologically indistinguishable, but apparently also fully fertile with each other, even though they must have been isolated from each other for 6 - 8 million years. The American botanist Asa Gray called this fact to Darwin’s attention. [See Darwiniana (1876) by Asa Gray, pp. 181-186.] ... [Living fossils are] species of animals and plants that have not visibly changed in more than 100 million years. This includes the horseshoe crab (Limulus; Triassic), the fairy shrimp (Triops), and the lampshell (Lingula; Silurian). Equally long-lived have been found among the plants: Gingko (dating to the Jurassic), [etc.] … The complete standstill or stasis of an evolutionary lineage for scores, if not hundreds, of millions of years is very puzzling. How can it be explained? In the case of living fossils, all the species with which it had been associated 100 or 200 million years ago had either changed drastically since that time or had become extinct… To explain why the underlying basic genotype was so successful in living fossils and other slowly evolving lineages requires a better understanding of development than is so far available. (Ernst Mayr, What Evolution Is: Except When It Isn't (2001), pp. 186, 195.)
And now this. Are the DNA error correction mechanisms enormously better than generally believed, or what? ("Mutations? What mutations?") —————
...If awareness of anomaly plays a role in the emergence of new sorts of phenomena, it should surprise no one that a similar but more profound awareness is prerequisite to all acceptable changes of theory. On this point historical evidence is, I think, entirely unequivocal. ...Copernicus...Newton...wave theory...thermodynamics...quantum mechanics... In all these cases except that of Newton the awareness of anomaly had lasted so long and penetrated so deep that one can appropriately describe the fields affected as in a state of growing crisis. Because it demands large-scale paradigm destruction and major shifts in the problems and techniques of normal science, the emergence of new theories is generally preceded by a period of pronounced professional insecurity. As one might expect, that insecurity is generated by the persistent failure of the puzzles of normal science to come out as they should. (Thomas S. Kuhn, The Structure of Scientific Revolutions (1962), pp. 67-68.)
j
TRoutMac, the issue here is that if DNA is not expressed in an organism's phenotype (its body with all of its complexity) then natural selection, which selects by phenotype alone, cannot select for it. Therefore, any DNA that doesn't affect the phenotype should be subject to any random mutation that happens along. It should experience the full force of genetic drift. Therefore when significant chunks of DNA are similar between a man and a mouse, NDE requires that the DNA be expressed in the phenotype. As to "hey, we have proved that this is 'true junk', which has not been predicted by ID" -- well, "true junk" status has surely not been established. Is this DNA a remnant from the original organism that was "programmed" to become all the variety that we see? If so there should be echoes in the DNA, echoes that look exactly like what we are seeing. Is this DNA actually capable of producing phenotypic effects that go unseen because of redundant code? Well, redundancy has certainly been seen in the simplest organisms, but it is really hard to find when there are gazillions of base pairs to consider. Peter Borger, on Brainstorms, shows why redundancy is absolutely unsupportable within the NDE framework. His case is very strong. However, redundancy is easily supported within an ID framework. A very light look at this data would see it as anti-ID, but a closer look is anything but. It is damning to the neo-Darwinian evolutionary hypothesis. bFast
TRoutMac, Darwinism predicts that only DNA with function will be conserved. If the DNA has no function it will mutate at a rate of 2.2 x 10^-9 per base per year. So, according to Darwinism, after 70 - 90 million years, sequences without function should not have any homology. But they do. This panspermia article makes a related point. http://panspermia.org/nongenseq.htm Jehu
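The arithmetic behind this claim can be sketched in a few lines (a minimal back-of-envelope model of my own, not from any of the linked articles: it assumes the quoted rate of 2.2 x 10^-9 per base per year, takes 80 million years as the midpoint of the 70-90 million year range, and applies the standard Jukes-Cantor correction for back-mutations):

```python
import math

RATE = 2.2e-9   # neutral substitutions per site per year, as quoted above
YEARS = 80e6    # midpoint of the 70-90 million year divergence estimate

# Both lineages accumulate changes since the split, so the sequences are
# separated by twice the per-lineage substitution load.
d = 2 * RATE * YEARS

# Jukes-Cantor expectation for the fraction of sites still identical after
# d substitutions per site; unrelated random sequence matches at 25%.
identity = 0.25 + 0.75 * math.exp(-4.0 * d / 3.0)

print(f"divergence: {d:.3f} subs/site; expected identity: {identity:.1%}")
```

On these particular inputs the neutral expectation comes out in the neighborhood of 70% identity, so the conclusions being argued in this thread hinge directly on which drift rate and divergence time one accepts.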
TRoutMac - I believe the significance is the fact that the DNA sections removed code for function but are unused by the mice. The sections may show up in other organisms' DNA and possess function, but in a mouse they do nothing. This would support a front-loaded system because the mouse is carrying around DNA sections that would be used in later evolved generations as it adapted. I could be way off, but this is my interpretation. It is great to see others seeking to understand the issues better and speaking up about it. late_model
Folks, I'm a little confused about this… I'm a newcomer to UD and I have relatively little knowledge of the specifics of genetics and so forth, but I am a fervent supporter of ID and want to learn more. Perhaps someone can help me out… I understand that NDE would predict junk DNA. And I also understand that ID would predict that so-called junk DNA would turn out to have function. And I also understand that, bit by bit, functions are being uncovered in non-coding DNA. What I don't understand is that this experiment seems to me to have demonstrated that non-coding regions really DON'T have function. But you folks are interpreting this as a resounding win for ID. As much as I'd relish any victory for ID, I can't see how this is a win. Seems like a loss. Seems like if it was a win, the removal of the non-coding regions would have yielded mice that, for example, were born dead and were turned inside out. Something like that, you get the idea. Honestly, I want ID to win as much as the next guy, but I don't get this. I'm sure I'm missing something. Can someone enlighten me? Thank you! TRoutMac
Another link, again from 2004: http://www.i-sis.org.uk/AUEI.php At least this paper brings out the issue very clearly. Let me point out, however, that the scientists avoided the ultraconserved (99-100%) sections of the genome when they did their cutting. The "highly conserved" (average 70%) sections still should very well have shown some implementation in the phenotype so that natural selection could conserve them. It is my view that this finding should be as significant to evolutionary biology as the Michelson-Morley experiment was to the theory of the "ether". The evolutionary biology community should be in an absolute quandary in response to these findings. This should be front page news in my local paper. But we can't bring down a religious icon just because it wears the cloak of science, can we? bFast
Oops, another good link -- again, dating from 2004. This link again ignores the challenge presented to NDE by this paper, only presenting the fact that chunks of DNA are removable. What makes it even more interesting is that the author challenges Dembski to explain the findings. Well, Dembski and crew have been a bit slow at explaining, but the explanation is a doozie. http://evolutionblog.blogspot.com/2004/10/maybe-it-really-is-junk.html bFast
A quick Google search to find others talking about this research revealed this: http://www.planetark.com/dailynewsstory.cfm/newsid/27791/story.htm In this Reuters story, the fact that the mice could live without the DNA was written about, but the fact that significant portions of the mouse DNA were conserved in humans was simply not mentioned. This report presents the study as being very supportive of NDE. So far, other than what has been presented here, this is the only discussion I have found of this study. bFast
bFast,
I believe that in humans (worse in mice) unused DNA drifts at a rate of about 1% per million years. To get to 30% loss of order would take 30 million years. By 150 million, the DNA should be as random as a Vegas shoe.
I think that actual drift is a little more than double that, 2.1% per million years, and according to the Berkeley Lab article the separation between mouse and human is 70 to 90 million years. So by those estimates you wouldn't expect any identity of nonconserved genes at all. See this article
Mutation Rates in Mammalian Genomes
http://www.pnas.org/cgi/reprint/022629899v1.pdf Funny thing is that article raises another failed Darwinian prediction. Darwinism predicts that there is a correlation between the mutation rate and the generation time, because organisms that reproduce faster have more opportunities to produce mutations and organisms that reproduce slower have fewer. In fact, there is no correlation between the mutation rate and generation time.
Therefore, small rate differences seem to exist among lineages, and clearly there are no systematic relationships between the evolutionary rate and generation length. This means that the generation length and physiological differences among diverse groups do not influence the neutral substitution rates significantly, and the evolutionary time is the principal factor dictating the accumulation of neutral mutations.
Hmmm. Jehu
I've been calculating with the 70m number myself. That means that the DNA has been experiencing uninhibited genetic drift for 150 million years. I believe that in humans (worse in mice) unused DNA drifts at a rate of about 1% per million years. To get to 30% loss of order would take 30 million years. By 150 million, the DNA should be as random as a Vegas shoe. 70% conserved requires a mechanism, that's all. DaveScot, I am well aware that there is all sorts of evidence for an old earth. However, this one falls on the side of supporting a young earth. So far my personal tally is about 846 old, 4 young. bFast
bFast, The Nature article plays it off like it is no big deal. But the lead researcher, Edward Rubin, said at the presentation, "We were quite amazed."
Does anybody know offhand how long ago man and mouse diverged? How random should DNA be if it is subjected to unselected random mutation?
Same question I have been pontificating. To answer that, I have been looking at earlier work by the same lab. Here is something interesting I found from April 2000.
Evolutionary conservation of non-coding DNA sequences that play an important role in regulating gene expression is the key to the success of this study, just as it has been a key to identifying DNA sequences that code for genes across different species. "If evolution conserved a sequence over the 70-90 million years since mice and humans diverged, it likely has a function," says Frazer. "Whether its function is to determine the structure of a protein coded for by a gene or to regulate gene expression, we should be able to identify these sequences through mouse to human sequence comparisons." ... To search for conserved non-coding sequences (CNSs), Rubin, Frazer, and their colleagues examined a stretch of DNA about a million base-pairs in length from mice and humans that contained the same 23 genes in both species, including three interleukin genes (IL-4, IL-13, and IL-5). Previous studies indicated that these interleukin genes are similarly regulated and that their regulatory sequences may be conserved in mice and humans. The Berkeley researchers looked for CNSs that were at least 70 percent identical in both species over at least 100 base-pairs. Of the 90 CNSs they identified that met this criteria, the researchers took 15 and did a cross-species sequence analysis which also included DNA from a cow, a dog, a pig, a rabbit, a rat, a chicken, and a fish. Most of these elements were also found to be present in the other mammals, indicating that they most likely have been conserved because they perform an important biological function.
So apparently the researchers at Berkeley Lab thought that at least 100 base pairs with at least 70% identity was conserved enough between mouse and human to predict function. In fact, they found a functioning regulatory element using that method.
"What is unique about our study is that we were lead to the interleukin regulatory element CNS-1 entirely by computational analysis of mouse and human sequences," says Rubin. "Since we are soon to have the entire genomes of mice and humans sequenced, our study demonstrates one successful strategy of interpreting the sequence information coming from the genome program into meaningful biology."
http://www.lbl.gov/Science-Articles/Archive/mouse-dna-model.html Jehu
In contrast to the New Scientist article, the Nature article does not discuss the significance of nonfunctional genes being conserved. Of course it doesn't. If you're able to understand what was published in Nature, you don't need the significance explained to you. DaveScot
I will admit, however, that this data is also consistent with a 6000 year old biosphere. I thought about that when I wrote it. But you can count tree rings and go back farther than that. YEC needs a great many other things to support it, and a wide variety of disparate sciences dispute them. This is but a drop in the bucket for YEC if telicity is found true, but a clincher for front loading. DaveScot
"Ah, but you underestimate the truly magical powers of Natural Selection." Thus it's called supernatural-selection. Supernatural-selection has the amazing ability to change the DNA just right so it can evolve a new creature while leaving the junk DNA untouch for millions of years. Smidlee
"How does Darwinism conserve backup or contingency functions?" I'm not sure this is particularly problematic. In some environments, backups and contingencies are really quite important. The details will vary from case to case, but only under the best possible circumstances will backups and contingencies be "practically invisible to selection." Reed Orak
Jehu: " True. How does Darwinism conserve backup or contingency functions?" Ah, but you underestimate the truly magical powers of Natural Selection. Does it not seem obvious that those creatures posessing mutations which produce biological systems capable of conserving and maintaining backup and contingency functions will be favoured over those lacking such systems? Why limit the powers of natural selection to only "first order" adaptations? :) SCheesman
Jehu, I obviously glossed over your link too quickly, thanks again. Reading the paper hardly suggests that the scientists were shocked at their findings. Dacook, I did a diligent search through the annals of NDE predictions, and I did not find this one. In fact, this is solidly negatively predicted by the theory. Does anybody know offhand how long ago man and mouse diverged? How random should DNA be if it is subjected to unselected random mutation? bFast
"“We were quite amazed,” says Rubin" Darwinists are frequently amazed by the real world. dacook
I think this is one of the most promising ideas yet. If you could show a future blueprint and uncover the mechanisms to make it unfold, well that would change everything. jmcd
bfast, It's free at that link. Jehu
Jehu, thanks for the link. This one may be worth buying. bFast
How does Darwinism conserve backup or contingency functions?
Exactly. Kind of hard to conserve something that's practically invisible to selection. scordova
Sal,
If the function is some sort of backup, contingency, or even artifacts of front-loading, this would favor ID far above neo-Darwinism.
True. How does Darwinism conserve backup or contingency functions? Jehu
"Planning for the future with genomic information is the central tenet of ID front loading hypothesis. Lack of any known means of conserving non-critical genetic information is the major objection lobbed at the front loading hypothesis. Evidently there is a means after all." I'm a big fan of front-loading as an avenue of ID research, so these results are especially intriguing. Of course, if these "non-coding" regions of the genome actually turn out to be the location for the front-loaded material, that still doesn't explain how they are preserved. If knockout experiments don't reduce the viability of the organism, then we should expect that random mutations would have substantially altered those portions of the genome. But they haven't, so there must be some (as yet unknown) mechanism preventing mutations from occurring. I'm not sure what that would be, but I'm also not a genetics researcher. Reed Orak
bFast, You can download the Nature article pdf here: http://www.stanford.edu/class/bio203/NobregaGeneDesertsNature.pdf In contrast to the New Scientist article, the Nature article does not discuss the significance of nonfunctional genes being conserved. The article states that 1,243 sequences conserved between human and mouse were deleted, each segment being at least 100 base pairs long with at least 70% identity. Jehu
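For a sense of what those cutoffs mean, here is a quick back-of-envelope sketch (my own, not from the Nature paper) of the chance that an alignment of two completely unrelated 100-base-pair stretches would show 70% identity, taking each site to match independently with probability 1/4:

```python
from math import comb

N, K, P = 100, 70, 0.25   # window length, matches required, per-site chance

# Probability that a random, unrelated 100-bp alignment shows at least
# 70 matching sites (binomial upper tail).
tail = sum(comb(N, k) * P**k * (1 - P)**(N - k) for k in range(K, N + 1))

print(f"P(at least {K}/{N} matches by chance) = {tail:.2e}")
```

The tail probability is vanishingly small, so a 70%-over-100-bp window is far beyond anything a chance alignment can produce; the dispute in this thread is over what has kept such windows intact, not over whether they are a real signal.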
I pointed out why knock-out experiments may be ineffective ways to determine function. See: Airplane magnetos, contingency designs, and reasons ID will prevail. If the function is some sort of backup, contingency, or even artifacts of front-loading, this would favor ID far above neo-Darwinism. I had commented on this at KCFS and ARN several times. I pointed out repeatedly all the circular reasoning out there in Darwinist circles. I'm glad to see some sane voices finally speak out. scordova
DaveScot, I know that you are a fan of Dr. Davison's PEH despite the fact that he is a grade 'A' putz in a discussion forum. This is the kind of data that one would expect if the PEH were valid. PEH and the Kruze/Mike Gene hypothesis of "front-loading", of course, are kissing cousins. I will admit, however, that this data is also consistent with a 6000 year old biosphere. I mastered arithmetic before I ran out of fingers showing people my age. Within the current NDE paradigm, this doesn't add up -- no way! The huge question I have on this one is how this data could have been initially brought forth in June 2004. Has the scientific community been hiding this finding? bFast
Wow!! bFast
