Uncommon Descent Serving The Intelligent Design Community

The Sound of Circular Reasoning Exploding


Alternate Title: Of Mice and Men and Evolutionary Dogma

“There has been a circular argument that if it’s conserved it has activity.” Edward Rubin, PhD, Senior Scientist, Genomics Division Director, Lawrence Berkeley National Laboratory

Recent experiments have caught a central tenet of neo-Darwinian evolution (NDE) in a failed prediction. Large swaths of junk DNA (non-coding, with no known function) were found to be highly conserved between mice and men. That tenet holds that unexpressed (unused) genomic information is subject to relatively rapid corruption by chance mutations: if a sequence is unused, mutations in it do no harm, and if it stays unused long enough it gets peppered into random oblivion. So if mice and men shared a common ancestor many millions of years ago and still carry highly conserved DNA in common, the story goes, all that conserved DNA must have important survival value.

A good experiment to figure out what unknown purpose the conserved non-coding pieces serve would be to cut them out of the mouse genome and see what kind of damage that does to the mouse. So it was done. Big pieces of junk DNA containing roughly a thousand regions highly conserved between mice and men were chopped out of the mouse. To the researchers' amazement, the mice were as healthy as horses (so to speak). They were amazed because they were confident NDE predicted some kind of survival-critical function, and none was found.

This is a good avenue for positive ID research. If any of those regions were preserved because they could be of important use in the future… well, that would pretty much blow a hole in the good ship NDE the size of the one that sank the Titanic. Maybe not that big, but she would be taking on water: natural selection can't plan for the future. Planning for the future with genomic information is the central tenet of the ID front-loading hypothesis, and the lack of any known means of conserving non-critical genetic information is the major objection lobbed at it. Evidently there is such a means after all.

Life goes on without ‘vital’ DNA

16:30 03 June 2004
Exclusive from New Scientist Print Edition.
Sylvia Pagán Westphal, Boston

To find out the function of some of these highly conserved non-protein-coding regions in mammals, Edward Rubin’s team at the Lawrence Berkeley National Laboratory in California deleted two huge regions of junk DNA from mice containing nearly 1000 highly conserved sequences shared between human and mice.

One of the chunks was 1.6 million DNA bases long, the other one was over 800,000 bases long. The researchers expected the mice to exhibit various problems as a result of the deletions.

Yet the mice were virtually indistinguishable from normal mice in every characteristic they measured, including growth, metabolic functions, lifespan and overall development. “We were quite amazed,” says Rubin, who presented the findings at a recent meeting of the Cold Spring Harbor Laboratory in New York.

He thinks it is pretty clear that these sequences have no major role in growth and development. “There has been a circular argument that if it’s conserved it has activity.”

Use the link above for the full article.

Comments
[...] This is a good avenue for positive ID research. Planning for the future with genomic information is a central tenet of the ID front loading hypothesis. Lack of any known means of conserving unexpressed genetic information is the major objection lobbed at the front loading hypothesis. Natural selection is the only mechanism known for preserving genomic information, and to do it the information must be "expressed" so that it has some testable survival value for selection to act upon. If it's not expressed then it is subject to eventual destruction by natural selection's ever-present companion, "random mutation". Evidently there is a means of preserving unexpressed information after all. See also this related blog article I wrote two months ago which is even stronger evidence of a genetic information cold-storage mechanism: The Sound of Circular Reasoning Exploding. Rogue weeds defy rules of genetics 00:01 23 March 2005 NewScientist.com news service Andy Coghlan [...]
Trackback: The Sound of Mendelian Genetics Exploding | Uncommon Descent
May 23, 2007 at 11:48 AM PDT
GeoMar: "Percent identity is a nonlinear function of the time passed, and you need a differential equation to model it." Mathemetitions, puh! If you set up the array that I described, it will automatically bump into mutated items, and mutate 'em again. Wala, advanced differential calculus. You can drive yourself crazy trying to calculate the trajectory of a football in a stiff breaze, or you can ask a quarterback to throw the darn thing.bFast
December 11, 2006 at 07:35 PM PDT
Correction to that last post. Rubin's team found a function for only 1 of the 10 sequences of >90% identity and 400 bp that were conserved across humans, rodents, chickens and frogs. There is no evidence that the function they did find conferred a selective advantage.
Jehu
December 11, 2006 at 04:22 PM PDT
GeoMor,
How many 100-bp windows are there in the genome? A lower bound is 3e9/100, i.e. let’s not even let the windows overlap. So how many 100-bp windows in the genome would we expect to be preserved at better than 70% identity by chance? 3e9/100*1.8e-4 = 5,400. At 40% divergence, it’s 83,000. Again, both lower bounds because we considered only disjoint windows.
Thanks for the input. But help me out here. The mouse genome is 2.5 Gb, so your equation should be 2.5e9/100 * 1.8e-4 = 4,500. So by chance we would expect about 4,500 100-bp sequences at 70% homology, covering roughly 0.02% of the genome? So there will be a small number of spuriously conserved sequences. Rubin's team found 1,243 with >70% homology and 100 bp in only two stretches of DNA between four exons. I take it there are more of these sequences than would be anticipated from the normal curve.
The trouble with trying to get percent identity/divergence from this is that as you let this process run, on each iteration you have a higher chance of changing a base that you already changed, leading to no decrease (and possibly an increase) in the percent identity.
Not if you use the 47% figure, because that is based on the hard observation of comparing the human and mouse genomes. Things get really interesting when you also consider that Rubin's team could not find function for 5 sequences of >90% identity and 180 bp, and found function for only 1 of 10 sequences of >90% identity and 400 bp that were conserved across humans, rodents, chickens and frogs. There is no evidence that the function they did find conferred a selective advantage.
Jehu
December 11, 2006 at 04:18 PM PDT
Here's how to calculate the significance of the conservation. Assume that by whatever figures you want to use, you expect by chance a certain percentage of bases to have changed, so like 47% divergence (53% identity), whatever you'd like to use. Now, what is the probability that a 100-bp window will be preserved at 70% identity? You have 100 bases and fewer than 30 of them have to have changed, where each one has a 47% chance to have diverged. If I flip a coin a hundred times, and it has a 47% chance of coming up tails, what's the probability I get fewer than 30 tails? You compute this using the cumulative binomial distribution. There is not a simple formula, but you can go plug it in to Matlab, etc. At 47% divergence, the chance of this happening for a 100-bp window is 1.8e-4.

How many 100-bp windows are there in the genome? A lower bound is 3e9/100, i.e. let's not even let the windows overlap. So how many 100-bp windows in the genome would we expect to be preserved at better than 70% identity by chance? 3e9/100*1.8e-4 = 5,400. At 40% divergence, it's 83,000. Again, both lower bounds because we considered only disjoint windows.

May I once again stress that I was never arguing that many conserved sequences were not deleted in the Nobrega experiment; this calculation arose in a side discussion. But since my math was being questioned, I thought I'd bring the bacon.

Finally, let me caution against the calculation several of you have been doing (and, for simplicity's sake, I also used in a previous comment) of X subs/site/yr times Y years to get a percent divergence. This calculation becomes increasingly inaccurate for a larger number of years. The mutation rate describes a process where you randomly pick a position in the genome and change it to another base. The trouble with trying to get percent identity/divergence from this is that as you let this process run, on each iteration you have a higher chance of changing a base that you already changed, leading to no decrease (and possibly an increase) in the percent identity. Percent identity is a nonlinear function of the time passed, and you need a differential equation to model it. I am not sure off the top of my head where (back in time) the linear approximation that we've been using becomes really bad.
GeoMor
December 11, 2006 at 03:06 PM PDT
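For anyone without Matlab handy, here is a minimal sketch in Python of the cumulative-binomial calculation GeoMor describes, assuming SciPy is available; it only reproduces the 1.8e-4 and ~5,400 figures quoted above.

```python
# Minimal check of the cumulative-binomial calculation described above,
# assuming SciPy is installed; only the 47%-divergence figures quoted in the
# comment are reproduced here.
from scipy.stats import binom

n = 100            # window length in bp
divergence = 0.47  # assumed chance that any given base has changed

# "Fewer than 30 of them have to have changed": P(X <= 29), X ~ Binomial(100, 0.47)
p_window = binom.cdf(29, n, divergence)

windows = 3e9 / n  # disjoint 100-bp windows in a ~3 Gb genome (a lower bound)
print(p_window)            # ~1.8e-4
print(windows * p_window)  # ~5,400 windows at better than 70% identity by chance
```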
bFast
Jehu, let me question your figures. We seem to have two sets of numbers flying around. Could the 2 x 10^-9 be an "average rate for all DNA" while the 2.1% per million years is the rate for "junk" DNA? I have independently seen the latter number measured off of "junk" DNA, so that's what I am suspecting.
2.22 x 10^-9 is the average mutation rate for mammals; I am not sure how accurate that number is because it is from 2001. The numbers I have been giving for the neutral substitution rate are from Nature's mouse genome article (it's free): http://www.nature.com/nature/journal/v420/n6915/full/nature01262.html
Jehu
December 11, 2006 at 02:45 PM PDT
Jehu, let me question your figures. We seem to have two sets of numbers flying around. Could the 2 x 10^-9 be an "average rate for all DNA" while the 2.1% per million years is the rate for "junk" DNA? I have independently seen the latter number measured off of "junk" DNA, so that's what I am suspecting.
bFast
December 11, 2006 at 02:36 PM PDT
That's right, I meant bFast. Sorry.
Jehu
December 11, 2006 at 12:57 PM PDT
That would be bFast ".... right?"
bFast
December 11, 2006 at 12:46 PM PDT
DaveScot
2.2% per million years * 140 million years, right?
Although I am guilty of giving that number earlier in the thread, it is apparently much lower. According to Nature, the mutation rate in the mouse is 4 x 10^-9 per base pair per year and in the human is 2 x 10^-9 per base pair per year, so those two mutation rates have to be combined. The time of the human/mouse divergence is 70-90 million years, which resulted in a neutral divergence of 47%. The exons of coding genes have an average of 85% identity between mouse and human, the introns average 69%, and the so-called "promoter regions," the poorly defined 200 base pairs just before and after a coding gene, have an identity of between 70% and 75%. I am not sure how significant 70% identity between mouse and human is. However, when you toss a chicken in there the time of divergence goes way back and it is much harder to justify it by random chance.
Jehu
December 11, 2006 at 12:21 PM PDT
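As a quick sanity check, the per-lineage rates Jehu quotes do combine with the 70-90 Myr range to land near Nature's 0.46-0.47 substitutions per site; the snippet below is just that arithmetic, not an independent estimate.

```python
# Back-of-envelope check of the figures quoted in the comment above; the rates
# and divergence dates are the ones cited there, not independently verified.
mouse_rate, human_rate = 4e-9, 2e-9   # substitutions per site per year, per lineage
for myr in (70, 80, 90):
    subs_per_site = (mouse_rate + human_rate) * myr * 1e6
    print(f"{myr} Myr since divergence -> {subs_per_site:.2f} subs/site")
# 70 Myr -> 0.42, 80 Myr -> 0.48, 90 Myr -> 0.54: bracketing Nature's 0.46-0.47
```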
I've been thinking of a simple simulation that would rule out the "segments might just have not been mutated" argument. (If I can find a few hours I will code it.) The code would create a million-element (tuneable) array and randomly mutate its values between 1 and 4. Let sufficient mutations happen to account for random mutation over the years allotted (2.2% per million years * 140 million years, right?). At that point, the array can be swept to see what the longest segment still at roughly 70% identity would be. I bet the longest such segment will be about 30 elements long. I will be shocked if the array shows a thousand segments averaging 100 elements in length -- shocked, shocked!
bFast
December 11, 2006 at 11:31 AM PDT
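Here is a rough sketch, in Python with NumPy, of the simulation bFast proposes above; the 47% target divergence, the 100-element window, and the counting of disjoint windows (rather than hunting for the single longest run) are illustrative choices, not anything bFast specified.

```python
# Rough sketch of bFast's proposed simulation (NumPy). The target divergence,
# window length, and batch size are illustrative assumptions; it counts
# disjoint 100-element windows still >= 70% identical rather than searching
# for the single longest conserved run.
import numpy as np

rng = np.random.default_rng(0)
L = 1_000_000                      # "million (tuneable) element array"
ancestor = rng.integers(1, 5, L)   # bases coded 1..4
mutant = ancestor.copy()

# Keep throwing random hits at the copy until the realized per-site divergence
# reaches the target. Re-hitting an already-mutated site is allowed, which is
# exactly the saturation effect GeoMor describes upthread.
target_divergence = 0.47
while np.mean(mutant != ancestor) < target_divergence:
    pos = rng.integers(0, L, 10_000)
    mutant[pos] = rng.integers(1, 5, pos.size)

# Sweep disjoint 100-element windows and count how many stay >= 70% identical.
window = 100
same = (ancestor == mutant)[: L - L % window]
identity = same.reshape(-1, window).mean(axis=1)
print(f"{(identity >= 0.70).sum()} of {identity.size} windows at >= 70% identity")
```

Under GeoMor's binomial estimate you would expect only a handful of the 10,000 disjoint windows to clear 70% identity by chance, far short of a thousand segments averaging 100 elements.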
DaveScot:
a useless mutation can become fixed simply because that cell line was strong for some other reason or just lucky . The point here isn’t how the CNGs became fixed. It’s how they were conserved for tens or hundreds of millions of years without being blasted into unrecognizability by random mutations.
In true "junk dna" mutations randomness determines when a mutation becomes fixed. However, if we look at the amount of benefit that a mutation must offer so that it does become fixed, we can guess that the converse is also true, that a mutation that offers that much disadvantage would keep it from becoming fixed. If DNA is actually ultra-sensitive to any advantage/disadvantage, then any mutation offering even a very slight disadvantage would not be fixed. The result would be that that segment of DNA would be conserved. Mesk, over on Telic Thoughts, argues that it is quite reasonable for the mice in question to pass the tests provided and still have sufficient deleterious function account for the conservation that is observed. However, he suggests that a 5 year study on about 1000 mice, may make a very convincing case that the mice have suffered no deleterious effects. He then goes on to describe the expected response from the scientific community (assuming that no deleterious effects are found) as frantic. I do like the balance the Mesk is offering in his discussion on TT.bFast
December 11, 2006 at 11:26 AM PDT
Sparc:
Thus one can hardly claim that ultra-conserved sequences lack functions.
Jehu's quote from Nature:
What explains the correlation among these many measures of genome divergence? It seems unlikely that direct selection would account for variation and co-variation at such large scales (about 5 Mb) and involving abundant neutral sites taken from ancestral transposon relics.
What Jehu alludes to, and what Sparc's quote actually points out, is that irrespective of the question concerning conserved sequences and function, there seem to be TWO conserving processes going on: one that can plausibly be attributed to NS, and one that defies this connection.

But there's more to it than that. EVEN IF NS is responsible for both processes (thus granting the argument), the "enhancers" (per Pennacchio) are nevertheless more conserved than the "genes" themselves. These enhancers "regulate" gene function/expression. Hence it would seem--using the standard thesis regarding conservation of sequences--that 'regulatory' functions are more critical to survival than are "coding" functions (i.e., proteins/enzymes).

Dave, I'm going to toss this over to your area of expertise--programming--but doesn't this all suggest that "genes" are less critical in setting up genetic programming, so to speak, than are the "enhancers/regulators" themselves? E.g., wouldn't a problem with a branching node be more fatal to the proper functioning of a program than a problem in a called-for subroutine? IOW, in programming you'd fix the switching problems first before you started looking at the individual subroutines the branching node was calling for.

Looked at this way, it makes gene expression look somewhat secondary, almost peripheral, to genetic programming--which is really what we see in nature, as in the geographical radiation of species. And doesn't that imply that NS is necessarily almost decoupled from phenotypic variation? And doesn't that imply that Darwin was completely wrong, given that he bases his theory on the link between phenotypic variation and selection?
PaV
December 11, 2006 at 10:53 AM PDT
DaveScot:
I’m a little surprised the background rate is only 2:1 given that the reproductive cycle in mice is closer to 20:1 when compared to men. The smaller deviance might be due to humans continuing to reproduce for decades and their gametes acquiring age-related mutations.
I think it is simply a failed prediction of NDE. I know there was a strong prediction that generation time would correlate with the mutation rate. That has not turned out to be the case.
It should be mentioned that the neutral theory isn’t complete as not all genes exhibit synonymous substitution at the same rate which made molecular clock calibration into a cottage industry.
If NDE were true I would expect to see uniform drift across the genome, with a close correlation to generation times; the only exceptions would be genes under selective pressure that cannot tolerate even supposedly neutral mutations. The Nature Mouse Genome issue made this comment:
What explains the correlation among these many measures of genome divergence? It seems unlikely that direct selection would account for variation and co-variation at such large scales (about 5 Mb) and involving abundant neutral sites taken from ancestral transposon relics.
Jehu
December 11, 2006 at 09:50 AM PDT
I'm a little surprised the background rate is only 2:1 given that the reproductive cycle in mice is closer to 20:1 when compared to men. The smaller deviance might be due to humans continuing to reproduce for decades and their gametes acquiring age-related mutations. In other words, every new mouse is made by combining young gametes produced by a young animal, whereas many humans are produced from gametes decades old (eggs) or recently produced from decades-old cells (sperm).

At any rate, I have no quarrel with Rubin's 0.46 figure for substitutions per site. That rate can be easily established from synonymous substitutions in codon sequences of critically expressed genes, based upon the neutral theory. It should be mentioned that the neutral theory isn't complete, as not all genes exhibit synonymous substitution at the same rate, which made molecular clock calibration into a cottage industry.

Where I think GeoMor mostly went wrong is in saying that serendipity would account for thousands of 100-bp sequences appearing conserved. I'm sure someone must have done a statistical analysis to show that is untrue, and Rubin based his sequence length selection on that analysis.
DaveScot
December 11, 2006 at 08:53 AM PDT
DaveScot, thanks for the clarification. I think the actual divergence between mouse and human is slightly more than 40%. Here is what Nature reported:
[W]e estimate that neutral divergence has led to between 0.46 and 0.47 substitutions per site (see Supplementary Information). Similar results are obtained for any of the other published continuous-time Markov models that distinguish between transitions and transversions (D. Haussler, unpublished data). Although the model does not assign substitutions separately to the mouse and human lineages, as discussed above in the repeat section, the roughly twofold higher mutation rate in mouse (see above) implies that the substitutions distribute as 0.31 per site (about 4 x 10^-9 per year) in the mouse lineage and 0.16 (about 2 x 10^-9 per year) in the human lineage.
Jehu
December 11, 2006 at 12:43 AM PDT
A background rate of 2.2*10^-9/site/year means that any given nucleotide can be expected to mutate into a different one in roughly 500 million years. With both human and mouse diverging at this rate, any nucleotide that was the same in both can be expected to differ after 250 million years (it'll change in either mice or men). Therefore we should expect sequence divergence at a rate of about 10% every 25 million years. After 90 million years we'd expect unconserved regions to have diverged by close to 40% (60% similarity). GeoMor's assertion that we should find thousands of 100-bp sequences more than 70% conserved by serendipity is wrong. 100 bp is sufficient to eliminate virtually all localized deviations from the expected average. Rubin's team isn't stupid. He chose these percentages and sequence lengths because they are well over the threshold of being purposely conserved. He was amazed because no purpose for 1,000 of them became immediately apparent when they were removed.
DaveScot
December 11, 2006 at 12:30 AM PDT
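To put numbers on GeoMor's caution about saturation, here is a short Python comparison of the naive linear estimate used above with a corrected one; the Jukes-Cantor model is my own illustrative choice and is not something anyone in the thread invokes.

```python
# Compare the naive linear divergence estimate with a saturation-aware one.
# The Jukes-Cantor (JC69) model is an illustrative choice; the rate and the
# ~90 Myr split are the figures quoted in the comments above.
import math

rate = 2.2e-9           # substitutions per site per year, per lineage
years = 90e6            # ~90 Myr since the human-mouse split
d = 2 * rate * years    # expected substitutions per site, both lineages combined

p_linear = d                                  # naive "percent of sites changed"
p_jc = 0.75 * (1 - math.exp(-4.0 * d / 3.0))  # JC69: fraction of sites that differ

print(f"subs/site: {d:.2f}  naive divergence: {p_linear:.0%}  JC69 divergence: {p_jc:.0%}")
# -> about 0.40 subs/site; naively ~40% of sites changed, but only ~31% actually
#    look different once repeat hits on the same site are accounted for.
```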
sparc, you wrote:
In a recent paper (Pennacchio L.A. et al. (2006): In vivo enhancer analysis of human conserved non-coding sequences. Nature 444(7118):499-502) Rubin’s group presents evidence that 45% of ultra-conserved sequences they have analyzed “functioned reproducibly as tissue-specific enhancers of gene expression at embryonic day 11.5”. Thus one can hardly claim that ultra-conserved sequences lack functions.
But what about the other 55%? You see, nobody is claiming that all ultra-conserved sequences lack function. Notice my post #74, where I pointed out that no function was found in 25% of the tested ultra-conserved sequences. That implicitly states that function was found in 75% of them. Here's the kicker: according to NDE, 100% of ultra-conserved sequences should have function. So 25% with no function is significant in itself.
Jehu
December 11, 2006 at 12:02 AM PDT
fifth, I think the research for ID here is to find what mechanism is conserving these 1,000 CNGs. If natural selection is what is conserving them, then they must have an important function critical to survival. There is no known molecular mechanism other than natural selection that can conserve sequence information like this. The front-loading hypothesis of ID predicts a conservation mechanism other than natural selection.
DaveScot
December 10, 2006 at 11:56 PM PDT
GeoMor,
By those exact numbers, you’d expect most of the genome to be greater than 70% identity between mouse and the human-mouse ancestor. Just multiply 2.22e-9 subs/site/yr by 90Myr = 0.2 subs/site or 80% identity. Between human and mouse (140-180Myr of divergence), you’d expect tens of thousands of 100-bp sequences to have 70% or better identity in a 3e9-bp genome.
Not exactly. That is 2.22e-9 per lineage, not combined. And that figure is the supposed mammalian average, not the alleged human- or mouse-specific substitution rate. See the Nature article for better info on human/mouse divergence.
Anyway, I’m not trying to argue that there were not many “truly” conserved sequences in the deleted regions — ... Again, NDE’s prediction here remains that the conserved deleted sequences are functional and have conferred a selective advantage
I agree, it has clearly been predicted that these sequences were conserved and therefore have function. However, I am curious to know the statistical significance of a sequence of >100 bp at >70% identity.
Jehu
December 10, 2006 at 11:51 PM PDT
Sparc, GeoMor already mentioned the paper you cite in the commentary. I mistakenly called the CNGs referenced in the blog article ultra-conserved. They are in fact highly conserved, at between 70% and 95% sequence match. Rubin explicitly stated none of these 1,000 crossed over into ultra-conserved, which (by the definition Rubin used) means over 95%. So the article you mention doesn't address any of the thousand CNGs referenced here. The claim that no function has been found for these thousand(!) highly conserved sequences is still valid.
DaveScot
December 10, 2006 at 11:49 PM PDT
Mouse Genome Factoid Round-Up! Nature has their December 5, 2002, Mouse Genome Special Issue posted on the internet for free. http://www.nature.com/nature/mousegenome/index.html It has lots of great information that is highly relevant to this thread. Here are some significant facts as reported back in 2002.
* The mouse genome is about 14% smaller than the human genome (2.5 Gb compared with 2.9 Gb). The difference probably reflects a higher rate of deletion in the mouse lineage.
* Over 90% of the mouse and human genomes can be partitioned into corresponding regions of conserved synteny, reflecting segments in which the gene order in the most recent common ancestor has been conserved in both species.
* At the nucleotide level, approximately 40% of the human genome can be aligned to the mouse genome. These sequences seem to represent most of the orthologous sequences that remain in both lineages from the common ancestor, with the rest likely to have been deleted in one or both genomes.
* The neutral substitution rate has been roughly half a nucleotide substitution per site since the divergence of the species, with about twice as many of these substitutions having occurred in the mouse compared with the human lineage.
* By comparing the extent of genome-wide sequence conservation to the neutral rate, the proportion of small (50-100 bp) segments in the mammalian genome that is under (purifying) selection can be estimated to be about 5%. This proportion is much higher than can be explained by protein-coding sequences alone, implying that the genome contains many additional features (such as untranslated regions, regulatory elements, non-protein-coding genes, and chromosomal structural elements) under selection for biological function.
Jehu
December 10, 2006 at 11:14 PM PDT
Jehu (#43): "It has been over 2 years since the paper was published and I can't find any evidence that anybody has even suggested a function."
DaveScot (#53): "I'm very surprised that a lot more research into this hasn't been undertaken in the intervening two+ years."
In a recent paper (Pennacchio L.A. et al. (2006): In vivo enhancer analysis of human conserved non-coding sequences. Nature 444(7118):499-502) Rubin’s group presents evidence that 45% of ultra-conserved sequences they have analyzed “functioned reproducibly as tissue-specific enhancers of gene expression at embryonic day 11.5”. Thus one can hardly claim that ultra-conserved sequences lack functions.
sparc
December 10, 2006 at 07:56 PM PDT
The longer the DNA string, the higher the percentage conserved, or the longer the time involved, the worse it is for the Darwinist. The immediate reproductive benefit must be very high to offset the cost of conservation here. It seems to me that in order for such a long section of code to be conserved for so long, you need a much higher immediate reproductive advantage than 0.0001%.
fifthmonarchyman
December 10, 2006 at 10:28 AM PDT
PaV says: Does this constitute an ID experiment? I say: you bet it does. And a cheap one at that; I think it could be conducted purely by computer. So much of the data has already been collected, and so much of it is purely mathematical and not open to interpretation. We already have:
* the background mutation rate
* the theoretical number of generations between the two species
* the minimum information content in the ultra-conserved DNA
* the minimum benefit necessary for natural selection to conserve the strand in question
It looks like all you need to do is devise a formula and plug in the numbers. We might even be able to use Dembski's filter. This is all over my head, but it should be easy for the math geeks here.
fifthmonarchyman
December 10, 2006 at 08:57 AM PDT
I should mention that this experiment was brought up at Panda's Thumb quite a while back, and they just turned their noses up at it. http://www.pandasthumb.org/archives/2005/11/we_are_as_worms.html Go to post #60967.
PaV
December 10, 2006 at 06:05 AM PDT
Jehu: "But how much benefit must a mutation confer in order for it to be fixed?" I was asking myself the same question. I think it's actually quite low. However, this kind of knock-out experiment, combined with what we're finding about about siRNA and regulatory functioning of such RNA's, I believe, throws all of the mathematical basis for the Modern Synthessi for a loop. The mathematics developed, of course, when it was thought that "genes" entirely determined organisms. Well, it appears there's much more to phenotypes than the simple expression of genotypes (understood as the "coding" portions of DNA). This is a brave new world. Dave Scot: "You’ve got the right idea though. Insects probably have CNGs like these in common with arthropods and those would be two good candidates for comparison with much faster life cycles than vertebrates. " Does this constitute an ID experiment? :)PaV
December 10, 2006 at 05:21 AM PDT
Jehu, it doesn't have to confer any benefit to become fixed. Benefit helps it become fixed, but a useless mutation can become fixed simply because that cell line was strong for some other reason, or just lucky. The point here isn't how the CNGs became fixed. It's how they were conserved for tens or hundreds of millions of years without being blasted into unrecognizability by random mutations. Unless some important and somewhat immediate function can be found for all these CNGs, the inevitable conclusion is that there's something other than natural selection at work conserving them. The front-loading hypothesis predicts that some mechanism for conservation other than natural selection must exist to conserve DNA with no immediate purpose; otherwise information stored for future use would be destroyed before it was actually employed.
DaveScot
December 10, 2006 at 01:02 AM PDT
Dave, thanks. I actually had the opportunity to see the Rosetta Stone up close in the UK, led by a professor of history on a walking tour. It was a fascinating walk back through time.

Re: testing... what about sea urchins? I've been fascinated by sea urchins ever since I found they're in the same deuterostome group with us humans. To me, that is as weird as voles. I don't remember anyone posting the following info, but I thought it just as interesting in relation to comparative testing, and it appears to have been a favorite for some time now. Sea urchins share 7,000 genes with humans, including those linked to Parkinson's and Alzheimer's. Plus, "Another surprise is that this spiny creature with no eyes, nose or ears has genes involved in vision, hearing and smell in humans." http://www.eurekalert.org/pub_releases/2006-12/uocf-sug120706.php

I wonder how they match up with the mouse and human genome regions tested now? And what is predicted? By evolution? "Sea urchins are echinoderms, marine animals that originated more than 540 million years ago." Well, I just found this: 70% of the sea urchin genome matches the human genome, in comparison to 40% of the fruit fly's. The project was coordinated by Baylor, btw, with over 200 scientists working worldwide. http://www.livescience.com/animalworld/061109_urchin_relatives.html

The tree is looking rather bushy. This does not compute to a normal mind, does it? I'm being asked to believe a sea urchin matches better with me than a fly, which is not something I want to resemble anyway. But the absurdity of current genome statistics tells us this does not compute at all in terms of morphology. At least the fly has fully expressed legs and eyes. Flash presentation on the sea urchin and other papers: http://www.sciencemag.org/cgi/content/summary/sci;314/5801/938a

I don't see how NDE can get away from the bushy aspect of multiple lifeforms springing up out of sea and land.
Michaels7
December 9, 2006 at 03:31 PM PDT
bFast
[M]y discussions with Mesk over at telicthoughts (he’s clearly an evolutionist, but he’s got some humility in his bones, and a scientist’s curiosity) this is not so. He suggests that if a mutation has a deleterious effect as small as 0.0001%, it will not fix in the population. If it doesn’t fix in the population, it will weed out.
But how much benefit must a mutation confer in order for it to be fixed? Also, what does 0.0001% deleterious mean? Does it mean it prevents 1 in 10,000 from reproducing? If that is what it means, I don't believe the figure. I would want to see some experimental data to support it, and not just circular reasoning that attempts to justify what we observe in the genome with NDE or "evo-devo" theory.
Jehu
December 9, 2006 at 02:48 PM PDT
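For what it's worth, Jehu's question of how much benefit (or harm) it takes before selection rather than drift decides a mutation's fate has a standard population-genetics treatment in Kimura's fixation-probability formula; the sketch below is textbook background, not a derivation from anything in this thread, and the population size is an arbitrary illustration.

```python
# Kimura's diffusion approximation for the fixation probability of a new
# mutation -- standard population-genetics background, offered here only to
# put rough numbers on the "how much benefit is enough?" question above.
import math

def p_fix(s, N):
    """Fixation probability of a new mutant with selection coefficient s in a
    diploid population of effective size N (initial frequency 1/(2N))."""
    if s == 0:
        return 1.0 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000  # illustrative effective population size (an assumption, not data)
for s in (0.0, 1e-6, 1e-4, 1e-2):
    print(f"s = {s:g}: P(fix) = {p_fix(s, N):.2e}")
# Whether a given |s| matters relative to drift depends on N: roughly,
# selection dominates only when |s| exceeds about 1/(2N).
```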
