Uncommon Descent Serving The Intelligent Design Community

Missense Meanderings


Missense Meanderings in Sequence Space: A Biophysical View of Protein Evolution
Mark A. DePristo, Daniel M. Weinreich and Daniel L. Hartl

“Taken as a whole, recent findings from biochemistry and evolutionary biology indicate that our understanding of protein evolution is incomplete, if not fundamentally flawed.”

Abstract | Proteins are finicky molecules; they are barely stable and are prone to aggregate, but they must function in a crowded environment that is full of degradative enzymes bent on their destruction. It is no surprise that many common diseases are due to missense mutations that affect protein stability and aggregation. Here we review the literature on biophysics as it relates to molecular evolution, focusing on how protein stability and aggregation affect organismal fitness. We then advance a biophysical model of protein evolution that helps us to understand phenomena that range from the dynamics of molecular adaptation to the clock-like rate of protein evolution.

Summary:

- In addition to functional properties, proteins have a wide range of biophysical characteristics, such as stability, propensity for aggregation and rate of degradation. These properties are at least as important as function for cellular and organismal fitness.

- Proteins tolerate only narrow ranges of stability, aggregation propensity and degradation rate. Many individual missense mutations perturb these traits by amounts that are on the same order as the permissible range of values, and are consequently common causes of human genetic disease.

- The narrow range of tolerance of deviations from optimum characteristics and the significant effects of mutations give rise to a substantial degree of epistasis for fitness. Moreover, mutations simultaneously affect function, stability, aggregation and degradation. For these reasons, mutations might be selectively beneficial on some genetic backgrounds and deleterious on others.

- Mutations that change function often do so at the cost of protein stability and aggregation. Compensatory mutations therefore function by relieving the biophysical strain that is introduced by adaptive mutations.

- We propose a new model of protein evolution that is reminiscent of a constrained ‘random walk’ through fitness space, which is based on the fitness consequences and distribution of mutational effects on function, stability, aggregation and degradation.

- This model can account for both the micro-evolutionary events that are studied by biochemists and the long-term patterns of protein evolution that are observed by evolutionary biologists.

—–

Taken as a whole, recent findings from biochemistry and evolutionary biology indicate that our understanding of protein evolution is incomplete, if not fundamentally flawed. The neutral theory of molecular evolution1, which states that all mutations that reach FIXATION in a population are selectively neutral, appeals to evolutionary geneticists in part because it can account for the approximately constant rate of protein evolution. However, its premise that most missense mutations are selectively neutral has been systematically rejected by protein biochemists, who recognize instead that almost all missense mutations have large biophysical effects2. Indeed, nucleotide sequence analyses have uncovered pervasive positive selection for amino-acid replacements3–5.

Another important challenge to evolutionary theory, which emphasizes the independent and additive effects of mutations, arises from studies of compensatory evolution. Here the deleterious effects of mutations are rapidly and effectively compensated by conditionally beneficial mutations. Compensatory mutations often occur in the same gene as the initial deleterious mutation, are common in ADAPTIVE EVOLUTION6–8 and have an important role in many human diseases9. There are currently no models that reconcile the constant rate of protein evolution with the biochemical reality that missense mutations have large, context-dependent effects and that few, if any, are selectively neutral.

There is a growing appreciation of the role that the biophysical properties of protein stability, aggregation and degradation have in FITNESS and disease10 (TABLE 1). Moreover, these properties have been identified as significant factors in many cases of adaptive8,11,12 and compensatory evolution13–15. These properties, and not function, seem to be the forces driving much of protein evolution.

Here we review the literature on biophysics as it relates to molecular evolution, with a particular focus on how missense mutations affect protein stability and aggregation. We then develop a biophysical model of protein evolution that helps to explain such diverse phenomena as compensatory mutation, the dynamics of molecular adaptation and the rate of protein evolution. Throughout this review, we bring together the fields of protein biophysics and molecular evolution by highlighting the shared questions, complementary techniques and important results concerning protein evolution that have come from both fields.

Comments
"selection. If the mice created by Nobrega et al. were just 0.01% less fit, natural selection would easily weed them out. Just because they couldn’t detect a difference in growth rate, doesn’t mean that one doesn’t exist (DaveScot would call it an argument from ignorance). " 3% of a nearly 3 to 4 giga base pair genome is a huge number. We're talking on the order of 90 to 120 million base pairs. Even if it had a 1% decrease in fitness for 120 million base pairs, each base pair in isoloation would be on average a selective disadvantage of only 1%/120,000,000, meaning it's too dilute to be detected or too weak to prevent change. That's a crude analysis, and there are even more problems which I don't have energy to get into. The arguments from ignorance come from the Darwinists: "you can't prove Darwinism wrong therefore Darwinism is true". scordova
"10 raised to the 77th power is thus 10^77." Thanks. That should help. :) ^ 4 = :) :) :) :) MGD
"DaveScot would call it an argument from ignorance" No, Logic 101 calls it an argument from ignorance. Enroll should you ever get to college. DaveScot
MGD ASCII tradition uses an up arrow to denote powers. 10 raised to the 77th power is thus 10^77. Even talking about RM "searching" sequence space for functional proteins is a misnomer. A search implies a goal. Random mutation has no goal. It's an aimless wandering. Using the term "search" implies there's a solution being sought for a problem. At best random mutation creates solutions that are seeking a problem. Sort of like the discovery of teflon. Chemists were seeking something else (I forget what) when they stumbled onto the formula for teflon. They noticed it had some unique properties then went about finding problems those properties might solve - a solution looking for a problem. Random mutation doesn't even do that as it has no way of testing newly formulated proteins for efficacy at some task. What's supposed to happen - every new protein is tried out to see how it works as a neurotransmitter, antibiotic, gas transport, coagulant, ad infinitum? Hardly. And that's only one protein. In reality most biologic functions are carried out by suites of proteins working together in concert. The whole concept of RM+NS finding the sequences for these proteins is ridiculous on the face of it. RM+NS is categorically an argument from ignorance. No other mechanism for the synthesis of interdependent protein complexes has been discovered so, as Sherlock Holmes put it "When you've ruled out the impossible, whatever remains, no matter how improbable, is the answer". The problem is there IS one other possibility aside from RM+NS and that's intelligent agency. From Paley's time (watchmaker argument circa 1800) until the late 20th century intelligent agency in directed, goal oriented synthesis of novel proteins was just a theoretical possibility. Not any more. Now it has been proven possible by modern biochemists and genetic engineers tinkering with genes to produce novel proteins and exploring the properties of the resultant molecules. The holy grail of being able to predict those properties from any arbitrary amino acid sequence hasn't quite been achieved but there's no reason to believe it is an unsolvable problem. It's this recent experimental work proving that genes can be purposely manipulated by intelligent agents that has given a new life and strong legs to the old watchmaker argument. It's no longer a question of whether intelligent agency CAN guide evolution but rather DID it guide evolution. Here's where Dembski comes along with a scientific process (well, actually mathematical process applied to scientific data) that claims to be able to reliably distinguish the work of intelligent agency from the work of chance and necessity. Chance worshippers are understandably apoplectic that their own hard earned data has falsified the very core of their materialist faith. DaveScot
Salvador, "Results from systematic null mutations of all genes on chromosome V of S. cerevisiae show that almost 40% of all yeast genes have little or no detectable effect on growth rate in 5 different environments" (from A. Wagner. 2000. Nature Genetics. 24:355-361.) I think the work by Nobrega et al. just goes to show that the laboratory does not capture all the vagaries of the environments that organisms find themselves competing in. Additionally, a yeast (or a mouse) that is 99.99% as fit as wildtype will look like wildtype in the lab, but will be effectively lethal in the eyes of natural selection. If the mice created by Nobrega et al. were just 0.01% less fit, natural selection would easily weed them out. Just because they couldn't detect a difference in growth rate, doesn't mean that one doesn't exist (DaveScot would call it an argument from ignorance). cambion
More from the same article: "In the second scenario, neo-Darwinists envisioned novel genes and proteins arising by numerous successive mutations in the preexisting genetic text that codes for proteins. To adapt Dawkins's metaphor, this scenario envisions gradually climbing down one functional peak and then ascending another. Yet mutagenesis experiments again suggest a difficulty. Recent experiments show that, even when exploring a region of sequence space populated by proteins of a single fold and function, most multiple-position changes quickly lead to loss of function (Axe 2000). Yet to turn one protein into another with a completely novel structure and function requires specified changes at many sites. Indeed, the number of changes necessary to produce a new protein greatly exceeds the number of changes that will typically produce functional losses. Given this, the probability of escaping total functional loss during a random search for the changes needed to produce a new function is extremely small--and this probability diminishes exponentially with each additional requisite change (Axe 2000). Thus, Axe's results imply that, in all probability, random searches for novel proteins (through sequence space) will result in functional loss long before any novel functional protein will emerge. Blanco et al. have come to a similar conclusion. Using directed mutagenesis, they have determined that residues in both the hydrophobic core and on the surface of the protein play essential roles in determining protein structure. By sampling intermediate sequences between two naturally occurring sequences that adopt different folds, they found that the intermediate sequences “lack a well defined three-dimensional structure.” Thus, they conclude that it is unlikely that a new protein fold via a series of folded intermediates sequences (Blanco et al. 1999:741)." See original article for citations. MGD
"I’m willing to accept the former (until the data comes in) on purely intuitive grounds…" Is intuition anything like faith? My intuitions are quite different. "Cassette mutagenesis experiments performed during the early 1990s suggest that the probability of attaining (at random) the correct sequencing for a short protein 100 amino acids long is about 1 in 1065 (Reidhaar-Olson & Sauer 1990, Behe 1992:65-69). This result agreed closely with earlier calculations that Yockey (1978) had performed based upon the known sequence variability of cytochrome c in different species and other theoretical considerations. More recent mutagenesis research has provided additional support for the conclusion that functional proteins are exceedingly rare among possible amino acid sequences (Axe 2000, 2004). Axe (2004) has performed site directed mutagenesis experiments on a 150-residue protein-folding domain within a B-lactamase enzyme. His experimental method improves upon earlier mutagenesis techniques and corrects for several sources of possible estimation error inherent in them. On the basis of these experiments, Axe has estimated the ratio of (a) proteins of typical size (150 residues) that perform a specified function via any folded structure to (b) the whole set of possible amino acids sequences of that size. Based on his experiments, Axe has estimated his ratio to be 1 to 1077. Thus, the probability of finding a functional protein among the possible amino acid sequences corresponding to a 150-residue protein is similarly 1 in 1077." from: http://www.discovery.org/scripts/viewDB/index.php?command=view&id=2177 That's 10 to the 77th power,I dont know how to fix the exponents. MGD
PaV, I guess I should have made myself more clear, because this isn't what I was trying to get at. The basic idea is that we don't want to calculate the chances that evolution by natural selection produces this particular kinase (or, as you put it, 1 of the million possible sequences that produce an equivalent kinase), but instead we want to calculate the chances that evolution by natural selection produces *something*. My point was that in a given timeframe, RM+NS might only have a 1 in a million chance of producing a protein kinase, but will have a very good chance of producing something new and complicated and interesting. Asking what are the chances that evolution by natural selection arrived at this particular state of the world (or an isometric equivalent), is asking the wrong question. Things could have turned out any number of different ways. "Yet the kinds of probabilistic calculations that ID makes–and I see no reason to think them far-fetched–would say that the chance of one such kinase coming about, strictly by chance, would be in the order of 10^-150. " The chances of creating a random protein 100 amino-acids long and having it match exactly some specified sequence are indeed astronomically small. In this case, (1/20)^100 = 7.9 x 10^-131. However, this is not what evolution by natural selection must accomplish. Proteins are built from other proteins (novel proteins often built from gene duplicates). The protein sequences that encode a digestive enzyme and a protein kinase may be very similar. In other words, natural selection begins with many seeds in search space, all of which are known to have some function. These functional proteins are likely to lie near other functional proteins (just from protein folding arguments). If randomly changing 3 amino-acids of this digestive enzyme (though keep in mind it could have been any other protein in the genome) gives a little bit of kinase function (and that little bit of function is advantageous), then natural selection will take over from here and mold the novel kinase to have greater and greater activity, by just flipping one amino-acid at a time. As long as functional proteins of different varieties lie somewhat near each other in search space, natural selection should do okay finding novel variants. However, if functional proteins are randomly assigned to the search space, natural selection will be frustrated in its hill-climbing attempts (I believe this is roughly the argument of No Free Lunch). I'm willing to accept the former (until the data comes in) on purely intuitive grounds... cambion
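A quick check of the back-of-envelope number in the comment above, i.e. the exact-match probability that the comment argues is the wrong quantity to ask evolution to achieve:

```python
from math import log10

# Probability that a uniformly random 100-residue chain matches one exact target sequence.
p_exact_match = (1 / 20) ** 100
print(f"(1/20)^100 = {p_exact_match:.1e}")   # ~7.9e-131, as stated above
print(f"= 10^{log10(p_exact_match):.1f}")     # ~10^-130.1
```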
Dave Scott wrote: "Salvador The root of the “problem” is that chance worshippers haven’t been able to imagine any mechanism other than natural selection that has the ability to conserve DNA sequences over long periods of time. " Exactly! That is why the researchers who did the knockout experiments were astonished. These "conserved" regions should have been utterly scrambled by now! There are some possible explanations, some more palatable than others: 1. The conserved regions are enforced through some sort of error correcting mechanism, but then this raises the deeper question, WHY? And how could such an enforcement mechanism evolve in the first place, and then it must somehow still square with the molecular clock and hierarchical patterns! Do we have a hierarchically architected enforcement mechanism that spans all the species? At some point this looks no different than appeals to astrology or (gasp) intelligent design. 2. If the enforcement mechanism exists, then this is going to absolutely overturn evolutionary biology which relies on a certain degree of undirected variability. Enforcement of patterns at this scale is anathema to organic evolution! 3. Front loaded evolution or special creation, but I'm not about to touch special creation today! Salvador scordova
cambion: "For example, with enough information, you could theoretically calculate the chance that RM+NS would have brought a particular protein into being, let’s say a novel protein kinase involved in a signally cascade. Let’s say we find that this chance is 1 in a million. The difficulty here is that there could be a million other possible proteins each with a 1 in a million chance of being arrived upon by RM+NS. Just because the chance of this particular realization is low, does not mean the chance of any realization is low." Well, this is the nub of the issue. You seem to suggest that maybe a "million" possible configurations could serve as a "kinase." And, so, I'm just guessing, I suppose you want to say that if there is a million (10^6) such viable permutations, and the chance of one such permutation coming about is one in a million, then 10^6 X 10^-6=1. Yet the kinds of probabilistic calculations that ID makes--and I see no reason to think them far-fetched--would say that the chance of one such kinase coming about, strictly by chance, would be in the order of 10^-150. When you multiply this by 10^6, it's still 10^-144. This is unimaginably small. How do you get around such calculations? PaV
DaveScot, ((I only said I didn't want to talk to you if you continued to ignore my explanation of neutral evolution in a subset of sequences. I should have given up on coming to a common understanding regarding this a while ago. It would have saved me a lot of frustration.)) One parting volley though... Your "front-loaded" theory of evolution goes something like: Step 1: Intelligent life elsewhere in the galaxy / universe create an uber-genome and enclose it in a very special 'first egg'. Step 2: One (or more) of these 'first eggs' land on earth somewhere around ~3.7 billion years ago. Step 3: The information contained within this uber-genome unfolds into the phylogeny of life on earth, eventually resulting in additional intelligent life. How do you explain the existence of the first designers? Many ID folks can just turn to metaphysical arguments for the existence of 'first mover.' You, however, do not have that luxury. If these first designers were the products of evolution by natural selection (or by a yet undiscovered natural mechanism), that would mean that natural selection had to have worked in their case. And if it worked for them, why can't it have worked for us? cambion
Salvador The root of the "problem" is that chance worshippers haven't been able to imagine any mechanism other than natural selection that has the ability to conserve DNA sequences over long periods of time. Intelligent agency however can conserve anything it wants to conserve. Intelligent agency doesn't depend on death or disability as the sole error detection mechanism. Error checking mechanisms of any arbitrary reliability can be applied to any desired data by intelligent agency. The nature of the data is not restricted to immediate life critical kinds as it is with natural selection. Such error detection is basic stuff for any intelligent agents familiar with computer architecture. ;-) Unfortunately for them chance worshippers a priori rule out intelligent agency in nature. They'll spin their wheels forever trying to spin a gross concoction of ad hoc crappola trying to explain what's readily, easily, and intuitively obvious to anyone that isn't bound by chance worship. It's really pretty darn funny watching them spin in circles. Who says epicycles died when Copernicus was born? They're alive and well in evolution today. ROFLMAO! DaveScot
For anyone not in denial: Functional protein classes required for a (theoretical) minimal free living cell: translation, ribosome structure, biogenesis, transcription, replication, recombination, repair, metabolism, chaperone functions, secretion, cell division, cell wall biogenesis. Every cell in the human body (excepting red blood cells) contains all the genes required to construct all the protein classes described above. Ergo, you can get a bacterial genome from a human genome but the converse is not true as bacterial genomes don't contain genetic information for specialized cell types, tissue types, organs, and body plans. Anyone in denial can talk to the hand. Thanks. DaveScot
Cambion says "I don’t want to talk to you" Wonderful. I don't want to talk to you either. Sounds like a win-win deal. Let's leave it there. You have my permission to declare unilateral victory too if you feel like injecting a bit of humor. DaveScot
There is a looming problem for neutral theory which the selectionists are all too happy to point out: http://www.nature.com/news/2004/041018/full/041018-7.html Here is what happens to mice with 3% of their genome knocked out through genetic engineering. These "conserved" regions were believed to be absolute proof of selection value. Well there's a problem. It didn't seem to create any selective effect when the regions were knocked out. "More than 90% of the genome of organisms such as mice and humans does not appear to code for any proteins. And yet this DNA shows striking similarities between species. If they had no function, over time mutations would scramble the sequences. Why have these bits of the genome remained so highly conserved?" (I apologize that this article is only for purchase from Nature, but it was the best I could do.) These regions have been empirically shown to have low selective value. I alluded to the problem above with low interspecific (between species) divergences, but here is a failure for neutral theory. As I said, both neutralists and selectionists have found fatal flaws in each other's theories. There is Mutually Assured Destruction (MAD) of each theory, and I believe that is by design. Salvador scordova
DaveScot, If it is "inconsequential" anyway, will you agree with my explanation of the patterns of substitutions at synonymous and ancestral repeats in the mouse and human genomes? cambion
For example, with enough information, you could theoretically calculate the chance that RM+NS would have brought a particular protein into being, let's say a novel protein kinase involved in a signaling cascade. Let's say we find that this chance is 1 in a million. The difficulty here is that there could be a million other possible proteins each with a 1 in a million chance of being arrived upon by RM+NS. Just because the chance of this particular realization is low, does not mean the chance of any realization is low. cambion
PaV, I'm glad we seem to be on the same page regarding the ability of natural selection to prune disadvantageous phenotypes. Thus giving a dynamic genome, but static phenotype (as long as the environment doesn't change too horribly). A very short (and incomplete) answer to your very important question: "But if NS maintains this equilibria, then how does the genome come into existence in the first place?" This is where all the action is. The window that natural selection acts upon is very narrow. It only sees what is immediately advantageous. Can a long series of immediately advantageous mutations move a genome from one adaptive (as in fit to its environment) phenotype to a novel adaptive phenotype of greater complexity? This is a very complicated question. Demonstrating conclusively one way or the other will be incredibly difficult. And the probability calculations become very tricky due to the fact that the sequence of events could have conceivably happened any number of ways. It just so happens that we saw this particular realization. cambion
Inconsequential noise is, by definition, without consequence. Anything without consequence is not worth belaboring. There's random noise in the universe. It's all over the place. Things of interest are signal, not random noise. The only thing you do with noise is figure out how to subtract, cancel, or otherwise ignore it so you can see the signal. Subtract all point mutations which, despite decades of focus, have not been demonstrated to do anything in particular that's constructive. Subtract the random noise and focus on what's left over. Therein lies the answers. DaveScot
The gene hypothesis of evolution is pure speculation still alive today merely through inertia. At one time over-anxious scientists thought everything was regulated by coding genes. This naive early assumption has been proven quite false. The marked difference between species is chromosomal and positional in nature. Science has little understanding of position effect yet. An important clue in how speciation really takes place is the demonstrated ability of XX females to reproduce asexually and/or develop into perfectly functional males without a Y chromosome. Everything required for reproduction is contained in the female genome. Speciation in the fossil record appears to be a matter of saltation. There is no fossil record of continual gradual change nor is there any continuum of extant species today separated by minuscule phenotypical differences. The mechanism is still a mystery but semi-meiosis (meiosis interruptus if you will) appears to me to be the only plausible candidate mechanism for saltation. Unfortunately science has been blinded by all-powerful gene theories of one flavor or another where all major organic change is accomplished by accumulation of minuscule coding gene mutations. No amount of evidence to the contrary seems able to shake this entrenched belief. Faith is indeed a powerful thing. Multitudinous point mutations between genes can accumulate without speciation and relatively few point mutations can exist between different species. There is virtually no correlation between speciation and number of different point mutations. There is strong correlation between point mutations and total elapsed time. There is also a strong correlation between speciation and total elapsed time. The two dots don't logically connect. Speciation via accumulation of point mutations over time is a non sequitur. Point mutations accumulate over time, different species accumulate over time, but there is no demonstrated connection between the two. DaveScot
The 'neutral theory' (not my word for it) as it applies to protein sequences is complicated and weird. I have never denied that. There are many competing explanations regarding what is going on with protein sequence change. I'm just trying to find some common ground here. Presenting one of the simplest evolutionary models that I know of: neutral evolution at synonymous sites and ancestral repeats. Here, there is no debate (within the scientific community). And I am not trying to say that these changes are anything more than "inconsequential noise". I'm just seeking to explain some of the patterns we see in the genomic data. Your lambasting of the neutral theory of protein evolution does not speak to the issue at hand... cambion
cambion wrote: "Almost all mutations that result in a change in phenotype will be neutral and hence will not drift to fixation. Natural will prune them, before they ever get that far. Thus, the genome is dynamic, but the phenotype is static." You seem to be saying that the genome and phenome are in static equilibrium (more or less) due to the work of natural selection(I'll assume you meant to say this in sentence 2). But if NS maintains this equilibria, then how does the genome come into existence in the first place? I think you'd probably answer that this happens as the genome walks itself through the genomic search space until the genome finds the right environment and vice-versa. But, in fact, that's what ID would refute. There doesn't even exist the possibility of making such a "random walk" since the relevant search space is so large. PaV
DaveScot, What exactly are you saying here? The neutral theory is effectively a null hypothesis. Are you saying that you don't agree with Kimura's results about the behavior of a neutral mutation in a population, such as the results shown here (http://www.genetics.org/cgi/reprint/47/6/713.pdf)? taut tautologydna
Neutral drift has not been shown to be either more than inconsequential noise or less than an important motive force in diversification - obfuscatory jingoistic technobabble notwithstanding. It began as a legitimate scientific hypothesis based upon a belief that so-called neutral mutations (which are presumed to be neutral by lack of evidence to the contrary, not demonstrated to be neutral) occur at a constant rate. It has since been demonstrated that these mutations do not proceed at a constant rate. Instead of abandoning the hypothesis as proper science does with hypotheses that don't pan out, it is getting propped up by ad hoc modifications that explain its failed predictions. Evolution in general is a huge conglomeration of ad hoc explanations on an order that would make paranormal researchers blush. Neutral evolution is a smaller example of the larger problem. DaveScot
Oops... Almost all mutations that result in a change in phenotype will be neutral and hence will not drift to fixation. ---> Almost all mutations that result in a change in phenotype will lose fitness and hence will not drift to fixation. cambion
DaveScot, I don't want to talk to you until you engage my explanation of neutral evolution in synonymous sites and ancestral repeats in the mouse and human genomes. Just saying "I showed all kinds of holes" will not cut it. You can either concede that the data fits a model of neutral evolution at synonymous sites, or you can show me how I am mistaken. Keep in mind that neutral evolution at a subset of sites in the mouse and human genomes would in no way invalidate your theory of "front-loaded" evolution. I would just like to know that we can discuss this little concrete thing and actually get somewhere in conveying our viewpoints to one another... That said, I'll cheat a bit and respond to your previous post (but no more until you finally address neutral evolution): "While you can’t get a human genome from a bacteria without adding code you can certainly get a bacteria from a human genome without adding code." This is patently false. Bacteria have many unique non-homologous genes when compared to eukaryotes, and those genes that are shared differ considerably. I suspect that the amount of code that would be needed to go from a human genome to a bacterial genome is roughly 70%-80% the size of a bacterial genome. "A good way of looking at genomes is imagining they’re like a deck of cards and species are like hands of cards. You don’t need a new deck to get a new species, you just need to shuffle and deal a new hand. There’s a nearly infinite number of unique hands that can be dealt." If your shuffling analogy were true, it would provide strong evidence against evolution by natural selection, and support for some sort of "front-loaded" evolution. However, this is not what we see. Each genome is unique; there are some base pairs that are the same and there are some that are different. But even when two organisms share a gene (by common ancestry) that gene will have diverged in sequence between the two species. I think that a branching process is a much better metaphor than a deck of cards. cambion
I was actually trying my very best to give a decent figure for the amount of DNA that would need to be present in the uber-genome. "3% of the human genome codes." If you say that only the protein coding genes and their regulatory regions are important for creating an organism, the amount of DNA required drops substantially (the mouse genome paper estimates that 5% of the human genome is functional). However, I think DaveScot might argue that the other 95% is absolutely necessary... I was, of course, saying that the entire genome is specified. "Between chimps and humans there’s less than 3% difference in the coding portions (genes)." It's true. But this actually adds up pretty quickly. Almost half of the base pairs between mice and humans have been altered, that is 1.65 billion base pairs for just 90 million years. I was really trying to be conservative in this regard by using the yeast genome as a template, as it contains basically no junk DNA and is *one thousand* times smaller than the human genome. So, I was hoping that using the really small genome would balance out the base pairs that are shared between species. Also, these calculations were not at all taking into account the program that would be necessary to specify when and how to implement changes in the genome. I would assume that this program would have to be fairly bulky, but I don't know. So, overall, I would stick to my estimate of 400,000 times the size of the human genome, but there are so many factors here that it basically makes it impossible to say for sure. cambion
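A sketch of how the "400,000 times" figure can be reproduced from numbers quoted in this thread (the 1.2 x 10^15 bp total from the earlier calculation, a ~12 Mbp yeast genome and a ~3.2 Gbp human genome); all inputs are the thread's assumptions, not measurements.

```python
# Reproducing the rough size estimate from the thread's quoted figures.
total_bp = 1.2e15            # total uber-genome size quoted earlier in the thread
yeast_genome_bp = 1.2e7      # ~12 Mbp, the compact per-species template assumed above
human_genome_bp = 3.2e9      # ~3.2 Gbp

implied_genomes = total_bp / yeast_genome_bp   # ~1e8 yeast-sized genomes
fold_vs_human = total_bp / human_genome_bp     # ~3.75e5, i.e. roughly "400,000 times"

print(f"Implied number of yeast-sized genomes: {implied_genomes:.1e}")
print(f"Uber-genome relative to the human genome: ~{fold_vs_human:,.0f}x")
```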
PaV, I would really love to be able to find a little common ground for us to stand on. I was trying to present the neutral evolution of a subset of sites in the mouse and human genomes as a very concrete example of something we do know about how evolution works (when you start talking about genes and functions ala the "missense" paper, things get a lot more complicated). As you point out, drift is a very powerful force in shaping genomes, essentially causing genomes to explore the space of sequences that map to the same phenotype. If we compare the human and chimp genomes, we find around 30 million base pair differences (that's using the commonly cited 99% identical figure - I actually haven't checked to see what the chimp genome paper says). I would wager that 29 million 990 thousand make no phenotypic difference. Only around 10 thousand are meaningful. Neutral evolution moves genome sequence much more quickly than natural selection is able to (see Haldane's Dilemma). "Now, if you like, you can work in the theory of punctuated equilibria into this as well, saying that PE represents “neutral” mutations which, in a small, isolated population confers fitness in the new, and different, environment the population now finds itself in." I don't think that these neutral mutations end up conferring fitness in novel environments. I would guess that PE is brought about by how selection acts upon organisms that fill stable niches in complex ecosystems. Most of the time, selection acts to maintain morphological form (and hence ecological niche), but sometimes things get pushed around and niches are created, lost or changed. Then selection acts to adapt the organism to the new status quo. However, evolution of species and morphological change is not really my thing... "But this is so transitory a situation that it boggles my mind how organisms could maintain their identity over time." Almost all mutations that result in a change in phenotype will be neutral and hence will not drift to fixation. Natural will prune them, before they ever get that far. Thus, the genome is dynamic, but the phenotype is static. cambion
Evolution is a theory in crisis. Nothing in biology makes sense in light of an unguided, unplanned process where cumulative random mutation plus natural selection results in diversification of life. DaveScot
I'm actually somewhat sympathetic to the Neutral Theory; it at least has an honest-to-the-facts starting-off point. But, as this "Missense" article shows, most mutations can be considered "neutral" since, at the proteome level, "fitness values" are changing all the time (at the cellular level). Now, if the fitness level of neutral mutations is zero--that is, they confer NO advantage/increase of fitness to the individual, then, how can these "mutations" become "fixed" in the population? It's nothing but drift. Now, if you like, you can work the theory of punctuated equilibria into this as well, saying that PE represents "neutral" mutations which, in a small, isolated population confers fitness in the new, and different, environment the population now finds itself in. But this is so transitory a situation that it boggles my mind how organisms could maintain their identity over time. In other words, this pretty much represents what Darwin himself thought of the evolutionary process--lots of variation acted upon by the environment; but, of course, the fossil record is not one of huge numbers of intermediates, but rather one of enormous periods of "stasis". Additionally, this "neutrality", even as cambion pointed out himself in places, is a denial, effectively, of NS as a driving force in evolution. PaV
Dave Scott: Your last post forms the gist of the argument I was going to make. 3% of the human genome codes. Between chimps and humans there's less than 3% difference in the coding portions (genes). So there's a whole heck of a lot of room for more information. PaV
"While you can’t get a human genome from a bacteria without adding code you can certainly get a bacteria from a human genome without adding code." Well, I may misunderstand your point here, but you aren't going to get a nitrogen-fixing bacteria. The machinery of nitrogen fixation is bacteria specific. tautologydna
The calculation is utter crap and demonstrates either great ignorance or great dishonesty about the subject at hand. 200 genomes the size of a human genome is adequate for at least 200 phenotypic representatives from 200 distinct phyla. Since the human genome is larger than average it would actually be more than that. But that's just the beginning of how wrong Cambion was. A large fraction of all genomes are common between all living things and smaller genomes are subsets of the larger ones. While you can't get a human genome from a bacteria without adding code you can certainly get a bacteria from a human genome without adding code. A good way of looking at genomes is imagining they're like a deck of cards and species are like hands of cards. You don't need a new deck to get a new species, you just need to shuffle and deal a new hand. There's a nearly infinite number of unique hands that can be dealt. I suspect Cambion realizes this and is simply dishonest. DaveScot
Cambion wrote: "1.2 x 10^15 x 660 daltons x 1.66053886 x 10^-27 kg/dalton = 1.315 x 10^-9 kg or 1.315 ng in uber-dubia’s genome without compression So, without compression, we need a genome of around 400,000 times the size of modern day humans. I would say that this number would not be biologically feasible, one would need cells the size of beanbags… " I disagree with this calculation. If you're lurking, please respond before I go on. PaV
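A quick check of the arithmetic quoted above; whether the input figures are reasonable is the point under dispute in this thread, so the snippet only verifies the multiplication (660 Da is the standard average mass of a DNA base pair, and the 1.2 x 10^15 bp figure is the thread's assumption).

```python
# Reproducing the quoted mass calculation.
base_pairs = 1.2e15
daltons_per_bp = 660.0                 # average mass of one DNA base pair
kg_per_dalton = 1.66053886e-27

mass_kg = base_pairs * daltons_per_bp * kg_per_dalton
print(f"Mass: {mass_kg:.3e} kg")                 # ~1.315e-9 kg, as quoted
print(f"Mass: {mass_kg * 1e9:.3f} micrograms")   # note: 1e-9 kg is ~1.3 micrograms (quoted above as ng)
```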
Hi Salvador, I think you are confused about the molecular clock. The molecular clock is a null hypothesis for protein evolution. Evolutionary biologists reject the clock for protein sequences all the time when doing phylogenetics. No one is surprised by that. Another aspect of the clock that people find interesting is the fact that it is "overdispersed". If mutation were to happen according to a Poisson process, then the variance should be equal to the mean. This is often not the case. Here is an interesting reference about the "overdispersed clock" that might interest you. http://www.genetics.org/cgi/content/full/154/3/1403 Take care. tautologydna
For those not in denial... http://kimura.tau.ac.il/graur/ArticlesPDFs/graurandmartin2004.pdf Reading the entrails of chickens: molecular timescales of evolution and the illusion of precision Dan Graur1 and William Martin2 1Department of Biology and Biochemistry, University of Houston, Houston, TX 77204-5001, USA 2Institut für Botanik III, Heinrich-Heine-Universität Düsseldorf, Universitätsstraße 1, 40225 Düsseldorf, Germany For almost a decade now, a team of molecular evolutionists has produced a plethora of seemingly precise molecular clock estimates for divergence events ranging from the speciation of cats and dogs to lineage separations that might have occurred ~4 billion years ago. Because the appearance of accuracy has an irresistible allure, non-specialists frequently treat these estimates as factual. In this article, we show that all of these divergence-time estimates were generated through improper methodology on the basis of a single calibration point that has been unjustly denuded of error. The illusion of precision was achieved mainly through the conversion of statistical estimates (which by definition possess standard errors, ranges and confidence intervals) into errorless numbers. By employing such techniques successively, the time estimates of even the most ancient divergence events were made to look deceptively precise. For example, on the basis of just 15 genes, the arthropod–nematode divergence event was ‘calculated’ to have occurred 1167 ± 83 million years ago (i.e. within a 95% confidence interval of ~350 million years). Were calibration and derivation uncertainties taken into proper consideration, the 95% confidence interval would have turned out to be at least 40 times larger (~14.2 billion years). --more at link --anyone in a state of denial can talk to the hand :-) DaveScot
Hartl also points out in his book that molecular clocks must tick at a different rate for each protein! Thus we actually must invoke multiple molecular clocks! Further, the generation cycles for organisms are not the same. Fruit flies reproduce faster than humans. The clocks must have some mysterious mechanism to adjust mutation rates to the varying generation cycles. Or maybe, as Denton suggests, this should not even be a serious scientific theory if it needs this much jury rigging to keep it afloat. It is little more than a tautology that is trying to explain away the apparent design in the typological isolation between organisms. Further, a question I have outstanding, and have tried to get some post-docs to look at, is that "living fossils" like sharks have very low interspecific divergences in the very DNA loci that are supposedly subject to neutral selection. This seems to be a contradiction to the molecular clock hypothesis (a fact the selectionists are all too happy to point out to the neutralists!). The selectionist and neutralist camps continue to find fatal flaws in each other's theories. We will see how this plays out. I think more surprises are in store. I believe the sequence divergences were designed to resist purely naturalistic origins, and thus the molecular clock will ultimately fail as a hypothesis. Salvador scordova
DaveScot, You are obviously a very smart guy. It makes me sad that you are so content to live in a dream world of your own creation, rather than interfacing with the facts of the natural world around you. Your theory of "front-loaded evolution" is indeed possible, but in the same way that it is possible that invisible blue fairies exist all over the world, but we just have no means of detecting them. You posit quantum molecular computers, error-free replication mechanisms, and functions in noncoding DNA that operate on geologic timescales. None of these things have any evidence whatsoever of actually existing. Your theories have no basis in the scientific method that has brought us so far in the last 300 years. When I finally cornered you about neutral evolution in junk DNA in the human and mouse genomes, you decided to put your fingers in your ears and run. It doesn't give me much faith in your future dealings with scientific theory and argument... Still, I hope that some day you take the blinders off. Take care, - cambion cambion
In comment #32 I outlined my theory behind the evolution of neutral junk DNA (4-fold synonymous sites + noncoding ancestral repeats). Your responses:

Non-exact match between spontaneous mutation rate and rate of neutral evolution: Your last words on the subject: "Oops - that’ll teach me to drink and calculate. My burst. The numbers differ by a factor of 2. Let me read your link again. At any rate, you asked me for an example of an organism with enough DNA to be a preprogrammed ancestor to everything..." You dropped the argument, thus conceding that a factor of 2 is within the margin of experimental error.

Rate of neutral fixation: You said: "Surely you’re not asking me to believe that almost all mutations become fixed?" and "I have a problem with high percentage of fixation in population of neutral mutations. These should fix at a low rate unless in small isolated populations and/or increase fitness." I gave you the population genetic theory, showing that spontaneous mutation rate equals the rate of neutral evolution. You have not given a rebuttal to Kimura's equations.

Fossil calibration: You said: "How do we know that the human and mouse clock rates differ by a factor of 2? Because we’ve calibrated it with a 75 Myr FOSSIL dated divergence date." I showed how the factor of two is arrived at independently of fossil estimates using the consensus sequence of ancestral repeats. Also that the date from the fossils corresponds to the experimental determination of spontaneous mutation rate (i.e. no calibration, instead confirmation). You have made no rebuttals to these points.

Synonymous mutation and function: You said: "I have a problem with the assumption that point mutations that don’t change the amino acid specified are indeed neutral." I responded with: "You need a more concrete criticism than, 'well… they could have some function.'" Indeed, arguing that something exists without any evidence of it is against how science works. You have made no rebuttal here.

Variable molecular clocks: You said: "The clocks for different genes and different organisms exhibit a high degree of variabilitly." I gave you a peer-reviewed article in a high-class journal showing that the clock rates for neutral mutations (in this case synonymous mutations) do NOT differ significantly between mammalian species and genes within a genome. You have not rebutted this argument.

Increased mutation rate in response to stress in prokaryotes: You said: "This unpredicatable variance in molecular clocks may be due to a changing environment." I responded: "showed how spontaneous mutation of neutral sequence can elegantly explain the genomic divergences between mouse and human. How exactly does this speak to my presentation?" Indeed, you would need to posit that this also occurs in eukaryotes (with no evidence of such). Also, I showed that there was little variance in the silent clock. You have made no rebuttal here.

Conclusion: You haven't shown a single hole in my argument. cambion
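A minimal numerical sketch of the population-genetic identity invoked above (Kimura's result that the neutral substitution rate equals the spontaneous mutation rate, independent of population size); the mutation rate used is an illustrative value only.

```python
# Kimura's identity sketched numerically: in a diploid population of size N, about
# 2*N*mu new neutral mutations arise per generation, each fixes with probability
# 1/(2*N), so the long-run substitution rate is simply mu, independent of N.
mu = 1e-9   # assumed per-site, per-generation spontaneous mutation rate (illustrative)
for N in (1_000, 100_000, 10_000_000):
    new_mutations = 2 * N * mu   # expected new neutral mutations per site per generation
    p_fix = 1 / (2 * N)          # fixation probability of each new neutral mutation
    substitution_rate = new_mutations * p_fix
    print(f"N = {N:>10,}: substitution rate = {substitution_rate:.1e} per site per generation")
```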
DaveScot, Cambion is right here, I think. For one, in those types of sequences, there is tons of evidence that they are behaving according to neutral theory. Under a Wright-Fisher population model, there is an expected neutral "site frequency distribution", i.e. the frequency of singletons in a sample, doubletons etc. The polymorphism in those sequences behaves as one would expect if they were neutral. If a piece of junk DNA were deleterious, it would be at low frequency due to the action of natural selection. If they were advantageous, they would be expected to rise to high frequency quickly. However, many of these insertions appear to be at frequencies within populations that would be expected if they were neutral. Second, while in theory, for error correction, your strategy "Compute an arbitrarily long CRC during replication and if it doesn’t match the original you trash the copy" is foolproof, do you have any idea how energetically expensive it would be to replicate an entire genome, scan against the old copy, then trash the new copy if there is a single mistake? There would be no reproduction at all! Furthermore, even if that were energetically feasible (which it isn't), think of the time frame required? It takes about 1 minute for DNA polymerase to replicate a strand of DNA about 2 kb long in a PCR reaction. What happens if there is a mutation in the ORIGINAL copy that occurs during replication - the copy that is supposedly scanned? Then, what template do you use to correct from? Best regards. tautologydna
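A small sketch of the expected neutral site-frequency spectrum mentioned above; under the standard Wright-Fisher / infinite-sites model, the expected number of segregating sites at which the derived allele appears i times in a sample of n sequences is theta/i (theta here is an arbitrary illustrative value, not an estimate from real data).

```python
# Expected neutral site-frequency spectrum under Wright-Fisher / infinite-sites assumptions.
theta = 100.0   # population mutation parameter (4*N*mu*L); illustrative value only
n = 10          # number of sampled sequences

for i in range(1, n):
    expected_sites = theta / i
    print(f"derived allele present in {i} of {n} sequences: expected {expected_sites:.1f} sites")
# Tests for selection on putative junk DNA ask whether observed spectra depart from
# this 1/i shape (e.g. an excess of rare variants under purifying selection).
```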
I showed all kinds of holes. You are in denial. Conversation over. DaveScot
You're right that "Going from “no known function” to “no function” is a logical fallacy called an argument from ignorance." However, that is not what I did. The logic goes: 1. These sequences have no known function. 2. Sequences with no known function should evolve at a rate equal to the rate of spontaneous mutation (ala Kimura). 3. Measured rates of evolution of these sequences correspond nicely to the rate of spontaneous mutation. Thus, we can conclude that they have no function. What is so difficult here? Also, you just said: "You started out calling it “junk DNA” and when I called you to the carpet on that you changed to 'neutral sequences' " You didn't call me on anything. Just a few comments up in #58 I say: "repeated sequences in the noncoding “junk” DNA behave just like we would predict functionless DNA to behave" I've been using "neutral sequences" and "junk DNA" interchangably. You still haven't shown a single hole in my explanation of the patterns of evolution in junk DNA in the mouse and human genomes. cambion
DaveScot: "They vary by gene, by selection pressure, by fixation rate, by reproductive rate, by environmental variables (think radiation and chemicals that increase germ cell mutation rate)." And species? If I'm not mistaken? Charlie
"to replicate DNA at an absolutely 0% rate of error incorporation is indeed a physical impossibility" Nothing is absolute but you can get to practically zero easily. Compute an arbitrarily long CRC during replication and if it doesn't match the original you trash the copy. Far more complex intracellular processes are routinely accomplished. There is absolutely no reason to deny a cell the ability to have practically 100% assured integrity of DNA copying if there is a compelling need. “what would be so bad about saying that some sequences in the human and mouse genomes appear to fit all of the data for neutral evolution?” What would be so bad about saying that some structures in the human and mouse genomes appear to fit all of the data for complex specified information? The data certainly does support this. It practically screams it. "I’ve presented a theory supported by the data" You've presented a theory chock full of assumptions I don't agree with and you cannot verify through experiment. Neutral sequences? Posh. You started out calling it "junk DNA" and when I called you to the carpet on that you changed to "neutral sequences" and continued right along with your just-so story. You don't know they're neutral. They have no known function. Going from "no known function" to "no function" is a logical fallacy called an argument from ignorance. Human and mouse molecular clocks in the first nature article you gave me say they differ in rate by a factor of two. Imagine how much they might differ between a mouse and a bird or a mouse and a salamander. Molecular clocks are pretty much useless. They vary by gene, by selection pressure, by fixation rate, by reproductive rate, by environmental variables (think radiation and chemicals that increase germ cell mutation rate). You appear to be in a complete state of denial regarding these problems. Historical biology is not an experimental science. Past evolution cannot be repeated and it cannot be observed. Theories that "appear" to fit the facts is as far as it goes but such theories will always be plagued by assumptions that cannot be verified. It's always going to be a narrative. Now there's nothing particularly unscientific about such narratives as long as they're presented as scientific narratives, theories only, and not presented as experimental fact. DaveScot
P.S. Please refrain from using phrases like "Hoist by your own petard" and "What, are you in high school and haven’t studied exponents yet?" It's really not conducive to the sort of discussion we're trying to have. cambion
DaveScot, First point, to replicate DNA at an absolutely 0% rate of error incorporation is indeed a physical impossibility. It takes a lot of energy to add additional layers of enzymatic error checking. You could have better error-checking, but never perfect within the energy constraints of a cell. This is not the real issue though... "What would be so bad about saying that some structures in the human and mouse genomes give all appearances of being intelligently designed?" Hmm... maybe I should be a bit more careful with my wording. I had intended the meaning of my statement to be: "what would be so bad about saying that some sequences in the human and mouse genomes appear to fit all of the data for neutral evolution?" The key point here being the data. I've presented a theory supported by the data of how this subset of sequences in mouse and human are evolving. I'm working completely from the observable natural world here. You keep positing undiscovered error-correction pathways, and undiscovered functions for what looks like neutral DNA. You're *not* working from the observable world. Again... "What would be so bad about saying that some structures in the human and mouse genomes give all appearances of being intelligently designed?" This is simple, the data does not support such a conclusion. Still, you continue to skirt the question. Tell me why my story of neutral sequence evolution at 4-fold synonymous and ancestral repeat sites in the mouse and human genomes does not hold water. cambion
"what would be so bad about saying that some sequences in the human and mouse genomes give all appearances to be evolving neutrally?" What would be so bad about saying that some structures in the human and mouse genomes give all appearances of being intelligently designed? If appearances were reality there would be no debate. Even Dawkins, especially Dawkins, acknowledges the appearance of design. Hoist by your own petard. DaveScot
"It [practically perfect error detection] is a physical impossibility." Not only is it possible, it's really easy. Do the initials CRC mean anything to you? DaveScot
"It takes energy to run error correction." Yeah, so what? It takes energy to do just about anything. That doesn't stop things from getting done. "I showed how spontaneous mutation of neutral sequence can elegantly explain the genomic divergences between mouse and human." Huh? You didn't show they were spontaneous. That was an assumption. You didn't show the sequences were neutral. That too is an assumption. The genomic divergences cannot even begin to be explained by point mutations. For example, God only knows how many chromosomal rearrangements there have been in the two lineages. Assumption after assumption after assumption. None demonstrated by observation or experiment. A big just-so story. DaveScot
DaveScot, The real question: what would be so bad about saying that some sequences in the human and mouse genomes give all appearances to be evolving neutrally? Why do you have trouble accepting this? cambion
"If the enviroment can modulate mutation rate then there’s really no such thing as random mutation." I showed how spontaneous mutation of neutral sequence can elegantly explain the genomic divergences between mouse and human. How exactly does this speak to my presentation? cambion
"It’s relatively easy to implement fool-proof error detection and correction in code sequences." The killer here is the 2nd law of thermodynamics. It takes energy to run error correction. There are many layers of error correction that take place when DNA is being replicated, these drop the rate of spontaneous mutation to around 10^-9 per base pair. This is an amazingly low number. So, you would have to posit more layer of error correction for our "special" sequences. These layers take up energy, but also can never fuller eleminate the incorporation of errors into the genome. It is a physical impossibility. Also, positing this extra layer of "special" error correction when no evidence at all that exists is contrary to the makings of good science. You cannot just say "well... there could be some special enzymes that do special (thermodynamically impossible) things." Finally, let's say there is an extra layer or three of error correction. Let's say these layers reduce the spontaneous mutation rate one thousand fold, ending up with a rate of around 10^-12. If this were the case a region of DNA reserved for future implementation would still break down on geological timescales. After a billion years of microbial evolution (about 10^11 generations at a conservative 100 generations per year), 1 in 10 sites of our reserved sequence will be mutated. And this rate of mutation will occur in each branch of the phylogeny independently, further increasing target size. cambion
"The clocks for different genes and different organisms exhibit a high degree of variabilitly." Actually, the rates of silent (not changing the amino-acid sequence) substitution are very similar between different genes / organisms. This is from a recent article on the subject (Kumar and Subramanian. 2002. Mutation rates in mammalian genomes. PNAS 99: 803-808): "We have conducted a computational analysis of 5,669 genes (17,208 sequences) from species representing major groups of placental mammals to characterize the extent of mutation rate differences among genes in a genome and among diverse mammalian lineages. We find that mutation rate is approximately constant per year and largely similar among genes. Similarity of mutation rates among lineages with vastly different generation lengths and physiological attributes points to a much greater contribution of replication-independent mutational processes to the overall mutation rate." In addition, they conclude: "In conclusion, our results argue against the widely held notion about large differences in mutation rates among genes in a genome and among major mammalian lineages. This approximate similarity of mutation rates among genes and among lineages is likely to be important for estimating divergence time for closely related species, testing for selection by comparative sequence analysis, inferring coalescent times, and understanding the mutational processes that govern evolution of mammalian genomes." cambion
"I have a problem with the assumption that point mutations that don’t change the amino acid specified are indeed neutral." I don't think you've thought your reasoning through here. You say: "A frame shift can cause a redundant mutation to become one that specifies a different amino acid." If this occurs, the second (frameshift) mutation will be what is selected against, not the initial synonymous mutation. You need a more concrete criticism than, "well... they could have some function." cambion
DaveScot, It seems that you are skirting the question... I presented a nice simple evolutionary theory to explain the observed patterns of evolution of 4-fold synonymous sites and ancestral repeats in noncoding DNA. The rates coincide very nicely with the experimentally observed rate of spontaneous mutation, just as the theory predicts. Why do you not accept this explanation of the evolution of this subset of DNA? To address your concerns: "I have a problem with high percentage of fixation in population of neutral mutations." Population genetics deals with exactly this issue, showing exactly how likely it is to fix a new mutation of a given fitness effect. The theory behind it is rather beautiful in my opinion, drawing heavily on Kolmogorov's equations of heat diffusion (the diffusion of a gene through a population ends up being very similar to the diffusion of heat through a pipe). The neutral case is very simple, I showed you the math earlier. What was your specific problem with it? You say: "These [neutral mutations] should fix at a low rate." Kind of, the chance of fixation of any given mutation is low. As I said earlier, that chance is 1/2N. So, with a population of a million individuals the chance of a new neutral mutation fixing is just 5 x 10^-7. However, as each individual in the population is a possible target for a new mutation, this offsets the low chance of any particular neutral mutant fixing. Again, as I said before, the rate of neutral mutation equals the rate of spontaneous mutation. I don't think any open-minded person could possibly argue with this statement. I'd suggest that you read Hartl's Principles of Population Genetics, it's compact and covers all of this stuff. I suspect that you'd enjoy the math... cambion
Cambion Prokaryotes have been found to increase mutation rates of certain genes in response to survival stress from the environment. Go to the comments at the following link where I addressed this a few months ago. http://darwin.bc.asu.edu/blog/?p=404 If the environment can modulate mutation rate then there's really no such thing as random mutation. A prediction of this would be that molecular clocks will show much variance by gene in the same species and in homologous genes in disparate species. This is in fact what has been observed. This unpredictable variance in molecular clocks may be due to a changing environment. I have a problem presuming that eukaryotes are denied any survival tools available to prokaryotes. DaveScot
Cambion I have a problem with high percentage of fixation in population of neutral mutations. These should fix at a low rate unless in small isolated populations and/or increase fitness. Here's how it works: bad mutations don't hang around, neutral mutations may get lucky and become fixed, beneficial mutations have a good chance of fixation. I have a problem with the assumption that point mutations that don't change the amino acid specified are indeed neutral. Transcription edits via introns, exons, and whatever other mechanisms have been identified (which is unlikely to be a comprehensive list at this point in time) include frame shifts. A frame shift can cause a redundant mutation to become one that specifies a different amino acid. The clocks for different genes and different organisms exhibit a high degree of variability. UNPREDICTABLE variability. Clearly we don't know exactly what, how, and how much various factors affect these so-called molecular clocks. It's relatively easy to implement fool-proof error detection and correction in code sequences. We've done as much in computer systems where we really care about data integrity. Presumably front-loaded evolution would implement error-detection and correction on code being reserved for future use. DaveScot
DaveScot, At this point, are you willing to believe that 4-fold synonymous sites (sites within coding sequence that do not change the amino acid produced) and repeated sequences in the noncoding "junk" DNA behave just like we would predict functionless DNA to behave, evolving in a clock-like fashion at a rate equal to the rate of spontaneous mutation? Also, do you have a response to my point about how random mutation will cause any piece of the genome that does not serve an immediate function to randomly mutate (and thus break down any function meant to be employed at a later time)? I think that this shows it is quite impossible to encode a phylogeny (even a small one) into an ancestral uber-genome. cambion
cambion I didn't mean to say the blueprint of technologic civilization is in the seed. However, a blueprint that more or less inevitably evolves into a big brained technophilic animal with an instinct to explore the proverbial undiscovered country doesn't seem unreasonable. The ontogeny parallel however doesn't take chance into account. The developmental sequence of any individual organism is very strictly defined from egg to adult. You only get chickens from chicken eggs. DaveScot
Oops. Correction to 55. "within a bacterial *cell* (not nucleus). Bacteria don't have nuclei. Silly me. DaveScot
cambion The Voyager spacecraft, launched I think in the 1970's, officially left the solar system not long ago and is in interstellar space. I'd be willing to bet some tough microbial spores hitched a ride on it. Without targeting, the likelihood of it splashing down somewhere those spores could germinate, multiply, and evolve is vanishingly small. But I think it illustrates the concept of how seeds of life can spread through the galaxy and the minimal technology it takes to do it. Isn't science great! Here's one of my all-time favorite books, within which lie chapters that explore the limits of the physically possible according to known laws of physics. I'm an engineer and our mantra is "anything that is physically possible is a problem bounded by only time and money". I read this book in 1987 and it's probably the most influential technology tome I've ever read. :-) http://www.foresight.org/EOC/ Another area of interest is quantum computing (I'm a retired computer engineer). Check out some of these links: http://www.google.com/search?hl=en&q=qubits+%22protein+folding%22+ Interestingly, the first quantum computing elements, developed at IBM, use the spin states of carbon atoms in amino acids as the quantum storage elements. A quantum computer capable of predicting how an arbitrary protein sequence will fold will fit handily in a bacterial nucleus. Quantum computers can do amazing things with very few logic elements. Imagine that mobile elements flitting around the DNA molecule doing poorly understood things might be computing elements in action. One of my fonder speculations is that the intelligence responsible for the appearance of design is built right into cells in the form of quantum computers. That doesn't say how the hypothetical biological computer evolved but it sure takes care of the subsequent appearance of complex specified information. DaveScot
DaveScot, In response to #53. I think that's quite a brilliant idea actually. Except I wouldn't go as far as saying the blueprint for technological civilization is built in the first arriving egg. However, I can imagine the properties of DNA as a (more or less) perfect genetic material being selected for (or possibly engineered). An up-and-coming civilization could shoot rockets full of sturdy bacteria off in all directions of the galaxy, with the knowledge that upon arrival at a decent enough planet, these bacteria would eventually terraform it through evolution by natural selection. cambion
Oh, and let me reiterate one thing. I don't at all deny the statement that “Dating phylogenetic divergences via molecular clocks remains seriously inaccurate, and ultimately relies primarily on fossil benchmarks.” However, keep in mind where it's coming from. The people making this claim are systematists concerned with describing the phylogeny of a particular clade of, let's say, beetles. There are systematists for basically every group of organisms. It's kind of a cottage industry. They want robust phylogenies with as little work as possible (i.e. they cannot sequence whole genomes), so they try to find a few slowly evolving, readily sequenceable genes upon which to base their phylogenies. This will not work very well to give a clock for (at least) a couple of reasons. First off, there is the huge variability of a Poisson process (as I discussed earlier), making the confidence intervals of their divergence dates huge. Additionally, they are using protein sequence (this is cheaper and easier, as well as necessary when dealing with ancient phylogenies - say over 200 million years), and proteins evolve. A neutral mutation occurring at one base pair may alter the landscape of available subsequent mutations. Even though the molecular clock was first discussed using protein sequence, I don't think it's actually a valid hypothesis for protein sequence. However, the evidence clearly shows that it does work for functionless sequence. cambion
cambion If you're interested in astrobiology google up "GHZ" or "galactic habitable zone". A recent dead tree SciAm article tipped me off, then I read more on the web. Here's an idea; I'm not sure where I read it. Suppose the goal of phylogeny isn't rational man, per se, but rather any organism that can develop technology to spread to another planet. Consider, the goal of all life seems to be (roughly) find resources needed to reproduce then reproduce. Now, the earth and any other planet has a finite length of time when it can support life. If life can't find a way to a newer planet before the older planet expires then it dies without reproducing. So seeds of life that have the innate ability to terraform planets and produce technologic civilizations that can send more seeds out to new planets actually follow an established pattern, but again, like comparing ontogeny to phylogeny, the patterns are on vastly different scales. I have a real hard time ignoring patterns in nature that repeat on different scales like that. I don't think the similarities are mere coincidence. DaveScot
That is exactly why I brought in the experimentally determined spontaneous rate of mutation. If the numbers from comparing what we know to be neutral sequences (which should evolve at the spontaneous mutation rate ala Kimura) don't match up with the observed spontaneous rates of mutation, then we would have a problem with the clock. Here we see evolutionary theory presenting a remarkably elegant and consistent story regarding the evolution of neutral genomic sequences (saying nothing about functional sequences). Also, you missed an important piece of the paper that I quoted: "Ancestral repeats provide a powerful measure of neutral substitution rates, on the basis of comparing thousands of current copies to the inferred consensus sequence of the ancestral element." Here's how it works: There is a parasitic element which takes off in the genome of the human-mouse ancestor. Eventually the human-mouse ancestor gets things under control, although the remnants of these elements remain as "ancestral repeats." So, we have a bunch of copies of one sequence in the mouse-human ancestor. If that sequence is AAATTT (which we determine by taking the consensus of the thousands of present day sequences), and a particular mouse sequence is ATATGG, we can infer three changes on the mouse lineage, regardless of what is happening on the human lineage. Thus, we know the human clock ticks half as fast as the mouse clock without any need whatsoever for fossil calibration. In theory, this very same technique could be extended to more mammalian genomes to estimate clock rates along every branch of the phylogeny. cambion
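A toy version of the ancestral-repeat bookkeeping described above; the consensus and mouse sequences are the ones used in the comment, while the "human" copy is a made-up example added only to show how lineage-specific counts come out without any fossil calibration.

```python
consensus = "AAATTT"       # inferred ancestral repeat (toy example from the comment)
mouse_copy = "ATATGG"      # present-day mouse copy (from the comment): 3 differences
human_copy = "AAACTT"      # hypothetical present-day human copy: 1 difference

def diffs(seq, ref):
    """Count substitutions between a present-day copy and the ancestral consensus."""
    return sum(a != b for a, b in zip(seq, ref))

print("mouse-lineage substitutions:", diffs(mouse_copy, consensus))
print("human-lineage substitutions:", diffs(human_copy, consensus))
# The ratio of the two counts gives the relative clock rates of the two lineages.
```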
Back to molecular clocks... I put forward: http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=151321 "Dating phylogenetic divergences via molecular clocks remains seriously inaccurate, and ultimately relies primarily on fossil benchmarks." Cambion responds: Nature 420: 520-562 "Assuming a speciation time of 75 Myr, the average substitution rates would have been 2.2 x 10^-9 and 4.5 x 10^-9 in the human and mouse lineages, respectively." This actually confirms the serious inaccuracy and need for fossil dating calibration. The key is that the human lineage clock ticks half as fast as the mouse lineage clock. How do we know that the human and mouse clock rates differ by a factor of 2? Because we've calibrated it with a 75 Myr FOSSIL dated divergence date. Now suppose we want to know when the line ending in wolves diverged? How do we know how fast the wolf clock ticks when we know that the molecular clock rates of different species can vary wildly? Any answer we get that relies on a molecular clock will be seriously inaccurate unless we calibrate the wolf clock speed with the fossil record. DaveScot
DaveScot, You make a good point. When I said theory, I meant an explanation of a plethora of facts as we see them. There is the theory of abiogenesis + evolution by natural selection and there is the theory of extraterrestrial origin + blueprinted complexity. Both are theories. Whether they differ in their validity is another point entirely. For me, the theory of blueprinted complexity is both unwarranted (I believe evolution by natural selection + some general principles of order a la Stuart Kauffman can explain the path life took from the first egg to the present biosphere) and impossible (as a geneticist I don't see how such a program could operate within the genomic framework over billion year timescales while being constantly bombarded by mutation). However, I am actually a closet supporter of the extraterrestrial origin of the first DNA-based egg. The 600 million years from Earth's formation to the first fossil evidence of cellular life seems just too short for me. The complexity of the simplest prokaryote seems too vast. Allowing abiogenesis to occur elsewhere in the galaxy or universe during the previous 15 billion years that the universe has been around expands the possibility space greatly. I hadn't given too much thought to the relative chance of accidental vs. targeted arrival, but I think that is an interesting point... cambion
"You have a theory for how things work, and I respect that." No, you misunderstand. I claim there's more than one explanation for the fossil and molecular evidence. Some explain the evidence better than others. None are proven beyond a doubt. Standard evolution doesn't have a monopoly on the evidence nor on scientific explanations of same. There's a latin expression "omne vivo ex ovum". It means everything comes from an egg. If you trace backwards from any living creature there is an unbroken cell line back to an original egg - this is the definition of common descent. Presumably the egg in this case is 3.5 billion years old and came onto the scene shortly (in geological time) after the earth cooled off enough so it didn't get vaporized. Given the minimal identified complexity of the simplest free-living cell there doesn't appear to be near enough time for the complexity to have been produced on the earth by any known evolutionary mechanism. Moreover, all the interesting stuff in evolution after the first free-living cell took place in the last 500 million years and most of the modern phyla appeared in fully diversified form in a span of 50 million years. This also seems a rather short timespan. These problems go away, or at least aren't as difficult to explain, by the straightforward assumption that the first cell that appeared on the earth didn't originate on the earth and, like any other egg, contained blueprints of much greater complexity than what's expressed in the single egg cell. I know of no physical laws that in any way, shape, or form forbid life from evolving prior to the earth's formation and being transported to the earth. And it better fits the empirical evidence. Estimates of the global habitable zone where abiogenesis is reasonably possible include many millions of solar systems ranging in age up to about 4 billion years older than ours. So now instead of 500 million years for abiogenesis on a single planet we have 4 billion years on millions of planets. That's a lot more opportunity. The biggest problem thus far identified in astrobiology explanation of origins is that the earth is a TINY target for a LUCA to hit by accident from another solar system. According to estimates I've read the chance is so remote that the only reasonable possibility is that the LUCA arrived here in a targeted manner. So we're STILL ending up with the evidence pointing to design even after expanding the breeding ground for abiogenesis by orders of magnitude. DaveScot
I'd just like to make sure we're on the same page here... Since it is impossible for an uber-genome to specify every base pair in the entire phylogeny of life, we are instead going to consider an uber-genome that only specifies some of the major transitions in morphological form that have occurred in the history of life. A situation where all real novelty comes from design and RM+NS plays only a supporting role. I cannot argue against this in a purely mass action way (yes an uber-genome would contain the information specifying 1000 major transitions). However, I don't think that this could have occurred. Here it goes: There is a relatively high rate of DNA mutation (including base pair substitutions, deletions and rearrangements). This rate is high enough that after around 75 million years of independent evolution, "neutral" sites in the human and mouse genomes have accumulated nearly half a substitution per site (0.46-0.47). (Also note that this rate of neutral evolution is very slow - because of the extremely long generation times of mammals compared to microorganisms). A mutation that hits a "neutral" site will randomly fix with a chance of 1/2N. This is, of course, unless that site is somehow useful to the organism. In your situation, we must have many many sites in the uber-organism that possess no immediate function (they are reserved for later implementation). How can this uber-organism avoid having these sites washed away by the recurrent action of mutation? I cannot think of a way... cambion
Cool, I love population genetics. No, I'm not asking you to believe that all mutations that arise become fixed, but that result can be explained amazingly elegantly. Kimura (1968 Nature 217: 624-626) famously showed that the rate at which mutations arise and become fixed in a population will be: 2 (in diploid organisms such as human and mouse) x N (population size) x mu (mutation rate) x rho (chance of fixation of a new mutation). Now, if mutations are strictly neutral then they have a 1 / 2N chance of becoming fixed in the population (think of this as a random draw with the chance of fixation equal to the current frequency - so that if a neutral mutation is currently at 50% frequency it will have a 50% chance to be fixed and a 50% chance to be lost - a new mutation necessarily starts out at 1 / 2N frequency). So, we get: 2 x N x mu x (1/2N) = mu. And there we have it, the rate of neutral molecular evolution equals the rate of spontaneous mutation. cambion
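A quick simulation sketch of the claim that a new neutral mutation fixes with probability 1/2N. This is a plain Wright-Fisher model with binomial drift; the population size and trial count are arbitrary illustrative choices, not anything from Kimura's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def neutral_fixation_probability(N, trials=20000):
    """Fraction of trials in which a single new neutral mutation fixes in a
    diploid Wright-Fisher population of N individuals (2N gene copies)."""
    fixed = 0
    for _ in range(trials):
        copies = 1                                           # new mutation starts as 1 copy of 2N
        while 0 < copies < 2 * N:
            copies = rng.binomial(2 * N, copies / (2 * N))   # one generation of pure drift
        fixed += copies == 2 * N
    return fixed / trials

N = 100
print("simulated fixation probability:", neutral_fixation_probability(N))
print("theoretical 1/(2N):            ", 1 / (2 * N))
# Plugging 1/(2N) back into 2 * N * mu * rho gives mu: the neutral substitution
# rate equals the spontaneous mutation rate, independent of population size.
```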
We're not done with our lesson in molecular clocks by a long shot. DaveScot
"You need it to be everything, or otherwise you need evolution by RM + NS. And if RM + NS works for everything else, why can’t it work for the 200 equally complx but radically different organisms? I’m still waiting…" 200 times a human genome should be enough for a primary representative for at least 200 distinct phyla. Since there's a lot of duplication between phyla, and many phyla are far simpler than mammals, the number would probably be more like 1000 distinctly different organisms. Furthermore, I never claimed RM+NS doesn't do anything. That's a straw man. Clearly it does function at the microevolutionary scale so the complete genome for every species that ever lived is not a requirement. DaveScot
DaveScot, I just read your post #23 (somehow missed it earlier). I quite liked it. I have to say that I've enjoyed this discussion so far. It's very interesting to talk about alternative truth statements. You have a theory for how things work, and I respect that. I wish more of the ID folk would try to make that brand of statement. I think it cuts much closer to the nature of science that way. cambion
Hmmm... okay, reading your links again. From the Mouse Genome Project, the divergences of extant mouse & human from the consensus sequence of the ancestral repeats measure the mutation rate from mutations that became fixed in the population. The spontaneous mutation rate measured in extant mice by Drake that you point out as quite close appears to be all mutations, not mutations that become fixed. Surely you're not asking me to believe that almost all mutations become fixed? DaveScot
Yes, but I asked for an organism with enough DNA to be a preprogrammed ancestor to *everything.* You need it to be everything, or otherwise you need evolution by RM + NS. And if RM + NS works for everything else, why can't it work for the 200 equally complex but radically different organisms? I'm still waiting... So, are we done with our lesson in molecular clocks then? cambion
At any rate, you asked me for an example of an organism with enough DNA to be a preprogrammed ancestor to everything. I gave you an example of an extant organism with 200 times the DNA of a human. Clearly that's sufficient quantity to build a human plus at least 200 other equally complex but radically different organisms. Why not just acknowledge I gave what you asked for? DaveScot
Oops - that'll teach me to drink and calculate. My bust. The numbers differ by a factor of 2. Let me read your link again. DaveScot
cambion "Oh, one thing, scientists (in all disciplines) commonly use the “within an order of magnitude” metric to assess whether two measures are congruent. This situation exceeds that." Unfortunately the two numbers you said are "close" differ by a factor of 40. What, are you in high school and haven't studied exponents yet? DaveScot
Oh, one thing, scientists (in all disciplines) commonly use the "within an order of magnitude" metric to assess whether two measures are congruent. This situation exceeds that. cambion
A 2.44-fold difference is not "quite a close match"? You are crazy. That's amazingly close considering we're dealing with rates down at nine orders of magnitude. No experimental study can be exact. It's an amazingly close match. Also, you did not respond to the point that "junk" DNA and 4-fold synonymous sites show between 0.46 and 0.47 substitutions per site. How do you explain this? Try again... cambion
cambion "spontaneous mutation measured in laboratory mice as “1.1 x 10^-8″quite a close match to the observed rate of evolutionary change." "the average substitution rates would have been 2.2 x 10^-9 and 4.5 x 10^-9 in the human and mouse lineages, respectively." 4.5*10^-9 and 1.1*10^-8 aren't "quite a close match" in my book - sorry - try again DaveScot
DaveScot, I wasn't aware of the rules of the debate we had going. Didn't know a formal flow-through was necessary. Oh well... I said earlier: "When a molecular clock is run with lots and lots of data (say a full genome, as they’ve done with human-mouse and other comparisons), the large variance is swamped out by all the signal and estimates that very closely match the fossil record are arrived upon." You have asked me to cough up some links. So be it... This is from the mouse genome paper (Mouse Genome Sequencing Consortium. 2002. Initial sequencing and comparative analysis of the mouse genome. Nature 420: 520-562). I bet you can find the whole paper on Google scholar if you care to look, but the relevant section is below: "Ancestral repeats provide a powerful measure of neutral substitution rates, on the basis of comparing thousands of current copies to the inferred consensus sequence of the ancestral element. The large copy number and ubiquitous distribution of ancestral repeats overcome issues of local variation in substitution rates (see below). Most notably, differences in divergence levels are not affected by phylogenetic assumptions, as the time spent by an ancestral repeat family in either lineage is necessarily identical. The median divergence levels of 18 subfamilies of interspersed repeats that were active shortly before the human–rodent speciation (Table 6) indicates an approximately twofold higher average substitution rate in the mouse lineage than in the human lineage, corresponding closely to an early estimate by Wu and Li. In human, the least-diverged ancestral repeats have about 16% mismatch to their consensus sequences, which corresponds to approximately 0.17 substitutions per site. In contrast, mouse repeats have diverged by at least 26–27% or about 0.34 substitutions per site, which is about twofold higher than in the human lineage. The total number of substitutions in the two lineages can be estimated at 0.51. Below, we obtain an estimate of a combined rate of 0.46–0.47 substitutions per site, on the basis of an analysis that counts only substitutions since the divergence of the species (see Supplementary Information concerning the methods used). Assuming a speciation time of 75 Myr, the average substitution rates would have been 2.2 x 10^-9 and 4.5 x 10^-9 in the human and mouse lineages, respectively. This is in accord with previous estimates of neutral substitution rates in these organisms." So, here note that Drake et al. 1998 (http://www.genetics.org/cgi/content/full/148/4/1667) give the rate of spontaneous mutation measured in laboratory mice as "1.1 x 10^-8", quite a close match to the observed rate of evolutionary change. So, we see large swaths of "junk" DNA (these are regions that used to be active parasitic elements, but have since degraded), show a very consistent trend in their rate of molecular evolution. Also, if this isn't enough, note that (also from the mouse genome paper): "Having established the neutral substitution rate by examining aligned ancestral repeats, we then investigated a second class of potentially neutral sites: fourfold degenerate sites in codons of genes. Fourfold degenerate sites are subject to selection in invertebrates, such as Drosophila, but the situation is unclear for mammals. We examined alignments between fourfold degenerate codons in orthologous genes. The fourfold degenerate codons were defined as GCX (Ala), CCX (Pro), TCX (Ser), ACX (Thr), CGX (Arg), GGX (Gly), CTX (Leu) and GTX (Val). 
Thus for Leu, Ser and Arg, we used four of their six codons. Only fourfold degenerate codons in which the first two positions were identical in both species were considered, so that the encoded amino acid was identical. Slightly fewer than 2 million such sites were studied, defined in the human genome from about 9,600 human RefSeq cDNAs and aligned to their mouse orthologues. The observed sequence identity in four-fold degenerate sites was 67%, and the estimated number of substitutions per site, between 0.46 and 0.47, was similar to that in the ancestral repeat sites (see Supplementary Information)." So, what we see is that "junk" noncoding DNA evolves at the same rate as 4-fold synonymous sites (where changing the nucleotide has no effect on the sequence of the protein produced), and that this rate closely corresponds with the rate of spontaneous mutation measured in the laboratory. What more do you want? As long as we're covering all the bases, I should respond to your point that "I don't buy the lame explanation that a.dubia is carrying around almost 700 BILLION base pairs of junk DNA. C-value doesn't correspond with complexity but it sure as hell corresponds with nucleus size in eukaryotes and nucleus size corresponds with cell size and cell size corresponds with time it takes for cell division and cell division time corresponds with energy required for cell division and energy required for cell division corresponds to competitive fitness. A.dubia should have been eaten alive (literally) by its smaller, faster, nimbler, more efficient competitors if there's no survival value in all that extra DNA." Here you are holding the evolutionary explanation of the 700 billion base pairs of junk DNA in A. dubia to the pan-selectionist paradigm. I do not believe selection to be all powerful. It has to contend with a number of limitations and constraints. I strongly suspect that if you looked at the genome of A. dubia you would find it chock full of the same sequence repeated over and over. Some genetic parasite that got loose and ran wild. This happens a lot, and is pretty well understood (for a review go to Hurst, G. D. D., and J. H. Werren. 2001. The role of selfish genetic elements in eukaryotic evolution. Nat. Rev. Genet. 2: 597–606. http://sunflower.bio.indiana.edu/~clively/L567/readings/Hurst&Werren%202001.pdf). I also suspect that A. dubia has lost fitness due to its inability to control its genomic parasites. It may even be on its way to extinction because of this (the review discusses a similar scenario). Natural selection is selecting the parasites to replicate faster and faster (those that replicate more leave more descendants in the genome), while at the same time selecting for A. dubias that can control the spread of the parasites. Finally, as long as we're flowing through arguments, you should respond to my calculation of an uber-dubia genome size of around 4 million times that of the human, or otherwise "concede by default." jimbo, To respond to your point about specifying information in the uber-genome... Even if the uber-genome does not specify every base pair in all of the millions of genomes that flow from it, there still has to be a mechanism by which these base pairs are selected. You could just say that it's random which base pairs get thrown in, and the ones that work stick around, but that sounds awfully familiar, doesn't it? cambion
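As a rough cross-check of the fourfold-degenerate figures quoted above: taking the reported 67% identity and applying a plain Jukes-Cantor correction (a simpler substitution model than the one the consortium actually used) already lands in the same neighborhood as their 0.46-0.47 substitutions per site.

```python
import math

identity = 0.67                              # reported identity at fourfold-degenerate sites
p = 1 - identity                             # observed fraction of mismatched sites
d = -0.75 * math.log(1 - (4.0 / 3.0) * p)    # Jukes-Cantor corrected substitutions per site
print(f"Jukes-Cantor estimate: {d:.2f} substitutions per site")   # ~0.43
```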
cambion "Also, if the whole point of this phylogenetic program is to end up at humans, it seems like a pretty nonsensical way of going about it…" Nonsensical from a human point of view because human engineers usually like to see the completion of the projects they start before they die. Suppose the hypothetical designer is immortal and/or doesn't mark the passage of time the way we do and a billion year gestation period is not a concern. Or suppose that there are milestones that have to be satisfied to reach your goal such as first oxygenating the atmosphere so land animals with fast metabolisms can emerge then building up large fossil fuel reserves so they have a ready supply of energy to build a technological civilization. It appears to me that the second paragraph is apt if not the first too. The earth needed terraforming before rational man could build a civilization. Terraforming takes a long time. You owe me some links supporting your claims about molecular clocks too if you wish to not concede by default. DaveScot
Cambion, I like your reasoning and logic in post #15, but I think your premise provides part of what is seen by some as the tautology of molecular clocks; specifically, they are calibrated upon the initial assumption of evolutionary timelines and rely upon the presumed existence of common ancestors. Then if the clocks are used as support for the same the circularity sets in. Charlie
Cambion - I think you're making a rather fallacious assumption: that you can't currently see a reason for dinosaurs, therefore there must not be one. If you look at it holistically, perhaps the dinosaurs prepared the ecosphere in some way to prepare the way for later mammalian evolution. Or maybe you can't get certain productive branches without also getting some other nonproductive branches. Who knows? But to say that at our current fairly small level of knowledge we can see all the implications of 4 billion years of evolution seems rather arrogant. As to the level of information in the genome: one could say that part of the information is there, and part is "encoded" in the laws of nature themselves (in how proteins fold, how different functional proteins are related, etc.) If you take Dawkins's celebrated "Methinks it is a weasel" example, in English it doesn't work. But one could imagine a language in which each intermediate phrase actually had meaning, and further each followed upon the other to tell a continuous story. Part of the information, then, would be in the phrase itself, but most would be in the language itself. jimbo
Oh, one thing, I was thinking of the beanbags the size of hackysacks, not the armchair kind. cambion
Also, if the whole point of this phylogenetic program is to end up at humans, it seems like a pretty nonsensical way of going about it... Why bother to continue evolution on branches that will not lead to humans and will not interact with the human ecosystem? Here I'm thinking of hyperthermophilic bacteria living in the Yellowstone geysers. These archaebacteria probably evolved from some of our earliest ancestors. Or similarly, what about deep sea fauna? Also, why bother at all with the dinosaurs? The mammalian clade of reptiles branched off early, in the Triassic; why unfold millions of years of evolution of the dinosaurs (it's a lot of work to write all that code) when all you really want is mammals? cambion
DaveScot, The genome of any sort of uber-dubia could not contain enough information to specify the entire unfolding of life on Earth. I'm going to use some back of the envelope calculations here. I will try to be as conservative as possible in my estimates. I'm sure you will let me know if you disagree with any of the parameters... From http://en.wikipedia.org/wiki/Biodiversity : "Estimates of the present global macroscopic species diversity vary from 2 million to 100 million species, with a best estimate of somewhere near 10 million" and "99% of the species that have ever lived on earth are today extinct" So, 10 million extant genomes and 1 billion genomes ever existing (this is completely discounting polymorphism, and assuming each species has just a single genome). From http://www.ensembl.org/Saccharomyces_cerevisiae/index.html : "Base Pairs: 12,156,590" So, the yeast genome (one of the simplest eukaryotes known - using eukaryotes because they make up the majority of those species - but, not of course biomass) is around 12 million base pairs. From http://www.newton.dep.anl.gov/askasci/mole00/mole00415.htm : "Since DNA is double stranded, i.e., one nucleotide pairs with a nucleotide on the opposing strand, then we can say the average molecular weight of a nucleotide PAIR in DNA is 660 daltons." (This gives nearly the same numbers http://www.jtbaker.com/conversion/conversions.htm) From http://en.wikipedia.org/wiki/Dalton_(unit) : "1 u ≈ 1.66053886 x 10^-27 kg" Cool, now let's do the math (but first note that the haploid human genome weighs 3.3 x 10^-15 kg). 10^9 genomes * 1.2 x 10^7 base pairs per genome = 1.2 x 10^16 base pairs ever existing. 1.2 x 10^16 base pairs x 660 daltons x 1.66053886 x 10^-27 kg/dalton = 1.315 x 10^-8 kg, or about 13 micrograms, in uber-dubia's genome without compression. So, without compression, we need a genome around 4 million times the size of the modern human genome. I would say that this number would not be biologically feasible; one would need cells the size of beanbags... So, now we get to argue about how much compression is possible... I would like to add, however, that this includes no room at all for the program that is necessary to unfold the phylogeny of life. It only contains what is "printed out" by our running phylogeny program. You would have to add a significant chunk of DNA to account for this. I just don't see how it's possible. cambion
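The same back-of-the-envelope arithmetic as a script, restating the comment's own parameters (1 billion genomes ever existing, a yeast-sized genome of about 1.2 x 10^7 bp, 660 daltons per base pair); it is only a restatement of the estimate above, not an independent calculation.

```python
genomes_ever = 1e9                 # ~1 billion species ever existing
bp_per_genome = 1.2e7              # yeast-sized genome, ~12 million base pairs
bp_mass_daltons = 660              # average mass of one base pair, in daltons
kg_per_dalton = 1.66053886e-27
human_haploid_kg = 3.3e-15         # mass of one haploid human genome

total_bp = genomes_ever * bp_per_genome
uber_genome_kg = total_bp * bp_mass_daltons * kg_per_dalton
print(f"uber-genome mass: {uber_genome_kg:.2e} kg "
      f"(~{uber_genome_kg * 1e9:.0f} micrograms)")
print(f"which is ~{uber_genome_kg / human_haploid_kg:,.0f} times the human genome")
```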
Cambion Be a sport and see if you can dig up a molecular clock that can tell me when a.dubia split from its MRCA. Them there molecular clocks been a tickin' for billions of years ya think? :-) Also, ya got any incontrovertible evidence that prokaryotes didn't come along AFTER eukaryotes? I can't find any. It's (again) just an a priori presumption that smaller genomes preceded larger genomes. So many prokaryotes are parasites of eukaryotes it makes me really question which came first, the host or the parasite. Logic says the host came first. Moreover, we have to consider viruses. They're exclusively parasites and have even simpler genomes than prokaryotes. They even parasitize prokaryotes. It's almost a foregone conclusion that viruses evolved after their hosts. I see no good reason to presume that prokaryotes didn't cleave off from eukaryotes instead of the dogmatic presumption to the contrary. Everything we know about evolution on the planet earth falls neatly into place if you ditch the belief that the first cell evolved on this planet from a primordial soup of inanimate chemicals. That geocentric notion of origins dictates a path of less complex to more complex and is full of gaps and unexplained events. If one allows in the belief that life evolved (or was created) elsewhere and arrived here (again either by chance or design) in a complex seed then everything thereafter falls into place. No more gaps. The gap is moved off this planet into a far larger temporal and spatial environment where more is reasonably possible. To get an idea just how much larger in time and space (the causally connected universe where life could have evolved): http://en.wikipedia.org/wiki/Habitable_zone Of course the GHZ presumes carbon based life and I'm no carbon chauvinist! http://en.wikipedia.org/wiki/Carbon_chauvinist It's been very recently discovered that we only have a good handle on 5% of the "stuff" that makes up the observable universe. 95% of the "stuff" that reveals itself only through gravitational interaction is mysterious stuff termed "dark matter" (20% of the unknown stuff) and "dark energy" (the other 70% of the unknown stuff). With 95% of the stuff of the universe undescribed by modern physics I'm pretty damn hesitant to proclaim much of anything about what might be out there in terms of intelligent entities. Hopefully this begins to explain why I'm agnostic and leads others into my state of not knowing anything for sure (misery loves company, after all). DaveScot
I don't buy the lame explanation that a.dubia is carrying around almost 700 BILLION base pairs of junk DNA. C-value doesn't correspond with complexity but it sure as hell corresponds with nucleus size in eukaryotes and nucleus size corresponds with cell size and cell size corresponds with time it takes for cell division and cell division time corresponds with energy required for cell division and energy required for cell division corresponds to competitive fitness. A.dubia should have been eaten alive (literally) by its smaller, faster, nimbler, more efficient competitors if there's no survival value in all that extra DNA. Period. Either you believe that or you have to throw out the whole concept of survival of the fittest. Takes yo pick. DaveScot
"If someone could demonstrate for me that that feat [feat = packing a genome with enough information for vast diversity] would be indeed possible, I would consider it a possible alternative hypothesis." No problemo. http://www.genome.org/cgi/content/full/9/4/317 The C-Value Paradox The lack of a correlation between genome size and organismal complexity has so surprised biologists that it has come to be known as the "C-value paradox" (Thomas 1971). For example, Homo sapiens has a genome size 200 times smaller than that of Amoeba dubia (Li 1997). Moreover, it has been well established that the genomes of most eukaryotes contain thousands of times more DNA than required to carry out all necessary protein coding and regulatory functions. Some early attempts to explain this lack of an association between C-value and complexity proposed that the superfluous DNA present in large genomes acted as a storehouse of genetic variability that could be recruited by evolution should the need arise (Jain 1980). The fallacy of ascribing such foresight to the evolutionary process is now well recognized, but several plausible solutions to this puzzle remain. --------------------------- I'm not at all convinced that the "fallacy of ascribing such foresight to the evolutinary process" is well recognized. I believe that's a dogmatic conclusion based upon an a priori presumption that evolution is without foresight. However, I am convinced that ameoba dubia, with a genome 200 times the size of the human genome, is sufficient to contain enough information to launch at least a couple hundred phyla which corresponds rather nicely with the number of phyla that have been identified over the course of evolution. Nothing says a.dubia is the biggest either. It's just the biggest so far discovered in an extant critter. There is absolutely no reason in the world why an uber-dubia (say, that's got a nice cadence, dunnit?) couldnt't exist with an even larger genome. What say you now? DaveScot
Actually, when lots and lots of data are used you get lots and lots of different answers. Surely you've read of different proteins giving different answers for MRCAs. Molecular clocks as dating devices for phylogenetic divergence are "seriously inaccurate". Don't take my word for it, take Richard M. Bateman and William A. DiMichele's word for it. I gave you a link to peer reviewed literature. Where's your link contradicting mine? What's good for the goose is good for the gander. Cough up some corroboration for your claims. DaveScot
DaveScot, There's a lot to respond to here. I'll just touch on a few points... The molecular clock as tautology: I'll admit that inferring selection when the molecular clock doesn't match up to the fossil record is a bit iffy at best. Being able to say that a high rate of change for this particular gene on this particular branch implies selection is a bit strong, especially when multiple tests come into the equation. I'm not too familiar with the matching up of molecular clocks and fossil clocks, however, I will take the statement "Dating phylogenetic divergences via molecular clocks remains seriously inaccurate, and ultimately relies primarily on fossil benchmarks" at face value. I think one part of things that people might not be taking into account is the enormous variance caused by a Poisson process like neutral molecular evolution. If each year there is a 10^-9 chance per site of a mutation becoming fixed, then a gene of 1,000 sites will tick at an average rate of one change per million years; however, the variance around this million-year average is huge (you can plot this out easily enough with Mathematica or similar software). Thus, if the scientist sees that there are 3 mutations in a million-year timespan (following the previous numbers) that is no reason to reject the clock. This clock is only useful as a heuristic, a null hypothesis as it were. I would think you would be very happy to agree with it, as it allows for genetic change to occur without invoking natural selection whatsoever. One final point on this matter. When a molecular clock is run with lots and lots of data (say a full genome, as they've done with human-mouse and other comparisons), the large variance is swamped out by all the signal and estimates that very closely match the fossil record are arrived upon. Distinguishing common descent and common design: Historically this has been quite an issue. My metric lies in those pieces of an organism that lack function. Thus, one of the strongest pieces of evidence that whales and other mammals shared a common ancestor lies in the fact that they have vestigial pelvic bones, which are absolutely useless to the whale, but show perfect homology to the pelvic structures of other mammals. So, one could use this same metric at the level of DNA. There are many regions of the genome that serve no purpose, mostly slowly decaying defunct parasitic elements. By comparing patterns in such sequences, convergent evolution or convergent design can be safely ruled out. Ontogeny / phylogeny: On this point, have you read any Philip K. Dick? In his book VALIS, he outlines a very similar scenario. I think the most that I can say is that it would be impossible to package all of the information required for the subsequent unfolding of phylogeny into the Earth's earliest cell. If someone could demonstrate for me that that feat would be indeed possible, I would consider it a possible alternative hypothesis. As it stands, it's very difficult for me to believe in any sort of mechanism by which a designer would continually inject information into the evolving biosphere. cambion
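A small numerical illustration of the Poisson-variance point made above; the 1,000-site gene length and the per-site rate are illustrative assumptions consistent with the numbers used in the comment, not figures from any paper.

```python
import numpy as np

rng = np.random.default_rng(1)

rate_per_site_per_year = 1e-9
sites, years = 1_000, 1_000_000
expected = rate_per_site_per_year * sites * years      # = 1 substitution per Myr on average

counts = rng.poisson(expected, size=100_000)           # many replicate one-Myr windows
print("mean:", counts.mean(), " variance:", counts.var())
print("P(3 or more changes in one window):", (counts >= 3).mean())
# Around 8% of windows show 3+ changes, so seeing 3 is no reason to reject the clock.
```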
"I think it’s entirely possible, that very different things are happening at the two scales." I'm more intrigued by idea that the same things are happening on very different scales. Particularly ontogenesis and phylogenesis. The parallels seem too numerous and I suspect they're the same process on different timescales. Consider ontogenesis as the unfolding of a preprogrammed sequence from single cell to adult form. All the information required to construct the adult form is contained, unexpressed, in the original cell. What's phylogenesis? It's almost certainly the story of a single cell progressing through a sequence into an adult form. Ostensibly any species today can be traced through an unbroken cell line to a single celled LUCA some 3.5 billion years ago. Now we know nature loves a fractal - similar patterns unfolding from the same seed on vastly different scales. Let's take the ontogeny/phylogeny comparison a hypothetical step closer to fractal similarity - suppose that phylogeny is the unfolding of a single original cell *in a preprogrammed sequence* into an adult form. Only instead of typical ontogenetic gestation periods measured in days, weeks, and months, it's a different scale - the phylogenetic gestation period is one measured in thousands, millions, and billions of years. That's not a big conceptual step when you think about it. It makes good sense. Other interesting parallels include death of individuals and extinction of species. Adult forms resulting from ontogenesis live a span of days, months, or years, then die out. Phylogeny is a story of adult forms (species) that live a span of thousands or millions of years then die out. In both cases they appear to have the goal of reproducing into slightly modified forms that repeat the cycle of birth, life, and death, but on timescales that differ by a few orders of magnitude. The death of the old makes room for life of the new in both individual organisms and individual species. Intriguing, is it not? Let's examine another question that we seldom hear asked. Has phylogenetic evolution stopped? When's the last date where we've marked the emergence of a new kingdom, order, family, phyla, or genus? It looks to me like either evolution at the genus level and higher either took a break beginning a few million years ago or it stopped altogether. How can we say whether it is still occuring today beyond minor adaptations which can be easily attributable to changes in allele frequencies? So macroevolution has either stopped or is on hiatus. We can't tell the difference because the only way to do so is by waiting it out over macroevolutionary timescales. Suppose it's ended. Suppose rational man is the adult form that phylogenesis was intended to reach. Why not? We're unique enough. Now take it one step further. Suppose phylogenesis has entered a new phase. I call this phase technological evolution (as opposed to biological evolution). If intelligent design wasn't a part and parcel of biological diversity in the past it sure is now and I'll pelt anyone who tries to disagree with variety of rotten genetically engineered fruits and vegetables. Now we've entered yet another timescale for phylogenetic evolution. Through intelligent designers wearing labcoats genetic evolution has entered a new era. If we don't wipe ourselves out macroevolution is heading somewhere new and it's going there FAST. 
No more waiting around millions of years for random variation (if there is such a thing in the first place) to produce interesting new things - now it's getting done at the speed of engineering workstations coupled to gene-splicing machines... Food for thought. DaveScot
Cambion By the way, how does your example of phylogenetic homology support anything at all about constancy in the rate of protein evolution? You're preaching to the choir about common descent. I have no strong doubts about a hypothetical LUCA that existed 3.5 billion years ago. The genetic code is virtually identical in every living thing ever examined, from the simplest prokaryote to the most complex eukaryote. That's pretty compelling evidence for common descent, although one must logically acknowledge that common design is also a valid inference. I know of no way to distinguish between common descent and common design. Feel free to expound on a way to discriminate between the two if you know of one. DaveScot
I agree one can make a lot of inferences... How can we falsify those inferences? How can you rule out horizontal gene transfer as the mechanism that linked the genes instead of common ancestry? How can you rule out convergent evolution for that particular gene? Lots and lots of inferences. All valid. None falsifiable because you can't duplicate historical evolution by experiment and you can't sequence genes that no longer exist. You know the difference between theoretical science and experimental science, right? Is historical biology a theoretical or experimental science? DaveScot
DaveScot, You can make a lot of inferences about what ancestral sequences would have looked like using only extant sequence data. Let's say we sequence a gene in mice and sequence the same gene in humans, and a piece of the sequences we get looks like:

mouse: ATA GCG GAT
human: ACA GGG GCT

We would infer that the most recent common ancestor (MRCA) of humans and mice (which lived somewhere around 90 million years ago) had a sequence something like:

MRCA: A-A G-G G-T

where we know the first blank to be either T or C, the second blank to be either C or G, and so on. Thus, we infer a total of 3 mutations occurring along the two branches leading from this MRCA to present-day mouse and human. We do not know, however, which branches they occurred in; all 3 could have been in mouse, or 1 in human and 2 in mouse, and so on... We also cannot know whether these 3 mutations occurred at the same time, or whether they occurred in a spread-out, clock-like fashion. Okay, so two extant sequences don't get us very far. However, we have the sequences from a number of species that lie within this phylogeny. To update what we had before:

mouse: ATA GCG GAT
rat: ATA GCG GCT
gibbon: ATA GGG CCT
human: ACA GGG GCT
chimp: ACA GGG GCT

Thus, we infer the MRCA to be ATA G-G GCT. We can also infer much more precisely on which branch individual mutations fall: there is a T->C mutation on the primate branch leading to human-chimp, there is a G->C mutation on the gibbon-specific branch, there is a C->A mutation on the mouse-specific branch, and there is either a G->C mutation on the branch leading to mouse-rat, or a C->G mutation on the branch leading to the primates. Thus, by adding more and more extant sequences, we can partition where ancestral mutations occurred with finer and finer accuracy. When this is done for a gene with hundreds of base pairs and a few dozen mutations, the mutations fall out onto the branches more or less as one would expect if mutations were constantly accumulating at a very slow and steady rate. This has been done over and over again, for many genes, with the result that most of the time a purely neutral accumulation of mutations cannot be statistically rejected.

Finally, I think it's very interesting, and not at all explained, how protein evolution can occur in such a constant fashion while morphological evolution occurs in such fits and starts. I think it's entirely possible that very different things are happening at the two scales. Compare this to physics, with random noise and quantum mechanics when you look very closely and Newtonian motion when you look from farther away. cambion
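[Editor's note: for readers who want to see how this kind of by-eye inference can be mechanized, here is a minimal Python sketch of Fitch's small-parsimony algorithm applied to the toy alignment above. The sequences and the tree shape ((mouse, rat), (gibbon, (human, chimp))) come from the comment; everything else, including the use of '-' for ambiguous sites, is just illustrative, and real ancestral reconstruction typically uses probabilistic models rather than plain parsimony.]

seqs = {
    "mouse":  "ATAGCGGAT",
    "rat":    "ATAGCGGCT",
    "gibbon": "ATAGGGCCT",
    "human":  "ACAGGGGCT",
    "chimp":  "ACAGGGGCT",
}

tree = (("mouse", "rat"), ("gibbon", ("human", "chimp")))

# Fitch parsimony on a fixed tree, one site at a time.  Each internal node
# gets the intersection of its children's state sets if that is non-empty,
# otherwise their union (which costs one inferred mutation).
def fitch(site_states, node):
    if isinstance(node, str):                 # leaf: a species name
        return {site_states[node]}, 0
    left, right = node
    lset, lcost = fitch(site_states, left)
    rset, rcost = fitch(site_states, right)
    common = lset & rset
    if common:
        return common, lcost + rcost
    return lset | rset, lcost + rcost + 1

ancestor = []
for i in range(len(seqs["mouse"])):
    states, cost = fitch({sp: s[i] for sp, s in seqs.items()}, tree)
    ancestor.append(next(iter(states)) if len(states) == 1 else "-")
print("".join(ancestor))   # ambiguous sites shown as '-'

Run as written, this prints ATAG-GGCT, i.e. the same MRCA (ATA G-G GCT) that the comment infers by hand, with position 5 left ambiguous between C and G.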
Oops - forgot the second article. I was laughing too hard. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=151321 This one rather candidly remarks that molecular clocks are "seriously inaccurate" for dating phylogenetic divergence. If they're seriously inaccurate for dating phylogenetic divergence, then exactly what the heck are they *useful* for? Anyone? Anyone? Ferris Bueller? Anyone? DaveScot
cambion Here's a couple of good articles... http://en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution A good example of tautology is at the bottom, where the neutral theory is used to detect selection: if fossil dating doesn't agree with molecular dating, it's declared evidence that the gene underwent selection pressure (i.e., it is non-neutral). It's a circular argument with no wrong answers. When fossil dating and molecular dating agree, it's confirmation for neutral evolution. When fossil dating and molecular dating disagree, it's confirmation for natural selection. There's no way to falsify either one of them! It's Darwin of the Gaps!!!! :-) DaveScot
cambion Just out of curiosity - how does one determine that a given gene mutates at a "constant rate" over geologic timespans when no copy of the gene except from living tissue is available to sequence? How do we know that it didn't undergo bursts of rapid mutation and long periods of stasis? The indisputable testimony of the fossil record is a story of bursts of rapid evolution followed by long periods of stasis. Why should individual proteins exhibit constant, predictable behavior when the speciation that ostensibly results from cumulative mutations is unpredictable and erratic? DaveScot
This is interesting. I must admit that I haven't been keeping up that much with the latest literature on the neutral theory / hypothesis. What predictions does it make that are falsified by experimental data? Any references that you could provide would be much appreciated... cambion
cambion The neutral theory (eh? neutral HYPOTHESIS actually) behind genetic clocks is another classic example of a historical-biology hypothesis that fails its predictions (is falsified) but, instead of being abandoned like falsified hypotheses should be, is propped up by ad hoc modifications to the theor... er, hypothesis. As with most of these failed predictions, it's what I call "Darwin of the Gaps" to the rescue. In the large number of cases where fossil dating and molecular dating disagree, why, we just blame it on natural selection! RM+NS to the rescue again!!! Nothing in historical biology makes sense without the all-powerful, unfalsifiable RM+NS gap filler. Karl Popper is spinning in his grave... DaveScot
Hmm... I'm still having a little trouble trying to follow you. If you had to say something, make a hypothesis as it were (rather than "we just don't know"), about what is happening to protein sequences over millions of years, what would it be? It seems that you don't think they are changing in a slow and constant fashion; instead (if I'm following you correctly), they only give the 'appearance' of slow and constant change. What appears clock-like at first glance is in fact large pieces of divergence thrown in at the appropriate times? cambion
cambion said, "The observation is a almost constant rate of sequence change during the course of particular protein’s evolution. " There are sequence divergence in proteins. That is undeniable. The cause of that divergence is presumed to act at a clock-like rate, but that is a presumption not an observation. Further, no one knows what the initial conditions of the ancestors were. For example, what if the creatures suddenly appeared with the sequence divergences intact (like at the cambrian explosion), and every speciation event added an appropriate amaount of divergence until the tree of life was filled out. This of course would be too fantastic a story to accept, and that is why it is rejected in favor of a molecular clock. But if the molecular clock is broken, and worse, if it never existed, we must come to terms with it, no matter how un-palatable the alternatives. There is one alternative that is always available: "we don't know". I'm predicting that to sustain the molecular clock, the "just so stories" would have to be so fantastic to rescue the theory that any postulated evolutionary sequence of events would be indistinguishable from appeals to magic. Salvador scordova
Hi johnnyb, I apologize for that problem. I have a subscription to Nature, plus I have institutional access, and many times I don't realize my links won't work for others. An abstract can be found here: http://mbe.oxfordjournals.org/cgi/content/abstract/22/7/1561 "Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution." A diluted version with a decidedly (gasp) creationist spin is here: http://creationsafaris.com/crev200507.htm#20050715d This is an important field of study, as I have personally gone out on a limb and made a bold, empirically verifiable prediction that this is a disaster waiting to happen in evolutionary biology. Mutation rates are difficult to measure directly in some cases, especially the kind that is of interest, which may occur on the order of 1 in every million to a billion. I predict measured rates will not be consistent with the speculated rates of the evolutionary biologists. Preliminary lab results for the few direct measurements have been consistent with my predictions and have been unfavorable to prevailing theories. The paper by Ho is further evidence to that end. And the paper Bill cited yields even more problems. I certainly have my biases, but here is a case where I am coming out with a bold, testable prediction, and if I'm right, 20 years from now there's going to be a lot of egg on some faces, so to speak. I credit Denton and Hoyle, who foresaw all these difficulties 20 years ago. Frankly, I agree with Denton: the molecular clock hypothesis is more like medieval astrology than a serious scientific theory. In the following link, with a (gasp, my apologies) heavy creationist bent, a medical doctor and a medical technologist explain what the molecular clock is all about and even cite some reasonable reservations about Denton's interpretation: http://www.creationinthecrossfire.com/Articles/The%20Molecular%20Clock%20Hypothesis.html Salvador scordova
scordova: I'm pretty familiar with the writings on the molecular clock. However, I don't really understand what you mean by calling Hartl's "scientific" explanation of it a tautology. The observation is an almost constant rate of sequence change during the course of a particular protein's evolution. The (current) hypothesis for this observation is that the vast majority of sequence changes are completely neutral in the eyes of natural selection. I would think that a tautology would require the observation and the hypothesis to be basically the same thing. What am I missing? cambion
scordova: Can you provide a link to the abstract? The link you provided was to purchase the article, but the page itself didn't even give the title. johnnyb
Pav, Looking at sequence divergences in proteins, molecular evolution appears to proceed at a clock-like rate. To explain this rate, a mechanism was first proposed by Nobel Laureate Linus Pauling and Emile Zuckerkandl, and is known as the molecular clock hypothesis, which is not really a mechanism, but rather a tautology! See Denton's Evolution: A Theory in Crisis, and the chapter on "Biochemical Echo of Typology", to see that there was a looming disaster in evolutionary biology which Denton predicted 21 years ago. This recent paper essentially bears out that disaster. Sure, they propose "a new model", but I'm confident it is little more than a just-so-story tautology! Denton wrote prophetically 21 years ago, "the idea of uniform rates of evolution is presented in the literature as if it were an empirical discovery. The hold of the evolutionary paradigm is so powerful that an idea which is more like a principle of medieval astrology than a serious twentieth century scientific theory has become a reality for evolutionary biologists. ... the biological community seems content to offer explanations which are no more than apologetic tautologies." I have Hartl's graduate-level population genetics book. The "scientific" explanation for the apparent molecular clock was little more than a tautology. I'm confident Hartl's latest paper is little more than an improved tautology! These guys are incredibly predictable. For interested parties, an equally devastating problem for the molecular clock is arising elsewhere: http://tinyurl.com/b5s7r The evolution (pun intended) of the molecular clock hypothesis in the peer-reviewed literature is like watching a train wreck. Salvador scordova
But when they're talking about a constrained 'random walk', aren't the constraints imposed by nature, and hence don't they flow from the laws of nature? I have the paper downloaded now, so I'll read it over and hopefully I'll understand exactly what they mean by 'constraint'. From an informational point of view, however, I suspect you would differ from Denton in saying that the "laws" of nature cannot "add" information; hence the complicated "configuration spaces" that proteins embody represent information (quantum bits, almost), and so the "laws" of nature alone do not suffice to explain the rise of this atomic information. Is that roughly on the mark? PaV
No. Denton is arguing that life is built into the laws of nature and that these laws are inherently teleological, directed toward bringing about not just life in general but humanity. At the Mere Creation conference in 1996 he remarked that what he was doing in Nature's Destiny was "pure Aristotle." William Dembski
**We propose a new model of protein evolution that is reminiscent of a constrained ‘random walk’ through fitness space, which is based on the fitness consequences and distribution of mutational effects on function, stability, aggregation and degradation. Bill: Is this Denton's point in Nature's Destiny? PaV
