Uncommon Descent Serving The Intelligent Design Community

Build me a protein – no guidance allowed! A response to Allan Miller and to Dryden, Thomson and White


Could proteins have developed naturally on Earth, without any intelligent guidance? The late astrophysicist Sir Fred Hoyle (1915-2001) thought not, and one can immediately grasp why, just by looking at the picture above, which shows the protein hexokinase, with much smaller molecules of ATP and the simplest sugar, glucose, shown in the top right corner for comparison (image courtesy of Tim Vickers and Wikipedia). Briefly, Hoyle argued that since a protein is typically made up of at least 100 or so amino acids, of which there are 20 kinds, the number of possible amino acid sequences of length 100 is astronomically large. Among these, the proportion that are able to fold up and perform a biologically useful task as proteins is vanishingly small. Hoyle argued that there wouldn’t have been enough time for Nature to explore the set of all possibilities on the primordial Earth and hit on a protein that could do something useful. Even billions of years would not have been enough. The origin of even a single protein looks like a biochemical miracle.

I should mention in passing that there are a lot of misleading Web sites on “Hoyle’s fallacy”, which purport to take apart his argument without a proper understanding of the mathematical logic that underlies it. For those who would like to learn more about Hoyle’s argument, I would recommend biologist Stephen Jones’ online article, Fred Hoyle about the 747, the tornado and the junkyard, as well as a 1981 essay by Hoyle entitled, The Universe: Past and Present Reflections. But I digress.
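The combinatorics behind Hoyle's argument are easy to check for oneself. Here is a minimal Python sketch, using the figures quoted above (a 100-residue protein and the standard 20-letter amino acid alphabet):

```python
import math

# The number of distinct amino acid sequences of length L,
# drawn from an alphabet of A amino acids, is A ** L.
A = 20   # standard amino acid alphabet
L = 100  # a short protein, per Hoyle's argument

sequences = A ** L
# Express the count as a power of ten for easier comparison.
exponent = L * math.log10(A)
print(f"20^100 = 10^{exponent:.1f}")  # roughly 10^130
```

Even this deliberately modest protein length yields a sequence space of roughly 10^130 possibilities, which is the sort of number Hoyle had in mind.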

Over at The Skeptical Zone, Allan Miller has written a post entitled Journal club – Protein Space. Big, isn’t it?, which attempts to refute Hoyle’s claim that the chance of Nature hitting on a functional protein during the Earth’s 4.5-billion-year history is astronomically low. Miller argues that there would have been ample time for Nature to build functional proteins on the primordial Earth. Evolution, he claims, could have worked perfectly well even if the set of all “possible proteins” were much smaller than it is today, and hence far easier to explore; moreover, evolution could easily have searched this reduced set within the time available (say, a billion years) and hit upon some proteins that actually worked. In his post, Miller also addresses the thermodynamic issues relating to how proteins could have formed in the first place.

To support his case, Miller cites a 2008 paper by David Dryden, Andrew Thomson and John White of Edinburgh University, entitled, How much of protein sequence space has been explored by life on Earth?, which defends the claim that “a reduced alphabet of amino acids is quite capable of producing all protein folds (approx. a few thousand discrete folds; Denton 2008) and providing a scaffold capable of supporting all protein functions…. Therefore it is entirely feasible that for all practical (i.e. functional and structural) purposes, protein sequence space has been fully explored during the course of evolution of life on Earth.”

Miller may or may not be aware that there have been no fewer than four responses to Dryden, Thomson and White’s paper, which shoot it full of holes.

Proteins before and after folding. Image courtesy of Wikipedia.

1. A response to Dryden, Thomson and White from Dr. Cornelius Hunter

Dr. Cornelius G. Hunter is a graduate of the University of Illinois where he earned a Ph.D. in Biophysics and Computational Biology. In 2011, Dr. Hunter rebutted the arguments in Dryden, Thomson and White’s paper, in two posts over at his blog, Darwin’s God: Response to Comments: Natural Selection Doesn’t Help, Gradualism is Out, and so is Evolution (July 2, 2011) and The Amyloid Threat, Big Numbers Game and Quote Mining: Protein Evolution and How Evolutionists Respond to the Empirical Evidence (September 15, 2011). The key points from Dr. Hunter’s response are as follows:

The paper [by Dryden, Thomson and White] attempts to make two general points. First that evolution can succeed with a much smaller protein sequence space and second, that evolution can easily search the entire protein sequence space. Both conclusions are scientifically ridiculous and are inconsistent with what we do understand about proteins…

For the first claim, the evolutionists argue for a smaller protein sequence space because:

A. “the actual identity of most of the amino acids in a protein is irrelevant” and so we can assume there were only a few amino acids in the evolution of proteins, rather than today’s 20.

B. Only the surface residues of a protein are important.

C. Proteins need not be very long. Instead of hundreds of residues, evolution could have used about 50 for most proteins.

For Point A, the evolutionists use as support a series of simplistic studies that replaced the actual protein three-dimensional structure and amino acid chemistries with cartoon, two-dimensional lattice versions.

Likewise Point B is at odds with science, and again is an unwarranted extrapolation on a simplistic lattice study.

For Point C, the evolutionists note that many proteins are modular and consist of self-contained domains “of as few as approximately 50 amino acids.” But the vast majority of protein domains are far longer than 50 residues. Single domain proteins, and domains in multiple-domain proteins are typically in the hundreds of residues…

To defend their second claim, that evolution can easily search the entire protein sequence space, the evolutionists present upper and lower bound estimates of the number of different sequences evolution can explore.

Their upper bound estimate of 10^43 (a one followed by 43 zeros) is ridiculous. It assumes a four billion year time frame with 10^30 bacteria constantly testing out new proteins… You can’t use bacteria to explain how proteins first evolved when the bacteria themselves require an army of proteins.

The lower bound of 10^21 is hardly any more realistic. The evolutionists … continue to rely on the pre-existence of an earth filled with a billion species of bacteria (with their many thousands of pre-existing proteins)…

The scientific fact is that the numbers are big. This isn’t a “game.”

For instance, consider an example protein of 300 residues (many proteins are much longer than this). With 20 different amino acids to choose from, there are a total of 10^390 different amino acid sequences possible. Now let’s simplify and assume only four different amino acids are needed. This reduces the problem to 10^180 different sequences.

Next let’s assume that only 50% of the residues are important. At the other 50%, any amino acid will do. That is, fully half of the amino acid sequence is inconsequential. These are extremely aggressive and unrealistic assumptions, yet nonetheless we are left with a total of 10^90 sequences. 90 may not appear to be a big number, but a one followed by 90 zeros is. It is completely impractical for evolution.

And if you don’t agree with my example, then we have the evolutionary experiments, described above, which concluded that 10^70 tries would be required. And even that was only for a fraction of the protein machine, and it assumed a pre-existing biological world with its many proteins already in place.

So let’s take the evolutionist’s own numbers at face value, giving them every advantage. The number of experiments required is 10^70 and the number of experiments possible is 10^43. Even here, giving the evolutionists every advantage, evolution falls short by 27 orders of magnitude.

The theory, even by the evolutionist’s own reckoning, is unworkable. Gradualistic evolution—the test that Darwin himself set forth—or non gradualistic evolution, it does not matter. Evolution fails by a degree that is incomparable in science.

The numbers, then, appear to rule out the scenario envisaged by Dryden, Thomson and White. Even using the wildly optimistic suppositions made by Darwinian evolutionists, there wouldn’t have been enough time for Nature to try out all possibilities and thereby hit upon a sequence of amino acids that could fold up properly, enabling it to perform a biologically useful task. Billions of years isn’t anywhere near enough time, when you need decillions of years to complete the task!
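Dr. Hunter's chain of reductions, and the final 27-order-of-magnitude shortfall, can be verified in a few lines of Python. (The figures are taken directly from the excerpt above; none are mine.)

```python
import math

def log10_count(alphabet, length):
    """Base-10 exponent of the number of sequences: alphabet ** length."""
    return length * math.log10(alphabet)

# Hunter's example: a 300-residue protein.
print(f"20 acids, 300 sites: 10^{log10_count(20, 300):.1f}")  # ~10^390
print(f" 4 acids, 300 sites: 10^{log10_count(4, 300):.1f}")   # ~10^180
# Assume only half the sites matter: 4^150 sequences remain.
print(f" 4 acids, 150 sites: 10^{log10_count(4, 150):.1f}")   # ~10^90
# Trials needed (10^70) versus trials available (10^43):
print(f"shortfall: 10^{70 - 43} experiments")                 # 10^27
```

Each successive concession to the evolutionary scenario shaves the exponent down, yet even the most generous figure leaves a search space vastly larger than the number of trials available.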

2. A response to Dryden, Thomson and White from Dr. Douglas Axe

Miller also appears to be unaware that Biologic Institute director Dr. Douglas Axe responded to Dryden, Thomson and White’s paper back in 2008, in an online post entitled, Science stories. Dr. Axe’s credentials in the field are impressive: after obtaining a Caltech Ph.D., he held postdoctoral and research scientist positions at the University of Cambridge, the Cambridge Medical Research Council Centre, and the Babraham Institute in Cambridge. He has written two articles for the Journal of Molecular Biology (see here and here for abstracts), and has co-authored an article in the Proceedings of the National Academy of Sciences, an article in Biochemistry and an article in PLoS ONE. His work has been reviewed in Nature and featured in a number of books, magazines and newspaper articles.

Dr. Axe’s article pithily summarizes Dryden, Thomson and White’s argument before proceeding to mow it down:

Here we’ll look at a recent paper by Dryden, Thomson, and White (DTW), published in Journal of the Royal Society Interface. [2] Its stated conclusion is: “It is entirely feasible that for all practical (i.e. functional and structural) purposes, protein sequence space has been fully explored during the course of evolution of life on Earth.” What this means, in simple language, is that the functions we see proteins performing in cells are not so extraordinary that we should be surprised to see them.

To understand how the DTW [Dryden, Thomson and White – VJT] paper attempts to justify its claim, consider the following analogy between proteins and sentences. Just as sentences are written by arranging characters in sequence, so proteins are built by linking amino acids into strings with specified sequences. The amino acid ‘alphabet’ has twenty members, comparable to the size of actual alphabets, and the length of protein ‘sentences’ written in their alphabet is similar to the length of actual written sentences. In both cases the ability to do many useful things by arranging characters into appropriate sequences opens up a world of possibilities.

By this way of viewing things, cells depend on several thousand protein ‘sentences’, each with its own important meaning. Considering the complexity of this biological ‘text’, chance-based explanations of it certainly call for careful probabilistic evaluation. But as DTW point out, the actual probabilistic difficulty of such a thing depends on several factors. Their paper focuses on two of these: the length of the required ‘sentences’, and the size of the ‘alphabet’ needed to write them. Their claim is that neither of these requirements is really as stringent as it appears to be.

We’ll use the analogy to get a feel for this claim, keeping in mind of course that the claim is about proteins rather than sentences. Consider the DTW conclusion quoted above. That sentence is 185 characters long, making it similar in length to biological proteins. [3]

We might try shortening it to “Earthly life has fully explored protein functions.” This brings the length down to 50 characters, though not without affecting the meaning. The bigger problem, though, is that the DTW proposal also calls for radical reduction of the alphabet size. In fact, for this shortened sentence to meet their proposal, we would need to re-write it with a tiny alphabet of four or five symbols — and that mini-alphabet would have to work not just for this sentence but for all sentences in a text the size of the DTW paper.

…According to DTW, the functions that biological proteins perform could be adequately performed with proteins that are much shorter and incorporate considerably fewer kinds of amino acids…

In fact, the most conclusive scientific evidence on this matter seems to contradict their claim. First and foremost is the very observation they seek to explain — the functional proteins we see in nature. The mere fact that these proteins are far too long and employ far too many amino acids to meet the DTW restrictions ought to make us assume that they don’t meet those restrictions, absent a convincing case that they do. After all, why would cells go to so much trouble making all twenty amino acids if far fewer would do? And if fewer really would do, why do cells so meticulously avoid mistaking any of the twenty for any other in their manufacture of proteins? [4]

The cellular apparatus for making proteins does incorporate wrong amino acids, but the rarity of these errors makes the process remarkably well tuned for accurate synthesis of long proteins, not mini-proteins. A popular biochemistry textbook puts it this way: “An error frequency of about 0.0001 per amino acid residue was selected in the course of evolution to accurately produce proteins consisting of as many as 1000 amino acids while maintaining a remarkably rapid rate for protein synthesis.” [5] So, while the textbook ignores the problem that the DTW paper addresses — how on earth such things could evolve — the DTW paper ignores the aspects of proteins that plainly defy simplification.

What’s more, the inherent complexity of biological proteins is confirmed by experiments that test it directly. We know that amino acid changes tend to be functionally disruptive even when the replacements are similar to the originals [6], and we know what typically causes this — reduced structural stability of the functional form [7]. So, not only does the DTW claim suffer from a lack of direct supportive evidence — it also suffers from a substantial body of directly contrary evidence.


[2] http://rsif.royalsocietypublishing.org/content/5/25/953.full

[3] http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1150220

[4] http://www.sciencedirect.com/science

[5] Berg JM, Tymoczko JL, Stryer L (2002) Biochemistry (5th edition). Freeman.

[6] http://www.ncbi.nlm.nih.gov/pubmed/10966772

[7] http://www.ncbi.nlm.nih.gov/pubmed/12079393

Dr. Axe’s questions deserve to be answered: why would Nature go to the trouble of building proteins out of 20 different kinds of amino acids, if just a few would suffice? And why would it come up with an efficient mechanism for detecting errors in amino acids, if these errors don’t matter very much?
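Dr. Axe's point about translational accuracy can be put into numbers. Assuming the error frequency of 0.0001 per residue quoted from the Berg, Tymoczko and Stryer textbook above, a short Python sketch shows just how well tuned the machinery is for long proteins:

```python
# With an error frequency of 0.0001 per residue (the figure from the
# biochemistry textbook quoted above), the chance that a 1000-residue
# protein is synthesized with no errors at all is:
p_correct_site = 1 - 0.0001
p_error_free = p_correct_site ** 1000
print(f"{p_error_free:.3f}")  # about 0.905, i.e. ~90% of copies are perfect
```

In other words, the cell's proofreading is accurate enough that roughly nine out of ten 1,000-residue proteins come out flawless, a level of fidelity that would be pointless overkill if short, sloppy mini-proteins were all that function required.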

Three possible representations of the three-dimensional structure of the protein triose phosphate isomerase. Left: all-atom representation colored by atom type. Middle: Simplified representation illustrating the backbone conformation, colored by secondary structure. Right: Solvent-accessible surface representation colored by residue type (acidic residues red, basic residues blue, polar residues green, nonpolar residues white). Image courtesy of Wikipedia.

3. Dr. Kirk Durston’s criticisms of Dryden, Thomson and White’s paper

In 2009, another critic, Dr. Kirk Durston, uncovered several serious flaws in Dryden, Thomson and White’s paper. Dr. Durston completed his Ph.D. in Biophysics at the University of Guelph, specializing in the identification, quantification and application of functional information to protein structure. He is also the Director of the New Scholars Society. Dr. Durston sent a letter to Dryden, who initially corresponded with him and promised to get back to him. Unfortunately, he never did. Dr. Durston has kindly forwarded me a copy of his original letter of March 2, 2009, from which I quote the following excerpts:


I read your brief paper with interest. You have presented some interesting ideas, but I do have some major reservations which I have summarized below.

1. If one reduces the size of sequence space by reducing its dimensionality, the size of functional sequence space is also reduced. Of course, your point was that if the size of sequence space was sufficiently reduced, then all of the remaining sequence space could be searched given certain parameters. My concern has to do with whether functional sequence space shrinks to zero before the size of the total sequence space becomes small enough to be adequately explored. My own research suggests that it might be radically optimistic to posit that 2-amino acid space (or 2-property space), or even 3-amino acid space contains any functional sequence space for the majority of protein families…

2. It is true that if one examines a sequence alignment for an entire protein family, often consisting of 1,000 or more sequences, no site is perfectly conserved. That observation can be misleading, however, for it ignores the higher order associations between sites. These higher order relationships, related to the final structure and function of the protein, often require that if amino acid x occurs at site b, then amino acid y is required at site g and amino acid z is required at site k… The bottom line is this: functional sequence space is much smaller than what one might infer from simply looking at the variation in amino acids per site, due to the higher order associations between sites.

3. My current research involves the computational detection of higher order relationships between sites… [T]he protein family that I have been analyzing contains numerous higher order site relationships ranging from simple pair-wise relationships up to one 29th order relationship (consists of 29 sites). In mapping these higher order relationships to the 3-D structure of that protein, it can be seen that these relationships are structural. My point is that any discussion of functional sequence space must take into consideration the higher order relationships within the protein that are essential to the stability of its structure. Given what I am seeing in my own research, the size of functional sequence space for most proteins is likely to shrink to zero before the overall sequence space is downsized to 2- or even 3-amino acid space (or 2-property space). For this reason, I do not think a hypothesized 2- or 3-amino acid space is going to be sufficient for most of the protein families.

4. With regard to whether protein families can be simplified to one polar and one non-polar amino acid, or even two groups of acids, one polar and one non-polar, again my results suggest otherwise. I have attached an Excel file that contains my results for Ribosomal S2, a universal protein and, thus, possibly a component of the LUCA [last universal common ancestor – VJT]… These results suggest that a simple polar/non-polar 2-amino acid world would be a severe problem for RS 2. I have similar findings for RS 7, another universal protein. In the early stages of my research, my hypothesis was that the universal proteins were likely to be less complex and easier to find in an evolutionary search. I began with transforming their 20-amino acid sequence space into 2-property sequence space (as you have proposed). It became apparent in very short order that 2-property space was far too crude.

5. I would think that a prediction arising out of your hypothesis would be that the universal proteins should be more amenable to a 2 or a 3-amino acid world, (or a 2-property world) if we assume they are required for LUCA and are, thus, quite early. I’ve looked at several universal proteins and I am not optimistic that this prediction can survive falsification… Alternatively, your hypothesis predicts a relatively smooth probability distribution for the 20 amino acids. I’ve computed the probability distributions for all 20 amino acids in 35 different protein families. They are not even close to a 2-property probability distribution.

6. Finally, I see massive problems in going from, say, a 3-amino acid biological world to a 20 amino acid world. For example, as I’m sure you readily recognize, there are large problems in going from coding for 3 amino acids to coding for 20 amino acids. I am sure you are aware of several other very obvious problems related to the coding, translation, and fitness process. As you know, many of the universal proteins are related to the ribosome/translation component of biological life. From a purely Darwinian perspective, there is an energy and fitness expense for having 20 amino acids if just 2 or 3 or 5 will do.

Overall, my work with real data involving 35 different protein families has left me not at all optimistic that the hypothesis you advance in your paper is viable… What I am seeing is that the functional complexity of the coding, structure and function of protein families is significantly more advanced than I previously expected.

Best regards,

Computational Biophysics, University of Guelph

It seems that Dryden, Thomson and White’s paper has died the death of a thousand cuts at the hands of Dr. Durston’s skilled, rapier-like logic. Proteins are complex beasties, whose parts are delicately inter-linked. The idea that they could retain their functionality – or indeed, any kind of functionality – while the number of different kinds of amino acids they contain is reduced from 20 to 2 or 3 is simply preposterous, as well as being at odds with the experimental data that’s available to date. Moreover, there would have been an enormous cost in Nature’s using 20 amino acids to build a functional protein, if just a few would do the trick.

I’d like to mention one more critic of Dryden, Thomson and White’s paper, who has administered what I consider to be the coup de grâce with his work on “singleton” proteins. “Singleton proteins?” I hear you ask. “What are they?” Suffice it to say that you’ll be hearing a lot more about them from now on, on this website. Let us continue.

4. Dr. Branko Kozulic’s rebuttal of Dryden, Thomson and White’s paper

Dr. Branko Kozulic received his Ph.D. in biochemistry from the University of Zagreb, Croatia, in 1979. From 1983 to 1988 he worked at the Institute of Biotechnology, ETH-Zurich, Switzerland. For about fifteen years he was employed at a private Swiss biotech company, of which he was a co-founder. He currently works at Gentius Ltd. in Zadar, Croatia, and teaches at the Faculty of Food Technology and Biotechnology in Zagreb. He is also a member of the Editorial Team of the journal BIO-Complexity.

In 2011, Dr. Kozulic authored a paper entitled, Proteins and Genes, Singletons and Species. In his paper, Dr. Kozulic discusses the difficulty of generating even one functional protein by a random search, during the Earth’s 4.5 billion-year history. He then proceeds to assess the claims made in Dryden, Thomson and White’s paper, which argues that the search for the Earth’s first proteins may have been far easier than it looks today, as proteins were shorter then and were composed of fewer amino acids:

One strategy for defusing the problem associated with the finding of functional proteins by random search through the enormous protein sequence space has been to arbitrarily reduce the size of that space. Because the space size is related to protein length (L) as 20^L, where 20 denotes the number of different amino acids of which proteins are made, the number of unique protein sequences will rapidly decrease if one assumes that the number of different amino acids can be less than 20. The same is true if one takes small L values. Dryden et al. used this strategy to illustrate the feasibility of searching through the whole protein sequence space on Earth, estimating that the maximal number of different proteins that could have been formed on planet Earth in geological time was 4 x 10^43 [9]. In [the] laboratory, researchers have designed functional proteins with fewer than 20 amino acids [10, 11], but in nature all living organisms studied thus far, from bacteria to man, use all 20 amino acids to build their proteins. Therefore, the conclusions based on the calculations that rely on fewer than 20 amino acids are irrelevant in biology. Concerning protein length, the reported median lengths of bacterial and eukaryotic proteins are 267 and 361 amino acids, respectively [12]. Furthermore, about 30% of proteins in eukaryotes have more than 500 amino acids, while about 7% of them have more than 1,000 amino acids [13]. The largest known protein, titin, is built of more than 30,000 amino acids [14]. Only such experimentally found values for L are meaningful for calculating the real size of the protein sequence space, which thus corresponds to a median figure of 10^347 (20^267) for bacterial, and 10^470 (20^361) for eukaryotic proteins.

Kozulic’s take-home message is clear and unambiguous: in the real world, proteins need 20 different amino acids, not two. Moreover, hundreds of amino acid molecules are required to make a typical protein.
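Kozulic's sequence-space figures follow directly from the 20^L formula he cites. A quick Python check, using the median protein lengths (267 and 361 residues) from the excerpt above:

```python
import math

def space_exponent(length, alphabet=20):
    # The size of sequence space is alphabet ** length;
    # return its base-10 exponent for readability.
    return length * math.log10(alphabet)

print(f"bacterial median (L=267):  10^{space_exponent(267):.0f}")  # ~10^347
print(f"eukaryotic median (L=361): 10^{space_exponent(361):.0f}")  # ~10^470
```

These match the 10^347 and 10^470 figures Kozulic reports, and both dwarf even the most generous estimate of the number of sequences Nature could have sampled.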

Now, this wouldn’t be so bad if proteins were somehow bunched or linked together in terms of their chemical properties. That way, you might start with a small functional protein and (through a series of lucky chemical accidents) eventually work your way up from that small protein to a larger one, via a kind of “island-hopping” process. But proteins in the real world aren’t like that. Many of them are loners, with no close chemical “relatives.” Such proteins are called “singletons”, and they make up a very large proportion of all known proteins.

What have we learned from these tens of millions of protein sequences originating from the genomes of more than one thousand species? When proteins of similar sequences are grouped into families, their distribution follows a power-law [65-72], prompting some authors to suggest that the protein sequence space can be viewed as a network similar to the World Wide Web, electrical power grid or collaboration network of movie actors, due to the similarity of respective distribution graphs. There are thus small numbers of families with thousands of member proteins having similar sequences, while, at the other extreme, there are thousands of families with just a few members. The most numerous are “families” with only one member; these lone proteins are usually called singletons. This regularity was already evident from the analysis of 20 genomes in 2001 [66], and 83 genomes in 2003 [69]. As more sequences were added to the databases more novel families were discovered, so that according to one estimate about 180,000 families were needed for complete coverage of the sequences in the Pfam database from 2008 [71]. Another study, published in the same year, identified 190,000 protein families with more than 5 members – and additionally about 600,000 singletons – in a set of 1.9 million distinct protein sequences [73].


65. Huynen MA, van Nimwegen E (1998) The frequency distribution of gene family sizes in complete genomes. Mol Biol Evol 15: 583-589.

66. Qian J, Luscombe NM, Gerstein M (2001) Protein family and fold occurrence in genomes: power-law behaviour and evolutionary model. J Mol Biol 313: 673-681. doi:10.1006/jmbi.2001.5079.

67. Luscombe NM, Qian J, Zhang Z, Johnson T, Gerstein M (2002) The dominance of the population by a selected few: power-law behaviour applies to a wide variety of genomic properties. Genome Biol 3:research0040.1-0040.7.

68. Unger R, Uliel S, Havlin S (2003) Scaling law in sizes of protein sequence families: from super-families to orphan genes. Proteins 51: 569-576.

69. Enright AJ, Kunin V, Ouzounis CA (2003) Protein families and TRIBES in genome sequence space. Nucleic Acids Res 31: 4632-4638. doi:10.1093/nar/gkg495.

70. Lee D, Grant A, Marsden RL, Orengo C (2005) Identification and distribution of protein families in 120 completed genomes using Gene3D. Proteins 59: 603-615. doi:10.1002/prot.200409.

71. Sammut SJ, Finn RD, Bateman A (2008) Pfam 10 years on: 10 000 families and still growing. Brief Bioinform 9: 210-219. doi:10.1093/bib/bbn010.

72. Orengo CA, Thornton JM (2005) Protein families and their evolution – a structural perspective. Annu Rev Biochem 74: 867-900. doi:10.1146/annurev.biochem.74.082803.133029.

73. Yeats C, Lees J, Reid A, Kellam P, Martin N, Liu X, Orengo C (2008) Gene3D: comprehensive structural and functional annotation of genomes. Nucleic Acids Res 36: D414-D418. doi:10.1093/nar/gkm1019.

The take-home message here is that the island-hopping strategy won’t work: most of the proteins that exist in Nature are “loners” or “singletons”, that can’t be generated in this way. Intelligent foresight is the only known process that can overcome this probabilistic hurdle. The evidence from what we know about proteins is by now luminously clear: they could only have been made by careful planning.

Thermodynamic issues relating to protein formation

When discussing thermodynamic issues relating to the formation of the first proteins on the primordial Earth, Allan Miller makes some rather remarkable concessions in his post over at The Skeptical Zone. He frankly acknowledges that proteins could never have evolved in the Earth’s primordial seas (Darwin’s “warm little pond”):

The ‘warm little pond’ is chemically naive; a strawman. Darwin (who coined the phrase) knew nothing of thermodynamics, nor protein. The free energy change associated with condensation/hydrolysis of the peptide bond means that it requires the input of energy to make it. The energy of motions of molecules in solution is not enough. Even with appropriate energy, having hit the jackpot once is insufficient. One has to retain that sequence, and this random process is not repeatable. So calculating ‘the probability of a protein’ by combinatorial means is irrelevant if that is not how it happened.

Allan Miller proposes instead that ribozymes may have played a role in assisting peptide chains to explore the whole of protein space.

Like short peptides, short RNA and DNA strands have catalytic ability (ribozymes), and all the basic reactions are within their scope. One particularly relevant reaction is the ability to join an amino acid to a nucleic acid monomer, ATP, which can be accomplished by a ribozyme just 5 bases long. This is a central step in modern protein synthesis, the lone monomer now extended by an elaborate ‘tail’ arrangement – the tRNA molecule – and the joining now performed by a protein catalyst. Charging the acid in this way overcomes the thermodynamic barrier to peptide synthesis, because aminoacylated ATP has the energy to form a peptide bond where ‘bare’ amino acids do not. This gives an inkling of the mode by which Hoyle’s peptide space may have been actually accessed and explored. Short peptides formed by ribozymes from a limited acid library, with limited catalytic ability, may become longer and more specific and versatile by duplications and recombination of subunits.

I’d like to answer Miller’s proposal with a single picture. This is what a ribozyme looks like. It’s the hammerhead ribozyme (image courtesy of William G. Scott and Wikipedia).

I put it to my readers that Miller has solved one problem (how to build proteins) only by creating another (how to build ribozymes).

I’d also like to quote what the Wikipedia article says about this ribozyme:

In the natural state, a hammerhead RNA motif is a single strand of RNA, and although the cleavage takes place in the absence of enzymes, the hammerhead RNA itself is not a catalyst in its natural state, as it is consumed by the reaction and cannot catalyze multiple turnovers.

Hmmm. Doesn’t sound too promising, does it? And on top of that, it appears fiendishly difficult to generate naturally, in the absence of intelligent guidance.

But don’t take my word for it. I suggest that readers go and have a look at the article, A New Study Questions RNA World (April 16, 2012) over at Evolution News and Views:

A new study in PLoS One shows that RNA and the proteins involved in protein synthesis must have co-evolved. This flies in the face of RNA-world theories, which presume that RNA formed first and that catalytic function (usually performed by proteins) was completed by catalytic RNA, known as ribozymes.

Researchers at the University of Illinois used phylogenetic modeling methods to evaluate the evolutionary history of the ribosome by correlating RNA structure and the ribosome protein structure. Their studies reveal several things of interest.

One of the assumptions in the RNA first hypothesis is that the active site of the ribosome, the peptidyl transferase center (PTC), which is the key player in protein synthesis, evolved first. However, Harish et al.’s studies reveal that the ribosome subunits actually evolved before the PTC active site and those subunits co-evolved with RNA, or what would eventually be sections of tRNA.

The authors conclude that their study answers some of the difficult questions associated with the RNA First World, while suggesting that there may have been a ribonucleoprotein primordial world.


Overall, the authors appeal to co-option and co-evolution and justify this using phylogenetic homology studies. They contend, as many in the ID camp do, that “the de novo appearance of complex functions is highly unlikely. Similarly, it is highly unlikely that a multi-component molecular complex harboring several functional processes needed for modern translation could emerge in a single or only a few events of evolutionary novelty.” Their explanation, however, is that a simpler system was performing a different function, and then was recruited into the complex protein translation machine.

The question that follows is what exactly did the recruiting? What provokes recruitment to another system? The authors labeled this time of recruitment the “first major transition” but their explanation of the transition is a little cloudy.

They seem to answer the question of “motivation to recruitment” by appealing to co-evolution. The RNA and ribosome proteins are co-dependent such that as one evolves, the other does too and somehow it reached a point where a “major transition” occurs.

There are many striking features of this study, such as the authors’ acknowledgement of the deficiency of ribozymes to account for the “chicken-and-egg” problem with protein synthesis, and their recognition of the improbable evolution of RNA apart from the ribosomal protein in view of the fact that the relevant functions are so intimately intertwined.

While these results show a relationship and even a correlation between tRNA and the ribosome, it is still unclear what exactly promoted recruitment, what attracted the tRNA to the proto-ribosome, or why co-option must be the conclusion. Could this not also be a case of an irreducibly complex machine?

Indeed. In the absence of any experimental data confirming these fanciful speculations about how proteins may have arisen via an unguided natural process, I can only regard them as the chemical equivalent of castles in the air.

But speculation is one thing; misinformation is another. Even though there are now several detailed online rebuttals of Dryden, Thomson and White’s claim that there would have been plenty of time for natural processes to hit upon a functional protein on the primordial Earth, the myth refuses to die. It is to be hoped that this little bouquet of mine, in which I have brought all the rebuttals together in a single post, will help to slay this myth.

All the evidence we have to date on proteins points towards a single conclusion. In the words of the late Sir Fred Hoyle:

A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.
(The Universe: Past and Present Reflections, Engineering and Science, November 1981, p. 12.)

Allan Miller has responded to this post, but unfortunately his response is evidence-free, i.e. it doesn't contain any evidence for unguided processes producing functional proteins in a world that didn't have proteins. Joe
Ba77, thanks for that. I've seen some of it before, other stuff is new. The video you linked to has been added to my watch list, not just for that episode, but for the entire series. It looks like it will be both entertaining and informative. I just need to find time to watch the entire thing. :) Chance Ratcliff
Chance, from the 38:38 minute mark to the 40:32 minute mark of the following video, the quantum teleportation experiments between the Canary Islands of La Palma and Tenerife are more clearly explained. The snippet features a small clip of Anton Zeilinger.
Zeilinger on Quantum Teleportation - video http://www.youtube.com/watch?feature=player_detailpage&v=EGhQmNZhlqw#t=2318s
As to the old complaint from materialists that quantum spookiness only applies at the small scale, please note exactly what highly sensitive instruments were used which enabled the teleportation experiment to be successful:
Quantum Spookiness Spans the Canary Islands - March 2007 Excerpt: A team has transmitted entangled photons some 144 kilometers (89 miles) between La Palma and Tenerife, two of Spain's Canary Islands off the coast of Morocco.,,, Using a laser, the researchers created entangled pairs of photons on La Palma and fired one member of each pair to a European Space Agency (ESA) telescope on Tenerife, ,, Hughes says his group employed highly sensitive detectors normally used in astronomy,,, http://www.scientificamerican.com/article.cfm?id=entangled-photons-quantum-spookiness
also of note:
LIVING IN A QUANTUM WORLD - Vlatko Vedral - 2011 Excerpt: Thus, the fact that quantum mechanics applies on all scales forces us to confront the theory’s deepest mysteries. We cannot simply write them off as mere details that matter only on the very smallest scales. For instance, space and time are two of the most fundamental classical concepts, but according to quantum mechanics they are secondary. The entanglements are primary. They interconnect quantum systems without reference to space and time. If there were a dividing line between the quantum and the classical worlds, we could use the space and time of the classical world to provide a framework for describing quantum processes. But without such a dividing line—and, indeed, with­out a truly classical world—we lose this framework. We must ex­plain space and time (4D space-time) as somehow emerging from fundamental­ly spaceless and timeless physics. http://phy.ntnu.edu.tw/~chchang/Notes10b/0611038.pdf
The most radical implication coming out of all these experiments, and the one most directly challenging to atheistic/materialistic thinking, is that consciousness, not energy and/or matter, is the 'ultimate universal reality'.
"It will remain remarkable, in whatever way our future concepts may develop, that the very study of the external world led to the scientific conclusion that the content of the consciousness is the ultimate universal reality" - Eugene Wigner - (Remarks on the Mind-Body Question, Eugene Wigner, in Wheeler and Zurek, p.169) 1961 - received Nobel Prize in 1963 for 'Quantum Symmetries'

Quantum physics says goodbye to reality - Apr 20, 2007 Excerpt: They found that, just as in the realizations of Bell's thought experiment, Leggett's inequality is violated – thus stressing the quantum-mechanical assertion that reality does not exist when we're not observing it. "Our study shows that 'just' giving up the concept of locality would not be enough to obtain a more complete description of quantum mechanics," Aspelmeyer told Physics Web. "You would also have to give up certain intuitive features of realism." http://physicsworld.com/cws/article/news/27640

Lecture 11: Decoherence and Hidden Variables - Scott Aaronson Excerpt: "Look, we all have fun ridiculing the creationists who think the world sprang into existence on October 23, 4004 BC at 9AM (presumably Babylonian time), with the fossils already in the ground, light from distant stars heading toward us, etc. But if we accept the usual picture of quantum mechanics, then in a certain sense the situation is far worse: the world (as you experience it) might as well not have existed 10^-43 seconds ago!" http://www.scottaaronson.com/democritus/lec11.html
Chance Ratcliff, It might interest you to know that La Palma, the island featured in the time-lapse video, is one of the islands that has become semi-famous for the work that Anton Zeilinger and company have accomplished there. Work in quantum mechanics. Work which has dramatically challenged the way most people view reality. (which is why it is very fitting to have such a deeply contemplative time lapse video done there.) Such work as the following:
143 km: Physicists break quantum teleportation distance - Sep 05, 2012 Excerpt: An international team led by the Austrian physicist Anton Zeilinger has successfully transmitted quantum states between the two Canary Islands of La Palma and Tenerife, over a distance of 143 km. The previous record, set by researchers in China just a few months ago, was 97 km. http://phys.org/news/2012-09-km-physicists-quantum-teleportation-distance.html

Fundamental Experiments With Entangled Photons: - 2010 One example is the quantum link between the Canary Islands of Tenerife and La Palma which allows detailed tests of entanglement. Recently, an experiment there for the first time closed a loophole in Bell experiments related to Bell's requirement that the choice of measurement must be free and random, i.e. uninfluenced by the photon emission at the source. This is achieved by ensuring space-like separation of the decisions what will be measured from the emission event. http://physics.berkeley.edu/index.php?option=com_dept_management&act=events&Itemid=444&task=view&id=1066
Here is a more detailed article on some of the work conducted on La Palma:
,,,A source in a laboratory located in La Palma, on the Canary Islands, produces path-polarization entangled photon pairs: with entanglement between two different degrees of freedom, namely the path of one photon denoted as the system photon, and the polarization of the other photon denoted as the environment photon. The system photon is sent to an interferometer, and the environment photon is subject to polarization measurements. The environment photon is sent away from the system photon to Tenerife,,, http://delorian64.wordpress.com/tag/entanglement-swapping/
Dude, that time lapse was fantastic. Here's one of my favorites: View from the ISS at Night. Music by John Murphy: Adagio in D Minor. The mp3 for that track is available free with signup for the website newsletter: Sign up Chance Ratcliff
OT: Time Lapse: Island in the Sky - video http://www.metacafe.com/watch/cb-YJX7RBzB_q9H/time_lapse_island_in_the_sky/ bornagain77
The Origin Of Life Requires Intelligence - Kirk Durston PhD - video http://www.metacafe.com/watch/10335610/ bornagain77
Eric @20, thanks. I found it odd that the abstract to the paper I referenced at #19 said the following:
Abstract The origin and evolution of the ribosome is central to our understanding of the cellular world. Most hypotheses posit that the ribosome originated in the peptidyl transferase center of the large ribosomal subunit. However, these proposals do not link protein synthesis to RNA recognition and do not use a phylogenetic comparative framework to study ribosomal evolution. Here we infer evolution of the structural components of the ribosome. Phylogenetic methods widely used in morphometrics are applied directly to RNA structures of thousands of molecules and to a census of protein structures in hundreds of genomes. We find that components of the small subunit involved in ribosomal processivity evolved earlier than the catalytic peptidyl transferase center responsible for protein synthesis. Remarkably, subunit RNA and proteins coevolved, starting with interactions between the oldest proteins (S12 and S17) and the oldest substructure (the ribosomal ratchet) in the small subunit and ending with the rise of a modern multi-subunit ribosome. Ancestral ribonucleoprotein components show similarities to in vitro evolved RNA replicase ribozymes and protein structures in extant replication machinery. Our study therefore provides important clues about the chicken-or-egg dilemma associated with the central dogma of molecular biology by showing that ribosomal history is driven by the gradual structural accretion of protein and RNA structures. Most importantly, results suggest that functionally important and conserved regions of the ribosome were recruited and could be relics of an ancient ribonucleoprotein world.
I'm unclear how one would go about addressing the chicken-and-egg paradox without addressing templating from proteins or RNA onto DNA. Here's everything I found in the paper about DNA:
Remarkably, these primordial r-proteins share ancient structural designs, the OB-fold and the related SH3-like small β-barrel folds. Translation initiation factors, tRNA binding proteins including AARSs, DNA binding proteins like T7 DNA ligase, and telomere binding proteins share the same fold arrangement [54]. RNA binding and DNA binding proteins therefore have a common evolutionary origin, suggesting ancient r-proteins and homologs were originally part of primitive replication machinery, which diversified and was co-opted for modern translation. ... Kinetic studies have shown that codon-anticodon base pairing initiates translation elongation and accelerates the induced-fit of substrate selection. Other template directed enzymes such as RNA and DNA polymerases use similar mechanisms [70] ... Randomizations of mono- and dinucleotides in single-stranded nucleic acids have been used to assess the effects of composition and order of nucleotides in the stability of folded molecules, uncovering evolutionary processes acting at DNA and RNA levels [97].
Besides invoking co-option and sequence homology as evidence for common origins, there doesn't seem to be much paradox resolution going on. To be fair however, the full-color charts and diagrams are extremely well done, gorgeous, and compelling by themselves. :)
"College entrance interview: “Do you believe in materialistic abiogenesis?” “Yes.” “I’m very sorry. In that case you’ll unfortunately need to go back to grade school to start over.”"
Lol! Personally I think that proficiency in basic logic should be requisite for good high school education. Chance Ratcliff
Chance: Thank you. Unfortunately, sometimes there is so little time... But I am always here with my heart :) gpuccio
Chance @16: Excellent point. The RNA-first idea is absurd. The only reason it was proposed is because it is perhaps (very slightly) less outrageous than the DNA-first model. But you're right, whichever side you start with, it has to eventually get mapped onto the other until we get the system we see today. It's not like we're just talking about a few details that need to be filled in. The whole materialistic abiogenesis concept is such a complete joke it ought to be used as a basic IQ test. College entrance interview: "Do you believe in materialistic abiogenesis?" "Yes." "I'm very sorry. In that case you'll unfortunately need to go back to grade school to start over." Eric Anderson
Thanks Rex, I just wanted to make sure I wasn't missing something. I looked at the PLoS One paper, Ribosomal History Reveals Origins of Modern Protein Synthesis, and it made scant references to DNA. Chance Ratcliff
Chance @16 Don't worry. You said what I was thinking. After a supposedly blind yet extremely efficient search, a protein is "deemed" functional. At some point the sequencing information of the nascent protein must be stored in the DNA* for later use. Sometimes some of the explanations that Darwinists come up with are so ridiculous that we're left second guessing our own powers of critical thinking. I often ask myself after reading these just so stories, "That doesn't make any sense. Am I missing something?" *assuming DNA was around to store the info in the first place RexTugwell
Andre @6
"Does nobody find it strange that people believe in their minds that time can mystically create complex structures?"
Well, deep time is certainly a necessary condition for gradual evolution by random variation and natural selection. There is no other choice. If one is to avoid design implications, an entire suite of inexplicable events and conditions must be satisfied. The whole enterprise of material origins looks like a game of Jenga. However, the blocks are currently held in place by Tinker Toy scaffolding, to prevent the whole mess from crashing down. Some of the players are protesting about the cheating, but unfortunately Judge Jones is one of the players using Tinker Toys. :D Chance Ratcliff
Doesn't an RNA-and-protein-first world suggest that proteins were templated to DNA at some point? Or are protein sequences presumed to be maintained in RNA segments which could have been reverse transcribed to DNA? In either case, OOL would be moving in a direction opposite the Central Dogma. Somebody please tell me if what I'm saying here makes absolutely no sense. Chance Ratcliff
gpuccio, it's always a pleasure to see you commenting here, especially after so long. :) Chance Ratcliff
VJ: Wonderful contribution, on a very important point. I have always believed that the paper in question makes no sense. I am happy that Axe, Durston and others have taken the time to show in detail some of the reasons why. I would simply suggest a very easy way to test the hypothesis: why don't they try to just rebuild some of the existing functional proteins, with their reduced amino acid alphabet, working only on the parts that, in their theory, are important, and substituting all the rest with random sequences? And then, test the function of the result. Good luck. The whole field of protein engineering would certainly become much simpler and easier. gpuccio
The materialistic myth of abiogenesis is preposterous almost beyond words. Not quite, I think, Eric. mapou's mordant epigram seems close to perfection. And yet the fact that it is the actual reality is not at all funny. It was something like, 'Evolution is a cretinous conjecture by cretins for cretins.' It's not that they are congenital cretins. That's what is so infuriating. They deliberately choose to think like cretins, because of their fear of the deism they KNOW it will all lead to. That is, before theism and Christianity. WHAT IS IT ABOUT THE IMPLICATION OF THE WORD, NON-LOCAL, THAT TERRIFIES YOU, ATHEISTS? We know, but we'd like to hear you say it. You know... 'a step in the door, etc.' Come on Mr Lewontin. You can 'rise' to the occasion as you did then. Its EXISTENCE is not one of the mysteries of QM. So why do you materialists continue to pretend it is, and ignore it, as if it did NOT exist? A world beyond time and space, the matrix of our world. Most of the mysteries you call, 'counter-intuitive', are not counter-intuitive at all. While being utterly counter-rational, the vast majority of mankind have always intuited the existence of 'non-locality' - AS WELL AS inferring it from the best whatsisname. Axel
Moreover, genes are modified in a myriad of ways by 'species specific' alternative splicing codes. The unfathomed regulated complexity being discovered, at which genes are recombined into functional proteins, has some researchers calling for the redefinition of the concept of 'gene' altogether:
The Extreme Complexity Of Genes – Dr. Raymond G. Bohlin – video http://www.metacafe.com/watch/8593991/

"Sixty years on, the very definition of 'gene' is hotly debated. We do not know what most of our DNA does, nor how, or to what extent it governs traits. In other words, we do not fully understand how evolution works at the molecular level." (DNA at 60: Still Much to Learn April 28, 2013) http://www.scientificamerican.com/article.cfm?id=dna-at-60-still-much-to-learn

Landscape of transcription in human cells – Sept. 6, 2012 Excerpt: Here we report evidence that three-quarters of the human genome is capable of being transcribed, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs. These observations, taken together, prompt a redefinition of the concept of a gene. http://www.nature.com/nature/journal/v489/n7414/full/nature11233.html

Time to Redefine the Concept of a Gene? - Sept. 10, 2012 Excerpt: As detailed in my second post on alternative splicing, there is one human gene that codes for 576 different proteins, and there is one fruit fly gene that codes for 38,016 different proteins! While the fact that a single gene can code for so many proteins is truly astounding, we didn’t really know how prevalent alternative splicing is. Are there only a few genes that participate in it, or do most genes engage in it? The ENCODE data presented in reference 2 indicates that at least 75% of all genes participate in alternative splicing. They also indicate that the number of different proteins each gene makes varies significantly, with most genes producing somewhere between 2 and 25. Based on these results, it seems clear that the RNA transcripts are the real carriers of genetic information.
This is why some members of the ENCODE team are arguing that an RNA transcript, not a gene, should be considered the fundamental unit of inheritance. http://networkedblogs.com/BYdo8
Moreover, proteins are far more complex than meets the eye. For instance biophotonic communication has been discovered for proteins (and DNA):
The mechanism and properties of bio-photon emission and absorption in protein molecules in living systems – May 2012 Excerpt: From the energy spectra, it was determined that the protein molecules could both radiate and absorb bio-photons with wavelengths of less than 3 micrometers and 5–7 micrometers, consistent with the energy level transitions of the excitons.,,, http://jap.aip.org/resource/1/japiau/v111/i9/p093519_s1?isAuthorized=no

Watching a protein as it functions - March 15, 2013 Excerpt: When it comes to understanding how proteins perform their amazing cellular feats, it is often the case that the more one knows the less one realizes they know. For decades, biochemists and biophysicists have worked to reveal the relationship between protein structural complexity and function, only to discover more complexity.,,, A signaling protein usually responds to a messenger or trigger, such as heat or light, by changing its shape, which initiates a regulatory response in the cell. Signaling proteins are all-important to the proper functioning of biological systems, yet the rapid sequence of events, occurring in picoseconds, had, until now, meant that only an approximate idea of what was actually occurring could be obtained.,, The team identified four major intermediates in the photoisomerization cycle. ,,, By tracking structurally the PYP photocycle with near-atomic resolution, the team provided a foundation for understanding the general process of signal transduction in proteins at nearly the lightning speed in which they are actually happening. http://phys.org/news/2013-03-protein-functions.html
Finding light to play a regulatory role in turning specific cell signaling pathways on and off for proteins is no small thing to consider since cell signaling pathways are extremely (irreducibly) complex with many different proteins involved in each specific pathway,,,
Signaling Pathways and Tables http://www.cellsignal.com/reference/pathway/index.html
The coordination that 'regulatory biophotonic light' orchestrates onto protein networks is observed here:
An Electric Face: A Rendering Worth a Thousand Falsifications - September 2011 Excerpt: The video suggests that bioelectric signals presage the morphological development of the face. It also, in an instant, gives a peek at the phenomenal processes at work in biology. As the lead researcher said, “It’s a jaw dropper.” http://darwins-god.blogspot.com/2011/09/electric-face-rendering-worth-thousand.html
Moreover, protein folding belongs to quantum physics, not to classical physics:
Physicists Discover Quantum Law of Protein Folding – February 22, 2011 Quantum mechanics finally explains why protein folding depends on temperature in such a strange way. Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from. To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,, Their astonishing result is that this quantum transition model fits the folding curves of 15 different proteins and even explains the difference in folding and unfolding rates of the same proteins. That's a significant breakthrough. Luo and Lo's equations amount to the first universal laws of protein folding. That’s the equivalent in biology to something like the thermodynamic laws in physics. http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein/
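The arithmetic in the excerpt just quoted is easy to verify. A minimal check, using only the numbers the excerpt itself supplies (10^100 configurations, sampled at 100 billion per second) plus a standard ~13.8-billion-year age for the universe:

```python
# Check the Levinthal-style estimate quoted above.
configurations = 10**100          # possible shapes for a 100-residue protein (per the excerpt)
rate_per_second = 100 * 10**9     # 100 billion configurations tried per second

seconds_needed = configurations // rate_per_second   # = 10^89 seconds

seconds_per_year = 365.25 * 24 * 3600                # ~3.16e7 seconds
age_of_universe = int(13.8e9 * seconds_per_year)     # ~4.4e17 seconds

print(seconds_needed > age_of_universe)  # True, by over 70 orders of magnitude
```

An exhaustive search at that rate would need about 10^89 seconds against a universe roughly 4 x 10^17 seconds old, which is the gap the excerpt is pointing at.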
Of course Dr. Torley, due to the complexity involved, one could dig much, much further in elucidating fascinating details of proteins, but I'm sure that I have now overstayed my welcome on your post by trying to convey just a small glimpse of how extremely complex a 'simple' protein actually is. Verse
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. In him was life, and that life was the light of all mankind.
Dr. Torley you also mentioned the work on 'context dependency' of amino acids in proteins by Durston (higher order relationships between (AA) sites). Here is the paper:
(A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston, Ph.D. Biophysics – 2012 Excerpt (Page 4): The Probabilities Get Worse This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search. http://powertochange.com/wp-content/uploads/2012/11/Devious-Distortions-Durston-or-Myers_.pdf

Statistical discovery of site inter-dependencies in sub-molecular hierarchical protein structuring - Kirk K Durston, David KY Chiu, Andrew KC Wong and Gary CL Li - 2012 Results The k-modes site clustering algorithm we developed maximizes the intra-group interdependencies based on a normalized mutual information measure. The clusters formed correspond to sub-structural components or binding and interface locations. Applying this data-directed method to the ubiquitin and transthyretin protein family multiple sequence alignments as a test bed, we located numerous interesting associations of interdependent sites.
These clusters were then arranged into cluster tree diagrams which revealed four structural sub-domains within the single domain structure of ubiquitin and a single large sub-domain within transthyretin associated with the interface among transthyretin monomers. In addition, several clusters of mutually interdependent sites were discovered for each protein family, each of which appear to play an important role in the molecular structure and/or function. Conclusions Our results demonstrate that the method we present here using a k-modes site clustering algorithm based on interdependency evaluation among sites obtained from a sequence alignment of homologous proteins can provide significant insights into the complex, hierarchical inter-residue structural relationships within the 3D structure of a protein family. http://bsb.eurasipjournals.com/content/2012/1/8
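The functional information measure Durston describes can be illustrated with a toy calculation. The sketch below is my own simplification, not Durston's code: it treats each aligned site as independent (which, as the excerpt stresses, is over-generous to the evolutionary search) and sums, over sites, the drop from the maximum entropy of the 20-letter amino acid alphabet to the entropy actually observed across functional sequences. The alignment data is hypothetical:

```python
import math
from collections import Counter

def site_entropy(column):
    """Shannon entropy (bits) of one alignment column."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def functional_information(alignment):
    """Sum over sites of H_max - H_observed, with H_max = log2(20) bits
    for the 20-letter amino acid alphabet (sites assumed independent)."""
    h_max = math.log2(20)
    return sum(h_max - site_entropy(col) for col in zip(*alignment))

# Toy alignment of four functional 5-residue sequences (hypothetical data):
# fully conserved sites contribute ~4.32 bits each, variable sites less.
alignment = ["MKVLA", "MKVLG", "MKILA", "MKVLA"]
print(round(functional_information(alignment), 2))
```

Inter-site dependencies of the kind Durston's clustering method detects would shrink the set of functional sequences further, pushing the functional information above this independent-sites estimate, which is the excerpt's point.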
Dr. Gauger observes that 'context dependency' is found at every level here:
"Why Proteins Aren't Easily Recombined, Part 2" - Ann Gauger - May 2012 Excerpt: "So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required." http://www.biologicinstitute.org/post/23170843182/why-proteins-arent-easily-recombined-part-2
The RecA protein that Dr. Durston looked at displays an amazing ability here:
The World’s Toughest Bacterium - 2002 Excerpt: Several recent studies of the bacterium's DNA repair pathway have focused on one protein that is now known to be essential for radiation resistance—the RecA protein.,, "When subjected to high levels of radiation, the Deinococcus genome is reduced to fragments," they write in Proceedings of the National Academy of Sciences. "RecA proteins may play role in finding overlapping fragments and splicing them together." http://www.genomenewsnetwork.org/articles/07_02/deinococcus.shtml

Extreme Genome Repair - 20 March 2009 Excerpt: If its naming had followed, rather than preceded, molecular analyses of its DNA, the extremophile bacterium Deinococcus radiodurans might have been called Lazarus. After shattering of its 3.2 Mb genome into 20–30 kb pieces by desiccation or a high dose of ionizing radiation, D. radiodurans miraculously reassembles its genome such that only 3 hr later fully reconstituted nonrearranged chromosomes are present, and the cells carry on, alive as normal. http://www.sciencedirect.com/science/article/pii/S0092867409002657
Certainly higher order relationships, what Dr. Durston termed 'context dependency', are required to explain the preceding 'Lazarus miracle'. And this 'Lazarus miracle' is not the only place we see these 'miraculous' higher order relationships:
Understanding ENCODE - gene regulation is similar to adding punctuation and spacing to a paragraph of written text - video http://www.youtube.com/watch?v=yjpW30z-SB8
I like the analogy in the preceding video of comparing the genetic text in the DNA to the written text of humans, but I would hold that the 'higher order' regulation of genes uncovered thus far by the ENCODE project is much more appropriately compared to constructing entire paragraphs, whole cloth, complete with punctuation and spelling, from a dictionary of a very basic set of 23,000 'gene words'. That extended analogy would be much more realistic as to what ENCODE has actually found in life! As to the problem Dr. Durston mentioned of changing the code from a 2 or 3 letter code to a 20 letter code, Richard Dawkins nicely, and simply, sums up the 'instantly catastrophic' problem faced by neo-Darwinists here:
Venter vs. Dawkins on the Tree of Life - and Another Dawkins Whopper - March 2011 Excerpt:,,, But first, let's look at the reason Dawkins gives for why the code must be universal: "The reason is interesting. Any mutation in the genetic code itself (as opposed to mutations in the genes that it encodes) would have an instantly catastrophic effect, not just in one place but throughout the whole organism. If any word in the 64-word dictionary changed its meaning, so that it came to specify a different amino acid, just about every protein in the body would instantaneously change, probably in many places along its length. Unlike an ordinary mutation...this would spell disaster." (2009, p. 409-10) OK. Keep Dawkins' claim of universality in mind, along with his argument for why the code must be universal, and then go here (linked site listing 23 variants of the genetic code). Simple counting question: does "one or two" equal 23? That's the number of known variant genetic codes compiled by the National Center for Biotechnology Information. By any measure, Dawkins is off by an order of magnitude, times a factor of two. http://www.evolutionnews.org/2011/03/venter_vs_dawkins_on_the_tree_044681.html
Well researched, Dr. Torley. Another keeper. A few notes of interest (hopefully). Life suddenly appeared on Earth as soon as it was possible for biotic life to appear after the 'Late Heavy Bombardment', with no prebiotic chemical signatures beforehand. Why then do neo-Darwinists keep appealing to 'deep time' to try to solve their 'protein problem'? Oh that's right: as Dr. Hunter pointed out, you just have to assume the existence of proteins and bacteria for billions of years in order to try to explain the origin of proteins and bacteria! :) How could we have missed that caveat?
The Sudden Appearance Of Life On Earth - video http://www.metacafe.com/watch/4262918 U-rich Archaean sea-floor sediments from Greenland - indications of +3700 Ma oxygenic photosynthesis (2003) http://adsabs.harvard.edu/abs/2004E&PSL.217..237R Dr. Hugh Ross - Origin Of Life Paradox (No prebiotic chemical signatures)- video http://www.metacafe.com/watch/4012696 Late Heavy Bombardment - image http://www.reasons.org/Media/Default/Images/Archive/clip_image008_0000.jpg
Moreover, evidence for 'sulfate reducing' bacteria has been discovered alongside the evidence for photosynthetic bacteria:
When Did Life First Appear on Earth? - Fazale Rana - December 2010 Excerpt: The primary evidence for 3.8 billion-year-old life consists of carbonaceous deposits, such as graphite, found in rock formations in western Greenland. These deposits display an enrichment of the carbon-12 isotope. Other chemical signatures from these formations that have been interpreted as biological remnants include uranium/thorium fractionation and banded iron formations. Recently, a team from Australia argued that the dolomite in these formations also reflects biological activity, specifically that of sulfate-reducing bacteria. http://www.reasons.org/when-did-life-first-appear-earth Iron in Primeval Seas Rusted by Bacteria - Apr. 23, 2013 Excerpt: The oldest known iron ores were deposited in the Precambrian period and are up to four billion years old (the Earth itself is estimated to be about 4.6 billion years old). ,,, This research not only provides the first clear evidence that microorganisms were directly involved in the deposition of Earth's oldest iron formations; it also indicates that large populations of oxygen-producing cyanobacteria were at work in the shallow areas of the ancient oceans, while deeper water still reached by the light (the photic zone) tended to be populated by anoxyenic or micro-aerophilic iron-oxidizing bacteria which formed the iron deposits.,,, http://www.sciencedaily.com/releases/2013/04/130423110750.htm
As to the fantasy of a ribozyme making proteins: well, contrary to what neo-Darwinists imagine to be possible, the ribosome, which is the molecular machine that actually makes the proteins of life, is fantastically complex and very similar to the CPU of an electronic computer. Why would evolution go to all that trouble if a ribozyme would do? As well, as was also pointed out by Dr. Torley, the ribosome is extremely intolerant of errors:
The Ribosome of the cell is found to be very similar to a CPU in an electronic computer: Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems - 2012 David J D’Onofrio*, David L Abel* and Donald E Johnson Excerpt: An operational analysis of the ribosome has revealed that this molecular machine with all of its parts follows an order of operations to produce a protein product. This order of operations has been detailed in a step-by-step process that has been observed to be self-executable. The ribosome operation has been proposed to be algorithmic (R-algorithm) because it has been shown to contain a step-by-step process flow allowing for decision control, iterative branching and halting capability. The R-algorithm contains logical structures of linear sequencing, branch and conditional control. All of these features at a minimum meet the definition of an algorithm and when combined with the data from the mRNA, satisfy the rule that Algorithm = data + control. Remembering that mere constraints cannot serve as bona fide formal controls, we therefore conclude that the ribosome is a physical instantiation of an algorithm. ... It is interesting to note that the CPU of an electronic computer is an instance of a prescriptive algorithm instantiated into an electronic circuit, whereas the software under execution is read and processed by the CPU to prescribe the program’s desired output. Both hardware and software are prescriptive. http://www.tbiomed.com/content/pdf/1742-4682-9-8.pdf Honors to Researchers Who Probed Atomic Structure of Ribosomes - Robert F.
Service Excerpt: "The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.” http://creationsafaris.com/crev200910.htm#20091010a The Ribosome: Perfectionist Protein-maker Trashes Errors - 2009 Excerpt: The enzyme machine that translates a cell's DNA code into the proteins of life is nothing if not an editorial perfectionist...the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products... To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is "shocking" and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis. http://www.sciencedaily.com/releases/2009/01/090107134529.htm
Hey, you know that famous quote by Darwin about how he could not believe in God because of parasites? Well, I read the quote, and he goes on to say that he believes in intelligent design of the laws of nature: “With respect to the theological view of the question. This is always painful to me. I am bewildered. I had no intention to write atheistically. But I own that I cannot see as plainly as others do, and as I should wish to do, evidence of design and beneficence on all sides of us. There seems to me too much misery in the world. I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of Caterpillars, or that a cat should play with mice. Not believing this, I see no necessity in the belief that the eye was expressly designed. On the other hand, I cannot anyhow be contented to view this wonderful universe, and especially the nature of man, and to conclude that everything is the result of brute force. I am inclined to look at everything as resulting from designed laws, with the details, whether good or bad, left to the working out of what we may call chance. Not that this notion at all satisfies me. I feel most deeply that the whole subject is too profound for the human intellect. A dog might as well speculate on the mind of Newton. Let each man hope and believe what he can. Certainly I agree with you that my views are not at all necessarily atheistical.” noam_ghish
The TSZ ilk are long on rhetoric and very, very short on actual evidence. Joe
And on top of all that, you then have to find a way to separate left- and right-handed amino acids, since only left-handed ones are used in life. Even just one right-handed amino acid would destroy the chain, even if by chance the amino acids were all in the correct order and managed to fold properly! Homochirality - another nail in the coffin of abiogenesis. tjguy
Does nobody find it strange that people believe in their minds that time can mystically create complex structures? Is it not strange that people reason like this: "If you can win the lotto by chance, then amino acids can form proteins by chance"? I find that type of reasoning very disturbing. What goes on in a mind that thinks like that? Oh wait, it's normal, because those minds don't care about truth or logic, only survival! Andre
This stuff can easily be explained with time and chance! You see, given enough time anything is possible, except for the following two things that people don't really seem to grasp or are willfully ignoring! 1.) Time cannot cause anything! Time is just a period from then to now! 2.) Chance cannot cause anything! Chance is a mathematical term, and that is that! Good luck to anybody who wants to prove that matter, using chance and time, can come alive; you need some seriously over-the-top dogmatic faith to believe in that kind of magic! Andre
To make things easy for the materialist creation myth, we usually just look at the number of amino acid possibilities raised to the power of the number of positions, and then assume a decent amount of tolerance in the specific amino acid at each position to make it easier. The numbers show that the odds of getting a reasonable-length functional sequence of amino acids are laughable within the time and resources of the known universe. But the reality is much worse than that. We often tend to talk about forming a protein in a materialist abiogenesis context almost as if we are talking about a sterile, idealized, fictional, lab-like setting where everything just happens smoothly and the only challenge is getting the right sequence. But the problems really start to multiply when we consider the practicalities of getting a sequence in a real-world chemical soup. Let's say, hypothetically, that the end product of a functional sequence that is forming is a 100-amino-acid chain. So we start out in our chemical soup with the chain starting to form: amino acid 1 (AA1), AA2 . . . AA20 . . . Things are looking good . . . But wait, while we're continuing to form the chain (AA21, AA22, etc.) what are AA1-AA20 doing? Just sitting there nicely, waiting patiently for AA100 to get added to the chain so that they can then fold into a functional sequence? Of course not. They are immediately being bonded to other molecules in the molecular soup, engaging, disengaging, breaking down the chain, etc. While we're busily adding amino acids to one end of our nascent chain, the prior amino acids have literally dozens of opportunities to react with other molecules in the soup. And they will do so. By the time I get to AA100, I don't have a nice sanitary chain of 100 amino acids waiting to fold neatly into a functional protein.* Instead I have an absolutely tangled mess, with side chains going off in all directions, some parts broken off, others interacting and meshing in the wrong place.
And that is if the original AA1-AA99 are even still with me by the time I get to AA100. So, yes, getting a functional sequence of 100 amino acids is indeed highly unlikely. But the problem of interfering cross reactions is in a very real sense just as big a problem. Then there are all of the other considerations: getting only peptide bonds, only left-handed amino acids, avoiding immediate breakdown if a protein were in fact ever formed, getting more than one protein to work together to actually do something meaningful, and so on. The materialistic myth of abiogenesis is preposterous almost beyond words. ----- * And this sets aside for a moment that many amino acid sequences do not just automatically fold into the proper shape for a functional protein anyway, but need assistance from cellular machinery to do so. Eric Anderson
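For what it's worth, the back-of-the-envelope arithmetic behind this kind of sequence-space argument can be sketched in a few lines of Python. The alphabet size (20 amino acids) and chain length (100) come from the discussion above; the functional fraction and trials-per-year figures are purely illustrative placeholders, not established values (published estimates of the functional fraction vary by many orders of magnitude):

```python
# A rough sketch of the sequence-space arithmetic discussed above.
# The functional_fraction and trials_per_year values are assumptions
# chosen for illustration only.
import math

ALPHABET = 20        # standard amino acids
CHAIN_LENGTH = 100   # hypothetical chain length used in the comment

# Total number of distinct amino acid sequences of that length.
total_sequences = ALPHABET ** CHAIN_LENGTH
print(f"sequence space ~ 10^{math.log10(total_sequences):.0f}")  # ~10^130

# Assumed fraction of sequences that fold into something functional.
# 1 in 10^74 is a placeholder in the range some ID authors argue for;
# this figure is heavily contested in the literature.
functional_fraction = 1e-74

# Assumed random sequence trials per year on the early Earth
# (a deliberately generous placeholder).
trials_per_year = 1e40

# Expected years of random search before one functional sequence appears.
expected_years = (1.0 / functional_fraction) / trials_per_year
print(f"expected wait ~ 10^{math.log10(expected_years):.0f} years")
```

On those placeholder assumptions the expected wait (~10^34 years) dwarfs the roughly 10^9 years available, which is the shape of Hoyle's original point; the conclusion, of course, moves with whatever functional fraction one assumes.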
When will the jury confirm, "No random births for the protein"? In the face of this juggernaut now "locomoting" from these Ph.D's who, if you please, Have chewed and spat out Miller's lame sugar-coating. Tim
We are living in interesting times. I recently read Dr. Kozulic's paper Proteins and Genes, Singletons and Species and look forward to seeing more discussion, hearing objections, etc. Singletons seem especially problematic, not just for protein evolution but for universal common descent. Chance Ratcliff
VJT: Well researched and sobering, as usual. KF kairosfocus