Uncommon Descent | Serving The Intelligent Design Community

Barriers to macroevolution: what the proteins say

KeithS has been requesting scientific evidence of a genuine barrier to macroevolution. The following is a condensed, non-technical summary of Dr. Douglas Axe's paper, The Case Against a Darwinian Origin of Protein Folds. Since (i) proteins are a pervasive feature of living organisms, (ii) new proteins and new protein folds have been continually appearing throughout the four-billion-year history of life on Earth, and (iii) at least some macroevolutionary events must have involved the generation of new protein folds, it follows that if Dr. Axe's argument is correct and neo-Darwinian processes are incapable of hitting upon new functional protein folds, then there are indeed genuine barriers to macroevolution, at least in some cases. Dr. Axe's argument is robustly quantifiable, and he carefully considers the many objections that might be raised against it. If there is a hole in his logic, then I defy KeithS to find it.

Finally, I would like to thank Dr. Axe for putting his paper online and making it available for public discussion. The headings below are my own; the text is entirely taken from his paper.

Abstract

Four decades ago, several scientists suggested that the impossibility of any evolutionary process sampling anything but a miniscule fraction of the possible protein sequences posed a problem for the evolution of new proteins. This potential problem – the sampling problem – was largely ignored, in part because those who raised it had to rely on guesswork to fill some key gaps in their understanding of proteins. The huge advances since that time call for a careful reassessment of the issue they raised. Focusing specifically on the origin of new protein folds, I argue here that the sampling problem remains. The difficulty stems from the fact that new protein functions, when analyzed at the level of new beneficial phenotypes, typically require multiple new protein folds, which in turn require long stretches of new protein sequence. Two conceivable ways for this not to pose an insurmountable barrier to Darwinian searches exist. One is that protein function might generally be largely indifferent to protein sequence. The other is that relatively simple manipulations of existing genes, such as shuffling of genetic modules, might be able to produce the necessary new folds. I argue that these ideas now stand at odds both with known principles of protein structure and with direct experimental evidence. If this is correct, the sampling problem is here to stay, and we should be looking well outside the Darwinian framework for an adequate explanation of fold origins.

Why the origin of new protein folds is a “search problem”

[T]he origin of protein folds can be framed with complete generality as a search problem. Briefly, because genes encode proteins, any functional problem that can be solved with a suitable protein can be solved with a suitable gene. Therefore any functional challenge that calls for structural innovation may be thought of as posing a search problem where the search space is the set of possible gene sequences and the target is the subset of genes within that space that are suitable for meeting the challenge… The aim here will be to decide whether Darwinian mechanisms (broadly construed) can reasonably be credited with this success.

If we take 300 residues as a typical chain length for functional proteins, then the corresponding set of amino acid sequence possibilities is unimaginably large, having 20^300 (= 10^390) members… Here the point is simply that biological protein sequences are indeed members of astoundingly large sets of sequence possibilities. And by ‘astoundingly large’ we mean much more numerous than any mutation events we might postulate as having produced them. According to one estimate, the maximum number of distinct physical events that could have occurred within the visible universe, including all particles throughout the time since the Big Bang, is 10^150. Since only a minute fraction of these events had anything to do with producing new protein sequences, we can assert with confidence that there is a vast disparity between the number of distinct protein sequences of normal length that are possible, on the one hand, and the number that might have become actual, on the other. In other words, real events have provided only an exceedingly sparse sampling of the whole set of sequence possibilities.
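
[The arithmetic here is easy to check. The short Python sketch below is my own illustration, not anything from Dr. Axe's paper; it simply recomputes the size of the 300-residue sequence space and compares it with the quoted 10^150 ceiling on physical events:]

```python
# Recompute the figures quoted above (editor's sketch, not from the paper).
import math

chain_length = 300   # residues in a "typical" functional protein
alphabet     = 20    # standard amino acids

log10_sequences = chain_length * math.log10(alphabet)
print(f"20^300 is about 10^{log10_sequences:.0f}")                    # ~10^390

log10_events = 150   # quoted ceiling on distinct physical events
print(f"shortfall: about 10^{log10_sequences - log10_events:.0f}")    # ~10^240
```

[Even if every physical event in the visible universe were devoted to the search, the space would exceed the number of events by a factor of roughly 10^240.]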

Axe’s metaphor: Searching for a gemstone in the Sahara Desert

We will refer to this as the problem of sparse sampling, or the sampling problem, with the intent of deciding whether or not it really is a problem for the standard evolutionary model. At the very least it raises the important question of how such sparse sampling would uncover so many highly functional protein sequences. To picture the difficulty, imagine being informed that a valuable gemstone was lost somewhere in the Sahara Desert. Without more specific information, any proposal for finding the missing gem would have to come to terms with the vastness of this desert. If only an infinitesimal fraction of the expanse can feasibly be searched, we would judge the odds of success to be infinitesimally small.

What if there’s more than one gemstone?

Evolutionary searches for functional proteins might seem less hopeless in some respects, though. For one, there is a highly many-to-one mapping of protein sequences onto protein functions. This means that vast numbers of comparably valuable targets (protein sequences that are comparably suitable for any particular function) are there to be found. Therefore, while it is effectively impossible to stumble upon a particular 1-in-10^390 protein sequence by chance, the likelihood of stumbling upon a particular protein function by chance will be m-fold higher, where m represents the multiplicity of sequences capable of performing that function…

Why the search space for a protein has to be very large

On the most basic level, it has become clear that protein chains have to be of a certain length in order to fold into stable three-dimensional structures. This requires several dozen amino acid residues in the simplest structures, with more complex structures requiring much longer chains. In addition to this minimal requirement of stability, most folded protein chains perform their functions in physical association with other folded chains [12]. The complexes formed by these associations may have symmetrical structures made by combining identical proteins or asymmetrical ones made by combining different proteins. In either case the associations involve specific inter-protein contacts with extensive interfaces. The need to stabilize these contacts between proteins therefore adds to their size, over and above the need to stabilize the structures of the individual folded chains…

The ATP synthase provides an opportunity at this point to refine the connection between protein size and the sampling problem. Returning to the lost gemstone metaphor, the gem is a new beneficial function that can be provided by a protein or a set of proteins working together, and the desert is the whole space of sequence possibilities within which successful solutions are to be found. Although some of the component proteins that form the ATP synthase are at the small end of the distribution shown in Figure 1 (see Figure 3 legend), none of these performs a useful function in itself. Rather, the function of ATP production requires the whole suite of protein components acting in a properly assembled complex. Consequently, the desert is most precisely thought of as the space of all DNA sequences long enough to encode that full suite. For our purposes, though, it will suffice to picture the space of protein sequences of a length equaling the combined length of the different protein types used to form the working complex (around 2,000 residues for the ATP synthase; see Figure 3 legend).
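
[For a sense of scale, the same arithmetic can be applied to the ~2,000-residue combined length quoted for the ATP synthase components. The sketch below is my own illustration, not a figure computed in the paper:]

```python
# Size of the sequence space for a ~2,000-residue suite of proteins
# (editor's sketch using the combined length quoted above).
import math

combined_length = 2000   # residues: all ATP synthase components combined
log10_space = combined_length * math.log10(20)
print(f"20^2000 is about 10^{log10_space:.0f}")   # ~10^2602
```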

Two possible “ways out” for neo-Darwinian evolution: either there are lots of gemstones in the desert, or the gemstones are suitably lined up, making them easy to find if the first one is located

Having shown that the problem of sparse sampling is real – meaning that cellular functions require proteins or suites of proteins that are of necessity far too large for the sequence possibilities to have been sampled appreciably – we now turn to the question of whether it is really a problem for neo-Darwinian evolution. Two possibilities for mitigating the problem need to be considered. One of these has been mentioned already. It is the possibility that the multiplicity of sequences capable of performing the requisite functions, m, might be large enough for working sequences to be found by random searches. The second possibility is that functional protein sequences might bear a relationship to one another that greatly facilitates the search. In the desert metaphor, imagine all the different gems being together in close proximity or perhaps lined up along lines of longitude and latitude. In either of these situations, or in others like them, finding the first gem would greatly facilitate finding the others because of the relationship their positions bear to one another…

Why the first neo-Darwinian solution to the sampling problem won’t work

…[W]e need to quantify a boundary value for m, meaning a value which, if exceeded, would solve the whole sampling problem. To get this we begin by estimating the maximum number of opportunities for spontaneous mutations to produce any species-wide trait, meaning a trait which is fixed in the population through natural selection (selective sweep). Bacterial species are most conducive to this because of their large effective population sizes. So let us assume, generously, that an ancient bacterial population sustained a species consisting of 10^10 individuals [26], passing through 10^4 generations per year. After five billion years, such a species would produce a total of 5×10^23 (=(5×10^9)x(10^4)x(10^10)) cells that happen to avoid the small-scale extinction events that kill most cells irrespective of fitness. These 5×10^23 ‘lucky survivors’ are the cells that are available for spontaneous mutation to accomplish whatever will be accomplished in the species… [A]ny adaptive step that is unlikely to appear in that number of cells is unlikely to have evolved in the entire history of the species.
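
[The 5×10^23 figure is just the product of the three quoted quantities. A minimal sketch of that arithmetic, added for clarity:]

```python
# Upper bound on "lucky survivors" available for mutation
# (editor's sketch of the arithmetic quoted above).
population  = 10**10     # individuals sustained in the bacterial species
generations = 10**4      # generations per year
years       = 5 * 10**9  # five billion years

lucky_survivors = population * generations * years
print(f"lucky survivors: {lucky_survivors:.1e}")   # 5.0e+23
```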

In real bacterial populations, spontaneous mutations occur in only a small fraction of the lucky survivors (roughly one in 300). As a generous upper limit, we will assume that all lucky survivors happen to receive mutations in portions of the genome that are not constrained by existing functions, making them free to evolve new ones. At most, then, the number of different viable genotypes that could appear within the lucky survivors is equal to their number, which is 5 × 10^23. And again, since many of the genotype differences would not cause distinctly new proteins to be produced, this serves as an upper bound on the number of new protein sequences that a bacterial species may have sampled in search of an adaptive new protein structure.

Let us suppose for a moment, then, that protein sequences that produce new functions by means of new folds are common enough for success to be likely within that number of sampled sequences. Taking a new 300-residue structure as a basis for calculation (I show this to be modest below), we are effectively supposing that the multiplicity factor m introduced in the previous section can be as large as (20^300)/(5×10^23), or 10^366. [Recall that 20^300 is about 10^390 – VJT.] In other words, we are supposing that particular functions requiring a 300-residue structure are realizable through something like 10^366 distinct amino acid sequences. If that were so, what degree of sequence degeneracy would be implied? More specifically, if 1 in 5×10^23 full-length sequences are supposed capable of performing the function in question, then what proportion of the twenty amino acids would have to be suitable on average at any given position? The answer is calculated as the 300th root of 1/(5×10^23), which amounts to about 83%, or 17 of the 20 amino acids. That is, by the current assumption proteins would have to provide the function in question by merely avoiding three or so unacceptable amino acids at each position along their lengths.
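
[The "17 of the 20 amino acids" figure follows from taking the 300th root of 1/(5×10^23). The short sketch below merely reproduces that step:]

```python
# Reproduce the per-site degeneracy implied by the assumption above
# (editor's sketch; the numbers come from the quoted passage).
chain_length = 300
sampled      = 5 * 10**23   # upper bound on sequences a species could sample

# If 1 in 5e23 full-length sequences performed the function, the average
# fraction of acceptable amino acids per position would be:
per_site = (1 / sampled) ** (1 / chain_length)
print(f"acceptable fraction per site: {per_site:.1%}")        # ~83%
print(f"roughly {per_site * 20:.0f} of the 20 amino acids")   # ~17
```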

No study of real protein functions suggests anything like this degree of indifference to sequence…

The second neo-Darwinian solution: Shortcuts to new folds?

The possibility yet to be examined is that functional protein sequences might bear a relationship to one another that allows spontaneous mutations to discover new functional protein folds much more readily than wholly random sampling would. The simplest way for this to occur would be if all functional sequences, regardless of what their functions are, happen to be much more similar to each other than a pair of random sequences would be. In other words, suppose there were a universal consensus sequence that typified all biological proteins, with functional diversity caused by minor deviations from that consensus. The effect of such a universal correlation between sequence and function would be to concentrate all the useful protein sequences within a tiny region of sequence space, making searches that start in that region much more likely to succeed.

Localized searches of this kind are known to work in some cases… The problem comes when we attempt to generalize this local phenomenon. Although there are definite correlations between the various kinds of functions that proteins perform and the respective fold structures used to perform them, and these structural correlations often imply sequence correlations as well, it is simply not the case that all functional folds or sequences are substantially alike. Consequently, while local searches may explain certain local functional transitions, we are left with the bigger problem of explaining how so many fundamentally new protein structures and functions first appeared.

To get an idea of the scale of this problem, consider that the SCOP classification of protein structures currently has 1,777 different structural categories for protein domains, the basic units of folded protein structure… [N]o model of protein origins can be considered satisfactory without accounting for the origin of this great variety of domain folds.

In fact, although the sampling problem has here been framed in terms of protein chains, it could equally be framed in terms of domains. Since domains are presumed to be the fundamental units of conserved structure in protein evolution [33], the question of whether functional sequences are confined to a small patch of sequence space is best addressed at the domain level. And it turns out that domain sequences are not confined in this way…

It therefore seems inescapable that considerable distances must be traversed through sequence space in order for new protein folds to be found. Consequently, any shortcut to success, if it exists, must work by traversing those distances more effectively rather than by shortening them.

A third neo-Darwinian possibility: proteins are made up of small reusable modules, which a search can easily discover

The only obvious possibility here is that new folds might be assembled by recombining sections of existing folds [40-42]. If modular assembly of this kind works, it would explain how just one or two gene fusion events might produce a new protein that differs substantially from its ‘parents’ in terms of overall sequence and structure. Of course, probabilistic limitations would need to be addressed before this could be deemed a likely explanation (because precise fusion events are much less likely than point mutations), but the first question to ask is whether the assumed modularity is itself plausible.

To examine this further, we begin by considering what this kind of modularity would require. If it is to be of general use for building up new folds, it seems to require that folds be divisible into more or less self-contained structural components that can be recombined in numerous ways, with each combination having a good chance of producing a well-formed composite structure. Two physical criteria would have to be met for this to be true. First, the sequence specificity for forming these components must be internal to the components themselves (making their structures self-contained), and second, the interactions that hold neighboring components together to form composite structures must be generic in the sense of lacking critical dependence on the particulars of the components.

The immediate problem is that the first criterion tends to be met only at the level of a complete fold – a folding domain. Important structural features are certainly discernible at lower levels, the most ubiquitous of these being the regular chain conformations known as the alpha helix and the beta strand (secondary structure being the term for these repetitive patterns in local chain structure). But these only find stable existence in the context of larger fold structures (tertiary structure) that contain them. That is, the smallest unit of protein structure that forms stably and spontaneously is typically a complete globular assembly with multiple, layered elements of secondary structure. Smaller pieces of structure can have some tendency to form on their own, which is important for triggering the overall folding process [43], but the highly co-operative nature of protein folding [44] means that stable structure forms all at once in whole chunks – domains – rather than in small pieces. Consequently, self-contained structural modules only become a reality at the domain level, which makes them unhelpful for explaining new folds at that level…

The binding interfaces by which elements of secondary structure combine to become units of tertiary structure are predominantly sequence dependent, and therefore not generic. This presents a major challenge for the idea of modular assembly of new folds, at least as a general explanation… As we will see next, several studies demonstrate that proteins with substantially different amino acid sequences (roughly 50% amino acid identity or less) fail to show part-for-part structural equivalence even if they are highly similar in terms of overall structure and function. Since the modularity hypothesis assumes a much more demanding sense of structural equivalence (where modules retain their structure even when moved between proteins that differ radically in terms of overall structure and function) the failure of the less demanding sense seems to rule that hypothesis out…

With no discernible shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins. The final thing to consider is how pervasive this problem is. How often in the history of life would new phenotypes have required new protein folds? Or, narrowing that question, how much structural novelty do metabolic innovations appear to have required in the history of bacteria? Continuing to use protein domains as the basis of analysis, we find that domains tend to be about half the size of complete protein chains (compare Figure 10 to Figure 1), implying that two domains per protein chain is roughly typical. This of course means that the space of sequence possibilities for an average domain, while vast, is nowhere near as vast as the space for an average chain. But as discussed above, the relevant sequence space for evolutionary searches is determined by the combined length of all the new domains needed to produce a new beneficial phenotype…

Summary: the gemstone metaphor revisited

…We have used a picture of gems hidden in a vast desert at various points in our discussion in order to illustrate the challenge. Now that we have estimated the relevant fractions it may be helpful to return to this picture. Imagine that the search for gems is conducted by specifying sample points as mathematically exact geographic coordinate pairs (longitude and latitude). Sampling then consists of determining whether a gemstone rests at any of these specified points. A target the size of a grain of sand amounts to about one part in 10^20 of a search space the size of the Sahara, which is above the feasibility threshold of one part in 5 × 10^23. So under favorable circumstances a Darwinian search would be capable of locating a sand-grain-sized gemstone in a Sahara-sized search space. As mentioned above, the ability to accomplish a search on this scale is clearly of some practical significance.
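
[The "one part in 10^20" figure can be sanity-checked with rough, round values for the Sahara's area and a sand grain's cross-section. The inputs below are my own order-of-magnitude assumptions; the paper supplies the ratio rather than these particular numbers:]

```python
# Rough sanity check of the grain-of-sand-to-Sahara ratio
# (editor's sketch with assumed round numbers, not figures from the paper).
sahara_area_m2   = 9.0e12    # ~9 million square kilometres
grain_diameter_m = 0.5e-3    # ~0.5 mm grain of sand
grain_area_m2    = grain_diameter_m ** 2   # order-of-magnitude cross-section

fraction = grain_area_m2 / sahara_area_m2
print(f"grain / Sahara: {fraction:.0e}")   # ~3e-20, i.e. about one part in 10^20
```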

But as a generator of new protein folds, it turns out to be decidedly insignificant. Extending our desert picture, imagine that the top surface of every grain of sand in the Sahara has a miniature desert of its own resting upon it – one in which the entire Sahara is replicated in minute detail. We may call the sub-microscopic sand in these miniature deserts level-1 sand, referring to the fact that it is one level removed from the real world (where we find level-0 sand). This terminology can be applied to arbitrarily small targets by invoking a succession of levels (along the lines of De Morgan’s memorable recursion of fleas). In terms of this picture, the sampling problem stems from the fact that the targets for locating new protein folds appear to be much smaller than a grain of level-0 sand. For example, the target that must be hit in order to discover one new functional domain fold of typical size is estimated to cover not more than one ten-trillionth of the surface of a single grain of level-1 sand. Under favorable circumstances a Darwinian search will eventually sample the grain of level-0 sand on which the right grain of level-1 sand rests, but even then the odds of sampling that level-1 grain are negligible, to say nothing of the target region on that grain. And the situation rapidly deteriorates when we consider more relevant targets, like beneficial new phenotypes that employ (typically) several new protein structures. In the end, it seems that a search mechanism unable to locate a small patch on a grain of level-14 sand is not apt to provide the explanation of fold origins that we seek.
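
[The "level-n sand" picture follows a simple recursion: if one grain covers roughly one part in 10^20 of the desert, then a grain of level-n sand covers roughly one part in 10^(20×(n+1)) of the level-0 desert. The sketch below spells out only that recursion; the target sizes themselves are Dr. Axe's estimates:]

```python
# The recursion behind the "level-n sand" metaphor (editor's sketch).
def level_grain_exponent(n: int) -> int:
    """Power of ten by which one grain of level-n sand is smaller than the level-0 desert."""
    return 20 * (n + 1)

for n in (0, 1, 14):
    print(f"one grain of level-{n} sand covers ~1 part in 10^{level_grain_exponent(n)} of the desert")
# level-0: 10^20, level-1: 10^40, level-14: 10^300
```

[On that scaling, even a whole grain of level-14 sand is about one part in 10^300 of the full search space, so a small patch on such a grain lies far below the one-part-in-5×10^23 feasibility threshold discussed above.]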

Clearly, if this conclusion is correct it calls for a serious rethink of how we explain protein origins, and that means a rethink of biological origins as a whole.

————————————————-

FINAL NOTE:

Readers will observe that the foregoing argument made by Dr. Axe has nothing to do with the argument made in his and Dr. Ann Gauger’s subsequent paper, The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway. Even if the argument in that paper were invalid, as KeithS claims, the above argument would still stand as a genuine barrier to macroevolution.

In any case, Dr. Gauger has replied to critics of the latter paper, here, here and here. (Dr. McBride’s comments are available here.) I invite readers to draw their own conclusions.

Comments
According to this source, Dr. Gauger gave quite an impressive presentation during the "Wistar 2" conference.

sparc
November 7, 2014 at 06:34 AM PDT

VJT: A good, workmanlike job as usual. Axe's argument is a stiff challenge to those who would dismiss the relevance of islands of function in very large configuration spaces where only a very sparse search is possible.

I only note that a more realistic estimate of the number of atom-level events would pivot on something about 10^30 times slower than the Planck time (roughly 5.4 * 10^-44 s), namely events on the scale of fast chemical reactions. With 10^80 atoms in the observed cosmos, 10^14 events/s and 10^17 s on a typical timeline, we are looking at more like 10^111 events or thereabouts. 500 bits have a configuration space of 3.27*10^150 possibilities, and 1,000 bits, 1.07*10^301. Such numbers -- and dismissals on "big numbers, harrumph" fail -- pose sobering search challenges. Where any base may succeed any other, and any amino acid may succeed any other, there really is such a large space to search: especially at the origin of life, but also to source the cell types, tissues, organs and systems that form a new body plan. Genome sizes for that run to 10 to 100+ million bases.

I repeat, in the teeth of current caricatures: the only known source of the requisite FSCO/I is design. The design inference filter approach is perfectly willing to accept false negatives by assigning the defaults to mechanical necessity and chance contingency, thereby imposing a stiff hurdle before design is inferred. The payoff is that when design is inferred, the inference is strong. KF

kairosfocus
November 7, 2014 at 05:14 AM PDT

Another reason why finding quantum entanglement/computation in proteins is so foreign to neo-Darwinian thought is that one must appeal to a non-local, beyond-space-and-time cause in order to explain entanglement, yet neo-Darwinism is based upon the reductive materialistic premise that there are no beyond-space-and-time causes.
Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” http://www.quantumlah.org/highlight/121029_hidden_influences.php Closing the last Bell-test loophole for photons - Jun 11, 2013 Excerpt:– requiring no assumptions or correction of count rates – that confirmed quantum entanglement to nearly 70 standard deviations.,,, http://phys.org/news/2013-06-bell-test-loophole-photons.html
Of supplemental note: context dependency is a far more difficult problem for Darwinists to deal with than I have illustrated here, because (and Aristotle would be happy) the 'form' of an organism is not reducible to the sequences of DNA and proteins (Jonathan Wells). Rather, 'form' is its own independent source of information that provides the primary basis for the overall 'context' of an organism. To get this very important 'context dependency' point across, I highly recommend Wiker & Witt's book "A Meaningful World", in which they show, using the "Methinks it is like a weasel" phrase of Richard Dawkins, that the phrase doesn't make any sense at all unless the entire context of the play Hamlet is taken into consideration.
A Meaningful World: How the Arts and Sciences Reveal the Genius of Nature – Book Review Excerpt: They focus instead on what “Methinks it is like a weasel” really means. In isolation, in fact, it means almost nothing. Who said it? Why? What does the “it” refer to? What does it reveal about the characters? How does it advance the plot? In the context of the entire play, and of Elizabethan culture, this brief line takes on significance of surprising depth. The whole is required to give meaning to the part. http://www.thinkingchristian.net/C228303755/E20060821202417/
In fact it is interesting to note what the overall context is for “Methinks it is like a weasel” that is used in the Hamlet play. The context in which the phrase is used is to illustrate the spineless nature of one of the characters of the play. To illustrate how easily the spineless character can be led to say anything that Hamlet wants him to say:
Ham. Do you see yonder cloud that ’s almost in shape of a camel? Pol. By the mass, and ’t is like a camel, indeed. Ham. Methinks it is like a weasel. Pol. It is backed like a weasel. Ham. Or like a whale? Pol. Very like a whale. http://www.bartleby.com/100/138.32.147.html
After realizing what the context of 'Methinks it is like a weasel' actually was, I remember thinking to myself that it was perhaps the worst possible phrase Dawkins could have chosen to illustrate his point, since the phrase, when taken in context, actually illustrates that the person saying it (Hamlet) was manipulating the other character into saying a cloud looked like a weasel. I am sure that deception and manipulation is hardly the idea Dawkins was trying to convey with his 'Weasel' example. Also of note: artificial intelligence (AI) does not 'understand' context, and this is a major roadblock in attempts at AI:
What Is a Mind? More Hype from Big Data - Erik J. Larson - May 6, 2014 Excerpt: In 1979, University of Pittsburgh philosopher John Haugeland wrote an interesting article in the Journal of Philosophy, "Understanding Natural Language," about Artificial Intelligence. At that time, philosophy and AI were still paired, if uncomfortably. Haugeland's article is one of my all time favorite expositions of the deep mystery of how we interpret language. He gave a number of examples of sentences and longer narratives that, because of ambiguities at the lexical (word) level, he said required "holistic interpretation." That is, the ambiguities weren't resolvable except by taking a broader context into account. The words by themselves weren't enough. Well, I took the old 1979 examples Haugeland claimed were difficult for MT, and submitted them to Google Translate, as an informal "test" to see if his claims were still valid today.,,, ,,,Translation must account for context, so the fact that Google Translate generates the same phrase in radically different contexts is simply Haugeland's point about machine translation made afresh, in 2014. Erik J. Larson - Founder and CEO of a software company in Austin, Texas http://www.evolutionnews.org/2014/05/what_is_a_mind085251.html
Verse: John 1:1 In the beginning was the Word, and the Word was with God, and the Word was God.

bornagain77
November 7, 2014 at 03:42 AM PDT

Dr. Torley, another problem that greatly exacerbates the difficulty for proteins is what is termed 'context dependency'. Dr. Durston puts the problem of context dependency as follows:
(A Reply To PZ Myers) Estimating the Probability of Functional Biological Proteins? Kirk Durston , Ph.D. Biophysics – 2012 Excerpt (Page 4): The Probabilities Get Worse This measure of functional information (for the RecA protein) is good as a first pass estimate, but the situation is actually far worse for an evolutionary search. In the method described above and as noted in our paper, each site in an amino acid protein sequence is assumed to be independent of all other sites in the sequence. In reality, we know that this is not the case. There are numerous sites in the sequence that are mutually interdependent with other sites somewhere else in the sequence. A more recent paper shows how these interdependencies can be located within multiple sequence alignments.[6] These interdependencies greatly reduce the number of possible functional protein sequences by many orders of magnitude which, in turn, reduce the probabilities by many orders of magnitude as well. In other words, the numbers we obtained for RecA above are exceedingly generous; the actual situation is far worse for an evolutionary search. http://powertochange.com/wp-content/uploads/2012/11/Devious-Distortions-Durston-or-Myers_.pdf
Moreover, Dr. Gauger informs us that context dependency is found throughout the protein structure, at the level of primary sequence, secondary structure, and tertiary (domain-level) structure.
“Why Proteins Aren’t Easily Recombined, Part 2? – Ann Gauger – May 2012 Excerpt: “So we have context-dependent effects on protein function at the level of primary sequence, secondary structure, and tertiary (domain-level) structure. This does not bode well for successful, random recombination of bits of sequence into functional, stable protein folds, or even for domain-level recombinations where significant interaction is required.” http://www.biologicinstitute.org/post/23170843182/why-proteins-arent-easily-recombined-part-2
That protein function is 'context dependent' is revealed by the following study. Proteins have now been shown to have a ‘Cruise Control’ mechanism which works to ‘self-correct’ the protein function (and structure) from any random mutations imposed on the proteins.
Proteins with cruise control provide new perspective: “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.” http://www.princeton.edu/main/news/archive/S22/60/95O56/
Thus the entire protein is shown to be involved in safeguarding the specific function(s) of a protein from any random mutations imposed on it. In other words, contrary to Darwinian thought, proteins are not 'searching' for new functions by varying their shape when mutations happen to individual amino acids; instead, proteins are designed, as a whole, to prevent mutations to their amino acids from having any detrimental effect on their function(s). How is it possible for a protein to operate as a cohesive whole despite mutations to individual amino acids? The reason it is possible for a protein to act as a cohesive whole, in a context-dependent manner, is that the entire protein structure is found to be quantumly entangled as 'a single quantum state':
Coherent Intrachain energy migration at room temperature – Elisabetta Collini & Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/
That the entire protein structure is quantumly entangled as 'a single quantum state' is also revealed by the fact that protein folding depends on 'quantum computation' in order to find its final folded state. First, a little background on how extremely difficult it is for 'random processes' to explain protein folding. Protein folding, contrary to what Darwinists would prefer to believe beforehand, is simply found not to be amenable to the 'randomness' Darwinists presuppose:
The Humpty-Dumpty Effect: A Revolutionary Paper with Far-Reaching Implications - Paul Nelson - October 23, 2012 Excerpt: Put simply, the Levinthal paradox states that when one calculates the number of possible topological (rotational) configurations for the amino acids in even a small (say, 100 residue) unfolded protein, random search could never find the final folded conformation of that same protein during the lifetime of the physical universe. http://www.evolutionnews.org/2012/10/a_revolutionary065521.html
In fact, solving the final form of a relatively short and simple protein fold requires several weeks to accomplish, even though several hundred thousand computers have been linked together for the task.
A Few Hundred Thousand Computers vs. A Single Protein Molecule – video https://www.youtube.com/watch?v=lHqi3ih0GrI
The reason why protein folding is so difficult for supercomputing to solve is that protein folding is similar to the infamous travelling-salesman problem, and travelling-salesman problems are notorious for keeping supercomputers busy for days.
Confronting Science’s Logical Limits – John L. Casti – 1996 Excerpt: It has been estimated that a supercomputer applying plausible rules for protein folding would need 10^127 years to find the final folded form for even a very short sequence consisting of just 100 amino acids. (The universe is 13.7 x 10^9 years old). In fact, in 1993 Aviezri S. Fraenkel of the University of Pennsylvania showed that the mathematical formulation of the protein-folding problem is computationally “hard” in the same way that the traveling-salesman problem is hard. http://www.cs.virginia.edu/~robins/Confronting_Sciences_Logical_Limits.pdf
Yet it is exactly this type of ‘traveling salesman problem’ that quantum computers excel at:
Speed Test of Quantum Versus Conventional Computing: Quantum Computer Wins - May 8, 2013 Excerpt: quantum computing is, "in some cases, really, really fast." McGeoch says the calculations the D-Wave excels at involve a specific combinatorial optimization problem, comparable in difficulty to the more famous "travelling salesperson" problem that's been a foundation of theoretical computing for decades.,,, "This type of computer is not intended for surfing the internet, but it does solve this narrow but important type of problem really, really fast," McGeoch says. "There are degrees of what it can do. If you want it to solve the exact problem it's built to solve, at the problem sizes I tested, it's thousands of times faster than anything I'm aware of. If you want it to solve more general problems of that size, I would say it competes -- it does as well as some of the best things I've looked at. At this point it's merely above average but shows a promising scaling trajectory." http://www.sciencedaily.com/releases/2013/05/130508122828.htm
Thus, it should not be surprising to learn that protein folding is now found to belong to the 'spooky' world of quantum mechanics and that protein folding does not belong to the world of classical mechanics.
Physicists Discover Quantum Law of Protein Folding – February 22, 2011 Excerpt: First, a little background on protein folding. Proteins are long chains of amino acids that become biologically active only when they fold into specific, highly complex shapes. The puzzle is how proteins do this so quickly when they have so many possible configurations to choose from. To put this in perspective, a relatively small protein of only 100 amino acids can take some 10^100 different configurations. If it tried these shapes at the rate of 100 billion a second, it would take longer than the age of the universe to find the correct one. Just how these molecules do the job in nanoseconds, nobody knows.,,, Their astonishing result is that this quantum transition model fits the folding curves of 15 different proteins and even explains the difference in folding and unfolding rates of the same proteins. That's a significant breakthrough. Luo and Lo's equations amount to the first universal laws of protein folding. That’s the equivalent in biology to something like the thermodynamic laws in physics. http://www.technologyreview.com/view/423087/physicists-discover-quantum-law-of-protein/
bornagain77
November 7, 2014 at 03:40 AM PDT

VJ: "KeithS has been requesting scientific evidence of a genuine barrier to macroevolution. " Keith would better spend his time, and ours, if he requested of himself scientific evidence of a genuine path to macroevolution of proteins. Should I remind him that a non existing, never observed, never logically supported path is the best "barrier" we can imagine in empirical science?gpuccio
November 7, 2014
November
11
Nov
7
07
2014
02:35 AM
2
02
35
AM
PDT