Uncommon Descent Serving The Intelligent Design Community

Proteins Fold As Darwin Crumbles

Categories: Intelligent Design

A Review Of The Case Against A Darwinian Origin Of Protein Folds By Douglas Axe, Bio-Complexity, Issue 1, pp. 1-12

Proteins adopt higher-order structures (e.g., alpha helices and beta sheets) that define their functional domains.  Years ago Michael Denton and Craig Marshall reviewed this higher structural order in proteins and proposed that protein folding patterns could be classified into a finite number of discrete families whose construction might be constrained by a set of underlying natural laws (1).  In his latest critique, Biologic Institute molecular biologist Douglas Axe raises the ever-pertinent question of whether Darwinian evolution can adequately explain the origin of protein folds, given the vast search space of possible sequence combinations for a moderately large protein, say 300 amino acids in length.  To begin, Axe introduces his readers to the sampling problem.  That is, given the postulated maximum number of distinct physical events that could have occurred since the universe began (10^150), we cannot assume that evolution has had enough time to search the 10^390 possible amino-acid combinations of a 300-amino-acid protein.
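The scale mismatch behind the sampling problem can be checked with a few lines of arbitrary-precision arithmetic (a quick sketch; the 10^150 event count and 300-residue length are the figures quoted above):

```python
from math import log10

# All possible 300-residue sequences drawn from the 20 standard amino acids
sequence_space = 20 ** 300
print(f"sequence space ~ 10^{log10(sequence_space):.0f}")  # ~10^390

# Postulated maximum number of distinct physical events since the universe began
max_events = 10 ** 150

# Even one sequence trial per physical event leaves almost all of the space unvisited
unsampled_exponent = log10(sequence_space) - log10(max_events)
print(f"fraction sampled at best: 1 in 10^{unsampled_exponent:.0f}")  # 1 in ~10^240
```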

The battle cry often heard in response to this apparently insurmountable barricade is that even though probabilistic resources would not allow a blind search to stumble upon any given protein sequence, the chances of finding a particular protein function might be considerably better.  Countering such a facile dismissal, Axe notes that proteins must meet very stringent sequence requirements if a given function is to be attained.  And size is important.  Enzymes, for example, are large in comparison to their substrates, and protein structuralists have shown that size is crucial for assuring the stability of protein architecture.

Axe raises the bar of the discussion by pointing out that enzyme catalytic functions very often depend on more than just their core active sites.  In fact, enzymes almost invariably contain regions that prep, channel and orient their substrates, as well as a multiplicity of co-factors, in readiness for catalysis.  Carbamoyl Phosphate Synthetase (CPS) and the Proton Translocating Synthase (PTS) stand out as favorites amongst molecular biologists for showing how enzyme complexes are capable of simultaneously coordinating such processes.  Overall, each of these complexes contains 1400-2000 amino acid residues distributed amongst several proteins, all of which are required for activity.

Axe employs a relatively straightforward mathematical rationale for assessing the plausibility of finding novel protein functions through a Darwinian search.  Using bacteria as his model system (chosen because of their relatively large population sizes), he shows how a culture of 10^10 bacteria passing through 10^4 generations per year over five billion years would produce a maximum of 5×10^23 novel genotypes.  This number represents the 'upper bound' on the number of new protein sequences, since many of the differences in genotype would not generate "distinctly new proteins".  Extending this further, a novel protein function requiring a 300-amino-acid sequence (20^300 possible sequences) would have to be achievable in roughly 10^366 different ways (20^300/5×10^23) for such a search to stand a realistic chance of finding it.
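Axe's upper bound, and the ratio that follows from it, can be reproduced directly from the figures quoted above (a sketch, not Axe's own calculation code):

```python
from math import log10

population = 10 ** 10          # bacteria per culture
generations_per_year = 10 ** 4
years = 5 * 10 ** 9            # five billion years

max_genotypes = population * generations_per_year * years
assert max_genotypes == 5 * 10 ** 23   # Axe's upper bound on novel genotypes

# For a blind search to have a realistic chance, each 300-residue function
# would need to be realizable by roughly this many distinct sequences:
ways_per_function = 20 ** 300 // max_genotypes
print(f"~10^{int(log10(ways_per_function))} sequences per function")  # ~10^366
```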

Ultimately we find that proteins do not tolerate this extraordinary level of "sequence indifference".  High-profile mutagenesis experiments on beta-lactamases and bacterial ribonucleases have shown that functionality is decisively eradicated when a mere 10% of amino acids are substituted in conserved regions of these proteins.  A more in-depth breakdown of data from a beta-lactamase domain and the enzyme chorismate mutase has further reinforced the pronouncement that very few protein sequences can actually perform a desired function; so few, in fact, that they are "far too rare to be found by random sampling".

But Axe's landslide evaluation does not end here.  He further considers the possibility that disparate protein functions might share similar amino-acid identities and that the jump between functions in sequence space might therefore be realistically achievable through random searches.  Sequence alignment studies between different protein domains do not support such an escape from the sampling problem.  While the identification of a single-amino-acid conformational switch has been heralded in the peer-reviewed literature as a convincing example of how changes in folding can occur with minimal adjustments to sequence, the resulting conformational variants are unstable at physiological temperatures.  Moreover, such a change has only been achieved in vitro and most probably does not meet the rigorous demands for functionality that play out in a true biological context.  What we also find is that 21 other amino-acid substitutions must be in place before the conformational switch is observed.

Axe closes his compendious dismantling of protein evolution by exposing the shortcomings of modular assembly models that purport to explain the origin of new protein folds.  The highly cooperative nature of structural folds in any given protein means that stable structures tend to form all at once at the domain (tertiary structure) level rather than at the fold (secondary structure) level of the protein.  Context is everything.  Indeed, experiments have held up the assertion that binding interfaces between different forms of secondary structure are sequence-dependent (i.e., non-generic).  Consequently, a much-anticipated "modular transportability of folds" between proteins is highly unlikely.

Metaphors are everything in scientific argumentation, and Axe's story of a random search for gemstones dispersed across a vast multi-level desert serves him well for illustrating the improbabilities of a Darwinian search for novel folds.  Axe's own experience has shown that reticence towards accepting his probabilistic argument stems not from some non-scientific point of departure in what he has to say, but from deeply held prejudices against the end point that naturally follows.  Rather than a house of cards crumbling on slippery foundations, the case against the neo-Darwinian explanation is an edifice built on a firm substratum of scientific authenticity.  So much so that critics of those who, like Axe, have stood firm in promulgating their case had better take note.

Read Axe’s paper at: http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1

Further Reading

  1. Michael Denton, Craig Marshall (2001), "Laws of form revisited," Nature 410, p. 417
Comments
"This selection yielded four new ATP-binding proteins that appear to be unrelated to each other or to anything found in the current databases of biological proteins."anaruiz
July 1, 2010
July
07
Jul
1
01
2010
07:49 PM
7
07
49
PM
PST
Szostak's 1 in 10^12? sorry petrushka, no soup for you: Szostak's number of 1 in 10^12 is severely misleading as to finding a protein that will perform a specific function. as well, Szostak also seems to have allowed "slop" in his experiment with "tethered" mRNA's.,,, Plus, Szostak's work has now been brought into severe question by empirical work which show his proteins lead to "cascading failure: A Man-Made ATP-Binding Protein Evolved Independent of Nature Causes Abnormal Growth in Bacterial Cells Excerpt: "Recent advances in de novo protein evolution have made it possible to create synthetic proteins from unbiased libraries that fold into stable tertiary structures with predefined functions. However, it is not known whether such proteins will be functional when expressed inside living cells or how a host organism would respond to an encounter with a non-biological protein. Here, we examine the physiology and morphology of Escherichia coli cells engineered to express a synthetic ATP-binding protein evolved entirely from non-biological origins. We show that this man-made protein disrupts the normal energetic balance of the cell by altering the levels of intracellular ATP. This disruption cascades into a series of events that ultimately limit reproductive competency by inhibiting cell division." http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0007385 It is also interesting to note: Yet even if Szostak's "optimistic" 1 in 10^12 (trillion) number were true, if you can call 1 in a trillion optimistic, for finding biologically functional proteins in sequence space, it would still be so rare as to present insurmountable mathematical difficulties for any evolutionary scenario. There simply is no vast reservoir of trillions upon trillions of "junk proteins" to be sifted through in nature (proteins don't form "naturally") waiting to accidentally form into a "simple" self replicating molecule. 
Nor is there any vast reservoir of junk proteins, to be found in living organisms, waiting for natural selection to sift through them to find any novel combinations that may be useful. In fact ribosomes are severely intolerant of inexact proteins: The Ribosome: Perfectionist Protein-maker Trashes Errors Excerpt: The enzyme machine that translates a cell's DNA code into the proteins of life is nothing if not an editorial perfectionist...the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products... To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is "shocking" and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis. http://www.sciencedaily.com/releases/2009/01/090107134529.htm So since humans have 80% different proteins than chimps how in the world did this occur with a system so dead set against variance Petrushka? As well, the "protein factory" of the ribosome is far more complicated than first thought: Honors to Researchers Who Probed Atomic Structure of Ribosomes - Robert F. Service Excerpt: "The ribosome’s dance, however, is more like a grand ballet, with dozens of ribosomal proteins and subunits pirouetting with every step while other key biomolecules leap in, carrying other dancers needed to complete the act.” http://www.creationsafaris.com/crev200910.htm#20091015abornagain77
July 1, 2010 at 06:56 PM PST
http://www.nature.com/nature/journal/v410/n6829/full/410715a0.html
Petrushka
July 1, 2010 at 06:18 PM PST
Petrushka, number one, exactly who are you talking to? And number two, where is your referenced citation for your assertion?
bornagain77
July 1, 2010 at 03:40 PM PST
Bantay, Indeed, mere evidence for design does not require identification of a designer. ID proponents are still often asked who their Designer was. Some say that there is no reason to appeal to a supernatural designer. Even Dawkins accepts that life could be designed (by other, non-designed, evolved intelligent beings, that is). Brute force searches for functional proteins, however, would fail even if we had infinite time, simply because there is not enough matter in the universe to store a single bit of information about each possible combination. We would just run out of memory. Consequently, we will never, ever, ever, ever, ever, ever, ever, ever, ever, ever be able to learn about all possible combinations. In the past, when we learned something from nature we could often use it in engineering. I expect protein engineering to become a proper technical discipline of its own in my lifetime. Maybe then, as gpuccio also confirms, we will learn how to find islands of function in this vast space. On the other hand, it is also jolly well possible that once we try to make our own we will find the already existing proteins even more amazing. bornagain, gpuccio, Thanks for your comments.
Alex73
July 1, 2010 at 03:04 PM PST
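Alex73's memory argument above can be made concrete (a sketch; the ~10^80 estimate for atoms in the observable universe is a standard rough figure, not taken from the comment itself):

```python
from math import log10

combinations = 20 ** 300       # possible 300-residue protein sequences
atoms_in_universe = 10 ** 80   # standard rough estimate

# Even storing one bit per atom falls short by an astronomical factor
shortfall = log10(combinations) - log10(atoms_in_universe)
print(f"short of one bit per combination by a factor of ~10^{shortfall:.0f}")  # ~10^310
```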
Does your experimentally based estimate account for the fact that functional proteins can be found in random distributions accessible in ordinary human lifetimes?
Petrushka
July 1, 2010 at 03:00 PM PST
Doug Axe also used a computer example to illustrate the extreme rarity of finding a functional protein: Doug Axe Knows His Work Better Than Steve Matheson Excerpt: Suppose a secretive organization has a large network of computers, each secured with a unique 39-character password composed from the full 94-charater set of ASCII printable characters. Unless serious mistakes have been made, these passwords would be much uglier than any you or I normally use (and much more secure as a result). Try memorizing this: C0$lhJ#9Vu]Clejnv%nr&^n2]B!+9Z:n`JhY:21 Now, if someone were to tell you that these computers can be hacked by the thousands through a trial-and-error process of guessing passwords, you ought to doubt their claim instinctively. But you would need to do some math to become fully confident in your skepticism. Most importantly, you would want to know how many trials a successful hack is expected to require, on average. Regardless of how the trials are performed, the answer ends up being at least half of the total number of password possibilities, which is the staggering figure of 10^77 (written out as 100, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000, 000). Armed with this calculation, you should be very confident in your skepticism, because a 1 in 10^77 chance of success is, for all practical purposes, no chance of success. My experimentally based estimate of the rarity of functional proteins produced that same figure, making these likewise apparently beyond the reach of chance. http://www.evolutionnews.org/2010/06/doug_axe_knows_his_work_better035561.htmlbornagain77
July 1, 2010 at 01:30 PM PST
Alex73, I found this more recent article on the limits of quantum computing: The Limits of Quantum Computers - March 2008 Excerpt: Quantum computers would be exceptionally fast at a few specific tasks, but it appears that for most problems they would outclass today's computers only modestly. This realization may lead to a new fundamental physical principle Key Concepts * Quantum computers would exploit the strange rules of quantum mechanics to process information in ways that are impossible on a standard computer. * They would solve certain specific problems, such as factoring integers, dramatically faster than we know how to solve them with today’s computers, but analysis suggests that for most problems quantum computers would surpass conventional ones only slightly. * Exotic alterations to the known laws of physics would allow construction of computers that could solve large classes of hard problems efficiently. But those alterations seem implausible. In the real world, perhaps the impossibility of efficiently solving these problems should be taken as a basic physical principle. http://www.scientificamerican.com/article.cfm?id=the-limits-of-quantum-computers Thus it seems Alex73 the maximum limit of computing that is achievable by the most "perfect" ideal supercomputer in the physical universe will not be able to surpass the threshold that has already been granted to evolutionists for resources, Namely: Even if you grant the most generous assumptions: that every elementary particle in the observable universe is a chemical laboratory randomly splicing amino acids into proteins every Planck time for the entire history of the universe, there is a vanishingly small probability that even a single functionally folded protein of 150 amino acids would have been created. http://www.fourmilab.ch/documents/reading_list/indices/book_726.htmlbornagain77
July 1, 2010 at 12:09 PM PST
If you believe in common descent (and I do), it seems reasonable that some early common ancestor must have existed.
I think LUCA is considered to be a population of DNA-exchanging organisms rather than an individual. It's not all just-so stories; you can see bacteria exchanging DNA with a light microscope. The question about where and when genes originated is a gaps question, and gaps questions are attacked with research. You can do as your referenced paper does and work backward from existing proteins, or you can work forward from synthesized replicators, or you can do both. We are used to seeing difficult problems in science solved quickly -- in a few decades at most -- and we tend to forget that problems like gravity have remained incompletely solved for centuries. The argument from gaps erodes over time, although the rate of progress is variable.
Petrushka
July 1, 2010 at 10:35 AM PST
Bantay: Your remark is obviously right: we can think of a transcendent designer, or of an omniscient designer, or of an omnipotent designer. But we can also think of an immanent designer, who acts with power, but also with context-dependent limitations. As ID, in its current status, cannot give us detailed inferences about the nature of the designer, it is important to explore all possibilities, at least as potential models. And I do believe that an immanent, non-omniscient designer could do it, could build up biological information. That's an important point, whatever the final nature of the designer will be found to be. It's a point about design, and about intelligent conscious beings. And it's a point which must be discussed.
gpuccio
July 1, 2010 at 10:15 AM PST
Alex @2 You asked "Can an intelligent agent, confined fully within this known universe, design life (exactly as we know it) from scratch?" According to Dawkins, intelligent aliens can do it. But more importantly, why the limiter of "confined fully within this known universe"? Correct me if I'm wrong, but evidence of design is not contingent upon it being only from within this universe.
Bantay
July 1, 2010 at 10:01 AM PST
Petrushka: LUCA is not my personal invention: it's a very widespread assumption of the main Darwinist theory. Even if there was exchange of genes in the early history of life, where did those genes come from? If you believe in common descent (and I do), it seems reasonable that some early common ancestor must have existed. And the criteria to ascribe to that ancestor a basic protein repertoire, while certainly indirect, seem reasonable enough. And that basic repertoire, according not to me or to ID, but to current Darwinist science, seems to have been rather rich and complex.
gpuccio
July 1, 2010 at 09:34 AM PST
Petrushka, though I don't want to waste time diverging from the main topic of protein evolution, or lack thereof, I want to take one post to state I would hardly describe the method by which bacteria transfer DNA as "promiscuous". In particular, one method of DNA transfer between bacteria gives clear indication of being an intelligently designed method of communication between bacterial cells: Transduction (genetics) http://en.wikipedia.org/wiki/Transduction_(genetics) The Bacteriophage Virus - Assembly Of A Molecular "Lunar Landing" Machine - Intelligent Design - video http://www.metacafe.com/watch/4023122/ As well, need I remind you of Dr. Cano's studies of ancient bacteria, which show extreme conservation of molecular sequences?
bornagain77
July 1, 2010 at 09:14 AM PST
That’s just my idea, but I don’t believe that anyway the gap between OOL (whatever it was) and LUCA can be very great, even for those who believe in some model of gradual OOL: let’s say 100 – 200 My?
Considering that bacteria exchange DNA rather promiscuously, I'm not sure the concept of a single common ancestor of single-celled organisms makes much sense.
Petrushka
July 1, 2010 at 08:55 AM PST
gpuccio you said: "The obvious question is: how was the functional information present at the “big bang” achieved, especially if it was so complex and compact at that time?" That is indeed a very important question, as Dr. Meyers has given no small headache to evolutionists about, but I still find the assumptions in their model to be problematic and I feel fairly confident, from the few studies I've seen so far, that even their "modest" assumptions for "functional spaces as related to the general search space" will be found to be way to optimistic as to what the "real world" will allow. A couple of videos that might be of interest to some: Life On Earth Its earliest evidence - video http://science.discovery.com/videos/the-planets-life-earliest-evidence.html U-rich Archaean sea-floor sediments from Greenland - indications of >3700 Ma oxygenic photosynthesis http://adsabs.harvard.edu/abs/2004E&PSL.217..237R Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Gregory S. Engel, Nature (12 April 2007) Photosynthetic complexes are exquisitely tuned to capture solar light efficiently, and then transmit the excitation energy to reaction centres, where long term energy storage is initiated.,,,, This wavelike characteristic of the energy transfer within the photosynthetic complex can explain its extreme efficiency, in that it allows the complexes to sample vast areas of phase space to find the most efficient path. ---- Conclusion? Obviously Photosynthesis is a brilliant piece of design by "Someone" who even knows how quantum mechanics works. http://www.ncbi.nlm.nih.gov/pubmed/17429397 Dr. Hugh Ross - Origin Of Life Paradox - video http://www.metacafe.com/watch/4012696 Life - Its Sudden Origin and Extreme Complexity - Dr. Fazale Rana - video http://www.metacafe.com/watch/4287513 Biological Big Bangs - Origin Of Life and Cambrian - Dr. 
Fazale Rana - video http://www.metacafe.com/watch/4284466 The Biological Big Bang model for the major transitions in evolution - Eugene V Koonin - Background: "Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin's original proposal, remains the dominant description of biological evolution. The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable; http://www.biology-direct.com/content/2/1/21bornagain77
July 1, 2010 at 08:47 AM PST
Alex73: Your points are interesting. I still believe we know too little to make that kind of assessment, although I am very confident that even the complexities in biological information are certainly accessible to intelligent speculation. A starting point could be to wait and see if protein engineers fare better in building functional proteins "from scratch". Up to now, the results are not encouraging, but as I said, we have just begun.
gpuccio
July 1, 2010 at 08:32 AM PST
bornagain, gpuccio, Thanks for coming back. My hunch is -- like bornagain's -- that the required computational capacity goes beyond the available resources. I also think that the margin is enormous, i.e. not even a single, primitive bacterium could be designed from scratch. Now if the margin is indeed so large, then perhaps it will be possible to identify a well-defined subsystem where the estimates can be performed. Also, the realistically available resources for the project are just a small portion of the mass and energy of the universe; after all, we see stars and galaxies around us and not the interiors of a mighty research lab. Anyway, I will keep on thinking about a possible way to attack the problem. I think the issue is important, because the Darwinist camp keeps on pestering the ID folks for more details about the Designer. Most of the ID studies I am aware of focus on showing 1. the existence of design 2. the utter inability of unintelligent processes to generate significant amounts of information, i.e. the main thrust was to disprove Darwinism. Now paving the way to calculate the resources necessary for an intelligent design procedure could be a unique ID-related research subject with certainly serious philosophical consequences. Dr Dembski's studies into "search for an optimum search" definitely seem to go in this way also.
Alex73
July 1, 2010 at 08:26 AM PST
BA: No, I don't think that's the point. I am still reading the paper, so I could be wrong on some points, but I believe that it explores, through a computational model, how proteins which were very similar (or identical) at the beginning can become different in time at the primary sequence level, while retaining their 3D structure and function, through random mutations and negative selection. IOW, the protein family starts from a small point (at the "big bang" in protein space), and then, throughout evolution, it "explores" its functional space while remaining essentially the same at the functional level, and changing its primary structure as far as functional constraints allow. That has nothing to do with finding a new functional space and function. It is instead a way to reason about functional spaces as related to the general search space. For instance, an interesting point is that according to the authors the exploration of the functional space is slow and constant, due to functional constraints. That's very interesting, considering that those protein clusters were already functionally defined and separated in LUCA, and they have continued to diverge at the level of primary sequence. The obvious question is: how was the functional information present at the "big bang" achieved, especially if it was so complex and compact at that time?
gpuccio
July 1, 2010 at 08:12 AM PST
Petrushka (#11): I have no special reason to put LUCA or OOL at any special place in the timeline. I just accept what is usually considered the best assumption. Wikipedia: "The LUA is estimated to have lived some 3.5 to 3.8 billion years ago (sometime in the Paleoarchean era)". About OOL, I have no reason to believe that it was much earlier than that. Indeed, I think OOL was probably rather sudden and not gradual. So, it could just start with LUCA or something similar in a relatively short time. That's just my idea, but I don't believe that anyway the gap between OOL (whatever it was) and LUCA can be very great, even for those who believe in some model of gradual OOL: let's say 100 - 200 My?
gpuccio
July 1, 2010 at 08:01 AM PST
Sorry gpuccio, I meant the paper you referenced, but if you do sponsor, where do I send my application 8) When you say "it is applied (I can't really say how reliably) to real biological data," do they not merely find similar sequences of AAs in proteins, with the gargantuan assumption that you can get from point A to point B in the similar sequences by evolutionary processes, even though no one has demonstrated, in the real world, that it is possible to do as much for even a single protein, as Dr. Behe pointed out in his review of Thornton's work?
bornagain77
July 1, 2010 at 07:53 AM PST
BA: "As well gpuccio, I've seen this claim for an extraordinarily high percent of 'parent' proteins required to be present at the OOL, but is not this number of proteins itself derived from the rather dubious fact that evolutionists could not account for the origination of certain proteins at certain levels of life and thus they 'pushed them all back' to the 'former age of miracles' at the OOL?" No, I don't think so. I think it is derived from the distribution of those proteins we observe in current living beings, and from the analysis of homologies between sequences, 3D structures and function.
gpuccio
July 1, 2010 at 07:48 AM PST
BA: I am not sponsoring the study, just bringing it to our attention for discussion. And anyway, this computational model is not a mere simulation; it is applied (I can't really say how reliably) to real biological data.
gpuccio
July 1, 2010 at 07:43 AM PST
Don't know how relevant this is, but the first multicellular life seems to have been pushed back a billion years or so. It may have some bearing on the duration of the Cambrian explosion. http://news.yahoo.com/s/afp/20100630/sc_afp/sciencebiologyevolutionlife_20100630232304
Petrushka
July 1, 2010 at 07:38 AM PST
gpuccio, I noticed this in the abstract of your paper: "We formulate a computational approach to study the rate of divergence of distant protein sequences and measure this rate for ancient proteins, those that were present in the last universal common ancestor. We show that ancient proteins are still diverging from each other, indicating an ongoing expansion of the protein sequence universe."

Not to belittle computer models too much, but as Gil has pointed out repeatedly, computer models are only as good as their mapping to the real world, and the potential to be severely led astray by a model that neglects what is actually possible in the real world is great. i.e. The actual restriction on proteins diverging may be far greater than they have assumed in the parameters of their model. As Dr. Behe points out, the proteins actually found in life, the ones "for evolution to actually work with", are shown to be highly constrained in their ability to evolve into other proteins:

Dollo’s law, the symmetry of time, and the edge of evolution - Michael Behe - Oct 2009 Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,, A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses. http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html

Severe Limits to Darwinian Evolution: - Michael Behe - Oct. 2009 Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,, Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed. http://www.evolutionnews.org/2009/10/severe_limits_to_darwinian_evo.html#more

As well, I have strong reason to believe that functional sequences for proteins are far more rare than even Dr. Axe's figure of 1 in 10^77 would indicate: proteins have now been shown to have a "cruise control" mechanism, which works to "self-correct" the integrity of the protein structure against any random mutations imposed on them.

Proteins with cruise control provide new perspective: "A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order." http://www.princeton.edu/main/news/archive/S22/60/95O56/

Cruise control? The equations of calculus involved in achieving even a simple process control loop, such as a dynamic cruise control loop, are very complex. In fact it seems readily apparent to me that highly advanced algorithmic information must reside in each individual amino acid used in a protein in order to achieve such control. This gives us clear evidence that far more functional information resides in proteins than meets the eye. For a sample of the equations that must be dealt with to "engineer" even a simple process control loop like cruise control for a single protein, please see the following:

PID controller: A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal. http://en.wikipedia.org/wiki/PID_controller

It is in realizing the staggering level of engineering that must be dealt with to achieve “cruise control” for each individual protein that it becomes apparent even Axe’s 1 in 10^77 estimate for finding specific functional proteins within sequence space may be far too generous, since the individual amino acids themselves are clearly embedded with highly advanced mathematical language in their structures. This adds an additional severe constraint, on top of the 1 in 10^77 constraint, on finding exactly which of the precise sequences of amino acids in sequence space will perform a specific function.

Though the authors of the paper tried to put an evolution-friendly spin on the "cruise control" evidence, finding an advanced "process control loop" at such a basic molecular level, before natural selection even has a chance to select for any morphological novelty, is very much to be expected as an Intelligent Design/Genetic Entropy feature, and is in fact a very constraining thing for the amount of variation we can expect from proteins in the first place.

As well, gpuccio, I've seen this claim of an extraordinarily high percentage of "parent" proteins required to be present at the OOL, but is not this number of proteins itself derived from the rather dubious fact that evolutionists could not account for the origination of certain proteins at certain levels of life, and thus "pushed them all back" to the "former age of miracles" at the OOL?bornagain77
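For readers unfamiliar with the control law quoted above, here is a minimal sketch of a discrete PID loop driving a toy "cruise control" plant. This is a generic textbook illustration, not code from the Princeton study or from any protein model; the gains and the one-line plant update are purely illustrative assumptions.

```python
class PID:
    """Textbook discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy plant: speed responds proportionally to the controller output each step.
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=60.0)
speed = 40.0
for _ in range(200):
    throttle = pid.update(speed, dt=0.1)
    speed += 0.1 * throttle  # crude first-order vehicle model

# After 200 steps the speed has settled near the 60.0 setpoint.
```

Even this toy version shows the three interacting terms (proportional, integral, derivative) that the Wikipedia article describes; the point of the comment above is that any molecular analogue of such a feedback loop would carry comparable informational overhead.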
July 1, 2010, 07:28 AM PST
"The important point IMO is that many proteins were already present in LUCA (or very early)..."
Just curious. Where do you place LUCA on a timeline, and where do you place OOL?Petrushka
July 1, 2010, 07:28 AM PST
I guess my question in response to Dat's question would be: are complex organisms just amalgamations of single-celled organisms?Phaedros
July 1, 2010, 07:24 AM PST
One comment to the above could be that Durston's method, using the Shannon entropy of the individual AA sites in functional protein clusters, probably remains the best way of empirically measuring the explored functional space of a protein cluster.gpuccio
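The core quantity in the method mentioned above, the Shannon entropy of each aligned amino-acid site, is easy to sketch. Durston's full functional sequence complexity metric involves more steps (a comparison against a null state), so this is only the per-site entropy step; the toy four-sequence alignment is an illustrative assumption.

```python
from collections import Counter
from math import log2


def site_entropy(column):
    """Shannon entropy (in bits) of one aligned column of amino acids."""
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in counts.values())


def per_site_entropies(alignment):
    """Entropy at each site of an ungapped multiple sequence alignment."""
    return [site_entropy(col) for col in zip(*alignment)]


# Toy alignment: sites 1-3 are fully conserved (0 bits of entropy),
# sites 4-5 are split 2/2 between two residues (1 bit of entropy each).
alignment = ["ACDEF", "ACDEG", "ACDKG", "ACDKF"]
entropies = per_site_entropies(alignment)
```

Low entropy at a site means the functional cluster tolerates few substitutions there, which is how the method maps conservation onto explored functional space.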
July 1, 2010, 07:22 AM PST
To all: I have found this very interesting recent paper: "Sequence space and the ongoing expansion of the protein universe", Inna S. Povolotskaya & Fyodor A. Kondrashov, Nature, Vol 465, 17 June 2010, doi:10.1038/nature09105.

The abstract: "The need to maintain the structural and functional integrity of an evolving protein severely restricts the repertoire of acceptable amino-acid substitutions [1-4]. However, it is not known whether these restrictions impose a global limit on how far homologous protein sequences can diverge from each other. Here we explore the limits of protein evolution using sequence divergence data. We formulate a computational approach to study the rate of divergence of distant protein sequences and measure this rate for ancient proteins, those that were present in the last universal common ancestor. We show that ancient proteins are still diverging from each other, indicating an ongoing expansion of the protein sequence universe. The slow rate of this divergence is imposed by the sparseness of functional protein sequences in sequence space and the ruggedness of the protein fitness landscape: ~98 per cent of sites cannot accept an amino-acid substitution at any given moment but a vast majority of all sites may eventually be permitted to evolve when other, compensatory, changes occur. Thus, ~3.5 × 10^9 yr has not been enough to reach the limit of divergent evolution of proteins, and for most proteins the limit of sequence similarity imposed by common function may not exceed that of random sequences."

The paper is freely available online, and I think it can certainly contribute to the present discussion. The important point IMO is that many proteins were already present in LUCA (or very early), and they have diverged in sequence while maintaining their function.
Another important point is the following: "As a protein evolves along a rugged fitness ridge, some previously forbidden amino-acid substitutions at a site become acceptable and some previously acceptable substitutions become forbidden, owing to compensatory substitutions at other sites of the same protein or its interaction partners." The role of compensatory substitutions is very important, and it explains how some proteins, for instance some myoglobins, can be very different at the level of primary sequence, and yet retain the same 3D structure and function.gpuccio
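The paper's central comparison, observed sequence divergence versus the identity expected between totally unrelated sequences, can be sketched numerically. The helper names and the uniform amino-acid composition below are my own illustrative assumptions, not the authors' code:

```python
def fraction_identical(seq1, seq2):
    """Fraction of identical positions in two equal-length aligned sequences."""
    assert len(seq1) == len(seq2)
    return sum(a == b for a, b in zip(seq1, seq2)) / len(seq1)


def random_identity(freqs):
    """Expected identity of two unrelated sequences with these residue frequencies.

    Two random residues match with probability sum(f_i^2); for a uniform
    20-letter alphabet this is 1/20 = 5%, the "random sequence" floor that,
    per the abstract, most diverging homologs have not yet reached.
    """
    return sum(f * f for f in freqs)


uniform = [1 / 20] * 20

# Homologs still sharing, say, 80% identity on a short aligned stretch sit far
# above the ~5% random floor: divergence is ongoing, not yet at its limit.
observed = fraction_identical("ACDEFGHIKL", "ACDQFGHIKV")
floor = random_identity(uniform)
```

The "compensatory substitutions" point then explains *why* the walk toward that floor is so slow: most sites are frozen at any moment and only thaw when other sites change first.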
July 1, 2010, 06:55 AM PST
Alex73: "can we give an estimate for the quantity of required 1-bit (yes/no) decisions that will produce the functional information around us?"

No, we can't, because that depends critically on many variables we don't know:

a) How much the agent knows in advance (you hypothesized that "the agent knows all physical and chemical laws"; that's a vague statement, and indeed we don't know what the agent knows in advance and what he has to discover).

b) What kind of "search" the agent implements from time to time, and how much of it is algorithmic and how much is random search.

c) How efficient the agent is in deriving new knowledge from the results of each search.

That is only an example. Many other points could be added, some of them more philosophical, some more technical. I would just comment that it is true, we humans have been up to now particularly inefficient in the field of new biological design, but please bear in mind two important things:

1) We are just beginning, and can improve.

2) The original designer of biological information is/was probably much better than we are at it. IOWs, it is certainly not only, or mainly, a question of computing power.

Moreover, just reflecting on the formal problem of "how much did the designer know in advance?", I would like to add a further comment. From what we know, and the facts we have, I am convinced that the greatest "leap" in information content in biological history was OOL. Indeed, I don't believe there is any objective data to hypothesize that OOL was a gradual process. I am truly convinced that it was rather sudden (whether that means a few million years or a few minutes, it's really difficult to hypothesize). That means, with reference to protein domains, that about half of known protein domains were already implemented at OOL, that is, rather suddenly. A similar leap can be observed at the origin of metazoan body plans.

The Ediacara and Cambrian explosions are not very compatible with a gradual search, be it a random or an intelligent one. On the contrary, other transitions are certainly more gradual, and could well speak for an intelligent gradual search. So, your questions are very interesting, but I doubt that at present they can be answered. But they are questions which, in principle, can and should be approached by scientific research. What we need are:

a) more facts

b) better reasoning on factsgpuccio
July 1, 2010, 06:12 AM PST
also of note: Possibilities and Limitations of Quantum Computing Excerpt: Together with co-authors, particularly Harry Buhrman, de Wolf proved various strong limitations on quantum computers. For most problems they are not significantly faster than classical computers. These limitations were proved by reducing complexity-theoretic questions to algebraic questions about degrees of multivariate polynomials. Sufficient proof of strong, often optimal, lower bounds on the time a quantum algorithm needs to compute a Boolean function can be given by proving a lower bound on the degree of an n-variate polynomial approximating that function. Moreover, de Wolf also contributed to the discovery of some quantum algorithms and protocols that outperform their classical counterparts. One example of this is a 'quantum fingerprinting' scheme. It allows two separated parties to compare large chunks of data more efficiently than classical computers. By assigning small quantum states to long classical strings, the amount of data that has to be exchanged for this operation can be exponentially reduced. In the future this technique could for example be used to create digital autographs. http://www.ercim.eu/publication/Ercim_News/enw56/de_wolf.htmlbornagain77
July 1, 2010, 05:00 AM PST
