Uncommon Descent Serving The Intelligent Design Community

Mike Behe: A Blind Man Carrying a Legless Man Can Safely Cross the Street


(11 January 2012)

The work of Finnegan et al. (2012) strikes me as quite thorough and elegant. I have no reason to doubt that events could have unfolded that way. However, the implications of the work for unguided evolution appear very different to me than they've been spun in media reports. ( http://tinyurl.com/7lawgpl ) The most glaringly obvious point is that, like the results of Lenski's work, this is evolution by degradation. All of the functional parts of the system were already in place before random mutation began to degrade them. Thus it is of no help to Darwinists, who require a mechanism that will construct new, functional systems. What's more, unlike Lenski's results, the mutated system of Thornton and colleagues is not even advantageous; it is neutral, according to the authors. Perhaps sensing the disappointment for Darwinism in the results, the title of the paper and news reports emphasize that the "complexity" of the system has increased. But increased complexity by itself is no help to life; rather, life requires functional complexity. One can say, if one wishes, that a congenitally blind man teaming up with a congenitally legless man to safely move around the environment is an increase in "complexity" over a sighted, ambulatory person. But it certainly is no improvement, nor does it give the slightest clue how vision and locomotion arose.

More.


Comments
"If you copy the bicycle first, so that you end up with a bicycle plus a doorstop, then yes. A little bit more."

Alright, so take a copy and crush it into a doorstop. That's an information gain, according to you. That's the standard you're going for here. So, a follow-up question: would a copy of a bicycle that gets crushed into a doorstop be an example of degradation?

nullasalus
January 16, 2012, 08:13 PM PDT
If I take a bicycle, then crush it such that the resulting lump of metal/plastic could be used as a doorstop, would this be an example of the bicycle being ‘modified such that it has a different specificity or function’? If so, should this be regarded as an increase in information?
If you copy the bicycle first, so that you end up with a bicycle plus a doorstop, then yes. A little bit more. What if you copy a program that plots a bell curve, and the copy has a single change in a number that makes it plot a different curve? Less information or more? Of course, with genes, evolution is doing nothing so crude as crushing the sequences into a ball. Most/all of the sequence is retained in each duplicate. This has led, for example, to dozens and dozens of copies of MHC genes (the more genes, the more pathogens you can recognize), smell receptors, the red, green, and blue cone receptor genes in the human eye, etc. Wouldn't these be increases in information? (Maybe you think the IDer made all those copies, I don't care, for now I just want people to admit that such things would represent increases in information.)

NickMatzke_UD
January 16, 2012, 08:01 PM PDT
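The question being traded back and forth here (does a duplicated-then-modified gene add information?) can at least be made concrete under one crude, operational measure. The sketch below uses zlib compressed size as a stand-in for "information content"; that choice is an illustrative assumption of this sketch, not a definition any commenter in the thread has endorsed. Under it, an exact duplicate adds almost nothing, while a duplicate that diverges by point mutations adds more:

```python
import random
import zlib

random.seed(0)

def compressed_size(s: str) -> int:
    # zlib output length as a crude stand-in for "information content"
    return len(zlib.compress(s.encode()))

# A random 1000-base "gene"
gene = "".join(random.choice("ACGT") for _ in range(1000))

# Duplicate the gene, then sprinkle one copy with 50 random
# substitutions (some draws may leave the base unchanged)
copy = list(gene)
for _ in range(50):
    i = random.randrange(len(copy))
    copy[i] = random.choice("ACGT")
genome_dup = gene + "".join(copy)

print(compressed_size(gene))         # the gene alone
print(compressed_size(gene + gene))  # exact duplicate: barely larger
print(compressed_size(genome_dup))   # diverged duplicate: larger still
```

On this measure the diverged duplicate carries more information than the exact copy, which is the intuition behind Matzke's question; whether this is the *right* measure of information is exactly what the thread disputes.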
BiPed, I read through your very long essay, and saw neither (a) an actual concise definition of information, nor (b) one that would answer my very simple, obvious, basic question, so I asked my question. I do so again.

NickMatzke_UD
January 16, 2012, 07:53 PM PDT
Indeed? What kind of dubious assertions and assumptions am I making? If only ~25% of the amino acid repertoire can be used among the flagellar rod proteins, then ~20% sequence identity is expected from chance alone. Couple this with the fact that we’ve also got natural selection in the equation, and convergence at the molecular level becomes a bit plausible. You seem to be under the impression that I am repudiating BLAST et al., but I am not. E.g., E-values don’t take into account the role of natural selection; they indicate if a hit is significant, not if a hit is the result of common ancestry.
A few points:

1. Nothing is ever as simple as "only 25% of the amino acids" can be used at a site. Rather, different amino acids occur at different frequencies. These frequencies are what go into the calculation of scores like E-values, or likelihoods of sequence data under different phylogenies and substitution models. Thus, to judge the relative probability of different explanations of the observed sequence data, you really really really need to do it in an explicitly statistical way, not just eyeball it and cherry-pick some bit of the data you think supports your position.

2. The variation you observe at a particular site in whatever collection of axial proteins you are looking at is likely less than the variation you would observe if you observed all available homologous sequences in the databases, which is very likely less than the variation you would observe if you could observe all currently-living variants of the sequence, which is very very likely less than the variation you would observe if you could observe all variants of the sequence that existed at any time in history, which is certainly much much less than the variation you would observe if you counted all of the possible ways of building axial filament structures with the 20 amino acids, given what we know about the fact that entirely different amino acid sequences can produce basically identical functions, binding sites, etc. Plus, we know that entirely dissimilar, nonhomologous proteins *are* observed to produce axial filaments (in archaeal flagella) and in numerous other pilus structures. Your observed limitation on amino acid variation, even if you could be sure it accurately described the True Complete limits on amino acid variation in the early stages I described above, wouldn't help at all with the later stages.

3. And it doesn't matter anyway, because even if the most extreme statement is taken as true, and, say, 20% amino acid similarity at particular sites is expected at random due to limitations in allowed amino acid positions (this is already a huge amount of convergence), well then, you multiply by 10 or 20 or 30 sites and your chance of independently getting detailed similarity across the alignment (which is what BLAST looks at) converges to 0 very quickly. DNA, after all, has only 4 allowed bases at each site, and "random" similarity is therefore something like 25% (depending on G+C content), and yet BLAST and homology searches work perfectly fine anyway.

4. You could, if you wanted, build a statistical model wherein you asserted that the limited variation you observe at each site really is all the variation that could ever be tolerated at each site (a "high-convergence" model). Then you could compare the chances of getting the observed similarity independently, vs. the chance of getting it through copying up the inferred phylogenetic tree. A standard statistic like AIC could be used to make the judgment. If you can statistically significantly beat the copying-up-the-phylogenetic-tree model, well then, go ahead and publish. But without that kind of evidence, you won't get anywhere at all with anyone who knows bioinformatics. This is science, after all, and GenBank and NCBI and BLAST and the rest weren't built by a few schmucks who thought this stuff up yesterday; these methods have been pretty extensively tested in thousands of publications.

Re: this:
On the contrary, it predicts that the bulk of the residues shared between Salmonella FlgK and Buchnera FlgE would be the result of common descent, rather than convergence. But if this was true, then we would expect these 37 identical positions shared between Salmonella FlgK and Buchnera FlgE to also be found in Salmonella FlgE. But we don’t. This can be explained away by rapid sequence divergence on the part of Salmonella FlgE, but this explanation is a bit ad hoc.
I would need to take a detailed look at the sequences, alignment length, phylogeny, related sequences, etc., to get a sense of whether or not this is really surprising. But one thing to watch out for is that, with the huge mass of sequencing data going into the databases these days, sequences are often mis-named, either through algorithm or human error. Another thing to watch out for is selective sampling of the best BLAST hit. If one BLASTs protein A, the database will return a bunch of homologous copies of A, and then start returning sequences from protein B, the sister group. However, out of, say, 1000 available B sequences, a few of them will be much closer to A than the average B sequence, because percentage similarity between sister groups will fall into a bell curve, and some sequence *has* to be on the end of the bell curve closer to A. Again (and I will say it again and again), the way to sort such things out is with a phylogeny of all the sequences, or a fair sample of the sequences. Even if massive convergence actually happened in real life, you would probably need to calculate a phylogeny (and then observe massive weirdness) in order to be sure you were seeing it.

NickMatzke_UD
January 16, 2012, 07:46 PM PDT
You are making all kinds of dubious assumptions and assertions here. You need to learn something about the statistics of homology searches, the high statistical improbability of a sequence match *even* in the case of “strong” constraints on the sequence, etc. And then get it peer-reviewed and published. Literally hundreds of thousands of scientists and entire fields of research, all the genomics databases, etc., make use of homology and the statistical tools I have cited (like BLAST). It’s not just little old me making the assertion that convergence is not a likely explanation of sequence similarity.
Indeed? What kind of dubious assertions and assumptions am I making? If only ~25% of the amino acid repertoire can be used among the flagellar rod proteins, then ~20% sequence identity is expected from chance alone. Couple this with the fact that we've also got natural selection in the equation, and convergence at the molecular level becomes a bit plausible. You seem to be under the impression that I am repudiating BLAST et al., but I am not. E.g., E-values don't take into account the role of natural selection; they indicate if a hit is significant, not if a hit is the result of common ancestry.

Genomicus
January 16, 2012, 06:11 PM PDT
Homology of sequence is due to common ancestry, so what the sequences do afterwards (diverge, or converge) doesn’t affect it.
Quite true, but if the sequence similarity shared among these flagellar rod proteins is the result of gene duplication (i.e., common ancestry), we wouldn't predict that fully half of the residues shared between Salmonella FlgK and Buchnera FlgE are not shared between Salmonella FlgK and Salmonella FlgE. But this fits neatly within the hypothesis that convergent evolution has played a major role in generating the sequence similarity shared among these rod proteins. We would expect stuff like that if this sequence similarity is indeed the result of convergent evolution, but it isn't expected if this sequence similarity is essentially the result of common ancestry.

Genomicus
January 16, 2012, 06:07 PM PDT
I like your staying on target, UB. I have a question of my own about the question.

"if a gene is duplicated, and one copy gets modified such that it has a different specificity or function, has the amount of information in the genome increased?"

If I take a bicycle, then crush it such that the resulting lump of metal/plastic could be used as a doorstop, would this be an example of the bicycle being 'modified such that it has a different specificity or function'? If so, should this be regarded as an increase in information?

nullasalus
January 16, 2012, 06:00 PM PDT
Upright, he's referring to this question, as you well know:

"if a gene is duplicated, and one copy gets modified such that it has a different specificity or function, has the amount of information in the genome increased?"

What is your answer?

champignon
January 16, 2012, 05:47 PM PDT
Champignon, if Nick Matzke had a substantive refutation of the observations made, he would have voiced it. I have no particular desire to entertain a conversation otherwise.

Upright BiPed
January 16, 2012, 05:46 PM PDT
Nick, are you referring to where I asked you to substantiate your comments upthread by taking a quick look at a page, and you returned to say absolutely nothing whatsoever about the content? Are you referring to where you immediately changed the subject? If that was indeed what you meant, then I agree with you: "wow".

Upright BiPed
January 16, 2012, 05:27 PM PDT
I too am interested in hearing Upright BiPed's answer to that question. How about it, Upright?

champignon
January 16, 2012, 04:33 PM PDT
Wow. Way to totally not answer one very simple, not-hard, not-trick question.

NickMatzke_UD
January 16, 2012, 04:19 PM PDT
Moreover, perhaps you would care to list the studies that Dr. Behe so carelessly overlooked, so that we can, once and for all, see the almighty power of your beloved theory of evolution do anything whatsoever besides break things:

"The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010
Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades. ... The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function. Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent. ... I dub it "The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain. (That is, a net 'fitness gain' within a 'stressed' environment; i.e., remove the stress from the environment and the parent strain is always more 'fit'.)
http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/

bornagain77
January 16, 2012, 03:04 PM PDT
And your problem still remains that you were deliberately misleading to begin with, and are still deliberately misleading with duplication events or horizontal gene transfer, since, number one, these events are 'non-random', and, number two, they clearly do not address the elephant-in-the-living-room problem of the generation of functional sequence complexity, not to mention that they do not address the higher levels of information above functional sequence complexity that are completely ignored in the reductive materialistic framework of neo-Darwinism. Notes:

Stephen Meyer - Functional Proteins And Information For Body Plans - video
http://www.metacafe.com/watch/4050681

Dr. Stephen Meyer comments at the end of the preceding video:

'Now one more problem as far as the generation of information. It turns out that you don't only need information to build genes and proteins, it turns out to build Body-Plans you need higher levels of information; higher order assembly instructions. DNA codes for the building of proteins, but proteins must be arranged into distinctive circuitry to form distinctive cell types. Cell types have to be arranged into tissues. Tissues have to be arranged into organs. Organs and tissues must be specifically arranged to generate whole new Body-Plans, distinctive arrangements of those body parts. We now know that DNA alone is not responsible for those higher orders of organization. DNA codes for proteins, but by itself it does not ensure that proteins, cell types, tissues, organs, will all be arranged in the body. And what that means is that the Body-Plan morphogenesis, as it is called, depends upon information that is not encoded on DNA. Which means you can mutate DNA indefinitely. 80 million years, 100 million years, til the cows come home. It doesn't matter, because in the best case you are just going to find a new protein some place out there in that vast combinatorial sequence space. You are not, by mutating DNA alone, going to generate higher order structures that are necessary to building a body plan. So what we can conclude from that is that the neo-Darwinian mechanism is grossly inadequate to explain the origin of information necessary to build new genes and proteins, and it is also grossly inadequate to explain the origination of novel biological form.' - Stephen Meyer (excerpt taken from Meyer/Sternberg vs. Shermer/Prothero debate - 2009)

Epigenetics and the "Piano" Metaphor - January 2012
Excerpt: And this is only the construction of proteins we're talking about. It leaves out of the picture entirely the higher-level components -- tissues, organs, the whole body plan that draws all the lower-level stuff together into a coherent, functioning form. What we should really be talking about is not a lone piano but a vast orchestra under the directing guidance of an unknown conductor fulfilling an artistic vision, organizing and transcending the music of the assembly of individual players.
http://www.evolutionnews.org/2012/01/epigenetics_and054731.html

Revisiting the Central Dogma in the 21st Century - James A. Shapiro - 2009
Excerpt (Page 12): Underlying the central dogma and conventional views of genome evolution was the idea that the genome is a stable structure that changes rarely and accidentally by chemical fluctuations (106) or replication errors. This view has had to change with the realization that maintenance of genome stability is an active cellular function and the discovery of numerous dedicated biochemical systems for restructuring DNA molecules. (107-110) Genetic change is almost always the result of cellular action on the genome. These natural processes are analogous to human genetic engineering. (Page 14) Genome change arises as a consequence of natural genetic engineering, not from accidents. Replication errors and DNA damage are subject to cell surveillance and correction. When DNA damage correction does produce novel genetic structures, natural genetic engineering functions, such as mutator polymerases and nonhomologous end-joining complexes, are involved. Realizing that DNA change is a biochemical process means that it is subject to regulation like other cellular activities. Thus, we expect to see genome change occurring in response to different stimuli (Table 1) and operating nonrandomly throughout the genome, guided by various types of intermolecular contacts (Table 1 of Ref. 112).
http://shapiro.bsd.uchicago.edu/Shapiro2009.AnnNYAcadSciMS.RevisitingCentral%20Dogma.pdf

Modern Synthesis of Neo-Darwinism (Genetic Reductionism) Is Dead - Paul Nelson - video
http://www.metacafe.com/watch/5548184/

Falsification Of Neo-Darwinism by Quantum Entanglement/Information
https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US

bornagain77
January 16, 2012, 02:57 PM PDT
Behe excluded any studies that included more powerful gain-of-function possibilities, such as duplication events or horizontal gene transfer. If you limit your study to short-term point mutation experiments, you will get point mutation results.

Petrushka
January 16, 2012, 01:43 PM PDT
Should read: "appropriate response to an artificially induced defect."

bornagain77
January 16, 2012, 01:15 PM PDT
And do you realize that those supposed 'gains of function' in Behe's paper were preceded by deletion events, or was that little fact just an 'inconvenient truth' you forgot to mention in your attempt to deceive?

From Behe's table:
- 4-nucleotide deletion in lysis gene of MS2; reading frame restored by deletions, insertions (classed G,G) - Licis and van Duin (2006)
- Viruses manipulated to be defective: deletion of 19 intercistronic nucleotides from RNA virus MS2, containing the Shine-Dalgarno sequence and two hairpins; one revertant deleted 6 nucleotides, another duplicated an adjoining 14-nucleotide sequence; missing functional coded elements substantially restored (classed G,G) - Olsthoorn and van Duin (1996)
- Viruses forced to be interdependent: separate viruses, f1 and IKe, engineered to carry distinct antibiotic resistance markers; in media containing both antibiotics, phages co-packaged into f1 protein coats; two-thirds of IKe genome deleted, second antibiotic gene captured by f1 (classed L,M; G) - Sachs and Bull (2005)
http://www.lehigh.edu/~inbios/pdf/Behe/QRB_paper.pdf

Moreover, far from you proving that these trivial 'return of function' mutations (which is a far more appropriate term than 'gain of function') were random, the mutations are in fact just more stunning proof of the sophisticated programming in the cell, which calculated an appropriate response to an artificially induced environmental stress.

bornagain77
January 16, 2012, 12:02 PM PDT
You realize, of course, that Behe's review article lists at least three instances of gain of function.

Petrushka
January 16, 2012, 10:07 AM PDT
Nick: "Again, you've got a severe problem with words like 'complicated' if one gene is considered more complex than several genes."

No, I don't think there is a problem. If one gene accomplished what it now takes several copies of a gene to accomplish, then it seems that the first gene was at least as "complicated" as the several copies of the second gene are now. What (non-intelligently guided) theories of evolution need to demonstrate is the ability of genes to gain function, not lose function, which is all that Thornton's group has demonstrated.

Bilbo I
January 16, 2012, 09:14 AM PDT
I'm not sure if I understand you exactly. Homology is not due to sequence similarity; rather, sequence similarity is often a result of homology, if there is not too much divergence. Homology of sequence is due to common ancestry, so what the sequences do afterwards (diverge, or converge) doesn't affect it. I'm not sure if I understood your point fully, though, so if I have misunderstood it, please write back and I will try to answer further.

Starbuck
January 15, 2012, 09:20 PM PDT
Nick Matzke, of course it cannot go unnoticed that you failed to engage the material evidence at any level whatsoever, not even to say that you're disinterested in it - if that's even possible. I am not sure how your response, one demonstrating such clear avoidance of the issue in such an obvious manner, will age for a person like yourself; a person who has been something of an activist in these matters. I would think it will not age well.

Upright BiPed
January 15, 2012, 02:32 PM PDT
You are making all kinds of dubious assumptions and assertions here. You need to learn something about the statistics of homology searches, the high statistical improbability of a sequence match *even* in the case of "strong" constraints on the sequence, etc. And then get it peer-reviewed and published. Literally hundreds of thousands of scientists and entire fields of research, all the genomics databases, etc., make use of homology and the statistical tools I have cited (like BLAST). It's not just little old me making the assertion that convergence is not a likely explanation of sequence similarity.

Also: if you want to do a serious analysis of the rod proteins: estimate a phylogeny, estimate a phylogeny, estimate a phylogeny. Random statements about X sharing certain similarities with Y don't mean much; you need to look at the overall picture, which is a phylogeny.

NickMatzke_UD
January 15, 2012, 12:47 PM PDT
The ancestral inner membrane ring of the V-ATPase wasn't "very complex"; it was one short protein (less than 100 amino acids). The one gene is used to synthesize hundreds of copies of the protein, and they assemble into monomeric rings in the inner membrane. The one-gene version is still the situation in many bacteria, which have the homologous FoF1-ATPase, where the inner membrane ring protein is called the c-subunit of the Fo portion of the system. (Except for those bacteria which have duplicated the gene/protein for the c-subunit, which has happened at least in chloroplasts and, I think, in other cases also.)

Again, you've got a severe problem with words like "complicated" if one gene is considered more complex than several genes. Any IDist anywhere, unless they had just been beaten over the head by the Thornton lab's work, would say that the multiprotein system is more complex, that it's a finely-tuned irreducible machine, that all those proteins working together must have come together at once and are an insuperable obstacle to Darwinism, yadda yadda yadda. You can't just abandon all those standard ID assertions like they were never made (especially when they will instantly be made again the next time some random system comes up).

NickMatzke_UD
January 15, 2012, 12:41 PM PDT
Hi BiPed,

With regard to information, the main question I'm interested in is: if a gene is duplicated, and one copy gets modified such that it has a different specificity or function, has the amount of information in the genome increased?

I think, on any reasonable definition of "information", the answer is "yes". But this causes a problem for creationists/ID advocates, because they have invested a huge amount in the proposition that only intelligence can produce new information, and that natural processes such as evolution cannot. Thus, to defend their assertion, they have to invent all kinds of unreasonable and question-begging "definitions" of "information".

So, what's your answer to the gene duplication question?

NickMatzke_UD
January 15, 2012, 12:33 PM PDT
Nick, I can think of a simple chemical reaction involving more than one chemical, e.g. neutralisation, which requires an acid and an alkali. Can the neutralisation phenomenon be reduced to just the acid or just the alkali?

Eugene S
January 15, 2012, 10:18 AM PDT
But you don’t say why those positions would be better explained by convergence, what is the signature, why isn’t copying an inferior explanation?
Well, the hypothesis that the sequence similarity observed among FlgBCEFGK is due to convergence at the molecular level basically predicts that convergent evolution will occur among independent lineages. I.e., it would predict that, although "the FlgK in Salmonella is closer in sequence identity to Salmonella FlgE than to Buchnera FlgE, FlgK still shares about 37 residues that are identical in Buchnera FlgE, but are not identical in Salmonella FlgE, even though, on the whole, FlgK of Salmonella is closer to Salmonella FlgE." The hypothesis of plain ole' gene duplication doesn't predict anything like this. On the contrary, it predicts that the bulk of the residues shared between Salmonella FlgK and Buchnera FlgE would be the result of common descent, rather than convergence. But if this was true, then we would expect these 37 identical positions shared between Salmonella FlgK and Buchnera FlgE to also be found in Salmonella FlgE. But we don't. This can be explained away by rapid sequence divergence on the part of Salmonella FlgE, but this explanation is a bit ad hoc.

Also, consider that when one makes an extensive sequence alignment of, say, FlgB, spanning many bacterial phyla, on average only about 6 different amino acid residues are allowed for every position (excluding gaps). The positions of other flagellar rod proteins, like FlgC, are under even tighter functional constraint. Thus, convergent evolution at the molecular level wouldn't be that implausible at all, since only a little more than 25% of the amino acid repertoire can be used. Gene duplication explains the sequence similarity shared among these proteins, but when we start digging under the surface we start seeing clues that convergent evolution might actually be at the root of this sequence similarity.

Genomicus
January 15, 2012, 09:47 AM PDT
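The "6 allowed residues per position, therefore convergence is plausible" arithmetic above, and Matzke's counterpoint that residue *frequencies* matter, can both be made explicit. The numbers below are illustrative assumptions, not measured flagellar data:

```python
# Chance two independent sequences match at one site when each site
# draws uniformly from k allowed residues
def expected_identity_uniform(k: int) -> float:
    return 1.0 / k

# 6 allowed residues per position gives ~17% chance identity per site
print(expected_identity_uniform(6))

# With unequal residue frequencies, the per-site match probability is
# the sum of squared frequencies -- higher than 1/k when usage is skewed
def expected_identity(freqs):
    assert abs(sum(freqs) - 1.0) < 1e-9  # frequencies must sum to 1
    return sum(f * f for f in freqs)

print(expected_identity([1 / 6] * 6))                       # ~0.167
print(expected_identity([0.5, 0.2, 0.1, 0.1, 0.05, 0.05]))  # ~0.315
```

Even so, a per-site match probability is only the starting point: the thread's dispute turns on what that probability implies once compounded over the many aligned positions a homology search actually scores.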
Nick Matzke,
Ditto for “information”...
Reading your comments above, the 'ditto' part of this seems to indicate that ID proponents "rely on an arbitrary and continuously-changing definition" of information. I certainly agree that persons (on both sides) have various definitions of information. Perhaps this happens for various reasons, but it very often seems to lead to an anthropomorphic view of information - which is understandable, but not necessary. Since we live in a material universe where information has material effects and must be transferred from material substrates, perhaps you could end the confusion and provide an unambiguous view. I had the opportunity to approach Larry Moran on this issue, based solely on the physicality of information transfer. Unfortunately, after giving Dr Moran an overview, he decided not to engage the conversation. Perhaps you could take a look at those observations and point to where they are arbitrary with regard to the material evidence. Thanks.

Upright BiPed
January 14, 2012, 12:38 PM PDT
Nick: "You would do better to redefine 'part' so that it is meant in terms of a logically-required function, rather than a specific protein."

Yes, I think that would be a better approach.

Nick: "It is apparently trivial to evolve proteins that are required, even though the ancestor with a similar function didn't strictly need the extra proteins."

I'm not quite sure what you mean by "trivial." In the case of V-ATPase, apparently what was needed was a specific kind of wheel. It seems logically possible that originally V-ATPase was just one very complex protein, that through a series of gene duplications became five and one, and then four, one and one. But what happened is that the first original, very complex protein became less and less complex as it was replaced by simpler and simpler proteins. I think that's a very significant fact, which shouldn't be overlooked. How did that original, very complex protein come into existence? The trivial part, it seems to me, is the way it evolved into more, but simpler, proteins.

Bilbo I
January 14, 2012, 12:13 PM PDT
I agree with this partially, but really, how many "irreducible" systems which turn out to actually be *reducible* does it take before we admit that there is significant reason to doubt that "all parts required in system X at time 0 years before present day" is any kind of argument for "all those parts were required in system X at any time throughout evolutionary history, therefore it couldn't gradually evolve"?

You would do better to redefine "part" so that it is meant in terms of a logically-required function, rather than a specific protein. It is apparently trivial to evolve proteins that are required, even though the ancestor with a similar function didn't strictly need the extra proteins.

Behe actually makes his argument this way sometimes in Darwin's Black Box -- e.g., for him, the flagellum had *three* parts: motor, rotor, stator. But soon after, everyone got impressed by the large number of "required" proteins (meaning some knockout mutation abolished function in the lab), and that's what the debate became about. But if you move back to that argument, (1) no more snowing the public with "ooh, look at the high number of required parts", and (2) no more "Hey, we did a knockout experiment, this is ID research, look at us, take us seriously when we say there is ID research!!!" (Never mind that everyone since forever has done knockout experiments, and that knocking out a part is nothing like reversing evolution to test the functionality of ancestral stages, any more than cutting off someone's head helps you understand the evolution of heads.)

Unfortunately, (2) is Casey Luskin's favorite argument; in particular, it's the core of his argument about why ID was misjudged in the Dover case. So, I predict no one is going to abandon the "protein list = parts list" approach, and thus Thornton's work will remain highly relevant.

NickMatzke_UD
January 14, 2012, 11:44 AM PDT
So, nullasalus, how are you interpreting the word "degradation" in Behe's piece? What do you think he means by it?

Elizabeth Liddle
January 14, 2012, 09:57 AM PDT