Uncommon Descent Serving The Intelligent Design Community

“Competence” in the Field of Evolutionary Biology


Thomas Cudworth in his post here referenced “…being competent in the field of evolutionary biology.”

My question is, What does it mean to be “competent” in the field of evolutionary biology?

It seems to me that it would mean providing hard empirical evidence that the mechanism of random variation/mutation and natural selection which is known to exist (e.g., bacterial antibiotic resistance) can be extrapolated to explain the highly functionally integrated information-processing machinery of the cell — at a very minimum! This empirical demonstration should be a prerequisite, before we even begin to entertain speculation about how this mechanism produced body plans and the human brain.

Yet, the theoretically most “highly competent” evolutionary biologists never even attempt to address this requirement. They just wave their hands, make up increasingly bizarre, mathematically absurd, unsubstantiated stories out of whole cloth (like co-option), declare that the solution has been found, and that anyone who questions them is a religious fanatic.

This is the antithesis of legitimate scientific investigation.

My definition of competence in the field of evolutionary biology is Michael Behe, who has actually empirically investigated the limitations of the creative powers of the Darwinian mechanism. The conclusion is clear: It can do some stuff, but not much of any ultimate significance, and cannot possibly be extrapolated to explain what Darwinists expect us to accept through blind faith, in defiance of all reason and evidence.

Comments
Elizabeth is fine :) Not many people do, but I do rather like it. Elizabeth Liddle
Elizabeth: May I call you Elizabeth? I like that more... Well, I am happy to know more about you. As for me, I am a medical doctor in Italy, with a great interest in statistics, biology, genetics, practical philosophy, and many other things. ID has been a great passion for me, and has opened very important vistas on reality. It has also given me the opportunity to deepen my understanding of molecular biology and genetics, even if they are not strictly my field. All my interventions here are completely on a "non-authority" basis. I don't believe in authority in science, but competence is certainly a very high value. And, luckily, nobody has ever tried to call me "Dr."! :) gpuccio
It's been pointed out to me that as people keep addressing me as "Dr Liddle" I should probably explain my academics. I'm somewhat embarrassed to be constantly so addressed, and only ever sign myself "Lizzie". I use my real name (sans salutation) as my login name, and I am not anonymous. Early on, I was addressed as Ms Liddle, and I somewhat facetiously explained that I was (IIRC) usually known as Mrs, or Lizzie, and occasionally Dr! So can I here and now invite everyone to drop the salutation - I am more than happy with Lizzie. But, as it is probably clear to most people that I do have a PhD (as I'm sure many others do here), let me give a brief academic CV: My first degree was in Music with Education (major, Music), followed by a postgraduate certificate in music education (PGCE). I taught high school music for a bit, and also did a lot of performing (in the field of "early music"). I then did a second bachelor's degree in architecture, followed by some years working freelance in various architectural practices, and continuing to work as a professional musician, also traveling to Basel to study viola da gamba with Jordi Savall! That was all fun. Then I finished my architecture training (including a masters in Urban Design) and moved to Vancouver, Canada, and had a much longed-for surprise baby (on my last egg!). My "miracle baby". Then tried to get a job in architecture in Vancouver in the middle of a property slump :(. Managed to get lots of music gigs though, which was good. But I came to the conclusion that being a touring musician wasn't really compatible with motherhood (too much when I had work; too little when I didn't) and thought I'd go back to school, pursue my lifelong interest in educational psychology, and do a masters, maybe a PhD in that, and practice as an Ed Psych. I did most of that Masters (at UBC), then moved back to the UK, somewhat unexpectedly, and embarked on a PhD in cognitive psychology/neuroscience, in a motor control lab.
Since completing my PhD I have worked as a researcher in the field of neuroscience, mostly in brain imaging, in connection with research into mental disorders. I am not a biologist, although of course my work is biological, nor a geneticist, although I work with genetic data. I'm not a programmer, though I program, and I'm not a computational modeller, though I have written, and published, computational models! I've always been fascinated by biology, and evolution, but have no more than an enthusiast's lay understanding, although as it impinges on my field (as it does on every biological field!) I read especially voraciously in that area. My area of research is learning (that's a lifelong theme) and timing (also lifelong), specifically learning of timing. Learning models are, in effect, evolutionary algorithms (or, most of them are) and so computational aspects of evolution are something that I am familiar with. I also use, and write, classifier algorithms, which are learning algorithms that are essentially Darwinian. But that's it. I don't wish to give the impression I have any special authority endowed by my PhD, except perhaps the authority that comes from the skill, shared by many here, of using the scientific method - operationalising hypotheses, casting them as statistically testable predictions, figuring out how to measure variables, handling data, etc. I have also, as some people know, written some children's books - not many. One is about Heaven ("Pip and the Edge of Heaven"), the other two are in German, and were part of a commissioned series on "questions children ask". They are fiction-with-a-purpose if you like. I also do some musical composing. I think that's it. Hope that has cleared things up. And please call me Lizzie :) Cheers Lizzie Elizabeth Liddle
Sorry, lost my bookmark for this one. Will be back, but probably not till weekend. Elizabeth Liddle
Elizabeth Liddle, where are you?? PaV
Elizabeth: First of all, two more papers that will help in our debate: The Evolutionary History of Protein Domains Viewed by Species Phylogeny http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0008378 and Sequence space and the ongoing expansion of the protein universe http://www.nature.com/nature/journal/v465/n7300/full/nature09105.html (this one is not free, but there is a summary of it here: http://www.lucasbrouwers.nl/blog/2010/05/the-big-bang-of-the-protein-universe/). Well, I would like to sum up my personal scenario, based in part on these sources. Let's begin with a clarification about FUCA and LUCA. Here I have to make specific epistemological choices. a) LUCA: Is the hypothesis of a last universal common ancestor a just so story? No, it is not. It is a reasonable inference derived from what we can observe in the proteome today (that is, from facts). I am convinced that the analysis of protein homologies can give great insight on their evolutionary links. So, if one believes in common descent (and I do), it is perfectly reasonable to isolate those proteins that are credibly very ancient, and common to all known living beings. Those proteins and protein families number in the hundreds, and include many fundamental and complex functional systems (for instance, those of DNA replication, transcription, protein synthesis, and so on). If the observed homologies tell us that those families share a common ancestor, and have not arisen separately at the more recent evolutionary nodes, then it is reasonable to assume that some progenitor to all modern lines of living beings must have existed. That is LUCA. We can not only assume that it existed, but also make some inference on its nature and time of existence. As far as I can understand, there is no reason to think that LUCA was anything much different from a prokaryote, some bacterium or archaeon.
It is very likely that it existed in very old times, probably not too long after the conditions on earth became compatible with life. I suppose that an exact timing of LUCA, anyway, is still controversial. b) FUCA: Is the hypothesis of one or more simpler progenitors of LUCA a just so story? Yes, it is. There is nothing based on facts that supports that theory. Let's say that LUCA could well have been FUCA, and there is nothing against that simple possibility, except the resistance to accepting that life, at its emergence, may have already been very complex. Therefore, the existence of simpler precursors to LUCA is driven by ideology, not facts. You may have understood, at this point, that I am not a big fan of any existing OOL theories, from the primordial soup to the RNA world. I believe that all of them are at present just so stories. They can well be pursued, but there is no reason to accept them as credible scientific theories. Having stated this certainly unpopular belief, I would like to go on a little. From the above paper about protein domain evolution, we can derive the reasonable hypothesis, which I had already quoted without giving a specific reference, that more than half the basic protein information was already present in LUCA. You can see in Table 1 of the paper that, according to the authors' analysis, 1984 domains were already represented at the beginning of cellular life, out of 3464. But it is also true that the remaining ones emerged after, most of them at the node of bacterial emergence (144) and eukaryote emergence (492), and gradually fewer at later points, down to about 10 after the mammals node. That is interesting, isn't it? A few considerations. a) If functional protein domains were so common and numerous, how is it that all the powerful evolutionary search has found only a few thousand of them? And more than half of them at the stage of LUCA, very near to OOL?
And yet, very important functional modifications have happened after that, not least the emergence of humans. But, for some reason, those modifications were realized with only minor new discoveries of basic functional protein structures. I would say, one of two things: either not so many really useful protein structures remain to be found (my favourite), or for some reason life as we know it does not need the rest of what could be found. You choose. Let's go to the big bang theory of protein evolution, and to neutral mutations. This is how I see things. New protein structures (folds, superfamilies, domains, according to how you choose to define them) appear suddenly at specific points of evolution. Probably, when they are needed for what is being engineered (OK, this is an ID interpretation). A new fold, let's say, appears in some place inside the "functional island" of the sequences that can express that function. After that, what happens? The sequence "evolves". But what does that mean? In many cases, the function does not evolve at all: it remains the same, and the folding remains more or less the same (with minor adaptations). But the sequence changes. The more distant species are in time, the more the sequence of the same fold, with the same function, changes. In some cases, it changes so much that it becomes almost unrecognizable (less than 10 - 30% homology). Why? I am not sure, but the most likely explanation is: neutral mutations and negative selection. Plus a rugged landscape of function, which makes some mutations possible only if other, compensatory mutations have happened before, and therefore slows down the process. That's what neutral mutations do: they change the sequence, and allow the proteins to "diverge" inside a functional island, where negative selection preserves the function. But when mutations reach the "border" of the functional island, and function is compromised, then the mutations are no longer neutral, and they are usually eliminated.
So, the original functional structures emerge with all their functionality from the start (are engineered, we would say in ID). But neutral mutations and negative selection allow divergence without significant loss of function. I am obviously aware that, in many other cases, functional divergence also happens within a protein superfamily. That is usually obtained by relatively minor tweakings at the active site, while the general fold is often maintained. That case is analyzed (IMO quite well) in the second Axe paper I quoted. Are these variations, usually of a few AAs, microevolution or macroevolution? Are they realized by a purely darwinian mechanism? I don't know, but I have my doubts. I can anyway say that they are certainly much nearer the reach of darwinian theory than the emergence of basic protein domains. Well, I think that's enough for today. I have probably been too long. And here (in Italy) it's time to rest. gpuccio
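The divergence mechanism described above (neutral change at free positions, purifying selection at critical ones) can be caricatured in a few lines of Python. Everything here is an illustrative assumption, not data from any paper: sequence length 100, 30 "conserved" sites, 10,000 attempted point mutations, and a hard accept/reject rule standing in for negative selection.

```python
import random

AAS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino acids, one-letter codes

def diverge(length: int = 100, conserved: int = 30,
            attempts: int = 10_000, seed: int = 0) -> float:
    """Fraction of positions still matching the original after drift."""
    random.seed(seed)
    original = [random.choice(AAS) for _ in range(length)]
    critical = set(random.sample(range(length), conserved))
    seq = list(original)
    for _ in range(attempts):
        pos = random.randrange(length)
        if pos in critical:
            continue                   # purifying selection removes it
        seq[pos] = random.choice(AAS)  # neutral change is accepted
    return sum(a == b for a, b in zip(seq, original)) / length

print(f"{diverge():.0%}")  # conserved sites held; free sites near random
```

With these numbers the identity settles around 30-40%: the 30 conserved positions stay fixed while each free position retains only a roughly 1-in-20 chance match, which is the "divergence inside a functional island" picture sketched above.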
Dr Liddle: The known replicators joined to metabolic cells -- the relevant case to life as we observe it and can reconstruct it on the conventional timeline to 3.2 - 3.8 BYA -- are based on vNSRS linked to metabolic systems. You may be able to construct something else, but the evidence is that when we look at such self-replication as an additional facility, we are looking at irreducibly complex, symbol-based representational systems. Things that are well beyond the 1000-bit threshold for the cosmos as a whole. Where in fact, on a case by case basis, a solar-system scaled site is the credible unit of study. So, the issue here is to get to the first functional body plan, without which the issue of differential reproductive success on different metabolism-oriented function in an environment does not even obtain. Multiply this by the known architecture of embryological development from an initial cell, and you run head on into the islands of function issue, given the easily perturbed embryological development processes. This issue is in my considered opinion the decisive one. GEM of TKI kairosfocus
kf: I don't think that "life-forms reproduce" is past its sell-by date at all! I do understand that you consider it refuted, but I certainly do not! I've given several examples of self-replicators that are far simpler than your von Neumann self-replicator, and you have simply rejected them. The assertion that the minimum entity capable of Darwinian evolution is a von Neumann self-replicator is simply that - an assertion. But I'm happy to leave you with your fine-tuning argument - I readily concede that life is unlikely in a universe lacking carbon-weight elements. I don't think the fine-tuning argument actually works very well, but it's not what I'm arguing here, or what any "Darwinian" argues. Darwinian evolution rests on (at least) two premises: that physics and chemistry as we know it existed; that self-replicators, capable of replicating with variance in the efficiency with which they self-replicate, existed. But given those two premises, Darwinian evolution seems well up to explaining what we observe in the living world. And if you want to argue that the simplest self-replicator capable of Darwinian evolution is a von Neumann self-replicator, be my guest :) I'm not yet persuaded :) Elizabeth Liddle
PS: And before someone trots out the long past sell-by date "life forms reproduce" objection, remember, this is what is needed for the kind of "cell based, metabolising, von Neumann self-replication as additional capacity" entity involved. This is evidence of design of cell based life, which goes on top of evidence of design of a cosmos that is fine tuned and so set up to facilitate C-chemistry cell based life and in turn makes design the best candidate to explain the additional FSCI in body plans. kairosfocus
GP: The issue of islands of function starts with proteins but goes beyond there. No one in his right mind would imagine that a tornado in a junkyard will form a 747 jumbo jet. But it will not form a single instrument on its panel either, not even a basic D'Arsonval moving coil galvanometer. For the same basic reason. And in fact I would be astonished to see such a tornado winding the 50-turn fine wire coil, putting in the leads and inserting the spindle and jewels at the ends correctly. As to matching that to the magnets and iron core etc. to get the meter to work, much less needle, spring and scale . . . Don't even think about boxing it properly! The monkeys-at-keyboards exercise, backed up by scans of literature, has been tried; the result to date is summed up by Wiki, in effect searching a space of 10^50, not 10^150:
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[21] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
This has been brought up many times for months. GEM of TKI kairosfocus
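The scale of the monkey-typing search quoted above can be sketched in a few lines. This is a rough illustration only, and it assumes a uniform 27-symbol alphabet (26 letters plus space), which is smaller than the full keyboards the quoted simulators actually used, so the real odds are somewhat worse than shown.

```python
# Expected number of uniform random tries to hit one specific
# n-character string, for an assumed 27-symbol alphabet.
def expected_tries(n_chars: int, alphabet: int = 27) -> float:
    return float(alphabet) ** n_chars

print(f"{expected_tries(19):.2e}")  # 19 chars (New Yorker result): ~1.6e27
print(f"{expected_tries(24):.2e}")  # 24 chars (Simulator result):  ~2.3e34
```

Even 24 matched characters sit around 10^34 tries under these assumptions, which is why the simulators report monkey-year figures in the billions of billions.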
F/N: Search on the gamut of our observed cosmos has the upper bound of 10^150 Planck Time Quantum States [PTQS] for the atomic matter of the observed cosmos: about 10^80 atoms, 10^25 s to effective heat death, 10^45 states per sec, roughly. Where the FASTEST CHEM RXNS TAKE 10^30 PTQS. Think of one PTQS as the cosmic clock tick. 500 bits [2^500 ~ 3*10^150] is at that level and is beyond practical search; the number of states is beyond the power of chemistry to find needles in haystacks, noting that stats tell us random samples strongly tend to be typical, not atypical. 1,000 bits is so far beyond the cosmos as to be ridiculous: you have just squared a 500-bit space. And yet 125 bytes, or 143 7-bit ASCII characters, is a trivial quantum of control code for anything of consequence. Life forms start out at about 100,000 bits. Again, so far beyond the ridiculous that this is now blatant to the point of self-evidence. New body plans are going to require 10 - 100 million bits, per observations. This is off the chart. The ONLY empirically warranted way to get FSCI on that scale is intelligence. The inference to design on the genome -- we have not yet touched epigenetic factors -- is obvious. Save to those indoctrinated into a C20 update on a C19 theory that has held institutional dominance and is ruthlessly abusing power to keep itself going long past when it should have gone out to pasture. I'd say some time by the 1960s, when it was clear what DNA meant and when we saw what it took to make computers and do cybernetics and control things that are a lot less sophisticated than a living, metabolising, self-replicating cell. As in "rocket science." No wonder Wernher von Braun was a Creationist! Yup, I know: those ignorant, stupid, insane or wicked fundies! GEM of TKI kairosfocus
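The order-of-magnitude arithmetic in the comment above can be checked directly. All figures (10^80 atoms, ~10^45 Planck-time ticks per second, ~10^25 seconds) are the comment's own round numbers, not measured values.

```python
from math import log10

atoms = 1e80           # comment's estimate of atoms in the observed cosmos
ticks_per_sec = 1e45   # ~Planck-time quantum states per second
seconds = 1e25         # time to effective heat death (comment's figure)

# Work in log10 so the huge products stay finite.
total_states = log10(atoms) + log10(ticks_per_sec) + log10(seconds)
print(f"total PTQS ~ 10^{total_states:.0f}")      # 10^150

config_space_500 = 500 * log10(2)                 # log10 of 2^500
print(f"2^500 ~ 10^{config_space_500:.1f}")       # ~10^150.5
```

So a 500-bit configuration space is already a few times larger than the assumed total state count of the cosmos, which is the comparison the comment is leaning on.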
Elizabeth: About functional islands. It's controversial. The truth is we still don't know for certain. Darwinists obviously hope, and try to demonstrate (not always fairly), that there are many, many of them, that each one is very, very big, and that finding them by a random walk is, if not easy, perfectly possible. I will be brief, and list some arguments that, IMO, point to the opposite view. Many of these points are in some way discussed in the first of the two Axe papers I pointed to. 1) Human protein engineering, with all its intelligent search, has not yet been able (to my knowledge) to produce a single new functional protein fold, unrelated to those in the proteome, which has a really useful function. Least of all a selectable one. The last SCOP release mentions in the known proteome 1195 folds, 1962 superfamilies, and 3902 families. Designed proteins, by both top-down and bottom-up methods, are only a few, and as far as I can understand they are neither functional nor selectable (I have not checked the most recent, so I could be wrong about that). I am not saying that intelligent engineering cannot design proteins: it certainly can, and will do that. But the very fact that, with all our understanding of biochemistry and with all our computational resources, this is still an extremely difficult task is scarcely in favor of common functional proteins, everywhere to be found. 2) Recent research aimed at transforming one fold into another (mentioned by Axe) shows that that is a very difficult task, even with short proteins, and even following a natural algorithm that guides from one known form to another known form. 3) The main research paper cited by darwinists to affirm that functional proteins are present in a random repertoire is the Szostak paper, which is flawed and does not in any way justify that conclusion, as I have tried to show many times here and elsewhere.
4) I would suggest the following interesting paper: Experimental Rugged Fitness Landscape in Protein Sequence Space http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0000096 While suggested to me by a darwinist, it has become one of my favourites. It is about optimization, not about the finding of an original functional structure, but still it shows how a random repertoire is a very poor tool for a serious optimization, even in the presence of strong natural selection. I would like to emphasize the following part of the discussion: "The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness." That is really interesting. Another interesting point about optimization, again from that paper, is that suboptimal peaks of function in a rugged landscape are indeed obstacles to the final optimization, because they become dead ends where the random walk stops, and not steps to further optimization. 5) I will present my last argument in my next post, because it is about the role of neutral mutations, and it deserves a detailed discussion. That's all, for now. gpuccio
GP: Well said as usual. A claim that there are infinitely many functionally specific complex patterns of relevant elements is in effect a claim to an infinite multiverse supercosmos where everything no matter how improbable, can and does happen. There is ZERO clear observational evidence for such. This is unfounded metaphysical speculation designed to save the phenomena for a preferred worldview. In short, worldview speculation. So, we are entitled to challenge it on phil terms, such as, comparative difficulties with other credible alternatives on the table and ALL the evidence in their favour. Such as this. And, if that sort of multiverse speculation is the real alternative to a design inference on the significance of FSCI in light of the known cosmos, that tells us that science is not the root issue. The empirical evidence we do have supports that FSCI etc are excellent and reliable signs of intelligent cause. To counter this, due to a priori commitment to a dubious materialistic worldview, resort is made to metaphysical speculation hiding in a lab coat. And then saying we are science, so you cannot challenge our metaphysics on comparative difficulties. Nor can you suggest to students that there are other ways to understand science, its methods and its findings. That is telling. Science held ideological captive. By the materialists. The new establishment. GEM of TKI kairosfocus
computerist (#60): Again, I don't understand. gpuccio
computerist: "While we cannot measure Natural Selection explicitly, NS can actually be deduced from a set of possible configurations." No. Absolutely no. Natural selection can intervene only if a new step gives a reproductive advantage. Show the selectable steps, show why they are selectable, and then and only then can you hypothesize a naturally selectable path. "One objection to Durston's equation was that it didn't take into account NS, and that the equation only takes into account a target and the probability of RM alone producing the target in one try." That is absolutely true, but it is no objection at all. Durston is examining protein families for which no path based on natural selection has ever been even vaguely proposed. They are protein families which have no sequence relationship with others. So, their functional complexity must be accounted for entirely, unless and until true scientific hypotheses about a selectable path are made. We are all ready and eager to evaluate darwinian paths based on NS, if and when they are explicitly and realistically proposed. We are just waiting. Science is done on facts and explicit, verifiable hypotheses, not on just so stories and ideology. "However, this is not the case as Durston's equation takes into account the number of possible configurations." Durston's equation takes into account the number of functional configurations in relation to the number of total configurations. It has nothing to do with natural selection. Nothing at all. "the probability of RM alone producing the target in one try." Again the error I outlined in my previous post. Not in one try. By a random walk. "The number of possible configurations can be restated to the number of steps taken before a locked function, which is in essence NS." No. If the steps are intelligently selected, it is intelligent selection. If the steps are intelligently guided, it is direct input of information.
It is NS only if you can demonstrate that each step expanded in the population because it conferred a reproductive advantage. "320 Gen to reach a target is equivalent to 320 possible configurations before a FCSI effect is achieved." The Weasel is an obvious example of intelligent selection based on knowledge of the target. Even Dawkins should have understood that by now. No NS is involved. "The only objection I can foresee from Darwinists is that there is infinite possible FCSI's, and therefore any combination in a biological context would likely produce FSCI. FCSI doesn't really exist and is an illusion (and the universe is inside a magic 8 ball)." I hope you understand what you are saying here. I don't. gpuccio
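For reference, a minimal sketch of Dawkins's Weasel program makes the point above concrete: the selection step compares candidates against the known target string, so the search is guided by information about the target rather than by any reproductive advantage. The population size and mutation rate here are illustrative choices, not Dawkins's originals.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    # Fitness is distance to the KNOWN target -- this comparison is the
    # "intelligent selection" the comment is pointing at.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def weasel(pop: int = 100, seed: int = 0) -> int:
    """Generations until a random string hill-climbs to TARGET."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        children = [mutate(parent) for _ in range(pop)]
        # Keep the best of parent and children: progress is locked in.
        parent = max(children + [parent], key=score)
        generations += 1
    return generations

print(weasel())  # converges, typically within a few hundred generations
```

Strip out the `key=score` comparison against TARGET and the search collapses back into a blind random walk, which is exactly the contrast the comment draws.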
WilliamRoache: Wow, so many answers to give, and so little time to do it! "What mechanism are you proposing is used to search it? Does the search have to find all 462 bits at once?" Design, which allows an input of active information. That can take different forms (direct input as guided mutation, targeted random mutations plus intelligent selection, and so on). The 462 bits need not be found at once, but they must be in some way "recognized" and fixed, otherwise we are again in the scenario of a random walk, which is incompatible with a realistic search. "And perhaps those isolated islands when zoomed in upon reveal themselves to be capable of sustaining life as we know it?" I am not sure I understand what you mean. If I understand well, that is obviously true. "If you are looking for single 462 bit sequences by random searching (monkey on keyboard, tornado in a junkyard) then of course they will be rare. Very rare." I am looking for functional sequences of high complexity. They are rare. And it's impossible to find them by a random walk. "But nobody except ID proponents claim that such sequences arose in a single step with no simpler precursors." Again, the point is not, and never has been, that they must arise "in a single step". They can arise in as many steps as you like. But, if each step is not simple and naturally selectable, the random walk remains a random walk, and the probabilities remain the same. Darwinists like to create ambiguity about the temporal context, affirming that IDists believe that the variation must happen "all at once". That is not true. The probabilities for a specific 400-bit variation are ridiculously low, whether we try a single variation of 400 bits or many successive variations of one bit. If natural selection cannot expand each single step, then we are always in the random walk scenario, and the probabilities are practically nil. "So what's your claim? That the protein families in question were intelligently designed? All proteins?"
"Could some have evolved? How do you tell the difference?" It's simple. All protein families which have a functional complexity higher than some conventional threshold, and cannot be deconstructed into an explicit path of naturally selectable precursors, are probably designed. A reasonable biological threshold for a random step can be set, IMO, at 100 bits (about 25 coordinated AAs), even if empirical thresholds are certainly much lower. Simpler variations could in principle be in the range of random variation, but probably any single non-deconstructible step of more than 5 - 10 AAs is designed. Remember that all protein superfamilies are unrelated at the sequence level, all 2000 of them. gpuccio
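The claim above, that without selection a stepwise random walk is no better off than a single random draw, can be illustrated on a toy scale. The sketch below walks over 8-bit strings (one random bit-flip per step) and measures how often it sits on one fixed target; the long-run fraction matches uniform sampling, 1/2^8. The 400-bit case in the comment is the same distribution over 2^400 states; the bit width and step count here are purely illustrative assumptions so the loop actually runs.

```python
import random

def walk_hits(n_bits: int = 8, steps: int = 200_000, seed: int = 0) -> float:
    """Fraction of steps an unguided bit-flip walk spends at one target."""
    random.seed(seed)
    target = (1 << n_bits) - 1                   # an arbitrary fixed target
    state = random.getrandbits(n_bits)
    hits = 0
    for _ in range(steps):
        state ^= 1 << random.randrange(n_bits)   # flip one random bit
        hits += (state == target)
    return hits / steps

print(walk_hits())   # close to 1/256 ~ 0.0039
```

Because every state of the walk is visited with equal long-run frequency, taking many one-bit steps buys nothing over drawing whole strings at random, which is the comment's point about one-bit versus 400-bit variations.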
Elizabeth Liddle @ 11:
You may have meant: 600 genes are necessary for mitosis. I don’t know if this is the case: it may be, now. That would not mean it had always been that way, nor does it mean that there is no allelic variation in those genes.
"I don't know if this is the case: it may be, now." Are you seriously suggesting that other, simpler means of mitosis existed? How is this more than hand-waving? This is an argument from ignorance. " . . . nor does it mean that there is no allelic variation in those genes." In most proteins, from what I've read, there are conserved portions, and non-conserved portions. How do we differentiate? By looking at polymorphisms across a population or across related species. Since the vital portion of the protein/enzyme is, per definition, 'conserved', then you can have all kinds of varied alleles and yet this does not affect function (per definition). So, it would seem that if we assume that, let's say, 30% of a gene/allele is vital/conserved, then some mechanism must explain how that 30% came about. The number of "alleles", to my mind, is simply immaterial. That is, wherein proteins function, they do not vary; wherein they vary, they do not affect function. In support of my point, let me note that in the paper I mentioned in my last post, they compared critical sequences across related populations and found NO polymorphisms whatever in the regions critical to function. In fact, isn't that really the true role of what we call NS? That is, putative "purifying selection". PaV
Elizabeth Liddle @ 35:
Firstly, you are assuming an a priori “target” – a specific trait, that involves two independent amino acid changes, and that will confer a reproductive advantage.
There is a paper recently out wherein for a beneficial novelty to occur, a minimum of thirteen mutations were needed. The authors seemed to suggest that only five mutations might suffice. Not to quibble, let's assume that only five mutations are needed. Now they mention that if they make each of the five mutational changes individually, the increase in fitness is not that great. The beneficial effect occurs only when all five mutations are in place 'simultaneously'. Now, in a population of half a million individuals, let's say that there are, at time t=0, 500 individuals with one of these needed mutations. Since the increase in fitness is mild, then for this mutation to become fixed in the population, 4N of such mutations are needed. So, 1,999,500 more such mutations are needed. If, per generation, 10^-6 mutations occur per individual (using Kimura's number), then every two years, another needed mutation at just the right spot will occur. So, to arrive at the needed two million mutations (rough count), 4 million years would be needed. Now what about the second, and the third, (and so forth) mutations? It's possible that the genome that 'fixes' will have one of these other mutations. What is the possibility? Is it one in a half a million? Is it one in four? Most likely it is one in four, since at any other locus on the genome, that locus, too, will have changed (theoretically) two million times; but the chance of the proper nucleotide base being in place is always just one in four. Using this probability, since the genome with the first needed mutation has become 'fixed', there should be 1/4 x 0.5 x 10^6 possibilities for the second needed mutation already at hand. This equals 125,000. So, for the second mutation to become fixed, we need 4N mutations at that particular locus. At 10^-6 mutations per locus per generation, and with the need for 1,875,000 second mutations, we'll need 3.75 million years. This will be the same for the third, fourth and fifth mutations. 
Total time for all 5 needed mutations: 19 million years. But what if all 13 mutations are needed, as found in nature? Then roughly 53 million years are needed to bring this novelty about through 'random drift'. This is a relatively simple change to ONE protein, and roughly 53 million years (at the very minimum, 19 million years) are needed. Excuse me, Elizabeth, but this is not impressive in the least. How does a functioning genome come into existence under this kind of scenario? To the rational mind, it's absolutely inconceivable! PaV
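A quick numeric sketch of the arithmetic above may help. Every figure here is the comment's assumption (N = 500,000; one generation per year; 10^-6 needed mutations per locus per individual per generation; fixation counted once 4N copies have arisen), not an established value. Note that on these assumptions the thirteen-mutation case comes out nearer 49 than 53 million years:

```python
# Back-of-envelope check of the drift/fixation arithmetic in the comment
# above. Every number is the comment's assumption, not a measured value.
N = 500_000          # population size
mu = 1e-6            # needed mutations per locus per individual per generation
per_year = N * mu    # needed mutations arising per year (one generation/year)

def years_until_fixed(copies_already_present):
    # Years until 4N copies of the needed allele have arisen,
    # which is the comment's criterion for fixation.
    return (4 * N - copies_already_present) / per_year

t_first = years_until_fixed(500)      # first mutation: ~4.0 million years
t_later = years_until_fixed(125_000)  # each later one: ~3.75 million years

five = t_first + 4 * t_later          # ~19 million years
thirteen = t_first + 12 * t_later     # ~49 million years
print(round(five / 1e6), round(thirteen / 1e6))  # 19 49
```

Whether the 4N criterion itself is the right model of fixation is, of course, part of the dispute in this thread; the sketch only reproduces the comment's own arithmetic.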
Elizabeth Liddle:
So even if we assume (probably a fair assumption) that many advantageous traits arise from gene-gene interactions (you need a specific combo to get the advantage), the opportunities for those combinations to occur are actually quite substantial. Of course the probability of any one specific combination, out of all the alleles, occurring may be very low, but that is not the relevant probability. The relevant probability is the probability of some advantageous combination occurring in some individual at some point.
Your basic argument, or so it seems to me, is that 'statistically', any genome is just a matter of time. One would have to assume that a corollary to this argument is that the more complex the genomic structure, the longer the time needed to 'find' (stochastically) the 'target' (which, of course, to Darwinists, is not known ahead of time---but that doesn't matter here). Then, per this corollary, the fossil record should provide us with two things: (1) there ought to be a tremendous gap in time between the arrival of the first eukaryotes (the true LUCA) and the discovery (stochastically) of Phyla, that is, body plans; (2) after a long period of time, newer body plans should also be 'discovered'. We find neither of these to be true. Darwin, his religious sensibilities shaken and reckoning the Earth to have existed for eons upon eons, felt that what we call the Cambrian Explosion must, by force of his 'theory', have been preceded by eons upon eons of intermediate forms. But the Big Bang has burst the bubble of envisioning the Earth to be quasi-eternal, and the intermediate forms have not been discovered. Quite to the contrary, an intricate mollusc eye has been found right in the middle of the Cambrian. So, help me if you can, but frankly I don't see any way of interpreting neutral drift so as to explain what the fossil record provides. Again, as in my previous post, this is a real-life example. It's not hypothetical. PaV
Elizabeth Liddle @ 35:
Firstly, you are assuming an a priori “target” – a specific trait, that involves two independent amino acid changes, and that will confer a reproductive advantage. However, this would be the Texas Sharp Shooter fallacy! There may be many many proteins that would confer the same or similar advantageous trait, and, indeed, there may be (and are) many advantageous traits.
I suspect you have not read Michael Behe's book, The Edge of Evolution. There he deals with a real-life---as opposed to a purely hypothetical---scenario in which P. falciparum (the malarial parasite), which reproduces asexually, is in a life-and-death struggle to overcome the effects of chloroquine. In any individual infected with the parasite, at peak infection 10^12 replicates will have been produced (the mutational equivalent of a population of one million organisms maintaining its population size for one million years). This represents a tremendous amount of mutation: the same genome replicating itself generation after generation, with some constant number of mutations introduced every time the genome is reproduced. And yet, with all of the proteins contained in its genome, with all of the replications, with all of the mutations, not a SINGLE one is capable of fighting off the effects of a rather simple chemical (because of 'neutral drift' there should be so many candidates, right?). Then, after twenty years of millions of people annually becoming infected with different versions of malarial genomes, finally a solution, a real, live solution, is found. It consists of, basically, two amino acid substitutions. Now, with this great level of selective pressure at play, and when, in the end, only two a.a. residues need to be changed, why is it that roughly 10^20 replications are needed for the solution, if, as you say, there are many, many proteins able to "confer the same or similar advantageous trait"? Again, this isn't theoretical, it's real. Very real. PaV
Just to add to 59: in my opinion, slight modifications to Durston's equation could render a much more realistic result. In particular, instead of using "dispersed" possible configurations, we use an approximate starting configuration and a relative random offset. In order to model NS more precisely, a dependency condition between subsequent configurations should be imposed. OR we could be very generous (as Durston has been) with the number of possible configurations. computerist
gpuccio said:
that is how many sequences can approximately express that biochemical function versus the total search space.
This can also be stated another, more relevant way, I believe. I have not followed the entire thread, so please forgive me if I'm repeating what's already been mentioned. While we cannot measure Natural Selection explicitly, NS can actually be deduced from a set of possible configurations. One objection to Durston's equation was that it didn't take into account NS, and that the equation only considers a target and the probability of RM alone producing the target in one try. However, this is not the case, as Durston's equation takes into account the number of possible configurations. The number of possible configurations can be restated as the number of steps taken before a locked function, which is in essence NS. Going back to Weasel, for example: 320 generations to reach a target is equivalent to 320 possible configurations before an FSCI effect is achieved. So it does, in fact, take into account NS. The only objection I can foresee from Darwinists is that there are infinitely many possible FSCIs, and therefore any combination in a biological context would likely produce FSCI, so FSCI doesn't really exist and is an illusion (and the universe is inside a magic 8 ball). computerist
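Since Weasel keeps coming up in this exchange, here is a minimal sketch of the algorithm for reference. The population size, mutation rate, and fittest-offspring selection rule are illustrative choices (Dawkins did not publish his exact parameters); with these settings the target is typically reached in a few hundred generations:

```python
import random

# Minimal Weasel sketch: cumulative selection toward a fixed target.
# Parameters (population 100, 5% per-character mutation) are illustrative.
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # number of characters already matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # each character independently has a 5% chance of being randomized
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

random.seed(1)
parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # keep the fittest of 100 mutated offspring
    parent = max((mutate(parent) for _ in range(100)), key=fitness)
print(generation)  # typically on the order of a few hundred generations
```

Whether the generation count can legitimately be equated with "configurations tried", as the comment above proposes, is the interpretive question; the sketch only shows where the number comes from.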
WR:
But nobody except ID proponents claims that such sequences arose in a single step with no simpler precursors.
Is that what Durston, Abel, Axe, etc claim? I missed that part. Mung
Mung:
My grandmother was not a single celled organism that reproduced by cloning.
But then, you already knew that, didn't you. Elizabeth Liddle:
Well, “allele shuffling” isn’t the only mechanism of mutation, and indeed, wouldn’t have been relevant to our LUCA which can’t have been a sexually-reproducing organism.
So what was the point of bringing in my grandmother? Elizabeth Liddle:
And our LUCA was almost certainly more complex than its predecessors, so asking why “1000 superfamilies were already present in LUCA” is a bit like asking why a runner half way round the track has “already” completed half a lap!
And if drift is a phenomenon associated with sexual reproduction, how is it going to help you? How does neutral theory help? Mung
gpuccio,
But 462 bits still means a search space for the function of about 10^139. We are in the order of magnitude, more or less, of Dembski’s UPB. No kidding here.
That is indeed a large search space. What mechanism are you proposing is used to search it? Does the search have to find all 462 bits at once?
I will try to give you my reasons for my firm belief that functional configurations are rare and isolated islands in the ocean of possible sequences.
And perhaps those isolated islands when zoomed in upon reveal themselves to be capable of sustaining life as we know it? If you are looking for single 462-bit sequences by random searching (monkey on keyboard, tornado in a junkyard) then of course they will be rare. Very rare. But nobody except ID proponents claims that such sequences arose in a single step with no simpler precursors.
We are in the order of magnitude, more or less, of Dembski’s UPB. No kidding here.
So what's your claim? That the protein families in question were intelligently designed? All proteins? Could some have evolved? How do you tell the difference? WilliamRoache
Elizabeth, My grandmother was not a single celled organism that reproduced by cloning. Was the point of my post @45 just entirely lost? Mung
Yes, it's late here too (Nottingham, UK), so time to chuck the cat off my knee and maybe look at those papers in bed :) See you tomorrow. Lizzie Elizabeth Liddle
Elizabeth: Well, it's rather late here, and I think I will go on tomorrow, if I can. My next big argument will be about how common functional configurations are in the search space. It is a very controversial subject, and an obviously crucial one. I will try to give you my reasons for my firm belief that functional configurations are rare and isolated islands in the ocean of possible sequences. So, to tomorrow (if my work allows). Have a good night, whatever time it is where you live. gpuccio
Elizabeth: First of all, a methodological premise. In the discussion, I will have to label some of the ideas you express as "just so stories" or "fairy tales". I don't do that with any intention of being derogatory to you, but rather with a specific epistemological meaning. We are indeed discussing scientific theories, and any theory can be admitted and discussed. But a scientific theory must have at least a minimum of empirical justification to be really interesting. Otherwise, for me, it is only a "just so story". That said, I would like to briefly comment on the Durston paper, and why I consider it so important. 1) The Durston method to compute functional complexity in protein families. If you read the paper carefully, you will see that it shows a really ingenious way to approximate the true value of functional complexity in a protein family, applying a variation of Shannon's H to the data in the known proteome. We can discuss the details if you want, but for the moment I would like to mention that it is the only simple way to account for the size of the target space for a function, that is, how many sequences can approximately express that biochemical function versus the total search space. Now, look at the table with the results and take, for instance, Ribosomal S2, a not too big protein of 197 AAs. While the total search space is 851 bits, the functional complexity of the family is "only" 462 bits. So, as you can see, in ID we do take into account that the target space for one function is big, and not limited to one or a few sequences. But 462 bits still means a search space for the function of about 10^139. We are in the order of magnitude, more or less, of Dembski's UPB. No kidding here. The table shows the results for 35 protein families. Most of them are above 300 bits of functional complexity. That's something. 
And the important point is also that functional complexity does not strictly correlate with protein length: some proteins are more functionally dense than others. So, we are really measuring an important property of the protein family here. gpuccio
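The bit figures quoted above are easy to check with a two-line sketch (assuming the standard 20-letter amino acid alphabet):

```python
from math import log2, log10

# Total sequence space for a 197-AA protein, in bits: 20 possibilities
# per site gives 197 * log2(20) bits.
raw_bits = 197 * log2(20)
print(round(raw_bits))        # 851

# 462 bits of functional complexity expressed as an order of magnitude
# in base 10: 2^462 is about 10^139.
print(round(462 * log10(2)))  # 139
```

This only verifies the unit conversions in the comment; it takes no position on whether Durston's fits measure is the right quantity, which is what the rest of the thread disputes.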
gpuccio:
Elizabeth:
Anyway, nice to talk to you. It’s good when people return the ball :) Your serve.
Me too! I enjoy talking to you, because you don’t elude the arguments, understand them, and sincerely try to give your answers.
*blushes* Thanks :) I'm enjoying it here, and I appreciate the tolerance I've been met with!
That’s very fine, and it’s more than I usually can expect from many of my interlocutors here. Moreover, it’s the first time that I can debate with a convinced neutralist, and it’s fun.
*adds yet another label to her luggage*
So, be sure that I respect you and your position, which I believe you entertain in full good faith. Unfortunately, I disagree with many of your points and therefore, in a spirit of friendship, I will try to explain why.
Cool.
You raise many different points which deserve detailed discussion. Time is limited, so I will start in a brief way, and we can deepen any point that you find interesting, or simply don’t agree with. I sincerely enjoy intellectual confrontation, provided that it conveys true reciprocal clarification, and not only stereotyped antagonism.
Me too :)
So, let’s start. First of all, the papers I quoted. The Durston paper is the one you have already found. The two Axe papers, instead, are the following: The Case Against a Darwinian Origin of Protein Folds http://bio-complexity.org/ojs/.....O-C.2010.1 and The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway http://bio-complexity.org/ojs/.....O-C.2011.1 More in the next post
Thanks. I will get hold of those and read them. Elizabeth Liddle
Ah yes, you are right. I always get cousins muddled! Yes, second cousin LCA would be great-grandmother. Thanks! Elizabeth Liddle
Elizabeth,
Take you and a hypothetical second cousin. Who is your LCA? (Last Common Ancestor). The answer is your grandmother.
2nd cousin LCA would be a great-grandmother, correct? lastyearon
Elizabeth: Anyway, nice to talk to you. It’s good when people return the ball :) Your serve. Me too! I enjoy talking to you, because you don't elude the arguments, understand them, and sincerely try to give your answers. That's very fine, and it's more than I usually can expect from many of my interlocutors here. Moreover, it's the first time that I can debate with a convinced neutralist, and it's fun. So, be sure that I respect you and your position, which I believe you entertain in full good faith. Unfortunately, I disagree with many of your points and therefore, in a spirit of friendship, I will try to explain why. You raise many different points which deserve detailed discussion. Time is limited, so I will start in a brief way, and we can deepen any point that you find interesting, or simply don't agree with. I sincerely enjoy intellectual confrontation, provided that it conveys true reciprocal clarification, and not only stereotyped antagonism. So, let's start. First of all, the papers I quoted. The Durston paper is the one you have already found. The two Axe papers, instead, are the following: The Case Against a Darwinian Origin of Protein Folds http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.1/BIO-C.2010.1 and The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2011.1/BIO-C.2011.1 More in the next post gpuccio
Mung:
Hi Elizabeth, could you stick to one model please, lol. In addition to focusing on the cell, as in unicellular organisms, I also prefer to focus on a model that doesn’t depend on sexual reproduction.
OK, sure. But the evolutionary dynamics are very different.
Obviously the capacity for sexual reproduction itself had to evolve, and it had to do so based on some model other than sexual reproduction.
I don't expect it evolved "based on some model" at all. I expect it evolved because populations of organisms in which horizontal gene transfer regularly occurred went extinct less often than ones that didn't.
It just confuses things to be switching back and forth between the two. Apples and Oranges, and all that.
Sure, but drift dynamics will be totally different in a cloning population. In fact I don't even think people talk about "drift" with bacteria, do they? It wouldn't really make much sense.
And the LUCA is the FUCA. Just think about it, please.
I might ask you to do the same :) Take you and a hypothetical second cousin. Who is your LCA? (Last Common Ancestor). The answer is your grandmother. What about her Mother? She is also a Common Ancestor (CA), but not the Last Common Ancestor (LCA) - she preceded the LCA. Yes? Elizabeth Liddle
Well, no, it’s more like shooting fish in a barrel With a pea shooter and using spitwads for ammo. Mung
Hi Elizabeth, could you stick to one model please, lol. In addition to focusing on the cell, as in unicellular organisms, I also prefer to focus on a model that doesn't depend on sexual reproduction. Obviously the capacity for sexual reproduction itself had to evolve, and it had to do so based on some model other than sexual reproduction. It just confuses things to be switching back and forth between the two. Apples and Oranges, and all that. And the LUCA is the FUCA. Just think about it, please. Mung
ScottAndrews:
I enjoyed your explanation of the irrelevancy of genetic drift. As I understood it, it takes the problem of specific improvements and seeks to make it smaller by spreading it out over the population.
huh?
This ultimately accomplishes nothing because it still doesn’t explain why it constantly moves toward relatively small targets of improvement rather than behaving randomly.
What moves? Which targets?
The counterargument is that there are many possible targets, not just one, so we shouldn’t look at odds of just one outcome.
Yes indeed.
That’s a bit like saying that if you shoot a missile randomly into space, you’re bound to hit a star (gravity notwithstanding) because there are just so darn many of them.
Well, no, it's more like shooting fish in a barrel :) Elizabeth Liddle
gpuccio I enjoyed your explanation of the irrelevancy of genetic drift. As I understood it, it takes the problem of specific improvements and seeks to make it smaller by spreading it out over the population. This ultimately accomplishes nothing because it still doesn't explain why it constantly moves toward relatively small targets of improvement rather than behaving randomly. The counterargument is that there are many possible targets, not just one, so we shouldn't look at odds of just one outcome. That's a bit like saying that if you shoot a missile randomly into space, you're bound to hit a star (gravity notwithstanding) because there are just so darn many of them. ScottAndrews
gpuccio:
Elizabeth: I must say that you seem to understand well, and essentially agree with, my points, and I thank you for your correct interpretation of them.
No problem, and thank you :)
To your counterpoints, I answer the following: 1) “assuming an a priori “target””. That is the usual “argument” that any beneficial mutation will be selected, and that therefore the “target” is very big. That is false reasoning. First of all, it is not true at all that a lot of molecular variation can be beneficial. Please consider that: a) An existing system such as a living cell, with its structure and metabolism already extremely complex and fine tuned, highly constrains possible variations. That should be obvious also to those who have some experience in programming. The more structured and complex a system already is, the more any useful variation will have to be complex and integrated just to work with what is already there.
Absolutely. Once an organism is near-optimized for its environment, only a very small number of mutations are likely to offer net advantage over what is already there. In other words, in a population well-adapted ("fine tuned") to its environment, there are far more ways to make it worse than to make it better. This is why we must assume that, of near-neutral mutations, more are likely to be very slightly deleterious than very slightly beneficial in that environment. But now change the environment - make it slightly cooler, or have the population migrate northwards. Now, the organisms are no longer "fine-tuned" for their environment, and what was very slightly beneficial may be very slightly deleterious and vice versa. So what we must think of, I submit, is that within any given population, there is a large amount of allelic variation, constantly, if slowly, being drip-fed by new alleles, mostly near-neutral. The allele frequencies tend to stabilise at an optimum in a given environment. But change the environment, and these distributions will change, so that a different set of alleles have the highest frequencies and a different set have the lowest, and a different set have the mean. In the process, interactive combinations that are advantageous will tend to result in a boost in frequency to the alleles that work well in combo, so those combos will themselves tend to increase in frequency. But given a stable environment, of course you are absolutely right - if there is nowhere to go but down, then "natural selection" will be a conservative force, or rather, the best reproducing individuals will be those that inherit the tried-and-true.
b) Going back to the problem of the structure of functional proteins, it can be reasonably calculated that the functional target is anyway small versus the search space, in spite of all the biased research which desperately tries to show the opposite, without succeeding. The Durston method, at present the only one which can evaluate functional information in protein families, gives very high values for most protein families, well beyond the reach of any random walk. And the recent papers by Axe show the improbability of any basic fold emerging by a random walk, and even of the subsequent “evolution” of active sites in an already existing protein family. Nobody in ID has ever said that a single sequence is the target. We are well aware that the whole set of functional sequences is the target. And we take that into account in our calculations.
Well, I'd be grateful for a citation, and even better, a summary, of Durston's work. I've just read this paper: http://www.tbiomed.com/content/4/1/47/abstract What he seems to be saying is that for a known function, some proteins are more what I would term "brittle" (i.e. a smaller proportion of possible sequences will perform the function) than others, and that the brittle ones won't readily evolve by random walk. In other words, that protein families are "irreducibly complex". Would you agree with that summary? I'll perhaps wait for you to comment before I comment further :)
2) “Secondly, but relatedly, the probability of propagation changes dynamically over time. The more individuals who bear an allele, the more probable it becomes that the mutation will eventually go to fixation.” This is nonsense. Your “allele” can exactly be the one which is not apt to receive the second mutation. There is no reason why drift should favor an allele which is compatible with the second, or any other, favourable mutation. Fixation by drift is random, and there is no reason why the fixed mutation should be better than any one that is lost or remains limited. In all cases, the random walk remains a random walk.
I'm sorry, but I'm not understanding your point. No, there is no reason why drift should favour one allele over another - that's why it is called drift! I'm just saying that the more copies of an allele there are in a population, the more likely it is that even more copies will be made. An allele that has drifted near fixation is much more likely to reach it than one that hasn't! It's a simple point, but worth making nonetheless. Every organism bearing allele X represents an opportunity for allele X to be replicated. And once a large number of organisms have allele X, the probability that a new allele Y will end up in a genotype that also contains allele X becomes quite high. But this is obvious, so I'm equally obviously missing your point :) Please advise.
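The point being made here is the standard neutral-theory result that a neutral allele's probability of fixation equals its current frequency. A small Wright-Fisher simulation (binomial resampling; population size and trial count are purely illustrative choices) shows it:

```python
import random

def fixation_probability(copies, pop=50, trials=500, rng=random.Random(42)):
    # Wright-Fisher resampling of a neutral allele: each generation, every
    # one of `pop` offspring carries the allele with probability k/pop.
    fixed = 0
    for _ in range(trials):
        k = copies
        while 0 < k < pop:
            freq = k / pop
            k = sum(rng.random() < freq for _ in range(pop))
        fixed += (k == pop)
    return fixed / trials

print(fixation_probability(10))  # close to 0.2 (theory: 10/50)
print(fixation_probability(40))  # close to 0.8 (theory: 40/50)
```

This supports the narrow claim that near-fixed alleles usually finish the job; whether that helps coordinated multi-mutation features arise is the separate question the two commenters are arguing.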
3) “Thirdly, at any given time, a population hosts a large number of polymorphisms, most of which are probably selectively neutral, in the current environment.” The point of polymorphism does not help in explaining how new folds and superfamilies emerge, which was my question.
If some new alleles are selectively neutral (for example, valine-methionine substitutions are fairly neutral), then you will get protein families arising, because mutations that result in a functional protein won't be deleterious, and some of them will propagate by drift. The ones that result in a degradation of the protein will tend not to. So over time, swilling around the population will be many polymorphisms, some of which will be more prevalent than others. However, in a sexually reproducing population, as well as Single Nucleotide Polymorphisms (SNPs), recombination processes will mean that from time to time, during recombination, part of one allele will be spliced with part of another - so the offspring ends up with part of her grandmother's allele and part of her grandfather's. And again, if these recombined alleles are selectively neutral they will drift around the population, sometimes dropping right out, sometimes propagating extensively, sometimes in between. And they, in turn, will be subject to recombination events. So new alleles are not only being produced all the time by substitution and repetition and deletion events; they are also being recombined, so that we can think of the gene as consisting of sub-genes that can move independently through the population. Which, in turn, means that from time to time a combination of parts will show up that has some advantageous characteristic - a new fold pattern perhaps. Then, with a bit of luck, that allele will propagate, by Darwinian means, through the population. My point being that mutations with a potential advantage when found in combination don't have to arise simultaneously in the same individual. They can arise at widely spaced time points, as long as they can drift independently through the population, which, in a sexually reproducing population, they can.
The emergence of a new protein fold from an existing one requires the change of many amino acids, a coordinated change which is utterly unlikely by any random walk.
Why is it? Any one fold may be unlikely, but we are back to the Texas Sharpshooter. The appearance of a new fold of some kind may have a perfectly good non-zero frequency over time. In fact I'd expect the pdf to have a Poisson distribution!
Functional and selectable intermediate steps have never been shown for any such theoretical transitions, for the simple reason that they don't exist and there is no reason that they should exist.
But, as I keep saying, selectable intermediate steps aren't required. That's one of several huge problems with the concept of Irreducible Complexity, which is what we are, in fact, talking about here. Sure, a couple of selectable steps (i.e. advantageous in some environment in which the population finds itself at some point) will give the probabilities a healthy kick, but stuff drifts. Some of that stuff comes in useful, sometimes singly, sometimes only in combination. But given enough independence of the drifting process, useful combos are bound to happen. What we can't then do is turn round and say "hey, look! isn't it awesome that this protein, which won't perform this amazing function unless it's just the way it is, just happened to arise!" Because that really is just like dealing hand after hand of cards and then expressing astonishment when one day you get a really good hand. Moreover, back to biology, if that lucky hand does something like confer immunity to some environmental pathogen, then those individuals bearing the lucky hand may speciate - move into environments where the pathogen hitherto prevented them from moving, and interbreed, and adapt in other ways to that new environment. Whereas the rest of the population doesn't, and the lucky hand actually drops out of that population. Then a biologist comes along, eons later, and says: "how come this population just happened to get this necessary allele?" when she should simply be noting "this allele enabled this population to evolve in this environment, which would otherwise have been inimical to its continuation".
4) “And our LUCA was almost certainly more complex than its predecessors, so asking why “1000 superfamilies were already present in LUCA” is a bit like asking why a runner half way round the track has “already” completed half a lap!” What a pity that there is no evidence of such predecessors, and no reasonable theory about what they were, and no example of any independent living thing in the universe which is simpler than bacteria or archaea. So, your “half a lap” is what we in ID call a “just so story”.
What do you mean, "there is no evidence of such predecessors"? Absence of evidence is not evidence of absence. Why should we assume that our LUCA was also our FUCA? And if our LUCA was very much more efficient than the other descendants of the then-LUCA, why would we not expect that they would eventually have gone extinct? Indeed the argument is circular! The LUCA is defined as the ancestral population to all currently living things, so if our LUCA's cousins hadn't gone extinct, that LUCA wouldn't be our LUCA!
5) “The “mean functional unit” now may well be 100-150 AAs long (and may have been in our LUCA, though I don’t know), and I agree it seems unlikely that the required sequence would have emerged through “random walk”, but that doesn’t mean that its predecessors didn’t, or that nothing simpler conferred reproductive advantage.” Again, we must reason on what we know, not on just so stories. In the whole known proteome, the “mean functional unit” is 100-150 AAs long. This is a fact, and I reason on facts.
Yes, indeed, but it is surely unsound to assume that what is true of living organisms must have been true of long dead ones. Unless you can supply a good argument that our LUCA was not only our LUCA but the FUCA. Without that argument, we simply cannot extrapolate as you suggest, because by definition, our LUCA is going to have characteristics all living things share, whereas the FUCA (unless it is the same as the LUCA) won't.
6) “But this is a very different question.” No, it isn’t. And anyway, please would you answer it?
Well, I've had a go :) Anyway, nice to talk to you. It's good when people return the ball :) Your serve. Elizabeth Liddle
Sorry, that last sentence should read: What if the initial mutation drastically increases the functional target by simplifying the protein? lastyearon
Going back to the problem of the structure of functional proteins, it can be reasonably calculated that the functional target is anyway small versus the search space, in spite of all the biased research which desperately tries to show the opposite, without succeeding.
How is it possible to calculate the functional target? The only measure of success for any genetic mutation is that it makes the host organism better able to reproduce. Aren't there many examples of proteins that once did one thing and now perform a completely different function? Also, what if the initial mutation drastically reduces the functional target by simplifying the protein? lastyearon
LYO: Pardon, but that's nonsense. Parts have to work with other parts that are matched to them in any reasonably precise system. Actually, that starts with building a chair or a bookshelf. Yes adaptability is key to a robust design, but that is not as opposed to being properly matched together to work. GEM of TKI kairosfocus
gpuccio,
The more structured and complex a system already is, the more any useful variation will have to be complex and integrated just to work with what is already there.
That is a hallmark of poor design. Well designed systems are scalable. Good designers understand that the current system may need to adapt, that parts or functions may need to be added later. As much as possible, they plan for the future in their designs. lastyearon
Elizabeth: I must say that you seem to understand well, and essentially agree with, my points, and I thank you for your correct interpretation of them. To your counterpoints, I answer the following: 1) "assuming an a priori “target”". That is the usual "argument" that any beneficial mutation will be selected, and that therefore the "target" is very big. That is false reasoning. First of all, it is not true at all that a lot of molecular variation can be beneficial. Please consider that: a) An existing system such as a living cell, with its structure and metabolism already extremely complex and fine tuned, highly constrains possible variations. That should be obvious also to those who have some experience in programming. The more structured and complex a system already is, the more any useful variation will have to be complex and integrated just to work with what is already there. b) Going back to the problem of the structure of functional proteins, it can be reasonably calculated that the functional target is anyway small versus the search space, in spite of all the biased research which desperately tries to show the opposite, without succeeding. The Durston method, at present the only one which can evaluate functional information in protein families, gives very high values for most protein families, well beyond the reach of any random walk. And the recent papers by Axe show the improbability of any basic fold emerging by a random walk, and even of the subsequent "evolution" of active sites in an already existing protein family. Nobody in ID has ever said that a single sequence is the target. We are well aware that the whole set of functional sequences is the target. And we take that into account in our calculations. 2) "Secondly, but relatedly, the probability of propagation changes dynamically over time. The more individuals who bear an allele, the more probable it becomes that the mutation will eventually go to fixation." This is nonsense. 
Your "allele" can exactly be the one which is not apt to receive the second mutation. There is no reason why drift should favor an allele which is compatible with the second, or any other, favourable mutation. Fixation by drift is random, and there is no reason why the fixed mutation should be better than any one that is lost or remains limited. In all cases. the random walk remains a random walk. 3) "Thirdly, at any given time, a population hosts a large number of polymorphisms, most of which are probably selectively neutral, in the current environment." The point of polymorphism does not help in explaining how new folds and superfamilies emerge, which was my question. The emergence of a new protein fold from an existing one require the change of a lot of aminoacids, a coordinated change which is utterly unlikely by any random walk. Funtional and selectable intermediate steps have never been shown for any such theorical transitions, for the simple reason that they don't exist and there is no reason that they should exist. 4) "And our LUCA was almost certainly more complex than its predecessors, so asking why “1000 superfamilies were already present in LUCA” is a bit like asking why a runner half way round the track has “already” completed half a lap!" What a pity that there is no evidence of such predecessors, and no reasonable theory about what they were, and no example of any independent living thing in the universe which is simpler than bacteria or archea. So, your "half a lap" is what we in ID call a "just so story". 5) "The “mean functional unit” now may well be 100-150 AAs long (and may have been in our LUCA, though I don’t know), and I agree it seems unlikely that the required sequence would have emerged through “random walk”, but that doesn’t mean that its predecessors didn’t, or that nothing simpler conferred reproductive advantage." Again, we must reason on what we know, not on just so stories. 
In the whole known proteome, the “mean functional unit” is 100-150 AAs long. This is a fact, and I reason on facts. 6) "But this is a very different question." No, it isn't. And anyway, please would you answer it? gpuccio
GP: on the ball as usual. kairosfocus
Gpuccio:
Elizabeth: I am well aware of the concept of genetic drift, and I have always found it completely useless for evolutionary theory. I will try to explain myself better. OK, genetic drift, if and when it happens (you certainly know that some essential conditions must be satisfied) can expand a neutral or quasi neutral mutation without the need of a positive selection due to survival benefit. OK. And so? The problem, apparently overlooked by neutralists (and by you) is that any neutral mutation can randomly expand, and any neutral mutation can randomly be lost. And most neutral mutations will simply remain there, in a minority of the population.
Yes indeed (I have not overlooked it!)
Now, from the point of view of probabilities, that does not give any advantage versus a purely random walk. If I need, for a selectable trait to emerge, the coordinated mutation of two amino acids, and if the first mutation happens in one individual, the probability of that specific mutation to expand by genetic drift is extremely low, because any new mutation can expand by drift, but only a few will do that, while a few will be lost. So, the probability that the second necessary mutation may happen in the subclone with the first mutation is extremely low if no drift happens (probabilities here obviously multiply), but the probability that the first mutation be expanded by drift and then the second mutation may happen in the expanded clone is similarly low, due to the low probability of the first event (expansion by drift) for that particular mutation.
I think the metaphors are tripping you up here. Let me try to rephrase (without change in meaning, I hope, but trying to avoid metaphors where possible) what you just wrote, above: "Now, a purely random walk will not result in an advantageous trait. If an advantageous trait requires two changed amino acids in a protein, neither of which, singly, confers any advantageous phenotypic effect, and if the mutation that produces the first change happens in one individual, the probability that it will propagate solely through genetic drift is low, because only a few will do that, to any great extent, and a few will be lost completely. So the probability that the second mutation will occur in a bearer of the first will be extremely low. And if it does not, the probability that it will propagate by drift will be as low as the probability of the first propagating by drift. And because probabilities here obviously multiply, the probability of the two mutations ever meeting up in the same individual is even more extremely low." I think this is what you are saying (I've kept in the random walk metaphor as we both seem clear about what it signifies). If so, my response to your summary here:
In brief, genetic drift in no way adds to the probability of having a complex functional new trait by purely random mechanisms (and genetic drift, like mutation, is a purely random mechanism).
is threefold. Firstly, you are assuming an a priori “target” – a specific trait that involves two independent amino acid changes, and that will confer a reproductive advantage. However, this would be the Texas Sharp Shooter fallacy! There may be many, many proteins that would confer the same or similar advantageous trait, and, indeed, there may be (and are) many advantageous traits. So computing the probability of any one, by the method you give, is the equivalent of computing the probability of a single hand of cards, instead of the probability of a good hand, or even more appositely, the probability of a hand that may become a good hand after some subsequent draw. Which is obviously much higher. Secondly, but relatedly, the probability of propagation changes dynamically over time. The more individuals who bear an allele, the more probable it becomes that the mutation will eventually go to fixation. Thirdly, at any given time, a population hosts a large number of polymorphisms, most of which are probably selectively neutral, in the current environment. They may have propagated because at some previous time they were advantageous, or they may simply have drifted into substantial prevalence. It’s quite salutary to write a program of the kind I just recommended to Mung, and to see how true the Central Limit Theorem really is. You always end up with a Gaussian, in which a few alleles are very rare, a few are very common, and a very large number are borne by about half the population. So even if we assume (probably a fair assumption) that many advantageous traits arise from gene-gene interactions (you need a specific combo to get the advantage), the opportunities for those combinations to occur are actually quite substantial. Of course the probability of any one specific combination, out of all the alleles, occurring, may be very low, but that is not the relevant probability. 
The relevant probability is the probability of some advantageous combination occurring in some individual at some point. And if that individual has lots of offspring (or if it occurs in several individuals, and they all tend to have more offspring), then individuals with the combination will become more prevalent in the population. It’s fairly easy to model, and I may do it later if I have a moment. Start off with no selection at all, then modify the model so that a minority of combinations are advantageous, and see how often an advantageous combination emerges and propagates through the population.
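The two-stage experiment described here (no selection at first, then giving a minority of allele combinations a small advantage) can be sketched roughly as follows. This is only an illustrative toy, not anything from the discussion: the population size, generation count, and the 10% fitness edge for the (1, 1) combination are all arbitrary choices.

```python
import random

random.seed(2)

POP, GENS = 200, 300

def evolve(selection):
    """Return the final frequency of the (1, 1) allele combination.

    Each individual is a pair of alleles (0 or 1) at two loci. Every
    generation, offspring are drawn from the previous generation --
    purely at random when selection is off, with a small reproductive
    edge for the (1, 1) combination when it is on.
    """
    pop = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(POP)]
    for _ in range(GENS):
        weights = [1.1 if selection and ind == (1, 1) else 1.0 for ind in pop]
        pop = random.choices(pop, weights=weights, k=POP)
    return sum(ind == (1, 1) for ind in pop) / POP

print("drift only:     ", evolve(False))
print("with selection: ", evolve(True))
```

Under pure drift the combination's final frequency wanders anywhere between 0 and 1; with even a small advantage it tends to become much more prevalent over repeated runs.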
Only positive natural selection, if and when it happens, adds a necessity component to the algorithm, and therefore can in principle increase the probabilities of complex new functional traits. That’s why the whole darwinian mechanism is useless, if complex traits are not deconstructable into simple selectable variations. Which is exactly the case.
Except that it isn’t :)
Moreover, you insist on allele shuffling, but I can’t see how you can even begin to try to explain the emergence of new protein superfamilies by that mechanism. Please, try to explain how about 1000 superfamilies were already present in LUCA, and how 1000 more were generated in the course of evolution, keeping in mind that the mean functional unit (the protein domain) is usually 100 – 150 AAs long, with a functional complexity well beyond the reach of any random walk, with or without drift. Good luck.
Well, “allele shuffling” isn’t the only mechanism of mutation, and indeed, wouldn’t have been relevant to our LUCA, which can’t have been a sexually-reproducing organism. And our LUCA was almost certainly more complex than its predecessors, so asking why “1000 superfamilies were already present in LUCA” is a bit like asking why a runner halfway round the track has “already” completed half a lap! Because he started earlier than the time you are asking about! The “mean functional unit” now may well be 100-150 AAs long (and may have been in our LUCA, though I don’t know), and I agree it seems unlikely that the required sequence would have emerged through “random walk”, but that doesn’t mean that its predecessors didn’t, or that nothing simpler conferred reproductive advantage. But this is a very different question. Elizabeth Liddle
Elizabeth: I am well aware of the concept of genetic drift, and I have always found it completely useless for evolutionary theory. I will try to explain myself better. OK, genetic drift, if and when it happens (you certainly know that some essential conditions must be satisfied) can expand a neutral or quasi neutral mutation without the need of a positive selection due to survival benefit. OK. And so? The problem, apparently overlooked by neutralists (and by you) is that any neutral mutation can randomly expand, and any neutral mutation can randomly be lost. And most neutral mutations will simply remain there, in a minority of the population. Now, from the point of view of probabilities, that does not give any advantage versus a purely random walk. If I need, for a selectable trait to emerge, the coordinated mutation of two amino acids, and if the first mutation happens in one individual, the probability of that specific mutation to expand by genetic drift is extremely low, because any new mutation can expand by drift, but only a few will do that, while a few will be lost. So, the probability that the second necessary mutation may happen in the subclone with the first mutation is extremely low if no drift happens (probabilities here obviously multiply), but the probability that the first mutation be expanded by drift and then the second mutation may happen in the expanded clone is similarly low, due to the low probability of the first event (expansion by drift) for that particular mutation. In brief, genetic drift in no way adds to the probability of having a complex functional new trait by purely random mechanisms (and genetic drift, like mutation, is a purely random mechanism). Only positive natural selection, if and when it happens, adds a necessity component to the algorithm, and therefore can in principle increase the probabilities of complex new functional traits. 
That's why the whole darwinian mechanism is useless, if complex traits are not deconstructable into simple selectable variations. Which is exactly the case. Moreover, you insist on allele shuffling, but I can't see how you can even begin to try to explain the emergence of new protein superfamilies by that mechanism. Please, try to explain how about 1000 superfamilies were already present in LUCA, and how 1000 more were generated in the course of evolution, keeping in mind that the mean functional unit (the protein domain) is usually 100 - 150 AAs long, with a functional complexity well beyond the reach of any random walk, with or without drift. Good luck. gpuccio
Mung:
But then, assuming that other organisms in the population which do not carry that mutation are reproducing at at least the same rate (it’s neutral, after all), then that mutation is not increasing in frequency in the population, is it.
You are a programmer - program a simple drift model and see what happens :) In other words, make reproduction entirely orthogonal to genotype, and see what happens to allele frequencies in each generation. You can keep the population constant, or let it fluctuate, as you like. I might even post one later - race you :) Elizabeth Liddle
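For anyone taking up the challenge, here is a minimal sketch of such a drift model, with reproduction entirely orthogonal to genotype as suggested; the population size, generation cap, and the 50-run experiment at the bottom are arbitrary illustrative choices.

```python
import random

def drift(pop_size=100, generations=1000, start_freq=0.5, seed=1):
    """Track the frequency of a selectively neutral allele over time.

    Each generation, every one of pop_size offspring inherits the
    allele with probability equal to its current frequency -- i.e.
    reproduction is entirely independent of genotype.
    """
    random.seed(seed)
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        history.append(freq)
        if freq in (0.0, 1.0):  # allele lost or fixed: drift is absorbing
            break
    return history

# Run the model many times: with no selection at all, allele
# frequencies still wander, and runs typically end in loss or fixation.
finals = [drift(seed=s)[-1] for s in range(50)]
print("fixed:", sum(f == 1.0 for f in finals),
      "lost:", sum(f == 0.0 for f in finals))
```

Even though nothing is "selected", most runs eventually absorb at 0 or 1, which is the point of the drunkard's-walk analogy used elsewhere in this thread.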
oops yet again messed up the quote tags! Sorry! Elizabeth Liddle
gpuccio:
The point you seem not to understand is that a mutation has to be “selected”, that is to “expand” in the population, and not only to “survive”, if that mutation has to have any advantage versus a random search.
Forget the language of "select" and "search" for now - they are just metaphors. Think of what actually happens. If a mutation is truly neutral (confers a phenotypic effect but that effect makes no difference at all to the chances of survival), then it may die out within a generation, if its owner happens to be eaten by a predator before reproducing, or it may be passed on. And if you read my post about the drunkard's walk, you will see that even a truly neutral mutation (an absolutely flat street - no bias in the drunk towards staggering one way or the other) has a non-zero chance of ending up at the North end of the street, and the further up it gets, the better its chances of making it to the other end. This is what we call drift.
The whole necessity algorithm of neo-darwinian theory relies on positive selection, even if darwinists often seem to forget that.
Ultimately, yes. But propagation of potentially useful mutations does not necessarily depend on "positive selection" at the time when they first appear. Indeed, my hunch is that most ultimately useful traits, i.e. those that really do increase the probability of survival, are combinations of alleles that have already been swimming around neutrally in the gene pool for some time. My guess is that mutations that are useful at their time of first appearance are probably the exception rather than the rule.
Without expansion of the selected trait in the population, the chances of a coordinated double or multiple mutation are not essentially different from a random search: IOWs, no even slightly complex functional trait can ever be generated by random mutations alone.
Except that I think your premise is wrong, as I explain above. If most advantageous traits (in a particular environment) are polygenic traits that have been drifting around for many generations until they proved advantageous, then there isn't a problem, is there? A criticism often leveled at the extrapolation from "micro" to "macro" evolution is that "micro" evolution doesn't involve new alleles. This is probably false, in fact - new alleles are appearing all the time, and there's no good reason to think they don't contribute to microevolution. But I would agree that most evolution (including microevolution) occurs by means of differential reproduction, from generation to generation, arising from the interaction between genotype and environment, those with the best allele "cocktails" for the current environment preferentially passing on those "cocktails" to the next generation. So alleles that tend to appear most often in the most successful cocktails become more prevalent, and those that appear less commonly, less so. Then the environment changes, and the best-performing cocktail changes. Some alleles may be common to both optima, some may not. But I do think it's important to get away from the idea that single genes/alleles are what matter in evolution. It's simply wrong, not least because gene-gene and gene-environment interactions matter too.
Elizabeth Liddle
Mung:
As in leaving 1.022 offspring rather than 1.020 offspring?
Averaged across a population, yes. Elizabeth Liddle
But then, assuming that other organisms in the population which do not carry that mutation are reproducing at at least the same rate (it's neutral, after all), then that mutation is not increasing in frequency in the population, is it. Mung
The point you seem not to understand is that a mutation has to be “selected”, that is to “expand” in the population, and not only to “survive”, if that mutation has to have any advantage versus a random search.
The near-neutral mutation doesn't need to be selected for; the organism that carries it just needs to survive and reproduce, thus reproducing the neutral mutation. Take this (overly simplistic, for the point of pedagogy) example: If the organism has four offspring that carry the mutation, and two of them have another four offspring each, of which two reproduce, after three generations you should have 16 individuals who carry that mutation, and 32 in the next generation. The mutation is not selected 'for'; it just isn't selected against, and the natural process of reproduction spreads the mutation. DrBot
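DrBot's toy numbers (four offspring per reproducing carrier, two of whom go on to reproduce in turn) amount to simple geometric doubling of the carrier line; a trivial sketch, using exactly those made-up parameters:

```python
def carriers(generations, offspring_per_parent=4, reproducers_per_parent=2):
    """Carriers of the neutral mutation born in each generation,
    starting from a single reproducing founder."""
    reproducers = 1
    counts = []
    for _ in range(generations):
        counts.append(reproducers * offspring_per_parent)
        reproducers *= reproducers_per_parent
    return counts

print(carriers(5))  # [4, 8, 16, 32, 64]
```

Of course, in a real population each carrier's offspring count is a random variable, which is exactly why drift can also lose the mutation entirely.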
Elizabeth: So we can say that half of all near-neutral mutations have a chance of surviving several generations, and of those that do, quite a number of them will propagate quite widely. The point you seem not to understand is that a mutation has to be "selected", that is to "expand" in the population, and not only to "survive", if that mutation has to have any advantage versus a random search. The whole necessity algorithm of neo-darwinian theory relies on positive selection, even if darwinists often seem to forget that. Without expansion of the selected trait in the population, the chances of a coordinated double or multiple mutation are not essentially different from a random search: IOWs, no even slightly complex functional trait can ever be generated by random mutations alone. gpuccio
What is probably more true is that in general, those effects are likely to have only a small effect on successful reproduction.
As in leaving 1.022 offspring rather than 1.020 offspring? Mung
Strange that there isn’t a Nobel Prize for “evolutionary biology”. Nor has anyone in evolutionary biology made any discovery worth one.
Not really strange - the Nobel prizes were instituted over 100 years ago, before evolutionary biology was its own discipline. But even then the 1973 prize was awarded to Karl von Frisch, Konrad Lorenz and Nikolaas Tinbergen for their work in animal behaviour, using an evolutionary framework:
The way out of this dilemma [between vitalists and reflexologists] was indicated by investigators who focused on the survival value of various behaviour patterns in their studies of species differences. Behaviour patterns become explicable when interpreted as the result of natural selection, analogous with anatomical and physiological characteristics. This year's prize winners hold a unique position in this field.
Heinrich
So we can say that half of all near-neutral mutations have a chance of surviving several generations, and of those that do, quite a number of them will propagate quite widely. Even if they confer no benefit at all, alone, and even, interestingly, if they confer a slight disadvantage. So the chances of several near-neutral mutations “meeting up” is no stranger than finding several drunks near the north end of the street, if rather more of them started off near the lamppost.
Mutation and selection of beneficial genes is an insufficient explanation for most any feature of living things. It enables the hope that the very best of many tiny modifications add up to something. How is mutation and selection of just about anything, beneficial, neutral, or disadvantageous, a better explanation? That's akin to using those wandering drunks to explain an orchestrated cheerleading routine or a successful corporation. So much is said about what might be hypothetically possible and how genes propagate, but there's no actual explanation buried in there. ScottAndrews
Both questions really apply. No organism is guaranteed survival to reproduction or the survival of its offspring due to a single potentially beneficial gene. The difference would be marginal, perhaps undetectable. And yet the explanation is built upon those changes being selected. And, at the same time, the concept of things competing for survival and to reproduce doesn't take into account that multiple beneficial mutations (generously allowing that such things occur as often as raindrops) must also compete against each other. Why are so many smart people still chasing after this? ScottAndrews
ScottAndrews:
Population geneticists even have equations for it. In my example I used tiny changes because those are the only sort that individual mutations can account for.
Actually, that's not quite true. Mutations that affect developmental trajectory can have quite marked phenotypic effects (limb length, for example). But generally, of course, you are correct. What is probably more true is that in general, those effects are likely to have only a small effect on successful reproduction.
But the implication is that such tiny changes, each originating in a single organism, add up to staggering, varied works of art, marvels of engineering, and ingenious behaviors.
Yes, when accumulated over many generations.
And supposedly each mutation must survive and propagate, despite conferring at best minimal benefit, until it meets up with several more and finally accomplishes something. (Wow, that sounds an awful lot like foresight and planning.)
No, not really, because neutral, or near-neutral, mutations do in fact propagate, in the manner of the drunkard's walk. The usual picture is: imagine a drunk at a lamp post. Every step he takes will take him either North or South, with equal probability. After a couple of hours, the chances that he will be at the lamp post are very small, and the chances he will be quite a long way from the lamp post are quite high. The chances that he will be quite a long way North of the lamp post are half that. So we can say that half of all near-neutral mutations have a chance of surviving several generations, and of those that do, quite a number of them will propagate quite widely. Even if they confer no benefit at all, alone, and even, interestingly, if they confer a slight disadvantage. So the chances of several near-neutral mutations "meeting up" is no stranger than finding several drunks near the north end of the street, if rather more of them started off near the lamppost.
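The lamp-post picture is easy to check numerically; a small sketch (the step count, the "well North" threshold, and the number of simulated walks are all arbitrary choices):

```python
import random

random.seed(0)

def final_position(steps=100):
    # Each step goes one unit North (+1) or South (-1) with equal probability.
    return sum(random.choice((-1, 1)) for _ in range(steps))

walks = [final_position() for _ in range(10_000)]

at_lamppost = sum(p == 0 for p in walks) / len(walks)
well_north = sum(p > 10 for p in walks) / len(walks)

# Only a small fraction of walks end back at the lamp post, while a
# sizeable fraction end well North of it (and, by symmetry, roughly
# the same fraction well South).
print(at_lamppost, well_north)
```

The average position stays near the lamp post, but the spread grows with the square root of the number of steps, which is why "a long way from the lamp post" becomes the typical outcome.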
I hope no one reads the cited comment and gets the mistaken impression that there are verifiable formulas for such things or even specific hypotheses to explain them. I wonder how many times people read that ‘x happens and then y happens and then z happens’ and because of the confident wording they never realize that no one has ever seen x, y, or z.
You are right that population genetics is a very theoretical field (which is why their pronouncements and dilemmas should be taken with a large pinch of salt!) But asexual populations have been extensively studied (obviously in an asexually reproducing population, you won't get coincident mutations at all, but you can measure the relative fitness of a mutated population with its ancestral population), and genetic accumulations can be traced statistically. For instance, in the human genome, gene-gene interactions are the subject of important investigation, and that is precisely about what happens "when x happens and then y happens then z happens" and the result is an individual with x, y and z. But I do think that the Selfish Gene concept has been a misleading one (though Dawkins had an important, if minor, point). Genes aren't selected, phenotypes are, and phenotypes bear vast cocktails of genes many of which have large numbers of polymorphisms. That's why we are all unique, genetically. So if a certain range of allele cocktails are beneficial in a certain environment, by conferring a certain range of a certain trait, those alleles will tend to become more prevalent in the population, and any new alleles that extend the range of that trait in the same direction will tend to join the favoured cocktail. Then, when the environment changes, the optimal cocktail will shift, and again, any new alleles that contribute to that shift will tend to join the crowd. A bit like fashion and celebrity, really :) And the interesting thing is that because most traits are polygenic, the contributing genes will tend to be preserved in a population, even when the environment changes, precisely because no one gene is responsible for the trait, leaving the gene pool rich enough to respond to new changes. As you say, a kind of "foresight", or perhaps "memory" - but one that is perfectly explicable by means of fairly simple naturalistic stochastic models.
Elizabeth Liddle
Strange that there isn't a Nobel Prize for "evolutionary biology". Nor has anyone in evolutionary biology made any discovery worth one. Perhaps that answers Thomas' question about competence... Joseph
tsmith:
And something Ms. Liddle and all the other followers of Darwin here have yet to answer is, what is the probability that a new beneficial mutation will be lost due to ‘chance’ alone?
I have already answered this. The answer is “high”.
Haldane’s dilemma
Actually Haldane's dilemma is a little different. It's closer to ScottAndrews' scenario http://en.wikipedia.org/wiki/Haldane%27s_dilemma Elizabeth Liddle
Population geneticists even have equations for it. In my example I used tiny changes because those are the only sort that individual mutations can account for. But the implication is that such tiny changes, each originating in a single organism, add up to staggering, varied works of art, marvels of engineering, and ingenious behaviors. And supposedly each mutation must survive and propagate, despite conferring at best minimal benefit, until it meets up with several more and finally accomplishes something. (Wow, that sounds an awful lot like foresight and planning.) I hope no one reads the cited comment and gets the mistaken impression that there are verifiable formulas for such things or even specific hypotheses to explain them. I wonder how many times people read that 'x happens and then y happens and then z happens' and because of the confident wording they never realize that no one has ever seen x, y, or z. ScottAndrews
velikovskys: I didn't establish the criterion of peer-reviewed publications. If you followed the Dover trial, and all the nasty blogging against ID that has occurred ever since, you will know that the criterion of peer-reviewed publications comes from the ID critics, not from me. I'm merely applying the criterion to those who loudly insist upon it. Can they meet their own standard for good science? gpuccio: Good to hear from you. Can you name me a paper *in evolutionary biology*, either published in a peer-reviewed secular journal, or read at a secular biology or evolution conference, published or read in the past ten years, by Ken Miller, by Larry Moran, by P. Z. Myers, by Eugenie Scott, or by any of the regular or frequent columnists at Biologos? That's what I'm asking for. Peer-reviewed material by full-time evolutionary biologists is another matter. I'm not denying that publications by Coyne and Carroll and Margulis and Lima de Faria exist. The question is whether most of our critics are competent to speak about the field in which they are making dogmatic statements. Thomas Cudworth
And something Ms. Liddle and all the other followers of Darwin here have yet to answer is, what is the probability that a new beneficial mutation will be lost due to ‘chance’ alone? I have already answered this. The answer is “high”.
Haldane's dilemma tsmith
Natural selection operates at the level of the organism, not at the level of the cell.
definition:
the process by which forms of life having traits that better enable them to adapt to specific environmental pressures, as predators, changes in climate, or competition for food or mates, will tend to survive and reproduce in greater numbers than others of their kind, thus ensuring the perpetuation of those favorable traits in succeeding generations.
is a meaningless tautology. What's a favorable trait? One that helps it survive. How do we know it's a favorable trait? It helps it survive. Survival of the fittest is meaningless and useless, just like evolution itself... as even Coyne has admitted. tsmith
ScottAndrews:
Mung what is the probability that a new beneficial mutation will be lost due to ‘chance’ alone? This doesn’t get mentioned nearly enough. How is the effect of a single beneficial mutation great enough to ensure survival and reproduction?
It probably isn't. That doesn't matter. Nothing "ensure[s] survival and reproduction"; it only raises the likelihood of doing so in a specific environment.
How many times does an astronomically unlikely* complementary set of mutations for some minor improvement in digestion or immune response get erased because the creature gets picked off by a predator before it reproduces?
Not very often, because it is astronomically unlikely, as you say, for a complementary set of mutations for a minor improvement in digestion or immune response to arise in the same individual. What is far more likely is that they arise separately in different individuals, who then breed, each one propagating by drift so that they are possessed by relatively large numbers of the population, and thus quite likely to coincide in a number of individuals, who then, in aggregate, will tend to produce more offspring than those who do not possess the relevant combo.
No one factors that in when they imagine this constant, incremental process in which beneficial mutations increase fitness, get selected, etc., etc.
Well, yes, they do. Almost everyone does. Population geneticists even have equations for it.
*I realize that use of this phrase invites mockery, because only an ignorant rube takes the likelihood of events into account, and the burden is entirely on me to calculate just how unlikely they are.
Not at all. You are absolutely right. The probability of many interactively advantageous mutations occurring all together in a single freakishly lucky offspring is astronomically low, and, indeed, if the advantage is small, the poor kid might get zapped by something quite different anyway. And even if it bred, it's only going to have one copy of each of these amazing new alleles, so it's only going to hand half of them, on average, to its offspring, who then won't have the magic combo. But then nobody actually postulates that that's how polygenic traits actually arise! Elizabeth Liddle
Mung what is the probability that a new beneficial mutation will be lost due to ‘chance’ alone? This doesn't get mentioned nearly enough. How is the effect of a single beneficial mutation great enough to ensure survival and reproduction? How many times does an astronomically unlikely* complementary set of mutations for some minor improvement in digestion or immune response get erased because the creature gets picked off by a predator before it reproduces? No one factors that in when they imagine this constant, incremental process in which beneficial mutations increase fitness, get selected, etc., etc. *I realize that use of this phrase invites mockery, because only an ignorant rube takes the likelihood of events into account, and the burden is entirely on me to calculate just how unlikely they are. ScottAndrews
TC So I’m giving these people the chance to disprove my hypothesis by writing in and telling us what they have published, and what conferences they have read papers at. I will take silence as confirmation of my hypothesis. What is your expertise in judging whether or not this is a valid criterion? Are you applying the same standard to Darwin critics? Shouldn't they demonstrate the same? velikovskys
Thomas Cudworth @4 -
By “competent” I meant competent as understood by current practitioners, i.e., by scientists whose full-time job is evolutionary biology. ... I’m implying only that some people are far more versed in the evolutionary biology literature than others, i.e., keep up with the latest theoretical models and the latest data, while others “keep up” only by reading Scientific American.
This seems reasonable to me. I'd guess that Miller, Moran and Myers all keep up with the primary literature, and hence would be considered competent in the field. Falk & Venema may not be - their expertise seems to be in molecular and developmental biology (I may be doing them a disservice: I don't know their work at all. It's a long way from what I do). I don't know how much effort Genie Scott puts into keeping up with the literature, so I can't comment on her.
It’s my working hypothesis that Miller, Falk, Venema, Moran, Myers, Pennock, Scott, etc., would not be considered competent *in current evolutionary theory* by the vast majority of full-time practitioners. They would be thought of as in some cases good scientists in their own special areas, and in other cases as useful popularizers of evolution, but not as making any original contribution to understanding how evolution works.
You've moved the goalposts, from following and understanding evolution, to making an original contribution to its understanding. I'm not sure being a competent evolutionary biologist is the same as being a competent researcher in evolutionary biology. Heinrich
As for polygenic traits: Actually, interestingly, this is one of the aspects of phenotypic traits that confers robustness on a population. Most traits are polygenic. So, because selection operates at the level of the whole organism, what is "selected" are good cocktails of alleles, not (usually) single alleles. This is one mechanism by which alleles that are potentially useful, but possibly slightly deleterious in the current environment, are retained in the population. Populations in which beneficial traits depend on a single allele are very fragile in the face of environmental change. You may have meant: 600 genes are necessary for mitosis. I don't know if this is the case: it may be, now. That would not mean it had always been that way, nor does it mean that there is no allelic variation in those genes. Elizabeth Liddle
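The "cocktails of alleles" claim above can be sketched with a toy Wright-Fisher-style simulation (all parameter values below are illustrative assumptions, not taken from any cited study). Selection weighs each whole genotype, while mutation supplies beneficial alleles one at a time and recombination shuffles them between lineages, so the population accumulates many beneficial alleles without any single offspring ever needing to receive them all by simultaneous mutation:

```python
import itertools
import random

random.seed(1)

N_LOCI = 20     # loci contributing to a polygenic trait
POP = 200       # population size
MU = 0.005      # per-locus chance of gaining the beneficial allele
S = 0.2         # multiplicative fitness gain per beneficial allele
GENS = 300

def fitness(genome):
    # Selection acts on the whole organism: all of its alleles count at once.
    return (1.0 + S) ** sum(genome)

pop = [[0] * N_LOCI for _ in range(POP)]  # start with no beneficial alleles
for _ in range(GENS):
    cum = list(itertools.accumulate(fitness(g) for g in pop))
    nxt = []
    for _ in range(POP):
        # Two fitness-weighted parents; free recombination at each locus.
        pa, pb = random.choices(pop, cum_weights=cum, k=2)
        child = [random.choice(pair) for pair in zip(pa, pb)]
        # One-way mutation 0 -> 1 (back-mutation ignored for simplicity).
        child = [1 if random.random() < MU else a for a in child]
        nxt.append(child)
    pop = nxt

mean_benef = sum(sum(g) for g in pop) / POP
print(f"mean beneficial alleles per genome after {GENS} generations: "
      f"{mean_benef:.1f} of {N_LOCI}")
```

Note that the probability of all 20 alleles arising by mutation in one individual in one generation would be about MU^20 ≈ 10^-46, yet the population-level mean climbs toward all 20 loci anyway, because the alleles arise separately and are assembled by selection plus recombination.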
Mung:
I try more and more to ignore any aspect of evolutionary theory which is above the level of the cell, what takes place in the cell and during the cell cycle. Cellular structures and processes.
In that case you cannot hope to understand evolutionary theory because natural selection operates at the level of the organism not at the level of the cell. And natural selection is fairly fundamental to evolutionary theory!
And something Ms. Liddle and all the other followers of Darwin here have yet to answer is, what is the probability that a new beneficial mutation will be lost due to ‘chance’ alone?
I have already answered this. The answer is "high". The riskiest stage for a beneficial mutation is when it only exists in one organism. If that organism doesn't reproduce for some reason, then it is lost (and we'll never know whether it would have been beneficial or not, but let's assume that Laplace's demon does). If the organism does reproduce, and that allele is passed on, then the risk of being lost drops very slightly, and with every generation, the risk drops still more. But hazard being what it is, a new allele that would, potentially, hugely increase an organism's life chances can still become extinct within a few generations or less simply because the hazards those individuals happened to encounter were not the ones to which the new allele offered any special advantage, or not sufficient advantage. Elizabeth Liddle
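This point has a classical quantitative form: under a simple branching-process model (Haldane, 1927), a new mutation with selective advantage s establishes with probability of only about 2s, so even a 5% advantage is lost by chance roughly nine times out of ten. A small simulation sketch illustrates it (the Poisson offspring distribution, the establishment cap, and the parameter values are all illustrative modelling assumptions, not data from any cited study):

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's algorithm for Poisson sampling (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < limit:
            return k
        k += 1

def lineage_survives(s, cap=200, max_gen=2000):
    """Track carriers of one new mutation; each carrier leaves
    Poisson(1 + s) offspring. Reaching `cap` copies counts as
    'established', since extinction from that many carriers is
    vanishingly unlikely."""
    n = 1  # the mutation starts in a single organism
    for _ in range(max_gen):
        n = sum(poisson(1.0 + s) for _ in range(n))
        if n == 0:
            return False   # lost to chance
        if n >= cap:
            return True    # established
    return True

s = 0.05                   # a 5% fitness advantage
trials = 5000
surv = sum(lineage_survives(s) for _ in range(trials)) / trials
print(f"simulated establishment probability: {surv:.3f}")
print(f"Haldane's 2s approximation:          {2 * s:.3f}")
```

With these settings the simulated establishment rate lands near the 2s ≈ 0.10 approximation: the beneficial mutation is indeed usually lost while it exists in only a handful of carriers.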
Mung: 600 ‘genes’ involved in mitosis alone!? Are you kidding me? You are totally right! And there is much more. What about half of the existing protein superfamilies already present in LUCA (whatever it is)? When Gil recently gave his intriguing metaphor about what neo darwinist thought really is (for the distracted, I refer to the "Himalayan dung heap" concept), he was not exaggerating! gpuccio
Thomas: If I am right, we would expect to find few or no scientific conference papers or refereed secular journal articles on evolutionary biology written by any of these people in the past ten years. Unfortunately, that is not the case. The point is, biologists are certainly competent, but they are biased. They find true facts and they interpret them wrongly, to remain in the context of the existing paradigm. Sometimes they even, more or less consciously, force the research context to be able to support conclusions that cannot be supported. I have discussed here in detail, in the past, the case of the famous Szostak paper about functional information in random protein repertoires, which is a good example of that kind of biased research. While the author is certainly very competent, his methodology and his conclusions are wrong, because ideologically biased. gpuccio
Graham: And I'm not sure I would be positing Behe as the gold standard … isn't he the guy that was disowned in a signed statement from the rest of his faculty? Well, personally I would not posit anyone as the gold standard. But the "statement" you refer to is certainly one of the most shameful things I have ever seen in my life: a simple act of intolerance, ignorance and, probably, cowardice. Any sincere scientist, even the most die-hard neo-darwinist, should feel compelled to disavow such behaviour, in order to keep a minimum of moral integrity. gpuccio
Gil: There is a technical competence, which has its value: understanding empirical data, knowing in detail how experiments are done and how things work in the lab. But evolutionary theory is a philosophical dogma, a cognitive bias which has obscured biological (and, more generally, scientific) thought for decades. So, being "competent" does not in any way mean that one is more easily free from that bias. Indeed, the opposite is true: as competence must usually be obtained in the context of academia, and as academia has long been the extreme defender of that bias, a "competent" person (i.e., a good biologist) is much more likely to wholly accept and share that bias. And even if he does not, he will usually keep quiet, for obvious reasons. Evolutionary biologists are good scientists as long as they find new data. They are often very biased scientists when they interpret those data. The war between neo-darwinism and ID is more a philosophical and cultural war than a scientific one. At the scientific level, ID is already winning, by far :) gpuccio
I will take silence as confirmation of my hypothesis. NOISE! Mung
Gil, I hear and I sympathize. By "competent" I meant competent as understood by current practitioners, i.e., by scientists whose full-time job is evolutionary biology. I'm not implying that evolutionary biology has got very far in explaining anything. I'm implying only that some people are far more versed in the evolutionary biology literature than others, i.e., keep up with the latest theoretical models and the latest data, while others "keep up" only by reading Scientific American. It's my working hypothesis that Miller, Falk, Venema, Moran, Myers, Pennock, Scott, etc., would not be considered competent *in current evolutionary theory* by the vast majority of full-time practitioners. They would be thought of as in some cases good scientists in their own special areas, and in other cases as useful popularizers of evolution, but not as making any original contribution to understanding how evolution works. If I am right, we would expect to find few or no scientific conference papers or refereed secular journal articles on evolutionary biology written by any of these people in the past ten years. So I'm giving these people the chance to disprove my hypothesis by writing in and telling us what they have published, and what conferences they have read papers at. I will take silence as confirmation of my hypothesis. Thomas Cudworth
Graham,
And Im not sure I would be positing Behe as the gold standard … isnt he the guy that was disowned in a signed statement from the rest of his faculty ?
Correction, his Darwinian ideological faculty. Clive Hayden
An electrician can be competent, yet know nothing about deeper electronic theory. And I'm not sure I would be positing Behe as the gold standard ... isn't he the guy that was disowned in a signed statement from the rest of his faculty? Graham
I try more and more to ignore any aspect of evolutionary theory which is above the level of the cell, what takes place in the cell and during the cell cycle. Cellular structures and processes. 600 'genes' involved in mitosis alone!? Are you kidding me? All coming into being by chance mutations and then retained for the increased rate of reproduction they provided? That's what evolutionary theory states, right? Those who reproduce the most win? Those who reproduce fastest will of course reproduce the most. And each 'gene' a 'descendant' of a prior 'gene.' Where's the nested hierarchy? And the best model to date to demonstrate "the power of cumulative selection" is the Dawkins WEASEL program? DaWeasel? Evolution is at its core stochastic, right? How does selection change that basic foundational fact of evolutionary theory? And each new random change to the genome is independent of any prior change to the genome, isn't it? And something Ms. Liddle and all the other followers of Darwin here have yet to answer is, what is the probability that a new beneficial mutation will be lost due to 'chance' alone? How many beneficial mutations are required just to get beyond that barrier? To make it even 50/50 that the mutation will inevitably become fixed? Wash, rinse, repeat. I have yet to see a realistic model of evolution. But it's SCIENCE I tell you. Mung
