Uncommon Descent Serving The Intelligent Design Community

The School of Nanometry


The finding that the protein components, called “ankyrin repeats,” exhibit such unprecedented elastic properties could lead to a new understanding of how organisms, including humans, sense and respond to physical forces at the cellular level, the researchers said. The nanometer-sized springs are also ideal candidates for building biologically-inspired springy nanostructures and nanomaterials with an inherent ability to self-repair, they reported. A nanometer is one billionth of a meter.

“Whereas other known proteins can act like floppy springs, ankyrin molecules behave more like steel,” said Piotr Marszalek, professor of mechanical engineering and materials science at the Duke Pratt School of Engineering. “After repeated stretching, the molecules immediately refold themselves, retaining their shape and strength.”

“The fully extended molecules not only bounce back to their original shape in real time, but they also generate force in the process of this rapid refolding – something that had never been seen before,” added HHMI investigator Vann Bennett, professor of cell biology at Duke University Medical Center. “It’s the equivalent of un-boiling an egg.”

I never get tired of these kinds of stories. As nanotechnology becomes more and more a lucrative economic goal, more and more of these studies will be done, and more and more of the ‘solutions’ that nature provides to nanostructure problems will be incorporated into the growing “School of Nanotechnology Architecture.” At some point, I think scientists will be forced to admit that nature bears the hallmark of just the kinds of elemental design features that they incorporate into their own nano-level machines. And, of course, the question becomes: whence the design?

Comments
I think these are good questions, but neither of us knows enough biology to answer them at the moment! physicist
physicist: The following is from Wikipedia.org: "Proteins are amino acid chains, made up from 20 different L-α-amino acids, also referred to as residues, that fold into unique three-dimensional protein structures. The shape into which a protein naturally folds is known as its native state, which is determined by its sequence of amino acids. Below about 40 residues the term peptide is frequently used. A certain number of residues is necessary to perform a particular biochemical function, and around 40-50 residues appears to be the lower limit for a functional domain size. Protein sizes range from this lower limit to several thousand residues in multi-functional or structural proteins. However, the current estimate for the average protein length is around 300 residues." So, 40 to 50 residues (= a.a.) appears to be the biological lower limit. But this might not be the best way of looking at the problem, since DNA codes for RNA, which is in turn translated into a protein. So, in other words, if a protein sequence exists as code in the DNA, one then has to explain how that complete sequence came about. If we want to take 40-50 a.a. as a lower limit, then we might inquire of a pharmaceutical lab--where, presumably, they try to 'build' proteins/enzymes--whether certain combinations of a.a. are prevented, or favored. I don't know the answer to that one either. "my instinct is that the shorter chains could have islands of more stable configurations, but that these just aren't as stable as the very long chains, so we don't observe them in large quantities." As the Wikipedia piece states, 'biological function' has a lower limit of 40 to 50 residues. Whether that has to do with 'stability' or, as I think is the case, with biological life needing that many 'combinations' so as to completely distinguish one protein from another, is another question. (In other words, the longer the chain, the greater the probability space; and the greater the probability space, the more unique any one combination becomes.) PaV
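A quick back-of-envelope check of that last parenthetical (a sketch only: it assumes 20 equally available residues and no chemical constraints on which residue may follow which):

    import math

    LOG10_20 = math.log10(20)   # ~1.301

    # Number of possible sequences of each length, expressed as a power of ten.
    for n in (10, 40, 50, 150, 300):
        print(f"length {n:>3}: 20^{n} is roughly 10^{n * LOG10_20:.0f} possible sequences")

    # e.g. length 300: 20^300 is roughly 10^390 possible sequences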
hi PaV I thought M·N was supposed to be a way to compute a value for R---perhaps I am misunderstanding that. He ends up with a figure of 10^120 multiplying phi and P, in any case---whether this 10^120 is called R or not probably isn't important, right? "I would agree. But, as I stated before, if you eliminate one out of every two possible combinations, that still leaves us with a probability of about 1 in 10^300." Sure, but the point is we don't know how many combinations *might* be eliminated by NS. If it was only one in two, I agree it wouldn't make much difference to P. Maybe it's more. For example, as you build longer chains of a.a.s, maybe typically there are only one or two a.a.s you can stably add each time. Or maybe it's less. I understand your point about intermediate states, but I do not know the answer. I suspect a professional biologist would know much better than me---have you tried asking this question of someone like that? It would be interesting to hear the answer. My instinct is that the shorter chains could have islands of more stable configurations, but that these just aren't as stable as the very long chains, so we don't observe them in large quantities. Sorry, that is not meant to be a convincing answer---just a possibility. It would be interesting to hear an expert discussion of that point. "on the other hand, we can say that the probabilities of 'random' protein formation are astronomically low, and that the absence of 'mini' proteins to act as precursors to larger proteins suggests that there is no other way of tackling the problem." I agree entirely that the probabilities of random protein formation are astronomically low, but I don't think this is necessarily the right question to ask. For the reasons above I am not convinced that the absence of 'mini' proteins is convincing proof that there were not preferred 'mini' proteins along the way to forming the proteins we do observe. It's a question that one could hope to answer, though. I might try and ask some biologists myself! Anyway, I think we both agree that calculating the appropriate P is not necessarily a case of taking the combinatorial value. physicist
PaV: I just read through this thread for the first time, and I noticed an error in your determination of the probability of random protein formation in comment 44: "Events can be considered as mutually exclusive--as in your game where the roll of the second 'die' has nothing to do with the roll of the first 'die' (I'm using quote marks to avoid the anti-spam thingaroo)--in which case probabilities of the individual events are ADDED; or the events are interconnected--intersect with one another, as in the case of two dice being rolled at once, with the sum total having some consequence--in which case the probabilities are MULTIPLIED." Events are defined as mutually exclusive if the occurrence of one event precludes the occurrence of the other. The classic example is flipping a coin once. You either get heads or tails, never both, and getting one means you didn't get the other. The example of rolling one 'die' and then a second 'die' is not a case of mutual exclusivity: the outcome of the first roll does not preclude any of the possible outcomes of the second roll. The probability of getting one of a set of mutually exclusive events is equal to the sum of the probabilities of the individual events, as you said. It's just that your definition of 'mutually exclusive' was wrong. What you described as mutual exclusivity is actually known as 'independence'. The outcome of the first roll has nothing to do with the second, so they are independent. This is true whether one 'die' is rolled twice in succession or two dice are rolled at the same time. To calculate the probability of getting a particular number on the first roll, followed by a particular number on the second roll, you multiply the individual probabilities, because they are independent. If you're looking at the sum of the results of the two rolls, it gets more complicated, because there are multiple ways to get a particular sum (6 ways to get a sum of 7, for example: 1+6, 2+5, etc.). And if you only roll a second time if you get '6' on the first roll, it's more complicated still, because then it becomes a conditional probability. "If this was in fact what we found in nature, then I would agree, we can think of these linkage events (an amino acid linking onto another/other amino acid(s)) as mutually exclusive, and that, therefore, the probability of their formation would be the sum of the individual probabilities; that is, a protein of 300 a.a. length would have a probability of formation of 1/20 + 1/400 (= 20 squared) + 1/8,000 + 1/160,000... which ends up being the sum of [1/20] factorial, which is roughly 1/20, instead of 20^300th power." As you described them, the linkage events are independent, not mutually exclusive. So the probability of getting a particular protein of length n is equal to (1/20)^n (it's actually higher when you consider that you don't have to start at one end and work toward the other, adding one amino acid at a time). Your chance of getting a particular protein of length 300 is therefore (1/20)^300. It's easy to see why adding the probabilities is wrong: if you add them, you get a higher probability for a longer protein than for a shorter protein, which doesn't make sense. The calculation is even more complex, because you have to consider a bunch of other factors:
1. Are the amino acids in your 'soup' equally concentrated?
2. How much soup is there?
3. How often do linkage events occur per unit volume?
4. How often do 'unlinkage' events occur per unit volume?
5. Are these rates fixed, or dependent on the intermediates present?
6. What are the selective advantages or disadvantages for particular intermediates?
7. How many final proteins are functionally equivalent to the ones you're 'shooting' for?
...and more. watchmaker
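Both of watchmaker's corrections can be checked mechanically. The snippet below is only an illustration, reusing the residue and dice figures already in the thread:

    from fractions import Fraction
    from itertools import product

    # 1. Independent linkage events multiply: a specific chain of length n drawn
    #    uniformly from 20 residues has probability (1/20)^n. Adding the per-step
    #    probabilities instead (1/20 + 1/400 + ...) grows with length, which is
    #    the giveaway that addition is the wrong rule here.
    p_multiplied = [Fraction(1, 20) ** n for n in (1, 2, 3)]
    p_added = [sum(Fraction(1, 20) ** k for k in range(1, n + 1)) for n in (1, 2, 3)]
    print([str(p) for p in p_multiplied])   # ['1/20', '1/400', '1/8000'] -- decreasing
    print([str(p) for p in p_added])        # ['1/20', '21/400', '421/8000'] -- increasing

    # 2. Two fair dice: 6 of the 36 equally likely outcomes sum to 7.
    ways_to_7 = sum(1 for a, b in product(range(1, 7), repeat=2) if a + b == 7)
    print(ways_to_7, "ways out of 36 to roll a 7")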
physicist: Please leave the URL, because I don't see any 'R' anywhere on p. 31. Just from memory, Dembski's final equation contains an 'M' function and an 'N' function, both multiplied by P(T|H). I just have no idea what 'R' you're referring to. You wrote: "However, do you agree that for biological systems NS *can* affect the value of P(T|H) to be drastically smaller than the naive combinatorial value? I think this is an important point, do you agree?" The way you're using NS is problematic. Basically NS is the 'killer' of organisms--the Grim Reaper, to use Dawkins' metaphor. We're basically talking about 'organic chemistry'--if I understand your thinking correctly--and I think what you mean by NS is that "nature" will not allow some protein configurations to exist. (I'm not just trying to nit-pick here; what you're concerned with applies whether or not life exists. But then having said this, the only 'machines' that we know of that make 'proteins' are biological systems---so, maybe, I am nit-picking....but, again, I think you're thinking more in terms of chemical/quantum mechanical considerations than strictly biological 'selection.') I would agree. But, as I stated before, if you eliminate one out of every two possible combinations, that still leaves us with a probability of about 1 in 10^300. I would separate the problem into two scenarios. For life to form, there is a minimum of 1,000-2,000 proteins that are needed. If 9 out of 10 possible combinations of proteins are restricted for chemical reasons, then for 'life' to form by chance is still wildly improbable. It's the problem of abiogenesis, which Darwinists don't want to deal with. Their approach is to then say, "Well, abiogenesis is one thing; but assuming life began--as surely it has--let's now talk about NS and how it works." So scenario two is the Darwinist position of side-stepping the rather thorny issue of how life began. So, if we take life as a starting point, with the implicit presumption that we're dealing with DNA/RNA/ribosomes/protein production, then the question is whether NS can effectively 'eliminate' a sufficient number of 'wrong' protein configurations so as to bring about more complex organisms. Part of this question is whether bringing about 'new' proteins can really result in organismic advance. Let's just leave that to one side. Focusing just on protein formation, you have Darwinists like Dawkins who would argue that, yes, there are little 'islands' in protein probability space that are 'favored', and that NS is completely competent to move 'evolving' proteins along, island to island--always in a 'random' way--and so the effective probability of protein formation is not so hopelessly low. You seem to favor this position; or, at least, seem to think this is a reasonable position. But as I stated above, this suggests that if there are 'islands' in the space, then these 'favored' locations (configurations) will be 'stable', and should then 'manifest' themselves as 'intermediate' protein forms. And, of course, we don't see these intermediate forms. So, we're kind of left with this: on the one hand, we can simply conjecture/assume that 'nature' must really whittle down the combinatorics, without having any way of 'proving' this to be the case; or, on the other hand, we can say that the probabilities of 'random' protein formation are astronomically low, and that the absence of 'mini' proteins to act as precursors to larger proteins suggests that there is no other way of tackling the problem.
Certainly ID crunches the numbers and comes down on the side of the impossibility of chance formation of proteins. I favor this side. But your question is really the crux of the issue, and no one, on either side of the debate, can really tell you what's going on. PaV
So for example if you wanted to calculate the specified complexity for the bacterial flagellum, it would be crucial to include NS in calculating P(T|H), right? physicist
PaV yes, he may stop using the letter `R'. but in the relevant formulae throughout the paper, he is substituting a particular *value* for R (=10^120, usually). For example p31, the last page of the paper proper. Agreed? i suspect we are arguing at cross-purposes, and whether he uses the letter R is not the main point. OK, I think i misunderstood your point about the proteins. Sorry! I'm not a biologist! However, do you agree that for biological systems NS *can* affect the value of P(T|H) to be drastically smaller than the naive combinatorial value? I think this is an important point, do you agree? I think your argument is that for proteins in particular, NS doesn't take place in the same sense as for biological systems. Can I check that I'm correct about that, then I'll try to respond. (My point was simply that understanding NS is crucial to getting the right specified complexity for a biological system.) PS this is about to slide off the page, so perhaps we could continue the discussion in another thread? (If you'd like to.) I'll check this again tomorrow and you could point me in the direction. physicist
physicist: "My intuition is that by using P=20^300 what you are proving is that it is exceedinly unlikely that the protein spontaneously formed in a particular way by adding 300 a.a.s together all at once. I suspect that biologists do not claim this to be the way that proteins arose. So I think the evolutionary mechanism (NS) has to be taken into account to choose the right P. Do you agree?" Well, isn't the problem here that UNTIL proteins are formed, nature (biological life) doesn't exist; and if nature doesn't exist then how can NS work; and if NS doesn't exist either, then what is the mechanism that 'chose' a particular protein configuration out of a configuration space of 20^300th power? That's why I was talking about 'proteins' of 1 a.a. length and 2 a.a. length, 3 a.a. length, etc., just 'floating around', since this would give the impression that 'protein' formation is a function of chemical and physical laws and properties. But we don't see this. So in the end, P(T|H) is a hugely small 20^-300th power. P.S. I went to the link above, clicked to it, saw that it was the same paper as mine, and then did a search for (R). The last reference was on p.15 (except for p.32, which is part of the Addenda). (R) is just some random sequence of binary digits. PaV
PS PaV the url of the paper was posted in #14 on this thread. physicist
Hi PaV As you can see from my username, I'm not a biologist---I am just imagining plausible mechanisms. Still, I think we agree that the right way to calculate P(T|H) depends crucially on selection mechanisms, right? Your dice game was one way, mine was another. Mine is indeed *not* Craps, but there's no a priori reason why Craps is the right game to choose to make the analogy. Hopefully we can decide between the two mechanisms--or indeed other mechanisms--in order to work out P. So, does one find *any* single amino acids floating around in nature? Or many pairs? I really don't know. But I would imagine that if there are very few single or paired (or short-chain) amino acids at all, the explanation would be that particular short chains *were* preferred along the evolutionary path towards long chains, but that the longer chains are *even more* preferred than the shorter chains. So once an extra amino acid has been successfully added, the longer chain is naturally selected in favour of the shorter chain. Hence you'd find a change in P of the kind I'm suggesting, but wouldn't expect to find the intermediate products. You're suggesting I should find lots of sixes in my dice game. But if there is a further selection mechanism which prefers pairs of dice instead of singles, you wouldn't find any of the sixes floating round. For example, the selection mechanism in the casino would be: you don't get any money at all unless you roll the second dice. Again, I'm not a biologist, so this might be completely wrong! I'm just trying to demonstrate that it is not at all obvious that the 20^300 calculation is the right one to make. And hopefully we can decide between the different methods by, as you say, looking for evidence about what happened in intermediate stages of evolution. My intuition is that by using P=20^300 what you are proving is that it is exceedingly unlikely that the protein spontaneously formed in a particular way by adding 300 a.a.s together all at once. I suspect that biologists do not claim this to be the way that proteins arose. So I think the evolutionary mechanism (NS) has to be taken into account to choose the right P. Do you agree? physicist
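To make physicist's picture concrete, here is a toy simulation. It is my own construction, not a biological model, and the choice of two 'allowed' residues per step is an arbitrary assumption: chains grow one residue at a time, only extensions that pass the filter survive, and each longer chain supplants its shorter precursor, so intermediates never pile up.

    import random

    ALPHABET = list(range(20))     # stand-ins for the 20 amino acids
    TARGET_LEN = 50
    ALLOWED_PER_STEP = 2           # assumption: selection tolerates 2 of the 20 residues at each step

    def grow_with_selection(rng):
        """Count random extension attempts needed to reach TARGET_LEN when only
        'allowed' extensions survive and each longer chain supplants the shorter one."""
        allowed = [rng.sample(ALPHABET, ALLOWED_PER_STEP) for _ in range(TARGET_LEN)]
        chain, attempts = [], 0
        while len(chain) < TARGET_LEN:
            attempts += 1
            candidate = rng.choice(ALPHABET)
            if candidate in allowed[len(chain)]:
                chain.append(candidate)
        return attempts

    rng = random.Random(0)
    trials = [grow_with_selection(rng) for _ in range(200)]
    print("mean attempts with step-by-step selection:", sum(trials) / len(trials))
    # Expect roughly TARGET_LEN * 20 / ALLOWED_PER_STEP = 500 attempts, versus an
    # expected ~20^50 attempts if a full length-50 chain had to appear in one go.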
Structures like flagella didn't "evolve". They appeared full blown all at once, like every other basic structure, including all the different kinds of eyes. It is interesting that the word evolution comes from the Latin evolvo, meaning to unfold as the pages of a book. Books were written, don't you know. Evolution in any sense of gradual transformation is a myth. The whole Darwinian fairy tale is based on the faulty premise that evolution had an exogenous cause. Such a cause has never been identified and I don't think ever existed. Reginald C. Punnett and Leo Berg both realized that the only role for Natural Selection was to maintain the status quo. One of these centuries Darwinism will be recognized as the biggest disaster in the history of science and will join the Phlogiston of Chemistry and the Ether of Physics as nothing but figments of a genetically based overactive human imagination. Ether, Selection, Phlogiston; in other words, ESP. Actually it should have died in 1871, about the same time that the Ether did, when St George Jackson Mivart asked the question: How can Natural Selection be involved with a structure that has not yet appeared? Evolution is finished, but when it was going on it was purely emergent and took place with little if any reference to the environment, exactly as ontogeny does today. How do you like them impaled stuffed olives? John Davison
physicist: I don't see 'R' on my p.31 at all, so maybe we have different papers. Why don't you post the 'address' of the paper you're looking at? (Meanwhile, I'll do some looking around myself.) As to your game (which, as you note, is NOT Craps), you're using probabilities in the same way as Dawkins does in The Blind Watchmaker. Events can be considered as mutually exclusive--as in your game where the roll of the second 'die' has nothing to do with the roll of the first 'die' (I'm using quote marks to avoid the anti-spam thingaroo)--in which case probabilities of the individual events are ADDED; or the events are interconnected--intersect with one another, as in the case of two dice being rolled at once, with the sum total having some consequence--in which case the probabilities are MULTIPLIED. Here's the problem with that notion (and this is where 'irreducible complexity' comes in): it means that each 'step' (event, if you will) has to stand on its own right. Translating that to the real world, it means that "intermediates" are "selected for". What you're suggesting is that there's a predilection for a particular 'first' amino acid (the equivalent to your rolling 'six' before you get to roll the second die), but not for the second (you can roll anything from 1 to 6 with the second die). But if the first a.a. is favored, something must be acting on it to favor it. So, then, why don't we find one particular a.a. floating around at some sky-high concentration, while the remainder are at very low levels? Why isn't there a linked pair of a.a. that nature prefers and that we see floating around at sizable concentrations? If this was in fact what we found in nature, then I would agree, we can think of these linkage events (an amino acid linking onto another/other amino acid(s)) as mutually exclusive, and that, therefore, the probability of their formation would be the sum of the individual probabilities; that is, a protein of 300 a.a. length would have a probability of formation of 1/20 + 1/400 (= 20 squared) + 1/8,000 + 1/160,000... which ends up being the sum of [1/20] factorial, which is roughly 1/20, instead of 20^300th power. But, again, where are the intermediate forms? Why don't we see proteins of 10 a.a. length, or 30, or 45, etc.? PaV
Hmmm, just tried rewriting it, but I wonder if wordpress thinks I've submitted it already and won't allow me to repeat comments. I'd be happy to continue the discussion by email if you are willing! Not sure what else to do as I've tried changing the wording. physicist
Hi PaV, I have a reply/question relating to P(T) for the protein, but am having trouble submitting it as a comment. physicist
PaV Sorry if this appears twice, but I think my comment just got eaten. If I've been booted, I'd be interested in carrying on the conversation over email, if you are. Yes, I am reading the same paper as you, but can't find the last equation you are talking about. R=10^120 is still included on the last page of the paper proper (p31) and the addenda, but perhaps I am missing something. Let me use your dice analogy. Imagine a variation on Craps where you roll one dice at a time. The casino rule is that unless you roll a six on your first dice, you can't carry on. If you do roll a six, you can roll the second dice as normal. Anyone measuring the sum of two dice will find their results are affected by this `casino selection' at an intermediate stage---for example, there is zero probability that anything with a sum less than 7 will ever `form'. I'm not a biologist, but the analogy with proteins might be that they evolved bit by bit, with natural selection all along the way, rather than spontaneously forming into one of 20^300 types of protein. Let's suppose the 300-length protein forms by adding a single amino acid onto one of the 299-length proteins that has been *allowed* by natural selection. This rules out most of the possible 300-length proteins in your set of 20^300 ever forming. Does the analogy make sense to you? physicist
Hi PaV Yes, that is the paper I have been referring to on this thread. I'm actually confused as to what you mean by the final equation (without the R). The definition of specified complexity including the R=10^120 appears on the last page of the paper proper (p31) and in the addenda, as well as earlier on---but I'm probably missing something. I guess I'm postponing thinking about the R, anyway. I understand and agree with your gambling example. However, I don't think all possible `proteins' (proteins in the sense of arbitrary combinations of amino acids) are equally likely to have formed in nature. I'm still thinking about the best way to put this---and I'm not a biologist so don't really know the details. But suppose the protein chain evolved bit by bit, by adding amino acids on to a smaller chain---let me denote the chain of length n by Pn. Can I not then think about the probability of a P300 occurring in terms of 20 x the number of P299s that have survived by natural selection? Let me try to restate this with the dice analogy. Imagine a variation on Craps, where you roll the two dice one at a time. Let's suppose there is a casino rule that allows you to play on only if your first roll = 6. Then you can roll your second dice; otherwise you're done. However, all anyone measures in their experiments about this game is the total of any two dice rolled. The natural selection I introduced after the first roll radically affects the chances of anyone ever measuring a result for two dice less than 7. In fact, there is no chance at all. What I am saying is, I think my game may well be better than Craps as an analogy for including natural selection in addition to chance. physicist
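The modified game physicist describes can be tallied exactly. The short sketch below (illustration only) shows that anyone who records only the sums of completed two-die rolls will never see a total below 7:

    from fractions import Fraction
    from collections import Counter

    recorded = Counter()
    for first in range(1, 7):
        if first != 6:
            continue                    # the casino rule: without a 6 you never roll again
        for second in range(1, 7):
            recorded[first + second] += Fraction(1, 36)

    completed = sum(recorded.values())  # only 6/36 of attempts ever produce a recorded sum
    conditional = {s: str(p / completed) for s, p in sorted(recorded.items())}
    print(conditional)                  # sums 7 through 12, each with probability 1/6; sums 2-6 never occur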
physicist: "As I've said more than once, I think this is an issue of terminology. Many scientists will not regard ID as a theory, but they certainly should be interested in the falsification of RM+NS. The important thing to understand is how this falsification is supposed to work, right?" But ID is a theory in the same sense that Darwinism is a theory: both are attempts at understanding the progressive evolution of biological forms. One says that progressive evolution occurred through RM+NS; the other says that an 'intelligence' was in operation to bring about these major, progressive changes in biological form. ID is a type of 'information' theory. The Law of Conservation of Information says that only intelligent agents can increase the information of a system. Think of what we mean by 'design'. Design is really about the generation of a 'form.' Elements are combined in a new and novel way. Think of what you have to do if you have an invention. You have to prove that your 'design' is different and better in significant ways. You're bringing into the world of matter and substance the 'form' that is in your mind as an inventor. This 'inform'-ation is translated into an actual drawing or scale model, or actual functional model. So ID not only falsifies the 'randomness' aspect of Darwinism--chance can't get us there--it positively avers that when we see a new 'form', that means new 'information' has been translated into the here and now. And it stipulates that an intelligent being/agent is responsible. So no need to shortchange ID. PaV
physicist: "Btw, the paper I linked to in #14 is recent---Aug 2005. Is there any significant change since then?" Sorry, it's been a while. The final equation that Dembski uses doesn't include R. You're apparently in the first part of the paper. R is a 'random sequence' (binary sequence) that Dembski generated for illustration purposes. [I'm using the paper: "Specification: The Pattern that Signifies Intelligence", June 23, 2005] "Well, 20^-300 would be the probability of finding a particular protein from randomly picking amino acids out of a box and linking them together, right? What I don't get is---surely natural selection will rule out almost all of these combinations from surviving? So the subset of all the possible proteins `allowed' by natural selection surely will be much smaller than 20^300?" I'm saying this by way of trying to get a handle on what you're proposing: let's say you go to Vegas and bet at the Craps tables. They tell you that 3's and 10's, however they form, are not allowed. That doesn't affect the odds of rolling a 7 or an 11. [I wouldn't put money on the 3 and 10 on the Come Line, btw]. So whatever NS decides to do--in its limited way--does not affect the probabilities. You still have to construct a protein that is viable choosing 1 out of 20 a.a., 300 locations in a row. What does lower the 'odds' is that isomers at certain positions along the protein polymer are allowed, so that at the locations/positions where isomers are allowed, the odds are not 1 out of 20, but maybe 1 out of 10. But does this lower the overall odds to 1 in 20^225, or even 1 in 20^150? These are literally impossible odds. PaV
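PaV's closing question has a quick arithmetic answer. The sketch below is my own back-of-envelope calculation, keeping the thread's figures: 300 positions, 20 alternatives per position, and 'maybe 1 out of 10' (i.e., two tolerated residues) at the relaxed positions.

    import math

    def log10_probability(relaxed_positions, length=300, alphabet=20, tolerance=2):
        """Chance of hitting an acceptable sequence if `relaxed_positions` of the
        sites tolerate `tolerance` residues and every other site demands exactly one."""
        return relaxed_positions * math.log10(tolerance) - length * math.log10(alphabet)

    for k in (0, 150, 300):
        print(f"{k:>3} relaxed positions -> about 10^{log10_probability(k):.0f}")
    # 0 relaxed positions   -> about 10^-390 (the bare 20^-300)
    # 300 relaxed positions -> about 10^-300: two-way tolerance at every single
    # position still leaves the odds far longer than 1 in 20^150 (about 10^-195).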
physicist, the August 2005 paper by Dembski is the latest that I know of on the subject in question. In it, he fine-tuned his definition of the UPB, I believe in response to criticisms. On the subject of irreducible complexity, you might want to check out "Irreducible Complexity Revisited" (November 2004), a paper by Dembski. j
Irreducible Complexity primarily focuses on the Direct Darwinian pathways that Darwin himself wrote about. Which, based upon the responses of opponents, appears to be a good challenge. Now INdirect Darwinian pathways are another matter. All Behe really has to say about those is that they're improbable and no one has demonstrated a real world example. Patrick
Inoculated Mind wrote: "....researchers are working to understand not only how it works, but also how it may have evolved. Granted, they have not put together a complete pathway....You may often hear that the flagellum consists of 40 different proteins....One has only 31 proteins....a lab group stripped it of two more proteins and it still retained its original function. ...." Fantastic! That lowers the probability of the flagellum being formed by accident from what? 10^3000 to 10^2999? Anything beyond 10^150 is way beyond any mathematician's Universal Probability Bound. Red Reader
I've only just seen the moderation above:

> Darwin wrote: "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." It's not just hard to prove this, it's impossible to prove this. You cannot prove a negative. Asking the critic to prove the flagellum cannot evolve renders Darwin's theory unfalsifiable pseudo-science.

I'm not sure what you are saying---do you think Darwinian evolution is falsifiable, and *has* been falsified by Behe's assertion of IC for particular organisms? Or do you think it is not falsifiable?

> To preserve the theory's falsifiability the burden of demonstrating the flagellum can evolve via Darwinian pathway is on the claimant, not the contestant.

I don't really know why you think this would preserve a theory's falsifiability. My feeling is that it is falsifiable by attacking the problem with something like Dembski's law. But as I think we agree, you *can't* realistically prove Behe's assertion that the bacterial flagellum did not evolve by RM+NS---the issue is not proved either way. physicist
Oh, and I can only repeat my answer about Darwin's statement. All I am saying is that it hasn't been proved that the bacterial flagellum *can't* have evolved from RM+NS. I am not presenting that as an argument for Darwinian evolution. I am simply saying that the issue is not proved either way, and so Darwin's challenge has not been met. physicist
Davescot, let me also repeat this question to you: do you think that classical and quantum mechanics underlie the behaviour of complicated interacting systems of particles? Even when you can't accurately model those systems? physicist
Hi Davescot I think we are talking at cross-purposes, to some extent. In order to falsify lack of direction one can show that the lack of direction is inconsistent with what's observed (what Bill Dembski's work is on, right?). Alternatively, you could indeed show experimentally the direction---a departure from RM+NS in the lab. In what sense has this been done? Sorry, I specifically tried to not fight the straw man---as I've said in lots of my comments, I don't think anyone here doubts that RM+NS is observed to take place in simple cases. In the comment just above from me for example: "I think the question IDers are asking is whether RM+NS is *sufficient* to account for the complexity and diversity of biological systems that we observe." Hopefully this is a fair statement---this is honestly how I understand the situation, so sorry if I have been unclear elsewhere. I'm not sure it is ever possible to `verify' any non-trivial hypothesis. So I would say the statement you attribute to me the other way round: "[It is] unprecedented in science that the falsification of one hypothesis is the verification of another." You obviously think this statement is indefensible; my only defence is that from my experience in physics this seems to be the case. What are the historical examples you are thinking of? I might well be overlooking something obvious. As I've said more than once, I think this is an issue of terminology. Many scientists will not regard ID as a theory, but they certainly should be interested in the falsification of RM+NS. The important thing to understand is how this falsification is supposed to work, right? physicist
physicist The problem that some people (including scientists, mathematicians, and engineers) have with evolution is exhibited by the statement from the so-called "Weisel 38" that evolution (implying RM+NS) is understood to be an undirected process. How does one falsify lack of direction if not by showing direction? If this cannot be falsified then RM+NS as an undirected process is a pseudo-scientific claim that needs to be clearly and explicitly abandoned. It's certainly not unprecedented in science that the verification of one hypothesis is the falsification of another. I have no idea where or how you can support the notion it's unprecedented. Perhaps my misunderstanding was simply a refusal to think you were making such an obviously indefensible claim. As an aside, ID doesn't falsify RM+NS in all cases. It falsifies the assertion that evolution is devoid of direction. RM+NS is acknowledged to be a working mechanism at some level. You appear to be fighting a straw man. However, I remain of the opinion that NS is falsifiable in the way that Darwin stated - find an organ that cannot be explained by a series of small improvements over time. A challenge has been made that the flagellum cannot be explained in this manner. Get back to me when you have an explanation. DaveScot
PaV I need to think about this some more. Btw, the paper I linked to in #14 is recent---Aug 2005. Is there any significant change since then? Well, 20^-300 would be the probability of finding a particular protein from randomly picking amino acids out of a box and linking them together, right? What I don't get is---surely natural selection will rule out almost all of these combinations from surviving? So the subset of all the possible proteins `allowed' by natural selection surely will be much smaller than 20^300? I'm just saying this very roughly, but this is why I don't see where natural selection has been included. I may be missing something. physicist
replying to #25 Well, as I said before, biologists know that organisms reproduce, and they know that this reproduction process will be imperfect and produce mutations. Does everyone here also agree with that? Moreover, these random mutations coupled to natural selection can be observed in the lab for simple systems. Is that also agreed, or is it disputed? So, I think a falsification of Darwinian processes *could* have been that random mutations don't occur, as then there would either be no evolution, or else some other method for evolution. So, having observed it in biological organisms today, biologists believe that RM+NS *will* have occurred for biological systems in the earth's history. Does anyone here disagree with that belief? I think the question IDers are asking is whether RM+NS is *sufficient* to account for the complexity and diversity of biological systems that we observe. And unfortunately this is just really hard to test by brute force. Computer programmes simulating the earth's biological history are intractable, as discussed here and on the other thread and I'm sure on this site before. We also have fossils, but the fossil record is incomplete---so there is not enough data about previous stages in the earth's biological history. So it's difficult to test. Sure, Darwin said that if you proved an organ could not evolve over incremental stages, that would disprove his theory. But proving this is hard! [Moderation: Darwin wrote: "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." It's not just hard to prove this, it's impossible to prove this. You cannot prove a negative. Asking the critic to prove the flagellum cannot evolve renders Darwin's theory unfalsifiable pseudo-science.] Michael Behe has not (AFAIK) claimed to prove that the bacterial flagellum cannot have evolved from RM+NS; rather, he has asserted that he believes it could not have, and has challenged people to prove him wrong. That is just not the same as proving it couldn't have evolved. I wouldn't say a negative can't be proved, but I think it is very, very hard in this case. So a better argument might be that this aspect of Darwinian RM+NS is not falsifiable. However, I think Dembski's law is an attempt to do so, right? Let me ask you a question. Do you think that classical and quantum mechanics underlie the behaviour of complicated interacting systems of particles? Even when you can't accurately model those systems? Why? physicist
Davescot I was trying to say--and hopefully it is clear now in the thread on Darwin's nemesis--that ID was defined as the complement of Darwinism. I was not comparing two different versions of ID (I don't know really of any different versions). I think I made it clear in that thread in #67 what I meant by the set theory term 'complement'. The reason I am assuming that ID = not Darwinism is that AFAIK the ID claim is that falsifying Darwinism would be equivalent to proving ID. It is this statement, if I am understanding it correctly, that is unprecedented in science. This is why you find scientists objecting to calling ID `a theory'. Sorry---I didn't mean to imply you knew nothing of Classical Mechs or Quantum Mechs, just that I wasn't sure which part of what I was saying you didn't understand. But to return to that case, classical mechanics was eventually superseded by quantum mechanics. But falsifying classical mechanics was *not* equivalent to proving quantum mechanics. However, physicists *could* have defined a new object, `notclassical mechanics', as being equivalent to the falsification of classical mechanics. It just wouldn't have been regarded as a theory. My point is that from the way most scientists (myself included) use the term `theory', they will not define ID as a theory. However, they certainly ought to be interested in any falsification of Darwinism---which is what Dembski is trying to do, right? physicist
DaveScot, My understanding of the flagellum issue is that researchers are working to understand not only how it works, but also how it may have evolved. Granted, they have not put together a complete pathway, but they have been able to eliminate proteins from the flagellum to make it simpler. You may often hear that the flagellum consists of 40 different proteins, but that is only one particular flagellum, and there are simpler ones in other bacterial species. One has only 31 proteins, and I don't know if you've read the research on it, but a lab group stripped it of two more proteins and it still retained its original function. They mentioned ID in their paper, undercutting Behe's concept of irreducible complexity, saying that they have lowered the bar to 29 proteins. I can find you a link in case anyone wants to read it. Anyway, although the pathways are nowhere near complete, it seems that even the flagellum is being reduced. During an interview on my show and during a subsequent email discussion, Michael Behe told me that for him, a complete evolutionary pathway for the flagellum is not necessary, but rather, that all it would take is one "IC" system of a similar level of complexity to be unraveled. Although on my show he tried to bet me a six-pack of beer over the flagellum, because its evolution is indeed a daunting task. What I'm wondering, Dave, is do you feel that the evolution of just the flagellum is necessary, or is any IC system enough, or do you think that evolutionary biology must explain several IC systems before you change your position? I'm not sure if you have answered this elsewhere, but you said that "The onus to provide a means for falsification is not borne by critics of the theory." How, then, does intelligent design provide a means by which it can be falsified? Again, I want to make this clear to everyone, I'm not trying to argue a position here, I'm just trying to understand people's positions. Karl BTW - Behe said a similar thing about proving a negative on my show, he said that he's being asked to prove a negative to assert his position. (not exact words) Inoculated Mind
physicist The onus to provide a means for falsification is not borne by critics of the theory. The proponents of the theory must provide it. How is it that RM+NS can be falsified? Keep in mind that the falsification methodology must be scientific. Claiming RM+NS can be falsified by God appearing and proving He did it doesn't count because that's not science. RM+NS must in principle be able to be falsified. Darwin said that any organ that cannot evolve by incremental changes, where each increment is favored over the former by natural selection, would falsify his theory. So, we can't figure out how flagella can evolve in this manner. A negative cannot be proven, so if you try to say we have to show there's no possible way for it to evolve, then you lose, because a negative cannot be proven even in principle and there would be no way to falsify RM+NS. The onus is on YOU to show that there is a Darwinian pathway for the flagellum to evolve. Good luck. Get back to me when you think you've got something. DaveScot
Check out comments 12, 13, 14 + Charlie. Charliecrs
"R*phi(T)*P(T|H)" What's R, phi(T), T, and H? anteater
physicist says "thanks for that. Perhaps that link in #17 is an older paper---I'd be happy if someone points out a newer one." .... I'm reading Dembski's book "No Free Lunch". I'm a historian, but his development of the theory in this book is clear enough for me to understand. This may have already been discussed: What's being discussed here is the Universal Probability Bound, which has to do with the number of particles in the Universe times the number of units of Planck time (the smallest meaningful measure of time) in 14+ billion years times the ability of a particle to change from one state to another. Dembski uses a bound of 10^150, which is way more conservative than other mathematicians quoted in the book. One uses 10^80 and another uses 10^50. Check it out. Red Reader
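For anyone who wants to see the multiplication, here is the arithmetic behind the 10^150 figure, using the three factors as Dembski presents them in No Free Lunch; the code below only checks the product.

    particles_in_universe  = 10 ** 80
    transitions_per_second = 10 ** 45    # tied to the Planck time
    seconds_allowed        = 10 ** 25    # a deliberately generous limit, far more than 14 billion years

    total_events = particles_in_universe * transitions_per_second * seconds_allowed
    print(f"10^{len(str(total_events)) - 1}")   # -> 10^150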
physicist: The '20' is the number of different kinds of amino acids found in proteins. The '300' refers to proteins being, on average, 300 a.a. in length. Which other 'thread'? Is it the one about 'adultery'? "Doesn't the hypothesis H have to contain both chance *and* natural selection?" But the 'H' hypothesis IS natural selection, in that NS acts on 'random' (chance) mutations. PaV
PS PaV if you can explain what I meant about the classical and quantum mechs example to Davescot, please help in the other thread, as I'm not sure if I'm being unclear. [I have no problem at all understanding classical and quantum mechanics. I have a problem with your description of some recent theory of ID being the complement of some older theory of ID. I have no idea what your definition of older and newer theories of ID is, nor quite what you mean by a complement. "Complement" to me is an operation in Boolean algebra and I'm pretty sure that's not what you mean by it. -ds] physicist
Hi PaV, thanks for that. Perhaps that link in #17 is an older paper---I'd be happy if someone points out a newer one. I think I understand what P is supposed to be, but maybe I am missing something. Doesn't the hypothesis H have to contain both chance *and* natural selection? Is it obvious to you why it should be 20^-300 in the case you mention? I too have to dash; I will try to think about this more later. physicist
physicist: "On the other hand, it seems like you have more of a chance to prove something with Dembski’s law. But what are the P(T|H)?" I think Bill Dembski is best suited to answer your question. But I'm sure he'll direct you back to his papers. The formula you quoted further above seems like an 'older' formula. Both formulas contain the P(T|H). It's the probability of T as the rejection space (created by) given the 'chance' hypothesis H. So the more probable that T came about by 'chance', then the 'higher' the value of P. We're now dealing with probability spaces, of course, and for the simple example of a protein of length 300 a.a. (amino acids), this P is 20^-300th power. Got to run. PaV
davescot---i would like to explain better the objections on #42 and #62 of the other thread. please do let me know where I'm being unclear. physicist
Sure, it seems there is debate about the appropriate value of R, but maybe it isn't always so important. What surely is important is whether P(T|H) has been calculated for any system? Surely you need to know this in order to falsify Darwinian RM+NS for some system with pattern T. Does anyone reading this know whether calculations of P(T|H) have been made? Davescot, I respect your experience in design, and I agree that there is design in nature---of course the debate is whether the designer can be natural selection. I'm going to make another analogy with physics (which may be imperfect). We understand very well the laws of classical and quantum mechanics, and these laws are experimentally verified for the interactions of simple systems. However, these laws would be very difficult to experimentally verify for very complex systems, with lots of interactions. For example, for large numbers of particles one cannot directly make useful predictions for the motion of each of the particles---you could *never* practically do the calculations using the underlying classical (or really quantum) mechanics theory. What gives you confidence that the underlying theory still applies is that it works for very simple systems. I think biologists feel like this with RM+NS. They know that reproduction of organisms takes place, and they know that it is inevitable that this reproduction is imperfect. I think the consequent mutations can be observed in the lab (and I'm not saying that IDers dispute this kind of microevolution---I realise that that's not the ID argument). So one has confidence that for very simple systems RM+NS *does* take place. The question then is, is the model of evolution via RM+NS consistent with the complexity and diversity of biological systems we observe in nature? And surely the problem is that it is difficult to test this hypothesis explicitly; not only are the calculations intractable due to interactions with the environment and other species, but we don't even know the initial conditions of the environment on the earth very precisely. So it's just not a test one can do very easily. However, presumably since RM+NS works well for very simple, idealised systems, biologists are inclined to believe that it will still work for very complicated interacting systems, much (IMO) in the same way that physicists believe simple laws underlie very complex behaviour, even if it is intractable to derive the complex behaviour directly. And yes, I think there is an element of belief involved. However, if one were able to directly contradict the predictions of RM+NS in a clever, non-brute-force calculation, then of course that would be interesting. However, I still don't understand how the notion of irreducible complexity is more than an assertion about a given biological system. Can one prove the assertion? It just seems a very difficult thing to prove. On the other hand, it seems like you have more of a chance to prove something with Dembski's law. But what are the P(T|H)? physicist PS davescot, I hope you noticed my reply on the other thread---I was not knocking ID for failing to distinguish hypothesis and theory---that wasn't my point at all. Sorry for the misunderstanding. physicist
physicist I'm a retired hardware/software design engineer. At this point you'll need to take up Dembski's mathematical theorems with someone else. The compelling evidence of design for me is irreducible complexity in molecular machinery, particularly the digital genetic code, the information it encodes, and the ribosome, which together form a robotic protein assembler able to produce all the 3-dimensional components required to reproduce itself. It's so complex we haven't even cracked the algorithm for protein folding yet, which is something of a holy grail in bioengineering. Digitally programmed machinery is something I spent a successful and lucrative career designing. I know a design when I see one, and until someone can demonstrate to me in a plausible, detailed, and laboratory-verified manner how a self-replicating protein factory can self-organize, I consider Intelligent Design to be not just a live option but the only reasonable explanation for how it came into existence. This should not be censored from 9th grade biology students by tortured interpretations of the establishment clause. It isn't quite rocket science, and to call it religion is an act of desperation by someone who knows he's obviously wrong. But I understand Dembski's universal probability bound to be 10^150, not 10^120, and it is given as larger than the number of elementary particles in the universe. I could be wrong, and with numbers that big it probably doesn't have any practical impact on the theorems. DaveScot
PS this is from reading: http://www.designinference.com/documents/2005.06.Specification.pdf physicist
that seemed to chop off the end of my post: I've now read a bit more about Dembski's law. My understanding of the claim is that one rules out RM+NS as consistent with a biological pattern, T, if: R*phi(T)*P(T|H) is much less than one. Have I understood that correctly? I'm not sure how one should determine R, but I know it's postulated to be 10^120. More importantly, has P(T|H) been calculated for any biological pattern? physicist
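Plugging the thread's running numbers into that criterion shows the scale involved. This is a sketch only: R = 10^120 and the 20^-300 protein example come from the discussion above, while the phi(T) value is a placeholder chosen for illustration, not a figure computed for any real pattern.

    import math

    log10_R   = 120                     # the 10^120 discussed above
    log10_phi = 20                      # phi(T): an illustrative placeholder only
    log10_P   = -300 * math.log10(20)   # P(T|H) for the 300-residue example, ~10^-390

    log10_product = log10_R + log10_phi + log10_P
    print(f"R * phi(T) * P(T|H) is about 10^{log10_product:.0f}")   # about 10^-250, far below 1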
Hi I've now read a bit more about Dembski's law. My understanding of the claim is that one rules out RM+NS as consistent with a biological pattern, T, if: R*phi(T)*P(T|H) physicist
woctor: "The number 20^300 vastly overstates the size of the search space. The real question is, how hard would it have been for evolution to go from a pre-existing protein to any protein having the desired property, taking into account the viability or non-viability of the intermediates?" It doesn't vastly overstate the size of the search space for those scientists in the lab. And your quotes from the article simply mean that 'ankyrin' is ubiquitous--as it should be given its extraoridinary property. You further state: "First of all, evolution does not randomly assemble proteins from scratch. It works by modifying pre-existing ones. Secondly, you seem to be assuming that only a single protein in the entire space has the desired properties. This is not true." As to the first, how do you know that evolution does not randomly assemble proteins from scratch? You're simply making an assertion. When the FIRST protein came into existence, did it come from 'scratch'? And what would have been the search space for that first protein? As to the second, do you have evidence to the contrary? Do you 'know' for a fact that there are other proteins that have this same property, or are YOU assuming? As a Darwinist you must be accustomed to thinking that what you assume to be true then must be true. Doesn't work here. PaV
Davescot Something I don't understand about Behe's assertion of IC. Is the idea that he is trying to falsify Darwinian RM+NS? But AFAIK he doesn't prove that the bacterial flagellum hasn't arisen by RM+NS. Challenging Darwinian theory isn't equivalent to falsifying it, surely. physicist
"Behe’s IC concept fails because it is defined solely with respect to the IC system’s final function. Evolution is not constrained to maintain the same function through successive precursor stages when “approaching” an IC system, so the concept of IC is mistargeted." Claptrap. This completely sidesteps the real issue - the fact that both direct and indirect Darwinian pathways have been demonstrated incapable of producing machinery whose IC core cannot be built gradualistically. You are essentially rephrasing Ken's Ko-option Kanard (cute eh?). It does not suffice. Bombadill
"He was presented with fifty-eight peer-reviewed publications, nine books, and several immunology textbook chapters about the evolution of the immune system;" Courtroom theatrics. What was Behe supposed to do, fisk thousands of pages of crap right there on the witness stand? Get real. Jones read none of it and wouldn't understand any of it if he did. DaveScot
woctor Darwin hisself said irreducible complexity would falsify his theory. The pesky thing about theories in science is that they aren't science if they aren't, at least in principle, falsifiable. So Darwin dedicated an entire chapter to weaknesses. Well sir, Michael Behe took up Darwin's challenge and said the bacterial flagellum was an organ that couldn't be formed from incremental changes where each change improved the fitness of the organism. This challenge will remain standing until someone produces a plausible, detailed progression of how a flagellum can be constructed by random mutation plus natural selection. No one has even come close yet. Yes, it's difficult. Be thankful he didn't choose the ribosome, because that's a lot harder. Evolution is a theory in crisis. The only thing keeping it viable is judicial fiat. That won't be available to prop it up much longer. So can you feel the love yet? I can. :-) DaveScot
Has anyone else noticed how much woctor sounds like keiths (may he rest in peace)? woctor seems to have to come along just at the right time to take keiths' place after keiths got booted. keith, I mean woctor, consistent predictability is not thoughtful. Red Reader
Last time I checked, Behe's Irreducible Complexity was standing as strong as ever. The fact is that Darwin's mechanism of NS + RM fails to account for IC machinery. Telling me that a flagellum shares homologous components with other machines does nothing to tell me how a blind gradualistic mechanism can build it when it needs all of its parts simultaneously to function. You know... Ken Miller's canard. Bombadill
woctor: "The key is not to find examples of “neat” design, but rather to find examples that cannot, in principle, have been produced by undirected evolution. This is what Behe attempted (and failed) to do with the concept of irreducible complexity." Here's a quote from the article: "After thousands of stretches, a pattern emerged," Marszalek said. "The molecule exhibited linear elasticity--a property that had never been seen in any other protein." The probability space for a normal-sized protein is 20^300th power. That means that scientists trying to synthetically compose a nanoparticle with this linear elasticity property would, by chance, i.e., randomly, NEVER, NEVER, EVER fabricate this protein even if they had all the time to do it from the beginning of the universe. And the Darwinist's response: Well, how do you know it didn't happen by chance? Because I can do the math. You ask IDers "to find examples that cannot, in principle, have been produced by undirected evolution." But this last example is an instance where the protein couldn't have been produced by DIRECTED (men and women in nanotech labs) evolution. What say you? PaV
Red Don't you know if 9th graders are told that evolution is a theory, not a fact, and should be carefully studied and critically considered it will drive western civilization back into the dark ages? I'm planning on winning so as a precaution, in case the nattering nabobs of negativity are right (they're scientists after all so they're probably right), I'm learning how to get along without electricity and collecting goods to use in a barter economy. ;-) DaveScot
PaV - I hear it said by Darwin's defenders that ID is something like "the end of science". Nothing could be further from the truth. Just in the last two days I read something about how horrible it would be if kids learned ID in school. She said that the world is in danger of being overrun by mutant bacteria, and that if kids in school aren't taught the only real scientific theory--NDE--then there will be no one educated enough about evolution to stop the mutant threat. I do not have a link. But the opposite is true, as your article suggests. Red Reader
Great first article, PaV! I think nano-engineers will be marveling at and learning from the examples given us by the designer of life for a long time to come. DaveScot
