Uncommon Descent Serving The Intelligent Design Community

Chance, Law, Agency or Other?


Suppose you come across this tree:

[Image: Tree Chair]

You know nothing else about the tree other than what you can infer from a visual inspection.

Multiple Choice:

A.  The tree probably obtained this shape through chance.

B.  The tree probably obtained this shape through mechanical necessity.

C.  The tree probably obtained this shape through a combination of chance and mechanical necessity.

D.  The tree probably obtained this shape as the result of the purposeful efforts of an intelligent agent.

E.  Other.

Select your answer and give supporting reasons.

Comments
Bob O'H: If you're going to be a hypocrite, you're not worth dealing with. By the way, I "introduced" cooption because cooption is the basis of the whole argument for the change from T3SS to flagellum. I have to remind you that the function of the T3SS is definitely not the same as the function of the flagellum, so you can in no way have a direct path which simply increases the first function and magically arrives at the second. Anyway, even if I suggested the possibility of intermediate cooptions, just to "increase the credibility" of your model, I also took into consideration the simple split between the two existing functions, as you should know if you had read my posts, or if there were no hypocrisy in the world. Here, again, is the relevant part: "At this point, probably not knowing what else to say, you go back to the concept of gradual fixation. But what are you speaking of? Please explain how you pass from one function (T3SS) to another one (the flagellum) through 490 or so single amino acid mutations, each one fixed in the name of I don't know what function increase. Are you thinking of, say, 245 gradual mutations improving the function of the T3SS in single steps, and then 245 more single mutations gradually building the function of the flagellum? What a pity that, of those 490 or so intermediaries, there is not a trace, either in existing bacteria, or in fossils, or in general models of protein function. Or, if we want to exercise our fantasy (why not? in time I could become a good darwinist), we could split the cake in three: 150 mutations improving the T3SS, 150 more to build some new, and then lost, intermediary function (double cooption!), and then 190 to build up the flagellum." It's a good thing that posts are saved in a blog, and we can always look again at what has been said. In spoken language, I have often found people who simply deny what has just happened in the discussion. It's interesting to find them even in written discussion, but at least here the evidence is under the eyes of all. Well, I hope this is it at last. Goodbye, and I really wish you the best. gpuccio
gpuccio @ 201 -
I am afraid there is no purpose in going on with this discussion with Bob. He keeps changing his arguments.
And then how, in your very next post, did you respond to my repeating that you're assuming no fitness gain for intermediates? You discussed co-option, something which is not part of the argument I was making, and which I wasn't assuming (I had mentioned gene duplication earlier, but had dropped that line of inquiry because there were more urgent things). It is more than possible for intermediates to have a higher fitness and be carrying out the same function, i.e. for co-option not to be involved. Yep, thank you for, um, changing the argument. If you're going to be a hypocrite, you're not worth dealing with. I'd still be interested in seeing how F2XL responds to my criticisms of his maths, but I think he has disappeared. Bob O'H
Bob O'H: By the way, while I happily admit that I don't know the statistical applications you mention (I will try to study something about them, just to know), I still don't think there is any doubt that statistics is the science which studies random events. Obviously, it is perfectly possible to apply the study of random events to models which make causal assumptions. The Fisherian model of hypothesis testing is a good example. There is no doubt that a causal assumption is made about the test hypothesis, and that it is evaluated by the statistical analysis, but that is done indirectly, by testing the probability of the null hypothesis, that is, of a randomness hypothesis. Whatever the more sophisticated applications of statistics, there is no doubt that statistics works with statistical distributions, and that statistical distributions are distributions of random events. Anyway, all my calculations are merely of random probabilities. There is no causal assumption there. I have readily admitted (see my post #192) that if anybody can give a reasonable scenario where all the steps are no more than 2-3 nucleotides apart, and each step can be fixed by NS, then the probability calculations do not apply. I paste here again the relevant part: "I think that you obviously understand that there is no hope of splitting the transition into 245 or 163 functional intermediates (2-3 mutation steps). In that case, if you imagine that you can split the transition in such a way, and that each mutation can be fixed quickly, we would be in a scenario similar to the "Methinks it's like a weasel" example. Indeed, a coordinated mutation of 2-3 amino acids is perfectly accessible to very rapidly dividing beings, like bacteria or the malaria parasite, as Behe has clearly shown in his TEOE. We can accept that with prokaryotes, single mutations can be attained easily enough, double ones with a certain ease, given some time, and triple ones probably imply some difficulty, but can be done (especially if you imagine a whole earth covered with proliferating bacteria)." So, you see, I am admitting that you can find a pathway from T3SS to the flagellum, provided that you can give a model of that kind, and not just imagine it. I have already said many times the reasons, both logical and empirical, why such a model, IMO, can never exist. You have never commented on that. gpuccio
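For readers who want a concrete picture of the point gpuccio makes above about Fisherian testing, here is a minimal Python sketch (the coin example and its numbers are purely illustrative, not anything from the thread): the causal claim is only assessed indirectly, by asking how probable the observed data would be under the pure-chance null hypothesis.

    from math import comb

    # Illustrative only: p-value for seeing 9 or more heads in 10 tosses of a fair coin,
    # i.e. the probability of data at least this extreme under the chance (null) hypothesis.
    n, k = 10, 9
    p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    print(p_value)   # ~0.0107 -- what gets quantified is the randomness hypothesis, not the causal one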
Bob O'H: I have almost given up discussing with you, but here is one last attempt. Both F2XL and I assume that there is no fitness gain during the passage from the original state (T3SS) to the final state. Indeed, I added my calculations to F2XL's model, because I thought (and still think) that he made an error in the mathematical development. I must remind you that the T3SS to flagellum scenario was invented by darwinists to discredit Behe's concept of IC. Therefore, the T3SS system should be the already selected function which is "coopted" to easily pass to the flagellum. Indeed, Miller and Co. just try to convince everybody that, as the proteins for the T3SS are already there, building up the flagellum must certainly be a joke. F2XL's model (and mine) are aimed at proving that that is not true. Even if you start from the T3SS, to get the flagellum you still have to traverse an evolutionary path which is practically infinite, and which is equivalent to building up from scratch a new gene of at least 490 nucleotides. So, your concept of "cooption" is completely flawed. You insist on function gain as though it were a religious observance. But I must remind you that you have no possible path from T3SS to flagellum, that the argument of the IC of the flagellum stays untouched, and that, if you want to find evidence of hundreds of different coopted intermediate functions which could give a start to a model for traversing that landscape, you are free to try. Simply put, the "evidence" given by Miller about the T3SS does not solve any problem, because even if we accept the T3SS as the ancestor of the flagellum (which I am not at all ready to do), there remains a huge information gap (at least 490 nucleotides, but again that's a very generous underestimate) which requires the demonstration of hundreds of new coopted functions, of which there is obviously no trace. That said, it is obvious that both F2XL's calculations and mine had only one purpose: to demonstrate that it is absolutely impossible that such an information gap may be traversed by random means alone. Is that clear? The calculations are calculations of the probability that the result be obtained by random variation, nothing else. F2XL's calculations are (IMO) wrong, but mine (IMO) are not. So, you have to give an answer: do you think that my calculations are right, in the sense that I have correctly calculated the probability that the "traversing" of the 490 nucleotide difference could happen by mere chance? Please, answer. If your answer is no, please show where the mathematical error is. Don't avoid the question by citing again the problem of function gain. That will come after. If your answer is yes, then let us be clear: you are admitting that the passage from T3SS to flagellum could never happen by chance alone, because its probability is vastly below Dembski's UPB. That's the first question. Once you have admitted that my calculations are right (or, alternatively, shown why they are wrong), then we can speak of the assumptions of function gain or no gain. Indeed, I have discussed that in detail in the cited posts, and you have never commented. So, for your comfort, I paste here the most relevant parts. From post 155: When we ask for a path, we are asking for a path, not a single (or double) jump from here to almost here. I will be more clear: we need a model for at least two scenarios: 1) A de novo protein gene. See for that my detailed discussion in the relevant thread.
De novo protein genes, which bear no recognizable homology to other proteins, are being increasingly recognized. They are an empirical fact, and they must be explained by some model. The length of these genes is conspicuous (130 amino acids in the example discussed on the thread). The search space is huge. Where is the traversing apparatus? What form could it take? 2) The transition from a protein with one function to another protein with a different function, where the functions are distinctly different, and the proteins are too. Let's say that they present some homology, say 30%, which lets darwinists boast that one is the ancestor of the other. That's more or less the scenario for some proteins in the flagellum, isn't it? Well, we still have a 70% difference to explain. That's quite a landscape to traverse, and the same questions as at point 1) apply. You cannot explain away these problems with examples of one or two mutations bearing very similar proteins, indeed the same protein with a slightly different recognition code. It is obvious that even a single amino acid can deeply affect recognition. You must explain different protein folding, different function (not just the same function on slightly different ligands), different protein assembly. That's the kind of problem ID has always pointed out. Behe is not just "shifting the goalposts". The goalposts have never been there. One or two amino acid jumps inside the same island of functionality have never been denied by anyone, either logically or empirically. They are exactly the basic steps which you should use to build your model pathway: they are not the pathway itself. Let's remember that Behe, in TEOE, places the empirical "edge" at exactly two coordinated amino acid mutations, according to his reasoning about malaria parasite mutations. You can agree or not, but that is exactly his view. He is not shifting anything. From post 184: At this point, probably not knowing what else to say, you go back to the concept of gradual fixation. But what are you speaking of? Please explain how you pass from one function (T3SS) to another one (the flagellum) through 490 or so single amino acid mutations, each one fixed in the name of I don't know what function increase. Are you thinking of, say, 245 gradual mutations improving the function of the T3SS in single steps, and then 245 more single mutations gradually building the function of the flagellum? What a pity that, of those 490 or so intermediaries, there is not a trace, either in existing bacteria, or in fossils, or in general models of protein function. Or, if we want to exercise our fantasy (why not? in time I could become a good darwinist), we could split the cake in three: 150 mutations improving the T3SS, 150 more to build some new, and then lost, intermediary function (double cooption!), and then 190 to build up the flagellum. Obviously, all that would happen in hundreds of special "niches", each of them with a special fitness landscape, so that we can explain the total disappearance of all those intermediary "functions" from the surface of our planet! Do you really believe all that? From post 192: Patrick: I think that you obviously understand that there is no hope of splitting the transition into 245 or 163 functional intermediates (2-3 mutation steps). In that case, if you imagine that you can split the transition in such a way, and that each mutation can be fixed quickly, we would be in a scenario similar to the "Methinks it's like a weasel" example.
Indeed, a coordinated mutation of 2-3 amino acids is perfectly accessible to very rapidly dividing beings, like bacteria or the malaria parasite, as Behe has clearly shown in his TEOE. We can accept that with prokaryotes, single mutations can be attained easily enough, double ones with a certain ease, given some time, and triple ones probably imply some difficulty, but can be done (especially if you imagine a whole earth covered with proliferating bacteria). The scenario would change dramatically, in the sense of impossibility, for bigger and slower animals, like mammals. But let's stick to bacteria. The fact is, there are only two ways of splitting the path into 2-3 nucleotide changes: a) You need hundreds of intermediates, all of them with progressively increasing function of some type, all neatly ordered in the pathway which leads to the final flagellum. Each of them must have enough reproductive advantage that it can be fixed. b) You know which are the correct nucleotides, and you artificially fix them when they appear. That's the case of "Methinks it's like a weasel". You already have the information, and you just select it. As option b) is obviously design, let's discuss option a). Again, it is not an option, for 3 different reasons: 1) Logical: there is no reason that in a search space functions should be so strictly and magically related. There is no mathematical model which can anticipate such a structure. Indeed, if it existed, that would be heavy evidence for the existence of God. Moreover, protein functions derive from totally empirical phenomena, like protein folding in a number of very different domains, and there is really no logical reason that a ladder of intermediate functions can "lead" from one function to a different one, and not only in a single lucky case, but practically in all cases. 2) Empirical: That process, or model, has never been observed. The examples cited by Bob are definitely not examples of the splitting of a function into intermediate steps, but rather of single steps leading from one function to a slight variation of the same function. 3) Empirical (again): if the final function is reached through hundreds of intermediate functions, all of them selected and fixed, where are all those intermediates now? Why do we observe only the starting function (bacteria with the T3SS) and the final function (bacteria with flagella)? Where are the intermediates? If graduality and fixation are what happens, why can't we observe any evidence of that? In other words, we need literally billions of billions of molecular intermediates, which do not exist. Remember that the premise is that each successive step has greater fitness than the previous. Where are those steps? Where are those successful intermediates? Were they erased by the final winner, the bacterium with the flagellum? But then, why can we still easily observe the ancestor without flagella (and, obviously, without any of the amazing intermediate and never observed functions)? So, where are your detailed answers and comments on all that? If you don't want to discuss, just say so. If you have a model of function gain in this specific scenario, please describe it. Do whatever you please. I have said all that I could reasonably say. gpuccio
gpuccio - you're right. It's not fair. F2XL tries one derivation of CSI, so I criticise it. You give another, so I criticise yours. And then you complain that I give different criticisms! The fitness gain assumption is important, because otherwise you're just calculating the proportion of the parameter space that could be explored. But, if there is a fitness gain of intermediates, this is an irrelevant calculation, because it will say little to nothing about the probability of the target being reached. It is, I believe, because of this that Dr. Dembski has been working so much on evolutionary informatics.
...statistics is the science of random events, not of causal assumptions.
Sorry, this is just wrong. A lot of modern-day statistics is about causal models. Look up topics like structural equation modelling, path analysis, graphical models etc. Bob O'H
Patrick: I am afraid there is no purpose in going on with this discussion with Bob. He keeps changing his arguments. F2XL's calculations are wrong because they are statistically incorrect. My calculations are wrong because I assume no fitness gain. I really think he is not fair at all. He has never, never said whether he thinks my calculations are right or wrong from a statistical point of view: that has obviously nothing to do with the fitness gain assumption, because, as I have tried to explain to him (although he probably already knows), statistics is the science of random events, not of causal assumptions. But everything is useless. Instead, I would really appreciate F2XL's feedback on the problem of the calculations, because I feel that it is not right to leave it open. So, I will stay tuned to this old thread, in case someone has something pertinent to say. gpuccio
you assume with absolutely no evidence whatsoever that intermediates have no fitness gain.
Name the functional intermediates in the indirect pathway. Also, since when did "intermediates" and "fitness gain" have anything to do with calculating informational bits? You seem to have your own definition of CSI. Or you seem to be asserting that the causal history must be known. F2XL and gpuccio are calculating the probability of an indirect pathway...NOT the informational bits of the flagellum! Bob, that is one giant basic misunderstanding. I cover calculating CSI over at Overwhelming Evidence. But I'll make it short here. In order to calculate the informational bits used to represent my name "Patrick" you consider that each letter has an information capacity of 8 bits. So the calculation is simple: 8 bits X 7 letters = 56 informational bits. In the same way, each "letter" of the DNA has an information capacity of 2 bits. The DNA sequence in the genome encodes the 42 proteins in the flagellum. So in order to calculate the informational bits for the flagellum: 2 bits X xxx letters = xxx informational bits. I did not look up the number of letters in the sequence, but that gives the basic idea. I should state the caveat that no one currently knows the exact information content and that this is the MINIMUM information, since my estimate would only include genes. We're still attempting to comprehend the language conventions of the biological code. MicroRNA, etc. would certainly add to the information content for all biological systems. An analogy to computers is how XML, meta tags, and Dynamic Link Libraries add to the core code (and thus informational content) of a program. I would also refer people back to comment #70, which covers other issues. This is perhaps why ID proponents are shy to provide this calculation, since it's not final and since a lower estimate "might" be less than 500 (Dembski's UPB). But as I noted, I did not look up the sequence, although I remember there being at least 50 genes involved. Patrick
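A minimal Python sketch of Patrick's back-of-envelope bit counting above (the flagellar base count used here is a placeholder, since Patrick explicitly did not look up the real sequence length):

    def ascii_bits(text):
        # 8 bits of information capacity per character, as in the "Patrick" example
        return 8 * len(text)

    def dna_bits(num_bases):
        # 2 bits of capacity per nucleotide (four possible letters: A, C, G, T)
        return 2 * num_bases

    print(ascii_bits("Patrick"))            # 56 informational bits
    placeholder_flagellar_bases = 50000     # hypothetical figure, not a looked-up value
    print(dna_bits(placeholder_flagellar_bases))   # 100000 bits under that placeholder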
gpuccio - the reason I'm concentrating on F2XL is simply that he claimed that he could calculate CSI, and I've been trying to show why his calculations are wrong. Your own calculations are also wrong for the obvious reason that you assume with absolutely no evidence whatsoever that intermediates have no fitness gain. Bob O'H
Bob O'H: "If I’m wrong, you would do better to work through my argument and show where and why it’s wrong. At the moment you’re reduced to repeating yourself because you’re not applying yourself to my arguments." Excuse me, but I am repeating myself because, in the last few days, you gave me not only no argument, but indeed no answer. In the old post #183 you just cite again the possibility of fixation. I have answered in detail to that (see for instance my post #155, 184 and 192), giving very detailed arguments, about which you have never commented. Your only two more recent posts (#193 and 194) were answers to F2XL, and did not address in any way my calculations. Your strange behaviour seems to be: post to F2XL to say: "Hey, your calculations are wrong!" (true, but you have not understood why); and then post to me and say nothing of ny calculations (should I believe you suspect they are right?), and generically citing again the problem of gradual selection, without supporting it in any way, and without addressing the specific objections I have given at least three times in the recent history of this thread, in posts specifically addressed to you (once to Patrick, to be precise). To be clear, the two problems are completely separate: the possibility of selection has nothing to do with the calculation of a probability. Although many forget that, statistics can be applied only to random phenomena, to analyze them or, sometimes, to exclude them. If there is a non random cause, probabilities don't apply. So, the possibility of gradual selection has to be ruled out or affirmed on a different level (logical and empirical), but not by probabilistic calculations. I have tried to discuss that, and you have not answered. Instead, the possibility of random causes in each specific context or model have to be evaluated by a correct calculation of probabilities. I have tried to do that for our specific model, and you have never been kind enough to say if you believe I am right or wrong, or at least if you can't make up your mind. So, let's comment your last (and only) "argument", which I could not comment before because it's in your last post. I suppose it should be contained in the following phrase: "That would be true if it were the probability that there was at least one mutation at the locus in a given time, and that the final mutation was to the “correct” base. However, what is calculated is clearly not that: it’s lacking some vital constituents (like a mutation rate)." Would that be an argument? I scarcely can understamd what it means. Bu I will comment it just the same, in case you are expecting to accuse me of not replying... First of all, my calculations to which you refer are exactly what is needed in the model we were discussing, that is "the probability that there was at least one mutation at the locus in a given time, and that the final mutation was to the “correct” base.", or, to put it better (your phrase is rather obscure), they calculate the probability that the final mutation, after the minimum number of events (490) which can generate a 490 nucleotide mutation, can give the correct base at each of the 490 sites. That corresponds to p^490. Then you say: "However, what is calculated is clearly not that: it’s lacking some vital constituents (like a mutation rate)." What argument would that be? The calculation of a probability does not need any vital constituents: it just needs qunatitative data and context (model). 
A mutation rate is part of the model, but it must be taken into account after you have calculated the probability of the events which the mutations should generate. So, if that can be called an argument, it's just wrong. Anyway, just to repeat myself for the nth time, in my post #190 I have given a simpler and more correct (IMO) way to calculate the probabilities in our discussed model, and I have also taken into account the probabilistic resources implied in the model (including the mutation rate). The result? No comment from you... gpuccio
The probability p has to be multiplied if we want to have the probability of more combined mutations.
That would be true if it were the probability that there was at least one mutation at the locus in a given time, and that the final mutation was to the "correct" base. However, what is calculated is clearly not that: it's lacking some vital constituents (like a mutation rate). If I'm wrong, you would do better to work through my argument and show where and why it's wrong. At the moment you're reduced to repeating yourself because you're not applying yourself to my arguments. Bob O'H
Correction of typo: "the probability of having the 490 correct results after the minimal event (490 mutations) if p^490, as corrctly stated by F2XL." should be: "the probability of having the 490 correct results after the minimal event (490 mutations) is p^490, as correctly stated by F2XL." gpuccio
Bob: At the risk of repeating myself: a) 1/4^{4.7×10^6} is wrong. It is not the probability of one correct mutation; it is the whole search space of the whole genome, and it is not pertinent here. The correct probability for one single correct mutation is 1/(3*(4.7*10^6)), which is a much higher probability. Let's call it p. b) However, the second operation by F2XL is correct, and you are wrong. The probability p has to be multiplied if we want to have the probability of more combined mutations. As I have already said a couple of times (or more), the probability of having the 490 correct results after the minimal event (490 mutations) is p^490, as correctly stated by F2XL. You are wrong about the problem of the order. Let's simplify, and calculate the probability of having two correct specific mutations; let's call them A and B. Each of them has a probability p. The probability of having both mutations is p^2. Although each of the two mutations has to happen at a specific site, and so, if you want, in a specific spatial order, there is no difference if A happens before B, or vice versa, or if both happen simultaneously. The probability of the two mutations occurring together as a final result is the same: p^2. Anyway, I have suggested an alternative line of reasoning for that calculation, IMO much simpler, in my post #190. gpuccio
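A quick simulation of the p^2 point in the comment above (my own illustrative sketch, with an arbitrary value of p): for two independent specified mutations, the joint probability is p^2 whether or not one tracks which of the two happened first.

    import random

    p = 0.01                 # arbitrary per-mutation probability, for illustration only
    trials = 1_000_000
    both = sum(1 for _ in range(trials)
               if random.random() < p and random.random() < p)
    print(both / trials)     # empirical estimate of P(A and B), roughly 1e-4
    print(p ** 2)            # exact value; no ordering of A and B enters the calculation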
F2XL - nope, sorry. You still haven't realised that what you calculated was not what you thought you were calculating. If I follow you, you allow for 490 mutations in total (we can tackle this in more detail later). In 117, you obtain a probability of 1/4^{4.7x10^6}. This is the probability that one mutation happens at one specified position, and that it mutates to the correct base (should it be a 3? I think so, if you condition on the base being wrong to start off with). We'll call this p, anyway. You then take p to the 490th power. What are you calculating here? Well, the probability for the first mutation is the probability that one specified position mutates correctly. The probability for the second mutation is the probability that another specified position mutates correctly. Note that the position that mutates has to be specified too. So, if the order is position A, then B, you can't have B mutate first. Hence, the order of your mutations is important: it's not invariant to permutations of the order. You wanted to calculate something that didn't depend on the order, but I'm afraid you failed: you were out by a factor of 490!, which is a rather large number. Bob O'H
I guess the html code for the following paragraph wasn’t working so I’ll repost it to fix it:
Nope, still not working. :-( Bob O'H
Patrick: I think that you obviously understand that there is no hope of splitting the transition into 245 or 163 functional intermediates (2-3 mutation steps). In that case, if you imagine that you can split the transition in such a way, and that each mutation can be fixed quickly, we would be in a scenario similar to the "Methinks it's like a weasel" example. Indeed, a coordinated mutation of 2-3 amino acids is perfectly accessible to very rapidly dividing beings, like bacteria or the malaria parasite, as Behe has clearly shown in his TEOE. We can accept that with prokaryotes, single mutations can be attained easily enough, double ones with a certain ease, given some time, and triple ones probably imply some difficulty, but can be done (especially if you imagine a whole earth covered with proliferating bacteria). The scenario would change dramatically, in the sense of impossibility, for bigger and slower animals, like mammals. But let's stick to bacteria. The fact is, there are only two ways of splitting the path into 2-3 nucleotide changes: a) You need hundreds of intermediates, all of them with progressively increasing function of some type, all neatly ordered in the pathway which leads to the final flagellum. Each of them must have enough reproductive advantage that it can be fixed. b) You know which are the correct nucleotides, and you artificially fix them when they appear. That's the case of "Methinks it's like a weasel". You already have the information, and you just select it. As option b) is obviously design, let's discuss option a). Again, it is not an option, for 3 different reasons: 1) Logical: there is no reason that in a search space functions should be so strictly and magically related. There is no mathematical model which can anticipate such a structure. Indeed, if it existed, that would be heavy evidence for the existence of God. Moreover, protein functions derive from totally empirical phenomena, like protein folding in a number of very different domains, and there is really no logical reason that a ladder of intermediate functions can "lead" from one function to a different one, and not only in a single lucky case, but practically in all cases. 2) Empirical: That process, or model, has never been observed. The examples cited by Bob are definitely not examples of the splitting of a function into intermediate steps, but rather of single steps leading from one function to a slight variation of the same function. 3) Empirical (again): if the final function is reached through hundreds of intermediate functions, all of them selected and fixed, where are all those intermediates now? Why do we observe only the starting function (bacteria with the T3SS) and the final function (bacteria with flagella)? Where are the intermediates? If graduality and fixation are what happens, why can't we observe any evidence of that? In other words, we need literally billions of billions of molecular intermediates, which do not exist. Remember that the premise is that each successive step has greater fitness than the previous. Where are those steps? Where are those successful intermediates? Were they erased by the final winner, the bacterium with the flagellum? But then, why can we still easily observe the ancestor without flagella (and, obviously, without any of the amazing intermediate and never observed functions)? gpuccio
To keep this conversation from dying, let's be nice and assume an indirect pathway where there do exist functional intermediate states within the reach of Darwinian processes that do not have the same function as the final flagellum. Homologs flying everywhere like a tornado, HGT, duplications, all the "engines of variation". Bob hasn't deigned to define any functional intermediates, except to assert that "that's how evolution works!", but let's be nice and assume they're all within 2-3 changes. How would that change the math? Patrick
F2XL, Bob O'H, kairosfocus: While we wait for F2XL's answer, I would like to add some thoughts to the discussion. I realize that calculating the probabilistic resources for a series of 490 mutations is rather challenging, so I would suggest a different approach, which IMO can simplify the mathematical reasoning. So, let's forget the whole E. coli genome for a moment, and let's consider just the 490 nucleotides which, according to F2XL's approximate reasoning, have to change in a specific way to allow the transition from the old function to the new, coopted function (the flagellum). Those 490 nucleotides are a specific subset of the whole genome. So, we can reason limiting our calculations to that specific subset. As the original set, the whole genome, is 4.7 * 10^6 bases long, we can say that our subset is about 1 : 10^4 the size of the whole genome. What is the search space of possible configurations of that subset? That's easy. It's 4^490, that is, about 10^295. If we assume that a specific configuration has to be reached, the random probability of achieving it through any random variation event is 1 : 10^295. Now, let's reason about probabilistic resources. As our subset is smaller than the whole genome, our resources have to be divided by 10^4. In other words, each single mutation in the genome has about a 1 : 10^4 probability of falling within the chosen subset. And there can be no doubt that any single mutation falling in the specific subset determines a new state of the subset, and therefore "explores" one of the configurations in the search space. But what are our probabilistic resources? For that, please review F2XL's posts #95, 96, 99 and 102. I will copy here his final results: "To inflate the probabilistic resources even more we will round up to 10 to the 55th power. This number will represent the total number of mutations (in terms of base pair changes) that have happened since the origin of life. We now have established our replicational resources for this elaboration." Now, I must say that F2XL has been far too generous in calculating that number of 10^55 mutations in the bacterial genome on our whole planet in all the time of its existence. I do believe the real number is at least 20 orders of magnitude lower, and that is evident if you read his posts. Anyway, let's take it for good. Let's remember that we have to divide that number by 10^4. In other words, only 1 in 10^4 mutations will take place in our specific subset. That does not sound like a big issue at this point, after having conceded at least 20 extra orders of magnitude to the "adversary", but let's do it just the same. One should never be too generous with darwinists! So, our total number of available bacterial mutations in the whole history of the earth goes "down" to 10^51. Are we OK here? Now, it's easy: we have a search space of 10^295 configurations which has been explored, in the whole history of our planet, at best with 10^51 independent mutations. What part has been explored? Easy enough: 1 part in 10^244. In other words, all the mutations in our world's history which took place in our relevant subset of 490 nucleotides have only a 1 : 10^244 chance of finding the correct solution. Let's remember that Dembski's UPB is 1 : 10^150! Are we still doubting the flagellum's CSI? Possible anticipated objections. Only two: 1) What about gradual mutation with functional fixation? That's not an objection. The point here is that we need 490 changes to get to the new function.
Fantasies about traversing landscapes will not do. I have answered this line of reasoning in detail in various posts, especially in #184. I have received no answer from Bob about that. 2) The functional configurations of our 490 nucleotides can be more than one. That's true. But let's remember how we arrived at that number of 490 different nucleotides: F2XL assumed a very high homology between the 35 proteins of the T3SS and those of the flagellum (which is probably not true), and reduced to only 1% the nucleotides which really have to change in the 35 genes to achieve the new function. Again, I think that he was too generous in that assumption. That's why I think that we can confidently assume that the functional island in those 490 nucleotides has to be considered extremely small. But how big would it have to be to change things? Easy: to get back even to a probability of 1 : 10^150 (Dembski's UPB), which in itself would be enough to affirm CSI, our island of functional states would have to be as big as 10^94 configurations. In other words, those 490 nucleotides would have to be able to give the new function in 10^94 different configurations! And we would still be at Dembski's UPB value, which is indeed a generous concession to the adversary, if ever I saw one! I really believe that a value of about 1 : 10^50 is more than enough to exclude chance. In that case, our "functional island" would have to be as big as 10^196 configurations to give chance a reasonable possibility to work! So, to sum up: no possible model of step by step selection, no possible island of functionality to "traverse the landscape". The result? Very simple: the flagellum is pure CSI. It is IC. It is designed. gpuccio
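For anyone who wants to check the orders of magnitude in gpuccio's post above, here is a small Python sketch of the same arithmetic (it simply reproduces the figures quoted in the post; the 10^55 mutation count and the 1-in-10^4 subset fraction are taken from the post, not derived independently):

    from math import log10

    search_space_log10 = 490 * log10(4)            # ~295: configurations of a 490-base subset
    total_mutations_log10 = 55                     # F2XL's generous planet-wide estimate, as quoted
    in_subset_log10 = total_mutations_log10 - 4    # only ~1 in 10^4 mutations lands in the subset
    explored_log10 = in_subset_log10 - search_space_log10

    print(round(search_space_log10))               # ~295
    print(round(explored_log10))                   # ~ -244, i.e. roughly 1 chance in 10^244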
Gentlefolks: Riding a tiger just now -- little time. I just note, F2, that the Pacific drift analogy is originally GP's, not mine. [Mine was on searching out configs in 1 m^3 vats to get a flyable micro-jet. The same basic issues obtain; cf. the always linked, appendix 1 note 6. The example is an adaptation of Sir Fred's famous tornado-in-a-junkyard-assembles-a-747, scaled down so that diffusion etc. can be seen at work. Observe the link onward to the issues on 2LOT, especially in light of the usual open-systems objections. Bottom line -- config spaces with sparse functional islands and archipelagos cannot be spanned effectively other than by intelligence, per experience and in light of the implications of searches constrained by resources, even on the gamut of the observed cosmos.] Evo Mat advocates ALWAYS, in my experience, seek to divert from these issues. The reason: frankly, the origin of complex, functionally constrained information is the mortal, bleeding wound in evo mat thought. GEM of TKI kairosfocus
F2XL: While I agree with almost everything you say, and with your addressing of Bob's objections (which I have tried to do too), I have to point out that you should also address my point that I think there is a serious error in your calculation. That is not intended as a criticism at all: my reasoning is exactly the same as yours, and the final conclusions are absolutely the same, but I think we should definitely make clear what the correct mathematical approach is. If you could please find the time to review my posts #158 and #163, you can see that my idea is that you are wrong when you say (as you have repeated also in your last post): "With 3 possible changes that can happen after a point mutation has occurred on a base pair (duh), and with 4.7 million base pairs in the entire genome, the number of possible changes that can happen to the EXISTING INFORMATION (as gpuccio pointed out, I did make a mistake when I originally assumed 4 possible changes) is calculated by taking 3 to the 4.7 millionth power. This gives us a result of 10 to the 2,242,000th power base pair mutations that have the possibility of occurring." As I have already pointed out, you should not take 3 to the 4.7 millionth power, but just multiply 3 by 4.7 million. Please refer to my previous posts to see why. That does not change anything, because in the following step you have to take the result to the 490th power, and that gives us a low enough probability (about 1 : 10^3430) to go on safely with our reasoning. As I discussed with Bob, that would represent the probability of attaining that specific coordinated mutation of 490 nucleotides after the minimum mutational event which has the power to cause it: 490 mutations. To be definitely clear, the "chronological order in which the mutations happen" is not important (although the best results are obtained with 490 simultaneous mutations, which is the only scenario which avoids the possibility of a mutation erasing a previous favourable one), while the "order and site of each specific mutation in the genome" is absolutely important. It is important to distinguish between these two different meanings of "order", because many times darwinists have based their criticisms on that misinterpretation. That's all. As I have said, I could be wrong in that calculation, although at this point I don't think I am, at least in the general approach. Some mathematical details could need to be refined. But I would appreciate your feedback on that point, so that we can go on with your reasoning after having reached some agreement on the mathematics involved. gpuccio
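A small sketch of the corrected arithmetic gpuccio describes above (my own check of the quoted figures, nothing more): p is taken as one specified base changing to the one correct alternative out of 3 × 4.7 × 10^6 possible single-base changes, and p^490 is then the probability of all 490 specified changes.

    from math import log10

    genome_length = 4.7e6
    possible_single_changes = 3 * genome_length    # 3 alternative bases at each of 4.7e6 positions
    p_single = 1 / possible_single_changes         # ~7.1e-8
    log10_p_490 = 490 * log10(p_single)

    print(p_single)
    print(round(log10_p_490))   # ~ -3503; the "about 1 : 10^3430" quoted above corresponds to rounding p down to 10^-7 (490 x 7 = 3430)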
I guess the html code for the following paragraph wasn't working so I'll repost it to fix it: "I guess all someone has to do to see right through that claim is read this comment to find out why the methods I used don't represent what you think they do." F2XL
I guess I'm drawing a lot of attention here. That would seem a reasonable evolutionary explanation. So explain to me how a bacterial flagellum could evolve in a step by step manner that involved the change of only one base pair per step. Explain how each change would provide a functional benefit. No. You're presenting your model, based on your assumptions. Completely false. If you've been paying attention since the beginning of the discussion over the flagellum, you would know where the scenario comes from. https://uncommondesc.wpengine.com/intelligent-design/chance-law-agency-or-other/#comment-289302 My assumptions are based on MATZKE'S assumptions, which are that ALL the homologs can easily show up in a single E. coli cell (highly unrealistic, but that appears to be what he thinks is possible) and cross any hurdle to becoming a working, functioning flagellum much like what we see today. Give us evidence that your assumptions are reasonable. 1. They aren't reasonable (nor are they supposed to be) 2. They aren't my assumptions. 3. All in all the assumptions put forth are biased AGAINST what I'm trying to prove. Especially the idea that all 35 genes that would code for the homologs share a 95% sequence similarity, and that all the homologs would already be present in every last E. coli that has ever existed on earth, and the notion that there have been 10 to the 55th power opportunities to make the changes needed to cross any neutral gaps that we may come across in the process of turning the homologs into the rotary flagellum. Oh, and arguments from incredulity or ignorance won't cut it. I noticed. Read the paper I linked to, please. Or, if you don't want to (and there's no compulsion), don't try and make criticisms of something you haven't read. DOI NOT FOUND isn't what I would call an evolutionary explanation. If you claim it's sufficient to overthrow what I'm claiming then please, feel free to explain and apply it to what Matzke ('xcuse me,) I used as a scenario. In reality, it might make a difference, of course, but assuming the order is irrelevant is a decent first approximation. I agree with you on this one, and yes, if I really was paying close attention to the order in which the 490 base pairs had to change I would have to include an entirely new step in the whole process as well. Are you even aware that your calculation assumed a set order? I sure hope you aren't one of the many people who insists that Dembski is a "pseudo" mathematician... Look at how you set up the calculation - you took the product of the probabilities. Indeed, if you have multiple events which all have to happen, then that would be the method you would use. Refer to the information below to see why. This assumes a fixed order, as I showed above. Are you kidding me? I guess all someone has to do to see right through that claim is read this comment to find out why the methods I used don't represent what you think they do. But apparently that wasn't clear enough. Let's see if I can make it any more detailed than I already did. 1. Take 3 pennies (or fair coins). 2. Number each penny with a respective number from 1-3. 3. Now ask yourself the following question: what are the odds that I will flip all tails after I've flipped each coin? 4. With the probability of each individual coin landing tails being roughly 1 chance in 2, the odds that I would get all tails after I flipped all three coins is one chance in 8 (1/8). 5.
This is done by taking the individual odds for each coin (1/2) and multiplying it by itself 3 times, since there are three coins. 1/2 X 1/2 X 1/2 = 1/8 6. Take note of the fact that it does not matter at all whether you flip the coin numbered "1" or "2" or "3," the odds of getting all tails when you've flipped all coins are the same. We can see this clearly below in a visual portrayal of all the possible outcomes: hhh hht hth thh htt tht tth ttt 7. You claimed that this didn't prove that when you take into account multiple events and put together their probabilities you must multiply the individual probabilities. The evidence you cite for your claim? The difference from the coin toss is that in a coin toss, the first toss has to come first (!), so the order can't be permuted. Yeah, no s%#$ the first coin toss HAS to come first, that's why we call it the "first" coin toss! If we applied that kind of logic to mutations then no matter what the outcomes are we would have to argue that they had to happen in a specific order because the first mutation HAS to come first! By your own reasoning mutations HAVE to happen in a specific order (they don't)! 8. After taking a quick skim of your "evidence" that what I did assumed a set order, you presented the following: Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Suppose they are independent with probabilities p_A, p_B, p_C. There are six ways in which this could occur: ABC ACB BAC BCA CAB CBA For the first, the probability is P(A).P(B|A).P(C|A,B) = p_A.p_B.p_C For the second, the probability is P(A).P(C|A).P(B|A,C) = p_A.p_C.p_B etc. The total probability is thus 6p_A.p_C.p_B. Again I'm very sorry if it sounds like I'm making personal attacks here, I'm just frustrated because many Darwinists, or Design critics, accuse us of making the same "mistakes" you did. In this case the mistake you made was that you simply used a straw man of what I was trying to calculate. In your demonstration you calculated the odds of 3 events happening in a certain order. Irrelevant to say the least, because what I was doing was trying to calculate what the odds would be for ALL those events to occur in the first place. It doesn't matter which events happen in what order, the odds of ALL the events occurring are still the same. The same applies to the 490 mutations: it's not the ORDER in which they occur but WHETHER they occur. Since the odds of getting tails on a coin flip are one in two, if you wanted to calculate the odds of getting all tails on X coin flips, then you would take the number of outcomes per flip (2) and take that to a power equal to the number of places/times it must occur (X). 9. With 3 possible changes that can happen after a point mutation has occurred on a base pair (duh), and with 4.7 million base pairs in the entire genome, the number of possible changes that can happen to the EXISTING INFORMATION (as gpuccio pointed out, I did make a mistake when I originally assumed 4 possible changes) is calculated by taking 3 to the 4.7 millionth power. This gives us a result of 10 to the 2,242,000th power base pair mutations that have the possibility of occurring. So within the 35 genes that code for the homologs we have made the assumption that there is only a 5% sequencing difference between the info for the homologs and the info for the actual flagellum.
We make an even more hopeful assumption that in order to fulfill the 5 criteria previously mentioned, and thus be preservable by selection, there are only 490 base pairs (1% of the total information for the flagellum) which must all make their respective changes. With the odds of any particular point mutation occurring being on the order of 1 in 10 to the 2,242,000th power (similar to the 1/2 odds with a coin), and with (AT LEAST) 490 mutations that have to occur (similar to the 3 coins which are to be flipped), you take the odds of any given mutation and take that to the 490th power (similar to taking 1/2 to the 3rd power with the three coins). 10. The final result? When taken to the 490th power, 1 in 10 to the 2,242,000th power becomes 1 in 10 to the 1,098,580,000th power (similar to how 1/2 to the 3rd power becomes 1/8). Like it or not, those are the odds that 490 point mutations will hit all the right base pairs AND make the right changes thereafter. Can we apply the probabilistic resources now or what? F2XL
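For the coin illustration in the comment above, here is a short enumeration check (a minimal sketch in Python): listing all 2^3 outcomes confirms that exactly one of the eight is all tails, and relabelling which coin counts as "first" does not change that count.

    from itertools import product

    outcomes = list(product("ht", repeat=3))        # all 2**3 = 8 equally likely outcomes
    all_tails = [o for o in outcomes if set(o) == {"t"}]

    print(len(outcomes), len(all_tails))            # 8, 1  ->  probability 1/8 = (1/2)**3
    # Permuting the coin labels only permutes the tuples; the counts stay the same.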
F2XL, please explain why I'm wrong, rather than accusing me of ignorance about probability theory. I would like to quickly note, Bob, that I wasn't trying to attack you personally. I just felt like I made my explanation of why the order of events occurring isn't important too vague. I think maybe I should've given a more direct approach to what you said in 129. I first note to you that F2 seems to have very good reason, in an "Expelled" world, for not revealing a lot about himself. Good observation. :) I try and keep a lot of this stuff to myself except for in online discussions, I'm just hoping I can get tenure before the system expels me first. ;) I gotta hand it to you KF, your analogy of drifting from island to island pretty much sums it up. Sure there may be many "endpoints" but that doesn't mean that you can realistically reach any of them with limited (probabilistic) resources. I know it sounds crazy but I actually haven't gotten around to reading Meyer's infamous article. I've skimmed through parts of it but I guess I better print it out and read the whole thing some time by the end of the month. gpuccio - quite simply, your assumption that there are only 490 mutations, because you need 490 bases to change, is absurd. You're assuming that the only mutations that could happen were at those positions. Bob, I really do apologize if I sound like I'm attacking you personally but I'm getting to the point where I think maybe you haven't actually read through what I was doing. As gpuccio pointed out, he was basing his assumptions off of mine. And just so you and everyone else are aware, I pointed out VERY CLEARLY BEFORE that mutations can happen anywhere in the entire 4.7 million base pair genome. The assumption that you only need 490 particular point mutations to occur (equivalent to 1% of the information for the entire flagellum, and a hundredth of a percent (.01%) of the genome for the entire E. coli) doesn't HELP what I'm trying to prove; it dumbs down the problem that selection faces by exaggerating the amount of information those 490 base pairs really code for. gpuccio - sorry, but you are assuming only 490 mutational events. If there are more, you have to calculate the probability that out of the N events, 490 are in the "correct" position. I would highly recommend you follow gpuccio's advice as follows: If, instead, you want to argue how F2XL got the number on which those calculations are made, that is the necessity of 490 specific mutations to the flagellum, then you should discuss F2XL's previous posts, which detailed that reasoning. F2XL
Bob O'H: I have not followed in detail all the reasoning by F2XL up to now. If I were to sum it up (but I could be wrong) I would say: 1) Let's suppose that the flagellum is derived from some other functional ensemble by cooption (the usual "argument" against its irreducible complexity in the darwinian field). I suppose the general idea is that it is derived from the T3SS. 2) Let's avoid all the general objections to that relationship (which came first, and so on), and let's assume the derivation model (I think F2XL did that). 3) Obviously, the 35 proteins in the flagellum are not the same as those in the T3SS. Here is where the darwinian discourse is completely unfair. At best, many of them (but not all) present homologies with other proteins in the T3SS. That means that there are similarities which are not random, but they could be easily explained on a functional basis and not as evidence of derivation. 4) However, let's assume derivation as a hypothesis. I think F2XL has reasoned very simply: we have 35 proteins which have changed. If there were no change, bacteria with the T3SS would have the flagellum, but that's not the case. How big is the change? Here F2XL has given an approximate reasoning, very generous indeed towards the other side, assuming high homology between the 35 protein pairs, calculating the number of different amino acids between them according to that approximation, and then assuming that only a part of the amino acid changes (10%, if I remember well) is really functionally necessary for the change from T3SS to flagellum. That's how that number of 490 specifically necessary mutations comes out. OK, it's an approximation, but do you really think it's wrong? I think, instead, that F2XL has been far too accommodating in his calculations. The real functional difference is probably much higher. We are speaking of 35 proteins which have to change in a coordinated way to realize a new function. That's the concept of cooption. I have always found that concept absolutely stupid, but that's what we are discussing here. 5) That's why we need our coordinated mutation of 490 nucleotides in the whole genome: to get to the new function from the old one. Again, that's the spirit of cooption. That coordinated mutation, according to my calculations, which at this point I think you are accepting, as you have given no specific objection to the mathematics, has a probability of the order of 1 : 10^3000 of being achieved by the minimum necessary mutational event: 490 independent mutations. 6) I see that in your last post, instead of saying anything against that reasoning, or against those numbers, you are going back to another kind of objection, which I have already answered in detail earlier in this thread (see post #155), a discussion you interrupted briskly with the following: "Folks, you're now throwing examples at me that have nothing to do with the problem we were discussing. I don't want to get side-tracked from F2XL's problem, so I won't respond here. I'm sure another post will appear at some point to discuss these matters further. Sorry for this, but I'd like to find out if F2XL's calculation of CSI is valid. If we wander off onto other topics, he might decide we're not interested any more, and not reply." Well, now you should find out if my calculation of CSI is valid. I have been extremely detailed. We are in front of a coordinated change which has such a low random probability that, in comparison, the achievement of Dembski's UPB becomes a kid's game. That's obviously CSI at its maximum.
And we are only considering changes in the effector proteins, and not in regulation, assembly, and so on. At this point, probably not knowing what else to say, you go back to the concept of gradual fixation. But what are you speaking of? Please explain how you pass from one function (T3SS) to another one (the flagellum) through 490 or so single amino acid mutations, each one fixed in the name of I don't know what function increase. Are you thinking of, say, 245 gradual mutations improving the function of the T3SS in single steps, and then 245 more single mutations gradually building the function of the flagellum? What a pity that, of those 490 or so intermediaries, there is not a trace, either in existing bacteria, or in fossils, or in general models of protein function. Or, if we want to exercise our fantasy (why not? in time I could become a good darwinist), we could split the cake in three: 150 mutations improving the T3SS, 150 more to build some new, and then lost, intermediary function (double cooption!), and then 190 to build up the flagellum. Obviously, all that would happen in hundreds of special "niches", each of them with a special fitness landscape, so that we can explain the total disappearance of all those intermediary "functions" from the surface of our planet! Do you really believe all that? So, finally, the issue is simple. If: a) my calculations are even grossly right, and we are in front of random probabilities of the order, at best, of 1 : 10^3000, and b) you cannot give any reasonable model of gradual mutation and fixation, least of all any evidence of it, then: I think we are finished. What else is there to discuss? gpuccio
gpuccio - I don't see the point of calculating the probability of having the right result with 490 mutational events. How does that relate to any process we see in evolution? In reality, there is a constant process of mutation and fixation, so surely you should be considering that. Bob O'H
Bob O'H: That's perfectly correct. That would be the part regarding the computational resources. But calculating the probability of having the right result with 490 mutational events is the first step. Do you agree with my result, at least as an order of magnitude? Take notice, moreover, that if you multiply the mutational events, each new event has approximately the same probability of finding a correct solution or of destroying a previous one... Indeed, for my first calculation to be correct, we would have to assume that the 490 mutational events are contemporary, otherwise we would have to take into account the possibility of such an "interference". While with the first 490 events that possibility is negligible, it could become higher as you multiply the number of mutations, as long as you have no way of "fixing" the positive results already found. gpuccio
gpuccio - sorry, but you are assuming only 490 mutational events. If there are more, you have to calculate the probability that out of the N events, 490 are in the "correct" position. Bob O'H
Bob O'H: Wrong. I never assumed that. I just started from a definite point in F2XL's reasoning, where he had assumed that, and you had not objected. Please, review the posts, and you will see that I did not comment on those starting numbers; I just tried to correct the following calculations, because I felt that F2XL was wrong on a couple of important points, and that your objections were wrong too. So, if you want to stick to the problem of those specific calculations, I would like to have your opinion on whether my calculations are right. If, instead, you want to argue about how F2XL got the number on which those calculations are based, that is, the necessity of 490 specific mutations to get to the flagellum, then you should discuss F2XL's previous posts, which detailed that reasoning. Anyway, as you opened the discussion on the calculations themselves (correctly, I would say, because I think F2XL's calculations were wrong, but still IMO with wrong arguments on your part), I think you should now contribute to that specific part of the discussion, even if you don't agree on the premises. gpuccio
gpuccio - quite simply, your assumption that there are only 490 mutations, because you need 490 bases to change, is absurd. You're assuming that the only mutations that could happen were at those positions. Bob O'H
Bob O'H, F2XL and kairosfocus: I have posted a detailed, and motivated, version of the calculations under discussion here, at post #158, and repeated it, with some further comment, at post #163. I would really appreciate it if you could comment on that, so that we can try to find an agreement based on facts. I am not assuming that I am right, just suggesting that we start discussing the problem as it is. I think we can find the right approach without any question of authority, although any technical comment from a mathematician or statistician would be appreciated. Statistical problems can be tricky, but they can be solved. There is no reason to argue about something which can certainly be understood correctly by all of us. gpuccio
OOPS: 4^100 mns kairosfocus
Bob [and F2XL]: I see your protest on probability. I first note to you that F2 seems to have very good reason, in an "Expelled" world, for not revealing a lot about himself. Now, I agree that it is better to address the issue on the merits rather than dismissively, but I also think you are conflating two (or three) very different things. Namely:
1a] De novo origin of life and associated codes, algorithms and maintenance and executing machinery; and 1b] Similar de novo major body plans based on DNA codes of order 100's k - mns - ~ 3 bn base pairs.
--> That is, OOL and what has been descriptively called body-plan level macroevolution. With:
2] Essentially microevolutionary changes in a near neighbourhood in Hamming/Configuration space.
By way of illustration, consider yourself on a raft in a vast Pacific, moving at random. There are relatively small islands, in archipelagos -- some of the islands being close together, some archipelagos being close-spaced, some with islands big enough to have mountain ranges. You start at an arbitrary location, with finite and limited resources. What are the odds that you will be able to first get to ANY island? [Negligible] If instead you start on any given island, what are the odds that you will be able to drift at random to a remote archipelago? [Negligible] By contrast, what are the odds you could drift among islands of a tightly spaced archipelago? [Much higher, but still low.] Similarly, what are the odds that you will be able to move at random from one peak to another on the same island? [Surprisingly low, but doable.] In short, the three-element probability chains you posed in 129 are vastly divergent from the actual issues at stake. Hence the force and sting in Meyer's 2004 remarks, which, I beg to remind onlookers, passed "proper peer review by renowned scientists":
The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
De novo creation of 100 mn base pairs -- even using huge amounts of gene duplication to get the space to do the information generation in -- will be essentially a search in a config space of order 4^100 mn (as there is no constraint on which of G, C, A, T may end up at the points in the chain until functionality re-appears, itself another challenge for the duplicate chains). Even if we do 3^100 mn, we still wind up in a space with ~2.96 * 10^47,712,125 states. And, in that state space we know that stop codons "at random" will be quite common at odd points in the chain, so we know that functional states will be sparse, exceedingly sparse. Such a search, on the gamut of our observed planet [or even cosmos], will be maximally unlikely to succeed. By contrast, we commonly observe that FSCI, even on the relevant scales, is a routine product of agency. GEM of TKI PS: DS, I'm sure you would have brushed with a Mac or an early workstation; these were dominated by the 32/16-bit 68000 family. The 6500 family had a strange relationship with the 6800 family of course, indeed it was "inspired" by it; and peripherals could easily be mixed and matched, the 6522 VIA being especially nice. In turn the architecture of the 6800 has a more than passing resemblance to an 8-bit version of the PDP-11. (DEC was of course bought out by Compaq, then HP . . .) kairosfocus
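For anyone who wants to check the size of the quoted state space without expanding such numbers, here is a minimal log-arithmetic sketch (Python; the 10^8 base-pair figure is simply the "100 mn" assumption taken from the comment above, not an independent estimate):

    import math

    N = 100_000_000  # assumed number of new base pairs (the "100 mn" above)

    for states_per_site in (4, 3):
        # log10 of states_per_site^N, kept in log space to avoid huge integers
        log10_space = N * math.log10(states_per_site)
        mantissa = 10 ** (log10_space - math.floor(log10_space))
        print(f"{states_per_site}^N is about {mantissa:.2f} x 10^{int(log10_space)}")

    # Prints roughly:
    #   4^N is about 1.36 x 10^60205999
    #   3^N is about 2.96 x 10^47712125  (the ~2.96*10^47,712,125 figure above)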
F2XL, please explain why I'm wrong, rather than accusing me of ignorance about probability theory. Or, if you don't want to do that, kindly show us your credentials so that we know you have the authority to judge others' numeracy. Bob O'H
Sorry I can't keep posting on a regular schedule. Given the amount of free time I have on my hands, I will probably be better off just making larger comments regarding this discussion on the flagellum and the X filter, along with CSI, during the weekends. I will likely start responding to comments from 129 on up. It seems like there is one commenter in particular who doesn't know how to take into account the odds of multiple events occurring. But yes, I haven't forgotten the task at hand. Nonetheless I would like to quickly say that I agree with the notion that random number generators are pseudo-random; eventually they will repeat the same pattern if given enough trials (as far as I know). F2XL
kf I'm trying to think of what computers I worked on with 6800 and 68K processors and am drawing a blank. I know there were some. Might have been video game consoles circa 1980 - 1982. I did some work with Apple IIx but those were 6502. Most of my work was with i80x86 hardware design and assembly language programming. DaveScot
DS:
First computer I built was an Altair 8800 (i8080 uP) which I believe predated your 6800 Heathkit by a few years . . .
I'd say! (A real pity the 6800 - 68000 evolution ran out of steam . . .) GEM of TKI kairosfocus
kf First computer I built was an Altair 8800 (i8080 uP) which I believe predated your 6800 Heathkit by a few years. First hardware design was an S-100 wirewrapped RS-232 card for the Altair. First program (that I recall) was a ~25 byte initialization sequence that had to be entered using binary front panel switches to initialize the RS-232 chip (I think it was an i8052). After the boot code was loaded in I could use an attached serial terminal to key in (using keys 0-9 and a-f) code faster. Good times. The next project was floppy disk controller. DaveScot
PS: Homebrew version! kairosfocus
Hi Dave I see your round slide rule. Mine was a general-purpose rule, with two hinged plastic pointers with hairlines, like the hands of a watch. They were set up so you could set the angles on a scale then advance the whole as required, reading off answers on one of, as I recall, five circular scales, as in like A through E. As Wiki discusses, there is a key wraparound capacity in the circular rule, and of course its linear dimension is 1/3 that of the equivalent linear job, thanks to pi. [Memory is a bit vague now! Decided to do a web search -- bingo, here it [or its first cousin] is; I gotta check my dad, as I think that I passed it back to him, complete with manual still in it. Manual is online at the just linked too. On further looking around, I think the unit is a Gilson Midget 4", right down to the case colour.] The key to the beasties is the power of logarithms. The key scales are log scales, and so they multiply/divide by adding or taking away lengths on log scales. Multiplying/dividing logs gives you powers stuff [and conversion of log bases etc]. Y'know, come to think of it, my favourite log-linear and log-log graph paper or even ordinary paper with log scales etc are also analogue computers, with scientific visualisation tossed in. [Anyone here remember working with Smith Charts on T-lines etc?] Never thought of it that way before. (Next time I teach say A level physics or 1st yr college physics, I will have to remember that bit!) GEM of TKI PS: GP, my first computer was a 6800-family based SBC by Heathkit, which I assembled and used to teach myself machine code/assembly language programming. Still have it, though the manual was soaked and had to be discarded. [It sat in a barrel for several years here in M'rat, in storage as I had to be away from the volcano. Wonder if that manual is online?] kairosfocus
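As a tiny illustration of that slide-rule principle (not any calculation from the thread, just the log trick itself), multiplication really is just the addition of log-scale lengths:

    import math

    # Slide-rule principle: scale lengths are proportional to log10 of the numbers,
    # so "sliding" one length against another (adding logs) performs multiplication.
    a, b = 3.7, 21.0
    summed_lengths = math.log10(a) + math.log10(b)
    print(10 ** summed_lengths)  # ~77.7
    print(a * b)                 # 77.7, the same product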
My first computer was analog - a slide rule. Kairos mentioned that there were some round versions of slide rules. I have one of those, or at least one common version, and need to know how to use it. It's the E6B. They're still commonly used today. DaveScot
Kairosfocus: thank you for your fascinating remembrances about analog computers. I really envy your background: my first experiences with a computer were with an Intel 8086, and I even tried to run a Mandelbrot program on it (you can imagine with what results!). gpuccio
On RNGs: Most people in the intelligent design camp are aware of the total lack of any truly beneficial random mutational events to account for the evolution of complexity we see in life (Dr. Behe; Edge of Evolution 2007). So the problem, first and foremost, for the Theistic IDer (Intelligent Design), is to actually prove that "mind" can have a notable effect on "random chance" that would be greater than the normal random chance that would occur from the "normal" environment. The following studies offer the first tentative "baby steps" in that direction of positive proof for the Theistic IDer. Page 187, "Your Eternal Self", Hogan: In the studies, random number generators (RNGs) around the world were examined after events that affected great numbers of people, to see whether the numbers began to show some order during the events. During widely televised events that have captured the attention of many people, such as Princess Diana's death and the 9/11 tragedies, the combined output of the 60 RNGs around the world showed changes at the exact moments of the announcements of the events that could not be due to chance. To add control to their study, researchers identified an event they knew was about to happen that would have an impact on large numbers of people and set up a study to measure the effects on RNGs in different parts of the world... Oct 3, 1995, the OJ Simpson verdict was chosen: ...around the time that the TV preshows began, at 9:00 AM Pacific Time, an unexpected degree of order appeared in all RNGs. This soon declined back to random behavior until about 10:00 AM, which is when the verdict was supposed to be announced. A few minutes later, the order in all 5 RNGs suddenly peaked to its highest point in the two hours of data, precisely when the court clerk read the verdict... For me this is verifiable and repeatable evidence that overcomes the insurmountable problems that "random chance" has posed to Darwinism and offers positive empirical proof of the mind over matter principle for the position held by Theistic IDers. bornagain77
Hi Mavis [and GP]: I see I am dating myself. A walk down memory lane . . .
1] Analogue computers A long time ago, there were machines known as Analogue Computers, which in electronic form -- electromechanical, mechanical and hydraulic forms exist[ed?] too -- used operational amplifiers, potentiometers, diode function generators etc etc. They were used to set up [i.e. with patch-cords, almost like an old-fashioned telephone exchange!] differential equations, and then run them as simulations, often showing the result on a plotter or a CRT screen or the like. And yes, at a certain point I actually had my hand on one of the old beasties. Nowadays, it is all digital simulations on PCs -- cf Gil's simulation packages used to model various dynamics. I notice the Wiki article classes good old slip-sticks [slide-rules] as analogue computers too. Never really thought of that before, but yes, it'll do. [And, yes, I used to use one of those, as a student. My dad passed on to me a very special circular job, that still exists somewhere about.]
2] Getting "truly" random numbers If we were to do an analogue computer today, it would probably be integrated with a digital one. And so I guess my heritage is shown in my statement that one could integrate a Zener noise source to make truly random numbers. [Of course, one will need to compensate for statistical distributions, as GP pointed out.] The point being, that a Zener diode is a fair noise source, and we can build a circuit to take advantage of that. Then, we can rework that to get a well-distributed random number process. [NB: Back in the bad old days of my Dad's work in statistics circa 1960, a neat trick was to use the phone book to generate random digits, as the last 4 or so digits of a phone number are likely to be reasonably random.] Here is a random number service and its description of how it uses radio noise as a similar source of credibly truly random numbers.
3] On chance, necessity, intelligence This morning, I saw an offline remark by email, and now a remark in the thread that touches on this. M:
Levers lift things. What are these “mechanical necessities of the world” you speak of? What does that mean? If you mean we build things according to our understanding of mechanics (material behaviour, physics etc) then, well, of course! What else? . . . . The random number generators we create generate random numbers . . . When people make things they don’t throw things together randomly . . . . You instruct the thing you built to do what you want it to do via the mechanism you built in when you were creating it.
1 --> We observe natural regularities, e.g. things tend to fall. These are associated with low contingency and give us rules of physical/mechanical necessity.
2 --> E.g. things that are heavy enough will fall, unless supported . . . cf Newton's apple and the moon, both of which were "falling" but under significantly different circumstances [non-orbital vs orbital motion, the role of centripetal forces . . .] that gave rise to the inference to the Law of Gravitation.
3 --> By contrast, certain circumstances show high contingency. This permits them to have high information-storing capacity; e.g. the 26-state element known as the alphabetical character. [Alphanumeric characters extend this to 128 or more states per character.]
4 --> Outcomes for such high-contingency situations may be driven by chance and/or intelligence. (E.g. we could use the Zener source to drive a truly random string of alphanumeric characters. This has in it intelligence to set up the situation, and randomness in the outcome.)
5 --> This would also be a case of a cybernetic system that is intelligently designed and configured but takes advantage of natural regularities and chance processes.
6 --> The organised complexity reflects intelligent action, and the designed [and hard to achieve!] outcome is a credibly truly random alphanumeric character string.
7 --> Thus, agents -- per observation -- may design systems that integrate chance, necessity and intelligence in a functional whole. Thus, such is both logically and physically possible, as well as reasonably regularly empirically observed. This last entails that there is a probability significantly different from zero.
8 --> Further to this, we see that certain empirically observable, reliable signs of intelligence exist: organised complexity, functionally specified complex information, systems that are integrated and have a core that is irreducible relative to achieving function. Indeed, we routinely use these signs in significant and momentous situations, to infer to intelligence at work, in day to day life and common sense reasoning, in forensics, in scientific work, in statistics, etc.
9 --> Further to this, we also notice that we do not see significant signs of a fourth causal pattern. [For millennia of thought on the subject, "Other" in the post's title line has persistently remained empty, once we empirically trace out causal factors and patterns.]
10 --> All of this is a commonplace. The problem is that once we apply it to the fine tuning of the cosmos, the origin of the organised complexity and FSCI in cell based life, the origin of body-plan level biodiversity and the origin of a credible mind required to do the science etc, we come up with interesting inferences to intelligent action. And these otherwise well-warranted inferences do not sit well with a dominant school of thought among many sectors of the civilisation we are a part of, namely evolutionary materialism.
4] Fractals These are often generated by cybernetic systems, whether in a digital [or analogue -- Farmer et al did just that in the early days of chaos research] computer or in a biosystem like a fern. [I gather blood vessels in the body also follow a fractal growth pattern -- maybe that is a compact way to get a branching network that reaches out to just about all the cells in the body.] In the case of a shoreline [one of the classic cases], we have forces of necessity and chance at work. 
Insofar as we may look at a snowflake as a fractal system, we see that there is a necessity imposed by the structure of the H2O molecule, giving rise to hexagonal symmetry. There is a pattern imposed by temperature [which type of flake forms]. Then when the conditions favour the flat, dendritic flakes beloved of photographers, the presence of microcurrents and water molecules along the flake's path gives rise to the well-known complexity. [Cf discussion and links in the always linked APP 3] But observe: the fractals are produced by lawlike processes and/or some random inputs. That is why the EF catches the first aspect as "law." Next, observe that I do not normally discuss CSI but instead FSCI, as that makes the key point plain . . .
5] FSCI When a highly contingent situation exists, beyond the Dembski bound, and it also gives rise to a relatively rare functionally specified state, then that is a reliable sign of intelligence. For example, look at the Pooktre chair tree. Trees branch, often in a fractal-like pattern -- probably functional in allowing them to capture sunlight. But here, we are dealing with evidently intelligently constrained growth, producing a functional pattern recognisable as a chair. It is very rare in the configuration space, and it is functionally specified. It is information rich. It is an artifact, and one that was so identified by commenters "live" as being a real enough tree. [Had it been Photoshop, that too would have been a different type of design.] It is logically and physically possible for a "natural" tree to assume such a shape by chance + necessity only, but so maximally improbable that we confidently and accurately infer to design as its best explanation. This is the same pattern that leads us to infer to design for, say, the nanotechnology and information systems in the cell. For, the observed universe does not have anywhere near adequate probabilistic resources to be likely to generate the cell by chance + necessity across its credible lifespan. GEM of TKI kairosfocus
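To make the "lawlike processes plus random inputs" point concrete, here is a minimal sketch (Python, offered purely as an illustration of how a fractal such as a fern frond can emerge from fixed rules plus chance; it is not anyone's model of fern biology):

    import random

    # Barnsley's fern: four fixed affine maps with fixed weights (the "law" part),
    # one of which is chosen at random at every step (the "chance" part). The
    # familiar self-similar frond emerges from that combination.
    MAPS = [
        # (a, b, c, d, e, f, weight) for x' = a*x + b*y + e, y' = c*x + d*y + f
        ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
    ]

    def fern_points(n, seed=1):
        rng = random.Random(seed)
        x, y = 0.0, 0.0
        points = []
        for _ in range(n):
            r, cum = rng.random(), 0.0
            for a, b, c, d, e, f, w in MAPS:
                cum += w
                if r <= cum:
                    x, y = a * x + b * y + e, c * x + d * y + f
                    break
            points.append((x, y))
        return points

    def render(points, width=60, height=30):
        # Coarse character-grid rendering, just to show the shape emerging
        grid = [[" "] * width for _ in range(height)]
        for x, y in points:
            col = int((x + 2.75) / 5.5 * (width - 1))   # fern spans roughly -2.2 < x < 2.7
            row = int((1 - y / 10.0) * (height - 1))    # and 0 < y < 10
            if 0 <= row < height and 0 <= col < width:
                grid[row][col] = "*"
        print("\n".join("".join(r) for r in grid))

    render(fern_points(50_000))

Run as-is, this prints a crude fern; the point relevant to the thread is that the output is fully accounted for by the four fixed rules plus the random choices, which is the sense in which the comments above treat fractal output as law plus chance rather than as CSI.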
Mavis: "The random number generators we create generate random numbers." No, that's not correct. They are pseudo random number generators, and they generate pseudo random numbers. No program can generate random numbers, because programs work by necessity. As Dembski has discussed in detail, generating a truly random sequence is conceptually a big challenge. Obviously, what a program can do is reading random seeds (external to the program itself) and elaborating them according to specific, and appropriate, necessary algorithms, so that the final sequence will look like a true random sequence. That's what is meant by "pseudo-random". In case you are not convinced, I paste here from Wikipedia: "There are two principal methods used to generate random numbers. One measures some physical phenomenon that is expected to be random and then compensates for possible biases in the measurement process. The other uses computational algorithms that produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The latter type are often called pseudorandom number generators. A "random number generator" based solely on deterministic computation cannot be regarded as a "true" random number generator, since its output is inherently predictable. John von Neumann famously said "Anyone who uses arithmetic methods to produce random numbers is in a state of sin." How to distinguish a "true" random number from the output of a pseudo-random number generator is a very difficult problem. However, carefully chosen pseudo-random number generators can be used instead of true random numbers in many applications. Rigorous statistical analysis of the output is often needed to have confidence in the algorithm." gpuccio
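A small Python illustration of the distinction being drawn here (the seeded generator stands in for any pseudo-random algorithm; the secrets module stands in for an external entropy source such as the Zener-diode or radio-noise generators mentioned above):

    import random
    import secrets

    # A pseudo-random generator is fully determined by its seed: re-seed it with
    # the same value and it reproduces exactly the same "random" sequence.
    g1 = random.Random(12345)
    g2 = random.Random(12345)
    print([g1.randint(0, 9) for _ in range(10)])
    print([g2.randint(0, 9) for _ in range(10)])   # identical to the line above

    # By contrast, the secrets module draws on entropy collected by the operating
    # system (device timings, hardware noise, etc.), the usual software stand-in
    # for a "true" random source.
    print([secrets.randbelow(10) for _ in range(10)])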
F2XL: Welcome back! I don't know if you had the time to read my whole post about the calculation carefully. I am happy we agree about taking 3 and not 4 as the possible change space of a single mutation at a single site, but, as soon as you have time, I would really like to know your opinion about the second point I make (indeed a more quantitatively relevant one): that the probability of obtaining a specific single nucleotide mutation is 1 in 3 * 4.7 million, and not 1 in 3 to the 4.7 millionth power. The reason for that is that the number of possible single mutations, with one mutational event, is exactly that: 3 * 4.7 million. Instead, 4 to the 4.7 millionth power is the total number of combinations of the whole genome, that is, the whole search space. In other words, there are 4^4.7million different sequences that a genome that long can assume. That's an important value, but it is not the one pertinent here. Anyway, that consideration should not affect your reasoning much, because, if I am right, the real probability of a single coordinated mutation of 490 specific nucleotides, after 490 mutational events, is still low enough to give strength to any possible argument against chance, being (again, if I am not wrong) equal to the probability of a single mutation raised to the 490th power, that is, about 1 : 10^3430. There are in reality other small adjustments to consider, for instance the redundancy of the genetic code, which allows for synonymous mutations, but that would not change much the order of the result. So, I believe that the order of probability is anyway so low that you can confidently go on with your argument, but it is important that we all agree (including our friendly "adversaries") on how to compute it, to avoid possible misunderstandings. Again, if I am wrong in my calculations (perfectly possible, I am not a mathematician), I would appreciate the input of someone who can give us the correct mathematical and statistical perspective. gpuccio
Mavis: "If there is some doubt, as you say "necessarily", could you give me the circumstances under which a fractal will have none, some, a lot, a large amount of CSI?" No, I was not clear enough. There is no doubt that a fractal has no CSI. Indeed, I wrote "is not CSI, and is not classified as necessarily designed by the EF": the "necessarily" does not refer to the nature of a fractal, but to the nature of the EF. The EF detects CSI, not design. CSI implies design, but design does not imply CSI. Therefore, if a piece of information is detected by the EF as having CSI, it is interpreted as "necessarily designed" by the filter itself. On the contrary, if the same piece of information has no CSI (the case of a fractal), then the EF cannot judge whether it is designed or not. But it can certainly affirm that it has no CSI, so it is not "necessarily designed". Is that clear? gpuccio
I see F2XL hasn't appeared here. I'll wait for his response to my last comments. Perhaps he has a real life as well. :-) Good observation. :) Yeah, I probably won't be able to get back on here 'till this Saturday due to a few schedule conflicts. But don't worry about getting sidetracked, I'll still come back on here and continue from where I left off. After taking a quick skim of the comments on here, I noticed gpuccio made an interesting observation on what I was doing. After reading what he said, I noticed that I did in fact make an error which he was able to point out. I initially used 4 base pairs as a reference point for the different outcomes you could get when changing a piece of information for the homologs. But since there are four total, and thus only THREE other possible ways an existing base pair can change, to get the odds for a single mutation (out of a conservative estimate of 490) that can help beat the 5 criteria and thus pass the neutral gap, you would take 3 to the 4.7 millionth power (as gpuccio pointed out). So the new odds (single mutation) would be on the order of less than one chance in 10 to the 2,242,000th power. I was confusing increasing information with changing existing information in my numbers. I'll move on and respond to some of the other comments Saturday morning, so don't worry about getting sidetracked. F2XL
PS: Mavis, cybernetic systems are based on our insight into the mechanical necessities of the world.
Levers lift things. What are these "mechanical necessities of the world" you speak of? What does that mean? If you mean we build things according to our understanding of mechanics (material behaviour, physics etc) then, well, of course! What else?
We set up entities that then reliably do as instructed [even generate pseudo or credibly actual random numbers — no reason a Zener noise source cannot be put into a PC for instance].
The random number generators we create generate random numbers.
But, the configuration of components to make up a system is anything but a random walk.
When people make things they don't throw things together randomly.
Then, we program them [assuming the system is programmable separate from making up the hardware config].
You instruct the thing you built to do what you want it to do via the mechanism you built in when you were creating it.
We patch analogue computers and adjust pots, putting in diode function generators etc, we tune control systems [maybe set up adaptive ones . . .],
Who is this "we" you speak of? Do you do all those things?
we write and load software.
Do we?
Some of that stuff generates fractals.
Yes, it's that stuff I was trying to talk about before all the wordy distractions.
We can compare fern growth a la screen with real ferns, noting self-similarity and scaling patterns etc (well do I remember doing and playing with such coding).
And what conclusions did you come to when you were so playing?
But in so doing, we must ask how do fern leaves grow?
Must we? I thought it was about fractals still?
Ans, according to an inner program, i.e. we are back at the program.
Sigh. What else would tell a fern leaf how to grow, apart from a set of instructions telling a fern leaf how to grow? Or rather, a set of rules. Mavis Riley
A fractal output, in itself, is not CSI, and is not classified as necessarily designed by the EF
If there is some doubt, as you say "necessarily", could you give me the circumstances under which a fractal will have none, some, a lot, a large amount of CSI? And how would one perform the calculation to determine that? After all, you are not just assuming that it will not have measurable CSI without performing the calculation? Mavis Riley
Basically, this review describes evidence that something that appears to be irreducibly complex can have functional intermediates, and that fitness can increase along paths in sequence space. So, Behe just shifts the goalposts: he says this is minor, and that bigger shifts would be impossible.
Eh? Behe's been saying the same thing for years before that article was published. Ditto for other ID proponents. I can't remember where I read/heard it, but Behe previously talked about "weak IC" (or maybe it was someone else using that phrasing, reporting on what he said) that's composed of a couple of components, and how Darwinian indirect pathways should be capable of producing such structures. I remember Dembski talking about possible pathways, including gene duplications, for modifying existing CSI in a book. It's been years since I read it, but he's always acknowledged that minor islands of functionality should be accessible. And this isn't an issue of definitions... I think ID proponents have always been very clear on what is meant by "minor" and "trivial".
Well, go ahead mate and collect the evidence!
Okay. The Ohno's Dilemma paper is addressing methods by which gene duplicates might be preserved long enough for one copy to diverge in function. Just scanning the abstract suggests that they are working from a model that assumes the starting gene had several activities to start with, one at a very low level. After duplication, one copy is subject to selection for the low-level function, allowing divergence over time. This model of promiscuous function has been proposed in similar form by a number of other people. The problem with all such models is that they assume that there will be overlapping low-level functions available somewhere in the genome (or biosphere, if you allow for horizontal gene transfer) for any conceivable desirable step. Check out the following paper for a refutation of that idea, though that is not how they frame it. Multicopy Suppression Underpins Metabolic Evolvability Wayne M. Patrick, Erik M. Quandt, Dan B. Swartzlander, and Ichiro Matsumura Mol. Biol. Evol. 24(12):2716–2722. 2007 doi:10.1093/molbev/msm204 Department of Biochemistry, Center for Fundamental and Applied Molecular Evolution, Emory University, Atlanta, Georgia
Our understanding of the origins of new metabolic functions is based upon anecdotal genetic and biochemical evidence. Some auxotrophies can be suppressed by overexpressing substrate-ambiguous enzymes (i.e., those that catalyze the same chemical transformation on different substrates). Other enzymes exhibit weak but detectable catalytic promiscuity in vitro (i.e., they catalyze different transformations on similar substrates). Cells adapt to novel environments through the evolution of these secondary activities, but neither their chemical natures nor their frequencies of occurrence have been characterized en bloc. Here, we systematically identified multifunctional genes within the Escherichia coli genome. We screened 104 single-gene knockout strains and discovered that many (20%) of these auxotrophs were rescued by the overexpression of at least one noncognate E. coli gene. The deleted gene and its suppressor were generally unrelated, suggesting that promiscuity is a product of contingency. This genome-wide survey demonstrates that multifunctional genes are common and illustrates the mechanistic diversity by which their products enhance metabolic robustness and evolvability.
In brief, they knocked out 104 different metabolic genes in E. coli, then asked if any of the E. coli genes in its entire genome was able to rescue the cells when vastly overexpressed. Take-home message: out of 104 genes knocked out, only 20 could be replaced at all. That leaves 84 unrescued genes that could not be replaced by promiscuous activity or any other mechanism. Another thing to note is the presumption of the model.
Before duplication, the original gene has a trace side activity (the innovation) in addition to its original function.
Notice that the end of the search, the innovation, is already present at the beginning. How unlikely, yet how convenient! Other relevant info: http://www.proteinscience.org/cgi/reprint/ps.04802904v1.pdf (by Behe and Snoke, August 2004) Eytan H. Suchard, "Genetic Algorithms and Irreducibility," Metivity Ltd
Genetic Algorithms are a good method of optimization if the target function to be optimized conforms to some important properties. The most important of these is that the sought-for solution can be approached by cumulative mutations such that the Markov chain which models the intermediate genes has a probability that doesn't tend to zero as the gene grows. In other words, each improvement of the gene - a set of 0s and 1s - follows from a reasonable edit distance - the minimum number of bits that change between two genes - and the overall probability of these mutations does not vanish. If, for reaching an improvement, the edit distance is too big, then GAs are not useful even after millions of generations and huge populations of millions of individuals. If on the other hand the probability of a chain of desired mutations tends to zero as the chain grows, then also the GA fails. There are target functions that can be approached by cumulative mutations but yet, statistically, defy GAs. This short paper presents a relatively simple target function whose minimization can be achieved stepwise by small cumulative mutations but yet GAs fail to converge to the right solution in ordinary GAs.
A two-part paper by phylo Royal Truman and Peter Borger titled "Genome truncation vs mutational opportunity: can new genes arise via gene duplication?" Here is the abstract of Part 1:
Gene duplication and lateral gene transfer are observed biological phenomena. Their purpose is still a matter of deliberation among creationist and Intelligent Design researchers, but both may serve functions in a process leading to rapid acquisition of adaptive phenotypes in novel environments. Evolutionists claim that copies of duplicate genes are free to mutate and that natural selection subsequently favours useful new sequences. In this manner countless novel genes, distributed among thousands of gene families, are claimed to have evolved. However, very small organisms with redundant, expressed, duplicate genes would face significant selective disadvantages. We calculate here how many distinct mutations could accumulate before natural selection would eliminate strains from a gene duplication event, using all available 'mutational time slices' (MTSs) during four billion years. For this purpose we use Hoyle's mathematical treatment for asexual reproduction in a fixed population size, and binomial probability distributions of the number of mutations produced per generation. Here, we explore a variety of parameters, such as population size, proportion of the population initially lacking a duplicate gene (x0), selectivity factor(s), generations (t) and maximum time available. Many mutations which differ very little from the original duplicated sequence can indeed be generated. But in four billion years not even a single prokaryote with 22 or more differences from the original duplicate would be produced. This is a startling and unexpected conclusion given that 90% and higher identity between proteins is generally assumed to imply the same function and identical three dimensional folded structure. It should be obvious that without new genes, novel complex biological structures cannot arise.
Here is the abstract of Part 2:
In 1970, Susumo Ohno proposed gene and genome duplications as the principal forces that drove the increasing complexity during the evolution from microbes to microbiologists. Today, evolutionists assume duplication followed by neo-functionalization is the major source of new genes. Since life is claimed to have started simple and evolved new functions, we examined mathematically the expected fate of duplicate genes. For prokaryotes, we conclude that carrying an expressed duplicate gene of no immediate value will be on average measurably deleterious, preventing such strains from retaining a duplicate long enough to accumulate a large number of mutations. This genome streamlining effect denies evolutionary theory the multitude of necessary new genes needed. The mathematical model to simulate this process is described here.
Andreas Wagner, “Energy Constraints on the Evolution of Gene Expression,” Molecular Biology and Evolution, 2005 22(6):1365-1374; doi:10.1093/molbev/msi126
I here estimate the energy cost of changes in gene expression for several thousand genes in the yeast Saccharomyces cerevisiae. A doubling of gene expression, as it occurs in a gene duplication event, is significantly selected against for all genes for which expression data is available. It carries a median selective disadvantage of s > 10^-5, several times greater than the selection coefficient s = 1.47 x 10^-7 below which genetic drift dominates a mutant's fate. When considered separately, increases in messenger RNA expression or protein expression by more than a factor 2 also have significant energy costs for most genes. This means that the evolution of transcription and translation rates is not an evolutionarily neutral process. They are under active selection opposing them. My estimates are based on genome-scale information of gene expression in the yeast S. cerevisiae as well as information on the energy cost of biosynthesizing amino acids and nucleotides.
Royal Truman's 2006 article "Searching for Needles in a Haystack"
The variability of amino acids in polypeptide chains able to perform diverse cellular functions has been shown in many cases to be surprisingly limited. Some experimental results from the literature are reviewed here. Systematic studies involving chorismate mutase, TEM-1 beta-lactamase, the lambda repressor, cytochrome c and ubiquitin have been performed in an attempt to quantify the amount of sequence variability permitted. Analysis of these sequence clusters has permitted various authors to calculate what proportion of polypeptide chains of suitable length would include a protein able to provide the function under consideration. Until a biologically minimally functional new protein is coded for by a gene, natural selection cannot begin an evolutionary process of fine-tuning. Natural selection cannot favour sequences with a long term goal in mind, without immediate benefit. An important issue is just how difficult statistically it would be for mutations to provide such initial starting points. The studies and calculations reviewed here assume an origin de novo mainly because no suitable genes of similar sequence seem available for these to have evolved from. If these statistical estimates are accepted, then one can reject evolutionary scenarios which require new proteins to arise from among random gene sequences.
Patrick
Bob O'H: In the meantime, while we wait for F2XL, I would like to comment on the mathematical aspect which has been controversial between you and him, because I have a feeling that both of you are wrong. Maybe I am wrong too, but I would like to check. As I understand it, F2XL has put the question in these terms:
1) E. coli has a genome of 4.7 million base pairs.
2) We have to account for 490 specific mutations (I will not discuss this number, and will go on from here).
Now, I think the way to reason is: the probability of having a specific nucleotide substitution, if we have a single mutational event, is:
a) 1 : (3 * 4.7 million), that is 1 : 1.41*10^7. Let's say 1 : 10^7 to simplify computations.
Why? Because each mutational event can change each single point in three different ways (for instance, if you have A at one point, and it mutates, it can become T, C or G), and the single mutational event can happen at any of the nucleotide sites in the genome. So, I think F2XL is wrong here, because he takes 4 instead of 3, and raises it to the power of the length of the genome, instead of just multiplying by it. It is correct to have the length of the genome as an exponent only if you are computing all possible combinations of nucleotides of that length, which is not the case here.
So, let's go on. The next question is: what is the probability of having a specific combination of 490 mutations, if we have 490 single mutational events? Notice that here the order in which the events happen has no importance; we can just as well consider them simultaneous, or happening in any order. Instead, the position of the final mutations in the genome is fixed: we are looking for 490 definite mutations at those specific 490 sites. Are we OK with that?
Well, I think that the problem is similar to this one: I have three coins on a table, in a specific order (the nucleotide sites I want to change), and I flip each coin once (in any possible chronological order, that doesn't matter, provided I keep the order of the coins on the table). What is the probability of having, in the end, a specific sequence (say, three heads)? The possible sequences are 2^3, that is 8, and the probability of a specific combination is 1/8, that is 0.125. That can be obtained by multiplying the probabilities of each single event (0.5*0.5*0.5, that is 0.5^3, that is 0.125). The same is valid for our specific combination of 490 mutations with 490 mutational events, in no specific chronological order, but in a very specific order in the genome. As the probability of each event is, see point a), 7*10^-8, the probability of our specific 490-nucleotide mutation, after 490 random mutational events, should be:
b) 1 : (10^7)^490, that is of the order of 1 : 10^3430.
That's my final probability for the specific 490-nucleotide mutation after 490 mutational events. It's a really low probability, far beyond any conceivable UPB, but it is not the same as the one computed by F2XL, or, as far as I understand, by you. Maybe I am wrong. Am I? However it is, I think we have to arrive at a correct computation... gpuccio
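For what it is worth, the arithmetic in this comment can be checked in log space with a few lines of Python (the 4.7 million and 490 figures are simply taken from the discussion above, not independently verified):

    import math

    genome_length = 4_700_000   # E. coli genome size used in the discussion (base pairs)
    required_changes = 490      # number of specific substitutions assumed above

    # One random single-point mutational event: 3 possible changes at each of
    # genome_length sites, so one specific substitution has probability 1 in 3*L.
    p_single = 1 / (3 * genome_length)
    print(f"one specific substitution: 1 in {3 * genome_length} (~{p_single:.1e})")

    # The three-coin analogy: one specific outcome out of 2^3 equally likely ones.
    print("three coins, one specific sequence:", 0.5 ** 3)

    # All 490 specific substitutions in 490 events, computed via log10 because
    # the probability underflows ordinary floating point.
    log10_p = required_changes * math.log10(p_single)
    print(f"490 specific substitutions: about 10^{log10_p:.0f}")

    # With the exact 1/(3 * 4.7e6) this gives roughly 10^-3503; with the rounded
    # "1 in 10^7" used in the comment it gives the 1 : 10^3430 quoted above.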
Bob O'H: Sorry for sidetracking you. I thought it was you who threw into the discussion the paper about traversing landscapes... Anyway, I am waiting for F2XL too. gpuccio
Folks, you're now throwing examples at me that have nothing to do with the problem we were discussing. I don't want to get side-tracked from F2XL's problem, so I won't respond here. I'm sure another post will appear at some point to discuss these matters further. Sorry for this, but I'd like to find out if F2XL's calculation of CSI is valid. If we wander off onto other topics, he might decide we're not interested any more, and not reply. Bob O'H
Bob O'H: I have found, I think, the abstract of the first paper (but I remember I read the full paper, so I will go on looking for it). It should be the following: Evolution of Hormone-Receptor Complexity by Molecular Exploitation Jamie T. Bridgham, Sean M. Carroll, Joseph W. Thornton* Abstract: According to Darwinian theory, complexity evolves by a stepwise process of elaboration and optimization under natural selection. Biological systems composed of tightly integrated parts seem to challenge this view, because it is not obvious how any element's function can be selected for unless the partners with which it interacts are already present. Here we demonstrate how an integrated molecular system—the specific functional interaction between the steroid hormone aldosterone and its partner the mineralocorticoid receptor—evolved by a stepwise Darwinian process. Using ancestral gene resurrection, we show that, long before the hormone evolved, the receptor's affinity for aldosterone was present as a structural by-product of its partnership with chemically similar, more ancient ligands. Introducing two amino acid changes into the ancestral sequence recapitulates the evolution of present-day receptor specificity. Our results indicate that tight interactions can evolve by molecular exploitation—recruitment of an older molecule, previously constrained for a different role, into a new functional complex. Just to start the discussion, and without entering into detail about the procedure of "using ancestral gene resurrection" and its possible biases, I just ask you: do you really think that artificial lab work which modifies just two aminoacids, simply altering the affinity of a receptor for very similar ligands, is evidence of anything? What is it showing? That very similar interactions can be slightly modified in what is essentially the same molecule by small modifications? Who has ever denied that? I am sorry, but I must say that Behe is perfectly right here. When we ask for a path, we are asking for a path, not a single (or double) jump from here to almost here. I will be more clear: we need a model for at least two scenarios: 1) A de novo protein gene. See for that my detailed discussion in the relevant thread. De novo protein genes, which bear no recognizable homology to other proteins, are being increasingly recognized. They are an empirical fact, and they must be explained by some model. The length of these genes is conspicuous (130 aminoacids in the example discussed on the thread). The search space is huge. Where is the traversing apparatus? What form could it take? 2) The transition from a protein with a function to another one with another function, where the functions are distinctly different, and the proteins are too. Let's say that they present some homology, say 30%, which lets darwinists boast that one is the ancestor of the other. That's more or less the scenario for some proteins in the flagellum, isn't it? Well, we still have a 70% difference to explain. That's quite a landscape to traverse, and the same questions as at point 1) apply. You cannot explain away these problems with examples of one or two mutations bearing very similar proteins, indeed the same protein with a slightly different recognition code. It is obvious that even a single aminoacid can deeply affect recognition. You must explain different protein folding, different function (not just the same function on slightly different ligands), different protein assembly. Those are the kinds of problems ID has always pointed out. 
Behe is not just "shifting the goalposts". The goalposts have never been there. One or two aminoacid jumps inside the same island of functionality have never been denied by anyone, either logically or empirically. They are exactly the basic steps which you should use to build your model pathway: they are not the pathway itself. Let's remember that Behe, in TEOE, places the empirical "edge" at exactly two coordinated aminoacid mutations, according to his reasoning about malaria parasite mutations. You can agree or not, but that is exactly his view. He is not shifting anything. gpuccio
Bob O'H: Obviously, I agree with kairosfocus' comments, summed up in the following: "Yes, an already existing protein may bounce around on its hill of functionality, maybe even move across to a close enough neighbouring peak. But that has nothing to do with: [1] ab initio, getting to cell based life with its nanotechnologies, from monomers in prebiotic soups [cf the now available discussion on prebiotic soups in TMLO. If you need it Foxit will download the file.] [2] the integrated cluster of shifts in cells, tissues, organs and systems to get to novel body plans." More in detail, I remember reading with attention the paper about corticoid receptors, and being really disappointed with it, while I don't remember reading the second one you mention. While I agree in general with Behe's comments, I will probably give you my specific view, if I can find and access the original papers. Discussing real examples is exactly what can bring our discussion to better achievements. gpuccio
Bob: Here is the relevant problem you need to traverse to get to a place where you can confidently say that evolutionary intermediates are not an issue:
The Cambrian explosion represents a remarkable jump in the specified complexity or "complex specified information" (CSI) of the biological world. For over three billions years, the biological realm included little more than bacteria and algae (Brocks et al. 1999). Then, beginning about 570-565 million years ago (mya), the first complex multicellular organisms appeared in the rock strata, including sponges, cnidarians, and the peculiar Ediacaran biota (Grotzinger et al. 1995). Forty million years later, the Cambrian explosion occurred (Bowring et al. 1993) . . . One way to estimate the amount of new CSI that appeared with the Cambrian animals is to count the number of new cell types that emerged with them (Valentine 1995:91-93) . . . the more complex animals that appeared in the Cambrian (e.g., arthropods) would have required fifty or more cell types . . . New cell types require many new and specialized proteins. New proteins, in turn, require new genetic information. Thus an increase in the number of cell types implies (at a minimum) a considerable increase in the amount of specified genetic information. Molecular biologists have recently estimated that a minimally complex single-celled organism would require between 318 and 562 kilobase pairs of DNA to produce the proteins necessary to maintain life (Koonin 2000). More complex single cells might require upward of a million base pairs. Yet to build the proteins necessary to sustain a complex arthropod such as a trilobite would require orders of magnitude more coding instructions. The genome size of a modern arthropod, the fruitfly Drosophila melanogaster, is approximately 180 million base pairs (Gerhart & Kirschner 1997:121, Adams et al. 2000). Transitions from a single cell to colonies of cells to complex animals represent significant (and, in principle, measurable) increases in CSI . . . . In order to explain the origin of the Cambrian animals, one must account not only for new proteins and cell types, but also for the origin of new body plans . . . Mutations in genes that are expressed late in the development of an organism will not affect the body plan. Mutations expressed early in development, however, could conceivably produce significant morphological change (Arthur 1997:21) . . . [but] processes of development are tightly integrated spatially and temporally such that changes early in development will require a host of other coordinated changes in separate but functionally interrelated developmental processes downstream. For this reason, mutations will be much more likely to be deadly if they disrupt a functionally deeply-embedded structure such as a spinal column than if they affect more isolated anatomical features such as fingers (Kauffman 1995:200) . . . McDonald notes that genes that are observed to vary within natural populations do not lead to major adaptive changes, while genes that could cause major changes--the very stuff of macroevolution--apparently do not vary. In other words, mutations of the kind that macroevolution doesn't need (namely, viable genetic mutations in DNA expressed late in development) do occur, but those that it does need (namely, beneficial body plan mutations expressed early in development) apparently don't occur.6
Yes, an already existing protein may bounce around on its hill of functionality, maybe even move across to a close enough neighbouring peak. But that has nothing to do with: [1] ab initio, getting to cell based life with its nanotechnologies, from monomers in prebiotic soups [cf the now available discussion on prebiotic soups in TMLO. If you need it Foxit will download the file.] [2] the integrated cluster of shifts in cells, tissues, organs and systems to get to novel body plans. And in that context, the flagellum is a useful toy example, one that F2XL has already long since shown runs into serious probabilistic resource constraints, never mind the various tangents that may distract us from the central point. [Remember, per the fall of France 1940, such distraction has been a core component of, say, Blitzkrieg -- it may win a rhetorical battle but it does not adequately address the fundamentals of the issue.] Bob, you need to show us that there is a credible route from the assumed tail-less E. coli to the tailed one. We know that intelligences can traverse such search spaces, but we have no good reason to see that the abstract possibility that chance can do so will have any material effect in the real world where we have to address availability of search resources. Remember we are talking about dozens of proteins, and a self-assembly system that has to have sufficiently functional intermediates that natural selection and the like can reinforce them into niches, whence they move on to the next level. (And the TTSS seems to be more of a subset derivative than a precursor, i.e. the code embeds the subset functionality.) GEM of TKI PS: Mavis, cybernetic systems are based on our insight into the mechanical necessities of the world. We set up entities that then reliably do as instructed [even generate pseudo or credibly actual random numbers -- no reason a Zener noise source cannot be put into a PC for instance]. But, the configuration of components to make up a system is anything but a random walk. Then, we program them [assuming the system is programmable separate from making up the hardware config]. We patch analogue computers and adjust pots, putting in diode function generators etc, we tune control systems [maybe set up adaptive ones . . .], we write and load software. Some of that stuff generates fractals. We can compare fern growth a la screen with real ferns, noting self-similarity and scaling patterns etc (well do I remember doing and playing with such coding). But in so doing, we must ask how do fern leaves grow? Ans, according to an inner program, i.e. we are back at the program. kairosfocus
gpuccio - the paper is a review of several pieces of work on molecular evolution, each showing that there is a landscape that can be traversed. They even mention two examples (hormone detection by steroid receptors and repressor–operator binding in the E. coli lac system) where a "lock and key" mechanism can evolve. Behe's reaction is typical (hmm, somehow I think we might find ourselves in disagreement here). Basically, this review describes evidence that something that appears to be irreducibly complex can have functional intermediates, and that fitness can increase along paths in sequence space. So, Behe just shifts the goalposts: he says this is minor, and that bigger shifts would be impossible. Well, go ahead mate and collect the evidence! I see F2XL hasn't appeared here. I'll wait for his response to my last comments. Perhaps he has a real life as well. :-) Bob O'H
Mavis: I was going to answer, but Patrick has anticipated me. I can see no contradiction between what Patrick has said and what both kairosfocus and I have said. The concept is simple. A fractal output, in itself, is not CSI, and is not classified as necessarily designed by the EF (let's remember, however, that it could be designed just the same. The EF can well have false negatives; indeed all designed things which have not enough complexity will escape the EF). In the same way, the fractal procedure for computing the fractal output, if simple enough, is not CSI. But if that procedure is part of a longer code which uses it in a complex context, then the whole code would exhibit CSI, although an isolated part of it may not exhibit it. In the same way, in a computer program a single instruction may not be complex enough to exhibit CSI, but a functional sequence of 100 instructions is CSI. I hope that answers your question. gpuccio
Even if you could not decide between two very similar textures, one generated manually and one procedurally? How can specification matter at that point?
Designers can use multiple methods/tools to reach an intended result. How the actor acted is a separate question. https://uncommondesc.wpengine.com/intelligent-design/how-does-the-actor-act/
Does it?
Rhetorical question... They're intended to "encourage the listener to reflect on what the implied answer to the question must be." Ponder comment #152 especially.
Who’s right?
We are all correct. kf: It is the programs and formulae that generate them that pass the EF. gpuccio: Obviously, the system which computes the fractal is a completely different thing… me (149): the systems generating the complexity [fractals in this case] are taken into account gpuccio:
In theory, there could be fractal parts in the non-coding DNA, but I am not aware of evidence of that.
Check out fractogene.com to contact those who are looking for such evidence. http://www.junkdna.com/fractogene/05_simons_pellionisz.pdf Patrick
An intended result can be reached via algorithm when intelligence is involved.
The question asked, by you, was
Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed?
A rendering is an artificial construction and by its very nature is designed. My point is that the specific detail generated by procedural textures is "designed" in the same way that the differences between blades of grass are "designed". Not predictable except in the general case. Earlier you said
It would certainly affect the calculating of informational bits–the systems generating the complexity are taken into account, along with the reduced amount of information necessary to represent the entire object–but I hope that makes it obvious how silly your objection is (aka the presence of fractals does not equate to the EF always returning a false).
You appear to be saying that the source of the texture makes a difference, even if the two were very similar in appearance.
The difference is the lack of Specification.
Even if you could not decide between two very similar textures, one generated manually and one procedurally? How can specification matter at that point?
Essentially what you’re doing is rephrasing the old tired objection that Dawkins made about “apparent design”.
In fact I was attempting to get an answer to the original point you asked
Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed?
Does it?
A designed object can contain pseudo-random attributes.
I'm not saying it can't. Previously KairosFocus said
PS: Fractals do NOT pass the EF — they are caught as “law” — the first test. It is the programs and formulae that generate them that pass the EF. [And, these are known independently to be agent-originated, so they support the EF’s reliability.]
And gpuccio said
A fractal is a good example of a product of necessity. So, it does not exhibit CSI, because the EF has to rule out those forms of self-organization produced by necessary law. Obviously, the system which computes the fractal is a completely different thing… Moreover, I don’t think that a fractal in itself has function, so it would not be functionally specified.
And you, Patrick said
but I hope that makes it obvious how silly your objection is (aka the presence of fractals does not equate to the EF always returning a false).
Who's right? Mavis Riley
Bob O'H: Any comment on Behe's comments? I do hope your mysterious "example" is not the one about hormonal receptors, which I read some time ago. That would really be a disappointment... gpuccio
Mavis: I feel a little sidetracked by your comments about fractals. Let's review things, as I see them: 1) Fractals, in themselves, are not CSI. Therefore, they cannot be recognized by the EF as designed (though they certainly can be designed. I hope the difference is clear). The mechanism which computes the fractal could exhibit CSI, but that has to be evaluated in each single case. 2) Fractal forms occur in nature, both in non-living and in living things. I admit that your vegetable seems remarkable... 3) The information in DNA is not fractal (at least, not the protein coding sequences). In theory, there could be fractal parts in the non-coding DNA, but I am not aware of evidence of that. 4) CSI is not fractal. The information which specifies a functional protein is not fractal. It has to be found either from knowledge of the physical properties of protein sequences (such as folding), and we are not yet able to do that, or by guided random search coupled with specific measurement of the searched function (like in protein engineering). No fractal formula will tell us which amino acid sequence will give a protein which folds in a certain way, and which has a specific enzymatic activity, any more than a fractal formula can give us the text of Hamlet. 5) We have practically no idea of what codes for the macroscopic form of multicellular beings. If we find some fractal aspect in macroscopic (or microscopic) parts, such as your vegetable, or, say, the patterns of arborization of vessels, or anything else, we cannot say that that particular aspect of form exhibits CSI (which does not imply that it is not designed). If we knew the mechanism which generates the fractal (which we don't) we could try to compute if it exhibits CSI or not. However, most macroscopic forms of living beings do not appear to be fractal. 6) Even in computer programming, fractals can be used in specific procedures, like compression, but have you ever seen a functioning program code generated by fractal formulas? Procedures are not fractal. Program code is not fractal. The same can be said of biological information. Most information present in living beings is not fractal, does exhibit CSI, and therefore requires a designer. gpuccio
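As a concrete illustration of point 1 above, here is a minimal Python sketch (purely illustrative, not from anyone in the thread) of Barnsley's fern: the generating procedure is just four affine maps and a loop, so the rule itself carries very little information even though the output looks intricate. That is the sense in which the EF assigns the fractal pattern to law and chance rather than to CSI.

```python
import random

# Barnsley's fern: four affine maps chosen at random generate a fern-like
# fractal point set.  The whole generating rule is this small table of
# coefficients plus a loop, however intricate the output looks.
MAPS = [
    # (a, b, c, d, e, f, weight) for (x, y) -> (a*x + b*y + e, c*x + d*y + f)
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def fern_points(n=100_000, seed=0):
    rng = random.Random(seed)
    weights = [m[6] for m in MAPS]
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f, _w = rng.choices(MAPS, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

print(len(fern_points(10_000)))  # 10000 points of intricate structure from a few lines of rule
```

Plotting the returned points gives the familiar fern shape; the coefficients are the standard published ones for this fractal.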
The “rendering” is designed but the exact contents of the texture are not, at least not at run time.
An intended result can be reached via algorithm when intelligence is involved. For example: http://www.mapzoneeditor.com/?PAGE=GALLERY.RENDERS Active information is involved. (I just emailed Bill to doublecheck on how to calculate this in regards to fractals, procedural textures, etc.) The same applies to GAs. Active information requires intelligence based upon all known observation.
Of course, you could predict what it would be no doubt, but that’s not quite the same thing. Does creating a texture when you have no idea what it looks like count as “designing” it?
The difference is the incorporation of a generalized Specification. In the act of designing you will have a target in search space. This target can be very vague/generalized (large) or very specific (small). Most GAs are examples of the former, while Dawkins's Weasel program is the extreme of the latter.
If so, how would you tell the difference between a procedural effect and a designed effect designed to look like a procedural effect?
Essentially what you're doing is rephrasing the old tired objection that Dawkins made about "apparent design". I see no need to rehash that one: http://www.google.com/search?hl=en&q=%22apparent+design%22+site%3Awww.uncommondescent.com&btnG=Google+Search
How could you tell your “design” from somebody else's “design” if neither of you knows what it looks like to begin with?
Comparison of the active information, perhaps?
So, the “feature” and “rendering” is designed, and the procedural texture is following rules that are designed, but what about the texture itself? If it was partly based on a random seed generated (as many are, or used to be anyway) from how long your PC has been powered on for, would you claim it was designed by you by virtue of your turning on the PC at a given moment?
A designed object can contain pseudo-random attributes. Darwinists are generally confused about this. But I'm willing to forgive this confusion since there are ID proponents who use poor arguments. For algorithms (GAs, fractals, whatever) it's not merely the act of writing the code itself that invalidates the example. The design is in how the search is funneled. Dembski calls this active information. Patrick
Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed?
The "rendering" is designed but the exact contents of the texture are not, at least not at run time. Hence the utility of procedural texturing. Of course, you could predict what it would be no doubt, but that's not quite the same thing. Does creating a texture when you have no idea what it looks like count as "designing" it? If so, how would you tell the difference between a procedural effect and a designed effect designed to look like a procedural effect? How could you tell your "design" from somebody else's "design" if neither of you knows what it looks like to begin with? So, the "feature" and "rendering" is designed, and the procedural texture is following rules that are designed, but what about the texture itself? If it was partly based on a random seed generated (as many are, or used to be anyway) from how long your PC has been powered on for, would you claim it was designed by you by virtue of your turning on the PC at a given moment?
It would certainly affect the calculating of informational bits–the systems generating the complexity are taken into account, along with the reduced amount of information necessary to represent the entire object–but I hope that makes it obvious how silly your objection is (aka the presence of fractals does not equate to the EF always returning a false).
I'm afraid that I did not fully understand the first part. As to my objection, in fact it is the objection of many others too, including gpuccio (which is why my comment was aimed at him, but I'm glad you answered).
I wouldn’t be surprised if DNA uses recursive mathematics for generating its complexity.
What do you mean? I've just shown you a picture of DNA "using" recursive mathematics.
Plants do this for their structure at a macro level, although this is the first time I’ve seen such a pattern on a plant.
Can't be many vegetarians around them there parts! :)
Also, fractals can be used for data compression, so why couldn’t there be fractal compression of hereditary information?
Nobody was implying they could not. Mavis Riley
Mavis, Does the usage of a procedural texture mean that a rendering incorporating such a feature is not designed? It would certainly affect the calculating of informational bits--the systems generating the complexity are taken into account, along with the reduced amount of information necessary to represent the entire object--but I hope that makes it obvious how silly your objection is (aka the presence of fractals does not equate to the EF always returning a false). I wouldn’t be surprised if DNA uses recursive mathematics for generating its complexity. Plants do this for their structure at a macro level, although this is the first time I've seen such a pattern on a plant. Also, fractals can be used for data compression, so why couldn't there be fractal compression of hereditary information? And, yes, yes...you're probably trying to lead the conversation to attempt to make the old argument that the OOL found its source in self-organization, fractals, whatever... Gil's previous thoughts on this subject:
Recursion (self-referential algorithms and self-calling functions) is an extremely powerful tool in computer science. The AI (artificial intelligence) techniques used in chess- and checkers-playing computer programs are based upon this concept. This is the basis of what we call a “tree search.” The immune system apparently uses a search/trial-and-error technique in order to devise antibodies to pathogens. The immune system also maintains a database of previously-seen pathogenic agents and how to defeat them. This is what immunization is all about. As an ID research proposal I would suggest pursuing what we have learned from AI research to see if human-designed algorithms are reflected in biology, especially when it comes to the immune system: 1) Iterative Deepening: Make initial, shallow, inexpensive searches, and increase the depth and expense of the searches iteratively. 2) Investigative Ordering: Order results from 1) to waste as little time as possible during deeper searches. 3) Maintain short-term memory to rapidly access solutions to the most-recently-seen problems. (We use RAM-based hash tables in chess and checkers programs for this purpose.) 4) Maintain long-term memory for catastrophic themes that tend to recur on a regular basis. (We use non-volatile, disk-based endgame databases in chess and checkers programs for this purpose.)
Patrick
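To make items 1 and 3 of the list quoted above concrete, here is a minimal, purely illustrative Python sketch: iterative deepening over a tiny invented game tree, with a memo dictionary standing in for the hash tables chess and checkers engines use. The tree, scores, and names are hypothetical, not taken from any real engine.

```python
# A toy sketch of points 1 and 3 above: iterative deepening plus a memo
# table.  The game tree, scores, and names are invented for illustration.
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
SCORES = {"a1": 3, "a2": -1, "b1": 5}

memo = {}                                    # 3) short-term memory (hash table)

def search(pos, depth, maximizing=True):
    """Depth-limited minimax over the toy tree, with memoisation."""
    key = (pos, depth, maximizing)
    if key in memo:
        return memo[key]
    children = TREE.get(pos, [])
    if depth == 0 or not children:           # search horizon or leaf position
        value = SCORES.get(pos, 0)
    else:
        child_values = [search(c, depth - 1, not maximizing) for c in children]
        value = max(child_values) if maximizing else min(child_values)
    memo[key] = value
    return value

def iterative_deepening(pos, max_depth):
    best = None
    for depth in range(1, max_depth + 1):    # 1) shallow searches first, then deeper
        best = search(pos, depth)
        # 2) a real engine would use this pass to reorder moves for the next one
    return best

print(iterative_deepening("root", 3))        # 5: the deepest pass settles on branch "b"
```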
gpuccio, Now that my comments are appearing, please take note of this http://www.fourmilab.ch/images/Romanesco/ Fractal food - naturally occurring fractals. Romanesco no less! So, this leaves me with a question. If fractals will fail to be noted as "designed" by the EF as several have pointed out already, what does the EF make of a biological organism (presumably designed) that expresses a fractal as its physical form? Designed? Not? Can it tell? Mavis Riley
I vaguely remember reading those articles, but I recall them being trivial. Behe spoke of such short indirect pathways being feasible years ago. Kinda like the "devastating examples" of irreducibly complex structures being formed...comprised of 2-3 components. But what do you expect...we've been asking Darwinists for evidence for macroevolutionary events for years and all they can do is showcase trivial examples and hysterically assert that mechanisms with 100 times the complexity can occur in just the same manner. How many times must ID proponents repeat themselves, stating that such modifications of CSI should be fully possible without intelligence? We are EXPECTING to find such examples, for heaven's sake! Anyway, enough of me, here are Behe's thoughts on that paper:
“The evolutionary puzzle becomes more complex at a higher level of cellular organization.” No kidding. The January 25th issue of Nature carries a “Progress” paper by Poelwijk et al that’s touted on the cover as “Plugging Darwin’s Gaps,” and cited by its authors as addressing concerns raised by proponents of intelligent design. The gist of the paper is that some amino acid residues of several proteins can be altered in the lab to produce proteins with properties slightly different from those they started with. A major example the authors cite is the work of Bridgham et al (2006) altering hormone receptors, which I blogged on last year. That very modest paper was puffed not only in Science, but in the New York Times, too. It seems some scientists have discovered that one way to hype otherwise-lackluster work is to claim that it discredits ID. Quite unsurprisingly, the current paper shows that microevolution can happen. Small changes in a protein may not destroy its activity. If you start out with a protein that does something, such as bind DNA or a hormone, it’s not surprising that you can sometimes find a sequence of changes that can allow the protein to do something closely similar, such as bind a second sequence of DNA or a second, structurally-similar hormone. My general reaction to breathless papers like this is that they vastly oversimplify the problems evolution faces. Consider a very rugged evolutionary landscape. Imagine peaks big and small all packed closely together. It would of course be very difficult for a cell or organism to traverse such a landscape. Now, however, suppose an investigator focuses his gaze on just one peak of the rugged landscape and myopically performs experiments whose products lie very close to the peak. In that case the investigator is artificially reducing what in reality is a very rugged landscape to one that looks rather smooth. The results tell us very little about the ability of random processes to traverse the larger, rugged landscape. The authors remark, “The evolutionary puzzle becomes more complex at a higher level of cellular organization.” No kidding. Nonetheless, they, like most Darwinists, assume that larger changes involving more components are simple extrapolations of smaller changes. A good reason to be extremely skeptical of that is the work of Richard Lenski, which they cite. Lenski and his collaborators have grown E. coli in his lab for tens of thousands of generations, in a cumulative population size of trillions of cells, and they have seen no building of new systems, just isolated mutations in various genes. Apparently, nature has a much more difficult time putting together new systems than do human investigators in a lab.
Here's a relevant discussion about how minor stepwise pathways are viable but run into problems when several major concurrent changes must occur: http://www.overwhelmingevidence.com/oe/node/381 That pretty much covers my thoughts on this subject. Darwinists just need to be repeatedly banged over the head with basic engineering concepts (the problem as outlined in comment #128) until they get it. Patrick
gpuccio, Perhaps the path is fractal and as such does not have a length as you understand it. Mavis Riley
Bob O'H: Thank you for the one word. I am still waiting for the summary. Could you please at least tell us how short that path is, and which are the proteins involved? Just curious... gpuccio
Equivocation is also wonderful, especially if you are an evolutionist. Too bad evidence for "evolution" is not evidence for non-telic processes. Gene duplications? Then it is also required to duplicate all the regulatory elements that accompany gene activation. And even then, if the gene's product doesn't have a specified place to go, then it is just a waste of energy for the cell to manufacture something it doesn't need. Joseph
In a word, yes. Isn't science wonderful. :-) Bob O'H
Bob O'H: Unfortunately, I do not have access to that article. Could you please sum it up for us? At the risk of speaking of what I have not read, I would anyway like to remark that we are not only looking at a pathway which is "logically possible", but at one which is "empirically possible". And at the real functional advantages which make it selectable step by step. Does the article show examples of such pathways through single event mutations for real proteins? gpuccio
gpuccio - you too should read the articles I linked to. The first falsifies your claim that step-wise paths through sequence space are impossible. You could reply that it only looks at a short path, which would be correct. But I'd still like to see a better argument that longer paths can't be traversed other than the argument from incredulity you have at the moment. Bob O'H
M Caldwell: "If perhaps we allow, for argument’s sake, this unwarranted assumption, might we not end up with an unbroken chain of unwarranted assumptions from Mr O’H?" Well, let's go step by step. In intelligent discussion between intelligent agents, that's a perfectly natural pathway... :-) gpuccio
M Caldwell: I suppose that common descent is assumed, at least as a hypothesis, in the discussion between F2XL and Bob O'H. I was just arguing that Bob O'H's assumption that a step by step functional and selectable pathway of mutations exists is not warranted, neither logically nor empirically, even under the assumption of common descent. gpuccio
Bob O'H (#134): Excuse the brief intrusion, but I want to comment on a couple of your affirmations: "That would seem a reasonable evolutionary explanation" Yes, it's a pity that it is simply impossible. There is no reason in the world, neither logical nor empirical, why functional sequences of proteins can be derived step by step passing through increasingly functional intermediates. Indeed, that's a really silly idea, considering all that we know, both of information in general and of protein function in particular. Regarding information, it would be like affirming that any meaningful sentence of, say, 150 characters, can be obtained from another different one by successive changes of one character, always retaining meaning (indeed, increasingly meaningful meaning). That's obviously ridiculous for sentences, as it is for computer programs, and as it is especially for proteins, whose function depends critically on complex, and as yet largely unpredictable even to us, biochemical interactions and 3D folding. Just for curiosity, could you jump a moment to the thread about the de novo protein gene, and explain with a model why, in that evolutionary scenario (suggested by perfectly serious darwinian researchers) about 350 nucleotides would have changed to give a completely new protein gene, and we have absolutely no trace of the supposed 350 step-by-step increasingly functional intermediates, while all the related species retain the other 128 nucleotides, whose function remains a mystery? Or take any other protein you like, or with which you are more comfortable, and show us any proposed (not necessarily demonstrated) pathway of step by step mutations which harvest a new, different function passing through a number (let's say at least 30) of successive intermediates, each selectable for its increase in fitness. Oh, and you can fix the order of mutations as you like. And you can use all the side activities you like (provided you motivate them, either theoretically or empirically). You say: "No. You're presenting your model, based on your assumptions. Give us evidence that your assumptions are reasonable. Let's not get side-tracked." Frankly, I think you are not fair here. F2XL is presenting his model, and his assumptions about the mutations necessary are perfectly natural. You objected with a counter-model, that each successive mutation can be selected for a benefit. That counter-model is not natural at all. Indeed, it appears absolutely artificial, and almost certainly wrong, as I have tried to argue at the previous point. So, I really think it is you who have the burden to show that your counter-model is even barely credible, if you want to keep it as an objection. Apply it, even hypothetically, to the object discussed, the flagellum, and show us why it should be reasonable to believe that for each protein of it there is a functional step by step path from a previously existing protein, the related functions (starting, intermediate, final), and especially the general balance of functions (we are talking of a complex of proteins, after all). Oh, if possible with an eye to regulatory problems (relative rates of transcription, post-transcriptional maturation, assemblage of the different parts, localization, etc...) gpuccio
Sorry, I don’t see why it’s obvious. I can’t see why there cannot be a path through sequence space that would give an increase in fitness at every step. What evidence do you have for this? So you’re saying that at every point mutation there will be a benefit?
That would seem a reasonable evolutionary explanation
Give me what you think is a realistic pathway for an E. coli population to obtain a flagellum.
No. You're presenting your model, based on your assumptions. Give us evidence that your assumptions are reasonable. Let's not get side-tracked. Oh, and arguments from incredulity or ignorance won't cut it.
and gene products can have more than one function, so after duplication this side activity could become selectively more important (e.g. http://dx.doi.org/10.1073/pnas.0707158104. Sorry, the second anchor tag is screwing things up). To which selection says, “Hey, this gene by itself after obtaining many various errors JUST SO HAPPENED to obtain a new function,
Read the paper I linked to, please. Or, if you don't want to (and there's no compulsion), don't try and make criticisms of something you haven't read.
Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Agree with you so far. After all in this case the order in which the mutations happen doesn’t matter at all.
In reality, it might make a difference, of course, but assuming the order is irrelevant is a decent first approximation.
... (I have no idea why we are talking about the order of events, the order in which mutations occur doesn't matter in this scenario). ...
Are you even aware that your calculation assumed a set order? Look at how you set up the calculation - you took the product of the probabilities. This assumes a fixed order, as I showed above. You're right that from the way you set up the problem, the order shouldn't matter. So you have to make sure that in the maths it doesn't. Bob O'H
Bob O'H, can you explain to me in detail what it is you think I was trying to calculate? F2XL
Sorry, I don't see why it's obvious. I can't see why there cannot be a path through sequence space that would give an increase in fitness at every step. What evidence do you have for this? So you're saying that at every point mutation there will be a benefit? Give me what you think is a realistic pathway for an E. coli population to obtain a flagellum. There is evidence that fitness landscapes can be traversed (e.g. this review), Apply the findings in this article to the flagellum please. and gene products can have more than one function, so after duplication this side activity could become selectively more important (e.g. http://dx.doi.org/10.1073/pnas.0707158104. Sorry, the second anchor tag is screwing things up). To which selection says, "Hey, this gene by itself after obtaining many various errors JUST SO HAPPENED to obtain a new function, so rather than risking any loss of functional advantage we will just keep the gene as it is in terms of what it's functioning as since we're blind and cannot see into the future what genes could eventually become a structure coded for by 49,000 base pairs." Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Agree with you so far. After all in this case the order in which the mutations happen doesn't matter at all. Suppose they are independent with probabilities p_A, p_B, p_C. There are six ways in which this could occur: ABC ACB BAC BCA CAB CBA For the first, the probability is P(A).P(B|A).P(C|A,B) = p_A.p_B.p_C For the second, the probability is P(A).P(C|A).P(B|A,C) = p_A.p_C.p_B etc. The total probability is thus 6p_A.p_C.p_B. I think a far simpler way to put it would be to use a hypothetical set of index cards each numbered 1-10. Now suppose you were in a situation where the number of orders in which they could come up was what you were trying to calculate (I have no idea why we are talking about the order of events; the order in which mutations occur doesn't matter in this scenario). What you would do is take the number of options (in this case you have ten options, index cards numbered 1-10). If I had to determine the number of possible orders that they could all go in if I were randomly shuffling them, I would take the highest number (10) and multiply that by all preceding numbers. As a result, the calculation would look something like this: 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 3,628,800 If I have 3 options, then it would look a lot like your example, 3 x 2 x 1 = 6 total combinations. The difference from the coin toss is that in a coin toss, the first toss has to come first (!), so the order can't be permuted. What does the order of coin tosses (in fact what does the order of anything) have to do with what I was doing? BTW different coins can be tossed in different orders, just like mutations can happen in different orders (though with the mutations I was considering it does not matter what order they come in). But you haven't shown why permutation isn't possible for the mutations. Because it's not relevant. F2XL
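For anyone who wants to check the counting in the index-card example above, here is a short Python confirmation; the figures come straight from that comment.

```python
import math
from itertools import permutations

print(math.factorial(10))              # 3628800, matching 10 x 9 x ... x 1
print(math.factorial(3))               # 6
print(len(list(permutations("ABC"))))  # 6 orderings: ABC, ACB, BAC, BCA, CAB, CBA
```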
Must? Can you show us the empirical evidence for this statement? I think it’s kind of obvious. I’ll give a few examples to illustrate the point further, along with a link that gives some idea of what even Matzke’s pathway would require (by his own admission).
Sorry, I don't see why it's obvious. I can't see why there cannot be a path through sequence space that would give an increase in fitness at every step. What evidence do you have for this? There is evidence that fitness landscapes can be traversed (e.g. this review), and gene products can have more than one function, so after duplication this side activity could become selectively more important (e.g. http://dx.doi.org/10.1073/pnas.0707158104. Sorry, the second anchor tag is screwing things up).
…and they had to occur in the order specified. Time for a quick math lesson. :D
And now the maths lesson. Suppose you want to calculate the probability of three events (A, B, and C) happening, and the order is irrelevant. Suppose they are independent with probabilities p_A, p_B, p_C. There are six ways in which this could occur: ABC ACB BAC BCA CAB CBA For the first, the probability is P(A).P(B|A).P(C|A,B) = p_A.p_B.p_C For the second, the probability is P(A).P(C|A).P(B|A,C) = p_A.p_C.p_B etc. The total probability is thus 6p_A.p_C.p_B. The difference from the coin toss is that in a coin toss, the first toss has to come first (!), so the order can't be permuted. But you haven't shown why permutation isn't possible for the mutations. Bob O'H
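The decomposition above can be checked mechanically. The sketch below enumerates the 3! orderings and sums the per-ordering products for three independent events; the probabilities are hypothetical placeholders, and whether this ordering argument transfers to the flagellum calculation is, of course, exactly what the thread is disputing.

```python
from itertools import permutations

# Enumerate the 3! = 6 orderings of three independent events and sum the
# per-ordering products, as in the comment above.  The probabilities are
# hypothetical placeholders.
p = {"A": 0.01, "B": 0.02, "C": 0.03}

total = 0.0
for order in permutations(p):          # ABC, ACB, BAC, BCA, CAB, CBA
    product = 1.0
    for event in order:
        product *= p[event]
    total += product

print(total)                           # ~3.6e-05, i.e. 6 * 0.01 * 0.02 * 0.03
```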
Let’s say the parts fulfill every requirement except for interfacing. I forgot to add something in that paragraph, and that would be the issue that Matzke concedes must be crossed for his pathway to work. Just add this to the paragraph I quoted from for better understanding. :) Also towards the end of comment #117 I made the following quote: But now we have our probabilistic resources to take into account. I won't move onto that step until a consensus is reached on what I've done so far. F2XL
Must? Can you show us the empirical evidence for this statement? I think it's kind of obvious. I'll give a few examples to illustrate the point further, along with a link that gives some idea of what even Matzke's pathway would require (by his own admission). Suppose a part switches location to where the flagellum is to be built, but its other respective homologs do not make the switch. Selection has nothing different to act on, so this change would probably be neutral (or harmful, but we'll set that aside) UNLESS other parts have made the switch as well. ......Ok maybe this is a better way to put it. Recall the list of things that must happen with each part before selection can do anything (from comment #116). If all of the parts that are needed to produce a flagellum fulfill all five of the criteria except #2, then selection cannot preserve that "progress," you just have a pile of protein parts that don't really have any conceivable way of benefiting the cell, not until AFTER they've ALL fulfilled criterion #2. Suppose that all the parts fulfill every criterion except they aren't localized in the same area. Again, selection has nothing to act upon in order to preserve the progress so far because you basically have the parts to a flagellum that would fill in that job just fine but they are scattered all throughout the cell, thus selection doesn't have the "foresight" to realize that the parts are all optimized to become such a structure; it is thus rendered powerless. Let's say the parts fulfill every requirement except for interfacing. In that case you would have the parts all there in the same location, and the right order of assemblage with functions ready to go, but the parts aren't optimized to fit together. It would be like trying to build a motor out of parts which are from all sorts of various vehicles from an airplane to a Humvee to a nuclear submarine. The parts (or proteins) would not interface to get any functional advantage for the vehicle (or cell) that will be using the motor (or flagellum). Again, selection is thus far rendered powerless. The number is probably much, much greater (probably several times higher), but I assumed that there were only 490 base pairs that needed to be changed before all 42 of the homologs would fulfill all 5 of the criteria I listed in comment #116 (therefore allowing selection to actually preserve something). 490 base pairs accounts for only 1% of the total amount of base pairs in a typical 35 gene flagellum, and only constitutes a little over a hundredth of a percent of the entire genome in E. coli. But it's certainly the biggest hundredth of a percent you will ever find in biology. No, you would if there were only 490 mutations in total... Which there are (at least in terms of what selection can't have any effect on unless they've all made their respective changes). ...and they had to occur in the order specified. Time for a quick math lesson. :D While it's true that they don't have to occur in any particular order, that's irrelevant to what I calculated. Suppose an event (which we will denote with X) must also occur with an event Y (though it's not necessary that they happen at the same time). Both events being independent of each other would have their probabilities multiplied. To use coin tosses as an analogy, suppose I wanted to calculate the odds that I will flip 5 heads in a row with five separate coins (or the same coin).
With the odds of each coin (assuming they are fair) being specific to the individual coins themselves (as with mutations), you would take the odds of each and multiply them to figure out what the odds are that you will get all heads (2 to the 5th power, or one chance in 32). The same goes for the mutations. With the odds of each mutation being on the order of 10 to the negative 2,830,000th power (4 to the 4.7 millionth power, 4 base pair outcomes and 4.7 million places they can go), you would take that and multiply it by itself 490 times. Just as you would with the odds of getting heads on each coin flip. A glance at the now infamous XVIVO video (esp the version with the voiceover . . .) will at once make plain that Denton long since put his finger on the issue: we see codes, algorithms, implementation machinery, all in a self-assembling and significantly self-directing whole. Just finished reading his book yesterday, and it's not hard to see why it inspired Behe so much. And "The Inner Life of the Cell" certainly puts Denton's words into context. :) F2XL
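The coin-toss analogy itself is easy to verify; here is a two-line Python check of the "one chance in 32" figure quoted above.

```python
# Five fair, independent coins; probability that all five land heads.
p_all_heads = 0.5 ** 5
print(p_all_heads)       # 0.03125, i.e. one chance in 32 (2 to the 5th power)
print(1 / p_all_heads)   # 32.0
```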
PS: While waiting on TMLO . . . The already mentioned XVIVO video is a good enough example, as it aptly illustrates the machinery of the white blood cell, and the algorithms and codes are of course in the DNA & RNA etc, sequences of execution of which (to make proteins) are shown. How that is done is a commonplace: codes, algorithms, executing machines. [Cf the machine code/architecture view of a digital computer.] kairosfocus
codes, algorithms, implementation machinery
Could you give me a few examples of each of those things please, as instantiated in the human body? Mavis Riley
Mavis, With a 4-state element, chained n times, the config space is 4^n. To calculate, do log10 [4] and multiply by n. Subtract the whole number part and extract the antilog of the remaining fractional part. So far the config spaces F2 has estimated look about right. I do not claim any expertise beyond the merits of the facts and the related logico-mathematical and factual reasoning. [And, in an earlier thread, I gave links and remarks on how Wm A D estimates CSI. I prefer to look at the vector of values, as the disaggregation into vector elements tells important things. Bits, after all, are a measure of information storage capacity, not significance.] I will withhold final estimation of the worth of F2's work till he finishes; save that so far he seems to be on an interesting, more detailed track than I am wont to take up. F2's work is rather like taking a 20 lb sledge to a walnut. A glance at the now infamous XVIVO video (esp the version with the voiceover . . .) will at once make plain that Denton long since put his finger on the issue: we see codes, algorithms, implementation machinery, all in a self-assembling and significantly self-directing whole. Such is long since in the class of entities known to be originated in intelligence. And a part of that is the fact that the config space so isolates islands of functionality that search resources uninformed by intelligent, active information [which is also quantified] are credibly fruitless. Cf my microjets case in App 1 of the always linked, in a thermodynamics context. Trust that helps. GEM of TKI kairosfocus
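For readers who want to try the log10 recipe above themselves, here is a short Python sketch; the 300-element example is an arbitrary illustration, not a figure from the thread.

```python
import math

# The recipe described above: for a 4-state element chained n times the
# config space is 4^n; work with log10(4) * n and split it into a whole
# exponent and the antilog of the fractional part (the mantissa).
def config_space(n, states=4):
    log10_size = n * math.log10(states)
    exponent = math.floor(log10_size)
    mantissa = 10 ** (log10_size - exponent)
    return mantissa, exponent

m, e = config_space(300)             # 300 four-state elements, an arbitrary example
print(f"4^300 ~ {m:.2f} x 10^{e}")   # 4^300 ~ 4.15 x 10^180
```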
Kairosfocus, I know you are one of the resident experts on the EF etc. Do you agree with F2XL's math so far? Mavis Riley
F2XL: Keep going, fascinating to watch. GEM of TKI kairosfocus
F2XL - Will you be publishing this work in one of the ID journals? Mavis Riley
Dawkins (excuse my language) says somewhere that each change must arise sequentially in an unbroken chain of viable organisms. And this is somehow supposed to add credibility to random evolution! Laughable!
You would hardly expect changes that resulted in an unviable organism to be passed on, would you? Why laughable? I don't think it was intended to "add credibility", it seems to me a statement of the obvious. And anyway, is that you Minnie? EEH, it's been some time eh girl? I don't really know how long it's been eh? You must have left the street 20 years ago now, don't time fly! Fancy a pint in the Rovers later Minnie? Mavis Riley
So with the odds of each base pair changing to the right combination being independent of each other (if not then please explain why) you would take the original odds and take those to the 490th power (the odds for each base pair would be both the same and independent).
No, you would if there were only 490 mutations in total, and they had to occur in the order specified. Bob O'H
For the homologs to proceed as a flagellum, several things must change at once for selection to preserve them.
Must? Can you show us the empirical evidence for this statement? Bob O'H
With that 5% gap to cross, you would be looking at 2,450 base pairs that need to be changed, out of 4.7 million in the entire genome. What the changes could be categorized as is elaborated on in my previous comment. For the homologs to proceed as a flagellum, several things must change at once for selection to preserve them. If you proceed to have a part change location, then selection can't really do anything to preserve that change throughout the vast majority of the population unless all other parts have made the same change, in the right order, are made to be mutually compatible, have their functions switched, etc. This applies to each and every homolog. Suppose a part changes location (thanks to getting the right sets of base pairs to change), but the information that tells in what order of assemblage the part will go (in that location) isn't present. In that case it's likely to just do harm (in our experiment though, we'll just say the effects are neutral :)), so selection can't help you there. While I think it's reasonable to say that all 2,450 of the base pairs (an extremely conservative estimate) that go from homologs to the actual flagellum would be neutral by this standard, I will be extremely, unrealistically hopeful and assume that only 1% of the total base pairs that code for a flagellum are neutral when they must be implemented. Selection will take care of the rest. That means that 490 base pairs must be changed over the course of an entire line of descent before you actually reach the level of change needed for a flagellum to appear in an E. coli. So with our 4.7 million base pair genome, let's see what the odds are that we would be able to make any particular base pair change to the right nucleotides. 1. 4 options... (e.g. at, ta, cg, gc) 2. 4.7 million places... So what you would do is take four to the 4.7 millionth power (feel free to ask why if I haven't made it clear enough). The result? (One chance in) 10 to the 2,830,000th power. And that's just for a single point mutation (this could represent a start signal for instance). What we're looking to cross is a 490 base pair change over the course of all living history. So with the odds of each base pair changing to the right combination being independent of each other (if not then please explain why) you would take the original odds and take those to the 490th power (the odds for each base pair would be both the same and independent). Our semi-final result for having the neutral gaps crossed is one chance in 10 to the 1,386,700,000th power. But now we have our probabilistic resources to take into account. F2XL
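For those following along, here is a short Python sketch that re-runs the arithmetic above in log10 terms, using the figures exactly as stated in that comment; the model behind those figures is what Bob is contesting, so this only checks the multiplication, not the assumptions.

```python
import math

genome_length = 4_700_000        # E. coli base pairs, per the comment above
log10_single = genome_length * math.log10(4)
print(round(log10_single))       # 2829682, which the comment rounds to "10 to the 2,830,000th power"

required_changes = 490           # neutral base-pair changes assumed in the comment
log10_total = 2_830_000 * required_changes   # using the comment's rounded figure
print(f"one chance in 10^{log10_total:,}")   # one chance in 10^1,386,700,000
```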
Please. Do not allow me to divert you. Alright then. :) Now we have our 10 to the 50th power E. coli that have been around over the course of a 3.85 billion year period. In each E. coli there are 3,000 point mutations (400 times faster than the norm, none of which will do any harm for the sake of argument), so that means that there are 10 to the 55th power (yes that is an exaggeration) base pair changes that can take place in an attempt to have homologs cross over to a functional flagellum. Recall our conditions. 1. E. coli have a 4.7 million base pair genome. 2. We are assuming, despite the fact that it's highly unlikely that this is the case, that there is only a 5% gap to cross from the homologs to the actual rotary flagellum. So let's see how this would work. We're trying to change the following aspects of the homologs: 1. Their location - All parts must be localized in the proper area in order for it to be a flagellum in the first place. 2. Their order of assemblage - As with any multi-part system parts must be placed in certain orders of assemblage in order for it to function at all. 3. Interfacing - Parts must be able to interface well enough to go together as a functioning system (in this case, the flagellum). 4. Part selection/synchronization/retainment - Each part that can actually work as a part of the flagellum had better stay there and all at the right time(s), and we sure don't want any interfering parts showing up instead to destroy the whole system. 5. Overall function - Once all of the above hurdles have been crossed, the function for each of the homologs must be changed from their original functions. This must be done without harming the cell for having a lack of the original homolog part's function, and also must be done so the parts all work in synergy to form a flagellum. With all of the homologs being coded for by thirty-five 1,400 base pair genes, a total of 49,000 base pairs. Next comes our probability. F2XL
This is very exciting. F2XL, what are you waiting for? Mavis Riley
F2XL, do you mean that your research leads further than all the efforts of Dr. Dr. Dembski, Dr. Behe, Dr. Lönnig, and Dr. Axe? sparc
Bennith, I've been reading up thread some now and I was wondering did you get any response to the "design detection/EF via computer program" topic? I would love to see such a thing in action. Mavis Riley
Please. Do not allow me to divert you. Will you attempt to determine a specific value for the CSI? Mavis Riley
So you're going to ignore gene duplication followed by mutation, right? Just the duplication; the mutations that can happen AFTERWARDS are all that matter (and are taken into account). But keep in mind that in a situation where all the information that codes for Matzke's homologs is already present, duplications would probably do more harm than good. It may have escaped your notice that there is something like non-coding sequences. I think I'll just let the people who actually know what the X filter and the idea of CSI are intended to do decide if that above comment has any merit. F2XL Do you have a list of items you've performed this sort of maths for? No (well, actually I do), but I think maybe if you have a little patience there's this really cool conversation that we're having in which such a thing is applied to the bacterial flagellum. You know, the one that started a dozen or so comments above? Care to wait until I'm finished? And something I've always wondered about the EF - can it tell the difference between a "designed" species and a "micro-evolved" species that came from one of the "designed" species? Yes. Now on to the original discussion. F2XL
F2XL Do you have a list of items you've performed this sort of maths for? And something I've always wondered about the EF - can it tell the difference between a "designed" species and a "micro-evolved" species that came from one of the "designed" species? One would presume the CSI would degenerate in that situation. Mavis Riley
If after combining the two main factors above the odds for a particular protein coming about are less than one in 10 to the 150th power, we would have to conclude (according to the “X” filter) that the protein cannot be explained by chance due to the low probability, it can't be a regularity due to the constraints you would have on it for the sequence to be functional, and therefore, since it is less likely to come about than the Universal Probability Bound, the only possible way to explain the evidence is through design.
It may have escaped your notice that there is something like non-coding sequences. sparc
3. Gene duplications - these as we already know take an existing gene and copy it. I had better get this out of the way now so no one asks me why later, but in this experiment I will not be factoring in gene duplications, and here's why. Having the same information repeated over and over again doesn't really produce anything new, and in this case, since we are trying to modify existing information that codes for the homologs that Nick J. Matzke has provided, the duplications of any of the 35 genes for the homologs won't really do anything.
So you're going to ignore gene duplication followed by mutation, right? Bob O'H
F2XL, the scientific community would be grateful if you would apply your ID tools (IC, CSI, EF etc.) to the public databases to eliminate those sequences that arose by sequencing errors, wrong assemblies, vector contamination etc. and are thus surely not designed. While this is a little (but not completely) off-topic from what I was originally doing, the basic criteria for determining whether something was the product of some sort of teleological intervention or chance and necessity would be as follows: 1. All probabilistic resources must be taken into account first, both replicational and specificational. That means repeated trials and also various ways you can get the same result must be factored in. 2. Next you would have to determine how likely it is for the sequences of the functional protein to come about. 3. If after combining the two main factors above the odds for a particular protein coming about are less than one in 10 to the 150th power, we would have to conclude (according to the "X" filter) that the protein cannot be explained by chance due to the low probability, it can't be a regularity due to the constraints you would have on it for the sequence to be functional, and therefore, since it is less likely to come about than the Universal Probability Bound, the only possible way to explain the evidence is through design. F2XL
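Purely as a sketch of the three-step logic just described (and emphatically not an implementation of Dembski's filter), the decision rule reduces to a few lines of Python; the function name and the example numbers are invented for illustration.

```python
# A toy sketch of the three-step rule above: the logic of the comment
# only, not an implementation of Dembski's filter.  Function name and the
# example figures are invented for illustration.
UPB_LOG10 = -150    # universal probability bound, as a log10 value

def filter_verdict(log10_p_event, log10_resources, explained_by_law=False):
    # log10_p_event: log10 probability of the functional sequence arising by chance
    # log10_resources: log10 of the replicational/specificational resources
    if explained_by_law:
        return "regularity"
    # multiplying a probability by the number of trials == adding log10 values
    if log10_p_event + log10_resources > UPB_LOG10:
        return "chance not ruled out"
    return "design, per the comment's criterion"

# Hypothetical numbers: an event at 10^-300 with 10^55 trials available.
print(filter_verdict(-300, 55))   # design, per the comment's criterion
```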
F2XL, the scientific community would be grateful if you would apply your ID tools (IC, CSI, EF etc.) to the public databases to eliminate those sequences that arose by sequencing errors, wrong assemblies, vector contamination etc. and are thus surely not designed. sparc
While I think you probably could've picked a better time to make that comment, I guess it is relevant to, and expands on, what we're talking about. To those I would ask, do you honestly think a mechanism could not accomplish this? I am actually looking for a response here. The answer would have to be a most definite no. Not now, not yesterday, not tomorrow, never. Not unless you have something there to give the process a little guidance. After giving the rest of your post a read, and seeing that you've compared nature with a robot, I would suggest that maybe you would serve yourself well by using a more accurate analogy. Even a simplistic set of laws can output something complex if the raw materials it had to work with were complex. A program that did nothing but flip all the bits of its input could output the entire works of Shakespeare. And furthermore the laws of nature may be simple but the universe is very large and complex (and who's to say really that the laws of nature aren't complex as well). Might I suggest you let me finish explaining why I hold this to be the primary falsehood of materialism (i.e. fulfilling the request of these people)? F2XL
Something I'm honestly unclear about, to those of you who in this thread would only sarcastically say that chance and necessity caused what we're seeing here, in addition to those who would say it was obviously "a purposeful agent" and distinguish such from chance and necessity. To those I would ask, do you honestly think a mechanism could not accomplish this? I am actually looking for a response here. I could personally envision some very sophisticated robot able to distinguish and recognize different types of chairs, and also containing within it certain templates identifying the crucial features of all chairs, as well as features of more particular types of chairs. I can imagine this robot with some sort of appendages with which he could manipulate certain attributes of his environment. For example maybe he could mold and shape sand at the beach. And maybe he had a goal to go around and make as close approximations as he could of items he had detailed knowledge about in his memory. So you could imagine him piling up and shaping sand to form a crude chair. Maybe there could be a random number generator that determined which of the various objects he knew about to model at any given time. Thus any given chair that he formed would be related to chance, i.e. where he happened to be at the moment and thus the raw materials available (e.g. a particular type of sand, maybe black sand) plus the random number generator that dictated what object he chose to model, in addition to necessity in the form of the complex rules he possessed that governed his behavior. Thus, chance and necessity. Now what I'm puzzled about is, are you ID'ers saying that such a robot would be a purposeful intelligent agent? So wouldn't you be saying then that chance and necessity can in fact be a purposeful intelligent agent? Or would you obfuscate, and say that although the robot was not an agent, a human was necessary to create him and it's obvious to you that a human is not a mechanism? It may be obvious to you, but it is not at all obvious to everyone. When you look at a human designer, he also has some understanding of chairs stored in his brain. For him to make a detailed model of a chair requires him to have carefully observed chairs for a long time, i.e. to watch and memorize so he could ultimately recall. Do I really have to spell all this out again? Why can't the human process be a mechanism as well, and on what supposedly self-evident basis would one assert that it cannot be a mechanism? Maybe someone might ascribe some transcendent attributes to the "choices" that a human exhibited, in for example deciding which chair to model, and so forth, and ignore facts like the human's choices were limited to what he had previous exposure to, and even the ultimate decision of which chair to model might very well come down to a complete random factor, for example he had seen a chair on a billboard while driving somewhere. As far as why do higher animals have a tendency to like to copy things they've seen before - go ask a parrot. And if you say, "Ah a parrot would not be able to answer you, that's what makes me an agent and a parrot not an agent." OK then, you explain to me, why is the impulse to copy things, and the skill and dexterity to do so with a certain degree of accuracy, somehow indicative of something that cannot even be systematically explained or described (i.e. an "agent", i.e. something that transcends chance and mechanism)? Does this all make sense? What is the crucial thing I am missing in the ID viewpoint?
Just to clear up briefly my own viewpoint, the crucial factor is the complexity of a mechanism. If someone says the laws of nature (i.e. a mechanism) cannot accomplish something, it's presumably because the laws of nature are extremely simple, not that the capability transcends mechanism in general. So even something as complex as a human being can be output by the deterministic mechanism of epigenesis. If something can be specified then a number can directly represent it. There are an infinite number of mechanisms (i.e. programs) that can output any given number. Even a simplistic set of laws can output something complex if the raw materials it had to work with were complex. A program that did nothing but flip all the bits of its input could output the entire works of Shakespeare. And furthermore the laws of nature may be simple but the universe is very large and complex (and who's to say really that the laws of nature aren't complex as well). If the mechanism of epigenesis creates people, why couldn't some other mechanism intrinsic in the universe have created epigenesis? JunkyardTornado
I don't understand how modifying existing information is not modifying existing information if the information in question happens to be a copy of some other part. I don't understand how modifying existing information is not modifying existing information if the information in question happens to be a copy of some other part. I don't understand how modifying existing information is not modifying existing information if the information in question happens to be a copy of some other part. See what I mean? Gene duplication would be like taking that first half of what you said and having it copied afterwards. Only point mutations and frameshift changes can produce anything new, which is why those are what I mainly take into account. At some level there is lots of duplication - after all there are only 4 "letters" so what resolution do you go down to to determine what information is a copy of some arbitrary other information? Talking about entire gene duplication here. Yes we see some repetition of base pairs within genes, but gene duplication involves taking entire sections of the genome and copying 1,400 base pairs at once (at least with E. coli). It's not directly relevant. Keep in mind that we're trying to change the information that codes for the homologs that Nick J. Matzke insists are what led to the flagellar motor. It's an overly hopeful assumption that those homologs would be present to begin with in every last E. coli, but I did that just to make unguided evolution more hopeful. F2XL
But hey, let's make evolution (in terms of a completely unguided process) even more hopeful than it really is by doing three things. 1. Mutation rates will be 400 times faster by working under the assumption that they happen once in every E. coli. 2. With each mutation, we will assume that 3,000 base pairs change. This is basically the equivalent of having a frameshift mutation occur in 2 genes and then having around 200 point mutations happen after that elsewhere in the genome. Whether these are from frameshift mutations or thousands of point mutations is irrelevant. We will just be assuming that 3,000 base pairs will be changing at any given place in the 4.7 million base pair genome of the E. coli. 3. There will be no HARMFUL mutations in this process. Either the mutations are neutral or they are able to help us achieve the changes needed to get a working flagellum capable of overcoming Brownian motion. Any potential effects that these mutations have outside of being neutral or getting a rotary flagellum will not be measured, for both the sake of argument and the sake of simplicity. Now let's factor this in with the 10 to the 50th power exaggeration mentioned before. If each of these cells has 3,000 point mutations occur then the total number of mutations that could've ever happened in the entire history of life on earth is 3 x 10 to the 53rd power. To inflate the probabilistic resources even more we will round up to 10 to the 55th power. This number will represent the total number of mutations (in terms of base pair changes) that have happened since the origin of life. We now have established our replicational resources for this elaboration. F2XL
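The replicational-resource multiplication in this comment is easy to reproduce; here is a short Python check using the figures as stated there.

```python
import math

cells = 10 ** 50               # exaggerated count of E. coli assumed ever to have lived
mutations_per_cell = 3_000     # assumed base-pair changes per cell
total_mutations = cells * mutations_per_cell
print(f"{total_mutations:.0e}")        # 3e+53, which the comment rounds up to 10^55
print(math.log10(total_mutations))     # ~53.5
```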
What a waste of time. We all knew that evolution was impossible, so what's the point? Daniel King
I don't understand how modifying existing information is not modifying existing information if the information in question happens to be a copy of some other part. At some level there is lots of duplication - after all there are only 4 "letters", so what resolution do you go down to, to determine what information is a copy of some arbitrary other information? Mavis Riley
"Please do. Now you've got some BIG numbers, I want to see your small ones."
They sure are. 10 to the 50th power (one followed by 50 zeros) will represent our replicational resources in terms of how many cells could ever have existed in order to obtain this "motor." I can't remember what site I read this on, but I believe the actual number of bacteria that have ever existed on earth is thought not to have exceeded 10 to the 30th power (I'm throwing that number out the window in favor of my own exaggeration). But this number is about to get even bigger. Now we have to take into account what mutations could pull off during this time period. Mutations are an interesting thing to factor in. While I'm aware that they come in many various forms, the general information on mutations indicates that there are really three main types to be concerned about. They are as follows: 1. Point mutations - the change of a single nucleotide pair. 2. Frameshift mutations - an insertion or removal of one or more base pairs which results in the entire gene being read in a different manner. In the case of E. coli, a frameshift mutation is basically the equivalent of having 1,400 point mutations happen at once in a single gene. 3. Gene duplications - these, as we already know, take an existing gene and copy it. I had better get this out of the way now so no one asks me why later, but in this experiment I will not be factoring in gene duplications, and here's why. Having the same information repeated over and over again doesn't really produce anything new, and in this case, since we are trying to modify existing information that codes for the homologs that Nick J. Matzke has provided, duplications of any of the 35 genes for the homologs won't really do anything. Gene duplications can only add new information if one of the first two mutation types mentioned happens to them after the fact. And those will already be accounted for. Next we need to determine how many mutations happen per cell. In the case of E. coli, they happen only about once in every 400 cells. F2XL
Most, if not all, mathematical statements are amenable to being reproduced and manipulated in a computer program. As design detection and the use of the EF lean heavily on maths in their implementation, would it be possible to create a computer program that could implement a limited Explanatory Filter? Bennith Karlow
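For what it's worth, here is one way such a program might be organized: a minimal, hypothetical sketch in Python, not anyone's official implementation. It only automates the final decision step; the hard inputs (a law-like explanation, a probability estimate under the chance hypothesis, and a specification judgement) still have to be supplied by the analyst.

    UPB = 1e-150  # Dembski's universal probability bound, used here as the complexity threshold

    def limited_explanatory_filter(explained_by_law, p_under_chance, is_specified):
        # Node 1: does a known regularity (law/necessity) account for the event?
        if explained_by_law:
            return "law/necessity"
        # Nodes 2 and 3: is the event both specified and too improbable for chance?
        if is_specified and p_under_chance < UPB:
            return "design"
        return "chance (design not inferred)"

    # Hypothetical example: no law-like explanation, estimated chance probability
    # of 1e-300, judged to be specified.
    print(limited_explanatory_filter(False, 1e-300, True))  # -> design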
Please do. Now you've got some BIG numbers, I want to see your small ones. Bob O'H
To extend things a little further, let's assume that Earth has literally been filled with 5 x 10 to the 23rd power cells since the origin of life 3.85 billion years ago. Now what we need to do is determine a generation length. A typical cell normally divides every half an hour or so (20-30 minutes), but for the sake of argument, let's assume that these E. coli cells could (and still do) replicate every second. This leads us to 3,850,000,000 years, nearly 1.5 trillion days, about 34 trillion hours, 2 quadrillion minutes, or less than 125 quadrillion seconds. Now, since we want to apply the replication/generation length of one second (which is roughly 1,500 times faster than the replication rate we know E. coli tend to go through) to the total number of seconds that have passed since the origin of life, here's what we get: 5 x 10 to the 23rd power cells that could possibly live on earth at once... ...multiplied by the number of seconds that have passed since the origin of life (about 1.25 x 10 to the 17th power). The result is roughly 6 x 10 to the 40th power. But for the sake of argument, let's round up to 10 to the 41st power. This number is probably trillions upon trillions of times greater than the actual number of cells that have ever existed on earth, for the following reasons. 1. I assumed the number of cells that could exist (and DO exist) in a cubic millimeter of ocean water is 500,000, regardless of how much guesswork was involved in that number. 2. I assumed the entire surface of the earth is covered in water (compared with almost 71%). 3. I exaggerated the amount of ocean water that exists on Earth by assuming the ocean had a constant depth of 10 miles (on average it goes a little past 2 miles, and never goes much past 7 miles, in the Mariana Trench). Combined with the above point, I am basically using an ocean that is about 7 times bigger than it really is. 4. Compared with the normal replication rate of E. coli, which is about 25 minutes on average, I basically assumed they replicate 1,500 times faster than they normally do. 5. I assumed that the flagellum has had 3.85 billion years to become commonplace amongst E. coli; in fact, the flagellum is believed to have shown up not too long after the origin of life. Altogether this means that there have been at most 10 to the 41st power cells (or trials) to get the rotary flagellum, which is already a hugely unrealistic number, but I will exaggerate further by assuming the number is roughly a billion times larger still, and therefore our end result will be 10 to the 50th power cells on earth since the origin of life. Continue? F2XL
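As a quick cross-check of the multiplication above, a small Python sketch using only the assumptions stated in this comment (not real-world measurements):

    SECONDS_PER_YEAR = 3.156e7
    years_since_origin = 3.85e9          # assumed age of life on earth
    cells_at_any_moment = 5e23           # assumed cells alive at once (from the ocean estimate)

    seconds = years_since_origin * SECONDS_PER_YEAR   # ~1.2e17 seconds
    cells_ever = cells_at_any_moment * seconds        # one generation per second assumed

    print(f"{seconds:.2e}")     # ~1.22e+17
    print(f"{cells_ever:.2e}")  # ~6e+40, i.e. roughly 10^41 before any further exaggeration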
"Or did you realize that you could not deliver the goods you promised?"
I figured someone would say that if I didn't post it all on the same day. I'll reload this page every now and then to make sure as many people are watching as I expected. Now for the next set of considerations. What we'll be doing is trying to change the information that codes for the homologs and their respective functions and locations, and mutate it into the information which codes for a bacterial flagellum. But before we can do that there are a few things we need to factor in, in terms of probabilistic resources: 1. The maximum number of cells that could possibly have existed on Earth until the present. 2. What we can expect from mutations during this time frame. 3. We will assume there are about as many ways to get a flagellum after changes have been made to the 5% dissimilar coding for the homologs (we must keep in mind that the difference is actually much greater than that) as there are atoms in the entire known universe (10 to the 80th power, which I'm pretty sure more than compensates for specificational resources). Let's knock out that first factor. How many cells could possibly have existed on earth until the present date? Generally speaking, the estimated surface area of the earth is roughly 500,000 KM. This translates into 1,650,000,000 feet, or nearly 20 billion inches. In all of those inches put together you would have roughly 500 billion millimeters. As seen in the following link... http://www.eurekalert.org/pub_releases/2005-08/osu-mhh081805.php ...there could potentially be half a million cells in a cubic millimeter of sea water. Assuming that we had an ocean ten miles deep covering the entire globe since the origin of the first cell, this translates into a 2,000 quadrillion cubic millimeter ocean, and if 500,000 cells can survive all at once in each cubic millimeter of sea water (or so the theory goes), this means we have 5 x 10 to the 23rd power cells that can possibly be alive on earth at any given time. F2XL
Yes, please go on, F2XL. Or did you realize that you could not deliver the goods you promised? Daniel King
Go on, F2XL... Bob O'H
Following you so far. Allen_MacNeill
"I’m following you so far. I’ll be interested to see how you handle replication, selection, and mutational processes." These are all rather easy to compensate for, though it's the selection part where it gets interesting. Moving on, what we will basically be doing is testing the claims of Nick J. Matzke. What our starting point will be is an E. coli cell that does not have a rotary flagellum. Instead we will make the ridiculously hopeful assumption that all of Matzke's homologous protein parts which are listed in the following table... http://pandasthumb.org/archives/2006/09/flagellum-evolu.html#more ...are already present in the cell. We will use 35 genes (the minimum needed) to code for all of the homologous protein parts, and even though there's good reason to believe otherwise, we will assume that the base pairs which code for all of the homologs are characterized by a 95% sequence similarity. Follow me? F2XL
PS: In the derivative 150 utils thread, I have linked an excerpt from the Research ID Wiki, here, that gives a specific metric for CSI; one that exploits probability and the commonality of the bit as a metric of information-carrying capacity. kairosfocus
Folks, First, it's looking like it's espalier, NOT photoshop. And, it doesn't just LOOK like a tree-chair, as here we see someone sitting in the very same chair, a few years on it seems. Courtesy, Pooktre garden, which also has other espalier trees, the man-trees linked above among them. [Notice here how observed functionality and multiple instances help reinforce the design inference.] Now, what is going on, and how does this set up the "X" filter as a serious, empirically anchored inference to design?
1 --> Many will note that I usually speak of FSCI, not CSI. That is: FUNCTIONALLY specified, complex information.
2 --> In short, observe functional utility first -- which requires an empirically observable situational context, probably with surrounding objects that interact with the entity, and often underlying conventions, codes and/or signals [here analogue and digital may be relevant], algorithms, handshaking etc.
3 --> This first dimension of the design inference leads to at least a discrete-state, nominal metric, storing one bit of capacity but packing a lot more significance: functional/non-functional.
4 --> Functionality is tied to configuration that is contingent, and usually constrained to rather specific [clustered] configurations. That is, specification deals with the question of how many states may reasonably be regarded as functional in the relevant sense -- not many configs of jet parts will fly, relatively speaking; sometimes, just one state is functional. [Here, we can use a sensitivity-to-perturbation metric, which allows us to judge the scope and peakiness of the hill of functionality within the island of functionality. Such metrics are often used in optimisation analyses, as we look at sensitivity to parameter drift etc.]
5 --> Next, observe the complexity. In short, how many configs are possible for the elements involved. This is of course where a Shannon-type information storage capacity metric is important. For instance, and pardon F2 if this grabs a bit of your thunder, a 49 k base DNA strand is capable of ~ 8.70 * 10^29,500 states -- and given the odds that a random config will contain stop codons, a randomly chosen config will by far and away be likely to be non-functional. [And, as Marks and Dembski showed in a duly "censored" paper, it is active information that allows us to use insight to do materially better than random search.] High contingency is also a sign of chance or intelligence, not of mechanical necessity showing itself in natural regularity.
6 --> Compare: if there were 10^1500 possible basic flagellum designs, each with 10^150 variants that preserved the basic function, we would still have the functionality so isolated relative to the config space that no random search [or anything comparably capable] would be reasonably likely to reach any such island of function within the probabilistic resources of the available universe, which credibly has about 10^150 quantum states accessible to it across its lifespan. [And so the odds of a random search or equivalent finding things isolated to significantly better than 1 in 10^150 are negligibly different from nil.]
7 --> Now, we see that information is not just a matter of bit-capacity, but that certain configs of states of bits [or the equivalent] may function, and are often quite isolated in a config space, sufficiently so that random (or substantially equivalent to random) search strategies are effectively non-starters.
So, it is no surprise that FSCI is a reliable index of design in the cases where we can compare its detection with an independently known causal story. [E.g., Pooktre now joins the list.]
8 --> Now, observe something else: measurements -- quantities -- are measures of something [qualities, especially functional ones]. Metres quantify length beyond tall/short, near/far etc. But near/far etc. still have merits. Further to this, so long as we may reliably make a comparison more/less, we may rank, and in so ranking we store information that can be recorded in bits. Ordinal scales are scales of measurement -- and indifference curves are an example of that, with a subjective ranking. BTW, subjective probability a la Bayes etc. is similarly a measure; one that is more or less routinely used in real-world contexts.
9 --> And, to measure on a scale -- interval or ratio -- we compare the observed quality as instantiated, e.g. length, to an arbitrarily and conventionally defined standard amount. [Here, a certain rod, then a certain number of wavelengths, and finally the distance light moves in a certain span of time.]
10 --> Last, we are looking at INFORMATION. Functionally specified, sufficiently complex, often algorithmic, code-bearing information that actually works in identified objects that have in them information-processing parts in an architecture that works with the code to implement algorithms. Not apparent messages that are credibly a product of chance and necessity acting undirected, but information which is credibly -- per a massive empirical observation base -- the product only of intelligence.
Thus, I conclude that the CSI metric, in a functional context, is sufficiently reasonable and related to what we do in many other significant contexts that to single it out for objection is brushing a little too close to selective hyperskepticism. Thus, we see a situation where we are looking at inference to the best, empirically anchored explanation of observations. In aid of these observations, we use measures where reasonable, and informed judgement. We test the result against a battery of known cases, and see that it is a reliable filter between law, chance and agency when it rules the last. So, until and unless I can see good reason to conclude that the EF, and the resulting vector metrics and verdicts it produces, are unreliable in identifying cases of agency when it so rules, I will continue to use it freely. [As to attempted uses of prestigious labels like "science" to dismiss what is tested and works reliably, that sort of dismissive exercise only serves to the discredit of science.] GEM of TKI kairosfocus
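The configuration-space figure in point 5 above is easy to reproduce; a small Python sketch, treating each base as one of four possible symbols:

    from math import log10

    n_bases = 49_000
    log_states = n_bases * log10(4)   # log10 of 4^49000
    mantissa = 10 ** (log_states % 1) # leading factor

    print(round(log_states, 1))       # ~29500.9
    print(round(mantissa, 2))         # ~8.7, i.e. ~8.7 * 10^29500 states

    # For comparison, the ~10^150 states cited as the universe's probabilistic
    # resources fall short by roughly 29,350 orders of magnitude:
    print(round(log_states - 150))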
D. The tree probably obtained this shape as the result of the purposeful efforts of an intelligent agent. E. Other. What a meaningful example! I don't know if my ideas are worth a cent, but I'll try to put them into the discussion. Apparently the person (or persons) involved in this evidently ID-made thing was/were able to obtain it by continuously putting shape constraints on the natural growth of the tree. So, day after day, and week after week, the tree itself was shaped according to the original idea of the owner. Don't you think this seems a perfect metaphor for the way the Designer acted in the biological world? kairos
Allen and Bob, do each of you follow me so far?
I'm following you so far. I'll be interested to see how you handle replication, selection, and mutational processes. Bob O'H
If you observe up from the trunk, you will immediately see the familiar menorah. This clearly shows an example of cooption; further, it answers the age-old question -- which came first, the menorah or Hanukkah? We can now confirm that the Jewish people found natural menorahs in about 3000 BCE and developed a whole ritual tale around them. Like, get real! bFast
Let's start off with what we're quantifying. We are looking at a rotary flagellum commonly found in E. coli. We'll go with something simple: a 35-gene flagellar motor (one which Scott Minnich has shown requires all 35 of those genes), which, given typical gene size in E. coli, is coded for by roughly 49,000 base pairs. The total number of base pairs in the E. coli genome as a whole is about 4.7 million. Follow me so far? F2XL
Allen MacNeill, (48) I realize that we are getting dangerously close to your cutoff of 100 comments, and that others have at least partly addressed your comment. However, I would like to return to a specific set of statements you made.
. . . if ID is valid, aren’t both the “chair tree” and a “natural tree” designed? Indeed, I believe that virtually all of the commentators here would agree. In that case, how useful is the so-called “explanatory filter”, as clearly it cannot distinguish between the level of design exhibited by the object in the photograph and the “natural” objects with which it is compared? Indeed, if ID “theory” is valid, it seems likely that essentially all living objects and processes (and formerly living, but now dead ones, such as corpses and fossils) are designed, or are artifacts produced by designed entities.
Implied in your comments seems to be an assumption that multiple layers of design cannot be distinguished, and/or that the distinction is not useful. If so, that would seem to be a quite indefensible assumption. For example, if we sail into Sydney harbor and see plastic flowers forming the letters "WELCOME TO SYDNEY", it is reasonable to conclude (1) the letters were designed, (2) the plastic flowers were designed, and (3) the designer of the flowers may have had nothing significant to do with the designer of the letters. Thus two levels of design can be distinguished that are essentially independent. If real flowers are designed, a similar argument can be made for the essential independence of the use of real flowers and their original growth. The tree chair is then simply another example of an artifact which is designed twice, and the two levels of design can be distinguished. Saying that something is designed is not automatically a way of throwing up one's hands and quitting on the question of how or why.
In fact, this distinguishing between levels of design has very practical consequences. A baseball bat, a rolling pin, and a hammer are all designed objects. While it may be difficult to assign a precise number of CSI bits to each object, it seems clear that each object (especially if the bat has a logo engraved on it) passes the minimum number, whatever that is. All three can be swung at a head with enough force to cause damage, and in some cases death. That also can, in the appropriate circumstances, pass the threshold of the design inference, in which case the designer is likely to find him/herself behind bars if not dead. But the two design inferences are at least partly separable. The maker of the hammer or rolling pin is not likely to join the malefactor. So important distinctions can be made between different levels of design.
The real problem that is going on here is illustrated by another example of double, or in this case triple, design. The World Trade Center was undoubtedly designed. Airplanes are designed, junkyard tornado theories to the contrary notwithstanding. I remember just getting ready to finish my shift when I was called to see television pictures of the World Trade Center North Tower in flames, from where an airplane had hit it. I remember the commentators saying that this was a terrible tragedy, and wondering what had happened, and muttering, "You idiots! Somebody did that deliberately." Then when the South Tower was hit, the commentators started asking, "You don't suppose it might be some terrorist?" With time, that has become (with good reason) the conventional wisdom. The problem with recognizing design early on, when it conceivably might have made a difference, was simple: nobody wanted to believe that such design was possible, because it would change how they viewed the world, and would change their actions in ways that they didn't want to change them.
In the same way, there are those who do not wish to see the obvious design in trees, people, starfish, and bacteria, because it will change how they view the world and might have a major impact on their actions. Otherwise, why would Dawkins have insisted for decades that all the evidence for design is bogus, when he knew that intelligent design was a reasonable explanation for the origin of life? He was afraid of where it would go if he admitted the truth. He was afraid that if he gave an inch, design theory would take a mile. And so he refused to give even the well-justified inch.
As long as he could portray design as unscientific, he had a strong defense against the idea of a God who could "intervene" in nature. Now that he has admitted that one cannot rule out design on scientific grounds, the question of the identity of the designer starts suggesting answers that are hard to avoid, but which he very much would not like to deal with. This is not a question of science versus religion, as is commonly portrayed from that side, but rather a question of anti-theism versus science. Paul Giem
"Aren't we all. It is funny how ID-critics will use the objection, get painted into a corner and get suddenly quiet, then bring it up again on a later thread." It's like reading a short biography of what happens when debates run rampant on Youtube, or any other site. I feel the same way when people insist that evolution doesn't have a "goal" of producing something. You ask for elaboration on what those other outcomes might be and they leave, and repeat the same argument on another thread/video. I guess truth can be a hard pill to swallow. "The first step is to focus on the instructions, not the final result, since very simple instructions can lead to extremely complex things (such as the mathematical formula that produces the Mandelbrot set). For the flagellum, this would be the stretches of DNA that code for the various components, but also include parts that code for its construction. It would be reasonable NOT to include any of the machinery used for transcription, etc., as that is held "in common" with all other DNA-based processes and constructs." I agree, when I quantify the likelihood of a structure in biology coming about by material means, I stick with just the instructions, kind in the way one would calculate the odds of getting windows vista by seeing what the binary code would be for it. In the case of the flagellum, that's what I do. It's a little different from how Dembski does it, but in either case you get the same general result. "Yes, I know we’ve gone through this before. But I’ve yet to see a satisfactory answer." I think I can satisfy that Bob. Allen I agree with everything you say of empiricism, and I think I better address both you and Bob on the following: "Still waiting to be shown such an analysis for the E. coli flagellum…" Folks, I would like the honor of taking it upon myself to show these guys how the "X" filter and CSI can both be used to infer design as the cause for the flagellum. But before I do so, both of you need a little background information on what I'm doing. I'm taking the Explanatory Filter (what I call the "X" filter) and applying it to complex specified information (or CSI). The CSI we will be looking at is the genetic information that codes for a flagellum commonly found in E. coli. Allen and Bob, do each of you follow me so far? PLEASE DO NOT BLOCK THESE PEOPLE UNLESS THEY RESORT TO NAME CALLING OR DRIFTING OFF TOPIC! I just want to give them an idea of how it's done. F2XL
Bob -- "Please, show don't tell. How would you infer design? What process would you go through? If you need to use CSI, how would you calculate this?"
Bob, a fair point, and before I address it I want to go back to the one I was making, to be clear. You have a fallen log (or dead horse) and you sit on it. It's a chair. Hence, from what you know about chairs alone you can't infer design simply because something serves as a chair, since you know just about anything can be a chair. However, there are things that you know are specifically (note SPECIFICally) designed (note designed) for sitting. They have backrests, armrests, fit the contours of the body, provide distance from the ground for the body, etc. So you see this thing and assume design. Further, we know that trees, by necessity, don't grow this way. Further, we recognize that it is very improbable that chance made this thing grow in such a fashion (point to ponder: would the chance be greater for this to grow as it did, or for life to form through the meshing of inanimate atoms? I dare say the latter). So it is fair to say this thing is designed. But suppose you never saw a chair, would you presume design? Probably not. You need the knowledge of design to assume design. Which gets us to life. DNA meshes well with what we know from experience about conveying information, whether it be alphabets or computer code. If we just saw DNA as a pattern of molecules and never made the correlation between DNA and information, would we consider design? Again, probably not, but the more we understand about how information works, the greater traction ID gets. tribune7
bornagain77, the math is at a bare minimum. The main purpose was to develop a working conceptual framework, with simple mathematical concepts to illustrate certain points. That said, if the framework moves forward, I fully expect the math to get more complex. JJS P.Eng.
Allen MacNeill, Thanks for making the point about not having to identify the designer to identify design. Someday maybe enough people will think well enough that this and "who designed the designer" can wind up in the ash can where they belong. On your other point: can fingerprint analysis be considered robust enough to help solve crimes where it is applicable without the need for it to be applicable in each and every crime scene? Of course, this leaves open the question of whether it is ever applicable and useful. But that case does not hinge upon your current demands. Charlie
JJS P.ENG. I'm glad you are willing to see where this trail leads. I don't have the math background to flesh it out in any meaningful detail but I wish you the best, and would like to know your findings. Thus I am saving your blog address for future reference. bornagain77
I believe there have been three ways to infer design: CSI, which has a problem because some people use it to refer to every designed entity; IC, which Behe elaborated on and against which to this day there have been no valid counter-arguments other than speculation; and finally OC, or organized complexity, where individual complex objects interact with each other for functional results. The cell is one of the best examples of OC. There is constant harping from people like Bob O'H on how to calculate CSI, but the real issue is how big the exponents are, not whether we can pin them down exactly. If CSI is limited to those systems that refer to other functional systems, such as computer programs and machine operations, alphabets and language, and DNA and proteins, then the numbers are so astronomical it is inane to challenge them. So Bob, while there may not be a precise number to quantify CSI, the number is so large that it is meaningless to challenge it as not being large enough to rule out chance. Nitpick away, but you know and we know the specific number is incredibly large for each case of CSI. Pick a protein and the instructions that refer to it. Do the calculations and show us how this could result from chance by a process of your choice. Lay out the argument for chance, and then maybe we can have a discussion that is not nitpicking over trivialities. I find it ironic that an evolutionary biologist such as Bob, or biologists such as specs or leo, never defend their positions with facts but seem to delight in finding slight inconsistencies in often minor arguments by proponents of ID. Step up to the plate and swing away, instead of hurling insults from the rafters that the opponent's game isn't going perfectly. jerry
Re SCheeseman [70] and bornagain77 [71]: Reading your comments have given me encouragement to complete a framework I am working on to objectively recognise design in nature. Like you, I believe Information Theory is a necessary step in this process. The proposed framework I'm attempting to develop "calculates" the amount of information in an object based on essential but basic parameters of design. If the amount of information calculated is over a "minimum threshold", then one can confidently say that the object is a product of design. I should stress that this "framework" is the product of someone with limited knowledge of biology and information theory. The "framework", if valid, requires that several details be hashed out/filled in, and quite possibly multiple revisions of the framework as a whole (I am expecting a lot of "red marks" - oh, such a lovely reminder of my days in graduate study). This is why I welcome all constructive comments on the proposed framework. For those interested, it will be posted at my blog under "The Problem of Design - Part 3: The Proposed Framework" by the end of this week (I hope). My apologies to Dr. Dembski and DaveScot for "pimping" my ideas (and my blog) on this site. JJS P.Eng.
As far as needing to know the identity of the "designer", I completely agree with those who assert that this is unnecessary. I wouldn't need to know which particular Microtus pennsylvanicus are the parents of the meadow vole from which I've obtained a tissue sample. However, what I do need is a technique to identify that sample and characterize it as being from a particular population (or species, or whatever). This is done by following a well-worked-out protocol for statistically analyzing data obtained from empirical tests. If the explanatory filter and CSI are to be taken seriously as robust and widely applicable tools for "design analysis", it is absolutely necessary that their application to empirical analysis be based on an algorithm that can be applied to any natural phenomenon to determine one's "confidence" (in a statistical sense) that the object or process observed is or is not designed. Statistical analyses such as these are standard procedure throughout the natural sciences, but especially in the biological sciences, where causes are considerably less obvious (and more complex) than in the physical sciences. So, if ID is to be taken seriously, it must be possible to do similar statistical analyses to determine if a particular object or process "exceeds the threshold for design". Still waiting to be shown such an analysis for the E. coli flagellum... Allen_MacNeill
You would infer design because of what you know about design, not what you know about chairs.
Please, show don't tell. How would you infer design? What process would you go through? If you need to use CSI, how would you calculate this? Yes, I know we've gone through this before. But I've yet to see a satisfactory answer. Is it too much to ask you to show your working when you infer design?
You’ve even been willing to contradict yourself in order to make an argument.
Oh, thanks. Insult my integrity by linking to a post from last year, which was posted 2 days after anything else, and to which I didn't reply (suggesting I never read it). And which refers to some previous argument of mine without indicating what that argument was. I can't defend myself simply because I've no idea what you were responding to. Well done. You win. Bob O'H
A note on the concept of the genetic 'code'. I recently read an article over at TO wherein the author states that the genetic code isn't a real code, as in a genuine set of abstract symbols with syntax, semantics, etc. used to describe information processes or whatever. The author states that the genetic code is merely a 'cipher'. Of course, that is useless, since a cipher is also an abstract code. Looking up definitions for the words 'code' and 'cipher' using Google's built-in define:word function reveals a ton of varying definitions. Doing a 'define:genetic code' fares about the same - a ton of varying definitions. Most of the definitions, however, do indeed denote an implied intelligence, and of course ought to. Code, instructions, ciphers... all of these words can be applied to genetic information systems, and all of them denote things that can only arise from intelligence. There is simply no such thing as structured code arising without it. This, to me, is the death knell of materialistic views on the genome and therefore of all origins and development of living things. Information itself is necessarily metaphysical, and thus coded info cannot arise without a metaphysical component and therefore intelligence. Borne
gpuccio wrote:
I am a little bit tired of all the discussions about the necessity of knowing about the designer to infer design.
Aren't we all. It is funny how ID-critics will use the objection, get painted into a corner and get suddenly quiet, then bring it up again on a later thread. Oh well.
First of all, design is defined in relation to a designer. Although some commenters have tried to confuse the terminology, there cannot be design without a designer. Otherwise, you have to call it something else (Dawkins is correct enough when using the term “apparent design”, meaning something which has some characteristics of a designed objetc, but in reality is not the product of a designer). But what is a designer? It is important to remark that, although our reference model is that of humans, the concept of designer does not require the full set of human characteristics: a designer can be easily defined as any conscious being who has the ability to act and to generate, through his action, new design, and in particular new CSI. So, a designer needs not be human. He must, however, have the following characteristics: a) Be conscious (experience conscious representations) b) Be intelligent (that is, aware of principles like meaning and purpose) c) Be able to act upon matter d) Be able to superimpose CSI from his conscious representations into matter, through his actions That’s all. It’s much, but that does not mean neither that the designer has to be human, nor that we have to know anything else about him. In the design inference, the designer is inferred from design, and not vice versa. Design is observed, and the designer is inferred. Moreover, an inferred designer has to have only the essential characteristics of a designer, and not any other aspect of, say, human designers. So, he needs be conscious, like humans, but he needs not have hands. He needs to be able to act, but he needs not to be able to speak. And so on. Does the designer have to have a material body? That depends on your philosophy. If you believe that only matter can interact with matter, point c) would imply that. But if you believe that other realities can exist which are different from matters as we know it, and yet can interact with matter as we know it, then no such material condition is required. As for me, as I firmly believe that human consciousness is not strictly material, and yet it constantly interacts with matter, there is really no such problem.
Hits the nail on the head. Another reason why you, gpuccio, are one of my top 5 favorite commentors on UD. Atom
bornagain77: Thanks for putting some meat on the bare bones I offered! SCheesman
Off Topic: CSI (complex specified information) is foundational to the Intelligent Design position, and I have seen many discussions on this site about the precise definition of information, with Shannon information being invoked many times by our materialistic friends against CSI. In this vein of debate, my curiosity has been aroused by recent discussions about "defining" information that have been brought about by Anton Zeilinger's work in "Quantum Teleportation". Since Anton Zeilinger is in fact arguing that information is indeed foundational to reality (information is the irreducible kernel from which everything else flows), I thought the following remark very interesting for the Intelligent Design position on CSI: http://www.quantum.univie.ac.at/links/newscientist/bit.html In the beginning was the bit - excerpt from the article: The number of classical bits in a system has traditionally been evaluated using a formula derived by the American engineer Claude Shannon. Say your system is a hand of cards. If you wanted to e-mail a friend to describe your hand, Shannon's formula gives the minimum amount of information you'd need to include. But Zeilinger and Brukner noticed that it doesn't take into account the order in which different choices or measurements are made. This is fine for a classical hand of cards. But in quantum mechanics, information is created in each measurement--and the amount depends on what is measured when--so the order in which different choices or measurements are made does matter, and Shannon's formula doesn't hold. Zeilinger and Brukner have devised an alternative measure that they call total information, which includes the effects of measurement. For an entangled pair, the total information content in the system always comes to two bits. Without Shannon's theory, progress in telecommunications during the second half of the 20th century would have been far slower. Perhaps total information will become as important in the 21st century. Zeilinger's principle is a newborn baby. If its fate is anything like that of Planck's century-old energy quantum, years will pass before it grows up and gains acceptance in the mainstream of physics. But if it does, it will transform physics as thoroughly as its venerable predecessor. --- me again: Could this total information that Dr. Zeilinger is talking about be reconciled in a meaningful manner with the CSI that Dr. Dembski has illustrated? I.e., are they in fact two sides of the same coin? bornagain77
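To make the article's "hand of cards" example concrete, here is a small sketch (my own illustration, assuming all hands are equally likely): the classical Shannon measure for a five-card hand from a 52-card deck is just the base-2 log of the number of possible hands.

    from math import comb, log2

    possible_hands = comb(52, 5)        # 2,598,960 equally likely hands
    bits_needed = log2(possible_hands)  # minimum description length in classical bits

    print(possible_hands)               # 2598960
    print(round(bits_needed, 1))        # ~21.3 bits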
Allen_MacNeill:
Yes, I would. Specifically, I would like to see how one arrives at a quantitative analysis of the level of “CSI” in something like the flagellum of E. coli versus something like a Martian “blueberry” in such a way that one can be reasonably certain that the former is indeed “designed” but the latter isn’t.
I think the above is an entirely reasonable expectation. I don't claim to have the full answer, but maybe a few ideas that might help in leading to the attainment of the goal. The first step is to focus on the instructions, not the final result, since very simple instructions can lead to extremely complex things (such as the mathematical formula that produces the Mandelbrot set). For the flagellum, this would be the stretches of DNA that code for the various components, but also include parts that code for its construction. It would be reasonable NOT to include any of the machinery used for transcription, etc., as that is held "in common" with all other DNA-based processes and constructs. Once the "minimal instruction set" is determined, you would need to be able to quantify why that particular set of codes is "specified" compared to any other random ordering of DNA of the same length. This might be done by identifying in each section the individual protein coding regions, stops, starts etc., assigning to each part a particular function. Subtracting from the total specified complexity would be redundancies, e.g. where different sequences might be able to code for a protein with nearly identical properties. This might be pretty hard, but perhaps a reasonable "upper bound" could be applied. As with a written language, there must exist an irreducible "core" of information, beyond which the function would be lost, and from that the specified information could be calculated. Dr. Dembski has produced similar estimates, I believe. SCheesman
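One way the "upper bound" idea above could be put into numbers, purely as a sketch of the bookkeeping (this is not Dembski's calculation, and the redundancy figure is a made-up placeholder that an analyst would have to estimate from synonymous codons, mutation-tolerant positions, and so on):

    def specified_bits_upper_bound(n_bases, redundancy_fraction):
        # Raw capacity: 4 possible bases -> 2 bits per base position.
        raw_bits = 2 * n_bases
        # Discount the portion judged redundant (synonymous / tolerant positions).
        return raw_bits * (1 - redundancy_fraction)

    # Hypothetical example: a 49,000-base minimal instruction set with an
    # assumed 30% redundancy gives an upper bound of 68,600 bits.
    print(specified_bits_upper_bound(49_000, 0.30))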
bob,
But how would one infer or even suspect design if one knew nothing of the designer?
Haven't we had this conversation before? Dave even brought it up. I've noticed you have a trend of repeating your objections even after they're answered many, many times. You've even been willing to contradict yourself in order to make an argument. Patrick
Interesting what Allen MacNeill brings up at 47, J Stanley01 concisely answers at 50, and Barry A seconds at 53. Still Allen makes a good point: Design obtains at many levels. Instantly in the photo above we recognize design in the specification of a chair, yet at a deeper level, as Allen points out, there is additional design (he might say “the appearance of design”) in the living organism itself. And it doesn’t stop there. The environment provided by the soil, water, atmosphere and sun—the whole solar system—as Sir Isaac Newton assured us, is also designed. And now we go even deeper and ask: What about the laws? Yes, the laws too, they appear designed. As atheist Martin Rees concedes (Just Six Numbers: The Deep Forces That Shape The Universe), it’s Design or it’s Many Worlds—he picks the latter but notes that it’s merely to his taste. So how far does it go? Reality—like language—is hierarchical. At the deepest level is determinism and in the other direction is contingency—exactly where you draw the line between the two may be debated. But, as Paul Davies says, even God cannot alter the laws of logic. When physicists study other possible worlds, they mean worlds where the laws of physics are designed differently but the mathematics remains the same. So Allen is talking context. We isolate things for study over against some context. When we study biological design we assume the laws of physics, when we study physics we assume mathematics. It is only through the mystical experience that we grasp the whole of it all at once. Rude
I am a little bit tired of all the discussions about the necessity of knowing about the designer to infer design. I think it should not be as difficult as it seems.
First of all, design is defined in relation to a designer. Although some commenters have tried to confuse the terminology, there cannot be design without a designer. Otherwise, you have to call it something else (Dawkins is correct enough when using the term "apparent design", meaning something which has some characteristics of a designed object, but in reality is not the product of a designer).
But what is a designer? It is important to remark that, although our reference model is that of humans, the concept of a designer does not require the full set of human characteristics: a designer can easily be defined as any conscious being who has the ability to act and to generate, through his action, new design, and in particular new CSI. So, a designer need not be human. He must, however, have the following characteristics:
a) Be conscious (experience conscious representations)
b) Be intelligent (that is, aware of principles like meaning and purpose)
c) Be able to act upon matter
d) Be able to superimpose CSI from his conscious representations into matter, through his actions
That's all. It is a lot, but it does not mean either that the designer has to be human, or that we have to know anything else about him. In the design inference, the designer is inferred from design, and not vice versa. Design is observed, and the designer is inferred. Moreover, an inferred designer has to have only the essential characteristics of a designer, and not any other aspect of, say, human designers. So, he needs to be conscious, like humans, but he need not have hands. He needs to be able to act, but he need not be able to speak. And so on.
Does the designer have to have a material body? That depends on your philosophy. If you believe that only matter can interact with matter, point c) would imply that. But if you believe that other realities can exist which are different from matter as we know it, and yet can interact with matter as we know it, then no such material condition is required. As for me, as I firmly believe that human consciousness is not strictly material, and yet it constantly interacts with matter, there is really no such problem.
Do we need to know anything specific about the designer to be able to recognize a design? No, we just need to be aware of our full definition of a designer, and of all the necessary characteristics which must be shared by any designer (see the previous points). The only aspect where some more definite idea of the designer could help in inferring design is in the part of the inference which tries to identify specification. As I see it, there are specifications which require no specific knowledge of the designer, and others which are better understood in relation to specific characteristics of his. It depends on the context in which the specific function is defined.
As I have often remarked, any functional specification must be defined in an appropriate context. Proteins are of no use except in the context of a cell, or at least of a solution with the right pH, and where the substrates on which the protein acts are present. A transmembrane protein is of no use if there is no membrane. An enzyme is of no use if there is not its substrate. A transmission pathway is of no use if all the steps are not working.
In the above examples, the function can easily be defined in a specific context which can largely be considered independent of the designer. An enzyme is functional if it does its work, and if its work serves some function in the context, for instance, of a bacterial cell. If it was designed, should we know some further details of the designer, beyond his being conscious, intelligent, and aware that an enzyme of that kind is needed so that the bacterium can survive? No.
In the case of the espalier tree, the additional CSI linked to the special form of the tree is viewed as functional only if we think of it as a chair. Therefore, we must be aware of the context in which chairs are functional: somebody has to be able to sit on them. The designer should be aware of that, too. That would bring us to make one more assumption about the designer: either he can sit, or he knows that others (humans, for example) can sit. In other words, the only assumption we need about the designer, beyond the above points, in a specific design inference, is that he may be aware of the context which specifies the function observed, and possibly interested in implementing that function (and able to do that). I hope that shows that we need not know anything else: human or not human, material or not material. These are non-essential (although certainly interesting) details. The design inference can well live without them. All we need are the above points, and an awareness of the context for the function (certainly on our part, and at least assumed on the part of the designer). gpuccio
thanks, Charlie... then i don't know what to say except that i feel a wee bit embarrassed for him. interested
Bob -- "I would infer design because I know about chairs." Maybe not as much as you might think. When is a dead horse a chair? :-) You would infer design because of what you know about design, not what you know about chairs. tribune7
F2XL asked (in #52):
"Would you like me to show you how it’s done with the flagellum?"
Yes, I would. Specifically, I would like to see how one arrives at a quantitative analysis of the level of "CSI" in something like the flagellum of E. coli versus something like a Martian "blueberry" in such a way that one can be reasonably certain that the former is indeed "designed" but the latter isn't. This is precisely the kind of statistical analysis that is the bread and butter of experimental biology. One formulates an hypothesis, formulates a prediction on the basis of that hypothesis, designs an experiment to test that prediction, counts or measures the results generated by that experiment, and then analyzes how well the results fit the predictions flowing from the hypothesis using some form of analytic statistics. Show me how this can be done with the flagellum of E. coli. Specifically, show me how the analytic mathematics/statistics of the XF can be used to calculate a number that indicates succinctly the level of CSI in a chosen object or process, such that everyone who does such a calculation will agree that it does (or does not) rise to the level of statistical significance. Allen_MacNeill
PS: GP [and Allen], T & A, by suggesting a metric dimension of functionality [cf Fig 4 in their paper], can give us a metric for degree of functional performance/specification. In either case, we see a vector metric, with one variable for degree of complexity, and another for functionality and/or specification. Metrics come in various forms: ratio, interval, ordinal, nominal, and quantities come in scalar and vector forms too: 5 m/s North is not equal to 5 m/s South. kairosfocus
Bob: RE: how would one infer or even suspect design if one knew nothing of the designer? All that is required to identify that a given observed case is credibly designed, is that, based on experience of designs and designerS, one identifies reliable signs of design. The explanatory filter, with the construct: specification + complexity, has proved reliable enough when it rules "design." The subset of CSI, functionally specified, complex information [or T & A's Functional Sequence Complexity], is even more specific, as it looks at function that expresses itself through complex organisation and associated information that would otherwise be overwhelmingly improbable. [That is, the available probabilistic resources would be most likely fruitlessly exhausted on a random search or one not instructed by active information.] So, one does not need knowledge about THE designer to infer to design, and one knows a lot about designers already. In the case of the photo of the chair, the structure shown exhibits patterns of complex organisation that put it well within the threshold of FSCI. And that would hold even if the photo was a fake (Just, instead of espalier, it would be photoshop or the like!) Back to work . . . GEM of TKI kairosfocus
I'm curious about this. I would infer design because I know about chairs, and the designers who use them (I even have a colleague who claims to teach chair theory). But how would one infer or even suspect design if one knew nothing of the designer? Bob O'H
Allen_MacNeill: "Ergo, the "explanatory filter" apparently cannot produce a quantitative assessment of the level of design of any living (or formerly living) object or process. Extending this line of reasoning, the "explanatory filter" is also useless for the purposes of verifying or falsifying "borderline" cases."
Wrong. The EF is about affirming the demonstrable presence of CSI. To do that, it uses a definite, although arbitrarily drawn, quantitative level for the minimum complexity required to define CSI (Dembski's UPB). So, it is perfectly quantitative. Only, its goal is not to quantify CSI, but just to identify those cases where CSI is certainly present. A few notes, to avoid further misunderstandings:
a) Design does not equal CSI. There are simple designs which don't exhibit CSI. Correctly, they will not be identified by the EF (the design nature could, anyway, be proved in other ways, for instance by direct observation of the process of design). On the other hand, all correctly identified CSI is designed (empirically).
b) Not all CSI can necessarily be detected by the EF, because not all the necessary information may be available. Let's remember that the EF, to be applied, needs reasonable information about three different points: the computation of the complexity, the specification, and the exclusion of necessary pathways.
c) The EF does not attempt to quantify how much CSI is present, and therefore to compare different examples of CSI. It just separates CSI from non-CSI. That's its goal.
d) An approximate comparison between different levels of CSI can, however, be attempted by comparing the levels of complexity (not of specification, because specification is a property of the whole, which is either present or not, a binary variable in other words). In a general sense, a more complex specified item could be said to exhibit more CSI than a simpler one. So, as you see, even the "level" of CSI can in theory be assessed, although it may not be easy to do that. An attempt to do that for protein families can be found in the second paper by Abel and Trevors, which applies the concept of Shannon entropy.
In the case of the tree, anyway, if you can show that the CSI implicit in the "espalier" form is indeed superimposed on the CSI implicit in the tree (which should be rather intuitive, but not necessarily easy to demonstrate), then you can correctly conclude that the espalier tree has more CSI than a similar, normal tree. To sum up: ID and the EF are pretty quantitative, and they do very definite things (which, indeed, cannot be said of many other evolutionary concepts). gpuccio
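For reference, the quantitative threshold mentioned above (Dembski's universal probability bound of 1 in 10^150) is equivalent to a complexity threshold of roughly 500 bits; a one-line Python check:

    from math import log2

    # 10^-150 expressed in bits of complexity: about 498.3, usually rounded to 500.
    print(round(150 * log2(10), 1))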
interested 05/20/2008 9:52 pm allen is not making many valid points at all..... he is just showing that he has never read any of Dembski's works..... Dembski's point in defining the explanatory filter was NEVER to be able to ALWAYS detect design in every instance. rather, it was to show that some things show irrefutable levels of design. beyond that, your post is pretty poor actually.....
Interested, your points are good (as are those of others addressing Professor MacNeill's comments) except for the first error you made: MacNeill not only has read Dembski, but has taught his work to students. http://evolutionanddesign.blogsome.com/reading-list/ His error does not, then, stem from ignorance. Charlie
D for all trees, not just the tree shown. William Wallace
pubdef, (54) Thanks for your charitable reading of what I was trying to say, and as my previous comment noted, I butchered. I agree that the theistic implications of ID should be owned. That is part of why I made comment 41. That does not mean that the theism must be front-loaded. But it does mean that anti-theism cannot be front-loaded. (Well, unless one does like Prof. Dawkins and insist that only naturally evolved aliens are allowed). Behe's religion did not require ID (compare Kenneth Miller). He was driven to it by his science. However, his belief in God made it much easier to accept ID as a possibility. Perhaps more to the point, Antony Flew, in spite of being an atheist, came to the realization that ID was substantially correct, and almost perforce became a theist, although not a Christian. Paul Giem
Daniel King, (42) My bad. I was in a hurry and didn't proofread well enough. The sentence should read,
The problem you don’t get is that, unless there is a question-begging definition of science, there is no requirement that science must be compatible with atheism.
A non-question-begging definition of science might be the study of reproducible events, or perhaps the study of events for which there is a known physical mechanism (although the latter would rule out quantum mechanics). A question-begging definition of science would be the explanation of events in nature by means of natural processes. How can we know ahead of time that all events, including the Big Bang and the OoL, are in fact explained by natural processes? The latter definition of science would appear to beg that question. Paul Giem
Wow, five posts to answer, what fun. #35:
So then, if you discovered an abandoned city on Uranus or Jupitor, you would conclude that it was either designed by humans, or was natural, not artificial?
I would hypothesize that it was designed by something in the same general category as humans, i.e., an evolved being constituted of matter, which I would proceed, to the extent of my available brains, time, and resources, to learn as much as I could about.
And would anyone who credits its existance to design be described as propounding a creationist or religious view?
It would depend on the nature of the proposed designer. (See below for my attempt to define "creationist.") #37:
Consider a robot that makes robots like itself. The robot is not intelligent. It simply follows a computer code that tells it to place nut B on bolt A and twist clockwise. Through a whole series of such instructions another robot is produced by non-intelligent means. But the robot itself was obviously the product of intelligence.
By "the robot itself," I assume you mean the first robot; so I don't see how "the robot itself" is not a "created" object, or how your proposal differs from front loading. #37 again:
The question of whether it is possible to detect design and the question of whether a designer of living things exists are the same question. If I demonstrate design has occurred (i.e., detect design) I have necessarily demonstrated the existence of a designer, because design does not happen without the existence of a designer.
I have no quarrel with "if design, then designer," except that it seems circular; but I don't quite understand your first sentence here. It is certainly possible that we can detect design and that a designer of living things does not exist. (If you mean to say "the question of whether it is possible to detect design in living things and the question of whether a designer of living things exists are the same question," that would not be responsive to my point, which is that our ability to detect design by humans -- in archeology, crime scene investigation, etc. -- does not mean that we can, by the same principles and methods, infer design or a designer of humans. (Sorry if this needs more explication than I have time for here.)) #38:
We use induction to infer “making” and “designing” ability in most intelligences, based on the limited number of examples we’ve seen (human design.)
I would think that induction would be of limited validity if there is insufficient similarity between the inferred instances and the observed instances; so I would question the validity of inferring, from artifacts of human intelligence, that other things were intelligently designed by something or someone that we know nothing about. #39:
1) What is your definition of “creationism”?
I've never had to define it before, but I suppose this will do: the belief that nature, and most specifically, living things, were intentionally brought into being by some entity outside of nature.
2) if we limit ID to cases of human design, is it still religious?
I don't see how it would be, but it seems superfluous. Detection of design is simply a component of other activities and disciplines (archeology, CSI, etc.).
If not, then it isn’t ID that is religious, since you’re using ID in both cases. ID has religious implications (as does Darwinism), but it doesn’t have religious premises.
Again, if we're only talking about detection of human (or animal) design, I don't see why we have to use the term ID. As for ID not having religious premises, I don't agree. Behe, for example, has said that he arrived at ID through scientific inquiry, unrelated to his Christian beliefs; but I question whether he would have reached his conclusions in the absence of a prior conception of (even without belief in) a supernatural, intelligent being. #41:
The problem you don't get is that there is no requirement that science, unless there is a question-begging definition of science, is [ ]required to be compatible with atheism. [I removed an apparently spurious "not" from your sentence.]
I have no expectation that science will be compatible with atheism. (I do think it must be materialistic; if anyone can tell me how they can do nonmaterialistic science, I'd like to hear it.) I do object to disingenuous claims that ID has nothing to do with theism. I think there's a fundamental dishonesty going on when some in the ID community are saying "we just want to do science, follow the evidence where it leads, the Darwinists are suppressing scientific inquiry, we are the Galileos of our age" and others are saying that the problem is that we have to abandon materialism. pubdef
jstanley01 writes: "I think [Alan is] missing the irony of the picture: how the human intelligence behind the specified complexity that makes up a chair can be so evident, while any intelligence at all behind the orders-of-magnitude greater specified complexity required to produce a tree can seem so obscure to so many people." Yes! You got it. Thanks. BarryA
"However, this immediately raises the question, if ID is valid, aren't both the "chair tree" and a "natural tree" designed?" Yes, but on different levels. Normally we look at the CSI required to code for the tree, in this case we are looking at how the tree was arranged to grow in the pattern that it did after the fact. "Indeed, I believe that virtually all of the commentators here would agree." No kidding, but see the above point. "In that case, how useful is the so-called "explanatory filter", as clearly it cannot distinguish between the level of design exhibited by the object in the photograph and the "natural" objects with which it is compared?" I'm assuming your understanding of the Explanatory Filter (or what I like to call the "X" filter) literally stems from the way I applied it to the EXTERNAL ARRANGEMENT of the tree, and for the most part that was the original question that was being asked. The X filter can ALSO be applied to the genome of the tree if that's the type of issue you're looking towards. "Indeed, if ID “theory” is valid, it seems likely that essentially all living objects and processes (and formerly living, but now dead ones, such as corpses and fossils) are designed, or are artifacts produced by designed entities." That's a possibility but nonetheless I think you have fallen for a somewhat common misconception. ID only concerns itself with CERTAIN aspects in biology (and physics/cosmology). You wouldn't need every last characteristic of life to be the product of design for it to be a valid theory. "Ergo, the “explanatory filter” apparently cannot produce a quantitative assessment of the level of design of any living (or formerly living) object or process." Again, you've confused what the explanatory filter IS with WHAT we're applying it to. "Extending this line of reasoning..." Which we can explicitly see is based off of a straw man... "...the “explanatory filter” is also useless for the purposes of verifying or falsifying “borderline” cases." If you would like, I can show you how I personally use the Explanatory Filter (or simply the "X" filter) and apply it to a biological structure such as the flagellum. Then you know how you can apply it at the biological level. "But, in the same way that the “explanatory filter” cannot distinguish between an espaliered tree and a “natural” tree (i.e. one not modified by a human “designer”)" Dude, the X filter is part of our every day reasoning, it's not that tough to apply it to the Tree in the image. Chance? Heck no. Regularity? Though highly improbable, the fact that it conforms to a specified low descriptive yet familiar pattern we see repeated elsewhere means that this is consistent with being an intentional arrangement. Design? Yes, see the previous sentence for clarification. "...it cannot quantitatively distinguish between a non-living Martian “blueberry” and a living one (or one produced by a living entity or process)." See the above elaborations on the X filter, you're assuming it cannot be applied at the biological level because we used it in a different context. "One of the most basic principles of modern biological science is that, if one cannot statistically analyze the validity of one’s empirical results as compared with one’s proposed hypothesis for the origin of such results, then one isn’t “doing science”." We totally agree with this and it's one of the reasons we express skepticism over Darwinian evolution. 
Take the following video as an example: http://www.youtube.com/watch?v=SdwTwNPyR9w As an evolutionary explanation, you have protein parts just pop out of nowhere, magically gather at the same location, in the right order, without any interfering parts getting in the way, and in a way where they interface compatibly to form a functional flagellum capable of overcoming Brownian motion. No attempt to statistically quantify the likelihood that such a pathway will be crossed is made. And without numbers (which you agree are important), there is no way to falsify (or verify) that the pathway can even occur. When Dembski (or many of the other people on UD, myself included) attempts to quantify such a pathway, we see that it has no realistic chance of happening (since the odds tend to fall below Dembski's universal probability bound, the UPB). So in terms of not rigorously testing their results statistically, evolutionists are just as guilty. "Yet, on the basis of the foregoing," Which was based on a misconception. "the "explanatory filter" (and, by extension, the soi-disant "theory" of "complex specified information" upon which it is based) is utterly useless as an analytic tool in the empirical sciences." See bottom point. "Discovering how, exactly?" Would you like me to show you how it's done with the flagellum? F2XL
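A note on the arithmetic F2XL gestures at here: the comparison against Dembski's universal probability bound (UPB) of 1 in 10^150, or roughly 500 bits, can be sketched in a few lines of Python. The step count and per-step probability below are placeholder assumptions chosen purely for illustration; they are not measured figures for flagellar assembly, and the whole point of the dispute is what those numbers actually are.

    import math

    # Illustrative comparison against Dembski's universal probability bound
    # (UPB) of 1 in 10^150. The step count and per-step probability are
    # made-up placeholders, not empirical estimates for the flagellum.
    UPB = 1e-150

    steps = 50      # hypothetical number of independent required events
    p_step = 1e-4   # hypothetical probability of each event

    p_pathway = p_step ** steps      # joint probability, assuming independence
    bits = -math.log2(p_pathway)     # the same improbability expressed in bits

    print(f"pathway probability  : {p_pathway:.1e}")
    print(f"improbability in bits: {bits:.0f}")
    print("below the UPB (filter would set chance aside)?", p_pathway < UPB)

Under these invented numbers the joint probability (10^-200) falls below the UPB, the point at which the filter would rule out chance; change the step count or the per-step probability and the comparison can easily go the other way, which is why the input estimates, not the final multiplication, are where the real argument lies.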
Allen is not making many valid points at all.....he is just showing that he has never read any of Dembski's works.....Dembski's point in defining the explanatory filter was NEVER to be able to ALWAYS detect design in every instance. Rather, it was to show that some things show irrefutable levels of design. Beyond that, your post is pretty poor actually..... The point of the explanatory filter is to show merely that some things are certainly designed. Your post has done zero to address that.... interested
Allen_MacNeill @ 47 I think you're missing the irony of the picture: how the human intelligence behind the specified complexity that makes up a chair can be so evident, while any intelligence at all behind the orders-of-magnitude greater specified complexity required to produce a tree can seem so obscure to so many people. If both are the products of design, of course there are going to be borderline problems. Take a co-authored book for example: it may not be self-evident from a cursory reading of the text which author wrote what. Familiarity with each author's own unique style may be required, and even that may not be enough. Or take a co-written piece of software: it may be impossible for a third party to sort out which coder wrote what code. I don't see how that borderline problem relates to the Mars blueberries. It seems to me that whether they display sufficient specified complexity to wiggle, or once to have wiggled -- in contrast to the rocks that surround them -- is exactly what everyone's waiting to see. jstanley01
Allen, I am not sure you are making a valid point. There are two different issues: the basic design of the organism and its potential capabilities, and how that organism's outward appearance or capabilities are affected by the environment. These are two completely separate issues. On each issue the EF could be applied. You seem to be conflating issues when they are really quite separate. jerry
Furthermore, the originator of the "explanatory filter" itself has said this about quantitative analysis of hypotheses about design:
"ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories. If ID is correct and an intelligence is responsible and indispensable for certain structures, then it makes no sense to try to ape your method of connecting the dots. True, there may be dots to be connected. But there may also be fundamental discontinuities, and with IC systems that is what ID is discovering.”
http://www.iscid.org/boards/ubb-get_topic-f-6-t-000152.html Discovering how, exactly? Allen_MacNeill
An interesting question, but not necessarily in the way intended. Consider the fact that all of the commentators so far have compared the object in the photograph with a "natural" tree, and concluded that the "tree" in the photograph must have been designed (perhaps by the person next to it in the photo). However, this immediately raises the question, if ID is valid, aren't both the "chair tree" and a "natural tree" designed? Indeed, I believe that virtually all of the commentators here would agree. In that case, how useful is the so-called "explanatory filter", as clearly it cannot distinguish between the level of design exhibited by the object in the photograph and the "natural" objects with which it is compared? Indeed, if ID "theory" is valid, it seems likely that essentially all living objects and processes (and formerly living, but now dead ones, such as corpses and fossils) are designed, or are artifacts produced by designed entities. Ergo, the "explanatory filter" apparently cannot produce a quantitative assessment of the level of design of any living (or formerly living) object or process. Extending this line of reasoning, the "explanatory filter" is also useless for the purposes of verifying or falsifying "borderline" cases. For example, it cannot be used to determine if the "blueberries" so abundant in some of the photographs taken by the Mars surveyor robots are the product of living organisms (and therefore of "design"), as doing so would involve calculations that would indicate that the "blueberries" either did or did not exceed some quantitative threshold indicating design. But, in the same way that the "explanatory filter" cannot distinguish between an espaliered tree and a "natural" tree (i.e. one not modified by a human "designer"), it cannot quantitatively distinguish between a non-living Martian "blueberry" and a living one (or one produced by a living entity or process). One of the most basic principles of modern biological science is that, if one cannot statistically analyze the validity of one's empirical results as compared with one's proposed hypothesis for the origin of such results, then one isn't "doing science". Yet, on the basis of the foregoing, the "explanatory filter" (and, by extension, the soi-disant "theory" of "complex specified information" upon which it is based) is utterly useless as an analytic tool in the empirical sciences. Allen_MacNeill
Lemme see. Could it be A? B? C? D? E? Mind if I sit down while I think about this? jstanley01
"Another famous example would be IBM’s Deep Blue computer beating the chess champion Gary Kasparov using Machine Code and its machine set, the computer and monitor with its powersource, electricity." He actually had some interesting things to say about Deep Blue. I believe he made a comment that if a computer could beat him in chess then a computer could write the best books, plays, or something to that effect. "Who is doing the requiring? The scientists or the atheists?" Sadly it could be both if many scientists regard themselves to be atheist or agnostic. Still can't find any justification for why things that are "scientific" must be in terms of matter and energy. I find the third fundamental entity (information/intelligence) to be just as testable. F2XL
When dealing with the origin of life (OOL as they call it I guess), if you exclude the design inference, then that makes all other inferences atheistic in nature. So if the evolutionary proponents are going to sit there and deny the inference of design, they are in effect in the business of validating atheism with respect to the origin of biological organism(s). By the way, (D) is the obvious answer to the question posed. I see a recognizable pattern that has a low probability of existence. Extremely low probability. Exceptionally patterned. Very intuitive. Very cool. RRE
BarryA, Codes can be granted the ability to produce the effect of intelligent agency when given a machine set that is compatible with that particular code. In another way, codes can mimic the effects of a mind and body when a machine (or machines) can be used by the code. That would explain how the genetic code with its compatible machine set (nano robots within the cell) could produce yet another code with a compatible machine set, in other words, reproduce. Our automobile factories can use a code in combination with robotic arms (machines) to purposefully produce the car. The code and compatible machine set (robotic arms) can mimic the effects of a welder. Another famous example would be IBM's Deep Blue computer beating the chess champion Garry Kasparov using Machine Code and its machine set, the computer and monitor with its power source, electricity. The obvious point would then be what produced the 'original' genetic code and the 'original' set of compatible machines that make the cell. After all, a code has only been shown to come into existence through intelligent agency (either another code with a compatible machine set or through a mind with a compatible machine set). So in my opinion, this would regress until a mind becomes the final answer as to any code's origin. Also, observation has shown us that each and every time a mind has been shown to exist, a compatible machine set has been present (biological body). So I think I have shown that a code can in fact produce the effects of intelligent agency, although this has always regressed back to a mind at the irreducible point of origin. RRE
The problem you don’t get is that there is no requirement that science, unless there is a question-begging definition of science, is not required to be compatible with atheism.
Enough with the double negatives, already. Do you mean to say, "Science is required to be compatible with atheism"? Who is doing the requiring? The scientists or the atheists? File under: Incoherence. Daniel King
Pubdef, (33), You make a valid point along with some points that appear to be mostly incorrect. Regarding the detection of design, if one comes across a beaver dam, one infers either instinct or intelligence or both in the beavers. Instinct is simply a low-level "hardwired" intelligence, very much akin to computer programs. If someone happens on a valley where no humans have been (which is unlikely to happen now) and finds a beaver dam, it is reasonable to infer some kind of intelligence, even if one has never seen a beaver. Could one say that a given dam was made by beavers and not humans? Not securely, as conceivably humans could imitate beavers. Could one say that a dam was made by humans and not beavers? If it were 500 feet high and made of reinforced concrete, on this earth, probably yes. But in either case, one can say that the dam was the result of intelligent design, and not the random falling of mud, logs, concrete, and/or steel. The inference of intelligent design is at least partly independent of the designer. BarryA (37) has successfully shown how design can be separated temporally from making something. This is also true for the springing of traps. Frontloading is a demonstrably real phenomenon. Your valid point is that this is in some sense creationism and religious. If one accepts that there was a designer, then that designer had to be more intelligent than us (as we can neither create life nor transport it at will to another solar system yet). So from our perspective, even if material, this designer is effectively godlike. Furthermore, if this designer was material, the question of "Who designed the designer?" is appropriate, leading to an ultimate designer or designers that were (are?) not dependent on the organization of matter for its/their intelligence. So, yes, there would be rather clear creationist (in the broader sense) implications. The problem you don't get is that there is no requirement that science, unless there is a question-begging definition of science, is not required to be compatible with atheism. An interpretation of science compatible with atheism has to win on the merits. It looks like such an interpretation is losing rather badly in several areas, most prominently on the OoL. Tough break. You have to win on the merits, and if you lose, then you have to either concede and move to our side, or maintain a faith-based atheism. As for me, I'll go with the evidence. And I think every scientist should be able to do the same without intimidation. Do you? Paul Giem
The choice is obviously D! But nowhere does this challenge "materialist assumptions" as other commenters have suggested. At the risk of sounding like a Darwinist, this was clearly designed by a natural entity; a human being is the most likely explanation. It was also designed for a discernible purpose - making a chair that you could take a picture of and show off how cool you are! It's a very neat chair. You don't have to have a signature saying whodunit to identify the designer; the shape, size, and nature of the design that took place suggest that it was a skilled human designer. Here's a question: if there was a flaw in the design, could you infer that the designer was less-than-fully capable of carrying out the design? For large, complicated designs, can the effects of miscommunication be detected when multiple designers are involved? These are the kinds of questions I'm really interested in. EJ Klone
Sorry Barry, I mistook #38 for DLH when it was you. Good point either way. Atom
DLH wrote:
The question of whether it is possible to detect design and the question of whether a designer of living things exists are the same question. If I demonstrate design has occurred (i.e., detect design) I have necessarily demonstrated the existence of a designer, because design does not happen without the existence of a designer.
Bingo. pubdef wrote:
This leads to illumination of why Intelligent Design is, essentially, (1) creationism and (2) religious
1) What is your definition of "creationism"? 2) if we limit ID to cases of human design, is it still religious? If so, then you need to explain how detecting signs of intelligent activity on material objects becomes a religious proposition. If not, then it isn't ID that is religious, since you're using ID in both cases. ID has religious implications (as does Darwinism), but it doesn't have religious premises. Atom
pubdef wrote:
The fundamental point is not so much that we have seen humans design things as that we have seen humans make things; better, that we have seen humans do things; better yet, that we have seen humans. So, the inference to human design is fundamentally different from an inference to nonhuman design. This leads to illumination of why Intelligent Design is, essentially, (1) creationism and (2) religious.
Why is it different? How do you know humans can make things? Because you've seen a few do so? You are also using induction to infer "making" ability in most humans based on a limited number of examples. We use induction to infer "making" and "designing" ability in most intelligences, based on the limited number of examples we've seen (human design.) As I've written here before, artifacts establish the historical presence of designing intelligences. So your point devolves into my original post (#1)...you will argue that design can't be true, because we have no "evidence" of ancient non-human designers. (Conveniently, there can never be any such "evidence", since even if we found a big signature that said "Made by Alien 115", it couldn't have been designed, since we have never seen a non-human alien intelligence designing. So that evidence wouldn't count, nor would any set of evidence.) So I'd rather not use your reasoning. Atom
pubdef writes "the real point is not that someone/something designed the flagellum, but that someone/something made it. (If I'm wrong, tell me — is it a viable hypothesis within the ID paradigm that living things were designed by an intelligence, but the design was implemented without intelligence?" Well, duh. No one suggests that each flagellum comes about as an act of special creation by an intelligent agent. That is absurd. Living things were indeed designed by an intelligent agent, and that design is indeed implemented without intelligence. Consider a robot that makes robots like itself. The robot is not intelligent. It simply follows a computer code that tells it to place nut B on bolt A and twist clockwise. Through a whole series of such instructions another robot is produced by non-intelligent means. But the robot itself was obviously the product of intelligence. Now consider the bacterium. It is truly the case that a bacterium is a self-replicating nano-robot. It is a marvel of nano-technology, complete with a computer code (DNA), material transport mechanisms, quality control apparatus, etc., all of which operate without the slightest need for input by an intelligent agent. The question you pose is, therefore, utterly irrelevant to ID. We know what MADE the bacterial flagellum: the bacterium. The real issue is what is the best explanation for the existence of a staggeringly complex self-replicating nano-robot – unguided and blind natural forces or intelligent agency. The answer is, of course, obvious. pubdef then writes: "the real point of dispute between ID and ID opponents is not whether it is possible to detect design, but whether a designer of living things exists." Wrong. The question of whether it is possible to detect design and the question of whether a designer of living things exists are the same question. If I demonstrate design has occurred (i.e., detect design) I have necessarily demonstrated the existence of a designer, because design does not happen without the existence of a designer. BarryA
re: #35 The planet "Jupitor" should not be confused with the more familiar "Jupiter", since it is in a different universe than our own, which is one of billions of universes that make all events possible. russ
The fundamental point is not so much that we have seen humans design things as that we have seen humans make things; better, that we have seen humans do things; better yet, that we have seen humans. So, the inference to human design is fundamentally different from an inference to nonhuman design. This leads to illumination of why Intelligent Design is, essentially, (1) creationism and (2) religious.
So then, if you discovered an abandoned city on Uranus or Jupitor, you would conclude that it was either designed by humans, or was natural, not artificial? And would anyone who credits its existence to design be described as propounding a creationist or religious view? russ
Re #17 and #27: "Dembski's whittler" is clever, but a few minutes' thought brought me to why it doesn't get Dembski where he wants to go. The fundamental point is not so much that we have seen humans design things as that we have seen humans make things; better, that we have seen humans do things; better yet, that we have seen humans. So, the inference to human design is fundamentally different from an inference to nonhuman design. This leads to illumination of why Intelligent Design is, essentially, (1) creationism and (2) religious. Concerning (1), the real point is not that someone/something designed the flagellum, but that someone/something made it. (If I'm wrong, tell me -- is it a viable hypothesis within the ID paradigm that living things were designed by an intelligence, but the design was implemented without intelligence? You might say that "front loading" is such a hypothesis, but I don't think that works as a separation of design and implementation.) Concerning (2), the real point of dispute between ID and ID opponents is not whether it is possible to detect design, but whether a designer of living things exists. pubdef
re: #31 Could it not be that the tree, through gradual evolution, provided progressively more comfy seating for Neanderthals/Cro-Magnons, and benefited from early man's aversion to insects? In killing the bugs that preyed on men, they would have likely killed some of the tree-eating species as well, thus forming a symbiotic relationship between man and chairtree. russ
To further clarify the explanatory power of Darwinian theory, we now turn to the chair-shaped tree. The insects that evolved through natural selection to avoid humans, and thus the deadly chemicals, evolved further to recognize subtle indicators of human presence, such as chairs. This tree, through natural selection, evolved to fool the insects into thinking that humans might be around to poison the insects with the deadly chemicals, and, thus, this variety of tree stood a better chance of survival from avoiding attack by insects that fear humans. Thus, through natural selection, the chair-tree passed on its chair-tree genes to future generations.
Gil, your post is an excellent contribution to the overwhelming body of evidence for macroevolution. But I would like your clarification on the origin of this: http://bp2.blogger.com/_Lj5KP8OgZVk/R8cfwjQTSTI/AAAAAAAABXE/wYkCFIJiwL0/s1600-h/trees-thumb.jpg Which came first, the chair-tree, or the man-tree? And do we have in one or the other a missing link that finally proves a theory which had already been proven? russ
pubdef wrote:
Unless I missed something, no one has responded to this point from Congregate #3: I could change my mind if you showed me that the tree’s ancestors had the same shape, or if seeds from it grew in the same shape.
If the tree's ancestors had the same shape, then it took this shape through mechanical necessity (development program.) But then the question gets pushed back one level: how did the tree's ancestors come to take this shape? The reason no one responded (I believe) is because we all see that it doesn't really answer BarryA's implied question. We could also say it has this shape because it had the same shape two seconds ago and things tend to keep the same shape over time...but that doesn't really answer Barry's implied question, either. Atom
Unless I missed something, no one has responded to this point from Congregate #3:
I could change my mind if you showed me that the tree’s ancestors had the same shape, or if seeds from it grew in the same shape.
pubdef
Matteo wrote:
I can't remember exactly where, but in some published essay or book, William Dembski pointed out the question-begging logic behind saying "we can identify human designs because we have seen humans design things, but not so with non-human designs". Well, when we saw the human being designing something, how did we know that that was what he was doing? Dembski gives the example of a human being taking a whittling knife to a stick. How can we even tell that the human being is acting as a designer as he manipulates the knife, rather than just using it to absently hack away at the stick? Only by detecting design in the result! Hence we infer from the presence of design to the designing activity of the human, and not the other way around.
Exactly! He made the point in an essay in The Design Revolution. (Excellent book, btw.) He makes the point perfectly. Those who need to "see a designer around" before they could possibly concede design must first identify someone as a designer...and you can only do that by inferring design! Atom
Don't be silly. Nature designed the chair. What happened was Nature was walking around in Her garden, in the cool of the day, as it were, when suddenly the idea occurred to Her, "Gee! It certainly would be nice to sit down every now and then!" And right then and there She thought of Chair. 'Cause let's face it, even Nature gets tired of walking around all the time, creating millions of species and body plans out of plain old atoms and stuff. Nature's always having great little inspirations like that. Heck, I remember the time She said to Herself, "Self! Let's make a liver." Right out of the blue! Can you believe it? She's so creative. And what about the time She made light? Without light, we'd all be in the dark. In fact we're still in the dark about light. Nobody can figure it out. Is it particle? Is it wave? Who the heck knows? So that was another great idea She had. Or think about our solar system for a moment. I mean really think about it! Stuff orbiting the sun, stuff orbiting stuff, stuff staying out of stuff's way. When Nature got done figuring all THAT out, She really needed to sit down. And that's where Chair comes in. Nature is resting. (To tell the truth, She hasn't really had any great new ideas in a long time. I don't know why She stopped creating things all of a sudden, but She did. Except, of course, for the chair.) allanius
top ... less laminins bornagain77
Left a word out: I haven't searched the theological reasoning for the topless laminins bornagain77
Sparc: http://jcsm.org/myspace/laminin.jpg Well, using the same Darwinian logic for similarities of morphology being sufficient proof for establishing evolution as conclusive beyond any doubt, the morphological similarities of laminin to the cross of Christ, as well as its central role in "holding our body together (as well as holding all other animal bodies together)", obviously warrant the conclusion that the designer of life was in fact Jesus Christ. To think otherwise makes you a materialistic crank with a hidden religious agenda to sneak your (non) religion into our schools (oops, that diabolical deed has already been done). At any rate, that the disruptive ADAMs, http://www.udel.edu/PR/NewsReleases/Viper/moliculeslr.jpeg disintegrins, which would disrupt laminins, would be found in snake venom, and would look morphologically just like a snake, only furthers this Darwinian line of reasoning of similarities being conclusive scientific proof, and proves beyond any doubt that that serpent, Satan, did indeed tempt man in the Garden of Eden and caused the fall of man and the world from perfection. I haven't searched the theological reasoning for the topless laminins, but I'm sure that using this same Darwinian line of reasoning of similarities being conclusive proof will give me more irrefutable evidence to refute the crackpot Darwinists with. bornagain77
PS: Let us hypothesise a photoshopping game, just for fun; just in case Chas Johnson decides to weigh in and can find the usual artefacts of manipulation. Where does this point? Again, the FSCI leads to -- Design; just, the context of the design would be different: cheating or a joke.. kairosfocus
This tree is an example of the astonishing power of evolution. Further, it shows how two separate species could evolve traits that just happen to fit so perfectly together. The arrangement of branches accommodates perfectly the posteriors of both early and modern man. Our early ancestors were hardy and rugged individuals, but they appreciated a comfortable resting place for their rear-end as much as we do today. Individuals who rested comfortably, and in a way that facilitated social interaction, could thereafter face the next struggle with a small but significant advantage. The tree entices the primate to take a comfortable seat. The resulting shaking of the branches releases a shower of pollen into the hair of the seated individual. He or she then moves on through the forest until another rest is required. The most comfortable natural seat is sought and the pollen is carried there, accomplishing the plant's goal of pollination. To those untrained in the discipline of Darwinism this mutual arrangement may appear to defy logic. To the untrained eye it may even have the appearance of design, but this is clearly incorrect thinking. In fact we often see even more complex evolved symbiotic relationships. For example, plants have evolved the means of production of nourishing and sweet-tasting nectar as a reward to birds that assist in pollination of the plants. The chair tree is a more straightforward and more easily explained case of bottom-down/bottom-up evolution. steveO
LOL . . . Hint: look up "espalier." Here we actually see yet another case where functionally specified complex information [straightness, symmetrical bends and branches, configuration of a chair -- well over 500 bits of information are highly probable] traces yet again to design. The presence of a human in context of course invites the "well, we know a candidate designer was present" objection. To which the reply is: unless you can -- without begging the question -- rule out that a designer was possibly present, given what we know of FSCI, it is a reliable sign that points to design as the best explanation. So, obviously, option D is the best. [And, the reasons put up by some to revert to other options are highly revealing.] Well done, BarryA. VERY well done. GEM of TKI kairosfocus
Difficult question. I have to choose E. It's certainly not something that could be created by nature, or by any kind of random means. I might be tempted to say it was something created by the gentleman standing next to it. But I do not have any understanding of how he might have done it. In fact, I think it's impossible to create something like that. Therefore, I have to discount the "intelligent design" option. It was clearly created by something not of this world. And while the creator "may" be intelligent, I have no evidence to support that, seeing as how I know nothing about the nature of the creator or what conditions it may be operating in. So, I'm left with "E". There is no satisfactory conclusion. Go back to the lab and look for another hypothesis. TheMissingLink
I am intrigued by how few of our resident skeptics of ID are weighing in on this topic. I find it very telling actually..... interested
DLH-- I can't remember exactly where, but in some published essay or book, William Dembski pointed out the question-begging logic behind saying "we can identify human designs because we have seen humans design things, but not so with non-human designs". Well, when we saw the human being designing something, how did we know that that was what he was doing? Dembski gives the example of a human being taking a whittling knife to a stick. How can we even tell that the human being is acting as a designer as he manipulates the knife, rather than just using it to absently hack away at the stick? Only by detecting design in the result! Hence we infer from the presence of design to the designing activity of the human, and not the other way around. Indeed, in our own individual psychological development, how could we ever have conceived of humans as being designers, apart from having made a design inference from the results of intentional human action? The ability to make such an inference from the designed nature of effects to the designing nature of the cause would seem to be intrinsic to human developmental psychology. For example, it is hard to see how infants could ever pick up language without making such an inference. It is also hard to see how an infant could ever begin to develop an awareness of the existence of other persons absent an inference that there are agents causing the order behind sensory events. The bottom line is that accurate design detection is logically prior to knowledge of designing agents. While identifying agents depends on design detection, design detection does not depend on identifying agents. Matteo
BTW, ADAMs have been identified in snake venom! sparc
BA77:
It seems the source may be correct for the importance of the Laminin protein molecule
But then, what do topless laminins (laminin-311, laminin-321, laminin-411 and laminin-421) tell us? Did they suddenly realize they were naked? And why have disintegrins that disrupt interactions between laminins and their receptors been named ADAMs? sparc
The answer is clearly C. Random variation and natural selection can easily explain this phenomenon. Certain insects attack trees. Since the advent of humans and pesticides, insects evolved through natural selection to avoid humans and thus the deadly chemicals. Those insects that did not avoid humans and the deadly chemicals were killed by the deadly chemicals and did not pass on their avoid-humans-and-the-deadly-chemicals genes. To further clarify the explanatory power of Darwinian theory, we now turn to the chair-shaped tree. The insects that evolved through natural selection to avoid humans, and thus the deadly chemicals, evolved further to recognize subtle indicators of human presence, such as chairs. This tree, through natural selection, evolved to fool the insects into thinking that humans might be around to poison the insects with the deadly chemicals, and, thus, this variety of tree stood a better chance of survival from avoiding attack by insects that fear humans. Thus, through natural selection, the chair-tree passed on its chair-tree genes to future generations. This line of reasoning is irrefutable. Only creationists, who refuse to accept the obvious explanatory power of modern evolutionary theory, and who want to destroy science as we know it, refuse to accept the overwhelming evidence provided above. Creationists are obviously poorly educated, not very bright, and cannot understand the subtleties of modern evolutionary theory. Only those with more highly evolved intelligence and years of schooling are qualified to comment on Darwinian theory, which clearly and conclusively offers irrefutable evidence for the evolution of the tree-chair. Perhaps some bright, young, evolutionary theorist will flesh out my obvious explanation for the evolution of the tree-chair, and present it as part of his dissertation in pursuit of a Ph.D. in evolutionary theory. GilDodgen
I inferred design based on the fact that I know it is possible for humans to do such things, and that there were humans around at the time the tree existed. Is there a more formalized, "scientific" method for inferring design? If so, can someone demonstrate how it would apply, step by step, to the example here, showing all calculations? congregate
Time to whip out the explanatory filter: The object in the picture contains roughly 9 branches coming out from the stem. I would guess the range of possible directions they could point is anywhere from zero (horizontal) to 90 degrees (skyward), and they all (looking from above) could point anywhere in a 360 degree span around that main stem. So we know this is highly improbable. Now all we need to do is determine if this was a regularity or came about by design. Any arrangement of branches pointing out is highly improbable, but does this specify a pattern or design? If so, we could rule out the idea that this is an intermediate probability. So for proving specification, I will in this instance use minimum description length to make my inference. All 9 of the main branches (except the outer two, which exhibit the same bends anyway) form the same bent shape. It also repeats a familiar pattern, allowing me to describe it in far fewer words: a chair. Well, I guess the only rational answer to the evidence is D, with C playing a very minimal role. "Off Topic: Cool Video: Laminin Protein molecule: http://www.youtube.com/watch?v…..re=related" Well that's an odd coincidence ;D F2XL
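For readers who want to see roughly how an estimate like the one above could be put into numbers, here is a minimal Python sketch. The 5-degree tolerance, the uniform distribution over directions, and the independence of the nine branch directions are assumptions added for illustration, not F2XL's own figures, and the resulting bit count depends entirely on them.

    import math

    # Rough version of the branch-orientation estimate described above.
    # Each branch direction is treated as uniform over elevation 0-90 degrees
    # and azimuth 0-360 degrees; a branch "matches" the chair pattern if it
    # falls within +/- TOL_DEG of a specified direction. All values are
    # illustrative assumptions.
    BRANCHES = 9
    TOL_DEG = 5.0

    # Probability that a single branch lands inside its tolerance window.
    p_single = (2 * TOL_DEG / 90.0) * (2 * TOL_DEG / 360.0)

    # Probability that all branches independently land in their windows.
    p_all = p_single ** BRANCHES

    # The same improbability expressed in bits, for comparison with the
    # 500-bit threshold mentioned elsewhere in this thread.
    bits = -math.log2(p_all)

    print(f"per-branch probability       : {p_single:.2e}")
    print(f"whole-arrangement probability: {p_all:.2e}")
    print(f"equivalent bits              : {bits:.1f}")

With these particular assumptions the arrangement comes out at around 75 bits, well under the 500-bit mark; tightening the tolerance or counting more constrained features (bend angles, branch lengths, left-right symmetry) pushes the number up, which is where the substance of the disagreement sits.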
congregate at 3 Yes, identifying design INDEPENDENTLY from identifying a designer is very threatening to most materialists. Many insist that one must identify a designer before one can recognize design. That gives the excuse that there can be no design detection in nature without identifying the designer. Then since nature is not "signed", there can be no inference to either design or designer in nature. This is a critical issue in ID - whether design can be detected without identifying a designer. So it is worth exploring the issues on how / why we can detect design, and where this can be done without identifying the designer. A similar example is prehistoric cave paintings. How/why can we distinguish design in the above example and/or cave paintings? DLH
russ (7), Sorry, I didn't catch the sarcasm. Given the serious responses I have seen from ID skeptics, I just couldn't tell the difference. I've had conversations lately where the other person could have copy+pasted your note and it would have fit right into their line of thinking. As others have noted, it is sad but true that it is hard to satirize the opposition. Perhaps we need to use a warning of some kind -- something at the end that makes the tongue-in-cheek nature obvious. ericB
It seems the source may be correct for the importance of the Laminin protein molecule: Here is what wiki says about Laminin: http://en.wikipedia.org/wiki/Laminin Laminin is vital to making sure overall body structures hold together. Improper production of laminin can cause muscles to form improperly, leading to a form of muscular dystrophy. It can also cause progeria. and this: https://www.bdis.com/discovery_labware/products/display_product.php?keyID=238 Laminin, a major component of basement membranes, has numerous biological activities including promotion of cell adhesion, migration, growth, and differentiation, including neurite outgrowth. bornagain77
Off Topic: Cool Video: Laminin Protein molecule: http://www.youtube.com/watch?v=Ejj51hNIL3E&feature=related bornagain77
Eric, I was being sarcastic. But we know that humans do that kind of thing, so it's reasonable to assume that the guy in the photo could have designed it. It's the best explanation. Of course, airbrushing out the designer does not mean he never existed. That was part of the sarcasm. russ
[Also to russ (5)] congregate (3): "So what’s the point, that we can recognize design in some things? Is that controversial?" Sadly, yes, it is controversial -- at least it is when materialists feel that the recognition of design might threaten their materialist assumptions. It is only not controversial when those materialist assumptions are safe (cf. Dawkins in Expelled). Atom's response in post 1 is humorous satire, although unfortunately it was hard to tell that it was satire until the end. To russ, the person in the photo cannot be assumed to be the designer. In any undesigned option, the person could just as easily be the discoverer. Remember, in addition to your general knowledge (e.g. about trees), the only specific information you have to go on is the photo. In other words, the question is whether one should infer design, even if you don't know anything about a designer. Can you properly infer design just from the object in question? ericB
"D", because we have a photo of the apparent designer. But if you airbrush out the designer from the photo, then A,B,C or E become true. russ
I think I would assume intention (which is option D). But the D (from design) is not a showstopper. I just don't understand why anyone would think so narrowly. Maybe reverse engineering would be an interesting activity to consider as an example of how to proceed. To know that some system was designed does not change anything; it just broadens the horizon with regard to the kind of processes that can be considered while you are attempting to figure out how it works. A real scientist (someone who seeks knowledge) would then ask "how?", as in "how was this done?" because that's still very much unknown. A real philosopher (someone who seeks truth) would then ask "why?". gmlk
Well, I'm not aware of trees growing quite like that naturally, and I am aware that humans have "trained" plants to grow in similar fashion (at Disney World they grow pumpkins shaped like the iconic Mickey Mouse ears), so I'll go with d, purposeful efforts of an intelligent agent. I could change my mind if you showed me that the tree's ancestors had the same shape, or if seeds from it grew in the same shape. So what's the point, that we can recognize design in some things? Is that controversial? congregate
Naw, It's all front loading. Rude
If we have separate evidence of an existing designer (and by separate I mean not "artifacts"...it has to be bones or something), then maybe design. Until then, we can assume that a high-information fitness function exists, and that it shapes trees into such an unlikely functional configuration (unlikely relative to all possible configurations.) If we conclude design, we have to stop asking all questions. Suddenly. Except for who designed the designer. </end canned UD critic answer> That is the answer someone will be posting in 5, 4, 3... Atom
