Uncommon Descent Serving The Intelligent Design Community

East of Durham: The Incredible Story of Human Evolution


Imagine if Galileo had built his telescope from parts that had been around for centuries, or if the Wright Brothers had built their airplane from parts that were just lying around. As silly as that sounds, this is precisely what evolutionists must conclude about how evolution works. Biology abounds with complexities which even evolutionists admit could not have evolved in a straightforward way. Instead, evolutionists must conclude that the various parts and components that comprise biology's complex structures had already evolved for some other purpose. Then, as luck would have it, those parts just happened to fit together to form a fantastic, new, incredible design. And this mythical process, which evolutionists credulously refer to as preadaptation, must have occurred over and over and over throughout evolutionary history. Some guys have all the luck.

Comments
Petrushka, in my last comment to you I said that you would say anything before you'd allow yourself to address the evidence in earnest. Like clockwork, you did exactly that.
It hasn’t been demonstrated to be irreducible
Bullshit. Pure bullshit. What part of the codon, tRNA, aaRS system can be removed and still function as the mechanism to transfer genomic information? Answer the question, Petrushka, it was your bald assertion, so answer it. If you cannot answer that question, then common intellectual honesty requires you to retract your remark (instead of justifying it with equivocation and higher grades of cow poo).
...and your argument is simply a restatement of the fact that we don’t know the history of the origin of life.
My argument is not a restatement of the OOL mystery; it is an argument that the transfer of information from the genome is observably semiotic, and logically/necessarily so. That argument is supported by a) 100% of the evidence, b) logical coherence, and c) the embarrassing fact that not a single materialist has thus far been able to refute it by observation (nor even offer a conceptual counter-example). Deal with it. ;)

Upright BiPed
December 7, 2011 at 09:56 AM PDT
Gordon: Antievolutionists have been trying to use information to refute evolution for a long time. I think A. E. Wilder-Smith may've been the first to seriously pursue this line of reasoning. He basically claimed (IIRC, it's been a while since I read him) that information theory says that information can only come from intelligent sources. Unfortunately for him, both the statistical and algorithmic theories of information (the two primary theories) say almost the opposite: in the statistical theory, information sources are generally assumed to be random (even when they really aren't, because the difference between intelligently-created and random information doesn't matter to the aspects it studies). The algorithmic theory doesn't generally deal explicitly with the creation of information, but random processes are certainly capable of producing Kolmogorov complexity (AIT's measure of information).

Let's keep it simple. When we speak of information in ID, we are considering what Abel would call "semiotic information". IOWs, we are interested in information that conveys a meaning. The so-called theories of information, including Shannon's, are not interested in meaning. They are essentially cybernetic or communication theories, very useful to compute complexity, but they avoid the problem of "meaningful information". ID tries instead to approach that problem rigorously, and quantitatively, introducing the concept of specification.

Let's go on. Some later, more clueful antievolutionists (mainly Dembski) realized that the definitions from the standard theories of information didn't give them any basis to refute evolution, and so set out to create their own theories and definitions that could provide a framework they could use. I would like to say here that the reason to be interested in meaningful, semiotic information is certainly not only to refute evolution.
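Gordon's claim that random processes readily produce Kolmogorov complexity can be illustrated with a short sketch. True Kolmogorov complexity is uncomputable, so this uses zlib-compressed size as a rough, computable upper-bound stand-in; the proxy is my choice for illustration, not something from the discussion above:

```python
import random
import zlib

def compressed_size(s: str) -> int:
    # zlib-compressed length in bytes: a rough, computable upper-bound
    # proxy for Kolmogorov complexity (which is itself uncomputable).
    return len(zlib.compress(s.encode("ascii"), 9))

random.seed(0)
# A highly ordered string with a short description ("AB repeated 5000 times").
repetitive = "AB" * 5000
# A string produced by a random process over an 8-symbol alphabet.
rand = "".join(random.choice("ABCDEFGH") for _ in range(10000))

print(compressed_size(repetitive))  # tiny: the pattern compresses away
print(compressed_size(rand))        # thousands of bytes: randomness resists compression
```

In this proxy sense, the random source generated far more algorithmic information than the ordered one: the random string needs roughly its full entropy (about 3 bits per symbol) to describe, while the ordered one collapses to a few dozen bytes.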
The problem of meaning is fundamental in all human knowledge, and it has been avoided for too long by science. The more interesting question is something more like blueprint-style information. If it (or something similar) could be properly defined (as opposed to what I've done) and it can be shown that evolution has no way to add it to the genome, then you'd have a case. I have a case. I have properly defined dFSCI. And it can be shown that non-design evolution has no way to generate it.

The definition you're using of dFSCI takes a very different approach to ruling out natural production. Where Dembski is rationalist (mathematical derivations of why natural processes can't produce CSI), you use an empirical argument (natural processes have never been observed to produce dFSCI). I am very happy with these words. I feel that you have correctly understood my approach, which is completely empirical. I think that you describe well the difference with Dembski's approach, which is certainly more important and deeper than mine. They are however different, even if my empirical approach obviously uses many tools created or detailed by Dembski.

I discussed some general problems with this approach earlier, but let me take a closer look at this particular argument. I think it's pretty clear that evolutionary processes can produce increases in dFSCI, at least if your measure of dFSCI is sufficiently well-behaved. Consider that there exist point mutations that render genes nonfunctional, which I assume that you'd consider a decrease in dFSCI. Point mutations are essentially reversible, meaning that if genome A can be turned into genome B by a single point mutation, B can also be turned into A by a single point mutation. Therefore, the existence of point mutations that decrease dFSCI automatically implies the existence of point mutations that increase dFSCI. Ah! Now we are coming to something really interesting.
I must say that I have really appreciated your discussion, and this is probably the only point where you are explicitly wrong. No problem, I will try to show why. Please go back to my (quick) definition of dFSCI in my post number 9 here. I quote myself: "No. The dFSCI of an object is a measure of its functional complexity, expressed as the probability to get that information in a purely random system. For instance, for a protein family, like in Durston's paper, that probability is the probability of getting a functional sequence with that function through a random search or a random walk starting from an unrelated state (which is more or less the same)." Well, maybe that was too quick, so I will be more detailed.

a) We have an object that can be read as a digital sequence of values.
b) We want to evaluate the possible presence of dFSCI in that object.
c) First of all we have to explicitly define a function for the digital information we can read in the object. If we cannot define a function, we cannot observe dFSCI in that object. It is a negative. Maybe a false negative. There are different ways to be a false negative: the object could have a function but not be complex enough (it could still be designed, but we cannot say), or we could be unable to understand the code or the function in the object.
d) So, let's say that we have defined a function explicitly. Then we measure the dFSCI for that function.
e) To do that, we must measure the functional (target) space and the search space. Here various possibilities can be considered to approximate these measures. For protein genes, the best way is to use the Durston method for protein families.
f) The ratio of the target space to the search space is the complexity of our dFSCI for that object and that function.

What does it express?
As I said, it expresses one of two things, which are more or less equivalent:

f1) The probability of obtaining that functional sequence from scratch in a purely random system: IOWs, for a protein gene, the probability of obtaining any sequence that produces a protein with that function in a system that builds up sequences just by adding nucleotides randomly.
f2) The probability of obtaining that functional sequence through a random walk. That is more relevant to biology, because the usual theory for genes is that they are derived from other, existing sequences through variation.

But the important point, which I have explicitly stated in my previous post, is that it expresses "the probability of getting a functional sequence with that function through ... a random walk starting from an unrelated state". Starting from an unrelated state. That's the important point. Because that's exactly what happens in biology. Basic protein domains are unrelated states. They are completely unrelated at the sequence level (you can easily verify that by going to the SCOP site). Each basic protein domain (there are at least 2000) has less than 10% homology with any other. Indeed, the less-than-10%-homology rule yields about 6000 unrelated domains. Moreover, they also have different structure and folding, and different functions. So the question is: how does a new domain emerge? In the example I cited about the human de novo gene, it seems to come from non-coding DNA. Many examples point to transposon activity. In no case is a functional, related precursor known. That's why dFSCI is a good measure of the functional information we have to explain.

Let's go to your argument. You say: "Consider that there exist point mutations that render genes nonfunctional, which I assume that you'd consider a decrease in dFSCI." No. That's wrong. We have two different objects. In A, I can define a function and measure dFSCI. In B, I cannot define a function, and dFSCI cannot be measured.
Anyway, I could measure the dFSCI implicit in a transition from B to A. That would indeed be of one aminoacid (about 4 bits). And so? If you have a system where you already have B, I will be glad to admit that the transition from B to A is of only 4 bits, and it is perfectly in the range of a random system. IOWs, the dFSCI of that specific transition is only 4 bits. But you have to already have B in the system. B is not unrelated to A. Indeed, you obtained B from A, and that is the only way you can obtain exactly B. So, can you see why your reasoning is wrong? You are not using the concept of dFSCI correctly. dFSCI tells us that we cannot obtain that object in a purely random system. It is absolutely trivial that we can obtain that object in a random system starting from an almost identical object. Is that a counter-argument to dFSCI and its meaning? Absolutely not.

For instance, if you can show that a basic protein domain could have originated from an unrelated state through an intermediate that is partially related and is naturally selectable (let's say from A to A1 to B, where A and B are unrelated, A1 is an intermediate between A and B, and A1 is naturally selectable), then we are no more interested in the total dFSCI of B. What we have to evaluate is the dFSCI of the transition from A to A1, and the dFSCI of the transition from A1 to B. The assumption is that A1 can be expanded, and its probabilistic resources multiplied. Therefore, if the two (or as many as you want) transitions have low dFSCI, and are in the range of the biological systems that are supposed to generate them, then the whole system can work. I hope I have been clear, but I would be happy to discuss any aspect of this that you don't agree with.

gpuccio
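The complexity measure being debated here reduces to a single formula: the negative log2 of the probability of hitting the functional target space by random search. A minimal sketch, where every number is hypothetical and chosen only for illustration (the 150-residue protein and its assumed target-space size are not from Durston's data):

```python
from math import log2

def dfsci_bits(target_space: float, search_space: float) -> float:
    # Complexity in bits: -log2 of the probability that a purely random
    # search over the search space lands in the functional target space.
    return -log2(target_space / search_space)

# Hypothetical protein of 150 residues: search space of 20**150 sequences.
# Suppose (purely for illustration) the functional target space were 20**120.
full = dfsci_bits(20.0**120, 20.0**150)
print(round(full, 1))  # 129.7 bits, i.e. 30 * log2(20)

# The B-to-A transition discussed above: a single amino-acid substitution,
# i.e. a target of one residue out of 20 at a known position.
step = dfsci_bits(1, 20)
print(round(step, 2))  # 4.32 bits, the "about 4 bits" in the text
```

Note how the measure depends on the starting state: the same sequence costs about 130 bits from scratch in this toy setup but only about 4 bits from a neighboring sequence, which is exactly the distinction drawn above between transitions and unrelated states.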
December 7, 2011 at 09:45 AM PDT
Evolution is learning and doesn’t violate any laws or probabilities.
What does it mean that something doesn't "violate any laws"? That's not a meaningful way to validate anything. Laws are an easy way to rule something out. I can't jump 100 feet into the air because of the law of gravity. But can I run 100 miles without stopping? Does it violate any laws? How is that a useful question? For what evolutionary transitions have you calculated the probabilities, and how, so that you can say it is probable rather than improbable? How you calculate or estimate that probability speaks volumes about how well you understand the process itself. As it is, every evolutionary narrative is far too vague to really determine how probable or improbable it is. That should be held against the theory as further evidence that it is vague and lacking any actual explanation. Instead it's used to support the theory. As long as it doesn't explain anything, the probability of the explanation can't even be estimated, so no one can say that it's improbable. Is there even a name for this fallacy, using the inability to explain or apply an idea as a defense against criticism of it? In any other case that would be a weakness, but it's darwinism's greatest strength.

ScottAndrews2
December 7, 2011 at 09:06 AM PDT
Gordon: Let's go to your specific evaluations:

- Abiogenesis happened: very strongly supported. The very early Earth (& before that the early universe) couldn't support anything like life, and it's here now, so it must've originated at some point. Perfectly true.

- Abiogenesis happened by X path: very weak at this point. In the first place, we don't have any fully worked out paths by which abiogenesis could have happened (though that doesn't mean there are no such paths). Second, there's very little evidence left to go on (there is some evidence, like possible molecules from an RNA world, but not much of it). True. But I have more faith that our increasing knowledge will give us more understanding. Personally, I believe that we will never find evidence of an RNA world, because I don't believe it ever existed. But we will see.

- Common ancestry (all — or at least many — organisms are related to each other): very strong for the "many" version, much weaker for "all". Basically, the evidence we have supports common ancestry; for parts of the family tree we have lots of evidence about, this is very strongly supported; for parts we have less evidence for, it's proportionally weaker. Examples of areas where it's weaker: species we haven't discovered yet (that's sorta the extreme case), the relations between archaea, eubacteria, and eukaryotes (the split happened so long ago there's relatively little evidence left), and relations among eubacteria (evidence is mostly limited to genetic similarity, and that has a low signal-to-noise ratio due to horizontal transfer). An example of an area where it's strong: mammals, including the one everyone cares about, humans (here we have a variety of sources of evidence, all pointing to pretty much the same history — some of the evidence is fairly strong on its own, but the principle of consilience means the whole is even stronger than the sum of its parts). Perfectly true. And so?
(You're probably going to ask if I've been reading Jonathan M's recent articles on the evidence for common ancestry, and the answer is no I haven't, and yes I probably should. But when I've dug into such things in the past, evolution's come up the winner.) I am not going to ask. The reason is simple. I do accept common descent (not necessarily universal) as the best explanation for what we observe. I must say that I am a little disappointed that you conflate here ID with the negation of common descent, as many do. I would have expected maybe some more explicit distinction, given the high level and pertinence of your discussion. So, I have to state it again: I, like many others in ID, accept common descent. Not for ideology. Not for any strange reason. But because I find the evidence for common descent the best explanation. Exactly the same reason why I accept ID, and not neo darwinism. I think that Behe has the same position. And many others here. Others differ. Those who don't accept common descent have their reasons, and they are welcome to express them. But, at present, I am not convinced by those reasons. For the nth time: ID theory and common descent are two separate issues, with some interaction, but no more than that.

- Mutation and selection contribute to evolution (i.e. have contributed to the differences between modern organisms and their ancestors): very very strong. Both are observed in the lab and in the wild, and given our understanding of genetics it's hard to see how either could not happen. Perfectly true. How and how much they contribute, obviously, is another matter.

- Mutation and selection are the only mechanisms of evolution: known to be false. Don't leave out genetic drift and gene flow, lots of special variants on the primary mechanisms (hypermutation, chromosome fission & fusion, meiotic drive, hitchhiking, the founder effect, etc), and a few outliers (e.g. endosymbiotic capture)… It is false.
But genetic drift and the rest are anyway random variation. So, if you include RV instead of RM in the neo darwinian algorithm (as I always do), all of these are included. The point is, NS is the only part in all these explanations that has a necessity rule in it. The rest is a variety of blind events that are necessarily devoid of any information about life and function. NS, at least, derives from the existence of the reproductive function, and so has some connection with function, at least with reproductive function. But, even if you count all the things you have listed, the statement remains false.

- The known mechanisms of evolution (see above) are the only ones in operation: sort-of assumed both for methodological and Occam's-razorish reasons, but probably false. We keep finding new mechanisms (or at least variants), and there's no reason to think that's going to suddenly stop tomorrow. It may sound nonsensical to assume something that's probably false, but it's actually a good assumption in certain ways. Yes and no. It is false, and I don't believe it is a good thing to assume it. The reason is simple. There is the design theory, which explains things much better, and which can be used to interpret data in a much more functional way. So I would say: those who believe in non-design theories must assume that this statement is true, and pursue their research accordingly. And those who accept the design theory should do the opposite: refute the statement, and pursue research accordingly. The important point is that data, however found, should be freely interpretable from both perspectives. Neither of these two positions should be considered a priori non-scientific. A simple analogy: suppose we make a low-accuracy measurement (say, weighing a hog with Robert Burns' method); we may be pretty sure the result is wrong, but since we don't know how far off or in which direction, it's nonetheless going to be our best estimate of the actual value.
Basically, it's reasonable to use it as a working assumption, and as a starting point for further investigation; just don't actually fall into believing it's true. This is good reasoning, which I would happily apply to the computation of dFSCI in biological objects :)

- Evolution happens by entirely naturalistic processes: weak-to-nonexistent. Scientific investigations assume this both for methodological reasons and for lack of evidence otherwise; but it's really just an assumption. Pretty much like the analogous abiogenesis one. We have lots of evidence for naturalistic mechanisms of evolution, but that hardly rules out non-naturalistic contributions. Not true. Well, it is true that evidence for purely naturalistic processes as an explanation for biological evolution is nonexistent :) . But it is not true that there is a lack of evidence otherwise. And it is not true, absolutely, that we have "lots of evidence for naturalistic mechanisms of evolution". What do you mean? We have evidence for common descent (already discussed) and we have evidence for a minimal role of RV and NS in microevolution (some forms of antibiotic resistance, and little else). Where is your "lots of evidence"?

- Mutation and selection are the most important mechanisms of evolution: legitimately controversial, as well as subjective (depending on what you consider important). Most DNA-sequence-level differences between organisms are neutral, so selection's irrelevant to their origin, so mutation and drift seem to be the major players at this level. I agree that the role of NS is minimal. And the role of RV is minimal too. IOWs, the whole theory is inconsistent. But if you look at differences in phenotype rather than genotype, the non-neutral differences are the ones that matter, and hence selection plays a much larger role.
How much larger, and whether (/how much) it outweighs other factors, is something scientists argue about… All reasonings about phenotype are irrelevant if we cannot describe the molecular basis of variation. Only molecular reasoning allows us to discuss the nature and complexity of the observed variation, and therefore to compare causal explanations. That's why I never discuss fossils, for example (another good reason could be that I don't have the competence :) ). ID and neo darwinism can be compared only at the molecular level. There, and only there, the true cause of variation can be analyzed.

- The known mechanisms of evolution (see above) are sufficient to account for the properties of modern organisms: weak, but in the absence of counter-evidence, reasonable to assume. But there is a lot of counter-evidence. Each basic protein domain is absolute counter-evidence! And we have thousands of them in the proteome. And a lot of other things, obviously, from the Cambrian explosion of body plans to OOL, from the genetic code to the huge Irreducible Complexity of biological machines, and so on, and so on. (I know, this is the one where you hit the ceiling. Please wait, and hear me out first.) Ouch! :) Well, I am listening... There's been a lot of effort by both creationists and ID supporters to find & describe properties of organisms that known evolutionary mechanisms couldn't produce, and (in my opinion and that of the mainstream scientific community) they haven't found any. Completely false, but well, you are entitled to your opinions. Since you're particularly interested in information, I'll discuss that. And in the next post, information at last!

gpuccio
December 7, 2011 at 08:49 AM PDT
Eric, how could you forget about one very powerful thing, trial and error?! No designer at all, just apparent design. We all have a collective hallucination.

Eugene S
December 7, 2011 at 08:10 AM PDT
Gordon: Here it becomes easier. I haven't checked into whether he's right about that, but the thing I found disturbing was that he didn't try to construct an ID explanation either. I haven't read the articles, but I agree that a design explanation must be proposed if we refute the classical explanation. If I were an IDist scientist looking at that pattern, I'd be trying to figure out where the pattern could've come from: is it a design goal in and of itself? Is it a side effect of some other goal, and if so what goal and how does the side effect arise? Is it a result of some feature of the design process or how it was implemented? Think of things that could've caused the pattern, and (if possible) figure out ways to test them. Unless you can do that, ID can't claim this as a point in its favor. I agree. ID can and must do exactly that. I try, for what I can, to always have that kind of approach. The problem is, the ID approach to biological issues is just at the beginning. The resources are really minimal, and the resistance of the academic world is huge and dogmatic and very, very intolerant. However, people like Behe, Axe, Gauger, Durston and others are doing splendid work, in the midst of all difficulties. At present, however, most of the experimental help for ID comes from the research made by darwinists, even against their intentions. Luckily, data are always data, whoever provides them. You can find an example of ID reasoning on biological data in my previous post. An analogy is not a scientific theory. It might be the basis for a theory, but the bare analogy? No. The analogy is the basis for the scientific theory of ID. It serves to establish the hypothesis that functional information has been added to the biological world whenever we can witness the emergence of new dFSCI. A lot of work obviously remains to verify and detail this hypothesis with data. Moreover, I have much more faith in inferences by analogy than you seem to have.
All our human knowledge, including science, is based on a shared inference by analogy: the inference that other human beings are conscious exactly as we are. That is a mere inference by analogy, shared by all (except maybe solipsists), and yet it is the foundation of all our cognition of the outer world and all our shared knowledge. Consider some opportunities for your theory to make predictions: if we have an object we know was human-designed, does your theory make any predictions about it? No, the theory essentially says it might have dFSCI (and indeed, some human-designed objects have dFSCI, and some don't). How about organisms? Again, the theory says they might have dFSCI. How about things other than organisms that aren't human-designed? Well, since one of the possibilities on the table is that the entire universe and everything in it were designed, everything might have dFSCI. Here maybe there is some confusion. Let's clarify. dFSCI is a formal property often observed in human-designed things. If we know that an object is designed (because we have evidence of the design process), it can still have dFSCI or not (which can be objectively verified on the object itself). dFSCI is never observed in objects that we know for certain were not designed by humans, with the only exception of biological objects. If we apply design detection by looking for dFSCI in objects that could be designed by humans, and if we can verify afterwards, we can see that dFSCI is a reliable tool for design detection, because it gives no false positives, and a lot of false negatives (if the threshold of complexity is taken appropriately). Applying the search for dFSCI to the biological world, we find a lot of objects exhibiting dFSCI. So, we express the hypothesis that those objects have been designed. That is the simple reasoning. All the rest comes after, and consists in analyzing existing data, and new data, in the light of that kind of explanation.
It implies researching the possible ways of implementation of functional information in natural history, with specific reference to the when and why. It implies better defining the relationship between the functional protein space and the search space. It implies trying to understand and quantify the digital complexity of regulation networks, of body plans, and of many other things that are probably not well enough understood at present. And so on. You say: "Well, since one of the possibilities on the table is that the entire universe and everything in it were designed, everything might have dFSCI." No. That's wrong. dFSCI is observed in specific objects, and is objectively measured in them. If they have it, they have it. Otherwise, it is not observed. I repeat: no object in the physical universe that we are sure was not designed by humans (or aliens, as far as we can know) exhibits dFSCI, except for biological objects. The fine-tuning argument for the whole universe is a valid form of the cosmological argument for God, but IMO it remains in part philosophical, because when you consider the whole universe as an object it is difficult to remain merely empirical. And anyway, while it is an argument for design, it is formally completely different from the argument for biological design, which is completely empirical and based on dFSCI. You can improve the situation a bit by adding subsidiary hypotheses. For instance, if you add the hypothesis that organisms are the only designed-by-other-than-human things, you can get some predictivity. That's exactly my position. But not much, because properly testing a theory requires that you test it against evidence independent of the evidence that led to its formulation. I'm pretty sure your theory derives from considering a wide variety of objects and their properties, which doesn't leave much room to test it against new objects (especially objects that aren't basically more of the same). I don't agree.
The tool dFSCI can be tested blindly against any object in the two categories: designed by humans, or certainly not designed by humans (but not biological). It will always win (in the sense of not giving false positives; as already said, it will give a lot of false negatives). And there is a reason for that. The reason is that the process of design is the source of dFSCI. IOWs, it is the conscious representation of reality with intuition of meaning, and the conscious representation of function and purpose, that allow the generation of what Abel calls semiotic information, and in particular prescriptive information, and which in practice is confirmed by the observation of dFSCI. Our hypothesis is only that some designer, probably not human, can have the same kind of conscious representations and intent, and therefore input functional information into biological beings. Now, we must not create confusion: one thing is to test the dFSCI tool for design detection. That can be done as much as we want. And it can also easily be falsified. Producing objects in a non-designed system that exhibit dFSCI would obviously falsify the whole theory of design detection. The design inference for biological beings, based on the concept of dFSCI, is another thing. It is a more general theory that competes with the only existing non-design theory for biological information, neo darwinism. The two explanations must be compared according to how well they explain existing data, and how well they predict and explain new data. The example of ether is very revealing. Indeed, as long as it was the best explanation, it was considered such. When data (or a better reasoning) made it unnecessary, it was discarded. That's perfectly fine.
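The detection procedure described in this exchange (define a function explicitly, measure the complexity, apply a threshold tuned so that false positives do not occur) can be written as a simple decision rule. The 150-bit threshold below is an assumption chosen purely for illustration; the thread does not fix a specific cutoff:

```python
from math import log2

THRESHOLD_BITS = 150.0  # assumed cutoff, for illustration only

def design_inference(function_defined: bool, target: float, search: float,
                     threshold: float = THRESHOLD_BITS) -> bool:
    # With no explicitly defined function, the result is a negative
    # by construction (possibly a false negative).
    if not function_defined:
        return False
    # Complexity in bits from the target/search ratio.
    bits = -log2(target / search)
    # Infer design only above the threshold, trading false negatives
    # for (claimed) zero false positives.
    return bits > threshold

print(design_inference(False, 1, 2**200))    # False: no function defined
print(design_inference(True, 2**10, 2**60))  # False: 50 bits, below threshold
print(design_inference(True, 1, 2**200))     # True: 200 bits, above threshold
```

The asymmetry argued for above falls straight out of the rule: anything without a definable function, or below the threshold, is reported as negative even if it happens to be designed, while a positive is only ever issued at high complexity.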
So, let's say that we in ID do believe that at present ID theory is the best explanation for the origin of biological information, that it should be considered a legitimate scientific approach to that problem, and that it should have the respect, resources, and attention that it deserves. Then, we will see. Personally, I am very sure that ID will fare much better than ether :) . More in the next post.

gpuccio
December 7, 2011 at 08:09 AM PDT
And as if the practice of Science had a rich sense of humor, it turns out that this very system of information transfer (the very heart of it all) is the most prolific form of irreducible complexity in the known universe. It’s logically undeniable.
It hasn't been demonstrated to be irreducible, and your argument is simply a restatement of the fact that we don't know the history of the origin of life. You are making assumptions about the outcome of research in progress.

Petrushka
December 7, 2011 at 06:01 AM PDT
Gordon: Again I want to thank you for your reply, and from the heart. I deeply appreciate your contribution, and it was a pleasure for me to read it. Answering it is not too difficult, because I agree with many of the things you say, so I will just state where I agree, and concentrate on the points where I differ. I agree with you about your approach to science and methodology. I am a little bit less convinced about the prominent value of predictions. They are an important part of scientific support for a theory, but not the only one. IMO, two things are equally important in giving credit to a scientific theory: that it is the best explanation for observed things, and that it can make right predictions about things still to be observed. I am not trying to underemphasize the importance of the second point: I do believe it is fundamental for ID. But it is also true that, in dealing with the explanation of historical events, like in evolution problems, one cannot expect the same role for predictions as, say, in physics. That is true both for ID and neo darwinism. That said, I do believe that ID and neo darwinism imply very different predictions about biological information, and that goes far beyond the point, often made here, of junk DNA. Let's put it this way: we are now comparing two existing theories about the origin of biological information: a) one (ID) states that whenever a non-trivial rise in dFSCI is observed, there must have been an input of functional information in the form of "switch configuration" (to use Abel's term) from some conscious agent; b) the other (neo darwinism) states that in all those cases an explanation based on RV + NS is feasible. Please note that those are IMO the only scientific explanations currently available.
For epistemological reasons, I don't accept the argument, so often made here by darwinists, that even if their theory does not work, the design theory must be refuted just the same, because some other naturalistic explanation in principle could be found. That is nonsense. Science works with available empirical explanations, not with logical possibilities. Your example of ether is perfect for that: ether was accepted as the best explanation, and that was methodologically correct, until evidence favoured a better, explicit, satisfying explanation. Science is all about competing explanations, not about ideological positions a priori (such as "the explanation must be materialistic"). So, we are comparing these two specific theories. Do they engender different predictions? Of course they do. I will try to describe two very important ones. 1) Neo darwinism states that the functional organization in biological beings is not what it appears: IOWs, not designed, not purposeful. That biological beings appear designed and purposeful I will take for granted, as even Dawkins agrees on that, but if you disagree we can discuss that point. So, according to neo darwinism, the appearance of functional teleology is only a byproduct of a blind mechanism, RV + NS, which "simulates" teleology in the end. Indeed, as ID well shows, the neo darwinian mechanism is completely unable to explain that appearance of function and purpose as we know it today. But that's not the point here. My point is: the complexity of biological beings as we know it today is one thing. The complexity of biological beings as we will understand it in, say, ten years from now, is another thing. ID believes that the functional complexity in biological beings is the product of an intelligent designer. Moreover, a simple analysis of the level of function as we understand it now easily shows that the designer is "much smarter" than we are (a simple, empirical consideration). 
The empirical evaluation of how our understanding of biological functions has increased in the last ten years (often in completely unexpected ways) can therefore justify the following prediction: in ten years from now, we will find huge new levels of functional complexity in living beings. Now, that is a simple prediction. It can happen or not. I strongly believe it will happen. Is that a prediction compatible with the neo darwinian model? Well, darwinists will certainly say it is. But is it really? Let's see. For that model, functional organization is just the byproduct of a blind algorithm, and chance is largely the main factor in building new information. OK, NS is not chance, it is necessity, but it has a really indirect relationship with functional organization, and you can probably agree that now, already, its role in explaining things is really stretched (a very kind euphemism) in classical neo darwinisms (and almost non existing in alternative forms). So, why should neo darwinism predict ever increasing and ever deeper levels of "apparently" teleological organization? It cannot even begin to explain what we already know; its best hope is really that we will as soon as possible "hit the wall" of this bogus functional organization, and concentrate on explaining what we have already accumulated. So, I make a prediction: in the next ten years, we will discover tons of new unexpected levels of functional organization in biological beings. And that is perfectly compatible with the design hypothesis, and totally against the neo darwinian explanation. 2) But let's go to something more specific. How and when does new biological information appear? Here the two models differ very much. The neo darwinian model requires that it appears gradually at the genome level, and that the probability barriers implicit in RV be very often "shunted" by the necessity mechanism of NS. Do we agree on that? The ID model is much more flexible. 
The main implementation modalities are: direct writing (top down), or some algorithm based on random variation + intelligent selection (bottom up), or a mix of the two. In direct writing, graduality is not required (although it is still possible). In the second scenario, some graduality is expected, but the times can be greatly accelerated compared to the neo darwinian scenario, and above all there is no need for naturally selectable intermediates (intelligent selection can act in very different ways). So, as our knowledge about when and how new information appears in natural history grows, we can certainly discriminate better between those two proposed scenarios. Let me make just one example. A paper has many times been cited here by darwinists which details the possible emergence of a new functional brain protein in humans. Please remember that I am citing this paper only for methodological reasons. I cannot give any final judgement about the particular issue, because the protein has not yet been directly isolated, and its specific possible biochemical function is not known. But both those points can be quickly solved by research. The point is that, according to the paper, that protein (if it really exists) is 184 AAs long. Therefore, if it is functional, and if it is a new basic protein domain (it should be, because the sequence has no homology with all known proteins), it is a well definite "de novo" gene, appearing for the first time in humans. Moreover, because of its length, it is absolutely reasonable to assume that its dFSCI is high, even if that would require further research about the sequence function relationship in that particular protein (the Durston method cannot be applied to a single gene). But that research can be done. It is derived from part of a non coding DNA gene, emerging for the first time in primates. Four final mutations in humans transform the non coding gene into an ORF. 
In particular, one mutation eliminates an internal stop codon that made the ORF impossible. Therefore, the ORF did not exist in primates, and it was never translated before its activation in humans. Therefore we have the following scenario: a) A non coding sequence appears for the first time in primates. It changes, mutates, but is never translated. Therefore, no NS of the results is possible at this level. b) Then, in humans, 4 final mutations (one of them a frameshift mutation, another one a stop codon removal) "suddenly" activate an ORF in that sequence, and the ORF corresponds to a new protein, with a completely new fold and function. Do you agree that such an empirical scenario is best explained by the design theory (in particular, I would say, by the direct writing variety), rather than by the RV + NS theory? That's what I mean when I say that the two theories have very different implications (IOWs, they make different predictions), and that our growing empirical knowledge will allow us to check what predictions are verified or falsified. I stop here for now. The rest in another post.gpuccio
December 7, 2011 02:50 AM PDT
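gpuccio's de novo gene scenario above hinges on a single point mutation removing an internal stop codon and thereby opening a full reading frame. A minimal sketch of that mechanism (the sequences and the truncated codon table are invented purely for illustration, not taken from the paper under discussion):

```python
# Toy illustration: a premature stop codon truncates translation;
# a single point mutation (TAA -> CAA) opens the full reading frame.
# Sequences and this truncated codon table are made up for illustration.

CODON_TABLE = {
    "ATG": "M", "TGG": "W", "TTT": "F", "CAA": "Q",
    "GGT": "G", "GCT": "A", "TAA": "*", "TAG": "*", "TGA": "*",
}

def translate(dna):
    """Translate codon by codon until a stop codon ('*') or end of sequence."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "X")
        if aa == "*":          # internal stop: translation halts here
            break
        protein.append(aa)
    return "".join(protein)

before = "ATGTTTTAAGGTGCTTGG"   # internal TAA stop: product is truncated
after  = "ATGTTTCAAGGTGCTTGG"   # one T->C substitution removes the stop

print(translate(before))  # MF      (truncated at the internal stop)
print(translate(after))   # MFQGAW  (full-length product)
```

The point of the toy is only that a one-base change can flip a sequence from "never translated past the stop" to "fully translated", which is the structural feature gpuccio's scenario turns on.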
Petrushka, As regards small probabilities, compare 1 out of 10^3 vs. 1 out of 10^70. Can you relate a practically occurring event on the planet to the latter figure? Can you see the difference? Incremental change, hypercycles and other palmistry are good for books, not for reality.Eugene S
December 7, 2011 02:03 AM PDT
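To put Eugene S's two probabilities side by side: even granting a deliberately generous budget of 10^45 trial events (an assumed figure for illustration, not a claim about Earth's actual history), the expected number of successes differs by dozens of orders of magnitude. A rough back-of-envelope sketch:

```python
from math import log10

# Assumed, deliberately generous trial budget -- illustrative only.
trials = 10.0 ** 45

for p in (1e-3, 1e-70):
    expected_hits = trials * p
    print(f"p = {p:.0e}: expected hits ~ 10^{log10(expected_hits):.0f}")
# p = 1e-03: expected hits ~ 10^42
# p = 1e-70: expected hits ~ 10^-25
```

The first probability is hit astronomically often under any reasonable trial budget; the second is expected essentially never, which is the asymmetry the comment is pointing at.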
Thank you very much Eric and Biped for taking the time to read my rather rambling post and to then give a meaningful response. Of course "purpose" is another hazily defined concept, and it turns out we were indeed talking at cross purposes, Eric. You point out that "purpose" is used to mean the function of any given system, while I was using it in a more teleological sense. Leaving that aside, as I understand it the logic goes, "If DNA does in fact have a designer, it is logical to assume that that designer would not have left whole chunks of genome with no function." Playing Devil's advocate, I still have trouble seeing why that assumption is any different to the "no designer worth his salt" argument that is so often wielded against ID, which, it is effectively argued, is fundamentally philosophical in nature and therefore scientifically moot. Also I still feel that my fundamental point remains unaddressed. Maybe I didn't explain myself very clearly. As I see it, we are talking about two concepts, "that which was clearly designed" (X) and "the designer(s) that must exist" (Y). ID is very good at defining X: "anything with functional complex specified information and/or irreducible complexity" or something along those lines. However I have yet to see a clear-cut, restrictive definition of Y tying together beaver, human, alien and deity. "Conscious and/or intelligent agent" just isn't specific enough, as I hope I have been able to explain. I suspect that coming up with such a definition might help. I'll make my own ham-fisted attempt, just to explain what I mean: "An intelligent agent is a discrete entity that autonomically acts to disrupt the operation of known chemical and physical laws." Has anybody tried to come up with such a definition before?englishmaninistanbul
December 7, 2011 12:00 AM PDT
First, let me apologize for both the lateness and length of this message. I write slowly, and I kept thinking of more things I wanted to say, and ... it got a bit out of hand. (And for KF: I'll try to reply about thermodynamics tomorrow.) gpuccio:
Let’s go to the details. Thank you from my heart for admitting that my reasoning is not circular. It should be obvious, once it is clarified, but my experience here, even with people “from the other side” that I really respect and like, has been different. So, thank you for that.
No problem. I agree that people often get too wrapped up in defending their positions to admit their own biases and mistakes. I can hardly claim to be bias-free, but I do try to avoid letting my biases and such control me.
[earlier] Maybe, if you understood better the details, you could change your mind, just out of intellectual honesty.
If you can convince me that the details justify your conclusions, then yes, I will change my mind. A bit of a warning, though: I've been following the Creation/Evolution/ID controversy for around 25 years, and have dug in detail into various topics that happen to have caught my interest. So far, for everything I've investigated, the evolution side has come out ahead or at least even. As I get older and lazier, it's harder and harder to catch my interest enough to get me to really do the research necessary to have a properly informed opinion on a topic. I've been meaning for a while to write up some notes on all the things people get wrong about thermodynamics (this is a topic that tends to get messed up by almost everyone, no matter what side they're on), and a critique of Dembski's CSI work (I wasn't impressed by Elsberry & Shallit's recent paper -- they were still calling out problems fixed in the 2005 version of CSI, but also missed new problems introduced in that version). Also, I really need to read more of Dembski's work on NFLT (I know the basic idea, but I haven't gone into the details). Net result: you'll have to seriously grab my attention to get me off my duff and into the details of the bio side of things. That said, most of the rest of your message was about a general overview of the current argument for ID (or your take on it anyway); I'm fairly familiar with that, so I think I can reply coherently. I may get a bit ranty, though: I'm not at all happy with design-centric ID (the "design detection" approach), because I think it's really a bit of a cop-out. To explain why, let me take a bit of your discussion out of order:
The only reason why so called scientists refute that explanation a priori is that they are committed by faith to materialistic reductionism, that cannot tolerate even the possibility that conscious agents may exist who can have designed biological information.
I think you're vastly overestimating this commitment -- a few scientists are committed to materialism, but most just do what works. Historically, materialistic reductionism has produced successful theories, and nonmaterialism pretty much hasn't. As a result, the possibility of nonmaterialistic science is generally dismissed. The attitude is a little like the general attitude about perpetual motion machines: present a scientist with the claim that you've come up with either one and the response will be along the lines of "good grief, not again -- I have real work to do, please go away." In either case, if you actually have one (perpetual motion machine or viable nonmaterialistic scientific theory), the solution isn't to rail against bias; it's to build the thing, run it, and show people that it does work. If you can do this, you will gradually get people to take you seriously, and modify their ideas of what's possible. Consider science around 1900: physics was considered the definition of what good science should be like: not merely materialistic, but mechanistic and deterministic. Then quantum mechanics came along; it violated the rules about what a good theory should be like, but made successful predictions that no other theory did. It worked. And because of that, it changed the rules. If you can build a theory that makes good predictions, and works by the other measures of a successful scientific theory, I'm confident you can change the rules as well. But I don't think it's possible to do that under the constraint of design-centric ID. Think about the other fields that're sometimes given as successful scientific theories of design: anthropology and forensics. In both cases, the real meat of the science isn't in detecting designed objects, but in figuring out who did it, why, how, etc. Furthermore, in both cases the "design detection" is actively driven by theories about the designers. 
Anthropologists don't decide that something is designed just because it doesn't match known non-intelligent sources, but because it does match known and understood human sources, goals, techniques, etc. Design-centric ID can at most use a generic description of what designers might do, like: sometimes they produce things that have some dFSCI. That doesn't really give much scope for any sort of detailed predictions. Let me give you an example of this limitation: last year, Richard Sternberg posted a series of articles (1, 2, 3) about patterns in the distribution of LINEs and SINEs in the genomes of mice and rats. His main point is that evolution fails to provide an explanation for the patterns. I haven't checked into whether he's right about that, but the thing I found disturbing was that he didn't try to construct an ID explanation either. If I were an IDist scientist looking at that pattern, I'd be trying to figure out where the pattern could've come from: is it a design goal in and of itself? Is it a side effect of some other goal, and if so what goal and how does the side effect arise? Is it a result of some feature of the design process or how it was implemented? Think of things that could've caused the pattern, and (if possible) figure out ways to test them. Unless you can do that, ID can't claim this as a point in its favor: the best you can do is say that neither evolution nor ID has an explanation. And I don't see any way to get past that without going past the design-centric ID paradigm. Ok, I think I'm done ranting. At least for the moment.
The “argument from ignorance” part deserves a further clarification.
I was specifically referring to the argument that at-least-partially-selectable paths don't exist, not to ID overall; however, let's proceed.
ID is not an argument from ignorance. It is, indeed, a theory made at least of two fundamental parts: a) The positive part: a1) definition of a formal property (for instance, dFSCI), frequently observed in objects that are certainly designed by conscious agents (in this case, humans), and empirically never observed in objects that are certainly not designed by conscious observers, but are rather the output of random or necessity, or mixed systems. I strongly believe that dFSCI completely satisfies that purely empirical definition. a2) demonstration that dFSCI is hugely observed in the set of objects we are discussing (biological objects, and particularly the genome), whose origin is the object of controversy. a3) inference to design by a conscious agent as the best explanation available. This is an inference by analogy, a very strong and convincing one, a positive argument from all points of view. It is also a perfectly correct scientific explanation, wholly empirical and appropriate to the problem we are trying to solve.
An analogy is not a scientific theory. It might be the basis for a theory, but the bare analogy? No. The most important feature of a scientific theory (or hypothesis) is its testability: it has to have consequences (predictions) that can be checked against reality. Tests that match reality provide support for the theory. As you've described the positive side of ID, I don't see any way to derive any testable predictions from it. (Note: the above is, of course, a bit of an oversimplification. For one thing, it's nearly impossible to derive testable predictions from just one theory. Generally, a prediction derives from several theories, subsidiary hypotheses, etc; and if the prediction fails, it can be tricky to figure out which theory should be considered falsified. The recent faster-than-light neutrino results are a good example: they falsify something, but whether they've falsified relativity, or the rule that causes precede effects, or some part of their understanding of the timing or distance measurements, or something about the operation of their detectors, or...) Consider some opportunities for your theory to make predictions: if we have an object we know was human-designed, does your theory make any predictions about it? No, the theory essentially says it might have dFSCI (and indeed, some human-designed objects have dFSCI, and some don't). How about organisms? Again, the theory says they might have dFSCI. How about things other than organisms that aren't human-designed? Well, since one of the possibilities on the table is that the entire universe and everything in it were designed, everything might have dFSCI. You can improve the situation a bit by adding subsidiary hypotheses. For instance, if you add the hypothesis that organisms are the only designed-by-other-than-human things, you can get some predictivity. But not much, because properly testing a theory requires that you test it against evidence independent of the evidence that led to its formulation. 
I'm pretty sure your theory derives from considering a wide variety of objects and their properties, which doesn't leave much room to test it against new objects (especially, objects that aren't basically more of the same). Let me give you an example where a similar argument from analogy was used in science: when scientists studied waves (e.g. sound), they found waves need some sort of medium to propagate through. Sound, for example, cannot propagate through a vacuum. When they discovered that light was a type of wave, they assumed that it similarly needed a medium, and since light can propagate through a vacuum, they hypothesised that vacuum wasn't really empty, but filled with "luminiferous aether" that carried the light waves. But they didn't stop there, they did what I was griping about Sternberg not doing, and design-centric ID not allowing: they developed (and tested and refined and...) an extensive theory about exactly how this aether behaved. It was actually quite strongly supported. Then came the Michelson–Morley experiment. It was an attempt to measure the motion of Earth relative to the aether by looking for changes in the speed of light, and it didn't find any. This is usually described as having refuted the aether theory, but what it actually did is much subtler: it didn't show that aether was nonexistent, it showed that it was irrelevant. If you include relativistic corrections in the distance and timing elements of MM's experiment (i.e. correct the subsidiary hypotheses), their result is entirely consistent with a stationary aether. But the undetectability of aether drift left aether with no real (=contributing to predictions) role in its own theory. Since the predictive parts of the aether theory could be reformulated without reference to aether (instead, they're described in terms of abstract electric and magnetic fields), the only reason to have the aether in the theory was to satisfy the analogy. That wasn't enough reason; it was jettisoned. 
I'll draw two lessons for ID from the example of aether: first, your hypotheses and theories really need to be able to make predictions. Second, if you don't want the designer(s) to be jettisoned from ID as irrelevant, he/she/it/they have to be active participants in the theory, with properties that contribute to its predictions. Going back to my rant, I don't see a way to do either of these within the constraints of design-centric ID. (As usual, I've oversimplified the aether story a bit. Most significantly, irrelevance wasn't the only problem with aether: its properties wound up not making much sense for a physical thing, so it was getting a bit implausible anyway.) I also think the analogy itself is rather weak, but this is a relatively unimportant point and I've gone on long enough...
b) The second part is the falsification of the currently accepted theory of neo darwinism. That is necessary too, because one of the pillars of ID reasoning is that dFSCI is observed in biological information, IOWs that the functional information there is not explained by any known chance, necessity, or mixed mechanism. As neo darwinism pretends to do exactly that, it is the duty of ID to demonstrate that that theory is wrong, illogical and empirically unsupported. And that's exactly what ID does.
As usual, I disagree with your assessment of the situation. I don't want to go on too long here, but let me just break down what I see as the level of evidentiary support for various parts of evolutionary theory (and then I'll discuss information at the end):
- Abiogenesis happened: very strongly supported. The very early Earth (& before that the early universe) couldn't support anything like life, and it's here now, so it must've originated at some point.
- Abiogenesis happened by X path: very weak at this point. In the first place, we don't have any fully worked out paths by which abiogenesis could have happened (though that doesn't mean there are no such paths). Second, there's very little evidence left to go on (there is some evidence, like possible molecules from an RNA world, but not much of it).
- Abiogenesis happened by an entirely naturalistic process: weak-to-nonexistent. Scientific investigations assume this both for methodological reasons and for lack of evidence otherwise; but it's really just an assumption. Furthermore, I don't really see any way to put it in testable form, meaning that I don't know if it can be supported by evidence.
- Common ancestry (all -- or at least many -- organisms are related to each other): very strong for the "many" version, much weaker for "all". Basically, the evidence we have supports common ancestry; for parts of the family tree we have lots of evidence about, this is very strongly supported; for parts we have less evidence for, it's proportionally weaker. Examples of areas where it's weaker: species we haven't discovered yet (that's sorta the extreme case), the relations between archaea, eubacteria, and eukaryotes (the split happened so long ago there's relatively little evidence left), and relations among eubacteria (evidence is mostly limited to genetic similarity, and that has a low signal-to-noise ratio due to horizontal transfer). 
An example of an area where it's strong: mammals, including the one everyone cares about, humans (here we have a variety of sources of evidence, all pointing to pretty much the same history -- some of the evidence is fairly strong on its own, but the principle of consilience means the whole is even stronger than the sum of its parts). (You're probably going to ask if I've been reading Jonathan M's recent articles on the evidence for common ancestry, and the answer is no I haven't, and yes I probably should. But when I've dug into such things in the past, evolution's come up the winner.)
- Mutation and selection contribute to evolution (i.e. have contributed to the differences between modern organisms and their ancestors): very very strong. Both are observed in the lab and in the wild, and given our understanding of genetics it's hard to see how either could not happen.
- Mutation and selection are the only mechanisms of evolution: known to be false. Don't leave out genetic drift and gene flow, lots of special variants on the primary mechanisms (hypermutation, chromosome fission & fusion, meiotic drive, hitchhiking, the founder effect, etc), and a few outliers (e.g. endosymbiotic capture)...
- The known mechanisms of evolution (see above) are the only ones in operation: sort-of assumed, both for methodological and Occam's-razorish reasons, but probably false. We keep finding new mechanisms (or at least variants), and there's no reason to think that's going to suddenly stop tomorrow. It may sound nonsensical to assume something that's probably false, but it's actually a good assumption in certain ways. Essentially, since we don't know what other mechanisms there are, we don't know how to take them into account, and any attempt to do so runs the risk of throwing us even further off. 
A simple analogy: suppose we make a low-accuracy measurement (say, weighing a hog with Robert Burns' method); we may be pretty sure the result is wrong, but since we don't know how far off or in which direction, it's nonetheless going to be our best estimate of the actual value. Basically, it's reasonable to use it as a working assumption, and as a starting point for further investigation, just don't actually fall into believing it's true.
- Evolution happens by entirely naturalistic processes: weak-to-nonexistent. Scientific investigations assume this both for methodological reasons and for lack of evidence otherwise; but it's really just an assumption. Pretty much like the analogous abiogenesis one. We have lots of evidence for naturalistic mechanisms of evolution, but that hardly rules out non-naturalistic contributions.
- Mutation and selection are the most important mechanisms of evolution: legitimately controversial, as well as subjective (depending on what you consider important). Most DNA-sequence-level differences between organisms are neutral, so selection's irrelevant to their origin, so mutation and drift seem to be the major players at this level. But if you look at differences in phenotype rather than genotype, the non-neutral differences are the ones that matter, and hence selection plays a much larger role. How much larger, and whether (/how much) it outweighs other factors is something scientists argue about...
- The known mechanisms of evolution (see above) are sufficient to account for the properties of modern organisms: weak, but in the absence of counter-evidence, reasonable to assume. (I know, this is the one where you hit the ceiling. Please wait, and hear me out first.) There's been a lot of effort by both creationists and ID supporters to find & describe properties of organisms that known evolutionary mechanisms couldn't produce, and (in my opinion and that of the mainstream scientific community) they haven't found any. 
Since you're particularly interested in information, I'll discuss that. Antievolutionists have been trying to use information to refute evolution for a long time. I think A. E. Wilder-Smith may've been the first to seriously pursue this line of reasoning. He basically claimed (IIRC, it's been a while since I read him) that information theory says that information can only come from intelligent sources. Unfortunately for him, both the statistical and algorithmic theories of information (the two primary theories) say almost the opposite: in the statistical theory, information sources are generally assumed to be random (even when they really aren't, because the difference between intelligently-created and random information doesn't matter to the aspects it studies). The algorithmic theory doesn't generally deal explicitly with the creation of information, but random processes are certainly capable of producing Kolmogorov complexity (AIT's measure of information). Some later, more clueful antievolutionists (mainly Dembski) realized that the definitions from the standard theories of information didn't give them any basis to refute evolution, and so set out to create their own theories and definitions that could provide a framework they could use. To understand the problem they faced a little better, consider that novels, weather reports, blueprints, and lottery numbers can all reasonably be considered information, but they're all very different types of information. Evolution can clearly create some of these types of information: mutations can create lottery-number-style information, and selection can create weather-report-style information. It cannot (at least as far as I can see) create anything like a novel, but since there doesn't seem to be any of that sort of information in the genome, that doesn't mean anything. The more interesting question is something more like blueprint-style information. 
If it (or something similar) could be properly defined (as opposed to what I've done) and it can be shown that evolution has no way to add it to the genome, then you'd have a case. There've been a number of attempts at this over the years. Dembski's first, complex specified information, takes what I'd call a rationalist approach: it tries to arrange the definition of information so that it rules out (well, actually just limits) the production of information by natural means. The early versions of CSI had some pretty serious problems; later definitions cleared things up a great deal, but left at least one showstopper: in order to prove that something had CSI, you had to first show that natural processes had a very small probability of producing that thing. Essentially, CSI can't be used to prove that evolution doesn't work, because in order to show that you'd have to already have proven that evolution doesn't work. (I'll skip over Dembski's more recent work on NFLT and active information, because I haven't read enough of it to really comment knowledgeably.) The definition you're using of dFSCI takes a very different approach to ruling out natural production. Where Dembski is rationalist (mathematical derivations of why natural processes can't produce CSI), you use an empirical argument (natural processes have never been observed to produce dFSCI). I discussed some general problems with this approach earlier, but let me take a closer look at this particular argument. I think it's pretty clear that evolutionary processes can produce increases in dFSCI, at least if your measure of dFSCI is sufficiently well-behaved. Consider that there exist point mutations that render genes nonfunctional, which I assume that you'd consider a decrease in dFSCI. Point mutations are essentially reversible, meaning that if genome A can be turned into genome B by a single point mutation, B can also be turned into A by a single point mutation. 
Therefore, the existence of point mutations that decrease dFSCI automatically implies the existence of point mutations that increase dFSCI. "But," I hear you say, "all observed mutations that change dFSCI decrease it." I'd go look up counterexamples, but it's really a moot issue: it doesn't really matter if dFSCI-increasing mutations are observed, because logically they must exist. If they are not observed, that just means that the organisms' starting genomes are at local maxima of dFSCI, and an inability to go "up" from where they are does not imply the inability to get "up" to where they are. This is rather abstract, so let me give an analogy that might clarify what I'm talking about. Consider a bunch of people milling around the top of a hill. They have very short memories, and are arguing about how they got to the top of the hill. The walkists think they just walked up it, but the helicopterists think there must've been some sort of aerial transportation involved. To prove their case, the helicopterists point out that walking is always observed to take them down, not up, so clearly it cannot be how they got to the top of the hill. Their argument is wrong for exactly the same reason yours (at least, the one I attribute to you) is: as long as walking is symmetric (like point mutations), the ability to walk down automatically implies the ability to walk up. The reason they cannot walk up from the top of the hill is not because of a limitation of walking, but because of a special property of their starting position. And (most important) that special property doesn't mean that their position cannot be reached by walking up. Note that if walking was not symmetric -- for example, if there were slopes too steep to climb -- my argument would fail. But point mutations are reversible, meaning that at least for those types of mutations, the ability to decrease dFSCI implies the ability to increase dFSCI. 
Insertions and deletions are not generally symmetric, but they are indirectly reversible: any insertion can be reversed by a corresponding deletion, and any deletion can be reversed by a sequence duplication followed by a bunch of point mutations. There are some ways your argument could escape this problem: for one, you could claim that all genomes have equal dFSCI, so that your argument doesn't limit evolution, only abiogenesis (which is not, as far as I can see, symmetric). Or your dFSCI measure might not be defined as a function of the DNA sequence (i.e. you might have a measure where a particular mutation and its reverse both decreased dFSCI). But it's time for bed, and this message is too long already.Gordon Davisson
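Davisson's symmetry point can be made concrete with a toy simulation. The scoring function below is an invented stand-in for a functional-information measure, not an actual dFSCI metric; the only thing the sketch demonstrates is that when point mutations are reversible, every score-decreasing mutation has a score-increasing inverse:

```python
TARGET = "GATTACAGATTACA"  # arbitrary reference sequence; score = matches to it

def score(genome):
    """Toy stand-in for a functional-information measure (not real dFSCI)."""
    return sum(a == b for a, b in zip(genome, TARGET))

def point_mutate(genome, pos, base):
    """Replace one base; applying the inverse mutation restores the original."""
    return genome[:pos] + base + genome[pos + 1:]

g = TARGET                           # start at the local maximum ("top of the hill")
g_down = point_mutate(g, 3, "C")     # TARGET[3] is 'T': this mutation lowers the score
g_up = point_mutate(g_down, 3, "T")  # the reverse mutation raises it back

print(score(g), score(g_down), score(g_up))  # 14 13 14
```

The point of the sketch is purely logical: because `point_mutate` is its own kind of inverse, the existence of the downhill step implies the existence of the uphill step, regardless of whether the uphill step is ever observed from a genome already sitting at a maximum.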
December 6, 2011 11:50 PM PDT
Petrushka:
A naturalistic history would involve one mutation at a time. With occasional chromosomal changes, such as duplications and transpositions. ID calculations never take this into consideration. Basically because such a history would never be particularly improbable.
This is simply false. ID calculations assume the best case scenario for naturalistic pathways, and those pathways continue to be shown as exceedingly improbable. There are two aspects to information arising in life: prebiotic, and post-biotic. Some (including probably some or even most ID proponents) would view the first as the most problematic. Indeed, Meyer's latest book is focused exclusively on the first, while Behe's work is focused primarily on the second. In either case, every opportunity is made in the calculations for material mechanisms to do their alleged work; and each time they come up wanting.
Behe is simply full of it.
Look in the mirror. I don't think you understand the relationship of probability to the ID argument, as your examples demonstrate. Of course improbable things happen. They happen all the time. Improbability is only one aspect of detecting design. In addition, there has to be specification.Eric Anderson
December 6, 2011 08:52 PM PDT
Upright Biped, Thanks for weighing in. I think you make some excellent points. If I can distill what you are saying into generalized ID terms, the semiotic nature of cellular processes is another striking example of (i) functional complex specified information, and (ii) irreducible complexity. I agree with you that it is a powerful example and one that cannot be ignored by the thoughtful person. However, I fear the materialist who cannot or will not understand the basic concept of functional complex specified information and the fact that it points to a designing intelligence will not be swayed by the semiotic example, as powerful as it may be. That is because the dedicated materialist has already committed the intellectual fallacy of thinking that such systems can arise through purely naturalistic and materialistic processes. Therefore, she will simply ignore the argument altogether (witness your recent attempt at an exchange on the other website) or will argue that, "yes, the semiotic processes in the cell are incredible, and isn't it incredible what evolution can produce!?"Eric Anderson
December 6, 2011 08:42 PM PDT
Well Petrushka, seeing as non-local, beyond space and time, quantum entanglement/information is now found along entire protein structures which is, among other things, constraining functional proteins from 'evolving' to any new, extremely rare, sequences of functionality in the first place, I would have to say that the only thing that is 'full of it' is the entire reductive materialistic foundation of the neo-Darwinian framework that you are so enamored with. But hey Petrushka, it's only science! :) ,,, All you have to do, to save your beloved atheistic/materialistic delusions, is to refute Alain Aspect and company's falsification of local realism, i.e. of reductive materialism!!! Moreover, you don't rigorously establish a point in science by denigrating another person, such as you have done with Dr. Behe; you do it by actually providing concrete, clear, repeatable experimental proof for your position that unquestionably demonstrates that it is true!!! notes:
Falsification Of Neo-Darwinism by Quantum Entanglement/Information https://docs.google.com/document/d/1p8AQgqFqiRQwyaF8t1_CKTPQ9duN8FHU9-pV4oBDOVs/edit?hl=en_US Where's the substantiating evidence for neo-Darwinism? https://docs.google.com/document/d/1q-PBeQELzT4pkgxB2ZOxGxwv6ynOixfzqzsFlCJ9jrw/edit Coherent Intrachain energy migration at room temperature - Elisabetta Collini & Gregory Scholes - University of Toronto - Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/ Quantum states in proteins and protein assemblies: The essence of life? - STUART HAMEROFF, JACK TUSZYNSKI Excerpt: It is, in fact, the hydrophobic effect and attractions among non-polar hydrophobic groups by van der Waals forces which drive protein folding. Although the confluence of hydrophobic side groups are small, roughly 1/30 to 1/250 of protein volumes, they exert enormous influence in the regulation of protein dynamics and function. Several hydrophobic pockets may work cooperatively in a single protein (Figure 2, Left). Hydrophobic pockets may be considered the “brain” or nervous system of each protein.,,, Proteins, lipids and nucleic acids are composed of constituent molecules which have both non-polar and polar regions on opposite ends. 
In an aqueous medium the non-polar regions of any of these components will join together to form hydrophobic regions where quantum forces reign. http://www.tony5m17h.net/SHJTQprotein.pdf Myosin Coherence Excerpt: Quantum physics and molecular biology are two disciplines that have evolved relatively independently. However, recently a wealth of evidence has demonstrated the importance of quantum mechanics for biological systems and thus a new field of quantum biology is emerging. Living systems have mastered the making and breaking of chemical bonds, which are quantum mechanical phenomena. Absorbance of frequency specific radiation (e.g. photosynthesis and vision), conversion of chemical energy into mechanical motion (e.g. ATP cleavage) and single electron transfers through biological polymers (e.g. DNA or proteins) are all quantum mechanical effects. http://www.energetic-medicine.net/bioenergetic-articles/articles/63/1/Myosin-Coherence/Page1.html
Here's another measure for quantum information in protein structures:
Proteins with cruise control provide new perspective: Excerpt: “A mathematical analysis of the experiments showed that the proteins themselves acted to correct any imbalance imposed on them through artificial mutations and restored the chain to working order.” http://www.princeton.edu/main/news/archive/S22/60/95O56/
The preceding is solid confirmation that far more complex information resides in proteins than meets the eye, for the calculus equations used for ‘cruise control’, which must somehow reside within the quantum information that is ‘constraining’ the entire protein structure to its ‘normal’ state, are anything but ‘simple classical information’. For a sample of the equations that must be dealt with, to ‘engineer’ even a simple process control loop like cruise control along an entire protein structure, please see this following site:
PID controller Excerpt: A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal. http://en.wikipedia.org/wiki/PID_controller
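Since the excerpt above leans on the PID controller as its point of comparison, here is what such a loop actually looks like in code. This is a generic textbook sketch with arbitrary illustrative gains and a toy first-order plant; the numbers are invented for the example and have nothing to do with any biological measurement:

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step

# Drive a toy first-order plant toward a setpoint of 1.0.
pid = make_pid(kp=2.0, ki=1.0, kd=0.1, dt=0.1)
x = 0.0
for _ in range(300):
    u = pid(1.0, x)
    x += (u - x) * 0.1  # simple plant: x relaxes toward the control input
print(f"final value: {x:.3f}")  # settles at the setpoint
```

The proportional term reacts to the current error, the integral term removes steady-state error, and the derivative term damps overshoot, which is the "corrective action ... to keep the error minimal" the Wikipedia excerpt describes.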
It is very interesting to note that quantum entanglement, which conclusively demonstrates that ‘information’ in its pure 'quantum form' is completely transcendent of any time and space constraints, should be found in molecular biology on such a massive scale, for how can the quantum entanglement 'effect' in biology possibly be explained by a material (matter/energy) 'cause' when the quantum entanglement 'effect' falsified material particles as its own 'causation' in the first place? (A. Aspect) Appealing to the probability of various configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply! To give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! i.e. Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various 'special' configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! Verse and music:
John 1:1-5 1 In the beginning was the Word, and the Word was with God, and the Word was God. 2 He was with God in the beginning. 3 Through him all things were made; without him nothing was made that has been made. 4 In him was life, and that life was the light of all mankind. 5 The light shines in the darkness, and the darkness has not overcome it. Rascal Flatts - Unstoppable (Olympics Mix) http://www.youtube.com/watch?v=v1xF1L8ZS7s
bornagain77
December 6, 2011 08:28 PM PDT
Behe is full of it, you say? Full of what? While Dr Behe is cleaning up, would you (finally) mind addressing the evidence you have ducked more times than can be counted? The translation of genomic information requires specific physical objects and dynamics. These are observable properties. It is observed that two categories of physical objects within the system have qualities that are immaterial to their physical make-up. This process is coherently understood, and these immaterial qualities are logically necessary for the system to operate properly. HOW does an immaterial quality become instantiated into a material object by the purely material processes you defend? Give yourself a pep talk; hit the evidence for ID as a welcome change of pace.Upright BiPed
December 6, 2011 07:57 PM PDT
Hello Eric and Englishman, Not to distract from your conversation, but I think there is a far more compelling (and sustainable) conversation about the predictions of ID and TOE with regard to DNA than pondering the percentage of junk DNA (to be confirmed at some point in the distant future). Please allow me to cut and paste from another thread...
If the theory of material origins is actually true, then the idea itself predicts that the information in the genome is not semiotic – to borrow Dr Moran’s term – it is only ‘analogous’ to the kind of information transfer we as sentient beings use. One is symbolic and the other is chemical. Indeed, that position is argued by materialists (one way or another) every day on this forum. The information transfer in the genome is said to be no more than a cascade of physical reactions, but of course, all information transfer is a cascade of physical reactions, so that is no answer, and it never has been. But why does the truth of materialism predict this (chemical-only transfer) anyway? Because the representations and protocols involved in semiosis would have only appeared on the map after billions of years of evolutionary advancement in organisms. An imaginative materialist may see a chemically non-complex origin of inheritable Life in his or her mind’s eye, but that image blows up if that heredity is accomplished by using representations and protocols. Ask a materialist “what came first on the great time-line of Life: a) molecular inheritance by genetics, or b) representations and protocols?” Typically, confusion ensues, and the embattled assumptions of materialism are pushed to the very front of the defense. On the other hand, if ID is said to be true, then its own prediction is on the line. That prediction has been that the information causing life to exist is semiotic. And again, that is exactly what is argued (one way or another) on this board every day. When nucleic sequences were finally elucidated, we did not find an incredible new and ingenious way in which physical law could record and transfer information, we found the exact same method of information transfer that living agents use; semiosis. 
And as it turns out, if one properly takes into account the observable physical entailments of information transfer during protein synthesis, and compares it to the physical entailments of any other type of recorded information transfer (without exception), they are precisely the same. It requires an arrangement of matter to serve as a representation within a system, it requires an arrangement of matter to physically establish an immaterial relationship between two discrete objects within that system (the input and output), it requires an effect to be driven by the input of the representations, and it requires that all these physical things remain discrete. The semiotic state of protein synthesis is therefore confirmed by the material evidence itself, and with it, one of the predictions of ID theory. Of course, I have no authority, and I am not speaking for ID writ large, just for myself and anyone else who might hold this view.
Upright BiPed
December 6, 2011 07:30 PM PDT
Behe is simply full of it. Everything that happens can be considered improbable in retrospect. And yet things happen. It is incredibly improbable that your particular parents met, married, conceived at the exact moment required to produce you, and yet they did. The Axe argument was that functional space was too sparse to support incremental change, so when Lenski and Thornton demonstrate incremental pathways, suddenly it becomes improbable that they be found. What a surprise. Suddenly the threshold for dFSCI is reduced from 500 bits to one bit.Petrushka
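For readers keeping score on the units in this exchange: the "500 bits" threshold and phrases like "reduced to one bit" are just probabilities expressed on a log scale, via -log2(p). A quick sketch (the 1-in-10^150 figure is Dembski's oft-quoted universal probability bound, which is where the roughly-500-bit number comes from):

```python
import math

def bits(p):
    """Information measure of an event with probability p, in bits."""
    return -math.log2(p)

# Dembski's universal probability bound is usually quoted as about 1 in 10^150,
# which works out to roughly 500 bits:
print(round(bits(1e-150), 1))  # 498.3
# A single fair binary choice carries exactly one bit:
print(bits(0.5))  # 1.0
```

So a "one bit" threshold would correspond to a probability of 1/2, and "500 bits" to a probability of about 10^-150; the disagreement in the thread is over which probability model is the right one to plug in, not over this conversion.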
December 6, 2011 07:27 PM PDT
By the way, I should add that DNA is most definitely elegant and breathtaking. Everything we are learning about it underscores the fact that we are dealing with a system designed by a designer or designers of almost incomprehensible skill and capability. However, as it relates to the inference of design, my point is that "elegance" is a more subjective concept that does not necessarily or exclusively belong to the design inference. Functional complex specified information is the key to inferring design, whether we subjectively think the particular item is "elegant" or not.Eric Anderson
December 6, 2011 07:14 PM PDT
Thanks, englishmaninistanbul. Two points: 1. ID itself does not get into the question of the designer's purpose in any ultimate purpose sense. We can talk about a concept of "purpose" in the narrow sense of an object having a definable function or being a "purposeful" object in terms of what it objectively does. As a result, there is "purpose" in terms of engineering or functionality (which is a form of complex functional specified information). But the broader "Why" of a designer's actions is not part of ID. 2. More importantly, even if we accept that DNA has a purpose in the sense of its engineering and function (which I think most everyone agrees on), it does not follow that "all parts" of DNA must have a purpose. Machines wear out, break down, etc. Also, as was discussed on another thread, it is possible that some DNA does not have current function but is there for future use/development. That said, there are good design and engineering reasons to believe that the great majority, if not all, of DNA is functional. This has been a general prediction of ID for a long time (as articulated by Bill Dembski a number of years ago), in contrast to the junk DNA hypothesis pushed by many evolutionary proponents. But this does not mean that "all parts" of DNA have function. There may be some small amount of junk. Thus, if we were to describe the predictions of ID and Darwinism with respect to DNA we would say that: (i) ID predicts most DNA will have function, though some smaller portion could be junk, (ii) Darwinism predicts that some small portion is functional, with the vast majority being junk.* We now are starting to see the evidence, and it is lining up very strongly on the side of (i). * Note that some evolutionists are now starting to back pedal from the junk DNA hypothesis as more and more DNA is shown to be functional. 
But even recently, prominent proponents of Darwinian theory have continued to maintain that junk DNA provides evidence for Darwinism and against design.Eric Anderson
December 6, 2011 07:09 PM PDT
Well, if the logic goes "Designers have purpose, and DNA was designed, therefore all parts of the genome must have a purpose" then it kinda does require "elegance" in that sense. Maybe we're talking at cross purposes.englishmaninistanbul
December 6, 2011 06:23 PM PDT
"So are we saying that ID necessarily posits a designer that is interested in elegance of design?" Teleologic does not require elegance.Eric Anderson
December 6, 2011 06:08 PM PDT
Just having a little think to myself, it occurs to me that part of the trouble is in defining "consciousness" and "agency." I just listened to that Signature in the Cell discussion over at discovery.org where Stephen Meyer describes how he started with what you might describe as a "hunch" that design is the answer, and how he then set out to find a scientific way of describing and justifying that hunch. I hope that I'm just ill-informed and somebody will put me right, but I still haven't come across a definition of "consciousness" or "intelligent agency" beyond what is assumed to be common knowledge. To use an example: "Driving responsibly" is a concept we're all familiar with. But when a policeman pulls over an adrenaline-addicted wannabe rally driver he can't just say "You're not driving responsibly enough sir" and give him a ticket. He has to refer to legal definitions of "driving responsibly" phrased in words of one syllable that offenders can't weasel out of. So with "consciousness" and "intelligent agency", we all know what we mean but we have trouble describing it. We know that a beaver dam qualifies as intelligent design as much as computer software. So how do we define these intelligent agents, as opposed to any other phenomenon? It reminds me of an interview with Antony Flew on Youtube, where Flew confirms that he accepts there must be "an intelligence", and then reasons that the intelligence must be "omnipotent", but that we're not entitled to infer anything else in a religious sense. When questioned on whether that intelligence is eternal, he says "you can't really separate the eternity from the omnipotence." The interviewer asks if this must be a "personal force or being", and among other things Flew says "He's got to be conscious if he's going to be an agent." And so on. I'm sure people are going to debate the meaning of all these words and which thing necessitates the other, but that's really the problem. 
I suppose what's needed is not so much an understanding of consciousness or agency or even a definite statement of whether it's illusory or real, just a universally acceptable, bare bones, legalistic working definition of what conscious (?) agents are in terms of observed phenomena, that's portable to the origin of life debate. To give an example of why I think there's a need for this, on the Put a Sock In It page, in the opening paragraph, we have the following:
Intelligent design does not speak to the nature of designers anymore than Darwin’s theory speaks to the origin of matter.
True in context, but you have to define this designer somehow, at least in terms of lowest common denominators between all observed designers. The heading "Intelligent Design is Not a Valid Theory Since It Does Not Make Predictions" answers that accusation by pointing to the vindicated prediction that junk DNA would turn out to be functional. However it is admitted that (italics mine):
predictions of functionality of “junk DNA” were made based on teleological bases
So are we saying that ID necessarily posits a designer that is interested in elegance of design? By that definition Windows software is not ID! A small joke there, but I think you can see my point. Now it is certainly true that someone who subscribes to ID is also free to step into the realm of teleology which materialism flatly denies, but might it not be more technically correct to say that ID allowed for the prediction of junk DNA function, as opposed to being responsible for it all by itself? Another example: The heading "The Designer Must be Complex and Thus Could Never Have Existed" is handled thusly:
This is obviously a philosophical argument, not a scientific argument, and the main thrust is at theists. So I will let a theist answer this question
A correct statement in its context, but it doesn't address the question of where the scientific definition ends and philosopho-religious speculation begins. You can't say that the word "designer" is utterly undefinable scientifically otherwise it would be as unscientific a term as "God." So there has to be a minimum definition acceptable in scientific terms for what does and does not qualify as a "designer", "intelligent agent", what have you. I'm not sure whether I'm raising valid points or if I just haven't done enough research. But I really would like to know if these questions have been addressed. Thank you for reading, if you made it this far :)englishmaninistanbul
December 6, 2011 05:38 PM PDT
as to:
Thornton has added a new twist — actually trying all the possible variations between two cousin genes and checking to see that there is a neutral pathway. I’m sure this will become a common kind of research.
Drum roll please:
Wheel of Fortune: New Work by Thornton's Group Supports Time-Asymmetric Dollo's Law - Michael Behe - October 5, 2011 Excerpt: Darwinian selection will fit a protein to its current task as tightly as it can. In the process, it makes it extremely difficult to adapt to a new task or revert to an old task by random mutation plus selection. http://www.evolutionnews.org/2011/10/wheel_of_fortune_new_work_by_t051621.html Severe Limits to Darwinian Evolution: - Michael Behe - Oct. 2009 Excerpt: The immediate, obvious implication is that the 2009 results render problematic even pretty small changes in structure/function for all proteins — not just the ones he worked on.,,,Thanks to Thornton’s impressive work, we can now see that the limits to Darwinian evolution are more severe than even I had supposed. (which was 1 in 10^40 for just two protein-protein binding sites) http://www.evolutionnews.org/2009/10/severe_limits_to_darwinian_evo.html#more Dollo’s law, the symmetry of time, and the edge of evolution - Michael Behe - Oct 2009 Excerpt: Nature has recently published an interesting paper which places severe limits on Darwinian evolution.,,, A time-symmetric Dollo’s law turns the notion of “pre-adaptation” on its head. The law instead predicts something like “pre-sequestration”, where proteins that are currently being used for one complex purpose are very unlikely to be available for either reversion to past functions or future alternative uses. http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html
petrushka, do you even really care that your integrity is completely shot after repeatedly being shown to be wrong??? Why do you do this? Exactly what is the payoff for living in a lie?bornagain77
December 6, 2011 04:39 PM PDT
ID proponents use probability to ascertain the likelihood of a materialistic and naturalistic history of an object...
A naturalistic history would involve one mutation at a time. With occasional chromosomal changes, such as duplications and transpositions. ID calculations never take this into consideration. Basically because such a history would never be particularly improbable. Now, in many cases we don't know the history, so we extrapolate from the processes that we can observe. We also extrapolate from the genetic distance between cousin species. Thornton has added a new twist -- actually trying all the possible variations between two cousin genes and checking to see that there is a neutral pathway. I'm sure this will become a common kind of research. It's a fossil record in the genome.Petrushka
December 6, 2011 04:28 PM PDT
Petrushka: "The key requirement is regularity. One cannot study miracles except as exceptions to the background of regularity . . ." Who is talking about miracles? Is Mount Rushmore or the sculptures on Easter Island a result of miracles? The observable fact is that some things in the world are designed and others are not. Can we tell whether something falls in the designed category if we don't have a record of the causal history? That is it. Very simple question. It is a question that is asked and answered all the time in several fields. Nothing to do with miracles.Eric Anderson
December 6, 2011 03:42 PM PDT
Petrushka: "ID proponents use probability to suggest the history of an object, and the assumed history of the same object as parameters in the calculation of probability." This is a misrepresentation. ID proponents use probability to ascertain the likelihood of a materialistic and naturalistic history of an object, based on what is currently known in chemistry and physics, and then draw a perfectly rational inference based on that probability. The parameters are whatever information is currently known in chemistry and physics that could bear on the alleged materialistic history. Don't like the parameters of the calculation? Then please detail for us what materialistic mechanism/history you propose, and then we can do some calculations to see if it has any legs in the real world.Eric Anderson
December 6, 2011 03:37 PM PDT
Petrushka, genetic translation contains two material objects with (observable) immaterial qualities. These are coherently understood phenomena, which are logically necessary to accomplish the task. How does an immaterial quality become instantiated into a physical object? What mechanisms are causally adequate to accomplish such a thing? My bet? You'll say anything at all before you'll allow yourself to get in the ring with the actual* evidence. actual: a : existing in act and not merely potential b : existing in fact or reality c : not false or apparent: existing or occurring at the time ...Upright BiPed
December 6, 2011 02:58 PM PDT
'Evolution' breaks the fact that humans only come from humans. Now DNA can also be used in creation. The idea is that a creator made one animal and from that (even after many generations) made another similar animal. To go even further than that, God could take body materials to make another animal, thus leaving vestiges of that first animal's history. This explains the 'common descent' many 'evolutionary' scientists hold on to. But there is also evidence that common descent is not correct. That is why some say uncommon descent. But both ideas are not totally correct. In Creative Patterns, we see a descent of similar animals, but many starts to these descents. But not through 'evolution' but through Creation. This is not just an idea; we are told that God actually did creation like that. That was with Adam and Eve. God actually took bone, DNA, muscle tissue, etc., to create Eve. He did not just create her from scratch, nor did he use just the DNA. Does this not explain what both the common descent and uncommon descent scientists see? It is a combination of both views. If anyone thinks this is incorrect, let me know why.MrDunsapy
December 6, 2011 02:35 PM PDT
as to: Evolution is learning and doesn’t violate any laws or probabilities. No??? OK, it just violates the second law of thermodynamics!!! :)
Evolution's Thermodynamic Failure - Granville Sewell (Professor of Mathematics - Texas University - El Paso) http://www.spectator.org/dsp_article.asp?art_id=9128 Prof. Granville Sewell on Evolution: In The Beginning and Other Essays on Intelligent Design - video http://www.youtube.com/watch?v=CHOnqDNJ0Bc Granville Sewell - Mathematics Dept. University of Texas El Paso (Papers and Videos) http://www.math.utep.edu/Faculty/sewell/
bornagain77
December 6, 2011 02:12 PM PDT
Petrushka: Put simply, science assumes that anything that can interact with matter is matter. Simple it certainly is, and simply wrong. Science does not assume anything like that. You certainly do. But you are not "science". On regularity I agree. When design is applied, there are regular phenomena that can be observed.gpuccio
December 6, 2011 02:06 PM PDT
