Uncommon Descent Serving The Intelligent Design Community

Sifting


UD member Timothy, in another thread, writes:

But in principle it is possible to arrive at this particular unique combination of symbols using a simple brute force algorithm (like, for example, an elementary counting algorithm) that works through all the possible combinations of symbols. Thus, given such a systematic algorithm, all the books of the world, those written and those yet to be written, are implied by it.

I thought this was important enough to deserve a thread of its own.

This is not generating new information. It is sifting through existing information looking for something in particular.

A set is defined (all possible combinations of letters and punctuation) that by definition includes the information being sought. A goal is then defined (e.g. the combination of letters in a Shakespearian play). A mechanism is then defined to sift through the set (e.g. a million monkeys with a million typewriters for a million years) looking for something already known to be a member of the set.

Similarly, we already know that in the set of all physically possible combinations of atoms some of those combinations exhibit the properties of life (metabolism and reproduction). The question isn’t whether or not the information exists. The question is whether the sifting mechanism has a reasonable chance of finding the target in the information set where we know it already exists. If given infinite opportunity any sifting mechanism, even a totally random one, will eventually stumble onto the right combination. In the case of the spontaneous generation of life the set is very large and the sifting mechanism (laws of chemistry, physics, and statistical mechanics) doesn’t have infinite opportunity.
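Timothy's brute-force counting algorithm is easy to sketch in a few lines of Python. This is a toy illustration only (the three-letter target and lowercase alphabet are my own assumptions): the sifter systematically enumerates every combination of symbols, and the target is, by construction, already a member of the set. The only question is opportunity.

```python
import itertools
import string

def sift(target, alphabet):
    """Enumerate all strings over `alphabet`, shortest first,
    counting the opportunities used before the target turns up."""
    tries = 0
    for length in itertools.count(1):
        for combo in itertools.product(alphabet, repeat=length):
            tries += 1
            if "".join(combo) == target:
                return tries

print(sift("cat", string.ascii_lowercase))  # 2074 opportunities
# The space grows as 26^n: even a single 40-character line of a play
# would need on the order of 26^40 (about 10^56) opportunities, and the
# sifter has no way of knowing it is "close" until it hits the target
# exactly.
```

Note the sifter gets no "warmer/colder" feedback; it either has enough opportunities to exhaust the space or it fails, which is exactly the point at issue for large sets.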

Intelligent Design is all about applying statistical mechanics to the laws of chemistry and physics and determining the probability of given patterns emerging from the set of all physically possible patterns. It posits that for some patterns the universe, or the subset of it where the sifting takes place, is not big enough or old enough to have provided enough opportunity for those patterns to have any reasonable possibility of being formed absent the action of an intelligent agent (design) in the sifting process. In order to exclude false positives (a design inference where there was no design) the improbability threshold for a design inference is set very high. Dembski proposes that it be set at one chance in 10^150 opportunities. That figure comes from multiplying the roughly 10^80 elementary particles (protons, neutrons, electrons) in the observable universe by the maximum number of state changes each particle could undergo over the entire age of the universe. False negatives are still possible (no design inference where design actually took place) given that design can mimic chance.
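The arithmetic behind the universal probability bound can be checked directly. The figures below follow Dembski's derivation as it is commonly summarized; they are order-of-magnitude estimates, not measurements, and the 105-character comparison string is my own illustration:

```python
import math

particles      = 10**80  # elementary particles in the observable universe
planck_per_sec = 10**45  # maximum physical state changes per second
seconds        = 10**25  # generous upper bound on the age of the universe

# Total "opportunity budget" of the observable universe:
opportunities = particles * planck_per_sec * seconds
print(opportunities == 10**150)  # True

# For comparison, a 105-character string over a 27-symbol alphabet
# (26 letters plus a space) already exceeds that budget:
print(27**105 > opportunities)           # True
print(round(105 * math.log10(27)))       # 150 -- about 10^150 arrangements
```

So even a single short sentence, specified in advance, outruns every opportunity the universe can supply for an undirected sifter.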

The scientific or mathematical theory of design detection makes only minimal presumptions about the nature of intelligent agency. It presumes that 1) intelligent agency predating human intelligent agency exists either within (natural) or without (supernatural) the observable universe; 2) the agency is capable of abstract thought; 3) the agency is capable of manipulating matter and energy to turn abstract thought into physical reality. Any presumptions beyond that are philosophical or religious in nature and are the personal views of individuals, not the formal presumptions of the scientific theory of design detection.

Please be sure to read the sidebar Definition of Intelligent Design for a more concise definition of what ID is and is not. Unintelligent (dumb) design theories such as the modern synthesis (neo-Darwinian) don't presume that intelligent agency doesn't exist. They make the assertion that intelligent agency is unnecessary and then reasonably apply Occam's Razor to shave it out of the equation. Intelligent design theory differs only in that it asserts that intelligent agency is necessary and thus cannot be removed from the equation.

Just as an aside, the "million monkey" proposal is quite inadequate for generating a Shakespearian play. In a like manner, many scientists and mathematicians admit that life is unlikely to spontaneously generate in the known universe and propose a theory of infinitely many universes, with ours just happening to be one where life spontaneously emerged. This can be restated as: "If a million monkeys aren't enough then just add more monkeys until there are enough." I like that. Preposterous, uninvestigable pseudoscience. And these same scientists and mathematicians say ID is pseudoscience. People who live in glass houses shouldn't throw stones…

Comments
Timothy V Reeves (46): "I am eagerly waiting to hear more news on the subject [of protein evolution] as so much swings on the existence or non-existence of those islands of stability." That is an additional reason you should be sure to have a close look at Behe's The Edge of Evolution since he examines issues concerning the evolution of proteins and their binding sites. You also mention: "Secondly, and even better, proteins, compared to other morphological phenotypes, are relatively simple structures, being mere sequences whose possibilities have at least a chance of being mathematically investigated; unlike the vast possibilities of general morphospace that could juxtapose all sorts of forms and function that we find difficult to imagine in a huge ramifying space of possibilities that is not analytically tractable." In the book God's Undertaker by Lennox, he discusses how it is actually not "merely" this simple. The sequence stored in DNA may be subsequently sliced, diced and rearranged. (See "1. Alternative splicing", p. 133; and "Proteomics" p. 136). Furthermore, there are issues of geometry (cf. p. 135). It is possible for the same sequence to form either functional, non-functional, or even fatally dysfunctional geometries. As Lennox suggests, having to consider the 3D geometry could present "a problem of mind-boggling proportions" -- the very sort of problem it might have seemed that "mere" proteins avoided. BTW, Denyse's review is here. Even ignoring these complicating factors, I don't know what would prompt Larry (other than the confident faith of a true believer) to suggest "that protein ‘sequence space’ is filled with islands of stability that allows evolution to traverse in a kind of island hopping motion." To be meaningful, "stability" would have to mean at least "functional" (even if we ignore whether it is sufficiently beneficial to be retained by selection against deleterious mutation). 
Yet I believe it has long been recognized that the great majority of possible sequences do not correspond to functional proteins. Functional protein sequences are considered rare. Sequence space is not "filled" with functional proteins. ericB
To Timothy V Reeves (43), I heartily commend your recognition of the difference between a theological issue and a scientific one. It is a great foundational distinction that too many disregard. It is sadly common that many fall into faulty reasoning from confusing these categories of questions. Yet no matter what theological questions we may have, that would not warrant counterfactual leaps of faith. A theological question cannot impart to mindless matter attributes it manifestly does not have, or give scientists a free pass for ignoring the discrepancies. About speculations on why God might decide to act over a long time, I don't know what Hugh Ross et al (Reasons to Believe) might say, but that might be one source to consider. There is a separate question that is asked about why God doesn't just let life start and progress through natural processes, similar to stars, galaxies, planets, etc. But short of making the universe itself intelligent, I simply don't believe that is an option at all. Keep in mind that nothing else in nature, other than the artifacts of intelligence, has specified complexity -- nothing. Even if laws and chance can take care of the rest, that is the one aspect of nature they cannot create, though they will affect it once it does exist. "In brief, living organisms are distinguished by their specified complexity. ..." -- OOL scientist, Leslie Orgel. This still leaves open the unanswered question of whether sufficient specified complexity could be front-loaded once at the beginning of biological life, or if intelligent agency is also necessary along the way for certain kinds of major changes. Regarding books, it depends on the topic. Many good books are available from arn.org (Access Research Network). One that is not at ARN is my current favorite for a general introduction on the topic of whether science is more consistent with theism or atheism. God's Undertaker -- Has Science Buried God? 
by mathematician John Lennox responds to the recent rush of "science and reason imply atheism" books, and includes an introduction to a wide variety of design topics, both cosmological and biological. It was reviewed by Denyse O'Leary. If you want more about cosmological ID, check out The Privileged Planet. Concerning biological ID, I have not yet read The Design of Life, but I have seen glowing comments about it. Another introductory book is The Politically Incorrect Guide to Darwinism and Intelligent Design by Wells. Regarding the origin of life, The Mystery of Life's Origin was a well-received college-level OOL textbook, but it was quite technical (not suited to the average lay reader). For a more recent and more accessible alternative focused on OOL, you could consider Intelligent Design or Evolution? Why the Origin of Life and the Evolution of Molecular Knowledge Imply Design by Stuart Pullen. If you want to focus on both the capabilities and the limitations of Darwinian processes, once life is operating, Behe's The Edge of Evolution is on the leading edge of looking at the implications of the best available empirical data. ericB
Hi Gpuccio, (44) Thanks for the interesting comments. Your mention of enzymes is related to the protein-folding question, a question that arose on my favourite atheist's blog, Larry Moran at Sandwalk. He was challenging a Canadian ID theorist's (Kirk Durston – heard of him?) research in this area. I was very interested in the subject for two reasons. Firstly, because proteins are the 'bottleneck' through which evolution must pass, any problems over the evolvability of proteins place themselves fairly and squarely on a 'critical path', as it were. Secondly, and even better, proteins, compared to other morphological phenotypes, are relatively simple structures, being mere sequences whose possibilities have at least a chance of being mathematically investigated; unlike the vast possibilities of general morphospace that could juxtapose all sorts of forms and function that we find difficult to imagine in a huge ramifying space of possibilities that is not analytically tractable. Larry assured me that protein 'sequence space' is filled with islands of stability that allows evolution to traverse in a kind of island hopping motion. (In other words, there are no sheer drops in this region of Mt Improbable according to Larry – although I trust Larry, he would have to believe that, of course.) When Larry and Kirk agreed to take the matter further in the New Year I was looking forward to the outcome. I am eagerly waiting to hear more news on the subject as so much swings on the existence or non-existence of those islands of stability. BTW This discussion between Mapou and Paul looks interesting! Timothy V Reeves
Mapou, I am a little disappointed. In (31) you had stated that "I have a pertinent question to ask you regarding young earth creationism but please don't take me wrong. I respect all Christian churches and denominations . . ." and "This is just a question regarding the interpretation of a very small part of the book of Genesis. I am struggling with trying to find a correct interpretation and I am willing to listen to all views on the matter before I make a final decision. Right now, I just feel that I don't have enough information one way or another." That made it sound like you were just a seeker for truth and wanted to know my opinion, as it could be right. Then in another thread I read, "And he would also be wrong theologically. As I pointed out in a parallel thread, in an article I addressed to YEC Dr. Giem . . ." I don't mind your having a strong opinion about the exegetical question you raised. But it helps if you maintain a consistent attitude so that I can react accordingly. To comment on (41), first, you comment that "the fact that there are so many possible interpretations for the first few verses of Genesis speaks to the improbability of any of them being correct." I'm not sure I follow the logic here. I gave two different possible interpretations. Does that qualify as "so many"? Or did you have in mind some other interpretations that you wanted to add to the pool? And why should a multiplicity of interpretations imply that none of them are correct? The proper conclusion would seem to be that at least one of them is correct. If one finds a victim with a bullet in him, and finds a suspect with seven guns on him, it would seem that the suspect would be more, rather than less, likely to have shot the victim, because he has several guns, any one of which could have been used to shoot the victim. (Now if someone else also has one or more guns, then that would make the first suspect less likely to have shot the victim.
But you did not propose any exegetical possibilities for the question of light before the sun that pointed to a long-age interpretation.) You refer to the question of whether Yahweh or Elohim created Adam. You may wish to note that according to Genesis 2 it was Yahweh Elohim ("the LORD God" in the KJV and RSV). That would seem to imply that Yahweh and Elohim are not two separate deities. You mention that "it is inadvisable and even disingenuous for Christians to insist that they have already found what they are looking for (before the search has even started)". It may surprise you, as apparently you have run across some YEC's who follow that procedure, and have been told of more, and perhaps believe that all YEC's are that way, but I wholeheartedly agree with you on this. I believe that if one is to do science, one must put one's hypotheses and theories in a position where the data could count against them, and where enough data could disprove them. If you are interested in my take, click on my name and you can get to my website. Then read the introduction and first chapter of my book, and it should help you to see how I approach the subject of science and religion. I think you will be pleasantly surprised. Given that fact, your belief that "young earth creationists are doing Christianity a disservice by dogmatically clinging to a belief system that is so clearly contradicted by scientific data" apparently does not apply to all YEC's. You might try being a little more open-minded on this subject. I realize that, as you put it, "this is only one Christian’s opinion." But that is no reason why you cannot correct that opinion when it is contradicted by data on what other Christians actually believe. (Being more open-minded and less critical may also help you avoid running afoul of DaveScot :) ) Paul Giem
Timothy V Reeves: I find your approach very fair, and please enjoy your freedom of thought as long as possible. There is nothing better than being able to look at a problem with intellectual detachment and serious concern for truth. If some of us in the ID field appear "polarized", please bear with us: that's because we are very much conscious and sure of what we believe, and not because of religious prejudice, at least certainly not in my case, but just because what we believe is simple and strong. I don't love name-calling or disrespectful discussion, but I do love intellectual confrontation, even harsh if necessary, but always for the sake of truth. From that point of view, although extremes can be found everywhere, I believe that there is a definite, well-perceptible difference between the ID field and the Darwinists. And I am proud of being, even from that point of view, on what I believe is the better side. Maybe, as you internalize the arguments as they really are, you may become a little more "polarized" too. That need not disturb your freedom of thought in any way (if the "polarization" is the one I believe). Indeed, the more I debate ID, the more I feel my freedom of thought grow, and the more I enjoy it. I enjoy it because I am discussing important things with reasonable people, with whom I can appreciate similarities and differences. Contrary to what some believe, discussing with different people can be really rewarding, "especially" if you don't share their beliefs, provided that respect for both truth and people is there. In the past, I might have thought of YECs as unreasonable people, and yet I have met here a couple of them (no need to name them, I think) who are among the most intelligent and nice persons I can imagine. Such experiences are a true joy, and even if I have not changed my mind about YECs from a cognitive point of view, I feel happy that I can look at them with much more respect now.
Dogmatism and intolerance are the worst enemies of the cognitive approach. It is really sad that they must be so widespread, today, in the science field. But that's the heritage of our times. A last note about probabilities. Infinitesimal probabilities, like the ones we can find in biological information, are certainly not zero, but they are, indeed, infinitesimal. If you are familiar with mathematics and physics, you know that approximations, in such cases, are perfectly legitimate, especially if we are looking at an empirical problem. So, from an empirical point of view, the difference between 1:4^(10^6) (the complexity of the genome of a very simple bacterium) and zero is really nil. Regarding the slopes of Mt. Improbable, be sure that they will soon be revealed to be extremely steep. More than anything can randomly climb. And that aspect is perfectly open to experiments. You can see how steep they are if you consider the recent attempts to create a new enzymatic protein. See for instance the following article: "Selection and evolution of enzymes from a partially randomized non-catalytic scaffold", whose abstract you can easily find in PubMed. The important thing is that the researchers here used a random search to obtain a new enzymatic activity, and, after long experimentation, they partially succeeded, but: 1) They started with a library of sequences which had been selected intelligently for their characteristics (the presence of a zinc finger). 2) They indeed used a targeted random variation procedure. 3) They selected results using a specific measure of the expected activity (so we are in the model of Dawkins's "Methinks it is like a weasel", where the system intelligently knows what to look for, and indeed measures it even if it could still not be of any relevant utility). And, finally: 4) The resulting activity was still very low-level compared with the enzymatic activities we observe in nature, those which are really useful in the empirical world.
In my opinion, that clearly shows how steep the slopes of Mt. Improbable are, even for intelligent and obstinate research. gpuccio
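gpuccio's claim that 1:4^(10^6) is empirically indistinguishable from zero can be checked numerically. A short sketch (the million-nucleotide genome size is his figure; the rest is plain logarithm arithmetic):

```python
import math

# A minimal bacterial genome of ~10^6 nucleotides, 4 possible bases each.
genome_length = 10**6
log10_combinations = genome_length * math.log10(4)
print(round(log10_combinations))  # 602060

# The chance of hitting one specific sequence at random is therefore
# about 1 in 10^602060 -- not literally zero, but so far below any
# empirical threshold (Dembski's bound is only 1 in 10^150) that
# treating it as zero is a legitimate approximation.
```

The point is purely about scale: a number with over six hundred thousand digits in its denominator cannot be distinguished from zero by any physically realizable number of trials.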
Hi Eric. Thanks very much for the response. I hadn't heard of "The Mystery of Life's Origin". I am looking for a book that does justice to ID. Which books do you consider best? What would be a good starter text? Should I get more than one book? Being a theist, 'proof' that evolution has not occurred won't mean I start losing sleep at night ("So it didn't happen like that after all!"). On the other hand, as I currently favor evolutionary theory (note: 'currently favor' – I haven't sold myself to it!) I don't get automatically annoyed when I hear evolutionists speaking. I rather find myself in the middle, and that position does provide for a certain amount of emotional independence and cool-headedness about the issue. It's clear that the ID-evolution debate is very polarized and the battlefield is thick with name-calling and slurs. My worst nightmare is that I get sucked into this social melee and lose that sense of feeling free of vested interests, ulterior philosophies, reputation mongering, crowd pressures, and identification with polemical champions. Whatever the outcome of the thermodynamics/configurational ratchet issue, there is another question I have with ID, but it's more theological – why would God tinker with creation on and off in an ad hoc way over a very long period of time, giving 'evolution' the occasional 'leg up' over those Mt Improbable cliff faces? I'm assuming the standard paleontological time line here - I'm really asking why, in a theistic context, that time line should exist at all and why, at least to a first approximation, it looks so 'evolutionary'! Timothy V Reeves
Timothy V Reeves (40), your "diffusing fluid" illustration is instructive, but perhaps in ways you may not have intended. Imagine that the liquid was originally frozen in the shape of letters that spell out an informative message. Then imagine the letters melt and the liquid diffuses throughout the maze as you described. Now the question is, if we are willing to wait a very long time, what is the probability that the diffusion will reverse and the liquid will regroup into letters forming a meaningful message? (Assume for free an opportune freezing spell at any advantageous time in the process.) Your example implicitly assumed a goal that was in the natural "downhill" direction of entropy via the diffusing of the liquid. As my previous post discussed, what is needed to create a highly specified and complex language processing system is exactly the opposite. Similarly, it is all too easy to look to computer programs for inspiration (e.g. a search for a given sentence) without noticing the ways in which their informed and directed instruction processing is not analogous to the undirected processes of nature. In the book The Mystery of Life's Origin the authors reviewed and scrutinized origin of life research. They spent three chapters analyzing relevant aspects of thermodynamics. Even in an open system, energy flow is insufficient. One needs an engine, i.e. a means to harness the energy for doing configurational work. Here are some summary comments of theirs about this issue. "We have identified this latter problem as one of doing the configurational entropy work. Here the difficulty is fundamental. It applies equally to discarded, present, and possible future models of chemical evolution. We believe the problem is analogous to that of the medieval alchemist who was commissioned to change copper into gold. Energy flow through a system can do chemical work and produce an otherwise improbable distribution of energy in the system (e.g., a water heater).
Thermal entropy, however, seems to be physically independent from the information content of living systems which we have analyzed and called configurational entropy. As was pointed out, Yockey has noted that negative thermodynamic entropy (thermal) has nothing to do with information, and no amount of energy flow through the system and negative thermal entropy generation can produce even a small amount of information. You can't get gold out of copper, apples out of oranges, or information out of negative thermal entropy. There does not seem to be any physical basis for the widespread assumption implicit in the idea that an open system is a sufficient explanation for the complexity of life. As we have previously noted, there is neither a theoretical nor an experimental basis for this hypothesis. There is no hint in our experience of any mechanistic means of supplying the necessary configurational entropy work. Enzymes and human intelligence, however, do it routinely." (p. 183). ericB
Paul Giem: So that would be my answer to your question. I would slightly favor the Divine power explanation, but if there were good reason to reject it, I could be quite comfortable with the perception model instead. Dr. Giem, thank you for the pertinent reply. I think that the fact that there are so many possible interpretations for the first few verses of Genesis speaks to the improbability of any of them being correct. I feel that it is much more sensible to base one's doctrine on unambiguous interpretations (e.g., the death and resurrection of Christ) than on speculations and traditions. We simply do not have enough information. Consider that there is even controversy as to who really created Adam, whether it was Yahweh or some other beings collectively referred to in Genesis as the Elohim (the Lords or the Masters). Scientific enquiry is a type of searching. The Bible is not against scientific research. Our Lord did command us to keep searching in order to find. In my opinion, it is inadvisable and even disingenuous for Christians to insist that they have already found what they are looking for (before the search has even started) and to conduct a new search to corroborate what they think they have found. There is nothing wrong with admitting fallibility. Even God has had regrets as the scriptures clearly indicate. In conclusion, let me add that I sincerely believe that young earth creationists are doing Christianity a disservice by dogmatically clinging to a belief system that is so clearly contradicted by scientific data. As I wrote elsewhere on this blog, God is not called the Young of days but the Ancient of days. Nobody can be ancient in a universe that is only a few thousand years old, sorry. Having said that, please realize that this is only one Christian's opinion. Our salvation is not predicated upon our ability to correctly interpret the book of Genesis. Mapou
Hi Eric (39) If you release a diffusing fluid at the start of a maze, then, if there is enough of it, it will diffuse throughout the maze, and in time some will reach the finishing point (if there is one). Parts of the maze may well be designed to trap the fluid so that patterns of peaks and troughs in concentration may develop in the maze. Now this is the interesting point: the fluid molecules have no notion of direction, as they are subject to 'blind' Brownian motion, and yet because of the disequilibrium in initial conditions and the structure of the maze the system has pursued a direction. Hence, apparently blind undirected agents do not of themselves, paradoxically, preclude a system that pursues a direction. Now, talking about search programs, there is that rather subtle, ramifying maze simulation program of staggering proportions that has already been written for us: it's called the cosmos. Although on the ground it seems to employ small jiggling agents that don't have a clue about what direction they are pursuing (and that's before you get to the atheists), the initial conditions and the platonic maze they are working through may well superimpose direction. The 'designer' of this huge maze simulation may be clever enough not to need to provide the diffusing agents with any sense of getting warmer, but may incorporate that into the structure of the maze itself and the way it collects 'diffusing agents'. I keep returning to the same issue: whether we are using the metaphor of a maze, Mt Improbable or morphospace, the vast space of possibilities implicit in the physical regime of our world and just how those possibilities are networked is crucial. For example, in the maze of possibilities, is there a fibril of connection between what you call the non-symbolic and the symbolic? Or is it, as you have assumed, that the symbolic and non-symbolic are separated by a sheer rock face?
(Bear in mind that the distinction between the symbolic and non-symbolic is not necessarily clear-cut – are pheromones mechanism or symbol?) It is surely an irony that what really sets the cat amongst the pigeons here is Theism. Yes, Theism. Once one allows that the primary ontology of our world is a vast intelligence with the property of aseity, then it is possible that that intelligence has incorporated directionality and foresight at the maze level rather than at the agent level. However, over the question of whether our cosmos actually is a product of maze probing, theology and empiricism will need to have their say, of course. Note: a lot of engineers inhabit the ID community and some of them just don't seem to understand the second law of thermodynamics. Timothy V Reeves
Timothy V Reeves (38): "I didn't think the point I was making was particularly controversial, even on this site: it concerns spontaneous generation probabilities, which although extremely small, are nevertheless finite."
If you mean to say the probability is positive rather than zero, that is so only in the sense that some might allow a greater than zero chance that anything at all might happen all at once, at any time, in a single leap, e.g. the spontaneous appearance of a functioning cell. This is the materialists' version of a miracle. If you mean a real world process analogous to computer algorithms that can search for a solution, I submit that the probability is zero. Undirected pre-symbolic natural processes have no real world counterpart for the transition to symbolic processing. Please note, I have not been asking you about the merely improbably difficult foothills of arriving at any type of "self-sustaining structure." To move that issue to the side for clarity, you may start from the assumption that the prebiotic world found a way to generate replicating molecular structures without the aid of symbolic processing of any kind. For the sake of discussion, I will give you that much for free. I am asking about the problem of climbing the sheer face of moving from the non-symbolic to the functioning symbolic, i.e. from a non-symbolic universe to one that can translate encoded symbolic sequences into instructions that direct the construction of useful macromolecules. Part of the problem is that the universe does not play with impartial dice, giving all arrangements equal chance. The entropy current flows downhill -- unless you have energy and an engine that harnesses it to locally direct change against the current and toward some other destination. Living organisms can do that only because they have a programmed engine to harness energy flow purposefully in a directed manner. When a computer programmer designs a search algorithm, the programmer must implant the capability of recognizing progress and/or success. With evolutionary types of algorithms, the computer is enabled to play a "getting warmer/colder" game that gives direction to the search.
In this way, the designer of the algorithm provides a slope that the algorithm can climb. Apart from appeals to leaps of blind faith, can you point to any empirically grounded basis for believing that presymbolic mindless matter and the blind forces of nature have any access whatsoever to a "getting warmer/colder" selection function for getting "closer" to or "farther" from the feature of symbolic language processing? Without that capacity there is no directed search process for symbolic processing. No matter how long you wait, there is only mindless, undirected drifting, subject to the prevailing downhill currents of entropy. ericB
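The "getting warmer/colder" game ericB describes is exactly what Dawkins's weasel program (mentioned by gpuccio above) implements. A minimal sketch makes visible where the designer-supplied slope lives; the population size, mutation rate, and seed are my own illustrative choices:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # The "getting warmer" oracle: how many positions already match the
    # target. This function is supplied by the programmer -- it is the
    # designer-provided slope, not something the mutations discover.
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(pop=100, rate=0.05, seed=1):
    rng = random.Random(seed)
    s = "".join(rng.choice(CHARS) for _ in TARGET)
    generations = 0
    while s != TARGET:
        # Each generation: copy the current string with random mutations,
        # then keep the offspring the oracle scores highest.
        offspring = ["".join(rng.choice(CHARS) if rng.random() < rate else c
                             for c in s) for _ in range(pop)]
        s = max(offspring, key=fitness)
        generations += 1
    return generations

print(weasel())  # converges quickly, versus ~27^28 tries for blind search
```

Delete the `fitness` oracle and the selection step, and the algorithm collapses back into the blind sifting of the original post, with its hopeless 27^28 search space.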
Eric (36) I didn't think the point I was making was particularly controversial, even on this site: it concerns spontaneous generation probabilities, which although extremely small, are nevertheless finite. Everyone knows about them, but everyone discounts them because they can achieve nothing in realistic time. I was commenting on the claim that "one can't generate novel information": that claim isn't strictly mathematically true, but it may (or may not) be practically/physically true. BTW: self-sustaining structures do not need an intelligent selector: their very nature means that they select themselves should they, for whatever reason, make an appearance (whether spontaneously or because an intelligence has cobbled them together). Whether novel self-sustaining structures can be arrived at 'cumulatively' (as opposed to spontaneously) in realistic time is the billion-dollar question: as I have made clear above, this all depends on the lie of the land around Mt Improbable. Timothy V Reeves
Mapou, (31) You asked, "If God created everything in six literal days, each having a literal morning and a literal evening, how come the main luminary (sun) that is responsible (as Genesis clearly says) for the literal evenings and the literal mornings was not created until the fourth day? What was responsible for the mornings and the evenings of the first three days?" I usually try to avoid too much discussion of the age of the earth on this forum, as it can be divisive, and besides, I am a guest on a website where most of those responsible for the site (with the exception of Paul Nelson) believe in long ages, are not creationists in the sense of believing that Genesis as traditionally read settles the question, and are routinely smeared with this accusation by the most vocal of their Darwinist opponents. Under these circumstances it is not proper for me to attempt to proselytize. I believe rather that my function here should be to reinforce the thrust of their message when I agree with it (which is most of the time). However, I don't believe in hiding or falsifying my beliefs, even for a good cause. And you have asked a direct question, and deserve an answer, and this is late in the comments, so I hope they will not mind if I attempt one. First, I will say that I really do not know the answer. I was not there, and I do not have a video to review. All I have is the Genesis account itself, as it has come down, and my own fair but imperfect knowledge of Hebrew. So I cannot give a definitive answer to your question. What I can do is give you two models, either of which can explain the scientific and Biblical data with a fair degree of coherence. The first, which I shall call the perception model, proposes that on the first day light was allowed to reach the surface of the primitive ocean from the sun, but through such a cloudy haze that it was difficult to perceive the direction of the light, let alone that a heavenly body was responsible for it.
On the fourth day, the atmosphere cleared enough that the source of the light could be (fairly) clearly seen. The source would not necessarily have to have been as clearly seen as we do today in a cloudless sky; there might have been some hazy cloud cover distributing heat around the globe so that Adam and Eve could be comfortable while being naked, even at night. This view sees the Genesis account as describing what the creative process would look like to an observer on the ground, or earlier at the ocean's surface, thus the name perception model. This model also gives a different spin to the description of the fourth day from what you will read in most commentaries. Most commentaries note that the sun (Hebrew Shamash) and the moon are not specifically mentioned on the fourth day, and propose that this is a subtle but pointed polemic against the worship of these heavenly bodies as deities. In contrast, the perception model would hold that they are not mentioned because the sun itself was actually made on the first day, but the collection of light known as the greater light was made on the fourth. With a God able to perform miracles, this would not be impossible "scientifically". And I don't know of any way to test this theory with data to which we have access today. The major difficulty I see with this proposal is that it would seem to trivialize God's activity on the fourth day; He really only clarified the atmosphere somewhat. The second theory, which I might call the Divine power theory, takes off from the idea that the theological reason for the structure of Genesis 1 is that, among other things, God doesn't need the sun to make light. Therefore, it is proposed, God deliberately had light come before the sun just to show that he had that power. On the first three days, God had light come from the direction where the sun would eventually be created. With unidirectional light and the earth turning, there was evening and morning without the need for the sun. 
Then on the fourth day, the sun was in fact created. The earth would not have to drift out of orbit more than 3 days' worth, which could easily be accommodated when God created the sun. It could be argued that this makes God a show-off, but the counter argument would be that God was making a specific point that would be important in ancient society. So that would be my answer to your question. I would slightly favor the Divine power explanation, but if there were good reason to reject it, I could be quite comfortable with the perception model instead. Paul Giem
Timothy V Reeves: "I had thought of that one! That’s why I used the word ‘configuration’ – it includes hardware and software, symbols and reader. What you say is valid only if by ‘configuration’ I had meant ‘naked information’. ... Hope this has cleared up the misunderstanding."
My comment already covers your allusion to both hardware and software. Blind processes have no imagination, no planning, and no capacity to select for future functionality. So it doesn't help to just tack "hardware and software" together. Saying it that way does not give blind forces capabilities they do not have. In short, undirected, prebiotic processes have neither any need for nor any capability to seek symbolic representation. There is no search-for-symbolic-processing engine within mindless matter. It is quite fulfilled by actual reactions without symbolic representation. What we do have is the need for materialism to make a blind leap of pure faith, despite the complete lack of support from our observations of nature. But blind leaps of faith are not science. ericB
Paul Giem (29) Hi Paul, I was aware that arguing from IC in particular cases predates Behe - Arguments like “what use is half an eye?” have been around a long while. For example, I read “Genesis Flood” in the seventies and again in the eighties and nineties. I have a memory that I first saw the argument in that book (?). However I must confess that because Behe’s name is now ubiquitous in this area I rather assumed that ‘irreducible complexity’ is a term tracing back to Behe’s enunciation of a kind of generalized IC, packaged with his professional’s knowledge of proposed examples. I gave up young earth creationism as one of the ‘ten things you’ve gotta believe as a Christian’ about 12 years after I was converted – for me it failed as an explanatory structure for life as observed at my level. However, ID, in and of itself, appears not to necessarily reject the ‘old earth’ fossil histories, and can be read as a challenge, perhaps even a useful challenge, to conventional theories about the mechanism of evolution (a kind of negative challenge about unevolvability) without rejecting the standard geological model. You ask, “At what point are you prepared to concede that there are no gentle slopes up Mt Improbable?” The Mt Improbable imagery rather trivializes a platonic object that is complex in the extreme; morphospace - effectively a structure of structures, that somehow has factored into it environmental considerations as well (viz. organisms are not merely in environments but become a constituent of them at the same time – hence, tricky feedback effects). ‘Mt Improbable’ is not going to serve up its topological secrets as easy certainties or in neat analytical form to either evolutionist or ID theorist. Timothy V Reeves
EricB 32 Hi EricB, I had thought of that one! That’s why I used the word ‘configuration’ – it includes hardware and software, symbols and reader. What you say is valid only if by ‘configuration’ I had meant 'naked information'. But ‘configuration’ is a broader term that includes what we identify as information and any ‘machinery’ that effectively dresses the 'naked information' and interprets it. Notice that I said ‘sustaining configurations we called life’ not ‘self sustaining configurations we call the genetic code’. Hope this has cleared up the misunderstanding. Timothy V Reeves
magnan: "As I mentioned, this depends on how many multiplications take place in the mosquito" Just one. Gametocytes (male and female parasites) differentiate in the human host. These are taken into the mosquito gut during a blood meal. In the mosquito gut the male parasite produces the equivalent of sperm cells (flagellated germ cells) that fertilize the female parasites. The fertilized female then produces yet another form called a sporozoite which doesn't reproduce again in the mosquito but rather migrates to the mosquito's salivary gland where it is eventually injected into a new human host during a blood meal. There's comparatively very little opportunity in the mosquito stage for selection to deselect mutations that occurred in the human host where many rounds of replication took place. As well, there's very little opportunity for mutation to produce anything new in the mosquito phase. The most significant thing in the mosquito phase is recombination where genetic novelties produced by different cell lines can come together. If the mosquito feeds on more than one human host, which is common enough, then gametocytes from different human hosts get a chance to recombine their DNA. The mosquito phase is where there's an opportunity for selection to pick out any novel genetic changes that would serve to extend its range into colder climates. The mosquito is cold-blooded so its body temperature is that of the environment. Although the mosquito can survive in temperatures below 42F the parasite cannot. Any random genetic changes generated in a human host that helped the parasite survive the cold would be intensely selected for in the mosquito phase at the edges of the parasite's range. It's very significant that mutation and selection failed to produce any solutions for survival in colder temperatures.
An ID prediction that can, at least in principle, be tested would be that the parasite failed to extend its range because the modifications required are beyond the edge of evolution - i.e. requiring more than just a few interdependent nucleotide changes. This would require identifying the cause of death in colder temperatures and producing a GM version of the parasite that can survive the colder temperature. Close relatives of the parasite that can survive in colder temperatures should in principle provide a source for the key genes needed for temperate survival. The ID prediction is that those genes would be beyond the edge of evolution for P.falciparum and thus, for all practical purposes, forever beyond its reach. Another ID prediction would be related to the cause of failure in thwarting the sickle cell mutation in human hemoglobin. ID predicts that being able to digest the modified hemoglobin requires more than a few interdependent nucleotide changes where again it's beyond the practical edge of evolution for the parasite. DaveScot
To Timothy V Reeves, regarding New Information, etc. Speaking for myself, I would not "accept that very inefficient processes can produce configurations identical to those self-sustaining configurations we call life, although in a prohibitively long time." Regarding "information", the key information of interest is clearly the symbolic information encoded by a genetic code and used (via translation) to inform the construction of life's protein machines. The sifting illustration is inadequate and misleading with regard to life for multiple reasons. To mention one fundamental problem, even imagining that we had a random sequence generating engine (ignoring all chemical obstacles), a sequence of itself is meaningless. Symbol sequences only acquire meaning in the presence of a matched translation convention that associates the symbols with their functional/useful counterparts (e.g. amino acid sequences for functional proteins). When we use illustrations that generate sequences out of English letters, we can easily forget that we are smuggling in the translation for free, since we can translate it ourselves. If we strip away the unfair advantage provided by the English reader and place it back into a prebiotic context, the result is this: None of the object sequences have any meaning at all. The probability of finding one with meaning is zero. To really solve the symbolic information problem, it is required to generate both the symbolic translation machinery and also the symbolic sequences that correspond to functional counterparts according to the specific convention implemented in the translation machinery. Now, how does a blind process select (as an analogy) for the creation of a DVD player in a universe that does not yet have any matching, information-carrying DVDs? Or how does it select for the creation of DVDs that contain marker sequences that will become meaningful according to a future DVD player whose conventions have not yet been established?
Furthermore, if one is aiming to create symbolic information encoding for protein construction, there is no hope of doing this by blind processes in advance of the existence of the functional proteins. Mindless matter cannot encode for imagined, future proteins. By contrast, intelligent agents certainly can design and encode for future functional systems. Ada Lovelace is regarded as the first programmer for her work developing instructions for Charles Babbage's Analytical Engine, even though the engine was never built. ericB
Paul Giem: The usual tactic was to argue the age of the earth, or perhaps of the universe, and assume that because the creationists could be discomfited there, all their arguments were bunk. Nowadays that won’t work. (Even radiometric dating may now be problematic). Dr. Giem, I understand that you are a Christian and a young earth creationist. I, too, am a Christian although I believe that the universe is billions of years old and that life has existed on earth for at least a billion years. I have a pertinent question to ask you regarding young earth creationism but please don't take me wrong. I respect all Christian churches and denominations in spite of the incompatibility of their doctrines. I even include the Jewish faith within my circle of respectability. My position is that God does not judge us on the basis of our logic or lack thereof, but on the basis of our faith. My question is this. If God created everything in six literal days, each having a literal morning and a literal evening, how come the main luminary (sun) that is responsible (as Genesis clearly says) for the literal evenings and the literal mornings was not created until the fourth day? What was responsible for the mornings and the evenings of the first three days? I apologise if you've already answered this question in your writings. Again, I am not questioning your faith. This is just a question regarding the interpretation of a very small part of the book of Genesis. I am struggling with trying to find a correct interpretation and I am willing to listen to all views on the matter before I make a final decision. Right now, I just feel that I don't have enough information one way or another. Mapou
DaveScot, your numerical estimates are instructive. One way of conceptualizing the insight would be to say that a relatively few generations of selection of mutational variants with a huge population and a relatively small genome can indeed substitute for a huge number of generations of successive selection in a smaller population of organisms with a much larger genome. I thought I would try a little rough analysis myself, looking at it in a different way. The trillion population is the acute phase in the human patient. For this estimate we assume he began being administered chloroquine at this point and not earlier at lower parasite numbers. Say that the known simple CQR variant combines two mutations and it occurs in this parasite population of a trillion in an acute phase patient just given chloroquine. There is strong selection pressure due to the chloroquine and the clone of this variant now takes over the population, with the others dying off. Ideally, ignoring other factors and using the parasite multiplication rate of 8 per 48 hours given in the literature, the geometric progression is such that at day 10 the total number of multiplications to form the clone is about 33,000 and is up to a billion at day 20 and a trillion at about day 25. This simple version CQR clone population will be at about 60 million (large enough to get any particular needed mutation given the genome size) between days 16 and 18. So we at this point could be expected to have developed a better 3-mutation CQR variant in this patient, whose clone would then take over, displacing the 2-mutation version. This process would then take another 16 to 18 days to have accumulated enough multiplications of the improved clone to have a good probability of developing another even more effective 4-mutation variant. That is a total period of say 36 days in acute phase for this patient, during which he has been administered chloroquine. I wouldn't think chloroquine would be given that much longer.
So over the course of the disease in this one patient getting chloroquine and somehow generating the first simple CQR variant the parasite has had the opportunity through conventional RV + NS to have developed maybe a 4-mutation CQR mechanism. At this point the issue comes in as to how likely it is that this 4-mutation version will be transmitted to another patient being administered chloroquine, so that the process can be expected by MET to build up from this an even more elaborate 8-mutation variant in that patient, and so on to 16 and more in other patients during the outbreak. As I mentioned, this depends on how many multiplications take place in the mosquito, because the normal wild variety is more fit in the mosquito and will take over. If there are no multiplications in the mosquito phase and the process does not have to start all over again in each patient (most victims are infected with strains from other victims being given chloroquine), then we can expect RV + NS to build up very complicated systems for CQR, in any particular regional malaria outbreak. But it has never done this, so MET does seem to be falsified. I just don't know how valid the key assumption is. Since outbreaks tend to be geographically and temporally separate I would still expect the CQR evolution process to have to start all over again for each outbreak. magnan
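The geometric progression in magnan's estimate above is easy to check numerically. Below is a minimal Python sketch using only the comment's assumed figures (8-fold multiplication per 48-hour cycle, starting from a single parasite); the function name is mine, chosen for illustration.

```python
# Sketch of the clone-growth arithmetic from the comment above.
# Assumptions (from the comment, not independent data): the CQR clone
# starts at 1 parasite and multiplies 8-fold every 48 hours.
def clone_size(days, fold=8, cycle_days=2):
    """Population of a clone after `days`, growing `fold`-fold per cycle."""
    return fold ** (days // cycle_days)

for day in (10, 16, 18, 20):
    print(day, clone_size(day))
# day 10 -> 8**5  = 32,768        (the "about 33,000" in the comment)
# day 16 -> 8**8  ≈ 16.8 million
# day 18 -> 8**9  ≈ 134 million   (so ~60 million falls between days 16 and 18)
# day 20 -> 8**10 ≈ 1.07 billion  (the "billion at day 20")
```

This reproduces the comment's waypoints; the trillion mark actually lands closer to day 26-27 under these assumptions, which is still consistent with the rough "about day 25" in the text.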
Thanks, gpuccio and bililiad. And thanks, DaveScot, for extending the analysis. Dave, your data bring up an important point that is often glossed over. If it is unusual for malarial parasites to have two mutations at the same time, in order to get from point A to point B, say, 1,000 bases apart, one must have a pathway where each successive mutation is at least comparable in survival value to the previous mutation, or else the advance to point B is stymied. This step-by-step pathway is what Behe was looking for in the literature and couldn't find. Without knowledge of this pathway, or at least a pathway, all Darwinian evolutionary claims are just hand-waving. They are faith statements without evidence, just the kind of thing that atheists routinely condemn religious believers for. How they have gotten a pass on this is beyond me, unless we are looking at religion rather than science. Timothy, I would second Mapou's observations. I also appreciate your response, especially the recognition that the actual data are the key to the controversy. I would like to clarify a couple of points. The concept of IC, if not the name, antedates Behe. It was what convinced me back in 1973 that materialism was hopelessly inadequate. Similar arguments have been used by creationists such as Duane Gish for decades before Behe wrote his book. Behe has done a service by sticking his neck out, noting that the arguments cannot be defeated by arguing the age of the earth or even common descent, writing in a particularly clear style, and having credentials that were not easily discounted by the scientific community. But the data and arguments have always been there. Behe didn't invent them; he discovered them, as all do who discover truth. Those in the dominant scientific community who claim that Behe's arguments are the same tired old creationist arguments that have already been defeated have half a point; they are the same old arguments.
The problem is, they never really were defeated. The usual tactic was to argue the age of the earth, or perhaps of the universe, and assume that because the creationists could be discomfited there, all their arguments were bunk. Nowadays that won't work. (Even radiometric dating may now be problematic). The Darwinists want to live in the past when they had greater success in debating. Second, your mention of the abhorrence of explanatory discontinuities touches on an important point. If one has explanatory discontinuity, one has no recourse but to believe that the discontinuity can be bridged without adequate evidence, or to disbelieve. That is what an explanatory discontinuity means. With too many explanatory discontinuities, one's philosophy becomes faith-based rather than evidence-based. It becomes religion rather than science. That is why Dawkins could say that "Darwin made it possible to be an intellectually fulfilled atheist." Darwin appeared to remove a major explanatory discontinuity. But if Darwin turns out to be substantially wrong, then atheists will have to go back to being intellectually unfulfilled. Needless to say, they don't like the prospect, and are prepared to resist the demotion of Darwin. Paul Giem
Timothy V Reeves: But I don’t expect to see intentionality down at that level, any more than I expect to see intentionality down at the neural level if someone zooms in on my mind – individual neurons work blindly and know no purpose – to suggest they do otherwise would be a repeat of the homunculus fallacy. The mystique of intentionality and purpose can only be found on the level of the whole system. I agree but this is precisely what Behe's "irreducible complexity" is all about, in my opinion. An intentional system (such as the mammal brain) is a complex anticipatory or proactive system. It is irreducibly complex. A blind (non-intentional) process cannot arrive at such a system through gradualism, hence the need for a pre-existing intentional system, aka an intelligent designer. I guess that the question that must be asked is, how did evolution get to be intentional without gradually evolving toward that stage? It needs the intentional complexity before it can become complex. Chicken and egg, all over again. Mapou
gpuccio (22), Paul (23) Thanks gpuccio and Paul for the replies. I certainly agree with you Paul that everything turns on the empirical evidence, and the fact that all sides of the debate repeat this principle endlessly doesn’t succeed in trivializing it. On the subject of evidence it is clear that the landscape around Mt Improbable is crucial, and this in itself is an empirical (albeit tough) assignment that can be engaged without controversial mention of Design. The claim that all Mt Improbable’s slopes are irreducibly steep is where ID starts if not finishes. The ID community, it seems to me, has grown like a crystal seeded by Michael Behe’s term ‘irreducible complexity’. If Behe hadn’t identified the concept and coined the term I wonder if ID would be here today? One of the reasons I’m here is to see what the evidence for IC looks like from an ID perspective. What you won’t get from me is someone who defends evolution as if my life depends on it. The relationships stated by Paul, ‘no evolution => God’ and ‘no God => evolution’, are interesting to say the least. I suspect they arise because the human mind abhors explanatory discontinuities and dead ends of all kinds. In the former relationship we are given a living world perched on the discontinuous precipices of Mt Improbable, a perplexing situation that is accounted for by divine action. In the second relationship absence of divine activity is deputized for by evolution. Trouble is, when evolution has to stand in for the divine, it is not received well because it lacks the mystique of impenetrability. We zoom in on the process in our mind’s eye and all we see is the ‘purposeless’ chance shufflings of fragments of ‘dead’ stuff. Where’s the mind in that? Where’s intentionality?
But I don’t expect to see intentionality down at that level, any more than I expect to see intentionality down at the neural level if someone zooms in on my mind – individual neurons work blindly and know no purpose – to suggest they do otherwise would be a repeat of the homunculus fallacy. The mystique of intentionality and purpose can only be found on the level of the whole system. Timothy V Reeves
magnan (& paul) There is still something missing from the analyses. When assessing what mutation can accomplish in the way of successive single point mutations accumulating over generations one must consider two factors that haven't been mentioned - the genome size (about 20 million base pairs) and the background random mutation rate (about 1 per billion nucleotides copied). Each parasite has an average of 0.02 single point mutations (1 of every 50 parasites has a single point mutation). In a 20 megabase genome there are 60 million unique single point mutations possible (there are three possible base changes at each locus). There are up to 1 trillion parasites in each infected individual. If 1 in 50 have a single point mutation then there are 20 billion parasites that differ by one single point mutation. With 60 million possible genomes that differ by 1 nucleotide that means that in each infected individual the parasite has tried every possible single point mutation 333 times. This is done every 21 days (the life cycle of the parasite) in each of the roughly 30 million people simultaneously infected at any one time. Over the course of 50 years that's 886 generations where every possible single point mutation has a chance to "improve" the species in each generation. Behe does the same math for mammals. Because the generation time for mammals is, on average, 100 times longer, their genomes 1000 times larger, and their total population at any instant in time a trillion times smaller, even with far more total generations over the course of millions of years the ability to offer up sequential single point mutations from one generation to the next for natural selection to choose between is orders of magnitude less than the malarial parasite had in just 50 years. There simply aren't enough mammals alive at once in any given generation to produce even a minuscule fraction of the variation in genotype that P.falciparum enjoys in each generation.
To make matters worse, every mammal has an average of 3 point mutations where 49 out of 50 malarial parasites have a perfect copy of the parent. Given that the vast majority of mutations are fatal at worst or neutral at best, we should expect most mammal species to become extinct through the accumulation of small deleterious errors (selection works on the whole organism, not individual mutations, so it has to select the bad along with the good) while P.falciparum can charge merrily along in a state of genotypic perfection with selection able to choose one mutation at a time, not 3 at a time. This is exactly what we observe. Most of the mammal species that ever existed are extinct. Mammals should ALL be extinct due to genetic entropy if nothing is operating other than random mutation & natural selection. What keeps a few of them going despite genetic entropy is the biggest mystery. I suggest reading Cornell geneticist John Sanford's (inventor of the gene gun) book "Genetic Entropy" for a far more in depth discussion of genetic entropy. DaveScot
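DaveScot's figures in the comment above can be reproduced with a few lines of arithmetic. Here is a quick Python sketch; all the constants are the rough values quoted in the comment itself, not independent measurements.

```python
# Back-of-envelope check of the single-point-mutation coverage estimate.
# Assumed inputs (rough values from the comment above):
genome_size = 20_000_000          # base pairs in the P. falciparum genome
mutation_rate = 1e-9              # point mutations per nucleotide copied
parasites_per_host = 1e12         # acute-phase population in one patient

mutations_per_parasite = genome_size * mutation_rate             # 0.02 (1 in 50)
possible_snps = genome_size * 3                                  # 60 million
mutants_per_host = parasites_per_host * mutations_per_parasite   # 20 billion
coverage = mutants_per_host / possible_snps                      # ≈ 333

print(mutations_per_parasite, possible_snps, mutants_per_host, coverage)
```

Each possible single point mutation is indeed sampled a few hundred times per infected individual under these assumptions, which is the basis of the "333 times" claim in the comment.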
Paul Giem: Thank you for your post. Great points, and very well expressed. I agree with all my heart. gpuccio
Timothy V Reeves, (20), I hope you don't mind my commenting on part of your reply to gpuccio. You appear not to understand the controversy clearly. It is not logically impossible for a God to design the universe in such a way that it will organize itself into galaxies of stars, some that have planets, including at least one that is habitable, and that life will coalesce or spring into existence on that planet and then develop into increasingly complex forms of life, one of which can then (partially) understand the universe, itself, and the God who made it by such an indirect way. It is even not logically impossible that this God would then change His (or Her) ways and intervene in subtle or perhaps more obvious ways to produce what we would call miracles, perhaps in an effort to communicate with these intelligent products of His indirect creation. Kenneth Miller is not automatically ruled out of court. The problem is not the logical or philosophical impossibility of this kind of scenario. The problem is that it is contrary to the evidence. To pick the most obvious example, there is no good, or even remotely plausible, pathway from non-living matter to living matter. Both materialists and people like Kenneth Miller want us to believe that life arose spontaneously without any special assistance from anything, against the evidence. That's blind faith, brother. Notice that the best ID people want to talk about the evidence, whereas the tactics of their opponents are usually to try to win the argument before the evidence is presented. The opponents use such arguments as "they have no standing to argue" (they aren't really scientists, they don't publish in peer-reviewed articles), "science doesn't operate that way" (science is limited to materialism), or "they are dishonest" (as if that would win the argument even if it were true). The evidence points to an intelligent designer, and the rest is diversion. The problem is actually on the other side.
If one starts without active intelligence in the universe, or at least able to interact with the universe, then one must, as a logical necessity, believe that something very similar to Darwinian evolution is the cause of the diversity of life, and one must believe in at least one case of the spontaneous generation of life. Period. Thus, what should really be said, rather than your "evolution => no God", is the inverse, that is, no God => evolution + abiogenesis. Therefore, no abiogenesis (or no evolution) => God. The facts that ID points to challenge these people's beliefs about religion, and one can expect religious persecution of those that disagree with them, as indeed has happened. The way you phrased your last paragraph sounded like you were saying that believers in God believed in ID because we believed in God. For some, that may be true. For others, we are simply following the (massive) weight of evidence that a designer is required for life as we see it, that the "Who designed the designer?" question applies to space aliens, and that God is really the only live option ultimately. You ask what would happen if this evidence were to be somehow reversed. Some of us might go the way of Miller, and others might become atheists, and still others might be agnostics. But you need to realize that your question, at least at present, is counterfactual. It's a bit like asking what I would do if I suddenly started floating up off of the ground. I don't really know, but right now there is no need for me to worry about it. Your comments about Mount Improbable are valid as far as they go. But all the surfaces we have found are either cliffs or overhangs, and we appear to have surveyed the mountain on all sides. The closer we look at the cliffs, the more sheer the rock walls appear to be. Those on top of the mountain have always fallen down and been unable to climb up if they have descended more than a certain distance down the side.
At what point are you willing to concede that there are no gentle paths up the slope? Paul Giem
Timothy V Reeves: Just to be clear, my faith in God is based on completely different things, and has nothing to do with ID, nor in any way needs support from ID. The evolution => no God proposition has, in my opinion, a certain amount of truth, if we mean by evolution what strict darwinists mean: generation of all biological information by blind forces. The no evolution => God proposition has a certain amount of truth too, because I believe that many people would be much more uncertain of a purely materialistic view of reality if the castle of darwinism were, beyond any doubt, proven for what it is, that is false. That would probably not be particularly important for me (or for you), because, as I have said, my belief in God is totally independent, but perhaps it is for those other people that we spend time defending what we believe is the scientific truth. But I would like to give a definite answer to that other question from you, because it is a very important question. You ask: "If evolution should prove true, do you then stop believing in God?" I think you refer here to darwinian evolution. Well, I must very sincerely affirm here that darwinian evolution is not compatible, in any way, with my view of reality and with my faith. Not in a million years. I will never believe that blind forces have created the wonderful richness we see in the living world, not even by "commission" from a God at the start of the universe. I do believe that God personally created that (and, obviously, anything else that exists) in an explicit, intelligent, perceptible way. That's why I am sure that, in time, science will not only demonstrate that a designer exists (indeed, in my opinion, that has already been done), but also that the designer is ultimately a spiritual God. But, and here is the important point, what if darwinian evolution were reasonably proven true? (obviously, I don't think it has been, quite the contrary). But if it were?
Not beyond any doubt, because I don't believe scientific theories can ever be ultimately proven true, but reasonably enough? Would I lose my personal faith? Certainly not. As I have said, my faith has many other reasons to stand, and all of them are more important for me than scientific evidence. But, and it is a very important but, my scientific consciousness would be obliged to stay sincere: I would sincerely recognize (I would, believe me...) that scientific evidence, as understood at the moment, is in favour of that theory, although for non-scientific and personal reasons I would continue to believe that something is probably not right in that scientific understanding. So, as you see, I really believe that the scientific discourse should be independent from one's faith. Darwinists have it completely wrong. ID is a scientific theory, not a religious theory. I do believe in ID for scientific, rational arguments. I am obviously happy that my scientific convictions are in harmony with my religious faith, but that's not a priority. I defend my scientific convictions purely at the level of objective confrontation, and I have never, never made any appeal to faith on this blog. Obviously, expressing one's faith is another matter. I occasionally express mine, as darwinists always express theirs. But that has nothing to do with the scientific intellectual confrontation which is taking place. gpuccio
magnan, (18) If I understand your objection properly, it is this: multiple simultaneous trials are not equivalent to multiple successive trials, as the latter can change an organism in a sequential fashion and the former cannot. In one sense you have a point. If one is trying to get two specific mutations, either of which is neutral, or perhaps beneficial, one is more likely to get them from a thousand generations than from a thousand simultaneous mutations in one generation (which are therefore independent). But that simplistic model is forgetting several things. One, easily correctable, is that one has to factor in whether the separate mutations are truly neutral. If they are deleterious, they will be selected against, making it less likely that they will happen sequentially. Second, other deleterious mutations have to be excluded. In fact, if either of the above is fatal, it brings the sequential process to a grinding halt. An ideal setup has the combination of sequential and simultaneous trials, so that the original line can continue until a better one is found. In fact, the problem of deleterious mutations is a vexing one, that has been ignored for too long by conventional biology. What you have overlooked is that the malarial parasite does in fact have both. It is true that there are about 1 trillion malarial cells in a given infected person, and approximately 100 million people with malaria at a given time, with a resultant 10^20 organisms at any given time. But there is also a fast generation time, with an average of 20 cells being produced every 2 days, giving a generation time of less than half a day, or greater than 730 generations per year. One has to decrease these numbers for the time spent in the mosquito (probably less than 10 days--I have not found data), and around 10 days in the liver (range 6-15 days). Assuming that a new host is picked every two months, that cuts our generations per year down to about 400. 
With a history of malaria for at least 2,500 years, that is one million generations. Compare that to the generation time for humans. If one assumes a split between chimpanzees and humans at 7.5 million years, and a generation time of 10 years (perhaps long for the ancestor but certainly short for humans), there have been more malaria generations over the past 2,500 years than there have been human generations, period. Therefore, all things being equal, one would expect more evolution in malaria than one would from chimp (or the proposed ancestor) to human. Now I would agree that all things may not be equal. But it does seem to me to take a certain kind of faith to believe, without any evidence, that it should be markedly easier for primates to evolve than for malarial parasites to evolve. This is particularly true when we realize that the population of malarial parasites has vastly exceeded the population of humans for most of their respective generations. Surely you would concede that greater numbers of organisms make evolution easier, all other things being equal. This means that the belief that it is easier for apes to evolve than for malarial parasites is actually against the evidence. When you say that "it seems to me it [the malarial parasite] hasn’t actually been given the vast numbers of successive contiguous generations in the same population claimed to be needed by the process to build complex structures and systems", I agree with you. But if that is the case, then the proposed ancestral ape hasn't been given enough successive contiguous generations to build complex structures and systems either. Your observation is very apt. Just extend it a little. Paul Giem
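Paul Giem's back-of-envelope numbers can be checked in a few lines of Python. This is a sketch only; the 10-day mosquito stage, the two-month host turnover, and the 2,500-year history are his stated assumptions, not measured values:

```python
import math

# ~20 daughter cells per 2-day asexual cycle implies a doubling
# ("generation") time of 2 / log2(20) days, i.e. just under half a day.
doubling_days = 2 / math.log2(20)

# Using the rounded half-day figure gives > 730 generations per year;
# discounting ~20 of every ~60 days for the mosquito (~10 days, assumed)
# and liver (~10 days) stages cuts that to roughly 400 per year.
gens_per_year = (365 / 0.5) * (40 / 60)   # ~487, rounded down to ~400

# ~400 generations/year over 2,500 years of recorded malaria history
total_malaria_generations = 400 * 2500    # 1,000,000

# Compare: 7.5 million years of hominid history at 10 years/generation
human_generations = 7_500_000 // 10       # 750,000

print(total_malaria_generations, human_generations)
```

The comparison holds: one million malaria generations in 2,500 years against 750,000 human generations since the proposed chimp-human split.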
REPLY TO GPUCCIO Hi Gpuccio. Re: Creating Novel Information by ‘brute’ processing. Thanks very much for the long, considered reply, which I have read carefully. ‘Never find them’ is rather too absolute for me; even 1:4^(10^6) is a finite ratio. ‘Never’ doesn’t follow from it. Brute processing information generators have at least a theoretical ‘platonic’ existence even without recourse to infinities. This sets a precedent. However, whether the ratio 1:4^(10^6) can be used to show that ‘never’ means ‘not in the lifetime of our cosmos’ depends on those other rather knotty ID issues surrounding the concept of ‘irreducible complexity’; viz., whether Mt Improbable is always a sheer rock face or if it has gentle back routes! A lot depends on the ‘landscape’ surrounding Mt Improbable; you may not be able to devise a landscape that allows evolution to “think” of solutions, but perhaps Divine intelligence may not be so limited! Moreover, let’s say for the sake of argument that evolution has actually happened. Are you then telling me that if that is the case then Divine creative and sustaining providence is logically obviated? Does evolution really equate to a logically self-sufficient cosmos? Creation ex nihilo, in my opinion, is about reifying the platonic and not about configuration changes that occur within one reified contingent story taken from the platonic. Just because our contingent cosmos has been chosen for a particular history of configuration changes, there is no logical warrant to dispense with God’s Aseity and Creativity. If evolution should prove true, do you then stop believing in God? Is your faith based on a counterfactual (that is, evolution => no God) and a conditional (that is, no evolution => God)? ‘Materialism’ is less about a commitment to a particular cosmic story than it is about beliefs concerning what constitutes the primary ontology of our cosmos. Timothy V Reeves
magnan The parasite only reproduces once in the mosquito gut, producing a sporozoite which migrates to the mosquito's salivary gland where it then sits until it infects a human during a blood meal. In comparison it reproduces scores of times in the human host. There's comparatively very little opportunity in the mosquito stage for mutation and selection to undo any mutations that occurred in the asexual stages in the human host. What it does do in the mosquito is undergo one round of sexual reproduction so that recombination gets a chance to produce variants with different mixes of dominant and recessive genes. Moreover, because replication in the human is asexual, any mutations become immediately fixed in all descendants of that particular cell line until it leaves the human host. Generation time means nothing in and of itself in MET. The total number of replications is all that counts. For large animals with long life cycles, few surviving offspring, and only sexual reproduction, evolution occurs very slowly, and only by the passage of millions of years can a large number of replications accumulate. This is not at all the case for the malaria parasite. Let's try a different example and see if you can understand it. When botanists are trying to generate variants of plants with useful mutations, one of the ways they do this is to soak a seed with a chemical (mutagen) that causes mutations to occur much more frequently. Do they soak just one seed at a time, grow it, and if nothing useful and novel emerges try soaking a second seed? Of course not. They soak as many seeds at once as they can practically grow out. The generation time is constant whether one seed or one thousand seeds are soaked. What you're trying to tell me is that only the number of generations matters and that the number of individuals in each generation does not. DaveScot
I don't think I am being purposely obtuse to point out that Darwinist MET claims that it can produce complex innovations if it is given not just vast populations to allow vast numbers of small variations in each generation, but also vast numbers of generations to slowly build the apparently designed structures by multitudes of small steps. Complexity is supposed to be generated this way, not by a lucky accident of a fully formed machine appearing somewhere in a vast population. In the P. falciparum case the parasite has certainly been given vast populations and strong selection pressure during infection of humans being given chloroquine. But it seems to me it hasn't actually been given the vast numbers of successive contiguous generations in the same population claimed to be needed by the process to build complex structures and systems. If that is the case the CQR example doesn't actually falsify MET. But of course it certainly isn't evidence for the vast powers of RV + NS either. 1. The known CQR variants are supposed to be much less fit than the normal (wild) strain in a non-drug environment. 2. Because of this, in the mosquito phase of the life cycle the wild variant should take over after a few parasite generations. 3. Accordingly, a human bitten by a malaria-bearing mosquito will most likely be infected with the normal wild strain of the parasite, which does not have CQR. 4. Therefore, to build chloroquine resistance the parasite must start from scratch in each human infected with the disease. 5. This means that to build some sort of CQR complex the parasite is limited to the number of generations in a single human course of the disease, which I roughly estimated as 280 generations. 6. Since in Darwinist MET complexity is supposed to be built up gradualistically by a vast series of small changes over millions of generations, the CQR example does not seem to falsify MET. This capsulizes it in a series of propositions. I could easily be wrong in any of them. 
I think the weakest is 3. Perhaps actually during an outbreak most victims are bitten by mosquitoes carrying blood from other victims, which may carry parasites already having developed partial or full CQR. This would allow the parasite more generations to generate some elaborate defense. However, each separate outbreak would probably still have to evolve CQR independently of the others. magnan
magnan Let's look at an example in the engineering world of parallel processing. The example is the Ansari X Prize. This was a competition with a $10 million prize to the first team to produce a reusable vehicle that can carry passengers outside the atmosphere. The "generation time" is about a year. That's how long it takes to build a single prototype vehicle. If only one team were working on the problem only one vehicle per year would have a chance at success. About a dozen different teams worked in parallel on the problem generating about 12 test flights per year. One of them, after 5 or 10 years, won the prize. That's the power of parallel effort. The generation time is indeed a factor in the equation but the number of individual efforts to find a solution is equally important. A long generation time can be effectively negated by many generations being produced in parallel. This is such a simple, fundamental principle in any inventive undertaking it's hard for me to believe I need to belabor it and much easier to believe that anyone who claims to not understand it is being purposely obtuse. DaveScot
magnan Every single time one of the parasites replicates, whether in the human body or in the mosquito gut, there is one chance for heritable change to occur. I fail to see how any factor other than the total number of replications is relevant to the speed at which evolution can produce variants. Generation time doesn't exist in a vacuum. Generation time must be combined with how many new generations are being produced in parallel. In this parasite the generation time is very short, more so in the asexual stage, but by far the biggest factor to consider is how many generations are produced in parallel. This is a staggeringly large number for this parasite and it is exactly this factor which enables it to quickly produce drug-resistant variant strains with a rapidity approaching that of bacteria. DaveScot
magnan: I am no expert on the malaria parasite, but I ask you: why are you saying that "In the mosquito phase of course there is no drug treatment and the selection forces are against the CQR variant"? Even if that were true for the observed mutations (I really don't know), why should that be true of any possible beneficial mutation? Are you suggesting that any beneficial mutation is detrimental as soon as the environmental pressure decreases, even temporarily? Are you maybe one of those treacherous ID proponents? :-) In other words, the important thing in Behe's argument is that, in the presence of a very strong and specific selective pressure (be it even only in half the cycle of the parasite), no real "evolution" has taken place. In darwinian logic, there could be infinite ways the parasite could evolve through beneficial mutations which need not be detrimental in the other half of the parasite cycle. After all, if parasites survive the cycle in man through mutation, and then bring that "neutral-beneficial" mutation to the mosquito cycle, in the end the resistant strains should be easily fixed. So, one of the two: either no complex beneficial mutation took place, or all beneficial mutations are in reality characterized by a severe loss of basic function. In both cases, darwinists are not in a good position. And anyway, the malaria parasite did not undergo any new speciation, even minimal. gpuccio
DaveScot (#1): "In billions of trillions of opportunities what was mutation and selection able to accomplish (for the malaria parasite) in response to these selection pressures (for resistance to chloroquine)? Exactly what ID predicted. No more than two or three interdependent nucleotide changes that served to impart resistance to some drugs." I am an ID advocate, but I have had trouble with this example. It seems to me that with P. falciparum trying to develop CQR over the last 50 years we don’t really have the astronomical numbers of successive generations suggested for selection to operate. I posted on this before, but no one responded. I hate to be a devil's advocate, but I would like to see why this reasoning is invalid. Falciparum has basically two stages in its life cycle - in the mosquito and in the human body. The selection forces are strongly for the CQR variant during drug treatment in a human being. In the mosquito phase of course there is no drug treatment and the selection forces are against the CQR variant, resulting, after a while, in a parasite population of the normal non-CQR strain. So it seems that on average RV + NS is continuously operating to develop the CQR complex of mutations only during an acute infection in any individual while the drug is being administered. Usually Falciparum would seem to have to start all over again from “scratch” (from the normal free-living mosquito variant) in each infected human individual, unless the person is infected by a mosquito having just bitten a malaria patient with CQR malaria. So by this reasoning, over the entire 50 years since chloroquine became available the RV + NS process with P. falciparum seems to have had not much more than the number of parasite generations it took for the course of the disease in any one patient. Parasite generation time in the human body is estimated to be about 8 generations per 48 hours, or 4 generations per day. 
The course of the disease seems to be about 5 weeks or 35 days with no drug treatment, and a rough estimate of duration with drug treatment if CQR develops could be as much as twice that or 70 days. The parasite would then have had less than 70×4 = 280 generations for RV + NS to operate in building and spreading a CQR variant in the victim being administered chloroquine. magnan
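magnan's within-host generation budget is simple enough to check directly. A sketch only; the 8-generations-per-48-hours rate and the doubled 70-day treated course are his stated rough estimates:

```python
# magnan's estimate of how many generations RV + NS gets to work with
# inside a single treated patient (all inputs are his rough figures).
gens_per_day = 8 / 2          # ~8 generations per 48 hours
untreated_course_days = 35    # ~5 weeks with no drug treatment
treated_course_days = 2 * untreated_course_days   # ~70 days if CQR develops

gens_per_course = gens_per_day * treated_course_days
print(int(gens_per_course))   # 280
```

Contrast this with the roughly one million contiguous generations Paul Giem counts when the 2,500-year history of the parasite is taken as a whole; the disagreement between the two comments is precisely over which count is the relevant one.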
Timothy V Reeves: I try some answers to your thoughtful questions: "However, you have admitted that a naked ‘search’ process can eventually come up with the configurational goods, if in an inordinately long time" Yes, but here "inordinately" means really empirically impossible: that is, we are facing problems which cannot realistically be solved computationally in any finite time. Only a theoretical reasoning about infinite search has to admit the possibility of a random solution, but that has really no relevance in the real universe. "there need be no sneaky information waiting in the wings prompting the program what to select or reject, because those configurations, once arrived at," That's the point: you can't arrive at them. Take the problem of the Origin Of Life (OOL): the simplest living organisms ever known, archaea and bacteria, have genomes which are at least above 10^6 base pairs, which means a complexity of more than 1:4^(10^6). In comparison, Dembski's UPB is a joke. The important point is that no simpler autonomous living being has ever been observed or reasonably conceived. All the known scenarios about OOL, from Miller-Urey to the RNA world, are pure imagination, and of the wildest type, as even some important non-ID scientists have admitted. But there is more: even if you had, by miracle, the raw information ready (that is, a bacterial genome), still you would not have life. Nobody has ever been able to take a complete bacterial genome and build, from non-living molecules, a living bacterium. And something which is impossible in an intelligent lab should have happened in the primitive ocean, or underground, by virtue of a supplementary incredible magic, once the first incredible magic had supplied the necessary genome? Let's remember that anything simpler than a bacterium "is not a replicator". 
Replication in the biological sense is observed only in the very complex system of a genome, a system of transcription, a system of translation, a metabolic system, a membrane, and many other incredibly complex and efficient things. OOL is the early death of any materialistic theory of life. But even if, like Darwin, you choose to "take life for granted", you still have to explain how a precursor which was certainly almost identical, or maybe identical, to actual bacteria and archaea, which are still the most adapted and successful forms of life on the planet, gave rise to almost infinite, and very different, forms of much more complex life, sometimes in a very short time (see the Cambrian explosion). The problem of the origin of species is no less a conundrum, for the darwinists, than OOL. There is no discussion possible. Darwinian theory is, simply, an impossible explanation. "imaginary brute processing scenarios raise questions about just what the ID communities’ ‘New Information’ means" It's rather simple. We have an empirical, incredible truth: although complex functional configurations cannot be generated by random forces or by necessary laws, we constantly observe them in nature. They are of two kinds: biological information in living beings, and human artifacts. Even if some earnest darwinist, even on this blog, has tried to affirm that the two categories are fundamentally different, that's not true. In biological beings we can observe myriads of examples of very complex machines, regulatory networks, etc. which are very similar to human engineered machines or software, only more efficient and complex. Indeed, many times human engineers try to imitate solutions found in living beings. But let's look at human artifacts. Take a simple human algorithm, like an ordering program for a PC, which operates intelligently to order a content which is given as input. Just consider that kind of information. 
It is CSI, because it is complex enough, although not very complex (for convenience, let's suppose that it is a string of 1000 bits; that puts us at a complexity of more than 1:10^300, well beyond our UPB). Well, it is easy to affirm that that specific sequence of bits has never existed in nature, in any form, until some human created it. Still, that sequence is not just one of the possible random sequences of 1000 bits. It is functional. In the correct environment (the PC) that information works as an ordering program. More important, that information was written that way "because" it had to work as an ordering program. That's a very deep thought, isn't it? We have an informational entity which would never have existed otherwise, and which still is specifically functional. And why does it exist? Because an intelligent agent, a human, wanted it to exist, had the purpose and the understanding and the means to create it. Don't ask me how intelligence can do something which non-intelligent nature cannot do. I don't know. Nobody knows. But it's that way. We see it happening every minute. In other words, intelligence can "select" specific configurations according to a purpose, without having to perform a completely random search and sifting, which could never find those functional configurations in any existing time. Human intelligence can do that. Nothing else we know in nature can do that. And yet, we have another huge repository of the highest information of that kind (CSI, specified, functional, selected information) which "must" have been created by a similar process. There is no alternative. Human beings, as we know them, don't seem to be a reasonable solution to that. As far as we know, they were not there when life started on our planet. Aliens, it could be. There is always, for them, the "who designed the designer" problem, but after all we don't know much of aliens, and the problem could be postponed. 
There is a possible entity which human thought is rather familiar with, and that is a God. Is that, as far as we can know, a possible scientific answer? You can bet. It definitely is. First of all, in most mature conceptions of God, He is the source of the highest qualities we observe in humans. So, instead of asking how God could have an intelligence similar to humans, we could just put it the other way: how can humans have an intelligence similar to God? The answer is simple: God gave it to them. For Christians, it is very easy: God created humans in His image. No surprise, then, that we find intelligent design in life (but also, with a different degree of evidence, in the whole universe) as we find it in human artifacts. The same principle of intelligent agency is there: divine and human. The same concepts of purpose and meaning and will are there. And finally, does the "who designed the designer" problem apply here? Absolutely not. Why? Because, in all mature conceptions of a God, He is a cause outside creation, that is, out of time and space, and out of the law of causality. Besides, He is necessarily simple, and the origin of all complexity. It's called transcendence. Is an explanation based on God scientific? It certainly is, if God exists. After all, science is a search for truth, not for materialistic convenience. Put it the other way. If God created the world and implemented specific higher information in living beings in the course of time, how could any materialistic explanation of reality be true? It's impossible. Science is a search for what is true, within the limits of what science itself can understand from an empirical point of view. If God exists, and if His activity in creation is empirically observable, then no science is ultimately possible without a concept of God. gpuccio
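The orders of magnitude gpuccio cites in these comments are easy to verify. A sketch only; the 1000-bit program and the 10^6-base-pair minimal genome are his illustrative figures:

```python
import math

# A 1000-bit program: 2^1000 possible strings, i.e. roughly 10^301,
# already past Dembski's 10^150 universal probability bound.
log10_program_space = 1000 * math.log10(2)
print(round(log10_program_space))    # 301

# A minimal known genome of ~10^6 base pairs, 4 symbols per position:
# 4^(10^6) possible sequences, roughly 10^602060.
log10_genome_space = 10**6 * math.log10(4)
print(round(log10_genome_space))     # 602060
```

Working in log10 avoids computing the astronomically large integers themselves; both search spaces dwarf the 10^150 bound discussed in the head post.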
Gil: "It would be interesting to calculate a realistic probability bound for the formation of life on earth by chemical processes." It's even harder than you suggest, since many of the precursors can only be formed under directly opposing conditions. Calculations of the concentrations of the necessary sub-species imply concentrations so low that you are not even likely to encounter some of the required amino acids in the same litre of "soup". So, from the start, you can't get enough amino acids of all the right types in one place. Then, you can't get those amino acids to build anything biologically meaningful in terms of size, since you need to add energy, which is likely to break things down or form useless tarry by-products. Then of course, the order and number of the molecules formed are rather important. Finally, as has been noted before, we can start with a single drop of water containing the simplest single-cell organism and disrupt it to produce all the necessary precursors. That skips the first three steps above. What's the chance of putting Humpty Dumpty back together again? Calculating the probability of any one of these stages makes the UPB look generous, and the full odds against mean multiplying all the stages together. SCheesman
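SCheesman's last point, that independent stage probabilities multiply, can be illustrated with placeholder numbers. The per-stage values below are purely hypothetical and chosen only to show the compounding; none of them comes from the thread or from any measurement:

```python
import math

# Hypothetical per-stage probabilities, for illustration only.
stages = {
    "amino acids co-located": 1e-30,
    "polymerization without tarry by-products": 1e-40,
    "correct order and length": 1e-60,
}

# For independent stages the probabilities multiply,
# so the log-probabilities add.
log10_total = sum(math.log10(p) for p in stages.values())
print(round(log10_total))   # -130
```

Even with each individual stage well inside the 10^150 bound, the product of only three stages is already close to it; this is the multiplication SCheesman describes.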
OK then, let’s have a quick look at this. I’ll confine my comments only to the issue of ‘New Information’. There are various things Dave and gpuccio mention, like ‘CSI’ and Dembski’s UPB, that I really need to get up to speed on, but I’ll see how far I can get without them. So Dave, you accept that very inefficient processes can produce configurations identical to those self-sustaining configurations we call life, although in a prohibitively long time. I didn’t actually have the search-reject-select computational scenario in mind (what I understand you refer to as ‘sifting’), but rather just the inexorable ‘search’ bit of the process, stripped of selection. However, you have admitted that a naked ‘search’ process can eventually come up with the configurational goods, if in an inordinately long time. In one rather idealized trivial mathematical sense there is no new information out there, in that all possibilities are registered in the great uncharted platonic volumes of mathematics. So when we talk about ‘New Information’ we are really thinking about configurations ‘new’ in the sense that they are ‘new’ relative to our existing world. In the case of Dawkins’ ‘ME THINKS…’ program it is very easy to maintain that no new information is generated, because the very thing being looked for is written into the program from the outset (Nice one Richard, that wretched little program of yours is the best anti-evolution propaganda I’ve come across!) Now, and here’s the rub, when it comes to configurations of real atomic stuff, we need NOT add the ‘sifting’ function to our naked brute search process. Unlike Dawkins’ infamous ‘ME THINKS….’ program there need be no sneaky information waiting in the wings prompting the program what to select or reject, because those configurations, once arrived at, self-select, self-sustain and self-perpetuate given their environment – that’s what life is all about. 
They inhabit platonic space, but should they, perchance, be discovered by brute processing they lock in, not because of some crib information hidden away in the physical ‘program’, but because they are what they are – self-sustaining, self-perpetuating. Like a weakened reification of the ontological argument they self-select and self-sustain for their own self-referencing structural reasons. Admittedly there are big, big, big issues here about how quickly brute processing can do what it is supposed to have done in the allotted time (and Dembski’s UPB is relevant here) but imaginary brute processing scenarios raise questions about just what the ID community’s ‘New Information’ means. If in principle brute processing can generate structural configurations identical to ID (although it is a recurring gut feeling that it could never be done in a timely way), what distinguishes ‘New Information’? Why is it a distinctly ID concept? Can we really claim that ID is the ONLY way of generating new information? We could, of course, define ‘New Information’ as an extrinsic property of a configuration conferred upon it by external intelligence; that is, a property that exists by virtue of some intentional purpose of a pre-existing intelligence that has generated it. This, of course, muddies the water considerably, as intelligence is the wild card here; it is not fully understood, even in the human case. So, Dave, in short I am still not clear just what ‘New Information’ is – is it an extrinsic connection that a structure has with the intelligent entity that generated it? Or are you simply defining it as structural configurations that brute processing can’t produce except in a prohibitively long time? Timothy V Reeves
It would be interesting to calculate a realistic probability bound for the formation of life on earth by chemical processes. This would be the number of atoms and molecules on the surface of the earth and in the oceans, times the maximum rate at which chemical reactions can occur, times a few hundred million years (since life apparently appeared almost immediately after the earth cooled sufficiently). The probabilistic resources available for the formation of life on earth by purely materialistic means would obviously be vastly smaller than those used to calculate Dembski's UPB. GilDodgen
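Gil's proposed bound multiplies three quantities, and a rough sketch shows where the result lands relative to the 10^150 UPB. Every input below is an assumed round number for illustration, not a measured value:

```python
import math

# Assumed round numbers, purely illustrative:
atoms = 1e44                 # atoms available in oceans and crust surface
reactions_per_second = 1e12  # generous upper rate of chemical events per atom
years = 5e8                  # a few hundred million years after the earth cooled
seconds_per_year = 3.15e7

# Gil's formula: atoms x reaction rate x available time
trials = atoms * reactions_per_second * years * seconds_per_year
print(round(math.log10(trials)))   # ~72, vastly fewer than 10^150
```

Whatever plausible round numbers are substituted, the earth-bound probabilistic resources come out scores of orders of magnitude below the resources assumed in Dembski's universal bound, which is Gil's point.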
Gil Thanks for the correction on the UPB. I'll make a note of it. DaveScot
hrun0815: "A well adapted organism remains unchanged as the environment remains unchanged" Well, that's exactly the point: 1) The environment has not remained unchanged. The ubiquitous use of chloroquine is for the parasite the most important environmental threat one can imagine. The emergence of S hemoglobin, in more distant times, is another example. What other kind of fitness landscape change do we need to activate darwinian evolution? It seems that darwinists always have a double standard ready: in distant and untestable times, any imaginary change in environment is declared capable of triggering cows into whales and similar miracles, while in historical times a major environmental threat, like the ubiquitous use of a drug in the target population, cannot even produce an efficient method of resistance! 2) If the principle you state is correct (and I think it is more correct than you think), then why would the simplest, oldest and best adapted organisms we know of, that is bacteria and archaea, have changed so much, according to darwinists, as to have given rise to the full chain of evolution, up to humans? I have always affirmed that simpler forms of life are usually the best adapted, and that complexity in itself is a source of weakness in living organisms. In other words, the true driving principle behind the growing complexity of the living world seems not to be survival, but rather the expression of the fundamental principles of life, together with a true, infinite creativity. gpuccio
hrun0815:
Well, I guess in some cases it does, when drugs are injected to treat humans. In those cases P. falciparum did change effectively as expected.
The ID community is suggesting the following: > that drug resistance must usually involve the organism disabling itself (decreasing its overall information content) so that the drug cannot act against it, however leaving it a lesser organism. They parallel it to chopping off one's arms to keep one from getting caught in arm-grabbing traps. > That resistance is limited to resistance that can be achieved with a single mutational event. If a dual event is required, then one of the two events must at least offer partial resistance on its own. The bottom line prediction of ID is that if a drug can be found which cannot be resisted by a single mutational event, then resistance will not develop. You obviously have much better access to the literature than I do. Show me in the literature that they are wrong. bFast
Thanks very much for the Replies Dave and gpuccio! I've got a bit of catching up to do here! I'll be back! Timothy V Reeves
I don't really get the P. falciparum example. Why would P. falciparum change? Does the environment it finds itself in change at all? Well, I guess in some cases it does, when drugs are injected to treat humans. In those cases P. falciparum did change effectively as expected. What does this tell us? A well adapted organism remains unchanged as the environment remains unchanged. And small organisms with a high reproduction rate can often efficiently develop drug resistance. Why would a design proponent or a Darwinian evolution proponent expect anything else? hrun0815
Very good summary! I would just add that the problem remains of identifying and understanding better what it is that makes the specification, in other words what it is that makes the information "intelligently designed" and allows us to recognize it and to point to it as the product of intelligent agents. Put differently, it means that we have to ask, at least to some degree, what defines a conscious intelligent agent, or at least its recognizable output. In the end, the demonstration that appropriately specified (for instance, appropriately functional) solutions are only a really tiny subset of the whole set of possibilities is a basic priority of ID theory. While that concept is certainly intuitively obvious, at least in my opinion, it definitely deserves specific investigation, both theoretical and empirical.

Another important point, related to the first, is that the process of sifting needs a recognition procedure which can fix the right result. In many cases, maybe in all, that means that the system must have some specific information about the result to be obtained, which is probably the core meaning of what Dembski and Marks declare in their latest papers. Let's take the example of Dawkins' "Methinks it is like a weasel" model. There, the sifting is extremely efficient exactly because the sifting system, though using a random variation procedure to generate change, already knows the final sentence which is the correct solution to the search process. In that context, it is quite easy (although not trivial) to get to the right solution in reasonable time. But it is fair to ask: where is the new information? The system has only "copied" information which was already present, through a very troublesome random mechanism.

In biological systems, function is believed to be the "information" that guides the system, through natural selection. That would be the only way to shorten the infinite time required to get to the high grade of CSI observable in all living beings.

But we must remember that:

1) Function is recognizable only in the appropriate context. One of the Darwinian lies is that any new function could be selected, and that this would enlarge the subset of specified results into islands of reasonable probability. That's not true. In any specified context, and there are myriads of different specified contexts in the history of life, only very few solutions are functional. The fitness landscape is really determined not only by changes in the environment, but also, and especially, by the already-set context of the existing organism.

2) Most complex functions cannot be derived in a gradual way from other functions. That's the characteristic of functions, after all. Each function "does" a specific thing. There is no reason in the world, neither logical nor empirical, why different functions, which "do" different things, should be derivable easily and gradually one from the other, except in Darwinists' just-so stories.

3) There are in the living world almost infinite numbers of "superfunctions": functions controlling other functions controlling other functions. Such an intricate network of meanings and relations is rare even in human artifacts, and usually much less efficient and elegant there. That is a very heavy argument, not only in favour of design, but in favour of design of the highest kind. gpuccio
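The weasel model gpuccio describes is easy to make concrete. Below is a minimal sketch of that style of search; the 5% mutation rate and 100 offspring per generation are illustrative choices, not Dawkins' exact figures, and keeping the parent among the candidates is a monotone variant of his original. Note where `score()` compares every candidate against the known target: the "information" the search produces is supplied to it in advance, which is exactly gpuccio's point.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 capitals plus the space

def random_phrase(length):
    return "".join(random.choice(ALPHABET) for _ in range(length))

def mutate(phrase, rate=0.05):
    # Copy the parent, changing each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def score(phrase):
    # Characters matching the target: the step where the search consults
    # the very answer it is supposedly looking for.
    return sum(a == b for a, b in zip(phrase, TARGET))

def weasel(offspring=100):
    parent = random_phrase(len(TARGET))
    generations = 0
    while parent != TARGET:
        generations += 1
        # Keep the best of parent and offspring (a monotone variant;
        # Dawkins' original kept only the best offspring).
        candidates = [parent] + [mutate(parent) for _ in range(offspring)]
        parent = max(candidates, key=score)
    return generations

print("target reached in", weasel(), "generations")
```

With these parameters the target is typically reached in a few hundred generations, while a blind search over the same 27^28 phrases would never plausibly find it, illustrating how much work the built-in target is doing.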
I posted the following comment in response to Marc over at Human Events, following Granville Sewell's article:
Marc, Texas: "...I do know that it is mathematically possible for a billion monkeys pounding on typewriters to produce the complete works of Shakespeare by random chance, given enough time. Say, perhaps, 4.5 billion years." Consider the phrase, "To be or not to be, that is the question." If one ignores spaces and punctuation there are 30 characters in this string. There are 26 letters in the English alphabet, so there are 26^30 possible 30-character strings, or 2.8 x 10^42. If each of the billion monkeys typed 30 characters every second without pause for 4.5 billion years, they would generate 1.4 x 10^26 30-character strings. There is thus one chance in 2 x 10^16 (20,000,000,000,000,000 or 20 thousand trillion) that this English sentence would be produced, assuming the monkeys never typed any duplicate 30-character strings.
Those annoying big numbers seem to rear their ugly heads at the most inopportune times. One small correction concerning Dembski's universal probability bound. It is calculated as follows: 10^80: the number of elementary particles in the observable universe. 10^45: the maximum rate per second at which transitions in physical states can occur (i.e., the inverse of the Planck time -- 1 second is about 1.855 x 10^43 Planck times). 10^25: a billion times longer than the typical estimated age of the universe in seconds. 10^150 = 10^80 x 10^45 x 10^25 GilDodgen
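GilDodgen's two calculations can be checked in a few lines. This is a back-of-envelope sketch (the 365.25-day year is an assumption of the sketch); the odds against the monkeys come out at roughly 2 x 10^16 to 1, and the three factors of the universal probability bound multiply to 10^150 as stated.

```python
# Check the monkey arithmetic with Python's arbitrary-precision integers.
possible = 26 ** 30                       # 30-character strings over 26 letters
seconds = 4.5e9 * 365.25 * 24 * 3600      # ~1.42e17 seconds in 4.5 billion years
typed = 1e9 * seconds                     # one 30-char string per monkey per second
odds = possible / typed                   # ~2e16: odds against hitting the phrase
print(f"possible strings : {possible:.2e}")
print(f"strings typed    : {typed:.2e}")
print(f"odds against     : {odds:.2e}")

# Dembski's universal probability bound:
# particles x max state transitions per second x seconds.
upb = 10**80 * 10**45 * 10**25
print(f"universal probability bound: 10^{len(str(upb)) - 1}")  # prints 10^150
```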
"Intelligent Design is all about applying statistical mechanics to the laws of chemistry and physics and determining the probability of given patterns emerging from the set of all physically possible patterns." I think there's more to it than this. It's also about logical pathways from initial conditions to a later condition. One of the weaknesses of the blind-watchmaker hypothesis is that, so far, detailed pathways from initial states to proffered evolved states have no empirical basis. Take, for example, a Rubik's Cube. If I randomly mix up the cube, there will always be a set of steps that can take you from the initial mixed-up condition to the goal of uniform colors on all sides. However, I can peel the colored stickers off and put them back on in such a way that there is no possible path to uniform colors on all sides. Let's say that after replacing the stickers in such a fashion, I randomly mix up the cube. To a casual observer (i.e., one who has not attempted to determine the existence of such a logical path), there is no visual difference between a cube with a logical path to the goal and one that lacks a path. The question of whether the cube actually has a logical path to the goal cannot be answered with statistics. mike1962
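mike1962's sticker-peeling example has a classic analogue in the simpler 15-puzzle, used here as an illustrative stand-in for the cube: exactly half of all tile arrangements are unreachable by sliding moves, yet a solvable and an unsolvable board look identical at a glance. Unlike the cube, a short parity check decides the question. This is a minimal sketch, not anything from the thread itself.

```python
def solvable(board):
    """15-puzzle solvability test for a 4x4 board given as a flat list
    of 16 entries, with 0 standing for the blank.

    Count inversions among the 15 tiles; for an even-width board the
    configuration is solvable iff inversions + (blank's row counted
    from the bottom, 1-indexed) is odd.
    """
    tiles = [t for t in board if t != 0]
    inversions = sum(1 for i in range(len(tiles))
                       for j in range(i + 1, len(tiles))
                       if tiles[i] > tiles[j])
    blank_row_from_bottom = 4 - board.index(0) // 4
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
swapped = solved[:]                 # Sam Loyd's puzzle: swap tiles 14 and 15
swapped[13], swapped[14] = swapped[14], swapped[13]

print(solvable(solved))   # True  -- a path to the goal exists
print(solvable(swapped))  # False -- looks the same at a glance, but no path
```

The swapped board differs from a reachable one by a single transposition, invisible to casual inspection, which is exactly the point about statistics being unable to distinguish the two cases.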
I think it should be noted that it is extraordinarily difficult to determine probabilities in the diversification of life from some primordial, simple form of common ancestor. Mutation and selection operating over billions of years, with trillions of trillions of opportunities to produce heritable change, is formidable to model. I don't believe it's practically possible to estimate the odds with any certainty. However, we can look at what intelligent design theory predicts and see if it holds true in what we can actually observe. Intelligent design theory predicts that in the trillions of reproductive events in the chain going from reptiles to mammals, the complex structures that distinguish the two cannot reasonably emerge from mutation and selection alone. Michael Behe, in the book "The Edge of Evolution", examines what mutation and selection were able to accomplish in the last 50 years in P. falciparum (the single-celled parasite responsible for malaria). This is an extraordinarily well studied organism from top to bottom, gross anatomy to DNA sequence. In the past 50 years mutation and selection have had billions of trillions of opportunities to produce heritable change. It has been under intense selection pressure in the way of artificial efforts to eradicate it. In addition, its range is severely limited by its need for tropical climates. In billions of trillions of opportunities, what were mutation and selection able to accomplish in response to these selection pressures? Exactly what ID predicted. No more than two or three interdependent nucleotide changes that served to impart resistance to some drugs. Where only one change was required for resistance to a certain drug, it was acquired quickly and often - in as many as one in three individuals infected with the parasite. Where two or three interdependent mutations were required, resistance arose only a few times.
In response to the sickle cell mutation in human hemoglobin, mutation and selection operating in the parasite have yet to find a way around it. Neither have mutation and selection found a way to allow the parasite to survive in temperate climates. Given the successes and failures of mutation and selection in billions of trillions of opportunities to find more than very simple solutions, how are we to believe that in far fewer opportunities the same mechanism created all the far more complex structures that distinguish mammals from reptiles? Non sequitur. An important ID prediction was confirmed by observation, while the chance & necessity prediction (if it can even somehow be contrived into making a prediction about future evolutionary change) was an utter failure. ID actually makes predictions about the course of evolution. Neo-Darwinian theory doesn't - all it does is make ad hoc explanations for evolutionary events that have already transpired. As far as future predictions go, all it says is that sometimes things evolve and sometimes things stay the same. A theory that explains everything explains nothing. The best rebuttal I've seen for the lack of evolution in the parasite is "How do you know that an intelligent agency didn't act to prevent the parasite from evolving?" The answer to that is we don't know. We freely admit, as I did in the main article, that we cannot exclude false negatives. In the sidebar definition of ID we state that ID is about the positive evidence of design. We presume that in cases where there is no positive evidence of intelligent design, no design happened. In other words, we give mutation & selection the benefit of the doubt. DaveScot
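The distinction DaveScot draws between one-step and multi-step resistance is, at bottom, the arithmetic below. This is an illustrative sketch only: the per-site mutation rate and the parasite load per infection are assumed round numbers, not measured values from the thread or from Behe. The point it shows is how the waiting time multiplies by the inverse mutation rate for each additional mutation that must arrive together.

```python
# Illustrative arithmetic only: both constants below are assumptions.
mutation_rate = 1e-8             # assumed chance of one specific point
                                 # mutation per genome replication
parasites_per_host = 1e12        # assumed parasite count in one infection

one_step = 1 / mutation_rate     # replications expected before a single
                                 # required mutation appears: ~1e8
two_step = 1 / mutation_rate**2  # two specific mutations needed together,
                                 # neither useful alone: ~1e16

print(f"one-step change : ~1 in {one_step:.0e} replications")
print(f"two-step change : ~1 in {two_step:.0e} replications")
print(f"that is roughly {two_step / parasites_per_host:.0e} infections' worth of parasites")
```

Under these toy numbers a one-step change is expected constantly within a single host, while a coordinated two-step change needs thousands of infections' worth of replications, which is the shape of the asymmetry the comment describes.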
