
Sifting


UD member Timothy, in another thread, writes:

But in principle it is possible to arrive at this particular unique combination of symbols using a simple brute force algorithm (like, for example, an elementary counting algorithm) that works through all the possible combinations of symbols. Thus, given such a systematic algorithm, all the books of the world, those written and those yet to be written, are implied by it.

I thought this was important enough to deserve a thread of its own.

This is not generating new information. It is sifting through existing information looking for something in particular.

A set is defined (all possible combinations of letters and punctuation) that by definition includes the information being sought. A goal is then defined (e.g. the combination of letters in a Shakespearian play). A mechanism is then defined to sift through the set (e.g. a million monkeys with a million typewriters for a million years) looking for something already known to be a member of the set.
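As a toy illustration, here is a minimal Python sketch of such sifting; the alphabet and target are arbitrary choices of mine, not anything from the argument, and the search succeeds only because the target is, by construction, already a member of the set:

```python
# Toy illustration of "sifting": brute-force enumeration of every string over
# an alphabet until a target, already known to be in the set, turns up.
import itertools
import string

def sift(target, alphabet=string.ascii_lowercase):
    """Enumerate all strings of the target's length until the target appears.

    The search succeeds only because the target is, by construction, a member
    of the enumerated set; the enumeration creates no new information."""
    for n, candidate in enumerate(itertools.product(alphabet, repeat=len(target)), 1):
        if "".join(candidate) == target:
            return n  # number of candidates examined

print(sift("sift"))  # 321926 candidates examined; 26^4 = 456976 in the full set
```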

Similarly, we already know that in the set of all physically possible combinations of atoms, some of those combinations exhibit the properties of life (metabolism and reproduction). The question isn't whether or not the information exists. The question is whether the sifting mechanism has a reasonable chance of finding the target in the information set where we know it already exists. Given infinite opportunity, any sifting mechanism, even a totally random one, will eventually stumble onto the right combination. In the case of the spontaneous generation of life, the set is very large and the sifting mechanism (the laws of chemistry, physics, and statistical mechanics) doesn't have infinite opportunity.

Intelligent Design is all about applying statistical mechanics to the laws of chemistry and physics and determining the probability of given patterns emerging from the set of all physically possible patterns. It posits that for some patterns the universe, or some subset of it where the sifting takes place, is not big enough or old enough to have provided enough opportunity for certain patterns to have any reasonable possibility of being formed absent the actions of an intelligent agent (design) in the sifting process. In order to exclude false positives (a design inference where there was no design) the probability bound for a design inference is set very high. Dembski proposes that it be set at one chance in 10^150 opportunities. 10^150 is the product of the estimated number of elementary particles (protons, neutrons, electrons) in the observable universe (10^80), the maximum rate per second at which transitions in physical states can occur (10^45), and a billion times the estimated age of the universe in seconds (10^25). False negatives are still possible (no design inference where design actually took place) given that design can mimic chance.
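The decision rule can be sketched in a few lines; the threshold below is Dembski's 10^150 figure, and the example probabilities are purely illustrative:

```python
# Hedged sketch of the design-inference decision rule described above.
UPB_RESOURCES = 10**150  # Dembski's bound on trials available to the universe

def design_inference(event_probability, resources=UPB_RESOURCES):
    """Return True when an event is too improbable to credit to chance.

    Expected chance occurrences = probability * resources; if that is far
    below 1, chance is rejected. A False result is not evidence against
    design, since false negatives are explicitly allowed."""
    return event_probability * resources < 1

# A 1000-bit specified pattern: probability 2**-1000 (about 1e-301) per trial.
print(design_inference(2.0**-1000))  # True: beyond the universe's resources
print(design_inference(1e-120))      # False: chance cannot be ruled out
```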

The scientific or mathematical theory of design detection makes only minimal presumptions about the nature of intelligent agency. It presumes that:

1) intelligent agency predating human intelligent agency exists either within (natural) or without (supernatural) the observable universe;
2) the agency is capable of abstract thought;
3) the agency is capable of manipulating matter and energy to turn abstract thought into physical reality.

Any presumptions beyond that are philosophical or religious in nature and are the personal views of individuals, not the formal presumptions of the scientific theory of design detection.

Please be sure to read the sidebar Definition of Intelligent Design for a more concise definition of what ID is and is not. Unintelligent (dumb) design theories such as the modern synthesis (neo-Darwinism) don't presume that intelligent agency doesn't exist. They assert that intelligent agency is unnecessary and then reasonably apply Occam's Razor to shave it out of the equation. Intelligent design theory differs only in that it asserts that intelligent agency is necessary and thus cannot be removed from the equation.

Just as an aside, the "million monkey" proposal is quite inadequate for generating a Shakespearian play. In like manner, many scientists and mathematicians admit that life is unlikely to spontaneously generate in the known universe and propose a theory of infinitely many universes, with us just happening to be in one where life spontaneously emerged. This can be restated as: "If a million monkeys aren't enough, then just add more monkeys until there are enough." I like that. Preposterous, uninvestigable pseudoscience. And these same scientists and mathematicians say ID is pseudoscience. People who live in glass houses shouldn't throw stones…

Comments
magnan Let's look at an example in the engineering world of parallel processing. The example is the Ansari X Prize. This was a competition with a $10 million prize to the first team to produce a reusable vehicle that can carry passengers outside the atmosphere. The "generation time" is about a year. That's how long it takes to build a single prototype vehicle. If only one team were working on the problem only one vehicle per year would have a chance at success. About a dozen different teams worked in parallel on the problem generating about 12 test flights per year. One of them, after 5 or 10 years, won the prize. That's the power of parallel effort. The generation time is indeed a factor in the equation but the number of individual efforts to find a solution is equally important. A long generation time can be effectively negated by many generations being produced in parallel. This is such a simple, fundamental principle in any inventive undertaking it's hard for me to believe I need to belabor it and much easier to believe that anyone who claims to not understand it is being purposely obtuse. DaveScot
February 16, 2008 04:42 PM PDT
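The parallel-effort arithmetic DaveScot describes works out as follows; this is a rough sketch, and the 2% per-attempt success chance is a made-up figure:

```python
# Illustrative numbers only: many attempts per generation raise the odds of
# success just as surely as more generations do.
def p_success(p_per_attempt, attempts_per_year, years):
    """Probability of at least one success among independent attempts."""
    return 1.0 - (1.0 - p_per_attempt) ** (attempts_per_year * years)

# One team building one prototype per year, vs. a dozen teams in parallel:
print(p_success(0.02, 1, 10))   # ~0.18 (hypothetical 2% chance per attempt)
print(p_success(0.02, 12, 10))  # ~0.91 (same 10 years, 12 parallel efforts)
```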
magnan Every single time one of the parasites replicates, whether in the human body or in the mosquito gut, there is one chance for heritable change to occur. I fail to see how any factor other than the total number of replications is relevant to the speed at which evolution can produce variants. Generation time doesn't exist in a vacuum. Generation time must be combined with how many new generations are being produced in parallel. In this parasite the generation time is very short, more so in the asexual stage, but by far the biggest factor to consider is how many generations are produced in parallel. This is a staggeringly large number for this parasite, and it is exactly this factor which enables it to produce drug-resistant variant strains with a rapidity approaching that of bacteria. DaveScot
February 16, 2008 03:56 PM PDT
magnan: I am no expert on the malaria parasite, but I ask you: why are you saying that "In the mosquito phase of course there is no drug treatment and the selection forces are against the CQR variant"? Even if that were true for the observed mutations (I really don't know), why should it be true of any possible beneficial mutation? Are you suggesting that any beneficial mutation is detrimental as soon as the environmental pressure decreases, even temporarily? Are you maybe one of those treacherous ID proponents? :-)

In other words, the important thing in Behe's argument is that, in the presence of a very strong and specific selective pressure (be it even only in half the cycle of the parasite), no real "evolution" has taken place. In darwinian logic, there could be infinitely many ways the parasite could evolve through beneficial mutations which need not be detrimental in the other half of the parasite cycle. After all, if parasites survive the cycle in man through mutation, and then bring that "neutral-beneficial" mutation to the mosquito cycle, in the end the resistant strains should be easily fixed. So, one of two things: either no complex beneficial mutation took place, or all beneficial mutations are in reality characterized by a severe loss of basic function. In both cases, darwinists are not in a good position. And anyway, the malaria parasite did not undergo any new speciation, even minimal. gpuccio
February 16, 2008 03:18 PM PDT
DaveScot (#1): "In billions of trillions of opportunities what was mutation and selection able to accomplish (for the malaria parasite) in response to these selection pressures (for resistance to chloroquine)? Exactly what ID predicted. No more than two or three interdependent nucleotide changes that served to impart resistance to some drugs."

I am an ID advocate, but I have had trouble with this example. It seems to me that with P. falciparum trying to develop CQR over the last 50 years we don't really have the astronomical numbers of successive generations suggested for selection to operate. I posted on this before, but no one responded. I hate to be a devil's advocate, but I would like to see why this reasoning is invalid.

Falciparum has basically two stages in its life cycle: in the mosquito and in the human body. The selection forces are strongly for the CQR variant during drug treatment in a human being. In the mosquito phase of course there is no drug treatment and the selection forces are against the CQR variant, resulting, after a while, in a parasite population of the normal non-CQR strain. So it seems that on average RV + NS is continuously operating to develop the CQR complex of mutations only during an acute infection in any individual while the drug is being administered. Usually Falciparum would seem to have to start all over again from "scratch" (from the normal free-living mosquito variant) in each infected human individual, unless the person is infected by a mosquito having just bitten a malaria patient with CQR malaria.

So by this reasoning, over the entire 50 years since chloroquine became available, the RV + NS process with P. falciparum seems to have had not much more than the number of parasite generations it took for the course of the disease in any one patient. Parasite generation time in the human body is estimated to be about 8 generations per 48 hours, or 4 generations per day. The course of the disease seems to be about 5 weeks or 35 days with no drug treatment, and a rough estimate of the duration with drug treatment, if CQR develops, could be as much as twice that, or 70 days. The parasite would then have had less than 70 x 4 = 280 generations for RV + NS to operate in building and spreading a CQR variant in the victim being administered chloroquine. magnan
February 16, 2008 02:59 PM PDT
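magnan's back-of-envelope numbers can be checked directly; the parasite load per patient below is a hypothetical figure, added only to illustrate the parallelism point raised in the replies above:

```python
# Checking magnan's estimates (all figures are his, except where noted).
generations_per_day = 8 // 2           # "about 8 per 48 hours"
days_with_treatment = 70               # his rough upper estimate
generations_per_infection = generations_per_day * days_with_treatment
print(generations_per_infection)       # 280, matching the comment

# DaveScot's counterpoint: the number of parallel replications matters too.
parasites_per_patient = 1e12           # hypothetical parasite load, for illustration
print(generations_per_infection * parasites_per_patient)  # ~2.8e14 replication events
```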
Timothy V Reeves: I'll try some answers to your thoughtful questions.

"However, you have admitted that a naked 'search' process can eventually come up with the configurational goods, if in an inordinately long time"

Yes, but here "inordinately" means really empirically impossible: we are facing problems which cannot realistically be solved computationally in any finite time. Only a theoretical reasoning about infinite search has to admit the possibility of a random solution, but that has no relevance in the real universe.

"there need be no sneaky information waiting in the wings prompting the program what to select or reject, because those configurations, once arrived at,"

That's the point: you can't arrive at them. Take the problem of the Origin Of Life (OOL): the simplest living organisms ever known, archaea and bacteria, have genomes of at least 10^6 base pairs, which means a complexity of more than 1 in 4^(10^6). In comparison, Dembski's UPB is a joke. The important point is that no simpler autonomous living being has ever been observed or reasonably conceived. All the known scenarios about OOL, from Urey-Miller to the RNA world, are pure imagination, and of the wildest type, as even some important non-ID scientists have admitted. But there is more: even if you had, by miracle, the raw information ready (that is, a bacterial genome), still you would not have life. Nobody has ever been able to take a complete bacterial genome and build, from non-living molecules, a living bacterium. And something which is impossible in an intelligent lab should have happened in the primitive ocean, or underground, by virtue of a supplementary incredible magic, once the first incredible magic had supplied the necessary genome? Let's remember that anything simpler than a bacterium "is not a replicator". Replication in the biological sense is observed only in the very complex system of a genome, a system of transcription, a system of translation, a metabolic system, a membrane, and many other incredibly complex and efficient things. OOL is the early death of any materialistic theory of life.

But even if, like Darwin, you choose to "take life for granted", you still have to explain how a precursor which was certainly almost identical, or maybe identical, to present-day bacteria and archaea, which are still the most adapted and successful forms of life on the planet, gave rise to almost infinite, and very different, forms of much more complex life, sometimes in a very short time (see the Cambrian explosion). The problem of the origin of species is no less a conundrum, for the darwinists, than OOL. There is no discussion possible. Darwinian theory is, simply, an impossible explanation.

"imaginary brute processing scenarios raise questions about just what the ID communities' 'New Information' means"

It's rather simple. We have an empirical, incredible truth: although complex functional configurations cannot be generated by random forces or by necessary laws, we constantly observe them in nature. They are of two kinds: biological information in living beings, and human artifacts. Even if some earnest darwinist, even on this blog, has tried to affirm that the two categories are fundamentally different, that's not true. In biological beings we can observe myriads of examples of very complex machines, regulatory networks, etc. which are very similar to human-engineered machines or software, only more efficient and complex. Indeed, many times human engineers try to imitate solutions found in living beings.

But let's look at human artifacts. Take a simple human algorithm, like an ordering program for a PC, which operates intelligently to order a content which is given as input. Just consider that kind of information. It is CSI, because it is complex enough, although not very complex (for convenience, let's suppose that it is a string of 1000 bits; that puts us at a complexity of more than 1 in 10^300, well beyond the UPB). Well, it is easy to affirm that that specific sequence of bits has never existed in nature, in any form, until some human created it. Still, that sequence is not just one of the possible random sequences of 1000 bits. It is functional. In the correct environment (the PC) that information works as an ordering program. More important, that information was written that way "because" it had to work as an ordering program. That's a very deep thought, isn't it? We have an informational entity which would never have existed otherwise, and which still is specifically functional. And why does it exist? Because an intelligent agent, a human, wanted it to exist, and had the purpose and the understanding and the means to create it.

Don't ask me how intelligence can do something which non-intelligent nature cannot do. I don't know. Nobody knows. But it's that way. We see it happening every minute. In other words, intelligence can "select" specific configurations according to a purpose, without having to perform a completely random search and sifting, which could never find those functional configurations in any existing time. Human intelligence can do that. Nothing else we know in nature can do that. And yet, we have another huge repository of the highest information of that kind (CSI: specified, functional, selected information) which "must" have been created by a similar process. There is no alternative. Human beings, as we know them, don't seem to be a reasonable solution to that. As far as we know, they were not there when life started on our planet. Aliens, it could be. There is always, for them, the "who designed the designer" problem, but after all we don't know much of aliens, and the problem could be postponed.

There is a possible entity which human thought is rather familiar with, and that is a God. Is that, as far as we can know, a possible scientific answer? You can bet. It definitely is. First of all, in most mature conceptions of God, He is the source of the highest qualities we observe in humans. So, instead of asking how God could have an intelligence similar to humans', we could just put it the other way: how can humans have an intelligence similar to God's? The answer is simple: God gave it to them. For Christians, it is very easy: God created humans in His image. No surprise, then, that we find intelligent design in life (but also, with a different degree of evidence, in the whole universe) as we find it in human artifacts. The same principle of intelligent agency is there: divine and human. The same concepts of purpose and meaning and will are there. And finally, does the "who designed the designer" problem apply here? Absolutely not. Why? Because, in all mature conceptions of a God, He is a cause outside creation, that is, outside time and space, and outside the law of causality. Besides, He is necessarily simple, and the origin of all complexity. It's called transcendence. Is an explanation based on God scientific? It certainly is, if God exists. After all, science is a search for truth, not for materialistic convenience. Put it the other way.

If God created the world and implemented specific higher information in living beings in the course of time, how could any materialistic explanation of reality be true? It's impossible. Science is a search for what is true, within the limits of what science itself can understand from an empirical point of view. If God exists, and if His activity in creation is empirically observable, then no science is ultimately possible without a concept of God. gpuccio
February 16, 2008 09:26 AM PDT
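The orders of magnitude gpuccio cites above can be verified with simple log arithmetic; the biological premises are his, and only the exponents are computed here:

```python
# Verifying the exponents cited above (log arithmetic only).
import math

genome_bp = 10**6                        # "at least above 10^6 base pairs"
log10_genome_space = genome_bp * math.log10(4)
print(f"4^(10^6) ~ 10^{log10_genome_space:.0f}")  # ~10^602060 possible sequences

program_bits = 1000                      # the 1000-bit ordering-program example
log10_program_space = program_bits * math.log10(2)
print(f"2^1000 ~ 10^{log10_program_space:.0f}")   # ~10^301, beyond the UPB of 10^150
```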
Gil: "It would be interesting to calculate a realistic probability bound for the formation of life on earth by chemical processes." It's even harder than you suggest, since many of the precursors can only be formed under directly opposing conditions. Calculations of the concentrations of the necessary sub-species imply concentrations so low that you are not even likely to encounter some of the required amino acids in the same litre of "soup". So, from the start, you can't get enough amino acids of all the right types in one place. Then, you can't get those amino acids to build anything biologically meaningful in terms of size, since you need to add energy, which is likely to break things down or form useless tarry by-products. Then of course, the order and number of the molecules formed is rather important. Finally, as has been noted before, we can start with a single drop of water containing the simplest single-cell organism and disrupt it to produce all the necessary precursors. That skips the first three steps above. What's the chance of putting Humpty Dumpty back together again? Calculating the probability of any one of these stages makes the UPB look generous, and the full odds against mean multiplying all the stages together. SCheesman
February 16, 2008 08:55 AM PDT
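SCheesman's point that the stages multiply can be sketched in a few lines; every stage probability below is a placeholder for illustration, not a measured value:

```python
# Independent stages multiply, so combined odds collapse quickly.
# All stage probabilities here are illustrative placeholders.
stages = {
    "all amino acid types co-located": 1e-30,
    "assembly without degradation":    1e-40,
    "correct order and number":        1e-60,
}
combined = 1.0
for stage, p in stages.items():
    combined *= p
print(combined)  # 1e-130 from just three illustrative stages
```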
OK then, let's have a quick look at this. I'll confine my comments to the issue of 'New Information'. There are various things Dave and gpuccio mention, like 'CSI' and Dembski's UPB, that I really need to get up to speed on, but I'll see how far I can get without them.

So Dave, you accept that very inefficient processes can produce configurations identical to those self-sustaining configurations we call life, although in a prohibitively long time. I didn't actually have the search-reject-select computational scenario in mind (what I understand you refer to as 'sifting'), but rather just the inexorable 'search' bit of the process, stripped of selection. However, you have admitted that a naked 'search' process can eventually come up with the configurational goods, if in an inordinately long time.

In one rather idealized, trivial mathematical sense there is no new information out there, in that all possibilities are registered in the great uncharted platonic volumes of mathematics. So when we talk about 'New Information' we are really thinking about configurations 'new' in the sense that they are 'new' relative to our existing world. In the case of Dawkins' 'ME THINKS…' program it is very easy to maintain that no new information is generated, because the very thing being looked for is written into the program from the outset. (Nice one Richard, that wretched little program of yours is the best anti-evolution propaganda I've come across!)

Now, and here's the rub, when it comes to configurations of real atomic stuff, we need NOT add the 'sifting' function to our naked brute search process. Unlike Dawkins' infamous 'ME THINKS…' program, there need be no sneaky information waiting in the wings prompting the program what to select or reject, because those configurations, once arrived at, self-select, self-sustain and self-perpetuate given their environment – that's what life is all about. They inhabit platonic space, but should they, perchance, be discovered by brute processing, they lock in, not because of some crib information hidden away in the physical 'program', but because they are what they are – self-sustaining, self-perpetuating. Like a weakened reification of the ontological argument, they self-select and self-sustain for their own self-referencing structural reasons.

Admittedly there are big, big, big issues here about how quickly brute processing can do what it is supposed to have done in the allotted time (and Dembski's UPB is relevant here), but imaginary brute-processing scenarios raise questions about just what the ID community's 'New Information' means. If in principle brute processing can generate structural configurations identical to ID's (although it is a recurring gut feeling that it could never be done in a timely way), what distinguishes 'New Information'? Why is it a distinctly ID concept? Can we really claim that ID is the ONLY way of generating new information?

We could, of course, define 'New Information' as an extrinsic property of a configuration conferred upon it by external intelligence; that is, a property that exists by virtue of some intentional purpose of a pre-existing intelligence that has generated it. This, of course, muddies the water considerably, as intelligence is the wild card here; it is not fully understood, even in the human case. So, Dave, in short I am still not clear just what 'New Information' is – is it an extrinsic connection that a structure has with the intelligent entity that generated it? Or are you simply defining it as structural configurations that brute processing can't produce except in a prohibitively long time? Timothy V Reeves
February 16, 2008 08:07 AM PDT
It would be interesting to calculate a realistic probability bound for the formation of life on earth by chemical processes. This would be the number of atoms and molecules on the surface of the earth and in the oceans, times the maximum rate at which chemical reactions can occur, times the number of seconds in a few hundred million years (since life apparently appeared almost immediately after the earth cooled sufficiently). The probabilistic resources available for the formation of life on earth by purely materialistic means would obviously be vastly smaller than those used to calculate Dembski's UPB. GilDodgen
February 16, 2008 07:55 AM PDT
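A rough sketch of the earth-local bound GilDodgen proposes above; every figure here is an order-of-magnitude placeholder rather than a measured value:

```python
# Earth-local probabilistic resources, per the comment above.
# All inputs are hypothetical order-of-magnitude placeholders.
surface_atoms = 1e47         # hypothetical: atoms in oceans and crust surface
reactions_per_sec = 1e13     # hypothetical: max chemical interaction rate per atom
seconds = 5e8 * 3.15e7       # a few hundred million years, in seconds

earth_resources = surface_atoms * reactions_per_sec * seconds
print(f"{earth_resources:.1e}")  # ~1.6e+76 trials, far below the UPB's 1e150
```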
Gil: Thanks for the correction on the UPB. I'll make a note of it. DaveScot
February 16, 2008 04:52 AM PDT
hrun0815: "A well adapted organism remains unchanged as the environment remains unchanged" Well, that's exactly the point:

1) The environment has not remained unchanged. The ubiquitous use of chloroquine is, for the parasite, the most important environmental threat one can imagine. The emergence of S hemoglobin, in more distant times, is another example. What other kind of fitness-landscape change do we need to activate darwinian evolution? It seems that darwinists always have a double standard ready: in distant and untestable times, any imaginary change in environment is declared capable of triggering cows into whales and similar miracles, while in historical times a major environmental threat, like the ubiquitous use of a drug in the target population, cannot even produce an efficient method of resistance!

2) If the principle you state is correct (and I think it is more correct than you think), then why would the simplest, oldest and best adapted organisms we know of, that is bacteria and archaea, have changed so much, according to darwinists, as to have given rise to the full chain of evolution, up to humans? I have always affirmed that simpler forms of life are usually the best adapted, and that complexity in itself is a source of weakness in living organisms. In other words, the true driving principle behind the growing complexity of the living world seems not to be survival, but rather the expression of the fundamental principles of life, together with a true, infinite creativity. gpuccio
February 15, 2008 06:30 PM PDT
hrun0815:
Well, I guess in some cases it does, when drugs are injected to treat humans. In those cases P. falciparum did change effectively as expected.
The ID community is suggesting the following:

> that drug resistance must usually involve the organism disabling itself (decreasing its overall information content) so that the drug cannot act against it, leaving it a lesser organism. They parallel it to chopping off one's arms to keep from getting caught in arm-grabbing traps.

> that resistance is limited to what can be achieved with a single mutational event. If a dual event is required, resistance will develop only if one of the two events alone offers at least partial resistance.

The bottom-line prediction of ID is that if a drug can be found which cannot be resisted by a single mutational event, then resistance will not develop. You obviously have much better access to the literature than I do. Show me in the literature that they are wrong. bFast
February 15, 2008 02:23 PM PDT
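bFast's single- versus dual-mutation distinction comes down to squaring a small rate. A minimal sketch, where both numbers are illustrative assumptions rather than measured P. falciparum values:

```python
# Single- vs. dual-mutation arithmetic (illustrative assumptions only).
mu = 1e-10                   # hypothetical rate for one specific point mutation
replications = 1e20          # Behe-style estimate of total parasite replications

expected_single = mu * replications         # one specific mutation suffices
expected_dual = (mu ** 2) * replications    # two specific mutations needed at once

print(f"single-step resistance: ~{expected_single:.0e} expected origins")  # ~1e+10
print(f"dual-step resistance:   ~{expected_dual:.0e} expected origins")    # ~1
```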
Thanks very much for the replies, Dave and gpuccio! I've got a bit of catching up to do here! I'll be back! Timothy V Reeves
February 15, 2008 01:51 PM PDT
I don't really get the P. falciparum example. Why would P. falciparum change? Does the environment it finds itself in change at all? Well, I guess in some cases it does, when drugs are injected to treat humans. In those cases P. falciparum did change effectively as expected. What does this tell us? A well adapted organism remains unchanged as the environment remains unchanged. And, small organisms with a high reproduction rate can often efficiently develop drug resistance. Why would a design proponent or darwinian evolution proponent expect anything else? hrun0815
February 15, 2008 10:43 AM PDT
Very good summary! I would just add that the problem remains of identifying and understanding better what it is that makes the specification, in other words what it is that makes the information "intelligently designed" and allows us to recognize it and to point to it as the product of intelligent agents. Put differently, it means that we have to ask, at least to some degree, what defines a conscious intelligent agent, or at least its recognizable output. In the end, the demonstration that appropriately specified (for instance, appropriately functional) solutions are only a really tiny subset of the whole set of possibilities is a basic priority of ID theory. While that concept is certainly intuitively obvious, at least in my opinion, it definitely deserves specific investigation, both theoretical and empirical.

Another important point, related to the first, is that the process of sifting needs a recognition procedure which can fix the right result. In many cases, maybe in all, that means that the system must have some specific information about the result to be obtained, which is probably the raw meaning of what Dembski and Marks declare in their latest papers. Let's take the example of Dawkins' "Methinks it is like a weasel" model. There, the sifting is extremely efficient exactly because the sifting system, though using a random variation procedure to generate change, already knows the final sentence which is the correct solution to the search process. In that context, it is quite easy (although not trivial) to get to the right solution in reasonable time. But it is correct to ask: where is the new information? The system has only "copied" information which was already present, through a very troublesome random mechanism.

In biological systems, function is believed to be the "information" that guides the system, through natural selection. That would be the only way to shorten the infinite time required to get to the high grade of CSI observable in all living beings. But we must remember that:

1) Function is recognizable only in the appropriate context. One of the darwinian lies is that any new function could be selected, and that this would enlarge the subset of specified results to islands of reasonable probability. That's not true. In any specified context, and there are myriads of different specified contexts in the history of life, only very few solutions are functional. The fitness landscape is really determined not only by changes in environment, but also, and especially, by the already-set context of the existing organism.

2) Most complex functions cannot be derived in a gradual way from other functions. That's the characteristic of functions, after all. Each function "does" a specific thing. There is no reason in the world, neither logical nor empirical, why different functions, which "do" different things, should be derivable easily and gradually one from the other, except in darwinists' just-so stories.

3) There are in the living world almost infinite numbers of "superfunctions": functions controlling other functions controlling other functions. Such an intricate network of meanings and relations is rare even in human artifacts, and usually much less efficient and elegant there. That is a very heavy argument, not only in favour of design, but also in favour of design of the highest kind. gpuccio
February 15, 2008 08:48 AM PDT
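For readers who haven't seen it, here is a minimal reconstruction of the Weasel model gpuccio analyzes above (a sketch after Dawkins' published description, not code from either author). The key line is the selection step, which consults the target sentence directly:

```python
# Minimal Weasel-style sketch: cumulative selection toward a KNOWN target.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Selection consults the target directly: the information is copied,
    # not created, which is exactly the objection raised above.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

def weasel(pop_size=100):
    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        children = [mutate(parent) for _ in range(pop_size)]
        parent = max(children, key=fitness)
    return generation

random.seed(0)
print(weasel())  # converges quickly; a blind, unselected search effectively never would
```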
I posted the following comment in response to Marc over at Human Events, following Granville Sewell's article:
Marc, Texas: "...I do know that it is mathematically possible for a billion monkeys pounding on typewriters to produce the complete works of Shakespeare by random chance, given enough time. Say, perhaps, 4.5 billion years." Consider the phrase, "To be or not to be, that is the question." If one ignores spaces and punctuation there are 30 characters in this string. There are 26 letters in the English alphabet, so there are 26^30 possible 30-character strings, or 2.8 x 10^42. If each of the billion monkeys typed 30 characters every second without pause for 4.5 billion years, they would generate 1.4 x 10^26 30-character strings. There is thus one chance in 2 x 10^16 (20,000,000,000,000,000, or 20 thousand trillion) that this English sentence would be produced, assuming the monkeys never typed any duplicate 30-character strings.
Those annoying big numbers seem to rear their ugly heads at the most inopportune times. One small correction concerning Dembski's universal probability bound. It is calculated as follows:

10^80: the number of elementary particles in the observable universe.
10^45: the maximum rate per second at which transitions in physical states can occur (i.e., the inverse of the Planck time -- 1 second is about 1.855 x 10^43 Planck times).
10^25: a billion times longer than the typical estimated age of the universe in seconds.

10^150 = 10^80 x 10^45 x 10^25 GilDodgen
February 15, 2008 08:45 AM PDT
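GilDodgen's arithmetic above can be re-run directly from his stated figures:

```python
# Re-running the comment's arithmetic (no new data, just the stated figures).
strings_possible = 26 ** 30                     # 30-letter strings, 26 letters
strings_typed = 1e9 * (4.5e9 * 3.15e7)          # 1e9 monkeys, 1 string/sec, 4.5e9 years

print(f"possible: {strings_possible:.1e}")      # ~2.8e+42
print(f"typed:    {strings_typed:.1e}")         # ~1.4e+26
print(f"odds: 1 in {strings_possible / strings_typed:.1e}")  # ~1 in 2e+16

# The universal probability bound, as derived in the comment:
upb = 10**80 * 10**45 * 10**25
print(f"UPB resources: 10^{len(str(upb)) - 1}")  # 10^150
```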
"Intelligent Design is all about applying statistical mechanics to the laws of chemistry and physics and determining the probability of given patterns emerging from the set of all physically possible patterns." I think there's more to it than this. It's also about logical pathways from initial conditions to a later condition. One of the weaknesses of the blindwatchmaker hypothesis is that, so far, detailed pathways from initial states to proffered evolved states have no empirical basis. Take for example, a Rubic's Cube. If I randomly mix up the cube, there will always be a set of steps that can take you from the initial mixed up condition to the goal of uniform colors on all sides. However, I can peel the colored stickers off and put them back on in such a way that there is no possible path to uniform colors on all sides. Let's say that after replacing the stickers in such a fashion, I randomly mix up the cube. To a casual observer (i.e, one who has not attempted to determine the existence of such a logical path), there is no visual difference between a cube with a logical path to the goal, and one that lacks a path. The question of whether the cube actually has a logical path to the goal cannot be answered with statistics.mike1962
February 15, 2008 08:40 AM PDT
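mike1962's sticker-peeling scenario has a simpler classic analogue in the 15-puzzle (a substitution for illustration, not his example): a parity invariant, rather than statistics, decides whether a visually ordinary configuration is reachable at all:

```python
# 15-puzzle analogue of the re-stickered cube: some configurations look
# ordinary but are provably unreachable, and an invariant decides which.
def solvable(tiles):
    """15-puzzle solvability test: inversion count plus blank-row parity.

    tiles: a length-16 list read row by row on the 4x4 board, 0 = blank."""
    perm = [t for t in tiles if t != 0]
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    blank_row_from_bottom = 4 - tiles.index(0) // 4
    # For a width-4 board, a position is reachable iff this sum is odd.
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
swapped = solved[:]
swapped[13], swapped[14] = swapped[14], swapped[13]  # Sam Loyd's 14-15 swap

print(solvable(solved))   # True
print(solvable(swapped))  # False: looks ordinary, provably unreachable
```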
I think it should be noted that it is extraordinarily difficult to determine probabilities in the diversification of life from some primordial, simple form of common ancestor. Mutation and selection operating over billions of years with trillions of trillions of opportunities to produce heritable change is formidable. I don't believe it's practically possible to estimate the odds with any certainty. However, we can look at what intelligent design theory predicts and see if it holds true in what we can actually observe. Intelligent design theory predicts that in the trillions of reproductive events in the chain going from reptiles to mammals, the complex structures that distinguish the two cannot reasonably emerge from mutation and selection alone.

Michael Behe, in the book "The Edge of Evolution", examines what mutation and selection were able to accomplish in the last 50 years in P. falciparum (the single-celled parasite responsible for malaria). This is an extraordinarily well studied organism from top to bottom, gross anatomy to DNA sequence. In the past 50 years mutation and selection have had billions of trillions of opportunities to produce heritable change. The parasite has been under intense selection pressure in the way of artificial efforts to eradicate it. In addition, its range is severely limited by needing tropical climates. In billions of trillions of opportunities, what were mutation and selection able to accomplish in response to these selection pressures? Exactly what ID predicted. No more than two or three interdependent nucleotide changes that served to impart resistance to some drugs. Where only one change was required for resistance to a certain drug, it was acquired quickly and often - in as many as one in three individuals infected with the parasite. Where two or three interdependent mutations were required, resistance arose only a few times. In response to the sickle cell mutation in human hemoglobin, mutation and selection operating in the parasite have yet to find a way around it. Neither have mutation and selection found a way to allow the parasite to survive in temperate climates.

Given the successes and failures of mutation and selection in billions of trillions of opportunities to find more than very simple solutions, how are we to believe that in far fewer opportunities the same mechanism created all the far more complex structures that distinguish mammals from reptiles? Non sequitur. An important ID prediction was confirmed by observation, while the chance-and-necessity prediction (if it can even somehow be contrived into making a prediction about future evolutionary change) was an utter failure. ID actually makes predictions about the course of evolution. Neo-Darwinian theory doesn't - all it does is make ad hoc explanations for evolutionary events that have already transpired. As far as future predictions go, all it says is that sometimes things evolve and sometimes things stay the same. A theory that explains everything explains nothing.

The best rebuttal I've seen for the lack of evolution in the parasite is "How do you know that an intelligent agency didn't act to prevent the parasite from evolving?" The answer to that is we don't know. We freely admit, as I did in the main article, that we cannot exclude false negatives. In the sidebar definition of ID we state that ID is about the positive evidence of design. We presume that in cases where there is no positive evidence of intelligent design, no design happened. In other words, we give mutation and selection the benefit of the doubt.
DaveScot
February 15, 2008 08:19 AM PDT
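The comparison of probabilistic resources DaveScot draws above can be sketched in orders of magnitude; the lineage figures below are illustrative guesses, not numbers from Behe or from the post:

```python
# Order-of-magnitude comparison of "opportunities for heritable change".
falciparum_trials = 1e20          # cited 50-year total of parasite replications
lineage_years = 2e8               # hypothetical reptile-to-mammal time span
generation_time = 5               # hypothetical years per generation
population = 1e9                  # hypothetical average breeding population

lineage_trials = (lineage_years / generation_time) * population
print(f"parasite trials: {falciparum_trials:.0e}")   # 1e+20
print(f"lineage trials:  {lineage_trials:.0e}")      # 4e+16, thousands of times fewer
```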