Uncommon Descent Serving The Intelligent Design Community

The Circularity of the Design Inference


Keith S is right. Sort of.

As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument. As Keith describes the argument:

In other words, we conclude that something didn’t evolve only if we already know that it didn’t evolve. CSI is just window dressing for this rather uninteresting fact.

In its most basic form, a specified complexity argument takes a form something like:

  • Premise 1) The evolution of the bacterial flagellum is astronomically improbable.
  • Premise 2) The bacterial flagellum is highly specified.
  • Conclusion) The bacterial flagellum did not evolve.

Keith’s point is that in order to show that the bacterial flagellum did not evolve, we have to first show that the evolution of the bacterial flagellum is astronomically improbable, which is almost the same thing. Specified complexity moves the argument from arguing that evolution is improbable to arguing that evolution didn’t happen. The difficult part is showing that evolution is improbable. Once we’ve established that evolution is vastly improbable, it seems a very minor and obvious point that it would therefore not have occurred.

In some cases, people have understood Dembski’s argument incorrectly, propounding or attacking some variation on:

  1. The evolution of the bacterial flagellum is highly improbable.
  2. Therefore the bacterial flagellum exhibits high CSI.
  3. Therefore the evolution of the bacterial flagellum is highly improbable.
  4. Therefore the bacterial flagellum did not evolve.

This is indeed a very silly argument, and people need to stop propounding and attacking it. CSI and specified complexity do not help in any way to establish that the evolution of the bacterial flagellum is improbable. Rather, the only way to establish that the bacterial flagellum exhibits CSI is to first show that its evolution was improbable. Any attempt to use CSI to establish the improbability of evolution is deeply fallacious.

If specified complexity doesn’t help establish the improbability of evolution, what good is it? What’s the point of the specified complexity argument? Consider the following argument:

  1. Each snowflake pattern is astronomically improbable.
  2. Therefore it doesn’t snow.

Obviously, it does snow, so the argument must be fallacious. The fact that an event or object is improbable is insufficient to establish that it did not form by natural means. That’s why Dembski developed the notion of specified complexity, arguing that in order to reject chance, events must be both complex and specified. Hence, it’s not the same thing to say that the evolution of the bacterial flagellum is improbable and to say that it didn’t happen. If the bacterial flagellum were not specified, it would be perfectly possible for it to evolve even though it is vastly improbable.
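To make the fallacy explicit, here is a worked illustration (added here for clarity; assume N equally likely snowflake patterns):

    % Each of N equally likely patterns is individually rare,
    % yet it is certain that some pattern or other forms:
    P(\text{pattern } i) = \tfrac{1}{N}, \qquad
    P(\text{some pattern forms}) = \sum_{i=1}^{N} \tfrac{1}{N} = 1.

The improbability of each particular outcome is compatible with the certainty of some outcome; specification is what singles out a target worth explaining.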

The notion of specified complexity exists for one purpose: to give force to probability arguments. If we look at Behe’s irreducible complexity, Axe’s work on proteins, or practically any work by any intelligent design proponent, the work seeks to demonstrate that the Darwinian account of evolution is vastly improbable. Dembski’s work on specified complexity and design inference works to show why that improbability gives us reason to reject Darwinian evolution and accept design.

So Keith is right, arguing for the improbability of evolution on the basis of specified complexity is circular. However, specified complexity, as developed by Dembski, isn’t designed for the purpose of demonstrating the improbability of evolution. When used for its proper role, specified complexity is a valid, though limited argument.

 

Comments
Adapa #86
The processes of transcription and translation that take DNA to mRNA to polypeptide chain are 100% deterministic physical processes.
The fact that the system follows natural law has never been in question, by anyone. Perhaps the reason you are such a vociferous ID critic is because you don't understand the issues.
Upright BiPed
November 16, 2014 at 08:54 AM PDT
Thanks for admitting the ID argument is busted.
It's clear that you're not interested in understanding the argument. You think you've won some kind of victory merely by declaring it to yourself. You've done nothing against the ID inference and have actually supported the non-circularity argument I proposed (by saying nothing about it). You have a massively difficult task in showing the evolutionary origin of multi-layered cellular information processing, cell-repair, feedback control loops, protein folds, epigenetics ... etc. It's a question of origins. You might want to inform Jim Shapiro that there's really no problem here. It's all 100% deterministic on known "laws of chemistry and physics": "One of the great scientific ironies of the last century is the fact that molecular biology, which its pioneers expected to provide a firm chemical and physical basis for understanding life, instead uncovered powerful sensor and communication networks essential to all vital processes, such as metabolism, growth, the cell cycle, cellular differentiation, and multicellular morphogenesis. ... [T]he life sciences have converged with other disciplines to focus on questions of acquiring, processing, and transmitting information to ensure the correct operation of complex vital systems." (p. 4) -- Evolution: A View from the 21st Century
Silver Asiatic
November 16, 2014 at 08:45 AM PDT
Dr. Stephen Meyer: Chemistry/RNA World/crystal formation can't explain genetic information - video Excerpt 5:00 minute mark: "If there is no chemical interaction here (in the DNA molecule) you can't invoke chemistry to explain sequencing" http://www.youtube.com/watch?v=yLeWh8Df3k8
bornagain77
November 16, 2014 at 08:36 AM PDT
Silver Asiatic: We don’t consider deterministic physical processes (water running downhill) as the output of a communication/information process.
The processes of transcription and translation that take DNA to mRNA to polypeptide chain are 100% deterministic physical processes. These are chemical molecules undergoing reactions by following the laws of chemistry and physics. Thanks for admitting the ID argument is busted.
Adapa
November 16, 2014 at 08:22 AM PDT
Upright BiPed #34
I think it would be a great advance to ID, if ID proponents would get it clear in their heads what the I in CSI is.
Agreed. Our opponents (and I think this OP itself?) misunderstand this. KF offered the following:
Further, the relevant configuration-sensitive separately and “simply” describable specification is often that of carrying out a relevant function on correct organisation and linked interaction of component parts, ranging from text strings for messages in English, to source or better object code for execution of computing algorithms, to organisation of fishing reel parts (such as the Abu 6500 C3) to the organisation of D/RNA codon sequences to control synthesis of functional proteins individually and collectively across the needs of cells and organisms with particular body plans. Where, too, function of proteins requires sequences that fold and form relevant slots and key-lock fitting structures in a way that per Axe 2010 goes beyond mere Lego brick stacking of interchangeable modules. That is there are interactions all along the folded string of AA residues.
Restated: "... the [information] specification ... often [carries] out a relevant function [of] correct organisation and linked interaction of component parts, ranging from text strings for messages in English, to ... code for execution of computing algorithms, to organisation of fishing reel parts (such as the Abu 6500 C3) to the organisation of D/RNA codon sequences to control synthesis of functional proteins individually and collectively across the needs of cells and organisms with particular body plans. ... function of proteins requires sequences that fold and form relevant slots and key-lock fitting structures in a way that per Axe 2010[,] goes beyond mere Lego brick stacking of interchangeable modules. That is there are interactions all along the folded string of AA residues."

Restated again: Information is communicative. What it often communicates is a function. Thus, we have FCSI. We observe information (as in the examples given), and recognize its related function. Specified information goes beyond mere patterning. As you [Upright BiPed] have argued many times, there is a necessary "discontinuity" between sender and receiver of information. There is a separation or 'freedom' involved because otherwise the information would be linked to the function in a deterministic mechanism -- and therefore would not be information, and would be unnecessary.

We don't consider deterministic physical processes (water running downhill) as the output of a communication/information process. Gravity is not communicating information to the water and the water is not receiving the message from gravity or the hill (as far as we can tell). It is much different for genetic information which communicates to cell-processes. It's different also for information communicated and received by plants and animals -- and especially humans, in diverse ways.

The ID argument does not begin by declaring that something could not have originated via natural processes. It begins by observing Information which is Specified (usually to an observable Function) and is Complex beyond a threshold. I don't think the ID inference concludes "this could not have evolved", but rather, "it's most probable that this is the product of Design by Intelligence, since it correlates to FCSI observances that we know were produced by intelligence." The ID inference can be falsified in those cases by showing that the FCSI was produced by natural forces. There is nothing circular about this:

1. Observe FCSI. [Note: What is FCSI? It has nothing to do with whether the thing has evolved or not. It has to do with Information which communicates - sender to receiver. We often observe a function. We observe that.]
2. Explore natural forces/processes to see if the observation was caused by them.
3. Explore known Intelligent forces for correlation or causation.
4. Determine the most probable source of FCSI -- if #2 is null and #3 is highly correlated, the inference to the most probable cause is Intelligent Design.

It's not a circular argument. SETI research is not based on a tautology. Forensics does not begin with the conclusion and then define results based on that:

1. We know it is murder if there is MSI (murder specific information).
2. MSI is defined as having ruled out natural cause.
3. We can't find natural cause.
4. Therefore there is MSI and a murder must have occurred.

If you think forensic science works like that, then I can understand why you think CSI is a circular argument.
Silver Asiatic
November 16, 2014 at 07:57 AM PDT
Of supplemental note: classical 'digital' information is found to be a subset of 'non-local' (i.e. beyond space and time) quantum entanglement/information by the following method:
Quantum knowledge cools computers: New understanding of entropy – June 2011 Excerpt: No heat, even a cooling effect; In the case of perfect classical knowledge of a computer memory (zero entropy), deletion of the data requires in theory no energy at all. The researchers prove that “more than complete knowledge” from quantum entanglement with the memory (negative entropy) leads to deletion of the data being accompanied by removal of heat from the computer and its release as usable energy. This is the physical meaning of negative entropy. Renner emphasizes, however, “This doesn’t mean that we can develop a perpetual motion machine.” The data can only be deleted once, so there is no possibility to continue to generate energy. The process also destroys the entanglement, and it would take an input of energy to reset the system to its starting state. The equations are consistent with what’s known as the second law of thermodynamics: the idea that the entropy of the universe can never decrease. Vedral says “We’re working on the edge of the second law. If you go any further, you will break it.” http://www.sciencedaily.com/releases/2011/06/110601134300.htm
,,,And here is the evidence that quantum information is in fact ‘conserved’;,,,
Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html Quantum no-deleting theorem Excerpt: A stronger version of the no-cloning theorem and the no-deleting theorem provide permanence to quantum information. To create a copy one must import the information from some part of the universe and to delete a state one needs to export it to another part of the universe where it will continue to exist. http://en.wikipedia.org/wiki/Quantum_no-deleting_theorem#Consequence
Besides providing direct empirical falsification to the materialistic claims of neo-Darwinists, the implications of finding 'non-local', beyond space and time, and ‘conserved’ quantum information in molecular biology on a massive scale is fairly, and pleasantly, obvious:
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068 Quantum Entangled Consciousness - Life After Death - Stuart Hameroff - video https://vimeo.com/39982578
Verse and Music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through Him, and without Him nothing was made that was made. In Him was life, and the life was the light of men. ROYAL TAILOR – HOLD ME TOGETHER – music video http://www.youtube.com/watch?v=vbpJ2FeeJgw
bornagain77
November 16, 2014 at 07:57 AM PDT
Thus not only is information not emergent from an energy-matter basis, as is presupposed in atheistic materialism, but energy, and even matter, ultimately reduce to an information basis as is presupposed in Christian Theism (John 1:1). How all this plays out in molecular biology is that this non-local, beyond space and time, quantum entanglement, by which energy/matter was reduced to information and teleported, is now found in molecular biology on a massive scale. Here are a few of my notes along that line:
Quantum entanglement holds together life’s blueprint – 2010 Excerpt: When the researchers analysed the DNA without its helical structure, they found that the electron clouds were not entangled. But when they incorporated DNA’s helical structure into the model, they saw that the electron clouds of each base pair became entangled with those of its neighbours. “If you didn’t have entanglement, then DNA would have a simple flat structure, and you would never get the twist that seems to be important to the functioning of DNA,” says team member Vlatko Vedral of the University of Oxford. http://neshealthblog.wordpress.com/2010/09/15/quantum-entanglement-holds-together-lifes-blueprint/ Quantum Information/Entanglement In DNA - short video https://vimeo.com/92405752 Coherent Intrachain energy migration at room temperature – Elisabetta Collini and Gregory Scholes – University of Toronto – Science, 323, (2009), pp. 369-73 Excerpt: The authors conducted an experiment to observe quantum coherence dynamics in relation to energy transfer. The experiment, conducted at room temperature, examined chain conformations, such as those found in the proteins of living cells. Neighbouring molecules along the backbone of a protein chain were seen to have coherent energy transfer. Where this happens quantum decoherence (the underlying tendency to loss of coherence due to interaction with the environment) is able to be resisted, and the evolution of the system remains entangled as a single quantum state. http://www.scimednet.org/quantum-coherence-living-cells-and-protein/
That quantum entanglement, which conclusively demonstrates that ‘information’ in its pure ‘quantum form’ is completely transcendent of any time and space constraints (Bell, Aspect, Leggett, Zeilinger, etc..), should be found in molecular biology on such a massive scale is a direct empirical falsification of Darwinian claims, for how can the ‘non-local’ quantum entanglement ‘effect’ in biology possibly be explained by a material (matter/energy) cause when the quantum entanglement effect falsified material particles as its own causation in the first place? Appealing to the probability of various 'random' configurations of material particles, as Darwinism does, simply will not help since a timeless/spaceless cause must be supplied which is beyond the capacity of the material particles themselves to supply!
Looking beyond space and time to cope with quantum theory – 29 October 2012 Excerpt: “Our result gives weight to the idea that quantum correlations somehow arise from outside spacetime, in the sense that no story in space and time can describe them,” http://www.quantumlah.org/highlight/121029_hidden_influences.php Closing the last Bell-test loophole for photons - Jun 11, 2013 Excerpt:– requiring no assumptions or correction of count rates – that confirmed quantum entanglement to nearly 70 standard deviations.,,, http://phys.org/news/2013-06-bell-test-loophole-photons.html etc.. etc..
In other words, to give a coherent explanation for an effect that is shown to be completely independent of any time and space constraints, one is forced to appeal to a cause that is itself not limited to time and space! Put more simply, you cannot explain an effect by a cause that has been falsified by the very same effect you are seeking to explain! Improbability arguments of various ‘special’ configurations of material particles, which have been a staple of the arguments against neo-Darwinism, simply do not apply since the cause is not within the material particles in the first place! And although Naturalists have proposed various, far fetched, naturalistic scenarios to try to get around the Theistic implications of quantum non-locality, none of the ‘far fetched’ naturalistic solutions, in themselves, are compatible with the reductive materialism that undergirds neo-Darwinian thought.
"[while a number of philosophical ideas] may be logically consistent with present quantum mechanics, ...materialism is not." Eugene Wigner Quantum Physics Debunks Materialism - video playlist https://www.youtube.com/watch?list=PL1mr9ZTZb3TViAqtowpvZy5PZpn-MoSK_&v=4C5pq7W5yRM Why Quantum Theory Does Not Support Materialism By Bruce L Gordon, Ph.D Excerpt: The underlying problem is this: there are correlations in nature that require a causal explanation but for which no physical explanation is in principle possible. Furthermore, the nonlocalizability of field quanta entails that these entities, whatever they are, fail the criterion of material individuality. So, paradoxically and ironically, the most fundamental constituents and relations of the material world cannot, in principle, be understood in terms of material substances. Since there must be some explanation for these things, the correct explanation will have to be one which is non-physical – and this is plainly incompatible with any and all varieties of materialism. http://www.4truth.net/fourtruthpbscience.aspx?pageid=8589952939
Thus, as far as empirical science itself is concerned, Neo-Darwinism is directly falsified by the experimental evidence, in its claim that information in life is merely ‘emergent’ from a materialistic basis.
bornagain77
November 16, 2014 at 07:56 AM PDT
Moreover, to drive the nail in the coffin, as it were, of the materialistic belief that information is merely emergent from a material basis, it is now shown, by using quantum entanglement as a 'quantum information channel', that material reduces to information. (Of note: energy is completely reduced to quantum information, whereas matter is semi-completely reduced, with the caveat being that matter can be reduced to energy via E=mc^2.) Here are a few of my notes along that line:
Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original,,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap – 2009 Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,, “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts Scientists Report Finding Reliable Way to Teleport Data By JOHN MARKOFF - MAY 29, 2014 Excerpt: They report that they have achieved perfectly accurate teleportation of quantum information over short distances. They are now seeking to repeat their experiment over the distance of more than a kilometer. If they are able to repeatedly show that entanglement works at this distance, it will be a definitive demonstration of the entanglement phenomenon and quantum mechanical theory. http://www.nytimes.com/2014/05/30/science/scientists-report-finding-reliable-way-to-teleport-data.html?_r=2 First Teleportation Of Multiple Quantum Properties Of A Single Photon - Oct 7, 2014 To truly teleport an object, you have to include all its quantum properties. Excerpt: ,,,It is these properties— the spin angular momentum and the orbital angular momentum?(of a photon)—?that Xi-Lin and co have teleported together for the first time.,,, https://medium.com/the-physics-arxiv-blog/first-teleportation-of-multiple-quantum-properties-of-a-single-photon-7c1e61598565 Researchers Succeed in Quantum Teleportation of Light Waves - April 2011 Excerpt: In this experiment, researchers in Australia and Japan were able to transfer quantum information from one place to another without having to physically move it. It was destroyed in one place and instantly resurrected in another, “alive” again and unchanged. This is a major advance, as previous teleportation experiments were either very slow or caused some information to be lost. http://www.popsci.com/technology/article/2011-04/quantum-teleportation-breakthrough-could-lead-instantanous-computing How Teleportation Will Work - Excerpt: In 1993, the idea of teleportation moved out of the realm of science fiction and into the world of theoretical possibility. It was then that physicist Charles Bennett and a team of researchers at IBM confirmed that quantum teleportation was possible, but only if the original object being teleported was destroyed. — As predicted, the original photon no longer existed once the replica was made. http://science.howstuffworks.com/science-vs-myth/everyday-myths/teleportation1.htm Quantum Teleportation – IBM Research Page Excerpt: “it would destroy the original (photon) in the process,,” http://researcher.ibm.com/view_project.php?id=2862
bornagain77
November 16, 2014 at 07:55 AM PDT
Winston Ewert, for me, not being trained in higher mathematics, the easiest way to avoid this 'sort of' circularity, i.e. the adding of information as sort of a 'gloss' after the improbability of a certain configuration of a material substrate is established,,
"Probability comes before, not after the establishment of CSI." Winston Ewert
,,,the way to avoid adding information as somewhat of a 'gloss' after the probability calculation is to go directly to the empirical evidence itself and establish that information is a 'real' entity that is separate from, and even primary to, material.,,, First, a little background as to the situation we are dealing with in regards to materialists: Believe it or not, not too many years ago I remember that materialists, as obvious as it is that sophisticated programming/information is in DNA (Bill Gates, etc..), would constantly argue that information was not really even in life. In fact, in the past, more than once, the following citations have been used to counteract the materialistic belief that information is not really in life.
Every Bit Digital: DNA’s Programming Really Bugs Some ID Critics - Casey Luskin - 2010 Excerpt: "There’s a very recognizable digital code of the kind that electrical engineers rediscovered in the 1950s that maps the codes for sequences of DNA onto expressions of proteins." http://www.salvomag.com/new/articles/salvo12/12luskin2.php Harvard cracks DNA storage, crams 700 terabytes of data into a single gram - Sebastian Anthony - August 17, 2012 Excerpt: A bioengineer and geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record by a thousand times.,,, Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored. http://www.extremetech.com/extreme/134672-harvard-cracks-dna-storage-crams-700-terabytes-of-data-into-a-single-gram Information Storage in DNA by Wyss Institute - video https://vimeo.com/47615970 Quote from preceding video: "The theoretical (information) density of DNA is you could store the total world information, which is 1.8 zetabytes, at least in 2011, in about 4 grams of DNA." Sriram Kosuri PhD. - Wyss Institute DNA: The Ultimate Hard Drive - Science Magazine, August-16-2012 Excerpt: "When it comes to storing information, hard drives don't hold a candle to DNA. Our genetic code packs billions of gigabytes into a single gram. A mere milligram of the molecule could encode the complete text of every book in the Library of Congress and have plenty of room to spare." http://news.sciencemag.org/sciencenow/2012/08/written-in-dna-code.html
In my honest opinion, the main reason that atheists/materialists would even argue the absurd position that information is not really in life in the first place, despite the fact that highly sophisticated functional information is obviously in life, is that, in the materialistic worldview, information is held to ultimately be 'emergent' from, to even be 'illusory' to, a material basis. That is to say, in the materialistic worldview, information, (as well as consciousness itself), is held to merely emerge from what they consider to be the primary, i.e. the 'real', material basis of reality. And while probabilistic arguments are certainly very good, for most reasonable people, for determining whether a functional sequence was designed or not, probabilistic arguments fail to be conclusive for the committed atheist/materialist. The reason for this failure of conclusiveness, as far as I can tell from my dealings with committed atheists/materialists on UD, is that they can always imagine that some undiscovered hidden law, or some type of undiscovered hidden process, will someday be discovered that will 'come to the rescue' for them. Some undiscovered process or law that will make these sequences of functional information not as improbable as they are turning out to be for them. I call it the Dumb and Dumber 'there's a chance' hypothesis:
Dumb and Dumber 'There's a Chance' https://www.youtube.com/watch?v=KX5jNnDMfxA
(For a prime example of this misbegotten hope, see keith s's current excitement, as ill-founded as that excitement is, with Wagner's, as far as I can gather, evidence-free conjecture in his book 'Arrival Of The Fittest'. A book in which Wagner tries to reduce the mathematical probabilistic barriers for blind search.) (Of note: Winston, given your skill in mathematics in this particular area, perhaps you can examine Wagner's argument more closely and do a post on that?) Thus, due to that 'misbegotten hope' of the committed materialist, it is very important, as Dr. Dembski has currently done in his new book, to show that information is the foundation 'stuff' of the universe, and to show that material is 'emergent', even 'illusory', from what is the 'real' information basis of the universe.
“The thesis of my book ‘Being as Communion’ is that the fundamental stuff of the world is information. That things are real because they exchange information one with another.” Conversations with William Dembski–The Thesis of Being as Communion – video https://www.youtube.com/watch?v=cYAsaU9IvnI Conversations with William Dembski–Information All the Way Down - video https://www.youtube.com/watch?v=BnVss3QseCw
Dr. Dembski's contention that information, not material, is the 'fundamental stuff' of reality has some very impressive, even conclusive, empirical clout behind it. Both Wheeler and Zeilinger have, in their dealings with quantum mechanics, come to the conclusion that information, not material, is the 'fundamental stuff' of the universe.
"it from bit” Every “it”— every particle, every field of force, even the space-time continuum itself derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has a bottom—a very deep bottom, in most instances, an immaterial source and explanation, that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment—evoked responses, in short all matter and all things physical are information-theoretic in origin and this is a participatory universe." – Princeton University physicist John Wheeler (1911–2008) (Wheeler, John A. (1990), “Information, physics, quantum: The search for links”, in W. Zurek, Complexity, Entropy, and the Physics of Information (Redwood City, California: Addison-Wesley)) "Now I am in the grip of a new vision, that Everything Is Information. The more I have pondered the mystery of the quantum and our strange ability to comprehend this world in which we live, the more I see possible fundamental roles for logic and information as the bedrock of physical theory." – J. A. Wheeler, K. Ford, Geons, Black Hole, & Quantum Foam: A Life in Physics New York W.W. Norton & Co, 1998, pp 64. Why the Quantum? It from Bit? A Participatory Universe? Excerpt: In conclusion, it may very well be said that information is the irreducible kernel from which everything else flows. Thence the question why nature appears quantized is simply a consequence of the fact that information itself is quantized by necessity. It might even be fair to observe that the concept that information is fundamental is very old knowledge of humanity, witness for example the beginning of gospel according to John: "In the beginning was the Word." Anton Zeilinger - a leading expert in quantum mechanics: http://www.metanexus.net/archive/ultimate_reality/zeilinger.pdf
bornagain77
November 16, 2014 at 07:54 AM PDT
WJM #73
Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of it’s probability of being generated by unguided forces?
I see R0bb has already answered this but maybe it is worth being even more explicit. First, I assume you agree that a chance hypothesis is an example of an unguided force? Given that, see page 21 of the paper that R0bb quoted. Dembski gives the formula for the context-dependent specified complexity of T given H, where H is a chance hypothesis (note he gives no other way of measuring specified complexity, such as one that is context-independent or without a given H). It is not possible to write the formula here because I don’t have the character set, but it contains the expression P(T|H). Would you like to give an example of how to measure CSI that does not include a chance hypothesis?

SB #78: The fact that it may be extremely difficult to estimate P(T|H) in practice is a problem with the CSI method, not proof that CSI doesn't need such an estimate.
markf
November 16, 2014 at 07:44 AM PDT
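For reference, the formula markf describes can be written out. This is a reconstruction from Dembski's 2005 paper "Specification: The Pattern That Signifies Intelligence" (my transcription, not markf's):

    % Context-dependent specified complexity of T given chance hypothesis H:
    \sigma = -\log_2\!\big[\varphi_S(T)\cdot P(T\mid H)\big]
    % The final, context-independent form adds a replicational-resources factor:
    \chi = -\log_2\!\big[10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\big]

Here \varphi_S(T) counts the patterns at least as simple to describe as T. The point at issue in the thread is that P(T|H) sits inside the definition itself.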
Absolutely. My intended but poorly stated point was that everybody already rejects any hypothetical explanations for biology that require astronomical luck. We don’t need to learn about specified complexity to be convinced of this position, although Dembski might argue that specified complexity explains why we take this position.
I get your point. I'll simply note that some people do try to claim that even if evolutionary explanations required astronomical luck, it doesn't mean it didn't happen.
I reiterate that if Dembski had thought that CSI excluded “chance” and “law” by definition, then it would have been pointless to make a lengthy argument to show this by other means. Something that’s true by definition does not need to be demonstrated by argument — you can simply restate the definition again!
All right triangles follow the Pythagorean theorem by definition. That is, the theorem follows somewhat non-obviously from the definition. You couldn't prove it by just restating the definition. In the same way, CSI follows Dembski's laws by definition, but not in a way that is established by repeating the definition.
The fact that Dembski calls this a law indicates that he thinks it is an empirical truth, not a definitional tautology.
Do a search for "Laws of Math" and you'll find that many mathematical results are stated as laws.
His understanding is that, in order to calculate CSI, we must choose as a chance hypothesis the actual cause of the event. When I asked how that works when the actual cause is design, he said:
Allow me to put it this way. Specified complexity is defined based on both an object of interest and a chance hypothesis that produced it. You cannot compute any value of CSI independently of a chance hypothesis. When we say that an object exhibits specified complexity outright, that's shorthand for saying that the object exhibits specified complexity under all relevant chance hypotheses. (In my opinion, this was an unnecessary overloading of the terminology, but it's too late to change that now.) When we say that an intelligent agent produces CSI, we mean that they produce objects which were improbable under all the relevant chance hypotheses. Note that we do not attempt to calculate the probability under the agent hypothesis because it's not a chance hypothesis.
but nowhere in the post does he say what, in my judgment, needed to be said—namely, that Dembski’s design inference is NOT circular and that KeithS was wrong to claim otherwise. Indeed, he continued to emphasize the extent to which he agreed with KeithS
Clearly, I should have been more explicit. However, my last paragraph was attempting to state what you are saying there:
So Keith is right, arguing for the improbability of evolution on the basis of specified complexity is circular. However, specified complexity, as developed by Dembski, isn’t designed for the purpose of demonstrating the improbability of evolution. When used for its proper role, specified complexity is a valid, though limited argument.
Unfortunately, Dr. Dembski doesn’t post here anymore. Thus, I would appreciate it if Winston could comment on Kairosfocus’s FSCO/I and dFSCI. Do these terms appear in the work currently done at the Evolutionary Informatics Lab? Have they ever been used or at least discussed?
Since I've graduated, I'm no longer in as frequent contact with the EIL, so I can't really speak to their status to the same degree. I'll simply note that the acronyms have too many letters. A good acronym has at most three letters. More seriously, I haven't looked closely at these measures, so I won't comment on them.
Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of it’s probability of being generated by unguided forces?
See my article http://www.evolutionnews.org/2013/04/information_pas071201.html; in it I point to pages throughout Dembski's work where he says that CSI requires the evaluation of probability by chance hypotheses.
Winston Ewert
November 16, 2014 at 07:40 AM PDT
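To make Ewert's "under all relevant chance hypotheses" gloss concrete, here is a minimal sketch (my own illustration in Python; the hypothesis names, probabilities, and pattern count are invented, and this is not code from Ewert or the Evolutionary Informatics Lab):

    import math

    THRESHOLD_BITS = 500  # the universal bound discussed in this thread

    def specified_complexity_bits(p_t_given_h, n_simple_patterns):
        # -log2[ (number of patterns as simple as T) * P(T|H) ]
        return -math.log2(n_simple_patterns * p_t_given_h)

    def exhibits_csi(chance_hypotheses, n_simple_patterns):
        # "Exhibits CSI outright" = specified complexity exceeds the
        # threshold under EVERY relevant chance hypothesis. The design
        # hypothesis is deliberately absent: it is not a chance hypothesis.
        return all(
            specified_complexity_bits(p, n_simple_patterns) > THRESHOLD_BITS
            for p in chance_hypotheses.values()
        )

    # Illustrative, made-up numbers:
    hyps = {"uniform chance": 2.0**-900, "evolutionary scenario": 2.0**-600}
    print(exhibits_csi(hyps, n_simple_patterns=2**20))  # True

Note that the probability under each hypothesis is an input; nothing in the computation establishes that any hypothesis is improbable, which is exactly the OP's point about where the hard work lies.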
R0bb
I recommend the paper that Dembski calls his “most up-to-date treatment of CSI”. Note the formulas that contain the factor P(T|H).
Let’s say that I am evaluating the probability that wind, air, and erosion produced a well-formed sand castle on the beach. Assume that the object contains two million grains of sand, the time value of your choice, or any other quantitative facts you need to make the determination. Take me through the process. Pay special attention to the chronology involved in determining the probability that unguided forces were responsible for the object and the amount of complex specified information that it contains.
StephenB
November 16, 2014 at 07:12 AM PDT
WJM:
Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of it’s probability of being generated by unguided forces?
I recommend the paper that Dembski calls his "most up-to-date treatment of CSI". Note the formulas that contain the factor P(T|H).
R0bb
November 16, 2014 at 06:36 AM PDT
I note in info theory info is measured on loss of uncertainty on receiving a message, regarding state of the source. KF
kairosfocus
November 16, 2014 at 06:16 AM PDT
WJM, that is why it is important to log reduce and see that you have an info beyond a threshold metric. The threshold is where an evaluation comes in. The degree of specified, complex info is before that. And I repeat we can use empirical studies of the 4-state per node DNA chain or the 20-state (more or less there are oddball cases) per node AA chain in proteins. KF
kairosfocus
November 16, 2014 at 06:10 AM PDT
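The per-node figures KF appeals to are easy to check. A small sketch (my illustration; these are raw state-capacity upper bounds per node, not a measure of functional specificity):

    import math

    bits_per_base = math.log2(4)   # 2.0 bits per DNA nucleotide (4 states)
    bits_per_aa   = math.log2(20)  # ~4.32 bits per amino-acid residue (20 states)

    # Chain length at which the raw configuration space passes 500 bits:
    print(500 / bits_per_base)  # 250.0 bases
    print(500 / bits_per_aa)    # ~115.7 residues

Empirical studies of actual sequence variability would revise the per-node figures downward; the numbers above are the theoretical ceiling.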
Box, probability and information metrics are related, are in effect dual through the negative log probability metric. As there has been so much back and forth, I will use a fairly technical case from a diverse field to make the point, via how Harry S Robertson in Statistical ThermoPhysics grounds the informational view of thermodynamics. Here I clip my always linked note, from Robertson:
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . . [deriving informational entropy . . . ] H({pi}) = - C [SUM over i] pi*ln pi, [. . . "my" Eqn 6] [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . . [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]
In short, information is linked to probability and vice versa. In the case under prolonged debate (prolonged because it has been a handy rhetorical club for ID objectors), it turns out that the information content of DNA or protein is fairly easy to obtain on state possibilities or on statistical studies of how such states are actually used. Which will reflect the variability across life and time, thus also the underlying probabilities. We already know the result: it is immaterial which metric we use, life forms have FSCO/I well beyond the threshold where blind watchmaker mechanisms are remotely plausible. Therein lieth the rub, as that is an ideologically unacceptable result in many quarters. KF
kairosfocus
November 16, 2014 at 06:02 AM PDT
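KF's probability/information duality, stated cleanly (my rendering of the standard definitions in LaTeX; this is not Robertson's exact notation):

    % Surprisal: information carried by an event of probability P(E)
    I(E) = -\log_2 P(E)
    % Shannon entropy is its expectation over the distribution:
    H(\{p_i\}) = -\sum_i p_i \log_2 p_i
    % Example of the duality used throughout this thread:
    P(E) = 2^{-500} \iff I(E) = 500 \text{ bits}

So a low-probability-on-chance claim and a high-information claim are the same claim under a change of units.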
WE said:
No. In Dembski’s work the probability is calculated in order to establish the complexity of the complex-specification criteria. Probability comes before, not after the establishment of CSI.
Measuring the probability-distribution complexity capacity of a thing, along with how specified (and/or functional) it is, is not the same thing as determining whether or not a particular distributional pattern is plausibly achievable via unguided forces. You measure it first. You evaluate nature's capacity to produce it afterwards. Can you show where Dembski (or any ID proponent) has built into the measurement of CSI an evaluation of its probability of being generated by unguided forces?
William J Murray
November 16, 2014 at 03:43 AM PDT
StephenB, for me the shocker is in posts #32 and #51.
WJM: No. CSI isn’t an argument; it’s a measurement.
WE #32: No, its both. The measurement is based on how specified and improbable an object is.
WJM: No, the measurement demonstrates how complex-specified it is. Probabilities come into play after the CSI has been established and a case is being made for best cause.
WJM presents the common understanding of CSI at UD. It is in full accord with not being circular at all. We can observe an artifact and measure its CSI independent from probabilities. Kairosfocus wrote about CSI in a recent article:
KF: 'But you cannot quantify it, so there, it’s all pointless!' Another blatant denial of well known facts. Go open your PC up, and open up a folder. Set to view details. Notice the listed file sizes? Those are in bytes, eight bit clusters, of functionally specific, digitally coded complex informational files.
Is there a calculation of the improbability of evolution in the listed file sizes in my PC folders? Of course not. We have always understood that the measurement of the number of bits of CSI is distinct from a calculation of the improbability of evolution. Winston Ewert is very definitive however:
Winston Ewert: No. In Dembski’s work the probability is calculated in order to establish the complexity of the complex-specification criteria. Probability comes before, not after the establishment of CSI.
Now, I'm here waiting and hoping to see some arguments from both sides.
Box
November 16, 2014 at 02:18 AM PDT
PPPS: I suggest that complexity of a relevant entity can be indicated by the length of the y/n string of structured q's to specify its state in the face of the possible clumped or scattered possibilities, i.e. the rough equivalent of what AutoCAD does in portraying a 3-d config of a system. That is, functionally specific bits. Configs that are only a few bits long can fairly easily be recaptured by chance. Those that are of 500 or 1000 or more, not so, per sparse search imposed by available atomic and temporal resources. Complexity of course here points to high contingency, and the next aspect is that specified complexity would bespeak organisation of components that fit an independent or detachable "simple" description, or observable component-interaction-dependent performance -- functionality. Text elements in this post can be in many, many possible configs, but only a few will make for contextually responsive remarks in English. For simple and familiar instance. Likewise AA strings can be in many states, but relatively few will carry out the functions of given enzymes required for even a simple living cell, to relevant degrees of effectiveness. And so forth.
kairosfocus
November 16, 2014 at 01:05 AM PDT
PPS: Where also it can be argued that the observed patterns of proteins and genes etc show what patterns of variation have been successfully explored across time in the world of life. That pattern, for proteins, the point of contact with the real world of organisms that have to live and reproduce, is that we have a dust of clusters of proteins in the space of possible AA string configs, such that we do have thousands of islands of function that are not reasonably bridgeable by either incremental stepping stones or Lego-brick like modularity that can build a whole from standard components. (Fold-function interactions involve the whole AA chain so that is unsurprising.)
kairosfocus
November 16, 2014 at 12:57 AM PDT
PS: What are relevant chancy hyps? For OOL, they are that standard physical and chemical materials and forces with linked statistical thermodynamics are acting in Darwin's warm pond of salts or the like prebiotic environments. These point strongly to a preponderance of forces of breakdown being prevalent, and to forces that feed diffusion and the sort of random walk dispersions we can observe as Brownian motion. Where clay templates or the like will tend to be crystalline and/or random, not fortuitously functionally specific. With of course the point that self-replication on, in effect, a von Neumann kinematic, code-using self-replicator is part of what needs to be adequately causally explained. Language, of course, is normally indicative of intelligence and purposeful thought. For existing unicellular life, we are looking at engines of non-foresighted hereditable variation such as various triggers of various mutations. Again, overwhelmingly adverse. Where also, while the tendency is to speak of "natural selection", in fact differential reproductive success and resulting culling out of less fit or fortunate varieties SUBTRACTS hereditable information. It is to the sources of chance variation we must turn to find the much discussed incremental creation of novel body plans etc. But, actual empirical evidence of the creative and inventive power of such chancy processes to account for 10 - 100+ mn new bases in the genome dozens of times over remains strangely absent, whilst it is abundantly easy to see that functionally constrained specified complexity is just that -- highly constrained to what Dembski once termed islands of function, in the space of possible configs W. We are back to T in W, or rather clusters of Ts in W, that are isolated and face sparse blind search. And blind searches for golden searches on W that land us in convenient reach of T's face the fact that a search or sample is a subset of W. So the search for a golden blind search is looking at the higher order space 2^W. For 500 bits, W = 3.27*10^150, and 2^W is calculator-smoking territory.
kairosfocus
November 16, 2014 at 12:50 AM PDT
WE: Perhaps, more significant is to look at the information dual to the probability metric, given that the actual context is log(p(T|H)). That is, low probabilities on blind chance [and linked mechanical necessities] are correlated with high informational complexity. With, the other two terms so transformed to informational form by the log operator and summation/subtraction, marking a threshold of relevant informational complexity where something beyond that threshold, and which is in a zone T that is deeply isolated in the space of possibilities, is not plausible to be attained on reasonable chance hyps that are said to be relevant. Where, zone T is independently describable or specifiable; i.e. it marks a case of specified complexity. Thus, Chi is a beyond-threshold specified complex information metric.

Further, the relevant configuration-sensitive separately and "simply" describable specification is often that of carrying out a relevant function on correct organisation and linked interaction of component parts, ranging from text strings for messages in English, to source or better object code for execution of computing algorithms, to organisation of fishing reel parts (such as the Abu 6500 C3), to the organisation of D/RNA codon sequences to control synthesis of functional proteins individually and collectively across the needs of cells and organisms with particular body plans. Where, too, function of proteins requires sequences that fold and form relevant slots and key-lock fitting structures in a way that per Axe 2010 goes beyond mere Lego brick stacking of interchangeable modules. That is, there are interactions all along the folded string of AA residues.

Where further, practical thresholds for Sol system resources and/or the observed cosmos as a whole, on sparse sampling to space of possibilities ratios, run to 500 - 1,000 functionally specific bits. The latter being the square of the former and in effect forcing the full set of possible Planck-time states of 10^80 atoms across a reasonable cosmological thermodynamic lifespan of 10^25 s, to be 1 in 10^150 of the possibilities W. This addresses a point in NFL where Dembski says in effect 1 in 10^150 of possibilities.

So, a heuristic metric linked to Dembski's 2005 expression would be to seek functionally specific complex info in cells and other biologically relevant contexts that credibly traces to the deep past of origins and has in it at least 500 - 1,000 bits of such specified complexity. As a simple example, even if one sets 100 AAs as a reasonable length for early simple organisms, with only 100 proteins, and takes estimates overly generous to abiogenesis of 1 bit of info per AA, we have 10,000 bits of functionally specific complex information in the suite of proteins to make such an oversimplified first cell work its metabolism etc, which is an order of magnitude of bits beyond the threshold.

On needle-in-haystack blind search and on the vera causa principle (observing that cases of functional specified complexity routinely and on observation are only known to come from intelligently directed configuration), we are epistemically, inductively warranted to infer that the best explanation of the living cell is design. Which puts design at the root of the tree of life, raising the point that the root grounds and supports the shoots, branches and twigs. So, we have excellent reason to hold that design pervades the world of life. KF
kairosfocus
November 16, 2014 at 12:33 AM PDT
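A quick check of the arithmetic in the comment above (my verification, in LaTeX):

    % 500 bits of configuration space:
    2^{500} = 10^{500\log_{10}2} \approx 10^{150.5} \approx 3.27\times10^{150}
    % KF's toy first-cell tally:
    100\ \text{proteins}\times 100\ \text{AA}\times 1\ \text{bit/AA} = 10^{4}\ \text{bits}

Both figures come out as stated; whether 1 bit per amino acid is the right per-node estimate is the substantive question, not the multiplication.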
Andre @61:
Mapou @4 You are on the money! Why do you think Keith S is ignoring me? Because I demanded that from him. Well spotted.
I'm glad we think alike. In my opinion, the gene repair argument fully refutes Darwinian evolution. The whole theory is nothing but a joke, a pathetic attempt by an unscrupulous elite who are hellbent on imposing a stupid state religion on the people.
Mapou
November 15, 2014 at 11:17 PM PDT
P.S. To be even more accurate, the above should be: For H=”natural selection” and T=”success”: The argument of specified complexity was never intended to show that P(T|H) << 1. It was intended to show that if P(T|H) << 1, then P(H|T) < 1/2.
R0bb
November 15, 2014 at 10:23 PM PDT
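One standard way to unpack R0bb's conditional (my gloss via Bayes' theorem, not a derivation R0bb gives): with D the design alternative,

    P(H\mid T) = \frac{P(T\mid H)\,P(H)}{P(T\mid H)\,P(H) + P(T\mid D)\,P(D)}

which falls below 1/2 exactly when P(T|H)P(H) < P(T|D)P(D). So a vanishing P(T|H) does not by itself settle P(H|T); it does the work only in combination with a rival hypothesis under which T is not surprising.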
keith s:
In your linked article, you said:
The argument of specified complexity was never intended to show that natural selection has a low probability of success. It was intended to show that if natural selection has a low probability of success, then it cannot be the explanation for life as we know it.
That’s not correct, and the quote I gave above shows this:
I agree with Winston here. To restate his claim: For H="natural selection" and T="success": The argument of specified complexity was never intended to show that P(T|H) << 1. It was intended to show that if P(T|H) << 1, then P(H|T) << 1.
R0bb
November 15, 2014 at 10:03 PM PDT
Unfortunately, Dr. Dembski doesn't post here anymore. Thus, I would appreciate it if Winston could comment on Kairosfocus's FSCO/I and dFSCI. Do these terms appear in the work currently done at the Evolutionary Informatics Lab? Have they ever been used or at least discussed?
sparc
November 15, 2014 at 10:00 PM PDT
KeithS writes,
Box, You made the mistake of trusting StephenB.
Box, unlike KeithS, I wouldn't dare insinuate that you don't have the ability to read the OP for yourself and that you need to trust me to get the facts. What an insult. I have found that you are far more thoughtful than that. As everyone knows, KeithS has been arguing for weeks that Dembski's design inference is "hopelessly circular," and that CSI is mere "window dressing." So when Winston Ewert broached the subject and began his essay with these words,
Keith S is right. Sort of. As highlighted in a recent post by vjtorley, Keith S has argued that Dembski’s Design Inference is a circular argument.
I interpreted that to mean that he did, sort of, agree with the substance of KeithS' perennial rant. To be sure, Winston qualified his remarks by making some important clarifications about his thoughts on why Dembski developed the notion of specified complexity, but nowhere in the post does he say what, in my judgment, needed to be said---namely, that Dembski's design inference is NOT circular and that KeithS was wrong to claim otherwise. Indeed, he continued to emphasize the extent to which he agreed with KeithS. That is certainly the way KeithS and all the anti-ID partisans interpreted it, as is clear from their celebration. Predictably, KeithS began his premature victory dance by saying,
With the circularity issue out of the way, I'd like to draw attention to the other flaws of Dembski's CSI.
Clearly, this is a continued attack on Dembski's argument itself, not merely on the way others interpret that argument, and clearly KeithS is implicating Winston in his mistake. So Winston makes the following request,
Before I’d even consider discussing these other alleged flaws, I need you to explicitly acknowledge that the alleged circularity isn’t a flaw in specified complexity, but only in some people’s mistaken interpretation of it. Dembski’s original argument isn’t circular.
As expected, KeithS threw it back in his face:
Winston, The problem is that Dembski does (or at least did) take the presence of CSI as a non-tautological indication that something could not have evolved or been produced by “chance” or “necessity”.
There you go. So, Box, I will let you decide who is trying to pull whose leg.
StephenB
November 15, 2014 at 09:58 PM PDT
Winston:
I don't see anything in the linked post that suggests a belief that designed CSI is incoherent.
Sorry for the confusion -- the linked comment is where HeKS says that you agree with him. His understanding is that, in order to calculate CSI, we must choose as a chance hypothesis the actual cause of the event. When I asked how that works when the actual cause is design, he said:
No, no, no! You do not calculate the CSI (the high improbability of the specificity) of an event that was designed. That would be calculating the probability, or rather improbability, of something that was done intentionally, which is incoherent.
R0bb
November 15, 2014 at 08:00 PM PDT
Mapou @4 You are on the money! Why do you think Keith S is ignoring me? Because I demanded that from him. Well spotted.
Andre
November 15, 2014 at 07:51 PM PDT
"everybody already rejects any hypothetical explanations for biology that require astronomical luck." MMM, no they don't,,, HISTORY OF EVOLUTIONARY THEORY - WISTAR DESTROYS EVOLUTION Excerpt: A number of mathematicians, familiar with the biological problems, spoke at that 1966 Wistar Institute,, For example, Murray Eden showed that it would be impossible for even a single ordered pair of genes to be produced by DNA mutations in the bacteria, E. coli,—with 5 billion years in which to produce it! His estimate was based on 5 trillion tons of the bacteria covering the planet to a depth of nearly an inch during that 5 billion years. He then explained that the genes of E. coli contain over a trillion (10^12) bits of data. That is the number 10 followed by 12 zeros. *Eden then showed the mathematical impossibility of protein forming by chance. http://www.pathlights.com/ce_encyclopedia/Encyclopedia/20hist12.htm A review of The Edge of Evolution: The Search for the Limits of Darwinism The numbers of Plasmodium and HIV in the last 50 years greatly exceeds the total number of mammals since their supposed evolutionary origin (several hundred million years ago), yet little has been achieved by evolution. This suggests that mammals could have “invented” little in their time frame. Behe: ‘Our experience with HIV gives good reason to think that Darwinism doesn’t do much—even with billions of years and all the cells in that world at its disposal’ (p. 155). http://creation.com/review-michael-behe-edge-of-evolution Waiting Longer for Two Mutations – Michael J. Behe Excerpt: Citing malaria literature sources (White 2004) I had noted that the de novo appearance of chloroquine resistance in Plasmodium falciparum was an event of probability of 1 in 10^20. I then wrote that ‘for humans to achieve a mutation like this by chance, we would have to wait 100 million times 10 million years’ (1 quadrillion years)(Behe 2007) (because that is the extrapolated time that it would take to produce 10^20 humans). Durrett and Schmidt (2008, p. 1507) retort that my number ‘is 5 million times larger than the calculation we have just given’ using their model (which nonetheless “using their model” gives a prohibitively long waiting time of 216 million years). Their criticism compares apples to oranges. My figure of 10^20 is an empirical statistic from the literature; it is not, as their calculation is, a theoretical estimate from a population genetics model. http://www.discovery.org/a/9461 Don't Mess With ID by Paul Giem (Durrett and Schmidt paper)- video https://www.youtube.com/watch?v=5JeYJ29-I7o "The immediate, most important implication is that complexes with more than two different binding sites-ones that require three or more proteins-are beyond the edge of evolution, past what is biologically reasonable to expect Darwinian evolution to have accomplished in all of life in all of the billion-year history of the world. The reasoning is straightforward. The odds of getting two independent things right are the multiple of the odds of getting each right by itself. So, other things being equal, the likelihood of developing two binding sites in a protein complex would be the square of the probability for getting one: a double CCC, 10^20 times 10^20, which is 10^40. There have likely been fewer than 10^40 cells in the world in the last 4 billion years, so the odds are against a single event of this variety in the history of life. It is biologically unreasonable." 
- Michael Behe - The Edge of Evolution - page 146 Swine Flu, Viruses, and the Edge of Evolution - Casey Luskin - 2009 Excerpt: “Indeed, the work on malaria and AIDS demonstrates that after all possible unintelligent processes in the cell–both ones we’ve discovered so far and ones we haven’t–at best extremely limited benefit, since no such process was able to do much of anything. It’s critical to notice that no artificial limitations were placed on the kinds of mutations or processes the microorganisms could undergo in nature. Nothing–neither point mutation, deletion, insertion, gene duplication, transposition, genome duplication, self-organization nor any other process yet undiscovered–was of much use.” Michael Behe, The Edge of Evolution, pg. 162 http://www.evolutionnews.org/2009/05/swine_flu_viruses_and_the_edge020071.html An Open Letter to Kenneth Miller and PZ Myers - Michael Behe July 21, 2014 Dear Professors Miller and Myers, Talk is cheap. Let's see your numbers. In your recent post on and earlier reviews of my book The Edge of Evolution you toss out a lot of words, but no calculations. You downplay FRS Nicholas White's straightforward estimate that -- considering the number of cells per malaria patient (a trillion), times the number of ill people over the years (billions), divided by the number of independent events (fewer than ten) -- the development of chloroquine-resistance in malaria is an event of probability about 1 in 10^20 malaria-cell replications. Okay, if you don't like that, what's your estimate? Let's see your numbers.,,, ,,, If you folks think that direct, parsimonious, rather obvious route to 1 in 10^20 isn't reasonable, go ahead, calculate a different one, then tell us how much it matters, quantitatively. Posit whatever favorable or neutral mutations you want. Just make sure they're consistent with the evidence in the literature (especially the rarity of resistance, the total number of cells available, and the demonstration by Summers et al. that a minimum of two specific mutations in PfCRT is needed for chloroquine transport). Tell us about the effects of other genes, or population structures, if you think they matter much, or let us know if you disagree for some reason with a reported literature result. Or, Ken, tell us how that ARMD phenotype you like to mention affects the math. Just make sure it all works out to around 1 in 10^20, or let us know why not. Everyone is looking forward to seeing your calculations. Please keep the rhetoric to a minimum. With all best wishes (especially to Professor Myers for a speedy recovery), Mike Behe http://www.evolutionnews.org/2014/07/show_me_the_num088041.html podcast - Michael Behe: Vindication for 'The Edge of Evolution,' Pt. 2 http://intelligentdesign.podomatic.com/entry/2014-08-06T15_26_19-07_00 "The Edge of Evolution" Strikes Again 8-2-2014 by Paul Giem - video https://www.youtube.com/watch?v=HnO-xa3nBE4 When Theory and Experiment Collide — April 16th, 2011 by Douglas Axe Excerpt: Based on our experimental observations and on calculations we made using a published population model [3], we estimated that Darwin’s mechanism would need a truly staggering amount of time—a trillion, trillion, years or more—to accomplish the seemingly subtle change in enzyme function that we studied. per biologic institute Is There Enough Time For Humans to have Evolved from Apes? Dr. Ann Gauger Answers - video http://www.youtube.com/watch?v=KN7NwKYUXOsbornagain77
bornagain77
November 15, 2014 at 06:55 PM PDT
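As a quick check of the double-CCC arithmetic in the Behe excerpt above -- taking the quoted 1-in-10^20 figure and the excerpt's own independence (squaring) step as given; the variable names are ours:

```python
# Double-CCC arithmetic from the Behe excerpt above. The 1-in-10^20
# figure is quoted from the thread; squaring follows the excerpt's
# stated independence assumption.
p_single_ccc = 1e-20               # one chloroquine-complexity event
p_double_ccc = p_single_ccc ** 2   # two independent events: 1e-40

cells_upper_bound = 1e40           # "fewer than 10^40 cells" in 4 Gyr

expected = p_double_ccc * cells_upper_bound
print(f"P(double CCC) = {p_double_ccc:.0e}")            # 1e-40
print(f"expected events at the upper bound: {expected:.1f}")
# about 1.0 even at the generous bound; fewer cells means odds against
```

The squaring step is where the independence assumption does all the work; if the two required mutations are not independent, the estimate changes accordingly, which is the substance of the Durrett and Schmidt exchange quoted above.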