Uncommon Descent Serving The Intelligent Design Community

On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe


I do enjoy reading ID’s most vehement critics, both in formal publications (such as books and papers) and on the somewhat less formal Internet blogosphere. Part of the reason is that it is rather reassuring to observe the vacuous nature of many of the critics’ attempted rebuttals to the challenge ID offers to neo-Darwinism, and to watch the theory’s sheer lack of explanatory power being compensated for by the religious ferocity of the associated rhetoric (to paraphrase Lynn Margulis). The prevalent pretense that the causal sufficiency of neo-Darwinism is an open-and-shut case (when no such open-and-shut case for the affirmative exists) never ceases to amuse me.

One such forum where esteemed critics lurk is the Panda’s Thumb blog, a website devoted to holding the Darwinian fort and endorsed by the National Center for Selling Evolution Science Education (NCSE). Since many of the Darwinian heavy guns blog for this website, we can conclude that, if demonstrably faulty arguments are commonplace there, the front-line Darwinism defense lobby is in deep water.

Recently, someone referred me to two articles (one, two) on the Panda’s Thumb website (from back in 2007), by Arthur Hunt (a professor in the Department of Plant and Soil Sciences at the University of Kentucky). The first is entitled “On the evolution of Irreducible Complexity”; the second, “Reality 1, Behe 0” (the latter posted shortly following the publication of Behe’s second book, The Edge of Evolution).

The articles purport to refute Michael Behe’s notion of irreducible complexity. But, as I intend to show here, they do nothing of the kind!

In his first article, Hunt begins,

There has been a spate of interest in the blogosphere recently in the matter of protein evolution, and in particular the proposition that new protein function can evolve. Nick Matzke summarized a review (reference 1) on the subject here. Briefly, the various mechanisms discussed in the review include exon shuffling, gene duplication, retroposition, recruitment of mobile element sequences, lateral gene transfer, gene fusion, and de novo origination. Of all of these, the mechanism that received the least attention was the last – the de novo appearance of new protein-coding genes basically “from scratch”. A few examples are mentioned (such as antifreeze proteins, or AFGPs), and long-time followers of ev/cre discussions will recognize the players. However, what I would argue is the most impressive of such examples is not mentioned by Long et al. (1).

There is no need to discuss the cited Long et al. (2003) paper in any great detail here, as this has already been done by Casey Luskin here (see also Luskin’s further discussion of Anti-Freeze evolution here), and I wish to concern myself with the central element of Hunt’s argument.

Hunt continues,

Below the fold, I will describe an example of de novo appearance of a new protein-coding gene that should open one’s eyes as to the reach of evolutionary processes. To get readers to actually read below the fold, I’ll summarize – what we will learn of is a protein that is not merely a “simple” binding protein, or one with some novel physicochemical properties (like the AFGPs), but rather a gated ion channel. Specifically, a multimeric complex that: 1. permits passage of ions through membranes; 2. and binds a “trigger” that causes the gate to open (from what is otherwise a “closed” state). Recalling that Behe, in Darwin’s Black Box, explicitly calls gated ion channels IC systems, what the following amounts to is an example of the de novo appearance of a multifunctional, IC system.

Hunt is making big promises. But does he deliver? Let me briefly summarise the gist of Hunt’s argument, and then weigh in on it.

The cornerstone of Hunt’s argument is the gene T-urf13, which, contra Behe’s delineated ‘edge’ of evolution, is supposedly a de novo mitochondrial gene that evolved very quickly from other genes specifying rRNA, together with some non-coding DNA elements. The gene specifies a transmembrane protein that facilitates the passage of hydrophilic molecules across the mitochondrial membrane in maize, opening only when bound on the exterior by particular molecules.

The protein is specific to the mitochondria of maize with Texas male-sterile cytoplasm, and has also been implicated in causing male sterility and sensitivity to T-cytoplasm-specific fungal diseases. Two parts of the T-urf13 gene are homologous to other parts in the maize genome, with a further component being of unknown origin. Hunt maintains that this proves that this gene evolved by Darwinian-like means.

Hunt further maintains that T-urf13 consists of at least three “CCCs” (recall Behe’s argument, advanced in The Edge of Evolution, that a double “CCC” is unlikely to be feasible by a Darwinian pathway). Two of these “CCCs”, Hunt argues, come from the binding of each subunit to at least two other subunits in order to form the heteromeric complex in the membrane. This entails that each subunit must possess at least two protein-binding sites.

Hunt argues for the presence of yet another “CCC”:

[T]he ion channel is gated. It binds a polyketide toxin, and the consequence is an opening of the channel. This is a third binding site. This is not another protein binding site, and I rather suppose that Behe would argue that this isn’t relevant to the Edge of Evolution. But the notion of a “CCC” derives from consideration of changes in a transporter (PfCRT) that alter the interaction with chloroquine; toxin binding by T-urf13 is quite analogous to the interaction between PfCRT and chloroquine. Thus, this third function of T-urf13 is akin to yet another “CCC”.

He also notes that,

It turns out that T-urf13 is a membrane protein, and in membranes it forms oligomeric structures (I am not sure if the stoichiometries have been firmly established, but that it is oligomeric is not in question). This is the first biochemical trait I would ask readers to file away – this protein is capable of protein-protein interactions, between like subunits. This means that the T-urf13 polypeptide must possess interfaces that mediate protein-protein interactions. (Readers may recall Behe and Snokes, who argued that such interfaces are very unlikely to occur by chance.)

[Note: The Behe & Snoke (2004) paper is available here, and their response (2005) to Michael Lynch’s critique is available here.]

Hunt tells us that “the protein dubbed T-urf13 had evolved, in one fell swoop by random shuffling of the maize mitochondrial genome.” If three CCCs really evolved in “one fell swoop” by specific but random mutations, then Behe’s argument is in trouble. But does any of the research described by Hunt make any progress toward demonstrating that this is even plausible? Short answer: no.

Hunt does have a go at estimating the probabilistic plausibility of such an event of neo-functionalisation. He tells us, “The bottom line – T-urf13 consists of at least three ‘CCCs’. Running some numbers, we can guesstimate that T-urf13 would need about 10^60 events of some sort in order to occur.”
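For the curious, the arithmetic behind that guesstimate is straightforward multiplication. Here is a minimal sketch (assuming, with Behe, that a single “CCC” corresponds to an event of probability roughly 1 in 10^20, and treating the three CCCs as independent; the variable names are my own):

```python
# Sketch of the arithmetic behind Hunt's "10^60" guesstimate.
# Assumption: one "CCC" (chloroquine-complexity cluster) is treated as an
# event of probability ~1 in 10^20, following The Edge of Evolution, and
# the three CCCs attributed to T-urf13 are treated as independent.
ccc_probability = 1e-20
n_cccs = 3

# Independent events multiply, giving roughly 1 in 10^60:
joint_probability = ccc_probability ** n_cccs
print(f"joint probability ~ {joint_probability:.0e}")
```

The whole dispute, of course, is over whether any unguided process had anywhere near 10^60 trials available to it.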

Look at what Hunt concludes:

Now, recall that we are talking about, not one, but a minimum of three CCC’s. Behe says 1 in 10^60, what actually happened occurred in a total event size of less that 10^30. Obviously, Behe has badly mis-estimated the “Edge of Evolution”. Briefly stated, his “Edge of Evolution” is wrong. [Emphasis in original]

Readers trained in basic logic will take quick note of the circularity involved in this argumentation. Does Hunt offer any evidence that T-urf13 could have plausibly evolved by a Darwinian-type mechanism? No, he doesn’t. In fact, he casually dismisses the mathematics which refutes his whole argument. Here we have a system with a minimum of three CCCs, and since he presupposes as an a priori principle that it must have a Darwinian explanation, this apparently refutes Behe’s argument! This is truly astonishing argumentation. Yes, certain parts of the gene have known homologous counterparts. But, at most, that demonstrates common descent (and even that conclusion is dubious). But a demonstration of homology, or common ancestral derivation, or a progression of forms is not, in and of itself, a causal explanation. Behe himself noted in Darwin’s Black Box, “Although useful for determining lines of descent … comparing sequences cannot show how a complex biochemical system achieved its function—the question that most concerns us in this book.” Since Behe already maintains that all life is derivative of a common ancestor, a demonstration of biochemical or molecular homology is not likely to impress him greatly.

How, then, might Hunt and others successfully show Behe to be wrong about evolution? It’s very simple: show that adequate probabilistic resources existed to facilitate the plausible origin of these types of multi-component-dependent systems. If, indeed, each fitness peak is separated from the next by more than a few specific mutations, it remains difficult to envision how the Darwinian mechanism might adequately facilitate the transition from one peak to another within any reasonable time frame. Douglas Axe, of the Biologic Institute, showed in a recent paper in the journal BIO-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalisation. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is up to six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself). Another study, published in Nature in 2001 by Keefe and Szostak, documented that more than a million million random sequences were required in order to stumble upon a functioning ATP-binding protein, a protein substantially smaller than the transmembrane protein specified by T-urf13. Axe has also documented (2004), in the Journal of Molecular Biology, the prohibitive rarity of functional enzymatic binding domains with respect to the vast sea of combinatorial sequence space of a 150 amino-acid residue protein domain (beta-lactamase).
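The scale of the numbers cited above can be made concrete with a back-of-the-envelope comparison (assuming a 150-residue chain built from the 20 standard amino acids, and Keefe and Szostak’s library of roughly 10^12 random sequences; this is an illustration of the sizes involved, not a reproduction of either paper’s calculation):

```python
import math

# Back-of-the-envelope comparison of the numbers cited above.
# Assumptions: a 150-residue protein built from the 20 standard amino
# acids, and a screened library of ~10^12 (a million million) random
# sequences, as in Keefe & Szostak (2001).
sequence_space = 20 ** 150      # all possible 150-mers
library_size = 10 ** 12         # sequences actually screened

space_exponent = math.log10(sequence_space)                   # ~195
sampled_exponent = math.log10(library_size) - space_exponent  # ~-183

print(f"sequence space   ~ 10^{space_exponent:.0f}")
print(f"fraction sampled ~ 10^{sampled_exponent:.0f}")
```

Even a trillion-member library, in other words, samples only about one part in 10^183 of the full 150-mer sequence space.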

What, then, can we conclude? Contrary to his claims, Hunt has failed to provide a detailed and rigorous account of the origin of T-urf13. Nor does he supply any mathematical demonstration that the de novo origin of such genes is sufficiently probable to be justifiably attributed to an unguided or random process, or any demonstration that a step-wise pathway exists in which novel utility is conferred at every step (each step separated by no more than one or two mutations) on the way to the emergence of the T-urf13 gene.

The Panda’s Thumb crowd are really going to have to do better than this if they hope to refute Behe!

Comments
jon what in the world are you talking about??? Questioning your relationship with God???? I stated this,,, 'perhaps you find establishing the Theistic basis of reality to be irrelevant, yet none-the-less, to answer the question you put forth (How does one differentiate a teleological Darwinian process from one that isn’t?) it is necessary to establish precisely that point, to answer whether reality is materialistic or theistic in its basis! As for you saying that CSI is completely superfluous, that is only true if one does not care to see or know where and if God has interceded further in the universe He created and sustains!',,, I cannot see where I directly accused you of being a neo-Darwinist/atheist anywhere in that post!!!! As for my comment on CSI being important for Design Detection ON TOP of what God has already created and sustains, I solidly maintain that CSI is very important if one is concerned with detecting design on top of the design ALREADY established in the universe. Perhaps it's just as well that you go so far off base and accuse me of slander where there was no malicious intent, for this entire thread has been extremely bizarre as to the respect of reason, and I just as gladly cease from participating in it.

bornagain77
March 12, 2011, 6:08 AM PDT
Bornagain, First of all, let us get one thing out of the way. I don't appreciate you questioning my relationship with God. That is a matter between Him and me and you would be better served remembering about motes and beams. Second, you have twisted my words when you accuse me of saying CSI is superfluous. I said that CSI would be superfluous if everything is designed, which seems to be your argument, as best as I can tell. If everything is designed, you don't need a design detector. At this point, you have committed two slanders against me by questioning my relationship with our Father and misrepresenting my question. I believe you need to make amends if we are going to proceed further.

jon specter
March 12, 2011, 5:52 AM PDT
vjtorley #283 I am quite confused by your comment. You write: I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs. It appears that you believe that CSI is sometimes a useful way to detect design and sometimes not. Is that right? If so, if you come across some CSI how do you know if it is a sign of design?

markf
March 12, 2011, 4:12 AM PDT
jon specter, perhaps you find establishing the Theistic basis of reality to be irrelevant, yet none-the-less, to answer the question you put forth (How does one differentiate a teleological Darwinian process from one that isn’t?) it is necessary to establish precisely that point, to answer whether reality is materialistic or theistic in its basis! As for you saying that CSI is completely superfluous, that is only true if one does not care to see or know where and if God has interceded further in the universe He created and sustains!

bornagain77
March 12, 2011, 2:20 AM PDT
Isn’t it obvious???
Well, no. You questioned what Darwinism can or can't do, then went on about mind-body duality, quantum mechanics, the second law of thermodynamics, the big bang, gravity, a supposed refutation of front loading, and finally The Shroud of Turin. It was a masterpiece of prose, but it did not address what I thought was a relatively simple question, specifically the technique used to differentiate a teleological Darwinian process from a non-teleological event. In your latest response, you revisit several of those themes, then also discussed Gödel’s Incompleteness Theorem, some po-mo stuff about whether our perceptions and observations can be trusted, and attacked the idea of a multiverse. It was another tour de force, to be sure, but still didn't answer my question. Your final paragraph came the closest:
Perhaps jon a simpler way for you to understand this is can you please tell me how a Darwinian process could possibly be construed as ‘non-teleological’ in a Theistic universe?
First of all, as I have said previously, I am not a Darwinist, so I am under no obligation to provide proof of what I don't subscribe to. However, your final phrase "how a Darwinian process could possibly be construed as ‘non-teleological’ in a Theistic universe?" seems to suggest that there are, ultimately, no non-teleological processes. If that is the case, it seems to me that the idea of CSI is completely superfluous. If everything is designed, a tool to differentiate design from non-design is unnecessary because it will only ever yield one answer.

jon specter
March 11, 2011, 11:00 PM PDT
jon specter, Isn't it obvious??? As I pointed out, If you want to answer that question You must look at the basis of reality!!! Since the basis of reality is found to be teleological/Theistic (logos; information theoretic) in nature, then that answers the question!!! If a 'Darwinian' process of a gradual increase of functional information ever did occur, it would ultimately have to be attributable to a teleological processes since reality is shown to be Theistic in its basis,,, shown not to be 'materialistic' in its basis; notes; falsification of reductive materialism; The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145 Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap - 2009 Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre.,,, "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. 
But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts Quantum no-hiding theorem experimentally confirmed for first time Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This experiment provides experimental proof that the teleportation of quantum information must be instantaneous in this universe.) http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html etc.. etc.. etc.. Falsification of non-reductive materialism; THE GOD OF THE MATHEMATICIANS - DAVID P. GOLDMAN - August 2010 Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel's critique of the continuum hypothesis has the same implication as his incompleteness theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes. 
http://www.faqs.org/periodicals/201008/2080027241.html Gödel’s Incompleteness: The #1 Mathematical Breakthrough of the 20th Century Excerpt: Gödel’s Incompleteness Theorem says: “Anything you can draw a circle around cannot explain itself without referring to something outside the circle - something you have to assume to be true but cannot prove "mathematically" to be true.” http://www.cosmicfingerprints.com/blog/incompleteness/ This following site is a easy to use, and understand, interactive website that takes the user through what is termed 'Presuppositional apologetics'. The website clearly shows that our use of the laws of logic, mathematics, science and morality cannot be accounted for unless we believe in a God who guarantees our perceptions and reasoning are trustworthy in the first place. Proof That God Exists - easy to use interactive website http://www.proofthatgodexists.org/index.php Nuclear Strength Apologetics – Presuppositional Apologetics – video http://www.answersingenesis.org/media/video/ondemand/nuclear-strength-apologetics/nuclear-strength-apologetics Materialism simply dissolves into absurdity when pushed to extremes and certainly offers no guarantee to us for believing our perceptions and reasoning within science are trustworthy in the first place: Dr. Bruce Gordon - The Absurdity Of The Multiverse & Materialism in General - video http://www.metacafe.com/watch/5318486/ BRUCE GORDON: Hawking's irrational arguments - October 2010 Excerpt: The physical universe is causally incomplete and therefore neither self-originating nor self-sustaining. The world of space, time, matter and energy is dependent on a reality that transcends space, time, matter and energy. This transcendent reality cannot merely be a Platonic realm of mathematical descriptions, for such things are causally inert abstract entities that do not affect the material world. Neither is it the case that "nothing" is unstable, as Mr. Hawking and others maintain. 
Absolute nothing cannot have mathematical relationships predicated on it, not even quantum gravitational ones. Rather, the transcendent reality on which our universe depends must be something that can exhibit agency - a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. This is what "breathes fire into the equations and makes a universe for them to describe.,,, the evidence for string theory and its extension, M-theory, is nonexistent; and the idea that conjoining them demonstrates that we live in a multiverse of bubble universes with different laws and constants is a mathematical fantasy. What is worse, multiplying without limit the opportunities for any event to happen in the context of a multiverse - where it is alleged that anything can spontaneously jump into existence without cause - produces a situation in which no absurdity is beyond the pale. For instance, we find multiverse cosmologists debating the "Boltzmann Brain" problem: In the most "reasonable" models for a multiverse, it is immeasurably more likely that our consciousness is associated with a brain that has spontaneously fluctuated into existence in the quantum vacuum than it is that we have parents and exist in an orderly universe with a 13.7 billion-year history. This is absurd. The multiverse hypothesis is therefore falsified because it renders false what we know to be true about ourselves. Clearly, embracing the multiverse idea entails a nihilistic irrationality that destroys the very possibility of science. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ etc.. etc.. etc.. Perhaps jon a simpler way for you to understand this is can you please tell me how a Darwinian process could possibly be construed as 'non-teleological' in a Theistic universe?bornagain77
March 11, 2011, 7:36 PM PDT
Bornagain77, That is all very interesting, but how do you differentiate a teleological Darwinian process from one that isn’t?

jon specter
March 11, 2011, 6:48 PM PDT
jon specter, you ask, 'How does one differentiate a teleological Darwinian process from one that isn't?' That is an interesting question that never gets asked of evolutionary processes, since neo-Darwinists do not have any examples of evolution that have ever generated information over and above what was already present. In fact, all beneficial adaptations I have ever been made aware of always end up occurring because of a loss, or 'adjustment', of pre-existent functional information.

"The First Rule of Adaptive Evolution": Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/

Yet, jon specter, if an example of evolution ever did occur that generated non-trivial information over and above what was already present, and one were to ask that question - ask if it were teleological evolution or not - you would first have to ask a very basic question: whether the foundation of reality you are measuring from is teleological in its nature or not. Which in fact turns out to be the case;

Alain Aspect and Anton Zeilinger by Richard Conn Henry - Physics Professor - Johns Hopkins University Excerpt: Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the "illusion" of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist). (Dr. Henry's referenced experiment and paper - "An experimental test of non-local realism" by S. Gröblacher et al., Nature 446, 871, April 2007 - "To be or not to be local" by Alain Aspect, Nature 446, 866, April 2007. Personally, I feel the word "illusion" was a bit too strong from Dr. Henry to describe material reality, and would myself have opted for his saying something a little more subtle, like "material reality is a 'secondary reality' that is dependent on the primary reality of God's mind" to exist. Then again, I'm not a professor of physics at a major university, as Dr. Henry is.) http://henry.pha.jhu.edu/aspect.html

Of note, jon: this is from the quantum mechanics perspective, where we find that God 'sustains' reality. But from a General Relativity perspective we find that God 'created' the universe. Moreover, God created from a baseline of thermodynamics that prevents any 'future' space-time materialistic processes of general relativity from ever generating non-trivial functional information on their own;

The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose Excerpt: "The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the 'source' of the Second Law (Entropy)."

How special was the big bang? - Roger Penrose Excerpt: This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 10^10^123. (from The Emperor's New Mind, Penrose, pp. 339-345, 1989) http://www.ws5.com/Penrose/

Evolution is a Fact, Just Like Gravity is a Fact! UhOh! Excerpt: The results of this paper suggest gravity arises as an entropic force, once space and time themselves have emerged.
https://uncommondescent.com/intelligent-design/evolution-is-a-fact-just-like-gravity-is-a-fact-uhoh/

And as far back in the history of the universe as we look, entropy has always been working its work of increasing disorder, and thus:

"Gain in entropy always means loss of information, and nothing more." - Gilbert Newton Lewis, eminent chemist

jon specter, it is also interesting to note that a lot of theistic evolutionists actually argue from a deistic presupposition, and argue that God 'front-loaded' information into the initial conditions of the universe. But closer scrutiny reveals this is not possible:

The Front-loading Fiction - Dr. Robert Sheldon - 2009 Excerpt: Historically, the argument for front-loading came from Laplacian determinism based on a Newtonian or mechanical universe - if one could control all the initial conditions, then the outcome was predetermined. First quantum mechanics, and then chaos theory, has basically destroyed it, since no amount of precision can control the outcome far in the future. (The exponential nature of the precision required to predetermine the outcome exceeds the information storage of the medium.)... Even should God have infinite knowledge of the outcome of such a biological algorithm, the information regarding its outcome cannot be contained within the (initial conditions of the) system itself. http://procrustes.blogtownhall.com/2009/07/01/the_front-loading_fiction.thtml

further notes:

The Center Of The Universe Is Life - General Relativity, Quantum Mechanics, Entropy and The Shroud Of Turin - video http://www.metacafe.com/w/5070355

A Quantum Hologram of Christ's Resurrection? by Chuck Missler Excerpt: "You can read the science of the Shroud, such as total lack of gravity, lack of entropy (without gravitational collapse), no time, no space - it conforms to no known law of physics." The phenomenon of the image brings us to a true event horizon, a moment when all of the laws of physics change drastically. http://www.khouse.org/articles/2008/847

Turin Shroud Enters 3D Age - Pictures, Articles and Videos https://docs.google.com/document/pub?id=1gDY4CJkoFedewMG94gdUk1Z1jexestdy5fh87RwWAfg

bornagain77
March 11, 2011 at 05:35 PM PDT
Indium, no one disagrees that, in principle, Darwinian processes can generate ‘Information’. Even Dr. Dembski agrees that they, in principle can. The only caveat is that if Darwinian processes ever generated ‘Information’ it would have to be by a teleological (of a mind) process
That is very interesting. How does one differentiate a teleological Darwinian process from one that isn't?

jon specter
March 11, 2011 at 04:30 PM PDT
Indium, no one disagrees that, in principle, Darwinian processes can generate 'information'. Even Dr. Dembski agrees that, in principle, they can. The only caveat is that if Darwinian processes ever generated 'information', it would have to be by a teleological (of a mind) process;

"LIFE'S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information": Excerpt: Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms. http://evoinfo.org/publications/lifes-conservation-law/

Yet the whole issue of whether to argue that teleological or non-teleological processes are inherent in nature is not even on the table, for no one has ever witnessed the generation of functional information over and above what was already present in life. For a broad outline of the 'fitness test', required to be passed to show a violation of the principle of genetic entropy, please see the following video and articles:

Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248

Michael Behe on Falsifying Intelligent Design - video http://www.youtube.com/watch?v=N8jXXJN4o_A

Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology Excerpt: it (an antibiotic-resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.html

Testing Evolution in the Lab With Biologic Institute's Ann Gauger - podcast with link to peer-reviewed paper Excerpt: Dr. Gauger experimentally tested two-step adaptive paths that should have been within easy reach for bacterial populations. Listen in and learn what Dr. Gauger was surprised to find as she discusses the implications of these experiments for Darwinian evolution. Dr. Gauger's paper: "Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness". http://intelligentdesign.podomatic.com/entry/2010-05-10T15_24_13-07_00

The main problem for the secular model of neo-Darwinian evolution to overcome is that no one has ever seen purely material processes generate functional 'prescriptive' information.

The Capabilities of Chaos and Complexity: David L. Abel - Null Hypothesis For Information Generation - 2009 To focus the scientific community's attention on its own tendencies toward overzealous metaphysical imagination bordering on "wish-fulfillment," we propose the following readily falsifiable null hypothesis, and invite rigorous experimental attempts to falsify it: "Physicodynamics cannot spontaneously traverse The Cybernetic Cut: physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration." A single exception of non-trivial, unaided spontaneous optimization of formal function by truly natural process would falsify this null hypothesis. http://www.mdpi.com/1422-0067/10/1/247/pdf

The Sheer Lack Of Evidence For Macro Evolution - William Lane Craig - video http://www.metacafe.com/watch/4023134/

Thus Indium, you may want to take false comfort in the fact that Darwinian evolution is possible in principle, but it would be a very shallow, and dishonest, comfort, since you have no violations of genetic entropy to even show an increase of 'non-trivial' information above that which was already present!

-----

'If I find in myself desires which nothing in this world can satisfy, the only logical explanation is that I was made for another world.' - C.S. Lewis

Brooke Fraser - "C S Lewis Song" http://www.youtube.com/watch?v=GHpuTGGRCbY

Hope http://vimeo.com/10193827

bornagain77
March 11, 2011 at 02:52 PM PDT
Surprising and very promising. Everybody (except for Joseph, who seems to have run out of arguments) agrees that evolution can generate CSI. And this on an ID blog! This thread will be a great reference for the future. Maybe you should put this conclusion into the "Glossary" on the front page?

Indium
March 11, 2011 at 01:32 PM PDT
MathGrrl: "Based on your calculations for titin, it seems to me that your calculation of CSI suffers from a similar problem as does that suggested by kairosfocus (two raised to the power of the length of the artifact). As I said in my response to him, if this is your definition of CSI, known evolutionary mechanisms are demonstrably capable of generating it in both real and simulated environments."

... and herein lies the problem. It appears that you have misunderstood almost everything, or skipped over a huge portion of, what Dembski has written in the aforementioned paper, what KF has very patiently explained above, and what I have both explained and provided calculations for in my provided links. I'm pretty sure it's impossible to miss, based on everything that has been stated and calculated by three sources (Dembski, KF, and myself), that CSI is a measure of the bit length of the pattern/event in question (given a uniform probability distribution) - which you refer to - COMPARED against both the *ratio of function to non-function in the search space* and the *universal probability bound*. So, yes, discovering the Shannon information of the pattern/event in question, as you point out, is definitely part of the equation, but you are missing the other vital two-thirds - search space and probability bound. And I can't see any excuse for this if you actually did read through Dembski's paper, KF's discussions, and my explanations and calculations. It's all right there. I don't mean to sound demeaning in any way; I'm just a little confused as to how you missed that vital two-thirds of the equation for, and concept of, CSI when it has been explained quite thoroughly by three different sources. If you have any more questions, please ask.

In the end, programming a CSI calculator would not be difficult, as the only calculations required are multiplication and logarithm. Asking the right questions for a user to input the proper values in the proper locations might be difficult, though. Also, the difficult part would be the empirical research necessary to find and defend the use of the numbers that would be plugged into the variables. But as a few people, including myself, have shown, it is possible.

Furthermore, if you will read through my provided links above again, you will notice that I have no qualm with evolutionary mechanisms generating CSI. Intelligence can use whatever means are at its disposal, including evolutionary algorithms, to generate CSI. The problem is whether or not chance and law, absent intelligence, can generate evolutionary algorithms that produce CSI. According to the recent work done by Dembski and Marks, the answer is that the EA itself is at least as improbable as the output of that EA. Therefore, if chance and law can't generate CSI from scratch, then they won't be able to generate the evolutionary algorithm that can produce that CSI. Again, I have discussed this in the links I provided earlier.

CJYman
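CJYman's remark that a CSI calculator needs only multiplication and logarithms can be sketched directly from the formula quoted elsewhere in this thread, Chi = -log2[10^120 · Phi_s(T) · P(T|H)]. The sketch below is a minimal, non-authoritative illustration: it works in log space to avoid overflow on genome-scale numbers, and the example inputs are hypothetical placeholders, since (as CJYman notes) the hard part is defending the empirical values plugged in.

```python
import math

# log2 of Dembski's 10^120 bound on replicational resources
LOG2_UPB = 120 * math.log2(10)

def specified_complexity(log2_phi_s, log2_p):
    """Chi = -log2[10^120 * Phi_s(T) * P(T|H)], computed in log space.

    log2_phi_s -- log2 of Phi_s(T), the number of patterns described
                  at least as simply as T
    log2_p     -- log2 of P(T|H), the probability of T under the
                  chance hypothesis H
    """
    return -(LOG2_UPB + log2_phi_s + log2_p)

# Hypothetical inputs: a pattern drawn from 2^500 equiprobable outcomes,
# with a uniquely simple description (Phi_s = 1, so log2_phi_s = 0).
chi = specified_complexity(log2_phi_s=0, log2_p=-500)
# chi ≈ 101.4 > 1, which this formula would count as a specification
```

Working in log space matters here: quantities like 4^(3,000,000,000), which appear later in this thread, cannot be represented as floating-point numbers directly.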
March 11, 2011 at 12:55 PM PDT
vjtorley,
When someone spends several hours of their valuable time trying to answer another person’s query, it is extremely discourteous of the person making the query to say in reply: “I’m afraid your response doesn’t address the issue,” or “I’ve read that paper, but it isn’t pertinent to this conversation.”
I genuinely appreciate your effort to discuss this with me and I never intended to give any impression otherwise. Please allow me to expand on my reasons for posting those responses to your previous message. Throughout this thread I have been attempting to get straightforward answers to what I believe are straightforward, even simple questions. The most basic is "What is the mathematically rigorous definition of CSI?" In order to guarantee that I understand any definition offered, I provided four scenarios (see messages 177 and 275 above) and asked to be shown exactly how to calculate CSI for them. As you can see, the only response remotely like a mathematical definition seems to boil down to "two to the power of the length of the genome" and no one has shown how to apply it to my scenarios (until your most recent post, of course). In your previous message you clearly spent a lot of time talking about the issues related to CSI. I have read most of the material you referenced and would find it to be an interesting topic of conversation in another context. My concern here, based on lurking on UD for some time prior to participating, is that all of that interesting content could easily derail the core topic I'm trying to reach resolution on, namely the mathematical definition of CSI and how to apply it in my four scenarios. I hope you understand that my response was based on my focus on this goal, not on any intention to be rude to you. Now, I hope you're still reading because I'm delighted to get on to the real meat of your response.
Since you are fixated with CSI, then I shall refer you to Dembski’s paper, Specification: The Pattern that Signifies Intelligence .
Excellent, I'm familiar with that paper.
Now suppose a gene in an organism gets duplicated. Humans have about 30,000 genes. Duplication of just one gene in the human genome will very slightly lengthen the semiotic description of the genome. If we let (AGTCGAGTTC) denote the random sequence of bases along the gene in question, and …….. signify the rest (which are also random, let’s say), then the description of the duplicated genome will be ……..(AGTCGAGTTC)x2 instead of ……..(AGTCGAGTTC). In other words, we’re just adding two characters to the description, which is negligible.
Why are you using "x2" instead of the actual sequence? Using the "two to the power of the length of the sequence" definition of CSI, we should be calculating based on the actual length. I can see how the Kolmogorov-Chaitin complexity might make more sense, but that's not what I see ID proponents using.
Now on page 24, Dembski defines a specification as a pattern whose specified complexity is greater than 1. For such a specification, we can eliminate chance. I note that for the duplicated genome, the specified complexity Chi is much greater than 1, so Dembski’s logic seems to imply that any instance of gene duplication is the result of intelligent agency and not chance.
I didn't double check your math, but the orders of magnitude seem about right. I agree with you that by Dembski's logic a gene duplication event generates CSI.
And it would be, if we imagine that each extra base in the duplicated genome was added to the original genome, one at a time. For the odds of adding 100,000 bases independently, which just happened to perfectly match the 100,000 bases they were sitting next to, would be staggering. But that’s not how gene duplication works.
Absolutely! I actually got to this point with gpuccio over on Mark Frank's blog. Understanding the actual probabilities of a particular sequence arising requires knowledge of the history of the evolution of that sequence. That's not a surprise, really. The problem, that you have identified as well, is that not even the informally defined versions of CSI used by ID proponents take this into consideration. Dembski doesn't in No Free Lunch or in this paper, and none of the other participants in this thread have yet done so.
I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.
I agree completely.
Well, I’ve done the hard work. I hope you will be convinced now that fixating on a single measure of information is unhelpful.
Thank you, very sincerely, for your effort. The problem is, I'm not the one you need to convince. There are a number of ID proponents who continue to make the claim that CSI is a clear and unambiguous indicator of intelligent agency. They are the ones who need to be convinced to stop making those claims, claims that you yourself have shown to be unsupported, unless and until someone comes up with an alternative metric.

MathGrrl
March 11, 2011 at 07:48 AM PDT
Joseph,
If you have a copy of a dictionary and I give you another copy of the same dictionary, do you have twice the information or just two dictionaries with the same information?
As Jon has already pointed out, if your definition of "information" is two raised to the power of the length of the sequence under consideration, then the answer is yes. That's why I asked for an example of how to calculate CSI in that particular scenario. Your analogy has another problem, though. Having an additional copy of a gene can result in greater production of a particular protein. That, in turn, can change the chemical reactions that take place in a cell. That is why I included the specification of "Produces X amount of protein Y." A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X. By the definitions provided thus far, this shows that known evolutionary mechanisms can generate CSI. If you disagree, please show your work.

MathGrrl
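For concreteness, here is the arithmetic behind that answer under the naive "2^(length)" measure being debated in this thread, using a made-up ten-base gene (the measure itself, not this sketch, is what is in dispute):

```python
import math

gene = "AGTCGAGTTC"        # hypothetical ten-base gene
duplicated = gene * 2      # duplication: an exact copy lands next to the original

# Under the naive measure, "information" = 2^(length), i.e. length bits
bits_before = math.log2(2 ** len(gene))        # 10.0
bits_after = math.log2(2 ** len(duplicated))   # 20.0
# The measured bits double even though the copy adds no new sequence content,
# which is why a bare duplication "generates CSI" under this definition
```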
March 11, 2011 at 07:44 AM PDT
Indium,
So, Mathgrrl, if you persist and continue to ask for calculations, at some point you will probably be told something like this: 1AFOK1HI917ZHG0LQBMNHJI4FGHE67HZ82HJT5RT8U54FV How would you want to evolve this? Impossible. And still it is the WPA2 password of my wireless network.
This is, as I'm sure you know, an excellent example of a fitness landscape that evolutionary mechanisms are not well suited to traverse. I've heard cryptographers refer to it as "a needle standing on end in a desert."
At this point you will probably answer that replicating organisms don't have to match a specific key. KF will tell you that life sits on sparse islands of functionality, which you will dispute based on papers which show binding action in random libraries of proteins. Slowly the discussion will move away from the original question. Be prepared to be asked where the proteins come from and what's up with the fine-tuning of our universe.
I find your prognostications interesting and would like to subscribe to your newsletter. ;-)

MathGrrl
March 11, 2011 at 07:43 AM PDT
1. Do you agree that the amount of CSI in a sequence is measured by 2^(length of the sequence)? Y/N Unfortunately it ain't that black and white. In biology, specified complexity refers to function.
Then why are people upthread calculating CSI by that formula? Are you saying they are wrong? What formula do you use?
2. Does non-coding DNA have meaning? Y/N It is possible.
An unhelpful answer. To the extent that it is even an answer. Nothing personal, but I think I may just wait for one of the ID scientists to answer.

jon specter
March 11, 2011 at 05:28 AM PDT
Joseph, I don't understand your points 1+2. The duplicated gene at first codes for the same protein as the original one. Changes to the gene will sometimes (most of the time, really) not completely destroy the binding efficiency, and ultimately sometimes also lead to new binding effects. At the same time it's no longer a duplicate, so you have to take it into account when calculating your DFCSID.

Anyway, my main point is undisputed by your arguments: evolutionary processes can *in principle* generate new information. It might be difficult, it might be rare, it might take a long time. But it is *in principle* possible.

If you want to exclude parts of your sequence which originated from a duplication, you would have to check if parts of your original sequence are not ultimately the result of gene duplication too, to also remove these parts from your calculation. Ultimately, you would have to analyse the evolutionary history of each part of the genome in question to see if there isn't a relatively simple evolutionary explanation, before applying your 2^N formula.

Points 3+4 don't address my argument at all. I specifically stated that I argue only for one specific thing here: is it in principle possible that evolutionary processes can generate new information as per your definition? Duplication and divergence easily do the job (in theory). In practice, I agree, there might be difficulties! ;-)

Indium
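Indium's "duplication + divergence" scenario can be stated as a toy simulation. The sketch below is purely illustrative: the sequence, its length, and the mutation rate are made up, and no fitness or function is modelled, so it shows only that the mechanism is well-defined, not that it produces anything functional (which is exactly the point under dispute).

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible
BASES = "ACGT"

def mutate(seq, rate):
    """Return a copy of seq with random point mutations at the given per-base rate."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in seq)

original = "".join(random.choice(BASES) for _ in range(100))  # toy 100-base gene
duplicate = mutate(original, rate=0.2)  # the copy drifts after the duplication event

# The diverged copy keeps its length but differs at some positions, so the
# pair can no longer be compressed to a description like "(original) x2"
hamming = sum(a != b for a, b in zip(original, duplicate))
```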
March 11, 2011 at 05:11 AM PDT
jon specter:
1. Do you agree that the amount of CSI in a sequence is measured by 2^(length of the sequence)? Y/N
Unfortunately it ain't that black and white. In biology, specified complexity refers to function.
2. Does non-coding DNA have meaning? Y/N
It is possible.

Joseph
March 11, 2011 at 04:37 AM PDT
Indium,

1- You need to get binding sites with the gene duplication.
2- You need to then get that protein integrated into the existing system; otherwise you just have a protein diffusing through the cell, able to cause problems.
3- There isn't any evidence that blind watchmaker processes can do such a thing.
4- In a design scenario the information is in the program, and if the program says to duplicate and diverge then it ain't new, nor an increase in information.

Joseph
March 11, 2011 at 04:34 AM PDT
Joseph: Duplication events are often followed by a later divergence of the two siblings. So even if the duplication alone will not create information in your sense, the divergence will do the trick. Also, the duplicated part of the genome already starts on one of the famous islands of functionality and can begin its mutational trajectory from a "working" starting point. So, just in principle, the process "gene duplication + divergence" can generate new information. Please note that (at this point) I am not saying that this is a reasonable model of evolution. I am just saying here that there are evolutionary processes which could *in principle* generate new information.

Indium
March 11, 2011 at 12:27 AM PDT
Joseph:
Wow- with ID information and meaning are inseparable.
We may be getting somewhere here. Two questions:

1. Do you agree that the amount of CSI in a sequence is measured by 2^(length of the sequence)? Y/N
2. Does non-coding DNA have meaning? Y/N

jon specter
March 10, 2011 at 12:49 PM PDT
Jon Specter:
If the measure of information is two raised to the power of the number of characters in the dictionary, then yes, there is more information in two dictionaries. Are you sure you are not confusing information and meaning?
Wow - with ID, information and meaning are inseparable. You are confusing Shannon Information with Complex Specified Information.
Well, gene duplications have been observed. Has anyone seen a big finger come out of the sky at the same time?
Do computers come with programmers or programs? Do you need a programmer to do your spellchecking or does the program suffice?

Joseph
March 10, 2011 at 11:59 AM PDT
UB:
To say that something can create information is to say that something can create the symbolic representations (mapping of symbol to object) necessary for information to exist in the first place.
I do appreciate you going to the effort to explain it, although I did understand what you were saying earlier. But it seems you are doing the same thing Joe is doing by mixing information and meaning. I am kinda new to this, so let me lay out my understanding of what mathgirl is saying and you can correct me where I am wrong. I apologize for the simplistic approach I am taking, but moving forward methodically is my best chance for understanding. In short, the Y/N format is about me, not you.

1. The CSI of a genome sequence is two raised to the power of the length of the sequence. (Yes/No)
2. Any two gene sequences of the same length have the same amount of CSI. (Yes/No)
3. Any two gene sequences of different lengths have different amounts of CSI. (Yes/No)
4. The longer gene sequence will have more CSI. (Yes/No)
You may attack this argument by showing any example from anywhere in the cosmos where information exists by any means other than through symbolic representation.
Believe it or not, I am on your side here. I am not attacking anything; I am only trying to understand. I'd like to see mathgirl slink away with her (prehensile?) tail between her legs as much as the next guy. But victory is secondary to my understanding. So, if you could indulge my 4 Y/N questions, I would much appreciate it.

jon specter
March 10, 2011 at 11:31 AM PDT
Numbers have to do with "complexity". There isn't a problem here with "complexity", except for a lack of it, generally; that is, most sequences discussed don't exceed the UPB of 10^150. Even the sequence that Indium gives is around 10^70 in complexity.

The problem always is with the notion of "specificity". Not just any pattern will do. Patterns have to be convertible, translatable, functional. Take Dembski's example of the first 100 binary digits appearing as a random sample. What distinguishes the sequence, and makes it "patterned", is that you can "convert"/"translate" it into digits 0-99. Indium's sequence is convertible - into binary form - but this is trivial. It is, essentially, functional and translatable. Its function is obvious, and it's translatable, because his WAN can recognize it and translate it (under its binary form) into the binary sequence allowable for access. Random patterns are complex, but not specific.

And MathGrrl's heroes don't seem to understand this. Yet they have no excuse for not fully understanding Dembski's exposition of it in NFL. It's hard work, but it's doable. However, if you don't care to understand, then you will give up before you do.

PaV
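PaV's estimate is easy to check. Treating Indium's 46-character key as drawn from a 36-symbol alphabet (uppercase letters plus digits - an assumption about the keyspace, since WPA2 passphrases allow more characters), the search space works out to roughly 10^71.6, comfortably below the 10^150 universal probability bound:

```python
import math

key = "1AFOK1HI917ZHG0LQBMNHJI4FGHE67HZ82HJT5RT8U54FV"  # Indium's example key
ALPHABET = 36            # assumed keyspace: uppercase letters plus digits

log10_space = len(key) * math.log10(ALPHABET)  # log10 of 36^46, ≈ 71.6
LOG10_UPB = 150                                # universal probability bound, 10^150

# A huge search space, but ~78 orders of magnitude short of the UPB,
# in line with PaV's "around 10^70" figure
```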
March 10, 2011 at 10:51 AM PDT
MathGrrl: A word of advice about good manners. When someone spends several hours of their valuable time trying to answer another person's query, it is extremely discourteous of the person making the query to say in reply: "I'm afraid your response doesn't address the issue," or "I've read that paper, but it isn't pertinent to this conversation." (Another word of advice: when making a sweeping claim, one should be prepared to substantiate it. It would add to your credibility if you could explain why you thought it was not pertinent, by citing a passage from the paper itself. Casual put-downs can backfire; they make you sound like you're bluffing.)

Finally, to add insult to injury, you write: "I don't have an unlimited amount of time for discussions such as these." Gee, thanks. I'm not a mathematician, I'm doing this research as an act of courtesy, and I really don't appreciate being kicked in the teeth. Speaking of time, how long did it take you to dash off that response? Ten minutes?

The fact is that you and I have different perceptions of what "the issue" is. For me, the issue is a simple one: has Professor Dembski provided a mathematically rigorous definition of "information" which shows that Darwinian evolution is incapable of creating information? That was your original question. The answer I gave you was: Yes. I referred you to Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information by William A. Dembski and Robert J. Marks II in my previous post. I even copied out the definition of information that Dembski was using, so you wouldn't have to look it up. By any normal standard of discourse, my response was a helpful one.

You have chosen to turn your nose up at this response, however, simply because the definition that the paper in question employs isn't the same as Dembski's original definition of CSI. Frankly, I am not amused. Your fixation with this particular definition is puzzling.
There are dozens of mathematically rigorous definitions of "information" in the literature. If Dembski can demonstrate his contention that Darwinian evolution is incapable of creating information using any one of those definitions, that should be enough.

One of the scenarios you listed was gene duplication. I provided you with links to two articles by Jonathan M., showing that it doesn't result in an increase of information. Your response? "What, specifically, is wrong with the specification I provide for the gene duplication scenario?" Que?

Since you are fixated with CSI, then I shall refer you to Dembski's paper, Specification: The Pattern that Signifies Intelligence. Let's see how much you really know. On page 24 of his paper, Dembski defines the specified complexity Chi (minus the context sensitivity) as -log2[(10^120).Phi_s(T).P(T|H)], where T is the pattern in question, H is the chance hypothesis, and Phi_s(T) is the number of patterns for which agent S's semiotic description of them is at least as simple as S's semiotic description of T.

Now suppose a gene in an organism gets duplicated. Humans have about 30,000 genes. Duplication of just one gene in the human genome will very slightly lengthen the semiotic description of the genome. If we let (AGTCGAGTTC) denote the random sequence of bases along the gene in question, and ........ signify the rest (which are also random, let's say), then the description of the duplicated genome will be ........(AGTCGAGTTC)x2 instead of ........(AGTCGAGTTC). In other words, we're just adding two characters to the description, which is negligible.

OK. What about P(T|H)? Given the chance hypothesis, and the length of the human genome (about 3x10^9 base pairs), the probability of a particular genome sequence is 1 in 4^(3,000,000,000). A gene has about 100,000 base pairs, so for a genome with a duplicated gene, P(T|H) is 1 in 4^(3,000,100,000). If the original genome is random, then Phi_s(T) will be 4^(3,000,000,000).
In section 4, Dembski says that for a random, maximally complex sequence, Phi_s(T) is identical to the total number of possibilities, because the sequence is incompressible. For the duplicated genome, Phi_s(T) will be about the same, since our description is virtually identical (all we added to the verbal description was an "x2", despite the fact that we now have an extra 100,000 bases in the duplicated genome).

For the original genome, Phi_s(T).P(T|H) equals 4^(3,000,000,000)/4^(3,000,000,000), which equals 1. For the duplicated genome, Phi_s(T).P(T|H) equals 4^(3,000,000,000)/4^(3,000,100,000), which equals 1/4^(100,000). 4 is about 10^0.60206, so 1/4^(100,000) is approximately 10^-60206.

For the original genome, Chi = -log2[(10^120).Phi_s(T).P(T|H)] = -log2[(10^120)]. log2(10) = 3.321928094887362, so 10^120 = (2^3.321928094887362)^120, or 2^398.631371. Hence Chi = -log2[(10^120)], or -398.631371.

For the duplicated genome, Chi = -log2[(10^120).10^-60206], or -log2[10^-60086], which is -log2[(2^3.32192809488736)^-60086], or -log2[2^-199600], or 199,600.

Now on page 24, Dembski defines a specification as a pattern whose specified complexity is greater than 1. For such a specification, we can eliminate chance. I note that for the duplicated genome, the specified complexity Chi is much greater than 1, so Dembski's logic seems to imply that any instance of gene duplication is the result of intelligent agency and not chance. And it would be, if we imagine that each extra base in the duplicated genome was added to the original genome, one at a time. For the odds of adding 100,000 bases independently, which just happened to perfectly match the 100,000 bases they were sitting next to, would be staggering.

But that's not how gene duplication works. Rather, a whole gene is copied, "holus bolus", and the copy usually sits next to the original. The extra bases are not added independently, in separate events, but together, in a single copying event.
And although the occurrence of the copying event at this particular point along the human genome may well be random, the actual copying process itself is law-governed, and hence not random. I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs. I am therefore not surprised that Jonathan M. used another measure of information in his papers discussing gene duplication: a gain in exonic information, which is defined as "[t]he quantitative increase in operational capability and functional specificity with no resultant uncertainty of outcome." Well, I've done the hard work. I hope you will be convinced now that fixating on a single measure of information is unhelpful.vjtorley
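The arithmetic in this comment can be replayed in log space (a sketch added for checking purposes, not part of the original comment; it uses the same round figures — a 3x10^9-bp genome and a 100,000-bp gene — and avoids forming numbers like 4^(3x10^9) directly):

```python
import math

LOG2_10 = math.log2(10)

def chi(log2_phi, log2_p):
    # Dembski's specified complexity (page 24 of the cited paper):
    # Chi = -log2(10^120 * Phi_s(T) * P(T|H)), computed via base-2 logs
    # because the factors are astronomically large/small.
    return -(120 * LOG2_10 + log2_phi + log2_p)

GENOME = 3_000_000_000  # base pairs, round figure from the comment
GENE = 100_000          # duplicated gene length, as in the comment

# Original random genome: Phi_s(T) = 4^GENOME, P(T|H) = 4^-GENOME,
# so the two cancel and only the 10^120 factor remains.
chi_original = chi(2 * GENOME, -2 * GENOME)             # about -398.63

# Duplicated genome: Phi_s(T) is essentially unchanged (the description
# only gains an "x2"), while P(T|H) drops to 4^-(GENOME + GENE).
chi_duplicated = chi(2 * GENOME, -2 * (GENOME + GENE))  # about 199,601
```

The exact value for the duplicated genome is 2 x 100,000 - 398.63 = 199,601.37 bits, which the comment rounds to 199,600.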
March 10, 2011 at 10:35 AM PDT
jon specter: "...then it would seem a known evolutionary mechanism can create CSI?" I am certainly prepared to have the point disregarded, yet again. To say that something can create information is to say that something can create the symbolic representations (mapping of symbol to object) necessary for information to exist in the first place. You may attack this argument by showing any example from anywhere in the cosmos where information exists by any means other than through symbolic representation. Suppose I build a box, fill it with the letters of the alphabet, and give it the potential to spit them out in abundant combinations. At the point it spits out a combination that is recognizable as a word, you cannot then say that it "created information". It doesn't have the capacity to do so, and neither does the genome.
Upright BiPed
March 10, 2011 at 10:32 AM PDT
"If you have a copy of a dictionary and I give you another copy of the same dictionary, do you have twice the information or just two dictionaries with the same information?"
If the measure of information is two raised to the power of the number of characters in the dictionary, then yes, there is more information in two dictionaries. Are you sure you are not confusing information and meaning?
"That said, there isn't any evidence that gene duplications are blind watchmaker processes."
Well, gene duplications have been observed. Has anyone seen a big finger come out of the sky at the same time? LOL. I'm on your side here, but this analogy doesn't help.
jon specter
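The dictionary exchange above can be made concrete with a small compression sketch (an illustration added here, not anything from the thread; the "dictionary" text is a hypothetical stand-in): under a raw length measure, a second copy doubles the total, but under a compressibility measure the copy adds almost nothing — echoing vjtorley's point that a duplicate only adds an "x2" to the description.

```python
import zlib

# Hypothetical stand-in for a dictionary's text, for illustration only.
dictionary = b"aardvark: a nocturnal burrowing African mammal. " * 1000
doubled = dictionary * 2  # "I give you another copy of the same dictionary"

naive_before, naive_after = len(dictionary), len(doubled)
compressed_before = len(zlib.compress(dictionary, 9))
compressed_after = len(zlib.compress(doubled, 9))

# Raw length doubles exactly...
assert naive_after == 2 * naive_before
# ...but the compressed size grows only slightly, because the second
# copy is almost entirely redundant given the first.
growth_ratio = (compressed_after - compressed_before) / (naive_after - naive_before)
```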
March 10, 2011 at 10:28 AM PDT
jon specter- If you have a copy of a dictionary and I give you another copy of the same dictionary, do you have twice the information or just two dictionaries with the same information? That said, there isn't any evidence that gene duplications are blind watchmaker processes. MathGrrl doesn't understand that. And if gene duplications are part of the programming, then they don't increase the information; they are part of it. If I can find the page in "Signature in the Cell" that goes over that, I will post it.
Joseph
March 10, 2011 at 10:08 AM PDT
"After seeing that Mathgrrl had already been given the calculations she sought"
Correct me if I am wrong, but the calculations that were given to her were two raised to the power of the length of the genome in question. If so, that is fine to meet the challenge. However, she did raise a question that I would like to see the answer to. If CSI is calculated as two to the power of the genome length AND a gene duplication event can increase the length of the genome, then it would seem a known evolutionary mechanism can create CSI? Thanks, guys!
jon specter
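The bookkeeping behind this question can be written out in two lines (a sketch added for illustration; it assumes the crude "2^length" measure jon specter describes and uses the thread's round figures, and compares exponents since 2^(3x10^9) cannot be formed directly):

```python
genome_len = 3_000_000_000  # base pairs, round figure from the thread
gene_len = 100_000          # length added by one gene duplication

# On the measure CSI = 2^length, compare before and after via exponents.
log2_csi_before = genome_len
log2_csi_after = genome_len + gene_len

# Duplication multiplies this "CSI" by 2^100,000 -- which is exactly
# the point of the question: a known mechanism inflates the measure.
factor_exponent = log2_csi_after - log2_csi_before  # 100_000
```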
March 10, 2011 at 9:56 AM PDT