Uncommon Descent Serving The Intelligent Design Community

On the non-evolution of Irreducible Complexity – How Arthur Hunt Fails To Refute Behe


I do enjoy reading ID’s most vehement critics, both in formal publications (such as books and papers) and on the, somewhat less formal, Internet blogosphere. Part of the reason is that it offers a certain reassurance to observe the vacuous nature of many of the critics’ attempted rebuttals to the challenge offered to neo-Darwinism by ID, and their attempts to compensate for its sheer lack of explanatory power with the religious ferocity of the associated rhetoric (to paraphrase Lynn Margulis). The prevalent pretense that the causal sufficiency of neo-Darwinism is an open-and-shut case (when no such open-and-shut case for the affirmative exists) never ceases to amuse me.

One such forum where esteemed critics lurk is the Panda’s Thumb blog, a website devoted to holding the Darwinian fort, and one endorsed by the National Center for Selling Evolution Science Education (NCSE). Since many of the Darwinian heavy guns blog for this website, we can conclude that, if demonstrably faulty arguments are common play there, the front-line Darwinism defense lobby is in deep water.

Recently, someone referred me to two articles (one, two) on the Panda’s Thumb website (from back in 2007), by Arthur Hunt (professor in the Department of Plant and Soil Sciences at the University of Kentucky). The first is entitled “On the evolution of Irreducible Complexity”; the second, “Reality 1, Behe 0” (the latter posted shortly after the publication of Behe’s second book, The Edge of Evolution).

The articles purport to refute Michael Behe’s notion of irreducible complexity. But, as I intend to show here, they do nothing of the kind!

In his first article, Hunt begins,

There has been a spate of interest in the blogosphere recently in the matter of protein evolution, and in particular the proposition that new protein function can evolve. Nick Matzke summarized a review (reference 1) on the subject here. Briefly, the various mechanisms discussed in the review include exon shuffling, gene duplication, retroposition, recruitment of mobile element sequences, lateral gene transfer, gene fusion, and de novo origination. Of all of these, the mechanism that received the least attention was the last – the de novo appearance of new protein-coding genes basically “from scratch”. A few examples are mentioned (such as antifreeze proteins, or AFGPs), and long-time followers of ev/cre discussions will recognize the players. However, what I would argue is the most impressive of such examples is not mentioned by Long et al. (1).

There is no need to discuss the cited Long et al. (2003) paper in any great detail here, as this has already been done by Casey Luskin here (see also Luskin’s further discussion of Anti-Freeze evolution here), and I wish to concern myself with the central element of Hunt’s argument.

Hunt continues,

Below the fold, I will describe an example of de novo appearance of a new protein-coding gene that should open one’s eyes as to the reach of evolutionary processes. To get readers to actually read below the fold, I’ll summarize – what we will learn of is a protein that is not merely a “simple” binding protein, or one with some novel physicochemical properties (like the AFGPs), but rather a gated ion channel. Specifically, a multimeric complex that: 1. permits passage of ions through membranes; 2. and binds a “trigger” that causes the gate to open (from what is otherwise a “closed” state). Recalling that Behe, in Darwin’s Black Box, explicitly calls gated ion channels IC systems, what the following amounts to is an example of the de novo appearance of a multifunctional, IC system.

Hunt is making big promises. But does he deliver? Let me briefly summarise the gist of Hunt’s argument, and then weigh in on it.

The cornerstone of Hunt’s argument is the gene T-urf13 which, contra Behe’s delineated ‘edge’ of evolution, is supposedly a de novo mitochondrial gene that evolved very quickly from other genes specifying rRNA, together with some non-coding DNA elements. The gene specifies a transmembrane protein that facilitates the passage of hydrophilic molecules across the mitochondrial membrane in maize, opening only when bound on the exterior by particular molecules.

The protein is specific to the mitochondria of maize with Texas male-sterile cytoplasm, and has also been implicated in causing male sterility and sensitivity to T-cytoplasm-specific fungal diseases. Two parts of the T-urf13 gene are homologous to other parts in the maize genome, with a further component being of unknown origin. Hunt maintains that this proves that this gene evolved by Darwinian-like means.

Hunt further maintains that T-urf13 consists of at least three “CCCs” (recall Behe’s argument, advanced in The Edge of Evolution, that a double “CCC” is unlikely to be feasible by a Darwinian pathway). Two of these “CCCs”, Hunt argues, come from the binding of each subunit to at least two other subunits in order to form the heteromeric complex in the membrane. This entails that each subunit must have at least two protein-binding sites.

Hunt argues for the presence of yet another “CCC”:

[T]he ion channel is gated. It binds a polyketide toxin, and the consequence is an opening of the channel. This is a third binding site. This is not another protein binding site, and I rather suppose that Behe would argue that this isn’t relevant to the Edge of Evolution. But the notion of a “CCC” derives from consideration of changes in a transporter (PfCRT) that alter the interaction with chloroquine; toxin binding by T-urf13 is quite analogous to the interaction between PfCRT and chloroquine. Thus, this third function of T-urf13 is akin to yet another “CCC”.

He also notes that,

It turns out that T-urf13 is a membrane protein, and in membranes it forms oligomeric structures (I am not sure if the stoichiometries have been firmly established, but that it is oligomeric is not in question). This is the first biochemical trait I would ask readers to file away – this protein is capable of protein-protein interactions, between like subunits. This means that the T-urf13 polypeptide must possess interfaces that mediate protein-protein interactions. (Readers may recall Behe and Snokes, who argued that such interfaces are very unlikely to occur by chance.)

[Note: The Behe & Snoke (2004) paper is available here, and their response (2005) to Michael Lynch’s critique is available here.]

Hunt tells us that “the protein dubbed T-urf13 had evolved, in one fell swoop by random shuffling of the maize mitochondrial genome.” If three CCCs really evolved in “one fell swoop” by specific but random mutations, then Behe’s argument is in trouble. But does any of the research described by Hunt make any progress toward demonstrating that this is even plausible? Short answer: no.

Hunt does have a go at guesstimating the probabilistic plausibility of such an event of neo-functionalisation taking place. He tells us, “The bottom line – T-urf13 consists of at least three ‘CCCs’. Running some numbers, we can guesstimate that T-urf13 would need about 10^60 events of some sort in order to occur.”
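For readers who want to check the arithmetic, Hunt's figure falls straight out of Behe's own numbers. A minimal sketch, assuming Behe's estimate of roughly 1 in 10^20 for a single CCC (his chloroquine-resistance figure) and treating the three CCCs as independent events:

```python
# Reproducing Hunt's guesstimate from Behe's own figures.
# Assumption (hedged): one CCC has odds of about 1 in 10^20
# (Behe's chloroquine-resistance estimate in The Edge of Evolution),
# and the three CCCs in T-urf13 arise independently.
p_ccc = 1e-20                    # odds of a single CCC
n_cccs = 3                       # Hunt counts at least three in T-urf13
p_combined = p_ccc ** n_cccs     # joint odds under independence
events_needed = 1 / p_combined
print(f"{events_needed:.0e}")    # prints 1e+60, matching Hunt's figure
```

Note that the whole calculation hangs on the independence assumption; that is precisely the point in dispute between Hunt and Behe.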

Look at what Hunt concludes:

Now, recall that we are talking about, not one, but a minimum of three CCC’s. Behe says 1 in 10^60, what actually happened occurred in a total event size of less than 10^30. Obviously, Behe has badly mis-estimated the “Edge of Evolution”. Briefly stated, his “Edge of Evolution” is wrong. [Emphasis in original]

Readers trained in basic logic will take quick note of the circularity involved in this argumentation. Does Hunt offer any evidence that T-urf13 could have plausibly evolved by a Darwinian-type mechanism? No, he doesn’t. In fact, he casually dismisses the mathematics which refutes his whole argument. Here we have a system with a minimum of three CCCs, and since he presupposes as an a priori principle that it must have a Darwinian explanation, this apparently refutes Behe’s argument! This is truly astonishing argumentation. Yes, certain parts of the gene have known homologous counterparts. But, at most, that demonstrates common descent (and even that conclusion is dubious). But a demonstration of homology, or common ancestral derivation, or a progression of forms is not, in and of itself, a causal explanation. Behe himself noted in Darwin’s Black Box, “Although useful for determining lines of descent … comparing sequences cannot show how a complex biochemical system achieved its function—the question that most concerns us in this book.” Since Behe already maintains that all life is derivative of a common ancestor, a demonstration of biochemical or molecular homology is not likely to impress him greatly.

How, then, might Hunt and others successfully show Behe to be wrong about evolution? It’s very simple: show that adequate probabilistic resources existed to facilitate the plausible origin of these types of multi-component-dependent systems. If, indeed, each fitness peak is separated from the next by more than a few specific mutations, it remains difficult to envision how the Darwinian mechanism might facilitate the transition from one peak to another within any reasonable time frame. Douglas Axe, of the Biologic Institute, showed in one recent paper in the journal BIO-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalisation. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself). Another study, published in Nature in 2001 by Keefe & Szostak, documented that more than a million million (10^12) random sequences had to be searched in order to stumble upon a functioning ATP-binding protein, a protein substantially smaller than the transmembrane protein specified by T-urf13. Douglas Axe has also documented (2004), in the Journal of Molecular Biology, the prohibitive rarity of functional enzymatic binding domains within the vast sea of combinatorial sequence space, using a 150 amino-acid domain of the enzyme beta-lactamase.
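To make the Keefe & Szostak figure concrete, here is a back-of-the-envelope sketch. It assumes (per the article's reading of that study, an assumption rather than the paper's exact number) that functional ATP-binders occur at a frequency of roughly 1 in 10^12 among random sequences, and asks how large a library must be before a hit becomes likely:

```python
import math

# Assumption (hedged): functional sequences occur at a frequency of
# about 1 in 10^12, the figure cited above for Keefe & Szostak (2001).
p_hit = 1e-12

# Library size needed for a 50% chance of at least one functional hit:
# solve (1 - p)^n = 0.5, which for tiny p gives n ~ ln(2) / p.
n_even_odds = math.log(2) / p_hit
print(f"{n_even_odds:.2e}")   # prints 6.93e+11, on the order of a million million
```

This is the standard waiting-time calculation for rare independent trials; it says nothing by itself about whether such a library was or was not available to evolution, which is the substantive question.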

What, then, can we conclude? Contrary to his claims, Hunt has failed to provide a detailed and rigorous account of the origin of T-urf13. Nor does he supply any mathematical demonstration that the de novo origin of such genes is sufficiently probable to be justifiably attributed to an unguided or random process, or any demonstration that a step-wise pathway to the T-urf13 gene exists in which novel utility is conferred at every step, with successive steps separated by no more than one or two mutations.

The Panda’s Thumb crowd will really have to do better than this if they hope to refute Behe!

Comments
Upright BiPed,
Mathgrrl, despite me asking you to do otherwise, you simply talked past my point without addressing any of it.
Actually, anyone reviewing this thread would find me justified in saying that about you. Throughout this conversation I have focused solely on obtaining a rigorous mathematical definition of CSI and some example calculations to learn how it works. I have made it very clear that my goal is to test the ID claim that CSI is a reliable indicator of intelligent agency. You have never provided a definition nor any example calculations. I look forward to discussing CSI with you when you decide to define it and show some calculations.
MathGrrl
March 16, 2011 at 4:27 PM PDT
By the way, the demonstrable evidence of a flank can be seen in the text of Mathgrrls post. She begins by quoting me:
So when you repeatedly claim that evolutionary mechanisms can “create” or “produce” complex specified information, you are making an absolutely unsupported statement. For a mechanism to “create” information (by definition) that mechanism would first have to have the capacity to create symbols and rules.
Then she makes a statement that literally has nothing whatsoever to do with my post:
Your objections to vjtorley’s calculation of CSI are an excellent demonstration of why rigorous mathematical definitions and example calculations are essential to making progress in this discussion. Based on Dembski’s discussion of CSI in Specification: The Pattern That Signifies Intelligence, vjtorley clearly demonstrated that gene duplication, an event for which there is empirical evidence, generates CSI.
The alert observer will notice that her statement makes no reference whatsoever to anything I said in my post. For those that have been following this thread, please feel free to return to ANY exchange between her and me on this topic, and you will see the exact same pattern.
Upright BiPed
March 16, 2011 at 3:42 PM PDT
Darwinist: "Random variation and natural selection can jolly well produce CSI." ID scientist: "I doubt it." ID scientist: "Intelligent agency is the only known cause for CSI." Darwinist: "What is CSI?" You've got to love it!
StephenB
March 16, 2011 at 3:37 PM PDT
Mathgrrl, despite me asking you to do otherwise, you simply talked past my point without addressing any of it. In strategic parlance this is referred to as a flank. It's a maneuver specifically intended to avoid the front of a defended position. As a tactic in debate, it is intended to draw attention away from the defended position and engage elsewhere - where the actual strength of an argument can be avoided at all costs. Your continued avoidance is therefore duly noted.
Upright BiPed
March 16, 2011 at 3:34 PM PDT
CJYman,
The fact that some people don’t understand CSI, or calculate for it from different givens (which can be done but the calculation changes to the context-dependent form of CSI), makes no difference to mine and KF’s argument for CSI based explicitly on the math explained by Dembski in “Specification: the patterns which signify intelligence.”
The paper's title is Specification: The Pattern That Signifies Intelligence, and vjtorley used the discussion of CSI in it to demonstrate that gene duplication that results in increased production of a protein does, by that definition, generate CSI.
Either way, disagreement or not, I have provided a way to calculate for CSI and my argument is based on that calculation. Can you show that CSI based on my calculation and KF’s explanation of the concept will be produced by law+chance?
I found your calculation related to titin to be confusing, frankly. You didn't provide a mathematically rigorous definition of CSI that I saw, and you didn't go into as much detail as did vjtorley. If you believe that your version of CSI is equivalent to what Dembski has published, and you further believe that it is a reliable indicator of intelligent agency, please provide your rigorous definition and demonstrate how you arrive at a different answer than did vjtorley for the scenario he analyzed. Applying your definition to the other three scenarios I described would also be very helpful to others attempting to recreate your calculations. Let's get right down to the math, right here in this thread.
MathGrrl
March 16, 2011 at 3:11 PM PDT
Upright BiPed,
So when you repeatedly claim that evolutionary mechanisms can “create” or “produce” complex specified information, you are making an absolutely unsupported statement. For a mechanism to “create” information (by definition) that mechanism would first have to have the capacity to create symbols and rules.
Your objections to vjtorley's calculation of CSI are an excellent demonstration of why rigorous mathematical definitions and example calculations are essential to making progress in this discussion. Based on Dembski's discussion of CSI in Specification: The Pattern That Signifies Intelligence, vjtorley clearly demonstrated that gene duplication, an event for which there is empirical evidence, generates CSI. If you wish to refute this, you need to provide a similar level of detail. That means rigorously defining your version of CSI and demonstrating how to calculate it for the same scenario. Unless and until you do so, any claims you make about your metric are unsupported.
MathGrrl
March 16, 2011 at 3:10 PM PDT
Jon, I think I see your problem now. You believe that, because the universe is designed, everything IN the universe must also be designed. But that is not the case. When we say the universe is designed, we mean the CONDITIONS in the universe, not necessarily everything in it: the laws of physics, the physical constants, the properties of the earth, and so forth. For example, I don't believe, and I don't think anyone believes, that the Ceres asteroid or Mount Everest is designed. I hope this clears things up.
kuartus
March 16, 2011 at 1:10 PM PDT
Jon Specter, it seems you have no idea what you are talking about. I believe I made it pretty clear for you. Design detection is about differentiating between what agents with foresight and purpose-driven actions can do as opposed to agencies which don't have those qualities. Could natural processes which don't have intelligence make an iPhone, or even something as simple as a notebook? A mind can. You are confusing two different things. Just because the universe as a whole has an intelligent source, it does not mean it is intelligent in and of itself. A notebook is not intelligent. It's as dumb as a rock. It can't carry out purpose-driven actions all by itself. You also seem to imply that you can't differentiate between designed items. Can you not tell the difference between a computer and a bicycle? They are both designed items, yet they are designed for different things. The universe is designed to sustain life. Yet just because the conditions are consistent with life, that is not enough. For example, just because you have all the parts necessary for making a bike, you won't have a bike without assembling it. Again, design detection is about figuring out what needs FURTHER assembly within the universe, after realizing that the physical universe itself would have been causally inadequate to account for its origin. There, I made it as simple as I could.
kuartus
March 16, 2011 at 12:17 PM PDT
kuartus:
Jon Specter, I dont get it. If you understood, how come you said that the concept of CSI was superfluous and that detecting design in a designed universe was practically nonsense?
Understanding what you are saying and agreeing that it is meaningful are two different questions. I understood what you were saying, but I do not find it meaningful. A design detector should be able to differentiate between designed and non-designed items. If the entire universe exhibits design, then there are no non-designed items to detect. On a lark, I stayed up last night writing a computer program for a design detector under those assumptions. Feel free to use it. No charge. Here is the code: Design = 1 It is a model of parsimonious code, if I do say so myself.
jon specter
March 16, 2011 at 6:42 AM PDT
Upright BiPed, Well stated at 356! I agree completely. It is obvious that EAs shuffle around CSI (from the CSI in the structure of its programming -- the probability of matching search algorithm to search space -- to the CSI in the structure of its output) but do not generate CSI. There does seem to be a lot of confusion about this.
CJYman
March 15, 2011 at 1:05 PM PDT
MathGrrl, The fact that some people don't understand CSI, or calculate for it from different givens (which can be done, but the calculation changes to the context-dependent form of CSI), makes no difference to mine and KF's argument for CSI based explicitly on the math explained by Dembski in "Specification: the patterns which signify intelligence." If I had more time, we could argue about different people's interpretations of it, but the links I provided for you, as well as the info KF provided, show that how I have calculated for CSI does indeed provide a specific measure of probability based on resources and search space. If my calculations are correct, and based on Dembski's CSI as I have defended in the links I provided for you, then CSI is indeed a rigorous mathematical concept. Just because you continue to misunderstand it, or others disagree with how I have calculated it, does not make it any less rigorous. I state again, I have defended how I calculate it in the provided links, with direct quotes and examples from Dembski's paper. Either way, disagreement or not, I have provided a way to calculate for CSI and my argument is based on that calculation. Can you show that CSI based on my calculation and KF's explanation of the concept will be produced by law+chance? KF has already, much earlier in this thread, provided you with our comments here as an example of a pattern requiring intelligence in its generation and also exhibiting CSI. The only thing that your last comment tells me is that you can provide no example of law+chance absent intelligence generating either CSI from scratch or an EA which produces CSI. Remember, the laws and initial conditions of the program must be derived strictly from a random source such as Random.org to remove any potential foresight of an intelligent agent.
So the ID hypothesis of CSI -- at least as KF and I have explained it and calculated for it -- as a no-go for law+chance absent intelligence can so far be seen to be correct.
CJYman
March 15, 2011 at 1:01 PM PDT
Jon Specter, I don't get it. If you understood, how come you said that the concept of CSI was superfluous and that detecting design in a designed universe was practically nonsense? It seems to me that you didn't understand, or else I don't think you would have had a problem with it. I take it you don't disagree with my response?
kuartus
March 15, 2011 at 11:38 AM PDT
Mathgrrl, You have repeatedly stated that evolutionary mechanisms can create CSI. I am completely comfortable being the odd man out here, so again, I must insist. CSI is an acronym for complex specified information. That is a noun with two very deliberate modifiers in front of it. That particular noun has certain characteristics that make it distinguishable among other words. One of the primary characteristics is that information only exists by means of semiotic representations (symbols and rules). There are no examples of it existing by any other means anywhere in the cosmos. This is simply an observation of reality. Many people have been tempted here to ignore the fact that Shannon specifically noted two distinct characteristics of a given signal (that which is meaningful and that which is noise). This distinction is a logical result of his engineering point-of-view (ie. that within the transmission and reception of a signal, some part of that signal could be information and some part could be noise). In ignoring this fact, one could temporarily ignore the fact that information does indeed require symbols in order to exist, while noise does not have that requirement. The conflation of the two is rampant (and often deliberate). However, in the presence of the two modifiers (complex and specified), relying on that kind of ignorance is not available. CSI does in fact exist (as does all other meaningful information) as a result of symbols and rules. So when you repeatedly claim that evolutionary mechanisms can “create” or “produce” complex specified information, you are making an absolutely unsupported statement. For a mechanism to “create” information (by definition) that mechanism would first have to have the capacity to create symbols and rules. Otherwise, the very most that could be said is that a mechanism has the capacity to alter or manipulate the symbols and rules that it had already been given – since it doesn’t have the capacity to create them itself. 
You might even say that evolutionary mechanisms can manipulate information within an existing intelligent system. That would be far more accurate (and convincing) than the claim you make. In essence, you are removing from the table the key characteristic of meaningful (complex specified) information, and then you make claims as to how it can be produced. I know you fully understand this rather significant problem, or else you wouldn't try so hard to dismiss it. Nonetheless, it is a fact. - - - - - - Having witnessed your defense of an unsupported assertion, I make this observation on behalf of the gallery. I also make it in opposition to what other proponents of ID may say and think. If you care to respond, Mathgrrl, why not this time actually address the issue instead of providing your customary uninterested dismissal. I am more than happy to get down to the level of the symbol itself. We can remove the system you take for granted, and we will see if your position holds up.
Upright BiPed
March 15, 2011 at 8:09 AM PDT
MathGrrl and Dr. Torley, if you can load it: JonathanM commented here on the inability of 'gene duplication', operating per neo-Darwinian processes, to generate any non-trivial functional complexity; Michael Behe Hasn't Been Refuted on the Flagellum! Excerpt: Douglas Axe of the Biologic Institute showed in one recent paper in the journal Bio-complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalization. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is up to six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself). http://www.evolutionnews.org/2011/03/michael_behe_hasnt_been_refute044801.html I looked up Dr. Axe's paper here and found The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations Douglas D. Axe* Excerpt: In particular, I use an explicit model of a structured bacterial population, similar to the island model of Maruyama and Kimura, to examine the limits on complex adaptations during the evolution of paralogous genes—genes related by duplication of an ancestral gene. Although substantial functional innovation is thought to be possible within paralogous families, the tight limits on the value of d found here (d ≤ 2 for the maladaptive case, and d ≤ 6 for the neutral case) mean that the mutational jumps in this process cannot have been very large. http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.4/BIO-C.2010.4 Though the math is technical, and over my head, I do know that this study lines up extremely well with the empirical evidence that shows severe limits for the gene duplication scenario. A scenario that has never 'empirically' violated the principle of Genetic Entropy; i.e., all examples put forth by neo-Darwinists fail after careful scrutiny!
bornagain77
March 15, 2011 at 7:01 AM PDT
CJYman,
I just really hope that for now, you finally realize that your assertions, for more than half of this post about CSI being non-rigorous based on your misunderstanding of its calculation, are completely incorrect.
On the contrary, I think my claim that CSI is not rigorously defined is well supported by the fact that several people compute it differently. Even vjtorley, who made an admirable effort to actually use Dembski's definition, had to interpret certain terms. I still haven't seen any ID proponent attempt to calculate CSI for the last three of the four scenarios described in 177 above, and the calculations for the first scenario all show that evolutionary mechanisms are, in fact, capable of generating CSI.
“That does not follow from the No Free Lunch theorems. All those theorems say, in layman’s terms, is that averaged over all search spaces, no algorithm performs better than a random search. For particular search spaces, some algorithms can perform dramatically better than others.” Yes, and it is Dembski and Marks further work which shows that to match that specific algorithm to the search space is just as difficult as finding the output of that EA in the first place.
This is exactly why modeling evolution as a search can confuse the issue. There is no process to "match that specific algorithm to the search space". We inhabit a universe with particular characteristics. There aren't any other "search spaces", although the Earth's environment is constantly changing. The No Free Lunch theorems have absolutely no applicability to a single "search space". It is profoundly unsurprising that mechanisms that can be modeled as algorithms that work in search spaces modeled on the real world are observed in the real world.
If you do not agree, simply provide evidence that chance+law, absent intelligence will produce either CSI from scratch or the EA to produce CSI, in the form of the experiment utilizing Random.org that I mentioned.
That's not how science works. If you make a claim, you have to support it. Thus far the claim that CSI is an indicator of the involvement of an intelligent agent has not been supported.
“This was raised on one of the threads you mentioned earlier. Without going back to it, I remember several people pointing out that the world we inhabit is one “search space” in your model (the scare quotes are because there are several other issues with modeling evolution as a search). It’s not surprising that some algorithms are better able than others to traverse that space. It’s even less surprising that the evolutionary mechanisms we observe are components of those algorithms. If they didn’t work in this “search space”, we wouldn’t observe them.” Of course, and if law+chance won’t produce the CSI observed in this universe from scratch without evolution, then — barring the use of an infinite multiverse in probability calculations for this universe which then arbitrarily destroys the foundation for all probability based science –law+chance won’t produce the EA (structure of life, laws of our universe, and the match between the two) required within this universe to produce CSI.
You're confusing two claims. You seem to accept that, in this universe and in particular on this planet, evolutionary mechanisms can generate CSI, however loosely defined. That's the only point I was trying to clarify on this thread. Your second claim seems to be based on the anthropic principle. That's a whole separate discussion with a rich background. Frankly, I find it pretty unconvincing but, with all due respect to your views, also pretty uninteresting. There's just not enough math in it. ;-) If you do, in fact, accept that CSI can be generated via known evolutionary mechanisms, I'll leave the anthropic principle arguments to others.
MathGrrl
March 15, 2011 at 6:01 AM PDT
vjtorley,
Thank you for your posts. This will be my very last one on this thread.
Thanks for all your work on actually computing CSI. I hope you'll continue to participate on Mark Frank's blog.
Regarding CSI: on page 24 of his paper, Dembski defines the specified complexity Chi (minus the context sensitivity) as -log2[(10^120).Phi_s(T).P(T|H)], where T is the pattern in question, H is the chance hypothesis and Phi_s(T) is the number of patterns for which agent S’s semiotic description of them is at least as simple as S’s semiotic description of T. Here’s how I would amend the definition: Chi=-log2[(10^120).(SC/KC).PC], where SC is the Shannon complexity, KC is the Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) and PC is the probabilistic complexity, defined as the probability of the pattern arising by natural non-intelligent processes.
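The mechanics of the definition quoted above are easy to run through. A toy evaluation of Dembski's Chi = -log2[(10^120).Phi_s(T).P(T|H)], where both input figures below are purely hypothetical placeholders chosen for illustration, not values for any real pattern:

```python
import math

# Toy evaluation of Dembski's specified complexity as quoted above:
#   Chi = -log2(10^120 * Phi_s(T) * P(T|H))
# Both inputs are hypothetical placeholders, not measured values.
phi_s = 1e5            # assumed count of patterns as simple as T
p_t_given_h = 1e-150   # assumed chance probability of T under H
chi = -math.log2(1e120 * phi_s * p_t_given_h)
print(round(chi, 1))   # prints 83.0; Chi > 1 is Dembski's design threshold
```

The arithmetic itself is trivial; as the exchange here shows, the dispute is entirely over how Phi_s(T) and P(T|H) are to be assigned.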
While I understand your motivation for using Kolmogorov-Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable. For most sequences, the most that can be said is that the minimal description is no more than the length of the string plus a constant related to the language being used to describe it. That raises the same issues related to the use of the length of the sequence in Dembski's formulation.
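In practical terms, the uncomputability point can be demonstrated: any real compressor only ever yields an upper bound on Kolmogorov complexity, never the true minimum description length. A minimal sketch (zlib is used purely as a stand-in compressor, and the inputs are illustrative):

```python
import os
import zlib

def kc_upper_bound(data: bytes) -> int:
    """Length in bytes of a zlib-compressed encoding: an UPPER bound on
    Kolmogorov complexity. The true minimum description is uncomputable,
    so no compressor can certify that a string is incompressible."""
    return len(zlib.compress(data, 9))

structured = b"AB" * 500       # 1000 bytes with an obvious short description
random_ish = os.urandom(1000)  # 1000 bytes that almost surely have none

# The bound is informative for structured data, but for random data it
# stays near the raw length -- the "length plus a constant" situation
# described above.
print(kc_upper_bound(structured) < kc_upper_bound(random_ish))  # prints True
```

Swapping in a better compressor can only lower the bound, never establish that the true KC has been reached.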
"I envisage PC as a summation, where we consider all natural non-intelligent processes that might be capable of generating the pattern, calculate the probability of each process actually doing so over the lifespan of the observable universe and within the confines of the observable universe, and then sum the probabilities for all processes."
This is another term that is impossible to calculate, although in this case it is a practical rather than a theoretical limitation. We simply don't know the probabilities that make up PC. We don't even know all the processes -- that's why we continue to do research. Computing PC based on known processes and assumed probabilities will certainly lead to many false positives. This version of CSI is therefore more a measure of our ignorance than of intelligent agency, just as Dembski's is.
"Can you think of any plausible counter-examples?"
That's not how science works. If you're proposing a new metric, you need to clearly and rigorously define it, which you've made a good start at, and show how it actually measures what you claim it measures with some worked examples. Personally, I'd like to see it applied to my four scenarios. One problem you'll immediately encounter is identifying artifacts that are not designed, so that you can show that your metric doesn't give false positives. That's a metaphysical question that is sure to raise challenges from some ID proponents, no matter what artifacts you choose.
MathGrrl
March 15, 2011 at 05:41 AM PDT
dang jon, I was admiring how much more clearly kuartus stated it than I did.
bornagain77
March 14, 2011 at 07:56 PM PDT
kuartus, I understood all that the first time, when bornagain said it.
jon specter
March 14, 2011 at 06:47 PM PDT
MathGrrl: "If you disagree that these systems generate CSI, please show me how you would calculate CSI for the four scenarios I describe in my post 177 in this thread." I don't disagree that an EA can produce CSI; I disagree that law+chance can write an EA that produces CSI. I've already provided calculations for CSI. It took a while to research the variables. I don't have the time to do this again right now. It is now your turn to back up your position and show that law+chance will either produce CSI or an EA that produces CSI relying only on random data from a source such as Random.org. The reliance on random data is required to remove any possible foresighted element to the construction of the program and the creation of and matching of search algorithm to search space (akin to the construction and matching of the values for the natural laws and the structure of life itself). Obviously a programming environment would be allowed. This is akin to granting that an environment within which any laws can operate can exist without intelligence.
CJYman
March 14, 2011 at 05:51 PM PDT
Hello MathGrrl, I really wish I had more to time to go over this with you, but alas, edumacation is quite busy right now. It appears that we would need to hash through almost as many posts as I originally linked for you to cover these concepts in depth and what they actually mean, how CSI is related to NFLT and the work done by Dembski and Marks on active info, and how this all applies to ID Theory. For now, I'll reply to your last few comments. Please respond back, since I may have some extra time to respond again. However, it may also be that we'll have to carry on this discussion at another time. I just really hope that for now, you finally realize that your assertions, for more than half of this post about CSI being non-rigorous based on your misunderstanding of its calculation, are completely incorrect. Earlier I stated: "IOW, if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT)." MathGrrl, you responded: "That does not follow from the No Free Lunch theorems. All those theorems say, in layman’s terms, is that averaged over all search spaces, no algorithm performs better than a random search. For particular search spaces, some algorithms can perform dramatically better than others." Yes, and it is Dembski and Marks further work which shows that to match that specific algorithm to the search space is just as difficult as finding the output of that EA in the first place. If you do not agree, simply provide evidence that chance+law, absent intelligence will produce either CSI from scratch or the EA to produce CSI, in the form of the experiment utilizing Random.org that I mentioned. MathGrrl: "This was raised on one of the threads you mentioned earlier. 
Without going back to it, I remember several people pointing out that the world we inhabit is one “search space” in your model (the scare quotes are because there are several other issues with modeling evolution as a search). It’s not surprising that some algorithms are better able than others to traverse that space. It’s even less surprising that the evolutionary mechanisms we observe are components of those algorithms. If they didn’t work in this “search space”, we wouldn’t observe them." Of course, and if law+chance won't produce the CSI observed in this universe from scratch without evolution, then -- barring the use of an infinite multiverse in probability calculations for this universe which then arbitrarily destroys the foundation for all probability based science --law+chance won't produce the EA (structure of life, laws of our universe, and the match between the two) required within this universe to produce CSI. In reference to the multi-verse, even if it does exist, as I've stated on another thread ... "The best attempt to try to explain such organization so far is to throw vast excess of probabilistic resources at the problem in order to “allow” chance to do the dirty work of generating these patterns that are routinely observed to require intelligent systems utilizing their foresight. However, along with multiple universes there comes no non-arbitrary cutoff point as to what infinite probabilistic resources are to be used to explain. Infinite probabilistic resources can be used to explain away every pattern in existence and thus science stops since no further explanation is required. Even the infamous camera found on a planet on the other side of the universe, those hypothetical radio signals from ETI, and the orbit of the planets around the sun, and the arrangement of crystals could be explained by chance if infinite probabilistic resources are given." Thus, explanations observed to require either intelligence or law become arbitrarily superfluous. 
Science then grinds to a halt. I also stated: "However, as to weather *patterns:* The difference between a “weather pattern simulation” and a “simulation of an evolutionary algorithm producing CSI” is that weather patterns can be arrived at from a random set of laws and initial conditions (law+chance absent intelligence), producing chaotic patterns which are mathematically indistinguishable from the types of patterns which make up “weather.” However, no one has shown that CSI or an EA that outputs a CSI pattern can be generated from a random set of laws and initial conditions (law+chance absent intelligence)." MathGrrl, you replied with: "Actually, that’s exactly what is shown by some real biological systems, as calculated by vjtorley above. Computer simulations have shown the same; consider Schneider’s ev, which demonstrated exactly the same behavior he observed in real biological systems. This qualifies as “mathematically indistinguishable from the types of patterns” we see in natural processes." You have completely misrepresented and twisted my argument. This may have been why KF left the discussion. You are continuing to leave out important parts of our arguments. You did it with the calculation for CSI for many comments, despite having three sources explain it to you in detail with calculations, and now you are at it again. I really want to give you the benefit of the doubt on this one, but it is starting to get frustrating. The important part that you left out is that a random set of laws and initial conditions (that is, law+chance) will produce patterns which are mathematically indistinguishable from weather patterns; however, a random set of laws and initial conditions will produce neither CSI nor an evolutionary algorithm that outputs CSI. For an EA to work, the law structure has to be anything but random, since it must match the search space of the problem which needs to be optimized.
In order for you to disagree with any of this, you will need to provide an example where a program was written by law+chance -- the laws and initial conditions were chosen from a random source such as Random.org -- and it either developed CSI from scratch or it designed an EA which then output CSI. Again, as I've already explained, "... if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT). That is basically how useful EAs, which can produce CSI patterns such as an efficient antenna, operate. Until someone shows that foresight is not required to build an EA, by providing evidence that answers the previous question [referencing a random set of laws -- law+chance -- producing CSI or an EA that produces CSI] in the affirmative, the present mathematical and observational evidence shows that evolution can only be seen as a process requiring intelligence."
CJYman
March 14, 2011 at 05:35 PM PDT
Jon Specter, If you are still here, I would like to try and answer your question: How do you detect design in the universe if the universe itself is designed? I think that it has to do with the question: designed for what? I believe the evidence is strong that the universe was designed to SUPPORT and SUSTAIN life. The laws of physics and the physical constants make it possible for stars and planets and galaxies to exist, as well as ordinary matter and life itself. That being said, physics and chemistry themselves have been shown to be totally inadequate at producing functionally complex organization and information, not to mention life. Empirical science has shown that nature all by itself cannot PRODUCE life. The law of biogenesis confirms that. So even though the physical universe is designed to HOUSE complex life, it is not designed to MAKE complex life. So we have to look at the only other agency which is able to produce information and functionally complex organization, and that is intelligence. That is what ID is: figuring out what can be reasonably attributed to natural processes and what cannot through an explanatory filter. A random jagged mountain side can be attributed to natural processes but an iPhone can't. You understand?
kuartus
March 14, 2011 at 11:54 AM PDT
P.S. When I wrote "Shannon complexity" in my previous post, I should have been a little clearer about what I meant. I simply meant: the length of the string after being compressed in the most efficient manner possible.
vjtorley
March 14, 2011 at 09:12 AM PDT
Mathgrrl and markf Thank you for your posts. This will be my very last one on this thread. Regarding the nuclear reactor problems in Japan, you can find out what’s happening by checking here: http://bravenewclimate.com . I have to say that the media is sensationalizing the reactor problems, and the headlines on the Drudge Report are wildly over the top. This isn’t even as serious as Three Mile Island, let alone Chernobyl. See here: http://au.news.yahoo.com/thewest/a/-/world/9003436/japan-nuclear-health-risks-low-and-wont-blow-abroad/ . Fortunately, I live quite a long way from Fukushima, so my family is safe. Anyway, thank you for your concern, Mathgrrl. Regarding CSI: on page 24 of his paper, Dembski defines the specified complexity Chi (minus the context sensitivity) as -log2[(10^120).Phi_s(T).P(T|H)], where T is the pattern in question, H is the chance hypothesis and Phi_s(T) is the number of patterns for which agent S's semiotic description of them is at least as simple as S's semiotic description of T. Here's how I would amend the definition: Chi=-log2[(10^120).(SC/KC).PC], where SC is the Shannon complexity, KC is the Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) and PC is the probabilistic complexity, defined as the probability of the pattern arising by natural non-intelligent processes. I envisage PC as a summation, where we consider all natural non-intelligent processes that might be capable of generating the pattern, calculate the probability of each process actually doing so over the lifespan of the observable universe and within the confines of the observable universe, and then sum the probabilities for all processes. Thus PC would be Sigma[P(T|H_i)], where H_i is the hypothesis that the pattern in question, T, arose through some naturalistic non-intelligent process (call it P_i). 
In reality, a few processes would likely dwarf all the others in importance, so PC could be simplified by ignoring the processes that had a very remote chance of generating T, relatively speaking. According to my definition, a string having a high ratio of Shannon complexity to Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) is more likely to be a product of design -- especially if its probabilistic complexity is low. The (10^120) factor covers all events happening in the lifespan of the observable universe. Thus we can say that if Chi=-log2[(10^120).(SC/KC).PC] is greater than 1, then it is reasonable to conclude that T was designed. Can you think of any plausible counter-examples?
vjtorley
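For what it's worth, the amended definition above is straightforward to evaluate numerically once values are assumed. The sketch below is purely illustrative: the SC, KC, and especially PC figures are made-up assumptions (PC, as defined, cannot actually be computed), chosen only to show how Chi behaves:

```python
import math

def chi_amended(sc: float, kc: float, pc: float) -> float:
    """vjtorley's amended specified complexity,
    Chi = -log2[(10^120) * (SC/KC) * PC],
    evaluated in log space so very small PC values do not underflow."""
    return -(120 * math.log2(10) + math.log2(sc / kc) + math.log2(pc))

# Illustrative pattern: a 250-bit sequence assumed incompressible (SC = KC),
# with an assumed summed probability of 10^-150 over all natural processes.
chi = chi_amended(sc=250, kc=250, pc=1e-150)
print(round(chi, 1))  # 99.7 -- greater than 1, so design would be inferred

# A pattern that natural processes reach easily (pc near 1) scores far below 1.
print(chi_amended(sc=250, kc=250, pc=0.5) < 1)  # prints True
```

Note that the verdict is driven almost entirely by the assumed PC, which is exactly where the disagreement in this thread lies.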
March 14, 2011 at 09:01 AM PDT
MathGrrl, you state: 'The physics and chemistry we observe leads to the evolutionary mechanisms we observe'. Please, MathGrrl, do tell of any 'evolutionary' example whatsoever that has been 'observed' passing 'the fitness test'; For a broad outline of the 'Fitness test', required to be passed to show a violation of the principle of Genetic Entropy, please see the following video and articles: Is Antibiotic Resistance evidence for evolution? - 'The Fitness Test' - video http://www.metacafe.com/watch/3995248 Testing the Biological Fitness of Antibiotic Resistant Bacteria - 2008 http://www.answersingenesis.org/articles/aid/v2/n1/darwin-at-drugstore Thank Goodness the NCSE Is Wrong: Fitness Costs Are Important to Evolutionary Microbiology Excerpt: it (an antibiotic resistant bacterium) reproduces slower than it did before it was changed. This effect is widely recognized, and is called the fitness cost of antibiotic resistance. It is the existence of these costs and other examples of the limits of evolution that call into question the neo-Darwinian story of macroevolution. http://www.evolutionnews.org/2010/03/thank_goodness_the_ncse_is_wro.html List Of Degraded Molecular Abilities Of Antibiotic Resistant Bacteria: http://www.trueorigin.org/bacteria01.asp The following study surveys four decades of experimental work, and solidly backs up the preceding conclusion that there has never been an observed violation of genetic entropy: “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain - Michael Behe - December 2010 Excerpt: In its most recent issue The Quarterly Review of Biology has published a review by myself of laboratory evolution experiments of microbes going back four decades.,,, The gist of the paper is that so far the overwhelming number of adaptive (that is, helpful) mutations seen in laboratory evolution experiments are either loss or modification of function.
Of course we had already known that the great majority of mutations that have a visible effect on an organism are deleterious. Now, surprisingly, it seems that even the great majority of helpful mutations degrade the genome to a greater or lesser extent.,,, I dub it “The First Rule of Adaptive Evolution”: Break or blunt any functional coded element whose loss would yield a net fitness gain. (that is a net 'fitness gain' within a 'stressed' environment i.e. remove the stress from the environment and the parent strain is always more 'fit') http://behe.uncommondescent.com/2010/12/the-first-rule-of-adaptive-evolution/ Michael Behe talks about the preceding paper on this podcast: Michael Behe: Challenging Darwin, One Peer-Reviewed Paper at a Time - December 2010 http://intelligentdesign.podomatic.com/player/web/2010-12-23T11_53_46-08_00
bornagain77
March 14, 2011 at 06:50 AM PDT
MathGrrl and Dr. Torley, if you can load,,, I would like to point this Gene Duplication study out which confirms that Genetic Entropy has not ever been violated, not even by gene duplication; Is gene duplication a viable explanation for the origination of biological information and complexity? Abstract; All life depends on the biological information encoded in DNA with which to synthesize and regulate various peptide sequences required by an organism's cells. Hence, an evolutionary model accounting for the diversity of life needs to demonstrate how novel exonic regions that code for distinctly different functions can emerge. Natural selection tends to conserve the basic functionality, sequence, and size of genes and, although beneficial and adaptive changes are possible, these serve only to improve or adjust the existing type. However, gene duplication allows for a respite in selection and so can provide a molecular substrate for the development of biochemical innovation. Reference is made here to several well-known examples of gene duplication, and the major means of resulting evolutionary divergence, to examine the plausibility of this assumption. The totality of the evidence reveals that, although duplication can and does facilitate important adaptations by tinkering with existing compounds, molecular evolution is nonetheless constrained in each and every case. Therefore, although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms. http://onlinelibrary.wiley.com/doi/10.1002/cplx.20365/abstract
bornagain77
March 14, 2011 at 06:42 AM PDT
#341 "It’s a long one. Perhaps you’d consider joining us over on Mark Frank’s blog (apologies to Mark for my presumption)?" That would be a pleasure. I have started a new thread specifically for you and CSI.
markf
March 14, 2011 at 06:37 AM PDT
vjtorley,
"I think this will have to be my last post on this thread, as I’m having trouble bringing it up on my PC – it seems to be eating into my computer’s virtual memory."
It's a long one. Perhaps you'd consider joining us over on Mark Frank's blog (apologies to Mark for my presumption)?
"First, I agree that a high degree of CSI, as originally defined by Professor Dembski in his 2005 paper “Specification: The Pattern that Specifies Intelligence” is not sufficient by itself to warrant a design inference."
I hadn't seen this when I posted my most recent question to you in this thread. Please ignore it, since this answers it fully. I look forward to further discussions with you. Keep safe.
MathGrrl
March 14, 2011 at 05:51 AM PDT
vjtorley,
"One answer I’ve received on gene duplication and CSI:
With regard to your question, Phi_s(T) is the ‘number of patterns for which ……’ (as you stated). I take this to be the number of different patterns, or number of different sequences that will perform the same function. The key word here is ‘different’. It is nonsense to try to measure CSI by arbitrary duplication. You will get a different answer every time, depending upon whether you have duplicated the gene three times, or three trillion times. The only way gene duplication will increase CSI is if the two genes perform a function that one gene alone will not perform. In that case, the double gene forms a single functional pattern."
This is why I used the production of a certain amount of a protein as the specification in my scenario. It seems to meet the criteria of your correspondent, so it appears your calculations remain basically correct.
MathGrrl
March 14, 2011 at 05:48 AM PDT
CJYman,
"IOW, if an evolutionary algorithm produces CSI as an output, the EA was intelligently designed with foresight of how the search space constraints relate to the target function (as per the NFLT)."
That does not follow from the No Free Lunch theorems. All those theorems say, in layman's terms, is that averaged over all search spaces, no algorithm performs better than a random search. For particular search spaces, some algorithms can perform dramatically better than others. This was raised on one of the threads you mentioned earlier. Without going back to it, I remember several people pointing out that the world we inhabit is one "search space" in your model (the scare quotes are because there are several other issues with modeling evolution as a search). It's not surprising that some algorithms are better able than others to traverse that space. It's even less surprising that the evolutionary mechanisms we observe are components of those algorithms. If they didn't work in this "search space", we wouldn't observe them.
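That layman's summary can be made concrete with a toy experiment. The sketch below is not a model of evolution, just an illustration of the NFL point: on one particular landscape (counting 1-bits, chosen purely for simplicity) a hill climber beats blind random search, even though averaged over all possible landscapes neither would win:

```python
import random

random.seed(0)  # deterministic for reproducibility
N, EVALS = 64, 2000

def fitness(bits):
    return sum(bits)  # "onemax": a smooth landscape hill climbing can exploit

def random_search():
    """Best fitness found by blind sampling within the evaluation budget."""
    return max(fitness([random.randint(0, 1) for _ in range(N)])
               for _ in range(EVALS))

def hill_climb():
    """Keep single-bit flips that do not lower fitness."""
    s = [random.randint(0, 1) for _ in range(N)]
    for _ in range(EVALS):
        t = s[:]
        t[random.randrange(N)] ^= 1
        if fitness(t) >= fitness(s):
            s = t
    return fitness(s)

# On THIS landscape the structure-exploiting algorithm dominates; the NFL
# theorems only say the advantage vanishes averaged over all landscapes.
print(hill_climb(), random_search())  # hill climber reaches the optimum, 64
```

On a deceptive or needle-in-a-haystack landscape, the same hill climber would do no better than blind sampling, which is the other half of the theorem.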
"However, as to weather *patterns:* The difference between a “weather pattern simulation” and a “simulation of an evolutionary algorithm producing CSI” is that weather patterns can be arrived at from a random set of laws and initial conditions (law+chance absent intelligence), producing chaotic patterns which are mathematically indistinguishable from the types of patterns which make up “weather.” However, no one has shown that CSI or an EA that outputs a CSI pattern can be generated from a random set of laws and initial conditions (law+chance absent intelligence)."
Actually, that's exactly what is shown by some real biological systems, as calculated by vjtorley above. Computer simulations have shown the same; consider Schneider's ev, which demonstrated exactly the same behavior he observed in real biological systems. This qualifies as "mathematically indistinguishable from the types of patterns" we see in natural processes. If you disagree that these systems generate CSI, please show me how you would calculate CSI for the four scenarios I describe in my post 177 in this thread.
MathGrrl
March 14, 2011 at 05:47 AM PDT
vjtorley,
Why are you using “x2” instead of the actual sequence? Using the “two to the power of the length of the sequence” definition of CSI, we should be calculating based on the actual length.
Good question. The “x2” refers to the semiotic description. Let me put it another way, borrowing an example from the old joke about what dogs understand when their owners are talking: “Blah Blah Blah Blah Ginger Blah Blah” – except that in this case the “Blah” is not repetitive.
I agree with your essential point, I believe, hence my mention of Kolmogorov-Chaitin complexity previously. This is one of several reasons why I also agree with your previous statement that "Actually, what I suspect is that IF the mathematics in my previous post in #283 are correct . . . then the definition of CSI may have to be revised somewhat." My only disagreement is that I think that the required revision might be larger than you suggest. Would you agree that until such a revision is available and demonstrated to objectively and unambiguously measure the involvement of intelligent agency, CSI cannot be claimed to be a useful metric for that task?
MathGrrl
March 14, 2011 at 05:44 AM PDT