Uncommon Descent Serving The Intelligent Design Community

On The Calculation Of CSI


My thanks to Jonathan M. for passing on my suggestion for a CSI thread, and a very special thanks to Denyse O’Leary for inviting me to offer a guest post.

[This post has been advanced to enable a continued discussion on a vital issue. Other newer stories are posted below. – O’Leary ]

In the abstract of Specification: The Pattern That Signifies Intelligence, William Dembski asks, “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” Many ID proponents answer this question emphatically in the affirmative, claiming that Complex Specified Information is a metric that clearly indicates intelligent agency.

As someone with a strong interest in computational biology, evolutionary algorithms, and genetic programming, this strikes me as the most readily testable claim made by ID proponents. For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it. Unfortunately, what I’ve found is quite a bit of confusion about the details of CSI, even among its strongest advocates.

My first detailed discussion was with UD regular gpuccio, in a series of four threads hosted by Mark Frank. While we didn’t come to any resolution, we did cover a number of details that might be of interest to others following the topic.

CSI came up again in a recent thread here on UD. I asked the participants there to assist me in better understanding CSI by providing a rigorous mathematical definition and showing how to calculate it for four scenarios:

  1. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
  2. Tom Schneider’s ev evolves genomes that meet the specification “A nucleotide that binds to exactly N sites within the genome,” using only simplified forms of known, observed evolutionary mechanisms. The length of the genome required to meet this specification can be quite long, depending on the value of N. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)
  3. Tom Ray’s Tierra routinely results in digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The shortest parasite is at least 22 bytes long, but takes thousands of generations to evolve.
  4. The various Steiner Problem solutions from a programming challenge a few years ago have genomes that can easily be hundreds of bits. The specification for these genomes is “Computes a close approximation to the shortest connected path between a set of points.”
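For readers unfamiliar with how ev measures information: Schneider’s R(sequence) sums, over each position of the aligned binding sites, the maximum 2 bits for DNA minus the observed entropy at that position. Here is a minimal sketch of that idea; it omits the small-sample correction Schneider applies in his published work, and the example sites are purely illustrative:

```python
import math

def r_sequence(sites):
    """Sketch of Schneider's R(sequence): information content (bits) of a
    set of aligned binding sites, summed over positions. Each position
    can carry at most log2(4) = 2 bits for DNA."""
    length = len(sites[0])
    total = 0.0
    for i in range(length):
        column = [s[i] for s in sites]
        info = 2.0  # log2(4) bits, the maximum for 4 nucleotides
        for base in "ACGT":
            f = column.count(base) / len(column)
            if f > 0:
                info += f * math.log2(f)  # subtract the entropy H_i
        total += info
    return total

# Perfectly conserved sites carry the full 2 bits per position:
print(r_sequence(["ACGT", "ACGT", "ACGT", "ACGT"]))  # 8.0
```

In ev, this measured information rises under selection toward R(frequency), the amount predicted from the number of sites and the genome size, which is the flat line discussed later in the thread.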

vjtorley very kindly and forthrightly addressed the first scenario in detail. His conclusion is:

I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

In that same thread, at least one other ID proponent agrees that known evolutionary mechanisms can generate CSI. At least two others disagree.

I hope we can resolve the issues in this thread. My goal is still to understand CSI in sufficient detail to be able to objectively measure it in both biological systems and digital models of those systems. To that end, I hope some ID proponents will be willing to answer some questions and provide some information:

  1. Do you agree with vjtorley’s calculation of CSI?
  2. Do you agree with his conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)?
  3. If you disagree with either, please show an equally detailed calculation so that I can understand how you compute CSI in that scenario.
  4. If your definition of CSI is different from that used by vjtorley, please provide a mathematically rigorous definition of your version of CSI.
  5. In addition to the gene duplication example, please show how to calculate CSI using your definition for the other three scenarios I’ve described.

Discussion of the general topic of CSI is, of course, interesting, but calculations at least as detailed as those provided by vjtorley are essential to eliminating ambiguity. Please show your work supporting any claims.

Thank you in advance for helping me understand CSI. Let’s do some math!
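For reference, the measure defined in the Dembski paper cited above is χ = −log₂[10^120 · φ_S(T) · P(T|H)], where φ_S(T) ranks the descriptive simplicity of the pattern T and P(T|H) is its probability under the chance hypothesis H. A minimal sketch of that formula follows; the input values in the example are toy numbers chosen purely for illustration, not a calculation for any of the four scenarios:

```python
import math

def chi(phi_s, p_t_given_h):
    """Sketch of the specified-complexity measure from Dembski's
    'Specification' paper: chi = -log2(10^120 * phi_S(T) * P(T|H)).
    phi_s: number of patterns at least as simple as T;
    p_t_given_h: probability of T under the chance hypothesis H.
    chi > 1 is the paper's design threshold."""
    return -math.log2(10**120 * phi_s * p_t_given_h)

# Toy example: a pattern of simplicity rank 10^5 with probability 2^-500
print(chi(10**5, 2.0**-500))  # ≈ 84.8, above the threshold
```

Note that everything contentious in the thread below lives inside the inputs: which chance hypothesis H to use, and how to count φ_S(T).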

Comments
PaV (241 and 243),
Interestingly, he saw exactly the problem I saw when I looked a little more closely at Tom Schneider’s blog giving info on ev. "Mistakes" Who figures this out? How is it figured out? The answer is the programmer.
You are confusing the simulation with what is being simulated. ev simulates the biological systems on which Schneider based his PhD work. Errors are measured in a manner analogous to how binding sites work in those biological systems.
As Dr. Dembski points out, this is no more than a more sophisticated version of Dawkin’s "Me thinks it is a weasel" self-correcting version of Darwinism.
Incorrect. ev has no explicit target.
As a follow-up to [241], let’s point out what one finds at Schneider’s blogsite. We see a graph. Bits of information/nucleotide has increased. Wow. But . . . also notice—as I’ve already pointed out to MathGrrl—the increase quickly peters out. It flat-lines.
Yes, and it does so not because of any code in the simulator but because the model, simple as it is, is sufficiently rich that it reflects what is observed in biological systems. You seem to misunderstand the distinction between R(sequence) and R(frequency). That flat line is a very interesting result once you do understand that.
And then notice that when "selection" is removed, the "information" is all lost.
Thus demonstrating that evolutionary mechanisms can generate Shannon information.
Well, the "selection" that Schneider alludes to comes exactly from the function that ferrets out the number of mistakes.
Exactly, a model of differential reproductive success in biological systems.
Once this ferreting is turned off (which, though not in the form of a target sequence comes [per Dembski in the article] from “fitness functions”), voila, no new information.
Certainly, if some organisms weren't better suited to their environment than others, we wouldn't see evolution. I strongly recommend reading Schneider's PhD thesis as well as the ev paper. When you understand the difference between R(sequence) and R(frequency) you'll have a much greater appreciation for the results of ev.
MathGrrl
March 28, 2011, 10:44 AM PDT
PaV (235),
You’ve stated twice that you think ‘people’ should address MathGrrl’s question. First of all, it really isn’t a question. It’s a request. More of a demand.
It's just a request, phrased as politely and constructively as I could. If you don't want to answer it, just say so. Ultimately it's just words on your screen -- you're not being compelled in any way.
Why do think she is entitled to something that would be painstaking work to produce?
Why do you consider it painstaking to define your terms? ID proponents make some strong claims about CSI, it's not unreasonable to request to see the support for those claims.
So here we have MathGrrl who seems perfectly willing to accept Shannon’s simplistic notion of information, but now finds it troublesome that CSI isn’t “rigorously” defined. It is plenty rigorously defined.
Great! Please provide the definition and show how to apply it to the four scenarios I described in the original post.
MathGrrl
March 28, 2011, 10:43 AM PDT
Mrs. O'Leary (232),
I agree that people should directly address your questions.
Thank you. I hope it happens in this thread.
MathGrrl: Download? I’m a Canadian, hardly short of download capacity, so not clear re problem, but happy to learn.
vjtorley has mentioned problems when the thread gets too long. Thus far I'm not seeing any problems loading. Anyone else?
MathGrrl
March 28, 2011, 10:43 AM PDT
vjtorley (217),
After reading your posts, I’m beginning to think that your college major wasn’t mathematics, as your handle suggests, but English (you’re quite an articulate person) or possibly biology (since you display familiarity with various software programs designed to mimic evolution).
Thank you for the compliment. Are you suggesting that it is as impossible for a mathematician to be articulate as it is for, to pick a completely random example, a philosopher to be numerate?
Looking through your comments, I can see plenty of breezy, confident assertions along the lines of “Yes, I’ve read that paper,” but so far, NOT ONE SINGLE EQUATION, and NOT ONE SINGLE PIECE OF RIGOROUS MATHEMATICAL ARGUMENTATION from you.
Without searching through the thread, I seem to remember addressing some misconceptions about the NFL theorems. Until someone provides a rigorous mathematical definition of CSI and provides some examples of how to calculate it, though, there is very little math for me to work with.
I invite you to calculate the CSI, using Dembski’s formula. Can you, I wonder?
After reading Dembski's work, no, I cannot be sure I understand his formulation. That's why I'm asking for a rigorous definition and some examples for scenarios similar to those I'm interested in measuring.
I’m calling your bluff.
No bluff, just questions. I'm happy to put in some effort once I understand the concepts better. I found your calculation in the previous thread logical, but even an ID proponent such as yourself had to interpret Dembski's work, and you came to the conclusion that gene duplication could create CSI. I strongly suspect that isn't Dembski's view, so I need more clarity before I'm able to objectively calculate CSI.
MathGrrl
March 28, 2011, 10:42 AM PDT
PaV (212),
In the effort for full disclosure, please tell us exactly what you’re doing these days.
I'm working hard and participating on this blog. You?
MathGrrl
March 28, 2011, 10:41 AM PDT
PaV (209),
To provide a "rigorous definition" of CSI in the case of any of those programs would require analyzing the programs in depth so as to develop a "chance hypothesis". This would require hours and hours of study, thought, and analysis.
First, a mathematically rigorous definition of CSI should be independent of any particular system. Given how often CSI is used in arguments for ID on this blog, it should be straightforward for someone here to simply produce such a definition. Second, your assertion is not aligned with what Dembski claims in his books and papers. He clearly says that CSI can be calculated without knowing anything about the origins of the object. All you need to look at in GAs are the digital strings representing the genomes of the virtual organisms, just as ID proponents claim to be able to detect CSI in the genomes of biological organisms. Please show exactly how to perform that calculation.
MathGrrl
March 28, 2011, 10:41 AM PDT
vjtorley (207), You quote part of Dembski's response to Ken Miller's The Flagellum Unspun: The Collapse of "Irreducible Complexity" (a paper well worth reading by everyone involved in the discussion):
Calculate the probability of getting a flagellum by stochastic (and that includes Darwinian) means any way you like, but do calculate it. All such calculations to date have fallen well below my universal probability bound of 10^(-150). . . . To be sure, if a Darwinian pathway exists, the probabilities associated with it would no longer trigger a design inference. But that’s just the point, isn’t it? Namely, whether such a pathway exists in the first place. Miller, it seems, wants me to calculate probabilities associated with indirect Darwinian pathways leading to the flagellum. But until such paths are made explicit, there’s no way to calculate the probabilities.
This highlights a significant problem that I see with some calculations of CSI, namely that they often assume a uniform probability distribution which is equivalent to de novo generation of particular genomes. Miller touches on this in his paper:
When Dembski turns his attention to the chances of evolving the 30 proteins of the bacterial flagellum, he makes what he regards as a generous assumption. Guessing that each of the proteins of the flagellum have about 300 amino acids, one might calculate that the chances of getting just one such protein to assemble from "random" evolutionary processes would be 20^-300, since there are 20 amino acids specified by the genetic code. Dembski, however, concedes that proteins need not get the exact amino acid sequence right in order to be functional, so he cuts the odds to just 20^30, which he tells his readers is "on the order of 10^39" (Dembski 2002a, 301). Since the flagellum requires 30 such proteins, he explains that 30 such probabilities "will all need to be multiplied to form the origination probability" (Dembski 2002a, 301). That would give us an origination probability for the flagellum of 10^-1170, far below the universal probability bound.
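As a side check, the arithmetic in the quoted passage multiplies out as stated; this sketch simply verifies that 20^-30 is roughly 10^-39, and that thirty such factors give roughly 10^-1170:

```python
import math

# Check the arithmetic in the quoted passage.
per_protein_odds = 20.0 ** -30          # reduced odds per protein
print(math.log10(per_protein_odds))     # ≈ -39, i.e. "on the order of 10^39"

# 30 such proteins, probabilities multiplied:
origination_exponent = 30 * math.log10(per_protein_odds)
print(origination_exponent)             # ≈ -1170, far below the 10^-150 bound
```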
This makes it clear that CSI calculated in this fashion is more a measure of our ignorance about the history of a particular system than it is an indicator of intelligent agency.
MathGrrl
March 28, 2011, 10:40 AM PDT
Joseph (206),
I would very much like to understand CSI well enough to test the assertion that it cannot be generated by evolutionary mechanisms.
My aplogies but I don’t believe you.
That's okay! My motivations are utterly immaterial to the core issues of defining CSI and providing examples of how to calculate it.
"No Free Lunch" is readily available. The concept of CSI is thoroughly discussed in it. The math he used to arrive at the 500 bits as the complexity level is all laid out in that book.
If you are able to articulate a mathematically rigorous definition of CSI and calculate it for the four scenarios I detailed in the original post, please do so. I am unable to from the descriptions in Dembski's various books and papers.
MathGrrl
March 28, 2011, 10:39 AM PDT
Everyone, I'm back from my weekend workshop mentioned in 254 and have finished my prep for the week, so I can get back to the discussion. I'm very pleased to see how it has progressed in my inadvertent absence. I'm going to address a number of the comments, but certainly not close to all. If you think I've overlooked an important point, please call my attention to it.
MathGrrl
March 28, 2011, 10:38 AM PDT
Joseph, #170.
And if you cannot provide a mathematically rigorous definion of a computer program then you don’t know what you are talking about and are a waste of time.
I saw this comment made several times and thought it sensible to point out the existence of Formal Methods of Software Design. Sorry if this was already addressed; I've only read to #170 but searched the page for the terms. I'll return to read the rest later.
cams
March 28, 2011, 07:00 AM PDT
F/N: One of the most distressing things in the above thread (and previous ones leading up to it) is the repeated strawman -- or, outright false -- assertion that FSCI and/or the broader CSI cannot be worked out for real world biological cases, joined to equally predictable dismissals of observations and calculations when they are made. There is also the usual demand for references in the peer reviewed literature -- in the teeth of cases in point from that literature and the known bias and hostility by the establishment's gatekeepers, which just led to a case where U of K had to settle a career-busting lawsuit for US$ 125,000. Being on a complex consultation trip, I do not have the time or focus today for a point by point rebuttal to the several claims like that, so instead I make a few points:
1 --> The UD weak argument correctives 25 - 30 [accessible top right this and every UD page] actually provide links to metrics and to cases in the literature, including especially the 2007 Durston case of 35 protein FAMILIES, based on an extension of the H-metric on average information per symbol [often called entropy] in light of protein families.
2 --> I have not found evidence above of a serious engagement of the evidence, but instead a repeated resort to the rhetoric of dismissal.
3 --> On the simple brute force X-metric, I have now presented the rationale for it several times in the thread [not to mention the much more detailed presentation in my always linked note -- "that's a self reference" is the dismissal (as though such dismissals are not playing the ad hominem and/or appeal to authority rhetorical game, instead of dealing with the merits . . .)], and have used it, only to have it brushed aside without being addressed seriously on merits.
4 --> In particular, having pointed out how the original post is a case of FSCI as a reliable sign of design [per explanatory filter . . .] I have pointed out how the same logic and mathematical analysis on observed facts apply to a functionally specific protein of 300 AA's. And, that there are easily hundreds of such FSCI rich proteins working together in the living cell.
5 --> Whistled by in the dark . . .
6 --> Yesterday, I took a moment to address a key biologically relevant case in MG's list, and highlighted how the problem as presented implied -- but breezily brushed over -- a reference to a much wider complex regulatory system that is replete with functionally specific complex Wicken Wiring Diagram type organisation, implying FSCI, not only on the components [how many proteins, with how many AA's are involved . . .] but on the implied information content of such a complex nodes, arcs and interfaces organisation.
7 --> The reaction was almost predictably that I did not provide a calculation. I do not think I need to. That we are dealing with well past 125 bytes of information to simply describe the regulatory network on a nodes, arcs and interfaces basis is obvious, and many items sitting at nodes are complex information rich molecules integrated in a co-ordinated way. 1,000 bits is of course the FSCI threshold.
8 --> The real and plainly still unmet challenge to champions of chance and necessity acting without intelligence is to show us that the sort of functionally specific, complex organisation and information associated with these systems can credibly arise -- per OBSERVATION -- by just such blind watchmaker mechanisms.
9 --> The best they can do is to provide genetic or evolutionary algorithm based, intelligently designed software that starts on an island of function and, per algorithmic process and intelligently specified fitness function and sorting, does hill climbing. That shows intelligent design as the best and only observationally known way to get to such islands of function.
10 --> So, much of the huffing and puffing in the face of actual examples to the contrary -- that CSI is not defined, is not specified mathematically, is meaningless [notice how there simply has not been an answer to whether Orgel and Wicken were meaningless when they wrote in the technical literature in the 70's], and cannot be measured or estimated or calculated for biological systems -- is, pardon my directness, distractive rhetoric in the teeth of facts already and in many cases long since in evidence. Okay, better get on with further items on today's to do list. G'day GEM of TKI
kairosfocus
March 28, 2011, 06:46 AM PDT
Markf, Apparently Joseph is the Id of UD.
jon specter
March 28, 2011, 06:30 AM PDT
markf, Why can't you produce positive evidence for your position? Why can't you produce your position's methodology so we can compare it to CSI? I know why -- because to a person evos are intellectual cowards and that makes it personal.
Joseph
March 28, 2011, 06:07 AM PDT
#335 Joseph - why do you get so intensely personal in your comments?
markf
March 28, 2011, 05:51 AM PDT
markf Her "challenge" is bogus for the reasons provided. Evos are talking about gene duplications adding information, yet there aren't any gene duplications during the origin of life. Meaning it is clear that MG has erected a strawman. Then there is JR with its drooling about cancer -- can anyone be more of a dolt? Most cancers are our fault but JR wants to blame the designer(s). And this is all pathetic because in the end all you evos have to do is actually step up and start producing positive evidence for your position and CSI will go away. But that ain't going to happen, is it?
Joseph
March 28, 2011, 05:22 AM PDT
Let’s keep this short and to the point. If we are talking about information (CSI, FSCI, or otherwise), and we are, then a language, “a system of chemical representations (symbols),” must also exist. I am saying that a materialistic explanation of language is, in principle, impossible.
Just to be clear, are you saying that a language has to exist in nature (e.g. in the sequence of DNA nucleotides), or do you mean that nature has to be described by a language?
Heinrich
March 28, 2011, 04:57 AM PDT
tgpeeler and UB You seem very concerned that Mathgrrl has not addressed the presence of symbols as a sign of information and therefore design. It seems a bit rough, as her challenge was for someone to provide a mathematical calculation of the CSI or information in certain cases. After all, many leading ID proponents claim that CSI can be measured in bits. If you want to introduce a different criterion for information/design, that is fair enough, but it doesn't answer her challenge and it is not necessarily an evasion on her part not to answer. She has done amazingly well to respond to so many different objections on this thread and she cannot be expected to respond to every different objection, especially when it does not answer her challenge directly. I am happy to take up the challenge of whether life contains symbols and whether this is proof of design. Let me state my position -- which is that "symbol" can include many shades of meaning. Under one definition it does imply some kind of intention or plan -- but symbols of this type are not present in life. Under another definition symbols are present in life but do not imply any kind of design or plan. Perhaps we might pick up the discussion where I left off with UB at #31 above. We agreed that symbols in life according to his definition are only present in DNA, not elsewhere. I then asked what the symbols in DNA represent. I think at that point he got distracted or I missed the answer somewhere.
markf
March 28, 2011, 04:55 AM PDT
Onlookers: The above onward exchange after my comments y/day morning shows why I felt that the only reasonable purpose for my commenting on this thread was to put some things on record. If you will look at my comment at 320, you will see that I addressed the implications of a gene duplication in the context of a cell, that the implied regulated process to create a duplication could well directly put us past any reasonable threshold for complexity. I then proceeded to actually do a sample calculation. Brushed aside. Thus, the situation in this thread is much as VJT described at 315: ____________ >> Regarding ev, PaV has already shown that it is incapable in principle of breaching Dembski’s Universal Probability Bound, so I think we can all agree it does not present a real challenge. I notice that Mathgrrl has not returned, and that she has yet to demonstrate her calculating prowess. Until she gets her hands dirty with some raw numbers, and attempts to perform some real calculations, I shall remain skeptical of her claim to be mathematically proficient. Jemima Racktouey (JR) complains that CSI cannot be calculated for an arbitrary system. Talk about shifting goalposts! First the complaint was that we couldn’t calculate specified complexity for anything biological. I replied by doing a calculation for the bacterial flagellum. Then the complaint was that it could only be calculated for one biological system, and not for other irreducibly complex systems. I replied by generously giving Mathgrrl a long list of 40 irreducibly complex systems, together with their descriptions, as well as the numbers required to calculate the specified complexity Chi for ATP synthase. I invited her to finish the calculation. She still hasn’t done so. And now the complaint from JR is that we can’t calculate specified complexity for anything and everything! Many meaningful mathematical quantities are not computable by a single algorithm or set of algorithms. JR should know that. 
That does not render them meaningless. Specified complexity is a meaningful quantity which (as I showed in my example of the Precambrian smiley face) can be calculated for a variety of objects. I conclude that some critics of CSI and specified complexity are making unreasonable demands which ID proponents should not waste their time on. Other more sincere critics have genuine questions that can be addressed on another thread . . . >> ____________ Let's draw some additional conclusions:
1 --> The CSI concept is a recognition of a distinct class of reality found in known engineered systems [e.g. the reasonably comparable mechatronic ones -- cf. an architecture for such systems here in the context of autonomy and robotics], and also the living cell, per observations first made and published in the technical literature by Orgel and Wicken in the 1970's.
2 --> The "meaningful[ness]" and/or real-world significance of CSI and FSCI would THEREFORE be prior to the creation and/or critiques of mathematical models, analyses and metrics since the 1990's. (In short, we have a quality-quantification issue here. Often, quality must be recognised before quantity can be measured: what, before how much.)
3 --> Various metrics are possible, and are applicable in diverse contexts.
4 --> At a first rough-cut level, a brute-force, simple one that exploits the fact that once we are at or beyond 1,000 bits of info storage capacity, the search resources of the observed cosmos would be exhausted before 1 in 10^150 of the space could be scanned, and the fact that semiotic agents/observers [SAO's] with ability to judge exist and are part and parcel of science, allows us to recognise and crudely quantify enough cases to draw some reasonable conclusions.
5 --> Specifically, X = C*S*B, where an SAO identifies specificity S as 1/0, and complexity on the threshold of info storage capacity per the 1,000 bit threshold as 1/0, and B as bit depth actually used, shows us that, say, a post of 143 or more characters in this thread or the original post is best explained on design.
6 --> Similarly, it points to a protein of sufficient complexity, e.g. a typical 300 AA molecule that must fold and function in the cell's key-lock fit environment, as best explained on design.
7 --> As was shown at 320, it also highlights that the cell duplication process and resulting complexity already point to design. Here is Wiki on Mitosis (for eukaryotes), making the usual admission against interest:
The process of mitosis is fast and highly complex. The sequence of events is divided into stages corresponding to the completion of one set of activities and the start of the next. These stages are interphase, prophase, prometaphase, metaphase, anaphase and telophase. During mitosis the pairs of chromatids condense and attach to fibers that pull the sister chromatids to opposite sides of the cell. The cell then divides in cytokinesis, to produce two identical daughter cells . . .
8 --> The article on the S Phase in which the chromosomes are duplicated, adds:
S-phase (synthesis phase) is the part of the cell cycle in which DNA is replicated, occurring between G1 phase and G2 phase. Precise and accurate DNA replication is necessary to prevent genetic abnormalities which often lead to cell death or disease. Due to the importance, the regulatory pathways that govern this event in eukaryotes are highly conserved . . . . The major event in S-phase is DNA replication. The goal of this process is to create exactly two identical semi-conserved chromosomes. The cell prevents more than one replication from occurring by loading pre-replication complexes onto the DNA at replication origins during G1-phase which are dismantled in S-phase as replication begins. In budding yeast, Cdc6 is degraded, Orc2/6 are phosphorylated and mcm proteins are excluded from the nucleus, preventing re-attachment of the replication machinery (DNA polymerase) to the DNA after initiation. Incredibly, DNA synthesis can occur as fast as 100 nucleotides/second and must be as accurate as 1 wrong base in 10^9 nucleotide additions. . . . . Damage to DNA is detected and fixed during S-phase. When the replication fork comes upon damaged DNA, ATR, a protein kinase, is activated. This kinase initiates several complex downstream pathways causing a halt in the initiation of new replication origins, prevention of mitosis and replication fork stabilization in order to keep the replication bubble open and DNA polymerase complex attached while the damage is being fixed.
9 --> In short, a tightly regulated, operationally complex process that sits on an island of function, and involving many objects and co-ordinated processes that will easily put us well past the threshold for FSCI.
10 --> In this context, the rare error that was focussed on by MG is seen for what it is, an error in a process that is already so plainly sophisticated and functionally specific that it is long since best explained as designed.
11 --> So, again, the first case presented by MG without acknowledgement of that context and with the artful suggestion of the term "simple," is predicated on begging the question of what is required for cell replication, and for duplication of DNA in that.
12 --> And once design is already clearly on the table as a best explanation, further cases multiply the force of the inference.
13 --> Similarly, the Rube Goldberg dismissal of complexity above, by another commenter, reveals a lack of awareness of what is required to make a complex system that has to respond to real world challenges work. There is a reason why even a PC operating system -- vastly less sophisticated than a living cell -- is complex.
14 --> More sophisticated metric models, and variants on them, have already been linked and discussed. Names like Durston et al and Dembski have been put up.
15 --> VJT has of course done some calculations and has challenged MG to match them, so far without effect. G'day GEM of TKI
kairosfocus
March 28, 2011, 03:56 AM PDT
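The brute-force X = C*S*B metric described in the comment above can be sketched as follows; the 1,000-bit threshold and the 143-character example (at 7 bits per ASCII character) are taken from that description, while the function name and argument layout are illustrative:

```python
def x_metric(bits_used, is_specific, threshold=1000):
    """Sketch of the X = C*S*B heuristic: S is 1/0 for functional
    specificity (judged by an observer), C is 1/0 for exceeding the
    bit threshold, and B is the bit depth actually used. A nonzero X
    flags design under this heuristic."""
    c = 1 if bits_used >= threshold else 0
    s = 1 if is_specific else 0
    return c * s * bits_used

# A 143-character ASCII post at 7 bits/character crosses the threshold:
print(x_metric(143 * 7, True))   # 1001
print(x_metric(500, True))       # 0: below the complexity threshold
```

Note that the metric's output hinges entirely on the observer's 1/0 judgment of specificity, which is one of the points of contention in this thread.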
Chances are she won't be bringing up the moderation policy. :)
Upright BiPed
March 28, 2011, 12:17 AM PDT
TGP,
“Let me try once again to make this point.”
Of course, I concur with your point; it is the same one I was making in #192. When the guest author makes the statement that an evolutionary algorithm creates information, she simply assumes the only element in the equation which is (by itself) responsible for the existence of the information. She has so far refused to acknowledge this observation. Yet, anyone who questions this observation can peruse Marshall Nirenberg's papers for what he thinks caused phenylalanine to end up in his sample. The rest of us already know the answer, and we are not ignoring it. Even if it means the (already stated) conclusion of the guest author is either a) false, b) over-reaching, or c) incomplete.
“No how. No way. Ever.”
Not much of a consolation prize, is it? For no other reason than this, we will not see any such acknowledgment on this thread. Mathgrrl won't acknowledge the reality on its face, but it will be interesting to see how she handles it if she decides to return. I'm not holding my breath for her simply acknowledging that the existence of the information has nothing whatsoever to do with the EA. As a strategic study, it is interesting. Since an observed reality has fatally challenged her conclusion, one which she cannot acknowledge without conceding the point, she is left with only three remaining positions. She can divert attention to something else. She can cloak herself from the evidence (perhaps say it doesn't matter; a way of attacking the evidence by dismissing the issue without acknowledging it). Or she can go on ignoring it. Of course, she can also break it into pieces and play one card here and another there. I'm putting my money on diversion followed by dismissal. Probably a technical foul of some sort, related to the thread perhaps. Previously, she dismissed it by implying that any particular proponent's definition of CSI made the issue go away. Perhaps she'll stay with that. She can always attack me, politely of course. I think she (gracefully) leaves open confrontation to others.
Upright BiPed
March 28, 2011, 12:07 AM PDT
Upright BiPed @ 292: "The information she is discussing is information which is instantiated in a system of chemical representations (symbols)."
Let me try once again to make this point. (And if it is a point not worth making, I'd appreciate being enlightened as to why not.) Let's keep this short and to the point. If we are talking about information (CSI, FSCI, or otherwise), and we are, then a language, "a system of chemical representations (symbols)," must also exist. I am saying that a materialistic explanation of language is, in principle, impossible. It is impossible because neither the code nor the rules that govern the code can be explained by reference to the laws of physics. No how. No way. Ever. It takes a free and purposeful will to rationally order symbols to produce a message. It can't be done by an algorithm. Somebody show me one if it can; I'll be most interested to investigate it. This game is long over, even if mg, jemima and the rest won't get it. This isn't intellectual resistance, even though that's how it's being sold.
tgpeeler
March 27, 2011, 09:10 PM PDT
M. Holcumbrink,
But if you showed up at the bank in a stretch limo, wearing a $10,000 suit, and followed by an entourage of very subservient individuals that are clearly terrified by you, and you approach the banker and say “I would like to deposit a lot of money in your bank”, I would imagine that he would roll out the red carpet for you without having the faintest clue as to how much money you actually intend to deposit.
And that's exactly how many confidence scams start.
I work in design, and my thoughts are “Oh! If only we could design and build systems that mimic life more closely!”
Will you be copying the 1-in-3 failure rate to cancer? Or the very narrow range of temperatures and pressures life works at? Will you be designing machines to be like parasites on other machines? To be red in tooth and claw? Perhaps not.
JemimaRacktouey
March 27, 2011, 01:09 PM PDT
JR: "Funny how my bank manager wont' accept my claims of being 'money-rich' without me putting a figure on it."
But if you showed up at the bank in a stretch limo, wearing a $10,000 suit, and followed by an entourage of very subservient individuals that are clearly terrified by you, and you approach the banker and say "I would like to deposit a lot of money in your bank," I would imagine that he would roll out the red carpet for you without having the faintest clue as to how much money you actually intend to deposit. But it's my guess that he would expect it to be a lot.
JR: "The funny thing is that good design is not at all like what we see in the cell. The mark of a good design is simplicity, not complexity. Or at least, as simple as you can make it given the task at hand. The diagrams for the metabolic pathways of a cell include feedback loops all over the place, loops within loops within loops. Nobody sane designs stuff like that, there is just no reason to."
I very seriously doubt you are familiar with good design as it pertains to mechatronics and biomimetics. The closer you get to autonomy (drones) or to creating lifelike robotics, the more automatic controls you need, and they constantly have to communicate with each other in real time. Eventually you end up with a tangled web of inputs, computations and outputs; it's the only way to make it work. You sound like you played with one of those programmable Lego sets and now think engineering sophisticated integrated systems is simple and easy. Truth is, the cell alone is on the order of sophistication of an F-22, including the glass cockpit. Do you have any idea how many systems are on board an F-22, fully automated and integrated? Do you think something like that is simple? What about an ordinary hard drive? Simple?
Besides that, any time I read about biology, all I hear, even from Darwinists, is how elegant these systems are, replete with astonishingly elegant design solutions (which is of course attributed to the efficacy of NS in creating such systems, at which I roll my eyes). And the more complicated such systems are, the more susceptible to degradation they are. Systems like these that have been designed by man constantly have to be maintained, and the degree of maintenance required grows geometrically with the degree of sophistication. In biological systems the maintenance is actually part of the system itself, fully integrated and autonomous (which in and of itself is an engineering marvel), but when something goes wrong with the maintenance systems, we see things like cancer, which is not part of the design by any stretch of the imagination.
JR: "Any actual person working in design would look at those far-too-complex 'designs' and say 'I'd never ever design something in that way'."
I work in design, and my thoughts are "Oh! If only we could design and build systems that mimic life more closely!" Hence the field of biomimetics. Why would there be an entire field of engineering devoted to understanding and duplicating the design solutions we see in biological systems if "no actual person working in design" would ever want to "design something in that way"? You show your ignorance here, JR. Well, either that or you are guilty of sophistry. Which is it, by the way?
M. Holcumbrink
March 27, 2011, 10:05 AM PDT
Collin
I think that Mathgrrl is trying to get us to admit that CSI is not rigorously, mathematically calculable.
It's not a case of admitting it; it's a case of asking for such a calculation to be performed and then judging the situation by the results obtained. As O'Leary has just noted,
Meanwhile, perhaps junior ID theorists cannot formulate a single definition of complex specified information at present.
it seems that the more junior members of the ID camp need to wait for updated instructions from the ID seniors. I've suggested to O'Leary that perhaps she should ask Dr Dembski to jump into the discussion. If anybody can calculate CSI for the 4 examples given, I'm sure it's the good Dr.
JemimaRacktouey
March 27, 2011, 06:38 AM PDT
I think that Mathgrrl is trying to get us to admit that CSI is not rigorously, mathematically calculable. Well, she's asking lay people to admit something that they do not know, for the most part. Yet VJTorley has given her some calculations, as have others. Maybe she thinks that they are not rigorous enough. Fine, but we'll just have to agree to disagree. The definition of specified complexity is easy enough. Dembski gives a good illustration: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." Whether this is calculable or not, I don't know. But it is logical and coherent, and it leads one to be able to infer that Shakespeare's sonnet was designed regardless of whether or not we knew that Shakespeare wrote it. I think that a cryptologist or SETI scientist would agree.
Collin
March 27, 2011, 06:14 AM PDT
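As an aside for readers following the math: the "complexity" half of Dembski's letter/sentence/sonnet illustration can at least be made quantitative. The sketch below is my own illustration, assuming a uniform 26-letter alphabet; the "specified" half is precisely what no such count captures.

```python
from math import log2

def complexity_bits(length, alphabet_size=26):
    # Bits needed to single out one particular string among all
    # equally likely strings of this length over the alphabet.
    return length * log2(alphabet_size)

print(round(complexity_bits(1), 1))  # 4.7 -- a single letter: specified, not complex
print(complexity_bits(600) > 500)    # True -- a sonnet-length text far exceeds a 500-bit bound
```

A random 600-character string scores just as high on this measure as the sonnet, which is why complexity alone, on any definition like this, cannot signal design.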
PAV:
So, MathGrrl’s question is not a question. It’s a demand for a demonstration, and nothing less.
So? Asking an ID scientist to demonstrate what he claims is one of the key tools in his toolbox hardly seems like some nefarious plot. The few actual scientists I know are positively giddy when asked to talk about their work. I can't shut them up, even after the food arrives.
As I demonstrated above, she has not understood what a specification
Yes, and by my reading, she agrees with you on that point and has asked for help clearing it up.
The only person in the ID world providing mathematical definitions of CSI is Bill Dembski. She should have known this from the beginning.
It might have saved everyone a lot of time and frustration if they had just said, 700 comments (over two threads) ago, that they can't define CSI in an unambiguous manner.
She didn’t want us to give a “rigorous mathematical definition” of CSI, she wanted us to tear apart the programs and assess it using the notions of CSI. Why should I be expected to respond to such a request on my time and energy. Am I some kind of paid consultant?
I, for one, was under the impression that you were an ID scientist. And, as I am led to understand, spending inordinate amounts of time developing ideas and sharing them widely is the process of science and the work of scientists.
We can calculate it. But it is a very labor and time intensive operation. Why are we supposed to make this calculation?
Because, as I have been told, that is what scientists do if they want the broader world to take notice of their work.
Why isn’t she expected to show that she understands CSI and demonstrate that understanding by, herself, anaylzing these programs. If she came up with something disproving CSI, THEN, and ONLY THEN would it be incumbent upon the ID community to rebut her findings.
How can she disprove what hasn't been shown to be proven? CSI is an interesting concept. But, until someone actually shows it in action (it is easy, after all, right? You said so yourself), it seems like demands for disproof are premature. Your demands for disproof of what you aren't yet willing to demonstrate looks kinda like this: I claim that I am the most interesting man in the world. Now you must disprove that. And it is insufficient for you to say that I am nothing more than an internet blog troll, because someone else likely disagrees with you. You must demonstrate it with such rigor that everyone agrees that I am not very interesting.
Even Bill Dembski can’t “agree” on a definition of CSI. He no longer is using it, in a sense. He now is using “specified complexity”. Others here at UD want to stick directly in the “information” area and have our own intuitive ideas of what CSI should look like, and what we should be looking for in biological systems. Is there something wrong with this?
Well, it renders your demand that Mathgrrl begone and not come back until she understands CSI a little confusing. How is she supposed to demonstrate an understanding of a central ID concept which actual ID scientists can't agree on? It might save you more time if you shortened your request that she go away until she understands CSI to a request that she just go away.
jon specter
March 27, 2011, 04:08 AM PDT
KF,
But notice, please: there has been a provision of relevant calculations and metrics all along, just they were soon obfuscated in the chaos of a hot debate. Again and again.
Unfortunately, the vast majority were published in books intended for the lay reader. If a usable, rigorous definition of CSI had been published in the mathematical literature, I doubt this thread would even exist. Can you provide such a definition of CSI so that it can be applied to a generic situation? If not, will you admit that such is not possible, or will you continue to cloud the issue with rhetoric and demands that the origin of the "system" be explained (thereby winning the argument from both sides: heads, CSI can be calculated; tails, CSI cannot be calculated, but you can't explain the origin of the system we're talking about, so you lose)?
JemimaRacktouey
March 27, 2011, 04:03 AM PDT
The funny thing is that good design is not at all like what we see in the cell. The mark of a good design is simplicity, not complexity. Or at least, as simple as you can make it given the task at hand. The diagrams for the metabolic pathways of a cell include feedback loops all over the place, loops within loops within loops. Nobody sane designs stuff like that; there is just no reason to. It would be like including more ways for something to go wrong than to go right, and it makes the system impossible to debug and of course difficult to roll out into production. So if some of the people who say things like "the multiple layers of connectivity and interaction prove design" really knew anything about actual "design," they would conclude that the designer was a process exactly like unguided evolution. Otherwise, extrapolating from our current experience (which ID is always telling us to do), the designer simply did not know what it was doing and left it all up to evolution. These "multiple layers of interacting error correcting codes" that scream design to you all are not so rigorous as to prevent 1 in 3 people getting cancer. Any actual person working in design would look at those far-too-complex "designs" and say "I'd never ever design something in that way". And they do and have. Yet here such mess is considered a hallmark of a genius designer! Go figure...
JemimaRacktouey
March 27, 2011, 03:57 AM PDT
kairosfocus,
So, let us ask, how do we get TO a “simple . . . duplication”? Or, more specifically, first, to the functional, controlled, regulated expression of genes and their replication when a cell divides?
Yes, of course. The origin of such systems must be worked out before we can talk about the CSI present in such systems. In fact, we can probably determine that there is so much CSI in the system that we've no actual need to calculate a specific value as it's over the UPB.
Thus, once we have gene duplication, we have already had something that regulates and expresses replication, which is itself going to be FSCI-rich, if the just linked diagrams are any indication.
Ah yes. Such a complex system would by definition have loads of FSCI, so much so that there's no actual need to put a figure on it. It is FSCI-rich. Funny how my bank manager won't accept my claims of being "money-rich" without me putting a figure on it.
That implied capacity, BTW VJT, is what seems to be pushing you over the threshold of CSI when you have such a duplication.
Odd how we can go over the threshold of CSI when so far all we know about the amount of CSI is that it's a "rich" amount. Your sleight of hand has been noted. No need to calculate CSI if you know it's there. And if CSI is present, that's a reliable indicator of intelligence. Talk about assuming your conclusions!
8 –> Doubling such a short protein would jump us to 1,200 functionally specific bits, and the crude, brute force “a cubit is the measure from elbow to fingertips” 1,000 bit threshold metric would pick this up as passing the FSCI threshold, on which the explanatory filter would point to design as best explanation.
Would you be able to show the details of that calculation? An example of the explanatory filter in action is a rarer beast than a calculation of CSI. And here you seem to be agreeing that gene duplications do in fact generate CSI, and therefore that undirected evolution is capable of generating CSI. Interesting.
But that is not a pure chance event, it is within a highly complex island of function, and so we are looking at hill climbing within such an island.
Ah, so even if a duplication generates CSI, that duplication was still placed on that "island of function" by a designer, so it's still design really.
Where also the focus of design theory is how do we get to islands of function, not how we move around within such an island,
Except it's not, really, is it? Unless you can reference a relevant paper published in the literature, not a self-published website or other DI-sponsored site.
to one where there is a duplication in a regulatory network, we have brought into focus a much wider set of complexity, that pushes us over the FSCI threshold.
Does it? Then you'll have no problem showing the calculations that lead you to this conclusion.
13 –> In turn, that points to design as best explanation for the SYSTEM capable of that duplication.
Of course! The system is only capable of a random duplication at some random point in DNA that might cause cancer or any one of a thousand debilitating conditions because it was designed that way. Thanks for clearing that up. It appears the "intelligent designer" is the d-e-v-i-l.
20 –> So, MG’s tickler case no 1 points to a deeper set of connexions, and the crude, brute force FSCI criterion and metric comes up trumps.
While you may proclaim victory, that would only work in an English class. This is a mathematics class, and you won't get an A+ here without doing some actual math. Rhetoric should be saved for the debating class. D-
JemimaRacktouey
March 27, 2011, 03:37 AM PDT
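For what it's worth, the arithmetic behind the quoted "1,200 functionally specific bits" claim is easy to reconstruct under the crude capacity metric the thread keeps referring to: 2 bits per DNA base against a 1,000-bit cutoff. A minimal sketch, where the 300-base gene length is my illustrative assumption (chosen to match the 600-bit starting figure, which is not stated in the thread):

```python
def capacity_bits(n_bases):
    # Raw storage capacity: each DNA base is one of 4 symbols,
    # so log2(4) = 2 bits per base.
    return 2 * n_bases

FSCI_THRESHOLD_BITS = 1000  # the "crude, brute force" cutoff used in the thread

original = capacity_bits(300)   # 600 bits for an assumed 300-base gene
duplicated = 2 * original       # 1200 bits after a simple duplication
print(duplicated > FSCI_THRESHOLD_BITS)  # True: the doubled gene crosses the cutoff
```

Note that this only measures storage capacity; whether a duplicate copy adds any *new* specified information is exactly the point in dispute above, and nothing in this arithmetic settles it.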
F/N: PAV, I suggest that CSI and FSCI have never ever been about raw bits, which is simply the key Shannon-Hartley metric on negative log probabilities. (One can further argue that as soon as the issue of a posteriori probabilities appears in Shannon's analysis, RAW BITS HAVE GONE OUT TOO . . . as the implication is that a judging, semiotic agent is distinguishing signal from noise on an implicit inference to design on specified complexity of meaningful messages, as opposed to random patterns involved in noise reflecting raw balances of probabilities.) In real-world contexts, we use bits as a volume measure: so much capacity to hold or transfer information. But we are usually dealing with FUNCTIONAL information being held in that volume, e.g. "this jpg is 840 k bits" or whatever. And of course such a jpg would often be functionally specific, e.g. it contains a portrait of President Obama, which can only be dosed with noise up to a certain level before it loses function, and eventually it would disintegrate into meaningless snow.
______________
F/N 2: As to the notion that if someone had asked how to get the position from the movement of an object, one would "simply" provide the kinematics and/or dynamics, let us just say this is after the debates have been had. The debates were very hot indeed, and with major political overtones, 400 or so years ago. They took generations to settle out. And motives, rhetorical tricks and traps were very much a part of the debates, as well as unjustified career-busting. All of which sounds ever so familiar today. But notice, please: there has been a provision of relevant calculations and metrics all along; they were just soon obfuscated in the chaos of a hot debate. Again and again.
kairosfocus
March 27, 2011, 03:33 AM PDT
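The "negative log probabilities" phrasing in the comment above is standard Shannon self-information, and is easy to make concrete. A minimal sketch; the uniform four-symbol DNA model is my assumption for illustration, not a model anyone in the thread proposed:

```python
from math import log2

def surprisal_bits(message, probs):
    # Shannon self-information: I(x) = -log2 p(x), summed over
    # the (assumed independent) symbols of the message.
    return -sum(log2(probs[sym]) for sym in message)

# Uniform model over the four DNA bases: each base carries log2(4) = 2 bits.
uniform_dna = {base: 0.25 for base in "ACGT"}
print(surprisal_bits("ACGTACGT", uniform_dna))  # 16.0
```

This is exactly the "raw bits" capacity measure; it assigns the same 16.0 bits to any 8-base string, functional or not, which is why the comment insists that function and specification are judgments layered on top of it rather than outputs of it.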