
On The Calculation Of CSI


My thanks to Jonathan M. for passing on my suggestion for a CSI thread, and a very special thanks to Denyse O’Leary for inviting me to offer a guest post.

[This post has been advanced to enable a continued discussion on a vital issue. Other newer stories are posted below. – O’Leary ]

In the abstract of Specification: The Pattern That Signifies Intelligence, William Dembski asks “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” Many ID proponents answer this question emphatically in the affirmative, claiming that Complex Specified Information is a metric that clearly indicates intelligent agency.
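For reference, the specified-complexity measure defined in that paper is usually quoted as

```latex
\chi = -\log_2\!\left[\,10^{120}\cdot\varphi_S(T)\cdot P(T\mid H)\,\right]
```

where T is the observed pattern, φ_S(T) counts the patterns at least as easy to describe as T, P(T|H) is the probability of T under the relevant chance hypothesis H, and 10^120 bounds the probabilistic resources of the observable universe; design is inferred when χ > 1. The scenarios below ask how those quantities are to be assigned in concrete cases.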

As someone with a strong interest in computational biology, evolutionary algorithms, and genetic programming, this strikes me as the most readily testable claim made by ID proponents. For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it. Unfortunately, what I’ve found is quite a bit of confusion about the details of CSI, even among its strongest advocates.

My first detailed discussion was with UD regular gpuccio, in a series of four threads hosted by Mark Frank. While we didn’t come to any resolution, we did cover a number of details that might be of interest to others following the topic.

CSI came up again in a recent thread here on UD. I asked the participants there to assist me in better understanding CSI by providing a rigorous mathematical definition and showing how to calculate it for four scenarios:

  1. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.” (A toy sketch of this scenario appears just after this list.)
  2. Tom Schneider’s ev uses only simplified forms of known, observed evolutionary mechanisms to evolve genomes that meet the specification of “A nucleotide that binds to exactly N sites within the genome.” The length of the genome required to meet this specification can be quite long, depending on the value of N. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)
  3. Tom Ray’s Tierra routinely results in digital organisms with a number of specifications. One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The shortest parasite is at least 22 bytes long, but it takes thousands of generations to evolve.
  4. The various Steiner Problem solutions from a programming challenge a few years ago have genomes that can easily be hundreds of bits. The specification for these genomes is “Computes a close approximation to the shortest connected path between a set of points.”
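The toy sketch promised in scenario 1 above (purely illustrative; the genome, gene, and counts are invented): a duplication copies an existing coding region in place, adding bases but no novel sequence.

```python
# Toy illustration of scenario 1: duplicating a gene doubles its dosage
# without introducing any novel sequence. All values here are invented.

def duplicate_gene(genome: str, gene: str) -> str:
    """Insert a second copy of `gene` immediately after its first occurrence."""
    i = genome.find(gene)
    if i == -1:
        raise ValueError("gene not found in genome")
    end = i + len(gene)
    return genome[:end] + gene + genome[end:]

genome = "AATT" + "ATGGCCTGA" + "GGCC"   # a tiny 'gene' with flanking sequence
gene = "ATGGCCTGA"

duplicated = duplicate_gene(genome, gene)
print(genome.count(gene), duplicated.count(gene))  # 1 2 -> dosage doubled
print(len(duplicated) - len(genome))               # 9 extra bases, all copied
```

The question the scenario poses is whether those copied bases count as new CSI under any of the definitions on offer.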

vjtorley very kindly and forthrightly addressed the first scenario in detail. His conclusion is:

I therefore conclude that CSI is not a useful way to compare the complexity of a genome containing a duplicated gene to the original genome, because the extra bases are added in a single copying event, which is governed by a process (duplication) which takes place in an orderly fashion, when it occurs.

In that same thread, at least one other ID proponent agrees that known evolutionary mechanisms can generate CSI. At least two others disagree.

I hope we can resolve the issues in this thread. My goal is still to understand CSI in sufficient detail to be able to objectively measure it in both biological systems and digital models of those systems. To that end, I hope some ID proponents will be willing to answer some questions and provide some information:

  1. Do you agree with vjtorley’s calculation of CSI?
  2. Do you agree with his conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)?
  3. If you disagree with either, please show an equally detailed calculation so that I can understand how you compute CSI in that scenario.
  4. If your definition of CSI is different from that used by vjtorley, please provide a mathematically rigorous definition of your version of CSI.
  5. In addition to the gene duplication example, please show how to calculate CSI using your definition for the other three scenarios I’ve described.

Discussion of the general topic of CSI is, of course, interesting, but calculations at least as detailed as those provided by vjtorley are essential to eliminating ambiguity. Please show your work supporting any claims.

Thank you in advance for helping me understand CSI. Let’s do some math!

Comments
It has been more than two years since this post began. We've learned, subsequently, just as I suspected, that MathGrrl is really not a "girl", and that it was a person whose sole purpose was to work to undermine the ID position. This was obvious from the beginning, and was the reason I thought Denyse was wrong in allowing this post, and the reason I told so-called "MathGrrl" to just "go away."

Since that time, I've had a chance to look at Schneider's ev program. I haven't looked at it for quite some time now, but remember a few specifics. In response to so-called "MathGrrl", the following can be said: the 'ev' program does not have a truly defined "specification." What substitutes for this "specification" is a digital field representing what are supposed to be 128 nucleotide bases. The ev program is designed to 'evolve' a 'protein binding site', and a nucleotide sequence that matches up to the site. Each time the program is run, and further depending on the selection of various program variables, different 'specifications', i.e., different "protein binding sites," with matching protein sequence, are arrived at.

[N.B. The 'ev' program is sort of rigged in several ways to produce a successful 'protein binding site.' One of the ways you might say that it is "rigged" is that after each 'replication' (program run-through), the 'ev' program selects the top 64 sequences for duplication. So, it sweeps away one-half of the produced sequences based on how high the 'score' is for each of these sequences. Well, how is this score arrived at? It's done by seeing how well the sequences match up based on a kind of grading system. This grading system amounts to a fitness function, and is a way of bringing in information that would not otherwise be available to a truly random process. The net effect of this one-half elimination is that within two more 'runs' of the program the highest-scoring sequence is found in all 128 available sequences. You have the option of turning off this replacement process. When you do, the program runs on ceaselessly: i.e., the 'protein binding site' is NEVER arrived at. IOW, to consider this a truly random operation is to give it a very generous interpretation. But we go on . . . ]

Given these circumstances, to vastly oversimplify the situation, one can simply say that the 'ev' program ultimately provides a "specified" nucleotide sequence that is 128 positions long. Taking this 128-long nucleotide sequence as being equivalent to an actual 128-long sequence in protein-coding DNA, we then know that the probability associated with each nucleotide is 1 in 4.

[Now, let's understand that with what I've written above, the "actual" odds against success are much lower, almost 1 in 1, since what Schneider has done is essentially to write a set of mathematical equations for which he is trying to find a solution. Given the value matrix he applies in calculating the 'value' of each sequence (i.e., its nearness to a given calculated value for the computer-produced 'binding site'), the actual odds against finding the right 'solution' are not that high, and crunching numbers will get you there. But, for the sake of simplification, we're overlooking these real flaws in the design of the program (flaws in the sense of not really tracking with what NS does in nature).]

Hence, the "rejection region" -- that I long spoke of -- can be calculated. It's quite simple; and it reflects the basic answer I originally gave to MathGrrl. The 'rejection region' is 1 in 4^128; or, equivalently, 1 in 2^256.
While a very small 'rejection region,' it is not what CSI requires, which is a 'rejection region' of 1 in 2^500. Hence, the output of the 'ev' program DOES NOT PRODUCE "CSI." But I said this all along, didn't I? This is why the interaction with MathGrrl was fruitless and foolish. If the man who presented himself as MathGrrl could understand these matters, what I said to him -- continually -- should have been straightforward. But it wasn't. Which was only a further tip-off that we were being had.

The only critic of ID, and of Dembski's NFL/CSI, who had a good understanding of a "specification" was Mark Perakh. However, even he made fundamental errors at times, errors which caused him to draw false conclusions about what ID, and Dembski in particular, had to say. With this said, the underpinnings of ID, as found in NFL, still, in my estimation, withstand scrutiny. There is no evidence that they have been overturned by any computer program produced so far, or by any other kind of counterexample. PaV
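A minimal check of the arithmetic in the comment above (my sketch; it simply restates PaV's figures in bits):

```python
import math

# PaV's rejection-region arithmetic for ev's 128-position nucleotide field.
positions = 128
bits_per_base = math.log2(4)          # 4 equiprobable bases = 2 bits each

total_bits = positions * bits_per_base
print(total_bits)                      # 256.0 -> 1 in 4^128 = 1 in 2^256

# The threshold PaV cites for CSI is 500 bits (1 in 2^500), so on these
# figures the ev output falls short of it.
print(total_bits >= 500)               # False
```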
So in Dr. Tom Schneider's Nucleic Acids Research paper on the evolution of biological information, it seems that through his research he was able to perform a frame shift, normally a highly destructive mutation, but in this instance beneficial and, as such, new information. This is completely opposite to the Law of Thermodynamics, which states that neither energy nor matter can be created or destroyed, and that information must be passed along from its host and not created from nothing or on its own. So in this experiment, they were able to create a gene with a new DNA information structure beyond what should've existed or been allowed. There have been some researchers who tried to debunk Dr. Schneider's findings, like Batten, Behe, Bracht, Dembski, Gitt, Meyer, Strachan, Joseph, Truman, and Williams, all of whom seem to have been debunked themselves. Does anyone here know if Dr. Tom Schneider's Nucleic Acids Research work on the evolution of biological information has been debunked, or is it valid research that points to evolution's ability to create new information on its own? Or is there genuine reference material suggesting this theory is flawed and that nature is not able to create gene material containing new information of its own accord? Please let me know. greghar
MathGrrl @ 342:
I’m going to address a number of the comments, but certainly not close to all. If you think I’ve overlooked an important point, please call my attention to it.
Is it "an important point" that claims by critics of ID that evolution can generate CSI undermine your entire case? Yes, I think you've overlooked that point. I also think it's an important point. I raised it way back in post #248. You know, back when I was trying to figure out whether you were even worth my time? Mung
We are now over the 400 comment mark and I haven’t seen any reason to change the provisional conclusions that I reached in my post...
Amazing. I had no clue that you'd already reached any conclusions, especially as early as in the OP. Either we're all idiots, or... I came across another thread on a different forum accusing you of being a fraud. If you had bothered to respond to my own request in this thread that you establish your bona fides... But you didn't. So I am left with: Could you please point me to any post in this thread in which you display competence in any of the following: 1. computational biology 2. evolutionary algorithms 3. genetic programming You can't. You don't. Mung
I know that specific DNA sequences cause certain amino acids to appear. I asked what those DNA sequences symbolise. Do those DNA sequences symbolise the corresponding amino acids? After all causes are not typically regarded to be symbols of their effects. High pressure causes clear skies – but it doesn’t symbolise clear skies.
Mark, you are quite correct that (on its face) high atmospheric pressure does not symbolize clear skies. To what entity would this symbol have any meaning? If one were to actually say that high pressure symbolizes forthcoming clear skies, it would only do so for a living observer who happens to pay attention and creates a mapping of the discrete symbol (high pressure) to the discrete object (clear skies). But even then, that mapping (the rule that high pressure equates to clear skies) would only exist in the mind of the observer.

This has nothing whatsoever to do with the translation of a recorded code within the cell. Firstly, in cellular translation there is no third-party observer assigning meaning after the fact. In place of an observer, the mapping of the symbol has been physically instantiated within the system of translation. This is exactly what takes place within computer programming, as well as in the entire history of machine-code-operated machinery. A system has been organized such that the presence of an input is purposefully mapped to an output. The input and output are linked by the context of the organization; meaning that without that context, one would be literally meaningless to the other.

As an example, Adenine has no particular physical relationship to Asparagine. However, if you present Adenine to the cellular translation machinery in the correct order (one that matches the physically instantiated mapping of Adenine to Asparagine), then Asparagine will be added during protein synthesis. Needless to say, this is significantly different from an observer witnessing a cause-and-effect relationship in weather patterns, and then assigning order to what he/she observes. In one instance there is a direct step-by-step chain of physical causes leading to the effect. In the other, that chain comes to a certain point and then it stops. The point where it stops is at the presence of the symbol.

“The existence of a genome and the genetic code divides the living organisms from nonliving matter. There is nothing in the physico-chemical world that remotely resembles reactions being determined by a sequence and codes between sequences.” Hubert P. Yockey: Information Theory, Evolution, and the Origin of Life Upright BiPed
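The instantiated mapping described above can be made concrete with a lookup-table sketch (mine, not the commenter's; the few codon assignments shown are standard ones, e.g. AAU for asparagine):

```python
# Sketch: translation as a physically instantiated symbol-to-object mapping.
# A few standard mRNA codon assignments (RNA uses U where DNA has T).
CODON_TABLE = {
    "AUG": "Met",   # methionine (start)
    "AAU": "Asn",   # asparagine -- the adenine example in the comment
    "AAC": "Asn",
    "GCU": "Ala",   # alanine
    "UAA": "STOP",
}

def translate(mrna):
    """Read codons three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

# Presenting adenine in the right order yields asparagine, as described:
print(translate("AUGAAUAACUAA"))   # ['Met', 'Asn', 'Asn']
```

Nothing in the chemistry of a triplet dictates which residue it maps to here; the assignment lives in the machinery (in this sketch the dictionary, in the cell the aminoacyl-tRNA synthetases), which is the commenter's point about context.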
KF 431 said:
> For the X-metric, the approach is simple: 125 bytes has sufficient possible configs that the observed cosmos as a whole across its thermodynamic lifespan would not be able to sample as much as 1 in 10^150 of the configs. That is, the search is so small a fraction that it rounds down to no effective or credible search. This is patent.

The fewer states/"possible configs" a search has to visit to achieve a goal (whether specified as a unique target or as satisfying a function without a target), the more efficient it is. A binary search, for example, is far more efficient than a brute-force search and visits only a fraction of the total possible states. It does this by making use of properties of the space (e.g., the space is sorted). The number of states visited is much smaller, but it is not zero. The smaller number of states visited is not, by itself, an indication that the search is not credible. That is nonsensical.

However, I think your intended point is that the space being searched doesn't have the necessary properties to allow it to be searched in so brief a time, e.g. a binary search being done on an unsorted space with no order that can be exploited by the search. A binary search would not give valid results on such a space. So the real question is: does a biological space have order or redundancy that can be exploited? And how exactly does evolution (or GA models) search such spaces? Do they need to visit a sizable number of states to be effective? Or is that a red herring from misunderstanding both biology and search algorithms?

So what do we notice about biological spaces? For one thing, only some changes matter; e.g. most mutations are neutral in functional effect. And even when there is a change in a protein, if the change doesn't affect a binding site it can also be neutral in effect. In other words, a working individual is not a single point in the worst-case maximum of "possible configs", but rather spans a volume of points at least as large as every permutation of neutral changes that can be made to that individual. And that isn't even touching on the much larger volume of changes around an individual which actually have some minimal functional phenotypic effect -- even useful ones, such as the wide variety of dog breeds. In other words, the actual search space that must be visited to reach a working individual is greatly reduced by the properties of the biological search space. If the proper algorithm is used to exploit this, you can get away with searching a fraction of it.

A brute-force search would still be ineffective even given this kind of search space. Fortunately evolution and GAs are not brute-force algorithms. Like a binary search, they make use of the properties of the search space. Consider how useful mutations get fixed in populations. What is the effect of fixation on the search space? It means that whole swaths of the search space are no longer visited to any significance, and do not need to be. The size of the search space has just been dynamically reduced as a direct result. That reduction happens for every fixation, and those reductions in the "possible configs" multiply.

The reverse also happens. When a genome duplicates in full, the search space dramatically expands. It is as if our 128x128 smiley face has been resampled as a 256x256 smiley (initially blurry). The space now has more possible states, some of which will get "fixed" and reduce the search space again once some of these more precise "pixels" are visited and tested by selection.
But how did we get to a 128x128 smiley by an evolutionary search? By starting with a 2x2, a 4x4, a 16x16 smiley, with previous fixations at every stage eliminating whole swaths of the search space even as the search space expands. This kind of search only has to visit a tiny fraction of the possible states. The number of total "possible configs" is a red herring. What matters are the properties of the search space and how an algorithm exploits those properties. In the right combination, you get highly efficient results without having to search the entire space. Bravo to MathGrrl and others for working on actual tests of the behavior of such algorithms. smgr78
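The binary-search point above in code (a minimal sketch): a search that exploits structure in the space certifies its result after visiting only a logarithmic sliver of the possible states.

```python
# Sketch of the analogy: exploiting structure (sortedness) lets a search
# visit only ~log2(n) of n possible states and still be fully credible.

def binary_search(sorted_xs, target):
    lo, hi, visited = 0, len(sorted_xs) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        visited += 1
        if sorted_xs[mid] == target:
            return mid, visited
        if sorted_xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, visited

space = list(range(1_000_000))               # a million "possible configs"
index, visited = binary_search(space, 765_432)
print(index, visited)                         # target found after ~20 probes
```

Twenty-odd probes out of a million states is a vanishing fraction of the space, yet the result is fully credible; the credibility comes from the structure the search exploits, not from the fraction of states sampled.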
Here's your example: Casey Luskin has rather strongly implied that random strings of letters don't have CSI, whereas sentences do. So here's a very simple test for CSI. I here present two strings in a simple letter-to-number substitution code. One of them spells a coherent sentence composed by myself, and one is a string of randomly generated letters. Run the CSI calculations, SHOW ME the CSI calculations, and based on that determine which is which: 1,4,5,12,14,3,6,2,4,6,26,26,17,19,14,12,28,20,6,9,2,7,17,13,7,25,11,17,1,22,17,30,7,11,10,11,18,22,20,6,16,5,2,10,2,27,18,12,1,20,28 11,6,10,18,9,12,27,18,11,9,14,6,12,27,7,6,23,9,19,27,16,1,14,1,17,11,15,10,19,15,14,6,17,1,8,2,9,17,1,15,19,9,14,21,15,17,13,1,23,9,30 Remember: it is very important that this is done on the basis of ACTUAL CSI CALCULATIONS and not 'cheating' of some sort. Thus, the number-substitution code. We're testing your claim that you can detect design through calculating complex specified information, not your ability to paste text into Google translate, run expectation algorithms to determine real-language consonant and vowel usage, look for repeating patterns, or even laboriously transliterate everything into the Latin alphabet and read it aloud to see if it sounds like language. extremities
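For anyone taking up the challenge, here is a neutral starting point (my sketch): parse the two strings and compute a first-order symbol-frequency entropy. This is emphatically not a CSI calculation, just the easiest number to attach to each string.

```python
# Starting point for the challenge above: a first-order Shannon entropy
# per symbol for each string. This is NOT a CSI calculation.
from collections import Counter
from math import log2

s1 = ("1,4,5,12,14,3,6,2,4,6,26,26,17,19,14,12,28,20,6,9,2,7,17,13,7,25,11,"
      "17,1,22,17,30,7,11,10,11,18,22,20,6,16,5,2,10,2,27,18,12,1,20,28")
s2 = ("11,6,10,18,9,12,27,18,11,9,14,6,12,27,7,6,23,9,19,27,16,1,14,1,17,11,"
      "15,10,19,15,14,6,17,1,8,2,9,17,1,15,19,9,14,21,15,17,13,1,23,9,30")

def entropy_per_symbol(csv_string):
    symbols = csv_string.split(",")
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

print(entropy_per_symbol(s1), entropy_per_symbol(s2))
```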
Kairosfocus,
Complex, functionally specific information is used in a great many contexts and is a characteristic sign of intelligence. Take the posts in this blog thread for a start, then go over to the software used in the ICT’s industry, and the whole Internet.
That information is used and is useful is not in question. As you say, take the posts in this blog, and calculate the CSI (you made some mention of this above, I recall). Now we can determine that these posts are not randomly generated - which is great - but we already knew that even before putting numbers on it. Not so useful. However, I wasn't actually attempting to create a useful example. Just an example of the simplest kind. It need not prove anything at all. Must sleep now. Maybe more later? Tomato Addict
TA: Please open your eyes. Complex, functionally specific information is used in a great many contexts and is a characteristic sign of intelligence. Take the posts in this blog thread for a start, then go over to the software used in the ICT industry, and the whole Internet. Orgel and Wicken were correct to highlight CSI and FSCI in the 1970s. The concept is meaningful at an intuitive, common-sense level, and poses a serious challenge to origin-of-life research and the origin of body plans. Quantifications and models, ranging from the simple X-metric to Durston et al's FITS metric published in 2007, or Dembski's CSI metric model, may be debated on specifics, and are subject to development as is so with all scientific work; but that does not change the basic fact that CSI and FSCI are real and plainly matter. They matter so much, in fact, that there is now an active push to deny and to dismiss.

The difference here is that in the design theory investigations, a threshold estimate is to be identified for how much will be sufficiently complex and specific that it is not credible to infer that lawlike natural forces and processes and/or chance contingency have by good luck given rise to such information. In the case of the X-metric, a simple, brute-force approach has been used. For the X-metric, the approach is simple: 125 bytes has sufficient possible configs that the observed cosmos as a whole, across its thermodynamic lifespan, would not be able to sample as much as 1 in 10^150 of the configs. That is, the search is so small a fraction that it rounds down to no effective or credible search. This is patent. In that context, given that we have an empirically, routinely known observed -- and only observed -- source of such FSCI, intelligence is a superior explanation.

The duplication objection raised above fails to address the problem of how the duplication arises -- notice the "simple" in the original post -- as I have already pointed out repeatedly above. Such duplication, however, implies more than "it arises by magic." That is, it implies a regulated, stage-by-stage process that duplicates. Such a process, in relevant examples, will normally involve more than 125 bytes worth of working information, on the simple grounds that 125 bytes is not enough space to do anything of significance with software: at, say, 6 letters per key word on average, we are talking of 20 or 21 words. Not a lot of working space. In the course of the genome, the process of replication of the chromosome is itself quite involved, and has sub-processes called up if errors are detected. It can fail, especially for things like cancer, but the failure is a demonstration of what happens as a rule when something that is functionally specific breaks down under the impact of a chance factor.

If we are dealing with a PC, an algorithm to duplicate a given string will similarly be quite functionally specific and complex, and will likely be irreducibly complex as well. It will require setting up start, sequence, decisions, looping, termination of loops and halting, with symbolic elements implemented according to definite rules and conventions more broadly. That means that a genetic or evolutionary algorithm based on duplicate, modify and improve is fatally dependent on intelligent programming.
Indeed, my consistent objection to the proposed genetic-algorithm models of evolution is that they start on a highly functionally specific and complex island of function, set up by intelligence, and operate on the strength of already being in the target zone. As for the X = S*C*B simple metric, that is intentionally very simple and based on familiar things such as data measured in bits; no resort to anything more than the usual judging semiotic agent, aka observer [who is involved in estimating how much solution is in a measuring cylinder or the length of a string as compared with a metre stick, etc.], is needed . . . So, pardon: it would help if you would actually address what is on the table, not a strawman:
(i) Is the item functionally or otherwise specific, so that it is isolated in the config space relative to the non-functional ones? (E.g., what is the effect of injected noise on function? Try out your PC application programs, or an image file, to see this.) 1/0 for the obvious alternatives.
(ii) Is it sufficiently complex, as measured by number of bits, to pass the 1,000-bit scope that leads to a config space of at least 10^301 possibilities? 1/0 again.
(iii) How many bits are actually explicitly used or implied to store the info involved in the function?
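These three tests reduce to a few lines of code (a sketch of the description above, not an official implementation; the S judgment remains the observer's call, exactly as stated):

```python
# Sketch of the X = S*C*B metric as just described.
# S and C are 1/0 judgments; B is the bit count from test (iii).

def x_metric(is_specific, bits, threshold=1000):
    S = 1 if is_specific else 0         # (i) functionally specific?
    C = 1 if bits >= threshold else 0   # (ii) past the 1,000-bit scope?
    B = bits                            # (iii) bits storing the function
    return S * C * B

# The worked example given next: 4469 seven-bit ASCII characters.
print(x_metric(True, 4469 * 7))   # 31283 functionally specific, complex bits
print(x_metric(True, 500))        # 0 -- below the complexity threshold
```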
When a function is specific [e.g. we write posts using conventions of English], complex [at least 143 ASCII characters] and involves a certain number of bits, e.g. the 4469 7-bit characters [= 31,283 bits] in the original post, we can deduce the value 31,283 functionally specific and complex bits at work. The explanatory filter would point to the best explanation for this being design, and in fact we have good independent reason to accept that this is true. In the case of more complex metrics, I have repeatedly pointed onlookers to -- and even excerpted on -- the case by Durston et al, where the Shannon H-metric, integrated with functionality as assessed for observed sequences for proteins in 35 families, was used to give a table of FSC in FITS, published in the peer-reviewed literature. I find it highly interesting that, consistently, this vital data point has been ignored by objectors, as if it does not exist in the peer-reviewed literature. (Notice, it is a more sophisticated form of the X-metric, targeting functional as opposed to random or orderly sequences of symbols.) Similarly, when I see commenters -- just scroll up -- looking at the genetic code and in 2011 trying to dismiss what it most patently is, a digital code, that is telling. Let me cite from Crick in his letter to his son Michael, on his discovery, March 19, 1953:
Now we believe that the DNA is a code. That is, the order of bases (the letters) makes one gene different from another gene (just as one page of print is different from another)
So, who should I believe: Crick, or the dismissive objector above? As a further example of what is going on, let's look at the CSI scanner thread at no. 8, where JemimaRacktouey artfully clips off only a part of the UD corrective 27, to suggest that the X-metric and the overall comment in that remark are simplistic and incorrect. Where did she clip off? Just before the more serious level of analysis was introduced, from Durston and Dembski, with links. In short, we are seeing selective hyperskepticism, strawman tactics, willful refusal to accept patent facts and reasonable findings [often tracing to the successful work of famous scientists such as Crick, Orgel and Wicken] and the like. Surely, we can do better. GEM of TKI kairosfocus
Joseph: I did read your example (thank you), along with most (not all) of the previous comments. I do not have time to chase back through all that again to find what I need, so I was asking for assistance. Fortunately, Kairosfocus has recently provided:
> X = S*C*B.
which I believe is the equation I was looking for. (This seems to be a likelihood ratio?) Back to my calculations: In my duplication example I doubled the string from "CG" to "CGCG", and S will be the increase in information, which is either zero, or something very small (there has been some disagreement). If zero, CSI will be undefined as I take the log, so let's say it is one bit (S=1) to carry through the example. This seems **wrong** somehow, and a little digging confirms: S, or perhaps S*C, ought to be a likelihood (apologies, it has been a while since I last worked this sort of problem by hand). I am out of time for today, and perhaps out of energy to pursue this as well. Maybe tomorrow. I still think there should be a trivially simple demonstration of the calculation, and the difficulty supports MathGrrl's contention that CSI is not rigorously defined. If CSI is a useful concept, then there ought to be examples of it being used as such in many fields. Likelihood ratio tests are certainly useful (I use them all the time), but there the definitions are clear. The lack of a clear definition for CSI appears to make it a not-useful concept. Tomato Addict
News Flash: there isn't anything any IDists can say or do to satisfy MathGrrl. I have provided a definition of CSI, one with mathematical rigor. She choked on it. I told her why CSI is a strong indicator of a designing agency; she choked on that too. Not only that, she continues her equivocation by using "evolutionary mechanisms", i.e. she blindly accepts that all evolutionary processes are blind watchmaker processes. So we have MathGrrl dismissing the efforts of IDists, but when it comes to gene duplications she just blindly accepts that they are blind watchmaker processes. And in the end all she had to do was read "No Free Lunch", but she choked on that too. A lot of choking, equivocating, and strawman erecting. That is what MathGrrl has provided. Oh well... Joseph
QuiteID, so what is your point? In non-technical usage, a "cipher" is the same thing as a "code"; however, the concepts are distinct in cryptography. http://en.wikipedia.org/wiki/Cipher Whereas a metaphor is a figure of speech that constructs an analogy between two things or ideas; the analogy is conveyed by the use of a metaphorical word in place of some other word. For example: "Her eyes were glistening jewels." http://en.wikipedia.org/wiki/Metaphor Yet clearly the DNA code is well beyond the metaphor category; in fact your own reference suggested the stricter usage of the term 'cipher' be used instead of "code". Thus, QuiteID, you and your references, which suggest that the use of the term "code" for DNA is merely metaphorical, are shown to be completely out of context towards making your point! i.e. DNA is 100% a code in meaning, intent and purpose! Perhaps you should use a metaphor for your metaphor so as to make this twisted logic work for you! :) bornagain77
Mrs. O'Leary, I would like to again thank you for giving me the opportunity to make this guest post and for your time in policing the comments. I would also like to thank Jonathan M. for raising the possibility with you. Warm regards, MathGrrl MathGrrl
Everyone, We are now over the 400 comment mark and I haven't seen any reason to change the provisional conclusions that I reached in my post now numbered 201, namely:
1) There is no agreed definition of CSI. I have asked from the original post onward for a rigorous mathematical definition of CSI and have yet to see one. Worse, the comments here show that a number of ID proponents have definitions that are not consistent with each other or with Dembski's published work.
2) There is no agreement on the usefulness of CSI. This may be related to the lack of an agreed definition, but several variants that are incompatible with Dembski's description, as well as alternative metrics, have been proposed in this thread alone.
3) There are no calculations of CSI that provide enough detail to allow it to be objectively calculated for other systems. The only example of a calculation for a biological system is Dembski's estimate for a bacterial flagellum, but no one has managed to apply the same technique to other systems.
4) There is no proof that CSI is a reliable indicator of intelligent agency. This is not surprising, given the lack of a rigorous mathematical definition and examples of how to calculate it, but it does mean that the claims of many ID proponents are unfounded.
Even after all of the effort expended by numerous participants, no one has directly addressed the five straightforward questions I asked, no one has provided a rigorous mathematical definition of CSI, and no one has provided detailed examples of how to objectively calculate it. I will continue to monitor this thread on the chance that someone chooses to address my original post, but I'm going to step back from addressing the majority of the comments that do not do so. Despite my disappointment and occasional frustration that I have not come away from this exercise with a sufficient understanding of CSI to be able to test the assertion that it cannot be generated by evolutionary mechanisms, I do believe that this has been a valuable discussion. It certainly provides a good reference for future threads here at UD. Thank you all for your participation. It's been interesting. MathGrrl
QuiteID The call of "metaphor" is nothing more than damage control. Whenever you take one type of input and create an output of a different type, there has to be a code to do so. Nucleotides in, amino acid chain out. That said, I agree that DNA is not some type of program. But for DNA to be of any use there has to be a code to change from nucleotides to proteins. I also agree that DNA is not a "blueprint"; all it does is carry out its instructions. Joseph
bornagain77, I can cite literature too, and not just from blogs and books. Consider "Genes and Causation" by Denis Noble (Phil. Trans. R. Soc. A 13 September 2008 vol. 366 no. 1878 3001-3015), available at http://rsta.royalsocietypublishing.org/content/366/1878/3001.long "The coding step in the case of the relationship between DNA and proteins is what leads us to regard the information as digital. This is what enables us to give a precise number to the base pairs (3 billion in the case of the human genome). Moreover, the CGAT code could be completely represented by binary code of the kind we use in computers. (Note that the code here is metaphorical in a biological context—no one has determined that this should be a code in the usual sense. For that reason, some people have suggested that the word ‘cipher’ would be better.)" And: "Another analogy that has come from comparison between biological systems and computers is the idea of the DNA code being a kind of program. This idea was originally introduced by Monod & Jacob (1961) and a whole panoply of metaphors has now grown up around their idea. We talk of gene networks, master genes and gene switches. These metaphors have also fuelled the idea of genetic (DNA) determinism. But there are no purely gene networks! Even the simplest example of such a network—that discovered to underlie circadian rhythm—is not a gene network, nor is there a gene for circadian rhythm. Or, if there is, then there are also proteins, lipids and other cellular machinery for circadian rhythm." And: "The metaphors that served us well during the molecular biological phase of recent decades have limited or even misleading impacts in the multilevel world of systems biology. New paradigms are needed if we are to succeed in unravelling multifactorial genetic causation at higher levels of physiological function and so to explain the phenomena that genetics was originally about." Also see Stephen Strauss, "Beyond the double helix: as genetics becomes ever more complex, we badly need a way of describing what DNA does. 'Blueprint' just won't cut it." New Scientist 201.2696 (2009): 22 Also see Sergi Cortiñas Rovira, "Metaphors of DNA: a review of the popularisation processes," Journal of Science Communication 7(1), March 2008. I could go on. QuiteID
QuiteID, The DNA code is not 'like a code', as you are trying to insinuate; the DNA code is 100% a code in every meaning, intent, and purpose; The DNA Code - Solid Scientific Proof Of Intelligent Design - Perry Marshall - video http://www.metacafe.com/watch/4060532/ Moreover there are multiple overlapping codes that are completely inexplicable to Darwinian mechanisms; "In the last ten years, at least 20 different natural information codes were discovered in life, each operating to arbitrary conventions (not determined by law or physicality). Examples include protein address codes [Ber08B], acetylation codes [Kni06], RNA codes [Fai07], metabolic codes [Bru07], cytoskeleton codes [Gim08], histone codes [Jen01], and alternative splicing codes [Bar10]. Donald E. Johnson – Programming of Life – pg.51 - 2010 Histone Inspectors: Codes and More Codes - Cornelius Hunter - March 2010 Excerpt: By now most people know about the DNA code. A DNA strand consists of a sequence of molecules, or letters, that encodes for proteins. Many people do not realize, however, that there are additional, more nuanced, codes associated with the DNA. http://darwins-god.blogspot.com/2010/03/histone-inspectors-codes-and-more-codes.html ------------ Ilya Prigogine (Nobel, Chemistry) once wrote, “let us have no illusions we are unable to grasp the extreme complexity of the simplest of organisms. The DNA of a bacterium contains an encyclopedic amount of pure digitally encoded information that directs the highly sophisticated molecular machinery within the cell membrane. DNA characters are copied with an accuracy that rivals anything that modern engineers can do. ------------------- bornagain77
kairosfocus, whatever Wikipedia says (and I don't know why you say it is "testifying against interest"), it's widely recognized among scientists that terms like "code," "instructions," "language," etc. are metaphors. That's not to say they're "false" or "wrong" but only that they're limited. We understand the world in metaphorical terms all the time (see Lakoff and Johnson). These metaphors in particular tend to lead us to look at biology in anthropomorphic terms. There's a fairly developed literature on the misunderstandings that develop when the code metaphor is taken too literally. QuiteID
F/N: Now that I have a moment to pause, let's take a look at MG's four posed challenges at the head of this thread. It will emerge in short order that the questions are misdirected, and that consistently, the processes used start within islands of function in much wider and predominantly non-functional configuration spaces, based on intelligent design. Inadvertently, they show how FSCI is routinely the product of design. I will comment on points:
_______________
>> A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
a --> Gene duplication, as shown above, is not simple, and implies a problem with a regulatory network.
b --> The existence and structure of that network expresses a complex, integrated functional organisation that points to design, as it will go well beyond the FSCI threshold [125 bytes worth of code].
c --> The duplication itself, in the context cited, does not create novel function; it would simply replicate existing information.
d --> The question is misdirected and falls under the fallacy of the complex question, also failing to distinguish mere information-carrying capacity [complexity] from functional, meaningful specific information.
Tom Schneider’s ev evolves genomes using only simplified forms of known, observed evolutionary mechanisms,
e --> Ev starts within an island of function, i.e. it begins within a target zone already.
f --> That functionality owes a lot to the intelligent direction of Schneider.
that meet the specification of “A nucleotide that binds to exactly N sites within the genome.” The length of the genome required to meet this specification can be quite long, depending on the value of N. (ev is particularly interesting because it is based directly on Schneider’s PhD work with real biological organisms.)
g --> The key question being addressed by Design theory is being begged, and it seems from the above that the quantum of increment of information being claimed is well below the relevant threshold for FSCI; i.e. if you show that chance plus trial and error can generate successful changes within the reach of the search resources of the cosmos, you have not addressed the real question: to generate functional information on the scale required to be relevant to the CSI filter.
h --> So, the same errors are at work. The design inference does not assert that no increments in information are possible on chance plus trial and error etc., but that such run into a limit, the search resources of the cosmos.
i --> This is brought out in the simple X-metric, X = S*C*B.
Tom Ray’s Tierra routinely results in digital organisms with a number of specifications.
j --> Again, this starts within an island of function, i.e. the key question is being begged.
One I find interesting is “Acts as a parasite on other digital organisms in the simulation.” The length of the shortest parasite is at least 22 bytes, but takes thousands of generations to evolve.
k --> 22 bytes [and the like], of course, is well within the FSCI limit of interest. But already, we see how hard it is to search a space.
The various Steiner Problem solutions
l --> Followed the link, and saw this:
In this post, I will present my research on a Genetic Algorithm I developed a few years ago, for the specific purpose of addressing the question Can Genetic Algorithms Succeed Without Precise “Targets”?
m --> This already begs the same question, as has been repeatedly pointed out for some weeks: the program itself is already on an intelligently arrived at island of function. That dominates all else.
n --> Now of course things have progressed since the days of Weasel, where non-functional symbol strings were directly rewarded on mere proximity to target; which was presented -- rhetorically quite successfully -- as a demonstration of how evolution works [never mind the weasel-word disclaimers in the fine print].
o --> Until there is an open admission, repudiation and correction of that bit of manipulation, I have no confidence in further, more sophisticated versions of the same basic trick.
p --> This, I do not find here; only a pretence that Philip Johnson and others did not have a legitimate point of protest.
from a programming challenge a few years ago have genomes that can easily be hundreds of bits.
q --> You are able to generate bit strings that, within an island of function, hill-climb to a target specified by a fitness function.
r --> Where did that fitness function come from? Intelligent design, and it already codes the relevant target.
The specification for these genomes is “Computes a close approximation to the shortest connected path between a set of points.”
s --> And so, you have shown that an intelligently designed algorithm can use controlled trial and error to hill-climb to a short-distance-between-points solution.
t --> Congratulations: you have shown that functionally specific, complex organisation and associated information come from intelligence and can do the tasks they were set up to do. >>
_______________
In short, MG, the problem is that the questions are complex and question-begging, so the set tasks become caught up in pointless circles of argument. The real challenge for the evolutionary materialistic paradigm is to EMPIRICALLY show that life can and does self-assemble from reasonable chemicals in a reasonable pre-life environment, then give rise to novel body plans, in so doing crossing the observed thresholds of complexity of order 100+ k bits and 10+ mn bits. So far, we are seeing the a priori Lewontinian presumption of evolutionary materialism, and attempted [often irrelevant] illustrations that are working in that circle of reasoning. Of course, there are no specific CSI calculations above; such would be pointless in a context of prior question-begging, complex and loaded question errors. GEM of TKI
PS: The follow-up thread posted by VJT here is significant and responsive to the themes in this thread. I thank him for taking the time and making the effort to do the detailed calculations and analysis on WD's CSI metric that I just don't have time to even look at attempting. I only add that there are several possible metrics of the CSI in various forms. kairosfocus
A follow-up on my previous "cryptic" post: The problem is to define "specified." It seems that computer programs, as well as proteins and other functional systems, are more or less specified. Hence, "specified" is a set. Proteins gain their function by interacting in a context with other biomolecules. Hence, the set of specified proteins is the range of amino acid combinations that carry out a specific function within a specific context. The method would be to vary the amino acid combinations and count the combinations that carry out the questioned function (hard to test, but it could be modelled by methods given by Axe and Durston et al., e.g. “Measuring the functional sequence complexity of proteins”). Calculating CSI is possible when you have a finite set of configurable switches, but the approach must be different from system to system. Albert Voie
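The counting method proposed above corresponds to what Hazen and Szostak call functional information, I = -log2 of the functional fraction; a minimal sketch with made-up counts:

```python
from math import log2

# Sketch of the counting approach in the comment: vary the combinations,
# count the functional ones, and take -log2 of the functional fraction.
# The length and the functional count below are invented for illustration.

alphabet = 20                    # amino acids
length = 10                      # a short toy sequence
total = alphabet ** length       # all possible combinations

functional = 10_000              # hypothetical count performing the function
print(-log2(functional / total)) # ~29.9 bits of functional information
```

Durston et al.'s FITS measure can be read as a refinement of this idea, estimating the functional fraction from observed protein-family alignments rather than exhaustive testing.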
Tomato Addict, I provided MathGrrl with a simple definition and a simple example. Nothing will ever be good enough for her. Whatever any IDist says, she will just come back with a BS response. And the main problem is she didn't even read the book that describes and defines CSI. IOW she is a pathetic waste of time and it is sad to see all the time IDists have wasted on her.
1- She refuses to back down from her strawman.
2- She refuses to engage by providing the requested information.
3- She refuses to read "No Free Lunch".
4- She is either purposely obtuse or just on a mission to see how long a thread she can get and how much confusion she can generate. Joseph
MathGrrl:
In any case, my focus on this thread is to get a rigorous mathematical definition of CSI and some examples of how to calculate it.
Explain why it has to be "a rigorous mathematical definition of CSI" and then explain why my definition doesn't fit. As for equivocation, that is all you do with your use of "evolutionary mechanisms". IOW stop telling others that they are equivocating when that is what you have been doing. Joseph
Onlookers (and MF): Re MF:
I know that specific DNA sequences cause certain amino acids to appear. I asked what those DNA sequences symbolise . . . After all causes are not typically regarded to be symbols of their effects. High pressure causes clear skies – but it doesn’t symbolise clear skies.
This all too tellingly captures the degree to which evolutionary materialism is driven to incoherence in the face of the well-established fact that DNA bases in the chromosome SYMBOLISE, using a DIGITAL CODE -- Wikipedia testifying against interest -- the amino acid string to be coded for once the regulatory network triggers that transcription and protein synthesis process. Let's cite Wiki:
The genetic code is the set of rules by which information encoded in genetic material (DNA or mRNA sequences) is translated into proteins (amino acid sequences) by living cells. The code defines a mapping between tri-nucleotide sequences, called codons, and amino acids . . . . Not all genetic information is stored using the genetic code. All organisms' DNA contains regulatory sequences, intergenic segments, and chromosomal structural areas that can contribute greatly to phenotype. Those elements operate under sets of rules that are distinct from the codon-to-amino acid paradigm underlying the genetic code . . . . Each protein-coding gene is transcribed into a template molecule of the related polymer RNA, known as messenger RNA or mRNA. This, in turn, is translated on the ribosome into an amino acid chain or polypeptide.[8]:Chp 12 The process of translation requires transfer RNAs specific for individual amino acids with the amino acids covalently attached to them, guanosine triphosphate as an energy source, and a number of translation factors. tRNAs have anticodons complementary to the codons in mRNA and can be "charged" covalently with amino acids at their 3' terminal CCA ends. Individual tRNAs are charged with specific amino acids by enzymes known as aminoacyl tRNA synthetases, which have high specificity for both their cognate amino acids and tRNAs. The high specificity of these enzymes is a major reason why the fidelity of protein translation is maintained.[8]:464–469 There are 4³ = 64 different codon combinations possible with a triplet codon of three nucleotides; all 64 codons are assigned for either amino acids or stop signals during translation.
Do the computer instructions on your PC cause the computer to execute steps specified by a given program? Of course they do, as a component of the cause of the computer's processing and output, based precisely on their digital, symbolic nature. High pressure, by sharpest contrast, is a causal factor for clear skies, but that is not by a step-by-step symbolic algorithmic process, such as is happening in the cell when DNA is transcribed to RNA, which is sent out to the ribosomes and then used to string a protein chain, step by step, as is discussed here, with video. Amazing . . . and not a little sad. And of course if one struggles to see that the genetic code is exactly that, one will then have great difficulties in addressing how it reflects functionally specific, complex information, or how such FSCI could be an empirically reliable sign pointing to intelligent cause. GEM of TKI kairosfocus
#408 UB I know that specific DNA sequences cause certain amino acids to appear. I asked what those DNA sequences symbolise. Do those DNA sequences symbolise the corresponding amino acids? After all causes are not typically regarded to be symbols of their effects. High pressure causes clear skies - but it doesn't symbolise clear skies. markf
tgpeeler @ 403: "You can restart it by communicating something without using a language. Good luck with that." There are a number of dogs, bees, and ants that would beg to differ. paragwinn
Others have stated that CSI calculations are "hard work." Why is CSI so hard to calculate? Can it not be simplified to the point where a demonstration is clear? Even a trivial example ought to be sufficient for this purpose. MathGrrl's scenarios do not appear to require that a calculation be complicated, just that there should be a calculation. I often find that simplifying a problem down to the absolute basics is an excellent way to gain understanding. If CSI is a useful concept, then its meaning ought to be even more clear in a trivial example. Now if you will pardon my naivety, I'm going to give this a whack myself with MG's first scenario: Suppose the gene is simply "CG" and it duplicates to "CGCG" (I did say trivial!). If I understand correctly there needs to be some statement of the probability of this happening by chance, so how about 10%, or some arbitrary constant probability C if you prefer. So what comes next? Walk me through this. Tomato Addict
Mark at 381, Hello Mark. If someone asks me whether I think the arrangement of nucleic acids in DNA is mapped to amino acids during protein synthesis, I probably won't take the question all that seriously, particularly on a thread where I've already made several comments to that effect. It seems from your own comments that you have evidence of some naturally occurring symbols, representations, abstractions, etc., you'd like to share. So again, if you have a case to make, then please, by all means, make it. I will check back in as soon as I can. Cheers… Upright BiPed
It's radical that life is life? I'm sure I'm missing something here. There are various kinds of life, no? I for example, the last time I checked, am moderately different from a mushroom. Although at work I feel like one sometimes. :-) If your statement is true (I don't think it is) then all strings of symbols in English symbolize the same thing and are equivalent to every other one. Not true. tgpeeler
tgpeeler,
mg @ 392 “There are a number of information theorists who would beg to differ.” I’m sure there are. They’d be wrong, too. I will say again, boldly, if I may, that it is IMPOSSIBLE to account for the phenomenon of information in terms of the laws of physics. Why do I say this? Because symbols, rules (language), rationality (logic), free will (freely assembling symbols according to the aforementioned language specific rules and laws of rational thought), and intentionality (for a reason, to communicate a message) are ALL necessary for human information/communication. Materialism or naturalism or physicalism, whatever ilk your particular version is, all fail to account for human language and information....
There's your equivocation. You start by creating an analogy between some aspects of biological systems and the concept of a language, and now you're talking about "human language and information". You seem to be conflating your use of symbols with some kind of Platonic symbol inherent in what you are modeling. In any case, my focus on this thread is to get a rigorous mathematical definition of CSI and some examples of how to calculate it. Can you provide either or both of those? MathGrrl
tgpeeler,
mg @ 390 “You’re confusing the map with the territory. Just because it is possible to model some aspects of biological systems as “language” doesn’t mean that you can logically conclude an intelligence behind that language. That’s the fallacy of equivocation.” Hardly. Are you now saying that there is no such thing as REAL biological information? Is that what I’m hearing?
I said nothing about "biological information". I simply made the point that modeling some aspects of biological systems as a "language" doesn't mean you can then equivocate on the concepts underlying the term in order to define your intelligent agent into existence. MathGrrl
#401 tgpeeler It’s a short answer. Maybe that’s why you missed it last time. The base pairs symbolize LIFE. They all symbolise the same thing! Then every DNA string symbolises the same thing and is equivalent to every other one. This is indeed a radical theory. markf
mg @ 392 "There are a number of information theorists who would beg to differ." I'm sure there are. They'd be wrong, too. I will say again, boldly, if I may, that it is IMPOSSIBLE to account for the phenomenon of information in terms of the laws of physics. Why do I say this? Because symbols, rules (language), rationality (logic), free will (freely assembling symbols according to the aforementioned language-specific rules and laws of rational thought), and intentionality (for a reason, to communicate a message) are ALL necessary for human information/communication.

Materialism or naturalism or physicalism, whatever ilk your particular version is, all fail to account for human language and information because the only "tool" they have to explain anything and everything is the laws of physics (embarrassingly, they happen to be immaterial, but I'll leave that alone for now). Therefore, as metaphysical projects, all of these "isms" utterly fail. The "assumption" that the natural or material or physical is all there is is clearly and obviously false. Whatever incarnation of the naturalistic story of life is currently being discussed is just false. They are all false. It is impossible for any naturalistic account of life to be true. Anybody who can string a couple of thoughts together should be able to track this with no problem. I know this includes you. So if I'm wrong, tell me how I'm wrong. Then I'll change my mind. But until you bring an actual argument to rebut the argument I'm making, I'm afraid I will remain unmoved in my opinion that trying to explain information of any kind without symbols and rules is sheer lunacy.

If you would STOP and THINK about this for a moment before dashing off a dismissal of one kind or another, you would see that I am correct about this. Analyze your own posts. Do they not obey (generally) the laws of reason? Yes. You make use of the law of identity. Do you not freely use English symbols, arranged according to arbitrary convention, to purposefully communicate a message? Yes, you do. I get it that this is a bold claim. Perhaps even grandiose. But that doesn't make it any less true. The materialist project is defeated. That game is over. You can restart it by communicating something without using a language. Good luck with that. tgpeeler
mg @ 390 "You’re confusing the map with the territory. Just because it is possible to model some aspects of biological systems as “language” doesn’t mean that you can logically conclude an intelligence behind that language. That’s the fallacy of equivocation." Hardly. Are you now saying that there is no such thing as REAL biological information? Is that what I'm hearing? tgpeeler
mf @ 381 "So I then asked what do the symbols in DNA (presumably the base pairs) symbolise? Actually I have asked this three times now." It's a short answer. Maybe that's why you missed it last time. The base pairs symbolize LIFE. tgpeeler
As for the EF, answer the question: do you think scientists flip a coin or throw darts? Or do you think they have a methodology? JR:
Perhaps they use the same methodology as is used in the calculation of CSI?
That doesn't make any sense. Is nonsense the best you have to offer? JR:
I.e., they don't.
They don't what? They don't have a methodology? They don't need to eliminate necessity and chance before reaching a design inference? Can you do anything more than babble incoherently? Joseph
Muramasa: Oh, and Joseph, can you tell me who said: "I've pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection." I know who said it. The SAME person who said:
In an off-hand comment in a thread on this blog I remarked that I was dispensing with the Explanatory Filter in favor of just going with straight-up specified complexity. On further reflection, I think the Explanatory Filter ranks among the most brilliant inventions of all time (right up there with sliced bread). I’m herewith reinstating it — it will appear, without reservation or hesitation, in all my future work on design detection. (see here)
Yeah baby... Joseph
MathGrrl:
Gene duplication involves far more than the 250 base pairs required to meet your 500 bit limit.
Except gene duplications don't have any place at the OoL table. And CSI relates to origins. Not only that, there isn't any mathematically rigorous definition of gene duplication that demonstrates it is a blind watchmaker process. As for a definition of CSI: Information is taken care of by Shannon- mathematical rigor and all. Specified information is Shannon information with meaning/function. Complex Specified Information is Specified Information of 500 bits or more. As for GAs creating CSI- GAs create what they are designed to create. All the SI is there for them as a resource. Show me a GA that arose without a designing agency and you will have something. But then again MathGrrl refuses to read "No Free Lunch" so this is all a waste of bandwidth. Although I am sure even if she read it she would still have these hollow criticisms. Joseph
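(A concrete reading of Joseph's definition above, as a minimal sketch: it assumes "Shannon information" means raw storage capacity at two bits per DNA base, and it treats the judgment of meaning/function as an external input, since that is the contested step. The function names and the threshold parameter are illustrative, not anyone's published code.)

```python
import math

def shannon_bits(sequence, alphabet_size=4):
    # Storage capacity assuming independent, equiprobable symbols:
    # two bits per DNA base.
    return len(sequence) * math.log2(alphabet_size)

def is_csi(sequence, is_functional, threshold_bits=500):
    # "Specified Information of 500 bits or more": Shannon bits plus
    # an externally supplied judgment of meaning/function.
    return is_functional and shannon_bits(sequence) >= threshold_bits

# A 300-base sequence carries 600 bits on this count, so it qualifies
# if, and only if, it is judged functional.
print(is_csi("ACGT" * 75, is_functional=True))   # True
print(is_csi("ACGT" * 75, is_functional=False))  # False
```

Note that the entire burden of "specification" falls on the is_functional flag, which is exactly where the disagreement in this thread lies.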
Why CSI is difficult to calculate, even though it is real: An elegant program is the shortest program that produces a given output. In other words, a program P is elegant if no program shorter than P produces the same output as P. NOTE: "Program" here means code plus input data. Every elegant program produces a single, unique (but possibly infinite) output. Some obvious but important consequences of this definition are 1) every output from an elegant program is computable and 2) there are an infinite number of elegant programs, one for each possible computable output. Theorem (Chaitin): It is not possible in general to determine whether or not a given program is elegant. Albert Voie
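(Voie's theorem can be made concrete by writing down the obvious brute-force test for elegance and seeing where it breaks. This is a deliberately incomplete sketch: run and all_programs_shorter_than are hypothetical stand-ins for an interpreter and a program enumerator.)

```python
def is_elegant(program, run, all_programs_shorter_than):
    # Brute-force test of Voie's definition: check every shorter
    # program for the same output.
    target = run(program)  # assume the given program halts
    for candidate in all_programs_shorter_than(len(program)):
        # run(candidate) may never return: some shorter candidates
        # loop forever, and by Turing's halting theorem no general
        # procedure can weed them out in advance. That is the
        # substance of Chaitin's result quoted above.
        if run(candidate) == target:
            return False  # a shorter program matches; not elegant
    return True
```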
Tomato Addict,
MathGrrl writes at #358:
My gut instinct is that, if CSI can be rigorously defined and objectively calculated, GAs will prove to be capable of generating it. I’d very much like to test that instinct (and ID claims at the same time).
This would seem to be a reasonable goal for all interested parties. If this is testable, it should be tested!
Thank you! I'm glad that others find my request reasonable. MathGrrl
PaV, Your football field analogy does not reflect this situation in the slightest.
When, by definition, a bit sequence has to be at least 500 bits long to rank as CSI, and someone wants you to rigorously define CSI for a bit-string (sequence) that is 260 bits long, what would you make of it?
Gene duplication involves far more than the 250 base pairs (500 bits, at two bits per base) required to meet your 500 bit limit. I have already provided a link to Schneider using ev in excess of that limit. It is also easy to come up with a Steiner problem that requires more than a 500 bit genome to solve. Please provide a mathematically rigorous definition of CSI and apply it to those scenarios so that others can perform similar calculations. MathGrrl
CJYman, Nice to see you in this thread!
I do understand that MathGrrl would like to have CSI calculated for the scenarios that she has provided, and in principle that's great, but as PaV has noted above, if she can't figure out the examples already given her, then there is no reason to suspect that if we take the time to go through her provided scenarios she would understand the calculations any better.
I hardly think that's fair, based on our interaction in the last thread. There are a number of questions around your calculation of CSI for titin that remain unanswered. It also appears that your definition of CSI isn't the same as that discussed by Dembski.
It would be better if she understood first what we’ve already tried to explain and calculate for her and then she can work on her own examples herself and ask us what we think about her calculations.
If someone would be kind enough to define CSI with mathematical rigor and show me, in detail, how to calculate it for the four scenarios I described, I will be more than happy to implement the metric in software and share the results I get from other scenarios I have in mind. Are you or are you not willing and able to do this? MathGrrl
PaV,
My claim is this: Dembski’s book, NFL, contains a rigorous, mathematical description/definition of CSI.
The amount of confusion regarding the definition of CSI, even among ID proponents on this very thread, demonstrates that Dembski's discussion is insufficiently rigorous to allow the objective, unambiguous calculation of CSI.
MathGrrl doesn’t want the definition contained there, she wants a worked-out example of CSI. Very different things.
If you have a mathematically rigorous definition of a metric that is claimed to be calculable, providing example calculations should not be a problem. The rest of your long post contains neither a rigorous definition of CSI nor example calculations for the scenarios I described. After repeatedly asking these straightforward questions, I believe I am justified in provisionally concluding that you are unable to answer them. MathGrrl
tgpeeler,
I am saying, per my previous post, and interminable posts prior to this on other threads, that it is impossible, IN PRINCIPLE, i.e. it is logically impossible, to explain information in terms of algorithms....
There are a number of information theorists who would beg to differ. MathGrrl
kairosfocus, Thank you for your copious responses. I appreciate the time you have taken to participate in this thread. My focus here, though, is to get quantitative answers to the questions I posed in my original post. I'm sure that we will have the opportunity to discuss your thoughts in other threads and I will, of course, be happy to re-engage with you on this one if you choose to provide the definition and calculations I've requested. PS: Durston's metric is not the same as Dembski's. It might be interesting to discuss, but it isn't applicable to my questions. MathGrrl
tgpeeler,
The reason that we “are very concerned” that mg has not addressed the presence of symbols is that the presence of symbols destroys the materialist project. In other words, it’s not even possible for you to be right.
You're confusing the map with the territory. Just because it is possible to model some aspects of biological systems as "language" doesn't mean that you can logically conclude an intelligence behind that language. That's the fallacy of equivocation. MathGrrl
F/N 3: Excerpting the Durston et al paper: ____________ >> It is known that the variability of data can be measured using Shannon uncertainty [16]. However, Shannon's original formulation when applied to biological sequences does not express variations related to biological functionality such as metabolic utility. Shannon uncertainty, however, can be extended to measure the joint variable (X, F), where X represents the variability of data, and F functionality. This explicitly incorporates empirical knowledge of metabolic function into the measure that is usually important for evaluating sequence complexity. This measure of both the observed data and a conceptual variable of function jointly can be called Functional Uncertainty (Hf) [17], and is defined by the equation: H(Xf(t)) = -∑P(Xf(t)) logP(Xf(t)) (1) where Xf denotes the conditional variable of the given sequence data (X) on the described biological function f which is an outcome of the variable (F). For example, a set of 2,442 aligned sequences of proteins belonging to the ubiquitin protein family (used in the experiment later) can be assumed to satisfy the same specified function f, where f might represent the known 3-D structure of the ubiquitin protein family, or some other function common to ubiquitin. The entire set of aligned sequences that satisfies that function, therefore, constitutes the outcomes of Xf. Here, functionality relates to the whole protein family which can be inputted from a database. The advantage of using H(Xf(t)) is that changes in the functionality characteristics can be incorporated and analyzed. Furthermore, the data can be a single monomer, or a biosequence, or an entire set of aligned sequences all having the same common function . . . . The change in functional uncertainty (denoted as ΔHf) between two states can be defined as ΔH(Xg(ti), Xf(tj)) = H(Xg(tj)) - H(Xf(ti)) (2) where Xf(ti) and Xg(tj) can be applied to the same sequence at two different times or to two different sequences at the same time. ΔHf can then quantify the change in functional uncertainty between two biopolymeric states with regard to biological functionality. Unrelated biomolecules with the same function or the same sequence evolving a new or additional function through genetic drift can be compared and analyzed. A measure of ΔHf can increase, decrease, or remain unchanged . . . . The ground state g (an outcome of F) of a system is the state of presumed highest uncertainty (not necessarily equally probable) permitted by the constraints of the physical system, when no specified biological function is required or present. Certain physical systems may constrain the number of options in the ground state so that not all possible sequences are equally probable [27] . . . . The null state, a possible outcome of F denoted as ø, is defined here as a special case of the ground state of highest uncertainty when the physical system imposes no constraints at all, resulting in the equi-probability of all possible sequences or options. Such sequencing has been called "dynamically inert, dynamically decoupled, or dynamically incoherent" [28,29]. For example, the ground state of a 300 amino acid protein family can be represented by a completely random 300 amino acid sequence . . . .
The change in functional uncertainty from the null state is, therefore, ΔH(Xø(ti), Xf(tj)) = log (W) - H(Xf(ti)). (5) Physical constraints increase order and change the ground state away from the null state, restricting freedom of selection and reducing functional sequencing possibilities, as mentioned earlier. The genetic code, for example, makes the synthesis and use of certain amino acids more probable than others, which could influence the ground state for proteins. However, for proteins, the data indicates that, although amino acids may naturally form a nonrandom sequence when polymerized in a dilute solution of amino acids [30], actual dipeptide frequencies and single nucleotide frequencies in proteins are closer to random than ordered [31]. For this reason, the ground state for biosequences can be approximated by the null state. The value for the measured FSC of protein motifs can be calculated by relating the joint (X, F) pattern to a stochastic ensemble, the null state in the case of biopolymers that includes any random string from the sequence space . . . . The measure of Functional Sequence Complexity, denoted as ζ, is defined as the change in functional uncertainty from the ground state H(Xg(ti)) to the functional state H(Xf(ti)), or ζ = ΔH(Xg(ti), Xf(tj)). (6) The resulting unit of measure is defined on the joint data and functionality variable, which we call Fits (or Functional bits). The unit Fit thus defined is related to the intuitive concept of functional information, including genetic instruction and, thus, provides an important distinction between functional information and Shannon information [6,32]. Eqn. (6) describes a measure to calculate the functional information of the whole molecule, that is, with respect to the functionality of the protein considered. The functionality of the protein can be known and is consistent with the whole protein family, given as inputs from the database. >> ______________ Durston et al go on in much more detail, and give a table of values of FSC for 35 protein families [and of course FSC is functionally specified complexity of sequence]. Given here: http://www.tbiomed.com/content/4/1/47/table/T1 All, in the peer reviewed literature:
Measuring the functional sequence complexity of proteins. Kirk K. Durston, David K.Y. Chiu, David L. Abel and Jack T. Trevors. Theoretical Biology and Medical Modelling 2007, 4:47. doi:10.1186/1742-4682-4-47
But of course, the results that passed peer review and were published in a significant journal, years ago, and as repeatedly linked or mentioned in previous threads and as linked from the WACs, are meaningless, and are not sufficiently mathematically defined to be of significance. Etc., etc. NOT! kairosfocus
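(To show the shape of the calculation in equation (6) of the excerpt above, here is a minimal sketch of the null-state case, where ζ reduces to the sum over alignment sites of log2(W) minus the observed site entropy. The toy two-sequence alignment stands in for the curated sets the paper used, which ran to thousands of sequences per family.)

```python
import math

def fsc_fits(aligned_seqs, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    # Functional Sequence Complexity in Fits, per Durston et al.,
    # approximating the ground state by the null state: for each
    # aligned site, add log2(W) - H(site), W being the number of
    # possible residues (20 for proteins).
    W = len(alphabet)
    fits = 0.0
    for i in range(len(aligned_seqs[0])):
        column = [s[i] for s in aligned_seqs]
        h = 0.0
        for residue in set(column):
            p = column.count(residue) / len(column)
            h -= p * math.log2(p)
        fits += math.log2(W) - h
    return fits

# Toy alignment of two "functional" sequences; real inputs were on
# the order of 2,442 aligned ubiquitin-family sequences.
print(round(fsc_fits(["ACDE", "ACDF"]), 2))
```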
F/N 2: For reference, from the UD weak argument correctives top right this and every UD page for some years now: _________________ >> 25] Intelligent Design proponents deny, without having a reason, that randomness can produce an effect, and then go make something up to fill the void ID proponents do not deny that "randomness can produce an effect." For instance, consider the law-like regularity that unsupported heavy objects tend to fall. It is reliable; i.e. we have a mechanical necessity at work — gravity. Now, let our falling heavy object be a die. When it falls, it tumbles and comes to rest with any one of six faces uppermost: i.e. high contingency. But, as the gaming houses of Las Vegas know, that contingency can be (a) effectively undirected (random chance), or (b) it can also be intelligently directed (design). Also, such highly contingent objects can be used to store information, which can be used to carry out functions in a given situation. For example we could make up a code and use trays of dice to implement a six-state digital information storing, transmission and processing system. Similarly, the ASCII text for this web page is based on electronic binary digits clustered in 128-state alphanumeric characters. In principle, random chance could produce any such message, but the islands of functional messages will as a rule be very isolated in the sea of non-functional, arbitrary strings of digits, making it very hard to find functional strings by chance. ID thinkers have therefore identified means to test for objects, events or situations that are credibly beyond the reach of chance on the gamut of our observed cosmos. (For simple example, as a rule of thumb, once an entity requires more than about 500 – 1,000 bits of information storage capacity to carry out its core functions, the random walk search resources of the whole observed universe acting for its lifetime will probably not be adequate to get to the functional strings: trying to find a needle in a haystack by chance, on steroids.) Now, DNA for instance, is based on four-state strings of bases [A/C/G/T], and a reasonable estimate for the minimum required for the origin of life is 300,000 – 500,000 bases, or 600 kilo bits to a million bits. The configuration space that even just the lower end requires has about 9.94 * 10^180,617 possible states. So, even though it is in principle possible for such a molecule to happen by chance, the odds are not practically different from zero. But, intelligent designers routinely create information storage and processing systems that use millions or billions of bits of such storage capacity. Thus, intelligence can routinely do that which is in principle logically possible for random chance, but which would easily empirically exhaust the probabilistic resources of the observed universe. That is why design thinkers hold that complex, specified information (CSI), per massive observation, is an empirically observable, reliable sign of design. 26] Dembski's idea of "complex specified information" is nonsense First of all, the concept of complex specified information (CSI) was not originated by Dembski. For, as origin of life researchers tried to understand the molecular structures of life in the 1970s, Orgel summed up their findings thusly: Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [ L.E. Orgel, 1973. The Origins of Life.
New York: John Wiley, p. 189. Emphases added.] In short, the concept of complex specified information helped these investigators understand the difference between (a) the highly informational, highly contingent functional macromolecules of life and (b) crystals formed through forces of mechanical necessity, or (c) random polymer strings. In so doing, they identified a very familiar concept — at least to those of us with hardware or software engineering design and development or troubleshooting experience and knowledge. Namely, complex, specified information, shown in the mutually adapted organization, interfacing and integration of components in systems that depend on properly interacting parts to fulfill objectively observable functions. For that matter, this is exactly the same concept that we see in textual information as expressed in words, sentences and paragraphs in a real-world language. Furthermore, on massive experience, such CSI reliably points to intelligent design when we see it in cases where we independently know the origin story. What Dembski did with the CSI concept in the following two decades was to: (i) recognize CSI's significance as a reliable, empirically observable sign of intelligence, (ii) point out the general applicability of the concept, and (iii) provide a probability and information theory based explicitly formal model for quantifying CSI. 27] The Information in Complex Specified Information (CSI) Cannot Be Quantified That's simply not true. Different approaches have been suggested for that, and different definitions of what can be measured are possible. As a first step, it is possible to measure the number of bits used to store any functionally specific information, and we could term such bits "functionally specific bits." Next, the complexity of a functionally specified unit of information (like a functional protein) could be measured directly or indirectly based on the reasonable probability of finding such a sequence through a random walk based search or its functional equivalent. This approach is based on the observation that functionality of information is rather specific to a given context, so if the islands of function are sufficiently sparse in the wider search space of all possible sequences, beyond a certain scope of search, it becomes implausible that such a search on a planet wide scale or even on a scale comparable to our observed cosmos, will find it. But, we know that, routinely, intelligent actors create such functionally specific complex information; e.g. this paragraph. (And, we may contrast (i) a "typical" random alphanumeric character string showing random sequence complexity: kbnvusgwpsvbcvfel;'.. jiw[w;xb xqg[l;am . . . and/or (ii) a structured string showing orderly sequence complexity: atatatatatatatatatatatatatat . . . [The contrast also shows that a designed, complex specified object may also incorporate random and simply ordered components or aspects.]) Another empirical approach to measuring functional information in proteins has been suggested by Durston, Chiu, Abel and Trevors in their paper "Measuring the functional sequence complexity of proteins", and is based on an application of Shannon's H (that is "average" or "expected" information communicated per symbol: H(Xf(t)) = -∑P(Xf(t)) logP(Xf(t)) ) to known protein sequences in different species. A more general approach to the definition and quantification of CSI can be found in a 2005 paper by Dembski: "Specification: The Pattern That Signifies Intelligence".
For instance, on pp. 17 – 24, he argues: define φS as . . . the number of patterns for which [agent] S's semiotic description of them is at least as simple as S's semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S's] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 logarithm of the conditional probability P(T|H) multiplied by the number of similar cases φS(T) and also by the maximum number of binary search-events in our observed universe 10^120] χ = – log2[10^120 · φS(T) · P(T|H)]. To illustrate, consider a hand of 13 cards with all spades, which is unique. 52 cards may have 635 * 10^9 possible combinations, giving odds of 1 in 635 billion as P(T|H). Also, there are four similar all-of-one-suite hands, so φS(T) = 4. Calculation yields χ = -361, i.e. χ < 1, so chance cannot be ruled out. >> __________________ In short, the above is largely a revisiting of old ground, to no different outcome than previously. kairosfocus
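(Both numerical claims in the excerpts above can be checked in a few lines; the sketch below reproduces the 9.94 * 10^180,617 configuration-space figure from WAC 25 and the χ ≈ -361 value for the all-spades hand from WAC 27.)

```python
import math

# WAC 25: configuration space of a 300,000-base genome, 4^300000.
exp10 = 300_000 * math.log10(4)
print(f"{10 ** (exp10 % 1):.2f} * 10^{int(exp10)}")  # 9.94 * 10^180617

# WAC 27: chi = -log2(10^120 * phiS(T) * P(T|H)) for 13 spades.
p_hand = 1 / math.comb(52, 13)  # one specific 13-card hand
phi_s = 4                       # four all-of-one-suit patterns
chi = -(120 * math.log2(10) + math.log2(phi_s) + math.log2(p_hand))
print(round(chi))               # -361, far below 1
```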
F/N: The onward thread, sadly predictably, has been largely unproductive. I will comment on a few points of note, as it is plainly winding down to an impasse, as I predicted/expected from my very first for the record remark: 1 --> Meaningfulness of specified complexity and associated information, especially functionally specific complex organisation and associated information is a longstanding reality acknowledged by Orgel and Wicken from the 1970's in the technical literature on OOL and related areas. 2 --> At no point above has MG found herself able to acknowledge this. 3 --> All else in the above impasse follows from this, as the objective reality of functionally specific complexity is antecedent to mathematical models, analyses and metrics. 4 --> The what comes before the how much, as length comes before defining the metre as how far light goes in about 3 ns, or previously, the distance taken up by a certain number of wavelengths of a certain spectral line from a certain isotope of Cs; and before that, the distance between two marks on a certain bar of Pt alloy kept in France, and before that, a certain fraction of the distance from pole to equator through Paris. 5 --> You will note that I have consistently put up a simple, brute-force X-metric for FSCI (the most relevant subset of CSI: the specification is based on observed function), X = C*S*B, based on the Shannon-Hartley quantification of information, as is commonly used in the form of functional bits. 6 --> I have only added the Semiotic Agent/Observer who judges function and complexity . . . comparable to how an observer in the end has to make a judgement for us to measure a length, e.g. alignment with marks on a metre rule. 7 --> I have given cases and calculations, and I have applied it to the case of gene duplication as proposed by MG [exposing the implied complex regulatory processes being glided over question-beggingly]. 8 --> All, only to be dismissed without analysis, as giving only a flood of words without calculations and backdrop analysis [note how EVERY post I have ever made at UD links through my handle to a briefing note that ties my thoughts to the relevant information theory and thermodynamics and related factors]. 9 --> So, MG is either refusing to notice the calculations and underlying analysis, or she is willfully distorting what I have done. 10 --> When I and others have then pointed to other, more complex metrics, analyses and calculations [which build on the rudimentary principles involved in the admittedly crude X-metric], they too have been brushed aside, and goal posts have been repeatedly moved. Remember, Durston et al have provided FSC results for 35 protein families, based on extending Shannon's H-metric of average information per symbol. Remember, Dembski has provided books and papers --- and key excerpts have been given -- to quantify CSI, and has provided his own calculation for a specific case in that context. CJY and VJT have given calculations for specific cases. 11 --> All, have been brushed aside. 12 --> Now, someone above wishes to make the claim that the explanatory filter that is used to identify cases of law, chance and design affecting aspects of an object, process, system, or phenomenon, is a dubious novelty. 13 --> I beg to remind that person that every time we make the distinction between a meaningful message and meaningless noise or natural regularity, in a communication context, we are making a design inference on the filter, used intuitively. 
[BTW, this immediately extends to the hypothesis testing context where we are looking for intentional action vs chance patterns.] 14 --> In fact, in science, the work of identifying what is law and what is scatter is an explanatory filter; and this extends to inference to design in contexts where results of an intervention are being identified, such as in control/treatment studies. 15 --> In short, the idea of an inference filter is nothing new. What is controversial in the origins science context is simply this: that people are inferring on FSCI to design, in contexts that cut across dominant schools of thought and the sort of Sagan-Lewontin a priori materialism that was so plainly documented in 1997:
To Sagan, as to all but a few other scientists, it is self-evident that the practices of science provide the surest method of putting us in contact with physical reality, and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test . . . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [From: "Billions and Billions of Demons," NYRB, January 9, 1997.]
16 --> Have you identified a case where FSCI [use the crude X-metric to quantify, e.g. 143 ASCII characters of text would suffice . . . ], on empirical evidence, originates by blind chance and mechanical necessity without intelligent direction? 17 --> ANS: Plainly, not; or that would have been long since triumphantly announced.
(The sort of cases being suggested above, of gene duplication begs the question of the underlying regulatory mechanism and its origin, and programs like ev are intelligently designed and similarly beg the question of how they arrived on an island of function with a conveniently working hill climbing algorithm.)
18 --> The production of FSCI by intelligence, in many ways, is a matter of routine direct observation with literally billions of cases in point. That routine observation is backed up by the infinite monkeys type analysis that also grounds the related second law of thermodynamics. 19 --> So, on inference to best explanation across aspects of a phenomenon, object etc, we are entitled to infer from FSCI as reliable sign to its observationally known source, intelligence. 20 --> And, in that context, we are entitled to use an explanatory framework -- we may call it a filter -- to differentiate signs that point us to mechanical necessity, to chance contingency and to choice contingency [aka design] as best explanation. ____________ So, we are plainly at impasse, and it still remains that FSCI and CSI more broadly are meaningful, are quantifiable in principle and in fact in enough cases to be relevant, as well as fitting into a broader view of the methods of science and similar serious explanatory investigations. G'day GEM of TKI kairosfocus
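(Point 5 of the comment above describes kairosfocus's brute-force X-metric, X = C*S*B. A minimal sketch of that arithmetic follows, with the caveat that both 0/1 judgments are made by the observer and that the threshold used here is one choice from his stated 500 - 1,000 bit range.)

```python
def x_metric(bits, specified, threshold=1000):
    # X = C*S*B: S is a 0/1 judgment of observed function, C is a
    # 0/1 test against the complexity threshold, B is the storage
    # capacity in bits.
    S = 1 if specified else 0
    C = 1 if bits >= threshold else 0
    return C * S * bits

# The stock example from point 16: 143 ASCII characters at 7 bits
# each is 1,001 bits.
print(x_metric(143 * 7, specified=True))   # 1001
print(x_metric(143 * 7, specified=False))  # 0
```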
PaV
I suppose for me to be pried away from what I do to focus long and hard on that particular problem would take, quite honestly, hundreds of thousands of dollars to begin to pique my interest.
As this would be a significant development for ID, I'm sure a body like the Templeton Foundation or the DI would be interested in providing the relevant funding. And compared to the amount of money that "Darwinism" gets on a daily basis, a couple of hundred K is peanuts. So, given that you claim it's possible but for the lack of funds, and given that you are an ardent ID supporter (one of the few to step up to MathGrrl's challenge), will you be putting together a proposal for funding? If not, why not? JemimaRacktouey
Joseph
As for the EF answer the question- do you think scientists flip a coin or throw darts? Or do you think they have a methodology?
Perhaps they use the same methodology as is used in the calculation of CSI? I.e., they don't. JemimaRacktouey
Oh, and Joseph, can you tell me who said: "I’ve pretty much dispensed with the EF. It suggests that chance, necessity, and design are mutually exclusive. They are not. Straight CSI is clearer as a criterion for design detection." Muramasa
Joseph, I am unfamiliar with "hind-n-seek". What exactly is that? Muramasa
Jon @ 379. I count myself among the supportive onlookers. Up until now I've been silent – this is my first post on this site. But my interpretation of this debate is not similar to yours. I'm certainly not disillusioned. I think at least a couple of good examples of rigorous calculations have been given by vjtorley and CJYman. I think PaV and others have given satisfactory rebuttals for each of the four scenarios. And I think that several premise-challenging questions have been asked of Mathgrrl which haven't been addressed. For every lurker out there who 'desperately' wants ID supporters to meet Mathgrrl's demands, there's a lurker like me: who thinks that Mathgrrl needs to step up and respond in depth to the many attempts to help her; and who thinks that the consistency of message on the 'show-me-rigorous-math-for-my-four-scenarios' line, while rhetorically effective at first, falls flat when it's seemingly the only thing she is willing to type. But I would agree with you in a strange way – I do hope that the ID supporters who have responded to Mathgrrl's request keep up the posting. Not because they are under some idealistic obligation not to 'walk away from a fight', but because the content is instructive (except for some of Joseph's crabbiness). But I don't blame them if they don't. cmow
#380 UB I commented to you that I make a distinction between an object, the information that can be created from an object, and an object arranged to contain information. So you did - and I have no problem with that. It arose because I asked what the amino acids in a protein symbolise. You said they didn't. The symbols were in the DNA. So I then asked what do the symbols in DNA (presumably the base pairs) symbolise? Actually I have asked this three times now. markf
From PaV in 377: "When, by definition, a bit sequence has to be at least 500 bits long to rank as CSI, and someone wants you to rigorously define CSI for a bit-string (sequence) that is 260 bits long, what would you make of it?" Then take the human genome, duplicate any gene in any way you see fit, and then measure the CSI before and after. Surely the human genome is larger than 500 bits and fits your criteria. If you want, I can give you an amino acid sequence larger than 500 bits and you can measure the CSI of that sequence. Or better yet, I can give you a diagram of the tertiary structure of the protein and you can measure the CSI of that protein. What would be most helpful? Taq
From #331, it turns out that simply not engaging is the chosen option, and that is fine. A sensible choice, given the alternative. Mark, I commented to you that I make a distinction between an object, the information that can be created from an object, and an object arranged to contain information. I think observation supports that view, but if you have a case to make, then by all means make it. I will be traveling, but should be able to keep up. Upright BiPed
Naturally when I told them what I had found that ended the discussion. But, of course, I had wasted a lot of my time. Maybe now you understand my reticence.
Well, I would suggest you are playing to the wrong audience. Who cares what the Darwinists say. Your real audience should be the supportive onlookers who wish to see ID succeed, but remain largely silent in this forum. Does it not bother you that those onlookers, among whom I count myself, are becoming disillusioned with ID because its most vocal supporters are basically walking away from an opportunity to demonstrate their primary tool? Do you believe ID is right or don't you? If you do, then you have no excuse to shrug your shoulders and walk away from this fight. jon specter
Somewhere back in the 200's, Colin commented about psychological measures of a somewhat "fuzzy" nature. I wanted to add that such factors resulting from statistical multivariate analysis may be fuzzy in the interpretation, but the method of calculation is explicit - there is no question of how to do the calculation or the definition of the methodology, which is a rather different situation than with the calculation of CSI. AND MathGrrl writes at #358:
My gut instinct is that, if CSI can be rigorously defined and objectively calculated, GAs will prove to be capable of generating it. I’d very much like to test that instinct (and ID claims at the same time).
This would seem to be a reasonable goal for all interested parties. If this is testable, it should be tested! Tomato Addict
Jon S: Let me try and give you an example of what MathGrrl's request (demand) appears like to me. Let's say that you described a football field as a rectangular area, 50 x 120 yards (including end-zones), that is striped every five yards, consists of grass-like material, with a slope from the center to the edges of 1"/2 feet, etc. And then you say you're looking to build one. And someone says, "Oh well, come over to our house you can build it in the back yard." And then you ask, "Well, just how big is your back yard," to which they respond: "50 feet square." If they kept insisting that you could build your "football field" in their back yard, would you go over there to make sure? When, by definition, a bit sequence has to be at least 500 bits long to rank as CSI, and someone wants you to rigorously define CSI for a bit-string (sequence) that is 260 bits long, what would you make of it? PaV
Muramasa, CSI is only intended for certain circumstances- most tools are designed for certain circumstances. As for the EF, answer the question: do you think scientists flip a coin or throw darts? Or do you think they have a methodology? Do you think an archaeologist can claim she is holding an artifact when a geologist can demonstrate that geological processes can account for it? Are all deaths homicides? Do you have any experience with conducting an investigation beyond playing "hind-n-seek"? Joseph
Thank you, CJYman at [371]. I didn't know that you had worked out such a problem. Thanks for the link. And your analogy of the Shannon information difference between the Taj Mahal and the CN Tower is great. For any onlookers, the reality is as CJYman says, there are examples out there of how to calculate CSI/specified complexity. These are there to guide you. I don't see any evidence that MathGrrl sufficiently understands the concepts involved so as to justify having a meaningful conversation with her. At [368], I include quotes demonstrating MathGrrl's inability to understand vjtorley's response to her. And, I pointed out what I'm rather sure her agenda is: CSI is a moving target. It's not rigorous enough. But that seems to mean: it's not easy enough. (And it's not easy. Likely you would have to formulate some composite probability density function, and use some form of combinatorics that might be specific to the problem at hand, and so, a form you yourself would have to derive. But just because something isn't easy to achieve doesn't mean that it isn't true.) Well, it's not. But in certain situations it comes only at the end of a very long and hard search for understanding. JemimaR: I suppose for me to be pried away from what I do to focus long and hard on that particular problem would take, quite honestly, hundreds of thousands of dollars to begin to pique my interest. You have to remember: it's an entirely uninteresting problem for me (because it is an impossibility---a "tilting at windmills"). Here's an interesting problem: MathGrrl shows up with a precise "specification", including a chance hypothesis, demonstrating that she has found some instance of "specified complexity/CSI" via chance processes. I would love to debunk it. While I have zero interest the other way around, I'd debunk it for free!! Jon Spec: I'm not trying to bow out gracefully. MathGrrl, on plenty of occasions along the way, has been given her answer. That she cannot understand these answers is not my problem. You say it would be good for ID. Well, tell you what, try and get hold of Bill Dembski and see if you can interest him in that. But, tell me, would you be able to evaluate it? And if someone from the other side said Dembski's calculations were all wrong, would you know whether or not there was merit to their claim? So, then, what would be gained? CJYman has done a sample calculation. The basics are enough. Lastly, take the example of the ev program. There's a kind of subroutine to check how "correct" the "binding site" is. Well, ask yourself this: correct compared to what? Well, obviously the actual binding site. Now, does that sound like "blind chance" to you? It sure doesn't to me. I forget the program, but years ago they said, "Oh, well you must know about so-and-so's ABC program (whatever it was) that randomly generated an electric circuit." I responded by saying, "Please don't waste my time. I don't want to go and look at it and then find out it can't do anything." "Oh, no," I was assured. "It can really do what it says." So I spent quite some time meticulously going through the program, step by silly step, and what do I find out: if the program runs using information as to how the electric circuit was supposed to function (and this was specified by a "fitness function"), then, and ONLY then, did a circuit appear---something that looked like the worst case of a Rube-Goldberg machine you ever saw (highly inefficient). If you didn't feed the program this information, then NOTHING happened.
Naturally when I told them what I had found that ended the discussion. But, of course, I had wasted a lot of my time. Maybe now you understand my reticence. PaV
CJYman provides further support for the notion that CSI, if applicable at all, can only be used in certain circumstances. Does this not render it nearly useless? Maybe a helpful intermediate step would be to list things for which CSI/specified complexity can be measured. And Joseph @373, that really isn't a specific demonstration of the explanatory filter. Muramasa
JR:
The explanatory filter you say kairosfocus? Would you, perhaps, be able to give a demonstration of the usage of it?
I would say it is used by anyone trying to determine how something came to be the way it is. IOW it is standard operating procedure. Do you think scientists flip a coin or throw darts? Joseph
CJYman, As always your efforts are appreciated. Unfortunately your efforts are wasted on those they were intended. Joseph
For anyone interested, here's a link to our previous discussion in which I gave a few more links to where I had calculated CSI for the protein Titin, explained how CSI is calculated, backed up my explanation, and showed how my calculation is derived from Dembski's latest work on CSI in his "Specification ..." paper. Along with the work vjtorley and others have done, this shows that CSI is well defined and calculable in at least some situations. To demand that CSI be able to cover every single imaginable situation would be akin to demanding that unless one can calculate the difference in Shannon Information between the Taj Mahal and the CN Tower, then the concept of Shannon Information is not well defined. I do understand that MathGrrl would like to have CSI calculated for the scenarios that she has provided, and in principle that's great, but as PaV has noted above, if she can't figure out the examples already given her, then there is no reason to suspect that if we take the time to go through her provided scenarios she would understand the calculations any better. In fact, as a tutor, I understand very well that if you do someone's homework for them they will never understand it themselves. It would be better if she understood first what we've already tried to explain and calculate for her and then she can work on her own examples herself and ask us what we think about her calculations. Look at that, an opportunity for MathGrrl to actually engage in some ID Research. Sounds good? ... no? P.S. There is also the challenge at comment #311, referencing how law+chance absent intelligence will not produce an EA, at the above linked thread that MathGrrl has not taken up yet. CJYman
Jon, I believe that you’re a Creationist who doesn’t like what Dembski is doing, so that what MathGrrl is trying to do, counter-intuitively, supports your agenda.
LOL. You better have a talk with Joseph. He seems to think I am a Darwinist. If you do, though, be forewarned: he is kinda grumpy. What I am, or was, was an interested, supportive onlooker. I was interested in seeing this worked out, but it seems clear you have no intention of doing so. I can understand, if you were talking one on one with Mathgrrl, your reluctance to go through the work given that she might not accept it. But, you are on a much larger stage here and there are probably many onlookers, most remaining silent, that desperately want one of the core ID tools demonstrated in action. I just happen to be the one-in-a-hundred onlooker mouthy enough to butt in. But, honestly, with your admission that you are not a scientist or a mathematician (I thought you were), I can only conclude that you probably are just looking for a graceful way out. You've done a yeoman's job thus far. Seems a shame that the real ID scientists are leaving you twisting in the wind. jon specter
PaV,
What she wants from us is the "chance hypothesis" for these programs. If she is willing to pay me large sums of money, I might consider showing her how it's done. However, considering the time, effort and thought required, I am not willing to give it to her for free.
How much would it cost? JemimaRacktouey
Jon Specter [325]: So? Asking an ID scientist to demonstrate what he claims is one of the key tools in his toolbox hardly seems like some nefarious plot. The few actual scientists I know are positively giddy when asked to talk about their work. I can't shut them up, even after the food arrives. My claim is this: Dembski's book, NFL, contains a rigorous, mathematical description/definition of CSI. MathGrrl doesn't want the definition contained there, she wants a worked-out example of CSI. Very different things. And thank you very much for elevating ID to the level of science; please inform the Darwinist community of this. PV:As I demonstrated above, she has not understood what a specification JS:Yes, and by my reading, she agrees with you on that point and has asked for help clearing it up. She has only provided what she thinks are specifications. But a specification should be linked to some "chance hypothesis". What she wants from us is the "chance hypothesis" for these programs. If she is willing to pay me large sums of money, I might consider showing her how it's done. However, considering the time, effort and thought required, I am not willing to give it to her for free. Since when are we here at UD supposed to do somebody's homework assignment? If she wants help, contact Bill Dembski. He's the author of NFL. PV:The only person in the ID world providing mathematical definitions of CSI is Bill Dembski. She should have known this from the beginning. JS:It might have saved everyone a lot of time and frustration if they had just said 700 comments (over two threads) ago that they can't define CSI in an unambiguous manner. There is a great deal of rigor in both NFL and in "Specification". The definition is spelled out. MathGrrl doesn't want a DEFINITION; she wants a worked-out example. If she is willing to pay me sufficiently, I might be interested. But, again, she's asking in numbers (1) and (2) for the impossible. In (1), simple duplication is tantamount to a copy machine copying multiple copies of an original. That is, there is NO NEW PATTERN! As for (2), its length is insufficient to constitute true CSI, and therefore, "specified complexity" as well. [So, why should I even bother with them? It's obvious just looking at these "patterns". If she had even a rough idea of what CSI is, and how it works, she wouldn't be making these requests.] If she thinks OTHERWISE, then let her prove her case. Again----I don't do homework. PV:She didn't want us to give a "rigorous mathematical definition" of CSI, she wanted us to tear apart the programs and assess it using the notions of CSI. Why should I be expected to respond to such a request on my time and energy? Am I some kind of paid consultant? JS:I, for one, was under the impression that you were an ID scientist. And, as I am led to understand, spending inordinate amounts of time developing ideas and sharing them widely is the process of science and the work of scientists. Again, thank you for elevating ID to a science. If I were a "scientist", then that would be my professional duty. I happen to not work in the sciences or in math whatsoever. As to ideas, ALL the ideas she is asking about are there in NFL and in "Specification". She's having trouble coming to grips with what all of this should look like in the cases that she's interested in. That's the problem. Again, I don't do homework. Let her figure it out herself.
When I was auditing some physics classes, it was always the case that physics professors never gave their inquiring students direct answers to their questions, or else the student would be deprived of a learning experience. PV:Why isn't she expected to show that she understands CSI and demonstrate that understanding by, herself, analyzing these programs? If she came up with something disproving CSI, THEN, and ONLY THEN would it be incumbent upon the ID community to rebut her findings. JS:How can she disprove what hasn't been shown to be proven? CSI is an interesting concept. But, until someone actually shows it in action (it is easy, after all, right? You said so yourself), it seems like demands for disproof are premature. The claim is that chance processes cannot produce CSI. That is a simple claim. CSI is defined. Rigorously. Is it the duty of the ID community to prove, in every instance, that this claim is true? The way science works is that a claim is expected to work in every applicable situation. Demonstrating that it doesn't would invalidate the claim. Again, for the umpteenth time: the calculation is easy. Setting up the equation by providing a pattern and the chance hypothesis that goes along with that is the hard part. JS:Your demands for disproof of what you aren't yet willing to demonstrate look kinda like this: I claim that I am the most interesting man in the world. Now you must disprove that. And it is insufficient for you to say that I am nothing more than an internet blog troll, because someone else likely disagrees with you. You must demonstrate it with such rigor that everyone agrees that I am not very interesting. You're spilling over into the realm of the imbecilic now. I don't think any of those scenarios has any CSI whatsoever. It would take a huge amount of effort to prove so using the already existing rigorous definition of CSI. Why am I then obliged to demonstrate this to her? If she is of the mind that it does, then let her prove it. Here's a quote from MathGrrl: "For some time I've been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it. Unfortunately, what I've found is quite a bit of confusion about the details of CSI, even among its strongest advocates." There are two parts: the first part is: "I've been trying to learn enough about CSI to be able to measure it objectively". Well, functional sequences are specified. That is just a given. As an example: "slltldlchd skdig;ls;s;e" wasn't helpful, was it? I typed it completely at random. But all my other words are "functional". You understand them. So, here at UD, we'll just postulate that any coding portions of the genome are "specified". And, let me go further, any sufficiently long coding portion would be, per NFL, CSI. So, there. It's not defined. It's a given. So all she needs to do is count. Isn't that easy enough? So now she knows "enough . . . to be able to measure it objectively." Then, the second part: "to determine whether or not known evolutionary mechanisms are capable of generating it." That's wonderful she wants to do that. Either way, whether she can determine that evolutionary mechanisms are capable of generating CSI, or not, she'd have her name in the spotlight. So, let her do it herself if she is of the mind. Why spoil her fame?
Others here at UD want to stick directly to the "information" area and have our own intuitive ideas of what CSI should look like, and what we should be looking for in biological systems. Is there something wrong with this?

JS: Well, it renders your demand that Mathgrrl be gone and not come back until she understands CSI a little confusing. How is she supposed to demonstrate an understanding of a central ID concept which actual ID scientists can't agree on?

Here's an exchange that took place on an earlier thread at [295]---one you are aware of, since you posted at that time. vjtorley: "I note that for the duplicated genome, the specified complexity Chi is much greater than 1, so Dembski's logic seems to imply that any instance of gene duplication is the result of intelligent agency and not chance." Here's MathGrrl's response: "I didn't double-check your math, but the orders of magnitude seem about right. I agree with you that by Dembski's logic a gene duplication event generates CSI."

She has either completely misunderstood vjtorley, or has twisted what he said. I don't know which. So, at the top of the thread she started, she writes: "2.) Do you agree with his conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)?" This is wild. She hasn't a clue.

Elsewhere in the thread I've linked to, vjtorley has directed her to Dembski's "Specification" paper. Then, when vjtorley says this: "the presence of bases along the gene in question, and …….. signify the rest (which are also random, let's say), then the description of the duplicated genome will be ……..(AGTCGAGTTC)x2 instead of ……..(AGTCGAGTTC). In other words, we're just adding two characters to the description, which is negligible," MathGrrl responds: "Why are you using "x2" instead of the actual sequence? Using the "two to the power of the length of the sequence" definition of CSI, we should be calculating based on the actual length. I can see how the Kolmogorov-Chaitin complexity might make more sense, but that's not what I see ID proponents using."

Yet, in Dembski's SP (= Specification paper), when he is defining phi, which includes these kinds of duplication events, there is a footnote. In the footnote we read: "[24] There is a well-established theory of descriptive complexity, which takes as its point of departure the Chaitin-Kolmogorov-Solomonoff theory of bit-string compressibility, namely, the theory of Minimal Description Length (MDL). The fundamental idea behind MDL is that order in data 'can be used to compress the data, i.e., to describe it using fewer symbols than needed to describe the data literally.' See http://www.mdl-research.org (last accessed June 17, 2005)."

Now, if our wonderful little MathGrrl is really trying to understand CSI, then why doesn't she bother to read the footnotes? If you have a question about something, and there's a footnote, then why don't you explore it? The next footnote [25] refers her to The Design Inference. Why doesn't she try reading this stuff? That's all I've been saying. The definition is out there. Figure it out.

JS: It might save you more time if you shortened your request that she go away until she understands CSI to a request that she just go away.

Here's MathGrrl's true agenda----for all to see. On that earlier thread, again at [295], we have her response to vjtorley, and then her true colors shine.
VJT: Well, I've done the hard work. I hope you will be convinced now that fixating on a single measure of information is unhelpful.

MathGrrl: Thank you, very sincerely, for your effort. The problem is, I'm not the one you need to convince. There are a number of ID proponents who continue to make the claim that CSI is a clear and unambiguous indicator of intelligent agency. They are the ones who need to be convinced to stop making those claims, which you yourself have shown to be unsupported, unless and until someone comes up with an alternative metric.
She has managed to misunderstand vjtorley (again). But it's very clear what her agenda is. And you want me to help her. No thank you. It would be best if MathGrrl went away. She's wasting our time.

----------------------

And, Jon, I believe that you're a Creationist who doesn't like what Dembski is doing, so that what MathGrrl is trying to do, counter-intuitively, supports your agenda. Please tell me I'm wrong. But be honest.

__________________________

BTW, in looking over some posts, there have been some wonderful answers given to MG. Good work, everyone. PaV
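PaV's "just count" suggestion above is simple enough to put in code. A minimal sketch in Python, purely illustrative: the 2-bits-per-nucleotide convention and the 500-bit complexity figure are both taken from this thread's discussion of NFL, and the gene and function name are invented:

```python
def naive_csi_bits(sequence, bits_per_symbol=2):
    """The crude tally PaV describes: treat every base of a coding
    region as 2 bits and compare the total against the 500-bit
    figure cited from NFL elsewhere in this thread."""
    return len(sequence) * bits_per_symbol

coding_region = "ATG" + "GCT" * 100 + "TAA"  # hypothetical 306-base gene
total = naive_csi_bits(coding_region)        # 612 bits
print(total, "bits;", "past" if total > 500 else "below", "the 500-bit mark")
```

Note what the sketch does not do: the judgment that the sequence is "specified" (functional) happens before any counting, which is exactly the point MathGrrl keeps pressing on.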
kairosfocus,
So, in the context of the explanatory filter, we may examine events, objects etc by asking on aspects, whether these are accounted for on mechanical necessity [giving rise to natural regularity], chance [giving rise to stochastic contingency on relevant distributions], or choice contingency, driven by intelligent, purposeful choice.
The explanatory filter, you say, kairosfocus? Would you, perhaps, be able to give a demonstration of its usage? You certainly talk confidently about it, as if it were trivial to apply. Perhaps then the "digital organisms" referenced in the OP might suit as a target? I'm sure there will be some reason why that's just not possible. Is the EF another CSI-style mirage? Good for building a complex, multi-step argument, such as the tower of Babel KF is constructing, but all for naught as the foundations (CSI, the EF) evaporate in the daylight. JemimaRacktouey
kairosfocus, Orgel gave a meaningful definition of specified complexity (which I think you, unlike PaV, take to be synonymous with CSI). However, his description of it was qualitative rather than quantitative. It may be that the concept is not amenable to mathematical precision. QuiteID
I think that the accusation of nonsense takes priority over any further questions until it has been settled. And this, you know or should know. Worse, you have made a materially false assertion. One that you have had every occasion to know is materially false.
Ruh-roh. Looks like KF is getting ready to demand an apology for being slandered. Color me surprised.
This is not the inquisition, and I am no suspect heretic confined to answer questions as asked, however poisonously loaded.
Won't answer your questions, Mathgrrl. But, won't let you have the last word either. San Antonio Rose
mg @ 359 "I find your view interesting, but in this thread I am trying very hard to keep the focus on obtaining a rigorous mathematical definition of CSI and to get some detailed example calculations. Are you saying that what I am asking for cannot, even in principle, be provided?" I am saying, per my previous post, and interminable posts prior to this on other threads, that is it impossible, IN PRINCIPLE, i.e. it is logically impossible, to explain information in terms of algorithms and/or physical laws. This so obviously true that it is scarcely worth repeating. So I won't. You will sooner be able to create a square circle as to generate information with time and physics. Information is impossible without reason, language, free will, and intentionality. That is, a mind. Or Mind in the case of life. tgpeeler
MG: PLEASE, PLEASE PLEASE. It is you who have maintained that the undersigned, specifically, as well as others, have been "meaningless" in speaking of CSI and FSCI. Until this issue is settled, no further progress is possible. I have now any number of times pointed to Orgel and Wicken using the root of these abbreviations, and you have yet to accept that these men, and others since, have been describing a significant aspect of reality, found inter alia in the living cell.

I think that the accusation of nonsense takes priority over any further questions until it has been settled. And this, you know or should know. Worse, you have made a materially false assertion. One that you have had every occasion to know is materially false. For, I took up time to address the first question of the four you posed above, probably the most important. In so doing, I deconstructed the question, exposing the underlying issues that are begged and glided over with a glib "simple." In direct terms, MG, you have committed the fallacy of the complex, loaded question.

This is not the inquisition, and I am no suspect heretic confined to answer questions as asked, however poisonously loaded. I have taken my privilege of intellectual liberty to analyse the question, in the context of what it means to duplicate a gene. The answer in that context points in precisely the opposite way to what you seem to wish.

Good day, madam. Off to my next item for the day. GEM of TKI kairosfocus
F/N: Pausing between meetings in a cafe, I think we need to bring a cite linked above explicitly into the discussion, from ENV, on gene duplication: ______________ http://www.evolutionnews.org/2009/10/jonathan_wells_hits_an_evoluti026791.html

>> Since biology is based upon functional information, Jonathan Wells is interested in far more important questions like, Does neo-Darwinism explain how new functional biological information arises? Shallit seems interested primarily in addressing simplistic, trivial questions like how one might duplicate a string, without regard for the all-important question of whether those additional characters convey some new functional message.

Under Kolmogorov complexity, a stretch of completely functionless junk DNA that has been utterly garbled by random, neutral mutations might have more Kolmogorov complexity than a functional gene of the same sequence length. For example, consider the two following strings:

String A: KOMOLGOROVINFORMATIONISAPOORMEASUREOFBIOLOGICALCOMPLEXITY

String B: JLNNUKFPDARKSWUVWEYTYKARRBVCLTLOPDOUUMUEVCRLQTSFFWKJDXSOB

Both String A and String B are composed of exactly 57 characters. String A spells a sentence in English, and String B was generated using a random string generator. Yet since many of its characters could be predicted using the grammatical rules of English, String A actually has less Kolmogorov complexity than String B (for example, we could use a data compression algorithm to shorten String A dramatically). Yet clearly String A conveys much more functional information than String B.

For obvious reasons, Kolmogorov complexity is not always a helpful metric of functional biological information. After all, biological information is finely tuned to perform a specific function, whereas random strings are not. A useful measure of biological information must account for the function of the information, and Kolmogorov information does not necessarily take function into account. In fact, Kolmogorov information is very similar to Shannon information, where "In both cases, the amount of information in an object may be interpreted as the length of a description of the object." But the length of the description says nothing about whether there is function, or how much fine-tuning is necessary for function. Thus you could have a very long random string that requires long descriptions, but it has no function.

As any ID novice knows, we infer design when we find both complexity and specification. In rough terms, Shannon information or Kolmogorov information measure complexity, but not specification. Thus, such measures of information are not useful for measuring functional biological information. As a paper in the journal Theoretical Biology and Medical Modelling observes:
[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon's original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that "different molecular structures may be functionally equivalent." For this reason, Szostak suggested that a new measure of information -- functional information -- is required. (Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, "Measuring the functional sequence complexity of proteins," Theoretical Biology and Medical Modelling, Vol. 4:47 (2007) (internal citations removed).) . . . .
Orgel captures the fact that specified complexity, or CSI, requires both an unlikely sequence and a specific functional arrangement. In fact, Orgel's "random mixture of polymers" might have extremely high Kolmogorov complexity, even though it would not be sufficiently specified to encode a functional biological life-form. Specified complexity is a much better measure of biological complexity than Shannon complexity or Kolmogorov complexity because it recognizes the highly specified nature of biological complexity . . . .

ID proponents define "new" genetic information as a new stretch of DNA that actually performs some different, useful, and new function. For example, consider the following 42-character string:

DUPLICATINGTHISSTRINGDOESNOTGENERATENEWCSI

Now consider the following duplicate string:

DUPLICATINGTHISSTRINGDOESNOTGENERATENEWCSIDUPLICATINGTHISSTRINGDOESNOTGENERATENEWCSI

Whether or not we have increased the Kolmogorov complexity, we have not created any new meaning in the duplicated string. We have not increased the CSI in any meaningful sense. The above example is of course analogous to the commonly cited evolutionary mechanism of gene duplication. New functional information is not generated by a process of duplication until mutations change the gene enough to generate a new function -- which may or may not be possible . . . >> ________________

Thus, we see a second level of issues connected to the gene duplication question. In the Shannon sense of average information per symbol, i.e. the H-metric, H = - SUM over i of p_i log2 p_i, the highest average information per symbol occurs with zero redundancy, which implies full resistance to compression. The easiest way to get that is to use a random string, where there is no correlation between any one digit and the next. K-complexity, similarly, would imply only a small increment to the algorithm that generates string X: append the instruction "repeat X, once." (A numerical illustration of both points follows just after this comment.) But absent new function, we are not dealing with new information in any relevant sense.

And, we observe that to get to the duplication in vivo, we are looking at exploiting a cellular replication system that (as we already saw) is brimming over with functionally specific complexity in its organisation, thus also functionally specific, complex information. In this context, the admittedly crude X-metric, X = S*C*B, is enough to discern cases that are safely beyond the search capacity of our observed cosmos, acting through a blind random walk filtered by equally blind trial and error, from an arbitrary initial condition.

So, in the context of the explanatory filter, we may examine events, objects etc by asking on aspects, whether these are accounted for on mechanical necessity [giving rise to natural regularity], chance [giving rise to stochastic contingency on relevant distributions], or choice contingency, driven by intelligent, purposeful choice. The original post in this thread passes the FSCI threshold, and is inferred by the explanatory filter to be an artifact of design. The protein molecule and the chromosome-replicating mechanisms will pass at a similar threshold, and are inferred to also be best explained on design.

The controversy over this inference does not trace to observed cases of chance and necessity giving rise to FSCI without design, but to the a priori, Sagan-Lewontin assumption that origins are evolutionary materialistic. This assumption is then used to impose a controlling censorship on origins science.
And, sadly, that is the root of ever so much of the debate we have seen, in recent years. GEM of TKI kairosfocus
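The two numerical claims in play here, that the redundant English string carries less Shannon information per symbol than the random one, and that duplicating a string barely lengthens its compressed description, are easy to check. A minimal sketch in Python: the strings are copied verbatim from the ENV excerpt, letter-frequency entropy is a crude stand-in for the H-metric, and zlib is a crude stand-in for Kolmogorov compressibility (none of this measures specification, which is the excerpt's own caveat):

```python
import zlib
from collections import Counter
from math import log2

def entropy_per_char(s):
    """Shannon H = -sum_i p_i log2(p_i) over the string's letter frequencies."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

# Strings copied from the ENV excerpt quoted above.
string_a = "KOMOLGOROVINFORMATIONISAPOORMEASUREOFBIOLOGICALCOMPLEXITY"
string_b = "JLNNUKFPDARKSWUVWEYTYKARRBVCLTLOPDOUUMUEVCRLQTSFFWKJDXSOB"

# The skewed letter frequencies of the English sentence give String A a
# lower H than the roughly flat frequencies of random String B.
print(f"A: {entropy_per_char(string_a):.2f} bits/char")
print(f"B: {entropy_per_char(string_b):.2f} bits/char")

# Duplication: zlib encodes the second copy as a short back-reference,
# so the doubled string compresses to only a few bytes more -- the
# "(AGTCGAGTTC)x2" point from the earlier exchange.
s = b"DUPLICATINGTHISSTRINGDOESNOTGENERATENEWCSI"
print(len(zlib.compress(s)), len(zlib.compress(s + s)))
```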
mf @ 334 "You seem very concerned that Mathgrrl has not addressed the presence of symbols as a sign of information and therefore design." The reason that we "are very concerned" that mg has not addressed the presence of symbols is that the presence of symbols destroys the materialist project. In other words, it's not even possible for you to be right. All languages involve symbols and rules. None of which can be explained by physics. Materialism is a fool's errand. All of this nonsense about bits and bytes and how much and how many and blah blah blah is irrelevant to the fundamental issue, which no materialist seems capable of grasping, which is that physics cannot explain language. Language is required for information. Therefore, physics (the only explanatory tool in the materialist tool kit) cannot explain information. Case closed. My apologies if this is "a bit rough." tgpeeler
kairosfocus (339),
One of the most distressing things in the above thread (and previous ones leading up to it) is the repeated strawman — or, outright false — assertion that FSCI and/or the broader CSI cannot be worked out for real world biological cases, joined to equally predictable dismissals of observations and calculations when they are made.
You've written a considerable number of words and made a number of claims, but you have not directly addressed any of my questions in the original post. What is the rigorous mathematical definition of CSI? How can I calculate it for the four scenarios I described (please show the details of your work)? What do you think of vjtorley's calculations in the previous thread? You seem to have time to participate here, and you claim to be able to calculate CSI. Rather than continuing to post lots of words, please show just a bit of math. MathGrrl
markf (334),
You seem very concerned that Mathgrrl has not addressed the presence of symbols as a sign of information and therefore design. It seems a bit rough, as her challenge was for someone to provide a mathematical calculation of the CSI or information in certain cases. After all, many leading ID proponents claim that CSI can be measured in bits. If you want to introduce a different criterion for information/design, that is fair enough, but it doesn't answer her challenge and it is not necessarily an evasion on her part not to answer. She has done amazingly well to respond to so many different objections on this thread, and she cannot be expected to respond to every different objection, especially when it does not answer her challenge directly.
Thank you, I am indeed resisting my usual desire to respond in detail to every point. There have been several interesting topics raised in this discussion, but I am trying very hard to keep this thread focused on getting answers to the questions I posed in the original post. I must say that I'm starting to suspect that those questions will not get answered. MathGrrl
tgpeeler (330),
Let’s keep this short and to the point. If we are talking about information (CSI, FSCI, or otherwise), and we are, then a language, “a system of chemical representations (symbols),” must also exist. I am saying that a materialistic explanation of language is, in principle, impossible.
I find your view interesting, but in this thread I am trying very hard to keep the focus on obtaining a rigorous mathematical definition of CSI and to get some detailed example calculations. Are you saying that what I am asking for cannot, even in principle, be provided?
This game is long over even if mg, jemima and the rest won’t get it.
What I honestly don't "get" is how ID proponents can make claims about CSI without being able to define it, let alone calculate it. MathGrrl
Collin (326),
I think that Mathgrrl is trying to get us to admit that CSI is not rigorously, mathematically calculable.
That would be a reasonable response, given what we've seen thus far in this thread. What I really would like, though, is a rigorous mathematical definition of CSI and detailed example calculations for the four scenarios I described. My gut instinct is that, if CSI can be rigorously defined and objectively calculated, GAs will prove to be capable of generating it. I'd very much like to test that instinct (and ID claims at the same time). MathGrrl
kairosfocus (320),
But, a gene DUPLICATION ain't "simple"!
Gene duplications are observed in biological systems. The rest of your comment fails to provide a rigorous mathematical definition of CSI and does not provide any example calculations. Could you please do so? MathGrrl
vjtorley (315),
Regarding gene duplication, here’s an excellent blog post by Casey Luskin on why gene duplication doesn’t increase CSI: Jonathan Wells Hits an Evolutionary Nerve .
That article doesn't use Dembski's discussion of CSI. The calculation you performed does (with some interpretation). I find your results more compelling.
Regarding ev, PaV has already shown that it is incapable in principle of breaching Dembski’s Universal Probability Bound, so I think we can all agree it does not present a real challenge.
You grossly overstate what PaV has shown. Please see this link for a demonstration of how ev can generate that much information. MathGrrl
PaV (313),
And, QUITE FRANKLY, the answer to her request is that if she wants a rigorous definition of CSI then she should read NFL; and, if any questions remain, then she should email Bill Dembski. The only person in the ID world providing mathematical definitions of CSI is Bill Dembski.
In that case, ID proponents should stop making claims they can't support.
We can calculate it.
There is no evidence of that in this thread.
Even Bill Dembski can’t "agree" on a definition of CSI.
You can't agree on a definition, but you still claim to be able to calculate it? MathGrrl
critter (301),
I’m trying to understand all of this. MathGrrl asked for a demonstration of the math behind CSI (as defined by Dr Dembski). I too would like to see the math.
Thank you for helping to keep this discussion focused on the core issue! MathGrrl
kairosfocus (290),
In short, there is an intuitive, implicit appeal to the inference to design on FSCI in the heart of information theory. So, the design theory movement is doing something important in highlighting and addressing this and its significance.
ID proponents go much farther than this, though, by claiming that CSI can be objectively calculated and that it is a reliable indicator of intelligent agency. Do you reject those claims? MathGrrl
PaV (284),
1.) The specification of this scenario is "Produces at least X amount of protein Y." 2.) "A nucleotide that binds to exactly N sites within the genome." 3.) "Acts as a parasite on other digital organisms in the simulation." 4.) "Computes a close approximation to the shortest connected path between a set of points." The first, third, and fourth aren’t patterns.
They can easily be expressed as patterns, though. vjtorley shows this in his post 217 where he says:
Here’s the most concise English description: “stator joining two rotary motors.” This description corresponds to a pattern T
All you need to do is consider the underlying genome or bit string as the pattern. Your objection isn't reasonable. MathGrrl
PaV (283),
Per NFL, per the UPB found there, which is 10^150 . . . this falls woefully short of this UPB. This means that we CANNOT conclude that it is CSI.
You've only looked at one particular simulation run. Schneider has shown how to beat the UPB using ev. MathGrrl
Joseph (260),
MathGrrl has already made it quite clear that the book has insufficient information present to allow CSI to be calculated for the 4 examples in question.
I doubt that she has read the book. Her posts and questions tells me she hasn’t.
Your interpretation of my posts and questions is incorrect. If you feel that CSI can be rigorously defined and calculated for my four scenarios based on the discussion in No Free Lunch, please demonstrate how in this thread.
An interesting claim. But groundless without further explanation or justification.
You have it backwards - MathGrrl needs to explain how/why her examples are good/valid.
I'm afraid you have it backward. I'm asking for detailed clarification of a core ID concept. If you think my scenarios are somehow inapplicable, it is incumbent upon you to explain why.
Why don’t you present a few non-bogus scenarios and then calculate, if you can, the CSI present in those scenarios.
I have already told her how to do it and presented a paper in comment 12 that tells her how to do it.
As previously noted, that paper has nothing to do with CSI as described by Dembski. If you disagree, please demonstrate how they are aligned, provide a rigorous mathematical definition of CSI, and show how to calculate it for the four scenarios I described. MathGrrl
kairosfocus (250), You show a little math in this comment, but I don't see how it is related to Dembski's discussions of CSI, nor does it show how to calculate CSI for the four scenarios I described in the original post. Could you please just directly answer the exact questions I asked? I'll be more than happy to continue the discussion once we have that shared understanding. MathGrrl
PaV (241 and 243),
Interestingly, he saw exactly the problem I saw when I looked a little more closely at Tom Schneider's blog giving info on ev. "Mistakes"? Who figures this out? How is it figured out? The answer is: the programmer.
You are confusing the simulation with what is being simulated. ev simulates the biological systems on which Schneider based his PhD work. Errors are measured in a manner analogous to how binding sites work in those biological systems.
As Dr. Dembski points out, this is no more than a more sophisticated version of Dawkins' "Methinks it is like a weasel" self-correcting version of Darwinism.
Incorrect. ev has no explicit target.
As a follow-up to [241], let's point out what one finds at Schneider's blogsite. We see a graph. Bits of information/nucleotide have increased. Wow. But . . . also notice—as I've already pointed out to MathGrrl—the increase quickly peters out. It flat-lines.
Yes, and it does so not because of any code in the simulator but because the model, simple as it is, is sufficiently rich that it reflects what is observed in biological systems. You seem to misunderstand the distinction between R(sequence) and R(frequency). That flat line is a very interesting result once you do understand that.
And then notice that when "selection" is removed, the "information" is all lost.
Thus demonstrating that evolutionary mechanisms can generate Shannon information.
Well, the "selection" that Schneider alludes to comes exactly from the function that ferrets out the number of mistakes.
Exactly, a model of differential reproductive success in biological systems.
Once this ferreting is turned off (which, though not in the form of a target sequence, comes [per Dembski in the article] from "fitness functions"), voila, no new information.
Certainly, if some organisms weren't better suited to their environment than others, we wouldn't see evolution. I strongly recommend reading Schneider's PhD thesis as well as the ev paper. When you understand the difference between R(sequence) and R(frequency) you'll have a much greater appreciation for the results of ev. MathGrrl
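For anyone chasing the R(sequence)/R(frequency) distinction, here is a minimal sketch following the definition in Schneider's work: the information content of a set of aligned binding sites, summed per position as 2 - H(l) bits. The aligned sites below are invented for illustration, and the small-sample correction e(n) from Schneider's papers is omitted:

```python
from collections import Counter
from math import log2

def r_sequence(sites):
    """Sum over positions l of (2 - H(l)) bits, where H(l) is the Shannon
    entropy of the base frequencies in column l of the aligned sites.
    (Schneider's small-sample correction e(n) is omitted for brevity.)"""
    total = 0.0
    for column in zip(*sites):
        n = len(column)
        h = -sum((c / n) * log2(c / n) for c in Counter(column).values())
        total += 2.0 - h  # 2 bits = maximum entropy over {A, C, G, T}
    return total

# Invented aligned binding sites, for illustration only.
sites = ["TATAAT", "TATGAT", "TACAAT", "TATAAT", "CATAAT"]
print(f"R_sequence = {r_sequence(sites):.2f} bits per site")
```

In ev, the interesting result is that this measured R(sequence) converges toward R(frequency), the information needed to pick the binding sites out of the genome, which is the flat line under discussion.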
PaV (235),
You’ve stated twice that you think ‘people’ should address MathGrrl’s question. First of all, it really isn’t a question. It’s a request. More of a demand.
It's just a request, phrased as politely and constructively as I could. If you don't want to answer it, just say so. Ultimately it's just words on your screen -- you're not being compelled in any way.
Why do think she is entitled to something that would be painstaking work to produce?
Why do you consider it painstaking to define your terms? ID proponents make some strong claims about CSI, it's not unreasonable to request to see the support for those claims.
So here we have MathGrrl who seems perfectly willing to accept Shannon’s simplistic notion of information, but now finds it troublesome that CSI isn’t “rigorously” defined. It is plenty rigorously defined.
Great! Please provide the definition and show how to apply it to the four scenarios I described in the original post. MathGrrl
Mrs. O'Leary (232),
I agree that people should directly address your questions.
Thank you. I hope it happens in this thread.
MathGrrl: Download? I’m a Canadian, hardly short of download capacity, so not clear re problem, but happy to learn.
vjtorley has mentioned problems when the thread gets too long. Thus far I'm not seeing any problems loading. Anyone else? MathGrrl
vjtorley (217),
After reading your posts, I’m beginning to think that your college major wasn’t mathematics, as your handle suggests, but English (you’re quite an articulate person) or possibly biology (since you display familiarity with various software programs designed to mimic evolution).
Thank you for the compliment. Are you suggesting that it is as impossible for a mathematician to be articulate as it is for, to pick a completely random example, a philosopher to be numerate?
Looking through your comments, I can see plenty of breezy, confident assertions along the lines of “Yes, I’ve read that paper,” but so far, NOT ONE SINGLE EQUATION, and NOT ONE SINGLE PIECE OF RIGOROUS MATHEMATICAL ARGUMENTATION from you.
Without searching through the thread, I seem to remember addressing some misconceptions about the NFL theorems. Until someone provides a rigorous mathematical definition of CSI and provides some examples of how to calculate it, though, there is very little math for me to work with.
I invite you to calculate the CSI, using Dembski’s formula. Can you, I wonder?
After reading Dembski's work, no, I cannot be sure I understand his formulation. That's why I'm asking for a rigorous definition and some examples for scenarios similar to those I'm interested in measuring.
I’m calling your bluff.
No bluff, just questions. I'm happy to put in some effort once I understand the concepts better. I found your calculation in the previous thread logical, but even an ID proponent such as yourself had to interpret Dembski's work and you came to the conclusion that gene duplication could create CSI. I strongly suspect that isn't Dembski's view, so I need more clarity before I'm able to objectively calculate CSI. MathGrrl
PaV (212),
In the effort for full disclosure, please tell us exactly what you’re doing these days.
I'm working hard and participating on this blog. You? MathGrrl
PaV (209),
To provide a "rigorous definition" of CSI in the case of any of those programs would require analyzing the programs in depth so as to develop a "chance hypothesis". This would require hours and hours of study, thought, and analysis.
First, a mathematically rigorous definition of CSI should be independent of any particular system. Given how often CSI is used in arguments for ID on this blog, it should be straightforward for someone here to simply produce such a definition. Second, your assertion is not aligned with what Dembski claims in his books and papers. He clearly says that CSI can be calculated without knowing anything about the origins of the object. All you need to look at in GAs are the digital strings representing the genomes of the virtual organisms, just as ID proponents claim to be able to detect CSI in the genomes of biological organisms. Please show exactly how to perform that calculation. MathGrrl
vjtorley (207), You quote part of Dembski's response to Ken Miller's "The Flagellum Unspun: The Collapse of 'Irreducible Complexity'" (a paper well worth reading by everyone involved in the discussion):
Calculate the probability of getting a flagellum by stochastic (and that includes Darwinian) means any way you like, but do calculate it. All such calculations to date have fallen well below my universal probability bound of 10^(-150). . . . To be sure, if a Darwinian pathway exists, the probabilities associated with it would no longer trigger a design inference. But that’s just the point, isn’t it? Namely, whether such a pathway exists in the first place. Miller, it seems, wants me to calculate probabilities associated with indirect Darwinian pathways leading to the flagellum. But until such paths are made explicit, there’s no way to calculate the probabilities.
This highlights a significant problem that I see with some calculations of CSI, namely that they often assume a uniform probability distribution, which is equivalent to assuming de novo generation of particular genomes. Miller touches on this in his paper:
When Dembski turns his attention to the chances of evolving the 30 proteins of the bacterial flagellum, he makes what he regards as a generous assumption. Guessing that each of the proteins of the flagellum have about 300 amino acids, one might calculate that the chances of getting just one such protein to assemble from "random" evolutionary processes would be 20^-300, since there are 20 amino acids specified by the genetic code. Dembski, however, concedes that proteins need not get the exact amino acid sequence right in order to be functional, so he cuts the odds to just 20^-30, which he tells his readers is "on the order of 10^-39" (Dembski 2002a, 301). Since the flagellum requires 30 such proteins, he explains that 30 such probabilities "will all need to be multiplied to form the origination probability" (Dembski 2002a, 301). That would give us an origination probability for the flagellum of 10^-1170, far below the universal probability bound.
This makes it clear that CSI calculated in this fashion is more a measure of our ignorance about the history of a particular system than it is an indicator of intelligent agency. MathGrrl
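The arithmetic in the Miller passage is easy to reproduce; a quick check in Python, working in base-10 exponents only, since probabilities this small underflow ordinary floating point:

```python
from math import log10

# One 300-residue protein, 20 possible amino acids per position:
print(-300 * log10(20))           # ~ -390.3, i.e. 20^-300 ~ 10^-390

# Dembski's relaxed per-protein figure of 20^-30:
per_protein = -30 * log10(20)     # ~ -39.0, "on the order of 10^-39"
print(per_protein)

# Thirty such proteins multiplied together (Dembski rounds to 10^-1170):
flagellum = 30 * per_protein      # ~ -1170.9
print(flagellum, flagellum < -150)  # True: far below the 10^-150 bound
```

Nothing in this arithmetic is in dispute; the dispute, as noted above, is over the uniform-probability assumption it rests on.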
Joseph (206),
I would very much like to understand CSI well enough to test the assertion that it cannot be generated by evolutionary mechanisms.
My aplogies but I don’t believe you.
That's okay! My motivations are utterly immaterial to the core issues of defining CSI and providing examples of how to calculate it.
"No Free Lunch" is readily available. The concept of CSI is thoroughly discussed in it. The math he used to arrive at the 500 bits as the complexity level is all laid out in that book.
If you are able to articulate a mathematically rigorous definition of CSI and calculate it for the four scenarios I detailed in the original post, please do so. I am unable to from the descriptions in Dembski's various books and papers. MathGrrl
Everyone, I'm back from my weekend workshop mentioned in 254 and have finished my prep for the week, so I can get back to the discussion. I'm very pleased to see how it has progressed in my inadvertent absence. I'm going to address a number of the comments, but certainly not close to all. If you think I've overlooked an important point, please call my attention to it. MathGrrl
Joseph, #170.
And if you cannot provide a mathematically rigorous definition of a computer program then you don't know what you are talking about and are a waste of time.
I saw this comment made several times and thought it sensible to point out the existence of Formal Methods of Software Design. Sorry if this was already addressed; I've only read to #170 but searched the page for the terms. I'll return to read the rest later. cams
F/N: One of the most distressing things in the above thread (and previous ones leading up to it) is the repeated strawman -- or, outright false -- assertion that FSCI and/or the broader CSI cannot be worked out for real world biological cases, joined to equally predictable dismissals of observations and calculations when they are made. There is also the usual demand for references in the peer reviewed literature -- in the teeth of cases in point from that literature and the known bias and hostility of the establishment's gatekeepers, which just led to a case where U of K had to settle a career-busting lawsuit for US$125,000.

Being on a complex consultation trip, I do not have the time or focus today for a point by point rebuttal to the several claims like that, so instead I make a few points:

1 --> The UD weak argument correctives 25 - 30 [accessible at top right of this and every UD page] actually provide links to metrics and to cases in the literature, including especially the 2007 Durston case of 35 protein FAMILIES, based on an extension of the H-metric of average information per symbol [often called entropy] in light of protein families.

2 --> I have not found evidence above of a serious engagement of the evidence, but instead a repeated resort to the rhetoric of dismissal.

3 --> On the simple brute force X-metric, I have now presented the rationale for it several times in the thread [not to mention the much more detailed presentation in my always linked note -- "that's a self-reference" being the dismissal (as though such dismissals are not playing the ad hominem and/or appeal to authority rhetorical game, instead of dealing with the merits . . .)], and have used it, only to have it brushed aside without being addressed seriously on the merits.

4 --> In particular, having pointed out how the original post is a case of FSCI as a reliable sign of design [per the explanatory filter . . .], I have pointed out how the same logic and mathematical analysis on observed facts apply to a functionally specific protein of 300 AA's. And, there are easily hundreds of such FSCI-rich proteins working together in the living cell.

5 --> Whistled by in the dark . . .

6 --> Yesterday, I took a moment to address a key biologically relevant case in MG's list, and highlighted how the problem as presented implied -- but breezily brushed over -- a reference to a much wider complex regulatory system that is replete with functionally specific complex Wicken wiring diagram type organisation, implying FSCI, not only in the components [how many proteins, with how many AA's, are involved . . .] but in the implied information content of such a complex nodes, arcs and interfaces organisation.

7 --> The reaction was almost predictably that I did not provide a calculation. I do not think I need to. That we are dealing with well past 125 bytes (i.e. 1,000 bits) of information to simply describe the regulatory network on a nodes, arcs and interfaces basis is obvious, and many items sitting at nodes are complex, information-rich molecules integrated in a co-ordinated way. 1,000 bits is of course the FSCI threshold.

8 --> The real and plainly still unmet challenge to champions of chance and necessity acting without intelligence is to show us that the sort of functionally specific, complex organisation and information associated with these systems can credibly arise -- per OBSERVATION -- by just such blind watchmaker mechanisms.
9 --> The best they can do is to provide genetic or evolutionary algorithm based, intelligently designed software that starts on an island of function and, per algorithmic process with an intelligently specified fitness function and sorting, does hill climbing. That shows intelligent design as the best and only observationally known way to get to such islands of function.

10 --> So, much of the huffing and puffing in the face of actual examples to the contrary -- that CSI is not defined, is not specified mathematically, is meaningless [notice how there simply has not been an answer to whether Orgel and Wicken were meaningless when they wrote in the technical literature in the 70's], and cannot be measured or estimated or calculated for biological systems -- is, pardon my directness, distractive rhetoric in the teeth of facts already and in many cases long since in evidence.

Okay, better get on with further items on today's to-do list. G'day GEM of TKI kairosfocus
Markf, Apparently Joseph is the Id of UD. jon specter
markf, Why can't you produce positive evidence for your position? Why can't you produce your position's methodology so we can compare it to CSI? I know why - because, to a person, evos are intellectual cowards, and that makes it personal. Joseph
#335 Joseph - why do you get so intensely personal in your comments? markf
markf Her "challenge" is bogus for the reasons provided. Evos are alking about gee duplicatisadding information yet ther aren't any gene duplications during the origin of life. Meaning it is clear that MG has erected a strawman. Then there is JR with its drooling about cancer- can anyone be more of a dolt? Most cancers are our fault but JR wants to blame the designer(s). And this is all pathetic because in the end all you evos have to do is actrually step up and start producing positive evidence for your position and CSI will go away. But that ain't going to happen, is it? Joseph
Let’s keep this short and to the point. If we are talking about information (CSI, FSCI, or otherwise), and we are, then a language, “a system of chemical representations (symbols),” must also exist. I am saying that a materialistic explanation of language is, in principle, impossible.
Just to be clear, are you saying that a language has to exist in nature (e.g. in the sequence of DNA nucleotides), or do you mean that nature has to be described by a language? Heinrich
tgpeeler and UB: You seem very concerned that Mathgrrl has not addressed the presence of symbols as a sign of information and therefore design. It seems a bit rough, as her challenge was for someone to provide a mathematical calculation of the CSI or information in certain cases. After all, many leading ID proponents claim that CSI can be measured in bits. If you want to introduce a different criterion for information/design, that is fair enough, but it doesn't answer her challenge and it is not necessarily an evasion on her part not to answer. She has done amazingly well to respond to so many different objections on this thread, and she cannot be expected to respond to every different objection, especially when it does not answer her challenge directly.

I am happy to take up the challenge of whether life contains symbols and whether this is proof of design. Let me state my position - which is that "symbol" can include many shades of meaning. Under one definition it does imply some kind of intention or plan - but symbols of this type are not present in life. Under another definition symbols are present in life but do not imply any kind of design or plan. Perhaps we might pick up the discussion where I left off with UB at #31 above. We agreed that symbols in life, according to his definition, are only present in DNA, not elsewhere. I then asked what the symbols in DNA represent. I think at that point he got distracted, or I missed the answer somewhere. markf
Onlookers: The above onward exchange after my comments y/day morning shows why I felt that the only reasonable purpose for my commenting on this thread was to put some things on record. If you will look at my comment at 320, you will see that I addressed the implications of a gene duplication in the context of a cell: the implied regulated process to create a duplication could well directly put us past any reasonable threshold for complexity. I then proceeded to actually do a sample calculation. Brushed aside. Thus, the situation in this thread is much as VJT described at 315: ____________

>> Regarding ev, PaV has already shown that it is incapable in principle of breaching Dembski's Universal Probability Bound, so I think we can all agree it does not present a real challenge. I notice that Mathgrrl has not returned, and that she has yet to demonstrate her calculating prowess. Until she gets her hands dirty with some raw numbers, and attempts to perform some real calculations, I shall remain skeptical of her claim to be mathematically proficient.

Jemima Racktouey (JR) complains that CSI cannot be calculated for an arbitrary system. Talk about shifting goalposts! First the complaint was that we couldn't calculate specified complexity for anything biological. I replied by doing a calculation for the bacterial flagellum. Then the complaint was that it could only be calculated for one biological system, and not for other irreducibly complex systems. I replied by generously giving Mathgrrl a long list of 40 irreducibly complex systems, together with their descriptions, as well as the numbers required to calculate the specified complexity Chi for ATP synthase. I invited her to finish the calculation. She still hasn't done so. And now the complaint from JR is that we can't calculate specified complexity for anything and everything!

Many meaningful mathematical quantities are not computable by a single algorithm or set of algorithms. JR should know that. That does not render them meaningless. Specified complexity is a meaningful quantity which (as I showed in my example of the Precambrian smiley face) can be calculated for a variety of objects. I conclude that some critics of CSI and specified complexity are making unreasonable demands which ID proponents should not waste their time on. Other more sincere critics have genuine questions that can be addressed on another thread . . . >> ____________

Let's draw some additional conclusions:

1 --> The CSI concept is a recognition of a distinct class of reality found in known engineered systems [e.g. the reasonably comparable mechatronic ones -- cf. an architecture for such systems here in the context of autonomy and robotics], and also the living cell, per observations first made and published in the technical literature by Orgel and Wicken in the 1970's.

2 --> The "meaningful[ness]" and/or real-world significance of CSI and FSCI would THEREFORE be prior to the creation and/or critiques of mathematical models, analyses and metrics since the 1990's. (In short, we have a quality-quantification issue here. Often, quality must be recognised before quantity can be measured: what, before how much.)

3 --> Various metrics are possible, and are applicable in diverse contexts.
4 --> At a first rough-cut level, a simple brute-force metric exploits two facts: once we are at or beyond 1,000 bits of info storage capacity, the search resources of the observed cosmos would be exhausted before 1 in 10^150 of the space could be scanned; and semiotic agents/observers [SAO's] with the ability to judge exist and are part and parcel of science. Together, these allow us to recognise and crudely quantify enough cases to draw some reasonable conclusions.

5 --> Specifically, X = C*S*B, where the SAO identifies specificity S as 1/0, identifies complexity C as 1/0 per the 1,000-bit threshold of info storage capacity, and takes B as the bit depth actually used. This shows us that, say, a post of 143 or more characters in this thread (at 7 bits per ASCII character, just past 1,000 bits), or the original post, is best explained on design. (A code sketch of this metric follows at the end of this comment.)

6 --> Similarly, it points to a protein of sufficient complexity, e.g. a typical 300 AA molecule that must fold and function in the cell's key-lock fit environment, as best explained on design.

7 --> As was shown at 320, it also highlights that the cell duplication process and resulting complexity already point to design. Here is Wiki on Mitosis (for eukaryotes), making the usual admission against interest:
The process of mitosis is fast and highly complex. The sequence of events is divided into stages corresponding to the completion of one set of activities and the start of the next. These stages are interphase, prophase, prometaphase, metaphase, anaphase and telophase. During mitosis the pairs of chromatids condense and attach to fibers that pull the sister chromatids to opposite sides of the cell. The cell then divides in cytokinesis, to produce two identical daughter cells . . .
8 --> The article on the S Phase in which the chromosomes are duplicated, adds:
S-phase (synthesis phase) is the part of the cell cycle in which DNA is replicated, occurring between G1 phase and G2 phase. Precise and accurate DNA replication is necessary to prevent genetic abnormalities which often lead to cell death or disease. Due to the importance, the regulatory pathways that govern this event in eukaryotes are highly conserved . . . . The major event in S-phase is DNA replication. The goal of this process is to create exactly two identical semi-conserved chromosomes. The cell prevents more than one replication from occurring by loading pre-replication complexes onto the DNA at replication origins during G1-phase which are dismantled in S-phase as replication begins. In budding yeast, Cdc6 is degraded, Orc2/6 are phosphorylated and mcm proteins are excluded from the nucleus, preventing re-attachment of the replication machinery (DNA polymerase) to the DNA after initiation. Incredibly, DNA synthesis can occur as fast as 100 nucleotides/second and must be as accurate as 1 wrong base in 10^9 nucleotide additions. . . . . Damage to DNA is detected and fixed during S-phase. When the replication fork comes upon damaged DNA, ATR, a protein kinase, is activated. This kinase initiates several complex downstream pathways causing a halt in the initiation of new replication origins, prevention of mitosis and replication fork stabilization in order to keep the replication bubble open and DNA polymerase complex attached while the damage is being fixed.
9 --> In short, a tightly regulated, operationally complex process that sits on an island of function, and involving many objects and co-ordinated processes that will easily put us well past the threshold for FSCI.

10 --> In this context, the rare error that was focussed on by MG is seen for what it is: an error in a process that is already so plainly sophisticated and functionally specific that it is long since best explained as designed.

11 --> So, again, the first case presented by MG, without acknowledgement of that context and with the artful suggestion of the term "simple," is predicated on begging the question of what is required for cell replication, and for duplication of DNA in that.

12 --> And once design is already clearly on the table as a best explanation, further cases multiply the force of the inference.

13 --> Similarly, the Rube Goldberg dismissal of complexity above, by another commenter, reveals a lack of awareness of what is required to make a complex system that has to respond to real world challenges work. There is a reason why even a PC operating system -- vastly less sophisticated than a living cell -- is complex.

14 --> More sophisticated metric models, and variants on them, have already been linked and discussed. Names like Durston et al and Dembski have been put up.

15 --> VJT has of course done some calculations and has challenged MG to match them, so far without effect.

G'day GEM of TKI kairosfocus
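Point 5's brute-force X-metric is simple enough to put in code. A minimal sketch in Python, with the caveat that the specificity judgment S remains the semiotic observer's call and cannot be supplied by the program; the function name is invented, and the 7-bits-per-character convention follows the 143-character example above:

```python
def x_metric(bits_used, specified, threshold_bits=1000):
    """Brute-force X = C * S * B: C = 1 if past the 1,000-bit threshold,
    S = 1 if judged specified (an observer's call, not computable here),
    B = bit depth actually used. Nonzero X is read as a design inference."""
    c = 1 if bits_used >= threshold_bits else 0
    s = 1 if specified else 0
    return c * s * bits_used

post_bits = 143 * 7  # 143 ASCII characters at 7 bits each = 1001 bits
print(x_metric(post_bits, specified=True))   # 1001: past the threshold
print(x_metric(post_bits, specified=False))  # 0: complexity alone is not enough
```

Whether this counts as the "rigorous mathematical definition" MathGrrl is asking for is, of course, precisely what the thread is disputing.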
Chances are she won't be bringing up the moderation policy. :) Upright BiPed
TGP,
“Let me try once again to make this point.”
Of course, I concur with your point; it is the same one I was making in #192. When the guest author makes the statement that an evolutionary algorithm creates information, she simply assumes the only element in the equation which is (by itself) responsible for the existence of the information. She has so far refused to acknowledge this observation. Yet anyone who questions this observation can peruse Marshall Nirenberg's papers for what he thinks caused phenylalanine to end up in his sample. The rest of us already know the answer, and we are not ignoring it. Even if it means the (already stated) conclusion of the guest author is either a) false, b) over-reaching, or c) incomplete.
“No how. No way. Ever.”
Not much of a consolation prize, is it? For no other reason than this, we will not see any such acknowledgment on this thread. Mathgrrl won't acknowledge the reality on its face, but it will be interesting to see how she handles it if she decides to return. I'm not holding my breath waiting for her simply to acknowledge that the existence of the information has nothing whatsoever to do with the EA.

As a strategic study, it is interesting. Since an observed reality has fatally challenged her conclusion, one which she cannot acknowledge without conceding the point, she is left with only one of three remaining positions. She can divert attention to something else. She can cloak herself from the evidence (perhaps say it doesn't matter; a way of attacking the evidence by dismissing the issue without acknowledging it). Or she can go on ignoring it. Of course, she can also break it into pieces and play one card here and another there.

I'm putting my money on diversion followed by dismissal. Probably a technical foul of some sort, related to the thread perhaps. Previously, she dismissed it by implying that any particular proponent's definition of CSI made the issue go away. Perhaps she'll stay with that. She can always attack me, politely of course. I think she (gracefully) leaves open confrontation to others. Upright BiPed
Upright BiPed @ 292 "The information she is discussing is information which is instantiated in a system of chemical representations (symbols)." Let me try once again to make this point. (And if it is a point not worth making I'd appreciate being enlightened as to why not.) Let's keep this short and to the point. If we are talking about information (CSI, FSCI, or otherwise), and we are, then a language, "a system of chemical representations (symbols)," must also exist. I am saying that a materialistic explanation of language is, in principle, impossible. It is impossible because neither the code nor the rules that govern the code can be explained by reference to the laws of physics. No how. No way. Ever. It takes a free and purposeful will to rationally order symbols to produce a message. It can't be done by an algorithm. Somebody show me one if it can. I'll be most interested to investigate that. This game is long over even if mg, jemima and the rest won't get it. This isn't intellectual resistance, even thought that's how it's being sold. tgpeeler
M. Holcumbrink,
But if you showed up at the bank in a stretch limo, wearing a $10,000 suit, and followed by an entourage of very subservient individuals that are clearly terrified by you, and you approach the banker and say “I would like to deposit a lot of money in your bank”, I would imagine that he would roll out the red carpet for you without having the faintest clue as to how much money you actually intend to deposit.
And that's exactly how many confidence scams start.
I work in design, and my thoughts are “Oh! If only we could design and build systems that mimic life more closely!”
Will you be copying the 1 in 3 failure rate to cancer? Or the very narrow range of temperatures and pressures life works at? Will you be designing machines to be like parasites on other machines? To be red in tooth and claw? Perhaps not. JemimaRacktouey
JR: "Funny how my bank manager wont’ accept my claims of being “money-rich” without me putting a figure on it." But if you showed up at the bank in a stretch limo, wearing a $10,000 suit, and followed by an entourage of very subservient individuals that are clearly terrified by you, and you approach the banker and say "I would like to deposit a lot of money in your bank", I would imagine that he would roll out the red carpet for you without having the faintest clue as to how much money you actually intend to deposit. But it's my guess that he would expect it to be a lot. JR: "The funny thing is that good design is not at all like what we see in the cell. The mark of a good design is simplicity, not complexity. Or at least, as simple as you can make it given the task at hand. The diagrams for the metabolic pathways of a cell include feedback loops all over the place, loops within loops within loops. Nobody sane designs stuff like that, there is just no reason to". I very seriously doubt you are familiar with good design as it pertains to mechatronics and biomimetics. The closer you get to autonomy (drones) or to creating lifelike robotics, the more automatic controls you need, and they constantly have to communicate with each other in real time. Eventually you end up with a tangled web of inputs, computations and outputs (it's the only way to make it work). You sound like you played with one of those programmable Lego sets and now you think engineering sophisticated integrated systems is simple and easy. Truth is, the cell alone is on the order of sophistication of an F22, including the glass cockpit. Do you have any idea how many systems are on board an F22, fully automated and integrated? Do you think something like that is simple? What about an ordinary hard drive? Simple? Besides that, any time I read about biology, all I hear, even from Darwinists, is how elegant these systems are, replete with astonishingly elegant design solutions (which is of course attributed to the efficacy of NS in creating such systems, to which I roll my eyes). And the more complicated such systems are, the more susceptible to degradation they are. Systems like these that have been designed by man are constantly having to be maintained, and the degree of maintenance required is geometric relative to the degree of sophistication. In biological systems the maintenance is actually part of the system itself, fully integrated and autonomous (which in and of itself is an engineering marvel), but when something goes wrong with the maintenance systems, we see things like cancer, which is not part of the design by any stretch of the imagination. JR: "Any actual person working in design would look at those far-too-complex “designs” and say “I’d never ever design something in that way”". I work in design, and my thoughts are "Oh! If only we could design and build systems that mimic life more closely!" Hence the field of biomimetics. Why would there be an entire field of engineering devoted to understanding and duplicating the design solutions we see in biological systems if "no actual person working in design" would ever want to "design something in that way"? You show your ignorance here, JR. Well, either that or you are guilty of sophistry. Which is it, by the way? M. Holcumbrink
Collin
I think that Mathgrrl is trying to get us to admit that CSI is not rigorously, mathematically calculable.
It's not a case of admitting it, it's a case of asking for such a calculation to be performed and then judging the situation on the results obtained. As O'Leary has just noted,
Meanwhile, perhaps junior ID theorists cannot formulate a single definition of complex specified information at present.
it seems that the more junior members of the ID camp need to wait for updated instructions from the ID seniors. I've suggested to O'Leary that perhaps she should ask Dr Dembski to jump into the discussion. If anybody can calculate CSI for the 4 examples given I'm sure it's the good Dr. JemimaRacktouey
I think that Mathgrrl is trying to get us to admit that CSI is not rigorously, mathematically calculable. Well, she's asking lay people to admit something that they do not know, for the most part. Yet VJTorley has given her some calculations, as have others. Maybe she thinks that they are not rigorous enough. Fine, but we'll just have to agree to disagree.

The definition of specified complexity is easy enough. Dembski gives a good illustration: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." Whether this is calculable or not, I don't know. But it is logical and coherent, and leads one to be able to infer that Shakespeare's sonnet was designed regardless of whether or not we knew that Shakespeare wrote it. I think that a cryptologist or SETI scientist would agree. Collin
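To make Dembski's letter/random-string/sonnet illustration concrete, here is a toy sketch in Python. Both tests are crude stand-ins of my own choosing, not Dembski's actual measures: "complexity" is just string length past a threshold, and "specification" is whether most tokens match an independently given word list.

    # Toy model only: length stands in for complexity, word hits for specification.
    COMMON_WORDS = {"shall", "i", "compare", "thee", "to", "a", "summers", "day"}

    def is_complex(s, threshold=20):
        # Improbability under a random-typing model grows with length.
        return len(s) > threshold

    def is_specified(s, min_fraction=0.8):
        # Does the string conform to an independently given pattern?
        tokens = s.lower().split()
        hits = sum(1 for t in tokens if t in COMMON_WORDS)
        return bool(tokens) and hits / len(tokens) >= min_fraction

    for text in ("A",                                       # specified, not complex
                 "qxv jke zzpw ghty lmno bbfr ooqs",        # complex, not specified
                 "Shall I compare thee to a summers day"):  # complex AND specified
        print(repr(text), is_complex(text), is_specified(text))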
PAV:
So, MathGrrl’s question is not a question. It’s a demand for a demonstration, and nothing less.
So? Asking an ID scientist to demonstrate what he claims is one of the key tools in his toolbox hardly seems like some nefarious plot. The few actual scientists I know are positively giddy when asked to talk about their work. I can't shut them up, even after the food arrives.
As I demonstrated above, she has not understood what a specification
Yes, and by my reading, she agrees with you on that point and has asked for help clearing it up.
The only person in the ID world providing mathematical definitions of CSI is Bill Dembski. She should have known this from the beginning.
It might have saved everyone a lot of time and frustration if they had just said 700 comments (over two threads) ago that they can't define CSI in an unambiguous manner.
She didn't want us to give a "rigorous mathematical definition" of CSI, she wanted us to tear apart the programs and assess them using the notions of CSI. Why should I be expected to respond to such a request on my time and energy? Am I some kind of paid consultant?
I, for one, was under the impression that you were an ID scientist. And, as I am led to understand, spending inordinate amounts of time developing ideas and sharing them widely is the process of science and the work of scientists.
We can calculate it. But it is a very labor- and time-intensive operation. Why are we supposed to make this calculation?
Because, as I have been told, that is what scientists do if they want the broader world to take notice of their work.
Why isn't she expected to show that she understands CSI and demonstrate that understanding by herself analyzing these programs? If she came up with something disproving CSI, THEN, and ONLY THEN, would it be incumbent upon the ID community to rebut her findings.
How can she disprove what hasn't been shown to be proven? CSI is an interesting concept. But, until someone actually shows it in action (it is easy, after all, right? You said so yourself), it seems like demands for disproof are premature. Your demands for disproof of what you aren't yet willing to demonstrate look kinda like this: I claim that I am the most interesting man in the world. Now you must disprove that. And it is insufficient for you to say that I am nothing more than an internet blog troll, because someone else likely disagrees with you. You must demonstrate it with such rigor that everyone agrees that I am not very interesting.
Even Bill Dembski can’t “agree” on a definition of CSI. He no longer is using it, in a sense. He now is using “specified complexity”. Others here at UD want to stick directly in the “information” area and have our own intuitive ideas of what CSI should look like, and what we should be looking for in biological systems. Is there something wrong with this?
Well, it renders your demand that Mathgrrl be gone and not come back until she understands CSI a little confusing. How is she supposed to demonstrate an understanding of a central ID concept which actual ID scientists can't agree on? It might save you more time if you shortened your request that she go away until she understands CSI to a request that she just go away. jon specter
KF,
But notice, please: there has been a provision of relevant calculations and metrics all along, just that they were soon obfuscated in the chaos of a hot debate. Again and again.
Unfortunately the vast majority were published in books intended for the lay reader. If a usable, rigorous definition of CSI had been published in the mathematical literature, I doubt this thread would even exist. Can you provide such a definition of CSI so that it can be applied to a generic situation? If not, will you admit that such is not possible, or will you continue to cloud the issue with rhetoric and demands that the origin of the "system" be explained (thereby winning the argument from both sides: heads, CSI can be calculated; tails, CSI cannot be calculated, but you can't explain the origin of the system we're talking about, so you lose)? JemimaRacktouey
The funny thing is that good design is not at all like what we see in the cell. The mark of a good design is simplicity, not complexity. Or at least, as simple as you can make it given the task at hand. The diagrams for the metabolic pathways of a cell include feedback loops all over the place, loops within loops within loops. Nobody sane designs stuff like that, there is just no reason to. It would be like including more ways for something to go wrong than to go right, and it makes it impossible to debug and of course difficult to roll out into production.

So if some of the people who say things like "the multiple layers of connectivity and interaction prove design" really knew anything about actual "design" they would conclude that the designer was a process exactly like unguided evolution. Otherwise, extrapolating from our current experience (which ID is always telling us to do), the designer simply did not know what it was doing and left it all up to evolution. These "multiple layers of interacting error correcting codes" that scream design to you all are not so rigorous as to prevent 1 in 3 people getting cancer.

Any actual person working in design would look at those far-too-complex "designs" and say "I'd never ever design something in that way". And they do and have. Yet here such mess is considered a hallmark of a genius designer! Go figure... JemimaRacktouey
kairosfocus,
So, let us ask, how do we get TO a “simple . . . duplication”? Or, more specifically, first, to the functional, controlled, regulated expression of genes and their replication when a cell divides?
Yes, of course. The origin of such systems must be worked out before we can talk about the CSI present in such systems. In fact, we can probably determine that there is so much CSI in the system that we've no actual need to calculate a specific value as it's over the UPB.
Thus, once we have gene duplication, we have already had something that regulates and expresses replication, which is itself going to be FSCI-rich, if the just linked diagrams are any indication.
Ah yes. Such a complex system would by definition have loads of FSCI, so much so that there's no actual need to put a figure on it. It is FSCI-rich. Funny how my bank manager won't accept my claims of being "money-rich" without me putting a figure on it.
That implied capacity, BTW VJT, is what seems to be pushing you over the threshold of CSI when you have such a duplication.
Odd how we can go over the threshold of CSI when so far all we know about the amount of CSI is that it's a "rich" amount. Your sleight of hand has been noted. No need to calculate CSI if you know it's there. And if CSI is present that's a reliable indicator of intelligence. Talk about assuming your conclusions!
8 –> Doubling such a short protein would jump us to 1,200 functionally specific bits, and the crude, brute force “a cubit is the measure from elbow to fingertips” 1,000 bit threshold metric would pick this up as passing the FSCI threshold, on which the explanatory filter would point to design as best explanation.
Would you be able to show the details of that calculation? An example of the explanatory filter in action is a rarer beast than a calculation of CSI. And here you seem to be agreeing that gene duplications do in fact generate CSI, and therefore that undirected evolution is capable of generating CSI. Interesting.
But that is not a pure chance event, it is within a highly complex island of function, and so we are looking at hill climbing within such an island.
Ah, so even if a duplication generates CSI, that duplication was still placed on that "island of function" by a designer, so it's still design really.
Where also the focus of design theory is how do we get to islands of function, not how we move around within such an island,
Except it's not, really, is it? Unless you can reference a relevant paper published in the literature, not a self-published website or other DI-sponsored site.
to one where there is a duplication in a regulatory network, we have brought into focus a much wider set of complexity, that pushes us over the FSCI threshold.
Does it? Then you'll have no problem showing the calculations that lead you to this conclusion.
13 –> In turn, that points to design as best explanation for the SYSTEM capable of that duplication.
Of course! The system is only capable of a random duplication at some random point in DNA that might cause cancer or any one of a thousand debilitating conditions because it was designed that way. Thanks for clearing that up. It appears the "intelligent designer" is the d-e-v-i-l.
20 –> So, MG’s tickler case no 1 points to a deeper set of connexions, and the crude, brute force FSCI criterion and metric comes up trumps.
While you may proclaim victory, that would only work in an English class. This is a mathematics class, and you won't get an A+ here without doing some actual math. Rhetoric should be saved for the debating class. D- JemimaRacktouey
F/N: PAV, I suggest that CSI and FSCI have never ever been about raw bits, which is simply the key Shannon-Hartley metric on negative log probabilities. (One can further argue that so soon as the issue of a posteriori probabilities appears in Shannon's analysis, RAW BITS HAVE GONE OUT TOO . . . as the implication is that a judging, semiotic agent is distinguishing signal from noise on an implicit inference to design on specified complexity of meaningful messages, as opposed to random patterns involved in noise reflecting raw balances of probabilities.)

In real world context, we use bits as a volume measure: so much capacity to hold or transfer information. But we are usually dealing with FUNCTIONAL information being held in that volume, e.g. "this jpg is 840 k bits" or whatever. And of course such a jpg would often be functionally specific, e.g. it contains a portrait of President Obama, which can only be dosed with noise up to a certain level before it loses function, and eventually would disintegrate into meaningless snow.

______________

F/N 2: As to the notion that if someone had asked how to get the position from the movement of an object, one would "simply" provide the kinematics and/or dynamics, let us just say this is after the debates have been had. The debates were very hot indeed, and with major political overtones, 400 or so years ago. They took generations to settle out. And motives, rhetorical tricks and traps were very much a part of the debates. As well as unjustified career busting. All of which sound ever so familiar today. But notice, please: there has been a provision of relevant calculations and metrics all along, just that they were soon obfuscated in the chaos of a hot debate. Again and again. kairosfocus
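For what it is worth, the Hartley log measure kairosfocus refers to fits in a few lines of Python. The uniform probability model (every symbol equally likely) is an illustrative simplification of mine, not part of the argument above:

    import math

    def hartley_bits(p):
        # Information (in bits) of an outcome with probability p: -log2(p).
        return -math.log2(p)

    # Under a uniform model, each of the 4 DNA bases carries 2 bits:
    print(hartley_bits(1/4))          # 2.0 bits per base
    print(300 * hartley_bits(1/4))    # 600.0 bits of raw capacity in 300 bases

    # Raw capacity is indifferent to function: an 840 kbit file "holds"
    # 840 kbits whether it encodes a portrait or meaningless snow.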
F/N: I pause to look at one of MG's scenarios:
A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
1 --> But, a gene DUPLICATION ain't "simple"!
(Observe: "protein" implies a cellular context of functional processes that are tightly regulated and integrated, a la the infamous Biochemist's chart of cellular metabolic reactions. Talk about a Wicken wiring diagram well past the 1,000 yes/no decisions threshold!)
2 --> As the above implies, we are now also dealing with a regulatory process -- one that will have its own Wicken wiring diagram to lay out the architecture of the control loop process [cf examples here] -- that controls expression of the information in the genetic code.

3 --> So, let us ask, how do we get TO a "simple . . . duplication"? Or, more specifically, first, to the functional, controlled, regulated expression of genes and their replication when a cell divides?

4 --> ANS: By having a higher order regulatory network that responds to environmental and internal states and signals in a co-ordinated fashion. Thus, once we have gene duplication, we have already had something that regulates and expresses replication, which is itself going to be FSCI-rich, if the just linked diagrams are any indication.

5 --> That implied capacity, BTW VJT, is what seems to be pushing you over the threshold of CSI when you have such a duplication.

6 --> In other words the Dembski analysis is (correctly!) picking up that if lightning strikes the same unlikely place twice, something significant is happening.

7 --> Suppose the first protein coded for takes up say 600 bits [i.e. 300 bases, or 100 AA's].

8 --> Doubling such a short protein would jump us to 1,200 functionally specific bits, and the crude, brute force "a cubit is the measure from elbow to fingertips" 1,000 bit threshold metric would pick this up as passing the FSCI threshold, on which the explanatory filter would point to design as best explanation.

9 --> In short, (i) lightning is hitting the same unlikely place twice, which implies (ii) a capacity to target that place by replicating, regulating expression, etc, and that (iii) this is so unlikely on chance plus bare mechanical necessity, that the best explanation is design. For, (iv) designers are known to do such duplications, and to set up regulatory systems that have counting loops that control how many times something is to be expressed or done: do until, do while etc.

10 --> Of course, if a counter point in the regulatory network suffers a simple mutation that triggers the double replication of the gene, that narrow outcome is well within the range of a chance event.

11 --> But that is not a pure chance event, it is within a highly complex island of function, and so we are looking at hill climbing within such an island. (Notice the centrality of this islands of function in wider config spaces concept. Where also the focus of design theory is how do we get to islands of function, not how we move around within such an island, except that hill climbing implies a deeper level of functional complexity to promote robustness and adaptability. That holds for the FSCI X-metric, and it holds for the Durston et al FSC FITS metric, and it holds for Dembski's CSI hot zone metric.)

12 --> So, by broadening the context from one where a protein molecule of say 100 AA's exists, which is within threshold -- so, just possibly by chance on the gamut of our cosmos, it could come about by chemistry, but of course without the cell's context, it is not functioning, it is just an unusual molecule -- to one where there is a duplication in a regulatory network, we have brought into focus a much wider set of complexity, that pushes us over the FSCI threshold.

13 --> In turn, that points to design as best explanation for the SYSTEM capable of that duplication.

14 --> This reminds me of the time I worked out and had to seriously reflect on the implications of the truth table (A AND B) => Q, namely that A => Q and/or B => Q.
15 --> Deeply puzzled [this is tantamount to saying that "Socrates is a Man" and/or "Men are Mortal" would INDEPENDENTLY imply "Socrates is Mortal"], I spoke with one of our budding mathematicians over in my next door neighbour Math Dept.

16 --> But of course, he said, blowing away my previous understanding that the interaction between the two premises was key to the syllogism. If men are mortal, Socrates is mortal. If Socrates is a man, he is mortal. (That is, the Maths was implicitly capturing a deeper reality than I had spotted.)

17 --> So, I came away with a deeper respect for the math, and a higher confidence in it.

18 --> The tickler? I was led to do the particular analysis, both by truth tables and by the Boolean Algebra, because of a theological issue over interpretation of a particular verse in the NT which in effect is of form (A AND B) => Q.

19 --> So, the deeper yet lesson is that reality -- on abundant experience as well as the implication of the key first principles of right reason being models of reality -- is a unified whole, and if we are capturing that reality in our models, we may be surprised by deeper connexions. But, we should respect them.

20 --> So, MG's tickler case no 1 points to a deeper set of connexions, and the crude, brute force FSCI criterion and metric comes up trumps.

____________

Back to my main order of business . . . GEM of TKI kairosfocus
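The arithmetic in steps 7 and 8 above can be written out in a short Python sketch; it assumes only the 2-bits-per-base uniform model and the 1,000-bit brute-force threshold named in the comment:

    BITS_PER_BASE = 2     # log2(4) under a uniform model over the four bases
    THRESHOLD = 1000      # the crude FSCI threshold cited in the comment

    original = 300 * BITS_PER_BASE   # 300 bases (~100 amino acids) -> 600 bits
    duplicated = 2 * original        # gene duplication doubles it -> 1,200 bits

    print(original, original > THRESHOLD)       # 600 False: below the threshold
    print(duplicated, duplicated > THRESHOLD)   # 1200 True: crosses the threshold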
#294 PAV "I've just done a word-check in the "Specification: The Pattern That Signifies Intelligence." (It is "specified complexity" that is detailed.) It doesn't appear. So, actually, it is IMPOSSIBLE to give a "rigorous mathematical definition" of CSI based on the paper."

PaV - a word check is hardly sufficient. Read (and understand) the addendum Note to Readers of TDI & NFL, which begins: "Readers familiar with my books The Design Inference and No Free Lunch will note that my treatment of specification and specified complexity there (specificity, as such, does not appear explicitly in these books, though it is there implicitly) diverges from my treatment of these concepts in this paper. The changes in my account of these concepts here should be viewed as a simplification, clarification, extension, and refinement of my previous work, not as a radical departure from it. To see this, it will help to understand what prompted this new treatment of specification and specified complexity as well as why it remains in harmony with my past treatment."

Whatever you want to call it, this is clearly an attempt to mathematically define the quantity that is supposed to be the hallmark of design, by the foremost theoretician in the ID world. It may not have the exact label "CSI" but the concept is meant to play the same role and supersede the definition of CSI in NFL. markf
"I notice that Mathgrrl has not returned, and that she has yet to demonstrate her calculating prowess. Until she gets her hands dirty with some raw numbers, and attempts to perform some real calculations, I shall remain skeptical of her claim to be mathematically proficient."

vj - her mathematical prowess is irrelevant. The challenge to show how you can calculate CSI is the same challenge whoever makes it. What do you want her to do - a maths exam? markf
I'm a visitor here, so perhaps I'm not familiar with the conventions of this blog. But if this were a physics blog and an Aristotelian asked how to calculate the position of an object from its motion, I wouldn't expect the respondents to spend time arguing about the motives of the poster, or whether objects remain in motion or naturally come to rest -- I'd expect someone to simply post:

y = x + vt + (1/2)at²

where:
y = final position
x = initial position
v = initial velocity
a = acceleration
t = time

If an alchemist asked on a chemistry blog how one might calculate the pressure of a gas, one wouldn't argue about the nobility of gold or the Philosopher's Stone -- one would simply post:

p = NkT/V

where:
p = absolute pressure of the gas
N = number of gas molecules
k = Boltzmann's constant
T = temperature of the gas
V = volume of the gas

And if a young-earth creationist asked on a biology blog how one can determine the relative frequencies of the alleles of a gene in a population, one wouldn't argue about the literal interpretation of Genesis -- one would simply post:

p² + 2pq + q² = 1

where:
p = population frequency of allele 1
q = population frequency of allele 2

These are examples of clear, detailed ways to calculate values, the kind of equations that practicing scientists use all the time in quotidian research. Providing these equations allows one to make explicit quantitative calculations of the values, to test these values against the real world, and even to examine the variables and assumptions that underlie the equations. Is there any reason the same sort of clarity cannot be provided for CSI? Tulse
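All three of Tulse's formulas are indeed runnable in a few lines of Python; the sample inputs below are illustrative only:

    def position(x0, v0, a, t):
        # Kinematics: y = x + vt + (1/2)at^2
        return x0 + v0 * t + 0.5 * a * t**2

    def gas_pressure(n_molecules, temperature_k, volume_m3):
        # Ideal gas law: p = NkT/V, with k Boltzmann's constant
        k = 1.380649e-23  # J/K
        return n_molecules * k * temperature_k / volume_m3

    def hardy_weinberg(p):
        # Genotype frequencies p^2, 2pq, q^2, with q = 1 - p
        q = 1.0 - p
        return p**2, 2 * p * q, q**2

    print(position(0.0, 10.0, -9.8, 1.0))    # 5.1 metres after one second
    print(gas_pressure(1e23, 300.0, 0.01))   # about 41,419 pascals
    print(hardy_weinberg(0.7))               # (0.49, 0.42, 0.09)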
Okay, let me see if I understand this correctly. PaV has asserted that Dembski has "in a sense" abandoned CSI (something that posters in this thread have been vigorously trying to defend) in favor of "specified complexity". So, is there some way to calculate it (specified complexity) for the examples MathGrrl has cited? If not, it doesn't help the cause very much at all. And yes, there "is something wrong with this" if you cannot adequately and consistently describe something. Just having an intuitive idea of "what CSI should look like" (wait, didn't we abandon that term?) isn't going to stand up to scrutiny. Muramasa
Hi everyone, This will definitely be my last post on this lengthy thread, as it is taking too long for me to bring it up on my PC.

Markf has raised some substantive points regarding specified complexity in his most recent post. I will be arguing in a forthcoming post that he has misunderstood Dembski's argument on pages 18-19 of his paper. More of that anon.

Regarding gene duplication, here's an excellent blog post by Casey Luskin on why gene duplication doesn't increase CSI: Jonathan Wells Hits an Evolutionary Nerve. I would urge everyone who is interested in gene duplication and CSI to read it. See also A Response to Dr. Dawkins' "The Information Challenge" http://www.discovery.org/a/4278 and The NCSE, Judge Jones, and Citation Bluffs About the Origin of New Functional Genetic Information http://www.discovery.org/a/14251

Regarding ev, PaV has already shown that it is incapable in principle of breaching Dembski's Universal Probability Bound, so I think we can all agree it does not present a real challenge.

I notice that Mathgrrl has not returned, and that she has yet to demonstrate her calculating prowess. Until she gets her hands dirty with some raw numbers, and attempts to perform some real calculations, I shall remain skeptical of her claim to be mathematically proficient.

Jemima Racktouey (JR) complains that CSI cannot be calculated for an arbitrary system. Talk about shifting goalposts! First the complaint was that we couldn't calculate specified complexity for anything biological. I replied by doing a calculation for the bacterial flagellum. Then the complaint was that it could only be calculated for one biological system, and not for other irreducibly complex systems. I replied by generously giving Mathgrrl a long list of 40 irreducibly complex systems, together with their descriptions, as well as the numbers required to calculate the specified complexity Chi for ATP synthase. I invited her to finish the calculation. She still hasn't done so. And now the complaint from JR is that we can't calculate specified complexity for anything and everything!

Many meaningful mathematical quantities are not computable by a single algorithm or set of algorithms. JR should know that. That does not render them meaningless. Specified complexity is a meaningful quantity which (as I showed in my example of the Precambrian smiley face) can be calculated for a variety of objects.

I conclude that some critics of CSI and specified complexity are making unreasonable demands which ID proponents should not waste their time on. Other more sincere critics have genuine questions that can be addressed on another thread. That's what I'll attempt to do in my next thread. vjtorley
Excellent response PaV ;) TheForthcoming
Jemima: "If by “pelting questions” you mean the repeated requests to provide a rigorous mathematical definition of CSI then that’s the purpose of this thread." No it's not. CSI belongs to Bill Dembski. He provided a rigorous definition of CSI in NFL. What MathGrrl has done is asked a trick question. CSI does not appear in the Specification: The Pattern that Signifies Intelligence. In that paper, Dembski moves away from "bits" per se, and comes up with a number that can be negative, positive, and an indicator of SPECIFIED COMPLEXITY---not CSI--if it's greater than one. So, MathGrrl's question is not a question. It's a demand for a demonstration, and nothing less. As I demonstrated above, she has not understood what a specification is or else she would have provided different characterizations of the "patterns" involved in these EA programs. The only one that she provided that fit the description of a pattern is for the ev program. In the ev program you have, all told, 260 binary digits in the output ( I now have a copy of Schneider's paper). 2^260 = approx. 10^100. This is below the "static" UPB ("static" per the Specification paper), and hence possible due to chance occurrences. But, of course, there are all kinds of other problems with the program---as pointed out in the papers of Truman and Dembski that I've cited. Muramasa:
Onlookers: If I may summarize this thread: 1) Supporters of ID are not able to calculate CSI, though they claim it to be easy to do. 2) When asked repeatedly to do so, they attempt to change the subject. 3) Supporters of ID cannot even agree on how to define CSI.
As to 1): But I did calculate the complexity of the ev program. It was very easy. And it fell short of the UPB. This was the only program for which MathGrrl provided a proper "specification" (although she used N, as if the output bit length could be any size whatsoever, when, in fact, she ought to have known full well that N in the ev program was 260 binary digits). And, QUITE FRANKLY, the answer to her request is that if she wants a rigorous definition of CSI then she should read NFL; and, if any questions remain, then she should email Bill Dembski. The only person in the ID world providing mathematical definitions of CSI is Bill Dembski. She should have known this from the beginning. She didn't want us to give a "rigorous mathematical definition" of CSI, she wanted us to tear apart the programs and assess them using the notions of CSI. Why should I be expected to respond to such a request on my time and energy? Am I some kind of paid consultant? Is she Secretary of State, or the IRS Commissioner, and if I don't do it, I'll end up in big trouble? She seems like a big girl. Let her do the hard work if she's so interested.

As to 2): We can calculate it. But it is a very labor- and time-intensive operation. Why are we supposed to make this calculation? Why isn't she expected to show that she understands CSI and demonstrate that understanding by herself analyzing these programs? If she came up with something disproving CSI, THEN, and ONLY THEN, would it be incumbent upon the ID community to rebut her findings. Let her go first. I've got better things to do. As it is, I've wasted more time on this post than I care to think about. I've got Schneider's paper and am reading it. It's a big waste of time. Why? Because there isn't any CSI there. And that ALWAYS proves to be the case with EA programs.

As to 3): Even Bill Dembski can't "agree" on a definition of CSI. He no longer is using it, in a sense. He now is using "specified complexity". Others here at UD want to stick directly in the "information" area and have our own intuitive ideas of what CSI should look like, and what we should be looking for in biological systems. Is there something wrong with this?

Let me tell you what, Muramasa: you go buy a copy of No Free Lunch, read it, understand it, and then come back here and demonstrate that you understand CSI, and then we'll take your criticisms seriously. PaV
Upright Biped, Boy, I'm not making myself clear, am I? I thought you were saying something like "it wouldn't matter if you could explain the origin of this or that complex system (for example, the flagellum) as long as you can't explain the origin of biological information in the first place." I take that to be Joseph's position (as far as I understand it), and I thought it was implied in the passage from you I quoted above. QuiteID
Onlookers: If I may summarize this thread: 1) Supporters of ID are not able to calculate CSI, though they claim it to be easy to do. 2) When asked repeatedly to do so, they attempt to change the subject. 3) Supporters of ID cannot even agree on how to define CSI. Muramasa
By the way JR, I didn't make my objection on this thread without explaining what the observation was about. If you'd like to know, then you are welcome to read it. Upright BiPed
A shout to MH before I take off. Hey Mike - I'm impressed with your choice of books. JR, no offense intended, but the questions you are asking tell me you don't understand how the issue I raised relates to the topic of the thread. Cheers Upright BiPed
Upright, What are the limits of ID/evolution then? KF has previously suggested that the body plan of a cow is beyond the capability of evolution to generate and therefore must have been designed. Do you agree or disagree with that view? JemimaRacktouey
Upright,
I can’t think of a reason why evolution wouldn’t be possible.
Do you agree CSI can be generated by known evolutionary mechanisms, e.g. gene duplication? JemimaRacktouey
QID, My response to you was not snarky. I gave you my answer. Evolution would be in action. I don't know why you ask the question. Are you suggesting some sort of line of demarcation? You are free to make that case if you wish. But if you are implying that, if ID should recognize a requirement for the origin of life, then the rest of biology is off-limits, I disagree; but more than just disagree, I don't get it. No offense. Upright BiPed
Upright,
with a demonstrated refusal to address certain issues.
Issues which you would like to raise but which are not necessarily within the scope of the OP. So perhaps it's not surprising those issues don't get addressed to your satisfaction.
Then you don’t understand the difference between the input and the output.
Perhaps. Perhaps not. But I'll wait and see what MathGrrl has to say.
You still have questions of whether CSI even exists? You do realize, don't you, that you are creating CSI in order to pose the question?
How do you know I'm creating it if you don't know how to measure it? If you don't know how to measure it, how can you be sure it's there at all? If CSI is present in the messages I'm creating then it's present in the parasites that evolve in one of the examples given in the OP. And the challenge is: can you measure it, as well as simply claim it obviously exists? As per the OP. Can you? Can anyone? JemimaRacktouey
Upright Biped [301], my question was serious, though your answer was mainly snark. Let me restate: your earlier comment suggested that you think anything after the establishment of the "symbol system" at the beginning of life is irrelevant to ID. Am I correct in that characterization? Joseph seems to have the same view, though he uses different terms. QuiteID
#298
until it's been shown that CSI can be determined for an arbitrary system in an objective way, your questions are, I believe, premature.
Then you don't understand the difference between the input and the output. Upright BiPed
QID asks if "all evolution is possible after the origin of life?" I can't think of a reason why evolution wouldn't be possible. And I can't think of a reason why anyone should be prohibited from studying it. Upright BiPed
I'm trying to understand all of this. MathGrrl asked for a demonstration of the math behind CSI (as defined by Dr Dembski). I too would like to see the math. critter
#293 1) You are correct, the "purpose of this thread" has certainly unfolded, along with a lack of rigorous rebuttal, and with a demonstrated refusal to address certain issues. 2) What is the ultimate purpose? I take her at her word; do you suggest I do otherwise? 3) Secret agency, Orwellian manifesto, conspiracy theory? My comments specifically pertain to this thread. 4) You still have questions of whether CSI even exists? You do realize, don't you, that you are creating CSI in order to pose the question? Here's one for ya - CSI is as real as gravity. 5) No, I admitted nothing of the sort. Slow down. By the way, your suggestion would mean that any questions raised can be classified as a failure to answer. Nothing could survive, ever, if that is the case. 6) You need to concentrate on the evidence, not on me. Upright BiPed
Upright
My comment was obviously unwelcome, as evidenced by the fact that she did not attempt to address it in any meaningful way.
I guess it never occurred to you that it might simply have been off-topic, irrelevant, or tangential to the issue at hand. CSI can either be calculated for an arbitrary system or not, and once that has been determined then perhaps your points will take on more relevance. But until it's been shown that CSI can be determined for an arbitrary system in an objective way, your questions are, I believe, premature. As I already noted, you want to have your cake and eat it. First we have to prove the cake exists. JemimaRacktouey
KF,
Functional specificity is an observable. Complexity can be identified using a threshold. And specific quanta of relevant information meeting these criteria can be identified using the basic metric, the bit.
Surely it would be more instructive to put your obviously extensive knowledge of CSI to use and actually address the request in the original post, rather than talk about how easily it could be done if only you were to choose to do so?
That is why I have asked MG to answer: were Orgel and Wicken “meaningless”?
And yet again we see distractions strewn into the air like blazing straw men. Why does the ability to calculate CSI depend on MG's answers to your questions? Whatever the answer, it cannot affect your ability to perform a calculation that everybody here seems to think can be done but nobody can actually do.
When we can see the point of a simple, crude, brute force model and metric, then we can profitably progress to more complex cases.
Ah, so you have to see the "point of it" before you'll perform the calculations? Seems it would be easier just to do it and move the thread on.
So, MG: were or were not O & W “meaningless” when they spoke of specified complexity and wiring diagram based functional organisation?
Whatever the answer, will you then be able to calculate CSI for the examples given? If not, then what is the purpose of your question except to shift the conversation to grounds that you feel strong on? Why do you feel the need to define the acceptable boundaries of every debate you partake in? Can't you stick to the topic of the thread rather than add your distractions and rabbit-trail questions?

If you can't calculate CSI for the examples given, then I'm sure there are other threads where your pseudo-scientific smokescreen (I've read some of your website) will not be pierced as simply as it has been here. It has been pierced by the simple fact that you've been unable to perform the relevant calculations, and have instead given every reason under the sun as to why it's the wrong question to be asking in the first place, and furthermore that you have the right questions to ask and those questions need to be answered by MG. So in fact MG should not even have considered coming here and asking for what everybody claims is easily done, and should have consulted you first, KF, before even thinking to ask such irrelevant questions. Luckily you don't run the world. JemimaRacktouey
JemimaRacktouey: "What's the 'production history' behind the bacterial flagellum? DNA?" And spliceosomes, and ribosomes, etc. However, one can correlate the proteins found in the bacterial flagellum with its DNA. It maps. Information is only information if it can be translated. Most biological structures, but not all, can be translated from DNA. PaV
If your issue has nothing to do with mathematics, you're raising it in the wrong thread. Although it does seem from all other comments that CSI is either not sufficiently well defined or too difficult to calculate to be of any use in determining agency. smokesignals
Upright Biped, An associate professor? Where did she say that? I must have missed something. Can you clarify this for me?
If the EA does not establish the mapping of those representations, then it must rely on those relationships being a pre-existing quality. If they are a) a pre-existing quality, and b) critical to the existence of the information, then c) what is critical to the existence of the information is a pre-existing quality of the input. Therefore, whatever information is contained in the output of an EA does not owe its existence to the EA, but to the symbol system required at the input.
Doesn't that suggest that all evolution is possible after the origin of life, and that anything after the basic symbols of the genetic code were established is none of ID's business? Or am I reading too much into that? QuiteID
I've just done a word-check in "Specification: The Pattern That Signifies Intelligence." The term "CSI" doesn't appear; it is "specified complexity" that is detailed. So, actually, it is IMPOSSIBLE to give a "rigorous mathematical definition" of CSI based on the paper. Was that the game you were playing all the time, MathGrrl? PaV
Upright,
The question is why she is here pelting persons (who, she must know very well, are not mathematicians) with a loaded set of questions,
If by "pelting questions" you mean the repeated requests to provide a rigorous mathematical definition of CSI then that's the purpose of this thread. And those questions may or may not be loaded, but they were so loaded with the permission and support from some of the owners of this blog and and such perhaps addressing them on their face would be one way to proceed.
while at the same time avoiding any questions which might impact the ultimate conclusion she is so obviously desperate to make.
And what "ultimate conclusion" do you have in mind here? Has it ever occurred to you that there is no secret agenda, that there is no "ultimate conclusion", no Orwellian manifesto? That all is being asked is what has been claimed to exist all along? Regardless of any of your supposed "ultimate" conclusions. Why don't you just try playing the game and drop the conspiracy theory aspect?
then it is hardly of any consolation to be forced to admit that CSI still requires that elusive embodiment of meaning (the semiotic convention) which has not been observed coming into existence by any means other than a mind.
Is it not irrelevant whether CSI still requires a semiotic convention if CSI cannot be quantified, nor even shown to exist in the first place? And you seem to be admitting here that CSI cannot be calculated for an arbitrary system, which is some sort of progress I suppose. And once again your coin has two heads. Heads, CSI can be calculated, you win. Tails, CSI cannot be calculated, but you know it exists, and in any case it has not been observed coming into existence by any means other than a mind, and therefore you win again. JemimaRacktouey
#282 "The objections so far seem somewhat different, e.g. Uprights demand that MathGrrl answers his questions first." It has been made perfectly clear that my issue with Mathgrrl has been from the start a singular issue - one having nothing to do with the mathematics. She states that an EA can create information. The information she is discussing is information which is instantiated in a system of chemical representations (symbols). Without those representations, the information ceases to exist. If the EA does not establish the mapping of those represenations, then it must rely on those relationships to be a pre-existing quality. If they are a) a pre-existing quality, and b) are critical to the existence of the information, then c) what is critical to the existence of the information is a pre-existing quality of the input. Therefore, whatever information is contained in the output of an EA does not owe its existence to the EA, but to the symbol system required at the input. Given that neither she nor anyone else has provided any reasoning which would refute this most obvious point, I stand by it. And after once again bringing this to Mathgrrl's attention, I suggested (more than 250 posts ago) that her conclusion was therefore either "a) false, or b) over-reaching, or c) incomplete". My comment was obviously unwelcomed, as evidenced by the fact that she did not attempt to address it in any meaningful way. Her response came in three pieces. The substance of her first comment was as follows:
No one else has defined CSI with any degree of mathematical rigor, let alone provided any example calculations.
You will notice that this has nothing whatsoever to do with the subject of my comment. It also has nothing to do with modifying her conclusion to more accurately reflect the reality pointed out above. This is simply a maneuver on her part to control the conversation. Her second comment was as follows:
Darned if I know, it really depends on the exact definition of “information”, “complex specified information” in this case.
This comment was intended as a response to the question of whether or not the EA establishes the symbol system or relies on it at the input - and to this (the most basic of observations) she says "Darned if I know". Frankly, I have a hard time believing that, and indeed, taking her at her word only makes her look silly. Apparently she understands this as well, otherwise, she would not have been forced to deflect it with utterly detached questions about how to define information. Her final comment is nothing less than to simply ignore what was said, and to re-assume control of the conversation:
Would you please define CSI with some mathematical rigor and demonstrate how to calculate it for the four scenarios I detailed in the original post?
So in her three responses we see that she has no intention whatsoever of addressing this critical issue. Her only interest is in driving to a conclusion which has already been shown to be either false, over-reaching, or incomplete. Perhaps Mathgrrl's openly demonstrated obstinacy and her patent controlling of the conversation could explain the dynamics of this thread. In that regard I'd like to point out another issue. Without any doubt, two of the most amazingly patient contributors to UD are GPuccio and Vincent Torley. Second-string contributors like myself are a joke in comparison. GPuccio is a medical practitioner in Italy, and VJ has a PhD in Philosophy, living and working in Japan. I note that in comment 217 (given a chance to read further into the thread) even VJ has noticed the dynamics involved. He states:
I can see plenty of breezy, confident assertions along the lines of “Yes, I’ve read that paper,” but so far, NOT ONE SINGLE EQUATION, and NOT ONE SINGLE PIECE OF RIGOROUS MATHEMATICAL ARGUMENTATION from you. Instead, you’ve let us do all the mathematical spadework, while you’ve done nothing but critique it on general, non-technical grounds. This is highly suspicious.
For regular readers of UD, one has to wonder what it takes to get VJ to "call out" anyone. Yet here he is, having sensed something is not right. VJ apparently wants a check of bona fides, but (ostensibly) our guest is an associate Prof in Mathematics, so I am certain she can handle the numbers. The question is why she is here pelting persons (who, she must know very well, are not mathematicians) with a loaded set of questions, while at the same time avoiding any questions which might impact the ultimate conclusion she is so obviously desperate to make.

I would submit to VJ and anyone else to take a rather organic view of the conversation. What is the #1 issue? It is the conclusion, not the math. If it were about the math, then Mathgrrl would have been giving rigorous mathematical rebuttals (as VJ points out), and she would be doing so in front of mathematicians. It is her conclusion which is being protected. It is being protected from observations like the one I offered, as well as others. After all, if your goal is to make the connection (as has already been said or implied) "CSI has been refuted - ID is about CSI - ID has been refuted", then it is hardly of any consolation to be forced to admit that CSI still requires that elusive embodiment of meaning (the semiotic convention) which has not been observed coming into existence by any means other than a mind.

Finally, if anyone is deluded into thinking that ID is premised on the level of mathematical rigor that can be applied to CSI (particularly in regard to satisfying the personal belief system of an implacable opponent), they know nothing at all about ID. Cheers... Upright BiPed
kairosfocus, A clarifying question, if you will: isn't Orgel's use of specified complexity pretty much qualitative? That is, although Dr. Dembski's use of the term certainly descends from Orgel ("evolves," perhaps? :-)), his own mathematicization of the term takes it in a different direction. I have an easy time believing the qualitative reality of the term but a hard time understanding the mathematical models. QuiteID
Pardon, I am a bit woozy after a fairly rocky sea-ferry ride. Yellow Hole lived up to its name.

I note briefly, that MG has declared in a previous thread that the CSI concept -- especially absent a neat, mathematically rigorous definition -- is meaningless. My for-the-record remarks above are to correct this foundational error, for if the foundation is bad, the building will be considerably weakened.

I have already underscored that on observing certain realities of cell based life, Orgel and Wicken were moved to identify the concepts that lay the base for the quantities identified as complex specified information, and functionally specific, complex information [a subset of CSI, where the specification is based on observed function and vulnerability to perturbation of key elements, linguistic and/or structural]. Mathematical models, analytical techniques and metrics respond to that reality, and we must not put the mathematical cart before the reality horse. Indeed, in the iconic case of calculus, the breakthroughs happened in C17 [Newton and Leibniz in the lead]; the systematisation on that success was in C19 or so. And the limits of such systematisations were laid out in the 1930s by Gödel. Mathematics is irreducibly complex and inescapably a faith venture.

When it comes to CSI and the directly practically relevant FSCI, the observation came first: it was recognised by Orgel and Wicken across the 1970's that order, randomness and organisation on specified complexity are three distinct things. Life is complex but not random, as it is functionally specific. And, in the case of DNA, we are dealing with coded digital information in data structures that specify proteins and the regulatory networks that control them. (Cf. here.) If you cannot acknowledge this as a meaningful reality that is observed before it is modelled, analysed and measured, then forever after you will have stumbling blocks. That is why I have asked MG to answer: were Orgel and Wicken "meaningless"? In fact, the question answers itself. O & W are right, and MG has made a basic conceptual error in doing science. Or, engineering for that matter, as CSI and FSCI describe very familiar realities of technical systems.

Next, there is a demand to answer to far more complex cases, when the basic ones go a-begging for an answer. I think we need to learn how to crawl and stand before we try to run and fly. So, I start with a basic, practically based case of CSI using models, analysis and metrics drawn from common practices in science, technology and even mathematics: functionally specific, complex organisation and associated information. Functional specificity is an observable. Complexity can be identified using a threshold. And specific quanta of relevant information meeting these criteria can be identified using the basic metric, the bit. These -- as we saw above -- are related to Hartley's log metric suggestion, and the Shannon metric that ISCID conveniently summarises:
Shannon information is the type of information developed by Claude Shannon and Warren Weaver in the 1940s. Shannon information is concerned with quantifying information (usually in terms of number of bits) to keep track of alphanumeric characters as they are communicated sequentially from a source to a receiver. The amount of Shannon information contained in a string of characters is inversely related to the probability of the occurrence of the string. Unlike specified complexity, Shannon information is solely concerned with the improbability or complexity of a string of characters rather than its patterning or significance.
Of course, this hints at an underlying inference to design on functional specificity. We distinguish meaningful signal from noise, and we do so in the context of signal-to-noise ratio, for the meaningful and functional message is based on special configurations that are unlikely by chance -- islands of function -- and are subject to corruption by the various processes that happen in information storage, processing and communication systems. In short, there is an intuitive, implicit appeal to the inference to design on FSCI in the heart of information theory. So, the design theory movement is doing something important in highlighting and addressing this and its significance. (Indeed, that is a key part of why my always linked note shows my road into design theory: information and thermodynamics.)

So, pardon my insistence on learning to creep first. When we can see the point of a simple, crude, brute force model and metric, then we can profitably progress to more complex cases. But if we demand to run and fly before we can creep and stand, much less walk, the exercise will be futile.

So, MG: were or were not O & W "meaningless" when they spoke of specified complexity and wiring diagram based functional organisation? Good day GEM of TKI kairosfocus
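The "simple, crude, brute force" filter kairosfocus describes reduces, as stated, to a short decision procedure. In this Python sketch the boolean inputs stand for the investigator's observations of function and specificity (the contested step), and the 1,000-bit threshold is the one named above; this is only the decision logic as described, not a validated implementation:

    def fsci_verdict(bits, functional, specific, threshold=1000):
        # The observations (functional? specific?) are supplied by the observer.
        if not (functional and specific):
            return "chance and/or necessity: no functional specification observed"
        if bits < threshold:
            return "chance not ruled out: below the complexity threshold"
        return "design inferred: functionally specific and beyond the threshold"

    print(fsci_verdict(600, True, True))      # under threshold
    print(fsci_verdict(1200, True, True))     # over threshold
    print(fsci_verdict(1200, False, False))   # no specification observed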
Jemima, There are 2 questions: can CSI be recognized, and can CSI be calculated? Dembski thinks that both can be done. I'm not certain about the second one; maybe he is right. But I know that you can observe specification and complexity. It's like observing light without being able to calculate its amount. People were doing that long before light was measurable with any precision. Collin
Pav
And that pattern has a production history behind it.
What's the "production history" behind the bacterial flagellum? DNA? JemimaRacktouey
Jon Specter "Yet you are reluctant to put any effort into developing one of the key tools in the intelligent design toolchest. Why? Because Mathgrrl won't believe you no matter what?" No. Because she doesn't understand what a specification is. It always deals with a pattern. And that pattern has a production history behind it. It's easy to write one-sentence descriptions of what you think is a 'specification'; it's quite another thing to actually understand what the pattern is that is involved in the program, and then to understand how it is generated. I am not of the opinion that MathGrrl is here to find out how CSI applies in each of the situations she inquires about. I think there's some ulterior motive. I again am interested in what affiliations she has, and where she now works. Hopefully she'll tell us. PaV
QuiteID:
Joseph, I still don’t get why you think genetic changes aren’t random except that you want to think that. The evidence is that they’re random.
Again, Dr Spetner wrote a book about it called "Not By Chance". And the only way you can say all mutations are random is to demonstrate that the OoL was due to random processes. Avida has nothing to do with the blind watchmaker. Joseph
Jon Specter [247]: In the case of ev, and from memory mostly, neither of them is complex enough, that is, shows enough improbability, to exceed the UPB. I think part of what MathGrrl is getting at is that in Dembski's paper on "Specification . . ." he's introduced more concepts into his reformulation of CSI, and, hence, it might seem a little unwieldy. But, to simplify things---it seems it's better to approach things in a simplified form before using the more complex---we can just use the UPB, which can be used in a straightforward way. Again, neither ev nor Tierra exceeds this. What I have been lamenting all along is the work that's involved in establishing the "pattern" that's involved. That's why I asked MathGrrl for an example of a "specification". Frankly, I hadn't noticed that she included a "specification" in her post. This happened because she used basically the same format as that of an earlier thread; but this time she included---she thought---a "specification" for each. In the one example where a true specification was given, that of ev, and where N is known, 16, the calculation is quite straightforward and easy. It is not CSI. In the other cases, she hasn't properly understood what a "specification" is, and so hasn't presented them properly. A "specification" always includes a pattern. It's not simply a description; although descriptions can, in the case of ev, be translated into a pattern. PaV
MathGrrl: Here's Dembski's brief description of "specification" from his paper: "Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence."

Here are your supposed specifications:
1.) The specification of this scenario is "Produces at least X amount of protein Y."
2.) "A nucleotide that binds to exactly N sites within the genome."
3.) "Acts as a parasite on other digital organisms in the simulation."
4.) "Computes a close approximation to the shortest connected path between a set of points."

The first, third, and fourth aren't patterns. The only "pattern" is that of 2.) From what I can see, N = 16 in the ev program, and given that there are four, possibly five (deletion), choices for all 16 positions, the most complexity produced is going to be at most 2^(5 x 16) = 2^80, which, again, is well below the UPB. But ev falls apart in other ways per Dembski and Truman.

Every example of a CSI calculation that Dembski has given has involved a pattern. The Caputo case dealt with 33 D's and 1 R, etc. CSI analyzes patterns, and it involves analyzing the chance hypothesis involved in the pattern emerging. The simple descriptions you've provided don't provide any such patterns save #2. And it's not CSI. So you now have your answer. PaV
I've finally discovered Schneider's paper in Nucleic Acids Research where he presents his ev program and its results. I noticed this statement---which I believe to correctly summarize what he accomplished:
Second, the probability of finding 16 sites averaging 4 bits each in random sequences is 2^(-4 x 16) = 2^(-64) ≈ 5 x 10^(-20), yet the sites evolved from random sequences in only ~ 10^3 generations . . .
Per the UPB found in NFL, which is 10^150, even if there were no other problems with ev (which there are, as can be found in the papers by Dembski and Truman that I linked to), this falls woefully short of that UPB. This means that we CANNOT conclude that it is CSI. That is, we CANNOT be assured that what was produced was not the result of chance. The "chance hypothesis", the null hypothesis here, CANNOT be eliminated, and thus Schneider's output does not constitute CSI. QED. PaV
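The figures quoted in the last two comments check out in a few lines of Python; note that reading Schneider's exponent as 2^(-4 x 16) and the earlier estimate as 2^(5 x 16) are the reconstructions adopted above:

    UPB = 10**150               # Dembski's universal probability bound, per NFL

    p_ev = 2.0 ** -(4 * 16)     # Schneider's figure: 2^-64, about 5.4e-20
    odds_ev = 1.0 / p_ev        # about 1.8e19

    configs = 2 ** (5 * 16)     # the earlier comment's 2^80, about 1.2e24

    print(odds_ev > UPB)        # False: ev's improbability falls far short of the UPB
    print(configs > UPB)        # False: so does 2^80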
M. Holcumbrink,
We cannot calculate anything because we are missing important information.
I'm sure that for the purposes of this thread such details will be provided if asked for and deemed relevant to the problem at hand. The objections so far seem somewhat different, e.g. Upright's demand that MathGrrl answer his questions first. Not about missing facts as such. What further details are provided is not up to me. But I'll continue to enjoy reading the thread as this develops.
No, CSI is the “filesize” with the added knowledge that the data is specific and functional, i.e. it performs a critical function within a given system.
But that added knowledge about the data can only come from knowledge that it's designed, and I thought the whole purpose here was to detect design via CSI, not to use the knowledge that some data is designed to say that some data is designed? JemimaRacktouey
Joseph, I still don't get why you think genetic changes aren't random, except that you want to think that. The evidence is that they're random. The question is whether they can drive novel evolutionary changes. If Avida models those random changes, then functional information emerges through random changes. If Avida doesn't model those random changes, then you shouldn't cite it as a measure of functional information. QuiteID
QuiteID, You are not paying attention. Evolutionary processes are not necessarily blind watchmaker processes. Avida is yet another targeted search. Joseph
Joseph, the PNAS reference you give in 12 relies on Avida, an artificial life modeling system, and seems to claim that functional information is created within the evolutionary processes of the program. Isn't that precisely what we think functional information can't do? QuiteID
Jemima @259: “If “CSI” objectively exists then you should be able to explain the methodology to calculate it and then expect independent calculation of the exact same figure (within reason) from multiple sources for the same artifact.” When I was in school I had to use Castigliano’s theorem to generate equations for specific cases in order to do stress analysis. Some cases were much more straightforward than others, but when I came across an example where it proved exceedingly difficult for me to use the theorem to derive the specific equation needed for that situation, it did not mean that the theorem was therefore invalid. I knew that it applied, but I did not have the mental faculties to be able to use it. Likewise, CSI is a very real entity, and it has been calculated for certain situations, but that does not mean that it is not a valid concept when a less straightforward scenario is presented. Do you deny the reality of CSI? If a man gets stuck with a knife and he starts bleeding everywhere, there is no doubt that if the bleeding is not stopped he will die, and the inability of someone to calculate the rate of blood flow out of the wound does not change the fact that he is bleeding profusely. We can see the blood, and there is a lot of it. Likewise, we can see a lot of CSI, hence the attempts to rigorously define it and calculate it. If it weren’t plainly there to see, we wouldn’t even be talking about it right now. Jemima: “Currently I get the impression that “CSI” is simply the (file)size of the object in question to which we add the knowledge that it was designed, and so design is claimed.” “Of course, this all seems to hinge on knowing the artifact in question is designed in advance. Which sort of defeats the entire point of CSI in the first place. Things are designed because they have lots of CSI, and CSI is only present when things are designed. Therefore design.” No, CSI is the “filesize” with the added knowledge that the data is specific and functional, i.e. it performs a critical function within a given system. Then, given the probability of said highly specified data, design is claimed. Jemima @265: “Ah, the “paper bluff”. If the details of how to calculate CSI for the scenarios in question are in that paper then presumably you yourself are now capable of calculating the CSI for the scenarios in question. So what’s stopping you?” We cannot calculate anything because we are missing important information. For instance, in MathGrrl’s scenario #1, what is the length of the gene that is duplicated? And if its duplication causes the quantity of the protein it codes for to increase within the cell, then something else is in play here, because to the best of our knowledge the gene only codes for said protein. And if its production increases due to the duplication, then whatever it is that regulates the production is clearly affected by the presence of the extra gene, but we are not privy to those specifics either. We need to know the mechanism involved here, but we don’t. I figure that to be quite obvious. M. Holcumbrink
Here's another paper. I'm posting the link straight away since I'm having problems and don't have the time to fix them. http://www.trueorigin.org/schneider.asp#1 PaV
kairosfocus @251, I understand how simple it is to calculate CSI in certain instances where a string of digital code is used to generate components that are unambiguously critical to the function of a system. No problem there. And after my last post, I decided that the best way to “measure” the CSI contained within one of my complex piece parts would be to take its 3D CAD file or a 2D PDF of the blueprint and apply the calculation to the amount of digital data contained therein. Not so tough after all. Regarding genetic algorithms, at what point could they be considered “artificial intelligence”? Machines make machines all the time in factories, and in some instances the processes are almost completely automated. But that of course does not mean that blind naturalistic processes are capable of making sophisticated machinery. So when software generates CSI, it seems to me that what we have is software making software. That would count as artificial intelligence, would it not? If so, then we are still stuck with intelligence being the sole originator of CSI. Intelligence by any other name is still intelligence, it seems to me. M. Holcumbrink
JemimaRacktouey, Are you interested in having evolutionists put their money where their mouth is? The point is that ID persists because evolutionists have failed miserably at supporting the claims of their position. Go figure... Joseph
Then use those details to calculate the CSI for the examples given and prove me wrong and yourself right.
MathGrrl should be able to do it for herself--that is, if her examples are valid. Why won't I do it? I say, and have explained why, that her examples are not valid. You, like MathGrrl, have serious issues. Joseph
Jemima:
The difference is that CSI is claimed to be something that can be calculated to the exact bit,
Who has made that claim? I say you made that up. I say you are full of it. Joseph
Joseph,
I say it provides exactly what is asked for.
One last comment to you then... Then use those details to calculate the CSI for the examples given and prove me wrong and yourself right. Why on earth would you not do so if you could? JemimaRacktouey
I don't see how CSI relates to that situation at all. Illogical and irrelevant. That is because you don't know anything about the subject. And the reasoning has been provided.
So perhaps you could calculate the CSI present in the four examples in the OP?
They are bogus and I have explained why.
It seems you are at odds with the majority of ID supporters on this thread.
What you say is meaningless. No need to engage you as you are one of the willfully ignorant. Joseph
Joseph,
All you have to do is demonstrate a BF can evolve via blind watchmaker processes in/from population(s) that never had one and CSI is basically refuted.
I don't see how CSI relates to that situation at all. Illogical and irrelevant. Why don't you explain in detail why that would be the case, Joe?
The point of CSI is its presence: if 500 or more bits of specified information are present, then all of our observations and experiences tell us that only designing agencies are capable of such a feat.
Yes, quite. So perhaps you could calculate the CSI present in the four examples in the OP?
Nope. CSI need not be present to detect design. Counterflow is all that is required.
It seems you are at odds with the majority of ID supporters on this thread. I'll assign you as "fringe" which for ID is quite something and ignore the rest of what you have to say, unless of course you can calculate CSI for the examples given in the OP at which time I'll be happy to re-engage with you. I had wondered why most people on this thread did not engage with you, now I know. JemimaRacktouey
I have already told her how to do it and presented a paper in comment 12 that tells her how to do it.
JemimaRacktouey: Ah, the "paper bluff".
Ah, the jerk response. How is what I posted a "bluff"? Be specific. I say it provides exactly what is asked for. And again, why/how are the examples valid? Why can't you tell us? Why can't MathGrrl tell us? Joseph
Jemima, Specification is rigorously defined but CSI may not be. I'm not sure if CSI can be calculated, but specification and complexity can be recognized. http://www.designinference.com/documents/2005.06.Specification.pdf Collin
Collin
For example, psychologists measure “motivation.” Please calculate that rigorously
The difference is that CSI is claimed to be something that can be calculated to the exact bit, that it can be objectively calculated, that it can unambiguously determine if the artifact in question is designed with no prior knowledge of how it came to be, and that CSI indicates intelligence. I'm not the one claiming those things; I'm just interested to see if IDers can put their money where their mouth is. A secondary reason is to see if after this thread IDers will still make the claim that CSI can be calculated for any object despite the fact that nobody can actually do so. Is CSI an empty claim or one that can be backed up? What do you think, Collin? JemimaRacktouey
JemimaRacktouey:
E.g. the claims that the bac flag or the cell is “full” of CSI but no actual figure can be stated. If no figure is known, how is it known for sure that the actual value is non-zero?
All you have to do is demonstrate a BF can evolve via blind watchmaker processes in/from population(s) that never had one and CSI is basically refuted. The point of CSI is its presence: if 500 or more bits of specified information are present, then all of our observations and experiences tell us that only designing agencies are capable of such a feat.
Of course, this all seems to hinge on knowing the artifact in question is designed in advance.
That's just dumb.
Things are designed because they have lots of CSI, and CSI is only present when things are designed.
Nope. CSI need not be present to detect design. Counterflow is all that is required. Joseph
Joseph,
You have it backwards- MathGrrl needs to explain how/why her examples are good/valid.
Then I submit that the next time you claim that CSI can be calculated and is evidence for design, you explain the scenarios that it cannot be applied to. Otherwise it's something of an impossible guessing game for somebody to propose a scenario and for you to simply say "nope, that's invalid" without explaining why. Why are the examples invalid, Joe? Your claim that they are "not valid" cuts no ice with me. Let's hear a few scenarios from you that you agree are valid for the calculation of CSI, and if you calculate the CSI then perhaps a general principle can be derived.
I have already told her how to do it and presented a paper in comment 12 that tells her how to do it.
Ah, the "paper bluff". If the details of how to calculate CSI for the scenarios in question are in that paper them presumably your yourself are now capable of calculating the CSI for the scenarios in question. So what's stopping you? If you do it then you've shut up MathGrrl, provided a major boost for ID as CSI can now be calculated for arbitary scenarios and you'll probably end up as a guest blogger here. So why don't you do it Joe? JemimaRacktouey
For example, psychologists measure "motivation." Please calculate that rigorously... Well, they can give you statistics on what is most likely in certain situations, but it's hard to say what is going on in the brain of an individual. Yet measuring motivation is a worthy intellectual endeavor, just like analyzing CSI. Collin
Collin
Whether or not CSI can be calculated, the definition is clear enough that we can at least see if there is complexity and specification.
This is the crux of the matter. If CSI cannot be calculated then the claims that it can are bogus and should not be made. If it can be calculated then it can be calculated in general and there should not be a very long thread where people are giving all sorts of reasons why in this particular case it cannot be calculated.
Do you deny that the bacterial flagellum has those things?
I don't know, define CSI and I'll tell you if it exists in the bac flag.
Even if it cannot be calculated rigorously, I think that it still leads to a valid inference of design just as Mount Rushmore leads to a valid inference of design
If you think about it, that's invalid. We recognize Mount Rushmore as designed because it looks like human faces, amongst other reasons. Yet here the claim is that "the designer" created the CSI that ID claims to observe (but cannot calculate). Given that nothing is known about this "designer" (ID is not about the designer), how can you recognize the design in the bac flag? What is your reference point? Where is the "face that looks like a human face" in the bac flag?
It’s POSSIBLE that the knife was blown by the wind in just such a way as to imbed in someone’s back, but unlikely; therefore the design inference is valid.
Or, therefore, something else entirely. Disproving one thing does not support another; positive evidence is required to do that. So it's "possible" that the bac flag evolved totally naturally, and even if that is disproved it does not provide any support whatsoever for ID. How can it? There are multiple other explanations available (Craig Venter in a time machine), so ruling out one says nothing about the probability of another being true.
Complexity and specificity, I submit, are the same thing.
Then calculate the CSI in MathGrrl's examples. JemimaRacktouey
Collin, without calculation, the design inference is either circular or non-rigorous. We know Mount Rushmore is designed, but we can only say we know this because of complexity and specification if those are rigorously defined. QuiteID
QuiteID, You may be correct, but I think you should consider that you are not correct. I think that complexity and specificity are well enough defined that their presence (in whatever "amount") can be reliably found. This leads to a reasonable inference of design. Remember that much of science is based on reasonable inferences. My experience is in psychology, and I know that many people think that it is not a science. I'm one of them. But the world treats it as a science (especially in a courtroom) and what passes for science is rarely rigorously definable. Yet it receives government funding, influences public policy and is admissible as professional opinion in court. Psychologists use math, particularly statistics, but they use very fuzzy definitions; much more fuzzy and less rigorous than CSI. Collin
JemimaRacktouey
MathGrrl has already made it quite clear that the book has insufficient information present to allow CSI to be calculated for the 4 examples in question.
I doubt that she has read the book. Her posts and questions tell me she hasn't.
An interesting claim. But groundless without further explanation or justification.
You have it backwards- MathGrrl needs to explain how/why her examples are good/valid.
Why don’t you present a few non-bogus scenarios and then calculate, if you can, the CSI present in those scenarios.
I have already told her how to do it and presented a paper in comment 12 that tells her how to do it. Joseph
QuiteID
But I also think that for it to be useful scientifically, it should be calculable.
Exactly so. If "CSI" objectively exists then you should be able to explain the methodology to calculate it and then expect independent calculation of the exact same figure (within reason) from multiple sources for the same artifact. Currently I get the impression that "CSI" is simply the (file)size of the object in question to which we add the knowledge that it was designed, and so design is claimed. E.g. the claims that the bac flag or the cell is "full" of CSI but no actual figure can be stated. If no figure is known, how is it known for sure that the actual value is non-zero? Or KF's claims that CSI=FSCI and as such can be calculated directly from the file size, e.g.:
11 –> We can compose a simple metric that would capture the idea: Where function is f, and takes values 1 or 0 [as in pass/fail], complexity threshold is c [1 if over 1,000 bits, 0 otherwise] and number of bits used is b, we can measure FSCI as the simple product FX = f*c*b, in functionally specific bits
12 –> Actually, we commonly see such a measure; e.g. when we see that a document is say 197 kbits long, that means it is functional as say an Open Office Writer document, is complex and uses 197 k bits storage space.
Taken from here: https://uncommondescent.com/intelligent-design/background-on-orderly-random-and-functional-sequence-complexity/ If FSCI==CSI then CSI=Filesize*X. If it's that simple one wonders why KF has not calculated it for the four example scenarios. Of course, this all seems to hinge on knowing the artifact in question is designed in advance. Which sort of defeats the entire point of CSI in the first place. Things are designed because they have lots of CSI, and CSI is only present when things are designed. Therefore design. JemimaRacktouey
Jemima, Whether or not CSI can be calculated, the definition is clear enough that we can at least see if there is complexity and specification. Do you deny that the bacterial flagellum has those things? Even if it cannot be calculated rigorously, I think that it still leads to a valid inference of design just as Mount Rushmore leads to a valid inference of design, as does a knife in the back of a dead person. It's POSSIBLE that the knife was blown by the wind in just such a way as to imbed in someone's back, but unlikely; therefore the design inference is valid. Complexity and specificity, I submit, are the same thing. Collin
Joseph,
3- “No Free Lunch” is readily available. The concept of CSI is thoroughly discussed in it. The math he used to arrive at the 500 bits as the complexity level is all laid out in that book.
MathGrrl has already made it quite clear that the book has insufficient information present to allow CSI to be calculated for the 4 examples in question. If you disagree, why don't you perform the required calculations and show how you went about it? That is after all the purpose of this thread. If you cannot then presumably you will not make further claims regarding the general applicability of CSI to enable design detection.
5- Your scenarios are still bogus
An interesting claim. But groundless without further explanation or justification. Why don't you present a few non-bogus scenarios and then calculate, if you can, the CSI present in those scenarios. That may be enough to allow a general principle to be derived which can be used as an objective "design detector". JemimaRacktouey
JemimaRacktouey, You write:
It’s almost as if you are saying that until MathGrrl explains the origin of life you won’t help calculate CSI? Very strange. Or even that you know you can’t calculate CSI but it does not matter because the materialists cannot explain the origin of life. Equally strange.
You put your finger on something that I have noticed but have had a hard time articulating. I don't know what to make of it though. I like the concept of CSI: it makes sense to me intuitively as a sign of intelligence. But I also think that for it to be useful scientifically, it should be calculable. QuiteID
Joseph
Have you ever watched “My Cousin Vinny”? Do you remember what Ms. Mona Lisa Vito said when the prosecutor tried to test her automobile knowledge? Well that applies to what MathGrrl is doing…
Oddly I agree. In the final, climactic section of the movie "Ms Mona Lisa Vito" demonstrates an expert knowledge of automobiles superior to that of anyone else in the courtroom. It seems that you have hit the nail on the head! JemimaRacktouey
My apologies for the delay in replying. I'm at a workshop this weekend that is giving me less personal time than I expected. I'll be back online this evening. MathGrrl
Kairosfocus
Please see the just above to see how a simple CSI metric can be developed
It would be more productive if you were to simply develop the metric for the 4 scenarios outlined in the OP. If you are able to provide instructions on how to develop CSI metrics, then why are you unable to apply those instructions to MathGrrl's scenarios?
The rhetoric that tries to obfuscate the reality is just that, selectively hyperskeptical rhetoric.
Pardon me, but the rhetoric that is obfuscating reality is coming from you. For example:
Namely, FSCI and CSI — and the two cannot be conceptually separated, MG, whether we deal with Orgel-Wicken or Dembski c 1998 on — are real, are only seen to come from intelligence, are beyond the search capacity of the observed cosmos, and lie at the heart of C-chemistry, cell based life.
While that may or may not be true, it's irrelevant. The issue at hand is whether CSI can be computed for the 4 scenarios given, not what the ultimate origin of any such measured CSI is. It's almost as if you are saying that until MathGrrl explains the origin of life you won't help calculate CSI? Very strange. Or even that you know you can't calculate CSI but it does not matter because the materialists cannot explain the origin of life. Equally strange. Your attempts to cloud the issue have been noted. If you could compute the CSI for the scenarios in question, I have no doubt that doing so would require less effort than typing several very large posts one after the other, all in an attempt to explain why you can't compute CSI for the examples given but it does not matter because MathGrrl cannot explain its ultimate origin anyway. And these posts seem to be very similar to previous posts you have written, so it seems that whatever the issue at hand, you can re-use the same talking points over and over. The fact that you can't compute the CSI for the scenarios has not been masked by your attempts to throw red herrings into the mix, I'm afraid.
Absent a priori evolutionary materialism straight-jacketing science in our day, we would have long since drawn the obvious and plainly well warranted conclusion: the cell is a deliberately engineered technology. Whodunit, we do not yet know, but that tweredun is plain.
You are not forced to wear the evolutionary materialism straight-jacket. You can take it off and conduct your own research free of any enforced rules about materialism. I guess you "know" that the cell is a deliberately engineered technology, but somehow I suspect that you cannot calculate the CSI present in "the cell". Despite this you will no doubt claim that it has CSI present anyway. And lots of it. So far that's all that seems to have happened on this thread. CSI cannot be calculated, but you instinctively know that there is "lots", which indicates design, therefore ID. JemimaRacktouey
vjtorley, on your calculation of the CSI of the bacterial flagellum. You have done a lot of work on this and it deserves a full answer. I don't have the time this weekend, so I am going to summarise the problems (both yours and Dembski's) as a sort of marker and try to get back to it in the week.

1) It is irrelevant whether you measure a probability in the conventional way, as a value between 0 and 1, or in bits, by taking minus 1 times the log to the base 2. Surprised you raised that, to be honest.

2) This highlighted something I had never realised before. When calculating this probability he makes a rather basic error. If you have n independent events and the probability of a single event having outcome x is p, then the probability of at least one event having outcome x is not np; it is 1 - (1 - p)^n. So the calculation 10^120 · Phi_s(T) · P(T|H) is just wrong -- even if you could make sense of Phi_s(T) and P(T|H).

3) I apologise; I had forgotten that Dembski does define specificity to include not just those patterns which are simpler, but those patterns which are simpler and less probable. His only justification for this is that by doing so he ensures that "S is ruling out large targets" (top of page 19). This seems a bit ad hoc, but never mind; he can define specificity what way he wishes. To assess the specification of an outcome according to this definition we have two imponderables: (a) What other patterns are at least as simple as the one this outcome conforms to, where "simple" means Kolmogorov complexity (which, remember, is not computable)? (b) Which of these patterns are less probable than the one we observe? How on earth do we know for a real situation? More to the point, when he comes up with the estimate of Phi_s(T) for the bacterial flagellum (pp 24-25) he forgets all about the "less probable" criterion and just looks for "simpler".

4) T in his bacterial flagellum example is defined conceptually as a "bidirectional rotary motor-driven propeller", and it is this he uses to estimate Phi_s(T). This is not necessarily (in fact is almost certainly not) the same as the exact configuration of proteins. We have no idea what other configurations of proteins might achieve this effect. So when you insert P(T|H) into the formula it is a different T!

5) You want to argue that assuming all amino acids are equally likely and independent of each other is what Dembski means by H in his definition of CSI. I agree that in this case, as in a number of others, he is unclear. If we adopt this definition of H then there are a few problems. While the choice of the "raw chance" version of H is pretty clear in this specific case, it is not in general well defined. It uses Bernoulli's principle of indifference and, as I am sure you know, Keynes (among others) has shown that this principle does not necessarily lead to a unique solution (we discussed this in the context of cosmic design some months ago). But if we accept the common sense interpretation for amino acids, then there are many EAs that lead to massive increases in CSI -- gene duplication being an example (as discussed above).

I am sorry this is not properly explained with examples and references, but I wanted to get the overall points down before my memory and enthusiasm fade.

Cheers
Mark
markf
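Point 2 is easy to verify numerically. Here is a short sketch of my own (Python; not from Dembski's paper or the comment above) comparing the naive n*p figure with the correct 1 - (1 - p)^n:

# The probability that at least one of n independent trials produces an
# outcome of probability p is 1 - (1 - p)^n, not n*p.
def at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p, n = 1e-3, 5000
print(n * p)               # naive figure: 5.0, not even a valid probability
print(at_least_one(p, n))  # ~0.9933, correctly bounded above by 1

For small n*p the two agree closely, which is presumably why the shortcut goes unnoticed; it fails in exactly the large-n regimes these calculations concern.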
M. Holcumbrink Please see the just above to see how a simple CSI metric can be developed, and on cases where more sophisticated ones have been developed. The rhetoric that tries to obfuscate the reality is just that, selectively hyperskeptical rhetoric. Also, note that we can extend the above to implicit cases by observing that they fit in with a nodes, arcs and interfaces network, a Wicken "wiring diagram." You are doubtless familiar with blueprints [and the underlying drawings], exploded views, wireframe meshes and the like. These can all be reduced to network lists that give a linguistic data structure description from which the drawing can be made and the parts and the system constructed. Such a structured summary of course can be reduced to bits -- a chain of elementary yes/no decisions -- and then measured using the X-metric just suggested as the simple case. More sophisticated metrics have been developed, described and linked. Just, they have been ignored or brushed aside on one excuse or another. And, as to the notion that once one has created oodles of FSCI to set up a genetic algorithm, intelligently, one can then look at its hill climbing that uses highly controlled limited random processes, and say voila, unaided blind chance and mechanical necessity are creating CSI, that is self-refuting on its face. GEM of TKI kairosfocus
As far as metrics go, I stand by my earlier point: unless one is willing to accept that there is a basic simple metric on commonly used information concepts and analysis, then one will hyperskeptically dismiss more complex metrics -- for these more complex metrics rest on the same basic analysis. (The Durston FITS metric rests on H, the average information per symbol, and the Dembski type CSI metric rests on the analysis of configuration spaces and isolated hot zones AKA islands of function or targets.) So, in reality, the above is an exercise in exposing circular, selectively hyperskeptical, crankish thinking. Pardon bluntness. To reject the reality of FSCI etc, one has to reject such basic and commonly accepted phenomena and metrics that it is at once revealing that something has gone wrong. Worse, to post a significant remark putting up such hyperskepticism requires one to produce examples of such FSCI, as was already analysed at 45 above, showing how such FSCI, a subset of CSI, is routinely produced by intelligence. Self referential inconsistency anyone?

a: We start with the Shannon-Hartley based metric of information carrying capacity, first expressed as a negative log measure [cf my summary of the more or less standard derivation here in my always linked].
b: We add the semiotic agent, AKA the intelligent observer. Such is capable of recognising linguistic or algorithmic function vs non function. The intelligent, judging observer is of course a key -- though often implicit -- part of science, engineering and measurement generally.
c: We introduce a simple metric for FSCI (a subset of CSI relevant to the DNA in the cell), X: X = C*S*B
d: S -- specificity, as seen by isolation of functionality on islands, testable by seeing what significant random noise does to the ability to function. What would white noise mixed in do to the functionality of the text string in this post, or to the functionality of ev as a program, etc? [Almost self evident.] Use a simple 1/0 value for yes/no.
e: C -- complexity, and here a threshold of 1,000 bits of info carrying capacity used to store the message is good enough for government work. Cf the infinite monkeys analysis for why. Again, a simple 1/0 for yes/no.
f: B, for number of bits. A 300 AA functional protein that folds and works in a specific task in the cell uses 1,800 bits of D/mRNA storage. 3 letters per codon, 300 codons, 2 bits per 4-state base.
g: So, if something is (i) functionally specific, and (ii) complex beyond the threshold, and as well (iii) has a specific bit value, it passes the two thresholds of specified complexity, and its number of stored bits [which by the threshold for C is beyond 1,000] is a value of FSCI in functionally specific bits.
h: This is a commonplace in digital technology.
i: And, as we see here complexity and specificity incorporated, it is a subset of CSI. Thus, the set CSI is non-empty. Other more complex metrics and models build on this foundation.

G'day GEM of TKI kairosfocus
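Taken at face value, steps c through g above describe a metric simple enough to transcribe directly. The sketch below (Python) is only a transcription of the X = C*S*B product as kairosfocus states it, with the function and complexity judgments supplied by the observer as inputs; the function name and signature are mine:

def fsci_bits(functional: bool, n_bits: int, threshold: int = 1000) -> int:
    # S: 1 if the judging observer deems the item functionally specific (step d)
    s = 1 if functional else 0
    # C: 1 if storage used exceeds the 1,000-bit complexity threshold (step e)
    c = 1 if n_bits > threshold else 0
    # X = C*S*B, in functionally specific bits (step g)
    return s * c * n_bits

# Step f's example: a 300 AA protein coded by 1,800 bits of D/mRNA storage.
print(fsci_bits(functional=True, n_bits=1800))   # 1800 functionally specific bits
print(fsci_bits(functional=False, n_bits=1800))  # 0: capacity without judged function
print(fsci_bits(functional=True, n_bits=500))    # 0: functional but below the threshold

Everything turns on the S input, which on this account comes from the judging semiotic observer of step b rather than being computed from the data itself.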
Onlookers: Passed back after several days; pausing for a moment from the crises in hand. (Start with a territory where recurrent budget is 80% of GDP, needing to move to self-sustaining growth, post a major natural disaster that has stripped away 2/3 of land, infrastructure and population; and dealing with an increasingly reluctant principal aid partner . . . multiply by a poorly structured new constitution with dangerous and unbalanced provisions . . . ) Back to this thread. I am astonished to see the mantra that Complex Specified Information -- itself a description of a long since OBSERVED and commented on phenomenon as common as posts in this thread -- is "undefined" AKA "meaningless" still being tossed around. Selective hyperskepticism on steroids. In fact, the reality of FSCI and wider CSI needs to be first acknowledged, perhaps by recognising certain features of posts in this thread; then mathematical models and metrics need to address the observed realities adequately. Let's get some basic facts straight for the record, yet once again (and I refer the interested parties to the UD weak argument correctives, especially nos 26 - 30, which have given the concept, the actual intellectual roots in the work of Orgel [and Wicken and Yockey et al], and summaries and links on specific, mathematically grounded metric models that inter alia have been used to generate FSC metrics in FITS for 35 protein families). In short, much of the above is plainly an exercise in dismissive rhetoric triumphing over unwelcome reality. FSCI, and the wider concept CSI, are DESCRIPTIONS OF OBSERVED FACTS, not definitions that need to be turned out just so or they can be dismissed to one's heart's content. The attempt to be dismissive, therefore, shows itself for what it is: denial of patent but unwelcome reality for evolutionary materialists. Namely, FSCI and CSI — and the two cannot be conceptually separated, MG, whether we deal with Orgel-Wicken or Dembski c 1998 on — are real, are only seen to come from intelligence, are beyond the search capacity of the observed cosmos, and lie at the heart of C-chemistry, cell based life. Absent a priori evolutionary materialism straight-jacketing science in our day, we would have long since drawn the obvious and plainly well warranted conclusion: the cell is a deliberately engineered technology. Whodunit, we do not yet know, but that tweredun is plain. So, now, let us cite (and kindly cf the link here) those who recognised the facts and acknowledged that they need to be explained:

ORGEL, 1973: _________________ >> . . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [[The Origins of Life (John Wiley, 1973), p. 189.] >> ___________________

Wicken, 1979: ____________________ >> ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [[i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [[originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.] >> ____________________

Again, for the record. [ . . . ] kairosfocus
Hi MathGrrl, Yes, I go by Mung, but feel free to think of me as Thomas, as in Doubting Thomas. Could you please point me to any post in this thread in which you display competence in any of the following: 1. computational biology 2. evolutionary algorithms 3. genetic programming My time is valuable, and I hate wasting it. No doubt that's due to some evolutionary adaptation of which I am completely ignorant. Also, it seems to me that at least one corollary of the argument in your OP is that any paper which has claimed to generate CSI is false, because the authors cannot even define CSI, much less generate it. Would you agree? Regards Mung
PaV:
The hard part is establishing the “chance hypothesis”. That requires examination of the program, its elements, how it interacts, its final outputs, etc, etc. Then, this “chance hypothesis” generates a rejection region. The details of that could be difficult.
Okay, dumb question, but this I don't get. If you don't have the time to fully flesh out the chance hypothesis, on what basis do you reject that it is capable of generating CSI? jon specter
Indium #238: “But the problems with CSI will not go away. Unless this thread is deleted, everybody can link here in the future. The failure of the UD crowd to give a working definition is there for all to see”. I come from the engineering world, and I am very familiar with very sophisticated integrated systems. My world is all about design, and there are many facets to this world, some of which are: 1) materials: properties, tolerances, configurations, interfaces; 2) controls: inputs, computations & outputs; 3) data processing: storage, retrieval, compression, expression, regulation, utilization; 4) automated manufacturing and assembly: jigs, fixtures, tooling; 5) energy: storage, utilization, transmission; 6) efficiency; 7) optimization… and the list goes on. But what is astounding to me is that all of the engineering principles I learned about in school are utilized in biological systems. All of it is there! So I guess my point would be that the ability to calculate CSI (or lack thereof) does not affect my belief that life has been designed in the slightest. If life had not been designed, how would it be any different than it is? The evidence for the design of life is exquisite, and the more we learn, the more mind-boggling it is to me that anyone would come to any other conclusion. The whole CSI calculation just seems like pointless busy work to me. There are certain single piece parts in the aerospace industry that have thousands of features, which means each contains an enormous amount of information. A single rectangular chunk of aluminum, on the other hand, has very little information. I haven’t the faintest clue how to go about calculating the CSI that is contained in either of them, but I know that the difference between the two is considerable. But my inability to give MathGrrl a rigorous equation by which to calculate it does not change the fact that CSI is there, and that there is a lot of it. And it doesn’t change the fact that if I found a similar piece part buried in the sands of the Sahara I would conclude unequivocally that the thing had an intelligent source. So when I see ion-powered turbines that are controlled by sophisticated sensory inputs and outputs, and whose manufacture and assembly is regulated by the most sophisticated software known to modern man, I will likewise conclude that it has an intelligent source, CSI be damned. M. Holcumbrink
I am not a scientist, but have friends who are (in fields unrelated to biology). Some of them have spent years working on developing their ideas, testing, revising, testing some more, revising some more, etc. It probably isn’t unheard of that some scientists will spend decades going through the same process to fully develop their scientific ideas. As I put somewhat flippantly in a previous reply, that is what scientists do.
Yes, that is what they do. Except some scientists are paid to do it, and able to get grants for their research, and some are not. tragic mishap
As to Indium's claim that gene duplication does anything in the real world:

Michael Behe Hasn't Been Refuted on the Flagellum! Excerpt: Douglas Axe of the Biologic Institute showed in one recent paper in the journal Bio-Complexity that the model of gene duplication and recruitment only works if very few changes are required to acquire novel selectable utility or neo-functionalization. If a duplicated gene is neutral (in terms of its cost to the organism), then the maximum number of mutations that a novel innovation in a bacterial population can require is up to six. If the duplicated gene has a slightly negative fitness cost, the maximum number drops to two or fewer (not inclusive of the duplication itself). http://www.evolutionnews.org/2011/03/michael_behe_hasnt_been_refute044801.html

The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations - Douglas D. Axe Excerpt: In particular, I use an explicit model of a structured bacterial population, similar to the island model of Maruyama and Kimura, to examine the limits on complex adaptations during the evolution of paralogous genes—genes related by duplication of an ancestral gene. Although substantial functional innovation is thought to be possible within paralogous families, the tight limits on the value of d found here (d ≤ 2 for the maladaptive case, and d ≤ 6 for the neutral case) mean that the mutational jumps in this process cannot have been very large. http://bio-complexity.org/ojs/index.php/main/article/view/BIO-C.2010.4/BIO-C.2010.4

Is gene duplication a viable explanation for the origination of biological information and complexity? - December 2010 - Excerpt: The totality of the evidence reveals that, although duplication can and does facilitate important adaptations by tinkering with existing compounds, molecular evolution is nonetheless constrained in each and every case. Therefore, although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms. © 2010 Wiley Periodicals, Inc. Complexity, 2011 http://onlinelibrary.wiley.com/doi/10.1002/cplx.20365/abstract

Evolution by Gene Duplication Falsified - December 2010 Excerpt: The various postduplication mechanisms entailing random mutations and recombinations considered were observed to tweak, tinker, copy, cut, divide, and shuffle existing genetic information around, but fell short of generating genuinely distinct and entirely novel functionality. Contrary to Darwin’s view of the plasticity of biological features, successive modification and selection in genes does indeed appear to have real and inherent limits: it can serve to alter the sequence, size, and function of a gene to an extent, but this almost always amounts to a variation on the same theme—as with RNASE1B in colobine monkeys. The conservation of all-important motifs within gene families, such as the homeobox or the MADS-box motif, attests to the fact that gene duplication results in the copying and preservation of biological information, and not its transformation as something original. http://www.creationsafaris.com/crev201101.htm#20110103a

bornagain77
As a follow-up to [241], let's point out what one finds at Schneider's blogsite. We see a graph. Bits of information per nucleotide have increased. Wow. But . . . also notice---as I've already pointed out to MathGrrl---the increase quickly peters out. It flat-lines. And then notice that when "selection" is removed, the "information" is all lost. Well, the "selection" that Schneider alludes to comes exactly from the function that ferrets out the number of mistakes. Once this ferreting is turned off (which, though not in the form of a target sequence, comes [per Dembski in the article] from "fitness functions"), voila, no new information. MathGrrl: Once again, what are your connections to Schneider? Would you like to tell us for the sake of full disclosure? PaV
Indium: First, are you some new rare-earth element? Second, the math is easy. Yes, the math in "Specification" is more difficult, but it isn't that hard either. But, again, this is the easy part. The hard part is establishing the "chance hypothesis". That requires examination of the program, its elements, how it interacts, its final outputs, etc, etc. Then, this "chance hypothesis" generates a rejection region. The details of that could be difficult. Crunching numbers is easy. Trying to figure out just what they mean is the difficult side of things. PaV
I found this little article by Wm. Dembski. Interestingly, he saw exactly the problem I saw when I looked a little more closely at Tom Schneider's blog giving info on ev: "mistakes". Who figures this out? How is it figured out? The answer is the programmer. And this is where information is smuggled in. As Dr. Dembski points out, this is no more than a more sophisticated version of Dawkins' self-correcting "Methinks it is like a weasel" version of Darwinism. Interestingly, if you try to get to Dembski's paper from Schneider's blog, you won't get access. And more interesting is the fact that he tells us just what Dembski objected to---well, he tells us half of what Dembski objected to . . . and then proceeds to tell us how Dembski was wrong. You see, MathGrrl, this is why I won't waste my time with your outrageous request. PaV
Denyse, Have you ever watched "My Cousin Vinny"? Do you remember what Ms. Mona Lisa Vito said when the prosecutor tried to test her automobile knowledge? Well that applies to what MathGrrl is doing... Joseph
I came here as a supporter, but am about to say something that will make me about as welcome as a skunk at a garden party. PaV says:
Why do you think she is entitled to something that would be painstaking work to produce?
I am not a scientist, but have friends who are (in fields unrelated to biology). Some of them have spent years working on developing their ideas, testing, revising, testing some more, revising some more, etc. It probably isn't unheard of that some scientists will spend decades going through the same process to fully develop their scientific ideas. As I put somewhat flippantly in a previous reply, that is what scientists do. Yet you are reluctant to put any effort into developing one of the key tools in the intelligent design toolchest. Why? Because Mathgrrl won't believe you no matter what? Why do you require her approval? Why not do it for the silent onlookers? Why not do it solely to advance ID to the next level? Why not do it so some future ID scientist has a foundation to work from? Why not do it just for the thrill of discovery? jon specter
PaV in comment 32: "The EASIEST part of CSI is the calculation of complexity. And certainly, as Dembski presents it in his paper on "Specification", it is a more complicated, world-encompassing approach; but the simplified version is a simple negative log calculation of improbability. Some 8th graders could do the calculation."

PaV in comment 227: "Why do you think she is entitled to something that would be painstaking work to produce?"

Huh, what now? Painstaking 8th grade mathematics? Also, it is interesting to see how people here get more and more defensive and even hostile. You seem to want to force MathGrrl out of here. But the problems with CSI will not go away. Unless this thread is deleted, everybody can link here in the future. The failure of the UD crowd to give a working definition is there for all to see. So, maybe somebody can do the calculations for this simple case:

11 -> 11.11 (duplication)
11.11 -> 11.01 (divergence)

So, 11 -> 11.01 = increase in information. One could argue that this does not happen in nature or that the specification doesn't change in these cases. But Zhang et al (and many more...) seem to disagree: http://www.ncbi.nlm.nih.gov/pubmed/11925567 Indium
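For what it's worth, Indium's toy case can be scored under the most naive measure on the table: raw storage capacity at one bit per binary symbol. The sketch below (Python; the helper name is mine) is only that. It evaluates no specification, so it is not a CSI calculation:

# Indium's toy case, scored by raw storage capacity (1 bit per binary
# symbol). No specification is evaluated, so this is not a CSI figure.
def capacity_bits(s: str) -> int:
    return len(s.replace(".", ""))   # "." is only a visual separator

for s in ("11", "11.11", "11.01"):
    print(s, capacity_bits(s))       # 2, 4, 4 bits of raw capacity

Whether the doubling of capacity at the duplication step counts as new specified information is exactly what the thread keeps disputing.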
PaV, no, of course mathematicians don't "prove" definitions. Some definitions are stipulated ("Let Specification A be . . .") and some come about as the consequence of such stipulations. You also state:
I agree that discussions amongst mathematicians would be valuable. But we’ve had them here before. They quibble about NFL theorems; they quibble about uniform probability distributions, and they want to say that all of this disqualifies CSI. The point being that CSI and ID are charged subjects that will not receive an impartial assessment.
Discussions "here" (or on any blog) are by nature more subjective than discussions in the mathematical literature. That's why I suggested one of the journals put out by SIAM (the Society for Industrial and Applied Mathematics). If discussions here have been unhelpful, perhaps that's because here's the wrong place to have them. But it's up to ID researchers to start that happening. Dr. Dembski doesn't publish in the mathematical field any more, and he's a busy man, so maybe somebody else should pick it up. I have a high opinion of Dr. Dembski, but I'm no mathematician. Nevertheless, it seems like there a qualified person could publish the math. It doesn't need to be resisted irresponsibly; in fact, there's no need to publish it with reference to evolution at all. Upright Biped: I have a hard time figuring out what you're trying to say to me. There's seems to be a lot of snark, but I can't understand what you're snarking about. QuiteID
PaV
And then why can't she just simply say that she doesn't know how to apply CSI to these programs?
But that seems to me to be exactly what MathGrrl is saying. I think that if the relevant CSI was computed and the method shown for the examples given, that would move this on to the next level, which I'm sure would be more productive as it would appear to promise a usable mechanism to objectively determine design!
Do mathematicians “prove” definitions? I don’t think so.
I don't think that's what this is about. If CSI can be objectively computed for an arbitrary object, as claimed, then it can be computed for MathGrrl's examples. JemimaRacktouey
Denyse: You've stated twice that you think 'people' should address MathGrrl's question. First of all, it really isn't a question. It's a request. More of a demand. She says she wants to learn more about CSI: well, there are books, and there are on-line publications. Second: what are your reasons for your statement? I'm a bit confused. Why do you think she is entitled to something that would be painstaking work to produce? What has she done to show that she has the background? What has she done to show that she understands CSI at all? What specific questions (not demands) has she offered seeking clarification? So, again, why do you think she deserves an involved response? The ONE actual question she asked had to do with what a "specification" is. Well, I know what a specification is per NFL. Why can't she read that and understand that? And then why can't she just simply say that she doesn't know how to apply CSI to these programs? Then suggestions could be given to her. I mention in my penultimate post that there is this striking similarity between her need for a "rigorous mathematical description" of CSI and the comments made by Thomas Schneider. In the mind of Thomas Schneider, Shannon information, which is a simplistic logarithmic function, is real mathematics and thus a true description of information. Well, no one in ID would ever think that Shannon information is any kind of true indication of information except when it comes to computer programs. It's too simplistic. Even mathematicians acknowledge its limitations. So this, I believe, is her ulterior motive: she wants to make CSI look mathematically naked so as to disparage it (which is just the opposite of reality, since it is the complexity of the concept that makes acceding to her request so difficult), and you're abetting her purpose. I can't help but wonder why. QuiteID: I agree that discussions amongst mathematicians would be valuable. But we've had them here before. They quibble about NFL theorems; they quibble about uniform probability distributions, and they want to say that all of this disqualifies CSI. The point being that CSI and ID are charged subjects that will not receive an impartial assessment. It is very human to disagree with a conclusion someone has reached and then to find reasons to invalidate those conclusions, instead of just absorbing the work and pointing out errors if found. So I don't have much optimism about that. OTOH, is it possible to "mathematically" prove what CSI is? I don't know if that is possible. However, can you prove to me that the Shannon equation depicting information defines information? I don't think so. It's simply a definition. Do mathematicians "prove" definitions? I don't think so. So here we have MathGrrl, who seems perfectly willing to accept Shannon's simplistic notion of information, but now finds it troublesome that CSI isn't "rigorously" defined. It is plenty rigorously defined. Maybe she doesn't like it. That doesn't concern me. Maybe she doesn't understand it. She can look at referenced work and email Dr. Dembski directly. Maybe she thinks it's wrong. Well, then, write a paper and show how it is wrong. Anything more doesn't seem like a good use of time. PaV
QuiteID and Joseph: I think I get it now. It's not the specified complexity, Chi, but the specificity, sigma, which is customarily expressed in bits, as Professor Dembski states on page 19 of "Specification: The Pattern That Signifies Intelligence." If the probability is below 10^-120, then sigma will be over 400 bits, and if it's below 10^-150 (Dembski's original universal probability bound), it'll be over 500 bits. Chi differs from sigma in that the expression we are taking the negative logarithm of (to base 2) has an additional multiplier of 10^120, the maximal number of bit operations that the observable universe could have performed throughout its history. Thus it's equivalent to subtracting 400 from the number of bits corresponding to sigma. If you've still got one or more bits left over after that (i.e. if Chi > 1), then you do indeed have a pattern that warrants the design inference. Cheers, and thanks for the quote, Joseph. vjtorley
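On vjtorley's reading, the two quantities differ by a constant and can be written down in a few lines. The sketch below (Python; the function names and illustrative values are mine) restates the page-19 formulas as just described, with phi_s = phi_s(T) and p = P(T|H) supplied by hand:

import math

def sigma(phi_s: float, p: float) -> float:
    # Specificity: -log2(phi_s * P(T|H)), in bits.
    return -math.log2(phi_s * p)

def chi(phi_s: float, p: float) -> float:
    # Specified complexity: -log2(10^120 * phi_s * P(T|H)), i.e. sigma
    # minus the ~398.6 bits of replicational resources (the 10^120 factor).
    return sigma(phi_s, p) - 120 * math.log2(10)

# Illustration with phi_s = 1 and p = 10^-150:
print(sigma(1, 1e-150))   # ~498.3 bits of specificity
print(chi(1, 1e-150))     # ~99.7; Chi > 1, so the design inference is triggered

So a Chi of 1 corresponds not to 1 bit of specificity but to roughly 400 bits of it: one bit left over after the 10^120 bit operations of the observable universe are paid for.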
#221 Really? Then I must certainly keep myself in check. My question immediately dismantles the core conclusion our guest opponent wishes to imply, and for that it is being ignored. However, I wouldn't want to seem ungrateful. Actually, I first posted this question 192 posts ago, and I felt completely composed while doing it. But I am happy to let you be the judge of that: - - - - - "So the question remains: Does the output of any evolutionary algorithm being modeled establish the semiosis required for [the] information to exist, or does it take it for granted as an already existing quality? In other words, if the evolutionary algorithm – by any means available to it – should add perhaps a 'UCU' within an existing sequence, does that addition create new information outside (independent) of the semiotic convention already existing? If we lift the convention, does UCU specify anything at all? If UCU does not specify anything without reliance upon a condition which was not introduced as a matter of the genetic algorithm, then your statement that genetic algorithms can create information is either a) false, or b) over-reaching, or c) incomplete." Upright BiPed
MathGrrl, at 193 I agree that people should directly address your questions. People: Can someone summarize the discussion? What we learned, didn't, and why? I could run it as a post. MathGrrl: Download? I'm a Canadian, hardly short of download capacity, so not clear re problem, but happy to learn. (Could be because I live within walking distance of the CN Tower http://www.cntower.ca ) O'Leary
QuiteID (#214) I agree with you that an applied mathematics journal would be a good place to publish articles by the Intelligent Design movement on CSI. Regarding bits: after reading Joseph's comment at #212, and the relevant passage on converting probabilities into bits on page 19 of Professor Dembski's article on specification, I am wondering whether I missed something, after all. But I still have to ask: if a Chi (specified complexity) value of 1 warrants a design inference, how many bits is that, and why? If Chi is measured in bits, then that would mean 1 bit warrants a design inference. Wouldn't it? vjtorley
Dembski is kind of AWOL. At his website, designinference.com, he has only one entry in 2010 and none in 2011. In 2007 he had like 15. These entries are scholarly, not "blog posts." What is he up to lately? Collin
After your “big finger from the sky” comment, I mistakenly thought you were here to encourage the fraud.
Not very familiar with Monty Python, are you?
Point to the word in my question that indicates hostility toward you, and I will be more than happy to retract it.
Having observed many more threads than I have commented on, I have noticed that, when frustrated, you tend to answer questions with questions. So, your whole question came across as a fit of pique. jon specter
critter at 217, Bill Dembski blogs here, but - just for clarification - this is not "his" blog. It passed into the hands of a Colorado not-for-profit some time ago, by his wish. He and I are two of five mods. I hope you find the information you seek. O'Leary
Critter, That's exactly what I'm thinking. He used to comment much more frequently on this blog. Collin
QID, yes yes, I am sure you are right. Implications haven't had an impact on any other discourse regarding ID, so I must have been making an unsupported assumption. Upright BiPed
I have been following this thread because I am genuinely interested in the math of evolution and design. (I had a year of work towards an MS in Math.) MathGrrl is asking for the math involved. Perhaps Dr. Dembski can supply some knowledge (this is his blog). critter
#213 1) KF = kairosfocus. You might have noticed, since he has been active on the same threads as you have been. 2) My apologies as well. After your "big finger from the sky" comment, I mistakenly thought you were here to encourage the fraud. I do apologize. Please allow me to make it up to you. Point to the word in my question that indicates hostility toward you, and I will be more than happy to retract it. Upright BiPed
While I probably don't understand CSI well enough, I thought that my calculations in 21, 28 and 29 were interesting. Does anybody else like my idea of calculating the tightness of fit between code and function and comparing it to the same code and function in an unrelated species to get an inference of design? It may not be CSI but I thought that the math was correct (and very simple). Collin
vjtorley, do you agree with me that the CSI arguments should be presented in a form acceptable to a top-notch applied math journal? I don't think they've been presented in that way yet, which is one reason that mathematics as a whole has ignored the issue. Upright Biped, most mathematicians have not found CSI and related ideas worth considering. I think they are worth considering, but the challenge has not (yet) been presented in ways that would force mathematicians to take them seriously. QuiteID
#208 KF took care of that long ago, but nothing is good enough if the conclusion has to be protected.
Oh, sorry. I am relatively new here and don't always get who is who. Which usernames are those scientists posting under?
While you are here, do you think you can answer one of the questions ruled off-limits on this thread?
I don't think I deserve that hostility. I came here as a supporter, hoping to see CSI calculated. That is still my hope. jon specter
QuiteID, The fact that the paper I linked to in comment 12 exists proves that scientists are interested in quantifying functional information, which is the equivalent of CSI. And I am not sure what VJ is referring to. In the paper he links to, Dembski specifically says:
Note that putting the logarithm to the base 2 in front of the product Phi_s(T)·P(T|H) has the effect of changing scale and directionality, turning probabilities into number of bits and thereby making the specificity a measure of information. page 18 (bold added)
Joseph
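The conversion Dembski describes in that passage is the standard information-theoretic one: taking -log2 of a probability yields a figure in bits. A minimal Python sketch, purely illustrative (the 10^-39 figure is the per-protein estimate quoted elsewhere in this thread):

    import math

    p = 2 ** -10              # probability of one specific 10-bit string
    print(-math.log2(p))      # 10.0 bits

    p = 10 ** -39             # the per-protein figure discussed in this thread
    print(-math.log2(p))      # about 129.6 bits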
#208 KF took care of that long ago, but nothing is good enough if the conclusion has to be protected. While you are here, do you think you can answer one of the questions ruled off-limits on this thread? "Does the output of any evolutionary algorithm being modeled establish the semiosis required for the information to exist, or does it take it for granted as an already existing quality?" Upright BiPed
Hi everyone, For people who might be interested in getting their hands on some solid probability figures relating to various molecular machines, here's a source I've just found, which I'd strongly recommend: http://theory-of-evolution.net/ . Here is a list of 40 irreducibly complex molecular machines, for those who aren't familiar with it already: http://creationbydesign.wordpress.com/2010/06/15/list-of-40-irreducibly-complex-molecular-machines-which-defy-darwinian-claims/ . vjtorley
Mathgrrl: After reading your posts, I'm beginning to think that your college major wasn't mathematics, as your handle suggests, but English (you're quite an articulate person) or possibly biology (since you display familiarity with various software programs designed to mimic evolution). Why do I say that? Looking through your comments, I can see plenty of breezy, confident assertions along the lines of "Yes, I've read that paper," but so far, NOT ONE SINGLE EQUATION, and NOT ONE SINGLE PIECE OF RIGOROUS MATHEMATICAL ARGUMENTATION from you. Instead, you've let us do all the mathematical spadework, while you've done nothing but critique it on general, non-technical grounds. This is highly suspicious. I'm calling you out. How much mathematics do you really know? You have complained that you don't know how to calculate the CSI for a bacterial flagellum, despite professing to be familiar with Dembski's works. But the calculation I performed for the CSI of a bacterial flagellum could have been done by anyone who had completed Grade 10 at high school. There was nothing advanced about the mathematics. So I have to ask: who are you, really? You write (#193):
There are no calculations of CSI that provide enough detail to allow it to be objectively calculated for other systems. The only example of a calculation for a biological system is Dembski's estimate for a bacterial flagellum, but no one has managed to apply the same technique to other systems.
Now that's just mean, rude and curmudgeonly. I'm a philosopher, not a mathematician, and it's been 30 years since I studied mathematics at university. I spent hours of my valuable time looking up Professor Dembski's old papers, tracking down the probabilities and re-reading his paper on specification to see if I'd understood the math aright, before calculating a figure of between 2126 and 3422 for the CSI of a bacterial flagellum, and you never even acknowledged my calculation. A simple "Thank you" would have been nice. Instead, you complained that CSI had only been calculated for a bacterial flagellum so far, and that "no one has managed to apply the same technique to other systems." Rubbish. Actually, it's quite easy to do. If you click here, you will find a list of 40 irreducibly complex molecular machines. If you scroll down to number 8, you will find one I've written about before: ATP synthase. Here's the most concise English description: "stator joining two rotary motors." This description corresponds to a pattern T. Given a natural language (English) lexicon with 100,000 (=10^5) basic concepts, you should be able to estimate Phi_s(T). Let's see you do it. What about the probability P(T|H)? Well, I've located a scientifically respectable source which calculates the probability of ATP synthase arising as 1 in 2^884, or 1 in 1.28x10^266. I invite you to calculate the CSI, using Dembski's formula. Can you, I wonder? I'm calling your bluff. vjtorley
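For what it's worth, here is a hedged Python sketch of how the requested ATP synthase calculation might go. The inputs are assumptions, not Dembski's own figures: a five-word description drawn from a 10^5-concept lexicon is taken to give Phi_s(T) of at most (10^5)^5 = 10^25, and P(T|H) is set to the 2^-884 probability vjtorley cites:

    import math

    LOG2_10 = math.log2(10)

    # Assumed inputs (illustrative only):
    log10_phi = 25                # Phi_s(T) <= (10^5)^5 for a five-word description
    log10_p = -884 / LOG2_10      # 2^-884 rewritten as a power of 10

    # Chi = -log2[10^120 * Phi_s(T) * P(T|H)], computed in log space
    chi = -LOG2_10 * (120 + log10_phi + log10_p)
    print(chi)                    # about 402, well above the cutoff of 1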
You sure got that right. Scientists like Denton, Behe, Orgel, Abel, Durston, Kenyon, Thaxton, to name a few.
Do you know any of them? It might be helpful to invite them over to further Mathgirl's education. jon specter
"Sounds an awful lot like a day in the life of a scientist" You sure got that right. Scientist like Denton, Behe, Orgel, Abel, Durston, Kenyon, Thaxton, to name a few. Upright BiPed
"when those works do not present with a level of precision that would be accepted by the vast majority of mathematicians. Perhaps that’s why these concepts have gone nowhere within the mathematics community." This is an example of naiveté that ID cannot afford. If these exact calculations demonstrated that information emerges as easily as rust, then they would be held up and waived around in the hands of NCSE lawyers in courthouses across the country. They do, however, show that the opposite is true. Upright BiPed
PaV, I think a solution for this dilemma -- and I do think it would be a real solution -- would be for someone to publish a description and defense of CSI in a top-flight applied mathematics journal. So far, the mathematics of CSI hasn't been developed with that degree of mathematical precision. QuiteID
MathGrrl: I noticed this in Schneider's response to Wm Dembski's objection to ev.
The ev paper did not make this claim since the phrase "complex specified information" was not used. It is unclear what this means. Shannon used the term "information" in a precise mathematical sense and that is what I use. I will assume that the extra words "complex specified" are jargon that can be dispensed with. Indeed, William A. Dembski assumes that information is specified complexity, so the term is redundant and can be removed.
You've alluded to Schneider before. Given his predilection for Shannon information, and his phrase, "Shannon used the term 'information' in a precise mathematical sense . . .", I suspect that you're a graduate student of his. In the interest of full disclosure, please tell us exactly what you're doing these days. Unless there are serious reasons for keeping this undisclosed, I take your refusal to answer as sufficient reason to no longer prolong this discussion. (Although it isn't a discussion, since you keep repeating over and over the same demand, sorry, request.) PaV
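For readers unfamiliar with Schneider's usage: his Rsequence statistic sums, position by position across a set of aligned binding sites, the drop from maximum entropy (2 bits per DNA position). A toy Python sketch of the idea, with an invented alignment and Schneider's small-sample correction omitted:

    import math
    from collections import Counter

    def r_sequence(sites):
        # Information content of aligned DNA sites: at each position,
        # 2 bits (log2 of 4 bases) minus the observed entropy.
        total = 0.0
        for column in zip(*sites):
            n = len(column)
            entropy = -sum((c / n) * math.log2(c / n)
                           for c in Counter(column).values())
            total += 2.0 - entropy
        return total

    sites = ["TATAAT", "TATGAT", "TACAAT", "TATAAT"]   # toy alignment
    print(r_sequence(sites))      # about 10.4 bits over six positions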
QuiteID (#191, 192) Thank you for your posts. I quite agree with your points that: (i) the information that can be stored in a system is proportional to the logarithm log_b(N) of the number N of possible states of that system; (ii) if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states; and (iii) when b is 2, the unit is the "bit" (a contraction of binary digit). My point, however, is that the expression [(10^120).Phi_s(T).P(T|H)] in Professor Dembski's formula for Chi doesn't refer to a number N of possible states, but to a probability: the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than that of pattern T and whose probability is no more than P(T|H), over the entire history of the universe (which is why the 10^120 factor is introduced). In my book, a probability is just a number, and the point of taking the log to base 2 and multiplying by -1 is to ascertain whether this probability is greater or less than 0.5. If the probability is far less than 0.5, then taking the log to base 2 of that probability and multiplying it by -1 will yield a number well in excess of 1, but it would be meaningless to equate this to bits. Consider Professor Dembski's statement that we can infer design by an intelligent agent if the specified complexity Chi is greater than 1. If you were to equate "1" to "1 bit" and then express Dembski's statement in terms of bits, you would get the nonsensical result that one bit is enough to warrant an inference to an intelligent agent! Surely that can't be what Dembski meant. That's why I interpret Chi as a cutoff number: if it's above 1, then we can be certain beyond reasonable doubt, on mathematical grounds alone, that the pattern was designed; and if it's 1 or below, then we need other compelling grounds if we are to make a warranted design inference. vjtorley
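vjtorley's cutoff reading is easy to check numerically: -log2 of the bounded probability exceeds 1 exactly when that probability falls below 0.5. A minimal sketch:

    import math

    def chi(p):
        # here p stands for the whole product 10^120 * Phi_s(T) * P(T|H)
        return -math.log2(p)

    print(chi(0.5))    # exactly 1.0: the borderline case
    print(chi(0.25))   # 2.0: the product is below 0.5, so Chi exceeds 1
    print(chi(0.9))    # about 0.15: no design inference on this reading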
To provide a “rigorous definition” of CSI in the case of any of those programs would require analyzing the programs in depth so as to develop a “chance hypothesis”. This would require hours and hours of study, thought, and analysis.
Sounds an awful lot like a day in the life of a scientist. jon specter
Dear MathGrrl: To provide a "rigorous definition" of CSI in the case of any of those programs would require analyzing the programs in depth so as to develop a "chance hypothesis". This would require hours and hours of study, thought, and analysis. You come here and just simply "ask" that someone do this. Why? You do it. You analyze, study and think about any one of those programs, and you think about, study and analyze Dembski's concept of CSI, and then you prove that, lo and behold, ev or Tierra, or what have you, is wrong.

What Dembski lays out in NFL is the conceptualization of a very abstract notion. His paper on Specification further elaborates and, frankly, complicates his original conceptualization. The hard labor will always be to take this conceptual framework and apply it to real-life situations. It can be done. But it would take an immense labor, and the output of all this would amount to a PhD thesis in length and complexity. And if you can't understand all of the work and effort that would be required, then I don't think you understand what's involved very well. That's your problem, not ours.

I've already posted a paper that has shown that, per his definition of complexity, neither ev nor Tierra produces any ongoing complexity. And the bit output they come up with, based on the standards of NFL's description of CSI, demonstrates these programs don't measure up to CSI.

Likewise, I've stated elsewhere---and you're aware---that I've looked at Tierra in some detail, but still in a cursory way, and from what I can see, its output is trivial. You have a "parasite" forming. A different "life form". What does that mean actually? That a small assembly-language program can, through generation of random changes, both lose its ability to "copy" itself, and then find a way to get another "organism" (i.e., a small program) to "copy" itself. Now I'm sure that the smallness and simplicity of the programming (let's remember the guy who wrote this "program" was a biologist, not a computer geek) might allow something like this to happen in random fashion IF YOU RUN the computer program long enough. Which is what happens. But from what I could see, both the loss and parasitism were simply the result of commands coming and going. Let's remember that the "copy" command, and its execution, have already been programmed in. So this basically amounts to what we see in so-called microevolution when an operon is turned on and off in bacteria, for example. Why should I spend another moment analyzing something as basic and simplistic as this output? Can you give me an answer, other than that you would be interested in it? Well, excuse me if I don't respond to your request.

Tom Ray, the 'inventor' of Tierra, made it available to anyone on the internet. It was Network Tierra. And what were Ray's expectations? He actually expected that by sheer computer power---that is, linking to other computers to use their CPU time---he was going to generate "software programs" that we "couldn't even imagine"---just like Darwinism, you see. You can see him making this claim on an internet video. Well, he made that claim years ago. What has happened since? No "software program". Tom Ray is now doing other kinds of research. IOW, a big, fat "dead-end".

The paper I cited put into mathematical language the very obvious conclusion I reached having taken a look at Tierra. As to ev, here's a link to Schneider's EV home. What do we see? Can you see the plateauing of the "complexity" after a thousand generations?
So, what is your point with this repeated request---I call it a demand because of this repetition? If you don't understand CSI, then write Wm. Dembski an email, and ask for clarification. It's as simple as that.

Again, one could, and can, do an ANALYSIS of whether or not CSI is present within the output of those programs, using the definition of Dembski. This is a response to an entirely different question, and the one which I think you think you're asking. But I'm not about to do it. You seem interested, so why don't you do that? "Creativity", like CSI, is an abstract concept. So, please, provide me with a rigorous mathematical definition of "creativity". Can you do that?

One can, however, apply the mathematical structure of CSI, as found in NFL and then refined in "Specification", to these various programs and demonstrate that they don't constitute CSI as described/defined by Dembski. This is something entirely different than "providing a rigorous mathematical definition" of CSI. Dembski has already done that. It's up to you to understand it, and then refute it. And, BTW, you would become a world-class luminary in the world of Darwinism if you could refute it. So why don't you start doing that, instead of just repeating your disproportionate demand for a "rigorous mathematical" refutation of ev and Tierra, etc. PaV
Joseph, I think MathGrrl has a point. The fact is that the mathematics of CSI is somewhat confusingly presented. The latest little dispute about whether CSI is in bits or not (you and I think it is, vjtorley thinks it's not) is illustrative. It won't do to say "read No Free Lunch and The Design Inference" when those works do not present it with a level of precision that would be accepted by the vast majority of mathematicians. Perhaps that's why these concepts have gone nowhere within the mathematics community. That doesn't mean they're wrong -- I hope they're not! -- but it does mean that the mathematics needs to be nailed down tightly. QuiteID
markf (#186) I'd now like to address your first point. You write that Dembski's estimates for the probability of a bacterial flagellum arising by stochastic processes "are based on assuming that all amino acids are equally likely and independent of each other." That's true, but in reality they are all equally likely, and they are independent of each other. That's just a fact of chemistry. A similar point applies to the DNA molecule: the four bases are equally likely, and they are independent of each other. If they were dependent on each other, then they'd produce a boring regular pattern like AGAGAGAG - i.e. mere order, rather than complexity. Order has low Shannon information. Hence I do not agree with you that the two figures I supplied (namely, 10^(-780) and 10^(-1170)) "effectively represent... a high and low estimate of ... a lower bound – the lowest the probability could reasonably be, given no other knowledge of the process by which the proteins were obtained." I would say that they represent the upper and lower bounds of a naive estimate, which may either be too optimistic or too pessimistic, but which is currently the best we have. You correctly state that "Dembski himself says that the precise calculation of P(T|H) is yet to be done" but you then add that "this was written after the works you refer to." Actually, Dembski said the same thing several years ago, in his 2003 paper, Still Spinning Just Fine: A Response to Ken Miller:
My point in section 5.10 was not to calculate every conceivable probability connected with the stochastic formation of the flagellum (note that the Darwinian mechanism is a stochastic process). My point, rather, was to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum. As I emphasized in No Free Lunch (2002, 302): "There is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody's favor." ... Bottom line: Calculate the probability of getting a flagellum by stochastic (and that includes Darwinian) means any way you like, but do calculate it. All such calculations to date have fallen well below my universal probability bound of 10^(-150). But for Miller all such calculations are besides the point because a Darwinian pathway, though completely unknown, most assuredly exists and, once made explicit, would produce probabilities above my universal probability bound. To be sure, if a Darwinian pathway exists, the probabilities associated with it would no longer trigger a design inference. But that's just the point, isn't it? Namely, whether such a pathway exists in the first place. Miller, it seems, wants me to calculate probabilities associated with indirect Darwinian pathways leading to the flagellum. But until such paths are made explicit, there's no way to calculate the probabilities. This is all very convenient for Darwinism and allows Darwinists to insulate their theory from critique indefinitely.
Enough on that. I'd now like to address a more fundamental point you raise, namely the interpretation of P(T|H). Let me say at the outset that my understanding of the significance of P(T|H) in Professor Dembski's paper, Specification: The Pattern that Signifies Intelligence is quite different from yours. I'm not accusing you of mis-reading Dembski's paper; what I'm suggesting is that Dembski's argument would make more sense if P(T|H) referred to the probability of a pattern T arising by pure chance. Please allow me to explain why. You write:
It is not legitimate to substitute these figures for P(T|H) in his formula, which is intended to be a genuine estimate of the probability of the bacterial flagellum based on an evolutionary hypothesis H. (Emphases mine - VJT.)
Certainly, there are passages in Professor Dembski's paper which support your interpretation. For instance, on page 18, Dembski writes that "H, here, is the relevant chance hypothesis that takes into account Darwinian and other material mechanisms," and on page 25, when discussing the bacterial flagellum, Dembski writes that "H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms." Again, on page 26, Dembski talks about the example of a lopsided die, and says that even if specified complexity eliminates the chance hypothesis that all sides will appear with probability 1/6, "that still leaves alternative hypotheses H' for which the probability of the faces are not all equal." And again, on pages 27-28, Dembski rebuts an objection frequently voiced by evolutionists, that "because we can never know all the chance hypotheses responsible for a given outcome, to infer design because specified complexity eliminates a limited set of chance hypotheses constitutes an argument from ignorance." Personally, I think Professor Dembski was being too generous to his opponents here, and his more recent papers at http://www.evoinfo.org point to a better response: if Darwinian processes can produce organisms with a large amount of biological information, then these processes must themselves have been rigged with information at the start by an Intelligent Designer, thereby enabling them to achieve these spectacular results. Or as Dembski & Marks put it on page 4 of their 2009 article, Life's Conservation Law: Why Darwinian Evolution Cannot Create Biological Information :
The challenge of intelligent design, and of this paper in particular, is to show that when natural systems exhibit intelligence by producing information, they have in fact not created it from scratch but merely shuffled around existing information. Nature is a matrix for expressing already existent information. But the ultimate source of that information resides in an intelligence not reducible to nature. The Law of Conservation of Information, which we explain and justify in this paper, demonstrates that this is the case. Though not denying Darwinian evolution or even limiting its role as an immediate efficient cause in the history of life, this law shows that Darwinian evolution is deeply teleological. Moreover, it shows that the teleology inherent in Darwinian evolution is scientifically ascertainable - it is not merely an article of faith. (Emphases mine - VJT.)
To return to the die example: if I found a die which, when rolled, yielded the first 100 digits of pi in base 6, I'd be quite certain that it was designed by some agent to do that. And if someone were to demonstrate to me that the laws of Nature and/or the initial conditions of the universe were sufficient to ensure (or make it highly likely) that the die would do that, I certainly wouldn't become a convert to naturalism. Instead, I'd say that some Intelligence must have designed those laws and/or initial conditions. In other words, I'd invoke the fine-tuning argument. The foregoing argument explains why I think that Dembski's argument is in fact better understood if Dembski's hypothesis H is treated as a pure chance hypothesis. And indeed, throughout most of his paper, Dembski writes as if that was what he meant. For instance, P(T|H) is defined by Professor Dembski in his paper as a probability: the probability of a pattern T with respect to the chance hypothesis H. Dembski repeatedly refers to "the chance hypothesis H" in his paper (see pages 3, 5, 6, 7, 8, 9, 12, 16, 18, 19, 20, 24 and 25) and on page 22, referring to a particular sequence of ten digits (1123581321), Dembski writes:
This sequence, if produced at random (i.e., with respect to the uniform probability distribution denoted by H), would have probability 10^-10, or 1 in 10 billion. This is P(T|H). (Emphases mine - VJT.)
I also think it's odd to describe a loaded die as coming up with a number (say, 6's) "by chance." If I found a die that kept rolling 6's, I wouldn't come up with an alternative chance hypothesis. Instead, I'd reject the chance hypothesis in favor of the alternative non-chance hypothesis that the die was biased. So I think we should treat H as a pure chance hypothesis, with a uniform probability distribution, when endeavoring to ascertain whether a pattern was designed by an agent or not. vjtorley
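To put a number on the die example (an illustrative calculation only): under the uniform chance hypothesis, a fair die matching one particular 100-digit base-6 sequence has probability 6^-100, which works out to roughly 10^-78, or about 258.5 bits in log2 terms:

    import math

    log10_p = -100 * math.log10(6)
    print(log10_p)                 # about -77.8, i.e. roughly 10^-78
    print(100 * math.log2(6))      # about 258.5 bits for the same event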
MathGrrl:
I would very much like to understand CSI well enough to test the assertion that it cannot be generated by evolutionary mechanisms.
My apologies, but I don't believe you. The reasons I don't believe you are:
1- "Evolutionary" mechanisms is meaningless. ID only argues against blind watchmaker-type processes having sole dominion over evolution.
2- That means ID doesn't attack evolutionary biology; it attacks blind watchmaker evolution.
3- "No Free Lunch" is readily available. The concept of CSI is thoroughly discussed in it. The math he used to arrive at the 500 bits as the complexity level is all laid out in that book.
4- Science isn't about "proof". But with our knowledge of cause and effect relationships, a designing agency is the only explanation for CSI.
5- Your scenarios are still bogus. Joseph
Mathgrrl, thanks for the sales pitch at 193. The dynamic on display here is, after all, not an uncommon occurrence in human interaction. It actually has a rather rich history (the Inquisition comes to mind). Questions may be asked in order to formulate a conclusion. This process is often the foundation of logic, justice, and discovery. Yet, as we all know, questions can also be asked not to formulate a conclusion, but as a means to demonstrate a conclusion which has already been reached. This method is very often contained in a closed environment where certain subjects are off limits, and any answers given must first fit through the arbitrary constraints imposed by the person asking the questions. Both of those elements are on rampant display on this thread (and the ones leading up to it). In this regard, what has all the appearances of an exercise to formulate a conclusion is nothing of the sort. Imagine a courtroom operating in such a way. And of course, the sales pitch you've just given is intended to present the latter process as an example of the former. Nice job. Upright BiPed
Mathgrrl, Some of the confusion is probably due to my lack of understanding of what CSI is. So I hope you do not believe that there is more confusion over CSI than there actually is just because I haven't read the material. Collin
Here are two quotes from the paper I cite above:

The issue of open-ended evolution can be summed up by asking under what conditions will an evolutionary system continue to produce novel forms. Artificial Life systems such as Tierra and Avida produced a rich diversity of organisms initially, yet ultimately peter out. By contrast, the Earth's biosphere appears to have continuously generated new and varied forms throughout the 4 × 10^9 years of the history of life. There is also a clear trend from simple organisms at the time of the first replicators towards immensely complicated organisms such as mammals and birds found on the Earth today. This raises the obvious question of what is missing in artificial life systems?

And:

Complexity is related to information in a direct manner, in a way to be made more precise later in this paper. Loosely speaking, available complexity is proportional to the dimension of phenotype space, and an evolutionary process that remained at low levels of complexity will quickly exhaust the possibilities for novel forms. However, intuitively, one would expect the number of novel forms to increase exponentially with available complexity, and so perhaps increasing complexity might cease to be an important factor in open-ended evolution beyond a certain point. Of course, it is far from proven that the number of possible forms increases as rapidly with complexity as that, so it may still be that complexity growth is essential for continual novelty. PaV
markf (#186) Thank you for your post. To take your second point first, I believe you have misinterpreted Professor Dembski's definition of specified complexity. You write:
[T]here is a problem in Dembski’s logic. He defines the specified complexity of an outcome as the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler. That is why P(T|H) is multiplied by Phi_s(T) in the formula.
This is a misunderstanding. First, the specified complexity Chi of an outcome is not a probability. A probability, by definition, has to be in the range 0 to 1, whereas Dembski's specified complexity can exceed 1. If it does, it is referred to as a specification and it is then attributed to an intelligent agent. Chi is actually minus 1 times the log to base 2 of a probability, namely [(10^120).Phi_s(T).P(T|H)]. If this probability is less than 0.5, then Chi will be greater than 1. Second, the reason why P(T|H) is multiplied by Phi_s(T) in the formula is not just in order to compute "the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler." Phi_s(T).P(T|H) is indeed a probability, but it is not the one you describe. Let's go back to page 17, where Dembski defines the specificity sigma as: sigma=-log2[Phi_s(T).P(T|H)] Dembski continues:
What is the meaning of this number, the specificity sigma? To unpack sigma, consider first that the product Phi_s(T).P(T|H) provides an upper bound on the probability (with respect to the chance hypothesis H) for the chance occurrence of an event that matches any pattern whose descriptive complexity is no more than T and whose probability is no more than P(T|H). The intuition here is this: think of S as trying to determine whether an archer, who has just shot an arrow at a large wall, happened to hit a tiny target on that wall by chance. The arrow, let us say, is indeed sticking squarely in this tiny target. The problem, however, is that there are lots of other tiny targets on the wall. Once all those other targets are factored in, is it still unlikely that the archer could have hit any of them by chance? That's what Phi_s(T).P(T|H) computes... (Italics and emphases mine - VJT.)
Thus the reason why P(T|H) is multiplied by Phi_s(T) in the formula is in order to compute the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler, and whose probability is no more than P(T|H). There is no suggestion here that the other patterns have the same probability as pattern T; rather the reverse. Subsequently, on page 24, Dembski introduces the 10^120 multiplier, to apply the probability Phi_s(T).P(T|H) over the entire history of the observable universe (where the maximum number of events = 10^120). The specified complexity Chi is: minus the log to base 2 of this probability. Later on, you wrote:
But this hides the enormous assumption that the probability of matching each of the other patterns is similar to or lower than the probability of matching the observed pattern.
I do not believe that Professor Dembski is making this assumption. There may well be other patterns with an equally simple description, but which are far more probable than the bacterial flagellum described by Dembski. That in no way weakens the point that if the probability of a bacterial flagellum arising by a stochastic process were shown to be astronomically low (e.g. 10^(-1170), as Dembski calculates) then it would be rational to infer that it was designed by an intelligent agent, if, after multiplying this astronomically low probability by the number of other patterns with an equally simple verbal description, and then multiplying that number by the number of events in the history of the observable universe, we still obtained a figure of less than 0.5 (in other words, -log2[(10^120).Phi_s(T).P(T|H)]>1). I think this answers your second point. I'll address your first and more substantive point in my next post. vjtorley
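The role of the two multipliers vjtorley describes can be traced in log space with the flagellum figures (a sketch only, using the exponents quoted in this thread):

    import math

    LOG2_10 = math.log2(10)

    # Union-bound idea: if each of the Phi_s(T) = 10^20 patterns has
    # probability at most P(T|H), the chance of hitting any of them is
    # at most Phi_s(T) * P(T|H); the 10^120 factor then covers every
    # event in the history of the observable universe.
    log10_phi, log10_p = 20, -1170
    log10_bound = 120 + log10_phi + log10_p
    print(log10_bound)             # -1030: still astronomically small
    print(-LOG2_10 * log10_bound)  # Chi of about 3422, well above 1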
Everyone, We're getting near the 200 comment mark, which is great, and I appreciate all the participation so far. I've noticed that threads here tend to load more slowly at around 300 comments and become difficult to use at 400, so this is close to the halfway point. I'll continue to respond directly to as many posts as I can, but I'd like to take a moment to summarize what I've learned up to this point and, possibly, refocus the discussion.

CSI is unique among the arguments of ID proponents in that it leads to positive, potentially testable claims. Every other ID argument I've seen is an attack on modern evolutionary theory, not explicit support for ID. Further, the claim that CSI is a reliable indicator of intelligent agency, if it could be demonstrated, would be world changing. Based on this, I would expect CSI to be the most active area of research for ID proponents, with new calculations being published frequently. Indeed, this is what I was hoping to find when I first became interested in the topic from lurking here and on other blogs.

My preliminary conclusions from this discussion differ greatly from my initial expectations. It appears to me that there are at least four major problems with CSI as used by ID proponents here:

1) There is no agreed definition of CSI. I have asked from the original post onward for a rigorous mathematical definition of CSI and have yet to see one. Worse, the comments here show that a number of ID proponents have definitions that are not consistent with each other or with Dembski's published work.

2) There is no agreement on the usefulness of CSI. This may be related to the lack of an agreed definition, but several variants that are incompatible with Dembski's description, as well as alternative metrics, have been proposed in this thread alone.

3) There are no calculations of CSI that provide enough detail to allow it to be objectively calculated for other systems. The only example of a calculation for a biological system is Dembski's estimate for a bacterial flagellum, but no one has managed to apply the same technique to other systems.

4) There is no proof that CSI is a reliable indicator of intelligent agency. This is not surprising, given the lack of a rigorous mathematical definition and examples of how to calculate it, but it does mean that the claims of many ID proponents are unfounded.

When I took advantage of Denyse O'Leary's kind offer to make a guest post, I fully expected a lot of tangential conversation in the comments. What I did not expect was for us to be nearly 200 comments in without anyone directly addressing the five straightforward questions I asked, without anyone providing a rigorous mathematical definition of CSI, and without anyone demonstrating how to calculate CSI for the scenarios I described. I would very much like to understand CSI well enough to test the assertion that it cannot be generated by evolutionary mechanisms. If there is any ID proponent that can provide me with the definition and examples I've requested, please do so before this thread reaches the limit of the blog software. MathGrrl
Further, if it's an actual number, it's a number of something. "Greater than 1 complex specified information" makes no sense. As noted above, the unit of bits is in the expression. It can only be removed by dividing by bits somewhere else in the expression. QuiteID
vjtorley, expressing information in log2 terms means expressing it in bits. That's a basic convention of information theory. I don't know why Dr. Dembski calls it "an actual number" -- maybe to stress that it's really quantifiable -- but it can't be to say that it carries no unit. As Wikipedia notes,
In 1928, Ralph Hartley observed a fundamental storage principle, which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm log_b N of the number N of possible states of that system. Changing the basis of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) log_b N. Therefore, the choice of the basis b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the "bit" (a contraction of binary digit). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information.
My emphasis. QuiteID
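The storage principle in the quoted passage can be verified numerically; a small sketch:

    import math

    N = 8                          # possible states
    print(math.log2(N))            # 3.0 bits, as in the quoted example

    # changing the base multiplies the logarithm by a fixed constant:
    # log_c N = (log_c b) * log_b N
    b, c = 2, 10
    print(math.log(N, c))
    print(math.log(b, c) * math.log(N, b))   # same value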
Alex73 @ 185 Indeed. In the beginning was the Word. :-) tgpeeler
VJ, The paper you link to is an extension of "No Free Lunch". And in NFL Dembski takes CSI to be at least 500 bits of specified information (page 156). Joseph
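As a cross-check on the 500-bit figure: under the bits convention it corresponds to a probability of 2^-500, which is roughly Dembski's universal probability bound of 10^-150:

    import math

    # a probability of 2^-500 expressed as a power of 10
    print(-500 / math.log2(10))    # about -150.5, i.e. roughly 10^-150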
Mathgrrl: I just noticed a remark you made in #131, which I believe represents a profound misunderstanding of CSI on your part:
Second, Dembski's CSI has units of bits. A change must be either an increase or a decrease in the number of bits. (Emphasis mine - VJT.)
This is incorrect. CSI is just a number. It has no units, and it does not represent bits. I'm sure you'll ask me to supply "chapter and verse" from Professor Dembski in support of my contention on this point, so here goes. Professor Dembski, in his paper, Specification: The Pattern that Signifies Intelligence defines (on page 24) the specified complexity Chi of pattern T given chance hypothesis H, minus the tilde and context sensitivity, as: Chi=-log2[10^120.Phi_s(T).P(T|H)] Phi_s(T) is a unitless number. On page 17, Dembski defines Phi_s(T) as the number of patterns for which S's semiotic description of them is at least as simple as S's semiotic description of T. P(T|H) is defined as a probability: the probability of a pattern T with respect to the chance hypothesis H. Since it is a probability, it must be a number greater than or equal to 0 and less than or equal to 1. Thus there are no units in the specified complexity Chi. It's just a number. That was why I gave it to you as a raw number in #173, when calculating the specified complexity of a bacterial flagellum. I didn't say: "The specified complexity of a bacterial flagellum lies somewhere between 2126 bits and 3422 bits." Instead, I said: "The specified complexity lies somewhere between 2126 and 3422." Professor Dembski himself confirms this interpretation in his paper, when he writes on page 34:
In my present treatment, specified complexity ... is now ... an actual number calculated by a precise formula (i.e., Chi=-log2[10^120.Phi_s(T).P(T|H)]). This number can be negative, zero, or positive. When the number is greater than 1, it indicates that we are dealing with a specification. (Emphases mine - VJT.)
I hope this clarifies matters for you. vjtorley
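Whatever one makes of the units question, Dembski's point that the number "can be negative, zero, or positive" is easy to see by evaluating the formula on toy inputs (a sketch, with Phi_s(T) set to 1 for simplicity):

    import math

    LOG2_10 = math.log2(10)

    def chi(log10_phi, log10_p):
        # Chi = -log2[10^120 * Phi_s(T) * P(T|H)], in log space
        return -LOG2_10 * (120 + log10_phi + log10_p)

    print(chi(0, -100))   # negative: the product 10^20 exceeds 1
    print(chi(0, -120))   # zero: the product is exactly 1
    print(chi(0, -150))   # about 99.7: positive, and above the cutoff of 1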
Noesis, What probability? Evolutionary biologists haven't even demonstrated the feasibility that, for example, chance and necessity can produce a bacterial flagellum from a population or populations that never had one. There isn't any evidence of chance and necessity constructing simpler multi-protein systems. So again, what probability? Evolutionary biologists have failed to demonstrate that a probability exists... Joseph
vj #173 Your comment #173 is interesting, but I think it combines your misinterpretation of Dembski's work with a logical error in the original work. 1) As Dembski says, his objective in estimating the probability of obtaining the 30 proteins in the bacterial flagellum as somewhere between 10^(-780) and 10^(-1170) is
…to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum.
In fact these estimates are based on assuming that all amino acids are equally likely and independent of each other (with an adjustment for duplicates in one case). So they effectively represent a lower bound – the lowest the probability could reasonably be, given no other knowledge of the process by which the proteins were obtained. The two estimates are not the upper and lower bounds of the probability of obtaining the proteins. They are a high and low estimate of the lower bound. It is not legitimate to substitute these figures for P(T|H) in his formula, which is intended to be a genuine estimate of the probability of the bacterial flagellum based on an evolutionary hypothesis H. Dembski himself says that the precise calculation of P(T|H) is yet to be done (and this was written after the works you refer to).

2) But even if they were genuine estimates of P(T|H), there is a problem in Dembski's logic. He defines the specified complexity of an outcome as the probability of that outcome fitting a pattern or any other pattern that is as simple or simpler. That is why P(T|H) is multiplied by Phi_s(T) in the formula. So he obtains the total probability of meeting that pattern or one that is as simple or simpler by multiplying the estimated probability of meeting the observed pattern (in this case "bidirectional rotary motor-driven propeller") by the total number of patterns. But this hides the enormous assumption that the probability of matching each of the other patterns is similar to or lower than the probability of matching the observed pattern. markf
tgpeeler quotes Dawkins: Dawkins actually says in "River Out of Eden" on page 19 that "Life is just bytes and bytes and bytes of digital information." Isn't it sweet that the loudest anti-theist says exactly the same as one can read in the book he hates and deplores the most? Because a few pages after John the apostle identifies Jesus of Nazareth as the Word (an old-fashioned expression for digital information), he also quotes him saying absolutely unambiguously: I am ... the life. Every now and then I am stunned by the absolute perfection of these words... Alex73
Noesis #176
"You either have not read or have not understood Dembski’s last paper on CSI, “Specification: The Pattern that Signifies Intelligence.” The approach to design inference is one of computing quantities."
Any measure of complexity (or even any measure tout court) is approximate in principle. To measure is to map quality onto quantity, and this map can never be perfectly exact, for the simple reason that quality is incommensurable with quantity by definition. Any measure is necessarily defective (Dembski's CSI included); the question is always how great the defectiveness is. This observation leads me to say that the increase of a protein is only quantitative, while the organized functional hierarchies of organisms imply quality (even if the ID measures try to reduce them to a number). The incommensurability between quality and quantity makes us understand that a mere quantitative increase of matter cannot produce qualitative organization. Analogously, simply increasing the number of rocks doesn't explain a whole cathedral. niwrad
GUYS Stop failing the Turing test. :D Anyway I think it will be not only possible to calculate CSI but relatively easy once we have a bioinformatics program capable of spitting out secondary, tertiary and quaternary structures from primary structure... But until then, studies such as the one by Axe which Meyer references in Signature in the Cell will do. For a protein 150 amino acids long, beta-lactamase, the ratio of working to non-working proteins is 10^-77. 10^-77 < 10^-40 therefore we have biological CSI. tragic mishap
MathGrrl says "Thank you in advance for helping me understand CSI. Let's do some math!"

I confess to only skimming through this thread, as it seems to be pretty much a rehash of the pro-ID, or we might say pro-mind, view and the anti-ID, or we might say naturalist/materialist/physicalist - NMPist - view, which claims (apparently) that the source of biological information (complex, functional, specified, or whatever) is time plus natural selection, that is to say, the laws of physics. In other words, what is the CAUSE of information?

Most every biologist I've read, even on the pro-NDT side (Mayr, Crick, Dawkins, Coyne, etc...) has no problem with the idea that there is indeed such a thing as biological information. Dawkins actually says in "River Out of Eden" on page 19 that "Life is just bytes and bytes and bytes of digital information." I quote him not to offer a "proof" of this but merely to point out that since the discovery of the structure of DNA by Crick and Watson, the idea of biological information has taken on ever increasing importance in biology and is widely recognized to exist - laying aside for the moment whether or not it can be measured to mf's or mg's satisfaction.

Just for fun, let's consider human information. The kind that is created by, well, humans. Like this post. What is the source of this information? Is it also the laws of physics, as the NMPist would have us believe? Or is it mind, as I would have us believe?

If we consider the prerequisites for human information I think we can identify at least 4 or 5, depending on how you count language. Let's count the symbols and rules of language as 2. Those rules operate within the laws of reason, so these rules (Being, Identity, Non-contradiction, Excluded Middle, Causality) are pre-req 3. How are the symbols arranged in order to encode a message? It seems as though they must be freely chosen. Otherwise, how to account for the fact that I am typing this instead of that? There is no POSSIBLE explanation grounded in physical law for why I am typing this instead of that, which suggests the question: well then, if physics isn't doing it, then what is? That's for another time. The last thing that is (at least) required is intentionality or purpose. A "scientist" might say "causality." What is it that causes these letters to appear "out here" in cyberspace? It seems that whatever it is that is freely arranging these English symbols in a (one hopes) logical fashion is also intending to do this. Otherwise, obviously, it wouldn't be done.

To recap, we need: symbols, rules, reason, free will, and purpose. Without these there is no human information. So the NMPist now has to explain the existence of this information in terms of the laws of physics. If he wants to be intellectually honest, that is. After all, if NMPism means that all that exists is physical, then obviously it follows that all explanations of these physical things MUST be found in the laws of physics. (Never mind for a moment the glaring - embarrassingly so - fact that these laws are also abstract and therefore beyond the reach of "science" because they cannot be sensed. I doubt that anyone has ever tasted or heard the law of gravity.)

Can the laws of physics explain any of the things on my list? No. It is not even conceptually possible, since information (although encoded in a physical substrate) is not itself physical. And if it's not physical, then physics can't explain it. Let me try to illustrate with some examples.
Why does "the dog" refer to Fido and "der Hund" also refer to Fido? Can this possibly be explained by reference to the laws of physics? No. It cannot. Why does "Es regnet" mean "it's raining in German and means absolutely nothing in English? Can this be explained by reference to physical laws? No. It cannot. If "b" is less than "c", and "a" is less than "b", then "a" < "c". This is necessarily true. Not even God can make it not true. So explain that in terms of physical law. Cannot be done. Free will cannot be explained by reference to physical law. Indeed the thorough going NMPist denies free will because everything must be explained by reference to physical LAW. We have the delicious irony of the fool denying that he has free will even as he exercises his free will to form the thought that he has none. Intentionality cannot be explained by physics. Indeed, this is why Dawkins and the rest rail against the idea of there being real purpose or design or intentionality in the universe. Let me offer a quick modus tollens argument to show the idiocy of this line of thinking. If I did not intend to be writing this post, I would not be writing this post. But I am writing this post. Therefore I do INTEND to be writing this post. For me, it is not too great a step to get from human information to biological information. There HAS to be a code for information of any kind. The code is not based on any laws of physics that I've ever read about. In fact, Yockey (2005), the physicist, says they are not. Oh heck, let me quote him. He says on page 5 that: "The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect. There is no trace of messages determining the results of chemical reactions in inanimate matter. If genetical processes were just complicated biochemistry, the laws of mass action and thermodynamics would govern the placement of amino acids in the protein sequences." BUT THEY DON'T. OK, that last part was me, not Yockey, but that was his point. In addition to the code there must be rules (else how did we recognize the existence of the CODE?). There must be free will (the code is not determined by the laws of physics - although - obviously - none of the chemical reactions violate the laws of physics). And purpose. Sigh. Why would there be anything at all unless someone (or SomeOne) determined that there would be? At any rate, I do not expect this will gain any traction in the anti-ID camp but every now and again one has to try. If mg is still reading you might ask yourself what "doing math" actually means. At its essence it's manipulating symbols according to various and sundry laws. Mathematics is a language too. A universal language. So how is it that you can manipulate those symbols freely? :-) tgpeeler
QuiteID (178): I'm tired, and I'm not going to look it up now. If I recall correctly, Dembski orders all of the specifications the semiotic agent is capable of emitting. To get the descriptive complexity of the specification, he takes the logarithm of the position of the specification in the ordering. Noesis
Everyone needs to take a break and read this Dilbert http://www.dilbert.com/strips/comic/2009-03-16/ Collin
Noesis, thanks for the heads-up. Although I support Dr. Dembski's arguments intuitively, I find them difficult to follow. Frankly I'm struggling with the whole thread. Perhaps I should read more before offering my opinion. Dr. Dembski's written a lot, so could you help me out by expanding on your last comment. Where specifically does he define specificity in quantitative terms? QuiteID
QuiteID:
So we can measure the information, but not the specification, which we can only say is either there or not.
Does anybody who believes in Dembski's CSI measure actually read what he wrote about it? He associates a numerical descriptive complexity with his specification of the bacterial flagellum, "bidirectional outboard rotary motor." Noesis
VJ thank you for another excellent post at 173 and 174. I have a question though. Did you have to pay someone today for those papers? I mean do you have to have special permission, perhaps a decoder ring, in order to find and have these papers - both the critiques and responses? How long have these been available? Miller's Critique? And Dr Dembski's responses? It seems these have been available for quite some time now, have they not? And it also seems that anyone with access to the Internet can get to them, is that not true VJ? If someone (particularly a mathematician) was earnestly so inclined to work with these concepts, it would seem they might have had access to them all along. - - - - - - - - Guess what? It won't be sufficient. Upright BiPed
niwrad,
The increase of a protein (possibly produced by gene duplication) by definition is a quantitative effect with no CSI per se. CSI implies quality and in principle quantity doesn't entail quality.
You either have not read or have not understood Dembski's last paper on CSI, "Specification: The Pattern that Signifies Intelligence." The approach to design inference is one of computing quantities. Noesis
MathGrrl, I was not challenging you with my remark about Omega. I was trying to give you a hint. I'll say outright this time that Dembski wanted (past tense, because we're talking about work he seems to have abandoned) dearly to have a probability measure on the space of possible biological forms, so he could take the negative logarithm of the probability of a form to get information. As Stuart Kauffman has pointed out, there can be no such probability measure, because none of us can know the space of possible biological forms (or phase space, as he puts it). Dembski does not know the phase space. He has often complained that evolutionary biologists won't give him the probabilities that he needs. He has indicated that evolutionary theory is deficient because it does not yield those probabilities. He seems to believe that if the theory says that there are chance contributions to biological evolution, then it should provide probabilistic models. This does not follow logically. If I see you flipping an apparently fair coin to select inputs to a "black box," then I know that there is a chance contribution to the behavior of the system. But there is no way for me to provide a detailed probabilistic model. In particular, I do not know the range of responses of the black-box system. Dembski promised long ago to produce an upper bound on the probability of evolution of the bacterial flagellum. He has yet to get back to us with that. If he should ever claim to have that bound, it will be bogus. Again, he cannot measure probability on a set he cannot hope to define. And without probability, there is no CSI. Noesis
Mathgrrl: In the interests of precision, and to avoid confusion, the line where I wrote in the post above: 20^30 = (10^39)^30 = 10^1170 should read: (20^30)^30 = (10^39)^30 = 10^1170. vjtorley
Mathgrrl: The specified complexity Chi of a bacterial flagellum is somewhere between 2126 and 3422, according to Professor Dembski's preliminary calculations of the probability of a bacterial flagellum arising by chance. I don't have a copy of Dembski's No Free Lunch: Why specified complexity cannot be purchased without intelligence (2002, Lanham, Maryland: Rowman & Littlefield) where he performs the original calculation. However, I found the following quote in a critical review by Professor Kenneth Miller, entitled, The Flagellum Unspun: The Collapse of 'Irreducible Complexity':
When Dembski turns his attention to the chances of evolving the 30 proteins of the bacterial flagellum, he makes what he regards as a generous assumption. Guessing that each of the proteins of the flagellum have about 300 amino acids, one might calculate that the chances of getting just one such protein to assemble from "random" evolutionary processes would be 20^-300, since there are 20 amino acids specified by the genetic code. Dembski, however, concedes that proteins need not get the exact amino acid sequence right in order to be functional, so he cuts the odds to just 20^-30, which he tells his readers is "on the order of 10^-39" (Dembski 2002a, 301). Since the flagellum requires 30 such proteins, he explains that 30 such probabilities "will all need to be multiplied to form the origination probability" (Dembski 2002a, 301). That would give us an origination probability for the flagellum of 10^-1170, far below the universal probability bound.
For the benefit of non-mathematical readers, I should point out that 20^30 = (10^39)^30 = 10^1170 (approximately). Miller criticized Dembski's logic in the paper I cited. Dembski replied in a paper entitled, Still Spinning Just Fine: A Response to Ken Miller . I'll just quote the relevant parts:
My point in section 5.10 [of No Free Lunch - VJT] was not to calculate every conceivable probability connected with the stochastic formation of the flagellum (note that the Darwinian mechanism is a stochastic process). My point, rather, was to sketch out some probabilistic techniques that could then be applied by biologists to the stochastic formation of the flagellum. As I emphasized in No Free Lunch (2002, 302): "There is plenty of biological work here to be done. The big challenge is to firm up these numbers and make sure they do not cheat in anybody's favor." Miller doesn't like my number 10^(-1170), which is one improbability that I calculate for the flagellum. Fine. But in pointing out that a third of the proteins in the flagellum are closely related to components of the TTSS, Miller tacitly admits that two-thirds of the proteins in the flagellum are unique. In fact they are (indeed, if they weren't, Miller would be sure to point us to where the homologues could be found). Applied to those remaining two-third of flagellar proteins, my calculation yields something like 10^(-780), which also falls well below my universal probability bound.
Some scientists have criticized Professor Dembski's probability calculations as being too simplistic. I would advise them (and you, if you haven't already) to read Dembski's 2004 paper, Irreducible Complexity Revisited, in which he lays out the numerous hurdles that have to be overcome before an irreducibly complex biochemical system can evolve by a Darwinian mechanism:
(1) Availability. Are the parts needed to evolve an irreducibly complex biochemical system like the bacterial flagellum even available?
(2) Synchronization. Are these parts available at the right time so that they can be incorporated when needed into the evolving structure?
(3) Localization. Even with parts that are available at the right time for inclusion in an evolving system, can the parts break free of the systems in which they are currently integrated and be made available at the "construction site" of the evolving system?
(4) Interfering Cross-Reactions. Given that the right parts can be brought together at the right time in the right place, how can the wrong parts that would otherwise gum up the works be excluded from the "construction site" of the evolving system?
(5) Interface Compatibility. Are the parts that are being recruited for inclusion in an evolving system mutually compatible in the sense of meshing or interfacing tightly so that, once suitably positioned, the parts work together to form a functioning system?
(6) Order of Assembly. Even with all and only the right parts reaching the right place at the right time, and even with full interface compatibility, will they be assembled in the right order to form a functioning system?
(7) Configuration. Even with all the right parts slated to be assembled in the right order, will they be arranged in the right way to form a functioning system?
For the time being, then, in the absence of any better calculations, I'm going to stick with 10^-780 and 10^-1170 as the upper and lower bounds for the probability of a bacterial flagellum arising as a result of stochastic processes. Now recall that Dembski, in his paper Specification: The Pattern that Signifies Intelligence, defines (on page 24) the specified complexity Chi of pattern T given chance hypothesis H, minus the tilde and context sensitivity, as: Chi = -log2[10^120 · Phi_s(T) · P(T|H)], and then goes on to define a specification as any pattern for which -log2[10^120 · Phi_s(T) · P(T|H)] > 1. He continues:
As an example of specification and specified complexity in their context-independent form, let us return to the bacterial flagellum. Recall the following description of the bacterial flagellum given in section 6: "bidirectional rotary motor-driven propeller." This description corresponds to a pattern T. Moreover, given a natural language (English) lexicon with 100,000 (=10^5) basic concepts (which is supremely generous given that no English speaker is known to have so extensive a basic vocabulary), we estimated the complexity of this pattern at approximately Phi_s(T) = 10^20 (for definiteness, let's say S here is me; any native English speaker with some knowledge of biology and the flagellum would do). It follows that -log2[10^120 · Phi_s(T) · P(T|H)] > 1 if and only if P(T|H) < (1/2)×(10^-140), where H, as we noted in section 6, is an evolutionary chance hypothesis that takes into account Darwinian and other material mechanisms and T, conceived not as a pattern but as an event, is the evolutionary pathway that brings about the flagellar structure (for definiteness, let's say the flagellar structure in E. coli).
Time for some math. Given that log2(10) = 3.321928094887362 (approx.), that Phi_s(T) = 10^20 and that 10^(-1170) <= P(T|H) <= 10^(-780), we can calculate Chi = -log2[10^120 · Phi_s(T) · P(T|H)] = -log2[10^(140) · P(T|H)]. For the most optimistic scenario for the chance formation of the bacterial flagellum, P(T|H) = 10^(-780), so Chi = -(140 - 780) × 3.321928094887362 ≈ 2126; for the most pessimistic scenario, P(T|H) = 10^(-1170), so Chi = -(140 - 1170) × 3.321928094887362 ≈ 3422. So there's your answer, in black and white: for the bacterial flagellum, the specified complexity lies somewhere between 2126 and 3422. Since this is far greater than 1, the flagellum can be described as a specification. I hope this answers your question. vjtorley
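For readers who want to check vjtorley's arithmetic, here is a minimal Python sketch. It works with base-10 exponents to avoid floating-point underflow; the 10^120 factor, Phi_s(T) = 10^20, and the two probability bounds are taken from his comment above, not derived independently.

    import math

    LOG2_10 = math.log2(10)  # about 3.321928094887362

    def chi(log10_p, log10_phi=20, log10_resources=120):
        # Dembski's Chi = -log2(10^120 * Phi_s(T) * P(T|H)),
        # with every factor supplied as a base-10 exponent
        return -(log10_resources + log10_phi + log10_p) * LOG2_10

    print(round(chi(-780)))   # 2126 (optimistic scenario)
    print(round(chi(-1170)))  # 3422 (pessimistic scenario)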
My understanding of CSI is that if it is present then that is a dead-on indicator of a designing agency. Meaning chance and necessity cannot account for CSI. It is a yes or no thing. And you go about determining it is present by counting bits and determining a specification is present. (I also think it is limited because of that- you need to be dealing with something readily represented as bits- but that is neither here nor there- just sayin'.) But anyway, yes or no. Function is a specification- does it exist, yes or no. We have something with a function- a bacterial flagellum- can chance and necessity account for it- yes or no. If yes, we never get to the design inference and CSI isn't looking so good as a dead-on design indicator. Joseph
QuiteID:
“500 bits of specified information.” Exactly! The CSI is measured in bits, but the thing being measured is the information, not the specification. Right? Specification is either present or not.
1- It can't be CSI without the specification
2- Yes, the thing being measured is the information
3- Therefore Shannon's methodology for measuring/calculating information may apply, with that caveat
4- As Stephen C. Meyer pointed out in "Signature in the Cell", Shannon provided a way of measuring/calculating information carrying capacity
5- Therefore if that information is also specified it does not change the measurement/calculation.
Good job Q. Joseph
Joseph, Your brain automatically supplies the missing word without telling you it's missing. Now here's the question, is that a designed process, and if so, who designed it? Mathgrrl, I'm not sure you will get the definition you want because "intelligence" and "intelligent agent" are not defined that specifically either. It's like trying to say that 2X=3Y when you know that X and Y are not quantifiable. You are trying to find the value of X (CSI) but Y seems to be the value of PI. Undefinable. Collin
Joseph, "500 bits of specified information." Exactly! The CSI is measured in bits, but the thing being measured is the information, not the specification. Right? Specification is either present or not. It's like, the speed of a washing machine is quantifiable as rpms. But the machine itself is either top-loading or front-loading. QuiteID
The NFL theorems do not apply to a situation in which there is only one fitness landscape
The notion of NFL is larger than the theorems by that name. In ID usage (as in the book No Free Lunch) it deals with the probability that an evolutionary algorithm can exist in the first place, before there is even a fitness landscape.
Can you provide a rigorous mathematical definition of CSI
With respect to a certain pattern and probability distribution, in principle, you can provide a measurement. Whether you decide that the definition is rigorous (or not) is less of a problem for ID than it is for Darwinists who claim certain structures can evolve without intelligence in the pipeline.
and example calculations for the four scenarios I described?
We can work on it if you can provide the probabilities for the landscapes existing in the first place without intelligent programming of the landscape. In the genetic algorithms I wrote, the probability space for creating that landscape is very small. An approximate measure is the amount of probability space needed to create a workable program from the language symbols. You can't just say Tierra, or ev, or Steiner will work by letting a random number generator create the fitness functions. This is like saying a printed document requires no intelligence because a computer and printer can print something out without human intervention after the "print" command is initiated. I've already said evolutionary algorithms can create CSI if they act as surrogates and extensions of human thought or other intelligent agencies. I see no reason to believe evolutionary algorithms capable of generating CSI can spontaneously self-create themselves. Say the Steiner source is 1000 characters long (in Fortran): what is the number of compilable programs that can be implemented with 1000 characters versus the space of all possible 1000-character strings? Certainly it is small. Will you be dissatisfied with anything less than multi-decimal precision when the probabilities are so obviously remote? Let's say a variable name and reference requires 10 characters and must be coordinated somewhere in the source code so the program works correctly. That probability of success would be something like 1 in 10^40 (if we include special characters). And that is only the beginning of problems. I would hardly stake much claim in a theory having a 1 in 10^40 chance of being true. And that is a generous figure, by the way... Even if you said biological systems implement evolutionary algorithms, you have no theoretical justification to say the biological systems self-created their capabilities any more than Tierra, ev, or Steiner self-created themselves. They all needed intelligent designers. Praise be. By the way, Mendel's Accountant is a superior model of evolutionary algorithms in nature. Why aren't those results (versus ev, Tierra, Steiner) used in scientific discussion? I suppose because they give answers Darwinists don't like, even if they might be correct. scordova
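To make scordova's back-of-envelope figure concrete, here is one way to reconstruct it in Python. The roughly 100-symbol alphabet and the two coordinated occurrences of the identifier are my assumptions about his setup; he states only the 10-character length and the 1 in 10^40 result.

    # Reconstructing the "1 in 10^40" figure: a 10-character identifier
    # drawn from a roughly 100-symbol alphabet (an assumed count) must be
    # matched exactly at two coordinated places in the source (also assumed).
    alphabet_size = 100
    identifier_length = 10
    coordinated_sites = 2

    p = (1.0 / alphabet_size) ** (identifier_length * coordinated_sites)
    print(p)  # about 1e-40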
Mathgrrl...such dedication to detail. Yet, she refuses to acknowledge a key reality of what she is "seeking" to understand. Perhaps that reality stands as an impediment to the claim she wishes to make. "Does the output of any evolutionary algorithm being modeled establish the semiosis required for [the] information to exist, or does it take it for granted as an already existing quality". To answer this question Mathgrrl, you don't even have to agree with me that information only exists as a matter of symbols and rules. In the matter at hand, the fact that it does exist as a mapping of discrete objects is not even in dispute. So why is it that you cannot acknowledge it? Upright BiPed
"in biology CSI refers to biological function" I don't think that's true of biology generally, as the term is used by very few people in the field. QuiteID
"But anyway seeing that you ignore most of what I post dealing with you is a waste of time." Because you have failed to provide the one thing MathGrrl is asking for: a methematically rigorous definition of CSI. Grunty
Mathgrrl, Do you believe that the word "intelligence" has been defined rigorously? Collin
Collin, Strange that when I read it the missing word is there. Funny how that works Joseph
And if you cannot provide a mathematically rigorous definition of a computer program then you don't know what you are talking about and are a waste of time. Joseph
MathGrrl:
Unless you have a mathematically rigorous definition of CSI,
500 bits of specified information, as demonstrated by the math in "No Free Lunch". All the rigor for determining that is in that book. That said, all that has to be done is for someone to come along and demonstrate that a bacterial flagellum can evolve via an accumulation of genetic accidents, and that will be that for the design inference for the BF. But anyway, seeing that you ignore most of what I post, dealing with you is a waste of time. Joseph
I know, for a fact, that the bacterial flagellum is a specified functional biological system. Not any ole sequence will produce one. I also know a BF contains thousands of parts- and that is before breaking it down into bits. Last I checked thousands is greater than 500 and 500 is the number I am looking to meet or break.
Unless you have a mathematically rigorous definition of CSI, you don't actually know that a bacterial flagella is "full of it". Without such a definition, you don't even know what "it" is. If you can't define your terms, claiming that CSI is an indicator of intelligent agency is, to put it as politely as possible, premature. All I'm asking for in my original post and throughout this thread is for a rigorous definition and some example calculations so that I can test the claims that ID proponents make with respect to CSI. I hope you'll consider providing that information. MathGrrl
Joseph, Sorry for making fun. I think your point is actually very good. Collin
Joseph, you are so dramatic. I noticed that you copied the same error in your 158 message as you did in your message at 145. That must be a posting-duplication mutation. I think that the CSI did not translate. :) Collin
MathGrrl, Please provide the rigorous mathematical definition for a computer program and you will see how wrong you are. Joseph
Mathgrrl, I'm not sure he will succeed. It's probably like quantifying consciousness or intelligence. Psychologists say (tongue in cheek) that IQ tests measure intelligence and intelligence is what IQ tests measure. They know that it is tautological, but they've kind of given up on justifying it. There is just no way of making IQ correspond perfectly to anything in the real world. But it is useful in making predictions (even if they are inexact). Collin
MathGrrl:
"You've already told me that bacterial flagella are 'full of CSI'. Surely you wouldn't make a claim like that without being able to support it? Please show me the math that caused you to reach your stated conclusion."
I know, for a fact, that the bacterial flagellum is a specified functional biological system. Not any ole sequence will produce one. I also know a BF contains thousands of parts- and that is before breaking it down into bits. Last I checked thousands is greater than 500 and 500 is the number I am looking to meet or break. Joseph
MathGrrl:
Does that calculation hold even if we find that the gene evolved via known evolutionary mechanisms from precursors that coded for a less useful, but still workable, protein?
Do you read my posts? "Evolutionary mechanisms" is meaningless.
Since you seem to understand how to calculate CSI in some detail, could you please do so for the four scenarios I described?
No, they are bogus.
Please explain how the paper you referenced aligns with Dembski’s definition of CSI in Specification…. As I’ve tried to make very clear, I am interested in understanding the specific metric used by ID proponents.
You mean explain it AGAIN? I told you by quoting Dembski- in biology CSI refers to biological function. And the paper deals with the information pertaining to biological function. Joseph
niwrad,
About the "detailed calculations" I think that formulas are useful tools that one can apply after some basic principles are stated. In other words, formulas necessarily come after principles. For now we disagree on principles, so it is useless to put formulas on the table.
We're not disagreeing on any principle. I'm just asking for clarification on a core ID concept. Given that happy state, could you please provide a rigorous mathematical definition of CSI based on Dembski's discussion in Specification... and show how to calculate it for the four scenarios I described in my original post? MathGrrl
Joseph,
Do the research and find out how many proteins are used- how many amino acids in each protein.
You've already told me that bacterial flagella are "full of CSI". Surely you wouldn't make a claim like that without being able to support it? Please show me the math that caused you to reach your stated conclusion. MathGrrl
MathGrrl #133
"You’re just restating your original claim that duplication does not increase CSI. I explained how duplication in biological systems can result in significant biochemical changes. If you maintain that this does not increase CSI, please show detailed calculations for the scenario I described."
Let's suppose that a gene has CSI X. Let's suppose that an organism has CSI Y. I think we both agree on the fact that Y is far greater than X (a gene is an infinitesimal part of an organism). Can the difference Y-X be caused by simple duplications? In software engineering duplication doesn't work. You say that this analogy is flawed, but after all informatics/robotics is the technological field most similar to biology. It would be very strange that what doesn't work at all in the former works so well in the latter. You say that duplication in biological systems can result in significant biochemical changes, but you are very far from demonstrating that duplication is the cause of Darwinian evolution. About the "detailed calculations" I think that formulas are useful tools that one can apply after some basic principles are stated. In other words, formulas necessarily come after principles. For now we disagree on principles, so it is useless to put formulas on the table. Organization (and biology eminently shows organization) is quality. In the end, the principle on which we disagree is that, given X quality, by doubling X we obtain more quality. This is simply impossible. I repeat that this way we increase quantity only, while you repeat that quality (what are your "significant biochemical changes" but quality?) increases too, and this is absurd. niwrad
Joseph,
One "easy" example of doing so is taking a gene that cannot tolerate any variation- for example say it codes for a protein that has 200 amino acids- all have to be in that specific order. 6 bits per amino acid (2^6 = 64) x 200 amino acids = 1200 bits of specified information. And that means CSI is present.
Does that calculation hold even if we find that the gene evolved via known evolutionary mechanisms from precursors that coded for a less useful, but still workable, protein? Since you seem to understand how to calculate CSI in some detail, could you please do so for the four scenarios I described? I would very much like to understand it well enough to compute it myself.
(and that MathGrrl cannot see the connection between CSI and that linked paper tells me she needs to do a lot of reading before asking her questions)
Please explain how the paper you referenced aligns with Dembski's definition of CSI in Specification.... As I've tried to make very clear, I am interested in understanding the specific metric used by ID proponents. MathGrrl
MathGrrl: Excellent! Please provide a detailed calculation to show me how to objectively determine exactly how much CSI is present in "the" bacterial flagellum (pick whichever flagellum you prefer). Do the research and find out how many proteins are used- how many amino acids in each protein. Then find out how much variation each can tolerate. Then follow the instructions in the paper I linked to in comment 12- heck, everything I just typed is paraphrased from that. Joseph
Collin,
Do you think that in order to calculate CSI it must be quantified?
While I don't understand Dembski's description well enough to calculate CSI myself, it is clear that he measures it in bits. MathGrrl
OK, I think I have a way through this. Information is quantitative, yes? Specification, however, is not (it's certainly not in Orgel). So we can measure the information, but not the specification, which we can only say is either there or not. Does that seem reasonable? I have a bit of difficulty with this, from PaV above:
"Specifications" are DISCOVERED: I see something. It suggests a pattern to me. I uncover the pattern (which means that it is translatable, or functional).
The first part (suggests a pattern to me) sounds overly subjective, until the second part (uncover the pattern), which suggests some possibility of measuring the pattern. QuiteID
Joseph,
And even then the BF is evidence for design as it is full of CSI.
Excellent! Please provide a detailed calculation to show me how to objectively determine exactly how much CSI is present in "the" bacterial flagellum (pick whichever flagellum you prefer). MathGrrl
MathGrrl- from comment 117: The point of CSI is that its existence is a sign of a designing agency. CSI is defined as X number of bits of specified information. In "No Free Lunch" X = 500, which the math shows corresponds to a probability below the universal probability bound. With respect to biology, specified information equates to biological function. To see if CSI is present we need to determine if there is > 500 bits of specified information. One "easy" example of doing so is taking a gene that cannot tolerate any variation- for example say it codes for a protein that has 200 amino acids- all have to be in that specific order. 6 bits per amino acid (2^6 = 64) x 200 amino acids = 1200 bits of specified information. And that means CSI is present. That said, if there can be some variation you have to figure that in, which brings us back to the paper I linked to in comment 12. (and that MathGrrl cannot see the connection between CSI and that linked paper tells me she needs to do a lot of reading before asking her questions) Joseph
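Joseph's method, taken at face value, is easy to mechanize. Below is a minimal sketch that assumes his figure of 6 bits per amino acid (3 DNA bases at 2 bits per base) and the 500-bit cutoff from No Free Lunch; the variation discount is left as a bare input, since how to estimate it is exactly what the thread is arguing about.

    BITS_PER_AMINO_ACID = 6  # 3 DNA bases x 2 bits per base, Joseph's figure
    THRESHOLD_BITS = 500     # the "No Free Lunch" cutoff

    def specified_bits(n_amino_acids, tolerated_fraction=1.0):
        # tolerated_fraction < 1.0 would discount for allowed variation;
        # estimating that discount is the open question in this thread
        return BITS_PER_AMINO_ACID * n_amino_acids * tolerated_fraction

    bits = specified_bits(200)          # his zero-variation example
    print(bits, bits > THRESHOLD_BITS)  # 1200.0 True, so CSI present on this method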
MathGrrl:
Please provide the rigorous mathematical definition of CSI that shows that it cannot be calculated for the scenarios I describe.
Please provide the rigorous mathematical definition for a computer program and you will see how wrong you are. Joseph
Mathgrrl, Do you think that in order to calculate CSI it must be quantified? In other words, would I have to be able to look at a code, genome or sentence and say, "This has 7 CSIs" or 1 "Dembski" (like one Watt or one Calorie or one Newton). I don't know if it can be quantified. In economics, attempts have been made to measure "utility" (similar to "value" or "benefit"). They call it 1 util. But as I understand it, those attempts have not led to "rigorous mathematical definitions" of utility. But that does not mean that "utility" does not exist or that it is not helpful as a concept in increasing understanding of an economy. I guess I am not optimistic that you will find a rigorous mathematical definition of CSI. Again, I would hope that Mr. Dembski would weigh in on this issue. Collin
Mathgrrl, At this point, the whale poo at the bottom of the ocean has succeeded in realizing that my argument with you is not over the math, it is over the completeness of your conclusions (post #31). If you wish to have a discussion with me, you can take the opportunity to finally address the issue I have now grown tired of bringing up to you. "Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality". Upright BiPed
MathGrrl:
The only point I would like to add is that CSI is claimed by ID proponents, including Dembski, as an unambiguous indicator of intelligent agency for biological artifacts such as the bacterial flagella. If CSI were really only about origins, such claims would be ridiculous on their face.
Why would it be ridiculous on their face? You can't have a bacterial flagellum without the bacteria. And even then the BF is evidence for design as it is full of CSI.
The fact is that ID proponents do claim to be able to measure CSI in biological systems without reference to their origins.
That has been explained for you also. It's as if you ignore more than half of what is posted. If that is how you are going to be then fine, I suggest everyone leave you alone. Joseph
Noesis, My organic view of information has been well noted. It presents no problem, and results in much the same conclusion. Personally, I blame Shannon for conflating noise with data. :) Upright BiPed
Mark, I find it interesting that you see different opinions among IDists as "important". I wonder what exactly makes the grade of importance in your estimation. Obviously, if I drink the Kool-Aid down at the NCSE, then ID is an insignificant group of incorrigible religious fanatics that have been repeatedly discredited by scientific observation. One would think the number of scientific facts that their ideas contradict must be an amazing document to behold (if it only existed). On the other hand, Darwinism and its corollaries represent no less than the modern scientific unification of all acquired human knowledge, collected together into the Absolute Truth of our reality. So powerful is it, that it should be legislated on one front, and used as the billy club of enforcement on another. Given this, I am interested. If it is true that any differences of opinion are marked as "important" among the forever discredited, how much more important are these same differences among those that command the obedience of everyone? It would seem to me that if differences of opinion carry the importance you suggest, then such importance would signify one thing among the insignificant (e.g. "who cares"), but must certainly mean something entirely different among those who cannot be denied (e.g. "no questions allowed"). For instance what if one proponent should say that natural selection is the ultimate creative force in the cosmos, while another says it is a weak force which is not even dominant? What if one should say that the theory not only predicts but demands gradualism, while another suggests rampant cladogenesis? What if one should say that the tremendous improbability of Life comes from a single ancestor, while another suggests that Life may have had several beginnings? Are these important, or should we genuflect in their presence and whisper to ourselves the comforting slogans of the authority? And what happens when their contradictions occur not only among themselves, but against other factual observations? We are told that DNA is at the center of Life; a digital code that can accommodate any amount of information. We are also told with absolute certainty that it cannot be the result of anything other than a natural process. Yet, where is there any actual evidence whatsoever that a natural process can create a digital code? I suppose what constitutes "importance" will be forever subjective, no? In any case, you skipped my question with the same aplomb as Mathgrrl: "Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality". Upright BiPed
Noesis #122
"Suppose that duplication of a gene doubles the amount of a protein produced by a cell. This can have a huge impact on phenotype. MathGrrl seems to have measure of CSI on the phenotype, and not the genotype, in mind. Take my behavior as the phenotype. If you send me an email message, I may put off dealing with it. If you repeat the transmission, I will generally respond promptly. What I’m describing here is a nonlinear response to a redundant transmission."
The increase of a protein (possibly produced by gene duplication) is by definition a quantitative effect with no CSI per se. CSI implies quality, and in principle quantity doesn't entail quality. Such quantitative effects can trigger some cellular events only if the cell allows them by design. Anyway, the "huge impacts on phenotype" caused by the increase of a protein have nothing to do with the immense functional hierarchies found in organisms that Darwinian macroevolution claims to explain naturalistically. About the "nonlinear response to a redundant transmission" by a human, I repeat somewhat what I said previously: it is not the redundancy of the message that carries real new information, but rather the intelligence of the receiver, which applies additional meanings/decisions/interpretations fundamentally not contained inside the message (even if duplicated). niwrad
Joseph,
markf:
All that Mathgrrl is asking is how do you calculate CSI in some specific cases.
Right and at least some IDists are saying those cases are bogus.
Please provide the rigorous mathematical definition of CSI that shows that it cannot be calculated for the scenarios I describe. I see nothing in Dembski's work that suggests this is the case. MathGrrl
Noesis,
Where is the rigorous definition of the sample space Ω? If the sample space is ill-defined, then so is CSI.
I'm afraid you're demonstrating the same misunderstanding as Upright BiPed. I don't know how to calculate CSI, despite having read the relevant material by Dembski. I'm asking the ID proponents here to help me out. Could you please provide a rigorous mathematical definition of CSI based on Dembski's discussion in Specification... and show how to calculate it for the four scenarios I described in my original post? MathGrrl
PaV,
I think it is the height of hubris to come to this blog and DEMAND that someone demonstrate to you in a mathematically rigorous fashion that CSI was NOT generated by Tierra and its ilk.
I'm not demanding, I'm asking as politely as I can. I'm also not asking for what you claim I am, I'm simply requesting a few example calculations so that I can understand how to measure CSI objectively myself.
That is the stuff of PhD work.
It shouldn't be. While I don't find Dembski's explanation in Specification... to be clear enough to allow me to calculate CSI, none of the math there is beyond an average high school student.
Shallit has tried to prove Dembski wrong, and it turned out Shallit was wrong.
Could you provide a reference? I would like to learn from Shallit's mistakes. And I'll ask, politely, again: Could you please provide a rigorous mathematical definition of CSI based on Dembski's discussion in Specification... and show how to calculate it for the four scenarios I described in my original post? MathGrrl
scordova,
The question is whether evolutionary algorithms can spontaneously generate CSI without authorship of intelligent agency.
That is indeed the question I am trying to answer. Can you provide a rigorous mathematical definition of CSI and example calculations for the four scenarios I described?
That is the subject of No Free Lunch discussions.
The NFL theorems do not apply to a situation in which there is only one fitness landscape, nor to the situation where the fitness landscape is dynamic. That makes it doubly inapplicable to the world we observe. MathGrrl
Mark Frank,
I agree that CSI is supposed to be a method for making deductions about origins. However, the whole point of Dembski’s paper (and indeed his other work) is that he suggests that CSI is a property of an object that you can assess without knowing anything about its origins. All that Mathgrrl is asking is how do you calculate CSI in some specific cases. According to Dembski it should be possible to do this without knowing anything about the origins of the object. So Joseph’s objection that the situations she puts forward are not about origins is irrelevant.
Exactly, Mark, thank you for stating this so clearly. Can anyone provide the detailed calculations I requested in the original post of this thread? MathGrrl
niwrad,
“The difference is, as noted in the original post and in my post 6, a duplicate gene can lead to an increase in production of a particular protein, with significant impact on the subsequent biochemistry. Such a change in protein production can even enable or disable other genes. The analogy to email or books is fatally flawed.”
My general reasoning was: given a simple text string X with CSI(X), the concatenation of another X, giving XX, has CSI(XX) = CSI(X).
You're just restating your original claim that duplication does not increase CSI. I explained how duplication in biological systems can result in significant biochemical changes. If you maintain that this does not increase CSI, please show detailed calculations for the scenario I described. MathGrrl
StephenB,
markf: “Mathgrrl points out Dembski writes in his paper (which he says supersedes all other definitions of CSI) (Dembski) “By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause” -markf: “i.e. we should be able to look at an object and assess its CSI without knowing anything about its origins.” Dembski is speaking about the how of the origin not the fact of the origin. In other words, we can assess its CSI without knowing anything about the process or the mechanism that produced it. He is not saying that CSI could be about something other than origins.
That does not follow at all from either what Dembski has written, the calculations in his books and papers, or how CSI is used by other ID proponents. Are you asserting that CSI cannot be calculated for biological systems or components? If not, please show how to calculate it for my four scenarios. MathGrrl
Collin,
So here is your calculation: when the command (build X amount of protein Y) results in X amount of protein Y, then you have a perfect fit between code and function. 100%. But if you get a gene duplication that says “Build 2X amount of Protein Y” and you only get 1.9X amount of protein Y, then you have less than 100% fit between code and function and therefore you have a decrease in CSI. But if you get 2X amount of Protein Y, you do not have an INCREASE in CSI, you have a CHANGE in CSI.
First, your approach highlights one of my points of confusion around CSI, namely what constitutes a valid specification. There seems to be a great deal of disagreement on this. The specification I provide in my example is "Produces at least X amount of protein Y." Instead of changing that, please explain why it isn't a reasonable specification. Second, Dembski's CSI has units of bits. A change must be either an increase or a decrease in the number of bits. Are you agreeing with vjtorley and others that CSI can be increased by gene duplication? MathGrrl
Upright BiPed,
I had made the decision to stop asking you to acknowledge the point. It had become obvious that you have no intention of doing so. I see now that this judgment has been confirmed yet again.
I am disappointed in your decision and will certainly happily continue the discussion with you, should you ever decide to do so. The arguments of someone who thinks it logical to ask others to define his terms for him should prove . . . interesting. MathGrrl
SCheesman,
First, what is the “object” in your example? Is it the act of duplication? Is it the existence of the protein that is being duplicated? Is it the specific action of the protein? Is it the degree of efficacy of that production? Is it the composition or complexity of the item being produced?
In my gene duplication scenario, the object is the post-duplication genome. Can you provide a detailed calculation for the CSI of this object? MathGrrl
Joseph,
Your four scenarios are bogus as not one deals with ORIGINS and CSI is all about ORIGINS
Mark Frank has already addressed this objection quite concisely. The only point I would like to add is that CSI is claimed by ID proponents, including Dembski, as an unambiguous indicator of intelligent agency for biological artifacts such as the bacterial flagella. If CSI were really only about origins, such claims would be ridiculous on their face. The fact is that ID proponents do claim to be able to measure CSI in biological systems without reference to their origins. As Dembski himself states in the paper referenced in my original post, "Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?" Note the phrase even if nothing is known about how they arose. MathGrrl
QuiteID:
Joseph, if you’re saying that bad mutations & duplications are chance but good mutations & duplications are design, what’s your evidence for that?
That's not what I am saying. The bad mutations that aren't point mutations are the result of a design gone bad. Joseph
And I hypothesize that when a genome is doubled and has an effect, the negative results of that duplication are almost always expressed unless another system in the cell takes an active role in correcting the problem (error checking, redundancies, etc). Collin
Collin, of course. Personally, I don't think "good" mutations or duplications are common enough to support evolution. But for some here to suggest "if it's a good mutation, then it's designed or frontloaded" -- well that's just dumb. QuiteID
Joseph, if you're saying that bad mutations & duplications are chance but good mutations & duplications are design, what's your evidence for that? QuiteID
No one is denying that doubling a genome will have an effect. The question is, does the effect cause a new and beneficial result? The effect, if not absolutely perfect, is almost always a deterioration of a multi-faceted function. It usually degrades and does not lead to a beneficial result, much less the construction of a very complex organelle or something like that. Collin
niwrad #120, Suppose that duplication of a gene doubles the amount of a protein produced by a cell. This can have a huge impact on phenotype. MathGrrl seems to have a measure of CSI on the phenotype, and not the genotype, in mind. Take my behavior as the phenotype. If you send me an email message, I may put off dealing with it. If you repeat the transmission, I will generally respond promptly. What I'm describing here is a nonlinear response to a redundant transmission. Noesis
Mathgrrl, I would love it if you'd address my comment #95. Concerning your situation #2 (Schneider's ev) do the genomes do something other than bind to sites? I mean, how is it different from dropping a mixture of square and round pegs into a sieve and having the round pegs go into the round holes and the square pegs going into the square holes after enough shaking around? Collin
Spiny Norman #64
"Depends what you mean by “sends an email twice”. I’m an IT guy. Duplicate emails are not usually identical. I can examine headers to see whether they were sent by the same mail client, from the same server, with the same unique MessageID, passed through the same set of mail servers, etc etc."
I agree that in general there are no two perfectly identical things or events in the entire universe (by Leibniz's principle of the identity of indiscernibles). However my argument that duplication of messages does not increase CSI is fully independent of the medium. Hence your notes as an IT admin about email protocols and systems are irrelevant. In fact you can just as well imagine the two identical messages written on paper or any other medium, and nothing changes about the fact that CSI doesn't increase. niwrad
markf, OTOH if any sequence of amino acids can result in a protein then specified information really isn't. If Craig Venter inserted randomly generated synthesized DNA into a bacteria that had its DNA removed and it worked, specified information would evaporate. Joseph
QuiteID:
I don’t know why you say that. For example, specific gene duplications have been associated with various medical problems. Unless we say those duplications in those patients are designed (to hurt them?) those seem to be undirected.
Or caused by a program corrupted by the blind watchmaker. Joseph
markf:
I agree that CSI is supposed to be a method for making deductions about origins. However, the whole point of Dembski’s paper (and indeed his other work) is that he suggests that CSI is a property of an object that you can assess without knowing anything about its origins.
Right- if we didn't know anything about its origins and CSI is present, we infer a designing agency was required. markf:
All that Mathgrrl is asking is how do you calculate CSI in some specific cases.
Right and at least some IDists are saying those cases are bogus. markf:
So Joseph’s objection that the situations she puts forward are not about origins is irrelevant.
The point of CSI is that its existence is a sign of a designing agency. CSI is defined as X number of bits of specified information. In "No Free Lunch" X = 500, which the math shows corresponds to a probability below the universal probability bound. With respect to biology, specified information equates to biological function. To see if CSI is present we need to determine if there is > 500 bits of specified information. One "easy" example of doing so is taking a gene that cannot tolerate any variation- for example say it codes for a protein that has 200 amino acids- all have to be in that specific order. 6 bits per amino acid (2^6 = 64) x 200 amino acids = 1200 bits of specified information. And that means CSI is present. That said, if there can be some variation you have to figure that in, which brings us back to the paper I linked to in comment 12. Joseph
MathGrrl, Where is the rigorous definition of the sample space Ω? If the sample space is ill-defined, then so is CSI. Noesis
KairosFocus: For something to be "functional", I believe it has to be translatable. If I write random letters, it's an entirely useless activity. One can think of art: obviously the work of an intelligent agent (although sometimes this isn't obvious!) and one where, especially in its most abstract forms, the question is asked: what does it mean? Then one tries to "interpret" (interpreters "interpret" from one language to another; i.e., they "translate") what the various objects and parts of the artwork mean. The example that M Holcumbrink used about the 3D configurations of t-RNA seems to fall within both the "functional" and "translation" categories. However, in a more general way one could say that the nucleotides that form the t-RNA molecule have been "translated", via chemical bonds, to a functional form: that is, if the nucleotides making up the t-RNA molecule were linear, the translation mechanism of the cell would not operate properly. So the "specification" of the sequence, leading to its ultimate configuration, is a "translation" from a linear form to a cruciform pattern that provides the needed function. Got to go. See you all tomorrow. PaV
MathGrrl[78]:
Please provide a mathematically rigorous method for creating a specification and show how those that I included with the four scenarios in my original post do not meet your criteria. You seem to be simply dismissing them out of hand.
You keep making, in the words of SCheesman, a "category" mistake. I've already written, per Dembski, that a "specification" is fundamentally a "pattern". If you could "mathematically ... [create]" a "specification", then it would no longer be a specification. Or, rather, you CAN'T "create" a "specification". "Specifications" are DISCOVERED: I see something. It suggests a pattern to me. I uncover the pattern (which means that it is translatable, or functional). I determine its complexity. If it exceeds the UPB, then it's CSI. Only the mind can detect a "specification". If there were some mathematical determination of it, then it wouldn't be CSI; it would be the result of whatever the mathematical equations determined it to be. You, and almost every other critic of ID, save Sober, make this fundamental categorical mistake. Why? Is it willful ignorance? What blinds you to the rather obvious?
PaV: Here's a link to a paper from 2003 that uses a variant of Shannon and Kolmogorov complexity to calculate the increase of complexity over increasing computer time used. While I have an interest in Kolmogorov-Chaitin complexity, the topic of this thread is CSI as defined by Dembski. Please provide example calculations of CSI for the four scenarios I described in my original post.
Please.... The basic model that Dembski uses for determining "complexity" is the inverse of Shannon information. How is it that you can't see that something that is the most "specified" thing in the world, attested to by every scientist in existence, would NOT constitute CSI if the complexity didn't exceed 500 bits, or whatever the equivalent of 10^150 is? What was the number of bits of "complexity" they found in Tierra? 30-58. End of story. I think it is the height of hubris to come to this blog and DEMAND that someone demonstrate to you in a mathematically rigorous fashion that CSI was NOT generated by Tierra and its ilk. That is the stuff of PhD work. You could publish a book if you did that. You'd have to learn assembly language in order to evaluate it. This is a ridiculous demand. No, the burden is on you. If you think that Dembski is wrong, then you point out just how Tierra, ev and the like produce CSI. Instead, you come here and say: define CSI for me. Well, read the books. It's really simple enough. Then you can come back and prove Dembski wrong. Shallit has tried to prove Dembski wrong, and it turned out Shallit was wrong. Why? Because he doesn't understand what a "specification" is, first of all, and, second, because he's convinced that CSI is trivial. Well, he was, and is, wrong. The classic example of CSI is the random binary string that turns out to be the first hundred digits in binary code. Either you get that, or you don't. Beyond that, you're on your own. Quit pestering us. Try to learn the stuff. PaV
Oops. I cannot recall any piece of technical writing by Dembski that does not refer to -log p as information. Noesis
Welcome Mathgrrl, Evolutionary algorithms can generate CSI if they act as surrogates of an intelligent agency. I demonstrated as much here at UD with my Genetic Algorithm: Dave Thomas Says Cordova's Algorithm is Remarkable. The question is whether evolutionary algorithms can spontaneously generate CSI without authorship of intelligent agency. That is the subject of No Free Lunch discussions. scordova
Upright BiPed says that
information – any information – only exists by means of a semiotic convention and rules (unless you disagree, and can show an example otherwise).
The Shannon self-information of an event that occurs with probability p is -log p. Shannon chose the term "self-information" precisely to indicate that the quantity is unrelated to communication. And without communication, there is no semiosis. One of the two terms (quantities added together) in Dembski's latest definition of CSI is the self-information of the event of "hitting the target." And Dembski has always emphasized that the specification of the target is detachable. The other term in the expression for CSI is a measure of the complexity of the description of the target by some semiotic agent. I cannot recall any piece of technical writing by Dembski that does refer to -log p as information. So it appears that Upright BiPed disagrees with Dembski as to what constitutes information. Noesis
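For concreteness, the quantity Noesis describes is a one-line function; a minimal sketch:

    import math

    def self_information_bits(p):
        # Shannon self-information of an event with probability p, in bits
        return -math.log2(p)

    print(self_information_bits(0.5))      # 1.0 bit: a fair coin flip
    print(self_information_bits(2**-500))  # 500.0 bits: the 500-bit threshold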
O'Leary #50
"Niwrad at 37, I must disagree with the view that duplication adds no new information. It often does. Let me tell you a wheeze from the mid-twentieth century that perfectly illustrates that fact:A young lady went into the telegraph office and asked to send a telegram. She gave the operator a piece of flowered stationery with one word on it: Yes.The operator explained: Miss, it’ll cost you $2.00. You can have ten words for $2.00. She replied, “Certainly not! Nine more yesses will make it sound like I am too anxious.”"
Here duplication could mean anxiety; or could mean the operator is drunk; or the telegraph line has problems, etc. How can the receiver know the true reason (among many) for the duplication? He cannot. This means that the duplicated message *per se* doesn't convey complex specified information, but rather a simple unspecified bit of uncertainty, which is not at all CSI. Therefore also in this case duplication doesn't increase CSI. niwrad
Joseph #99, Stephenb #100 I agree that CSI is supposed to be a method for making deductions about origins. However, the whole point of Dembski's paper (and indeed his other work) is that he suggests that CSI is a property of an object that you can assess without knowing anything about its origins. All that Mathgrrl is asking is how do you calculate CSI in some specific cases. According to Dembski it should be possible to do this without knowing anything about the origins of the object. So Joseph's objection that the situations she puts forward are not about origins is irrelevant. markf
M Holcumbrink: Personally, I find that causing the physical medium that carries the code to serve as both code carrier and components for machinery is SCARY genius. It causes me to join in with the Psalmist and say “we are FEARFULLY and wonderfully made”! Amen! PaV
MathGrrl #48
"The difference is, as noted in the original post and in my post 6, a duplicate gene can lead to an increase in production of a particular protein, with significant impact on the subsequent biochemistry. Such a change in protein production can even enable or disable other genes. The analogy to email or books is fatally flawed."
My general reasoning was: given a simple text string X with CSI(X), the concatenation of another X, giving XX, has CSI(XX) = CSI(X). Differently, you provide a more complex biological scenario where a gene X is duplicated (giving XX), and where boundary conditions exist such that XX causes an increase in production of a particular protein or even enables or disables other genes. Such conditions are mechanisms or logic of the overall system that make your scenario different from and richer than mine. Here you have an original gene X with CSI(X) plus a system Y, implying the above mechanisms, with CSI(Y). So here CSI(XX) seems to be greater than CSI(X) only because there is also CSI(Y) at play. niwrad
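niwrad's intuition that a concatenated duplicate adds almost no new description can be illustrated with compression, used here as a crude stand-in for descriptive complexity (my choice of proxy; it is not Dembski's Phi_s measure, and the sequence below is an arbitrary stand-in).

    import zlib

    X = b"ATGGCGATTGCAAGCCGTACTGATCCGATGGGCTAA" * 10  # arbitrary stand-in string
    XX = X + X                                        # the "duplicated" version

    # The duplicate compresses to barely more than the original: describing
    # "X twice" costs almost nothing beyond describing X itself.
    print(len(zlib.compress(X)), len(zlib.compress(XX)))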
Joseph, "The only way to say a gene duplication is a blind watchmaker process is to demonstrate that blind watchmaker-type processes can account for the origin of living organisms from non-living matter and energy." I don't know why you say that. For example, specific gene duplications have been associated with various medical problems. Unless we say those duplications in those patients are designed (to hurt them?) those seem to be undirected. QuiteID
QuiteID, Yes, but the chances of that are low. CSI is a probabilistic argument and I think that what you describe is very, very improbable. So if we see tons of CSI then we can safely say that there has been design. Collin
markf @ 102 "I assume the symbols in a string of DNA are the bases. What do they symbolise?" Life. tgpeeler
I'm not a mathematician and I don't understand Dembski's math. However, I have never really bought the UPB as a probability bound that ought to be used for application to biology. I think Behe's number is better: 10^40, the estimated total number of cells in the history of life on earth, which gives a probability bound of 10^-40. Just about everything except HIV mutates at a rate much less than one mutation per cell. The fastest rate for E. coli is around 10^-5 per gene. Therefore 10^-40 is probably a generous estimate of the Biological Probability Bound (BPB). In Behe's new paper, he defines a Functional Coded elemenT (FCT) as:
a discrete but not necessarily contiguous region of a gene that, by means of its nucleotide sequence, influences the production, processing, or biological activity of a particular nucleic acid or protein, or its specific binding to another molecule.
http://www.lehigh.edu/~inbios/pdf/Behe/QRB_paper.pdf In other words, these FCTs are the defined regions of the genome that should be used to calculate CSI. The calculation would be fairly simple theoretically:
n = length of the FCT sequence in nucleotides
f = total number of sequences able to perform the same function as the template FCT
When f/4^n < 10^-40, then you have biological CSI (bCSI). tragic mishap
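A minimal sketch of tragic mishap's test, working in log10 so that 4^n does not underflow. The hard part, estimating f (the number of functionally equivalent sequences), is simply assumed as an input here.

    import math

    LOG10_BPB = -40  # the proposed Biological Probability Bound, 10^-40

    def has_bCSI(n, log10_f):
        # True when f / 4^n < 10^-40; n is the FCT length in nucleotides,
        # log10_f is log10 of the number of sequences able to perform
        # the same function as the template FCT
        return log10_f - n * math.log10(4) < LOG10_BPB

    print(has_bCSI(100, 10))  # 10^10 / 4^100 is about 10^-50, so True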
UB #91
I did not realize you had addressed a comment to me in the previous thread; I stopped going there because the page was taking too long to load.
You couldn’t have responded anyhow because comments were closed shortly after I posed the question.
In any case, I think I have been consistent in my views. As I have said before, I am not one that thinks a carbon atom contains information. I make a distinction between an object, the information that can be created from an object, and an object arranged in order to contain information. I know this puts me at odds with some, but as of yet, I haven’t been convinced otherwise. I don’t dwell on it because the opponents of design generally become animated when they find two ID supporters who (gasp) disagree.
I think it is significant when ID supporters disagree over how to detect design. But the point of my question in this case was not to show up disagreement between yourself and other ID proponents. It was to explore your idea that symbols are the distinguishing criterion of information and therefore design (if I have paraphrased you correctly). You answered my first question, thanks. I am still interested in the answer to the second:
I assume the symbols in a string of DNA are the bases. What do they symbolise?
markf
The only way to say a gene duplication is a blind watchmaker process is to demonstrate that blind watchmaker-type processes can account for the origin of living organisms from non-living matter and energy. Anything short of that and there isn't any justification for the blind watchmaker having a hand in anything but point mutations. To get a protein from a duplicated gene requires that duplicated gene to have the proper binding sites. And even then all you are going to have is an extra protein that you could have gotten from mutating the regulatory sequence- but a new protein that doesn't have a home and is free to get in the way of already existing and functioning systems. Like my car will run better if I add more spark plugs. Perhaps two radios would allow me to listen to the game and music at the same time. If I add spark plugs do I increase the CSI of my car? Joseph
---markf: "Mathgrrl points out Dembski writes in his paper (which he says supersedes all other definitions of CSI) (Dembski) "By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause" --markf: "i.e. we should be able to look at an object and assess its CSI without knowing anything about its origins." Dembski is speaking about the how of the origin not the fact of the origin. In other words, we can assess its CSI without knowing anything about the process or the mechanism that produced it. He is not saying that CSI could be about something other than origins. As I say so often, Darwinists typically read into documents that which they wish was there rather than read out of them that which the author intended. Then they want to hold the poor author accountable for their own confusion. [Hence this thread]. StephenB
markf:
You write several times above that CSI refers to origins.
And I have supported that claim. I can't force anyone to read my posts but don't just read the parts that you want to. markf:
By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause.
And archaeologists do that all the time. His point is chance and necessity cannot generate specified complexity- that means from scratch. Stop quote-mining. markf:
i.e. we should be able to look at an object and assess its CSI without knowing anything about its origins.
You sound like you are fishing. But anyway, if you are given a set of instructions for building something- that is specified information- a computer program also qualifies. And yes, we can assess if CSI is present in each of those examples. Joseph
Collin, whoops -- I thought you were responding to me, and that I wasn't clear. Sorry. QuiteID
But if several changes (possibly including duplication) occur over a number of generations in a population, and those changes lead to something other than 2X of Protein Y (maybe more protein Y expressed where it's not been before, or a variation -- Protein Ya -- that plays a different role), and that "something other" is a new function that adds to fitness, *then* has CSI increased? QuiteID
QuietID, I think it would. But I probably shouldn't be taken as an authority because I haven't even read Dembski's work. I wish that he would comment on this thread. Collin
This goes back to my discussion of tightness of fit between code and function. Here's why: if you change a command from "Build X amount of Protein Y" to "Build 2X amount of Protein Y" you have not increased the CSI, you have CHANGED the CSI. And if that change in CSI deteriorates the fit between code and function, then you have decreased the CSI, not increased it. So here is your calculation: when the command (build X amount of protein Y) results in X amount of protein Y, then you have a perfect fit between code and function. 100%. But if you get a gene duplication that says "Build 2X amount of Protein Y" and you only get 1.9X amount of protein Y, then you have less than 100% fit between code and function and therefore you have a decrease in CSI. But if you get 2X amount of Protein Y, you do not have an INCREASE in CSI, you have a CHANGE in CSI. Collin
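Read literally, Collin's fit measure might be formalized like this (my formalization, not his; he gives no formula):

    def fit(commanded, produced):
        # fraction of the commanded protein amount actually produced,
        # capped at 100% so overproduction never counts as a better fit
        return min(produced / commanded, 1.0)

    print(fit(2.0, 1.9))  # 0.95: less than 100%, a decrease on this reading
    print(fit(2.0, 2.0))  # 1.0: a change in CSI, not an increase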
Collin, I agree. My question is, would an increase in the length of the genome plus a novel specification constitute an increase in CSI? QuiteID
Mathgrrl said, "According to my understanding of CSI, this increase in the length of the genome constitutes an increase in CSI." I highly doubt that Dembski was saying that. If so, then CSI would be meaningless. Let me ask you a question about your first scenario. You state that the specification is "Produce at least X amount of protein Y." Would the gene duplication essentially say "Produce at least 2x of protein Y?" (because it is saying produce at least X of protein Y twice). Collin
Mathgrrl, I had made the decision to stop asking you to acknowledge the point. It had become obvious that you have no intention of doing so. I see now that this judgment has been confirmed yet again. Upright BiPed
Mark, I did not realize you had addressed a comment to me in the previous thread; I stopped going there because the page was taking too long to load. In any case, I think I have been consistent in my views. As I have said before, I am not one that thinks a carbon atom contains information. I make a distinction between an object, the information that can be created from an object, and an object arranged in order to contain information. I know this puts me at odds with some, but as of yet, I haven’t been convinced otherwise. I don’t dwell on it because the opponents of design generally become animated when they find two ID supporters who (gasp) disagree. It becomes quite a spectacle. For instance, we have on this thread an ID critic who all but demands that proponent A speak to her in the terms of proponent B, then uses any differences as a hedge against the validity of ID. Yet, on another thread we have a Darwinian biologist highlighting the difference of opinions among his peers. Upright BiPed
SCheesman, you and Joseph seem to disagree. To clarify: I agree that a single non-teleological change (a gene duplication, for example) will not generate new CSI, even if the old gene did contain CSI. But if such a duplication combined with other changes over time to create a new specification out of that (modified) duplication, wouldn't that be considered new CSI? I think it would, but I don't know if that's possible. Do you think it would? As far as I can tell, Joseph seems to think it would not. I think Joseph would argue that such a change could well be teleological, but I don't know if that could be supported by evidence. (Of course, I may be misunderstanding either you or Joseph.) QuiteID
Joseph (Mathgrrl has explained this very well, but I thought I would try to get it across as well): You write several times above that CSI refers to origins. Yet, as Mathgrrl points out, Dembski writes in his paper (which he says supersedes all other definitions of CSI): "By contrast, to employ specified complexity to infer design is to take the view that objects, even if nothing is known about how they arose, can exhibit features that reliably signal the action of an intelligent cause", i.e. we should be able to look at an object and assess its CSI without knowing anything about its origins. markf
QuiteID:
I’m confused. I thought CSI could refer to new CSI in a system already containing CSI. Some here seem to be suggesting that any generation of new CSI in such a system doesn’t count. But that would mean that no evolutionary change in a biological system would count as “new CSI.” I’m sorry, but that seems like an impossibly high bar, and shifting the goalposts to boot.
You misunderstand what it means to "generate new CSI". Each change is the result of a selection from a range of possibilities, with its attendant probability. A single such change is not (generally) CSI... it is the combination of many such changes required to produce a given effect that constitutes CSI. "CSI" is really a threshold established which separates what combination of changes has a reasonable chance of occurring given random processes (e.g. monkeys at typewriters) from what requires intelligent input. So organisms can and do change. But it is the ID contention that under natural (non-teleological) processes, those changes will not combine or occur in a manner which crosses the threshold of CSI. SCheesman
MathGrrl
Dembski asks the question “Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?” and answers yes
I could try to respond to all your comments, but instead would like to return to your original example #1:
1. A simple gene duplication, without subsequent modification, that increases production of a particular protein from less than X to greater than X. The specification of this scenario is “Produces at least X amount of protein Y.”
First, what is the "object" in your example? Is it the act of duplication? Is it the existence of the protein that is being duplicated? Is it the specific action of the protein? Is it the degree of efficacy of that production? Is it the composition or complexity of the item being produced? You really must break it up into parts, and the CSI of each can be considered on its own merits, even by Bill Dembski's original definition. So duplication, on its own, is not CSI. The precise control of the amount of duplication for a purpose may be; you'd have to examine the possible ranges and what it takes to constrain it to a particular amount. The original gene, on its own, is an example of CSI, because it is a specific, complex arrangement to produce a specific function, e.g. produce Y. So it is still possible to infer CSI without looking at the original "code", but you need to understand better the range of possible outcomes and how the particular scenario you envision is a selection from the outcomes, as opposed to a "necessary" one, in order to impute CSI. SCheesman
QuiteID: CSI pertains to origins, just as William Dembski wrote. I can't help it if you and MathGrrl refuse to understand that. Also, "evolutionary change" is meaningless, but I have been over that again and again before, too. Joseph
I'm confused. I thought CSI could refer to new CSI in a system already containing CSI. Some here seem to be suggesting that any generation of new CSI in such a system doesn't count. But that would mean that no evolutionary change in a biological system would count as "new CSI." I'm sorry, but that seems like an impossibly high bar, and shifting the goalposts to boot. QuiteID
MathGrrl:
Please provide a mathematically rigorous method for creating a specification...
I don't know if I should laugh or cry; that is just so stupidly sad. When engineers design something, do you think they use an equation for formulating their design specifications? Do you have any idea how engineers go about designing? How about artists? Do you have any idea what CSI is? Joseph
MathGrrl:
Please provide example calculations of CSI for the four scenarios I described in my original post.
Your four scenarios are bogus, as not one deals with ORIGINS, and CSI is all about ORIGINS. William Dembski wrote:
The central problem of biology is therefore not simply the origin of information but the origin of *complex specified information*. – page 149 of “No Free Lunch” (emphasis added)
Algorithms and natural laws are in principle incapable of explaining the origin of CSI. (further down on the same page)
So what's up, MathGrrl? Why the strawman and why the continued equivocation? And still no evidence that gene duplications are blind watchmaker processes, go figure... Joseph
MathGrrl:
I skimmed it and determined that it does not deal with Dembski’s CSI, which is the topic of this thread.
As I pointed out, Dembski's CSI pertains to ORIGINS. You refuse to understand that. To me that means you have issues you need to take care of before coming here and asking about CSI. Also, as I posted, CSI refers to biological function. And that is what the paper I linked to deals with. MathGrrl:
The issue for this thread is whether or not Dembski’s definition of CSI leads to the conclusion that gene duplication (and other known evolutionary mechanisms) can generate CSI.
That doesn't have anything to do with Dembski's CSI. Also, you seem to be stuck on equivocations, which tells me you don't know what ID claims even though I told you. "Evolutionary mechanisms" is meaningless. Starting with CSI, i.e. a living organism, is cheating. The point, MathGrrl, is that you have erected a strawman and refuse to budge in the face of refuting evidence. Joseph
SCheesman,
I wish to propose a rigorous method to calculate the “increase in CSI” represented by the original four proposals. The principle is general and applies equally to all four. In each case you must identify the minimum set of instructional changes in the “source code” or “instruction set” necessary to move the solution from a state lacking the ability to produce the specification to the state containing that possibility. The CSI is the amount of change required, measured in bits of “atomic” changes in the code; for instance, in the genetic code this would be any kind of mutation that can occur as a single-step operation, e.g. substitution, deletion, addition, maybe even duplication of a section of code. The “bits” of change is the chance of the particular change required occurring out of all the possible atomic changes; equal to the negative log to the base two of the probability of that change out of all possible changes.
This is one of the variant CSI calculations that came up in a discussion with gpuccio on Mark Frank's blog. Personally, I think it makes a lot of sense. Interestingly, this definition leads to the conclusion that the parasite organisms in Tierra have significant CSI. I'd be very interested in discussing this further, but I do want to stay focused on Dembski's version of CSI, and the claims made about it, in this thread. That version, as far as I can tell, does not take the history of an artifact into account. Dembski asks the question "Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?" and answers yes. MathGrrl
kairosfocus,
UNLESS YOU CAN RECOGNISE AND ACCEPT THE EMPIRICAL REALITY OF FSCI AS OBSERVED, IDENTIFIED AND DESCRIBED IN THE ACADEMIC TECHNICAL LITERATURE BY ORGEL, WICKEN AND OTHERS 30+ YEARS AGO, YOU CANNOT CONSTRUCT OR UNDERSTAND VALID MATHEMATICAL MODELS THEREOF.
The topic of this thread is CSI as discussed by Dembski in Specification: The Pattern That Signifies Intelligence. Not Orgel, not FSCI, but the actual metric that is claimed by ID proponents to indicate intelligent agency. Despite repeated requests, you have still not provided a mathematically rigorous definition of CSI nor have you shown how to calculate it for the four scenarios I described in the original post. Could you please do so? MathGrrl
JGuy,
Gene duplication may double the amount of some existing CSI. However, simple reason shows that nothing new & novel is produced.
On the contrary, as noted repeatedly in this thread, a gene duplication can result in increased production of a particular protein which can, in turn, have a significant effect on the biochemistry that takes place in a cell.
To produce CSI, it needs to be more than what you started with… and not repeats.
Could you please provide a rigorous mathematical definition of CSI based on Dembski's discussion in Specification... that supports this statement and show how to calculate CSI, by your definition, for the four scenarios I described in my original post? MathGrrl
PaV,
Namely, and here I’m referencing Dembski’s paper on Specification, most critics of ID confuse a “prespecification” with a true, and precisely defined “specification”. . . . In the case of Tierra, e.g., what “specification”, i.e. ‘pattern’ are we dealing with? None.
Please provide a mathematically rigorous method for creating a specification and show how those that I included with the four scenarios in my original post do not meet your criteria. You seem to be simply dismissing them out of hand.
Here’s a link to a paper from 2003 that uses a variant of Shannon and Kolmogorov complexity to calculate the increase of complexity over increasing computer time used.
While I have an interest in Kolmogorov-Chaitin complexity, the topic of this thread is CSI as defined by Dembski. Please provide example calculations of CSI for the four scenarios I described in my original post. MathGrrl
Upright BiPed,
It’s kinda like ‘piss on the truth’, isn’t it, Mathgrrl? As long as one remains satisfied with their ability to deny the observation, right?
Coarseness aside, I have yet to see you provide a rigorous definition of CSI nor any examples of how to calculate it for the four scenarios I described in my original post. There is nothing for me to deny. I look forward to you addressing that gap in your position. MathGrrl
SCheesman,
I don’t believe it is possible to come up with any mathematical description of the CSI in the 4 examples you give, but not because the concept of CSI is incoherent, but rather because the examples specified, though appearing to be framed in terms relevant to CSI, are in fact not what CSI measures in the first place, but rather contingent phenomena of more basic processes which more properly might be examined for changes in CSI.
This seems to contradict Dembski's claim in Specification.... He asks the question "Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause?" and answers it in the affirmative, as do many ID proponents here.
Change the focus from duplication of a gene to yield some concentration of a particular protein to the changes necessary in the genetic code or epigenetic action required to accomplish it. It may well be that a single point mutation produces this outcome, in which case the change in CSI is precisely zero, as the before and after state contain the same amount of information.
In the scenario I described, the increase in protein concentration is due to a duplication event. According to my understanding of CSI, this increase in the length of the genome constitutes an increase in CSI. Based on Dembski's explanation of CSI, do you agree? If not, why not?
Any computational algorithm, be it ev or Tierra, produces a fixed outcome depending on the initial conditions. Given the same initial conditions, you get the same solution.
That's not correct. The GAs with which I'm familiar make use of random number generators. If you look into the Steiner solutions linked to in the original post of this thread, you'll find that different solutions are found in different runs. Now, you could argue that the initial seed of a pseudo-random number generator counts as part of the "initial conditions", but that's just an implementation detail. Using a physically random source such as diode noise, radioactive decay, or a lava lamp would result in different solutions as well.
In order to talk about CSI in a useful way, you need to descend to the lowest instructional level, the kernel of information that drives the outcomes observed.
Again, this doesn't align with the claims made by Dembski and other ID proponents. You seem to be making an argument similar to that of CJYman in the previous thread, a cosmological ID view. Do you agree that, given the world we observe, known evolutionary mechanisms can generate CSI? MathGrrl
Polanyi,
I'm no mathematician, but I don’t think one needs to be one in order to see that gene duplication as an evolutionary mechanism for the origin of information is problematic.
The issue for this thread is whether or not Dembski's definition of CSI leads to the conclusion that gene duplication (and other known evolutionary mechanisms) can generate CSI. Thus far, only vjtorley has made the effort to actually calculate CSI based on his understanding of Dembski's paper. MathGrrl
Joseph,
In comment 12 I linked to a paper discussing what you asked for. Did you read it?
I skimmed it and determined that it does not deal with Dembski's CSI, which is the topic of this thread. If you disagree, please show how their definition is analogous to Dembski's and provide example calculations for the four scenarios I described in the original post of this thread. MathGrrl
I wish to propose a rigorous method to calculate the "increase in CSI" represented by the original four proposals. The principle is general and applies equally to all four. In each case you must identify the minimum set of instructional changes in the "source code" or "instruction set" necessary to move the solution from a state lacking the ability to produce the specification to the state containing that possibility. The CSI is the amount of change required, measured in bits of "atomic" changes in the code; for instance, in the genetic code this would be any kind of mutation that can occur as a single-step operation, e.g. substitution, deletion, addition, maybe even duplication of a section of code. The "bits" of change is the chance of the particular change required occurring out of all the possible atomic changes; equal to the negative log to the base two of the probability of that change out of all possible changes.

For the first case, for instance, it may turn out to be just a single point mutation that is required to produce the increase in production of the protein. What are the odds of obtaining that particular mutation out of all possible single-action mutations that could occur? Once that is determined, you can calculate the addition of CSI represented by the change. Maybe that occurs 1% of the time, and represents about 6.6 bits of CSI. It is the choice of change, and the resultant elimination of possibilities, that is the hallmark of information. The total CSI is the measure of the accumulation of the choices required to produce your target "specification". Whether it does occur, of course, is determined by the generation time and number of processes, i.e. the probabilistic resources, and the chances of the survival of intermediate states long enough to get to the required final state.

For the other cases, "before" and "after" can all be identified, and the unit of functional change identified (and is likely defined as part of the program), with probabilities available for the implementation of each type of change at each stage. For digital code this is much easier to do, as you can actually track it step-by-step. SCheesman
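SCheesman's proposal above is mechanical enough to sketch in a few lines of Python. This is only an illustration of the arithmetic he describes, under his stated assumptions; the per-change probabilities below are hypothetical placeholders, not measured mutation rates:

    import math

    def bits_of_change(p_change):
        # Surprisal of one atomic change: the negative log, base two,
        # of its probability among all possible single-step changes.
        return -math.log2(p_change)

    # The worked example above: a change that occurs 1% of the time
    # among all possible single-action mutations.
    print(round(bits_of_change(0.01), 2))  # 6.64 bits

    # Total CSI under this proposal: sum the bits of each change in the
    # minimum required set. These probabilities are placeholders only.
    required_changes = [0.01, 0.05, 0.25]
    print(round(sum(bits_of_change(p) for p in required_changes), 2))  # 12.97 bits

On these numbers, the single 1% change contributes about 6.6 bits and the three-change path accumulates about 13; whether such a path actually occurs is then a question of the probabilistic resources, exactly as the proposal says.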
F/N: PAV, translation is of course an example of functionality. Even if your string were say the output of an encrypting machine, once we were able to make it function, providing it passed a threshold, it would be FSCI. But, a truly random unspecified string will not generally have linguistic or more specifically prescriptive algorithmic function. DNA has both: it is translated into the mRNA dialect, then used to assemble an AA string in the ribosome, and finally the resulting protein is folded, often agglomerated and activated, then put to work. Again, notice how I refer to empirical reality first and foremost as a basis for being meaningful and accurately connected to the real world. kairosfocus
Does someone think comments are closed? I wouldn’t be here if they were.
I think markf was referring to another thread. Heinrich
MG: I have a crisis coming to a head to deal with, and other matters, so I will pause to simply underscore what I have already said, only to see it again brushed aside without serious consideration. Pardon me, therefore, but I think you are putting the cart before the horse. Pardon, too, the resort to caps to draw attention to the most salient point:

1: UNLESS YOU CAN RECOGNISE AND ACCEPT THE EMPIRICAL REALITY OF FSCI AS OBSERVED, IDENTIFIED AND DESCRIBED IN THE ACADEMIC TECHNICAL LITERATURE BY ORGEL, WICKEN AND OTHERS 30+ YEARS AGO, YOU CANNOT CONSTRUCT OR UNDERSTAND VALID MATHEMATICAL MODELS THEREOF. (And if you cannot see that Orgel and Wicken -- not the easily rhetorically dismissed undersigned -- were not making "meaningless" noise, then no further progress is possible.)

2: Similarly, unless you can accept and understand that information measured in functionally specific bits is a commonplace of life in a digital age [think of the size of, say, YouTube videos, or note that 143 ASCII 7-bit characters -- about the length of a tweet, and the sort of characters used to type text in posts here at UD and elsewhere -- come to just over 1,000 bits], you will be blind to that reality when it confronts you in the guise of DNA, and as the measure of DNA protein-coding sub-strings.

3: Long since, too, I have pointed out that we must understand that there are several valid approaches to definition, and that to absolutise one particular abstruse and derivative approach, mathematical modelling, is apt to misconceive and mislead.

4: In short, mathematical models answer to empirical reality; not the other way around.

5: And, I have in fact given a specific, commonplace approach to mathematical measurement and calculation of functionally specific, complex information, without resorting to an infinite regress of demanded further mathematical definitions, and/or circularity. Namely, that for the simple heuristic X = S*C*B:
a: Once one can identify a semiotic agent to recognise objectively observable functionality, one can assign a binary value 1/0 to S. (Text in English and functional DNA strings readily meet that criterion. That is, we identify a functional macrostate that is met by a cluster of associated microstates.)
b: Similarly, on the classic and generally used negative log information storage capacity metric, we can measure degree of complexity, as for n bits there are 2^n possible configs, and with > 1,000 bits we are beyond a reasonable threshold for successful search by random walks and associated trial and error to detect functionality, on the gamut of the observed cosmos. So, our semiotic agent can count up the number of bits and, if beyond 1,000, the complexity criterion is passed: C = 1, not 0.
c: Already, we have identified the specific information storage capacity in number of bits, so we have n bits.
d: To get the FSCI metric, X functionally specific bits, on this first-level model we simply multiply: X = S*C*B.
e: For your post at the head of the thread, that is 31,283 functionally specific bits, and for a typical 300-AA protein we have 1,800 bits. There are, of course, thousands of such proteins in a typical cell. (The regulatory networks will of course have additional FSCI.)
6: If you cannot acknowledge that the metric X = S*C*B is defined on generally accepted scientific and engineering praxis as well as a wide field of current technology, and provides a simple way to identify and measure FSCI [one that is readily made and allows us to see that FSCI in our direct observation routinely and only comes from intelligence], then there is no basis for discussion of more complex metrics and cases.

G'day GEM of TKI

PS: I note that the design inference on FSCI or other metrics of Orgel-type observed CSI is an inference on ROOT cause, not on immediate source. This was already discussed by Paley in 1806, when he suggested a self-replicating watch as a further evidence of design above and beyond that which would be evident in the structure and function of a watch: ________________ >> Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself -- the thing is conceivable; that it contained within it a mechanism, a system of parts -- a mold, for instance, or a complex adjustment of lathes, baffles, and other tools -- evidently and separately calculated for this purpose . . . . The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done -- for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch, which was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair -- the author of its contrivance, the cause of the relation of its parts to their use. >> ________________ I think that this has much to say to the onward debate on gene duplication etc., but unless there is a willingness to face the implications of the existence of complex organised function and associated information that can be measured in simple and generally acceptable ways, there can be no progress on more complex matters. If one already stumbles at the starting gates, one is not in contention for the prize at the finish line. kairosfocus
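For readers who want the X = S*C*B heuristic in concrete form, here is a minimal Python sketch of the procedure as kairosfocus describes it above. The functionality judgment S is supplied by hand, since on this account it is the semiotic agent's call rather than something the program computes:

    def fsci_bits(capacity_bits, judged_functional, threshold=1000):
        # X = S*C*B: S = 1 if a semiotic agent judges the item functional;
        # C = 1 if the storage capacity passes the 1,000-bit threshold;
        # B = the information storage capacity in bits.
        S = 1 if judged_functional else 0
        C = 1 if capacity_bits > threshold else 0
        return S * C * capacity_bits

    # The two examples worked in the comment above:
    print(fsci_bits(4469 * 7, True))  # 31283 bits for the post at the head of the thread
    print(fsci_bits(900 * 2, True))   # 1800 bits for the mRNA of a 300-AA protein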
Does someone think comments are closed? I wouldn't be here if they were. There is a glitch in the system (about which I must speak to our tech man) where for some reason you will get that message now and then. Anyway, our policy is to auto-close comments after thirty days, and it can't have been twenty-four hours yet. O'Leary
PaV @66 (& SCheesman @58) -
And, a little farther down: Observing some outcome and calling it a specification is a category error. This is what I have been saying for some time, though not as pithily and clearly as SCheesman.
But isn't this what IDers do? How is the specification for the bacterial flagellum defined without observing the outcome? Why isn't any mention made of, for example, cilia? They carry out the same role (making the bacterium mobile). How can an a priori specification be made without observing the outcome, when it's the outcome (e.g. the bacterial flagellum or Mount Rushmore) that is under investigation? Heinrich
UB #31
So the question remains: Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality.
I see that you are returning to your argument that the decisive criterion for information is not the improbability of natural causes but the presence of symbols. You raised this in the previous thread, and I think we jointly came to the conclusion that therefore proteins do not contain information, because they do not contain symbols. My final comment was a couple of questions:
1) So can you confirm that, as far as you are concerned, the only part of life to contain information and therefore CSI is DNA? The bacterial flagellum and the immune system are not examples of CSI. 2) Assuming that is true, I assume the symbols in a string of DNA are the bases. What do they symbolise?
Which you never had a chance to answer because comments closed. Maybe you can pick that up now? markf
2 more cents. Gene duplication may double the amount of some existing CSI. However, simple reason shows that nothing new & novel is produced. To produce CSI, it needs to be more than what you started with... and not repeats. I guess one can maybe imagine this as a packet of CSI being value X, and a new packet of equal but different CSI would be: X^Y (where Y is a positive number). But adding (duplicating) seems like it would be something that builds up CSI more slowly, like: X + (X/1) + (X/4) + ... + (X/N^2). I know this seems contradictory to my previous post, but in the previous post the ring is not a repeat of a smaller CSI packet. It's all just brainstorming anyway. JGuy
My 2 cents. I am not fully tracking this discussion yet, but I thought it might be interesting to ask this... and who knows, the concept behind the question might help open up some ideas: When considering an object for CSI, does size factor in? ... and if so, how might that help one to approach (think calculus, perhaps?) any useful measure of CSI? Example: A shiny, smooth metal ring that looks like it was a perfectly sliced cross-section from a metal cylinder. Imagine finding one that was the size of a pea, then imagine finding one the size of a car... Wouldn't size affect how you perceived the cause of its origin? Despite being perfectly similar, something small seems like it would have fewer parts to arrange... but something large seems even more unlikely. This example just reminded me of the monolith in "2001: A Space Odyssey". One could imagine a "small monolith" as being a possible product of nature... but(!) exactly WHY? And can that reason be used to approach a better way to measure CSI? Thoughts anyone? JGuy
MathGrrl: The answer that SCheesman gives is splendid and spot on. In particular, he says: ". . . the examples specified, though appearing to be framed in terms relevant to CSI, are in fact not what CSI measures in the first place, but rather contingent phenomena of more basic processes which more properly might be examined for changes in CSI." And, a little farther down: "Observing some outcome and calling it a specification is a category error." This is what I have been saying for some time, though not as pithily and clearly as SCheesman.

Namely, and here I'm referencing Dembski's paper on Specification, most critics of ID confuse a "prespecification" with a true, and precisely defined, "specification". CSI involves a pattern which "induces" a rejection region. One then applies a probability distribution which captures the circumstances of that pattern, and calculates the level of improbability. If it exceeds the UPB, then you have CSI.

In the case of Tierra, e.g., what "specification", i.e. 'pattern', are we dealing with? None. All Tierra succeeds in doing is to use a very simplified assembly-language program and get the computer to keep computing without halting. So what. And the only thing that seems to happen is that the length of the computer program lengthens; nothing new, nothing of value, is produced. And the added length seems to come entirely from the original inputted programs, which is kind of like "gene duplication", which, as has been pointed out, is not added CSI.

Here's a link to a paper from 2003 that uses a variant of Shannon and Kolmogorov complexity to calculate the increase of complexity over increasing computer time used. Net result: there is some initial complexity built up (60 bits is the maximum info content, well below the UPB), and then NO MORE complexity.

I've come to the conclusion recently that the best way to know that you're dealing with CSI is that true CSI is translatable, or convertible. So, e.g., the above sentence, which everyone knows is the product of an "intelligent" agent, and thus CSI, is translatable from English into Italian. How about this: icks thekskd tj sB Jks t8e gjsklclt shjtle;s';s sleltj a skt elsl;t e;sjsowje ty;slbjhslt;s 'atjelejmt ;sofo. I just typed all of this at random. Can you translate all of that into Italian? So, getting back to Tierra, what can the output of the program be "translated" into? I can't think of a thing, other than to refer it back to the original input. But, then, it becomes a situation wherein no new CSI has been generated. (And even the original input might not be long enough to constitute CSI. But we can assume so.)

Put another way, FIRST a pattern is detected, and then the mathematics is applied to the situation surrounding its formation. In the case of biological function, a cell can "translate" DNA into a protein. Hence, since the "pattern" of DNA (i.e., the particular sequence coding for the protein) can be translated into a protein (by, guess what? a translation process), then as far as the cell is concerned, the pattern is "specified". Then one calculates the improbability. Since it is known that, chemically and physically, one nucleotide base is indistinguishable from another, and likewise for amino acids, one simply takes the length of the sequence of a.a.'s, and the probability is 1 in 20^N, where N = the number of a.a.'s in the sequence. The UPB is exceeded after about 115 amino acids. Hence proteins, and the protein-coding sequences of DNA, are CSI.
Dembski has a sample calculation in his NFL. PaV
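On PaV's equiprobability assumption, the arithmetic is a one-liner. A minimal sketch in Python, taking the universal probability bound in its conventional 500-bit form (roughly 1 in 10^150); on these assumptions the bound is crossed at about 116 residues:

    import math

    UPB_BITS = 500  # the universal probability bound, ~1 in 10^150, in bits

    def sequence_bits(n_residues):
        # Improbability of one specific amino-acid sequence, assuming all
        # 20 residues are equally likely at every position (PaV's assumption).
        return n_residues * math.log2(20)

    print(round(sequence_bits(100), 1))         # 432.2 bits: still under the bound
    print(round(sequence_bits(116), 1))         # 501.3 bits: past the bound
    print(math.ceil(UPB_BITS / math.log2(20)))  # 116: the first N past 500 bits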
vj #39 You seem to be the only person in this lengthy discussion actually addressing Mathgrrl's challenge, and you have put a lot of work into it. Thanks. The smiley face is clearly a calculation! But is it based on principles that can be even approximately repeated in another context? There seem to be quite a lot of arbitrary decisions and no guidance as to how to make similar decisions elsewhere. For example: Why only two colours? Why 128 by 128? When calculating the probability, why do you assume that all colours are equally likely for each "pixel" and, more importantly, that the probability of one pixel taking a colour is independent of the probability of its neighbours taking that colour? But most importantly, when calculating the probability, why did you choose the probability of something that looks like a Smiley Face? You could equally justifiably choose:
* looks like a face (smiley or otherwise)
* looks like part or all of a human being
* looks like part or all of any animal
* looks like something we come across regularly in everyday life
etc. You haven't addressed the problem of an objective basis for choosing a specification, which is the whole reason Dembski introduced Kolmogorov complexity in his paper. markf
niwrad (comment #37 above) wrote: >> About gene duplication allegedly increasing CSI >> I would reply with a simple question: >> "when one sends an email twice, do you think the receiver >> gets additional information with respect to the single mail?". >> It is obvious there is no additional information. Depends what you mean by "sends an email twice". I'm an IT guy. Duplicate emails are not usually identical. I can examine headers to see whether they were sent by the same mail client, from the same server, with the same unique MessageID, passed through the same set of mail servers, etc. Let's say I am able to determine that there is no obvious reason for the duplication (everything looks identical)... in that situation I am likely to conclude that there is a communications breakdown causing the re-send, or maybe a bug in the software that some developer has written. In other cases I might conclude that the author deliberately re-sent the message... perhaps an unexpected error at the sending end prompted this intelligent intervention... or perhaps, if the content of the message was such that the sender was clearly anxious, a re-send might indicate to me an increasing level of anxiety on the part of the sender. So perhaps the sending of an email, which is known to be the product of intelligence (as I know of no naturally occurring email-sending systems), is not a suitable candidate for comparison with a CSI candidate where the authorship is disputed. Spiny Norman
Upright Biped @31: “information – any information – only exists by means of a semiotic convention and rules”. Me @62: “I would say that the utilization of semiotics is without question a hallmark of CSI.” But I think I just thought of an instance where semiotics is not used in CSI… 3D RNA structures like tRNA, ribosomal RNA and micro-RNA, where the physical medium that stores the information folds into functional structures that are utilized as logic operators or machinery components. In these instances, the base pairs do not need a decoder to be considered CSI. The sequence of the base pairs is still critical (highly specified), because that is what dictates the resultant 3D shape. And the resultant structure is critical in ensuring the maintenance of the system (functional). Hence, we have FSCI, but no semiotics. Although I suppose it would be important to point out that the tRNAs and ribosomal RNAs are actually mechanical components for the translator, or rather the physical manifestation of the semiotics to which you refer! Personally, I find that causing the physical medium that carries the code to serve as both code carrier and components for machinery is SCARY genius. It causes me to join in with the Psalmist and say “we are FEARFULLY and wonderfully made”! M. Holcumbrink
Upright Biped @31: “information – any information – only exists by means of a semiotic convention and rules”. I think I would disagree with that. It seems to me that Shannon information is still information regardless of the existence of a code to make it useful. But I would say that the utilization of semiotics is without question a hallmark of CSI. UB: “So the question remains: Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality.” I never thought of that before. Talk about cheating! I think I might be more inclined to believe that evolutionary algorithms have any relevance whatsoever the day they illustrate their efficacy towards the random development of a working language FIRST. That’s the first time I have ever seen that point being made. In light of that, evolutionary algorithms now seem super duper silly to me (whereas before they were just super silly). That aside, I like what Johnson pointed out in Programming of Life: “Ludwig sponsored an “Artificial Life” contest to find the shortest self-replicating program, with the winning program having 101 bytes. The probability of this program arising by chance is 256^-101 or 10^-243. If 10^8 computers each make 10^7 trials/sec for 3x10^22 trials/year, a solution becomes probable after 10^220 years. If a suitable program were half as large, “only” 10^99 years of processing would be necessary to make probable a self-replicating program by chance. It should be noted that any prescriptive program still requires an operational platform on which to execute. These programs, like all computer programs, were designed and were executed on designed platforms. Information, not random data, caused solutions”. I suppose MG is right to tell you that what you have pointed out does not satisfy her original request, but then again, what’s the point? I agree with Joseph (@2) that if chance and necessity can in fact generate CSI, however you quantify it, then ID can be put back on the shelf for most people. But it does not make it moot. Choice is still a very real and likely possibility. M. Holcumbrink
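The figures Johnson quotes can be checked in a few lines of Python; the machine counts and trial rates are his stipulations, not measurements:

    import math

    # A 101-byte self-replicator drawn uniformly from 256^101 possibilities:
    print(round(101 * math.log10(256), 1))  # 243.2, i.e. p is about 10^-243

    # 10^8 computers x 10^7 trials/sec x ~3.15*10^7 sec/year:
    trials_per_year = 1e8 * 1e7 * 3.15e7    # ~3*10^22 trials/year, as quoted
    print(round(math.log10(10.0**243 / trials_per_year), 1))  # 220.5, i.e. ~10^220 years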
Mathgrrl, "To be precise..." Well, of course. You take it for granted because you must. It's a no-winner any other way. I had sorta hoped you might actually address the issue, though, so I won't ask again. It's kinda like 'piss on the truth', isn't it, Mathgrrl? As long as one remains satisfied with their ability to deny the observation, right? Cheers... Upright BiPed
When I was getting my degree in psychology, each of my professors stressed that psychology "is a science!" They would say it over and over again. And then they would show how to quantify the amount of self-esteem someone has using a questionnaire. And then they would find that people with high self-esteem are more likely to engage in such-and-such behavior. While "self-esteem" was a squishy concept, measuring it was useful in predicting future behavior. If CSI is not mathematically rigorous (although I think it is in the end), it still can be useful in detecting design, whether in DNA, computer programs, artificial life, SETI, etc. Collin
The central problem of biology is therefore not simply the origin of information but the origin of *complex specified information*. - page 149 of "No Free Lunch" (emphasis added)
Algorithms and natural laws are in principle incapable of explaining the origin of CSI. (further down on the same page)
Joseph
MathGrrl: I don't believe it is possible to come up with any mathematical description of the CSI in the 4 examples you give, but not because the concept of CSI is incoherent, but rather because the examples specified, though appearing to be framed in terms relevant to CSI, are in fact not what CSI measures in the first place, but rather contingent phenomena of more basic processes which more properly might be examined for changes in CSI.

1. Change the focus from duplication of a gene to yield some concentration of a particular protein to the changes necessary in the genetic code or epigenetic action required to accomplish it. It may well be that a single point mutation produces this outcome, in which case the change in CSI is precisely zero, as the before and after state contain the same amount of information.

2. Any computational algorithm, be it ev or Tierra, produces a fixed outcome depending on the initial conditions. Given the same initial conditions, you get the same solution. Observing some outcome and calling it a specification is a category error. All the information, all the CSI, is bound up in its instruction set. Anything occurring inside the computer (e.g. the changing world of genomes) does not alter the CSI contained in the program itself. All the possible outcomes, all the possible genomes, were already specified by the program itself before it was even run; the space of possible solutions is not altered by the actual playing out of the code in one or a thousand simulations. You cannot say that sampling the possibilities inherent in the program has added one iota to the CSI of the program itself, nor can it change a byte of its code. Change a byte of the code, though, and the universe of possibilities changes; sometimes not much, sometimes a great deal. Change your scenario to examine the computer code itself. How many extra instructions are necessary to get it to grow, evolve, or destroy a genome, or to modify its interactions with its digital world? That is where you can measure changes in CSI.

3. See Number 2. It may take 1000s of generations to evolve the solution you are looking for, but the evolution is pre-determined by the computer code and the initial conditions of the simulation. The only "specification" in your example is that you happened to pick a pattern you want out of the possible outcomes. By the way, who decided that a parasite was important? Did the computer programme? Or did the programme simply produce this phenomenon and you have decided it is important? Was the programme designed to produce parasites, or was this just an interesting outcome observed after the fact? The Mandelbrot set produces interesting patterns, but they contain no specified information above and beyond the original simple formula and colouring algorithm that produces them.

4. Same issue as 2 and 3. The original code contains the specification and instructions, carefully assembled to produce the effects desired.

Conclusion: In order to talk about CSI in a useful way, you need to descend to the lowest instructional level, the kernel of information that drives the outcomes observed. None of your examples do this. My recommendation is, if you really want to understand CSI better, refocus your search to the original instruction sets that produce these phenomena. Rigorous mathematical calculation of CSI in that context may not be easy, but it is no longer incoherent or intractable. SCheesman
I'm no mathematician, but I don't think one needs to be one in order to see that gene duplication as an evolutionary mechanism for the origin of information is problematic. Where did the first genes come from? What were they duplicated from? If gene duplication leads to fitness and new novel functions over time, why evolve repair mechanisms that actually try to prevent errors like gene duplication? Why is it, when we see the rapid appearance of new enzymes like in the case of the nylon eating bacteria, we see that gene duplication was not at work? Polanyi
:oops: should be "built-in responses to environmental cues".... Joseph
Intelligent Design is OK with mutations. We just say that some or even most are directed, not random with respect to anything. The point is that mutations are directed pretty much the same way computer programs direct an output, as a spell-checker is so directed. Dr Lee Spetner wrote a book (1997) titled "Not By Chance" in which he discusses the role of "built-in mechanisms to environmental cues" as part of his "non-random evolutionary hypothesis". Joseph
Darwinism, Design and Public Education page 92:
1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4. Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.
Joseph
Origins- CSI is about origins. MathGrrl:
That is not reflected in Dembski’s paper referenced in the original post of this thread and it does not help to demonstrate how to calculate CSI.
1- It is reflected in "No Free Lunch" and other ID writings 2- Until we get past your road-blocks it is difficult to continue 3- In comment 12 I linked to a paper discussing what you asked for. Did you read it? Joseph
Congratulations to the powers that be at UD for uncircling the wagons a little. Zach Bailey
QuiteID, Is programmer intervention required for a spellchecker to work? Is programmer intervention required every time a program runs into a decision? Joseph
kairosfocus, That was a lot of words, but nowhere in there did you provide a mathematically rigorous definition of CSI nor did you show how to calculate it for the four scenarios I described in the original post. Could you please do so? MathGrrl
Niwrad at 37, I must disagree with the view that duplication adds no new information. It often does. Let me tell you a wheeze from the mid-twentieth century that perfectly illustrates that fact: A young lady went into the telegraph office and asked to send a telegram. She gave the operator a piece of flowered stationery with one word on it: Yes The operator explained: Miss, it'll cost you $2.00. You can have ten words for $2.00. She replied, "Certainly not! Nine more yesses will make it sound like I am too anxious." O'Leary
vjtorley,
Your comment on probabilistic complexity was as follows:
This is another term that is impossible to calculate, although in this case it is a practical rather than a theoretical limitation. We simply don’t know the probabilities that make up PC… Computing PC based on known processes and assumed probabilities will certainly lead to many false positives. This version of CSI is therefore more a measure of our ignorance than of intelligent agency, just as Dembski’s is.
In reply: the fact that we don’t know what the probabilities are doesn’t mean that we can’t put an upper bound on them, by computing the probabilities for a wildly optimistic scenario.
Actually, it does mean exactly that. The discovery of a new mechanism could change the probabilities you are calculating to 1. An upper bound implies knowledge that is simply not available, so it remains a measure of our ignorance. From an earlier section of your post:
In your response, you wrote several comments:
While I understand your motivation for using Kolmogorov Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable.
Quite so.
I find the thought process behind your metric interesting and hope to discuss it further with you. On this thread, however, I am focused on CSI as defined by Dembski and as claimed by ID proponents to indicate intelligent agency. Your metric isn't the CSI that is claimed to support ID. That CSI is supposed to be computable. I would very much like to learn how to compute it. (I will certainly go through the detailed calculation you posted later this evening. I appreciate the significant effort you are dedicating to this discussion.) On a personal note, I hope you and yours are safe and sound. MathGrrl
niwrad,
About gene duplication allegedly increasing CSI, I would reply with a simple question: “when one sends an email twice, do you think the receiver gets additional information with respect to the single mail?”. It is obvious there is no additional information.
The difference is, as noted in the original post and in my post 6, a duplicate gene can lead to an increase in production of a particular protein, with significant impact on the subsequent biochemistry. Such a change in protein production can even enable or disable other genes. The analogy to email or books is fatally flawed. MathGrrl
PaV, I have read Specification... several times. It does not provide sufficient mathematical detail or examples for me to understand CSI to the extent required for me to measure it objectively. If you are able to do so, based on that paper or your other reading, please define CSI with some mathematical rigor and demonstrate how to calculate it for the four scenarios I detailed in the original post. MathGrrl
PaV,
I think this is an insincere statement. You can’t possibly state that you’re familiar with No Free Lunch and then turn around and say: “. . . I don’t understand how to calculate CSI.”
That is exactly what I am, quite sincerely, saying. Since you seem to have a good grasp on the topic, would you please define CSI with some mathematical rigor and demonstrate how to calculate it for the four scenarios I detailed in the original post? MathGrrl
Upright BiPed,
In each of the threads you recently participated in here at UD, you kept making the claim that evolutionary algorithms can create information, in this case complex specified information.
To be precise, I noted that known evolutionary mechanisms can create CSI based on the definition used by vjtorley in his calculation. No one else has defined CSI with any degree of mathematical rigor, let alone provided any example calculations.
So the question remains: Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality.
Darned if I know, it really depends on the exact definition of "information", "complex specified information" in this case. Would you please define CSI with some mathematical rigor and demonstrate how to calculate it for the four scenarios I detailed in the original post? MathGrrl
Joseph,
Origins- CSI is about origins.
That is not reflected in Dembski's paper referenced in the original post of this thread and it does not help to demonstrate how to calculate CSI. MathGrrl
Indium says "Anyway, it is quite easy to see that, no matter what the exact definition is, known evolutionary processes like gene duplication+divergence can increase CSI, at least theoretically. Therefore, from a purely mathematical point of view, things are quite clear." No matter what the exact definition is? If you don't know what the definition is then how is it "quite clear?" Collin
F/N: What Dembski said on CSI at ARN in 1998 is, as excerpted: ______________ >> I shall (1) show how information can be reliably detected and measured [he develops in outline the usual negative log probability metric that traces to Hartley et al and which is easily accessed elsewhere, e.g. in the always linked, section a], and (2) formulate a conservation law that governs the origin and flow of information. My broad conclusion is that information is not reducible to natural causes, and that the origin of information is best sought in intelligent causes. Intelligent design thereby becomes a theory for detecting and measuring information, explaining its origin, and tracing its flow . . . . In Steps Towards Life Manfred Eigen (1992, p. 12) identifies what he regards as the central problem facing origins-of-life research: “Our task is to find an algorithm, a natural law that leads to the origin of information.” Eigen is only half right. To determine how life began, it is indeed necessary to understand the origin of information. Even so, neither algorithms nor natural laws are capable of producing information . . . . What then is information? The fundamental intuition underlying information is not, as is sometimes thought, the transmission of signals across a communication channel, but rather, the actualization of one possibility to the exclusion of others. As Fred Dretske (1981, p. 4) puts it, “Information theory identifies the amount of information associated with, or generated by, the occurrence of an event (or the realization of a state of affairs) with the reduction in uncertainty, the elimination of possibilities, represented by that event or state of affairs.” . . . . For specified information not just any pattern will do. We therefore distinguish between the “good” patterns and the “bad” patterns. The “good” patterns will henceforth be called specifications. Specifications are the independently given patterns that are not simply read off information . . . . The distinction between specified and unspecified information may now be defined as follows: the actualization of a possibility (i.e., information) is specified if independently of the possibility’s actualization, the possibility is identifiable by means of a pattern. If not, then the information is unspecified. Note that this definition implies an asymmetry between specified and unspecified information: specified information cannot become unspecified information, though unspecified information may become specified information . . . . there are functional patterns to which life corresponds, and which are given independently of the actual living systems. An organism is a functional system comprising many functional subsystems. The functionality of organisms can be cashed out in any number of ways. Arno Wouters (1995) cashes it out globally in terms of viability of whole organisms. Michael Behe (1996) cashes it out in terms of the irreducible complexity and minimal function of biochemical systems. Even the staunch Darwinist Richard Dawkins will admit that life is specified functionally, cashing out the functionality of organisms in terms of reproduction of genes. Thus Dawkins (1987, p. 9) will write: “Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.” . . . .
To see why CSI is a reliable indicator of design, we need to examine the nature of intelligent causation. The principal characteristic of intelligent causation is directed contingency, or what we call choice. Whenever an intelligent cause acts, it chooses from a range of competing possibilities. This is true not just of humans, but of animals as well as extra-terrestrial intelligences . . . . A bottle of ink spills accidentally onto a sheet of paper; someone takes a fountain pen and writes a message on a sheet of paper. In both instances ink is applied to paper. In both instances one among an almost infinite set of possibilities is realized. In both instances a contingency is actualized and others are ruled out. Yet in one instance we infer design, in the other chance. What is the relevant difference? Not only do we need to observe that a contingency was actualized, but we ourselves need also to be able to specify that contingency. The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable . . . . CSI is a reliable indicator of design because its recognition coincides with how we recognize intelligent causation generally. In general, to recognize intelligent causation we must establish that one from a range of competing possibilities was actualized, determine which possibilities were excluded, and then specify the possibility that was actualized. What’s more, the competing possibilities that were excluded must be live possibilities, sufficiently numerous so that specifying the possibility that was actualized cannot be attributed to chance. In terms of probability, this means that the possibility that was specified is highly improbable. In terms of complexity, this means that the possibility that was specified is highly complex . . . . To see that natural causes cannot account for CSI is straightforward. Natural causes comprise chance and necessity (cf. Jacques Monod’s book by that title). Because information presupposes contingency, necessity is by definition incapable of producing information, much less complex specified information. For there to be information there must be a multiplicity of live possibilities, one of which is actualized, and the rest of which are excluded. This is contingency. But if some outcome B is necessary given antecedent conditions A, then the probability of B given A is one, and the information in B given A is zero. If B is necessary given A, Formula (*) reduces to I(A&B) = I(A), which is to say that B contributes no new information to A. It follows that necessity is incapable of generating new information. Observe that what Eigen calls “algorithms” and “natural laws” fall under necessity . . . Contingency can assume only one of two forms. Either the contingency is a blind, purposeless contingency-which is chance; or it is a guided, purposeful contingency-which is intelligent causation. Since we already know that intelligent causation is capable of generating CSI (cf. section 4), let us next consider whether chance might also be capable of generating CSI. First notice that pure chance, entirely unsupplemented and left to its own devices, is incapable of generating CSI. Chance can generate complex unspecified information, and chance can generate non-complex specified information. What chance cannot generate is information that is jointly complex and specified. Biologists by and large do not dispute this claim. 
Most agree that pure chance-what Hume called the Epicurean hypothesis-does not adequately explain CSI. Jacques Monod (1972) is one of the few exceptions, arguing that the origin of life, though vastly improbable, can nonetheless be attributed to chance because of a selection effect. Just as the winner of a lottery is shocked at winning, so we are shocked to have evolved. But the lottery was bound to have a winner, and so too something was bound to have evolved. Something vastly improbable was bound to happen, and so, the fact that it happened to us (i.e., that we were selected-hence the name selection effect) does not preclude chance. This is Monod’s argument and it is fallacious. It fails utterly to come to grips with specification . . . . The problem here is not simply one of faulty statistical reasoning. Pure chance is also scientifically unsatisfying as an explanation of CSI. To explain CSI in terms of pure chance is no more instructive than pleading ignorance or proclaiming CSI a mystery. It is one thing to explain the occurrence of heads on a single coin toss by appealing to chance. It is quite another, as Küppers (1990, p. 59) points out, to follow Monod and take the view that “the specific sequence of the nucleotides in the DNA molecule of the first organism came about by a purely random process in the early history of the earth.” CSI cries out for explanation, and pure chance won’t do. As Richard Dawkins (1987, p. 139) correctly notes, “We can accept a certain amount of luck in our [scientific] explanations, but not too much.” If chance and necessity left to themselves cannot generate CSI, is it possible that chance and necessity working together might generate CSI? The answer is No. Whenever chance and necessity work together, the respective contributions of chance and necessity can be arranged sequentially. But by arranging the respective contributions of chance and necessity sequentially, it becomes clear that at no point in the sequence is CSI generated. Consider the case of trial-and-error (trial corresponds to necessity and error to chance). Once considered a crude method of problem solving, trial-and-error has so risen in the estimation of scientists that it is now regarded as the ultimate source of wisdom and creativity in nature. The probabilistic algorithms of computer science (e.g., genetic algorithms-see Forrest, 1993) all depend on trial-and-error. So too, the Darwinian mechanism of mutation and natural selection is a trial-and-error combination in which mutation supplies the error and selection the trial. An error is committed after which a trial is made. But at no point is CSI generated. Natural causes are therefore incapable of generating CSI. This broad conclusion I call the Law of Conservation of Information, or LCI for short. LCI has profound implications for science. Among its corollaries are the following: (1) The CSI in a closed system of natural causes remains constant or decreases. (2) CSI cannot be generated spontaneously, originate endogenously, or organize itself (as these terms are used in origins-of-life research). (3) The CSI in a closed system of natural causes either has been in the system eternally or was at some point added exogenously (implying that the system though now closed was not always closed). (4) In particular, any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.>> ______________ That was in 1998, 13 years ago. kairosfocus
FOR THE RECORD: I note that MG has posted a contribution as above, and respond for the record as follows (following up from several posts here and at MF's blog that have gone over much the same ground): 1 --> In the post's main body there are 4469 ASCII characters, and at 7 bits per character [128 possibilities] that gives a space of ~1.32*10^9,417 possible configurations. 2 --> Of these, rather few will be in grammatically correct, contextually relevant English, i.e. functional. 3 --> But the functionality can be recognised by our now proverbial "semiotic agent" [who under the label "observer" is firmly embedded in say Quantum Physics]; the post fits the external specification/pattern of being in English language sentences, defining a cluster of states distinguishable from those that are not. 4 --> Likewise, we already have a specification in binary digits [bits], a standard measure of information carrying capacity. 5 --> In essence -- for clarity -- one bit is one yes/no unit. Two bits allow 2 states in the second digit for each of the two in the first, and so on. So, 2 bits permit 4 states, 3 bits eight, and n bits 2^n states. 6 --> Further to this, we are of course long since beyond 143 ASCII characters or 1,000 bits; which corresponds to 1.07*10^301 configs. 7 --> It so turns out that the ~ 10^80 atoms of the observed cosmos, changing state every Planck time [about 10^20 times faster than the fastest nuclear interactions, on the strong force], for the thermodynamic lifespan of the cosmos [about 50 mn times the time usually estimated to have lapsed since the origin of the observed cosmos] would undergo 10^150 states. 8 --> Consequently, the resources of the observed cosmos would be inadequate to investigate more than 1 in 10^151 of the possibilities for 1,000 bits. 9 --> That is, no search on the gamut of the cosmos would be adequate to search enough of such a space to be materially different from no search. [Similar challenges are at the basis of the second law of thermodynamics. App 1 of my always linked discusses this.] 10 --> We are now ready to calculate the basic level, functionally specific bits measure of the post above, C*S*B = X, where complexity C = 1 as 4469 7-bit symbols > 1,000 bits, (functional) specificity S = 1 as this is English text, and the number of bits is B = 4469 * 7 = 31,283. (A short code sketch following this comment reproduces the arithmetic.) 11 --> We have 31,283 functionally specific bits, similar to how we measure file size in general. The FSCI inference is that once we are past the 1,000 bit threshold, and have functionally specific bits -- think about what random bit changes would do to the above message in short order -- the explanatory inference is that the post is designed. 12 --> This is of course independently known, and reflects the reliability of the inference on FSCI to design. 13 --> Similarly, for a 300 AA typical functional protein, we are looking at mRNA of 3 * 300 = 900 4-state characters, or 1,800 functionally specific bits. And, it is known that while some stretches of a protein are fairly tolerant, significant random AA substitution will destroy function in rapid order, i.e., proteins are functionally specific. 14 --> The onward inference is that on the empirical reliability of FSCI as a sign of design and the linked "Infinite Monkeys" analyses, the protein too is best explained as an artifact of design. 
15 --> Now, we have chosen a familiar example and a simple heuristic, to bring across the point of what functionally specific complexity is [by live example], and why it is that intelligence rather than blind random walks filtered on trial and error is the best explanation for it. 16 --> There are more complex models and metrics and more elaborate calculations [as have been raised before also, all of this is uncommonly like going in pointless circles], but they boil down to identifying the same Orgel complex specificity, in light of the same Wicken wiring diagram discussed here [and remember a string structure is a wiring diagram with nodes in a line], leading to the same issue of isolated islands of function in large config spaces where intelligent search is the most credible means to arrive at functionally specific configs, on search challenges to a random walk filtered by trial and error. 17 --> And MF's dismissive remarks notwithstanding, this is where the hyperskepticism issue comes in:
(i) If the above is "meaningless" -- a term used by MG in previous discussions -- then Orgel and Wicken were also meaningless. (MG has consistently not responded to this.) (ii) By posting a contribution in English, ASCII text, MG has in fact provided a personal example of how FSCI is best explained by design. (For very good reason, no infinite monkeys process on the gamut of the observed cosmos would be credible as an explanation of her post. Why then are we invited to imagine that such lucky noise and trial and error processes are credible as the source of the vastly more complex and specific functionality of the first living cell, and onward novel body plans, where we are dealing with 100,000 - 1 mn bits and well past 10 mn bits for these cases?) (iii) So, if MG wishes to deny the above, she is forced to actually instantiate the reason why FSCI is a good sign of design. That is very important as a baseline: self-referentiality. (iv) In addition, she has produced a file using digital technology, applying the metric of functional bits, i.e. we are dealing with a real-world metric, one that is now a commonplace of a dominant technology. So, to try to dismiss the significance of functional bits is also self-referential. (v) Finally, if one refuses to acknowledge the above, it is predictable that more complex cases will be even more intractable. (Not because they are "meaningless," but because MG is in a self-referentially incoherent loop.)
18 --> Going further, one of the major issues over recent days has been the proposal that ev and co show how evolutionary mechanisms (blind chance filtered by selection on improved function) can give rise to CSI. 19 --> This begs the material question highlighted by the above simple analysis: getting to islands of function in large config spaces. 20 --> Hill climbing within such an island, on built-in information and algorithms tracing to design, looks to me suspiciously like a case of walking in circles in the snow: seeing more and more tracks, one thinks one is getting closer to civilisation and rescue. 21 --> In reality, one is just going in circles and mis-attributing causes. ++++++++ So, have fun. Good day, GEM of TKI PS: I will follow up with an excerpt from Dembski where he took an initial step in his analysis that leads to the paper Specification. Maybe that will help in clarifying what is going on. kairosfocus
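To make the X = C*S*B heuristic in step 10 above concrete, here is a minimal sketch (Python). The function name and defaults are ours for illustration; the 7 bits per character and the 1,000-bit threshold come from the comment, and the specificity flag is the observer's judgement rather than anything the code computes.

```python
# Minimal sketch of the X = C*S*B functionally-specific-bits heuristic.
# Assumptions: 7-bit ASCII, a 1,000-bit complexity threshold, and a
# specificity judgement supplied by the caller (the "semiotic agent").

def fsci_bits(text: str, specific: bool = True, threshold_bits: int = 1000) -> int:
    """Return X = C*S*B, or 0 if the complexity or specificity flag is 0."""
    B = len(text) * 7                   # carrying capacity: 7 bits per ASCII char
    C = 1 if B > threshold_bits else 0  # complexity flag: past 1,000 bits?
    S = 1 if specific else 0            # specificity flag: judged functional?
    return C * S * B

post = "x" * 4469        # stand-in for the 4,469-character post discussed above
print(fsci_bits(post))   # 31283 functionally specific bits, as in step 10
```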
Didn't Dr. Dembski kind of denounce the CSI concept? Or was that the explanatory filter? Anyway, it is quite easy to see that, no matter what the exact definition is, known evolutionary processes like gene duplication+divergence can increase CSI, at least theoretically. Therefore, from a purely mathematical point of view, things are quite clear. But the ID camp will not give up the ambiguity so easily, Mathgrrl. It is extremely valuable, in fact it's the only thing that keeps the whole concept alive. Therefore, I am quite surprised you have been allowed to put a spotlight on this topic... Credits to O'Leary and Jonathan M: This has been a bold decision! Special thanks to vjtorley for his work, too! Indium
Mathgrrl: In the very long recent UD thread on CSI, I proposed an alternative definition of CSI: Chi=-log2[(10^120).(SC/KC).PC], where SC is the Shannon complexity (here defined as the length of the string after being compressed in the most efficient manner possible), KC is the Kolmogorov complexity (here defined as the length of the minimum description needed to characterize a pattern) and PC is the probabilistic complexity, defined as the probability of the pattern arising by natural non-intelligent processes. In your response, you wrote several comments:
While I understand your motivation for using Kolmogorov-Chaitin complexity rather than the simple string length, the problem with doing so is that KC complexity is uncomputable.
Quite so. That's the point. Intelligence is non-computational. That's one big difference between minds and computers. But although CSI is not computable, it is certainly measurable mathematically. To use an old example: suppose we received a signal from space, containing the first 100 digits of pi. Here, the length of the description "1st 100 digits of pi" (or the Kolmogorov complexity, as I have defined it) is significantly less than the length of the string, which cannot be compressed statistically because the digits of pi follow no repeating pattern - hence the Shannon complexity as I have defined it above is 100. Concerning the probabilistic complexity in the denominator of my formula, I originally wrote:
I envisage PC as a summation, where we consider all natural non-intelligent processes that might be capable of generating the pattern, calculate the probability of each process actually doing so over the lifespan of the observable universe and within the confines of the observable universe, and then sum the probabilities for all processes. Thus PC would be Sigma[P(T|H_i)], where H_i is the hypothesis that the pattern in question, T, arose through some naturalistic non-intelligent process (call it P_i). In reality, a few processes would likely dwarf all the others in importance, so PC could be simplified by ignoring the processes that had a very remote chance of generating T, relatively speaking.
Your comment on probabilistic complexity was as follows:
This is another term that is impossible to calculate, although in this case it is a practical rather than a theoretical limitation. We simply don't know the probabilities that make up PC... Computing PC based on known processes and assumed probabilities will certainly lead to many false positives. This version of CSI is therefore more a measure of our ignorance than of intelligent agency, just as Dembski's is.
In reply: the fact that we don't know what the probabilities are doesn't mean that we can't put an upper bound on them, by computing the probabilities for a wildly optimistic scenario. That was what Dr. Stephen Meyer wrote about in his book, Signature in the Cell, where he states on page 213:
In 1983 distinguished British cosmologist Sir Fred Hoyle calculated the odds of producing the proteins necessary to service a single one-celled organism by chance at 1 in 10^40,000... [Postdoctoral researcher Douglas] Axe's experimental findings suggest that Hoyle's guesses were pretty good. If we assume that a minimally complex cell needs at least 250 proteins of, on average, 150 amino acids and that the probability of producing just one such protein is 1 in 10^164 as calculated above, then the probability of producing all of the necessary proteins needed to service a minimally complex cell is 1 in 10^164 multiplied by itself 250 times, or 1 in 10^41,000. That kind of number allows a great amount of quibbling about the accuracy of various estimates without altering the conclusion. The probability of producing the proteins necessary to build a minimally complex cell - or the genetic information necessary to produce these proteins - by chance is unimaginably small.
Of course, Meyer's calculation here applies to chance processes, and various origin-of-life researchers have suggested that there is a kind of biochemical predestination in Nature which makes the emergence of life highly likely, given enough time and a planet orbiting its star in the habitable zone. But the key problem with this view (as Meyer argues in chapter 10 of his book) is that if bonding affinities in DNA determined its sequencing, DNA would be unable to carry the vast amounts of information that it does. DNA would be characterized by order (and hence redundancy) rather than information. I conclude that it is possible to compute plausible upper bounds on probabilistic complexity, in the light of what we know. You also wrote:
If you're proposing a new metric, you need to clearly and rigorously define it, which you've made a good start at, and show how it actually measures what you claim it measures with some worked examples... One problem you'll immediately encounter is identifying artifacts that are not designed, so that you can show that your metric doesn’t give false positives.
OK. Let's deal with that last point. Here's an example.

The Precambrian Smiley Face

Suppose I dig up a Precambrian rock with what appears to be a smiley face on it: a circle, two dots that look like eyes, and a curved line segment that looks like a mouth. Two possible Kolmogorov descriptions of this face would be: (1) "smiley face" and (2) "a circle, containing two dots above a curved line". If the proportions were sufficiently accurate that anyone seeing it would call it a smiley face (e.g. if the eyes were evenly spaced and about one-third of the way down from the top), then I'd go with the former description; but if the two eyes were on the same side of the circle or something like that, I'd go with the latter description.

To calculate the Shannon complexity, I'd need to break it up into its three components: circle, two dots and curve. To make it mathematically manageable in terms of the level of precision, I'd pixellate the representation of the smiley face, as no perfect circles exist in Nature. Let's suppose that at the 128x128 level, however, the smiley face was still a perfect circle. Then each row could be represented as alternating white and black spaces (or 0s and 1s), where the outline of the circle corresponded to black or 1. So in a typical row, you'd have x 0's (white space), a 1 (black), y 0's (more white spaces), a 1 (black, on the other side of the circle) and x 0's again (by symmetry). x would always be less than 64, so it'd need 6 bits to specify. y would always be less than 128, so it'd need 7 bits. So in a typical pixellated row, the number of bits you'd need to specify the circle would be: (1+6)+(1+1)+(1+7)+(1+1)+(1+6) = 26 bits. The first 1+ in each case specifies the color; the number after the + specifies the number of bits with that color. But since the right hand side of the circle is the same as the left, 13 bits should be enough to specify the pixellated row, in terms of Shannon information.

The next row would be much like the previous one, except that the black spaces would be a little closer together or further apart. To specify that row, you'd only need two bits: one telling you how many spaces left or right to move the black pixel, relative to the previous row (it would never be more than one space, as the shape we're dealing with is a circle, and the pixellation is pretty fine), and the other to tell you whether to move the black pixel left or right. Also, the top half of a circle is the same as the bottom, so we'd only need to specify 64 rows. By my calculations, I get 13+(63x2) = 139 bits to specify a 128x128 pixellated circle, in terms of its Shannon complexity, where 13 stands for the top row and 63 represents the number of rows following it. Since we only have a quarter of a circle here, we'd need two more bits to specify: copy again to the right and copy again in the bottom half. So that's 141 bits altogether.

All right. What about the eyes? Row number, column number for the first eye should suffice. So: 1 bit to specify black, 6 bits to specify row number, and 6 to specify column number. That's 13, and if you add 1 bit to say: copy on the right hand side, you get 14.

The mouth is a bit tricky. Let's say it's about 4 rows deep. However, the black pixels in each row would be one or two line segments (not two dots, as in the circle case), so the specification required to describe it in the two-line case should be: (1+6)+(1+6)+(1+6)+(1+6)+(1+6) = 35 bits. Half of that gives you 18 bits. 
You also need 6 bits to specify the row number for the first row where the mouth appears. Changing each successive row of the mouth requires more bits than for the circle case, as we have to change the start and end points of the line segment, by moving it (let's say) anywhere up to 16 (=2^4) columns to the left or right in the next row, so that's [1 (for L or R) + 4] times two (for start and end points of the line segment), or 10 bits. Total for the mouth: 18+6+(10x3) = 54 bits.

Total Shannon complexity for a pixellated 128x128 smiley face: 141+14+54 = 209 bits. "Smiley face" has 11 characters, making it much shorter than the Shannon string needed to specify it. To properly describe the one we found, we need to specify it as follows: "128x128 smiley face". That's only 19 characters. Even if you insist on representing each letter as 6 bits (2^6=64, compared to 26 letters in the alphabet plus 10 digits), you still get 114 bits, which is much less than 209.

Probabilistic complexity: let's assume that all the world's rocks are black and white, with no colors and no shades of gray. Let's assume they can all be represented in pixellated terms. The odds that a given 128x128 slab of rock will have an identical arrangement of black and white pixels to the smiley face we found are 1 in 2^16,384. But of course a smiley face could look slightly different. Since I specified a 128x128 smiley face, I'm just going to deal with the 128x128 smiley face in my calculations, and not a smaller one. How much could its shape vary while leaving it recognizable as a smiley face? Each dot for the eye could probably move about 15 pixels up, down, left or right. Of course, the other eye dot would have to move the same way, to maintain perfect facial symmetry. So that gives us 30x30 = 900 possibilities. The top row of the mouth could probably move the black line segments 15 pixels left or right. The rows below would more or less have to move in sync. The row number for the top row could perhaps be varied by 15 pixels, up or down. So that's 30x30 again, but let's be generous and allow the mouth to vary in depth, from 1 (a flat smile) to 10 (we don't want a V-shaped smile). 30x30x10 is 9,000. The circle can't vary, if it's a 128x128 circle. So the number of possible 128x128 smiley faces comes out at 900x9,000 = 8,100,000, which is much, much less than 2^16,384. It's about 2^23. So the probabilistic complexity of a 128x128 smiley face is about 1 in 2^16,361.

10^120 (the upper bound on the number of events in the observable universe, and hence a very generous upper bound for the number of slabs of rock) is about 2^399. SC/KC is 209/114, or about 2^1. So on my definition Chi is: -log2[(2^399).(2^1).(2^-16,361)] = -log2[2^(-15,961)] = 15,961 >> 1.

I hope this satisfies you as a detailed calculation, using a concrete example. Not being a biologist I can't comment on ev, Tierra or the Steiner problem. But I hope you will recognize that it represents a useful metric for SETI fans who encounter alien artifacts when exploring another planet. I don't think my measure of CSI will yield any false positives. vjtorley
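For readers who want to check vjtorley's arithmetic, here is a rough sketch (Python) that feeds his stated component values back through his formula Chi = -log2[(10^120).(SC/KC).PC]. The numbers are taken from the comment as given, not derived independently.

```python
# Re-running vjtorley's smiley-face figures through his formula
# Chi = -log2[(10^120) * (SC/KC) * PC], using his stated component values.
import math

SC = 141 + 14 + 54          # Shannon complexity: circle + eyes + mouth = 209 bits
KC = 19 * 6                 # "128x128 smiley face", 19 chars at 6 bits = 114 bits
log2_PC = 23 - 128 * 128    # log2 of (~2^23 acceptable faces / 2^16384 grids)

chi = -(math.log2(10**120) + math.log2(SC / KC) + log2_PC)
print(SC, KC, round(chi))   # 209 114 15961 -- matching the hand calculation
```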
I want to thank MathGrrl for this post. The subsequent discussion is most interesting. And I'll thank UD for permitting this post. Neil Rickert
Kudos to Jonathan M and Denyse for allowing a non-ID proponent to open a thread. Question for ID proponents: at what point is the "specified" bit of CSI determined? Before the design arises or after? Grunty
About gene duplication allegedly increasing CSI, I would reply with a simple question: "when one sends an email twice, does the receiver get additional information with respect to the single mail?". It is obvious there is no additional information. About Dembski's CSI, it is true that if we concatenate two identical mails we have a text string with double the number of characters (hence double the complexity) with respect to the single one. But the specification of the two identical sub-strings is exactly equal (said otherwise, there is no new functionality). Therefore the specification doesn't increase at all. If the added specification is zero, then the added CSI is zero too (even if the complexity increases). In fact we have *new* CSI only when complexity _AND_ specification are at the same time both greater than zero (i.e. in a sense they are both *new*), and here the specification is not. Therefore gene duplication doesn't help to create new genetic information, exactly as, in the software industry, simply duplicating subroutines doesn't help to produce new software. niwrad
Mod comment: PaV at 32 and 34: MathGrrl is an invited guest. We don't accuse people of making an "insincere statement" here. Cool off, okay? jon specter, too bad about your sciatica. The witless platypus thinks he is fine, and who dare argue? It's funny, yes, but why not wait for one of my more frivolous Coffee!! posts to add to the fun. O'Leary
I once suggested that maybe God delegated creation to a committee of angels.
That would certainly explain the platypus.... and my sciatica. jon specter
By the way, I registered because of the integrity of allowing opponents to post. It means a lot to me. maproctor
"Discussion of the general topic of CSI is, of course, interesting, but calculations at least as detailed as those provided by vjtorley are essential to eliminating ambiguity." -- MathGrrl I believe they can be found below the abstract in the paper: `Specification: The Pattern That Signifies Intelligence.` You'll find it's the first link in your post. It also provides worked examples of the same caliber as vtjorley's previous response to you. Given that what you ask is already knowingly available then you are simply being unserious in demanding that the gallery provide a dissertation to you. You have either not read the paper you linked and have your answers easily available or your are playing cute by not stating your objections to the work with which you are assumedly already familiar. That said, vtjorley is correct that CSI is unuseful. CSI, as laid out in the paper you linked, is based on the algorithmic complexity of a linear bitstring as compared to a random set of coin flips. DNA is not a pure linear bit string of a structured language in this sense -- Dembski's sonnet -- nor do its products express and operate in a single linear fashion so it's invalid at first blush. Further, given that the Darwinian process does not posit a one-time random coin flip it's not useful in distinguishing from the claims of ID from Darwin. To the point that anyone wishes to treat DNA as a proper bitstring under these notions then the rejection of the CSI hypothesis would reject both ID and Darwin in one shot. Or, to the other side, conform to both of them. The problem is not, and should not be, one of outcome but of the process that led to it. In that regard ID and Darwin are each the null hypothesis of the other. To the degree that you are asking for testable claims of ID and rigorous math? I give you the last century of research into evolutionary biology. It is impossible for this discussion to be settled so long as everyone keeps pretending that an organism is a simple bitstring rather than a control theory issue of graph topology and feedback. John Quincy Public
vjtorley, That's a really interesting thought... isn't it conceivable that random processes would have a 'signature' too? So one could determine what is the work of a 'designer 1' which is intelligent and a 'designer 2' which is a process. Perhaps the devil is in the details? Repetition can give you new information if the information is relative. So for instance two exit signs have more information than one if the first points to the second. How to get to a destination contains a variable amount of information. maproctor
MathGrrl: I just noticed that you wrote this at the beginning of your post (which I really can't believe is being allowed. You should be run off of this board!): "In the abstract of Specification: The Pattern That Signifies Intelligence, . . ." Please tell me you've read more than just the abstract!! If you haven't, then please, just go away. If you have read more, then, please, tell me where in that paper you are having confusion or difficulties. And, what problems are you having applying what is written in that paper to the putative "scenarios" you've listed? If you can't do this much, then my assessment of you in my earlier post is 'spot on'. PaV
Upright, Would you explain what semiotic convention means? Collin
MathGrrl: I clearly stated in the original post that, based on my reading of the available material, I do not understand how to calculate CSI. Instead of asking me questions, why don't you provide some answers? I think this is an insincere statement. You can't possibly state that you're familiar with No Free Lunch and then turn around and say: " . . . I don't understand how to calculate CSI." This is outlandish. The EASIEST part of CSI is the calculation of complexity. And certainly, as Dembski presents it in his paper on "Specification", it is a more complicated, world-encompassing approach; but the simplified version is a simple negative log calculation of improbability. Some 8th graders could do the calculation. Why can't you, or won't you, give a definition of, and an example of, a "specification", as best you understand it? If you have no basic understanding, then why should I attempt any kind of dialogue with you? Why should I waste my time? MarkF: I think I'm entitled to some kind of an answer. A "specification" can be a very involved mental construction, especially when it comes to the putative "scenarios" that MathGrrl invokes. Why should I waste my time putting something together like that when she hasn't made the effort to come to grips with Dembski's definition of "specification"? PaV
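For what it is worth, the "simple negative log calculation of improbability" PaV mentions fits in a few lines. The sketch below (Python) covers only the complexity half of CSI, with the specification left entirely to the analyst; it is an illustration, not anyone's official metric.

```python
# The complexity half of CSI as a simple negative log: I = -log2(p),
# the information in bits associated with an event of probability p.
import math

def info_bits(p: float) -> float:
    return -math.log2(p)

print(info_bits(0.5 ** 100))   # 100 fair coin flips  -> 100.0 bits
print(info_bits(0.25 ** 300))  # 300 random DNA bases -> 600.0 bits
```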
Mathgrrl, In each of the threads you recently participated in here at UD, you kept making the claim that evolutionary algorithms can create information, in this case complex specified information. Yet, as I have pointed out to you, information – any information – only exists by means of a semiotic convention and rules (unless you disagree, and can show an example otherwise). So the question remains: Does the output of any evolutionary algorithm being modeled establish the semiosis required for information to exist, or does it take it for granted as an already existing quality. In other words, if the evolutionary algorithm – by any means available to it – should add perhaps a ‘UCU’ within an existing sequence, does that addition create new information outside (independent) of the semiotic convention already existing? If we lift the convention, does UCU specify anything at all? If UCU does not specify anything without reliance upon a condition which was not introduced as a matter of the genetic algorithm, then your statement that genetic algorithms can create information is either a) false, or b) over-reaching, or c) incomplete. Upright BiPed
Joseph, by "well understood" I think MG means -- in part -- that gene duplication has been observed to happen without apparent intervention. To posit intelligent intervention in those instances would be to say that an intelligence came in and duplicated a gene while leaving no trace of its presence. Since we know genes mutate without intervention, and since gene duplication is similar to any other mutation at base, your suggestion would mean that an intelligence could be operating at every point along the way. That seems like an impossibly high bar. QuiteID
Or at least you have an 81% chance of the code containing CSI and therefore an 81% chance of the code being designed. Collin
Mathgrrl, Then in step 2 you identify a similar function and see if the code is the same. You can compare the code-function of species 1 with the code-function of species 2 and get a similarity factor. Maybe they are 90% the same. Then you multiply it by a factor of negative relatedness. A human and a chimp would have a low factor like .1. So multiply the 90% by the .1 and you could get a 9% chance that the code is specified. But if the code-function is found in unrelated species, like a human and a sponge, then you multiply it by something like .9. 90% by .9 equals an 81% chance of the code being "specified." If the code is also complex (a given) then you have CSI. Again, I'm just brainstorming, so don't ridicule me even if my reasoning is ridiculous. :) Collin
MathGrrl, though I can't 'do some math' with you (unless you want to stay with very basic math), perhaps this empirical evidence will be of interest to you: Flowering Plant Study 'Catches Evolution in the Act' Excerpt: The new species formed when two species introduced from Europe mated to produce a hybrid offspring. The species mated before in Europe, but the hybrids were never successful. However, in America something new happened -- the number of chromosomes in the hybrid spontaneously doubled, and at once it became larger than its parents and quickly spread. http://www.sciencedaily.com/releases/2011/03/110317131034.htm Now MathGrrl, this looks like just the type of evidence you need to make your case, does it not? But it turns out that this evidence, as compelling as it may be on the surface, does not 'make the case' for evolution. Can you tell me why it does not? Here is a hint. Evolution by Gene Duplication Falsified - December 2010 Excerpt: The various postduplication mechanisms entailing random mutations and recombinations considered were observed to tweak, tinker, copy, cut, divide, and shuffle existing genetic information around, but fell short of generating genuinely distinct and entirely novel functionality. Contrary to Darwin's view of the plasticity of biological features, successive modification and selection in genes does indeed appear to have real and inherent limits: it can serve to alter the sequence, size, and function of a gene to an extent, but this almost always amounts to a variation on the same theme—as with RNASE1B in colobine monkeys. The conservation of all-important motifs within gene families, such as the homeobox or the MADS-box motif, attests to the fact that gene duplication results in the copying and preservation of biological information, and not its transformation as something original. http://www.creationsafaris.com/crev201101.htm#20110103a bornagain77
vjtorley, You think that there were more than 1 designers? That is radical. I like it. I once suggested that maybe God delegated creation to a committee of angels. Anyway, off topic... On topic: I'm not sure how stylometry would work because all of our references would be called related via common descent and therefore not independent. Collin
bornagain77:
MathGrrl posting a thread??? Is this Uncommon Descent???
Yes, a big thank you to all involved- good job and perhaps the start of something interesting. Joseph
MathGrrl:
The biochemistry that can result in a gene duplication is reasonably well understood. No intelligent agent is required for it to occur.
The circuitry of my PC is very well understood and required designing agencies for its manufacture. Just because something is "understood" doesn't mean the blind watchmaker didit. Dr Spetner wrote about this back in 1997 in "Not By Chance". But this gets to the root of the problem- MathGrrl just accepts that a protein producing gene duplication is a blind watchmaker process- just because it is reasonably well understood- yet moans about CSI. So I will say it again: Origins- CSI is about origins. If living organisms can arise from non-living matter via chance and necessity then CSI is moot, all evolutionary processes are blind watchmaker processes and ID is dead. If you are going to start with that which needs an explanation in the first place- ie living organisms- then you have already cheated. Would you like to talk about that? Or are you going to continue to ignore it? Joseph
Collin (#11) I was very interested in your proposal that the linguistic method of stylometry can be applied to DNA. The reason is that I suspect you'd find more than one fingerprint. You also wrote:
The big obstacle is that for stylometry you have to have a reference that you know was written by a certain author.
I'd suggest starting with the most highly conserved sections of our DNA, which are found in all or nearly all organisms. Let's attribute these to Designer 1. I suspect that if you examined the DNA of higher animals, which regulates their pain responses as well as their social behavior, you'd find another fingerprint (let's call it Designer 2), indicating that Designer 1's work may have been tampered with. Death, predation, disease and some degree of suffering are part and parcel of the natural order. But that does not imply that the various instances we see of aberrant behavior in the animal kingdom (e.g. cannibalism of infants by female chimpanzees), or excruciatingly painful deaths, are part of the original plan of Providence. Something is rotten in the state of Nature, as we know it. vjtorley
MathGrrl posting a thread??? Is this Uncommon Descent??? Twilight Zone Opening THEME MUSIC 1962 Rod Serling http://www.youtube.com/watch?v=-b5aW08ivHU bornagain77
Mathgrrl, I guess I'm being tangential, sorry. I think I am still struggling with the definition of CSI. So I am trying to nail it down first by brainstorming different approaches. Perhaps we can start with #1 which would help us nail down just how "specified" a code section is. If a code section has no extraneous parts, then it is highly specified. For example, if it says, "Build protein X and be happy about it" then it is not highly specified because the "and be happy about it" is extraneous. So you could compare the function with the code and see how tightly they fit. If they fit tightly then you can quantify it mathematically, I think. Does that make any sense? I'm not sure it does. Collin
Collin, Could you please show how your approach aligns with Dembski's definition of CSI and show how to calculate it for the four scenarios I described? MathGrrl
PaV,
Give us your own sense of what you think specification is, and one of your own examples of what you think a specification is.
Again, I have provided a specification for each of four scenarios. If you find them somehow unusable, please provide your rigorous definition of CSI, show how to create appropriate specifications, and perform some example calculations. I clearly stated in the original post that, based on my reading of the available material, I do not understand how to calculate CSI. Instead of asking me questions, why don't you provide some answers? MathGrrl
Joseph,
How was it determined that a gene duplication that leads to an additional protein is a blind watchmaker process?
The biochemistry that can result in a gene duplication is reasonably well understood. No intelligent agent is required for it to occur. If you are trying to make some form of fine-tuning, front-loading, or cosmological ID argument, that might be interesting but isn't pertinent to determining how to calculate CSI. MathGrrl
Continued... This inference can be made because if a similar function arose in an unrelated species, then it would probably arise via different coding instructions. After all, there must be a million ways to say "Build protein X" in the genome. But if it is said the same way, in unrelated species, then a common designer is using what has worked for Him in the past to implement the new protein creation function. Collin
vjtorley, Nice to hear from you again!
The “x2” refers to the semiotic description.
If I understand you correctly, "semiotic description" is equivalent to Kolmogorov-Chaitin complexity. I agreed with you in the previous thread when you suggested that this would make sense, but it's not what Dembski uses in his description of CSI. It may be interesting to discuss alternative metrics in a subsequent thread, but here I'm very focused on trying to understand CSI as described by Dembski.
Here’s a reply I got from physicist Rob Sheldon:
[I]f “x2” were independent of the remainder of the genome, it would be CSI. The point is that it isn’t independent, it is a duplication.
All mutations are dependent on the existing genome (this is one reason why the search metaphor isn't a particularly good one -- evolutionary mechanisms only "search" near known viable solutions). They are "just" duplications, "just" point mutations, "just" insertions, "just" deletions, etc. What you seem to be suggesting here is that CSI is only applicable to de novo creation events. Am I misunderstanding?
received a message recently from another ID proponent, E.H.:
I believe Rob is basically saying the CSI is a product of the duplication algorithm. . . . This is because the probability of G2 occurring given the duplication algorithm and G1 is exactly 1.
What "duplication algorithm"? A quick PubMed search yields one estimate of 0.00115 / million years / lineage for gene duplication events in vertebrates. That's a lot lower probability than 1. As near as I can tell, you followed Dembski's approach quite straightforwardly in your original calculation. Correct me if I'm overstating the case, but it appears that you and your correspondents are recognizing a need to adjust the definition of CSI to better account for known evolutionary mechanisms. MathGrrl
Maybe you can identify CSI via triangulation of the following 3 factors. 1. Good code to functionality fit. 2. Identical code in another species that causes the identical function and 3. Those species are unrelated so that the code and function must have arisen separately (convergence). If those 3 factors are present, then we can point to a common designer and can be assured that the code is CSI. Collin
#3 Pav Here's a question I'd ask you to answer: What do you understand Bill Dembski's notion of "specification" to mean, and, could you provide an example of a "specification"? I am sure Mathgrrl will respond to this very well. But it relates directly to my point above. The ID community itself seems to be divided about what "specified" means. As Mathgrrl is asking the ID community to clarify what they mean by CSI, surely it is for you guys to say what you mean by "specification"? markf
MathGrrl: The use of the word "specification" is at issue in at least one of your "scenarios". How is it that you want to have a discussion about "specification", yet you cannot provide a simple description and example of a specification? Leave these putative scenarios to one side: Please answer the question. Give us your own sense of what you think specification is, and one of your own examples of what you think a specification is. PaV
A few refereces I posted on my blog:
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL
In the preceding and following paragraphs William Dembski makes it clear that biological specification is CSI- complex specified information. In the paper "The origin of biological information and the higher taxonomic categories", Stephen C. Meyer wrote:
Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information--that is, specified complexity from mere complexity. This review will use this term as well.
from Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):
[N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon's classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.
Here is a formal way of measuring functional information: Robert M. Hazen, Patrick L. Griffin, James M. Carothers, and Jack W. Szostak, "Functional information and the emergence of biocomplexity," Proceedings of the National Academy of Sciences, USA, Vol. 104:8574–8581 (May 15, 2007). See also: Jack W. Szostak, “Molecular messages,” Nature, Vol. 423:689 (June 12, 2003). Joseph
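For concreteness, the Hazen/Szostak measure Joseph cites, I(Ex) = -log2[F(Ex)] where F(Ex) is the fraction of all possible configurations achieving at least degree of function Ex, can be sketched as follows (Python). The toy scoring function and the target octamer are hypothetical stand-ins; in the paper, the degree of function is measured experimentally.

```python
# Sketch of functional information, I(Ex) = -log2 F(Ex) (Hazen et al. 2007),
# with a toy fitness function over short sequences. Exhaustive, so keep it small.
import math
from itertools import product

def functional_information(alphabet, length, degree, threshold):
    """Score every sequence; return -log2 of the fraction meeting the threshold."""
    total = functional = 0
    for seq in product(alphabet, repeat=length):
        total += 1
        if degree(seq) >= threshold:
            functional += 1
    return -math.log2(functional / total) if functional else float("inf")

# Hypothetical "function": number of positions matching a target octamer.
target = tuple("ACGTACGT")
score = lambda seq: sum(a == b for a, b in zip(seq, target))

print(functional_information("ACGT", 8, score, threshold=8))  # 16.0 bits (1 in 4^8)
```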
Perhaps the linguistic method of stylometry can be applied to DNA. After all, it has been applied to music and art. http://en.wikipedia.org/wiki/Stylometry#Methods For those who don't know what stylometry is, I'll explain. When there is a historical record of disputed authorship, linguists use this method to discover the author. Linguists have discovered that when you write, you leave a "word print" (like a finger print) unconsciously in your work. Even if you write in different styles, use different "voices" and even write in different languages, your word print can be identified statistically. http://en.wikipedia.org/wiki/Writeprint I wonder if similar methods could be employed to find "word prints" in DNA, and if so, maybe that could lead to an inference of CSI. The big obstacle is that for stylometry you have to have a reference that you know was written by a certain author. So if you wanted to find out who wrote one of the disputed Federalist Papers, you would have to compare it to known writings of Alexander Hamilton and James Madison. F. Mosteller and D. Wallace (1964). Inference and Disputed Authorship: The Federalist. Reading, MA: Addison-Wesley. Collin
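Collin's "word print" suggestion can at least be sketched. The snippet below (Python) compares k-mer frequency profiles of two DNA-like strings by cosine similarity; the profile and the similarity measure are our stand-ins for real stylometric statistics such as Burrows' Delta, and the sequences are invented, so treat this as a toy illustration of the idea rather than a validated method.

```python
# Toy "word print" for DNA-like strings: compare k-mer frequency profiles
# by cosine similarity (1.0 = identical profiles, 0.0 = no overlap).
import math
from collections import Counter

def kmer_profile(seq: str, k: int = 3) -> dict:
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def cosine(p: dict, q: dict) -> float:
    dot = sum(p[x] * q.get(x, 0.0) for x in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

a = "ACGTACGTGGTACCGGT" * 50   # hypothetical reference sequence
b = "ACGTACGTGGAACCGGT" * 50   # candidate with one base changed per repeat
print(cosine(kmer_profile(a), kmer_profile(b)))  # near 1.0: similar "prints"
```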
How was it determined that a gene duplication that leads to an additional protein is a blind watchmaker process? Joseph
Hi MathGrrl: I've received a couple of personal communications on the subject of my CSI calculations since we last corresponded, which may be of assistance. Some people with whom I corresponded asked me to clarify my calculations, so I wrote the following explanation:
The "x2" refers to the semiotic description. Let me put it another way, borrowing an example from the old joke about what dogs understand when their owners are talking: "Blah Blah Blah Blah Ginger Blah Blah" - except that in this case the "Blah" is not repetitive. In the original genome, it's a long random string, then the gene that gets duplicated, and then more random stuff. And the gene that gets duplicated is itself a random string. To make things easier to visualize, I imagined that the duplicated gene was right at the end. I wrote the random stuff as "!@#$%^" even though of course it's all A's G's, T's and C's. I wrote the gene itself as (AGTCGAGTTC), even though a real gene has about 100,000 bases (and of course it's random too). Thus after the gene duplication, the simplest semiotic description is not !@#$%^(AGTCGAGTTC)(AGTCGAGTTC), but !@#$%^(AGTCGAGTTC)x2, which is much more economical. My point is that because the semiotic description is scarcely any longer than the original, Phi_s(T) in Professor Dembski's formula should be about the same for both. On the other hand, P(T|H) is strikingly different, because the duplicated genome is 100,000 bases longer than the original, so because there are four possible bases at each site, the probability of the duplicated genome is lower by a factor of 4^100,000. Put the two together, and you get an odd result where Chi is very high for the duplicated genome but not for the original. I hope that helps explain where I'm coming from. Is there an error in my CSI calculation?
Here's a reply I got from physicist Rob Sheldon:
[I]f "x2" were independent of the remainder of the genome, it would be CSI. The point is that it isn't independent, it is a duplication. In that case, all that is new is the "please duplicate" bit. Suppose we have used PKZIP to calculate the information in the genome. What happens when we duplicate a gene and run it through PKZIP? The amount of information added is very small. On the other hand, the resources to duplicate genes, and the entropy is not small. So if signal-to-noise is related to CSI, I would say duplicating a gene adds an eensy-weensy bit of information, while greatly increasing the entropic noise, so SNR goes down, CSI/codon goes down, and the designer doesn't need to be invoked.
I couldn't quite understand everything in Rob's reply, but fortunately I received a message recently from another ID proponent, E.H.:
I believe Rob is basically saying the CSI is a product of the duplication algorithm. This is a trick that Dr. Shallit used throughout his article "debunking" CSI. He claimed to show a number of times that he could algorithmically produce CSI, even though the CSI was already contained in his algorithms. When calculating CSI we have to first determine the probability of a gene G occurring given the chance hypothesis. In the case of the duplicated gene G2 this probability is still 1/4^(3,000,000,000), the same as the single gene G1. This is because the probability of G2 occurring given the duplication algorithm and G1 is exactly 1. So, the duplication adds nothing to the probability of occurring by chance. That is why Rob says the problem is G2 is not independent of G1. For G2 to be independent of G1, P(G2|G1) must equal P(G2|~G1), or at least be fairly close. If anything, duplication actually decreases any CSI that may have been in the gene, since the description still grows in length.
If I read E.H. correctly, he maintains that my original calculation of CSI in the duplication case was incorrect, and that in fact, using Dembski's formula in Specification: The Pattern That Signifies Intelligence, gene duplication does not, after all, increase CSI. Any thoughts? vjtorley
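Rob Sheldon's PKZIP observation above is easy to try at home. Below is a rough sketch (Python) with zlib standing in for PKZIP's DEFLATE and a hypothetical 10,000-base gene; the gene is kept under zlib's 32 KB back-reference window so the compressor can actually see the duplicate. It illustrates his point that duplication adds only an eensy-weensy bit of compressed information; it does not settle the CSI question either way.

```python
# Does duplicating a "gene" add much compressible information? A toy test
# with zlib (DEFLATE, as in PKZIP). Gene kept under the 32 KB match window.
import random
import zlib

random.seed(1)
gene = "".join(random.choice("ACGT") for _ in range(10_000))  # hypothetical gene
rest = "".join(random.choice("ACGT") for _ in range(10_000))  # rest of the genome

single = (rest + gene).encode()
double = (rest + gene + gene).encode()   # gene duplication, nothing else changed

# The duplicated genome is 10,000 bases longer, yet compresses to nearly
# the same size: the second copy costs only back-reference tokens.
print(len(single), len(zlib.compress(single, 9)))
print(len(double), len(zlib.compress(double, 9)))
```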
All credit to Jonathan M and Denyse for allowing an anti-ID guest post. I look forward to seeing some instructions on how to do the calculations. I want to point out that vjtorley's comment is not the only one that illustrates disagreement about the concept of CSI within the ID community. For example, Gpuccio has in the past asserted that part of the definition of dFSCI is that the item in question should be scarcely compressible (see for example his comment here.) This is the complete opposite of Dembski's definition of CSI in Specification: The Pattern That Signifies Intelligence, where he defines CSI in terms of outcomes that are highly compressible (see pp 9-12). Gpuccio's response was admirably honest and straightforward - he doesn't agree with everything that William Dembski writes. But remember this is the most recent mathematical definition of CSI (as far as I know) by the most well-known theoretician in the field, which he explicitly says supersedes all previous definitions. So at the very least this must give pause to those who say the concept of CSI is so simple they cannot understand why we sceptics do not understand it and accuse us of "hyperscepticism". markf
What if the original gene was CSI? Perhaps it is a command: "Build protein X." If the plan was duplicated then presumably you would get two protein Xs. But CSI is a quality not a quantity. So I don't think that by duplicating a gene you are creating CSI, you are just copying it. If you copy a novel, have you created CSI? No, you've just copied it. Perhaps that is the same thing in DNA. If CSI is a quality not a quantity, then I do not know if it can be measured mathematically. I hope I'm wrong though because it would be easier to start publishing some very powerful papers if CSI were measurable mathematically. Collin
Collin,
I would ask for a clarification concerning the gene duplication. Does the gene duplication result in new function?
Gene duplication can increase the production of certain proteins, as described in my scenario. This increased production can have significant impact on subsequent chemical reactions. MathGrrl
PaV,
What do you understand Bill Dembski’s notion of “specification” to mean, and, could you provide an example of a “specification”?
I provided the specifications for each of my scenarios. If you don't think those are "good" specifications for some reason, please explain why. MathGrrl
I hesitate to comment because I'm not a mathematician. But I would ask for a clarification concerning the gene duplication. Does the gene duplication result in new function? I do know that in communication, repeating an idea adds nothing. For example, if I say, "I like the color red. I like the color red" the second sentence adds no new information. I suppose if I were speaking via radio and the signal was bad, that the duplication could help result in the message being correctly transmitted. Collin
MathGrrl: You ask: 2. Do you agree with [vjtorley's] conclusion that CSI can be generated by known evolutionary mechanisms (gene duplication, in this case)? As I read the quote of his you've provided, this does NOT appear to be his conclusion. You've misunderstood him. Here's a question I'd ask you to answer: What do you understand Bill Dembski's notion of "specification" to mean, and, could you provide an example of a "specification"? If you can't answer this question relatively well, then there is no way to really have a discussion with you, I'm afraid. PaV
Origins- CSI is about origins. If living organisms can arise from non-living matter via chance and necessity then CSI is moot, all evolutionary processes are blind watchmaker processes and ID is dead. If you are going to start with that which needs an explanation in the first place- ie living organisms- then you have already cheated. Joseph
For some time I’ve been trying to learn enough about CSI to be able to measure it objectively and to determine whether or not known evolutionary mechanisms are capable of generating it.
Blind watchmaker mechanisms, not "evolutionary" mechanisms. Ya see, that is part of the problem- equivocation. For all you know most "evolutionary" mechanisms are design mechanisms. So first we have to deal with the equivocation. Joseph
