
The paradox in calculating CSI numbers for 2000 coins


Having participated at UD for 8 years now, criticizing Darwinism and OOL over and over again for 8 years is like beating a dead horse for 8 years. We only dream up more clever, effective, and creative ways to beat the dead horse of Darwinism, but it’s still beating a dead horse. It’s amazing we still have a readership that enjoys seeing the debates play out, given that we know which side will win the debates about Darwin…

Given this fact, I’ve turned to some other questions that have been of interest to me and readers. One question that remains outstanding (and may not ever have an answer) is how much information is in an artifact. This may not be as easy to answer as you think. For example, if I take an uncompressed sound file that is 10 gigabits in size and subject it to various compression algorithms, I may come up with different results. One algorithm may come up with 5 gigabits, another 1 gigabit, and another 0.5 gigabits. What then is the size of the file? How many bits of information are in the file, given that we can represent it in a variety of ways, all with differing numbers of bits?
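
A minimal sketch of this compression ambiguity, assuming Python’s standard-library compressors stand in for the “various compression algorithms” (the data below is made up purely for illustration):

```python
import bz2
import lzma
import zlib

# Stand-in for an "uncompressed sound file": about 1 MB of a repetitive byte pattern.
# (Hypothetical data, chosen only to show that different codecs report different sizes.)
data = bytes(range(256)) * 4096

for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    size_bits = len(compress(data)) * 8
    print(name, size_bits, "bits")
```

Each algorithm reports a different bit count for the same underlying data, which is exactly the ambiguity the question above is pointing at.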

Now, let us see how this enigma plays out for CSI. CSI is traditionally a measure of improbability since improbability arguments are at the heart of ID. Improbability can be related to the Shannon metric for information.

Suppose I have four sets of fair coins, and each set contains 500 fair coins. Let us label the sets A, B, C, and D.

Suppose further that each set of coins is all heads. We assert then that CSI exists, and each set has 500 bits of CSI. So far so good, but the paradoxes will appear shortly.

If I asked, “What then is the total information content of the four sets of coins?”, one might rightly say:

The probability that all four sets of coins are heads is the probability of all 2000 coins being heads, i.e. 1 out of 2^2000, which corresponds to 2000 bits, and that is the amount of CSI collectively represented by all 4 sets of coins. The amount of CSI is 2000 bits.

But someone might come along and say,

Wait a minute, each set is a duplicate of the others, so we should count only 1 set as having the true amount of information. The information content is no greater than 500 bits; since the duplicate sets don’t count, there is no increase in information because of them. The amount of CSI is 500 bits.

So is the total amount of CSI for all 4 sets combined 2000 bits or 500 bits? I think the correct answer is 2000 bits, because CSI is a measure of improbability (Bill mentioned somewhere that he thought about using the term “Specified Improbability”). I’ve given the number I think is the correct answer, but what do the readers think is the correct answer? What is the number of bits of CSI?
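
For readers who want to see where the two candidate answers come from, here is a minimal sketch, assuming fair coins and treating the “duplicates don’t count” view as a description-length argument:

```python
from math import log2

n_coins = 2000

# Surprisal view: -log2 of the probability of one specific outcome of 2000 fair coins,
# computed per coin to avoid floating-point underflow of (1/2)**2000.
surprisal_bits = n_coins * -log2(1 / 2)   # 2000.0

# "Duplicates don't count" view: describe one 500-coin set and say "repeat it 4 times";
# the description is roughly 500 bits plus a few bits of overhead for the repeat count.
description_bits = 500

print(surprisal_bits, description_bits)
```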

I post this discussion because ID proponents at UD have some disagreement over the issue. I think these are interesting topics to discuss, but we could always go back to beating the dead horse of Darwinism.

NOTES:
HT Eric Anderson, who posed a very thought-provoking question that inspired this thread:

Comment on Nuances

If I measure the Shannon information* in one copy of War and Peace and then measure the Shannon information in another copy of War and Peace, do I now have twice as much information as I had?

To help clarify the issues at hand, I considered sets of coins instead of War and Peace. Readers are invited to weigh in. I think this is a topic that deserves consideration and rigor.

Comments
F/N: Sorry to be so late to the party. The key to the case is that the focal question is the origin of FSCO/I, or CSI more broadly. If one suspects copying, on what empirical basis? You have cited a case of repetitive regularity, so a possibility would be lawlike mechanical necessity. Say, double-headed coins which cannot show a T. If the system is in fact -- notice the empirical context implied by that term -- contingent [fair coins, say], capable of giving rise to diverse outcomes on similar starting conditions, then we are looking at chance or design to explain the specificity of the outcome. Just one block is already at the FSCO/I threshold, so we can safely say, design. Going on, if copying is suspected, that needs to be assessed on whether the system is contingent again, and obviously if the coins are fair (not double-headed) it is. In a contingent system with copying, we then face the copying mechanism, which itself is going to implicate FSCO/I. In the case of a designer duplicating his actions, that is already design. And so we see that the FSCO/I-based inference cannot be separated from its empirical context, which has been noted from the beginning. KF

kairosfocus
March 12, 2014 at 04:03 AM PDT
I like Eric's earlier position that we can only seek to calculate the complexity in CSI, not the specificity nor the information itself. To my mind, the "amount of information in an object" is a question that is completely misplaced if the object does not contain a pattern (an arrangement of matter) to be translated into a functional outcome by means of a protocol(s) within a system. A distinction is made between an object/outcome that is the product of CSI, and an object that contains CSI. - - - - - - - - - - - - - - - - - - OT: (just sharing) ...from the essay by Marcello Barbieri "Biosemiotics: A New Understanding of Biology"
Schrödinger’s prophecy

In 1944, Erwin Schrödinger wrote “What is Life?”, a little book that inspired generations of scientists and became a landmark in the history of molecular biology. There were two seminal ideas in that book: one was that the genetic material is like an aperiodic crystal, the other was that the chromosomes contain a code-script for the entire organism. The metaphor of the aperiodic crystal was used by Schrödinger to convey the idea that the atoms of the genetic material must be arranged in a unique pattern in every individual organism, an idea that later was referred to as biological specificity. The metaphor of the code-script was used to express the concept that there must be “a miniature code” in the hereditary substance, a code that Schrödinger compared to “a Morse code with many characters”, and that was supposed to carry “the highly complicated plan of development of the entire organism.” That was the very first time that the word code was associated to a biological structure and was given a biological function.

The existence of specificity and a code at the heart of life led Schrödinger to a third seminal conclusion, an idea that he expressed in the form of a prophecy: “Living matter, while not eluding the ‘laws of physics’ as established up to date, is likely to involve hitherto unknown ‘other laws of physics’, which, however, once they have been revealed, will form just an integral part of this science as the former”. Schrödinger regarded this prophecy as his greatest contribution to biology, indeed, he wrote that it was “my only motive for writing this book”, and yet that is the one idea that even according to his strongest supporters did not stand up to scrutiny. Some 30 years later, Gunther Stent gave up the struggle and concluded that “No ‘other laws of physics’ turned up along the way (Stent and Calendar 1978). Instead, the making and breaking of hydrogen bonds seems to be all there is to understanding the workings of the hereditary substance”.

Schrödinger’s prophecy seems to have been shipwrecked in a sea of hydrogen bonds, but in reality that is true only in a very superficial sense. The essence of the prophecy was about the existence of something fundamentally new, and that turned out to be true. As we have seen, life is based on organic information and organic meaning, and these are indeed new fundamental entities of Nature. Schrödinger invoked the existence of new laws rather than of new entities, but that was only a minor imperfection and should not have been allowed to obscure the substance of the prophecy.

There is, however, one thing that Schrödinger might not have appreciated in the answer that here has been given to the question “What is Life?”. Together with many other physicists, he believed that scientific truths must have beauty, and the answer “Life is artifact-making” might not be elegant enough to meet his criterion of truth. Luckily, there is a simple way out of this impasse because the word artifact-making maintains its meaning even when we drop all its letters but the first three. In this way, the statement that “Life is artifact-making” becomes “Life is art”, and that is a conclusion that even Schrödinger might have approved of.
Upright BiPed
November 23, 2013 at 01:46 PM PDT
To clarify, I support the notion that CSI cannot increase in a closed system. I don't feel that way, however, for open systems. I elaborate here: https://uncommondescent.com/computer-science/dawkins-weasel-vs-blind-search-simplified-illustration-of-no-free-lunch-theorems/#comment-480980

scordova
November 23, 2013 at 01:27 PM PDT
Sal:
If we cannot measure CSI it makes it challenging to say whether CSI can increase or decrease in a system.
Oh, perhaps it is a bit "challenging," as you say, in a few corner cases. However, in the vast majority of cases the fact that we can't precisely quantify the amount of information in an artifact does not mean that we cannot say whether information has increased or decreased. Yes, at the margin it might be hard to tell whether x has more information than y. If I write down "He likes bananas" on one piece of paper and "He likes apples" on another piece of paper, it might be pretty hard to say for certain which piece of paper contains more "information," without knowing a lot of context. We could calculate the Shannon metric for 'C' and determine which string has more carrying capacity, but that wouldn't tell us which string has more information.

However, there are myriad cases -- we could think of hundreds of them off the top of our heads -- in which it is quite clear that there is "more" information in x than y. For example, I have one page of War and Peace, you have two pages. What you have obviously contains more information. Half a dictionary vs. a whole dictionary; one section of a newspaper vs. several sections; one chromosome vs. the entire genome; ten amino acids of a particular protein in sequence vs. 100 amino acids of that protein; and on and on. There are many, many cases in which we can clearly determine that x has more or less information than y. The Shannon metric can be helpful in measuring the pipeline. We also use logic, experience, meaning, context, and so on. The fact that I can't come up with a precise mathematical number to represent the "quantity" of information in an artifact is, in nearly all cases, beside the point and does not mean that I can't draw some reasonable conclusions about the information contained in an artifact, or the relative amount of information contained in multiple artifacts.

Eric Anderson
November 23, 2013 at 01:15 PM PDT
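
A sketch of the carrying-capacity calculation Eric describes above, assuming a crude 27-symbol alphabet (26 letters plus a space); the function name here is only illustrative:

```python
from math import log2

def carrying_capacity_bits(s, alphabet_size=27):
    # Shannon-style capacity of the "pipeline": characters times bits per character.
    # It says nothing about meaning, only how many bits the string *could* hold.
    return len(s) * log2(alphabet_size)

print(carrying_capacity_bits("He likes bananas"))  # ~76 bits
print(carrying_capacity_bits("He likes apples"))   # ~71 bits
```

The longer string has more carrying capacity, but as the comment notes, that says nothing about which sentence carries more information in the ordinary sense.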
Mark Frank wrote: The point being that it is nonsense to talk about the CSI in an outcome. It depends on the target and the chance hypothesis you are assuming which underlies it. Dembski's own formula makes that clear. Your example makes the point rather nicely.
Actually Mark, even if for the sake of argument we had the right chance hypothesis, the paradox highlighted doesn't go away. I've said we can go to basic probability arguments. Whether the chance hypothesis assumed is correct is another story. It could be the wrong chance hypothesis; the assumption is falsifiable. For example, we can go back and examine the coins to see if they are fair, and we can look for how nature might make that arrangement spontaneously. Do the assumptions prove design? No, because we would then have to prove the assumptions. But the assumptions seem reasonable, and they make the design inference reasonable in the eyes of some. I know we don't agree, but I hope you've been enlightened as to why some of us find mindless OOL just plain hard to believe. As I said, if IDists are mistaken, it was an honest mistake. Sal

scordova
November 22, 2013 at 08:38 PM PDT
That is an interesting question. I think part of the challenge is that information is not, in my estimation, reducible to some quantifiable mathematical metric. Based on my review of the issues I would say that every attempt to reduce information to a precise mathematical quantity has failed, and will fail.
I can't tell you how much I appreciate that statement. I feel more comfortable defending ID from basic probability arguments and common sense. If we cannot measure CSI it makes it challenging to say whether CSI can increase or decrease in a system. I'm not posting this thread just to make trouble; I'm posting it because I've not been able to use information-oriented arguments as forcefully as other arguments.

I do agree with the claim that genetic algorithms can't do better on average than blind search. That is one consequence of the NFL theorems, but like many truths, there isn't necessarily just one avenue to arrive at the same conclusion. It is blatantly obvious in searching for passwords that a Darwinian algorithm won't perform better than other algorithms given the same amount of information about the problem. But maybe we can argue this without invoking information theories when simple probability arguments might suffice. I support the work of the Evolutionary Informatics lab at Baylor. But as far as the concept of CSI and its relation to ID goes, to the extent that it relates to simple probability, it works well; beyond that it becomes not so easy to work with.

We don't really have to answer how much CSI is in 2000 coins all heads. We can simply say that, on the assumption the coins are fair, the outcome is improbable -- 1 out of 2^2000 -- and it violates the expectation value by many standard deviations. We can make the design inference without reference to information theories. That said, I accept the original Explanatory Filter inasmuch as it was basically a probability-based argument.

Thanks for your comments, and thanks for the War and Peace comment too. That was very thought-provoking, and it highlights some of the irresolution to the question, "how much information is in an artifact?"

scordova
November 22, 2013 at 07:59 PM PDT
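
A quick numeric check of the "many standard deviations" remark above, assuming 2000 independent fair coins so that the head count is binomial:

```python
from math import sqrt

n, p = 2000, 0.5
expected_heads = n * p            # 1000
std_dev = sqrt(n * p * (1 - p))   # ~22.4

observed = 2000                   # all 2000 coins showing heads
z = (observed - expected_heads) / std_dev
print(z)                          # ~44.7 standard deviations above expectation
```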
Sal:
One question that remains outstanding (and may not ever have an answer) is how much information is in an artifact.
That is an interesting question. I think part of the challenge is that information is not, in my estimation, reducible to some quantifiable mathematical metric. Based on my review of the issues I would say that every attempt to reduce information to a precise mathematical quantity has failed, and will fail. The only thing the Shannon metric can do is give us a bit quantity calculation for the transmission of information, given certain conditions.

As I've indicated before (and, if I may, I gather that you might agree), one of the historical travesties for this debate is the fact that the Shannon metric has been called "Shannon information." It is not information. It doesn't measure information. It tells us precisely nothing about the quantity or quality of the information. The only thing it measures is the statistical amount of information, from a simple bit measurement, that could be in a given string; measuring the "pipeline," if you will, not what actually flows through the pipeline.

As a result, Shannon information is only a statistical surrogate for the 'C' part of CSI. It performs a useful function as a floor measurement of complexity when we are dealing with strings (like nucleotides in DNA, amino acids in proteins, etc.). It is useful to set a floor for what we might consider "complex," but beyond that it tells us nothing. The Shannon metric hasn't a clue whether the bits in the string constitute information or whether it is specified -- the 'S' and the 'I' parts of CSI. This is why I will continue to insist (unless someone can demonstrate the contrary) that one cannot mathematically measure CSI. You can measure C. And you can recognize the SI, but not because you can measure SI mathematically through some formula, rather because of meaning, function, goal-oriented content, purpose-driven operation, etc.

Eric Anderson
November 22, 2013 at 10:51 AM PDT
I believe the information content of a given message depends on both the message and the receiver of the message. For example, if Morse code is being received, the letter 't' has less information content than 'z', which does not occur as often, but only if the message is in English or a similar language and the person receiving it knows this. The number sequence 23,28,33,42,51 is, for most people who receive it, a sequence of somewhat random incrementing two-digit numbers and could have fairly large information content, unless you grew up in New York City, in which case you know it is the street numbers of the stops on the main north/south Manhattan subway line, and that the next number will be 59. For you, the message has low information content. I'm a lurker and have read many discussions of what information is contained in DNA etc. To me, the answer is: "It doesn't matter, because by any measure whatsoever it is far more information than natural processes can generate."

GBDixon
November 21, 2013 at 12:52 PM PDT
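
GBDixon's frequency point above can be put in rough numbers, assuming textbook English letter frequencies (the exact values vary by corpus):

```python
from math import log2

# Approximate English letter frequencies, assumed here purely for illustration.
freq = {"t": 0.091, "z": 0.0007}

for letter, p in freq.items():
    print(letter, round(-log2(p), 1), "bits of surprisal")
# 't' comes out around 3.5 bits, 'z' around 10.5 bits: the rarer symbol carries more
# surprisal, but only relative to a receiver who knows the language's statistics.
```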
Maybe... four sets of the same information is more (Kolmogorov) information...

HHHHHHHHHH => "ten heads"

HHHHHHHHHH, HHHHHHHHHH, HHHHHHHHHH, HHHHHHHHHH => "four sets of ten heads"

:-P

JGuy
November 20, 2013 at 10:27 PM PDT
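
JGuy's Kolmogorov intuition above can be checked crudely with an off-the-shelf compressor as a stand-in (a sketch only; true Kolmogorov complexity is uncomputable, and zlib is just a rough proxy):

```python
import zlib

one_set = b"H" * 500       # one set of 500 heads
four_sets = one_set * 4    # four duplicate sets, 2000 symbols in all

print(len(zlib.compress(one_set)))    # a handful of bytes
print(len(zlib.compress(four_sets)))  # barely larger than a single set
```

Repeating the same block four times adds almost nothing to the compressed description length, which is the "duplicates don't count" intuition.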
Sal, thanks for the HT. Just saw this thread so I'll need to digest it a bit before responding in more detail. One thing that jumps out immediately, however, is that we need to get our terms straight.
CSI is traditionally a measure of improbability since improbability arguments are at the heart of ID. Improbability can be related to the Shannon metric for information.
Almost, but not quite. The 'C' part of CSI is traditionally a measure of improbability. Let's not get the 'S' or the 'I' caught up in this Shannon metric.

Eric Anderson
November 20, 2013 at 04:40 PM PDT
[tossed for trolling]

Mung
November 20, 2013 at 04:38 PM PDT
[uninvited troll]

Mung
November 20, 2013 at 04:35 PM PDT
Maybe bits and bytes are a wrong way of looking at measurement. "We get away with the sloppy definition of "bit" in computer science (a binary "choice") only because we are measuring the "space" requirements needed for any program with that number of binary decisions. Bits measure averages, never specific choice commitments made with intent. The latter is the essence of algorithmic programming. Bit measurements are generic. They tell us nothing about which choice was made at each decision node. Bit measurements cannot tell us whether a program has a bug, or computes at all." http://www.tbiomed.com/content/2/1/29

johnp
November 20, 2013 at 03:21 PM PDT
Like Niwrad said, it can, using the same logic, be reduced down to 1 bit of information. Shouldn't a look at how these sequences are supposed to come about be just as important? Even though you said it wasn't flipping. Normal flipping of coins, for example, doesn't have a component in the process where repeats are expected. If considering the duplicate sequences in DNA, it may be designed that way or be the result of copying blocks of information. But if copied genes are possible, this is obviously not happening on a bit-by-bit basis.... So, then, it isn't as improbable, nor new information, on a bit-by-bit basis. In OOL... expecting a homochiral protein or *NA molecules is like the coin flipping... not copying blocks of infobits.

JGuy
November 20, 2013 at 01:23 PM PDT
Sal, this is not as irrelevant to Darwinism as you think.
I limit my use of CSI/NFL arguments to things like blind search for proteins, partly because of the paradoxes I'm highlighting in this discussion. I'm putting this paradox on the table because I'm concerned that what IDists think is an effective line of argumentation might be a little murkier than it appears at first glance. My line of argumentation is that evolution in the wild works against the evolution of Rube Goldberg coordinated complexity, not for it. Multicellular creatures are one example. NFL and CSI topics are difficult, particularly CSI Version 2. I prefer simpler lines of argument, particularly those raised by evolutionists themselves.
Your example makes the point rather nicely.
That means a lot, especially coming from you. :-)

scordova
November 20, 2013 at 12:10 PM PDT
Ok. If in an organism all amino acids are "L", this means that life is... "faked". In a sense, here "faked" implies design. So that single 1 bit of what I called "a posteriori information" of all "L" amino acids is an important message, because it somehow answers "Yes" to the 1-bit "Yes/No" question: "is the organism designed?". If instead we consider the "potential information", it is N, if the amino acids in the organism are N.

niwrad
November 20, 2013 at 11:21 AM PDT
Sal, this is not as irrelevant to Darwinism as you think. The answer will depend on:

a) what you define the target as, e.g. all heads, or all the same, or at least 1999 the same, and so on;

b) what assumptions you are making about how the coins got that way -- was one tossed and then some natural mechanism duplicated it 1999 times, or were 500 tossed and then some natural mechanism duplicated them three times, or was each individual coin tossed?

The point being that it is nonsense to talk about the CSI in an outcome. It depends on the target and the chance hypothesis you are assuming which underlies it. Dembski's own formula makes that clear. Your example makes the point rather nicely.

Mark Frank
November 20, 2013 at 11:12 AM PDT
What is the relevance of this thread to OOL? Consider the homochirality argument. It poses a similar set of questions, since the statistics of chirality are like the statistics of coin flips. "How much CSI is there in the homochirality of an organism?" I think IDists would prefer higher CSI numbers than just a few bits. :-)

scordova
November 20, 2013 at 10:39 AM PDT
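
A back-of-the-envelope version of the homochirality analogy above, assuming each chiral center is treated as an independent 50/50 "coin" (the usual simplification in these arguments; the protein length is hypothetical):

```python
from math import log2

residues = 300                      # hypothetical protein length, assumed for illustration
bits_per_center = -log2(1 / 2)      # each chiral center treated as a fair coin: 1 bit
print(residues * bits_per_center)   # 300.0 bits of improbability for one all-L chain
```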
The coins were fair; they just happen to all show heads. The coins weren't flipped; they were found that way in some box or on the floor.

scordova
November 20, 2013 at 10:30 AM PDT
P.S. If the other 1999 are all heads, this means all the coins are faked. And faked coins cannot provide information different from the single piece of information that... indeed they are faked. We arrive again at a similar conclusion as before. :)

niwrad
November 20, 2013 at 10:25 AM PDT
We should distinguish between "potential information" before the coin flipping and "a posteriori information" after the coin flipping. The "potential information" before the flipping is 2000 bits. The "a posteriori information" after the flipping could even be 1 bit. In fact, if all the coins are heads, one could say: "the first coin is heads, all the other 1999 are again duplicate heads and, as such, provide no additional information".

niwrad
November 20, 2013 at 10:05 AM PDT
