Uncommon Descent: Serving the Intelligent Design Community

Siding with Mathgrrl on a point, and offering an alternative to CSI v2.0


There are two versions of the metric for Bill Dembski’s CSI. One version can be traced to his book No Free Lunch published in 2002. Let us call that “CSI v1.0”.

Then in 2005 Bill published Specification: The Pattern That Signifies Intelligence, where he includes the identifier "v1.22", but perhaps it would be better to call the concepts in that paper CSI v2.0 since, like Windows 8, it has some radical differences from its predecessor and yields different results. Some end users of the concept of CSI prefer CSI v1.0 over v2.0.

It was very easy to estimate CSI numbers in version 1.0 and then argue later whether the subjective patterns used to deduce CSI were independent and not postdictive. Trying to calculate CSI in v2.0 is cumbersome, and I don't even try anymore. As a matter of practicality, when discussing the origin of life or biological evolution, ID-sympathetic arguments are framed in terms of improbability, not CSI v2.0. In contrast, calculating CSI v1.0 is a transparent transformation: start with the improbability and take the negative logarithm of the probability.

I = -log2(P)

In that respect, I think MathGrrl (whose real identity he revealed here) has scored a point with respect to questioning the ability to calculate CSI v2.0, especially when it would have been a piece of cake in CSI v1.0.

For example, take 500 coins, and suppose they are all heads. The CSI v1.0 score is 500 bits. The calculation is transparent and easy, and accords with how we calculate improbability. Try doing that with CSI v2.0 and justifying the calculation.
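
To show how mechanical the v1.0 bookkeeping is, here is a minimal Python sketch (my illustration; the function name csi_v1_bits is mine, not Dembski's) that turns an improbability into a bit score:

import math

def csi_v1_bits(probability):
    # CSI v1.0 score: the negative base-2 logarithm of the probability
    # of the observed configuration under the relevant chance hypothesis
    return -math.log2(probability)

p_all_heads = 0.5 ** 500           # 500 fair coins, all heads
print(csi_v1_bits(p_all_heads))    # 500.0 bits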

Similarly, with pre-specifications (specifications already known to humans, like the Champernowne sequence), if we found 500 coins in sequence that matched a Champernowne sequence, we could argue the CSI score is 500 bits as well. But try doing that calculation in CSI v2.0. For more complex situations, one might get different answers depending on who you are talking to, because CSI v2.0 depends on the UPB and things like the number of possible primitive subjective concepts in a person's mind.
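
A sketch of the same bookkeeping for a pre-specification, assuming the binary Champernowne sequence (the binary expansions of 1, 2, 3, ... concatenated) as the target; the helper name and the match test are mine, purely for illustration:

import math

def champernowne_binary(n_bits):
    # first n_bits of the binary Champernowne sequence:
    # the binary expansions of 1, 2, 3, ... concatenated (1 10 11 100 101 ...)
    s, k = "", 1
    while len(s) < n_bits:
        s += format(k, "b")
        k += 1
    return s[:n_bits]

target = champernowne_binary(500)   # the pre-specification, fixed in advance
observed = target                   # suppose the 500 coins match it exactly
if observed == target:
    # any single fully specified 500-flip sequence has probability 2^-500
    print(-math.log2(0.5 ** 500))   # 500.0 bits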

The motivation for CSI v2.0 was to try to account for the possibility of slapping on a pattern after the fact and calling something "designed". v2.0 was crafted to address the possibility that someone might see a sequence of physical objects (like coins) and argue that the patterns in evidence were designed because he sees some pattern in the coins familiar to him but to no one else. The problem is that everyone has different life experiences, and each will project his own subjective view of what constitutes a pattern. v2.0 tried to use some mathematics to create a threshold whereby one could infer, even if the recognized pattern was subjective and unique to the observer of a design, that chance would not be a likely explanation for this coincidence.

For example, if we saw a stream of bits which someone claims is generated by coin flips, but the bit stream corresponds to the Champernowne sequence, some will recognize the stream as designed and others will not. How then, given the subjective perceptions that each observer has, can the problem be resolved? There are methods suggested in v2.0 which in and of themselves are not inherently objectionable, but then v2.0 tries to quantify how likely the subjective perception is to arise out of chance, and it convolves that calculation with the probability of the objects emerging by chance. Hence we mix the probability of an observer concocting a pattern in his head by chance with the probability that an event or object happens by chance, and after some gyrations out pops a CSI v2.0 score. v1.0 does not involve such heavy calculations regarding the random chance that an observer formulates a pattern in his head, and thus is more tractable. So why the move from v1.0 to v2.0? The v1.0 approach has limitations which v2.0 does not. However, I recommend that when v1.0 is available, use v1.0!

The question of postdiction is an important one, but if I may offer an opinion: many designs in biology don't require the exhaustive rigor attempted in v2.0 to determine whether our design inferences are postdictive (the result of our imagination) or whether the designed artifacts themselves are inherently evidence against a chance hypothesis. This can be done using simpler mathematical arguments.

For example, if we saw 500 fair coins all heads, do we actually have to consider human subjectivity when looking at the pattern and concluding it is designed? No. Why? We can make an alternative mathematical argument: coins that are all heads are sufficiently inconsistent with the binomial distribution for randomly tossed coins that we can reject the chance hypothesis. And since the physics of fair coins rules out physical law as the cause of the configuration, we can then infer design. There is no need to delve into the question of subjective human specification to make the design inference in this case. CSI v2.0 is not needed to make the design inference, and CSI v1.0, which says we have 500 bits of CSI, is sufficient.
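
Here is a rough numerical sketch of that statistical argument, using the standard binomial mean and standard deviation (the numbers are illustrative, not a formal hypothesis test):

import math

n, k = 500, 500                     # 500 fair coins, 500 observed heads
mean = n * 0.5                      # expect 250 heads under the chance hypothesis
sd = math.sqrt(n * 0.5 * 0.5)       # standard deviation, about 11.18

print((k - mean) / sd)              # roughly 22.4 standard deviations above the mean

p_all_heads = math.comb(n, k) * 0.5 ** n
print(p_all_heads, -math.log2(p_all_heads))   # about 3e-151, i.e. 500 bits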

Where this method (v1.0 plus pure statistics) fails is in recognizing design in a sequence of coin flips that follows something like the Champernowne sequence. Here, how likely it is for humans to make the Champernowne sequence special in their minds becomes a serious question, and that probability is difficult to calculate. I suppose that is what motivated Jason Rosenhouse to argue that the sort of specifications used by ID proponents aren't useful for biology. But that is not completely true if the specifications used by ID proponents can be formulated without subjectivity (as I did in the example with the coins) 🙂

The downside of the alternative approach (using CSI v1.0 and pure statistics) is that it does not include the use of otherwise legitimate human subjective constructs (like the notion of a motor) in making design arguments. Some, like Michael Shermer or my friend Allen MacNeill, might argue that we are merely projecting our notions of design by saying something looks like a motor or a communication system or a computer, and that the perception of design owes more to our projection than to any inherent design. But the alternative approach I suggest is immune from this objection, even though it is far more limited in scope.

Of course I believe something is designed if it looks like a motor (the flagellum), a telescope (the eye), a microphone (the ear), a speaker (some species of bird can imitate an incredible range of sounds), a sonar system (bat and whale sonar), an electric field sensor (sharks), a magnetic field navigation system (monarch butterflies), etc. The alternative method I suggest will not detect design in these objects quite so easily, since pure statistics are hard pressed to describe the improbability of such features in biology, even though it is so apparent that these features are designed. CSI v2.0 was an ambitious attempt to cover these cases, but it came with substantial computational challenges to arrive at information estimates. I leave it to others to calculate CSI v2.0 for these cases.

Here is an example of using v1.0 in biology, regarding homochirality. Amino acids can be left- or right-handed. Physics and chemistry dictate that left-handed and right-handed amino acids arise mostly (not always) in equal amounts unless there is a specialized process (like living cells) that creates them. Stanley Miller's amino acid soup experiments created mixtures of left- and right-handed amino acids, a mixture we would call racemic, versus the homochiral variety (only left-handed) we find in biology.

Worse for the proponents of mindless origins of life, even homochiral amino acids will racemize spontaneously over time (some half-lives are on the order of hundreds of years), and they will deaminate. Further, when Sidney Fox tried to polymerize homochiral amino acids into protoproteins, they racemized due to the extreme heat and produced much non-chain material, and the chains he did create had few if any alpha peptide bonds. And in the unlikely event the amino acids do polymerize in a soup, the chains can undergo hydrolysis. These considerations are consistent with the familiar observation that when something is dead, it tends to remain dead and moves farther away from any chance of resuscitation over time.

I could go on and on, but the point is that we can provisionally say the binomial distribution I used for coins also applies to the homochirality in living creatures, and hence we can make the design inference and assert a biopolymer has at least -log2(1/2^N) = N bits of CSI v1.0 based on N stereoisomeric residues. One might try to calculate CSI v2.0 for this case, but, being lazy, I will stick to the CSI v1.0 calculation. Easier is sometimes better.
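
In code, the score under that coin-flip model is simply N bits; a minimal sketch (the function name and the 300-residue example are my own), computed in log space so large N does not underflow:

import math

def homochirality_bits(n_residues):
    # -log2((1/2)^N) = N: each residue is treated like a fair coin that
    # could have come out left- or right-handed under undirected chemistry
    return -n_residues * math.log2(0.5)

print(homochirality_bits(300))      # 300.0 bits for a 300-residue homochiral chain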

So how can the alternative approach (CSI v1.0 and pure statistics) detect design in something like the flagellum or the DNA encoding and decoding system? It cannot do so as comprehensively as CSI v2.0, but v1.0 can argue for design in the components. As I argued qualitatively in the article Coordinated Complexity – the key to refuting postdiction and single target objections, one can formulate observer-independent specifications (such as I did with the 500 coins being all heads) by appeal to pure statistics. I gave the example of how the FBI convicted cheaters of using false shuffles even though no formal specifications for design were asserted. They merely had to use common sense (which can be described mathematically as cross-correlation or autocorrelation) to detect the cheating.

Here is what I wrote:

The opponents of ID argue something along these lines: "take a deck of cards, randomly shuffle it, the probability of any given sequence occurring is 1 out of 52 factorial, or about 8×10^67. Improbable things happen all the time; it doesn't imply intelligent design."

In fact, I found one such Darwinist screed here:

Creationists and “Intelligent Design” theorists claim that the odds of life having evolved as it has on earth is so great that it could not possibly be random. Yes, the odds are astronomical, but only if you were trying to PREDICT IN ADVANCE how life would evolve.

http://answers.yahoo.com/question/index?qid=20071207060800AAqO3j2

Ah, but what if the cards dealt from one random shuffle are repeated by another shuffle? Would you suspect Intelligent Design? A case involving this is reported on the FBI website: House of Cards

In this case, a team of cheaters bribed a casino dealer to deal cards and then reshuffle them in the same order that they were previously dealt out (no easy shuffling feat!). They would arrive at the casino, play the cards the dealer dealt, and secretly record the sequence of cards dealt out. When the dealer re-shuffled the cards and dealt them out in the exact same sequence as the previous shuffle, the team of cheaters would know what cards they would be dealt, giving them a substantial advantage. Not an easy scam to pull off, but they got away with it for a long time.

The evidence of cheating was confirmed by videotape surveillance because the first random shuffle provided a specification to detect intelligent design of the next shuffle. The next shuffle was intelligently designed to preserve the order of the prior shuffle.
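
As a toy illustration of that kind of correlation test (a sketch of the common-sense check, not the FBI's actual procedure): compare two dealt sequences position by position. Two honest shuffles agree in about one position on average; a preserved order agrees in all 52.

import random

def matching_positions(deal_a, deal_b):
    # count positions where the two deals show the same card
    return sum(a == b for a, b in zip(deal_a, deal_b))

deck = list(range(52))
first_deal = random.sample(deck, 52)     # an honest shuffle
rigged_deal = list(first_deal)           # the bribed dealer preserves the order
honest_deal = random.sample(deck, 52)    # another honest shuffle

print(matching_positions(first_deal, rigged_deal))   # 52 matches: not credible by chance
print(matching_positions(first_deal, honest_deal))   # about 1 match expected by chance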

Biology is rich with self-specifying systems like the auto-correlatable sequence of cards in the example above. The simplest example is life's ability to make copies of itself through a process akin to Quine computing. Physics and chemistry make Quine systems possible, but simultaneously improbable. Computers, as a matter of principle, cannot exist if they have no degrees of freedom which permit high improbability in some of their constituent systems (like computer memory banks).

We can see that the correlation between a parent organism and its offspring is not the result of chance, and thus we can reject the chance hypothesis for that correlation. One might argue that though the offspring (the copy) is not the product of chance, the process of copying is the product of a mindless copy machine. True, but we can then estimate the probability of randomly implementing the particular Quine-computing algorithms that make it possible for life to act like a computerized copy machine. The act of a system making copies is not in and of itself spectacular (salt crystals do that), but the act of making improbable copies via an improbable copying machine? That is what is spectacular.
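
For readers unfamiliar with the term, a quine is a program whose output is its own source code; the two-line Python example below is offered only as an illustration of the kind of self-copying computation alluded to above, not as a model of biology.

s = 's = %r\nprint(s %% s)'
print(s % s)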

I further pointed out that biology is rich with systems that can be likened to login/password or lock-and-key systems. That is, the architecture of the system is such that the components are constrained to obey a certain pattern or else the system will fail. In that sense, the targets for individual components can be shown to be specified without having to calculate the chances the observer is randomly formulating subjective patterns onto the presumably designed object.

[Image: lock and key]

That is to say, even though there are infinitely many ways to make lock-and-key combinations, that does not imply that the emergence of a lock-and-key system is probable! Unfortunately, Darwinists will implicitly say, "there are an infinite number of ways to make life, therefore we can't use probability arguments", but they fail to see the error in their reasoning, as demonstrated with the lock-and-key analogy.
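
The point can be put numerically: what matters is not how many lock-and-key designs exist, but how improbable it is that a randomly chosen key fits a given lock. A small sketch under assumed parameters (6 pins, 10 depths per pin; the numbers are mine and purely illustrative):

import math

def key_match_bits(pins, depths_per_pin):
    # probability that a random key matches one given lock, regardless of
    # how many distinct lock-and-key designs are possible
    p_match = (1.0 / depths_per_pin) ** pins
    return -math.log2(p_match)

print(key_match_bits(6, 10))        # about 19.9 bits, i.e. a one-in-a-million match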

This simplified methodology using v1.0, though not capable of saying “the flagellum is a motor and therefore is designed”, is capable of asserting “individual components (like the flagellum assembly instructions) are improbable hence the flagellum is designed.”

But I will admit that invoking the login/password or lock-and-key metaphor is a step outside pure statistics, and making the design argument rigorous in those cases is a project for future study.

Acknowledgments:
Mathgrrl, though we’re opponents in this debate, he strikes me a decent guy

NOTES:
The fact that life makes copies motivated Nobel Laureate Eugene Wigner to hypothesize a biotonic law in physics. That was ultimately refuted. Life does not copy via a biotonic law but through computation (and the emergence of computation is not attributable to physical law in principle, just as software cannot be explained by hardware alone).

Comments
Yeah set theory, now that will help evos win their case. Too bad they don't have any evidence for their position so they have to focus on irrelevant BS.
Joe
May 19, 2013 at 06:33 PM PDT
correction: "Keep it up you're IDs best point man!franklin
May 19, 2013
May
05
May
19
19
2013
06:16 PM
6
06
16
PM
PDT
I will take on any evolutionist in a debate, especially one that involves evidence
I'd advise you to brush up on a few (well, pretty much everything) details first... like maybe set theory for starters. Your ignorance of the subject, as well as your refusal to admit your ignorance and stubborn refusal to correct that deficiency, has produced much laughter and jocularity on the interwebs. Keep it up your IDs best point man!
franklin
May 19, 2013 at 06:13 PM PDT
Where's CJYman?
Joe
May 19, 2013 at 05:31 PM PDT
Hi franklin, I will take on any evolutionist in a debate, especially one that involves evidence. :razz:
Joe
May 19, 2013 at 05:26 PM PDT
scordova @45:
No need for a cutoff, we can merely assert one hypothesis is more convincing than others.
We aren't talking about a design hypothesis vs., say, Darwinism here. We're talking about whether something is specified. The bacterial flagellum either is or isn't a specification. Or are you suggesting that the flagellum could be categorized as "kind of" specified, or "almost specified" or some other such scalable notion? One might be tempted to imagine a flagellum with additional parts that would allow it to, say, rotate faster. One might then think that this means such a flagellum has more specification. But such a flagellum would not be more specified, it would be more complex. Any way you try to describe those additional parts and their interaction with the existing parts and the improbability of those parts arising on their own would inexorably lead you back to the complexity side of the CSI equation. Regardless, if we take the position that something can be more or less of a specification, then -- by definition -- in order for the design inference to work we must have a cutoff.* Otherwise we can never be sure that we really have a specification and the critical response is quite simple: (i) show me your calculation of specification, and (ii) you haven't demonstrated that this particular item is specified enough. ----- * Just like we have a cutoff for complexity. With complexity we can readily see that there is a continuum, with things being more or less complex. Thus, we have to put in place a cutoff to avoid false positives, typically understood as the universal probability bound. A continuum of specification would require exactly the same kind of cutoff to properly detect design and avoid false positives.Eric Anderson
May 19, 2013 at 05:15 PM PDT
Eric @44,
"This being the case, when does A become specified enough to say that it is truly specified?"
I can only say that I think it helps to rule out chance. It also helps to rule out necessity. If both of these things can be ruled out, it strengthens the inference to design. Other methods would better help us understand what specificity is, and what content it has. I don't think there's mutual exclusivity here.
Chance Ratcliff
May 19, 2013 at 02:48 PM PDT
Eric @44,
"But I think the whole effort to quantify and calculate and measure specification is heading down the wrong path and will serve primarily to confuse rather than enlighten. Indeed, some of the critics of the design inference have wrongly (in my view) gone down this path and demanded an objective mathematical calculation of specification before they will admit that something is specified. It seems like the wrong way to approach things."
That may be the case, but in order to know a path is the wrong one, sometimes it has to be traversed a little. It's not my intention to cause any confusion, only to explore something that occurred to me when reading Sal's OP. There might be some relationship between entropy and design. If it turns out not to be the case, I'll still be satisfied with my attempt to understand why.
Chance Ratcliff
May 19, 2013 at 02:42 PM PDT
Footnote 2 for #46, If we must import a context for English letter frequencies in order to reduce uncertainty in a string, wouldn't this increase the likelihood that meaning is present, and decrease the likelihood of either chance or necessity?
Chance Ratcliff
May 19, 2013 at 02:39 PM PDT
Footnote to #46. How do we rule out chance? In a sequence of 1000 coin tosses, 100 coming up tails, there are about 536 bits of entropy. This doesn't necessarily mean specification, but it sure seems to eliminate chance, and that increases the likelihood of either design or necessity. Yes, no?
Chance Ratcliff
May 19, 2013 at 02:31 PM PDT
I recently had a similar thought, that the quality of conduct is often proportional to the strength of one’s argument.
why is it that Joe's name is the first one that popped into my head when I read this? Likely because he is the poster child for IDists and their arguments both here and across the interwebs!
franklin
May 19, 2013 at 02:31 PM PDT
Eric @42, I am suggesting a correlation between specificity and the properties of the output. However you are correct. My phrase, "complexity of the specification" is not appropriate in the context, and has caused confusion. Thanks for bringing it up. What I'm suggesting at #40 is that the text is more likely to be specified when the uncertainty is low. In the two strings you provided, the first one has less uncertainty than the second, and at the same time more specificity (meaning). These appear inversely proportional. Can we always expect that to be the case when English text is contrasted to random sequences? Yes, I think so. And as I said in the first paragraph of #40, I think we are ruling out chance explanations with low uncertainties, but that doesn't specifically rule out necessity, or causes with algorithmic simplicity. You are correct. I'm not calculating specification. I'm calculating the uncertainty in the output. And I kept referring it as specificity, but only because of the inference to its increased likelihood when uncertainty is reduced. This is an artifact of my original #3, where I associated low uncertainty with the presence of specification. In actuality, I think low uncertainty only rules out chance, it doesn't rule out necessity. Does that seem more reasonable and consistent? Thanks for giving the opportunity to try and be more concise. And I think you make good points about the elusiveness of quantification of specification. However, isn't "specification" in essence a description by which an object or phenomenon can be reproduced, and aren't we interested in how complex or simple this description is at some point when evaluating design?Chance Ratcliff
May 19, 2013 at 02:21 PM PDT
we would require not only a measuring unit, but also a rational cutoff point for “specified enough,” which essentially is a percentage. Is it 100% specified, 90% specified, 50% specified?
No need for a cutoff; we can merely assert one hypothesis is more convincing than others. That is probably true of the way we make decisions: we don't necessarily decide absolutely that something is right, but we do rank which things look like better propositions. So I don't try to say something is necessarily true because it is improbable, but something (like ID) is more believable as an explanation if it can be shown the alternative (like mindless origins) is improbable. FWIW, Dawkins still uses the chance hypothesis as an argument for the origin of life. The chances are very good, then, that he is wrong.

I should add, there is a dimension of this that sometimes is forgotten. Future random tossings of a system of 500 coins will tend to evolve it into a "racemic" mixture of heads and tails. Particularly in biology, not only is it improbable for homochirality to emerge, it is even more improbable that it will remain that way over time. That's sort of hard to capture in a specification, so in my example above, the specificity number of N bits for N amino acids was awfully generous to the chance hypothesis. It's much more than N bits.

For certain limited situations, such as with coins and homochirality, we can state specifications as having a certain number of bits. For example, the outcome of 500 coins conveys 500 bits of information. "All coins heads" is a 500-bit specification of a possible outcome. "1 coin tails, the rest heads" is a 491-bit specification, etc. The use of "bits" is just a measure of improbability, that's all. We may affix to it fancy names like "Shannon uncertainty", "Shannon entropy", or "Shannon information"; mathematically, it's a measure of improbability. We attach the name Shannon because he was able to use the math of improbability to come up with Shannon's theorem of communication, which tells us how much bandwidth we can theoretically pump through a wire. In the construction and defense of his theorem, he coined the notion of the "bit". It was an incredible intellectual achievement.
I still haven’t made up my mind about whether specification can be amenable to mathematical calculation. Perhaps in some rare cases it can.
Very rare cases indeed, but thankfully there are designs we can use this limited approach on. There are perhaps other ways to detect design, but that is outside the scope of my present research. I salute efforts to go beyond the methods I outlined in this thread.
scordova
May 19, 2013 at 02:20 PM PDT
On further comment. It may be helpful to think of it this way: One implication of a mathematical calculation to measure specification, as some people may be proposing, is that specification then becomes a scale. As a result, A can be more specified than B. This being the case, when does A become specified enough to say that it is truly specified? We can't measure specification (using whatever unit we can imagine) in terms of the universal probability bound. It can be measured only in terms of whether it is sufficiently specified. Therefore, we would require not only a measuring unit, but also a rational cutoff point for "specified enough," which essentially is a percentage. Is it 100% specified, 90% specified, 50% specified? And 50% of what? Some idealized, hypothetical specification? Is a Ferrari more specified than a Ford? Against what are we measuring and how could we even in principle make such a determination? Harking back to biology, is the bacterial flagellum specified? Sure, we all recognize that it is. We recognize it because we look at it functionally, logically, and experientially. In contrast, if we take the position that its specification can be measured and calculated, then we are, by definition, now forced to ask "But how much is it specified? And is it specified enough?" Maybe the bacterial flagellum is specified but it isn't specified enough to warrant a design inference? I still haven't made up my mind about whether specification can be amenable to mathematical calculation. Perhaps in some rare cases it can. But I think the whole effort to quantify and calculate and measure specification is heading down the wrong path and will serve primarily to confuse rather than enlighten. Indeed, some of the critics of the design inference have wrongly (in my view) gone down this path and demanded an objective mathematical calculation of specification before they will admit that something is specified. It seems like the wrong way to approach things.Eric Anderson
May 19, 2013 at 01:53 PM PDT
Correction to my #40: "both of which have more than 500 bits of entropy" -> "both of which have 500 or more bits of entropy" _______ Sal,
"What else really convinces me of design? The behavior of the Dariwnists — many of them rail, intimidate, abuse, demean, but never back up there claims with facts and coherent reasoning, just sophistry and misrepresentation and equivocation. That’s the conduct indicative of those that have no case."
I recently had a similar thought, that the quality of conduct is often proportional to the strength of one's argument.
Chance Ratcliff
May 19, 2013 at 01:37 PM PDT
Chance @40: Thanks for your thoughts.
There is less uncertainty in the former, if properties of English text are taken into account.
Any time we talk about uncertainty, we are talking about complexity. I think if you look back carefully at the additional rules of English you bring to the table to lessen the uncertainty you will see that what you are really calculating -- again -- is complexity, not specification. As to #37:
But if the complexity of the specification is greater than the UPB, then CSI = yes, to put it simplistically.
Again, I think you are mixing the two concepts up as though they were one. The UPB relates to complexity. If the UPB is exceeded then we have a certain level of complexity, but not necessarily CSI. We can have a 'C' way beyond the UPB, but the 'S' will still be missing unless it is found on its own merits. I'm not sure we're saying different things here, but just wanted to make sure.
Eric Anderson
May 19, 2013 at 01:29 PM PDT
Pure math, or any design detection methodology, will not detect all possible designs. In fact, our best design detection methods will only detect a very small subspace of all possible designs. It is fortuitous, dare I say Provident, we detect any design at all. But the fact that we have designs in biology which we can detect, I'd say that was by Design. :-)
scordova
May 19, 2013 at 12:48 PM PDT
Eric @35,
"...but the 500 bits of information you mention is a calculation of the complexity, not the specification. "
Yes I think you are correct here. On further reflection, I think that when uncertainty is low, chance is out of the picture, as in the case of zero tails out of 500 tosses, or 100 tails out of 1000 tosses, both of which have more than 500 bits of entropy. So here we can rule out chance by examining the uncertainty contained in a string, but we cannot rule out necessity by this same method. Some sequences with low uncertainty will have algorithmic simplicity.
"I think we can see the futility of this when we look at two sequences: tobeornottobethatisthequestion vs. brnottstinoisqotebeeootthuathe Exact same letters. Exact same Shannon “information.” The calculation of entropy/probability/surprise/whatever-we-want-to-call-it is exactly the same."
This is only true in a first-order approximation of English text, where letter frequencies are taken into account, instead of pairs or triplets. The first thing to note is that both of those strings, by comparison to a truly random sequence, exhibit a signal, a reduced uncertainty, because they will tend to correspond to English letter frequencies, such that 'e' will occur with around 0.13 probability and 'q' with negligible frequency. We can differentiate the strings from a Shannon perspective by moving to a second-order approximation of English, which takes into account the relative frequencies of letter pairs. In this case, the first string has a reduced uncertainty over the second. In essence, we could transmit the first message with fewer bits. As we move up to triplets, or word approximations, the uncertainty decreases more. However it should be noted that we're smuggling information in at the same time, because any approximation of English text is a context, which specifies probabilistic details about language.
"Yet the first is a specification; the second is not. We clearly recognize it as such. What is it about the first that allows us to determine that it is a specification? It isn’t because we’ve gone through some calculation and assessed the specification quantitatively. Rather, it is because there is recognizable meaning/function to the sequence."
Clearly there's an enigma with regard to intelligence, language, specification, design, etc. And while specification may not be amenable to precise mathematical definition, we can apply principles like I alluded to above to determine whether specification is more likely in the former or the latter strings. There is less uncertainty in the former, if properties of English text are taken into account.
"To be sure, I think it would be neat if we could somehow mathematically and quantitatively assess and calculate specification. Perhaps in some very narrow and rare instances we can. But I’m very skeptical that specification, as a general matter, is subject to mathematical quantification (unlike complexity, which oftentimes can be readily calculated). Most of the time specification is much more of a logical or practical or experiential assessment, than a mathematical one."
I tend to agree, and I'm certainly not suggesting that specification can be reduced to a mathematical formula; but that's not to say that math is not useful in exploring the properties of specification and its output. I think it's possible that language will turn out to be the more illuminating property of designed things.
Chance Ratcliff
May 19, 2013 at 12:22 PM PDT
I didn't think that mathgrrl was a "decent guy" when I read the post you linked. He admitted to resorting to dishonesty then accused all of us of being intellectually dishonest. He used a heavy dose of shaming techniques to make us ashamed of being ignorant and wicked "intelligent design creationists." I also think that he is being willfully ignorant about CSI being a useful concept even if it is not mathematically definable yet (just as entropy is a useful concept even when you don't have a mathematical rubric for it). I know that he was told that in the many debates on this website on CSI yet he failed to even bring that up in his post summarizing the debates. This was (imo) done purposefully and therefore dishonestly. So no, I don't think he's a decent guy.Collin
May 19, 2013 at 10:20 AM PDT
I believe, from an operational, practical standpoint, most of the designs we'll formally demonstrate as designed will be of the v1.0 variety; the rest will be intuitively discovered. For example, even today, people who reject ID treat much of the operation of an organism as a functioning entity and are doing reverse engineering. The only ones who are seriously missing out are Darwinists who insist things are junk.

v1.0 methods are very good for discovering grammar. Using correlation, meaning can be discovered in some cases. For example, we were able to decode the meaning of the genetic code. One only had to assume a design existed, and with some clever discovery of correlation, the meaning of the code was constructed. Here is a meaning table in biology that was unwittingly deduced by v1.0 methods long before v1.0 methods were codified by ID proponents: http://www.lucasbrouwers.nl/blog/wp-content/uploads/2010/04/genetic-code.jpg The question of the Designer was not necessary to assume there existed a design in the engineering sense. The sort of things that Sternberg discovered are on a whole other level. Those sorts of things might help us to elucidate and figure out how things connect together and really work. It could be like the Rosetta stone, or stones.

For things that v2.0 can confirm as designed, in practice, we already assume those things are designed. For example, we call eyes "eyes" even though the perception of what constitutes an eye is subjective; we don't need any formalisms to convince us that they fundamentally serve the purpose of helping the organism see. The formalisms might help demonstrate that the evolutionary path could not be one based on random mutation, and because of the No Free Lunch theorems and population genetics, neither could the evolutionary path be through a process of natural selection. But if one is committed to dismissing ID, no amount of formalism will convince them anyway, save a very few people, and of the people that changed sides, the formalisms had very little to do with their change of mind (i.e. Dean Kenyon, John Sanford, Michael Behe, etc.). Common sense was far more important.

I have studied the formalisms for my own benefit over the years, just to help make sure that the perception of design wasn't some accident of human imagination, and because I'm a doubting Thomas by nature. The examples I gave above were enough of a starting point for me, and how I concluded that the perception of design wasn't a misperception of human imagination.

What else really convinces me of design? The behavior of the Darwinists -- many of them rail, intimidate, abuse, demean, but never back up their claims with facts and coherent reasoning, just sophistry and misrepresentation and equivocation. That's the conduct indicative of those that have no case.
scordova
May 19, 2013 at 07:12 AM PDT
Eric @35, I'm about burned out for the evening, but I'll take a stab at this comment of yours:
"Usually (for the moment I’m willing to consider it may not be always; but usually) specification is a binary yes-no determination, not a sliding scale of amount of specification like we use for complexity. In other words, I’m not sure it is helpful or meaningful to say, in effect, “if the amount of specification [units] gets beyond a certain threshold then we’ll say we’re dealing with a specification.” Generally it either is or it isn’t a specification — yes or no."
But if the complexity of the specification is greater than the UPB, then CSI = yes, to put it simplistically. So the calculation needs to be in the context of an inequality. CSI is present when the specificity is greater than the UPB, or S > 500. That's our boolean yes/no. The presumed specificity S is calculated in terms of uncertainty, for certain specifications, like "0 tails in 500 trials" or "100 tails in 1000 trials", both of which result in a bit content greater than the UPB. I'm not sure it's ultimately correct, but that's the basic reasoning. See #3 (and onward) for my original attempt, which isn't technically correct but gives the gist of the reasoning.
Chance Ratcliff
May 18, 2013 at 10:43 PM PDT
What is it about the first that allows us to determine that it is a specification?
Our subjective perception based on experience, which is legitimate and which is somewhat beyond the scope of v1.0. v2.0 tries to address it, and, not surprisingly, it is substantially harder to pursue. It is a worthy pursuit, but perhaps for some the smaller steps are needed. One reason I posted this is that I'm preparing pedagogical materials for college students interested in ID. When Allen MacNeill at Cornell used v2.0 in his ID class in 2006, it just about crushed everyone!
scordova
May 18, 2013 at 10:22 PM PDT
Chance:
Eric, would you agree that 500 heads out of 500 trials is an objective specification having 500 bits of information?
500 heads in a row could be a specification in certain instances, but the 500 bits of information you mention is a calculation of the complexity, not the specification. (Incidentally, 500 heads can be written: "heads; repeat 499 times" or some other way that uses a much shorter description than a 500-length string. But that is a sidenote.)
If so, could “1 tails and 499 heads” also be an objective specification with 500 sequences matching the specification? In the latter case, the uncertainty is higher, and I was toying with the idea that we might quantify this with Shannon entropy.
I think we can see the futility of this when we look at two sequences: tobeornottobethatisthequestion vs. brnottstinoisqotebeeootthuathe Exact same letters. Exact same Shannon "information." The calculation of entropy/probability/surprise/whatever-we-want-to-call-it is exactly the same. Yet the first is a specification; the second is not. We clearly recognize it as such. What is it about the first that allows us to determine that it is a specification? It isn't because we've gone through some calculation and assessed the specification quantitatively. Rather, it is because there is recognizable meaning/function to the sequence. To be sure, I think it would be neat if we could somehow mathematically and quantitatively assess and calculate specification. Perhaps in some very narrow and rare instances we can. But I'm very skeptical that specification, as a general matter, is subject to mathematical quantification (unlike complexity, which oftentimes can be readily calculated). Most of the time specification is much more of a logical or practical or experiential assessment, than a mathematical one. One final thought: Usually (for the moment I'm willing to consider it may not be always; but usually) specification is a binary yes-no determination, not a sliding scale of amount of specification like we use for complexity. In other words, I'm not sure it is helpful or meaningful to say, in effect, "if the amount of specification [units] gets beyond a certain threshold then we'll say we're dealing with a specification." Generally it either is or it isn't a specification -- yes or no. Then if the answer is yes we use the complexity measurement (which is quantifiable and can be on a sliding scale) to determine whether the complexity side of the assessment has been satisfied.Eric Anderson
May 18, 2013 at 10:15 PM PDT
RE: #33, come to think of it, that error rate of 1.5 bits assumes 1 success in each of the search spaces. The error is likely not constant for differing numbers of trials. Stick a fork in me, I'm done for the night. Really. :P
Chance Ratcliff
May 18, 2013 at 10:09 PM PDT
"Otherwise stated, the number of terms in the general form would be equal to the Universal Probability Bound trials of 2^500, so you’d be up really late trying to solve that equation."
Lol! I think it would still simplify, but I'm too loopy to think about it much more. It's quite odd that my original #3 formulation approximates it with an error of 1.5 bits when N is greater than 100. I won't even try to account for that tonight.
Chance Ratcliff
May 18, 2013 at 09:55 PM PDT
Yep. That looks like it works. Apparently you're not as tired as I am; at least that's my version of the story.
Chance Ratcliff
May 18, 2013 at 09:45 PM PDT
Chance Ratcliff, by the way, I'm glad you're using the special form of Shannon's entropy. If you used the general form, that would involve writing out 2^500 terms... Otherwise stated, the number of terms in the general form would be equal to the Universal Probability Bound's 2^500 trials, so you'd be up really late trying to solve that equation. :-)
scordova
May 18, 2013 at 09:41 PM PDT
Bits = -log2[C(n,k)/n]
Actually I think you're missing a power. Since the probability of heads is the same as tails, the binomial probability reduces to: Bits = -log2[C(n,k)/2^n]
scordova
May 18, 2013 at 09:34 PM PDT
Yep, it doesn't hold. I should have checked, and I should have known I'm too burnt to think straight. Hasta mañana.
Chance Ratcliff
May 18, 2013 at 09:23 PM PDT
We could use -log2(P) where P is “the number of ways to succeed” divided by the total search space. Is that correct? I get just over 491 bits using that form for 1 tails and 499 heads.
Yes, exactly. With respect to the discrete math you applied using combinations, I think it may work for the case of just 1 coin, but look at your formulation and see how close it is to the form on this webpage for the binomial probability: http://www.regentsprep.org/Regents/math/algtrig/ATS7/BLesson.htm I'm getting sleepy, so take what I said with a shaker of salt.... Sal
scordova
May 18, 2013 at 09:11 PM PDT