Those who have been following the recently heated-up exchanges on the theory of intelligent design, and on the key design inference on tested, empirically reliable signs through the ID explanatory filter, will know that a key move in recent months was the meteoric rise of the mysterious internet persona MathGrrl (who is evidently NOT the Calculus professor who has long used the same handle).

MG, as the handle is abbreviated, is well known for “her” confident-manner assertion — now commonly stated as if it were established fact in the Darwin Zealot fever swamps backing the current cyberbullying tactics that have tried to hold my family hostage — that:

**~~without a rigorous mathematical definition and examples of how to calculate [CSI], the metric is literally meaningless. Without such a definition and examples, it isn’t possible even in principle to associate the term with a real world referent.~~**

As the strike-through emphasises, every one of these claims has long been exploded.

You doubt me?

Well, let us cut down the clip from the CSI Newsflash thread of April 18, 2011, which was again further discussed in a footnote thread of 10th May (H’mm, anniversary of the German Attack in France in 1940), which was again clipped yesterday at fair length.

( BREAK IN TRANSMISSION: BTW, antidotes to the intoxicating Darwin Zealot fever swamp *“MG dunit”* talking points were collected here — Graham, why did you ask the question but never stopped by to discuss the answer? And the “rigour” question was answered step by step at length here. In a nutshell, as the real MathGrrl will doubtless be able to tell you, the Calculus itself, historically, was founded on sound mathematical intuitive insights on limits and infinitesimals, leading to the warrant of astonishing insights and empirically warranted success, for 200 years. And when Math was finally advanced enough to provide an axiomatic basis — at the cost of the sanity of a mathematician or two [doff caps for a minute in memory of Cantor] — it became plain that such a basis was so difficult that it could not have been developed in C17. *Had there been an undue insistence on absolute rigour as opposed to reasonable warrant, the great breakthroughs of physics and other fields that crucially depended on the power of Calculus, would not have happened*. For real world work, what we need is reasonable warrant and empirical validation of models and metrics, so that we know them to be sufficiently reliable to be used. The design inference is backed up by the infinite monkeys analysis tracing to statistical thermodynamics, and is strongly empirically validated on billions of test cases, the whole Internet and the collection of libraries across the world being just a sample of the point that the only credibly known source for functionally specific complex information and associated organisation [FSCO/I] is design. )

After all, a bit of careful citation always helps:

_________________

>> 1 –> 10^120 ~ 2^398

**I = – log(p)** . . . eqn n2

**Chi = – log_{2}[10^120 · ϕS(T) · P(T|H)]** . . . eqn n1

**Chi = – log_{2}(2^398 · D2 · p)** . . . eqn n3

**Chi = I_{p} – (398 + K_{2})** . . . eqn n4

**So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, it is so deeply isolated that a chance-dominated process is maximally unlikely to find it; but of course intelligent agents routinely produce information beyond such a threshold.**
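The log-reduction above can be sketched numerically. A minimal illustration follows; the probability P(T|H) and the specification-resource count ϕS(T) used below are invented example values, and the function names are mine, not Dembski's:

```python
import math

# Sketch of the log-reduction: eqn n1 taken through logs to eqn n4.
# The input values in the usage lines are invented for illustration.

def chi_reduced(p_T_given_H, phi_S_T):
    """Eqn n4: Chi = Ip - (398 + K2), with Ip = -log2 P(T|H)
    and K2 = log2 phi_S(T)."""
    Ip = -math.log2(p_T_given_H)   # Hartley-Shannon information, eqn n2
    K2 = math.log2(phi_S_T)
    return Ip - (398 + K2)

def chi_direct(p_T_given_H, phi_S_T):
    """Eqn n1: Chi = -log2[10^120 * phi_S(T) * P(T|H)]."""
    return -(math.log2(10**120) + math.log2(phi_S_T)
             + math.log2(p_T_given_H))

# The two forms differ only by the 10^120 ~ 2^398 rounding
# (log2 of 10^120 is ~398.6, so the gap is under one bit):
print(chi_reduced(2**-500, 2**20))
print(chi_direct(2**-500, 2**20))
```

With a 500-bit-improbable target and 2^20 specification resources, the reduced form gives 82 bits beyond the bound, agreeing with the direct form to within the rounding in point 1 above.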

**Chi_500 = Ip*S – 500**, bits beyond the [solar system resources] threshold . . . eqn n5

**Chi_1000 = Ip*S – 1000**, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

**Chi_1024 = Ip*S – 1024**, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a
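A minimal sketch of eqns n5 and n6, where S is the specificity dummy variable (1 for a functionally specific configuration, else 0); the function names are my own illustration:

```python
# Sketch of eqns n5/n6: information in bits, times the specificity
# dummy variable S, minus the relevant search-resource threshold.

def chi_500(Ip, S=1):
    """Eqn n5: bits beyond the solar-system threshold."""
    return Ip * S - 500

def chi_1000(Ip, S=1):
    """Eqn n6: bits beyond the observable-cosmos threshold."""
    return Ip * S - 1000

# 143 ASCII characters at 7 bits each give 1001 bits, just past
# the 1,000-bit threshold of eqn n6:
print(chi_1000(143 * 7))   # 1 bit beyond
```

Note that a non-specific configuration (S = 0) can never go positive on these metrics, however long the string.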

[Where, if such a positive value is observed on a functionally specific configuration and intelligent action is *possible*, choice — i.e. design — would be the rational, best explanation on the sign observed: functionally specific, complex information.]

*We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f.* The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites.
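The per-site formula just quoted can be sketched in a few lines. This is a hedged illustration only: it reads the log as log2 (Fits are functional *bits*), and the toy alignment is invented, not from Durston et al:

```python
import math
from collections import Counter

# Sketch of the quoted per-site measure: log2(20) - H(Xf) at each
# aligned site, summed over sites. Toy alignment invented below.

def site_fits(column):
    """Functional information at one aligned site: log2(20) - H(Xf)."""
    n = len(column)
    H = -sum((c / n) * math.log2(c / n)
             for c in Counter(column).values())
    return math.log2(20) - H

def protein_fits(alignment):
    """Measured FSC: sum the site values over all aligned positions."""
    return sum(site_fits(col) for col in zip(*alignment))

toy = ["ACDA", "ACDG", "ACEA"]   # three aligned 4-residue sequences
print(round(protein_fits(toy), 2))
```

Fully conserved sites contribute the full log2(20) ≈ 4.32 bits each; variable sites contribute less, reflecting the looser functional constraint there.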

*The number of Fits quantifies the degree of algorithmic challenge,* **in terms of probability** [info and probability are closely related], *in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that* **there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space.** *In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space . . . .* [The Durston et al metric does not in itself provide an *explicit* threshold for degree of complexity. Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric:

**RecA:** 242 AA, 832 fits, Chi: 332 bits beyond

**SecY:** 342 AA, 688 fits, Chi: 188 bits beyond

**Corona S2:** 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7
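With S = 1 (functional specificity observed), the n7 results follow from eqn n5 as a simple subtraction of the 500-bit threshold from the published Fit values; a quick check:

```python
# The n7 results follow from eqn n5 with S = 1:
# Chi_500 = fits - 500 (bits beyond the solar-system threshold).

proteins = {
    "RecA":      (242, 832),
    "SecY":      (342, 688),
    "Corona S2": (445, 1285),
}

chi = {name: fits - 500 for name, (aa, fits) in proteins.items()}
for name, (aa, fits) in proteins.items():
    print(f"{name}: {aa} AA, {fits} fits, Chi: {chi[name]} bits beyond")
```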

_________________

So, there we have it folks:

I: Dembski’s CSI metric is closely related to standard and widely used work in information theory, starting with I = – log p

II: It is reducible, on taking the appropriate logs, to a measure of information beyond a threshold value

III: The threshold is reasonably set by referring to the accessible search resources of a relevant system, i.e. our solar system or the observed cosmos as a whole.

IV: Where, once an observed configuration — event E, per NFL — that bears or implies information is from a separately and “simply” describable narrow zone T that is *strongly unrepresentative* — that’s key — of the space of possible configurations, W, then

V: since the search applied is of a very small fraction of W, it is *unreasonable* to expect that chance can account for E in T, instead of the far more typical possibilities in W that have, in aggregate, overwhelming statistical weight.

(For instance the 10^57 or so atoms of our solar system will go through about 10^102 Planck-time Quantum states in the time since its founding on the usual timeline. 10^150 possibilities [500 bits worth of possibilities] is 48 orders of magnitude beyond that reach, where it takes 10^30 P-time states to execute the fastest chemical reactions. 1,000 bits worth of possibilities is 150 orders of magnitude beyond the 10^150 P-time Q-states of the about 10^80 atoms of our observed cosmos. When you are looking for needles in haystacks, you don’t expect to find them on relatively tiny and superficial searches.)
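The resource arithmetic in the parenthesis can be checked directly, taking the atom and state counts as stated in the text:

```python
import math

# Back-of-envelope check of the needle-in-haystack figures, using
# the Planck-time quantum-state counts as stated in the text.

solar_states  = 102   # log10 of ~10^102 P-time Q-states, solar system
cosmos_states = 150   # log10 of ~10^150 P-time Q-states, observed cosmos

log10_2_500  = 500 * math.log10(2)    # ~150.5: 500 bits of possibilities
log10_2_1000 = 1000 * math.log10(2)   # ~301: 1,000 bits of possibilities

print(int(log10_2_500 - solar_states))    # ~48 orders of magnitude beyond
print(int(log10_2_1000 - cosmos_states))  # ~151, the text's "150 orders"
```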

VI: Where also, in empirical investigations we observe that an aspect of an object, system, process or phenomenon that is controlled by mechanical necessity will show itself in low contingency. A dropped, heavy object falls reliably at g. We can make up a set of differential equations and model how events will play out on a given starting condition, i.e. we identify an empirically reliable natural law.

VII: By contrast, highly contingent outcomes — those that vary significantly on similar initial conditions — reliably trace to chance factors and/or choice; e.g. we may drop a fair die and it will tumble to a value essentially by chance. (This is in part an ostensive definition, by key example and family resemblance.) Or, I may choose to compose a text string, writing it this way or the next. Or, as the 1,000 coins in a string example above shows, coins may be strung by chance or by choice.

VIII: Choice and chance can be reliably empirically distinguished, as we routinely do in day to day life, decision-making, the court room, and fields of science like forensics. FSCO/I is one of the key signs for that and the Dembski-style CSI metric helps us quantify that, as was shown.

IX: Shown, based on a reasonable reduction from standard approaches, and shown by application to real world cases, including biologically relevant ones.

We can safely bet, though, that *you would not have known that this was done months ago — over and over again — in response to MG’s challenge, if you were going by the intoxicant fulminations billowing up from the fever swamps of the Darwin zealots*.

Let that be a guide to evaluating their credibility — and, since this was repeatedly drawn to their attention and just as repeatedly brushed aside in the haste to go on beating the even more intoxicating talking point drums, sadly, this also raises serious questions on the motives and attitudes of the chief ones responsible for those drumbeat talking points and for the fever swamps that give off the poisonous, burning strawman rhetorical fumes that make the talking points seem stronger than they are. (If that is offensive to you, try to understand: *this is coming from a man whose argument as summarised above has repeatedly been replied to by drumbeat dismissals without serious consideration, led on to the most outrageous abuses by the more extreme Darwin zealots (who were too often tolerated by host sites advocating alleged “uncensored commenting,” until it was too late), culminating now in a patent threat to his family by obviously unhinged bigots.*)

And, now also you know the most likely why of TWT’s attempt to hold my family hostage by making the mafioso style threat: *we know you, we know where you are and we know those you care about.* **END**