In a comment in the oldies thread on Sunday evening, Chance Ratcliff raised a very instructive case study of a search space that is well worth headlining. Let us adjust the calculation of the config space a bit, and reflect:
_____________
CR, 111, Oldies: >> An illustration might be of some help. For {{an 8-bit, 256 level}} gray scale image of 1024 [ –> 2^10] pixels squared, there’s a search space of {{2^20, 256-level elements giving 256^(2^20) = 4.26 * 10^2,525,222}} possible configurations. This [strike . . . ] provides a vast landscape of images over which it is possible to traverse. For example, there are a nearly inestimable amount of configurations that could yield a recognizable rendering of Albert Einstein’s face. Yet it follows that this can only be a tiny proportion of all possible configurations, because where there may be a million ways to render Einstein’s face in a recognizable way, there must also be numerous ways to render any one of billions of other faces. And then we must also be able to render objects which are not faces at all, but any one of numerous other abstract or specific representations — cars, planes, trains, bicycles, motorcycles, cows, horses, donkeys, cats, dogs, planets, galaxies, chairs, tables, houses, skyscrapers, grass, trees, grapes, etc. — each in their personal identities (Saturn or Jupiter, Nero or Spot, Triumph or Harley, Ford Model T or Maserati MC12, etc). The images of all imaginable objects must be able to occupy the same configuration space of 1024×1024 pixels and 256 shades of gray in different configurations which must each differ substantially from Einstein.
Such is likely the case with proteins. After considering the noisy, non-folding sequences, specific biological functions must narrowly exist in the overall search space, because the space must also account for every type of possible function. I don’t think it’s reasonable to presume that ubiquitous functions such as ATP synthase, the ribosome, and the various polymerases are not required for “other” types of life. We don’t know such organisms can exist. It seems likely that, as with images of Einstein that are specific to a singular man, these biological subsystems are specific to a singular phenomenon in the known universe.
Even so, objections to specific functional necessities notwithstanding, traversing the noise is practically prohibitive. Just as generating random permutations in a 1024^2 gray scale image shouldn’t practically be expected to produce a recognizable image of Einstein, neither should random mutations effectively stumble on a functional sequence of amino acids, regardless of whether such a sequence could contribute to function in a constrained and complex operational system. >>
_____________
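As a quick cross-check of the arithmetic in CR's example, here is a minimal Python sketch (the variable names are mine; it simply restates the 256^(2^20) figure via base-10 logarithms):

```python
# Cross-check the configuration count for a 1024 x 1024, 8-bit grayscale image.
from math import log10

pixels = 1024 * 1024              # 2^20 pixels
levels = 256                      # 8 bits, i.e. 256 gray levels per pixel

# Number of configurations = levels^pixels = 256^(2^20); work with its log10.
log10_configs = pixels * log10(levels)
mantissa = 10 ** (log10_configs % 1)     # leading digits
exponent = int(log10_configs)            # power of ten

print(f"~{mantissa:.2f} * 10^{exponent:,} configurations")
# -> roughly 4.3 * 10^2,525,222, in line with the figure quoted above
```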
Just so, the space of configs for 500 bits — think of a string of 500 coins in a row — must contain ALL English-language ASCII text sequences of 72 or so letters (at 7 bits per ASCII character, 500 bits spans about 71 characters). That is, just as the screen space has in it every conceivable grayscale image that can fit in 1024 * 1024 pixels, the 500-element binary string has in it every possible 72-or-so-letter English-language sequence.
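A similar back-of-envelope check for the 500-bit space, on the assumption of 7-bit ASCII characters:

```python
# Back-of-envelope figures for the 500-bit (500-coin) configuration space.
bits = 500

configs = 2 ** bits                        # exact count of distinct 500-bit strings
print(f"2^{bits} ~ {configs:.3e}")         # -> about 3.273e+150 configurations

# At 7 bits per ASCII character, 500 bits spans roughly 71-72 characters.
print(f"{bits / 7:.1f} ASCII characters")  # -> 71.4
```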
So, why don’t we use random walk scans to try to find what is out there, maybe with some sort of detector to lock in a lucky hit?
Because that would predictably and fruitlessly exhaust the atomic and temporal resources of the solar system: the number of gibberish or noise states so vastly outnumbers the functional ones that we confront the needle-in-the-haystack search on steroids.
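To put a rough number on that, here is a sketch using the solar-system figures developed below (about 10^57 atoms, 10^17 s, and 10^43 Planck-time states per second), estimating what fraction of a 500-bit configuration space those resources could even touch:

```python
# Compare solar-system search resources to the size of a 500-bit space.
from math import log10

atoms          = 1e57   # atom-level particles in the solar system
seconds        = 1e17   # seconds since the big bang
states_per_sec = 1e43   # Planck-time quantum states per atom per second

search_events = atoms * seconds * states_per_sec   # ~1e117 opportunities
space_log10   = 500 * log10(2)                     # log10(2^500) ~ 150.5

fraction_log10 = log10(search_events) - space_log10
print(f"at most ~1 in 10^{-fraction_log10:.0f} configurations ever sampled")
# -> about 1 in 10^34: the sample rounds down to effectively no search at all
```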

That is the context in which we see why a threshold of 500 bits of complexity for a functionally specific, organised entity allows us to identify FSCO/I as a reliable, inductively backed-up sign of design as the best causal explanation.
CR’s screen example allows us to understand how, even in cases where there is indeed a very large number of functional states, these will in turn be immersed in a vastly larger, unsearchable sea of gibberish or noise.
In simple terms, the number of snowy-screen states vastly overwhelms the admittedly large but far smaller number of functional ones. So much so that we intuitively know that the best way to compose screenfuls of text or pictures or drawings, etc., is by intelligent effort.
In that context, let us look again at Dembski’s 2005 equation and its log-reduced, simplified derivative for practical purposes. First, Dembski 2005 (as was clipped and discussed in my always linked briefing note):
8 –> A more sophisticated (though sometimes controversial) metric has of course been given by Dembski, in a 2005 paper, as follows:
define ϕS as . . . the number of patterns for which [agent] S’s semiotic description of them is at least as simple as S’s semiotic description of [a pattern or target zone] T. [26] . . . . where M is the number of semiotic agents [S’s] that within a context of inquiry might also be witnessing events and N is the number of opportunities for such events to happen . . . . [where also] computer scientist Seth Lloyd has shown that 10^120 constitutes the maximal number of bit operations that the known, observable universe could have performed throughout its entire multi-billion year history.[31] . . . [Then] for any context of inquiry in which S might be endeavoring to determine whether an event that conforms to a pattern T happened by chance, M·N will be bounded above by 10^120. We thus define the specified complexity [χ] of T given [chance hypothesis] H [in bits] . . . as [the negative base-2 log of the conditional probability P(T|H) multiplied by the number of similar cases ϕS(T) and also by the maximum number of binary search-events in our observed universe, 10^120]
χ = – log2[10^120 ·ϕS(T)·P(T|H)]. [–> a worked numerical sketch of this metric follows the excerpt, just after point 10 below]
9 –> When χ ≤ 1, the probability of the observed event in the target zone or a similar event is at least 1/2, so the available search resources of the observed cosmos across its estimated lifespan are in principle adequate for an observed event [E] in the target zone to credibly occur by chance. But if χ significantly exceeds 1 bit [i.e. it is past a threshold that, as shown below, ranges from about 400 bits to 500 bits — i.e. configuration spaces of order 10^120 to 10^150], that becomes increasingly implausible. The only credibly known and reliably observed cause for events of this last class is intelligently directed contingency, i.e. design. Given the scope of the Abel plausibility bound for our solar system, where available probabilistic resources
qΩs = 10^43 Planck-time quantum [not chemical — much, much slower] events per second x
10^17 s since the big bang x
10^57 atom-level particles in the solar system
Or, qΩs = 10^117 possible atomic-level events [–> and perhaps 10^87 “ionic reaction chemical time” events, of 10^-14 or so s],
. . . that is unsurprising.
10 –> Thus, we have a Chi-metric for CSI/FSCI . . . providing reasonable grounds for confidently inferring to design . . . [which relies] on finding a reasonable measure for the information in an item in a target or hot zone — aka island of function, where the zone is set off by observed function — and then comparing this to a reasonable threshold of being sufficiently complex that non-foresighted mechanisms (such as blind-watchmaker random walks from an initial start point, proceeding by trial and error) will be maximally unlikely to reach such a zone on the gamut of resources set by our observable cosmos.
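By way of illustration, here is a minimal Python sketch of the 2005 metric as given in point 8 above (not Dembski's own code, just a direct transcription of the equation, computed in log space so that a very small P(T|H) does not underflow; the example numbers are purely hypothetical):

```python
# Sketch of chi = -log2[10^120 * phiS(T) * P(T|H)], worked in log space.
from math import log2

LOG2_10_120 = 120 * log2(10)       # log2(10^120) ~ 398.6 bits

def chi_2005(log2_phi_s_t: float, log2_p_t_given_h: float) -> float:
    """Return chi in bits, given log2 of phiS(T) and log2 of P(T|H)."""
    return -(LOG2_10_120 + log2_phi_s_t + log2_p_t_given_h)

# Hypothetical example: phiS(T) = 10^5 comparably simple patterns, and
# P(T|H) = 2^-500 for a 500-bit target under a flat chance hypothesis.
print(chi_2005(log2(1e5), -500.0))   # -> about 84.8 bits of specified complexity
```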
This can be simplified, as is done in the IOSE, to give the Chi_500 metric:
χ = – log2[10^120 ·ϕS(T)·P(T|H)].
–> χ is “chi” and ϕ is “phi” . . . .
xxi: So, since 10^120 ~ 2^398, we may “boil down” the Dembski metric using some algebra — i.e. substituting and simplifying the three terms in order — as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):
Chi = – log2(2^398 * D2 * p), in bits, and where also D2 = ϕS(T) and Ip = – log2(p):
Chi = Ip – (398 + K2), where now: log2(D2) = K2
That is, chi is a metric of bits from a zone of interest, beyond a threshold of "sufficient complexity to not plausibly be the result of chance," (398 + K2). So,
(a) since (398 + K2) tends to at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! ( . . . if you want, 1,000 bits would be a limit for the observable cosmos)], and
(b) as we can define and introduce a dummy variable for specificity, S, where
(c) S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:
Chi = Ip*S – 500, in bits beyond a "complex enough" threshold
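As a minimal sketch of the simplified metric just derived (the function and argument names are mine, not from the IOSE):

```python
# Simplified Chi_500 metric: Chi = Ip * S - 500, in bits beyond the threshold.

def chi_500(ip_bits: float, s: int, threshold: int = 500) -> float:
    """Ip = information in bits for the observed configuration E;
    S = 1 if E is specific to an independently describable zone T, else 0."""
    return ip_bits * s - threshold

print(chi_500(ip_bits=1000, s=1))   # ->  500: past the threshold
print(chi_500(ip_bits=1000, s=0))   # -> -500: no specification, no design inference
```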
This simplified metric can in turn take advantage of the Durston Fits metric (which already reckons with observed function, so S = 1, and which takes redundancy into account):
xxii: So, we have some reason to suggest that if something, E, is based on specific information that can be described without simply quoting E, and that requires at least 500 bits to store, then the most reasonable explanation for the cause of E is that it was designed. The metric may be directly applied to biological cases:
Using Durston’s Fits values — functionally specific bits — from his Table 1 to quantify Ip, and accepting functionality on specific sequences as showing specificity, so S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:
RecA: 242 AA, 832 fits, Chi: 332 bits beyond
SecY: 342 AA, 688 fits, Chi: 188 bits beyond
Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
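For a cross-check, the same simplified metric applied to the three Fits values quoted from Durston's Table 1 (S = 1 in each case, since function is observed):

```python
# Reproduce the 'bits beyond the threshold' figures from Durston's Fits values.
durston_fits = {"RecA": 832, "SecY": 688, "Corona S2": 1285}

for protein, fits in durston_fits.items():
    chi = fits * 1 - 500           # Chi_500 with S = 1 (observed function)
    print(f"{protein}: {chi} bits beyond the 500-bit threshold")
# -> RecA: 332, SecY: 188, Corona S2: 785
```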
xxiii: And this raises the controversial possibility that biological examples such as DNA — which in a living cell stores far more than 500 bits of functionally specific information — may be designed to carry out particular functions in the cell and the wider organism.
All of this in turn gives a context for the significance of CR’s discussion. END