
ID Foundations: The design inference, warrant and “the” scientific method


It has been said that intelligent design (ID) is the view that it is possible to infer from empirical evidence that “certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.” This puts the design inference at the heart of intelligent design theory, and raises the questions of its degree of warrant and of its relationship to the scientific method — insofar as a “the” scientific method is possible.

Leading Intelligent Design researcher William Dembski has summarised the actual process of inference:

“Whenever explaining an event, we must choose from three competing modes of explanation. These are regularity [i.e., natural law], chance, and design.” When attempting to explain something, “regularities are always the first line of defense. If we can explain by means of a regularity, chance and design are automatically precluded. Similarly, chance is always the second line of defense. If we can’t explain by means of a regularity, but we can explain by means of chance, then design is automatically precluded. There is thus an order of priority to explanation. Within this order regularity has top priority, chance second, and design last”  . . . the Explanatory Filter “formalizes what we have been doing right along when we recognize intelligent agents.” [Cf. Peter Williams’ article, The Design Inference from Specified Complexity Defended by Scholars Outside the Intelligent Design Movement, A Critical Review, here. We should in particular note his observation: “Independent agreement among a diverse range of scholars with different worldviews as to the utility of CSI adds warrant to the premise that CSI is indeed a sound criterion of design detection. And since the question of whether the design hypothesis is true is more important than the question of whether it is scientific, such warrant therefore focuses attention on the disputed question of whether sufficient empirical evidence of CSI within nature exists to justify the design hypothesis.”]

The design inference process as described can be represented in a flow chart:

[Figure: the Explanatory Filter flowchart]

Fig. A: The Explanatory Filter and the inference to design, as applied to various aspects of an object, process or phenomenon, and in the context of the generic scientific method. (So, we first envision nature acting by low-contingency, law-like mechanical necessity, such as with F = m*a; think of a heavy unsupported object near the earth’s surface falling with initial acceleration g = 9.8 N/kg or so. That is the first default. Similarly, we may see high contingency knocking out the first default: under similar starting conditions, there is a broad range of possible outcomes. If things are highly contingent in this sense, the second default is CHANCE. That is only knocked out if an aspect of an object, situation or process exhibits, simultaneously: (i) high contingency, (ii) tight specificity of configuration relative to possible configurations of the same bits and pieces, and (iii) high complexity or information-carrying capacity, usually beyond 500 – 1,000 bits. In such a case, we have good reason to infer that the aspect of the object, process or phenomenon reflects design or . . . following the terms used by Plato some 2,350 years ago in The Laws, Bk X . . . the ART-ificial, or contrivance, rather than nature acting freely through undirected blind chance and/or mechanical necessity. [NB: This trichotomy across necessity and/or chance and/or the ART-ificial is so well established empirically that it needs little defense. Those who wish to suggest that there may be an unknown fourth possibility are the ones who first need to show us such before they are to be taken seriously. Where, too, it is obvious that the distinction between “nature” (= “chance and/or necessity”) and the ART-ificial is a reasonable and empirically grounded distinction: just look at a list of ingredients and nutrients on a food package label. The loaded rhetorical tactic of suggesting, implying or accusing that design theory really only puts up a religiously motivated way to inject the supernatural as the real alternative to the natural, fails. (Cf. the UD correctives 16 – 20 here, as well as 1 – 8 here.) And no: when, say, the averaging out of random molecular collisions with a wall gives rise to a steady average pressure, that is a case of an empirically reliable, law-like regularity emerging as a strong characteristic of such a process once sufficient numbers are involved . . . it is easy to have 10^20 molecules or more at work . . . so that, by the statistics of very large numbers, fluctuations are relatively small, unlike what we see with particles undergoing Brownian motion. That is, in effect, low-contingency mechanical necessity in the sense we are interested in, in action. So, for instance, we may derive for ideal gas particles the relationship P*V = n*R*T as a reliable law.])
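To make the flow of Fig. A concrete, here is a minimal sketch of the filter as a decision procedure (in Python, purely as an illustration; the function name, the boolean inputs and the default 500-bit threshold are my own labels for the judgments the caption describes, not an official ID tool):

```python
# A sketch of the Explanatory Filter of Fig. A (illustrative only).
# The caller supplies the three judgments for ONE aspect of an object or process.

def explanatory_filter(high_contingency: bool,
                       tightly_specified: bool,
                       bits: float,
                       threshold: float = 500.0) -> str:
    """Return the default causal explanation for one aspect.

    high_contingency  -- under similar starting conditions, outcomes vary widely
    tightly_specified -- the configuration sits in a narrow zone T of the space W
    bits              -- information-carrying capacity of the aspect, in bits
    threshold         -- complexity threshold (500 bits solar system, 1,000 cosmos)
    """
    if not high_contingency:
        return "necessity (law-like regularity)"        # first default
    if tightly_specified and bits >= threshold:
        return "design (intelligently directed configuration)"
    return "chance"                                      # second default

# A tumbling fair die: highly contingent, but neither tightly specified nor
# complex beyond the threshold, so the filter stops at "chance".
print(explanatory_filter(True, False, 2.6))
```

The point of the sketch is only the ordering: necessity is tried first, chance second, and design is inferred only when high contingency, tight specification and sufficient complexity hold at once.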

Explaining (and discussing) in steps:

1 –> As was noted in background remarks 1 and 2, we commonly observe signs and symbols, and infer on best explanation to underlying causes or meanings. In some cases, we assign causes to (a) natural regularities tracing to mechanical necessity [i.e. “law of nature”], in others to (b) chance, and in yet others we routinely assign cause to (c) intentional, intelligent, purposefully directed configuration, or design. Or, in leading ID researcher William Dembski’s words, (c) may be further defined in a way that shows what intentional, intelligent, purposeful agents do, and why it results in functional, specified complex organisation and associated information:

. . . (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.)

2 –> As an illustration, we may discuss a falling, tumbling die:

A pair of dice showing how 12 edges and 8 corners contribute to a flat random distribution of outcomes as they first fall under the mechanical necessity of gravity, then tumble and roll influenced by the surface they have fallen on. So, uncontrolled small differences make for maximum uncertainty as to final outcome. (Another way for chance to act is by quantum probability distributions such as tunnelling for alpha particles in a radioactive nucleus)

Heavy objects tend to fall under the law-like natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance.

But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

[Also, the die may be loaded, so that it will be biased or even of necessity will produce a desired outcome. Or, one may simply set a die to read as one wills.]

3 –> A key aspect of inference to cause is the significance of observed characteristic signs of causal factors, where we may summarise such observation and inference on sign as:

I observe one or more signs [in a pattern], and infer the signified object, on a warrant:

I: [si] –> O, on W

a –> Here, the connexion is a more or less causal or natural one, e.g. a pattern of deer tracks on the ground is an index, pointing to a deer.

b –> If the sign is not a sufficient condition of the signified, the inference is not certain and is defeatable; though it may be inductively strong. (E.g. someone may imitate deer tracks.)

c –> The warrant for an inference may in key cases require considerable background knowledge or cues from the context.

d –> The act of inference may also be implicit or even intuitive, and I may not be able to articulate it, but may still be quite well-warranted in trusting the inference; especially if it traces to senses I have good reason to accept are working well, and are acting in situations that I have no reason to believe will materially distort the inference.

4 –> Fig. A highlights the significance of contingency in assigning cause. If a given aspect of a phenomenon or object is such that under similar circumstances substantially the same outcome occurs, the best explanation of the outcome is a natural regularity tracing to mechanical necessity. The heavy object in 2 above reliably and observably falls at 9.8 m/s^2 near the earth’s surface. [Thence, via observations and measurements of the shape and size of the earth, and the distance to the moon, the theory of gravitation.]

5 –> When, however, under sufficiently similar circumstances, the outcomes vary considerably on different trials or cases, the phenomenon is highly contingent. If that contingency follows a statistical distribution and is not credibly directed, we assign it to chance. For instance, given eight corners and twelve edges plus highly non-linear behaviour, a standard, fair die that falls and tumbles exhibits sensitive dependence on initial and intervening conditions, and so settles to a reading pretty much by chance. Things that are similar to that — notice the use of “family resemblance” [i.e. analogy] — may confidently be seen as chance outcomes.

6 –> However, under some circumstances [e.g. a suspicious die], the highly contingent outcomes are credibly intentionally, intelligently and purposefully directed. Indeed:

a: When I type the text of this post by moving fingers and pressing successive keys on my PC’s keyboard,

b: I [a self, and arguably:  a self-moved designing, intentional, initiating agent and initial cause] successively

c: choose alphanumeric characters (according to the symbols and rules of a linguistic code)  towards the goal [a purpose, telos or “final” cause] of writing this post, giving effect to that choice by

d: using a keyboard etc, as organised mechanisms, ways and means to give a desired and particular functional form to the text string, through

e: a process that uses certain materials, energy sources, resources, facilities and forces of nature and technology  to achieve my goal.

. . . The result is complex, functional towards a goal, specific, information-rich, and beyond the credible reach of chance [the other source of high contingency] on the gamut of our observed cosmos across its credible lifespan.  In such cases, when we observe the result, on common sense, or on statistical hypothesis-testing, or other means, we habitually and reliably assign outcomes to design.

7 –> For further instance, we could look at a modern version of Galileo’s famous cathedral chandelier as pendulum experiment.

i: If we were to take several measures of the period for a given length of string and [small] arc of travel, we would see a strong tendency to have a specific period. This is by mechanical necessity.

ii: However, we would also notice a scattering of the results, which we assign to chance and usually handle by averaging out [and perhaps by plotting a frequency distribution].

iii: Also, if we were to fix the string length and gradually increase the arc, especially as the arc goes past about six degrees, we would notice that the initial law no longer holds. But Galileo — who should have been able to spot the effect — reported that the period was independent of arc. (This is a case of “cooking.” Similarly, had he dropped a musket ball, a feather and a cannon ball over the side of the tower in Pisa, the cannon ball should hit the ground just ahead of the musket ball, and of course considerably ahead of the feather.) [A brief numerical sketch of the small-angle law and its breakdown follows point iv below.]

iv: So, even in doing, reporting and analysing scientific experiments, we routinely infer to law, chance and design, on observed signs.
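As a numerical illustration of points i – iii, here is a short sketch. It assumes the standard small-angle pendulum law, T ≈ 2π√(L/g), adds simulated measurement scatter for point ii, and uses the textbook first-order correction factor (1 + θ²/16) to show the drift away from the simple law as the arc grows; the specific length, scatter and angles are arbitrary choices for illustration.

```python
import math
import random

g, L = 9.8, 1.0                        # m/s^2 and metres (illustrative values)
T0 = 2 * math.pi * math.sqrt(L / g)    # small-angle, "law-like" period

# (i)-(ii): repeated trials cluster around T0, with chance scatter we average out
trials = [T0 + random.gauss(0.0, 0.02) for _ in range(10)]
print("mean of 10 measured periods:", round(sum(trials) / len(trials), 3), "s")

# (iii): as the arc grows, the simple law breaks down; to first order the true
# period is roughly T0 * (1 + theta^2 / 16) for release angle theta (in radians)
for deg in (2, 6, 20, 45):
    theta = math.radians(deg)
    print(f"{deg:>2} deg arc: period ~ {T0 * (1 + theta**2 / 16):.4f} s")
```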

8 –> But, are there empirically reliable signs of design that can be studied scientifically, allowing us to confidently complete the explanatory filter process? Design theorists answer, yes, and one definition of design theory is, the science that studies signs of design. Thus, further following Peter Williams, we may suggest that:

. . . abstracted from the debate about whether or not ID is science, ID can be advanced as a single, logically valid syllogism:

(Premise 1)    Specified complexity reliably points to intelligent design.

(Premise 2)    At least one aspect of nature exhibits specified complexity.

(Conclusion) Therefore, at least one aspect of nature reliably points to intelligent design.

9 –> For instance, in the 1970s Wicken saw that organisation, order and randomness are very distinct, and have characteristic signs:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and note added.)]

10 –> We see here the idea-roots of a term commonly encountered at UD: functionally specific, complex information [FSCI]. (The onward restriction to digitally coded FSCI [dFSCI], as is seen in DNA — and as will feature below — should also be obvious. I add [11:01:18], based on b/g note 1: once we see digital code and a processing system, we are dealing with a communication system, and so the whole panoply of the code [a linguistic artifact], the message in the code as sent and as received, and the apparatus for encoding, transmitting, decoding and applying, all speak to a highly complex — indeed, irreducibly so — system of intentionally directed configuration, and to messages that [per habitual and reliable experience and association] reflect intents. From b/g note 2, the functional sequence complexity of such a coded data entity also bespeaks organisation as distinct from randomness and order; this can in principle and in practice be measured, and shows that beyond a reasonable threshold of complexity the coded message itself is an index-sign pointing to its nature as an artifact of design, thence to its designer as the best explanation of the design.)

11 –> Earlier, in reflecting on the distinctiveness of living cells, Orgel had observed:

In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.[Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189. Emphases added.]

12 –> This seems to be the first technical use of the term “specified complexity,” which is now one of the key — and somewhat controversial — terms of design theory.  As the second background note summarises, Dembski and others have quantified the term, and have constructed metrics that allow measurement and decision on whether or not the inference to design is well-warranted.

13 –> However, a much simpler rule of thumb metric can be developed, based on a common observation highlighted in points 11 – 12 of the same background note:

11 –> We can compose a simple metric . . . Where function is f, and takes values 1 or 0 [as in pass/fail], complexity threshold is c [1 if over 1,000 bits, 0 otherwise] and number of bits used is b, we can measure FSCI in functionally specific bits, as the simple product:

FX = f*c*b, in functionally specific bits

12 –> Actually, we commonly see such a measure; e.g. when we see that a document is, say, 197 kbits long, that means it is functional (as, say, an Open Office Writer document), is complex, and uses 197 kbits of storage space.
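As a worked illustration of the simple product metric just described, here is a minimal sketch (the helper name and the use of the 197-kbit document example from point 12 are mine; the 1,000-bit threshold is the one stated in point 11):

```python
def fsci_bits(functional: bool, bits_used: int, threshold: int = 1000) -> int:
    """Rule-of-thumb FSCI metric, FX = f * c * b, in functionally specific bits.

    f = 1 if the item is observed to function, else 0
    c = 1 if bits_used exceeds the complexity threshold, else 0
    b = number of bits of storage actually used
    """
    f = 1 if functional else 0
    c = 1 if bits_used > threshold else 0
    return f * c * bits_used

# The 197 kbit working Open Office Writer document from point 12:
print(fsci_bits(functional=True, bits_used=197_000))    # -> 197000
# The same storage filled with non-functional noise scores zero:
print(fsci_bits(functional=False, bits_used=197_000))   # -> 0
```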

13a –> Or [added Nov 19 2011] we may use logarithms to reduce and simplify the Dembski Chi metric of 2005, thusly:

>>1 –> 10^120 ~ 2^398

2 –> Using the Hartley-suggested log-probability measure of information [logs here to base 2]:

I = – log2(p) . . . eqn n2

3 –> So, we can re-present the Chi-metric:

[where, from Dembski, Specification 2005,

χ = – log2[10^120 · ϕS(T) · P(T|H)] . . . eqn n1]

Chi = – log2(2^398 * D2 * p) . . . eqn n3 [writing D2 for ϕS(T) and p for P(T|H)]

Chi = Ip – (398 + K2) . . . eqn n4 [where Ip = – log2(p) and K2 = log2(D2)]

4 –> That is, the Dembski CSI Chi-metric is a measure of information for samples from a target zone T, on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . .

6 –> So, the idea of the Dembski metric in the end — debates about peculiarities in derivation notwithstanding — is that if the Hartley-Shannon-derived information measure for items from a hot or target zone in a field of possibilities is beyond 398 – 500 or so bits, then it is so deeply isolated that a chance-dominated process is maximally unlikely to find it; but of course intelligent agents routinely produce information beyond such a threshold.

7 –> In addition, the only observed cause of information beyond such a threshold is the now proverbial intelligent semiotic agents.

8 –> Even at 398 bits, that makes sense, as the total number of Planck-time quantum states for the atoms of the solar system [most of which are in the Sun] since its formation does not exceed ~ 10^102, as Abel showed in his 2009 Universal Plausibility Metric paper. The search resources in our solar system just are not there.

9 –> So, we now clearly have a simple but fairly sound context to understand the Dembski result, conceptually and mathematically [cf. more details here]; tracing back to Orgel and onward to Shannon and Hartley . . . .

 

As in (using Chi_500 for VJT’s CSI_lite [UPDATE, July 3: and S for a dummy variable that is 1/0 according as the information in I is empirically or otherwise shown to be specific, i.e. from a narrow target zone T, strongly UNREPRESENTATIVE of the bulk of the distribution of possible configurations, W]):

Chi_500 = Ip*S – 500, bits beyond the [solar system resources] threshold . . . eqn n5

Chi_1000 = Ip*S – 1000, bits beyond the observable cosmos, 125 byte/143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip*S – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a

[UPDATE, July 3: So, if we have a string of 1,000 fair coins and toss at random, we will with overwhelming probability expect to get a near 50-50 distribution typical of the bulk of the 2^1,000 possibilities W. On the Chi_500 metric, I would be high, 1,000 bits, but S would be 0, so the value for Chi_500 would be – 500, i.e. well within the possibilities of chance. However, if we came to the same string later and saw that the coins somehow now had the bit pattern of the ASCII codes for the first 143 or so characters of this post, we would have excellent reason to infer that an intelligent designer, using choice contingency, had intelligently reconfigured the coins. That is because, using the same I = 1,000 capacity value, S is now 1, and so Chi_500 = 500 bits beyond the solar system threshold. If the 10^57 or so atoms of our solar system, for its lifespan, were to be converted into coins and tables etc., and tossed at an impossibly fast rate, it would be impossible to sample enough of the possibility space W to have confidence that something from so unrepresentative a zone T could reasonably be explained on chance. So, as long as an intelligent agent capable of choice is possible, choice — i.e. design — would be the rational, best explanation on the sign observed: functionally specific, complex information.]
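The July 3 coin-toss illustration can be put into a few lines of code (a sketch of eqn n5 only; S is the investigator-supplied dummy variable defined above, and the example values are the ones used in the update):

```python
def chi_500(I_bits: float, S: int) -> float:
    """Reduced Chi metric, eqn n5: bits beyond the solar-system threshold."""
    return I_bits * S - 500

# 1,000 fair coins tossed at random: capacity I = 1,000 bits, but no independent
# specification is in view, so S = 0 and chance remains the default explanation.
print(chi_500(1000, S=0))   # -> -500  (well within the reach of chance)

# The same coins later found spelling out ~143 ASCII characters of English text:
# now S = 1, and the configuration is 500 bits beyond the threshold.
print(chi_500(1000, S=1))   # -> 500   (design is the best explanation)
```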

 

10 –> Similarly, the work of Durston and colleagues, published in 2007, fits this same general framework . . . .

We use the formula log (20) – H(Xf) to calculate the functional information at a site specified by the variable Xf such that Xf corresponds to the aligned amino acids of each sequence with the same molecular function f. The measured FSC for the whole protein is then calculated as the summation of that for all aligned sites. The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space . . . .

11 –> So, Durston et al are targeting the same goal, but have chosen a different path from the start-point of the Shannon-Hartley log-probability metric for information. That is, they use Shannon’s H, the average information per symbol, and address shifts in it from a ground state to a functional state on investigation of protein family amino acid sequences. They also do not identify an explicit threshold for degree of complexity. [Added, Apr 18, from comment 11 below:] However, their information values can be integrated with the reduced Chi metric:

Using Durston’s Fits from his Table 1, in the Dembski-style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond . . . results n7

The two metrics are clearly consistent . . . (Think about the cumulative fits metric for the proteins for a cell . . . )

In short, one may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage-unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained].>>
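The three worked results in n7 can be reproduced directly, as a quick check (a sketch; the Fits values are those quoted from Durston et al’s Table 1 above, and 500 bits is the threshold chosen in the text):

```python
# Dembski-style "bits beyond the threshold", using Durston et al's Fits values
PROTEIN_FITS = {"RecA": 832, "SecY": 688, "Corona S2": 1285}   # from Table 1
THRESHOLD = 500  # bits, the solar-system threshold used above

for name, fits in PROTEIN_FITS.items():
    print(f"{name}: Chi = {fits - THRESHOLD} bits beyond the threshold")
# RecA: 332, SecY: 188, Corona S2: 785 -- matching results n7
```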

13b –> Some [GB et al] have latterly tried to discredit the idea of a dummy variable in a metric, as a question-begging a priori used to give us the result we “want.”  Accordingly, in correction, let us consider:

1 –> The first thing is: why is S = 0 the default? ANS: Simple: that is the value which says we have no good warrant, no good objective reason, to infer that anything more than chance and necessity, acting on matter in space across time, is a serious candidate to be at work.

2 –> In the case of Pinatubo [a well-known volcano], that is tantamount to saying that, however complex the volcano edifice may be, its history can be explained on its being a giant-sized, aperiodic relaxation oscillator that tends to go in cycles from quiescence to explosive eruption, depending on charging up, breaking through, erupting, discharging and re-blocking; in turn driven by underlying plate tectonics. As SA just said: S = 0 means it’s a volcano!

3 –> In short, we are looking at an exercise in doing science, per the issue of scientific warrant on empirically based inference to best explanation . . . .

5 –> But as was repeatedly laid out with examples, there is another class of known causal factors capable of explaining highly contingent outcomes that we do not have a good reason to expect on blind chance and mechanical necessity, thanks to the issue of the needle in the haystack.

6 –> Namely, a cause as familiar as the one that best explains the complex, specified information — significant quantities of contextually responsive text in English, coded on the ASCII scheme — in this thread: intelligence, working by knowledge and skill, and leaving characteristic signs of art behind.

7 –> Notice how we come to this: we see complexity, measured by the scope of possible configurations, and we see objectively, independently definable specificity, indicated by descriptors that lock down the set of possible or observed events E to a narrow zone T within the large config space W, such that a blind search process based on chance plus necessity will only sample so small a fraction that it is maximally implausible for it to hit on a zone like T. Indeed, per the needle-in-the-haystack or infinite-monkeys type of analysis, such a hit is credibly unobservable.

8 –> Under those circumstances, once we see that we are credibly in a zone T, by observing an E that fits in a T, the best explanation is the known, routinely observed cause of such events, intelligence acting by choice contingency, AKA design.

9 –> In terms of the Chi_500 expression . . .

a: I is a measure of the size of config space, e.g. 1 bit corresponds to two possibilities, 2 bits to 4, and n bits to 2^n so that 500 bits corresponds to 3 * 10^150 possibilities and 1,000 to 1.07*10^301.

b: 500 is a threshold, whereby the 10^57 atoms of our solar system could in 10^17 s carry out 10^102 Planck-time quantum states, giving an upper limit to the scope of search; where also the fastest chemical reactions take up about 10^30 Planck-time quantum states each.

c: In familiar terms, 10^102 possibilities from 10^150 is 1 in 10^48, or about a one-straw sample of a cubical haystack about 3 1/2 light days across. An entire solar system could lurk in it as “atypical,” but that whole solar system would be so isolated that — per well known and utterly uncontroversial sampling theory results — it is utterly implausible that any blind sample of that scope would pick up anything but straw; straw being the overwhelming bulk of the distribution. (The arithmetic behind these figures is sketched at the end of point 9, just below.)

d: In short not even a solar system in the haystack would be credibly findable on blind chance plus mechanical necessity.

e: But, routinely, we find many things that are like that, e.g. posts in this thread. What explains these is that the “search” in these cases is NOT blind, it is intelligent.

f: S gives the criterion that allows us to see that we are in this needle in the haystack type situation, on whatever reasonable grounds can be presented for a particular case, noting again that the default is that S = 0, i.e. unless we have positive reason to infer needle in haystack challenge, we default to explicable on chance plus necessity.

g: What gives us the objective ability to set S = 1? ANS: Several possibilities, but the most relevant one is that we see a case of functional specificity as a means of giving an independent, narrowing description of the set T of possible E’s.

h: Functional specificity is particularly easy to see, as when something is specific in order to function, it is similar to the key that fits the lock and opens it. That is, specific function is contextual, integrative and tightly restricted. Not any key would do to open a given lock, and if fairly small perturbations happen, the key will be useless.

i: The same obtains for parts for, say, a car, or even strings of characters in a post in this thread, or, notoriously, computer code. (There is an infamous case where NASA had to destroy a rocket shortly after launch because of a coding error, reportedly a matter of a single wrong punctuation character.)

j: In short, the sort of reason why S = 1 in a given case is not hard to see, save if you have an a priori commitment that makes it hard for you to accept this obvious, easily observed and quite testable — just see what perturbing the functional state enough to overwhelm error correcting redundancies or tolerances would do — fact of life. This is actually a commonplace.

k: So, we can now pull together the force of the Chi_500 expression:

i] If we find ourselves in a practical cosmos of 10^57 atoms — our solar system . . . check,

ii] where also, we see that something has an index of being highly contingent I, a measure of information-storing or carrying capacity,

iii] where we may provide a reasonable value for this in bits,

iv] and as well, we can identify that the observed outcome is from a narrow, independently describable scope T within a much larger configuration space set by I, i.e. W.

v] then we may infer that E is, or is not, best explained on design, according as I (with S = 1) is greater than, or less than, the 500-bit threshold.
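The figures in points a – c (and the 500-bit decision rule in k) can be checked with a few lines (a sketch; the ~10^102 Planck-time quantum state bound for the solar system is taken as given from Abel, as quoted above):

```python
from math import log10

# (a) configuration-space sizes implied by I bits: W = 2^I
for bits in (1, 2, 500, 1000):
    print(f"{bits} bits -> 2^{bits} ~ 10^{bits * log10(2):.1f} possibilities")
# 500 bits ~ 10^150.5 (about 3*10^150); 1,000 bits ~ 10^301 (about 1.07*10^301)

# (b)-(c) taking the quoted ~10^102 Planck-time quantum states as the upper
# bound on solar-system "search", and rounding W to 10^150 as in point a,
# the fraction of the 500-bit space that could ever be sampled is:
print(f"fraction sampled ~ 10^{102 - 150}")   # 10^-48, the "one straw" ratio of point c
```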

10 –> So, we have a metric that is reasonable and is rooted in the same sort of considerations that ground the statistical form of the second law of thermodynamics.

11 –> Accordingly, we have good reason to see that claimed violations will consistently have the fate of perpetual motion machines: they may be plausible to the uninstructed, but predictably will consistently fail to deliver the claimed goods.

12 –> For instance, Genetic Algorithms consistently START from within a zone (“island”) of function T, where the so-called fitness function then allows for incremental improvements along a nice trend to some peak.

13 –> Similarly, something like the canali on Mars, had they been accurately portrayed, would indeed have been a demonstration of design. However, these were not actual pictures of the surface of Mars but drawings of what observers THOUGHT they saw. They were indeed designed, but they were an artifact of erroneous observations.

14 –> Latterly, the so-called Mars face, from the outset, was suspected to be best explained as an artifact of a low-resolution imaging system, and so a high resolution test was carried out, several times. The consistent result, is that the image is indeed an artifact. [Notice, since it was explicable on chance plus necessity, S was defaulted to 0.]

15 –> Mt Pinatubo is indeed complex, and one could do a sophisticated lidar mapping and radar sounding and seismic sounding to produce the sort of 3-D models routinely used here with our volcano, but the structured model of a mesh of nodes and arcs, is entirely within the reach of chance plus necessity, the default. There is no good reason to infer that we should move away from the default.

16 –> If there were good evidence on observation that chance and necessity on the gamut of our solar system could explain origin of the 10 – 100 million bits of info required to account for major body plans, dozens of times over, there would be no design theory movement. (Creationism would still exist, but that is because it works on different grounds.)

17 –> If there were good evidence on observation that chance and necessity on the gamut of our observed cosmos could account for the functionally specific complex organisation and associated information  for the origin of cell based life, there would be no design theory movement. (Creationism would still exist, but that is because it works on different grounds.)

18 –> But instead, we have excellent, empirically based reason to infer that the best explanation for the FSCO/I in body plans, including the first, is design.

13c –> So, we have a more sophisticated metric derived from Dembski’s Chi metric, that does much the same as the simple product metric, and is readily applied to actual biological cases.

14 –> The 1,000-bit information storage capacity threshold can be rationalised:

The number of possible configurations specified by 1,000 yes/no decisions, or 1,000 bits, is ~ 1.07 * 10^301; i.e. “roughly” 1 followed by 301 zeros. While, the ~ 10^80 atoms of the observed universe, changing state as fast as is reasonable [the Planck time, i.e. every 5.39 *10^-44 s], for its estimated lifespan — about fifty million times as long as the 13.7 billion years that are said to have elapsed since the big bang — would only come up to about 10^150 states. Since 10^301 is ten times the square of this number, if the whole universe were to be viewed as a search engine, working for its entire lifetime, it could not scan through as much as 1 in 10^150 of the possible configurations for just 1,000 bits. That is, astonishingly, our “search” rounds down very nicely to zero: effectively no “search.” [NB: 1,000 bits is routinely exceeded by the functionally specific information in relevant objects or features, but even so low a threshold is beyond the credible random search capacity of our cosmos, if it is not intelligently directed or constrained. That is, the pivotal issue is not incremental hill-climbing to optimal performance by natural selection among competing populations with already functional body forms. Such already begs the question of the need to first get to the shorelines of an island of specific function in the midst of an astronomically large sea of non-functional configurations; on forces of random chance plus blind mechanical necessity only. Cf. Abel on the Universal Plausibility Bound, here.]
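The numbers in point 14 can be checked with a short sketch (the 10^80 atom count, the Planck time of about 5.39*10^-44 s, and the lifespan of roughly fifty million times 13.7 billion years are the figures quoted in the paragraph above; the year-to-second conversion is the usual one):

```python
from math import log10

# Configuration count for 1,000 bits
W = 2 ** 1000
print(f"2^1000 ~ {W:.2e}")                       # ~ 1.07e+301

# Rough upper bound on states the observed universe could run through
atoms       = 1e80                                # quoted atom count
planck_time = 5.39e-44                            # seconds per state change (quoted)
lifespan_s  = 50e6 * 13.7e9 * 3.156e7             # fifty million x 13.7 Gyr, in seconds
states = atoms * (lifespan_s / planck_time)
print(f"max states ~ 10^{log10(states):.0f}")     # of order 10^149, i.e. ~10^150

# Fraction of the 1,000-bit configuration space that could even be scanned
print(f"fraction ~ 10^{log10(states) - 1000 * log10(2):.0f}")   # far below 1 in 10^150
```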

15 –> So far, all of this will probably seem to be glorified common sense, and quite reasonable. So, why is the inference to design so controversial, and especially the explanatory filter?

[ Continued, here ]

Comments
Dr. Torley, it seems Dr. Sheldon's 'Infinitely Wrong' article is no longer at that link I cited but may be found on this page, about the third article down: http://procrustes.blogtownhall.com/page1

bornagain77
January 17, 2011, 02:14 PM PDT
kf, are these the references you were talking about:

Roger Penrose discusses initial entropy of the universe. - video http://www.youtube.com/watch?v=WhGdVMBk6Zo

The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose Excerpt: "The time-asymmetry is fundamentally connected to with the Second Law of Thermodynamics: indeed, the extraordinarily special nature (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the "source" of the Second Law (Entropy)." http://www.pul.it/irafs/CD%20IRAFS%2702/texts/Penrose.pdf

How special was the big bang? - Roger Penrose Excerpt: This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 10^10^123. (from the Emperor's New Mind, Penrose, pp 339-345 - 1989) http://www.ws5.com/Penrose/

Infinitely wrong - Sheldon - November 2010 Excerpt: So you see, they gleefully cry, even [1 / 10^(10^123)] x infinity = 1! Even the most improbable events can be certain if you have an infinite number of tries.,,, Ahh, but does it? I mean, zero divided by zero is not one, nor is 1/infinity x infinity = 1. Why? Well for starters, it assumes that the two infinities have the same cardinality. http://procrustes.blogtownhall.com/2010/11/05/infinitely_wrong.thtml

This 1 in 10^10^123 number, for the time-asymmetry of the initial state of the 'ordered entropy' for the universe, also lends strong support for 'highly specified infinite information' creating the universe since; "Gain in entropy always means loss of information, and nothing more." Gilbert Newton Lewis - Eminent Chemist

"Is there a real connection between entropy in physics and the entropy of information? ....The equations of information theory and the second law are the same, suggesting that the idea of entropy is something fundamental..." Tom Siegfried, Dallas Morning News, 5/14/90 - Quotes attributed to Robert W. Lucky, Ex. Director of Research, AT&T, Bell Laboratories & John A. Wheeler, of Princeton & Univ. of TX, Austin in the article http://www.bible.ca/tracks/dp-lawsScience.htm

Thermodynamic Argument Against Evolution - Thomas Kindell - video http://www.metacafe.com/watch/4168488

bornagain77
January 17, 2011, 12:57 PM PDT
kf and Dr. Torley, "how many bits of FCSI are needed to specify the values of the fundamental constants, to the degree of precision required for life to emerge? Has anyone calculated this number?" This is a very interesting question in that the constants are 'transcendent information' in and of themselves and do not reduce to a material basis but instead dictate what the 'material' basis of energy-matter will do, a 'material' basis which reduces to transcendent information itself as clearly illustrated by Quantum Teleportation.,,, Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn't quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable - it is enforced by the laws of quantum mechanics, which stipulate that you can't 'clone' a quantum state. In principle, however, the 'copy' can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap - 2009 Excerpt: Ytterbium ions have been 'teleported' over a distance of a metre.,,, "What you're moving is information, not the actual atoms," says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts A while back I recall someone tried to employ Szostak's functional information equation to deduce approximately how many functional information bits would be required for the 'Privileged Planet' parameters, but I felt this approach was really a disservice to the problem we are facing since it failed to build a proper foundation for addressing the problem.,,, but to back up a bit and to try to put this problem more fully in context,, First it must be remembered that materialists are loathe to admit that the transcendent universal constants are even constant in the first place since materialism presupposes variance of the transcendent universal constants. Yet it is shown that the 'material' basis of reality is in fact governed by these 'transcendent' universal constants that are in fact CONSTANT: Stability of Coulomb Systems in a Magnetic Field - Charles Fefferman Excerpt of Abstract: I study N electrons and M protons in a magnetic field. It is shown that the total energy per particle is bounded below by a constant independent of M and N, provided the fine structure constant is small. Here, the total energy includes the energy of the magnetic field. 
http://www.jstor.org/pss/2367659?cookieSet=1 Testing Creation Using the Proton to Electron Mass Ratio Excerpt: The bottom line is that the electron to proton mass ratio unquestionably joins the growing list of fundamental constants in physics demonstrated to be constant over the history of the universe.,,, http://www.reasons.org/TestingCreationUsingtheProtontoElectronMassRatio Latest Test of Physical Constants Affirms Biblical Claim - Hugh Ross - September 2010 Excerpt: The team’s measurements on two quasars (Q0458- 020 and Q2337-011, at redshifts = 1.561 and 1.361, respectively) indicated that all three fundamental physical constants have varied by no more than two parts per quadrillion per year over the last ten billion years—a measurement fifteen times more precise, and thus more restrictive, than any previous determination. The team’s findings add to the list of fundamental forces in physics demonstrated to be exceptionally constant over the universe’s history. This confirmation testifies of the Bible’s capacity to predict accurately a future scientific discovery far in advance. Among the holy books that undergird the religions of the world, the Bible stands alone in proclaiming that the laws governing the universe are fixed, or constant. http://www.reasons.org/files/ezine/ezine-2010-03.pdf Dr Sheldon has written a defense of 'non-variance' of the 'fine-structure' constant here on one of your old threads Dr. Torley: https://uncommondescent.com/intelligent-design/why-a-multiverse-proponent-should-be-open-to-young-earth-creationism-and-skeptical-of-man-made-global-warming/#comment-367471 ,,, Yet how would one go about calculating functional information inherent with the 'transcendent information constants' when the denominator for total possible values approaches infinite, if it is not in fact infinite??? As Dr. Bruce Gordon explains: BRUCE GORDON: Hawking's irrational arguments - October 2010 Excerpt: Rather, the transcendent reality on which our universe depends must be something that can exhibit agency - a mind that can choose among the infinite variety of mathematical descriptions and bring into existence a reality that corresponds to a consistent subset of them. http://www.washingtontimes.com/news/2010/oct/1/hawking-irrational-arguments/ and it should also be remembered that we are dealing with far more than a few constants: Systematic Search for Expressions of Dimensionless Constants using the NIST database of Physical Constants Excerpt: The National Institute of Standards and Technology lists 325 constants on their website as ‘Fundamental Physical Constants’. Among the 325 physical constants listed, 79 are unitless in nature (usually by defining a ratio). This produces a list of 246 physical constants with some unit dependence. These 246 physical constants can be further grouped into a smaller set when expressed in standard SI base units.,,, http://www.mit.edu/~mi22295/constants/constants.html It should also be remembered in trying to ascertain FCSI that these constants are 'irreducible complex' “If we modify the value of one of the fundamental constants, something invariably goes wrong, leading to a universe that is inhospitable to life as we know it. When we adjust a second constant in an attempt to fix the problem(s), the result, generally, is to create three new problems for every one that we “solve.” The conditions in our universe really do seem to be uniquely suitable for life forms like ourselves, and perhaps even for any form of organic complexity." 
Gribbin and Rees, "Cosmic Coincidences", p. 269

So Dr. Torley and kf, that is a very, very basic outline of the problem, to give you some food for thought.

bornagain77
January 17, 2011, 12:27 PM PDT
KF, fascinating post! I watched an interesting talk on microprocessor design the other month and your post reminded me of it, particularly in relation to human designers. If you take a look at the silicon microarchitecture of modern processors they are, apart from the orderly memory, a mess. The reason (and the reason this technology has progressed so fast) is that they are designed by computers. As designers we have some quite severe limitations but we are clever enough to invent mechanistic processes to generate designs that exceed the capabilities of human designers alone. We use our intelligence to specify target behaviours and create processes to generate systems that meet those requirements, but the resulting systems can be difficult for us to understand. Of course when it comes to God, if God is an all-powerful being then these limitations don't apply, so I guess I was wondering (rather vaguely :) ) how the limited abilities of human designers link into the chain of reasoning that allows us to infer that we were designed, and if it has any implications at all? For example, is it reasonable to infer that the creator might have created mechanisms to aid further creation - I realise I'm skimming dangerously close to the idea of theistic evolution here but the question is independent of evolutionary arguments - there are plenty of other mechanisms we can conceive of that can aid a designer! --- One other note (Just skimming because I'm a bit busy so apologies if I missed the deeper reasoning):
all higher animals are embodied, but only one class is fully, conceptually and abstractly linguistic.
Is this warranted? How do we know that other higher animals can't reason and use abstract symbols in the same way, just not to our level? In other words, could it be that we are just (much) further along a continuum of cognitive abilities (rooted in embodied brains), rather than on the other side of a wall necessitated by something extra? I guess the question that this begs is: how can we tell?

DrBot
January 17, 2011, 11:58 AM PDT
Dr Torley: Let's see how the usual objectors respond to the post and the two background posts, in light of the build-up over the past week or so. You have also raised some significant concerns. Let me scoop and reply, point by point: 1: how would you explicate the difference between the way in which the laws of nature are designed, and the way in which FCSI is the product of design? As p.2 of the post discusses briefly, the issue is one of the context of the design inferences. FSCI relates to inferring design of objects observed to incorporate it into their functionality, whether a computer [think microcoded MPU, or just the Word 2007 install -- "I HATE the ribbon, uncle Bill!"] or a living cell. Recall, the inference on sign construct:
I:[si] --> O, on W
At this level, physical laws are a given, and we have no immediate reason to suspect that the laws of our observed cosmos are designed. But then, lift your eyes to the cosmos, and look at its credible beginning and fine-tuning to sit at an operating point that is locally -- lone fly on the wall swotted by a bullet -- isolated and facilitates the sort of C-chemistry, cell based life we experience. From cosmological observations, and related reasoning, we have warrant to infer that that beginning entails a beginner, and that the complex fine-tuning [BTW, the linked has a table of five parameters that pushes us past the FSCI threshold] entails intelligently directed configuration of physics to create a habitat suitable for life. On one particular parameter, we are looking at a degree of tuning -- 1 in about 10^ 60 -- comparable to the ratio of one sand grain to the atomic matter of the observed universe. (And yes, that leaves off the dark matter. The point was to highlight just how finely tuned such a ratio is.) 2: one objection to the case for cosmic ID is that the inductive evidence that all designers are physical is just as strong as the overwhelming inductive evidence that all instances of FCSI exceeding a certain threshold of complexity (1000 bits) were intelligently designed. (Or is it?) So it seems the only kind of Designer we’re entitled to reason to is an embodied one. How would you respond to that line of argument? Nope, it's not a full induction, it is an analogical argument. So, immediately, it falls to the issue of what points of comparison are material: all higher animals are embodied, but only one class is fully, conceptually and abstractly linguistic. So, mere embodiment cannot explain abstract, verbal reasoning ability, a critical point to the required logico-mathematical and linguistic reasoning. Second, consider the PC you are working on. By far and away most people who use computers could not design or build them. So, mere brain size and verbal ability do not explain ability to design this class of systems. Computer engineers are deeply knowledgeable, highly talented, well-trained, intelligent people. In short, the issue is not crude physicality or possession of a brain, but possession of a MIND, with the knowledge and intellectual skills to carry out the relevant class of designs. And, while many would love to insist that we only consider embodied minds, the problem is that this is an expression of a priori Lewontinian materialism, not any empirically sound inference on evidence. It is a question-begging deduction resting on a bad analogy, not a cogent induction. As the Derek Smith model I often point to -- and which AIG pointedly repeatedly ignores -- shows, the two-tier controller architecture is compatible with diverse possible ways of getting to the supervisory controller, even for embodied intelligences. Then, when we look at the cosmological design inference, it is reasonable to infer to a mind who is a necessary being, and is knowledgeable and powerful enough to build a cosmos. Such a mind would be prior to anything we have a right to call matter; which from mass-energy equivalence a la Einstein, is plainly contingent. 3: how many bits of FCSI are needed to specify the values of the fundamental constants, to the degree of precision required for life to emerge? Has anyone calculated this number? My linked selected just five parameters and went past 1,000 bits. I think Penrose had was it 1 in 10^(10^123) as precision of the Creator's aim, which is a LOT more bits. 
BA 77 will recall the number and source better than I can just now. _____________ GEM of TKI

kairosfocus
January 17, 2011, 10:31 AM PDT
Hi kairosfocus, Thanks very much for a great post. I have three questions for you.

(1) Professor Dembski has remarked: "If we can explain by means of a regularity, chance and design are automatically precluded." But of course there are many (including Dembski himself) who also believe that the regularities (more precisely, laws) of nature are themselves designed. So my question is: how would you explicate the difference between the way in which the laws of nature are designed, and the way in which FCSI is the product of design?

(2) As you and I are both aware from a recent exchange of views with Aiguy, one objection to the case for cosmic ID is that the inductive evidence that all designers are physical is just as strong as the overwhelming inductive evidence that all instances of FCSI exceeding a certain threshold of complexity (1000 bits) were intelligently designed. (Or is it?) So it seems the only kind of Designer we're entitled to reason to is an embodied one. How would you respond to that line of argument? One possible line of response that has occurred to me is that the case for design isn't just built on inductive logic, but also on abductive logic - whereas the case for all designers being physical is an example of inductive logic.

(3) By the way, how many bits of FCSI are needed to specify the values of the fundamental constants, to the degree of precision required for life to emerge? Has anyone calculated this number? Just curious.

Thanks again for a great post, kairosfocus.

++++++++++ ED: Fixed a bad tag -- looks like if you reverse the solidus and the i, you push through an infinite italicisation.

vjtorley
January 17, 2011, 09:29 AM PDT
