
Darwinian Debating Device #15: Willfully distorting the ID position


One of the saddest aspects of the debates over the design inference on empirically reliable signs such as FSCO/I is the way evolutionary materialist objectors and fellow travellers routinely insist on distorting the ID view, even after many corrections. (Kindly note the weak argument correctives, accessible under the UD Resources tab, which address many of these.)

Indeed, the introduction to the just linked WACs is forced to remark:

. . . many critics mistakenly insist that ID, in spite of its well-defined purpose, its supporting evidence, and its mathematically precise paradigms, is not really a valid scientific theory. All too often, they make this charge on the basis of the scientists’ perceived motives.

We have noticed that some of these false objections and attributions, largely products of an aggressive Darwinist agenda, have found their way into institutions of higher learning, the press, the United States court system, and even the European Union policy directives. Routinely, they find expression in carefully-crafted talking points, complete with derogatory labels and personal caricatures, all of which appear to have been part of a disinformation campaign calculated to mislead the public.

Many who interact with us on this blog recycle this misinformation. Predictably, they tend to raise notoriously weak objections that have been answered thousands of times . . .

Overnight, long-term objector RDF provided a case in point, despite his having been corrected many, many, many times over months and even years. So it is appropriate to showcase the comment I responded to just now, at 47 in the WJM-on-a-roll thread, filling in a few images and the like:



Pardon, but this — after all this time — needs correction:

my point is that if ID is proposing that a known cause of complexity is responsible for biological complexity, then ID is proposing that human beings were responsible – clearly a poor hypothesis. Alternatively, ID can propose an unknown cause that somehow has the same sort of mental and physical abilities as human beings. But in that case, ID would need to show evidence that this sort of thing exists.

Let’s take in slices:

>> my point is that if ID is proposing that a known cause of complexity>>

1: Design theory does not address simple complexity, but specified complexity, and particularly functionally specified complexity that requires a cluster of correct, properly arranged and coupled parts to achieve a function, often in life forms at cell based level using molecular nanotech, codes and algorithms . . . such as the protein synthesis process.

>> . . . is responsible for biological complexity,>>

2: Biological, FUNCTIONALLY SPECIFIC complex organisation, e.g. the protein synthesis system, etc. (More generally, functionally specified, complex organisation and/or associated information, FSCO/I, requires many well-matched components, correctly arranged and coupled to achieve function, such as the glyph strings in this English text, or the algorithmic function of strings in D/RNA used to guide protein assembly in the ribosome.)


Protein Synthesis (HT: Wiki Media)

Where that constraint (configuration must achieve function) locks us to isolated islands of function in the configuration space of possible arrangements of components. Thus, beyond 500 to 1,000 bits of specified complex arrangement to achieve function, we face a material blind-search challenge on chance and mechanical necessity that is readily solved by intelligence, whether human [this text, its underlying software and hardware] or beaver [dams adapted to stream specifics in a feat of impressive engineering], etc. Where we may simply measure FSCO/I using the Chi_500 threshold metric:

FSCO/I on the gamut of our solar system is detected when the following metric goes positive:

Chi_500 = I*S – 500, bits beyond the solar system threshold [with 1,000 bits being adequate for the observed cosmos]

in which I is a reasonable information metric, most easily seen as the string of Y/N questions needed to specify the configuration in a field of possibilities, as is commonly done with AutoCAD files or the like,

with S a dummy variable defaulting to zero (chance as the default explanation of high contingency, cheerfully accepting the possibility of false negatives), and set to one on noting good reason and evidence of functional specificity, e.g. the key-lock fitting of proteins sensitive to sequence and folding,

where 500 bits gives us a “haystack” sufficiently large to overwhelm the capacity of 10^57 atoms working for 10^17 s, each making 10^14 observations per second of chance 500-bit configurations [a fast chemical reaction rate],

comparable to blindly taking a single straw-sized sample from a cubical haystack of the possible configurations for 500 bits [3.27*10^150] that is 1,000 light years on a side, comparable in thickness to our galaxy . . . light setting out when William the Conqueror attacked Saxon England in 1066 AD would still not have crossed the stack today,

so that if S = 1 and I > 500 bits, Chi_500 going positive convincingly points to design as best explanation as such a blind search of a haystack superposed on our galactic neighbourhood would with moral certainty beyond reasonable doubt produce naught but the typical finding: a straw

but by contrast, on trillions of observed cases, design is the reliably known cause of FSCO/I
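The threshold test and the resource comparison above can be sketched in a few lines of code. This is a minimal illustration, assuming the 7-bits-per-ASCII-character convention used elsewhere in the post; the function names are mine, not part of the metric's definition:

```python
def ascii_info_bits(text):
    # 7 bits of information-carrying capacity per ASCII character (128 = 2^7 options)
    return 7 * len(text)

def chi_500(info_bits, S):
    # Chi_500 = I*S - 500; positive => beyond the solar-system threshold
    return info_bits * S - 500

# Solar-system scale search resources vs. the 500-bit configuration space
atoms = 10**57           # ~atoms in the solar system
seconds = 10**17         # ~time available, in seconds
rate_per_atom = 10**14   # observations per atom per second (fast chem rxn rate)
observations = atoms * seconds * rate_per_atom   # 10^88 possible observations
config_space = 2**500                            # ~3.27*10^150 configurations

print(f"{config_space:.2e}")                     # ~3.27e+150
print(config_space // observations > 10**60)     # True: the sample is hopelessly sparse
print(chi_500(ascii_info_bits("x" * 100), S=1))  # 700*1 - 500 = 200, positive
print(chi_500(ascii_info_bits("x" * 100), S=0))  # -500: chance stays the default
```

Note that with S = 0 the metric can never go positive, which is the false-negative-tolerant default described above.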

3: The rhetorical substitution made here therefore dodges a substantial case and sets up a strawman caricature, for which — given longstanding, repeated corrections across months and years — the error involved unfortunately has to be willful.

>> then ID is proposing that human beings were responsible – clearly a poor hypothesis.>>

4: Strawman.

5: First, the very names involved are the design inference and the theory of intelligent design. At no point is there an inference to human action or to any particular agent; only to a process that is observed and known to be not only adequate to produce the phenomenon FSCO/I, but, on trillions of cases, the ONLY observed process to do so.


6: This is multiplied by the needle-in-haystack blind search challenge analysis, which points to the gross inadequacy of blind chance and mechanical necessity, on the gamut of the solar system or observed cosmos, to find the relevant deeply isolated islands of function.


7: Where, starting with beavers and the like, we have no good reason to infer that humans exhaust the actual, much less the possible, intelligences capable of intelligently directed contingency or contrivance, i.e. design.

An arched beaver dam (with a second one downstream)

8: As a further level of misrepresentation, the design inference is about causal process, not identification of specific classes of agents or particular agents. One first identifies that a factory fire is suspicious and infers arson on signs, before calling in the detectives to try to identify the particular culprit; signs that indicate more than blind chance and the mechanical necessities of starting and propagating a fire were at work.

9: This willful caricature, after years of correction, then sets up the next step:

>>Alternatively, ID can propose an unknown cause that somehow has the same sort of mental and physical abilities as human beings.>>

10: As has been pointed out to you, RDF, over and over again and stubbornly ignored in the rush to set up and knock over a favourite strawman caricature,


the design inference process sets up no unknown cause [here a synonym for an agent], but compares known, empirically evident causal factors and their characteristic or typical traces.

11: Mechanical necessity is noted for low-contingency natural regularities, e.g. guavas and apples reliably drop from trees with initial acceleration 9.8 m/s^2 [9.8 N/kg]; and, attenuated as though spread across the surface of a sphere at the distance of the Moon, the same force field aptly accounts for the Moon's centripetal acceleration, grounding Newtonian gravitational analysis.

12: Blind chance, by contrast, tends to produce high contingency, but stochastically distributed contingency, similar to how a Monte Carlo simulation analysis explores the reasonably likely clusters of possibilities in a highly contingent situation.
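As a toy illustration of that point (a sketch with arbitrary sample sizes and seed, chosen by me), blind sampling of 100 coin flips clusters tightly around the 50-heads bulk and never turns up the isolated all-heads configuration, whose odds are 1 in 2^100:

```python
import random

random.seed(1)  # arbitrary seed so the sketch is repeatable

def heads_in(n=100):
    # one blind sample: count heads in n fair coin flips
    return sum(random.getrandbits(1) for _ in range(n))

samples = [heads_in() for _ in range(10_000)]
bulk = sum(1 for h in samples if 40 <= h <= 60)   # within ~2 sigma of the mean
all_heads = sum(1 for h in samples if h == 100)   # the isolated 1-in-2^100 config

print(bulk / len(samples))   # ~0.96: sampling explores the likely cluster
print(all_heads)             # 0: the deeply isolated configuration is never found
```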

The proverbial needle in the haystack

13: But, some needles can be too isolated and some haystacks too big relative to sampling resources, for us to reasonably expect to find one needle, much less the thousands that are in just the so-called simple cell, i.e. the cluster of proteins and the nanomachines involved.

14: So, we are epistemically entitled to infer that the only vera causa plausible process that accounts for the needles coming up trumps is design. That is, intelligently directed contingency or contrivance.

15: Where also, the base of trillions of observations showing that design is the reliably known, and ONLY actually observed, causal process accounting for such FSCO/I makes FSCO/I itself a very strong, reliable sign of design as the key causal factor wherever it is observed.

16: This bit of inductive reasoning then exposes the selectively hyperskeptical rhetorical agenda in:

>>But in that case, ID would need to show evidence that this sort of thing exists.>>

17: Designers exist, human, beaver and more. Where, we have no good reason whatsoever to assume, assert, insinuate or imply that human and similar cases exhaust possible cases of designers. So, designers exist and are therefore possible.

18: Likewise, FSCO/I, on a very strong empirical basis, is a highly reliable index of design.

19: Therefore, until someone can reasonably show otherwise empirically, we are inductively entitled to take the occurrence of FSCO/I — even in unexpected or surprising contexts — as evidence of design as relevant causal process.

20: So, why the implicit demand for separate, direct empirical evidence of designers in the remote, unobserved past of origins? And why, by contrast, the willingness to assign causal success to very implausible mechanisms for FSCO/I, such as chance and necessity, which are neither needle-in-haystack plausible nor ever observed to account for FSCO/I?

21: Selective hyperskepticism joined to flip-side hypercredulity, to substitute a drastically inferior explanation. In the wider context, typically for fear and loathing of the possibility of . . . shudder . . . “a Divine Foot” in the door of the halls of evolutionary-materialism-dominated science.

22: Of course, ever since 1984, with Thaxton et al, design theorists have been careful to be conservative, noting in effect that, for the case of what we see in the living cell and wider biological life, a molecular nanotech lab some generations beyond Venter et al would be adequate. But so locked in a death-battle with the bogeyman of “Creationists” are the materialists and fellow travellers that they too often refuse to acknowledge any point, regardless of warrant, that could conceivably give hope to Creationists.

23: So, the issues of duties to reason, truth and fairness are predictably given short shrift.

24: Oddly, most such activists are typically missing in action when we point out, from the thought of the lifelong agnostic and Nobel-equivalent-prize-holding astrophysicist Sir Fred Hoyle and others, the evidence of cosmological fine tuning: a cosmos set up so that we can have C-chemistry, aqueous-medium, protein-using, cell-based life built on the five or six most abundant elements points to cosmological design, most credibly by a powerful, skilled and purposeful designer who set up physics itself to be the basis for such a world.

25: Here’s a key comment — just one of several — by Sir Fred:

From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12C to the 7.12 MeV level in 16O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.]

It seems that ideology rules the roost in present day origins science thinking (and in science education), even at the price of clinging to the inductively implausible in order to repudiate anything that might conceivably hint that design best accounts for our world.  >>


It is high time to take duties of care to fairness, accuracy and truth seriously, and to actually address the inductive evidence and the needle in haystack analysis challenge on the merits. END

31 Replies to “Darwinian Debating Device #15: Willfully distorting the ID position”

  1. kairosfocus says:

    PS: In the end, the habitual and insistent resort to red herrings and strawman caricatures of the design inference [not to mention ad hominems, just plain nasty or rude personalities, and coarse bully-boy nihilist disrespect on grounds of “I can get away with it” . . . ] by evolutionary materialist ideologues, enablers and fellow travellers speaks inadvertent volumes about the actual strength of the design inference on FSCO/I, on its inductive, observational merits. Sadly revealing, and pointing to a need for fresh thinking and a better attitude.

  2. Joe says:

    kairosfocus- If they didn’t misrepresent ID then they wouldn’t have anything to say. 😉

    They seem oblivious to the fact it is up to them to provide probabilities, so what else can they do but distort ID in order to make some point?

  3. kairosfocus says:

    Joe, sadly, you have a sobering point. One that too many objectors refuse to attend to. KF

  4. Joe says:


    If I were an evolutionist and someone told me that the scientific way to dispense with ID is to step up and support evolutionism, that is where I would be spending my time. And to me it is very telling that our opponents do not even try to do so.

  5. kairosfocus says:

    Joe, actually ID has a deliberate single point failure mode: simply show that FSCO/I is credibly (with good empirical plausibility on available atomic resources in the solar system or observed cosmos) a product of blind chance and/or mechanical necessity. For instance, a computer rig that credibly produces English text beyond 73 continuous ASCII characters through a blind-chance search process. As you know, dozens of attempts were made over the years; all failed. Many inadvertently showed how FSCO/I reliably arises from design on a routine basis. Current evasiveness, definition-derby games, strawman caricatures and the like, including ad hominems and nasty personalities, should be understood in that context. KF
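For reference, the 73-character figure follows from the 7-bit ASCII code; a quick check (a sketch, with constant names of my own choosing):

```python
BITS_PER_ASCII_CHAR = 7   # 128 = 2^7 character options per position
THRESHOLD_BITS = 500      # the solar-system threshold used in the post

n_chars = 73
info_bits = n_chars * BITS_PER_ASCII_CHAR
print(info_bits)                     # 511: just past the 500-bit threshold
print(info_bits > THRESHOLD_BITS)    # True
```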

  6. kairosfocus says:

    Spell that:


    As in empirically testable but robust to date.


  7. Joe says:

    Right, and if they demonstrate that blind chance can produce a living organism (my bad for making it clear that evolutionism includes that) they would have nailed that single point failure mode.

    So (it appears) we agree.

    The EF mandates we take Newton’s rules of scientific investigation seriously. So we give necessity, then necessity and chance, the first crack at solving the puzzle. When that fails, due to either empirical science, the lack of probabilistic resources or even the total failure at being able to provide probabilities, we are then free to consider the design inference.

    That means we wouldn’t even consider a design inference (at first blush anyway, as future data can overturn any current inference) if we determined that blind necessity and/or chance can account for what we are investigating. And all of that means our opponents seem to have all of the power, having the ability to stop ID before it could even get going. Yet they prefer to play hopscotch, not even dealing with the first nodes of the process and jumping right into the final decision box where intelligent design is considered.

    To me, and others, that is a sure sign that they have absolutely nothing. And that leads us back to the topic of your post. 😉

  8. Joe says:

    The EF I was referring to is Dembski’s simpler model.

  9. kairosfocus says:

    F/N: Wiki — testifying against known ideological interest — on random document production:

    One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[24]

    A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

    RUMOUR. Open your ears; 9r”5j5&?OWTY Z0d…

    In short, a space of ~10^50 is searchable within solar system scope resources, but that is a factor of 10^100 short of the threshold we have put on the table.
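That ~10^50 figure can be checked under an assumption (mine) of a 128-option, 7-bit keyboard per typed character, which makes the 24 matched characters quoted above a 168-bit space:

```python
import math

BITS_PER_KEY = 7  # assumption: 128 = 2^7 keyboard options per character

def search_space(n_matched_chars):
    # configurations for a string of n characters on that keyboard
    return 2 ** (BITS_PER_KEY * n_matched_chars)

best_match = search_space(24)   # the 24-character monkey-simulator result
threshold = 2 ** 500            # the 500-bit threshold

print(f"{best_match:.2e}")                        # ~3.74e+50, i.e. the ~10^50 figure
print(round(math.log10(threshold / best_match)))  # ~100 orders of magnitude short
```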

    Dismissing this as “big numbers” or the like is a label, caricature and dismiss tactic, not a serious response.

    It is time for fresh thinking.


  10. kairosfocus says:

    Joe, my elaboration — which WmAD liked BTW — was designed to address specific concerns, by taking an object etc and looking at it methodically aspect by aspect. Then, the fly-out boxes were showing onward actions as in this is not a “science stopper” but a working out of often overlooked aspects of a serious scientific investigation. Notice the next aspect/onward inquiry focus, indicating iteration until some scope limiting criterion is reached. A flowchart rather than an outright algorithm, but obviously using the old programming flowchart approach. A lot of work would have to be filled in for each box, on the ground. Likewise, this is actually tied to the Chi_500 metric, as the decision nodes feed in to the variables, I, S. KF

  11. Joe says:

    Yes, kairosfocus, your elaboration is much better than the original. It’s just that I have those three nodes of the original implanted and sometimes it is difficult to change. Once I realized that your elaboration only has two I had to clarify what I had said in my previous post.

    True the original EF was much too simple and needed a touch-up. Thank you for doing so.

  12. kairosfocus says:

    It’s actually a three-possibility case structure with alternatives addressing the two possible switch points.

  13. kairosfocus says:


    The AND involved in decision node 2 underscores that complexity or specificity in isolation is not enough; the issue is JOINT, single-aspect complexity AND specificity beyond a relevant threshold, set by needle-in-haystack requisites on solar system or cosmos scope resources. (Notice, this actually goes beyond Dembski.)

    High contingency rules out default 1, mechanical, lawlike necessity.

    Joint complexity and specificity rule out default 2, blind chance searching a space of possible configurations. (Think here, ASCII text string or bit string.)

    At this point, FSCO/I has been isolated and identified.

    Intelligently directed configuration is the only empirically warranted, needle in haystack plausible explanation for such FSCO/I, cf the infographic.
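The per-aspect decision sequence just described (default 1, then default 2, then design) can be sketched as a small function; the parameter names and return labels are illustrative choices of mine, not a formal specification of the filter:

```python
def explanatory_filter(high_contingency, info_bits, specified, threshold=500):
    # Per-aspect decision logic as described above.
    if not high_contingency:
        return "mechanical necessity"   # default 1: low-contingency regularity
    if not (specified and info_bits > threshold):
        return "chance"                 # default 2: blind, high contingency
    return "design"                     # joint complexity AND specificity

print(explanatory_filter(False, 0, False))     # mechanical necessity
print(explanatory_filter(True, 900, False))    # chance: complex but unspecified
print(explanatory_filter(True, 200, True))     # chance: specified but not complex
print(explanatory_filter(True, 900, True))     # design
```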

    This is not rocket science; it is based on reasonable logic and empirical evidence. But it is hated and despised, to the point of abusive bully-boy behaviour, because it does not sit well with the comfortable materialist ideology those like TWT and the like want. (I think he has anger management issues.)

    I will say here that I saw that Joe Felsenstein has had the decency to object to what has been going on at Prof Moran’s blog on the outing tactics, rudeness and personal abuse front.

    It is seriously time for a bigtime wake up.

    What we are seeing comes straight out of the nihilism that Plato, 2,350 years ago in The Laws, Bk X, warned stems from evo mat. Which of course the bully-boys want to brush aside.



  14. kairosfocus says:

    Joe, remember how many times we had to hammer away at this point to EL et al, and they were still evading and refusing to reckon with it? To the point where I had to pointedly ask if she had read or by extension could read a simple flowchart? I think we should never underestimate the blinding power of a demanding ideology such as Lewontinian-Saganian a priori evo mat. KF

  15. CLAVDIVS says:

    Why don’t you focus on getting the ID position published in academic papers, so it’s there in black and white and accordingly much more difficult to distort? Endless blog postings do not appear to be progressing ID.

    In academia 60-70% of the audience is either theistic or at least open to a non-materialist viewpoint like ID (only about 30% of academics and 40% of scientists claim to be atheist).

  16. rich says:

    Sorry guys – posting from iPhone, so short version:

    Lots of copypasta, but the emperor has no clothes. Do some calcs, provide a list of FSCO/I values for things. Otherwise it’s simply a distraction.

  17. jerry says:

    These discussions become cyclical.

    Here is something I posted 6 years ago as a proposal for an ID manifesto. Here is the link to the comment

    Well I disagree and I do not look at people as adversaries but just people who do not understand who we are. So I have modified the introductory document and readily admit it could use a lot of help from a good writer and some critical analysis from others.

    What is wrong and what should be added are the kinds of critical comments I am looking for. I may end up as the lone ranger on this, but I believe something similar is necessary for ID in order to explain itself to the uninformed masses; the people at Panda’s Thumb and many at ASA are not the target. It is the people I meet every day, who are well educated and know nothing about ID except what is said in the press, who are my target. If in the process scientists learn more about what ID is really about, it precludes them from making wrong assertions without the opportunity to show them they are misleading.

    Here is a rewrite of my points given Paul’s comments

    The core belief that defines ID is that there is evidence for some kind of intelligence that is capable of creating and instantiating design in the physical universe, based on the similarity of certain features of nature to known designed artifacts and processes and the absence of any reasonable model of their occurrence by non-designed processes. ID does not insist that any investigation must stop looking for non-designed processes as causes only that intelligent input be a possible inference. We believe that in certain cases it is the best inference but that this may be superseded by future findings.

    This core principle is compatible with a wide variety of other beliefs and some of these belief systems have adopted this core principle as an essential element of their belief system. This does not mean that ID endorses these belief systems. Just as capitalism and socialism both espouse certain common engineering principles, different belief systems have adopted intelligent design principles.

    ID as a science accepts only good science and insists that only good science be considered when addressing scientific issues:

    Relevant to evolution which is the topic here, ID accepts that

    The earth is approximately 4.6 billion years old and the universe approximately 13.7 billion years old. The necessary ingredients for life were created by cosmological processes that slowly produced the various higher-order elements over the last 8-10 billion years.

    Life on earth began about 3.5 to 3.8 billion years ago.

    Evolution, which is the arrival of new and distinct species as evidenced by the fossil record, as well as minor modifications of current species, has happened over the last 3.5 billion years, and there are periods when this phenomenon of new species origins has been rapid and times when it has been slow. The fact of evolution does not imply any mechanism or rationale for the appearance of any new species.

    Life has progressed since 3.5 billion years ago, showing a pattern toward species of greater complexity, sometimes with great changes happening in relatively short time periods such as 5-10 million years, and sometimes with few changes over much longer periods of time.

    The number of cell types of animals has been increasing since the origin of multi-cellular organisms about 800 million years ago with over two hundred cell types currently in many living mammals. Along with this increase in cell types there has been an increase in complexity of the functions which the organisms can accomplish.

    The driving force for most of the diversity of life on the planet seems to be Darwinian processes. Darwin’s original ideas have been considerably changed since Darwin’s time, but the theory today generally hypothesizes that the appearance of new minor variation in species is driven by environmental factors, modified by a whole host of genetic and epigenetic processes that tend to produce gradual changes in species over time. However, this process has never been shown to be able to produce new complex functional capabilities, but only minor changes, probably creating at best a new genus. We remain skeptical of its ability to completely explain what Ernst Mayr called megaevolution.

    However, the mechanism by which new variation appears in the populations of the species on the earth that leads to new complex functional capabilities is at this time mostly unknown. We recognize that there is much speculation on this topic, but at present all that is available is mainly speculation. While traditional Darwinian processes can very often explain changes once these complex functional capabilities arrive, they cannot explain the origin of the capabilities.

    In other words using an old saying, “Darwinian processes can explain the survival of the fittest but not the arrival of the fittest.”

    The concept of common descent, or universal common descent, is not a given but may have happened. It is not an essential part of evolutionary biology but a possible conclusion from the evidence, which is still to be debated. ID does not dispute the analysis that many species with common genomic elements probably resulted from those species having a common ancestor, and that most of the differences between these species are due to microevolutionary processes. This is an area of debate to be resolved through future research.

    Thus, the evolutionary debate is mainly about the mechanism for the development of new complex functional capabilities, and secondarily about how fast some of these complex functional capabilities can permeate a population once they arrive.

    The resort to “deep time” as an explanation is not a valid scientific concept and is used mainly to cover up inadequacies of the current evolutionary paradigm.

    It is also grossly inappropriate to argue that we are “creationists” without further defining that term. We are not young earth creationists, though we are aware that many young earth creationists use intelligent design to justify their beliefs, just as socialists and capitalists use efficiency arguments to justify their distribution systems. There are fundamental differences between us; specifically, arguments that depend on a young earth creationist reading of the book of Genesis, or on a young age of the earth, are irrelevant to our position and should be recognized as such. We are not experts on young earth creationism or the Bible, and for that reason questions about religious beliefs should not be directed at us, because in reality we are not able to answer them correctly. Those who espouse intelligent design come from various religious and non-religious backgrounds.

    This was written as a reaction to another ID supporter’s view of ID which I did not completely agree with. As I said 6 years ago this could use some refinement but essentially describes the ID position as I know it.

    All of this is not new here. The interesting thing is that it keeps getting repeated. The anti-ID people are like the movie Groundhog Day: they keep repeating the same nonsense over and over. Each day is essentially the same; it is just that on each new day a slight variation is presented.

    As in Groundhog Day, it never goes anywhere till a fundamental different path is taken. The ID people know what the path is.

  18. Joe says:


    Why don’t you focus on getting the ID position published in academic papers so its there in black and white, and accordingly much more difficult to distort.

    Yet there isn’t any support for unguided evolution in peer-review.

  19. Joe says:

    Earth to rich: functional sequence complexity, i.e. CSI, has been calculated for some proteins, and it is in peer-review. Don’t blame us for your willful ignorance.

  20. rich says:

    Provide a list of all CSI calcs. If it’s a real, usable thing, I imagine there will be a long list. Joe, feel free to add your own CSI-of-cake example.

  21. Joe says:

    rich- you do realize that you come off as a little snot-nosed brat. It has all been covered on my blog- all the posts that you either choked on or refused to participate in.

    I have explained to you how to do it. You choked. I provided a peer-reviewed paper that uses that methodology and you ran away.

    The problem is your position doesn’t have any methodology beyond bald declaration and that means when a valid methodology is put in front of you you don’t have any idea what it is.

    Here, have another choke:

    Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.

    Each protein consists of a specific sequence of amino acid residues which is encoded by a specific sequence of processed mRNA. Each mRNA is encoded by a specific sequence of DNA. The point is that biological information refers to the macromolecules that are involved in some process, be that transcription, editing, splicing, translation or functioning proteins. No one measures the biological information in a random sequence of DNA, nor in any DNA sequence not directly observed in some process. The best one can do with any given random DNA sequence is figure out its information-carrying capacity. You couldn’t tell if it was biological information without a reference library.

    And Leslie Orgel first talked about specified complexity wrt biology:

    In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity.

    As far as I can tell, IDists use the terms in the same way. Dembski and Meyer make it clear that it is sequence specificity that is central to their claims.

    That is the whole point: if sequence specificity matters, then the tighter the specification, the less likely it is that blind physical processes could find it. Yup, those dreaded probabilities again, but seeing as yours doesn’t come with a testable model, it’s all we have. See Is Intelligent Design Required for Life?

    With that said, to measure biological information, i.e. biological specification, all you have to do is count the coding nucleotides of the genes involved in that functioning system, then multiply by 2 (four possible nucleotides = 2^2), and then factor in the variation tolerance:

    from Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling, Vol. 4:47 (2007):

    [N]either RSC [Random Sequence Complexity] nor OSC [Ordered Sequence Complexity], or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life. FSC [Functional Sequence Complexity] includes the dimension of functionality. Szostak argued that neither Shannon’s original measure of uncertainty nor the measure of algorithmic complexity are sufficient. Shannon’s classical information theory does not consider the meaning, or function, of a message. Algorithmic complexity fails to account for the observation that “different molecular structures may be functionally equivalent.” For this reason, Szostak suggested that a new measure of information—functional information—is required.

    ETA for OMagain:

     First, as observed in Table 1, although we might expect larger proteins to have a higher FSC, that is not always the case. For example, 342-residue SecY has a FSC of 688 Fits, but the smaller 240-residue RecA actually has a larger FSC of 832 Fits. The Fit density (Fits/amino acid) is, therefore, lower in SecY than in RecA. This indicates that RecA is likely more functionally complex than SecY. (results and discussion section)
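    [The counting rule described in this comment, two bits per coding nucleotide before any variation-tolerance adjustment, can be sketched as a short calculation. This is only an illustrative sketch; the function name and sequence below are hypothetical, chosen to make the arithmetic concrete:]

```python
# Raw information-carrying capacity of a coding sequence:
# four possible bases (A, C, G, T) = 2^2, so 2 bits per nucleotide.
# This is capacity only; variation tolerance would reduce the figure.

def carrying_capacity_bits(sequence: str) -> int:
    """Return 2 bits for every valid nucleotide in the sequence."""
    valid = set("ACGT")
    return 2 * sum(1 for base in sequence.upper() if base in valid)

# Hypothetical 750-base coding region (250 three-letter codons):
gene = "ATGGCT" * 125
print(carrying_capacity_bits(gene))  # 1500
```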

  22.
    rich says:

    Sorry / let me be precise: Provide a list of all CSI calcs. If its a real, useable thing I imagine there will be a long list. Joe, feel free to add your own CSI of cake example.

    Proof of the pudding…

  23.
    kairosfocus says:


    Pardon, but peer review (which, frankly, I have but little interest in for this field) is little more than an appeal to authority.

    If that is what you want, there are dozens of peer-reviewed ID-supportive papers linked to biology, and there must be hundreds on the cosmological side. Peer review is not the problem; if anything, it is a potentially thread-jacking side track.

    Willful distortion in the teeth of correction is.

    The measurement of information-carrying capacity is an established field, now nigh on seventy years old. The simplest approach is to count the chain of Y/N questions needed to specify a state, which under well-behaved circumstances has the same properties as a weighted-sum log-probability metric. Shannon’s famous paper used both, save that he used ten-state units in discussing direct measures.

    The common file size metrics we use are like that.

    When it comes to the most relevant case in the bio-world, any given member of A/G/C/T (or U) can follow any other; i.e. under random circumstances, laying aside chirality and the like, the a priori odds would be 1/4 each. In protein codes, for similar reasons as with any other code, there will be a bit of redundancy, so this is not quite exact, but it is good enough for a start. Four-state elements are directly two bits apiece, so a protein code of 250 three-letter codons would run 750 bases. As a first rough metric, the raw carrying capacity is 1500 bits, i.e. a typical protein is already at or beyond the threshold of what our solar system, or even the observed cosmos, could search out by blind mechanisms.
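    [The two-bits-per-base figure and the 1500-bit total just given follow from Shannon’s H metric. A minimal sketch, assuming a flat a priori distribution over the four bases; the helper name is chosen here for illustration:]

```python
import math

def shannon_h(probs) -> float:
    """Shannon uncertainty H = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Flat a priori odds over A/G/C/T, 1/4 each, give 2 bits per base:
bits_per_base = shannon_h([0.25, 0.25, 0.25, 0.25])

# A 250-codon protein gene spans 750 bases, so its raw carrying
# capacity comes out to 1500 bits, the figure quoted just above.
raw_capacity = 750 * bits_per_base
print(bits_per_base, raw_capacity)  # 2.0 1500.0
```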

    If you cannot follow and understand simple calcs and thresholds like the above, there is but little hope that something like Durston’s work on 15 protein families will even make basic sense to you. For instance, it pivots on Shannon’s H info metric, which he termed entropy and which, on the informational view of thermodynamics, is connected. And if the 15-protein-families calc and presentation do not mean anything to you, or are not perceived as cases in point by you, demanding “all” CSI calcs is pointless, apart from as an exercise in selective hyperskepticism. As it is, if you simply were to read the already linked derivation of Chi_500, you would find more than enough explanation of how FSCO/I can be and is measured, and also a link onwards to Durston et al.

    If you take up the Durston calc of going from a flat random null to a ground state to a functional state, you will reduce these figures (2 bits per base pair, 4.32 bits per amino acid in a protein) somewhat, but not enough to make a difference in aggregate, noting that a typical cell needs hundreds and hundreds of diverse proteins to work, all of which have to pass through the chicken-and-egg problem of the ribosome protein-assembly NC machine. Note the diagram from Wiki in the OP.

    The protein manufacturing system alone, is already well beyond the threshold of what blind chance and mechanical necessity acting on the gamut of the observed cosmos can reasonably do. The only reasonable known causal force capable of that much FSCO/I is design.

    Codes, algorithms, digital information in storage tapes, organised functional machinery etc.

    All point to design.

    So, the evolutionary materialist origins narrative cannot even get to the first functional cell, much less the tree of life.

    Grant a cell-based world, and you face population genetics and the time needed to fix the changes required to effect major body plans, multiplied by an utter lack of observational basis for inferring (apart from question-begging a priori materialism) that such happened by chance variation plus differential reproductive success in niches, leading to descent with incremental modification, thence to branching-tree, body-plan-level evolution.

    All of this narrative lacks empirical warrant but is often presented as though it were as certain as gravity — which we directly observe.

    The certainty lives in the a priori, not the evidence. The a priori already demands something much like that, so any tiniest hint of a shadow of something that may fit is blown up, scare-headlined, and enshrined by the lab-coat-clad new magisterium.

    But, all of this is an indulgence to a tangential side track.

    The point of the OP is that the basic design argument is being distorted willfully, in the teeth of copious correction and opportunity to get it right.

    Nothing you have had to say addresses this cogently, but that is a very serious issue indeed.

    One that demands correction forthwith.

    It is that insensitivity by evo-mat advocates, enablers, and fellow travellers to duties of care to truth, accuracy, fairness, and more that is utterly, if inadvertently, revealing.

    “Might and manipulation make ‘right’” is the credo of nihilism.

    Which should give us sobering pause.

    It is time for fresh thinking.


  24.
    kairosfocus says:

    Oh, I see Joe has already linked the paper by Durston et al. Durston, BTW, did his PhD on these things, in biophysics, in Canada, beyond the reach of the thought police. Which should tell you something.

  25.
    Joe says:

    rich (chokes):

    Sorry / let me be precise: Provide a list of all CSI calcs. If its a real, useable thing I imagine there will be a long list.

    I just showed you how to do it. Get started. And feel free to demonstrate how blind and undirected processes can account for what you are calculating.

    Joe, feel free to add your own CSI of cake example.

    Why am I not surprised that you are proud of your inability to grasp a very simple example.

  26.
    rich says:

    So you think the Durston FSC paper counts. I’ll get to that later. Is that *all* you have?

  27.
    rich says:

    So Joe, the reason there’s no CSI calcs is that we’ve not done them for you?

  28.
    kairosfocus says:

    Rich, why have you refused to attend to already given info and cases? This seems to be DDD #8, while your own posts are an example of FSCO/I; indeed, there are trillions of cases in point just online. Simply look at standard file sizes for relevant functional document files, and of course DNA code segments for proteins, starting at two bits per base or six per three-base codon.

    You are actually also providing an example of a willful distortion of the design inference on FSCO/I, and of course of selective hyperskepticism. The fact that DNA incorporates coded info is not in serious doubt; that info can be quantified, it is functionally specific, and it is often well beyond 500 – 1,000 bits.

    This also seems to be a desperate distraction from what is patent. All of which inadvertently points to the actual strength of the design inference case, given the scorched-earth rhetoric being used by those who, for ideological reasons, seem backed into the corner of trying to deny its existence and its amenability to a quantitative information measure. KF

  29.
    rich says:

    KF – you’re a terrible bluffer. If you had many examples you would have posted them and sent me home with my tail between my legs. But instead we get “[outing tactic snipped — ed]”:

    [link to abusive site snipped — ed]

  30.
    kairosfocus says:

    Rich, you are talking blue smoke and mirrors, especially after you have been given actual citations and links, instructions on how to find FSCO/I values for cases using a relatively simple metric, a grounding for the metric, and information on not only ordinary files on computers but also DNA strings. You have simply shown that you are in desperate denial. Worse, you have now linked to an abusive site that hosts materials that cross the threshold of civility, in order to indulge in name-calling ad hominems. KF

  31.
    kairosfocus says:

    F/N: Just as an example, Rich’s comment at 22 runs 198 ASCII characters at 7 bits/character, in reasonably recognisable English: I = 1386 bits. With S = 1, Chi_500 = 886 bits beyond the solar-system limit. Designed, as is separately known. This is offered only to underscore the unreasonableness of the behaviour being indulged by this objector. KF
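    [The arithmetic in this comment can be checked mechanically. A minimal sketch of the Chi_500 metric as applied there, with I in bits and S a 0/1 specificity flag; the function name is chosen here for illustration:]

```python
def chi_500(info_bits: float, specificity: int) -> float:
    """Chi_500 as applied above: functionally specific bits beyond
    the 500-bit solar-system threshold. specificity is 1 if the
    string is judged functionally specific, else 0."""
    return info_bits * specificity - 500

# Example from the comment: 198 ASCII characters at 7 bits each.
info = 198 * 7            # 1386 bits
print(chi_500(info, 1))   # 886
```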

    PS: on Rich’s resort to abusive behaviour, now snipped, I have terminated discussion in this thread.

    PPS: Observe, there has been utter unresponsiveness on the focal issue in the OP, revealing, given what is patent.

Comments are closed.