One of the saddest aspects of the debates over the design inference on empirically reliable signs such as FSCO/I is the way evolutionary materialist objectors and fellow travellers routinely insist on distorting the ID view, even after many corrections. (Kindly note the weak argument correctives [WACs], accessible under the UD Resources Tab, which address many of these.)
Indeed, the introduction to the just-linked WACs is forced to remark:
. . . many critics mistakenly insist that ID, in spite of its well-defined purpose, its supporting evidence, and its mathematically precise paradigms, is not really a valid scientific theory. All too often, they make this charge on the basis of the scientists’ perceived motives.
We have noticed that some of these false objections and attributions, largely products of an aggressive Darwinist agenda, have found their way into institutions of higher learning, the press, the United States court system, and even the European Union policy directives. Routinely, they find expression in carefully-crafted talking points, complete with derogatory labels and personal caricatures, all of which appear to have been part of a disinformation campaign calculated to mislead the public.
Many who interact with us on this blog recycle this misinformation. Predictably, they tend to raise notoriously weak objections that have been answered thousands of times . . .
Overnight, long-term objector RDF provided a case in point, despite having been corrected many, many times over months and even years. So, it is appropriate to showcase the response I made just now, at 47 in the WJM-on-a-roll thread, filling in a few images and the like:
___________
>>RDF:
Pardon, but this — after all this time — needs correction:
my point is that if ID is proposing that a known cause of complexity is responsible for biological complexity, then ID is proposing that human beings were responsible – clearly a poor hypothesis. Alternatively, ID can propose an unknown cause that somehow has the same sort of mental and physical abilities as human beings. But in that case, ID would need to show evidence that this sort of thing exists.
Let’s take in slices:
>> my point is that if ID is proposing that a known cause of complexity>>
1: Design theory does not address mere complexity, but specified complexity, and particularly functionally specified complexity that requires a cluster of correct, properly arranged and coupled parts to achieve a function — often, in life forms, at cell-based level using molecular nanotech, codes and algorithms . . . such as the protein synthesis process.
>> . . . is responsible for biological complexity,>>
2: Biological, FUNCTIONALLY SPECIFIC complex organisation, e.g. the protein synthesis system etc. (More generally, functionally specified, complex organisation and/or associated information, FSCO/I, requires many well-matched components, correctly arranged and coupled to achieve function, such as the glyph strings in this English text, or the algorithmic function of strings in D/RNA used to guide protein assembly in the ribosome.)

Where that constraint on configuration to achieve function locks us to isolated islands of function in the configuration space of possible arrangements of components. Thus, beyond 500 – 1,000 bits of specified, complex arrangement to achieve function, we see a blind search challenge that overwhelms chance and mechanical necessity but is readily solved by intelligence, whether human [this text, with its underlying software and hardware] or beaver [dams adapted to stream specifics in a feat of impressive engineering] etc. Where we may simply measure FSCO/I using the Chi_500 threshold metric:
FSCO/I on the gamut of our solar system is detected when the following metric goes positive:
Chi_500 = I*S – 500, bits beyond the solar system threshold [with 1,000 bits being adequate for the observed cosmos]
in which I is a reasonable info metric, most easily seen as the length of the chain of Y/N questions needed to specify configuration in a field of possibilities, as is commonly done with AutoCAD files or the like,
with S a dummy variable defaulting to zero (chance as default explanation of high contingency, cheerfully accepting the possibility of false negatives), and set to one on noting good reason and evidence of functional specificity, e.g. the key-lock fitting of proteins sensitive to sequence and folding,
where 500 bits gives us a “haystack” sufficiently large to overwhelm the capacity of the solar system’s 10^57 atoms, each making 10^14 observations per second of chance 500-bit configurations [a fast chemical reaction rate] for 10^17 s,
comparable to taking a single straw-sized sample blindly from a cubical haystack of the possible configurations for 500 bits [3.27*10^150], 1,000 light years on a side — comparable to the thickness of our galaxy . . . light setting out when William the Conqueror attacked Saxon England in 1066 AD would still not have crossed the stack today,
so that if S = 1 and I > 500 bits, Chi_500 going positive convincingly points to design as best explanation, as such a blind search of a haystack superposed on our galactic neighbourhood would, with moral certainty beyond reasonable doubt, produce naught but the typical finding: a straw;
but, by contrast, on trillions of observed cases, design is the reliably known cause of FSCO/I.
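For those inclined to check the arithmetic for themselves, here is a minimal Python sketch of the metric just laid out; the function and variable names are my own labels for exposition, not part of any published code:

    import math

    def chi_500(I_bits, S, threshold=500):
        # Chi_500 = I*S - 500: positive only when functionally specific
        # information I (bits), flagged by S = 1, exceeds the solar system
        # threshold; use threshold=1000 for the observed cosmos.
        return I_bits * S - threshold

    # A functionally specific 1,000-bit configuration (S = 1):
    print(chi_500(1000, 1))    # 500 -> positive: design as best explanation
    # The same complexity without shown specificity (S = 0) defaults to chance:
    print(chi_500(1000, 0))    # -500 -> negative: no design inference

    # Sanity check on the haystack numbers above: ~10^57 atoms, each making
    # 10^14 observations per second for 10^17 s, yields ~10^88 observations,
    # vs 2^500 ~ 3.27*10^150 possible 500-bit configurations:
    observations = 10**57 * 10**14 * 10**17        # 10^88
    configs = 2**500                               # ~3.27*10^150
    print(math.log10(configs / observations))      # ~62: a 1-in-10^62 sample

The point is not the trivial code, but that the threshold test and the haystack arithmetic are simple, reproducible and open to checking by anyone.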
3: The rhetorical substitution made here therefore dodges a substantial case and sets up a strawman caricature, for which — given longstanding, repeated corrections across months and years — the error involved unfortunately has to be judged willful.
>> then ID is proposing that human beings were responsible – clearly a poor hypothesis.>>
4: Strawman.
5: First, the very names involved are the design inference and the theory of intelligent design. At no point is there a process of inference to human action or to any particular agent — only to a causal process that is known, per trillions of observations, to be not merely adequate to produce the phenomenon FSCO/I, but the ONLY observed process to do so.
6: This is multiplied by the needle-in-haystack blind search analysis, which points to the gross inadequacy of blind chance and mechanical necessity, on the gamut of the solar system or observed cosmos, to find relevant, deeply isolated islands of function.
7: Where, starting with beavers and the like, we have no good reason to infer that humans exhaust the actual — much less the possible — intelligences capable of intelligently directed contingency or contrivance, i.e. design.

8: As a further level of misrepresentation, the design inference is about causal process, not the identification of specific classes of agents or of particular agents. One first identifies that a factory fire is suspicious and infers arson on signs — signs indicating that more than blind chance and the mechanical necessities of starting and propagating a fire were at work — before calling in the detectives to try to identify the particular culprit.
9: This willful caricature, after years of correction, then sets up the next step:
>>Alternatively, ID can propose an unknown cause that somehow has the same sort of mental and physical abilities as human beings.>>
10: As has been pointed out to you, RDF, over and over again — and stubbornly ignored in the rush to set up and knock over a favourite strawman caricature —
the design inference process sets up no unknown cause [here, a synonym for an agent], but compares known, empirically evident causal factors and their characteristic or typical traces.
11: Mechanical necessity is noted for low-contingency natural regularities: e.g. guavas and apples reliably drop from trees with initial acceleration 9.8 m/s^2 [field strength 9.8 N/kg]; and, attenuated with the inverse square of distance [as the field spreads over the surface of a sphere], the same force field at the distance of the moon aptly accounts for its centripetal acceleration, grounding the Newtonian analysis of gravitation.
12: Blind chance tends to cause high contingency, but stochastically controlled contingency — similar to how a Monte Carlo simulation analysis explores reasonably likely clusters of possibilities in a highly contingent situation.

13: But some needles can be too isolated, and some haystacks too big relative to sampling resources, for us to reasonably expect to find one needle, much less the thousands that are in just the so-called simple cell, i.e. the cluster of proteins and the nanomachines involved.
14: So, we are epistemically entitled to infer that the only vera causa-plausible process that accounts for the needles coming up trumps is design — that is, intelligently directed contingency or contrivance.
15: Where also, the base of trillions of observations showing that design is the reliably known — and ONLY actually observed — causal process accounting for such FSCO/I makes FSCO/I a very strong, reliable sign of design as the key causal factor wherever it is observed.
16: This bit of inductive reasoning then exposes the selectively hyperskeptical rhetorical agenda in:
>>But in that case, ID would need to show evidence that this sort of thing exists.>>
17: Designers exist — human, beaver and more — and we have no good reason whatsoever to assume, assert, insinuate or imply that human and similar cases exhaust the possible cases of designers. So, designers exist and are therefore possible.
18: Likewise, FSCO/I, on a very strong empirical basis, is a highly reliable index of design.
19: Therefore, until someone can reasonably show otherwise empirically, we are inductively entitled to take the occurrence of FSCO/I — even in unexpected or surprising contexts — as evidence of design as the relevant causal process.
20: So, why the implicit demand for separate, direct empirical evidence of designers in the remote, unobserved past of origins? Why, especially, when it is joined to a willingness to assign causal success to utterly implausible mechanisms for FSCO/I such as chance and necessity — mechanisms that are not needle-in-haystack plausible and have never been observed to account for FSCO/I?
21: Selective hyperskepticism joined to flip-side hypercredulity, to substitute a drastically inferior explanation. In the wider context, typically for fear and loathing of the possibility of . . . shudder . . . “A Divine Foot” in the door of the halls of evolutionary-materialism-dominated science.
22: Of course, ever since 1984, with Thaxton et al, design theorists have been careful to be conservative, noting in effect that, for the case of what we see in the living cell and wider biological life, a molecular nanotech lab some generations beyond Venter et al would be adequate. But so locked in a death-battle with bogeyman “Creationists” are the materialists and fellow travellers that they too often refuse to acknowledge any point, regardless of warrant, that could conceivably give hope to Creationists.
23: So, the issues of duties to reason, truth and fairness are predictably given short shrift.
24: Oddly, most such activists are typically missing in action when we point out — from the thought of the lifelong agnostic, Nobel-equivalent-prize-holding astrophysicist Sir Fred Hoyle, and others — that the evidence of cosmological fine tuning, which sets up a world in which we can have C-chemistry, aqueous-medium, protein-using, cell-based life on the five or six most abundant elements, points to cosmological design; most credibly by a powerful, skilled and purposeful designer who set up physics itself to be the basis for such a world.
25: Here’s a key comment — just one of several — by Sir Fred:
From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of 12C to the 7.12 MeV level in 16O. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? . . . I am inclined to think so. A common sense interpretation of the facts suggests that a super intellect has “monkeyed” with the physics as well as the chemistry and biology, and there are no blind forces worth speaking about in nature. [F. Hoyle, Annual Review of Astronomy and Astrophysics, 20 (1982): 16.]
It seems that ideology rules the roost in present day origins science thinking (and in science education), even at the price of clinging to the inductively implausible in order to repudiate anything that might conceivably hint that design best accounts for our world. >>
___________
It is high time to take duties of care to fairness, accuracy and truth seriously, and to actually address the inductive evidence and the needle in haystack analysis challenge on the merits. END
PS: In the end, the habitual and insistent resort to red herrings and strawman caricatures of the design inference [not to mention ad hominems, just plain nasty or rude personalities, and coarse bully-boy nihilist disrespect on grounds of “I can get away with it” . . . ] by evolutionary materialist ideologues, enablers and fellow travellers speaks inadvertent volumes on the actual strength of the design inference on FSCO/I, on the actual inductive, observational merits. Sadly revealing, and pointing to a need for fresh thinking and a better attitude.
kairosfocus- If they didn’t misrepresent ID then they wouldn’t have anything to say. 😉
They seem oblivious to the fact that it is up to them to provide probabilities, so what else can they do but distort ID in order to make some point?
Joe, sadly, you have a sobering point. One that too many objectors refuse to attend to. KF
kairosfocus,
If I were an evolutionist and someone told me that the scientific way to dispense with ID is to step up and support evolutionism, that is where I would be spending my time. And to me it is very telling that our opponents do not even try to do so.
Joe, actually ID has a deliberate single point failure mode: simply show that FSCO/I is credibly — with good empirical plausibility on available atomic resources in the solar system or observed cosmos — a product of blind chance and/or mechanical necessity. For instance, a computer rig that credibly produces English text beyond 73 continuous ASCII characters on a blind chance search process. As you know, dozens of attempts were made over the years; all failed. Many inadvertently showed how FSCO/I reliably arises from design on a routine basis. Current evasiveness, definition-derby games, strawman caricatures and the like, including ad hominems and nasty personalities, should be understood in that context. KF
Spell that:
F-A-L-S-I-F-I-A-B-I-L-I-T-Y
As in empirically testable but robust to date.
KF
Right, and if they demonstrate that blind chance can produce a living organism — my bad for making it clear that evolutionism includes that — they would have nailed that single point failure mode.
So (it appears) we agree.
The EF mandates we take Newton’s rules of scientific investigation seriously. So we give necessity, then necessity and chance, the first crack at solving the puzzle. When that fails — due either to empirical science, the lack of probabilistic resources, or even the total failure to provide probabilities — we are then free to consider the design inference.
That means we wouldn’t even consider a design inference — at first blush anyway, as future data can overturn any current inference — if we determined that blind necessity and/or chance can account for what we are investigating. And all of that means our opponents seem to have all of the power, by having the ability to stop ID before it could even get going. Yet they prefer to play hopscotch by not even dealing with the first nodes of the process, jumping right into the final decision box where intelligent design is considered.
To me, and others, that is a sure sign that they have absolutely nothing. And that leads us back to the topic of your post. 😉
The EF I was referring to is Dembski’s simpler model.
F/N: Wiki — testifying against known ideological interest — on random document production shows, in short, that a space of ~10^50 configurations is searchable within solar system scope resources; but that is a factor of 10^100 short of the 3.27*10^150 threshold we have put on the table.
Dismissing this as “big numbers” or the like is a label, caricature and dismiss tactic, not a serious response.
It is time for fresh thinking.
KF
Joe, my elaboration — which WmAD liked, BTW — was designed to address specific concerns, by taking an object etc. and looking at it methodically, aspect by aspect. Then, the fly-out boxes showed onward actions: this is not a “science stopper” but a working out of often overlooked aspects of a serious scientific investigation. Notice the next aspect/onward inquiry focus, indicating iteration until some scope-limiting criterion is reached. A flowchart rather than an outright algorithm, but obviously using the old programming flowchart approach. A lot of work would have to be filled in for each box, on the ground. Likewise, this is actually tied to the Chi_500 metric, as the decision nodes feed into the variables I and S. KF
Yes, kairosfocus, your elaboration is much better than the original. It’s just that I have those three nodes of the original implanted, and sometimes it is difficult to change. Once I realized that your elaboration only has two, I had to clarify what I had said in my previous post.
True, the original EF was much too simple and needed a touch-up. Thank you for doing so.
It’s actually a three-possibility case structure with alternatives addressing the two possible switch points.
Joe
The AND involved in decision node 2 underscores that complexity or specificity in isolation is not enough; the issue is JOINT, single-aspect complexity AND specificity beyond a relevant threshold, set by needle-in-haystack requisites on solar system or cosmos scope resources. (Notice, this actually goes beyond Dembski.)
High contingency rules out default 1, mechanical, lawlike necessity.
Joint complexity and specificity rule out default 2, blind chance searching a space of possible configurations. (Think, here, of an ASCII text string or bit string.)
At this point, FSCO/I has been isolated and identified.
Intelligently directed configuration is the only empirically warranted, needle-in-haystack-plausible explanation for such FSCO/I; cf. the infographic.
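To make the cascade concrete, here is a minimal Python sketch of the per-aspect filter logic just described — the predicate names are mine, purely illustrative, not a published implementation:

    from collections import namedtuple

    # One aspect of the object under study, with the three judgments the
    # filter needs; the field names here are mine, for illustration only.
    Aspect = namedtuple(
        "Aspect",
        "highly_contingent complex_beyond_threshold functionally_specific")

    def explanatory_filter(a):
        # Default 1: low contingency -> mechanical, lawlike necessity.
        if not a.highly_contingent:
            return "necessity"
        # Default 2: unless JOINTLY complex beyond threshold AND specific,
        # blind chance remains the default explanation of high contingency.
        if not (a.complex_beyond_threshold and a.functionally_specific):
            return "chance"
        # FSCO/I isolated and identified: design as best explanation.
        return "design"

    print(explanatory_filter(Aspect(False, True, True)))   # necessity
    print(explanatory_filter(Aspect(True, True, False)))   # chance
    print(explanatory_filter(Aspect(True, True, True)))    # design

Notice how the structure gives necessity, then chance, the first crack, exactly as described: design is only reached when both defaults fail.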
This is not rocket science; it is based on reasonable logic and empirical evidence. But it is hated and despised to the point of abusive bully-boy behaviour, because it does not sit well with the comfortable materialist ideology that objectors like TWT want. (I think he has anger management issues.)
I will say here that I saw that Joe Felsenstein has had the decency to object to what has been going on at Prof Moran’s blog on the outing-tactics, rudeness and personal abuse front.
It is seriously time for a bigtime wake up.
What we are seeing comes straight out of the nihilism that Plato warned against so long ago as stemming from evolutionary materialism — 2,350 years ago, in The Laws, Bk X. Which of course the bully-boys want to brush aside.
Revealing.
KF
Joe, remember how many times we had to hammer away at this point with EL et al, while they were still evading and refusing to reckon with it? To the point where I had to pointedly ask if she had read — or, by extension, could read — a simple flowchart? I think we should never underestimate the blinding power of a demanding ideology such as Lewontinian-Saganian a priori evo mat. KF
Why don’t you focus on getting the ID position published in academic papers, so it’s there in black and white and accordingly much more difficult to distort? Endless blog postings do not appear to be progressing ID.
In academia 60-70% of the audience is either theistic or at least open to a non-materialist viewpoint like ID (only about 30% of academics and 40% of scientists claim to be atheist).
Sorry guys – posting from iPhone, so short version:
Lots of copypasta, but the emperor has no clothes. Do some calcs; provide a list of FSCO/I values for things. Otherwise it’s simply a distraction.
These discussions become cyclical.
Here is something I posted six years ago as a proposal for an ID manifesto; the link to the comment:
http://www.uncommondescent.com.....ent-296129
This was written as a reaction to another ID supporter’s view of ID, with which I did not completely agree. As I said six years ago, this could use some refinement but essentially describes the ID position as I know it.
None of this is new here. The interesting thing is that it keeps getting repeated. The anti-ID people are like the movie Groundhog Day: they keep repeating the same nonsense over and over. Each day is essentially the same; it is just that on each new day a slight variation is presented.
As in Groundhog Day, it never goes anywhere till a fundamental different path is taken. The ID people know what the path is.
CLAVDIVS:
Yet there isn’t any support for unguided evolution in peer review.
Earth to rich — functional sequence complexity, i.e. CSI, has been calculated for some proteins, and it is in peer review. Don’t blame us for your willful ignorance.
Provide a list of all CSI calcs. If it’s a real, usable thing, I imagine there will be a long list. Joe, feel free to add your own CSI-of-cake example.
rich — you do realize that you come off as a little snot-nosed brat. It has all been covered on my blog — all the posts that you either choked on or refused to participate in.
I have explained to you how to do it. You choked. I provided a peer-reviewed paper that uses that methodology and you ran away.
The problem is that your position doesn’t have any methodology beyond bald declaration, and that means that when a valid methodology is put in front of you, you don’t have any idea what it is.
Here, have another choke:
Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein.
Each protein consists of a specific sequence of amino acid residues, which is encoded by a specific sequence of processed mRNA. Each mRNA is encoded by a specific sequence of DNA. The point being, biological information refers to the macromolecules that are involved in some process, be that transcription, editing, splicing, translation, or functioning proteins. No one measures the biological information in a random sequence of DNA, nor in any DNA sequence not directly observed in some process. The best one can do with any given random DNA sequence is figure out its information-carrying capacity. You couldn’t tell if it was biological information without a reference library.
And Leslie Orgel first talked about specified complexity wrt biology. As far as I can tell, IDists use the terms in the same way: Dembski and Meyer make it clear that it is sequence specificity that is central to their claims.
That is the whole point: if sequence specificity matters, then the tighter the specification, the less likely blind physical processes could find it. Yup, those dreaded probabilities again; but seeing as yours doesn’t come with a testable model, it’s all we have. See Is Intelligent Design Required for Life?
With that said, to measure biological information, i.e. biological specification, all you have to do is count the coding nucleotides of the genes involved in that functioning system, multiply by 2 (four possible nucleotides = 2^2 = 2 bits each) and then factor in the variation tolerance.
The approach draws on Kirk K. Durston, David K. Y. Chiu, David L. Abel and Jack T. Trevors, “Measuring the functional sequence complexity of proteins,” Theoretical Biology and Medical Modelling, Vol. 4:47 (2007).
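As a toy sketch of that counting procedure (assumptions: 2 bits per coding nucleotide, with the variation tolerance entered as an assumed fraction of constrained positions — the figures below are hypothetical, for illustration only):

    def naive_specification_bits(coding_nucleotides, constrained_fraction=1.0):
        # Raw capacity: 2 bits per nucleotide (4 states = 2^2).
        raw_bits = 2 * coding_nucleotides
        # Discount by the assumed fraction of functionally constrained positions.
        return raw_bits * constrained_fraction

    # Hypothetical 300-codon gene (900 coding nucleotides), assuming ~70%
    # of positions are constrained (0.7 is an illustrative figure, not data):
    print(naive_specification_bits(900, 0.7))   # 1260.0 bits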
ETA for OMagain:
Sorry — let me be precise: provide a list of all CSI calcs. If it’s a real, usable thing, I imagine there will be a long list. Joe, feel free to add your own CSI-of-cake example.
Proof of the pudding…
Rich,
pardon, but peer review — in which, frankly, I have but little interest for this field — is little more than an appeal to authority.
If that is what you want, there are dozens of peer-reviewed ID-supportive papers linked to biology, and there must be hundreds on the cosmological side. Peer review is not the problem — other than that it is a potentially threadjacking side track.
Willful distortion in the teeth of correction is.
The measurement of info-carrying capacity is an established field — for what, nigh on seventy years now. The simplest approach is to measure the chain of Y/N questions needed to specify a state, which under well-behaved circumstances has the same properties as a weighted-sum, log-probability metric. Shannon’s famous paper used both, save that he used ten-state units in discussing direct measures.
The common file size metrics we use are like that.
When it comes to the most relevant case in the bio-world, any given member of A/G/C/T (or U) can follow any other; i.e., under random circumstances, laying aside chirality and the like, the a priori odds would go like 1/4 each. In protein codes, for similar reasons as with any other code, there will be a bit of redundancy, so this is not quite exactly correct, but it’s good enough for a start. Four-state elements are directly two bits apiece, so a protein code of 250 three-letter codons would run to 750 bases. As a first rough metric, the raw carrying capacity is 1,500 bits; i.e., a typical protein is already at or beyond the threshold of what our solar system or even the observed cosmos could search out by blind mechanisms.
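A quick check of that arithmetic, on the stated assumptions (four equiprobable bases, 250 three-letter codons):

    import math

    codons = 250
    bases = 3 * codons                    # 750 bases
    bits_per_base = math.log2(4)          # 2 bits per four-state element
    raw_capacity = bases * bits_per_base
    print(raw_capacity)                   # 1500.0 bits -- past both the
                                          # 500- and 1,000-bit thresholds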
If you cannot follow and understand simple calcs and thresholds like the above, there is but little hope that something like Durston’s work on 15 protein families will even make basic sense to you. For instance, it pivots on Shannon’s H info metric, which he termed entropy, and which, on the informational view of thermodynamics, is connected to thermodynamic entropy. And, if the 15-protein-families calc and presentation do not mean anything to you, or are not perceived as cases in point by you, demanding “all” CSI calcs is pointless, apart from as an exercise in selective hyperskepticism. As it is, if you simply were to read the already-linked derivation of Chi_500, you would find more than enough explanation of how FSCO/I can be and is measured, and also a link onwards to Durston et al.
If you take up the Durston calc of going from a flat random null, to a ground state, to a functional state, you will reduce these 2 bits per base [or 4.32 bits per amino acid in a protein] somewhat, but not enough to make a difference in aggregate — noting that a typical cell needs hundreds and hundreds of diverse proteins to work, all of which have to pass through the chicken-and-egg problem of the ribosome, the protein-assembly NC machine. Note the diagram from Wiki in the OP.
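For a rough feel of the Durston-style move — my own much-simplified sketch, not their published algorithm — the functional bits can be estimated as the per-site drop in Shannon uncertainty H, from a flat random null over the 20 amino acids, across an alignment of functional sequences (the toy alignment below is invented):

    import math

    def site_entropy(column):
        # Shannon H (bits) for one alignment column of amino acid letters.
        counts = {}
        for aa in column:
            counts[aa] = counts.get(aa, 0) + 1
        n = len(column)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def functional_bits(alignment):
        # Sum over sites of (H_null - H_functional); H_null = log2(20) ~ 4.32.
        h_null = math.log2(20)
        return sum(h_null - site_entropy(col) for col in zip(*alignment))

    # Invented 4-residue 'functional' alignment, illustration only:
    print(functional_bits(["MKVL", "MKIL", "MRVL"]))   # ~15.5 functional bits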
The protein manufacturing system alone is already well beyond the threshold of what blind chance and mechanical necessity, acting on the gamut of the observed cosmos, can reasonably do. The only reasonable, known causal force capable of that much FSCO/I is design.
Codes, algorithms, digital information in storage tapes, organised functional machinery etc.
All point to design.
So, the evolutionary materialist origins narrative cannot even get to the first functional cell, much less the tree of life.
Grant a cell-based world, and you face population genetics and the time needed to fix changes to effect major body plans, multiplied by an utter lack of observational basis to infer — apart from question-begging a priori materialism — that such happened by chance variation plus differential reproductive success in niches, leading to descent with incremental modification, thence branching-tree, body-plan-level evolution.
All of this narrative lacks empirical warrant, but it is often presented as though it were as certain as gravity — which we directly observe.
The certainty lives in the a priori, not the evidence. The a priori already demands something much like that narrative, so any tiniest hint of a shadow of something that may fit is blown up, scare-headlined and enshrined by the lab-coat-clad new magisterium.
But all of this is an indulgence of a tangential side track.
The point of the OP is that the basic design argument is being distorted willfully, in the teeth of copious correction and opportunity to get it right.
Nothing you have had to say addresses this cogently — and it is a very serious issue indeed.
One that demands correction forthwith.
It is that insensitivity by evo mat advocates, enablers and fellow travellers to duties of care to truth, accuracy, fairness and more that is utterly, inadvertently revealing.
Might and manipulation make ‘right’ is the credo of nihilism.
Which should give us sobering pause.
It is time for fresh thinking.
KF
Oh, I see Joe has already linked the paper by Durston et al. Durston, BTW, did his PhD on these things, in biophysics — in Canada, beyond the reach of the thought police. Which should tell you something.
rich (chokes):
I just showed you how to do it. Get started. And feel free to demonstrate how blind and undirected processes can account for what you are calculating.
Why am I not surprised that you are proud of your inability to grasp a very simple example.
So you think the Durston FSC paper counts. I’ll get to that later. Is that *all* you have?
So Joe, the reason there’s no CSI calcs is that we’ve not done them for you?
Rich, why have you refused to attend to already-given info and cases? This seems to be DDD #8, while your own posts are examples of FSCO/I — and indeed there are trillions of cases in point just online. Simply look at standard file sizes for relevant functional document files, and of course DNA code segments for proteins, starting at two bits per base or six per three-base codon. You are actually also providing an example of a willful distortion of the design inference on FSCO/I, and of course of selective hyperskepticism. The fact that DNA incorporates coded info is not in serious doubt; that info can be quantified, it is functionally specific, and it is often well beyond 500 – 1,000 bits. This also seems to be a desperate distraction from what is patent. All of which inadvertently points to the actual strength of the design inference case, given the scorched-earth rhetoric being used by those who, for ideological reasons, seem backed into the corner of trying to deny its existence and its ability to be given a quantitative information measure. KF
KF – you’re a terrible bluffer. If you had many examples you would have posted them and sent me home with my tail between my legs. But instead we get “[outing tactic snipped — ed]”:
[link to abusive site snipped — ed]
Rich, you are talking blue smoke and mirrors, especially after you have been given actual citations and links, instructions on how to find FSCO/I values for cases using a relatively simple metric, a grounding for the metric, and information on not only ordinary files on computers but also DNA strings. You have simply shown that you are in desperate denial. Worse, you have now linked to an abusive site that hosts materials crossing the threshold of civility, in order to indulge in name-calling ad hominems. KF
F/N: Just as an example, Rich’s comment at 22 comes to 198 ASCII characters, at 7 bits/character, in reasonably recognisable English: I = 1,386 bits, S = 1, so Chi_500 = 886 bits beyond the solar system limit. Designed, as is separately known. This is offered only to underscore the unreasonableness of the behaviour being indulged in by this objector. KF
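For the record, the arithmetic under those stated assumptions:

    I = 198 * 7          # 1386 bits of raw capacity
    S = 1                # functionally specific (recognisable English)
    chi = I * S - 500
    print(I, chi)        # 1386 886 -> 886 bits beyond the solar system threshold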
PS: on Rich’s resort to abusive behaviour, now snipped, I have terminated discussion in this thread.
PPS: Observe, there has been utter unresponsiveness on the focal issue in the OP — revealing, given what is patent.