
In a recent comment, DWG has challenged Design thinkers who object to a priori materialism as a claimed necessary criterion of science:
________
KF: >>Origins science is under captivity to an evolutionary materialist a priori imposition, as Lewontin and others summarised.>> [Objectors, kindly read the onward linked before shooting off the usual talking points.]
DWG:>> Science has carved out for itself a purely and strictly material domain. This was done deliberately, and with full knowledge that there may be extremely important matters which lie outside this domain, but which therefore lie outside the competence of science to investigate or even comment on. And so Philip Johnson is exactly correct. The materialism MUST come first, because if it does not, it would not be possible to devise the scientific method, which rests on that materialism entirely. While it might often seem useful to allow a divine foot in the door, the result would not be science, and the scientific method could not be used . . . >>
____________
Of course, DWG leaves off the common view among those enamoured of this sort of scientism: that if one cannot reduce something to a scientific framework, it is not serious knowledge. Indeed, that is exactly what Lewontin implied when he said:
. . . the problem is to get [people] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting] . . . To Sagan, as to all but a few other scientists, it is self-evident [[actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question, confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . . ] that the practices of science provide the surest method of putting us in contact with physical reality . . .
For, to such a priori materialists, physical reality equals reality.
And so, we see the context for the now notorious remarks:
. . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [Cf here on if you think the immediately following words JUSTIFY the above and turn this clip into quote-mining.]
That is why Design thinker Philip Johnson was fully justified in his retort.
In addition, DWG seems to be ignorant of the basic point that C S Lewis was so fond of underscoring: unless there is a coherent, intelligible, consistent natural order, the miraculous would not stand out from the chaos as a sign pointing beyond the usual way of the world.
That is why, as Nancy Pearcey has aptly highlighted, it is supernaturalistic theists (and even — shudder! — Bible-believing Creationists) who actually by and large built the foundations of modern science, including Newton, Galileo and many others. These worked in the tradition that in doing science one thinks God’s creative and sustaining thoughts for the world after him. Indeed, their number includes Nobel laureates and other eminent scientists.
But there is a more interesting side to all this, for the target of all this is the design inference.
That is, the key talking point being pushed is the notion that we are dealing with the natural vs the supernatural (which can do anything it pleases so the cosmos reduces to chaos once such is let in the door, foot first or otherwise), so to preserve the “integrity” of science such must be excluded a priori.
But in fact the relevant contrast, ever since Plato in The Laws, Bk X, is nature vs art, i.e. chance and/or mechanical necessity vs intelligently directed configuration. Thus, we see the logic of the explanatory filter:

For instance, that is what the UD Weak Argument Correctives, no. 17 (cf also 16 and 18), highlights:
17] Methodological naturalism is the rule of science
Methodological naturalism is simply a quite recently imposed “rule” that (a) defines science as a search for natural causes of observed phenomena AND (b) forbids the researcher to consider any other explanation, regardless of what the evidence may indicate. In keeping with that principle, it begs the question and roundly declares that (c) any research that finds evidence of design in nature is invalid and that (d) any methods employed toward that end are non-scientific. For instance, in a pamphlet published in 2008, the US National Academy of Sciences declared:
In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations. [Science, Evolution and Creationism, p. 10. Emphases added.]
The resort to loaded language should cue us that there is more than mere objective science going on here!
A second clue is a basic fact: the very NAS scientists themselves provide instances of a different alternative to forces tracing to chance and/or blind mechanical necessity. For, they are intelligent, creative agents who act into the empirical world in ways that leave empirically detectable and testable traces. Moreover, the claim or assumption that all such intelligences “must” in the end trace to chance and/or necessity acting within a materialistic cosmos is a debatable philosophical view on the remote and unobserved past history of our cosmos. It is not at all an established scientific “fact” on the level of the direct, repeatable observations that have led us to the conclusion that Earth and the other planets orbit the Sun.
In short, the NAS would have been better advised to study the contrast of natural vs artificial (or, intelligent) causes, than to issue loaded language over natural vs supernatural ones.
Notwithstanding, many Darwinist members of the guild of scholars have instituted or supported the question-begging rule of “methodological naturalism” ever since the 1980s. So, if an ID scientist finds and tries to explain functionally specified complex information in a DNA molecule in light of its only known cause, intelligence, supporters of methodological naturalism will throw the evidence out or insist that it be re-interpreted as the product of processes tracing to chance and/or necessity, regardless of how implausible or improbable the explanations may be. Further, if the ID scientist dares to challenge this politically correct rule, he will be disenfranchised from the scientific community and all his work will be discredited and dismissed.
Obviously, this is grossly unfair censorship.
Worse, it is massively destructive to the historic and proper role of science as an unfettered (but intellectually and ethically responsible) search for the truth about our world in light of the evidence of observation and experience.
But all of this raises the onward question, on how the ID-style inference to design is justified. This was picked up on in the response to DWG at 86 in the same thread:
____________
KF: >> . . . the sort of a priori materialistic censorship we see you trying to justify locks out the very intelligence that scientists must use to do science at all, and the question as to whether that intelligence may sometimes leave empirically testable, reliable signs that point to its source.
For instance, in comment 82 just above, you put up a post in this thread in English, using 1612 ASCII characters. Based on the 7-bit code, the number of contingent possibilities for that many characters is: 128^1612 ~ 6.6*10^3,396.
We may ignore for our purposes the debates on the redundancy of English as a digital code and the different proportions of letters in English, as on pure chance each character would be equiprobable. So, we are dealing with 7*1612 = 11,284 bits. (At even 1 bit per character as an average, we are still well beyond the resources of the observed cosmos so this is without loss of materiality of the argument.)
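The arithmetic here is easy to check; the following is a small Python sketch (not part of the original comment) verifying the configuration count and bit tally for a 1612-character, 7-bit ASCII post:

```python
import math

chars = 1612        # ASCII characters in the post at comment 82
bits_per_char = 7   # 7-bit ASCII code, 128 possible symbols per character

# Express 128^1612 in scientific notation via base-10 logarithms,
# since the raw integer has thousands of digits
log10_configs = chars * math.log10(128)
exponent = math.floor(log10_configs)
mantissa = 10 ** (log10_configs - exponent)
print(f"configs ~ {mantissa:.1f} * 10^{exponent}")  # ~ 6.6 * 10^3396

# Total information capacity in bits
print(bits_per_char * chars)  # 11284
```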
Applying the log-reduced Chi metric on the gamut of our solar system of some 10^57 atoms (mostly Hydrogen in the sun, BTW), we can see:
Chi_500 = I*S – 500
Where I = 11,284, and, as we are dealing with an instance of a code known to be vulnerable to perturbations, the specificity dummy variable S is 1.
Chi_500 = 10,784 bits beyond the solar system threshold.
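The log-reduced Chi metric as defined above is a one-line computation; this Python sketch (function name mine, for illustration) reproduces the figure:

```python
def chi_500(i_bits, s):
    """Log-reduced Chi metric on the 500-bit solar-system threshold.

    i_bits -- information measure in bits
    s      -- specificity dummy variable: 1 if functionally specific, else 0
    Returns bits beyond (positive) or short of (negative) the threshold.
    """
    return i_bits * s - 500

print(chi_500(11284, 1))  # 10784 bits beyond the solar-system threshold
print(chi_500(11284, 0))  # -500: without specificity, no design inference
```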
The import of this is that the only credible cause of something that has that many functionally specific bits is intelligence as we OBSERVE it, based on the challenge of finding islands of specific function in a large space of possibilities, beyond the solar system [our effective universe] threshold. (FYI, over the time since the usual estimate for the big bang, 10^57 atoms would go through {about 10^117 Planck time quantum states}, where it takes about 10^30 such to do the fastest — ionic — chemical reactions. 500 bits is about 10^150 possibilities, {33} orders of magnitude beyond; that is, a search of 10^117 steps at most will not sample enough of the possibilities to plausibly capture something that is UNrepresentative of the distribution as a whole.)
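The state-count estimate in the parenthesis can likewise be checked in a few lines; this sketch assumes the round figures used above (10^57 atoms, ~10^17 s, Planck time ~5.39*10^-44 s):

```python
import math

atoms = 10 ** 57        # rough atom count for the solar system
t_seconds = 10 ** 17    # approximate time since the big bang, in seconds
planck_time = 5.39e-44  # seconds per Planck-time state

# Total Planck-time quantum states available to all atoms
states = atoms * (t_seconds / planck_time)
print(math.log10(states))  # ~117.3, i.e. about 10^117 states

# 500 bits span about 10^150 possibilities; the shortfall in orders of magnitude:
space = 10 ** 150
print(round(math.log10(space)) - round(math.log10(states)))  # 33
```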
And, plainly, functionally specific configs are going to be absolutely overwhelmed by gibberish in the space of possibilities.
If you are looking for needles in haystacks but sample only {1 in 10^33} of the haystack, overwhelmingly you are going to be picking up a tiny bit of straw. The gamut of search is not reasonable relative to the isolation of the target in the field of possibilities.
To give an idea, let us say that a needle and a straw both weigh about a gram. {Where also, using the 10^30 states for a chemical level atomic event, the number of possible events in our solar system in 10^17 s is about 10^87. That is 1 in 10^63 of 10^150. A cubical hay bale made up of 10^63 1 g straws would weigh in at 10^57 tonnes, assuming a density equal to water. That’s 10^19 m on the side, or about 1,000 light years. Where our galactic disk is about that thick.}
[EXPLANATION: A one straw-sized sample from a {1,000 LY} across bale, i.e. big enough to swallow our solar system and its neighbourhood without noticing, is obviously overwhelmingly likely to pick up only straw. And remember, what this means is that our solar system is comparatively pulling just one straw-sized sample. {So, let us superpose the bale on the sun’s neighbourhood. It would be obvious that such a one straw sized sample will with overwhelming likelihood pick up the bulk of the distribution: straw, not anything else.} ]
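The bale figures quoted above (10^63 straws, 10^57 tonnes, ~1,000 light years on a side) follow from simple unit conversions; this Python sketch reproduces them under the stated assumptions (1 g per straw, density of water):

```python
import math

events = 10 ** 87        # chemical-level atomic events in the solar system in ~10^17 s
space = 10 ** 150        # possibilities covered by 500 bits
straws = space // events # one straw per unsampled possibility: 10^63 straws
print(straws == 10 ** 63)

mass_g = straws * 1          # 1 g per straw
mass_tonnes = mass_g / 1e6   # 10^57 metric tonnes

# Cube side at the density of water (1 g/cm^3): volume = 10^63 cm^3
side_cm = mass_g ** (1 / 3)
side_m = side_cm / 100
light_year_m = 9.46e15
print(side_m / light_year_m)  # ~1,000 light years on a side
```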
So, we see a simple example of how functionally specific complex organisation and associated information [FSCO/I] beyond a reasonable complexity threshold is a reliable sign of design. An empirical sign.
So, we immediately see that we have a reasonable empirical procedure and test for inference to design on specified complexity beyond a threshold. A process that is subject to empirical test and falsification on the very simple challenge: provide a good OBSERVED counter-instance where it is credible that undirected chance and necessity led to FSCO/I beyond say the solar system threshold.
It will not surprise you, I am sure, to know that there is an Internet full of cases in point, and by contrast there are zero counter-examples that meet the required criterion. (If there were, the sort of debate points commonly used to object to the design inference would not be used.)
Now, a minimally complex cellular life form may have say 200 proteins, averaging say 200 AAs each, represented in DNA at three 4-state letters per AA, plus say 10% more for regulatory elements. 660 * 200 = 132,000 4-state elements, i.e. 264 k bits. The config space for this is about 8.3*10^79,471 possibilities.
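Those figures, too, are a short calculation; this Python sketch walks through the same back-of-envelope genome estimate:

```python
import math

proteins = 200
aa_per_protein = 200
bases_per_aa = 3             # three 4-state DNA letters code one amino acid
regulatory_overhead = 0.10   # assumed 10% extra for regulatory elements

bases = int(proteins * aa_per_protein * bases_per_aa * (1 + regulatory_overhead))
print(bases)      # 132000 four-state elements
print(2 * bases)  # 264000 bits, at 2 bits per 4-state element

# Config space 4^132000, expressed in scientific notation
log10_space = bases * math.log10(4)
exp = math.floor(log10_space)
print(f"~ {10 ** (log10_space - exp):.1f} * 10^{exp}")  # ~ 8.3 * 10^79471
```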
An unintelligent chemical soup with possibilities for all sorts of cross reactions and breakdown mechanisms for such will simply not get there on the gamut of our solar system or the wider observed cosmos. Nor is there any credible observed sign of a ladder of development from simple initial components to that degree of specified complexity, regardless of the many just so stories commonly trotted out.
Since life as we observe it is reasonably at least of that sort of complexity, and is known to be functionally highly specific, then, just as we can see from the example of comment 82, we have excellent reason to infer that the original source of the living cell was design, on the sign of FSCO/I.
Nor does the fact of self-replication in the cell undermine this, for what we have is a case of a metabolising automaton with the ADDITIONAL capacity of replicating itself from a stored copy of digitally coded, algorithmically functional information, as von Neumann discussed 60 years ago: a von Neumann self-replicator. The vNSR facility, added to the ability to process energy and materials from the environment to build up and break down internal components, is yet another manifestation of FSCO/I.
What is happening is that instead of dealing with such on the merits, politically correct censorship is locking out the ability of science to freely and responsibly think through the issues just outlined.
That’s politics, and censorship, not science.
Period.
So, nope, it is simply not true that science MUST only explain in a materialistic circle of chance and necessity acting blindly on matter and energy, and in fact it was not even true of the co-founder of the theory of evolution, Wallace.
Call off the rottweilers, it is time for science to proceed as an unfettered but intellectually and ethically responsible progressive pursuit of the empirically based truth about our world, through experiment, observation and related logical and mathematical analysis discussed freely and fairly among the informed.
Otherwise the core integrity of science will utterly and irretrievably break down. >>
_____________
In short, the design inference is saying that as a first threshold, we must at minimum be looking at the sort of search that is comparable to pulling a single straw-sized sample from a haystack about 1,000 light years on a side. In such a situation, it is unreasonable to expect that a sample of that size will turn up a needle rather than straw; except by design.
For instance, if we had an autonomous robot with a scanner that could detect needles, say by using mm-wavelength radar, and then go to each detected needle and pick it up, we would have a much higher likelihood of finding needles scattered amidst the straw.
But of course, such a robot would be a product of art, and would be using a device that creates a warmer/colder signal that allows it to home in on needles, within the resources available.
Just so, design theory and the associated explanatory filter, are predicated on the advantage of using smarts not a chance-based random walk rewarded by trial and error success, as a precursor to allowing hill-climbing.
But, you may protest, the set task is too complex. 500 bits is a “lot.”
In fact, 72 or so ASCII characters worth of information is a very small amount in which to do any serious real-world control, as any experienced programmer can tell you. And we are looking at needing to get to a system capable of informationally controlling and building a metabolic entity that, additionally, needs to be self-replicating.
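As a quick check of the "72 or so characters" figure (a trivial Python sketch, dividing the threshold by the bits per ASCII character):

```python
bits_threshold = 500  # the solar-system complexity threshold, in bits
bits_per_char = 7     # 7-bit ASCII

chars = bits_threshold / bits_per_char
print(chars)  # ~71.4, i.e. roughly 72 ASCII characters
```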
We are now beginning to explore what it takes to build such a self-replicating system, and you can see what it already takes to build a unit that replicates about half its parts, here.
The observation that about 100,000 bits of information is a baseline for a self-replicating, cell-based life form is not to be lightly brushed aside.
Nor is the point that we have precisely one known source of the required symbolic codes and algorithms: design.
As the needle-in-the-haystack challenge above highlights.
Of course, such are not a priori conclusions; they can be overthrown by the gold standard approach of science: demonstrate a credible counter-example.
A challenge that, after 60 years of serious experimental effort, has not been met.
By a long shot. END
_______________
* Adjusted, on a corrected calculation, June 4, 2012.