The proverbial needle in the haystack
Uncommon Descent Serving The Intelligent Design Community

Of needles in haystacks (1,000 light years* on the side) and materialist a priorism in science

A needle in a haystack

In a recent comment, DWG has challenged Design thinkers who object to a priori materialism as a claimed necessary criterion of science:


KF: >>Origins science is under captivity to an evolutionary materialist a priori imposition, as Lewontin and others summarised.>> [Objectors, kindly read the onward linked before shooting off the usual talking points.]

DWG:>> Science has carved out for itself a purely and strictly material domain. This was done deliberately, and with full knowledge that there may be extremely important matters which lie outside this domain, but which therefore lie outside the competence of science to investigate or even comment on. And so Philip Johnson is exactly correct. The materialism MUST come first, because if it does not, it would not be possible to devise the scientific method, which rests on that materialism entirely. While it might often seem useful to allow a divine foot in the door, the result would not be science, and the scientific method could not be used . . . >>


Of course, DWG is leaving off that it is the common view among those enamoured of this sort of scientism, that if one cannot reduce something to a scientific framework, it is not serious knowledge. Indeed, that is exactly what Lewontin implied when he said:

. . .   the problem is to get [people] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations, and to accept a social and intellectual apparatus, Science, as the only begetter of truth [[NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]. . . . To Sagan, as to all but a few other scientists, it is self-evident [[actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question , confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . ] that the practices of science provide the surest method of putting us in contact with physical reality . . .

For, to such a priori materialists, physical reality equals reality.

And so, we see the context for the now notorious remarks:

. . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [[another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [[i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door. [Cf here on if you think the immediately following words JUSTIFY the above and turn this clip into quote-mining.]

That is why Design thinker Philip Johnson was fully justified in retorting as follows:

For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”
. . . .   The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

In addition, DWG seems to be ignorant of the basic point that C S Lewis was so fond of underscoring: unless there is a coherent, intelligible, consistent natural order, the miraculous would not stand out from the chaos as a sign pointing beyond the usual way of the world.

That is why, as Nancy Pearcey has aptly highlighted, it is supernaturalistic theists (and even — shudder! — Bible-believing Creationists) who actually by and large built the foundations of modern science, including Newton, Galileo and many others. These worked in the tradition that science thinks God’s creative and sustaining thoughts for the world after him. Indeed, the number of such includes holders of the Nobel Prize and other eminent scientists.

But there is a more interesting side to all this, for the target of all this is the design inference.

That is, the key talking point being pushed is the notion that we are dealing with the natural vs the supernatural (which can do anything it pleases so the cosmos reduces to chaos once such is let in the door, foot first or otherwise), so to preserve the “integrity” of science such must be excluded a priori.

But in fact the relevant contrast, ever since Plato in The Laws, Bk X, is nature vs art, i.e. chance and/or mechanical necessity vs intelligently directed configuration. Thus, we see the logic of the explanatory filter:

The (per aspect) Design Inference Explanatory Filter

For instance, that is what the UD Weak Argument Correctives, no. 17 (cf also 16 and 18), highlights:

17] Methodological naturalism is the rule of science

Methodological naturalism is simply a quite recently imposed “rule” that (a) defines science as a search for natural causes of observed phenomena AND (b) forbids the researcher to consider any other explanation, regardless of what the evidence may indicate. In keeping with that principle, it begs the question and roundly declares that (c) any research that finds evidence of design in nature is invalid and that (d) any methods employed toward that end are non-scientific. For instance, in a pamphlet published in 2008, the US National Academy of Sciences declared:

In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations. [Science, Evolution and Creationism, p. 10. Emphases added.]

The resort to loaded language should cue us that there is more than mere objective science going on here!

A second clue is a basic fact: the very NAS scientists themselves provide instances of a different alternative to forces tracing to chance and/or blind mechanical necessity. For, they are intelligent, creative agents who act into the empirical world in ways that leave empirically detectable and testable traces. Moreover, the claim or assumption that all such intelligences “must” in the end trace to chance and/or necessity acting within a materialistic cosmos is a debatable philosophical view on  the remote and unobserved past history of our cosmos. It is not at all an established scientific “fact” on the level of the direct, repeatable observations that have led us to the conclusion that Earth and the other planets orbit the Sun.

In short, the NAS would have been better advised to study the contrast between natural vs artificial (or, intelligent) causes, than to issue loaded language over natural vs supernatural ones.

Notwithstanding, many Darwinist members of the guild of scholars have instituted or supported the question-begging rule of “methodological naturalism,” ever since the 1980’s. So, if an ID scientist finds and tries to explain functionally specified complex information in a DNA molecule in light of its only known cause: intelligence, supporters of methodological naturalism will throw the evidence out or insist that it be re-interpreted as the product of processes tracing to chance and/or necessity; regardless of how implausible or improbable the explanations may be. Further, if the ID scientist dares to challenge this politically correct rule, he will then be disfranchised from the scientific community and all his work will be discredited and dismissed.

Obviously, this is grossly unfair censorship.

Worse, it is massively destructive to the historic and proper role of science as an unfettered (but intellectually and ethically responsible) search for the truth about our world in light of the evidence of observation and experience.

But all of this raises the onward question, on how the ID-style inference to design is justified. This was picked up on in the response to DWG at 86 in the same thread:


KF: >> . . . the sort of a priori materialistic censorship we see you trying to justify locks out the very intelligence that scientists must use to do science at all, and the question as to whether that intelligence may sometimes leave empirically testable, reliable signs that point to its source.

For instance, in comment 82 just above, you put up a post in this thread in English, using 1612 ASCII characters. Based on the 7-bit code, the number of contingent possibilities for that many characters is: 128^1612 ~ 6.6*10^3,396.

We may ignore for our purposes the debates on the redundancy of English as a digital code and the different proportions of letters in English, as on pure chance each character would be equiprobable. So, we are dealing with 7*1612 = 11,284 bits. (At even 1 bit per character as an average, we are still well beyond the resources of the observed cosmos so this is without loss of materiality of the argument.)
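As a quick check on that arithmetic, here is a minimal Python sketch (the 1612-character count and the 7-bit code are taken from the text above; the variable names are illustrative):

```python
import math

chars = 1612                              # characters in the cited comment
bits = 7 * chars                          # 7-bit ASCII -> 11,284 bits
log10_configs = chars * math.log10(128)   # 128 possibilities per character
exp = math.floor(log10_configs)
mant = 10 ** (log10_configs - exp)
print(bits)                               # 11284
print(f"{mant:.1f} * 10^{exp} possible strings")  # ~6.6 * 10^3396
```

The logarithm is used because 128^1612 is far too large to print directly in scientific notation by naive means.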

Applying the log-reduced Chi metric on the gamut of our solar system of some 10^57 atoms (mostly Hydrogen in the sun, BTW), we can see:

Chi_500 = I*S – 500

Where, I = 11,284 , and as we are dealing with an instance of a code known to be vulnerable to perturbations, the specificity dummy variable S is 1.

Chi_500 = 10,784 bits beyond the solar system threshold.
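In code, the metric as stated reduces to a one-liner; this is a sketch of the formula exactly as given above, with the specificity dummy variable S treated as a 0/1 flag (the function name is mine):

```python
def chi_500(info_bits, s):
    """Log-reduced Chi metric: bits beyond the 500-bit solar-system threshold."""
    return info_bits * s - 500

print(chi_500(11284, 1))   # 10784 bits beyond the threshold
print(chi_500(11284, 0))   # -500: without specificity, no design inference
```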

The import of this is that the only credible cause of something that has that many functionally specific bits is intelligence as we OBSERVE it, based on the challenge of finding islands of specific function in a large space of possibilities, beyond the solar system [our effective universe] threshold. (FYI, over the time since the usual estimate for the big bang, 10^57 atoms would go through {about 10^117 Planck time quantum states} where it takes about 10^30 such to do the fastest — ionic — chemical reactions. 500 bits is about 10^150 possibilities, {33} orders of magnitude beyond, that is, a search of 10^102 steps at most will not sample enough of the possibilities to plausibly capture something that is UNrepresentative of the distribution as a whole.)
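The search-resource figures in that parenthesis can be checked directly; this sketch uses the standard Planck time (about 5.4*10^-44 s) together with the atom count and timescale quoted above:

```python
import math

atoms = 1e57              # atoms in the solar system, as quoted above
seconds = 1e17            # rough time since the big bang, in seconds
planck_time = 5.39e-44    # Planck time, in seconds
states = atoms * seconds / planck_time   # ~1.9 * 10^117 Planck-time states
space_500_bits = 2.0 ** 500              # ~3.3 * 10^150 possibilities
ratio = math.log10(space_500_bits) - math.log10(states)
print(f"shortfall: ~10^{ratio:.0f}")     # about 33 orders of magnitude
```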

And, plainly, functionally specific configs are going to be absolutely overwhelmed by gibberish in the space of possibilities.

If you are looking for needles in haystacks but sample only {1 in 10^33} of the haystack, overwhelmingly you are going to be picking up a tiny bit of straw. The gamut of search is not reasonable relative to the isolation of the target in the field of possibilities.

To give an idea, let us say that a needle and a straw both weigh about a gram. {Where also, using the 10^30 states for a chemical level atomic event, the number of possible events in our solar system in 10^17 s is about 10^87. That is 1 in 10^63 of 10^150. A cubical hay bale made up of 10^63 1 g straws would weigh in at  10^57 tonnes, assuming a density equal to water.  That’s 10^19 m on the side, or about 1,000 light years. Where our galactic disk is about that thick.}
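The bale's mass and side length can be verified with a few lines; this sketch assumes, as the text does, 1-gram straws and a density equal to water:

```python
straws = 1e63                  # straws, 1 g each (from the text)
mass_tonnes = straws * 1e-6    # 1 g = 1e-6 tonne -> 1e57 tonnes
volume_m3 = straws * 1e-6      # water density: 1 g occupies 1e-6 m^3
side_m = volume_m3 ** (1 / 3)  # side of the cubical bale, ~1e19 m
light_year_m = 9.461e15        # metres per light year
print(f"{side_m / light_year_m:.0f} light years on the side")  # ~1057
```

The result, roughly 1,000 light years, matches the figure in the (corrected) title.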

[EXPLANATION: A one straw-sized sample from a {1,000 LY}  across bale, i.e. big enough to swallow our solar system and its neighbourhood without noticing, is obviously overwhelmingly likely to pick up only straw.  And remember, what this means is that our solar system is comparatively pulling just one straw-sized sample.  {So, let us superpose the bale on the sun’s neighbourhood. It would be obvious that such a one straw sized sample will with overwhelming likelihood pick up the bulk of the distribution: straw, not anything else.} ]

So, we see a simple example of how functionally specific complex organisation and associated information [FSCO/I] beyond a reasonable complexity threshold is a reliable sign of design. An empirical sign.

So, we immediately see that we have a reasonable empirical procedure and test for inference to design on specified complexity beyond a threshold. A process that is subject to empirical test and falsification on the very simple challenge: provide a good OBSERVED counter-instance where it is credible that undirected chance and necessity led to FSCO/I beyond say the solar system threshold.

It will not surprise you, I am sure, to know that there is an Internet full of cases in point, and by contrast there are zero counter-examples that meet the required criterion. (If there were, the sort of debate points commonly used to object to the design inference would not be used.)

Now, a minimally complex cellular life form may have say 200 proteins, averaging let’s say 200 AAs each, and represented by DNA at 3 4-state letters per AA and with let’s say 10% more for regulatory stuff. 660 * 200 = 132,000 4-state elements; 264 k bits. The config space for this is 8.3 *10^79,471 possibilities.
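The genome arithmetic above can be reproduced directly; this sketch uses the stipulated figures (200 proteins, 200 AAs each, 3 DNA letters per AA, 10% regulatory overhead):

```python
import math

proteins = 200
aa_each = 200
bases = proteins * aa_each * 3        # 3 four-state DNA letters per AA
bases = int(bases * 1.1)              # +10% regulatory -> 132,000 elements
bits = 2 * bases                      # 2 bits per 4-state letter -> 264,000
log10_space = bases * math.log10(4)
exp = math.floor(log10_space)
mant = 10 ** (log10_space - exp)
print(bases, bits)                              # 132000 264000
print(f"{mant:.1f} * 10^{exp} configurations")  # ~8.3 * 10^79471
```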

An unintelligent chemical soup with possibilities for all sorts of cross reactions and breakdown mechanisms for such will simply not get there on the gamut of our solar system or the wider observed cosmos. Nor is there any credible observed sign of a ladder of development from simple initial components to that degree of specified complexity, regardless of the many just so stories commonly trotted out.

So, since life as we observe it is reasonably at least of that sort of complexity, and is known to be functionally highly specific, then just as we can see from the example of post 82, we have excellent reason to infer that the original source of the living cell was design, on the sign of FSCO/I.

Nor does the fact of self-replication in the cell undermine this, for what we have is a case of a metabolising automaton, which has the ADDITIONAL capacity of replicating itself on a stored copy of digital coded algorithmically functional information, as von Neumann discussed 60 years ago, a von Neumann self replicator. The vNSR facility added to the ability to process energy and materials from environment to carry out operations to build up and break down internal components is yet another manifestation of FSCO/I.

What is happening is that instead of dealing with such on the merits, politically correct censorship is locking out the ability of science to freely and responsibly think through the issues just outlined.

That’s politics, and censorship, not science.


So, nope, it is simply not true that science MUST only explain in a materialistic circle of chance and necessity acting blindly on matter and energy, and in fact it was not even true of the co-founder of the theory of evolution, Wallace.

Call off the rottweilers, it is time for science to proceed as an unfettered but intellectually and ethically responsible progressive pursuit of the empirically based truth about our world, through experiment, observation and related logical and mathematical analysis discussed freely and fairly among the informed.

Otherwise the core integrity of science will utterly and irretrievably break down.  >>


In short, the design inference is saying as a first threshold, we must at minimum be looking at the sort of search that is comparable to pulling a single straw-sized sample from a haystack about 1/10 light year on the side. In such a situation, it is unreasonable to expect that a sample of such a size will turn up needle, not straw. Except, by design.

For instance, if we had an autonomous robot with a scanner that could detect needles, say by using mm-wavelength radar, and could then go to each detected needle and pick it up, we would have a much higher likelihood of finding needles scattered amidst the straw.

But of course, such a robot would be a product of art, and would be using a device that creates a warmer/colder signal that allows it to home in on needles, within the resources available.

Just so, design theory and the associated explanatory filter, are predicated on the advantage of using smarts not a chance-based random walk rewarded by trial and error success, as a precursor to allowing hill-climbing.

But, you may protest, the set task is too complex. 500 bits is a “lot.”

In fact, 72 or so ASCII characters worth of information is a very small amount to do any serious real world control in, as any experienced programmer can tell you.  And we are looking at needing to get to a system capable of informationally controlling and building a metabolic entity that additionally, needs to be self-replicating.

We are now beginning to explore what it takes to build such a self-replicating system, and you can see what it already takes to build a unit that replicates about half its parts, here.

The observation that about 100,000 bits of information is a baseline for a self-replicating, cell-based life form is not to be lightly brushed aside.

Nor is the point that we have precisely one known source of the required symbolic codes and algorithms: design.

As the needle-in-the-haystack challenge above highlights.

Of course, such are not a priori conclusions; they can be overthrown by the gold-standard approach of science: demonstrate a credible counter-example.

A challenge that, after 60 years of serious experimental effort, has not been met.

By a long shot. END


* Adjusted, on a corrected calculation, June 4, 2012.

KF, thanks for your clarification. I may offer more comments on this thread later today, but I'm on my way out the door in moments... material.infantacy
F/N: Tut, tut anti-evo objectors, you need to read the notes here on about that Lewontin quote, as the OP highlighted. Doesn't it ruin your whole day when someone lets the a priori materialism cat out of the science bag? kairosfocus
MI: It is not just complexity but specificity. Any particular config in a space for 500 bits is beyond 1 in 10^150, but many would be typical. It is the special zones that are NOT typical, i.e. they are separately describable as to function or meaning etc., that we are interested in. I prefer a 1,000-bit degree of complexity, as this then takes us well beyond the reach of the observed cosmos [150 orders of magnitude beyond the accessible PT quantum states] -- the only observed cosmos. But our effective cosmos is the solar system, and 500 bits of complexity takes us 48 orders of magnitude beyond that. One could make an argument that the fastest chem reactions are like 10^30 PT's, but we need not, as 143 ASCII characters is enough isolation that we can see that this is trivially too small for a control algorithm or a structural specification. If, within the resources of the solar system, you found by pure chance and blind mechanical necessity a serious function that would be reasonably acceptable as CSI, better yet FSCI, the design explanatory filter would be dead. In short, that would be comparable to taking a single at-random 1-gram [1 straw sized] sample from a hay bale 0.1 light year on the side, and picking needle, not straw. It is not hard to see why that is not a reasonable assumption for a theory of origins. As a valid test, I have repeatedly suggested: write a random ASCII text generator, and run it till you see a 73-ASCII-character string in coherent English, perhaps by comparing against the Gutenberg online book database. So far a 24-character string has been generated, but that is far, far from 73, where each additional BIT doubles the config space, and each additional character would multiply it 128 times over. What usually happens is that objectors argue that any string is specified, which is a strawman argument; that would be Dr Liddle's example stretched to 500 coins. 
Or, they end up slipping in intelligent guidance by the back door; typically on the claim that evolution allows hill climbing. When you point out that hill climbing from an already sufficiently functional string is not the issue, it is to get to the equivalent of 100,000 bits of functional DNA for origin of cell based life, or 10 - 100 million for origin of complex body plans, before you can hill climb, there is usually a cloud of rhetorical ink and an escape, or else a vanishing act or an ignoring. So, the CSI criterion is testable, and is tested. It passes empirical test, with billions of instances in our observation. The only known, observed source of such CSI and especially FSCI is design. You will love the recent objection on the canals on Mars. If they were accurate pics of real canals, then that would have indeed been evidence of design on Mars. Instead they were drawings designed by astronomers here on earth, to represent what they THOUGHT they saw on Mars, inaccurately. Looks like the latest news is that there may be seasonal salt water streams on Mars, but no-one is suggesting design. GEM of TKI kairosfocus
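The random-text-generator test proposed in the comment above can be sketched in a few lines. The target phrase and trial count here are illustrative stand-ins (a real run would compare candidate strings against a dictionary or the Gutenberg corpus, as suggested):

```python
import random
import string

ALPHABET = string.printable[:95]   # the 95 printable ASCII characters

def random_ascii(n, rng):
    """One blind draw of an n-character string from the 95-symbol alphabet."""
    return "".join(rng.choice(ALPHABET) for _ in range(n))

rng = random.Random(42)
trials = 10_000
# Even this trivially weak target (any string opening with "the ") is rare;
# a coherent 73-character English sentence is astronomically rarer still.
hits = sum(random_ascii(73, rng).lower().startswith("the ")
           for _ in range(trials))
print(hits)
```

With a match probability of roughly 1 in 10^7 per trial even for this four-character target, a run of this size is expected to find nothing.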
KF, putting aside the calculation for CSI for the time being: Would it be fair or even close to accurate to say that, given a simulation, if you found a function by c/n which had less than 10^-150 chance of being found, you would have "generated" specified complexity? The key is that there needs to be a less than one in 10^150 chance of being arrived at via random search, I'm presuming. Also, I know you've settled on 1000 bits for complexity, could you reiterate why you're using 1000 instead of 500 (10^300 instead of 10^150)? m.i. material.infantacy
Quit invoking the supernatural. material.infantacy
I swear I see a needle in that haystack. Mung
If only material causes are allowed then should we now throw quantum mechanics out of science???

Quantum Entanglement – The Failure Of Local Realism - Materialism - Alain Aspect - video

The falsification for local realism (materialism) was recently greatly strengthened:

Physicists close two loopholes while violating local realism - November 2010
Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview.

Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009
Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables.

Quantum Information/Entanglement In DNA & Protein Folding - short video

=================

Journal of Scientific Exploration (Issue 21-3) by Professors Richard Conn Henry and Stephen R. Palmquist
Excerpt: Alain Aspect is the physicist who performed the key experiment that established that if you want a real universe, it must be non-local (Einstein’s “spooky action at a distance”). Aspect comments on new work by his successor in conducting such experiments, Anton Zeilinger and his colleagues, who have now performed an experiment that suggests that “giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned.” Be clear what is going on here. Quantum mechanics itself is not crying out for such experiments! Quantum mechanics is doing just fine, thank you, having performed flawlessly since inception. 
No, it is people whose cherished philosophical beliefs are being threatened that cry out for such experiments, exactly as Einstein used to do, and with exactly the same hope (we think in vain): that quantum mechanics can be refined to the point where it requires (or at least allows) belief in the independent reality of the natural world it describes. Quantum mechanics makes no mention of reality (Figure 1). Indeed, quantum mechanics proclaims, “We have no need of that hypothesis.” Now we are beginning to see that quantum mechanics might actually exclude any possibility of mind-independent reality?and already does exclude any reality that resembles our usual concept of such (Aspect: “it implies renouncing the kind of realism I would have liked”). Non-local causality is a concept that had never played any role in physics, other than in rejection (“action-at-a-distance”), until Aspect showed in 1981 that the alternative would be the abandonment of the cherished belief in mind-independent reality; suddenly, spooky-action-at-a-distance became the lesser of two evils, in the minds of the materialists. Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the illusion of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism. referenced work: “An experimental test of non-local realism” by S. Gröblacher et. al., Nature 446, 871, April 2007 “To be or not to be local” by Alain Aspect, Nature 446, 866, April 2007 bornagain77
Once science has been allowed to be locked up into a philosophical materialistic circle by imposed a priori, it is being censored through question-begging. Very true. Methodological naturalism is now used to say that science can only explain by "natural causes." If we were to remove one word from that sentence and say that science can only explain "natural causes," we would end up with a healthy and useful definition and return the concept to what it was meant to be. Of course, those who have acquired influence and even wealth claiming that "science" can explain all would shriek like banshees if we did. But that's OK. tribune7
Trib: Yes, we live in a world where even the random is orderly, hence the various statistical distributions. And, of course, a priori materialism is utterly anti-science. In the clipped off part of the response at 86, I noted:
Once science has been allowed to be locked up into a philosophical materialistic circle by imposed a priori, it is being censored through question-begging. And, once the public begins to realise what is going on, especially for origins science, the game is over. For, we all know where politically correct — ideological — philosophical — a priori impositions and censorship leads. Hence of course the pretence that for centuries the successful rule of the game in science is so-called methodological naturalism, now used to say that science can only explain by “natural causes.” NOT! (And a few other myths about the roots of science are popped here.) When pressed, that imposition boils down to you can only work in the circle of matter, energy, space, time, chance and forces of mechanical necessity. Precisely the a priori evolutionary materialism that Lewontin identified, and as others also back up. The “only” problem with this is that it leaves out of the reckoning exactly the forces that allow us to do science at all, i.e. that we are self-moved, freely thinking intelligent beings who can follow LOGICAL connexions by reasoning, and by observing and evaluating facts of the empirical world. As Haldane pointed out ever so long ago:
“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms.” [["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209.]
In short, if you reduce our mental capacity to a product of chance plus necessity acting on matter and energy, through genetic and socio-cultural conditioning, you end up in a self-referential absurdity. And tossing around word-magic terms like “emergence” and conscious software looping, is just that: word-magic, not serious explanation. Your system of explanations must leave room for a self-aware, self-moved mind that can think, decide, reason and act for itself, or the first premise of science collapses in self-refuting ruins: science requires scientists who can freely think, reason, and know . . .
GEM of TKI kairosfocus
The materialism MUST come first, because if it does not, it would not be possible to devise the scientific method. That's an extraordinarily silly concept to which to hold. I'll happily grant that science should refrain from invoking the supernatural -- something which makes ghost investigation shows on SciFy (or wherever) especially laughable -- but the notion that "materialism MUST come first" totally overlooks the far, far, far more important (and very much non-material) axioms that truth is objective and that there is a point to existence, hence providing the rationale to put forth effort to learn simply for the sake of knowing. Science, which basically means natural science in modern usage, is good, but we shouldn't forget that all it can do is attempt to discover the consistencies of nature. (Quick quiz: can consistency be found in random events?) Invoking unknown forces, or claiming known forces can definitively cause an effect without being able to demonstrate it -- hmmmm, what popular "scientific" theory could we have in mind here? -- is actually anti-science. tribune7
