# Infinite Probabilistic Resources Make ID Detection Easier (Part 2)

Previously [1], I argued that not only may a universe with infinite probabilistic resources undermine ID, it definitely undermines science. Science operates by fitting models to data using statistical hypothesis testing with an assumption of regularity between the past, present, and future. However, given the possible permutations of physical histories, the majority are mostly random. Thus, a priori, the most rational position is that all detection of order cannot imply anything beyond the bare detection, and most certainly implies nothing about continued order in the future or that order existed in the past.

Furthermore, since such detections of order encompass any observations we may make, we have no other means of determining a posteriori whether science’s assumption of regularity is valid to any degree whatsoever. And, as the probabilistic resources increase, the problem only gets worse. This is the mathematical basis for Hume’s problem of induction. Fortunately, ID provides a way out of this conundrum. Not only does intelligent design detection become more effective as the probabilistic resources increase, but it also provides a basis (though not a complete answer) for the assumption of regularity in science.

In [1], I point out that as the resources approach infinity, the proportion of coherent configurations in the space of possible configurations approaches zero. This is important because Intelligent Design is really about hypothesis testing and specifying rejection regions [2], and coherent configurations allow us to form a rejection region. In hypothesis testing, the experimenter proposes a hypothesis and a probability distribution over potential evidence, signifying what results the hypothesis predicts. If the experiments produce results outside the predicted range to a great enough degree, then the results fall within the rejection region and the hypothesis is considered statistically unlikely and consequently rejected. Note that in this case it is actually better to have more result samples rather than fewer. With a few samples the variance is large enough that the results don’t render the hypothesis statistically unlikely. But with enough samples the variance is reduced to the point where the hypothesis can be rejected. With an infinite number of samples we can see almost exactly whether the true distribution matches the predicted distribution.
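
The effect of sample size described above can be sketched with a short simulation. This is a minimal illustration of my own (the coin, its 0.55 bias, and the 5% rejection threshold are illustrative choices, not anything from the post or from [2]): a slightly biased coin is tested against the null hypothesis of fairness, and the standard score only reaches the rejection region once enough samples accumulate.

```python
import math
import random

def z_score(heads: int, n: int, p0: float = 0.5) -> float:
    """Standard score of an observed head count under the null hypothesis p0."""
    return (heads - n * p0) / math.sqrt(n * p0 * (1 - p0))

random.seed(0)
true_p = 0.55  # the coin is actually slightly biased, unbeknownst to the tester
for n in (20, 200, 20000):
    heads = sum(random.random() < true_p for _ in range(n))
    z = z_score(heads, n)
    # |z| > 1.96 places the result in the 5% rejection region of the null
    print(n, round(z, 2), "reject" if abs(z) > 1.96 else "keep")
```

As the flip count grows, the score drifts away from zero and the fair-coin hypothesis eventually lands in the rejection region, matching the point that more samples sharpen the test.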

Such is the case with an infinite universe and infinite samples. The infinite universe is composed of all possible configurations, which create a probability distribution over how ordered an arbitrary sample is expected to be. With an assumption of infinite samples (i.e., a conscious observer in every configuration), we can say in what proportion of the configurations intelligent design detection will be successful, which is the complement of the proportion of ordered configurations. Unfortunately, the number of unsuccessful detections never actually reaches zero, since there will always be coherent configurations as long as they are possible. If I happen to find myself in a coherent configuration I may just be extremely lucky. But in the majority of configurations the chance and necessity hypothesis will be validly rejected in favor of the intelligent design hypothesis.
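
The claim that coherent configurations become vanishingly rare can be made concrete with the standard counting argument from algorithmic information theory (the specific bit lengths below are my own illustrative parameters): there are at most 2^(b+1) − 1 binary descriptions of length b bits or less, so they can pick out at most that many of the 2^n possible n-bit configurations.

```python
def ordered_fraction_bound(n: int, b: int) -> float:
    """Upper bound on the fraction of n-bit configurations that admit
    any description of at most b bits: (2^(b+1) - 1) / 2^n, capped at 1.0."""
    return min(1.0, (2 ** (b + 1) - 1) / 2 ** n)

# As the configuration space grows, the bound on the ordered (compressible)
# fraction shrinks geometrically toward zero, though it never reaches it.
for n in (16, 32, 64, 128):
    print(n, ordered_fraction_bound(n, b=16))
```

The bound falls toward zero but never reaches it, mirroring the point above that coherent configurations remain possible, merely ever rarer.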

At this point it may seem suspicious that I write we can reject a hypothesis in favor of another. Why should rejecting one hypothesis favor another hypothesis? This begins to sound like a god-of-the-gaps argument; just because we’ve dismissed chance and necessity doesn’t necessarily imply we can accept design. There may be yet another alternative we’ve yet to think of. While this is a good caution, science does not deal with unknown hypotheses. Science deals with discriminating between known hypotheses to select the best description of the data.

But what is the probability distribution over the evidence that ID provides? The specific prediction of ID is that orderly configurations will be much more common than statistically expected. For example, we can see this in that Kolmogorov complexity provides an objective specification for CSI calculations [2]. Kolmogorov complexity is a universal measure of compression, and orderliness is a form of compression. [3] So, when I end up in a configuration that is orderly, I have a higher probability of being in a configuration that is the result of ID than in a configuration that is the result of chance and necessity. Hence, an orderly configuration allows me to discriminate between the chance and necessity hypothesis and the ID hypothesis, in favor of the latter. Additionally, since orderly configurations drop off so quickly as our space of configurations approaches infinity, infinite resources actually make it extremely easy to discriminate in favor of ID when faced with an orderly configuration. Thus, intelligent design detection becomes more effective as the probabilistic resources increase.
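
Since Kolmogorov complexity is uncomputable [3], any practical orderliness test needs a computable stand-in. As a rough illustration of my own (not a method from [2]), an off-the-shelf compressor can serve: a configuration that compresses well is orderly, while a typical chance configuration is essentially incompressible.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; well below 1.0 signals order."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(1)
orderly = b"AB" * 5000                                       # highly regular
noisy = bytes(random.getrandbits(8) for _ in range(10_000))  # typical random bytes

print(round(compression_ratio(orderly), 3))  # far below 1.0: detectable order
print(round(compression_ratio(noisy), 3))    # about 1.0: no detectable order
```

Note the asymmetry: a low ratio is strong evidence of order, but a real compressor can only ever miss order that a shorter description would capture, since no computable method matches Kolmogorov complexity in general [3].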

Now that I’ve addressed the question of whether infinite probabilistic resources make ID detection impossible or much, much easier, let’s see whether ID can in turn do science a favor and provide a basis for its regularity assumption. I will attempt to do this in part 3 of my series.

[1] http://www.uncommondescent.com/intelligent-design/the-effect-of-infinite-probabilistic-resources-on-id/
[2] http://www.designinference.com/documents/2005.06.Specification.pdf
[3] Interestingly, Kolmogorov complexity is uncomputable in the general case due to the halting problem. This means that in general no algorithm can generate orderliness more often than is statistically expected to show up by chance. Hence, if some entity is capable of generating orderliness more often than statistically predicted, it must be capable, at least to some extent, of solving the halting problem.

## 13 Replies to “Infinite Probabilistic Resources Make ID Detection Easier (Part 2)”

1. 1
bornagain77 says:

Thanks Eric for making this distinction clear.

2. 2
David W. Gibson says:

Thus, a priori, the most rational position is that all detection of order cannot imply anything beyond the bare detection, and most certainly implies nothing about continued order in the future or that order existed in the past.

Furthermore, since such detections of order encompass any observations we may make, we have no other means of determining a posteriori whether science’s assumption of regularity is valid to any degree whatsoever.

In the interests of clarity, maybe we should note here that building such models would not be feasible if there weren’t any consistent longitudinal pattern. Models can’t be built based on random or unpredictable patterns.

Beyond this, most models (by no means all!) aren’t constructed just for the Joy of Modeling. They are instead built in the hopes of producing useful predictions. This does NOT necessarily mean accurate predictions; often, failed predictions are more instructive than successful ones.

Nonetheless, if a model consistently makes accurate predictions based on past regularity, this accuracy is generally very helpful. It may become a moot point whether the assumption of regularity is valid. How is “validity” to be determined, in a case where a model makes consistently correct predictions?

At this point it may seem suspicious that I write we can reject a hypothesis in favor of another. Why should rejecting one hypothesis favor another hypothesis?

Conceptually, hypotheses are ALWAYS rejected in favor of competing hypotheses. In practice, with respect to any mystery, we begin with no reliable data. We collect a very few, and immediately we form an explanatory hypothesis. This is simply human nature. Given that we have hopelessly incomplete data, our initial hypothesis has very little chance of being even a little bit correct. As more data become available, our hypotheses improve in both scope and accuracy.

(The story is told of the London Geological Society, which once decided not to theorize until they had sufficient data. Darwin laughed at them, and said they might as well go into the nearest quarry and describe every pebble. Darwin said that ALL observations are meaningless unless they either support or refute SOME notion, some proposed explanation. And this remains as true today as it was then. ALL observations are immediately placed where they belong in a “best-fit” explanation. And if they don’t fit, the explanation must change. We make no progress in the growth of knowledge by simply rejecting or ignoring observations which don’t play well with what we think or wish is true.

We assume that reality is consistent, that there are no true paradoxes. Otherwise, why bother? And this means ALL theories must consistently handle ALL observations, most especially those least congenial to our theories. When we see people who carefully avoid uncomfortable cases, we should rightly be suspicious.)

3. 3
Neil Rickert says:

Science operates by fitting models to data using statistical hypothesis testing with an assumption of regularity between the past, present, and future.

I believe this to be a largely incorrect view of science.

According to a recent post, Popper doubted that there is any clear scientific method. If science worked as you describe it, then Popper would not have come to such a conclusion.

4. 4
bornagain77 says:

Eric, you may enjoy this video:

Nuclear Strength Apologetics – Presuppositional Apologetics – video

=============

Here are some of William Lane Craig’s thoughts on the matter:

Multiverse and the Design Argument – William Lane Craig
Excerpt: Roger Penrose of Oxford University has calculated that the odds of our universe’s low entropy condition obtaining by chance alone are on the order of 1 in 10^(10^123), an inconceivable number. If our universe were but one member of a multiverse of randomly ordered worlds, then it is vastly more probable that we should be observing a much smaller universe. For example, the odds of our solar system’s being formed instantly by the random collision of particles is about 1 in 10^(10^60), a vast number, but inconceivably smaller than 1 in 10^(10^123). (Penrose calls it “utter chicken feed” by comparison [The Road to Reality (Knopf, 2005), pp. 762-5]). Or again, if our universe is but one member of a multiverse, then we ought to be observing highly extraordinary events, like horses’ popping into and out of existence by random collisions, or perpetual motion machines, since these are vastly more probable than all of nature’s constants and quantities’ falling by chance into the virtually infinitesimal life-permitting range. Observable universes like those strange worlds are simply much more plenteous in the ensemble of universes than worlds like ours and, therefore, ought to be observed by us if the universe were but a random member of a multiverse of worlds. Since we do not have such observations, that fact strongly disconfirms the multiverse hypothesis. On naturalism, at least, it is therefore highly probable that there is no multiverse. — Penrose puts it bluntly: “these world ensemble hypotheses are worse than useless in explaining the anthropic fine-tuning of the universe”.
http://www.reasonablefaith.org.....friendly=1

5. 5
bornagain77 says:

Eric, the part of the video that you may find particular interest in starts at the 10:00 minute mark;

6. 6
bornagain77 says:

Eric, I edited the video to the relevant part;

Infinite Multiverse Vs. Uniformity Of Nature – video
http://www.metacafe.com/watch/6853139/

7. 7
Eric Holloway says:

@BA77:

Thanks for the info. Yes, many other people have made my point much more eloquently, but I thought it wouldn’t hurt to say it again, especially since it seems a number of people don’t understand the problems with the infinite resources counter.

@NR:

Well, it helps to understand the motivation for Popper’s view of science. He’s largely coming from the assumption that Hume’s argument against induction is valid, and is trying to develop some way of still maintaining credibility for science. However, if Hume’s argument is valid then science is just not credible, period. Personally, I don’t find Popper’s fallibilism and the like very convincing, largely because I don’t believe Hume’s argument is valid in the first place. But, that’s a subject for my next article.

8. 8
Neil Rickert says:

Personally, I don’t find Popper’s fallibilism and the like very convincing, largely because I don’t believe Hume’s argument is valid in the first place.

I don’t agree with Popper’s falsificationism. Hume’s argument against induction would be valid, if science worked as you describe.

As far as I know, Popper was concerned that data appeared to be theory laden. If science works as you claim, then the data precedes the theory and should not be theory laden.

As I see it, the problem for science is not one of finding regularities in the data. Science can be useful even if there are no regularities in the data. The main problem for science is that there is no data. So science has to invent ways of getting data. I see scientific theories as largely a documented account of how you get the data in the first place. That, of course, would lead to data being theory laden.

Take Newton’s law of gravitation. That law expressed a relation between the masses of two objects, the distance between them, and a hypothesized force of gravitational attraction between them. The first actual data that fit those characteristics was obtained by Cavendish, around 100 years after Newton had proposed his law.

9. 9
PeterJ says:

Ah… past, present & future. Yes, I get it.

10. 10
bornagain77 says:

Neil states:

‘Science can be useful even if there are no regularities in the data.’

,,, And just why do you presuppose that your perception to ‘hypothetically’ see that there might be ‘no regularities in the data’ should remain constant while all the rest of reality might just as well have exhibited ‘no regularities’ and should have been found to be in total chaos??? Especially since you, as an atheist, would hold that your beliefs and perception were merely an epiphenomenon of that chaotic foundation you hold to be the ‘real’ basis of reality???

What Would The World Look Like If Atheism Were Actually True? – video
http://www.metacafe.com/w/5486757/

The Multiverse Gods, final part – Robert Sheldon – June 2011
Excerpt: And so in our long journey through the purgatory of multiverse-theory, we discover as we previously discovered for materialism, there are two solutions, and only two. Either William Lane Craig is correct and multiverse-theory is just another ontological proof of a personal Creator, or we follow Nietzsche into the dark nihilism of the loss of reason. Heaven or hell, there are no other solutions.
http://procrustes.blogtownhall.....part.thtml

Atheism In Crisis – The Absurdity Of The Multiverse – video
http://www.metacafe.com/watch/4227733

of note:

,,,Even the ‘exotic’ virtual particles, which fleetingly pop into and out of existence, are found to be necessary for life in the universe:

Virtual Particles, Anthropic Principle & Special Relativity – Michael Strauss PhD. Particle Physics – video
http://www.metacafe.com/watch/4554674

11. 11
Neil Rickert says:

Especially since you, as an atheist, would hold that your beliefs and perception were merely an epiphenomenon of that chaotic foundation you hold to be the ‘real’ basis of reality???

Please don’t tell me what I believe. Your mind reading abilities are not as good as you seem to think they are.

12. 12
bornagain77 says:

Neil, but alas, as an atheist, you can have no mind for me to read even if I could read minds, nor can you have any beliefs of mind which are transcendent of a material basis:

The Mind and Materialist Superstition – Six “conditions of mind” that are irreconcilable with materialism:
http://www.evolutionnews.org/2.....super.html

This following video humorously reveals the bankruptcy that atheists have in trying to ground beliefs within a materialistic worldview;

John Cleese – The Scientists – humorous video