Intelligent Design

How ID Saves Science from Infinite Probabilistic Resources (Part 3)


In my previous articles [1] [2], I explained how infinite probabilistic resources lead to Hume’s problem of induction. In this article I explain how ID helps solve the problem of induction, and rescues science from infinite probabilistic resources.

First, here is a brief recap of the problem infinite resources pose for science. Essentially, the problem is that with infinite resources anything can, and does, happen in the universe. In such a case we have to determine what is true based on a priori considerations, and since a priori all order is highly improbable, we must assume the maximum randomness possible given a particular observation. This means that no observation implies any order beyond itself. Since science depends on regularity and order persisting throughout time, science can no longer produce inductive conclusions. This is Hume’s problem of induction.
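The claim that a priori all order is highly improbable can be illustrated with a small counting exercise (my own sketch, not part of the original argument): among all binary strings of a given length, only a small fraction are "orderly" in the sense of compressing well under even a crude run-length encoding.

```python
from itertools import product

def rle_length(bits):
    # Size of a crude run-length encoding: two symbols (bit, count) per run.
    runs = 1
    for a, b in zip(bits, bits[1:]):
        if a != b:
            runs += 1
    return 2 * runs

n = 16
total = orderly = 0
for bits in product("01", repeat=n):
    total += 1
    if rle_length(bits) <= n // 2:  # "orderly" = compresses to half its length
        orderly += 1

print(f"{orderly} of {total} strings are orderly ({orderly / total:.4f})")
```

The particular length and compression threshold are arbitrary choices for illustration; tightening the definition of order only shrinks the orderly fraction further.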

Popper understood this problem, but sought to preserve science’s ability to tell us truth about reality. [3] Since no amount of corroborating evidence implies a theory is in fact true if we assume maximum randomness, the only evidence that counts is contradictory evidence. Thus, for Popper, science consists of proposing theories and then trying our best to disprove them. Consequently, theories are only considered scientific by Popper if they are falsifiable. Now, the common perception is that working scientists are in fact performing an inductive activity. However, the formal process currently adopted by experimental scientists is hypothesis rejection through statistical testing. This is falsification, whereby multiple hypotheses are put head to head and eliminated.
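Hypothesis rejection through statistical testing can be sketched with a minimal example (the coin scenario and numbers here are mine, chosen for illustration): we never confirm the null hypothesis, we only reject it when the observed data would be too improbable under it.

```python
from math import comb

def upper_tail_p(k, n, p=0.5):
    """One-sided p-value: probability of k or more successes in n trials
    if the null hypothesis (success rate p) is true."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Null hypothesis: the coin is fair.  Observation: 61 heads in 100 flips.
p_value = upper_tail_p(61, 100)
print(f"p-value = {p_value:.4f}; reject at the 5% level: {p_value < 0.05}")
```

Note that rejecting fairness does not establish any particular alternative; in Popper’s terms, the surviving hypotheses remain conjectures that have merely not yet been eliminated.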

Of course, this approach to science as an answer to Hume’s problem of induction doesn’t eliminate the original problem. Regardless of what experimentation shows about a particular point in time, there is still no rational basis for extrapolating this into the future. With the assumption of maximum randomness, the future should be considered as random as possible. Any extrapolation assumes regularity within time, and if there is regularity there isn’t any reason to throw out inductivism. And if we don’t have to throw out inductivism, then there isn’t a problem for Popper’s falsification to solve.

A number of philosophers have attempted to address the problem at its root, such as David Stove. Stove relies on the premise that a typical sample will most likely resemble the population it was sampled from [4]. This gives a basis for assuming that a particular conclusion from a sample also applies to the population, at least with some significant degree of probability. On the face of it the argument is compelling, since its premise is sound. However, the problem lies in assuming an observation is a typical sample. If it is not a typical sample then the argument does not hold. And determining whether a sample is typical ultimately relies on a priori considerations.
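Stove’s premise itself is easy to check with a quick simulation (the population size, property rate, and tolerance below are invented for illustration): the overwhelming majority of samples drawn from a population do resemble it.

```python
import random

random.seed(42)  # reproducible sketch

# A hypothetical population in which 70% of members have some property.
population = [1] * 7000 + [0] * 3000
pop_rate = sum(population) / len(population)

trials, close = 1000, 0
for _ in range(trials):
    sample = random.sample(population, 100)
    sample_rate = sum(sample) / 100
    if abs(sample_rate - pop_rate) <= 0.1:  # within ten percentage points
        close += 1

print(f"{close} of {trials} samples resembled the population")
```

The simulation assumes the samples are drawn uniformly, which is exactly the assumption of typicality at issue in the text: the premise holds for typical samples, but nothing in the observation itself certifies that it was one.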

If we must a priori assume maximum randomness possible, then when we observe order it is more likely a priori that only the observation is orderly, rather than both the observation and the sampled population being orderly. So, even though Stove’s argument works if we can assume the population as a whole is orderly, this assumption doesn’t hold if we must a priori assume maximum randomness. And, once again, we encounter a nice-sounding solution to Hume’s problem, which unfortunately turns out to rely on Hume’s problem already being solved.

But why must we assume maximum randomness? The assumption of randomness is a conclusion from the premise that only chance and necessity are responsible for everything in the universe; since chance and necessity select from the possible configurations with no preference for order, most of the universe configurations they produce are random. But we don’t need to assume only chance and necessity are responsible for everything. For example, intelligent design can produce more order than expected from chance and necessity. So, if intelligent design may also be responsible for events in the universe, we no longer must assume a priori maximum randomness. Now, ID proper doesn’t tell us exactly how much order we can expect in the universe; such a conclusion requires other considerations. But ID does at least eliminate the need to assume maximum randomness.

Once we no longer have to assume maximum randomness, then we can start taking Stove’s argument at face value. If we detect order, since its likelihood is so small if only chance and necessity are involved, we can instead conclude intelligent design is responsible through statistical hypothesis rejection. We won’t be correct in every circumstance, but, given enough order, we will be correct in our conclusion more often than not. And, if the detected order is the result of intelligent design, then we can further assume that an intelligent designer is producing at least some of the configurations in our universe and thus potentially creating more order than expected from chance and necessity. Of course, the existence of an intelligent designer doesn’t entail that it has made a lot of orderly configurations, but it does mean the amount of order, or lack thereof, is not a foregone conclusion.

At this point in the argument we have multiple options regarding the relation between our observation of orderliness and the actual orderliness of the universe, instead of the original single option. The first option is the usual chance-and-necessity option, where we have to assume maximum randomness for any observation, and thus no observation of order entails any order beyond itself. The second, new option, introduced thanks to ID, is that an order-creating designer is at work in our universe. In this case, assuming the designer isn’t specially creating order just for our particular observations, it is rational to assume that our particular observation of order is a typical observation in the universe, and thus the universe is orderly. The third option is that there is a designer, but the designer is for some reason specially creating order just for our observations, which becomes equivalent to the maximum randomness option. Our final question, then, is how do we discriminate between options two and three?

At this point I must caveat that I consider my answer here more tentative, but it is novel and interesting to my mind, so I consider it worth mentioning. I believe we can resolve this final question, ironically enough, in the same way the whole problem of induction was originally introduced. Our original problem came about through considerations of mathematical permutations, and the fact that most of those permutations are random. Now we again consider permutations, but this time of order-creating intelligent designers. Namely, we are asking what proportion of order-creating intelligent designers are essentially fooling us?

To frame the answer, think of an intelligent designer as an input/output process. When faced with a particular instance, there is some probability the designer will leave it as it is or make it more orderly, and we consequently have permutations of input-to-output mappings. Accordingly, most of these mappings are random, and it is fairly arbitrary whether a designer chooses to create order in a particular situation or not.

Now reconsider the case where the designer is fooling us. In this case the mapping is not very random at all: the designer is picking out the particular cases where he will create order. As such, these mappings themselves exhibit order, and become highly unlikely. The more likely case is that the mappings are random and the designer is arbitrarily creating order.

So the proportion of deceptive designers to arbitrary designers is very small, and we can expect most designers in our universe not to be out to fool us. And now the irony in this solution becomes fully apparent: Hume’s problem of induction, turned on itself, has helped save us from itself!
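The closing proportion argument can be made concrete with a toy count (entirely my own model, with made-up numbers): represent a designer’s behaviour as a choice, for each of n possible situations, of whether to create order there. A deceptive designer orders exactly the few situations we happen to observe and nothing else; an arbitrary designer orders the observed situations but is unconstrained everywhere else.

```python
n = 50        # hypothetical total number of situations (chosen for illustration)
observed = 5  # situations we actually observe to be orderly

# Exactly one behaviour orders the observed situations and nothing else
# (the deceptive designer, whose mapping is itself highly structured):
deceptive = 1

# Behaviours that order the observed situations but choose freely on
# every unobserved situation (the arbitrary designers):
compatible = 2 ** (n - observed)

print(f"deceptive fraction = 1 / {compatible}")
```

On this toy model the deceptive behaviour is a vanishing fraction of all behaviours consistent with our observations, which is the structure of the argument in the text; how faithfully a 50-situation binary model captures real designers is, of course, an open assumption.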




[4] David Stove, The Rationality of Induction (Clarendon Press, 1986)
