The scientific enterprise stands or falls on the legitimacy of making inductive inferences, from cases of which we have experience to cases of which we have no experience. The aim of this post will be to show that there can be no scientific knowledge if there is no God, and that there is no way of justifying inductive inference on a systematic basis, in the absence of God.
The UK-based Science Council has defined science as “the pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.” Scientific knowledge is therefore systematic rather than particular: it isn’t just about this or that fact, but about classes of facts. My senses can tell me that the apple I see in front of me is red and juicy, but it is science which tells me that the apple genome contains about 57,000 genes, that all apple trees are deciduous, and that apple trees belong in the rose family. It is this kind of systematic knowledge which, I maintain, would not be possible in the absence of God.
What is induction, and what is the problem of induction?
In science, the term induction is commonly used to describe inferences from particular cases to the general case, or from a finite sample to a generalization about a whole population. These generalizations include not only universal statements (e.g. “Every life-form observed to date has been carbon-based, so it’s safe to conclude that all life-forms are”) but also functional relations (e.g. Hooke’s law, F = kx, which states that the force F needed to extend or compress a spring by a distance x is always proportional to that distance).
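To see what this kind of generalization involves, here is a little Python sketch (my own toy illustration; the measurements are invented for the example):

```python
# Inferring the spring constant k in Hooke's law, F = k*x, from a finite
# sample of (extension, force) measurements -- an inductive move from
# particular observations to a functional relation.
extensions = [0.01, 0.02, 0.03, 0.04, 0.05]   # metres (invented data)
forces     = [2.0,  4.1,  5.9,  8.0,  10.1]   # newtons (invented data)

# Least-squares estimate of k for a line through the origin:
#   k = sum(x*F) / sum(x*x)
k = sum(x * f for x, f in zip(extensions, forces)) / sum(x * x for x in extensions)

# The inductive leap: predicting the force for an extension we have never
# actually measured.
predicted_force = k * 0.10    # about 20 N, if the law keeps holding
```

Note that the prediction for x = 0.10 m only follows if we assume the relation continues to hold outside the sampled range – which is precisely the inductive step at issue in this post.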
In logic, the term “induction” has a much broader meaning, encompassing all arguments in which the premises support the conclusion without deductively entailing it. Inductive arguments are not formally valid, but are nonetheless intended to be strong. Such arguments include predictions about the future based on past data (e.g. “I predict that the sun will rise tomorrow, because it has risen every day in the past”), as well as inferences about individuals based on statistical generalizations (“Most basketball players are tall, and Jodie’s friend Sam plays basketball, so Sam is probably tall, too”). Neither of these kinds of inferences would qualify as scientific inferences, in the strict sense, as they aren’t inferences from particular cases to the general case; nevertheless, they are inductive.
Associate Professor Kevin deLaplante, of Iowa State University, has posted an excellent 10-minute video on YouTube, titled Induction and Scientific Reasoning. In the video, deLaplante explains that the scientific usage of the term “induction” is a subset of the broader, logical usage, and he adds that induction in the broader logical sense is fundamental to scientific reasoning, since it involves moving from known facts about observed phenomena to a tentative conclusion (or hypothesis) about the world, which goes beyond the observable facts.
This brings us to the problem of induction, which relates to how we can legitimately infer, in Hume’s words, that “instances of which we have had no experience resemble those of which we have had experience” [p. 89] (Hume, David, 1888, Hume’s Treatise of Human Nature, edited by L. A. Selby Bigge, Oxford, Clarendon Press; originally published 1739–40). John Vickers, writing in the Stanford Encyclopedia of Philosophy, succinctly explains why Hume’s principle is so important to science, and why at the same time, philosophers have had such a hard time in providing a justification for the principle:
[Inductive] methods are clearly essential in scientific reasoning as well as in the conduct of our everyday affairs. The problem is how to support or justify them and it leads to a dilemma: the principle cannot be proved deductively, for it is contingent, and only necessary truths can be proved deductively. Nor can it be supported inductively — by arguing that it has always or usually been reliable in the past — for that would beg the question by assuming just what is to be proved.
(Vickers, John, “The Problem of Induction” in The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.))
The philosopher C.D. Broad described induction as “the glory of science” and at the same time, “the scandal of philosophy.” (Broad made those remarks in a 1926 lecture on “The Philosophy of Francis Bacon,” reprinted in Broad, C. D., Ethics and the History of Philosophy, New York: Humanities Press, 1952, p. 143.)
In today’s post, I’d like to informally survey the rationales which have been put forward to support the legitimacy of inductive inference, and explain why I think they fail, without God.
Does the reliability of associative knowledge in animals legitimize scientific inference?
In an article on his website, Debunking Christianity, the well-known skeptic and former preacher John Loftus, M.A., M.Div., author of Why I Became an Atheist: A Former Preacher Rejects Christianity, defends the possibility of scientific knowledge along the following lines:
“If there is no God then we don’t know anything.” False. If so, chimps don’t know anything either. They don’t know how to get food, or mate or even where to live. Without knowing anything they should’ve died off a long time ago. And yet here they are. They don’t need a god to know these things. Why do we need a god for knowledge? We learn through a process of trial and error. Since we’ve survived as a human species, we have acquired reliable knowledge about our world. Period.
There are several things wrong with this argument.
First, Loftus is attacking a straw man here. Theists who make this kind of argument do not claim that if there is no God then we don’t know anything. Rather, what they claim is that we can have no scientific knowledge in the absence of God: hence the attempt to invoke science in order to undermine belief in God is self-defeating, for it destroys science as well.
Second, Loftus fails to differentiate between procedural knowledge (“knowing how”) and declarative or descriptive knowledge (which can only be expressed in propositions). It is obvious that animals need to know how to obtain food or to mate, or they wouldn’t have survived. Some animals have also learned certain techniques that promote the survival of the population, on a trial-and-error basis. But science isn’t just a collection of techniques; it’s an organized body of facts, unified by theories which purport to accurately describe the world. Since the goal of science is to correctly describe the world on a systematic basis, it can only be expressed in the form of statements. That’s why the scientific enterprise cannot be based on mere “know-how.”
Third, the term “reliable,” which Loftus employs in the passage above, is an equivocal one: it can mean “tried and true,” or it can mean “trustworthy in general.” From the fact that human beings successfully relied on certain techniques (e.g. for foraging, hunting and tool-making) on past occasions, in order to survive and prosper as a species, we cannot infer that these techniques will work in other situations. All we can infer is that these techniques have a good track record: they must have worked up until now, in the situations where they have been employed, or otherwise we wouldn’t be here. Science, however, makes statements which go beyond situations of which we have had experience, to cover situations of which we have had no experience. Loftus cannot justify this inferential leap by simply appealing to the past successes we’ve had, without begging the question.
Fourth, the associative knowledge that animals have, which promotes their survival, relates to a contingent link between two stimuli. However, unexpected environmental changes may cause associations to fail, and when they do, many animals die. Suppose that an animal learns to associate a certain stimulus (e.g. a large nearby tree with red things hanging from its branches) with an abundance of good food (apples). For many years, the animal thrives on the basis of that knowledge, until it dies at a ripe old age. Did the animal really know that the fruit of the tree was good to eat and that the tree was a good source of food? Such an assessment can only be made in retrospect: if the association formed by the animal promoted its survival, then we can say in hindsight that it possessed useful and reliable knowledge. But if the animal died instead because the tree (and all the other plants nearby) withered in a drought, or because its fruit was poisoned by a farmer spraying it with pesticides, then we would certainly not say that it had reliable knowledge. In other words, the notion of reliability in this example is a relative one: it is defined relative to some broader context, which is assumed to be fixed. But since the enterprise of science is concerned with the description of the natural (and social) world as a whole, mere relative fixity is not enough. The question we need to address is: how can we be sure that the most general statements about our world are ones we can rely on?
Why the past success of science is irrelevant to my argument
The “Science works” comic indirectly alluded to by Professor Richard Dawkins in a recent talk at Oxford’s Sheldonian Theatre on 15 February 2013. Image courtesy of xkcd comics. Licensed under a Creative Commons Attribution-NonCommercial 2.5 License.
Some scientists argue that the successful track record of science is enough to legitimize scientific inferences, and solve the problem of induction. After giving a talk at Oxford’s Sheldonian Theatre on 15 February 2013, the world-famous biologist Professor Richard Dawkins was asked by a member of the audience how we can know whether scientific induction is a legitimate way of knowing. Dawkins then proceeded to give some examples of how practices such as medicine, computing, driving, aeronautical flight and space travel work in everyday life when they are based on science, concluding with a crude but clever put-down: “It works, bitches!” – an apparent allusion to a popular xkcd comic on the Web.
Evolutionary biologist Professor Jerry Coyne is also highly impatient with critics who question the legitimacy of scientific inference, in the absence of God. In a recent post of his, Coyne offered a blunt response to what he called “the Plantinga-ian argument that science cannot philosophically justify its own methodologies”:
…I reply, “Who the hell cares — science has helped us understand the cosmos, and is justified by its successes.” I fail to understand why a lack of philosophical justification counts at all against the success of science.
In a recent online essay titled, No Faith in Science (Slate, November 14, 2013), Professor Coyne argues that when people speak of having “faith” in science, they really mean “confidence derived from scientific tests and repeated, documented experience,” as opposed to religious faith, which lacks rational justification. He writes: “You have faith (i.e., confidence) that the sun will rise tomorrow because it always has, and there’s no evidence that the Earth has stopped rotating or the sun has burnt out.”
These responses by Professors Dawkins and Coyne entirely miss the point I want to make here. I do not doubt for a moment that the scientific method has worked in the past. Rather, my concern is with the question: what makes it reliable? For unless we can answer this question, we have no guarantee that it will continue to work on Earth in the future, let alone in places beyond our Earth. Nor can we be sure that it will work for past events which we have not yet discovered.
Let’s take a very common example: we all believe that the sun will rise tomorrow, and more generally, that it will continue to rise on every future day, at intervals of every 24 hours or so. In order to keep this illustration as simple as possible, let’s imagine that the sun rises at exactly the same time every morning (say, 6:00 a.m. sharp) – which it would, if we lived on a planet with an axial tilt of 0 degrees and a perfectly circular orbit, and if there were no tidal drag. We might then plot the sunrises on graph paper, as a series of evenly spaced X’s on a timeline. We might even go further, and chart the position of the sun in the sky at various times of day, on our nice little graph, and we might also trace out the path it presumably follows at night. Now we have a smooth, wavy curve linking all the X’s and tracing the path of the sun over the course of time. It’s very natural for us to assume that this smooth curve will follow the same nice, regular path tomorrow, and that the sun will rise at the same time as usual. But would it be rational for us to assume this, if we didn’t believe in God? I don’t think it would. Here’s why.
Think of it this way. If you’re trying to follow a particular path in the woods, then there’s only one possible way in which you can go along the path. But there are an infinite number of ways in which you can go off the path. The same applies to the sun. There are countless ways in which it could conceivably fail to rise at the expected time tomorrow. (Here, I’m describing the sun’s motion from an earth-centered perspective.) For instance, it could soar up into the sky and disappear, or it could do a loop-the-loop, or it could jump suddenly from one place to another in the sky, or it could turn into a green dragon, or it could just disappear in a puff of smoke. Putting it another way: there are infinitely many ways we can draw a mathematical curve showing the sun’s path going off-course, but there’s only one way in which we can draw a curve showing the sun staying on-course. On the basis of that fact alone, we should rationally conclude that the sun’s staying on course consistently in the future is prima facie extremely improbable.
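The point about curves can be made concrete with a small Python sketch (my own toy construction): infinitely many polynomial curves pass exactly through the same past data points and then diverge wildly.

```python
# Toy construction: a family of curves that all agree perfectly on past
# observations but diverge arbitrarily afterwards. Past sunrises occur at
# t = 0, 1, ..., 4 (in days); the "regular" curve is f(t) = t.
past_days = [0, 1, 2, 3, 4]

def rogue_curve(t, c):
    # f_c(t) = t + c * (t-0)(t-1)(t-2)(t-3)(t-4)
    # For EVERY value of c this agrees with f(t) = t on all past days,
    # because the product term vanishes at each of them.
    prod = 1
    for d in past_days:
        prod *= (t - d)
    return t + c * prod

# All the curves fit the past data exactly...
for c in [0, 1, -2, 100]:
    assert all(rogue_curve(t, c) == t for t in past_days)

# ...but make wildly different predictions for tomorrow (t = 5):
predictions = [rogue_curve(5, c) for c in [0, 1, -2, 100]]
print(predictions)  # [5, 125, -235, 12005]
```

Since c can be any number at all, the past observations alone cannot single out the “regular” curve (c = 0) from its infinitely many rivals.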
Are there any other facts about the sun which are capable of tipping the balance, making the expectation that it will rise in the future a warranted inference? I don’t think there are. I shall now proceed to review the leading arguments put forward to justify the logic of inductive inference, and explain why I believe they fail.
Can Bayes’ theorem legitimize scientific inference?
A blue neon sign at the Autonomy Corporation, showing a simple statement of Bayes’ theorem. Courtesy of mattbuck and Wikipedia.
It is often argued that Bayes’ theorem can provide a warrant for inductive inferences, and help us to confirm the hypothesis that the sun will rise at the expected time tomorrow (and in the future). It’s a hypothesis that could easily be falsified (e.g. if the sun comes up later than usual one day, or simply disappears), but it continues to hold up. Surely, it is argued, there must come a point – say, after 1,000,000 days of observations – at which it would be utterly irrational to deny that the sun will rise tomorrow at the forecast time.
Not so fast. Our observations provide support for the hypothesis that the sun always rises at the same time every day – but they’re equally consistent with the hypothesis that the sun rises at the same time every day until the year 2050, after which it sails off into space, or the hypothesis that it rises at the same time until 1 January 2437, after which it turns into a green dragon. In short, there are infinitely many alternative hypotheses about the future path of the sun which are also fully consistent with the observations we’ve made to date. The question we need to ask ourselves is: why is it rational for us to single out just one hypothesis – the hypothesis that the sun always rises at the same time every day and always will – and ignore all the other hypotheses about the future course of the sun which are fully consistent with the evidence? (Of course, I’m quite aware that the sun won’t keep rising forever, as it will eventually burn itself out, but we’ll overlook that point for the purposes of this illustration, and assume, as Aristotle did, that the stars are capable of shining eternally.)
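This point can be checked numerically with a toy Bayesian calculation (my own sketch; the hypotheses and cutoff dates are invented for the example). Hypotheses that assign probability 1 to every observation made so far end up with posterior probabilities in exactly the same ratio as their priors, no matter how much data accumulates:

```python
# Toy Bayesian update over three rival hypotheses about the sun:
#   H1: always rises on time
#   H2: rises on time until day 1,000,000, then sails off into space
#   H3: rises on time until day 2,000,000, then turns into a dragon
priors  = {"H1": 1/3, "H2": 1/3, "H3": 1/3}
cutoffs = {"H1": float("inf"), "H2": 1_000_000, "H3": 2_000_000}

def likelihood(h, day):
    # Probability each hypothesis assigns to an on-time sunrise on `day`
    return 1.0 if day < cutoffs[h] else 0.0

posteriors = dict(priors)
for day in range(1000):                      # observe 1,000 sunrises
    unnorm = {h: posteriors[h] * likelihood(h, day) for h in posteriors}
    total = sum(unnorm.values())
    posteriors = {h: p / total for h, p in unnorm.items()}

print(posteriors)  # each hypothesis still sits at 1/3
```

Bayes’ theorem faithfully eliminates hypotheses the data falsify, but it cannot discriminate among hypotheses that agree on everything observed so far: that work has to be done by the priors, which is just the problem of induction all over again.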
Do appeals to simplicity legitimize scientific inference?
Physicist Sean Carroll, in his video, Is God a good theory?, argues that we should assign a higher prior probability to theories that seem more powerful, simple or elegant. In the (highly idealized) case which we are considering, Carroll would argue that the simplest hypothesis is that the sun will just keep rising at the same time every day. (In a similar vein, skeptic John Loftus approvingly quotes the following statement by Luiz Fernando Zadra in a recent post of his: “When facing equivalent theories, the one that is more simple is most likely to be the right one.”)
Carroll might then invoke Occam’s razor, and argue that we should jettison more complicated hypotheses – e.g. that the sun keeps rising regularly until 2020, after which it rises regularly only on Tuesdays, and zigzags around the sky on the other days of the week – as unworthy of serious scientific attention, and focus on the default hypothesis that it will continue rising at intervals of 24 hours. If that hypothesis holds up well under testing, then we should accept it, until something happens to cast it into doubt or falsify it.
Finally, Carroll might add that science, by definition, is the search for the simplest and most all-encompassing explanation of what we observe – as he put it in a recent post (June 7, 2011) on Uncommon Descent, “Scientists are trying to come up with the simplest description of nature that accounts for all the data… Science wants to know how we can boil the behavior of nature down to the simplest possible rules.” On this logic, the only hypothesis in my little illustration about the sun rising which merits scientific consideration is the one that says it rises at the same time every day.
Here’s the problem I have with arguments of this kind: just because an explanation is simple, doesn’t mean it’s any more likely to be true. (Oscar Wilde once humorously remarked in his play, The Importance of Being Earnest, that the truth is rarely pure and never simple.) We might want reality to be as simple as possible, but there’s no reason why reality has to bend to our whims. To expect the universe to be simple because we’d like it that way is to project our wishes onto the cosmos. But the cosmos doesn’t care about us. It just is. Hence I am at a loss to understand why Dr. Sean Carroll and John Loftus believe that simpler theories have a higher prior probability of being correct, or are more likely to be true.
Carroll and Loftus might respond by arguing that scientific theories which appeal to fewer entities are by default more likely to be true, as they don’t make as many background assumptions as theories which invoke a multitude of entities. This is the thinking which underlies Occam’s razor, which tells us never to multiply entities beyond necessity. But it isn’t at all clear to me that the hypothesis that the sun rises at the same time every day until the year 2050, after which it sails off into space, requires us to postulate any more entities than the hypothesis that it keeps rising at the same time every day. The only real advantage of the latter hypothesis is its brevity: it can be stated very concisely, while the rival hypotheses require more words to specify. Occam’s razor tells us to be sparing with entities, not with words: it does not say that we should prefer more concise explanations, and it certainly does not say that more concise explanations are more likely to be correct. So in order to justify your belief that the sun will rise at the forecast time tomorrow, you have to make quite a strong assumption: that the briefest description of reality in our language is the one most likely to be true. That’s a staggeringly anthropocentric claim, when you come to think of it.
Cut emeralds. We would say that emeralds are green. But how do we know that they aren’t really grue, where “grue” is defined as “green before the year 2100 and blue afterwards”? Courtesy of Vzb83 and Wikipedia.
I might add in passing that defenders of this claim also have to address the grue paradox: whether the hypothesis that the sun will rise at the same time on every future day is the simplest one depends on the language you are using to describe the sun. The philosopher Nelson Goodman made a similar point when writing about the greenness of emeralds: the claim that emeralds are green before a certain year (say, 2100) and blue afterwards might sound more convoluted than the claim that emeralds are always green, but if you use the term “grue” to mean “green before the year 2100 and blue afterwards” and “bleen” to mean “blue before the year 2100 and green afterwards,” then the claim that emeralds are always green becomes the more convoluted one – you would have to say that they are grue before 2100 and bleen afterwards – while the claim that emeralds are grue becomes the more concise. To be fair, however, Carroll and Loftus could argue that the term “grue” is not epistemically basic: it can only be understood by someone who is already familiar with the notions of “blue” and “green.” So in a language employing only epistemically basic terms, the hypothesis that emeralds are always green turns out to be the most concise – and similarly, the hypothesis that the sun rises at the same time every day is simpler than the hypothesis that it rises at the same time every day until the year 2050, after which it sails off into space. That’s fine, but now defenders of the claim that simple and concise explanations are more likely to be true have to justify an even stronger claim: that explanations which are easy to state from a human-bound epistemic perspective are more likely to be true. Now that’s a truly astonishing claim.
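The dependence of “simplicity” on vocabulary can be made vivid with a toy sketch (entirely my own construction; the token counts are a crude stand-in for real measures of description length):

```python
# Toy illustration: which hypothesis counts as "shorter" depends on
# which colour predicates the language treats as primitive.
T = 2100  # the cutoff year in the definitions of "grue" and "bleen"

# In a language whose primitives are GREEN and BLUE:
always_green_in_gb = ["GREEN"]                               # 1 token
grue_in_gb         = ["GREEN", "until", T, "then", "BLUE"]   # 5 tokens

# In a language whose primitives are GRUE and BLEEN, the roles reverse:
always_green_in_grue = ["GRUE", "until", T, "then", "BLEEN"] # 5 tokens
grue_in_grue         = ["GRUE"]                              # 1 token

# The "simpler" hypothesis flips with the choice of primitives:
assert len(always_green_in_gb) < len(grue_in_gb)
assert len(grue_in_grue) < len(always_green_in_grue)
```

Nothing in the data favours one vocabulary over the other; that choice has to be justified on independent grounds, which is just Goodman’s point.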
As for Carroll’s argument that science, by definition, is the enterprise of explaining the world in the simplest and most concise way: well, he can define science that way if he likes, but then I’ll have to ask him: what guarantees that this way of explaining the world reflects the way it actually is? And more worryingly, what guarantees that this way of explaining the world will work in the future? Nothing, as far as I can tell.
Does practical necessity legitimize scientific inference?
At this point, someone may impatiently object that we can argue till the cows come home about whether the sun will rise tomorrow, but on a practical level, we have to commit ourselves to one hypothesis or another. If we believe that the sun will rise at the same time every day, then planting crops in the expectation of harvesting them will be a very sensible thing to do; but if we think the sun is more likely to veer off course, then we probably won’t bother. Like it or lump it, we have to make a choice. Our very lives depend on it. And the hypothesis that’s easiest and most convenient for us to commit ourselves to is the hypothesis that the sun’s behavior is perfectly regular.
That’s perfectly fine, and I can certainly understand people reasoning in this way, on a practical level. But what I insist on pointing out is that convenience doesn’t equal truth. It might make good sense to hope that the sun will keep rising at the same time every day – after all, who wouldn’t want that? – but that doesn’t make it rational to believe that the sun will continue behaving in this fashion. Hoping and believing are two very different things. What I have yet to see is an argument explaining why our belief that the sun will rise at the forecast time tomorrow is a rational one.
Can scientific inference be legitimized over the short term, at least?
Perhaps someone might concede that the belief that the sun will rise every morning at the same time for all eternity is an irrational one, but at the same time argue that the belief that the sun will keep rising at the same time for the foreseeable future is a rational one. They might try to argue as follows. Suppose that the sun is going to stop rising one day. It could be tomorrow, or the next day, or in one year’s time, or in 100 years, or in 1,000,000 years. The point is that other things being equal, it’s much more likely to happen in the distant future than in the near future, as there are so many more days – perhaps infinitely many – in the distant future, and relatively few in the near future. So we should (if we’re rational) bet on the sun’s rising tomorrow, even if we think it will eventually stop rising some day.
What’s wrong with this argument is that it tacitly assumes that the likelihood that the first day on which the sun fails to rise is tomorrow is the same as the likelihood that the first day on which it fails to rise is the day after tomorrow, or for that matter, 1,000,000 years from now. But as I argued earlier, there are countless ways in which the sun could fail to rise at the forecast time tomorrow, and only one way in which it could stay “on track,” as it were. That makes the sun’s failing to rise tomorrow, prima facie, a very likely event. By contrast, the sun’s first failing to rise the day after tomorrow is a much less likely event, as it is conditional upon the apparently unlikely event of the sun’s rising on time tomorrow. In other words: given the number of possibilities (or alternative paths) that we can draw on paper, the sun’s first failing to rise tomorrow is much more likely than its first failing to rise the following day, which in turn is much more likely than its first failing to rise the day after that, and so on.
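The conditional structure of this argument can be expressed in a toy probabilistic model (my own; it assumes, purely for illustration, a fixed per-day probability p that the sun stays on course):

```python
# Toy model: let p be the per-day probability that the sun stays on course.
# Then the probability that its FIRST failure occurs on day n is
#   P(n) = p**(n-1) * (1 - p)
# which is strictly decreasing in n for any p < 1: the later the first
# failure, the more stayings-on-course it is conditional upon.
def first_failure_prob(p, n):
    return p ** (n - 1) * (1 - p)

p = 0.5  # illustrative value only; the post argues p should be tiny
probs = [first_failure_prob(p, n) for n in range(1, 6)]
print(probs)  # [0.5, 0.25, 0.125, 0.0625, 0.03125]
assert all(probs[i] > probs[i + 1] for i in range(len(probs) - 1))
```

On this model, an early first failure is always more probable than a later one, which is the opposite of what the objector’s argument requires.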
Does my argument presuppose a “principle of indifference”?
Someone might also object that I’m assuming that all possible future outcomes are equally likely – in other words, I’m smuggling in a metaphysical “principle of indifference.” Not so. All I’m doing here is asking someone who wants to give a greater weighting to the simpler hypotheses: “Why? How can you justify doing that?” Since I haven’t received a good answer to this question, I’m going to treat all of the various alternative hypotheses about the future course of the sun as viable options, until someone gives me a good reason why I shouldn’t.
Do larger data sets help legitimize scientific inference?
A star-forming region in the Large Magellanic Cloud. Image courtesy of NASA, ESA and Wikipedia.
So far, I’ve just been talking about one celestial body: the sun. But what if we observe that all the other stars behave regularly, too? Wouldn’t that strengthen the belief that the sun will continue to behave regularly in the future?
No, it wouldn’t. Here’s why. Just as there are infinitely many ways in which we can graph the sun going off course at some point in the future, so too, there are infinitely many ways in which we can do so for the sun and the other stars. The possibilities are limitless. The fact that the sun and stars have all moved in a uniform manner in the past doesn’t tell us that they’ll continue to do so in the future, as there are infinitely many alternative paths they might take (singly or together), which can still be described by a mathematical equation, except that it’ll be a more complicated one than the equation for uniform motion. (Of course, I realize that the stars don’t really move in a perfectly uniform manner over the long-term, even from an earth-centered perspective, but as I stated above, I’m deliberately simplifying the example, in order to keep it non-technical.)
The point I’m making here is that the simplest equation we could use to describe the movements of the stars is just one of infinitely many sets of equations we could have chosen, which provide identical descriptions of the stars’ previous movements, but which make wildly divergent predictions about their future courses. Now imagine someone writing all these alternative equations down on paper, starting with the shortest and proceeding in increasing order of length. As we progress, the length of the equations keeps increasing, tending towards infinity. Now can you see what we are doing when we make the uniformitarian assumption? We’re picking the very shortest equation, and ignoring all of the infinitely many alternative equations which predicted the stars’ behavior perfectly up to this point. And why? Simply because they’re not short. That doesn’t sound very rational to me, unless we have some reason for believing that shorter explanations are more likely to be correct.
Did the philosophers Donald Williams and D. C. Stove solve the problem of induction?
Philosophy Professor Tim McGrew of Western Michigan University attempts to solve the problem of induction by appealing to the example of balls being drawn from a very large urn containing only red and green balls. He shows that once our sample reaches a certain size, we can be reasonably sure that the proportion of red balls in the sample roughly matches the proportion in the urn – even if the urn is a very large one. The picture above is of a Roman funeral urn belonging to one L. Cornelio Leto (R.I.P.), who died at the age of 16. Image courtesy of Museo archeologico regionale di Palermo, Giovanni Dall’Orto and Wikipedia.
Some philosophers (notably Donald Williams and D. C. Stove) have argued that the problem of induction can be solved by appealing to a form of direct inference. The most outstanding defense of this view is from philosophy professor Tim McGrew of Western Michigan University, who in a recent article titled, Direct Inference and the Problem of Induction (The Monist, Volume 84, Issue 2, April 2001, Pages 153-178), argues that a simple, non-controversial form of direct inference provides the key to the refutation of Humean skepticism.
To illustrate his point, McGrew uses the example of taking a sample of balls from a very large urn, containing a mix of red and green balls. He then considers the question: how can we be sure that the proportion of red balls in our limited sample roughly matches the proportion of red balls in the urn? Answering this question, McGrew contends, will enable us to see why we can legitimately infer the likelihood of the sun’s rising at the forecast time tomorrow on the basis of our past observations of sunrises.
First of all, Bernoulli’s theorem tells us that “most large samples differ but little from the population out of which they are drawn,” as McGrew puts it. He points out that it is the absolute size of the sample, and not its size relative to the population as a whole, that matters here:
In fact, the relative proportion of the population sampled is not a significant factor in these sorts of estimation problems. It is the sheer amount of data, not the percentage of possible data, that determines the level of confidence and margins of error.
Bernoulli’s law of large numbers entails that a random large sample of balls from the urn will probably roughly match the population, in its proportion of red and green balls. (For example, if we take a sample of 2,000 balls from the urn, we can be 95% sure that the proportion of red balls in the sample will differ by only 5% from the proportion of red balls in the urn, no matter how big the urn is.) Hence we can make a legitimate inference about an as yet unsampled ball from the urn: we can infer the likelihood that it will be red, with a high degree of confidence. Thus, argues McGrew, “we may draw a conclusion regarding an as-yet-unexamined member of the population with a reasonably high level of confidence.” In a similar fashion, we can view our past observations of sunrises occurring every morning as a large sample from the total population of all past, present and future mornings. Since the sun has risen on every morning in our sample (making our sample proportion of mornings with sunrises equal to 100%), we may infer with a high degree of confidence that the sun will rise on the next morning we observe (i.e. tomorrow morning), and that the sun will rise on most or all future mornings.
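McGrew’s point about absolute sample size can be checked with a quick simulation (my own sketch; the urn sizes and the 30% red proportion are invented for the example):

```python
import random

# Law of large numbers in the urn example: the accuracy of a sample
# proportion depends on the absolute sample size, not on the fraction
# of the urn that gets sampled.
random.seed(0)

def sample_proportion(urn_size, true_prop, sample_size):
    reds = round(urn_size * true_prop)           # number of red balls
    urn = [1] * reds + [0] * (urn_size - reds)   # 1 = red, 0 = green
    sample = random.sample(urn, sample_size)     # sampling without replacement
    return sum(sample) / sample_size

true_prop = 0.3   # 30% of the balls are red
for urn_size in [10_000, 1_000_000]:
    est = sample_proportion(urn_size, true_prop, 2_000)
    print(urn_size, round(est, 3))  # both estimates land close to 0.3
```

With 2,000 balls sampled, the estimate lands within a couple of percentage points of the true proportion whether the urn holds ten thousand balls or a million – which is Bernoulli’s point, granting the assumption (queried below) that the sampling is genuinely random.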
McGrew’s argument implicitly assumes that randomness is a primitive epistemic notion, and that in conjunction with the statistical data we possess, it is capable of yielding probabilities without our having to make any additional assumptions about how “fair” our sample was. But how do we know that our sample of balls from the urn was truly representative of the population as a whole? How do we know it wasn’t a biased sample? McGrew replies that we don’t need to know whether our sample was a fair one. “What is required instead is the condition that, relative to what we know, there is nothing about our particular sample that makes it less likely to be representative of the population than any other sample of the same size” (emphases mine – VJT).
The same argument can be applied to our sample of past observations of sunrises occurring every morning. In our sample of historical observations, the proportion of mornings on which the sun rises is 100%. Someone might object that we don’t know whether our current position in time (2013 A.D.) is a typical one, and so we cannot be sure that the sun will behave in the same way in the future. But McGrew would reply that since we have no reason to believe there’s anything atypical about our location in time, we should follow the data (which says that the sun rises on 100% of all mornings we have observed) and conclude that the proportion of all past, present and future mornings on which the sun rises is close to 100%. Hence we can be virtually certain that the sun will rise tomorrow. The same considerations apply to inferences about events occurring in the remote past, before the dawn of recorded history: “When we have no reason to believe conditions were relevantly different – as in the case, say, of certain geological processes – we may quite rightly extrapolate backwards across periods many orders of magnitude greater than those enclosing our observations” (emphasis mine – VJT).
The reason why I think McGrew’s argument fails to assuage skeptical doubts about the reliability of induction in general is that it illicitly assumes the very thing that needs to be established: that the items in the population have a consistency of character, which means that samples drawn from the population won’t vary significantly from it, unless there is a reason for them to do so.
This oversight on McGrew’s part is readily apparent in his reply to John Foster’s objection (based on an illustration by A.J. Ayer) that if we draw balls from a bag, and we’re told in advance that the balls come in only two possible colors, then even if all of the balls drawn turn out to be the same color, we can never be confident about the color of the next ball to be drawn, no matter how many balls we draw from the bag. McGrew responds to this objection by asking us to imagine that we return each ball to the bag immediately after we’ve taken it out, “creating, in effect, an indefinitely large population with a fixed frequency” (emphasis mine – VJT). He continues:
No finite sample with replacement, no matter how large, ever amounts to a measurable fraction of this population. Yet as we have seen, using direct inference and Bernoulli’s theorem it is simple to specify a sample size large enough to yield as high a confidence as one likes that the true population value lies within an arbitrarily small (but nondegenerate) real interval around the sample proportion. (Emphasis mine – VJT.)
But what we are really doing in this “replacement” example is re-examining the same balls, over and over again, as we take them out, put them back and (some time later) draw them again. The reader will also notice that these balls are assumed not to vary in color over the course of time. Given these constraints, no-one would contest the legitimacy of making inferences about draws of balls which we haven’t yet sampled, on the basis of draws that we’ve already made, since our future samples will be of the same balls we’ve already looked at, and they will (by stipulation) be the same color that they were previously. But the problem of induction is nothing like this. Instead, we are required to make inferences about items we haven’t seen, on the basis of items which we have seen, and to make matters worse, we possess no assurance whatsoever that the items will display any consistency of character, over time.
I might add that the epistemic principle which McGrew is appealing to sounds very odd when it is applied to the problem of guessing the equation for a mathematical curve, from a limited section of that curve. Consider a curve on the x-y axis. We need not assume the curve to be of infinite length: it suffices for our purposes if we confine ourselves to a finite but very long segment (say, from x = -1,000 to x = 1,000). Let us now assume that we know what parts of that segment look like, and that the parts we know appear to be broken segments of a linear curve – say, the curve y = 2.x. McGrew’s epistemic principle would then entail that we should infer that the rest of the segment is linear, in the absence of any reason to think otherwise. From a mathematical perspective, however, this is an absurd conclusion: there are infinitely many possible ways of joining all the broken parts together, apart from the “obvious” way of joining them with a linear curve. Which of these ways is “more likely”? Mathematically speaking, none of them are.
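The underdetermination just described can be made concrete with a toy computation (my own construction, not McGrew’s): two hypotheses that agree perfectly on every sampled point of the curve, yet diverge wildly at every unsampled point.

```python
# Observed points, all lying on the "obvious" linear curve y = 2x:
sampled_xs = [-2, -1, 0, 1, 2]

def linear(x):
    """The 'obvious' hypothesis."""
    return 2 * x

def rival(x):
    """Agrees with y = 2x at every sampled x, since the added product
    term vanishes exactly at x = -2, -1, 0, 1, 2 ... and nowhere else."""
    return 2 * x + (x + 2) * (x + 1) * x * (x - 1) * (x - 2)

# Both hypotheses fit the observed data equally well:
assert all(linear(x) == rival(x) for x in sampled_xs)

# ... yet they disagree at every unsampled point on the segment:
print(linear(10), rival(10))  # prints: 20 95060
```

And `rival` is only one of infinitely many such curves: multiply the product term by any nonzero constant and you get another hypothesis that fits the data exactly. Nothing in the data itself privileges the linear extrapolation.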
This brings me to another point of difference between McGrew’s example of drawing balls from an urn and my sunrise illustration: in McGrew’s case, there are only two possible values we have to consider (is the ball red or green?), whereas in the case of the Sun, there are infinitely many possible paths we can imagine it following: it could wander off in any direction.
Finally, I would argue that McGrew’s appeal to an epistemic norm – that when we have no reason to believe our particular sample is less likely to be representative of the population than any other sample of the same size, then we should take it to be a typical sample – is an illegitimate move, unless he can ground that epistemic norm in an underlying ontological norm relating to things in the natural world. The notion of an epistemic norm which is not ultimately grounded in reality surely makes no sense; for what, apart from reality, could possibly make it normative? But if McGrew wishes to argue that there is an ontological basis for the epistemic norm he proposes, then he is begging the question; for the ontological equivalent of his proposed norm is: “A sample of items taken from a population will be typical of that population, unless there is some reason for it not to be.” But that is precisely what needs to be established. A skeptic would contend that events can vary from their usual course for absolutely no reason.
I do not wish to disparage McGrew’s argument, which builds on that of Williams and Stove, for it has genuine merit. In my opinion, it constitutes a successful answer to restricted versions of skepticism, which concern themselves with the question of how we can infer this or that generalization from a limited sample. What it fails to address is global skepticism, which addresses the larger question of how we can legitimately infer any generalization from a limited sample. In my illustration above, I chose the example of the sunrise merely as a specific instance of the kind of global skepticism I had in mind. The larger question which I am attempting to answer is one which is fundamental to the scientific endeavor: “How do we know that any of the laws of Nature will continue to hold in the future?” It is this question which McGrew’s argument fails to furnish us with an answer to, in my opinion.
Do mathematical laws and scientific models legitimize scientific inference?
But perhaps it will be objected that I’ve been doing my science all wrong, up to this point. Someone might argue that I haven’t addressed the laws of nature, so far, in my discussion of the problem of induction. Laws are written in the language of mathematics. If I can not only chart the sun’s time of rising but also write an equation that allows me to calculate it as far as I like into the future, doesn’t that buttress the belief that the sun will rise at the forecast time tomorrow?
Additionally, I have so far confined my attention to just one property of the sun: its motion in the sky (actually, the earth’s, but let’s not worry about that trifling detail here). But what if I can construct a comprehensive model of how stars shine, which explains not one, but many different properties of the sun – its color, its temperature, its mass, and so on – in addition to explaining its motion? And what if it turns out that this model continues to hold up, in successfully predicting all of the sun’s future properties? Wouldn’t that strengthen the belief that the sun’s future movement in the sky is predictable, and that it will continue behaving regularly in the foreseeable future?
I’d now like to address each of these objections in turn. Neither of them, I believe, helps us solve the problem of induction.
(a) Why scientific models are incapable of legitimizing scientific inference
An example of scientific modelling: a schematic diagram of chemical and transport processes related to the composition of the atmosphere. Image courtesy of the Strategic Plan for the U.S. Climate Change Science Program, Phillipe Rekacewicz and Wikipedia.
First, let’s look at scientific models. For any given model that we might make of how stars behave, there are infinitely many alternative models that might explain the same properties of stars as our original model does, but make radically different predictions regarding their future behavior. Of course, the vast majority of these models will be inconceivable to us, but perhaps we could program a computer to generate these models and test them. (Is there a way of enumerating all possible models and testing them one by one? That’s an interesting question; I don’t know the answer, but I suspect not.) Or maybe some advanced aliens could grasp these models, even if we’re incapable of doing so. At any rate, for any particular model that lies beyond our grasp, we can at least imagine (and perhaps construct) some being that’s capable of grasping it.
Professor Carroll has maintained elsewhere that physicists last century were forced to adopt such theories as quantum mechanics and general relativity, despite their counter-intuitiveness. I hope the reader can see now why that statement is incorrect. When it comes to models, there are always other choices, even if we haven’t thought of them yet.
(b) Why the laws of Nature are also incapable of legitimizing scientific inference
Emmy Noether (1882-1935), described by Einstein as the most important woman in the history of mathematics, from a portrait circa 1910. In physics, Noether’s (first) theorem explains the fundamental connection between symmetry and conservation laws: any differentiable symmetry of the action of a physical system has a corresponding conservation law. Image courtesy of Wikipedia.
But what about the laws of Nature? It is often said that the laws of Nature must continue to hold, and that they cannot fail to hold. But what does “cannot” mean here? What makes a law incapable of failing? Science has not told us. Professor Carroll will probably point out that the conservation laws can be explained in terms of something called gauge invariance, as mathematician Emmy Noether showed almost 100 years ago in a theorem now known as Noether’s theorem. Since I’m not a physicist, I shall content myself with quoting from a handy summary of the theorem in a New York Times article by Natalie Angier entitled, The Mighty Mathematician You’ve Never Heard Of (March 26, 2012):
What the revolutionary theorem says, in cartoon essence, is the following: Wherever you find some sort of symmetry in nature, some predictability or homogeneity of parts, you’ll find lurking in the background a corresponding conservation — of momentum, electric charge, energy or the like. If a bicycle wheel is radially symmetric, if you can spin it on its axis and it still looks the same in all directions, well, then, that symmetric translation must yield a corresponding conservation. By applying the principles and calculations embodied in Noether’s theorem, you’ll see that it is angular momentum, the Newtonian impulse that keeps bicyclists upright and on the move.
Some of the relationships to pop out of the theorem are startling, the most profound one linking time and energy. Noether’s theorem shows that a symmetry of time — like the fact that whether you throw a ball in the air tomorrow or make the same toss next week will have no effect on the ball’s trajectory — is directly related to the conservation of energy, our old homily that energy can be neither created nor destroyed but merely changes form.
In other words, the symmetry of Nature across space and time corresponds to conservation laws. And if these conservation laws didn’t hold, we’d be living in a different kind of world. This is a very profound and interesting fact, but it still leaves us with the epistemological question of how we know that the conservation laws do hold, in our world. Or putting it another way: how do we know that Nature is symmetrical? As we’ve seen, the evidence we’ve amassed from our observations to date is insufficient to determine the answer to that question. The fact that energy has been conserved for 1,000,000 days in a row does not, in and of itself, give us any warrant for believing that it will continue to be conserved, on the 1,000,001st day, let alone into the indefinite future. And if we don’t know that energy is conserved, then we cannot know that the behavior of an object – such as a ball thrown in the air – is invariant across time.
I conclude that if we accept the modern scientific account of reality, then we have no epistemic warrant for treating the laws of Nature as anything more than mere regularities, which we have observed holding until now, but which may break down at any point in the future.
At this point, I think it’s time to take stock of where we are. We’ve been trying to come up with a justification of scientific inference – in particular, the uniformitarian assumption that the regularities we observe in Nature will continue to hold in the future. Without that assumption, we have no good reason to believe that the sun will rise tomorrow or at any time in the future, or that scientists’ experiments in the laboratory will continue to work, in the same way that they always have previously. So far, we have found no grounds whatsoever for accepting that assumption. In short: repeated observations, Bayesian testing, appeals to simplicity, appeals to our practical needs, the use of large data sets, appeals to forms of direct inference, the formulation of mathematical laws, and the generation and testing of scientific models, have all failed to supply us with the warrant we need to ground our belief in the rationality of scientific inference and solve the problem of induction. It seems that we’ve run out of options for rescuing science, and restoring it to a rational footing. Or have we?
How the existence of God makes scientific inferences rational
A possible way out: what if things have prescriptive properties, in addition to descriptive properties?
A cross-section of a star like the Sun. Image courtesy of NASA, Phil Newman, Dr. Jim Lochner, Meredith Gibb and Wikipedia.
So far, we’ve been doing science as if it meant: the enterprise of accurately describing the past, present and future properties of the entities we observe in Nature. But this assumes that the various properties of an entity are all descriptive. What if, instead, we assume that some of the properties of things are prescriptive?
Putting it another way, we’ve been proceeding as if all the properties of things are “is” properties: the sun, for instance, is a type G2V star, is 1,392,684 kilometers in diameter, is 1.989×10^30 kilograms in mass, and so on. But what if some of the fundamental properties of things are not “is” properties but “ought” properties? For instance, what if the sentence “Salt is soluble in water” really means: “Sodium chloride ought to dissolve in water,” where the term “ought” refers to the fact that it has a built-in (and ontologically irreducible) disposition, or tendency, to dissolve in water?
The idea we are pursuing here is that things have built-in tendencies which define how they ought to behave. I’m not using “ought” in the moral sense, of course; all I mean is that it’s a basic fact about things that they should behave in certain ways and should not behave in other ways. In other words, we can – indeed, must – use prescriptive terminology when we’re talking about things in the real world.
We can see how prescriptive terminology could provide a ground for scientific inference. For if things have certain ways in which they ought to behave, then the only question we need to answer is: which ways are those? Putting it another way: we no longer have to worry about whether we can rely on Nature to conform to our expectations. Nature is reliable, once you get to know it properly. The problem of induction disappears; all that remains is the epistemic problem of properly identifying the ways in which things should behave. (I’ll say more about this problem below.)
We thus seem to have arrived at a notion of things as embodying prescriptions. What’s more, these prescriptions have to go all the way down: there’s no “ultimate level of reality” at which descriptions take over from prescriptions – for if there were, then the problem of justifying scientific inferences made about that “bottom level” of reality would only raise its ugly head again, and science would rest upon an insecure foundation.
Prescriptions imply rules
Structure of a crystal of sodium chloride (table salt). Below, I propose that any proper account of the properties of table salt has to include reference to rules governing how it behaves. Image courtesy of Raj6 and Wikipedia.
All this talk of “shoulds” (or “oughts”) and “should nots” (or “ought nots”) in reference to things only makes sense if rules are somehow part of their very warp and woof. (For if there were no such rules, then it’s hard to see how the term “should” could have any meaning when applied to things.) What I’m suggesting, then, is that things in the natural world are constituted, in part at least, by rules, which are prescriptive. I am not, however, claiming that objects consist of “nothing but” rules; that would be Platonistic. Objects have other properties as well: they are also associated with quantitative (and qualitative) values, such as having a particular size, shape or color, as well as a spatio-temporal location. Additionally, objects are defined by their complex web of relationships with other natural objects.
The view that laws of Nature are rules is additionally supported by the fact that the laws of Nature are all capable of being given a rigorous mathematical formulation: they can be written down as mathematical equations. In other words, they are formal statements. But a mathematical equation, per se, is not a prescriptive rule; what makes it a rule is that it prescribes the behavior of something. Platonic abstractions are defined by their forms, but they do not follow rules; only real things do that. Things behaving in accordance with a rule must have a built-in tendency, under the appropriate circumstances, to generate the effect that the rule states that they should.
The world, as we have seen, is not a world of facts alone, as the younger Wittgenstein believed; it is also a world of rules which specify what ought to be the case. Rules make up the very warp and woof of the natural world: without them, it would be nothing, as natural objects could no longer be said to possess a nature of their own, and a thing without a nature is not a thing at all. What’s more, these rules pervade all levels of reality: the domain of the lawless is nowhere to be found in Nature. Even at the quantum level, strict mathematical rules still apply.
Kepler’s 3rd Law: The square of the orbital time period T is proportional to the cube of the mean orbital radius a:

T^2 = 4π^2.a^3/(G.M)

where G is the gravitational constant and M is the mass of the central body (i.e. star).
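Kepler’s third law, in the form T^2 = 4π^2.a^3/(G.M), can be checked numerically. The sketch below uses standard textbook values for the constants to recover Earth’s orbital period from its mean orbital radius:

```python
import math

# Standard textbook values:
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # mass of the sun, kg
a_earth = 1.496e11   # Earth's mean orbital radius, m

# T = 2*pi * sqrt(a^3 / (G*M)), i.e. Kepler's third law solved for T:
T = 2 * math.pi * math.sqrt(a_earth**3 / (G * M_sun))

print(f"Earth's orbital period: {T / 86400:.1f} days")  # ~365 days
```

The computed period comes out at roughly 365 days, as expected; this is the sense in which the law “prescribes” the sun–earth system’s behavior.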
How we get to a Mind behind Nature
The world thus appears to be made of mathematical prescriptive rules, all the way down. How very, very odd. Where do these rules come from? To answer this question, we have to remember that these prescriptive rules are expressible only in some sort of language – and as we have seen, for the laws of Nature, this language will also have to embody mathematical concepts. Since these rules can only be formulated in some sort of language, then by definition, the only place where rules can come from is a mind. We are forced, then, to assume the existence of a Mind (or minds) underlying Nature, which is responsible for establishing its laws.
A hard-nosed skeptic might object that even if the behavior of things can only be described by us in terms of rules (e.g. recipes), it doesn’t follow that things in themselves are essentially characterized by rules. Rules might be an anthropomorphic projection that we impose on things. We can now see that this objection misses the point, as it presupposes that there are things for rules to be “imposed on” in the first place – in other words, that a thing possesses some underlying essence which is independent of any rules we might impose upon it. But as we’ve seen, it’s “rules all the way down.” There is no level of reality where we can escape the need for prescriptive terminology: as we have seen, the scientific enterprise hangs upon it. What’s more, the rules in question are mathematical: they need a special kind of language, even to formulate them. The universe, to quote Sir James Jeans, is “nearer to a great thought than to a great machine.” But a great thought requires a Great Thinker.
The hard-nosed skeptic might still object that abstract objects, such as triangles, also require language in order to describe them properly. But we don’t say that a mind created these objects. The answer to this objection is that abstract objects are either instantiated in the natural world (e.g. tetrahedra) or they are not (e.g. a regular 999-sided polygon, to borrow an example from Professor Edward Feser). If they are, then their existence is derivative upon that of the objects in the world instantiating them; if they are not, then they only exist in the minds of the people who think them up and/or talk about them.
A short argument for God’s existence
We can now sketch how an argument for God’s existence might work. It proceeds as follows:
1. (a) All natural objects – and their parts – exhibit certain built-in, fixed tendencies, which can be said to characterize these objects and circumscribe the ways in which they are capable of acting.
(Note: Although this premise refers to objects and their tendencies and activities, it refrains from saying anything about substance vs. accidents, matter vs. form, or essence vs. existence. These metaphysical categories are of no concern to us.)
(b) The universe itself – or the multiverse, if there is one – can be regarded as a giant natural object.
2. In order to properly ground scientific inferences and everyday inductive knowledge, the tendencies exhibited by natural objects must be construed not merely as properties which describe these objects, but as properties which prescribe the behavior of those objects, and define their very natures. What’s more, these prescriptive rules go all the way down: they are not superimposed on pre-existing objects, but actually constitute those objects, in their very being.
3. By definition, prescriptive rules presuppose a rule-maker. (Rules can only be formulated in some sort of language; hence the notion of a mind-independent rule is an oxymoron.) Thus the existence of prescriptive rules in the natural world can only be explained by an intelligent being or beings who has defined those rules. Hence the rule-governed behavior of natural objects presupposes the existence of an intelligent being or beings who has defined their natures – and hence their very being.
4. An infinite regress of explanations is impossible; all explanations must come to an end somewhere. Hence the intelligent being (or beings) who defines the prescriptive rules which govern the behavior of natural objects and their parts, must not exhibit any built-in, fixed tendencies which can be formulated as invariant propositional rules, and which constrain its mode of acting. Additionally, this intelligent being (or beings) must not be composed of any parts exhibiting such fixed tendencies. We are left, then, with an intelligent being (or beings), whose mode of acting is totally unconstrained by any fixed tendencies of its own, or of any underlying parts.
5. Since the cosmos itself is an entity whose nature is defined by prescriptive rules, it follows that it too requires a Rule-maker, Who must therefore be supernatural, since this Being explains Nature itself. Finally, this Being must be infinite, as nothing constrains its mode of acting. Thus we arrive at an Intelligent Author of Nature, Who is one, simple, supernatural and infinite.
On this account, then, to be infinite is simply to have a nature which is not circumscribed by rules relating to how it can and cannot act. Thus the reason why God must be both supernatural and infinite is that Nature is a giant system of invariant propositional rules (relating to the interactions between various kinds of objects), and because the nature of the Ultimate Rule-maker cannot be defined by any rules of this sort.
How God solves the problem of induction
Even if we grant the existence of a Transcendent Rule-maker for the cosmos, we might still wonder how postulating the existence of such a Being solves the problem of induction. After all, if God’s Nature is not defined in terms of any fixed rules, then that seems to make God a “no rules” Deity. How could it be rational to trust such a Being to make a world in which things behave in a consistent manner? How do we know that God is not an Almighty Joker?
I would like to respond to skeptical concerns about a whimsical Deity by pointing out that I have never argued that God is totally lawless. Consider the traditional concept of God as a simple Being Whose nature it is to know and love in a perfect and unlimited way, and Whose mode of acting is simply to know, love and choose (without anything more basic underlying these acts). The nature of such a Being cannot be characterized by any set of invariant propositional rules; nevertheless, because this Being is essentially loving, there will be certain things that it is incapable of doing – among them being, playing mean tricks on us. Now of course, I haven’t proven that this traditional conception of God is correct. I mention it merely to show that it can be rational to trust a “no rules” Deity.
So, how do I resolve the skeptical problem of induction? I would suggest that the problem disappears if we are prepared to make the following two fairly minimal assumptions about God: first, that if God were to create a cosmos, God would want to produce intelligent beings; and second, that God would want these intelligent beings to know that their Creator exists. (I’m not assuming here that God would want our love or adoration, let alone our prayers.) Since the only way of our knowing God’s existence is through Nature (barring any direct supernatural revelation on God’s part, which very few people claim to have had), it follows that God must have made things in such a way that their natures are knowable by the human mind – or otherwise, we could not reason our way from the knowability of things to the existence of God, Who prescribed the rules which define the nature of things.
“This is all very well,” the skeptic might retort, “but your case for God still hangs on two big ‘ifs.’ How do you know that God is like that?” The short answer is that: (a) my case for the existence of God doesn’t hang on either of the two assumptions in the preceding paragraph – rather, it is my proposed solution to the skeptical problem of induction which hangs upon them; and (b) all I am trying to show here is that invoking God can solve the skeptical problem of induction, not that invoking God will necessarily solve the problem. I made two fairly modest assumptions about the Deity in the preceding paragraph. Given these assumptions, the skeptical problem disappears: if God wants to be known by us, then the things in the world must behave in a reliable fashion. And if they do, then of course, human beings can go about their daily lives and scientists can conduct their research without having to continually worry about whether the sky will fall on them, as the ancient Gauls did. (What I will say, though, is that if I were an atheist, I would be just as worried as the Gauls were.)
The two assumptions which I have made about God follow very naturally from the traditional, classical conception of God as a Being Whose nature it is to know and love in a perfect and unlimited way, and Whose mode of acting is simply to know, love and choose (without anything more basic underlying these acts). Such an essentially loving Being might well wish to create beings capable of knowing (and loving) their Creator.
A skeptic might still object that the classical description of God as a Being Who is simple (having no parts) and at the same time intelligent flies in the face of our experience that all intelligent beings are highly complex entities – a point which Professor Richard Dawkins deploys to great effect in his Ultimate Boeing 747 gambit. But this objection, I would argue, constitutes an illicit use of the principle of induction. It is difficult enough to justify inferences about other natural objects on the basis of objects which we have observed; how much more so, then, when the Being we are talking about lies outside Nature, as its Author? God the Creator is on another plane of reality than we are, and we cannot make legitimate inferences as to whether an intelligent being on this plane of reality would have to be composite or not.
In any case, my argument above for God’s existence did not attempt to prove that God is absolutely simple. Rather, what it tried to show was that God does not contain any parts whose interactions can be characterized by invariant propositional rules – in other words, mechanical parts, whose working can be described by mathematical formulae. I have not discussed the possibility that God might contain parts of some other sort.
How God guarantees that the scientific enterprise works
I alluded above to the troubling fact that even if we assume that objects somehow instantiate rules, there remains the epistemic problem of knowing whether we’ve chosen the right model, or identified the right mathematical equation (i.e. laws of Nature) for characterizing the rules that define a certain kind of object – be it a tiny electron or a star, like the sun. But if we make the two assumptions about God which I referred to in the preceding section – that God wants to make intelligent beings, and that God wants these intelligent beings to reason their way to God’s existence – then we can infer that the rules which are embodied by objects in the natural world must be tailor-made to fit the minds of intelligent beings that are capable of contemplating their Creator. In other words, the universe is designed to be knowable by us. Hence we don’t need to concern ourselves with the theoretical possibility that the rules which characterize things might be too complicated even in principle for us to grasp.
God, then, is the ultimate Guarantor that science can work.
A Short Note on the Problem of Evil
An atheist might object that while I have put forward a powerful argument for the existence of God, the argument from evil is an equally powerful argument against the existence of God. What the objection overlooks is that not all arguments are equally strong. The foregoing argument for God’s existence can be described as a transcendental argument: if God does not exist, then scientific knowledge is impossible; but scientific knowledge is possible; therefore God exists. However, the argument from evil is of a much weaker sort.
It is generally agreed by philosophers that the argument from evil is not a logically conclusive argument against the existence of God – a point conceded even by skeptic John Loftus, in his post, James K. Dew On “The Logical Problem of Evil”. Rather, the argument appeals to powerful prima facie evidence against the existence of God: the existence of senseless evil in our cosmos. Loftus argues that it is not enough for theists to attempt to avoid the force of the argument by saying that it is possible that an omnibenevolent and omnipotent God might make a world in which senseless evil occurs. What needs to be shown, contends Loftus, is that it is reasonably probable that the cosmos would contain senseless evil, if it were made by God. But it is precisely here that the argument from evil displays its Achilles’ heel. A key weakness of the argument is that it is unquantifiable: it makes no attempt to calculate how improbable the existence of God is, given the evil we find in the world. But if one cannot quantify the weight we should attach to evidence against the existence of God, then it would be foolish to place much credence in an argument appealing to such evidence. In short, the argument from evil is properly described as an argument from incredulity, to use the words of Professor Richard Dawkins. The atheist who triumphantly points to some hideous example of evil in the world – say, the Boxing Day tsunami of 2004, which killed some 230,000 people – and grandiosely declares, “Voila! How do you explain that on your hypothesis, hey?”, is making a rhetorical point rather than a logical one. And as Professor Dawkins likes to point out, the mere fact that we cannot imagine a good explanation for some event does not render that event impossible or even improbable. Thus the mere fact that we cannot imagine why God would have allowed the Boxing Day tsunami of 2004 to occur does not necessarily mean that it is unlikely that He would have done so.
I should add that I personally find rhetorical arguments of this sort very forceful, on an intuitive level. But the point I want to make here is that as objective arguments, these rank pretty low on the scale.