Professor Victor Stenger is an American particle physicist and a noted atheist, who popularized the phrase, “Science flies you to the moon. Religion flies you into buildings”. Professor Stenger is also the author of several books, including his recent best-seller, The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us (Prometheus Books, 2011). Stenger’s latest book has been received with great acclaim by atheists: “Stenger has demolished the fine-tuning proponents,” writes one enthusiastic Amazon reviewer, adding that the book tells us “how science is able to demonstrate the non-existence of god.”
Well, it seems that the great Stenger has finally met his match. Dr. Luke A. Barnes, a post-doctoral researcher at the Institute for Astronomy, ETH Zurich, Switzerland, has written a scathing critique of Stenger’s book. I’ve read refutations in my time, but I have to say, this one is devastating.
In his paper, Dr. Barnes takes care to avoid drawing any metaphysical conclusions from the fact of fine-tuning. He has no religious axe to grind. His main concern is simply to establish that the fine-tuning of the universe is real, contrary to the claims of Professor Stenger, who asserts that all of the alleged examples of fine-tuning in our universe can be explained without the need for a multiverse.
Dr. Barnes’ arXiv paper, The Fine-Tuning of the Universe for Intelligent Life (Version 1, December 21, 2011), is available online, and I shall be quoting from it below. Since the paper is quite technical at times, I’ve omitted mathematical equations and kept the references to physical parameters to a minimum, since I simply wish to give readers an overview of what Dr. Barnes perceives as the key flaws in Professor Stenger’s book.
I would like to add that Dr. Barnes has also written an incisive online critique of Michael Ikeda and Bill Jefferys’ widely cited paper, The Anthropic Principle Does Not Support Supernaturalism, which Professor Stenger cites in his book to show that even if some observation were to establish that the universe is fine-tuned, it could only count as evidence against God’s existence. Part 1 of Dr. Barnes’ reply is here; Part 2 is here.
What follows is a selection of quotes from Dr. Barnes’ arXiv paper, covering the key points. All bold emphases are mine, not the author’s, and page references are to Dr. Barnes’ paper. The term “FOFT” in the quotes below is an abbreviation of the title of Professor Stenger’s latest book, The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us (Prometheus Books, 2011).
Finally, I would like to thank Dr. Barnes for making his paper available for public comment online, and I wish him every success in his future scientific work.
The fine-tuning of the universe for intelligent life has received a great deal of attention in recent years, both in the philosophical and scientific literature. The claim is that in the space of possible physical laws, parameters and initial conditions, the set that permits the evolution of intelligent life is very small. I present here a review of the scientific literature, outlining cases of fine-tuning in the classic works of Carter, Carr and Rees, and Barrow and Tipler, as well as more recent work. To sharpen the discussion, the role of the antagonist will be played by Victor Stenger’s recent book The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us. Stenger claims that all known fine-tuning cases can be explained without the need for a multiverse. Many of Stenger’s claims will be found to be highly problematic. We will touch on such issues as the logical necessity of the laws of nature; objectivity, invariance and symmetry; theoretical physics and possible universes; entropy in cosmology; cosmic inflation and initial conditions; galaxy formation; the cosmological constant; stars and their formation; the properties of elementary particles and their effect on chemistry and the macroscopic world; the origin of mass; grand unified theories; and the dimensionality of space and time. I also provide an assessment of the multiverse, noting the significant challenges that it must face. I do not attempt to defend any conclusion based on the fine-tuning of the universe for intelligent life. This paper can be viewed as a critique of Stenger’s book, or read independently. (p. 1)
The claim that the universe is fine-tuned can be formulated as:
FT: In the set of possible physics, the subset that permit the evolution of life is very small.
As it stands, FT [the fine-tuning claim – VJT] is precise enough to distinguish itself from a number of other claims for which it is often mistaken. FT is not the claim that this universe is optimal for life, that it contains the maximum amount of life per unit volume or per baryon, that carbon-based life is the only possible type of life, or that the only kinds of universes that support life are minor variations on this universe. These claims, true or false, are simply beside the point. (p. 3)
The reason why FT [the fine-tuning claim – VJT] is an interesting claim is that it makes the existence of life in this universe appear to be something remarkable, something in need of explanation. The intuition here is that, if ours were the only universe, and if the causes that established the physics of our universe were indifferent to whether it would evolve life, then the chances of hitting upon a life-permitting universe are very small. (p. 3)
There are a few fallacies to keep in mind as we consider cases of fine-tuning.
The Cheap-Binoculars Fallacy: “Don’t waste money buying expensive binoculars. Simply stand closer to the object you wish to view”. We can make any point (or outcome) in possibility space seem more likely by zooming-in on its neighbourhood. Having identified the life-permitting region of parameter space, we can make it look big by deftly choosing the limits of the plot. We could also distort parameter space using, for example, logarithmic axes. A good example of this fallacy is quantifying the fine-tuning of a parameter relative to its value in our universe, rather than the totality of possibility space. If a dart lands 3 mm from the centre of a dartboard, it is obviously fallacious to say that because the dart could have landed twice as far away and still scored a bullseye, the throw is only fine-tuned to a factor of two and there is “plenty of room” inside the bullseye. The correct comparison is between the area (or more precisely, solid angle) of the bullseye to the area in which the dart could land. Similarly, comparing the life-permitting range to the value of the parameter in our universe necessarily produces a bias toward underestimating fine-tuning, since we know that our universe is in the life-permitting range.
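Barnes’ dartboard analogy can be made quantitative with a toy calculation. (The numbers below are my own illustrative choices, not taken from the paper.)

```python
# Hypothetical numbers: a 6.35 mm bullseye on a 451 mm dartboard,
# with our dart landing 3 mm from the centre.
bullseye_radius = 6.35      # mm (standard inner bull, for illustration)
board_radius = 451 / 2      # mm
dart_distance = 3.0         # mm

# The fallacious comparison: room for "success" relative to our own throw.
fallacious_factor = bullseye_radius / dart_distance   # ~2: "plenty of room"

# The correct comparison: area of the bullseye vs area the dart could hit.
area_fraction = (bullseye_radius / board_radius) ** 2

print(f"'Fine-tuned to a factor of' {fallacious_factor:.1f}")
print(f"Fraction of target area that scores a bullseye: {area_fraction:.2%}")
```

The first comparison suggests the throw was barely constrained; the second shows the bullseye occupies well under a tenth of a percent of the board.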
The Flippant Funambulist Fallacy: “Tightrope-walking is easy!”, the man says, “just look at all the places you could stand and not fall to your death!”. This is nonsense, of course: a tightrope walker must overbalance in a very specific direction if her path is to be life-permitting. The freedom to wander is tightly constrained. When identifying the life-permitting region of parameter space, the shape of the region is not particularly relevant. An elongated life-friendly region is just as fine-tuned as a compact region of the same area. The fact that we can change the setting on one cosmic dial, so long as we very carefully change another at the same time, does not necessarily mean that FT [the fine-tuning claim – VJT] is false.
The Sequential Juggler Fallacy: “Juggling is easy!”, the man says, “you can throw and catch a ball. So just juggle all five, one at a time”. Juggling five balls one-at-a-time isn’t really juggling. For a universe to be life-permitting, it must satisfy a number of constraints simultaneously. For example, a universe with the right physical laws for complex organic molecules, but which re-collapses before it is cool enough to permit neutral atoms will not form life. One cannot refute FT by considering life-permitting criteria one-at-a-time and noting that each can be satisfied in a wide region of parameter space. In set-theoretic terms, we are interested in the intersection of the life-permitting regions, not the union.
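The set-theoretic point at the end of the juggler fallacy can be illustrated with a Monte Carlo sketch. (The three “criteria” below are invented for illustration; nothing here is from Barnes’ paper.)

```python
import random

random.seed(0)

# Toy model: two parameters in [0, 1] and three independent "life-permitting"
# criteria, each easy to satisfy on its own.
def c1(x, y): return x > 0.2            # ~80% of parameter space
def c2(x, y): return y > 0.2            # ~80% of parameter space
def c3(x, y): return abs(x - y) < 0.05  # a narrow diagonal band, ~10%

N = 100_000
samples = [(random.random(), random.random()) for _ in range(N)]

union = sum(c1(x, y) or c2(x, y) or c3(x, y) for x, y in samples) / N
intersection = sum(c1(x, y) and c2(x, y) and c3(x, y) for x, y in samples) / N

print(f"Union (criteria taken one at a time): {union:.2f}")
print(f"Intersection (all at once):           {intersection:.3f}")
```

Checked one at a time, almost every point in parameter space passes some criterion; demanded simultaneously, fewer than one point in ten survives.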
The Cane Toad Solution: In 1935, the Bureau of Sugar Experiment Stations was worried by the effect of the native cane beetle on Australian sugar cane crops. They introduced 102 cane toads, imported from Hawaii, into parts of Northern Queensland in the hope that they would eat the beetles. And thus the problem was solved forever, except for the 200 million cane toads that now call eastern Australia home, eating smaller native animals, and secreting a poison that kills any larger animal that preys on them. A cane toad solution, then, is one that doesn’t consider whether the end result is worse than the problem itself. When presented with a proposed fine-tuning explainer, we must ask whether the solution is more fine-tuned than the problem.
Stenger is a particle physicist, a noted speaker, and the author of a number of books and articles on science and religion. In his latest book, “The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us” [hereafter FOFT], he makes the following bold claim:
[T]he most commonly cited examples of apparent fine-tuning can be readily explained by the application of a little well-established physics and cosmology. … [S]ome form of life would have occurred in most universes that could be described by the same physical models as ours, with parameters whose ranges varied over ranges consistent with those models. And I will show why we can expect to be able to describe any uncreated universe with the same models and laws with at most slight, accidental variations. Plausible natural explanations can be found for those parameters that are most crucial for life… My case against fine-tuning will not rely on speculations beyond well-established physics nor on the existence of multiple universes. [FOFT pp. 22, 24]
Let’s be clear on the task that Stenger has set for himself. There are a great many scientists, of varying religious persuasions, who accept that the universe is fine-tuned for life, e.g. Barrow, Carr, Carter, Davies, Dawkins, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, Wilczek. They differ, of course, on what conclusion we should draw from this fact. Stenger, on the other hand, claims that the universe is not fine-tuned. (pp. 6-7)
The Laws of Nature
Are the laws of nature themselves fine-tuned? Stenger defends the ambitious claim that the laws of nature could not have been different because they can be derived from the requirement that they be Point-of-View Invariant (hereafter, PoVI)…
We can formulate Stenger’s argument for this conclusion as follows:
LN1. If our formulation of the laws of nature is to be objective, it must be PoVI.
LN2. Invariance implies conserved quantities (Noether’s theorem).
LN3. Thus, “when our models do not depend on a particular point or direction in space or a particular moment in time, then those models must necessarily contain the quantities linear momentum, angular momentum, and energy, all of which are conserved. Physicists have no choice in the matter, or else their models will be subjective, that is, will give uselessly different results for every different point of view. And so the conservation principles are not laws built into the universe or handed down by deity to govern the behavior of matter. They are principles governing the behavior of physicists.” [FOFT p. 82, emphasis original]
This argument commits the fallacy of equivocation – the term “invariant” has changed its meaning between LN1 and LN2. (pp. 7-8)
Conclusion: We can now see the flaw in Stenger’s argument. Premise LN1 should read: If our formulation of the laws of nature is to be objective, then it must be covariant. Premise LN2 should read: symmetries imply conserved quantities. Since ‘covariant’ and ‘symmetric’ are not synonymous, it follows that the conclusion of the argument is unproven, and we would argue that it is false. The conservation principles of this universe are not merely principles governing our formulation of the laws of nature. (p. 17)
SSB [spontaneous symmetry breaking – VJT] allows the laws of nature to retain their symmetry and yet have asymmetric solutions.
Even if the symmetries of the laws of nature were inevitable, it would still be an open question as to precisely which symmetries were broken in our universe and which were unbroken. (p. 18)
Changing the Laws of Nature
What if the laws of nature were different? Stenger says:
… what about a universe with a different set of “laws”? There is not much we can say about such a universe, nor do we need to. Not knowing what any of their parameters are, no one can claim that they are fine-tuned. [FOFT p. 69]
In reply, fine-tuning isn’t about what the parameters and laws are in a particular universe. Given some other set of laws, we ask: if a universe were chosen at random from the set of universes with those laws, what is the probability that it would support intelligent life? If that probability is suitably (and robustly) small, then we conclude that that region of possible-physics-space contributes negligibly to the total life-permitting subset. It is easy to find examples of such claims.
* A universe governed by Maxwell’s Laws “all the way down” (i.e. with no quantum regime at small scales) will not have stable atoms, since electrons radiate their kinetic energy and spiral rapidly into the nucleus, and hence no chemistry (Barrow & Tipler, 1986, pg. 303). We don’t need to know what the parameters are to know that life in such a universe is plausibly impossible.
* If electrons were bosons, rather than fermions, then they would not obey the Pauli exclusion principle. There would be no chemistry.
* If gravity were repulsive rather than attractive, then matter wouldn’t clump into complex structures. Remember: your density, thank gravity, is 10^30 times greater than the average density of the universe.
* If the strong force were a long rather than short-range force, then there would be no atoms. Any structures that formed would be uniform, spherical, undifferentiated lumps, of arbitrary size and incapable of complexity.
* If, in electromagnetism, like charges attracted and opposites repelled, then there would be no atoms. As above, we would just have undifferentiated lumps of matter.
* The electromagnetic force allows matter to cool into galaxies, stars, and planets. Without such interactions, all matter would be like dark matter, which can only form into large, diffuse, roughly spherical haloes of matter whose only internal structure consists of smaller, diffuse, roughly spherical subhaloes. (p. 18)
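The first point in the list above, about a purely classical atom, can be checked with a back-of-the-envelope calculation using the standard classical infall-time estimate. (This is my own sketch, not a calculation from Barnes’ paper.)

```python
# Order-of-magnitude check: in a purely classical atom, Larmor radiation
# makes the orbiting electron spiral into the nucleus in a time
# t = a^3 / (4 * r0^2 * c), where r0 is the classical electron radius.
c  = 2.998e8        # speed of light, m/s
a0 = 5.29e-11       # Bohr radius, m
r0 = 2.818e-15      # classical electron radius, m

t_infall = a0**3 / (4 * r0**2 * c)
print(f"Classical infall time: {t_infall:.1e} s")  # roughly 1.6e-11 s
```

On classical physics alone, atoms collapse in about ten picoseconds; the instability is not subtle.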
Moving from the laws of nature to the parameters of those laws, Stenger makes the following general argument against supposed examples of fine-tuning:
[T]he examples of fine-tuning given in the theist literature . . . vary one parameter while holding all the rest constant. This is both dubious and scientifically shoddy. As we shall see in several specific cases, changing one or more other parameters can often compensate for the one that is changed. [FOFT p. 70]
To illustrate this point, Stenger introduces “the wedge”… Here, x and y are two physical parameters that can vary from zero to x-max and y-max, where we can allow these values to approach infinity if so desired. The point (x0, y0) represents the values of x and y in our universe. The life-permitting range is the shaded wedge. Stenger’s point is that varying only one parameter at a time only explores that part of parameter space which is vertically or horizontally adjacent to (x0, y0), thus missing most of parameter space. (p. 19)
In response, fine-tuning relies on a number of independent life-permitting criteria. Fail any of these criteria, and life becomes dramatically less likely, if not impossible. When parameter space is explored in the scientific literature, it rarely (if ever) looks like the wedge. We instead see many intersecting wedges. Here are two examples… (p. 20)
These two examples show that the wedge, by only considering a single life-permitting criterion, seriously distorts typical cases of fine-tuning by committing the sequential juggler fallacy (Section 2). Stenger further distorts the case for fine-tuning by saying:
In the fine-tuning view, there is no wedge and the point has infinitesimal area, so the probability of finding life is zero. [FOFT p. 70]
No reference is given, and this statement is not true of the scientific literature. The wedge is a straw man. (p. 21)
We turn now to cosmology. The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low entropy state. (p. 23)
Let’s return to Stenger’s proposed solution… Stenger takes it for granted that the universe is homogeneous and isotropic. We can see this also in his use of the Friedmann equation, which assumes that space-time is homogeneous and isotropic. Not surprisingly, once homogeneity and isotropy have been assumed, Stenger finds that the solution to the entropy problem is remarkably easy.
We conclude that Stenger has not only failed to solve the entropy problem; he has failed to comprehend it. He has presented the problem itself as its solution. Homogeneous, isotropic expansion cannot solve the entropy problem – it is the entropy problem. Stenger’s assertion that “the universe starts out with maximum entropy or complete disorder” is false. A homogeneous, isotropic spacetime is an incredibly low entropy state. Penrose (1989) warned of precisely this brand of failed solution two decades ago... (p. 26)
We turn now to cosmic inflation, which proposes that the universe underwent a period of accelerated expansion in its earliest stages. The achievements of inflation are truly impressive – in one fell swoop, the universe is sent on its expanding way, the flatness, horizon, and monopole problem are solved and we have concrete, testable and seemingly correct predictions for the origin of cosmic structure. It is a brilliant idea, and one that continues to defy all attempts at falsification. Since life requires an almost-flat universe (Barrow & Tipler, 1986, pg. 408ff.), inflation is potentially a solution to a particularly impressive fine-tuning problem – sans inflation, the density of the universe at the Planck time must be tuned to 60 decimal places in order for the universe to be life-permitting. (p. 27)
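The “60 decimal places” figure quoted above comes from a standard flatness-problem estimate, which can be sketched in orders of magnitude. (This is my own rough version of the textbook calculation, not Barnes’ own working.)

```python
import math

# Textbook growth rates for the deviation from flatness: |Omega - 1| grows
# roughly in proportion to t during radiation domination, and roughly as
# t^(2/3) during matter domination.
t_planck = 1e-43   # Planck time, s
t_eq = 1e12        # matter-radiation equality, s (order of magnitude)
t_now = 4e17       # present age of the universe, s

decades = math.log10(t_eq / t_planck) + (2 / 3) * math.log10(t_now / t_eq)
print(f"|Omega - 1| at the Planck time must be tuned to ~{decades:.0f} decimal places")
```

The rough inputs give a figure in the high fifties, consistent with the “60 decimal places” quoted in the text.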
Let’s summarise. Inflation is a wonderful idea; in many ways it seems irresistible (Liddle, 1995). However, we do not have a physical model, and even if we had such a model, “although inflationary models may alleviate the “fine tuning” in the choice of initial conditions, the models themselves create new “fine tuning” issues with regard to the properties of the scalar field” (Hollands & Wald, 2002b). To pretend that the mere mention of inflation makes a life-permitting universe “100 percent” inevitable [FOFT p. 245] is naive in the extreme, a cane toad solution. (p. 31)
Suppose that inflation did solve the fine-tuning of the density of the universe. Is it reasonable to hope that all fine-tuning cases could be solved in a similar way? We contend not, because inflation has a target. Let’s consider the range of densities that the universe could have had at some point in its early history. One of these densities is physically singled out as special – the critical density. Now let’s note the range of densities that permit the existence of cosmic structure in a long-lived universe. We find that this range is very narrow. Very conveniently, this range neatly straddles the critical density.
We can now see why inflation has a chance. There is in fact a three-fold coincidence – A: the density needed for life, B: the critical density, and C: the actual density of our universe are all aligned. B and C are physical parameters, and so it is possible that some physical process can bring the two into agreement. The coincidence between A and B then creates the required anthropic coincidence (A and C). If, for example, life required a universe with a density (say, just after reheating) 10 times less than critical, then inflation would do a wonderful job of making all universes uninhabitable.
Inflation thus represents a very special case. Waiting inside the life-permitting range (L) is another physical parameter (p). Aim for p and you will get L thrown in for free. This is not true of the vast majority of fine-tuning cases. There is no known physical scale waiting in the life-permitting range of the quark masses, fundamental force strengths or the dimensionality of space-time. There can be no inflation-like dynamical solution to these fine-tuning problems because dynamical processes are blind to the requirements of intelligent life.
What if, unbeknownst to us, there was such a fundamental parameter? It would need to fall into the life-permitting range. As such, we would be solving a fine-tuning problem by creating at least one more. And we would also need to posit a physical process able to dynamically drive the value of the quantity in our universe toward p. (pp. 31-32)
The Amplitude of Primordial Fluctuations Q
Q, the amplitude of primordial fluctuations, is one of Martin Rees’ Just Six Numbers. In our universe, its value is [approx.] Q = 2 x 10^(-5), meaning that in the early universe the density at any point was typically within 1 part in 100,000 of the mean density. What if Q were different? (p. 32)
If Q were smaller than 10^(-6), gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial ‘mix’ of atoms, dark energy and radiation were the same as our own. On the other hand, a universe where Q were substantially larger than 10^(-5) – where the initial “ripples” were replaced by large-amplitude waves – would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn’t fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe… Stars would be packed too close together and buffeted too frequently to retain stable planetary systems. (Rees, 1999, pg. 115)
Stenger has two replies…. (p. 32)
Stenger’s second reply is to ask “… is an order of magnitude fine-tuning? …”
There are a few problems here. We have a clear case of the flippant funambulist fallacy – the possibility of altering other constants to compensate the change in Q is not evidence against fine-tuning. Choose Q and, say, alpha-G at random and you are unlikely to have picked a life-permitting pair, even if our universe is not the only life-permitting one. We also have a nice example of the cheap-binoculars fallacy. The allowed change in Q relative to its value in our universe (“an order of magnitude”) is necessarily an underestimate of the degree of fine-tuning. The question is whether this range is small compared to the possible range of Q. Stenger seems to see this problem, and so argues that large values of Q are unlikely to result from inflation. This claim is false, and symptomatic of Stenger’s tenuous grasp of cosmology. (p. 33)
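The cheap-binoculars point about Q can be illustrated with a toy calculation. The life-permitting bounds follow the rough numbers quoted from Rees above; the “possible range” of Q is a pure assumption, chosen only for illustration.

```python
import math

# Life-permitting window for Q, per the Rees quote above (roughly):
q_min, q_max = 1e-6, 1e-4
q_ours = 2e-5          # value in our universe

# Cheap-binoculars comparison: headroom relative to our own value.
headroom = q_max / q_ours   # a factor of a few: looks un-fine-tuned

# Relevant comparison: the window against the possible range of Q.
# Suppose (purely for illustration) Q could have fallen anywhere in [1e-12, 1].
window_decades = math.log10(q_max / q_min)
possible_decades = math.log10(1 / 1e-12)

print(f"Headroom above our value: {headroom:g}x")
print(f"Log-measure fraction of the assumed possible range: "
      f"{window_decades / possible_decades:.0%}")
```

Measured against our own value, Q looks roomy; measured against even a modest assumed possibility space, the life-permitting window is a small fraction of it.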
The fine-tuning of Q stands up well under examination. (p. 34)
The Cosmological Constant, Lambda
The cosmological constant problem is described in the textbook of Burgess & Moore (2006) as “arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it”. A well-understood and well-tested theory of fundamental physics (Quantum Field Theory – QFT) predicts contributions to the vacuum energy of the universe that are [approx.] 10^120 times greater than the observed total value. Stenger’s reply is guided by the following principle:
Any calculation that disagrees with the data by 50 or 120 orders of magnitude is simply wrong and should not be taken seriously. We just have to await the correct calculation. [FOFT p. 219]
This seems indistinguishable from reasoning that the calculation must be wrong since otherwise the cosmological constant would have to be fine-tuned. One could not hope for a more perfect example of begging the question. More importantly, there is a misunderstanding in Stenger’s account of the cosmological constant problem. The problem is not that physicists have made an incorrect prediction. We can use the term dark energy for any form of energy that causes the expansion of the universe to accelerate, including a “bare” cosmological constant (see Barnes et al., 2005, for an introduction to dark energy). Cosmological observations constrain the total dark energy. QFT [quantum field theory – VJT] allows us to calculate a number of contributions to the total dark energy from matter fields in the universe. Each of these contributions turns out to be 10^120 times larger than the total. There is no direct theory-vs.-observation contradiction as one is calculating and measuring different things. The fine-tuning problem is that these different independent contributions, including perhaps some that we don’t know about, manage to cancel each other to such an alarming, life-permitting degree. This is not a straightforward case of Popperian falsification. (pp. 34-35)
The cosmological constant problem is actually a misnomer. This section has discussed the “bare” cosmological constant. It comes purely from general relativity, and is not associated with any particular form of energy. The 120 orders-of-magnitude problem refers to vacuum energy associated with the matter fields of the universe… The source of the confusion is the fact that vacuum energy has the same dynamical effect as the cosmological constant, so that observations measure an “effective” cosmological constant: effective-Lambda = bare-Lambda + vacuum-Lambda. The cosmological constant problem is really the vacuum energy problem. Even if Stenger could show that bare-Lambda = 0, this would do nothing to address why effective-Lambda is observed to be so much smaller than the predicted contributions to vacuum-Lambda. (p. 36)
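The degree of cancellation required can be made vivid with exact arithmetic. (This is a schematic sketch in invented units, not a physical calculation; the 10^120 factor is the one quoted above.)

```python
from decimal import Decimal, getcontext

getcontext().prec = 150   # enough digits to track a 120-digit cancellation

observed = Decimal(1)           # effective Lambda, in its own units
vacuum = Decimal(10) ** 120     # schematic QFT contribution, ~1e120 x larger

# For the observed value to come out right, the bare term must cancel the
# vacuum contribution to 120 decimal places:
bare = observed - vacuum
effective = bare + vacuum
print(effective)                # exact cancellation recovers the observed value

# Get the bare term "wrong" in its 60th digit and the result is catastrophic:
sloppy_bare = bare + Decimal(10) ** 60
print(sloppy_bare + vacuum)     # ~1e60: sixty orders of magnitude too large
```

Even an error in the sixtieth digit of the cancellation leaves an effective cosmological constant sixty orders of magnitude too big.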
There are a number of excellent reviews of the cosmological constant in the scientific literature (Weinberg, 1989; Carroll, 2001; Vilenkin, 2003; Polchinski, 2006; Durrer & Maartens, 2007; Padmanabhan, 2007; Bousso, 2008). In none will you find Stenger’s particular brand of dismissiveness. The calculations are known to be correct in other contexts and so are taken very seriously. Supersymmetry won’t help. The problem cannot be defined away. (p. 38)
The Origin of Mass
Let’s consider Stenger’s responses to these cases of fine-tuning. (p. 47)
Stenger is either not aware of the hierarchy and flavour problems, or else he has solved some of the most pressing problems in particle physics and not bothered to pass this information on to his colleagues… (p. 47)
We can draw some conclusions. First, Stenger’s discussion of the surprising lightness of fundamental masses is woefully inadequate. To present it as a solved problem of particle physics is a gross misrepresentation of the literature. Secondly, smallness is not sufficient for life… The masses must be sufficiently small but not too small. Finally, suppose that the LHC [Large Hadron Collider – VJT] discovers that supersymmetry is a (broken) symmetry of our universe. This would not be the discovery that the universe could not have been different. It would not be the discovery that the masses of the fundamental particles must be small. It would at most show that our universe has chosen a particularly elegant and beautiful way to be life-permitting. (p. 49)
Protons, Neutrons, Electrons
We turn now to the relative masses of the three most important particles in our universe: the proton, neutron and electron, from which atoms are made. Consider first the ratio of the electron to the proton mass, … of which Stenger says: “…we can argue that the electron mass is going to be much smaller than the proton mass in any universe even remotely like ours.” [FOFT p. 164] (p. 50)
The fact that Stenger is comparing the electron mass in our universe with the electron mass in universes “like ours” is all the evidence one needs to conclude that Stenger doesn’t understand fine-tuning. The fact that universes like ours turn out to be rather similar to our universe isn’t particularly enlightening. (p. 50)
Finally, and most importantly, note carefully Stenger’s conclusion. He states that no fine-tuning is needed for the neutron-proton mass difference in our universe to be approximately equal to the up quark-down quark mass difference in our universe. Stenger has compared our universe with our universe and found no evidence of fine-tuning. There is no discussion of the life-permitting range, no discussion of the possible range of [mass(neutron) – mass(proton)] (or its relation to the possible range of [mass(down quark) – mass(up quark)]), and thus no relevance to fine-tuning whatsoever. (p. 51)
The Strength of the Fundamental Forces – Conclusion
Suppose Bob sees Alice throw a dart and hit the bullseye. “Pretty impressive, don’t you think?”, says Alice. “Not at all”, says Bob, “the point-of-impact of the dart can be explained by the velocity with which the dart left your hand. No fine-tuning is needed.” On the contrary, the fine-tuning of the point of impact (i.e. the smallness of the bullseye relative to the whole wall) is evidence for the fine-tuning of the initial velocity.
This flaw alone makes much of Chapters 7 to 10 of FOFT irrelevant. The question of the fine-tuning of these more fundamental parameters is not even asked, making the whole discussion a cane toad solution. Stenger has given us no reason to think that the life-permitting region is larger, or possibility space smaller, than has been calculated in the fine-tuning literature.
The parameters of the standard model remain some of the best understood and most impressive cases of fine-tuning. (pp. 54-55)
Dimensionality of Spacetime
A number of authors have emphasised the life-permitting properties of the particular combination of one time- and three space-dimensions, going back to Ehrenfest (1917) and Whitrow (1955), summarised in Barrow & Tipler (1986) and Tegmark (1997). (p. 55)
FOFT addresses the issue:
Martin Rees proposes that the dimensionality of the universe is one of six parameters that appear particularly adjusted to enable life … Clearly Rees regards the dimensionality of space as a property of objective reality. But is it? I think not. Since the space-time model is a human invention, so must be the dimensionality of space-time. We choose it to be three because it fits the data. In the string model, we choose it to be ten. We use whatever works, but that does not mean that reality is exactly that way. [FOFT p. 51]
…String theory is actually an excellent counterexample to Stenger’s claims. String theorists are not content to posit ten dimensions and leave it at that. They must compactify all but 3+1 of the extra dimensions for the theory to have a chance of describing our universe. This fine-tuning case refers to the number of macroscopic or ‘large’ space dimensions, which both string theory and classical physics agree to be three. The possible existence of small, compact dimensions is irrelevant. (p. 56)
The confusion of Stenger’s response is manifest in the sentence: “We choose three [dimensions] because it fits the data” [FOFT p. 51]. This isn’t much of a choice. One is reminded of the man who, when asked why he chose to join the line for ‘non-hen-pecked husbands’, answered, “because my wife told me to”. The universe will let you choose, for example, your unit of length. But you cannot decide that the macroscopic world has four space dimensions. It is a mathematical fact that in a universe with four spatial dimensions you could, with a judicious choice of axis, make a left-footed shoe into a right-footed one by rotating it. Our inability to perform such a transformation is not the result of physicists arbitrarily deciding that, in this space-time model we’re inventing, space will have three dimensions. (p. 56)
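The shoe-rotation claim is a mathematical fact that is easy to verify numerically: embed a chiral 3D object in four dimensions, rotate it continuously in the x-w plane, and its handedness flips. (A sketch in plain Python, my own illustration.)

```python
import math

def rotate_xw(p, theta):
    """Rotate a 4D point by angle theta in the x-w plane (y, z untouched)."""
    x, y, z, w = p
    return (x * math.cos(theta) - w * math.sin(theta),
            y, z,
            x * math.sin(theta) + w * math.cos(theta))

def orientation(pts):
    """Determinant of the tetrahedron's edge vectors (3D part only).
    Its sign encodes handedness: a mirror image flips the sign."""
    (x0, y0, z0, _), *rest = pts
    v = [(x - x0, y - y0, z - z0) for x, y, z, _ in rest]
    return (v[0][0] * (v[1][1] * v[2][2] - v[1][2] * v[2][1])
          - v[0][1] * (v[1][0] * v[2][2] - v[1][2] * v[2][0])
          + v[0][2] * (v[1][0] * v[2][1] - v[1][1] * v[2][0]))

# A chiral "shoe": a tetrahedron sitting in the w = 0 hyperplane.
shoe = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]

flipped = [rotate_xw(p, math.pi) for p in shoe]  # one continuous 4D rotation

print(orientation(shoe), orientation(flipped))   # opposite signs
```

In three dimensions no rotation can change the sign of this determinant; with a fourth spatial dimension available, a half-turn through the x-w plane does it.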
Could a multiverse proposal ever be regarded as scientific? FOFT p. 228 notes the similarity between undetectable universes and undetectable quarks, but the analogy is not a good one. The properties of quarks – mass, charge, spin, etc. – can be inferred from measurements. Quarks have a causal effect on particle accelerator measurements; if the quark model were wrong, we would know about it. In contrast, we cannot observe any of the properties of a multiverse… as they have no causal effect on our universe. We could be completely wrong about everything we believe about these other universes and no observation could correct us. The information is not here. The history of science has repeatedly taught us that experimental testing is not an optional extra. The hypothesis that a multiverse actually exists will always be untestable.
The most optimistic scenario is where a physical theory, which has been well-tested in our universe, predicts a universe-generating mechanism. Even then, there would still be questions beyond the reach of observation, such as whether the necessary initial conditions for the generator hold in the metaspace, and whether there are modifications to the physical theory that arise at energy scales or on length scales relevant to the multiverse but beyond testing in our universe. Moreover, the process by which a new universe is spawned almost certainly cannot be observed. (p. 58)
We should be wary of any multiverse which allows for single brains, imprinted with memories, to fluctuate into existence. The worry is that, for every observer who really is a carbon-based life form who evolved on a planet orbiting a star in a galaxy, there are vastly more for whom this is all a passing dream, the few, fleeting fancies of a phantom fluctuation. (p. 61)
Another argument against the multiverse is given by Penrose (2004, pg. 763ff.). As with the Boltzmann multiverse, the problem is that this universe seems uncomfortably roomy. (p. 62)
In other words, if we live in a multiverse generated by a process like chaotic inflation, then for every observer who observes a universe of our size, there are 10^(10^123) who observe a universe that is just 10 times smaller. This particular multiverse dies the same death as the Boltzmann multiverse. Penrose’s argument is based on the place of our universe in phase space, and is thus generic enough to apply to any multiverse proposal that creates more small universe domains than large ones. Most multiverse mechanisms seem to fall into this category. (p. 62)
A multiverse generated by a simple underlying mechanism is a remarkably seductive idea. The mechanism would be an extrapolation of known physics, that is, physics with an impressive record of explaining observations from our universe. The extrapolation would be natural, almost inevitable. The universe as we know it would be a very small part of a much larger whole. Cosmology would explore the possibilities of particle physics; what we know as particle physics would be mere by-laws in an unimaginably vast and variegated cosmos. The multiverse would predict what we expect to observe by predicting what conditions hold in universes able to support observers.
Sadly, most of this scenario is still hypothetical. The goal of this section has been to demonstrate the mountain that the multiverse is yet to climb, the challenges that it must face openly and honestly. The multiverse may yet solve the fine-tuning of the universe for intelligent life, but it will not be an easy solution. “Multiverse” is not a magic word that will make all the fine-tuning go away. (p. 62)
Conclusions and Future
We conclude that the universe is fine-tuned for the existence of life. Of all the ways that the laws of nature, constants of physics and initial conditions of the universe could have been, only a very small subset permits the existence of intelligent life. (p. 62)
It is not true that fine-tuning must eventually yield to the relentless march of science. Fine-tuning is not a typical scientific problem, that is, a phenomenon in our universe that cannot be explained by our current understanding of physical laws. It is not a gap. Rather, we are concerned with the physical laws themselves. In particular, the anthropic coincidences are not like, say, the coincidence between inertial mass and gravitational mass in Newtonian gravity, which is a coincidence between two seemingly independent physical quantities. Anthropic coincidences, on the other hand, involve a happy consonance between a physical quantity and the requirements of complex, embodied intelligent life. The anthropic coincidences are so arresting because we are accustomed to thinking of physical laws and initial conditions as being unconcerned with how things turn out. Physical laws are material and efficient causes, not final causes. There is, then, no reason to think that future progress in physics will render a life-permitting universe inevitable. When physics is finished, when the equation is written on the blackboard and fundamental physics has gone as deep as it can go, fine-tuning may remain, basic and irreducible. (p. 63)
Perhaps the most optimistic scenario is that we will eventually discover a simple, beautiful physical principle from which we can derive a unique physical theory, whose unique solution describes the universe as we know it, including the standard model, quantum gravity, and (dare we hope) the initial conditions of cosmology. While this has been the dream of physicists for centuries, there is not the slightest bit of evidence that this idea is true. It is almost certainly not true of our best hope for a theory of quantum gravity, string theory, which has “anthropic principle written all over it” (Schellekens, 2008). The beauty of its principles has not saved us from the complexity and contingency of the solutions to its equations. Beauty and simplicity are not necessity. (p. 63)
Appendix B – Stenger’s MonkeyGod
In Chapter 13, Stenger argues against the fine-tuning of the universe for intelligent life using the results of a computer code, subtly named MonkeyGod. It is a Monte Carlo code, which chooses values of certain parameters from a given probability density function (PDF) and then calculates whether a universe with those parameters would support life. (p. 68)
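Stenger’s actual MonkeyGod source is not reproduced in either book, but the Monte Carlo approach Barnes describes is simple to sketch: draw parameter values from an assumed probability density function, apply a life-permitting criterion, and tally the surviving fraction. The sketch below is purely illustrative; the parameter names, the uniform priors, and the habitability condition are my own placeholders, and, as Barnes’ critique emphasises, it is precisely these choices of PDF, range, and criterion that do all the work:

```python
import random

def sample_parameters(rng):
    # Illustrative only: draw two dimensionless "constants" from uniform
    # priors over arbitrary ranges. The choice of PDF and range is
    # exactly the step Barnes criticises as driving the result.
    alpha = rng.uniform(0.0, 1.0)
    beta = rng.uniform(0.0, 1.0)
    return alpha, beta

def is_life_permitting(alpha, beta):
    # Placeholder criterion, standing in for whatever habitability
    # condition a code like MonkeyGod evaluates for each trial universe.
    return abs(alpha - beta) < 0.1

def monte_carlo_fraction(n_trials, seed=0):
    # Fraction of randomly drawn "universes" passing the criterion.
    rng = random.Random(seed)
    hits = sum(
        is_life_permitting(*sample_parameters(rng))
        for _ in range(n_trials)
    )
    return hits / n_trials

print(monte_carlo_fraction(100_000))
```

For these placeholder choices the life-permitting fraction is large by construction; a different prior or a stricter criterion would yield a fraction as small as one likes, which is why the output of such a code carries no force unless the inputs are independently justified.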
We conclude that MonkeyGod is so deeply flawed that its results are meaningless. (p. 71)