Uncommon Descent Serving The Intelligent Design Community

Is fine-tuning a fallacy?


Professor Victor Stenger is an American particle physicist and a noted atheist, who popularized the phrase, “Science flies you to the moon. Religion flies you into buildings”. Professor Stenger is also the author of several books, including his recent best-seller, The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us (Prometheus Books, 2011). Stenger’s latest book has been received with great acclaim by atheists: “Stenger has demolished the fine-tuning proponents,” writes one enthusiastic Amazon reviewer, adding that the book tells us “how science is able to demonstrate the non-existence of god.”

Well, it seems that the great Stenger has finally met his match. Dr. Luke A. Barnes, a post-doctoral researcher at the Institute for Astronomy, ETH Zurich, Switzerland, has written a scathing critique of Stenger’s book. I’ve read refutations in my time, but I have to say, this one is devastating.

In his paper, Dr. Barnes takes care to avoid drawing any metaphysical conclusions from the fact of fine-tuning. He has no religious axe to grind. His main concern is simply to establish that the fine-tuning of the universe is real, contrary to the claims of Professor Stenger, who asserts that all of the alleged examples of fine-tuning in our universe can be explained without the need for a multiverse.

Dr. Barnes’ arXiv paper, The Fine-Tuning of the Universe for Intelligent Life (Version 1, December 21, 2011), is available online, and I shall be quoting from it below. Since the paper is quite technical at times, I’ve omitted mathematical equations and kept references to physical parameters to a minimum, since I simply wish to give readers an overview of what Dr. Barnes perceives as the key flaws in Professor Stenger’s book.

I would like to add that Dr. Barnes has also written an incisive online critique of Mike Ikeda and Bill Jefferys’ widely cited paper, The Anthropic Principle Does Not Support Supernaturalism, which Professor Stenger cites in his book to argue that even if some observation were to establish that the universe is fine-tuned, this could only count as evidence against God’s existence. Part 1 of Dr. Barnes’ reply is here; Part 2 is here.

What follows is a selection of quotes from Dr. Barnes’ arXiv paper, covering the key points. All bold emphases are mine, not the author’s, and page references are to Dr. Barnes’ paper. The term “FOFT” in the quotes below is an abbreviation of the title of Professor Stenger’s latest book, The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us (Prometheus Books, 2011).

Finally, I would like to thank Dr. Barnes for making his paper available for public comment online, and I wish him every success in his future scientific work.



The fine-tuning of the universe for intelligent life has received a great deal of attention in recent years, both in the philosophical and scientific literature. The claim is that in the space of possible physical laws, parameters and initial conditions, the set that permits the evolution of intelligent life is very small. I present here a review of the scientific literature, outlining cases of fine-tuning in the classic works of Carter, Carr and Rees, and Barrow and Tipler, as well as more recent work. To sharpen the discussion, the role of the antagonist will be played by Victor Stenger’s recent book The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us. Stenger claims that all known fine-tuning cases can be explained without the need for a multiverse. Many of Stenger’s claims will be found to be highly problematic. We will touch on such issues as the logical necessity of the laws of nature; objectivity, invariance and symmetry; theoretical physics and possible universes; entropy in cosmology; cosmic inflation and initial conditions; galaxy formation; the cosmological constant; stars and their formation; the properties of elementary particles and their effect on chemistry and the macroscopic world; the origin of mass; grand unified theories; and the dimensionality of space and time. I also provide an assessment of the multiverse, noting the significant challenges that it must face. I do not attempt to defend any conclusion based on the fine-tuning of the universe for intelligent life. This paper can be viewed as a critique of Stenger’s book, or read independently. (p. 1)



The claim that the universe is fine-tuned can be formulated as:

FT: In the set of possible physics, the subset that permit the evolution of life is very small.

As it stands, FT [the fine-tuning claim – VJT] is precise enough to distinguish itself from a number of other claims for which it is often mistaken. FT is not the claim that this universe is optimal for life, that it contains the maximum amount of life per unit volume or per baryon, that carbon-based life is the only possible type of life, or that the only kinds of universes that support life are minor variations on this universe. These claims, true or false, are simply beside the point. (p. 3)

The reason why FT [the fine-tuning claim – VJT] is an interesting claim is that it makes the existence of life in this universe appear to be something remarkable, something in need of explanation. The intuition here is that, if ours were the only universe, and if the causes that established the physics of our universe were indifferent to whether it would evolve life, then the chances of hitting upon a life-permitting universe are very small. (p. 3)


Cautionary Tales

There are a few fallacies to keep in mind as we consider cases of fine-tuning.

The Cheap-Binoculars Fallacy: “Don’t waste money buying expensive binoculars. Simply stand closer to the object you wish to view”. We can make any point (or outcome) in possibility space seem more likely by zooming-in on its neighbourhood. Having identified the life-permitting region of parameter space, we can make it look big by deftly choosing the limits of the plot. We could also distort parameter space using, for example, logarithmic axes. A good example of this fallacy is quantifying the fine-tuning of a parameter relative to its value in our universe, rather than the totality of possibility space. If a dart lands 3 mm from the centre of a dartboard, it is obviously fallacious to say that, because the dart could have landed twice as far away and still scored a bullseye, the throw is only fine-tuned to a factor of two and there is “plenty of room” inside the bullseye. The correct comparison is between the area (or more precisely, solid angle) of the bullseye to the area in which the dart could land. Similarly, comparing the life-permitting range to the value of the parameter in our universe necessarily produces a bias toward underestimating fine-tuning, since we know that our universe is in the life-permitting range.
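The dartboard arithmetic can be checked in a few lines. The dimensions below are hypothetical (Barnes specifies only the 3 mm throw), chosen purely to show why an area ratio, rather than a “factor of two”, is the relevant measure:

```python
import math

# Hypothetical dimensions, for illustration only (not from Barnes' paper):
bullseye_radius = 0.006        # a 6 mm-radius bullseye, in metres
target_region_radius = 0.5     # radius of the region the dart could have landed in

# "Cheap binoculars" measure: the dart landed 3 mm out, so it could have
# landed twice as far away and still scored -- "fine-tuned by a factor of 2".
cheap_measure = bullseye_radius / 0.003

# Correct measure: the fraction of the accessible area the bullseye occupies.
area_fraction = (math.pi * bullseye_radius**2) / (math.pi * target_region_radius**2)

print(f"factor-of-two framing:             {cheap_measure:.0f}")
print(f"bullseye area / accessible area:   {area_fraction:.1e}")
```

On these (made-up) numbers the area ratio is about 1.4 x 10^(-4), four orders of magnitude more impressive than the “factor of two” framing suggests.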

The Flippant Funambulist Fallacy: “Tightrope-walking is easy!”, the man says, “just look at all the places you could stand and not fall to your death!”. This is nonsense, of course: a tightrope walker must overbalance in a very specific direction if her path is to be life-permitting. The freedom to wander is tightly constrained. When identifying the life-permitting region of parameter space, the shape of the region is not particularly relevant. An elongated life-friendly region is just as fine-tuned as a compact region of the same area. The fact that we can change the setting on one cosmic dial, so long as we very carefully change another at the same time, does not necessarily mean that FT [the fine-tuning claim – VJT] is false.

The Sequential Juggler Fallacy: “Juggling is easy!”, the man says, “you can throw and catch a ball. So just juggle all five, one at a time”. Juggling five balls one-at-a-time isn’t really juggling. For a universe to be life-permitting, it must satisfy a number of constraints simultaneously. For example, a universe with the right physical laws for complex organic molecules, but which re-collapses before it is cool enough to permit neutral atoms will not form life. One cannot refute FT by considering life-permitting criteria one-at-a-time and noting that each can be satisfied in a wide region of parameter space. In set-theoretic terms, we are interested in the intersection of the life-permitting regions, not the union.
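The intersection-versus-union point lends itself to a toy Monte Carlo illustration. The five criteria and their generous 30% slabs below are invented for this sketch and correspond to nothing in Barnes’ paper; the point is only that individually easy criteria can be jointly hard:

```python
import random

random.seed(0)

# Toy model: five "life-permitting" criteria, each satisfied in a generous
# 30% slab of a five-dimensional unit-cube parameter space.
N = 100_000
criteria_hits = [0] * 5
joint_hits = 0
for _ in range(N):
    params = [random.random() for _ in range(5)]
    satisfied = [0.35 <= p <= 0.65 for p in params]
    for i, ok in enumerate(satisfied):
        criteria_hits[i] += ok
    joint_hits += all(satisfied)       # the intersection, not the union

for i, hits in enumerate(criteria_hits):
    print(f"criterion {i}: satisfied in {hits / N:.0%} of toy universes")
print(f"all five at once: {joint_hits / N:.2%} of toy universes")
```

Each criterion alone is satisfied in roughly 30% of the sampled universes, yet all five hold simultaneously in only about 0.3^5, roughly a quarter of one percent: juggling the balls one at a time is not juggling.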

The Cane Toad Solution: In 1935, the Bureau of Sugar Experiment Stations was worried by the effect of the native cane beetle on Australian sugar cane crops. They introduced 102 cane toads, imported from Hawaii, into parts of Northern Queensland in the hope that they would eat the beetles. And thus the problem was solved forever, except for the 200 million cane toads that now call eastern Australia home, eating smaller native animals, and secreting a poison that kills any larger animal that preys on them. A cane toad solution, then, is one that doesn’t consider whether the end result is worse than the problem itself. When presented with a proposed fine-tuning explainer, we must ask whether the solution is more fine-tuned than the problem.


Stenger’s Case

Stenger is a particle physicist, a noted speaker, and the author of a number of books and articles on science and religion. In his latest book, “The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us” [hereafter FOFT], he makes the following bold claim:

[T]he most commonly cited examples of apparent fine-tuning can be readily explained by the application of a little well-established physics and cosmology. … [S]ome form of life would have occurred in most universes that could be described by the same physical models as ours, with parameters whose ranges varied over ranges consistent with those models. And I will show why we can expect to be able to describe any uncreated universe with the same models and laws with at most slight, accidental variations. Plausible natural explanations can be found for those parameters that are most crucial for life… My case against fine-tuning will not rely on speculations beyond well-established physics nor on the existence of multiple universes. [FOFT pp. 22, 24]

Let’s be clear on the task that Stenger has set for himself. There are a great many scientists, of varying religious persuasions, who accept that the universe is fine-tuned for life, e.g. Barrow, Carr, Carter, Davies, Dawkins, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, Wilczek. They differ, of course, on what conclusion we should draw from this fact. Stenger, on the other hand, claims that the universe is not fine-tuned. (pp. 6-7)


The Laws of Nature

Are the laws of nature themselves fine-tuned? Stenger defends the ambitious claim that the laws of nature could not have been different because they can be derived from the requirement that they be Point-of-View Invariant (hereafter, PoVI)…

We can formulate Stenger’s argument for this conclusion as follows:

LN1. If our formulation of the laws of nature is to be objective, it must be PoVI.

LN2. Invariance implies conserved quantities (Noether’s theorem).

LN3. Thus, “when our models do not depend on a particular point or direction in space or a particular moment in time, then those models must necessarily contain the quantities linear momentum, angular momentum, and energy, all of which are conserved. Physicists have no choice in the matter, or else their models will be subjective, that is, will give uselessly different results for every different point of view. And so the conservation principles are not laws built into the universe or handed down by deity to govern the behavior of matter. They are principles governing the behavior of physicists.” [FOFT p. 82, emphasis original]

This argument commits the fallacy of equivocation – the term “invariant” has changed its meaning between LN1 and LN2. (pp. 7-8)

Conclusion: We can now see the flaw in Stenger’s argument. Premise LN1 should read: If our formulation of the laws of nature is to be objective, then it must be covariant. Premise LN2 should read: symmetries imply conserved quantities. Since ‘covariant’ and ‘symmetric’ are not synonymous, it follows that the conclusion of the argument is unproven, and we would argue that it is false. The conservation principles of this universe are not merely principles governing our formulation of the laws of nature. (p. 17)

SSB [spontaneous symmetry breaking – VJT] allows the laws of nature to retain their symmetry and yet have asymmetric solutions.

Even if the symmetries of the laws of nature were inevitable, it would still be an open question as to precisely which symmetries were broken in our universe and which were unbroken. (p. 18)


Changing the Laws of Nature

What if the laws of nature were different? Stenger says:

… what about a universe with a different set of “laws”? There is not much we can say about such a universe, nor do we need to. Not knowing what any of their parameters are, no one can claim that they are fine-tuned. [FOFT p. 69]

In reply, fine-tuning isn’t about what the parameters and laws are in a particular universe. Given some other set of laws, we ask: if a universe were chosen at random from the set of universes with those laws, what is the probability that it would support intelligent life? If that probability is suitably (and robustly) small, then we conclude that that region of possible-physics-space contributes negligibly to the total life-permitting subset. It is easy to find examples of such claims.

* A universe governed by Maxwell’s Laws “all the way down” (i.e. with no quantum regime at small scales) will not have stable atoms, since electrons radiate their kinetic energy and spiral rapidly into the nucleus, and hence no chemistry (Barrow & Tipler, 1986, pg. 303). We don’t need to know what the parameters are to know that life in such a universe is plausibly impossible.

* If electrons were bosons, rather than fermions, then they would not obey the Pauli exclusion principle. There would be no chemistry.

* If gravity were repulsive rather than attractive, then matter wouldn’t clump into complex structures. Remember: your density, thanks to gravity, is 10^30 times greater than the average density of the universe.

* If the strong force were a long rather than short-range force, then there would be no atoms. Any structures that formed would be uniform, spherical, undifferentiated lumps, of arbitrary size and incapable of complexity.

* If, in electromagnetism, like charges attracted and opposites repelled, then there would be no atoms. As above, we would just have undifferentiated lumps of matter.

* The electromagnetic force allows matter to cool into galaxies, stars, and planets. Without such interactions, all matter would be like dark matter, which can only form into large, diffuse, roughly spherical haloes of matter whose only internal structure consists of smaller, diffuse, roughly spherical subhaloes. (p. 18)
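The “10^30” figure in the gravity bullet above is easy to verify to order of magnitude, using rough round-number densities (the mean matter density value below is an approximate textbook figure):

```python
import math

body_density = 1.0e3           # kg/m^3, roughly the density of water (and of us)
mean_matter_density = 2.6e-27  # kg/m^3, approximate mean matter density of the universe
ratio = body_density / mean_matter_density
# The ratio comes out at a few times 10^29, i.e. ~10^30 to the nearest power of ten.
print(f"your density / mean cosmic density ~ 10^{math.log10(ratio):.0f}")
```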


Fine-tuned parameters

Moving from the laws of nature to the parameters of those laws, Stenger makes the following general argument against supposed examples of fine-tuning:

[T]he examples of fine-tuning given in the theist literature . . . vary one parameter while holding all the rest constant. This is both dubious and scientifically shoddy. As we shall see in several specific cases, changing one or more other parameters can often compensate for the one that is changed. [FOFT p. 70]

To illustrate this point, Stenger introduces “the wedge”… Here, x and y are two physical parameters that can vary from zero to x-max and y-max, where we can allow these values to approach infinity if so desired. The point (x0, y0) represents the values of x and y in our universe. The life-permitting range is the shaded wedge. Stenger’s point is that varying only one parameter at a time only explores that part of parameter space which is vertically or horizontally adjacent to (x0, y0), thus missing most of parameter space. (p. 19)

In response, fine-tuning relies on a number of independent life-permitting criteria. Fail any of these criteria, and life becomes dramatically less likely, if not impossible. When parameter space is explored in the scientific literature, it rarely (if ever) looks like the wedge. We instead see many intersecting wedges. Here are two examples… (p. 20)

These two examples show that the wedge, by only considering a single life-permitting criterion, seriously distorts typical cases of fine-tuning by committing the sequential juggler fallacy (Section 2). Stenger further distorts the case for fine-tuning by saying:

In the fine-tuning view, there is no wedge and the point has infinitesimal area, so the probability of finding life is zero. [FOFT p. 70]

No reference is given, and this statement is not true of the scientific literature. The wedge is a straw man. (p. 21)



We turn now to cosmology. The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low entropy state. (p. 23)

Let’s return to Stenger’s proposed solution… Stenger takes it for granted that the universe is homogeneous and isotropic. We can see this also in his use of the Friedmann equation, which assumes that space-time is homogeneous and isotropic. Not surprisingly, once homogeneity and isotropy have been assumed, Stenger finds that the solution to the entropy problem is remarkably easy.

We conclude that Stenger has not only failed to solve the entropy problem; he has failed to comprehend it. He has presented the problem itself as its solution. Homogeneous, isotropic expansion cannot solve the entropy problem – it is the entropy problem. Stenger’s assertion that “the universe starts out with maximum entropy or complete disorder” is false. A homogeneous, isotropic spacetime is an incredibly low entropy state. Penrose (1989) warned of precisely this brand of failed solution two decades ago... (p. 26)



We turn now to cosmic inflation, which proposes that the universe underwent a period of accelerated expansion in its earliest stages. The achievements of inflation are truly impressive – in one fell swoop, the universe is sent on its expanding way, the flatness, horizon, and monopole problems are solved and we have concrete, testable and seemingly correct predictions for the origin of cosmic structure. It is a brilliant idea, and one that continues to defy all attempts at falsification. Since life requires an almost-flat universe (Barrow & Tipler, 1986, pg. 408ff.), inflation is potentially a solution to a particularly impressive fine-tuning problem – sans inflation, the density of the universe at the Planck time must be tuned to 60 decimal places in order for the universe to be life-permitting. (p. 27)
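The “60 decimal places” figure comes from the flatness problem. A standard back-of-envelope derivation (a textbook estimate, not a quote from Barnes’ paper) runs as follows:

```latex
% From the Friedmann equation, the deviation from the critical density obeys
\[
  \Omega(t) - 1 = \frac{k}{a^2 H^2} .
\]
% In the radiation era $a \propto t^{1/2}$ and $H \propto t^{-1}$, so
\[
  |\Omega - 1| \propto \frac{1}{a^2 H^2} \propto t ,
\]
% i.e. any deviation from flatness grows with time. Running today's bound
% $|\Omega_0 - 1| \lesssim 10^{-2}$ back from $t_0 \sim 10^{17}\,\mathrm{s}$
% to the Planck time $t_{\mathrm{Pl}} \sim 10^{-43}\,\mathrm{s}$ then requires,
% roughly (the matter era modifies the exponent slightly),
\[
  |\Omega(t_{\mathrm{Pl}}) - 1| \lesssim 10^{-60} ,
\]
% which is the initial-condition fine-tuning that inflation is invoked to explain.
```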

Let’s summarise. Inflation is a wonderful idea; in many ways it seems irresistible (Liddle, 1995). However, we do not have a physical model, and even if we had such a model, “although inflationary models may alleviate the ‘fine tuning’ in the choice of initial conditions, the models themselves create new ‘fine tuning’ issues with regard to the properties of the scalar field” (Hollands & Wald, 2002b). To pretend that the mere mention of inflation makes a life-permitting universe “100 percent” inevitable [FOFT p. 245] is naive in the extreme, a cane toad solution. (p. 31)

Suppose that inflation did solve the fine-tuning of the density of the universe. Is it reasonable to hope that all fine-tuning cases could be solved in a similar way? We contend not, because inflation has a target. Let’s consider the range of densities that the universe could have had at some point in its early history. One of these densities is physically singled out as special – the critical density. Now let’s note the range of densities that permit the existence of cosmic structure in a long-lived universe. We find that this range is very narrow. Very conveniently, this range neatly straddles the critical density.

We can now see why inflation has a chance. There is in fact a three-fold coincidence – A: the density needed for life, B: the critical density, and C: the actual density of our universe are all aligned. B and C are physical parameters, and so it is possible that some physical process can bring the two into agreement. The coincidence between A and B then creates the required anthropic coincidence (A and C). If, for example, life required a universe with a density (say, just after reheating) 10 times less than critical, then inflation would do a wonderful job of making all universes uninhabitable.

Inflation thus represents a very special case. Waiting inside the life-permitting range (L) is another physical parameter (p). Aim for p and you will get L thrown in for free. This is not true of the vast majority of fine-tuning cases. There is no known physical scale waiting in the life-permitting range of the quark masses, fundamental force strengths or the dimensionality of space-time. There can be no inflation-like dynamical solution to these fine-tuning problems because dynamical processes are blind to the requirements of intelligent life.

What if, unbeknownst to us, there was such a fundamental parameter? It would need to fall into the life-permitting range. As such, we would be solving a fine-tuning problem by creating at least one more. And we would also need to posit a physical process able to dynamically drive the value of the quantity in our universe toward p. (pp. 31-32)


The Amplitude of Primordial Fluctuations Q

Q, the amplitude of primordial fluctuations, is one of Martin Rees’ Just Six Numbers. In our universe, its value is [approx.] Q = 2 x 10^(-5), meaning that in the early universe the density at any point was typically within 1 part in 100,000 of the mean density. What if Q were different? (p. 32)

If Q were smaller than 10^(-6), gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial ‘mix’ of atoms, dark energy and radiation were the same as our own. On the other hand, a universe where Q were substantially larger than 10^(-5) – where the initial “ripples” were replaced by large-amplitude waves – would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn’t fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe… Stars would be packed too close together and buffeted too frequently to retain stable planetary systems. (Rees, 1999, pg. 115)

Stenger has two replies…. (p. 32)

Stenger’s second reply is to ask “… is an order of magnitude fine-tuning? …”

There are a few problems here. We have a clear case of the flippant funambulist fallacy – the possibility of altering other constants to compensate for the change in Q is not evidence against fine-tuning. Choose Q and, say, alpha-G at random and you are unlikely to have picked a life-permitting pair, even if our universe is not the only life-permitting one. We also have a nice example of the cheap-binoculars fallacy. The allowed change in Q relative to its value in our universe (“an order of magnitude”) is necessarily an underestimate of the degree of fine-tuning. The question is whether this range is small compared to the possible range of Q. Stenger seems to see this problem, and so argues that large values of Q are unlikely to result from inflation. This claim is false, and symptomatic of Stenger’s tenuous grasp of cosmology. (p. 33)
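The cheap-binoculars point as applied to Q can be made concrete. The life-permitting bounds below are the ones quoted from Rees above; the upper limit of the “possible” range is a hypothetical choice, and is of course exactly the quantity in dispute:

```python
# Life-permitting bounds on Q, as quoted from Rees (1999) above:
q_life_min, q_life_max = 1e-6, 1e-5
# Hypothetical upper limit of the possible range of Q (illustrative only):
q_possible_max = 1.0

# Measured against itself, the life-permitting range spans "an order of magnitude".
relative_width = q_life_max / q_life_min
# Measured against the possible range, it is a tiny sliver.
linear_fraction = (q_life_max - q_life_min) / q_possible_max

print(f"relative width of life-permitting range: x{relative_width:.0f}")
print(f"fraction of possible range (linear):     {linear_fraction:.0e}")
```

The “order of magnitude” framing and the roughly one-part-in-10^5 framing describe the same interval; only the second addresses whether Q is fine-tuned.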

The fine-tuning of Q stands up well under examination. (p. 34)


The Cosmological Constant, Lambda

The cosmological constant problem is described in the textbook of Burgess & Moore (2006) as “arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it”. A well-understood and well-tested theory of fundamental physics (Quantum Field Theory – QFT) predicts contributions to the vacuum energy of the universe that are [approx.] 10^120 times greater than the observed total value. Stenger’s reply is guided by the following principle:

Any calculation that disagrees with the data by 50 or 120 orders of magnitude is simply wrong and should not be taken seriously. We just have to await the correct calculation. [FOFT p. 219]

This seems indistinguishable from reasoning that the calculation must be wrong since otherwise the cosmological constant would have to be fine-tuned. One could not hope for a more perfect example of begging the question. More importantly, there is a misunderstanding in Stenger’s account of the cosmological constant problem. The problem is not that physicists have made an incorrect prediction. We can use the term dark energy for any form of energy that causes the expansion of the universe to accelerate, including a “bare” cosmological constant (see Barnes et al., 2005, for an introduction to dark energy). Cosmological observations constrain the total dark energy. QFT [quantum field theory – VJT] allows us to calculate a number of contributions to the total dark energy from matter fields in the universe. Each of these contributions turns out to be 10^120 times larger than the total. There is no direct theory-vs.-observation contradiction as one is calculating and measuring different things. The fine-tuning problem is that these different independent contributions, including perhaps some that we don’t know about, manage to cancel each other to such an alarming, life-permitting degree. This is not a straightforward case of Popperian falsification. (pp. 34-35)

The cosmological constant problem is actually a misnomer. This section has discussed the “bare” cosmological constant. It comes purely from general relativity, and is not associated with any particular form of energy. The 120 orders-of-magnitude problem refers to vacuum energy associated with the matter fields of the universe… The source of the confusion is the fact that vacuum energy has the same dynamical effect as the cosmological constant, so that observations measure an “effective” cosmological constant: effective-Lambda = bare-Lambda + vacuum-Lambda. The cosmological constant problem is really the vacuum energy problem. Even if Stenger could show that bare-Lambda = 0, this would do nothing to address why effective-Lambda is observed to be so much smaller than the predicted contributions to vacuum-Lambda. (p. 36)
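The degree of cancellation being described can be mimicked with arbitrary-precision arithmetic. The magnitudes below are schematic (a single vacuum contribution of order one in Planck units), not a real QFT calculation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 130  # enough precision to resolve a 120-decimal-place cancellation

# Schematic magnitudes in Planck units (illustrative, not a QFT calculation):
vacuum = Decimal(1)              # a vacuum-energy contribution of order one
observed = Decimal(10) ** -120   # the observed effective cosmological constant

# The bare term must offset the vacuum contribution to 120 decimal places:
bare = observed - vacuum
print(bare)  # differs from -1 only at the 120th decimal place
```

Note that ordinary double-precision floats (about 16 significant digits) cannot even represent such a cancellation, which gives some feel for how extreme a coincidence is being asked for.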

There are a number of excellent reviews of the cosmological constant in the scientific literature (Weinberg, 1989; Carroll, 2001; Vilenkin, 2003; Polchinski, 2006; Durrer & Maartens, 2007; Padmanabhan, 2007; Bousso, 2008). In none will you find Stenger’s particular brand of dismissiveness. The calculations are known to be correct in other contexts and so are taken very seriously. Supersymmetry won’t help. The problem cannot be defined away. (p. 38)


The Origin of Mass

Let’s consider Stenger’s responses to these cases of fine-tuning. (p. 47)

Stenger is either not aware of the hierarchy and flavour problems, or else he has solved some of the most pressing problems in particle physics and not bothered to pass this information on to his colleagues… (p. 47)

We can draw some conclusions. First, Stenger’s discussion of the surprising lightness of fundamental masses is woefully inadequate. To present it as a solved problem of particle physics is a gross misrepresentation of the literature. Secondly, smallness is not sufficient for life… The masses must be sufficiently small but not too small. Finally, suppose that the LHC [Large Hadron Collider – VJT] discovers that supersymmetry is a (broken) symmetry of our universe. This would not be the discovery that the universe could not have been different. It would not be the discovery that the masses of the fundamental particles must be small. It would at most show that our universe has chosen a particularly elegant and beautiful way to be life-permitting. (p. 49)


Protons, Neutrons, Electrons

We turn now to the relative masses of the three most important particles in our universe: the proton, neutron and electron, from which atoms are made. Consider first the ratio of the electron to the proton mass, … of which Stenger says: “…we can argue that the electron mass is going to be much smaller than the proton mass in any universe even remotely like ours.” [FOFT p. 164] (p. 50)

The fact that Stenger is comparing the electron mass in our universe with the electron mass in universes “like ours” is all the evidence one needs to conclude that Stenger doesn’t understand fine-tuning. The fact that universes like ours turn out to be rather similar to our universe isn’t particularly enlightening. (p. 50)

Finally, and most importantly, note carefully Stenger’s conclusion. He states that no fine-tuning is needed for the neutron-proton mass difference in our universe to be approximately equal to the up quark-down quark mass difference in our universe. Stenger has compared our universe with our universe and found no evidence of fine-tuning. There is no discussion of the life-permitting range, no discussion of the possible range of [mass(neutron) – mass(proton)] (or its relation to the possible range of [mass(down quark) – mass(up quark)]), and thus no relevance to fine-tuning whatsoever. (p. 51)


The Strength of the Fundamental Forces – Conclusion

Suppose Bob sees Alice throw a dart and hit the bullseye. “Pretty impressive, don’t you think?”, says Alice. “Not at all”, says Bob, “the point-of-impact of the dart can be explained by the velocity with which the dart left your hand. No fine-tuning is needed.” On the contrary, the fine-tuning of the point of impact (i.e. the smallness of the bullseye relative to the whole wall) is evidence for the fine-tuning of the initial velocity.

This flaw alone makes much of Chapters 7 to 10 of FOFT irrelevant. The question of the fine-tuning of these more fundamental parameters is not even asked, making the whole discussion a cane toad solution. Stenger has given us no reason to think that the life-permitting region is larger, or possibility space smaller, than has been calculated in the fine-tuning literature.

The parameters of the standard model remain some of the best understood and most impressive cases of fine-tuning. (pp. 54-55)


Dimensionality of Spacetime

A number of authors have emphasised the life-permitting properties of the particular combination of one time- and three space-dimensions, going back to Ehrenfest (1917) and Whitrow (1955), summarised in Barrow & Tipler (1986) and Tegmark (1997). (p. 55)

FOFT addresses the issue:

Martin Rees proposes that the dimensionality of the universe is one of six parameters that appear particularly adjusted to enable life … Clearly Rees regards the dimensionality of space as a property of objective reality. But is it? I think not. Since the space-time model is a human invention, so must be the dimensionality of space-time. We choose it to be three because it fits the data. In the string model, we choose it to be ten. We use whatever works, but that does not mean that reality is exactly that way. [FOFT p. 51]

String theory is actually an excellent counterexample to Stenger’s claims. String theorists are not content to posit ten dimensions and leave it at that. They must compactify all but 3+1 of the extra dimensions for the theory to have a chance of describing our universe. This fine-tuning case refers to the number of macroscopic or ‘large’ space dimensions, which both string theory and classical physics agree to be three. The possible existence of small, compact dimensions is irrelevant. (p. 56)

The confusion of Stenger’s response is manifest in the sentence: “We choose three [dimensions] because it fits the data” [FOFT p. 51]. This isn’t much of a choice. One is reminded of the man who, when asked why he chose to join the line for ‘non-hen-pecked husbands’, answered, “because my wife told me to”. The universe will let you choose, for example, your unit of length. But you cannot decide that the macroscopic world has four space dimensions. It is a mathematical fact that in a universe with four spatial dimensions you could, with a judicious choice of axis, make a left-footed shoe into a right-footed one by rotating it. Our inability to perform such a transformation is not the result of physicists arbitrarily deciding that, in this space-time model we’re inventing, space will have three dimensions. (p. 56)
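Barnes' shoe example can be checked directly: a proper rotation in four dimensions can turn a 3D object into its mirror image, which no 3D rotation can do. Here is a minimal sketch in Python; the choice of rotation plane, the sample point, and the helper functions are illustrative assumptions, not anything taken from Barnes' paper:

```python
import math

def rot_xw(theta):
    """4x4 rotation by angle theta in the x-w plane (identity on y, z).
    Its determinant is cos^2 + sin^2 = 1, so it is a proper rotation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, 0, -s],
            [0, 1, 0,  0],
            [0, 0, 1,  0],
            [s, 0, 0,  c]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# An asymmetric "shoe" point, embedded in 4D with w = 0.
shoe = [1.0, 2.0, 3.0, 0.0]

# Rotate half a turn in the x-w plane: x -> -x, w -> -w.
image = matvec(rot_xw(math.pi), shoe)

# The image lies back in the w = 0 slice, mirror-reversed in x.
# Within 3D alone, the map (x, y, z) -> (-x, y, z) has determinant -1,
# so it is a reflection and cannot be achieved by any 3D rotation.
```

The half-turn passes the shoe "through" the fourth dimension and sets it back down as its mirror twin, which is exactly the transformation Barnes says we cannot perform in our macroscopically three-dimensional space.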


The Multiverse

Could a multiverse proposal ever be regarded as scientific? FOFT p. 228 notes the similarity between undetectable universes and undetectable quarks, but the analogy is not a good one. The properties of quarks – mass, charge, spin, etc. – can be inferred from measurements. Quarks have a causal effect on particle accelerator measurements; if the quark model were wrong, we would know about it. In contrast, we cannot observe any of the properties of a multiverse… as they have no causal effect on our universe. We could be completely wrong about everything we believe about these other universes and no observation could correct us. The information is not here. The history of science has repeatedly taught us that experimental testing is not an optional extra. The hypothesis that a multiverse actually exists will always be untestable.

The most optimistic scenario is where a physical theory, which has been well-tested in our universe, predicts a universe-generating mechanism. Even then, there would still be questions beyond the reach of observation, such as whether the necessary initial conditions for the generator hold in the metaspace, and whether there are modifications to the physical theory that arise at energy scales or on length scales relevant to the multiverse but beyond testing in our universe. Moreover, the process by which a new universe is spawned almost certainly cannot be observed. (p. 58)

We should be wary of any multiverse which allows for single brains, imprinted with memories, to fluctuate into existence. The worry is that, for every observer who really is a carbon-based life form who evolved on a planet orbiting a star in a galaxy, there are vastly more for whom this is all a passing dream, the few, fleeting fancies of a phantom fluctuation. (p. 61)

Another argument against the multiverse is given by Penrose (2004, pg. 763ff.). As with the Boltzmann multiverse, the problem is that this universe seems uncomfortably roomy. (p. 62)

In other words, if we live in a multiverse generated by a process like chaotic inflation, then for every observer who observes a universe of our size, there are 10^10^123 who observe a universe that is just 10 times smaller. This particular multiverse dies the same death as the Boltzmann multiverse. Penrose’s argument is based on the place of our universe in phase space, and is thus generic enough to apply to any multiverse proposal that creates more small universe domains than large ones. Most multiverse mechanisms seem to fall into this category. (p. 62)

A multiverse generated by a simple underlying mechanism is a remarkably seductive idea. The mechanism would be an extrapolation of known physics, that is, physics with an impressive record of explaining observations from our universe. The extrapolation would be natural, almost inevitable. The universe as we know it would be a very small part of a much larger whole. Cosmology would explore the possibilities of particle physics; what we know as particle physics would be mere by-laws in an unimaginably vast and variegated cosmos. The multiverse would predict what we expect to observe by predicting what conditions hold in universes able to support observers.

Sadly, most of this scenario is still hypothetical. The goal of this section has been to demonstrate the mountain that the multiverse is yet to climb, the challenges that it must face openly and honestly. The multiverse may yet solve the fine-tuning of the universe for intelligent life, but it will not be an easy solution. “Multiverse” is not a magic word that will make all the fine-tuning go away. (p. 62)


Conclusions and Future

We conclude that the universe is fine-tuned for the existence of life. Of all the ways that the laws of nature, constants of physics and initial conditions of the universe could have been, only a very small subset permits the existence of intelligent life. (p. 62)

It is not true that fine-tuning must eventually yield to the relentless march of science. Fine-tuning is not a typical scientific problem, that is, a phenomenon in our universe that cannot be explained by our current understanding of physical laws. It is not a gap. Rather, we are concerned with the physical laws themselves. In particular, the anthropic coincidences are not like, say, the coincidence between inertial mass and gravitational mass in Newtonian gravity, which is a coincidence between two seemingly independent physical quantities. Anthropic coincidences, on the other hand, involve a happy consonance between a physical quantity and the requirements of complex, embodied intelligent life. The anthropic coincidences are so arresting because we are accustomed to thinking of physical laws and initial conditions as being unconcerned with how things turn out. Physical laws are material and efficient causes, not final causes. There is, then, no reason to think that future progress in physics will render a life-permitting universe inevitable. When physics is finished, when the equation is written on the blackboard and fundamental physics has gone as deep as it can go, fine-tuning may remain, basic and irreducible. (p. 63)

Perhaps the most optimistic scenario is that we will eventually discover a simple, beautiful physical principle from which we can derive a unique physical theory, whose unique solution describes the universe as we know it, including the standard model, quantum gravity, and (dare we hope) the initial conditions of cosmology. While this has been the dream of physicists for centuries, there is not the slightest bit of evidence that this idea is true. It is almost certainly not true of our best hope for a theory of quantum gravity, string theory, which has “anthropic principle written all over it” (Schellekens, 2008). The beauty of its principles has not saved us from the complexity and contingency of the solutions to its equations. Beauty and simplicity are not necessity. (p.63)


Appendix B – Stenger’s MonkeyGod

In Chapter 13, Stenger argues against the fine-tuning of the universe for intelligent life using the results of a computer code, subtly named MonkeyGod. It is a Monte Carlo code, which chooses values of certain parameters from a given probability density function (PDF) and then calculates whether a universe with those parameters would support life. (p. 68)
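The structure of such a code is easy to sketch. The snippet below is a generic toy in the same spirit, not Stenger's actual program: the two dimensionless parameters, the log-normal priors, and the life-permitting criterion are all invented for illustration. It does, however, make visible the point Barnes goes on to argue: the output is wholly determined by the assumed distributions and the assumed criterion.

```python
import random

def life_permitting(alpha, beta):
    # Hypothetical criterion, invented purely for illustration.
    return 0.5 < alpha < 2.0 and 0.1 < beta < 0.3

random.seed(0)  # fixed seed for reproducibility
trials = 10_000

# Draw parameter pairs from assumed priors and count "habitable" draws.
hits = sum(
    life_permitting(random.lognormvariate(0, 1), random.lognormvariate(0, 1))
    for _ in range(trials)
)
fraction = hits / trials  # the "fraction of life-permitting universes"
```

Swap in different priors or a different criterion and `fraction` changes arbitrarily, which is why the inputs to such a calculation, not the Monte Carlo machinery, carry all the weight.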

We conclude that MonkeyGod is so deeply flawed that its results are meaningless. (p. 71)

ROTFL ;) GCUGreyArea
A tested observation is an observation that has been tested, i.e. repeated and investigated. An observation is just something you have seen. Joe
What is a 'tested observation' as opposed to just an observation? GCUGreyArea
In any case, the point is moot, because nobody in OOL thinks that modern proteins sprang into existence fully formed.
Then come up with another mechanism and test it. Joe
No, I mean there are many tested observations that say the universe is finely tuned. Walter Bradley wrote about it. Joe
http://www.youtube.com/watch?v=kTKn1aSOyOs Elizabeth Liddle
and a force is a force of course of course.
Hmm, don't look a gift force in the mouth ;) GCUGreyArea
You mean someone actually saw the universe being finely tuned? GCUGreyArea
First, the man who led the Apollo project, the world famed von Braun, was not only a design thinker and Christian, but a creationist. (Cf. the notes in reply to Lewontin’s similar well-poisoning attempt, here.)
He was also a card-carrying member of the Nazi party - I can't help thinking that if he had ever expressed approval of Darwin's theory then he would be viewed very differently by the ID community, and would be considered evidence of a link between Nazism and Evolutionary theory. But of course he is a design thinker, Christian and creationist so that's OK ;) GCUGreyArea
kairosfocus, Hoyle wrote:
When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes.
Hoyle is wrong. The size of the total space, by itself, tells you nothing. You need to know what percentage of the space is functional. Hoyle did not have that information, so his assertion was groundless. In any case, the point is moot, because nobody in OOL thinks that modern proteins sprang into existence fully formed. I notice that you didn't address gaffes #2, #3, and #4. Do you agree that those were mistakes on Hoyle's part? If not, why not? champignon
C: I have a moment. Since when is pointing out that 200 AA, at 20 AA per position has 20^200 possibilities the same as asserting or implying that just one arrangement will work? In fact, 20^200 ~ 1.6 *10^260, which is indeed vastly beyond the number of atoms in the observed universe, c. 10^80. So much more, in fact, that the number of Planck-time quantum states for these atoms since the big bang is less than 1 in 10^100 of that. The number of possible states that can be searched is so nearly zero by comparison as makes no difference. And, Hoyle pointed out that we are into thousands of enzymes in life. Starting from some prebiotic little pond, or a giant molecular cloud, you just could not get enough shuffling through states to get anywhere significantly different from a zero scope search, relative to what would be required to assemble enough molecules of life to get to a reasonable metabolic entity, by blind chance and necessity. Just remember, it takes 10^30 Planck times for the fastest chem rxns, and organic ones are usually far slower. The only empirically warranted source for complex, functionally specific organisation, is design. And, that outlined analysis of the supertask implied for blind forces is a part of why that is so. (BTW, the problem is closely analogous to the challenge of doing your post by random typing. That does not imply that only your post is a possible functionally specific, complex outcome -- so is mine -- but it does underscore what you are ever so eager to ignore: both of these posts are FSCI, and are cases E from a relatively narrow set of contextually responsive posts, T in the set of possible strings of appropriate length, W. By far and away most would be gibberish, and the resources of the observed cosmos would be fruitlessly challenged to get to ANY member E1, E2, . . . of T, within the resources of the observed cosmos, by blind chance and mechanical necessity. 
Islands of function in vast seas of non-function, as is typical of such things. But, if one is committed to not seeing what is exemplified all around us . . . ) Later, we can look at the further cases, but it is already plain that this is another strawman being kicked over. GEM of TKI kairosfocus
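The raw figures traded in the exchange above are easy to verify with arbitrary-precision arithmetic. The comparison below, like Hoyle's, says nothing by itself about what fraction of the space is functional; it only checks the sizes being quoted:

```python
import math

# Number of sequences for a 200-link chain with 20 options per link.
n_sequences = 20 ** 200

# 20^200 = 10^(200 * log10(20)) ~= 10^260.2, i.e. about 1.6e260.
exponent = 200 * math.log10(20)

# The usual order-of-magnitude figure for atoms in the observable universe.
atoms_in_observable_universe = 10 ** 80

# The sequence space exceeds the atom count by roughly 180 orders of magnitude.
excess_orders = exponent - 80
```

This confirms the "1.6 * 10^260 versus 10^80" numbers quoted in the comment; whether that gap matters depends on the contested question of how densely functional sequences populate the space.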
What? Human physicists designed the laws of nature?
I specifically commented on "laws of physics" and not on "laws of nature." As for my view on laws of nature, I have discussed that in other comments in this thread. Neil Rickert
noam ghish:
You’re not reasoning properly. What you’re reasoning is: ...
No, I most certainly am not reasoning as you say I am. Neil Rickert
Hoyle made no gaffe...
Let's see about that. From Hoyle's 1981 essay, to which you link below:
When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes.
Gaffe #1: Hoyle assumes that for a given enzyme, there is only one possible amino acid sequence. This is wrong. Hoyle continues:
This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes.
Gaffe #2: Hoyle seems to think that this specific set of 2000 enzymes is the minimum required for life. He presents no evidence for either of those assertions. He also writes:
In thinking about this question I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe. So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature.
Gaffe #3: Hoyle thinks that if life originated on earth, then specific target enzymes must have spontaneously self-assembled. He's right that this is ridiculously improbable, but nobody in the OOL community thinks that this is how life began. He's tilting at windmills, arguing against a position that nobody holds. Hoyle does give an indication that (by 1981, anyway) he was aware of some the criticisms of his ill-informed assertions:
It's easy to frame a deceitful answer to it [the question of enzyme formation]. Start with much simpler, much smaller enzymes, which are sufficiently elementary to be discoverable by chance; then let evolution in some chemical environment cause the simple enzymes to change gradually into the complex ones we have today. The deceit here comes from omitting to explain what is in the environment that causes such an evolution. The improbability of finding the appropriate orderings of amino acids is simply being concealed in the behavior of the environment if one uses that style of argument.
Gaffe #4: Hoyle is not even making an argument here. He is just waving his hands and saying "That's impossible!", without providing an explanation. champignon
Onlookers: Notice, how the discussion from C et al, predictably, is distractively tangential to the substantial matter on the table, and how it almost invariably pivots on the rhetoric of denigration and dismissal, here of a Nobel equivalent prize-holder speaking on astrophysical, thermodynamics linked matters, and raising questions that need to be answered? Like, what is going on with the physics of O, C, and H and stars, that sets up life in our cosmos? What does that tell us? KF kairosfocus
PS: And since when does a discussion that focuses in significant part on how the errors of someone may be instructive morph into giving a free pass? (I shudder to think of how this and other exchanges are being misrepresented elsewhere, where we are not there to correct. Which comes right back to the issue of live donkeys and safely dead lions.) kairosfocus
C; Hoyle made no gaffe, and was speaking of something which is eminently within the ambit of thermodynamics-linked issues, OOL; which evo mat objectors at UD never tire of telling us is not a part of the theory of evolution. And BTW, there is a link between thermodynamics and information, which is also relevant. In addition, his specific discussion was on matters in astrophysics, as the article you need to read will further inform. So, kindly stop setting up and kicking over ad hominem laced strawmen -- second thread I have had to deal with that this morning from you. KF kairosfocus
Rickert, When are you going to answer my thread? Or do you concede? noam_ghish
I’m trying to suggest that nature is governed by simple “laws” which have been discovered through observation, in other words, through our senses in which they can then be expressed through our understanding of mathematics.
Right. And I am quite clearly disagreeing with that. I'm pretty close to being a naive realist. And I suspect you think you are arguing for that kind of naive realism. But I am also a realist about science, and what you state does not accurately describe how science is done. Let me put it this way: If nature is governed by simple laws, then we have no ability at all to find what those laws are. The best we can do is come up with our own ways of describing and predicting nature.
I disagree, go and jump off a bridge repetitively and tell me how many times you float on your descent. Do not the “Laws” describe, and the “Theories” explain?
I don't disagree with you at all over the implications of jumping off a bridge. As I said, I'm pretty close to being a naive realist. But neither of us has any access to the governing of nature, nor any evidence that there is governing of nature. Neil Rickert
Neil: "There’s some ambiguity here about “law”. Many people take “law” to mean a natural language statement. And, as far as I can tell, the universe came without a natural language and without any natural language statement, so without any natural law." I'm trying to suggest that nature is governed by simple "laws" which have been discovered through observation, in other words, through our senses, and which can then be expressed through our understanding of mathematics. From what I understand, in its simplest form, "laws" describe a pattern found in nature, which can only be discovered through observation. "We don’t really observe anything that deserves to be called “the Law of Gravity”." I disagree, go and jump off a bridge repetitively and tell me how many times you float on your descent. Do not the "Laws" describe, and the "Theories" explain? I do understand what you're saying Neil, I just disagree... KRock
Neil Rickert at 3.1 writes, “I see that as meaningless. That our laws of physics were intelligently designed, I have no doubt. The intelligent designers were human physicists. They fine-tuned the laws that they designed to fit our world. This has no metaphysical implications.’ What? Human physicists designed the laws of nature? Newton discovered gravity,he didn’t create it from nothing. Einstein theorized about space-time, but he didn’t invent it. Laws of nature exist whether we know of them and describe them scientifically or not. Barb
Really? Have you ever walked on a NASCAR race track- the curves are banked, just as what happens with the fabric of spacetime. Einstein refined Newton and he did so because of advances in technology as well as the advantage of having Newton's work handy. Joe
There's some ambiguity here about "law". Many people take "law" to mean a natural language statement. And, as far as I can tell, the universe came without a natural language and without any natural language statement, so without any natural law. Some people, some of the time, take "natural law" to be a reference to how the universe behaves. That seems to be how you want to use it. Well fair enough. But then talking of something as being "law like" doesn't seem to make sense, because it is as reference to the likeness of natural language expressions and constraints that they logically imply. My preference is to avoid talking of "laws of nature" where possible, but to use "scientific laws" or "laws of physics" for what humans have constructed as part of how they describe and cope with the way the universe behaves.
The way I see it (and maybe I’m wrong here) the “Law of Gravity” is not what’s constructed (scaffolding like you suggested) but is something that is observed;
We don't really observe anything that deserves to be called "the Law of Gravity". We observe examples of how the universe behaves. And then we attempt to come up with a principled account of that behavior. It seems to me that the usage of "law of gravity" requires that it refer to the principle, rather than the observed behavior. But we never do observe the principle itself. Maybe there's an intelligent designer who used such a principle, and we try to approximate that. Or maybe we are just seeing statistical patterns, and there is no actual operating principle. I don't think there's a way of settling that. So I see it as better to talk only about what is actually observed. Neil Rickert
Motion in a non-geodesic path involves force. That's what's happening with the racecar drivers. According to Einstein, a planet such as earth is moving in a nearly geodesic path. It's not quite geodesic because there are some forces such as that due to the solar wind. Neil Rickert
Could you kindly explain your expertise in thermodynamics and astrophysics to correct what Sir Fred was actually speaking about?
How is that relevant? Hoyle's gaffe was due to his poor knowledge of evolutionary theory and biology, not thermodynamics or astrophysics.
I repeat, someone like Hoyle — a Nobel Equivalent Prize Holder (who actually invented the term “big bang”), speaking on a matter that is tied to his expertise may be wrong, but he is not going to be wrong in a silly or ill-informed way.
Biology, evolutionary theory and abiogenesis were way outside his field of expertise. After all, this is the man who thought that the Archaeopteryx fossil was a forgery and who suggested that human nostrils evolved pointing downward in order to prevent cosmic pathogens, drifting down from space, from falling into them. A perfect illustration of why the argument from authority is a fallacy, particularly when the authority in question is operating way outside his sphere of competence. Brilliant people deserve a fair hearing, but not a free pass, especially when they have a history of spouting nonsense. champignon
Neil: I believe you're describing Newton's and Einstein's theories, based on the "Gravitational Law". The fact is, "Gravity" and the "Law(s)" that conform to it (what ever they maybe) would still exist independently of any theory used to explain how it (gravity) works. The way I see it (and maybe I'm wrong here) the "Law of Gravity" is not what’s constructed (scaffolding like you suggested) but is something that is observed; the theories of how gravity might work, are what’s constructed. Most physicists would say that there is no contradiction between the two theories of Einstein and Newton, only an improvement on the one previously held by Newton. KRock
Ask NASCAR drivers if a curvature produces a force. :) Joe
A free-fall, like with Newton's cannonball? :) And if mass produces a curvature doesn't that curvature induce a force on all that enters it? Meaning the curvature does produce a force, and a force is a force of course of course. Joe
In what way? I know Einstein corrected some of what Newton said but that is different from contradicting it.
According to Newton, there's force of gravitational attraction acting on the earth, and that's why it moves in a curved path. According to Einstein, the earth is in free fall with no external force acting on it, taking the path it does because of the curvature of space-time. Gravity has the effect of producing that curvature, not of producing a force. That sure looks like a clear contradiction. Neil Rickert
C: Could you kindly explain your expertise in thermodynamics and astrophysics to correct what Sir Fred was actually speaking about? Otherwise, this is a case of the live donkey and the dead lion. Hoyle's primary focus on the relevant topics and context was the origin of life and particularly the complex info in life in light of some very interesting astrophysical phenomena (not issues on macroevolution); and I suggest that you take time to read the article on his Caltech talk before further trying to knock over a strawman. I repeat, someone like Hoyle -- a Nobel Equivalent Prize Holder (who actually invented the term "big bang"), speaking on a matter that is tied to his expertise may be wrong, but he is not going to be wrong in a silly or ill-informed way. Even his errors will be highly instructive. (I have in mind here, for instance, his magnetic interaction based model for the distribution of angular momentum and mass in the solar system. He may have been wrong on various subjects, e.g. in the end he abandoned the Steady State theory, and I think his attempted revival in the 90s too, but he is going to be a unique, original and profound thinker and scientist all the way through. One utterly unafraid to think for himself and speak his own mind.) And, let me speak a bit more. If we see a 747, there is excellent reason, on its FSCO/I, to infer to design, not to a tornado hitting a junkyard in Seattle. Indeed, if we see a D'Arsonval movement based instrument from its cockpit panel, we are still so far into the deeply isolated island of function territory that we have every right to infer to design as the best explanation. 
Now, living systems have in them not only a functionally specific complexity that dwarfs both of these -- just start with the ATP synthase, the kinesin, the ribosome, the chloroplast etc etc, but they are based on cells that implement molecular scale, code based von Neumann Self Replicators, in addition to the metabolic type functions linked to the above cluster of molecular nanomachines. Such self-replication is an ADDITIONAL reason to infer to design on FSCO/I, independent of whether or no such living systems thereafter evolved by strictly Darwinian Mechanisms. (Which, I believe Hoyle more or less accepted.) Blend in the underlying fine tuned cosmological physics that sets up H, He, O, C and N as the first five elements, and gives them the properties that make C-Chemistry, aqueous medium, cell based life possible in our observed cosmos. That gives us excellent reason to infer to design of the cosmos and its physics, and thence of life and in the end us. I suggest you take a moment to read here on and here on. H'mm, let me clip Averick, just to help you think about the issues a bit more broadly:
The entire plot of the classic film, 2001: A Space Odyssey is based on . . . [an] obvious principle. At a dramatic moment in the film, when a rectangular monolith is discovered buried on the moon, it is clear to those who discover it (and accepted as absolutely logical and reasonable by everyone watching the movie) that this is unmistakable proof of alien life. After all, a precisely measured monolith couldn't possibly have made itself or "evolved naturally". . . . The human body is an incredible piece of machinery; who put it together? It certainly required a great deal more sophistication to build a human being than to construct a rectangular monolith [or a Jumbo Jet, or a calculator] . . . . As it turns out, Darwinian evolution is not, as the skeptic would have us believe, a testimony to what can emerge from undirected processes; it is a testimony to the unimaginably awesome capabilities and potential contained in the first living cell and its genetic code. A paradigm-shifting insight emerges from all this: Contrary to popular belief, not only is Darwinian evolution not the cause or explanation of the staggering complexity of life on this planet; Darwinian evolution itself is a process which is the result of the staggering complexity of life on this planet . . . All existing life is nothing more than a variation on a theme. All the "organized complexity" of life is a variation on the "organized complexity" of the first living organism . . . . [I]f it is statistically improbable that a 747 [as Sir Fred Hoyle suggested and as prof Dawkins wishes to rebut by his infinite regress of complexities argument] could have originated by chance, then it is an even greater statistical improbability that the designer of the 747 originated by chance. I agree wholeheartedly. Both the 747 and the human creators of the 747 are here not by chance, but by design! . . . . 
The philosophical problem that must be addressed is the following: How do we escape from the dilemma of the infinitely regressing series of creators (i.e., whoever created me would have to be at least as complex and sophisticated as I am, and therefore he would also need someone to create him, and so on.)? To state this dilemma in a slightly different way: Since all agree that at one time life did not exist and now it does exist, there must be an actual beginning to the process, it cannot go back infinitely . . . . Properly presented, the question is as follows:
Any functionally complex and purposefully arranged form of physical matter (i.e. a Boeing 747, a calculator, or a bacterium), must itself have a creator at least as complex as the object in question. How do we (or can we) escape an infinite regression of creators?
That which demands and requires a preceding creator is a complex arrangement of physical matter. With this precise formulation of the question, the answer becomes obvious. At some point in the progression, we are faced with the inescapable conclusion that there must be a creator who is not physical matter at all; a creator who does not need to be created; a creator who is not subject to the limitations of cause and effect. There must be a creator who is the first, who is the beginning of it all. There must be a creator who is outside of the physical universe. A creator who is outside of the physical universe, not existing in time and space, and composed of neither matter nor energy, does not require a preceding creator. There is nothing that came before him. He created time, he does not exist in time; there is no "before". ("What happened before the big bang? The answer is there was no ‘before.’ Time itself began at the big bang." Physicist, Dr. Paul Davies) We are created; along with time, space, matter, and energy. We are subject to the limitations of a time/space bound series of causes and effects. The creator simply is. [Rabbi Moshe Averick, "Turns out Richard Dawkins' watchmaker has 20/20 vision after all," Aish.com, Feb. 5, 2011.]
G'day GEM of TKI kairosfocus