Professor Victor Stenger is an American particle physicist and a noted atheist, who popularized the phrase, “Science flies you to the moon. Religion flies you into buildings”. Professor Stenger is also the author of several books, including his recent best-seller, The Fallacy of Fine-Tuning: How the Universe is Not Designed for Humanity (Prometheus Books, 2011). Stenger’s latest book has been received with great acclaim by atheists: “Stenger has demolished the fine-tuning proponents,” writes one enthusiastic Amazon reviewer, adding that the book tells us “how science is able to demonstrate the non-existence of god.”
Well, it seems that the great Stenger has finally met his match. Dr. Luke A. Barnes, a post-doctoral researcher at the Institute for Astronomy, ETH Zurich, Switzerland, has written a scathing critique of Stenger’s book. I’ve read refutations in my time, but I have to say, this one is devastating.
In his paper, Dr. Barnes takes care to avoid drawing any metaphysical conclusions from the fact of fine-tuning. He has no religious axe to grind. His main concern is simply to establish that the fine-tuning of the universe is real, contrary to the claims of Professor Stenger, who asserts that all of the alleged examples of fine-tuning in our universe can be explained without the need for a multiverse.
Dr. Barnes’ arXiv paper, The Fine-Tuning of the Universe for Intelligent Life (Version 1, December 21, 2011), is available online, and I shall be quoting from it below. Since the paper is quite technical at times, I’ve omitted mathematical equations and kept references to physical parameters to a minimum; I simply wish to give readers an overview of what Dr. Barnes perceives as the key flaws in Professor Stenger’s book.
I would like to add that Dr. Barnes has also written an incisive online critique of Michael Ikeda and Bill Jefferys’ widely cited paper, The Anthropic Principle Does Not Support Supernaturalism, which Professor Stenger cites in his book to show that even if some observation were to establish that the universe is fine-tuned, it could only count as evidence against God’s existence. Part 1 of Dr. Barnes’ reply is here; Part 2 is here.
What follows is a selection of quotes from Dr. Barnes’ arXiv paper, covering the key points. All bold emphases are mine, not the author’s, and page references are to Dr. Barnes’ paper. The term “FOFT” in the quotes below is an abbreviation of the title of Professor Stenger’s latest book, The Fallacy of Fine-Tuning: How the Universe is Not Designed for Humanity (Prometheus Books, 2011).
Finally, I would like to thank Dr. Barnes for making his paper available for public comment online, and I wish him every success in his future scientific work.
The fine-tuning of the universe for intelligent life has received a great deal of attention in recent years, both in the philosophical and scientific literature. The claim is that in the space of possible physical laws, parameters and initial conditions, the set that permits the evolution of intelligent life is very small. I present here a review of the scientific literature, outlining cases of fine-tuning in the classic works of Carter, Carr and Rees, and Barrow and Tipler, as well as more recent work. To sharpen the discussion, the role of the antagonist will be played by Victor Stenger’s recent book The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us. Stenger claims that all known fine-tuning cases can be explained without the need for a multiverse. Many of Stenger’s claims will be found to be highly problematic. We will touch on such issues as the logical necessity of the laws of nature; objectivity, invariance and symmetry; theoretical physics and possible universes; entropy in cosmology; cosmic inflation and initial conditions; galaxy formation; the cosmological constant; stars and their formation; the properties of elementary particles and their effect on chemistry and the macroscopic world; the origin of mass; grand unified theories; and the dimensionality of space and time. I also provide an assessment of the multiverse, noting the significant challenges that it must face. I do not attempt to defend any conclusion based on the fine-tuning of the universe for intelligent life. This paper can be viewed as a critique of Stenger’s book, or read independently. (p. 1)
The claim that the universe is fine-tuned can be formulated as:
FT: In the set of possible physics, the subset that permit the evolution of life is very small.
As it stands, FT [the fine-tuning claim – VJT] is precise enough to distinguish itself from a number of other claims for which it is often mistaken. FT is not the claim that this universe is optimal for life, that it contains the maximum amount of life per unit volume or per baryon, that carbon-based life is the only possible type of life, or that the only kinds of universes that support life are minor variations on this universe. These claims, true or false, are simply beside the point. (p. 3)
The reason why FT [the fine-tuning claim – VJT] is an interesting claim is that it makes the existence of life in this universe appear to be something remarkable, something in need of explanation. The intuition here is that, if ours were the only universe, and if the causes that established the physics of our universe were indifferent to whether it would evolve life, then the chances of hitting upon a life-permitting universe are very small. (p. 3)
There are a few fallacies to keep in mind as we consider cases of fine-tuning.
The Cheap-Binoculars Fallacy: “Don’t waste money buying expensive binoculars. Simply stand closer to the object you wish to view”. We can make any point (or outcome) in possibility space seem more likely by zooming in on its neighbourhood. Having identified the life-permitting region of parameter space, we can make it look big by deftly choosing the limits of the plot. We could also distort parameter space using, for example, logarithmic axes. A good example of this fallacy is quantifying the fine-tuning of a parameter relative to its value in our universe, rather than the totality of possibility space. If a dart lands 3 mm from the centre of a dartboard, it is obviously fallacious to say that because the dart could have landed twice as far away and still scored a bullseye, the throw is only fine-tuned to a factor of two and there is “plenty of room” inside the bullseye. The correct comparison is between the area (or more precisely, solid angle) of the bullseye and the area in which the dart could land. Similarly, comparing the life-permitting range to the value of the parameter in our universe necessarily produces a bias toward underestimating fine-tuning, since we know that our universe is in the life-permitting range.
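The dartboard arithmetic can be made concrete with a short sketch. The numbers below (bullseye radius, board radius) are illustrative assumptions of my own, not figures from Dr. Barnes’ paper; the point is only the contrast between the “zoomed-in” linear measure and the correct area ratio.

```python
# Toy dartboard numbers (assumptions for illustration only):
# a 6.35 mm-radius bullseye on a board of radius 225 mm, with the
# dart landing 3 mm from the centre.
r_bull = 6.35    # bullseye radius, mm
r_board = 225.0  # radius of the region the dart could have landed in, mm
r_hit = 3.0      # distance of the dart from the centre, mm

# The fallacious "zoomed-in" measure: the dart could have landed
# about twice as far out and still scored a bullseye.
linear_slack = r_bull / r_hit

# The correct measure: the bullseye's area as a fraction of the
# whole area in which the dart could land.
area_fraction = (r_bull / r_board) ** 2

print(f"linear slack inside the bullseye: {linear_slack:.2f}x")
print(f"bullseye area / target area:      {area_fraction:.2e}")
```

The linear measure suggests a comfortable “factor of two”, while the area ratio shows the bullseye occupies well under a tenth of a percent of the target.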
The Flippant Funambulist Fallacy: “Tightrope-walking is easy!”, the man says, “just look at all the places you could stand and not fall to your death!”. This is nonsense, of course: a tightrope walker must overbalance in a very specific direction if her path is to be life-permitting. The freedom to wander is tightly constrained. When identifying the life-permitting region of parameter space, the shape of the region is not particularly relevant. An elongated life-friendly region is just as fine-tuned as a compact region of the same area. The fact that we can change the setting on one cosmic dial, so long as we very carefully change another at the same time, does not necessarily mean that FT [the fine-tuning claim – VJT] is false.
The Sequential Juggler Fallacy: “Juggling is easy!”, the man says, “you can throw and catch a ball. So just juggle all five, one at a time”. Juggling five balls one-at-a-time isn’t really juggling. For a universe to be life-permitting, it must satisfy a number of constraints simultaneously. For example, a universe with the right physical laws for complex organic molecules, but which re-collapses before it is cool enough to permit neutral atoms will not form life. One cannot refute FT by considering life-permitting criteria one-at-a-time and noting that each can be satisfied in a wide region of parameter space. In set-theoretic terms, we are interested in the intersection of the life-permitting regions, not the union.
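The set-theoretic point (intersection, not union) can be sketched numerically. The two “life-permitting criteria” below are purely hypothetical bands in a toy two-parameter space, invented for illustration; each band stretches right across parameter space on its own, yet their intersection is tiny.

```python
import random

random.seed(0)

# Toy model (criteria and ranges are illustrative assumptions, not
# from the paper): parameters x, y drawn uniformly from the unit square.
# Criterion A ("stable chemistry"):    |y - x| < 0.02
# Criterion B ("long-lived universe"): |y - (1 - x)| < 0.02
# Each is a long, thin band that can be satisfied across a wide
# stretch of parameter space when considered one at a time.
N = 100_000
a = b = both = 0
for _ in range(N):
    x, y = random.random(), random.random()
    in_a = abs(y - x) < 0.02
    in_b = abs(y - (1 - x)) < 0.02
    a += in_a
    b += in_b
    both += in_a and in_b

print(f"P(A)       ~ {a / N:.4f}")
print(f"P(B)       ~ {b / N:.4f}")
print(f"P(A and B) ~ {both / N:.4f}")  # the intersection is far smaller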
The Cane Toad Solution: In 1935, the Bureau of Sugar Experiment Stations was worried by the effect of the native cane beetle on Australian sugar cane crops. They introduced 102 cane toads, imported from Hawaii, into parts of Northern Queensland in the hope that they would eat the beetles. And thus the problem was solved forever, except for the 200 million cane toads that now call eastern Australia home, eating smaller native animals, and secreting a poison that kills any larger animal that preys on them. A cane toad solution, then, is one that doesn’t consider whether the end result is worse than the problem itself. When presented with a proposed fine-tuning explainer, we must ask whether the solution is more fine-tuned than the problem.
Stenger is a particle physicist, a noted speaker, and the author of a number of books and articles on science and religion. In his latest book, “The Fallacy of Fine-Tuning: Why the Universe is Not Designed for Us” [hereafter FOFT], he makes the following bold claim:
[T]he most commonly cited examples of apparent fine-tuning can be readily explained by the application of a little well-established physics and cosmology. … [S]ome form of life would have occurred in most universes that could be described by the same physical models as ours, with parameters whose ranges varied over ranges consistent with those models. And I will show why we can expect to be able to describe any uncreated universe with the same models and laws with at most slight, accidental variations. Plausible natural explanations can be found for those parameters that are most crucial for life… My case against fine-tuning will not rely on speculations beyond well-established physics nor on the existence of multiple universes. [FOFT pp. 22, 24]
Let’s be clear on the task that Stenger has set for himself. There are a great many scientists, of varying religious persuasions, who accept that the universe is fine-tuned for life, e.g. Barrow, Carr, Carter, Davies, Dawkins, Deutsch, Ellis, Greene, Guth, Harrison, Hawking, Linde, Page, Penrose, Polkinghorne, Rees, Sandage, Smolin, Susskind, Tegmark, Tipler, Vilenkin, Weinberg, Wheeler, Wilczek. They differ, of course, on what conclusion we should draw from this fact. Stenger, on the other hand, claims that the universe is not fine-tuned. (pp. 6-7)
The Laws of Nature
Are the laws of nature themselves fine-tuned? Stenger defends the ambitious claim that the laws of nature could not have been different because they can be derived from the requirement that they be Point-of-View Invariant (hereafter, PoVI)…
We can formulate Stenger’s argument for this conclusion as follows:
LN1. If our formulation of the laws of nature is to be objective, it must be PoVI.
LN2. Invariance implies conserved quantities (Noether’s theorem).
LN3. Thus, “when our models do not depend on a particular point or direction in space or a particular moment in time, then those models must necessarily contain the quantities linear momentum, angular momentum, and energy, all of which are conserved. Physicists have no choice in the matter, or else their models will be subjective, that is, will give uselessly different results for every different point of view. And so the conservation principles are not laws built into the universe or handed down by deity to govern the behavior of matter. They are principles governing the behavior of physicists.” [FOFT p. 82, emphasis original]
This argument commits the fallacy of equivocation – the term “invariant” has changed its meaning between LN1 and LN2. (pp. 7-8)
Conclusion: We can now see the flaw in Stenger’s argument. Premise LN1 should read: If our formulation of the laws of nature is to be objective, then it must be covariant. Premise LN2 should read: symmetries imply conserved quantities. Since ‘covariant’ and ‘symmetric’ are not synonymous, it follows that the conclusion of the argument is unproven, and we would argue that it is false. The conservation principles of this universe are not merely principles governing our formulation of the laws of nature. (p. 17)
SSB [spontaneous symmetry breaking – VJT] allows the laws of nature to retain their symmetry and yet have asymmetric solutions.
Even if the symmetries of the laws of nature were inevitable, it would still be an open question as to precisely which symmetries were broken in our universe and which were unbroken. (p. 18)
Changing the Laws of Nature
What if the laws of nature were different? Stenger says:
… what about a universe with a different set of “laws”? There is not much we can say about such a universe, nor do we need to. Not knowing what any of their parameters are, no one can claim that they are fine-tuned. [FOFT p. 69]
In reply, fine-tuning isn’t about what the parameters and laws are in a particular universe. Given some other set of laws, we ask: if a universe were chosen at random from the set of universes with those laws, what is the probability that it would support intelligent life? If that probability is suitably (and robustly) small, then we conclude that that region of possible-physics-space contributes negligibly to the total life-permitting subset. It is easy to find examples of such claims.
* A universe governed by Maxwell’s Laws “all the way down” (i.e. with no quantum regime at small scales) will not have stable atoms (electrons radiate their kinetic energy and spiral rapidly into the nucleus) and hence no chemistry (Barrow & Tipler, 1986, pg. 303). We don’t need to know what the parameters are to know that life in such a universe is plausibly impossible.
* If electrons were bosons, rather than fermions, then they would not obey the Pauli exclusion principle. There would be no chemistry.
* If gravity were repulsive rather than attractive, then matter wouldn’t clump into complex structures. Remember: your density, thank gravity, is 10^30 times greater than the average density of the universe.
* If the strong force were a long rather than short-range force, then there would be no atoms. Any structures that formed would be uniform, spherical, undifferentiated lumps, of arbitrary size and incapable of complexity.
* If, in electromagnetism, like charges attracted and opposites repelled, then there would be no atoms. As above, we would just have undifferentiated lumps of matter.
* The electromagnetic force allows matter to cool into galaxies, stars, and planets. Without such interactions, all matter would be like dark matter, which can only form into large, diffuse, roughly spherical haloes of matter whose only internal structure consists of smaller, diffuse, roughly spherical subhaloes. (p. 18)
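The first bullet point above can be made quantitative with a standard back-of-envelope estimate (a textbook result, not a calculation taken from Dr. Barnes’ paper): a classical electron radiating according to the Larmor formula spirals from a Bohr-radius orbit into the nucleus in about 10^-11 seconds.

```python
# Classical inspiral time of an electron in a "Maxwell all the way
# down" universe, via the standard Larmor-radiation estimate
# t = a0^3 / (4 * r0^2 * c). Textbook back-of-envelope figure.
a0 = 5.29e-11   # Bohr radius, m
r0 = 2.82e-15   # classical electron radius, m
c = 2.998e8     # speed of light, m/s

t_spiral = a0**3 / (4 * r0**2 * c)
print(f"classical inspiral time ~ {t_spiral:.1e} s")  # ~1.6e-11 s
```

Purely classical atoms collapse in a hundredth of a nanosecond, which is why “no quantum regime” plausibly means no chemistry, whatever the other parameters are.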
Moving from the laws of nature to the parameters of those laws, Stenger makes the following general argument against supposed examples of fine-tuning:
[T]he examples of fine-tuning given in the theist literature . . . vary one parameter while holding all the rest constant. This is both dubious and scientifically shoddy. As we shall see in several specific cases, changing one or more other parameters can often compensate for the one that is changed. [FOFT p. 70]
To illustrate this point, Stenger introduces “the wedge”… Here, x and y are two physical parameters that can vary from zero to x-max and y-max, where we can allow these values to approach infinity if so desired. The point (x0, y0) represents the values of x and y in our universe. The life-permitting range is the shaded wedge. Stenger’s point is that varying only one parameter at a time only explores that part of parameter space which is vertically or horizontally adjacent to (x0, y0), thus missing most of parameter space. (p. 19)
In response, fine-tuning relies on a number of independent life-permitting criteria. Fail any of these criteria, and life becomes dramatically less likely, if not impossible. When parameter space is explored in the scientific literature, it rarely (if ever) looks like the wedge. We instead see many intersecting wedges. Here are two examples… (p. 20)
These two examples show that the wedge, by only considering a single life-permitting criterion, seriously distorts typical cases of fine-tuning by committing the sequential juggler fallacy (Section 2). Stenger further distorts the case for fine-tuning by saying:
In the fine-tuning view, there is no wedge and the point has infinitesimal area, so the probability of finding life is zero. [FOFT p. 70]
No reference is given, and this statement is not true of the scientific literature. The wedge is a straw man. (p. 21)
We turn now to cosmology. The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low entropy state. (p. 23)
Let’s return to Stenger’s proposed solution… Stenger takes it for granted that the universe is homogeneous and isotropic. We can see this also in his use of the Friedmann equation, which assumes that space-time is homogeneous and isotropic. Not surprisingly, once homogeneity and isotropy have been assumed, Stenger finds that the solution to the entropy problem is remarkably easy.
We conclude that Stenger has not only failed to solve the entropy problem; he has failed to comprehend it. He has presented the problem itself as its solution. Homogeneous, isotropic expansion cannot solve the entropy problem – it is the entropy problem. Stenger’s assertion that “the universe starts out with maximum entropy or complete disorder” is false. A homogeneous, isotropic spacetime is an incredibly low entropy state. Penrose (1989) warned of precisely this brand of failed solution two decades ago... (p. 26)
We turn now to cosmic inflation, which proposes that the universe underwent a period of accelerated expansion in its earliest stages. The achievements of inflation are truly impressive – in one fell swoop, the universe is sent on its expanding way, the flatness, horizon, and monopole problem are solved and we have concrete, testable and seemingly correct predictions for the origin of cosmic structure. It is a brilliant idea, and one that continues to defy all attempts at falsification. Since life requires an almost-flat universe (Barrow & Tipler, 1986, pg. 408ff.), inflation is potentially a solution to a particularly impressive fine-tuning problem – sans inflation, the density of the universe at the Planck time must be tuned to 60 decimal places in order for the universe to be life-permitting. (p. 27)
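The “60 decimal places” figure can be motivated by a standard order-of-magnitude sketch (a textbook argument, not taken from Dr. Barnes’ paper, and treating the whole expansion history as radiation-dominated for simplicity). The Friedmann equation gives the deviation from the critical density as

```latex
\Omega(t) - 1 \;=\; \frac{k}{a^2 H^2} .
```

During radiation domination $a \propto t^{1/2}$ and $H \propto 1/t$, so $a^2 H^2 \propto 1/t$ and any deviation from flatness grows as $|\Omega - 1| \propto t$. Running this back from $|\Omega_0 - 1| \lesssim 1$ today ($t_0 \sim 10^{17}\,\mathrm{s}$) to the Planck time ($t_{\mathrm{Pl}} \sim 10^{-43}\,\mathrm{s}$) requires $|\Omega(t_{\mathrm{Pl}}) - 1| \lesssim 10^{-60}$, i.e. a density tuned to roughly 60 decimal places.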
Let’s summarise. Inflation is a wonderful idea; in many ways it seems irresistible (Liddle, 1995). However, we do not have a physical model, and even if we had such a model, “although inflationary models may alleviate the “fine tuning” in the choice of initial conditions, the models themselves create new “fine tuning” issues with regard to the properties of the scalar field” (Hollands & Wald, 2002b). To pretend that the mere mention of inflation makes a life-permitting universe “100 percent” inevitable [FOFT p. 245] is naive in the extreme, a cane toad solution. (p. 31)
Suppose that inflation did solve the fine-tuning of the density of the universe. Is it reasonable to hope that all fine-tuning cases could be solved in a similar way? We contend not, because inflation has a target. Let’s consider the range of densities that the universe could have had at some point in its early history. One of these densities is physically singled out as special – the critical density. Now let’s note the range of densities that permit the existence of cosmic structure in a long-lived universe. We find that this range is very narrow. Very conveniently, this range neatly straddles the critical density.
We can now see why inflation has a chance. There is in fact a three-fold coincidence – A: the density needed for life, B: the critical density, and C: the actual density of our universe are all aligned. B and C are physical parameters, and so it is possible that some physical process can bring the two into agreement. The coincidence between A and B then creates the required anthropic coincidence (A and C). If, for example, life required a universe with a density (say, just after reheating) 10 times less than critical, then inflation would do a wonderful job of making all universes uninhabitable.
Inflation thus represents a very special case. Waiting inside the life-permitting range (L) is another physical parameter (p). Aim for p and you will get L thrown in for free. This is not true of the vast majority of fine-tuning cases. There is no known physical scale waiting in the life-permitting range of the quark masses, fundamental force strengths or the dimensionality of space-time. There can be no inflation-like dynamical solution to these fine-tuning problems because dynamical processes are blind to the requirements of intelligent life.
What if, unbeknownst to us, there was such a fundamental parameter? It would need to fall into the life-permitting range. As such, we would be solving a fine-tuning problem by creating at least one more. And we would also need to posit a physical process able to dynamically drive the value of the quantity in our universe toward p. (pp. 31-32)
The Amplitude of Primordial Fluctuations Q
Q, the amplitude of primordial fluctuations, is one of Martin Rees’ Just Six Numbers. In our universe, its value is [approx.] Q = 2 x 10^(-5), meaning that in the early universe the density at any point was typically within 1 part in 100,000 of the mean density. What if Q were different? (p. 32)
If Q were smaller than 10^(-6), gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial ‘mix’ of atoms, dark energy and radiation were the same as our own. On the other hand, a universe where Q were substantially larger than 10^(-5) – where the initial “ripples” were replaced by large-amplitude waves – would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn’t fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe… Stars would be packed too close together and buffeted too frequently to retain stable planetary systems. (Rees, 1999, pg. 115)
Stenger has two replies…. (p. 32)
Stenger’s second reply is to ask “… is an order of magnitude fine-tuning? …”
There are a few problems here. We have a clear case of the flippant funambulist fallacy – the possibility of altering other constants to compensate for the change in Q is not evidence against fine-tuning. Choose Q and, say, alpha-G at random and you are unlikely to have picked a life-permitting pair, even if our universe is not the only life-permitting one. We also have a nice example of the cheap-binoculars fallacy. The allowed change in Q relative to its value in our universe (“an order of magnitude”) is necessarily an underestimate of the degree of fine-tuning. The question is whether this range is small compared to the possible range of Q. Stenger seems to see this problem, and so argues that large values of Q are unlikely to result from inflation. This claim is false, and symptomatic of Stenger’s tenuous grasp of cosmology. (p. 33)
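The “choose Q at random” intuition can be sketched numerically. The sampling range and band width below are wholly hypothetical, chosen only to make the logic visible; they are not the actual possible range of Q, which is the open question the text identifies.

```python
import math
import random

random.seed(1)

# Toy illustration (the 20-decade sampling range and the one-decade
# "life-permitting" band are hypothetical assumptions): draw log10(Q)
# uniformly and count draws landing within half a decade of the
# observed value Q ~ 2e-5.
N = 200_000
log_q_obs = math.log10(2e-5)
hits = 0
for _ in range(N):
    log_q = random.uniform(-15, 5)     # log10(Q) over 20 decades
    if abs(log_q - log_q_obs) < 0.5:   # a one-decade-wide band
        hits += 1

print(f"fraction of random draws near the observed Q ~ {hits / N:.3f}")
# Requiring a second, independently tuned parameter (e.g. alpha_G)
# would multiply such small factors together.
```

Even granting a generous one-decade band, a random draw lands in it only a few percent of the time; each additional independent parameter multiplies the improbability.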
The fine-tuning of Q stands up well under examination. (p. 34)
The Cosmological Constant, Lambda
The cosmological constant problem is described in the textbook of Burgess & Moore (2006) as “arguably the most severe theoretical problem in high-energy physics today, as measured by both the difference between observations and theoretical predictions, and by the lack of convincing theoretical ideas which address it”. A well-understood and well-tested theory of fundamental physics (Quantum Field Theory – QFT) predicts contributions to the vacuum energy of the universe that are [approx.] 10^120 times greater than the observed total value. Stenger’s reply is guided by the following principle:
Any calculation that disagrees with the data by 50 or 120 orders of magnitude is simply wrong and should not be taken seriously. We just have to await the correct calculation. [FOFT p. 219]
This seems indistinguishable from reasoning that the calculation must be wrong since otherwise the cosmological constant would have to be fine-tuned. One could not hope for a more perfect example of begging the question. More importantly, there is a misunderstanding in Stenger’s account of the cosmological constant problem. The problem is not that physicists have made an incorrect prediction. We can use the term dark energy for any form of energy that causes the expansion of the universe to accelerate, including a “bare” cosmological constant (see Barnes et al., 2005, for an introduction to dark energy). Cosmological observations constrain the total dark energy. QFT [quantum field theory – VJT] allows us to calculate a number of contributions to the total dark energy from matter fields in the universe. Each of these contributions turns out to be 10^120 times larger than the total. There is no direct theory-vs.-observation contradiction as one is calculating and measuring different things. The fine-tuning problem is that these different independent contributions, including perhaps some that we don’t know about, manage to cancel each other to such an alarming, life-permitting degree. This is not a straightforward case of Popperian falsification. (pp. 34-35)
The cosmological constant problem is actually a misnomer. This section has discussed the “bare” cosmological constant. It comes purely from general relativity, and is not associated with any particular form of energy. The 120 orders-of-magnitude problem refers to vacuum energy associated with the matter fields of the universe… The source of the confusion is the fact that vacuum energy has the same dynamical effect as the cosmological constant, so that observations measure an “effective” cosmological constant: effective-Lambda = bare-Lambda + vacuum-Lambda. The cosmological constant problem is really the vacuum energy problem. Even if Stenger could show that bare-Lambda = 0, this would do nothing to address why effective-Lambda is observed to be so much smaller than the predicted contributions to vacuum-Lambda. (p. 36)
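In symbols (standard order-of-magnitude figures from the cosmological-constant literature, not calculations taken from Dr. Barnes’ paper):

```latex
\Lambda_{\mathrm{eff}} \;=\; \Lambda_{\mathrm{bare}} + \Lambda_{\mathrm{vac}},
\qquad
\rho_{\Lambda,\mathrm{obs}} \sim (10^{-3}\,\mathrm{eV})^4
\quad\text{vs.}\quad
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \sim M_{\mathrm{Pl}}^4 \sim (10^{27}\,\mathrm{eV})^4 ,
```

where $M_{\mathrm{Pl}}$ is the (reduced) Planck mass. The ratio of the naive cutoff-at-the-Planck-scale estimate to the observed dark energy density is then of order $(10^{30})^4 = 10^{120}$, which is the figure quoted in the text; the puzzle is why the independent contributions cancel to that precision.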
There are a number of excellent reviews of the cosmological constant in the scientific literature (Weinberg, 1989; Carroll, 2001; Vilenkin, 2003; Polchinski, 2006; Durrer & Maartens, 2007; Padmanabhan, 2007; Bousso, 2008). In none will you find Stenger’s particular brand of dismissiveness. The calculations are known to be correct in other contexts and so are taken very seriously. Supersymmetry won’t help. The problem cannot be defined away. (p. 38)
The Origin of Mass
Let’s consider Stenger’s responses to these cases of fine-tuning. (p. 47)
Stenger is either not aware of the hierarchy and flavour problems, or else he has solved some of the most pressing problems in particle physics and not bothered to pass this information on to his colleagues… (p. 47)
We can draw some conclusions. First, Stenger’s discussion of the surprising lightness of fundamental masses is woefully inadequate. To present it as a solved problem of particle physics is a gross misrepresentation of the literature. Secondly, smallness is not sufficient for life… The masses must be sufficiently small but not too small. Finally, suppose that the LHC [Large Hadron Collider – VJT] discovers that supersymmetry is a (broken) symmetry of our universe. This would not be the discovery that the universe could not have been different. It would not be the discovery that the masses of the fundamental particles must be small. It would at most show that our universe has chosen a particularly elegant and beautiful way to be life-permitting. (p. 49)
Protons, Neutrons, Electrons
We turn now to the relative masses of the three most important particles in our universe: the proton, neutron and electron, from which atoms are made. Consider first the ratio of the electron to the proton mass, … of which Stenger says: “…we can argue that the electron mass is going to be much smaller than the proton mass in any universe even remotely like ours.” [FOFT p. 164] (p. 50)
The fact that Stenger is comparing the electron mass in our universe with the electron mass in universes “like ours” is all the evidence one needs to conclude that Stenger doesn’t understand fine-tuning. The fact that universes like ours turn out to be rather similar to our universe isn’t particularly enlightening. (p. 50)
Finally, and most importantly, note carefully Stenger’s conclusion. He states that no fine-tuning is needed for the neutron-proton mass difference in our universe to be approximately equal to the up quark-down quark mass difference in our universe. Stenger has compared our universe with our universe and found no evidence of fine-tuning. There is no discussion of the life-permitting range, no discussion of the possible range of [mass(neutron) – mass(proton)] (or its relation to the possible range of [mass(down quark) – mass(up quark)]), and thus no relevance to fine-tuning whatsoever. (p. 51)
The Strength of the Fundamental Forces – Conclusion
Suppose Bob sees Alice throw a dart and hit the bullseye. “Pretty impressive, don’t you think?”, says Alice. “Not at all”, says Bob, “the point-of-impact of the dart can be explained by the velocity with which the dart left your hand. No fine-tuning is needed.” On the contrary, the fine-tuning of the point of impact (i.e. the smallness of the bullseye relative to the whole wall) is evidence for the fine-tuning of the initial velocity.
This flaw alone makes much of Chapters 7 to 10 of FOFT irrelevant. The question of the fine-tuning of these more fundamental parameters is not even asked, making the whole discussion a cane toad solution. Stenger has given us no reason to think that the life-permitting region is larger, or possibility space smaller, than has been calculated in the fine-tuning literature.
The parameters of the standard model remain some of the best understood and most impressive cases of fine-tuning. (pp. 54-55)
Dimensionality of Spacetime
A number of authors have emphasised the life-permitting properties of the particular combination of one time- and three space-dimensions, going back to Ehrenfest (1917) and Whitrow (1955), summarised in Barrow & Tipler (1986) and Tegmark (1997). (p. 55)
FOFT addresses the issue:
Martin Rees proposes that the dimensionality of the universe is one of six parameters that appear particularly adjusted to enable life … Clearly Rees regards the dimensionality of space as a property of objective reality. But is it? I think not. Since the space-time model is a human invention, so must be the dimensionality of space-time. We choose it to be three because it fits the data. In the string model, we choose it to be ten. We use whatever works, but that does not mean that reality is exactly that way. [FOFT p. 51]
…String theory is actually an excellent counterexample to Stenger’s claims. String theorists are not content to posit ten dimensions and leave it at that. They must compactify all but 3+1 of the extra dimensions for the theory to have a chance of describing our universe. This fine-tuning case refers to the number of macroscopic or ‘large’ space dimensions, which both string theory and classical physics agree to be three. The possible existence of small, compact dimensions is irrelevant. (p. 56)
The confusion of Stenger’s response is manifest in the sentence: “We choose three [dimensions] because it fits the data” [FOFT p. 51]. This isn’t much of a choice. One is reminded of the man who, when asked why he chose to join the line for ‘non-hen-pecked husbands’, answered, “because my wife told me to”. The universe will let you choose, for example, your unit of length. But you cannot decide that the macroscopic world has four space dimensions. It is a mathematical fact that in a universe with four spatial dimensions you could, with a judicious choice of axis, make a left-footed shoe into a right-footed one by rotating it. Our inability to perform such a transformation is not the result of physicists arbitrarily deciding that, in this space-time model we’re inventing, space will have three dimensions. (p. 56)
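That mathematical fact is easy to check numerically. The sketch below uses a toy “shoe” (three asymmetric points standing in for any chiral object) embedded in four dimensions at w = 0; a half-turn in the (x, w) plane, which is a pure rotation, reproduces the 3D mirror image exactly.

```python
import math

def rotate_xw(p, theta):
    """Rotate a 4D point in the (x, w) plane; y and z are untouched."""
    x, y, z, w = p
    return (math.cos(theta) * x - math.sin(theta) * w,
            y,
            z,
            math.sin(theta) * x + math.cos(theta) * w)

# A chiral "shoe": an asymmetric set of 3D points, embedded in 4D at w = 0.
shoe = [(1.0, 0.0, 0.0, 0.0), (0.0, 2.0, 0.0, 0.0), (0.0, 0.0, 3.0, 0.0)]

# A half-turn in the (x, w) plane is a genuine rotation (determinant +1)...
rotated = [rotate_xw(p, math.pi) for p in shoe]

# ...yet for objects confined to w = 0 it acts exactly like the
# mirror reflection x -> -x, which no 3D rotation can achieve.
mirrored = [(-x, y, z, w) for (x, y, z, w) in shoe]

ok = all(all(abs(a - b) < 1e-12 for a, b in zip(r, m))
         for r, m in zip(rotated, mirrored))
print("4D rotation reproduces the 3D mirror image:", ok)
```

In three dimensions a reflection has determinant -1 and cannot be a rotation; adding a fourth axis supplies the extra sign flip, which is exactly why chirality is a genuine, observer-independent feature of three macroscopic space dimensions.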
Could a multiverse proposal ever be regarded as scientific? FOFT p. 228 notes the similarity between undetectable universes and undetectable quarks, but the analogy is not a good one. The properties of quarks – mass, charge, spin, etc. – can be inferred from measurements. Quarks have a causal effect on particle accelerator measurements; if the quark model were wrong, we would know about it. In contrast, we cannot observe any of the properties of a multiverse… as they have no causal effect on our universe. We could be completely wrong about everything we believe about these other universes and no observation could correct us. The information is not here. The history of science has repeatedly taught us that experimental testing is not an optional extra. The hypothesis that a multiverse actually exists will always be untestable.
The most optimistic scenario is where a physical theory, which has been well-tested in our universe, predicts a universe-generating mechanism. Even then, there would still be questions beyond the reach of observation, such as whether the necessary initial conditions for the generator hold in the metaspace, and whether there are modifications to the physical theory that arise at energy scales or on length scales relevant to the multiverse but beyond testing in our universe. Moreover, the process by which a new universe is spawned almost certainly cannot be observed. (p. 58)
We should be wary of any multiverse which allows for single brains, imprinted with memories, to fluctuate into existence. The worry is that, for every observer who really is a carbon-based life form who evolved on a planet orbiting a star in a galaxy, there are vastly more for whom this is all a passing dream, the few, fleeting fancies of a phantom fluctuation. (p. 61)
Another argument against the multiverse is given by Penrose (2004, pg. 763ff.). As with the Boltzmann multiverse, the problem is that this universe seems uncomfortably roomy. (p. 62)
In other words, if we live in a multiverse generated by a process like chaotic inflation, then for every observer who observes a universe of our size, there are 10^10^123 who observe a universe that is just 10 times smaller. This particular multiverse dies the same death as the Boltzmann multiverse. Penrose’s argument is based on the place of our universe in phase space, and is thus generic enough to apply to any multiverse proposal that creates more small universe domains than large ones. Most multiverse mechanisms seem to fall into this category. (p. 62)
A multiverse generated by a simple underlying mechanism is a remarkably seductive idea. The mechanism would be an extrapolation of known physics, that is, physics with an impressive record of explaining observations from our universe. The extrapolation would be natural, almost inevitable. The universe as we know it would be a very small part of a much larger whole. Cosmology would explore the possibilities of particle physics; what we know as particle physics would be mere by-laws in an unimaginably vast and variegated cosmos. The multiverse would predict what we expect to observe by predicting what conditions hold in universes able to support observers.
Sadly, most of this scenario is still hypothetical. The goal of this section has been to demonstrate the mountain that the multiverse is yet to climb, the challenges that it must face openly and honestly. The multiverse may yet solve the fine-tuning of the universe for intelligent life, but it will not be an easy solution. “Multiverse” is not a magic word that will make all the fine-tuning go away. (p. 62)
Conclusions and Future
We conclude that the universe is fine-tuned for the existence of life. Of all the ways that the laws of nature, constants of physics and initial conditions of the universe could have been, only a very small subset permits the existence of intelligent life. (p. 62)
It is not true that fine-tuning must eventually yield to the relentless march of science. Fine-tuning is not a typical scientific problem, that is, a phenomenon in our universe that cannot be explained by our current understanding of physical laws. It is not a gap. Rather, we are concerned with the physical laws themselves. In particular, the anthropic coincidences are not like, say, the coincidence between inertial mass and gravitational mass in Newtonian gravity, which is a coincidence between two seemingly independent physical quantities. Anthropic coincidences, on the other hand, involve a happy consonance between a physical quantity and the requirements of complex, embodied intelligent life. The anthropic coincidences are so arresting because we are accustomed to thinking of physical laws and initial conditions as being unconcerned with how things turn out. Physical laws are material and efficient causes, not final causes. There is, then, no reason to think that future progress in physics will render a life-permitting universe inevitable. When physics is finished, when the equation is written on the blackboard and fundamental physics has gone as deep as it can go, fine-tuning may remain, basic and irreducible. (p. 63)
Perhaps the most optimistic scenario is that we will eventually discover a simple, beautiful physical principle from which we can derive a unique physical theory, whose unique solution describes the universe as we know it, including the standard model, quantum gravity, and (dare we hope) the initial conditions of cosmology. While this has been the dream of physicists for centuries, there is not the slightest bit of evidence that this idea is true. It is almost certainly not true of our best hope for a theory of quantum gravity, string theory, which has “anthropic principle written all over it” (Schellekens, 2008). The beauty of its principles has not saved us from the complexity and contingency of the solutions to its equations. Beauty and simplicity are not necessity. (p.63)
Appendix B – Stenger’s MonkeyGod
In Chapter 13, Stenger argues against the fine-tuning of the universe for intelligent life using the results of a computer code, subtly named MonkeyGod. It is a Monte Carlo code, which chooses values of certain parameters from a given probability density function (PDF) and then calculates whether a universe with those parameters would support life. (p. 68)
We conclude that MonkeyGod is so deeply flawed that its results are meaningless. (p. 71)
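The Monte Carlo procedure Barnes describes can be sketched in a few lines of Python. Everything below — the parameter names, the uniform-in-log priors, the ranges, and the toy habitability test — is my own illustrative assumption, not Stenger’s actual code:

```python
import math
import random

def sample_universe(rng):
    """Draw one toy 'universe': four dimensionless parameters drawn
    uniformly in log-space over assumed ranges (an illustrative choice;
    Stenger's actual code and priors may differ)."""
    def log_uniform(lo, hi):
        return math.exp(rng.uniform(math.log(lo), math.log(hi)))
    return {
        "alpha":   log_uniform(1e-4, 1e0),   # fine-structure-like constant
        "alpha_s": log_uniform(1e-2, 1e2),   # strong-coupling-like constant
        "m_e":     log_uniform(1e-3, 1e1),   # electron-mass-like parameter
        "m_p":     log_uniform(1e-1, 1e3),   # proton-mass-like parameter
    }

def looks_life_permitting(u):
    """A deliberately crude stand-in for a habitability test: demand
    stable atoms (m_e << m_p) and couplings of moderate size."""
    return (u["m_e"] / u["m_p"] < 0.1
            and 1e-3 < u["alpha"] < 1e-1
            and 1e-1 < u["alpha_s"] < 1e1)

rng = random.Random(42)
trials = 100_000
hits = sum(looks_life_permitting(sample_universe(rng)) for _ in range(trials))
print(f"life-permitting fraction: {hits / trials:.4f}")
```

The point Barnes presses in Appendix B is that every step in a procedure like this — which parameters are varied, over what ranges, and under which priors — silently shapes the answer.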
76 Replies to “Is fine-tuning a fallacy?”
Yes, fine tuning is a fallacy.
Well, okay, “fallacy” isn’t the correct technical term here. But it is a mistaken claim.
The problem with the fine tuning argument is that it is not a logical argument. It is an appeal to intuition. And that makes it dependent upon one’s other intuitions.
At least part of what Stenger argues is correct. Namely, that our scientific laws are human constructs that come from our efforts to coordinatize the universe. And to the extent that is true, the fact that they appear fine tuned is mostly evidence that the scientists who constructed them did a fine job. If the universe were different, there’s no way to know whether any of our concepts would even be applicable to that different universe. So there really isn’t any way to assess probabilities.
While part of what Stenger argues is correct, other parts make dubious assumptions as Barnes points out. But the same limitations that apply to Stenger’s review also apply to Barnes’s review. Any assessment of whether the universe is fine tuned is dependent upon very uncertain assumptions.
If Barnes wants to believe that the universe is fine tuned, that’s his privilege. And if Stenger wants to believe that it isn’t, that’s his privilege. But there could be no clear conclusion either way. So maybe we should leave it for people to decide for themselves, or to ignore completely if they so prefer.
Thank you for your post. You write:
I’m afraid I have to disagree with you on this point. Dr. Barnes writes on page 63 of his paper that “theoretical physicists find it rather easy to describe alternative universes that are free from logical contradiction” and he cites a paper by P. Davies in N. A. Manson (ed.), God and Design: The Teleological Argument and Modern Science (Routledge, 2003).
Dr. Barnes even gives some illustrations of his point in section 4.1.3 of his paper (page 18) which seem to demonstrate plainly that we can meaningfully speak of what the universe would be like, if its laws were different, and that we can say that if these laws were changed in certain ways, life would be impossible. In other words, we can still apply concepts and estimate probabilities in other possible universes. Some examples:
In all of these alternative universes, life would be impossible. If universes like these make up the great majority of alternative universes, then I think it is fair to say our own universe is fine-tuned.
You also write that “our scientific laws are human constructs that come from our efforts to coordinatize the universe” and that “the fact that they appear fine tuned is mostly evidence that the scientists who constructed them did a fine job.” I don’t see how this follows. From the fact that we can construct mathematical equations which describe the behavior of objects in the cosmos, it doesn’t automatically follow that most alternative sets of equations would describe situations in which atoms couldn’t even form, and hence in which life could never arise. That they do is a surprising fact – one that caught physicists by surprise a few decades ago, when the fine-tuning of our universe became apparent.
“The problem with the fine tuning argument is that it is not a logical argument. It is an appeal to the intuition. And that makes it dependent upon one’s other intuitions.”
When someone observes that our universe exhibits parameters that are balanced and ordered, can that same individual not construct a logical argument that supports an intelligent designer as its cause?
“the fact that they appear fine tuned is mostly evidence that the scientists who constructed them did a fine job.”
If I understand you correctly, then I disagree. They appear fine tuned because of the information they provide, and because, as irreducibly complex creatures, we are able to gather and understand this information. Information is key here. Scientists constructing them really has nothing to do with it; the laws would still exist, with the information they exhibit, whether or not a scientist was able to observe them.
“If the universe were different, there’s no way to know whether any of our concepts would even be applicable to that different universe. So there really isn’t any way to assess probabilities.”
I’m not sure the above statement is fair. We live in this universe, and the probabilities apply to what we have here, not to what we don’t have or wish to have. I’m not sure if anything I’m about to say or have said will make any sense, but I don’t buy the notion that the “fine tuned” argument is not a logical argument. If the “fine tuned” argument is not logical (based on the inability to cross-examine), then neither can the materialistic argument from blind chance be logical. If testing our concepts against other universes to assess probabilities is the name of the game, then the same needs to apply to any other argument.
It’s quite reasonable to believe that the fine tuned argument of our universe is a logical argument when we know that any adjustment to the parameters that hold our universe together would have serious implications.
If it is a logical argument, you should be able to find precise premises and a logical deduction from those premises.
Perhaps there could be a universe with no electricity, no magnetism, and thus no electromagnetic radiation, no light. It might have no spatial dimensions. Maybe there’s no chemistry as we know it. But maybe there could be intelligent creatures whose way of life fits that world, though it is unimaginable to us what that way of life would be.
Could a hypothetical really intelligent fish conceive of space flight, when its entire experience is in the sea?
Any examination of possible universes is limited to the universes that we can conceive of. And what we can conceive is very much limited by our own experience. We should not trust the results of such an examination.
This is the fallacy from possibility. It’s not rational to believe:
1. x is more probable than y
2. y is possible
3. therefore I believe y
What you have is an argument from ignorance:
1. I don’t know what other universes are like
2. Therefore, we are here by chance
It doesn’t follow.
We IDers have an argument from analogy:
1. like causes spawn like effects
2. intelligence is the only thing that can fine-tune
3. the universe is fine-tuned
4. therefore the universe is the result of intelligence
The only argument the atheists have is a mere wish:
1. like causes spawn like effects
2. intelligence is the only thing that can fine-tune
3. the universe is fine-tuned
4. I wish like causes did not spawn like effects or I wish fine-tuning were the result of chance.
5. therefore, we are here by chance.
We are not allowed to believe things that are not real. Such behavior leads to death. The evidence for fine-tuning is enormous. I’m actually starting to think that almost every property that contributes to life arising must be fine-tuned. The pH value of blood, the amount of greenhouse gases on Earth, the distance of the Earth from the sun, the iron in the middle of the Earth, the energy of visible light, the speed at which the Earth spins around its axis, the amount of oxygen in the atmosphere, the distance of stars from each other, the ratio of the strength of the covalent bonds to van der Waals forces – all of these are fine-tuned. Fine-tuning is something real, it’s not a matter of opinion. How one interprets fine-tuning is a matter of proper reasoning, which is always much more difficult than identifying a phenomenon. The theists have in their favor analogical reasoning, which is the most foundational of all reasoning; the atheists have a mere wish that there is a one-time exception to the rules.
Again, you have a flawed argument from possibility. What you’re saying is:
1. It’s possible that X exists
2. Therefore, we are here by chance
You also have an argument from ignorance
1. I don’t know if X exists
2. Therefore, we are here by chance
You need to cite positive evidence that we are here by chance. If you’re an agnostic then you need to state positive evidence that something in principle cannot be known.
In order for life to exist it must be able to predict the future. Even a one-celled eukaryote predicts the future when it uses its mitochondria to produce ATP. You might say that it looks like a mechanistic process with no thought required, and that’s true, but its designer was predicting the future when it built the mitochondria.
You can’t predict the future unless the following four criteria are met:
1. a thing must be constant through time and space. you can’t predict the future if cars lawlessly become horses.
2. time must progress in an orderly fashion. you can’t predict the future if it’s 2005, then randomly it becomes 2002.
3. a broad set of natural laws must remain unchanged. you can’t predict the future if gravity keeps changing randomly.
4. parallel universes do not overlap. you can’t predict the future if there is a hole in the ground for me, but no hole in the ground for anyone else.
5. caveat: these four criteria must be met to a large degree, not absolutely. randomness of course happens in our universe, the point is, not everything must be random.
There’s a few more, but you get the main idea. The point is you can’t just cook up a universe any way you want and expect life to arise from it. Life requires intelligence, that is to say, out of all the possible actions that life can do, it has to select from those possible actions a very tiny set of actions that will actually work and keep it alive. The only way this can be accomplished is if the above 4 criteria are met. Chance has no knowledge of those criteria. Thus chance is not an acceptable explanation for the existence of life.
No, I am not making that argument. I am not concluding that we are here by chance. I’m saying that there are questions about the origin of the universe that are not answered and probably not answerable.
It is not even clear that “we are here by chance” is meaningful. All we can really conclude is that we are here.
If I ever conclude that we are here by chance, I will provide the supporting evidence.
However, I only conclude that we are here, which seems self-evident.
That we are here by chance is meaningful. We are either here by intention or not; if not, then by definition we are here by chance.
What we have is evidence for fine-tuning. You don’t dispute that. What you dispute is what the evidence means. There are only two possibilities: chance or not chance, if not chance then we are here by intention. You have to ask yourself which of the two, given the evidence we have, is more plausible. Intention is clearly more plausible due to analogous reasoning. There is no reason to believe that fine-tuning is the result of chance. The only reason why people believe that it’s just a coincidence is because they wish to believe it.
You’re not reasoning properly. What you’re reasoning is:
1. like causes like effects
2. the effect of fine-tuning is caused by intelligence
3. Therefore, I think there is not enough evidence to choose between chance or intelligence.
You raise an interesting point with respect to possible universes that are radically unlike our own. Dr. Barnes has anticipated this objection, for he writes in his paper:
So Dr. Barnes is not claiming to be performing a probability calculation over the set of all possible physical laws, but over the set of all universes whose laws are either variations or negations of our own. That explains why he writes:
The question, “What if the laws of nature were different?”, shows that what he is considering are possible universes with laws that are variations on our own – e.g. universes where gravity is repulsive rather than attractive – or universes lacking some laws that are found in our own – e.g. universes lacking the electromagnetic force. These universes can fairly be regarded as the ones that are nearest to our own. Universes with radically different laws, like the ones you hypothesize, are further away in terms of “possibility space”. So what Dr. Barnes is arguing is that for universes that are somewhat similar to our own, the vast majority cannot support intelligent life. Surely that’s a significant result, and one that invites the question, “Why is it so?”
One also needs to be careful about defining the fine-tuning claim. Dr. Barnes defines it as follows:
On page 2, he makes it clear that he is talking about universes that are able to “to evolve and support intelligent life.”
You mentioned the possibility of a universe with no spatial dimensions. I would say that in such a universe, there might be disembodied intelligent life-forms (e.g. angels), but by definition, there could be no biological life-forms, and hence no evolution either. Any intelligent life in such a universe would therefore fall outside the scope of Dr. Barnes’ paper and outside the scope of the fine-tuning claim.
I don’t know about Stenger’s book, but you know from our earlier discussions that much of this is irrelevant, because the whole idea of the probability of a different universe is deeply questionable. If a constant X has life-permitting values between X-m and X+m, then what is the range of possible values of X we are comparing it to? And why X and not logX or X^1000? Does either Stenger or Barnes discuss these rather fundamental issues? They seem to dwarf any discussion of what the actual life-permitting values are.
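The worry about X versus logX versus X^1000 can be made concrete with a toy calculation: the very same life-permitting interval receives wildly different prior probability depending on which variable is treated as uniformly distributed. All the numbers below are purely illustrative assumptions of mine:

```python
import math

# Purely illustrative numbers: suppose a constant X could in principle
# take any value in [X_MIN, X_MAX], and only [A, B] is life-permitting.
X_MIN, X_MAX = 1e-6, 1e6
A, B = 1.0, 2.0

# Prior uniform in X: the life-permitting interval is a sliver.
p_uniform_x = (B - A) / (X_MAX - X_MIN)            # about 1e-6

# Prior uniform in log X: the very same interval covers ~2.5% of the range.
p_uniform_logx = math.log(B / A) / math.log(X_MAX / X_MIN)

# Prior uniform in X**1000: X_MAX**1000 overflows a float, so take the
# log10 of the excellent approximation (B / X_MAX)**1000
# (A**1000 and X_MIN**1000 are negligible next to B**1000 and X_MAX**1000).
log10_p_x1000 = 1000 * (math.log10(B) - math.log10(X_MAX))

print(f"uniform in X:       {p_uniform_x:.3e}")
print(f"uniform in log X:   {p_uniform_logx:.3e}")
print(f"uniform in X**1000: about 1e{log10_p_x1000:.0f}")
```

Same interval, three parameterizations, answers spanning thousands of orders of magnitude — which is exactly why the choice of measure matters.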
Religion was the mechanism for the modern world to smarten up and fly to the moon.
Identity is what flew them into the buildings. A Muslim Arab identity, and not a religion.
It is inaccurate to say religious ideas were behind 9/11
Just as the big point over there about Israel.
Israel is not an entity for a religion but an identity or a people.
That’s the big rub, amongst other complaints.
This is incompetent analysis of important matters. So why should I pay attention to him about the universe?
Pardon a quick note, on first glance, in the main in response to a bit of well-poisoning:
First, the man who led the Apollo project, the world famed von Braun, was not only a design thinker and Christian, but a creationist. (Cf. the notes in reply to Lewontin’s similar well-poisoning attempt, here.)
Second, not even the vast majority of Muslims are to be equated to Al Qaeda’s deeply indoctrinated, brainwashed and hate-programmed murderous terrorists. Much less, the varied people of ever so many other faiths. And in particular, the Christian Faith and Judaism, the two dominant religious views in our civilisation.
Not even, the much scapegoated Bible-believing, Gospel-preaching evangelicals.
And, Stenger has had every opportunity, reason and duty of care to know, speak and do better.
So, Stenger here is indulging in irresponsible and dangerous well-poisoning and scapegoating based on picking the most extreme case he can find then smearing it all across those with whom he differs; all of which are marks of hostility, contempt and worse. At the same time, as we can easily see here at UD, his general ilk ever so often and easily go ballistic if one so much as points out the well-established, wide-scale HISTORY concerning the moral hazards exhibited by Darwinist thought in the name of science over the past 100+ years. (Onlookers, cf. here on the relevant history [a CSU video lecture], and here for a very recent exchange at UD on the matter, that exemplifies the points just made. Observe, in particular, the utter want of serious response to the historical evidence on the line of descent of thought that Prof. Weikart has raised.)
Even though the issue is manifestly significant in the context of Science in Society, which has to do with ethical issues of the practice of pure and applied science. And for that, the rise of Social Darwinism — with Charles Darwin and family heavily implicated from the foundational days of that movement of thought — and its manifestations in especially German militarism, the rape of Belgium and the onward expansion of that horror to the whole continent of Europe a generation later, are highly legitimate case studies in point. As, is the associated eugenics movement, which has had global impact.
Such a pattern of behaviour on Stenger’s part is noteworthy because it goes to the all too frequently manifested character of the movement of which he is a prominent part, the New/Gnu Atheists.
Such, need to do some very sober re-thinking about their extremist, hostile, slanderous and even hateful conduct. (A point that regulars at UD will know why I underscore so specifically. To the hate-bloggers out there who picked this fight, Bydand!)
I do not cite this to dismiss his fine-tuning arguments, but to highlight major fallacies in his way of thinking and promoting his views that are patently laced with serious moral hazards. Well-poisoning disrespect, contempt and scapegoating towards those who dare to differ with today’s so-called New Atheism, are very dangerous signs indeed.
On the what if another cluster of laws and parameters etc would be life-permitting talking point, I would first draw attention to John Leslie’s well-known parable of the fly on the wall swatted by a bullet (noting as well his point in the same lecture that “a force strength or a particle mass often appears to require accurate tuning for several reasons at once . . . “ — i.e. the fine tuning in view is also multiply constrained and convergent):
In short, if we see an evident, deeply isolated target hit by a bullet, the best explanation is marksman, regardless of whether there are other possibilities elsewhere where similar targets positively carpet the “wall.” Fine tuning rests on observation of an evident narrowly specified operating point friendly to C-chemistry, aqueous medium life, not a global argument as to how there is no other possibility out there. Or, in another way of putting it: that a Honda Civic shows signs of being set up at a finely tuned operating point that marks it as designed based on multiple well-matched coordinated and organised components, does not lose its force because we can point to a submarine or a jumbo jet or a building or simply a Class AB push-pull audio or video bandwidth amplifier for that matter. And, since this has been put on the table at responsible level for so long, to go on as if this is not a key issue, is irresponsible or ill-informed.
I think I would add here [just read down at the already linked to see the source, I have to husband my links budget . . . ], Bradley’s list of requisites for cell-based life as we know it:
It is entirely in order to give Sir Fred Hoyle the last word, to open the door to serious thought through considering the significance of water and Carbon, as well as the way the cosmos seems to be so set up that the first five most abundant elements are H, He, O, C, N, which just happen to be more or less the atomic foundation of life:
(I invite the interested onlooker to cf the UD introductory post on the fine-tuning issue here, and the onward linked readings.)
GEM of TKI
Actually, the 9-11 hijackers exhibited a religiously motivated indoctrination in IslamIST ideology [“we are the true Islam” was the bait on the hook that got them hooked . . . ]. (Notice the distinction marked by that emphasis.)
When texts, ideas and prestige of highly respected institutions are taken out of wider context of the life situation that birthed them, are put in a hot-house high-pressure zealotry-driven environment and are isolated from balancing issues, honest self-examination and considerations in ethics and wider worldviews considerations, they can be used to brainwash and program extremists into hate and violence.
That is how Al Qaeda has set out to violently attack what the Mullahs of Iran — who seem to be having a hard time keeping their people brainwashed [too long has passed, and people know too much about the consequences of what seemed so wonderful to the naive and to those fed up with the Shah in 1979* . . . ] — call The Great Satan and the Little Satan, and how it seems to think it can justify the murder of innocents.
And of course the Black Flag Army and Gharqad tree hadiths are ever so handy proof texts to poison the gullible or foolishly idealistic and lead them to strap on bombs and walk into Pizzerias in Jerusalem, or to hijack symbolic planes and fly them into symbolic buildings on a symbolic date — 318 years but one day from the lifting of the siege of Vienna by Jan Sobieski [i.e. 9-11-01 was the 318th anniversary of the last high-water mark of Islamist expansionism].
Just as, the prestige of “Science” can be similarly abused.
Which is precisely what happened with Social Darwinism, as the ghosts of over 100 million victims remind us.
The point is, that we need to foster virtue and we need to teach people how to spot and expose or break free of indoctrination and irresponsible sowing of the seeds of hate and violence. ANY movement of consequence can be abused to be a vehicle of brainwashing and violence against scapegoats.
That is part of what I have been longing to hear for these ten years since 9-11, as a serious and consistent discussion among our media elites.
The conspicuous silence and failure to adequately warn of a patent mortal danger [manipulative indoctrination leading to violence in the name of doing good: let us do evil that good may come . . . ], even as they endlessly beat their talking-point drums for their favourite agendas and poisonously spin the stories and commentary on those they do not like, tells me that something has gone very, very wrong with our civilisation’s key power centres.
We need to wake up, now.
GEM of TKI
PS: My first two steps for those who want to be immunised: here [spin], and here [straight thinking]. To go up to the next level, cf here on comparative difficulties thinking and worldviews analysis, and here (in context) on building a sound theistic worldview in the Judaeo-Christian tradition.
*PPS: MOST revolutions move from one extreme to another, they just change who is the dominant oppressive elite. That is why we need to take time to carefully study the cluster of revolutions and transformations that gave rise to modern liberty and democracy, without putting on anti-Christian blinkers that blind us to the significance of Judaeo-Christian thought and ethics in the process; cf here on in context for starters. And, I emphasise the Judaeo part for good reason, as ever so many are all too apt to forget the deeply Hebraic context and the sterling contributions by Jews to the process. The world owes a debt of gratitude to Judaism that can never be repaid. And, to Christianity too. (That is part of why I take so dim a view of those who seem to be forever drumming on about the real or imagined sins of especially Christendom and Israel, without a due and responsible balance and fair-minded acknowledgement. It is that lack of due balance that is ever so clear as a diagnostic sign. [Cf my discussion here.] I link for record, not to invite an off-topic debate. The fine tuning issue is so important that we should not allow this thread to be diverted.)
The funny, in a sad way, part about the fine tuning argument is that the materialist doesn’t have any explanation for the laws that govern this universe. All they can say about those laws is “They just are (the way they are)” – Hawking in “A Briefer History of Time” – and that is just sad…
Of course I dispute that.
What we have is a philosophical argument that I find unpersuasive.
I see that as meaningless.
That our laws of physics were intelligently designed, I have no doubt. The intelligent designers were human physicists. They fine-tuned the laws that they designed to fit our world. This has no metaphysical implications.
Yes, they are avoiding the old question: why is the universe the way it is, and not some other way?
They prefer to believe in the existence of infinite alternative universes rather than accept that there is a cause for the only known universe.
KF, a little more from Hoyle in that article for Engineering and Science you may find interesting.
Thank you for your post. Actually, Dr. Barnes does address the question you raise, on page 22 (section 4.2). I’ll quote the relevant passage:
So there you have it. To avoid fine-tuning, you have to suppose an underlying probability distribution that is itself fine-tuned.
You also asked about range. Here’s an excerpt from page 20 of Dr. Barnes’ article:
Here’s another quote from page 33 (section 4.5):
And on page 36:
On page 37:
So the limits of these ranges are set for a good physical reason. They’re not arbitrary.
On page 40, however, the graph has no upper limit, and the author writes:
It doesn’t matter anyway, because the life-permitting area is still finite and small. Ditto for page 44, where Dr. Barnes argues that unless 0.78 x 10^-17 < v (i.e. the Higgs mass parameter) < 3.3 x 10^-17, hydrogen will be unstable. Once again, that’s a very small range.
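To put a rough number on “very small”: taking the quoted bounds at face value, the width of the window is a two-line calculation. Treating [0, 1] as the comparison range is my own illustrative assumption; as this exchange makes clear, the choice of comparison range is precisely what is contested:

```python
# Bounds on the Higgs parameter v quoted from p. 44 of Barnes's paper:
v_lo, v_hi = 0.78e-17, 3.3e-17

# Width of the life-permitting window:
width = v_hi - v_lo   # about 2.5e-17

# Fraction of an assumed comparison range [0, 1] (v is dimensionless in
# the paper; treating 1 as the natural upper bound is itself an
# assumption of the kind debated in this thread):
fraction = width / 1.0
print(f"window width:      {width:.2e}")
print(f"fraction of [0,1]: {fraction:.2e}")
```

Under that (contestable) choice of range, the life-permitting window occupies roughly a few parts in 10^17 of the possibilities.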
I hope that helps.
From what you say, and what you have previously said, you seem to be adopting an instrumentalist position. And while I myself oscillate between instrumentalism and scientific realism, I think there is a problem.
Most people are realists, especially theists and ID proponents. So I wonder if it is possible to have a meaningful discussion when you adhere to such fundamentally different interpretations of the scientific data.
BTW fine-tuning is not a fallacy, it is a well tested observation.
Thank you for your post. You made an excellent point about Dr. Wernher von Braun, the father of the Saturn V rocket that flew the Apollo XI astronauts to the moon, and also about John Leslie’s fly on the wall analogy (which is very relevant to fine-tuning) and Hoyle’s famous remark about a super-intellect monkeying with the laws of physics. Hoyle’s remarks on abiogenesis are also worth recording:
Atheists love to talk about Hoyle’s fallacy (see here for a humorous take on the biased Wikipedia entry), but actually Hoyle was more or less right – a fact I came to realize when I read Dr. Stephen Meyer’s Signature in the Cell. Rabbi Averick has a good article on Hoyle’s argument here, if you’re interested.
Once again, many thanks.
(Just say) Anything but design, right?
Interesting indeed, and welcome to UD as a commenter.
My longstanding suggestion to those who brush Hoyle aside — especially the Wikipedians (thanks VJT) — has been that one should be cautious in claiming to know more than a Nobel Equivalent Prize holder whose field of expertise requires mastery of thermodynamics, speaking on a thermodynamics-linked matter. Such a man may be wrong, but not in a simplistic or ill-informed way.
My only thought is that within this century, we will most likely be able to do it, i.e. really synthesise life in vitro.
Happy new year.
Thanks, both for kind words and rather interesting links.
I am betwixt and between to say which of the two linked is better. I say, read both!
GEM of TKI
You can’t just say something is unpersuasive, you have to give your reasons. The only reason you have right now is an argument from ignorance and a mere wish that chance can fine-tune. Those aren’t good reasons.
There isn’t a point in space/time that’s favorable for the spontaneous formation of life as we know it. It’s been proposed that it happened in both the hottest and the coldest places.
If someone believes that it happened anyway then I don’t see what difference the appearance of fine tuning will make. Life might just as well have formed in the middle of the sun. It also could just as well have formed out of any other elements at any temperature anywhere in the universe. I don’t actually believe any of that. But it’s absurd to believe that life formed spontaneously and then get particular about the temperature and the elements.
What’s odd is that one person tries to boost the outlandishly preposterous odds of such a thing happening by reasoning that life as we know it isn’t a target, but one of limitless possible configurations. Meanwhile the guys with the instruments search the galaxy and get all excited when they find something that looks closer to supporting life exactly as we know it. If the first person is right, then isn’t the second person essentially looking for the same lightning strike to repeat itself?
Give me precise premises and a precise deductive argument, and I will either tell you which premise I disagree with, or where the logical flaw can be found.
But when you only make a vague philosophical plea, then I need say no more than that I don’t agree.
I’m a realist too. But when I look at the history of scientific laws, my realism about that history tells me that those laws are human constructs.
If there’s an old building that people want to renovate, they first try to document what is there. For that purpose, they build a scaffolding. I see scientific laws as a scaffolding that scientists build, to allow them to get closer to reality and document it better. That the scaffolding is finely tuned for what the designers (the scientists) wanted it to do is no surprise and has no metaphysical implications.
Please read: http://en.wikipedia.org/wiki/Instrumentalism
I am familiar with instrumentalism. If it is seen as computation rules, which is how Popper saw it, then that is different from my position.
I don’t see a difference between your position and instrumentalism but I don’t really want to argue about that. But I think your interpretation of scientific theories is the reason why you have survived here so long.
In contrast, your opponents don’t think that scientific theories like the theory of evolution are just scaffolding. They believe that if the theory of evolution were a “good” description of reality, it would be real (true). But since you don’t, there are no real philosophical consequences for you either way.
“For that purpose, they build a scaffolding. I see scientific laws as a scaffolding that scientists build, to allow them to get closer to reality and document it better. That the scaffolding is finely tuned for what the designers (the scientists) wanted it to do is no surprise and has no metaphysical implications.”
These scientific laws you’re referring to are not constructed by the scientist; they are discovered. For example, scientists did not construct the “law of gravity”: it already existed. Yes, they’ve discovered its properties, so to speak, but by no means have they dictated what gravity “Ought” to do.
Maybe I’m not understanding what you’re trying to convey, but as far as I understand, the scientist has no say in what the “parameters” of the universe ought to do. These scientific laws exist independently of science.
I already spelled out my deductive argument with this
You’re not reasoning properly. What you’re reasoning is:
1. like causes like effects
2. the effect of fine-tuning is caused by intelligence
3. Therefore, I think there is not enough evidence to choose between chance or intelligence.
But let me do it even more formally:
1. if something is fine-tuned, then it came into existence
2. if something comes into existence, then it does so either due to chance or not chance
3. if something is not chance, then it is intentional
4. there is no known instance of chance fine-tuning any complex thing
5. therefore, it is more plausible that that which is fine-tuned is due to intention, rather than chance
6. the universe is fine-tuned
7. therefore, it is more plausible that it is due to intention
Now let’s apply this same logic to the pyramids. We have no evidence that the pyramids were built by humans, only a priori reasoning.
1. if something is fine-tuned, then it came into existence
2. if something comes into existence, then it does so either due to chance or not chance
3. if something is not chance, then it is intentional
4. there is no known instance of chance fine-tuning any complex thing
5. therefore, it is more plausible that that which is fine-tuned is due to intention, rather than chance
6. the pyramids are fine-tuned
7. therefore, it is more plausible that it is due to intention
I don’t agree that 9/11 was religious. It was about identity: Islamic identity, or pride.
These people were not religious; I heard some were in strip joints and bars the night before.
They were simply Islamic nationalists, perhaps with an important Arab element as well.
Later you call this civilization Judaeo-Christian.
Yet in fact it is just Christian. Really, Protestant Christian, even Puritan Christian.
The Judaeo thing was recently added to allow Jews a stake in the society.
It’s a sham of a concept, as they are just immigrants, no different from Muslims.
It’s all about identity.
Unrelated to religion.
The 9/11 murderers were a rare extreme type of Muslim nationalism.
They are Muslim first, and only then (if at all) identify with their “nation”.
Pardon, I think I need to note this for the record.
Enough has been highlighted to objectively show that the 9/11 hijackers were acting in the name of Islam as they understood it. And, in recounting the “strong horse” theory in that captured tape bin Laden spoke about how numbers of inquirers and converts to Islam had gone up; as just one indicator.
As to the being in strip joints etc, this was a part of the deceptive cover that is a part of Jihad.
The final instructions by Atta were to men setting out to meet God, through undertaking a so-called martyrdom operation through jihad by bands under one they probably saw as a candidate to be Caliph, perhaps even Mahdi, the ultimate Caliph and main heroic end of days, world subjugating figure of Islam; which (e.g. through the Muslim Brotherhood and backers) has a 100 year world subjugation plan now 30 years along. The very location of UBL’s base was steeped in eschatological significance: from the direction of Khorasan.
It is no accident that both Al Qaeda and the Taliban play to the Black Flag eschatological invincible Army theme in their propaganda. That is the army muslims are supposed to join even if they must crawl across ice and snow for Mahdi will be in their number. It is to conquer the ME, and Mahdi will from that base subjugate the world.
In short, we are up against a religiously motivated global conquest ideology. And if 100 million IslamISTS cause so much trouble, imagine if this “weak” hadith gains the support of evident success? Do you not see that the companion Gharqad tree hadith — on the slaughter of the Jews — is embedded in Hamas’ charter? Do you not know that HAMAS is simply the Palestinian branch of the Muslim Brotherhood? That there are entire universities in Arabia for this and related movements?
That Ahmadinejad is not a mad man, he is acting in his understanding that the Mahdi, the 12th Imam in Shia thought, is shortly to emerge from seclusion so Iran, the east of which is a key part of Khorasan, seeks to be his vanguard, armed with the appropriate weapons?
What is happening is that — a decade after 9/11 — further taqiyya, multiplied by a warped sense of multiculturalism, has led us to fail to properly assess the evidence that is fairly easy to hand, if we are willing to look it up. Why not start by working through Surah 9 of the Quran, especially ayas/verses 5 and 29 – 35, understanding that this abrogates earlier, more irenic materials?
What is the historic understanding, and how did it, say, shape the era from 630 – 732 or so?
What does that open the door to?
Then fast forward to 1683 and ask yourself why Ottoman-Islamic armies under the Caliph were besieging Vienna. Then appreciate what happened on Sept 12, 1683, and why Sept 11, 2001 takes on highly symbolic significance. Add who invented skyscrapers and aeroplanes, and consider what crashing the one into the other at the symbolic and literal nerve centre of Western Capitalism would mean: triggering a physical collapse (UBL had been a demolition expert for his father’s firm) and a financial meltdown, multiplied by a decapitation strike and an attempt to spread an anthrax epidemic.
As I pointed out previously, the very date and means of the attacks were highly significant, and we should note that Muslims — not just Islamists — think like this. We have to understand how they think, not how we think.
I think you also need to appreciate that across the ME, the “nation-states” we see are by and large artificial. Personal identity is locked into family, tribe and the Umma.
As for our civilisation, I am speaking about that synthesis of the heritage of Jerusalem, Athens and Rome first led by Paul of Tarsus, apostle to the gentiles, which so decisively shaped our history for 2,000 years. The very name Christian reflects this: Christos is the Greek word for Messiah, i.e. anointed of God in light of the Hebraic prophetic tradition. So, we have a synthesis, led in the moral-spiritual domain by the Hebraic, enscripturated tradition as filtered through the Christian faith.
Indeed the USA is strongly shaped by Presbyterian and Puritan roots, but the civilisation as a whole is much broader than that. For instance, in the Caribbean, Anglican, Roman Catholic and Baptist influences have been very important. In Europe, we have to add Lutheran, Calvinist-Reformed, and Orthodox streams.
But in every case, Judaism is at the root, exactly as Paul highlighted in Rom 9 – 11.
(And that is before we identify modern Jewish contributions, which have been disproportionate to numbers, especially in sciences, medicine and technology as well as business and commerce.)
GEM of TKI
F/N: The paper is here, in all its glory. KF
F/N: Let me add a bit more from Sir Fred’s Caltech talk, enfolding some of UD’s clip:
Notice, how he focuses on the info origin problem.
And, in parts not clipped he explores the idea of life in interstellar space.
Thanks again UD.
That’s absurd. Even brilliant people can be wrong in “simplistic or ill-informed” ways. We need to give them a fair hearing, but they don’t get an automatic pass based on their brilliance.
Hoyle’s “tornado in a junkyard” fallacy shows that he was extremely ill-informed about the theory of evolution.
This is the commonly held view. Discussing that, and the problems with that view, takes us well outside the topic of this thread.
Newton’s law of gravity and Einstein’s general relativity contradict one another. They cannot both have been discovered. So at least one of those must be a human construct.
Newtonian physics turned out to be an approximation, but it’s still constrained by reality. The inhabitants of the planet Zorg are not in the process of discovering that gravity varies with the inverse cube of the distance. The fact that gravity varies approximately as the inverse square of the distance, under suitably restricted conditions, is a discovery, not an invention.
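The "discovery, not invention" point about the inverse-square law can be illustrated numerically. This is a minimal sketch of my own (the masses and distance are rough Earth-Moon values, used only for illustration): under Newton's law, doubling the separation always quarters the force, whatever the masses are.

```python
# Inverse-square scaling: F = G * m1 * m2 / r^2, so doubling r
# divides F by exactly 4, independent of the masses chosen.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    """Magnitude of the Newtonian gravitational force between two masses."""
    return G * m1 * m2 / r**2

# Rough Earth-Moon figures, for illustration only.
f1 = newton_force(5.97e24, 7.35e22, 3.84e8)
f2 = newton_force(5.97e24, 7.35e22, 2 * 3.84e8)
print(f2 / f1)  # 0.25
```

That fixed ratio is the kind of constraint imposed by reality: the hypothetical inhabitants of planet Zorg would measure the same 1/4, not 1/8.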
In what way? I know Einstein corrected some of what Newton said but that is different from contradicting it.
That’s what I want to know. I believe Neil is confusing “Laws” with “Theories”.
Could you kindly explain your expertise in thermodynamics and astrophysics to correct what Sir Fred was actually speaking about?
Otherwise, this is a case of the live donkey and the dead lion.
Hoyle’s primary focus on the relevant topics and context was the origin of life and particularly the complex info in life in light of some very interesting astrophysical phenomena (not issues on macroevolution); and I suggest that you take time to read the article on his Caltech talk before further trying to knock over a strawman.
I repeat, someone like Hoyle — a Nobel Equivalent Prize Holder (who actually invented the term “big bang”), speaking on a matter that is tied to his expertise may be wrong, but he is not going to be wrong in a silly or ill-informed way. Even his errors will be highly instructive. (I have in mind here, for instance, his magnetic interaction based model for the distribution of angular momentum and mass in the solar system. He may have been wrong on various subjects, e.g. in the end he abandoned the Steady State theory, and I think his attempted revival in the 90s too, but he is going to be a unique, original and profound thinker and scientist all the way through. One utterly unafraid to think for himself and speak his own mind.)
And, let me speak a bit more.
If we see a 747, there is excellent reason, on its FSCO/I, to infer to design, not to a tornado hitting a junkyard in Seattle. Indeed, if we see a D’Arsonval movement based instrument from its cockpit panel, we are still so far into the deeply isolated island of function territory that we have every right to infer to design as the best explanation.
Now, living systems have in them not only a functionally specific complexity that dwarfs both of these — just start with the ATP synthase, the kinesin, the ribosome, the chloroplast etc etc, but they are based on cells that implement molecular scale, code based von Neumann Self Replicators, in addition to the metabolic type functions linked to the above cluster of molecular nanomachines.
Such self-replication is an ADDITIONAL reason to infer to design on FSCO/I, independent of whether or not such living systems thereafter evolved by strictly Darwinian mechanisms. (Which, I believe, Hoyle more or less accepted.)
Blend in the underlying fine-tuned cosmological physics that sets up H, He, O, C and N as the first five elements, and gives them the properties that make C-chemistry, aqueous-medium, cell-based life possible in our observed cosmos.
That gives us excellent reason to infer to design of the cosmos and its physics, and thence of life and in the end us.
I suggest you take a moment to read here on and here on.
H’mm, let me clip Averick, just to help you think about the issues a bit more broadly:
GEM of TKI
According to Newton, there’s force of gravitational attraction acting on the earth, and that’s why it moves in a curved path.
According to Einstein, the earth is in free fall with no external force acting on it, taking the path it does because of the curvature of space-time. Gravity has the effect of producing that curvature, not of producing a force.
That sure looks like a clear contradiction.
A free-fall, like with Newton’s cannonball? 🙂
And if mass produces a curvature doesn’t that curvature induce a force on all that enters it? Meaning the curvature does produce a force, and a force is a force of course of course.
Ask NASCAR drivers if a curvature produces a force. 🙂
I believe you’re describing Newton’s and Einstein’s theories, based on the “Gravitational Law”. The fact is, “Gravity” and the “Law(s)” that conform to it (whatever they may be) would still exist independently of any theory used to explain how it (gravity) works. The way I see it (and maybe I’m wrong here), the “Law of Gravity” is not what’s constructed (scaffolding, as you suggested) but is something that is observed; the theories of how gravity might work are what’s constructed. Most physicists would say that there is no contradiction between the theories of Einstein and Newton, only an improvement on the one previously held by Newton.
How is that relevant? Hoyle’s gaffe was due to his poor knowledge of evolutionary theory and biology, not thermodynamics or astrophysics.
Biology, evolutionary theory and abiogenesis were way outside his field of expertise. After all, this is the man who thought that the Archaeopteryx fossil was a forgery and who suggested that human nostrils evolved pointing downward in order to prevent cosmic pathogens, drifting down from space, from falling into them. A perfect illustration of why the argument from authority is a fallacy, particularly when the authority in question is operating way outside his sphere of competence. Brilliant people deserve a fair hearing, but not a free pass, especially when they have a history of spouting nonsense.
Motion in a non-geodesic path involves force. That’s what’s happening with the racecar drivers. According to Einstein, a planet such as earth is moving in a nearly geodesic path. It’s not quite geodesic because there are some forces such as that due to the solar wind.
There’s some ambiguity here about “law”. Many people take “law” to mean a natural language statement. And, as far as I can tell, the universe came without a natural language and without any natural language statement, so without any natural law.
Some people, some of the time, take “natural law” to be a reference to how the universe behaves. That seems to be how you want to use it. Well, fair enough. But then talking of something as being “law-like” doesn’t seem to make sense, because that is a reference to the likeness of natural language expressions and the constraints that they logically imply.
My preference is to avoid talking of “laws of nature” where possible, but to use “scientific laws” or “laws of physics” for what humans have constructed as part of how they describe and cope with the way the universe behaves.
We don’t really observe anything that deserves to be called “the Law of Gravity”. We observe examples of how the universe behaves. And then we attempt to come up with a principled account of that behavior. It seems to me that the usage of “law of gravity” requires that it refer to the principle, rather than the observed behavior. But we never do observe the principle itself. Maybe there’s an intelligent designer who used such a principle, and we try to approximate it. Or maybe we are just seeing statistical patterns, and there is no actual operating principle. I don’t think there’s a way of settling that. So I see it as better to talk only about what is actually observed.
Really? Have you ever walked on a NASCAR race track? The curves are banked, just as happens with the fabric of spacetime.
Einstein refined Newton and he did so because of advances in technology as well as the advantage of having Newton’s work handy.
Neil Rickert at 3.1 writes, “I see that as meaningless.
That our laws of physics were intelligently designed, I have no doubt. The intelligent designers were human physicists. They fine-tuned the laws that they designed to fit our world. This has no metaphysical implications.”
What? Human physicists designed the laws of nature? Newton discovered gravity; he didn’t create it from nothing. Einstein theorized about space-time, but he didn’t invent it. Laws of nature exist whether we know of them and describe them scientifically or not.
“There’s some ambiguity here about “law”. Many people take “law” to mean a natural language statement. And, as far as I can tell, the universe came without a natural language and without any natural language statement, so without any natural law.”
I’m trying to suggest that nature is governed by simple “laws” which have been discovered through observation, in other words, through our senses in which they can then be expressed through our understanding of mathematics. From what I understand, in its simplest form, “laws” describe a pattern found in nature, which can only be discovered through observation.
“We don’t really observe anything that deserves to be call “the Law of Gravity”.
I disagree. Go and jump off a bridge repeatedly and tell me how many times you float on your descent. Do not the “Laws” describe, and the “Theories” explain?
I do understand what you’re saying Neil, I just disagree…
Right. And I am quite clearly disagreeing with that.
I’m pretty close to being a naive realist. And I suspect you think you are arguing for that kind of naive realism. But I am also a realist about science, and what you state does not accurately describe how science is done.
Let me put it this way: If nature is governed by simple laws, then we have no ability at all to find what those laws are. The best we can do is come up with our own ways of describing and predicting nature.
I don’t disagree with you at all over the implications of jumping off a bridge. As I said, I’m pretty close to being a naive realist. But neither of us has any access to the governing of nature, nor any evidence that there is governing of nature.
When are you going to answer my thread? Or do you concede?
C: Hoyle made no gaffe, and was speaking of something eminently within the ambit of thermodynamics-linked issues, OOL; which evo mat objectors at UD never tire of telling us is not a part of the theory of evolution. And BTW, there is a link between thermodynamics and information, which is also relevant. In addition, his specific discussion was on matters in astrophysics, as the article you need to read will further inform. So, kindly stop setting up and kicking over ad hominem-laced strawmen — second thread I have had to deal with that in this morning from you. KF
PS: And since when does a discussion that focuses in significant part on how the errors of someone may be instructive morph into giving a free pass? (I shudder to think of how this and other exchanges are being misrepresented elsewhere, where we are not there to correct. Which comes right back to the issue of live donkeys and safely dead lions.)
Onlookers: Notice, how the discussion from C et al, predictably, is distractively tangential to the substantial matter on the table, and how it almost invariably pivots on the rhetoric of denigration and dismissal, here of a Nobel equivalent prize-holder speaking on astrophysical, thermodynamics linked matters, and raising questions that need to be answered? Like, what is going on with the physics of O, C, and H and stars, that sets up life in our cosmos? What does that tell us? KF
Let’s see about that. From Hoyle’s 1981 essay, to which you link below:
Gaffe #1: Hoyle assumes that for a given enzyme, there is only one possible amino acid sequence. This is wrong.
Gaffe #2: Hoyle seems to think that this specific set of 2000 enzymes is the minimum required for life. He presents no evidence for either of those assertions.
He also writes:
Gaffe #3: Hoyle thinks that if life originated on earth, then specific target enzymes must have spontaneously self-assembled. He’s right that this is ridiculously improbable, but nobody in the OOL community thinks that this is how life began. He’s tilting at windmills, arguing against a position that nobody holds.
Hoyle does give an indication that (by 1981, anyway) he was aware of some of the criticisms of his ill-informed assertions:
Gaffe #4: Hoyle is not even making an argument here. He is just waving his hands and saying “That’s impossible!”, without providing an explanation.
No, I most certainly am not reasoning as you say I am.
I specifically commented on “laws of physics” and not on “laws of nature.” As for my view on laws of nature, I have discussed that in other comments in this thread.
I have a moment.
Since when is pointing out that a 200-AA chain, with 20 possible AAs per position, has 20^200 possibilities the same as asserting or implying that just one arrangement will work? In fact, 20^200 ~ 1.6 * 10^260, which is indeed vastly beyond the number of atoms in the observed universe, c. 10^80.
So much more, in fact, that the number of Planck-time quantum states for these atoms since the big bang is less than 1 in 10^100 of that.
The number of possible states that can be searched is so nearly zero by comparison as makes no difference. And, Hoyle pointed out that we are into thousands of enzymes in life.
Starting from some prebiotic little pond, or a giant molecular cloud, you just could not get enough shuffling through states to get anywhere significantly different from a zero scope search, relative to what would be required to assemble enough molecules of life to get to a reasonable metabolic entity, by blind chance and necessity.
Just remember, it takes 10^30 Planck times for the fastest chem rxns, and organic ones are usually far slower.
The only empirically warranted source for complex, functionally specific organisation is design. And that outlined analysis of the supertask implied for blind forces is a part of why that is so. (BTW, the problem is closely analogous to the challenge of producing your post by random typing. That does not imply that only your post is a possible functionally specific, complex outcome — so is mine — but it does underscore what you are ever so eager to ignore: both of these posts are FSCI, and are cases E from a relatively narrow set of contextually responsive posts, T, in the set of possible strings of appropriate length, W. By far and away most would be gibberish, and the resources of the observed cosmos would be fruitlessly challenged to get to ANY member E1, E2, . . . of T by blind chance and mechanical necessity. Islands of function in vast seas of non-function, as is typical of such things. But, if one is committed to not seeing what is exemplified all around us . . . )
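The arithmetic sketched above is easy to check with Python's exact big integers. The round figures (10^80 atoms, roughly 10^60 Planck times since the big bang) are the comment's own order-of-magnitude estimates, not precise physics:

```python
# Exact big-integer check of the search-space comparison above.
sequence_space = 20 ** 200          # 200 positions, 20 amino acids per position
atoms = 10 ** 80                    # commonly cited atom count for the observed universe
planck_ticks = 10 ** 60             # rough Planck times since the big bang (assumption)
total_states = atoms * planck_ticks # ~10^140 atomic Planck-time states

# 20^200 has 261 digits, i.e. it is about 1.6 * 10^260.
print(len(str(sequence_space)))  # 261

# The searchable states fall short of the space by more than a factor
# of 10^100, matching the "less than 1 in 10^100" figure above.
print(sequence_space // total_states > 10 ** 100)  # True
```

Whatever one makes of the inference drawn from it, the ratio itself is straightforward arithmetic.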
Later, we can look at the further cases, but it is already plain that this is another strawman being kicked over.
GEM of TKI
Hoyle is wrong. The size of the total space, by itself, tells you nothing. You need to know what percentage of the space is functional. Hoyle did not have that information, so his assertion was groundless.
In any case, the point is moot, because nobody in OOL thinks that modern proteins sprang into existence fully formed.
I notice that you didn’t address gaffes #2, #3, and #4. Do you agree that those were mistakes on Hoyle’s part? If not, why not?
He was also a card-carrying member of the Nazi party. I can’t help thinking that if he had ever expressed approval of Darwin’s theory, he would be viewed very differently by the ID community, and it would be considered evidence of a link between Nazism and evolutionary theory.
But of course he is a design thinker, Christian and creationist, so that’s OK 😉
You mean someone actually saw the universe being finely tuned?
Hmm, don’t look a gift force in the mouth 😉
No, I mean there are many tested observations that say the universe is finely tuned.
Walter Bradley wrote about it.
Then come up with another mechanism and test it.
What is a ‘tested observation’ as opposed to just an observation?
A tested observation is an observation that has been tested, i.e. repeated and investigated. An observation is just something you have seen.