# Sabine Hossenfelder: No way to tell if the universe was fine-tuned for us — Rob Sheldon partially agrees

First, theoretical physicist Sabine Hossenfelder:

The relevant part of the argument goes like this: It’s extremely unlikely that these constants would happen to have just exactly the values that allow for our existence. Therefore, the universe as we observe it requires an explanation. And then that explanation may be god or the multiverse or whatever is your pet idea. Particle physicists use the same type of argument when they ask for a next larger particle collider. In that case, they claim it requires explanation why the mass of the Higgs boson happens to be what it is. This is called an argument from “naturalness”. I explained this in an earlier video.

What’s wrong with the argument? What’s wrong is the claim that the values of the constants of nature that we observe are unlikely. There is no way to ever quantify this probability because we will never measure a constant of nature that has a value other than the one it does have. If you want to quantify a probability you have to collect a sample of data. You could do that, for example, if you were throwing dice. Throw them often enough, and you get an empirically supported probability distribution.

But we do not have an empirically supported probability distribution for the constants of nature. And why is that? It’s because… they are constant. Saying that the only value we have ever observed is “unlikely” is a scientifically meaningless statement. We have no data, and will never have data, which allow us to quantify the probability of something we cannot observe. There’s nothing quantifiably unlikely, therefore, there’s nothing in need of explanation.

Sabine Hossenfelder, “Was the universe made for us?” at BackRe(Action) (January 16, 2021)
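Hossenfelder’s dice analogy can be made concrete with a minimal sketch (the sample sizes are purely illustrative): repeated independent observations yield an empirically supported distribution, while a quantity we only ever observe at one value yields a single point, and hence no probability distribution at all.

```python
import random
from collections import Counter

random.seed(0)  # reproducible illustration

# Repeated, independent observations: an empirical distribution emerges.
throws = [random.randint(1, 6) for _ in range(60_000)]
freqs = {face: n / len(throws) for face, n in sorted(Counter(throws).items())}
print(freqs)  # every face settles near 1/6

# A constant of nature: every "measurement" returns the same value.
ALPHA = 0.0072973525693  # fine-structure constant (CODATA value)
samples = [ALPHA for _ in range(60_000)]
# The empirical "distribution" is a single point. It tells us nothing about
# how likely that value was: only that it is the value we always observe.
print(len(set(samples)))  # 1
```

This is exactly the asymmetry her argument turns on: the dice frequencies converge because the experiment can be repeated with varying outcomes, while the constant’s “sample” never varies.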

Now experimental physicist Rob Sheldon:

I’ve weighed in on the debate occasionally, usually siding with Sabine. We don’t have a probability distribution for these constants, and probably never will. When we invoke one, such as [commenter] Blais’ attempt to ask about the probability of a probability function, we are playing God. So bite the bullet and do it—play God.

“Why do we have such a loving God who cares about us? In the space of all probable Gods, what is the probability that we get this one?”

Now that I have phrased it that way, you can see the problem with the cosmological fine tuning argument. It assumes we have some ability to judge God. For some people that’s no problem, for others that is deeply disturbing. That’s why I side with Sabine.

But it doesn’t have to be this way.

If, and Blais makes this same point, the cosmological constants actually ARE explainable by physical processes, then they aren’t fine-tuned at all. A magnetic field in the early Big Bang can explain a half-dozen of the fine tuning constants. Then we can get on with the true business of science and consider the laws that arranged it this way, and move away from hypotheticals about fine tuning.

Does this remove the cosmological fine tuning argument from supporting ID?

Absolutely not. In his book Modern Physics and Ancient Faith, Stephen Barr argues that the symmetry behind the laws of physics needs explaining too. We never get away from explaining; the deeper we go, the more profound it gets. As in the story from Hindu metaphysics in which the earth is supported on the backs of four elephants, which in turn stand on a great turtle: when asked “what is the turtle standing on?”, the reply was “It’s turtles all the way down.” In the same sense we physicists can say, “It’s design all the way down.”

So we reach the same conclusions, but without the shortcut about “probability distributions of cosmological constants.” Even though I agree with Sabine about the fine tuning argument, I disagree strongly with her about the significance of the design we see in the world. “It just is” is not an explanation.

Rob Sheldon is the author of Genesis: The Long Ascent and The Long Ascent, Volume II.

## 8 Replies to “Sabine Hossenfelder: No way to tell if the universe was fine-tuned for us — Rob Sheldon partially agrees”

1. jerry says:

There is an infinite number of possibilities for the constants. What are the consequences of the one that happened? It’s not that we are one of the consequences; it’s that an ecology resulted that supports us (a physical one, not a biological one).

But other things also happened that are equally unlikely. Life is another. The Earth is another and so is complex life. A web of unlikely events. All this has to be explained.

This does not preclude that this ecology also supports a zillion other yet to be discovered unlikely existences. But the few we know about defy any probabilistic explanation.

2. bornagain77 says:

Sabine Hossenfelder stated: “we do not have an empirically supported probability distribution for the constants of nature.”

First and foremost, I find it very interesting, and extremely ironic, that Sabine Hossenfelder would now claim that, until we can perform experiments to determine the probability distribution for the constants and/or laws of nature, we can say nothing meaningful about that distribution.

The reason why I find this extremely ironic is that Sabine Hossenfelder herself has, at the drop of a hat, ignored empirical evidence when it has contradicted her a priori belief in atheistic materialism.

Specifically, consider the closing of the freedom-of-choice loophole by Anton Zeilinger and company:

Cosmic Bell Test Using Random Measurement Settings from High-Redshift Quasars – Anton Zeilinger – 14 June 2018
Abstract: This experiment pushes back to at least approx. 7.8 Gyr ago the most recent time by which any local-realist influences could have exploited the “freedom-of-choice” loophole to engineer the observed Bell violation, excluding any such mechanism from 96% of the space-time volume of the past light cone of our experiment, extending from the big bang to today.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.080403

Hossenfelder did not like the experimental result that Anton Zeilinger and company got and opted for believing in ‘superdeterminism’ instead:

Rethinking Superdeterminism – Sabine Hossenfelder and Tim Palmer – 2020
https://www.frontiersin.org/articles/10.3389/fphy.2020.00139/full

To call superdeterminism “preposterous” is to make an understatement.

Superdeterminism entails believing that “a physicist running the experiment does not have complete free will in choosing each detector’s setting,” and that “a particle detector’s settings may ‘conspire’ with events in the shared causal past of the detectors themselves to determine which properties of the particle to measure.”

Closing the ‘free will’ loophole: Using distant quasars to test Bell’s theorem – February 20, 2014
Excerpt: Though two major loopholes have since been closed, a third remains; physicists refer to it as “setting independence,” or more provocatively, “free will.” This loophole proposes that a particle detector’s settings may “conspire” with events in the shared causal past of the detectors themselves to determine which properties of the particle to measure — a scenario that, however far-fetched, implies that a physicist running the experiment does not have complete free will in choosing each detector’s setting. Such a scenario would result in biased measurements, suggesting that two particles are correlated more than they actually are, and giving more weight to quantum mechanics than classical physics.
“It sounds creepy, but people realized that’s a logical possibility that hasn’t been closed yet,” says MIT’s David Kaiser, the Germeshausen Professor of the History of Science and senior lecturer in the Department of Physics. “Before we make the leap to say the equations of quantum theory tell us the world is inescapably crazy and bizarre, have we closed every conceivable logical loophole, even if they may not seem plausible in the world we know today?”
http://www.sciencedaily.com/re.....112515.htm

In short, instead of believing the experimental results of Zeilinger and company, and believing that physicists have the freedom necessary to choose whatever measurement settings they may want, and therefore the freedom necessary to experimentally probe whatever aspect of reality they may be interested in probing, Hossenfelder has opted to believe that, basically, the laws of physics 13.7 billion years ago somehow ‘superdetermined’ all of our actions today, including whatever measurement settings we might choose in an experiment.

In short, in Hossenfelder’s atheistic worldview, the laws of physics are what are actually performing the experiments into quantum mechanics, not physicists.

Again, to call Hossenfelder’s belief in superdeterminism preposterous is to make an understatement.

It is a belief that entails that Einstein “could not have been responsible for the theory of relativity – it would have been a product of lower level processes but not of an intelligent mind choosing between possible options.”

Physicist George Ellis on the importance of philosophy and free will – July 27, 2014
Excerpt: And free will?:
Horgan: Einstein, in the following quote, seemed to doubt free will: “If the moon, in the act of completing its eternal way around the Earth, were gifted with self-consciousness, it would feel thoroughly convinced that it was traveling its way of its own accord…. So would a Being, endowed with higher insight and more perfect intelligence, watching man and his doings, smile about man’s illusion that he was acting according to his own free will.” Do you believe in free will?
Ellis: Yes. Einstein is perpetuating the belief that all causation is bottom up. This simply is not the case, as I can demonstrate with many examples from sociology, neuroscience, physiology, epigenetics, engineering, and physics. Furthermore if Einstein did not have free will in some meaningful sense, then he could not have been responsible for the theory of relativity – it would have been a product of lower level processes but not of an intelligent mind choosing between possible options.
I find it very hard to believe this to be the case – indeed it does not seem to make any sense. Physicists should pay attention to Aristotle’s four forms of causation – if they have the free will to decide what they are doing. If they don’t, then why waste time talking to them? They are then not responsible for what they say.
http://www.uncommondescent.com.....free-will/

In other words, instead of believing what the experimental results of quantum mechanics are actually telling us (i.e. that free will is a real and tangible part of reality), the Determinist and/or Atheistic Naturalist (i.e. Hossenfelder in this case) is now forced to claim, via ‘superdeterminism’, that the results of the experiments were somehow ‘superdetermined’ at least 7.8 billion years ago (basically all the way back to the creation of the universe itself), and that the experimental results are merely ‘fooling us’ into believing that our experimental results in quantum theory are trustworthy and correct, and that we do indeed have free will.

To call such a move on the part of Atheistic Naturalists, (i.e. the rejection of experimental results that conflict with their apriori philosophical belief in ‘determinism’), unscientific would be a severe understatement. It is a rejection of the entire scientific method itself.

Atheistic Naturalists, in their appeal to ‘superdeterminism’, are basically arguing that we cannot trust what the experimental results of quantum mechanics themselves are now telling us because events in the remote past ‘conspired’ to give us erroneous experimental results today. Erroneous experimental results that are merely ‘fooling us’ into believing that we have free will.

As should be needless to say, if we cannot trust what our experimental results are actually telling us, then science is, for all practical purposes, dead.

Atheistic Naturalists, in their rejection of experimental results that directly conflict with their a priori belief in determinism and/or materialism, have become ‘science deniers’ in the truest sense of the term.

Thus Hossenfelder, in her current article, may wax poetic about the importance of empirical testing all she wants, but when the rubber meets the road, she has already shown herself more than capable of ignoring empirical evidence when it conflicts with her a priori belief in Atheism.

Of supplemental note to Zeilinger’s closing of the freedom of choice loophole:

Steven Weinberg stated, “In the instrumentalist approach (in quantum mechanics) humans are brought into the laws of nature at the most fundamental level… the instrumentalist approach turns its back on a vision that became possible after Darwin, of a world governed by impersonal physical laws that control human behavior along with everything else… In quantum mechanics these probabilities do not exist until people choose what to measure… Unlike the case of classical physics, a choice must be made…”

The Trouble with Quantum Mechanics – Steven Weinberg – January 19, 2017
Excerpt: The instrumentalist approach… (the) wave function… is merely an instrument that provides predictions of the probabilities of various outcomes when measurements are made…
In the instrumentalist approach… humans are brought into the laws of nature at the most fundamental level. According to Eugene Wigner, a pioneer of quantum mechanics, “it was not possible to formulate the laws of quantum mechanics in a fully consistent way without reference to the consciousness.”
Thus the instrumentalist approach turns its back on a vision that became possible after Darwin, of a world governed by impersonal physical laws that control human behavior along with everything else. It is not that we object to thinking about humans. Rather, we want to understand the relation of humans to nature, not just assuming the character of this relation by incorporating it in what we suppose are nature’s fundamental laws, but rather by deduction from laws that make no explicit reference to humans. We may in the end have to give up this goal…
Some physicists who adopt an instrumentalist approach argue that the probabilities we infer from the wave function are objective probabilities, independent of whether humans are making a measurement. I don’t find this tenable. In quantum mechanics these probabilities do not exist until people choose what to measure, such as the spin in one or another direction. Unlike the case of classical physics, a choice must be made…
http://quantum.phys.unm.edu/46.....inberg.pdf

Also of supplemental note:

The Copernican Principle and/or the Principle of Mediocrity has now been overturned by both General Relativity and Quantum Mechanics, our two most powerful theories in science (as well as by other powerful lines of scientific evidence; i.e. humanity is not nearly as inconsequential in this universe as atheists have erroneously presupposed).
August 2021

3. Fasteddious says:

While it may not be possible to come up with a meaningful probability distribution for the constants in nature, I suggest it may be possible to determine something of the degree of optimization for many of those constants.
Suppose that we know enough about physics to say how it would be affected if one of those constants were changed by some small amount. Say the speed of light increases by 1%: how does that affect physics, and how would that affect all the things needed to allow intelligent life to exist: stars, elements, radioactivity, galaxies, planets, chemistry, and so on? Does science know enough to determine that? Then do the same for a 1% decrease in the speed of light. Then try +/-2%, 5%, 10%, etc., until the analysis indicates life to be unlikely, on both the plus and the minus sides.
If the “fail” changes, as best we can tell, are roughly symmetrical around the actual value, then we can perhaps say that the speed of light is somehow optimum for life in the universe. If the spread is highly asymmetric, however, that may suggest that the speed of light is not optimized. Of course, other considerations will come into play, making this an iffy conclusion.
If we were to do this exercise for all the constants for which we can intelligently say how the world would change for +/- small changes, then we can collect several iffy conclusions. If every one of those seems to indicate that the actual values are indeed optimum, then we have a good argument for saying that they must have been optimized by a mind for the purpose of maximizing the suitability of the universe for life. On the other hand, if all the distributions look randomly asymmetric then we might think we have a liveable universe but not the optimally liveable one. That in turn might suggest something other than optimal design.
Now I suspect that “we” (i.e. science in total) do not know enough to credibly assess these limits to liveability for most of the constants. However, it would make for some good analyses and debate among scientists as they argue about effects and suggest limits. Moreover, if two or more constants change at the same time, it would doubtless affect the liveability limits of both (or all), making for more complexity and even more iffiness. Indeed, since we doubtless do not know enough about a lot of things to say for sure what effects changing multiple constants would have on the universe, this analysis may end up as speculative hand-waving, but it would be interesting nonetheless.
And yes, I am aware that some of the initial conditions and perhaps physical constants are very finely tuned, but even one part in 10^120 can be tweaked by 0.5 parts in 10^120. If you “know” enough to claim tuning to one part in 10^120 is needed, then you can still posit what would happen before hitting that limit; e.g. the universe grows for 100 billion years and then collapses rather than keeping on expanding, yet still allows for life.
In any case, I think some sort of clever analysis of this nature could start to address the question of optimality vs. suboptimality for the fine tuning we all like to talk about. It might even be a career for one or more eager PhD graduates?
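The sweep Fasteddious describes can be sketched in code. Everything below is a toy stand-in: the `liveable` predicate is hypothetical and is not real physics, and the speed-of-light value is used only as a label. The sketch just shows the shape of the analysis: step a constant up and down until life “fails”, then compare the two margins for symmetry.

```python
# Toy sketch of the symmetry test described above. The liveable() predicate
# is a made-up stand-in, NOT real physics: it pretends intelligent life is
# possible only while the constant stays inside an (asymmetric) window.
C_ACTUAL = 299_792_458.0  # speed of light in m/s, used only as a label

def liveable(value: float) -> bool:
    """Hypothetical: life 'works' between -4.5% and +8.5% of the actual value."""
    return 0.955 * C_ACTUAL <= value <= 1.085 * C_ACTUAL

def failure_margins(constant, liveable, step_pct=1, max_steps=100):
    """Step the constant up and down in increments of step_pct percent
    until liveable() first fails; return the (+, -) margins in percent."""
    plus = minus = None
    for k in range(1, max_steps + 1):
        frac = k * step_pct / 100
        if plus is None and not liveable(constant * (1 + frac)):
            plus = k * step_pct
        if minus is None and not liveable(constant * (1 - frac)):
            minus = k * step_pct
        if plus is not None and minus is not None:
            break
    return plus, minus

plus, minus = failure_margins(C_ACTUAL, liveable)
print(f"fails at +{plus}% and -{minus}%")
symmetric = abs(plus - minus) <= 2  # crude symmetry threshold, in percent
print("roughly symmetric: suggests an optimum" if symmetric
      else "asymmetric: liveable, but perhaps not optimized")
```

Run against this deliberately lopsided toy window, the sweep reports an asymmetric result, i.e. the “liveable but not obviously optimized” case from the comment above; the real difficulty, of course, is that the `liveable` function would have to encode everything physics knows about stars, chemistry, and cosmology.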

4.

Fasteddious, we can only do that hypothetical experiment because we have a model. But a model is not reality. If our model, for example, has no magnetic field in the Big Bang, it is wrong. Then all assumptions about fine-tuning of expansion coefficients are wrong, as are all conclusions we reach. Not being God, we can never be certain our model has all the physics incorporated. We are only as certain as our model, and as a model builder, let me assure you that these are the most simplified models that computers could run in 1967. I am more certain the models are wrong than I am that fine-tuning exists. Therefore making theological conclusions from bad models is just bad theology.
Is it possible to avoid this problem? Yes, if we discuss only model-independent properties. Existence, uniqueness, goodness, and design are all model-independent. But all these things violate MN (methodological naturalism), whereas fine-tuning seems to be a valid MN point, so we very much want to make an MN argument without arguing “metaphysics”. I understand the urge, but I’m afraid, in this case, it can’t be done.
In biology, where we have several solutions to, say, finding things to eat, one can compare various metabolism strategies and discuss the “optimum”. We just can’t do that for cosmological constants, no matter how sophisticated our physics model.
Unless, of course, you think physics models are perfect. And many people do. Just not me and Sabine.

5. EDTA says:

Fasteddious and Mr. Sheldon,

This seems to be the sort of argument that Luke Barnes addresses in this paper:
“A Reasonable Little Question: A Formulation of the Fine-Tuning Argument,” by Luke A. Barnes, in Ergo: An Open Access Journal of Philosophy.

Do you see the flaw stated in this post, or other flaws, in Barnes’s paper?

6. jerry says:

Is this it?

A Reasonable Little Question: A Formulation of the Fine-Tuning Argument

A new formulation of the Fine-Tuning Argument (FTA) for the existence of God is offered, which avoids a number of commonly raised objections. I argue that we can and should focus on the fundamental constants and initial conditions of the universe, and show how physics itself provides the probabilities that are needed by the argument. I explain how this formulation avoids a number of common objections, specifically the possibility of deeper physical laws, the multiverse, normalisability, whether God would fine-tune at all, whether the universe is too fine-tuned, and whether the likelihood of God creating a life-permitting universe is inscrutable.

Is the physical world all that exists? Are the ultimate laws of the physical universe the end of all explanations, or is there something about our universe that is noteworthy or rare or clever or unexpected?

https://theisticscience.com/papers/tree/VariableCouplings/Barnes-Ergo2020-reasonable-little-question-a-formulation-of-the-fine-tuning.pdf

Extremely long. But here is the conclusion:

5. Conclusion
What physical universe would we expect to exist, if naturalism were true? To systematically and tractably explore other ways that the universe could have been, we vary the free parameters of the standard models of particle physics and cosmology. This exercise could have discovered that our universe is typical and unexceptional. It did not. This search for other ways that the universe could have been has overwhelmingly found lifelessness.
In short, the answer to the Little Question is no. And so, plausibly and as best we can tell, the answer to the Big Question is no. The fine-tuning of the universe for life shows that, according to the best physical theories we have, naturalism overwhelmingly expects a dead universe.

Again, the argument is very strong evidence on one side, but not absolute, and thus the other side objects that it is not absolute.

7. Fasteddious says:

Thank you Robert, EDTA and Jerry. I will certainly take a look at the article. But taking Robert’s caveats into consideration, it appears we do not really know enough about the physics to properly begin the analysis I had in mind. Perhaps my depth of physics was inadequate to appreciate the uncertainties and doubts regarding the current theories and models? On the other hand, perhaps using the current models to begin such an analysis might reveal some hints about how the models go wrong or are incomplete? After all, if people can get paid to hypothesize and analyse a variety of multiverse concepts, why not get paid to explore the limits and constraints of the present models of the real universe from a fine-tuning perspective?

8. EDTA says:

Yes, that’s the article. In it he explains how he comes up with reasonable probability distributions over fundamental constants, particularly ones that have dimensions in physical units. Seems like he answers this objection, probabilistically, of course, as he admits.