Why do computers mean we never have to be humble again? We asked our physics color commentator, Rob Sheldon, about the recent deflation of the “expanding blueberry muffin” picture of the universe. That is, as one astrophysicist puts it, “Just as cosmological measurements have become so precise that the value of the Hubble constant was expected to be known once and for all, it has been found instead that things don’t make sense.”

Sheldon offers a little background:

**— — —**

Hubble’s constant is a calculation of the expansion rate of the universe by:

a) measuring how far away something is (e.g., parsecs)

b) measuring how fast it is moving away from us (e.g., kilometers/second)

c) dividing speed by distance: (km/s)/pc ==> Hubble’s Constant
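The arithmetic of steps (a)–(c) can be sketched in a few lines of Python; the galaxy numbers below are invented for illustration (astronomers conventionally quote the result in km/s per megaparsec):

```python
# Toy illustration of steps (a)-(c): H0 = recession speed / distance.
# The speed and distance below are made-up numbers, not real measurements.

def hubble_constant(speed_km_s: float, distance_mpc: float) -> float:
    """Return H0 in the conventional units of km/s per Mpc."""
    return speed_km_s / distance_mpc

# A hypothetical galaxy receding at 7,000 km/s from 100 Mpc away:
h0 = hubble_constant(7000.0, 100.0)
print(f"H0 = {h0:.1f} km/s/Mpc")  # H0 = 70.0 km/s/Mpc
```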

Two methods have been used to calculate Hubble’s Constant and they disagree by 3+ standard deviations.

The first method for calculation is the Direct method:

d) Find a pulsating star whose period depends upon size, which gives us brightness, which gives us absolute distance;

e) Find the redshift of the host galaxy to compute its speed away from us.

f) Divide speed/distance ==> Hubble’s Constant

Actually, this is done for hundreds of stars and galaxies, and works only for nearby galaxies.

More distant galaxies are calibrated by their brightness in relation to similar-looking nearby galaxies. With thousands of galaxies to calibrate against, the result is a pretty good statistical average, whose error bars shrink every year.
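The statistical-averaging idea can be sketched as follows; the speed/distance pairs are invented stand-ins for real survey data, but they show how the standard error shrinks as more galaxies are added:

```python
import statistics

# Hypothetical (speed in km/s, distance in Mpc) pairs for several galaxies;
# the values are invented for illustration only.
galaxies = [(2100.0, 30.0), (3650.0, 50.0), (5040.0, 70.0), (6440.0, 92.0)]

h0_values = [speed / dist for speed, dist in galaxies]
mean_h0 = statistics.mean(h0_values)
# Standard error of the mean shrinks like 1/sqrt(N) as galaxies are added.
stderr = statistics.stdev(h0_values) / len(h0_values) ** 0.5
print(f"H0 = {mean_h0:.1f} +/- {stderr:.1f} km/s/Mpc")
```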

The other method uses a model.

g) Measure the temperature of the Cosmic Microwave Background, and calculate the expansion needed to cool it down from the roughly 13.6 eV photons the universe emitted when it was young. The present size had to be due to an expansion rate, so all we need is a time.

h) Use the model to predict when the Universe cooled down to 13.6 eV. This is subtracted from the total time since the Big Bang.

i) Divide the expansion speed, Size/(time since 13.6 eV), by the Size ==> Hubble’s Constant.

So why do the two methods disagree? One is (more or less) instantaneous. The other is based on a Big Bang model and integrated over 13.7 billion years. That means any mistake in the physics, any mistaken timing between now and 13.7 billion years ago, will show up in the second method.
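As a rough sanity check on the second method: to order of magnitude, the Hubble constant is just the inverse of the age of the universe, and the conversion to conventional units is a one-liner. This sketch deliberately ignores the deceleration/acceleration history, which is exactly where the model assumptions enter:

```python
# Rough sanity check: H0 ~ 1/(age of the universe), ignoring the
# deceleration/acceleration history that the Big Bang model supplies.

SECONDS_PER_YEAR = 3.156e7   # approximate
KM_PER_MPC = 3.086e19        # kilometers in one megaparsec

age_s = 13.7e9 * SECONDS_PER_YEAR   # 13.7 billion years in seconds
h0_per_s = 1.0 / age_s              # expansion rate in 1/s
h0 = h0_per_s * KM_PER_MPC          # convert to km/s/Mpc
print(f"H0 ~ {h0:.0f} km/s/Mpc")    # roughly 70 km/s/Mpc
```

That this crude estimate lands in the same ballpark as both measured values is why the few-percent disagreement between the two methods is the interesting part.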

To my mind, the answer for resolving this disparity is obvious: we need a better model.

But, as with global warming, natural selection acting on random mutations, non-zero-threshold, black holes, etc., in this era of overpaid computer jockeys, models have become sacrosanct.

Here’s a quote from a January 2018 paper posted to the physics arXiv:

Precision big bang nucleosynthesis with improved Helium-4 predictions — … the model has no free parameter and its predictions are rigid. Since this is the earliest cosmic process for which we a priori know all the physics involved, departure from its predictions could provide hints or constraints on new physics or astrophysics in the early universe.

You got that? It amounts to saying, “We a priori know all the physics involved.” Wow. What they’re talking about is a 1-dimensional, isotropic, homogeneous, no magnetic field, no helicity, 50-year-old numerical model with two arbitrary dials called “dark matter” and “dark energy”.

Do you suppose Newton said “we know all the physics” about gravity? Why do computers mean we never have to be humble again?

Rob Sheldon is the author of *Genesis: The Long Ascent*.

*See also:* Rob Sheldon’s thoughts on physicists’ “warped” view of time: An attempt to force complete symmetry on a universe that does not want to be completely symmetrical

“What they’re talking about is a 1-dimensional, isotropic, homogeneous, no magnetic field, no helicity, 50-year-old numerical model with two arbitrary dials called “dark matter” and “dark energy”.”

Big Bang Nucleosynthesis (BBN) doesn’t depend on the amount of dark matter and dark energy. It depends on the photon to baryon ratio in the universe, which is tied down by measurements of the cosmic microwave background. It also depends on the physics of matter at nuclear energy scales, which is very well constrained by experiment. The amount of anisotropy, inhomogeneity, and magnetic fields in the very early universe is also tightly constrained by the cosmic microwave background. That’s why they correctly say that standard BBN has no free parameters.

The Big Bang Model, of course, does. But stating that “models have become sacrosanct” is just silly.

* The paper that first pointed out the tension (by Nobel Prize winner Adam Riess and collaborators) has over 600 citations from cosmologists trying to reproduce the result, test its statistical significance, and propose alternative models to explain it. (http://adsabs.harvard.edu/abs/2016ApJ...826...56R)

* The tension has been recognised and discussed by cosmologists with the stature of Wendy Freedman (https://arxiv.org/abs/1706.02739), Joe Silk (http://adsabs.harvard.edu/abs/2018FoPh..tmp...69S) and Michael Turner (http://adsabs.harvard.edu/abs/2018FoPh..tmp...68T).

This is not a few woke fringe iconoclasts who are trying to fight the establishment. This is cosmology doing its thing. If you’ve got your own model, then publish it.

LoL Luke!

BBN *does* depend on dark matter and dark energy. I can cite you 20 papers on the topic. And this is instructive, Luke. Why do you say it doesn’t? Because the result of playing, say, with the number-of-neutrinos dial (sterile neutrinos, entropy of dark matter, etc.) was determined to be something like 2.97–3.4, and then it agreed with the 4He/H ratio and was declared “fixed”. Every single one of those “fixed” parameters was handled this way. It is only “fixed” because it gives 75% of the “right answer” (3He and 7Li being excepted, because nobody can make them fit).

This is called “curve fitting”, Luke; it is not theoretically “no free parameters”. Yet you and the whole community have confused curve fitting with theory. It is the difference between Ptolemy and Copernicus. It is the difference between “precision” and “accuracy”. It is a very, very serious mistake, and one that I have to constantly rag my students about. And this confusion has effectively sterilized the field of cosmology. We are onto our fourth set of epicycles with inflation, yet inflation is considered to be on a firm foundation. With Alain Coc’s latest BBN model we are onto our fourth epicycle of BBN and no closer to 7Li, perhaps further away as measured by sigma.

And yes, I am publishing my BBN model. I’m working on the neutron lifetime problem (essential to the BBN model, but experimental determinations are 4 sigma apart and not getting any closer, whereas theory is all over the map). Then I have to work on the Friedmann equation, which lacks magnetic fields, including topologically turbulent ones. After that, I have to add the triple-reactant reactions, such as triple-alpha and dineutron-alpha, which bypass both the deuterium bottleneck and the A=5 chasm.
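The “4 sigma apart” figure for the neutron lifetime can be illustrated with representative published central values for the two experimental approaches (trapped “bottle” neutrons vs. “beam” counting); the exact numbers vary by analysis, so treat these as approximate:

```python
# Illustrative comparison of the two neutron-lifetime measurement methods.
# Central values and errors are representative, not definitive.

bottle_tau, bottle_err = 879.4, 0.6   # seconds, bottle (trapped-neutron) method
beam_tau, beam_err = 888.0, 2.0       # seconds, beam method

diff = beam_tau - bottle_tau
combined_err = (bottle_err**2 + beam_err**2) ** 0.5  # errors add in quadrature
print(f"discrepancy: {diff:.1f} s, about {diff / combined_err:.1f} sigma")
```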

It’s slow going, because people like you keep saying that BBN has no free parameters. But if you are interested in sponsoring the paper on the physics arXiv, I would be very grateful; otherwise it will be stuffed in the SPIE proceedings.

“BBN *does* depend on dark matter and dark energy. I can cite you 20 papers on the topic.”

Go on.

There is evidence of three neutrino species from particle physics. This is an independent measurement of the parameter, so it is not just fit to cosmological observations. (Also, 3He is just fine: https://arxiv.org/pdf/1307.6955.pdf. Lithium remains a problem, as every single paper on BBN points out.)

“I am publishing my BBN model” – cracking. Send me a copy. (You know about AlterBBN, I assume. It’s for a particular model but it can be adapted.)

Luke,

Go to arXiv.org. In the search bar type “BBN dark matter”. I get 199 papers, with 5 in the past 6 months alone. Here’s a sentence from the abstract of the latest one:

“We have considered some modified exotic cosmologies during which the Universe could have been dominated by some form of energy other than radiation, down to a reheating temperature which is bounded by BBN considerations.”

Here’s a sentence from the 2nd one:

“Finally, we point out that a ~0.4% millicharged DM component which is tightly coupled to the baryons at recombination may resolve the current 2-sigma tension between the BBN and CMB determinations of the baryon energy density.”

Hmmm, you didn’t seem to think that there was any uncertainty in the baryon energy density either.

Have I “Gone on” enough for you, or would you like more sentences from the other 197 papers?

As to whether theory predicted 3 families of neutrinos or not, need I remind you that theories were getting non-integer numbers up to 4.0 just a few years ago, which experiment reduced to 3 (or 2.9 with corrections)? This is by no means a “no parameter” decision. And the best measurements of 3He come from Jupiter’s atmosphere, where it is some 2X larger than Alain Coc’s 2018 BBN model predicts. Coc simply says that 3He is created and destroyed in stars, so we don’t have any firm observational limits. Hmmm, would he have said that if Jupiter had agreed with his model?

And finally, the overall scheme of the BBN model is written up in a preprint at: http://www.rbsp.info/rbs/PDF/HUBE.pdf

It would have been published in a book, but the biology editor thought my language was too religious. The paper is mostly about water and life, and the BBN model was a lengthy footnote based on the Arbey code that you referenced. Thank you for the reference; up until then I had only the Kawano and Parthenope codes. I simply put a kludge into Arbey’s code to change all the weak rates by a scalar dependent on the external magnetic field. Not too surprisingly, it was nearly the same as the neutrino degeneracy dial. But more surprisingly, it enabled me to fit the 7Li abundance.

The forthcoming physics paper will involve explicit calculations of the weak rates as a function of the external field, as well as changes to the pressure due to magnetic fields and additional triple-reactant cross sections. I was pleased to see Coc’s PRIMAT code has 3-alpha in it, but he doesn’t have 2-alpha,neutron or 2-neutron,alpha which show up in neutron star physics.

All that to say, the BBN model is full of parameters, most of which are very poorly known. Just for example, if comets are the dark matter of the galaxy, then the H, C, N, O bound up in comets completely changes the 0.2477 primordial He/H ratio used in everyone’s favorite BBN model, and not just in the 4th decimal but in the first. Likewise, the change to the weak rates changes the n/p ratio at 1 MeV in the first decimal place.

My offer is still open. I would love to have you sponsor the paper on the arXiv.

will just put this here for readers…

https://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy

... .--. .- -.-. . --..-- / - .... . / ..-. .. -. .- .-.. / ..-. .-. --- -. - .. . .-

Indeed, maybe

“normal” is not so normal, but abnormal, as in “.-.. .. ..-. .” the universe and everything 😉