Why do computers mean we never have to be humble again? We asked our physics color commentator, Rob Sheldon, about the recent deflation of the “expanding blueberry muffin” picture of the universe. That is, as one astrophysicist puts it, “Just as cosmological measurements have become so precise that the value of the Hubble constant was expected to be known once and for all, it has been found instead that things don’t make sense.”
Sheldon offers a little background:
— — —
Hubble’s constant is a calculation of the expansion rate of the universe:
a) Measure how far away something is (e.g., in megaparsecs);
b) Measure how fast it is moving away from us (e.g., in kilometers/second);
c) Divide speed by distance: (km/s)/Mpc ==> Hubble’s Constant.
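The arithmetic in steps (a)–(c) can be sketched in a few lines of Python. The galaxy numbers below are made up for illustration, not real measurements:

```python
# Hubble's constant: recession speed divided by distance.

def hubble_constant(speed_km_s, distance_mpc):
    """Return the expansion rate in km/s per megaparsec."""
    return speed_km_s / distance_mpc

# A hypothetical galaxy receding at 7,000 km/s from 100 Mpc away:
h0 = hubble_constant(7000.0, 100.0)
print(h0)  # 70.0 km/s/Mpc, close to the commonly quoted value
```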
Two methods have been used to calculate Hubble’s Constant and they disagree by 3+ standard deviations.
The first method for calculation is the Direct method:
d) Find a pulsating star (a Cepheid variable) whose pulsation period tells us its intrinsic brightness; comparing that with its apparent brightness gives its absolute distance;
e) Find the redshift of the host galaxy to compute its speed away from us;
f) Divide speed by distance ==> Hubble’s Constant.
In practice, this is done for hundreds of stars and galaxies, but it works only for nearby galaxies.
More distant galaxies are calibrated by comparing their brightness with that of similar-looking nearby galaxies. With thousands of galaxies in the calibration, the result is a pretty good statistical average, whose error bars shrink every year.
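Step (d) can be sketched concretely. A Cepheid’s pulsation period gives its absolute magnitude M through a period–luminosity relation; comparing M with the apparent magnitude m gives the distance via the distance modulus m − M = 5·log10(d/10 pc). The period–luminosity coefficients below are approximate illustrative values, not the calibration any particular survey uses:

```python
import math

def absolute_magnitude(period_days):
    # Approximate V-band period-luminosity relation for classical Cepheids
    # (coefficients are illustrative).
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_pc(apparent_mag, absolute_mag):
    # Invert the distance modulus: m - M = 5 * log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

M = absolute_magnitude(10.0)   # a 10-day Cepheid: M is about -4
d = distance_pc(20.0, M)       # seen at m = 20, it sits hundreds of kpc away
print(M, d)
```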
The other method uses a model.
g) Measure the temperature of the Cosmic Microwave Background, and calculate the expansion needed to cool it down from the 13.6 eV ultraviolet light it was when the young universe emitted it. The present size had to come about through some expansion rate, so all we need is a time.
h) Use the model to predict when the universe cooled down below 13.6 eV. This is subtracted from the total time since the Big Bang.
i) Divide the expansion rate (size/time) by the size ==> Hubble’s Constant.
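If the expansion rate were constant, step (i) would reduce to the reciprocal of the age of the universe: (size/time)/size leaves 1/time. The back-of-envelope sketch below does just that unit conversion; it deliberately ignores the model-dependent corrections that the real CMB calculation integrates over cosmic history:

```python
# Rough Hubble constant from the age of the universe, assuming a
# constant expansion rate (which the real model does not assume).

SECONDS_PER_YEAR = 3.156e7      # approximate
KM_PER_MEGAPARSEC = 3.086e19    # approximate

def h0_from_age(age_gyr):
    age_s = age_gyr * 1e9 * SECONDS_PER_YEAR
    return KM_PER_MEGAPARSEC / age_s   # (1/s) converted to km/s/Mpc

print(h0_from_age(13.7))  # about 71 km/s/Mpc
```

That this crude estimate lands near the measured value is a coincidence of our cosmic epoch, not a validation of the model.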
So why do the two methods disagree? One is instantaneous (sort of). The other is based on a Big Bang model and integrated over 13.7 billion years. That means that any mistake in the physics, any mistaken time anywhere between now and 13.7 billion years ago, will show up in the second method.
In my mind, the answer for resolving this disparity is obvious—we need a better model.
But, as with global warming, natural selection acting on random mutations, non-zero-threshold, black holes, etc., in this era of overpaid computer jockeys, models have become sacrosanct.
Here’s a quote from a January 2018 paper posted to the physics arXiv:
Precision big bang nucleosynthesis with improved Helium-4 predictions — … the model has no free parameter and its predictions are rigid. Since this is the earliest cosmic process for which we a priori know all the physics involved, departure from its predictions could provide hints or constraints on new physics or astrophysics in the early universe.
You got that? It amounts to saying, “We a priori know all the physics involved.” Wow. What they’re talking about is a 1-dimensional, isotropic, homogeneous, no magnetic field, no helicity, 50-year-old numerical model with two arbitrary dials called “dark matter” and “dark energy”.
Do you suppose Newton said “we know all the physics” about gravity? Why do computers mean we never have to be humble again?
See also: Rob Sheldon’s thoughts on physicists’ “warped” view of time: An attempt to force complete symmetry on a universe that does not want to be completely symmetrical