Uncommon Descent Serving The Intelligent Design Community

Rob Sheldon: Are multiple measurements “closing in” on dark energy? Nope.


Recently, we were informed that multiple measurements are indeed getting somewhere:

An extensive analysis of four different phenomena within the universe points the way to understanding the nature of dark energy, a collaboration between more than 100 scientists reveals.

Dark energy – the force that propels the acceleration of the expanding universe – is a mysterious thing. Its nature, writes telescope scientist Timothy Abbott from the Cerro Tololo Inter-American Observatory, in Chile, and colleagues, “is unknown, and understanding its properties and origin is one of the principal challenges in modern physics”.

Indeed, there is a lot at stake. Current measurements indicate that dark energy can be smoothly incorporated into the theory of general relativity as a cosmological constant; but, the researchers note, those measurements are far from precise and incorporate a wide range of potential variations.

Andrew Masterson, “Multiple measurements close in on dark energy” at Cosmos Magazine

So what difference will combining multiple measurements make? We asked our physics color commentator Rob Sheldon, of course:

Well, “closing in” is one of those meaningless phrases, because it means “shrinking the error bars in the model”, which says nothing about the model being correct. All it does is reveal whether or not the model was as good as it was touted in yesterday’s breathless press release. (It isn’t.) Which is to say, if the results had been at all positive, the presser would have likely said “Confirmation!!!!” or something like that. Seeing as the new results are neutral or faintly negative, “closing in” is the best they can manage.

But I don’t want to put words in their mouths, so let me quote the offending passages:

“Dark energy – the force that propels the acceleration of the expanding universe – is a mysterious thing. Its nature, writes telescope scientist Timothy Abbott from the Cerro Tololo Inter-American Observatory, in Chile, and colleagues, “is unknown, and understanding its properties and origin is one of the principal challenges in modern physics”.”

So we don’t know what it is, how it works, its properties or where it comes from. But we’re “closing in” on it all the same. How?

By measuring its effects on other stuff. Namely, by looking at two things:

“First, it deforms galactic architectures through accelerating the expansion of the universe. Second, it suppresses growth in some parts of the cosmic structure.”

Now I want to draw your attention to the verbs: “deforms” and “suppresses”. Both of these are changes – the first is a change in shape, and the second is a change in a change. So to measure a change in these things, you need an earlier measurement to compare against the current one.

Now comes the kicker. Astronomy has been measuring these things for, oh, maybe 50 years, maybe 100 if we can get hold of some old glass plates from the turn of the century. But the universe changes visibly on timescales of 100 million to a billion years, so we’re talking changes of order 100/10^8 = 10^-6, i.e. 0.0001%, in these variables. Is it likely that comparing present photographs with ones taken in 1919 can measure a 1-part-in-10^6 variation? If a star has a width of 1 mm on the old plate, a part in 10^6 means a nanometre of precision in the new photographs. Is that possible?
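The back-of-envelope arithmetic above can be checked in a few lines. The baseline (100 years) and the cosmic timescale (~10^8 years) are the figures from the text; the 1 mm star width is the illustrative number used there.

```python
# Back-of-envelope check of the fractional-change argument.
# Assumptions: a 100-year observational baseline and cosmic structure
# evolving on a ~10^8-year timescale, as in the text.
baseline_years = 100
cosmic_timescale_years = 1e8

fractional_change = baseline_years / cosmic_timescale_years
print(f"fractional change: {fractional_change:.0e}")  # 1e-06, i.e. 0.0001%

# Precision needed to detect that change on a star image 1 mm wide on a plate
star_width_mm = 1.0
required_precision_nm = star_width_mm * fractional_change * 1e6  # mm -> nm
print(f"required plate precision: {required_precision_nm:.0f} nanometre(s)")
```

A part in a million of a millimetre is a nanometre, which is why direct photographic comparison is a non-starter.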

The answer is No. Nobody’s talking about using old photographs to measure changes. What we’re talking about is models. The variables in the models can be changed in the 6th decimal place, and then we can compare the output changes to our assumed input. Remember the Planck team had 17 variables in their model that they displayed to the 6th decimal place? That’s what this article is talking about–changes in a model.

But it says nothing about whether it is the right model or not. Here’s how the scientists describe the problem themselves.

“However, it is not the only force that can produce such results, and the danger thus always exists that what is assumed to be evidence of dark matter activity may in fact be something else altogether.”

Exactly. The model may be very precise, but inaccurate because it is the wrong model. This is what all my previous posts have been saying. Precision is not accuracy.
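The precision-versus-accuracy distinction can be made concrete with a toy simulation. The numbers here are illustrative only, not from the article: an estimator built on a wrong model converges very tightly on the wrong answer.

```python
# Toy illustration of "precision is not accuracy": many measurements
# filtered through a systematically wrong model give a tiny statistical
# error bar around a biased value. All numbers are made up.
import random
import statistics

random.seed(0)
true_value = 1.00   # the quantity nature actually has
model_bias = 0.10   # systematic offset from using the wrong model

# 10,000 noisy measurements, each carrying the model's bias
samples = [true_value + model_bias + random.gauss(0, 0.05)
           for _ in range(10_000)]

mean = statistics.fmean(samples)
stderr = statistics.stdev(samples) / len(samples) ** 0.5

print(f"estimate: {mean:.3f} +/- {stderr:.4f}")     # very precise...
print(f"offset from truth: {mean - true_value:.3f}")  # ...but ~0.10 wrong
```

The standard error shrinks as 1/√N, so adding data makes the result look ever more confident while the systematic offset never budges.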

So how are we to proceed?

Well, one method is to try to validate the model. This is attempted by using multiple, independent lines of evidence to show that the model works on things it was not designed to fit. The key word here is independent. In statistics, this independence or randomness leads to a confidence measure called the “p-value”, and it absolutely requires “no peeking” at the result beforehand. Bending the rules gives a false sense of confidence, and the various ways of cheating go by names such as “cherry picking”, HARKing, and p-hacking.
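Why “no peeking” matters can be shown with one line of arithmetic. If you quietly try many independent tests and report only the best, a nominally significant result becomes nearly guaranteed by luck alone (the counts below are illustrative, not from the article):

```python
# Sketch of the multiple-comparisons problem behind p-hacking:
# probability that at least one of n null (no-effect) tests comes out
# "significant" at the nominal threshold purely by chance.
alpha = 0.05    # nominal significance threshold
n_tests = 20    # number of independent tests quietly tried

p_any_hit = 1 - (1 - alpha) ** n_tests
print(f"chance of at least one 'significant' fluke: {p_any_hit:.0%}")
# ~64% -- so a reported p < 0.05, after peeking, says little about the model
```

This is exactly why the independence of the lines of evidence has to be protected rather than engineered away.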

And what do they find?

The independent measures give non-overlapping answers. In other words, the model is not valid. See, for example, the figure in this article about dark energy.

And here is where the article gets quite fascinating. Because this is where the scientists do exactly the wrong thing. Since they didn’t get a confirmation, they threw all the data into one pot, destroyed its independence by making a single data set out of it, and forced the model to depend on ALL of these lines of evidence.

“The researchers approach this challenge by invoking a combination of multiple observational probes for low-redshift phenomena – namely, those measuring Type Ia supernova light curves, fluctuations in the density of visible (or “baryonic”) matter, weak gravitational lensing, and galaxy clustering….Presenting the first tranche of results from the survey, Abbott and colleagues reveal progress towards constraining the nature of dark energy.”

Why turn independent data into dependent model fits?

Ostensibly to improve the error bars on the model; but more importantly, since they can’t validate the model, at least they can improve the model fits. This despite the fact that they said they don’t know very much about dark energy at all, and it might not be modeled correctly, yada yada. Nevertheless, at the end of the day, you don’t draw a paycheck for saying that everyone’s livelihood was a waste of time; rather, you are paid to add another decimal place to the precision of that waste of time.
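The mechanics of that error-bar shrinkage are simple to sketch. Here two hypothetical measurements of the same parameter (values invented for illustration) are combined by standard inverse-variance weighting: the joint error bar comes out smaller than either input, even though the inputs disagree with each other.

```python
# Sketch of the combination step: inverse-variance weighting of two
# *inconsistent* measurements of one parameter. Numbers are made up.
measurements = [(0.70, 0.02), (0.76, 0.02)]  # (value, 1-sigma error)

weights = [1 / sigma ** 2 for _, sigma in measurements]
combined = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
combined_sigma = (1 / sum(weights)) ** 0.5

print(f"combined: {combined:.3f} +/- {combined_sigma:.3f}")
# 0.730 +/- 0.014: the joint error bar shrank below either input's,
# yet the two inputs disagree by ~2 sigma. Pooling them tightened the
# fit without resolving -- or even recording -- the tension.
```

Shrinking the error bar this way is a statement about the fit, not about whether the underlying model is the right one.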

But wait, am I not being unfair? Didn’t they just say that they measure the naked effects, not the modeled cause?

“Further planned DES surveys, they conclude, will likely sharpen up knowledge of the impact of dark energy in the universe by orders of magnitude.”

Okay, I’m being a tiny bit unfair. The “impacts” they are discussing are indeed observations, but observations passed through a model. When the model is eventually updated (and the Planck results are already demonstrating the need for a new one), in principle the observations will still be there and can be passed through the new model. At least, we sincerely hope they will archive the original observations and not just these useless model fits; NASA has been known to throw out old data before.

Until then, we will just have to endure these “closing in” pronouncements of imaginary progress on a model known to be invalid.

PS. Didn’t we just read over at Forbes that Ethan Siegel says there’s no other alternative but dark matter and dark energy?

“This is the problem with every alternative. Every alternative to the expanding Universe, to the Big Bang, to dark matter, dark energy, or inflation, all fail to even account for whatever’s been already observed, much less the rest of it. That’s why practically every working scientist considers these proposed alternatives to be mere sandboxing, rather than a serious challenge to the mainstream consensus.”

Let’s see, where else do we find the argument “no other alternative” and “mainstream consensus”: Darwin, global warming, origin-of-life, dying coral of the Great Barrier Reef, unilateral disarmament, no-zero-threshold toxins, mandatory vaccination, single-payer-healthcare. And if we go back in history, we’ve got miasma (bad air) for disease, phlogiston (heat content of wood), ether (light waves in a vacuum), spontaneous generation (for origin of life), but I think your readers are getting my point. If there is ever a contest for the oxymoron of our age, the winner isn’t “military intelligence” but “scientific consensus.”

And he’s incorrect about “no alternatives” too. If he wants to read my paper on a magnetic alternative, it’s in review at “Proceedings of the Blythe Institute” with preprints on my website.


The Long Ascent: Genesis 1–11 in Science & Myth, Volume 1, by Robert Sheldon

Rob Sheldon is the author of Genesis: The Long Ascent

See also: At Forbes: Cosmology’s crisis is merely “manufactured misunderstandings”
An astrophysicist dismisses the concerns as “ideologically-driven diatribes.”


Rob Sheldon: The real reason there is a crisis in cosmology Nearly everything that has failed about the Big Bang model has been added because of bad metaphysics, a refusal to accept the consequences of a beginning. The remaining pieces of the Big Bang model that are failing and which can’t be attributed to bad metaphysics, were added from sheer laziness.

More on dark energy: Researchers: Either dark energy or string theory is wrong. Or both are. But dark energy is so glitzy! Isn’t it a line of cosmetics already?

Researchers: The symmetrons needed to explain dark energy were not found

Rob Sheldon: Has dark energy finally been found? In pop science mags?

Are recent dark energy findings a blow for multiverse theory?


Science at sunset: Dark energy might make a multiverse hospitable to life… if it exists

In the same way that, when whatshisname said they'd discovered 10,000 ways to not build a light bulb, he was "closing in" on how to build a light bulb. So, only about 9,747 dark energy negative results to go ... ? ScuzzaMan
