Recently, we reported on the question of whether the missing dark matter of the universe has finally been found, and we cautioned,
Keep the file open but remember: We did find the Higgs boson (and Peter Higgs got the Nobel). But gravitational waves and dark energy are questionable despite the Nobels awarded.
Our physics commentator Rob Sheldon offers some perspective:
The New Scientist was kind enough to link to the arXiv server, so I could read the actual paper, because the journalese was not making the slightest bit of sense.
The two main conundrums of dark matter are:
a) It allows galaxies to spin faster than they otherwise would, yet not enough gas is found in the galaxies to hold them together;
b) The cosmological models of Big Bang nucleosynthesis (BBN) don’t allow the dark matter to be made of baryons (hydrogen), because that would increase the density of the early universe and change the He/H and Li/H ratios. So the dark matter had to be non-baryonic.
This discovery is baryonic matter (ionized hydrogen) found between the galaxies. It was not predicted at all. It is a failure for cosmology and still not a solution for spinning galaxies. Yet the news blurb says,
“This goes a long way toward showing that many of our ideas of how galaxies form and how structures form over the history of the universe are pretty much correct,” he says.
(Get paper towel. Remove coffee from screen.)
Reading the arXiv paper abstract, they are comparing their data to a hydrodynamic (HD) model of cosmology (that’s a magnetic-free version of MHD, magnetohydrodynamics) that produces the observed structure of the Universe: fractal streamers. So their “success” is agreement with these HD models. Since there are literally hundreds of models, this is not a very strenuous test. And the fact that the models are magnetic-free already tells me that it is wrong, because this hydrogen “gas” is actually fully stripped plasma (which is also why it can’t be seen with normal telescopes; it’s completely transparent), and plasma requires MHD modelling. But HD models are what this data is supposedly matching.
So how do they see something that is transparent?
They look for the very slight cooling of a background light source as it collides with these atoms (but doesn’t get absorbed). They also expect the plasma to be of higher density between galaxies, because the HD models predict that. So they took pictures of 260,000 galaxy pairs, rotated and rescaled the pictures, and superposed (“stacked”) them. After subtracting out the “halo” from the galaxies themselves, they found a remnant streak between the pairs and claimed victory. Of course, all it would take is a very dim galaxy in the background to ruin the average, so one hopes that a human looked at all 260,000 images, but the probability is that it was all AI and computer algorithms.
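The stacking procedure described above can be sketched in a few lines of NumPy. Everything here is invented toy data (array sizes, noise level, and the faint “bridge” amplitude are illustrative assumptions), not the survey’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_pairs(images):
    """Average a set of already rotated and rescaled pair images."""
    return np.mean(images, axis=0)

# Toy data: each "image" is pure noise plus a faint bridge between the pair.
n_pairs, size = 2000, 64
bridge = np.zeros((size, size))
bridge[size // 2, 16:48] = 0.1           # hypothetical faint filament signal
images = rng.normal(0, 1, (n_pairs, size, size)) + bridge

stacked = stack_pairs(images)
# Per-image, the bridge is buried in noise; in the stack, noise averages
# down as 1/sqrt(n_pairs), so the faint streak emerges.
```

The point of the sketch is also the weakness the commentator flags: the stack only recovers the mean, so a single anomalously bright contaminant (a dim background galaxy in one frame) biases the average unless it is caught and masked.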
Now step back with me and consider what has been accomplished. A computer algorithm adds 260,000 images, and the result is compared to another computer algorithm that simulates an expanding universe. The two algorithms just barely agree (fall within the error bars), and those error bars are characteristic of the fluctuations due to the numerical sensitivity of the algorithms to noise in the initial conditions (assuming Gaussian fluctuations, of course).
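The kind of “agreement” being criticized here is, in practice, a goodness-of-fit statistic: the stacked signal is said to match the simulation if the chi-squared per degree of freedom is of order one, given assumed Gaussian error bars. A minimal sketch with made-up numbers (the data, model, and sigma values are purely illustrative):

```python
import numpy as np

# Hypothetical stacked filament profile (data), simulation prediction (model),
# and assumed Gaussian error bars (sigma); all numbers are invented.
data  = np.array([0.90, 1.10, 1.05, 0.95])
model = np.array([1.00, 1.00, 1.00, 1.00])
sigma = np.array([0.10, 0.10, 0.10, 0.10])

chi2 = np.sum(((data - model) / sigma) ** 2)
dof = len(data)
reduced = chi2 / dof

# "Agreement" here just means reduced chi2 is of order 1; it says nothing
# about whether a different (e.g. MHD) model would fit equally well.
```

Note that such a test can only reject a model that fits badly; a reduced chi-squared near one does not single out this model over the hundreds of alternatives, which is the commentator's complaint.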
In what sense is this a “discovery”? Or conversely, at what point is this a falsification of existing paradigms? Can this method ever disprove a theory?
And that is how science moves from success to success without ever making a wrong move.
See also: Has the missing matter of our universe finally been found?