Yeah, we almost fell off our chairs too.
Astrophysicist Thomas Kitching offers some ideas and a rationale at RealClearScience:
A study that surveyed all the published cosmological literature between the years 1996 and 2008 showed that the statistics of the results were too good to be true. In fact, the statistical spread of the results was not consistent with what would be expected mathematically, which means cosmologists were in agreement with each other – but to a worrying degree. This meant that either results were being tuned somehow to reflect the status quo, or there may be some selection effect where only those papers that agreed with the status quo were being accepted by journals.
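To make the "too good to be true" idea concrete: if published measurements really scatter according to their quoted error bars, their chi-squared per degree of freedom about the weighted mean should come out near 1; a value far below 1 means the papers agree more tightly than their own errors allow. The following is a minimal illustrative sketch, not the survey's actual method; the function name and the sample numbers are hypothetical.

```python
def reduced_chi_squared(values, errors):
    """Chi-squared per degree of freedom of measurements about their
    weighted mean. Values near 1 are expected if quoted errors are
    honest; values well below 1 mean the scatter is suspiciously small."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    return chi2 / (len(values) - 1)

# Hypothetical "too agreeable" literature: ten measurements of some
# parameter, all hugging 0.700 far more tightly than their quoted
# +/-0.05 error bars would predict.
values = [0.700, 0.701, 0.699, 0.700, 0.702, 0.698, 0.700, 0.701, 0.699, 0.700]
errors = [0.05] * 10
print(reduced_chi_squared(values, errors))  # far below 1: too good to be true
```

With honest error bars and independent analyses, roughly this much mutual disagreement is mandatory; its absence is itself a statistical anomaly.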
No kidding. Pigs fly backwards too?
Seriously, to avoid confirmation bias, he suggests, for example,
Blind analysis is the most straightforward and obvious thing to do, and has also been the most talked about. In this case the aim is to create data sets that have randomised or fake signals in them, where the scientists doing the cosmological analysis are blind – meaning they do not know whether they are working on the true data or the fake data.
Blind analysis and control samples are commonly and successfully used in biology, for example. The problem in cosmology is that we have no control group and no control universe; there is just the one, so any blind data has to be faked or randomised. Blind analysis has started to be used in cosmology, but it is not the end of the story.
In addition to blind analysis there are two further approaches that are less widely practised, but no less important. More.
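The blinding scheme Kitching describes can be sketched in a few lines: mix the real catalog with mock catalogs carrying injected fake signals, shuffle them, and seal the key recording which is which until every analysis is frozen. This is a minimal toy sketch under my own assumptions; the function name, the offset-injection scheme, and the data are all hypothetical, not Kitching's implementation.

```python
import random

def make_blinded_datasets(real_data, n_fakes, rng=None):
    """Mix the real dataset with mocks (here: copies with a hidden
    random offset injected as a fake signal) and shuffle them, so an
    analyst cannot tell which is genuine. Returns the shuffled list
    plus a sealed key giving the index of the real data."""
    rng = rng or random.Random(42)
    datasets = [list(real_data)]
    for _ in range(n_fakes):
        offset = rng.uniform(-0.1, 0.1)  # hidden fake signal
        datasets.append([x + offset for x in real_data])
    order = list(range(len(datasets)))
    rng.shuffle(order)
    shuffled = [datasets[i] for i in order]
    sealed_key = order.index(0)  # revealed only after analyses are frozen
    return shuffled, sealed_key

real = [0.70, 0.71, 0.69]  # toy measurements
blinded, key = make_blinded_datasets(real, n_fakes=3)
# Analysts run the full pipeline on all four datasets; `key` stays
# sealed until the results are finalised, so no one can tune toward
# the expected answer.
```

The point of the design is that any tuning toward the status quo would have to be applied blind, to fake data as often as real, where it shows up as an obvious artifact.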
What do you think, readers?
See also: Multiverse cosmology: Assuming that evidence still matters, what does it say?
In search of a road to reality
Follow UD News at Twitter!