The world, it seems, is full with people who mistakenly think that a theory which makes correct predictions is a good theory. This is rubbish, of course, and it has led to a lot of unnecessary confusion. I blame this confusion on the many philosophers, notably Popper and Lakatos, who have gone on about the importance of predictions, but never clearly said that it’s not a scientific criterion.
You see, the philosophers wanted a quick way to figure out whether a scientific theory is good or not that would not require them to actually understand the science. This, needless to say, is not possible. But the next best thing you can do is to ask how much you can trust the scientists. It is for this latter purpose, to evaluate the trust you can put in scientists, that predictions are good. But they cannot, and should not, ultimately decide what the scientific value of a theory is. Sabine Hossenfelder, “Predictions are overrated” at BackRe(Action)
Our physics color commentator Rob Sheldon responds,
I really like Sabine; she’s a fiery German on a mission to clean up particle physics. In this article, she carefully distinguishes between “predictions” and “explanatory power”. What makes a scientific theory good, she argues, is not its “right predictions” but its fit to the data.
Now I don’t know who she is preaching to, but I have never mixed up predictions and explanatory power. Furthermore, the common, non-technical usage of “prediction” also includes the meaning of “explanatory power,” so it is a distinction without a difference. But for the few plebeians out there who need some help, she defines the difference for us and says that we need our theories to strive for explanatory power. So far I’m in complete agreement with her, minus the condescending tone.
But then she launches into why climate deniers are wrong to pan the climate models. At that point, for a brief millisecond, I thought she was going to tell us that climate models had no explanatory power. But no, she accepts their poor predictions precisely because of their explanatory power. That is, they can have 20 years of false predictions simply because they help her understand the data better.
It is at this point I realize that Sabine has no clue what explanatory power means. Oh I know what she thinks it means, but in the words of one wag “You keep using that word, I do not think it means what you think it means.”
Let me be precise. You have a data set with error bars (say, temperature). You have a model with theoretical ranges for each of the fitted variables. That is, before you even looked at the data, you decided that variable “T” could only go from 0.0 to 1.0, which is its domain. Then you fit your model to the data, and the computer thinks a while and spits out your “fitted variables” with “error bars” based on the covariance matrix. If you multiply each of those “widths” together, you get an n-dimensional “volume” of the fit. You do the same thing with the pre-fit domain and its theoretical limits. The ratio of “fitted probability” to “theoretical probability” is the Ockham factor. (More precise examples using Bayesian calculations can be found on this blog: https://maximum-entropy-blog.blogspot.com/2012/07/the-ockham-factor.html and also here: http://maximum-entropy-blog.blogspot.com/2012/07/ockhams-razor.html ) Another way to say it is that the Ockham factor gives the probability density of your model given this data fit.
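The recipe above can be sketched in a few lines of Python. All parameter names, domains, and fitted widths below are invented for illustration; they come from no particular model:

```python
# Toy Ockham-factor calculation (all numbers invented for illustration).
# Each parameter has a prior domain, decided before seeing the data, and a
# fitted error-bar width reported by the fit (from the covariance matrix).
prior_domains = {"T": (0.0, 1.0), "alpha": (-2.0, 2.0)}
fitted_widths = {"T": 0.05, "alpha": 0.30}

prior_volume = 1.0   # n-dimensional volume of the theoretical domains
fitted_volume = 1.0  # n-dimensional volume of the post-fit error bars
for name, (lo, hi) in prior_domains.items():
    prior_volume *= hi - lo
    fitted_volume *= fitted_widths[name]

ockham_factor = fitted_volume / prior_volume
print(ockham_factor)  # ≈ 0.00375 for these made-up numbers
```

A small Ockham factor obtained with few parameters means the fit concentrates probability sharply inside the space the theory allowed beforehand.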
Now suppose you see your model wandering away from the data, so you add another variable to fix that problem–known as an ad hoc fit. Does this make the model “better”?
It does match the data better. But any model with more free parameters SHOULD match the data better. The real question is: what does that do to the Ockham factor? Well, the extra dimension increases the volume of the denominator, but the numerator doesn’t expand very much (after all, that would mean the error bars grow!). The net result is that the Ockham factor almost always gets smaller as you add more variables to the theory.
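The arithmetic here can be made concrete. The numbers below are invented (this is not any published model); the point is only that an ad hoc variable whose fit leaves it loosely constrained multiplies the Ockham factor by a fraction:

```python
# Toy illustration (all widths invented). Each parameter contributes a factor
# (fitted error-bar width) / (prior domain width) to the Ockham factor, so an
# ad hoc variable that the data only loosely constrain drags the factor down.
def ockham_factor(prior_widths, fitted_widths):
    factor = 1.0
    for prior, fitted in zip(prior_widths, fitted_widths):
        factor *= fitted / prior
    return factor

simple = ockham_factor([1.0, 4.0], [0.05, 0.30])             # two-parameter model
padded = ockham_factor([1.0, 4.0, 2.0], [0.05, 0.30, 0.50])  # plus an ad hoc term

print(simple, padded)  # the padded model's factor is 4x smaller here
```

The extra parameter here has prior width 2.0 but fitted width only 0.5, so it contributes a factor of 0.25: better curve-matching, worse Ockham factor.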
Therefore, in the words of Bayesian hypothesis testing, your model has less predictive power, or roughly what Sabine meant by “explanatory power”.
The problem with global warming models is that they have horrible Ockham factors–they are curve-fitting with a vengeance. This is why they “predict” the past perfectly but fail to “predict” the future at all. Simpler models, such as those by John Christy and Roy Spencer, can explain the temperature variation with far fewer free variables and therefore have superior explanatory power. These are the same scientists who are often labelled “global warming deniers,” so I would assume these are the men Sabine is disparaging.
Notice also how often “predictive power” and “explanatory power” are used interchangeably? This is why there is really no need to get all didactic about this distinction. The fact of the matter is that global warming models are bad at both. And ironically, most of Sabine’s blogs are about the poor predictive power in particle theory, but in this blog she feels she has to reverse herself to defend the good name of global warming. My advice to her is to stick with what she has first-hand knowledge of, because second-hand knowledge always suffers from authoritarian bias.