From Nobelist and multiverse proponent* Steven Weinberg at the New York Review of Books:

Many physicists came to think that the reaction of Einstein and Feynman and others to the unfamiliar aspects of quantum mechanics had been overblown. This used to be my view.

His view has changed to:

The introduction of probability into the principles of physics was disturbing to past physicists, but the trouble with quantum mechanics is not that it involves probabilities. We can live with that. The trouble is that in quantum mechanics the way that wave functions change with time is governed by an equation, the Schrödinger equation, that does not involve probabilities. It is just as deterministic as Newton’s equations of motion and gravitation. That is, given the wave function at any moment, the Schrödinger equation will tell you precisely what the wave function will be at any future time. There is not even the possibility of chaos, the extreme sensitivity to initial conditions that is possible in Newtonian mechanics. So if we regard the whole process of measurement as being governed by the equations of quantum mechanics, and these equations are perfectly deterministic, how do probabilities get into quantum mechanics?
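Weinberg's point can be sketched numerically. Below is a minimal two-level ("qubit") toy example of my own, not from his essay, assuming a Hamiltonian H = σ_x with ħ = 1: the Schrödinger evolution is a fixed, deterministic unitary map, and probabilities enter only when the Born rule (squared amplitudes) is applied at measurement.

```python
# Hedged illustration: a two-level system under H = sigma_x (hbar = 1).
# Schrodinger evolution psi(t) = exp(-i H t) psi(0) is deterministic;
# probabilities appear only via the Born rule |amplitude|^2.
import cmath

def evolve(psi0, t):
    """Deterministic unitary evolution under H = sigma_x.

    Uses exp(-i * sigma_x * t) = cos(t) * I - i * sin(t) * sigma_x.
    """
    a, b = psi0
    c, s = cmath.cos(t), cmath.sin(t)
    return (c * a - 1j * s * b, -1j * s * a + c * b)

def born_probabilities(psi):
    """Born rule: outcome probabilities are the squared amplitudes."""
    return tuple(abs(amp) ** 2 for amp in psi)

psi0 = (1 + 0j, 0 + 0j)           # start definitely in state |0>
for t in (0.0, 0.5, 1.0):
    psi_t = evolve(psi0, t)       # the same t always yields the same psi_t
    p0, p1 = born_probabilities(psi_t)
    print(f"t={t:.1f}  P(0)={p0:.3f}  P(1)={p1:.3f}  total={p0 + p1:.3f}")
```

Nothing in the evolution step is random: rerunning with the same t reproduces the same wave function exactly, and the probabilities always sum to 1 (unitarity). The randomness Weinberg describes enters only at the measurement step, which is his puzzle.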

Weinberg mulls various solutions, including the "realist" approach:

There is another thing that is unsatisfactory about the realist approach, beyond our parochial preferences. In this approach the wave function of the multiverse evolves deterministically. We can still talk of probabilities as the fractions of the time that various possible results are found when measurements are performed many times in any one history; but the rules that govern what probabilities are observed would have to follow from the deterministic evolution of the whole multiverse. If this were not the case, to predict probabilities we would need to make some additional assumption about what happens when humans make measurements, and we would be back with the shortcomings of the instrumentalist approach. Several attempts following the realist approach have come close to deducing rules like the Born rule that we know work well experimentally, but I think without final success.

More.

In short, it may be that the assumption that there *is* a multiverse, in any scientifically meaningful sense, is part of the problem. Or…?

Similarly, string theory, which predicts a multiverse, can’t be verified by detecting the other parts of the multiverse. But it might make other predictions that can be verified. For example, it may say that in all of the big bangs within the multiverse, certain things will always be true, and those things may be verifiable. It may say that certain symmetries will always be observed, or that they’ll always be broken according to a certain pattern that we can observe. If it made enough predictions like that, then we would say that string theory is correct. And if the theory predicted a multiverse, then we’d say that that’s correct too. You don’t have to verify every prediction to know that a theory is correct.

But do they need to verify any predictions anymore? *See also:* A scientist on the benefits of a post-truth society

The war on falsifiability in science continues

Evolution bred a sense of reality out of us
