Sabine Hossenfelder, impatient with the results of recent experiments, seeks a better theory that is not observer-dependent. She is not happy with how the outcomes have been interpreted, offering, “If you claim that a single photon is an observer who make a measurement, that’s not just a fanciful interpretation, that’s nonsense.” She thinks that a new theory of quantum mechanics is needed:
So to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this. What these headlines tell you instead is that physicists slowly come to see that quantum mechanics is internally inconsistent and must be replaced with a better theory, one that describes what physically happens in a measurement. And when they find that theory, that will be the breakthrough of the century.
Sabine Hossenfelder, “Has quantum mechanics proved that reality does not exist?” at BackRe(Action) (February 19, 2022)
Now, the interesting thing is that Hossenfelder is comfortable with how strange classical particle physics can be. Take neutrinos, for example:
The neutrinos’ overall behavior, she tells us, is inconsistent with the Standard Model of physics. But that’s a “crazy” situation she finds easier to accept.
One conclusion:
We might conclude that the universe is a stranger place than we have sometimes been led to suspect and that the amount and type of strangeness each of us can tolerate depends, to some extent, on prior commitments. But it is what it is anyway.
News, “Theoretical physicist: Quantum theory must be replaced” at Mind Matters News (February 21, 2022)
Takehome: Sabine Hossenfelder can live with neutrinos that are inconsistent with the Standard Model of physics, but quantum uncertainties are beyond the pale.
You may also wish to read:
Study: Science fiction not as strange as quantum physics fact. At least, that’s what we can assume from a failed effort to disprove physicist Eugene Wigner’s thought experiment. The research (and the QBism that resulted) eliminates the possibility that the mind is just an illusion. Apart from observers’ minds, there is no knowledge.
and
Some elements of our universe do not make scientific sense. Well-attested observations of neutrinos are not compatible with the Standard Model of our universe that most physicists accept. Theoretical physicist Sabine Hossenfelder walks us through the reasons that neutrinos, nearly massless particles with no charge, confound expectations.
She offers an important corrective. And yes, if the theory can only be explained through paradoxes and absurdities, that’s a signal that we don’t really understand what’s going on and that a new theory, one that actually makes sense of the data, is needed.
Some will say that’s a problem with the universe. But it could just be a problem with the limits of our scientific measures.
I’ve read plenty of papers and popular books about the “measurement” problem, and Sabine does an excellent job. As she indicates, the real “problem” is that we don’t have the full picture yet, and arriving at the full picture will indeed be a big breakthrough. Worth your reading.
Easy. Everything is waves. Waves don’t have locations. Waves CAN have localizable and stable interference points.
They just can’t accept the idea that consciousness may indeed be a required ingredient to explain what a “measurement” ultimately is. They are all hell-bent on a model where consciousness is just an insignificant by-product in an otherwise mechanical Universe. Darwinism runs deep.
IOW, “I don’t wike it, make it go away!”
SA said:
The experimental results can be explained without paradoxes and absurdities. The data makes complete sense. The problem is not that there isn’t an explanation that makes sense of the data; the problem is that you (apparently), Sabine and many others don’t like the explanation that makes sense out of that data.
Hmm. Let’s see. Out of all the ontologies represented by members of this forum, which ontology predicts that all experimental attempts to confirm some form of local or non-local realism would fail?
Oh, that’s right. Mine.
Sabine Hossenfelder states, “If you claim that a single photon is an observer who make(s) a measurement, that’s not just a fanciful interpretation, that’s nonsense,” and, “to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this.”
In the first part of her statement, she is criticizing the experimental realization of the Wigner’s friend thought experiment because photons were used as proxies for human observers.
Since the experimenters did use photons as proxies for humans, her criticism of the Wigner’s friend experiment is fair enough as far as it goes. But the second part of Hossenfelder’s statement, “no one has proved that reality doesn’t exist and no experiment has confirmed this,” goes beyond criticizing the current Wigner’s friend experiment. Her statement implies that what is termed ‘realism’ (i.e., the belief that an objective ‘material’ reality exists independently of measurement) has not been seriously challenged by other, previous experimental results in quantum mechanics.
That implication on Hossenfelder’s part is simply not true.
For instance, experiments violating Leggett’s inequality, and Wheeler’s delayed-choice experiments, have both seriously challenged our notion of ‘material realism.’ And these experiments are completely independent of the experimental realization of the Wigner’s friend thought experiment that Hossenfelder is criticizing in her article.
Thus, directly contrary to what Hossenfelder implies with her statement that “no one has proved that reality doesn’t exist and no experiment has confirmed this,” her belief in ‘realism’ (i.e., that an objective ‘material’ reality exists independently of measurement) is on far shakier experimental ground than she is apparently willing to honestly admit in her present article.
Quote and Verse:
WJM@8:
Out of all the ontologies represented by members of this forum, which ontology predicts that all experimental attempts to confirm some form of local or non-local realism would fail?
And this is the rub: we know that local and non-local realism exists because the world exists; but we can’t “confirm” it. IOW, the world exists without anyone observing it and so is in no need of an “observer.” Yet, only an “observer” can “confirm” that the world contains local and non-local realism.
It’s always an epistemic problem.
WJM@8
BA77@9
It strikes me that some of the various “physical reality is a virtual reality simulation” theories (in addition to WJM’s Mental Reality Theory) appear to neatly explain such weird phenomena, which imply the absence of “local realism.” In the case quoted above, the physical-reality simulation, in order to conserve its processing burden, would forgo computing possible interactions until they are actually observed. Until the virtual reality simulation system actually did the calculations, the result would simply not exist, except as a potential.
A larger example would be not calculating the virtual reality of unseen parts of the Universe in detail until astronomers actually observe them through their telescopes: an efficient use of processing power. This would also mean fundamental lower limits on computational intervals in parameters like time, distance, energy levels, and velocities, to limit the expenditure of hyper-processor execution time (the hyper-processor would be incredibly fast, but not infinitely so). Hence the observed quantization. Other measures would also have to be implemented to limit processing, like computing the world state in detail only for areas that are actually being observed.
The lightspeed limit would simply be a limit necessitated by this world-simulation processor. We can’t travel faster than the speed of light because, if we could, we would be able, for instance, to get to another galaxy before the virtual reality simulation could compute it. This makes the simulation hypothesis seem even more persuasive, because it explains the absolute light-speed limit as an expected artifact of inherent processing limits in the virtual world/universe simulation, rather than as a de facto limit arbitrarily imposed by a cosmos designed, at least in part, in accordance with Einsteinian relativity.
Energy or matter crossing vast distances of void would appear slowed down, and more slowed down the more void it crossed. The slowdown effect would increase as the simulation progresses, as limited computing power must process data of increasing complexity spanning increasing simulation space.
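(For the programmers here, a minimal lazy-evaluation sketch of the “compute only what is observed” idea; a loose analogy only, and the class and names are hypothetical, invented for illustration.)

```python
import random

class LazyUniverse:
    """Toy analogy: regions of the 'world' are computed only when observed."""

    def __init__(self, seed=42):
        self.seed = seed
        self.computed = {}  # region -> state; no detail exists until observed

    def observe(self, region):
        # Detail is generated on first observation and cached thereafter,
        # mimicking "the result would simply not exist except as a potential."
        if region not in self.computed:
            rng = random.Random(f"{self.seed}:{region}")
            self.computed[region] = rng.random()  # stand-in for a detailed state
        return self.computed[region]

universe = LazyUniverse()
print(universe.observe("Andromeda"))  # detail computed now, on first observation
print(len(universe.computed))         # only observed regions consume "processing"
```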
Marcus Arvan’s peer-to-peer (P2P) participatory virtual reality simulation theory (involving multiple separate simulation “users” and processors connected in a network) apparently explains quantum mechanical interactions, as explained at https://fqxi.org/community/forum/topic/1765:
PaV said: “And this is the rub: we know that local and non-local realism exists because the world exists; but we can’t ‘confirm’ it.” No. It’s not that we “cannot confirm it.” It has been repeatedly disconfirmed. Some people cling to the notion of realism for various reasons.
The problem here is that the term “reality,” and the root “real,” means that which has objective, independent existence. Usually, that also means “independent of any mind/observation/experience.” Perhaps we need a new definition of “reality” or “real.” Or a new word. This is why so many are now using the term “mental reality.” It is not our traditional concept of what “reality” means.
Well, I mean, if you just want to ignore the evidence that demonstrates otherwise, okay.
Doubter @11,
I agree that simulation theory is at least a good model of what we are experiencing and how, but the problem (in my perspective, at any rate) is that the simulation model just pushes the problem back a step. Is the world that is operating the simulation a world based on actual matter? Is it simulations all the way up? Etc.
Also, there’s a lot of other evidence to consider besides that which we get from physicists.
WJMurray:
Succinctly if you can (I can probably fill in the details), how has this been “disconfirmed”?
PaV:
The first half of this video explains it well, and includes the actual experiments and shows the published papers. The second half is a philosophical extrapolation of the experimental results into theism, but IMO they get into the weeds there.
https://www.youtube.com/watch?v=4C5pq7W5yRM
WJ Murray:
Thanks for the link. I’ll take a look.
WJ Murray:
I’ve looked at the video. Thanks. Here’s a video from Sabine Hossenfelder on the quantum eraser experiment. It’s fascinating.
We’re not supposed to be able to know both the location and the velocity of particles in the quantum realm. Yet particle detectors such as bubble chambers give us both. How is this possible? Mott addressed this issue in 1929, asking how a spherical, 3D wave function can produce a linear track; that is, an alpha particle (a helium nucleus, with no electrons) emerges from a radioactive atom and then proceeds through a cloud chamber in a ‘line.’ His answer is that to explain what we “see,” we need to consider not the alpha particle alone, but the entire configuration that exists: the gas molecules in the chamber along with the alpha particle. He then proceeds to show that the ‘probability’ of finding the alpha particle, once it is outside the radioactive nucleus, MUST lie on a line, since the probability wave (i.e., the wave function) vanishes outside a “cone” determined by the location of the gas molecule that is ‘excited.’
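(A hedged sketch of Mott’s argument in symbols; the notation is mine, not the commenter’s or Mott’s.)

```latex
% Heuristic summary of Mott's 1929 argument (notation mine).
% The alpha particle leaves the nucleus (at the origin) as a spherical wave:
\[
  \psi_\alpha(\mathbf{r}) \;\sim\; \frac{e^{ikr}}{r}.
\]
% Treating the ionization of two gas atoms, at positions \mathbf{a}_1 and
% \mathbf{a}_2, in second-order perturbation theory, Mott showed that the
% joint excitation amplitude A(\mathbf{a}_1, \mathbf{a}_2) is appreciable
% only when the origin, \mathbf{a}_1, and \mathbf{a}_2 are very nearly
% collinear; that is, \mathbf{a}_2 must lie inside a narrow cone about the
% ray from the source through \mathbf{a}_1. Successive ionizations therefore
% trace out a straight track, even though \psi_\alpha alone is spherically
% symmetric.
```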
My own sense is that our intuition of time as passing forward in a positive direction forces us to see things as moving, let us say, from left to right. Yet, simultaneously the wave function of any particle can pass from ‘right to left,’ which we would think of as going “backwards” in time. The result is that we’re only seeing “half” of what’s going on. Feynman’s “Path-Integral” approach includes both portions. It would have been interesting to see how Feynman would have interpreted all these experiments.
Bottom line: (1) We as persons analyze these experiments and ‘see’ interference where we want to, resulting in our ‘blindness’ to all the interference taking place around the experiment, interference that has been taking place (in terms of quantum mechanics) for an infinity of time. (2) This means that the ‘choices’ of the ‘observer’ are what matter (for they change the overall configuration space of the ‘system’), not the ‘observation.’ IOW, all is nothing more than material reality interacting with itself. Now, ‘consciousness’ exists outside of this “material reality,” and this gets us back to Descartes and the Idealists he inspired.
PaV,
What I don’t understand from the video is SH’s conclusion at the end about combining the two interference patterns so that they make a non-interference blob. She then uses the coin example and shows that she can “selectively disregard” some of the random coins on the mat to generate an interference pattern.
What she doesn’t explain – or did I miss it? – is how the beam splitter makes the specific “choice” to separate the combined “beam” (whatever that means, if you’re firing photons one or two at a time) into what results on the two screens as two interference patterns. Obviously, the splitter wasn’t designed to do that specifically. I note that photons from both slits are being split to hit both end screens. Why wouldn’t the “split beam” just result in two blobs? What is sorting them out (“selectively disregarding”) into specific interference patterns on the D3 and D4 end screens?
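(An editorial aside that may help with this question. The following is the standard textbook account of a lossless beam splitter, not something taken from the video, and the notation is mine: the two output ports combine the two path amplitudes with opposite signs, so which port fires is itself correlated with the phase, and conditioning on the port sorts the hits into complementary fringe sets.)

```latex
% Standard textbook sketch (my notation, not from the video).
% Let \psi_1(x), \psi_2(x) be the slit-1 and slit-2 amplitudes at screen
% position x, with relative phase \phi(x). Unitarity forces the two output
% ports of a lossless 50/50 beam splitter to combine the paths with a
% relative phase of \pi, i.e., with opposite signs:
\[
  P_{D3}(x) \;\propto\; |\psi_1(x) + \psi_2(x)|^2 \;\propto\; 1 + \cos\phi(x),
  \qquad
  P_{D4}(x) \;\propto\; |\psi_1(x) - \psi_2(x)|^2 \;\propto\; 1 - \cos\phi(x).
\]
% Each conditioned pattern shows fringes, but their sum is flat:
\[
  P_{D3}(x) + P_{D4}(x) \;\propto\; 2.
\]
% Nothing "sorts" the photons after the fact: the port that fires is itself
% phase-dependent, and conditioning on it selects a fringe or anti-fringe set.
```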
BTW, here’s Bernardo Kastrup catching Sabine in a flat-out lie:
https://www.bernardokastrup.com/2022/02/sabine-hossenfelders-bluf-called.html
Also, Sabine doesn’t believe in free will. She’s not interested in philosophy, as if her entire perspective were not rooted in philosophy. I don’t really see how she can function in terms of logic if she doesn’t recognize the problems with this perspective.
WJ Murray:
This stuff gets murky right away. And I am no expert. But allow me some comments intermixed into your response above.
WJM:
Hope this helps somewhat.
WJ Murray:
As to Kastrup’s charge against Sabine, I’ve looked at the relevant part of the video debate and Sabine clearly tells Bernardo that he’s looking at the wrong paper. She then tells him what paper to look at.
Here’s the paper. If you go to the paper and then search for the word “hidden,” you’ll see that Sabine defines the “hidden variables” much as she does in the video.
Here’s how the paper ends:
[My emphasis]
The papers that Kastrup cites are the wrong papers, per Sabine. I just think Kastrup was not familiar with her most recent paper. It’s unfortunate that the charge has been levelled. It’s really a misunderstanding.
Now, I firmly believe in free will, contra Sabine. And there are other things I disagree with her about. But I do enjoy her willingness to speak her mind and to tangle with relevant topics. I’m not the greatest of judges here, but she does seem to pick the right fights, as I see it. Maybe that’s because, having looked at her 2020 paper linked above (I looked just minutes ago), it appears that I see things in a very similar way to Sabine.
By the way, her argument when it comes to all of these “inequalities” (all the way up to and including Leggett’s) has to do with what she calls “statistical independence,” a notion she sees as implicit in all of these equations. She apparently doesn’t buy the notion of “statistical independence” and sees it as an a priori philosophical position that physicists take. That’s as much as I can comment. But for those who are interested, this might be of some importance.
WJM: BTW, here’s Bernardo Kastrup catching Sabine in a flat-out lie:
I like Sabine when it comes to her wheelhouse, but she has a few blind spots beyond that. Her take on consciousness and superdeterminism is obviously wrong to anyone who is deep into the subject. I genuinely feel embarrassed for her. But I will say again: I like her, I think she’s a net positive in the world, and she has the guts to buck the status quo on several sub-topics in physics.
–Ram – Truth at All Costs
PaV,
Thanks for clearing that up for me.
Here’s where I see the problem. Check me if I’m wrong about this.
Additionally, we should not lose sight of the fact that this is the delayed-choice quantum eraser experiment. The original quantum eraser experiment showed what happens when D1 and D2 are used: remove D1 and D2 and all the apparatus at the D3 and D4 end of the experiment, and what you get on the screen is an interference pattern (the original quantum eraser experiment: https://www.youtube.com/watch?v=l8gQ5GNk16s ). When you add D1 and D2 and activate them, you get a blob. Turn them off, and you get an interference pattern.
I’m going to call the individual photons in the entangled pair E1 and E2; the E1s are going to the screen, and the E2s are being redirected down the alternate path toward the detectors.
Perhaps this is something simple to point out, but D1 and D2 are simple photon detectors; they determine which slit the original photon went through simply because they are put in the path of the E2s: D1 is in the path of the E2s that came from photons passing through slit 1, and D2 is in the path of the E2s that came from photons passing through slit 2.
Note that, according to time-linear cause and effect, whatever we do at D1 and D2 shouldn’t have any effect on what appears on the screen, because D1 and D2 fire after the E1s have already hit the screen. In the delayed-choice experiment, what happens at D3 and D4 comes after what happens (or doesn’t happen) at D1 and D2.
In the delayed-choice experiment, the screen pattern (after the crystal) is a blob because D1 and D2 are determining which slit the original photon came from. Remove D1 and D2 and there will be an interference pattern, as per the original quantum eraser experiment.
The E2s carry their “which slit” information, so when you measure that information at D1 and D2, their entangled E1s will still have produced a blob.
If you turn off or remove D1 and D2 and “mix up” the E2s, you can no longer determine which slit the original photon came through. Even the potential for figuring that out has been erased. Supposedly, the mirrors, the splitter, and the D3 and D4 detectors are set up to split these now mixed-up E2s down two separate paths to D3 and D4, where the photons are simply registered as hits; but we, the observers, cannot determine which slit the original photon came through, because the E2s are mixed up with each other.
You’re still detecting the photons. The detector sets themselves (D1 and D2 vs. D3 and D4) are not different from each other; they’re just getting hit by E2 photons. The only thing that has changed, other than the addition of the mirrors and beam splitter, is that you have removed the potential for figuring out which E2 came from which slit.
If the E2s hitting D3 are about a 50-50 mix of slit-1 and slit-2 photons, and simply detecting the photons were sufficient to produce a blob, then we would see a blob at both D3 and D4, because each would be detecting an equal mixture of slit-1 and slit-2 E2s. Simply detecting them, it turns out, is not enough to produce a blob; you have to be able to know which slit the original photon came through. And it doesn’t matter whether you lose that ability by turning off D1 and D2, as in the original eraser experiment, or by mixing the photons up via the delayed-choice apparatus: when you lose the potential for determining which slit the photon came through, you get an interference pattern. Nor does it matter whether you detect which slit the photon goes through before it goes through the slit, afterward, or deliberately make it impossible ever to figure out. Whenever that information becomes known, before or after the photon passes through the slit, before or after an E1 hits the screen, we lose the interference pattern.
Let’s accept Sabine’s assertion (I have no reason to doubt it, though I’ve never heard it before) that the D3 and D4 screen patterns (so to speak) are, individually, interference patterns, but when overlaid produce a blob. Okay. This doesn’t represent an explanation at all, because of what I said previously about the 50% mixture of slit-1 and slit-2 E2s hitting both D3 and D4. IMO, it is astounding that this would be the case. It represents an even deeper mystery as to how and why that would happen. Saying that this has to do with different probabilities at D3 and D4 (1) doesn’t explain the apparent retro-causality, and (2) doesn’t change the fact that the availability of the which-way information is causal even in the quantum eraser experiment, because it doesn’t matter how far away you put D1 and D2: as long as that information is determined at some point, you’ll have a blob and not an interference pattern on the screen.
If the information is always in the E2s, and thus always available, why would we ever get an interference pattern in the eraser experiment? It seems the experiment (and the double-slit experiment) turns on whether the path of the original photon is ever known, before or after the slits.
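(To make the “fringes plus anti-fringes equals blob” point concrete, here is a minimal Monte Carlo sketch. It is a toy construction of mine under the assumption that D3 fires with probability proportional to 1 + cos φ at each screen position; it is not a model of the actual apparatus.)

```python
# Toy sketch: conditioned on D3 the hits show fringes, conditioned on D4 they
# show anti-fringes, and the unconditioned screen pattern is flat (a "blob").
import math
import random

random.seed(0)
N_BINS, N_PHOTONS = 20, 200_000
d3_hist = [0] * N_BINS
d4_hist = [0] * N_BINS

for _ in range(N_PHOTONS):
    x = random.random()                # screen position, mapped to [0, 1)
    phi = 2 * math.pi * 3 * x          # path-phase difference at x (3 fringes)
    p_d3 = 0.5 * (1 + math.cos(phi))   # assumed P(D3 fires | photon lands at x)
    b = int(x * N_BINS)
    if random.random() < p_d3:
        d3_hist[b] += 1                # fringe subset
    else:
        d4_hist[b] += 1                # anti-fringe (complementary) subset

for b in range(N_BINS):
    total = d3_hist[b] + d4_hist[b]    # marginal: roughly flat across bins
    print(f"bin {b:2d}: D3={d3_hist[b]:6d}  D4={d4_hist[b]:6d}  total={total:6d}")
```

Run it and the D3 column rises and falls (fringes), the D4 column does the opposite (anti-fringes), and the total column stays roughly constant: disregard neither subset and the interference washes out.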
As for the “hidden variable” controversy between her and Kastrup, she says, “…if the hidden variables are the degrees of freedom of the detector…” If? That means she doesn’t know what the hidden variable is, or even where it is. She has not identified the hidden variable; she has pointed to where it might be and what it might be.
Ram,
Yeah, I like her just fine. I also appreciate that she is attempting to challenge the consciousness-centric paradigm that is currently emerging, even though it’s ultimately from a self-defeating philosophical position. Bring on the experiments!!!
WJ Murray:
Yes, Sabine is saying this, and I am not familiar with whether or not it is true either. Those familiar with such things will have to tell us if she is wrong. So, yes, I’m accepting what Sabine is saying as fact.
I, too, find this experimental behavior to be mysterious–perhaps for slightly different reasons, but, yes, mysterious.
Yes, these are the question marks that quantum theory brings out in these experiments.
But, as you point out, Sabine thinks that the “hidden variables” are in the detectors. I agree with her. In the paper I linked to, Sabine says, as she does in the video with Kastrup, that the “hidden variables” are “complex numbers that are uniformly distributed inside the complex unit circle.”
It’s not a precise definition, but of sufficient precision to work out certain other details. She concludes by saying that it’s possible that these “complex numbers” emerge from the “degrees of freedom” of the detector.
I haven’t read the paper yet. But in my view, and, as I’ve already stated, seeing things as going BOTH “forwards” and “backwards” in time, what is happening is this: when the experimental components are configured, wave functions exist for all of the atoms involved, intermingling (‘interfering’) at the speed of light, both forwards and backwards in time (equivalent to saying “in both directions”). As I see it, this means that by the time the photons are “fired” and pass through the experimental apparatus, the setup of the apparatus has already determined what will happen; that is, if you set things up one way, the ‘which-way’ information is “already” lost or “already” in place. This is where the “delayed” portion of the experiment becomes critical, since the “new” wave functions might not have enough time to reconfigure the “screen” if time travels in only “one direction.” From these kinds of experiments, most quantum theorists would conclude that the experimental results destroy ‘determinism’ altogether.
And now, that’s where Sabine’s comment about the two E2s producing two different ‘interference’ effects, depending on whether the E2s are detected by D3 or D4, becomes the most critical point of all that’s being considered. If they “add up” as she says, then the initial determinism produced by the actual experimental setup still applies. It has not been abolished.
Again, this is how I see things, too. But I cannot comment on whether she has stated things correctly or not. I do suppose, however, that if she has NOT stated things correctly, she will be quickly corrected.
As I just stated, I believe that “measurement” simply involves our intruding on what nature has already settled upon. IOW, the setup of the detectors tells us, from the beginning, whether or not the ‘which-way’ information will be available at the ‘screen.’
What is a bit “spooky” to me is that we get different, but complementary, ‘interference’ patterns depending on the detector used. Yet, per quantum mechanics, this is how nature “chooses” to operate. Just think: Avogadro’s number is ~10^23 molecules per mole. How many molecules are there in a detector whose electrons are sensitive to photons? How many quantum possibilities are there? Well, QM would say that each electron has an infinite number of eigenvalues available to it. So we have infinity raised to the 10^23 power. Quite a set of interactions, this!
So, in such a grand complex of possibilities, the fact that two different, additive interference patterns emerge is just something we have to accept about how nature operates. Beyond our ‘pay grade’! While all of this works itself out “deterministically,” it is a complete wonder to us. It is a “determinism” that is completely beyond us. That is, we’re “free” of this kind of determinism.