
At Mind Matters News: Theoretical physicist: Quantum theory must be replaced


Sabine Hossenfelder, impatient with the results of recent experiments, seeks a better theory that is not observer-dependent:


She’s not happy with the outcome of the experiments, writing, “If you claim that a single photon is an observer who make[s] a measurement, that’s not just a fanciful interpretation, that’s nonsense.” She thinks that a new theory of quantum mechanics is needed:

So to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this. What these headlines tell you instead is that physicists slowly come to see that quantum mechanics is internally inconsistent and must be replaced with a better theory, one that describes what physically happens in a measurement. And when they find that theory, that will be the breakthrough of the century.

Sabine Hossenfelder, “Has quantum mechanics proved that reality does not exist?” at BackRe(Action) (February 19, 2022)

Now, the interesting thing is that Hossenfelder is comfortable with how strange classical particle physics can be. Take neutrinos, for example:

The neutrinos’ overall behavior, she tells us, is inconsistent with the Standard Model of physics. But that’s a “crazy” situation she finds easier to accept.

One conclusion:

We might conclude that the universe is a stranger place than we have sometimes been led to suspect and that the amount and type of strangeness each of us can tolerate depends, to some extent, on prior commitments. But it is what it is anyway.

News, “Theoretical physicist: Quantum theory must be replaced” at Mind Matters News (February 21, 2022)

Takehome: Sabine Hossenfelder can live with neutrinos that are inconsistent with the Standard Model of physics, but quantum uncertainties are beyond the pale.


You may also wish to read:

Study: Science fiction not as strange as quantum physics fact. At least, that’s what we can assume from a failed effort to disprove physicist Eugene Wigner’s thought experiment. The research (and the QBism that resulted) eliminates the possibility that the mind is just an illusion. Apart from observers’ minds, there is no knowledge.

and

Some elements of our universe do not make scientific sense. Well-attested observations of neutrinos are not compatible with the Standard Model of our universe that most physicists accept. Theoretical physicist Sabine Hossenfelder walks us through the reasons that neutrinos, nearly massless particles with no charge, confound expectations.

25 Replies to “At Mind Matters News: Theoretical physicist: Quantum theory must be replaced”

  1. 1
    Silver Asiatic says:

    So to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this.

    She offers an important corrective. And yes, if the theory can only be explained through paradoxes and absurdities then that’s a signal that we don’t really understand what’s going on and a new theory, that actually makes sense of the data, is needed.

    Some elements of our universe do not make scientific sense. Well-attested observations of neutrinos are not compatible with the Standard Model of our universe that most physicists accept.

    Some will say that’s a problem with the universe. But it could just be a problem with the limits of our scientific measures.

  2. 2
    PaV says:

I’ve read plenty of papers and popular books about the “measurement” problem, and Sabine does an excellent job. As she indicates, the real “problem” is that we don’t have the full picture yet, and arriving at the full picture will indeed be a big breakthrough. Worth reading.

  3. 3
    polistra says:

    Easy. Everything is waves. Waves don’t have locations. Waves CAN have localizable and stable interference points.

  4. 4
    bornagain77 says:

    “Despite the unrivaled empirical success of quantum theory, the very suggestion that it may be literally true as a description of nature is still greeted with cynicism, incomprehension and even anger.”
    (T. Folger, “Quantum Shmantum”; Discover 22:37-43, 2001)

    Anton Zeilinger interviewed about Quantum Mechanics – video – 2018
    (The essence of Quantum Physics for a general audience)
    https://www.youtube.com/watch?v=z82XCvgnpmA
    40 sec: … Every object has to be in a definite place is not true anymore. …
    The thought that a particle can be at two places at the same time is (also) not good language.
    The good language is that there are situations where it is completely undefined where the particle is. (And it is not just us who don’t know where the particle is; the particle itself does not know where it is.) This “nonexistence” is an objective feature of reality. …
    5:10 min: … superposition is not limited to small systems …
    7:35 min: … I have given lectures on quantum physics to children, 6 and 7 years old, and they understand the basic concepts of quantum physics if you tell them the right way. …
    9:00 min: … the main issue (with quantum mechanics) is interpretation. What does it mean for our view of the world? … “Emotional” fights happen over what it means. …
    17:30: … In quantum mechanics we have the measurement paradox (i.e., the measurement problem). … I think it (the measurement paradox) tells us something about the role of observation in the world. And the role of information. … Maybe there are situations where we have to reconsider the “Cartesian cut”* …
    *Cartesian Cut
    The Cartesian cut is a metaphorical notion alluding to Descartes’ distinction of res cogitans (thinking substance) and res extensa (extended substance). It plays a crucial role in the long history of the problem of the relationship between mind and matter and is constitutive for the natural sciences of today. While the elements of res cogitans are mental (non-material) entities like ideas, models, or concepts, the elements of res extensa are material facts, events, or data. The conventional referents of all natural sciences belong to the latter regime.
    http://see.library.utoronto.ca.....utdef.html

  5. 5
    Eugene says:

    They just can’t accept the idea that consciousness may indeed be a required ingredient to explain what a “measurement” ultimately is. They are all hell-bent on a model where consciousness is just an insignificant by-product in an otherwise mechanical Universe. Darwinism runs deep.

  6. 6
    William J Murray says:

    So to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this.

    IOW, “I don’t wike it, make it go away!”

  7. 7
    William J Murray says:

    SA said:

    She offers an important corrective. And yes, if the theory can only be explained through paradoxes and absurdities then that’s a signal that we don’t really understand what’s going on and a new theory, that actually makes sense of the data, is needed.

    The experimental results can be explained without paradoxes and absurdities. The data makes complete sense. The problem is not that there isn’t an explanation that makes sense of the data; the problem is that you (apparently), Sabine and many others don’t like the explanation that makes sense out of that data.

  8. 8
    William J Murray says:

    Hmm. Let’s see. Out of all the ontologies represented by members of this forum, which ontology predicts that all experimental attempts to confirm some form of local or non-local realism would fail?

    Oh, that’s right. Mine.

  9. 9
    bornagain77 says:

    Sabine Hossenfelder states, “If you claim that a single photon is an observer who make(s) a measurement, that’s not just a fanciful interpretation, that’s nonsense.”,,, “to summarize, no one has proved that reality doesn’t exist and no experiment has confirmed this.”,,,

    In the first part of her statement she is criticizing the experimental realization of the Wigner’s friend thought experiment because photons were used as proxies for human observers.

    Quantum paradox points to shaky foundations of reality – George Musser – Aug. 17, 2020
    Excerpt: Now, researchers in Australia and Taiwan offer perhaps the sharpest demonstration that Wigner’s paradox is real. In a study published this week in Nature Physics, they transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans. Whereas Wigner believed resolving the paradox requires quantum mechanics to break down for large systems such as human observers, some of the new study’s authors believe something just as fundamental is on thin ice: objectivity. It could mean there is no such thing as an absolute fact, one that is as true for me as it is for you.
    https://www.sciencemag.org/news/2020/08/quantum-paradox-points-shaky-foundations-reality

    ,,, and since they used photons as proxies for the humans, her criticism of the Wigner’s friend experiment is fair enough as far as it goes. But the second part of Hossenfelder’s statement, “no one has proved that reality doesn’t exist and no experiment has confirmed this”, goes beyond just criticizing the current Wigner’s friend experiment. Her statement implies that what is termed ‘realism’, (i.e. the belief that an objective ‘material’ exists independent of measurement), has not been seriously challenged by other previous experimental results in quantum mechanics.

    That implication on Hossenfelder’s part is simply not true.

    For instance, experiments violating Leggett’s inequality, and Wheeler’s Delayed Choice experiments, have both seriously challenged our notion of ‘material realism’. And these experiments are completely independent of the current experimental realization of the Wigner’s friend thought experiment that Hossenfelder is currently criticizing in her article.

    An experimental test of non-local realism – 2007
    Simon Gröblacher, Tomasz Paterek, Rainer Kaltenbaek, Caslav Brukner, Marek Zukowski, Markus Aspelmeyer & Anton Zeilinger
    Abstract: Most working scientists hold fast to the concept of ‘realism’—a viewpoint according to which an external reality exists independent of observation. But quantum physics has shattered some of our cornerstone beliefs. According to Bell’s theorem, any theory that is based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in space-like separated regions) is at variance with certain quantum predictions. Experiments with entangled pairs of particles have amply confirmed these quantum predictions, thus rendering local realistic theories untenable. Maintaining realism as a fundamental concept would therefore necessitate the introduction of ‘spooky’ actions that defy locality. Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories. Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned.
    http://www.nature.com/nature/j.....05677.html

    Quantum physics says goodbye to reality – Apr 20, 2007
    Excerpt: Many realizations of the thought experiment have indeed verified the violation of Bell’s inequality. These have ruled out all hidden-variables theories based on joint assumptions of realism, meaning that reality exists when we are not observing it; and locality, meaning that separated events cannot influence one another instantaneously. But a violation of Bell’s inequality does not tell specifically which assumption – realism, locality or both – is discordant with quantum mechanics.
    Markus Aspelmeyer, Anton Zeilinger and colleagues from the University of Vienna, however, have now shown that realism is more of a problem than locality in the quantum world. They devised an experiment that violates a different inequality proposed by physicist Anthony Leggett in 2003 that relies only on realism, and relaxes the reliance on locality. To do this, rather than taking measurements along just one plane of polarization, the Austrian team took measurements in additional, perpendicular planes to check for elliptical polarization.
    They found that, just as in the realizations of Bell’s thought experiment, Leggett’s inequality is violated – thus stressing the quantum-mechanical assertion that reality does not exist when we’re not observing it. “Our study shows that ‘just’ giving up the concept of locality would not be enough to obtain a more complete description of quantum mechanics,” Aspelmeyer told Physics Web. “You would also have to give up certain intuitive features of realism.”
    http://physicsworld.com/cws/article/news/27640
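The size of the violation in these Bell-type tests can be illustrated with a few lines of arithmetic. A minimal sketch, assuming only the textbook quantum correlation E(a,b) = −cos(a−b) for an entangled pair and the standard CHSH angle choices (this is an illustration of the quantum prediction, not a model of any specific experiment quoted above):

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for measurements on an entangled
    # pair along analyzer angles a and b (textbook singlet prediction).
    return -math.cos(a - b)

# Standard CHSH angle choices, in radians.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination: any local realist theory obeys S <= 2.
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ≈ 2.828, i.e. 2*sqrt(2), exceeding the local-realist bound of 2
```

The point of the arithmetic is simply that the quantum prediction lands above the bound that any local realist account must respect, which is what the experiments cited above confirm.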

    Do we create the world just by looking at it? – 2008
    Excerpt: In mid-2007 Fedrizzi found that the new realism model was violated by 80 orders of magnitude; the group was even more assured that quantum mechanics was correct.
    Leggett agrees with Zeilinger that realism is wrong in quantum mechanics, but when I asked him whether he now believes in the theory, he answered only “no” before demurring, “I’m in a small minority with that point of view and I wouldn’t stake my life on it.” For Leggett there are still enough loopholes to disbelieve. I asked him what could finally change his mind about quantum mechanics. Without hesitation, he said sending humans into space as detectors to test the theory.,,,
    (to which Anton Zeilinger responded)
    When I mentioned this to Prof. Zeilinger he said, “That will happen someday. There is no doubt in my mind. It is just a question of technology.” Alessandro Fedrizzi had already shown me a prototype of a realism experiment he is hoping to send up in a satellite. It’s a heavy, metallic slab the size of a dinner plate.
    http://seedmagazine.com/conten....._tests/P3/

    Reflecting light off satellite backs up Wheeler’s quantum theory thought experiment – October 26, 2017 – Bob Yirka
    Excerpt: Back in the late 1970s, physicist John Wheeler tossed around a thought experiment in which he asked what would happen if tests allowed researchers to change parameters after a photon was fired, but before it had reached a sensor for testing—would it somehow alter its behavior mid-course? He also considered the possibilities as light from a distant quasar made its way through space, being lensed by gravity. Was it possible that the light could somehow choose to behave as a wave or a particle depending on what scientists here on Earth did in trying to measure it?,,,
    The experiment consisted of shooting a laser beam at a beam splitter, which aimed the beam at a satellite traveling in low Earth orbit, which reflected it back to Earth. But as the light traveled back to Earth, the researchers had time to make a choice whether or not to activate a second beam splitter as the light was en route. Thus, they could test whether the light was able to sense what they were doing and respond accordingly. The team reports that the light behaved just as Wheeler had predicted—demonstrating either particle-like or wave-like behavior, depending on the behavior of those studying it.
    https://phys.org/news/2017-10-satellite-wheeler-quantum-theory-thought.html

    New Mind-blowing Experiment Confirms That Reality Doesn’t Exist If You Are Not Looking at It – June 3, 2015
    Excerpt: Some particles, such as photons or electrons, can behave both as particles and as waves. Here comes a question of what exactly makes a photon or an electron act either as a particle or a wave. This is what Wheeler’s experiment asks: at what point does an object ‘decide’?
    The results of the Australian scientists’ experiment, which were published in the journal Nature Physics, show that this choice is determined by the way the object is measured, which is in accordance with what quantum theory predicts.
    “It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said lead researcher Dr. Andrew Truscott in a press release.,,,
    “The atoms did not travel from A to B. It was only when they were measured at the end of the journey that their wave-like or particle-like behavior was brought into existence,” he said.
    Thus, this experiment adds to the validity of the quantum theory and provides new evidence to the idea that reality doesn’t exist without an observer.
    http://themindunleashed.org/20.....at-it.html

    “Thus one decides the photon shall have come by one route or by both routes after it has already done its travel.”
    – John A. Wheeler
    – “Quantum Theory and Measurement,” J.A. Wheeler and W.H. Zurek, eds., Princeton University Press, 1983, p. 182

    “It begins to look as if we ourselves, by our last-minute decision, have an influence on what a photon will do when it has already accomplished most of its doing… we have to say that we ourselves have an undeniable part in what we have always called the past. The past is not really the past until it has been registered. Or to put it another way, the past has no meaning or existence unless it exists as a record in the present.”
    – John Wheeler
    – The Ghost In The Atom – Page 66-68 – P. C. W. Davies, Julian R. Brown – Cambridge University Press, Jul 30, 1993

    Thus, directly contrary to what Hossenfelder tried to imply with her statement that “no one has proved that reality doesn’t exist and no experiment has confirmed this,” Hossenfelder’s belief in ‘realism’ (i.e., that an objective ‘material’ reality exists independent of measurement) is on far shakier experimental ground than she is apparently willing to honestly admit in her present article.

    Quote and Verse:

    “hidden variables don’t exist. If you have proved them come back with PROOF and a Nobel Prize.
    John Bell theorized that maybe the particles can signal faster than the speed of light. This is what he advocated in his interview in “The Ghost in the Atom.” But the violation of Leggett’s inequality in 2007 takes away that possibility and rules out all non-local hidden variables. Observation instantly defines what properties a particle has and if you assume they had properties before we measured them, then you need evidence, because right now there is none which is why realism is dead, and materialism dies with it.
    How does the particle know what we are going to pick so it can conform to that?”
    per Jimfit – UD blogger

    Colossians 1:17
    He is before all things, and in him all things hold together.

  10. 10
    PaV says:

    WJM@8:

    Out of all the ontologies represented by members of this forum, which ontology predicts that all experimental attempts to confirm some form of local or non-local realism would fail?

    And this is the rub: we know that local and non-local realism exists because the world exists; but we can’t “confirm” it. IOW, the world exists without anyone observing it and so is in no need of an “observer.” Yet, only an “observer” can “confirm” that the world contains local and non-local realism.

    It’s always an epistemic problem.

  11. 11
    doubter says:

    WJM@8
    BA77@9

    Observation instantly defines what properties a particle has and if you assume they had properties before we measured them, then you need evidence, because right now there is none which is why realism is dead, and materialism dies with it.

    It strikes me that some of the various “‘physical’ reality is a virtual reality simulation” theories (in addition to WJM’s Mental Reality Theory) appear to neatly explain such weird phenomena, which imply the absence of “local realism.” In the case quoted above, the physical-reality simulation of the world, in order to conserve its processing burden, would forgo computation of possible interactions until they are actually observed. Until the virtual reality simulation system actually did the calculations, the result would simply not exist except as a potential.

    A larger example would perhaps be not calculating in detail the virtual reality of unseen parts of the Universe until astronomers actually observe them through their telescopes: efficiency in the utilization of processing power. This would also involve fundamental lower limits on computational intervals in parameters like time, distance, energy levels, velocities, and so on, to limit the expenditure of hyper-processor execution time (the hyper-processor would be incredibly fast but not infinitely so); hence the observed quantization. Other measures would also have to be implemented to limit expenditure of execution time, like only computing the world state in detail for areas that are actually being observed.

    The lightspeed limit would be simply a limit necessitated by this world simulation processor. We can’t travel faster than the speed of light because if we could, we’d for instance be able to get to another galaxy before the virtual reality simulation can program it. This makes the simulation hypothesis seem even more persuasive, because of explaining the absolute light speed limit as an expected artifact of inherent processing limits on the part of the multiprocessor virtual world/universe simulation, rather than a de facto limit arbitrarily imposed by the design of the cosmos being at least in part in accordance with Einsteinian relativity.

    Energy or matter crossing vast distances of void would appear slowed down, and more slowed down the more void they crossed. The slowdown effect would increase as simulation progresses, as limited computing power needs to process data of increasing complexity spanning increasing simulation space.

    Marcus Arvan’s peer-to-peer (P2P) participatory virtual reality simulation theory (involving multiple separate simulation “users” and processors connected in a network) apparently explains quantum mechanical interactions, as explained at https://fqxi.org/community/forum/topic/1765:

    “Without adequate error protection, different computers on the network can provide their respective users with blatantly contradictory experience. For instance, without adequate error protection, I might experience myself (in, say, the game of Halo) as shooting and killing you before you kill me, but you might experience yourself as shooting and killing me before I kill you. Without adequate error protection – which takes an incredible amount of processing power – users can experience a P2P simulation as an “unplayable” incoherent series of events. But these are just technical problems resulting from limitations in processing power. P2P networked simulations are possible with enough processing power and error-correction – and here’s the crucial thing about them: they reproduce all of the fundamental and most baffling features of quantum mechanics. For consider once again the very structure of a P2P simulation. A P2P simulation is one in which no individual computer represents “the” reality that all users within the simulation experience. On the contrary, the simulated reality just is the network of individual machines connected to one another taking “measurements” of where things are in the environment in real time, utilizing error-correction algorithms to ensure that different machines’ measurements don’t come apart “too far.” Further, notice that in any P2P simulation, there will have to be random divergences between the measurements of different machines, and indeed the same machine at different instants, due to “noise” within the simulation. Because information cannot be transmitted instantaneously, but must instead flow from machine to machine with some (perhaps minute) “time lag”, anytime an observer in a P2P simulation takes a measurement of their external reality, they not only (a) affect the entire network, thus altering measurements taken by others (i.e. a direct analogue of quantum-measurement problems); because of the inherent “noise” within a P2P simulation (each individual computer is continually error-correcting itself against others on the network), it’s also the case that (b) observers will only be able to develop a probabilistic theory of the fundamental properties of their simulated environment. In other words, quantum phenomena – the quantum measurement problem and quantum indeterminacy – emerge naturally and inevitably from any P2P simulation. Not only that: P2P simulations reproduce strong analogues of quantum entanglement. There are clear cases of non-locally entangled states within existing online simulations. I’ve experienced them before when playing Halo. If one steps on a particular patch of ground, another patch of ground elsewhere may instantaneously shift to a different state (with no information transfer observable to individuals in the simulated-world reference frame).”

  12. 12
    William J Murray says:

    PaV said: “And this is the rub: we know that local and non-local realism exists because the world exists; but we can’t ‘confirm’ it.” No. It’s not that we “cannot confirm it.” It has been repeatedly disconfirmed. Some people cling to the notion of realism for various reasons.

    The problem here is that the terms “reality” and the root “real” mean that which has objective, independent existence. Usually, that also means “independent of any mind/observation/experience.” Perhaps we need a new definition of “reality” or “real.” Or a new word. This is why so many are now using the term “mental reality.” It is not our traditional concept of what “reality” means.

    IOW, the world exists without anyone observing it and so is in no need of an “observer.”

    Well, I mean, if you just want to ignore the evidence that demonstrates otherwise, okay.

  13. 13
    William J Murray says:

    Doubter @11,

    I agree that simulation theory is at least a good model of what we are experiencing and how, but the problem (in my perspective, at any rate) is that the simulation model just pushes the problem back a step. Is the world that is operating the simulation a world based on actual matter? Is it simulations all the way up? Etc.

    Also, there’s a lot of other evidence to consider besides that which we get from physicists.

  14. 14
    PaV says:

    WJMurray:

    It’s not that we “cannot confirm it.” It has been repeatedly disconfirmed. Some people cling to the notion of realism for various reasons.

    Succinctly if you can (I can probably fill in the details), how has this been “disconfirmed”?

  15. 15
    William J Murray says:

    PaV:
    The first half of this video explains it well, and includes the actual experiments and shows the published papers. The second half is a philosophical extrapolation of the experimental results into theism, but IMO they get into the weeds there.

    https://www.youtube.com/watch?v=4C5pq7W5yRM

  16. 16
    PaV says:

    WJ Murray:

    Thanks for the link. I’ll take a look.

  17. 17
    PaV says:

    WJ Murray:

    I’ve looked at the video. Thanks. Here’s a video from Sabine Hossenfelder on the quantum eraser experiment. It’s fascinating.

    We’re not supposed to know both the location and the velocity of particles in the quantum realm. Yet bubble chambers give us both. How is this possible? Mott, in 1929, addressed this issue by asking how a spherical, 3D wave function can become a linear track: that is, an alpha particle (a He nucleus, with no electrons) emerges from a radioactive atom and then proceeds through a bubble chamber in a ‘line.’ His answer is that to explain what we “see,” we need to consider not the alpha particle alone, but the entire configuration that exists: the gas molecules in the chamber along with the alpha particle. He then proceeds to show that the ‘probability’ of finding the alpha particle, once it is outside the radioactive nucleus, MUST lie on a line, since the probability wave (i.e., the wave function) vanishes outside a “cone” determined by the location of the gas molecule that is ‘excited.’

    My own sense is that our intuition of time as passing forward in a positive direction forces us to see things as moving, let us say, from left to right. Yet, simultaneously the wave function of any particle can pass from ‘right to left,’ which we would think of as going “backwards” in time. The result is that we’re only seeing “half” of what’s going on. Feynman’s “Path-Integral” approach includes both portions. It would have been interesting to see how Feynman would have interpreted all these experiments.

    Bottom line: (1) we as persons analyze these experiments and ‘see’ interference where we want to, resulting in our ‘blindness’ to all the interference taking place around the experiment, and to interference that has been taking place for (in terms of quantum mechanics) an infinity of time; (2) this means the ‘choices’ of the ‘observer’ are what matter (for they change the overall configuration space of the ‘system’), and not the ‘observation.’ IOW, all is nothing more than material reality interacting with itself. Now, ‘consciousness’ exists outside of this “material reality,” and this gets us back to Descartes and the Idealists that he inspired.

  18. 18
    William J Murray says:

    PaV,
    What I don’t understand from the video is SH’s conclusion at the end about combining the two interference patterns so that they make a non-interference blob. She then uses the coin example and shows that she can “selectively disregard” some of the random coins on the mat to generate an interference pattern.

    What she doesn’t explain – or did I miss it? – was how the beam splitter was making the specific “choice” to separate the combined “beam” (whatever that means, if you’re firing photons one or two at a time) into what resulted on the two screens as two interference patterns. Obviously, the splitter wasn’t designed to do that specifically. I note that photons from both slits are being split to hit both end screens. Why wouldn’t the “split beam” just result in two blobs? What is sorting them out (“selectively disregarding”) into specific interference patterns on the D3 and D4 end screens?

  19. 19
    William J Murray says:

    BTW, here’s Bernardo Kastrup catching Sabine in a flat-out lie:
    https://www.bernardokastrup.com/2022/02/sabine-hossenfelders-bluf-called.html

    Also, Sabine doesn’t believe in free will. She’s not interested in philosophy, as if her entire perspective were not rooted in philosophy. I don’t really see how she can function in terms of logic if she doesn’t recognize the problems with this perspective.

  20. 20
    PaV says:

    WJ Murray:

    This stuff gets murky right away. And I am no expert. But allow me some comments intermixed into your response above.

    WJM:

    What I don’t understand from the video is SH’s conclusion at the end about combining the two interference patterns so that they make a non-interference blob. She then uses the coin example and shows that she can “selectively disregard” some of the random coins on the mat to generate an interference pattern.
    PaV: the “selective disregard,” I believe, has to do with which detector is being used. She’s saying that detector D3 involves one ‘set’ of entangled photons and that detector D4 involves a different ‘set’ of entangled photons. If BOTH detectors are turned on, then both ‘sets’ of entangled photons are involved in the ‘erasing’ of the ‘which-way’ information and, hence, NO interference pattern will show up on the screen.

    WJM: What she doesn’t explain – or did I miss it? – was how the beam splitter was making the specific “choice” to separate the combined “beam” (whatever that means, if you’re firing photons one or two at a time) into what resulted on the two screens as two interference patterns.
    PaV:
    Sabine says toward the end that we’re still left dealing with the quantum “weirdness” of the double-slit experiment. Part of that “weirdness” involves probabilities. When both detectors, D3 and D4, are turned on, all the probabilities are involved and the ‘which-way’ information for ALL the photons is taken away; whereas, when only one of the detectors, either D3 or D4, is used, HALF of the entangled photons lose their ‘which-way’ information, and so an interference pattern appears. However, when you ADD both those patterns, this is the same as adding the two HALVES of entangled photons together, and then ALL of the ‘which-way’ information is gone, so there is no interference, but only a ‘blob.’ That these two “halves” can be added into a pattern associated with what happens when ALL of the photons are affected, even when photons emerge in small numbers over a period of time, is part of the quantum “weirdness,” if you will.
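    PaV’s claim here (that the pattern conditioned on D3 and the pattern conditioned on D4 add up to a structureless blob) can be checked with a toy calculation. This is only a sketch: the 1 ± cos(kx) fringe shapes and the Gaussian envelope are illustrative assumptions of mine, not the actual experimental distributions.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)       # position across the screen (arbitrary units)
k = 4.0                                # fringe spatial frequency (illustrative)
envelope = np.exp(-x**2 / 4.0)         # toy single-slit envelope (assumed Gaussian)

# Pattern conditioned on a D3 click: fringes.  Conditioned on D4: anti-fringes,
# i.e. the same fringes shifted by half a period.
d3_pattern = envelope * (1.0 + np.cos(k * x)) / 2.0
d4_pattern = envelope * (1.0 - np.cos(k * x)) / 2.0

# Adding the two conditioned patterns cancels the cosine terms exactly,
# leaving only the structureless envelope: the "blob."
total = d3_pattern + d4_pattern
assert np.allclose(total, envelope)
```

    The cancellation is exact here by construction; in the real experiment the two conditioned patterns are complementary for the same reason, which is why turning on both detectors (or ignoring the tags altogether) leaves no visible interference.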

    WJM: Obviously, the splitter wasn’t designed to do that specifically. I note that photons from both slits are being split to hit both end screens.
    PaV: There’s only one screen, at the top of the video; the “detectors” are on the right side of the video. That’s not always clear. When precision is absolutely needed, just a little imprecision in language can confuse, and confuse greatly.

    WJM: Why wouldn’t the “split beam” just result in two blobs? What is sorting them out (“selectively disregarding”) into specific interference patterns on the D3 and D4 end screens?
    PaV: the only “blob” is on the screen; the detectors simply detect which slit the photon traveled through [Detectors D1 and D2]. As to when D3 and D4 are being used, when used together, the “blob” emerges on the screen at the top. When D3 and D4 are used independently of each other, an ‘interference pattern’ is seen on the screen to the top. What is sorting them out? Quantum weirdness.

    Again, both the ‘blue’ and the ‘yellow’ photons are ‘combined’ when entering either D3 or D4; but when only one detector is operating, only HALF of the photons lose their ‘which-way’ information. The way in which the two ‘beams’ of entangled photons are combined at detectors D3 and D4 is probabilistically different for each detector, and so TWO ‘sets’ of entangled photons are involved, one associated with D3 and one with D4. Now this is what Sabine is saying. I am not trained sufficiently to confirm this, but it seems to me to be consistent with how probabilities operate in quantum mechanics.

    Hope this helps somewhat.

  21.
    PaV says:

    WJ Murray:

    As to Kastrup’s charge against Sabine, I’ve looked at the relevant part of the video debate and Sabine clearly tells Bernardo that he’s looking at the wrong paper. She then tells him what paper to look at.

    Here’s the paper. If you go to the paper and then search for the word “hidden,” you’ll see that Sabine defines the “hidden variables” much as she does in the video.

    Here’s how the paper ends:

    Exactly when the deviations from quantum mechanics become non-negligible depends on what the hidden variables are; the above introduced toy model cannot answer this question. The toy model is therefore, strictly speaking, untestable, because it does not specify where the distribution of hidden variables comes from. But, as pointed out above and in more detail in [44, 23], if the hidden variables are the degrees of freedom of the detector, it is reasonable to expect that minimizing the variation in the detectors’ degrees of freedom between consecutive measurements will reveal deviations from Born’s rule which cannot be detected by Bell-type experiments [26, 27, 28], no matter how ingeniously conceived and executed.

    [My emphasis]

    The papers that Kastrup cites are the wrong papers, per Sabine. I just think Kastrup was not familiar with her most recent paper. It’s unfortunate that the charge has been levelled. It’s really a misunderstanding.

    Now, I firmly believe in free will, contra Sabine. And there are other things I disagree with her about. But I do enjoy her willingness to speak her mind and to tangle with relevant topics. I’m not the greatest of judges here, but she does seem to pick the right fights as I see it. Maybe that’s because, having looked at her 2020 paper linked to above (I looked just minutes ago), it appears that I see things in much the same way as Sabine.

    By the way, her argument when it comes to all of these “inequalities” (all the way up to and including Leggett’s) has to do with what she calls “statistical independence,” a notion she sees as implicit in all of these equations. She apparently doesn’t buy the notion of “statistical independence” and sees it as a philosophical a priori position that physicists take. That’s as much as I can comment. But for those who are interested, this might be of some importance.

  22.
    ram says:

    WJM: BTW, here’s Bernardo Kastrup catching Sabine in a flat-out lie:

    I like Sabine when it comes to her wheelhouse, but she has a few blank spots beyond that. Her take on consciousness and superdeterminism is obviously wrong to anyone who is deep into the subject. I genuinely feel embarrassed for her. But I will say again: I like her, think she’s a net positive in the world, and admire that she has the guts to buck the status quo on several sub-topics in physics.

    –Ram – Truth at All Costs

  23.
    William J Murray says:

    PaV,
    Thanks for clearing that up for me.

    Here’s where I see the problem. Check me if I’m wrong about this.

    Additionally, we should not lose sight that this is the delayed choice quantum eraser experiment. The original quantum eraser experiment showed what happens when D1 and D2 are used; remove D1 and D2 and all the apparatus at the D3 and D4 end of the experiment, and what you get on the screen is an interference pattern. (the original quantum eraser experiment: https://www.youtube.com/watch?v=l8gQ5GNk16s ) When you add D1 and D2 and activate them, you get a blob. Turn them off, you get an interference pattern.

    I’m going to call the individual photons in the entangled pair E1 and E2; the E1’s are going to the screen and the E2s are being redirected down the alternate path towards the detectors.

    Perhaps this is something simple to point out, but D1 and D2 are simple photon detectors; they determine which slit the original photon went through simply because they are put in the path of the E2s. D1 is in the path of the E2s that came from photons passing through slit 1; D2 is in the path of the E2s that come from photons passing through slit 2.

    Note that according to time-linear cause and effect, whatever we do at D1 and D2 shouldn’t have any effect on what appears on the screen, because D1 and D2 occur after the E1s have already hit the screen. In the delayed-choice experiment, what happens at D3 and D4 is after what happens (or doesn’t happen) at D1 and D2.

    In the delayed-choice experiment, the screen pattern produces a blob after the crystal because D1 and D2 are determining which slit the original photon came from. Remove D1 and D2 and there will be an interference pattern, as per the original quantum eraser experiment.

    The E2s have their “which slit” information, so when you measure their “which slit” information at D1 and D2, their entangled E1s will still have produced a blob.

    If you turn off or remove D1 and D2, and “mix up” the E2s, you can no longer determine which slit the original photon came through. Even the potential for figuring that out has been erased. Supposedly, the mirrors, splitter, and D3 & 4 detectors are set up to split these now mixed-up E2s down the two separate paths to D3 and D4, where the photons are simply registered as hitting the detectors there, but we, the observers, cannot determine which slit the original photon came through because the E2s are mixed up with each other.

    You’re still detecting the photons. The detector device sets (D1 & 2 vs D3 & 4) themselves are not different from each other – they’re just getting hit by E2 photons. The only thing that has changed, other than the addition of the mirrors and beam splitter, is that you have removed the potential of figuring out which E2 came from which slit.

    If the E2s hitting D3 are about a 50-50 mix of slit 1 and slit 2, and simply detecting the photons is sufficient to have produced a blob, then we would see a blob at both D3 and D4 because they would be detecting an equal mixture of slit 1 and slit 2 E2s. Simply detecting them, it turns out, is not enough to produce a blob; you have to be able to know which slit the original photon came through. It doesn’t matter if you do that by turning off D1 and D2 in the original eraser experiment, or by mixing the photons up via the delayed choice apparatus. When you lose the potential for determining which slit the photon came through, you get an interference pattern. It doesn’t matter if you detect which slit the photon goes through before it goes through the slit, after, or deliberately make it impossible to ever figure it out. Whenever that information is known, before or after the photon passes through the slit, before or after an E1 hits the screen, if at some point it becomes known, we lose the interference pattern.
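    WJM’s reasoning above (each detector sees a 50/50 mix, yet sorting by detector recovers fringes while the unsorted total is a blob) can be sketched as a toy Monte Carlo. Everything here is an assumption for illustration: the fringe/anti-fringe distributions, the 50/50 “D3”/“D4” tagging, and the sample sizes. It is not a simulation of the actual apparatus.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy model: each signal photon's screen position is drawn from a fringe
# distribution if its partner ends at "D3", an anti-fringe distribution if
# at "D4" (50/50 tagging, assumed).
tag = rng.random(n) < 0.5  # True -> D3, False -> D4

def sample(fringe, size):
    # Rejection-sample positions from p(x) proportional to 1 +/- cos(x)
    # on [-pi, pi].
    out = []
    while len(out) < size:
        cand = rng.uniform(-np.pi, np.pi, size)
        p = (1 + np.cos(cand)) / 2 if fringe else (1 - np.cos(cand)) / 2
        out.extend(cand[rng.random(size) < p])
    return np.array(out[:size])

pos = np.empty(n)
pos[tag] = sample(True, tag.sum())
pos[~tag] = sample(False, (~tag).sum())

# The unconditioned histogram is nearly flat (the "blob"); conditioning on
# the D3 tag brings the fringes back.
all_hist, _ = np.histogram(pos, bins=20, range=(-np.pi, np.pi))
d3_hist, _ = np.histogram(pos[tag], bins=20, range=(-np.pi, np.pi))
assert all_hist.std() / all_hist.mean() < 0.1   # flat to within noise
assert d3_hist.std() / d3_hist.mean() > 0.3     # strong fringe modulation
```

    The point of the sketch is only that “detecting” every photon is compatible with both outcomes: the same set of hits looks like a blob until it is partitioned by a tag correlated with the fringe phase.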

    Let’s accept Sabine’s assertion (I have no reason to doubt it, though I’ve never heard it before) that the D3 and D4 screen patterns (so to speak) are, individually, interference patterns, but when overlaid produce a blob. Okay. This doesn’t represent an explanation at all because of what I said previously about the 50% mixture of slit 1 E2s and slit 2 E2s hitting both D3 and D4. IMO, it is astounding that this would be the case. It represents an even deeper mystery as to how and why that would happen. That this has to do with different probabilities at D3 and D4 (1) doesn’t explain the apparent retro-causality, and (2) doesn’t change that the availability of the which-way information is causal even in the quantum eraser experiment, because it doesn’t matter how far away you put D1 and D2; as long as that information is determined at some point, you’ll have a blob and not an interference pattern on the screen.

    If the information is always in the E2s, and thus always available, why would we ever get an interference pattern in the eraser experiment? It seems the outcome of the experiment (and of the double-slit one) depends on whether or not the path of the original photon is ever known, before or after the slits.

    As far as the “hidden variable” controversy between her and Kastrup, she says, “…if the hidden variables are the degrees of freedom of the detector,…” If? That means she doesn’t know what the hidden variable is or even where it is. She has not identified the hidden variable; she has pointed to where it might be and what it might be.

  24.
    William J Murray says:

    Ram,
    Yeah, I like her just fine. I also appreciate that she is attempting to challenge the consciousness-centric paradigm that is currently emerging, even though it’s ultimately from a self-defeating philosophical position. Bring on the experiments!!!

  25.
    PaV says:

    WJ Murray:

    Let’s accept Sabine’s assertion (I have no reason to doubt it, though I’ve never heard it before) that the D3 and D4 screen patterns (so to speak) are, individually, interference patterns, but when overlaid produce a blob.

    Yes, Sabine is saying this and I am not familiar with whether this is true or not either. Those familiar with such things will have to tell us if she is wrong or not. So, yes, I’m accepting what Sabine is saying as fact.

    Okay. This doesn’t represent an explanation at all because of what I said previously about the 50% mixture of slit 1 E2s and slit 2 E2s hitting both D3 and D4. IMO, it is astounding that this would be the case. It represents an even deeper mystery as to how and why that would happen.

    I, too, find this experimental behavior to be mysterious–perhaps for slightly different reasons, but, yes, mysterious.

    That this has to do with different probabilities at D3 and 4 (1) doesn’t explain the apparent retro-causality, and (2) doesn’t change that the availability of the which-way information is causal even in the quantum eraser experiment, because it doesn’t matter how far away you put D1 and D2, as long as that information is determined at some point, you’ll have a blob and not an interference pattern on the screen.

    Yes, these are the question marks that quantum theory brings out in these experiments.

    But, as you point out, Sabine thinks that the “hidden variables” are in the detectors. I agree with her. In the paper I linked to, Sabine says, as she does in the video with Kastrup, that the “hidden variables” are “complex numbers that are uniformly distributed inside the complex unit circle.”
    It’s not a precise definition, but it is precise enough to work out certain other details. She concludes by saying that it’s possible that these “complex numbers” emerge from the “degrees of freedom” of the detector.
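    For anyone curious what “complex numbers that are uniformly distributed inside the complex unit circle” means concretely, here is a minimal numerical sketch. It only illustrates that distribution as quoted; the function name and sample count are my own, and this is not her toy model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_unit_disk(n):
    # Area-uniform sampling of the complex unit disk: the radius must be the
    # square root of a uniform draw, otherwise points pile up near the center.
    r = np.sqrt(rng.random(n))
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return r * np.exp(1j * theta)

z = sample_unit_disk(100_000)
assert np.all(np.abs(z) <= 1.0)

# Area-uniformity check: the disk of radius 0.5 has a quarter of the area,
# so about 25% of the samples should land inside it.
inner_fraction = np.mean(np.abs(z) < 0.5)
assert abs(inner_fraction - 0.25) < 0.01
```

    The square-root step is the standard trick for uniform sampling over a disk; drawing the radius uniformly instead would concentrate points near the origin.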

    I haven’t read the paper yet; but, in my view, and as I’ve already stated, seeing things as going BOTH “forwards” and “backwards” in time, what is happening is this: when the experimental components are configured, wave functions exist for all of the atoms that are involved, intermingling (‘interfering’) at the speed of light, both forwards and backwards in time (equivalent to saying, “in both directions”). As I see it, this means that as the photons are “fired” and go through the experimental apparatus, the setup of the apparatus has already determined what will happen – that is, if you set things up one way, then the ‘which-way’ information is “already” lost or “already” in place. This is where the “delayed” portion of the experiment becomes critical, since the “new” wavefunctions might not have enough time to reconfigure the “screen” if time only travels in “one direction.” From these kinds of experiments, most quantum theorists would conclude that the experimental results destroy ‘determinism’ altogether.

    And, now, that’s where Sabine’s comment about the two E2’s producing two different ‘interference’ effects depending on whether the E2’s are detected by either D3 or D4 becomes the most critical of all that’s being considered. If they “add up” as she says, then the initial determinism produced by the actual experimental setup still applies. It has not been abolished.

    Again, this is how I see things, too. But I cannot comment on whether she has stated things correctly or not. I do suppose, however, that if she has NOT stated things correctly, that she will then be quickly corrected.

    If the information is always in the E2s, and thus always available, why would we ever get an interference pattern in the eraser experiment? It seems the experiment (and the double-slit one) shows whether or not the the path of the original photon is ever known, before or after the slits.

    As I just stated, I believe that “measurement” simply involves our intruding on what nature has already settled upon–IOW, the setup of the detectors tells us–from the beginning, whether or not the ‘which-way’ information will be available to the ‘screen.’

    What is a bit “spooky” to me is that we get different, but complementary, ‘interference’ patterns depending on the detector used. Yet, per quantum mechanics, this is how nature “chooses” to operate. Just think: Avogadro’s Number is ~6 x 10^23 molecules per mole. How many molecules are there in a detector whose electrons are sensitive to photons? How many quantum possibilities are there? Well, QM would say that each electron has an infinite number of eigenvalues available to it. So, we have infinity raised to the 10^23 power. Some kind of set of interactions this!

    So, in such a grand complex of possibilities, that two different, additive interference patterns emerge is just something we have to accept about how nature operates. Beyond our ‘pay-grade’! While all of this works itself out “deterministically,” it is a complete wonder to us. It is a “determinism” that is completely beyond us. That is, we’re “free” of this kind of determinism.
