Bell inequality violation finally done right

A few weeks ago, Hensen et al., of the Delft University of Technology (with collaborators in Barcelona, Spain), put out a paper reporting the first experiment that violates the Bell inequality in a way that closes off the two main loopholes simultaneously: the locality and detection loopholes.  Well, at least with ~96% confidence.  This is big news, not only because of the result itself, but because of the advances in experimental technique needed to achieve it.  Last Friday, two renowned experimentalists—Chris Monroe of U. of Maryland and Jungsang Kim of Duke—visited MIT, and in addition to talking about their own exciting ion-trap work, they did a huge amount to help me understand the new Bell test experiment.  So OK, let me try to explain this.

While some people like to make it more complicated, the Bell inequality is the following statement. Alice and Bob are cooperating with each other to win a certain game (the “CHSH game“) with the highest possible probability. They can agree on a strategy and share information and particles in advance, but then they can’t communicate once the game starts. Alice gets a uniform random bit x, and Bob gets a uniform random bit y (independent of x).  Their goal is to output bits, a and b respectively, such that a XOR b = x AND y: in other words, such that a and b are different if and only if x and y are both 1.  The Bell inequality says that, in any universe that satisfies the property of local realism, no matter which strategy they use, Alice and Bob can win the game at most 75% of the time (for example, by always outputting a=b=0).
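
If you want to check that 75% bound yourself, it suffices to check the sixteen deterministic strategies, since shared randomness just mixes deterministic strategies and so can't raise the maximum.  Here's a tiny Python sketch (purely illustrative, nothing to do with the actual experiments):

    from itertools import product

    # A deterministic local strategy is a pair of lookup tables a(x), b(y) with values in {0,1}.
    # Shared randomness only mixes such strategies, so it can't beat the best deterministic one.
    best = 0.0
    for a0, a1, b0, b1 in product((0, 1), repeat=4):        # a(x) = (a0, a1)[x], b(y) = (b0, b1)[y]
        wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                   for x, y in product((0, 1), repeat=2))   # average over the four equally likely (x, y)
        best = max(best, wins / 4)
    print(best)   # 0.75 -- the Bell/CHSH bound under local realism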

What does local realism mean?  It means that, after she receives her input x, any experiment Alice can perform in her lab has a definite result that might depend on x, on the state of her lab, and on whatever information she pre-shared with Bob, but at any rate, not on Bob’s input y.  If you like: a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y.  Likewise, b=b(y,w) is a function of y and w, but not of x.  Perhaps the best way to explain local realism is that it’s the thing you believe in, if you believe all the physicists babbling about “quantum entanglement” just missed something completely obvious.  Clearly, at the moment two “entangled” particles are created, but before they separate, one of them flips a tiny coin and then says to the other, “listen, if anyone asks, I’ll be spinning up and you’ll be spinning down.”  Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly “knows” to be spinning up—oooh, spooky! mysterious!  Anyway, if that’s how you think it has to work, then you believe in local realism, and you must predict that Alice and Bob can win the CHSH game with probability at most 3/4.

What Bell observed in 1964 is that, even though quantum mechanics doesn’t let Alice send a signal to Bob (or vice versa) faster than the speed of light, it still makes a prediction about the CHSH game that conflicts with local realism.  (And thus, quantum mechanics exhibits what one might not have realized beforehand was even a logical possibility: it doesn’t allow communication faster than light, but simulating the predictions of quantum mechanics in a classical universe would require faster-than-light communication.)  In particular, if Alice and Bob share entangled qubits, say $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}},$$ then there’s a simple protocol that lets them violate the Bell inequality, winning the CHSH game ~85% of the time (with probability (1+1/√2)/2 > 3/4).  Starting in the 1970s, people did experiments that vindicated the prediction of quantum mechanics, and falsified local realism—or so the story goes.
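
For concreteness, here's a quick numerical check of that ~85% figure, using one standard optimal strategy (the same measurement angles I spell out in comment #23 below) together with the textbook fact that measuring the state above at angles θA and θB yields equal outcomes with probability cos²(θA−θB):

    import numpy as np

    # Measurement angles for one standard optimal CHSH strategy (see also comment #23 below).
    theta_A = {0: 0.0, 1: np.pi / 4}
    theta_B = {0: np.pi / 8, 1: -np.pi / 8}

    def win_prob(x, y):
        # For (|00>+|11>)/sqrt(2), measuring at angles tA and tB gives equal outcomes with
        # probability cos^2(tA - tB); the players want equal outcomes unless x = y = 1.
        d = theta_A[x] - theta_B[y]
        return np.sin(d) ** 2 if (x & y) else np.cos(d) ** 2

    print(np.mean([win_prob(x, y) for x in (0, 1) for y in (0, 1)]))   # ~0.8536 = (1 + 1/sqrt(2))/2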

The violation of the Bell inequality has a schizophrenic status in physics.  To many of the physicists I know, Nature’s violating the Bell inequality is so trivial and obvious that it’s barely even worth doing the experiment: if people had just understood and believed Bohr and Heisenberg back in 1925, there would’ve been no need for this whole tiresome discussion.  To others, however, the Bell inequality violation remains so unacceptable that some way must be found around it—from casting doubt on the experiments that have been done, to overthrowing basic presuppositions of science (e.g., our own “freedom” to generate random bits x and y to send to Alice and Bob respectively).

For several decades, there was a relatively conservative way out for local realist diehards, and that was to point to “loopholes”: imperfections in the existing experiments which meant that local realism was still theoretically compatible with the results, at least if one was willing to assume a sufficiently strange conspiracy.

Fine, you interject, but surely no one literally believed these little experimental imperfections would be the thing that would rescue local realism?  Not so fast.  Right here, on this blog, I’ve had people point to the loopholes as a reason to accept local realism and reject the reality of quantum entanglement.  See, for example, the numerous comments by Teresa Mendes in my Whether Or Not God Plays Dice, I Do post.  Arguing with Mendes back in 2012, I predicted that the two main loopholes would both be closed in a single experiment—and not merely eventually, but in, like, a decade.  I was wrong: achieving this milestone took only a few years.

Before going further, let’s understand what the two main loopholes are (or rather, were).

The locality loophole arises because the measuring process takes time and Alice and Bob are not infinitely far apart.  Thus, suppose that, the instant Alice starts measuring her particle, a secret signal starts flying toward Bob’s particle at the speed of light, revealing her choice of measurement setting (i.e., the value of x).  Likewise, the instant Bob starts measuring his particle, his doing so sends a secret signal flying toward Alice’s particle, revealing the value of y.  By the time the measurements are finished, a few microseconds later, there’s been plenty of time for the two particles to coordinate their responses to the measurements, despite being “classical under the hood.”

Meanwhile, the detection loophole arises because in practice, measurements of entangled particles—especially of photons—don’t always succeed in finding the particles, let alone ascertaining their properties.  So one needs to select those runs of the experiment where Alice and Bob both find the particles, and discard all the “bad” runs where they don’t.  This by itself wouldn’t be a problem, if not for the fact that the very same measurement that reveals whether the particles are there, is also the one that “counts” (i.e., where Alice and Bob feed x and y and get out a and b)!

To someone with a conspiratorial mind, this opens up the possibility that the measurement’s success or failure is somehow correlated with its result, in a way that could violate the Bell inequality despite there being no real entanglement.  To illustrate, suppose that at the instant they’re created, one entangled particle says to the other: “listen, if Alice measures me in the x=0 basis, I’ll give the a=1 result.  If Bob measures you in the y=1 basis, you give the b=1 result.  In any other case, we’ll just evade detection and count this run as a loss.”  In such a case, Alice and Bob will win the game with certainty, whenever it gets played at all—but that’s only because of the particles’ freedom to choose which rounds will count.  Indeed, by randomly varying their “acceptable” x and y values from one round to the next, the particles can even make it look like x and y have no effect on the probability of a round’s succeeding.
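
Here's a toy Python simulation of exactly that conspiracy (my own illustration, of course—not something attributed to any real experiment): purely classical particles that pre-agree, each round, on the one setting pair they'll "show up" for, win every round that counts, and still make the detection rate look independent of the settings:

    import random

    def conspiracy(rounds=200_000):
        played = won = 0
        seen = {0: [0, 0], 1: [0, 0]}          # for each x: [rounds with this x, rounds that counted]
        for _ in range(rounds):
            x, y = random.randint(0, 1), random.randint(0, 1)
            x0, y0 = random.randint(0, 1), random.randint(0, 1)   # settings this pair agrees to accept
            seen[x][0] += 1
            if (x, y) != (x0, y0):
                continue                        # "evade detection": this run doesn't count
            seen[x][1] += 1
            played += 1
            a, b = 0, x0 & y0                   # pre-agreed answers that win for the accepted settings
            won += (a ^ b) == (x & y)
        return won / played, {x: c / n for x, (n, c) in seen.items()}

    win_rate, detection_rate = conspiracy()
    print(win_rate)        # 1.0: every round that counts is a win
    print(detection_rate)  # ~0.25 whether x is 0 or 1: detection looks uncorrelated with the setting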

Until a month ago, the state-of-the-art was that there were experiments that closed the locality loophole, and other experiments that closed the detection loophole, but there was no single experiment that closed both of them.

To close the locality loophole, “all you need” is a fast enough measurement on photons that are far enough apart.  That way, even if the vast Einsteinian conspiracy is trying to send signals between Alice’s and Bob’s particles at the speed of light, to coordinate the answers classically, the whole experiment will be done before the signals can possibly have reached their destinations.  Admittedly, as Nicolas Gisin once pointed out to me, there’s a philosophical difficulty in defining what we mean by the experiment being “done.”  To some purists, a Bell experiment might only be “done” once the results (i.e., the values of a and b) are registered in human experimenters’ brains!  And given the slowness of human reaction times, this might imply that a real Bell experiment ought to be carried out with astronauts on faraway space stations, or with Alice on the moon and Bob on earth (which, OK, would be cool).  If we’re being reasonable, however, we can grant that the experiment is “done” once a and b are safely recorded in classical, macroscopic computer memories—in which case, given the speed of modern computer memories, separating Alice and Bob by half a kilometer can be enough.  And indeed, experiments starting in 1998 (see for example here) have done exactly that; the current record, unless I’m mistaken, is 18 kilometers.  (Update: I was mistaken; it’s 144 kilometers.)  Alas, since these experiments used hard-to-measure photons, they were still open to the detection loophole.

To close the detection loophole, the simplest approach is to use entangled qubits that (unlike photons) are slow and heavy and can be measured with success probability approaching 1.  That’s exactly what various groups did starting in 2001 (see for example here), with trapped ions, superconducting qubits, and other systems.  Alas, given current technology, these sorts of qubits are virtually impossible to move miles apart from each other without decohering them.  So the experiments used qubits that were close together, leaving the locality loophole wide open.

So the problem boils down to: how do you create long-lasting, reliably-measurable entanglement between particles that are very far apart (e.g., in separate labs)?  There are three basic ideas in Hensen et al.’s solution to this problem.

The first idea is to use a hybrid system.  Ultimately, Hensen et al. create entanglement between electron spins in nitrogen-vacancy (NV) centers in diamond (one of the hottest—or coolest?—experimental quantum information platforms today), in two labs that are about a mile away from each other.  To get these faraway electron spins to talk to each other, they make them communicate via photons.  If you stimulate an electron, it’ll sometimes emit a photon with which it’s entangled.  Very occasionally, the two electrons you care about will even emit photons at the same time.  In those cases, by routing those photons into optical fibers and then measuring the photons, it’s possible to entangle the electrons.

Wait, what?  How does measuring the photons entangle the electrons from whence they came?  This brings us to the second idea, entanglement swapping.  The latter is a famous procedure to create entanglement between two particles A and B that have never interacted, by “merely” entangling A with another particle A’, entangling B with another particle B’, and then performing an entangled measurement on A’ and B’ and conditioning on its result.  To illustrate, consider the state

$$ \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} \otimes \frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}} $$

and now imagine that we project the first and third qubits onto the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}.$$

If the measurement succeeds, you can check that we’ll be left with the state $$\frac{\left| 00 \right\rangle + \left| 11 \right\rangle}{\sqrt{2}}$$ in the second and fourth qubits, even though those qubits were not entangled before.
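
If you'd rather not push the symbols around by hand, here's a short numpy sketch (illustration only) that builds the four-qubit state above, projects qubits 1 and 3 onto the Bell state, and checks what's left on qubits 2 and 4:

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)             # (|00> + |11>)/sqrt(2)

    # Qubit order (1, 2, 3, 4): pair (1,2) is entangled, pair (3,4) is entangled, nothing across.
    psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

    # Project qubits 1 and 3 onto the Bell state by contracting those two axes with <bell|.
    proj = np.einsum('ac,abcd->bd', bell.reshape(2, 2), psi)

    p_success = np.linalg.norm(proj) ** 2
    leftover = (proj / np.linalg.norm(proj)).reshape(4)     # post-measurement state of qubits 2 and 4

    print(p_success)                                        # 0.25: this outcome occurs 1/4 of the time
    print(np.abs(np.vdot(bell, leftover)) ** 2)             # 1.0: qubits 2 and 4 are now a Bell pair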

So to recap: these two electron spins, in labs a mile away from each other, both have some probability of producing a photon.  The photons, if produced, are routed to a third site, where if they’re both there, then an entangled measurement on both of them (and a conditioning on the results of that measurement) has some nonzero probability of causing the original electron spins to become entangled.

But there’s a problem: if you’ve been paying attention, all we’ve done is cause the electron spins to become entangled with some tiny, nonzero probability (something like 6.4×10⁻⁹ in the actual experiment).  So then, why is this any improvement over the previous experiments, which just directly measured faraway entangled photons, and also had some small but nonzero probability of detecting them?

This leads to the third idea.  The new setup is an improvement because, whenever the photon measurement succeeds, we know that the electron spins are there and that they’re entangled, without having to measure the electron spins to tell us that.  In other words, we’ve decoupled the measurement that tells us whether we succeeded in creating an entangled pair, from the measurement that uses the entangled pair to violate the Bell inequality.  And because of that decoupling, we can now just condition on the runs of the experiment where the entangled pair was there, without worrying that that will open up the detection loophole, biasing the results via some bizarre correlated conspiracy.  It’s as if the whole experiment were simply switched off, except for those rare lucky occasions when an entangled spin pair gets created (with its creation heralded by the photons).  On those rare occasions, Alice and Bob swing into action, measuring their respective spins within the brief window of time—about 4 microseconds—allowed by the locality loophole, seeking an additional morsel of evidence that entanglement is real.  (Well, actually, Alice and Bob swing into action regardless; they only find out later whether this was one of the runs that “counted.”)
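
As a sanity check on where a window like that comes from—a back-of-the-envelope of mine, taking the separation between the labs to be roughly 1.3 km (an assumed figure for illustration; the paragraph above just says "about a mile"):

    separation_m = 1.3e3        # rough lab separation assumed here for illustration
    c = 3.0e8                   # speed of light, in m/s
    print(separation_m / c)     # ~4.3e-6 s: the measurements must finish within this light-travel time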

So, those are the main ideas (as well as I understand them); then there’s lots of engineering.  In their setup, Hensen et al. were able to create just a few heralded entangled pairs per hour.  This allowed them to produce 245 CHSH games for Alice and Bob to play, and to reject the hypothesis of local realism at ~96% confidence.  Jungsang Kim explained to me that existing technologies could have produced many more events per hour, and hence, in a similar amount of time, “particle physics” (5σ or more) rather than “psychology” (2σ) levels of confidence that local realism is false.  But in this type of experiment, everything is a tradeoff.  Building not one but two labs for manipulating NV centers in diamond is extremely onerous, and Hensen et al. did what they had to do to get a significant result.
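
To get a rough feel for those numbers—and this is just a crude i.i.d. binomial estimate of mine, not the actual hypothesis test in the paper, with the ~0.80 win rate below an assumed figure for illustration:

    import math

    p0, p_obs, n = 0.75, 0.80, 245     # local-realist bound, assumed experimental win rate, rounds played
    sigma = math.sqrt(p0 * (1 - p0) / n)
    print((p_obs - p0) / sigma)        # ~1.8 sigma above the bound: roughly the ~96%-confidence regime

    # Under the same crude model, the number of rounds needed to clear 5 sigma at that win rate:
    print((5 * math.sqrt(p0 * (1 - p0)) / (p_obs - p0)) ** 2)    # ~1900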

The basic idea here, of using photons to entangle longer-lasting qubits, is useful for more than pulverizing local realism.  In particular, the idea is a major part of current proposals for how to build a scalable ion-trap quantum computer.  Because of cross-talk, you can’t feasibly put more than 10 or so ions in the same trap while keeping all of them coherent and controllable.  So the current ideas for scaling up involve having lots of separate traps—but in that case, one will sometimes need to perform a Controlled-NOT, or some other 2-qubit gate, between a qubit in one trap and a qubit in another.  This can be achieved using the Gottesman-Chuang technique of gate teleportation, provided you have reliable entanglement between the traps.  But how do you create such entanglement?  Aha: the current idea is to entangle the ions by using photons as intermediaries, very similar in spirit to what Hensen et al. do.
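
To make the "photons as intermediaries" idea concrete on the computing side, here's a minimal numpy sketch of a generic one-ebit remote-CNOT protocol (an illustration of the general idea, not the specific scheme of any particular ion-trap proposal): once two traps share a single Bell pair, a CNOT between a qubit in one trap and a qubit in the other needs only local gates, two measurements, and two classical bits.

    import numpy as np
    from itertools import product

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1.0, -1.0]).astype(complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

    def apply_1q(gate, state, q):
        """Apply a 2x2 gate to qubit axis q of a (2,)*n state tensor."""
        return np.moveaxis(np.tensordot(gate, state, axes=([1], [q])), 0, q)

    def apply_cnot(state, c, t):
        """CNOT with control axis c and target axis t."""
        s0, s1 = state.copy(), state.copy()
        sl = [slice(None)] * state.ndim
        sl[c] = 1; s0[tuple(sl)] = 0           # keep only the control = 0 branch
        sl[c] = 0; s1[tuple(sl)] = 0           # keep only the control = 1 branch
        return s0 + apply_1q(X, s1, t)

    def project(state, q, m):
        """Post-select qubit axis q onto measurement outcome m and renormalize."""
        sl = [slice(None)] * state.ndim
        sl[q] = 1 - m
        state = state.copy(); state[tuple(sl)] = 0
        return state / np.linalg.norm(state)

    rng = np.random.default_rng(1)
    psi = rng.normal(size=2) + 1j * rng.normal(size=2); psi /= np.linalg.norm(psi)   # trap-1 data qubit
    phi = rng.normal(size=2) + 1j * rng.normal(size=2); phi /= np.linalg.norm(phi)   # trap-2 data qubit
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)                        # pair shared between traps

    expected = (CNOT @ np.kron(psi, phi)).reshape(2, 2)   # what a direct CNOT on the two data qubits gives

    # Qubit order: 0 = trap-1 data, 1 = trap-1 half of the pair, 2 = trap-2 half, 3 = trap-2 data.
    for m1, m2 in product((0, 1), repeat=2):              # verify all four measurement branches
        s = np.kron(np.kron(psi, bell), phi).reshape(2, 2, 2, 2)
        s = apply_cnot(s, 0, 1)               # trap 1: CNOT from its data qubit onto its half of the pair
        s = project(s, 1, m1)                 # trap 1: measure that half in Z, send the bit m1 across
        if m1: s = apply_1q(X, s, 2)          # trap 2: X-correct its half of the pair
        s = apply_cnot(s, 2, 3)               # trap 2: CNOT from its half onto its data qubit
        s = apply_1q(H, s, 2)                 # trap 2: measure its half in the X basis ...
        s = project(s, 2, m2)                 # ... getting m2, which is sent back to trap 1
        if m2: s = apply_1q(Z, s, 0)          # trap 1: Z-correct its data qubit
        out = s[:, m1, m2, :]                 # joint state of the two data qubits
        assert abs(np.vdot(expected, out)) ** 2 > 1 - 1e-9

    print("remote CNOT reproduced in all four measurement branches")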

At a more fundamental level, will this experiment finally convince everyone that local realism is dead, and that quantum mechanics might indeed be the operating system of reality?  Alas, I predict that those who confidently predicted that a loophole-free Bell test could never be done, will simply find some new way to wiggle out, without admitting the slightest problem for their previous view.  This prediction, you might say, is based on a different kind of realism.

158 Responses to “Bell inequality violation finally done right”

  1. Mike Says:

    No it will not convince everyone that local realism is dead, but not because of this “loophole-free” Bell test. The argument that realism is incompatible with quantum mechanics and locality depends on a particular formalization of the concept of realism. If one chooses to reject counterfactual definiteness, there is no non-locality problem. On the other hand, one is introducing intrinsic randomness into our picture of the world. In the many-worlds interpretation of quantum mechanics, reality consists only of a deterministically evolving wave function, and non-locality is a non-issue. 🙂

  2. Michael Gogins Says:

    Thanks for this post. It was somewhat clearer for a non-expert than others I have read on this topic.

    Are there any plans to do experiments that will achieve a much higher level of significance? That would seem worth doing to me.

    Regards,
    Mike

  3. GASARCH Says:

    (wow- I’m the first commenter!)
    The people who didn’t believe Bell’s Theorem. Were they
    1) Serious scientists who raised a fair point that led to good
    research, and their future objections might still lead to good research.
    2) serious scientists who… but should now shut up.
    3) Serious scientists, but didn’t add much to the discussion since
    the experiments would have happened anyway.
    4) quasi-cranks whose complaints, even so, led to good research.
    5) quasi-cranks who didn’t contribute – the research would have happened anyway.
    6) CRANKS! who spouted nonsense about philosophy and free will.

    Probably some of the above, but my serious-science question is,
    were some of them serious people who raised fair objections?

    5)

  4. Joshua Zelinsky Says:

    Well, there’s still the loophole that no one has ruled out the Cartesian demon who is fooling everyone’s senses.

    Joking aside, this is very impressive work and it shows a real attention to detail and (I gather from the people who are actually experimentalists) serious technical care and cleverness.

    I look forward to Bell inequality experiments using humans on different objects. After we do Earth and the Moon we can do Earth and Mars.

  5. Fazal Majid Says:

    At some point we need to draw a line and put the local realists in the same box with creationists, birthers and other conspiracy theorists who always come with ever-more-contrived objections.

  6. Observer Says:

    Is this the work that is finally going to bring a Nobel Prize for Bell inequalities and test experiments? Or will the experts take the view that the hidden variable theories were ruled out in 1930 and the Nobel Prizes were already given in 1932 and 1933?

  7. Scott Says:

    Mike #1: Of course you’re right, in that once you’re willing to declare that the quantum state IS the reality, there’s no further problem from these quarters. On the other hand, I did define in the post exactly what’s meant by “local realism”—and by this point, that term is sufficiently entrenched that objecting to it is sort of like objecting that a hot dog isn’t a dog.

  8. Scott Says:

    Michael #2: Had it been me, I would’ve held out for higher significance—after all, isn’t the whole point of this game to pound the skeptics into total submission? But it is what it is, and alas, I don’t know of any current plans to go for higher significance. Still, I’m sure someone will do it eventually, as the technology improves and these experiments get less and less hard.

  9. Scott Says:

    GASARCH #3: I think you had the whole range, but drifting more from 1) towards 6) as time went on.

  10. Scott Says:

    Observer #6: I have zero special insight about such things. FWIW, I would’ve given the prize to John Bell and Alain Aspect in the 1980s. It would hardly be the only time that the prize would’ve been given for noticing, and then experimentally confirming, an important phenomenon that already follows from the basic rules of quantum mechanics. (Bell passed away in 1990, but Aspect is still very much alive.)

  11. Ashley Says:

    Hi Scott,

    Thanks for this lucid and detailed summary! And it turns out that your prediction has already come true

  12. Sandro Says:

    An important milestone indeed. Now we’re left with the properties that are in actual tension here: give up realism, give up locality, or give up the illusion that we were ever free to choose experimental parameters.

    As much derision as the last position gets, I think it will ultimately yield some interesting insights.

  13. keith Says:

    So is everything entangled since the first observer? I don’t understand reality.

  14. fred Says:

    Ah, okay, I was gonna ask… if you reject “intrinsic randomness” (that is “one of two alternatives can happen without any cause”), x and y are always related in some way, just like flipping a coin in Brazil and simultaneously flipping a coin in Tokyo both depend on the same world state.
    But I guess I won’t ask it.

  15. fred Says:

    Btw, “intrinsic randomness” (i.e. producing an effect without a cause) is of the same nature as the debate about the soul – “how could a soul act on the physical world and yet be somehow independent of it”?

  16. Koray Says:

    Scott,

    In the paragraph where you wrote about the “doofus physicists”, why do you say that if we believe that the particles colluded about their spin directions, then we can expect to do at best 75% at the game? Isn’t the 75% the limit score when the two input variables are independent?

  17. Scott Says:

    Koray #16: No, the 75% limit applies even when the particles are classically correlated—which is different from entangled. Indeed, the point of the Bell inequality (if you like) is precisely to tease out the difference between classical correlation and quantum entanglement.

    As a secondary point, the input variables are not the same thing as the particles’ spin directions. The “input variables” (in the language of the CHSH game) correspond to the “detector settings” (in the usual physics language).

  18. Joshua Zelinsky Says:

    Possibly silly question with a possibly silly followup: none of the states involved in the CHSH game involve cancellation, yes? That is, the states in question are things like (1/sqrt(2))(|00> + |11>), which have non-negative amplitudes, right? But the fact that amplitudes can cancel while classical probabilities can only add is a critical part of what makes quantum computing (potentially) possible. Should this get more emphasis, that the issue of entanglement and the issue of canceling amplitudes are essentially different aspects of quantum mechanics? Or am I off base here?

  19. Bram Cohen Says:

    Count me as a Bell inequality believer but a bit of a quantum computing skeptic at *some* scale. This experimental result, while expected, is a big relief! Also the experimental setup is very cool and a major step towards scaling QC.

  20. Scott Says:

    keith #13:

      So is everything entangled since the first observer?

    Yeah, at least in the many-worlds perspective, there’s a huge amount of entanglement all across the universe; indeed there was since shortly after the big bang, long before the first observers slithered out of the primordial soup. Ironically, though, precisely because entanglement is so pervasive, we usually don’t observe it! The principle called monogamy of entanglement says that you can’t observe maximal entanglement between two systems A and B, if they’re also entangled with a third system C anywhere else. And more broadly: the more A and B are entangled with their external environments, the less entangled they can be with each other. This means that, if you want to observe entanglement between A and B (e.g., to do a Bell experiment), then you need to prevent A and B from becoming entangled with anything else, by maintaining the quantum coherence of the joint state AB. This typically only happens in laboratory conditions, for A and B that are far apart and that you could measure in practice.

  21. Scott Says:

    fred #14: That’s correct; you’re pointing out what’s sometimes called the “free will loophole” (though it has almost nothing to do with free will in the human sense). Namely, if you don’t even accept that the experimenter has the freedom to set x and y independently of each other—i.e., if you believe that everything, no matter how remote, is linked together by some gargantuan cosmic conspiracy—then obviously the entire reasoning behind the Bell experiment collapses. At least one famous physicist, Gerard ‘t Hooft, actually advocates such a cosmic conspiracy, under the name “superdeterminism.”

    To my mind, however, a decisive reason to reject superdeterminism is that once we’ve adopted this way of thinking, we could use it to explain any experimental result; absolutely nothing seems to be off-limits. For example, we could let Alice send faster-than-light signals to Bob, by simply postulating that whenever Alice thinks she’s choosing x, the cosmic conspiracy will have foreseen that, and will cause x to appear in Bob’s detector as well! So, why can Alice and Bob violate the Bell inequality but not send superluminal signals? Unlike the conventional quantum mechanic, the superdeterminist has no answer to that or any similar question. I.e., much like solipsism, or brain-in-a-vat-ism, or the-universe-was-created-last-Wednesday-ism, superdeterminism has the twin properties that

    (1) it can never be logically refuted, but
    (2) it’s been completely scientifically fruitless—indeed, if you accept such things, then it seems like you might as well throw in the towel and quit doing science!

    Or to put it differently: if you’re forced to superdeterminism, that strikes me as just a convoluted way of admitting you’ve lost the argument. If that’s your only way to save local realism, then clearly we can stick a fork in local realism for any conceivable empirical purpose.

  22. Sniffnoy Says:

    Josh #18: I think it does involve cancellation, actually. Alice generates her bit by measuring in the standard basis, and Bob measures in a rotated basis. But if we break down what measuring in this rotated basis entails, in terms of the standard-basis coefficients, it will involve subtraction. Equivalently, we can think of this as Bob first doing a rotation and then measuring in the standard basis; the rotation matrix will include negative entries, i.e., performing the rotation will involve subtraction.

  23. Scott Says:

    Joshua #18: You’re absolutely right that “every nontrivial quantum effect involves amplitudes cancelling somewhere.” And CHSH is no exception!

    In the post, I didn’t describe the actual strategy that Alice and Bob use to measure their entangled qubits |00⟩+|11⟩. But it’s this: Alice measures in the “standard” basis if x=0 and in a π/4-rotated basis if x=1, while Bob measures in a π/8-rotated basis if y=0 and in a -π/8-rotated basis if y=1. The calculation of what happens when x=y=1 involves an interference effect.

    [crossed with Sniffnoy’s comment saying the exact same thing…]

  24. Raoul Ohio Says:

    For those in need of some merriment to break up the hard thinking required for this topic, follow up Ashley’s lead in #11.

    This level of Quantum Theory is pretty hard for me, because I have always studied things where my intuition was a decent guide.

  25. Sandro Says:

    Scott #21, “Unlike the conventional quantum mechanic, the superdeterminist has no answer to that or any similar question”

    I’m not sure that’s a fair characterization, because there haven’t been any serious attempts at formulating a superdeterministic theory. If ‘t Hooft really does achieve that goal, then he will indeed provide a set of answers at least as good as the answers provided by QM.

    Any decent such theory will posit axioms, and so it can indeed answer “that or any similar question”. Ultimately, a superdeterministic theory would no doubt produce some compelling answers to questions that orthodox QM does not address, but which we’ve just become accustomed to hand waving away; it will probably also provide unsatisfactory answers to other questions that many consider orthodox QM to adequately answer.

    In other words, just like every other interpretation of QM.

    Regarding superdeterminism explaining “any” experimental results, well sure, but you can “explain any results” with non-superdeterministic theories too. We weed out unsatisfactory superdeterministic theories much like we weed out any scientific theories: predictive power, explanatory power, parsimony, etc. The correlations needed to support superdeterminism must be minimized for parsimonious reasons, just like any other scientific theories.

    I’m also mystified at the common perception that superdeterminism undermines the scientific process, an opinion you also obliquely mention. I’m fairly certain you’re familiar with genetic algorithms, hill climbing, and other deterministic search algorithms. Just because our search is deterministic does not mean it’s inexhaustive!

  26. Jay Says:

    > the experimental setup is very cool and a major step towards scaling QC.

    Should we update our prior for QC soon?

  27. Craig Gidney Says:

    Could they have doubled or quadrupled their hit rate if they used all four entanglement swapping measurement outcomes?

    My understanding is that entanglement swapping always succeeds at creating entanglement, but the type of entanglement you get varies (i.e. |00>+|11> vs |01>+|10> vs |01>-|10> vs |00>-|11>). The measurement just tells you which case you ended up in.

    … I guess each of those states would win at *different* games. And it’s not “really” a CHSH test if you’re going to throw in extra rules like “also there’s some third unrelated yet somehow helpful referee somewhere deciding whether you’re playing the I, X, Z, or XZ variant of the game”. Also you probably wouldn’t use the same measurement axes in all four cases, and their setup required the measurement to happen before the entanglement-swapping result could be communicated.

  28. Job Says:

    Bell’s theorem always reminds me of a topic in data synchronization.

    In collaborative systems, where users perform non-commuting operations concurrently and in real-time (without locking or taking turns), eventual consistency is reached by having each client transform collaborator operations against their own, as they are received. This is known as operational transformation (OT).

    If Alice applies operation X and Bob applies operation Y at the same time, then:
    – Alice’s history will start with X followed by Transform(Y, X)
    – Bob’s history will start with Y followed by Transform(X, Y)

    Both users will end up with the same state despite taking different paths.

    I’m mentioning this in the context of Bell’s theorem because I’ve wondered whether, in a simulation of the universe as a collaborative system, the transformation step could surface in ways such as a violation of Bell’s inequality – despite being otherwise completely imperceptible to anyone inside the system.

    In your CHSH game analogy, Alice’s operation computes and stores f(x) into variable a and Bob’s operation computes and stores f(y) into variable b.

    Note that Alice and Bob are inside their respective local universes which are made consistent because of OT, so they’re unaware that their histories are different.

    In Alice’s universe a=f(x) happens first and is followed by Transform(b=f(y), a=f(x)).
    In Bob’s universe b=f(y) happens first and is followed by Transform(a=f(x), b=f(y)).

    Alice and Bob’s interaction with a shared entangled pair is the type of action which might require transformation. Given that the transform function can leak information between operations, the result would potentially allow Alice and Bob to beat the CHSH game ~85% of the time – and it would be incomprehensible to us.

    I’ve seen several interpretations of Quantum Mechanics as favoring the idea of the universe as a simulation – this could be another one.

  29. luca turin Says:

    Wow! I understand it now. Thank you Scott. Will likely have to re-read your post again tomorrow to re-understand it, but that’s just age.

  30. Scott Says:

    Craig #27:

      Also you probably wouldn’t use the same measurement axes in all four cases, and their setup required the measurement to happen before the entanglement-swapping result could be communicated.

    Yeah, I was going to point out that issue, when I read further in your comment and noticed you did so yourself. 🙂

  31. Scott Says:

    Sandro #25:

      I’m not sure that’s a fair characterization, because there haven’t been any serious attempts at formulating a superdeterministic theory. If ‘t Hooft really does achieve that goal, then he will indeed provide a set of answers at least as good as the answers provided by QM.

    OK, fine. I guess it’s hard to debate this in the abstract, without having even one successful example of a superdeterministic theory of anything. So maybe it’s better to say that I regard this as spectacularly unpromising—as a cure a million times worse than any disease that might motivate it. You don’t like the ‘nonlocal’ aspects of entanglement? Well, superdeterminism requires a nonlocality that’s absolutely terrifying compared to that nonlocality, that controls our measuring devices and random-number generators and brains!

    It’s as if someone said: “I’m working on an explanation for why things fall down that says it’s all because of a fine-tuned conspiracy in the initial conditions of the universe. By the laws of physics, free-falling objects would be just as happy to fall away from the earth as towards it, or sideways, etc. It’s just that, in any experiment we can perform, we’ll always set things up in just such a way that the objects will fall down. And, yes, part of that is that the amazingly fine-tuned initial condition constrains our own brains, preventing us from choosing to do certain experiments that we could otherwise easily do that would cause objects to fall up. This theory, once I develop it, will be preferable to ugly ‘gravity’ theories, because it will eliminate the otherwise-unexplained asymmetry between up and down.”

    What is there to say, except that given this choice, I’ll take the asymmetry between up and down, along with a billion other asymmetries? 🙂

  32. Joshua Zelinsky Says:

    Sniffnoy and Scott,

    Excellent. Thanks for the clarification.

  33. John Sidles Says:

    I’d like to join with Bill GASARCH (#3) in wondering what recommendations Shtetl Optimized might suggest in regard to the best literature grappling with non-locality?

    My own BibTex database is replete with hundreds of references that grapple with state-space geometry (linear and otherwise), but sadly the database contains only a bare handful of “positive, hopeful, generous, broad-minded” references that sympathetically survey the literature of causal and informatic non-locality.

    Among the best of these sparse references (as it seems to me) is the chapter More Closing Words that concludes physicist Anthony “Tony” Zee’s much-cherished (by many folks, including me) textbook Quantum Field Theory in a Nutshell (2nd Edition, 2010).

    Zee’s discussion clarifies (for example) the crucial point that “nonlocal” quantum field theory and “nonlocal” quantum information theory mean two very different things by “nonlocal” … and that (terrific as it is) Zee’s textbook only begins to discuss the relation between these two “nonlocals”.

    Conclusion  Discussions like Zee’s show us that experiments like Delft’s expand rather than contract the many extant mysteries — both mathematical and physical — that are associated to the word “nonlocal.”

  34. fred Says:

    Scott #31

    “[…]“I’m working on an explanation for why things fall down that says it’s all because of a fine-tuned conspiracy in the initial conditions of the universe. By the laws of physics, free-falling objects would be just as happy to fall away from the earth as towards it, or sideways, etc.””

    What about seeing superdeterminism as a theory where all the possible universes just happen, like an extension of many-worlds?
    To me, the Multiverse interpretation of QM gets rid of the paradoxical “intrinsic randomness” (an alternative choice is made without any cause) by saying that there’s no choice – both alternatives “happen”, so the paradox vanishes.

    But if the universe is “mathematical” in nature, one could take this one step further and say that every possible universe “exists”, in terms of size, initial conditions, evolution rules, etc, like infinite variations of the game of life. The vast majority are “sterile” soups but a tiny minority are “rich”, exhibiting self-similarity patterns (the Mandelbrot set is an analogy). Of course the difficulty is that, from inside one instance of a universe, how do we categorize things happening as a result of initial conditions vs the dynamic evolutionary rules of a particular universe (but the difference becomes almost meaningless).

    Anyway all this is more philosophy than science (I don’t think we’ll ever prove experimentally that “many-worlds” is valid), but at least it gets rid of that “intrinsic randomness”, i.e. from an existentialist point of view those theories make it maybe easier to accept the human condition – “I may have been totally screwed in *this* life, but somewhere else there may be hope and happiness”.

  35. Sandro Says:

    Scott #31, “And, yes, part of that is that the amazingly fine-tuned initial condition constrains our own brains, preventing us from choosing to do certain experiments that we could otherwise easily do that would cause objects to fall up.”

    The fact that it’s not so easy to do those other experiments is what makes it so fantastically interesting!

    One question that interested me concerned the implications for quantum computing. ‘t Hooft replied that he believed general QC probably wouldn’t pan out beyond a certain number of qubits.

    I’m rather more of the opinion that the rich correlations of superdeterminism mean part of the answers we’re interested in solving have been precomputed by the correlations during experiment setup, which is where quantum speedup actually comes from. That seems like a pretty interesting and unique answer among interpretations of QM.

  36. Peter Morgan Says:

    Scott #21, I largely agree with Sandro #25, but will add that “superdeterminism” ought to be called “stochastic superdeterminism”, since only probabilities of events need to be determined to reproduce experimental results. That is, we can have free will at each moment, but the statistics of the choices we make are what they are.

    The point of superdeterminism to me is not that we should be constructing superdeterministic models, it’s that from a determinedly classical perspective it’s one way in which we can reasonably shut up and calculate using the empirically effective signal processing technology of QM Hilbert spaces. Superdeterminism is no different from a common 19thC belief about Physics, that initial conditions determine the future, so the idea that one couldn’t do Physics in such a case seems to ignore that history.

    One perspective is to note that the vacuum state of a quantum field includes nonlocal correlations that decay exponentially (massive free field) or inverse polynomially (massless fields) that can be used as a resource. Bell-violating states are modulations of the vacuum that, with significant experimental effort, use the decaying nonlocal correlations (again, as a resource) to construct states for which the nonlocal correlations do not decay as fast (in particular, lasers and/or wave guides are essential to reduce the effective dimensionality of the quantum system to 1). [Perhaps it’s as well also to remind ourselves that although field operators are local in the sense of zero commutation at space-like separation, particle creation and annihilation operators are not local in that sense.] Classical equilibrium random fields also have nonlocal correlations that can be used as a resource in the same way (you can see my EPL, 87 (2009) 31002 for an isomorphism between Hilbert spaces, though it has issues).

    There is a further alternative, one that AFAIK is not in the literature, which is that *incoherent* /very/ much faster than light propagation (sound: 300 m/s; light: 300 million m/s; incoherent causal propagation: 300 septillion m/s, say) at /very/ small scales (Planck scale or much smaller) could be consistent with an emergent physics that is Einstein-local-in-the-sense-of-no-FTL-signalling. Again, this is only a justification for shut up and calculate that a determined classical physicist can adopt. In any case, many, perhaps all approaches to quantum gravity introduce Lorentz symmetry breaking at the Planck scale and smaller, in ways that are intended to minimally affect the emergent Lorentzian Physics by being stochastically incoherent except for its Lorentzian symmetry.

    A useful further resource for Gregor Weihs’ experiment, which you link to in the arXiv version, is his dissertation, http://www.uibk.ac.at/exphys/photonik/people/gwdiss.pdf, which contains his PhysRevLett and a lot of details in German.

  37. Peter Morgan Says:

    Scott #31, no experiment comes close to using only brains to determine measurement choices. Given how slow our brains and bodies are, that would seem to require space-like separation of at least several light seconds. Directions are chosen by random number generating apparatuses, most likely driven by environmental noise (qv Gregor Weihs). If the noise measurement is sensitive at the quantum mechanical scale, the environmental noise is Lorentzian, with amplitude determined by Planck’s constant.

  38. Scott Says:

    Peter #37: So OK then, could I get you on record for a prediction that, when this experiment is done with a couple light-seconds of spacelike separation, and with human observers as the measuring devices, then the result will be that local realism is upheld?

    Note that, if we were willing to recreate the Apollo program, then we’d probably have the technology to do this experiment today—albeit, doing it in a way that also closed the detection loophole would be harder. Anyone have $500 billion or so lying around?

  39. Scott Says:

    fred #34: You’re talking about MWI, which is a fine, venerable option, but a completely different thing from superdeterminism—and ‘t Hooft himself would be the first to agree. If you read his papers, he really does want an essentially classical deterministic world, a century of quantum mechanics be damned.

  40. Scott Says:

    Sandro #35:

      I’m rather more of the opinion that the rich correlations of superdeterminism mean part of the answers we’re interested in solving have been precomputed by the correlations during experiment setup, which is where quantum speedup actually comes from. That seems like a pretty interesting and unique answer among interpretations of QM.

    I’d say that this stance makes you a more logical, intellectually consistent superdeterminist than ‘t Hooft himself! When you read ‘t Hooft’s early papers on this subject, he’s clearly looking for some way to make quantum computing fail, and that at least partly motivates his interest in classical cellular automaton models. As incredible as it sounds, he simply didn’t notice or recognize Bell violation, at that time, as a central issue that any classical CA model would need to address. Only later, after dozens of other physicists criticized him for not being able to explain the Bell violation, does he appear to have added on superdeterminism as a way around that problem (!). But in my opinion, if he wanted to be consistent about it, he would now notice—as you have—that once you’ve gone down that route, you could just as easily account for a quantum computer’s working as standard QM predicts, by saying that the right answers were precomputed at the beginning of time. And more broadly, there would seem to be no reason to predict any specific empirical difference between your superdeterminist theory and QM. Why not keep your theory permanently safe from falsification? 🙂

  41. domotorp Says:

    This might not be the best place to ask, but does this allow one to make a quantum lottery/bingo/roulette with appropriate rules where it is possible to cheat? Doubters can be offered odds that give them a positive payout supposing local realism, but that lose due to quantum entanglement.

  42. Travis Says:

    The terminology “local realism” may indeed be entrenched (as noted in comment #7) but it is nevertheless so misleading that it would almost certainly be better to avoid using it. The definition of “local realism” given in the original post — basically a=a(x,w) and similarly for b — can be understood as a conjunction of locality and determinism. So I guess the “realism” in “local realism” is supposed to mean “determinism”. And that is more or less what’s implied by the wikipedia link — the “realism” in “local realism” means “hidden variables” (whose purpose is surely, after all, to restore determinism to otherwise-indeterministic QM).

    But it is well known that even *indeterministic* theories obeying a natural notion of locality cannot agree with the QM predictions — or now we should say, and more importantly, cannot agree with experimental observations. EPR of course already pointed this out in 1935: if you take Bohr’s “completeness” doctrine seriously, i.e., reject the idea that QM’s wave function descriptions need to be supplemented with some sort of hidden variables, then QM is already a nonlocal theory.

    There is admittedly some (almost partially legitimate) controversy about how exactly one should define locality for indeterministic theories. But in the definition that is in my opinion the clearest and the best — the one given by Bell in 1976 and then elaborated most clearly in his 1990 paper “la nouvelle cuisine” — it is just completely unambiguous that orthodox QM (without any hidden variables) is nonlocal, and it is indeed straightforward to derive the CHSH inequality from this notion of locality (without anything like an extra assumption of determinism).

    So by all means let’s celebrate this great experimental accomplishment and enjoy watching the conspiracy theorists search around for ever tinier holes to bury their heads in. But let’s also make sure to get the significance of the result right. It’s not just that local “realist” theories (or local deterministic theories, or local hidden variable theories) are now known to be at odds with experimental fact. All of those are true, but they are true in the same way that it is true that, say, theories that were invented on Tuesdays are now known to be at odds with experimental fact, or that theories with at least two v’s in their names are now known to be at odds with experimental fact. All of these are true because what is really established by Bell’s work and these experiments is: *all* local theories (whether deterministic or not, whether invented on a Tuesday or not, etc.) are incompatible with experimental fact.

    (Well… the one extra assumption that *is* needed is something along the lines of: each individual measurement has a unique definite outcome. So many-worlds type theories are indeed an exception to what I just wrote. That said, it is very difficult to even know what “locality” would mean in the context of many worlds type theories — especially extant formulations with none of what Bell called “local beables”. But that’s too big a tangent for this comment…)

  43. Peter Morgan Says:

    Scott #38. I agree you could take me as saying that, but no, because, if I take a physicalist stance, which I’m willing but don’t feel compelled to take, then I would take us to be modulations of the vacuum no less than is an environmentally driven number generator. Recalling my #36, I say, inter alia, that QM is a very good signal processing formalism that apparently does include models that very well reflect the measured symmetries of the effective stochastic dynamics and that it makes sense for a determinedly classical physicist to shut up and calculate without any necessity to commit to QM being incomprehensible in principle.

    Two people who generate random numbers (which most people would do very badly) would have to do so in such a way that the efficiency of the two detectors at detecting and storing the details of appropriate pairs of events is not much impaired. I haven’t ever seen an analysis of what constraints that would produce, but obviously all 1s, if I’m feeling ornery, won’t do, so there are at least some constraints. For the experiment to obtain a definitely non-classical Bell-violation, perhaps I have to operate /as if/ I’m environmentally driven. The Aspect experiment was quasi-periodic, quasi-random, which was taken to be problematic, so presumably so also would any non-random statistics of a person-generated sequence of choices.

    I suppose I take the free will thing to be slightly wild rhetoric on the part of Physicists, given its distance from experiment. In general, when scientists go beyond experiments that have actually been done in public pronouncements, as opposed to predictions that are plausibly intended for experimentalists to actually check, there is more of a possibility that they will lose the confidence of the public, in proportion to how far the public extrapolation is obviously beyond what has been tested.

  44. Jay Says:

    Scott,

    About 10 years ago, you wrote a hilarious fable criticizing how physicists (mis)use the concept of an impossibility proof (Five-Ox Theorem! Ooops… but for Elephants!)

    https://scottaaronson-production.mystagingwebsite.com/?p=95

    But isn’t the use of Bell’s theorem a perfect example of what young Scott was criticizing? (Local realism is wrong! Ooops… but for CFD!)

  45. Tim Maudlin Says:

    Just to second what Travis says above (# 42), the terminology “local realism” is terrible due to being misleading. This is already shown in comment 1: just give up (some form of) realism and you can save locality. But there is no sense of “realism” (particularly nothing to do with CFD) on which that is true. CFD, like determinism (from which it follows), is not a presupposition of Bell’s theorem but a consequence of the locality condition and quantum predictions, as shown by EPR already in 1935. Bell presupposes only locality (not “realism”). You can say that he also presupposes that experiments have unique outcomes (ruling out MWI), but that is part of standard quantum theory. It shows up in quantum computation as the existence of “measurement gates”. Whether MWI is local, and whether it actually makes any predictions, is another question. If there are not unique outcomes on each side, then it is unclear what the prediction of any “correlation” between the outcomes could even mean. Quantum theory does predict such correlations.

  46. fred Says:

    What amuses me is the passion and even judgmental aspect attached to the question:

    Scott said

    “Then the naïve, doofus physicists measure one particle, find it spinning down, and wonder how the other particle instantly “knows” to be spinning up—oooh, spooky! mysterious!”

    ” if people had just understood and believed Bohr and Heisenberg back in 1925, there would’ve been no need for this whole tiresome discussion. ”

    “To someone with a conspiratorial mind […]”

    or Travis #42

    “So by all means let’s celebrate this great experimental accomplishment and enjoy watching the conspiracy theorists search around for ever tinier holes to bury their heads in.”

    This suggests that the people who don’t see how obvious this is must have a brain similar to the people who doubt the moon landings?
    But clearly lots of brilliant scientists have struggled with this (Einstein, Bohm) and each new generation of physicists has to face it on its own, and maybe help close the debate once and for all (like Bell).
    At the very least some interesting experimental breakthroughs are being made.

  47. Scott Says:

    fred #46: The reason for the passion and judgment, I would say, is that this subject has been “crackpot catnip” for decades, more so than almost anything else in physics. When you want to discuss recent developments—e.g., the closing of loopholes, PR boxes, parallel repetition, verifying untrusted quantum devices, etc.—it can be frustrating to have people insist that Bell made some trivial arithmetic mistake (!), or repeat Einstein’s original pronouncements from 1935 as if the conversation hadn’t advanced since then. Einstein was … well, an Einstein, 🙂 and was 100% correct (and way ahead of his time) to be worried about this. But today, the undergrads in my 6.045 class understand entanglement much better than Einstein did.

  48. Nick Read Says:

    Scott,

    IIRC, according to historian of science Arthur Fine in his book “The Shaky Game”, Einstein didn’t see the actual EPR paper until after it was published—so it may not reflect Einstein’s views very well. (Of course, other historians may differ on this.) As in all citations, it is good to remember there is sometimes more than one author.

  49. Nick Read Says:

    To amplify, I think Fine says Einstein wasn’t happy with the paper.

  50. fred Says:

    Scott #47

    haha, ok, so I guess Bell’s inequality is to physics what the P!=NP inequality is to CS!

  51. Scott Says:

    Travis #42, Jay #44, Tim Maudlin #45: I propose, as a ground rule for discussing these things, that we all just accept that the term “local realism” means what the people who work in this field have taken it to mean. Namely, it means that a(x,w) is a function of x and w only, and b(y,w) is a function of y and w only. The independence of a from y, and of b from x, is the “local” part; while the existence of a state w that determines a and b (in conjunction with x and y) is the “realist” part. The question of determinism vs. indeterminism is not really central here, since any classical, local indeterminism could simply be incorporated into w as hidden variables.

    Now, it’s fine to grumble about the term “local realism,” or to argue that a different term would’ve been better. Many of us are constantly forced to use grumble-worthy terms: for example, “Kolmogorov complexity” is neither a complexity measure, nor was it first studied by Kolmogorov. “Quantum groups” are neither quantum nor groups. Electrons should’ve been called positively charged (we can blame Ben Franklin for that one). And let’s not even start on the complexity class PP (pronounced like my 2-year-old daughter calls one of her chief outputs).

    But if we allow redefinitions of standard terms, the problem is that the door then becomes open to the cavalcade of cringe-inducing crackpots who claim to have “disproved Bell’s theorem” (!), because they have a personal definition of “local realism” that’s more permissive than the usual one.

    As I use the term, “local realism” is not a “physics definition,” it’s a math definition. And Bell’s theorem is not a “physics theorem” (whatever that means), it’s a math theorem that’s been proved and will stay proved until the end of time. If you want to argue about the theorem’s relevance to physics, you can do that, but you don’t get to negotiate the definition of “local realism,” because it’s now part of math.

    If we can agree on this way of using language, then among other benefits, it completely addresses the concern of Jay #44. As long as we remain crystal-clear and unyielding about the definitions, the situation is not at all analogous to the vague “no-go theorems” that some people are constantly “proving” and then “refuting,” the phenomenon that I ridiculed in my “Physicists and the Wagon” post. It’s more analogous to Abel and Galois’ proof of the unsolvability of the quintic by radicals.

  52. James Gallagher Says:

    ” if people had just understood and believed Bohr and Heisenberg back in 1925, there would’ve been no need for this whole tiresome discussion. ”

    Well, neither Bohr nor Heisenberg ever clearly explained themselves. It’s not good enough to waffle on about some qualitative idea of “complementarity”, whatever that means, they needed to suggest something scientific – a quantitative experiment that would distinguish what they mean from what Einstein thought was the case.

    Bell’s Inequality is a “triviality” with hindsight, but that doesn’t diminish its importance. We need to test our theories of Nature against observation whenever possible. If the theory is built on earlier well-tested ideas and is mathematically compelling, such as General Relativity, then we can be less worried about experimental confirmation, since Nature has to be that way or humans are being fooled by a trust in logical argument, which would make all science pointless.

    But Quantum Mechanics was suggesting something quite new, with no previous foundation: that Nature is fundamentally probabilistic, and that needs experimental confirmation. Bell provided the first, and best, way to confirm that, and I’m pretty sure Einstein himself would have called it a marvellous discovery, especially if he had lived to see the experiments of Clauser, Aspect, Zeilinger et al.

    (I would have given the Nobel Prize to Clauser, as well as Aspect and Bell, in the 1980s; it’s sad that Bell died so young)

  53. Scott Says:

    Nick #48: Well, I’ve also read some of Einstein’s correspondence from around the same time. He was clearer than Bohr, but as far as I can tell, neither he, nor Bohr, nor anyone else at that time clearly recognized that you could have something intermediate between “local realist” and “spooky”—something that, on the one hand, didn’t involve any superluminal signalling, but on the other hand, would require superluminal signalling to simulate in a classical universe. That had to wait for Bell.

  54. David Speyer Says:

    I’ve asked this before, but maybe it will make sense to me this time: Why should MWI be considered “local”? To me, “local” suggests that all physical quantities are associated with some particular location in the universe, and interact with each other locally.

    If I write down the Schroedinger equation for two particles, it is a differential equation for psi(x_1, x_2, t). This is NOT a field of some sort on the universe; rather, it is a field on the universe squared. If I want to talk about the 10^(80) particles that are actually out there, I am supposed to write down a PDE in 3 x 10^(80) + 1 variables — that doesn’t seem local to me.
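
    For concreteness, the two-particle equation I have in mind is (in standard notation, with some generic interaction potential V, nothing specific to this discussion) $$ i\hbar \frac{\partial \psi}{\partial t} = \left( -\frac{\hbar^2}{2m_1} \nabla_1^2 - \frac{\hbar^2}{2m_2} \nabla_2^2 + V(x_1, x_2) \right) \psi(x_1, x_2, t), $$ and the single unknown psi already lives on a six-dimensional configuration space, not on ordinary three-dimensional space.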

    If I switch to the QFT perspective (as I understand it), then I have a collection of creation and destruction operators A(x,t) which ARE indexed by points in spacetime. However, these operators act on a vector in a Hilbert space which is not in any sense associated with a point of space time (nor is it a Hilbert space of sections of some vector bundle on spacetime, or something like that). So, again, there is a part of the theory which is intrinsically nonlocal.

    Now, as I understand it, MWI agrees with all of this. So why do people say it is local?

  55. Travis Says:

    Re: #51 I’m happy to accept the ground rule. But I still think the phrase “local realism” fails to accurately describe the (minimal) assumptions needed to derive a Bell inequality. You just don’t need the “realism” part.

    You say that the “realism” here is “the existence of a state w that determines a and b”. That sounds a lot like determinism to me. It’s hard to tell for sure, though, because of the weird word “classical” that appears in your next sentence: “The question of determinism vs. indeterminism is not really central here, since any classical, local indeterminism could simply be incorporated into w as hidden variables.” That makes it sound like there’s another, different (“quantum”??) kind of local indeterminism that can’t be so incorporated, and so which represents a magical get-out-of-nonlocality-free card. Maybe you can clarify what “classical” means in this context?

    In any case, it is known that locality (as Bell formulated it) alone yields a conflict with experiment. No extra assumption along the lines of determinism, CFD, hidden variables, or whatever exactly one means by “realism” is needed — all these things are instead, as Tim noted, consequences of locality and the predictions of QM for EPR type correlations. So there is no logically available possibility of saving locality by rejecting determinism/CFD/HVs/realism. So the claim that these experiments rule out “local realism”, meaning the conjunction of “locality” and some notion of “realism”, is really badly misleading/confused.

  56. Scott Says:

    Travis #55: Thanks. But after mulling it over, I think I actually prefer the term “local realism” for the condition that Bell simply called “locality.” Or at least, it should be called “local X-ism,” for some X that gestures toward the existence of an underlying state w, which is there prior to Alice and Bob choosing the measurement settings and obtaining the results, and which isn’t affected by either of those things.

    My reason is this: many of the physicists I know would strongly insist that quantum mechanics is “local,” simply because it has no superluminal signalling (or equivalently: Alice possesses a local state, her density matrix ρA, which determines the statistics of any measurement she can perform, and which is unaffected by any choice Bob makes). But clearly QM violates the Bell inequality! So, this could tempt the unwary into saying QM itself is a counterexample to “locality implies Bell inequality.”

    Now, you could reasonably object that the property these physicists are calling “locality,” would better be called “no-signalling.” (After all, the Popescu-Rohrlich theory, which violates the Bell inequality by even more than QM does, satisfies this property as well—but would anyone call that “local”?) But, I dunno, even when you have the right of way, sometimes it’s prudent to make way for the other driver. Sometimes you should add semi-redundant descriptor words, as in “proven theorem,” “random coin toss,” “Darwinian natural selection,” etc., just to eliminate any possibility of confusion.

    And note that I haven’t even gotten into the debate about whether MWI is “local,” or whether it makes QM’s locality more manifest! The MWIers’ argument is that, if you don’t have multiple worlds, then you need to posit a “collapse of the wavefunction”—and once you condition on what Alice sees when she causes a collapse, that will of course nonlocally affect Bob’s state as well. And they claim that MWI solves this problem by denying that there’s any such thing as “collapse.”

    FWIW, I’ve never found this argument compelling, since if you adopt a density matrix perspective, and you refuse to condition on the outcome of Alice’s measurement (which she doesn’t control) but only on her choice of measurement setting (which she does control), then of course there’s no nonlocal influence in collapse interpretations either. But to me, it further underscores the need for language that makes it clear that, when we talk about the Bell inequality, we’re not merely assuming “locality” in the weak senses of “no-signalling” or “nothing on Alice’s side changes Bob’s density matrix,” but in a stronger and more specific sense.
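
    For anyone who wants to see that spelled out numerically, here’s a minimal sketch (my own toy code, not from anyone’s paper): for the state (|00⟩+|11⟩)/√2, averaging over Alice’s outcomes leaves Bob’s reduced density matrix equal to I/2 no matter which basis she measures in.

        import numpy as np

        # Minimal sketch: Bob's reduced density matrix is unchanged by Alice's choice
        # of measurement basis, as long as we don't condition on her outcome.
        psi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2), Alice (x) Bob
        rho = np.outer(psi, psi)                          # joint (real) density matrix

        def bob_after_alice(theta):
            # Alice measures in the basis rotated by angle theta; sum over her outcomes.
            c, s = np.cos(theta), np.sin(theta)
            rho_bob = np.zeros((2, 2))
            for v in (np.array([c, s]), np.array([-s, c])):
                P = np.kron(np.outer(v, v), np.eye(2))    # projector on Alice's side only
                post = (P @ rho @ P).reshape(2, 2, 2, 2)  # indices: a, b, a', b'
                rho_bob += np.einsum('ijik->jk', post)    # partial trace over Alice
            return rho_bob

        for theta in (0.0, 0.4, 1.3):
            print(np.round(bob_after_alice(theta), 10))   # always [[0.5, 0], [0, 0.5]]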

  57. Dániel Says:

    Scott #40 to Sandro #35:

    I’d say that this stance makes you a more logical, intellectually consistent superdeterminist than ‘t Hooft himself!

    Thanks, Sandro, for articulating these ideas so much more clearly than I could. For what it’s worth, I’m this kind of superdeterminist myself. I never had any doubts about the outcome of this particular experiment, and no real doubts about the possibility of large-scale quantum computing. Also, my second best bet is Many Worlds, so an irrational attachment to single-world realism is not among my motivations.

    Here is an example/analogy that I hope illustrates that a superdeterminist theory is not necessarily empty. The Gold universe is a finite space-time continuum that has two low-entropy endpoints: a Big Bang and a Big Crunch, and the thermodynamic arrow of time points from both of them toward the high-entropy middle. (Nobody claims that we are actually living in a Gold universe, but it’s logically consistent, and illuminating as a thought experiment.)

    The middle of the Gold universe is in a state of heat death. Normally, inside observers cannot inhabit this part, but we can imagine a resourceful civilization fighting for survival that manages to maintain a local low-entropy environment. To them, the surrounding reversed arrow of time manifests itself as weird low-probability events in the style of eggs unscrambling, or “Rosencrantz and Guildenstern are Dead” Act One.

    Now comes my point. The Gold universe has a large-scale regularity. Inside observers first encounter this regularity in the form of weird low probability events. And if they try to explain these events within a framework of strictly-local regularities, they will never uncover the real explanation. But in their situation, the Gold universe property is in fact a very concise, falsifiable, discoverable theory with great explaining power.

    We can imagine physicists inside the Gold universe debating this issue. Some of them might argue: “Wow, you would need an infinite amount of low probability coincidences for such a weird theory to be true. That’s so ad hoc.” But they would be wrong: exactly one such coincidence is needed: the existence of a low-entropy Big Crunch in their future.

  58. Dániel Says:

    […] you could just as easily account for a quantum computer’s working as standard QM predicts, by saying that the right answers were precomputed at the beginning of time.

    In my Gold universe example above, the right answers are not precomputed at the beginning of time. A better (although still easy to misinterpret) metaphor is to say that the answers were precomputed when God or whoever created the space-time continuum solved a pretty intricate jigsaw puzzle. All her local puzzle pieces had to fit together correctly, but they also had to assemble to obey the low-complexity global constraint of forming a low-entropy Big Crunch.

    That’s different from the classic “set rules, set initial conditions, let it evolve along the time dimension” role for universe creators, which I believe limits our intuition unnecessarily. But it’s not “anything goes” ad hocness either. The computational hardness of this jigsaw puzzle and the Kolmogorov complexity (ad hocness) of the resulting universe are completely separate questions.

    And more broadly, there would seem to be no reason to predict any specific empirical difference between your superdeterminist theory and QM. Why not keep your theory permanently safe from falsification? 🙂

    You are making fun of my actual position. 🙂 Turning superdeterminism into just another QM interpretation would be a huge step forward from its current embryonic form. For me, the motivation is not keeping it safe from falsification, but gathering all the ammunition needed to attack QM+GR. I believe that a superdeterminist QM interpretation gives more useful intuition for that than, say, MWI.

  59. Travis Says:

    “Local causality” is a nice term to make clear that one doesn’t mean “no superluminal signaling” by “locality”.

    Stepping back, it sounds like your position is: we should say that the Bell experiments refute “local realism”, where the “realism” part means the existence of a state that determines the measurement outcomes, even though (I think you are conceding??) this “realism” idea isn’t one of the assumptions needed for the empirically falsified inequality, because otherwise people might mistakenly think we are saying that the inequality follows only from “no signaling”.

    If I’ve got that right, it seems completely crazy to me — nothing short of a full confession that “local realism” is completely inappropriate and misleading in exactly the way that Tim and I complained about.

    Just to be crystal clear about one thing, do you agree that ordinary textbook QM (with wave functions as the complete description of the micro system under study, and the usual collapse/measurement axioms involving irreducible randomness) violates Bell’s locality condition (aka “local causality”)? As, for example, do the GRW / spontaneous collapse theories? So it’s not as if you start with a locally causal theory but then muck it up by trying to add hidden variables to restore determinism or something. Rather you start with a nonlocal theory and eventually realize that you can’t restore locality *even* by adding hidden variables.

    As I said before, I agree that many-worlds theories raise lots of new questions and confusions that deserve to be addressed. FWIW I’m particularly interested in the point raised by David in #54, should you have any thoughts on that!

  60. fred Says:

    Scott #56

    “and once you condition on what Alice sees when she causes a collapse, that will of course nonlocally affect Bob’s state as well. And they claim that MWI solves this problem by denying that there’s any such thing as “collapse.””

    What I’m not clear on is whether the ordering here (Alice measures before Bob vs. Bob measures before Alice) actually has any meaning/significance.
    Because from a relativistic point of view, the ordering of two distant events is relative, right? (A fast-traveling observer could see Alice measuring before Bob, or the reverse.)

  61. Alexander Vlasov Says:

    The problem with such preparation of a Bell state using projective measurement is the necessity of very accurate estimation of density matrices, due to the difficulty of distinguishing pure from “pseudopure” (Werner) states. If a Werner state is prepared instead of a Bell state, it is separable and may be described by a classical model.

  62. Mustafa Gundogan Says:

    The detection loophole was closed in this experiment thanks to the efficient readout of the entangled spins in the NV centers. From the paper: “…while the single-shot nature of the spin readout closes the detection loophole.”

    Basically, detection of photons is not 100% efficient. The efficiency of the single-photon detectors for the wavelength of the photons in this experiment is around 60-70%. However, once you have entangled spins (what is called the “decoupling” in the post), one can read out the quantum states of these spins with almost 100% efficiency. This is what closed the detection loophole.

  63. Scott Says:

    Mustafa #62: Yes, isn’t that what I said? (Or did I miss something?)

  64. David Speyer Says:

    “The MWIers’ argument is that, if you don’t have multiple worlds, then you need to posit a “collapse of the wavefunction”—and once you condition on what Alice sees when she causes a collapse, that will of course nonlocally affect Bob’s state as well. And they claim that MWI solves this problem by denying that there’s any such thing as “collapse.””

    It looks like you are deliberately avoiding the question of whether MWI should be called local, and obviously there are other people in this conversation. For the record, though, I understand that MWIers think collapse is horrible because it is nonlocal, and how MWI is supposed to solve this problem. What I don’t understand is why they aren’t also bothered that their theory (and all others, to my knowledge!) has components which aren’t stored at some particular place in the universe.

  65. Scott Says:

    David #64: Yeah, that’s also the tack I would take if I wanted to argue against MWI being local. (It’s been years since I’ve had an object-level, non-meta opinion about any of these questions… 🙂 )

  66. Flavio Says:

    Soon enough the other shoe will drop and superluminal signalling will be found.

    Here is how to do it: Alice and Bob share entangled partners, and Bob will perform the (non-delayed) quantum eraser experiment with his photon. Alice is able to send a signal by choosing whether to measure the which-path information in her photon between the moment Bob’s photon acquires such a marker and the moment Bob erases the information.

    Bob just needs to check whether an interference pattern is occurring or not… Obviously a real protocol would need a lot more, but the meat of the idea is this.

    The communication no-go theorem is vacuous; it assumes Alice can only perform measurements that result in an identity matrix for Bob’s state.
    Alice can capture information while it is available, before Bob is able to erase it from the system. According to Zurek and Wootters (1979), if enough information is acquired in the whole system, then the interference pattern has to disappear.

  67. James Gallagher Says:

    fred #60

    You could use a three particle entangled state, like a GHZ state – then there is (in general) no reference frame where all three particle states can collapse simultaneously – so there can be no physical collapse.

    MWI is still wrong though, since they can’t get the Born Rule without being silly – the Born Rule for probability in a QM Hilbert Space is natural from Gleason’s Theorem – the only puzzle is where does the randomness come from? So we should just declare that Nature is random naturally – then the whole mathematical structure of Quantum mechanics follows simply.

    Cheers.

  68. Joshua Zelinsky Says:

    Flavio #67,

    That’s an interesting claim. What probability do you estimate that we’ll have confirmed FTL signaling in 5 years? In 10 years?

  69. Scott Says:

    Travis #59: I actually love your suggestion to call the property in question “local causality” rather than “local realism.” Thank you! Being a pragmatist opportunist, all I really want is a term that gets across the intended “a(x,w), b(y,w)” meaning while ticking off as few constituencies as possible. So maybe I’ll try out “local causality” the next few times I talk about the Bell inequality and see how people respond.

  70. William Hird Says:

    @James Gallagher # 68
    The randomness comes from a concept that comes from the world of metaphysics (sorry Scott!). According to Eastern mysticism, everything gives off etheric vibrations (you and I as sentient beings, and all matter), so what we see at the atomic level of our world is subatomic particles being bombarded by vibrations from all directions of space-time, a cellular automaton of unimaginable complexity. There aren’t “many worlds”; there is only one world, but we need the crutch of a “many worlds” interpretation in order to try and make sense of the complexity. So Bohr had it correct from the beginning, even if he didn’t really fully know why, but ultimately it doesn’t matter: quantum mechanics as it is “understood now” is a complete theory of the universe.

  71. Flavio Says:

    Joshua Zelinsky #69,

    Only recently have researchers found sufficiently bright entangled-photon sources (where a reasonably high percentage of the photons created are members of entangled pairs, sometimes called ultrabright) that allow us to ditch the coincidence counter that is currently used in quantum eraser experimental setups…

    If I had such crystal in my hand, I would be running to try the experiment… 🙂

  72. Lou Scheffer Says:

    Scott #31 wrote: “And, yes, part of that is that the amazingly fine-tuned initial condition constrains our own brains, preventing us from choosing to do certain experiments that we could otherwise easily do that would cause objects to fall up.”

    This has been proposed as a reason we have no (backward) time travel. It may be perfectly possible, but if so people will continually go back and meddle with the past. Eventually, perhaps after some huge numbers of such changes, we end up in our current world line where no-one ever invents time travel, perhaps since everyone is (mistakenly) convinced it is impossible.

    At least this explains how such amazingly fine-tuned conditions could arise….

  73. Jay Says:

    >Bob just needs to check if interference pattern is occurring or not…

    This idea probably occurs to everyone at some point. If it could work, it would not be that hard to implement (google it: do-it-yourself quantum eraser). Problem is, it doesn’t work.

    Specifically, the problem is that you can’t “measure” interference unless you have both data sets. The magic is: the interference would show up only for the trials in which both Alice and Bob destroyed the which-path information. In a way, Bell’s theorem is just a further certificate that it’s as crazy as it sounds.

  74. Mark Srednicki Says:

    IMO, the “realism” part of “local realism” is that a(x,w) and b(y,w) always have definite values, whether or not they get measured. This is what is not true in QM. I don’t think “causality” captures this idea.

  75. Evan Says:

    @Flavio,

    The reason why your protocol won’t do what you say is that quantum erasers have a 50% success probability (or, alternately, 100% success, but with two outcomes). The two outcomes have inverse interference patterns, and without knowing Alice’s result, Bob can’t identify which pattern he is supposed to be looking for, so the two patterns blend together.

    But just in case you aren’t convinced, go ahead and actually work it out, rather than just handwaving away the details. Write down the quantum state at each point in time, including any needed ancilla.
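
    (As a toy numerical version of the first point, using an idealized fringe model of my own: conditioned on the two erasure outcomes, Bob’s photon shows complementary fringes proportional to 1 ± cos φ, and the unconditioned mixture is flat.)

        import numpy as np

        # Toy sketch: the two erasure outcomes give complementary fringe patterns;
        # without Alice's outcome record they average to a featureless distribution.
        phi = np.linspace(0, 4 * np.pi, 201)        # relative phase across Bob's screen
        fringes      = 0.5 * (1 + np.cos(phi))      # pattern given erasure outcome "+"
        anti_fringes = 0.5 * (1 - np.cos(phi))      # pattern given erasure outcome "-"
        blended = 0.5 * fringes + 0.5 * anti_fringes
        print(np.allclose(blended, 0.5))            # True: no signal without the record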

  76. Alexander Vlasov Says:

    Yet another question: is such an “inverted” setup equivalent to the initial one with distributed measurements? In the initial one, the separated measurements are useful to illustrate the tension between the suggestion of definite outcomes and the relativity principle, which suggests that the order of those outcomes may not be defined.
    It is not possible to suggest an algorithm producing the outcome at A without access to B, for reasons discussed in the original version of Bell’s theorem (sometimes ignored in favor of more “modern” explanations), and so the whole story begins.
    In the new setup the measurement is produced at a single point, so there is no problem with the order of outcomes. Instead, the question about (unitary) interactions at different points appears. Here the very existence of a real problem between locality and the order of interactions is not obvious; even if it may be valid in some philosophical discussions, it does not look much more crucial than other manifestations of relativity, e.g. the relativity of motion.

  77. Travis Says:

    Just to repeat one last time — the problem is not really what you call the condition “a(x,w), b(y,w)”, but rather whether such a condition is in fact a necessary assumption at all (and hence something that could be rejected instead of locality in the face of the data).

    This condition — “a(x,w), b(y,w)”, which just manifestly presupposes determinism — is simply not part of what Bell called “local causality”. (It rather *follows* from local causality applied to the perfect EPR correlations.) And the inequalities arise from “local causality” alone. See

    http://arxiv.org/abs/0707.0401

    especially sections V and VI, for example. (Or Bell’s “la nouvelle cuisine”.)

  78. Un experimento tipo Bell libre de loopholes | Ciencia | La Ciencia de la Mula Francis Says:

    […] I recommend reading Scott Aaronson, “Bell inequality violation finally done right,” Shtetl-Optimized, 15 Sep 2015. His popular-level description of the inequalities is worth reading, as is his […]

  79. fred Says:

    Lou #72

    If time travel does exist, then the future could be “interfering” destructively with the past:
    When the effects can influence their own causes, then only “stable/self-consistent” time loops can exist, i.e. loops where all the influences/side-effects of the time travel are already accounted for in the entire history. This also would require the universe to be truly deterministic (no free will, no intrinsic randomness).

  80. fred Says:

    fred #77
    To clarify, I doubt that time travel is possible.
    E.g. if some mass M from the future (time t) can be sent to the past (time t-T), the universe as of t would see an excess of mass/energy of M, and the universe as of time t-T would see a deficit of M.
    How would we account for this?

    But I’m more curious to understand the relation between “Time Reversal” (i.e. known laws of physics being symmetrical with t, when t -> -t) and QM. It would be nice if some aspects of QM (like interference) could be explained by the necessity to respect some symmetries.

  81. Joshua Zelinsky Says:

    So do you have a probability estimate of when this experiment will be done? Let me put it this way: if someone offered to make you a $200 bet on whether this experiment or any other experiment will be generally accepted as showing FTL signaling in the next ten years, would you take it?

  82. Joshua Zelinsky Says:

    Clarification: My last comment was directed at Flavio.

  83. Decius Says:

    Now I’m more confused; what strategy do Alice and Bob adopt knowing that they have entangled particles but without sharing any information acquired after x and y are determined?

    Has the possibility that the photons will not interact and be measurable unless they are coincidentally entangled been rejected?

  84. Darrell Burgan Says:

    “intrinsic randomness”

    If this experiment helps establish that there is such a thing as this, where effects happen without a cause, then doesn’t this ultimately undermine determinism?

  85. JollyJoker Says:

    Thanks. I’ve never seen an explanation of the Bell inequality I understood before. Not that I fully understand how entanglement gives 85%, but… 🙂

  86. James Gallagher Says:

    William Hird #70

    Ok, that’s a viewpoint.

    But you get so far away from simplicity requirements with that kind of stuff. Yes the world could be due to that and the randomness might be illusory to us poor stupid humans.

    But let’s assume us humans are not being fooled by a bad demon, and we simply evolved, randomly, due to evolution, to have a very good understanding of the world.

    If we are in a huge cellular automaton then why is there a speed limit? Who set the speed limit for the automaton?

    You see, how silly this gets?

    If we have random processes, then we can talk about an average rate for those processes: some occur at rates smaller than any time imaginable, some occur on really long timescales of googols of years, BUT MOST occur on, say, a Planck timescale, 10^-43 seconds.

    I have posted such a model, previously, I don’t want to spam Scott’s blog with reposts of that, but I do think a model built on fundamental randomness is easier and more natural.

    edit: correct spelling of be to by

  87. Scott Says:

    Travis #77: OK, thanks very much for clarifying.

    So it sounds like we should reserve the term “local realism” for the “a(x,w), b(y,w)” condition. And it’s perfectly correct that local realism implies the Bell inequality, and that still strikes me as a fine way to explain the Bell inequality to beginners, since “a(x,w), b(y,w)” is an extremely natural first guess as to how the universe should work.

    However, one could then add that actually, there’s a weaker condition called “local causality” that already implies the Bell inequality, and that this was well-understood by Bell himself.

    As it happens, in 2002 I wrote a review of Stephen Wolfram’s A New Kind of Science, in which one of the main points I made was that there was a weaker condition than local realism that’s already enough to imply the Bell inequality—and that that observation killed Wolfram’s proposals for a classical CA underlying physics. (In particular, it doesn’t matter if you have “long-range threads” in the CA, if you want the classical signals sent across those threads not to violate special relativity by picking out a preferred reference frame.) I didn’t know where, if anywhere, this point had been made in the literature before, but at any rate, didn’t regard it as particularly novel. Later, in 2006, Conway and Kochen made an extremely-related point in their famous paper on the “Free Will Theorem.”

    So let me ask you point blank: did Conway, Kochen, and I simply rediscover what Bell himself already said explicitly? I.e., is there any difference between what we assume and his “locality” / “local causality”?

  88. Flavio Says:

    @Evan

    The quantum eraser experiment can be done with ordinary non-entangled photons; the one in Wikipedia uses entangled photons, but it isn’t necessary… Why would a coincidence counter be needed?

    Or are you saying that entangled photons are distinguishable from ordinary photons?

    Let’s play an alternative game: Alice provides Bob as many photons as he wants, she will send either all photons with entangled partners (Alice won’t do anything to her partners) or all ordinary photons. Is Bob able to distinguish if he is receiving entangled photons? How?

    If he simply did the quantum eraser and no interference patterns appeared simply because they are entangled, then he can distinguish ordinary photons from entangled ones…

    @Joshua Zelinsky I hate bets, but I would do it just to put money where my mouth is.

    People talking about time traveling, I don’t believe in it. Jus

  89. William Hird Says:

    @James Gallagher #86

    Sorry for the late response, James. I will try and answer some of your points; I am not a professional scientist, I’m just giving my opinion based on a casual study of physics and philosophy.
    So who or what sets the speed limit for the cellular automaton? Who sets the speed limit for our heartbeat? Why does it beat 60-70 times a minute instead of, say, 300 times a minute or 15 times a minute? Silly, right?
    I can’t prove that there is an ether, but to me it is a pretty good theory to explain how E/M radiation propagates, and the curvature of spacetime (I wish I could have asked Albert: if there is no ether, then WHAT is being curved or bent by large objects in space? How can nothingness (no ether medium) have the physical property of curvature?). So I’m guessing that this ether is propagating all kinds of waveforms in space, and that this big jumble of noise is responsible for the complex standing-wave pattern that gives rise to the randomness associated with a quantum measurement. So I guess it would be considered a hidden-variable theory, but both a local and non-local variable theory, because the ether is everywhere; am I making sense? 🙂

  90. eitan bachmat Says:

    Hi Scott
    Thanks for the interesting explanation. Naive question: do the recent EPR=ER discussions have anything to do with this? Could it be argued that the Bell violation comes from changes in the topology of space-time caused by the creation of entangled particles, i.e., changes in the notion of locality, so that parts of the lab are not really a mile apart any more since they are connected via wormholes?
    Obviously I don’t have a clue what I am talking about, but perhaps you could deconfuse me a bit.

  91. Travis Says:

    “So it sounds like we should reserve the term “local realism” for the ‘a(x,w), b(y,w)’ condition. And it’s perfectly correct that local realism implies the Bell inequality, and that still strikes me as a fine way to explain the Bell inequality to beginners, since ‘a(x,w), b(y,w)’ is an extremely natural first guess as to how the universe should work.”

    Especially if one has noticed the perfect EPR correlations. That is, what you say is “natural” here is I think exactly what Einstein et al were saying: this kind of determinism is *required* by the perfect EPR type correlations, if you want to avoid nonlocality. And this is worth stressing, because Bell took himself to be adding something to the EPR argument, i.e., as building on their earlier identification that locality requires determinism. If you think of Bell’s argument in this “two-part” way, it is much easier to appreciate that the ultimate upshot is that locality is untenable, without getting into a lot of complicated discussions about exactly how he formulated “locality”, etc.

    In any case, it’s not clear to me why one would want to use/preserve the terminology of “local realism” at all. Surely beginners can be helped to understand the EPR argument from locality to (local) deterministic hidden variables, and can then be helped to understand Bell’s proof that local deterministic hidden variable theories can’t make the right predictions for a wider class of correlations, so that locality is untenable. If you just acknowledge and remember the first, EPR, part of Bell’s two-part argument, everything is simple and clear, including the fact that “local realism” is at best some “middle term” in the argument that isn’t actually of any particular importance. What’s important is that locality is incompatible with the (now very well confirmed) QM predictions.

    Let me say this again from another perspective. You said that local determinism is a natural first guess about how the universe works and suggested that it’s worth knowing and sharing the proof that this can’t be right. That’s all fine and good, but it would be a shame if you then bury what is to me an equally important thing: if your second guess about how the universe works is that it is local but indeterministic (or, ridiculously, “unreal”) that too doesn’t work! Basically what upsets me about the whole “local realism” terminology is that everybody who uses it seems to think that if only we jettison “realism” (which really always comes down to meaning deterministic hidden variables) then we can preserve locality. That is everybody’s standard view of ordinary QM: it’s local, and it’s only if you foolishly try to muck it up by restoring determinism that you run into problems with nonlocality. But this is simply wrong. Ordinary QM is already nonlocal, and you don’t even need anything remotely as fancy as Bell’s theorem to see this. Just look at the theory (especially the collapse postulate!). (Of course it’s nice to have a formal definition of “locality” that we can use to unambiguously diagnose ordinary QM as nonlocal. But once you know where to look the nonlocality is so blatant and obvious that you don’t really need any of this fancy stuff to see it.)

    As to Wolfram, I never got around to reading his book, so it was a little hard to follow all the details of your argument in the book review. But in so far as Wolfram claimed that the QM correlations could be reproduced with a deterministic and relativistic model, I think I would agree with you that that is problematic. But then it seems like the heart of your counter-thesis is that one needs irreducible indeterminism and/or superposition. To me these options also both seem problematic and I would want to press you to explain how even the perfect EPR correlations (just a subset of the general QM predictions after all) can be explained in a local (or is it just a relativistic?) way by introducing indeterminism and/or superposition. My sense is that this kind of claim tends to arise when people focus too narrowly on just the second part of Bell’s two-part argument (the part showing that local deterministic hidden variables imply a Bell inequality) while forgetting the first (EPR) part (the part showing that locality requires local deterministic hidden variables already for the simpler sub-class of QM predictions).

    But the relationship between locality and relativity is subtle, and I kind of lose sight of what exactly both you and Wolfram are even claiming to argue for here. So take this with a large grain of salt. But my sense is indeed that there is nothing new here that wasn’t already known to Bell. (For sure that is my opinion of Conway/Kochen — I don’t think they proved anything new at all; they just made a big mess with words regarding the so-called free will assumption of Bell’s inequality.)

  92. Joshua Zelinsky Says:

    Flavio,

    Well, I wouldn’t want you to do something you hate, but I am genuinely curious whether you will, as you put it, put your money where your mouth is. Based on that, I’m willing to make the following proposal: if any form of FTL signaling is discovered in the next 10 years (either of the form you outline or something else) I will pay you 200 USD, and if not, you pay me 50 USD. Want to take it?

  93. Davide Orsucci Says:

    Travis #77 and #91:

    Thank you for pointing out your article (arxiv:0707.0401) in comment #77, it did clarify the meaning of the terminology you were using in this context a lot! Also, it does provide a nice introduction to J.S. Bell views – whose writings I definitely have to read, sooner or later.

    Indeed, the EPR paradox and Bell inequalities are tricky subjects, which may be more suited for a discussion among philosophers than among physicists (and computer scientists? 😉 ). The EPR paradox in particular is an archetypical example of a problem which in itself is almost a mathematical triviality, and yet it is extremely difficult to achieve a widespread consensus about its interpretation.

    IMHO, what Bell experiments have shown is that the “realism” assumption given by EPR is demonstrably false, and that quantum physics is indeed a local theory! The point is that, for me at least, “local” simply is the statement that superluminal communication is impossible! And I cannot really understand what the other interpretations of the word “local” are. Probably, there is some metaphysical confusion going around, stemming from the fact that in order to simulate a Bell inequality violation with a classical system, one would need faster-than-light communication…

    So, Travis, following your paper: it is evident that the EPR argument is sufficient to rule out the possibility that the (quantum mechanical) world satisfies the “local causality condition”, as given in equation (1) of your paper. Let me explain and comment over that here, for those who have not read it.

    Let’s suppose that there exist `entities’ which provide an `objective’ description of the state of reality, i.e. what Bell called “beables”; here be-able is used in contrast to observ-able, to denote something which `is’, rather than something which is merely `observed’. Ok, this seems to be a pretty metaphysical definition; but I think it can be described more concretely as follows. By beable, it is meant a description of the state of reality, at a given time, which:

    1) is not necessarily observable;
    2) is correlated with the `future evolution of the system’: by this, we mean that the measurements made in the future will have outcomes which are probabilistically correlated with these beables;

    and moreover, if these beables are “complete”, then:

    3) these beables provide all possible information about the future measurement outcomes: that is, the beables completely determine, given the measurement which will be performed, the probability of obtaining a certain future measurement outcome.

    Well, with these assumptions, the beables seem to me to be pretty similar to a set of “hidden variables”; also, it seems to me that the “realism” hypothesis has been implicitly introduced, as the measurement outcomes are determined by these beables… but let’s move on.

    Now, we suppose that Alice and Bob are far away from each other, and that we have a complete beable for Alice’s experiment, which however is disjoint from Bob’s past light-cone (represented in Fig. 2 in your paper). Then, the condition 3) above can be expressed as:

    P( a | Beable, b ) = P( a | Beable )

    in which “a” is Alice’s outcome and “b” any experiment that Bob can perform in a causally disjoint region of space, so that by the no-faster-than-light-communication principle, it cannot influence Alice’s results. And this is exactly equation (1) of your paper, i.e. the “local causality condition”.

    This condition seems to be very natural, almost obvious… but it is blatantly violated by Quantum Mechanics, as shown by EPR. For a maximally entangled pair of qubits we get P(a|Beable) = 1/2 and P(a|Beable,b) = 1; i.e., Bob’s measurement is completely correlated with Alice’s measurement!
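
    (To put numbers on that, here is a toy sketch of my own, taking both sides to measure the Bell pair in the same basis: Alice’s marginal is 50/50, but conditioning on Bob’s outcome pins it down completely.)

        import numpy as np

        # Toy sketch: for (|00> + |11>)/sqrt(2) measured in the same basis on both sides,
        # Alice's marginal is 50/50 but conditioning on Bob's outcome fixes it.
        psi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for |00>, |01>, |10>, |11>
        p = (np.abs(psi) ** 2).reshape(2, 2)        # joint probabilities p[a, b]
        print(p.sum(axis=1))                        # P(a) = [0.5, 0.5]
        print(p[0, 0] / p[:, 0].sum())              # P(a=0 | b=0) = 1.0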

    Ok, so this “local causality condition” is violated… what can one learn from it? There are three possible ways out, as far as I can see:

    1) Quantum Mechanics is incomplete (the solution supported by Einstein, Podolsky and Rosen); pictorially, it could be represented as a hole in the “beable” region in Fig. 2 in your paper, which allows information to flow from the common past of Alice and Bob to influence Alice’s experiment, without being `comprehended’ by the standard Quantum Mechanics theory.

    2) Locality is violated (supported by JS Bell himself), however without violating the no-faster-than-light-communication principle. However I cannot understand this position… for me no-superluminal signalling is just the definition of locality, so what are we talking about? If one can give me a satisfying alternative definition of locality, I would be very happy…

    3) The beables/hidden-variables/realism assumption is wrong (and standard quantum mechanics is right). Just to elaborate a little bit further: it is assumed that the beables determine Alice’s measurements, but the beables themselves are never measured! If, instead, one tries to observe these beables by preemptively measuring Alice’s quantum system, this will give a (complete) description of Alice’s system, but then it will be unentangled from Bob’s system! So we end up having P(a|Beable) = P(a|Beable,b) = 1.

    Ok, so far I have talked only about the EPR paradox.

    Conceptually, what the violation of Bell inequalities adds to the argument is that it precludes point 1) above from being a valid way out. Any hidden-variable/realistic (local) theory which assigns a definite value (at least probabilistically) to Alice’s and Bob’s measurements cannot violate Bell’s inequalities!

    Since also point 2) is extremely well verified, it seems to me that the problem has been pretty much resolved, in favour of solution 3) above…

    Finally, I apologize for the length of this comment… But I have wanted to write a little bit about Bell inequalities for ages!

  94. Travis Says:

    Davide —

    1. I discuss the difference between “local causality” and “no signalling” explicitly in the paper. Maybe check out those parts. By the way, if you think that “no signalling” is the only available/meaningful/important notion of locality, would you then agree that Bohmian mechanics is a local theory, as compatible with relativity as ordinary QM?

    2. You describe what you think “beable” means and suggest that it is the same as the “realism” or “hidden variable” or “determinism” assumption which people often (but erroneously) claim is another assumption of Bell’s argument. But note that the ordinary quantum wave function satisfies all of your criteria for “beable status”. So it should be clear that talking about what exists (according to some candidate theory, e.g., ordinary QM) is hardly the same as assuming hidden variables or determinism.

    3. I’m glad you are able to see that ordinary QM violates local causality.

    4. You say that for you “locality” just means “no signalling” and you wish somebody would offer a “satisfying alternative definition”. Huh? Bell did. I wrote a whole long paper about it, which you claim to have read… ???

    5. It becomes more explicitly clear later in your post that, yes, you are just assuming that “beable” = “hidden variable”. But that’s just not what it means (and actually your own earlier characterization of “beable” is pretty good). In ordinary QM, for example, the beables are (I guess) the really-existing macroscopic classical stuff that Bohr was always going on about, and then the quantum states for microscopic systems. If that’s not right, please just tell me what *does* really exist according to ordinary QM. In any case, there simply is no “third option” like your option #3.

    Finally let me second your suggestion that you should really get around to reading Bell’s actual papers at some point. I think that would be a good idea. You seem to suffer from a number of very common misunderstandings that arise because everybody hears these misconceptions from other people and assumes they must be the truth, but none of these people have studied the issues carefully or bothered to actually read Bell. It is very difficult to remain confused when you get it from the horse’s mouth.

  95. John Sidles Says:

    Scott’s essay and these comments provide ample material to inspire submissions to the 2015 Quantum Shorts Competition — hosted by Scientific American, Nature, and Tor Publishing — that Caltech’s Quantum Frontiers weblog is describing.

    Conclusion  Everyone gains if the Quantum Shorts competition gets plenty of terrific entries.

  96. Philip Thrift Says:

    It’s surprising no one has mentioned Huw Price on this subject.

    “A Neglected Route to Realism About Quantum Mechanics”
    Huw Price
    http://arxiv.org/abs/gr-qc/9406028
    More recently:
    “Dispelling the Quantum Spooks — a Clue that Einstein Missed?”
    http://arxiv.org/abs/1307.7744
    “Disentangling the quantum world”
    http://arxiv.org/abs/1508.01140

  97. Davide Orsucci Says:

    Travis #94:

    I’ll try to explain further what my views and doubts are, and which arguments I still don’t fully understand. So, let me address your points.

    1. Sticking to my own definition of locality, yes, I have to assert that Bohmian mechanics (once it is formulated in a properly Lorentz covariant form) is indeed a local theory, since it is just an interpretation of Quantum Mechanics — which respects the no-faster-than-light-communication principle. As far as I can see, this fact is completely equivalent to asserting that electromagnetism is a local theory, even though the electrostatic potential in the Coulomb gauge is instantly changed at all points in space when you modify the position of a point charge. But the electrostatic potential is not an observable entity, it is just a useful calculation tool, or a “bookkeeping” method. Once you compute the electric field out of it – the physically observable entity – you end up with effects that propagate at most at the speed of light (all this is also discussed in your paper). But then, it seems to me that this example is completely equivalent to what is happening with the so-called “non-local” correlations in quantum mechanics. The “non-locality” is just in the bookkeeping, and never in the observables (but maybe there is some subtlety I did not understand?).

    2. Mmm, I think now I see your point. So, taking the standard view of quantum mechanics, the beable (the “existing” stuff) for Alice in the EPR/Bell experiment should be given by the partial trace over Bob’s system of the two-party entangled state – obtaining a mixed state. But then, knowing Bob’s measurement outcome, we will gain some information about Alice’s outcome, meaning that “local causality” is violated. So, it seems that either 1) Alice’s mixed state is not a complete beable or 2) there are real non-local correlations. Yet, I think the most natural solution is to realize that 3) Alice’s mixed state might not be a beable at all! It might just represent the information she has about the quantum state! So, for example, suppose that her friend Amanda decides, unbeknownst to her, to measure her quantum system immediately before Alice’s own measurement. Then Alice’s mixed state will remain unaffected by Amanda’s measurement (she doesn’t know that the measurement has taken place!), and yet their measurement outcomes will always be equal! Again, it seems to me that this scenario reproduces all the relevant points of an EPR-like experiment, except that everything is done locally!

    3. Fine!

    4. Yes, it is explained in section V.D of your paper… let me read it once again, and sleep over it one more night.

    5. My conclusion is that probably nothing exists at all, and all of what we perceive is just the veil of Maya, a pure illusion! Speaking more seriously, I think we can safely agree that macroscopic readings and settings of the experimental apparatuses are real. So, maybe I will take the instrumentalist view that a quantum system is what connects the state preparation device to the state measurement device; or otherwise, that it is just a representation of our knowledge.

    This was again a too long comment, I apologize again!

    (Also, I just realized that the example I give in point 2. is related to the Leggett-Garg inequalities… I should definitely find some time to think about the connection between these and the Bell inequalities…)

  98. fred Says:

    Travis #91

    “You said that local determinism is a natural first guess about how the universe works and suggested that it’s worth knowing and sharing the proof that this can’t be right. ”

    What I always found interesting is that “Classical determinism” as a first guess has always been filled with very basic complications.
    For example, because cause and effect are delayed (propagation of signals at finite speed), even the most trivial system is totally chaotic, and therefore it’s anything but “deterministic” in the sense of predictable (it is deterministic in the sense that a given state in time depends only on the past history, so it’s non-random).
    Another example is the attempt to compute the effect of an electron on itself:
    http://feynmanlectures.caltech.edu/II_28.html
    … but apparently, even when bringing QM into the picture, it’s still not solved!
    Feynman:
    “It turns out, however, that nobody has ever succeeded in making a self-consistent quantum theory out of any of the modified theories. Born and Infeld’s ideas have never been satisfactorily made into a quantum theory. The theories with the advanced and retarded waves of Dirac, or of Wheeler and Feynman, have never been made into a satisfactory quantum theory. The theory of Bopp has never been made into a satisfactory quantum theory. So today, there is no known solution to this problem. We do not know how to make a consistent theory—including the quantum mechanics—which does not produce an infinity for the self-energy of an electron, or any point charge. And at the same time, there is no satisfactory theory that describes a non-point charge. It’s an unsolved problem.”

    We’re talking about correctly modelling a single electron, not some fancy schmancy black hole scenario! This is the sort of stuff that makes me really skeptical about the ultimate predictive/explanatory power of mathematics and QM.

  99. Travis Says:

    Davide #97… trying to be brief…

    1. Most people would look at Bohmian mechanics and say “well, although this theory makes the same empirical predictions as ordinary QM, including the impossibility of faster-than-light signaling, this is just obviously a dynamically nonlocal theory”. I agree with most people in this case. Of course, very frequently people then go on to say “and since we of course want to require locality, we should reject Bohmian mechanics”. I don’t agree with that, because the implication is that there’s some way of preserving locality. The point here is just that we should all agree to be fair. If you are going to say that the only kind of locality that matters is signal locality, you have to bite the bullet and accept that you can have local deterministic hidden variable theories! Whereas if you want to exclude something like Bohmian mechanics as “obviously nonlocal in the important sense” then you should apply that same notion of locality across the board and accept that ordinary QM, too, is problematically nonlocal.

    5. I don’t think there is any genuine 3rd alternative in “the instrumentalist view”. If the claim (of this view) is that the only things that really exist are the directly observable macroscopic things, well then that theory is even more blatantly nonlocal than the version of ordinary QM in which quantum states are taken as beables. (If even the quantum states don’t suffice to screen off the correlations in the way required by Bell’s locality condition, then obviously you won’t screen them off if you instead have *nothing*!) On the other hand, if “the instrumentalist view” isn’t anti-realist about the microworld, but is instead just a long-winded version of “I refuse to talk about what’s real and what’s not”, well then that’s also obviously not helpful. Bell proved that, whatever your candidate theory says is physically real, it cannot agree with the experimental data if it respects locality. You don’t refute that just by refusing to say what you think is real.

    (This already partly addresses what you wrote in 2. Note also that even an instrumentalist, I guess, grants beable status to the *outcomes* A and B. And the probability assigned to (say) A=+1 changes when you conditionalize on Bob’s distant setting and outcome. So we have a violation of locality whether you do or don’t grant beable status to Alice’s reduced density matrix or whatever.)

  100. Scott Says:

    fred #98: Leaving aside the philosophical issues, I think the comments of Feynman that you quoted are dated, since they don’t reflect the Wilsonian perspective. As I understand it, there’s a quantum-mechanical theory of the electron that’s incredibly successful (12 decimal places, yadda yadda); indeed Feynman himself was instrumental in developing it. The only catch is that you can’t pretend that you understand what’s going on at arbitrarily small distance scales (it’s only when you do pretend this that you get infinite answers). But on reflection, all experimentally successful theories in physics—classical electromagnetism, GR, etc.—have worked despite our not knowing the ultimate constituents of reality. So one might ask: why is this a special problem for quantum electrodynamics? Why is it there, and only there, that our inability to explain everything calls into question the “predictive/explanatory power of mathematics [?!?] and QM”?

    (One should also say that the renormalization problems of QFT have basically nothing to do with the conceptual or interpretational problems of QM. Whatever is the unknown Planck-scale physics of the electron, string theory or whatever else, either it obeys the usual rules of QM—and I don’t think there’s any good reason to suppose it doesn’t—or else the problem is much bigger than the one Feynman was talking about in that passage!)

  101. Mika O. Says:

    Any good book about this “several sides of the coin” problem?

  102. Scott Says:

    Travis #99: I agree with you that Bohmian mechanics and standard QM equally prohibit superluminal signalling, and I also agree that they equally violate Bell’s “locality” axiom (i.e., what we’ve been calling local causality). But having said that, there are two ways in which it seems to me that Bohmian mechanics does make the locality situation philosophically “worse” (or “more radical”?) than it was in standard QM. The first is the need to pick a preferred reference frame, thereby making nonlocal choices about simultaneity. (I know there’s been decades of research aiming to solve that problem, but has it led to anything?) The second issue is that the extra state posited by Bohmian mechanics (i.e., the set of actual particle positions) needs to be updated nonlocally, even if we ignore conditioning on measurement outcomes. This is in contrast to the local density matrices, which can change due to faraway events only if we condition on particular outcomes of those events.

  103. Mateus Araújo Says:

    I’m afraid I’m a bit late to the party, but I’d like to point out that the distance record in a Bell test is much larger than 18 km; as far as I know, it is 144 km, done by the Zeilinger group: arXiv:0811.3129

  104. Scott Says:

    Mateus #103: Thanks so much! I updated the post.

    Yes, I had a vague recollection that the Zeilinger group had violated the Bell inequality over 100 miles or something—but then I googled and couldn’t find anything about it. I guess part of the problem is that they don’t say anything in the abstract about the distance, or about setting a new distance record.

  105. John Sidles Says:

    Scott asserts “As I understand it, there’s a quantum-mechanical theory of the electron that’s incredibly successful (12 decimal places, yadda yadda); indeed Feynman himself was instrumental in developing it.

    The only catch is that you can’t pretend that you understand what’s going on at arbitrarily small distance scales (it’s only when you do pretend this that you get infinite answers).”

    And there’s one further catch: you have to pretend that you do understand what’s going on at arbitrarily long distance scales (it’s only when you do pretend this that you view individual electrons as isolated systems).

    A canonical reference in regard to high-precision tests of QED is Lowell Brown and Gerald Gabrielse’s magisterial and much-cited Review of Modern Physics article “Geonium theory: Physics of a single electron or ion in a Penning trap” (1986). The bulk of this 89-page survey is concerned not with the high-energy subtleties and pathologies of QED (which are regulated by renormalization methods) but with the low-energy subtleties and pathologies of QED, which are regulated by a gallimaufry of mathematical methods guided chiefly by physical intuition.

    Indeed Brown and Gabrielse begin their review by quoting Ernest Rutherford in the year 1907

    Even the best of them [mathematical physicists] have a tendency to treat physics as purely a matter of equations. I think this is shown by the poverty of the theoretical communications on the problems which face the experimenter today. I quite recognize that the experimenter is inclined to drop his mathematics also. […] As a matter of fact it is quite difficult to keep up the latter when all your energies are absorbed in experimentation.
      — Letter from Ernest Rutherford
          to Sir Arthur Schuster, 1907.

    One hundred and seven years later, the high-energy subtleties and pathologies of electrodynamics are nowadays pretty well understood, but the low-energy subtleties and pathologies of electrodynamics have (if anything) gained in both mystery and practical importance. The list is long: low-energy quantum electrodynamic phenomena such as superconductivity, semiconductivity, the various quantum Hall effects, the AC and DC Josephson effects, blinking atom experiments, photon-mediated Bell inequalities, the Einstein A and B coefficients, laser amplifiers, and electromagnetic Hawking radiation from black-hole event-horizons, all were unforeseen by Rutherford’s generation of physicists, and indeed would have seemed incredible.

    Even today these infrared/low-energy/noisy electrodynamic phenomena still hold plenty of mysteries (needless to say) … not least being the obstructions that these phenomena raise to scalable quantum computing technologies. Our century needn’t repeat Pauli’s mistake of regarding low-energy electrodynamic phenomena as Dreckeffects having only incidental significance.

    Conclusion  The open problems in quantum mathematics and physics that are associated to infrared/low-energy electrodynamics are every bit as puzzling and significant as the open problems associated to ultraviolet/high-energy electrodynamics.

  106. Mateus Araújo Says:

    Scott #104: You’re welcome. The reason is that the distance record is not really the point of the article: they needed the 144 km to be able to refute a weaker version of superdeterminism.

  107. Mateus Araújo Says:

    Scott, Travis, and Maudlin (various posts):

    I’m a bit surprised by your discussion. You are arguing that the local realism from Bell’s 1964 should be replaced with the local causality from Bell’s 1976, and that local causality should just be called locality, and that quantum mechanics is anyway nonlocal?

    First of all I stand with Scott in saying that we shouldn’t rename standard terms. This is the first step on the road to becoming a crackpot.

    Second, and this is the only important point actually, is that local realist correlations are exactly the same as locally causal correlations, so what’s the argument about? Yes, it is true that local causality is weaker than local realism, in the sense that there are locally causal theories that are not locally realistic, but as far as Bell’s theorem is concerned (as shown in Fine 1982), they are equivalent! So what are you arguing about?

    Third, local realism is actually the conjunction of two assumptions — determinism and locality — so you learn that at least one of them must be false, whereas local causality is a single assumption, so I would argue that one learns less by rejecting local causality than one learns by rejecting locality or determinism.

    Fourth, what is this insanity about orthodox quantum mechanics being nonlocal? I would repeat again Scott’s crackpot warning: don’t change the meaning of standard terms. Quantum mechanics is clearly local, in the sense of Bell’s first paper (and as argued by Scott, that the probabilities for Bob don’t change when you don’t condition on Alice’s outcome). If you’re bothered about the collapse of the wavefunction or some other crap, invent another name for it, that does not create confusion with Bell’s theorem.

    Fifth, this paper by Wiseman and Cavalcanti might interest you: arXiv:1503.06413. They split even finer hairs than you are splitting, which doesn’t actually interest me, but they do so by having the decency of not changing the meaning of standard terms, which makes the whole discussion quite easy to understand.

  108. Mark Srednicki Says:

    Mateus #107: thanks for the Wiseman Cavalcanti link, which cleared up a lot of my misunderstandings of this discussion. Also, I learned that I am an “operationalist” (who knew?), and that there is a group of people called “realists” with very strange ideas (at least to someone whose thinking about physics has been entirely in QM terms for the past 40 years or so …).

  109. Travis Says:

    Scott #102 — Trying again to be brief… I think the two things you note about Bohmian mechanics are really one thing: the dynamical equation for the particle positions basically requires some privileged frame / spacetime foliation. What I would question is whether you can avoid this in some version of ordinary (no “hidden variables”) QM that is clear and explicit about what really exists according to the theory. Obviously if you have something like quantum states with a collapse postulate, you need the same sort of privileged frame. You can avoid this by going the many worlds route, but that raises lots of other (and frankly even more troubling) issues, like the one raised in #54 above. The usual (“orthodox”) way of dealing with such questions is really no way at all — just dismiss questions about “what really exists according to the theory” as “unscientific” or “meaningless” or something, and focus on calculating things like transition amplitudes so you don’t have to even remember that you need a collapse postulate to make sensible predictions.

    Mateus #107: “You are arguing that the local realism from Bell’s 1964 should be replaced with the local causality from Bell’s 1976, and that local causality should just be called locality, and that quantum mechanics is anyway nonlocal?” As soon as you say “the local realism from Bell’s 1964…” I think you have misunderstood Bell completely, starting with his 1964 paper. My position is that the local causality condition he formulated explicitly in 1976 (and then re-formulated in 1990) is just an attempt to capture, more formally and explicitly, the same “locality” condition that Einstein et al had had in mind in the 1930s and which is therefore the condition that Bell meant to be continuing to discuss in 1964. So nobody is arguing for some crackpottish terminological switcheroo. Rather just trying to clarify these issues for people (like you) who have misunderstood them. If you really want to wade into the muck, you can find (for example) papers in which Howard Wiseman and I argue with each other about what Bell did in 1964. But really I’d instead suggest (as I did before) that people who want to understand Bell better, just read Bell. It’s just an incontrovertible fact that what most textbooks and commentators *say* about what Bell did, differs substantially from what Bell himself claimed he did. Anybody remotely interested in these ideas should find that fact intriguing and should want to try to understand the point of view of the guy everybody recognizes as having done something really important.

    “So what are you arguing about?” The argument is mostly about what Bell’s theorem proves. Bell himself (and Tim and I and some others) thought it proved a fundamental tension between the predictions of QM and the idea of “no faster-than-light causal influences” which basically everybody accepts as an implication of relativity theory. Whereas lots and lots and lots of people think instead that Bell just proved a kind of “no hidden variables theorem”, in the sense that (they’d say) he proved that nonlocality is the price you pay for rejecting Bohr’s philosophy and trying to “complete” QM with hidden variables to restore determinism, etc. The debate, in short, is about what minimal set of premises imply the empirically falsified inequalities: is it *just* “no faster-than-light causal influences”? or is it the *conjunction* of “no faster-than-light causal influences” and some other anti-orthodox thing like “determinism”, CFD, etc.

    “Quantum mechanics is clearly local, in the sense of Bell’s first paper”. Hogwash. Try reading Bell’s paper carefully. The second sentence notes that (as previously pointed out by EPR) additional (so-called “hidden”) variables would be needed to restore locality to QM. So apparently if you think that QM is local in the sense meant by Bell in 1964, you disagree with Bell about that — i.e., you are wrong.

  110. Travis Says:

    This is (I know) kind of lame after my somewhat inflammatory last post, but I’ve been spending too much time on this. So having I think made my views pretty clear, I’ll leave it there, read whatever anybody else posts, but not post any longer myself. If anybody wants to follow up on my suggestion to try to understand Bell’s own views better (but, say, can’t find a copy of “speakable and unspeakable”) here’s another potentially helpful (online) resource:

    http://www.scholarpedia.org/article/Bell%27s_theorem

    Thanks to Scott and everybody else for the interesting discussion!

  111. Mateus Araújo Says:

    Travis #110:

    I’m gonna answer, nevertheless, for the benefit of other readers that might be misled by your claims.

    First, please be more mature. Name-calling (saying that I misunderstood Bell’s paper, and that I am “just wrong”) is not an argument. But if there are whole papers by you and Wiseman arguing about the meaning of “locality” in Bell 1964, I cannot hope to convince you with just a few words. I will just point out that nobody agrees with you. The meaning of “locality”, as understood by today’s scientific community, is well-established to be that Bob’s probabilities don’t change when you don’t condition on Alice’s outcome. So you are indeed arguing for a crackpottish terminological switcheroo.

    Furthermore, I find it completely irrelevant to find out what “Bell really meant”, or what EPR 1935, Bell 1964, Bell 1976, Bell 1990 “actually proved”. We are grownups and we can prove our own theorems.

    A very simple theorem to prove (as I hope you will agree, even though you disagree with the names) is that

    (LOCALITY ^ DETERMINISM) => (LOCAL CAUSALITY) => (CHSH is bounded by 2).

    What you are trying to argue is that

    (QM ^ RELATIVITY) => (LOCAL CAUSALITY) => (CHSH is bounded by 2).

    Well, I do not know how to prove this. Moreover, I think this implication is false. Can you give me a concise mathematical proof? The paper you linked previously (arXiv:0707.0401) has 20 pages of blah blah about what “Bell really meant”. I’m not going to read that.

    But anyway, the validity of this second implication is independent of what is “Bell’s theorem”. What I find relevant is that the first implication is what is well-known by the community as Bell’s theorem, so again trying to argue that the second implication is actually “Bell’s theorem” is crackpottish switcheroo.
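
    For concreteness, the first implication can be checked mechanically. A minimal brute-force sketch in Python (purely illustrative, not from any of the papers under discussion): it enumerates every deterministic local strategy a = a(x), b = b(y) and confirms that none gives CHSH above 2, i.e., none wins the game more than 75% of the time; shared randomness only mixes deterministic strategies, so it cannot do better.

        from itertools import product

        # Every deterministic local strategy: Alice's outputs (a0, a1) for x = 0, 1,
        # and Bob's outputs (b0, b1) for y = 0, 1.
        best_win, best_S = 0.0, -4.0
        for a0, a1, b0, b1 in product([0, 1], repeat=4):
            wins, S = 0, 0.0
            for x, y in product([0, 1], repeat=2):
                a, b = (a0, a1)[x], (b0, b1)[y]
                wins += ((a ^ b) == (x & y))        # CHSH-game win condition
                E = (1 - 2 * a) * (1 - 2 * b)       # correlator term with +/-1-valued outcomes
                S += -E if (x, y) == (1, 1) else E  # CHSH combination E00 + E01 + E10 - E11
            best_win = max(best_win, wins / 4)
            best_S = max(best_S, S)
        print(best_win, best_S)   # prints 0.75 and 2.0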

  112. fred Says:

    Not sure how much of this is still relevant, but here’s some long Scientific American article on Bell’s Theorem, circa 1979 (love those old ads!)
    https://www.scientificamerican.com/media/pdf/197911_0158.pdf

  113. fred Says:

    Fred #112
    Btw, the author of the article, Bernard d’Espagnat (a French theoretical physicist), just died a month ago.

  114. Abstracts for September 2015 - foreXiv Says:

    […] and randomness certification. Previously announced, but included here for completeness. Also see blog post by Scott Aaronson. I’m told that several experimental groups are very close to this, which is probably why this […]

  115. Mark Srednicki Says:

    Travis #109: “Bell himself … thought [he] proved a fundamental tension between the predictions of QM and the idea of ‘no faster-than-light causal influences’ which basically everybody accepts as an implication of relativity theory.”

    The modern version of “relativity theory” (and by “modern”, I mean for at least the past 60 years) is as a symmetry of quantum field theory. This is the version that “everybody accepts”. By construction, it is not in conflict with quantum mechanics.

  116. Jay Says:

    Scott 51,

    “If we can agree on this way of using language (…)”

    Well, as the rest of the discussion made clear at this point, there is no such thing as a consensual use of “local realism”, “local causality” or “local determinism” among physicists.

    A point I’m actually surprised you didn’t emphasize yourself is that Bell’s theorem is not the iron lever we should expect from a theorem. From a theorem, we should expect to be able to reject many theories at a glance, even in a deeply coffee-deprived state.

    Say MWI. It’s local and realist (at least under most definitions discussed in this thread: no FTL + well defined be-ables). That alone should suffice to predict: at odds with experimental results. But of course it’s not at odds.

    Say a local, realist, superdeterministic theory. Local + realist = should be wrong. But no, it escapes Bell’s theorem too (yeah, I know, “no conspiracy” was actually discussed by Bell himself, but where is it written in the theorem?).

    Say wormholes. Local + realist = should be wrong. But no, now the trick is there’s a shortcut to the spacetime separation and again Bell theorem doesn’t apply.

    Moreover, it seems not very hard to cook up theories that will turn out local and realist under most (all?) definitions, but still make the same predictions as all major interpretations of QM.

    Of course, we can (as you do) integrate language within the theorem, and then the theorem is true forever. But that’d be true of the Five-Ox Theorem too (just define “pulling” as “ox-pulling”…), so maybe this move is not as benign as it looks.

    So my question for you: don’t you think there is some need to rewrite or complete Bell’s theorem so that the name keeps its promise?

  117. Teresa Mendes Says:

    Hi Scott,
    Long time, no see … I’m glad you didn’t forget me? 🙂
    So here is, again, my point.

    Sampling only a small fraction of all prepared quantum system pairs is as inefficient as detecting only a small number of quantum system pairs.
    That is, by definition, what the detection loophole is all about.
    You cannot do a Bell test just in the sub-set you choose based on whether the emitted photons are correlated.
    Those photons are correlated because their electrons were correlated; what did you expect the result of the Bell correlations to be?
    Big deal.
    This experiment shows very very very low detection efficiency. It is, once more, a falsification test of Local Realism that is inconclusive.
    Shall we wait until someone presents an experiment with efficiency over 80% ?
    Is that bet still on?
    🙂

    MIKE #1
    You said: “If one chooses to reject counterfactual definiteness, there is no non-locality problem.”
    Sorry? How do you reject counterfactual definiteness? Is it not intrinsic to Realism?
    Do you think you can “choose” what properties belong to Realism, as you please, when you are testing Local Realism, to get to any conclusion you want?

    To illustrate how important it is to include this issue in the reasoning about Bell’s inequalities under non-ideal conditions, just do this plotting (I can send you a picture – I can’t post it here; a minimal plotting sketch also appears below).
    Plot both lines showing the limits of Local Realism [Sn], depending on the detection efficiency [eta] – Garg&Mermin’s [ (4/eta)-2, for eta >2/3] and J.Especial’s [4-2*eta*eta]
    (http://arxiv.org/abs/1205.4010)
    You will get 3 different areas that show:
    The A area (below J.Especial’s line) – Locality, factual and counterfactual.
    B area (between both lines) – Factual locality but counterfactual non-locality.
    C area (over Garg&Mermin’s line) – Non-locality, factual and counterfactual.
    Will this help to clear your mind?
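
    A minimal plotting sketch of the two curves exactly as stated above (Python/matplotlib; the second curve is the bound claimed in the cited preprint, not a standard textbook result):

        import numpy as np
        import matplotlib.pyplot as plt

        # Two claimed local-realism limits on S as functions of detection efficiency eta,
        # taken verbatim from the comment above.
        eta = np.linspace(0.5, 1.0, 500)
        garg_mermin = np.where(eta > 2/3, 4/eta - 2, np.nan)   # (4/eta) - 2, for eta > 2/3
        especial = 4 - 2 * eta**2                               # 4 - 2*eta^2 (preprint's claim)

        plt.plot(eta, garg_mermin, label="Garg & Mermin: 4/eta - 2")
        plt.plot(eta, especial, label="claimed bound: 4 - 2*eta^2")
        plt.xlabel("detection efficiency eta")
        plt.ylabel("limit on the Bell quantity S")
        plt.legend()
        plt.show()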

    Fazal Majid #5
    “At some point we need to draw a line and put the local realists in the same box with creationists […]”.
    Really? Relativity is Local Realist, all other natural sciences are Local and Realist, all Classical Physics is Local and Realist, just Quantum Mechanics is not.
    When Quantum Mechanics makes predictions within the limits of Local Realism it works (?!), but when it involves entanglement (non-local) … sorry, but no one has ever shown it working, it is still a prediction.

  118. Scott Says:

    Teresa #117: You have to be joking.

    Logically, conceptually, all that’s going on in the new experiment is that it’s really hard to create an entangled pair—it takes like half an hour, with millions of failed trials—but once you do create one, you know you’ve created it even without having to measure it, and you can measure it with extremely high reliability, and use it to violate a Bell inequality. (With the one additional complication that you have to go ahead and measure the pair even before you know whether you succeeded in entangling it—but then you only count the run if it turns out that, indeed, the pair was entangled.)

    As I explained in the post, the detection loophole arose only because of the inability to ensure “fair sampling”: that is, it was possible that whether the measurement succeeded or failed was somehow correlated with the detector settings. The new setup eliminates this issue, since whenever the photon measurement succeeds, it ensures that the measurement of the entangled electron spins is going to succeed with probability almost 100%, in a way that’s completely causally isolated from Alice and Bob’s detector settings. You simply haven’t engaged with this point at all.

    Maybe a better way to say it is that the ball is now in your court: construct a local hidden-variable model that’s able to account for experiments like this one (and show that you understand how they differ from the previous experiments), and that will force people to do another experiment.
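
    To make the fair-sampling point concrete, here is a toy simulation (an illustrative sketch only, not a model of any real apparatus). A hidden variable that “guesses” the settings at the source, combined with a detector that silently drops every run where the guess was wrong, produces a perfect post-selected win rate; the identical model, with the keep/drop decision made independently of the settings (as in an event-ready scheme), can’t beat 3/4.

        import random

        def toy_run(trials, detection_depends_on_settings):
            # Toy local-hidden-variable model: the source "guesses" the settings in advance.
            kept = wins = 0
            for _ in range(trials):
                xg, yg = random.randint(0, 1), random.randint(0, 1)  # hidden variable: guessed settings
                x, y = random.randint(0, 1), random.randint(0, 1)    # actual settings, chosen later
                if detection_depends_on_settings and (x, y) != (xg, yg):
                    continue                      # "detector" silently drops the run
                a, b = 0, xg & yg                 # deterministic outputs that win iff the guess was right
                kept += 1
                wins += ((a ^ b) == (x & y))
            return wins / kept

        print(toy_run(100000, True))    # ~1.00: setting-dependent losses fake a perfect "violation"
        print(toy_run(100000, False))   # ~0.625: with the keep/drop decision independent of the
                                        #         settings, the same model can't beat 0.75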

    And as for the results of further experiments—I would bet any amount of money that all of them will continue to show Bell inequality violation, as all the previous ones have. But after making two very large scientific bets (one of which I clearly “won,” though there was no counterparty, the other of which is still outstanding), I promised my wife that I wouldn’t make any more.

  119. Scott Says:

    Jay #116:

      So my question for you: don’t you think there some need to rewrite or complete the Bell’s theorem so that the name holds its promises?

    No, I don’t. All the examples of “loopholes” that you mentioned in your comment, are just straightforward cases of violating the assumptions of the theorem. And crucially, the assumptions are clear and natural and easy to state. The simplest set of assumptions, to my mind, are the ones I discussed in the post (which I also called local realism or the “a(x,w), b(y,w)” assumption). Namely, Alice and Bob share classical random bits, they can’t communicate (neither by a wormhole nor anything else), they get bits x and y that are truly uniformly random and independent, and they can use any strategy they want to try to win the CHSH game with the maximum possible probability.

    Under the above assumptions, no one who can think clearly would deny that the Bell inequality follows. (Note that, in the whole dialogue with Travis and others above, all parties agreed that these assumptions suffice to imply the Bell inequality—the discussion was just about weaker assumptions that also suffice to imply it, and whether the BI should be stated in terms of those weaker assumptions.)

    Furthermore, the local realism assumption has the advantage that it doesn’t depend on anything slippery about the physics of spacetime, though the anti-Bell cranks keep wanting to bring spacetime into the picture. It’s simply about the logic of a game. And the conclusion is that, if Alice and Bob win the CHSH game more than 75% of the time, then they’re not in the “a(x,w), b(y,w)” situation that any sane person would have assumed they were in, before quantum mechanics and then the Bell inequality came along and caused people to say “oh, well, of course they’re not in that simpleminded situation; who ever would’ve imagined they were?”
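
    For contrast, here is a short numpy sketch (standard textbook angles, purely illustrative, not the settings of any particular experiment) of what measuring a shared entangled pair achieves in the same game:

        import numpy as np

        Z = np.array([[1, 0], [0, -1]])
        X = np.array([[0, 1], [1, 0]])
        phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

        def obs(theta):
            # +/-1-valued observable at angle theta in the X-Z plane
            return np.cos(theta) * Z + np.sin(theta) * X

        def corr(alpha, beta):
            # correlator <A(alpha) (x) B(beta)> in the state |phi+>
            return phi_plus @ np.kron(obs(alpha), obs(beta)) @ phi_plus

        alice = [0.0, np.pi / 2]        # Alice's measurement angles for x = 0, 1
        bob = [np.pi / 4, -np.pi / 4]   # Bob's measurement angles for y = 0, 1

        S = (corr(alice[0], bob[0]) + corr(alice[0], bob[1])
             + corr(alice[1], bob[0]) - corr(alice[1], bob[1]))
        print(S)              # ~2.828, above the local-realist bound of 2
        print(0.5 + S / 8)    # ~0.854 win probability in the CHSH game, above 3/4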

  120. Jay Says:

    Ok, thx for the discussion.

  121. John Sidles Says:

    Worthy meditations  Another worthy meditation (as it seems to me anyway) that relates to quantum locality is Boaz Barak’s column in this week’s Windows on Theory, titled Is computational hardness the rule or the exception? (September 22, 2015). Excerpts:

    As a cryptographer, I am used to the viewpoint that computational problems are presumed hard unless proven otherwise. […]

    In most other fields of study, people have the opposite intuition. Algorithms researchers are generally in the business of solving problems, and not giving up any time the obvious approach doesn’t work, and so take the viewpoint that a problem is presumed easy unless it is proven hard. […]

    So is hardness the rule or the exception? I don’t really know. Perhaps the physicists should decide, as the problems they study come from nature, who presumably doesn’t have any bias toward hardness or easiness. Or does she? After all, nature needs to compute as well (at least in the forward direction).

    Nature’s bias towards easiness  For me at least, Barak’s intuition that “nature has a bias towards easiness” meshes nicely with two preprints by Bob Wald that deal with quantum nonlocality in a reasonably student-friendly way: “The Formulation of Quantum Field Theory in Curved Spacetime” (arXiv:0907.0416) and “The history and present status of quantum field theory in curved spacetime” (arXiv:gr-qc/0608018v1).

    When particle-pictures fail  Here the point is that particle-like approximations to field theory fail intrinsically at large length-scales, for reasons that Wald’s articles take pains to set forth:

    The particle interpretation/description of quantum field theory in flat spacetime has been remarkably successful — to the extent that one might easily get the impression that, at a fundamental level, quantum field theory is really a theory of “particles”. [… However] the attempt to describe quantum field phenomena in curved spacetime has directly led to a viewpoint where symmetries and notions of “vacuum” and “particles” play no fundamental role.

    Delft’s 1D dynamical universe  We see this even in the Delft experiments, whose single-mode optical fibers present a one-dimensional field-theoretic universe whose evolution (despite many decades of efforts) is far from unitary. Concretely, the NV-centers in the Delft experiment couple to fabulously elongated optical modes … the longitudinal length of the fiber-modes (about 1 km) being \(\sim10^9\) times greater than the transverse width (about 1 um).

    Optimistic quantum hopes  Having achieved a modal aspect-ratio of \(\sim10^9\), surely an aspect-ratio of \(\sim10^{12}\) or even \(\sim10^{15}\) is achievable? Alternatively, with larger technological investment, is it plausible that the photon losses associated to Delft-type experiments can be reduced to arbitrarily low levels?

    Skeptical quantum expectations  The sobering answer is “no,” not even in principle, so long as our experiments are made of electrons and nuclei from the periodic table, aggregated as condensed matter by QED forces.

    Summary Field theory teaches that quantum dynamics is unitary, but not unitary throughout arbitrarily large spatial volumes … such that quantum information is nonlocal, but not nonlocal across arbitrarily large spatial distances … and quantum simulation is computationally difficult, but not so computationally difficult as to be formally outside of PTIME.

    Conclusion  Boaz Barak’s intuition that “nature has a bias towards easiness” is physically plausible in Bob Wald’s field-theoretic universe, in which locally unitary evolution spanning arbitrarily large spatial volumes is infeasible by reason of inescapably lossy quantum field-dynamics.

  122. Teresa Mendes Says:

    Scott #118
    No, Scott, I’m really not joking.

    What is an ideal Bell’s experiment? An ideal Bell’s experiment requires that you detect ALL pairs of quantum objects, so that you can use Bell’s theorem.
    In a non-ideal Bell’s experiment, when you can’t detect ALL pairs of quantum objects prepared, you don’t meet the efficiency criterion for the ideal experiment.
    So you will always have an inconclusive experiment if that efficiency is less than 76.5% [critical efficiency level – J. Especial], because the Sn for Local Realism is bigger than the one for Quantum Mechanics.

    Let’s analyse this experiment: you choose to only detect 6.4*10^-9 of the prepared pairs. Why? In your own words “it takes like half an hour, with millions of failed trials—but once you do create one, you know you’ve created it even without having to measure it” – so you know your sample is completely biased, is absolutely non-random; “and you can measure it with extremely high reliability, and use it to violate a Bell inequality”.
    No you can’t! Efficiency is the number of pairs you detect, divided by the number of pairs you prepare. You can’t choose the sub-set and affirm 96% efficiency!
    If you want to treat this experiment as a Bell’s test then the efficiency is [6.4*10^-9]*96%*[6.4*10^-9], that means in the order of 10^-18, meaning that the Sn for the limit of Local Realism is very close to 4, and the experiment is absolutely inconclusive.

    That is not an experiment worth reproducing – not with biased sampling – and that is the difference from all the other Bell’s experiments already made.

    Since all the other earlier major Bell experiments were analysed in detail in J. Especial’s 2012 article, there remain 2 recent experiments worth discussing:

    1) The Canary Islands experiment, which was discussed earlier here in this blog: https://www.scottaaronson.com/blog/?p=902#comment-43319 Also inconclusive.

    2) Marissa Giustina’s experiment, where she calculated a value that violates Local Realism, really really close to Local Realism’s limit, and very very far from the maximum limit predicted by Quantum Mechanics. [Suggesting that an experimental error could have occurred.] That is the experiment that is worth repeating, making sure you guarantee space-like separation.

    These tests to disprove Local Realism have been going on since 1972. With inconclusive results, or, in the case of M. Giustina’s, needing confirmation by other teams.
    Do you know any other theory, or class of theories, that took as long to disprove?

  123. fred Says:

    Scott #119

    I guess I’m not a sane person because I still don’t get it:

    “[…] (which I also called local realism or the “a(x,w), b(y,w)” assumption) […]Furthermore, the local realism assumption has the advantage that it doesn’t depend on anything slippery about the physics of spacetime”

    You said
    “a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y.”
    What is “w” exactly? What do you mean by “before the game started”?
    Why is the w used in both functions the same?
    Shouldn’t it be
    a = a(x,w) and b = b (y,w’)
    If Alice and Bob are separated they wouldn’t have access to the same world information, no? (their “light cones” are separated).
    If Alice and Bob are separated by D, and we consider the “w” accessible to Alice in her lab, if something happens in the lab that could influence Alice’s decision of a (if a is truly a general function of w), like some atom of uranium randomly decaying which switches some neuron in Alice’s brain, that influence will only reach Bob after the delay D/C.
    If you insist that w=w’, then this can only happen when Alice and Bob are simultaneously at the same location.
    That’s why I don’t get why you say that spacetime doesn’t enter the picture.

  124. Scott Says:

    fred #123: Because the Bell inequality is an impossibility statement, it’s fine to grant Alice and Bob more than they would actually have in reality (it can only make the result stronger). So, we can allow both of them access to absolutely everything about the state of the world, except that Alice doesn’t see y (or anything that depends on y), and Bob doesn’t see x (or anything that depends on x). That’s all you need to prove the inequality.

  125. fred Says:

    Scott #124
    Ohh, ok, thanks Scott!

  126. Tom Says:

    A bitter-sweet human aspect to this: the technique they used to build remotely entangled qubits was proposed by the late Sean Barrett, who would have been amused, but not surprised, by the result (ref [26] Barrett and Kok, of the arXiv version).

  127. Mateus Araújo Says:

    Mark #115:

    I find it actually sad that these philosophers insist on using some language that is bound to confuse and annoy people instead of getting their point across, because they actually do have a point to get across. If they were younger, I would call this trolling.

    What Travis is trying to say is that quantum mechanics violates local causality (right), and that local causality follows from relativity (which I don’t think anybody would agree).

    But the point that I think he has is that the collapse of the wavefunction is against the spirit of relativity; to even define it one must choose a particular foliation of space-time, which is quite weird. Which brings us again to the war of words: for Travis, “Quantum Mechanics” means “Quantum Mechanics with the collapse postulate”, whereas for you, it means “Quantum Mechanics without the collapse postulate”, as relativity is a symmetry of the unitary part of quantum field theory (as for the non-unitary part, my impression is that people try to pretend that it does not exist).

    The question, then, becomes whether “Quantum Mechanics with the collapse postulate” is against the spirit of relativity or not. The standard position in the field is to regard the quantum state as something less than real, so that its collapse is not something that should worry us. It goes without saying that Travis finds this unacceptable. But at this point I actually agree with him: I don’t think it is a fruitful position; we learn more if we actually take the formalism seriously. I find it more satisfactory to deny the collapse postulate, and go with the Many-Worlds interpretation.

  128. John Sidles Says:

    Two more student-accessible articles by Bob Wald (that until recently were not known to me), which grapple concretely with classical and quantum issues of non-locality, are “Teaching the mathematics of general relativity” (American Journal of Physics, 2006, arXiv:gr-qc/0511073) and “Quantum fields in curved spacetime” (Physics Reports, 2015, arXiv:1401.2026, with Stefan Hollands).

    Teaching dilemmas  The first article “Teaching the mathematics …” (2006) plainly states dilemmas that afflict every student and teacher:

    If one takes the time to teach the mathematical material properly, one runs the risk of turning the course into a course on differential geometry and doing very little physics.

    On the other hand, if one does not teach it properly, then one is greatly handicapped in one’s ability to explain the major conceptual differences between general relativity and the prerelativistic and special-relativistic notions of spacetime structure.

    Quantum exacerbation  Wald’s 2006 article presents no entirely satisfactory resolution of this dilemma, and indeed the Hollands-Wald 2015 Physics Reports article argues (implicitly) that this dilemma afflicts the present-day teaching of quantum dynamics even more acutely than that of classical dynamics.

    At the present time, relatively little attention is generally paid to the issue of whether quantum field theory can be given a mathematically precise and consistent formulation — as compared with such issues as the ‘fine tuning’ that would be necessary to give small values to the cosmological constant and Higgs mass if one views quantum field theory as the quantum theory of the modes of fields lying below some energy cutoff.

    Here one point is that even discussing these issues requires an understanding that is grounded jointly in physical intuition and in mathematical conceptions of universality and naturality.

    Missing texts Still more acutely, there are at present no expositions (known to me), of comparable physical intuition and mathematical naturality to Wald’s essays, that ground students in conceptions of quantum information theory, quantum thermodynamics, and quantum transport theory.

    Shortfalls in time  One problem is practical: nowhere in undergraduate curricula is there sufficient time to even begin such a unified program of study. Another reason is grounded in equity: few students show equal aptitude in physical intuition and mathematical maturity … so how can a course of study seek to cultivate both skills at once … without boring and frustrating at least half of the class?

    A modest proposal  Students seeking to achieve, jointly, an optimistic appreciation of the potentialities of scalable quantum computation and BosonSampling, and a skeptical appreciation of the dynamical plausibility of Kalai’s postulates (and thereby achieve a broad-based appreciation of Bell experiments) are well-advised to follow Ed Witten’s example: don’t study mathematics or physics at the undergraduate level; then at the graduate level (and continuing throughout one’s career) seek persistently to ground one’s understanding in explanations that are jointly mathematically universal and physically motivated.

    The radiant future  It is marvelous to contemplate the extra time that will be freed for quantum research, once the undergraduate teaching of mathematics and physics is abolished.

  129. Teresa Mendes Says:

    Hi Scott

    While waiting for your reply, and browsing through your blog, I stumbled upon your recent D-Wave post from last month, and I remembered I had planned to write you some comment about this subject some time ago.

    Perhaps now is the right moment to do it, because it is also relevant for the subject we are discussing here – Bell’s experiments.

    =======================================
    Dear Scott,

    I’ve noticed your last comment about D-Wave (https://www.scottaaronson.com/blog/?p=1679), where you mention that you have checked out this article:

    “How “Quantum” is the D-Wave Machine?” Seung Woo Shin, Graeme Smith, John A. Smolin, Umesh Vazirani http://arxiv.org/pdf/1401.7087v2.pdf

    “By contrast, a minimal requirement for even a special purpose quantum computer is that it exhibits large-scale quantum behavior. For all practical purposes this means large-scale entanglement. ”

    Then I have also noticed that D-Wave announced: “This is the first peer-reviewed scientific paper that proves entanglement in D-Wave processors,” Dr Colin Williams, director of business development at D-Wave, told BBC News. http://www.bbc.com/news/science-environment-27632140 with reference to their article : “Entanglement in a quantum annealing processor”, T. Lanting et al. Phys. Rev. X 4, 021041 – Published 29 May 2014, http://arxiv.org/abs/1401.3500

    Both Lanting et al. and Shin et al. refer to the 2009 Ansmann experiment: “Violation of Bell’s inequality in Josephson phase qubits”, M. Ansmann, H. Wang, R. C. Bialczak, M. Hofheinz, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, M. Weides, J. Wenner, A. N. Cleland, and J. M. Martinis, Nature 461, 504 (2009).

    My thought is that it seems that there is a need for the physics community to check the quantum foundations more carefully [namely Korotkov’s Bell inequality].

    J. Especial’s article http://arxiv.org/pdf/1205.4010v1.pdf, which I presented to you in April 2012 (https://www.scottaaronson.com/blog/?p=902#comment-43183), has a detailed discussion of Ansmann’s experiment, and concludes:

    “The measured value of the Bell-quantity, can be seen to be not only compatible but actually consistent with the upper bound predicted by local-realism, for the observed probability of crosstalk. From a local-realistic point of view, this agreement is understandable: Since an optimization search was performed on all relevant parameters of this experiment to maximize the measured value of S, maximum use of available crosstalk was achieved. ”

    So, you were absolutely right when you wrote about D-Wave 2: “there’s pretty good evidence for quantum effects like entanglement at a ‘local’ level, but at the ‘global’ level we really have no idea.”

    How confident are you, now, about D-Wave’s large-scale entanglement ?

    ========================================

    Perhaps this time you will find the time to have a closer and more detailed look at J. Especial’s article. I believe that now that you have a better understanding of EPRB experiments you will appreciate it more. [Don’t forget to look at the counterfactual-definiteness issue, because that is a key point.]

    Looking forward to your comments,

    With my best regards
    Teresa

  130. Scott Says:

    Teresa #129: I don’t think the situation with regard to demonstrating large-scale entanglement (or even small-scale entanglement) in the D-Wave machine has changed much in the last few years. I really wish D-Wave would place higher priority on experiments that cleanly demonstrate quantum behavior, such as violating the Bell inequality (we know this is possible with superconducting qubits; the Schoelkopf group and various others have done it). I’ve criticized D-Wave for that in the past and will continue to criticize them for it.

    On the other hand, I also think that, if you believe that it’s not even possible to violate the Bell inequality—that some way must be found to make Nature satisfy local realism, a century of quantum-mechanics experiments be damned—then in some sense, you’re not even part of the conversation about these issues. That just seems like flat-eartherism to me: a position that was already untenable when the evidence for a round earth was subject to theoretical loopholes, but now, with this new Bell test, the astronauts have returned from orbit.

  131. Daniel Says:

    Teresa #122:

    I’m far from an expert in this, but isn’t there a confusion here between detection efficiency and preparation efficiency? If I understood Scott’s explanation correctly, the point here is that, although creating a pair requires several attempts, once you do create it you know it, and you are almost sure to measure it and include it in the data. You could store it in a freezer for a year, until you accumulated thousands of identical Bell pairs, and *then* do the whole experiment.

    Imagine the following scenario: the physicist wants to do a Bell test experiment. Everyday he wakes up, and tosses a coin. If it lands heads, he goes to the lab and collects some measurements. If it lands tails, he stays at home and plays with his kids. Would you object to the experimental results in this case? Would you say that there might be a conspiracy between the coin he tosses at home, and how the particles will behave when he arrives at the lab? Maybe particles are more strongly correlated on even-numbered days?

    This is of course an exaggeration, with the purpose of making a point: as I understand it, the detection loophole questions the fair sampling assumption, claiming that maybe the particles conspire in such a way that the ones that you did not measure would have biased enough the statistics to erase out the correlations you observed. But what does it say about the particles you didn’t even prepare in the first place?

    One last thing which bugs me in these kinds of discussions is the whole “No Bell violation => No entanglement => No quantum computers”. Is there any result that directly connects the ability to violate Bell inequalities to the ability to perform quantum computation? I thought results like the DQC1 model (or this paper by Van den Nest: http://arxiv.org/abs/1204.3107) showed that the role of entanglement in QC is way more subtle than that.

  132. Mark Srednicki Says:

    Mateus #127, various points:

    1) I claim that the “collapse postulate” is today a completely untenable intellectual position. There is no mathematical model of it that is not in immediate and dramatic contradiction with experiment (usually—and the irony here is palpable—because the explicit models are non-relativistic!). Everything we know about the world says that it is continuously evolving according to unitary evolution of a quantum state, governed by a hamiltonian that is a volume integral over a sum of products of local quantum fields, and which transforms under Lorentz transformations as the time component of a four-vector. No one has the slightest idea how to modify this mathematical structure to allow for “collapse”, while still preserving all of its successes in accounting for experimental results (including its agreement with key aspects of relativity: no faster-than-light signaling, time dilation, the relativistic energy-momentum relation, etc etc).

    2) In accord with Scott #119, I think violations of the Bell inequalities would feel just as weird in a Galilean universe as they do in an Einsteinian one. “Spooky action at a distance” that does not fall off with distance, and that is accomplished with no observable transfer of energy or any other physical quantity, does not require relativity to be weird. See David Mermin’s “Is the moon there when nobody looks?” for a beautiful exposition that is consistent with this point of view.

    3) Weird does not mean wrong. To us, the world seems weird, but that’s because we grew up in a branch of the wave function where classical physics is usually an excellent approximation. The bad intuition that this has produced in us is our problem to overcome.

    4) I have no idea what you mean by “the non-unitary part” of quantum field theory.

  133. Teresa Mendes Says:

    Daniel #131

    Detection efficiency and preparation efficiency.
    It’s exactly the same thing. From a mathematical point of view they are indistinguishable.
    All systems prepared identically must be measured, detected and accounted for, otherwise the experiment is non-ideal and the efficiency of the experiment is the ratio of detected to prepared systems.
    In my opinion, in this experiment they are not creating anything, because entanglement does not exist. From a LR point of view all they are doing is preparing a very large number of identical quantum pairs of systems and then, from this population, selecting a very small biased sample.
    If they wish to call that their population that is their problem, but I suggest they review their Statistics.

    Another coin story.
    In my school, if you toss a coin and don’t show up in class you will (probably) have a “no-detection” red mark in your teacher’s calendar book. You may just not toss a coin during weekends. If you do, you (probably) won’t get detected because (probably) the teacher is not there. 🙂

    No entanglement, no quantum computation?
    Without entanglement all you can have is Turing Machine equivalent computation.
    But I will share Scott’s answer, back in 2012, when I asked him the same question:

    Scott Says:
    Comment #155 May 1st, 2012 at 9:17 am
    Teresa: It requires computation “beyond the limits of classical computation,” which in turn would indeed almost certainly require entanglement.

    https://www.scottaaronson.com/blog/?p=902#comment-43373

    Thank you for your questions, Daniel.

  134. Scott Says:

    Yes, on the relation between QC and entanglement (if not on any other question here… 😉 ), I guess my position is closer to Teresa’s than to Daniel’s from comment #131.

    It’s true that there are models like DQC1 that seem to get a quantum speedup using only tiny amounts of entanglement. And it’s also true that no one has proved that you couldn’t get a quantum speedup, even if your QC were in a separable mixed state between applying each gate and the next one.

    On the other hand, even supposing there are quantum algorithms that can get speedups using mixed states that happen never to be entangled (or entangled much), it’s still the case that implementing those algorithms would require gates that do have the ability to generate large amounts of entanglement, when they’re applied to other states.

    In other words, I think the broader point stands: I don’t know of any possible way the world could be, such that you could produce quantum speedups but you couldn’t produce huge amounts of entanglement.

    And more concretely, I agree with Teresa that, if Bell test experiments had found results that were not compatible with the standard quantum predictions, but were instead compatible with some local hidden-variable model that disagreed with the quantum predictions—then that would indeed be a scientific revolution that, almost as an afterthought, could invalidate the whole existing foundation of QC. Of course, if the hidden-variable description only took over at (say) 1km separations, and QM still applied at shorter distances, then for all practical purposes QC might remain unaffected. But at any rate, everything about QC would need to be reexamined in light of the new discovery.

    Thus, I think it’s relevant to this conversation that no Bell experiment has ever found anything of the kind! And more broadly, no experiment has ever found any deviation from the predictions of standard QM, with exponential wavefunctions and all.

    Of course, we’re still far from being able to do every quantum experiment we want (e.g., building a scalable QC)—and it’s tremendously important to try to do those experiments, partly because of the possibility that we might one day discover a deviation from QM! But at the same time, people should understand that, given the experiments we can do today, there’s nothing more that Nature could possibly be doing to send the message that QM is exactly correct.

  135. Mateus Araújo Says:

    Mark #132:

    That’s the language problem. I think I agree with you, but apparently I did not manage to communicate that due to differences in language.

    About your point 4: I thought it was very clear that I was referring to the collapse postulate. I understand that you do not accept it, but since for a large part of the community the very word “quantum mechanics” already implies the collapse postulate, it is better to be clear about it, in order to avoid generating confusion.

    I’m a bit confused about your point 3: I was pointing out how unnatural the collapse postulate is, which you seem to agree with, but you’re pointing out that I should accept “weird” results? Also, I find it hard to imagine a branch of the wavefunction where classical physics is not a good approximation for large, complex systems. For me, that would require statistics that deviate significantly from the Born rule, which again I find hard to imagine.

    About your other two points, I agree completely, I just do not understand why you brought them up.

  136. Daniel Says:

    I didn’t mean to say that entanglement is unnecessary for QC, we know that’s not really the case. I agree that, at the end of the day, there will be some entanglement somewhere. I’m talking about the connection between the type / amount of entanglement necessary for QC, and that necessary in Bell experiments.

    In Van den Nest’s paper, I believe he shows a tradeoff between the overall amount of entanglement and the number of times you have to repeat the computation. You can have a computation that has little entanglement at any point in time, at the cost of having to repeat the computation several times.

    Or take BosonSampling. The only entanglement there is “mode entanglement”, which several people in the quantum optics community don’t call entanglement at all. And the measurements you would need to perform a Bell experiment with this mode entanglement aren’t very realistic AFAIK.

    All I meant to say is that the role of entanglement might be more subtle than the blunt connection between Bell inequalities and QC that, say, Joy Christian likes to argue.

  137. Daniel Says:

    OK, I apologize for bringing this up. I was reading on the previous discussion from 2012 or something and got the arguments mixed up. Forget I mentioned QC, then 😉

  138. fred Says:

    I often imagine how exciting it must have been to be a physicist when the Michelson-Morley experiment results came out, then the publication of GR and subsequent attempts to validate it experimentally.
    It’s rare that such a mind-bending theory can be validated by relatively simple experiments.

  139. Mark Srednicki Says:

    Mateus #135: apparently it was my turn to be confusing. Though I was responding to your statements in #127, my response was more in the nature of general comments on the issues under discussion.

  140. Teresa Mendes Says:

    Scott #130
    Are you talking about this Schoelkopf et al. experiment with the suggestive title “Violating Bell’s inequality with an artificial atom and a cat state in a cavity”?
    http://arxiv.org/pdf/1504.02512v1.pdf
    On page 5:
    “We benchmark the capabilities of this detection scheme with direct fidelity estimation and Bell test witnesses, which both reveal non-classical correlations of our system.”

    Sorry, not a Bell test. An imaginary test is not a real experiment.
    I guess Nature is on “my” side. 🙂

    Any other ?

  141. Scott Says:

    Teresa #140:

    For the record:

    You cite an experiment that the experimenters themselves clearly state shows the exact opposite of what you believe (i.e., it demonstrates the reality of non-classical correlations). The whole physics community, to within measurement error, also accepts this and other similar experiments as showing the exact opposite of what you believe.

    OK, but you reject these experiments, for unexplained reasons, as “imaginary tests” and “not real experiments.”

    And then you gloat that, because of this dismissal that took place within your head, you guess Nature is on “your” side!

    It seems to me that you’d only get to gloat if you could point to any experiment, anywhere, that showed results in “your” direction. Until then, the best you can say is that you’re playing defense. And you’re losing—like, the QM side is scoring thousands of goals on you—but the point I’m trying to get across is that even if it weren’t, you still wouldn’t be scoring any goals of your own. An experiment that fails to prove QM to every last skeptic’s satisfaction, is far from an experiment that gives any positive indication that Nature works the way you think it does.

    But I fear that, as with Joy Christian, I might as well be arguing with a cucumber. And unfortunately, when I talk to people who flaunt their unreasonableness the way you’re doing, eventually I lose my temper. For some reason, I can’t just let this stuff go unchallenged when it’s on my blog; I’m the polar opposite of a smooth politician that way.

    And because I know this about myself—because I foresee the descent into madness and want to prevent it, because I choose to spend my life on activities that are more fun and interesting than debating aggressive Bell-deniers—I regret to say that my exchange with you on my blog is now over. You can take your crusade elsewhere.

  142. John Sidles Says:

    Three Charitable Readings

    Mateus Araújo asserts (#127)  “As for the non-unitary part [of quantum field theory], my impression is that people try to pretend that it does not exist.”

    Mark Srednicki responds (#132)  “I have no idea what you mean by ‘the non-unitary part’ of quantum field theory.”

    Scott Aaronson hopes (in #32 of “Six Announcements”)  “Maybe there is a surprise in store that would make QC impossible—as I’ve said many times, that would be the scientific thrill of my life!”

    With a view to the Shtetl Optimized ‘spirit of transcending backgrounds and building a global shtetl’ the following is a charitably sympathetic reading of these three comments.

    Charitably moderating Mateus Araújo’s rhetoric  The sense of Mateus Araújo’s comment might reasonably have been preserved, and its message communicated more clearly, by muting its rhetorical excesses along the lines of “undergraduate courses and texts commonly do not grapple with the non-unitary parts of quantum field theory”.

    Carefully reading Mark Srednicki’s textbook  Professor Srednicki is the author of the introductory textbook Quantum Field Theory, which includes the chapters “Infrared divergences” and “Scattering in Quantum Chromodynamics” and “Wilson Loops, Lattice Theory, and Confinement”, as well as discussions throughout the text of zero-mass subtleties and pathologies.

    For QC builders and BosonSamplers, one teaching of Srednicki’s book is that the quantum field theory of unitary scattering processes encounters ever-more-severe difficulties in describing dynamics whose scales are simultaneously larger than a centimeter, longer than tens of picoseconds, and less than a few tens of \(\mu\)eV (said energy / space / time scales all being the same in natural units).

    These scales are the domain of condensed matter physics, where it is well-known that many quantum surprises have been found in recent decades, and many open problems remain to be solved. Thus Mateus Araújo’s comment (charitably read) is well-justified in reminding Shtetl Optimized readers that even the best introductory quantum field textbooks (like Mark Srednicki’s) cannot reasonably hope to explore all of the subtleties, pathologies, and surprises that are already known to be associated to condensed matter’s Dreckeffects.

    And this is to say nothing of the subtleties, pathologies, and surprises that are described in advanced textbooks like Bob Wald’s Quantum Fields in Curved Spacetime and Black Hole Thermodynamics (1994). There’s no very urgent need (as it seems to me) to postulate new low-energy quantum physics, when the old low-energy quantum physics is so mysterious.

    Optimistically preserving Scott Aaronson’s hopes  These considerations help us to appreciate that today’s field-theoretic Standard Model includes sectors that are manifestly non-perturbative and whose dynamics is not naturally described by unitary particle-scattering formalisms … sectors that encompass condensed-matter systems in general, and in particular encompass all of chemistry and biology, as well as all scalable QC proposals and existing BosonSampling experiments.

    So there’s ample reason to hope for thrilling discoveries of QC-obstructing / Kalai-compatible quantum surprises in coming decades, even — or better, especially — if it turns out that we live in a physical universe whose low-energy / macroscopic sector is formally described (to arbitrarily high precision) by the quantum dynamics of the Standard Model.

  143. Teresa Mendes Says:

    Scott #141
    Am I allowed a last post?
    Scott, I understand: your blog, your rules.
    But you “called” me, in your initial post.
    And, always respectful, I answered you and gave my reasons within the topic of discussion – claims of experimental violation of Local Realism. One, two, three, four, five experiments – the latest, or the most relevant for the QC arena.
    Then I “gloated” – my mistake, sorry.
    But I do believe Nature is on Local Realism’s side. Just as you believe, and stated, the opposite.
    And until a conclusive Bell’s experiment, both paradigms should be taken seriously as scientifically valid. And this is not happening. QM gets all the credit and “honours” (which physics student, nowadays, can apply for a doctorate degree in Local Realism?), all the media attention, all the funding – classic Kuhnian “normal science” behaviour – no questioning of the foundations is allowed.

    Not pretending to be Galileo,
    “In questions of science the authority of a thousand is not worth the humble reasoning of a single individual.”
    — Galileo Galilei

    PS1: I do not follow Joy Christian’s arguments – he refutes Bell’s Theorem. I don’t. I think Bell’s Theorem is a wonderful tool, for what, if alive, Bell should get a Nobel award in Physics.

    PS2: On Schoelkopf’s experiment, they don’t follow a EPRB protocol. As they say, it’s just a “benchmark”, a “what if” mental experiment.

  144. peter cameron Says:

    Scott, Teresa,…

    There is a fairly new perspective on nonlocality, growing from an understanding of how quantum impedances of the scale-invariant variety (quantum Hall, photon far field, Aharonov-Bohm effect, …) communicate quantum phase. The point is that invariant impedances cannot communicate energy, only phase.

    A paper presented to the 2013 Rochester Conference on Quantum Optics, Information, and Measurement explained how this can resolve Hawking’s paradox.
    https://www.osapublishing.org/abstract.cfm?URI=QIM-2013-W6.01

    A paper presented to the 2015 Barcelona Conference on applications of the geometric interpretation of Clifford algebra extended that understanding to quantizing gauge theory gravity, and is available in the pre-conference proceedings.
    http://www-ma2.upc.edu/agacse2015/3641572286-EP/

    So far, despite the impeccable credentials of both the American Optical Society (referees for the Rochester conference paper) and the geometric/Clifford algebra community (referees for the Barcelona paper), arXiv refuses to post these papers, and mainstream journals refuse to send them out to referees. You can find them and more at my author page.
    http://vixra.org/author/peter_cameron

    You can find a brief author bio at my LinkedIn page:
    https://www.linkedin.com/pub/peter-cameron/11/b73/403


  145. Richard Gill Says:

    Teresa #133

    You say “All systems prepared identically must be measured, detected and accounted for, otherwise the experiment is non-ideal and the efficiency of the experiment is the ratio of detected to prepared systems.”

    This sounds like a dogmatic (textbook?) statement, belonging perhaps to quantum mechanics. Or it is just a matter of a conventional definition of concepts such as “efficiency” – hence arbitrary, and possibly irrelevant to the present context.

    The Delft experimenters use statistical techniques (calculation of a statistical p-value) which depend on the randomness of the measurement settings, and *only* on that randomness. The decision whether or not to include a particular pair of measurements on the two fixed NV defects is made before those two measurements are made – indeed, before the settings with which those measurements will be made have even been determined.
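
    A minimal sketch of that logic (an illustration only, not the Delft analysis, and the trial counts below are hypothetical placeholders): if each trial’s settings are fresh random bits, then under local realism each trial is won with probability at most 3/4 no matter what happened in earlier trials, so a conservative p-value is simply a binomial tail at p = 3/4.

        # Illustrative sketch: conservative p-value for a CHSH-style experiment,
        # assuming fresh random settings on every trial.  The numbers passed in
        # at the bottom are hypothetical, not the published Delft counts.
        from math import comb

        def chsh_p_value(n_trials, n_wins, p=0.75):
            """P(at least n_wins wins in n_trials) if each trial wins with prob <= p."""
            return sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
                       for k in range(n_wins, n_trials + 1))

        print(chsh_p_value(200, 170))   # e.g. 170 wins out of 200 hypothetical trials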

    But as Scott says elsewhere, if you disagree, then go ahead and show by a computer simulation that the Delft result is easy to obtain under “local realist” physics.
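
    For anyone who wants to try, here is a minimal Python sketch of such a simulation (the particular strategy and the shared-randomness distribution are just illustrative choices). Alice’s output may depend only on her input x and the pre-shared randomness w, and Bob’s only on y and w; any such local-realist strategy wins the CHSH game about 75% of the time at best.

        # Minimal local-realist CHSH simulation (illustrative sketch).
        import random

        def run_chsh_trials(alice, bob, n_trials=100_000):
            """Estimate the CHSH win rate of a local-realist strategy:
            alice(x, w) and bob(y, w) share randomness w fixed in advance,
            but Alice never sees y and Bob never sees x."""
            wins = 0
            for _ in range(n_trials):
                w = random.random()                    # shared randomness, fixed before the inputs
                x, y = random.randint(0, 1), random.randint(0, 1)
                a, b = alice(x, w), bob(y, w)
                if (a ^ b) == (x & y):                 # win condition: a XOR b = x AND y
                    wins += 1
            return wins / n_trials

        # The classical optimum: always output 0.  No local strategy does better.
        print(run_chsh_trials(lambda x, w: 0, lambda y, w: 0))   # ~0.75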

  146. S. Thanksh Says:

    I also found helpful the link Travis #91 provided to his 2007 paper collecting and clarifying Bell’s original reasoning.

    “Ordinary QM is already nonlocal, and you don’t even need anything remotely as fancy as Bell’s theorem to see this. Just look at the theory (especially the collapse postulate!).”

    Another interesting way to consider this statement might be a thought experiment in which a macroscopically large collection of fermions (such as a neutron star) is approached by an additional neutron. Through what local physical mechanism does this neutron “calculate” the positions of all the others in order to choose a state for itself that is unique, thus satisfying the Pauli exclusion principle, while simultaneously obeying local causality?

  147. Simon Hofvander Says:

    like

  148. fred Says:

    Teresa #133

    “All systems prepared identically must be measured, detected and accounted for, otherwise the experiment is non-ideal and the efficiency of the experiment is the ratio of detected to prepared systems.”

    What does “identically” mean in this context?
    If several other “identical” experiments had been run by aliens on Betelgeuse 45 million years ago, we’d have to account for their results as well?
    If not, what are the minimum time and distance thresholds at which it starts to matter?

  149. jonas Says:

    Meanwhile, today’s xkcd strip (#1591: Bell’s Theorem) talks about this topic.

  150. ‘Einstein was wrong’, new research claims | Dear Kitty. Some blog Says:

    […] Bell inequality violation finally done right […]

  151. The anti-, an anti-anti-, my negativism, and miscellaneous | Ajit Jadhav's Weblog Says:

    […] because, recently, the MIT professor Scott Aaronson had a field day about hidden variables [^], though since then he seems to have moved on to some other things related to computational […]

  152. J Says:

    Didn’t Joy Christian construct an explicitly local realist model of the phenomenon studied in this experiment?

  153. Scott Says:

    J #152: No, he did not. But we already had two huge threads about his quackery on this blog, so I don’t want to reopen the subject now.

  154. Discourse in Delft | Quantum Frontiers Says:

    […] During my visit, Stephanie and Delft colleagues unveiled the “first loophole-free Bell test.” Their paper sent shockwaves (AKA camels) throughout the quantum community. Scott Aaronson explains the experiment here. […]

  155. Indrajit Says:

    I wanted to ask a question, Scott. What about the assumption of measurement independence used in Bell’s theorem? One can still build a local, deterministic hidden-variable model if one violates it. Also, such a model has been shown to be capable of maximal epistemicity in arbitrary dimensions.

    So why not count that as a remaining loophole?

  156. Scott Says:

    Indrajit #155: Are you talking about the “free will loophole”? If so, then I don’t regard that as a “real” loophole, since it’s empirically sterile (i.e., no experiment could close it even in principle). For more see my comment #21.

  157. Shtetl-Optimized » Blog Archive » Edging In Says:

    […] loophole-free Bell test that I blogged about here (Anton Zeilinger and Hans Halvorson discussed this in their Edge […]

  158. Steve Silverman Says:

    I believe your characterization of realism in your “Bell inequality violation finally done right” is totally inadequate. Nothing you say prevents her lab from having one of the pair of entangled photons. Any measurement yields a definite result.

    You say:
    “If you like: a=a(x,w) is a function of x and of the information w available before the game started, but is not a function of y.”
    Does that mean she can’t flip a coin to decide a if x = 0? Of course not. Does it mean she must determine the value of a without the use of her lab? Then what purpose does her lab serve? The rest of your comment on realism is mere flapdoodle.

    The proof that they can win with probability at most 3/4 requires the existence of all four of (a|0), (a|1), (b|0), (b|1). However, a given trial observes only two of them, e.g. (a|1) and (b|0). Where do the other two come from? They come from the assumption of realism (= determinism = hidden variables = counterfactual definiteness in this framework). Realism says that a is determined and may depend on whether Alice receives x = 0 or 1, i.e. there is a classical algorithm (perhaps based on the state of reality) that gives the values (a|0) and (a|1), e.g. (a|0) = (a|1) = 0 (or perhaps a is determined by the flip of a fair coin). The same goes for Bob. Mind you, in some cases only God knows the algorithm.

    Under the assumption that no information is passed (locality), the QM violation of the “at most 3/4” bound shows that realism is false.
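
    That counting argument can be checked by brute force with a small illustrative script: under realism, the four values (a|0), (a|1), (b|0), (b|1) all exist on each run, so a deterministic local strategy is just a choice of those four bits, and enumerating all 16 choices shows that none wins more than 3 of the 4 equally likely input pairs. Shared randomness only mixes such strategies, so it cannot help.

        # Enumerate all deterministic local strategies for the CHSH game.
        from itertools import product

        best = 0
        for a0, a1, b0, b1 in product([0, 1], repeat=4):
            # (a0, a1) are Alice's outputs for x = 0, 1; (b0, b1) are Bob's for y = 0, 1.
            wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                       for x, y in product([0, 1], repeat=2))
            best = max(best, wins)

        print(best / 4)   # 0.75 -- the local-realist (Bell/CHSH) bound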