6-photon BosonSampling

The news is more-or-less what the title says!

In Science, a group led by Anthony Laing at Bristol has now reported BosonSampling with 6 photons, beating their own previous record of 5 photons, as well as the earlier record of 4 photons achieved a few years ago by the Walmsley group at Oxford (as well as the 3-photon experiments done by groups around the world).  I only learned the big news from a commenter on this blog, after the paper was already published (protip: if you’ve pushed forward the BosonSampling frontier, feel free to shoot me an email about it).

As several people explain in the comments, the main advance in the paper is arguably not increasing the number of photons, but rather the fact that the device is completely reconfigurable: you can try hundreds of different unitary transformations with the same chip.  In addition, the 3-photon results have an unprecedentedly high fidelity (about 95%).

The 6-photon results are, of course, consistent with quantum mechanics: the transition amplitudes are indeed given by permanents of 6×6 complex matrices.  Key sentence:

After collecting 15 sixfold coincidence events, a confidence of P = 0.998 was determined that these are drawn from a quantum (not classical) distribution.

No one said scaling BosonSampling would be easy: I’m guessing that it took weeks of data-gathering to get those 15 coincidence events.  Scaling up further will probably require improvements to the sources.

There’s also a caveat: their initial state consisted of 2 modes with 3 photons each, as opposed to what we really want, which is 6 modes with 1 photon each.  (Likewise, in the Walmsley group’s 4-photon experiments, the initial state consisted of 2 modes with 2 photons each.)  If the number of modes stayed 2 forever, then the output distributions would remain easy to sample with a classical computer no matter how many photons we had, since we’d then get permanents of matrices with only 2 distinct rows.  So “scaling up” needs to mean increasing not only the number of photons, but also the number of sources.
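
To see why two distinct rows keep things classically easy (a standard observation, not anything specific to this experiment): if an n×n matrix A has a row r repeated k times and a row s repeated n-k times, then grouping the permutations according to which k columns get matched to the r-rows collapses the permanent to

\[
\mathrm{Per}(A) \;=\; k!\,(n-k)! \sum_{S \subseteq [n],\, |S|=k} \;\prod_{j\in S} r_j \prod_{j\notin S} s_j
\;=\; k!\,(n-k)!\,\bigl[x^{k}\bigr] \prod_{j=1}^{n} \bigl(x\,r_j + s_j\bigr),
\]

and the coefficient extraction on the right takes only polynomially many arithmetic operations.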

Nevertheless, this is an obvious step forward, and it came sooner than I expected.  Huge congratulations to the authors on their accomplishment!

But you might ask: given that 6×6 permanents are still pretty easy for a classical computer (the more so when the matrices have only 2 distinct rows), why should anyone care?  Well, the new result has major implications for what I’ve always regarded as the central goal of quantum computing research, much more important than breaking RSA or Grover search or even quantum simulation: namely, getting Gil Kalai to admit he was wrong.  Gil is on record, repeatedly, on this blog as well as his (see for example here), as saying that he doesn’t think BosonSampling will ever be possible even with 7 or 8 photons.  I don’t know whether the 6-photon result is giving him second thoughts (or sixth thoughts?) about that prediction.

79 Responses to “6-photon BosonSampling”

  1. John Sidles Says:

    Slow horses  For me (and many), the rate-of-progress in experimental quantum information theory — including linear-optics quantum computing and BosonSampling — has been like watching a horse-race in which, during the early going, the horses have taken \(2^n\) minutes to run the \(n\)-th furlong.

    Competing explanations  It’s logically consistent to say “horses are very unpredictable” and “the track is very muddy”, and to be optimistic that, later on in the race, the track will dry and the race pace will be sustained.

    But it’s also logically consistent to wonder (as Gil Kalai wonders) whether the nature of quantum horses and quantum racetracks is such that the quantum track is never going to dry, and no finite quantum race pace can ever be sustained.

    Unexpected news  Unexpected good news for young folks working at the racetrack is that, despite the muddy track, semiclassical track-conditioning devices are (unexpectedly) running faster and faster. In consequence, semiclassical track-conditioning enterprises are prospering and hiring, while purebred quantum horse-stables encounter considerable difficulties in sustaining their economic viability.

    New performative enterprises  Hmmm … maybe new forms of quantum racing are emerging, that show reasonable prospects of sustaining family-supporting jobs in ample numbers for young quantum racetrack workers?

    Conclusion  Scientifically speaking, the gap is steadily widening between the (unexpectedly) slow purebred quantum horses, and the (unexpectedly) fast semiclassical track-conditioners, and the reasons for this widening gap are incompletely understood.

    Kalai appreciation  If Gil Kalai’s ideas help to explain this widening quantum gap, then further development of Kalai’s inspiring ideas will have considerable value, both fundamentally and in concrete terms of practical engineering and medical research.

    Broader appreciation  These are exciting times at the quantum racetrack, and the O’Brien group, and also Shtetl Optimized, and Gil Kalai too, all have contributed greatly to sustaining this excitement, for which they richly deserve appreciation and thanks from everyone.

    Because after all

    “It were not best that we should all think alike; it is difference of opinion that makes horse races.”

       The Tragedy of Pudd'nhead Wilson
       by Mark Twain

  2. Rahul Says:

    So now that we are up to 6-photon events, do we know how the scaling behavior works?

    i.e. How many weeks of data gathering did it take to get these 15 sixfold coincidence events, versus the 5-photon, 4-photon, etc. studies?

    i.e. Are we at a point now where we can extrapolate the curve to how long a 45-photon BosonSampling run would take? I use 45 photons because, based on a recent analysis by John Sidles on another thread, it seems like 45 photons might be the magic case where a BosonSampling device samples a distribution inaccessible to any conventional computer of today.

  3. asdf Says:

    Fwiw here is an arxiv preprint of the (paywalled) article:

    http://arxiv.org/abs/1505.01182

  4. Scott Says:

    Rahul #2: You can try doing extrapolations, but all they’ll tell you is the boring information of what happens if you scale up with the sources, detectors, etc. that are available today (hint: it’s not pretty). What you really want to know, but what’s harder to predict, is how the technology will improve. E.g., getting 15 coincidences took a long time with the technology of today, but it presumably would’ve taken much longer with even the technology of 2012 (or else it would’ve been done then).

    And no, there’s no “magic number” of photons where BosonSampling becomes classically hard. If our complexity conjectures are correct, then it just gradually gets harder, probably roughly like 2 to the number of photons.

  5. Daniel Says:

    Personally, what I find awesome about this paper is the “universal linear optics” part.

    Sure, it was an experiment involving 6 photons, and the number of photons really is the current bottleneck for BosonSampling, and this is pretty cool. As Scott pointed out, though, the way these photons are created pushes the physical “number of photons” boundary, but not really the complexity boundary – I suppose Gil Kalai could fall back on saying that this didn’t really push the “high complexity regime” much, and that what will be really hard is controlling the noise when producing 6 photons from 6 different sources (which is also a qualitatively different technological challenge, otherwise I guess the authors would have done it).

    Now, what I think is an important highlight of this paper is not only that they have 6 photons, but that they have a *completely reconfigurable*, on-chip, 6-mode interferometer. Most previous BosonSampling papers used some restricted type of linear-optical network – either a quantum-walk-like network using only 50:50 beam splitters, or one which had all beam splitters parameters arbitrary but fixed (i.e. you could build whatever Haar-random interferometer you wanted, but you had to build a new chip for each new one), or at most only a few parameters were reconfigurable. This paper, though, did hundreds of different unitaries (both for BosonSampling and other linear-optical tasks) with the same piece of glass, just by changing the voltage on different parts of the glass. I think this bodes well not only for BosonSampling, but also for other linear-optical tasks (e.g. the measurement adaptations needed in KLM quantum computing).

  6. Gil Kalai Says:

    “Well, the new result has major implications for what I’ve always regarded as the central goal of quantum computing research, much more important than breaking RSA or Grover search or even quantum simulation: namely, getting Gil Kalai to admit he was wrong.”

    I am flattered!

  7. Rahul Says:

    Scott #4:

    By magic number I merely meant the number of photons where you can BosonSample without being able to calculate the distribution on state-of-the-art classical computing hardware in a reasonable time.

    Is it fair to say this happens at somewhere around n=45 today? i.e. That’s the limit of a classically verifiable permanent calculation?

    If I interpret things correctly, that's a very crucial target, because it is at that point that one would consider the Extended Church-Turing Thesis overthrown?

  8. Rahul Says:

    Scott #4:

    “You can try doing extrapolations, but all they’ll tell you is the boring information of what happens if you scale up with the sources, detectors, etc. that are available today”

    I think this isn’t “boring” information at all. It’s like saying the whole of technology & implementations is boring and the fundamentals underneath are the only interesting area.

    e.g. Sure, we do not know how fast technology will change in the future, but at least we can see how progress has gone over the last few years. Is attacking the ECT via BosonSampling a target we can reasonably try to reach in the next few years, or is it likely to take decades?

    I for one would love to know whether going from n=5 to n=6 was as hard as, harder than, or easier than going from n=4 to n=5.

    Especially in the context of the "universal linear optics" setup that this paper develops, it should be less messy to study the scale-up in n while keeping all other setup details constant.

  9. Joshua Zelinsky Says:

    Daniel #5,

    Yes, the fact that the devices they are using are adjustable looks really neat and possibly exploitable. It makes one wonder whether we are getting to the point where we should start asking if BosonSampling can be used for practical ends.

    Completely speculating for a moment: one thing that sort of comes to mind is in the context of understanding BCS superconductivity, where the electrons in Cooper pairs act essentially like bosons. Trying to understand when, and at what temperatures and pressures, the transition to superconductivity occurs is a major ongoing matter. See especially the recent work with sulfur hydride becoming a conventional superconductor at record high temperatures.

  10. Scott Says:

    Rahul #7: There are many numbers that you could consider reasonable targets. Even at 20-25 photons, you could argue that BosonSampling was sampling from the permanental probability distribution "faster" (in some appropriate sense) than a classical simulation would be—thus putting strain on any theory that invokes an efficient classical computation behind the scenes to account for our observations. Of course 45 photons would be more impressive still—though too far beyond that, and you couldn't even verify the results with a classical computer. (I just back-of-the-enveloped it, and got that at one billion arithmetic operations per second, computing a 45×45 permanent would take about a month.)
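
    For anyone who wants to redo that back-of-the-envelope, here's a minimal sketch assuming Ryser's formula, which needs roughly n·2^n arithmetic operations (the exact constant depends on the implementation, but it lands in the same weeks-to-a-month ballpark):

```python
# Back-of-the-envelope check of the "about a month" figure, assuming Ryser's
# formula with Gray-code updates (roughly n * 2^n arithmetic operations).
# This is a cost estimate, not a benchmark of any actual implementation.

def ryser_op_count(n: int) -> float:
    """Approximate arithmetic operations for Ryser's formula on an n x n matrix."""
    return n * 2.0 ** n

ops_per_second = 1e9          # one billion arithmetic operations per second
n = 45
seconds = ryser_op_count(n) / ops_per_second
print(f"n = {n}: ~{seconds:.2e} s (~{seconds / 86400:.0f} days)")
# n = 45: ~1.58e+06 s (~18 days), i.e. on the order of weeks to a month
```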

    In any case, I doubt there will be a single crucial moment when the Extended Church-Turing Thesis gets overthrown. More likely that better and better experiments will simply put more and more strain on it, until we reach a point where only Leonid Levin still believes it (even Gil having jumped ship).

  11. Rahul Says:

    Scott #10:

    "I doubt there will be a single crucial moment when the Extended Church-Turing Thesis gets overthrown. More likely that better and better experiments will simply put more and more strain on it, until we reach a point where only Leonid Levin still believes it"

    Easy, easy, we are still at n=6. 🙂

    For all we know the ECT might still have some tricks up its sleeve? Wait & watch I guess.

    With this new universal optics setup I’m really excited & optimistic that the scale up to larger n will be fast.

  12. Gil Kalai Says:

    1) Of course, this BosonSampling achievement is very impressive, and I congratulate the researchers very warmly.

    2) My detailed prediction regarding BosonSampling is that (noisy) distributions witnessed by systems of photons can be approximated by low-degree polynomials, and that physical computing devices whose computational power is so low (it is well below bounded-depth computation) will not, in practice, be comparable (let alone superior) to classical computers.

    A similar prediction refers to quantum circuits with a small number of qubits. In short, with this primitive computational power you will not be able to manifest quantum supremacy, or even the very basic building blocks of quantum error-correction. The only mechanism to go above this primitive computation is a mechanism for creating classical computation.

    3) Of course, this prediction, like any prediction based on asymptotic reasoning, does depend on the constants. Those will be determined by experiment. But again, our experience with the way computational complexity interacts with reality is that we don't expect very, very low-level computational devices (in terms of their asymptotic behavior) to make interesting computations in reality.

    If one understands this position, she (or he) does not really need me to make the call. She can examine the next progress and make the call for me. (If this position makes no sense to you, or you simply disagree, you are welcome to discuss it with me if you wish.)

    4) The two options we have (and I think Scott and I agree on that) are roughly:

    A) The bad news is that quantum computers will not be built and quantum supremacy is not a real phenomenon. The good news is that even without quantum computers, there is no in-principle (computational-complexity) obstacle to our ability to simulate nature and robust experimental outcomes.

    B) The bad news is that there are in-principle obstacles to simulating quantum physics on digital computers: we can witness robust behavior in nature or in experiments that we will never be able to reproduce on our digital computers. The good news is that quantum computers will someday be built.

    Both these options are stated in a fairly optimistic way. (And there are caveats for both.) I think that A) is more plausible and that it will prevail. (And also that it will be important.)

    Of course, as ever, the idea that understanding nature requires referring to some superior capabilities is tempting and surely has explanatory power. On the *scientific* side our instincts should be to try to find a way around it.

  13. John Sidles Says:

    Rahul observes (#11)  “For all we know the ECT might still have some tricks up its sleeve?”

    One such ECT-affirming trick being the prediction and optimization of superconducting transition temperatures in the new hydride superconductors, per arXiv:1506.08751v1, arXiv:1502.03017, and many more recent theoretical studies (with a nod to Joshua Zelinsky’s comment #9).

    The ECT paradox  The engineering of devices to ingeniously refute the ECT depends crucially upon algorithms that ingeniously affirm the ECT.

    This paradoxical mutual affirmation is one reason why many folks (including me) do not foresee a near-term resolution of the ECT/Kalai-Harrow debate.

  14. Job Says:

    I’m not at all familiar with Boson sampling.

    From what I could gather, modeling the probability distribution of bosons scattered according to some operator requires computing its permanent, which is #P-complete?

    And that means Boson sampling could be used to demonstrate an exponential speedup of a (non-universal) QC over a classical machine?

    Can we expect the same type of back-and-forth surrounding D-Wave’s quantum annealing versus classical algorithms or is there less room for debate with Boson Sampling?

    It sounds like there are fewer classical algorithms for computing the permanent.

    In Boson Sampling, how do we pick the operator? I imagine that the permanent for some class of unitary operators will be efficiently computable by classical algorithms.

    How likely is it that, to provide an exponential speedup over classical machines, we’d need to pick an operator that corresponds to a quantum circuit that uses universal gates?

    Is the permanent known to be difficult to compute even for unitary operators composed of non-universal gate sets (e.g. Hadamard and CNOT)?

    Do we even need to worry about the operator? The focus seems to be on the number of photons – I'm missing something here.

  15. Rahul Says:

    John Sidles #13 says:

    “The engineering of devices to ingeniously refute the ECT depends crucially upon algorithms that ingeniously affirm the ECT.”

    Isn’t this reminiscent of people trying to build more & more ingenious perpetual motion devices?

    Till eventually we formulated a new law that put an end to such idiocy, at least in serious circles.

    It could be that we are missing some such law that fundamentally makes overthrowing ECT impossible. Of course, if so, it would be very interesting to know what that law is.

  16. Joshua Zelinsky Says:

    Scott #10, Is there some reason you see Levin as more likely to be intractable about his position than Gil?

  17. fred Says:

    John #1
    I wonder whether/when we’ll observe an equivalent of Moore’s law for quantum electronics.

  18. fred Says:

    My understanding is that Moore’s law is purely based on observations from years of classical electronics engineering techniques in scaling.
    For quantum electronics, the arguing sides don’t seem quite sure if the limitations are purely engineering or theoretical (unforeseen limitations in QM).
    I think that the intellectual jump made by the complexity theorists in going from "bit" to "qubit" has created a lot of unfortunate expectations.

  19. Scott Says:

    Joshua #16:

      Is there some reason you see Levin as more likely to be intractable about his position than Gil?

    Gil says that he thinks scalable QC is impossible (for reasons I’ve never understood), but he admits that maybe he’s wrong, and he says that in any case the science being done to find out is wonderful—so much so that Gil is now an active member of Hebrew University’s quantum computing center.

    Leonid is absolutely certain scalable quantum computing is impossible, believes he’s proven it based on an argument about “metric vs. topology” that no one besides him has ever understood, and considers (I think) almost all research in quantum computing either incompetent or fraudulent, not even justified by the scientific interest of figuring out why it won’t work.

    With Gil, the idea that the universe is a huge purposeful conspiracy to prevent QC (while still allowing classical computation) is left simmering beneath the surface—you need to talk to him for a while, before you realize that lots of individually-puzzling statements make sense only under that assumption. With Leonid, by contrast, the idea of an omnipotent, diabolical, Occam’s-Razor-flouting, QC-thwarting conspiracy is loudly and colorfully stated to anyone who will listen.

  20. fred Says:

    Is there a free version of the paper? (accessible to outsiders like me?)

  21. Joshua Zelinsky Says:

    Hmm, I didn’t realize that Levin’s position was that much more extreme than Gil’s, but I’ve only read about both their positions rather than discussed them in person, so it is possible some amount of nuance gets lost that way. (I’m pretty sure that when I was an undergrad Gil and I never discussed this. That was a long time ago, and quantum computing was definitely not on my radar screen at the time.)

    I’m also relieved that not understanding Levin’s metric vs. topology claim is not a failing on my part, or if it is, it is a failing that I share with good company.

  22. Scott Says:

    fred #20: Yes, see the comment of asdf #3.

  23. Scott Says:

    Job #14: These are all questions answered in our paper and PowerPoint slides and other blog posts, but briefly…

    BosonSampling doesn’t let you calculate the permanent of a matrix of your choice; it lets you sample from a probability distribution over matrices that’s biased toward matrices with large permanents. But using the #P-completeness of the permanent, one can argue that that sampling task is already probably hard for a classical computer.

    Yes, there’s a danger of BosonSampling results getting overhyped, just like with quantum annealing, and one should carefully guard against it. At least with BosonSampling, (1) we have theoretical confidence that the underlying problem is something that admits an exponential quantum speedup (which isn’t the case for the kinds of optimization problems quantum annealing is used for), and (2) there’s not the huge distorting influence that comes from the claim of direct practical applications.

    While there are lots of possibilities for the operator, my and Arkhipov’s proposal was to choose a Haar-random unitary transformation U. One can then prove that small submatrices of U will look like i.i.d. Gaussian matrices, and one can then at least formulate a clean conjecture (as we did in the paper), that approximating the permanents of i.i.d. Gaussian matrices is #P-hard.

    BosonSampling is not based on qubits at all, so the basic “gates” that one uses to construct U are not things like Hadamard or CNOT at all. Instead they’re 1- and 2-mode beamsplitters. FWIW, in this paper Adam Bouland and I prove that any nontrivial 2-mode beamsplitter is universal for linear optics; there are no non-universal cases there analogous to Hadamard+CNOT.
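
    For intuition, here's a toy sketch of the sampling rule in code (a simplification under the collision-free assumption and at tiny sizes; it's illustrative only, not anything like a real experiment's analysis pipeline):

```python
# Toy illustration of the BosonSampling rule: n photons enter the first n input
# modes of an m-mode Haar-random interferometer U, and a collision-free outcome S
# (one photon in each mode of S) has probability |Per(A_S)|^2, where A_S is the
# n x n submatrix of U with columns = occupied input modes and rows = modes in S.
import itertools
import numpy as np

def haar_random_unitary(m, rng):
    """Sample an m x m Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))          # fix the phases so the distribution is Haar

def permanent(a):
    """Naive permanent via explicit permutations; fine for tiny n."""
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

rng = np.random.default_rng(0)
n, m = 3, 6                             # toy scale: 3 photons in 6 modes
U = haar_random_unitary(m, rng)
A = U[:, :n]                            # columns of U for the occupied input modes
total = 0.0
for S in itertools.combinations(range(m), n):        # collision-free outcomes
    total += abs(permanent(A[list(S), :])) ** 2
print(total)   # < 1: the remaining probability sits on outcomes with collisions
```

    (Scaling this up is exactly the problem: the permanent call is the part that blows up exponentially.)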

  24. wolfgang Says:

    Scott,

    if both you and Gil are rational scientists, then an Aumannian dialog between both of you should quickly lead to agreement about the probability that scalable qc is achievable.
    Furthermore, this is certainly a purely technical discussion, because the question is independent of philosophical questions about the interpretation problem.

    So then why is such agreement *not* expected anytime soon (a Bayesian estimate on my part)?

    Perhaps there is utility in not agreeing; after all, even in academia one gets more attention for not agreeing with the majority view than for just being one of many members of the mainstream.

  25. Gil Kalai Says:

    “Scott (#19): “Gil says that he thinks scalable QC is impossible (for reasons I’ve never understood)…,”. Hmm, let be double check that I understand Scott’s view and reasons then. Here is a summary of Scott’s view as I see/understand it:

    1) Failure, in principle, of quantum computers will be much more surprising and interesting than building them.

    2) Failure in principle for QC amounts to a breakdown of QM, or to major changes of similar magnitude in our understanding of QM.

    3) All that is needed for building quantum computers is that the noise rate be dropped below the threshold. It will take time to achieve, but there is no reason to think it will not happen.

    4) That noise cannot be an in-principle obstacle to quantum computing is already a consequence of an observation of Bernstein and Vazirani, that quantum computers are robust against one-over-polynomial error.

    5) There is no reason that, even without quantum fault-tolerance, BosonSampling could not be scaled to 20-50 bosons, thus giving a convincing demonstration of quantum supremacy.

    Right? Is there something important I missed?

  26. fred Says:

    Gil #25
    The debater who has fully understood and integrated the other side’s arguments in its own reasoning is automatically more likely to be right? 😛

  27. fred Says:

    http://arstechnica.com/security/2015/08/nsa-preps-quantum-resistant-algorithms-to-head-off-crypto-apocolypse/

  28. John Sidles Says:

    Per fred’s link (#27) …

    Ars Technica’s hype  “Quantum computing threatens crypto as we know it.”

    Synergistically skeptical counter-hype  “Crypto as we know it” is further threatened — even synergistically threatened — by Kalai-style performative skepticism:

    “For if the optimistic quantum light teaches that quantum computers can break the Suite B cryptographic algorithms, and the skeptical quantum dark teaches that quantum computers are innately lossy — hence noisy, hence simulable in PTIME — then it follows that only quantum-resistant algorithms can resist PTIME classical attacks.”

    Both conclusions are incredibly obvious, aren’t they Mandrake? Obvious to the NSA at least! 🙂

    Conclusion  Scott’s style of quantum optimism and Kalai’s style of quantum skepticism alike become more interesting when we seriously endeavor to appreciate them simultaneously.

  29. fred Says:

    Can someone explain why this is usually referred to as “Linear Optics”, when a key phenomenon seems to be the Hong-Ou-Mandel effect, which is described as a “non-linearity”?

  30. Serge Says:

    I think quantum computers aren’t feasible, but for relativistic reasons rather than quantum. Indeed, the analogy I had forged between integer factoring and atomic nuclear fission seems more and more relevant to me. If you managed to show that the Feynman diagram of a nuclear fission is isomorphic to the flow chart of Shor’s algorithm, that would represent the missing connection that proves the energy expenditure required to run Shor’s algorithm. Thus, the roadblock to running Shor’s algorithm on large inputs would be the algorithmic side of Einstein’s equation E=mc².

    However, the existence of an efficient quantum algorithm for factoring is perhaps also the reason why no super-polynomial lower bounds have been found for classical integer factoring. Because, once you’ve translated Shor’s algorithm into quantum logic, the predicate “classical” becomes meaningless (because undefinable). Indeed, is it possible to make a formal distinction between classical and quantum Turing machines within quantum logic? Therefore, if you had a proof that no polynomial classical algorithm can exist, then you’d also have a quantum proof that no polynomial algorithm at all can exist, be it classical or quantum. And this is notoriously wrong thanks to Shor.

    Let’s end with quoting Leonid Levin:
    “It is worth noting that the reasons why QC must fail are by no means clear; they merit thorough investigation. The answer may bring much greater benefits in the understanding of basic physical concepts than any benefits factoring devices could promise.”
    So I’d bet that Levin, just like you Scott, considers at least some of the research in quantum computing to be amply justified by the scientific interest of figuring out why it won’t work… 😉

  31. Scott Says:

    fred #29:

      Can someone explain why this is usually referred to as “Linear Optics”, when a key phenomenon seems to be the Hong-Ou-Mandel effect, which is described as a “non-linearity”?

    It’s called “linear optics” because, if you write everything in terms of photon creation operators, those transform in a strictly linear way (i.e., by a Bogoliubov transformation). In this context, a “nonlinearity” would mean a force that coupled photons—or equivalently, a term in the Hamiltonian that was quadratic or higher-order in the creation operators. (Correction: Cubic or higher-order, as Daniel points out below.)
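
    For concreteness, with one standard convention a beamsplitter of angle \(\theta\) acts as

    \[
    a_1^\dagger \;\mapsto\; \cos\theta\, a_1^\dagger + i \sin\theta\, a_2^\dagger,
    \qquad
    a_2^\dagger \;\mapsto\; i \sin\theta\, a_1^\dagger + \cos\theta\, a_2^\dagger ,
    \]

    i.e., the creation operators just get mixed by a 2×2 unitary (other phase conventions shuffle the factors of i around, but nothing nonlinear in the creation operators ever appears).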

    I don’t know who calls the Hong-Ou-Mandel dip a “nonlinearity,” or why, but I would never describe it that way. I’d just call it a manifestation of quantum interference.

  32. Daniel Says:

    Fred #29, Scott #31

    One way that I like to think of it is that linear optics is any physical transformation that is completely defined by the way it affects single photons—the Hong-Ou-Mandel effect may look weird, and may look like it’s doing something different when two photons are present compared to when there’s just one, but it really isn’t. When you describe what a 50:50 beam splitter does to a single photon, the HOM effect just pops out automatically.
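
    Concretely, with one common 50:50 convention (\(a_1^\dagger \mapsto (a_1^\dagger + a_2^\dagger)/\sqrt{2}\), \(a_2^\dagger \mapsto (a_1^\dagger - a_2^\dagger)/\sqrt{2}\); other phase conventions move the signs around but not the conclusion):

    \[
    a_1^\dagger a_2^\dagger\,|0,0\rangle \;\longmapsto\;
    \tfrac{1}{2}\bigl(a_1^\dagger + a_2^\dagger\bigr)\bigl(a_1^\dagger - a_2^\dagger\bigr)|0,0\rangle
    \;=\; \tfrac{1}{2}\bigl(a_1^{\dagger\,2} - a_2^{\dagger\,2}\bigr)|0,0\rangle
    \;=\; \tfrac{1}{\sqrt{2}}\bigl(|2,0\rangle - |0,2\rangle\bigr),
    \]

    so the \(|1,1\rangle\) amplitude cancels (equivalently, the permanent of the 2×2 beamsplitter matrix vanishes) and both photons always exit the same port, with nothing beyond the single-photon description used anywhere.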

    Of course, this mental picture of mine is really just a rephrasing of what Scott said 🙂

    Tiny correction, though: linear optics comprises those transformations where the Hamiltonian is at most quadratic (e.g. has terms like \(a_1^\dagger a_2\)) in the creation and annihilation operators. These quadratic terms act linearly on the creation operators as “hopping terms” (e.g., a term that annihilates a photon in mode 1 and creates another in mode 2). An actual interaction Hamiltonian would need cubic or higher-order terms.

  33. fred Says:

    Scott #31, Daniel #32 – thanks!

    “I don’t know who calls the Hong-Ou-Mandel dip a “nonlinearity,””

    I was just reading “An introduction to boson-sampling” (but couldn’t find any other quotes qualifying the effect as a non-linearity)

    http://arxiv.org/pdf/1406.6767v1.pdf

    “One final caveat – in almost all papers on the topic of
    boson-sampling, the interferometer is described as a passive linear device with non-interacting bosons (photons
    in this case). However, the Hong-Ou-Mandel effect (followed by a projective measurement) imparts an effective
    nonlinearity and hence an effective interaction at each
    beamsplitter. The presence or absence of a photon in one
    input mode radically changes the output state of a second
    photon in another input mode. This `interaction’ between
    indistinguishable particles, known as the exchange inter-
    action, arises simply from the demand that the multi-
    particle wavefunction be properly symmetrized. While
    not a `force’ in the usual sense, it can give rise to quite
    noticeable effects. […] It is therefore a misnomer to describe these interferometers as linear devices with non-interacting bosons.”

  34. Daniel Says:

    fred #33

    Oh, I see the problem now.

    I think this wasn’t a very good choice of words, to be honest; I think it generates even more confusion than what the authors wanted to dispel. What I think is key is the parenthesis in “the Hong-Ou-Mandel effect (followed by a projective measurement) imparts an effective nonlinearity”.

    The Hong-Ou-Mandel effect itself is a perfectly linear-optical transformation, described by a quadratic (i.e. noninteracting) Hamiltonian, and is only a manifestation of interference, as Scott mentioned. However, projective measurements are *not* linear in that sense. You can, in fact, talk about “measurement-induced nonlinearities”, where you can show that by using extra photons + linear optics + projective measurements you can in fact implement a nonlinear (e.g. quartic in the creation operators) Hamiltonian (see e.g. http://arxiv.org/abs/quant-ph/0305082). In my opinion this is, implicitly, the main content of Knill, Laflamme and Milburn’s seminal paper, where they first showed that adaptive measurements could replace nonlinear interactions in quantum optical computing.

    So my understanding of what the authors meant to say is that you shouldn’t underestimate linear optics since you can use exchange “interaction” – i.e. the bosonic statistics – together with measurements to implement effective nonlinearities. But notice that BosonSampling, for example, explicitly forbids adaptive measurements in the first place, so none of what I said shows up there anywhere.

  35. asdf Says:

    Meanwhile D-Wave is now claiming 1000+ qubits and other handwaving.

    http://www.dwavesys.com/blog/2015/08/announcing-d-wave-2x-quantum-computer

  36. Rahul Says:

    Scott #31 Re. “Linear Optics”:

    Don’t these experiments often use spontaneous parametric down-conversion (SPDC)?

    Maybe that’s the source of the non-linear optics argument?

    Isn’t SPDC a non-linear optical process?

  37. Nathan Montgomery Says:

    asdf #35

    Is that a Scott Signal I see in the sky?

  38. John Sidles Says:

    Shtetl Optimized readers seeking a performative appreciation of the engineering challenges associated to high-efficiency photonics are well-advised to search the arxiv for “superconducting nanowire”.

    ————
    Article I  “Ultimate low system dark count rate for superconducting nanowire single-photon detector” (arXiv:1507.01743, 2015) reminds us that in high-efficiency BosonSampling applications, the photon-source modifies an outgoing vacuum-state that couples (hopefully with near-perfect fidelity) to the incoming vacuum-state of the detector, such that source-noise is coupled to detector-noise (necessarily with the same near-perfect fidelity).

    The preprint describes how to “tune” the incoming vacuum-state so as to reduce the detector dark-noise (which is a trick we MRFMers have routinely used too). Otherwise it’s all-too-commonly observed that detector-fluctuations destabilize source-fluctuations.

    Conclusion I  Textbook formalisms that dynamically decouple source-noise from detector-noise — commonly by throwing-away plenty of photons — aren’t all that helpful in designing scalable BosonSampling experiments, in which source-currents and detector-currents necessarily are near-perfectly correlated.

    ————
    Article II  Speaking of detector noise-currents, what’s going on in high-efficiency photon-detectors at the microscale level? “Vortex assisted mechanism of photon counting in superconducting nanowire single photon detector revealed by external magnetic field” (arXiv:1507.01743, 2015) reminds us that a great deal of photon-detector/photon-source photonic engineering at present is empirical cut-and-try.

    Conclusion II  Given the appreciable prevalence (for the present) of cut-and-try design methods, there’s little ground to be confident, and ample ground to be cautious, in predicting the feasibility (or not) of scalable BosonSampling technologies.

    ————
    Book III  Speaking of design methods, it’s a pleasure to join Quantum Frontiers poster Xie Chen in directing attention to her outstanding new book (with coauthors Bei Zeng, Duan-Lu Zhou, and Xiao-Gang Wen) Quantum Information Meets Quantum Matter — From Quantum Entanglement to Topological Phase in Many-Body Systems (http://arxiv.org/abs/1508.02595).

    Conclusion III  Shtetl Optimized readers seeking a deeper appreciation of Gil Kalai’s remark (in comment #12 above) that “(noisy) distributions witnessed by systems of photons can be approximated by low-degree polynomials” will read Chapters 6-9 of Zeng-Chen-Zhou-Wen (on topology, entanglement, and matrix/tensor product states) as a performative fleshing-out of Gil’s notions.

    Concluding hope  Hopefully the horse-race between quantum optimists and quantum skeptics still has plenty of race-track ahead of it.

  39. Scott Says:

    Nathan #37:

      Is that a Scott Signal I see in the sky?

    Yeah, yeah, Scott-man sees the signal, he’s just … a little out of shape these days …

  40. Alexander Vlasov Says:

    The particular method used for the calculation of the 0.998 confidence is not quite clear, due to some ambiguity in interpreting the rather short description. I’ve also seen the reference they mentioned, but there is some ambiguity there as well, because it suggests using the same experimental equipment to compare with the “classical” case. The “classical” term is used for the same experiment with distinguishable photons. Such a method of experimental proof indeed looks simplest, but it is not clear if it was used for the experiment with 15 events mentioned here, because they also wrote about an alternative method of checking: comparison with some calculated values.
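
    For orientation, here is a generic sketch of the kind of Bayesian likelihood-ratio comparison between the indistinguishable-photon and distinguishable-photon models that earlier BosonSampling validation work has used; the per-event probabilities are made-up placeholders, and whether this matches the Bristol group's actual procedure is, as noted above, unclear from the short description:

```python
# Generic Bayesian model-comparison sketch (an illustration, not the paper's
# procedure): update the posterior probability that the observed events come
# from the quantum (indistinguishable-photon) model Q rather than the classical
# (distinguishable-photon) model C.  The per-event probabilities are made-up
# placeholders; in a real analysis they would be |Per(U_S)|^2 under Q and
# Per(|U_S|^2) (permanent of the entrywise squared moduli) under C.
import numpy as np

def posterior_quantum(p_quantum, p_classical, prior_q=0.5):
    """Posterior probability of the quantum model after all events, from a flat prior."""
    log_odds = np.log(prior_q / (1 - prior_q)) + np.sum(np.log(p_quantum) - np.log(p_classical))
    return 1.0 / (1.0 + np.exp(-log_odds))

# 15 hypothetical sixfold-coincidence events, each somewhat more likely under Q than C:
p_q = np.full(15, 0.012)   # placeholder per-event probability under the quantum model
p_c = np.full(15, 0.008)   # placeholder per-event probability under the classical model
print(posterior_quantum(p_q, p_c))   # ~0.998 for this particular made-up likelihood ratio
```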

  41. John Sidles Says:

    Alexander Vlasov observes (charitably)  “The particular method used [by the Bristol group] for calculation of the 0.998 confidence is not quite clear due to some ambiguity.”

    Yes, it takes considerable scientific chutzpah to ascribe such a high confidence-level to such a scantily described hypothesis-test.

    To mention one crucial point of ambiguity, in models of quantum computation it is entirely feasible (even generically feasible) to specify noise-levels sufficiently large to allow PTIME simulation, and yet the observed dynamics is still grossly non-classical. The semi-obvious implication is that “0.998 confidence” does not mean “0.998 confidence that high-order permanental quantum amplitudes are necessary to simulate the data”.

    And yet to be fair, to describe any realistic hypothesis-test in sufficient detail to enable replication of the confidence-level calculation (or even critical assessment of it) would require a longish article in itself (or even a thesis).

    Conclusion  Hopefully there are longer articles in the works, in accord with the Mephistophelian duality principle that cumulative advances in methods for simulating BosonSampling experiments are of comparable scientific interest to cumulative advances in methods for performing BosonSampling experiments.

  42. Gil Kalai Says:

    “Gil is on record, repeatedly, on this blog as well as his (see for example here), as saying that he doesn’t think BosonSampling will ever be possible even with 7 or 8 photons.”

    Yes! But this prediction refers to k bosons with 2k modes (and not with 2 modes). Here is the relevant paper.

    So for k=8 the dimension of the Hilbert space in my prediction is roughly 500,000, and not 9 as in the case of two modes.
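
    The counting here is just the standard bosonic state-counting: k photons in m modes span a Hilbert space of dimension \(\binom{m+k-1}{k}\), so

    \[
    \binom{3k-1}{k}\Big|_{k=8} \;=\; \binom{23}{8} \;=\; 490{,}314
    \quad\text{for } m = 2k,
    \qquad\text{versus}\qquad
    \binom{k+1}{k}\Big|_{k=8} \;=\; \binom{9}{8} \;=\; 9
    \quad\text{for } m = 2 .
    \]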

    Certainly, understanding the envelope of what is experimentally possible, in general, and the 2-mode case, in particular, is very interesting. For the reasoning explained in this comment (second item), I expect that the envelope of what is possible for experimental BosonSampling will fall well below the “quantum supremacy” territory. (My prediction also applies to physical implementations of FermionSampling, although that is in P.)

  43. John Sidles Says:

    In furtherance of the time-honored scientific tradition of Mephistophelean duality, please let me suggest that the Gil Kalai and Guy Kindler article “Gaussian noise sensitivity and BosonSampling” (arXiv:1409.3093) and the Carolan et al article “Universal linear optics” (arXiv:1505.01182) could both have beneficially been far longer:

    • a far longer version of the Kalai/Kindler theory article, by virtue of greater realism in a detailed analysis, and

    • a far longer version of the Carolan et al experimental article, by the converse virtue of greater detail in a realistic analysis.

    Prediction  In coming decades, both hopes are likely to be fulfilled many times over.

    That’s why many folks (definitely including me) enjoyably foresee that the optimistic-vs-skeptic quantum horserace still has a very long way to run.

    And that’s why BosonSampling is a terrific topic, for which Scott Aaronson and Alex Arkhipov deserve great credit.

  44. Alexander Vlasov Says:

    Gil #42, the output anyway has 6 modes and dimension 6! = 720, which is bigger than the dimension for 9 qubits. I am not sure that the input is so crucial – it is similar to complaining about why we use the state |0000…0> as the input of many quantum algorithms.
    And finally, due to noise, the initial space may spread into a bigger dimension, confirming my doubts about overcoming the ECT using noise.

  45. John Sidles Says:

    PS  The conclusion of comment #43 should have noted that Gil Kalai too deserves great credit, for demonstrating that the quantum-skeptical literature can aspire to comparable mathematical and scientific quality to the quantum-optimistic literature … and also for unfaltering good humor.

  46. John Sidles Says:

    Alexander Vlasov argues “due to noise, the initial [state-]space may spread into a bigger dimension, confirming doubts about overcoming the ECT using noise.”

    And yet, a key tenet of the Church of the Larger Hilbert Space — specifically, Nielsen and Chuang’s Theorem 8.2: Unitary freedom in the operator-sum representation — affirms concretely that Lindblad processes realized as dimension-spreading noise can indistinguishably be realized as dimension-shrinking measurement processes.

    In other words, the Quantum Scriptures ingeniously accommodate Mephistophelean Duality, by spurring optimistic racehorses and skeptical racehorses with equal efficacy.

    Conclusion  Quantum heaven intrinsically resembles the paradise of Mark Twain’s Extract from Captain Stormfield’s visit to Heaven (1907): it’s a paradise that is divinely, even hilariously, arranged so as to finely balance the strivings of quantum optimists and quantum skeptics.

    Which is (semi-obviously) the very best kind of quantum paradise to which we mortals can reasonably aspire.

  47. Jay Says:

    @asdf

    Amusingly, back in 2013 D-Wave was claiming ×3600. Now they claim ×600, with twice as many “qubits” and half the noise. It’s not clear whether they should further decrease the number of qubits or further increase the noise to improve performance. 🙂

  48. fred Says:

    I’m not clear why ppl who think that noise/error levels are gonna be the limiting factor can’t just write a simulation to demonstrate this (with various models of noise/error).

  49. Joshua Zelinsky Says:

    fred #48,

    Well, if one thinks that the level of noise is going to skyrocket very near the point where classical simulations become impractical, simulations of smaller instances aren’t necessarily going to tell you that much.

  50. Serge Says:

    The problem with quantum computers is that you must build a significantly more powerful machine each time you increase the size of your input data by one unit. The power complexity of the quantum model is perhaps linear, but with a proportionality constant of the order of the speed of light squared, due to the mass-energy equivalence. In this, quantum computers are reminiscent of particle colliders. Like those, they’re unlikely to leave the laboratory anytime soon. The polynomial-time behavior of quantum factoring is surely a theoretical marvel, but it’s also a practical rip-off. The nice time behavior is obtained at the expense of a prohibitive energy model.

  51. James Gallagher Says:

    @fred #48

    I think they don’t know the exact Hamiltonian governing the evolution of the BosonSampling system – you can assume an ideal QED model or similar, but even that would be ridiculously complicated to simulate to sufficient accuracy, and even that probably isn’t exactly how Nature works.

    So adding a random noise element isn’t even close to being possible in a useful manner for a simulation of this type of physics.

  52. Scott Says:

    Serge #50: I’m not going to change your mind, but maybe I should say for the benefit of others reading that the idea that QCs have a “power requirement of order the speed of light squared” is complete nonsense. Firstly, the units aren’t right! And secondly, a QC is not a nuclear reactor! It’s not converting mass to energy or vice versa in any relevant way. The energy requirements come from much more prosaic sources—it’s just whatever amount of energy you need to cool the qubits down, manipulate them, etc. (In practice, I believe the refrigeration costs typically dominate.)

  53. John Sidles Says:

    Scott reminds readers “The energy requirements [in BosonSampling] come from much more prosaic sources […] In practice, I believe the refrigeration costs typically dominate.”

    And yet on the other hand, it’s true too that not every energy/entropy cost associated to BosonSampling is scientifically prosaic.

    In standard photonic practice, the photon-emitter current fluctuations are decoupled from the photon-absorber current fluctuations by either (or even both) of two methods, the first method being thermodynamically wasteful and the second method being thermodynamically subtle.

    The wasteful isolation method  Make the photon-source bright, and dump most of the photons. In the original Hanbury-Brown and Twiss experiment, the photon-emitter current was the entire output of the star Sirius, while the photon-absorber current was a few picoamperes (at an estimate) of photoelectrons.

    The subtle isolation method  Interpose a non-reciprocal optical isolator between the photon-source and the photon-sink. This provides two more tunable/designable BosonSampling elements, namely the (optionally quantum-squeezed) vacuum states injected at the isolator ports.

    MRFM experiments commonly use both wasteful and subtle isolation methods simultaneously, and ultra-advanced experiments like LIGO are beginning to experiment with sophisticated vacuum-squeezing isolation.

    The associated quantum thermodynamics of optical isolators and decouplers is a tricky subject … as attested by a marvelous first-person account “Realizing squeezing: an interview with Carlton Caves” (2013), which appeared in LIGO Magazine (as an interview conducted by Michael Landry).

    If these isolation/decoupling physics-challenges aren’t easy for a highly experienced (and outstandingly talented) researcher like Carlton Caves, then how difficult are these challenges for less-experienced mortals?

    That’s why it’s far from clear — far from clear to me at least, and perhaps far from clear to anyone — whether the extra modes associated to optical decouplers will turn out to be a blessing or a curse in regard to scalable BosonSampling; the principle of Mephistophelean duality affirms both eventualities with equal weight.

    Conclusion  Realistic models of over-simplified BosonSampling experiments aren’t easy; over-simplified models of realistic BosonSampling experiments aren’t easy either, and realistic models of realistic BosonSampling experiments are a mighty challenging proposition indeed.

    So very challenging — and yet so very revealing — are realistic simulations of realistic BosonSampling experiments, that further progress would be heartily welcomed by researchers of all persuasions and temperaments.

  54. a Says:

    Is there a reason not to believe brain cannot do bosonsampling?

  55. Scott Says:

    a #54: Too many negatives there, but no, there’s no reason not to believe the brain can’t do BosonSampling, and there’s every reason to believe it can’t. I mean, can you sample probability distributions defined in terms of permanents? I can’t even sample ones defined in terms of determinants—and it gives me a lot of trouble to do 2-digit multiplication. Why would anyone even imagine BosonSampling was the kind of thing people would be good at?

  56. Alexander Vlasov Says:

    John #46, in my comment I was just responding to the idea that the dimension is not enough. In fact, in a previous paper by the same group (also mentioned in the post) they directly mention a Hilbert-space dimension > 50000.
    Your “Mephistophelean Duality” sounds like yet another manifestation of “observer solipsism” – if some system violates the ECT, but we cannot measure that (due to noise or other reasons), then the system does not violate the ECT. But I was just complaining about that particular interpretation.

  57. Gil Kalai Says:

    Alexander: “Your “Mephistophelean Duality” sounds like yet another manifestation of “observer solipsism” – if some system violates the ECT, but we cannot measure that (due to noise or other reasons), then the system does not violate the ECT. But I was just complaining about that particular interpretation.”

    Dear Alexander, I think this is an important issue, and we discussed it in the past.

    One aspect of it is this: suppose you have a noisy state that is easy to prepare (so on its own it does not violate the ECT), but which is a mixture of pure states each of which demonstrates quantum supremacy (or violates the ECT). Should we regard it as a violation of the ECT?

    Another aspect of it, related to BosonSampling, is the following: suppose you can prove that BosonSampling with n bosons, with the property that every boson vanishes with probability 1/log n, say, violates the ECT. (And suppose that 1/log n is technologically achievable for large n.) Does this mean that noisy BosonSampling in such a regime would violate the ECT?

  58. Alexander Vlasov Says:

    Dear Gil #57, I am not sure what the model with vanishing bosons you mentioned is. Is it something about the more difficult case without conservation of photon number?

  59. Serge Says:

    Scott #52: my units aren’t right only because I was equating power and energy, due to the tiny amounts of data thus far processed. Indeed, who cares how much longer it takes to factor 21 than 15? I came up with a comparison of Feynman diagrams with flowcharts after reading John Baez’s intriguing paper “Physics, Topology, Logic and Computation: a Rosetta Stone”. I admit the absence of energy-to-mass conversion during quantum processes, but wouldn’t it be beautiful if a new information-energy equivalence were discovered? There isn’t such a thing as a prosaic source of energy. Refrigeration costs dominate in QCs based on superconductivity, but I suppose other types of energy costs dominate in silicon-based QCs, for example…

  60. Gil Kalai Says:

    Alexander, I was referring to a model that Scott often talks about (I hope I get it right): you start with an n-by-m matrix (representing n bosons), then you delete t rows (so the error rate is t/n), and you average the bosonic distributions over all possible ways to delete t rows of the matrix. Of course, for my question, you can instead take one of the noise models Guy and I were studying, where the number of bosons remains fixed.
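
    For concreteness, a minimal toy sketch of this row-deletion model (ignoring collision outcomes, which is a reasonable simplification when the number of modes is much larger than the number of photons, and using a toy Gaussian matrix in place of a real interferometer's transfer matrix):

```python
# Toy sketch of the row-deletion loss model described above: rows of the n x m
# matrix A index photons, columns index output modes, and a collision-free
# outcome S has probability proportional to |Per(A[rows, S])|^2.  Losing t
# photons = deleting t rows; we average the resulting (n-t)-photon distributions
# over all choices of which photons were lost.  Collision outcomes are ignored
# for simplicity, and A is a toy complex Gaussian stand-in for a real transfer matrix.
import itertools
import numpy as np

def permanent(a):
    k = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(k)])
               for p in itertools.permutations(range(k)))

def lossy_distribution(A, t):
    n, m = A.shape
    avg = {S: 0.0 for S in itertools.combinations(range(m), n - t)}
    survivor_sets = list(itertools.combinations(range(n), n - t))
    for rows in survivor_sets:                     # which photons survived
        probs = {S: abs(permanent(A[np.ix_(rows, S)])) ** 2 for S in avg}
        norm = sum(probs.values())
        for S in avg:
            avg[S] += probs[S] / (norm * len(survivor_sets))
    return avg

rng = np.random.default_rng(1)
A = (rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))) / np.sqrt(2)
dist = lossy_distribution(A, t=1)                  # 3 photons, 8 modes, 1 photon lost
print(sum(dist.values()))                          # ~1.0: an average of normalized distributions
```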

    Fred (#48): “I’m not clear why ppl who think that noise/error levels are gonna be the limiting factor can’t just write a simulation to demonstrate this (with various models of noise/error).”

    Fred, we do talk about simulation in the paper linked above, and after it was arXived we did some simulations in subsequent work with Maya Leshkowitz. There are intensive efforts by quite a few people on simulations of BosonSampling (and noise) at various levels. The most simplified models can be studied analytically, slightly more general models need simulations, and for simulating the realistic experimental process towards BosonSampling you need much harder and more detailed simulations. (Often FermionSampling behaves like BosonSampling, and you can push simulations further for FermionSampling.) Of course, there is no difference in this respect between people who think that noise/error levels are gonna be the limiting factor and those who don’t.

    Fred (#26) : The debater who has fully understood and integrated the other side’s arguments in its own reasoning is automatically more likely to be right?

    Not at all, of course. My disposition is to try to understand other people’s views as much as I can. (And then often discuss and argue with them, if they wish, or occasionally tease them a little.) It is not even something I recommend for others. Many times it is better to go on with your own program and ideas and not be overly distracted by others. But I really like to understand the lines of thought and reasoning of other people, and especially to identify the interesting ideas and stronger arguments.

  61. fred Says:

    Gil #60
    Thank you!

  62. John Sidles Says:

    Gil Kalai [in comment #60] includes “My disposition is to try to understand other people’s views as much as I can […]”

    Please let me say that Gil Kalai’s comment #60, in its entirety, is among the very best passages (as it seems to me) that has ever been posted here on Shtetl Optimized.

    Graceful consonance  Note, in particular, the graceful consonance between Gil’s remarks regarding the feasibility (or not) of large-n BosonSampling, and Carlton Caves’ remarks regarding the feasibility (or not) of astronomical gravity-wave detection, which appeared in Caves’ interview Realizing Squeezing (in LIGO Magazine, issue 3 (2013), as cited in comment #53):

    “The [squeezed-light interferometry formalism] is a good example of why physicists are really obliged to give every explanation they can think of. The particle perspective, though intuitive and right, is sterile. The less intuitive wave perspective is the royal road [to better designs]”

    For a synoptic view of technical aspects of gravity-wave detection, the recent status report “Advanced LIGO” (arXiv:1411.4547 [gr-qc], 2014) is commended (by me and many).

    For a synoptic view of sociological aspects of gravity-wave detection, Harry Collins’ book-length survey Gravity’s Shadow: the Search for Gravitational Waves is also commended (by me, and by pretty much everyone who was not offended by Collins’ sociological candor).

    Triumphs  The point is that gravity-wave detection, if and when it occurs — soon we hope per arXiv:1411.4547 — will be every bit as much a sociological triumph as a technical triumph; a triumph of Kalai-style/Caves-style collegial values, every bit as much as Kalai-style/Caves-style technical competence.

    Plausible eventualities  [merging Carlton Caves’ remarks with Gil Kalai’s remark #12]:

    “The [large-n BosonSampling formalism] is a good example of why physicists are really obliged to give every explanation they can think of. The Hilbert-space perspective, though intuitive and right, is sterile. The less intuitive low-dimension polynomial perspective has turned out to be the royal road [to viable BosonSampling designs]”

    Conclusion  Large-n BosonSampling demonstrations are proving to be every bit as fundamentally important, difficult, uncertain, contentious, and thrilling, as astronomical gravity-wave detection.

  63. Rahul Says:

    Scott #52:

    “I’m not going to change your mind, but maybe I should say for the benefit of others reading that the idea that QCs have a “power requirement of order the speed of light squared” is complete nonsense. “

    Isn’t that a bit like arguing about the daily dietary calorific requirement of a Unicorn? 🙂

    Jokes apart, I do agree about the speed-of-light-squared assertion being wacky.

  64. Rahul Says:

    Scott:

    Assuming we are going to see n=7 BosonSampling experiments in the near future, are there any conceivable features such experiments might observe that might make you less cheerful about the ability of BosonSampling to overthrow the ECT?

    i.e. If you had to play devil’s advocate, is there anything that, you think, might possibly derail the BosonSampling scaling effort?

  65. Scott Says:

    Rahul #64: Well, there are basically three such possibilities, mirroring the analogous possibilities for conventional quantum computing.

    It could just turn out to be too laborious to scale—maybe no new ideas will ever be discovered to improve the reliability of the sources (and hence the coincidence detection rates), so things will stall where they are. That would be disappointing.

    Or, there could be some dramatic new discovery that overthrows our current understanding of how identical bosons behave, or even of quantum mechanics itself. I regard that as fantastically unlikely—and if it happened, it would of course be a million times more interesting and worthwhile than a “mere success” at scaling BosonSampling.

    Or, it could turn out that the permanents of Gaussian matrices can be approximated in \(\mathsf{BPP}^{\mathsf{NP}}\), and that all the other hardness conjectures one might use here are also false. In that case, at least approximate BosonSampling would be easy classically. I, too, would find that a surprising and fascinating outcome, but admittedly it would probably be more interesting to complexity theorists than to the rest of the world.

  66. Serge Says:

    @Rahul & Scott,

    I feel I must reply, because my original comment was actually “the power complexity of the quantum model is perhaps linear, but with a proportionality constant of the order of the speed of light squared, due to the mass-energy equivalence”, which Scott shortened to “QCs have a power requirement of order the speed of light squared” before commenting that the units weren’t right! 🙂 That’s like Einstein mistakenly claiming E=c² because of a copy-pasting mistake…

    I had honestly come to believe there was some creation of output data during a quantum process. As tiny as this might have been, after being multiplied by c² it could have explained why it always takes more power to process data that’s only one bit longer. I still believe that whatever energy you spend to prevent the decoherence issue is directly related to the complexity of the computational problem being solved. I really hope the matter will be sorted out sooner or later.

  67. John Sidles Says:

    So far Scott has presented four BosonSampling outcomes in all, namely three negative outcomes (of comment #65) and one positive outcome: “getting Gil Kalai to admit he was wrong”. This calls to mind a gently humorous passage in Robert Heinlein’s young-adult science-fiction novel Tunnel in the Sky (1955):

    Psychologists once locked an ape in a room, for which they had arranged only four ways of escaping. Then they spied on him to see which of the four he would find. The ape escaped a fifth way.

    “Fifth ways” in BosonSampling  One plausible candidate for a “fifth way out” in regard to BosonSampling would be a PTIME algorithm for scattershot sampling from all but an exponentially small fraction of iid Gaussian matrices.

    Multiple precedents  There exist multiple precedents for such “algorithms that usually work”: Dantzig’s simplex algorithm, 3SAT solvers, and (arguably) drug-binding/protein-folding algorithms (for example). Spielman and Teng’s Gödel Prize article “Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time” is a canonical example of this literature.
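
    For a concrete (if toy) illustration of this “usually works” phenomenon, here is a minimal sketch, assuming nothing beyond the Python standard library: a naive DPLL solver that typically finishes quickly on random 3SAT instances at modest clause density, despite exponential worst-case behavior.

```python
import random

def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver; clauses are lists of nonzero ints, negative meaning negated."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                      # clause already satisfied
        reduced = [lit for lit in clause if abs(lit) not in assignment]
        if not reduced:
            return None                   # clause falsified: backtrack
        simplified.append(reduced)
    if not simplified:
        return assignment                 # every clause satisfied
    branch = simplified[0][0]             # branch on the first unassigned literal
    for value in (branch > 0, branch <= 0):
        result = dpll(simplified, {**assignment, abs(branch): value})
        if result is not None:
            return result
    return None

def random_3sat(n_vars, n_clauses, rng):
    """Random 3SAT: each clause picks 3 distinct variables and negates each at random."""
    return [[rng.choice((-1, 1)) * v for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

rng = random.Random(0)
instance = random_3sat(n_vars=50, n_clauses=150, rng=rng)   # clause density 3.0
print("satisfiable:", dpll(instance) is not None)
```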

    Algorithmic humor  In a more humorous vein, the cover of Forman S. Acton’s textbook Numerical Methods That Work (1970) hilariously embossed the ink-free word “usually” between “That” and “Work”.

    Conclusion  “Fifth way” algorithms that usually solve simplex, 3SAT, and drug-binding/protein-folding problems are exceedingly valuable, both fundamentally and practically … and complexity theory provides no very compelling reason (known to me, at least) to rule out similarly valuable “fifth way” algorithms for BosonSampling.

    Question  Would such “fifth way” algorithms for BosonSampling prove that “Gil Kalai’s postulates are right”? Even the logical possibility of such algorithms suggests that “Gil Kalai’s postulates are not obviously wrong”.

  68. Scott Says:

    John #67: Your “fifth possibility” that you introduce with such fanfare is essentially equivalent to my third one. The only reason to specify that a \(\mathsf{BPP}^{\mathsf{NP}}\) algorithm is for Gaussian matrices is that such an algorithm would only need to work for most matrices drawn from the iid Gaussian ensemble, not for all of them. (The latter is known to be #P-hard.)

  69. Jeff in Seattle Says:

    Jay #47: the initial x3600 test was a comparison run between two systems that were specified by a customer (Google) and run by an independent third party (Cathy McGeoch, who is a world-leading expert in this sort of test). It wasn’t D-Wave that either set the competition or ran the tests.

    The x600 number is between two completely different systems. Not only is the quantum computer totally different, the classical solvers used are completely different. As far as I can tell, these tests were done by D-Wave and aren’t connected to acceptance tests for customers.

  70. John Sidles Says:

    Scott, your third possibility (of #65) and my fifth possibility (of #67) differ by the deliberate (and crucial?) inclusion, in the latter, of the restrictive proviso “scattershot”.

    The motivation (possibly mistaken?) for this proviso is the proof in Aaronson/Arkhipov “The Computational Complexity of Linear Optics” (2011) of Theorem 4.3: Hardness of Approximating \(\mathsf{\text{Per}}\,(X)^2\).

    The proof stipulates “an input [permanental] matrix, which we assume for simplicity consists only of 0s and 1s.” These (crucially?) restricted 0-or-1 elements, even allowing finite imprecision, are of course only a tiny subset of iid Gaussian matrix elements, and moreover the “scattershot” proviso mixes these elements so thoroughly that (seemingly?) oracular assistance is required to sample a set of explicitly reducible 0-1 matrices.
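
    (As a reminder of why exact 0-or-1 permanents are the natural #P-hard target here: the permanent of a 0/1 biadjacency matrix counts the perfect matchings of the corresponding bipartite graph. A minimal brute-force check, purely for illustration:)

```python
import itertools
import numpy as np

def permanent_bruteforce(A):
    """Permanent as a sum over all permutations (O(n! * n) time, only sensible for tiny n)."""
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

# 0/1 biadjacency matrix of a small bipartite graph: its permanent counts perfect matchings
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(permanent_bruteforce(A))   # prints 2: this graph has exactly two perfect matchings
```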

    The ensuing web of oracle-dependent complexity-theory implications is plenty tricky, for me at least, and plausibly tricky for the great majority of Shtetl Optimized readers too … there is no shortage here of inferences in which loopholes can reasonably be sought.

    Closing confections  Here are four quotes reflecting the time-honored interest in these difficult matters:

    “A work should have either intelligibility or correctness; to combine the two is impossible, but to lack both is to be unworthy.”
       — Bertrand Russell (1901)

    “The virtue of a logical proof is not that it compels belief but that it suggests doubts.”
       — Henry George Forder (1927)

    “Beware of bugs in the above code; I have only proved it correct, not tried it.”
       — Donald Ervin Knuth (1977)

    “A proof tells us where to concentrate our doubts.”
       — Morris Kline (1988)

    Conclusion  A loophole-closing Shtetl Optimized column that clarified the question “What would it take to really collapse the polynomial hierarchy?” would be gratefully and enthusiastically read by many (most definitely including me).

    Best of all, and most importantly, there’s zero margin for doubt that Shtetl Optimized is an elite web-forum for ideas and information relating to quantum information and complexity theory.

  71. Scott Says:

    John #70: No, sorry, you’re confused. The thing with the 0/1 matrices was only in our hardness result for exact BosonSampling, not in the result for approximate sampling (where we use iid Gaussian matrices throughout, embedding them as submatrices of Haar unitaries). And the complexity situation for scattershot BosonSampling is provably the same as that for ordinary BosonSampling—in the sense that both of them are hard to do, even approximately, assuming there’s no way to approximate the permanents of most matrices drawn from the iid Gaussian ensemble in \(\mathsf{BPP}^{\mathsf{NP}}\).
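
    For anyone who wants to see the “Gaussians hiding inside Haar unitaries” point numerically, here is a minimal sketch (illustrative only, not code from the paper): for m much larger than n², the rescaled n×n corner of an m×m Haar-random unitary has entries statistically close to iid standard complex Gaussians.

```python
import numpy as np

def haar_unitary(m, rng):
    """Sample an m x m Haar-random unitary via QR of a complex Ginibre matrix, with phase fix."""
    Z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))            # multiply column j by the phase of R[j, j]

rng = np.random.default_rng(1)
n, m = 10, 1000                           # m >> n^2, so the corner should look roughly iid Gaussian
U = haar_unitary(m, rng)
X = np.sqrt(m) * U[:n, :n]                # rescaled top-left n x n submatrix
print(np.mean(np.abs(X) ** 2))            # roughly 1, the mean square of a standard complex Gaussian
```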

  72. John Sidles Says:

    Scott, thank you for your gracious reply (#71), which is entirely correct (as it seems to me), yet surely cannot be the last word.

    Perhaps the nearest thing we have to a last word is this passage from your own Quantum Computing Since Democritus:

    Something inside me [and, I suspect, inside many other computer scientists!] is suspicious of those parts of mathematics that bear the obvious imprint of physics, such as partial differential equations, differential geometry, Lie groups, or anything else that’s ‘too continuous.’

    So instead, I start with some of the most ‘physics-free’ parts of math yet discovered: set theory, logic, and computability.

    This passage presupposes, implicitly at least, a converse community of researchers that embraces the implications of partial differential equations, differential geometry, and Lie groups, and is suspicious of (or at least struggles to grasp) the implications of set theory, logic, and computability. That converse community includes me, needless to say!

    This division is at least as old as the atomic (hence discrete) worldview of Democritus, versus the geometric (hence continuous) worldview of Pythagoras. Bridging this division isn’t easy, even with modern mathematical tools that marvelously unite the discrete with the continuous. As Joseph Landsberg remarks in the introduction to his Tensors: Geometry and Applications (2012):

    In the course of preparing this book I have been fortunate to have had many discussions with computer scientists, applied mathematicians, engineers, physicists, and chemists. Often the beginnings of these conversations were very stressful to all involved. […]

    This difference of cultures is particularly pronounced when discussing tensors: for some practitioners these are just multi-way arrays that one is allowed to perform certain manipulations on [while] for geometers these are spaces equipped with certain group actions.

    In view of these divisions, the single most enjoyable passage in Quantum Computing Since Democritus (for me) was its zeroth passage, namely its cover image, Rembrandt’s portrait Democritus the Laughing Philosopher.

    In this wonderful painting, Rembrandt portrays laughter in its most inspirational and philosophically sophisticated form, which is (as it seems to me) Spinoza’s hilaritas. And in turn, the philosophical notion of hilaritas beckons us to consider all of the mathematical disciplines that bear upon quantum phenomena — both geometric and atomic, both continuous and discrete — as best appreciated hilariously (sensu stricto) to be universal, natural, and performative (also sensu stricto).

    Conclusion  The challenges of 21st century quantum research require all the hilarity that we can summon.

  73. John Sidles Says:

    Erratum  There are dozens of marvelous “laughing philosopher” portraits; it turns out that the cover of Quantum Computing Since Democritus is the portrait by Hendrick ter Brugghen, not the one by Rembrandt van Rijn. Needless to say, it’s still a terrific cover choice for an outstanding book.

  74. Alexander Vlasov Says:

    Gil #60, I am not sure what you are talking about. It resembles something like, e.g., Eq. (*) in Sec. 2.2 of arXiv:1109.1674, but in that case I would not associate it with vanishing bosons; it is rather the usual representation theory, i.e., the irreducible (?) representation of the unitary group on the space of symmetric tensors/polynomials (e.g., Weyl, The Theory of Groups and Quantum Mechanics?).

  75. Serge Says:

    John #72 says:

    This passage presupposes, implicitly at least, a converse community of researchers that embraces the implications of partial differential equations, differential geometry, and Lie groups, and is suspicious of (or at least struggles to grasp) the implications of set theory, logic, and computability. That converse community includes me, needless to say!

    As far as I’m concerned, I’m not suspicious of any parts of mathematics. I’m just convinced that the counter-natural efforts that have systematically been trying to keep logic, computability and complexity away from physics are now reaching a dead end. I foresee that a deep connection between relativity, quantum mechanics and complexity theory is about to be uncovered as an unwanted side effect of the quantum computing adventure.

  76. John Sidles Says:

    Serge presupposes  “counter-natural efforts that have systematically been trying to keep logic, computability and complexity away from physics.”

    Serge, it is entirely feasible to put a name to these efforts: ABET Accreditation Commissions

    [ABET-certified Computer Science (CS)] students must have one year of science and mathematics:

    1. Mathematics:  At least one half year that must include discrete mathematics. The additional mathematics might consist of courses in areas such as calculus, linear algebra, numerical methods, probability, statistics, number theory, geometry, or symbolic logic. [CS]

    2. Science:  A science component that develops an understanding of the scientific method and provides students with an opportunity to experience this mode of inquiry in courses for science or engineering majors that provide some exposure to laboratory work. [CS]

    One year of math, for any STEM degree, isn’t very much. And yet it is fundamentally infeasible to strengthen undergraduate mathematics curricula and still graduate students within four years.

    As Scott’s senior MIT colleagues Gerald Sussman, Jack Wisdom, and Meinhard Mayer say in the introduction to their (excellent but immensely long) textbook Structure and Interpretation of Classical Mechanics:

    Classical mechanics is deceptively simple. It is surprisingly easy to get the right answer with fallacious reasoning or without real understanding.

    If the Sussman/Wisdom/Mayer lament is true for classical mechanics, how much more true is it for quantum mechanics, as learnt by students pushed to study subjects too soon and too fast, by reason of a relentlessly fast-paced four-year curriculum?

    Conclusion  At the undergraduate level, the leisurely maturing of undergraduate mathematical capabilities, to foster a more universal, natural, and performative appreciation of STEM enterprises, is infeasible not by reason of obtuseness or conspiracy, but simply by reason of time.

  77. fred Says:

    I just saw this

    http://spectrum.ieee.org/nanoclast/computing/hardware/researchers-create-quantum-dots-that-transmit-identical-single-photons?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+IeeeSpectrum+%28IEEE+Spectrum%29

    Would this help improve BosonSampling?

  78. Shtetl-Optimized » Blog Archive » Avi Wigderson’s “Permanent” Impact on Me Says:

    […] at least a half-dozen groups around the world have reported small-scale implementations—the record, so far, being an experiment at Bristol that used 6 photons, and experimentally confirmed that, […]

  79. Shtetl-Optimized » Blog Archive » My 5-minute quantum computing talk at the White House Says:

    […] demonstrated by quantum optics groups around the world, with the current record being a 6-photon demonstration by the O’Brien group in Bristol, UK.  A second example is the IQP (“Instantaneous […]