The Boson Apocalypse

If the world ends today, at least it won’t do so without three identical photons having been used to sample from a probability distribution defined in terms of the permanents of 3×3 matrices, thereby demonstrating the Aaronson-Arkhipov BosonSampling protocol.  And the results were obtained by no fewer than four independent experimental groups, some of whom have now published in Science.  One of the groups is based in Brisbane, Australia, one in Oxford, one in Vienna, and one in Rome; they coordinated to release their results the same day.  That’s right, the number of papers (4) that these groups managed to choreograph to appear simultaneously actually exceeds the number of photons that they so managed (3).  The Brisbane group was even generous enough to ask me to coauthor: I haven’t been within 10,000 miles of their lab, but I did try to make myself useful to them as a “complexity theory consultant.”

Here are links to the four experimental BosonSampling papers released in the past week:

For those who want to know the theoretical background to this work:

For those just tuning in, here are some popular-level articles about BosonSampling:

I’ll be happy to answer further questions in the comments; for now, here’s a brief FAQ:

Q: Why do you need photons in particular for these experiments?

A: What we need is identical bosons, whose transition amplitudes are given by the permanents of matrices.  If it were practical to do this experiment with Higgs bosons, they would work too!  But photons are more readily available.

Q: But a BosonSampling device isn’t really a “computer,” is it?

A: It depends what you mean by “computer”!  If you mean a physical system that you load input into, let evolve according to the laws of physics, then measure to get an answer to a well-defined mathematical problem, then sure, it’s a computer!   The only question is whether it’s a useful computer.  We don’t believe it can be used as a universal quantum computer—or even, for that matter, as a universal classical computer.  More than that, Alex and I weren’t able to show that solving the BosonSampling problem has any practical use for anything.  However, we did manage to amass evidence that, despite being useless, the BosonSampling problem is also hard (at least for a classical computer).  And for us, the hardness of classical simulation was the entire point.

Q: So, these experiments reported in Science this week have done something that no classical computer could feasibly simulate?

A: No, a classical computer can handle the simulation of 3 photons without much—or really, any—difficulty.  This is only a first step: before this, the analogous experiment (called the Hong-Ou-Mandel dip) had only ever been performed with 2 photons, for which there’s not even any difference in complexity between the permanent and the determinant (i.e., between bosons and fermions).  However, if you could scale this experiment up to about 30 photons, then it’s likely that the experiment would be solving the BosonSampling problem faster than any existing classical computer (though the latter could eventually solve the problem as well).  And if you could scale it up to 100 photons, then you might never even know if your experiment was working correctly, because a classical computer would need such an astronomical amount of time to check the results.
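
To make the permanent-versus-determinant point concrete, here’s a minimal numerical sketch (my own illustration, using one standard convention for a 50/50 beamsplitter, and not code from any of the experiments): with one photon in each input port, the amplitude for one photon to exit each output port is the permanent of the 2×2 unitary for bosons and the determinant for fermions, which is exactly the Hong-Ou-Mandel dip.

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Brute-force permanent; fine for the tiny matrices discussed here."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# A 50/50 beamsplitter, in one common convention
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# One photon enters each input port; "coincidence" = one photon in each output port.
print(abs(permanent(U)) ** 2)      # 0.0 -- bosons never exit separately (the HOM dip)
print(abs(np.linalg.det(U)) ** 2)  # 1.0 -- fermions always would
```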

34 Responses to “The Boson Apocalypse”

  1. What Would Be Left, If…? « Gödel’s Lost Letter and P=NP Says:

    […] issues with the Higgs are still causing apocalyptic reactions from some physicists. At least news today is that other bosons are behaving well according to Scott Aaronson and Alexander Arkhipov’s […]

  2. Kenneth W. Regan Says:

    Our congratulations! (if not premature…)

  3. Attila Szasz Says:

    The Mayans probably foresaw the experiments, but thought that putting the proposition

    There exists a world supporting QM-proper photons -> the ECT fails

    to the test would come out in favor of the thesis :)

  4. Scott Says:

    Attila #3: Well, the fact that there are people who think as you do just underscores why it’s important to do the experiments!

  5. John Sidles Says:

    These are wonderful experiments … they testify to the seductive beauty and fascinating scientific challenges of Aaronson-Arkhipov BosonSampling. Congratulations to everyone!

    For many folks (well me anyway) among the *best* consequences will be a flood of photon-source designs that seek to combine high-efficiency detection of output states with large-n generation of input states.

    If in a decade-or-two, such a scalable photon-source technology exists, then our confidence in the fundamental correctness of QM will be greatly increased, and moreover these techniques will no doubt be of immense value in quantum computing.

    Conversely, if in a decade-or-two progress is slow, then we will have acquired (painfully, but so what?) a deeper understanding of why quantum computing is hard, and (perhaps) why QM is *not* the underlying dynamical theory of Nature.

    Since *both* of these outcomes would represent terrific scientific advances, it’s plainly evident (or so it seems to me) that BosonSampling *already* has proven itself as a genuinely seminal scientific idea. For which valuable service (again) this heartfelt appreciation is extended to everyone!

  6. Bram Cohen Says:

    This may be a vague or poorly-founded question, but how many photons would have to be involved before the amount of precision involved overrode the sorts of tricks which analog classical processes can pull off, assuming that the precision of slide-rule-like things caps out at around a micrometer or nanometer?

  7. Henning Dekant Says:

    Congrats! An exceptionally well choreographed release of papers. Wonderful to have such a clear-cut, overwhelming demonstration of the protocol.

    Now, if you could just somehow convince everybody that it’ll bring about the apocalypse, i.e., just like Shor’s algorithm is destined to do in the public’s eye when it ends all encryption as we know it … your name will live in infamy 🙂

  8. Scott Says:

      Now, if you could just somehow convince everybody that it’ll bring about the apocalypse

    Henning #7: Some students and I were joking yesterday that the BosonSampling experiments might indeed bring about the apocalypse, because the huge computational effort needed to calculate the permanents of 3×3 matrices would crash the universe.

  9. Scott Says:

    Bram #6: As a first remark, the reason why (we think) BosonSampling eventually gets hard to simulate on a classical computer really has nothing to do with precision. Indeed, even if you could add and multiply numbers to infinite precision (but had no access to the numbers’ binary representations), it’s still conjectured that calculating the permanent of an n×n matrix would require exp(n) arithmetic operations.

    Having said that, yes, there are tricks by which one could imagine using an analog computer to solve NP-complete or even #P-complete problems in polynomial time, assuming the analog computer worked to infinite precision (which, of course, it doesn’t) and that you could manipulate individual bits in the binary representations of its parameters.

    So, the only way I know how to address your question is to tell you what’s the fastest known classical algorithm for BosonSampling. That algorithm, I think, would be based on Ryser’s algorithm for the permanent, which computes the permanent of an n×n matrix using O(n·2^n) arithmetic operations. Based on that, I estimate that when n (i.e., the number of photons) is about 30, you start to see a crossover where the experiment is doing BosonSampling “faster” than the fastest known classical algorithms running on existing computers could simulate it. It’s hard to be more precise, since “faster” is a vague term when you’re comparing a computer simulation to a physics experiment: for example, do you count the effort involved in setting up the experiment in the first place? (As I’ve sometimes put it, how does the number of grad students needed scale with n? 🙂 )
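
    For concreteness, here’s a bare-bones sketch of Ryser’s inclusion-exclusion formula (my own illustration; this naive subset loop pays an extra factor of n compared to the Gray-code variant that achieves the O(n·2^n) count, and it’s certainly not what anyone would run for a serious n≈30 simulation):

```python
from itertools import combinations

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):                # sizes of column subsets S
        for S in combinations(range(n), r):
            prod = 1
            for row in A:                    # product over rows of the S-column sums
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent_ryser([[1, 2], [3, 4]]))     # 1*4 + 2*3 = 10
```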

    Having said that, if you were instead comparing BosonSampling to an analog computer, then the crossover point would likely happen much earlier than 30 photons—probably (I suppose, depending on the analog computer) at something like 5-10 photons. But that has much more to do with the crappiness of analog computers than it does with BosonSampling!

  10. Nex Says:

    BosonSampling doesn’t even have a Wikipedia article.
    Your work is invalid.

  11. Attila Szasz Says:

    Taking it a bit further ~170 photons would have the same power as
    overclocking Cray Titan to Planck time, right? 😀

    OK, I’m getting bad stats with useless comments, so here’s something related to the above: I might be wrong, but I find it remarkable, and sort of identify the beauty of quantum computers versus classical digital or analog ones with the fact that, in the former case, the physically realized machine doesn’t diverge much, or at all, from the theoretical description, whereas serious compromises have to be made with the classical ones. Objects get very mathematical when you get down to small scales dominated by QM/QFT; linear optics, despite being rudimentary, seems to exploit and demonstrate this very thing, with the statistics of noninteracting bosons already sniffing at things like #P computation* both on paper and in practice.

    It’s not hard to see that you run into forbidding thermodynamic (and relativistic, etc.) considerations with machines built on classical circuitry with currents, noise, and dissipative elements; there is no digital Intel Zeno series, nor are there devices exactly carrying out, for instance, the analog brute-force k-SAT dynamical system of Toroczkai et al. (the latter is the most interesting one I know of: it avoids getting trapped in n-butterflies and provides a hardness metric on problem instances using transient-chaos considerations).

    My question then naturally follows from this: is there (or why isn’t there) work on carrying out classical computation practically and substantially more powerfully on systems closer to this maths limit, with things like particle ensembles?

    (D-Wave is a good candidate for the first part, I guess, but their marketing probably wouldn’t like my description.)

    Maybe there are aspects of analog computers that make more sense and have more use in some quantum-chaos settings? The variety of nonlinear phenomena seems to account for their expressive power (at least as I see it); what could make someone think that, even though linear magic like the quantum Fourier transform already seems to account for crazily useful things, something just goes wrong with nonlinearity?

    *(i.e., making hardness arguments involving it)

  12. Alexander Vlasov Says:

    John Sidles #5 – There may also be less optimistic outcomes, e.g., the time necessary for counting statistics grows in such a way that a classical computer may still do the calculations faster.

  13. John Sidles Says:

    Alexander Vlasov comments  “There may also be less optimistic outcomes [to the Aaronson-Arkhipov experiment], e.g., the time necessary for counting statistics grows in such a way that a classical computer may still do the calculations faster.”

    Alexander, it seems to me that the outcome you describe would be wonderfully interesting and important (and thus nicely illuminates how the Aaronson-Arkhipov experiments already influence our thinking).

    That’s because such a result would catalyze a healthy debate in regard to the (presently) open question of whether the various practical difficulties of photon production in Aaronson-Arkhipov experiments illuminate fundamental structural obstructions to Nature’s quantum dynamics, versus merely reflecting quantum-field-theoretic Dreckeffekte (“dirt effects”) that can be surmounted via experimental ingenuity and scrupulosity.

    Somewhere Tolkien writes “Even the very wise cannot see all ends”, and there is a (historically true) quantum-optics story about Hanbury Brown and Feynman that illustrates Tolkien’s point. We can all be hopeful (and even confident) that continuing Aaronson-Arkhipov experiments will catalyze debates that are comparably vigorous to the debates catalyzed by the Hanbury Brown and Twiss experiments! 🙂

  14. blk Says:

    Shouldn’t these quantum optical experiments just run into the same difficulty (i.e., decoherence) when scaling up as they do for implementing quantum circuits? That is, is keeping 30 bosons coherent considered easier than, e.g., 30 qubits (+ unitaries acting on them)? If not, what’s the point of using them if the goal is to reject the Extended Church-Turing hypothesis? You will need some form of fault-tolerant encoding in both cases to scale up, so why bother with the non-universal bosons?

  15. Scott Says:

    blk #14: It’s an excellent question, and obviously something we thought about a lot. And yes, the “pessimistic” scenario here is precisely that building a scalable BosonSampling machine would be no easier than building a scalable universal QC.

    However, there are three factors that might make BosonSampling easier to implement than full universal QC, and one factor that makes it interesting for complexity theorists even if it turns out not to be any easier to implement.

    (1) Non-interacting photons are actually incredibly good in terms of decoherence. (Indeed, the earth is getting bombarded with cosmic microwave photons that have probably maintained their quantum states since 380,000 years after the Big Bang!) The main difficulties you face with photons are synchronization (different photons might arrive at the detectors at slightly different times), photon losses, and single-photon generation (a standard laser sometimes outputs 0 photons, sometimes 1, etc., in a superposition of different photon numbers called a coherent state). However, the optics folks tell me that there are technologies currently under development, especially “optical multiplexing,” that if they worked, would address these problems and allow scaling to at least 10-20 photons without the need for explicit error-correction. (As they say, time will tell.)

    (2) With most quantum algorithms like Shor’s, there’s a substantial overhead in mapping the problem you want to solve onto the QC architecture, which means that you need many more than N qubits to do what would classically take 2^N steps. By contrast, because BosonSampling is so close to the “native physics” of linear-optical networks, N identical photons really do just correspond directly to N×N matrix permanents (see the toy sketch after point (3) below). For doing little proof-of-principle demonstrations, this is a huge advantage.

    (3) Most interestingly, the game we’re now playing (just sample some distribution that’s hard to sample with a classical computer) has many, many possible ways to win! Even if photon losses, “dark counts” and various other errors turn out to be unavoidable when scaling up to 20-30 photons, it remains plausible that you’d still be sampling from a classically hard distribution. Of course, that will require a new argument analogous to the one we gave, and is an obvious topic for further research.
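
    Regarding point (2), here’s a toy numerical sketch of that direct correspondence (my own illustration with arbitrary small sizes, not tied to any of the experiments): for n photons entering the first n modes of an m-mode interferometer described by an m×m unitary U, the probability of detecting one photon in each mode of an output set S is the squared absolute value of the permanent of the corresponding n×n submatrix of U.

```python
import numpy as np
from itertools import combinations, permutations

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

n, m = 3, 9                                    # 3 photons in 9 modes (sizes are arbitrary)
rng = np.random.default_rng(0)
A = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(A)                         # an (approximately Haar-random) m x m unitary

inputs = (0, 1, 2)                             # photons enter the first three modes
# P(one photon in each output mode of S) = |permanent of the 3x3 submatrix U[S, inputs]|^2
probs = {S: abs(permanent(U[np.ix_(S, inputs)])) ** 2
         for S in combinations(range(m), n)}
print(sum(probs.values()))                     # < 1; the remainder is "bunched" outcomes
```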

    OK, now suppose that none of our hopes come to pass, and building an interestingly-large BosonSampling machine turns out to be not easier at all than building a scalable universal QC. Even then, BosonSampling would still be of complexity-theoretic interest, for the simple reason that it’s the only natural way known to make permanents show up as a quantum computation’s amplitudes, and the permanent is an extremely interesting and special function in theoretical computer science. Indeed, it’s because of special properties of the permanent (like its random self-reducibility) that we conjecture that even the approximate version of BosonSampling is classically hard. In other words: even if you had a full universal QC, if you wanted to sample a distribution that couldn’t be even approximately sampled efficiently classically without surprising complexity consequences, I wouldn’t be able to tell you anything to do except to use your universal QC to simulate BosonSampling.

    As a matter of fact, this—and not any of the experimental physics considerations—was my and Alex’s motivation for thinking about BosonSampling in the first place. We simply wanted to construct a quantum algorithm that samples from a probability distribution whose amplitudes are permanents, and then use the known properties of the permanent to argue about the classical hardness of doing the same thing. And that desire led inevitably to BosonSampling. The fact that Nature provides a physical system—namely, identical photons—that “implement BosonSampling for free,” really just came as a happy coincidence for us!

  16. John Sidles Says:

    Scott remarks  “The optics folks tell me that there are technologies currently under development, especially ‘optical multiplexing,’ that if they worked, would address these problems and allow scaling to at least 10-20 photons without the need for explicit error-correction.”

    Scott, an article/preprint/talk/rumor/speculation in this regard would be very welcome to me and to many.

    To the best of my (sparse) knowledge and (limited) understanding, there exist plausible designs for scalable n-photon sources, and there exist plausible designs for scalable high-efficiency n-photon collection-and-detection, but (even conceptually) no known schemes scalably combine both attributes.

    Broadly speaking, by what physical mechanism(s) do these new schemes scale? If anyone knows, please post! 🙂

  17. Scott Says:

    John Sidles #16: Jeremy O’Brien at Bristol told me about the optical multiplexing idea. As I understand it, the idea is to generate a large number of photon states using imperfect sources (which sometimes fail to generate a photon, sometimes generate 2, etc.), then perform non-demolition measurements on each mode to find out where single-photon Fock states were successfully created, and lastly, use optical logic (that’s the “multiplexing” part) to route the “good” single-photon Fock states into the computation. I don’t know a good reference for this; if anyone else has one, that would be great!

  18. Greg Kuperberg Says:

    Congratulations, Scott. I’m impressed with the intellectual success of your paper. To be honest, impressed enough to be envious.

    Although the sober thought is that it’s hardly realistic envy, in the sense that there are plenty of reasons that I couldn’t have gotten the results in this paper. On the positive side, there is a lot here for me to learn about hardness results. For one thing, I had no idea that every case of the permanent over a large finite field is reducible to a small fraction of average cases — that’s amazing.

  19. Aaronson and Arkhipov’s Result on Hierarchy Collapse | Combinatorics and more Says:

    […] (Dec2012): The Aaronson-Arkhipov paper have led to several exciting experimental works. See this post over the […]

  20. Alexander Vlasov Says:

    BTW, John #13, your story may sound as if the HBT experiment was initially performed due to a wrong guess, but nevertheless produced the correct outcome for other reasons. The same goes for my own remark #12. The guess may be true due to the [L. Troyansky and N. Tishby 1996] work about the exponential variation of the permanent, but not due to the postselection-based reason (relevant to the universality of QC with linear-optical gates) I initially supposed.

  21. Google Scholar Manipulation and Boson Sampler « kryptomusing Says:

    […] the same Scott Aaronson has posted excellent blog post titled ‘The Boson Apocalypse‘, in which he has a collection of all articles along with a small FAQ regarding the boson […]

  22. Joe Fitzsimons Says:

    Congratulations on all the experimental implementations! As an aside, has there been any progress on algorithms for other problems within that model? I imagine it is hard to find decent problems for which it might work, since it seems that linear optics without feed forward may not contain P, but I would be very interested to hear if there has been any progress on that front.

  23. Scott Says:

    Joe #22: Alas, I know of almost no progress on that front — unless you count this paper by me and Travis Hance, which rules out a class of permanent estimation problems for which the BosonSampling model could have provided a speedup, if the classical algorithm we were discussing didn’t exist.

    I still think it’s an excellent research topic — and were I forced to guess, I’d even conjecture that there should be a decision/estimation problem (possibly an artificial one) for which the BosonSampling model gives a speedup.

  24. Raoul Ohio Says:

    Nex:

    No longer invalid, SciAm trumps Wikipedia:

    http://www.scientificamerican.com/article.cfm?id=new-computer-bridges-classical-and-quantum&WT.mc_id=SA_CAT_physics_20121228

    or, …, does it?

  25. Joe Fitzsimons Says:

    Ah, I see. That looks to be a very interesting paper. From skimming, it looks like to get an advantage you need to go the route of the Zeno effect, and have every outcome occur with extremely low but non-zero probability, and then look at which of two subsets the actual outcome falls into, but perhaps this fails too.

    However, it seems like you are considering only a restricted set of linear-optics operations in both papers: set-ups where every possible optical path is loop-free. But linear optics allows for such loops. For example, take a beam splitter (A) and 3 mirrors (B, C, D) and arrange them so that you form a box with all four optical elements facing the center point. Then an input comes in at A, where it is either reflected into the output mode or enters the box (A->B->C->D->A). Once in the box, it exits with some amplitude fixed by the weight of the beam splitter, or else it cycles again. Such a setup increases the total number of output modes by introducing new temporal modes. Certainly this example is probably useless, but you can build more complicated structures including counters etc.

    This generalisation would seem to remove the barrier presented by Gurvits’s algorithm and your refinements of it, but again, I might be missing something.

  26. Vadim Says:

    I’m curious what Matthew Broome means in the paper Raoul linked to by: Previously, there was nothing to say “that anything you can do on a quantum computer you can’t do on a normal computer, which leaves in question the necessity for quantum computers,” Broome said. “Now, with boson sampling, we’re coming up with machines based on quantum physics that can attack problems strongly believed to be intractable for classical computers.”

    Weren’t there quite a few things that classical computers weren’t known to do efficiently that could be done by quantum computers (factor, etc)? My very poor understanding of BosonSampling was that it’s a more restrictive but easier to realize model that still seems to give quantum speedup in some cases, is my poor understanding even poorer than I thought?

  27. Scott Says:

    Vadim #26: No, you’re not confused at all. And while I worked hard to tone down what was claimed about BosonSampling in the popular articles, I didn’t succeed in every case. 🙂

    Yes, of course, we’ve known since 1994 that if quantum computers can be efficiently simulated classically, then you get a fast classical factoring algorithm. Compared to Shor’s algorithm, BosonSampling has two main potential advantages. The first is the one you already mentioned: that it could be much easier to realize physically.

    But the other advantage is that factoring is “just one specific problem,” and as far as we know, a fast classical factoring algorithm wouldn’t cause a collapse of complexity classes or anything as dramatic as that (it would “merely” break RSA!). By contrast, Arkhipov and I showed that, if exact BosonSampling can be done efficiently with a classical computer, that would collapse the polynomial hierarchy to the third level, which seems much more implausible even than a fast factoring algorithm. Even if BosonSampling can only be done approximately by a classical computer, we conjecture that would also collapse the polynomial hierarchy, but to show that, one would need to prove a certain problem #P-complete that’s not yet known to be (see our paper for more details).

    So, those are the two potential advantages of BosonSampling compared to factoring — and yes, I’d say that they change the situation in detail but not in essence. The disadvantage of BosonSampling compared to factoring, of course, is that as far as we know, BosonSampling has zero applications (besides challenging the Extended Church-Turing Thesis)—neither to cryptography nor to anything else. 🙂

  28. Vadim Says:

    Thanks, Scott! The second advantage (collapsing the polynomial hierarchy if an efficient classical algorithm is found) is fascinating and previously went clear over my head.

  29. Chris Says:

    How wonderful! Another beautiful example of how science (I’m using the term broadly to include maths, philosophy, comp. sci., etc.) can bring disparate peoples, groups, nationalities, etc. together in the pursuit of understanding. The LHC is another recent and well-publicized example. Well done to all involved! Small steps are just as important as big ones. We’re still taking our baby steps, but maybe one day we’ll be running : )

  30. Nathan Langford Says:

    John Sidles #16 & Scott #17: That’s basically the right idea about the optical multiplexing. In fact, there are many possible ways to implement it in practice, depending on what degree of freedom you wish to use to accommodate your redundancy (space, time, frequency, etc). The spatial multiplexing that you mention (i.e. many sources – pick and choose your photons and re-route them into the target spatial modes) is one idea. Another way is to use temporal multiplexing with quantum memories to synchronise the sources. To my knowledge, no one has actually implemented any version of this yet.

    Interestingly, you don’t actually need quantum non-demolition detection to do it. You can instead generate a bunch of photon pairs using down-conversion and detect one of them to “herald” the presence of the other one. Although this in some sense doubles the number of pairs you need to create, given the difficulty of implementing QND measurements, it may well still be easier to do it this way. (Note that one popular way to do QND detection, using a CNOT gate, in any case requires an extra ancilla photon!)

    The key practical issue has to do with error rates. Nonlinear pair sources such as parametric down-conversion produce higher-order terms which limit the allowable single-source production rate. For example, to keep the error rate low, you may wish to limit the single-source probability to, say, 0.01. Calculating crudely, you’ll need around 100 such sources to produce one photon with high probability. That’s quite a nasty overhead in terms of physical infrastructure if you want to use the “spatial multiplexing” version. This is one argument why a quantum-memory-based solution might be preferable. However, in this case, the memory needs to have a (decoherence) lifetime which is much longer than the inverse of its bandwidth (the inverse bandwidth defines the length of the shortest photon it can store). Ultimately, this may turn out to be more of a religious debate that gets trumped by engineering demands.
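
    To put rough numbers on that crude calculation (my own illustration, assuming independent sources each heralding with probability p): the chance that at least one of N multiplexed sources delivers a heralded photon in a given clock cycle is 1 − (1 − p)^N, so p = 0.01 gives about 63% at N = 100 and takes a few hundred sources to get comfortably above 95%.

```python
# Probability that at least one of N heralded sources (each firing with
# probability p per clock cycle) delivers a photon: 1 - (1 - p)^N.
p = 0.01
for N in (100, 300, 500):
    print(N, round(1 - (1 - p) ** N, 3))   # ~0.634, ~0.951, ~0.993
```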

    We recently put on the arXiv an article modelling the memory-based version (Nunn et al., http://arxiv.org/abs/1208.1534), as yet unpublished, where we calculate the expected multiphoton-production speed-up for memories with currently feasible experimental parameters. All the basic ideas are discussed in there, although we now have a slightly more elegant version which will go up once it is accepted for publication (cross fingers). It also has references to other papers which discuss the spatial multiplexing version.

    Hope this helps.

  31. Predictions and Principles « Gödel’s Lost Letter and P=NP Says:

    […] scaling up boson sampling from to permanents will be published by year’s […]

  32. Earl Campbell Says:

    Congratulations. Although, since the earth is only about 8000 miles in diameter, I am curious as to how you managed to stay so far away from the lab!

  33. Scott Says:

    Earl #32: Thanks!

    I was thinking about the surface distance, which is ~9766 miles according to Google. People normally don’t quote travel distances assuming the ability to go through the core… 🙂

  34. BosonSampling and (BKS) Noise Sensitivity | Combinatorics and more Says:

    […] interesting BosonSampling experiments with 3 bosons (thus for permanents of 3 by 3 matrices.) (See this post on Scott’s […]