A small post

  1. I really liked this article by Chris Monroe, of the University of Maryland and IonQ, entitled “Quantum computing is a marathon not a sprint.” The crazier expectations get in this field—and right now they’re really crazy, believe me—the more it needs to be said.
  2. In a piece for Communications of the ACM, Moshe Vardi came out as a “quantum computing skeptic.” But it turns out what he means by that is not that he knows a reason why QC is impossible in principle, but simply that it’s often overhyped and that it will be hard to establish a viable quantum computing industry. By that standard, I’m a “QC skeptic” as well! But then what does that make Gil Kalai or Michel Dyakonov?
  3. Friend-of-the-blog Bram Cohen asked me to link to this second-round competition for Verifiable Delay Functions, sponsored by his company Chia. Apparently the first link I provided actually mattered in sending serious entrants their way.
  4. Blogging, it turns out, is really, really hard when (a) your life has become a pile of real-world obligations stretching out to infinity, and also (b) the Internet has become a war zone, with anything you say quote-mined by people looking to embarrass you. But don’t worry, I’ll have more to say soon. In the meantime, doesn’t anyone have more questions about the research papers discussed in the previous post? Y’know, NEEXP in MIP*? SBP versus QMA? Gentle measurement of quantum states and differential privacy turning out to be almost the same subject?

24 Responses to “A small post”

  1. Aula Says:

    doesn’t anyone have more questions about the research papers discussed in the previous post? Y’know, NEEXP in MIP*?

    Well, since you ask… It’s known that MIP = NEXP, which is contained in EEXP; in other words, every protocol in MIP can be simulated by a deterministic classical computer in doubly exponential time. Giving the provers an entangled quantum state should slow down the simulation by just one exponential (analogous to simulating a quantum computer on a classical computer), so MIP* should be contained in EEEXP. Does this make any sense?
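
    To spell out the containments I have in mind (the last one is only my guess):

      MIP = NEXP ⊆ EEXP = DTIME(2^{2^{poly(n)}}),   MIP* ⊆? EEEXP = DTIME(2^{2^{2^{poly(n)}}})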

  2. gentzen Says:

    Michel Dyakonov links to the preprint “What determines the ultimate precision of a quantum computer?” (arXiv:1702.07688) by Xavier Waintal with the words: “The author makes a professional analysis of errors and error correction (of which I am not capable) and his conclusions are worth considering.” (This “preprint” is actually a peer-reviewed paper: Phys. Rev. A 99, 042318, published 10 April 2019.)

    Xavier Waintal seems to be a special type of QC skeptic: he believes that for current quantum computing architectures, there can be a limit on the precision of the logical qubits (irrespective of the number N_c of physical qubits used to construct them): “Using the example of the topological surface code, we show that a single local event can provoke the failure of the logical qubit, irrespective of N_c.”

  3. Geoffrey Irving Says:

    My understanding is that quantum computation *doesn’t* give any advantage over classical in the single prover case, even if the verifier is also quantum. Are there any loopholes around this view, such that some fancy setup with a single prover can get beyond PSPACE?

    Vaguely related: Micali’s Computationally Sound Proofs work gets all the way to NEXP with a single prover, as long as we have access to a random oracle which the prover can query only polynomially many times. This is a reasonable setting in practice whenever the prover is “good at some hard task, but bad at some other hard task which we can use to fake a random oracle”. However, the theoretical awkwardness of the statement makes it more difficult to place into the context of other proof systems, so I’d be curious how you think about it.

  4. Geoffrey Irving Says:

    Whoops, I think I misremembered and thus misstated the Computationally Sound Proofs result. So, a follow-up question: what’s the right way to state it succinctly?

  5. Scott Says:

    Aula #1: In general, the size of the provers’ strategies will be exponential in the number of entangled qubits that they share. And the complexity of finding an optimal prover strategy could then be exponential in that size, so doubly exponential in the number of entangled qubits. In the Natarajan-Wright protocol, the number of entangled qubits is itself exponential, and that’s how they end up with NEEXP⊆EEEXP. The reason why we don’t know any upper bound on MIP* is simply that we don’t know any upper bound on how many entangled qubits the provers might need to execute an (approximately) optimal strategy.
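
    To spell out the rough counting (just a back-of-the-envelope sketch, not a precise statement from their paper):

      q shared entangled qubits  ⇒  prover strategies of size ~ 2^q
      optimizing over such strategies  ⇒  time ~ 2^{2^q}
      q ~ 2^{poly(n)} (as in Natarajan-Wright)  ⇒  time ~ 2^{2^{2^{poly(n)}}}, i.e. triply exponential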

  6. Scott Says:

    Geoffrey #3: That’s correct, we’ve known since 2009 that QIP=IP=PSPACE. With quantum interactive proofs, the big surprise—proved by Kitaev and Watrous in 2000—was that a 3-round protocol already gives you the full power of QIP (i.e., of PSPACE), whereas in the classical world, any constant number of rounds gives you only AM, believed to be much smaller than PSPACE.

  7. Scott Says:

    gentzen #2: Thanks, I hadn’t seen that paper by Waintal and it looks kind of interesting. In the past, when people identified specific types of errors that went outside the assumptions of the fault-tolerance theorem, it was often possible to generalize the theorem to account for them. It will be interesting to see whether that can be done in this case, in addition to seeing whether there’s any plausible physics that would give rise to this sort of error (the other obvious question).

    Note that this will be a more sophisticated discussion than you could find in Dyakonov’s own papers. In his most recent broadsides, he simply says that QC obviously can’t scale, because it involves manipulating 2^n continuous parameters. I.e., he offers an ‘argument’ that refuses to engage with the understanding of quantum fault-tolerance that was already firmly established by 1996.

  8. Raoul Ohio Says:

    Put me down also as a “quantum computing skeptic”, just as I am a “controlled nuclear fusion skeptic”. There might not be any theoretical reason why either or both cannot happen, but maybe the “one damned thing after another” sequence of practical problems does not converge.

  9. Boaz Barak Says:

    In the spirit of quote mining to embarrass you, didn’t you say in a comment to my post https://windowsontheory.org/2017/10/30/the-different-forms-of-quantum-computing-skepticism/ that “quantum simulation is EASILY the basis for a billion-dollar business”?

    An interesting historical analogy is that there were plenty of “Babbage computing engine skeptics,” and indeed he ran out of money. In some sense these skeptics were right: given that NASA still employed human computers in the 1960s, I would guess that even if Babbage had managed to build it, his engine would have beaten human computers only for very particular applications. In another sense they were of course completely wrong.

    I too am a so-called “quantum computing hype skeptic,” in the sense that I don’t foresee a huge near-term commercial impact from quantum computing (although I am even more skeptical of my own judgment of commercial potential…).

    But somehow I believe that in the long run quantum computing will turn out to be very significant, mostly from a sense of aesthetics. It just doesn’t seem OK for god to make the universe behave so strangely, give up on the notion of locality and in some sense on objective truth, enable a completely new kind of computational model, and then have all that comes out of it be a machine to factor integers.

  10. Scott Says:

    Boaz #9: Rereading my comment, you’ll note that I didn’t put any timeframe on it! 🙂 I stand by the claims that:

    (1) if and when QC becomes practical at scale, the simulation of chemistry and physics provides an obvious class of applications (probably still the clearest applications that we know about) that has billions of dollars riding on it, and
    (2) given the experimental progress that there’s been, the possibility of quantum simulators really starting to become useful within the next decade or so can’t be dismissed.

    Does this make me an “optimist”? I’m not sure. Most QC optimists, certainly when they’re applying for funds (but not only then), go substantially beyond the above and into territory that I disagree with…

  11. asdf Says:

    An obvious way to get a 2-second verifiable delay is to bounce a radio or laser signal off the moon ;-). Some time back I concocted what was supposed to be a hard-to-jam data broadcasting scheme based on that, probably not of interest here.

    Purely local proof-of-work schemes (like those for cryptocurrency) should probably be memory-bound (like scrypt) rather than purely computation-bound. The idea is that memory is about the most optimized silicon (for both performance and cost) that currently exists, and it’s available everywhere, while Bitcoin showed that task-specialized VLSI processors will always beat commodity ones such as GPUs.
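
    For concreteness, here’s a toy sketch of what I mean by “memory-bound” (a crude ROMix/scrypt-flavored construction, not real scrypt; the parameters are made up):

      import hashlib
      import os

      def toy_memory_hard(password: bytes, salt: bytes, n: int = 2**15) -> bytes:
          """Toy memory-hard function: the data-dependent reads force the
          evaluator to keep roughly n hash-sized words in fast memory."""
          # Phase 1: fill a table with a sequential chain of hashes.
          x = hashlib.sha256(password + salt).digest()
          table = []
          for _ in range(n):
              table.append(x)
              x = hashlib.sha256(x).digest()
          # Phase 2: n reads at positions that depend on the running state,
          # so skipping storage means recomputing long hash chains on the fly.
          for _ in range(n):
              j = int.from_bytes(x[:8], "big") % n
              x = hashlib.sha256(x + table[j]).digest()
          return x

      print(toy_memory_hard(b"block header bytes", os.urandom(16)).hex())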

  12. L2 sans Interference Says:

    Since quantum theory can be seen as a generalized probability theory that allows L2 sampling + interference, can any purported advantage of a quantum algorithm be nullified if interference is, in some sense, unnecessary to efficiently solve the problem? In particular, did Ewin Tang’s work show that “L2 sampling” alone is insufficient for a quantum advantage in recommendation?
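
    By “L2 sampling” from a vector I mean something like the following toy sketch (the vector is made up):

      import numpy as np

      rng = np.random.default_rng(0)

      def l2_sample(v: np.ndarray) -> int:
          """Return index i with probability |v_i|^2 / ||v||^2
          (squared-magnitude sampling, with no interference anywhere)."""
          p = np.abs(v) ** 2
          p /= p.sum()
          return int(rng.choice(len(v), p=p))

      v = np.array([3.0, -1.0, 0.0, 2.0])
      print([l2_sample(v) for _ in range(5)])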

  13. amy Says:

    I was thinking about the “add noise first” approach to writing about my recent project (an ed project, 12 students) in a way that won’t fall foul of the IRB by invading the study subjects’ privacy, but I think that’s going to be difficult, because every way I can think of doing it approaches a state of creative nonfiction, which is definitely not what’s wanted here. That sort of thing works fine if you’re trying to write about real people/events in a literary manner and you don’t want to, say, get sued. People do it all the time. (Sometimes not well enough.) Here, I think I can subtract, but not add. Still thinking.

  14. Lucius Says:

    >Gentle measurement of quantum states and differential privacy turning out to be almost the same subject?

    Does that suggest new algorithms for machine learning? (classic or otherwise)

  15. Job Says:

    I’ve seen bio-chemistry mentioned as a field that can benefit from quantum computing.

    But do you think bio-chemistry itself will be used as a computing platform?

    For example, you can amplify a DNA sequence using a process called PCR. It splits up the DNA and fills in each half. You end up with 2^n copies, after n steps.

    Since DNA is more or less programmable and can be scanned/manipulated (using proteins like Cas9, as in CRISPR kits), there’s the potential to enable computing with large amounts of parallelism.

    And it’s not unlike quantum computing in that you’d need to find a way to amplify a solution once it’s encountered – enough that you can read it out.

    Relative to a QC, it’s far more resource-intensive – the exponentiality is explicit – but it has similar dynamics.
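
    To make the doubling and the “amplify the solution” step above concrete, here’s a toy sketch (the sequences and the predicate are made up):

      from collections import Counter

      def pcr_cycles(copies: int, n: int) -> int:
          """Each PCR cycle splits every double strand and fills in both halves,
          so the copy count (ideally) doubles: after n cycles, copies * 2**n."""
          for _ in range(n):
              copies *= 2
          return copies

      print(pcr_cycles(1, 30))  # ~10^9 copies from one template after 30 cycles

      # Toy "selective amplification": only strands satisfying the predicate get
      # doubled, so a rare solution strand quickly dominates the pool.
      pool = Counter({"ACGT": 1, "TTTT": 1_000})
      is_solution = lambda s: s == "ACGT"
      for _ in range(20):
          for s in list(pool):
              if is_solution(s):
                  pool[s] *= 2
      print(pool.most_common())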

  16. Scott Says:

    L2 sans Interference #12: Yes and yes.

  17. Scott Says:

    Lucius #14: It has indeed suggested new procedures for learning information about quantum states—most notably, the “Quantum Private Multiplicative Weights” (QPMW) algorithm, which we introduce in the paper (read it and see! 🙂 ). We don’t know yet whether it’s possible to go in the opposite direction, and use results about quantum measurement to get new insights purely for classical machine learning.

  18. Scott Says:

    Job #15: No, DNA computing is a fundamentally different proposition from quantum computing, because ultimately, it’s an exotic way to implement the familiar class P (i.e., do classical computation). If you gain, it’s “just” in the constant factors—or in being able to program biology for medical or other applications. You don’t enlarge the set of asymptotically feasible problems in the way that BQP is believed to do.

  19. Job Says:

    No, DNA computing is a fundamentally different proposition from quantum computing, because ultimately, it’s an exotic way to implement the familiar class P (i.e., do classical computation).

    OK so it’s DNA computing.

    I realize it’s still just P. The part that reminds me of a QC is the output part, since it requires amplifying the signal of whichever unit encounters the solution.

    Hey and natural selection could play the role of quantum interference. Both constructive and destructive, in a very real sense.

  20. Lucius Says:

    Scott #17,

    I think I got the connection. Wow!

    In retrospect it should not be surprising that ML meets DP: if the model you train is good, then it must get rid of most or all information about the particular set of examples you trained it on (except that this set was taken from some larger population defined as obeying the law your model implements). So DP and ML are the same fight in disguise. And it’s even almost the same math:
    -DP would say: take a database (a two-dimensional matrix where one row = one user, one column = one field), multiply each row by the query (a collection of weights, one per column), transform that into a probability (sum and exponentiate the result for each row, and divide by the total if you wish), and finally pick one row at random according to its probability.
    -ML would say: the database is actually a minibatch of inputs to some neuron (a two-dimensional matrix where one row = one example, one column = one feature), the weights of the query are the synaptic weights of this neuron (it’s even the same name!), and the exponentiation is an activation function (DP folks, you might try some of the others we’ve found); the only difference is that we don’t pick one winner from the batch. Well, maybe we should?
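
    Minimal sketch of the shared recipe (the data and weights are made up; this is just the “multiply, exponentiate, normalize, sample one row” step common to both descriptions):

      import numpy as np

      rng = np.random.default_rng(0)

      db = rng.normal(size=(5, 3))       # one row per user/example, one column per field/feature
      w = np.array([0.5, -1.0, 2.0])     # query weights / synaptic weights, one per column

      scores = db @ w                    # multiply each row by the weights and sum
      p = np.exp(scores)                 # exponentiate (softmax-style / activation)
      p /= p.sum()                       # normalize into a probability distribution
      winner = int(rng.choice(len(db), p=p))  # pick one row at random with that probability
      print(winner, p.round(3))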

    I think I can see how a quantum theorist would see this: the database/inputs are a collection of physical systems one can measure (a two-dimensional matrix where one row = one system, one column = one possible measurement basis), the multiplication stuff is actually a rotation of the measurement basis, and when you actually perform the measurement the result comes out at random according to an exponentiation of sorts (the squared amplitudes). That’s neat.

    So first, do you see anything wrong with the picture above? Second, it seems that there is no place for entanglement in this picture. Would it be wrong to stipulate that it is what defines columns versus rows (e.g., no entanglement between two rows)?

  21. Douh Says:

    What will happen first: robust QC or controlled fusion energy production? Which one is better for tenure?

  22. Scott Says:

    Lucius #20: To be honest, I have difficulty mapping what you write onto what’s in our paper. The mathematical connection that Guy and I made is specifically between differential privacy and gentle measurement—i.e., the problem of measuring a collection of quantum states without damaging them much. It’s not a connection to ML or quantum mechanics in general. If you’re interested in this stuff, please reread our intro! To understand our main point, it’s absolutely essential that you understand what a “gentle measurement” is, for example by looking at our main example of one (the Laplace noise measurement).
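
    For readers who haven’t seen it, the classical version of “adding Laplace noise” looks something like the toy counting query below. To be clear, this is only the classical Laplace mechanism, not the gentle quantum measurement from the paper; the data and threshold are made up.

      import numpy as np

      rng = np.random.default_rng(0)

      def laplace_count(db: np.ndarray, predicate, eps: float) -> float:
          """Laplace mechanism for a counting query: the true count has
          sensitivity 1 (changing one record changes it by at most 1), so
          adding Laplace noise of scale 1/eps gives eps-differential privacy."""
          true_count = sum(predicate(x) for x in db)
          return true_count + rng.laplace(loc=0.0, scale=1.0 / eps)

      db = rng.normal(size=100)
      print(laplace_count(db, lambda x: x > 1.0, eps=0.5))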

  23. Scott Says:

    Douh #21: Dunno. QC is probably better for tenure than fusion at this specific moment, but these things can change.

  24. gentzen Says:

    Scott #7: Thanks for your answer. I know that I always take too long to reply (especially compared to today’s standards on the internet), but this time I took a really long time even by my own standards.

    My problem is that even though I find the questions raised by scalable quantum computing important and interesting, I have read neither Nielsen and Chuang nor even Mermin in depth. Even though the article by Waintal is only 6 pages and feels easy to read, it uses examples like “topological surface code”, “superconducting transmon”, or “semi-conducting singlet-triplet double quantum dot qubit” which are not explained in Nielsen and Chuang or Mermin. I would not normally have looked at such an article, but comment discussions on SciRate are so rare that the time spent reading Michel Dyakonov’s defense and this referenced article was small.

    So in order to be able to give at least some sort of reply, I have now tried to read (in depth) chapter 7, “Quantum computers: physical realization”, from Nielsen and Chuang. This was an enjoyable experience, and I especially liked the NMR approach with its direct use of statistical ensembles and the density matrix. But in the end, all of the examples presented in that chapter are obviously unable to allow scalable quantum computing:

    As these examples demonstrate, coming up with a good physical realization for a quantum computer is tricky business, fraught with tradeoffs. All of the above schemes are unsatisfactory, in that none allow a large-scale quantum computer to be realized anytime in the near future. However, that does not preclude the possibility, and in fact …

    So it is an interesting fact that today’s schemes for quantum computing are no longer obviously unable to allow scalable quantum computing. Waintal’s sophisticated discussion of where they might still have issues limiting their scalability shows just how much the field of scalable quantum computing has advanced.

    And still those excellent texts will likely remain (for a long time) the place for the uninitiated reader to learn the basics, even though they don’t explain a single physical scheme that would allow scalable quantum computing.