The Quantum PCP Manifesto

Behold the PCP Theorem, one of the crowning achievements of complexity theory:

Given a 3SAT formula φ, it’s NP-hard to decide whether (1) φ is satisfiable or (2) at most a 1-ε fraction of the clauses are satisfiable, promised that one of these is the case. Here ε is a constant independent of n.
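To see what the gap buys you, here’s a toy Python sketch of the resulting spot-check (purely illustrative; the actual PCP verifier queries a specially encoded proof, not an assignment). In case (1) a satisfying assignment passes every sampled clause; in case (2) every assignment fails each sample with probability at least ε, so a constant number of samples suffices, independent of n:

```python
import random

def spot_check(clauses, assignment, trials=60):
    """Sample random clauses rather than reading the whole formula.

    clauses: list of 3-tuples of nonzero ints, DIMACS-style
             (literal v means variable v is true, -v means false).
    assignment: dict mapping each variable to a bool.

    Case (1): a satisfying assignment passes every trial.
    Case (2): any assignment violates a random clause with
    probability >= eps, so all trials pass with probability
    at most (1 - eps)**trials.
    """
    for _ in range(trials):
        clause = random.choice(clauses)
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # caught a violated clause
    return True

# Toy instance: (x1 or x2 or x3) and (not x1 or x2 or not x3)
print(spot_check([(1, 2, 3), (-1, 2, -3)], {1: True, 2: True, 3: False}))
```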

In recent weeks, I’ve become increasingly convinced that a Quantum PCP Theorem like the following will one day be a crowning achievement of quantum complexity theory:

Given a set of local measurements on an n-qubit register, it’s QMA-hard to decide whether (1) there exists a state such that all of the measurements accept with probability 1, or (2) for every state, at most a 1-ε fraction of the measurements accept with probability more than 1-δ, promised that one of these is the case. Here a “local” measurement is one that acts on at most (say) 3 qubits, and ε and δ are constants independent of n.
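For those who prefer Hamiltonians: a standard translation of the above (the bookkeeping here is my gloss, not part of the official statement) sets H_i = I − A_i, where A_i is the accepting operator of the i-th measurement, so that each H_i acts on at most 3 qubits and 0 ≤ H_i ≤ I. The two cases then read:

```latex
\text{(1)}\quad \exists\,|\psi\rangle:\;
\Big\langle\psi\Big|\textstyle\sum_{i=1}^{m} H_i\Big|\psi\Big\rangle = 0,
\qquad\qquad
\text{(2)}\quad \forall\,|\psi\rangle:\;
\Big\langle\psi\Big|\textstyle\sum_{i=1}^{m} H_i\Big|\psi\Big\rangle \,\ge\, \epsilon\delta m.
```

Case (2) follows because at least εm of the measurements each accept with probability at most 1−δ, contributing energy at least δ apiece. In other words, the conjecture asks for QMA-hardness of the local Hamiltonian problem with a promise gap that’s a constant fraction of the number of terms.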

I’m 99% sure that this theorem (alright, conjecture) or something close to it is true. I’m 95% sure that the proof will require a difficult adaptation of classical PCP machinery (whether Iritean or pre-Iritean), in much the same way that the Quantum Fault-Tolerance Theorem required a difficult adaptation of classical fault-tolerance machinery. I’m 85% sure that the proof is achievable in a year or so, should enough people make it a priority. I’m 75% sure that the proof, once achieved, will open up heretofore undreamt-of vistas of understanding and insight. I’m 0.01% sure that I can prove it. And that is why I hereby bequeath the actual proving part to you, my readers.

Notes:

  1. By analogy to the classical case, one expects that a full-blown Quantum PCP Theorem would be preceded by weaker results (“quantum assignment testers,” quantum PCPs with weaker parameters, etc.). So these are obviously the place to start.
  2. Why hasn’t anyone tackled this question yet? Well, one reason is that it’s hard. But a second reason is that people keep getting hung up on exactly how to formulate the question. To forestall further nitpicking, I hereby declare it obvious that a “Quantum PCP Theorem” means nothing more or less than a robust version of Kitaev’s QMA-completeness theorem, in exactly the same sense that the classical PCP Theorem was a robust version of the Cook-Levin Theorem. Any formulation that captures this spirit is fine; mine was only one possibility.

17 Responses to “The Quantum PCP Manifesto”

  1. michael vassar Says:

    Scott, there are only hundreds of quantum complexity experts out there. I’m pretty sure that you are implicitly claiming a level of ability far inferior to your peers through your probability assessments, especially given that there is surely a high correlation between two people’s abilities to solve a given problem. You probably don’t mean to imply this, so you should revise your numerical estimates.
    More generally, this seems like an example of a general failure of rationality, namely the failure to recognize that probabilities mean something and can’t just be made up.

  2. Scott Says:

    Michael: When I say I’m “x% sure” of something, I mean that’s a lower bound on my subjective probability — the highest probability that I’m willing to claim in public.

    And even if I could state my “actual” probabilities, I don’t think they’d mean much. I’m a theoretical computer scientist, who only moonlights as a Bayesian agent.

  3. Andy D Says:

    Scott–great, gutsy post!

    What might get the research ball rolling would be to show the world that there would be some quantum analogue of the giant wave of inapproximability results that followed the classical PCP theorem.

    (i.e. build a network of robust reductions starting from a promise problem like the one stated.)

  4. Anonymous Says:

    Okay, being a lowly programmer who doesn’t know much about complexity theory, can you explain to me how this classical PCP theorem has affected computer programming? For example, has it inspired new algorithms that I would expect to find in a C++ or Java cookbook?

    Same question if a quantum version of PCP is proven.

  5. Luca Says:

    Scott, complexity theory took a wrong turn at some point in the 70s because of the following logic: “complexity theory is like computability theory but with resource bounds, hence we should look at all the major results of computability theory and see what happens when we impose resource bounds.”

    What makes complexity theory interesting, however, are the notions and results that would not have made sense without resource bounds (like pseudorandomness, interactive proofs, circuit lower bounds…).

    It may be similarly risky to systematically think about “quantum analogs” of results and notions from classical complexity theory. The good quantum stuff probably does not have a classical counterpart (like Shor’s algorithm, or the unconditional quantum key agreement protocols).

  6. Andy D Says:

    anon–the main algorithmic contribution of the PCP theorem is, in principle, a method for verifying (specially written) proofs of mathematical claims by querying only a constant number of proof bits, with verification time essentially independent of the length of the proof. So ‘in the limit’, it could be astonishingly useful, but it requires intelligent proofs as input and so is not directly useful in problem solving.

    As far as I can tell we are very far from either
    a) having enough interesting and enormous formal proofs to really create demand for PCP, or
    b) making the conversion process, from proofs to ‘holographic proofs’ that can be checked very quickly, efficient enough to justify using it.

    But there’s another issue. Regular formal proofs can generally be checked in linear time, while producing a PCP still requires superlinear time. It seems the only reasons to prefer having a PCP are:
    i) convincing people who don’t trust you, quickly; and is trust that much of an issue in the math/cs community? We could just make the proof public and allow anyone to validate it the traditional way. There’s a gut appeal to letting anyone do it quickly themselves, but how much intuition or true confidence about a theorem could anyone hope to gain from checking a PCP for it? I don’t think I’ll be eating my words anytime soon.

    ii) you might want a PCP if you’re worried that the standard proof is so large that the verification process is likely to make some mistakes. But then the (longer) PCP-production process is even likelier to make mistakes, and even if you succeed, PCPs only convince probabilistically, so why not just check the standard proof multiple times instead?
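    To put a number on “only convince probabilistically”: amplification is cheap, since independent rounds multiply the error. A quick sketch, with purely illustrative parameters:

    ```python
    from math import ceil, log

    def pcp_query_budget(queries_per_round, soundness_error, target_error):
        """Total queries for a PCP verifier repeated until the chance of
        accepting a bogus proof drops below target_error. Rounds are
        independent, so k rounds err with probability soundness_error**k."""
        rounds = ceil(log(target_error) / log(soundness_error))
        return rounds * queries_per_round

    # 3 queries per round, soundness error 1/2, target error below 2**-40:
    print(pcp_query_budget(3, 0.5, 2**-40))  # 120 queries, whatever the proof length
    ```

    So the probabilistic caveat costs very little at verification time; the real overhead, as above, is producing the PCP in the first place.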

    Finally, one might argue that, in helping to show that many approximation problems are unsolvable unless P = NP, the PCP theorem freed up brains from attempting impossible tasks, and some of those people might’ve gone on to ‘practical’ programming careers. But this isn’t what you asked for.

    One can imagine scenarios where a PCP would be useful, and I hope that day does come to pass. Meanwhile, the theorem is finally more approachable with Dinur’s proof (there’s a new writeup, which I haven’t read; see the second-to-top post at Luca’s blog), although still, key parts of the alphabet-reduction process are just too damn hard if you ask me.

  7. Anonymous Says:

    Luca, why did you say that complexity theory took a wrong turn in the seventies? It seems to me that pretty much the opposite is true: complexity theory began with analogies to recursion theory, and then took a right turn later with the invention of all the truly original notions you mention. Also, some of the analogies with recursion theory are quite nice. Think of Cook’s theorem (if you wish to argue that it should be called Cook-Levin, please do so on Fortnow’s blog!), or even of Ladner’s theorem.

  8. Andy D Says:

    I think Luca means that progress was slower than it could have been because the tools provided by computability theory were relied upon heavily despite being too coarse. The problem didn’t become clear until the oracle work around the end of that decade showed that attacking P vs. NP in that framework was a dead end.

    Cook-Levin was of course nice, and a nonrelativizing result too, but it took much longer to exploit the fact that satisfiability can be attacked with ‘robustizing’ tools of algebra, coding, graph expansion, etc., in a way that black-box computations can’t.

  9. Anonymous Says:

    I disagree that no one has thought about it (though I guess they didn’t get too far). I have heard “quantum PCP” talk for at least 3-4 years. In particular, when Kerenidis and de Wolf were working on quantum locally decodable codes in Berkeley there was such talk.

  10. Scott Says:

    Luca: I actually strongly agree with you in general. I’ve never been a fan of just flipping through old STOC proceedings and seeing how many classical results we can quantize. Indeed for precisely that reason, I wouldn’t have issued this challenge if not for the following three mitigating factors:

    (1) At a recent Banff workshop, at least three people independently asked me if there’s a quantum PCP theorem. Finally I said to myself, “Enough! Let’s stop apologizing and prove the damn thing!”

    (2) In contrast to (say) the people who used recursion-theoretic techniques to attack P vs. NP, in this case we already have the successful examples of Kitaev’s Quantum Cook-Levin Theorem and the Aharonov–Ben-Or et al. Quantum Fault-Tolerance Theorem. This is what indicates to me that quantizing the PCP theorem is likely to be possible.

    (3) In spite of (2), I’m quite certain that a Quantum PCP Theorem will require significant new ideas. Recently I spent a day or two studying Irit’s proof of the classical PCP theorem (which I hadn’t done before), and I found about 20 violations of the No-Cloning Theorem on every page. 🙂

  11. Scott Says:

    I disagree that no one has thought about it (though I guess they didn’t get too far). I have heard “quantum PCP” talk for at least 3-4 years. In particular, when Kerenidis and de Wolf were working on quantum locally decodable codes in Berkeley there was such talk.

    Yes, Ran Raz also had a paper that involved both quantum and PCPs. But if anyone has thought specifically about a Quantum PCP Theorem (in the sense of my post), I’d be interested to know about it.

  12. Question Says:

    This is off-topic, but I’ve been trying to better understand the Haar measure and how to apply it to particular sets of unitary matrices. I was wondering if anyone can help or point me in the right direction.
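    One standard recipe, for what it’s worth: QR-decompose a complex Ginibre matrix and fix the phases of R’s diagonal, and the resulting Q is Haar-distributed over U(n). A numpy sketch (function name and defaults are just for illustration):

    ```python
    import numpy as np

    def haar_unitary(n, rng=None):
        """Sample an n x n unitary from the Haar measure on U(n)."""
        rng = rng or np.random.default_rng()
        # Complex Ginibre matrix: i.i.d. standard complex Gaussian entries.
        z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        q, r = np.linalg.qr(z)
        d = np.diagonal(r)
        # Rescale column j of q by the phase of r[j, j]; this makes the
        # QR factorization unique and q exactly Haar-distributed.
        return q * (d / np.abs(d))

    u = haar_unitary(4)
    print(np.allclose(u.conj().T @ u, np.eye(4)))  # True: u is unitary
    ```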

  13. Anonymous Says:

    Scott, are you following in Erdős’ footsteps? So far you’ve posted several conjectures, with prizes, for people to solve, and you don’t seem too concerned about getting a tenure-track position. If you do, just don’t start taking the amphetamines. 🙂

  14. Scott Says:

    Scott, are you following in Erdős’ footsteps?

    Well, we both love grapefruit, silk shirts, international travel, and clearly-stated open problems. But we part ways along the genius, productivity, and celibacy axes.

  15. Scott Says:

    Andy:

    What might get the research ball rolling would be to show the world that there would be some quantum analogue of the giant wave of inapproximability results that followed the classical PCP theorem.

    Right now, we don’t even have a zoo of ordinary QMA-complete problems, so that would be an obvious first step.

    (Personally, I’ve always found the PCP Theorem itself much more interesting than its applications — but maybe that’s just me.)

  16. Andy D Says:

    No, it’s not just you. I wish holographic proofs had more of a role to play in structural complexity theory as opposed to algorithmics. (But one nice use I remember: Buhrman & Fortnow showing that if NP is easy on average, then P = BPP.)

    Happy anniversary! Thanks and keep up the good (non-)work.

  17. Anonymous Says:

    Even at the end of the ’90s, people were still working on Rice-style theorems for complexity theory.