The Scientific Case for P≠NP

Out there in the wider world—OK, OK, among Luboš Motl, and a few others who comment on this blog—there appears to be a widespread opinion that P≠NP is just “a fashionable dogma of the so-called experts,” something that’s no more likely to be true than false.  The doubters can even point to at least one accomplished complexity theorist, Dick Lipton, who publicly advocates agnosticism about whether P=NP.

Of course, not all the doubters reach their doubts the same way.  For Lipton, the thinking is probably something like: as scientists, we should be rigorously open-minded, and constantly question even the most fundamental hypotheses of our field.  For the outsiders, the thinking is more like: computer scientists are just not very smart—certainly not as smart as real scientists—so the fact that they consider something a “fundamental hypothesis” provides no information of value.

Consider, for example, this comment of Ignacio Mosqueira:

If there is no proof that means that there is no reason a-priori to prefer your arguments over those [of] Lubos. Expertise is not enough.  And the fact that Lubos is difficult to deal with doesn’t change that.

In my response, I wondered how broadly Ignacio would apply the principle “if there’s no proof, then there’s no reason to prefer any argument over any other one.”  For example, would he agree with the guy interviewed on Jon Stewart who earnestly explained that, since there’s no proof that turning on the LHC will destroy the world, but also no proof that it won’t destroy the world, the only rational inference is that there’s a 50% chance it will destroy the world?  (John Oliver’s deadpan response was classic: “I’m … not sure that’s how probability works…”)

In a lengthy reply, Luboš bites this bullet with relish and mustard.  In physics, he agrees, or even in “continuous mathematics that is more physics-wise,” it’s possible to have justified beliefs even without proof.  For example, he admits to a 99.9% probability that the Riemann hypothesis is true.  But, he goes on, “partial evidence in discrete mathematics just cannot exist.”  Discrete math and computer science, you see, are so arbitrary, manmade, and haphazard that every question is independent of every other; no amount of experience can give anyone any idea which way the next question will go.

No, I’m not kidding.  That’s his argument.

I couldn’t help wondering: what about number theory?  Aren’t the positive integers a “discrete” structure?  And isn’t the Riemann Hypothesis fundamentally about the distribution of primes?  Or does the Riemann Hypothesis get counted as an “honorary physics-wise continuous problem” because it can also be stated analytically?  But then what about Goldbach’s Conjecture?  Is Luboš 50/50 on that one too?  Better yet, what about continuous, analytic problems that are closely related to P vs. NP?  For example, Valiant’s Conjecture says you can’t linearly embed the permanent of an n×n matrix as the determinant of an m×m matrix, unless m≥exp(n).  Mulmuley and others have connected this “continuous cousin” of P≠NP to issues in algebraic geometry, representation theory, and even quantum groups and Langlands duality.  So, does that make it kosher?  The more I thought about the proposed distinction, the less sense it made to me.

But enough of this.  In the rest of this post, I want to explain why the odds that you should assign to P≠NP are more like 99% than they are like 50%.  This post supersedes my 2006 post on the same topic, which I hereby retire.  While that post was mostly OK as far as it went, I now feel like I can do a much better job articulating the central point.  (And also, I made the serious mistake in 2006 of striving for literary eloquence and tongue-in-cheek humor.  That works great for readers who already know the issues inside-and-out, and just want to be amused.  Alas, it doesn’t work so well for readers who don’t know the issues, are extremely literal-minded, and just want ammunition to prove their starting assumption that I’m a doofus who doesn’t understand the basics of his own field.)

So, OK, why should you believe P≠NP?  Here’s why:

Because, like any other successful scientific hypothesis, the P≠NP hypothesis has passed severe tests that it had no good reason to pass were it false.

What kind of tests am I talking about?

By now, tens of thousands of problems have been proved to be NP-complete.  They range in character from theorem proving to graph coloring to airline scheduling to bin packing to protein folding to auction pricing to VLSI design to minimizing soap films to winning at Super Mario Bros.  Meanwhile, another cluster of tens of thousands of problems has been proved to lie in P (or BPP).  Those range from primality to matching to linear and semidefinite programming to edit distance to polynomial factoring to hundreds of approximation tasks.  Like the NP-complete problems, many of the P and BPP problems are also related to each other by a rich network of reductions.  (For example, countless other problems are in P “because” linear and semidefinite programming are.)
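To make the notion of a "reduction" concrete for readers who haven't seen one: below is a minimal Python sketch of the textbook encoding of graph 3-coloring as CNF satisfiability (illustrative only; the variable encoding is a standard one, not taken from any particular paper).  Reductions of roughly this flavor, composed and iterated, are what tie the tens of thousands of NP-complete problems to each other.

```python
from itertools import combinations

def three_coloring_to_cnf(n_vertices, edges):
    """Textbook reduction from graph 3-coloring to CNF-SAT.

    Variable var(v, c) means "vertex v receives color c".  Returns a
    list of clauses, each a list of signed integers (DIMACS-style
    literals); the output size is linear in the size of the graph.
    """
    def var(v, c):
        return 3 * v + c + 1

    clauses = []
    for v in range(n_vertices):
        # Every vertex gets at least one of the three colors...
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])
        # ...and at most one of them.
        for c1, c2 in combinations(range(3), 2):
            clauses.append([-var(v, c1), -var(v, c2)])
    for (u, v) in edges:
        # Adjacent vertices never share a color.
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])
    return clauses

# A triangle is 3-colorable, so the resulting CNF is satisfiable.
cnf = three_coloring_to_cnf(3, [(0, 1), (1, 2), (0, 2)])
```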

So, if we were to draw a map of the complexity class NP  according to current knowledge, what would it look like?  There’d be a huge, growing component of NP-complete problems, all connected to each other by an intricate network of reductions.  There’d be a second huge component of P problems, many of them again connected by reductions.  Then, much like with the map of the continental US, there’d be a sparser population in the middle: stuff like factoring, graph isomorphism, and Unique Games that for various reasons has thus far resisted assimilation onto either of the coasts.

Of course, to prove P=NP, it would suffice to find a single link—that is, a single polynomial-time equivalence—between any of the tens of thousands of problems on the P coast, and any of the tens of thousands on the NP-complete one.  In half a century, this hasn’t happened: even as they’ve both ballooned exponentially, the two giant regions have remained defiantly separate from each other.  But that’s not even the main point.  The main point is that, as people explore these two regions, again and again there are “close calls”: places where, if a single parameter had worked out differently, the two regions would have come together in a cataclysmic collision.  Yet every single time, it’s just a fake-out.  Again and again the two regions “touch,” and their border even traces out weird and jagged shapes.  But even in those border zones, not a single problem ever crosses from one region to the other.  It’s as if they’re kept on their respective sides by an invisible electric fence.

As an example, consider the Set Cover problem: i.e., the problem, given a collection of subsets S_1,…,S_m ⊆ {1,…,n}, of finding as few subsets as possible whose union equals the whole set.  Chvátal showed in 1979 that a greedy algorithm can produce, in polynomial time, a collection of sets whose size is at most ln(n) times larger than the optimum size.  This raises an obvious question: can you do better?  What about 0.9ln(n)?  Alas, building on a long sequence of prior works in PCP theory, it was recently shown that, if you could find a covering set at most (1-ε)ln(n) times larger than the optimum one, then you'd be solving an NP-complete problem, and P would equal NP.  Notice that, conversely, if the hardness result worked for ln(n) or anything above it, then we'd also get P=NP.  So, why do the algorithm and the hardness result "happen to meet" at exactly ln(n), with neither one venturing the tiniest bit beyond?  Well, we might say, ln(n) is where the invisible electric fence is for this problem.
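For concreteness, here is a minimal sketch of the greedy algorithm in question (just the standard "repeatedly grab the set covering the most still-uncovered elements" loop, not Chvátal's analysis); its output is guaranteed to be within a ln(n) factor of the optimum.

```python
def greedy_set_cover(universe, subsets):
    """Greedy Set Cover: repeatedly pick the subset that covers the most
    still-uncovered elements.  Uses at most ~ln(n) times the optimal
    number of subsets, assuming the given subsets do cover the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("the given subsets do not cover the universe")
        chosen.append(set(best))
        uncovered -= set(best)
    return chosen

# Toy instance: here the greedy choice happens to find an optimal 2-set cover.
cover = greedy_set_cover(range(1, 7), [{1, 2, 3, 4}, {3, 4, 5, 6}, {1, 2}, {5, 6}])
```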

Want another example?  OK then, consider the “Boolean Max-k-CSP” problem: that is, the problem of setting n bits so as to satisfy the maximum number of constraints, where each constraint can involve an arbitrary Boolean function on any k of the bits.  The best known approximation algorithm, based on semidefinite programming, is guaranteed to satisfy at least a 2k/2^k fraction of the constraints.  Can you guess where this is going?  Recently, Siu On Chan showed that it’s NP-hard to satisfy even slightly more than a 2k/2^k fraction of constraints: if you can, then P=NP.  In this case the invisible electric fence sends off its shocks at 2k/2^k.
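(The semidefinite-programming algorithm itself is far too involved to sketch in a blog post, but to make the objective concrete, here is a purely illustrative Python sketch of the Max-k-CSP objective together with the trivial random-assignment baseline, which in expectation satisfies each constraint with probability (number of satisfying inputs)/2^k.  The point of the SDP algorithm is that it guarantees the 2k/2^k fraction quoted above, and Chan's result says that's exactly where the fence sits.)

```python
import random

def count_satisfied(assignment, constraints):
    """Count satisfied constraints.  Each constraint is (indices, predicate),
    where `predicate` is an arbitrary Boolean function of the k bits at
    those indices."""
    return sum(1 for idx, pred in constraints
               if pred(*(assignment[i] for i in idx)))

def random_assignment_baseline(n, constraints, trials=100):
    """Trivial baseline (NOT the SDP algorithm): try a few uniformly random
    assignments and keep the best one found."""
    best, best_score = None, -1
    for _ in range(trials):
        a = [random.randint(0, 1) for _ in range(n)]
        score = count_satisfied(a, constraints)
        if score > best_score:
            best, best_score = a, score
    return best, best_score

# Toy instance with n = 4 bits and two k = 3 constraints.
constraints = [((0, 1, 2), lambda x, y, z: (x ^ y ^ z) == 1),
               ((1, 2, 3), lambda x, y, z: bool(x) and not z)]
print(random_assignment_baseline(4, constraints))
```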

I could multiply such examples endlessly—or at least, Dana (my source for such matters) could do so.  But there are also dozens of “weird coincidences” that involve running times rather than approximation ratios; and that strongly suggest, not only that P≠NP, but that problems like 3SAT should require c^n time for some constant c>1.  For a recent example—not even a particularly important one, but one that’s fresh in my memory—consider this paper by myself, Dana, and Russell Impagliazzo.  A first thing we do in that paper is to give an approximation algorithm for a family of two-prover games called “free games.”  Our algorithm runs in quasipolynomial time: specifically, n^{O(log n)}.  A second thing we do is show how to reduce the NP-complete 3SAT problem to free games of size ~2^{O(√n)}.

Composing those two results, you get an algorithm for 3SAT whose overall running time is roughly

$$ 2^{O( \sqrt{n} \log 2^{\sqrt{n}}) } = 2^{O(n)}. $$

Of course, this doesn’t improve on the trivial “try all possible solutions” algorithm.  But notice that, if our approximation algorithm for free games had been slightly faster—say, n^{O(log log n)}—then we could’ve used it to solve 3SAT in $$ 2^{O(\sqrt{n} \log n)} $$ time.  Conversely, if our reduction from 3SAT had produced free games of size (say) $$ 2^{O(n^{1/3})} $$ rather than 2^{O(√n)}, then we could’ve used that to solve 3SAT in $$ 2^{O(n^{2/3})} $$ time.
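Spelled out, with N denoting the size of the free game produced by the reduction (so N = 2^{O(√n)}, and the approximation algorithm takes N^{O(log N)} time), the bookkeeping behind the calculation above and these two counterfactuals is:

$$ N^{O(\log N)} = 2^{O(\sqrt{n} \cdot \sqrt{n})} = 2^{O(n)}, \qquad N^{O(\log \log N)} = 2^{O(\sqrt{n} \log n)}, $$

$$ \text{and, if instead } N = 2^{O(n^{1/3})}: \quad N^{O(\log N)} = 2^{O(n^{1/3} \cdot n^{1/3})} = 2^{O(n^{2/3})}. $$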

I should stress that these two results have completely different proofs: the approximation algorithm for free games “doesn’t know or care” about the existence of the reduction, nor does the reduction know or care about the algorithm.  Yet somehow, their respective parameters “conspire” so that 3SAT still needs c^n time.  And you see the same sort of thing over and over, no matter which problem domain you’re interested in.  These ubiquitous “coincidences” would be immediately explained if 3SAT actually did require c^n time—i.e., if it had a “hard core” for which brute-force search was unavoidable, no matter which way you sliced things up.  If that’s not true—i.e., if 3SAT has a subexponential algorithm—then we’re left with unexplained “spooky action at a distance.”  How do the algorithms and the reductions manage to coordinate with each other, every single time, to avoid spilling the subexponential secret?

Notice that, contrary to Luboš’s loud claims, there’s no “symmetry” between P=NP and P≠NP in these arguments.  Lower bound proofs are much harder to come across than either algorithms or reductions, and there’s not really a mystery about why: it’s hard to prove a negative!  (Especially when you’re up against known mathematical barriers, including relativization, algebrization, and natural proofs.)  In other words, even under the assumption that lower bound proofs exist, we now understand a lot about why the existing mathematical tools can’t deliver them, or can only do so for much easier problems.  Nor can I think of any example of a “spooky numerical coincidence” between two unrelated-seeming results, which would’ve yielded a proof of P≠NP had some parameters worked out differently.  P=NP and P≠NP can look like “symmetric” possibilities only if your symmetry is unbroken by knowledge.

Imagine a pond with small yellow frogs on one end, and large green frogs on the other.  After observing the frogs for decades, herpetologists conjecture that the populations represent two distinct species with different evolutionary histories, and are not interfertile.  Everyone realizes that to disprove this hypothesis, all it would take would be a single example of a green/yellow hybrid.  Since (for some reason) the herpetologists really care about this question, they undertake a huge program of breeding experiments, putting thousands of yellow female frogs next to green male frogs (and vice versa) during mating season, with candlelight, soft music, etc.  Nothing.

As this green vs. yellow frog conundrum grows in fame, other communities start investigating it as well: geneticists, ecologists, amateur nature-lovers, commercial animal breeders, ambitious teenagers on the science-fair circuit, and even some extralusionary physicists hoping to show up their dimwitted friends in biology.  These other communities try out hundreds of exotic breeding strategies that the herpetologists hadn’t considered, and contribute many useful insights.  They also manage to breed a larger, greener, but still yellow frog—something that, while it’s not a “true” hybrid, does have important practical applications for the frog-leg industry.  But in the end, no one has any success getting green and yellow frogs to mate.

Then one day, someone exclaims: “aha!  I just found a huge, previously-unexplored part of the pond where green and yellow frogs live together!  And what’s more, in this part, the small yellow frogs are bigger and greener than normal, and the large green frogs are smaller and yellower!”

This is exciting: the previously-sharp boundary separating green from yellow has been blurred!  Maybe the chasm can be crossed after all!

Alas, further investigation reveals that, even in the new part of the pond, the two frog populations still stay completely separate.  The smaller, yellower frogs there will mate with other small yellow frogs (even from faraway parts of the pond that they’d never ordinarily visit), but never, ever with the larger, greener frogs even from their own part.  And vice versa.  The result?  A discovery that could have falsified the original hypothesis has instead strengthened it—and precisely because it could’ve falsified it but didn’t.

Now imagine the above story repeated a few dozen more times—with more parts of the pond, a neighboring pond, sexually-precocious tadpoles, etc.  Oh, and I forgot to say this before, but imagine that doing a DNA analysis, to prove once and for all that the green and yellow frogs had separate lineages, is extraordinarily difficult.  But the geneticists know why it’s so difficult, and the reasons have more to do with the limits of their sequencing machines and with certain peculiarities of frog DNA, than with anything about these specific frogs.  In fact, the geneticists did get the sequencing machines to work for the easier cases of turtles and snakes—and in those cases, their results usually dovetailed well with earlier guesses based on behavior.  So for example, where reddish turtles and bluish turtles had never been observed interbreeding, the reason really did turn out to be that they came from separate species.  There were some surprises, of course, but nothing even remotely as shocking as seeing the green and yellow frogs suddenly getting it on.

Now, even after all this, someone could saunter over to the pond and say: “ha, what a bunch of morons!  I’ve never even seen a frog or heard one croak, but I know that you haven’t proved anything!  For all you know, the green and yellow frogs will start going at it tomorrow.  And don’t even tell me about ‘the weight of evidence,’ blah blah blah.  Biology is a scummy mud-discipline.  It has no ideas or principles; it’s just a random assortment of unrelated facts.  If the frogs started mating tomorrow, that would just be another brute, arbitrary fact, no more surprising or unsurprising than if they didn’t start mating tomorrow.  You jokers promote the ideology that green and yellow frogs are separate species, not because the evidence warrants it, but just because it’s a convenient way to cover up your own embarrassing failure to get them to mate.  I could probably breed them myself in ten minutes, but I have better things to do.”

At this, a few onlookers might nod appreciatively and say: “y’know, that guy might be an asshole, but let’s give him credit: he’s unafraid to speak truth to competence.”

Even among the herpetologists, a few might beat their breasts and announce: “Who’s to say he isn’t right?  I mean, what do we really know?  How do we know there even is a pond, or that these so-called ‘frogs’ aren’t secretly giraffes?  I, at least, have some small measure of wisdom, in that I know that I know nothing.”

What I want you to notice is how scientifically worthless all of these comments are.  If you wanted to do actual research on the frogs, then regardless of which sympathies you started with, you’d have no choice but to ignore the naysayers, and proceed as if the yellow and green frogs were different species.  Sure, you’d have in the back of your mind that they might be the same; you’d be ready to adjust your views if new evidence came in.  But for now, the theory that there’s just one species, divided into two subgroups that happen never to mate despite living in the same habitat, fails miserably at making contact with any of the facts that have been learned.  It leaves too much unexplained; in fact it explains nothing.

For all that, you might ask, don’t the naysayers occasionally turn out to be right?  Of course they do!  But if they were right more than occasionally, then science wouldn’t be possible.  We would still be in caves, beating our breasts and asking how we can know that frogs aren’t secretly giraffes.

So, that’s what I think about P and NP.  Do I expect this post to convince everyone?  No—but to tell you the truth, I don’t want it to.  I want it to convince most people, but I also want a few to continue speculating that P=NP.

Why, despite everything I’ve said, do I want maybe-P=NP-ism not to die out entirely?  Because alongside the P=NP carpers, I also often hear from a second group of carpers.  This second group says that P and NP are so obviously, self-evidently unequal that the quest to separate them with mathematical rigor is quixotic and absurd.  Theoretical computer scientists should quit wasting their time struggling to understand truths that don’t need to be understood, but only accepted, and do something useful for the world.  (A natural generalization of this view, I guess, is that all basic science should end.)  So, what I really want is for the two opposing groups of naysayers to keep each other in check, so that those who feel impelled to do so can get on with the fascinating quest to understand the ultimate limits of computation.


Update (March 8): At least eight readers have by now emailed me, or left comments, asking why I’m wasting so much time and energy arguing with Luboš Motl.  Isn’t it obvious that, ever since he stopped doing research around 2006 (if not earlier), this guy has completely lost his marbles?  That he’ll never, ever change his mind about anything?

Yes.  In fact, I’ve noticed repeatedly that, even when Luboš is wrong about a straightforward factual matter, he never really admits error: he just switches, without skipping a beat, to some other way to attack his interlocutor.  (To give a small example: watch how he reacts to being told that graph isomorphism is neither known nor believed to be NP-complete.  Caught making a freshman-level error about the field he’s attacking, he simply rants about how graph isomorphism is just as “representative” and “important” as NP-complete problems anyway, since no discrete math question is ever more or less “important” than any other; they’re all equally contrived and arbitrary.  At the Luboš casino, you lose even when you win!  The only thing you can do is stop playing and walk away.)

Anyway, my goal here was never to convince Luboš.  I was writing, not for him, but for my other readers: especially for those genuinely unfamiliar with these interesting issues, or intimidated by Luboš’s air of certainty.  I felt like I owed it to them to set out, clearly and forcefully, certain facts that all complexity theorists have encountered in their research, but that we hardly ever bother to articulate.  If you’ve never studied physics, then yes, it sounds crazy that there would be quadrillions of invisible neutrinos coursing through your body.  And if you’ve never studied computer science, it sounds crazy that there would be an “invisible electric fence,” again and again just barely separating what the state-of-the-art approximation algorithms can handle from what the state-of-the-art PCP tools can prove is NP-complete.  But there it is, and I wanted everyone else at least to see what the experts see, so that their personal judgments about the likelihood of P=NP could be informed by seeing it.

Luboš’s response to my post disappointed me (yes, really!).  I expected it to be nasty and unhinged, and so it was.  What I didn’t expect was that it would be so intellectually lightweight.  Confronted with the total untenability of his foot-stomping distinction between “continuous math” (where you can have justified beliefs without proof) and “discrete math” (where you can’t), and with exactly the sorts of “detailed, confirmed predictions” of the P≠NP hypothesis that he’d declared impossible, Luboš’s response was simply to repeat his original misconceptions, but louder.

And that brings me, I confess, to a second reason for my engagement with Luboš.  Several times, I’ve heard people express sentiments like:

Yes, of course Luboš is a raging jerk and a social retard.  But if you can just get past that, he’s so sharp and intellectually honest!  No matter how many people he needlessly offends, he always tells it like it is.

I want the nerd world to see—in as stark a situation as possible—that the above is not correct.  Luboš is wrong much of the time, and he’s intellectually dishonest.

At one point in his post, Luboš actually compares computer scientists who find P≠NP a plausible working hypothesis to his even greater nemesis: the “climate cataclysmic crackpots.”  (Strangely, he forgot to compare us to feminists, Communists, Muslim terrorists, or loop quantum gravity theorists.)  Even though the P versus NP and global warming issues might not seem closely linked, part of me is thrilled that Luboš has connected them as he has.  If, after seeing this ex-physicist’s “thought process” laid bare on the P versus NP problem—how his arrogance and incuriosity lead him to stake out a laughably-absurd position; how his vanity then causes him to double down after his errors are exposed—if, after seeing this, a single person is led to question Lubošian epistemology more generally, then my efforts will not have been in vain.

Anyway, now that I’ve finally unmasked Luboš—certainly to my own satisfaction, and I hope to that of most scientifically-literate readers—I’m done with this.  The physicist John Baez is rumored to have said: “It’s not easy to ignore Luboš, but it’s ALWAYS worth the effort.”  It took me eight years, but I finally see the multiple layers of profundity hidden in that snark.

And thus I make the following announcement:

For the next three years, I, Scott Aaronson, will not respond to anything Luboš says, nor will I allow him to comment on this blog.

In March 2017, I’ll reassess my Luboš policy.  Whether I relent will depend on a variety of factors—including whether Luboš has gotten the professional help he needs (from a winged pig, perhaps?) and changed his behavior; but also, how much my own quality of life has improved in the meantime.


Another Update (3/11): There’s some further thoughtful discussion of this post over on Reddit.


Another Update (3/13): Check out my MathOverflow question directly inspired by the comments on this post.


Yet Another Update (3/17): Dick Lipton and Ken Regan now have a response up to this post. My own response is coming soon in their comment section. For now, check out an excellent comment by Timothy Gowers, which begins “I firmly believe that P≠NP,” then plays devil’s-advocate by exploring the possibility that I called, in this comment thread, P being “severed into two,” and finally returns to reasons for believing that P≠NP after all.

513 Responses to “The Scientific Case for P≠NP”

  1. Philip White Says:

    Great post…I really liked the green-yellow frog metaphor (and of course the related observation that NP-complete and P problems don’t overlap).

    I’m curious…is there any scientific or philosophical a posteriori evidence in nature (e.g., in evolutionary biology) that you believe supports P != NP? As a made-up example, consider the possibility that someone finds evidence of an extinct species of dinosaur that was capable of solving large Sudoku puzzles in polynomial time. I’d consider that to be evidence of P = NP, if it existed.

    Also, not to wax too theological, but do you think that P != NP has any relationship to questions about the existence of god? I always wondered how a god could have created something as complex as the world/universe in 7 or so days without a good SAT solver.

  2. wolfgang Says:

    >> Then, much like with the map of the continental US, there’d be a sparser population in the middle

    probably a stupid question, but do they form their own complexity class in the large zoo or is it just a matter of time until they *have to* fall either into P or NP ?

  3. Rahul Says:

    It’d be kinda fun to assign probabilities to each problem on the Clay Institute list.

    Are most of those probabilistically estimated strongly one way or another?

    Navier-Stokes, I think not: both the blowup and the smoothness camps seem to have a fair number of adherents? I might be wrong.

  4. fred Says:

    Hi Scott, thanks for another great post 🙂

    In the Susskind/Tao thread you mentioned that P=NP would imply the “ability to solve all search problems efficiently”.

    I’m a bit confused about this (probably has to do with the exact definition of a search problem).

    A search problem on an input set of size m (unsorted database) can be solved worst-case in O(m), so it looks linear (and apparently with some pre-processing a QC could do that in O(m^0.5), or if the keys have some structure we can sometimes pre-sort (cost O(m log m)) and do a binary search in O(log m), etc.).

    On the other hand an NP-complete problem on an input set of size n can be solved worst-case in O(2^n), it looks exponential.

    (Both are in NP because a solution can always be checked quickly.)

    1) Isn’t this difference simply a matter of point of view about what constitutes the input size?
    NP-complete problems have a particular internal structure, i.e. the set of all keys isn’t random but can be generated from a compact representation (like a graph of size n, or a set of n integers, etc).
    But if we consider the set of all keys, m = 2^n, then the two problems look equivalent (the costs become equal).

    2) If we were to find a way to solve NP-complete problems efficiently by taking advantage of the internal structure (“white box”), I just don’t see how that would imply that we can then solve all search problems efficiently (i.e. faster than O(m)), unless there’s a generic way to reduce a random set of keys of size m to a more compact NP-complete type representation of size log(m).

    But if we were to come up with a magical “black-box” way to solve NP-complete problems that doesn’t take advantage of the internal structure, i.e. somehow check all the possible 2^n = m solutions in parallel at no additional space cost and pick the one solution (even QCs can’t do that apparently), then we could solve efficiently any search problem as well?

    It just seems that there could be different ways to collapse the complexity hierarchy based on how NP-complete would be solved efficiently. But of course if P!=NP then none of this matters.
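    (A minimal, purely illustrative sketch of the contrast drawn in point 1 above: brute-force search over the 2^n assignments of a compactly described SAT instance, versus a linear scan over an explicitly listed set of m keys.)

    ```python
    from itertools import product

    def brute_force_sat(n_vars, clauses):
        """Brute force over all 2^n assignments of an n-variable CNF.
        The input is only the compact description (the clause list, of
        size poly(n)); the 2^n candidate solutions are never written
        down explicitly.  Clauses are lists of signed 1-based literals."""
        for bits in product([False, True], repeat=n_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return bits  # satisfying assignment found
        return None

    def unstructured_search(keys, target):
        """Linear scan over an explicitly given list of m keys: O(m) time,
        where m itself is the input size."""
        for i, key in enumerate(keys):
            if key == target:
                return i
        return None

    # (x1 or not x2) and (x2 or x3): satisfiable, e.g. with x1=x2=False, x3=True.
    print(brute_force_sat(3, [[1, -2], [2, 3]]))
    ```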

  5. Sam Hopkins Says:

    It’s interesting to note another asymmetry between P vs. NP and the Riemann Hypothesis/Goldbach’s Conjecture/Fermat’s Last Theorem/etc (an asymmetry which may get at the “self-referential” nature of the problem, as evinced for example in other results like natural proofs): when computer scientists cannot come up with a polynomial-time solution to generic unstructured search problems after a long time of looking for one, they decide that this means there is no such solution; but when mathematicians cannot come up with a solution to the Riemann Hypothesis after many years, they *don’t* then conclude that this makes RH likely to be independent of the axioms of mathematics. Indeed, sometimes people do put forward such an assertion about famous unsolved problems (see e.g. http://mathoverflow.net/questions/27755/knuths-intuition-that-goldbach-might-be-unprovable/) but that is usually dismissed as fantasy talk.

    Maybe I’ll look more into this, but I think there is also the phenomenon of many simple analytic estimates “nearly” proving the RH.

  6. dpb Says:

    Laugh all you like, but I have managed to prove that frogs and giraffes are identical.

    Silly computer scientist!

  7. fred Says:

    Seems to me that the area between P and NP-complete is particularly interesting – i.e. factoring and graph isomorphism problems.
    Has there been any connection/reduction between the two problems? Is there any hint that it’s possible to find an efficient QC algo for graph iso? (like Shor did for factoring)
    It seems that if these problems are neither P nor NP-complete then they could be used as a stick to “poke” the P/NP electric fence 🙂

  8. Klum Says:

    I’m a complexity theorist, and I’m intellectually agnostic as to whether or not P equals NP. The fundamental hardness problems in complexity are, in the end, fortunately, clear mathematical questions. I don’t see the reason to make them something else from what they are: fundamental mathematical statements about computation; where computation is by itself a mathematical-logical concept.

    I conjecture, as a mathematician that P is unequal to NP for most purposes, e.g., when picking a direction of study, or a problem to tackle. But by no means am I going to claim it strongly, or assess how reasonable it is for *non-mathematical* reasons, or for the sake of arguing about aspects which are non-mathematical (e.g., the physical nature, etc.). I don’t see the justification of doing otherwise.

    Indeed, what is the motivation in arguing with Lubos’ claims? What would we gain be “winning” this argument? Obviously, as long as there’s no proof of P vs. NP in both ways, his views are legitimate and certainly not utterly absurd.

  9. Pat Says:

    It’s good that you managed to avoid “literary eloquence and tongue-in-cheek humor” this time

  10. Sev Says:

    I’m simply going to rephrase your main points in a trivial and completely equivalent physics analogy, in case it’s helpful for non-CS people:

    The goal of physics is to model the natural world by mathematics. However, as far as I know, it’s often impossible to rigorously “prove” a model is actually correct. For example, we still don’t know whether quantum theory is 100% correct. But as experiments repeatedly confirm, the world sure seems to behave like the laws of quantum theory predict. Thus we assign greater weight to the probability that quantum theory is correct.

    Let’s think of P vs NP in the same way: It seems very difficult to resolve this question rigorously. In analogy to physics, each attempt at bringing these two classes together can be thought of as an “experiment”, and each experiment so far has failed. One can argue that perhaps we weren’t clever enough in our proof attempt, but the same also holds true for physics, as one can argue that perhaps the parameters in a physics experiment weren’t quite precise enough or set correctly.

    Thus, if you believe physics, then there’s no reason to doubt our approach in CS, as each “experiment” we’ve run to date suggests the conclusion that P is not equal to NP.

  11. Joshua Zelinsky Says:

    I’m intrigued and puzzled by the extended comment by Lubos. One cannot in fact phrase the Riemann hypothesis as a question about the existence of a “single discrete structure” as one can see for example in this paper by Lagarias: http://www.math.lsa.umich.edu/~lagarias/doc/elementaryrh.pdf . I’m also confused as to why he would not consider specific algorithms that solve some NP-complete problem but don’t run in polynomial time to be the same sort of instance as finding zeros of the zeta function that lie on the correct line.

    It may also be worth noting that if one applies the idea that one cannot say anything about such discrete problems, then one should conclude that one is just as unsure that P != PSPACE as that P != NP, but one of these implies the other, so assigning them the same confidence seems absurd.

  12. Joshua Zelinsky Says:

    One other issue directly related to where Lubos says:

    “So I do think that the frantic belief that P cannot be NP is really irrational, boiling down to totally irrational beliefs such as your “great composers must have an unexplainable miracle in their brains that makes them different species than any mortals”, and that are partly motivated as an excuse for the fact that you (the algorithmic community) haven’t been able to find the fast enough algorithm for SAT even though it may very well exist. So it’s safer to invent an ideology that “it cannot be found at all” although you can’t prove it, either. But those who look at it rationally see that the odds are about 50-50.”

    This runs into the serious empirical problem that the standard conjectures are that ZPP=BPP=P and yet there are problems in BPP and ZPP which have withstood serious attempts by many people to develop P algorithms for them. If Lubos’s claim about ideology were correct then one would expect the standard conjectures to be that P != ZPP and ZPP !=BPP.

  13. Sid Says:

    @Sam Hopkins: (1) I don’t think people seriously advocate that P/NP is independent. (2) The other problems you mention have had significantly more partial progress made towards their solution compared to P/NP. Even getting the most obvious lower bounds in theory has turned out to be an insanely hard exercise.

  14. Charles Forgy Says:

    Thank you for this explanation. Being involved in an experimental science (albeit as a theorist), this really made things clear.

  15. Mike Says:

    “I conjecture, as a mathematician that P is unequal to NP for most purposes, e.g., when picking a direction of study, or a problem to tackle.”

    Then you already disagree with Lubos, whether you want to or not, because FAPP you have assigned a materially higher probability to the view that P is not equal to NP. In “real life” you recognize the appreciable gap that exists between views being “legitimate” on the one hand, and not “utterly absurd” on the other. 😉

  16. ungrateful_person Says:

    Great Post Scott! Very well written. Loved the frog analogy 🙂

  17. Sam Hopkins Says:

    @Sid: I think you’re confused about the contrast I was drawing. The point is that “many people have looked for a solution and all have come up empty” is taken as evidence that no solution exists when the problem we are looking at is “find a polynomial time solution to generic unstructured search problems”; but on the other hand, no one really considers it evidence when the problem we are looking at is “prove the RH.”

  18. Scott Says:

    Sam Hopkins #5 and #17: I would say that proofs are different for the following reason. A proof is not only a mathematical object; it’s also an explanation. If you say (e.g.) that you think RH is true but unprovable, that’s tantamount to saying that you believe there’s no explanation for the lining-up of all those trillions of zeta function zeroes. Believing that there’s no polynomial-time algorithm for SAT has no similarly “abhorrent” consequence. Thus, as Sid #13 said, I think a better analogy is really to people who advocate that P vs. NP is independent. While that logical possibility can’t be ruled out, I personally regard it as just as “abhorrent” as the possibility that RH is unprovable, and for the same reasons (it would leave a huge part of mathematical reality totally inexplicable).

    Everyone: I’m headed to San Francisco now to give a talk (at Optimizely, a company where some people apparently read this blog), but will answer other comments after I’m done! My host at Optimizely has promised me the use of his laptop to do so. 🙂

  19. Bram Stolk Says:

    A bit rude, Scott.
    Your analogy compared Luboš to an a-hole.
    No need to sink to that, a little empathy would be nice.

  20. Klum Says:

    @Mike, I don’t understand what you were trying to say. I thought Lubos’ claim is that to strongly believe or assume that P is unequal to NP, and to seriously justify these claims in a “scientific” manner, is dogmatic or an expression of dogmatism. I didn’t know he claims that we should not be biased at all in favor of P being unequal to NP. Or maybe I misunderstood his claims?

    In any case, I don’t see the point of the debate between Lubos and Scott. The problem is a mathematical one. To argue about “circumstantial evidences” of mathematical statements seems to me absurd: what kind of content, benefit, or consequence do you want to extract out of such an argument?

  21. Mike Says:

    Klum@20,

    Yes, Lubos says that computer science is so unlike, for example, physics, that no amount of experience can give anyone any idea whether or not P equals NP. You, on the other hand, while claiming intellectual agnosticism on the question, live your life as if it’s pretty clear that P doesn’t equal NP (picking a direction of study, or a problem to tackle, more?). That was my only real point. 🙂 Regarding the point of the debate between Lubos and Scott — it’s not that Scott and Lubos are the actors — it’s whether or not P equals NP. I don’t think there is ever any debate about anything once the question has been settled.

  22. Sniffnoy Says:

      It’s interesting to note another asymmetry between P vs. NP and the Riemann Hypothesis/Goldbach’s Conjecture/Fermat’s Last Theorem/etc (an asymmetry which may get at the “self-referential” nature of the problem, as evinced for example in other results like natural proofs): when computer scientists cannot come up with a polynomial-time solution to generic unstructured search problems after a long time of looking for one, they decide that this means there is no such solution; but when mathematicians cannot come up with a solution to the Riemann Hypothesis after many years, they *don’t* then conclude that this makes RH likely to be independent of the axioms of mathematics. Indeed, sometimes people do put forward such an assertion about famous unsolved problems (see e.g. http://mathoverflow.net/questions/27755/knuths-intuition-that-goldbach-might-be-unprovable/) but that is usually dismissed as fantasy talk.

    This analogy seems utterly bizarre to me. Why are you comparing the search for an ordinary mathematical object (a polynomial-time algorithm for 3-SAT) with the search for a proof or disproof of the Riemann hypothesis? The natural thing would be to compare it to the search for a nontrivial zero of the zeta function off the critical line! And in both cases, people have spent a long time searching, not found anything, and concluded that one probably doesn’t exist. (Though not just because of the search, of course; see e.g. all of Scott’s posts ever about why P is probably not NP, and note Josh’s point above about how nobody’s found a PRNG good enough to derandomize BPP despite a bunch of searching, but in that case it’s generally believed that one does exist.)

    Or, you could compare the search for a proof or disproof of the Riemann hypothesis to the search for a proof or disproof of P!=NP; and then for both problems people are generally of the opinion that the object being searched for (a proof or disproof) does exist despite one not having been found. But I don’t understand why you would mix and match types like that! There’s only an asymmetry because you introduced one in how you formulated it. (Now, as Josh points out, if you compared the Riemann hypothesis to P vs BPP instead, then there might be the sort of asymmetry you say.)

    (Btw, Josh, in #11, I assume you mean “One can in fact…”?)

  23. Lubos Motl Says:

    You’re completely irrational, Scott. My response is here:

    http://motls.blogspot.com/2014/03/pnp-is-conceivable-there-is-no-partial.html?m=1

  24. Sniffnoy Says:

    Klum, Scott’s argument for P vs NP was not in any way physical; it relied only on mathematics and the history of mathematics. And since the history of mathematics is, in fact, largely constrained by mathematics, you can get some information about mathematics out of it. I assume you don’t mean to entirely reject the idea that for an open mathematical problem one can sensibly talk about what the right answer probably is based on evidence weaker than proofs.

  25. Joshua Zelinsky Says:

    Sniffnoy, yes. No idea why I wrote “cannot” there.

  26. Rahul Says:

    The two surveys by Gasarch (one ~2001 & the other ~2012) seem to put the fraction of serious academics who believe P=NP at around 9-12%.

    While indeed a minority, it’s not a negligible minority. Note these aren’t merely agnostics but people who actually think P=NP will turn out to be true.

    Of course, one might question the credentials of Gasarch’s respondents, or say they are being contrarian for contrariness’ sake etc. Or some of them may have been joking or trolling. Hard for me to say.

  27. Jay Says:

    Bram #19,

    You like empathy and your main concern is poor Motl, victim of an analogy. Seriously?

    http://prime-spot.de/Bored/bolubos_short.doc
    http://rationalwiki.org/wiki/Lubos_Motl

  28. Sam Hopkins Says:

    Presumably proofs and programs are not such dissimilar objects (see the Curry-Howard correspondence). But I admit that “polynomial-time programs” are a lot different from programs generally.

    At any rate, my point more broadly was that in mathematics there is a common phenomenon of “pessimism” where we move from “naive attempt to resolve problem” to “proof that such an attempt is bound to fail”; but it is totally unclear to me when this move is appropriate.

    Well maybe it is a common phenomenon now, but I guess it only traces back a couple centuries to Galois (or Abel-Ruffini, if you like). The transition from searching for a generic formula that finds the roots of polynomial equations to showing that such a formula is impossible was a big conceptual leap forward, I think.

    We can see a similar but more developed strain in the Continuum Hypothesis:
    1) hope (by Cantor?) that CH is resolvable from normal set-theoretic axioms;
    2) Gödel and Cohen show that 1) is not the case;
    3) hope (by Gödel) that CH is resolvable by Large Cardinal axioms;
    4) strong evidence that 3) is impossible.
    It’s not even clear how to formalize 4, as the levels of “pessimistic” results get deeper and deeper.

    Or we can look at P vs. NP itself:
    1) hope (maybe never seriously advanced) that there is a polynomial time algorithm for generic search problems;
    2) hope that we can show 1) is not true;
    3) results showing what a proof of 2) cannot look like.

    Under what conditions is mathematical pessimism appropriate? Certainly whenever you can prove the optimist wrong; but it must be more often than that or else no one would ever try!

  29. Greg Kuperberg Says:

    “There is no partial evidence in purely discrete mathematics” is a completely naive claim that flies in the face of randomized algorithms for deterministic problems. If partial evidence didn’t exist in discrete mathematics, then there would be no such thing as a probable prime. Of course there are probable primes: For example, 8^173687-7^173687 is one that was identified nearly a decade ago. I invite anyone here to (a) actually prove that it’s prime; or (b) find any sane evidence that it could be composite, given that it passes the Miller-Rabin test repeatedly.

    http://en.wikipedia.org/wiki/Probable_prime
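    (For readers who haven’t seen the test being referred to: a minimal textbook Miller-Rabin sketch, not tuned for numbers anywhere near this size, showing what “passes the Miller-Rabin test repeatedly” means operationally.)

    ```python
    import random

    def miller_rabin(n, rounds=40):
        """Miller-Rabin probable-prime test (standard textbook version).
        Returns False if n is definitely composite; returns True if n
        passes `rounds` random rounds, i.e. n is a strong probable prime,
        with error probability at most 4**(-rounds) for composite n."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 = d * 2^r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness that n is composite
        return True

    # miller_rabin(8**173687 - 7**173687) is the kind of check such
    # probable primes pass (though at this size each round is slow).
    ```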

  30. Sniffnoy Says:

    Yeah, I’m kind of perversely curious as to how far Motl intends to push this claim in terms of what it can be applied to. Apparently it applies to P vs NP, but not the Riemann Hypothesis or the Goldbach Conjecture. How about the conjecture that there are only finitely many Fermat primes?

  31. Łukasz Grabowski Says:

    I think you make a very good case for P \neq NP. On the other hand, I remember reading somewhere on this blog that you think exponential lower bounds on circuit size for problems in NP are also a reasonable conjecture. Do you think the case for it is comparably strong? I personally would be very surprised if such a bound was true.

  32. Klum Says:

    @Mike, obviously, Lubos is correct when he says that computer science, or for that matter complexity theory (which I regard as a branch of mathematics) is completely different from physics. Mathematics and physics differ.

    I don’t see the merit of a “debate” about mathematical statements, except for a debate intrinsic to mathematics, e.g., a discussion on what should be the goal of current mathematicians in face of such and such barriers to prove this and that.

    But it seems to me that here the debate is different. It’s one side claiming not only that a mathematical statement is probably true, but also it extracts from it consequences that go beyond mathematics; namely, it claims that this mathematical conjecture should be a “scientific” conjecture. Am I correct?

    The (relevant) debate should be about how to prove or disprove P=NP, not about what kind of evidence we can collect from previous research.

    Further, this evidence that P is unequal to NP does not have any sound theoretical foundation. In other words, I ask you: what is the theory that enables us to infer that a mathematical statement is true, from a history of previous attempts, and previous collections of mathematical results (that do not logically imply the statement itself)?

    I feel this is a real concern here.

  33. Lubos Motl Says:

    Joshua Zelinsky #12, I haven’t claimed that *every* belief in computer science is determined by the prejudiced folks inventing excuses for their failures, like our host and the P!=NP excuse.

    As I emphasize, every question in discrete maths is different and uncorrelated to others – unless provable to be equivalent (or at least provable to be stronger/weaker) – so there is no relationship between P=NP and (ZPP=P or BPP=P).

    It may very well be true that the subset of the CS community that knows enough about BPP and ZPP is smarter and less prejudiced, which is why they are more open to ZPP=BPP=P. Or they may be lazier in a different direction – instead of inventing excuses for a missing fast algorithm, they are inventing excuses for their inability to find examples that belong to some sets P,ZPP,BPP but not all.

    One may also say that the belief that ZPP=BPP=P shows hypocrisy or inconsistency in combination with the belief in P!=NP – people choose different outcomes even though the evidence in one way or another is non-existent in all these cases.

  34. Greg Kuperberg Says:

    (I guess I meant sane rationale rather than sane evidence, since the strange thesis is that there is no such thing as partial evidence either way.)

  35. Lubos Motl Says:

    Dear Greg #29,

    I have explicitly discussed tons of similar examples where partial evidence exists because the claim is “composite” and the different components (“predictions”) may be verified separately – just like the verification of predictions in physics.

    But P=NP isn’t one such example. There doesn’t exist *any* check that P!=NP would pass while P=NP wouldn’t be able to pass equally well and naturally. P!=NP isn’t a composite statement making numerous predictions, so it cannot be verified “by parts”. It is an irreducible claim about the existence of one rather particular algorithm for one rather particular problem. This algorithm either exists or not so the prior probabilities have to be comparable.

    The posterior probabilities of P=NP and P!=NP end up being the same – also comparable – because in the Bayesian reasoning, one doesn’t find any contradictions. All the “empirical data” is compatible both with P=NP and P!=NP. All Scott’s fairy-tales about frogs and other animals are nothing else than circular reasoning. He assumes P!=NP and then he shows that given this assumption and a model built upon it, P=NP looks silly. But one may equally well assume that P=NP and “show” that P!=NP is then silly. In both cases, it would be circular reasoning. Just to be sure, I don’t promote either. I say that according to a rational – Bayesian – evaluation of all the available evidence, P=NP and P!=NP must preserve their comparable probabilities i.e. around 50-50.

    Cheers
    LM

  36. Jaan Says:

    > How do the algorithms and the reductions manage to coordinate with each other, every single time, to avoid spilling the subexponential secret?

    This could be explained by all of the reductions that complexity theorists have considered being similar in some way – leaving the possibility of some intricate way of embedding structure that doesn’t rely on simple gadgets. Possibly because the problems we consider natural are almost always expressible in some framework like CSP?

    In search for an argument against my own point – what’s the most complex reduction to show hardness (be it for NP, #P, or whatever) that people are aware of?

  37. Curtis Bright Says:

    FYI, Super Mario Bros. isn’t known to be in NP. Back when you blogged about that I emailed the authors about a flaw in their argument and they’ve since retracted the claim.

  38. Scott Says:

    Curtis #37: I see, so the proof that it’s NP-hard is fine, just not the “in NP” part? What upper bound do we have, then? PSPACE?

  39. Scott Says:

    Rahul #26: I would guess that almost all complexity theorists who say they believe P=NP, say so in order to be contrary. (There’s no shortage of people in this field who enjoy being contrary for its own sake.) However, Gasarch also asked non-CS mathematicians, and it’s entirely possible that some of them seriously do believe P=NP, not being familiar with the kind of evidence that I outlined in this post.

    (Note that, if someone actually believes P=NP, that provides no support to the Lubosian position! According to Lubos, P vs. NP is just a random, arbitrary question, so there shouldn’t be any rational reason for favoring either answer to it.)

  40. Joshua Zelinsky Says:

    I’m puzzled even more by Lubos’s response, where he thinks that the analytic character of a problem matters so much, and so he assigns a higher probability to RH than to the Goldbach conjecture, due to the latter’s not having a similarly analytic formulation. Speaking as a number theorist, while both statements are pretty likely, Goldbach is far more likely than RH. The partial results are stronger and the heuristic support is much stronger. This seems to be vintage Lubos.

    It also seems that Lubos doesn’t get the point of the two sets touching. He says in his new blog entry:

    “The same comments apply to everything that Scott presents as “partial evidence”. He describes a “close call”, some two sets that seem to be nicely touching. But the detail he is unwilling or incapable of seeing is that this touching of the two sets is totally compatible with the P=NP diagram of the relationship between the sets, too! The only difference is that P=NP actually simplifies the shape of the boundaries between the sets because it identifies P with NP and NP-complete. It hasn’t been proven that the sets are “simplified” in this way but it hasn’t been disproven, either, and this assumption contains no “dinosaur bones”.”

    But this misses the point that if P=NP one wouldn’t expect to see anything looking like a boundary here at all.

    He does make a marginally interesting argument when he says: “If you only care about the polynomial character of the solving algorithms, the traveling salesman problem is the same thing as SAT. They’re expressed in two languages but the beef is the same because the equivalence/conversion has been found. So it’s really demagogy to talk about tens of thousands of problems.”

    There’s a slight Princess Bride issue here about the word “demagogy”, but there’s what may be a kernel of a valid point: how one decides how to count separate pieces of evidence can be complicated. For example, the Riemann zeta function satisfies certain basic symmetries about where its zeros must lie, so how do you count those zeros? Are they part of the evidence or not? Deeper results show that at least one third of the zeros lie on the line, so maybe when we go up to 10^k we should only really count that as (2/3)10^k trials, but then one is in the position where proving that a higher density lies on the line leads to fewer experimental results. I am curious what Scott thinks about this part of Lubos’s comment.

    (I’m amused by Lubos saying outright in the post that he didn’t finish reading Scott’s post, and yet he thinks that Scott is being a “bigot” and a “bully”.)

  41. Douglas Knight Says:

    Scott, are there any examples of “close calls” for separations other than P v NP?

    One class of close calls is PCP-type theorems, a sharp transition from P to NP-complete. Are there analogues of this for other separations? You suggested that there should be a form of PCP for quantum computing. Is there a version relative to an oracle that would apply to the whole polynomial hierarchy? How about P vs PSPACE? Or PSPACE vs EXP? What could that look like?

  42. Scott Says:

    Bram #19:

      A bit rude, Scott.
      Your analogy compared Luboš to an a-hole.
      No need to sink to that, a little empathy would be nice.

    HAHAHAHA! Have you ever read Lubos’s blog? Do you have any idea who he is? He’s notorious for describing the scientists he disagrees with as “subhuman garbage” (or if women, “dumb bitches”), and explicitly calling for their deaths. He’s one of the most empathy-challenged people on the planet. If you think that’s an exaggeration, please read this collection of Lubos quotes, which Jay #27 already linked to.

    I’m curious: when you called me out for being “rude” to poor Lubos, were you actually unaware of the context? Or did you simply decide that it’s my job to turn the other cheek, even as Lubos tells Richard Lipton to “buy a firearm that he may need if he ever runs into Scott Aaronson”?

  43. Serge Says:

    > Of course, to prove P=NP, it would suffice to find a single link […] In half a century, this hasn’t happened.

    Scott, this condition is certainly necessary and sufficient for P=NP being *proved*, but it’s only sufficient for P=NP being *true*. In no way is it necessary that someone be able to find such a link, for it to exist mathematically!

    Think about this: what if P=NP, but there’s some unknown principle – whether from math or physics – which has been – and will always be – preventing all intelligent programs – in brains and computers – from finding a polynomial algorithm for any NP-complete problem? Just because nobody can find an algorithm, it doesn’t mean this algorithm can’t exist mathematically! Even more so, when it so happens that the reasons for not finding it are perfectly scientific – or at least, perfectly natural.

    If this was actually the case – I mean, if such a principle did exist – then it’s no wonder we can’t make any practical distinction between P=NP and its negation!

  44. roland Says:

    “How do the algorithms and the reductions manage to coordinate with each other, every single time, to avoid spilling the subexponential secret?”

    I think this line of argument is not very good, because the math of all those results does not seem to be deep enough to tackle P vs NP, otherwise we would have a proof by now.

    So a P vs NP proof would most likely involve another set of ideas, that can spill a secret that those “algorithms and reductions” just can’t.

  45. Scott Says:

    Jaan #36 and Serge #43: Yes, you’re right. It’s logically possible that P=NP, and the reason no one has noticed yet is that the reduction connecting the NP-complete ones to the P ones is of a completely different kind than any of the reductions anyone has ever discovered. (E.g., maybe it’s some n^10000 algorithm.) This is the possibility that Lubos keeps ranting about. It would be analogous to the green and yellow frogs mating, but only when some bizarre synthetic chemical was poured into the pond.

    However, I want you to notice something extremely important. Namely: even if this turned out to be true, there would still be a huge difference in kind between the problems that were in P for “normal” reasons, and the ones that were in P “but only for the crazy reason”! In other words, even though we’d formally have P=NP, we’d still have to distinguish between two large, disjoint, and interestingly-different parts of P: the part that’s “in P for normal reasons and only NP-complete for the crazy reason,” and the part that’s “NP-complete for normal reasons and only in P for the crazy reason.” (The stragglers, like graph isomorphism, could be both in P for the crazy reason and NP-complete for the crazy reason.)

    Since it would be inelegant and unnatural for the class P to be “severed into two” in this way, I’d say the much likelier possibility is simply that P≠NP.

  46. Curtis Bright Says:

    Yeah, the gadgets seem to have been fixed, so SMB is NP-hard. It’s in PSPACE since you could nondeterministically play the game, only needing to store the gamestate. It may well be in NP too, but that would require more detailed knowledge of the gamestate evolution function.

  47. Serge Says:

    @Scott #45

    OK, but then P≠NP becomes more like an axiom you’ve chosen to use in order to get a simpler theory, while implicitly acknowledging that it’s quite possibly not decidable. In any case, that’s my personal view.

  48. Scott Says:

    Klum #8 and #20:

      To argue about “circumstantial evidences” of mathematical statements seems to me absurd: what kind of content, benefit, or consequence do you want to extract out of such an argument?

    I thought I was pretty clear in the original post, but since apparently I wasn’t, let me try again. You yourself say that you conjecture P≠NP “for most purposes,” e.g. when choosing which problems to tackle. Well, according to Lubos, you shouldn’t. In fact, according to him, the very fact that you’re a complexity theorist at all, indicates that you lack the intellectual capacity to do “real” science (like string theory), or at least “serious” parts of math. Indeed, if you agree with him, why did you become a complexity theorist? According to him, complexity theory (and more generally, combinatorics) is just a mass of disconnected, random facts with no relation to each other. One shouldn’t conjecture that P≠NP “for most purposes,” because one shouldn’t conjecture anything at all about such questions: some discrete questions have one answer, some have another, but there isn’t any rhyme or reason or predictability to it whatsoever.

    But remember that it’s only for discrete math that Lubos says this! He admits the possibility of partial evidence about the Riemann Hypothesis, or even Goldbach’s conjecture. It’s only for P vs. NP (or other questions he dislikes / doesn’t understand) that he completely and totally rejects the possibility of making plausible conjectures based on induction. If he’s right, then no one with sufficient intellectual capacity should ever want to be a complexity theorist (and I suspect he’d happily agree with that implication). So, that’s why it’s relevant that he’s dead wrong.

  49. Greg Kuperberg Says:

    Lubos – “I have explicitly discussed tons of similar examples where partial evidence exists because the claim is “composite” and the different components (“predictions”) may be verified separately – just like the verification of predictions in physics.”

    But the assertion that 8^173687-7^173687 is prime is not a “composite” claim. It is an indivisible claim: Either it is prime or it is not; it cannot be statistically sort-of true. Nevertheless, there are good reasons to call 8^173687-7^173687 a probable prime. Again, I’m not talking about any statistical distribution of primes; I’m talking about that one number.

    Besides, the title of your blog posting has the sentence: “There is no partial evidence in purely discrete mathematics”. I’m sure that you’ve explicitly discussed counterexamples to your own statement. But explicitly discussing an example doesn’t make it disappear.

  50. Scott Says:

    Serge #47: Except that it’s not merely a choice to get a simpler theory; it’s also a prediction that the simpler theory is more likely to be the true theory.

  51. Scott Says:

    Lukasz #31:

      I think you make a very good case for P \neq NP. On the other hand, I remember reading somewhere on this blog that you think exponential lower bounds for circuit size for problems in NP is also a reasonable conjecture. Do you think the case for it is comparably strong?

    Yes, I do think that NP requires exp(n)-sized circuits (maybe with, say, 98% confidence?). I’m curious: if you agree that P≠NP (and with the Exponential Time Hypothesis?), what is it that makes you think that nonuniformity should make such a big difference? The way I think about it, a nonuniform algorithm is really just the “limit” of an infinite sequence of better and better uniform ones, so I think of allowing nonuniformity as just “taking the lim inf.” And a lim inf shouldn’t make such a big difference… 🙂
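
    For readers who haven’t seen the nonuniform model spelled out, here is a sketch of the standard definitions being invoked (textbook material, nothing specific to this thread):

      \[
      \mathsf{SIZE}(s(n)) \;=\; \{\, L \subseteq \{0,1\}^* : \exists\ \text{circuits } (C_n)_{n\ge 1},\ |C_n|\le s(n),\ C_n \text{ decides } L\cap\{0,1\}^n \,\},
      \qquad
      \mathsf{P/poly} \;=\; \bigcup_{k\ge 1} \mathsf{SIZE}(n^k+k).
      \]

    Every polynomial-time algorithm yields such a circuit family, so P ⊆ P/poly; that containment is the precise sense in which a nonuniform family only “takes a limit” of uniform algorithms. And “NP requires exp(n)-sized circuits” is usually read as something like: some language in NP lies outside SIZE(2^{δn}) for a fixed δ > 0.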

  52. Scott Says:

    wolfgang #2:

      probably a stupid question, but do they form their own complexity class in the large zoo or is it just a matter of time until they *have to* fall either into P or NP ?

    Ladner’s Theorem tells us that if P≠NP, then there must also be NP problems in the “intermediate zone,” neither in P nor NP-complete. On the other hand, the intermediate problems constructed by Ladner’s Theorem are extremely artificial.

    For “natural” problems, like factoring and graph isomorphism, the situation is this: we have excellent theoretical evidence that these problems can’t be NP-complete (namely, if they are, then the polynomial hierarchy collapses). On the other hand, these problems could be in P without any disastrous theoretical consequences (“merely” disastrous practical consequences, in the case of factoring 🙂 ). So if you think they’re hard, then presumably they’re NP-intermediate, but we can’t be nearly as confident about their hardness as we are for the NP-complete problems. (In particular, the “invisible electric fence” arguments that I gave in this post generally don’t apply to factoring or graph isomorphism.)

  53. Scott Says:

    Philip White #1:

      I’m curious…is there any scientific or philosophical a posteriori evidence in nature (e.g., in evolutionary biology) that you believe supports P != NP?

    Well, one could make an argument that if P=NP, then it’s somewhat surprising that evolution hasn’t hit on this wonderful general-purpose way to avoid brute-force search. However, someone else could reply that there are other conceivably-useful things that evolution never found: for example, the wheel (well, biology has it at molecular scales, but not above that). That’s why I’d say that the strongest argument relies on the actual discoveries of the tens of thousands of people thinking explicitly about algorithms, operations research, etc. over the past half-century.

  54. peppermint Says:

    Your actual arguments for mathematical plausibility are good. Your story about frogs is trippy and pointless though.

  55. Fred Says:

    Scott #53
    But no one has found an efficient non-QC method for factoring or graph iso either, which are, according to you, easier than NPC.

  56. anon Says:

    Evolution never discovered the wheel at MACROSCOPIC scales because limbs are better at getting over UNEVEN territory than wheels – which are ONLY good on MANMADE roads and railways.

    Otherwise your NP != P argument is just great, and any individual who argues against it is really putting him/herself in a very dodgy category, not much distinguishable from the usual anti-modern mystics and the like.

  57. Or Meir Says:

    I am sure you have seen this comic a dozen times, but it’s still relevant:
    https://xkcd.com/386/

  58. Scott Says:

    peppermint #54: What can I say? Other people seem to have liked the trippy frogs. More important is that you were persuaded by the arguments, though. 🙂

  59. Ignacio Mosqueira Says:

    Scott

    I see you quoted something I wrote.

    Look there is a wide gulf between saying expertise is useless and expertise is not enough. I simply cannot imagine that anything I wrote is tantamount to saying there is a 50%/50% chance the world will end because of the LHC.

    So your approach is polemical. I can see how that will make your blog more entertaining and that’s probably why I am writing a message in it. That’s fine.

    You write of statistics of problems that have been shown to be separated by an electric fence from problems on the other side. But statistics of the sort you describe are meaningful when events are independent of each other, and it is not clear to me that your network of NP-complete problems on one coast and P problems on the other constitutes such a sample. So I am not sure how to interpret your statistics.

    On a somewhat separate issue I noticed that there is apparently a way around the Harlow-Hayden complexity bounds for black-holes. I have not really had time to look at it in any level of detail or to decide whether it is reasonable for me to tilt in one direction or the other. However, the point is that complexity bounds are shifty things as far as I can tell. There is no point in being dogmatic about them.

  60. Jaan Says:

    Scott: Wouldn’t the crazy vs normal split be similar to asking which problems are NP-complete under Karp reductions versus under Cook reductions though? (I’m still a student, so forgive me if that’s not a decent comparison!)

    More generally I’m just concerned that we might be seeing a(n almost) neat split repeatedly because we’re somehow doing the same sort of thing in each case – and that there’s some important principle about encoding structure that would allow the gap to be breached (and simultaneously explain the severance, ideally!).

    It just seems like hubris to think that a mere few decades of human thought is enough to consider all possible types of reduction – why should it be so easy? Why should there even be an obvious progression of ideas for us to follow?

  61. Ignacio Mosqueira Says:

    I see that Lubos has a similar thought to the one I expressed above:

    If you only care about the polynomial character of the solving algorithms, the traveling salesman problem is the same thing as SAT. They’re expressed in two languages but the beef is the same because the equivalence/conversion has been found. So it’s really demagogy to talk about tens of thousands of problems

    Do you have a reply for that specific issue, or do you insist that your electric fence is meaningful, Scott?

  62. Jaan Says:

    Also: how confident are you that the polynomial hierarchy does not collapse to some level, in comparison to your confidence that P!=NP? To what extent do the same arguments apply?

  63. Michael Says:

    I’m puzzled by LM’s “Bayesian” argument. It seems to be exactly what SA was addressing in the post. Say that P!=NP. Then every interesting problem studied will not find equivalence, with probability one. Say P=NP. Then there’s some small chance that even dimwitted CS types might find any one of those problems to be both NP-complete and in P. Even without making silly assumptions about strict independence, after a huge collection of these problems are investigated and each gives a result with likelihood 1 if P!=NP and likelihood 1-eps if P=NP, the posterior likelihood that P!=NP is way up. Of course that’s Scott’s point, tho without frogs.

  64. Rahul Says:

    What are examples of problems that an overwhelming consensus of experts were confident of being resolved one way but were actually resolved the other?

    Especially in Math / Complexity Theory are there any such good examples of theorems etc.?

  65. John Archer Says:

    Scott,

    Nothing really important, but could you open up your margin a bit more, please?

    I just found a neat proof that P = NP and I’m having a little difficulty fitting it in here. I want to get it down before the moment passes and it’s lost to posterity.

    Thanks.

    John

    P.S. I’ll check out the porn on the internet while I’m waiting. This analytic stuff sounds interesting but I can’t find any pictures. Can you recommend any good sites?

  66. Anonymous Says:

    Scott, this isn’t directly related to your original post, but I’m wondering if you could clarify something you wrote in comment #18: that if something is unprovable, it means it has no explanation. Isn’t it conceivable (though perhaps far-fetched) that the statement could be expressed and proven under some other set of axioms?

  67. Fred Says:

    Maybe yellow and green frogs indeed can’t mate, but there is a rare species of pink frogs living in the Amazon that can breed with either.

  68. aram Says:

    I believe that P \neq NP, in part because of your point that it is easier to find algorithms than lower bounds. I also agree with your perspective that beliefs themselves are more about usefulness than truth, and so we should proceed with the belief that permits productive work while still being open-minded.

    But if I had to argue the P=NP case, I would say that the vast number of algorithms out there all use a small number of common techniques. A new technique might at first look like what you call a “crazy reason” for P to equal NP, but once we got used to it, it might be no worse than something like semi-definite programming.

    If I stop playing devil’s advocate, I cannot confidently say that the above paragraph is *wrong*, but I do believe that it is an unproductive line of thought.

  69. Scott Says:

    Anonymous #66:

      Isn’t it conceivable (though perhaps far-fetched) that the statement could be expressed and proven under some other set of axioms?

    Yes, I would allow a proof under any reasonable system of axioms: PA, ZFC, even large cardinal axioms or Grothendieck universes (though in practice, PA or even fragments thereof probably suffice for most natural arithmetical statements). It’s only if there’s no proof under any reasonable axiom system, that I would tend to say there’s no explanation (or at the least, I would want to know what other kind of explanation you had in mind!).

  70. Scott Says:

    Rahul #64:

      What are examples of problems that an overwhelming consensus of experts were confident of being resolved one way but were actually resolved the other?

    The biggest surprises in complexity theory were probably NL=coNL, IP=PSPACE, Barrington’s Theorem, and Shor’s algorithm (and maybe a few others that I’m forgetting?). Interestingly, in not one of these cases do I think any significant number of experts had expressed the opposite opinion beforehand. Instead (unless I’m mistaken), these results came as such a surprise precisely because so few people had even been asking the question.

  71. Scott Says:

    Jaan #62:

      how confident are you that the polynomial hierarchy does not collapse to some level, in comparison to your confidence that P!=NP? To what extent do the same arguments apply?

    Here’s how I like to put it: you agree that P≠NP? Good! As we see from this thread, that’s already a nontrivial hurdle that not everyone gets over. 🙂 Now, just take the intuitions that led you to believe P≠NP, and apply them again at each of the higher levels of the PH! E.g., why should the intuitions suddenly fail if the P and NP machines both have access to an NP oracle? I admit that they might fail in a yet-unimagined way, even under the assumption that P≠NP. So maybe my confidence that PH is infinite is “only” 98%, rather than 99%. But once again, it would be weird and inelegant if it happened.

  72. Scott Says:

    Curtis #46: OK, thanks very much for clarifying!

  73. Lubos Motl Says:

    Scott #45, you say that for P=NP, the time has to be n^10,000 or something like that. But it’s simply not true. Your beliefs in P!=NP may very well be fundamentally false even in a practical, big way, because the number of operations may very well be 1,000 n^6 which can make the solving algorithms practical for quite high values of n. This assumption is compatible with all the evidence, too.

    Scott #48: P!=NP hasn’t been demonstrated even for “most purposes”. The only variation of P!=NP that is supported by evidence is the modification of P,NP that only talks about algorithms and proofs that are already known as of early March 2014. Among them, none of them solves SAT in a polynomial time. But that’s it. As the previous paragraph stresses, P!=NP may be *completely* wrong, even at a practical level, due to a novel algorithm that just isn’t known at this moment.

    Otherwise, concerning your paranoia, the reason why it’s impossible to extrapolate previous insights to answer new questions is that they’re *qualitatively different*, not that all complexity theorists must be stupid. In physics, qualitatively different questions also prevent one from extrapolation. Einstein was no moron and he had found not one but two theories of relativity, two layers, but this still failed to allow him to say the right things about quantum mechanics because questions in quantum mechanics are qualitatively different from those in classical physics (relativistic or otherwise). In fact, in physics, we deal with classes of questions and theories that are even much more qualitatively different than those in computer science.

    I have explained in detail why the partial evidence that may exist for RH or Goldbach etc. just isn’t possible for P=NP. You may feel discriminated against, but it’s because *reality discriminates*. Things and propositions and disciplines of mathematics and science are just not created equal, equally gradual, equally admitting of partial evidence. This is still the same point I am making. Different questions are different, may have different answers, different difficulty, and different degrees to which the answers may be found gradually.

    Greg #49 wrote: “But the assertion that 8^173687-7^173687 is prime is not a “composite” claim. It is an indivisible claim: Either it is prime or it is not; it cannot be statistically sort-of true.”

    You’re just completely wrong, Greg. Any claim that “X is prime” is composite because it’s a combination of the claims “X is not a multiple of 2”, “X is not a multiple of 3”, and so on, and so on, and these subclaims (i.e. infinitely many “predictions”) may be tested separately – and perhaps some of them may be decided, giving partial evidence (that it could be true). In fact, the number’s being prime may be decomposed in several other known ways (and probably many other unknown ways).

    On the other hand, P!=NP isn’t composite in this sense – it is an existence claim about one (or many, but one may always discuss the fastest one only) particular thing.
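
    To make the “composite claim” decomposition concrete for readers following along, here is a minimal Python sketch (an illustration added for this discussion, not anyone’s actual argument) of partial evidence by divisibility sub-claims, using the number Greg mentioned; how much evidential weight such checks carry is exactly what is in dispute.

      # Sketch: test the sub-claims "q is not a multiple of d" for small d.
      # q is the number from Greg's comment (about 157,000 decimal digits).
      q = 8**173687 - 7**173687

      def small_factor(n, bound=1000):
          """Return a divisor of n below `bound` if one exists, else None."""
          for d in range(2, bound):
              if n % d == 0:
                  return d
          return None

      f = small_factor(q)
      if f is None:
          print("no factor below 1000 -- the kind of 'partial evidence' under debate")
      else:
          print("q is composite; a factor is", f)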

    Scott #50: it’s just not true that P!=NP is the simpler theory. Quite on the contrary, P=NP is the simpler theory and P!=NP is the more contrived one. P=NP says that the classes P,NP,NP-complete are the *same* sets. P=NP is simpler exactly in the same sense as simpler theories that we tend to favor in physics. How can you say that some assumption that there are several intersecting circles with several distinct borderland regions is simpler? You know that what you’re writing is nonsense, don’t you?

    Michael #63: of course that both of us are talking about Bayesian inference. The only difference is that I do it right and Scott does it wrong. Bayesian inference starts with prior probabilities for P=NP and P!=NP which must be chosen comparable because they’re qualitatively different, simple hypotheses. And then the probabilities are adjusted by Bayesian inference. Because both hypotheses equally well pass the “comparison with the data” because no contradiction ever occurs, the posterior probabilities are still equal to the prior probabilities so they’re still “comparable” to 50-50.

    Otherwise we have already discussed the point that saying that the evidence is “multiple” because you can talk about “thousands” of problems, like SAT, traveling salesman, etc., is really wrong because these pieces of evidence are demonstrably *not* independent. If a polynomial algorithm exists for one, it exists for all of them. Ignacio recalled this basic bug in your reasoning in #61, for example.

    The explanation for why different problems in NP (or its complement) seem to behave in the same way when it comes to speed etc. isn’t P!=NP. Instead, it’s the known “conversion” proofs that they’re speedwise equivalent! Therefore, saying that their being on the same side of a border strengthens the belief in P!=NP is manifestly wrong.

  74. Scott Says:

    Lubos, your total failure to engage with the technical part of my post—and in particular, with the ubiquity of the “invisible electric fence” phenomenon in hardness of approximation results—was a pathetic disappointment. Of course I didn’t expect the tiniest iota of human decency or class from you, but I at least expected intelligence and engagement with the arguments, and I didn’t get it.

    You claimed (bizarrely) that unlike Goldbach’s conjecture or “ordinary” mathematical statements, P≠NP doesn’t make any mathematical predictions that were later confirmed. So then I went through three examples of confirmed predictions in great detail. Your response? Ignore the examples, and just shriek again that P≠NP doesn’t make any predictions, but this time louder.

    And I don’t care if you call the NP-complete problems 10,000 problems or one problem—that’s just semantic game-playing. The important point is that, if it is one problem, then it’s a “huge” problem, a problem with thousands of different facets. And crucially, those different facets generate different predictions of the sort discussed in my post: this should be approximable in polynomial time to within a factor of 7/8 but no better, that to within a factor of log log(n), etc. When these different predictions are then confirmed—using different sorts of algorithms for the different facets!—they strengthen our confidence in the original hypothesis. This is exactly the sort of thing you claimed to be impossible, and are now throwing a tantrum when confronted with the reality of. You’re really out of your depth here.
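
    To make the 7/8 example concrete for readers who haven’t seen it: a uniformly random assignment satisfies a clause with three distinct variables with probability 1 − (1/2)^3 = 7/8, and Håstad’s theorem says that approximating MAX-3SAT strictly better than 7/8 in the worst case would already imply P=NP. A minimal Python sketch of the easy, randomized side of that fence (illustrative, not from the post):

      import random

      def random_assignment_ratio(clauses, num_vars, trials=2000):
          """Average fraction of clauses satisfied by uniformly random assignments.
          A clause is a tuple of nonzero ints: +i means x_i, -i means NOT x_i.
          For clauses with three distinct variables the expectation is 7/8."""
          total = 0.0
          for _ in range(trials):
              assign = [random.random() < 0.5 for _ in range(num_vars + 1)]
              satisfied = sum(any((lit > 0) == assign[abs(lit)] for lit in clause)
                              for clause in clauses)
              total += satisfied / len(clauses)
          return total / trials

      # toy instance: (x1 or x2 or not x3) and (not x1 or x3 or x4) and ...
      clauses = [(1, 2, -3), (-1, 3, 4), (2, -4, 5), (-2, -3, -5)]
      print(random_assignment_ratio(clauses, num_vars=5))   # hovers around 0.875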

    Finally, your constantly-repeated claim that P=NP is “simpler” than P≠NP since equalities are simpler than inequalities is so gobsmackingly ignorant that I wasn’t sure how to reply before. But OK (sigh) … you ignore an obvious fact: we already know, because of the hierarchy theorems, that most complexity classes can’t be equal to most other ones! Sure, it would be wonderfully “simple” if all the classes collapsed into one gigantic mega-class—P=NP=PSPACE=EXP=…—but we know for certain that that’s not how it is. Because of this—I repeat—known fact, anyone with a smidgen of intellectual seriousness starts from the realization that inequalities among the basic time and space classes have to be the “norm,” and it’s equalities (when true) that require special explanation.

    I’m going to sleep now. I guess I’ll continue to shovel this pile of crap in the morning.

  75. Lubos Motl Says:

    Scott #71,

    this comment of yours only has 13 lines (not counting the quoted previous comments and the line with the greeting) but the number of independent signs of your complete irrationality probably exceeds 13 – I will show the four main themes. It is sort of remarkable how much delusion you’re able to squeeze into such a small piece of text.

    First, this blog post is supposed to be about the “scientific case for P!=NP”. So it is really off-topic to start with “good boy, obedient puppy, you start with assuming P!=NP, so nice, so much better than the naughty puppies who dare to suggest P=NP is possible”. You just can’t *start* by assuming it if you are claiming to have evidence for the assumption.

    Second, it is just wrong to extrapolate “intuition” from one problem in CS to completely inequivalent problems in CS. The same people rarely succeeded in using their success elsewhere, and when there was a correlation of “making the same discoveries by different people”, it was because of their innate talents, not because of intuition that could be exported all over the space of ideas.

    Third, it is doubly irrational to use or export intuition that has led to P!=NP to other questions because there’s no evidence whatsoever that the initial intuition – and P!=NP itself – is right. So even if intuition could be exported in between different problems, it’s just a bad habit to export intuition that is equally likely to be wrong as it is to be right. The intuition hasn’t led to any victories or achievements yet, just to a cult that prevented many people from studying many questions impartially, so using it as a powerful weapon is really dumb.

    Finally, fourth, you attribute the probabilities 98% and 99% to different statements. This is just silly. There isn’t any way in which the probabilities of non-repeatable, unique propositions or events could be determined this accurately. Both numbers correspond to something like “2.3 sigma” when converted to normal distribution. But whenever there is a 2.3-sigma bump, the precise magnitude of the bump is uncertain by itself, so it could have easily been between 2 and 3 sigma (95% and 99.7%) with the same data, without any unlikely assumption. One just can’t distinguish these levels of (not quite) certainty. Incidentally, this level of “certainty”, like 98%, is extremely poor in hard sciences. Most physicists treat 2.3-sigma bumps or p-value being at 0.02 or 0.01 to be virtually no evidence at all. Note that the newest paper claiming WIMP dark matter at the Galactic center calculates the significance to be 40 sigma – huge certainty – but the conclusion may still be wrong due to really qualitative, discrete, systematic mistakes in the right interpretation. The room for such qualitative, discrete mistakes in computer science is even greater than in the dark matter business.

    So it’s a silly numerological game to make a big deal about the precise quantification of probabilities like 98% or 99%. After all, all the numbers like 98% or 99% are clearly just a temporary state of affairs, because if people make progress in answering these questions, the probabilities will converge much closer to 0% or 100%. At the end, they must be either 0% or 100% exactly, nothing in between. Trying to enforce some particular intermediate numbers as a “wisdom that should be shared by others” is a proof that you have no evidence and all this talk of yours is pure propaganda, a deliberately imposed, indefensible groupthink.

    Do you think that at some point, physicists would be teaching each other or the public that the probability of the Higgs boson’s existence had just increased from 98% to 99%? It’s silly. Depending on the evidence we incorporate, the way we count it, and the way we evaluate its independence, the probability that the Higgs was real could have been chosen by sensible physicists to be anything from 50% or so to 99.99999…% well before the particle was discovered. The people closer to 99.99999…% were more right, of course, and the arguments rooted in the unitarity of WW scattering etc. were always right, too. More empirically oriented or theoretically ignorant people could have dismissed/omitted all these arguments (that a Standard Model without a Higgs is really an inconsistent theory – unlike computer science with P=NP, which is consistent). There couldn’t have been any “detailed numerical consensus” about the probabilities.

    Now, when tons of Higgses have been created, the probability that there must be some new particle around that mass is 99.9999999999….% and every time a few new Higgs bosons are produced, another digit 9 is being added, roughly speaking. But as long as we were in the stage of having vague, qualitative, “intuitive” arguments, it would have been totally ludicrous to try to evaluate the probability that the Higgs was there too accurately. All of this shows that your ways of determining your degree of belief have nothing rational or scientific in them whatsoever; they’re random gibberish numbers you made up and want to impose on everyone else, like the length of the Chinese emperor’s nose “decided” as the average of guesses from 1 million people who have never seen the emperor.

    Cheers, LM

  76. Greg Kuperberg Says:

    Lubos – Any claim that “X is prime” is composite because it’s a combination of the claims “X is not a multiple of 2”, “X is not a multiple of 3”, and so on, and so on, and these subclaims (i.e. infinitely many “predictions”) may be tested separately – and perhaps some of them may be decided, giving partial evidence (that it could be true).

    But you could say the same thing about P ≠ NP. You could examine the sequence of all algorithms with various polynomial time limits, and confirm that each one fails to solve 3-SAT. This is really at least as good partial evidence as testing trial divisors. (Trial divisors are not the actual reason that any numbers are called probable primes, but that’s a separate matter.)
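
    Of course “examine the sequence of all algorithms” can’t literally be carried out, but here is a hedged toy sketch, in Python, of what one such check looks like: any candidate solver (the lambda below is a hypothetical stand-in for an enumerated algorithm) is compared against brute force on small random 3-SAT instances under an arbitrary time budget. Each candidate that fails is the infinitesimal sort of evidence being argued over.

      import itertools, random, time

      def brute_force_sat(clauses, n):
          """Reference decision procedure: try all 2^n assignments."""
          for bits in itertools.product([False, True], repeat=n):
              assign = (None,) + bits
              if all(any((lit > 0) == assign[abs(lit)] for lit in clause) for clause in clauses):
                  return True
          return False

      def random_3sat(n, m):
          """m random 3-clauses over n variables."""
          return [tuple(random.choice([1, -1]) * v
                        for v in random.sample(range(1, n + 1), 3))
                  for _ in range(m)]

      def test_candidate(candidate, n=10, m=43, trials=10, budget_seconds=1.0):
          """Crude stand-in for 'decides 3-SAT within a polynomial budget':
          compare the candidate against brute force on random instances."""
          for _ in range(trials):
              clauses = random_3sat(n, m)
              start = time.perf_counter()
              answer = candidate(clauses, n)
              if time.perf_counter() - start > budget_seconds:
                  return "exceeded the (arbitrary) time budget"
              if answer != brute_force_sat(clauses, n):
                  return "does not decide 3-SAT"
          return "survived this toy test, which of course proves nothing"

      # hypothetical candidate "algorithm": always answer True
      print(test_candidate(lambda clauses, n: True))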

    Unlike Scott, I’m not offended by what you’ve chosen to say. But I have to agree with him that your comments hardly advance the discussion. I would invite you to do some research in complexity theory instead of just writing blog posts. It’s a great topic and I think that you could be good at it.

  77. Jeremy Hahn Says:

    The question of how logical tautologies (proofs) can provide evidence for other logical tautologies is fascinating to me. I know Paul Christiano, Abram Demski, Will Sawin and a few others have been working on this. How can we meaningfully assign probabilities to mathematical statements? How should these probabilities shift as we compute longer and longer proofs, exploring more and more of the space of tautologies? It seems to me that the intuitions that you and Lubos are arguing about can be profitably formalized and their consequences explored rigorously.

  78. Darrell Burgan Says:

    Brace yourselves for another layperson question. If P=NP, then that means the P subset of NP represents every element of the set NP, correct? I.e. every element of P is in NP, and vice versa?

    (I assume because this question isn’t formally settled, nobody has found a problem that falls into NP that doesn’t fall into P, right?)

    Given all this, doesn’t this put the burden of proof on P=NP? After all, it would take only one set member to show P!=NP, but it would take every single set member to show P=NP? The presumption has to be for P!=NP, unless I’m misunderstanding.

    I’m missing it again, aren’t I … ?

  79. Greg Kuperberg Says:

    Indeed, this is a great point from Scott: I’m not sure that there is that much more evidence that P ≠ PSPACE than that P ≠ NP. Now, you can be agnostic as to whether P = PSPACE, and you can be agnostic as to whether PSPACE = EXP. It is a sort-of similar conjecture that the latter two are not equal. But you can’t have both! It is a theorem that P and EXP are different; in fact they are light years apart.

    It may seem unorthodox but sane at first to assign high probabilities that various pairs of complexity classes are equal. But if you make too many such wagers, then soon enough the probabilities are not consistent and you expose yourself to arbitrage.

  80. J Says:

    P=NP is very well possible. In fact all evidence so far is towards P=NP. In fact we have no better than linear lower bounds for, say, integer factorization (which I know is not NP-complete) and no better than a linear lower bound for integer multiplication. We accept n log n as the possible exact asymptotics for integer multiplication, which is close to the linear bound. Why do we have to accept exp(n^a) as the exact bound for integer factorization while the best known circuit lower bound is linear? In the same vein, why do we have to accept that the best known lower bound, which is linear for any NPC problem, is actually loose? The same linear bound was pretty much close to the truth for integer multiplication.

  81. Lubos Motl Says:

    Dear Scott #74, I haven’t responded to your technical observations because there haven’t been any. Your stories about fences and frogs are poetry that could be used to show to the children what “circular reasoning” means.

    There is no established “electric fence”. The only thing that has occurred is that some problems have been shown speedwise “equivalent”, so they’re on the same side of any fence or Iron Curtain in whose existence you believe, while others have not been proven – if P=NP, then it is probably because the required proof is harder or more complex or novel or original. So the fence may only reflect the current status of the proofs – chances are again 50-50 or so that it’s so.

    It is not surprising at all that some equivalences are proven before others. Some equivalences are easier than others – “easier” is usually the same as “earlier” but it is not necessarily so. So some walls may be higher than others but that doesn’t mean that they can’t be jumped over.

    Your discussion of the “confirmed predictions” is completely equivalent to the surprised reaction that a theorem may be proven in a paper written in English or Yiddish or Chinese. Great but they’re not independent at all because the actual equivalence is known, so the authors of the English, Yiddish, and Chinese papers were allowed to use the same available know-how, so it’s not shocking that they could reach the same “predictions” for what is actually the same problem, as discussed above. On one hand, you accept that these problems’ being in P (unknown whether it is true) is the same and equivalent, not independent, question. On the other hand, you love to count them as independent. That’s a self-evident logical mistake.

    Scott: “The important point is that, if it is one problem, then it’s a “huge” problem, a problem with thousands of different facets.”

    No, if it is one problem, then it is one problem, like one game of chess or one Rubik cube or one ant in a jungle or anything else that is “one”. It’s really my point. When thought about rationally, the proof of the speedwise equivalence of the facets of the problem exactly shows that in every moral sense, it should be thought of as one problem.

    The point is that the facets’ having the same properties is neither confirmation nor refutation of P=NP because the actual reasons behind the equivalence of the “facets” is known – it’s a known proof that is not correlated to P=NP in any way.

    An analogy with string theory. String theory is a “huge theory” because it has many limits and dual descriptions that are known to be the same theory. So far it is analogous to what you’re saying. But then you’re making a statement, “not P”, about all these equivalent NP problems. That would be analogous to saying that string theory doesn’t predict low-energy SUSY, or something like that. But we don’t actually have any evidence of such a statement. Indeed, the answer is the same for all the dual descriptions – this is a fact because we know that there is really one “huge” theory – but we don’t know for sure whether the answer is Yes or No.

    Scott: “We already know, because of the hierarchy theorems, that most complexity classes can’t be equal to most other ones!”

    That’s great but we *don’t* know whether P is equal to NP. We may know that most dots we see on the sky are different celestial objects but that doesn’t mean that the morning star is different from the evening star. For two different celestial objects, one may typically find a valid proof of their inequivalence, but that’s not the case of P and NP. And it’s not the case of the morning star and the evening star. That’s why there is a good case that P may very well be NP, and the morning star may be the evening star. In the latter case, we actually know it’s the case (Venus) and we don’t know it in the case of P=NP or its negation.

    Cheers
    LM

  82. ngfung Says:

    I work on approximation algorithms and naturally believe that P != NP. But I’m unconvinced by your argument.

    Yes, we see two gigantic balls touching yet not intruding on each other. But such is fairly common in maths when you are trying to prove some difficult conjecture (say A = B). You formulate A and B in different ways but alas, it’s just never clear that A and B are the same until some genius, perhaps named Perelman, proves it 102 years later (here A = set of manifolds homeomorphic to the 3-sphere, B = set of simply connected, closed manifolds).

    I believe in P != NP not because you can come up with lots of contrived equivalences of P and NP which never intersect, but because of the “absurd” consequences that would otherwise follow. Their never intersecting can be due to some “fundamental difficulty” in proving the conjecture.

    Having said that, I think it’s unproductive to use pseudo-evidence to convince laymen that P != NP.

  83. Fred Says:

    Lipton wrote
    “Their paper shows that there are SAT solvers capable of solving hard natural problems in reasonable time bounds. What does this say about the strong belief of most that not only is P≠NP but that the lower bounds are exponential?
    Often SAT solver success is pushed off as uninteresting because the examples are “random,” or very sparse, or very dense, or special in some way. The key here, the key to my excitement, is that the SAT problem for the Erdős problem is not a carefully selected one.”

    If in practice it seems that SAT solvers are reasonably efficient, why then don’t we see some of the amazing consequences that P=NP would imply?
    Is it that those amazing feats always depend on solving worst-case problems?

  84. Hal Swyers Says:

    Scott, on this question you have my moral support, but I have reservations.

    1. NP problems need only be solved once. If I build a database, which I will call the projection database (PD), then, using the TSP as an example, for each configuration of N points there is a unique optimal solution.

    2. Growth in computational power following Moore’s law is exponential. If I imagine solving a problem on a dynamically evolving computer, e.g. one whose computing power matches Moore’s law as it begins to tackle each TSP configuration, the amount of computational power available at the end of the problem is greater than what is available at the end. So our ability to tackle a problem of some arbitrary complexity should not be viewed as a linear function of the beginning power, but as a function of the integrated computational-power growth curve. You will find this has a tremendous linearizing effect, at least for small values of N, in terms of the effective time it takes to tackle exponentially complex problems; e.g. if for each 2^N problem one dedicates a machine that grows at e^r, the integrated computations performed by the machine in some time T scale with N, e.g. T=kN, where k appears to be ln 2.

    3. The net effect would appear to be that as time marches forward, solving ever more difficult problems becomes computationally feasible within reasonable timelines. If one further imagines a situation in which one can maintain a network to share those solutions in the PD, then by time T the PD will have a solution for a problem with complexity 2^N = 2^(T/ln 2).

    I will of course check to see if this actually works, but I’m more than happy to share for others to tear apart.

    http://thefurloff.com/2014/03/08/p%E2%89%A0np-asymptotically-pnp-practically/
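
    For what it’s worth, the arithmetic behind k ≈ ln 2 in point 2 does check out under the stated (and quite generous) assumptions, namely a single machine whose speed grows like e^t and a task needing 2^N total operations:

      \[
      \int_0^T e^{t}\,dt \;=\; e^{T}-1 \;\approx\; e^{T},
      \qquad
      e^{T} = 2^{N} \;\Longrightarrow\; T = N\ln 2 .
      \]

    So the waiting time for a single 2^N-operation instance grows linearly in N, with slope ln 2 in these units; whether the other assumptions hold is a separate question.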

  85. Hal Swyers Says:

    “Moore’s law as it begins to tackle each TSP configuration, the amount of computational power available at the end of the problem is greater than what is available at the end.” Should be
    “Moore’s law as it begins to tackle each TSP configuration, the amount of computational power available at the end of the problem is greater than what is available at the beginning.”

  86. Jim Says:

    Perhaps there comes a time to stop debating such people and let them be?

    Lubos is in the same category of people as the guy who made http://www.timecube.com/

  87. Klum Says:

    @Scott #48, I’m still not convinced about the content and merit of the debate.
    You say the following:

    ” Indeed, if you agree with him, why did you become a complexity theorist? According to him, complexity theory (and more generally, combinatorics) is just a mass of disconnected, random facts with no relation to each other. One shouldn’t conjecture that P≠NP “for most purposes,” because one shouldn’t conjecture anything at all about such questions: some discrete questions have one answer, some have another, but there isn’t any rhyme or reason or predictability to it whatsoever.”

    First, I was not aware that this is what Lubos says. I have not followed his thought carefully. I’m only commenting on the current debate about P vs. NP.

    Second, why would the views you ascribed above to Lubos make me or anyone else become uninterested in complexity theory? My main criterion in research is intellectual significance (intrinsic to some area, or having impact outside of one’s area). I could respond that complexity theory is a great intellectual theory, even, and to some extent “because”, it’s a disparate bunch of results tied together in a very complicated (and seemingly random) manner. The fact that people are accustomed to simple “narratives” in science doesn’t mean that the rise of a new science, which is complicated and non-elegant, is something bad. On the contrary, I may claim: it is just closer to the truth.

    About the fact that computer scientists are “stupid” according to Lubos, I don’t know if indeed this is his claim, but even if it is, then it has no relevance to anything scientific. I don’t care if I’m stupid or clever. Science is not an intelligence competition. All I care for is to study and reveal deep or important truths.

  88. Joshua Zelinsky Says:

    Lubos, I’m curious if it bothers you at all that there are number theorists in this thread (e.g. me) who disagree with your assessment that there’s a difference between the evidence for problems like the Riemann hypothesis and Goldbach’s conjecture and the type of evidence being discussed here for P != NP.

    Note also that Greg’s example of a probable prime is far more interesting than you seem to appreciate. The high probability we assign to 8^173687-7^173687 being prime comes not from testing it for prime divisors but from running the probabilistic version of Miller-Rabin. Note that running Miller-Rabin does not in general give one divisibility information of the form “q does not divide p”.
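
    For readers who haven’t seen it, here is a minimal Python sketch of the Miller-Rabin test being referred to (a standard textbook version, written for this discussion, not anyone’s code from the thread). The point Joshua is making is visible in the code: a failed witness proves compositeness, yet no divisor is ever produced.

      import random

      def miller_rabin(n, rounds=20):
          """Probabilistic primality test. Returns False only if n is certainly
          composite; a composite n passes all `rounds` random bases with
          probability at most 4**(-rounds)."""
          if n < 4:
              return n in (2, 3)
          d, s = n - 1, 0
          while d % 2 == 0:          # write n - 1 = 2^s * d with d odd
              d //= 2
              s += 1
          for _ in range(rounds):
              a = random.randrange(2, n - 1)
              x = pow(a, d, n)
              if x in (1, n - 1):
                  continue
              for _ in range(s - 1):
                  x = pow(x, 2, n)
                  if x == n - 1:
                      break
              else:
                  # witness found: n is definitely composite, but note that
                  # no divisor of n has been exhibited along the way
                  return False
          return True

      print(miller_rabin(2**89 - 1))      # a known Mersenne prime: quick sanity check
      q = 8**173687 - 7**173687           # Greg's number: each round is one huge modular
      # print(miller_rabin(q, rounds=1))  # exponentiation, so this is slow in pure Python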

    You also wrote:

    of course that both of us are talking about Bayesian inference. The only difference is that I do it right and Scott does it wrong. Bayesian inference starts with prior probabilities for P=NP and P!=NP which must be chosen comparable because they’re qualitatively different, simple hypotheses. And then the probabilities are adjusted by Bayesian inference. Because both hypotheses equally well pass the “comparison with the data” because no contradiction ever occurs, the posterior probabilities are still equal to the prior probabilities so they’re still “comparable” to 50-50.

    But this sort of thing applies just as well to the many zeros of the Riemann zeta function we’ve looked at. No contradiction has been obtained: if we had found a contradiction, we could have refuted RH. What you are actually touching on is a deep open problem: how to do Bayesian reasoning on axiomatic systems where one has bounded computational power.

    Now, if I were to steelman your objection to the various NP-complete problems being strong evidence, I think the argument I’d make would be more sociological: once a problem has been shown to be NP-hard or to be in P, people will likely stop trying to prove the other. And this problem is even more severe given that some of the problems that are NP-complete are problems which by nature are actually artificial. See for example http://arxiv.org/abs/1201.4995 . I presume that Scott would find “P=NP and the person who proved it did so by showing a polynomial time algorithm for generalized Prince of Persia” to be even more surprising than P=NP.

    But both these objections run into severe limitations. First, people will often work on trying to get an efficient algorithm for something before they find out that the problem in question has been shown to be NP-complete. Furthermore, there’s the issue of reductions of the sort that Scott talked about in his post, where one gets a reduction by combining multiple distinct reductions and still ends up with something like 2^O(log n) time, and there are a lot of these.

    Second, hardness of approximation results don’t normally have these sociological limits, because when one is proving them one is generally pushing for as much of an approximation as one can get with one’s technique. It is only then that one finds that the point where this ends frequently coincides with the exact parameter range where the problem transitions into being NP-hard. Set cover is an example of this. Moreover, in some of these cases, people proved they could do some approximation before it was shown that that approximation was running into the approximation barrier of P!=NP. Why that sort of thing would repeatedly happen if P=NP would require some other explanation.

  89. Bill Gasarch Says:

    (Always feel silly being the 73rd commentator, but oh well).
    1) Scott- KUDOS, great article.
    2) One sound bite I get out of this, which might not be what you intended, is that P NE NP has great explanatory power, in that it EXPLAINS why we are stuck where we are at various non-approx results. So it may be likely-to-be-true the same way Evolution is—great explanatory power even if it may be harder to do Popper-type experiments on it.
    3) With that in mind the following seem reasonable:

    “PH collapses” does not seem to have that much explanatory power. At least, I don’t know if A LOT follows from it.

    The Unique Games Conjecture DOES seem to have SOME explanatory power, giving some evidence that it’s true, but nowhere near that of P NE NP yet.

    4) Here is hoping that my capital letters survive your spam filter!

  90. Michael Says:

    @Lubos #73. Are you sure you’ve ever actually done Bayesian analysis? It isn’t hard. For each new bit of data you update your previous odds ratio by multiplying by the ratio of the conditional likelihoods of getting that data. We’ve got some data points for which L(Data|!=)=1 and L(Data|=) <1. (You seem to think the question is whether L(Data|=)=0, not whether it's <1. If so, you're wrong.) So whatever your initial odds ratio is, it's now modified to favor !=. The key questions are "how much <1 ?" and "about how many effectively independent points?". Perhaps if Scott put the question that way and tried to be semi-formal he'd end up guessing something other than ~100/1. He would not end up guessing 1/1.
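
    A minimal numerical sketch of the update Michael is describing; the likelihood 0.95 and the count of 100 effectively independent observations are made-up illustrative values, not numbers anyone in the thread has endorsed:

      # Odds form of Bayes: posterior odds = prior odds x product of likelihood ratios.
      prior_odds = 1.0          # 1:1 for P != NP vs P = NP, as Lubos insists on
      L_neq = 1.0               # each "fence" observation is certain if P != NP
      L_eq = 0.95               # and merely quite likely if P = NP (illustrative)
      observations = 100        # effectively independent data points (illustrative)

      posterior_odds = prior_odds * (L_neq / L_eq) ** observations
      print(round(posterior_odds))                             # about 169 to 1
      print(round(posterior_odds / (1 + posterior_odds), 3))   # about 0.994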

  91. Ignacio Mosqueira Says:

    Scott

    It matters for your arguments whether the NP-complete problems are 10000 or one.

  92. Douglas Knight Says:

    readers who…just want ammunition to prove their starting assumption that I’m a doofus who doesn’t understand the basics of his own field

    Why bother addressing these readers? You say you’re disappointed in Lubos, but are you actually surprised? The frog allegory is accurate, but it seems aimed at people who aren’t going to believe you, regardless of what you say. Is it actually helpful?

    When you complain about literal-minded readers, do you mean the stuff about Mozart? I thought that was useful and it’s a pity you threw it out. Your old post had a diversity of arguments that I think is valuable, though I think the numbered list produced an illusion of more diversity than was really there. Focusing on PCP and other sharp boundaries here is better than any single argument there, but something is lost.

  93. Bill Gasarch Says:

    Some of the respondents to my P vs NP survey did indeed email me privately that they were voting P = NP for the same reason they voted for Ralph Nader in 2000— as a protest vote. And look where that got us!

    What were they protesting? The fact that everyone thinks P NE NP.

  94. Serge Says:

    For the outsiders, the thinking is more like: computer scientists are just not very smart—certainly not as smart as real scientists—so the fact that they consider something a “fundamental hypothesis” provides no information of value.

    If one considers that Georg Cantor himself never felt the need for a proof of such controversial statements as the axioms of Infinity and of Choice, then it isn’t too shameful on your part that you believe P≠NP to be self-evident. It just shows you’re also a pioneer in your own domain.

  95. Scott Says:

    Darrell #78:

      doesn’t this put the burden of proof on P=NP? After all, it would take only one set member to show P!=NP, but it would take every single set member to show P=NP? The presumption has to be for P!=NP, unless I’m misunderstanding.

      I’m missing it again, aren’t I … ?

    Your argument would actually be OK, if it weren’t for the phenomenon of NP-completeness. Because of that, we know that you can just pick any particular NP-complete problem—say, Set Cover—and then P vs. NP will be equivalent to the question of whether that problem is in P.

    On the other hand—as I’ve been trying to explain to those who will listen!—the fact that we know so many different NP-complete problems means that we can examine the same mathematical question from thousands of different angles. For some NP-complete problems, the best known approximation strategy is semidefinite programming relaxation, for others it’s a greedy algorithm, etc. Yet again and again, we find that these totally different kinds of algorithms, applied to different NP-complete problems, are able to approach but never exceed the limit that would imply P=NP if crossed. So, this is what I meant before in talking about the “invisible electric fence.” And yes, I would say that the more different places we try to cross and get shocked, the more confident we can become that some sort of fence is really there—indeed, I don’t even understand the mindset that denies that. And while one can conceive of other possibilities, by far the simplest sort of fence would be P≠NP.
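
    Since Set Cover keeps coming up as the example: here is a minimal Python sketch (illustrative, not from the post) of the greedy algorithm alluded to above, which achieves roughly a ln n approximation factor; Feige’s theorem and later work put the matching (1−o(1)) ln n hardness threshold right at that factor, which is one concrete panel of the fence.

      def greedy_set_cover(universe, subsets):
          """Greedy Set Cover: repeatedly take the set covering the most
          still-uncovered elements; guaranteed within ~ln(n) of optimal."""
          uncovered = set(universe)
          chosen = []
          while uncovered:
              best = max(subsets, key=lambda s: len(uncovered & s))
              if not uncovered & best:
                  raise ValueError("the given subsets do not cover the universe")
              chosen.append(best)
              uncovered -= best
          return chosen

      universe = range(1, 11)
      subsets = [{1, 2, 3, 8}, {1, 2, 3, 4, 5}, {4, 5, 7}, {5, 6, 7}, {6, 7, 8, 9, 10}]
      print(greedy_set_cover(universe, subsets))   # two sets suffice here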

  96. Lubos Motl Says:

    Greg in #76: “But you could say the same thing [like “primality is a composite statement”] about P ≠ NP. You could examine the sequence of all algorithms with various polynomial time limits, and confirm that each one fails to solve 3-SAT.”

    In principle, yes. But eliminating one algorithm after seeing that it is “not being the right one that solves SAT quickly” only increases the probability of P!=NP “infinitesimally”, because there are virtually (and really) infinitely many algorithms to test. On the other hand, the test of divisibility by 2 removes 1/2 of the candidate primes, so this test may actually be done and affects the probability of the number’s primality by a finite, significant amount. It ain’t the case in the case of P=NP.

    I believe you that it’s a nice research subject. I/we have been exposed to it since 1992 but I never got really excited. It’s fun to categorize and relate various limits on the speeds for various problems and their classes etc. but in some sense, it is a form of “comparative literature”. I don’t really feel that this “broad” perspective on all algorithms is teaching us something except for an artificial eclectic mixture of topics that have nearly nothing to do with each other. An interesting breakthrough would be actually to find the algorithm for SAT or salesman or something else proving P=NP. But I am afraid that even that could be boring.

    For example, you may find info on Mathematica’s function “FindShortestTour” that solves the traveling salesman problem by a combination of 10 or so more elementary functions. For small N, it is guaranteed to find the best route; beyond that it is “close to the minimum length”. I can imagine that some optimization of this hybrid algorithm in Mathematica 10 or Mathematica 25 will turn out to work perfectly and polynomially, and 50 years later, someone will prove, using a very longish and “bureaucratic” proof with lots of epsilons and bounds, that it works and it is fast. I don’t see anything impossible about it whatsoever, and from my perspective, it would still be a rather boring development, even though Scott et al. would view such a development as so miraculously, religiously shocking that they don’t even dare to think about this totally real possibility that may materialize.

  97. Lubos Motl Says:

    Darrell #78, “burden of proof” is a somewhat political or legal term, not something natural in mathematics which is about things’ being true or false, proven or unproven. At any rate, you may say the same thing about any other similar conjecture, like the Riemann Hypothesis. It’s enough to find one zero of the zeta function that violates the RH. It’s not even necessary to find it: it’s enough to find a non-constructive proof that it exists! This comment still doesn’t prove RH even though the burden of proof is on those who claim that RH is false.

    Most importantly, you got the “burden of proof” upside down in the case of P vs NP. If P were not equal to NP, it would be enough to find *one* element of NP and prove that it is not in P. Despite the large (but smaller than in the RH case, especially after equivalences are taken into account) number of elements in NP, no such element has been found, so one should still stick to P=NP and wait until someone falsifies this conjecture.

    This point of yours – after one corrects the obvious sign error you made – was really the reason why I said that P=NP looks like the simple “default” hypotheses in physics that we prefer to believe until the moment when they are ruled out.

    At any rate, as Scott correctly says, the equivalences render your “burden argument” misleading because it’s really analogously difficult to prove the membership in P for pretty much all of the NP problems. From some viewpoint, NP only has 1 or few elements. All these things have sort of emerged before and they are showing that the arguments that P!=NP is almost proven are just wrong.

  98. Sam Hopkins Says:

    I wish this blog post were titled “the mathematical case for P != NP”… as others have pointed out, P vs. NP is not a scientific problem. So while it’s true that the evidence for P != NP may be as good as the evidence for our best physical theories, the potential kind of evidence we could have for P != NP (namely, a proof!) is so different from the kind of evidence we can have for a physical theory. Thus what really matters is why we ought to buy into P != NP in the same way we buy into RH being true, which again is not a scientific problem.

  99. Halfdan Faber Says:

    Not sure I understand why LM is so upset about man-made structures in mathematics, considering the mounting evidence that string theory is nothing but an area of pure mathematics with no relation whatsoever to physics, at least as it relates to attempts to understand the actual physical world. String theory is a man-made construction in mathematics, if such a thing ever existed.

  100. Sniffnoy Says:

      Greg #49 wrote: “But the assertion that 8^173687-7^173687 is prime is not a “composite” claim. It is an indivisible claim: Either it is prime or it is not; it cannot be statistically sort-of true.”

      You’re just completely wrong, Greg. Any claim that “X is prime” is composite because it’s a combination of the claims “X is not a multiple of 2”, “X is not a multiple of 3”, and so on, and so on, and these subclaims (i.e. infinitely many “predictions”) may be tested separately – and perhaps some of them may be decided, giving partial evidence (that it could be true). In fact, the number’s being prime may be decomposed in several other known ways (and probably many other unknown ways).

      On the other hand, P!=NP isn’t composite in this sense – it is an existence claim about one (or many, but one may always discuss the fastest one only) particular thing.

    This is just silly. You’re correct about the statement that 8^173687-7^173687 (let’s just call it q) is prime; it’s made up of the statements that 2 does not divide q, 3 does not divide q, etc. But the claim that P!=NP could be divided up similarly. Let’s say, for simplicity, that we phrase P!=NP as “There does not exist a polynomial time algorithm for 3-SAT”. Then this is made up of the statements that, for each particular algorithm computing 3-SAT, it does not run in polynomial time. Now you may object here that this is disanalogous because, whereas we can easily compute whether a given number is prime, there is no way to compute whether a given algorithm computes 3-SAT (or even to computably enumerate all the algorithms that do), so it’s harder to make concrete just what statements we’re conjoining. But then we can just rephrase this as “For any given algorithm, either it does not compute 3-SAT or it does not run in polynomial time.”

    That is to say, “q is prime” and “P!=NP” are both universal statements. (Well, on the face of it, P!=NP is a statement about the existence of certain problems, not the nonexistence of certain algorithms, but that is not really the most fruitful way of thinking about it, so let’s stick with the “nonexistence of algorithms” view, as you and Scott both have.) Either one can be decomposed as you suggest. Meanwhile, we can also look at it from the other side and negate both of them, to get the statements “q is composite” and “P=NP”, which are existential statements — “indivisible” statements about the existence of one particular mathematical object, as you say. In short, this distinction you’re drawing isn’t due to a real difference between the problems, but due to the fact that you’ve insisted on casting one problem universally and the other existentially. But either can be cast either way by just negating the statement! It’s the same problem either way.

    Given then that you accept that it’s OK to make inferences from “Every number we’ve tried so far does not divide q” to “We can increase our confidence that q is prime by some amount”, it should also be OK to make inferences from “Every algorithm we’ve tried so far either didn’t compute 3-SAT or wasn’t polynomial time” to “We can increase our confidence that P!=NP by some amount”. Not in any way necessarily the same amount, of course; and in fact the bulk of the evidence for both these statements is not of this form. (The primeness of q from the Miller-Rabin test, P!=NP from a bunch of other arguments, including the sort of argument Scott makes above.) As has been mentioned above, if you tried to raise this sort of argument to claim that P!=BPP, you’d face quite a bit of skepticism. Still, the fact that so far no PRNG that’s been tried has been good enough to derandomize BPP should at least lower our confidence that P=BPP to some extent[0].

    Now, there are several ways out of this. For one, you could reject entirely the notion of “logical probability”. You point out that finding that any particular algorithm for 3-SAT fails to run in polynomial time is perfectly consistent either with P=NP or with P!=NP. Well, similarly, that any particular natural number fails to divide q is perfectly consistent with q being either prime or composite! Your attempt to draw a disanalogy fails again.

    Let’s look at Bayes’ theorem again — if we want to tell whether P(H) goes up or down upon conditioning on E, we need to compare P(E|H) to P(E|not H). Now, as you point out, “this algorithm for 3-SAT fails to run in polynomial time” (E) is perfectly consistent with both P!=NP (H) and P=NP (not H), but, if we accept the notion of logical probability, that doesn’t mean that P(E|H) and P(E|not H) are the same; P(E|H)=1, while P(E|not H) should be at least slightly less than 1. And the case of E=”this number does not divide q”, H=”q is prime” is exactly analogous; P(E|H)=1, P(E|not H) is at least slightly less than 1. So it is at least a little bit of Bayesian evidence — so long as you accept the notion of logical probability, of course. If you don’t, then this whole argument is meaningless… but to reject logical probability would also be to reject the idea that you can talk about gathering Bayesian evidence that q is prime, which you don’t seem to want to do.
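
    (For concreteness, here is a toy version of that update; every number in it is made up purely to show the direction of the effect, not to estimate its size.)

      # Toy Bayesian update; all numbers here are made up purely for illustration.
      # H = "P != NP", E = "this particular algorithm failed to solve 3-SAT in poly time".
      p_h = 0.5               # prior on H
      p_e_given_h = 1.0       # if P != NP, every candidate algorithm must fail
      p_e_given_not_h = 0.99  # if P = NP, this one algorithm could still easily fail

      p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
      print(p_h * p_e_given_h / p_e)   # ~0.5025: a small nudge upward, as claimed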

    A more interesting possible way out would be to point out that the logical structure of the statements is not exactly the same, due to the fact that (for any given natural number n) “n does not divide q, or n=1 or n=q” is a Delta_0 statement (in the arithmetical hierarchy) while (for any given algorithm P) “P does not compute 3-SAT, or P does not run in polynomial time” is, uh, not. (Assuming we’re encoding programs as numbers here.) (Indeed, “q is prime” is itself a Delta_0 statement…) OK, I’m not really sure that this is a way out at all, but I feel like someone more knowledgeable than me could potentially try to make some sort of argument out of that. But you haven’t done that, instead opting for some ridiculous discrete vs. continuous distinction.

    Now, of course, as I’ve already mentioned above, this sort of evidence should really probably not make up the bulk of your evidence. But, so long as you accept the notion of logical probability, it is a sort of Bayesian evidence. And we can recast the other arguments for P!=NP in Bayesian terms, as well! Hell, in his old (now “officially retired”) post, Scott even refers to the classic “Lower bounds are harder to prove than upper bounds” argument as “The Bayesian argument”. He doesn’t cast it explicitly in Bayesian terms, but it’s easy to do so. And it’s not too hard to do so with the “invisible electric fence argument”, either. If H is “P!=NP”, and E is “there is no polynomial-time reduction from this particular NP-complete problem to this particular problem in P”, then as we vary E, well, P(H) doesn’t vary, P(E|H) is always 1, but P(E) varies; as the two problems being considered become more and more similar, closer and closer together, our prior probability for E goes down, and so P(H|E) goes up. This isn’t the whole argument, it’s not just about reductions, there’s an “invisible electric fence” elsewhere too, but, well, do I really need to finish this? I think I’m going to stop here.

    [0]Note: I’m a little uncertain as to definitions here; is a PRNG, in the complexity-theory sense, automatically enough to derandomize BPP? Well, I just mean PRNG in the ordinary nontechnical sense of the term.

  101. Sniffnoy Says:

    Oh, I submitted my last comment before seeing comment #96. I see #96 contains a response to essentially what I said — you acknowledge that yes, this is evidence, you just think it’s much weaker evidence than in the case of “2 does not divide q, 3 does not divide q…”

    In that case, all I have to say is: Holy hell, why didn’t you just say that in the first place?? If that’s the heart of your argument, you completely failed to make that clear. I, and it would appear everyone else, thought you were making some qualitative distinction, not some mere quantitative one. You sure made it sound like one with your nonsense about how one problem is composite but the other is indivisible!

  102. Luke G Says:

    I agree that the frog argument is pretty strong: if lots of very smart and motivated people are looking for something (e.g. P=NP, and even the easier problem of factoring in P) and don’t find it, that’s evidence it doesn’t exist. It’s even stronger when there are lots of “near-misses”, or as you say, an “electric fence”. Not everyone tends to trust such arguments, however (such people are often identifiable by their insulting the intelligence of researchers in the field, or believing in an ivory-tower conspiracy :).

    You have to be careful in this line of reasoning, though, because the argument “if we haven’t found it yet, it probably doesn’t exist” apparently applies to both “a proof that P!=NP” and “a P-time algorithm for NP-complete”. However, there’s still good reason why this line of argument favors P!=NP. The statement “P!=NP” is a “for-all” statement, while “P=NP” is a “there exists” statement, and generally speaking the latter category is easier to prove (and you can even make this notion formal in some sense). So one could argue, the longer P v NP goes unresolved, the more likely the answer is that P != NP.

    Rather than this meta-argument, though, I think there’s plenty of direct arguments that favor P!=NP. For me, some of the most convincing facts and intuitions for P!=NP include, in no particular order:

    1. If P=NP, the polynomial hierarchy collapses. So it’s not just NP that becomes P, but problems with another layer of quantifier: NP^co-NP = P, etc. Those are some difficult problems!

    2. If P=NP, you’re sorta saying entropy can be easily reversed (of course if you actually try to formalize this notion, it doesn’t quite work out–otherwise someone would’ve already proven P!=NP). If you have a P time algorithm for scrambling an egg, it’s NP to unscramble. Basically, this is physics intuition why one-way functions should exist.

    3. P=NP means that arithmetic statements with bounded quantifiers have short proofs. However, proving things tends to be hard — most theorems we know about proof length are about how amazingly LONG proofs can be.

    4. The arithmetic hierarchy doesn’t collapse, so why should the polynomial hierarchy?

    5. P!=NP in a black-box model.

  103. Bram Cohen Says:

    My slightly idiosyncratic take on it is that I believe that P!=NP because the 4-SUM problem is quadratic, the 6-SUM problem is cubic, the 8-SUM problem is quartic, etc., and I believe this pattern doesn’t break down. An interesting quirk of this belief is that it would *just barely* show that P!=NP. In fact it would be just about the weakest result you could possibly show which would make that barrier firm. Which is another reason why I believe it’s a good candidate for a first P!=NP result. Not that I have any idea how to start approaching it.
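
    (For readers who haven’t seen it, here is a minimal sketch of the standard roughly-quadratic approach to 4-SUM, the base case of the pattern being extrapolated; it runs in expected quadratic time with hashing and is written for clarity rather than speed.)

      # Sketch of the classic ~O(n^2) approach to 4-SUM ("do four entries sum to
      # zero?"): hash all pairwise sums, then look for two index-disjoint pairs
      # whose sums cancel. Expected quadratic time with hashing on typical inputs.
      from collections import defaultdict

      def four_sum_exists(a):
          pair_sums = defaultdict(list)        # sum -> list of index pairs
          n = len(a)
          for i in range(n):
              for j in range(i + 1, n):
                  pair_sums[a[i] + a[j]].append((i, j))
          for s, pairs in pair_sums.items():
              for (i, j) in pairs:
                  for (k, m) in pair_sums.get(-s, []):
                      if len({i, j, k, m}) == 4:   # all four indices distinct
                          return True
          return False

      print(four_sum_exists([7, -3, 1, -5, 9, -2]))   # True: 7 - 3 + 1 - 5 = 0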

    The really strong result, of course, would be showing exponential lower bounds for some problems. All the instances of those which I know of are either NP-complete, which obfuscates what the core hard problem is, or messy and handwavy, like reversing believed-to-be-secure hash functions, which makes it feel like we don’t have a coherent beautiful explanation of what makes them hard.

    I think that at the core of P vs. NP in the strong case (that is, showing an exponential separation) is a concept of pseudorandomness. That is, the way you show that a problem requires exponential time is that you show that a hash function is pseudorandom, therefore by ‘The Pseudorandomness Theorem’ it takes an exponential amount of time to reverse. I have no idea how to prove the Pseudorandomness Theorem, or even exactly what it says, but the general outline of it is that if certain conditions are met a well defined process will behave the same as a stochastic process.

    Of course, the Pseudorandomness Theorem would also directly imply the Riemann Conjecture because of its equivalent statement about the distribution of prime numbers, in fact I suspect that will be one of the first things proved before a more general result. And I believe that the Collatz Conjecture, which calls for a very strong pseudorandomness property of a very specific and quirky function, will fall a ways after P vs. NP, which only requires any pseudorandomness property of any function. Not that I’m expecting to see any of these results in my lifetime.

  104. Lubos Motl Says:

    Ngfung #82: It’s amusing to see that your reasons for preferring P!=NP are in no way similar to Scott’s, but could you please be more specific about what you mean by the “absurd” consequences of P=NP and why you call them absurd? Thanks.

    As I mentioned in the previous comment, finding the fast algorithm for the salesman could mean nothing more than that one function among thousands of functions in Mathematica will get just a little bit better in Mathematica 10 relative to Mathematica 9, not a big deal.

    Thanks, Jim #86, I have already been exposed to worse harassment, however, so you didn’t impress me.

    Klum #87: I surely don’t think that computer scientists are stupid. They are generally very bright, I’ve known and still know many in person, and their IQ rivals the physicists’. And of course I know accomplished computer scientists who think that P=NP is either likely or equally likely as P!=NP. I just opposed the “movement” trying to impose P!=NP on everyone despite the absence of any real evidence in one way or another. And I countered claims that computer science is fundamental for our understanding of Nature or reality in a similar sense as theoretical physics.

    Of course the research programs attempting to show P=NP are at least as exciting as those based on the P!=NP assumption, probably more exciting! Both possibilities should be tried.

    Joshua #88: I have discussed the major difference between P!=NP and RH or Goldbach too many times, it should be clear to everyone by now. P!=NP is the only “irreducible claim” in the sense that it isn’t making predictions whose validity may be decided and that are weaker than P!=NP itself, and whose evaluation changes the probabilities by a finite amount. Having run mersenne.org on all Harvard’s physics computers for years :-), I am fully aware of the huge diversity of primality tests for various types of integers. This only strengthens my argument – there are many more types of “partial predictions” that a claim about primality or RH is making. P!=NP doesn’t (demonstrably) predict anything [finitely] stronger than “FALSE” and [finitely] weaker than P!=NP itself, so no actual tests could have been done to modify the prior 50-50-like odds.

    There is no general “Bayesian way to direct thinking”. Bayesian inference only works when you are given – or you decide – which questions should be asked and what the answers are. But if you don’t know what the right questions/experiments (yielding the evidence) to test are, the finiteness of the computational power is the smallest problem preventing you from doing anything with Bayesian inference.

    Michael #90: In the Bayes formula applied to “any generic data”, the probability P(E|H) is unknown. But it is equally unknown for H = (P=NP) and for H = (P!=NP). For all the data for which P(E|H) may be predicted (and yes, this set is empty, which is exactly what I mean by saying that neither P!=NP nor P=NP makes any partial predictions; that’s why I chose a title for my blog post saying that “partial evidence doesn’t exist” for this question), the values are the same for both H = (P=NP) and its negation. So the Bayesian inference has 0 nontrivial steps and the posterior probability stays equal to the prior probability – it’s 50-50 or so.

  105. Nilou Ataie Says:

    Hi Scott,
    I recently found your website and I want to thank you for your refreshing, honest, funny, and witty posts. I have learned a lot from you – you truly understand the importance of science in computer science. Keep up the good work,
    Nilou

  106. Dániel Says:

      I can imagine that some optimization of this hybrid algorithm in Mathematica 10 or Mathematica 25 will turn out to work perfectly and polynomially, and 50 years later, someone will prove using a very longish and “bureaucratic” proof with lots of epsilons and bounds that it works and it is fast.

    Scott, are you aware of any research that intends to show that the above scenario is improbable? The theory of NP-completeness does that, but only by referring to the combined brainpower of all the people who tried to solve NP-complete problems so far. (Not very convincing for Lubos. 🙂 ) With diagonalization and natural proofs we had some way to formalize and prove the intuition that proving P!=NP requires new ideas. How could we formalize the intuition that proving P=NP requires new ideas?

  107. Scott Says:

      For example, you may find info on Mathematica’s function “FindShortestTour” that solves the traveling salesman problem by a combination of 10 more elementary functions or so. For small N, it is guaranteed to find the best route, then it is “close to the minimum length”. I can imagine that some optimization of this hybrid algorithm in Mathematica 10 or Mathematica 25 will turn out to work perfectly and polynomially, and 50 years later, someone will prove using a very longish and “bureaucratic” proof with lots of epsilons and bounds that it works and it is fast.

    Annnd … we may have pinpointed Lubos’s problem right there. In the passage above, Lubos has almost perfectly expressed the mindset of the really bad, beginning programmer—the one who doesn’t have any higher-level strategic insight about why things work or don’t work. I, too, thought that way when I was maybe 14 years old. “Who knows, maybe if I varied one thing in this Mathematica code, it would suddenly start solving NP-complete problems in polynomial time!”

    But what happened next was that I went to college, took a bunch of CS courses and read a bunch of books and papers, and discovered that I’d vastly underestimated the depth with which other people had already thought about these things. The fact that I didn’t understand why some approach couldn’t work for solving NP-complete problems, didn’t mean that no one else understood. So for example, the exponential lower bounds on length of resolution proofs, due to Haken and others, immediately ruled out a giant swath of DPLL-like approaches. If you didn’t have the relevant insight, you could play around with DPLL for 30 years, thinking that one optimization you haven’t thought of yet could finally be the thing that makes it work in polynomial time. After you have the insight, you get it, you understand. Same thing for all the linear programming approaches to the Traveling Salesman Problem, and Yannakakis’s insight that immediately explains their failure.

    And with such understanding comes the realization that CS is not just a “bureaucratic mess” of disconnected facts: there are overarching principles that explain what can and can’t work. I feel bad that Lubos never got to have this experience, and that he’s now way too far gone for it.

  108. anon Says:

    So Lubos’s argument boils down to believing Mathematica will get better algorithms in the future.

    What a Dickhead

    Lubos, you silly man, even in the simple case of factorisation, there will never be a polynomially bounded algorithm.

    Anyone who believes there will be is akin to mystics and religious people.

  109. Michael Says:

    @Lubos #104. No, if P!=NP, the probability that no problem will be found to be both NP-complete and in P is known: it’s exactly 1. If P=NP that probability is a bit subjective but somewhat less than 1. Look, if you hate Bayesian reasoning and don’t want to use it, fine – but then why did you bring it up in the first place?

  110. Sasho Says:

    I wonder what is the probability of coming across someone who thinks “P \neq NP is just fashionable dogma” if you condition on that someone not being an internet troll.

  111. Ignacio Mosqueira Says:

    Scott

    I have genuine curiosity here, not an agenda. Lubos claims that the Mathematica traveling salesman software works for up to N = 10. Do you have a practical bound for N that future software may work for?

  112. Ignacio Mosqueira Says:

    Sasho

    Proving someone is an internet troll is an NP-hard problem.

  113. Scott Says:

    Sam Hopkins #98:

      I wish this blog post were titled “the mathematical case for P != NP”… as others have pointed out, P vs. NP is not a scientific problem.

    It’s true that we ultimately strive for a form of argument in math—namely, proof!—way beyond anything we can hope for in natural sciences. And that does indeed make math different.

    However, I submit to you that, when we don’t yet have proofs (but, say, are searching for them), the way we decide what is or isn’t plausible in math is fundamentally similar to what’s done in natural science. That is, we generate “predictions” from our conjectures, and then try to test those predictions (either by proof or by numerical calculation). Every nontrivial passed test strengthens our confidence in the conjecture; a failed test will force us to either reformulate the conjecture or abandon it entirely. We appeal to Occam’s Razor—trusting that the mathematical world is “far simpler than we might’ve feared it was,” and that such trust has been amply repaid in the past. We look for related questions that we can answer, and try to reason by analogy. And so on.

    In fact, even Lubos basically agrees with the above, as long as we’re talking about “continuous” math! The only issue, then, is that he announces a completely arbitrary rule that we’re not allowed to apply scientific-style reasoning in discrete math. Lubos’s rule couldn’t be followed even if one wanted to follow it, for the simple reason that (as any good mathematician knows) there’s no clear boundary between continuous and discrete math. Many “continuous” problems have combinatorics at their core, and conversely, many of the best insights about NP-complete problems (for example, the recent work on SDP relaxation and unique games) come from making connections to continuous math. But even if there were a sharp distinction between continuous and discrete math, there would still be no rational justification for Lubos’s rule: only the prejudice of which he’s so quick to accuse everyone else.

  114. Greg Kuperberg Says:

    Lubos – “An interesting breakthrough would be actually to find the algorithm for SAT or salesman or something else proving P=NP. But I am afraid that even that could be boring.”

    Whether or not it is likely, it certainly wouldn’t be boring! That much has been established rigorously. Because part of the importance of 3-SAT is that it is actually easy, not merely possible, to encode any problem in NP into 3-SAT. For me at least it is clearer to start with CircuitSAT, satisfiability of digital circuits. It is entirely routine and standard to draw a digital circuit to express a cryptography problem or any other programmable existence problem — that’s practically what programmability means. It is equally routine, if slightly less standard, to re-express a digital circuit as 3-SAT with a constant overhead factor.
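
    (To illustrate just how routine the gate-by-gate translation is, here is a minimal sketch of the standard encoding of a single AND gate; the variable numbering and the sign-for-negation convention are the usual arbitrary choices.)

      # Clauses asserting c <-> (a AND b): the building block of the routine
      # circuit-to-SAT (Tseitin-style) translation. Literals are integers,
      # with a negative sign meaning negation, as in the DIMACS convention.
      def and_gate_clauses(a, b, c):
          return [
              [-a, -b, c],   # (a AND b) implies c
              [a, -c],       # c implies a
              [b, -c],       # c implies b
          ]

      # A whole circuit is just one such clause group per gate, plus a unit
      # clause forcing the output wire to be true; padding the 2-clauses up to
      # width 3 with a dummy variable gives strict 3-SAT at constant overhead.
      print(and_gate_clauses(1, 2, 3))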

    I assigned a related homework problem in my combinatorics class recently. I had the students verify a construction of an AND gate as a graph 3-coloring gadget; and I did the more trivial NOT gate in class.

    So, if you have found an efficient algorithm for 3-SAT (or 3-colorability, etc.), then an immediate, elementary corollary is an efficient algorithm for CircuitSAT. So then you have shown that cryptography does not exist: All digital cryptography can be broken by The Great Algorithm; all cryptography is illusory.

    Again, short of proving that P ≠ NP, there is no way to refute the opinion that they could be equal. That it could be boring is untenable. In fact it would be so exciting that it would make a vast number of other topics in math and computer science boring. Unless maybe The Great Algorithm has time complexity Θ(n^10000) or similar — but that too would be unprecedented and anything but boring.

  115. Sasho Says:

    Let me make one more serious comment. The Unique Games Conjecture also seems to lead to some striking coincidences: Max Cut is UG-hard to approximate better than the Goemans-Williamson constant, Vertex Cover is UG-hard to approximate better than the trivial 2, likewise for Max Acyclic Subgraph. However it is not clear to me if this necessarily is evidence that UGC is true. In some formal sense we know that many of the coincidences have a single “root cause”: that UGC implies that a single semidefinite program is best possible for all MaxCSP problems. Subsequent work by Kumar, Manokaran, Tulsiani, and Vishnoi established an LP which is optimal under UGC for classes of CSPs with strict constraints, including Vertex Cover and other covering and packing problems. So the multiple coincidences really collapse into a small number of coincidences, although those are even more shocking. But now the argument from numbers is undermined, and you can see UGC as identifying the hardness structure that poses a barrier to our best current methods: linear programs with added semidefinite “correlation constraints,” or in other words a very low level of the Lasserre hierarchy. Recent work has suggested that the complexity world could look very different once we understand higher levels of Lasserre better.

    So all this maybe suggests a similar question for P vs NP. Are the multiple coincidences really one coincidence in hiding? Would we some day discover a super-method that all our current poly-time algorithms are special cases of, and see that the theory of NP-completeness merely outlines the boundaries of the power of this super-method? While I prefer to believe that UGC is false (how exciting would a better approximation algorithm for Vertex Cover or Max Cut be!), I still think that it is very very unlikely that P=NP.

  116. Sasho Says:

    I made the mistake of reading back some of the comments, and saw that Lubos claims that P \neq NP does not make any weaker and verifiable predictions. This is very easily shown to be completely false. *Any* of the many integrality gaps that agree with NP-hardness is such a prediction that has already been rigorously verified. By now we have proven integrality gaps matching NP-hardness for hierarchies of linear programs, and in fact for *arbitrary* linear programs with a polynomial number of facets.

  117. Scott Says:

    Dear Ignacio:

      I have genuine curiosity here, not an agenda

    Great! You didn’t always write in a way that made that clear, but let me assume it and try to satisfy your curiosity. To take your points in the order I remember them:

    1. Every NP-hard problem looks easy as long as you’re only looking at small instances (say, N=10). Just about any approach will work great for such instances. But yes, I can make the practical prediction that, if you take (say) a state-of-the-art 4096-bit cryptographic hash function, and reduce the problem of inverting it to the Traveling Salesman Problem, then you’ll get an instance for which Mathematica will croak.

    Note that this is a prediction for which people “put their money where their mouths are”! If it’s false, then most of the world’s cryptography would immediately be broken. And as we learned from Snowden, it appears that not even the NSA has the ability to break properly-implemented strong crypto (since if they did, they wouldn’t have to resort to so many backdoors and other shenanigans!).

    Keep in mind, also, that P≠NP is only a statement about the worst case. Yes, there are many tools out there that handle various real-world instances of NP-hard problems “well enough in practice”—in fact lots of CS research in recent years has been about understanding and explaining that behavior. But if a tool fails when given (e.g.) cryptographic instances, then it doesn’t bear directly on the P vs. NP question.

    2. You write, “It matters for your arguments whether the NP complete problems are 10000 or one.” What matters, I think, is that the different NP-complete problems—or “different facets of the same mega-problem,” if you prefer—are sufficiently different from each other that they suggest totally different intuitions and algorithms.

    That is: for some, there’s a “natural continuous relaxation” (e.g., a linear or semidefinite program) that you’d expect to approximate the answer; for others there’s not. For some a greedy approach does pretty well; for others it fails completely. For some, the constraints relating the variables are “geometrically local” in some low-dimensional space; for others not. And so on.

    Yet, despite these differences, what we find is that the different kinds of algorithms that are natural for the different NP-complete problems—greedy for one, LP for another, etc.—repeatedly go right up to, but never exceed, the different limits on their respective performances that would follow if P≠NP.

    So, faced with all these different confirmed predictions of the P≠NP hypothesis, how does Lubos respond? Amusingly, he says we don’t get to count the predictions as “different,” since the problems’ NP-completeness means they’re all “really the same” anyway!

    The irony of that argument seems completely lost on him. Look, suppose he’s right and P=NP. In that case, all P problems would be NP-complete as well! So all the complicated relationships we know between “today’s” NP-complete problems would’ve turned out to be just a historical fluke. On the P=NP hypothesis, then, the known NP-complete problems shouldn’t behave like “the same problem” at all—in fact, they should behave no more like each other than they do like the known problems in P! Just because we hit the electric fence for one NP-complete problem wouldn’t mean we couldn’t cross it for the next problem using a different approach, etc.

    In summary, Lubos can either

    (a) consider the known NP-complete problems “morally different”—but in that case, he should be impressed by the success of the single P≠NP hypothesis in predicting all the different barriers of different approximation algorithms for all these different problems, or else

    (b) he can consider the known NP-complete problems “morally the same”—but in that case, he’s already implicitly admitted defeat, since if P=NP then these problems wouldn’t be morally the same, any more than any other pair of NP problems!

    He can’t have it both ways.

    More coming in the next comment…

  118. Sanjeev Arora Says:

    Scott,

    Great work explaining the issues. But, I must say I find your energy on these issues quite amazing. Why is it so important to convince Lubos and his followers about anything? Isn’t the world full of nutjobs holding the nuttiest opinions?

    Even the great Feynman apparently didn’t see the point of studying P vs NP. If anything, this should remind us that physicists think differently than us. As do English professors, Economists, Mathematicians, Politicians, you name it. Joy to them.

  119. Scott Says:

    Ignacio (con’t):

    3. You write, “Look there is a wide gulf between saying expertise is useless and expertise is not enough.” I wholeheartedly agree with you that expertise is not enough! We should always be prepared for the possibility of having our minds blown—by P=NP, quantum mechanics or Big Bang cosmology being overturned, or almost anything else, no matter how unlikely it seems to today’s experts. (If you recall, I did suggest only a ~99% confidence for P≠NP, and not something higher.)

    But you do agree that expertise is not useless? In that case, I want you to understand just how different your position is from Lubos’s. Lubos’s position is precisely that, with P vs. NP, expertise is useless—indeed, that when it comes to discrete math as a whole (but not continuous math!), there’s not even such a thing as “expertise,” over and above the set of theorems that have already been proved. (One wonders: how were all those theorems proved in the first place, then? By dumb luck, I guess…)

    That’s why it’s so important to him to insist that all the conceptual progress, all the insight and intuition and partial results obtained in computer science over the past half-century haven’t even moved the “prior probabilities” on P vs. NP one iota away from 50/50: because what he really wants to do, is to deny that any conceptual progress exists.

    4. You write:

      On a somewhat separate issue I noticed that there is apparently a way around the Harlow-Hayden complexity bounds for black-holes. I have not really had time to look at it in any level of detail or to decide whether it is reasonable for me to tilt in one direction or the other. However, the point is that complexity bounds are shifty things as far as I can tell. There is no point in being dogmatic about them.

    If you’d like to get beyond “complexity bounds are shifty” to understand what’s really going on here, then I’m happy to explain!

    Oppenheim and Unruh showed that, if you engineered your own custom black hole, and if it was maximally entangled with a fault-tolerant quantum computer that remained outside the hole, and if, before forming the BH, you were willing to spend exponential time (meaning, something like 2^(10^67) years!) to put the quantum computer into just the right state—then you could get a BH to which the Harlow-Hayden argument doesn’t apply. This seems to be related to the observation, by Susskind and others, that the Harlow-Hayden argument doesn’t apply in the AdS/CFT “thermofield double state” universe.

    Personally, I find these observations fascinating, and see no good reason to doubt them! The simplest way to interpret them would be that the HH argument applies “only” to real, astrophysical black holes formed by collapse: the kind whose Hawking radiation is produced by “generic” scrambling dynamics on the horizon. It doesn’t always apply to hypothetical, fine-tuned black holes that would take exponential time to prepare, or that can exist only in special toy universes.

    This, in turn, might motivate us to think further about exactly which properties of a black hole are necessary and sufficient for an HH-style argument to apply to it. But then before we knew it we’d be engaging in actual research, rather than a blog comment-war… 🙂

  120. J Says:

    @Scott any comments on 80?

  121. Łukasz Grabowski Says:

    Scott #51:

    what is it that makes you think that nonuniformity should make such a big difference? The way I think about it, a nonuniform algorithm is really just the “limit” of an infinite sequence of better and better uniform ones

    I think the same 🙂 And it seems to me not unlikely that the constant c in c^n could be improved for large inputs. I think there could be some tiny bits of structure which perhaps are not entirely dissimilar to what Lubos is saying – i.e. if you’re really just looking at a particular input size, then you should be able to improve on brute force because of some random correlations which appear and which don’t really have any explanation.

    A bit like the Aristotelian “humans are those animals which walk on two legs, don’t have feathers and aren’t kangaroos”.

    I could imagine something like that for SAT. “The formulas of length n which are satisfiable are those such that…” and then some complicated relatively short formula which really doesn’t have anything to do with satisfiability itself.

    There’s one other even less convincing argument 🙂 At some point humanity must encounter a mathematical problem which is honestly interesting but really is absolutely untacklable (i.e. a conjecture which is not undecidable but, say, true, with no proof of it in any sensible axiomatic system). Questions which involve lower bounds on circuit sizes seem to me very good candidates. (On the other hand, I’d bet money on someone showing a proof of the exponential time hypothesis, or at least P\neq NP, in the next 100 years.)

  122. Vadim Says:

    Lubos is like a real-life, actual straw man. I’m glad he’s part of the debate because he makes Scott and others argue things that they otherwise would probably never argue because they must seem so obvious to them. To an interested layman, though, these arguments are very illuminating and much more accessible than debates between two TCS experts.

  123. Fred Says:

    P=NP would have extraordinary practical consequences.
    But what are the practical consequences of proving that P!=NP?
    Would it make it more valid to try to reframe some physics problems in terms of computational complexity? (Like for the Susskind paper about black holes that you discussed the other day.)

  124. PeterM Says:

    I think that there is some funny inconsistency in the argument of Lubos. Namely, the already quoted sentences by him “On the other hand, partial evidence in discrete mathematics just cannot exist.” and “In continuous mathematics that is more physics-wise, there may perhaps be partial evidence.” seem to take for granted some ABSOLUTE SEPARATION between continuous and discrete mathematics. What is funny about it, of course, is that the P vs NP question is itself a question about a certain type of separation. Of course, there is the big difference that this taken-for-granted separation by Lubos is vague while the P vs NP question is completely well defined. But still, if one wants to take his argument seriously, then this assumed separation needs to be taken seriously at least on some vague level. But: on a “vague level” this separation of continuous and discrete mathematics is KNOWN TO BE false (scaling limits of discrete probabilistic structures, topological dynamical arguments in Ramsey theory, Gromov-Hausdorff limits of discrete groups and other ultrafilter-powered constructions; I personally cannot recall more immediate examples, but to refute an absolute separation it is enough).

    So, at an abstract level, his argument seems to be the following:
    “there is no particular reason to be deeply convinced that ‘separation statement A’ should hold without a proof, and this is because of ‘vague separation B’, which allows being biased in some cases while ABSOLUTELY forbidding it in others.”

    And if one asks, still at the abstract level, OK, what high-level things could be said about “A” and “B” above, then we could say, well, an incredible amount of sophisticated work has been done around “A”, while “B” is known to be false.

    I know that other comments have also criticized this claimed separation between the two types of mathematics, but what I want to emphasize is that the extra funny thing about this is that it is of a similar flavor (except that it is outright wrong) to the P vs NP question itself. Ironically though, it may not be too bad an analogy to illustrate why it is not obvious that “P is not NP”: besides pointing out that there are so many non-obvious algorithms, you could point out, “you know, on a gut level you also may feel that, duh, there is a separation between continuous and discrete, but then, you know, you learn, and you realize the links you could not dream of, and you become skeptical of separations”. (OK, a particularly weak part of the analogy may be that in the discrete/continuous case there is no obvious ordering by difficulty.)

  125. xyz Says:

    If you reduce a 3-SAT problem to an unknown NP-complete problem, and the resulting problem again to an unknown NP-complete problem ….. and the resulting problem to a Traveling Salesman Problem, is it NP-complete to determine what the original problem was?

  126. Attila Szasz Says:

    Sanjeev #118: I heard it’s more that Feynman’s relation to P vs NP was that he didn’t believe it was an open problem, because P!=NP was so obvious from a physicist’s standpoint.

    I agree with your general sentiment though, this comment board is getting to be quite difficult and probably also energy-consuming for the participants while hardly producing any merit. (Ok, I really liked the original post, but given how much crackpot-attention is generated when someone puts P vs NP in the title, along with especially addressing this ignorant person who clearly doesn’t know an undergrad’s worth of CS… If I was that annoyed and eager to reply something, I’d probably just blog back a single link to the fine textbook of yours and Boaz Barak, along with “read first, dude” and perhaps a smiley.)

  127. matt Says:

    I believe the biggest irony in Lubos’ post is that he (ok, slightly out of context) equates the probability that climate change will be a problem with the probability that P is not equal to NP. Guess Lubos is an alarmist, as he would put it?

  128. Greg Kuperberg Says:

    PeterM – “[Lubos] seems to take for granted some ABSOLUTE SEPARATION between continuous and discrete mathematics.”

    As long as it is strictly discrete questions about finite sets, or the integers or even the rational numbers, then there is no such thing as partial evidence. But once you take the completion so that Cauchy sequences converge, then partial evidence can exist. 🙂

  129. PeterM Says:

    The following is a science-fiction-flavored question: assume that there is some process in the universe which amounts to running an effective algorithm for an NP-complete problem. Could we notice the running of such a process?
    What if the process “wants itself to be noticed” (say, a galactic civilization)? Could it make a big enough difference?
    If the answer is “yes”, then somehow the empirical argument that
    “in the last 30 years, much effort by humans did not lead to any effective algorithm to solve SAT” could be replaced by: “in the last 13 billion years, no physical process in the Universe seems to have amounted to the running of an effective algorithm for SAT”. Of course here is the big caveat of how we would notice it if it is only used in some weird video game in the next galaxy, or is running as part of the immune system of a deep-sea creature; but if it is run by something which wants to be noticed, couldn’t it make a really big difference? In the last post Scott wrote about the work of Susskind, which I could not yet grasp, but it was claimed that really wicked things could happen to spacetime which seem to be prevented only by complexity bounds. I am curious whether that also implies that a noticeable difference could be made by someone capable of effectively solving SAT.
    Then maybe we could say that “no agent in the universe who wants to be noticed has ever found an effective SAT-deciding algorithm; isn’t it weird if there is one?”

  130. Hal Swyers Says:

    I wanted to clarify my comments and so wrote a post where I made some of the arguments more clear. The impact of Moore’s law as a practical mitigator of complexity drives one to a linear scaling of complexity with time for exponential problems. TSP can already be driven to O(n^2 2^n) complexity, and since n^2 is quickly dominated by 2^n it isn’t hard to see that Moore’s law can keep pace.
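
    (For reference, the O(n^2 2^n) bound mentioned above is presumably the classic Held-Karp dynamic program; here is a minimal sketch, assuming a dense distance matrix and a tour that starts and ends at city 0.)

      # Minimal Held-Karp sketch: O(n^2 * 2^n) time, O(n * 2^n) memory.
      # dist is an n x n matrix of pairwise distances; returns the optimal tour length.
      from itertools import combinations

      def held_karp(dist):
          n = len(dist)
          # best[(S, j)] = cheapest path from 0 through the set S of cities, ending at j
          best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
          for size in range(2, n):
              for subset in combinations(range(1, n), size):
                  S = frozenset(subset)
                  for j in subset:
                      best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                         for k in subset if k != j)
          full = frozenset(range(1, n))
          return min(best[(full, j)] + dist[j][0] for j in range(1, n))

      print(held_karp([[0, 2, 9, 10],
                       [1, 0, 6, 4],
                       [15, 7, 0, 8],
                       [6, 3, 12, 0]]))   # 21 for this small instance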

    http://thefurloff.com/2014/03/09/moores-law-drives-computational-resources-pnp-practically-part-2/

  131. Scott Says:

    Hal Swyers #130: The problem is that Moore’s Law isn’t a “law” at all; it was a temporary historical phenomenon that’s already ending! Transistor densities have already stopped increasing, forcing people to go to multicore if they want to continue to get improvements (and multicore will also hit barriers from latency and from the inherent limits of parallelization). And even if all the current technological issues were miraculously solved, certainly Moore’s Law would stop once computing speeds and memories hit the fundamental limits imposed by quantum gravity! (E.g., you can’t store more than ~10^69 bits per square meter, or run more than ~10^43 clock cycles per second, without creating a black hole.) And whenever the music stops, the P vs. NP problem will still be there like before, serenely unaffected by all the technological improvements.
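
    (As a back-of-the-envelope illustration, take the ~10^43 operations-per-second figure above at face value and ask how long a brute-force search over 2^256 possibilities would take; the 256-bit figure is chosen purely as an example.)

      # Even a computer running at the quantum-gravity limit of ~1e43 operations
      # per second, quoted above, gets nowhere against brute force: how long to
      # enumerate 2^256 keys? (256 bits is just an illustrative search-space size.)
      ops = 2 ** 256
      rate = 1e43                      # operations per second
      seconds_per_year = 3.15e7
      print(ops / rate / seconds_per_year)   # ~3.7e+26 years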

    The only existing proposal that could fundamentally change the situation would be quantum computing, which is exactly why many of us got interested in it! But, as you could learn by reading the tagline of this blog 🙂 , not even quantum computers are known or believed to be able to solve NP-complete problems in polynomial time.

    Of course, even if everything I said above was wrong, it still wouldn’t alter the outlook for P vs. NP as a pure mathematical question. But because what I said is not wrong, 🙂 P≠NP (and related conjectures like NP⊄BPP and NP⊄BQP) remain not only our best guesses mathematically, but practically relevant as well. For example, it’s only because of them that most forms of cryptography should continue to be possible.

  132. ppnl Says:

    Actually transistor density is still increasing, just not as fast. The real problem is a thermal barrier that prevents higher clock speeds with such dense transistors. To get more processing power you have to use multiple cores (which require smaller transistors) operating at a lower clock speed.

    But your point remains. The increase in computing power is a logistic curve rather than an exponential.

    I hope quantum computers do work out. I have also wondered if a deeper physical theory, say quantum gravity, will give even more of a speed up of some problems. Maybe that is why quantum gravity is so hard to figure out.

  133. Greg Kuperberg Says:

    Scott – So what kind of cryptography is still possible if P = NP? Okay, you could find ways to share large keys. Anything else?

  134. Scott Says:

    Greg #133: The one-time pad, quantum crypto, variations on those two, and that’s all I can think of.

  135. Sniffnoy Says:

    Shouldn’t symmetric crypto in general still work? Or does that count as a variation on the one-time pad?

  136. Kenneth W. Regan Says:

    Scott, regarding the “Update March 28th”, is it OK if I/we wait 20 days to react, or can I get it done now thru the comment box in the AMPS-experiment/firewall/wormholes post? 🙂 Well let me ask how much it would disturb the examples in your post if SAT were to have an n^20 algorithm that can be improved to n^14 but not much more (where we might see 70-page papers by multiple people improving the exponent from 13.76 to 13.74 to 13.73…)—?

  137. Fred Says:

    PeterM #129
    “in the last 30 years, many effort by humans did not lead to any effective algorithm to solve SAT.” could be replaced by: “in the last 13 billion years no physical process in the Universe seems to amount to the running of an effective algorithm for SAT”.

    Imagine observing a planet that’s nothing but raging oceans and sterile lands. Then fast forward a few billion years, and that planet has spontaneously transformed itself in such a way that it can send pieces of itself flying anywhere in its solar system and produce nuclear reactions on a scale that happens only in stars… If that’s not a feat at least on the scale of solving a difficult 3SAT problem efficiently, I don’t know what is…

  138. Scott Says:

    Ken #136: Thanks for catching the mistaken date. 🙂 Regarding the n^14 SAT algorithm, see my comment #45. Briefly, I regard it as a logical possibility, but an extremely ugly one, and one that’s correspondingly unlikely to be true. Certainly it would leave all the examples like my third one (where a bunch of reductions get combined to reproduce a 2^O(n) algorithm for SAT, but nothing faster) weird and unexplained.

  139. Rahul Says:

    Silly question: How much effort do you need to type “Luboš” the right way? I’m impressed. 🙂

  140. Scott Says:

    Rahul #139: I just copy and paste. 🙂

  141. Sid K Says:

    Scott,

    After seeing your update, let me just say that I really appreciate you taking the time to write clearly and accessibly about the state of the art in theoretical computer science. As you point out, it doesn’t have to be in response to Luboš; the rest of us can enjoy it nonetheless. While the things you wrote about might be obvious to computer scientists, they are far from obvious to the rest of us.

    So, definitely don’t trouble yourself about Luboš, but keep writing the good stuff.

  142. Mark Says:

    I’m just an undergrad but this is really interesting, thanks. Don’t the equivalences/conversions between problems reduce the number of frogs? How many frogs are there really?

  143. Scott Says:

    J #80: I think you’re wrong that the paucity of circuit lower bounds constitutes any sort of positive evidence for P=NP. The reason is that we understand in great detail why proving circuit lower bounds is so hard! In other words, even supposing that the superlinear circuit lower bounds are true, it’s unsurprising that at the current stage of mathematics we wouldn’t be able to prove them for most problems. Doing so already requires overcoming natural proofs, etc.! On the other hand, where we can get around the barriers, we can prove circuit lower bounds (e.g., ZPP^NP ⊄ SIZE(n^k), MA_EXP ⊄ P/poly, …), so we know that there can’t be any sort of general “law” against them. Not that there could’ve been such a law anyway! Indeed, a crucial fact that your argument doesn’t engage with, is that we already know from the hierarchy theorems that hard problems exist! (Even “practical” problems in AI and other fields, whenever they can be proved EXP-complete or above.) So it would be astonishing if (as you seem to suggest) none of the problems whose complexity is currently unknown were among the hard ones, and all just so happened to be solvable in linear time.

  144. Alex Says:

    Scott, I have to say I am sorry to see you ban Luboš! I disagree with almost everything Luboš says, but I think you were completely correct when you said the most exciting part of your new book was that “Luboš Motl is reading the free copy of QCSD that I sent him and blogging his reactions chapter-by-chapter!”

    I see the two of you much like I see Jon Stewart and Bill O’Reilly. One of you is obviously correct, but the arguments are so entertaining! I think you have the most interesting blog in the world in part because you engage the Motls of the world.

    Obviously if arguing with him is causing you stress then it’s not worth doing. I just wanted to let you know that at least I appreciate it.

  145. Scott Says:

    Mark #142: Good question! See comment #117 part 2 for my answer. Briefly, I’d say that there are tens of thousands of frogs: the reductions/equivalences between problems were supposed to be modeled in my analogy by two of the frogs being interfertile. If you really wanted, you could say there were just a few frogs, but then you should add that these are giant frogs with tens of thousands of separate sets of genitalia. 🙂

  146. Vladimir Putin Says:

    I think you misunderstood the person complaining about your description of LM as an asshole. There are plenty of perfectly respectable assholes out there.

  147. Greg Kuperberg Says:

    Besides frogs, here is a different animal metaphor: The parable of the blind men and the elephant. What was seen for centuries as many different algorithm questions turned out to be one elephant, the class of NP-complete problems. The fact that all of these questions looked difficult separately, and yet turned out to be only one animal, is indeed evidence that it is a formidable animal.

    I can also point out that computer scientists have confidence in complexity theory for a reason that theoretical physicists should be able to appreciate: consistency checks. Complexity theory offers a consistent-looking landscape with many complexity classes, not just P and NP, and many conjectures for which ones are the same and which ones are different. There is some controversy around the edges and there have been a few medium-sized surprises. (Like IP = PSPACE, which was a non-relativizing result and refuted the random oracle hypothesis.) But most of the landscape holds together pretty well, which is an intellectual status that is a little like string theory and many quantum field theories.

    Any smart-aleck could say, for all you know, poof, non-perturbative string theory does not exist. Or for all you know, poof, non-perturbative 4D Yang-Mills does not exist except as an approximation to something else. But these are very strange claims. At a bare minimum, you have to earn the right to make such claims by carefully studying what you think doesn’t exist. Even though P ≠ NP is a priori the non-existence of an algorithm, from another point of view P = NP implies the non-existence of a great deal else that computer scientists study: Non-existence of most cryptography, non-existence of the polynomial hierarchy, non-existence of many quantum accelerations, etc.

  148. Scott Says:

    Alex #144: Well then, I’m glad you’ve enjoyed the interaction that there’s been, and you’re more than welcome to reread the archives whenever you’d like to relive it!

    The specific problem with Luboš is that he never stops: get into an argument with him, and having apparently no other responsibilities in life, he’ll just keep spewing more and more garbage forever, so that you’d need to drop everything to refute it all. As someone with a job and a family, I knew there was no way I could possibly match him at that for long.

    So, that’s why I decided to do what I did: the one week when I had time for it, I exposed his intellectual dishonesty once and for all for the entire sane world to see (while also, I hope, providing some modest edutainment…). Then I publicly committed not to interact with him for 3 years. That way, when Luboš inevitably tries to bait me again, I’ll have something I can point people to, to explain why I’ve chosen to spend my time more rewardingly.

    In the meantime, if you’re worried that this blog will become too tame without him … well, my guess is that other people will continue to be wrong on the Internet. 🙂

  149. Scott Says:

    Greg #147: Thanks; that’s extremely well-said!

  150. Rahul Says:

    Any comments on this opinion by Moshe Vardi (Rice Univ.) that was interestingly contrarian:

      I do not really have any deep intuition in favor of P=NP. I do not, however, believe that the evidence in favor of P ≠ NP is as strong as it is widely believed to be. The main argument in favor of P ≠ NP is the total lack of fundamental progress in the area of exhaustive search. This is, in my opinion, a very weak argument. The space of algorithms is very large and we are only at the beginning of its exploration. Witness the non-constructive tractability proofs in the area of graph minors and tractability proofs in the area of group theory that are based on the very deep classification of finite simple groups. The resolution of Fermat’s Last Theorem also shows that very simple questions may be settled only by very deep theories. Over the past two decades we have seen several major lines of attack on the P vs NP question. I myself was involved in one of them, that of finite-model theory. All these lines of attack yielded beautiful theories, but there is little reason to believe that they led us any closer to resolving the problem.

  151. Rahul Says:

    Greg Kuperberg #133

    Scott – So what kind of cryptography is still possible if P = NP? Okay, you could find ways to share large keys. Anything else?

    Scott #134 answers:
    The one-time pad, quantum crypto, variations on those two, and that’s all I can think of.

    Why does P = NP imply the non-existence of most cryptography?

    I personally think P = NP is very unlikely, but say it is indeed true: couldn’t it be that, because of a large constant, a high-degree polynomial, or the sheer cumbersomeness of the reduction procedure, the practical effect on the ease of a factoring problem is negligible?

    Does P = NP necessarily have to be a threat to cryptography? The result P=NP is in itself so weird that, if it is so, then imagining the P-NP connection to be of so weird a nature as to make its practical exploitation impossible is not that weird.

  152. Rahul Says:

    ppnl #132:

    Actually transistor density is still decreasing just not as fast. The real problem is a thermal barrier that prevents higher clock speeds with such dense transistors.

    How come I don’t see much effort to run chips cooler? I mean, sure, there’s detailed heat-transfer modelling & better heat-sink design, fans etc., but the average chip surface temperature seems about constant over the last 30 years or so? The effort seems to be to get more heat out of the chip; yes, that’s been a success.

    Or am I wrong?

    Except for the nerdy fringe of overclockers, there’s not much effort to run chips at, say, sub-zero temperatures? Is the cooling overhead just not worth it? If the barrier is indeed thermal, isn’t dropping temperatures the obvious workaround?

    What gives?

  153. jonas Says:

    You’ve slightly disappointed me with this update, Scott. I knew your goal was not to convince Luboš, but I didn’t think his rants were your most important motivation for this post. I thought you’d just written it to generally inform people about why the P vs. NP problem is important and why most scientists have a good reason to expect that P is not equal to NP.

  154. A.Vlasov Says:

    A technical question: what is the relation (if any) between P vs NP and the halting problem?

  155. Fred Says:

    It seems that one of the arguments is:
    Solving the P vs NP question is in itself NP-hard; no one has done it yet; therefore P!=NP is more likely.
    A self-fulfilling prophecy.

  156. Hal Swyers Says:

    @Scott #131
    First, thanks for taking the time to respond. It’s much appreciated, and the dialogue is welcome.

    I wanted to assure you I am well aware of the limitations you pointed as far as some of the technical challenges as well as the interest in quantum computing, in fact amongst all the other obligations I have, I spare ever spare minute digging into these questions as well. My background is engineering and operations research, so hopefully I can be humored at least a little longer 🙂

    While the question posed by P=NP or P != NP is purely mathematical, the practical realities favor P=NP, and the connection to math and physics is likely to be more robust than what people think.

    If we are willing to take Turing’s concept of his Turing machine as an example, the standard view would have us think that machine will be in some configuration, read some input from a strip of paper, then perform some operation from a table and then choose to move and then produce an output etc, placing it in a new configuration.

    This is ok, except the table really should be dynamically update-able and the tape should really loop through the table, e.g.operations and inputs and outputs are not truly separable, and each subsequent operation will be more “efficient” than our previous operation.

    This approach brings our Moore’s law into a mathematical footing and so arbitrary dismissal based on engineering arguments should be slightly reserved (although I don’t disagree with some of the technical issues you bring up).

    The turing machine thereby can be viewed as learning, and therefore does not have to repeat certain steps. Moore’s law is simply a reflection of a broader learning on how to perform more calculations in a some period of time.

    If you look at resource-allocation problems, you will find that real-world planners never follow a crude linear extrapolation of effort; learning must be included in every planned effort. So the concepts embodied by Moore’s law are really a reflection of a broader realization of how to approach real-world allocations.

    In other words, if you came to me to ask for money to pay for some computation of complexity NP, I would allocate you resources that scale with P, simply because any other answer would be an over-allocation. History has repeatedly shown this to be true, so much so that there is an entire industry based exactly on understanding these sorts of issues.

    http://en.wikipedia.org/wiki/Learning_curve

    So I go back to the position that there is some asymptotic limit at which P != NP, but that from a practical standpoint P = NP, and I have little evidence to show this isn’t the case.

  157. Jay Says:

    >these are giant frogs with tens of thousands of separate sets of genitalia.

    I want Abstruse Goose and Randall Munroe to design T-shirts of these giant frogs.

  158. John Preskill Says:

    In some ways P≠NP reminds me of the AdS/CFT conjecture in string theory. It’s an imperfect analogy — I think we have stronger reasons to believe P≠NP than AdS/CFT — but both are deep and fundamental statements, and we lack good ideas about how to prove either one.

    Our belief in both conjectures is strengthened by many nontrivial consistency checks, some seemingly miraculous. That’s the point of the “invisible fence” argument for P≠NP, which I find enlightening.

  159. Greg Kuperberg Says:

    Rahul – Why does P = NP imply the non-existence of most cryptography?

    Because encryption can typically be expressed with a fixed-size logic circuit, and code-breaking can be expressed as circuit satisfiability. For instance this is certainly the case with AES, even (or especially) with only a little encrypted data. You can guess that the unencrypted data has many highly predictable or completely predictable bits (e.g., a mail header), and then you are left to guess the unpredictable data bits (if any) and the encryption key. AES is some fixed logic circuit that has been accepted as standard, and breaking it is a clear example of CircuitSAT.
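
    To make the shape of that reduction concrete, here is a minimal Python sketch. The 16-bit toy cipher below and its key size are invented purely for illustration (it is nothing like AES); the point is only that “find a key consistent with one known plaintext/ciphertext pair” is a satisfiability question about a fixed circuit, answered here by brute force where a SAT solver would attack the same circuit symbolically.

      # A minimal sketch (not AES!): a toy 16-bit cipher, to illustrate how
      # "break the cipher given known plaintext" is a satisfiability question
      # over a fixed circuit.  The cipher and key size are invented for illustration.

      def toy_encrypt(plaintext, key):
          """Two rounds of XOR, rotate, and add on 16-bit words (purely illustrative)."""
          x = plaintext & 0xFFFF
          for r in range(2):
              x ^= key
              x = ((x << 3) | (x >> 13)) & 0xFFFF   # rotate left by 3
              x = (x + r) & 0xFFFF
          return x

      def break_by_search(plaintext, ciphertext):
          """Return every 16-bit key consistent with one known (plaintext, ciphertext)
          pair -- i.e., decide satisfiability of the cipher circuit by brute force."""
          return [k for k in range(2**16) if toy_encrypt(plaintext, k) == ciphertext]

      secret_key = 0xBEEF
      pt = 0x1234                      # the "predictable" bits, e.g. a mail header
      ct = toy_encrypt(pt, secret_key)
      candidates = break_by_search(pt, ct)
      print(f"{len(candidates)} candidate key(s); real key recovered: {secret_key in candidates}")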

    “say it is indeed true, couldn’t it be that because of a large constant or high degree of polynomial or the sheer cumbersomeness of the reduction procedure …”

    Well, “sheer cumbersomeness” has never been much of an obstacle to computer programmers for very long — in fact, many of them seem to enjoy it. 🙂

    But yes, you could posit that P = NP and then try to escape from the cataclysm by saying that the CircuitSAT algorithm has time complexity Θ(n^10000). That has been a well-known quibble concerning the conjecture from the beginning. The problem is that mathematics would then be playing a very strange trick on humanity: yes, it sure looks like CircuitSAT takes exponential time; but hey, if you could build a computer the size of the sun, then you would see that it is just a very high-degree polynomial.
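
    As a quick numeric aside (a back-of-the-envelope script of my own; the exponent 10000 is just the rhetorical one above), here is roughly where an n^10000-step algorithm would actually drop below 2^n:

      # Where would a hypothetical n^10000-step algorithm beat plain 2^n enumeration?
      import math

      def crossover(degree):
          """Smallest n at which n^degree <= 2^n (compared in log space)."""
          n = 2
          while degree * math.log2(n) > n:
              n += 1
          return n

      n_star = crossover(10000)
      print(f"n^10000 only drops below 2^n at n ~ {n_star:,}")
      print(f"and at that point both are around 2^{n_star}, i.e. ~10^{int(n_star * math.log10(2)):,} steps")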

    You can propose the same sort of trick in the opposite direction: that P ≠ NP, but there is a simple algorithm that runs fast whenever the input has less than an octillion bits. Cryptography looks ruined; but hey, if you could build a computer the size of the sun, then you would see that secure cryptography exists after all.

    I think that these quibbles over P vs NP sidestep what people really believe. Namely, that there is a hard exponential lower bound on the number of (classical) circuit gates that you need to solve CircuitSAT, specifically Ω(2^n) gates to solve circuits with n non-deterministic inputs and depth n (say). And that the constant in this Ω(2^n) isn’t extreme; I bet with some reasonable model of circuit encoding you could set it to 1.

    If you are allowed quantum circuits, then the spectacular result (Grover’s algorithm) is that Õ(2^{n/2}) gates suffice. Still, most people would conjecture that CircuitSAT needs Ω(2^{n/2}) quantum gates, with a tame value for the constant.

  160. Scott Says:

    Hal Swyers #156: Well, the Planck scale is really a hard limit. It can’t be “learned” around, any more than the speed of light or the Second Law of Thermodynamics can be. Of course we have plenty of room to make our computers (and our cars!) more efficient before they hit those limits—and we can find countless clever ways to improve them consistent with the limits, or to achieve things that the limits seemed to rule out but don’t. But we can’t evade the limits themselves, at least not without changing the laws of physics. So we know there are fundamental limits to computation—and if you agree that P≠NP and NP⊄BQP in the mathematical sense, then those limits will presumably make at least some NP search problems intractable. Thus, whatever (good) point you’re trying to make about technological optimism or the value of learning, I don’t think saying “practically P=NP” is a useful way of making it.

  161. ppnl Says:

    Rahul #152

    Getting the heat out has always been a focus. Cray was immersing their processors in refrigerants way back in the 1980s. Currently designers are looking to diamond to replace silicon, in part because of its thermal conductivity.

    Running more processors at a lower, more efficient clock speed is the simplest and cheapest solution to the thermal problem. After all, your brain has a very low clock speed and a very high degree of parallelism.

    But not all problems can easily be parallelized so you still need the high clock speed.

  162. Scott Says:

    John #158: Thanks very much for the comment!

  163. Fred Says:

    Hal Swyers #156
    I think that your special Turing machine where the state table is updated is a case of a program modifying itself. Lots of practical languages allow for that explicitly (e.g. Lisp, where data and program are the same thing) or more implicitly (you can always write a Java or C program that generates a new program as its output). In other words, a Turing machine can always simulate another Turing machine (and modify that one’s state table), so I believe it’s all accounted for in the “default” model (see the sketch below).
    Your point about Moore’s law reminds me of Kurzweil’s argument about self-improving learning machines and his famous singularity.
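
    As a concrete illustration of Fred’s point about the “default” model, here is a minimal sketch (the toy machine and its table encoding are my own inventions): the transition table is just ordinary data, so a program can read or rewrite it between runs.

      # A minimal Turing-machine simulator.  `table` maps (state, symbol) to
      # (new_symbol, move, new_state); because it is plain data, "updating the
      # table" needs nothing beyond the standard model.

      def run_tm(table, tape, state="start", blank="_", max_steps=1000):
          """Simulate a one-tape TM and return the final tape contents."""
          cells = dict(enumerate(tape))
          head = 0
          for _ in range(max_steps):
              if state == "halt":
                  break
              sym = cells.get(head, blank)
              new_sym, move, state = table[(state, sym)]
              cells[head] = new_sym
              head += 1 if move == "R" else -1
          return "".join(cells[i] for i in sorted(cells)).strip(blank)

      # Append one '1' to a unary string: scan right to the first blank, write '1', halt.
      increment = {
          ("start", "1"): ("1", "R", "start"),
          ("start", "_"): ("1", "R", "halt"),
      }
      print(run_tm(increment, "111"))   # -> "1111"

      # The table is data, so it can be modified before (or between) runs --
      # which is all a "dynamically updatable table" amounts to.
      increment[("start", "_")] = ("1", "R", "start2")
      increment[("start2", "_")] = ("1", "R", "halt")
      print(run_tm(increment, "111"))   # -> "11111"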

  164. Scott Says:

    jonas #153: Yeah, that’s an unfortunate truth about my personality. I might have lofty ideals about educating the world, but if you want me to get off my ass and DO something (especially, WRITE something), you probably need to piss me off first!

  165. Ignacio Mosqueira Says:

    Scott

    I may not have caught up with everything you wrote. I will give it a second pass after this post. Let me just say that one can’t get too frustrated about the back and forth of blogs. It isn’t the same thing as publishing in a journal. It is a different form of communication with its strengths and weaknesses.

    I’ve already agreed with you that Lubos is difficult to deal with. Nevertheless I would strongly encourage you not to censor him if indeed you are doing that as he claims. I fully realize that he does the same thing in his own blog but you should be better than that. It does seem to me that this blog censorship thing can in the long term have profound consequences.

    BTW I got Lubos’ point wrong about the N = 10. But still, in general it seems to me that it would be useful to build complexity progressively in order to study it in a controlled way.

    Yes, I will read your comments about Black Holes and complexity and Hayden-Harlow. I take it you believe there is merit to the arguments. Would you go as far as to make an equivalence between that and the BH complementarity idea? I am in fact trying to get through the BH ideas. I don’t know how far I will get and this is only a side interest for me. But at this point the key question for me appears to be exactly when information escapes the black hole. Is this possible for a massive black hole or do you have to wait for the BH to evaporate and quantum effects to become relevant? This question does not seem to be the main one in the minds of researchers based on my read of the state of the field, but it seems to me to be compelling from my beginner’s point of view.

    I saw that Preskill endorsed your points above and compared P != NP to AdS/CFT. I am sure that Lubos would have quite a bit to say about that. And you should let him do it.

    I also watched his talk at Kavli on the BH question. As you probably know Joe Polchinski gave an update here:

    http://online.kitp.ucsb.edu/online/qft14/polchinski/

    He does not really get into the complexity question all that much. He seemed to think that making operators dependent on the state might provide a way to avoid the firewall argument, despite his misgivings on making QM non-linear. This is in spirit similar to things Lubos has endorsed in the past.

    Do you think it is really complexity that saves the day here?

  166. Scott Says:

    Rahul #150: I would actually tend to agree with Moshe Vardi that the work in finite model theory brought us no closer to resolving P vs NP! But I think that other work—especially circuit lower bounds (like Ryan Williams’ recent one), natural proofs, Mulmuley’s GCT, and derandomization—has brought us closer. Not close, but closer, in that it’s hard for me to imagine an eventual solution to the problem not having those ideas somewhere in its intellectual background.

  167. Sniffnoy Says:

    Because encryption can typically be expressed with a fixed-size logic circuit, and code-breaking can be expressed as circuit satisfiability. For instance this is certainly the case with AES, even (or especially) with only a little encrypted data. You can guess that the unencrypted data has many highly predictable or completely predictable bits (e.g., a mail header), and then you are left to guess the unpredictable data bits (if any) and the encryption key. AES is some fixed logic circuit that has been accepted as standard, and breaking it is a clear example of CircuitSAT.

    Beginner question, but — will these known plaintext bits really sufficiently constrain the remaining bits, considering the key could be anything? How do you know there are not many possible keys that will work and yield sensible apparent plaintext messages? And, in many contexts, couldn’t this problem be handled by encrypting only the “bodies” of things? Or by reducing predictability by combining encryption with compression? (You don’t want to just compress and then encrypt, as the compressed file format will probably have header information… you want one where just the “body” is encrypted…) Am I making sense?

  168. Fred Says:

    If the Many-Worlds theory is true there is a practical way to solve NP complete problems in poly time:
    if the problem input has n bits, then perform n quantum “coin flips”. Use the generated sequence to pick one among the 2^n possible solutions. Check the solution.
    If the problem has a solution, you’re *guaranteed* that there is a copy of you in one of the possible 2^n worlds created by the experiment that will find it.
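
    For what it’s worth, the single-world version of this recipe is easy to write down (a toy sketch; the little 3-SAT instance is made up): the checking step is fast, which is exactly what puts the problem in NP, and the entire difficulty is that a random guess only succeeds with probability at least 2^-n per attempt.

      # One "world" of the recipe: flip n coins, read them as a candidate
      # solution, and check it against a (made-up) 3-SAT instance.
      import random

      # Clauses over variables 1..4; literal i means x_i, -i means NOT x_i.
      clauses = [(1, -2, 3), (-1, 2, 4), (-3, -4, 2), (1, 3, -4)]
      n = 4

      def satisfies(assignment, clauses):
          """Polynomial-time check of an assignment (dict var -> bool) -- the easy part."""
          return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

      def one_branch_attempt():
          """n coin flips followed by the fast check."""
          guess = {i: random.random() < 0.5 for i in range(1, n + 1)}
          return guess if satisfies(guess, clauses) else None

      winners = [a for a in (one_branch_attempt() for _ in range(2**n)) if a is not None]
      print(f"{len(winners)} of {2**n} simulated branches found a satisfying assignment")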

  169. Greg Kuperberg Says:

    Scott – There is an art to responding to the preposterous, and I think it’s fine to refute Lubos Motl. Or not to. But I think it’s a mistake to get this angry, and to weave this much personal attack into your response. I’m perfectly willing to get angry and say angry things when people are scheming or illogically influential. But I don’t see that in Lubos’s case. I see little more to it than overdosing on strong opinions. Yes, some of his strong opinions are untenable and rude, but the fact is that I agree with some of his other strong opinions. (I also think that his papers are valuable and I would encourage him to go back to writing them.)

  170. Greg Kuperberg Says:

    At the level of human affairs, the Planck scale is a red herring. In various ways the atomic scale and higher expresses Turing universality: DNA, neurons, machines with gears, transistor computers, societies of people or other organisms, etc. At least universality for classical computation; of course some of us think that quantum computation could also be within reach.

    However, no one has discovered Turing universality in nuclear physics or at any smaller scale. You could hypothesize it, but actually finding it would be its own scientific revolution.

    So, for the foreseeable future the length-scale limit of computation is the atomic scale. It’s interesting that technology has advanced so much that some of the experimental classical and quantum computer technologies really are near the length-scale limit.

  171. John Preskill Says:

    For clarity, an addendum to my comment #158:

    By the AdS/CFT conjecture I mean that the CFT on the boundary provides a complete and consistent nonperturbative (unitary) theory of quantum gravity in the bulk spacetime.

    Part of the reason the analogy with the P≠NP conjecture is imperfect is that we don’t (as far as I know) have a nice way of characterizing what constitutes a “theory of quantum gravity” in terms of something analogous to the Wightman axioms for quantum field theory. Therefore, we don’t (yet) know how to formalize the AdS/CFT conjecture as a precise mathematical statement. (If we did, AdS/CFT might be a good candidate for addition to the list of Clay Millennium problems.)

    Nevertheless, I think it’s an instructive analogy.

    Speaking of Clay problems, I think that P≠NP and AdS/CFT are much more interesting conjectures than the Yang-Mills mass gap, since substantial progress on P≠NP or AdS/CFT is bound to teach us something really important; that seems less clear for Yang-Mills (which is even more likely to be true than AdS/CFT).

  172. J Says:

    Thank you very much for your clarification. Honestly, my credence in P=NP went down: from 80-20 it’s now maybe 90-10 in favor of P≠NP. What are some of the best sources from which I could enlighten myself about the specific results you have mentioned and also get a helicopter view? If you could suggest some, that would be great. Thank you very much again.

  173. Rahul Says:

    Greg #169

    “Yes, some of his strong opinions are untenable and rude, but the fact is that I agree with some of his other strong opinions.”

    Out of curiosity, which opinions are these? The ones you agree with.

  174. Greg Kuperberg Says:

    Rahul #173 – For instance, Lubos has a devastating interpretation of “loop quantum gravity” as an alternative to string theory. Indeed, my outside impression is that loop quantum gravity is a Nader-like quest to compete with string theory. In both science and politics, you should compete when the facts compel you to compete. You shouldn’t compete just for the sake of competing; that’s self-aggrandizement and jealousy.

    Okay, you can compete because you think that friendly competition is fun. But it will only be fun when the facts lend some joy to it.

  175. Scott Says:

    Greg #169: The point of my update was not to say that “Lubos is an asshole.” That he is one, I simply took as a known fact. We’re talking about someone who regularly wishes for the deaths of the people he disagrees with; who says hateful things towards women (“you’re only in science because of your ovaries”), blacks, environmentalists, and much of the rest of the world; and who’s called me “the most corrupt moral trash,” a “deluded bigot,” and I forget what else, solely because of views I’d expressed about string theory and P vs. NP. Anyone familiar with him knows that this collection of quotes barely scratches the surface. So the normal rules of academic discourse don’t exactly apply here, any more than they would if a naked man burst into a seminar room swinging a baseball bat at everyone and screaming (and then, let’s say, asked a question).

    Even so, again, my point was not that he’s a sad and broken human being. My point was a different one: that he doesn’t even have the redeeming virtues of being intellectually honest or particularly often right. If he can get P vs. NP, and complexity and discrete math more generally, so howlingly wrong—if he can be so sure of himself despite knowing the technical facts so little—then why should I trust him about AdS/CFT, or other things that interest me but about which I know less? Why should I think I can learn anything from him, that I couldn’t learn so much better from others? So, that’s why I made the personal decision that I want nothing to do with him for the next three years.

  176. Greg Kuperberg Says:

    Sniffnoy – “Beginner question, but — will these known plaintext bits really sufficiently constrain the remaining bits, considering the key could be anything?”

    The short answer is yes. If you rely on unpredictable plaintext to make cryptography hard, then you are in effect using some of the plaintext as more key. Of course you can do that. You can even intersperse random data in the plaintext, because no one said that the encrypted data has to be as short as the original. But useful data, sort of by definition, comes with some way to interpret it. However you expect the plaintext to be interpreted, that eventually provides enough of a check to tell whether the encryption has been broken. Whatever the check is, it can typically be expressed as CircuitSAT.

    Unless, that is, you have so much explicit and implicit key that there is no conclusive way to know that the encryption has been broken. That is the definition of a “one-time pad.”

    In practice people want a clean opposite standard of cryptography: That there is a rigorous dichotomy between plaintext and key, and that the attacker can in principle dictate the plaintext.

  177. Scott Says:

    J #172:

      What are some of the best sources from which I could enlighten myself about the specific results you have mentioned and also get a helicopter view?

    Try “Computational Complexity: A Modern Approach” by Arora and Barak. For PCP and hardness of approximation, you could also try lecture notes here or here.

  178. Ignacio Mosqueira Says:

    Scott

    It is nice to believe in freedom of speech for those you disagree with, even on your own blog.

    Beyond that, I’d say it is fair to characterize Polchinski as sympathetic to ideas that Lubos has often espoused before concerning BHs. Lubos was a big defender of the Papadodimas-Raju paper from the beginning:

    http://motls.blogspot.com/2013/10/raju-papadodimas-isolate-reasons-why.html

    It now seems that there are others intrigued by that same paper. So it is certainly not accurate to say that Lubos is somehow uninformed or out of the loop. That is more misleading than its converse.

  179. Greg Kuperberg Says:

    Scott – I didn’t make any appeal to “normal rules of discourse”, whatever they are. If Lubos has said damaging things about women and minorities, then I agree, that could be bad, but that has no connection to P vs NP. It’s a mistake for you to group yourself with them. Not only are you not part of any underrepresented group, you have tenure at a top university. I don’t know that Lubos has any academic appointment. So he called you some names, so what. Recall “sticks and stones”.

  180. Scott Says:

    A. Vlasov #154:

      A technical question: what is the relation (if any) between P vs. NP and the halting problem?

    You can think of NP-complete problems, in many ways, as “finitary” or “truncated” versions of the halting problem. Indeed this was Gödel’s perspective in his 1956 letter to von Neumann. It’s literally the case that, if you take standard undecidable problems like tiling the plane or proving a theorem, and then impose a finite upper bound on the size of the region to be tiled or the length of the proof, then you get an NP-complete problem instead. And the analogy extends further in many ways: for example, the polynomial hierarchy is like the “lower-down cousin” of the arithmetic hierarchy.

    Having said that, a clear place where the analogy breaks down is that diagonalization works to prove the halting problem undecidable, whereas it doesn’t work (at least not by itself) to prove that NP-complete problems are hard. This was explained by Baker, Gill, and Solovay in 1975: basically, the halting problem remains unsolvable in any “relativized world,” whereas P and NP can become equal in certain relativized worlds. Another place where the analogy breaks down is that, if a language and its complement are both recursively enumerable, then the language is easily seen to be decidable—but by contrast, we believe that P≠NP∩coNP.

    In short, the analogy has to be interpreted with great care! The way I like to put it is that computability was a wonderful “warmup” to complexity theory, but the latter is really much more complicated.
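
    To make the “truncation” point concrete, here is a minimal sketch (the rewrite rules and strings are invented for illustration): derivability in a string-rewriting system is undecidable in general, but as soon as you bound the number of steps, a proposed derivation becomes a short, quickly checkable certificate, which is exactly the NP shape described above.

      # Bounded derivability in a toy string-rewriting (semi-Thue) system.
      # The unbounded question is undecidable in general; the bounded one is a
      # finite search whose witnesses (derivations) are easy to verify.

      rules = [("ab", "ba"), ("ba", "ab"), ("aa", "b")]   # toy rewrite rules

      def one_step(s):
          """All strings reachable from s by applying one rule at one position."""
          out = set()
          for lhs, rhs in rules:
              start = 0
              while (i := s.find(lhs, start)) != -1:
                  out.add(s[:i] + rhs + s[i + len(lhs):])
                  start = i + 1
          return out

      def derivable_within(source, target, k):
          """Bounded question: can `target` be reached from `source` in <= k rewrite steps?"""
          frontier = {source}
          for _ in range(k):
              if target in frontier:
                  return True
              frontier = frontier | set().union(*(one_step(s) for s in frontier))
          return target in frontier

      print(derivable_within("aab", "bb", k=3))   # True: "aab" -> "bb" via aa -> b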

  181. Serge Says:

    Either P≠NP, which would account for the fact that some problems in NP remain infeasible. But then, why is it so difficult to prove? It’s so appealing to mathematicians…

    … or P=NP, but then we must come up with another explanation for not being able to quickly solve all problems in NP. If the algorithm exists, then it must be physically unfindable – think of all the crazy consequences if it were found! I’m not surprised that some physicists prefer this version.

    Either the proof or the algorithm must exist, but as long as neither has been found, the practical consequences are the same. I think this strongly suggests undecidability.

  182. Scott Says:

    Greg #179: Well, you’re welcome to go over to Lubos’s blog and interact with him if you like—maybe you can even convince him to go back to doing research! All I say is that I, personally, don’t wish to interact with him for the next few years. And the reason, again, is not that he called me names (or is generally vile), but that he doesn’t even have the redeeming feature that there’s much of anything I can learn from him. Seeing the abysmal quality of his “thought” when applied to the subjects I know best really underscored that for me.

  183. Scott Says:

    Serge #181: No, there’s really no need to imagine anything so exotic. As I said before, we understand a great deal about why statements like P≠NP are so difficult to prove, even assuming they’re true! I don’t think there’s any particular mystery about why P≠NP hasn’t been proven yet—basically, math isn’t advanced enough yet, in specific ways that we can articulate. So the first “horn” of your dilemma doesn’t really have a sharp point at all!

  184. A.Vlasov Says:

    Scott #180: it seems I didn’t ask clearly enough – I meant the situation where you have an oracle that can resolve the halting problem. Which classes collapse in such a case?

  185. Greg Kuperberg Says:

    Scott – “Well, you’re welcome to go over to Lubos’s blog and interact with him if you like—maybe you can even convince him to go back to doing research! All I say is that I, personally, don’t wish to interact with him for the next few years.”

    But that isn’t the present issue either. My point is that I don’t agree with the ad hominem tone of this post. (And not because your accusations are “wrong”. They may not be wrong, but I think they are wrong-minded.) Whether you choose to interact with Lubos in the future is up to you. Although I personally almost never make public vows not to respond to other people. Sometimes I privately decide not to, but for various reasons, it’s a weak thing to say in public.

  186. Fred Says:

    Taking 3dim-matching as an example, let’s say we consider the finite set S of all possible problem instances that have a solution for a given dimension size N.
    Does it make sense to ask what subset S’ of S is solvable efficiently by various heuristics and what subset S” represents problem instances that require brute force? This could be hard to assess, since lower bounds concern asymptotic behavior as N->inf. But if so, do we understand why some problem instances are particularly hard? I.e., do they have a particular structure that holds for all N?
    Does a harder instance of a given problem type stay a harder instance when transformed into another problem type? (E.g., going from 3SAT to graph coloring or subset sum.)

  187. Jeffo Says:

    Greg Kuperberg #179:

    “If Lubos has said damaging things about women and minorities” that “could be bad”!! Talk about looking the other way when all the necessary evidence has been given to you on a silver platter! And “it’s a mistake” for Scott to “group [him]self with them” (the “women and minorities” presumably)!? Perhaps Scott sides with them as fellow human beings who deserve dignity? If hateful speech is directed at me, I certainly hope that others “group themselves” with me. I am simply unsettled to read such comments from a researcher in a respected position.

  188. Raoul Ohio Says:

    I think the “electric fence” argument is strong evidence that P ≠ NP. But I want to point out a somewhat similar argument that fails. This concerns “eigenvalue avoidance”, and is illustrated on the cover of “Linear Algebra” by Peter Lax. Check out the picture at:

    http://www.amazon.com/Linear-Algebra-Applications-Applied-Mathematics/dp/0471751561/ref=sr_1_1?ie=UTF8&qid=1394398367&sr=8-1&keywords=peter+lax

    According to Lax, this was a major issue in the early days of Quantum Mechanics, and the problem was resolved by a couple of famous guys (von Neumann and Wigner, most likely).

    Suppose our only knowledge of eigenvalues of matrices was from computed values (as opposed to being able to construct matrices with predetermined eigenvalues), and we had reason to think the following is true:

    For real symmetric matrices of moderate size n (10 or 12, say) there are no double eigenvalues.

    For a first examination of the problem, one can generate random n by n examples. The probability of finding a multiple eigenvalue is zero, so one is not likely to turn up.

    For a better understanding, take two random examples, A and B, consider the parameterized matrix M(t) = (1-t)A + tB, and graph the eigenvalues for t in [0,1]. You will notice an interesting thing: as t increases, the paths of two eigenvalues frequently head toward each other, and “at the last moment” they veer apart and “avoid a collision”. Thus you have 12 wavy lines, often almost but not quite touching. This is the picture on the cover of the book. Notice how it often looks like two paths have crossed, but a close examination shows that the paths do not touch; instead each takes up the slope of the other, which is why they look like they cross.

    Looking at this picture, one might propose an “electric fence” (EF) argument that keeps the paths from crossing, so that there are no multiple eigenvalues. It is not likely that a counterexample would ever be found by pure computation (with good random numbers!). But anyone can construct a matrix pair A and B that has an eigenvalue of any desired multiplicity at, say, t = 0.5.

    So, in this case, the EF argument is misleading.

    The reason for eigenvalue avoidance is that the set of matrix parameters giving a multiple eigenvalue is a lower-dimensional set than the set of all matrix parameters (for real symmetric matrices it has codimension 2, which is why even a one-parameter path generically misses it), so the probability of a random matrix turning one up is zero.
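
    For anyone who wants to see this numerically, here is a small sketch (it assumes numpy; the engineered pair at the end is my own construction): random symmetric pencils show only avoided crossings along the path, while a hand-built diagonal pair produces an exact double eigenvalue at t = 0.5.

      # Random symmetric A, B: eigenvalue paths of (1-t)A + tB only *nearly* touch.
      # The hand-built diagonal pair at the end crosses exactly at t = 0.5,
      # showing the "fence" is an artifact of genericity, not a theorem.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 12
      ts = np.linspace(0.0, 1.0, 2001)

      def min_gap(A, B):
          """Smallest gap between adjacent eigenvalues of (1-t)A + tB over the t grid."""
          return min(np.min(np.diff(np.linalg.eigvalsh((1 - t) * A + t * B))) for t in ts)

      M = rng.standard_normal((n, n))
      A = (M + M.T) / 2
      M = rng.standard_normal((n, n))
      B = (M + M.T) / 2
      print(f"random pair:     smallest gap along the path = {min_gap(A, B):.4f}")

      # Two decoupled 1x1 "blocks" whose eigenvalues are t and 1-t, plus padding:
      A2 = np.diag([0.0, 1.0] + [10.0 + i for i in range(n - 2)])
      B2 = np.diag([1.0, 0.0] + [10.0 + i for i in range(n - 2)])
      print(f"engineered pair: smallest gap along the path = {min_gap(A2, B2):.2e}  (t = 0.5)")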

    Obviously this situation is trivial compared to the way Scott is using EF, but it illustrates the limitation of EF arguments.

    BTW 1: If your linear algebra chops are not rock solid, this is a great book for review. Lax’s presentation is so clear that all the proofs look obvious.

    BTW 2: The Peter Lax story is beyond interesting, check Wikipedia.

  189. fred Says:

    fred #7:

    http://en.wikipedia.org/wiki/Hidden_subgroup_problem

  190. Hal Swyers Says:

    @Scott #160
    The Planck scale would appear to be a “de facto” limit, but that is part of the motivation behind string theory… trying to break the de facto limit. The string coupling strength and string length could be viewed as conjugate variables [1]. The Planck length then does become a significant constraint on physics. However, such constraints might only be associated with particular dynamical conditions on the observations we make on celestial bodies.

    For example, we can place bounds on the types of weather events we observe…even if it is feasible for all the moisture in our air to suddenly vanish, it is extremely unlikely. Likewise, although one can propose there being a galaxy populated by mickey mouse look-a-likes, the odds are extremely low that such a thing would happen.

    Saying that the known laws of physics would have to change in order for some event x to occur is exactly the sort of thing most physicists pray will happen someday.

    To date, there has been limited evidence that even the known laws of physics are a real barrier to questions of complexity. We may suffer from energy constraints in the present, but it is not likely that our future descendants will face similar constraints.

    Do we suffer from the speed of light (locality) and similar limits? Sure, to a point. Locality is a poorly defined concept within the limits of uncertainty, and is directly violated in string theory.

    Fundamentally, we know the laws of macroscopic physics are violated at microscopic levels. The only thing we know semi-certainly is that uncertainty reigns.

    There is a fundamental level at which, in the TSP, one can no longer state where the cities are with precision. Cities are only localized within some uncertainty bound. The shortest path I find for one situation might be suitable for another, so while it is morally correct to say that one hasn’t solved the problem, physically it’s impossible to accurately define the problem in the first place. Even if one hypothesizes infinite precision, it is practically impossible.

    So one is forced back to the same situation. Moral correctness versus practical implementation. I can summarize the situation as, “Yes, you are absolutely right that such a thing is impossible, but I just did it anyway.”

    [1] http://books.google.com/books?id=XmsbvP1uUeIC&pg=PA260&lpg=PA260&dq=string+coupling&source=bl&ots=cX3WowaSss&sig=pEQsHe7FNtGKpmFpgnBuNuAjqpA&hl=en&sa=X&ei=TzkTU7D_D4Ll0wGNwoC4CA&ved=0CEQQ6AEwBA#v=onepage&q=string%20coupling&f=false

  191. Sasho Says:

    A. Vlasov #184: related to your question is a line of research on the power of having an oracle deciding whether a particular string is random. Randomness is defined in the Kolmogorov complexity sense, i.e., a string is random exactly when it cannot be computed from a shorter string. Once you fix the computational model, deciding Kolmogorov randomness is Turing-complete. Defining things properly is tricky in classical Kolmogorov complexity, and unsurprisingly becomes even trickier once you add complexity in the mix. But once you manage to get the definitions right, there are results giving upper and lower bounds on the power of such oracles in terms of standard uniform complexity classes. For example, the decidable problems which are truth-table reducible to deciding random strings lie somewhere between BPP and PSPACE. There is a caveat: in order to prove that, you need to get around the technicality that Kolmogorov complexity has to be defined with respect to some universal Turing machine; you don’t want any “cheating” in choosing the Turing machine.

    Dick Lipton and Ken Regan wrote up a wonderful exposition over at GLL: http://rjlipton.wordpress.com/2011/06/02/how-powerful-are-random-strings/. Check the paper linked there as well.

  192. J Says:

    Scott 177 Thank you.

  193. Scott Says:

    Ignacio #165: OK, let me start with your science questions, and only then move on to sociology.

    Your question about how long it takes for information to escape from a black hole has an extremely interesting answer! The modern view is that you don’t need to wait for the hole to completely evaporate, but you do need to wait for it to halfway evaporate (i.e., for it to pass its so-called Page time, ~10^67 years for an astrophysical-mass black hole). Why? Because when you calculate the entanglement entropy, you find that it’s only when there are more qubits in the Hawking radiation than there are in the black hole interior, that you expect the reduced state of the Hawking radiation not to be completely mixed, but to contain correlations dependent on the state of the infalling matter.
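
    For the curious, that ~10^67-year figure is easy to sanity-check against the standard Hawking evaporation estimate (a back-of-the-envelope sketch; constants are rounded, greybody factors are ignored, and the Page time is a sizable fraction of the total, so the order of magnitude is the same):

      # t_evap ~ 5120*pi*G^2*M^3 / (hbar*c^4) for a solar-mass black hole.
      import math

      G    = 6.674e-11      # m^3 kg^-1 s^-2
      hbar = 1.055e-34      # J s
      c    = 2.998e8        # m/s
      M    = 1.989e30       # kg (one solar mass)
      YEAR = 3.156e7        # s

      t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
      print(f"evaporation time ~ {t_evap / YEAR:.1e} years")   # ~2e67 years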

    No, I don’t think the Harlow-Hayden proposal is “equivalent” to black hole complementarity, nor do I think that computational complexity is some sort of “master key” to understanding black holes. I think Harlow and Hayden contributed *one* crucial insight, alongside many other insights that will surely be needed before we know the full story. Namely, HH explained why implementing the unitary operation required for the AMPS experiment is unlikely to be possible (even in principle) within the lifetime of a black hole. You can watch my talk at KITP if you want my own thoughts about Harlow-Hayden, at least as of last summer (I’m working on a paper about it, but it’s not done yet).

    Lubos is free to write anything he wants on his own blog. In my update, I simply explained why I *personally* want nothing to do with him for a while, and in particular why I don’t want him here. That’s not “censorship,” any more than is kicking out the world’s rudest houseguest (one who, moreover, regularly kicks people out of his own house). If you spend any time on this blog, you’ll see that I constantly interact with commenters who not only sharply disagree with me, but sometimes even throw abuse my way. So saying “I might reach the point of kicking you out for a few years if you become Lubos” is like saying “Moore’s Law has to break down once transistors reach the Planck scale” — i.e., it just means there’s SOME finite upper bound to my patience, not that the sane people are anywhere near it! 🙂

  194. Scott Says:

    Greg #185:

      I personally almost never make public vows not to respond to other people. Sometimes I privately decide not to, but for various reasons, it’s a weak thing to say in public.

    For me, it was extremely important to make the vow in public, for the following reason. I know myself, and I know that if I’d only resolved privately not to engage with Lubos, then the very next time he said something outrageous that got under my skin, I’d abandon my vow and start arguing with him. Thus, the public vow is really just a device for committing myself to what I know is the wiser course of action.

  195. fred Says:

    If only Lubos would get hired by D-Wave…

  196. Greg Kuperberg Says:

    Jeffo – “Perhaps Scott sides with them as fellow human beings who deserve dignity?”

    There is a difference between siding with victims and grouping yourself with them. Scott has been forthright enough to say that he feels offended and provoked by Lubos’s description of him. Yes, Lubos has outright insulted Scott in the past, but it’s a mistake to respond in kind. My view is that if tenure at a good research university gets you anything, it should get you a thick skin. Okay, I do not always live up to my own ideal either, but I still believe in it.

  197. Greg Kuperberg Says:

    Scott – Hmm…I know myself too. If I publicly resolved to ignore someone, then I might well later break my promise and lose face. 🙂

    Besides, publicly announcing it is rude, obviously. In the old days of Usenet, it was called “the killfile”, and it became a verb, to “killfile” someone. “That’s it, I’m killfiling so-and-so.” It was not only rude (little more than claiming the last word), but also a recipe for losing face when such posters couldn’t keep their word. So, that was my learning experience.

  198. Ignacio Mosqueira Says:

    I just watched your video. And it is certainly full of interesting ideas. I enjoyed it.

    I do think it is overreaching to say that, were P=NP to be proved, it would be a hum-drum result. I am not sure what drove Lubos to claim such a preposterous thing. But I do have a model of Lubos’ brain in my own. According to this model, Lubos’ entire sense of self depends on his complete independence from ideas based on rabble-rousing, crowd-sourcing, PC-driven minds. I wager that this mindset serves him well in some cases and not in others.

    Nevertheless, in my view the BH problem will find a simple resolution that is not dependent on computational complexity or even esoteric notions of unitarity breaking theories or the like.

    We will see.

  199. Getting Clear Says:

    OK Greg, let me see if I understand correctly: you understand that it’s wrong to be rude, you don’t want to lose face, you don’t need to get in the last word, you don’t want to group yourself with these victims (IF they actually ARE victims), you don’t want to make a vow in public and look weak if you break it, you’re satisfied that tenure says what needs to be said, you don’t agree with the ad hominem tone of Scott’s post generally (but not because his “accusations” are wrong, but because they are wrong-minded), you recognize that “normal rules of discourse” are ill-defined (so inapplicable?), you of course know the art of responding to the preposterous and it’s a mistake to get angry, and on and on and on. You know, unless you’ve come up with an algorithm that lets you walk on water, your high-minded condescending tone is getting pretty insufferable. 😉 I for one would like you to do us a favor and spare us the lessons in social etiquette; stick to math and science and the other areas where we do look to you for thoughtful commentary.

  200. Greg Kuperberg Says:

    Getting Clear – “You know, unless you’ve come up with an algorithm that lets you walk on water”

    Yes, that’s what I am looking for. 🙂

    I think I’ll ask Jesus. Seriously, we have a guy named Jesus in our department.

  201. Marcus Campbell Says:

    I’m just an undergraduate, and feel rather intellectually outclassed on this blog, but hopefully you don’t mind me weighing in. I study math and biology, and it’s my hope that I bring a somewhat different (and useful) perspective to the discussion here.

    I liked the frog analogy, but I felt that it may not have been as direct of a comparison as Scott was aiming for. I’d greatly appreciate if someone could point out any errors in my reasoning.

    Since we’re talking about P = NP, we might choose to write the question of species equivalence in a similar way:

    Yellow frogs = Green frogs ?

    How direct of a comparison is this to P = NP ? For me, the problem with the frog analogy is that in our “frog equation”, we know a priori that the LHS and the RHS are qualitatively mutable. Regardless of their current states, there is a small (but quite positive!) probability that they could be equivalent at some point in the future.

    This partly explains why evolutionary biologists (and naturalists, etc.) would be so comfortable with classifying them as separate species, despite not having “proven” that they are.

    The principal reason why (most) evolutionary biologists care about species is that their existence constricts the flow of genetic information through space and time. Essentially, biologists tend to care much more about potential gene flow between groups than about whether or not those groups represent different species. Because it’s quite easy for almost any kind of ecological interaction to outweigh the effects of a tiny amount of gene flow, the situation with almost no gene flow is, for all intents and purposes, the same as the one with no gene flow whatsoever.

    If intermatings between the frogs aren’t producing viable offspring, then (for most biologists) it seems “good enough” to assume that they are different species and get on with life. Even if it’s not exactly true, we don’t expect it to cause a lot of error in our predictions, and we can justify those expectations based on our present understanding of how gene flow affects population dynamics.

    This seems (to me, anyway) to be a fundamentally different kind of equivalence relation than P=NP.

    There are formal definitions for both P and NP that are widely-accepted (as far as I know). A proof, either affirmative or negative, would have a much clearer interpretation, as compared to the frog relation. If, right now, we were to produce a correct proof that P=NP, we have good reason to expect that it would also be true tomorrow, and 100 years from now, and so on. Even if we forget about all the wide-ranging, “real world” consequences of proving P=NP, or its negation, we are talking about a much more “permanent” kind of equivalence relation, as compared to the species equivalence example. That alone might cause some people to feel uneasy about accepting anything less than a mathematically rigorous proof.

    If we do consider the “real world” consequences of P=NP being true (or false) then I’m not surprised that some people would feel even more strongly that we must be sure of such an answer. But that’s just how they feel. Is this just immaturity? Personally, I’m not sure. In the frog analogy, even if we don’t have exact equivalence, we can recognize, with a reasonable degree of confidence, when this carries a negligible cost. With P=NP however, the stakes associated with failing to notice equality are much higher.

    I’d just like to point out that my argument above is not meant to be a refutation of Scott’s main points. On the contrary, I agree with them wholeheartedly. Rather, I just felt that explaining how a biologist might view the frog situation could help elucidate: (1) why some people might demand an absolutely rigorous approach in one situation but not the other, and also (2) how difficult it is to use analogies to explain when and how we should relax our requirements for proof.

  202. Scott Says:

    Greg #196, #197: The fact that you would chide me for my “rudeness” (!) to Lubos, suggests to me that you haven’t yet appreciated the full vileness of his attacks on others, and how they place him entirely outside the domain of “rude” vs. “polite.” Did you actually open this document? Seriously, did you open it?

    Your view seems to be that, because I’m now a tenured professor, whereas Lubos (entirely because of his own actions, of course…) is not one, I therefore need to maintain a higher standard (turn the other cheek, as your colleague Jesus might say). But that just isn’t how I think. “On the Internet, no one knows you’re a dog,” and I try to hold everyone to the same standards.

    My whole life in academia, I’d always get annoyed when a junior person would scathingly critique the ideas of a more senior person, and the senior one would just chuckle and reply something along the lines of, “that’s a good point; your input is highly appreciated.” As if to say: “my status is so much higher than yours that I don’t even need to get down in the weeds with the likes of you.” Far better—and even more “egalitarian,” I’d say—to get angry!

  203. Scott Says:

    Marcus #201: Thanks for the interesting comment! I completely agree with you that my frog analogy breaks down when you push it as far as you have. I was just looking for some situation in natural science where you have two giant equivalence classes of objects, and the question is whether or not they’re really one giant equivalence class. And this was the best example I came up with—I’d be happy to hear suggestions for better ones! 🙂

  204. fred Says:

    Scott #203:
    Not sure if it’s a good analogy (I’m not a fractal expert), but I would imagine a situation where we don’t know anything about fractals and are given a black box which returns a 1 or a 0 for every point (x,y) in the real plane, actually describing the Mandelbrot set.
    The open question would be whether there exist any disconnected, independent “islands,” or whether the black box describes a set that is totally connected.
    Whenever we thought we had found such a disconnected island, upon closer examination we would find it connected to the main bulk through some thin filament. The variety of possible islands and filaments would be huge.
    (it seems the property of “local connectivity” is still a conjecture for the Mandelbrot set).
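
    Here is a small sketch of that black box (the sampled points are my own choices; the finite iteration cutoff means the test only ever approximates true membership, which is part of the point of the analogy):

      # A bounded-iteration membership test for the Mandelbrot set.  No finite
      # number of point queries like this can settle a global question such as
      # connectivity -- which is what makes it a nice analogy.  (Connectivity
      # itself is a theorem of Douady and Hubbard; it is *local* connectivity
      # that remains conjectural.)

      def black_box(x, y, max_iter=1000):
          """Return 1 if c = x + iy has not escaped |z| > 2 after max_iter steps, else 0."""
          c = complex(x, y)
          z = 0j
          for _ in range(max_iter):
              z = z * z + c
              if abs(z) > 2:
                  return 0
          return 1

      # Probe along the real axis: the set meets the real line exactly in [-2, 0.25],
      # including the thin "spike" that ties the mini-Mandelbrot islands near -1.75
      # back to the main body.
      for x in [-2.05, -1.99, -1.75, -1.00, 0.00, 0.24, 0.30]:
          print(f"c = {x:+.2f}  ->  {black_box(x, 0.0)}")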

  205. Douglas Knight Says:

    Several times, I’ve heard people express sentiments like:

    Yes, of course Luboš is a raging jerk and a social retard. But if you can just get past that, he’s so sharp and intellectually honest! No matter how many people he needlessly offends, he always tells it like it is.

    The only person I’ve seen say that is you, Scott.

    I suspect what people mean by “intellectually honest” is not the usual sense of being capable of changing his mind and being careful and honest with himself, but that he always lets people know that he disagrees. In particular, he expresses a lot of opinions that are common among string theorists that various ideas are wrong. Even if he is not intellectually honest in the usual sense and thus it is not useful to argue with him, it would still be valuable to learn what string theorists actually believe. But he is probably not reliable enough to be useful in this capacity.

  206. Scott Says:

    Douglas #205: Yes, you’re right that Lubos was useful for a while as “the id of string theory”—i.e., the guy who announced from a megaphone what many other string theorists believed but kept to themselves. However,

    (a) I now have enough “regular” string theorist friends that I no longer need such a service, and

    (b) Lubos has become so extreme (including in attacking other string theorists) that I doubt his views are currently a window into any community.

  207. Greg Kuperberg Says:

    Scott – Did you actually open this document? Seriously, did you open it?

    Yes I did, although I confess that I only read the first couple of pages. I have to say that this document leaves me a bit perplexed. It does make its point about how Lubos has behaved; but for what larger purpose? To educate Lubos himself? Or as a warning to others? If it’s the former, then it doesn’t seem like the best method. If it’s the latter, then some of it seems like protesting too much. Some of these quotes are indefensible from all sides. But others have some truth to them; they are only a lot more hotheaded than what I would say. Some of his targets are hardly paragons of good behavior.

    Heck, I can’t help but chuckle at some of this over-the-top stuff, even though I know I’m supposed to hate it. Some of it succeeds as shock art.

  208. Scott Says:

    Hal Swyers #190: Your speculations are getting further and further off the deep end! I don’t think anything about string theory supports the idea that it would allow one to evade the fundamental limits on computation imposed by the Planck scale. On the contrary, string theory emphatically upholds the holographic entropy bound, which is the thing that imposes the ~10^69 bits per square meter and ~10^43 operations per second upper bounds that I mentioned. Also, the picture that’s emerging now from AdS/CFT is that, if traversable wormholes (i.e., faster-than-light communication) are possible at all, then they should only allow signalling into the interior of a black hole—meaning you could only receive the signal if you wanted it so bad that you were willing to die moments later! At the very least, certainly the idea that we’ll be able to use quantum gravity to evade the limits imposed by known physics is not a “safe” assumption to make.
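
    For reference, those two round numbers follow from the Planck length and Planck time under the usual heuristic reading (a quick sketch, not a derivation):

      # Holographic bound: ~1 bit per 4*ln(2) Planck areas; rate limit: ~1 op per Planck time.
      import math

      l_planck = 1.616e-35   # m
      t_planck = 5.391e-44   # s

      bits_per_m2 = 1.0 / (4 * math.log(2) * l_planck**2)
      ops_per_sec = 1.0 / t_planck

      print(f"holographic bound: ~{bits_per_m2:.1e} bits per square meter")   # ~1.4e69
      print(f"Planck-time rate : ~{ops_per_sec:.1e} operations per second")   # ~1.9e43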

    Also, you say that with a big enough TSP instance, the locations of the cities would become “fuzzy” because of the uncertainty principle, but that itself is an example of fuzzy thinking. First of all, you could always get more cities by just considering a bigger and bigger planet, with the spacing between pairs of cities kept the same! But even more important, one reason why we care about the TSP is its ability to encode other NP-complete problems. And if you use TSP to encode some other problem (like, say, “find me a proof of the following theorem”), then you really do care about the exact distances between cities: not because the distances correspond to anything physical (they don’t), but because of what they represent logically.

  209. Rahul Says:

    “In particular, he expresses a lot of opinions that are common among string theorists that various ideas are wrong.”

    Which opinions are these? Also what are the ideas Scott mentions about Lubos shouting over the megaphone that other string theorists were shy about? Just curious.

  210. Vladimir Puton Says:

    Greg K.: Please take note of Scott’s

    “If he can get P vs. NP, and complexity and discrete math more generally, so howlingly wrong—if he can be so sure of himself despite knowing the technical facts so little—then why should I trust him about AdS/CFT, or other things that interest me but about which I know less? ”

    The answer to Scott’s question [I am a physicist, a string theorist actually] is this: You most certainly should *not* trust him on physics issues either. For proof, look at his many attacks on Sean Carroll’s ideas about the arrow of time. SC’s ideas are not universally accepted, but LM’s alternative is just 100% crackpottery. And that’s just *one* example. In fact, he is only really reliable when talking about Matrix Theory, a now antiquated theory to which LM contributed aeons ago, and about which he continues to have delusions of grandeur [he thinks it can compete with AdS/CFT!].

    In short, it isn’t about LM personally, it is about debunking the strange notion that LM is somehow reliable. He isn’t. Not in complexity theory, and not in physics.

    One thing though: when I mentioned LM to some young colleagues recently, they said, “Lubos who?”. So the problem is on the blogs, not in reality…..

  211. Scott Says:

    “Vladimir Puton” #210: Thanks so much for the extremely helpful insight!

    Rahul #209:

      Also what are the ideas Scott mentions about Lubos shouting over the megaphone that other string theorists were shy about?

    Well, the most obvious examples would concern the alleged total worthlessness of loop quantum gravity, and the incompetence of the people who study it. (And no, I know for certain that not all string theorists share those assessments, but I also know that Lubos isn’t the only one who does.) Anyway, I’m sure going through the archives of his blog would turn up additional examples.

  212. Rahul Says:

    Greg Kuperberg #159:

    “But yes, you could posit that P = NP and then try to escape from the cataclysm by saying that the CircuitSAT algorithm has time complexity Θ(n^10000). That has been a well-known quibble concerning the conjecture from the beginning.”

    For known problems, what’s the highest-degree polynomial algorithm that has been found? Or, if it makes more sense, let’s stick to “natural” problems, whatever that means, i.e., not something contrived just to exhibit high degree.

  213. asdf Says:

    Nice post, though I also think the Lubos stuff is overpersonalized.

    I’d be interested in an explanation (maybe another post) about why lower bounds proofs are so hard in general. The different barriers to proving P vs NP seem like a consequence of the overall difficulty rather than a cause.

    I also don’t understand the difference between “discrete mathematics” and number theory enough to understand why it was ok that everyone accepted the truth of Fermat’s Last Theorem for centuries before it was proven.

  214. Rahul Says:

    “we understand a great deal about why statements like P≠NP are so difficult to prove, even assuming they’re true! I don’t think there’s any particular mystery about why P≠NP hasn’t been proven yet—basically, math isn’t advanced enough yet, in specific ways …”

    Is it possible to explain the “whys” to a non-expert audience? I’d love to know.

    My impression was we have no clue what kind of math or specific techniques will be needed to attack P≠NP.

  215. Scott Says:

    asdf #213 (and Rahul): OK, I’ll put a post about the difficulty of proving lower bounds somewhere on my stack! For now, you could check out my PowerPoint presentation Has There Been Progress on the P vs. NP Question?.

    Regarding the difference between “discrete math” and number theory, the reason you don’t understand is just that Lubos was insisting on a meaningless distinction! Basically, his distinction is between “math problems that he, personally, has an intuition for and likes,” and “math problems that he has no intuition for and despises.” But exactly the same sort of “inductive” evidence that was applied to Fermat’s Last Theorem for centuries, can be equally reasonably applied to numerous open problems in “discrete math” (e.g., combinatorics and computer science).

  216. Getting Clear Says:

    “Heck, I can’t help but chuckle at some of this over-the-top stuff . . .”

    There you go chuckling again Greg. Be careful; not only are you going to leave people with the idea that you want to be thought of as a true paragon of virtue — you know, sort of the Dalai Lama of mathematics — but also that you’re trying awful hard to get in the last word. 🙂 Not sure the two things work that well together.

  217. Hal Swyers Says:

    @Fred #163 Indeed Kurzweil does talk of these things and when I first read what he was getting at I was skeptical. However, after you work on real world problems enough you find there is a lot of truth to his thinking.

    @Scott #208 Indeed things are getting deep! First, I know it seems like an easy proposition to suggest we just grow the size of our planet, but if we are honest enough to bring in the Black Hole information paradox (BHIP) and firewalls, we have to be honest enough to say that there is an upper bound to the size of a planet we can consider before it collapses into a black hole itself. I understand what you are getting at, but there are practical limits to rescaling even if there is no moral limit.

    As far as the BHIP is concerned, the arguments revolve around questions of unitarity and purity, which are ultimately related to questions of determinism. Pure states are free states; there simply isn’t any real interaction. Sidney Coleman, in his discussion “Quantum Mechanics in Your Face,” goes to great lengths to point out that there is no interaction Hamiltonian when we deal with pure states, and for good reason: interactions manifest themselves as one begins to reduce the density matrix as part of methods such as Hartree-Fock… which are really perturbative methods (which is why there is an interest in non-perturbative approaches to QG).

    Currently, the goal of the BHIP and firewall arguments is to avoid crude cancellations in what should otherwise be a smooth boundary. My view on this right now is that complexity is sufficient to put any information collected by an infalling observer outside the approximation bounds of an observer outside the black hole. This is an example of a one-way problem. So the infalling observer need not be destroyed at the boundary, because there is nothing they could possibly do that would be observable outside the black hole.

    This is why I am thinking the way I am right now. I think that P != NP is true at some asymptotic limit, but within our practical limits, P = NP effectively.

    Now there are some interesting questions about interpretation of non-linear mixing and I am not sure right now how that is fully understood.

    In any case, we are at best only able to provide approximations of what some would call “reality” which may actually work to our advantage.

    A nice model I like to think about in these discussions is to imagine your mind as a black hole. Because of the limitations in how information is presented externally, an external observer never really has perfect knowledge about what another person “thinks”. At best our knowledge of our peers is an approximation. The complexity of trying to really understand what another thinks is likely so great that you would have to physically destroy another in order to piece together all the interactions. However, if we simply accept the approximation, we can always live together peacefully.

    Interesting stuff.

    http://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method

    http://thefurloff.com/2014/02/17/sidebands-and-density-matrices-mixed-and-reduced-review/

  218. Māris Ozols Says:

    To me it seems that the “electric fence” argument suggests a dichotomy rather than advocating for any particular way of resolving P vs NP. That is, the evidence so far suggests that the two classes are extremely robust and that “continental” problems like factoring, graph isomorphism, etc. should eventually assimilate onto one of the two coasts. This per se would not preclude the P vs NP question from being resolved either way.

    If it turns out that P≠NP, then the “electric fence” suggests that the divide between the two classes must be extremely sharp, i.e., there is very little wiggle room between the two.

    On the other hand, if it turns out that P=NP, one has to a posteriori explain why there appeared to be an “electric fence” (or as you call it, explain the “spooky action at a distance”):

    > How do the algorithms and the reductions manage
    > to coordinate with each other, every single time,
    > to avoid spilling the subexponential secret?

    This might be explainable from the post-apocalyptic P=NP perspective as follows: the crucial insight that led to showing P=NP was extremely hard to come by. In the pre-apocalypse world nobody even came close to it and therefore they were deceived by the “electric fence”.

  219. luca turin Says:

    Hammer, meet fly.

  220. Blogman Says:

    Good for you, Scott.

    Was very tired of seeing academics on the blogosphere taking the “high ground” and disengaging with Lubos while he continued to foul mouth them in the crassest way.

    Great to see someone kick his ass and call him out for being a slimeball AND an idiot.

  221. anon Says:

    Raoul Ohio #189

    I think eigenvalue avoidance is actually a good example to explain why Scott’s argument above for P != NP is so strong. The non-crossing occurs because of almost trivial but very concrete reasons, which are not so hard to deduce (Wigner and Von Neumann probably spent 5 minutes of thought on it). And once understood, it is not hard to construct similar examples where crossings do take place, e.g. Simple Waves Do Not Avoid Eigenvalue Crossings

    But in the case of the “electric fence” argument, decades of investigation have not led to the slightest hint of how one might get a violation. If an NP-complete problem lies in P it must involve such a convoluted chain of reasoning that Wiles’ proof of FLT would be child’s play.

    And yes I do understand that that’s not fundamentally proof of anything – but it does amuse me that (some) people can accept the unproven AdS/CFT conjecture so easily but froth at the mouth at the idea that P != NP is considered quite likely.

    Just sayin’

  222. Scott Says:

    Vladimir Puton #210: Actually, your comment reminded me of something. It seems to me that, right now, the world could really use a string-theory blogger who blogs regularly with passion, humor, and technical details, and who isn’t Lubos. (I know Jacques Distler blogged for a while, but he hardly posts anymore.) The lack of such a person creates a vacuum that Lubos then happily fills. Don’t let him be the online face of your community!

  223. Rahul Says:

    A layman question: Regarding string theory, is it really true that there’s a total dearth of any testable empirical predictions? At least for the foreseeable future of our experimental reach?

    Or is my impression wrong?

  224. Rahul Says:

    Does the P=?NP problem’s outcome have any effect on the open P=?BPP problem? Or not at all.

    Does one decide the other or at least make the other a lot more or less likely in some direction?

    How similar is the current state of the program for solving both problems?

  225. Scott Says:

    Rahul #223: That’s a huge, complicated, and contentious question, and other blogs (like Matt Strassler’s, Peter Woit’s, and Sean Carroll’s) are better places to discuss it than mine.

    What I think is uncontroversial is this: there are various phenomena that, if they were observed (either at the LHC or in cosmology), would provide an enormous boost to string theory. The most important and plausible of those is supersymmetric particles; but there are also more exotic possibilities, like large extra dimensions and cosmic strings. The central problem is that, if those phenomena are not observed (as indeed they haven’t been, despite searches for them), that doesn’t falsify string theory at all. For it’s perfectly compatible with string theory that these phenomena would only show up at much higher energies than are currently accessible to us.

  226. ungrateful_person Says:

    Feel compelled to say, I had to resist the temptation of writing on Lubos’s blog “You sir, are a jackass”.

  227. Greg Kuperberg Says:

    Rahul – “Regarding string theory, is it really true that there’s a total dearth of any testable empirical predictions?”

    There is a “dearth” of *directly* testable empirical predictions in string theory, but it’s misleading to use the word “dearth”. It’s been acknowledged from the beginning that it’s difficult to make any new predictions from any reasonable model of quantum gravity. What theorists can do, however, is offer a better explanation of the evidence that already exists. That is the main driving force of string theory: How to keep quantum mechanics and general relativity from contradicting each other; and how to keep the Standard Model of particle theory from contradicting itself. String theory is an incomplete effort to do that, but it is also “the only game in town”. It is the only known attempt at quantum gravity that isn’t in the hospital and close to flatlined. And, resolving the Standard Model of particle physics would be extra icing on the cake.

    Of course, any kibitzer can talk about the problems of “theory without experiment”. But the truth is that the best theorists can get pretty far with just parsimony and logical consistency. Both general relativity and cosmological inflation — both of which are cousins of quantum gravity and string theory — were correctly developed before any help came from experiment. In the case of inflation, 20 years before good experimental confirmation.

    Indeed, quantum computation has the same intellectual feature. Theoretical quantum computation gets a little help from experiment, but really hardly any. It is for the most part a compelling extrapolation of existing physics.

  228. Scott Says:

    Rahul #224: Ironically, the central challenge in collapsing P and BPP is extremely closely-related to the central challenge in separating P from NP: namely, we need better circuit lower bounds!

    In particular, important works in the 1990s (Nisan-Wigderson, Impagliazzo-Wigderson…) showed that, if you can prove strong enough circuit lower bounds (for example, that there’s a language in E=TIME(2^{O(n)}) that requires exponential-size circuits), then you get pseudorandom generators that are provably strong enough to imply P=BPP.
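
    (Schematically, the Impagliazzo-Wigderson form of that implication is
    $$\exists\, L \in \mathrm{E}=\mathrm{TIME}\!\left(2^{O(n)}\right) \text{ requiring circuits of size } 2^{\Omega(n)} \;\Longrightarrow\; \mathrm{P}=\mathrm{BPP},$$
    with weaker hardness assumptions yielding correspondingly weaker derandomizations.)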

    Now, the circuit lower bounds that you need for P=BPP are widely considered “not quite as hard” as the ones you’d need for P≠NP (but still extremely hard). So one could imagine P=BPP being proved before P≠NP, much more easily than the reverse—indeed, one could imagine a P=BPP proof being a crucial “step along the way” to a P≠NP proof. Having said that, there are no formal implications known between the problems. (Well, actually, a proof of NP⊄P/poly would give you some mild derandomization, but not P=BPP; while a proof that SAT requires exponential-sized circuits would imply P=BPP.)

  229. Raoul Ohio Says:

    anon,

    Thanks for the reference on continued developments in EA (eigenvalue avoidance) theory.

    I have always thought EA is a very interesting thing. It does not provide evidence for or against the EF (electric fence) argument, but is slightly similar. I stumbled onto a fact related to EA while playing around with the IMSL package in the 70’s: “Non-real eigenvalues of real matrices avoid the real line”. I tried to analyze this using the fact that the derivative of the eigenvalues WRT the matrix is singular as a complex conjugate pair approaches the line.
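
    A quick numerical illustration of that avoidance (a minimal Python/NumPy sketch with arbitrary random 6×6 matrices, just to show the generic behavior; it has nothing to do with the original IMSL experiments):

        import numpy as np

        rng = np.random.default_rng(0)
        A0, A1 = rng.standard_normal((2, 6, 6))  # two random real 6x6 matrices

        # Interpolate between A0 and A1 and track the complex-conjugate eigenvalue
        # pairs of the real interpolant: generically they stay off the real line.
        for t in np.linspace(0.0, 1.0, 11):
            w = np.linalg.eigvals((1 - t) * A0 + t * A1)
            gaps = [abs(z.imag) for z in w if abs(z.imag) > 1e-9]
            msg = f"{min(gaps):.3f}" if gaps else "all eigenvalues real"
            print(f"t={t:.1f}  smallest |Im(lambda)| among complex pairs: {msg}")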

    Consider the AEFA (Aaronson Electric Fence Argument) category. I know of two members: EA and EF. Are any other examples known? (John Sidles: I am counting on you here!)

  230. Mike Says:

    Interesting comment Greg@227. I understand that a lot of String Theory progress was made in intermittent bursts over the past couple of decades. I was curious whether progress is still being made, and what you currently see as the main challenges that theorists are addressing.

  231. anon Says:

    I forgot to post a link to an English translation of Wigner and Von Neumann’s paper on crossing avoidance mentioned by Raoul Ohio #189

    On the Behaviour of Eigenvalues in Adiabatic Processes

    The original German version is Über das Verhalten von Eigenwerten bei adiabatischen Prozessen

    (I perhaps should concede that I was a little flippant above, they probably spent a bit more than 5 minutes thinking about this interesting phenomenon)

  232. Rahul Says:

    Raoul Ohio,

      I know of two members: EA and EF. Are any other examples known? (John Sidles: I am counting on you here!)

    No help there. I think he’s in the doghouse currently. 🙂

    Unless you melt Scott’s resolve. Has happened before. 😉

  233. Scott Says:

    Mike and Rahul: I should tell you that Greg and I have a longstanding (if minor) disagreement about string theory. I’m a huge fan of conceptual insights like AdS/CFT and black-hole microstate counting, and “some of my best friends are string theorists.” But I’d also say that there’s a big difference between quantum computing on the one hand, and string theory (or inflation or GR) on the other. Namely, the latter really are (or were) qualitative leaps into the unknown. They make inspired guesses about physics in unknown regimes that might turn out to be wrong. Obviously, something has to happen in unknown regimes to keep physics logically consistent, but that doesn’t mean it has to be something anyone has thought of yet. QC, by contrast, is way more conservative: you “merely” need to believe that QM will continue to be true for complicated, many-qubit states, and that no new principle will be discovered that somehow prevents QC from working.

    Note that the above difference, while real, is not necessarily a negative for string theory—as the previous examples of GR and inflation already suggest! (Indeed, in an earlier thread Lubos ironically agreed with me about this distinction between QC and string theory, but spun it differently: he would say, not that string theory is on shakier ground than QC, but that QC is “just boring, pedestrian engineering” that will obviously work as predicted, and that therefore has no hope of uncovering any new physics.)

  234. Mike Says:

    Scott@232,

    Understood. I adopt a similar analysis with respect to the MWI — it simply follows from QM taken seriously. 😉

    Nevertheless, I think Greg’s idea (which is too much maligned by some) that string theory is the “only game in town” is true FAPP. In that context, I’d be curious to know more about where he sees theorists taking it over the next period of time.

  235. rrtucci Says:

    Raoul Ohio et al,
    I believe EA is nowadays called by physicists “accidental degeneracy”. It happens when, by varying a parameter, your system all of a sudden (“accidentally”) develops a new symmetry. So if you want to apply this to P!=NP, I think you would have to come up with such an accidental symmetry.

  236. Scott Says:

    Mike #234: Yes, I agree with you that the math of QM “militates in favor of” both QC and MWI. The difference is, QC is “just” a straightforward scientific prediction that a particular technology should work, whereas MWI lands you in a dense thicket of mysterious, possibly in-principle untestable questions about the nature of personal identity and probability measures over conscious minds. So, that’s why I have less hesitation in accepting the former. 🙂

  237. Mike Says:

    Scott@234,

    Of course, that’s the fun part!! And you know, as far as the math goes, in for a dime, in for a dollar. 🙂

  238. Greg Kuperberg Says:

    Scott – Maybe the only disagreement at this point is whether we have any real disagreement about string theory. 🙂 You’re correct of course that string theory is new physics, while QC isn’t.

    In fact Lubos is half-right: A lot of people, not just him, consider QC “boring” as physics. It’s great as engineering or as applied physics, but as physics proper it is indeed meant as nothing new. From this viewpoint a QC “experiment” is not really an experiment at all; it’s an engineering demonstration that usually happens to be done in a physics department. Experimental physicists themselves have said that it doesn’t feel like doing a physics experiment.

    Of course as has happened before, if Lubos dismisses QC, that is a mixture of wisdom and foolishness. Not everything that’s “boring” as physics is boring in general, nor even boring to physicists.

    Mike – It’s hardly my idea that string theory is the only game in town, and I’m hardly the best person to ask about the main challenges of string theory. That said, my impression is that one of the main challenges all along has been its incomplete mathematical development. Perturbative string theory is already non-perturbative conformal field theory, which is already a mathematically difficult topic. Non-perturbative string theory is still waiting to be defined, even by the standards of quantum field theory rather than rigorous mathematics. The “second superstring revolution” was revolutionary because it gave some glimpse as to what non-perturbative string theory should be. Wikipedia reminds me that the AdS/CFT conjecture, which is now also important because of applications outside of string theory, is a major outcome of the second superstring revolution.

  239. LK Says:

    Scott, as a physicist VERY interested in complexity theory I should say: great article. I was particularly interested in results like the one from D.Moshkowitz (your wife?) you quoted. Such results really give a sense of the P/NP “iron curtain” (just to quote LM ;> ).
    I’m more used to hearing about quantum computing, and I’m happy to also learn something about non-quantum TCS.
    BTW, good decision about LM: I’m abandoning his blog too, since besides a couple of good didactic articles about string theory (mostly matrix theory) you do not learn anything there besides personal attacks.

  240. Rahul Says:

    Scott / Greg:

    Thanks for the insights.

    Another question: Leaving aside empirical verification, on a purely theoretical plane, are there any key open holes, unproven conjectures, etc. which, if settled the wrong way, would be a mortal blow for string theory or at least grossly weaken it? Or not at all?

    Is it all neatly tied up, or are there weak spots? Which ones? Where might inconsistencies arise that spoil the party?

  241. Scott Says:

    LK #239: Thanks for the comment! Yes, Dana is my wife.

  242. Greg Kuperberg Says:

    It’s my idea that they should write a joint paper with this guy.

    http://en.wikipedia.org/wiki/Dana_Scott

  243. Mike Says:

    Maybe it could be the John ‘Stewart Bell’ inequalities. 😉

  244. Bill Kaminsky Says:

    If I may, I’d like to ask a technical question to verify my understanding of a side issue that has come up in this thread.

    Most broadly, I’d like to ask:

    —————-
    Just how much public-key cryptography is possible if P = NP?
    —————-

    Now, I do realize that P = NP precludes the “strong public-key crypto” scenario in which Eve, lacking Bob’s private key, is forced to take an exponentially longer time than Bob in reading a message that Alice encrypted with Bob’s public key.

    (* SIDENOTE: At the risk of insulting the intelligence of the readers of this blog, it’s conventional to discuss crypto protocols by naming the secretive sender “Alice,” the rightful recipient of the secretive sender’s message “Bob,” and the evil eavesdropper “Eve” )

    So, in light of the above, a more pointed formulation of my question is:

    —————-
    How good can “weak public-key crypto” get? To wit, what’s the best possible scheme to ensure that — even if Eve can solve NP-complete problems in linear(!!!) time — Eve will in fact have to take a substantially (polynomially) longer time to decrypt Alice and Bob’s messages than Alice and Bob do?
    —————-

    And, especially as Boaz Barak popped up in the comments on a previous thread, my actual question is the following:

    If I understood (or, better said, if I skimmed without grievously misunderstanding 🙂 ) the paper

    Boaz Barak and Mohammad Mahmoody-Ghidary, “Merkle Puzzles are Optimal –
    an \( O(n^2) \)-query attack on key exchange from a random oracle” http://www.boazbarak.org/Papers/merkle.pdf

    then the best presently known weak public key scheme isn’t Merkle’s Puzzles (see also https://en.wikipedia.org/wiki/Merkle%27s_Puzzles). Rather, the best presently known scheme is that of Impagliazzo and Rudich using a random permutation oracle (see also http://cseweb.ucsd.edu/~russell/secret.ps). Moreover, the best presently known attack on such a key exchange protocol is \( O(n^4) \) [as opposed to the \(O(n^2)\) of Merkle’s Puzzles]. How likely is it that \( n^4 \) is a tight lower bound?

    I ask because it seems to me forcing quartic overhead on Eve could actually be a pretty decent security protocol assuming \( n > 10^6 \), say.
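
    For concreteness, here is a toy Python sketch of Merkle’s Puzzles (an illustration only, not secure: it uses a hash-derived pad as a stand-in for the puzzle encryption, and the parameter N is made up):

        import hashlib, os, random

        N = 1000  # number of puzzles; also the per-puzzle brute-force cost

        def weak_encrypt(secret, plaintext):
            # Toy "encryption": XOR the plaintext with a hash-derived pad.
            pad = hashlib.sha256(str(secret).encode()).digest()
            return bytes(p ^ q for p, q in zip(plaintext, pad))

        weak_decrypt = weak_encrypt  # XORing with the same pad undoes it

        # Alice: publish N puzzles, each hiding (marker, puzzle_id, session_key)
        # under an independent secret drawn from a space of size only N.
        alice_table, puzzles = {}, []
        for puzzle_id in range(N):
            session_key = os.urandom(8)
            alice_table[puzzle_id] = session_key
            plaintext = b"PUZL" + puzzle_id.to_bytes(4, "big") + session_key
            puzzles.append(weak_encrypt(random.randrange(N), plaintext))

        # Bob: pick one puzzle at random and brute-force its secret -- O(N) work.
        chosen = random.choice(puzzles)
        for guess in range(N):
            candidate = weak_decrypt(guess, chosen)
            if candidate[:4] == b"PUZL":
                bob_id = int.from_bytes(candidate[4:8], "big")
                bob_key = candidate[8:]
                break

        # Bob announces bob_id in the clear; Alice looks up the shared key.
        assert alice_table[bob_id] == bob_key
        # Alice and Bob each did O(N) work, while Eve, who sees only the puzzles
        # and bob_id, must solve about N/2 puzzles on average: Theta(N^2) work.
        # That quadratic gap is what Barak and Mahmoody show is essentially the
        # best possible relative to a random oracle.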

    Thanks! 🙂

  245. tobias Says:

    Great!
    This is not only about computational complexity theory, but about a mindset that says something like: things with “theory” in their name are less real than the really real, “physics”-kind of things.

  246. Raoul Ohio Says:

    Rahul:

    Sorry to hear John Sidles is in the doghouse again. Actually, I was worried that he was lost on a desert island again!

    It is hard to figure out a system where JS is considered more obnoxious than LM.

    rrtucci:

    Thanks for reminding me about “accidental degeneracy”.

  247. Charles Says:

    Scott, I was a little surprised to see you put a figure as low as 99% on P != NP (but then again, you did mention balancing one side against the other…). What kind of odds would you put on the Exponential Time Hypothesis? It seems far shakier to me — I think not-ETH is hundreds of times as likely as P = NP — but I’d like to know what you think.

  248. Scott Says:

    tobias #245: I agree that some people have that mindset—but presumably Lubos doesn’t hate everything with the word “theory” in its name! 😉 For him, I think it really is just a simple matter of “nothing that anyone else knows but I don’t know can possibly be that important, since virtually everyone besides me is an idiot.”

  249. Scott Says:

    Charles #247: P[ETH]=0.97. (And yes, I’m bending over backwards to be open-minded with these estimates—ironic, given the criticisms of the know-nothingists…)

  250. anon Says:

    Does anyone else have the comments section suddenly become alternating grey and white backgrounds? (the text on grey backgrounds is hard to read)

  251. Joshua Zelinsky Says:

    Question related to the ETH: The fact that Grover’s algorithm is essentially best possible suggests strongly that if ETH holds then ETH should still hold for quantum computers. Can this intuition be made rigorous? (This is different from the observation that if Grover were counterfactually fast then we’d have NP in BQP.)

  252. Scott Says:

    Raoul #246: John Sidles gets out of the doghouse on April 24th. Lubos, not until March of 2017 (and even then, his ban will be renewed if he’s judged a continuing threat to reason 🙂 ). I think that accurately reflects the differing severity of their offenses.

  253. Scott Says:

    Joshua #251: You can certainly formulate a quantum analogue of the ETH. But we don’t have any result of the form, “if ETH holds classically then it also holds quantumly” (just like we can’t prove, e.g., that P≠NP implies NP⊄BQP). The issue is that a quantum algorithm might be able to see structure that was invisible to a classical algorithm. There were even relatively-concrete speculations a while back about why the quantum adiabatic algorithm might be able to solve 3SAT in something like 2^{O(√n)} time. They never panned out, but because of such possibilities, the probability that I’ll assign to the Quantum ETH is only, let’s say, 92%.
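
    (For concreteness, one natural way to state that quantum analogue, mirroring the classical ETH: there is no quantum algorithm that solves n-variable 3SAT in \( 2^{o(n)} \) time.)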

  254. Greg Kuperberg Says:

    Scott – The version of this that I find the most compelling (as I alluded above) is not exactly exponential in the size of the input, but exponential in the size of the amount of non-determinism. Consider for instance CircuitSAT, for concreteness with bounded gates. I might conjecture that CircuitSAT at linear depth and linear width (which of course implies a quadratic number of gates) requires time Ω(2^n) classically and Ω(2^{n/2}) quantumly, where n is the number of non-deterministic input bits of the input circuit, i.e., the size of the witness.

    Is there a standard version of this type of conjecture? It looks both stronger and weaker than the official “exponential time hypothesis”. (Weaker because the decision problem is harder; stronger because the conjecture is sharper than 2^Ω(n).) Or, is there a reason that I should like ETH better than a conjecture of this form?

  255. Scott Says:

    Greg #254: Your conjecture about CircuitSAT is plausible and interesting. However, note that the analogue of your conjecture for linear-size Boolean formulas is known to be false, at least in its classical version! (I’m not sure about the quantum version.) Santhanam (see here) showed how to solve FormulaSAT for such formulas in 2^{n-Ω(n)} time.

    As another neat observation, Ryan Williams has shown that, if your conjecture were more than slightly false—that is, if CircuitSAT were solvable in anything less than 2^n/poly(n) time—then it would follow that NEXP⊄P/poly. (See this survey for more.)

    The main reason why people like the ETH is that once you assume it, you can show that many other interesting NP-complete problems also require 2^{Ω(n)} time (and you can even get consequences “elsewhere in the complexity hierarchy,” as my paper with Dana and Russell Impagliazzo did). The “CircuitSAT requires Ω(2^n) time” conjecture is less useful as a starting point for reductions, since a typical reduction from CircuitSAT would produce way more than linear blowup in the amount of nondeterminism needed.

    (Addendum: Maybe an even clearer way to see the usefulness of the ETH is that, when combined with the latest PCP theorems, it implies, as your conjecture doesn’t, that even many NP approximation problems require close to 2^{Ω(n)} time.)

    If you want deeper answers than that, you should go ask Ryan Williams.

  256. Sniffnoy Says:

    Totally tangential question, if you don’t mind: I initially misunderstood “bits of nondeterminism” as “bits of randomness”. Of course, when you’re working with randomized algorithms, the number of bits of randomness used is something you can actually count and put limits on. At least, this is easy to count if you think of things as a “computer with an RNG hooked up”. But a quantum computer can’t be thought of that way. Is there any sensible quantum equivalent of “number of bits of randomness used” (either maximum or expected or whatever), and (just for ordinary probabilistic computers) is there any easy way to count the number of bits of randomness used (either maximum or expected or whatever) when sticking to a “state vector” formulation of how it works?

  257. A.Vlasov Says:

    Sasho #191, I think your issue is only weakly related to my question (e.g. if a “random” string is designed in such a way as to put the calculation on a path leading to success). I am not asking about such a thing; I am asking about an oracle that, for any given program, always produces the true answer as to whether it stops or not. I could explain why such a question appears in relation to string theory, but it would not be interesting to either computer scientists or string theorists (and might cause a ban for the next 3000 years on all blogs of both communities).

  258. Dániel Marx Says:

    Greg #254: ETH essentially says that $n$-variable 3SAT has no $2^{o(n)}$ time algorithm (“has to be exponential in the nondeterminism”). It is an important and nontrivial consequence (the Sparsification Lemma) that this is essentially equivalent to saying that 3SAT has no $2^{o(n)}$ algorithm, where $n$ is the length of the input. It is this latter statement that can be used conveniently to prove lower bounds for other problems and get some “electric fence” results, see e.g. our survey:

    http://www.cs.bme.hu/~dmarx/papers/survey-eth-beatcs.pdf

    Here are some example results in the context of fixed-parameter tractability (assuming ETH):

    – k-path on planar graphs can be solved in time $2^{O(\sqrt{k})}poly(n)$, but not in time $2^{o(\sqrt{k})}poly(n)$.

    – deciding if a graph has an embedding into the line with distortion k has a $2^{O(k\log k)}poly(n)$ algorithm, but no $2^{o(k\log k)}poly(n)$ algorithm.

    – deciding if the edges of a graph can be covered by $k$ cliques has a $2^{2^{O(k)}}poly(n)$ algorithm, but no $2^{2^{O(k)}}poly(n)$ algorithm.

    PS: Hope the latex equations work.

  259. Klum Says:

    As I said before, I’m uncertain about the validity of the reasoning that is taking place here. Scott and others are trying to establish a new kind of mathematical statement, something like: “the mathematical statement S is most probably true” (where here S=”P\neq NP”).

    I don’t know what the status of such a statement could be. Is there a sound foundation for such reasoning? And if yes, then wouldn’t it be tantamount to a new kind of science, one that could be called “empirical mathematics”? Maybe we should not even strive very hard for proofs, but just fish for evidence every time we face a hard mathematical problem, and then we would get to have frequent “breakthroughs” in mathematics, with only a slight compromise of validity?

    Perhaps, I’m wrong, and there is indeed such a subject called “empirical mathematics”? Is there?

  260. Scott Says:

    Klum #259: No, you’ve totally misunderstood me. There is such a subject as “empirical mathematics,” but the point of this post wasn’t to advocate it. I don’t advocate an approach to math different from the one mathematicians take now: just keep doing whatever’s successful! Rather I claim that, when doing their actual research (as opposed to writing up the results later), mathematicians already think more-or-less like empirical scientists, and always have—and they couldn’t possibly succeed otherwise. We could never prove things if, before we proved them, we didn’t already have a mass of ideas in our heads about what was likely to be true or false, which ideas were promising or unpromising, etc. And the way we form and refine those ideas is via a process of “experimental testing”—trying things out, collecting data, etc.—that I don’t think is fundamentally different from what any other scientist does. If you like, then, the actual proved theorems are just the tip of the iceberg of mathematical thought—a tip that couldn’t exist without the whole rest of the iceberg underneath!

    Of course, when writing up their results, mathematicians (less so theoretical computer scientists) are infamous for trying to “erase their tracks”: making it look like the definitions, theorems, and proofs just sprang fully-formed from the brow of Zeus, without the mass of ideas and intuitions underneath them. This, probably, has contributed to the absurd idea that mathematical understanding consists only of the rigorous tip, that the ideas underneath don’t even exist.

    (Note that Lubos, absurdly, believes the only-the-theorems theory, but only for discrete math, not for continuous math! What’s the difference between the two? Simply that, for continuous math, he himself feels like he understands the intuitions. For discrete math, he doesn’t—and he therefore insists that there can’t be any, since otherwise people who are his inferiors would understand something that he doesn’t.)

    If there’s one thing in mathematical practice that I would change, it’s this practice of “erasing one’s tracks.” For not only does it make papers almost impossible to understand, it’s also contributed to an ironically wrong impression: that because we mathematicians can often prove our beliefs, therefore we don’t know anything until we do! This impression ignores that, even if it weren’t for the proofs, we could still be at least as confident in many statements as the physicists are in things they call “laws of nature.” And our confidence would come from a process of testing, criticism, and appeal to Occam’s Razor not at all dissimilar from what the physicists do. The fact that we can often prove things is a wonderful icing on the cake, but we shouldn’t let it detract from all the layers of understanding beneath that let us put on the icing in the first place.

    Look, you yourself admitted that, when choosing research problems, you assume P≠NP “for all practical purposes”! Whatever your reasons are for doing that, presumably they go beyond the content of the theorems that have already been proven. What you don’t seem to have understood is that, between the two of us, I’m the one who says you should keep doing your research the way you are now, while Lubos is the one who says you should be doing something completely different (for example, you should spend equal time searching for a proof of P=NP).

  261. Bogdan Says:

    Let us make a (natural) assumption that the P vs NP problem is difficult. More concretely, that the shortest proof of either case is huge (say, it needs 100 more years of math research to be written down). Then there are 2 cases:
    1) P != NP, but the shortest proof is huge.
    2) P=NP, but the shortest proof is huge (moreover, assume that the shortest algorithm solving SAT in subexponential time is huge also).
    And now the question is why 1) is more likely than 2).
    All the facts you are talking about can be perfectly explained by both 1) and 2). In case 1), the reason is trivial. In case 2), the parameters are as they are and not slightly better because otherwise this would yield a relatively short proof that P=NP, but such a proof does not exist by hypothesis, contradiction.

  262. Scott Says:

    Bogdan #261: I addressed this already in comment #45. Briefly, I’d say 1) is much more likely than 2) because of Occam’s Razor; in case 2), we’d have the burden of explaining why P was divided up into two gigantic components that were “ultimately” the same component, but only because of a hundred-page proof, with no hint of their equality before that proof.

    Or to put it differently: in asking me to accept your initial assumption, that the solution of P vs. NP requires a hundred years of research, you’ve already made it weird and inexplicable to me that they’d turn out to be equal. Algorithms, when they exist, are just much, much easier to find than lower-bound proofs!

  263. Scott Says:

    Mike #237:

      And you know, as far as the math goes, in for a dime, in for a dollar.

    Personally, I’d describe the leap from quantum computing to MWI as more like: “in for a dime, in for the entire universe’s GDP.” To which my reaction is: “well, maybe, but what does it even mean to be in for the entire universe’s GDP? If I were in for that much, then who would the counterparty be?” etc. 🙂

  264. Mike Says:

    ” what does it even mean to be in for the entire universe’s GDP?”

    I know it’s the well-worn riposte (and I apologize in advance), but I imagine that’s pretty much the same feeling a lot of folks had when it began to dawn on them that they should take the math and science seriously even if it meant that the earth revolved around the sun and the heavens were filled with an ‘infinite’ number of suns. 😉

  265. Scott Says:

    Mike #264: Well, in the earlier case, you could tell people exactly what it meant: it meant that in principle, if they got into a really fast spaceship, they could visit all those other suns, see the tiny earth orbiting the much larger sun for themselves, etc. And then they could even return to earth and tell all their friends about it.

    By contrast, the very structure of QM implies that you can’t have a “multiverse spaceship” that lets you visit the other Everett worlds. Presumably the only way to make their existence manifest, would be to upload your own consciousness onto a quantum computer and then “experience life in Hilbert space.” But then that’s exactly the thing that I’m not sure is sensible: what if decoherence (and the creation of stable memories and records) is an essential condition for consciousness? And in any case, even if it is sensible, you can’t tell anyone else afterwards what it was like! 🙂 (Since in order to communicate, the other people would first have to measure you, in which case they’d simply get whatever outcomes QM predicts with the appropriate probabilities.)

  266. Mike Says:

    “. . . you could tell people exactly what it meant: it meant that in principle, if they got into a really fast spaceship, they could visit all those other suns, see the tiny earth orbiting the much larger sun for themselves, etc. And then they could even return to earth and tell all their friends about it.”

    Actually, one didn’t know very much (if any) of that stuff back then — we learned it in the interim, you know it now. My hope is that somewhere on the path to a full theory of quantum gravity what seemed like ontologically superfluous aspects of the formalism will come to be seen as necessary pieces of the puzzle — but what do I know, actually very little about this, and so for me it remains more literature than science. 🙂 But fun nevertheless.

  267. Scott Says:

    Mike #266: No need for the “what do I know” self-deprecation; your position is one that lots of extremely thoughtful people accept and that I enjoy debating.

    But for me, your mention of quantum gravity gets to the heart of the matter. If quantum gravity were to modify QM, in such a way that “what seemed like ontologically superfluous aspects of the formalism [would] come to be seen as necessary pieces of the puzzle”—well then, that’s exactly the sort of thing it would take for me to declare the other Everett branches as “wholeheartedly real.” Alas, based on (e.g.) the black hole information puzzle, my personal suspicion is that quantum gravity might move us in the opposite direction: taking what we now see as “necessary pieces of the puzzle” and moving them into the “ontologically superfluous” category! 🙂

  268. anon Says:

    Daniel Marx #258

    you need double dollar delimiters for mathjax

    your last equation should have a small o, I think:

    $$2^{2^{o(k)}}poly(n)$$

    (re the grey background problem I mentioned – it only happens in firefox, with long comment sections, so I’ll just use chrome I think)

  269. Douglas Knight Says:

      Charles #247: P[ETH]=0.97. (And yes, I’m bending over backwards to be open-minded with these estimates—ironic, given the criticisms of the know-nothingists…)

    I’d rather you say what you think, not bend over backwards. Perhaps it would be helpful not to talk directly about P(P=NP), but say something like P(!ETH)=3 P(P=NP).

  270. Bogdan Says:

    Scott #262: It is natural that some problems in P have easier algorithms, and some others more complicated ones. Let L(A) be the shortest description of a polynomial-time algorithm for problem A, including the proof that it works. Then L(shortest path) is about 1 page, L(linear programming) maybe about 10, L(some problem connected to graph minor theory) maybe about 30-50; why not imagine L(graph isomorphism)=1,000 pages, L(factoring)=10,000 pages, L(SAT)=100,000 pages or more? The situation seems natural and similar to other areas of mathematics: some equations are easy to solve, while some others (like Fermat’s Last Theorem) require hundreds of pages to prove.

    Now, “why P was divided up into two gigantic components” – because it turned out to contain one problem with a very nontrivial algorithm (SAT), which can be reformulated in a variety of different ways, forming the second component. What is wrong?

  271. Scott Says:

    Douglas #269: Well, what I really think is best described by a state of Knightian uncertainty. 😉 Part of me is utterly persuaded by the “electric fence” argument, and thinks that the probability of P≠NP is something like 99.999999%. Another part of me fears that, like the 19th-century physicists, we complexity theorists are all being held back by some misconception that we haven’t even articulated yet, but that will be obvious to future generations. This other part of me is still pretty persuaded by the arguments, but assigns “merely” a 99% probability to P≠NP. What I meant by “bending over backwards” is that I’m reporting the probabilities of my less confident half.

  272. Bogdan Says:

    I am thinking: what if a very rich person, before dying, put $10^9 in the bank and invented 10^3 “assets”, each paying $10^6 to the owner on the day that P=NP is established (and $0 if P!=NP)? What would the price of such an “asset” be today, and how actively would it be traded? Note that to make a profit it is not necessary to wait until P=NP is proved: one may buy it today for $5 and sell it later for $10, if lucky. It would be really interesting to see how intermediate results in either direction are reflected in the price 🙂

  273. Greg Kuperberg Says:

    The problem that I have with both Everett’s “many worlds” and Bohm’s wave-surfing particles is that they completely go against the spirit of quantum mechanics as non-commutative probability.

    Purely for their own reasons, probabilists in mathematics describe a randomized system in terms of a commutative algebra of observables. It is really a very natural step to allow this algebra to be non-commutative, and what falls out is the quantum probability underpinning of quantum mechanics. Operator algebraists, the people who study the relevant kind of non-commutative algebras, have always had an easier time accepting the Copenhagen interpretation than other pure mathematicians, because it resonates with their own mathematical intuition. Unfortunately, even many quantum physicists tend to dismiss this important mathematical model as mere formalism — although when physicists actually need these ideas, they decide that they like them after all.

    So, I see Everett’s and Bohm’s work as something like Tycho Brahe’s model of the solar system. He was forced to accept the predictions of the Copernican model of the solar system. Then, for no reason other than nostalgia for Ptolemaic astronomy, Brahe rewrote it in the Earth’s reference frame to call the Earth fixed. Likewise, Everett forks the universe, to be able to talk as if a non-deterministic universe is deterministic after all. Bohm thought of a different and incompatible trick for the same general purpose.

    In fact both Bohm and Everett also go against the spirit of special relativity, again without technically violating it.

  274. Scott Says:

    Bogdan #270: For me the issue is this. In the hypothetical scenario you describe, almost all natural problems in P would “cluster” into two gigantic families. In the first family would be thousands of natural problems with 1- or 2- or 5-page algorithms. In the second family would be other thousands of natural problems, but all of which only have a thousand-page algorithm (namely, the supposed algorithm for SAT, plus a suitable NP-completeness reduction). There would be very few natural problems in between, or for that matter that require more than thousand-page algorithms: it would mostly just be these two families.

    Of course you could still repeat your question: “what is wrong?” And at a formal logical level, nothing is wrong. But the situation would be extraordinarily unnatural: why should the set of natural polynomial-time algorithms divide into these two gigantic clusters, with so little in between? The fact that something is a formal logical possibility doesn’t mean it’s a plausible one, or one that it’s productive to spend much time on. That’s a fact that all of us rely on in our day-to-day lives, and that we also rely on in practice whenever we do actual research (e.g., even if you can’t rule out a priori that proving some simple probability estimate for a paper you’re writing would require invoking Fermat’s Last Theorem, you probably won’t spend much time on it). All I’m advocating, in this post, is that we don’t need to be ashamed of working in this way.

  275. Scott Says:

    Bogdan #272: If you visit scicast.org (a site I’m slightly involved with, as a “question curator”), you can indeed place bets about P vs. NP and many other open math problems. (But alas, not with real money, for legal reasons.)

  276. Bogdan Says:

    Scott #273
    Ok, I completely understand that we are not talking at the formal logical level, and my question “what is wrong?” should be understood as “why is this not natural?”
    So, you claim that the main mystery is why almost any natural problem in P is either easy or a reformulation of SAT, with very few natural problems in between.
    But can you explain this mystery if P!=NP? Why does almost every natural problem in NP belong to one of “two gigantic families”, P or the NP-complete problems, with very few natural problems in between? At least the author of the blog post below has no idea.

    http://blog.computationalcomplexity.org/2014/03/why-are-there-so-few-intemediary.html

  277. A.Vlasov Says:

    Greg #273, it was shown that the Everett interpretation can be compatible with special relativity, and the “spirit of relativity” is a rather specific argument. I also do not quite understand how non-commutative probability could explain the existence of Werner states: these states are classically correlated, but their difference from “true” quantum states is a density operator proportional to the unit, so the difference commutes with everything.

  278. Bogdan Says:

    Scott #275
    Thank you, interesting. However, real money is very important in this context. Some of my friends are often “99.999999% sure” that their favorite football team will win, but do not agree even to a FAIR 1-1 real-money bet. This is not very convincing. In contrast, if I go to a bookmaker’s website and see that they offer 10 pounds for 1 on the bet that team A loses, that already convinces me that team A is a strong favorite 🙂

  279. Scott Says:

    Bogdan #276: Ah, good! Yes, I think I more-or-less can explain that mystery if P≠NP—or at any rate, way more easily than if P=NP.

    Let me start with another question: in computability theory, do you find it mysterious that almost all problems are either computable, or else at least as hard as the halting problem? I.e., that there are virtually no natural problems with Friedberg-Muchnik-like intermediate status? I claim that we shouldn’t find it so mysterious, at least not after we gain some experience with computability. The way I think about it, the halting problem exerts a powerful “gravitational pull” on all other r.e. sets, “wanting” to make them equivalent to itself. Any time you see a problem involving unbounded search over all natural numbers, the thing that requires special explanation (if true) is why you can’t just build some gadgets embedding an arbitrary Turing machine into your problem, such that an integer with the desired properties exists if and only if the Turing machine halts. And sometimes there will be such an explanation, but almost all explanations will be so powerful that they even make the problem computable.

    Now, if you agree with that, then I’d say that essentially the same is true about the chasm between P and NP-complete. Problems like SAT exert a powerful “gravitational pull” on other combinatorial search problems, in the sense that as soon as you can encode logic gates like AND, OR, NOT into your favorite problem, BOOM! You’ve reduced SAT to your problem, and yours is NP-complete also. So the thing that requires special explanation is when a search problem has some mathematical structure that lets it resist the gravitational pull of SAT, even while still being potentially hard. Unlike in the computability case, we do have natural examples of such search problems—factoring, graph isomorphism, etc.—but there aren’t that many. The “natural” state of affairs for a search problem is to be able to encode a Boolean logic circuit.

    (See also this CS Theory StackExchange answer of mine, where I made much the same point.)
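
    To make “encoding logic gates into your problem” concrete, here is a minimal Python sketch of the standard gate-to-clauses (Tseitin-style) encoding; the variable numbering and the DIMACS-style signed-integer convention are just illustrative choices:

        def and_gate(out, a, b):
            # Clauses enforcing out <-> (a AND b); literals are signed ints, DIMACS-style.
            return [[-out, a], [-out, b], [out, -a, -b]]

        def or_gate(out, a, b):
            # Clauses enforcing out <-> (a OR b).
            return [[out, -a], [out, -b], [-out, a, b]]

        def not_gate(out, a):
            # Clauses enforcing out <-> (NOT a).
            return [[-out, -a], [out, a]]

        # Example: encode g = (x1 AND x2) OR (NOT x3), then assert that g is true.
        # Variables 1..3 are the circuit inputs; 4..6 name the gate outputs.
        clauses = and_gate(4, 1, 2) + not_gate(5, 3) + or_gate(6, 4, 5) + [[6]]
        # Satisfying assignments of `clauses` correspond exactly to inputs that make
        # g true, which is the sense in which any problem able to express AND/OR/NOT
        # gets pulled into NP-completeness.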

  280. Bogdan Says:

    Scott #279
    Thank you! I like this explanation. Because SAT exerts the powerful “gravitational pull” you described, it can be reduced to any problem with sufficient combinatorial structure; therefore almost all natural problems in NP are naturally divided into 2 classes: NS “no structure” and S “with sufficient structure”. But, for me, this division holds in the same way, whether P=NP or not!

    If P!=NP, the class NS is P, and the class S is NP-hard problems.

    In the same way, if P=NP but SAT turned out to have a very non-trivial algorithm, the class P is naturally divided into trivial problems NS and problems with rich structure S. The latter have no easy algorithm exactly because they are as hard as SAT, and there are so few natural problems in between exactly because of the powerful “gravitational pull” of SAT.

    So, once again, I like your explanation, but, for me, it explains the division of P in the P=NP case almost at the same level of “naturality”.

  281. Peter Says:

    Rahul 212: One example of an arbitrarily high-degree polynomial running time is the following. You want to find a tight Hamilton cycle in a random 3-uniform hypergraph with edge probability $$n^{\varepsilon-1}$$. This is known to almost surely exist even with $$\varepsilon=0$$, but no polynomial time algorithm, randomised or not, is known that finds one with high probability. But if $$\varepsilon>0$$ is fixed, then a polynomial time randomised algorithm is known: its degree is something like $$1/\varepsilon$$. But this is almost certainly just an example of algorithm designers not coming up with the right ideas yet.

  282. Philip White Says:

    I have lost track of the thread here, but I wanted to play the devil’s advocate on my favorite metaphor of this piece.

    In the example, when scientists look at the mating habits of green and yellow frogs, they are using the scientific method (including experiments) and statistics to draw their conclusions. So when a scientist proclaims, “I have rejected the null hypothesis and supported the notion that green frogs and yellow frogs are unlikely to mate under any circumstances, although I lack mathematical proof,” this is generally the result of some sort of sample from both populations of frogs.

    In real/“scientific” life, the population of frogs on earth is finite. Thus, when scientists draw from their observations of a collection of frogs, they can use their understanding of how representative this set is to draw conclusions about the entire population of frogs. If a sample that is taken can be reasonably assumed to be random, then of course the sample has some ability to represent the whole population.

    On the other hand, there are infinitely many NP-complete and P problems. So I suppose I am a *little* skeptical of the claim that there is a “99.99999%” chance that P != NP based on this metaphor, because it’s entirely possible that, given the existence of infinitely many green and yellow frogs, we might have been looking in the wrong places for the cross-species mating frogs all along.

    Further, while the electric fence metaphor is also pretty neat, I think this is just a fancy way of saying “it would be really surprising if P = NP were true.” At the same time, I wonder if it would also be surprising if computers could compute the truth value of any natural language sentence accurately and efficiently. I think it’s “beyond modern computer science,” but not impossible if the Church-Turing thesis holds, since every human being can do it.

    My favorite argument for why maybe–just maybe–P = NP might be true is this: There are so many algorithms that nature has found that we have never found. These algorithms often are found in the set of “unique human capabilities”–e.g., vision, natural language processing, speech, the capacity to run and do other physical exercise with the human body, the capacity to compose fiction, etc. However, if P = NP, we could use Occam’s razor to build many of these capabilities. What if there’s no other way to find them except for brute-force? If we can’t do it, how did nature do it?

    Just my thoughts!

  283. Scott Says:

    Bogdan #280: Suppose that, for some reason, no one had proved that the halting problem was uncomputable. Instead, over half a century, they’d “merely” noticed that there are lots of computable problems, and then this huge cluster of other problems, which “succumb to the gravitational pull” of the halting problem because of their ability to encode Turing machines. Wouldn’t it then be bizarre if the problems in the huge cluster ultimately turned out to be computable? If their “mutual gravitational pull” had sufficed to attract them all to each other, then why didn’t it suffice, in all that time, to attract any of them “downwards” toward the computable problems?

    Actually, here’s a better way to ask the question. Can you give me a single example, in the history of mathematics, where the above is basically what happened? I.e., where for decades, a huge collection of objects all got classified into one of two giant equivalence classes, and only afterward did it turn out, to everyone’s surprise, that there was only one equivalence class? Now I’m genuinely curious whether such an example exists…

  284. fred Says:

    Scott,
    related to your proposal (https://scottaaronson-production.mystagingwebsite.com/?p=252) to find a lower bound for permanent vs determinant for a 4×4 matrix by doing aggressive pruning on all the possibilities… have there been any attempts/progress on this?
    I came across “Optimal Sorting Networks” (http://arxiv.org/pdf/1310.6271v2.pdf) where they prune from a ~10^30 solution space by encoding extra conditions as SAT and using a practical solver.
    But that solution space is tiny compared to the one with the 4×4 permanent (I quickly came up with 10^123 as well, without even considering the extra combinations due to the possible operations ×, −, +, /).

  285. fred Says:

    Greg #273
    “The problem that I have with both Everett’s “many worlds” and Bohm’s wave-surfing particles is that they completely go against the spirit of quantum mechanics as non-commutative probability.”
    Well, wasn’t that the whole idea? 🙂
    DeBroglie/Bohm/Einstein wanted a model where a state exists as a “real” object between observations (my understanding is that the Copenhagen gang said it couldn’t be done and Bohm proved them wrong).
    And Many-Worlds is an elegant way to get rid of the idea of “pure” randomness – for many people an effect without a cause is hard to swallow.
    Unlike models of the solar system, those are interpretations and I don’t think there are actual experiments that could validate one over the other.

  286. aram Says:

    Scott #283: Supersymmetry posits that fermions and bosons are, in a sense, the same. And in the past, progress in physics has meant unifying many things. I’m not sure this counts as what you’re looking for.

  287. Serge Says:

    Scott, you keep assimilating two notions which I think should be more carefully distinguished. For you, it seems obvious that to exist is the same thing as to be reachable. But this is mostly wrong… For example, there probably exist extraterrestrial thinking beings in our Universe, though we quite likely won’t ever be talking to them. And think of neutrinos: they can’t be isolated, but they do exist anyway.

    Of course I’m convinced – like you – that the known P problems and the NP-complete ones are of a very different nature. But, just because a fast solution for the latter doesn’t appear to be findable, you jump to the conclusion that it’s non-existent. I believe this constant but erroneous assimilation of “not findable” with “non-existent” is what has been preventing mathematicians from making much progress on P vs. NP so far.

    With NP-completeness, the notion of existence of a solution isn’t as clearcut as it is with the easier problems. I view NP-completeness a bit like the quantum scale, where the familiar notions of position and speed lose their usual content. An algorithm may exist without being findable by any process of thought – whether artificial or natural. This is, IMHO, what makes NP-complete problems hard – and not the dubious fact that they don’t lie in P.

  288. anon Says:

    Scott #284

    …Can you give me a single example, in the history of mathematics, where the above is basically what happened? I.e., where for decades, a huge collection of objects all got classified into one of two giant equivalence classes, and only afterward did it turn out, to everyone’s surprise, that there was only one equivalence class? Now I’m genuinely curious whether such an example exists…

    Well, maybe not exactly what you want, but consider the status of integrable and non-integrable dynamical systems prior to the discovery of chaos theory: the important distinguishing factor turned out to be something completely different, namely non-linearity and sensitive dependence on initial conditions. No one had realised that before Poincaré at the end of the 19th century, and even then it took many decades before we really understood what Poincaré had discovered in his paper on the stability of the Solar System.

  289. Scott Says:

    Serge #287: No, I’m not “assimilating the notions” of existence and findability. I’m offering an argument that, in specific kinds of mathematical situations—and not in other situations!—the fact that you never see any hint of something gives you pretty strong circumstantial evidence, judging from past experience in mathematics, that the thing doesn’t exist. Even if I’m right, obviously one still wants a proof, rather than just circumstantial evidence.

    I’m curious: would you also find it plausible that counterexamples to (say) the Riemann Hypothesis and Goldbach’s Conjecture exist, and the problem is just that the counterexamples are not “findable by any process of thought”?

    Also, do you have any examples of what I was asking for in comment #283?

  290. Scott Says:

    aram #286: No, I don’t think unifications in physics are an example of what I’m looking for. I don’t just want a “unification” or “relation” between two unrelated-seeming things (e.g., elliptic curves and modular forms)—we have plenty of examples of those. What I’m looking for, rather, is an example where mathematicians spent many decades proving that a large collection of mathematical objects all belong to one of two equivalence classes—with hundreds or thousands of examples in each class—and only later did it turn out (to everyone’s surprise) that the two giant equivalence classes were really just one class.

  291. Sam Hopkins Says:

    In what sense are you dealing with “equivalence classes” and not just sets?

  292. Scott Says:

    Sam #291: OK, there’s a technicality here. P and the “problems equal in difficulty to NP-complete problems” (including coNP problems, P^NP problems, etc.) are both equivalence classes of languages, with the equivalence relation being defined by whether two languages L and L’ are mutually polynomial-time Turing-reducible to one another.
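
    (In symbols: \( L \equiv L' \iff L \le_T^p L' \ \text{and}\ L' \le_T^p L \), where \( \le_T^p \) denotes polynomial-time Turing reducibility.)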

    If you want just the NP-complete languages, and not the coNP ones, etc., then you can take equivalence under polynomial-time Karp reductions as your equivalence relation. In that case, your two equivalence classes will be
    (1) the NP-complete languages, and
    (2) the languages in P that are not the empty language or its complement.

  293. Sam Hopkins Says:

    It just seems to me that what is at issue is whether two sets are equal, when a priori all that is obvious is that one is contained in the other. But results along these lines are extremely common in math. So for instance, the set of all manifolds that admit a C^1 structure is the same as the set of those that admit a C^∞ structure, although at first all that is obvious is that the latter is a subset of the former (and note we could not replace C^0 with C^1; I think this is a theorem of Whitney). Even MIP = NEXPTIME is such a result, correct?

  294. Scott Says:

    Sam #293: Personally, I’d say that the phenomenon of NP-completeness means that we really are dealing with two equivalence classes here. Your manifolds example would be analogous if there existed a huge set of manifolds known to admit a C^∞ structure, and then a second huge set known to be “C^1-complete,” meaning that they admit a C^1 structure and that it’s as hard to put a C^∞ structure on them as it is to do so for any manifold with a C^1 structure. And if those two classes had been studied (and had expanded in size) for many decades, with relatively little in between them, before someone finally managed to show that they were the same.

  295. Serge Says:

    Scott #289: I won’t try to provide the counterexample you asked for in #283… because I think there’s none! As I said, we agree with each other on this matter: SAT and SORTING belong to very different equivalence classes. It’s just that we’re not using the same equivalence relation. You say that a fast algorithm for SAT just can’t exist, while I say it might exist, even though it can’t be found. And I’ll tell you why it can’t: there are many, many different fast solutions to SORTING, whereas there might exist only one bit string, one single Turing machine, which solves SAT efficiently. So you would have probability zero of stumbling upon it. Of course I have no proof of this, but it seems very plausible to me.

    Regarding RH: yes, *all* the zeros are on the same vertical line… but the *proof* of this might well be of the above-mentioned kind – unfindable. Numbers and algorithms are very different objects, for the mere reason that we also think with algorithms in our brains, and this circumstance places us in the same awkward situation as the famous “observer” in quantum mechanics… Moreover, RH asks for infinite precision in the prime number distribution, so its proof might be, likewise, infinitely difficult to find…

    As for the Goldbach conjecture – and also the Collatz conjecture – I think they’re true and will eventually get proved once mathematics is sufficiently advanced.

  296. Scott Says:

    Serge #295:

      there might exist only one bit string, one single Turing machine, which solves SAT efficiently.

    No, actually that’s not possible. I invite you to prove, as an exercise, that if there’s one Turing machine that puts SAT in P then there are infinitely many. 🙂

    More to the point: so if I understand you correctly, your conjecture is that P=NP, but that there’s no proof in standard formal systems such as ZF set theory? Even then, of course, the possibility would remain that we could discover the algorithm, and figure out empirically that it works, even without being able to prove it.

    In any case, I don’t see any support for your conjecture in the history of mathematics—we have essentially no experience of “natural” arithmetical conjectures turning out to be undecidable, and tons of experience of their ultimately turning out to be provable—but I do find it interesting that there’s at least one flesh-and-blood human being who actually holds your position (namely, you)! 🙂

  297. asdf Says:

    Bogdan #280 etc.: if SAT has a simple algorithm then all other problems in NP have essentially the same algorithm (by Karp reduction).

    Also, there is an existing, known, simple algorithm (due to Levin) that solves SAT, whose (unknown) running time is polynomial if and only if P=NP. Basically, given a SAT instance, you enumerate all Turing machines and dovetail-simulate all of them in parallel until one of them finds the answer to your SAT instance. So whichever one of those Turing machines asymptotically solves SAT fastest will eventually “win” as the problems grow larger. This will be a polynomial-time algorithm if P=NP.
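
    In rough Python, the dovetailing idea might be sketched as follows (the solvers enumeration and the check verifier are assumed inputs, and the simple round-robin schedule used here gives polynomial rather than constant-factor overhead, which is still enough for the claim above):

        from itertools import count, islice

        def universal_search(phi, check, solvers):
            """Dovetail over an (infinite) enumeration of candidate solvers.

            Each element of `solvers` is a generator function: solver(phi)
            yields candidate assignments (or None) one "step" at a time, and
            check(phi, a) is a fast verifier.  In stage t we run the first t
            solvers for t more steps each, so any fixed solver that succeeds
            within T steps is caught after time polynomial in T.  If phi is
            unsatisfiable, this loops forever (a caveat discussed later in
            the thread)."""
            running = []
            source = iter(solvers)
            for t in count(1):
                try:
                    running.append(next(source)(phi))   # admit the t-th solver
                except StopIteration:
                    pass                                # finite toy enumerations are fine too
                for gen in running:
                    for candidate in islice(gen, t):    # t more steps of this solver
                        if candidate is not None and check(phi, candidate):
                            return candidate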

    I’d like to know if there is a way to effectively enumerate all polynomial-time TM’s. There is a known enumeration of P-time languages but that’s different. If there’s a way to list the P-time TM’s then P!=NP becomes a Pi-0-1 sentence like Goldbach’s conjecture.

  298. Serge Says:

    Scott #296: sorry for the elementary mistake – though IMHO the essential point remains that Nature is protected from SAT-breaking by a wall of probability zero.
    I hadn’t realized I was the only one with this opinion. Thanks a lot for letting me know. 🙂

  299. Darrell Burgan Says:

    I’ve been thinking about complexity classes and the whole notion of computability, and so I’d like to ask another newbie question. What is the relationship of complexity theory as applied to computer science to theoretical physics? I’m just a fascinated software engineer, but if it can be shown that a particular abstract problem does not have a tractable solution, doesn’t that say something about physics problems that have the same structure?

    I ask this because I’ve read several times that the P!=NP question is not a scientific question but a pure math question. But it seems like it does have something to say about whether certain physics problems will ever have tractable solutions … ?

  300. Greg Kuperberg Says:

    Actually, real analytic (C^ω) manifolds were studied (a little) as a separate class from smooth (C^∞) manifolds, until the Morrey-Grauert theorem established that every smooth manifold has a unique real analytic structure. I don’t think that this is the best analogy for P vs NP.

    I think that conjectures such as the exponential time hypothesis are more to the point. In the case of real analytic vs smooth manifolds, no one had any prior sense of what the difference would be. In the case of P vs NP, ETH gives you a refinement of NP and NP-hardness in which the refined NP-hard problems are thought to specifically take exponential time, or even (in some versions) non-uniform exponential circuit complexity.

    In other words, these ideas of O(n^10000) time algorithms are really only fiddling with certain fine print in the P vs NP conjecture that is meant to make it easier to state and prove. You can view what people really believe (or what I think they believe) as follows:

    Shannon and Lupanov showed that the worst-case Boolean function on n bits can be expressed with a circuit with Θ(2^n/n) gates. If you relax this somewhat, the worst Boolean function certainly requires 2^Θ(n) gates. The non-uniform version of ETH, together with the sparsification lemma, means that there is a specific Boolean function on n bits, derived from 3-SAT, which is in NP and that people think requires 2^Ω(n/log(n)) gates.

    Unfortunately sparse 3-SAT does not quite get you all the way to 2^Θ(n), because there is logarithmic overhead in writing down a sparse 3-SAT formula. Still, that’s clearly a quantified gap! And one for which you could hope for experimental data for small n.

    Of course you could say that even if the circuits require exponentially many gates for small n with some predictable exponent, it’s all an illusion and you need only polynomially many gates for large n. But that becomes a lot like the situation with data for number-theory conjectures, such as the conjecture that there are infinitely many twin primes.

    Is there a known decision problem that falls under ETH, and such that the encoding is linear (rather than merely quasilinear) in the number of nondeterministic bits?

  301. asdf Says:

    Scott #296, Hilbert’s second and tenth problems both turned out to be undecidable. Calling them “unnatural” is sort of like the CS tendency of calling any useful technique that came out of AI research “not AI,” and then claiming AI hasn’t produced anything useful ;).

  302. Greg Kuperberg Says:

    Actually, regardless of ETH, probably cryptography allows you to make an example of a Boolean function that is in NP, and that looks like it goes all the way up to the Shannon-Lupanov limit of circuit complexity of Θ(2^n/n) gates. But that still leaves the question of whether there is an example that looks like ETH, or otherwise that looks more elegant than some cryptographic nightmare.

  303. asdf Says:

    Scott #296, if P=NP and that fact is unprovable in ZF, then I’d perversely expect it’s because the exponent is defined by one of those functions that grows too fast for ZF to prove it is total. So we’ll never notice it empirically…

  304. Fred Says:

    Scott #290
    “an example where mathematicians spent many decades proving that a large collection of mathematical objects all belong to one of two equivalence classes—with hundreds or thousands of examples in each class—and only later did it turn out (to everyone’s surprise) that the two giant equivalence classes were really just one class.”

    Easy! The class of all imbeciles and the class of string theori… Never mind.

  305. Łukasz Grabowski Says:

    Scott #296: “we have essentially no experience of “natural” arithmetical conjectures turning out to be undecidable,”

    As I said in #121, I personally think non-uniform circuit lower bounds are good candidates for such conjectures. Especially since, in their case, the argument you mention that is valid for the uniform version – namely, that we could find something empirically – is no longer usable.

    Don’t you think this point of view is also suggested, for example, by the situation with the exponent of the matrix multiplication algorithm? Maybe I’m totally wrong here, but I seem to remember that someone (you?) on this blog suggested there might be no O(n^2) algorithm but a sequence of algorithms A_m with times O(n^(2+eps(m))) with eps(m) tending to 0.

  306. Vladimir Puton Says:

    Scott said:” It seems to me that, right now, the world could really use a string-theory blogger who blogs regularly with passion, humor, and technical details, and who isn’t Lubos. (I know Jacques Distler blogged for a while, but he hardly posts anymore.) The lack of such a person creates a vacuum that Lubos then happily fills. Don’t let him be the online face of your community!”

    Well [he said with an inward groan] actually a lot of us have been saying this for years. And nobody steps forward, because
    [a] We are all hoping that LM will just fade away. He has in fact been banned from just about every physics blog, so congrats on joining the club. BTW, even people like Greg K should welcome this: if LM finally realises that he is being completely ignored, he might seek help, or at least start doing something more useful with his life [such as, well, anything really].
    [b] We know that we would be subjected to a constant stream of abuse from LM. Of course we would ban him straight out of the gate, but still all that vitriol would be out there.
    [c] Who cares? Well, as the example of Greg K shows, there are still some people who do take him seriously. And, as you put it, he represents the id of our community: what he says [about physics] is often a grotesquely exaggerated version of what some in our community do think. Alas, the truth is that we have more than our fair share of sociopaths.
    [d] Who cares? Well, who knows who might be the referee of our next paper?
    [e] Conversely, writing a blog about string theory might get us branded as “Lubos Lite”.
    [f] Finally, you may have noticed that morale in our community is not so great, even among those of us working on AdS/CFT.

    So… sorry.

  307. Dániel Says:

    Greg #300 : Maybe you could have hacked Lubos’ intuition engine. (Too late now, I know.) “Except for an insignificant few functions unfairly favored by lazy computer scientists exactly because they have small circuits, all functions have 2^Θ(n)-size circuits. It’s clear to every rational person that it’s nonsense to expect polynomial-size circuits for SAT.” Okay, I’ll have to work on my Lubos impersonation a bit more.

  308. Łukasz Grabowski Says:

    Scott #283, perhaps Tarski’s circle squaring problem?

    https://en.wikipedia.org/wiki/Tarski%27s_circle-squaring_problem

    So the two different classes of objects here would be those equidecomposable with a circle and those equidecomposable with a disk.

    Existence of a polynomial reduction is the analog of existence of the equidecomposition.

    Equality of these two classes was shown by Laczkovich after 65 years from posing the problem. It seems to me that his result was a surprise. At the very least the opposite result would not be more surprising.

    The difference between the P vs. NP situation and Tarski’s circle squaring is that mathematicians of course didn’t really look for “new interesting sets equidecomposable with the circle”. However, there were concrete examples which people wanted to understand, notably Tarski’s original disk vs. square and tetrahedron vs. cube.

    In a similar spirit: “which polyhedra are scissors-congruent?”, i.e., what is nowadays sometimes called Hilbert’s third problem. Sydler showed after 64 years that the Dehn invariant classifies the polyhedra. This seems to have been less surprising, perhaps because the main example, i.e. cube vs. tetrahedron, had been figured out before.

  309. Łukasz Grabowski Says:

    Above in #308 it should’ve been “So the two different classes of objects here would be those equidecomposable with a square and those equidecomposable with a disk.”

  310. Greg Kuperberg Says:

    Vladimir Puton – “Well, as the example of Greg K shows, there are still some people who do take him seriously.”

    For the record, I take Lubos Motl far more seriously as a physicist than as a blogger. And, in some ways, I take him seriously as a human being. Yes, he has a lot of problems of his own making. Yes, he has alienated a lot of people. If you notice, I haven’t said much to him or about him in years. But he’s not a criminal and I see no point in treating him as one.

    As a physicist, he’s still a smart guy and I have to emphasize the positive. He has done good research and he could do good research in the future.

  311. Scott Says:

    Lukasz #305: Even if you believe that there’s no single fastest (uniform) matrix multiplication algorithm, that doesn’t mean at all that the complexity of matrix multiplication has to be undecidable. On the contrary, maybe there’s a proof that you can get O(n^{2+ε}) for any ε>0. The truth is that we have no idea right now.

  312. Scott Says:

    asdf #303: But if P=NP, then the exponent for the best SAT algorithm isn’t a function at all (let alone a non-provably-total one). It’s a constant.

  313. Scott Says:

    asdf #301:

      Hilbert’s second and tenth problems both turned out to be undecidable.

    Nooooo!!! You’re confusing undecidable computational problems (in Turing’s sense) with undecidable statements (in Gödel’s sense). Of course we have plenty of examples of the former, including the ones you mentioned.

    But as for statements that have been proved to be independent of powerful formal systems like PA or ZFC, the examples we have basically fall into three categories:

    (1) Gödel sentences (or things equivalent to them), which are specifically constructed to be independent of some formal system.

    (2) Statements about transfinite set theory, like the Axiom of Choice and the Continuum Hypothesis.

    (3) Things like the Paris-Harrington Theorem, which are independent of PA but not of ZFC (and are also pretty “unnatural”).

    I’d say that, 83 years after Gödel’s Theorem, we still don’t have a single good example of a “natural arithmetical statement” (something like Goldbach’s Conjecture, let’s say) proved to be independent of a powerful formal system. Of course some of today’s famous open problems might turn out to be independent, but that seems at present like a speculation with zero evidence behind it. Our experience with things like Fermat’s Last Theorem and the Poincaré Conjecture has been that, impossible as they might have looked to generations, once people applied enough brilliance to them they were eventually solved.

  314. Greg Kuperberg Says:

    Theorem: If for every ε>0, there exists a matrix multiplication algorithm that runs in time O(n^{2+ε}), then there exists a single matrix multiplication algorithm that runs in time O(n^{2+ε}) for all ε>0, assuming that multiplication of the matrix entries is sequestered in a black box.

  315. Scott Says:

    Greg #314: Proof? I didn’t think that was known. (Note that we’re talking about Turing machines subject to a uniformity constraint, not about infinite families of circuits.)

  316. Greg Kuperberg Says:

    I was going to leave it as an exercise for the blogger! But since you ask, here is an attempt at a proof. For concreteness let’s work over ℚ, but still assume a black-box multiplication algorithm for its elements.

    Let M(n) be the number of scalar multiplications that you need to multiply two n x n matrices. Equivalently, M(n) is the tensor rank of the matrix multiplication map m:A x B -> AB. Then you can find M(n) eventually for any fixed n, working over the complex numbers. This gives you a non-uniform algorithm in which you may have to pass to a field extension of ℚ — but a fixed one for each n. In fact, uniformly, this is a recursive algorithm, just a slow one.

    Now, M(ab) ≤ M(a)M(b), because, using the tensor rank model, any formula of this sort extends to matrices with entries in a non-commutative ring, including a ring of matrices. Also, if a ≤ b, then M(a) ≤ M(b).

    So, if you want to multiply two matrices of order n that are really huge, then you should choose the largest a such that you can find M(a) in time O(n^{2+ε}). Also make sure that a = O(n^δ) for all δ, say a = O(log(n)). If n is exactly a power of this a, you just use this algorithm recursively. If n is not exactly a power of a, then you can just increase n to the next power of a.

    You can then send ε(n) to 0 slowly but at a recursive rate, since everything in the strategy has a priori time bounds.

  317. Douglas Knight Says:

    Scott 313: I don’t think you’re quite fair to asdf. It’s true that Hilbert’s tenth problem (Diophantine equations) turned out to be an undecidable problem, not an undecidable statement. But that means it yields many undecidable statements. Of course, you can reject the particular ones as unnatural. Maybe you file them under (1), but I think you should have said so.

    But Hilbert’s second problem was “prove PA is consistent.” That’s a statement, not a problem. Sure, it’s the example everyone knows, but Hilbert seems like a good referee for whether it should count as “natural.”

  318. Greg Kuperberg Says:

    Sorry, the proof needs a little repair. You should find M(a) for the largest a that you can in time O(n^2), and also cap a at a = O(log(n)). Then increase n to the nearest power of a and apply the a x a algorithm recursively. You won’t get to control ε yourself if you don’t know how quickly the exponent of M(n) converges to 2.

  319. asdf Says:

    Scott #312, yes, it’s a constant, I just mean the constant is so enormous that it can never be observed empirically.

    #313: oh ok, I see what you mean. But the Tenth problem still counts (an instance is a set of Diophantine equations; the decision problem is whether the instance has a solution; MRDP proved undecidability). Examples from CS include (iirc) whether a given grammar is context-free; type inference for the fancier typed lambda calculi, etc. For that matter, whether a given arithmetic proposition is true (this question goes at least back to Frege, I think). Or the word problem for finitely presented groups (Dehn 1911?). Oh heck, there’s a big list of them here:

    http://en.wikipedia.org/wiki/List_of_undecidable_problems

    Łukasz #308: Tarski’s circle-squaring problem is not an arithmetic problem. “Arithmetic” means any quantifiers only range over the natural numbers, rather than over sets of them.

  320. asdf Says:

    Also the incompleteness theorem (solution to Hilbert’s 2nd problem) basically says given an effective, consistent arithmetic theory, there’s no algorithm to decide whether an arbitrary sentence belongs to the theory (i.e. is provable).

  321. Peter Sheldrick Says:

    @Scott, #313
    “But as for statements that have been proved to be independent of powerful formal systems like PA or ZFC, the examples we have basically fall into three categories”
    Goodstein’s theorem, http://en.wikipedia.org/wiki/Goodstein%27s_theorem, is both “natural” and unprovable in Peano arithmetic (it is provable in ZFC, though, but you did mention PA as an option).

  322. Peter Sheldrick Says:

    Hmm, I wrote “natural” since Wikipedia was also referring to it as “natural” – but apparently Wikipedia also refers to the Paris-Harrington Theorem as “natural,” which you explicitly qualified as “unnatural” – oh well, Wikipedia can’t always be right…

  323. David Speyer Says:

    Regarding the question of surprising unifications in mathematics: How about diophantine sets of integers and recursively enumerable sets of integers?

    My impression (but I am not a historian) is that in the years between Davis (1949) and Matiyasevich (1970), many logicians took seriously the possibility that there was a class of sets of integers which could be encoded by number theoretic tricks, smaller than the class of all r.e. sets.

    For example, when Davis-Robinson-Putnam proved that all r.e. sets could be encoded by exponential diophantine equations, Kreisel’s MR review said “These results are superficially related to Hilbert’s tenth problem on (ordinary, i.e., non-exponential) Diophantine equations. The proof of the authors’ results, though very elegant, does not use recondite facts in the theory of numbers nor in the theory of r.e. sets, and so it is likely that the present result is not closely connected with Hilbert’s tenth problem. Also it is not altogether plausible that all (ordinary) Diophantine problems are uniformly reducible to those in a fixed number of variables of fixed degree, which would be the case if all r.e. sets were Diophantine. ”

    This is another example where one set is contained in the other, but I think Scott’s criterion from 294 applies. Obviously there are plenty of completeness results for r.e. sets, and reading papers on diophantine sets one has the impression that experts also knew lots of tricks for encoding one in another, although the modern language of completeness wasn’t available before they were unified.

  324. J Says:

    Scott: Regarding Dana’s, Impagliazzo’s, and your result on free games, it looks like you have assumed ETH to get the n^log(n) running time for free games. So it seems in a way like cheating, since ETH assumes the non-existence of 2^o(n) algorithms for 3SAT. Could you please explain why assuming ETH should not make the result less realistic? ETH seems much stronger than P≠NP, and ETH could fail, right? Maybe the best algorithm for NP-complete problems runs in time 2^((log n)^(1+eps)). That would mandate P≠NP while falsifying ETH.

    The abstract is here http://eccc.hpi-web.de/report/2014/012/

  325. Sam Hopkins Says:

    Scott #313: I don’t understand the objection to Hilbert’s 10th problem as an example of an independence result. There are very explicit polynomials for which it is undecidable (in the sense of independent from ZFC) whether they have an integral root: see http://mathoverflow.net/questions/32892/does-anyone-know-a-polynomial-whose-lack-of-roots-cant-be-proved.

  326. Sam Hopkins Says:

    Also, this MathOverflow answer about the work of Harvey Friedman is relevant to the question of what independence results can be: http://mathoverflow.net/a/26605/25028

  327. Scott Says:

    J #324: Well, we unconditionally give an n^O(log(n)) algorithm for approximating free games, and we unconditionally give a reduction from 3SAT to free games of size ~2^O(√n). By combining the two, one can then say that, if either the algorithm or the reduction could be much improved, then the ETH would be violated. In my post, I said explicitly that I view phenomena like this (there are many other examples…) as a form of evidence not merely for P≠NP, but for the stronger claim that 3SAT requires exponential time (i.e., for the ETH).

  328. Scott Says:

    David #323: OK, thanks! That’s the strongest example I’ve seen so far.

  329. Scott Says:

    Peter #321, #322: Yes, Goodstein’s Theorem is closely related to the Paris-Harrington Theorem, and is firmly in my class (3).

  330. Scott Says:

    Douglas Knight #317, Sam Hopkins #325: Yes, of course any time you have an r.e.-complete set (like the set of solvable Diophantine equations), you can encode a Turing machine that halts if and only if it finds an inconsistency in ZFC, or whatever other formal system you like. So, you can then combine that with Gödel’s Incompleteness Theorem, to get a statement that’s independent of that formal system. However, the statement produced in this way will be at least as “unnatural” (from a conventional arithmetical standpoint) as the original Gödel statement itself. So, this is firmly in my class (1) (from comment #313).

    More concretely, I recall that when Greg Chaitin once explicitly constructed a Diophantine equation that was provably independent of PA by such an argument, the equation took 200 pages to write down! I’m sure that that can be (and maybe has been) cut down, but in any case, it seems exceedingly unlikely that you’re ever going to get a Diophantine equation by these methods that “would arise in the normal course of number theory.” If that’s possible at all, then new ideas will be needed.

  331. J Says:

    @Scott #327 Thank you for clarifying.

  332. Scott Says:

    asdf #319: Dude. You’re still confusing undecidable statements with undecidable problems! Those two concepts happen to share the same word, and they’re even related to each other, but they’re also importantly different.

    A problem—which includes every one of the examples you listed—is a function, something that takes an input and produces an output. In saying it’s “undecidable,” what we mean is that there’s no Turing machine that produces the desired output for every input. However, for any particular input, there might (or might not) be a proof of the output value for that input in your favorite formal system.

    A statement (like, say, “ZFC is consistent”) is something that doesn’t take any input. In saying it’s “undecidable,” what we mean is that there’s no proof or disproof of the statement in some particular formal system, like PA or ZFC. Crucially, this kind of undecidability—unlike the Turing machine kind—is always relative to some formal system. A true statement that’s unprovable in PA might be provable in ZFC, one that’s unprovable in ZFC might be provable in ZFC + large cardinal axioms, etc.

    Now, as I explained in comment #330, given most undecidable problems, there’s a standard way to combine them with Gödel’s Incompleteness Theorem to produce examples of undecidable statements (for your favorite formal system). However, the undecidable statements that you get that way will be just as “unnatural” from a conventional mathematical standpoint as the original Gödel statements, so this certainly doesn’t go outside of my class (1).

    Again, the thing that we don’t yet have a single example of, is a statement that “arises in the normal course of mathematical research” (like Fermat’s Last Theorem or the Riemann Hypothesis), is arithmetical (rather than being about transfinite set theory), and is proven independent of ZFC. (We do know a few examples independent of PA, like the Paris-Harrington Theorem and Goodstein’s Theorem, but those are provable by making mild use of infinity—which is why we call them “theorems” in the first place! 🙂 )

  333. Sam Hopkins Says:

    Scott: the way you write “combine with Godel’s incompleteness result” makes it seem like you view *that* as somehow more primary or the basis for these results; but then you also say that we have plenty of undecidable computational problems. Surely almost all of these are based on a reduction to the Halting Problem?

  334. Bogdan Says:

    Scott #283
    Several good examples of unifications are already listed. Yes, in those cases the unified classes are not as huge as P and the NP-complete problems, but my impression is that such HUGE classes, for which we can prove neither that they are the same nor that they are different, arose here for the first time in mathematics, so we have no historical examples resolved in either direction.

  335. Bogdan Says:

    Scott #279

    I am interested in why you say that there are NO natural problems with intermediate status in computability (in contrast to complexity, where we have factoring and graph isomorphism). If you mean “possible candidates with UNKNOWN status”, there are a lot (for example, the word problem for hyperbolic groups had open status until recently). If you mean “natural problems with PROVABLY intermediate status”, then I know of no examples of natural problems in NP for which we have a proof that they are neither in P nor NP-complete unless P=NP. Are there? Moreover, are there any methods at all for proving results like this? Because factoring is probably not NP-complete, such methods would be the only hope to prove its hardness…

  336. Scott Says:

    Sam Hopkins #333: Yes, undecidable problems and undecidable statements are both generally produced by reduction. But I’d say that the crucial difference is this. Whether a given Diophantine equation is solvable, whether a given set of tiles tiles the plane, whether a given set of matrices generates the all-0 matrix—those are all natural mathematical problems. And all of them are provably undecidable in Turing’s sense. And arguably, the undecidability of these general problems is philosophically related to why certain specific, natural instances of them—say, Fermat’s Last Theorem—are found in practice to be profoundly difficult (even if not infinitely so…).

    But now suppose we want to combine these undecidable problems with Gödel’s Theorem to produce formally undecidable statements. Then sure, we can do that, but the specific Diophantine equation, set of tiles, set of matrices, etc. that we produce will be a huge and unnatural one—one that would never ever arise “in the normal course of mathematical research.” And the reason is that, in some sense, the specific Diophantine equation, set of tiles, etc. that we produce will need to encode the entire structure of the formal system (PA, ZFC, etc.) for which we want to prove independence. This is an issue that has no analogue for Turing-undecidability, and—unless and until someone figures out how to solve it—I regard it as a fundamental difference between the two.

  337. Bogdan Says:

    asdf 297

    “Also, there is an existing, known, simple algorithm (due to Levin) that solves SAT, whose (unknown) running time is polynomial if and only if P=NP.”

    Interesting. If we could push the idea and make it more or less practical, we would end up with a SAT solver, and could just see how it performs… By the way, if P=NP, this solver would change the world, even without a proof that P=NP 🙂

  338. Scott Says:

    Bogdan #337: Do you have any idea how Levin’s algorithm works? It simply “dovetails” over all possible Turing machines in lexicographic order, treating all the ones that don’t work to solve SAT in polynomial time as just constant-factor overhead!

    Ergo, the formal existence of Levin’s algorithm is no help whatsoever, if your goal is actually to figure out how to solve SAT efficiently in practice. Or rather, it’s precisely as useful as telling someone: “just try every possible program, and see if one of them seems to work!”

  339. Scott Says:

    David Speyer #323 and Bogdan #334: OK then, here’s my next challenge. Is there any example in math of a “surprising unification” between two large sets of mutually-interreducible objects, such that prior to the unification, mathematicians had encountered “invisible electric fence” phenomena analogous to the ones I mentioned in the post when they tried to unify the sets?

  340. Serge Says:

    Serge #298:

    The essential point remains that Nature be protected from SAT-breaking by a wall of probability zero.

    The situation is – in my view at least – exactly the same as with the macroscopic behaviour of a gas. The Poincaré recurrence theorem states that certain systems – such as an enclosed volume of air molecules in a room – will, after a sufficiently long but finite time, return to a state very close to the initial state. Think about it: it’s only the infinitesimal probability of all the molecules going back to their initial confinement that prevents the people in a room from dying of asphyxia… Likewise, if P=NP, it’s an infinitesimal probability that has always prevented evolution from outputting a method for breaking NP-completeness. One shouldn’t underestimate the power of low probabilities: they can have the same effect as an electric fence!

  341. Scott Says:

    Bogdan #335: Interesting question. As I mentioned in earlier comments, for factoring and graph isomorphism we have excellent theoretical evidence that, whether or not they’re hard, at any rate they shouldn’t be NP-complete. Namely, if they were NP-complete then the polynomial hierarchy would collapse. For none of the problems whose decidability status is currently unknown (e.g., solvability of equations over the rational numbers), do I know any analogous reason why they shouldn’t simply be equivalent to the halting problem.

    In addition, it’s just easier to prove unconditional results in computability than in complexity theory! I.e., of course we can’t prove unconditionally that factoring is NP-intermediate, since we can’t even prove P≠NP! By contrast, if there were natural “Turing-intermediate” problems in computability theory, I see no reason whatsoever why we wouldn’t be able to prove that using the mathematical tools we already know.

    The closest thing I’ve ever found to a “natural” Turing-intermediate problem, is what I call the “Consistent Guessing” problem (though it goes by other names). In this problem, you’re given as input a Turing machine M. If M accepts on a blank tape, then you have to accept, while if M rejects, then you have to reject. If M runs forever, then you can either accept or reject, but you have to do one or the other. This is a pretty natural task, and interestingly it can be proved “Turing-intermediate”—in the sense that it’s uncomputable and reducible to the halting problem, but there exist oracles for Consistent Guessing that don’t let you solve the halting problem. The issue is that Consistent Guessing is not a language, since it has the weird requirement that on certain inputs you can either accept or reject, but in any case you have to halt.

  342. rrtucci Says:

    For non-mathematician’s ears only:

    The sad tale of the two sets that kissed then parted

    (following Raoul Ohio’s observation about eigenvalue avoidance).

    Matrix M(t) had two eigenvalues x_1(t) and x_2(t) where t\in[0,1]. Let

    Set_1: (t, x_1(t)) for t\in[0,1]
    Set_2: (t, x_2(t)) for t\in[0,1]

    Originally, everyone thought that there was no symmetry in the system that M(t) described, which meant that Set_1 and Set_2 did not intersect.

    Then someone discovered an “accidental” symmetry that led to an “accidental degeneracy”, which led the two sets to kiss at a single point.

    Then someone realized that the original model was too naive and that in real life there is a perturbation that breaks the accidental symmetry, so an eigenvalue “gap” develops, which led the two sets to stop kissing each other and part forever.

    THE END

  343. Scott Says:

    Serge #340: Except that the search for fast SAT algorithms is not a random process, like the motion of air molecules in a room. It’s a directed process, and if the algorithm is there, then it only has to succeed once!

  344. Jarosław Błasiok Says:

    Scott #338: I have already seen, here and there over the internet, statements of the form: “If P=NP, Levin’s algorithm would be a polynomial algorithm for all NP problems – with only constant-factor overhead over the optimal one”.

    I must admit, I can’t see how it is true, and I couldn’t find any references with a proof of this form. On the other hand, I never saw those opinions explicitly corrected. Could anyone explain it to me in detail, or point out some references?

    In Levin’s original paper, he actually addresses a different problem: given an easily computable function F, he shows that this kind of trying every possible machine gives an algorithm for computing the inverse F^{-1} with only constant overhead – and it is quite clear: eventually you’ll get to an algorithm which computes the inverse, you can easily check whether its output is correct, and in such a case you stop your computation. For example, FACTORING fits in this framework (where F would be multiplication), so if FACTORING has a polynomial algorithm, Levin’s Universal Search is one.

    It doesn’t take a lot to see that the very same approach – with a few more technical issues – can justify the sentence: “If P=NP then Levin’s Universal Search gives a polynomial algorithm for $$NP\cap co-NP$$ problems.” You would just iterate over all machines (as in the original proof), checking whether the current one happened to generate a proof of the instance being a YES instance or a proof of it being a NO instance – and you stop as soon as the first of those events happens.

    Nevertheless, I can’t see how it generalizes to an algorithm for SAT – the issue here is that you can’t really bound the running time of your algorithm on NO instances. As long as you don’t have a co-NP verifier for your problem in hand, you don’t know when to stop searching on a NO instance – in fact, not only do I not know how to guarantee polynomial running time, I can’t guarantee any halting condition at all.

    Am I missing something?

  345. Greg Kuperberg Says:

    (So, I guess I propose a Levin-like argument for speed of matrix multiplication, based on the purely algebraic, recursive model of matrix multiplication in terms of tensor rank.)

  346. Scott Says:

    Jarosław #344: Yes, you’re right; you need to be careful in defining the success condition of the algorithm. The simplest way to do it is just to say that the universal search algorithm finds a satisfying assignment after poly(n) steps, whenever such an assignment exists to be found. (If there’s no satisfying assignment, then the algorithm runs forever.) See here (Theorem 2) for the details of that.

    Alternatively, one can also give an algorithm that halts in polynomial time, and that correctly decides SAT on all but finitely many instances. Moreover, the running time of that algorithm can be the running time of the optimal SAT algorithm, plus (say) an additive O(log log log n) overhead. It’s an extremely fun exercise to figure out for yourself how to do that; if you just want to see a solution, click here.

  347. fred Says:

    Is it conceivable that for NPC decision problems there could be an algorithm that examines the structure of the input and comes up with the right yes/no answer efficiently, but without actually providing an explicit solution in the case of a yes answer? (Finding an explicit solution would still take worst-case exponential time.)

  348. Scott Says:

    fred #347: Nope! A standard homework problem (I give it every year in my undergrad course) is to rule that possibility out. That is, show that if you have a black box for solving NP decision problems, then you can call that black box recursively in order to find satisfying assignments for the YES-instances, with only a polynomial increase in time. If you can’t figure out how to do it yourself, look up “equivalence of search and decision.”
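
    For readers who just want to see it, here is a minimal sketch of the standard self-reduction (in Python; sat_decide stands in for the hypothetical black box, and clauses are encoded as lists of nonzero integer literals):

        from itertools import product

        def find_assignment(clauses, num_vars, sat_decide):
            """Search-to-decision self-reduction for SAT.

            Fix the variables one at a time: variable v is set to True exactly
            when the formula stays satisfiable after adding the unit clause [v].
            Uses num_vars + 1 calls to the decision oracle sat_decide."""
            if not sat_decide(clauses):
                return None                        # a genuine NO instance
            assignment = {}
            for v in range(1, num_vars + 1):
                if sat_decide(clauses + [[v]]):
                    clauses = clauses + [[v]]      # v = True keeps it satisfiable
                    assignment[v] = True
                else:
                    clauses = clauses + [[-v]]     # otherwise v = False must work
                    assignment[v] = False
            return assignment

        # sanity check, using a brute-force stand-in for the oracle
        def brute_force_decide(clauses):
            n = max((abs(l) for c in clauses for l in c), default=0)
            return any(all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in clauses)
                       for bits in product([False, True], repeat=n))

        print(find_assignment([[1, 2], [-1, 2], [-2, 3]], 3, brute_force_decide))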

  349. Scott Says:

    Greg #316: Wow! I finally had time to think through your argument, and it’s awesome. Thanks. The central ingredient I was missing was the fact that any arithmetic circuit for multiplying matrices over a field can be “automatically lifted” to a circuit of the same size that works over a noncommutative ring. (Actually, why is that true?)

    As a minor correction, I believe you’ll need to restrict the search for a matrix multiplication algorithm to matrices of size LxL, where L is (log n)^{1/3} (or more generally, (log n)^{1/c}, where c is the best upper bound you know on the matrix multiplication exponent ω), rather than log n. If you don’t do this, then the time needed to brute-force search over all LxL matrices will be quasipolynomial in n, rather than polynomial.

    Also, unless I’m mistaken, your argument doesn’t extend to proving the stronger result that there’s a single, asymptotically-fastest algorithm for matrix multiplication. In particular, suppose it was possible (for example) to multiply LxL matrices in L^2 log L time, where L=(log n)^{1/3}. Then by iterating that, I “merely” get that you’d be able to multiply nxn matrices in time O(n^{2+logloglog(n)/loglog(n)}), which is greater than n^2 log(n).

  350. Sniffnoy Says:

    It had better not prove that, seeing as Coppersmith and Winograd proved there is no fastest algorithm for matrix multiplication…

  351. Scott Says:

    Sniffnoy #350: Whoa! The paper you linked to claims to prove that the matrix multiplication exponent ω is necessarily a “limit point”; i.e., it can’t be achieved by any single algorithm.

    So, now I’m wondering how to reconcile this with Greg’s argument from comment #316—showing that if, for all ε>0, there’s an algorithm to multiply nxn matrices in n^{c+ε} time, then there’s also a single algorithm to multiply nxn matrices in n^{c+ε} time for all ε>0.

    OK, OK, I guess the two statements don’t flat-out contradict each other. Is the resolution of the “paradox” simply that the algorithm produced by Greg’s argument must necessarily use asymptotically more than n^c time (though still less than n^{c+ε} time for all ε>0)?

    More worryingly, why doesn’t the Coppersmith-Winograd argument imply that nxn matrices can’t be multiplied in O(n^2) time (since if they could, then they could also be multiplied in n^c time for some c<2)? That would be a huge lower bound result, and I was certain it wasn’t known yet!

    Is the “catch” just the constant factor hidden in the O(n^2)? Or is it that they’re using some weird definition of “limit point”? Or is it that their definition of “matrix multiplication algorithm” (λ-algorithms or whatever) is a nonstandard one? Greg: can you help? 😀

  352. Serge Says:

    Scott #343: “if the algorithm is there, then it only has to succeed once”. No, because that would imply you’re conscious of the properties of the algorithm you’ve found, whereas you can only test a finite number of instances. And you can be using a powerful method without being able to explain it completely. Put differently: we may all be using P=NP every day without even knowing it!

  353. Greg Kuperberg Says:

    Scott – A lot here indeed stems from confusion concerning what counts as an algorithm. An algorithm could be:

    (1) Anything that a Turing machine can do to multiply matrices, even allowing that scalar multiplication might be black-box (= in an oracle).

    (2) An arithmetic circuit to multiply k x k matrices for some fixed k.

    (3) An arithmetic circuit to multiply k x k matrices, used recursively to multiply n x n matrices for n ≫ k.

    So, a key point is that each one of these can be seen as a special case of the others. First, to understand (2): If the arithmetic circuit is only allowed multiplication and addition, then since matrix multiplication is bilinear, you can show that matrix multiplication amounts to expressing the matrix product C = AB as some linear combination

    C = ∑_i C_i α_i(A) β_i(B) (*)

    where each α_i and β_i is linear. That is, if you express any arithmetic circuit in this form, then you might increase the number of additions, but you won’t increase the number of multiplications. Then, any such formula generalizes to matrices with entries in a non-commutative ring. Moreover, the number of multiplications is all that really matters in applying (2) to (3). Thus the number of terms in the equation (*) is the crux of arithmetic circuits for matrix multiplication. If we let M(n) be the minimum number of terms in (*), then we get by (3) and other remarks that log_n(M(n)) must converge to something as n → ∞. This is the asymptotic exponent of matrix multiplication.
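
    As a quick sanity check of that lifting claim, here is a small numerical illustration (Python with numpy, purely for concreteness): Strassen’s seven-multiplication identity, which is a formula of the form (*) with 7 terms, applied verbatim to matrix-valued, hence non-commuting, entries.

        import numpy as np

        def strassen_blocks(A11, A12, A21, A22, B11, B12, B21, B22):
            # Strassen's bilinear identity: 7 products, each of the form
            # (linear combination of A-blocks) @ (linear combination of B-blocks),
            # so it never relies on commutativity of the entries.
            M1 = (A11 + A22) @ (B11 + B22)
            M2 = (A21 + A22) @ B11
            M3 = A11 @ (B12 - B22)
            M4 = A22 @ (B21 - B11)
            M5 = (A11 + A12) @ B22
            M6 = (A21 - A11) @ (B11 + B12)
            M7 = (A12 - A22) @ (B21 + B22)
            return (M1 + M4 - M5 + M7, M3 + M5,
                    M2 + M4, M1 - M2 + M3 + M6)

        rng = np.random.default_rng(0)
        A11, A12, A21, A22, B11, B12, B21, B22 = (
            rng.integers(-5, 5, size=(3, 3)) for _ in range(8))
        C11, C12, C21, C22 = strassen_blocks(A11, A12, A21, A22, B11, B12, B21, B22)
        assert np.array_equal(np.block([[C11, C12], [C21, C22]]),
                              np.block([[A11, A12], [A21, A22]]) @
                              np.block([[B11, B12], [B21, B22]]))

    The assertion passes because each of the seven products keeps its A-factor on the left and its B-factor on the right, which is all that the bilinear form (*) requires.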

    Coppersmith and Winograd show that log_n(M(n)) never equals its limit; it keeps creeping down approximately monotonically forever. But this does not really say that there can’t be any best algorithm in the sense of (1), only that there isn’t one in the sense of (2) and (3).

    On the other hand, (1) can’t be better than (2), at least not if the only cost is scalar multiplications.

    The optimization problem in (2) is poorly understood. Nonetheless, for any fixed k, it can be solved in a finite amount of work, even working over ℂ. As I argued, there is then a specific algorithm in the sense of (1) that is sort-of optimal. If c is the limit of log_n(M(n)), then this algorithm that I sketched works in time O(n^{c+ε}). But as Scott supposes, in this Levin-style algorithm ε might go to 0 very slowly. Not only very slowly, but inefficiently slowly. It is probably not optimal time complexity, even though it has the optimal time complexity exponent.

    Finally Scott asks whether matrix multiplication can be done with O(n^2) operations. As far as I know, that is not excluded by the Coppersmith-Winograd results, you could say because of the constant factor. For instance, if hypothetically M(n) = 2n^2 for all sufficiently large n, then log_n(M(n)) still would decrease monotonically in n to its limit of 2.

    Note that multiplication of polynomials, which is what underlies multiplication of integers, had exactly the same status after the Karatsuba and Toom-Cook algorithms were found. It’s the same issues with bilinearity, recursion, and the asymptotic exponent. Of course then Schönhage–Strassen was found, settling that the asymptotic exponent is 1 for polynomials. So you could say that all known matrix multiplication algorithms are Karatsuba-Toom-Cook type, and people are simply still looking for a matrix Schönhage–Strassen.

  354. Peter Nelson Says:

    Scott,

    You’ve talked about classes of proofs that we know can’t answer P(!)=NP. Are there (significant) classes of algorithms we know can’t solve 3-SAT efficiently? I ask because it seems relevant to Serge#352.

    Phrased another way: Do we know that the sorts of algorithms we “use…everyday” can’t be efficient NP-complete problem solvers? If not direct proof ruling out large swaths of algorithms, what sort of strong evidence do we have against Serge’s supposition?

  355. Jonathan Dowling Says:

    Ribbet! Ribbet!

  356. Scott Says:

    Peter #354: Yes, absolutely, there are large classes of practically-used algorithms that have been shown to require exponential time for solving NP-complete problems (or sometimes, even for solving special cases in P 🙂 ). Here are three examples:

    1. We have exponential lower bounds for many DPLL-style (i.e. backtracking) algorithms, often by relying on lower bounds for proof complexity. See here and here for two examples pulled from Google.

    2. We know that, in some sense, any “linear-programming relaxation” of the Traveling Salesman Problem must be exponentially large. See this paper, which recently won the STOC Best Paper Award.

    3. For simulated annealing, tabu search, and other local optimization heuristics, it’s generally a simple matter to construct problem instances (even highly “symmetric” ones) for which they get stuck for exponential time in a local optimum.

  357. Greg Kuperberg Says:

    A question: Let’s accept convenient cryptographic assumptions that go beyond P ≠ NP. Suppose that you have an NP-hard problem L which has a Karp reduction from CircuitSAT, say, and suppose that there is an algorithm A that is touted for solving L but sometimes takes exponential time. Then can you perhaps always use cryptography to create another algorithm B that “embarrasses” A by being fast in some cases where A is slow? Of course, you might as well take L to be CircuitSAT or 3-SAT or any such favorite.

  358. Peter Nelson Says:

    Scott #356: Thanks!

    I’m looking forward to the next iteration of Serge’s algorithm-of-the-gaps argument.

  359. Sniffnoy Says:

    Greg Kuperberg: Thanks! I’d been wondering why that paper often seemed to be ignored in discussions of the matrix multiplication exponent. A bit odd that they’d phrase it that way, that there can’t be any best algorithm, if it isn’t even good enough to rule out M(n)=2n^2.

    To be clear, though — is it correct to say that basically all the progress in reducing ω so far has come from algorithms of type (2) and (3)?

  360. J Says:

    @Greg #353: The (generalized) Toom-Cook algorithm has complexity $n^{1+eps}$.

  361. Serge Says:

    Peter #354 and Scott #356: The probability of breaking NP-completeness is infinitesimal only when you add the requirement of knowing the algorithm that’s being used. A close parallel can be drawn between complexity theory and quantum mechanics:

    problem = particle
    hard problem = small particle
    quantum scale = NP-completeness
    algorithm = position
    process = motion
    algorithm design = position measurement
    time-behaviour assessment = speed measurement

    This analogy shows that, when it comes to solving hard problems, the only algorithms you’ll ever be aware of are the slowest ones… which doesn’t prevent you from using faster algorithms without knowing their code!

  362. Scott Says:

    Serge #361:

    strained analogies = hunger
    clear thought = food

    Based on the above parallelism, I’ve proved that I should go to breakfast now rather than spending more time pondering how to answer your comment.

  363. Sam Hopkins Says:

    Scott, I wonder if you have some good examples of the phenomenon you expect from P vs. NP in math: that is, two classes of objects which a priori could be equal, but which people struggled for a long time to prove are not equal, eventually indeed showing that they are different.

    I have one (symplectic manifolds vs. Kahler manifolds), but I think your appeal to mathematical history only goes through if one outcome of the phenomenon is very common and the other is not.

  364. Serge Says:

    Scott #362: as we say in France – and also everywhere else I think – Bon appétit ! 🙂

    Jokes aside, the “clear thought” could likely materialize in a neat proof of the algorithmic analogue of Heisenberg’s uncertainty principle, but I haven’t got it yet – though I foresee that we might be onto something rather groundbreaking…

  365. BHarris Says:

    “I want the nerd world to see—in as stark a situation as possible—that the above is not correct. Luboš is wrong much of the time, and he’s intellectually dishonest.”

    Would you say that this former university professor and author of several mathematical treatises is, in fact, a kind of Napoleon of intellectual dishonesty?

  366. Scott Says:

    BHarris #365: I’d say he’s a Lubos of intellectual dishonesty.

  367. Raoul Ohio Says:

    Breaking news on the P vs. NP front:

    My girlfriend just emailed me a link —

    http://phys.org/news/2014-03-difficulty-candy-addictive.html

    — showing that SAT can be reduced to CC (Candy Crush).

    Evidently CC is the new smartphone sensation that half the world is playing 24/7. I had only heard of CC because its makers have trademarked the word “Candy”, but apparently the success of the game has propelled the maker onto the NYSE. (I am not making this up.)

    Since suffering severe Tetris Elbow a couple decades ago, I have been pretty good at staying away from all computer games (graphing cool stuff in MatLab excepted, of course).

  368. BHarris Says:

    @Scott But my serious question, without an assertion, is whether there’s more harm than good. I have trouble ignoring (rather than disliking) someone who knows what the graph isomorphism problem is in the first place (if not its computational complexity) and who writes a blog that is in large part focused on discussing theoretical physics. Where would you draw the line for someone like a graduate student / postdoc / or collaborator?

  369. BHarris Says:

    “Where would you draw the line…”

    The line being the ability to make the statement that: “It’s good this person is here.”

  370. BHarris Says:

    I ask because it seems that a line was drawn here w.r.t. intellectual dishonesty, and this blog does have a kind of “moral weight” for a lot of us readers.

  371. Mike Says:

    “It’s good this person is here.”

    Not that I was asked 😉 but it depends on what you mean by “here”. I visit Lubos’ blog regularly to see what he has to say; it can be interesting and fun, in relatively small doses. However, it seems to me that bloggers are under no obligation to let their blog be open for everyone to comment on every question regardless of their behavior. Very few lines are drawn — ever — in fact, of the three categories you list (graduate student / postdoc / collaborator), I’m unaware of any that have been banned from this blog (although there may be some I’m unaware of). If “here” means the “internet”, then I’m against banning in (virtually) each and every case. I only say “virtually” because I assume someone could posit extreme circumstances where I would have to reconsider my position. However, at the other extreme, in the case of an individual’s own blog, it seems that is a wholly personal decision. For me, it would be a balancing analysis: does the person’s contribution (e.g., expertise, insight, questions, comments) outweigh the confusion, vitriol, misinformation or outright poison they bring to the discussion.

  372. Scott Says:

    Sam #363: That’s a wonderful question!

    The first example that springs to mind for me comes from set theory. Between 1939 and 1963, it was known that there are many statements about transfinite sets that are provable from the ZF axioms, and then a bunch of statements that resisted proof—including V=L, GCH, CH, AC, Zorn’s lemma, and well-orderability. Many implications between the proof-resisting statements were also known—e.g., the last three were known to be equivalent, and V=L was known to imply all the others. Only with Paul Cohen’s breakthrough was it discovered how to prove this entire cluster of statements independent of the ZF axioms (something that Gödel and others had correctly suspected earlier).

    In complexity theory, we also have examples of “clusters of interrelated problems” that were identified before they were ultimately proved to be hard, although of course in easier settings than P vs. NP. Here’s a little example from my own work: around 2000, we knew various oracle problems that quantum computers could solve exponentially faster than classical computers. And then there was a cluster for which no exponential speedup was known—including finding collisions in hash functions, distinguishing 1-to-1 from 2-to-1 functions, the “set equality problem” (whether two sets are equal or disjoint), calculating hidden-variable trajectories (e.g., in Bohmian mechanics), and “index erasure” (that is, mapping |x⟩→|f(x)⟩ given an injective black-box function f). It was also clear that all of these problems were closely interrelated; e.g., a lower bound for distinguishing 1-to-1 from 2-to-1 functions would imply the same lower bound for finding collisions in 2-to-1 functions. Finally, in 2002, I managed to prove the first quantum lower bounds for all of these problems (later improved by Shi, Ambainis, and others). See here for more.

    Anyway, now I’m extremely curious: do people have other examples of a significant-sized cluster of interrelated mathematical objects that were conjectured to be distinct from an “easy” cluster, and only later proved to be so? Maybe I’ll make this into a big-list question on MathOverflow…

  373. Mike Says:

    “. . . it would be a balancing analysis: does the person’s contribution (e.g., expertise, insight, questions, comments) outweigh the confusion, vitriol, misinformation or outright poison they bring to the discussion.”

    One clarification: I would begin with a presumption that the person’s contribution is by its nature important, so that it would take a preponderance of bad behavior to get them banned. So, maybe this is even harder to figure out than whether or not P is equal to NP. There can never be a “final” answer because it always depends on an analysis of all of the facts and circumstances up to that point. 😉

  374. Scott Says:

    BHarris #368: I agree with Mike. I thought I’d made it reasonably clear that the real problem wasn’t Lubos’s ignorance per se, but rather the toxic combination of ignorance with aggression and bellicosity.

  375. BHarris Says:

    “One clarification: I would begin with a presumption that the person’s contribution is by its nature important, so that it would take a preponderance of bad behavior to get them banned.”

    Well, if Lubos starts to publish decent physics papers again, I hope that’ll count towards an unbanning (tipping my hand here towards my tolerance of a**hole people). 🙂

  376. Scott Says:

    BHarris #375: Yes, that would count toward an unbanning, when I reevaluate the situation 3 years from now.

  377. fred Says:

    fred#7
    “Is there any hint that it’s possible to find an efficient QC algo for graph iso?”

    Got some hints about this:
    https://scottaaronson-production.mystagingwebsite.com/?p=1458
    “For example, the collision lower bound rules out the most “simpleminded” approach to a polynomial-time quantum algorithm for the Graph Isomorphism problem (though, I hasten to add, it says nothing about more sophisticated approaches)”

  378. Greg Kuperberg Says:

    There is a world of difference between graph isomorphism and an NP-hard problem such as CircuitSAT, even without any mention of quantum computation. Namely, no one knows how to create two specific graphs for which it is actually difficult to determine if they are isomorphic. Whereas there are many methods to create specific, impossible-looking examples of CircuitSAT. People can give you a circuit problem with 128 input bits and one or two thousand gates that will simply not be solved by humanity for the foreseeable future. Or 80 input bits, where you would have to shell out big bucks to ever calculate a solution.

    With graph isomorphism, if you show the algorithm first, then people can find pairs of graphs that defeat it. But if you show the graphs first, then people can make an algorithm to quickly determine if they are the same. So, graph isomorphism could well be in P; in any case something is missing from our theoretical understanding of it.

  379. Scott Says:

    Greg #357:

      Suppose that you have an NP-hard problem L which has a Karp reduction from CircuitSAT, say, and suppose that there is an algorithm A that is touted for solving L but sometimes takes exponential time. Then can you perhaps always use cryptography to create another algorithm B that “embarrasses” A by being fast in some cases where A is slow?

    That’s a great question!

    I think you’ll enjoy this paper by Gutfreund, Shaltiel, and Ta-Shma, which directly relates to your question. Briefly, they show that if NP is hard on average, then given any randomized algorithm A that purports to solve SAT in polynomial time, it’s possible to efficiently generate instances on which A usually fails.

    Their result simplifies in the deterministic case, where it boils down to one of my favorite tricks in all of complexity theory. Namely, here’s what you do: you ask the algorithm A to find a SAT instance for you on which it itself fails! If A succeeds, then you’re done, while if A fails, then you’re also done, since the instance you just gave A is the one you were looking for! 😀

    And note that we didn’t even need to make any strong cryptographic assumption: merely the assumption P≠NP.
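
    To make that self-referential trick concrete, here’s a minimal sketch (my own toy rendering, not the actual Gutfreund-Shaltiel-Ta-Shma construction; the helpers `encode_failure` and `decode` are hypothetical names introduced just for illustration):

```python
# Deterministic "ask A to defeat itself" trick, sketched with hypothetical helpers:
#   encode_failure(A, n): builds a SAT formula asserting "A fails on some formula of size n"
#   decode(witness):      extracts (psi, assignment) from a satisfying assignment of that formula

def hard_instance_for(A, n, encode_failure, decode):
    """Return a SAT instance on which the purported poly-time SAT solver A fails.
    (Assuming P != NP, such instances exist for infinitely many n.)"""
    phi = encode_failure(A, n)      # "there is a size-n formula that A gets wrong"
    witness = A(phi)                # ask A itself to solve it
    if witness is not None:
        psi, _assignment = decode(witness)
        return psi                  # A handed us a formula on which it provably fails
    # Otherwise A failed on phi itself, so phi is the instance we were looking for
    # (here we only know A failed because we assumed every poly-time algorithm must).
    return phi
```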

    Now, a different way to interpret your question would be to say, you don’t get access to the code of A, but you are allowed to make a cryptographic assumption. And you want to generate SAT instances that are “cryptographically hard,” but that can nevertheless be solved in polynomial time by one particular algorithm, which happens to know a secret that no “normal” algorithm would know.

    The obvious way to achieve the above would be to use a trapdoor one-way function, and to encode the trapdoor information into one particular algorithm in order to make the instances easy for that algorithm. The issue with this approach is that the trapdoor information would presumably take poly(n) bits to specify! So you’d only get a class of circuits that can easily solve your SAT instances, rather than a uniform polynomial-time algorithm.

  380. fred Says:

    Greg #378
    Thanks a lot! Very interesting!

  381. Nick Read Says:

    Hi Scott,

    Re #379, could you elaborate on the meaning of “hard on average” here? What is the distribution? As you know I’m interested in average versus worst-case hardness. This result sounds interesting if we know what “existence of a distribution” means in realistic terms.

    Best

    Nick

  382. Scott Says:

    Nick: They generate a distribution that’s samplable in polynomial time, but crucially, the distribution that they generate depends on the particular algorithm A that they’re fighting against. If you want to know more, I suggest reading their paper! It’s been a while since I read it.

  383. Greg Kuperberg Says:

    Scott – Gutfreund, Shaltiel, and Ta-Shma show that you can easily find instances that are hard for an algorithm A. But, if I understand correctly, they don’t show that these easily-found instances are easy for some other algorithm B, right? They could be hard in general, as best I can tell.

    Of course you have to be able to know A in order to choose B. Because whatever B is, it could be incorporated into A.

  384. Greg Kuperberg Says:

    No, I take it back, I don’t understand correctly. I will think about it some more.

  385. NotEasyBeingYellow Says:

    I will make a tangential biological comment, regarding your frog colour analogy. See page 11 of the famous paper of de Grouchy “Chromosome phylogenies of man, great apes, and old world monkeys” (or search for “grouchy great apes”!), specifically the last sentence quoted below.

    http://link.springer.com/content/pdf/10.1007/BF00057436.pdf

    According to neo-Darwinism, gene mutations are the true means of evolution. By the combined action of geographical isolation and selection, they are supposed to be capable of producing such inter-specific barriers. In fact, such barriers appear to be extremely fragile. Gene mutations, even in great number, cannot prevent individuals from reproducing with their own kind. Neo-Darwinism confounds races and species. Races can differ greatly on morphological and genetic grounds, but they remain capable of reproducing between each other. The example of the seagulls (genus Larus) around the North Pole is remarkable. They are distributed in groups that differ only by the color of their irides and periorbicular regions, and which do not reproduce with each other. If the eyes of members of one group are painted in the colors of another group, members of the two groups immediately reproduce. A little paint is sufficient to break down reproductive barriers between groups which represent true species in the neo-Darwinian concept.

  386. Scott Says:

    Greg #383: Well, it’s a funny situation. Gutfreund et al.’s procedure, in the course of finding instances on which algorithm A fails, also learns the yes-or-no answers to those instances! For:

    1. If A finds an instance φ on which it itself fails, then the witness of its failure is a satisfying assignment for φ that A failed to find (but that the larger call to A did find).

    2. Conversely, if A doesn’t find such a φ, then we again have a specific instance on which A fails!

    So in a sense, Gutfreund et al.’s procedure is itself in the business of embarrassing A. The one catch is that, in case 2 above, A is only “very mildly” embarrassed, since our knowledge that A failed on this particular instance is only as good as our starting assumption that every polynomial-time algorithm (so in particular, A) has to fail on some SAT instances.

  387. Jay Says:

    Scott,

    We know that, if P≠NP, then there are not one but many, many, many NPI problems. What would be your objection to:

    “Shnoods! We see close calls again and again. There is just not enough space for many NPI problems.”

    ? 🙂

  388. Scott Says:

    Jay #387: I’m not sure if you were expecting a serious reply to that, but actually there is one! The borders of P and NPC should both be thought of as tracing out complicated, jagged shapes in some huge number of dimensions. So there’s more than enough room for the borders to touch in some regions of problem-space (say, the constraint satisfaction problems)—thereby creating a need for the electric fence in those regions—while still being very far apart in other regions of problem-space (say, the number-theoretic problems), leaving lots of room for all the NPI stuff in between.

  389. Jay Says:

    As you guessed, my intent was to say that the electric fence argument seems weaker than, say, the multiple-surprises argument. But… hey, you’ve just changed my mind. Thank you!

  390. Douglas Knight Says:

    Greg:

    With graph isomorphism, if you show the algorithm first, then people can find pairs of graphs that defeat it. But if you show the graphs first, then people can make an algorithm to quickly determine if they are the same.

    Without disagreeing with your conclusion, I don’t think either of the specific claims is quite correct. According to my notes, there has only once been an instance found that was hard for the leading algorithm: namely, Miyazaki’s instances were hard for McKay’s nauty. And it wasn’t easy to create a new algorithm; it was ten years later that Tener did so.

    Note that there is an asymmetry. If the pair of graphs in the hard instance is not isomorphic, you probably know that because you know a polynomial time invariant that distinguishes them and you can just add that to your algorithm. But if the graphs are isomorphic, as in Miyazaki’s example, it may be hard to modify the algorithm to quickly accept them without losing correctness.

  391. Gil Kalai Says:

    I regard the “electric-fence argument” for NP≠P as rather strong. However, one has to be careful: consider the hardness of “unique games” (or small set expansion). In various cases the “electric fence” for unique games is even more definite than it is for NP-hardness. I doubt if there is such a common consensus regarding the hardness of unique games.

  392. Mikko Särelä Says:

    Now coming back to physics and quantum theory. On that side of the fence, they have an idea, a hypothesis, about what the world looks like. And the equations that describe the system follow from that idea.

    Now this brings me (an amateur in the theory of computation) to an interesting question: do we have some kind of hypothesis about the nature of the complexity of computational problems that would explain this rather interesting grouping of problems between P and NP?

    I’m not asking this just because it would be fun to know. I think this distinction between certain kinds of problems is really, really underappreciated in the world today. I see many systems in the world that cannot be fully understood without understanding P!=NP. And I would like us to get to the point where the fundamental points about computational complexity could be taught to the general populace – at the same rough idea level as the theory of relativity or quantum theory are.

  393. Scott Says:

    Gil #391: How on earth is there a more definite “electric fence” separating unique games from P than there is separating the (known) NP-complete problems from P? Could you explain what you mean by that?

  394. fred Says:

    Probably a dumb question…

    1) with a set of integers of size n, e.g. {-3,-2,5,8}, n=4
    from that set we generate
    2) all the possible subset sums (size 2^n)
    e.g. {-5,-3,-2,0,2,3,5,6,8,10,11,13} (we omit repetitions)

    How hard is it to do the reverse? I.e. given a large set of numbers, find the minimum possible “generating” set for it.
    A sort of compression scheme.

  395. Nick Read Says:

    Scott: how about a short post explaining the current status of “unique games”? I gather it is related to my favorite problem, max cut.

  396. fred Says:

    Scott #393

    The “unique games” fence:
    http://tinyurl.com/kndxrav

    The P!=NP fence
    http://tinyurl.com/mnh99nl

  397. asdf Says:

    Scott #346: Levin’s universal search can also certify the NO instances of problems in NP. While it looks for a satisfying assignment, it can also look for a proof that there is no satisfying assignment; this will always terminate, thanks to the obvious exponential-sized proof that checks all assignments. If P=NP then P=co-NP, and I think this means that NO instances should have short certificates, which universal search should also be able to find, along with a provably correct checking algorithm that runs in p-time (maybe not provably so), but I don’t see the proof of this (maybe I’m missing something obvious). If that’s right, universal search can look for the triple (short certificate, fixed checking algorithm, fixed proof of checking algorithm), where the last two components are more like “constant overhead”.

  398. Scott Says:

    fred #396: LOL!

    Nick #395: Maybe I’ll ask Dana if she wants to write such a post. She’s certainly better-qualified than me…

  399. Scott Says:

    asdf #397: Ah! You raise a good point, but there’s still a subtle distinction between YES and NO instances with respect to Levin’s universal search. Namely, for the YES instances, all you need is the fact of P=NP. If that’s so, then Levin’s algorithm will eventually output a YES-witness for your satisfiable Boolean formula, and you won’t need to know anything about the reasons why P=NP to recognize it as a YES-witness.

    Now, since P=NP implies P=coNP, it’s also true (as you point out) that unsatisfiable Boolean formulas will always have succinct NO-witnesses. However, it’s only the proof of P=NP that would tell you what the NO-witnesses look like, or how you should recognize them when Levin’s algorithm spits them out! Or to say it another way: of the Turing machines you’re dovetailing over, one of them always finds satisfying assignments for satisfiable instances in polynomial time—so if it fails to satisfy an instance, then its failure constitutes the NO-witness you want. But you won’t know which of the TMs you’re dovetailing over is that one! Even if a particular TM happens to find YES-witnesses extremely reliably in practice, that still doesn’t prove that it always finds them.

    Of course, as long as you’re dovetailing over all TMs, one could argue that you might as well also iterate over all P=NP proofs until you find a valid one, and thereby gain the ability to recognize NO-witnesses! And that’s fine, assuming a P=NP proof exists in your favorite formal system (and isn’t ridiculously longer than the Turing machine it proves correct). If, say, P=NP but the equality were unprovable in ZF set theory, then even if you used Levin’s universal search, you’d generally be S.O.L. in terms of finding NO-witnesses that you could actually be certain about.
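
    To make the dovetailing picture concrete, here’s a toy sketch (my own simplified rendering, not Levin’s actual construction; `programs` stands for any enumeration of candidate solver programs, and only the explicit SAT verifier is trusted):

```python
from itertools import islice

def check_assignment(formula, assignment):
    """Trusted poly-time verifier: `formula` is a list of clauses of signed variable
    indices, `assignment` maps variable index -> bool."""
    return all(any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause)
               for clause in formula)

def levin_search(formula, programs):
    """Dovetail over an enumeration `programs` of candidate solvers: callables taking
    (formula, step_budget) and returning a candidate assignment or None.
    Returns a verified satisfying assignment; never halts if `formula` is unsatisfiable."""
    budget = 1
    while True:
        for prog in islice(programs(), budget):
            candidate = prog(formula, budget)                # run with a step budget
            if candidate is not None and check_assignment(formula, candidate):
                return candidate                             # trusted only because the verifier accepted
        budget *= 2
```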

  400. Jay Says:

    Fred #394,

    I’d guess it’s easy using the following hints: the maximum is the sum of all positive integers, the minimum is the sum of all negative integers, the sum of all integers is the same as the sum of the maximum and minimum, and every generating integer is itself one of the subset sums.

  401. fred Says:

    Jay #400,
    I feel stupid – it shouldn’t be too hard considering all the redundancy…
    But taking the example where n = 10 (for a total of 2^n=1024 numbers), say we’re told we have 5 positive ints (Pi) and 5 negative ints (Ni) (in practice we wouldn’t even know that), then:
    N0+N1+N2+N3+N4 = min (1)
    P0+P1+P2+P3+P4 = max (2)
    (your third hint is just the sum of (1) and (2), so I don’t think it helps)
    your last hint says that all {Ni,Pi} are one of 1024 possible numbers, but if you have ~500 negative numbers, that’s a hell of a lot of combinations (C(500,5)).
    Anyway I didn’t mean to derail the thread, I was wondering how NP-C problems (generated solutions) relate to searching a random set of solutions.

  402. fred Says:

    Jay #400

    I generated some quick test code, with 10 generating numbers
    {-124, -56, -43, -12, -2, 46, 97, 1001, 1230, 1234}
    you get a list of 1024 elements (with repetitions) that is pretty “dense”:
    [-237, -235, -225, -223, -194, -192, -191, -189, -182, -181, -180, … -70, -68, -67, -65, -58, -57, -56, -55, -55, -53, -51, -49, -45, -43, … -10, -9, -4, -2, -2, 1, 3, 5, 7, 17, 19, 27,… 3551, 3552, 3553, 3560, 3562, 3563, 3565, 3594, 3596, 3606, 3608]
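
    (For anyone who wants to reproduce this, a quick sketch of the kind of test code fred describes; the code and names here are mine:)

```python
from itertools import combinations

def subset_sums(gens):
    """All 2^n subset sums of `gens`, repetitions included, sorted."""
    return sorted(sum(c) for r in range(len(gens) + 1)
                         for c in combinations(gens, r))

gens = [-124, -56, -43, -12, -2, 46, 97, 1001, 1230, 1234]
sums = subset_sums(gens)
print(len(sums), sums[0], sums[-1])   # 1024 -237 3608
```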

  403. Scott Says:

    Mikko #392:

      do we have some kind of hypothesis about the nature of complexity of computational problems that would explain this rather interesting grouping of problems between P and NP.

    Well, P and NP are far from the only interesting complexity classes! There’s also BQP, PH, #P, PSPACE, and much more. I’d say the centrality of P and NP is partly for historical reasons (and because most other complexity-class separation problems seem of comparable difficulty to P vs. NP anyway), but partly because finding a solution and verifying one (in deterministic polynomial time) really do seem like two of the simplest, most natural notions to care about! And also, because so many practically-important problems turn out to be NP-complete (see here for my speculations about why).

  404. gentzen Says:

    Greg #378:
    I vaguely remember reading somewhere (maybe in this post about the group isomorphism problem) that the graph isomorphism problem instances derived from the group isomorphism problem for finite indecomposable p-groups (especially 2-groups) are good candidates for hard graph isomorphism problem instances.

  405. asdf Says:

    Scott #399 oh yes, I see. If P=NP then there is a polytime algorithm A that recognizes SAT, the succinct witness for this is the satisfying assignment, and the succinct witness for an UNSAT instance is an “instruction trace” of running the instance through algorithm A and having it not say “yes” within its time bound. But this is useless if you don’t know what A is, or even if you do know what it is (i.e., Levin search) but you don’t know its runtime–it very well might not be provable. Is there something less non-constructive than that? I remember P=NP => P=co-NP as a basic fact about NP-completeness, but I don’t see a more useful proof than the one above.

  406. Scott Says:

    asdf #405: P=NP => P=coNP is a basic fact, trivial to prove. But yes, if you only had a nonconstructive proof of it, it’s totally unclear to me how you’d exploit that to find the succinct proofs of unsatisfiability that formally exist. But maybe we shouldn’t be so surprised! After all, Levin’s observation that we could exploit a nonconstructive proof to get proofs of satisfiability in polynomial time, is a fact that’s as weird and surprising as it is useless, and probably we shouldn’t expect that sort of “leveraging of nonconstructive arguments into formally ‘constructive’ ones” to be possible more generally.

  407. Raoul Ohio Says:

    Scott,

    I like the GP (Gravitational Pull) idea.

    My brief moment of thought on this issue is that Turing-complete is the “right definition”, which is why it has GP. A simpler example occurs with the real numbers, R. My take is this:

    The naturals, N, are either obvious or “god given” (someone said that; Kronecker?). From N, anyone doing simple math would come up with the integers, Z, and rationals, Q. When you start taking roots and doing calculus, it is clear that Q is not enough, but not obvious what to use. At least two usable generalizations (Dedekind cuts, Cauchy sequences) were constructed back in the day. A rite of passage for math grad students is showing the two resulting systems are equivalent. The resulting system is called the reals, R.

    I am not aware of any other generalization of Q, closed under the usual operations of calculus, that is NOT equivalent. Furthermore, one could come up with other constructions for R that ARE equivalent.

    Thus R can be considered the “right definition”, and it exerts GP on other constructions!

  408. asdf Says:

    Scott, two complexity classes that surprised everyone by turning out to be equal: IP=PSPACE. Does that count?

  409. asdf Says:

    Raoul, the rules of calculus and the Archimedean property of the reals don’t seem inevitable. 18th century calculus was done with infinitesimals and today, reals with infinitesimals can be formalized as nonstandard analysis. It was trendy for a while as a potentially better way to teach calculus (no more delta-epsilon) among other things.

  410. Scott Says:

    asdf #408: You also could’ve given the examples of PSPACE and NPSPACE, or PSPACE and QIP, or NL and coNL, or NC1 and BWBP-5. However, in not one of these cases would I say that there was ever an “electric fence” separating the two complexity classes. I believe they were all proved to be equal not long after anyone started working on the questions at all (which, of course, doesn’t detract from how surprising and important the results were). And unlike with P vs. NP, I don’t think there was ever (e.g.) a plethora of NC1 problems for which people struggled to find width-5 branching programs but couldn’t, or for which they almost could but it always turned out not to work in the end, or anything like that. At most, there were small picket fences separating these classes. People “only” had to have the insight to try and step over these picket fences, and often within a year or less, they succeeded.

  411. Jay Says:

    Fred #402,

    Yep, but it seems we can use min, max and grand total as boundary conditions to explain a fast-increasing number of these subsets.

    First we explain three subsets as the sum of the negative numbers (-237), as the grand total (3371), and as the sum of the positive numbers (3608).

    Then we find -2 from [-237, -235], and that explains -235, 3369, 3373, 3606.

    Then we find -12 from [-237, -225], and that explains -225, -223, 3357, 3359, 3383, 3385, 3594, 3596.

    Then we don’t consider [-237, -223] as -223 was explained.

    Then we find -43 from [-237, -194], and that explains -194, -192, -182, -180, 3314, 3316, 3326, 3328, 3414, 3416, 3426, 3428, 3551, 3553, 3563, 3565.

    etc… it seems very fast to constrain everything this way.
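
    Jay’s peeling procedure is close to a classical reconstruction trick. For the case where all generators are nonnegative it can be written down cleanly; here’s a sketch (my code, and it sidesteps the sign issue the thread is wrestling with: for mixed signs one can first subtract the minimum subset sum, along the lines of Jay’s hints, though recovering the signs afterwards takes extra care):

```python
from collections import Counter
from itertools import combinations

def recover_nonneg(sums):
    """Given the multiset of all 2^n subset sums of n nonnegative integers
    (duplicates included), recover the n generators by repeatedly peeling off
    the smallest remaining generator."""
    sums = sorted(sums)
    n = len(sums).bit_length() - 1          # len(sums) == 2^n
    gens = []
    for _ in range(n):
        a = sums[1]                         # sums[0] is the empty sum; sums[1] is the smallest generator left
        gens.append(a)
        expected = Counter()                # sums we expect to reappear as "(something already seen) + a"
        without_a = []
        for s in sums:
            if expected[s] > 0:
                expected[s] -= 1            # s uses a; discard it
            else:
                without_a.append(s)         # s doesn't use a
                expected[s + a] += 1
        sums = without_a                    # recurse on the sums that avoid a
    return gens

example = sorted(sum(c) for r in range(4) for c in combinations([2, 3, 7], r))
print(recover_nonneg(example))   # [2, 3, 7]
```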

  412. asdf Says:

    Hmm, I had thought IP vs PSPACE were considered separate and their unification was a surprise, but you’re certainly more up on it than I am.

    In the other direction, how about P vs feasibly computable functions, upended by Shor’s algorithm? An apparent separation (P vs BQP) that truly shocked everyone.

  413. Fred Says:

    Jay #411 you’re right, thanks 🙂

  414. Aatu Koskensilta Says:

    (My earlier version of this comment was sent in the middle of furious editing, and as a result somewhat garbled. Hopefully this incarnation makes more sense, or, perhaps a tad more modestly, is at least grammatical.)

    Turing noted the Riemann hypothesis is Pi-2, and, I think, Kreisel improved on this, observing it’s in fact Pi-1, i.e. “Goldbach type” as Gödel would say. As for ZFC and independence results, it’s a powerful testament to the mathematical acumen and prowess of early descriptive set theorists that they managed to prove pretty much everything that can be proved without venturing beyond the machinery proffered in ZFC. It is to me completely obscure what Lubos is on about with all the blather about a “single discrete structure” posited to exist in the P is not NP conjecture. At first I thought he was somewhat hamfistedly remarking that the conjecture is found in the arithmetical hierarchy one level higher than the Riemann hypothesis, the Goldbach conjecture, Fermat’s last theorem, and the like. But on further reflection, and having read his further comments, it can’t be this triviality he has in mind here. On the face of it, and at first glance, it is a seemingly sensible notion — which, donning my logician and philosopher of mathematics hat, does not in the final analysis really have much to recommend it either — that logical complexity should enter into our pondering and mulling over of mathematical likelihoods. It transpires that I was mistaken in thinking this was his point, in light of further evidence. But, then, just what is his point? I join Scott in his nonplussment and perplexement.

  415. Gil Kalai Says:

    Hi Scott (#393), you gave a couple of examples (Set Cover and Boolean Max-k-CSP) where for approximation problems if the approximation ratio is above a certain parameter T then the problem is in P, but if it is below (1-ε)T then the problem is NP-hard. My point was simply that for the unique game hardness there are *many more* examples where (provably) the gap between “P” and “unique game hard” is very narrow (between (1-ε)T and T).

    If having an electric fence (that was never crossed) in regions where the gap itself is known to be narrow is important to your argument then your argument fits very well as an argument for hardness of unique games.

    (Of course, a case for separating unique games from P also serves for separating NP-complete problems from P and also for separating PSPACE from P.)

  416. Scott Says:

    Gil #415: Ah, thanks for clarifying. I would’ve said: yes, there are particularly dramatic approximation/inapproximability “fences” for the UG-hard problems, and those fences do indeed lead me to think that the UGC is more likely true than false. I’m less confident than I am in P!=NP, simply because there are far fewer examples and they’ve been studied for far less time (and also because, with NP-completeness already around, one can ask, “if your optimization problem is so hard, how come no one has proven it NP-hard?”) On the other hand, if the UGC is eventually proven, I’d say it would make the electric fence argument for P!=NP all the more dramatic.

  417. Aatu Koskensilta Says:

    (Extremely pedantic nitpick: the algorithm we can extract from Gödel’s proof of the first incompleteness theorem, in its original form or in more modern incarnations, constructs for any given (effective description, e.g. an index of a recursive function churning out theorems, of a) formal theory a sentence that is, if the theory is consistent, not formally provable in the theory. But it might well be refutable! Take for instance the Gödel sentence of PA + “PA is inconsistent”. For undecidability, we have to either assume Sigma-soundness or alternatively consider the sentence obtained by diagonalizing the Rosser provability predicate.)

  418. Sam Hopkins Says:

    Would a disproof of the UGC have any effect on your belief of P vs. NP?

  419. A Says:

    Probably too late to ask this, but it would be interesting to know the intuition in the argument from Bram Cohen in #103 that uses the conjecture that the k-sum problem requires n^(k/2) complexity for even k to justify that P != NP.

    Just the fact that for all constants k there are problems not in O(n^k) is clearly not enough to justify P != NP, since it’s an actual fact derived from the time hierarchy theorem. So there has to be something special about the k-sum problem here, maybe some particular reduction to SAT?
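
    (For reference, the n^(k/2) upper bound behind that conjecture comes from the standard meet-in-the-middle idea; here’s a toy sketch for k=4, decision version only. The code is mine, and it glosses over the bookkeeping needed to keep degenerate inputs with many colliding pair-sums within the n^2 bound:)

```python
from collections import defaultdict
from itertools import combinations

def four_sum_exists(nums, target=0):
    """Do some 4 distinct positions of `nums` sum to `target`?  Built from ~n^2 pair-sums."""
    pair_sums = defaultdict(list)                      # pair-sum -> list of index pairs
    for i, j in combinations(range(len(nums)), 2):
        pair_sums[nums[i] + nums[j]].append((i, j))
    for s, pairs in pair_sums.items():
        for i, j in pairs:
            for k, l in pair_sums.get(target - s, []):
                if len({i, j, k, l}) == 4:             # the four positions must be distinct
                    return True
    return False

print(four_sum_exists([1, 2, -3, 0, 5, -5]))   # True: 1 + 2 + (-3) + 0 == 0
```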

    I’d also like to thank Scott for all of his insightful writing, definitely one of the best threads in a while – and as it shows, there are plenty of opportunities to clarify fundamental concepts in the process of explaining them to an ignorant audience that don’t involve putting up with Luboš.

  420. asdf Says:

    Aatu, I don’t think it’s about quantifier depth either: the twin prime conjecture is Pi-0-2 but its near-certainty follows almost immediately from Cramér’s probabilistic model for primes, that (with notable exceptions) has predicted the behaviour of primes pretty accurately since the 1930’s. It seems to me that we don’t have anything like that for algorithms. P vs NP quantifies over algorithms, which have a very complicated structure that seems harder to map to a probability distribution.

  421. Gil Kalai Says:

    Hi Scott, overall we have similar beliefs both regarding the NP≠P question and the UGC. Regarding the case for P≠NP:

    “In half a century, this hasn’t happened: even as they’ve both ballooned exponentially, the two giant regions have remained defiantly separate from each other”

    This I regard as a very strong argument for P≠NP, probably the main one, along with the huge additional theoretical infrastructure that has added to the P≠NP picture more insights that also remained “untouched.”

    “But that’s not even the main point. The main point is that, as people explore these two regions, again and again there are “close calls”: places where, if a single parameter had worked out differently, the two regions would have come together in a cataclysmic collision. Yet every single time, it’s just a fake-out. Again and again the two regions “touch,” and their border even traces out weird and jagged shapes.”

    Here I don’t agree with the logic. What you refer to as the “main point” is certainly very interesting and important. But I don’t think it is the main point for a case that P≠NP. As I mentioned, this “close call” phenomenon is even stronger for the UGC, where our level of belief is smaller (and it is much weaker for P≠PSPACE). But even simpler than that: the first point, that a large number of explicit or implicit attempts to give P-algorithms for NP-complete problems failed, is clear evidence for P≠NP (to which different people give different weight). However, for the “close call” argument it is unclear a priori why it supports the P≠NP case at all.

  422. Scott Says:

    Gil #421: The reason I regard “close calls” as in some sense the main point is that I believe in Popperian science—and in its principle that “the less likely your hypothesis was to pass some test a-priori, the MORE impressed you are when it does pass.” E.g., after Einstein tries to kill QM and fails, he doesn’t get to say “fine, QM goes back to being as plausible as it was before I started trying to kill it.” Whether Einstein likes it or not, QM is now STRONGER than before—all the more so because the would-be killer was Einstein, and because the objections looked so devastating at first.

  423. Scott Says:

    Sam #418: The UGC is the conjecture that Unique Games are NP-hard. If they were hard but not NP-hard, that wouldn’t really have any effect on my belief that P!=NP. If Unique Games turned out to be in P, on the other hand, then depending on the proof techniques, that MIGHT decrease my confidence in P!=NP from 99% all the way to, say, 98%.

  424. asdf Says:

    Speaking of IP=PSPACE, there is obviously an interactive proof that something is an UNSAT instance, that is polytime for the verifier. So if P=NP, can a version of Levin search equipped with a coin flipper certify the NO instances of SAT in polytime to probability 1-epsilon, by running the (randomized) interactive verifier concurrently with a universal search seeking answers to the verifier’s queries? This seems like a natural approach but I don’t understand the IP=PSPACE result well enough to see the fine points.

  425. Greg Kuperberg Says:

    So, why is it important to conjecture that UG approximation is NP-hard, rather than simply defining it as a complexity class? Perhaps there is no natural way to relativize it?

  426. Greg Kuperberg Says:

    …but it looks like there could be a natural way to relativize the Unique Games model. A “unique game” can be thought of as a graph with two types of vertices, variables and reversible gates such as Toffolis, etc. (So, it is like a reversible digital circuit, except with a graph-valued time variable.) So, you could make a unique game with oracle gates as well as k-local gates.

  427. Gil Kalai Says:

    The way I see it, along with the huge implicit and explicit effort to find efficient algorithms for intractable tasks, the other major scientific evidence for NP≠P is the major coherent mathematical structure based on the NP≠P conjecture and related conjectures. (I suppose this is what Scott’s MO question is about.) In this large mathematical infrastructure there are pieces which represent the fact that in some respects NP≠P is a huge understatement (like the failure to find even 2^o(n) algorithms for SAT, and the coherent picture based on conjecturing that even this is impossible), and there are pieces of evidence for which NP≠P represents a “close call.” Both these extreme aspects of NP≠P are part of the large picture (that would collapse if NP=P), which seems quite robust. (Overall, my own beliefs are leaning to the pessimistic side. But in the practice of mathematics trying to prove a conjecture and trying to disprove it can be quite similar.)

  428. Scott Says:

    asdf #424: Neat idea, but unless you have not only P=NP but also P=PSPACE, it’s not going to work! To see why, it’s probably easier to ask yourself a simpler question: why can’t you simulate IP=PSPACE in BPP^NP, by just using the NP oracle to guess the response to each challenge? The answer involves the need for memory (from one query to the next) on the part of the prover. A BPP^NP prover is like a compulsive, easily-discovered liar: it just tries to make you happy on the query you’re asking right now; it has no global strategy for how to make you happy over the long term. That the need for “long-term memory” bumps you all the way up to PSPACE is subtle and merits some thinking about.

  429. Scott Says:

    Greg #425:

      So, why is it important to conjecture that UG approximation is NP-hard, rather than simply defining it as a complexity class?

    That was the question I asked all the time when I first heard about UGC! So I think I can tell you the answer. The experts have a huge amount of experience with various kinds of approximate constraint satisfaction problems (2-query PCPs, projection games, …) being NP-complete under more and more powerful types of reductions, and also with some of them being solvable using (e.g.) SDP relaxations. By contrast, they have virtually no experience with evidence for approximate CSPs being NP-intermediate (of the sort we have for, say, factoring or graph isomorphism). So, most of them believe (or at least, believed 10 years ago) that Unique Games are most likely NP-hard as well, and if they’re not, then they’re probably in P or BPP. The fact that they didn’t create a freestanding complexity class for UG simply reflects their views (or previous views…) about how things are likely to turn out.

  430. Nick Read Says:

    Guys, any chance of a bit more lower-level explanation here, even very brief, just about the *statement* of UGC? I’ve read brief accounts in texts and looked at some papers, but I still don’t feel satisfied I even know what the point is. Why is the statement “UGs are NP-hard”? What kind of problem is UG: decision, optimization, . . . ?

  431. Nick Read Says:

    PS: I understand UG as a constraint satisfaction problem quite well: you can think of it as a Potts model (k possibilities for the variable at each vertex) with some “twisted” interactions along the edges. Or as a lattice gauge theory for the symmetric group S_k, with Higgs fields on the vertices. And so on.

    It’s at the part about determining if you can satisfy a fraction at least 1-\epsilon or at most \epsilon of the constraints on edges that I get lost.

  432. Greg Kuperberg Says:

    Nick – In your language, yes, a “Unique Game” state model is a Potts model with n states for each atom, and a twisted pairwise ferromagnetic interaction for each edge of the graph. For each edge there is a bijection f of the n states; the energy is 0 if f(a) = b and otherwise 1. We consider the question of determining the ground state energy in units of average energy per edge. (I.e., normalized so that 0 ≤ E ≤ 1 for any state.)

    It is trivial to determine when there is a state with energy 0. Nonetheless, conjecture: There exists an ε such that there is no good algorithm to find a state with energy E ≤ ε (given the promise that there is one). In fact, stronger conjecture: For every ε and δ, there exists n large enough so that there isn’t even a good algorithm (for that fixed n but for all graphs and all edge twistings) to tell whether the ground energy is at most δ or at least 1-ε.

    Various features of the Unique Games conjecture(s) look idiosyncratic to me. The name of the conjecture is a bit strange; there is more than one conjecture with the same name; one version is conjectured to be not just hard but NP-hard; δ and 1-ε are mentioned when it could just have been ε and 1-ε; etc.

    Nonetheless the basic complexity problem is clearly interesting. The motivation comes from looking at other combinatorial optimization problems, if you like Potts-type models but not just with pairwise interactions, where perfection is hard to determine exactly, and on top of that hard to approximate. The new conjecture is that approximation can be hard to compute even when it’s trivial to tell perfection for a completely local reason.

    As such, the question is an intellectual analogue of Valiant’s theorem that counting perfect matchings (=dimer states) is #P-hard. It is no surprise that when existence in a combinatorial problem is NP-hard, counting states is #P-hard. Valiant showed that counting can be #P-hard even in a counting problem with very local rules where existence has a fast algorithm.
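
    To pin the format down in code, here is a toy encoding (my own notation, using cyclic shifts mod k as the edge bijections, which is a special case of the general bijections):

```python
def violated_fraction(edges, labels, k):
    """Each edge (u, v, shift) imposes the constraint labels[v] == (labels[u] + shift) % k;
    the 'energy' of a labeling is the fraction of violated edges."""
    bad = sum(1 for u, v, shift in edges if labels[v] != (labels[u] + shift) % k)
    return bad / len(edges)

# A 3-cycle whose shifts sum to 0 mod k=4, hence perfectly satisfiable:
edges = [(0, 1, 1), (1, 2, 2), (2, 0, 1)]
labels = {0: 0, 1: 1, 2: 3}
print(violated_fraction(edges, labels, 4))   # 0.0 -- this labeling is a ground state with energy 0
```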

  433. Gil Kalai Says:

    Dear Nick, it is a decision problem: Given a graph with n vertices and an alphabet A of size k (think about it as integers modulo k), you look for labelings of vertices which satisfy certain constraints described by the edges. On each edge e={v,u} the labels L(v) and L(u) differ modulo k by a constant c(e).

    It is NP-complete to decide if such a labeling exists. But we want less. UG refers to the case where you let k grow not too fast (like o(log n)) and you want only to distinguish between the possibilities: A) there is a labeling satisfying a (1-ε) fraction of the constraints; B) there is no labeling satisfying an ε fraction of the constraints.

    This question (UG) seems hard, and remarkably, a lot of other questions that seem hard reduce to it.

    Khot’s unique games conjecture asserts that UG is NP-complete. There are certainly people who believe that UG is in P (more than just believe – they even work in this direction). And maybe it represents an intermediate complexity class. Perhaps thinking that it requires a new reduction method (one that will confirm the conjecture) is the least surprising way it can go. Of course, any progress will be great.

    A related, very intuitive question is “small set expansion”. A friendly place to read up on this, from 2010, is Steurer’s thesis: http://www.cs.cornell.edu/~dsteurer/papers/thesis.pdf

    Nick, how is it related to Higgs fields for a lattice gauge theory for S_k??

    (And while at free-style self-educational wish lists: Ignacio can you elaborate a little on: “He seemed to think that making operators dependent on the state might provide a way to avoid the firewall argument, despite his misgivings on making QM non-linear.”)

  434. Gil Kalai Says:

    (I made a freshman-level mistake regarding the “ground state”. It should be, as Greg said: it is trivial (not NP-complete) to decide if such a labeling exists, and we want something else (not less)…)

  435. Nick Read Says:

    Greg #432, Gil #433:

    Thanks for taking the time to write.

    Greg, your description of a UG agrees with my understanding. I’m also aware that there is a multitude of approximation results for optimization problems (including constraint satisfaction ones), and that the extent to which they can be efficiently approximated varies a lot, even for NP-complete problems.

    I’m sorry, but language like your “whether . . . or . . . ” drives me crazy. There seem to be two ways you could read that. 1) There is a promise that either there is an assignment that satisfies at least a fraction 1-epsilon of the edges, or else the maximum is at most delta; decide which. 2) Decide between alternatives i) and ii): i) the maximum is either more than 1-epsilon or less than delta; ii) = not (i). Of course it’s not obvious which of these is more interesting.

    Gil seems to be saying option 1), but his initial description of UG may be different. In fact if that is the same as what Greg and I have in mind, then as Greg has pointed out, it is polynomial time to decide if such an assignment exists, so not NP-complete! (It is the threshold version that is NP-complete, and hard in instances when no sat assignment exists.)

    Then I’m puzzled by the UGC that UG is NP-hard. Why not NP-complete (as Gil says actually)? Is option 1) above not in NP?

    I like Greg’s remark that “The new conjecture is that approximation can be hard to compute even when it’s trivial to tell perfection for a completely local reason,” however I would say that the example of max-cut—which is nothing but the case of UG with k=2 with the non-trivial perm in S_2 on each edge, aka Ising antiferromagnet—already shows that, the easy sat case being a bipartite graph. (But I know that the UGC implies the strongest approximation results for max-cut.) Perhaps you can sharpen the remark?

    Now for Potts, Higgs, and all that (I was being intentionally provocative :)). A Potts model is a generalization of the Ising model in which each variable or “spin” takes k states instead of two. Then it is natural to consider models that have permutation symmetry. The basic Potts ferromagnet has a ground state in which the constraints that for each edge the two variables at the ends are equal are all satisfied (and it always exists).

    In the UG the interaction is “twisted” by introducing a permutation on each directed edge (and the inverse for the reverse direction). That is, the constraint is satisfied if the variable at one end is the permutation applied to that at the other end.

    Such a twisting is a “vector potential” or bond variable for a gauge theory with group S_k (there they would be variables themselves, but admittedly in the present case they are fixed). In such a gauge theory, the variables on the vertices, which transform in the defining representation of S_k, correspond to scalar fields, aka the Higgs field.

    Thus UGs seem very natural to a physicist, but except for the case k=2 which is an Ising spin glass type of model, I don’t know if they were ever considered in the stat mech literature. The usual Potts spin glass simply has constraints of two “signs”.

    Finally, I have read that there is a subexponential time algorithm for solving a UG. I think what this means is that when an assignment satisfying more than a 1-epsilon^6 fraction exists, an assignment satisfying at least a 1-epsilon fraction is produced, and otherwise not, thus deciding between the two options in version 1) above. Moreover, the time is subexponential, which contradicts expectations (the exponential time hypothesis) for NP-complete problems. Does that sound right? So assuming the ETH the UGC is refuted?

  436. asdf Says:

    Scott #428, hmm, but that’s for an arbitrary problem in PSPACE, not restricted to NP. For UNSAT maybe it’s not as bad.

    Does this work? There’s a brute force 2**n sized proof of unsatisfiability (check all assignments) that can be converted to a poly(2**n)-sized PCP for a known constructive polynomial (if I understand correctly). Call its degree d. So from the PCP theorem we need just a single, sdn-bit query (s is a constant we choose for the soundness we want) whose s-bit response we can check with a known, polynomial time verifier. This means finding the s bits is amenable to Levin search: am I missing something? This seems too good to be true, but I can see how it’s different from an arbitrary problem in PSPACE.

    If we really luck out and it turns out that P=PSPACE, I’m having trouble seeing how to make use of that fact with something like Levin search, but maybe there’s a way. It would mean we can play perfect (PSPACE-COMPLETE) generalized chess in polynomial time without knowing the actual algorithm for doing so.

  437. asdf Says:

    I guess in the above, using the PCP theorem requires knowing what the succinct witness of unsatisfiability actually looks like and how to check it, and not merely that it exists. So the idea above was bogus. So never mind ;).

  438. Gil Kalai Says:

    Hi Nick

    To the best of my memory the situation is this: you want an algorithm such that if A) there is a labeling satisfying a (1-ε) fraction of the constraints, it outputs YES, and if B) there is no labeling satisfying an ε fraction of the constraints, it outputs NO, and otherwise you do not care.

    (So there is no promise that either A or B holds but perhaps it does not matter.)

    The subexponential result does not contradict the unique game conjecture. It shows that certain strong reductions from NP complete problems to UG do not exist. (What is ETH?)

    The Potts connection is sort of nice. I remember a certain Potts-related new computational complexity class that I heard about in a talk by Mark Jerrum, but I don’t remember the details.

  439. Greg Kuperberg Says:

    Nick – “There is a promise that either there is an assignment that satisfies at least a fraction 1-epsilon of the edges, or else the maximum is at most delta; decide which.”

    Yes, that’s what is meant.

    “Then I’m puzzled by the UGC that UG is NP-hard. Why not NP-complete (as Gil says actually)?”

    It goes without saying that everything related to Unique Games is in NP. So therefore NP-hard is synonymous with NP-complete. That is yet another idiosyncrasy in the terminology.

    Finally, as you say, there is the recent subexponential time algorithm for Unique Games due to Arora, Barak, and Steurer. If we assume the exponential time hypothesis for 3-SAT, this establishes that certain forms of Unique Game approximation are not NP-hard with Karp reduction. As they point out, it does not strictly rule out all forms of NP hardness. Still, in light of this algorithm, I certainly would rather view UG as its own complexity class rather than to conjecture it to be NP-hard.

  440. Scott Says:

    Greg #439:

      there is the recent subexponential time algorithm for Unique Games due to Arora, Barak, and Steurer. If we assume the exponential time hypothesis for 3-SAT, this establishes that certain forms of Unique Game approximation are not NP-hard with Karp reduction.

    Actually, no it doesn’t. The algorithm takes time that’s something like 2^(n^poly(ε)), where ε is the desired error. So, “all” that’s implied by Arora-Barak-Steurer is that, if there is a Karp reduction proving Unique Games NP-complete, then the reduction has to blow up the instance size by an n^poly(1/ε) factor (assuming the ETH, of course). Dana tells me that there are many known reductions in the PCP world (for example, parallel repetition) that have that behavior.

  441. Scott Says:

    asdf #437: Yeah, even to give an interactive proof for a coNP language, as far as anyone knows today the prover needs to have at least the power of #P. Whether that can be improved (say, to a prover with BPP^NP power) is one of my favorite open problems in complexity theory. We have no idea how to do it, but also no strong evidence against (e.g., that if it happened then PH would collapse, or something like that).

  442. Nick Read Says:

    Gil,

    ETH is a place in Zurich. No, ETH=exponential time hypothesis.

  443. Nick Read Says:

    Scott #440: that’s helpful. I can imagine that blowing up
    the size of the instance is what is needed to convert an ordinary 3SAT problem (or a simple threshold decision problem for a UG, no promise—presumably NP-complete for all k) into a UG with the promise of a gap in its satisfiability.

  444. Bram Cohen Says:

    Scott, I posted a comment on Lipton’s blog about the possibility of using linear programming as a proof technique, an idea which may have some philosophical implications given the paucity of approaches to solving NP-complete problems: that is, they’re basically all solved by either stochastic search or backtracking, with no third option. I propose a third option which may work well on certain very specific problems.

  445. Scott Says:

    Greg #425: Incidentally, I guess there are two reasons why no one has tried to relativize the UG complexity class:

    (1) It’s not clear how to do it. (There’s a similar fuzziness about how to relativize the PCP class—see the debate between Arora-Impagliazzo-Vazirani and Fortnow on that question.)

    (2) Even if you could do it, a proof of the UGC is just not the sort of thing that has any good reason to relativize. (Already the proof of the standard PCP theorem “fails to relativize,” for some reasonable definitions of the oracle access mechanism.)

  446. asdf Says:

    Scott #441, thanks, this starts to make sense. NP and co-NP misleadingly sound like “duals” of each other that (by the terminology) should somehow be equivalent in difficulty, but actually co-NP is one level higher in PH, so being able to solve NP doesn’t necessarily help, and even if P=NP (so PH collapses) unless you know the reason so you can use it somehow, NP and co-NP are effectively still separated.

    But, there is a famous theorem of Stockmeyer that approximating solutions to #P problems is in BPP^NP, which is what we’re dealing with here. Does that not count if the number of solutions is 0 (i.e. the instance is in a co-NP language)? I haven’t actually read the paper so I don’t understand the algorithm.

  447. Greg Kuperberg Says:

    Nick – I found a few papers on “random permutation Potts” models. Unique Games is not necessarily meant to be the random case, nor a repeating lattice model. Otherwise this seems to be a special case of Unique Games. E.g. this paper:

    http://arxiv.org/pdf/0710.4246

    Scott – I see. Thanks to your comments on several points, I understand the issues better now. Maybe it’s simply provocative without being likely or unlikely to claim that UG approximation is NP-hard. And, if there is no natural way to relativize UG, maybe it’s not so bad not to make it a complexity class.

  448. quax Says:

    ” … even when Luboš is wrong about a straightforward factual matter, he never really admits error: he just switches, without skipping a beat, to some other way to attack his interlocutor.”

    If only FOX News viewer demographics could tolerate a science show there’d be a match made in heaven …

    Really surprising to me that it took you eight years to lose your patience with this guy.

  449. Scott Says:

    asdf #446: No, NP and coNP are duals of each other! It’s just that saying you want to verify a YES answer introduces an asymmetry between the two. It’s just as hard to verify a NO answer to an NP language as it is to verify a YES answer to a coNP language.

    Also, Stockmeyer’s algorithm certainly does let the BPP^NP prover find out for itself whether the SAT instance has any satisfying assignments. (In fact, it doesn’t even need Stockmeyer’s algorithm: it can just use its NP oracle to ask whether satisfying assignments exist!) The whole problem is how to convince the polynomial-time verifier in case the answer is NO. This is an interesting general point: it can take more computational power to convince someone else of something than it took to learn the answer yourself!

  450. Nick Read Says:

    Greg #447: thanks for the reference.

  451. asdf Says:

    Hmm, ok, so NP and co-NP are duals about recognizing YES vs NO instances, but language membership is only about YES instances, so from that standpoint co-NP is still harder (Pi-1 in PH instead of Sigma-1). It’s just like in logic, where a Sigma-0-1 proposition (if true) is provable by showing a witness, but its negation is Pi-0-1 and might be undecidable. It just takes a while to get used to the idea that if somebody proves P=NP some day (but doesn’t tell us a concrete bound), then we’ve got this perfectly practical (heh) algorithm for SAT, but we’re no better off than before about UNSAT.

    I guess I got confused and it slipped my mind that an NP oracle also has to recognize NO instances (so it’s also a co-NP oracle). It’s not like Levin search, which recognizes YES instances but can diverge on NO instances. Hmm, ok then, sure, if you have an NP oracle, you can recognize co-NP languages immediately.

    OK, how about this. Suppose P=#P (or even P=PSPACE) and you want an interactive proof of a co-NP language instance. Levin search finds a machine (call its index m1) that satisfies the first query. Then it searches for a machine m2 that satisfies two queries in a row, and so on. Eventually it finds the machine with the polynomial-time algorithm for all of #P, and that one (or an earlier one) answers the desired number of queries. And if your instance turns out not to be in your co-NP language, you eventually get an existential witness, so this all terminates in polynomial time. And it still terminates unconditionally (in at worst exponential time) if P != #P. Does that work?

  452. Greg Kuperberg Says:

    Scott – Re the paper by Gutfreund, Shaltiel, and Ta-Shma. I think that this paper looks at a different kind of embarrassment than the one that I had in mind. The paper considers algorithms with a fixed time limit. It says (assuming that P ≠ NP) that any such algorithm can be embarrassed by playing it against itself. The construction is similar to Gödel-Turing diagonalization.

    What I had in mind is algorithms that do eventually give the correct answer, but are sometimes slow. Assuming for instance the exponential time hypothesis, and if you like even stronger cryptographic assumptions. Suppose that A is an algorithm for CircuitSAT that always calculates the correct answer, eventually. Then does there exist an algorithm B, and a constructible or samplable sequence of inputs, that embarrasses A by being *much* faster? It feels like B must make use of a trapdoor. (Or that its existence is close to a definition of a trapdoor.) At the same time, it is cheating to let B be non-uniform if A is uniform. Given identical rules for A and B, clearly the choice of B must depend on the choice of A.

  453. Sam Hopkins Says:

    Maybe another analogy to NP vs. coNP would be Noetherian vs. Artinian rings, where the definitions seem literally dual to one another, but as it turns out one property is much more restrictive than the other.

  454. Greg Kuperberg Says:

    Sam – No, Noetherian vs Artinian rings are not a good analogy, because NP and coNP really are equivalent. Another analogy you could make is between left modules and right modules. NP is like the set of left modules of a ring and coNP is like the set of right modules.

  455. J Says:

    I do not get it. How are NP and coNP equivalent? They are like Yin and Yang. How does your left and right module idea fit in?

  456. Nick Read Says:

    asdf#451, Sam #453: I’m no theoretical computer scientist, but I think I can answer this one.

    The statement “NP and co-NP are duals about recognizing YES vs NO instances, but language membership is only about YES instances, so from that standpoint co-NP is still harder (Pi-1 in PH instead of Sigma-1)” is not correct in the last part.

    NP and coNP really are dual; coNP is not harder than NP. In terms of languages, a language consists of a set L of bit-sequences. For a language L in NP, a bit-sequence x is in L if and only if there is an efficiently-verifiable certificate y for x—in other words, there exists a bit-sequence y that satisfies relation R with x for, and only for, YES instances x (where R is polynomial-time computable, and y is not too much longer than x). A language L is in coNP if the complement {0,1}^* – L is in NP. Thus for coNP, there are certificates for, and only for, NO instances. Now they might sound like they are the same. But the distinction between the two becomes clearest when you ask if a language belongs to both NP and coNP. That means there exist certificates both for YES instances, AND for NO instances (the certificates for the two might look very different).
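
    A toy illustration of that last point (my own example, not from the thread; bipartiteness is of course in P, but it still shows how differently the two certificate formats can look): for the language “this graph is bipartite,” a YES-certificate is a proper 2-coloring, while a NO-certificate is an odd cycle.

```python
def verify_yes(edges, coloring):
    """YES-certificate check: `coloring` maps vertex -> 0/1 and must properly 2-color every edge."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def verify_no(edges, cycle):
    """NO-certificate check: `cycle` must be an odd-length closed walk along edges of the graph."""
    edge_set = {frozenset(e) for e in edges}
    closed = all(frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) in edge_set
                 for i in range(len(cycle)))
    return closed and len(cycle) % 2 == 1

triangle = [(0, 1), (1, 2), (2, 0)]
print(verify_no(triangle, [0, 1, 2]))                  # True: an odd cycle, so not bipartite
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_yes(square, {0: 0, 1: 1, 2: 0, 3: 1}))    # True: a proper 2-coloring, so bipartite
```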

  457. Greg Kuperberg Says:

    It is true that languages are the “yes” instances of a decision complexity class. However, this convention is only due to the historical origins of CS theory in logic and linguistics. For their own formal and historical reasons, set theorists often look at sets rather than functions from some universal set X to {yes,no}. Linguists for their part are more interested in the words that are in a given human language than the words that are not in a given human language.

    But in CS theory, the logicians’ conceptual asymmetry between yes and no, while it’s still sometimes there, has clearly faded. The main formal consideration in logic, that there is no set of all sets, does not apply because there is a set of all input strings. So there is nothing wrong with thinking of a “language” as a decision function from the set of inputs to {yes,no}. Or as a function from inputs to any set with two elements, {blue, red} say. Many complexity classes, including PH but also ⊕P and PP, are a priori symmetrical between yes and no. Or take SZK: it is not a priori symmetrical, but it is an important result that it turns out to be symmetrical after all.

  458. asdf Says:

    Greg, that PH (with its distinction between the pi and sigma levels) is still a relevant object of study, indicates that the difference between accepting YES instances and accepting NO instances is significant. I agree in some situations the “symmetry” or lack of it is quibbling over terminology, but I found the difference real and useful in understanding why it was hard to get Levin search to accept co-NP instances. I thought at first that I just hadn’t found the right trick, since if P=NP then NP=co-NP and short witnesses exist for co-NP strings. The analogy between PH and AH (the arithmetic hierarchy from logic) with sigma-1 and pi-1 at differing levels was a good conceptual way to see why the “right trick” might not necessarily exist. AH’s levels are provably distinct in computability theory and PH’s are conjectured to be similarly distinct in complexity theory. Kleene’s conceptualization of AH also didn’t have anything to do with set theory AFAIK. It’s entirely about first-order arithmetic sentences: there is no mention of or quantification over sets at all, and the “set of all sets” is not an issue. The number of pi-quantifiers in an arithmetic sentence is basically the depth of halting oracles you might need to decide the sentence with an oracle Turing machine.

  459. Yike Lu Says:

    It’s funny you mention “casino” because this dispute would be so moot if Lubos were a gambler. It may be well and good that he thinks “50%”, but what’s his uncertainty around that number? What bid-ask spread would he post if he were forced to bet real money?

    THEN it’s either show the world your actual confidence interval, be right, or go broke. If he quoted something reasonable (say a 50-55 spread, which is actually somewhat wide), I’d buy him in a heartbeat.
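
    To spell out the arithmetic (toy numbers, purely illustrative): on a binary contract paying 100 if P≠NP is proved, a posted 50-55 market commits the quoter to buying at 50 and selling at 55, so his honest probability had better live in roughly that band.

        # Toy arithmetic, purely illustrative: expected profit (per contract) of
        # buying at a quoted offer, on a binary contract that pays 100 if the
        # event occurs and 0 otherwise. Probabilities in whole percentage points.

        def expected_profit_of_buying(offer_price, prob_pct):
            return prob_pct - offer_price          # = 100 * (prob_pct / 100) - offer_price

        for p in (50, 55, 99):
            print(p, expected_profit_of_buying(55, p))
        # 50 -> -5   a genuine 50/50 believer should not lift the 55 offer
        # 55 ->  0   indifferent: the offer sits exactly at his probability
        # 99 -> 44   a 99% believer buys every contract the 50-55 quoter will sell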

    The kind of attitude you describe is, to me, emblematic of the combination of intelligence and arrogance without the taming influence of “risk intelligence”.

    In brief, complete certainty translates into zero spread and arbitrary quantity (you would borrow as much money as allowed in order to place your bets). How realistic is that?

  460. Scott Says:

    Yike #459: There are sites (like this one) where you can bet on P vs. NP (alas, not yet with real money). But the bets might be more informative if there were a better chance of this question being resolved while the participants were still alive.

    What gets me the most is that, even if P≠NP is proved within my and Lubos’s lifetime, he still won’t admit that he was wrong (which is what I care about more than money). He’ll simply say: “Oh, so it turned out P≠NP after all? Whatever. It’s just an arbitrary question, and I gave it 50-50 odds anyway.”

    And what if the same breakthrough techniques that let us prove P≠NP also let us prove P≠BQP, NP≠PH, PH≠P^#P, P^#P≠PSPACE, PSPACE≠EXP, P=BPP, NP=AM, …, whereas Lubos had given only 50-50 odds to the experts’ conjecture being right for each of those questions (and claimed that each such question was independent of all the others)? I imagine he’d find some way to wiggle out of that one too. He’d even manage to turn it around so that, in his mind, I’d be the one who was wrong.

  461. Nick Read Says:

    Scott, is the blog going on Daylight Saving Time at all?

  462. fred Says:

    What puzzles me is that if it’s been so goddamn obvious since 1971 that P!=NP, then why are we still talking about all that stuff?
    Complexity theory grew out of this initial observation and created lots of clever mathematical acrobatics and spawned more classes than there are Twilight Zone episodes… but has there been any major practical breakthrough as a result of all this? (besides the hope of factoring numbers on D-Wave 5.0 in 2025).
    Refining asymptotic lower bounds only gets you so far – in the Real World™, the constants are what matters once you know the basic problem class and the problem size.

    It’s good old plain & boring programming and electronics engineering which got us from (1971)
    http://tinyurl.com/5bpsj6
    to (2014)
    http://tinyurl.com/mremfvp
    Looking at this you’d think we’ve shown P=NP in practice for all the problems that actually matter.

  463. Scott Says:

    fred #462: Yes, you’re correct that complexity theorists weren’t singlehandedly responsible for the personal computer revolution. (They also weren’t responsible for the fall of Communism, which occurred during the same time period.) The main thing they did, I’d say, is provide an intellectual core for computer science, a field that as a whole had a significantly larger role (e.g. creating the Internet). If you want more “direct” applications, a second thing complexity theorists did is lay the foundations for modern cryptography by creating schemes like RSA. (Note that Rivest, Shamir, and Adleman have each done things that were straightforwardly 100% complexity theory, like IP=PSPACE and BQP⊆PP, in addition to other things where complexity was just one aspect.)

    Anyway, cryptography is certainly one reason why people still talk about P vs. NP. But I’d say the main reason they still talk about it is that it’s a fascinating scientific puzzle—and as long as they remain unsolved, scientific puzzles don’t go out of fashion and need to be replaced at the end of each calendar year. On the contrary, they often get more interesting: like the fate of flight MH370, an open problem just grows all the more tantalizing with each new tidbit of information that dribbles out. Do you also ask: “if it’s been so goddamn obvious since the 1920s that general relativity and quantum mechanics need to play together somehow, then why are people still talking about it”?

    Frankly, I don’t get all the hostility directed at this tiny field studied by maybe a few hundred people. The fact that I blog about complexity theory doesn’t imply that I think it deserves credit for everything that’s happened in electronics and software, or that I need to be corrected for that mistaken belief. I blog about complexity because I work on it and know something about it, and I work on it because I personally think it asks some of the deepest questions ever asked that clearly have definite answers.

  464. Nick Read Says:

    Scott #463,

    I totally support this comment.

    Nick

  465. PeterM Says:

    Scott has some writings on the question of whether P equals NP given access to oracles that are “physically feasible” (soap bubbles as a sort of minimal-surface computer, and so on). As I remember, the conclusion was that the classes still seem to be separated under these conditions, and I am curious whether there are also some “close calls” there. (In the sense that if a physical theory were a little bit different, then it would allow oracles under which the two classes coincide.)

  466. Scott Says:

    PeterM #465: You’re probably referring to this paper, about the solvability of NP-complete problems in the physical world. (It’s not exactly about P vs. NP relative to oracles, which is a different question—since when we relativize, the P and NP machines both get access to the oracle.)

    But yes, when one thinks about NP-complete problems and physics, one does also find some “close call” behavior! So for example, if the Schrödinger equation were just slightly nonlinear, then we could use quantum mechanics to solve NP-complete and even #P-complete problems in polynomial time. Likewise if the Born rule were replaced by almost any other rule (e.g., |ψ|^2.001, suitably normalized). Admittedly, one can also give arguments against these possibilities that have nothing to do with complexity theory: for example, they would also open the door to superluminal signalling and closed timelike curves. But I think complexity gives you an additional source of intuition.
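
    (A toy illustration of the deformed rule, purely for concreteness: under a |ψ|^p rule with p = 2.001, outcome i gets probability |ψ_i|^p divided by the sum of all the |ψ_j|^p, so p = 2 recovers the usual Born rule and any fixed p ≠ 2 is a tiny deformation of it.)

        import numpy as np

        def deformed_born_probs(psi, p=2.001):
            """Outcome probabilities under a |psi|^p rule, 'suitably normalized'.
            p = 2 is the ordinary Born rule; any fixed p != 2 is the kind of tiny
            deformation being discussed. Toy illustration only."""
            w = np.abs(np.asarray(psi, dtype=complex)) ** p
            return w / w.sum()

        psi = [0.6, 0.8]                                  # |0.6|^2 + |0.8|^2 = 1
        print(deformed_born_probs(psi, p=2))              # [0.36, 0.64], the Born rule
        print(deformed_born_probs(psi, p=2.001))          # very slightly re-weighted away
                                                          # from [0.36, 0.64]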

  467. fred Says:

    Scott #463

    Ok, Scott, I have to come clean…
    I’m working as an intern for NewsWeek, and I was hoping to troll you into finally admitting that you’re the real Satoshi Nakamato.

    Our evidence is irrefutable:

    1) We’ve been told by experts that all that bitcoin network ‘mining’ is really just evaluating all the possible 4×4 permanent circuits.
    http://www.scottaaronson.com/talks/wildidea.ppt
    Coincidence?!

    2) you keep making bets for millions of dollars and condos in Monaco at the slightest excuse. Obviously all those insane amounts of cash have to come from somewhere…

    3) we’re pretty confident that “Scott Aaronson” is an anagram of “Satoshi Nakamato” (our boys back at the editing room are still working on a proof).

  468. fred Says:

    More seriously… some people seem to believe that P!=NP is ultimately “undecidable”.
    Is that an actual possibility?

  469. Scott Says:

    fred #468: (sigh) See comments #296, #313, #332.

  470. fred Says:

    Scott #469 – ok, it’s clear now 🙂 thank you.

  471. Greg Kuperberg Says:

    Scott – If the Schrödinger equation were just slightly nonlinear, then we could use quantum mechanics to solve NP-complete and even #P-complete problems in polynomial time.

    If stochastic matrices were even slightly non-linear, then the consequences would be equally dramatic. The problem is, in the present interpretation of stochastic matrices, the hypothesis is nonsense.

    Since the Schrödinger equation is closely related to a stochastic differential equation, this analogy has strength. So yes, you could hypothesize something like this, but I consider it dubious to call it a “close” call.

    A looser analogy: If even a small fraction of human DNA contained special nucleotides not found in other living organisms, then the consequences could be dramatic. Namely, the Book of Genesis could be correct that humans were created separately from other species. It sounds like a close call, but not really.

  472. Scott Says:

    Greg #471: Yes, I agree that it’s dubious to call these things “close calls,” since as I said, there are other good reasons why the Schrödinger equation should be exactly linear, the Born rule should be exactly |ψ|^2, and so on. But if someone had the wrong idea that these were “unlikely coincidences” (and hence, unlikely to hold exactly), one could point to the crazy consequences of any inexactness as a way of convincing that person they were on the wrong track.

  473. Jay Says:

    If a small fraction of Motl’s DNA was changed, he would be banned for throwing feces rather than intellectual dishonesty. Close calls everywhere. 🙂

  474. Raoul Ohio Says:

    Jay, has Motl molted?

  475. A.Vlasov Says:

    Greg #471, Scott #472: In probability theory, linearity obviously follows from additivity over events. For the Schrödinger equation, without assuming the precise Born rule, no such argument works. The nonlinear Schrödinger equation in the one-dimensional case is integrable by the inverse scattering method, i.e., with the help of a certain linear differential equation. The resulting theory resembles quantum mechanics considered from a nonstandard angle of view, and it is not clear whether the nonlinear Schrödinger equation could indeed be used after some correction to how the Born rule is applied.

  476. Jay Says:

    It seemš so.

  477. A.Vlasov Says:

    PS. I am really not proposing some correction of the Born rule for the nonlinear Schrödinger equation (due to known problems) – there is a simpler possibility – to consider the Born rule as an approximation valid in the linearized case, one that may not hold precisely for the nonlinear version…

  478. Cristóbal Camarero Says:

    It seems that all your arguments are perfectly compatible with P<NP<P/poly. It would explain the nonexistence of some kinds of proofs and the existence of good approximations.

  479. Patrick Says:

    @Sev #10

    I’m so late to the discussion, but… I don’t agree with this analogy to physics:

    “Let’s think of P vs NP in the same way: It seems very difficult to resolve this question rigorously. In analogy to physics, each attempt at bringing these two classes together can be thought of as an “experiment”, and each experiment so far has failed. One can argue that perhaps we weren’t clever enough in our proof attempt, but the same also holds true for physics, as one can argue that perhaps the parameters in a physics experiment weren’t quite precise enough or set correctly.”

    The problem is that a physical experimental model of mathematical belief fails empirically. If we used an experimental model for proof, we would have believed Fermat’s last theorem, or the Poincare conjecture, unprovable before they were proven. Every hard problem in math follows the pattern of many failed attempts before a brilliant success, most simply because, like search, you always find what you’re looking for in the last place you look for it.

  480. Jr Says:

    “Discrete math and computer science, you see, are so arbitrary, manmade, and haphazard that every question is independent of every other; no amount of experience can give anyone any idea which way the next question will go.”

    Here is a thought that occurred to me. Suppose you take the starting chess position but remove one of black’s rooks. I am pretty sure that Motl would be willing to believe that the resulting game is not a win for black, even in the absence of proof. (If you want to make it into a more conventional mathematical problem, define some suitable sequence of generalized chess positions where white starts with a crushing advantage and conjecture that black does not have a winning strategy.)

    But isn’t this a discrete mathematical problem where intuition should give no guide?

  481. asdf Says:

    Anyone have any idea if this guy was onto anything?

    http://math.berkeley.edu/~rhodes/

    Paper is here: http://math.berkeley.edu/~rhodes/JohnRhodes.pdf

    “Our viewpoint toward P vs. NP is that it is obviously true that P does not equal NP, but we need to get more sophisticated relevant mathematics involved to prove there are no polynomial-time programs for NP-complete problems”

    Are there any notable complexity theory folks in the picture at the first link? I don’t recognize any but I don’t know that many.

  482. asdf Says:

    Ehh re previous, Lance had a post: http://blog.computationalcomplexity.org/2006/03/on-p-versus-np.html

    I also found the conference program in the Wayback machine. Nothing looked exciting.

  483. Scott Says:

    asdf #481: Hard to argue with the idea of getting “more sophisticated relevant mathematics involved” when trying to solve P vs. NP! But no, that conference didn’t lead to anything that I’ve heard about. Maybe the most relevant thing to say is that, as Lance pointed out, there are/were world experts in complexity theory right there at Berkeley (Dick Karp, Christos Papadimitriou, Luca Trevisan), and not one of them was involved with the conference.

  484. Perl Encryption Primer: Public Key Encryption - Wumpus Cave Says:

    […] all. At the same time, there doesn’t seem to be any reason why P and NP should be separate. But they probably are. The reason is that we’ve been trying really, really hard on some of these problems for a […]

  485. Giorgio Camerani Says:

    This post and the vibrant discussion that originated from it are very interesting, in particular as concerns the conjectured existence of the invisible electric fence (the 3 examples used to present that conjecture are intriguing, especially the last one).

    Scott, what about the following hypothetical scenario.

    First, let me make a preliminary remark. We know that hard-seeming tasks have sometimes been tamed by extremely powerful and clever theorems. Such theorems pointed out some previously unknown identity which rendered the task simple if not trivial. For example, consider the decision problem of determining whether an undirected connected graph has an Eulerian Cycle. As usual there is an exponential number of objects to be enumerated here, and one may be tempted to conclude, after several failures, that no quick procedure exists for such a problem, or in other words that any procedure is forced to be, at its very core, some optimized version of the obvious brute-force procedure. But then Euler appears, singing the following Theorem: “An undirected connected graph has an Eulerian Cycle if and only if every vertex has even degree”. Such a theorem unveils an identity which lets us solve the Eulerian Cycle problem trivially, in linear time.
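
    For concreteness, Euler’s criterion really does collapse to a few lines of code; here is a minimal, purely illustrative sketch, with the graph given as an adjacency-list dict.

        from collections import deque

        def has_eulerian_cycle(adj):
            """Euler's criterion: a connected undirected graph has an Eulerian cycle
            iff every vertex has even degree. `adj` maps each vertex to its list of
            neighbours. Linear time, with no enumeration of cycles anywhere."""
            if any(len(nbrs) % 2 for nbrs in adj.values()):
                return False                                  # some vertex has odd degree
            verts = [v for v, nbrs in adj.items() if nbrs]    # ignore isolated vertices
            if not verts:
                return True
            seen, queue = {verts[0]}, deque([verts[0]])       # connectivity check by BFS
            while queue:
                for w in adj[queue.popleft()]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            return all(v in seen for v in verts)

        print(has_eulerian_cycle({0: [1, 2], 1: [0, 2], 2: [0, 1]}))   # True (a triangle)
        print(has_eulerian_cycle({0: [1], 1: [0, 2], 2: [1]}))         # False (a path)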

    OK, now here is the hypothetical scenario. Suppose that our Universe is designed according to the following rules:

    1. Any attempt to solve an NP-complete problem by devising an algorithm which manipulates the obvious objects under consideration (boolean assignments, cycles, cliques, vertex covers, …) is doomed to failure, i.e. it will be buried by the exponential abundance of such objects, no matter how optimized the algorithm is. No algorithm exists that is able to always furnish the correct answer by examining a polynomially sized subset of the exponentially sized set of objects under consideration. The key here is that you must not look at the objects you are interested in. They are like Medusa: if you look at them you die. As in the aforementioned Euler’s Theorem, you are interested in Eulerian Cycles but you look at some other property involving vertices.

    2. It is possible to solve NP-complete problems in polynomial time if, instead of focusing on the objects you are interested in and scanning them, you exploit the existence of some equivalent, quickly verifiable property. Such properties are guaranteed to exist, for each NP-complete problem, and in general there is no limit on how exotic and counter-intuitive they can be. The hard-to-find objects are tamed into easy-to-check properties.

    This may seem a bizarre speculation but… …can we, given current state-of-the-art knowledge, rule out such a Universe? I’m not sure we can. Moreover such a Universe would reconcile you and Lubos (!). Because Rule 1 seems to me a close relative of your invisible electric fence (as far as I know, any known sound and complete algorithm for any NP-complete problem boils down, at some stage, to the enumeration of an exponentially sized set of objects). While Rule 2 is similar enough to something Lubos once said (not during this discussion, but during a past discussion he had on his blog with Boaz Barak, if I remember correctly).

  486. Scott Says:

    Giorgio #485: Yes, the situation you describe is certainly logically possible; I addressed it in comment #45 and several other places. Personally, I consider it extraordinarily unlikely, based in part on previous experience: when, in the past, problems were shown to be in P after a “long journey”—e.g., Linear Programming and Primality—it was never the case that there seemed to be an electric fence separating the problems from P, which was then miraculously deactivated by some tour-de-force theorem. Instead, there was a much “smoother” journey down to P: the problems were known for decades to be solvable efficiently in practice, and the difficulty was “only” in showing that the practical easiness of the problems really translated into theoretical easiness.

  487. Thomas Says:

    Philip White: “Also, not to wax too theological, but do you think that P != NP has any relationship to questions about the existence of god? I always wondered how a god could have created something as complex as the world/universe in 7 or so days without a good SAT solver.”

    Given that god knows everything, we can safely assume he is a good SAT solver. 😉

  488. Serge Says:

    Scott #50: “It’s not merely a choice to get a simpler theory; it’s also a prediction that the simpler theory is more likely to be the true theory.”

    In my opinion, the prediction will come out as a result of physics, not one of mathematics. Indeed, I don’t think that mathematics alone has the power to prevent any processor from quickly undoing a multiplication. I rather think it’s the physical level that has this power.

    So you’re right in advocating for P≠NP as the better of these two competing theories – since it’s the only one that describes reality accurately. However, the physics theory won’t make use of any such restricted concept as polynomial behavior. The right ones will rather be “feasible/unfeasible”, “executable/galactic”, etc…

    Of course, the final result will have the same practical consequences. But the axiom P≠NP isn’t anything more than a convenient – and as such, unprovable – mathematical translation of the physical reality. The set of problems solvable in polynomial time is certainly not “arbitrary”, but in any case it *is* “manmade”. 🙂

  489. Scott Says:

    Serge #488: You have no idea that P≠NP is unprovable. It could be, but as I’ve said, if it were shown to be independent of ZFC, that would be the first time in history that that’s happened for a non-Gödelian, non-transfinite mathematical statement.

    And yes, of course physics can change our notion of efficient computation—that’s why I’ve spent my career studying quantum computing! But if you agree that Nature is at least capable of universal classical computation, then proving something like P≠NP is a necessary first step—a prerequisite—to showing the hardness of NP-complete problems according to our best understanding of physics. I.e., if P≠NP then NP-complete problems could still (in principle) be efficiently solvable by quantum or other means, but if P=NP (and the constants are reasonable, yadda yadda) then there’s nothing physics could possibly do to make NP-complete problems hard.

    Finally, it would be helpful if you defined what you meant by “feasible/unfeasible,” “executable/galactic,” etc. The polynomial/exponential dichotomy at least has the virtue of being clearly defined, so that one can start doing things with it.

  490. Serge Says:

    Scott #489: Let’s say that a feasible problem is one which has a high enough probability of ever being solved with a reasonable degree of accuracy and efficiency. The already-known polynomial problems have this property, but the fact that these two classes actually coincide is unprovable. It’s just a belief that I share with you and a few others. I’m aware that it won’t be possible for me to work without a more elaborate concept, and also that I’ll have to do that in my leisure time. 🙂 I have great respect for the professional scientists, but on this precise matter I can’t agree with you Scott. A problem can be physically hard without being hard mathematically. It’s just that its mathematical solution won’t ever get found.

  491. Serge Says:

    Scott #489:
    1) “And yes, of course physics can change our notion of efficient computation.”

    I think exactly the opposite: the notion of efficient computation is independent of the computing model, be it quantum or classical. That’s why you *can’t* have provable accuracy with quantum computers: precisely because they *can* provide you with provable efficiency.

    2) “if you agree that Nature is at least capable of universal classical computation…”

    For me, Nature is capable of *both* quantum and classical computation. What Nature can’t do is provide you with accurate *and* fast solutions to hard problems, at least with a non-infinitesimal probability. That also decreases the likelihood of having no bugs whenever you’re using a (classical) low-level language, and of outputting the fastest programs whenever you’re using a high-level language. I just view quantum computing as a still-lower-level language.

    3) “proving something like P≠NP is a necessary first step — a prerequisite — to showing the hardness of NP-complete problems according to our best understanding of physics”.

    My personal understanding of physics makes me view the main difficulties of complexity theory as yet another quantum effect. Therefore, for me P≠NP is not something that has any effect on a physics phenomenon. As Cantor said, the essence of math is in its freedom. So you can choose P=NP at will, but the physics won’t let you compute any faster…

    4) Points 1) to 3) might – or might not – have shown you that I have at least some idea that P≠NP is unprovable.

  492. Serge Says:

    If you will allow me to make my point more precise: the physical hardness of NP-complete problems can be consistently interpreted, either through a probabilistic quantum model that’s compatible with P=NP – which is my own interpretation – or also through the standard complexity-theoretic model as a consequence of P≠NP – which is the more common view.

    Each model could be developed on top of the other, if it weren’t for the probabilities involved in one of them and not in the other. They unfortunately rule out any hope of using these two models to prove that P≠NP is independent of ZFC.

    The impossibility of evidencing which of the two above interpretations actually holds in Nature might turn out to be just another relativistic effect. Thus, P≠NP would be undecidable for mere physical reasons…

  493. Marion Says:

    Lubos Motl is an Anthropogenic Global Warming (AGW) denier. That makes him a rabid propaganda-spouting irrational illogical lunatic. Period. I have encountered his kind countless times before, and they are all the same. You cannot trust anything his kind says about anything.

    I’m curious about whatever connections might exist between P vs NP and, for example, exact solvability of nonlinear differential equations, if any exist.

  494. Jordan Ash Says:

    There are no laws, only human models and rules which conform to said models. (ex: Grover’s is optimal for linear QC, but non-linear QC allows constant time search).

    One day our models will show that P=NP if only because there are human beings intent on finding an equivalence between these two classes, and that seems to be the way the universe works (goals are always eventually accomplished).

    If we can’t show something as mundane as P=NP, we’ll never achieve starflight, time travel, and the other good things we envision.

  495. Scott Says:

    Jordan #494: ROTFL, but what if there are other human beings intent on proving P≠NP? How does our universe decide which group to grant the wishes of, and whose goals will eventually be accomplished?

  496. Marion Says:

    OMG! This Jordan Ash couldn’t possibly be the same Jordan who humiliates himself on YouTube with a laughable series of pitiful anti-science anti-math videos under the name “Spirit Science”? I know that SS’s first name is Jordan.

    That would be too amazing a coincidence.

    Ridiculous generalizations, which are trivially proven false, such as “goals are always eventually accomplished”, are the hallmark of this pure idiot.

    Tell that garbage to the losing side in any war or court case.

  497. Jordan Ash Says:

    Selective comment publishing — tisk tisk, Scott.

    In case you decide to allow this one through the filter, Marion #496: I don’t know the Jordan you’re referring to. Don’t generalize my generalization; I’m talking about the process of scientific discovery, not winning X battle in Y war. Although interestingly, wars tend to be won by the side who innovates faster.

  498. Keum-Bae Cho Says:

    Dear Professor Scott Aaronson,

    I have a question regarding P vs NP problem.

    If Intractable SAT requires exponential precision to acquire feasible solutions,
    does this lead to the conclusion that class P is not equal to class NP?

    I look forward to your comments.

    Best Regards,

    Keum-Bae Cho Ph.D

  499. Scott Says:

    Keum-Bae #498: Your question doesn’t make sense to me. Precision in what? The desired solution in SAT is a Boolean truth assignment (i.e., a collection of ‘true’ and ‘false’ values), and the type of machine that has to produce that solution (whenever one exists) efficiently is clearly specified: it’s a Turing machine manipulating discrete values, or something polynomially equivalent to a Turing machine. So at the end of the day, P vs. NP is a well-defined, discrete mathematical question—not a question of physics—and the issue of “precision” never even enters into it.

    Having said that, if you could solve NP-complete problems efficiently using some sort of analog computer, then whether or not that would imply P=NP would depend on whether you could simulate the analog computer by a discrete computer with only polynomial slowdown. If you could do it, then you’d indeed get P=NP. But if you couldn’t do it, it still wouldn’t imply P≠NP—only that this particular approach to solving NP-complete problems efficiently didn’t work.

  500. Alexander Says:

    I appreciate your article very much, but I have a question with respect to the following comment:

    Scott #45:
    Since it would be inelegant and unnatural for the class P to be “severed into two” in this way, I’d say the much likelier possibility is simply that P≠NP.

    Is it really true that P was severed into exactly two parts?

    In other words: If P=NP, would the combined P/NP class really just be the union of P and NP as we know it today?

    After all, if every problem that is currently in NP were solvable in polynomial time, this would also imply that there are a lot of new polynomial time certificates that could be used to define new problems that are also in the P/NP class, but which would probably be separated from both of the other two parts mentioned at the beginning.

  501. Scott Says:

    Alexander #500: Yes, you’re right, I was talking about what would then be the “two most important parts” of the class P, but there would also be infinitely many additional parts, presumably corresponding to problems like factoring and graph isomorphism (that had previously been supposed to be NP-intermediate).

  502. Alexander Says:

    I would like to add some further thoughts on the topic that I was not able to write down earlier when I was at work:

    1. If someone claims to be totally agnostic towards P vs. NP, he probably has never actively tried to find a polynomial time algorithm for an NP-complete problem on his own. In university I was also rather agnostic about this question. Years later I started to spend more time on it as some kind of weird hobby. The more time you spend on this question, the more likely it appears to you that P is unequal to NP. And it’s certainly not only an excuse for one’s own failure to find a polynomial time algorithm.

    2. Instead of asking why NP-complete problems are so hard, the better question is probably why some other problems that appear superficially similar are so easy. After all, the generic NP-complete decision problem asks whether an exponentially large solution space contains at least one element with a certain property. Why should all such questions in general be solvable in polynomial time?

    3. Problems that are superficially similar to NP-complete problems, but are known to be solvable in polynomial time, usually have some non-natural symmetries. Take XORSAT for example: “xor(not(x1),x2,x3)” can be rewritten as “x1=xor(x2,x3)”. Thus, we can eliminate literals in a XORSAT formula by substitution, and the solution will stay the same (see the sketch just after this list). Of course, this implies that many, many instances that have different solutions in SAT are essentially the same in XORSAT.

    4. Is it possible that there are some unknown symmetries in NP-complete problems that would allow a polynomial time algorithm if discovered? Theoretically yes, but I am quite sure that a lot of time was spent searching for them. Even I was smart enough to compute a set of properties for different k-SAT formulae and search for systematic differences between the satisfiable and unsatisfiable instances. I also took a 3SAT formula that has exactly one solution, calculated random subsets of the clauses and spent quite some time searching for some magic property that always stays the same. And if even I did that, then for sure many much smarter people have tried to do so, too. So the chances that suddenly someone comes up with a kind of Eulerian path criterion for 3SAT seem to be rather small.

    5. If we think of NTMs as a computing model with unlimited parallelism, P=NP would imply that parallelization in general cannot result in superpolynomial speedup even if the number of processing units grew exponentially with problem size. Why should this be the case? Of course there might be problems that are inherently sequential, but that does not rule out that there are other problems that benefit from a superpolynomial number of processing units.
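
    To make point 3 concrete, here is a minimal, purely illustrative sketch of the polynomial-time algorithm that the XOR “symmetry” buys you: Gaussian elimination over GF(2) decides XORSAT, and nothing analogous is known for 3SAT.

        def xorsat_satisfiable(equations):
            """Decide XORSAT by Gaussian elimination over GF(2). Each equation is a
            pair (vars, rhs): the XOR of the listed (1-indexed) variables must equal
            rhs. Polynomial time, thanks to the linear structure of XOR."""
            basis = {}                                    # highest set bit -> (mask, rhs)
            for vars_, rhs in equations:
                mask = 0
                for v in vars_:
                    mask ^= 1 << (v - 1)                  # bit-vector of the equation
                while mask:
                    top = mask.bit_length() - 1
                    if top not in basis:
                        basis[top] = (mask, rhs)          # new pivot row
                        break
                    bmask, brhs = basis[top]
                    mask ^= bmask                         # eliminate against the pivot
                    rhs ^= brhs
                else:                                     # reduced to the zero row
                    if rhs == 1:
                        return False                      # derived 0 = 1: unsatisfiable
            return True

        # x1^x2 = 1 and x2^x3 = 1 force x1^x3 = 0, so adding x1^x3 = 1 is inconsistent:
        print(xorsat_satisfiable([([1, 2], 1), ([2, 3], 1), ([1, 3], 1)]))   # False
        print(xorsat_satisfiable([([1, 2], 1), ([2, 3], 1), ([1, 3], 0)]))   # True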

    All in all, it is certainly justified to keep working on the P vs. NP question, but to claim that the odds are even is more than over-optimistic. Currently my gut says at least 10:1 for P unequals NP.

    Best regards,
    Alexander

  503. Matt Says:

    Just another conceptual point of view (from a “layman”).

    The concept of complexity is an empirical category, not a logical (=mathematical) one, since it makes assumptions about (human) behaviour.

    Mathematics (and logic) is a deductive system based on certain axioms that do not have to be interpreted by reference to any existing world. So, in the narrow sense of the word, math is not science. In science we have hypotheses, in math we have conjectures. Math has proof – beyond doubt – and science has theories that haven’t been falsified yet. The salt of science is uncertainty, or as Einstein put it: “So far as the theorems of mathematics are about reality, they are not certain. And as far as they are certain, they are not about reality”.

    This is nothing else than Kant’s distinction between a priori/analytic = “math” and a posteriori/synthetic = “science”.

    If one agreed to shift “complexity” over to the science side, different questions would arise, no longer in terms of proof but of falsification. In other words, P vs. NP would appear as a pseudo-problem (if treated as a mathematical question).

  504. Serge Says:

    Matt #503: I totally agree with you. In mathematics you have a clear-cut notion of existence that’s subject to proof. It’s a black and white concept. In science you have observations that are subject to testing. It’s a statistical and probabilistic concept. But you can have a mixture of both: to evaluate the probability of finding a proof, to prove that you can’t compute something with some probability, and so on… where you can never tell whether you’re banging into a physical or a mathematical wall! In my view, what makes complexity theory so difficult is that it’s impossible to tell the physics and the mathematics apart. That’s why complexity theorists should really care about philosophy…

  505. Alexander Says:

    Matt #503:
    In science we have hypotheses, in math we have conjectures. Math has proof – beyond doubt – and science has theories that haven’t been falsified yet.

    This might be a bit off topic, but how is it that in the anglophone world, “science” is often used as a synonym for “natural science”?

    Looking at the origin of the word, it derives from the Latin word for “knowledge”. Similarly, the German term “Wissenschaft” derives from “Wissen schaffen”, which roughly translates to “establishing knowledge”. That said, it seems pretty obvious to me that not only the natural sciences, but also formal sciences such as math, establish knowledge. Thus, categorizing physics as science, but math as some kind of non-science, seems a bit crude to me.

    Furthermore, I am not fully convinced that math and “reality” are as unrelated as sometimes suggested. Picking up a question raised by Scott elsewhere, a world in which 2+2=4 does not hold appears rather unthinkable to me – at least if we identify sums of natural numbers with the number of elements in the union of two disjoint sets, and if we identify sets with collections of distinguishable objects in reality.

  506. SAT for optimization Says:

    OK, this could be yet another “electric fence” situation…

    http://rjlipton.wordpress.com/2014/02/28/practically-pnp/#comment-53223

    http://rjlipton.wordpress.com/2014/02/28/practically-pnp/#comment-53252

    Had trouble with WordPress over there, so let me clarify where WP mucked it up… Paragraphs 4 and 5 can be combined/simplified as below:

    The generic expression for the OFC is “f(X) is at least K” for maximization-based problems and “f(X) is at most K” for minimization-based problems… Each of these conditions will take a bunch of clauses to express.
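
    For what it’s worth, here is a minimal, purely illustrative sketch of how such an objective-function condition is typically used: keep the problem clauses fixed and binary-search on the bound K, asking a SAT solver at each step whether “f(X) is at least K” is still satisfiable together with them. The helpers encode_f_at_least and sat_solve below are hypothetical stand-ins for the clause encoding described above and for any off-the-shelf SAT solver.

        def maximize_with_sat(base_clauses, encode_f_at_least, lo, hi, sat_solve):
            """Binary-search the largest K for which the constraints plus
            'f(X) >= K' are satisfiable. `encode_f_at_least(K)` is assumed to
            return the bunch of clauses expressing the bound; `sat_solve(clauses)`
            is assumed to return a model or None. Both are placeholders."""
            best_k, best_model = None, None
            while lo <= hi:
                k = (lo + hi) // 2
                model = sat_solve(base_clauses + encode_f_at_least(k))
                if model is not None:
                    best_k, best_model = k, model
                    lo = k + 1                    # the bound K is achievable; push higher
                else:
                    hi = k - 1                    # not achievable; come back down
            return best_k, best_model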

  507. Mario Says:

    In your opinion, what is the “right” (or average) time for the review process of an article that claims to solve the P vs. NP problem?

  508. Kyle Cranmer Says:

    I was surprised I didn’t see a reference to question 17 put to Knuth, on his belief that P=NP (particularly since Scott asked question 19)

    http://www.informit.com/articles/article.aspx?p=2213858&WT.mc_id=Author_Knuth_20Questions

    and more follow up here.
    http://www.reddit.com/r/compsci/comments/262tw7/knuth_on_why_he_believes_pnp_question_17/

  509. Byrd Says:

    What a charming post and analogy with the frogs. Thank you for taking the trouble to write it.

  510. Justin Says:

    Scott,

    I am an occasional reader of your blog, but I must say I rather more enjoyed the paper you wrote some years ago on the topic, “Is P Versus NP Formally Independent?”

    First of all, the P vs. NP question is a very difficult problem, and it does not impugn your intelligence, or the intelligence of computer scientists in general, that you cannot solve it. Even an expert could spend many years on it and become very frustrated. Sure, I can accept that it is a “fundamental hypothesis” that P != NP. All those proofs that this or that problem cannot be solved in deterministic polynomial time unless P = NP testify to that. And if you derive some important new result that you can only show to be true as long as P != NP, I doubt your peers would reject it solely on that basis.

    All your evidence shows is that as far as we know it would be reasonable and useful to assume that P != NP. For the most part it is accepted as a reasonable and useful assumption, so there is no problem there. But that doesn’t mean that “P != NP” is “99% likely” to be true, or provable even if it is true. And the rest of the world wonders what exactly you mean when you try to convince us of a claimed mathematical truth by some “scientific” reasoning that falls short of mathematical proof. A “scientific” case for P != NP would require peer review, and I should hope your peers would require a proof if you stated this unequivocally.

    I think academic standards are falling. Colleges have lowered the standard of proof for rape from “proof beyond a reasonable doubt” to “a preponderance of the evidence,” never mind the rights of the accused, because in many cases it’s just too difficult to actually prove rape. Now we’re supposed to believe some mathematical statement based on a preponderance of the evidence, too, where formerly a proof was required, just because this particular statement happens to be really hard to prove. The academic world is going mad.

  511. Serge Says:

    At second reading, I think you’ve managed to convince me that P≠NP. However I still disagree about the likelihood of undecidability. My insight is the following. Every algorithm can be encoded as a sequence of bits, but some other algorithm has to perform that encoding. And this other algorithm may in turn be encoded as a sequence of bits, and so on… Thus, some questions about algorithms are bound to remain forever undecided in arithmetic.

    In other words: Gödel has shown that computer science didn’t know everything about mathematics, but the converse is also true – mathematics doesn’t know everything about computer science!

    Sometimes I wonder if the impossibility to solve NP-complete problems efficiently hasn’t been an incentive for the evolution of intelligence. After all, if there existed a single method for doing everything, would anyone need to be smart?

  512. steve cross Says:

    A Berkeley math professor first told me about the P vs NP question in the late ’80s, in the form of the assertion that verifying a proof is easier than finding one, with all his experience suggesting that it is.
    When I first saw the case that the P != NP question is undecidable, it resonated intuitively – after all, it is a statement about proving a proposition about what you are unable to prove – most likely, you will get a lot of “isometric exercise” for the mental muscles. The history of set theory shows that you can only prove this kind of thing about a simpler language than the one used to make the proof (metalanguage and object language).
    A little later, I found a historical analogue to this problem in the writings of Poincare, before the development of Zermelo–Fraenkel set theory (in particular, the axiom of infinity).
    Poincare noted that a number of mathematicians had tried to prove mathematical induction as a theorem (he claims that even the great Hilbert fell into this trap), rather than take it as an axiom. At first glance it looks like just two statements (a base case and an induction step).
    But Poincare gave a one-line independence proof – the statement is really an infinite number of statements, P(n) -> P(n+1) for all n – but there were only a finite number of axioms at the time. So you can’t get there from here!

    In the case of the P vs NP problem, you can also give one of these hand-waving arguments that P != NP, using the Hamiltonian Cycle problem for instance, and explain the analogy with the above by showing that a rigorous proof would require demonstrating the non-existence of an infinite number of algorithms, one for each degree of polynomial.

    The hand-waving argument for taking this as an axiom goes like this – given an algorithm to find a Hamiltonian cycle, there will be some graph that forces the algorithm to look at every possible path before finding a Hamiltonian cycle in the worst case. Just take a graph that has no Hamiltonian cycle, but falls short by just one last edge. Of course the algorithm would have to look at everything before giving up. Then change the graph to include this edge. Then the algorithm would have to look at everything before finding the cycle.
    But a rigorous proof would require showing that there is no O(n^k) algorithm for each k, and of course this won’t happen “by induction on the degree of complexity”, since these algorithms get more and more complex with increasing k.

    One of my goals in the above is to give a criterion to dissuade bogus attempts at proof, by giving that “elevator speech” – asking, right in the abstract of the paper, “does the proof deal with the requirement of showing the non-existence of an infinite number of algorithms?”
    And of course, it most likely won’t.

  513. Uncertainty, information and cryptography | David Ruescas Says:

    […] fact that no one has proved that factorization is in P count as evidence that it is not in P? Some say yes and some say no. But it seems less controversial to say that the fact that no algorithm has […]