Statement on conceptual contributions in theory

About six months ago, a group of theoretical computer scientists started raising concerns about what they saw as a growing problem in our field. (My parody post “FOCS’36 notification” was one attempt to explain what this problem is.) The group has now put together a statement, which I was happy to sign, and which is meant to serve as a starting point for further discussion at the STOC’08 business meeting. If you support this statement and want to add your name to it, please say so in the comments section! Of course criticism is welcome too. –SA


We, the undersigned, are concerned about two related attitudes that seem to be increasingly prevalent in the TCS community, and in particular, are affecting its program committees and their decisions. The goal of this statement is to attempt to recognize and reverse this trend. We are happy to note that the STOC’08 PC made a conscious effort to move in the direction of this proposal. The trends that worry us are the following:

  1. Assignment of little weight to “conceptual” considerations, while assigning the dominant weight to technical considerations.
  2. The view that technical simplicity is a drawback, and the failure to realize that simple observations may represent an important mind-switch that can pave the way to significant progress.

Most works offer a mix of conceptual and technical aspects, where by “conceptual” we mean the aspects that can be communicated succinctly, with a minimum of technical notation, and yet whose content reshapes our view or understanding. Conceptual contributions can be thought of as the parts of a work most likely to come up in a scientific hallway discussion. They may appear in a work’s “bottom line” or “along the way”.

  • A conceptual “bottom line” may be a result that affects the worldview of researchers outside the community studying the problem, or the introduction of a new problem that may appeal to the wider TCS community.
  • A conceptual aspect “along the way” may be an innovative way of modeling, looking at, or manipulating a known object or problem, including establishing a new connection between known objects/problems.

Needless to say, the above list is not exhaustive.

Once understood, conceptual aspects tend to be viewed as obvious, which actually means that they have become fully incorporated in the worldview of the expert. This positive effect is actually a source of trouble in the evaluation process, because the evaluators forget that these contributions were not obvious at all before being made.

Indeed, our community should be warned against dismissing such contributions by saying “yes, but that’s obvious”; when somebody says such a thing, one should ask “was it obvious to you before reading this article?”

We believe that the community needs to remain vigilant about these issues, and program committees should make a conscious effort to pay attention to conceptual contributions (as apparently done by the STOC’08 PC). This will enable our conferences to continue to be a driving force in the progress of our field.

Scott Aaronson
Allan Borodin
Bernard Chazelle
Oded Goldreich
Shafi Goldwasser
Richard Karp
Michael Kearns
Christos Papadimitriou
Madhu Sudan
Salil Vadhan

102 Responses to “Statement on conceptual contributions in theory”

  1. D Says:

    I totally agree with this statement, and I think it’s most welcome.

    One related problem is the attitude of reviewers (apparently) failing to read a paper and writing laconic comments of the sort “the problem is not interesting, the formulation is not new”, etc., without giving a concrete explanation or refutation. Such three-line reviews are quite annoying, and worse, may ignore important contributions hidden within the paper.

    I personally think this problem is intimately linked to the “names” issue discussed before: the fact that submissions are not anonymous.

    While I don’t think there is a “mafia”, so to speak, I think there is a tendency to treat “famous” names very differently, and to write the kind of reviews I was describing for less-known names.

    While blind submissions are not without problems, I think they would greatly contribute to the procedural justice of the process of publishing a paper at these conferences.

  2. hhm Says:

    I absolutely agree, and I think this statement is very important. Thanks a lot for supporting the growth of science in so many ways.

  3. Nagesh Adluru Says:

    I agree with this statement. Actually it’s the conceptual results of TCS that improved my ways of thinking in my work (computer vision and software development), not the technical results. This would not have been possible without knowing the results.

  4. Travis Says:

    I’m a semi-outsider to the TCS community. I’m a physicist, but my undergrad degree was in both computer science and physics, and the work I do is in quantum information and has a computer science “flavor” to it.

    I think that concern (1) is very important, and unfortunately this problem is present in many fields, not just computer science. I don’t know how to fix it, but building awareness is a good start.

    I’d also like to say that, as a physicist, there are a few things about the TCS world that puzzle me (see list below). I’m not posting this list to bash CS, but rather to try to build a constructive dialog that will help both fields. Similar comments about physics would be appreciated.

    1) The publishing model. Compared to physics, publishing in computer science seems to be a big pain in the rear. The most prestigious venues are conference proceedings, which means you either have to wait up to a year to attempt to get work published, or compromise on a “lower impact” venue in return for rapid publication.

    2) Journal access. Readers must deal with the awkward web interfaces offered by Springer, the IEEE, and the ACM to actually get at papers. Compare this to prola.aps.org — just go there, type in the volume and page number, and get your paper. It’s about as easy as it can get. Also, preprint services like the arXiv seem to be underutilized by the CS community as compared to physics.

    3) Excessive use of formal notation. Many times, I’ve seen CS papers devote large amounts of space to defining all sorts of complicated notation to explain something that could easily have been said in a sentence or two of plain English. (Note: I’m not criticizing the use of formal notation in proofs, where it is necessary and appropriate; I’m talking about explanations that are meant to communicate an idea rather than prove it.) I suspect this might be related to the bias towards technical complexity that Scott mentioned in his post.

    I guess I don’t understand why you guys put up with this stuff. Are there good reasons for these things that I don’t understand? After Knuth’s brilliant contributions to scientific publishing, it seems like things have stagnated in TCS communication. Is the problem that anyone in CS practical enough to fix these problems is instead working at Google? 🙂

  5. Alderman Says:

    Re: Journal access, getting a CS paper is no harder than typing the title into Google Scholar. Almost everyone posts their papers for free on their websites, so while the first link that pops up might be to ACM or something similarly annoying, looking at the other links will get you to a hassle-free PDF.

  6. Scott Says:

    Travis, a few other responses:

    Complexity theorists, at least, use the ECCC the same way physicists use the arXiv.

    At least with STOC/FOCS, you don’t have to wait a year between two opportunities to publish your work, just six months. But here’s the real key: most theorists don’t have to wait at all, since they don’t even start writing their papers until a week before deadline! (See how beautifully the system works? 🙂 )

    I tend to find physics papers as notation-laden as you find CS papers! But I agree that we could be doing a much better job of communicating in plain English that which is so communicable. (Unfortunately, it’s hard even to discuss this issue without referring to any concrete examples, but it’s hard to do that without insulting anyone…)

  7. Maleq Khan Says:

    I whole-heartedly support this movement. I am still a newbie in the TCS community, and I have already had my experiences. We got some nice results using a very simple approach, analyzed with moderately difficult probabilistic techniques. Let alone FOCS or STOC, we could not even get these results into SODA. The most annoying part was the reviewers’ comments, like “this is very simple”. Throughout the long journey (i.e., several submissions) of our paper, only one reviewer found our simple approach to be “an asset”.

    If we had gotten the same bounds using a complicated approach and hard-to-understand mathematics, I am sure FOCS/STOC would have found our results interesting. A couple of years before we tried FOCS, a paper that achieved a much worse bound, using a complicated (and inefficient) algorithm and a complex proof, was published in the FOCS proceedings.

  8. Mohammad Mahdian Says:

    I absolutely agree with your statement. Reviewers rarely consider the conceptual contributions of papers anymore. SODA has been somewhat better than STOC & FOCS in this regard.

  9. Maleq Khan Says:

    Just wanted to add a little more. Here is my view:

    More weight should be given to a simpler solution and less weight to a complicated solution. If a solution/proof is very complicated, chances are the authors missed a simpler solution, unless it is a long-standing open problem.

  10. anonymous Says:

    Why do you believe that peer review would work at identifying conceptual contributions? At least with difficult proofs, there’s often a general agreement among reviewers that there’s something non-trivial there. But I doubt reviewers would agree on what’s important conceptually. And even if they do, they could be completely wrong.

  11. Michael Mitzenmacher Says:

    Naturally, I agree with the letter — I’ve frequently said that STOC/FOCS emphasizes technical prowess over other considerations, and I support an effort to weight conceptual utility more heavily. However, I don’t think this goes far enough, in that I’d also like to see potential practical utility emphasized as another goal. And I’m not talking about the “we’ll say in the intro there might be related practical applications” sort of thing, but about giving more weight to actual evidence in a STOC/FOCS paper (via simulations, a prototype implementation, publicly available code, or reference to a second related paper where an application is explored in detail) that the work might be useful outside the theory community.

  12. Kirk Says:

    STOC and FOCS are theoretical computer science conferences. Maybe “theoretical” is a poor choice of adjective, but I believe that a basic tenet of our field is that “we do not fully understand a statement until we can prove that it’s true.” Certainly this is only a necessary condition, and often not a sufficient one. It places a heavy burden on our field, especially in producing conceptual contributions; if we have to prove things, our claims necessarily cannot be made as freely and boldly as in other areas. If we were physicists, we would have long ago mandated the SAT law: “There is no efficient algorithm for deciding satisfiability of a formula.” (In fact, Scott often advocates such a physical law.)

    This tenet “understanding requires proof” seems in tension with the conceptual contributions which Scott has defined as those which can be communicated in the form of one-liners and hallway conversations. It may be politically advantageous to walk around saying “The market cannot compute Nash equilibria! It’s a failed notion!” But I think, within our community, the statement that “2-NASH is PPAD-complete” has to be considered the (fundamental) contribution. I’m not saying that the conceptual contribution isn’t profound, or that it shouldn’t be communicated to the world in easy-to-understand terms, but I’m not sure that we on the STOC/FOCS program committees are the ones who should judge the predicted conceptual impact. This is not where our expertise lies.

    I think it’s appropriate in this discussion to bring up some examples. Here are four recent examples of “conceptual” papers that Scott may be advocating (Scott can confirm or deny).

    The Learnability of Quantum States
    Scott Aaronson
    “Physicists think tomography is hard, but it’s not.”

    The Myth of the Folk Theorem
    Christian Borgs, Jennifer Chayes, Nicole Immorlica, Adam Kalai, Vahab Mirrokni, Christos Papadimitriou
    “Economists think that finding a Nash equilibrium in repeated games is easier than in one-shot games, but it’s not.”

    Universal Semantic Communication I
    Brendan Juba, Madhu Sudan
    “Is it possible for two intelligent beings to communicate meaningfully without any common language or background? The magic 8-ball says: Reply hazy, try again.”

    Algebrization: A New Barrier in Complexity Theory
    Scott Aaronson and Avi Wigderson
    “In the last 20 years, we have used low-degree extensions to overcome relativization barriers. This can only take you so far.”

    Of these four, I would only strongly advocate the last for FOCS/STOC. Oddly enough, its conceptual contribution was already known at a hazy level amongst complexity theorists, but it’s the technical aspects of the paper that make it worth reading (because they confirm and reaffirm the haze).

    How are we supposed to judge whether physicists, economists, or philosophers will be shocked by your results or satisfied by your models? You are claiming a conceptual shift in a different community. It seems that STOC/FOCS PC members can only look to the technical aspects to see whether you are doing something deep, or simply pulling the wool over our eyes.

  13. Jeff Erickson Says:

    Agree, agree, agree. Hopefully the FOCS ’08 PC will also make a conscious effort to reward technical simplicity and conceptual novelty. (At least two of us will.) Nondeterministically trivial results should be rewarded, not punished.

    I also agree with Michael’s addendum, provided it’s not applied exclusively. Technically and/or conceptually brilliant results with no obvious practical utility are also worthwhile. The theory community cannot look only outward (or only inward!) and still remain healthy.

  14. Piotr Says:

    Hello all,

    I agree that the conceptual aspects of a paper should be appropriately valued. I also think that it is important to value the technical aspects of a paper, given that (it seems) we will need some more sophisticated tools to address the key problems in the field. Oh yes, and I also agree that the potential of practical impact should play a part in the evaluation process.

    There are several aspects of TCS research activity that have been pointed out recently as not appropriately covered. However, I believe that they are not being covered simply because the rug is too small. As one can see on these nice graphs, over the last decade the number of submissions to STOC/FOCS went up dramatically; at the same time, the number of accepted papers held steady, or even went down. I think that we are seeing the consequences of this trend.

    Given our ongoing efforts to engage non-TCS communities, the number of submissions is likely to continue to grow. So, I think we should consider reversing the low-acceptance trend.

    Best,

  15. Rahul Santhanam Says:

    I agree completely with the sentiment.

    But perhaps we should be alert to a possible side-effect of an increased emphasis on conceptual aspects. Since the “final word” on whether a contribution is conceptually significant typically takes time (years, even decades), making such a judgment for a contentious submission is going to be a very subjective matter. So maybe there will be more contentious PC meetings. Which is not necessarily a bad thing. It’s better than a situation where a resource-bounded referee is happy with checking that a work “looks non-trivial”…

  16. Louis Says:

    Totally agree.

  17. Not Even Right Says:

    Assignment of little weight to “conceptual” considerations, while assigning the dominant weight to technical considerations.

    As a physics graduate, I know very little about CS. Can you explain what you mean by the above? Does it mean, for example, that computer scientists are assigning little weight to the concept of object-oriented programming and assigning too much weight to making the next version of the MS Windows OS?

  18. Gil Kalai Says:

    Naturally, I disagree with this letter, and I find it strange for several reasons. I will elaborate a little later.

  19. Peter Morgan Says:

    Travis, re: notation. The introduction of a new, powerful notation (communication protocol?) is one of the most unsung conceptual shifts there is, and one of the hardest to do well.

    People working on the edge of knowledge frequently find existing notations inadequate, so they invent their own. The first notation in a significant area of research may well not be ideal, and we have to struggle until someone understands what is being done in a way that is conceptually different enough to make everything clearer, either showing us how to use an existing notation effectively or inventing a better notation. Then everyone gleefully uses that notation until the next time it is not quite adequate. Notation is important, and difficult to get right.

    On the main issue, theorists and practitioners will naturally agree without a fight that the other should get the money. The signatories think STOC’08 got it right, other conferences have not; others, apparently including Gil Kalai, will think differently. Enjoy! String theory did well for resources in Physics for a while, at the expense of other fields.

  20. Tal Malkin Says:

    If you support this statement and want add your name to it, please say so in the comments section!

    I do. I didn’t know that the STOC’08 PC made an effort to reverse this trend (and I don’t know whether it is only a recent trend or not), but I completely support the statement and am happy to add my name.

    Tal

  21. Stas Says:

    I fully support the statement. However, I’m not in academia, so my vote probably doesn’t count.

  22. Scott Says:

    conceptual contributions which Scott has defined as those which can be communicated in the form of one-liners and hallway conversations

    One clarification: this statement is from ten people. I didn’t write it, though I was happy to sign it.

  23. rrtucci Says:

    I fully support this statement. Although I don’t understand it, it sounds funny enough.

  24. Michael Bacon Says:

    “. . . information leaks for a pair of substantially entangled qubits are themselves substantially positively correlated.”

    “. . . in a noisy quantum computer with highly entangled qubits there will be a strong effect of error synchronization.”

    Gil,

    I just heard the above one-liners in a hallway conversation. 🙂


  25. Kurt Says:

    To echo what Stas said, I’m not in academia so this doesn’t count for much, but I fully agree with the statement. I might add that, as someone who is not at all good with technical details, either in creating them or following others’ work, it’s all about the concepts for me. Concepts rule!

  26. Mitch Says:

    Assignment of little weight to “conceptual” considerations, while assigning the dominant weight to technical considerations.

    As a physics graduate, I know very little about CS. Can you explain what you mean by the above? Does it mean, for example, that computer scientists are assigning little weight to the concept of object-oriented programming and assigning too much weight to making the next version of the MS Windows OS?

    “Conceptual” (in this context) means ‘high-level’ explanation, mostly wordy English, with only well-trodden or easily translatable jargon (like Kirk’s Nash/PPAD example).

    ‘Technical’ means notation laden, inscrutable, lengthy language. (oops, almost used ‘technical’ in the definition)

    Of course, the demarcation line changes depending on how close one is to the topic.

    As to object-oriented vs. Windows development, I imagine many TCSists plunging foot-long knitting needles into their eyes.

  27. Job Says:

    Besides Windows development, theoretical computer scientists also study how to install RAM in a computer, or how to reset the BIOS on a motherboard. At last year’s STOC, they made significant advances in the field of using Outlook to send and receive mail.

  28. Job Says:

    I’m reminded now of that great paper by Shor where he describes an optimal process for adding and removing programs in Windows in three steps or less.

  29. Job Says:

    My FOCS entry for this year involves two papers on how to shut down and restart a computer. Probably more technical than they’re looking for but what the hey.

  30. Michael Mitzenmacher Says:

    I think some people are exaggerating a key point here. A conceptual contribution doesn’t have to be (and in fact, I’d think, almost assuredly wouldn’t be) devoid of technical content. It’s just that the technical content might not be the driver for accepting the paper: the value of the paper would lie less in the technical content than in the conceptual contribution. Often a conceptual contribution can be summarized in a small number of lines, but that doesn’t mean we’d expect the paper to be a conceptual statement devoid of other content (including evidence or mathematics to back up that conceptual statement).

    Speaking for myself, I think I’ve written several papers where the concept was more interesting than the final mathematics. (I’ll avoid citing specific examples here.) And in general, I avoid sending such papers to FOCS/STOC, precisely because my impression is that FOCS/STOC overweights its judgments in favor of technical content. Also, I should point out, I don’t think FOCS/STOC has historically ignored the power of concepts; a great concept has always been compelling to theorists. It’s just that I think the bar for this type of result is very high for FOCS/STOC, in my opinion overly so.

  31. Scott Says:

    Michael #30: Exactly, and well said!

  32. Gil Kalai Says:

    Having the right balance between conceptual and technical, between different subdisciplines, between core areas and applications, between in-fashion and out-of-fashion topics, and making the evaluation of a paper take into account the difficulty, novelty, rigor, conceptual contribution, technical contribution, potential influence, etc., is the job of the referees, members, and chairs of program committees, as it is of journal editors and referees.

    Making very general rules of thumb is usually not very useful. People in the community are aware that important conceptual contributions are important (though not every paper claiming an important conceptual contribution indeed makes one), that things that look obvious after the fact are not always obvious a priori (though sometimes they are), and that short proofs can be ingenious (or not). So the call on these matters is really made one paper at a time, based on the judgment and taste of referees and program committee members.

    I do not see any problem if different conferences have different traditions and different thresholds. It is good that the bar for papers claiming important conceptual breakthroughs is high, and in any case, for such papers, keeping the standards of the technical aspects is very important. The best papers with conceptual breakthroughs were usually also very good in terms of formalism and other technical aspects. Also, good papers and good ideas can influence and prevail even if they do not appear in FOCS/STOC. These days people have many avenues to present their ideas and results. (Including hallway and blog conversations.)

    I do not see evidence that this letter points to a real current difficulty, and I am not sure it can be of service. The signatories already have substantial influence in promoting their attitudes and tastes while serving on program committees and other bodies, which makes this “petition” a little strange.

  33. Ryan Williams Says:

    I am very glad that such a statement is being proposed and supported. Add my name. To say I agree with it is an understatement; at some points I have thought precisely some of those sentences.

  34. R Says:

    As stated, this manifesto seems to be of the sort where most reasonable people will agree in principle and then it will be very hard to achieve much reasonable consensus among any meaningfully large sub-group in practice.
    If I may offer a suggestion, it would be better if you could go one step further and offer a broad set of examples of what a good conceptually influential paper is (perhaps it might be reasonable to exclude the immediate 10 year window so it is easier to evaluate influence). Then, at the very least, people can evaluate any specific paper against a broad set and ask whether it meets the criteria.
    In this context, I found the following article very enjoyable reading:
    S. Santini, “We are sorry to inform you…”, IEEE Computer, Dec. 2005
    Perhaps the authors of the manifesto might consider writing an article…

  35. Kirk Says:

    One clarification: this statement is from ten people. I didn’t write it, though I was happy to sign it.

    Are you happy to explain and/or defend it? What is the point of the following paragraph?

    Once understood, conceptual aspects tend to be viewed as obvious, which actually means that they have become fully incorporated in the worldview of the expert. This positive effect is actually a source of trouble in the evaluation process, because the evaluators forget that these contributions were not obvious at all before being made.

    Are you saying that it’s a significant problem that a paper is submitted to a conference, and already it seems obvious (before submission!) because its message has been absorbed into the collective worldview of the experts? Or is it so influential that it is incorporated during the review process such that it seems obvious at the end?

    If the petitioners believe that the STOC’08 committee (which expressed its agenda early on in the call for papers) has made beneficial steps in this direction, please point us to these conceptual papers! The proof is in the pudding, right? Why rely on platitudes and generalities when you could just point to the papers that might not have had a shot without this modified philosophy?

    I fear that you are trying to move STOC and FOCS in the direction of grant proposals. Whoever makes the biggest, baddest, most unsubstantiated claims seems like they’re doing the best work on a conceptual level.

  36. Scott Says:

    Are you happy to explain and/or defend it?

    “Kirk”: Yes, I’m happy both to explain and defend what I sign, and to sign my real name in the first place.

    The way I’d personally interpret the paragraph you quote is the following: after reading a paper or hearing a talk, people will often say, “sure, but as soon as you’ve asked the question / defined the model that way, the answer is obvious.” They recognize, but don’t sufficiently appreciate, the fact that before the paper in question no one had asked the question or defined the model that way.

    As for the four papers you mentioned, I’d prefer to leave comments on the two I (co-)authored to others. The other two do seem like good examples of conceptually-oriented papers, but I haven’t studied either enough to have a strong opinion.

  37. Kirk Says:

    sure, but as soon as you’ve asked the question / defined the model that way, the answer is obvious

    More often than not in this situation, the questioner is right and the answer is trivial. A not-so-uncommon situation: there is a hard problem X which many people have been working on, with partial and interesting progress, giving a good name to the field that studies X. Then someone comes along and modifies X to X’ so that it sounds like the same problem at a “conceptual” level, but doesn’t confront the same fundamental problem as X. Then they fairly easily solve X’. I don’t think anyone is advocating this as the new CS approach: Got a hard problem? We can design an easier one!

    Secondly, just because no one wrote a paper about it doesn’t mean they didn’t think about it. There are many researchers who restrict themselves to writing papers they feel say something significant. It is often the case that people had already considered that model, saw that some things reduced to trivialities there, and decided that those trivialities did not represent an interesting insight.

    For the record, I don’t think either of these apply, for instance, to your paper “The Learnability of Quantum States.” But I do think they apply to a majority of the situations you describe.

    There is only so much room for new models. Presumably, models have to be studied, vetted, compared. It makes sense that the number of model-defining papers should therefore be much less than the number of model-vetting papers, if indeed people are actually exploring the new models.

  38. Piotr Says:

    Hello,

    After reading (and having been enlightened by) the last few comments, I will try to reiterate a point from a few comments ago. In a nutshell: I believe that the phenomenon identified in the letter is real, and its description is pretty accurate. However, it is a symptom; we also need to identify the root cause. Otherwise, we might simply end up accepting more conceptual papers, rejecting more good technical papers (or papers with potential practical impact), and we will have a very similar discussion again a couple of years from now, just in reverse. For an example of such a “novel vs incremental” discussion in a different community, see this post.

    To address the root cause, let me make the following points:
    (1) Conceptual papers, notably new-model papers, are high-risk (and potentially high-gain) at the time of evaluation.
    (2) Technical papers, notably those solving open problems, are low-risk, and their gain can be easily assessed at eval time.
    (3) STOC/FOCS have been experiencing shrinking resources (i.e., the number of accepted-paper slots) over the last decade, in either relative or absolute terms.
    (4) In a situation involving shrinking resources, a natural (if not a rational) approach is to focus on low-risk entities.
    (5) Expanding the resources will allow incorporating more high-risk entities. Ergo, we will be able to accommodate more conceptual papers.

    In more detail:
    (1) I think that Kirk’s posts make the high-risk part pretty clear (it is often very hard to tell at eval time whether the model is realistic, expressive, and interesting enough to make it; most of them won’t). At the same time, many of the classic papers fall into the model category.
    (2) The value of a technical paper with a crisp contribution can be easily assessed at eval time (“what is the open problem?”, “where is the beef?”).
    (3) As one can see on these nice graphs, over the last decade the number of submissions to STOC/FOCS went up dramatically; at the same time, the number of accepted papers held steady, or even went down.
    (4) If the resources are limited, people naturally prefer low-risk entities to high-risk ones. E.g., as we all know too well, in any company with shrinking resources the research division is the first to go. Presumably, there are good reasons for that; I will let more economically-minded bloggers comment on it.
    (5) Immediate, assuming the correctness of the above four points.

  39. Scott Says:

    Piotr, while I can’t speak for all of the co-signers, I personally strongly agree with you. I think that too few STOC/FOCS accepts is a big part of the problem, and that expanding the number (say, from ~80 to ~100 or ~120) would go a long way toward solving it. A related proposal would be to use a “tiered” system — currently in place at other conferences like QIP — where the PC could accept a paper into one of several categories (“long” talks, “short” talks, “medium” talks…). That way, if a PC was sharply divided on a paper, it wouldn’t have to reject it outright in favor of a “safe” paper — it could just assign it a lower tier.

  40. Vijay Krishnan Says:

    I work on Machine Learning and Data Mining and can totally relate to this. I have seen multiple instances of clean and simple solutions having a hard time getting accepted on the grounds of “not adequate technical depth”, while needlessly complex methods seem to have a much easier time getting accepted.

  41. asdf Says:

    Scott’s invention of quantum anthropic selection comes to mind here, highly conceptual (“to anthropically solve any NP problem in polynomial time, guess the answer, check it, and kill yourself if it’s wrong”) and simple, yet it led immediately to the formerly very difficult theorem that PP is closed under intersection.
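
    [A minimal sketch of the anthropic trick, assuming only the standard definitions of NP and postselection. For $L \in \mathrm{NP}$ with verifier $V$ and witness length $p(n)$: on input $x$, guess $w \in \{0,1\}^{p(n)}$ uniformly and “survive” if $V(x,w)=1$; also, with probability $2^{-2p(n)}$, survive unconditionally and reject. Then

    $$\Pr[\text{accept} \mid \text{survive}] \;\ge\; \frac{2^{-p(n)}}{2^{-p(n)} + 2^{-2p(n)}} \;\ge\; \frac{2}{3} \quad \text{if } x \in L,$$

    while $\Pr[\text{accept} \mid \text{survive}] = 0$ if $x \notin L$; “killing yourself” is just the postselection. The theorem alluded to is Aaronson’s $\mathrm{PostBQP} = \mathrm{PP}$, which makes the Beigel–Reingold–Spielman theorem that PP is closed under intersection immediate, since $\mathrm{PostBQP}$ is trivially closed under intersection.]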

  42. asdf Says:

    Scott, what do you mean by ALL in “PP/rpoly = ALL”?

    You can’t really mean literally all problems, i.e., including those outside R (i.e., classically undecidable problems), can you?

  43. asdf Says:

    Oh, never mind, I get it. ALL means that for any language L you can decide whether any string of length N is in L, given what amounts to exp(N) advice bits that let you look up the answer. No magic.

  44. Scott Says:

    Right, there’s no magic, but ALL really does mean ALL in this context! Ran Raz has also shown that IP(2)/rpoly=ALL.

  45. Greg Kuperberg Says:

    asdf: That is what is meant. PP/rpoly is an abnormally powerful class that is overloaded with advice and hints. An angel supplies the verifier with advice sampled from a fixed probability distribution, and then a Merlin who knows that distribution supplies the verifier with an exponentially long list of certificates in independent trials. Output is decided by majority rule. Since the angel provides a new sample from the same distribution each time, the verifier’s behavior can depend on an exponentially long list of real numbers, namely the individual probabilities of the angel’s advice strings. In other words, the PP part of the machine class can be used for partial tomography of the angel’s probability distribution. This is enough information to compute any desired set of answers for all inputs of length n, which is to say any language at all.

    This result is more a clever remark about definitions than it is a serious theoretical breakthrough. It says that certain combinations of help to a computer can create a loophole that gives away all answers. It is important to study loopholes so that you can understand definitions, but in the end you will want to change the definitions to close the loopholes.

    Here are some comparisons to help understand the loophole. First, P/poly already contains languages that are not in R. An angel provides an advice string for each input length n, and this advice string can encode a non-recursive function of n. But since the loophole only depends on the input length, it is tolerable. P/poly is relevant to, among other things, combinatorial circuit approaches to P vs NP.

    Another comparison is to P/rpoly or BPP/rpoly. Can the machine (“Arthur”) do more with randomized advice than deterministic advice? The answer is no, rather they both equal P/poly, but the proof is not trivial. (It’s not super-hard either.) BPP/rlog is strictly bigger than BPP/log under any reasonable interpretation of the latter.

    A third comparison is with the other reading of PP/rpoly. In the result that PP/rpoly = ALL, P is modified first by adding randomized advice and second by adding a majority-rule certificate. The class could more precisely be written P(P/rpoly). You could instead define it in the other order, as (PP)/rpoly, so that the angel would have to provide the same advice (albeit randomly chosen) for all certificates. This class equals PP/poly for the same reason that P/rpoly = P/poly.
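
    [A minimal sketch of one construction giving $\mathrm{PP/rpoly} = \mathrm{ALL}$, assuming the fresh-advice-per-trial reading described above. For any language $L$, let the length-$n$ advice distribution output a pair $(y, L(y))$ with $y \in \{0,1\}^n$ uniform. On input $x$, the machine reads its sample $(y, b)$: if $y = x$ it outputs $b$; otherwise it outputs a fair coin flip. Over the joint randomness,

    $$\Pr[\text{accept}] \;=\; \frac{1}{2} + 2^{-n}\!\left(L(x) - \frac{1}{2}\right),$$

    which exceeds $1/2$ exactly when $L(x) = 1$, so the majority-rule criterion decides $L$, whatever $L$ is.]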

  46. Piotr Says:

    Scott,

    I personally strongly agree with you.

    Hmm, I thought it was illegal to use these words on blogs 🙂


    A related proposal would be to use a “tiered” system

    I concur. Many if not most CS conferences that I am aware of have some kind of tier system (talks/posters, etc.), so this has been done before. And in STOC/FOCS we already have a tier system anyway, with the SICOMP special issue selections and best paper awards.

  47. Not Even Right Says:

    “Conceptual” (in this context) means ‘high-level’ explanation, mostly wordy English, with only well-trodden or easily translatable jargon (like Kirk’s Nash/PPAD example).

    ‘Technical’ means notation laden, inscrutable, lengthy language. (oops, almost used ‘technical’ in the definition)

    Of course, the demarcation line changes depending on how close one is to the topic.

    As to object-oriented vs. Windows development, I imagine many TCSists plunging foot-long knitting needles into their eyes.

    Mitch, thanks for your explanation.

    I think “conceptual” is associated with high-level explanation. It is analogous to the statement that mass causes curvature in space and time in General Relativity, while “technical” may mean all the complex mathematics involved in General Relativity. I don’t know if I got it wrong.

    Shor and the others can join Dell/HP if they wish!

  48. Not Even Right Says:

    I fully support this statement. Although I don’t understand it, it sounds funny enough.

    Me too!

  49. Jonathan Katz Says:

    I completely agree with the sentiment of the letter, but I wonder whether anyone has any evidence suggesting that there is a problem in the first place. Even if a poll of the TCS community discovered that a large proportion felt that conceptual contributions were undervalued (and I’m not sure this would be the outcome), that wouldn’t necessarily imply a problem. (Though even false perceptions should be corrected.) So: what is the evidence that technical papers have an easier time than conceptual papers?

  50. Anon Says:

    I think that in our time (with the internets and all) a good paper, conceptual or not, doesn’t need FOCS or STOC or any other conventional venue to make its mark. If the paper makes a good point, then people will hear about it, talk about it, blog about it, etc.

  51. Alderman Says:

    FOCS and STOC are useful labels for new researchers looking to publicize their work. How do you choose which of the thousands of new papers each year to read? The way I do it is to look at conference proceedings, and also at the websites of researchers whose work I am interested in. But a new researcher won’t have anyone trawling their website, so getting their work into the FOCS/STOC conference proceedings is essential to getting it publicized.

  52. Henry Cohn Says:

    I’m a little outside the FOCS/STOC community, so perhaps I lack perspective on this issue, but I haven’t seen any compelling evidence of a problem with evaluating conceptual contributions.

    I’m sure everybody already agrees conceptual contributions are important. (In fact, using the term “conceptual contributions” guarantees this.) The only question is how to recognize when an important conceptual contribution has actually been made.

    I simply don’t believe important contributions are usually viewed as obvious afterwards. One good test case is cryptography, where there are lots of deep but simple ideas from the 80’s, such as the role of indistinguishability and simulations. These ideas are indeed very simple, but we know they aren’t “obvious” because we have to teach them explicitly to beginning students. The same is true for other conceptual contributions – experts may not talk much about them among themselves once they’ve got the idea, but they certainly spend time explaining them to students and other non-experts. The need for this explanation doesn’t come as a surprise, either. Before the need to explain the ideas ever arises, experts can generally predict which conceptual shifts will need especially careful explanation.

    The view expressed in the letter, that many experts dismiss important new ideas as obvious because they’ve quickly assimilated the ideas completely into their worldviews, seems implausible at best. Why then would someone claim an idea is obvious? Aside from the obvious explanation (some ideas genuinely are obvious), I suspect one issue is that some ideas are basically in the air. Perhaps nobody has made a careful, explicit statement in print, but the idea just won’t surprise experts. Pinning down ideas like this is a valuable contribution, and occasionally it is profoundly important in shaping a field, but one shouldn’t overemphasize its value by forgetting that these ideas were really in the air. I’m not saying this is always the case, but it’s what I take “obvious” to mean here. When an expert dismisses a conceptual contribution as obvious, it doesn’t mean the contribution was literally obvious, but rather that it was unsurprising to experts because the idea was already floating around in some form.

    So I fully agree that conceptual contributions are important and should be valued, but I don’t see the evidence that they are currently undervalued. Perhaps it’s out there – I’d be curious to see more discussion. I wish there were a tactful and non-inflammatory way to discuss specific examples. 🙂

    The part of the letter I found more compelling was the paragraph about technical simplicity as a drawback. This isn’t just a problem in the FOCS/STOC community, but more broadly among people interested in discrete mathematics. Too many people regard overcoming technical difficulties as a worthwhile goal in itself and a badge of honor, rather than a necessary evil which may or may not be justified by the worthiness of the overall goal or the broader applicability of the techniques. When a submitted paper includes no technical pyrotechnics, that is sometimes considered as indicating a deficiency in either the author or the subject matter.

    If I were trying to change the community’s evaluation standards, I’d forget about the issue of conceptual contributions and instead focus on one theme: technical simplicity is a virtue.

  53. Jonathan Vos Post Says:

    Re: #47

    The conceptual duality behind General Relativity is usually stated thus:

    “Mass tells space-time how to curve;
    space-time tells mass how to move.”

    Rather than “complex” I’d rephrase the next part of your comment thus:

    “Technical” may mean all the Tensor Calculus and Differential Geometry involved in General Relativity.

    But is this technical or conceptual? Consider the Einstein convention that repeated indices are implicitly summed over, which can greatly simplify and shorten equations involving tensors. Einstein summation was introduced by Einstein [1916, sec. 5], who later jested to a friend, “I have made a great discovery in mathematics; I have suppressed the summation sign every time that the summation must be made over an index which occurs twice…” [Kollros 1956; Pais 1982, p. 216]
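
    [For instance, assuming the usual general-relativity index conventions, the line element $ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu$ is shorthand for $ds^2 = \sum_{\mu=0}^{3}\sum_{\nu=0}^{3} g_{\mu\nu}\,dx^\mu dx^\nu$: the indices $\mu$ and $\nu$ each occur twice, so the summation signs are suppressed.]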

    References:

    Einstein, A. “Die Grundlage der allgemeinen Relativitätstheorie.” Ann. der Physik 49, 769-822, 1916.

    Kollros, L. “Albert Einstein en Suisse Souvenirs.” Helv. Phys. Acta. Supp. 4, 271-281, 1956.

    Pais, A. Subtle is the Lord: The Science and the Life of Albert Einstein. New York: Oxford University Press, p. 216, 1982.

    Weisstein, Eric W. “Einstein Summation.” From MathWorld–A Wolfram Web Resource.

  54. Greg Kuperberg Says:

    I completely agree with the sentiment of the letter, but I wonder whether anyone has any evidence suggesting that there is a problem in the first place.

    There is no question that you can sometimes gain security through obscurity by writing long papers that make your results look hard. After all, if you didn’t work on an open problem yourself, or even if you did, how can you tell how hard it is? The first impression will come from the difficulty of the proof. On the other hand, there are few if any people who actively want things to be this way. Rather, it comes from structural problems in the research endeavor. So I partly agree with Gil Kalai and with you on this one. I think that the problem is real, and that it goes far beyond TCS, but I agree that it doesn’t help to just file a protest. It’s like protesting traffic jams — does anyone actually like them?

    Here is one possibly constructive suggestion. Instead of protesting a bad trend, you could try to give more reward to janitorial papers that clean up a mess in the field, whether or not they promise any big new conceptual breakthrough. In hindsight, many of these papers have had a lot of long-term influence, sometimes with a crescendo of citations and sometimes even without them.

  55. Raoul Ohio Says:

    Greg brings up an important topic: “janitorial work”. Any area of knowledge makes much better progress when a lot of (usually unsung) work is done in organizing, refining, cleaning up, etc. Developing better notation also falls into this category.

    The gold standard in applied mathematics is “Bessel Functions” by Watson. Written in the 1920’s, still selling well today, it reads like a novel. Everything seems so obvious you think you could have worked it out on a coffee break. Check it out.

  56. Jack in Danville Says:

    Scott,

    What keeps non-academics like myself interested in your work is the conceptual content. In the bastard-stepchild world of computer applications, creating a useful application through technically simple means is sometimes called “elegant”; and, BTW, the simple solution is almost always preferred.

  57. Paul Beame Says:

    I agree with Henry that a larger concern for PCs is in rewarding technical sophistication over simplicity. Too often papers at the margins get in because of technical sophistication over any other attribute. (Technical novelty that is likely to be useful to others in the future does seem to be a legitimate reason for acceptance but technical sophistication often seems to be mistaken for technical novelty.)

    I don’t completely agree with Henry’s take on how PCs tend to treat conceptual contributions. From what I have seen, PCs are only partially successful in dealing with conceptual contributions: they seem very good at accepting papers where the conceptual contribution involves the introduction of new (classes of) research problems, but not as good in other respects. In papers that attack known problems, major conference PCs tend to treat the outcome – a better running time, a better approximation factor, a better lower bound – as the major factor, and a completely new approach that reproves known results or produces a connection between subjects, but does not yield a better quotable result, is often not given the credit it deserves. (I have seen PCs and authors bend over backwards trying to find that little sliver of improvement in some aspect of a known result in order to justify accepting a paper that has such a new approach. Imagine how many fewer such papers would be accepted if the authors couldn’t eke out such marginal improvements.)

  58. Maleq Khan Says:

    This comment is not related to Scott’s current post. But I am taking this opportunity to leave a comment.

    I was reading Scott’s lectures on Great Ideas in Theoretical Computer Science, and came across Edsger Dijkstra’s saying that “computer science has as much to do with computers as astronomy has to do with telescopes.”

    The name “Computer Science” is misleading — it suggests that everything has to do with computers. Can we change the name to something like “Computics”, or a better-sounding and more meaningful name?

  59. Daniel Golovin Says:

    Please add my name as well. I have often thought along the lines of this statement. I can offer the following anecdotal evidence that bias towards needlessly complex proofs is a real problem.
    I have had papers get borderline-rejected (with mildly favorable reviews), and afterward improved the results, simplified the proofs, and resubmitted (to venues of equivalent or lesser prestige), only to get reviewer comments that were far less favorable than before. Ideally, this should never happen. Has anyone else had this experience?

  60. Daniel Golovin Says:

    @ Maleq,
    The name “Computational Science” has been suggested, but I think we’re stuck with Computer Science for the foreseeable future.

  61. Scott Says:

    Can we change the name to something like “Computics” or a better sounding and meaningful name?

    No. 🙂

  62. Scott Says:

    I simply don’t believe important contributions are usually viewed as obvious afterwards. One good test case is cryptography, where there are lots of deep but simple ideas from the 80’s, such as the role of indistinguishability and simulations.

    Henry, that strikes me as an ironic example. Weren’t some of the great “conceptual” crypto papers from the 80’s, including GMR, famously rejected multiple times from STOC and FOCS?

  63. Scott Says:

    …it doesn’t help to just file a protest. It’s like protesting traffic jams — does anyone actually like them?

    Our hope was that, in this instance, just making PCs and the community more generally aware of the problem could already have some positive effect. Compared to traffic jams, we’re dealing with a much less “visible” problem involving a much smaller number of people.

  64. Kirk Says:

    Henry, that strikes me as an ironic example. Weren’t some of the great “conceptual” crypto papers from the 80’s, including GMR, famously rejected multiple times from STOC and FOCS?

    A (famous) anecdote does not a trend make. The good news is that a far greater number of bad ideas were rejected multiple times from STOC and FOCS, and remained bad. Oh, and lots and lots of foundations of crypto papers were accepted, as was the GMR paper eventually.

    Our hope was that, in this instance, just making PC’s and the community more generally aware of the problem could already have some positive effect. Compared to traffic jams, we’re dealing with a much less “visible” problem involving a much smaller number of people.

    I think being so generally pro-conceptual without getting more detailed is going to push the community in a bad direction, especially if certain people continue to act unilaterally to make the changes they want.

  65. Henry Cohn Says:

    Henry, that strikes me as an ironic example. Weren’t some of the great “conceptual” crypto papers from the 80’s, including GMR, famously rejected multiple times from STOC and FOCS?

    There are definitely stories about the Goldwasser-Micali-Rackoff paper (which defined interactive proofs and zero-knowledge proofs) being rejected several times from FOCS and STOC. I assume most other important crypto papers from the early 80’s didn’t face multiple rejections, since I haven’t heard such stories about them, but I don’t know.

    It would be fascinating to see the submitted versions of the paper and the referee reports on them (I wonder whether anyone still has the reports?). My guess is that the stated reason for rejection was lack of importance rather than obviousness: the referees knew the paper was making conceptual contributions but didn’t think they would lead anywhere important. That’s certainly a huge mistake, but it’s a different sort of mistake from not even recognizing the conceptual contributions.

    I’d love to understand this case better, since it seems inexplicable in hindsight. I don’t suppose anyone reading this was actually involved in the rejections?

  66. Oded Goldreich Says:

    I usually don’t read blogs and find it way too time-consuming to react to the many comments. However, at an early stage I saw Gil’s comment #18 and was puzzled (by the “naturally, I disagree” opening). I thus kept looking for the follow-up, and saw it now as #32.

    I think this comment is based on a misunderstanding of the statement and/or the positions underlying it. Surely all signers (and, I think, most of us in general) agree with what Gil says in his first two paragraphs. We would also partially agree with the 3rd (although some of us will not agree that a detected conceptual contribution should leave the technical requirements unchanged). But the real disagreement is over the last paragraph (which Gil seems to take for granted).

    Our feeling, and the reason for the statement, is that things did change and are changing in the direction of giving less weight to the conceptual aspects, thus violating the balance advocated in Gil’s 1st paragraph. Of course, we don’t advocate any rule of thumb or rigid approach/rules, but rather want to call the community’s attention to this issue and to advocate the aforementioned balance (which we feel is being violated).

    Lastly, I think that if one agrees with such sentiments and/or thoughts, then one should prefer the way of public discussion (which we chose) over pointed influence on PCs on which one happens to serve. The public way is preferable both for the sake of a wide and open discussion, and for a lasting effect.

    Oded

  67. Samuel Jackson Says:

    Kirk (Comment #64): A (famous) anecdote does not a trend make. The good news is that a far greater number of bad ideas were rejected multiple times from STOC and FOCS, and remained bad. Oh, and lots and lots of foundations of crypto papers were accepted, as was the GMR paper eventually.

    Can you define what you mean by a bad idea? I am struggling to understand this.

  68. Paul Beame Says:

    Henry and Scott: I don’t know of any of the 1980’s crypto papers other than the GMR paper that had a hard time getting in. As I recall, these papers were very well received at the time. Moreover, the GMR discussion is usually missing some key details. I was a grad student at the time and saw the paper drafts but not the referee reports. (This was before the electronic PC, so most often one got at most a line or two beyond the all-important bit.) The rejected versions of GMR had fewer authors and less content. In particular, as I recall, the notion of simulation with rewinding (not an easy notion to get right) was not part of the paper until the version that got accepted (and graph (non)isomorphism wasn’t either). On the other hand, the protocols for quadratic residuosity and non-residuosity were there from the beginning, and it was clear that there was something very interesting going on (though it wasn’t clear exactly what that was).

  69. Oded Goldreich Says:

    Just took another glance at this blog (since the window was still open…). Should get out before it is too late.

    Again, I’m commenting just on the last post (#68), which is the only one I saw.

    I regret to disagree with another friend (Paul), but my recall of history is quite different. Firstly, crypto papers did have a very hard time getting into FOCS/STOC in the early/mid 1980s. As far as I recall (not sure on this), even the “probabilistic encryption” paper of GM was not accepted on the first try. What I’m sure about is anything that happened since 1983. So, firstly, the “random function” paper [GGM] was not accepted on the first try, and the difference from the 2nd version was rather expositional. As for GMR, it was rejected three times, and with no good reason at all. The definitional treatment did not change much across the revisions; the basic concepts were there from the very first version. I vividly recall reading one of the rejected versions (I think it was the 2nd) and thinking “this is a great work”.

    So, indeed, as early as I can remember, the evaluation process often failed to recognize novel conceptual contributions. But my own feeling is that there used to be greater concern about such failures and greater effort to avoid them. My current feeling is that there is more indifference. (Of course there is a mix of both in the two periods; I’m talking of proportions…) My point here is that making errors (especially with conceptual innovations) is forgivable, while not caring is not. Trying hard and failing is forgivable; not even trying is not. Etc…

  70. NoJoy Says:

    Maleq,
    Dijkstra generally referred to it as “computing science”.

  71. Gil Says:

    These are really thought-provoking remarks. Let me only make two side comments. The issues and sentiments raised here are not specific to computer science. Look, for example, at Ariel Rubinstein’s mottos for economists:

    “I have not seen any paper in Economics which deserves more than 15 pages (probably even 10)”.

    “A paper in Economics which is not rejected should not be published”.

    Oded wrote: “I regret to disagree with another friend (Paul), but my recall of history is quite different.”

    I find academic disagreement with friends really great. (Naturally, an opportunity to disagree with 10 friends simultaneously is something I couldn’t pass up.) Also, I think that the fascinating history of computational complexity and cryptography over the last 5 decades deserves to be professionally and seriously recorded.

  72. Cynthia Says:

    With the STOC final versions due Friday, I thought the poor formatting of the title of this post had to be a LaTeX joke. Looking at the source code, though, it seems not to be (?).

  73. Kirk Says:

    Can you define what you mean by a bad idea? I am struggling to understand this.

    Crossing the street without looking both ways, invading and occupying a Middle Eastern country without an exit strategy, and overgenerally modeling a problem so that you can prove it’s PSPACE-complete.

  74. Gil Says:

    More to Oded’s points. Personally, I like conceptual ideas which can be communicated in the form of one-liners in hallway conversations (like those Michael #24 very kindly mentioned), and I like simple proofs. Finding simple proofs (or failing to find complicated ones) represents most of my own work, and as much as I value hard-nosed, head-on problem-solving, with age I find myself more interested than before in foundational and conceptual matters. (Also, I like the STOC’08 committee very much.)

    But I admire technical strength and tour-de-force proofs. Yes, overcoming technical difficulties is a worthwhile goal in itself and a badge of honor. One clear trend that we do see is that many (but not all) very important (also conceptually important) results in theoretical CS are mathematically difficult and deep. The desire and ability to crack difficult problems, whatever it takes, is very valuable. Sometimes, and more often than 20 years ago, it takes extreme mathematical effort.

    Coming back to an example we already discussed: introducing the concept of smoothed analysis was already a very nice conceptual achievement. But what made Spielman and Teng’s paper great was the definitive result that accompanied this definition, which was (and still is, after various simplifications) quite difficult.

    Identifying an important conceptual contribution was and is difficult, in theoretical computer science as in other areas. I do not see evidence that the balance has been violated. Maybe when so many papers are mathematically difficult, people appreciate mathematically easy papers less. This is not good. It is also possible that as the subject matures, important novel conceptual ideas are harder to find. As many people remarked, we simply do not see evidence beyond the sentiments. (But as Henry noted, pointing to evidence can be quite problematic.)

    Even if there is a genuine problem, the letter is not just an invitation to a public discussion (which is welcome), but a petition-style call for a policy shift. Since the letter mentions recent awareness of the issue by some PCs, it is not clear at all that such a call is necessary. But even if it is necessary, a call for an “organized” policy shift is very problematic, both for the specific case at hand and for other cases down the road.

  75. Anonymous 101010 Says:

    Aside from the few anecdotal examples that have been mentioned, how do we know that the problem is in fact that PCs are favoring technical content over conceptual content, rather than that people are *working* more on technical results than on elegant, conceptual results?

    Put another way: is this letter really just a call for people to stop spending time improving a loglog factor to a logloglog factor that requires 20 pages of proof, and start spending more time on more elegant and more conceptual results? (If so, I agree wholeheartedly, not that that matters much, being anonymous.)

    It has been suggested before that having biannual conferences be the main driver of a field leads to this effect, in which case it seems there is little PCs could do to alter it…

  76. Anonymous 101011 Says:

    this letter really just a call for people to stop spending time improving…

    More than likely, this is actually in response to a handful of specific papers (one could try to guess which ones they are) that were not accepted over the past few years.

  77. Luca Says:

    I suspect this is all a convoluted way for the signers to state their support for Barack Obama over Hillary Clinton.

  78. Greg Kuperberg Says:

    The more that I consider it, the more uncertain this proposal seems. Again, I totally agree with the sentiment. However, it brings to mind the cautionary tale of the ill-considered paper on “theoretical mathematics” by Jaffe and Quinn, published in the AMS Bulletin. They meant to express another familiar concern about style of research: that mathematics had departed too far from rigor. They wanted to characterize mathematics papers as either “experimental”, meaning that you unearth tangible evidence by proving theorems, or “theoretical”, meaning that you offer mathematical predictions without proving them. Unfortunately, they named names. It was clear that nobody would want to be a theoretical mathematician on their terms. You could easily read resentment or even hypocrisy in their thesis. (I do not mean that either in an ad hominem way. I think that they mainly wanted to be provocative, and they succeeded too well.) Although I would not have wanted to be an author of that paper, it did elicit more than a dozen interesting responses from luminaries, including one from Witten.

    Maybe the cleverest response was from Rene Thom, and I’m going to adapt his idea to this discussion. If you’re concerned about technicalities pushing out concepts in TCS, you could label them with symbols to tell which is which. If a paper is meant to have conceptual virtue, you could label it with an oval representing a saint’s halo. On the other hand, if it is meant as a technical effort, then you could label it with a mechanic’s wrench.

    I would label my papers with the wrench.

  79. Salil Vadhan Says:

    As a signer of the statement, I’d like to give some of my own reactions to the discussion (which are not meant to reflect the views of the other signers).

    First, the points made by Jon, Gil, and others are well-taken. We do not have concrete evidence (e.g., statistics) that conceptual papers are having a harder time; rather, these are our impressions as members of the community — attending the conferences, serving on PCs, and writing papers. Indeed, it is not clear what would constitute convincing evidence; both the meaning of “conceptual” and the appropriate balance between conceptual and technical are subjective matters that are ultimately best left to individual scientific judgement. Thus, the statement does not attempt to specify either of these things. Personally, I feel that the mere process of reflecting on these issues will be beneficial for the community. We probably should have been clearer in the statement that our intent was to initiate such a reflection, not to call for any official policy shift.

    I agree that the increasing technical sophistication and mathematical depth we see in our field is something to be welcomed, and I do not interpret the statement as saying that it should be valued any less (or that people should do less of it). But, like others, my sense is that this trend is making it increasingly difficult for PCs to accept papers that are more conceptual and/or technically simple, since such papers seem “riskier” than papers that make technical progress on well-established problems.

    However, I disagree with the solution that people have proposed to this problem — to increase the number of papers accepted at STOC/FOCS so as to make their PCs less risk-averse. I feel that this will largely eliminate the benefits of having a TCS-wide conference, which is the exciting cross-fertilization of ideas between areas. A significantly larger program (particularly if it involves parallel sessions) will mean that most people will hear many fewer talks outside their own specialities.

    My own view is that we should get over the idea that all excellent papers in TCS need to appear at STOC/FOCS. Some are better suited to more specialized conferences, not because of their quality but because of the audience that will benefit most from hearing about the work. (And I think it is good that our specialized conferences have started to receive some really strong papers.) To me, thinking in terms of the audience criterion would seem to push STOC/FOCS in a more conceptual direction, since such papers tend to be of broader interest. But of course this is a matter of balance; e.g., major developments on important open problems are certainly of general interest, and we also benefit from an exchange of mathematical techniques between areas.

  80. Peter Shor Says:

    I have a somewhat different complaint with STOC/FOCS, although related.

    By refusing to get larger, STOC and FOCS are gradually shedding areas to specialized conferences. SODA has already reached equal prestige for algorithms, and if it becomes the premier conference for algorithms, and CCC becomes the premier conference for complexity, STOC/FOCS is going to be the place for the holes that have been left in the field when all the interesting coherent bits have been removed.

    I remember when the Computational Geometry conference was founded. At first it just took the overflow from STOC/FOCS. But after a few years, when a CG paper had only a small chance of getting into STOC/FOCS, when the CG papers they did take weren’t necessarily the best ones (for lack of sufficient expertise on the program committee), and when you’d see only a handful of papers you were interested in and few of the people you wanted to talk to at STOC/FOCS, the CG community left STOC/FOCS en masse.

    This wasn’t that bad, since CG isn’t a central area of theoretical computer science. But algorithms and complexity are, and if STOC/FOCS loses them, is what’s left really viable?

  81. Anonymous Prime Says:

    There is little doubt that the computational geometry community has split off from STOC/FOCS, and that SoCG is their premier conference. It’s a viable claim that SoCG is as prestigious for CG papers as STOC/FOCS.

    The same claim is NOT true of SODA, and I never find anyone who is willing to make that claim in person at the conference. The best 20 papers at SODA are very good, and would not be out of place at STOC/FOCS, but the middle (and tail end) of SODA is still full of mediocrity.

  82. Snake Pliskin Says:

    The best 20 papers at SODA are very good, and would not be out of place at STOC/FOCS, but the middle (and tail end) of SODA is still full of mediocrity.

    What is your standard for mediocrity? Papers in areas you don’t follow, or simply “papers that don’t go to STOC/FOCS”?

    What, pray tell, are these mediocre works cluttering up SODA?

  83. Anonymous Prime Says:

    Do you want an exhaustive list, or a general description? Papers that piddle around with minor variants of (solved) problems that have been studied for 20 years, using the same analysis techniques we’ve seen for 20-40 years, with perhaps a made-up example of an application to which they will never be applied. Their goal is not understanding or pushing any boundaries, but recapitulation; often the goal is actually just to have another SODA paper. There are many people who can write these “stock SODA papers” consistently forever.

    I can even give you a recipe. Take your favorite problem, and add some constraint so that the standard analyses (which you are an expert on and know by heart) don’t directly apply. Justify that this added constraint makes the problem more practically applicable. Now, being a very clever person, use a combination of double inductions, charging arguments, randomized rounding, and estimation of binomial coefficients to analyze your new algorithm; the analysis should generally be in the same style and philosophy as for the known variants, but with more technical flair. Write, submit, and repeat.

  84. Peter Shor Says:

    I think SODA is doing this exactly right. If you want to have a conference that serves a viable community in algorithms, you have to take enough papers so that somebody going to the conference who is interested in algorithms finds enough papers to make it worth his while. You can’t do that by just taking the top 10 algorithms papers that are submitted. You have to take some papers which aren’t quite as good. I would claim that they’re still a lot better than mediocre, but this depends on where you set your cutoff for mediocre.

    I know people who would rather submit their top algorithms results to SODA than to STOC and FOCS, and judging by what happened in Computational Geometry, this may be the first step in a slow decline. The tipping point was when the top CG papers started to go to SoCG because at the STOC/FOCS conferences, there were too few people who were interested in listening to them and too few other good CG papers they wanted to hear about.

    On the other hand, there are still quite a few really good algorithms papers at STOC this year, so maybe my forecast is completely wrong.

  85. Job Says:

    I was going to write a mediocre paper by myself, but now I think I’ll just follow Anonymous Prime’s approach. Do you have any other tips? I’m going for something in the bad-to-mediocre range, and your expert knowledge would be much appreciated; I’m currently stuck at plain awful.

  86. Barbara Terhal Says:

    Let me add another comment related to the acceptance rate of STOC/FOCS. To make my case, I simplify matters somewhat. I believe that each year only very few truly outstanding papers appear (fewer than 10, say). Then there is a bulk of good-quality papers, which is larger than the number of papers that FOCS/STOC is willing to accept. Then there are the papers at the ‘bottom’, about which it is not so hard to decide that they should be rejected.

    The problem is with the good papers. These differ largely by subject matter, how technically intricate or conceptual they are, how understandable they are for a broad TCS audience, etc. The inevitable rejection of papers in this set cannot always be well argued, which results in referee reports like Scott’s caricature FOCS’36 report. This forced cut-off leaves room for more subjective influences, such as the taste of the PC, currently fashionable trends in TCS, and an emphasis on technical content, in addition to feeding indignation among submitters that their high-quality work, on par with other work that did get accepted, did not make it. This, I believe, is the core problem. One effect is that high-quality papers on the ‘periphery’ move out to different conferences, which is not so bad.

  87. Chandra Chekuri Says:

    Some of my sentiments are reflected in the posts by Peter Shor and Barbara Terhal. The selective conference system has worked quite well to highlight the best work, but one may actually question whether that is in fact the purpose of conferences. Would the really top papers go unnoticed if they did not appear in STOC/FOCS? I wouldn’t think so. A substantial amount of energy is spent on disambiguating roughly equivalent papers; this is noisy, leads to unhappiness with the system (for real pragmatic reasons), and leads to some areas branching out, never to return in any meaningful way. Salil argues for small programs to foster a TCS-wide exchange of ideas. However, how many TCS-wide people other than the regulars attend STOC/FOCS when they don’t have a paper accepted? Also, one could argue that we should be hearing diverse ideas from non-regular STOC/FOCS attendees.

    Simply increasing the number of accepted papers may not be the ideal fix. The more relevant question to debate is the role of conferences in the dissemination and exchange of ideas, especially in the internet age, and what specific role the community would want STOC/FOCS to play in relation to other conferences.

  88. Piotr Says:

    I planned a longer reply to Salil’s post, but Peter, Barbara and Chandra effectively captured most of what I wanted to say. So in short: I agree that there are certain advantages of single-track meetings. However, there are several good reasons why larger meetings can do a better job on the exchange-of-ideas front, not worse. As mentioned earlier, those reasons include: (1) higher attendance: multi-track meetings tend to attract more people, so there are more people to talk to; (2) diversity: more papers/topics are present. And as per my earlier comments, there is (3) greater receptiveness to “high-risk” ideas, which do not exactly fit the mold.

  89. pq Says:

    The only technical contributions that make a lasting impact are those that contain new ideas. Important ideas can be technical. Sometimes, after a while, they are distilled into one-line “conceptual” ideas, and sometimes they really are ideas of a technical nature. There is no technical breakthrough without new ideas. I love long arguments, and also short arguments, but only if they have something really new in them. Saying that a paper is only “technical” is an easy way to dismiss it. Papers should be evaluated based on what’s in them, and good refereeing involves evaluating the new ideas in technical results. We should insist that PCs adhere to this standard, rather than distinguishing “technical” from “conceptual”, which simply misses the point. I have seen too many TCS researchers not read papers because they are technical or “heavy on notation”, which is a good way to miss great ideas. The reality is that as the field matures, many more papers will require more work from the readers, since the new ideas will be contained in the technical lemmas and appendices.

  90. Salil Vadhan Says:

    I find Piotr and Chandra’s comments about how larger, parallel-session conferences might promote the exchange of ideas across areas to be thought-provoking. (I find the argument about noisy decisions less compelling – there is going to be arbitrariness around any threshold, and it will always annoy some people. Re risk-aversion, I agree a larger program could help, but I think this can also be achieved by the community remaining alert to the issue and people accepting the idea that not all strong papers need to appear in FOCS/STOC.)

    To me, the question then is whether cross-area fertilization is better achieved by a graph that is fat and shallow (e.g., very broad, parallel-session conferences with many attendees) or by one that is deeper and narrower (e.g., smaller, single-session conferences that might span fewer topics but have more intellectual coherence).

    For me (as an attendee), the latter model works better. Consider, for example, cryptography – which was an area that seemed to have some risk of splintering into a FOCS/STOC crypto community and a CRYPTO/EUROCRYPT crypto community. However, a few years ago, a “bridging conference” TCC was introduced, and it seems to have worked wonderfully in this role. Through it, the FOCS/STOC-oriented crypto researchers bring ideas from complexity and algorithms to the CRYPTO/EUROCRYPT crypto community, and other sorts of ideas (eg ones motivated more by practice) get transferred in the other direction.

    Finally re Peter’s comments: personally I don’t see any risk of the complexity theory community splintering from STOC/FOCS in the near future, even as CCC is becoming a stronger conference.

    [I’m about to leave for travel, so apologies in advance for not responding to further discussion.]

  91. Paul Beame Says:

    I’ll be an incrementalist. My general feeling is that it would be good to accept a few more papers at FOCS/STOC on average. Acceptance rates have varied dramatically over the years, but it currently seems that the rates are low by historical standards. (ACM publishes the acceptance rates for its conferences by year in the Digital Library, so one can see the rates for STOC. 1999 is missing, but the rate that year was 53%; maybe it is so embarrassingly high that it isn’t listed :). Historically, FOCS/STOC have had substantially higher acceptance rates than most strong CS conferences. The long-term average for STOC is around 33%.) One caveat is that my sense that more papers should be accepted is not based on personal knowledge of the whole list of submissions – I don’t know whether STOC/FOCS are suddenly getting substantially more junky submissions than in the past.

    What do I mean by ‘a few more’ papers? There is a hard upper bound in the STOC/FOCS format that I would not want to exceed. 3 days with 2 parallel sessions and no plenary talks for award papers or invited speakers can accommodate up to 96 talks using 8:30-6:00pm days. Every invited plenary speaker cuts the number of papers by at least 4, every plenary award talk cuts it by 1. Going beyond this number would give us a vastly reduced set of options for conference venues (as SODA has).
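
    A minimal sketch of this capacity arithmetic (the 25-minute regular slots and double-length invited plenaries below are illustrative assumptions of mine, not details from the comment):

        # Toy capacity model for a 3-day, 2-parallel-track STOC/FOCS program.
        # Assumed: 16 regular 25-minute talk slots per track per day fit in an
        # 8:30am-6:00pm day once lunch and coffee breaks are subtracted.

        DAYS = 3
        TRACKS = 2
        SLOTS_PER_TRACK_PER_DAY = 16

        def accepted_paper_capacity(invited_plenaries=0, award_talks=0):
            slots = DAYS * TRACKS * SLOTS_PER_TRACK_PER_DAY  # 96 regular talks
            # An invited plenary fills a double-length slot and halts both
            # tracks, displacing 2 slots x 2 tracks = 4 regular talks.
            slots -= 4 * invited_plenaries
            # A plenary award talk halts both tracks for one slot (-2 talks)
            # but is itself an accepted paper (+1): a net loss of 1.
            slots -= 1 * award_talks
            return slots

        print(accepted_paper_capacity())      # 96, the hard upper bound
        print(accepted_paper_capacity(2, 2))  # 96 - 8 - 2 = 86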

    Would this ‘solve’ the problem? I doubt it. I just think that a slightly higher number would be a bit healthier for the community overall and would not compromise the feeling or quality of the conferences.

    Finally, there is something I’ve seen in the sociology of PCs. At some point, people who have worked hard reading papers for a couple of months and have been in a room together for a couple of days get tired of making decisions. They either get very permissive or get grumpy, hence the widely varying acceptance rates. Of course, the hard upper bound is still enforced in the first case but has no impact in the second. It would be good if PCs were conscious of these tendencies and explicitly tried to avoid them.

  92. pq Says:

    I think that Paul Beame’s suggestions are great!

  93. Alef Says:

    It seems life in the math community is more relaxed and open, without the conference system and its pros and cons. The TCS field is already mature, diverse, and large enough to consider dropping the conference-proceedings system and moving to a more pluralistic structure without authoritative, fashion-setting twice-yearly meetings, as in math, physics, and, I guess, other fields.

  94. Piotr Says:

    In response to Salil’s post: it is good to know examples of how connections between areas can be made stronger by properly designed small conferences. At the same time, I do not see how this approach would work for the connections that I personally treasure most: the unexpected connections. The best way to generate them (at least, as far as I know) requires getting a diverse set of people under one roof. Moreover, such discoveries are often stimulated not by intellectual coherence, but incoherence (e.g., when different notions are used to describe the same objects, when different people value different aspects of an algorithm or a construction, or when people differ on basic assumptions). If we try to make our conferences “sufficiently coherent”, I believe we will reduce our ability to make the long-distance connections, an ability that is even more important these days, when we try to engage areas outside of CS.

    Also, I agree with Paul: incrementality is the way to go; there is no point in rocking the boat too much.

  95. Jonathan Vos Post Says:

    I agree with Piotr: “… to generate [unexpected connections] (at least, as far as I know) requires getting a diverse set of people under one roof. Moreover, such discoveries are often stimulated not by intellectual coherence, but incoherence…”

    That was the joy of the Artificial Intelligence conferences (AAAI and IJCAI) of the 1970s, the Artificial Life conferences at the Santa Fe Institute, the invitation-only Hackers conferences in Silicon Valley, and the International Conference on Complex Systems series run by NECSI.

    In fact, dipping back further, that’s how Cybernetics and Artificial Intelligence were born as disciplines.

    Or maybe better to consider early meetings of The Royal Society.

    Let Aleph-null flowers bloom!

  96. Gil Says:

    I am not sure that STOC/FOCS should become even more inclusive and accommodate even wider areas. There can be other conferences and other means of scientific exchange for finding unexpected connections and bridges between incoherent approaches.

  97. Mikkel Thorup Says:

    Let me be the negative one, trying to disagree with the above letter, the blog post on Turing, and the congratulations to the STOC’08 PC (which surprised me by picking out the one of my submissions least likely to have lasting impact on the real world).

    All this stuff in the blog is cleverly written so that everything said is vacuously true. Obviously I agree that for any problem we want the simplest possible solution, and that it would have been a mistake to reject Turing’s article. The question is: what conclusions should we draw?

    Should we all be posing new computational models and problems with a couple of simple algorithms, in the hope that it magically turns out a new Turing (Award)?

    I was myself extremely impressed when I read some of Turing’s work. He did not just present a simple new model; he made a very impressive philosophical argument for the far-reaching power of his simple model. If Turing had not provided such a convincing argument, then I think a PC could rightly have been justified in rejecting his paper, for it is the author’s job to convince the PC of the lasting merits of a contribution. My guess is that Turing only produced such strong support for his model because he knew that getting a new model or problem accepted by the scientific community required substantial support, and I think the support he produced is a main reason why he had such impact.

    Ultimately, I think TCS should be dealing with problems and issues of lasting relevance. Broadening the field by identifying new such problems is of course extremely important. However, I think there is too much game-playing out there, and I think many of those games are based on assumptions that will never come true. Anything can claim to be a possible direction of the future, but the future will not go in every possible direction. One can always maintain that if clever people work on a problem, then even if the problem proves irrelevant, the ideas may later find relevant applications elsewhere; but I do think we need more focus than that.

    I would hope that when a STOC/FOCS PC accepts a paper based on a brand-new model/problem, it is because they truly believe it has a lasting relevance worth working on, the intent being to accept later technical progress on the problem.

    Finally, we have classic problems out there that have already proved their continued relevance over a long period of time, and hence are known to be worth serious research. Of course, progress on these problems is hard to come by. Simple general ideas have already been exhausted, and then progress often depends on a deeper understanding of the nature of the concrete problem at hand. This is the point where some superficial referees start objecting that the methods are not general. Superficial referees will also complain if the solution looks standard, not realizing that if the problem has been open for a long time, it is because it is hard to figure out how to apply standard techniques. A beautiful example is the work on the k-median problem, where it took a long time before the first constant-factor LP-based approximation was found, and even longer to find out how to apply other standard techniques like “primal-dual” and “local search”. While the progress has gone through many complicated solutions, the end result is a good understanding of the problem, with a multiplicity of simple solutions achieving good approximation factors, not too far from the lower bounds. That’s a good theory story.

    A great non-trivial concept story is that of NP-completeness, which has spread far outside our own community. What makes this story particularly interesting is that NP-completeness is NOT a simple concept: it is quite hard to explain to an outsider what NP-completeness really means. However, the theory community managed to prove that all kinds of relevant problems were NP-complete, demonstrating the ubiquity of the concept. It is this hard-earned, demonstrated power that has carried NP-completeness out to the rest of the world. Looking at neighboring physics, we see plenty of hard-to-grasp concepts of amazing power.

    What I am trying to say is that I think the good theory stories are those that involve substantial amounts of work, eventually converging on a deeper understanding, possibly even finding simple solutions once things have been properly digested. That’s what makes me proud to be a theoretician.

    I think it is fine to have some slots at STOC/FOCS devoted to new problems and models broadening the field, but not at a rate that sacrifices the depth of our field. If we aim at a depth of d (number of papers per problem), then, generally speaking, we should devote only a fraction 1/d of the slots to such starters: in steady state, each accepted starter should be followed by about d-1 papers of technical progress, so starters can occupy only about 1/d of the program. I prefer d > 2, i.e., fewer than half the slots for starters.

  98. samuel jackson Says:

    There was a related, interesting upheaval in the graphics community when Michael Ashikhmin left the field. All the bigwigs in the SIGGRAPH community weighed in at this forum (now mildly spam-infested). Maybe we could learn some lessons.

  99. Boaz Barak Says:

    I was just pointed to this letter. I like conceptual papers but also technical ones, and, like Gil, am somewhat opposed to general rules; I prefer to leave such decisions to case-by-case judgement in the hands of good people.

    I think the underlying issue may be that it is becoming harder to review papers, because they either cover topics beyond the PC’s expertise or are technically harder. (Though that’s just my intuition – I have no evidence for it.) If this is the case, then PCs are bound to make mistakes, and whether you favor conceptual, technical, or practical papers, you’ll probably feel that the mistakes made hurt your favorite kind more than others.

    I don’t have any good solution, though. A larger PC may be hard to assemble, and also harder to manage. Maybe a two-tier PC? (Where a second-tier reviewer only needs to commit in advance to reviewing 3-5 papers or so, something people could do on a more or less regular basis.)

    I am not sure a larger conference is a good idea, though I am not sure it’s a bad one either. The one thing I know is that STOC/FOCS are pretty good conferences, so any change to them should be incremental, not revolutionary.

  100. Anonymous`` Says:

    The problem is not about technical vs. conceptual. There are useless conceptual papers with lots of technicality: one defines something and proves results about it. I don’t think such papers increase our knowledge. Both conceptual and technical papers should answer why the paper is worth reading. But because of the pressure on faculty to publish, many publish papers just to have published something. Rejecting such a conceptual paper is much easier than rejecting a highly technical one, and this difference creates the bias in accepted conference papers.

  101. Curious Says:

    Anonymous #100 wrote:
    “One defines something and proves results about it. I don’t think such papers increase our knowledge. Both conceptual and technical papers should answer why the paper is worth reading.”

    This would place an unrealistic burden on the author(s). (If they had all the answers, it wouldn’t be called research.) The problem is knowing when something they have defined or discovered is “worth” anything. For example, when imaginary numbers were discovered (or invented, depending on your POV), were the ramifications and applications known? One can think of other mathematical concepts as well.

  102. Errant Browser Says:

    The conceptual vs. technical debate is of course ubiquitous, not just in TOC. A nice experiment that passed this hurdle relates how the ‘complexity’ of a Boolean function affects its learnability — in actual human subjects.

    See the following article:
    http://www.nature.com/nature/journal/v407/n6804/abs/407630a0.html