Edging in: the biggest science news of 2015

For years, I was forced to endure life with my nose up against the glass of the Annual Edge Question.  What are you optimistic about?  Ooh! ooh! Call on me!  I’m optimistic about someday being able to prove my pessimistic beliefs (like P≠NP).  How is the Internet changing the way you think?  Ooh, ooh! I know! Google and MathOverflow are saving me from having to think at all!  So then why are they only asking Steven Pinker, Freeman Dyson, Richard Dawkins, David Deutsch, and some random other people like that?

But all that has changed.  This year, I was invited to participate in Edge for the first time.  So, OK, here’s the question:

What do you consider the most interesting recent [scientific] news?  What makes it important?

My response is here.  I wasn’t in love with the question, because of what I saw as an inherent ambiguity in it: the news that’s most interesting to me, that I have a comparative advantage in talking about, and that people probably want to hear me talk about (e.g., progress in quantum computing), is not necessarily what I’d regard as the most important in any objective sense (e.g., climate change).  So, I decided to write my answer precisely about my internal tension over what I should consider most interesting: should it be the recent progress by John Martinis and others toward building a quantum computer?  Or should it be the melting glaciers, or something else that I’m confident will affect the future of the world?  Or possibly the mainstream attention now being paid to the AI-risk movement?  But if I really want to nerd out, then why not Babai’s graph isomorphism algorithm?  Or if I actually want to be honest about what excited me, then why not the superquadratic separations between classical and quantum query complexities for a total Boolean function, by Ambainis et al. and my student Shalev Ben-David?  On the other hand, how can I justify even caring about such things while the glaciers are melting?

So, yeah, my response tries to meditate on all those things.  My original title was “How nerdy do you want it?,” but John Brockman of Edge had me change it to something blander (“How widely should we draw the circle?”), and made a bunch of other changes from my usual style.  Initially I chafed at having an editor for what basically amounted to a blog post; on the other hand, I’m sure I would’ve gotten in trouble much less often on this blog had I had someone to filter my words for me.

Anyway, of course I wasn’t the only person to write about the climate crisis.  Robert Trivers, Laurence Smith, and Milford Wolpoff all wrote about it as well (Trivers most chillingly and concisely), while Max Tegmark wrote about the mainstreaming of AI risk.  John Naughton even wrote about Babai’s graph isomorphism breakthrough (though he seems unaware that the existing GI algorithms were already extremely fast in practice, and therefore makes misleading claims about the new algorithm’s practical applications).  Unsurprisingly, no one else wrote about breakthroughs in quantum query complexity: you’ll need to go to my essay for that!  A bit more surprisingly, no one besides me wrote about progress in quantum computing at all (if we don’t count the loophole-free Bell test).

Anyway, on reflection, 2015 actually was a pretty awesome year for science, no matter how nerdy you want it or how widely you draw the circle.  Here are some of the other advances that I easily could’ve written about but didn’t.

I’ve now read all (more or less) of this year’s Edge responses.  Even though some of the respondents pushed personal hobbyhorses as I’d feared, I was impressed by how easy it was to discern themes: advances that kept cropping up in one answer after another and that one might therefore guess are actually important (or at least, are currently perceived to be important).

Probably at the top of the list was a new gene-editing technique called CRISPR: Randolph Nesse, Paul Dolan, Eric Topol, Mark Pagel, and Stuart Firestein, among others, all wrote about this, and about its implications for creating designer humans.

Also widely discussed was the discovery that most psychology studies fail to replicate (I’d long assumed as much, but apparently this was big news in psychology!): Nicholas Humphrey, Stephen Kosslyn, Jonathan Schooler, Ellen Winner, Judith Rich Harris, and Philip Tetlock all wrote about that.

Then there was the Pluto flyby, which Juan Enriquez, Roger Highfield, and Nicholas Christakis all wrote about.  (As Christakis, Master of Silliman College at Yale, was so recently a victim of a social-justice mob, I found it moving how he simply ignored those baying for his head and turned his attention heavenward in his Edge answer.)

Then there was progress in deep learning, including Google’s Deep Dream (those images of dogs in nebulae that filled your Facebook wall) and DeepMind’s Atari-playing program (which taught itself to play dozens of classic video games).  Steve Omohundro, Andy Clark, Jamshed Bharucha, Kevin Kelly, David Dalrymple, and Alexander Wissner-Gross all wrote about different aspects of this story.

And recent progress in SETI, which Yuri Milner (who’s given $100 million for it) and Mario Livio wrote about.

Unsurprisingly, a bunch of high-energy physicists wrote about high-energy physics at the LHC: how the Higgs boson was found (still news?), how nothing other than the Higgs boson was found (the biggest news?), but how there’s now the slightest hint of a new particle at 750 GeV.  See Lee Smolin, Garrett Lisi, Sean Carroll, and Sarah Demers.

Finally, way out on the Pareto frontier of importance and disgustingness was the recently-discovered therapeutic value of transplanting one person’s poop into another person’s intestines, which Joichi Ito, Pamela Rosenkranz, and Alan Alda all wrote about (it also, predictably, featured in a recent South Park episode).

Without further ado, here are 27 other answers that struck me in one way or another:

  • Steven Pinker on happy, happy news: things are getting better (and we can measure it)
  • Freeman Dyson on the Dragonfly astronomical observatory
  • Jonathan Haidt on how prejudice against people of differing political opinions was discovered to have surpassed racial, gender, and religious prejudice
  • S. Abbas Raza on Piketty’s r>g
  • Rebecca Newberger Goldstein, thoughtful as usual, on the recent study that said it’s too simple to say female participation is lower in STEM fields—rather, female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort
  • Bill Joy on recent advances in reducing CO2 emissions
  • Paul Steinhardt on recent observations saying that, not only were the previous “B-modes from inflation” just galactic dust, but there are no real B-modes to within the current detection limits, and this poses a problem for inflation (I hadn’t heard about this last part)
  • Aubrey de Grey on new antibiotics that are grown in the soil rather than in lab cultures
  • John Tooby on the evolutionary rationale for germline engineering
  • W. Tecumseh Fitch on the coming reality of the “Jurassic Park program” (bringing back extinct species through DNA splicing—though probably not dinosaurs, whose DNA is too degraded)
  • Keith Devlin on the new prospect of using massive datasets (from MOOCs, for example) to actually figure out how students learn
  • Richard Muller on how air pollution in China has become one of the world’s worst problems (imagine every child in Beijing being force-fed two packs of cigarettes per day)
  • Ara Norenzayan on the demographic trends in religious belief
  • James Croak on amazing advances in battery technology (which were news to me)
  • Buddhini Samarasinghe on (among other things) the power of aspirin to possibly prevent cancer
  • Todd Sacktor on a new treatment for Parkinson’s
  • Charles Seife on the imminent availability of data about pretty much everything in our lives
  • Susan Blackmore on “that dress” and what it revealed about the human visual system
  • Brian Keating on experiments that should soon tell us the neutrinos’ masses (again, I hadn’t heard about these)
  • Michael McCullough on something called “reproductive religiosity theory,” which posits that the central purpose of religions is to enforce social norms around mating and reproduction (for what it’s worth, I’d always regarded that as obvious; it’s even expounded in the last chapter of Quantum Computing Since Democritus)
  • Greg Cochran on the origin of Europeans
  • David Buss on the “mating crisis among educated women”
  • Ed Regis on how high-fat diets are better (except, isn’t this the principle behind Atkins, and isn’t this pretty old news by now?)
  • Melanie Swan on blockchain-based cryptography, such as Bitcoin (though it wasn’t entirely clear to me what point Swan was making about it)
  • Paul Davies on LIGO getting ready to detect its first gravitational waves
  • Samuel Arbesman on how weather prediction has gotten steadily better (rendering our culture’s jokes about the perpetually-wrong weatherman outdated, with hardly anyone noticing)
  • Alison Gopnik on how the ubiquity of touchscreen devices like the iPad means that toddlers can now master computers, and this is something genuinely new under the sun (I can testify from personal experience that she’s onto something)

Then there were three answers for which the “progress” being celebrated seemed to me to be progress racing ever faster into WrongVille:

  • Frank Tipler on how one can conclude a priori that there must be a Big Crunch to our future (and hence, the arena for Tiplerian theology) in order to prevent the black hole information paradox from arising, all recent cosmological evidence to the contrary be damned.
  • Ross Anderson on an exciting conference whose participants aim to replace quantum mechanics with local realistic theories.  (Anderson, in particular, is totally wrong that you can get Bell inequality violation from “a combination of local action and global correlation,” unless the global correlation goes as far as a ’t Hooft-like superdeterministic conspiracy.)
  • Gordon Kane on how the big news is that the LHC should soon see superparticles.  (This would actually be fine, except that Kane omits the crucial context: that he’s been predicting superparticles just around the corner again and again for the past twenty years, and they’ve never shown up.)

Finally, two responses by old friends that amused me.  The science-fiction writer Rudy Rucker only just became aware of the 1998 discovery of dark energy, and considers that to be exciting scientific news (yes, Rudy, so it was!).  And Michael Vassar—the Kevin Bacon or Paul Erdős of the rationalist world, the guy who everyone’s connected to somehow—writes something about a global breakdown of economic rationality, $20 bills on the sidewalk getting ignored, that I had trouble understanding (though the fault is probably mine).

102 Responses to “Edging in: the biggest science news of 2015”

  1. Peter Morgan Says:

    Doesn’t a (stochastic) superdeterministic type of approach just replace a quantum mechanics that we don’t understand by an initial stochastic state we don’t understand? And, if so, what’s wrong with taking that type of approach, with the intention of looking for what might make the initial stochastic state slightly less not understood? Mathematics required. Ross Anderson brings to this that there is often nonlocal order in the initial conditions of thermodynamic systems, which perhaps might be enough to hint at useful ways to think about the initial stochastic state; or might not, but such is research, often absurd until it’s useful.

    Jan-Åke Larsson’s “Loopholes in Bell inequality tests of local realism”, J. Phys. A: Math. Theor. 47 (2014) 424003, doi:10.1088/1751-8113/47/42/424003, makes as clear as anyone could wish that the no-superdeterminism assumption is required and cannot be verified by experiment, which is why one finds in all the recent papers more or less generous admissions that “unless superdeterminism”. But you could go to my “Bell inequalities for random fields”, J. Phys. A: Math. Gen. 39 (2006) 7441–7455, doi:10.1088/0305-4470/39/23/018, for the same statement (but definitely not so nicely done), and of course to ‘t Hooft, quite a bit earlier and (inevitably IMO) even less clearly.

    You knew someone was likely to come after you for this one. I enjoyed /your/ Edge Answer for the feeling it gave me that Scott is above the Edge Question.

  2. anon Says:

    Scott, congratulations on your insightful piece: I liked it.
    I am anyway puzzled by the fact that they asked an incompetent crank like Garrett Lisi to contribute to this.

  3. Scott Says:

    Peter #1: I’ve blogged about this so many times before that there’s probably no need to enter into it again. But basically, I regard “superdeterminism” as throwing away a massive presupposition that science itself needs in order to make sense (namely, that there’s no demonic conspiracy controlling which experiments we choose to do), and all in order to “solve” a problem that doesn’t need solving at all! I.e., absolutely nothing goes wrong if you just accept quantum mechanics, and accept locality in the sense in which QM is local (no one else can change your density matrix), and reject locality in the sense in which QM is not local (the sense that presupposes a classical world, which is not the world we live in). To be sure, there are interesting mathematical questions about what kinds of substructure you could or couldn’t consistently imagine underneath QM, and what kinds of experiments it would take to rule them out. I’ve written several papers about those questions myself and might write more! But committing yourself to a classical substructure underneath QM as something that actually exists, and that has empirical consequences, continues to strike me as a one-way ticket to WrongVille. Maybe I’m wrong, but that’s the bet I’ve taken, and so far it’s been a winner. 🙂 (And if we get QCs that demonstrate clear speedups, it will become even more of a winner.)

  4. Scott Says:

    anon #2: If Garrett is an incompetent crank, he’s the nicest, coolest, smartest, most competent incompetent crank I’ve had the pleasure of getting to know.

  5. Gautam Kamath Says:

    A small point, but I noticed that Laci’s name is spelled “Lazslo Babai” in the article, rather than “Laszlo Babai,” which seems to be much more common online.

  6. Jr Says:

    I am more surprised by Frank Tipler. I am not saying he is incompetent but he has some seriously crazy scientific opinions.

  7. Itai Bar-Natan Says:

    Based on this post, a question: How do you pronounce “‘t Hooft”?

  8. Scott Says:

    Itai #7: “ut Hooft,” I believe. See here for more details.

  9. CD Says:

    Thanks for the link to Croak’s note on battery technology. This is very encouraging. I’d share Pinker’s overall optimism about progress and the ‘enlightenment project’, but I’m interested to hear what you make of Abbas Raza’s notes on Piketty’s r>g. I ask because you had a fairly blanket (Popper-like) denunciation of Marx. I have enjoyed some of your work. I’m reading ‘The Ghost in the Quantum Turing Machine’, but I admit I was a bit taken aback by what I thought was a weak criticism of Marx.

  10. Peter Morgan Says:

    Scott #3, no worries. Although people often do commit to a particular structure, one doesn’t have to. I also have to not belabor a point I have made here before, the more so because it’s your blog: that Hilbert-space models can be interpreted as a stochastic signal processing formalism in the presence of Lorentz-invariant noise just as well as they can be interpreted in other ways. A successful QC speedup would only establish that Hilbert spaces are useful for physical models; it would not establish a preference for a particular interpretation of that mathematics. In more-or-less classical terms, stochastic signal processing introduces resources (Poincaré-invariant noise, annealing, relaxation, nonlocal correlations, etc.) that are computationally more powerful than a classical field theory.

    Stochastic superdeterminism only determines that I will have coffee every morning with 99% probability, not that I always will or precisely whether I will on a given morning. We don’t need deterministic superdeterminism. Wigner’s friend applying QM/QFT to a coffee experiment would make an almost identical stochastic claim about Wigner, so do we throw out QM/QFT also? We don’t, insofar as we don’t lose too much sleep over the measurement problem. I take the free will argument and the we-can’t-do-science argument not yet to have been thought through anywhere near decisively.

    Again, it’s not that I specially prefer one interpretation over another, it’s that Nature and PRL are publishing rather metaphysical claims rather too uncritically. If we accept that Jan-Åke establishes that we cannot verify no-superdeterminism by experiment, which the recent Nature and PRL papers do in off-hand base-covering paragraphs, why the shouting?

  11. Scott Says:

    CD #9: I haven’t studied Piketty enough to have any confident opinion about him. If others want to discuss in this thread, that’s great, and maybe I’ll learn something. All I can say is that he seems obviously more quantitative and empirically-minded than Marx was (not a very high bar…), and that his basic r>g thesis strikes me as something that I understand and that could be true (or partly true). At least Krugman thinks so, and Krugman has an impressive record of making correct macroeconomic predictions. But again, I don’t really know.

  12. anon Says:

    Scott #4: I understand that in your position you prefer political correctness. So ok: let’s judge people’s research by their looks or coolness. Fortunately for you, you are both a cool guy and a great researcher. As for Lisi I said already what I think.

  13. Amir Says:

    I’d like to start by thanking you for the wonderful presentation at HUJI on Thursday. It was very interesting and I’m sorry to have missed the Q&A session.

    Now for QM and philosophy issues: There’s a local-realistic hidden-variable interpretation of QM that I learned about recently, and I’m surprised no one is talking about it.

    Basically, if a particle has probability P(x) to be in position x, and moves under a potential V(x) together with a Brownian motion whose diffusion coefficient is ℏ/2m, then it behaves just like a quantum particle with a wave function satisfying |Ψ|^2 = P.

    For multi-particle systems, the same happens, where the speed of the Brownian motion each particle experiences is in an inverse relation to its mass. I’m not aware of any relativistic extension of this theory.

    This seems very exciting to me! The position basis is special, true, but there’s no need for a wave function and there’s no special “collapse” process in measurement. Unlike Bohmian mechanics, there’s no wave function computed “behind the scenes” that directs the hidden variables. Everything is just simple Newtonian mechanics plus some randomness.

    That’s the paper introducing this idea (I think):
    http://dieumsnh.qfb.umich.mx/archivoshistoricosMQ/ModernaHist/Nelson%20a.pdf
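
    In symbols, a standard statement of these dynamics (a sketch in the usual textbook notation; this is not quoted from the paper above): the particle obeys the forward stochastic differential equation

    dX_t = \bigl(v(X_t,t) + u(X_t,t)\bigr)\,dt + dW_t, \qquad \mathbb{E}\!\left[(dW_t)^2\right] = \frac{\hbar}{m}\,dt,

    where, writing \Psi = e^{R+iS}, the osmotic velocity is u = (\hbar/m)\nabla R and the current velocity is v = (\hbar/m)\nabla S; the walker density \rho = e^{2R} = |\Psi|^2 then evolves exactly as the Schrödinger equation predicts.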

  14. eli Says:

    Do you think even NP having TC^i circuits (non-uniform TC^i) is impossible? (This doesn’t conflict with any known conjectures.)

    What does PH being infinite with probability 1 relative to a random oracle say about NP = P, or about NP having non-uniform TC^i circuits? Does it prove NP ≠ P in some sense? It still seems that PH being finite is definitely possible, right?

  15. Douglas Knight Says:

    what I’d regard as the most important in any objective sense (e.g., climate change)

    Asking what is important would produce a lot of redundancy, which is bad. But Brockman did not ask what is important. He asked what you found interesting. But he still got a lot of redundancy. I suspect that the follow-up question about what makes the interesting thing important caused you and many others to substitute “important” for the original “interesting,” and greatly degraded the aggregate value of the collection of answers.

  16. prasad Says:

    Scott Alexander did a characteristically excellent critique of the study Rebecca Goldstein wrote about. Money paragraph:

    Okay. Imagine a study with the following methodology. You survey a bunch of people to get their perceptions of who is a smoker (“97% of his close friends agree Bob smokes”). Then you correlate those numbers with who gets lung cancer. Your statistics program lights up like a Christmas tree with a bunch of super-strong correlations. You conclude “Perception of being a smoker causes lung cancer”, and make up a theory about how negative stereotypes of smokers cause stress which depresses the immune system. The media reports that as “Smoking Doesn’t Cause Cancer, Stereotypes Do”.

    More interesting to me than that old dispute itself is that the media fell over itself to trumpet this study and its preferred conclusion, with nary a skeptical cough, when a child could see there’s an obvious alternate hypothesis here. Political bias in the academy and the scientific media (another hobbyhorse of Jonathan Haidt) seems like the most obvious culprit.

  17. Douglas Knight Says:

    I’d long assumed as much, but apparently this was big news in psychology!

    There are a lot of things to be learned from this, both about common knowledge and about institutions.

    There are two populations involved: psychologists and others. Do you really think that most psychologists would keep going, knowing that they were performing empty rituals? Of course not. They were subject to déformation professionnelle.

    As for outsiders, did they think that (social) psychology was bullshit? Maybe. Indeed, I suspect that the people writing on Edge (whom I haven’t read) are feigning surprise. But what can outsiders do? One option is to simply write off psychology. I suspect that the field of “Cognitive Science” exists for this purpose. One could tear apart individual papers, but this doesn’t work.

    Speaking of which, here is Scott Alexander tearing apart a social psychology paper. What I thought “everyone knew” is that social science published in Science is fraud. But neither you nor Goldstein seems aware of that rule of thumb.

    No, the paper’s hypothesis is not better than “other leading hypotheses.” The leading hypothesis is that math (e.g., GRE scores*) predicts sex ratio. Alexander notices that he is confused by the paper’s claim that total GRE does not correlate with sex ratio. He is unable to replicate that, indeed to replicate the paper’s reported values of total GRE. It turns out that the paper defines total GRE as the math score plus three times the verbal score. It is a fun exercise to find the definition in the paper (in the supplements, I think). This is the definition I label fraud. I suspect that the authors are unaware of their definition, that they have put in a lot of work to fool themselves just as much as the reader.

    * This hypothesis does not make a lot of sense. It does not sound like a cause. It is a mix of the field being selective and the field being mathy. Is philosophy more mathy than history? A little, but it’s more that philosophy students are smarter than history students, all around.

  18. Alyssa Vance Says:

    “female participation is lower in all and only those fields, STEM or non-STEM, whose participants believe (rightly or wrongly) that “genius” is required rather than just conscientious effort”

    Scott Alexander explains why this result shouldn’t be taken seriously, in his essay “Perceptions Of Required Ability Act As A Proxy For Actual Required Ability In Explaining The Gender Gap” (http://slatestarcodex.com/2015/01/24/perceptions-of-required-ability-act-as-a-proxy-for-actual-required-ability-in-explaining-the-gender-gap/)

  19. CD Says:

    I think that Piketty is saying something that is not false, but perhaps not the whole truth, in that what he defines as Capital may be better described as Wealth. There are some good discussions of this in a dedicated issue of the Real World Economics Review. http://www.paecon.net/PAEReview/issue69/whole69.pdf

    The first two papers are interesting. The first is basically an introductory account from a development economist at LSE. The second is a more theoretical critique from the former Greek finance minister Varoufakis (who was a game theorist initially).

    Varoufakis describes himself as an ‘erratic Marxist’ and gives a very readable account of his position here: http://www.theguardian.com/news/2015/feb/18/yanis-varoufakis-how-i-became-an-erratic-marxist. It would be interesting to see how it squares with your rather dismissive attitude to Marx.

    I should disclose that I’m not a Marxist and never have been, but I do see him as a very important thinker – even in economics, where he used the axioms of Smith and particularly Ricardo to show that Capitalism thus defined had internal contradictions. As a period piece, it was perfectly fine. Aspects of it stand the test of time. Not as a predictive tool, but then you can’t use the classical economists for that either.

  20. Evan Says:

    @peter #1.

    >Doesn’t a (stochastic) superdeterministic type of approach
    >just replace a quantum mechanics that we don’t
    >understand by an initial stochastic state we don’t
    >understand? And, if so, what’s wrong with taking that type
    >of approach

    I think the problem here is that we do understand quantum mechanics, and continue to learn to understand it better, while a superdeterministic theory is something that by construction we cannot understand.

    One thing that really strikes me about the EPR paper is that while in some ways it is a model of clarity in scientific writing, it is also surprising how much trouble those guys had in understanding and articulating what they didn’t like about quantum mechanics. They certainly had the basic shape of it, but it was clearly hard even for them to think about properly. It took decades for Bell to actually quantify it and produce Bell’s inequality. Those guys were giants, but they lacked the benefit of a century’s worth of experience in thinking about quantum mechanics; now not only can we teach Bell’s theorem to undergraduate physics students of normal ability, but in many cases they can even do the experiment.

    Of course this happens all the time (that is what scientific progress looks like), but I always find this example striking because EPR had all the facts and all the theories necessary: the advancement was really just learning how to think better about what we already knew.

    My point of all this is that we actually understand quantum mechanics really well, as shown by everything from advances at the LHC, to quantum computing, to the loophole-free test, atomic clocks, and so on. So I am always surprised when people go looking for alternative theories on the premise that we don’t. Back in the 20s and 30s, that was maybe true, but the world has moved on.

    Of course I am not saying there is nothing new “beyond” quantum mechanics, or that we shouldn’t look for anything new. But we now know that quantum mechanics is not a red herring leading us astray.
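
    (As a small illustration of just how teachable this has become, here is a minimal sketch, in Python, of the quantum prediction for the CHSH form of Bell’s inequality; the angles are the standard optimal choices, and nothing here is tied to any particular experiment:)

    import math

    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    # Any local hidden-variable theory satisfies |S| <= 2 (the CHSH inequality),
    # while for a singlet pair quantum mechanics predicts E(x, y) = -cos(x - y).
    def E(x, y):
        return -math.cos(x - y)

    a, a2 = 0.0, math.pi / 2              # Alice's two measurement angles
    b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement angles

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # 2*sqrt(2) ~ 2.828, violating the classical bound of 2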

  21. Garrett Says:

    Thanks for that comment, Scott, I may quote you on that. I was similarly challenged by the ambiguity of this year’s question, but instead of talking about CRISPR-Cas9, which is probably the most impactful recent news in science, I decided to write about something I know more about, and take the question as license to further endear myself to string theorists. Welcome to the Edge party! (May your Holidays be long so burdened.)

  22. Scott Says:

    Eli #14: The new result just shows PH is infinite RELATIVE TO A RANDOM ORACLE. Of course it doesn’t rule out the possibilities that PH is finite or even P=NP in the “real” world (with no oracles), but I don’t believe in those possibilities either. Nor do I believe that NP (or P, for that matter) is contained in any non-uniform TC^i, though that also hasn’t been proved.

  23. Scott Says:

    prasad #16 and Alyssa #18: Yes, I actually first heard of this study through Scott Alexander’s critique of it! And Scott is entirely right, of course, that it’s bizarre to ASSUME “perceptions of the need for genius” are just a stereotype, without even considering the possibility that the perceptions might be correct. But Rebecca, who’s well aware of the issue, was careful to avoid making any assumption of that kind in her answer (I would say that she danced around it), and I also explicitly avoided such an assumption, through my phrase “rightly or wrongly.”

    In correspondence, Rebecca suggested to me an intermediate possibility that I hadn’t thought of. Namely, it could be that in certain fields, there really are more male “geniuses” than female ones. But it could also be that in those same fields, women and men have exactly the same ability to do “solid, competent work” (or the women are better)—but that nevertheless, women are dissuaded from trying to do solid, competent work in those fields (as the non-genius men aren’t), simply because of the halo effect of the geniuses. That would be a strange, interesting way for Larry Summers and his critics to both be right.

  24. AdamT Says:

    Scott,

    Any thoughts on Joscha Bach’s Edge answer, “Everything is Computation?”

    Adam

  25. Rahul Says:

    Scott #23:

    “In certain fields, there really are more male “geniuses” than female ones. But it could also be that in those same fields, women and men have exactly the same ability to do “solid, competent work” (or the women are better)”

    It sounds like a very non-intuitive phenomenon to me. Is there any empirical evidence for this?

    In general, are there skills where there’s such an “ability reversal”? I.e., males do better at the fantastically hard tasks but women do better at the easier tasks?

    Or vice versa.

    Sounds like a very unlikely hypothesis.

  26. Scott Says:

    AdamT #24: In some sense, most of what he says is music to my ears! But what I found a bit unsatisfying was that his answer stayed at the level of slogans and generalities, without descending to describe any actual recent scientific advances that fit the theme of “everything being computation.” Had his answer done so, it would’ve addressed the prompt better, and also been more interesting for me to read.

  27. Scott Says:

    Douglas #15: I completely agree that the followup question about “importance” is part of what threw me off. But even without that, there was already an ambiguity in the first question! “What do you consider the most interesting news?” could be interpreted as either: “What news interested you the most personally?,” or “Which news is it your considered opinion that everyone ought to be the most interested in?” Worse yet, to my mind, the ambiguity seems almost purposefully to invite conflating these two questions—i.e., treating the boundaries of one’s own curiosity as the boundaries of the world. So I resolved that, whatever else I did, I wouldn’t fall into that trap in my own answer.

  28. Douglas Knight Says:

    Rahul, ability is hard to measure. The top people in virtually all fields are male, including female-dominated fields like fashion and psychology. But I imagine that a lot of that is ambition, not ability.

  29. Douglas Knight Says:

    Scott, I think “interested in,” especially “ought to be interested in,” means something very different than “find interesting,” which is about curiosity.

  30. Rahul Says:

    I’m excerpting from Lenny Susskind’s answer:

    “… the emergence of space behind the horizons of black holes is due to the growth of quantum complexity … is a surprising new connection between physics and quantum-information science … these connections may not only teach us new things about fundamental physics problems, but also be tools for understanding the more practical issues of construction and using quantum computers.”

    I was caught by surprise by the word “practical” in this discussion. My impression was these things were pretty much as theoretical as theoretical can get.

    Is there really impact on practical QC-building efforts from the black-hole horizon & quantum complexity connections? Can someone elaborate as to how?

  31. Scott Says:

    Rahul #30: Lenny is alluding to the idea that AdS/CFT, which provides a whole new class of examples of quantum error-correcting codes, might lead to codes that are practically useful for QC even if you don’t care about quantum gravity. I think it’s fair to say that this is sheer speculation right now, though not an obviously absurd speculation.

  32. Rahul Says:

    Douglas Knight #28

    Fair enough. But if so, then Rebecca’s hypothesis itself is meaningless. It isn’t a testable statement.

  33. eli Says:

    Scott Says:
    Comment #22

    NP or PH or even NEXP being in non-uniform TC^0 is consistent with all the uniform conjectures in complexity theory (it does not move the needle even a bit), like the P vs. NP millennium problem.

    Doesn’t this seem odd, and doesn’t it indicate that PH being in non-uniform TC^0 is a fair bet (as a bonus, it may even indicate PSPACE and PH are distinct)?

  34. Scott Says:

    eli #33: No, there’s really nothing odd about the situation, once you understand why proving lower bounds is difficult. Totally plausible that most pairs of complexity classes are distinct (indeed in some sense that has to be true, by the hierarchy theorems!), but only in rare lucky cases can we currently prove it.

  35. Joshua Zelinsky Says:

    eli #33,

    Non-uniform TC_0 is contained in P/poly, so if NP is in non-uniform TC_0 then, by the Karp–Lipton theorem, the polynomial hierarchy collapses to its second level, which we think is not the case.

  36. Rahul Says:

    I was puzzled as to why Christakis found the following fact “dispiriting”:

    “….results from a 2014 General Social Survey indicate that just 23 percent of Americans think we should spend more [on sending astronauts into space].

    By contrast, 70 percent of Americans think we should spend more on education and 57 percent think we should spend more on health.”

    I might consider that as Americans having gotten their priorities right.

  37. AdamT Says:

    Scott #26,

    Yes, the answer was short on specifics, and it didn’t really answer why *2015* was crucial, but then this was true of so many Edge answers this year. While you were nonplussed by the question, at least you made a good-faith effort to answer it in your own way. Lots of others just took it like a politician would and extolled their own spin/talking points.

    I asked about this answer because it struck me that the obvious follow-up (assuming he is correct for which I am also amenable) is when will this generation start teaching physics and math as computation? If, in some deep sense, when we *do* physics and math we are just doing computation, then when will Intro to Newtonian Mechanics and Pre-Calculus be taught by asking students to hand in homework assignments in the form of a fully working program computing the result?
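
    A minimal sketch of what such an assignment might look like (purely my illustration; the setup, names, and tolerance are invented for this example):

    import math

    def simulate_oscillator(m=1.0, k=1.0, x0=1.0, v0=0.0, t_end=10.0, dt=1e-4):
        """Integrate m*x'' = -k*x with the semi-implicit Euler method."""
        x, v = x0, v0
        for _ in range(int(t_end / dt)):
            v -= (k / m) * x * dt   # update velocity from the spring force
            x += v * dt             # then update position with the new velocity
        return x

    # "Grading": compare the simulation against the closed-form solution
    # x(t) = x0*cos(omega*t), with omega = sqrt(k/m).
    exact = math.cos(10.0)          # omega = 1 for the default parameters
    computed = simulate_oscillator()
    assert abs(computed - exact) < 1e-2, (computed, exact)
    print(f"simulated x(10) = {computed:.4f}, exact = {exact:.4f}")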

  38. Scott Says:

    AdamT #37: At MIT, Gerry Sussman has long taught a course that’s exactly like that, a classical mechanics course where the assignments are to write code (in LISP or Scheme, of course). But I’m not sure I’d want that to replace introductory physics wholesale, even were I academic dictator: it’s just a different kind of course that serves a different purpose. Ultimately, my ideal physics teacher wouldn’t be trying to produce code monkeys OR calculus monkeys: she’d be trying to produce Feynmans who can argue intuitively about what has to happen and why.

  39. luca turin Says:

    Scott, thank you for the pointers, very interesting (for the most part) collection of essays.

    First impressions: biology on a roll, physics at a crossroads, psychology down the drain, economics in fighting mood, sociology thriving, environmental science crying wolf.

  40. AdamT Says:

    Scott #38:

    Well, Feynman was famous among theoretical physicists as an amazing *calculator* if for nothing else. Perhaps not every course in undergraduate math/physics needs to be taught through a computer-science lens, but I think if ‘Everything is Computation’ is true, then it would be a good idea to move in this direction. There are several advantages:

    1) Homework can be checked by a computer, saving the teaching assistant grading time, and the student can get instant feedback rather than waiting.

    2) Introducing physics through computation could make it more rigorous from the outset if using a typed language thanks to the Curry-Howard correspondence. I might add that it might give physicists more respect for true proofs rather than ‘proving’ impossibility theorems by approximation methods 😉

    3) By emphasizing from the outset the equivalence of Turing-complete languages, it would free physicists from the notation-jargon straitjackets that take on an assumed importance for some students of a particular formalism or notation system; i.e., showing that a particular formalism is not the only way to express the computation, but could be an accident of history, and teaching students to see formalisms and notation systems as tools to explore the underlying deeper ideas.

    It is this last point where I would say, not really knowing the particulars of Sussman’s class, that requiring all work to be done in one particular language – LISP/Scheme – is decidedly the *wrong* way to go.

  41. JollyJoker Says:

    Rahul #30: Much of current work in theoretical physics is really about developing mathematical tools to handle calculations that are currently impractical/impossible. Some development in calculating in QFTs that have nothing to do with reality is billed as “Spacetime doesn’t exist!” or “Entanglement knits space together!” although it can have very concrete, down-to-earth applications if it works for QFTs more generally.

  42. Rahul Says:

    @Adam T #40

    The whole proposal reminds me of convincing someone that we should all start using Esperanto.

    Sounds good in theory but I doubt it is ever going to happen outside a niche.

  43. Rahul Says:

    @Jolly Joker #41:

    Can you describe some of these very concrete, down-to-earth applications? Just curious.

    I guess the error-correcting codes Scott described in #31 can be counted as one such application.

    Care to list any others?

  44. Aula Says:

    I find it interesting that Paul Davies doesn’t mention anything about the sources of the gravitational radiation that LIGO is supposed to detect. While it is well known that binary pulsars must radiate gravitationally, ground-based detectors like LIGO can’t detect that radiation, because there is far too much background noise at such low frequencies. LIGO can only detect gravitational radiation in a much higher frequency range, and as far as I know, there aren’t very convincing potential emitters in that range, so it’s really far too optimistic to say with certainty that LIGO will detect anything.

  45. Rahul Says:

    One naive question about the sea level rise and the apocalyptic predictions for the coastal towns:

    According to Wikipedia, even using the worst-case rise in emissions, the IPCC scenario predicts a 1 m rise by 2100.

    Why is a barely 3 ft rise in sea level so hard to protect against with engineered solutions, especially on an approx. 100-year advance-warning timeline?

    Can’t a 3 ft rise be protected against by a 3 ft sea-wall (with some factor of safety), or is there some non-linearity here that I’m not considering?

    Or does the 1 m IPCC estimate have a huge standard deviation?

    Keep in mind, this is their worst-case-scenario prediction, not the most likely one.

    I’m not using this as a “let’s do nothing” argument; just wondering why things are as scary & apocalyptic as some of these articles make them sound.

  46. JollyJoker Says:

    @Rahul #43:

    The BlackHat program (http://arxiv.org/abs/0803.4180) is used to calculate backgrounds for the LHC.

    I actually happened to read about the Island of Stability and the problems with calculating stuff in QCD earlier today. Dunno if superheavy atomic nuclei would count as concrete enough – maybe not immediately useful.

    I would assume the amplitudes stuff would be useful in condensed matter physics / materials science, but I don’t really know anything concrete there.

    An old post by Lance Dixon explaining what was going on back then: http://www.preposterousuniverse.com/blog/2013/10/03/guest-post-lance-dixon-on-calculating-amplitudes/
    but sadly nothing about applications beyond a mention of BlackHat.

  47. Scott Says:

    Rahul #45: Depending on the poorly-understood physics of meltwater, the tails easily allow the possibility (not even a very remote one) of a 10-foot rise by 2100, enough to wipe out the places where well over a billion people currently live. But, OK, suppose it’s “only” a 1-meter rise by 2100. Even then, history won’t suddenly stop in 2100! Once the process gets going (as it already is), the ice sheets just continue melting—how many of our current population centers will still be there in 2200 or 2300?

  48. Shtetl-Optimized » Blog Archive » Edging in: the biggest science news of 2015 | the neuron club Says:

    […] Source: Shtetl-Optimized » Blog Archive » Edging in: the biggest science news of 2015 […]

  49. James Cross Says:

    Scott and Luca

    Issues with reproducibility are not limited to psychology and aren’t new.

    This article from 2010:

    http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off

    It isn’t all fraud or confirmation bias. It must be a lot more complicated.

    Schooler, in the article, is a case in point. He gets remarkable results that are widely accepted, then begins to doubt his results and discovers he can’t duplicate them himself. And every time he tries, the results are worse. Yet the results are still accepted as science.

    Then there’s the Crabbe experiment, injecting mice at three different locations and controlling for every imaginable variable. Yet the mice at one location behave completely differently from those at the other two locations.

    Then the 2-million-to-1 runs on the Zener cards by J.B. Rhine, which are easy to dismiss since we have no obvious physical explanation for ESP. Yet the pattern is the same: statistically strong results, with later data wearing thin.

    Reproducibility is perhaps as big a problem in biomedical research as psychology. Think about that the next time you or somebody you know is prescribed the latest drug to come out of that research.

    I don’t know if this is out of date, but the article claims issues with “the weak coupling ratio exhibited by decaying neutrons, which appears to have fallen by more than ten standard deviations between 1969 and 2001. Even the law of gravity hasn’t always been perfect at predicting real-world phenomena. In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.”

  50. Will Says:

    Aula #44: Binary neutron star mergers are the most promising source of gravitational waves for LIGO to detect. LIGO is not looking for slowly radiating, nearly-steady-state binary systems, but for the inspiral that occurs just moments before the merger. It’s uncertain exactly how often these events take place, but from the LIGO FAQ: “For full-sensitivity Advanced LIGO, therefore, binary neutron star mergers should be detected with event rates between 0.4 to 400 times a year — with about 40 per year believed most likely.” This is the reason for optimism that something will be seen in the next year. http://www.ligo.org/science/faq.php

  51. Andrea Censi Says:

    I was disappointed by: CTRL-F “robot” – nothing found.

    Robotics is on a roll! I will make you excited about robotics next time I have the opportunity to talk to you.

    FAQ: Is robotics a “science”? Yes, definitely. (By the way, three days ago Science launched “Science Robotics”.)

  52. mjgeddes Says:

    I think you’ve missed the advance that is by far the most important. And ironically, the one you missed actually contradicts another one you listed as important, and puts that pick of yours in the ‘wrong’ basket 😉

    You need to keep your eye on the ball here Scott! The discoveries are coming thick and fast and there’s definitely some really exciting stuff in there. But the really significant advances are not necessarily the ones hogging the headlines right now.

    As regards artificial intelligence, lots of people are going on about deep learning and how impressive it is, and the importance of big data. While deep learning is impressive in a practical sense, it’s *not* actually that significant in the greater scheme of things. The fact is, it uses variations of algorithms that are decades old, and relies on huge amounts of training data.

    But there *was* a huge breakthrough in artificial intelligence this year, of a theoretical nature. It is the little-known advance that is being called ‘Bayesian Program Learning’. Read here:
    http://edge.org/response-detail/26671

    If this new form of machine learning can be generalized, then it will more or less supersede ‘deep learning’ altogether, because it does *not* require huge amounts of training data.

    “Their (Lake, Salakhutdinov, and Tenenbaum) system can learn very quickly, sometimes in one shot, or from a few examples, in a human-like way, and with human-like accuracy. This ability is in dramatic contrast to competing methods depending on immense data sets and simulated neural networks, which are always in the news.”
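
    (To give a feel for the core idea, here is a toy sketch, in Python, of Bayesian one-shot concept learning in the style of Tenenbaum’s “number game.” To be clear, this illustrates only the bare Bayesian mechanism; BPL proper induces structured generative programs for handwritten characters, which is far richer than this.)

    # Hypotheses are tiny "programs" (rules) generating sets of integers in 1..100.
    HYPOTHESES = {
        "even":            [n for n in range(1, 101) if n % 2 == 0],
        "odd":             [n for n in range(1, 101) if n % 2 == 1],
        "squares":         [n * n for n in range(1, 11)],
        "powers of 2":     [2 ** k for k in range(1, 7)],
        "multiples of 10": list(range(10, 101, 10)),
    }

    def posterior(examples):
        """P(h | examples): uniform prior, and the 'size principle' likelihood
        P(x | h) = 1/|h| for each example consistent with hypothesis h."""
        scores = {}
        for name, extension in HYPOTHESES.items():
            if all(x in extension for x in examples):
                scores[name] = (1.0 / len(extension)) ** len(examples)
            else:
                scores[name] = 0.0
        total = sum(scores.values())
        return {name: s / total for name, s in scores.items()}

    # One-shot learning: a single example already concentrates the posterior on
    # small (specific) hypotheses, thanks to the size principle.
    print(posterior([16]))   # "powers of 2" wins, then "squares", then "even"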

  53. Scott Says:

    mjgeddes #52: Actually, deep learning wasn’t one of the five things I picked as important for my own Edge answer; it’s just something that I talked about other respondents having picked as important in this blog post. But yes, several Edge respondents also talked about that Tenenbaum paper—it clearly has been getting a lot of attention—and maybe I should’ve mentioned it as well. But in any case, I don’t think you get to call other people “wrong” until your horse actually pulls ahead! 🙂 And the usual fate of new ideas in AI is that they don’t generalize, not that they do—for which reason, it’s much harder than it is in (say) CS theory to see immediately what will or won’t be important. So maybe one should wait and see.

  54. Scott Says:

    Andrea #51: What, specifically, was achieved in robotics in 2015 that was exciting? (Not a rhetorical question, a real one)

  55. luca turin Says:

    James Cross #49

    #Think about that the next time you or somebody you know is prescribed the latest drug to come out of that research.#

    I think Big Pharma is generally more careful than individual biomed researchers, not least because the liabilities are huge as compared to the usual slap on the wrist in academia.

    In my opinion the bane of therapeutics is fashion. A typical example is clopidogrel (Plavix). The stuff, though it arises from some brilliant science, (a) is no better than aspirin, (b) has vastly more side effects, and (c) costs a lot more, but it’s new and shiny (can molecules be shiny?).

  56. Ajit R. Jadhav Says:

    What is the time of the year when the Ig Nobels are announced?

    –Ajit
    [E&OE]

  57. James Cross Says:

    re: sea level

    Trivers’s and your projections of sea level are really off the chart compared to the consensus on these things. Where are you getting your numbers?

    The IPCC has the high end of the worst-case scenario at 3 meters by 2300, not 2100. It has about 1 m for the same scenario by 2100.

    Also, you and Trivers keep talking about melting, but most of the sea level rise is thermal expansion. The overall world ice mass balance is harder to project, since Antarctica is gaining ice mass.

    “According to the new analysis of satellite data, the Antarctic ice sheet showed a net gain of 112 billion tons of ice a year from 1992 to 2001. That net gain slowed to 82 billion tons of ice per year between 2003 and 2008.”

    https://www.nasa.gov/feature/goddard/nasa-study-mass-gains-of-antarctic-ice-sheet-greater-than-losses

  58. Peter Morgan Says:

    Evan #20, certainly, “we now know that quantum mechanics is not a red herring leading us astray”. As to “we actually understand quantum mechanics really well”, I’m more inclined to say, as your following examples establish, that quantum engineering is here to stay, no matter how our understanding of QM/QFT might shift; I take our understanding always to be more-or-less incomplete.
    If I thought that a “superdeterministic theory is something that by construction we cannot understand [at all]”, I wouldn’t spend part of my time thinking about ways in which such initial conditions might make some sense (loosely, by analogy with the nonlocal initial conditions that are commonplace in thermal physics), but this is much more relevant to the infinite-dimensional models of QFT (which, as in my #10, can itself be understood as stochastically superdeterministic) than it is to QM.

  59. Ajit R. Jadhav Says:

    Dear Scott,

    My comment above (#56) was nothing more than fun, but it’s only now that I have noticed the (more or less) off-hand comment that you made in the post when covering Ross Anderson’s answer. I gotta get serious.

    Anderson’s (or others’) particular theory (or theories) might not be right, but the very idea that there can be this combination of a local action + a global correlation isn’t wrong. It is in fact easy to show why:

    The system evolution in QM is governed by the TDSE, and it involves a first derivative in time and a second in space. TDSE thus has a remarkable formal similarity to the (linear) diffusion equation (DE for short).
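
    For concreteness, the two equations being compared, in their standard forms:

    i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi \quad \text{(TDSE)}, \qquad \frac{\partial u}{\partial t} = D\,\nabla^2 u \quad \text{(DE)}.

    Both are first-order in time and second-order in space.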

    It is easy to show that a local solution to the DE can be constructed. Indeed, any random-walks-based solution involves only local action. More broadly, starting with any sub-domain method and using a limiting argument, a deterministic solution that is local can always be constructed.
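
    A quick numerical sketch of the random-walk point (assuming nothing beyond the standard correspondence between random walks and diffusion; the parameters are arbitrary):

    import random
    import statistics

    # Purely local construction of a diffusion-equation solution: each walker
    # moves independently by local steps, yet the ensemble density reproduces
    # the global Gaussian solution of du/dt = D * d2u/dx2 from a point source.
    D, t, dt, n_walkers = 0.5, 1.0, 1e-2, 20000
    step = (2 * D * dt) ** 0.5   # step size chosen so that Var(x) = 2*D*t

    positions = []
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(int(t / dt)):
            x += random.choice((-step, step))   # local move, no global information
        positions.append(x)

    # The exact point-source solution is Gaussian with variance 2*D*t:
    print("empirical variance:", statistics.pvariance(positions))
    print("exact 2*D*t:       ", 2 * D * t)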

    Of course, there *are* differences between DE and TDSE. TDSE has the imaginary $i$ multiplying the time-derivative term (I here assume TDSE in exactly the form given on the first page of Griffiths’s text), an imaginary “diffusion coefficient,” and a complex-valued \Psi. The last two differences are relatively insignificant; they only make the equation consistent with the requirement that the measurement-related eigenvalues be real. The “real” difference arises due to the first factor, i.e., the existence of the i multiplying the $\partial \Psi/\partial t$ term. Its presence makes the solution oscillatory in time (in TDSE) rather than exponentially decaying (as in DE).

    However, notice that in the classical DE too, a similar situation exists. “Waves” do exist in the space part of the solution to the DE; they arise due to the separation hypothesis and the nature of the Fourier method. OTOH, a sub-domain-based or random-walks-based solution (see Einstein’s 1905 derivation of the diffusion equation) remains local even if eigenwaves exist in the Fourier modeling of the problem.

    Therefore, as far as the local vs. global debate is concerned, the oscillatory nature of the time-dependence in TDSE is of no fundamental relevance.

    The Fourier-theoretical solution isn’t unique in DE; hence local solutions to TDSE are possible. Local and propagating processes can “derive” diffusion, and therefore must be capable of producing the TDSE.

    Note, my point is very broad. Here, I am not endorsing any particular local-action + global-correlation theory. In fact, I don’t have to.

    All that I am saying is (and it is enough to say only this much) that (i) the mathematics involved is such that it allows the building of a local theory (primarily because Fourier-theoretical solutions can be shown not to be unique), and (ii) the best experiments done so far are still so “gross” that the existence of such fine differences in the time-evolution cannot be ruled out.

    One final point. I don’t know how the attendees of that conference think, but at least as far as I am concerned, I am (also) informally convinced that it will be impossible to give a thoroughly classical-mechanics-based mechanism for the quantum phenomena. QM is supposed to give rise to CM (Classical Mechanics) in the “grossing out” limit, not the other way around. Here, by CM, I mean: Newton’s postulates (and the subsequent reformulations of his mechanics by Lagrange and Hamilton). If there are folks who think that they could preserve all of Newton’s laws and still work out QM as an end product, I think they are likely to fail. (I use “likely” simply because I cannot prove it. However, I *have* thought about building a local theory of QM, and do have some definite ideas for one. One aspect of this theory is that it can’t preserve a certain aspect of Newton’s postulates, even though my theorization remains local and propagational in nature, with compact support throughout.)

    [More than enough of a reply for an off-hand comment, right?]

    Best,

    –Ajit
    [E&OE]

  60. Arko Says:

    The whole episode involving Nicholas and his wife’s email made me wonder if:

    1) adult students WANT to be patronized. And if
    2) they go to university to enjoy a “safe space”.

    The whole idea of a “safe space” assumes that one’s ideas may be presumed correct a priori, which is absurd!

  61. fred Says:

    2015 was supposed to see the commercial launch of Virtual Reality 1.0, but it’s been sliding a bit into early 2016.

    The interesting thing about VR is that even though its components seem relatively trivial and well known (OLED screens, lenses, position tracking, stereo headphones…), it’s really way more than the sum of its parts, because of the very intimate way it couples with the senses and the brain’s visual/audio systems.
    When it all comes together, a magical threshold is crossed which cannot be communicated in words but can only be experienced for oneself.

  62. Rahul Says:

    Scott #47:

    Well, if the sea level rises by only 3 ft by 2100, it hardly seems like an apocalyptic risk to me. Engineered solutions with a 100-year lead time ought to do the job in those cities at risk.

    And people on the coast are exposed to high levels of flood risk anyway. So three generations sounds like plenty of time to relocate some populations.

    Furthermore the sea level rise will be gradual. Humans have always been better at adapting to gradual changes rather than step changes.

    Ultimately though, I think a lot of these decisions boil down to our individual discounting function for future risk. When I consider the portfolio of potential risks we are exposed to, I can count many more imminent ones that scare me than a 3 ft sea level rise in 100 years.

  63. AdamT Says:

    Rahul #62,

    Aren’t you drastically underestimating the magnitude of the affected populations at risk from a 3 ft sea level rise by 2100 and drastically overestimating the capability and resources of those populations to effectively deal with the threat? The consequences of sea level rise are likely to be highly non-linear with regard to the relative impact on vulnerable and poorer areas versus richer countries.

    “… adverse effects of a warming climate are “tilted against many of the world’s poorest regions” and likely to undermine development efforts and global development goals” [1]

    “Low-income countries will remain on the frontline of human-induced climate change over the next century, experiencing gradual sea-level rises, stronger cyclones, warmer days and nights, more unpredictable rains, and larger and longer heatwaves, according to the most thorough assessment of the issue yet.” [2]

    The knock-on effects of population migration of vulnerable peoples from the coastal areas towards inland areas are likely to exponentially increase the suffering. It might be true that some segments of humanity will be able to engineer a way to mitigate the disastrous effects of sea level rise, but very large portions of the globe will simply not have the resources to deliver and people will suffer on a massive scale.

    In other words, why are you so sure that our future engineering prowess will be able to scale to the entire global population?

    1) http://tinyurl.com/mhe39vk
    2) http://tinyurl.com/ocyn2ag

  64. Rahul Says:

    @Adam T

    I’m not sure; that’s why I’m trying to get more concrete estimates than the vague prophecies of apocalyptic risk a la Edge.

    Coming to specifics, let’s take a city like Mumbai (Bombay): coastal and fairly low-income, with the sea on three sides and 12 million inhabitants or more. I estimate the total coastline at approx. 150 kilometers.

    Now let us say we need to protect it with a berm / wall / engineered barrier that factors in a 3 ft sea level rise. What I’d love to know is what sort of barrier this would be. An RCC / masonry wall of a certain height and thickness? Maybe someone with the right engineering knowledge can help?

    Once we have that estimate we can have a better idea of the costs involved.

    Note that the design goal is only to protect against the 3 ft rise in sea level, not against all sorts of storms, hurricanes, cyclones, tsunamis, etc. Those risks are present even today, and we don’t have any barrier in the status quo that keeps them out.
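
    In the meantime, here is the shape of the back-of-envelope calculation I have in mind, as a minimal sketch: every input below is a placeholder I made up, not an engineering figure.

        # Placeholder back-of-envelope for a coastal barrier around Mumbai.
        coastline_km = 150        # my rough coastline figure from above
        wall_height_m = 2.0       # covers ~3 ft of rise plus some margin (assumed)
        wall_thickness_m = 1.0    # assumed average RCC/masonry section
        cost_per_m3_usd = 500     # assumed installed cost per cubic metre

        volume_m3 = coastline_km * 1000 * wall_height_m * wall_thickness_m
        materials_usd = volume_m3 * cost_per_m3_usd
        total_usd = materials_usd * 3   # assumed multiplier for foundations, drainage, land

        print(f"volume: {volume_m3:,.0f} m^3, rough total: ${total_usd/1e9:.2f} billion")

    With these made-up inputs the total comes out around half a billion dollars; the number itself matters less than that the calculation is easy to redo with better inputs, and the dominant uncertainty is presumably foundations and ground conditions, where I hope an engineer can correct me.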

  65. Michael Vassar Says:

    FWIW, I think Rahul’s right here.

    With the results in, I personally think
    https://edge.org/response-detail/26791
    was the best answer.

    My apologies for what was frankly a poor argument. I’m not a very good writer, and I tried to do too much at once: to tie a concrete observation (which I think is important and easy to see) to the cultural zeitgeist, and to point out a tenuous connection to some papers from decades ago which I think are astronomically important, but abstract and bland, and which have been studiously ignored, leading, I believe, to enormous economic destruction and to the allocation of enormous scientific talent to antisocial behaviour.

    I’d be happy to explain the last paragraph in particular at some point, and to discuss some more speculative parts that I ultimately decided not to include.

  66. Joscha Says:

    Argh! I really tried to please you, Scott. I deliberately ignored the stuff happening in my own area, because it is not earth-shattering; wrote something non-nerdy yet edgy to please the Edgies; and tried to find a topic near to your heart that literally combines melting glaciers, CRISPR, Deep Learning, and the emergence of spacetime from entanglement. And then you hate it. Excuse me, I will be busy drowning myself in the Charles River tonight.

  67. In QM, local action does make sense | Ajit Jadhav's Weblog Says:

    […] post on his blog covering his and others’ responses to this year’s Edge question [^], Scott singled out three answers by others (at the Edge forum) which he thought were heading in […]

  68. John Sidles Says:

    Rural knowledge  Re Rahul’s comment #64: farm-boys are taught young that water-tight earthen barriers are:

    (1) feasible on clay-ground (e.g., the Netherlands)
    (2) infeasible on sand-ground (e.g., Mumbai)
    (3) ludicrous on karst-ground (e.g., Florida)

    See for example Martin et al. “Sea-level Rise Impacts on Coastal Karst Aquifers”, and references therein.

    Conclusion  The world’s coastal grounds are, most commonly, geologically unsuited to dike protection from sea-level rise, and engineers appreciate the economic implications of this reality more realistically than politicians do.

  69. Rahul Says:

    @John Sidles

    Are you saying it is impossible to build a sea-wall in Mumbai?

    Or just more difficult? Sure, sand-ground might be more expensive to build on, but isn’t “infeasible” too strong a word?

    Any opinions about the Backbay Reclamation Project in Mumbai, which reclaimed approx. 400 acres of land from the sea back in 1930? And of course it is all surrounded by… a sea wall.

    http://theory.tifr.res.in/bombay/physical/geo/backbay-reclamation.html

    In fact, large areas of Mumbai (e.g. East Docks, Sasson Docks, Marine Drive) wouldn’t exist had it not been a combination of sea walls & reclamation.

    https://pbs.twimg.com/media/CAy8zF5UUAAg6Qw.jpg

  70. James Cross Says:

    Rahul,

    About three years ago I looked at climate issues and posted this:

    http://broadspeculations.com/2012/08/26/climate-of-change/

    I didn’t try to argue about the reality of climate change, but rather just accepted the IPCC’s predictions about likely temperature rises in order to examine predictions about their effects. I happen to believe there is human-caused global warming, but I think the degree of warming is at the lower end of most predictions. For the sake of argument, I took the IPCC’s middle-range predictions.

    Ultimately I concluded that sea level rise is the one clear effect of global warming that would be the most difficult and costly to remedy.

    Many other potential effects have multiple other variables affecting their outcomes, and they are ultimately difficult to establish with a high degree of confidence. Many of the other effects can also be significantly mitigated by economic growth in developing countries. Mitigation of sea level rise could likewise be accomplished through economic growth, but it is much more costly and difficult than other changes, such as improvements in agricultural practices, sanitation, and water management. Most of these other changes are also desirable in their own right, without respect to their ability to mitigate climate change.

    So, in the end, I think we need to worry about global warming and take steps to reduce greenhouse gas emissions through the development of alternative energy sources, but we need to be very careful not to reduce economic growth, particularly in developing countries. I see this as a delicate balancing act, and I regard calls for drastic action as misguided.

  71. Rahul Says:

    @James Cross:

    I think your comment is spot on. In the whole AGW-denier saga, the other question has been somewhat neglected:

    i.e., given that AGW exists, what are the magnitudes of the various changes, and what are the relative costs and timelines of the engineered solutions that might mitigate their effects?

    How to implement AGW measures without choking off economic growth in developing nations is going to be a very crucial problem. Unfortunately, I don’t see much focus on this delicate, nuanced balancing act.

    Instead I see non-nuanced responses like “India / China are evil for embarking on a massive drive of coal-fired power plants” and for daring to boost the living standards of their people.

  72. John Sidles Says:

    In regard to Rahul #69, commended topics of study are “Holocene confining layer” and “deep polder”.

    In a nutshell  Because the Netherlands is naturally endowed with relatively impermeable subsurface confining layers, it can feasibly build and feasibly maintain deep polders (habitable land below sea level); conversely, regions lacking confining layers — like Florida, Mumbai, and most other low-lying regions around the globe — cannot build or maintain deep polders by any demonstrated or envisioned geotechnical means.

    ————

    In regard to James Cross #70 (“delicate balancing act and calls for drastic actions”), commended reading is historian Lawrence Freedman’s recent (and well-reviewed) Strategy: A History (2013, Oxford University Press).

    Here is a sampling of Freedman’s historical account, from his Preface and from his concluding chapters “The limits of rational choice” and “Beyond rational choice”.

    Preface […] [This book] shows how in distinct military, political, and business spheres, there has been a degree of convergence around the idea that the best strategic practice may now consist in forming compelling accounts of how to turn a developing situation into a desirable outcome. The practice of thinking of strategy as a special form of narrative came into vogue as the 1960s turned into the 1970s, and disillusion set in with the idea that large enterprises and even wars could be controlled by means of a central plan. Developments in cognitive psychology and contemporary philosophy came together to stress the importance of the constructs through which events are interpreted.

    The limits of rational choice […] A particularly influential theory was one that stressed the benefits of treating all choices as if they were rational. Adherents were confident that they, almost uniquely, could offer a theory […] in which all propositions could both be deduced from a strong theory and then validated empirically. Though rational choice theory consistently delivered far less than promised, and its underlying assumptions became vulnerable to a fundamental challenge from cognitive psychology, it was promoted effectively and in a highly strategic manner. [Its proponents] were not deterred by the widespread apprehension that the theory depended upon an untenable view of human rationality. The claim, they insisted, was no more than that the premise of rationality helped generate good theory.

    Beyond rational choice […] A complex theory of decision-making emerged [that] was at all times influenced by the social dimension and emphasized the importance of familiarity; the effort required to understand the distant and menacing; the inclination to frame issues in terms of past experiences, often quite narrowly and with a short-term perspective; and the use of shortcuts (heuristics) to make sense of what was going on. None of this fit easily with descriptions in terms of the systematic evaluation of all options, a readiness to follow an algorithmic process to a correct answer, employing the best evidence and analysis, keeping long-term goals clearly in mind. […] Jonah Lehrer summed up the implications of the research:

    “The conventional wisdom about decision-making has got it exactly backward. It is the easy problems — the mundane math problems of daily life — that are best suited to the conscious brain. […] Complex problems, on the other hand, require the processing powers of the emotional brain, the supercomputer of the mind.”

    Grand unifying conclusion  Feasible global warming strategies depend from twin anchors in quantum theory: the (mathematically rational) quantum radiation transport theory that governs CO2-driven global warming, and the (mathematically rational) quantum thermodynamic theories that govern both the technical feasibility of carbon-neutral production of energy and the efficient uses of that energy in fabrication, concentration, purification, and computation.

    Concomitantly, the 21st century’s evolving social constructs (that bridge these two quantum anchors) are surveyed in the evolving, tightly coupled, post-rational strategic narratives of small-“p” philosophers (John Rawls, Martha Nussbaum), small-“h” historians (Jonathan Israel, Lawrence Freedman), small-“e” economists (Francis Spufford, Amartya Sen), and small-“n” novelists (Annie Proulx, Dominique Edde, Ted Chiang) … practitioners whose works, in aggregate, are showing us the opening ways by which “the supercomputing powers of our emotional brains” increasingly nourish our shared hopes for the great challenges and opportunities of our century.

  73. mjgeddes Says:

    Scott #53:

    Good answer Scott! I’m impressed that you’re not sucked in by the ‘Deep Learning’ hype. I also think that you were wise to doubt the hype surrounding Bayesianism!

    Bayesianism is a ‘grand unified theory of reasoning’ which holds that all of science should be based on assigning (and updating) probabilities for a list of possible outcomes; the probabilities are supposed to indicate your subjective degree of confidence that a given outcome will occur.

    Contrast this with an alternative conception of rationality as espoused by David Deutsch.

    David Deutsch, in his superb books ‘The Fabric of Reality’ and ‘The Beginning of Infinity’, argued for a different theory of reasoning than Bayesianism. Deutsch (correctly, in my view) pointed out that real science is not based on probabilistic predictions, but on explanations. So real science is better thought of as the growth or integration of knowledge, rather than as probability calculations.

    So what’s wrong with Bayesianism?

    Probability theory was designed for reasoning about external observations, i.e., sensory data (for example, “a coin has a 50% chance of coming up heads”). For predicting things in the external world, it works very well.

    Where it breaks down is when you try to apply it to reasoning about your own internal thought processes; it was never intended for that. As the statistician Andrew Gelman correctly points out, it is simply invalid to try to assign probabilities to mathematical statements or theories, for instance.

    Can an alternative mathematical framework be developed, one more in keeping with the ideas of David Deutsch and the coherence theory of knowledge?

    My own idea is to separate out levels of abstraction when reasoning (or equivalently, levels of recursion). In my proposed framework, there are 3 levels, and each level gets its own measure of ‘truth-value’. The idea is that different forms of reasoning correspond to different levels of abstraction.

    Well-known types of truth-measure include the Boolean measure (True/False) and probability (a value in [0,1]). I’m proposing that we should use a different measure, called conceptual coherence (a categorization measure).

    As a rough working definition of *conceptual coherence*, I would define it thus:

    “The degree to which a concept coheres with (integrates with) the overall world-model.”

    Here’s what’s wrong with Bayesianism: there is not just uncertainty about our knowledge of the world (probability); there is another meta-level of uncertainty, namely uncertainty about our own reasoning processes, or logical uncertainty. Bayesianism can’t help us here. Conceptual coherence can. Let’s see how:

    All statements of the form:

    ‘outcome x has probability y’

    can be converted into statements about conceptual coherence, simply by redefining ‘x’ as a concept in a world-model. Then the correct form of logical expression is:

    ‘concept x has coherence value y’.

    The idea is that probability values are just special cases of coherence (the notion of coherence is more general than the notion of probabilities).

    To conclude: conceptual coherence is the degree to which a concept is integrated with the rest of your world-model. I think it accurately captures, in mathematical terms, the ideas that Deutsch was trying to express, and that it is a more powerful method of reasoning than Bayesianism.
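
    To make this slightly more concrete, here is a minimal toy sketch in Python; the graph, the concepts, and the measure are all illustrative choices of mine, not a finished theory:

        # Toy sketch of "conceptual coherence": a world-model as a graph of concepts,
        # with a new concept's coherence measured as the fraction of its asserted
        # links that land on concepts already present in the model.
        world_model = {
            "heat":    {"energy", "entropy"},
            "energy":  {"heat", "work"},
            "work":    {"energy"},
            "entropy": {"heat"},
        }

        def coherence(links):
            """Fraction of a concept's links that connect into the world-model."""
            if not links:
                return 0.0
            return len(links & world_model.keys()) / len(links)

        print(coherence({"heat", "energy"}))    # 1.0 -- fully integrated
        print(coherence({"heat", "caloric"}))   # 0.5 -- only partially integrated

    This is only one way to cash out the working definition above; the point is just that the measure lives on concepts-in-a-model rather than on outcomes.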

  74. Scott Says:

    mjgeddes #73: Well, I try to avoid every kind of hype. On the other hand, I’m as impressed as the next person by the successes of deep learning in the past few years—even though the ideas are old, the level of success is new.

    And I never, ever want to be on the side that constantly has to explain why actual success in solving actual problems better than before “doesn’t count” or “doesn’t address the real issues,” which more and more is the tack that the anti-deep-learning people have to take.

    (In the same way, if D-Wave were actually solving real-life optimization problems better than you could with the best classical algorithms, you’d find me here defending them against critics! As I keep repeating, the scientific details of what D-Wave is doing matter precisely because they don’t win against the alternatives in a fair fight in the same sense that deep learning does.)

  75. Edge Annual Question 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT? | Shaun Ling Says:

    […] answers are quite enlightening. I was reading Professor Scott Aaronson’s blog, and he has already commented on this year’s answers; he himself is one of the writers of the answers. These answers will give […]

  76. John Sidles Says:

    A prominent instance, in the past decade, of “old ideas that achieve new levels of success” (in the phrase of Scott’s comment #74) has been the Moore’s-Law-style growth of the math-and-physics literature on matrix product states and tensor networks (MPS/TN): the doubling time of that literature, sustained for the past decade, has been ~38 months.
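
    For concreteness, taking the ~38-month doubling time as the given figure, the implied compounding is easy to check with a minimal sketch:

        # Compounding implied by a 38-month doubling time (the 38 months is the input assumption).
        doubling_months = 38
        annual = 2 ** (12 / doubling_months)     # ~1.24x per year
        decade = 2 ** (120 / doubling_months)    # ~8.9x per decade

        print(f"annual growth: {annual:.2f}x; ten-year growth: {decade:.1f}x")

    That is, a literature growing roughly nine-fold per decade.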

    Q1  Aren’t MPS/TN algebraic state-spaces mathematically well-suited to the description of DWAVE-type quantum dynamical processes?

    Q2  Does it matter all that much, whether MPS/TN state-spaces are realized in billions of classical silicon gates versus millions of quantum DWAVE gates? Is it realistic to foresee that these two approaches will prove to have complementary advantages?

    Q3  Is it realistic to hope that, in the coming decade, continued advances in MPS/TN algorithmic capabilities and continued advances in DWAVE-type hardware capabilities can unite to mutually sustain a “more than Moore virtuous circle” of advances in quantum computation and simulation capability, in both algorithms and hardware?

    For sure, 2015 has given us plenty of quantum research articles whose findings are reasonably compatible with the answer to all of these questions being “yes”. So perhaps DWAVE’s investors and customers aren’t so dumb!  🙂

  77. amy Says:

    I agree, Scott: I don’t much care for the question either, but then Brockman’s an impresario and that’s his shtick. I find I’ve wandered far away from the festival of tweets and headlines that science writing has become. I’m still very glad of its rebirth; I just don’t think it’s very interesting. For one thing, there’s much too much to know; for another, you might know that a new piece of work is going to be extremely important, but in general not only isn’t that how it works, it’s important only because of a whole rhizome of other work, most of which doesn’t sound all that exciting, some of which may be pretty old. It’s exciting in the aggregate, in other words.

    Not too long ago the Atlantic went looking for a new science writer — Ed Yong took the job — and it was the same problem: they wanted someone who could “break science news”. It seems to me one of the least interesting available ways of talking about science, and the relentless focus on news-cycle timescales does a lot to guarantee that the discussion about science won’t be all that smart.

    My guess actually is that Brockman didn’t much care whether the question was taken very seriously, though, and that he was more interested in assembling a set of name essays.

  78. Sanketh Menda Says:

    Sir, in light of recent events:
    At QIP 2016, Bravyi (@IBMResearch) showed that computing the distance of a given stabilizer code is NP-hard, but that for “homological CSS codes” it is in P.
    A lot of other results like this have appeared this year.
    Do you think we are approaching P = NP, or are these all just specialized cases?

  79. Scott Says:

    Sanketh #78: Sergey’s result sounds great. But the truth is that we’ve had thousands of examples, since the 1970s, where a problem is NP-hard but an extremely similar-sounding problem is in P. This is not evidence for P=NP. If anything, I claim, it’s evidence for P≠NP—because if P=NP, then you would’ve expected that in at least one such case, the same problem would’ve been found to be both NP-complete and in P, yet somehow that never happens. (In an earlier post, I called this phenomenon the “invisible electric fence.”)

  80. John Sidles Says:

    The points that Amy raises (#77) are well-addressed in Terry Tao’s essay “What is good mathematics?” (arXiv:math/0702396 and Bull. Amer. Math. Soc. 2007; note that Tao acknowledges comments and suggestions from Gil Kalai). It is necessary only to map “mathematics” to “science” in passages like:

    The concept of mathematical [read: scientific] quality is a high-dimensional one and lacks an obvious canonical total ordering. … The very best examples of good mathematics [are] part of a greater mathematical story, which then unfurls to generate many further pieces of good mathematics of many different types. Indeed, one can view the history of entire fields of mathematics [read: science] as being primarily generated by a handful of these great stories, their evolution through time, and their interaction with each other.

    A ringing affirmation of Tao’s narrative-centric perspective is the concluding chapter “Stories and Scripts” of Lawrence Freedman’s historical survey Strategy (see #72), which quotes the sociologist Joseph E. Davis as follows:

    “Research suggests that power comes less from knowing the right stories than from knowing how and [how] well to tell them: what to leave out, what to fill in, when to revise and when to challenge, and whom to tell or not to tell.”

    Even in mathematics there is a characteristically human, passionate, poignant aspect to Tao/Freedman-style story-construction; for example, in the introduction to Dolgachev’s Classical Algebraic Geometry: A Modern View (2012) we read:

    “How sad it is when one considers the impossibility of saving from oblivion so many names of researchers of the past who have contributed so much to our subject.”

    Similarly, Tom Conte’s recent “Computing roadmapping: a proposal” (Conte is President of the IEEE Computer Society and co-chairs the IEEE Rebooting Computing Committee) is concerned with failures of the Moore’s Law narrative in computing, and it concludes with a somber yet hilarious line:

    “We’re all smart people, but we have to pay attention to the writing on the wall.”

    Note: the linked Far Side cartoon “Midvale School for the Gifted” is Conte’s.

    A very great virtue of Shtetl Optimized (as it seems to me) is the shared reading of “the writing on the wall”, especially when our various readings of that writing generate good stories in the sense of Tao and Freedman.

  81. John Sidles Says:

    Scientific advances without genius or heroism  One more optimism-inducing scientific event of 2015 is passing almost unnoticed among the Edge responders (as far as I know): in September of 2015, Type 2 poliovirus was declared to be eradicated in the wild.

    Now starts the politically delicate process of destroying laboratory stocks; Vincent Racaniello’s blog Virology describes the tricky details, in an essay “Virologists, start your poliovirus destruction!“.

    The roots of this scientific advance — including references to Dick Feynman’s seminal wet-bench research — are explored in James Crow’s and William Dove’s brief reminiscence on the early days of genomics “Paradox found” (Genetics, 2000), which concludes with the aphorism:

    There’s nothing like technical progress! Ideas come and go, but technical progress cannot be taken away.

    Unless we humans are exceedingly unwise, neither can the eradication of this deadly virus ever be taken away from us.

    For me at least, the eradication of type 2 poliovirus is a case study in which humanity is exhibiting collective genius and heroism, without excessive dependence upon individual genius and heroism. Now that is optimism-inducing. More please! 🙂

  82. Sniffnoy Says:

    So, not related to any of this, but: What happened to the tagline?

  83. Nick Read Says:

    Why has the blog turned blue?

  84. Scott Says:

    Sniffnoy and Nick: This is horrible! For some reason, WordPress just got rid of all the PHP scripts that made the blog look like it does, and reverted to the defaults. I have no idea what happened, and also no time to look into it right now. I’ll try to do it in a day or so.

  85. Ajit R. Jadhav Says:

    Scott,

    The reason cited is security.

    Mine is a free blog hosted right on WordPress’s servers, and the change affected the ClustrMaps for my blog; I had no choice. But since yours is hosted on your own domain, I guess you should be able to get the PHP scripts back in the running.

    If you succeed, please indicate what you did, so that, if the same thing is possible for the free WordPress-hosted blogs too, I can see whether I could use your tips as well.

    –Ajit
    [E&OE]

  86. Mike Says:

    I like the reply function.

  87. Anonymous Says:

    Don’t worry, Scott, it was ugly anyway.

    But seriously, zingers aside, can you at least make the blog more readable on non-cellphone screens while you’re at it? This user style helps, but I can’t always use it.

  88. Scott Says:

    Hooray, I managed to restore the original style from a backup!

    Mike #86: I do understand the advantages of multithreaded comments, but for me I think they’re outweighed by the disadvantage of losing track of temporal order. Still, if enough readers want it, maybe I’ll experiment with threads at some point in the future.

  89. Nick Read Says:

    Yay!

  90. Ajit R. Jadhav Says:

    Amir #13

    I began reading the previous comments only today.

    Nelson himself seems to have abandoned his stochastic QM; see R. F. Streater’s comments here: http://www.mth.kcl.ac.uk/~streater/lostcauses.html#II

    Not knowing of this development, I had written an email to him (sometime in the mid-noughties, probably in 2007), but (characteristically for those times) he had not replied to it.

    I had independently published a couple of conference papers (i.e., not knowing about Nelson’s work) in 2006; my approach too was local, and it too had a connection to random walks. I had suggested a random-walking particle for the photon, not the electron (for an electron, it would have been more correct). However, these papers of mine now do seem faulty to me.

    I am in any case completely revising my approach (though it remains local), and I will likely publish something this year. I will write about it first at my blog and then in a journal paper. Stay tuned. [This is not a device to attract more traffic to my blog. The point is: the subject is too complex for me to consistently remain on top of it, even if I think I have good basic ideas. I must therefore write bits of it here and there, and thereby get a better handle on the complexity. In the process, I have realized that if I write a *public* post, then, even if only to defend myself, I end up being more careful about what I write. That’s why I am likely to end up writing the ideas piecemeal on my blog, and only then write up a proper paper. As usual, I invite well-thought-out comments; if well thought out, they are actually likely to help me here!]

    Best,

    –Ajit
    [E&OE]

  91. Scott Says:

    Joscha #66: I’m glad to see that days have passed and you still haven’t “drowned yourself in the Charles River”! Please don’t. As I said, philosophically your answer was music to my ears. It’s probably a bad personality trait of mine that I focus on things to quibble about rather than on the areas of agreement—one of many reasons why I could never be a politician.

  92. Joscha Says:

    Scott, that trait of yours, combined with your kindness and integrity, is why I value your contributions to the sphere of ideas so much, and why I won’t easily discount your judgment.

  93. Rahul Says:

    “…one of many reasons why I could never be a politician.”

    What are the other reasons? I think that’d make for a nice, fun list! 🙂

  94. Nick Read Says:

    Aargh! Now it’s green on the borders!

  95. Year's end choice articles from the arXiv - The Berkeley Science Review Says:

    […]  Scott Aaronson has a great writeup of the result.  Actually, Scott Aaronson also has a great 2015-in-review post as well, so just go read that, […]

  96. Alon Amit Says:

    Curious to hear your take on this 2015 PNAS paper, Scott:

    http://www.pnas.org/content/113/3/532.full

    “Quantum violation of the pigeonhole principle and the nature of quantum correlations”, Aharonov et al.

    Seems a bit of a fanciful interpretation to me.

  97. Links for January 2016 - foreXiv Says:

    […] Scott Aaronson reviews the recent slate of responses to the Edge question. […]

  98. jonas Says:

    In the meantime, the LIGO gravitational wave detector is featured in the Piled Higher and Deeper comic strip: http://www.phdcomics.com/comics/archive.php?comicid=1853

  99. jonas Says:

    There was a recent announcement that LIGO has detected gravitational waves from the merger of two black holes. Scott, may I ask you a question on account of that news?

    The news seems a bit underwhelming to me, because it isn’t matched by a detection through optical or X-ray telescopes or neutrino detectors. It seems like the types of events LIGO detects are very unlikely to be matchable this way, especially since the gravitational wave detector can’t localize the events with any precision. Nevertheless, there seems to be quite a lot of media hype about the detector right now; in fact, I’m listening to a radio interview about it as I write.

    Now you had expressed worries about the D-wave media hype possibly having a harmful effect on other practical quantum computer projects. So my main question is this. Should I be worried that the media hype around LIGO will make it more difficult to get funding to space-borne gravitational wave detectors?

  100. Scott Says:

    jonas #99:

      Should I be worried that the media hype around LIGO will make it more difficult to get funding to space-borne gravitational wave detectors?

    That seems ridiculously unlikely. I was worried (and still am, but less so than before) that the failure of D-Wave to see a real speedup would turn people off from quantum computing as a whole. But LIGO is not a failure! As of yesterday, it’s a spectacular success. And funding agencies love success. Now that it’s clear that you can detect gravitational waves, lots of countries’ funding agencies will probably want in on it. This can only make it less difficult to get funding for space-borne detectors.

  101. Daniel Says:

    Jonas #99

    That seems like a weird assessment of the situation. You’re worried that the detection of the black hole merger, via gravitational waves, wasn’t matched by a detection via optical or X-ray telescopes? They’re black holes; they don’t emit radiation. If it were a black hole + neutron star merger, then maybe you’d have accretion disks and whatever, but two black holes?

    As I understand it, which is not a whole lot, that event was essentially invisible to every other instrument that we have. Not only have we detected gravitational waves, we also saw, through them, an event which has never been seen before. And you think that’s underwhelming?

    By the way, LIGO can’t localize events with precision because there are just two detectors. If anything, that gives more motivation for building further instruments of this type elsewhere, so that we *can* start pinpointing events in the sky at some point.
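
    To see why two detectors give only a ring rather than a point, note that the sole directional datum is the arrival-time difference between the two sites. Here is a minimal sketch; the ~3000 km baseline is the approximate Hanford-Livingston separation, and the delay value is just an illustration:

        import math

        c = 299_792_458       # speed of light, m/s
        baseline_m = 3.0e6    # Hanford-Livingston separation, roughly 3000 km
        dt = 6.9e-3           # illustrative inter-site arrival-time difference, seconds

        # The delay fixes only the angle between the source direction and the baseline axis:
        theta = math.degrees(math.acos(c * dt / baseline_m))
        print(f"source lies on a ring about {theta:.0f} degrees from the baseline axis")

    Every sky direction at that angle is consistent with the same delay, hence a ring; a third, non-collinear detector would add a second ring and shrink the candidate region to the rings’ intersections.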

  102. jonas Says:

    Re Scott #100: good, thank you for your assessment!

    Daniel #101: Yes, my problem is that the detector probably can’t detect the kinds of events that are also visible with other instruments, whereas I hope a space-borne detector might be able to. As for building more detectors at different sites to localize events: yes, that sounds like a good idea, and indeed it might be a good way to argue for more detectors.

    Also, let me point to V. T. Toth’s blog entry on the same gravitational wave detection event (it also links to the article): https://spinor.info/weblog/?p=7532