“Quantum Computing and the Meaning of Life”

Manolis Kellis is a computational biologist at MIT, known as one of the leaders in applying big data to genomics and gene regulatory networks. Throughout my 9 years at MIT, Manolis was one of my best friends there, even though our research styles and interests might seem distant. He and I were in the same PECASE class; see if you can spot us both in this photo (in the rows behind America’s last sentient president). My and Manolis’s families also became close after we both got married and had kids. We still keep in touch.

Today Manolis will be celebrating his 42nd birthday, with a symposium on the meaning of life (!). He asked his friends and colleagues to contribute talks and videos reflecting on that weighty topic.

Here’s a 15-minute video interview that Manolis and I recorded last night, where he asks me to pontificate about the implications of quantum mechanics for consciousness and free will and whether the universe is a computer simulation—and also about, uh, how to balance blogging with work and family.

Also, here’s a 2-minute birthday video that I made for Manolis before I really understood what he wanted. Unlike the first video, this one has no academic content, but it does involve me wearing a cowboy hat and swinging a makeshift “lasso.”

Happy birthday Manolis!

34 Responses to ““Quantum Computing and the Meaning of Life””

  1. fred Says:

    Scott, info or intox?

    https://phys.org/news/2019-03-physicists-reverse-quantum.html

  2. fred Says:

    Life has no meaning, beauty is enough.

  3. Michael Says:

    Scott, you’re obviously not an impartial observer, but what is your take on Pachter’s (rather aggressive, but from an outside view, plausible-seeming) commentary/criticism?

    https://liorpachter.wordpress.com/2019/02/11/nonsense-methods-tend-to-produce-nonsense-results/

  4. Scott Says:

    fred #1: I’ll just copy and paste what I wrote to a journalist earlier today:

    “It’s evident that, if you’re simulating a time-reversible process on your computer, then you can ‘reverse the direction of time’ by simply reversing the direction of your simulation. From a quick look at the paper, I confess that I didn’t understand how this becomes more profound if the simulation is being done on IBM’s quantum computer.”

    In general, though, I have to repeat the admonition that I made in the last comment thread:

    “In the future, PLEASE, no drive-by postings of off-topic papers asking me to respond to them.”
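
    The time-reversal observation in the quoted reply can be made concrete with a toy sketch (my own illustration, not code from the phys.org paper): for any invertible update rule, “reversing the direction of time” just means iterating the inverse map.

```python
# A time-reversible toy dynamics (an arbitrary illustrative choice):
# Arnold's cat map on a discrete torus. Reversing time simply means
# applying the exact inverse map the same number of steps.
N = 101  # grid size, chosen arbitrarily

def cat_map(x, y):
    # forward step: (x, y) -> (2x + y, x + y) mod N (determinant 1, so invertible)
    return (2 * x + y) % N, (x + y) % N

def cat_map_inverse(x, y):
    # exact inverse: (x, y) -> (x - y, -x + 2y) mod N
    return (x - y) % N, (-x + 2 * y) % N

state = (17, 42)
for _ in range(1000):          # run the simulation forward
    state = cat_map(*state)
for _ in range(1000):          # "reverse time": undo every step
    state = cat_map_inverse(*state)
assert state == (17, 42)       # the initial condition is recovered exactly
```

    Nothing about this trick depends on the hardware running the simulation, which is the point of the quote.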

  5. Scott Says:

    Michael #3: Note that Manolis and his coauthors responded to Bray and Pachter at length in this document. I’d encourage anyone who’s interested to read Feizi et al.’s original paper, Bray and Pachter’s various attacks, and Feizi et al.’s response and then draw their own conclusions.

    I haven’t studied the complicated tangle of claims and counterclaims in nearly enough detail to have anything usefully technical to contribute. But I’ll say this:

    Even if we assumed that Bray and Pachter’s criticisms of the network deconvolution algorithm were 100% correct—which is not at all an assumption that I make here—I found Bray and Pachter’s bellicose tone, the wild accusations of “scientific fraud” (which, sort of like the term “genocide,” has the property that if you have to squint to see it, it isn’t there), and the constant sneering, insinuation-laden references to their targets’ awards and honors (even when those targets were students), to be totally outside the bounds of academic discourse.

    I speak from some experience here: just like Pachter has, I’ve spent a lot of time using the platform of a personal blog to criticize papers that I thought were wrong or misleading or silly (or that were being wildly overhyped or misreported by the press or by the authors themselves). But crucially, even with papers that did stuff a hundred times worse than the very worst that Feizi et al. are even accused of—and believe me that I’ve come across many such papers in the areas that I know, including in Science and Nature—it still never crossed my mind to accuse the authors of “fraud,” explicitly go after their careers, etc., rather than just letting my arguments speak for themselves.

    Once I slipped up, and made a single ad hominem comment about Cathy McGeoch (lead author of the “D-Wave machine gets a 3000x speedup” paper). I quickly apologized and I still regret it. Besides that, the one time in 13 years that I think I even came close to Bray and Pachter’s tone, was with the aggressive Bell’s-theorem-denialist Joy Christian. The reader can judge for herself whether Joy Christian and Manolis Kellis inhabit the same moral or intellectual universe. Even there, though, I regret getting drawn into the mud; it would’ve been more effective on my part to keep things professional.

  6. Tamás V Says:

    There is a recent article in Quanta Magazine about attempting to test theories of consciousness, in case anybody is interested. While reading it, the following non-testable theory came to my mind: Can it be that consciousness permeates the whole universe (like air, or water), but its density is much higher at the places Hofstadter called “strange loops” in his book (e.g. human or animal brains)? So when a strange loop is created, it attracts consciousness from the environment (like air flowing into the lungs), and as its density becomes higher and higher than that of the surroundings, the strange loop will seem more and more as if it were a separate conscious entity, like a developing drop of water. The only difference is that this conscious drop never fully detaches. (Sure, I know one counterargument: Alan Turing and Hitler were obviously not connected this way, it’s just impossible, right?) And when the strange loop perishes, the consciousness in it dissolves back into its environment.

  7. fred Says:

    Tamas V #6

    That seems very likely to me (“I Am a Strange Loop” is a pretty interesting read indeed).

    Consciousness has to be present at every level of complexity, in every place. What varies is its content.

    The idea that, as the brain of a fetus is forming (atom by atom, cell by cell, connection by connection), consciousness would suddenly appear once a certain amount of matter or a certain structure is reached… doesn’t seem to make sense.
    If consciousness is for example related to the formation of memories, then pretty much anything, down to single atoms, is conscious.
    It also makes sense once you realize there’s no free will and no such thing as a choice, and that the self is an illusion built by the brain. That is, consciousness is not the source of our thoughts, actions, and decisions; it doesn’t open a magical door through which high-level concepts can reach around and modify the basic rules of causality at the atomic level (whatever we do is just our atoms following the basic rules of physics). Consciousness is simply the space where sensations, emotions, and thoughts (including intent and volition) appear; it’s not the author of its content (the “mechanical” brain and the environment are).

    The idea of universal consciousness also somewhat answers the question “why am I one specific person among trillions upon trillions of possible others in all the places and times in the universe?” (Schrödinger noted that this flagrant asymmetry raises crucial questions). We’re basically not one specific person; we just have the illusion of being a specific person (the ego feeds on the illusion of giving itself all the credit for all the good things that happen to it, when in fact it’s all based on luck and all out of its control).
    If tomorrow you and I were to “swap consciousness,” i.e. I wake up as you and you wake up as me, we wouldn’t even notice, because you would only have access to my memories and I would only have access to yours (even within a single human brain there are probably multiple feelings of identity). And you could imagine that this swapping is constantly happening, every microsecond, between all conscious things… which is to say that consciousness is universal, and its contents are provided by particular loci of matter, separated in time and space.
    This idea that there’s no self, no free will, and that consciousness is universal, can be scary but it can also be a very strong engine for more compassion – it’s in the interest of everyone to lower the amount of suffering in the world.

  8. sohakes Says:

    Tamas V #6 and fred #7

    I have never read I Am a Strange Loop, but it reminded me of David Chalmers’ view. The view that consciousness permeates the universe solves a lot of the problems I have with it. I think Chalmers reached this conclusion to address his hard problem of consciousness.

    Another solution to these problems, I think, is that consciousness is just some kind of complex illusion that we can’t really understand, and that’s why we think things like qualia are special. It sounds weird, since it looks like qualia really are something “different,” but almost all the arguments I’ve seen for qualia and the other properties we attribute to consciousness being special are hand-wavy.

    I guess I should read I am a Strange Loop.

  9. sohakes Says:

    I think the video was really interesting.

    I don’t know much about quantum mechanics (well, I’m trying), but I’m a little curious about the answer to the second question. Doesn’t the “expend much more computational effort” only apply if we think about our computers/quantum computers? Maybe quantum states are easier to simulate for whatever being is simulating us. Or maybe it’s a little more expensive, but it provides them better results. Maybe since it’s expensive they need to do some optimizations, such as ignoring the state of some things until there is a need to use them (which is what I understood from the question). I think this is wrong (it’s probably obvious to someone who knows quantum mechanics), but it didn’t seem like you answered the same question he asked. Maybe you did and I simply didn’t understand the connection, though.

  10. Tamás V Says:

    sohakes #8: Just to clarify, I only borrowed the term “strange loop” from Hofstadter, so don’t get disappointed if you buy the book and find that he draws a very different picture about consciousness than I did. (Still, I recommend reading it, great book.)

    To me, consciousness is a bit like deep learning: everybody knows how to make it, but nobody understands why it works 🙂

  11. Nick Says:

    Hi Scott,

    You don’t need to squint. Feizi et al. repeatedly tell the reader that they are inverting an operation that they are not, in fact, inverting. It’s right there in Figure 1a. The two options at that point are:

    1) They are unaware of very obvious facts about what their own method does
    2) They are misleading their readers

    If you’d like to argue for option 1, please feel free, but personally I credit their intelligence a bit higher than that. As for option 2, while one might make allowances for shading the truth regarding certain details, misleading your readers about the basic nature of your work is fraud. If you have seen work a hundred times more deceptive and refrained from commenting, I’m not sure why that’s a criticism of us rather than of you.

    As for “professionalism”: if a plumber came to your house and discovered that the reason you had water stains appearing on your ceiling was because the previous plumber had sealed joints using Scotch tape, I don’t think they’d feel that professionalism forbade them from saying that the previous plumber had no fucking clue what they were doing and that you should sue. In fact, I think you’d be quite annoyed if they hid that reality from you.

    You’re not the first to suggest that we should have confined our comments to technical matters and I’ve never been able to discern a foundation for that beyond abject narcissism. We’re all quite comfortable with the idea that the world of plumbers contains scammers and crooks. Scientists might like to think of themselves as a higher class of human being but science, like all other human activities, has its bad actors. Why are we expected to pretend otherwise?

    There is one part of your post that I do agree with, however: unfortunately, Manolis Kellis is indeed “known as one of the leaders in applying big data to genomics and gene regulatory networks”. That is exactly why we wrote what we did.

  12. JimV Says:

    Meaning of life: survival and reproduction, with mechanisms for adaptation to improve survival and reproduction. May or may not include computational ability, although computational ability is a valuable mechanism to have.

    Consciousness: ability to sense the external and/or internal environment, compute strategies for responses to that environment, make computations to choose which strategy to follow, and implement that strategy. May or may not be associated with life.

    How it “feels” (to live, sense things, compute things, be conscious): not important, except that it must feel some way (produce some sensation) in order to have effect. A rose might not smell as sweet if it had a different scent, but it would still attract pollinators. The Windows operating system must be able to sense keypresses and mouse clicks. How it feels to Windows does not seem important to me, but if it felt nothing it would not detect them.

    That’s my viewpoint; and yet a group of smarter people feel the need for a seminar on these things. (I hope it is just to get down into the details, which are complex and important.)

  13. fred Says:

    sohakes #8

    the way I think about qualia is that if you consider all the concepts inside our brain, it’s exactly like all the definitions in a dictionary. Every word is defined in terms of others. But when you think about it, there have to be some concepts that can’t be defined in terms of other words, fundamental concepts that sit at the very bottom. Otherwise all the definitions would be circular, which would be strange – how could something perfectly circular get bootstrapped? A bit as if we were to find out that quarks are made out of galaxies.
    So those “rock bottom” concepts are the qualia.
    E.g. the sensation of the color red just can’t be defined in terms of anything else. You just can’t communicate it to someone who is color blind. No physical explanation in terms of wavelengths, cone cells in the eye, etc. can describe the conscious experience of redness. To us, redness is a fundamental mystery.

    Note that this is all from the perspective of our direct experience of things, through what appears in our consciousness, not a metaphysical discussion that tries to explain anything, but just a description of what it is to be alive.

    Saying that consciousness is an illusion makes little sense because it’s really the one thing that can’t be disputed – we can be confused about the nature of physical reality, the nature of matter, whether this is all a simulation, whether we’re just brains in vats,… but the one thing we can’t doubt is the reality of what it is to experience being alive, moment to moment, the “knowing” of all the transient things that appear in the space we call consciousness. We can doubt what they are and where they come from, but their appearances can’t be disputed.

  14. fred Says:

    Given that physical matter is based on deterministic behavior (whether it’s predictable or not doesn’t matter, whether there’s extra “pure” randomness or not doesn’t matter either), the impression of free will and choice are the expressions of the fundamental laws of physics. Our sense of “will” is actually induced by the natural flow of the physical laws.

    We are bound to them, and while it’s all deterministic, they don’t feel necessarily restraining because they can be very “constructive” forces. They are what feels right.
    So the “forces” that push you to take a specific decision are the exact same forces that gathered space dust into stars and solar systems, and assembled basic molecules into DNA.

    There’s really no separation of things at different levels, between atoms and high concepts in the brain – they’re all different faces of the same die, expressions of the same laws of nature, all in harmony.

  15. Scott Says:

    sohakes #9:

      Doesn’t the “expend much more computational effort” only apply if we think about our computers/quantum computers?

    Sure, of course if the “simulating beings” had access to a quantum computer, it would be much much easier for them to simulate QM. But Manolis, as I understood him, was asking me whether I thought QM might be evidence of some sort of approximation being made in a computer simulating our world. I stand by my response, which was that the situation is exactly the reverse: quantum systems could only be harder to simulate than classical systems, in the sense that they contain systems that behave classically as special cases but also contain much more.

    A scalable quantum computer is just the most obvious example of a system that’s harder to simulate because of QM (in the sense that, as far as we know today, efficiently simulating a QC seems to require another QC). Complicated many-body systems in chemistry and condensed-matter physics and nuclear physics provide additional examples. And sure, if you were simulating our universe on a classical computer, you could (and should) resort to approximations for the many places where the world is approximately classical—but then that just means that the running time of your simulation will be completely dominated by those places where the approximations break down.
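
    The exponential blow-up behind this claim can be seen in a few lines (my own toy illustration, not anything from the interview): a classical simulation of n qubits must track 2**n complex amplitudes, so the cost doubles with every added qubit.

```python
import numpy as np

# Toy state-vector simulator (function name and sizes are arbitrary choices).
# Applying a Hadamard to every qubit of |00...0> yields a uniform
# superposition over all 2**n basis states.
def hadamard_all(n):
    # build the n-qubit Hadamard by repeated Kronecker products
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    U = np.array([[1.0]])
    for _ in range(n):
        U = np.kron(U, H)
    return U

for n in (2, 4, 8):
    state = np.zeros(2**n)
    state[0] = 1.0                          # |00...0>
    state = hadamard_all(n) @ state         # uniform superposition
    assert state.shape == (2**n,)           # state size grows exponentially
    assert np.allclose(state, np.full(2**n, 2.0**(-n / 2)))
```

    This is exactly the sense in which generic quantum systems are harder, not easier, to simulate classically than the classical special cases they contain.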

  16. meltem.demir Says:

    In the PECASE photo there are six degrees of separation (no diagonal moves allowed) between Scott and Obama.

  17. Scott Says:

    Nick #11: Your core objection, if I understand it, is that Feizi et al.’s algorithm involves a scaling parameter that needs to be set by hand, and that wasn’t discussed in the main body of the paper but only in
    (1) the Supplementary Material (i.e., appendix) specifically devoted to that subject, and
    (2) the short, publicly-available source code.

    If so, then … “fraud”? WTF?!

    It sounds to me like basically just following the usual conventions of Science and Nature papers (which are different from the usual conventions of CS conference papers). Crucially, it’s not as though the existence of a single tunable parameter renders the results meaningless—that would be the case only if the number of tunable parameters were comparable to the total amount of data to be explained, thereby leading to overfitting (right?).

    I’m sure that there’s much more to be said both for and against this specific algorithm. The trouble is, every word that you and Lior have written about Manolis and his collaborators has been filled with so much rage and invective and personal animus, that much like with Lubos Motl (to take the example that sprang first to my mind), it’s really hard for me to trust your judgment even about the purely scientific aspects. There’s basically one thing that would sway me: namely, if neutral third parties, who I had independent reasons to trust and who were experts in applied network algorithms, told me that this paper was crap. Any suggestions?

  18. Lior Says:

    Hi Scott,

    You noted that “Crucially, it’s not as though the existence of a single tunable parameter renders the results meaningless” and then asked whether in the Kellis paper it is the case that the number of tunable parameters were comparable to the total amount of data to be explained.

    The answer is that Kellis did not have “one tunable parameter”; rather, the latter situation (the number of parameters is larger than the number of data points to be explained) is what obtains. I wrote a whole follow-up blog post (https://liorpachter.wordpress.com/2014/02/18/number-deconvolution/) to explain this in simple terms, but the simplest way to think about it is this:

    Suppose I have generated 1,000 numbers a_1,…a_{1000}, but I don’t show them to you. Instead I show you numbers b_1,…b_{1000} where b_i = c * a_i for each i. Kellis’ defense of his paper is that there is only one tunable parameter, namely c, that is discussed in the supplement. But good luck trying to figure out a_1,…a_{1000} without knowing c. The point here is that the number of parameters is actually 1001 (a_1,…a_{1000} and also c). And the number of datapoints is 1000 (b_1,…b_{1000}). Saying that “c is the only tunable parameter” doesn’t help to guess the a numbers from the b numbers. This is exactly the issue in Kellis’ paper, and what makes it fraudulent is that clearly he (and coauthors) understood this, but they published it anyway.
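
    The analogy can be sketched numerically in a few lines (a minimal illustration of the identifiability point; the sizes and values are arbitrary):

```python
import numpy as np

# Hidden ground truth: 1000 numbers a_i and a scaling parameter c.
rng = np.random.default_rng(0)
a = rng.normal(size=1000)   # the hidden a_1, ..., a_1000
c = 2.5                     # the hidden scaling parameter
b = c * a                   # all the reader ever observes

# Every guess c' "explains" the data perfectly by setting a'_i = b_i / c',
# so the observations cannot distinguish between the candidate values of c.
for c_guess in (0.1, 1.0, 7.3):
    a_guess = b / c_guess
    assert np.allclose(c_guess * a_guess, b)   # zero residual for any guess
# 1001 unknowns (the a_i plus c) vs. 1000 observations: c is unidentifiable.
```
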

    I understand that it is shocking and unbelievable that (a) someone would actually try to publish something like this and (b) that the journal would actually publish it, but here we are.

    There are plenty of third parties who understand the fraud of Kellis’ paper. First, I’ll note that one of them could be you. Just go ahead and read his paper. This is really not quantum science. As Nick said, you don’t need to squint to see this fraud. Then, you could ask to see the reviews of his paper. They have never been published but I’m sure as a friend he will be happy to show you. I know the reviewers had the same concerns because they told me (albeit anonymously, as they were afraid of retaliation). I wish I could tell you that truly impartial 3rd parties had adjudicated the matter by virtue of assessing our claims in the formal context of a journal commentary. I can’t do that because our commentary was rejected by NBT (where Kellis et al. published). Such rejection is commonplace at “high profile” journals, where authors themselves can effectively reject commentaries critical of their work.

    A final note: you have characterized me in your two comments on this blog as targeting Kellis’ (and students) awards and honors, as being filled with rage and invective… but where? Please point to the exact places where that happened. It didn’t happen. I accused him of fraud. And fraud he did commit. Also not just once (see https://liorpachter.wordpress.com/2014/02/12/why-i-read-the-network-nonsense-papers/)

    Lior

  19. mjgeddes Says:

    I agree that meaning of life is tied in with consciousness, but I doubt that consciousness has anything to do with QM. Cognitive science is the place to look probably, not quantum computing 😉 So Axiology (philosophy of value), Rational Choice Theory (philosophy of decision theory) and Psychology (philosophy of mind in general).

    It’s doubtful that there’s any ‘universal’ set of values that applies to all minds, but a weaker form of moral realism might still be sustainable. That is to say:

    *If* mind design X, *then* objective set of values Y

    So *given* the set of minds that care about morality and aesthetics at all (Xs), *then* I think there’s a core set of values that apply all such minds (Ys), and these could reasonably be said to constitute ‘the meaning of life’.

    My wiki-book ‘Axiology’ provides an effective sampling of human values – click here:
    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Axiology

    It’s pretty complex, as you can see (and the non-western values I list are particularly interesting), but some of these values seem to be more fundamental (general) than others.

    I’m tending to think that aesthetic principles are at the base of value systems, since aesthetics can be applied to the internal workings of individual minds. Then ethics and morals (which applies to relations with other minds), come in at a higher level of abstraction.

    What I reduced all these values down to was 3 core principles : {Beauty, Perfection & Liberty}, and if these 3 values are indeed in some sense ‘the building blocks’ from which all the other values are built up, then they could reasonably be considered as ‘the meaning of life’.

  20. James Gallagher Says:

    fred #13

    I like your arguments here, but there is one other thing that can’t be disputed – we do not have memories of conscious events as embryos or young babies. It seems our brains require a year or two for the memories to start forming in a stable fashion and for our “consciousness” of the world to become something that actually survives falling asleep at night.

    I wonder if it is just the feedback loop of our brain making memories of the sensory data incoming that is “all that consciousness is”. Have you ever pointed an HD USB webcam at your screen? It struggles to create the image at the centre where it has a feedback loop. Maybe all that screaming feedback from rock bands since back in the 1960s (Beatles first to record it on I Feel Fine intro I think?) was an early simple example which we should have taken more seriously as scientifically interesting?

  21. fred Says:

    James #13

    If consciousness is indeed linked to the formation of memories in the present moment, it is by definition, at first, short term memories.
    People who practice meditation/mindfulness do notice that it’s very hard to direct attention to a certain apparition in consciousness for more than a minute or so, because the brain is mostly looking for changes in sensations and steady state sensations tend to be filtered out over time. The brain also loves getting lost in obsessive thoughts, replaying something that has bothered you that day, or role-playing possible scenarios of what’s going to happen in the near future (this is all borderline psychotic, imagine if you were to say out loud all the thoughts that are constantly floating up into your consciousness).
    The brain doesn’t need to commit new storage to the same thing over and over; instead it can just invoke the memory. So short-term memories are committed to long-term storage as the brain replays them over and over, and we relive them as vivid apparitions in consciousness (distracting us from forming new memories/experiencing the present moment). That’s why old people tend to remember the oldest events rather than recent ones: they’ve been reminiscing on those old events over and over throughout their lives, and those tend to be either the very happy or the very painful ones (my grandma could clearly recall WW1 and WW2 events, but would constantly misplace objects).

    I’m pretty sure that babies have very clear memories of recent events, but everything is new and amazing to them, as if their brain is in some psychedelic state 24/7, so only a relatively small portion of that flood of sensations can be committed to long term memory storage – they really don’t have the time to reminisce much since there’s a constant flow of fresh experiences coming their way. Also it’s probably easier to commit long term memories once we master spoken language, around the age of 3 – and I do remember clearly some “traumatic” events (for a toddler) that happened to me when I was a 3 year old. Some people claim they have earlier memories than this.
    There are also suggestions that every memory formed never really gets lost, people with photographic memories can recall every experience, and for normal people, lots of things can be recalled through hypnosis (once I did use self-hypnosis to remember clearly my classroom and classmates when I was 5 or 6 years old, it was an amazing experience).
    On the opposite side, it’s also the case that memories can be lost through brain trauma or chemical interaction (amnesia after anesthesia or alcohol).

    Whatever we remember has always appeared in our consciousness at some point, but we can never remember things that never appeared in our consciousness – unless we’re talking about memory implants (like those Blade Runner replicants being confused about what memory is real or not) or incorrect recollection of events, where different memories getting mashed up or distorted – once I had a seizure and my consciousness got flooded with made-up experiences that felt both entirely new and familiar at the same time, it was very strange (there’s a way to recreate this by describing your dreams as soon as you wake up and recording yourself, you will then forget if you wait long enough, then later you listen to your own accounts, and you will have this very odd feeling of remembering something that’s both strange and familiar at the same time).

  22. fred Says:

    James #20 (I meant #20 in my previous post too).

    That sort of feedback loop being the basis of consciousness is precisely what Hofstadter posits in his book “I Am a Strange Loop”.

    Another interesting question is whether consciousness plays any active role or not.

    If all the brain activity is purely based on electro-chemical interactions (atoms just doing what atoms do), then why is there even consciousness? (the classical zombie argument)

    Some say that consciousness is purely a side-effect with no importance, in the same way that the shadow of a blind machine is definitely a thing, but it can’t influence the machine.

    But it is the case that we are all actively talking about consciousness in this thread, therefore it has some clear second order effect on the world, in the same way that the shadow of a machine that has vision can influence the machine as soon as its own shadow enters its field of vision.
    But this doesn’t mean that consciousness is some independent thing (the shadow of the machine is not an independent thing), it could simply be the flip side of the electro-chemical interactions in the brain (the shadow of the machine can be derived entirely by the current state of the machine).

  23. James Gallagher Says:

    fred #21

    I expected you to come back with something like “where are your qualia in the feedback loop?”

    I have no idea either.

  24. Scott Says:

    Lior #18: Thanks for the substantive reply, but I confess that I’m not understanding something even on the level of basic logic. Explaining 1000 data points using 1000 tunable parameters would be “overfitting,” just as much as explaining them using 1001 tunable parameters would be. So if what you say were right, then the one extra scaling parameter would be completely irrelevant to your critique … but if so, then why did your original posts harp on it so much?

    As for targeting Kellis’ and his students’ awards and honors, and writing posts filled with rage and invective, it’s hard even to know where to start, so let me focus on just one example for brevity. In this post, seeking to explain the origin of your strange beef with Manolis, you attack an earlier paper by him and Joshua Grochow (who was then a master’s student, and is now a complexity theorist who I know well)—basically accusing them of not understanding Erdős–Rényi phenomena in random graphs, and outing yourself as a reviewer who tried to get their paper rejected from RECOMB but was overruled. After pages of attacks, you write:

      Joshua Grochow went on to win the MIT Charles and Jennifer Johnson Outstanding M. Eng. Thesis Award for his RECOMB work on network motif discovery.

    In context, you’re obviously insinuating that the award was not only undeserved, but further evidence of a corrupt academic system.

    Now I’ve been accused of “nastiness” more than once, for pointed criticisms on this blog of papers that I disagreed with (as I bet you have as well). And yet still, I find it hard to imagine the state of mind that would cause me to go after a then-master’s student the way you did above. As my mouse cursor hovered over the “Publish” button, I’d be asking myself: how sure am I that my critique isn’t mistaken? 99%? 99.9%?

    In this instance, it looks to me like your critique was mistaken. For starters, Grochow is a pure mathematician at heart, not a comp bio person, so it beggared belief to me that he could undertake such a project without ever thinking to ask himself about the properties of random graphs—that he’d basically get back your referee report, slap himself on the forehead, say “gosh, how could it not have occurred to me that some substructures will be in a network purely for combinatorial reasons?,” and then tack on a section about that without crediting the referee. And in their response to you, Grochow and Kellis write that … well, let me just quote them.

      what Pachter claims was ‘gleaned’ from his report in the revised version of our paper was in fact included in Joshua Grochow’s thesis which was submitted on August 10, 2006 to MIT, 1 month before we even submitted the paper to RECOMB and 4 months before we received Pachter’s review. Months before Pachter’s review of our paper, Josh’s thesis wrote: “the frequency of the motif is due to choosing 3 nodes from the large independent set on the left and four from the large clique at the bottom of the figure,” and includes the exact same equation “(12 choose 3)(9 choose 4)=27,720” (Figure 3-5 legend on page 50), while the number or the equation didn’t even appear in Pachter’s critique …
      In any case, it should be clear to the reader that we knew of these ideas prior to receiving any reviews. Moreover, the suggestions that somehow we had not recognized the combinatorial nature of these examples, that it was news to us, that we had to somehow deal with them, and that his review was somehow a revelation to us are clearly refuted by the discussion in Section 3.4.1 of Joshua Grochow’s thesis, finalized and published months before we received any reviews and one month before we even submitted our paper to RECOMB.
  25. Lior Says:

    Scott,

    You seem to understand the Kellis paper perfectly well. Exactly: the whole thing makes no sense, and yet that is what Kellis is doing, while hiding that fact from the reader. The “hiding” consists of pretending, in Figure 1 and in the main text of his paper, that there is some neat and tidy mathematical formulation of a problem, one that leads to some “closed-form solution”, when in fact the whole thing is incoherent. Apparently I cannot explain the incoherence to you (even by analogy), because your reaction is: that’s incoherent!

    Regarding the RECOMB paper, again, you write “so it beggared belief to me that he could undertake such a project without ever thinking to ask himself about the properties of random graphs” and yet… that’s what happened. It’s not surprising to me that Grochow, an excellent theorist, would stumble in an applied setting; this happens to theorists all the time. The problem is the authors’ response to my pointing it out in review, and whether it was Kellis or Grochow who was dishonest in the revision I don’t know.

    I certainly admit that on occasion I can be wrong, and have sometimes been wrong in my work and in my career. Yet what you are effectively suggesting in your defense of Kellis is that one should never speak up in the face of misconduct. For you, it seems, 99.9% certainty is still not enough to stand up and say “the emperor has no clothes”. I observed, over the course of many years, multiple situations where Kellis was shockingly dishonest and fraudulent. The reason he’s been able to survive and thrive in academia for so long is, precisely, as you write, because of a corrupt academic system. One where a relationship and friendship built together during 9 years at MIT is a prior that dominates the science.

  26. Anja Says:

    Scott #24

    Their response seems to imply that they knew something with great explanatory power to their results that they did not put in the paper on purpose. This sounds like very bad practice to me. Also how is it a defense that this result is included in some thesis? The reviewer has to judge the paper. How do you conclude from this response that the critique of the paper is mistaken? Maybe I’m missing some important point here.

  27. Scott Says:

    Anja #26: In Grochow and Kellis’s reply, their argument is precisely that some substructures occur for purely combinatorial reasons, and others for more network-dependent reasons, and that their algorithm is helpful in distinguishing which are which. I agree with you, it sounds like this should have been discussed in the original submission, and that Lior was right to call attention to the issue in his referee report. But Lior has now gone much further—claiming that it invalidates or renders trivial the whole project (rather than just being one issue to understand among others); that Grochow and Kellis weren’t aware of the problem until seeing his referee report; and that they then dishonestly tried to paper over it in response to his report (rather than just filling in another issue that they’d thought about but hadn’t had time or space or whatever to discuss in the original submission—i.e., the sort of thing that happens all the time when papers get revised).

    This is actually a perfect example of the uncharitability that seems to characterize Lior’s thinking—leaping from a valid technical observation to a grand allegation about the authors being “fraudulent.” Sometimes these things are subjective or hard to judge, but in this case, we happen to have Grochow’s masters thesis, which looks like it sharply contradicts the narrative Lior constructed.

  28. Scott Says:

    Lior #25: I was genuinely open to learning something from you in this thread—even something that might reveal that my friends, Manolis and Josh, had made serious errors in their papers or engaged in bad research practices. (Which is obviously possible—we all screw up sometimes, and my being friends with someone doesn’t mean that I’ve checked their whole research oeuvre, especially if it’s in a field that I don’t know like comp bio!)

    But in your reply, you’ve just completely refused to answer my questions. If the entire Feizi et al. algorithm is incoherent, then why didn’t you explain that in your original post, rather than aiming your fire at a single scaling parameter that turns out (in your current account) to be basically irrelevant anyway? Also, why didn’t you address/acknowledge what looks like clear proof that Grochow and Kellis had thought about the combinatorial properties of random graphs months before seeing your review, rather than just repeating your original account of what happened?

    As for my own willingness to publicly call out what I’ve seen as bad research—even when doing so might create personal friction with my colleagues in quantum information—well, I’ll let the record of this blog speak for itself. It’s true that I can’t remember ever using my platform to go after a student’s Masters project, but maybe even that would change were the case sufficiently serious.

  29. Lior Says:

    Scott,

    I *did* explain how the Kellis network deconvolution paper was incoherent in my original blog post on the matter. It’s straightforward and I’m happy to repeat it here:

    1. Suppose one observes a matrix G_obs = \sum_{i=1}^{\infty} G_dir^i, where G_dir is a matrix whose eigenvalues are between -1 and 1. The eigenvalue restriction is required to make sure the sum makes sense, i.e. converges. It is straightforward to see that G_dir = G_obs(I+G_obs)^{-1}. Figure 1 of the Kellis paper makes this observation. First I’ll note that this “result” (hard to call it that; this is basic first-course linear algebra) is hardly a discovery of any sort. But one might wonder whether this formula is useful in practice to make statistical inferences in the context of some model for something. Reading the start of the Kellis paper you might think it is… they write “We formulate the problem [recognizing direct relationships between variables connected in a network] as the inverse of network convolution, and introduce an algorithm that removes the combined effect of all indirect paths of arbitrary length in a closed-form solution by exploiting eigendecomposition and infinite-series sums.” I’ll note that “exploiting eigendecomposition and infinite-series sums” is just the matrix inversion (I+G_obs)^{-1}, which frankly an MIT graduate ought to just call “matrix inversion”, but that is certainly not fraud, just embarrassment.
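    To see this identity concretely, here is a minimal numpy check (my own illustration, not code from the paper): build a G_dir with small eigenvalues, sum the geometric series in closed form, and recover G_dir by the inversion formula.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Build a symmetric G_dir whose eigenvalues lie strictly inside (-1, 1),
    # so the series G_obs = sum_{i>=1} G_dir^i converges.
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2
    G_dir = A / (2.0 * np.abs(np.linalg.eigvalsh(A)).max())  # spectral radius 0.5

    I = np.eye(5)
    # Closed form of the geometric series: G_obs = G_dir (I - G_dir)^{-1}
    G_obs = G_dir @ np.linalg.inv(I - G_dir)

    # The inversion formula: G_dir = G_obs (I + G_obs)^{-1}
    G_rec = G_obs @ np.linalg.inv(I + G_obs)

    assert np.allclose(G_rec, G_dir)
    ```

    As advertised, this is nothing beyond first-course linear algebra: I + G_obs = (I - G_dir)^{-1}, so multiplying G_obs by its inverse peels off the series.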

    2. What is actually done in the Kellis paper is code and results based on the following: given a matrix G_obs = \gamma * \sum_{i=1}^{\infty} G_dir^i for some matrix G_dir with eigenvalues between -1 and 1, recover G_dir by first dividing G_obs by \gamma and then applying the matrix inversion formula from #1. The number of observations is the number of elements in G_obs = n^2, and the number of unknowns is the number of elements in G_dir = n^2, plus \gamma to be figured out.

    #2 is completely incoherent. There is no way to set \gamma. There is no way to learn it from the data (which is nothing more than the matrix G_obs). The whole thing makes no sense, just like my analogy (which you rightly dismissed as incoherent). The authors claimed in the original version of the paper that it didn’t matter what it was set to (which is incoherent). Then they claimed that it should be set to be near one (which is incoherent). Then they set it to 0.5 for one dataset.
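    To make the non-identifiability concrete, here is a small numpy sketch (my own construction, assuming the forward model G_obs = \gamma * \sum_i G_dir^i as described above): every choice of \gamma reproduces the observed matrix exactly, yet each yields a different “direct” matrix, so nothing in the data can pin \gamma down.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    I = np.eye(n)

    # A fixed "observed" matrix, positive semidefinite so the series below converges.
    B = rng.standard_normal((n, n))
    G_obs = B @ B.T / n

    def candidate_direct(G_obs, gamma):
        # Recipe from #2: divide by gamma, then apply the inversion from #1,
        # i.e. G_dir = H (I + H)^{-1} with H = G_obs / gamma.
        H = G_obs / gamma
        return H @ np.linalg.inv(I + H)

    def regenerate_observed(G_dir, gamma):
        # Forward model: G_obs = gamma * sum_{i>=1} G_dir^i = gamma * G_dir (I - G_dir)^{-1}
        return gamma * (G_dir @ np.linalg.inv(I - G_dir))

    for gamma in (0.5, 1.0, 2.0, 10.0):
        D = candidate_direct(G_obs, gamma)
        # Every choice of gamma reproduces G_obs exactly...
        assert np.allclose(regenerate_observed(D, gamma), G_obs)

    # ...yet the recovered "direct" matrices differ: gamma is unidentifiable from G_obs.
    assert not np.allclose(candidate_direct(G_obs, 0.5), candidate_direct(G_obs, 2.0))
    ```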

    And yes, as you point out, #1 is incoherent to begin with. In practice the way people actually recover a G_dir from G_obs is to use some regularization, e.g. regularized partial correlation, and in that way reduce the number of parameters being estimated from n^2 to something much smaller.

    The fraud is not in the incompetence. It is the fact that the authors try to pass off #1 as some neat and tidy “closed-form” solution for some problem (where you might imagine there are some details like regularization that need to be taken care of and those are in the supplement) but actually then do #2, which is literally senseless. That is why in the blog post we had to explain the parameter \gamma: it is part of understanding what they actually did. BTW they didn’t even do #2; there are a bunch of other heuristics thrown in just to get something out. It’s not honest to write a paper pretending it is one thing when in reality it is something completely different. I’ll also add that I would have had no problem with them writing up a heuristic with some constants they set arbitrarily if they could show it works well; sometimes methods like that can be useful (although rarely). But they didn’t even show the method works well, and that is the content of my most recent blog post, the one that was posted here originally. In a recent 3rd-party benchmark the heuristic performed worse than not doing anything.

    I can tell you that when we wrote the blog post we worked very hard to explain the incoherence, and the fraud in the most straightforward way possible. But neither writing the post, nor posting it, was easy. Still, I think it was meaningful and worthwhile. Sometimes doing the meaningful things in life is not the same as doing the convenient or easy things.

  30. fred Says:

    James #23

    Feedback systems do take inputs 😛
    .
    .
    .
    There is some interesting evidence of the existence of multiple loci of consciousness within a single mind (at least two…).
    Experiments were conducted on people who had the connection between their two brain hemispheres severed (to treat severe epilepsy), and it goes something like this:

    The left hemisphere is usually the predominant seat of language and writing.
    They present a random object (e.g. an egg) only to the subject’s left visual field (which projects to the right hemisphere) and ask the subject what they saw; the subject says “nothing”, because the verbal left hemisphere saw nothing.
    Then they present the subject with a collection of objects and ask them to (silently) pick out the object they previously saw; the subject picks up the egg (presumably under the control of the right hemisphere, which did see it).
    They then ask the subject to say why they picked the egg, and the left hemisphere always comes up with some story to rationalize the choice, something like “hmm… I think because I had an omelette the other day”.
    This seems to suggest that the two hemispheres work independently but still try to maintain a coherent world story as much as they can, whenever they can share information.

  31. Sniffnoy Says:

    OK, I’m confused. It looks to me like G_obs = \gamma * \sum_{i=1}^{\infty} G_dir^i has only one tunable parameter, not n^2+1. You have G_obs, you set gamma, and that determines the n^2 entries of G_dir. You don’t get to set the entries of G_dir independently of one another; the space of possibilities for G_dir is still one-dimensional, as determined by gamma. What am I missing?

  32. James Gallagher Says:

    Hi fred,

    I didn’t see your post #22 when I replied to #21, so sorry if it appears like I was ignoring your very relevant and interesting points there.

    Oliver Sacks’ brilliant book “The Man Who Mistook His Wife For A Hat” is required reading for anyone interested in the science of the human brain.

    But personally, for me, a big impact, apart from seeing my baby daughter develop, was a brief period in Manchester, England, in a hospital ward, where, for a few days, I met a man who had approximately 20 seconds of memory, and relived those 20 seconds over and over again with a little riddle about how to spell Piccadilly Station. It was such a peculiar situation: everyone treated this man normally, and he was able to eat meals, but he never remembered anything after about 20 seconds and restarted the Piccadilly-spelling ritual over and over again.

  33. fred Says:

    When it comes to the topic of computing and consciousness, I’ve been wondering how we would design the memory system for a general AI, in a way that’s similar to humans (assuming there is such a thing as “designing it”).

    So, when we ask our AI to “remember” one of its experiences, how would this work?
    We could certainly record perfectly the totality of the stimuli (sound, vision, touch) that feed into it.
    One way for it to remember an event would then be to replace the current inputs with a sequence of stored inputs, so it would be as if the AI were in a perfect virtual reality first person movie for the duration of the memory (I call it a movie because the actions have to be replayed in the same way).
    But is this “remembering”? The AI would still have its current internal state (with possibly internal inputs such as emotions).
    On the opposite side of the scale, we could also include the internal state of the AI as part of the memories, and replaying the memory would just reset both inputs and internal state to what they were at a point in the past, and let them rerun exactly as they happened. But once the sequence is over, we’d have to somehow restore the AI’s internal state to what it was before the memory was played, and the AI would have no memory of remembering… (it would feel as if it had been unconscious for a while).
    What’s not clear is how to be somewhere in the middle, like humans are… we can experience memories from an emotional standpoint while still being an outside observer to them.
    Or maybe it’s not that complicated? It’s about recreating the memory from a third-person point of view, just like a movie we’re familiar with and that we care about, and because of empathy (with itself) the AI will relive fresh new emotions similar to the ones first experienced, just a bit duller. We can get very upset when replaying some recent event, like a road-rage incident from 10 minutes ago, but after a day or two we just won’t feel the same intensity. Maybe this “dulling” of memories happens because the current state we’re in when we remember a memory alters that memory in turn, like some kind of extra metadata attached to it: an interpretation added with each new replaying of the data (a bit like the review of the review of the review of a movie).

  34. mjgeddes Says:

    fred,

    The notion of ‘universal consciousness’ doesn’t seem to have any explanatory power, and, as you yourself point out, it would seem to reduce consciousness to a mere epiphenomenon. As for Hofstadter’s ‘strange loop’, it just seems like a buzzword with a very unclear meaning. But at least these are ideas ‘outside the box’.

    The most popular current theories of consciousness are hopeless in my view. Integrated Information and Global Workspace might well be interesting functional features of the brain, but I fail to see why they should have much to do with consciousness directly.

    I tend to favour HOT (Higher-Order Theory), which has gone out of fashion, but basically, the notion of consciousness as ‘thoughts about thoughts’ (higher-order representations) seems by far the most sensible.

    Consciousness, I’m now convinced, is the brain’s ‘model checker’, the ‘formal verification system’ of the brain. The brain forms a model of its own operations, and checks this model against what’s actually happening. If I’m right, then all the comp-sci logic used for model checking should in principle apply to an explanation of consciousness: that’s temporal logic and branching-time logic. As I said in earlier threads, TPTA (Temporal Perception and Temporal Action): consciousness is a symbolic language for modeling the flow of time, i.e., it’s the brain’s ‘model checking’ system.

    See:
    https://en.wikipedia.org/wiki/Model_checking
    https://en.wikipedia.org/wiki/Formal_verification
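
    To make concrete what “model checking” means here (a toy example of my own, nothing to do with the brain specifically): one exhaustively checks a temporal property over all reachable states of a transition system.

    ```python
    # Toy transition system: a counter that cycles 0 -> 1 -> 2 -> 0.
    transitions = {0: [1], 1: [2], 2: [0]}
    initial = 0

    def holds_globally(prop):
        """Check the invariant AG prop: prop holds in every reachable state."""
        seen, stack = set(), [initial]
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            if not prop(s):
                return False          # counterexample state found
            stack.extend(transitions[s])
        return True

    assert holds_globally(lambda s: s < 3)        # invariant: counter stays below 3
    assert not holds_globally(lambda s: s != 2)   # state 2 is reachable, so this fails
    ```

    Real model checkers verify richer temporal-logic properties (CTL, LTL) than this simple invariant, but the core idea is the same: a model of the system, checked against what actually happens.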