Superposition your mouse over these five exciting QC links!
(1) My TEDx talk from Dresden, entitled “What Quantum Computing Isn’t,” is finally up on YouTube. For regular Shtetl-Optimized readers, there’s unlikely to be much that’s new here: it’s basically 15 minutes of my usual spiel, packaged for mass consumption. But while it went over well with the live audience, right now the only comment on the video is—I quote—“uuuuuuuuuuuuuuu,” from user “imbatman8472.” So if you feel so inclined, go over there, watch it, and try to start a more contentful discussion! Thanks so much to Andrés Goens, and everyone else in Dresden, for inviting me there and hosting a great visit.
(2) On December 4-6, there’s going to be a new conference in Mountain View, called Q2B (Quantum Computing for Business). There, if it interests you, you can hear about the embryonic QC industry, from some of the major players at Google, IBM, Microsoft, academia, and government, as well as some of the QC startups (like IonQ) that have blossomed over the last few years. Oh yes, and D-Wave. The keynote speaker will be John Preskill; Google’s John Martinis and IBM’s Jerry Chow will also be giving talks. I regret that another commitment will prevent me from attending myself, but I hope to attend next year’s iteration. (Full disclosure: I’m a scientific adviser to QC Ware, the firm that’s organizing the conference.)
(3) On October 24, the House Science Committee heard three hours of testimony—you can watch it all here—about the need for quantum information research and the danger of the US falling behind China. In what I believe is my first entry in the Congressional record, I’m quoted (for something totally incidental) at 1:09. John Preskill was mostly just delighted that the witness, Jim Kurose, referred to me as a “physicist.”
(4) For several years, people have been asking me whether Bitcoin is resistant against quantum attack. Now there’s finally an expert analysis, by Aggarwal et al., that looks into exactly that question. Two-sentence summary: the proof-of-work is probably fine, although Grover’s algorithm can of course be used against it, which might eventually necessitate adjusting the difficulty parameter to account for that, and/or migrating from a pure preimage search task to collision-finding, where my result with Yaoyun Shi showed that quantum computers offer “only” an n^{2/3} black-box speedup over classical computers, rather than a square-root speedup. The scheme for signing the transactions, which is currently based on elliptic curve cryptography, is the real danger point, but again one could address that by migrating to a post-quantum signature scheme. My main comment about the matter is that, if I’d invested in Bitcoin when I first learned about it, I’d be rich now.
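To make the query-complexity comparison concrete, here is a toy classical sketch of my own (nothing from the post, using a deliberately tiny 16-bit truncated hash): brute-force preimage search on a b-bit hash takes ~2^b evaluations, which Grover would cut to ~2^(b/2); meanwhile a classical birthday attack already finds collisions in ~2^(b/2), and quantumly collisions take Θ(2^(b/3)), i.e. roughly the 2/3 power of the classical cost.

```python
import hashlib
from itertools import count

BITS = 16  # deliberately tiny, so the brute-force searches finish instantly

def toy_hash(x: int, bits: int = BITS) -> int:
    """Truncated SHA-256, standing in for a proof-of-work hash with range 2^bits."""
    digest = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

def preimage_search(target: int) -> tuple[int, int]:
    """Brute force: ~2^BITS evaluations in expectation (Grover: ~2^(BITS/2))."""
    for queries, x in enumerate(count(), start=1):
        if toy_hash(x) == target:
            return x, queries

def birthday_collision() -> tuple[int, int, int]:
    """Classical birthday attack: ~2^(BITS/2) evaluations (quantum: ~2^(BITS/3))."""
    seen: dict[int, int] = {}
    for queries, x in enumerate(count(), start=1):
        v = toy_hash(x)
        if v in seen:
            return seen[v], x, queries  # two distinct inputs with the same hash
        seen[v] = x

x, preimage_queries = preimage_search(toy_hash(123456))
a, b, collision_queries = birthday_collision()
# Typically preimage_queries is on the order of 2^16, while collision_queries
# is on the order of 2^8: the square-root birthday saving.
```

The same asymmetry is why a proof-of-work based on preimage search loses a full square root to Grover, while one based on collision-finding loses only a 2/3 power.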
(5) In the first significant victory for my plan to spend a whole sabbatical year just writing up unwritten papers, I’ve got a new paper out today: Shadow Tomography of Quantum States. Comments extremely welcome!
Comment #1 November 3rd, 2017 at 9:07 am
Are you suggesting we should all pool money together to buy a D-Wave machine to do bitcoin mining?!
Comment #2 November 3rd, 2017 at 9:15 am
Scott, I think you could consider a parallel career in stand-up comedy.
Comment #3 November 3rd, 2017 at 9:26 am
fred #1: No.
Comment #4 November 3rd, 2017 at 9:52 am
fred #2: But don’t I already effectively have that? A comedy career that’s so “parallel,” it’s literally built around the QC = exponential parallelism fallacy? That’s, like, the market I’ve cornered.
Comment #5 November 3rd, 2017 at 10:05 am
We are also holding our ThinkQ conference at IBM on Dec 6-8, 2017, focused on near-term quantum computing.
https://www.research.ibm.com/ibm-q/thinkq/
Comment #6 November 3rd, 2017 at 10:41 am
David #5: Thanks! I also regret that I won’t be able to make that one.
Comment #7 November 3rd, 2017 at 12:24 pm
Here is the link to 1:09 in the congressional testimony. (Just hit Share on YouTube and it offers a chance to set the time.)
Kurose is quoting you from Scientific American, which already called you a physicist.
Comment #8 November 3rd, 2017 at 12:40 pm
A quick comment on the intro to Shadow Tomography (which I’m looking forward to reading carefully): it’s not _so_ easy to show that ~D^2 copies are needed to learn a D-dimensional state. Yeah, there are D^2-1 free real parameters, but… well, that’s not a complete proof. The lower bound in the Haah et al. paper is not completely trivial, I don’t think. And if you are willing to learn rho in Hilbert-Schmidt distance, you can do it with O(1) copies…
Comment #9 November 3rd, 2017 at 1:14 pm
Ryan #8: Thanks! But can’t we appeal to Holevo’s Theorem, at least to get a lower bound of Ω(D^2/log D)? If tomography were possible with fewer copies than that—at least in an entrywise norm—then wouldn’t that give Alice a way to send Bob n bits by sending him fewer than n qubits? If you wanted a lower bound that talked about trace distance, then I guess you’d also need a set of mixed fingerprint states of size ~exp(Ω(D^2)), but I think Andreas Winter or someone constructed such a set a while ago…
Comment #10 November 3rd, 2017 at 2:02 pm
Hi Scott,
Is there a reason why your paper is on ECCC but not on the arXiv? Same for your P=?NP paper. Do you see any disadvantage to uploading your paper to both ECCC and the arXiv? I ask because I think more people read the arXiv than ECCC, and I think it’d be a pity if they don’t come across your papers.
Comment #11 November 3rd, 2017 at 2:14 pm
John #10: I also uploaded to the arXiv this morning, but it’s the weekend, so it will take a bit more time to appear.
Comment #12 November 3rd, 2017 at 2:58 pm
Glad to hear that Aggarwal et al. say, more or less, the same as what I shared with the QC Meet-up group here in Toronto a while back.
Enjoy your sabbatical; may it be productive!
Comment #13 November 3rd, 2017 at 4:22 pm
stupid remark about your paper: it seems that the only hyperlinks that work correctly are those that point to your papers. How did you achieve that?
Comment #14 November 3rd, 2017 at 5:53 pm
In some sense being a quantum computing researcher makes you a theoretical physicist
Comment #15 November 3rd, 2017 at 10:51 pm
Anthony #13: I just tried and was unable to reproduce the problem. For me, all the hyperlinks point to the bibliography, while there are no hyperlinks from the bibliography to (e.g.) the arXiv.
Comment #16 November 4th, 2017 at 6:02 am
Scott #3, Matt Parker made a living as a “stand-up mathematician” so maybe you can be a stand-up theoretical computer scientist.
Comment #17 November 4th, 2017 at 8:18 am
Scott, I love you and you’re an incredible blogger, but please for the love of God learn not to ‘uh’.
Just, if you notice that you’re getting ahead of yourself and you’re about to start ‘umm’ing and ‘uhh’ing, pause for a moment and plan the rest of your sentence. It takes practice, but it’s not that hard to catch yourself doing it; it took me maybe a year to get good at it but now I can catch pretty much 80% of the ‘uh’s before I say them.
Comment #18 November 4th, 2017 at 9:58 am
Eldritch #17: I have a much better idea. This is the last time that I ever agree to have a talk posted on YouTube.
It’s for exactly the same reason why I don’t have a Twitter account, and never will: I refuse to use my brief time on earth to participate in forms of social media that remind me of high school.
FWIW, my speaking style seems to work fine in person—the TEDx organizers told me that when they polled attendees afterward, mine was the highest-rated talk of the conference. But it doesn’t translate to video—possibly because, when people do something that’s like watching TV, they’re primed to expect a level of fluency that I don’t have and will never have.
Comment #19 November 4th, 2017 at 12:23 pm
Great talk! BTW, your uh-rate seemed significantly lower than usual (impressively so, says one with the same tendency), and it didn’t distract once from the content of the talk. And please keep your videos on YouTube 🙂 it’s a great channel for bringing knowledge to laypersons.
Comment #20 November 4th, 2017 at 2:08 pm
Scott #18,
Eh, I’ve seen you talk both in person and on video and they both seem fine.
Also, even if obnoxious people are going to comment, the rest of us benefit from your talks being on Youtube; many of us would strongly like that to continue. If the comments are an issue, you can ask that the comments be turned off for your videos.
Comment #21 November 4th, 2017 at 4:19 pm
Please keep your talks available! It was a great talk and the audience clearly agreed given the huge applause at the end. Sorry if the comments above triggered you to bad memories of high school. Know that your talks are very much appreciated and admired.
Comment #22 November 4th, 2017 at 8:31 pm
Hey, you’ve really hit the big time with a TED talk. Congrats.
Your latest paper sounds very intriguing; congrats. I’ve sprinkled references to it around a little on StackExchange. I can’t quite follow it, but a similar idea has been running through my head, a bit haunting: we believe that quantum computers are exploring a large “state space,” and that this is crucial for realizing their capabilities. But how do we measure how effectively the machines are *really* exploring this large state space?
For example, suppose that there is some hidden-variable theory of QM (not yet devised or noticed). Would that cause some kind of bias in the state-space sampling that could be measured? This question also relates to the recent IBM results that allowed simulation of nearly 50 qubits on classical computers, using a new “more efficient” algorithm.
One of my ideas to test this is as follows: maybe create random qubit gate arrays and see if there is any unexpected bias. Also try compression algorithms on the measurement results, such as SVD, to see if there is lower “information content/spread” than expected, i.e. some kind of multidimensional clustering of results/measurements.
(I’d actually like to work on code to test this “biased state sampling” idea, but of course quantum computing is not my day job, and I haven’t met any generous angel investors in a personal minimum basic income to free me up for arbitrary volunteer work.)
While on the subject, here are some recent links/essays others might find amusing:
Killing the Copenhagen interpretation via fluid dynamics
https://vzn1.wordpress.com/2017/09/08/latest-on-killing-copenhagen-interpretation-via-fluid-dynamics/
QM computing highlights, summer 2017
https://vzn1.wordpress.com/2017/07/17/qm-computing-summer-2017-update/
Comment #23 November 5th, 2017 at 1:38 am
vzn #22: This was my second TEDx talk; the first was in 2011. (That one also went over really well in person—Stephen Hawking was at that one, and apparently he loved it—and also led to a YouTube video full of snide comments about my speech patterns and mannerisms. Maybe I should start using a voice synthesizer like Hawking’s.)
The ultimate test of a quantum computer is of course whether it actually works to solve a hard problem—ideally, a hard problem whose answer you can check yourself, either because the problem is in NP (like factoring), or because it can be compared against nature (quantum simulation). Such a QC would still only be “exploring a tiny corner of Hilbert space” (namely, that which is accessible to polynomial-size quantum circuits), but probably as much of it as we ever can explore. At that point, in particular, it becomes really hard to imagine a hidden variable theory that would get rid of the huge Hilbert space: any such theory would presumably imply BPP=BQP. So at that point, I think the QC engineers would be justified to ask, “how much more proof of the reality of exponentially-large Hilbert space did you want?” 🙂
Comment #24 November 5th, 2017 at 6:52 am
Scott, from your #18 above, you likely don’t have a YouTube account to reply to my comment on your video:
“Perhaps we could agree to call it “Hilbert Computation”, given that it’s all about Hilbert spaces and the non-commutative algebras of operators that act upon them, not specially about woo-oo quantum theory? One has Hilbert spaces and non-commutative algebras of operators in classical signal analysis, for example, because of the omnipresence of Fourier analysis, so that could equally well be a model of the Hilbert space mathematics. Hilbert computation perhaps should be taken to imply that we use mostly or only projective operators, so it’s a particular restriction away from signal analysis in full generality, but hey. But then, QFT=signal analysis —modulation of the vacuum state and measurement of those modulated states— is my thing.”
Perhaps you just don’t have a reply to the question?
There is an implicit question here that I’d like to tease out (this is more about what quantum computing *is* than about what it is not, but hey): can any physical system that is well-modeled by Hilbert spaces and by a non-commutative algebra of operators acting upon them be as good for general computation purposes as a specifically “quantum” system (in which case we would do well not to call it quantum computation, even if “Hilbert computation” might seem a bridge too far), or do other properties of quantum systems make a necessary contribution? Two candidates for resources important to the success of quantum computation: nonlocality, and quantum fluctuations; following Nelson, one might also mention that quantum theory has a non-Markovian nature; others? Or a formulation of any of those rather vaguely put candidates in terms you would prefer? [I would prefer to specify the microcausal but anti-local nature of the differential operator √(∇²+m²) in field theory in the derivation of the free field propagator instead of “nonlocality”, for example, again because of my field theory instead of computation roots.]
Comment #25 November 5th, 2017 at 10:03 am
Scott #18: Please, do not stop posting your talks on YouTube. I watched both and they were both amazing, interesting talks. I am a high-school student interested in quantum computing (especially the experimental side of things; sorry) and it would be a great loss for people not to be able to see your talks. To be quite honest, I’m not so sure why so many people are commenting about “um”-ing – it’s really not that bad. Thank you.
Comment #26 November 5th, 2017 at 11:18 am
Scott: I am interested in your thoughts on the main complexity theoretic assumptions of something like bitcoin. My main concern is this: who is to prevent someone from investing in enough computational power to get 50% of bitcoin, and then falsify records? This seems to have nothing to do with “complexity theoretic” concerns at all and is more economic and political than anything.
Moreover, even if bitcoin mining could be done with a quantum computer, wouldn’t everyone just start using that quantum technology to mine bitcoins, and we’d be reduced to “bitcoin is secure because it’s hard for any one entity to get 50% of computing power.”
Comment #27 November 5th, 2017 at 12:08 pm
Chris #26: On your first point, I have nothing to say that other more knowledgeable people haven’t said better. Yes, the whole idea of Bitcoin is that the longest available blockchain represents the “consensus reality,” and the only feasible way to falsify the consensus reality is to control the majority of the mining power in the world. If you do that, you can bend the system to your will—much like, to turn the US into an authoritarian dictatorship, all it takes is to convince approximately half the voters. Them’s the breaks.
Your second point is made in the article—did you read it? For the proof-of-work, the main thing you might worry about is if one entity has a huge bank of QCs able to Grover-search for hash preimages, while everyone else is stuck with classical computing—during that time window (like the period from 1945 to 1949 when the US was the sole nuclear power), the world’s sole QC-bearing entity might try to take over the entire consensus reality of the blockchain. But even if it’s nefarious enough to try that, it seems unlikely that Grover would provide enough of an advantage for it to work—and once enough people have QCs, you can just make the proof-of-work harder and be back where you started, exactly like you said.
Comment #28 November 5th, 2017 at 12:11 pm
heather #25: OK, you win. I’ll let my talks go up on YouTube, but with the comments turned off. If people want to comment on my speaking style, they can come to this blog and do it to my face. 🙂
Best of luck with your studies!
Comment #29 November 5th, 2017 at 3:47 pm
@Scott: So, I guess we are in agreement that crypto-currency is only reliable if no one gets a huge advantage in computing power. If, however, one entity were to secretly gain quantum computing power, then it could take over the block-chain.
To me, this is an excellent argument for crypto-currency investors to fund pure and applied computer science researchers whose job is to write papers, so that “public knowledge” of computing techniques is likely to be at the forefront, and no nefarious private entity can overtake this public knowledge.
Comment #30 November 6th, 2017 at 7:18 am
As a big fan of yours who wants to see you live a long life, I do hope you will also use this sabbatical to focus a bit on your physical body and health (you’re still young enough to do important lifestyle changes relatively easily) 🙂
Comment #31 November 6th, 2017 at 11:47 am
fred #30: Well, my current plan is simply to get a little bit fatter and weaker every year, until I die of a heart attack in my mid-50s—but taking care to have proved most of the quantum complexity theorems I want to have proved by then, and also to have cleared the starred emails from my inbox. The rationale for this plan is that I love eating and despise exercising—and I spent the first quarter-century of my life that way and didn’t get fat, and I refuse to accept that anything has changed. I have my own alternative facts.
Having said that: yes, your rather different plan merits consideration as well. Taken under advisement. 🙂
Comment #32 November 6th, 2017 at 2:47 pm
Scott #31
It can be something as simple as buying a good pair of running shoes and taking long walks everyday … and you can still think about quantum complexity while doing it!
Comment #33 November 6th, 2017 at 4:51 pm
Is shadow tomography easier if the input state is known to be pure?
Comment #34 November 6th, 2017 at 6:12 pm
[off-topic] I don’t want to spam your email but I have a (probably) dumb question about QCSD. Are you still entertaining those? If so where?
I’m one of those 10/10 readers of the book. It’s one of my ten favorites and I understand 1/10 of it. 🙂
Comment #35 November 6th, 2017 at 6:34 pm
fred #32: Well, I also hate driving, so I do walk just about anywhere that I’m able to. That’s probably why I’m not even unhealthier than I am!
Comment #36 November 6th, 2017 at 6:39 pm
PSST #33: No, it’s not significantly easier for pure states—at least not in terms of the number of copies you need, which was my main concern in the paper (it might be up to quadratically faster in computation time). One way to see this is that we can purify any mixed state at the cost of doubling log(D), which would affect my bound on the number of copies by only a constant factor. So if we did have a significantly better algorithm for pure states, it would imply a better algorithm for mixed states as well.
Comment #37 November 6th, 2017 at 6:41 pm
Don #34: Eh, go ahead and ask your question here.
Glad you liked the part of QCSD that you understood!
Comment #38 November 6th, 2017 at 9:39 pm
Scott #23: Exactly, agreed: any comprehensive hidden-variable-theory proof or disproof might be nearly a BQP=?BPP proof with respect to complexity class separations. But flipping this around, please take this overall idea seriously: imagine an algorithm that attempts to construct “problem instances” to submit to a quantum computer, to test whether that computer truly has more power than BPP, using a finite number of trials and interpreting the results. What would such an algorithm look like? What would be the best/optimal sample problems to submit to determine the machine’s capabilities? Obviously this is going on in an informal way every time researchers implement a quantum algorithm, but thinking outside the box, in a more mathematical/systematic way, what would be the “ultimate” such algorithm?
Comment #39 November 6th, 2017 at 11:58 pm
The comment “uuuuuuuuuuuuuuu” is actually the announcement of the first quindecaquark particle, a major breakthrough in physics!
Comment #40 November 8th, 2017 at 1:40 am
http://www.sciencedirect.com/science/article/pii/S0926224517300876
Comment #41 November 8th, 2017 at 5:19 am
orbitoccurance [sic] #40: Looks like an interesting paper!
From now on, though, I’m going to institute a blanket ban on “drive-by linkings,” which shove a paper in my face unrelated to anything we’ve been talking about and ask me to respond to it.
Comment #42 November 8th, 2017 at 11:19 am
Just as bitcoin could address quantum attacks on elliptic curve discrete log by migrating to a post-quantum signature scheme,
so it could address quantum speedups of its proof of work by migrating to a quantum resistant proof of work, such as finding cycles in huge random graphs.
Comment #43 November 8th, 2017 at 10:52 pm
Hi Scott,
I’m currently in the process of getting through the link (3) you posted: fascinating stuff!
At 1:04:26-1:04:46, Dr. Binkley names a few simulations (like heat flow, structural properties of materials) that I believe are currently done via FEM, which I believe essentially just involves solving linear equations. If I understand this correctly, he seems to suggest that these problems will continue to be solved by classical HPC in the future, rather than QC. Aren’t there already algorithms (albeit with caveats, like the need for qRAM) that show that in principle one can solve linear equations more quickly (by which I mean the scaling is better) on QCs than on classical computers?
I’m sure you hear this a lot, but I’m a big fan of your work, and I hope I can meet you some day 🙂
Comment #44 November 9th, 2017 at 1:47 am
Juan #43: You’re talking about the HHL algorithm. There’s a long list of conditions that need to be met before this algorithm can give you a speedup for a practical problem, but if those conditions are all met (and we don’t yet really understand how common that will be), it can give exponential speedups. For more, see my Read the Fine Print essay.
Comment #45 November 9th, 2017 at 5:24 am
Scott #31: Rather than worry about health now, notice that adding further victories to your “plan to spend a whole sabbatical year just writing up unwritten papers” could also count as “lifestyle changes”. Those unwritten papers can increase your stress level and thereby affect your health as well. And writing them is significantly easier during your sabbatical year, while including more sports in your daily routine is as easy or as hard as it ever is, independent of any sabbatical year.
Comment #46 November 9th, 2017 at 5:59 pm
Aaronson / Arkhipov update in phys.org:
https://phys.org/news/2017-11-hard-problem-solvable-quantum.html
Comment #47 November 10th, 2017 at 2:05 am
The Hafnian, huh? Which is to the Pfaffian as the permanent is to the determinant? Makes me want to ask about what happens if one takes other generalizations of the determinant and tries to find analogies for the Pfaffian, but somehow I suspect that won’t work as well…
Comment #48 November 10th, 2017 at 10:11 am
50-qubit computers available?
http://www.mercurynews.com/2017/11/10/ibm-ups-pressure-on-rivals-with-quantum-computer-prototype-2/
Comment #49 November 10th, 2017 at 12:23 pm
OK, sorry, can I just talk about the Hafnian a bit, since this is the first time I’d seen the concept? The Hafnian just seems kind of weird to me. Like, one of the big things about the Pfaffian of an anti-symmetric matrix is that if you square it you get the determinant. If you take the Hafnian of a symmetric matrix and square it, do you get the permanent? Well, no, you don’t; being allowed to put things on the diagonal screws that up. Well, OK, what if we require 0s on the diagonal? Then does it work? Still no! Look at a 6×6 matrix with 1s everywhere but the diagonal; the Hafnian will be 15 but the permanent will be 265 (which is not equal to 15² = 225). I guess “things not working” is kind of appropriate for something analogous to the permanent, though. 😛
What’s odd though is that for a 4×4 symmetric matrix with 0s on the diagonal, the permanent is the Hafnian squared. (And also trivially for 2×2 and 0x0, but nothing surprising there.) Like, just write them out as polynomials, one is the square of the other. And yet for higher numbers it doesn’t work. I don’t know what to make of that.
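Sniffnoy’s numbers are easy to check by brute force. Here is a small sketch of my own (naive exponential-time `hafnian` and `permanent` helpers, written just for this check and perfectly fine at 6×6):

```python
from itertools import permutations
from math import prod

def hafnian(A):
    """Sum over perfect matchings: pair row 0 with each j > 0, recurse on the rest.
    Assumes A is symmetric with even dimension; the diagonal is never used."""
    n = len(A)
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):
        minor = [[A[r][c] for c in range(1, n) if c != j]
                 for r in range(1, n) if r != j]
        total += A[0][j] * hafnian(minor)
    return total

def permanent(A):
    """Naive O(n! * n) permanent, straight from the definition."""
    n = len(A)
    return sum(prod(A[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# 6x6 all-ones matrix with zero diagonal: Hafnian 15, permanent 265, 15^2 = 225.
J6 = [[0 if i == j else 1 for j in range(6)] for i in range(6)]

# A generic symmetric 4x4 with zero diagonal: here perm(A) really is Haf(A)^2.
A4 = [[0 if i == j else i + j + 2 for j in range(4)] for i in range(4)]
```

Running this confirms both halves of the comment: Haf(J6) = 15 and perm(J6) = 265, while the generic zero-diagonal symmetric 4×4 satisfies perm(A4) = Haf(A4)² exactly.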
Comment #50 November 10th, 2017 at 5:02 pm
Sniffnoy, I think of Pf²=det as a version of multiplicativity for Pf/det. Since the permanent lacks that, I wouldn’t expect the Hafnian to square to the permanent. The 4×4 case is probably just a low degree coincidence.
Comment #51 November 11th, 2017 at 12:48 am
Actually, having thought about it some more now, I think there is indeed a good reason: Odd cycles! What goes wrong in higher dimensions is that unlike in the determinant, in the permanent the permutations with odd cycles don’t cancel out, but they won’t appear in the square of the Hafnian. I’m pretty sure the permutations with only even cycles continue to match up just fine in this new setting, but the ones with odd cycles ruin it. And at 4×4, there are no derangements with odd cycles. At 6×6 and higher, there are. So it’s not a coincidence after all! Problem solved.
(In a sense, the problem where I had to put 0s on the diagonal is just a special case of this — a fixed point is just a type of odd cycle, after all. But that was only enough to make things work up to 4×4…)
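A quick numerical check of the odd-cycle story (my own sketch, not Sniffnoy’s code): in the all-ones 6×6 example every surviving monomial equals 1, so the permanent counts derangements, and if the argument is right, Haf² should count exactly the derangements whose cycles are all even, with the derangements containing odd cycles (two 3-cycles) accounting for the gap 265 − 15² = 40.

```python
from itertools import permutations

def cycle_lengths(p):
    """Cycle lengths of a permutation p, given as a tuple with p[i] = image of i."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start not in seen:
            length, i = 0, start
            while i not in seen:
                seen.add(i)
                i = p[i]
                length += 1
            lengths.append(length)
    return lengths

n = 6
derangements = [p for p in permutations(range(n))
                if all(p[i] != i for i in range(n))]
even_only = [p for p in derangements
             if all(l % 2 == 0 for l in cycle_lengths(p))]
odd_cycle_count = len(derangements) - len(even_only)
# len(derangements) = 265, len(even_only) = 225 = 15^2, odd_cycle_count = 40.
```

The 40 excluded permutations are precisely the 6!/(3·3·2) = 40 products of two 3-cycles, so the gap really is all odd cycles.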
Comment #52 November 11th, 2017 at 4:30 am
Raoul Ohio #46
We also know that for the true complexity of computing the permanent to be 2^n we need that it’s no easier to count the number of matchings in a bipartite graph than in a general graph. This is because there is a 2^n time algorithm for the latter problem. This seems surprising, if true.
Comment #53 November 14th, 2017 at 3:24 pm
” One of the biggest challenges for these experiments is generating a large number of single photons. Since perfectly deterministic sources of single photons are not currently available, all of the experiments performed so far have used photon sources that are probabilistic rather than deterministic.”
Is this a purely engineering problem, or could it be that nature makes it an extremely hard thing to do at a fundamental level (a bit like the no-cloning theorem and such)?
Comment #54 November 15th, 2017 at 6:30 am
fred #53: Well, there’s nothing in currently-understood physics that would explain why it should be impossible at a fundamental level. On the contrary, there are designs on paper for “optical multiplexers” that would produce a single photon on demand, with arbitrary reliability—it’s just that building these is incredibly hard in practice, due to photon losses and other issues.
Comment #55 February 7th, 2019 at 8:55 am
Hi Scott! I was reading through your shadow tomography paper and I couldn’t understand how Lemma 15 follows from Lemma 14. Lemma 14 says that you’re given a promise that either (i) there exists an E_i for which Tr(rho E_i) > c, or (ii) for all E_i, Tr(rho E_i) < c − e. But what happens when neither case holds, e.g. when the largest Tr(rho E_i) is only guaranteed to be > c − e? In other words, how does Lemma 14 still give you something useful even though the promise is broken?
Thanks!
Comment #56 February 7th, 2019 at 12:08 pm
Navneeth #55: In the future, please email me rather than leaving comments on old posts.
The procedure of Lemma 15 is specifically designed in such a way that, if the promise is violated, then it doesn’t matter which answer is returned. If “no,” then you’ll just look elsewhere for the great measurement that’s guaranteed to exist. If “yes,” then you may keep looking in this region, where a measurement necessarily exists that’s almost as good as the best one.