
The Aaronson-Ambainis Conjecture (2008-2019)

source link: https://www.scottaaronson.com/blog/?p=4414

Around 1999, one of the first things I ever did in quantum computing theory was to work on a problem that Lance Fortnow suggested in one of his papers: is it possible to separate P from BQP relative to a random oracle? (That is, without first needing to separate P from PSPACE or whatever in the real world?) Or to the contrary: suppose that a quantum algorithm Q makes T queries to a Boolean input string X. Is there then a classical simulation algorithm that makes poly(T) queries to X, and that approximates Q’s acceptance probability for most values of X? Such a classical simulation, were it possible, would still be consistent with the existence of quantum algorithms like Simon’s and Shor’s, which are able to achieve exponential (and even greater) speedups in the black-box setting. It would simply underscore the importance, for Simon’s and Shor’s algorithms, of global structure that makes the string X extremely non-random: for example, encoding a periodic function (in the case of Shor’s algorithm), or encoding a function that hides a secret string s (in the case of Simon’s). It would underscore that superpolynomial quantum speedups depend on structure.

I never managed to solve this problem. Around 2008, though, I noticed that a solution would follow from a perhaps-not-obviously-related conjecture, about influences in low-degree polynomials. Namely, let $p:\mathbb{R}^n\to\mathbb{R}$ be a degree-$d$ real polynomial in $n$ variables, and suppose $p(x)\in[0,1]$ for all $x\in\{0,1\}^n$. Define the variance of $p$ to be $\mathrm{Var}(p):=\mathbb{E}_{x,y}[|p(x)-p(y)|]$, and define the influence of the $i$-th variable to be $\mathrm{Inf}_i(p):=\mathbb{E}_x[|p(x)-p(x^i)|]$. Here the expectations are over strings in $\{0,1\}^n$, and $x^i$ means $x$ with its $i$-th bit flipped between 0 and 1. Then the conjecture is this: there must be some variable $i$ such that $\mathrm{Inf}_i(p) > \mathrm{poly}(\mathrm{Var}(p)/d)$—in other words, a variable that “explains” a non-negligible fraction of the variance of the entire polynomial.
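Just to make the definitions concrete, here’s a small brute-force check in Python—my own illustration, not anything from the papers. It computes $\mathrm{Var}(p)$ and all the influences exactly for the degree-2 example polynomial $p(x)=x_1 x_2$, padded out to $n=3$ variables:

```python
from itertools import product

def var(p, n):
    """Var(p) = E_{x,y} |p(x) - p(y)|, computed exactly by enumeration."""
    vals = [p(x) for x in product((0, 1), repeat=n)]
    return sum(abs(a - b) for a in vals for b in vals) / len(vals) ** 2

def inf(p, i, n):
    """Inf_i(p) = E_x |p(x) - p(x^i)|, where x^i flips the i-th bit of x."""
    total = 0
    for x in product((0, 1), repeat=n):
        flipped = x[:i] + (1 - x[i],) + x[i + 1:]
        total += abs(p(x) - p(flipped))
    return total / 2 ** n

# Illustrative example (not from the papers): AND of the first two bits,
# a degree-2 polynomial with p(x) in [0,1] on {0,1}^3.
n, d = 3, 2
p = lambda x: x[0] * x[1]
print(var(p, n))                           # 0.375
print([inf(p, i, n) for i in range(n)])    # [0.5, 0.5, 0.0]
# Some variable has influence 0.5, a healthy fraction of Var(p)/d,
# just as the conjecture demands.
```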

Why would this conjecture imply the statement about quantum algorithms? Basically, because of the seminal result of Beals et al. from 1998: that if a quantum algorithm makes T queries to a Boolean input X, then its acceptance probability can be written as a real polynomial over the bits of X, of degree at most 2T. Given that result, if you wanted to classically simulate a quantum algorithm Q on most inputs—and if you only cared about query complexity, not computation time—you’d simply need to do the following:

(1) Find the polynomial p that represents Q’s acceptance probability.

(2) Find a variable i that explains at least a 1/poly(T) fraction of the total remaining variance in p, and query that variable.

(3) Keep repeating step (2), until p has been restricted to a polynomial with not much variance left—i.e., to nearly a constant function p(X)=c. Whenever that happens, halt and output the constant c.

The key is that by hypothesis, this algorithm will halt, with high probability over X, after only poly(T) steps.
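In code, the loop might look something like the sketch below—again my own illustration, under the assumption that you can evaluate the polynomial p as a black box on any bit-assignment, and with variances and influences estimated by sampling rather than computed exactly:

```python
import random

def estimate_var(p, fixed, free, samples=500):
    """Estimate Var(p) = E_{x,y} |p(x) - p(y)| over random completions of the free bits."""
    def draw():
        return p({**fixed, **{j: random.randint(0, 1) for j in free}})
    return sum(abs(draw() - draw()) for _ in range(samples)) / samples

def estimate_inf(p, i, fixed, free, samples=500):
    """Estimate Inf_i(p) = E_x |p(x) - p(x^i)| under the current restriction."""
    total = 0.0
    for _ in range(samples):
        x = {**fixed, **{j: random.randint(0, 1) for j in free}}
        total += abs(p(x) - p({**x, i: 1 - x[i]}))
    return total / samples

def classical_simulation(p, n, X, eps=0.05):
    """Greedy simulation sketch: repeatedly query the most influential
    remaining variable of the restricted polynomial, until almost no
    variance is left.  If the conjecture holds, this halts after only
    poly(T) queries for most inputs X."""
    fixed, free = {}, set(range(n))
    while free and estimate_var(p, fixed, free) > eps:
        i = max(free, key=lambda j: estimate_inf(p, j, fixed, free))
        fixed[i] = X[i]        # one classical query to the actual input X
        free.remove(i)
    # p is now nearly constant under the restriction; output a sample value
    return p({**fixed, **{j: random.randint(0, 1) for j in free}})
```

Note that the sketch takes p as given: actually extracting p from the circuit of Q is exactly the part that comes for free when you count only queries rather than computation time.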

Anyway, around the same time, Andris Ambainis had a major breakthrough on a different problem that I’d told him about: namely, whether randomized and quantum query complexities are polynomially related for all partial functions with permutation symmetry (like the collision and the element distinctness functions). Andris and I decided to write up the two directions jointly. The result was our 2011 paper entitled The Need for Structure in Quantum Speedups.

Of the two contributions in the “Need for Structure” paper, the one about random oracles and influences in low-degree polynomials was clearly the weaker and less satisfying one. As the reviewers pointed out, that part of the paper didn’t solve anything: it just reduced one unsolved problem to a new, slightly different problem that was also unsolved. Nevertheless, that part of the paper acquired a life of its own over the last decade, as the world’s experts in analysis of Boolean functions and polynomials began referring to the “Aaronson-Ambainis Conjecture.” Ryan O’Donnell, Guy Kindler, and many others took a stab at it. I even got Terry Tao to spend an hour or two on the problem when I visited UCLA.

Now, at long last, Nathan Keller and Ohad Klein say they’ve solved the problem, in a preprint whose title is a riff on ours: “Quantum Speedups Need Structure.”

Their paper hasn’t yet been peer-reviewed, and I haven’t yet carefully studied it, but I could and should: at 19 pages, it looks very approachable and clear, if not as radically short as (say) Huang’s proof of the Sensitivity Conjecture. Keller and Klein’s argument subsumes all the earlier results that I knew would need to be subsumed, and involves all the concepts (like a real analogue of block sensitivity) that I knew would need to be involved.

My plan had been as follows:

(1) Read their paper in detail. Understand every step of their proof.

(2) Write a blog post that reflects my detailed understanding.

Unfortunately, this plan did not sufficiently grapple with the fact that I now have two kids. It got snagged for a week at step (1). So I’m now executing an alternative plan, which is to jump immediately to the blog post.

Anyway, assuming Keller and Klein’s result holds up—as I expect it will—by combining it with the observations in my and Andris’s paper, one immediately gets an explanation for why no one has managed to separate P from BQP relative to a random oracle (but only relative to non-random oracles). This complements the work of Kahn, Saks, and Smyth, who around 2000 gave a precisely analogous explanation for the difficulty of separating P from NP∩coNP relative to a random oracle. Unfortunately, the polynomial blowup is quite enormous: from a quantum algorithm making T queries, Keller and Klein apparently get a classical algorithm making $O(T^{18})$ queries. But such things can almost always be massively improved.

Feel free to use the comments to ask any questions about this result or its broader context. I’ll either do my best to answer from the limited amount I know, or else I’ll pass the questions along to Nathan and Ohad themselves. Maybe, at some point, I’ll even be forced to understand the new proof.

Congratulations to Nathan and Ohad!
