Quantum Algorithms A Different View—Again

August 25, 2010

A view of quantum algorithms via linear algebra


David Hilbert is famous, beyond famous. He is one of the greatest mathematicians of all time. I wish to quote him on theories:

A mathematical theory is not to be considered complete until you have made it so clear that you can explain it to the first man whom you meet on the street.

Today I want to expand on a previous discussion on quantum algorithms (QA). I had several ideas that I tried to convey in that discussion. Perhaps it was a mistake to try and do more than one. Oh well.

Let me try, again.

Algorithms Not Physics

My goal was to remove QA’s from any physical considerations. If this offends the quantum experts, then I am sorry. No I am not sorry. I think we need to do this to make progress.

We have done this decoupling almost completely with classical algorithms. When we explain a new classical algorithm we talk about high-level operations—we do not talk about transistors and wires, nor about gates and memory cells. I would venture that few theorists could explain the details of how an L2 cache works, or how a modern arithmetic unit does pipelining, or in general how processors really work. The beautiful point is, it just does not matter for most of theory.

It is this ability to think at a high level for the creation and analysis of classical algorithms that, I believe, is the reason theory has been able to create such powerful algorithms. These algorithms are often described at a level way above any implementation details. I believe this is fundamental, and one of the cornerstones of why theory works.

Here are two classical algorithms: RSA Encryption and the Number Field Sieve (NFS). Both have changed the world, and were on Dick Karp’s list at the last Theory Day at Tech as great algorithms. Neither is described at a low level. For example, RSA is usually described in terms of operations on large integers—each of these operations requires millions of binary operations to perform. NFS is described also at a very high level. Without this ability I doubt that either algorithm would have been discovered.

I think that to make radical progress on QA's we need to move away from a bit-level understanding and analysis, and move to a high-level understanding. This is what I tried to supply in the previous post on quantum algorithms. If I failed in that goal, I apologize to you, but I believe that this goal is the right one. Perhaps someone else will be able to succeed in raising the level at which we view QA's—I hope so.

Geometry Not Quantum Mechanics

In that previous discussion I tried to make the assumptions needed to understand the Deutsch algorithm as simple as possible. This was to help a non-expert to “see” what was happening without any quantum mechanics details. I also believe that by stripping away tensor products, qubits, Hadamard Transforms, and all the rest we could lay bare what is really happening. I agree that none of these concepts are too difficult, but are they essential to make a person have a “feel” for what is happening with QA’s? I do not believe that they are.

I also hope that this view can be used by theory experts to make further progress on QA's. If the details are pushed aside and all that is left is a simple question about unit vectors on the sphere, then I hope that we might be able to find new insights. I am working right now on doing this, and I believe that I may succeed. If QA's are "just" questions about geometry, then I assert that we may be able to find new QA's or prove new lower bounds. I still believe that is true.

Grover’s Algorithm

Lov Grover discovered one of the most important QA's known in 1996. Or is that "invented?" The algorithm named after him is able to search a space of size $N$ in cost $O(\sqrt{N})$. Of course, this is much faster than any classical algorithm, which would take linear time, even for randomized algorithms.

Grover's beautiful result has many applications. You can think of his algorithm as allowing the computation of an $N$-sized "OR" in time $O(\sqrt{N})$. This is so general that it has had many applications in various areas of complexity. For example, it can be used to perform triangle detection faster than any known classical method—see the beautiful paper by Frédéric Magniez, Miklos Santha, and Mario Szegedy.

It should be noted that in this and many other applications there are nice interactions between Grover's algorithm and standard methods. That is, the optimal results often follow from a "clever" use of Grover's algorithm.

The Linear Algebra View

Let me explain Grover's algorithm using just linear algebra. We begin in a state of not knowing anything about the space of possible solutions—$a$ is the all-$1$ vector, but divided by $\sqrt{N}$ to make it a unit vector. Let us suppose there are $k$ solutions. We do not know $k$ in advance. However, we will argue that it is enough to know $k$ to within a factor of $2$. Then we will apply the standard idea of first trying $k = 1$, then $k = 2$, then $k = 4$, and so on, and this will not affect the order of the running time. So we may suppose $k$ is known after all—and what we really do not know is the location of the solutions. Our goal state is the vector that has a $1$ in those positions that belong to the solution set $S$, and $0$ elsewhere, this time dividing by $\sqrt{k}$ to make a unit vector $b$.
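To make this concrete, here is a minimal NumPy sketch of the two vectors; the size $N = 16$ and the solution indices are toy values chosen purely for illustration.

```python
import numpy as np

N, solutions = 16, [3, 11]            # toy size and arbitrary solution indices
k = len(solutions)

a = np.ones(N) / np.sqrt(N)           # the "ignorance" state: all-1s, normalized
b = np.zeros(N)
b[solutions] = 1.0 / np.sqrt(k)       # the goal state: 1s on solutions, normalized

assert np.isclose(np.linalg.norm(a), 1.0)
assert np.isclose(np.linalg.norm(b), 1.0)
print(a @ b)                          # overlap is sqrt(k/N), about 0.354 here
```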

All states that we reach along the way will be linear combinations of our initial ignorance state $a$ and the unknown full-knowledge state $b$, that is

$x = \alpha a + \beta b,$

where $\|\alpha a + \beta b\| = 1$ to preserve a unit vector. Initially we have $\alpha = 1$, $\beta = 0$. When we measure, the chance of getting the index of a solution is $(\beta + \alpha\sqrt{k/N})^2$. Initially this is just $k/N$, which is just the chance of guessing a solution at random. If $\beta$ is high, however, then we stand a good chance of getting a solution. So that's the goal of Grover's algorithm—to stay in the simple two-dimensional subspace spanned by $a$ and $b$, while moving from $a$ close enough to $b$ for a measurement to give a solution with high probability. We just need some operations that keep this subspace fixed but move points well enough inside it.
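Continuing the toy instance, here is a sketch of how a success probability is read off a state vector: it is simply the sum of the squared amplitudes over the solution indices.

```python
import numpy as np

def success_probability(x, solutions):
    """Chance that measuring state x yields an index in the solution set."""
    return float(np.sum(x[solutions] ** 2))

N, solutions = 16, [3, 11]
a = np.ones(N) / np.sqrt(N)
print(success_probability(a, solutions))   # k/N = 2/16 = 0.125, random guessing
```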

The first operation is an $N$ by $N$ matrix $U$ that is the identity matrix—except that the diagonal entries corresponding to solutions are $-1$ instead of $1$. That is,

$U_{ii} = -1$ for $i \in S$, $\quad U_{ii} = 1$ for $i \notin S$, $\quad U_{ij} = 0$ for $i \neq j$.

This matrix is given as a "black box"—we are not allowed to inspect it. It really represents the verification predicate that an element is a solution. Since $UU^{T} = I$, clearly $U$ is unitary. The second operation is simply the matrix

$V = \frac{2}{N}J - I,$

where $J$ is the all-$1$'s matrix. Each row has one entry of $\frac{2}{N} - 1$ and $N - 1$ entries of $\frac{2}{N}$. Its norm is therefore

$\sqrt{\left(\frac{2}{N} - 1\right)^{2} + (N - 1)\cdot\frac{4}{N^{2}}} = \sqrt{1 - \frac{4}{N} + \frac{4}{N^{2}} + \frac{4}{N} - \frac{4}{N^{2}}} = 1.$

All pairs of different rows have dot product $2\cdot\frac{2}{N}\left(\frac{2}{N} - 1\right) + (N - 2)\cdot\frac{4}{N^{2}} = 0$. So $V$ is also a unitary matrix. To verify that these operations conserve the subspace spanned by $a$ and $b$, note that $Ub = -b$ while $Ua = a - 2\sqrt{k/N}\,b$, and compute:

$Va = a, \qquad Vb = 2\sqrt{k/N}\,a - b.$
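The following NumPy sketch (same toy parameters as before) builds $U$ and $V$ explicitly and checks the unitarity and subspace claims above; a real quantum device never materializes these matrices, of course.

```python
import numpy as np

N, solutions = 16, [3, 11]
k = len(solutions)

U = np.eye(N)
U[solutions, solutions] = -1.0        # flip sign on the solution diagonal entries

J = np.ones((N, N))                   # the all-1's matrix
V = (2.0 / N) * J - np.eye(N)

# Both are unitary (real orthogonal here): M @ M.T == I.
assert np.allclose(U @ U.T, np.eye(N))
assert np.allclose(V @ V.T, np.eye(N))

# They preserve the span of a and b, exactly as computed above.
a = np.ones(N) / np.sqrt(N)
b = np.zeros(N); b[solutions] = 1.0 / np.sqrt(k)
assert np.allclose(U @ b, -b)
assert np.allclose(U @ a, a - 2 * np.sqrt(k / N) * b)
assert np.allclose(V @ a, a)
assert np.allclose(V @ b, 2 * np.sqrt(k / N) * a - b)
```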

The algorithm then simply computes

$x = (VU)^{m}\,a,$

where $m \approx \frac{\pi}{4}\sqrt{N/k}$. Then, it measures the vector $x$. The claim is that with high probability the measurement yields some index $i$ so that $i \in S$.
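Putting the pieces together on the toy instance, here is a sketch of the whole iteration; the choice of $m$ is the one analyzed below.

```python
import numpy as np

N, solutions = 16, [3, 11]
k = len(solutions)

a = np.ones(N) / np.sqrt(N)
U = np.eye(N); U[solutions, solutions] = -1.0
V = (2.0 / N) * np.ones((N, N)) - np.eye(N)

m = int(round((np.pi / 4) * np.sqrt(N / k)))   # about 2 iterations here
x = a
for _ in range(m):
    x = V @ (U @ x)                            # one Grover iteration: U, then V

print(m, np.sum(x[solutions] ** 2))            # success probability ~ 0.94
```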

Grover Analysis

The key here is to prove that the algorithm works: that is, after applying the matrix $VU$ about $\sqrt{N/k}$ times, the measurement yields a solution with high probability. Given our insight about preserving a two-dimensional subspace, this comes down to simple linear algebra on $2 \times 2$ matrices. The algebra is the same as on Wikipedia's page on Grover's algorithm for the case $k = 1$, except we replace $\frac{1}{\sqrt{N}}$ factors by $\sqrt{k/N}$.
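To make the $2 \times 2$ claim concrete (this is a standard identity, not anything new): in an orthonormal basis for this plane, consisting of the normalized non-solution part of $a$ together with $b$, one iteration acts as the rotation

$VU \sim \begin{pmatrix} \cos 2\theta & -\sin 2\theta \\ \sin 2\theta & \cos 2\theta \end{pmatrix}, \qquad \text{where } \sin\theta = \sqrt{k/N}.$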

We can see intuitively from the above action of $U$ and $V$ on $a$ and $b$ that $VU$ effects a rotation through an angle of $2\theta$, where $\sin\theta = \sqrt{k/N}$, which is roughly $2\sqrt{k/N}$ when $k$ is small relative to $N$. Starting from $a$, which is the point $(1, 0)$ in this co-ordinate system, it takes about $\frac{\pi}{4}\sqrt{N/k}$ iterations to get near the point $(0, 1)$. We get near within distance about $\sqrt{k/N}$, in fact, and the error when measuring is hence order-of the square of that, i.e. only order $k/N$. Note that as it iterates, the algorithm gets progressively closer to its goal—in the sense that if "Simon says: measure" at any time, its chance of finding a solution always improves.
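In this picture the success probability after $t$ iterations is $\sin^2((2t+1)\theta)$, which the following sketch tabulates for a large toy instance (the parameters are illustrative only):

```python
import numpy as np

N, k = 1_000_000, 1                   # illustrative sizes
theta = np.arcsin(np.sqrt(k / N))
for t in [0, 100, 400, 785]:          # (pi/4) * sqrt(N/k) is about 785 here
    print(t, np.sin((2 * t + 1) * theta) ** 2)   # climbs from 1e-6 toward 1
```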

Finally, as we noted above, if we do not know $k$ then we make a "net" by guessing $k = 1$, then $k = 2$, then $k = 4$, and so on. When our "$k$" is off by a factor of $2$ we will not get so near to the point $(0, 1)$. However, there will be multiple iterations for values in our net that are close enough to give a reasonable success probability. The full analysis involves a tradeoff between random guessing working well when $k$ is large, versus the error being small when $k$ is small, and repeating some measurement trials to make the net a little finer than stepping by factors of $2$. The details need to be worked out, but these details are not quantum mechanical—they are ordinary theory-of-algorithms details.
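Here is a sketch of the doubling "net" as classical control logic, again on assumed toy parameters; the measurement is simulated by sampling from the squared amplitudes, and each sampled index is verified classically. Nothing quantum mechanical appears in the loop itself.

```python
import numpy as np

def grover_trial(a, U, V, m, rng):
    """Run m Grover iterations from a, then simulate one measurement."""
    x = a
    for _ in range(m):
        x = V @ (U @ x)
    return rng.choice(len(x), p=x ** 2 / np.sum(x ** 2))

N, solutions = 64, [5, 17, 40]        # k = 3, but the loop pretends not to know
a = np.ones(N) / np.sqrt(N)
U = np.eye(N); U[solutions, solutions] = -1.0
V = (2.0 / N) * np.ones((N, N)) - np.eye(N)

rng = np.random.default_rng(0)
guess = 1
while guess <= N:                     # try k = 1, 2, 4, ... as the "net"
    m = int(round((np.pi / 4) * np.sqrt(N / guess)))
    i = grover_trial(a, U, V, m, rng)
    if i in solutions:                # verify the sampled index classically
        print("solution", i, "found with guess", guess)
        break
    guess *= 2
```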

Open Problems

The goal here was not to supplant the current view of QA's. Nor was it to argue against teaching all the details to our students. The goal was both less and more important. It was to start to create a way to look at QA's that is simpler and more accessible to many, and that at the same time could lead us to discover new insights into QA's. I believe that the more ways we can define and view a mathematical topic, the better is our understanding.

I hope that I helped in this regard. In any event thanks to all who read that discussion and this one. Thanks for being supportive—even those who disagree with this approach.
