
We Believe A Lot, But Can Prove Little



January 19, 2011

The gap between what we believe and what we can prove

Stan Eisenstat is a numerical analyst who works on various aspects of real computing, and is one of the top experts in solving sparse linear systems. He is a faculty member in Computer Science at Yale now, and was already there when I was at Yale in the 1970s.

Today I want to discuss what we believe and what we know about complexity classes.

This is something I have commented on before: beliefs vs knowing. But I feel that it is worth discussing further, and I hope that you agree.

I had the pleasure, when I was at Yale, to work with Stan on a variety of projects. Stan combines a deep understanding of the mathematics of algorithms with a great feel for what is practical and what is important. He is also a master programmer—a great deal of his early work was on actual implementation of algorithms for numerical computing. He is a wizard at solving recurrences; I believe this ability is related to his understanding of differential equations.

Stan used to tell me, more than once, these two statements:

  • The function {\log n} is bounded.
  • The function {\log\log n} is {1}.

He would tell me these when I ran into his office excited that I had improved some algorithm by some small amount—perhaps by a log. If the new bound was {n \log \log n}, down from {n \log n}, Stan’s comment would be: so it went from linear to {n}.
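To get a feel for the numbers behind these quips, here is a quick check of my own (not Stan’s) of how these functions behave even for absurdly large inputs:

```python
import math

# Even for inputs far beyond anything a machine will ever see,
# log n stays modest and log log n is essentially a small constant.
for exponent in (6, 9, 18, 80):   # 10^80 is roughly the number of atoms in the universe
    n = 10 ** exponent
    log_n = math.log2(n)
    log_log_n = math.log2(log_n)
    print(f"n = 10^{exponent}:  log2 n = {log_n:.1f},  log2 log2 n = {log_log_n:.1f}")
```

Even at {n = 10^{80}} the inner value {\log\log n} is only about {8}.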

Both his statements are technically false, but both have a grain of truth to them, especially back in the 70s when Stan shared these insights. They are related to the notion of galactic algorithms that Ken Regan and I have discussed here before. Even today {\log n} is not too large, and certainly {\log\log n} is very small. A related quote is:

{\log \log \log x} goes to infinity with great dignity.

—Dan Shanks

Please keep these insights in mind as we examine what we know about complexity classes and what we believe.

Time vs Space

One of the oldest, one of the best, and one of the most beautiful results we know concerns the relationship between time and space. The best theorem to date is the classic theorem due to John Hopcroft, Wolfgang Paul, and Leslie Valiant:

Theorem: For any function {t(n)},

\displaystyle  \mathsf{DTIME}(t(n)) \subseteq \mathsf{DSPACE}(t(n)/\log t(n)).

This is a wonderful result—one of my all-time favorite theorems in complexity theory. Most of us believe more should be true, but it is unlikely that we can prove much more soon—their result is over 30 years old. There is a technical reason to believe that the term {t(n)/\log t(n)} could be slightly improved, without a huge breakthrough, but I will not go into the details.

The proof is very clever, and if you do not know it, take a look at the famous paper. I will not go into the details of the proof, but I will discuss the pebble game that the proof rests on, among other tricks and ideas. The pebble game is a solitaire game played on a directed acyclic graph {G}. The rules of the game are quite simple:

  1. A pebble can be placed on any vertex {v} provided there are pebbles on all the vertices {x} with incoming arcs to {v}.
  2. A pebble can be removed from any vertex at any time.
  3. The goal of the game is to get a pebble on a particular vertex {t}—the target.

It should be clear that the game can always be won: just place pebbles on all the sources, then on all the vertices whose predecessors are all pebbled, and so on. What makes the game interesting is that the number of pebbles is limited, so pebbles must be placed and removed in a strategic manner. The key result is that any bounded-degree acyclic graph on {n} vertices can be pebbled with

\displaystyle  O(n/\log n)

pebbles: the constant is only a function of the degree of the graph.

The rationale behind the game is that you should think of placing a pebble on a vertex as a step of the computation. A step is only possible if all the inputs to the vertex are known, which corresponds to all of its predecessors already being pebbled. The removal rule allows computed values to be forgotten. Essentially this rule captures the reuse of memory, since once a pebble is removed it can be used elsewhere on the graph.
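To make the rules concrete, here is a minimal sketch of my own (not from the paper) that checks a sequence of moves against the rules, for a DAG given by predecessor lists:

```python
# A minimal checker for the pebble game. preds[v] lists the vertices with
# arcs into v; sources have empty predecessor lists.
def play(preds, moves, target, max_pebbles):
    pebbled, won = set(), False
    for action, v in moves:
        if action == "place":
            # Rule 1: every predecessor of v must already carry a pebble.
            if not all(u in pebbled for u in preds[v]):
                raise ValueError(f"illegal placement on {v}")
            pebbled.add(v)
            if len(pebbled) > max_pebbles:
                raise ValueError("used more pebbles than allowed")
            won = won or (v == target)   # Rule 3: the goal is to pebble the target.
        elif action == "remove":
            pebbled.discard(v)           # Rule 2: removal is always allowed.
    return won

# A path s -> a -> t can be pebbled with only 2 pebbles by freeing
# the pebble on s as soon as a is pebbled.
preds = {"s": [], "a": ["s"], "t": ["a"]}
moves = [("place", "s"), ("place", "a"), ("remove", "s"), ("place", "t")]
print(play(preds, moves, "t", max_pebbles=2))   # True
```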

In 1982 Bob Tarjan and Thomas Lengauer proved lower bounds on the number of pebbles needed to win on a bounded-degree graph. Their paper is titled “Asymptotically tight bounds on time-space trade-offs in a pebble game.”

Stan had an early practical result from around the time the time-space theorem was proved. He had an algorithm that threw away information it had computed and re-computed that information later when it was needed again. His algorithm, which was joint work with Martin Schultz, essentially played the pebbling game.

I recall talking to them about their result and telling them it was related to a recent major theory result. The reason they needed to throw away information was that the limiting resource for their algorithm was space, not time. They discovered that the algorithm ran faster if, rather than writing the information to disk and storing it, they simply threw it away and re-computed it as needed.
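Purely as a toy illustration of the idea—this is my sketch, not their code—the trade-off is between caching every intermediate value and recomputing it on demand:

```python
# Toy contrast between storing intermediate results and recomputing them.
# stage(i) stands in for some expensive step of a larger computation.
def stage(i):
    return i * i                               # placeholder for real work

def using_storage(n):
    results = [stage(i) for i in range(n)]     # keeps every value: space O(n)
    return sum(results)

def using_recomputation(n):
    total = 0
    for i in range(n):
        total += stage(i)                      # recompute and discard: space O(1)
    return total

assert using_storage(1000) == using_recomputation(1000)
```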

Deterministic vs Nondeterministic

The belief that {\mathsf{P}} is not equal to {\mathsf{NP}} means that most believe nondeterministic time {n} is not contained in deterministic time {n^k} for any fixed {k}. This is a strong belief, one that may well be true, but the best we can prove falls vastly short of it.

The best theorem to date is the classic theorem due to Wolfgang Paul, Nick Pippenger, Endre Szemerédi and William Trotter:

Theorem:

\displaystyle  \mathsf{DTIME}(n) \neq \mathsf{NTIME}(n).

Rahul Santhanam, more recently, showed an improvement to their classic result:

Theorem:

\displaystyle  \mathsf{DTIME} \left( n\sqrt{\log^*(n)} \right) \neq \mathsf{NTIME} \left( n\sqrt{\log^*(n)} \right).

Recall Stan’s comments about the growth of functions. Forget the square root: the function {\log^* n} by itself grows to infinity very, very slowly. I am not sure what Stan would say about {\log^* n}: that it is exactly {1}, forever? From his comments one might pooh-pooh the result as mere fuss about notions of “linear,” but this only strengthens the technical appreciation one must have for it. That it took years to be proved also shows how deep the mysteries of nondeterminism are.
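For a feel of just how slowly {\log^* n} grows, here is a small sketch of my own (with base-{2} logarithms) of the iterated logarithm:

```python
import math

def log_star(x):
    # Iterated logarithm: how many times log2 must be applied before
    # the value drops to at most 1.
    count = 0
    while x > 1:
        x = math.log2(x)
        count += 1
    return count

print(log_star(16))          # 3
print(log_star(2 ** 16))     # 4
print(log_star(2 ** 65536))  # 5 -- and it is still 5 for any input a
                             # physical computer could ever hold
```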

One of the strange properties of both these theorems is that they do not extend to larger running times. One of the fundamental tricks in complexity theory is the notion of padding, which often allows a result relating two classes at one time bound, say

\displaystyle  \mathsf{X}(n) \neq \mathsf{Y}(n),

to be transferred to another bound, say

\displaystyle  \mathsf{X}(n^2) \neq \mathsf{Y}(n^2).

Here {\mathsf{X}} and {\mathsf{Y}} are complexity classes. That padding and all the other known tricks do not work here is an indication of how deep the above two results are, and in particular of the importance of Santhanam’s contribution.
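To recall how a standard padding argument runs—this is my sketch of the usual argument, not anything from the papers—suppose {\mathsf{X}(n) \subseteq \mathsf{Y}(n)} and take a language {L} in {\mathsf{X}(n^2)}. Pad every input {x} out to length {|x|^2}; the padded language is acceptable in time linear in the padded length, so it lies in {\mathsf{X}(n)} and hence in {\mathsf{Y}(n)}, and stripping the padding puts {L} into {\mathsf{Y}(n^2)}. In symbols,

\displaystyle  \mathsf{X}(n) \subseteq \mathsf{Y}(n) \implies \mathsf{X}(n^2) \subseteq \mathsf{Y}(n^2),

so containments translate up and, by the contrapositive, separations translate down. Pushing the linear-time separation upward is exactly the direction this trick does not supply.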

Quantum vs Nondeterministic

Atri Rudra, Ken, and I have been looking at what is provable about quantum complexity, mainly the class {\mathsf{BQP}}. This is the complexity class of sets accepted by polynomial-size quantum circuits with two-sided bounded error. The class is the quantum analogue of {\mathsf{BPP}}.

Most believe that quantum computation is much stronger than classical computation. There is ample evidence of its power; perhaps the main piece is Peter Shor’s famous result that factoring is in {\mathsf{BQP}}. However, the best concrete result that we have been able to come up with—the best where there is a proof—is the following theorem:

Theorem: At least one of the following is true:

  1. The class {\mathsf{BQTIME}(t(n))} is not contained in {\mathsf{NTIME}(t(n)^{1.99})}.
  2. The class {\mathsf{NTIME}(2^n)} is not contained in circuits of polynomial size.

It is weak because it is a disjunction of two possible results—both are probably true. But who knows. A long-time friend, Forbes Lewis, called this type of result “truth or consequences.” The point is that we can also view this theorem as saying: if {\mathsf{BQTIME}(t(n))} is contained in {\mathsf{NTIME}(t(n)^{1.99})}, then the class {\mathsf{NTIME}(2^n)} is not contained in circuits of polynomial size.

Open Problems

There are several interesting open questions. Of course the fundamental one is to improve any of these or related results about complexity classes.

{\bullet} Time vs Space: Can this bound be improved? The graphs that Bob and Thomas construct show that the pebbling bound cannot be improved. However, I believe that there is a small loophole: the worst-case graphs that they construct may not arise from Turing machine computations, and this would leave room for an improvement.

{\bullet} Deterministic vs Nondeterministic: Can this bound be improved? I find this one of those mathematical embarrassments I have talked about here before. How can we know the separation only for {n} and a slightly larger function, and yet not be able to prove that the separation holds for larger time bounds such as {n^2}?

{\bullet} Quantum vs Nondeterministic: Is the stated theorem the best known? We may have missed something—if so, we apologize in advance. If it is the best known, can it be improved?
