Galactic Algorithm


A galactic algorithm is one that runs faster than any other algorithm for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan, as they will never be used on any of the merely terrestrial data sets we find here on Earth.

An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. This means it will not reach its stated efficiency until the numbers have at least 2^1729 digits, (vastly) more digits than there are atoms in the universe. So this algorithm is never used in practice.
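A quick back-of-the-envelope check makes the scale concrete. The sketch below (Python; the ~10^80 figure for atoms in the observable universe is a common rough estimate, not a number from the source) compares the required digit count, 2^1729, with the atom count:

```python
from math import log10

# Number of digits the input must have before the asymptotic advantage kicks in.
# 2**1729 is itself a number with roughly 520 decimal digits, so work with its log:
digit_threshold_log10 = 1729 * log10(2)   # log10(2**1729) ≈ 520.5

atoms_in_universe_log10 = 80              # rough common estimate: ~10**80 atoms

print(f"required digit count: 2**1729 ≈ 10**{digit_threshold_log10:.0f}")
print(f"atoms in the observable universe ≈ 10**{atoms_in_universe_log10}")
print(f"the digit count exceeds the atom count by a factor of about "
      f"10**{digit_threshold_log10 - atoms_in_universe_log10:.0f}")
```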

Despite the fact that they will never be used, galactic algorithms may still contribute to computer science:

  • An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms.
  • Computer sizes may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
  • An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or alternatively show that conjectured bounds are wrong. As Lipton says: "This alone could be important and often is a great reason for finding such algorithms. For an example, if there were tomorrow a discovery that showed there is a factoring algorithm with a huge but provably polynomial time bound that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape the future research into factoring." Similarly, an O(N^(2^100)) algorithm for the Boolean satisfiability problem, although unusable in practice, would settle the P versus NP problem, the most important open problem in computer science. One immediate practical effect would be to earn the discoverer a million-dollar prize from the Clay Mathematics Institute.
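To see why even a provably polynomial bound like O(N^(2^100)) would remain unusable, it helps to compare it with plain exhaustive search over all 2^N truth assignments. The sketch below (Python, hypothetical step counts, working in base-2 logarithms because the actual numbers cannot be materialized) shows that the "exponential" brute force wins until N is itself on the order of 2^100:

```python
from math import log2

# Compare the (base-2 log of the) step counts:
#   hypothetical "polynomial" SAT algorithm:  N ** (2**100)  ->  log2 = (2**100) * log2(N)
#   exhaustive search over assignments:       2 ** N         ->  log2 = N
def log2_poly(n):   # hypothetical O(N**(2**100)) algorithm
    return (2**100) * log2(n)

def log2_brute(n):  # try all 2**N truth assignments
    return float(n)

for n in (10**3, 10**6, 10**9, 2**100, 2**110):
    better = "polynomial algorithm" if log2_poly(n) < log2_brute(n) else "brute force"
    print(f"N = 2**{log2(n):6.1f}: faster method is {better}")
```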

Examples

There are several well-known algorithms with world-beating asymptotic behavior, but only on impractically large problems:

  • Matrix multiplication: The first improvement over brute-force multiplication, which is O(N^3), is the Strassen algorithm, a recursive algorithm that is O(N^2.807). This algorithm is not galactic and is used in practice (a minimal Strassen sketch appears after this list). Further extensions of it, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, delivering O(N^2.373). These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."
  • Claude Shannon showed a simple but impractical code that could reach the capacity of a channel. It requires assigning a random code word to every possible N-bit message, then decoding by finding the closest code word. If N is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any N big enough to beat existing codes is also completely impractical. These codes, though never used, inspired decades of research into more practical algorithms that today can come quite close to channel capacity (a toy version of the random-code construction is sketched after this list).
  • The problem of deciding whether a graph G contains H as a minor is NP-complete in general, but where H is fixed, it can be solved in polynomial time. The running time for testing whether H is a minor of G in this case is O(n^2), where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H.
  • For cryptographers, a cryptographic "break" is anything faster than a brute-force attack – i.e., performing one trial decryption for each possible key in sequence (a toy brute-force search is sketched after this list). In many cases, even though they are the best known methods, these attacks are still infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only 2^126 operations. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns.
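As promised above, here is a minimal sketch of the Strassen recursion from the matrix-multiplication example (Python; it assumes square matrices whose size is a power of two and omits the padding and switch-to-naive-below-a-cutoff that a practical implementation would need):

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four h x h quadrants.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Seven products instead of the naive eight: the exponent 2.807 is log2(7).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    # Reassemble the quadrants.
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```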
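Shannon's random-coding construction from the second example can likewise be mimicked at toy scale: assign every message a random codeword and decode to the nearest one in Hamming distance. In the sketch below (Python), the parameters K, N, and FLIP and the bit-flip channel are illustrative choices, not from the source; real block lengths would be astronomically larger:

```python
import random

random.seed(0)
K = 4        # message bits -> 2**K possible messages
N = 32       # codeword length
FLIP = 0.05  # probability the channel flips each bit

# Shannon's construction: assign every possible message a uniformly random codeword.
codebook = {msg: tuple(random.randint(0, 1) for _ in range(N)) for msg in range(2**K)}

def transmit(word):
    return tuple(b ^ (random.random() < FLIP) for b in word)

def decode(received):
    # Exhaustive nearest-codeword search: the step that makes the scheme impractical,
    # since the table has 2**K entries and real K must be large.
    return min(codebook, key=lambda m: sum(a != b for a, b in zip(codebook[m], received)))

trials = 1000
ok = sum(decode(transmit(codebook[m])) == m
         for m in (random.randrange(2**K) for _ in range(trials)))
print(f"decoded correctly in {ok}/{trials} trials")
```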
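Finally, the brute-force baseline that defines a cryptographic "break" is just one trial decryption per candidate key. The toy below (Python) substitutes a repeating-XOR "cipher" with a deliberately tiny 16-bit keyspace for a real cipher; against AES-128 the same loop would need on the order of 2^128 trial decryptions:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher: repeating-key XOR (encryption == decryption).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = bytes([0x3a, 0x7c])                   # 16-bit keyspace: 2**16 keys
ciphertext = xor_cipher(b"attack at dawn", secret_key)

# Brute force: one trial decryption per candidate key, in sequence.
for candidate in itertools.product(range(256), repeat=2):
    key = bytes(candidate)
    if xor_cipher(ciphertext, key) == b"attack at dawn":  # known-plaintext check
        print(f"recovered key {key.hex()} after trying candidate {candidate}")
        break
```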
