
Busting (some, anyway) myths of P versus NP


Mythbusters time! Sorry, we couldn't get Adam and Jamie along for this article, and we've completely run out of ballistics gel. It's time, though, to have a look at some myths and misconceptions which have grown up around the P versus NP problem and try to put them to rest. Here we go:

Myth 1 : If P=NP then internet security is annihilated

Easy to see where this thinking comes from. A lot of internet security relies on results being hard to obtain yet relatively easy to check. For example, the RSA encryption algorithm relies on the fact that while it is relatively easy (at least, for a computer) to take two huge prime numbers and multiply them together, it is vastly more difficult for someone else to take the resulting product and work backwards to determine what the two original prime factors were.
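To make that asymmetry concrete, here's a toy sketch in Python. The primes are tiny by cryptographic standards and chosen purely for the demo - nothing here resembles a real RSA key:

```python
import time

# Toy illustration only: real RSA moduli are ~2048 bits; these primes
# are arbitrary small choices for the demo.
p = 1000003
q = 1000033

n = p * q          # multiplying is instant, even for huge primes

def naive_factor(n):
    """Recover a factor of n by trial division. The work grows like
    sqrt(n), i.e. exponentially in the number of digits of n."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

start = time.perf_counter()
print(naive_factor(n))                          # (1000003, 1000033)
print(f"{time.perf_counter() - start:.3f} seconds")
```

Even for this 13-digit n, undoing the multiplication takes hundreds of thousands of trial divisions; double the number of digits and trial division becomes hopeless.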

There is a rule of thumb among mathematicians and computer scientists which goes along the lines of : polynomial time = easy, anything worse = difficult. This holds so often that it has been given a name, "Cobham's Thesis" (named after Alan Cobham), which identifies the feasibly-solvable problems with those in P. But it turns out this is not true 100% of the time.

For example, whilst no polynomial-time algorithm has been found for factorising a large integer, the task of determining whether or not a large integer is prime actually is solvable in polynomial time (the AKS primality test, published in 2002)!
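As a hedged illustration, here is the Miller-Rabin test - the fast probabilistic primality check used in practice, rather than the deterministic AKS algorithm itself - showing that primality of n can be decided in time polynomial in the number of digits of n:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test.

    Not the AKS algorithm (the 2002 result that put primality testing
    in P), but it shows the key asymmetry: testing whether n is prime
    takes time polynomial in the number of digits of n, while no known
    algorithm *factors* n in polynomial time.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as 2^r * d with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                 # modular exponentiation: cheap
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # found a witness: n is composite
    return True

print(is_probable_prime(2**127 - 1))     # True -- a 39-digit Mersenne prime
```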

And the point that many people overlook is that a theoretical algorithm may well run in polynomial time yet have such enormous constant factors or exponents that it is still impractical to execute in a reasonable amount of time with current technology.

So : whether or not any specific cryptographic algorithm is annihilated by P=NP depends on the constants and exponents involved. If we found an O(n^3)-time algorithm for breaking the cipher, then I think it's fair to say the cipher is pretty much destroyed as a security measure - short of using such colossal keys that it becomes impractical. On the other hand, if someone found an O(n^100)-time algorithm, then theoretically, yes, the cipher is broken in polynomial time, but in practical terms the attack would still take such a long time that, with a suitably long key, we can overcome this shortcoming for all practical purposes.
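A quick back-of-the-envelope sketch makes the point. The 2048-bit key size and the billion-operations-per-second machine below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: n = 2048 (a common RSA key size in bits) on a
# hypothetical machine doing 10^9 basic operations per second.
n = 2048

for label, ops in [("n^3", n**3), ("n^100", n**100)]:
    digits = len(str(ops))                    # ops is about 10^(digits-1)
    print(f"{label}: ~10^{digits - 1} operations, ~10^{digits - 10} seconds")

# n^3  : ~10^9 operations   -> seconds of work; utterly broken as a cipher
# n^100: ~10^331 operations -> the universe is only ~4 x 10^17 seconds old
```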

Myth 2 : If P is not equal to NP then internet security is safe

Well, I don't think I'd get into too much trouble for saying it's safer than if P=NP, but that's no reason to get complacent.

In a kind of mirror image of myth 1, it turns out that just because a problem is NP-complete, that doesn't necessarily make it difficult to solve in practice.

From Wikipedia : "even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the travelling salesman problem and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low."
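To get a feel for why NP-complete doesn't mean hopeless, here's a minimal sketch of the DPLL procedure that underlies modern SAT solvers. Real solvers (MiniSat, CaDiCaL and friends) pile clause learning and clever heuristics on top; this toy version shows only the core ideas of simplification, unit propagation and backtracking:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL satisfiability solver - an illustrative sketch only.

    A clause is a list of non-zero ints: 3 means "variable 3 is true",
    -3 means "variable 3 is false" (the usual DIMACS convention).
    Returns a satisfying assignment as a dict, or None if unsatisfiable.
    """
    assignment = assignment or {}
    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                            # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                         # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                       # all clauses satisfied
    # Unit propagation: a single-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            lit = clause[0]
            return dpll(simplified, {**assignment, abs(lit): lit > 0})
    # Otherwise branch on the first unassigned variable.
    var = abs(simplified[0][0])
    return (dpll(simplified, {**assignment, var: True})
            or dpll(simplified, {**assignment, var: False}))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))        # {1: True, 3: True, 2: False}
```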

Myth 3 : If a problem is NP-hard then the only way to solve it is by trying out every possibility in turn

Following on from myth 2 above - not necessarily. For example, the well-known knapsack problem, stated as an optimisation problem (given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible), is NP-hard.

However, several algorithms are available for solving knapsack problems in practice; these are based on dynamic programming, branch-and-bound approaches, or some combination of the two.

Courtesy of Zoltán Mann from the Budapest University of Technology and Economics :

[you can construct an algorithm using dynamic programming] ... "that uses only time proportional to n·C (where n is the number of items and C is the capacity of the knapsack). This is a so-called pseudo-polynomial algorithm : its runtime is in general not polynomial with respect to the input's size (because C can be exponentially high), but if C is polynomially bounded, then it is a polynomial-time algorithm, and for many practical settings, it is significantly faster than checking all 2^n possible subsets."
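Here's a minimal sketch of that dynamic programming approach in Python (the items and capacity at the bottom are made up purely for illustration). The capacity-indexed table is exactly what gives the O(n·C) pseudo-polynomial running time Mann describes:

```python
def knapsack(items, capacity):
    """0/1 knapsack by dynamic programming in O(n * C) time.

    `items` is a list of (weight, value) pairs; `capacity` is C.
    Pseudo-polynomial: the table has C + 1 entries, and C can be
    exponential in the *size* (digit count) of the input.
    """
    best = [0] * (capacity + 1)     # best[c] = max value within capacity c
    for weight, value in items:
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# A tiny illustrative instance (weights and values chosen arbitrarily):
items = [(3, 60), (4, 70), (5, 90), (2, 30)]
print(knapsack(items, capacity=8))   # 150: take the (3, 60) and (5, 90) items
```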

Myth 4 : The P versus NP problem only really applies to conventional computing. With quantum computing, every problem can be solved in polynomial time.

This assertion seems very reasonable indeed, and is generally based on something like the following line of thinking : current technology and algorithms work like "deterministic" Turing machines, trying out different possibilities in turn. With quantum computers, this all changes! The quantum computer can enter a "superposition" of states, exploring them all simultaneously; when one of these states produces the "correct answer", the wavefunctions of all the other answers cancel out and collapse, leaving us - as they say on the game show "Who Wants To Be A Millionaire" - with the one remaining correct answer! So a quantum computer is essentially a non-deterministic Turing machine.

Actually, as it turns out ... no it isn't, and this is a rather simplistic picture of how quantum computing works. Measurement doesn't magically pluck out the right answer : the computation has to be choreographed so that the wrong answers interfere destructively, and nobody knows how to do that for NP-complete problems. The best general-purpose quantum search known, Grover's algorithm, gives only a quadratic speedup over classical brute force, and NP-complete problems are not known to be solvable in polynomial time even on a quantum computer.
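To put a number on what quantum search actually buys you, here's a toy statevector simulation of Grover's algorithm in NumPy (the 10 qubits and the marked index are arbitrary choices). It finds one marked item among N = 1024 in about (π/4)·√N ≈ 25 iterations - a quadratic speedup over the ~N/2 guesses classical brute force needs, not the exponential speedup the myth imagines:

```python
import numpy as np

n = 10                      # qubits
N = 2 ** n                  # search space size
marked = 137                # index of the "correct answer" (arbitrary choice)

# Start in the uniform superposition over all N basis states.
state = np.full(N, 1 / np.sqrt(N))

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print(iterations)                        # 25 iterations for N = 1024
print(state[marked] ** 2)                # ~0.999: probability of measuring 137
```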

As this article has gone on for long enough, if you are interested in reading further about the limitations of quantum computing, I can seriously recommend this excellent article by Scott Aaronson, which is extremely user-friendly and approachable and gives you a superb insight:

http://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf

Thanks for reading the article. MYTHS BUSTED !
