I am very excited that Northeastern is hosting a series of lectures by Cris Moore of the Santa Fe Institute. Moore has his background in physics, but he has also distinguished himself in the fields of mathematics, computer science, and network science.
Basically, he knows pretty much everything.
He began his lecture series yesterday with a talk on “computational complexity and landscapes.” Exploring qualitative differences between different types of algorithms, he essentially sought to answer the question: why are some problems “hard”?
Of course, “hard” is a relative term. In computer science, the word has a very specific meaning which requires some history to explain.
In 1971, Stephen Cook posed a question which remains open in computer science today: if a solution to a problem can be (quickly) verified by a computer, can that problem also be (quickly) solved by a computer?
This question naturally sorts problems into two categories: those that can be solved in polynomial time (P) and those for which an answer can be checked in polynomial time (NP, for “nondeterministic polynomial time”).
Consider the famous Bridges of Königsberg problem, which Euler took up in 1736: is there a walk through the city that crosses each of its seven bridges exactly once?
Now you may start to appreciate the difference between finding a solution and checking one. Given a map of Königsberg, you might try a path and then try another path. If you find that neither of these paths works, you have accomplished exactly nothing: you still don’t have a solution, but neither have you proven that none exists.
Only if you spent all day trying each and every possibility would you know for certain whether a solution exists.
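That exhaustive strategy is easy enough to sketch in code. In this minimal Python sketch (the land-mass labels N, S, I, E and the bridge list are my own encoding of the city’s seven bridges, not anything from the lecture), we try every possible way of chaining unused bridges together:

```python
# The seven bridges of Königsberg as edges between four land masses:
# N (north bank), S (south bank), I (Kneiphof island), E (east bank).
# These labels are my own shorthand.
BRIDGES = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]

def walk_exists():
    """Exhaustively search for a walk crossing every bridge exactly once."""
    def extend(here, remaining):
        if not remaining:
            return True  # every bridge has been crossed
        for i, (a, b) in enumerate(remaining):
            rest = remaining[:i] + remaining[i + 1:]
            # A bridge can be crossed in either direction from `here`.
            if a == here and extend(b, rest):
                return True
            if b == here and extend(a, rest):
                return True
        return False
    # Try starting the walk from each land mass.
    return any(extend(start, BRIDGES) for start in "NSIE")

print(walk_exists())  # False: no such walk exists
```

With only seven bridges the search finishes instantly, but the number of possibilities grows factorially as bridges are added, which is exactly the sense in which brute force is “hard.”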
Now that is hard.
If, on the other hand, you were equipped with both a map of Königsberg and a Marauder’s map indicating the perfect route, it would be relatively easy to verify that you’d been handed a perfect solution. Once you have the right route it is easy to show that it works.
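Checking a proposed route, by contrast, takes only one pass over it. Here’s a hedged sketch (again using my own hypothetical N/S/I/E labels): a route is a list of crossings in order, and we verify that the crossings chain together and that each bridge is used exactly once.

```python
from collections import Counter

# My own encoding of the seven bridges between four land masses.
BRIDGES = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]

def verifies(route):
    """Check a proposed route, given as (origin, destination) crossings.
    Runs in time linear in the length of the route."""
    # Every bridge must be used exactly once (direction doesn't matter).
    used = Counter(tuple(sorted(c)) for c in route)
    available = Counter(tuple(sorted(b)) for b in BRIDGES)
    if used != available:
        return False
    # Consecutive crossings must chain: each starts where the last ended.
    return all(route[i][1] == route[i + 1][0] for i in range(len(route) - 1))

# Any candidate route over Königsberg's bridges fails one of the checks.
print(verifies([("N", "I"), ("I", "S"), ("S", "E"), ("E", "I"),
                ("I", "N"), ("N", "E"), ("S", "I")]))  # False
```

This is the asymmetry at the heart of NP: verifying the Marauder’s map is cheap even when finding it is not.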
Of course, if the story were that simple, “P versus NP” wouldn’t have remained an open problem in computer science for forty years. Euler famously solved the Bridges of Königsberg problem without using the exhaustive technique described above. Instead, in a proof that laid the foundations of network science, Euler reduced the problem to one of vertices and edges. Since the connected path that he was looking for requires that you both enter and leave each land mass along the way, a land mass met by an odd number of bridges poses problems: it can only be the start or the end of the walk. In Königsberg, all four land masses meet an odd number of bridges, so eventually you will get stuck.
In this way, Euler was able to prove that there was no such path without having to try every possible solution. In computer science terms, he was able to solve it in polynomial time.
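Euler’s shortcut amounts to nothing more than a degree count, which takes time linear in the number of bridges. A quick sketch, using the same hypothetical N/S/I/E labels for the four land masses:

```python
from collections import Counter

# My own encoding of the seven bridges between four land masses.
BRIDGES = [("N", "I"), ("N", "I"), ("S", "I"), ("S", "I"),
           ("N", "E"), ("S", "E"), ("I", "E")]

# Count how many bridges meet each land mass (its "degree").
degree = Counter()
for a, b in BRIDGES:
    degree[a] += 1
    degree[b] += 1

# A walk crossing every bridge exactly once exists only if at most
# two land masses have odd degree; in Königsberg, all four do.
odd = [land for land, d in degree.items() if d % 2 == 1]
print(len(odd))  # 4, so no such walk exists
```

One linear pass replaces the factorial search: that is what it means to reduce an apparently hard problem to an easy one.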
This is the fundamental challenge of the P versus NP dilemma: a problem which initially appears to be “NP” may be reducible to something that is simply “P.”
There are many problems which have not been successfully reduced, and there’s good reason to think that they never will be. But proving that has, of course, the flavor of an NP problem itself: to show that no such reduction exists, you would somehow have to rule out every possible reduction.
As Cris Moore said, “the sky would fall” if it turned out that all NP problems are reducible to P.
To get a sense of the repercussions of such a discovery: a problem is defined as NP-hard if it is “at least as hard as the hardest problems in NP.” That is, other NP problems can be reduced, in polynomial time, to NP-hard problems.
That is to say, if you found a quick solution to an NP-hard problem, it could also be applied as a quick solution to every problem in NP. Thousands of problems which currently require exhaustive search could suddenly be solved in polynomial time.
The sky would fall indeed.
His lecture series will continue on Thursdays through November 12.