That was my thought initially too... since the number of solutions of a TSP with n cities is n! ... which grows much faster than c^n (which they are implying, with c == 10).
But apparently if you solve the TSP using dynamic programming, you can do it in roughly c^n time (at least according to the Wikipedia page on Big-O notation... I haven't researched it much further). That's still exponential, nowhere near polynomial, but it would provide some basis for their claim that each additional city makes the problem a constant factor harder.
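(For the curious: the dynamic program in question is Held-Karp, which runs in O(n^2 * 2^n) time, so the per-city growth factor is closer to 2 than to 10. Here is a minimal Python sketch; the function name and the distance-matrix format are just illustrative, not anyone's reference implementation:)

```python
from itertools import combinations

def held_karp(dist):
    """dist[i][j] = distance from city i to city j; returns the length of the shortest tour."""
    n = len(dist)
    # best[(S, j)]: length of the shortest path that starts at city 0,
    # visits every city in the frozenset S exactly once, and ends at city j.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))
```

Even so, the table has on the order of n * 2^n entries, so each extra city roughly doubles the work and the memory, and instances beyond a few dozen cities are already out of reach this way.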
It's not just that. The article makes three attempts to explain what P = NP means in the first few paragraphs (which is about half of the article). All of them sloppy. It is mostly an elaborate dance around actually defining what polynomial and exponential mean, but good enough to give most readers a feel for the problem.
cool to see mainstream news coverage of a theoretical CS problem. it'd be even cooler if they described one compelling concrete example of a specific way that the world would differ if P=NP, or if P!=NP
To paraphrase a recent blog post (possibly by rjlipton?), one good reason to avoid a specific concrete example is that the answer to P=NP or P!=NP isn't quite as important as the specific time complexities involved in solving NP problems - for most practical purposes 1.000000001^n would be much more favourable than n^1000000000.
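To make that concrete, here is a quick back-of-the-envelope comparison in Python, using exactly the numbers from the comment above; it works in logarithms because the actual values overflow anything representable:

```python
import math

# Compare 1.000000001^n against n^1000000000 by comparing their natural logs.
for n in [10**3, 10**6, 10**9, 10**12]:
    log_exponential = n * math.log(1.000000001)   # log(1.000000001^n)
    log_polynomial = 10**9 * math.log(n)          # log(n^(10^9))
    print(n, log_exponential, log_polynomial)
```

For every n in that range the "exponential" is astronomically smaller; it only overtakes the polynomial at input sizes far beyond anything you would ever actually run.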
You've got 100 rocks of various weights and three buckets. You have to distribute the rocks among the buckets (yes, you have to use ALL the rocks) so that the total weight in each bucket is as close to the others as possible.
What's interesting is that, once someone hands you an assignment of rocks to buckets, you can check how balanced it is pretty quickly. But finding the best assignment can take a very long time. Unless P=NP, in which case finding one still may not be as fast as checking one, but it would at least be reasonable.
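To illustrate that asymmetry with a toy Python sketch (the helper names are hypothetical): checking a proposed assignment takes time linear in the number of rocks, while the obvious exhaustive search looks at all 3^n ways of assigning rocks to buckets, which is hopeless for 100 rocks.

```python
from itertools import product

def imbalance(rocks, assignment):
    # Fast check: given a bucket index (0, 1 or 2) for each rock,
    # report how far apart the heaviest and lightest buckets are.
    totals = [0, 0, 0]
    for weight, bucket in zip(rocks, assignment):
        totals[bucket] += weight
    return max(totals) - min(totals)

def brute_force(rocks):
    # Exhaustive search: tries all 3^n assignments, so it only works for tiny n.
    return min(product(range(3), repeat=len(rocks)),
               key=lambda a: imbalance(rocks, a))
```

Checking any single candidate is cheap; what blows up is the search over candidates (and, strictly speaking, certifying that no better candidate exists).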
I read a LOT about that problem at one point and seem to recall that someone had proven that a naive solution was within something on the order of 17% of optimal in the worst case.
There are a lot of results like that. There are some pretty fast algorithms that run in polynomial time and are provably "not so bad" compared to the optimal solution. The problem is that you generally can't combine these with problem reductions to get a "not so bad" solution to a different problem (at least as far as I know).
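As a concrete example of that kind of "fast but not so bad" algorithm, here is the classic greedy heuristic for the rock/bucket problem above: sort the rocks heaviest-first and always drop the next rock into the currently lightest bucket. It runs in O(n log n) and is provably within a constant factor of the optimal heaviest-bucket weight (the exact bound depends on the variant; this is only a sketch, not necessarily the specific result recalled above, and the function name is made up).

```python
import heapq

def greedy_partition(rocks, buckets=3):
    # Greedy (longest-processing-time style) heuristic: place each rock,
    # heaviest first, into whichever bucket is currently lightest.
    heap = [(0, b) for b in range(buckets)]        # (current total, bucket index)
    assignment = {}
    for i, weight in sorted(enumerate(rocks), key=lambda x: -x[1]):
        total, bucket = heapq.heappop(heap)        # lightest bucket so far
        assignment[i] = bucket
        heapq.heappush(heap, (total + weight, bucket))
    return assignment                              # rock index -> bucket index
```

The guarantee is specific to this problem, though; as noted above, push an instance through a reduction to some other NP-complete problem and the approximation ratio generally does not survive.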
I'm confident that if there is a solution, it will involve trickery on the order of Gödel's Incompleteness Theorem. Something like feeding the property of validation back into the algorithm somehow. But what do I know...
I'm not talking about actually solving P vs NP. I'm talking about approximation algorithms. My point is this... let's say you have two problems A and B. Even if you have an approximation algorithm that solves A to within 50% of optimal, reducing B to A, solving A, and converting the solution back into one for B won't necessarily give you a 50% approximation for B.
(Sorry if I missed your point and this explanation is redundant).
Yeah, definitely. There's a lot of potential in approximation (something humans are naturally good at for NP-complete problems). My little irrelevant point was that approximations offer incremental progress (as you've noted, along with the fact that they aren't necessarily transferable across reductions), but are unlikely to lead to a revolutionary solution.
Proving P=NP (constructively) would mean exhibiting a polynomial-time algorithm for a problem whose answers we currently only know how to verify in polynomial time, not find. Once you have such an algorithm for one NP-complete problem, every other problem in that class can be reduced to it, so they would all be pretty much solved too. We're talking about thousands of such problems.
It's funny because in the average CS degree the P=NP problem isn't even covered (everyone in my sample, at least, learned about it online). I don't know if I know a concrete example of a specific way the world would differ either. Maybe quicker routing, better algorithms for things like cancer research, faster simulations, broken encryption, that kind of stuff?
Really? Where did you get your CS degree? I thought complexity theory is an essential part of any algorithms module, and an algorithms module is in turn an essential part of any CS degree. I'd hate to have to hire a CS grad who hasn't taken a class in algorithms.
Are you sure you didn't just not turn up that day? >:)
We did comprehensive work on algorithmic complexity, data structures, etc. But NP-completeness itself was never even touched on, beyond a side reference to the travelling salesman problem at best.
To add a personal experience to what you're saying: it's been taught in a fair amount of detail in two of my classes (at the University of Utah) and has been glossed over in a few others.
Besides the obvious repercussions (some optimization problems get easier, a lot of crypto and, I think, almost every one-way function turn to junk), P=NP does something else: it means mathematicians are unnecessary.
> One can obviously easily construct a Turing machine, which for every formula F in first order predicate logic and every natural number n, allows one to decide if there is a proof of F of length n (length = number of symbols). Let ψ(F,n) be the number of steps the machine requires for this and let φ(n) = max_F ψ(F,n). The question is how fast φ(n) grows for an optimal machine. One can show that φ(n) ≥ k ⋅ n. If there really were a machine with φ(n) ∼ k ⋅ n (or even ∼ k ⋅ n²), this would have consequences of the greatest importance. Namely, it would obviously mean that in spite of the undecidability of the Entscheidungsproblem, the mental work of a mathematician concerning Yes-or-No questions could be completely replaced by a machine. After all, one would simply have to choose the natural number n so large that when the machine does not deliver a result, it makes no sense to think more about the problem.
Being computer users, we should be astonished, since we know that everything is math, or can be made math. But here's a restatement from Scott Aaronson (http://scottaaronson.com/blog/?p=190):
> That said, certainly if P=NP there would be dramatic consequences for AI. For example, you could efficiently find the shortest program that outputs (say) the complete works of Shakespeare or stock market data in a reasonable amount of time. The question is just how much that would do for you — i.e. once you had that short description, could you use it to predict the future or write plays that Shakespeare himself never wrote?
I don't think that would really excite most people. Most people probably think mathematicians are largely unnecessary to begin with. Even among people who recognize the value of mathematics, proving new theorems isn't really exciting for them.
> the computing time required to solve them increases exponentially. In other words, one more city makes the problem 10 times harder.
That's just sloppy.