P vs NP is not really that important because n^1000 might as well be NP. However, understanding whether something is O(1) vs O(log x) vs O(x) vs O(x log x) vs O(x^2), and knowing that you can often drop down a level depending on which tools you use, becomes very important the second you start testing small datasets to represent large ones. Wow, 1000 takes half a second and 2000 takes 2 seconds; I wonder how long it's going to take when I dump a million in there?
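To make that extrapolation concrete, here's a minimal sketch. It counts operations instead of measuring wall-clock time so the numbers are exact: doubling the input roughly quadruples an O(n^2) workload, and the "2 seconds at n = 2000" figure implies days at a million. The function name `pair_ops` is just illustrative.

```python
# Hypothetical illustration: count the pairwise comparisons a naive
# all-pairs O(n^2) scan performs, then extrapolate as the comment describes.
def pair_ops(n):
    """Number of pairwise comparisons in a naive all-pairs scan."""
    return n * (n - 1) // 2

# Doubling n from 1000 to 2000 roughly quadruples the work...
ratio = pair_ops(2000) / pair_ops(1000)

# ...so if 2000 items take ~2 seconds, a million items take about
# 2 s scaled by the growth in comparison count: on the order of days.
seconds = 2 * pair_ops(1_000_000) / pair_ops(2000)

print(round(ratio, 2))             # ~4
print(round(seconds / 86_400, 1))  # ~5.8 days
```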
I have written low-level networking code, and you can abstract that into high-level networking code. You can make handling thousands of threads easy. But there is nothing to abstract away when you want to know everything within 10 feet of each object in a list. Granted, there are ways of solving that for a billion-object list, but they all depend on how the data is set up; there is no general solution.
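One common way the "everything within 10 feet" query gets solved, assuming points in a plane, is a uniform grid (spatial hash) with the cell size equal to the query radius, so all candidates live in the 3x3 block of cells around a point. This is a sketch, not a general solution; the names (`RADIUS`, `build_grid`, `neighbors_within`) are illustrative.

```python
from collections import defaultdict
from math import floor, hypot

RADIUS = 10.0  # query radius; also the grid cell size

def build_grid(points):
    """Bucket each (x, y) point into a cell of side RADIUS."""
    grid = defaultdict(list)
    for p in points:
        cell = (floor(p[0] / RADIUS), floor(p[1] / RADIUS))
        grid[cell].append(p)
    return grid

def neighbors_within(grid, p):
    """All points within RADIUS of p: only the 3x3 cell block is scanned."""
    cx, cy = floor(p[0] / RADIUS), floor(p[1] / RADIUS)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for q in grid.get((cx + dx, cy + dy), []):
                if q is not p and hypot(q[0] - p[0], q[1] - p[1]) <= RADIUS:
                    out.append(q)
    return out

pts = [(0.0, 0.0), (3.0, 4.0), (50.0, 50.0)]
g = build_grid(pts)
print(neighbors_within(g, pts[0]))  # [(3.0, 4.0)]; (50, 50) is too far
```

The catch is exactly the one in the comment: this only works because the data is uniform-ish 2D points with a fixed radius. Clustered data, varying radii, or higher dimensions push you toward k-d trees or other structures, each with its own trade-offs.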
"P vs NP is not really that important because n^1000 might as well be NP"
And yet P seems to capture pretty well the concept of "problems for which there are efficient algorithms." For whatever reason, no algorithm seems to have a complexity like O(n^1000). I don't know why. I'm not sure anyone does. But the highest exponent I can think of right now is n^12 for the original upper bound on the AKS primality testing algorithm, and I think that was later reduced to n^6.
What are the units? 10^14 is indeed far too much if it's seconds, but might not be beyond reason for a set of size 10 million if it's nanoseconds (then you have a bit over a day.) So think about which units this would actually have.
Indeed, an algorithm being in P may not be sufficient to scale, but if a problem is NP-complete (assuming P != NP) there's almost certainly no scalable algorithm for it.
I should have said: if N means 10 million, then N^2 is 10^14... Anyway, I tend to think of 10^10 nanoseconds (10 seconds) as very bad and 10^14 nanoseconds (more than a day) as unreasonable, but that's just a rule of thumb from the days of 1 GHz CPUs.
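The arithmetic behind that rule of thumb, assuming roughly one operation per nanosecond (a ~1 GHz machine doing one op per cycle):

```python
# Units check: 10^14 operations at ~1 ns each.
NS_PER_SECOND = 10**9
ops = 10**14                      # N^2 when N is 10 million
seconds = ops / NS_PER_SECOND
print(seconds)                    # 100000.0 seconds
print(round(seconds / 86_400, 2)) # a bit over a day
```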