Most CS students I've met hardly understand complexity theory. They just know which complexity class each algorithm falls into. And 90% of those that do know complexity theory will never use it in their life. And when they do need it, they need it once or twice in most types of projects. It's a very specialised topic, but just like the singleton pattern, it's easy to understand and sounds impressive, so people tend to namedrop it.
That's not real CS. CS is mostly about managing complexity across disparate intercommunicating systems, not about the speed of algorithms.
P vs NP is not really that important because n^1000 might as well be NP. However, understanding whether something is O(1) vs O(log x) vs O(x) vs O(x log x) vs O(x^2), and knowing that you can often drop down a level depending on what tools you use, becomes very important the second you start testing small datasets to represent large ones. "Wow, 1000 takes half a second and 2000 takes 2 seconds... I wonder how long it's going to take when I dump a million in there?"
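To make that extrapolation concrete, here's a minimal timing sketch; the quadratic process() below is just a hypothetical stand-in for whatever you'd actually be benchmarking:

    import time

    def process(data):
        # stand-in O(n^2) workload: compare every pair of items
        count = 0
        for a in data:
            for b in data:
                if a < b:
                    count += 1
        return count

    for n in (1000, 2000, 4000):
        start = time.perf_counter()
        process(list(range(n)))
        elapsed = time.perf_counter() - start
        print(f"n={n}: {elapsed:.3f}s")

    # If doubling n roughly quadruples the time, you're looking at ~O(n^2),
    # so a million items will cost about (1_000_000 / 1000)^2 = a million
    # times whatever n=1000 took.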
I have written low-level networking code, and you can abstract that into high-level networking code. You can make handling thousands of threads easy. But there is nothing to abstract away when you want to know everything within 10 feet of each object in a list. Granted, there are ways of solving that for a billion-object list, but they all depend on how the data is set up; there is no general solution.
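For what it's worth, here's a rough sketch of that "everything within 10 feet" problem, assuming 2D points stored as (x, y) tuples (not the parent's actual data). The naive version is O(n^2); the grid version is the kind of data-dependent trick that only pays off when the points are spread out, which is exactly the "depends on how the data is set up" caveat:

    import math
    from collections import defaultdict

    RADIUS = 10.0  # "10 feet"

    def neighbors_naive(points):
        # O(n^2): check every pair
        out = {i: [] for i in range(len(points))}
        for i, (xi, yi) in enumerate(points):
            for j, (xj, yj) in enumerate(points):
                if i != j and math.hypot(xi - xj, yi - yj) <= RADIUS:
                    out[i].append(j)
        return out

    def neighbors_grid(points):
        # Bucket points into RADIUS-sized cells, then only compare against the
        # 3x3 neighborhood of cells. Roughly linear if points are spread out,
        # but degenerates back to O(n^2) if they all land in one cell.
        cells = defaultdict(list)
        for i, (x, y) in enumerate(points):
            cells[(int(x // RADIUS), int(y // RADIUS))].append(i)
        out = {i: [] for i in range(len(points))}
        for i, (x, y) in enumerate(points):
            cx, cy = int(x // RADIUS), int(y // RADIUS)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), []):
                        if j != i and math.hypot(x - points[j][0], y - points[j][1]) <= RADIUS:
                            out[i].append(j)
        return out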
"P vs NP is not really that important because n^1000 might as well be NP"
And yet P seems to capture pretty well the concept of "problems for which there are efficient algorithms." For whatever reason, no algorithm seems to have a complexity like O(n^1000). I don't know why. I'm not sure anyone does. But the highest exponent I can think of right now is n^12 for the original upper bound on the AKS primality testing algorithm, and I think that was later reduced to n^6.
What are the units? 10^14 is indeed far too much if it's seconds, but might not be beyond reason for a set of size 10 million if it's nanoseconds (then you have a bit over a day, about 28 hours). So think about which units this would actually have.
Indeed, an algorithm being in P may not be sufficient to scale, but if a problem is NP-complete (assuming P != NP) there's almost certainly no scalable algorithm for it.
I should have said: if N is 10 million, then N^2 is 10^14... Anyway, I tend to think of 10^10 nanoseconds (10 seconds) as very bad and 10^14 nanoseconds (more than a day) as unreasonable, but that's just a rule of thumb from the days of 1 GHz CPUs.
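To spell out the arithmetic behind that rule of thumb, assuming roughly one operation per nanosecond (about what a 1 GHz core buys you):

    # rough wall-clock time for a given operation count at ~1 ns per operation
    for ops in (10**10, 10**13, 10**14):
        seconds = ops * 1e-9
        print(f"{ops:.0e} ops -> {seconds:,.0f} s (~{seconds / 3600:.1f} h)")

    # roughly 10 seconds, 2.8 hours, and 28 hours respectively -- which is
    # where "10^10 ns is bad, 10^14 ns is unreasonable" comes from.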
I find complexity theory to be extremely important for CS students. In fact, complexity theory is exactly what differentiates the good programmers from the bad ones. Not that they use asymptotic analysis in their day job, but because they have an inherent sense of the cost of an algorithm. Now of course, in practice it might be the "bad" algorithm that wins because N is small enough. It often is.
But the masters inherently know when they need to switch to a more complex algorithm with a better bound. You can't get that without an understanding of complexity theory.
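As a toy illustration of both points, here are two hand-rolled searches (hand-rolled so the constant factors stay comparable; the exact numbers are machine-dependent):

    import timeit

    def linear_search(xs, target):
        # O(n): the "dumb" approach
        for i, x in enumerate(xs):
            if x == target:
                return i
        return -1

    def binary_search(xs, target):
        # O(log n): the "smart" approach, but with more work per step
        lo, hi = 0, len(xs)
        while lo < hi:
            mid = (lo + hi) // 2
            if xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo if lo < len(xs) and xs[lo] == target else -1

    for n, reps in ((4, 200_000), (10_000, 2_000)):
        xs = list(range(n))
        t_lin = timeit.timeit(lambda: linear_search(xs, n - 1), number=reps)
        t_bin = timeit.timeit(lambda: binary_search(xs, n - 1), number=reps)
        print(f"n={n}: linear {t_lin:.3f}s vs binary {t_bin:.3f}s over {reps} lookups")

    # At n=4 the two are in the same ballpark and the plain scan often wins;
    # at n=10,000 the O(log n) search wins easily. Knowing roughly where that
    # crossover sits is the intuition complexity theory buys you.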
There are several really interesting problems people might want an algorithm for that lie in NP. Granted, most ${STANDARD} programming is not about solving such problems. But then again, it was never the intention that people skilled in theoretical CS should spend their time on such problems. In this sense, the theoretical people tend to be far, far ahead of the "real world".