> I don't get why it seems like languages are designed for convenience then later down the road billions of hours are spent trying to get better performance.
It's a tradeoff - what are you trying to accomplish, and what would you rather spend your time doing?
More flexible languages are often also more expressive and have better tooling (e.g. for a CRUD app it's hard to find something nicer than Django / Rails), but they tend to be slower. The other end of the spectrum is performance-oriented languages, where everything is way more verbose and less expressive: there's a lot of boilerplate, and writing code that does something relatively simple is more onerous. So depending on what phase of development you're in and what you're doing, the "obvious" choice can be different. And as companies grow and mature, those goals shift too.
Granted, that gap is closing with compiled / systems languages getting more and more expressive (Rust, Nim, Go?, C++11/17?), but it's also closing with scripting languages getting faster (very blanket statements here).
The fact that Python, and some Java, but decidedly not C++, is the de facto standard tool for scientists these days should give you pause. Machine learning, GPU computing, large-scale data processing with Spark - you can do it all with Python. Sure, it's all C and Fortran and Java behind the scenes, but who cares? No one wants to bother with that if they can avoid it. They just need to get their job done.
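To make "C behind the scenes" concrete, here's a minimal sketch of the wrapping mechanism itself, using only the standard library. `ctypes` lets Python call straight into a compiled shared library - the same basic idea (usually via C extension modules) that NumPy and friends are built on. Assumes a system C math library is findable, which holds on typical Linux/macOS setups:

```python
# Illustrative sketch: Python as a thin driver over compiled code.
import ctypes
import ctypes.util

# Locate and load the C math library (libm); NumPy-style packages do the
# same kind of thing at a much larger scale via C extensions.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# The call site reads like ordinary Python, but the work runs in compiled C.
print(libm.sqrt(2.0))
```

The ergonomics are Python's; the arithmetic never touches the interpreter loop.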
Another example, closer to your requests-per-second point, is Japronto (https://medium.freecodecamp.com/million-requests-per-second-...) - turns out if you take the time to optimize the event handling and the HTTP parsing, suddenly you're back in the ballpark of compiled languages. Sure, other bits of code will then become the bottleneck, but those can be made faster too.
The "high level developer interface + compiled and optimized internals" combo is just very hard to beat, because it's always going to be good enough where it counts.
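That combo is easy to see even in the standard library. A small sketch: `json.loads` is one readable high-level call, but on CPython the hot parsing loop runs in the C accelerator module (`_json`) when it's available - you write the convenient code, the interpreter hands the grunt work to compiled internals:

```python
import json

# One high-level call; on CPython the actual parsing loop runs in C (_json).
payload = '{"users": [{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]}'
data = json.loads(payload)

# Back in expressive Python for the part where expressiveness matters.
names = [u["name"] for u in data["users"]]
print(names)  # ['ada', 'grace']
```

The developer-facing surface stays pleasant; the part that needs to be fast already is.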
Also, performance improvements for a language or for specific libraries are something that can be done as a centralized effort by people experienced and knowledgeable on that specific subject. Chances are most people aren't going to write a matrix multiplication that beats the BLAS stuff, so why bother unless you have an extremely good reason? This is much more efficient and happens "for free" from the point of view of the end-user developers (computers getting cheaper and faster is also free). So why bother?
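For illustration, this is the hand-rolled triple-loop matmul most of us would write - correct, but the kind of code a tuned BLAS (OpenBLAS, MKL, etc., which NumPy dispatches to) beats by orders of magnitude on large matrices through cache blocking, SIMD, and threading you never have to think about:

```python
# Naive O(n^3) matrix multiply - the baseline a tuned BLAS leaves in the dust.
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```

Writing this yourself buys you nothing over the centralized, expert-optimized version - which is exactly the point.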
I think Facebook / HHVM / PHP was sort of the boundary of how far this can go, but I'd call it a testament to the approach rather than evidence against it. Granted, not every company is Facebook, but like I said, the gap is closing too. I think dynamic languages will always become "fast enough" faster than static languages become expressive and nimble, and will often beat them in available tooling (I remember "package management" and build tools in C++, ugh.)