In real life, you will always start with a simple working implementation and iterate on it.
Then, if things are slow, you profile your code with a good profiler under some kind of real-life workload and spot the slow parts (keeping in mind that profiling itself may affect the program's behaviour).
Only after that should you consider alternatives with lower asymptotic complexity, and only if that part is actually what's causing the slowness.
Once I was asked to look at a project to see if there was any room for improvement to speed up the program.
After I profiled the program with the test data, I saw that it was severely affected by a "size" method call on a lock-free concurrent list. Since the data structure is lock-free, the size method is not a constant-time operation, and calling it on a large list takes too long. It was only there to print some kind of statistics, so I changed the code so that it is called only when necessary, not every time some operation occurs. This alone immediately made the program 2-3 times faster.
There were also some parts where I swapped in algorithms with lower algorithmic complexity to make them faster.
Overall, I made the program 6x faster. So sometimes you need to use fancy algorithms, and sometimes you just need to change one line of code after profiling.
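The anecdote above can be sketched in code. This is a minimal illustration, not the original program: it assumes a structure like Java's `ConcurrentLinkedQueue`, whose `size()` really is documented as O(n) because it traverses the whole list. The fix is the same shape as described: only pay for `size()` occasionally instead of on every operation.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.LongAdder;

// Sketch: avoid calling size() on a lock-free list in the hot path.
// ConcurrentLinkedQueue.size() is O(n) -- it walks the whole list.
public class StatsExample {
    private final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
    private final LongAdder ops = new LongAdder();

    public void process(String item) {
        queue.add(item);
        ops.increment();

        // Bad: printing stats on every call forces an O(n) traversal each time.
        // System.out.println("size=" + queue.size());

        // Better: only pay for size() occasionally, e.g. every 10,000 ops.
        if (ops.sum() % 10_000 == 0) {
            System.out.println("size=" + queue.size());
        }
    }

    public static void main(String[] args) {
        StatsExample s = new StatsExample();
        for (int i = 0; i < 20_000; i++) s.process("item" + i);
        System.out.println("final size=" + s.queue.size());
    }
}
```

The counter itself is a `LongAdder` rather than another `size()` call, so tracking the operation count stays O(1) even under contention.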
Although I mostly agree, there are certain simple, fundamental performance considerations you want to take into account when writing code the first time; otherwise you're just needlessly dealing with tons of performance issues in prod. Like: make sure your DB queries are well covered by indexes; if you're repeatedly looking up items in a list, turn it into a map or set first; in FE code, try to keep re-renders limited to the areas of the UI that need updating; etc.
Correctness and simplicity are almost always things you want to focus on over performance, but there's still SOME level of simple performance best practices you want to stick to up front. It's similar to the tradeoff between YAGNI and software architecture: if you don't start with some sort of coherent architecture, your project is gonna descend into a spaghetti mess. Ignore simple performance best practices and you'll spend way too much time fighting performance issues.
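The "turn a list into a map or set first" advice from the comment above can be shown in a few lines. This is a hedged sketch with made-up data, not anyone's real code: repeated `List.contains()` calls are O(n) each, while one O(n) pass to build a `HashSet` makes every subsequent lookup O(1) on average.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Repeatedly probing a List is O(n) per lookup; building a HashSet
// once turns each subsequent lookup into an average O(1) operation.
public class LookupExample {
    public static void main(String[] args) {
        List<Integer> haystack = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) haystack.add(i);

        int[] needles = {5, 50_000, 99_999, 123_456};

        // O(n) per lookup: fine once, costly inside a loop.
        // for (int n : needles) haystack.contains(n);

        // One O(n) pass to build the set, then O(1) per lookup.
        Set<Integer> index = new HashSet<>(haystack);
        for (int n : needles) {
            System.out.println(n + " found: " + index.contains(n));
        }
    }
}
```

The break-even point is low: with more than a handful of lookups against a non-trivial list, building the set up front already pays for itself.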
> Correctness and simplicity are almost always things you want to focus on over performance
Right, it's a balance good developers know how to tread. Obviously if you can use a set instead of a list and it's a one-line change, go for it. But as the meme in OP's post says, if you're gonna USE SEGMENT TREES and all, then it had better be worth the amount of complexity and time you're putting into it.
Part of what makes a good engineer is being able to quickly tell where it's worth optimizing and where it isn't. Or as the meme says, when nested loop goes BRRRRRR.
There is also such a thing as "design for performance," and sometimes - just sometimes - any amount of effort spent later on optimization will go nowhere, because what you are trying to optimize is in fact a huge steaming POS.