Brilliant insight. Always remember: (1) make it work, (2) make it right, (3) make it fast. 80% of projects get scrapped between (1) and (2) because you end up realizing you wanted something completely different anyway.
> (1) make it work, (2) make it right, (3) make it fast.
I've always disagreed with this. In my view you should make it a habit to write optimized code. This isn't agonizing over minor implementation details but keeping in mind the time complexity of whatever you are writing and working towards an optimal solution from the start. You should know which abstractions in your language are expensive and avoid them. You should know roughly the purpose of a database table you create and add the indexes that make sense, even if you don't intend to use them right away. You should know that thousands of method lookups in a tight loop will be slow. You should have a feel for "this is a problem someone else has probably solved; is there an optimal implementation I can find somewhere?". You should know when you'll use a value often and cache it from the start. Over time the gap between writing unoptimized and mostly optimized code gets smaller and smaller, just as practice improves any skill.
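To make one of those habits concrete, here's a minimal Java sketch of "cache the value you use often"; `TaxTable` and `rateFor` are hypothetical stand-ins for any non-trivial lookup:

```java
import java.util.List;

public class PriceTotals {

    // Hypothetical lookup table; stands in for any non-trivial method call.
    interface TaxTable {
        double rateFor(String region);
    }

    // Unoptimized: repeats an invariant lookup on every iteration of a tight loop.
    static double totalNaive(List<Double> prices, TaxTable taxes, String region) {
        double total = 0.0;
        for (double p : prices) {
            total += p * (1.0 + taxes.rateFor(region));
        }
        return total;
    }

    // The habit: cache the value you know you'll use often, from the start.
    static double totalCached(List<Double> prices, TaxTable taxes, String region) {
        double factor = 1.0 + taxes.rateFor(region); // looked up once
        double total = 0.0;
        for (double p : prices) {
            total += p * factor;
        }
        return total;
    }
}
```

The cached version is no harder to read, which is the point: this kind of optimization costs nothing in clarity.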
> In my view you should make it a habit to write optimized code.
It depends on your domain.
If you're writing for embedded, or games, or other things where performance is table stakes, then sure.
If you're writing code to meet (always changing) business requirements in a team with other people, writing optimized code first is actively harmful. It inhibits understandability and maintainability, which are the most important virtues of this type of programming. And this is true even if performance is important: optimizations, i.e. any implementation other than the most obvious and idiomatic, must always be justified with profiling.
You're mostly right, but even in typical LOB applications there is some low-hanging fruit you should really pay attention to. One common example is N+1 queries.
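For anyone who hasn't hit the pattern, a rough JDBC sketch (table and column names are made up) of an N+1 query next to the single-round-trip version:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

public class OrderLines {

    // N+1: one query already fetched the order ids, now one more query per order.
    static void nPlusOne(Connection conn, List<Long> orderIds) throws SQLException {
        for (long id : orderIds) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM order_lines WHERE order_id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    // ... consume rows for this single order
                }
            }
        }
    }

    // Single round trip: fetch every order's lines in one IN (...) query.
    static void batched(Connection conn, List<Long> orderIds) throws SQLException {
        String placeholders = String.join(",", Collections.nCopies(orderIds.size(), "?"));
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM order_lines WHERE order_id IN (" + placeholders + ")")) {
            for (int i = 0; i < orderIds.size(); i++) {
                ps.setLong(i + 1, orderIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                // ... group rows by order_id
            }
        }
    }
}
```

ORMs hide the loop, which is why the N+1 version is so easy to ship without noticing.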
And if you do find yourself writing an algorithm (something that happens more rarely in LOB applications, but can still happen occasionally), it's probably still good to pick one from a lower complexity class, provided it isn't much harder to understand and doesn't have other significant drawbacks. I remember once accidentally creating an algorithm with a complexity of O(n!).
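As a toy illustration of dropping a complexity class without hurting readability (my own example, not from the parent): duplicate detection, quadratic scan versus a set.

```java
import java.util.HashSet;
import java.util.Set;

public class Duplicates {

    // O(n^2): the obvious nested scan.
    static boolean hasDuplicateQuadratic(int[] xs) {
        for (int i = 0; i < xs.length; i++) {
            for (int j = i + 1; j < xs.length; j++) {
                if (xs[i] == xs[j]) return true;
            }
        }
        return false;
    }

    // O(n) expected: a set remembers what we've seen, at no real cost to readability.
    static boolean hasDuplicateLinear(int[] xs) {
        Set<Integer> seen = new HashSet<>();
        for (int x : xs) {
            if (!seen.add(x)) return true; // add() returns false if already present
        }
        return false;
    }
}
```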
> You should know that thousands of method lookups in a tight loop will be slow.
That's not always the case. Modern compilers do a lot of things like inlining and unrolling. These days I mostly try to write code that is easy to understand.
> Modern compilers do a lot of things like inlining and unrolling
Smart ones do. I've been writing Java lately, and that behavior tends to be unpredictable and rare[0]. I'd use an inline keyword if I had one, or a preprocessor directive of some kind if I had that, but I don't. I agree it's harder to read, but I feel like changing a JVM flag to get the behavior I want is more inscrutable than having a long method with a comment noting that it's inlined for performance reasons. With modern machines and the price of memory, I lean hard toward the memory side of the time-memory tradeoff.
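For what it's worth, a sketch of how you'd observe the decision rather than guess (the class and loop are invented; the flags are standard HotSpot options):

```java
// A tiny hot method that HotSpot's JIT will usually inline once the call
// count crosses the compile threshold described in the footnote below.
public class InlineDemo {
    private static long square(long x) { return x * x; } // small bytecode body

    public static void main(String[] args) {
        long acc = 0;
        for (long i = 0; i < 1_000_000; i++) {
            acc += square(i); // hot call site
        }
        System.out.println(acc);
    }
    // To see the inlining decisions instead of guessing, run with:
    //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining InlineDemo
    // The size and count thresholds (-XX:MaxInlineSize, -XX:FreqInlineSize,
    // -XX:CompileThreshold) are exactly the JVM knobs I find inscrutable.
}
```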
[0]"First, it uses counters to keep track of how many times we invoke the method. When the method is called more than a specific number of times, it becomes “hot”. This threshold is set to 10,000 by default, but we can configure it via the JVM flag during Java startup. We definitely don't want to inline everything since it would be time-consuming and would produce a huge bytecode."
https://www.baeldung.com/jvm-method-inlining