"The simplest implementation that works" is not the antithesis of "Code without technical debt." For example, code that has not been thoroughly tested can not be said to "work." Likewise, one form of technical debt is code that is not the simplest possible implementation. For example, Dick and Jane each implement their own versions of the same functionality, producing code that is not DRY. This has local simplicity, because Dick and Jane did not need to coördinate their implementations, but it does not have global simplicity.
"Premature prevention of technical debt is the root of all evil."
If you are wittily comparing the reduction of technical debt with optimization, I am confused. Optimization often increases technical debt by adding complexity, so "premature" optimization is a little like buying a call option rather than selling one. The word "premature" gives us an out, of course, but overall I would say that optimization and the reduction of technical debt are not isomorphic.
Those are fair points, and I think the example you use with Dick and Jane is a good one:
Suppose Dick and Jane each implement their own simple feature, and the two are fairly similar. I'd estimate that much of the time it's costless to have two slightly different approaches in the codebase. In those cases it's also nearly costless to retroactively standardize.
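As a hypothetical sketch, continuing the invented price-formatting example from above: retroactive standardization can be as small as collapsing the near-duplicates into one shared helper and leaving thin aliases behind until call sites are migrated:

    # Hypothetical: once both versions have proven themselves, standardize on one.
    def format_price(cents: int) -> str:
        """The single, shared implementation."""
        return f"${cents / 100:.2f}"

    # Temporary compatibility aliases; delete after call sites are updated.
    dick_format_price = format_price
    jane_format_price = format_price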
In the remaining cases, where it isn't costless to retroactively standardize, a better abstraction built up front would have been useful. But, depending on the product's life cycle, building that abstraction may or may not count as premature optimization. Suppose Dick's feature takes off but Jane's ends up being phased out; now the extra month spent building the abstraction was totally wasted.
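For contrast, here is a hypothetical sketch of that speculative up-front abstraction: a configurable formatter whose flexibility neither feature may ever need. This is the kind of month that gets wasted if Jane's feature is phased out:

    # Hypothetical: the up-front abstraction, built before anyone asked for it.
    from dataclasses import dataclass

    @dataclass
    class PriceFormatter:
        currency_symbol: str = "$"
        decimal_places: int = 2
        thousands_separator: str = ","

        def format(self, cents: int) -> str:
            amount = cents / 100
            text = f"{amount:,.{self.decimal_places}f}"
            return self.currency_symbol + text.replace(",", self.thousands_separator)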
Is that technical debt a big leveraged call option that could cripple the team/org? Maybe, but maybe not. Maybe it's just a hassle that has to be borne only if the feature succeeds, and preventing it up front would have been premature optimization if the feature didn't. In other words, it's exploration vs. exploitation, and the risk of cementing possibly unneeded conceptual overhead. I guess my intuition is that premature optimization comes from attempting, in advance, to pave the way for a low-technical-debt future on an unknown feature horizon.
It takes real skill to avoid this if one is too attuned to technical debt elimination as a design goal.
Suppose Dick and Jane each implement their own simple feature, and the two are fairly similar. I'd estimate that much of the time it's costless to have two slightly different approaches in the codebase. In those cases it's also nearly costless to retroactively standardize.
My experience disagrees on both counts. Furthermore, my experience is that if I'm actively developing code and feel that a particular piece of cleanup is a good idea, the usual time until I discover that it really was a good idea is about two weeks. (In maintenance it proves itself much more slowly.)
Of course, I have quite a bit of experience and a well-trained intuition. I've seen other people whose "cleanups" were anything but, and who were fond of introducing abstractions that were supposed to simplify the future but never paid off. (I'm a big believer that making something simple with little code makes virtually any future rewrite easier. This helps. A lot.)
Well, returning to the OP, selling a call short does not automatically mean you lose money. It's just that once in a while you are exposed to losses that far outweigh the price of the call.
So perhaps the Dick and Jane example is one where you are ahead of the game by leaving the code as is and only refactoring it later when needed.
It does take real skill to forecast what is needed and what isn't, we 100% agree on that :-)
I think "easiest" or "quickest" is what is really meant in this formulation, but rarely does either of those result in the simplest. I start with the easy/quick version and strive to iterate towards the simplest one, because the former starts out cheaper but ends up more expensive in the long run.
I've found this not to be the case. If you write custom applications and they are solid, when it comes time to add a new feature, you can do it quickly and keep the customer happy. If you are riddled with technical debt, the cost of adding a small feature approaches the cost of having a competitor rewrite the whole application.
I'm not advocating taking on technical debt, just arguing that sometimes the anti-technical-debt crusade turns into premature optimization and unnecessary concept creep.
Unfortunately, in practice the "simplest implementation that works" is understood as the crappiest implementation that seems to work.
If you are young and naive, you could argue that this was an honest mistake. In every other case, trying to push the "simplest thing that works" on a client is either professional negligence or borderline fraudulent cost externalization.
Not being paid by the hour is not a valid excuse. It is a symptom of either naivety or shady business practices.
Trying to push the crappiest implementation that seems to work on a client is certainly everything you say it is, but that's not the same thing as delivering the simplest implementation that works.
Premature prevention of technical debt is the root of all evil.