The article makes the excuse that static (singleton) interfaces are simple and cheap enough that it's worth skipping regression testing, because testing that style of code is hard.
Reading between the lines, it sounds like maybe they know they've dug themselves into a hole that will be very expensive to climb out of. I've jumped head first into codebases that made the same excuse, and to me it seems severely limiting to long-term productivity. There are definitely benefits to simple, "static" singleton interfaces, and such interfaces aren't inherently incompatible with mocking and dependency injection. It does mean you likely need a layer of indirection somewhere, but if you're sweating the overhead of a single vtable lookup or explicit function pointer call in code that's internally accessing a non-trivially sized data structure, how are you even able to measure it?
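As a rough sketch of what that indirection can look like (all names here are invented for illustration, not taken from the article): keep the convenient static call sites, but route them through one swappable instance.

```java
// Hypothetical static facade: callers keep writing Config.get(...),
// but the behavior behind it can be swapped out for tests.
public final class Config {
    // The real implementation, used in production by default.
    private static volatile ConfigSource source = new FileConfigSource();

    private Config() {}

    // The one extra indirection: a virtual call through the interface.
    public static String get(String key) {
        return source.get(key);
    }

    // Test seam: swap in a fake. Production code never calls this.
    static void setSourceForTesting(ConfigSource fake) {
        source = fake;
    }
}

interface ConfigSource {
    String get(String key);
}

class FileConfigSource implements ConfigSource {
    @Override
    public String get(String key) {
        // A real implementation would read from disk here.
        return System.getProperty(key, "");
    }
}
```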
The problem is that a handful of foundational static interfaces never got unit tested, and all the other code calls directly into them, so nobody anywhere wrote any unit tests, because they couldn't without touching foundational code that's unsafe to modify for lack of unit tests. The first step in the right direction is to fix the foundation. This is terrifying for the folks who have been around a while and have cemented their assumptions about risk by tiptoeing around parts of the codebase for years. Luckily, a big benefit of static singleton interfaces is that they're easy to modify in O(n) developer time (where n is the number of references in the codebase). So you just have to buckle down and get your hands dirty.

You're pretty much guaranteed to find at least one latent bug in any old piece of code you unit test, so folks start to see the merits of the test coverage, and it becomes easier to prioritize refactoring ever more ancient and scary code. After you've done this to half a dozen or so disjoint pieces of code, something magical happens: suddenly, the vast majority of the codebase becomes easily unit testable.
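Once the facade has that seam, code calling into it becomes testable without touching the call sites at all. A minimal sketch, continuing the invented `Config` example above and assuming JUnit 5:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;

// A caller that goes through the static facade, the way most of the
// hypothetical codebase would.
final class Pricing {
    static long withTax(long cents) {
        double rate = Double.parseDouble(Config.get("tax.rate"));
        return Math.round(cents * (1 + rate));
    }
}

class PricingTest {
    @AfterEach
    void restore() {
        // Put the real source back so other tests see production behavior.
        Config.setSourceForTesting(new FileConfigSource());
    }

    @Test
    void appliesConfiguredTaxRate() {
        // The lambda satisfies ConfigSource's single method.
        Config.setSourceForTesting(key -> "0.08");
        assertEquals(108, Pricing.withTax(100));
    }
}
```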
The post also presents a false choice between what’s known as “the singleton pattern” and simply allocating a single instance of the object and passing it around where needed:
> We don’t write a lot of these because our code doesn’t follow standard decoupling practices; while those principles make for easy to maintain code for a team, they add extra steps during runtime, and allocate more memory. It’s not much on any given transaction, but over thousands per second, it adds up. Things like polymorphism and dependency injection have been replaced with static fields and service locators.
“Dependency injection” doesn’t require constantly allocating new instances of the things your classes need.
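A minimal sketch of that (invented names, no DI framework): wire one long-lived instance at the composition root and hand the same reference to everything that needs it. The hot path below allocates nothing per call.

```java
interface RateLimiter {
    boolean allow(String clientId);
}

final class TokenBucketLimiter implements RateLimiter {
    @Override
    public boolean allow(String clientId) {
        // Real token-bucket bookkeeping would live here.
        return true;
    }
}

final class RequestHandler {
    private final RateLimiter limiter; // injected once, reused forever

    RequestHandler(RateLimiter limiter) {
        this.limiter = limiter;
    }

    void handle(String clientId) {
        if (!limiter.allow(clientId)) {
            throw new IllegalStateException("rate limited");
        }
        // ... handle the request ...
    }
}

class Main {
    public static void main(String[] args) {
        // Composition root: each dependency is allocated exactly once.
        RateLimiter limiter = new TokenBucketLimiter();
        RequestHandler handler = new RequestHandler(limiter);
        handler.handle("client-42"); // thousands of calls, zero new deps
    }
}
```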