Personally, I like it when code gets defactored*. Defactoring removes undocumented abstractions, call proxying, dynamic dispatch, higher-order args, boolean args, utility source files, etc., to a reasonable extent. Code wtfness is its total length x complexity^2 / documentation. Successful defactoring leaves you with less wtfness for a reasonable amount of time.
Re-factoring may increase (in-/en-) or decrease (de-) it. If the code becomes self-opinionated and buggy and slows you down, wtfness likely increased. Textbook refactoring usually trades amortized length for complexity, not the other way round, but that’s okay because textbooks start from a “completely flat code” perspective.
So it really depends on your coefficients in that formula, and ideally it should be managed by someone who tracks these indicators. Not by a passer-by whose taste was offended that day.
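For a sense of how the formula plays out, here’s a toy calculation (all numbers are made up for illustration):

```python
def wtfness(length: int, complexity: float, documentation: float) -> float:
    # The metric from above: total length x complexity^2 / documentation.
    return length * complexity ** 2 / documentation

# A heavily factored module: short, but deep call chains and dynamic dispatch.
before = wtfness(length=400, complexity=9, documentation=2)   # 16200.0
# The defactored version: longer and flatter, same documentation.
after = wtfness(length=700, complexity=4, documentation=2)    # 5600.0
print(before > after)  # True: flatter code wins here despite being 75% longer
```

Because complexity is squared, a defactoring that adds 75% more lines can still cut wtfness roughly threefold, which is exactly the trade the formula is meant to capture.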
* Google has some definitions, but I’m using the term loosely.
I like the breakdown and agree with your general line of reasoning.
I’m confused, though, about why you don’t just say “refactor”. This reads like you’ve redefined “refactor” to mean adding layers and “defactor” to mean removing them.
Textbook refactoring means changing a program internally without changing its external behavior. It’s up to the refactorer to choose which transformations to apply.
Applying a refactoring is not only about adding FactorBeanProxies. It can, and must, be used to remove parts as often as to add them.
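To make that concrete, a toy before/after sketch (hypothetical names, not from any real codebase): the external behavior is unchanged, but one pass-through proxy and one boolean arg get refactored *away*:

```python
# Before: a proxy that adds nothing but a hop, plus a boolean arg.
class ReportRenderer:
    def render(self, rows, as_html):
        if as_html:
            body = "".join(f"<tr><td>{r}</td></tr>" for r in rows)
            return f"<table>{body}</table>"
        return "\n".join(str(r) for r in rows)

class ReportRendererProxy:
    def __init__(self, renderer=None):
        self._renderer = renderer or ReportRenderer()

    def render(self, rows, as_html=False):
        return self._renderer.render(rows, as_html)

# After: the proxy and the flag are gone; two explicit entry points remain.
def render_text_report(rows):
    return "\n".join(str(r) for r in rows)

def render_html_report(rows):
    body = "".join(f"<tr><td>{r}</td></tr>" for r in rows)
    return f"<table>{body}</table>"

rows = ["a", "b"]
assert ReportRendererProxy().render(rows) == render_text_report(rows)
```

Both halves preserve behavior, so removing the layers is just as much a refactoring, in the textbook sense, as adding them was.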