You try to make the best architectural decision that you can at the time, with the knowledge and resources you have available. Time passes and you learn new things -- maybe the problem changes, or your understanding of it improves, or your understanding of alternative implementation strategies improves. For whatever reason, you can now imagine a new architecture that would be superior, if only it were implemented to replace the old architecture. Now you have technical debt. This doesn't necessarily mean that the best decision now is to pay off the debt by reworking the architecture -- that depends on a cost/benefit analysis that also weighs the opportunity cost of doing the rework now.
edit:
Tangentially, there's a pretty interesting presentation by Kevlin Henney titled "The Architecture of Uncertainty" [1]. My poor summary: when designing the initial architecture of a system, Kevlin suggests the team brainstorm to identify which parts of the system carry the most uncertainty. Each region of uncertainty then becomes a subsystem, and you put interfaces between the subsystems that need to be connected. Hopefully you end up with an architecture whose interfaces stay stable, even if individual subsystems have to be completely rewritten over the course of the project.

[1] http://www.infoq.com/presentations/Architecture-Uncertainty
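To make that concrete, here's a minimal TypeScript sketch, assuming a hypothetical billing system where the payment provider has been identified as a center of uncertainty; the names (PaymentProvider, FakeProvider, settleInvoice) are illustrative, not from the talk:

    // The stable interface: the rest of the system depends only on this contract.
    interface PaymentProvider {
      charge(accountId: string, amountCents: number): Promise<ChargeResult>;
    }

    type ChargeResult =
      | { ok: true; transactionId: string }
      | { ok: false; reason: string };

    // One volatile implementation living behind the interface. It can be
    // rewritten or replaced without touching anything outside this subsystem.
    class FakeProvider implements PaymentProvider {
      async charge(accountId: string, amountCents: number): Promise<ChargeResult> {
        // Stand-in for a real external API call; these details are free to churn.
        if (amountCents <= 0) return { ok: false, reason: "non-positive amount" };
        return { ok: true, transactionId: `txn-${accountId}-${Date.now()}` };
      }
    }

    // Callers are written against the interface only, so they stay stable
    // even while the provider subsystem is being rewritten underneath them.
    async function settleInvoice(
      provider: PaymentProvider,
      invoice: { accountId: string; totalCents: number }
    ): Promise<string> {
      const result = await provider.charge(invoice.accountId, invoice.totalCents);
      if (!result.ok) throw new Error(`charge failed: ${result.reason}`);
      return result.transactionId;
    }

The point is that settleInvoice and everything above it depend only on the interface, so the uncertain subsystem can churn freely without the changes rippling outward.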
Disclaimer: I haven't watched the presentation, though I do have first-hand experience refactoring large systems that grew slowly over time.
Based on your edit's description, this technique feels like a patch that will inevitably fall apart. It relies on the assumptions that your team can correctly identify the centers of uncertainty, and that the uncertainty model will continue to apply. Thinking about such things is an excellent idea, but it is not sufficient - uncertainty is, after all, uncertain.
I think in many cases the more important thing is to create an abstraction that allows you to perfectly represent your existing business logic in the most concise way possible. This should let you cut down on the number of edge cases outside the model, and generally simplify the system. Simplicity in specification is important because it allows newcomers to quickly understand the inner workings of your code, correlating business logic with real code - if they can understand it and can work within it, then they will not be tempted to hack around it (which is the root of code deterioration). I strongly believe that human friendliness and understandability should be key design goals in ANY new system, not an afterthought.
So long as no one breaks the abstraction, the 99% of changes that are routine, day-to-day work should be easy. When you finally do hit a case that requires a significant abstraction change, your concise code will make it obvious that the case falls outside your abstraction's model, and you can evaluate options at that point.
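For a concrete (entirely hypothetical) illustration of that kind of abstraction, here's a TypeScript sketch that models a made-up discount policy as a table of rules; DiscountRule, Order, and bestDiscount are my own names, not from any real system:

    // The domain the rules are written against.
    interface Order {
      subtotalCents: number;
      itemCount: number;
      customerYears: number;
    }

    interface DiscountRule {
      description: string;
      applies: (order: Order) => boolean;
      percentOff: number;
    }

    // The whole policy as data: each rule is one line a newcomer can read
    // directly against the business spec. Edge cases live inside the model,
    // not scattered across call sites.
    const discountRules: DiscountRule[] = [
      { description: "bulk order",     applies: o => o.itemCount >= 50,          percentOff: 10 },
      { description: "loyal customer", applies: o => o.customerYears >= 5,       percentOff: 5 },
      { description: "large subtotal", applies: o => o.subtotalCents >= 100_000, percentOff: 3 },
    ];

    // Only the single best applicable discount is taken; rules do not stack.
    // That assumption is part of the model, stated in one place.
    function bestDiscount(order: Order): DiscountRule | undefined {
      return discountRules
        .filter(r => r.applies(order))
        .sort((a, b) => b.percentOff - a.percentOff)[0];
    }

    // Example: a five-year customer ordering 60 items gets the 10% bulk discount.
    const order: Order = { subtotalCents: 42_000, itemCount: 60, customerYears: 5 };
    console.log(bestDiscount(order)?.description); // "bulk order"

A day-to-day change is adding or editing a row in the table. A requirement like "discounts stack multiplicatively" visibly doesn't fit the model, which is exactly the signal to stop and evaluate whether the abstraction needs to change.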