I think it was he who talked about maintaining old versions not by backporting bug fixes, but by rewriting the old version as a thin layer that provides the interface of the old version on top of the code of the new version. That, my friends, is how you can provide the old interface without the engineering overhead of maintaining billions of different versions of your codebase, IMO.
It goes like this: first you have v1. Eventually you make v2. Now you replace v1 with a translation layer that has the same interface v1 had, but which transforms arguments as necessary before passing them on to the v2 interfaces, and likewise transforms the results on the way back.
Now when you get to v3 you don't rewrite v1, only v2. So requests to the v1 endpoint go v1 -> v2 -> v3 and back. This means a bit of extra computational overhead, but it will almost certainly be worth it in engineering overhead saved.
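A minimal sketch of the chaining idea in Go, with hypothetical names (nothing here is from the original comment): only the newest version is a real implementation; each older version is a thin adapter that transforms its inputs for the next version and its outputs on the way back.

```go
package main

import (
	"fmt"
	"strings"
)

// greetV3 is the only real implementation.
func greetV3(first, last string) string {
	return fmt.Sprintf("Hello, %s %s!", first, last)
}

// greetV2 took a single full name; it is now just a shim that
// splits the name and forwards to v3.
func greetV2(fullName string) string {
	parts := strings.SplitN(fullName, " ", 2)
	first, last := parts[0], ""
	if len(parts) > 1 {
		last = parts[1]
	}
	return greetV3(first, last)
}

// greetV1 returned an uppercase greeting; it forwards to v2 and
// transforms the result back into the old shape.
func greetV1(fullName string) string {
	return strings.ToUpper(greetV2(fullName))
}

func main() {
	fmt.Println(greetV1("ada lovelace")) // flows v1 -> v2 -> v3 and back
}
```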
In an old draft of my post I called this out more explicitly. There are two things worth thinking about.
First, if you think about actually doing this in a traditional programming language, you can quickly see that it's going to result in a lot of boilerplate. REST APIs make this much easier, because you can say something like "change the meaning of this command, but keep everything else the same." Functions and classes are not set up for that kind of metaprogramming.
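To make the REST point concrete, here is a minimal Go sketch (the endpoints and the renamed query parameter are assumptions, not anything from the original post): the v1 handler rewrites only the one thing that changed and forwards the rest of the request untouched, with no per-symbol boilerplate.

```go
package main

import "net/http"

// handleV2 is the current implementation (hypothetical).
func handleV2(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	w.Write([]byte("hello " + name))
}

// handleV1 adapts the old interface: v1 called the parameter
// "username"; everything else passes through unchanged.
func handleV1(w http.ResponseWriter, r *http.Request) {
	q := r.URL.Query()
	q.Set("name", q.Get("username"))
	q.Del("username")
	r.URL.RawQuery = q.Encode()
	handleV2(w, r)
}

func main() {
	http.HandleFunc("/v1/greet", handleV1)
	http.HandleFunc("/v2/greet", handleV2)
	http.ListenAndServe(":8080", nil)
}
```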
Second, sometimes you are breaking BC because the old behavior is extremely inconvenient to provide under a new internal implementation. This is pernicious, because "I didn't like the old function name" is not a good reason to break BC, but "I can't rewrite the internals and support the old functionality" is a good reason to break BC. So you're in a situation where your mechanism for maintaining old versions breaks down /precisely/ when you would have gotten the most utility from breaking BC.
In practice it isn't quite as simple as you make it sound. People come to rely on bugs, weird edge cases, and undefined or undocumented behaviours in v1, so you end up having to spec and replicate them in the shims you build on top of v2.
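A tiny hypothetical illustration of that trap, in Go: suppose v1's lookup quietly returned an empty string for a missing key, and callers came to depend on it. The shim can't just forward to v2; it has to spec and replicate the quirk.

```go
package main

import (
	"errors"
	"fmt"
)

// --- hypothetical v2 implementation ---

var errNotFound = errors.New("key not found")

type StoreV2 struct{ m map[string]string }

// Lookup in v2 reports missing keys explicitly.
func (s *StoreV2) Lookup(key string) (string, error) {
	v, ok := s.m[key]
	if !ok {
		return "", errNotFound
	}
	return v, nil
}

// --- v1 shim over v2 ---

// LookupV1 preserves v1's accidental contract: a missing key
// yields "", never an error, because callers came to rely on it.
func LookupV1(s *StoreV2, key string) string {
	v, err := s.Lookup(key)
	if err != nil {
		return "" // replicate the v1 quirk instead of surfacing v2's error
	}
	return v
}

func main() {
	s := &StoreV2{m: map[string]string{"a": "1"}}
	fmt.Println(LookupV1(s, "a")) // "1"
	fmt.Println(LookupV1(s, "b")) // "" -- the quirk, faithfully replicated
}
```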
Convincing people to stop using libraries in such a fragile way would be the real trick.
"semver" is jargon we already know: MAJOR.MINOR.PATCH, where each of those "meanings" is up to the library author's whim. From a library consumer's point of view, "semver" is BREAKING?.BREAKING?.BREAKING? because one person's "security patch" is another person's "breaking change".
What this author is advocating already has a name: "backwards compatibility". Like "semver", it's imperfectly implemented, but unlike semver it's at least not undecidable. For example, you can enforce (at the package-manager level) that new versions are append-only additions to older versions.
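As a sketch of what such enforcement could look like (entirely an assumption about how a package manager might implement it, with hypothetical "old" and "new" checkout paths): parse two versions of a Go package and reject the new one if any exported top-level name disappeared. A real check would also compare signatures, not just names.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"os"
)

// exportedNames collects the exported top-level identifiers
// (functions, types, consts, vars) declared in dir; methods are
// skipped for brevity.
func exportedNames(dir string) (map[string]bool, error) {
	fset := token.NewFileSet()
	pkgs, err := parser.ParseDir(fset, dir, nil, 0)
	if err != nil {
		return nil, err
	}
	names := map[string]bool{}
	for _, pkg := range pkgs {
		for _, file := range pkg.Files {
			for _, decl := range file.Decls {
				switch d := decl.(type) {
				case *ast.FuncDecl:
					if d.Recv == nil && d.Name.IsExported() {
						names[d.Name.Name] = true
					}
				case *ast.GenDecl:
					for _, spec := range d.Specs {
						switch s := spec.(type) {
						case *ast.TypeSpec:
							if s.Name.IsExported() {
								names[s.Name.Name] = true
							}
						case *ast.ValueSpec:
							for _, n := range s.Names {
								if n.IsExported() {
									names[n.Name] = true
								}
							}
						}
					}
				}
			}
		}
	}
	return names, nil
}

func main() {
	oldNames, err := exportedNames("old") // hypothetical checkout of the prior version
	if err != nil {
		panic(err)
	}
	newNames, err := exportedNames("new") // hypothetical checkout of the candidate version
	if err != nil {
		panic(err)
	}
	ok := true
	for name := range oldNames {
		if !newNames[name] {
			fmt.Printf("removed exported name: %s\n", name)
			ok = false
		}
	}
	if !ok {
		os.Exit(1) // not append-only: reject the release
	}
}
```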
> choosing a new name at every backwards-incompatible change is difficult and unhelpful to users
My position is that keeping an old name (and changing its meaning) is much, much worse!
What the author may be missing in the vgo command is that v1 can reference v2 because they are named differently. That, along with type aliases, provides a powerful way to have a single unit of code that is represented in two ways.
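In Go terms, that looks something like the following (the module path is hypothetical): v1 imports v2 under its distinct /v2 import path and re-exports it through type aliases and forwarding functions, so one body of code serves both import paths.

```go
// Package foo is v1, reduced to a thin forwarding layer over v2.
package foo

import foov2 "example.com/foo/v2" // v2 has its own import path, so v1 can reference it

// Widget is an alias, not a new type: values flow freely between
// code compiled against v1 and code compiled against v2.
type Widget = foov2.Widget

// New forwards to the v2 constructor.
func New(name string) *Widget { return foov2.New(name) }
```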
Also, their point is valid that you shouldn't ask developers to choose the semver bump by hand, just as you don't ask developers to write diffs by hand. This is exactly what rsc proposes the "(v)go release" command do in https://research.swtch.com/vgo-cmd under "Preparing New Versions (go release)".
I advocate for this approach and want to see it adopted everywhere. It is essentially a hybrid of the opt-in approach (pinning) and the opt-out approach (floating): you can get essential security updates without having interface changes forced on you.
Semver is not enough. This week I spent the better part of an entire day hunting down an obscure bug that couldn't be replicated by anyone else on the team. It was a new build environment, so it was unclear where the problem lay. The dependencies were pinned, but some of the dependencies' dependencies were floating (e.g., "^1.2.3"), and one of those introduced a subtle breaking change in a minor version. This could have been resolved much faster if the locked dependency list had been checked in (it wasn't; a poor practice inherited with the codebase), but the underlying problem remains.
It's not surprising that Russ Cox has been thinking about this. Just like Go's explicit error handling forces the developer to consider every error condition, "imver" forces the interface designer to explicitly consider older versions, instead of letting versioning implicitly handle it. The result is, hopefully, a more thoughtful change management process.
The alternative is a package manager like yarn or Paket, where even your transitive dependencies are pinned in a lockfile. I don't see how someone can look at a version spec listing only top-level dependencies and call their builds immutable.
HN discussion: https://news.ycombinator.com/item?id=13085952