> Within your company you don't want every team to have to operate as a Library Vendor
But you want some teams to operate this way. And the best way to do it is by drawing boundaries at the repo level.
This is similar to the monolith-vs-services debate. Once a monolith gets big enough, there's benefit in breaking it down a bit. Technically nothing prevents you from keeping it modular. Except that humans just really suck at it.
> take advantage of the command economy you operate in to drive changes across the company rapidly
Driving changes across the company is a self-serving middle-manager goal. There's a reason why central planning fails at scale every single time it is attempted.
> Teams getting out of sync with the latest versions of dependencies is a Really Bad Thing
It definitely can be a bad thing. But you know what's even worse? Not having the option to get out of sync. If getting out of sync is a problem, polyrepo offers simple tooling to address it.
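As a hedged illustration of the kind of simple tooling meant here (the repo names and version numbers are made up), a drift check is a few lines:

```python
def find_out_of_sync(pinned_versions, latest):
    """Return repos whose pinned version of a dependency differs
    from the latest published release."""
    return sorted(
        repo for repo, version in pinned_versions.items()
        if version != latest
    )

# Hypothetical pins collected from each repo's manifest.
pins = {
    "billing": "2.3.0",
    "search": "2.3.0",
    "reports": "1.9.4",  # deliberately lagging
}
print(find_out_of_sync(pins, "2.3.0"))  # → ['reports']
```

Hook something like this into CI and "out of sync" stops being a silent failure mode and becomes a visible, deliberate choice.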
The assumption you are making is that polyrepo teams will spend a vast amount of engineering effort maintaining a stable interface. Paraphrasing Linus: “we never break userspace.”
In practice internal teams don’t have this type of bandwidth. They need to make changes to their implementations to fix bugs, add optimizations, add critical features, and can’t afford backporting patches to the 4 versions floating around the codebase.
Separate repos work for open source precisely because open source libraries generally don’t have a strong coupling between implementers and users. That’s the exact opposite for internal libraries.
> In practice internal teams don’t have this type of bandwidth
You don't need bandwidth to maintain backward compatibility in polyrepo. As you said yourself, you need loose coupling.
When you are breaking backward compatibility, the amount of bandwidth required to address it is the same in mono- and polyrepos (with some exceptions benefiting polyrepos).
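To make the loose-coupling point concrete: keeping backward compatibility is often a thin shim over the new implementation, not ongoing bandwidth. A minimal Python sketch (the API names here are hypothetical):

```python
import warnings

def fetch_user_v2(user_id: int, *, include_profile: bool = False) -> dict:
    """New implementation with an extra option."""
    user = {"id": user_id}
    if include_profile:
        user["profile"] = {}  # a real library would fetch the profile here
    return user

def fetch_user(user_id: int) -> dict:
    """Old public entry point, kept as a shim so dependents in other
    repos keep working until they choose to upgrade."""
    warnings.warn("fetch_user is deprecated; use fetch_user_v2",
                  DeprecationWarning, stacklevel=2)
    return fetch_user_v2(user_id)
```

The deprecation warning gives dependents a signal to migrate on their own schedule, rather than forcing a lockstep change.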
The big difference, though, is whose bandwidth we are going to spend. Correct me if I'm wrong, but my understanding is that at Google it's the responsibility of the dependency's team to update dependents. E.g. if the compiler team is breaking the compiler, they are also responsible for fixing all of the code that it compiles.
So you're not developing your package at your own pace; you are limited by the company's pace. The more popular a compiler is, the slower it is going to be developed. You're slowing down innovation for the sake of predictability. To some degree you can just throw money at the problem, which is why big companies are the only ones who can afford it.
> can’t afford backporting patches to the 4 versions floating around the codebase
Backporting happens in open source because you don't control all of your users' dependencies. Someone can be locked into a specific version of your package through another dependency, and you have no way of forcing them to upgrade. But if we're talking about internal teams, upgrading is always an option, so you don't have to backport (though you still have the option, and in some cases it might make business sense).
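To illustrate the difference (the package names are made up): in open source a consumer can be transitively pinned to an old major, while internally every constraint is yours to change:

```
# Open-source consumer: transitively locked, so you end up backporting to 1.x
yourlib>=1.4,<2.0    # constraint imposed by third-party plugin-x

# Internal consumer: you control plugin-x too, so just bump both
yourlib==2.1.0
```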
> open source libraries generally don’t have a strong coupling between implementers and users. That’s the exact opposite for internal libraries.
I disagree. There are always plenty of opportunities for good boundaries in internal libraries.
Though I'll grant you: if you draw bad boundaries, polyrepo will have the problems you're describing. But that's the difference between the two: monorepo is slow and predictable, polyrepo is fast and risky. You can reduce polyrepo risks by hiring better engineers; you can speed up a monorepo (to a certain degree) by hiring more engineers.
When there's competition, slow and predictable always loses. That's partially why I believe Google can't develop any good products in-house: pretty much all their popular products (other than search) are acquisitions.