Too slow as in "to do it" or too slow as in "to use it"? In either case, I think if that were true there wouldn't be monorepos at Google, Facebook, and Microsoft. I will say it's true that it didn't come for free, e.g. Microsoft had to build GVFS due to the sheer enormity of their codebase, but that's already done and works pretty well.
I agree shared-library style makes more sense in most cases, though. The main problem with it is forcing everyone onto the latest library versions, but that isn't insurmountable by any means.
My old boss was an engineering manager at Google in the late 90s and early 2000s. He used to tell us that _everyone_ he interacted with at Google _hated_ the monorepo, and that Google's in-house tooling did not produce anything approaching a sane developer experience. He used to laugh cynically at the stories, and at that big ACM article, touting Google's use of a monorepo (which was really an unplanned historical accident, born of a poorly planned Perforce repository toppling over way back when), because in his mind his experience with monorepos at Google was exactly why his engineering department (several hundred engineers) at my old company did not use one.
My understanding from many Google employees is that the properties of the system that caused problems in ~2000-2010 are largely still the same today: the canary-node model of deployment, a fixed, small set of supported languages, code bloat, the inability to delete code, a bias towards feature toggles even when separate library dependency management would suit the problem at hand better, regular firefighting when the in-house monorepo tooling breaks, difficult onboarding for people unfamiliar with that workflow, and difficult recruiting when candidates refuse to join if it means working under the limits of a monorepo like that.
I work at one of the monorepo companies you mention, and there's some truth to the "too slow" part. Although it's been a lot better lately (largely due to the efforts of the internal version-control dev teams), I've noticed at times in the past that you could do a '<insert vcs> pull', go on a 15-minute break, and it wouldn't be done by the time you got back.
Personally, I think there's a place for monorepos and a place for smaller independent repos. If a project is independent and decoupled from the rest of the tightly coupled codebase (for instance, things which get open-sourced), it makes no sense to shove it into a huge monorepo.
I hate how these monorepo pieces gloss over the CI requirements. Just check out the code that's affected by the change? Either you have a shared build job that adds thousands of builds a day, and matching a commit to a build takes ages, or you have a plethora of jobs for each subrepo and Jenkins eats all the disk space with stale workspaces. And let's not talk about how to efficiently clone a large repo... our big repo took 5 minutes to clone from scratch, which killed our target of 10 minutes from push to test results. We ran git mirrors on our build nodes to have fresh git objects to shallow/reference clone from, which got it down to 30 seconds, and the whole system had to work perfectly or else hundreds of devs would be blocked waiting to see if their changes could be merged.
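For the curious, a minimal sketch of that mirror-plus-reference-clone setup (the URL and paths here are placeholders, not our actual config, and your numbers will vary with repo shape):

    # one-time per build node: keep a bare mirror of the big repo locally
    git clone --mirror https://example.com/big-repo.git /srv/mirror/big-repo.git

    # refresh it frequently (e.g. from cron) so its objects stay current
    git -C /srv/mirror/big-repo.git fetch --prune

    # per build: borrow objects from the local mirror, so only whatever the
    # mirror is missing travels over the network
    git clone --reference /srv/mirror/big-repo.git \
        https://example.com/big-repo.git workspace

    # alternatively, a shallow clone skips history entirely for throwaway builds
    git clone --depth 1 https://example.com/big-repo.git workspace

One caveat worth knowing: a --reference clone borrows objects from the mirror by path, so if the mirror gets pruned or deleted the workspace breaks; git's --dissociate flag copies the borrowed objects in afterwards if that's a concern.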