You would need to provide me with a concrete example, because I've never really had the issue you're describing. I'm willing to bet it's fixable by having better management.
Assume you have to ship a large single release bundling code from multiple integrating teams: a bug in any one team's code will likely delay the release of the entire bundle. Bugs are inevitable, and no amount of better management will prevent them. With microservices you add coordination overhead, but teams have more autonomy to ship their own individual bundles.
1) if you have a blocking bug or code dependency between two microservices OR two monolith modules, that should of course block until it is fixed and resolved. Microservices do not magically solve that.
2) other code changes or features that are not blocked can be unblocked by shipping them in a smaller release: git rebase those commits onto a new release branch cut from main, then test and deploy that smaller release -- basically gitflow (see the sketch after this list).
3) once the blocking modules are fixed, they should be rebased onto the latest main.
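Roughly, for 2) and 3), something like this -- branch names, SHAs, and tags here are placeholders I made up, not anything prescribed:

  # cut a smaller release branch from the current main
  git checkout -b release/1.4.1 main
  # pull over only the unblocked commits (rebase --onto or cherry-pick both work)
  git cherry-pick <unblocked-sha>
  # test, tag, deploy the smaller release
  git tag v1.4.1
  # 3) once the blocking fix is done, put it back on top of the latest main
  git checkout fix/blocking-bug
  git rebase main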
The blocking "effect", even in a monolith, seems like it would be better solved by using a version control system with good rebase support.
Coordination needs to happen for larger changes, and/or the changes need to be smaller.
It just doesn’t seem like that scales. The application I work on has half a dozen teams. Releases include tons and tons of commits soaked for up to weeks. We can’t solve this with cherry-picks lol.
Wouldn't it be more sensible to release often and in smaller increments instead?
Why can't you do that? You release only one feature on top of master, test it to make sure nothing is broken, then ship it. The second feature has to merge on top of the new stable git hash once it's ready to be released. And so on... Bundling a bunch of stuff all at the same time seems like asking for misery and weird integration issues in production.
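In plain git terms, roughly (branch and tag names are made up for illustration):

  # feature A: rebase onto the current stable master, test, release
  git checkout feature-a
  git rebase master
  # tests pass -> fast-forward master to include it and cut the release
  git checkout master
  git merge --ff-only feature-a
  git tag rel-101
  # feature B only rebases onto this new stable hash once it's ready
  git checkout feature-b
  git rebase master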