Being saddled with an old code base carrying a mountain of tech debt does not invalidate the OP's argument about modularity and microservices. I feel your pain. You describe a great approach for tackling a tech-debt-laden monolith: breaking out a microservice.
However, having a monolith does not automatically mean you stop addressing tech debt. I worked on a large monolith that went from Java 7 to Java 21. It was never stuck, had excellent CI tooling, including heavy integration/functional testing, and a good one-laptop DX where complex requests could be stepped through in your IDE, end to end, in one process.
Your argument does not invalidate a service-oriented approach with large (non-micro) services. You can have a large shared code base (e.g. domain objects, authentication and authorization logic, scheduling and job execution logic) that consists of modular service objects you can compose into one, or three or four, larger services. If I had to sell that to the microservice crowd, I would call them "virtualized microservices", combined into one or several deployment units.
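To make that concrete, here is a minimal Java sketch (all names hypothetical) of what I mean by modular service objects: modules depend on each other only through interfaces, and the "deployment unit" is just where they get wired together in-process. Splitting one out later means handing its consumers an HTTP-backed implementation of the same interface, not rewriting them.

    interface AuthService { boolean canAccess(String userId, String resource); }
    interface JobScheduler { void schedule(Runnable job); }

    // One module; it depends only on other modules' interfaces, never their internals.
    final class ReportService {
        private final AuthService auth;
        private final JobScheduler scheduler;

        ReportService(AuthService auth, JobScheduler scheduler) {
            this.auth = auth;
            this.scheduler = scheduler;
        }

        void requestReport(String userId) {
            if (!auth.canAccess(userId, "reports")) throw new SecurityException("no access");
            scheduler.schedule(() -> System.out.println("building report for " + userId));
        }
    }

    // The "deployment unit": plain in-process wiring, all objects in one heap.
    public class MonolithMain {
        public static void main(String[] args) {
            AuthService auth = (user, resource) -> true;   // stand-in auth module
            JobScheduler scheduler = Runnable::run;        // trivial inline scheduler
            new ReportService(auth, scheduler).requestReport("u123");
            // To split later: swap `auth` for an HTTP-backed AuthService.
            // ReportService does not change.
        }
    }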
In fact, if I were to start a new project today, I would keep it a monolith with internal modularity until there was clear value in breaking out a second service.
Also, it is completely valid to break out, as microservices, the things that clearly make sense and are far removed from your normal services. You can run a monolith plus a microservice when it makes sense.
What doesn't make sense is microservices-by-default.
The danger of microservices-by-default is that you are forced to do big design up-front, as refactoring microservices at their network boundaries is much more difficult than refactoring your internal modules.
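A sketch of that asymmetry, again with hypothetical names: in-process, widening a module's contract is a compiler-checked rename-and-fix that lands in one commit; over a network boundary, the old contract has to stay deployed until every consumer migrates.

    // In-process boundary: change the interface and the compiler flags every caller.
    interface PricingModule {
        Quote quote(ProductRef product);   // was: long quote(long productId)
    }

    record ProductRef(String sku) {}
    record Quote(long cents) {}

    public class BoundaryRefactor {
        public static void main(String[] args) {
            PricingModule pricing = product -> new Quote(999);  // stand-in implementation
            System.out.println(pricing.quote(new ProductRef("ABC-1")).cents());
        }
    }
    // Over a network boundary the same change needs versioning instead:
    // POST /v1/quote stays deployed for old consumers while /v2/quote ships
    // the new contract, with a migration and deprecation window for both.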
Also, microservices-by-default means many more network boundaries and security boundaries to worry about:

- You now have to threat-model each microservice, because the number of boundaries and the network surface grow significantly.

- You force your team to deal with distributed-computing concerns right away: inter-service boundaries become network calls instead of in-process calls, requiring designs that account for latency, bandwidth, and failure.

- You now have to worry about the availability and latency of many services, and you risk a weakest-link service bringing down your end-user availability.

- You waste considerably more computing resources, because memory and CPU can no longer be shared across all these services.

- You end up writing microservice caches that serve over the network what could have been an in-process read. Or, if you are hardcore about stateless microservices (another dogmatic delusion), you end up standing up Redis or Memcached instances for your caches, with every hit paying a network transfer.
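On that last point, a minimal sketch assuming nothing beyond the JDK: in one process, a cache hit is a map read in the same heap. Move the same lookup behind a microservice and every hit becomes serialization plus a network round trip plus a new failure mode.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class InProcessCache {
        private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

        // A cache hit is one in-memory map read; a miss falls through to the source.
        static String lookup(String key) {
            return CACHE.computeIfAbsent(key, InProcessCache::loadFromSource);
        }

        static String loadFromSource(String key) {
            return "value-for-" + key;   // stand-in for the real (slow) read
        }

        public static void main(String[] args) {
            System.out.println(lookup("user:42"));  // miss: loads once
            System.out.println(lookup("user:42"));  // hit: pure heap read, no I/O
        }
    }
    // The microservice version of this same lookup adds (de)serialization, a
    // network hop, connection pooling, and a cache service whose availability
    // now gates yours.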