I feel like this misses too much real world context and caveats.
What is the problem with Monoliths? Nothing. Until there is.
The problem with monoliths is when you have a million-LoC Java application that is on Java 6 and will take months of work to get up to date, takes 20 minutes to load on a dev machine, starts to fail because it's getting too big for a dev machine to handle, can't bring in any new dependencies because of how old the Java version is, and has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years.
So what do you do? Breaking off pieces of the code into microservices that can run on a new Spring Boot and on a newer, nicer IaC setup is an easy win. Sure, you basically have a microlith, but it increases your dev velocity.
I think monolith issues are typically a symptom of a few other things:
1. Accumulated deferred maintenance and tech debt
2. Inadequate developer tooling
3. Inadequate CI/CD tooling
4. Scale, which is rarely the real issue until you start to hit the size of Google, Uber, Facebook, etc.
> The problem with monoliths is when you have a million-LoC Java application that is on Java 6 and will take months of work to get up to date, takes 20 minutes to load on a dev machine, starts to fail because it's getting too big for a dev machine to handle, can't bring in any new dependencies because of how old the Java version is, and has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years.
- a million-LoC Java application that is on Java 6 -> Congrats, now you have two half-million-LoC Java applications on two different Java versions. And if the setup is like most apps, you will likely need both running to debug most issues, because most issues happen at the system-to-system interface
- takes 20 minutes to load on a dev machine -> that is fair enough; I have only ever seen an app that takes that long on a modern machine once, and most shops doing microservices don't have apps that big
- has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years -> you can have the same problem on a microservice architecture; in fact, you can have that problem multiplied by 10, and now you can spend a whole sprint updating dependencies. Fun!
Breaking off pieces of the code into microservices that can run on a new Spring Boot and on a newer, nicer IaC setup is an easy win -> You conveniently forget to mention the additional team needed to fix issues related to system-to-system communication
> > has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years
> you can have the same problem on a microservice architecture; in fact, you can have that problem multiplied by 10, and now you can spend a whole sprint updating dependencies. Fun!
Definitely true... Not to mention when your entire orchestration becomes too big to run anything locally; that's where the real fun and complexity starts. There's definitely such a thing as too many microservices, or services that are too micro, for that matter...
> And if the setup is like most apps, you will likely need both running to debug most issues, because most issues happen at the system-to-system interface
Wrong. Well-architected services would have good interfaces, and problems rarely span multiple services.
>- has an old bespoke Ant + Maven + Jenkins + Bash + Perl build and deploy system that has been built up over the last 30 years -> you can have the same problem on a microservice architecture; in fact, you can have that problem multiplied by 10, and now you can spend a whole sprint updating dependencies. Fun!
This is so un-nuanced. The point is that if you have decomposed the codebase into smaller ones, migrations are easier.
Surely if your monolith codebase is good enough, migrations will be easy too. And a well-architected system can be run on a single machine. If you don't believe me, try running Windows 95 on your machine. These are No True Scotsman fallacies that devs often use to justify whatever paradigm/philosophy they believe in. Most code in the wild is not perfectly designed, or if it is, it doesn't stay so for long. Paradigms that rely on optimal conditions to work decently aren't universally useful imo.
If your car only works on pristine, smooth roads, it's not a good car, no matter how many cool features it has or how fast you can drive it.
You are focusing on the hyperbole to ignore the basic point: being unable to change, build and release different parts of the code independently can and eventually will bring your development velocity to a crashing halt.
Sure, you can release quickly, but when your whole constellation of services is too big to be run on a local machine, your development velocity will also come to a crashing halt.
I think the big problem no one speaks about is that "microservices" was an incredibly poorly chosen name.
Your design goal should not be "create the smallest service you can to satisfy the 'micro' label". Your design goal should be to create right-sized services aligned to your domain and organization.
The deployment side is of course a red herring. People can and do deploy monoliths with multiple deployments and different endpoints. And I've seen numerous places do "microservices" which have extensive shared libraries where the bulk of the code actually lives. Technically not a monolith - except it really is, just packaged differently.
Another key is that you should always be able to reasonably hack on just one of the "services" at once; everything else should be excludable completely or replaceable with a minimal mock, for example an auth mock that just returns a dummy token (a rough sketch below).
If you've got "microservices" but every dev still has to run a dozen kubernetes pods to be able to develop on any part of it, then I'm pretty sure you ended up with the worst of both worlds.
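A minimal sketch of that kind of auth mock, assuming the JDK's built-in HTTP server and a made-up /token endpoint; the real paths and token shape would depend on the service being mocked:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Stand-in for the real auth service during local development:
// every request to /token gets the same dummy token back.
public class AuthMock {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9999), 0);
        server.createContext("/token", exchange -> {
            byte[] body = "{\"access_token\":\"dummy\",\"expires_in\":3600}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (var out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // point the service under development at http://localhost:9999
    }
}
```

Point the one service you're hacking on at this instead of the real auth service, and nothing else needs to run locally.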
> Your design goal should not be "create the smallest service you can to satisfy the 'micro' label".
A place I worked at years ago did what I effectively called "nano-services".
It was as if each API endpoint needed its own service. User registration, logging in, password reset, and user preference management were each their own microservice.
When I first saw the repo layout, I thought maybe they were just using a bunch of Lambdas that would sit behind an AWS API Gateway, but I quickly learned the horror as I investigated. To make it worse, they weren't using Kubernetes or any sort of containers for that matter. Each nanoservice was running on its own EC2 instance.
I swear the entire thing was designed by someone with AWS stock or something.
I know one place that did all of their transactional payment flow through lambdas. There were about 20 lambdas in the critical auth path, and they regularly hit the account-wide AWS Lambda limits.
Another place did all their image processing via lambdas, about fifty of them. They literally used lambdas and REST calls where anyone sane would have done it in one process with library calls. It cost them tens of thousands of dollars a month to do basic image processing that should have cost about $100 or so.
I agree with this. Personally I think the two-pizza-team, single-responsibility model is not a great idea. The most successful "microservices" model I've worked on actually had 100-ish devs on the service. Enough to make on-call, upgrades, maintenance, etc. really spread out.
Agreed. This is why I prefer the term “service oriented architecture” instead. A service should be whatever size its domain requires - but the purpose is to encapsulate a domain. A personal litmus test I have for “is the service improperly encapsulating the domain” is if you need to handle distributed transactions. Sometimes they are necessary - but usually it’s an architectural smell.
I think updating dependencies is maybe the most important point. Different microservices can have different versions of libraries and frameworks, as long as their APIs return and do what they should, other microservices don't need to care about what version of some library is used. Being able to update dependencies for a smaller amount of code at a time can make all the difference between "no, that will be too much work right now" and "it's doable".
But, if you have a modular monolith, it will be easy to split it up into separate services, whether microservices or just services. It will be a good test to see how modular your system/monolith really is.
Then your beautiful microservice gets old and requirements change. Now you have to fix 15 different services, rework interfaces, and coordinate with multiple teams. My golden rule is: if you can’t do a monolith right, you will fail even more at microservices. I think moving some parts into services can make sense, but I see a lot of simplistic “we have users, so let’s do a user service. Then we have files, so let’s do a file service”. This will become a maintenance nightmare in my view.
> No you can update ONLY the micro services that are impacted by the new requirements without impacting the other micro services.
Only if the new requirements don't require a breaking API change. Microservices make API breaks more difficult, since they're loosely coupled it's harder to find all users of an old API & ensure they're updated than it is with a tightly-coupled system. Microservices make non-breaking changes easier, and help ensure all access is gated through an API.
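A hedged illustration of that distinction, using a hypothetical order DTO; none of these names come from the thread:

```java
// Hypothetical response type for an order microservice.
public record OrderResponse(
        String orderId,
        long totalCents,
        // Non-breaking change: a new, optional field. Existing consumers that
        // don't know about it can simply ignore it when deserializing.
        String discountCode
) {}

// A breaking change would be renaming or retyping an existing field, e.g.
// totalCents -> total as a decimal string: every consumer has to be found and
// migrated in step, which is exactly what loose coupling makes hard to track.
```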
I think the framing of "monoliths vs. microservices", with the implication that you must either have a mountain of a codebase or a beach of grains of code-sand, is not helpful. Good modularity means that different levels of tradeoffs can be made without huge effort.
True, that's why I titled the article "Modular Monolith and Microservices: Modularity is what truly matters". Modularity is crucial here - you can mix it up on multiple levels: having a single modular monolith, a few bigger services that each have many modules inside, or, finally, microservices where you treat each service itself as a module.
Modularization is what's primary here and what gives you flexibility, not whether you have one or multiple units of deployment.
> microservices where you treat each service itself as a module.
Microservices is where you treat each team of people as their own independent business unit. It models services found in the macro economy and applies the same patterns in the micro economy of a single organization. Hence the name.
The clearest and probably simplest technical road to achieving that is to have each team limit exposure to their work to what can be provided over a network, which is I guess how that connotation was established. But theoretically you could offer microservices with, for example, a shared library or even a source repository instead.
Microservices was originally envisioned to literally create the smallest possible service you could, with canonical Netflix use cases being only one or two endpoints per microservice.
Which is great I guess for FAANGs. But makes no sense for just about anyone else.
> Microservices was originally envisioned to literally create the smallest possible service you could
"Micro web services" was coined at one time, back when Netflix was still in the DVD business, to refer to what you seem to be speaking about — multiple HTTP servers with narrow functional scope speaking REST (or similar) that coordinate to build something bigger in a Unix-style fashion.
"Microservices" emerged when a bunch of people at a tech conference discovered that they were all working in similar ways. Due to Conway's Law that does also mean converging on similar technical approaches, sure, but because of Conway's Law we know that team dynamics comes first. "Microservices" wasn't envisioned — it was a label given to an observation.
Microservices came about because Netflix and soon some other FAANGs found they were so big that it made sense to make single-function "micro"-services. They literally chose to massively duplicate functionality across services because their scale was so big it made sense for them.
This is great for FAANG-scale companies.
It makes little sense for most other companies, and in fact incurs all of the overhead you would expect - overly complex architecture, an explosion of failure points, a direct elongation of latency as microservices chain calls to each other, chicken-and-egg circular references among services, and all of the mental and physical work to maintain those dozens (or hundreds, or thousands!) of services.
The funny thing to me is people point to monoliths and say "see, problem waiting to happen, and it will be so hard to undo it later!". But I see microservices, and usually the reality is "We have problems right now due to this architecture, and making it sane basically means throwing most or all of it away".
In reality, unraveling monoliths is not as hard as many people have made out, while reasoning about microservices is much harder than advertised.
Tooling in particular makes this a very hard nut to crack. In the statically typed world especially, there are great tools to verify large code bases.
The tooling for verifying entire architectures - like a huge set of microservices - is way behind. Of course this lack of tooling impacts everyone, but it makes microservices even harder to bear in the real world.
Forget about convenient refactoring, and a thousand other things....
Nah. You've made up a fun story, but "microservices" is already recognized as being in the lexicon in 2011, while Netflix didn't start talking about said system until 2012.
> This is great for FAANG-scale companies.
Certainly. FAANG-scale employees need separation. There are so many that there isn't enough time in the day for the teams to stay in close communication. You'd spend 24 hours a day just in meetings coordinating everyone. Thus microservices says instead of meetings, cut off direct communication, publish a public API with documentation, and let others figure it out — just like how services from other companies are sold.
If you are a small company, you don't have that problem. Just talk to the people you work with. It is much more efficient at normal scale.
If you go up to this scale then yes, it probably makes sense to have a few (a reasonable number of) services. But I would ask: why does it take so long for this kind of app to be up and running? It's rather unlikely to be its code size - something else has gone wrong :)
Even in this hypothetical scenario, you’re radically rearchitecting your entire product to save “months of work” and the cost of some beefier dev machines. How can that be rational?
There is a reason, which is the overall capabilities of the development org at a company. It takes more discipline and skill to do that consistently. Separate services are a forcing function for modularity.
> Breaking off pieces of the code into microservices that can run on a new Spring Boot
I was with you until this part.
The correct answer is:
> Breaking off pieces of the code into microservices that no longer have Spring Boot as a dependency so you are not pulling in unknown numbers of unneeded dependencies that could have an unexpected impact on your application at surprising times, and forced version upgrades for security patches that also make major semantic breaking changes.
> So what do you do? Breaking off pieces of the code into microservices that can run on a new Spring Boot and on a newer, nicer IaC setup
I’m not sure why that’s your first instinct as opposed to splitting up your monolith into multiple Java packages that only have a downstream dependency relationship. (This is the second option in the article.) Spinning up microservices is hardly an easy win compared to this approach.
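One way to make that package split enforceable rather than aspirational is an architecture test; a rough sketch assuming ArchUnit and JUnit 5 (neither is mentioned in the thread), with made-up package names:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// Keeps the dependency relationship strictly downstream: the "core" package
// must never reach back up into the "web" package.
class ModuleDependencyTest {

    @Test
    void coreDoesNotDependOnWeb() {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        noClasses().that().resideInAPackage("..core..")
                .should().dependOnClassesThat().resideInAPackage("..web..")
                .check(classes);
    }
}
```

With rules like this per module, a later split into separate services is mostly a matter of moving already-isolated packages.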
The other big advantage is you can monitor and scale your services independently and decouple outages.
If one endpoint needs to scale to handle 10x more traffic, it's woefully inefficient to 10x your whole cluster.
Ideally you write the code as services/modules in a monolith imo. Then you can easily run those services as separate deployments later down the line if need be
You also have to determine how much of your monitoring traffic is due to the microservices themselves - where messaging and logging might once have happened in memory, it now has to be read and written in much more expensive executions.
There's not one silver bullet. It's not 100% monoliths, or 100% microservices for all.
Learning from the things we don't do, or haven't done yet, in ways we haven't yet thought of also helps expand one's skills.
This is because clever architecture will always beat clever coding.
I'm not sure what you mean. What's the difference in messaging and logging? What do you mean by messaging?
Like over the network versus code running on the same machine? Because that should already be distributed, unless you can really fit your whole workload on a single machine.
There isn't that much difference between an application and a library. You can always create multiple deployments of the same code configured differently.
We have an app with two different deployments. One is serving HTTP traffic, and the other is handling Kafka messages. The code is exactly the same, but they scale based on different metrics. It works fine.
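A rough sketch of how a single codebase can expose two deployments like that, assuming Spring's profile mechanism (the poster's actual stack isn't stated); the names are hypothetical:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class DeploymentConfig {

    // Hypothetical entry points; in a real app these would be the REST
    // controllers and the Kafka listeners respectively.
    interface EntryPoint { String describe(); }

    @Bean
    @Profile("web")      // started with --spring.profiles.active=web
    public EntryPoint httpEntryPoint() {
        return () -> "serving HTTP traffic";
    }

    @Bean
    @Profile("worker")   // started with --spring.profiles.active=worker
    public EntryPoint kafkaEntryPoint() {
        return () -> "consuming Kafka messages";
    }
}
```

Same artifact, two deployment manifests, each scaled on its own metric (request rate for one, consumer lag for the other).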
> Breaking off pieces of the code into microservices ...
I was with you until "into". Then continue with "Maven modules" (or Gradle modules, or some other kind of modules) and solve some real issues instead of imaginary deployment structure issues.
He's saying instead of using "microservices" to modularize your shit, you can use folders to modularize your shit. Folders? Files? When someone told me that I could use folders to modularize stuff instead of entire microservices, the concept was so foreign to me that it opened up a whole new world.
Indeed! And getting odd behaviour, trying to see what went wrong, and it taking you ages until you realise that one of the 10 microservices was failing rather than it being an actual bug in the microservice you were playing with. Plus, upgrades and maintenance get multiplied by 10.
I watched 1000-node microservice systems start up. Most nodes start really fast, and most of the system is up in seconds, maybe 15-20 seconds, as the flurry of service registration passes. A few nodes would take inordinate time to start up, apparently because they are unlucky and repeatedly get less CPU, less I/O, more retries on congested links, etc.
But you don't need to do this on your dev machine. Nearly a decade ago at GrubHub we already had a setup that allowed us to run a few microservices under development locally, while relegating the rest to the staging environment, which just runs every microservice, like prod, but in small quantities.
A JVM-based microservice used to take, say, 16-20 MiB of RAM; a 50-MiB service was considered a behemoth that may need slimming down. You could run quite a number of 20-MiB containers on a laptop with 16 GiB, along with all your dev setup, some local databases, etc.
Being saddled with an old code base with a mountain of tech debt does not invalidate the OP's argument about modularity and microservices. I feel your pain. You describe a great approach for tackling a monolith with mountains of tech debt by breaking out a microservice.
However, having a monolith does not automatically mean you abandon all efforts to address tech debt. I worked on a large monolith that went from Java 7 to Java 21; it was never stuck, had excellent CI tooling, including heavy integration/functional testing, and a good one-laptop DX, where complex requests can be stepped through in your IDE all the way through in one process.
Your argument does not invalidate a service-oriented approach with large (non-micro) services. You can have a large shared code base (e.g. domain objects, authentication and authorization logic, scheduling and job execution logic) that consists of modular service objects that you can compose into one or three or four larger services. If I had to sell that to the microservice crowd, I would call them "virtualized microservices", combined into one or several deployment units (a rough sketch below).
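What composing those service objects into a few deployment units could look like, as a rough sketch with entirely hypothetical module names:

```java
import java.util.List;

// Each domain module exposes a plain service object.
interface ServiceModule { void start(); }

class AuthModule implements ServiceModule { public void start() { /* wire auth endpoints */ } }
class BillingModule implements ServiceModule { public void start() { /* wire billing endpoints */ } }
class SchedulingModule implements ServiceModule { public void start() { /* wire job scheduling */ } }

// One composition root per deployment unit decides which modules it mounts.
// The same modules can later be regrouped, or split out, without rewriting them.
public class BackOfficeDeployment {
    public static void main(String[] args) {
        List<ServiceModule> modules = List.of(new BillingModule(), new SchedulingModule());
        modules.forEach(ServiceModule::start);
    }
}
```

Regrouping modules into different deployment units, or splitting one out as its own service later, then only touches the composition roots.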
In fact, if I were to start a new project today, I would keep it a monolith with internal modularity until there was a clear value to break out a 2nd service.
Also, it is completely valid to break out into microservices things that make absolute sense and are far detached from your normal services. You can run a monolith + a microservice when it makes sense.
What doesn't make sense is microservices-by-default.
The danger of microservices-by-default is that you are forced to do big design up-front, as refactoring microservices at their network boundaries is much more difficult than refactoring your internal modules.
Also, microservices-by-default means you now have so many more network boundaries and security boundaries to worry about. You now have to threat-model many microservices because of the significantly increased number of boundaries and the larger network surface.

You are also forcing your team to deal with significantly more distributed computing concerns right away: inter-service boundaries are now network calls instead of in-process calls, requiring more careful design that has to account for latency, bandwidth and failure. You now have to worry about the availability and latency of many services, and risk a weakest-link-in-the-chain service bringing your end-user availability down.

You waste considerably more computing resources by not being able to share memory or CPU across all these services. You will end up writing microservice caches to serve over the network what could have been an in-process read. Or, if you're hardcore about having stateless microservices (another dogmatic delusion), you will now be standing up Redis or Memcached instances for your caches, to be accessed over the network.
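To make that last point concrete, here is a hedged sketch of the in-process read described above, using Caffeine purely as an example library (not something the thread names), with hypothetical names:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;

public class ProfileLookup {

    // Inside a monolith (or a larger service), a hot read can stay in process:
    // a map lookup measured in nanoseconds, no serialization, no network hop.
    private final Cache<String, String> profiles = Caffeine.newBuilder()
            .maximumSize(100_000)
            .expireAfterWrite(Duration.ofMinutes(5))
            .build();

    public String displayName(String userId) {
        return profiles.get(userId, this::loadFromDatabase);
    }

    private String loadFromDatabase(String userId) {
        return "user-" + userId; // placeholder for the real lookup
    }

    // Once the same data lives behind a separate microservice, that read becomes
    // an HTTP or Redis round trip: milliseconds instead of nanoseconds, plus
    // serialization, connection pools, retries, and another failure mode to handle.
}
```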