I'm consulting on a microservices back end right now, with prior experience mostly in monoliths. What is the selling point that drives companies in this direction? It's insane; my client keeps trying to hire new developers and bring on more consultants to build this thing, but the amount of knowledge required is more than any one person can handle. I have similar issues with their choice of database (NoSQL) and its inflexibility.
The biggest reason for choosing microservices _should_ be scaling development teams: microservices allow multiple teams to work on different code bases, without stepping on each other's toes.
What actually happens, though, is that (uninformed) people choose it because they think it brings them scalability (wrong), that it's more cloud-compatible (wrong), or, the worst offender, that it's more modern.
How does a properly designed microservice architecture not achieve greater scalability than a monolith? Of course you can scale up a monolith, but with microservices you can independently scale selected services, thus improving your benefit-cost ratio.
There's a great deal of overhead when communicating between services that isn't typically present in a more tightly-integrated monolith. Networks are very, very slow relative to RAM, setting aside the other costs involved in serializing/deserializing every communication.
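To make that cost concrete, here's a rough sketch (the payload and call counts are made up for illustration) of the minimum tax a service boundary adds. Even ignoring network latency entirely, every cross-service call pays serialization both ways, while an in-process call just passes a reference:

```python
import json
import timeit

# Hypothetical payload a service boundary would carry.
payload = {"user_id": 42, "items": [{"sku": f"SKU-{i}", "qty": i} for i in range(50)]}

def in_process(data):
    # A monolith's "call": pass the reference, no copying at all.
    return data

def across_boundary(data):
    # A microservice call pays (at minimum) serialize + deserialize,
    # before any network latency is even counted.
    return json.loads(json.dumps(data))

t_local = timeit.timeit(lambda: in_process(payload), number=10_000)
t_remote = timeit.timeit(lambda: across_boundary(payload), number=10_000)
print(f"in-process: {t_local:.4f}s, serialize round trip: {t_remote:.4f}s")
```

On real hardware the gap widens further once an actual network hop is added on top of the serialization shown here.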
And "properly designed" is very hard to achieve.
It's often much more productive to start with a monolith and experience the pain of extracting a piece that needs horizontal scaling, than to start by horizontally scaling everything.
Microservices can be a good architecture, but it's far from a no-brainer, even for high scalability. Servers are incredibly fast these days and can typically scale very well vertically.
I personally feel like microservices don't necessarily help multiple disconnected teams work on different codebases. In my mind the reason is similar to why it's so difficult to reuse code: requirements differ slightly between two things, so you either design a service to handle both, turning it into mini rather than micro, or you have two services, in which case you're just adding more plumbing to what could have been a separate monolith.
The "services" part makes a lot of sense in certain situations, and I feel like one of the best application architectures is what I call disconnected monoliths: centralise and standardise core concerns, like authentication and external communications, and build monoliths for everything else.
> If you achieve this scale of teams suggested, it means you have a lot of implicit gains here as well, right?
> Such as higher quality, higher "velocity", separation of concern and hopefully a clear sense of ownership.
No, it means you pay a huge overhead. Quality and velocity both drop as your day-to-day development requires a lot more setup and faff to do anything, and counterintuitively so does separation of concerns as your interfaces become more rigid. Small organisations should do things that don't scale, turn their size into an advantage.
If you think of your overhead as ax + bx^2 where x is the number of developers, microservices are a way to reduce b, but at the cost of a big increase to a. It makes sense when x is huge but not before. My litmus test would be: do you need to do multiple (unrelated) deployments of different services at once? If your organisation is small enough that you can get away with only deploying one thing at a time, you'll probably have less overhead if you work without microservices.
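As a toy illustration of that model (the coefficients are entirely made up; only the shape matters), you can compute where the crossover sits:

```python
# Toy model of the point above: total coordination overhead as a
# function of team size x, with overhead(x) = a*x + b*x^2.
def overhead(x, a, b):
    return a * x + b * x ** 2

# Monolith: cheap per-developer baseline, but cross-team friction grows fast.
mono = lambda x: overhead(x, a=1.0, b=0.5)
# Microservices: big fixed per-developer cost, but much less quadratic friction.
micro = lambda x: overhead(x, a=20.0, b=0.05)

# Smallest team size at which microservices' total overhead is lower.
crossover = next(x for x in range(1, 1000) if micro(x) < mono(x))
print(crossover)  # → 43 with these made-up coefficients
```

With these numbers the break-even is a few dozen developers; below that, the monolith's smaller `a` dominates, which matches the "one deployment at a time" litmus test.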
Automated deployment and the like is worthwhile whether your system is microservice or monolith. But no amount of automation can eliminate the overhead a network boundary brings to local development.
We view this a bit differently, and that's fine (not the automation part; here we agree).
Automation in this context is for me more than just the deploy bit, it’s also about testing and service management which includes for example service relationships and discovery.
If you could do local dev on that app as a monolith, it can most likely be done broken up in smaller services as well.
There are no silver bullets to be had anywhere, right?
Just use whatever processes and tools that work, until they don’t I guess.
> If you could do local dev on that app as a monolith, it can most likely be done broken up in smaller services as well.
It's possible, but the overhead is a lot higher, and that weighs down everything you do. Your edit-test cycle gets longer, development gets slower.
> There are no silver bullets to be had anywhere, right? Just use whatever processes and tools that work, until they don’t I guess.
Nothing is perfect but often one choice is better than another. I've seen microservices go badly much more often than monoliths, and most successful microservice systems were built as a monolith first with services separated out only when it became necessary.
Yes, there is bound to be overhead as you describe.
For some it might be worth the effort, but no doubt effort is involved.
Regarding monolith -> ms — I'm starting to think this is the way to do it: once you have the flow of data already established and the (working) system(s) somewhat defined, it becomes easier to decouple bits and pieces.
I think your interlocutor's point is that type checking by a compiler is a lot simpler, more reliable, and more performant than any networked service discovery scheme so far conceived.
I think microservice architectures can genuinely decouple teams to iterate faster and consolidate efforts. But it's not a free lunch.
What I was trying to say is that _if_ you go down the many services route, a lot of automation and integrations will be needed, and perhaps in completely new places.
No free lunches ever... just a lot of hard work. :)
Microservices requires proper, structured systems management.
Most seem to think "devops" and microservices are ways to ignore this, when in fact the complete opposite applies.
Make sure you have a true service delivery and service management pipeline in place where it is real easy for dev and ops to deploy and decommission services.
Service metadata is key in my experience. Not too much though, just enough (such as service owner, deployment scopes, a release database, etc...).
Nothing gets deployed without this metadata present, and make sure to automate the creation and keeping of these records.
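A minimal sketch of what such a metadata gate could look like; the field names and values here are purely illustrative, not from any real system:

```python
# Hypothetical shape of a "just enough" service metadata record:
# only the fields needed to gate a deployment, nothing more.
REQUIRED_FIELDS = {"name", "owner", "deployment_scopes", "release_id"}

def may_deploy(metadata: dict) -> bool:
    """Gate: nothing gets deployed unless the record is complete."""
    missing = REQUIRED_FIELDS - metadata.keys()
    return not missing and bool(metadata["owner"])

record = {
    "name": "billing-api",            # illustrative service name
    "owner": "team-payments",         # someone is always accountable
    "deployment_scopes": ["staging", "prod"],
    "release_id": "2024-01-15.3",     # links into the release database
}
print(may_deploy(record))         # complete record: deploy allowed
print(may_deploy({"name": "x"}))  # missing metadata: blocked
```

In practice a check like this would run in CI and the record creation itself would be automated, so nobody fills these in by hand.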
Make sure convention over configuration applies; in most cases this is doable.
This stuff, in my experience, takes somewhere between 6 and 12 months to put into place, including the actual automation/orchestration (be it the HashiCorp stack, Kubernetes, DC/OS, Triton, whatever).
Do not even think about starting to move to prod before everyone has agreed upon the conventions and the operational situation surrounding the stack and services.
Other than this, it's just code and integrations. Business logic. =)
In my last project we moved from 0 to 200 fully managed microservices using the HashiCorp stack (plus a bunch of other stuff and homegrown things as well, such as service metadata and a release database/APIs), and the biggest challenge was having everyone agree on the conventions.
Convincing the developers that this type of housekeeping will be necessary (perhaps not today, or next month, but down the line) took a few months. In the end it was a massive success that really accelerated the way we could deliver value to the business.
When it comes to the data and ETL part you get a lot for free if the above is done decently.
From my perspective the data scientists and ML guys need this stuff as well to be able to deploy and modify ETL flows and data pipelines at will.
They'll most likely want to deploy some python, R and/or shiny apps as well! =)
They (can) benefit greatly from being integrated into the same automations.
Selling points achieved in above mentioned "transformation"/project:
- 14,000 production deployments with 45 devs and 3 "ops" guys (dev, ops, devops?) in about a year.
("But why!?" someone will ask! And I'll be happy to respond.)
- Things started to happen, such as "could we not just provide this [insert awesome thing] to end users?"
And what would have taken 6 months before could be pushed to canary days or even hours later.
- One specific feature I can think of increased revenue by more than a million dollars per month, and it took one of the teams exactly two days to build and release.
Granted, this was within a multibillion-turnover enterprise.
Most businesses would benefit from this transformation, but it requires a blend of people and technology that probably is difficult to attain outside the proper "tech" industry.
And my experience from within "small" tech is that the focus, understandably, is sometimes not on what I call systems management and service delivery.
The money bit was more about rolling a change that means instant revenue increase in the millions, rather than people or product scale, but I agree that you need to put some serious effort into service management that might not make sense for a really small team.
Not sure where there’s a natural threshold other than when you realize you can’t deliver on business needs and requirements rapidly enough.
This is probably not in the beginning with a small team.
Because once you are up to speed the business and devs work more in concert, which lets things flow fast, especially if changes are small and many, rather than fewer and larger.
As devs grow secure in the infrastructure (”it kinda just works” from their end) as well as the ability to roll back within seconds, deploying things to production becomes no big deal.
The ”separation of concerns” and containerization enable some of this, but only part is tech — I find a great deal is rooted in the culture and people of the same mindset working together.