
Microservices let you horizontally scale deployment frequency too.



I think this was the meme before moduliths[1][2], when people conflated the operational and code-change aspects of microservices. But it's just additional incidental complexity that you should resist.

IOW, you can do just as many deploys without microservices if you organize your monolithic app as independent modules (toy sketch after the links below), while keeping out the main disadvantages of microservices: infra/CI/CD/etc. complexity, and turning your app's function calls into an unreliable distributed-systems communication problem.

[1] https://www.fearofoblivion.com/build-a-modular-monolith-firs...

[2] https://ardalis.com/introducing-modular-monoliths-goldilocks...
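
To make that concrete, here's a toy Go sketch of what a module boundary in a modulith can look like (the Billing module and every name in it are made up for illustration):

    // Toy modulith sketch (all names hypothetical): modules are plain
    // packages behind narrow interfaces, compiled into one binary.
    // Calling another module is a function call, not a network hop.
    package main

    import "fmt"

    // Billing is the only surface other modules are allowed to touch.
    // In a real codebase it would sit in its own package, e.g.
    // internal/billing, with its own tests and owners.
    type Billing interface {
        Charge(userID string, cents int) error
    }

    type billingModule struct{}

    func (billingModule) Charge(userID string, cents int) error {
        fmt.Printf("charged %s %d cents\n", userID, cents)
        return nil
    }

    func main() {
        var b Billing = billingModule{} // in-process; no retries, no timeouts
        _ = b.Charge("user-42", 1999)
    }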


An old monolithic PHP application I worked on for over a decade wasn't set up with independent modules, and the average deploy probably took a couple of seconds, because it was an svn up which only updated changed files.

I frequently think about this when I watch my current workplace's Node application go through a huge build process, spitting out a 70 MB artifact which is then copied multiple times around the entire universe as a whole chonk before finally ending up where it needs to be, several tens of minutes later.


Even PHP applications get deployed that way these days: the deploy goes through this huge build pipeline and takes about the same amount of time to replace all the Docker containers.


I avoid Docker for precisely that reason! I have one system running on Docker across our whole org - Stirling-PDF, providing some basic PDF services for internal use. Each time I update it I have to watch it download 700 MB of Docker stuff, instead of just doing an in-place upgrade of a few files.

I get that there are advantages to shipping stuff like this. But having seen PHP stuff work for decades with in-place deploys and no build process, I am just continually disappointed with how much worse the experience has become.


One approach I've seen work rather successfully is to have a container that just contains the files to deploy, and another one for the runtime (rough sketch below). You only need to update the runtime container about once a week (to get OS security updates), and the files container is literally just a COPY command to a volume.

I've only seen that in one place, ever. Most people just do the insane 40-minute docker build -- though I've also seen some that take over 4 hours...
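
For anyone curious, the split might look roughly like this (all image names, paths, and the busybox copy trick are my assumptions, not a description of that one place's setup):

    # Sketch of the two-image split (names/paths hypothetical).
    # This is the "files" image: rebuilt on every deploy, nothing but
    # the app files plus a copy into the shared volume on start.
    FROM busybox
    COPY ./app /app-src
    CMD ["cp", "-r", "/app-src/.", "/app"]

    # The runtime lives in a separate, long-lived image (rebuilt roughly
    # weekly for OS security updates), e.g.:
    #   FROM php:8.3-fpm
    # and is run with the same /app volume mounted read-only.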


That makes a lot of sense to me!


> which only updated changed files

You pay for this with the inability to maintain instance state (even caches) and a glacially slow runtime. It's a tradeoff.


Not sure what you mean about either of those two things? Never had any issues with instance state in our primary production environments, which were several instances of load balanced web servers. No idea what you're referring to as "slow"?


Yeah, if something even simpler works, that's of course even better.

I'd argue the difference between that PHP app and the Node app wasn't the lack of modularity; you could have a modulith with the same fast deploy.

(But of course a modulith, too, is just extra complexity if you don't need it.)


> I think this was the meme before moduliths[1][2], when people conflated the operational and code-change aspects of microservices.

People conflate the operational and code-change aspects of microservices just like they conflate the sky being blue with water being wet. It's a statement of fact that doesn't go away with buzzwords.

> IOW, you can do just as many deploys without microservices if you organize your monolithic app as independent modules, while keeping out the main disadvantages of microservices: infra/CI/CD/etc. complexity, and turning your app's function calls into an unreliable distributed-systems communication problem.

This personal opinion is deep within "not even false" territory. You can also deploy as many times as you'd like with any monolith, regardless of what buzzwords you tack on that.

What you're completely missing from your remark is the loosely coupled nature of running things as separate services: how trivial it is to do blue-green deployments, and how you can do gradual rollouts that you absolutely cannot do with a patch to a monolith, no matter what buzzwords you tack onto it. That is the whole point of mentioning microservices: you can do all that without a single meeting.
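
For concreteness, a gradual rollout is essentially a weighted traffic split at the routing layer. A toy Go sketch (the ports and the 5% weight are invented; real setups would use a load balancer or service mesh, not hand-rolled code):

    // Toy canary router (illustrative, not any real LB's API): send a
    // small percentage of traffic to the new deployment ("green") and
    // the rest to the old one ("blue"), then raise the weight as
    // metrics stay healthy.
    package main

    import (
        "math/rand"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func proxyTo(raw string) *httputil.ReverseProxy {
        u, err := url.Parse(raw)
        if err != nil {
            panic(err)
        }
        return httputil.NewSingleHostReverseProxy(u)
    }

    func main() {
        blue := proxyTo("http://localhost:8081")  // current version
        green := proxyTo("http://localhost:8082") // new version
        greenPercent := 5 // hypothetical starting weight

        http.ListenAndServe(":8080", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                if rand.Intn(100) < greenPercent {
                    green.ServeHTTP(w, r)
                    return
                }
                blue.ServeHTTP(w, r)
            }))
    }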


I seem to have struck a nerve!

While there may be some things that come for free with microservices (and not moduliths), the ones you mention don't sound convincing. Blue-green deployments and gradual rollouts can be done with a modulith, and I can't think of any reason they would be harder than with microservices (part of your running instances can run a different version of module X). The coupling can be just as loose as with microservices.


"Blue-green deployments" is a buzzword no matter what color you tack on it.


It’s a monkey’s paw solution: now you have 15 kinda-slow pipelines instead of 3 slow deployment pipelines. And you get to have the fun new problem of deployment planning and synchronizing feature deployments.


> It’s a monkey’s paw solution: now you have 15 kinda-slow pipelines instead of 3 slow deployment pipelines.

Not a problem. In fact, they are a solution to a problem.

> And you get to have the fun new problem of deployment planning and synchronizing feature deployments.

Not a problem either. You don't need to synchronize anything if you're consuming changes that are already deployed and running. You also do not need to synchronize feature deployments if you know the very basics of your job. Worst-case scenario, you have to move features behind a feature flag, which requires zero synchronization (sketch below).

This sort of discussion feels like people complaining about perceived problems they never bothered to think about, let alone tackle.
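
To spell out the feature-flag case: the new path ships dark and gets enabled later by config, so no deploy ordering matters. A minimal Go sketch (the flag and env var names are hypothetical):

    // Minimal feature-flag sketch (flag and env var are hypothetical).
    // The new code path ships disabled, so deploy order across services
    // stops mattering; turning it on later is a config change, not a
    // coordinated deploy.
    package main

    import (
        "fmt"
        "os"
    )

    func newCheckoutEnabled() bool {
        return os.Getenv("FEATURE_NEW_CHECKOUT") == "on"
    }

    func main() {
        if newCheckoutEnabled() {
            fmt.Println("new checkout flow")
        } else {
            fmt.Println("old checkout flow") // safe default
        }
    }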


Not a silver bullet; you increase API versioning overhead between services, for example.


> Not a silver bullet; you increase API versioning overhead between services, for example.

That's actually a good thing. It ensures clients remain backwards compatible in case of a rollback. The only people who don't notice the need for API versioning are those who are oblivious to the outages they create.
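
For illustration, the rollback-safety argument looks roughly like this (routes and payloads are made up): /v1 keeps serving old clients while /v2 rolls out, so a rollback on either side still finds a compatible contract.

    // Versioned-endpoint sketch (routes and payloads hypothetical).
    package main

    import (
        "encoding/json"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/v1/user", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(map[string]string{"name": "Ada Lovelace"})
        })
        mux.HandleFunc("/v2/user", func(w http.ResponseWriter, r *http.Request) {
            // v2 splits the field; v1 clients never see the change.
            json.NewEncoder(w).Encode(map[string]string{
                "first_name": "Ada", "last_name": "Lovelace",
            })
        })
        http.ListenAndServe(":8080", mux)
    }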


The comparison is with a monolith where versioning isn’t required at all, since you can refactor all clients atomically in one change.


True, but your API won't be changing that rapidly, especially not in a backwards-incompatible way.


The more (and smaller) microservices you have, the more frequently your APIs must change. It’s more fruitful to recognize that this is a dimension of freedom rather than a binary decision.


I mean, I recommend that you don't go micro right away. But a few well-placed service boundaries that align with your eng org chart pay dividends and help build discipline and rigor, even if the services all run on the same box to start and just IPC over localhost or whatever. I prefer my teams build habits while it's easy and fun rather than once the monolith gets unwieldy and drawing boundaries becomes painful.
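
As a sketch of what that looks like on day one (the service name, port, and in-process goroutine stand-in are all hypothetical): the boundary is a real network call even though it's localhost, so moving the service to another box later changes a URL, not your code structure.

    // Sketch: a service boundary that's a real network call even though
    // everything runs on one box. The "inventory" service (hypothetical)
    // is stood up in a goroutine here just to keep the example
    // self-contained; in practice it would be its own binary.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        go http.ListenAndServe("127.0.0.1:9001", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprint(w, "42 units in stock")
            }))
        time.Sleep(100 * time.Millisecond) // crude wait for the listener

        // The caller only knows a URL; relocating the service later
        // means changing this string, not restructuring the code.
        resp, err := http.Get("http://127.0.0.1:9001/inventory")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }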


What's that got to do with microservices?

Edit: because you can avoid those things in a monolith.


You avoid a lot of things in a monolith. But normal services mapped to your org chart tend to be pretty nice. I hate how people spam "micro" service talk as if that's the end game for putting physical boundaries between pieces of software.


That works as long as every team managing the different APIs/services doesn't have to be consulted for others to get access. Otherwise you get both the problems of distributed data and even more levels of complexity (more meetings than with a monolith).


> That works as long as every team managing the different APIs/services doesn't have to be consulted for others to get access.

Worst-case scenario, those meetings take place only when a new consumer starts consuming a producer managed by an external team well outside your org.

Once that rolls out, you don't need any meetings anymore beyond hypothetical SEVs.


You can do this with a monolith architecture, as others point out. It always comes down to governance. With monoliths you risk slowing yourself down in a huge mess of SOLID, DRY and other “clean code” nonsense, which means nobody can change anything without it breaking something. Not because any of the OOP principles are wrong on their face, but because they are so extremely vague that nobody ever gets them right. It’s always hilarious to watch Uncle Bob dismiss any criticism with a “they misunderstood the principles”, because he’s always completely right. Maybe the principles are just bad when so many people get them wrong?

Anyway, microservices don’t protect you from poor governance; it just shows up as different problems. I would argue that it’s both extremely easy and common to build a bunch of microservices where nobody knows what effect a change has on the others. It comes down to team management, and this is where our industry sucks the most in my experience. It’ll get better once the newer “Team Topologies” generations enter, but it’ll be a struggle for decades to come, if it ever really ends. Often it’s completely out of the hands of whatever digitalisation department you have, because the organisation views any “IT” as a cost center and never requests things in a way that can be incorporated into any sort of SWE best-practice process.

One of the reasons I like Go as a general-purpose language is that its simplicity by design often leads to code bases which are easy to change. I’ve seen an online bank and a couple of landlord systems (sorry, I can’t find the English word for asset and tenant management in a single platform) explode in growth, largely because switching to Go made it possible for them to actually deliver what the business needs. Meanwhile their competition remains stuck with unruly Java or C# code bases, where they may be capable of rolling out buggy additions every half year if their organisation is lucky. Which has nothing to do with Go, Java or C# by the way; it has to do with old-fashioned OOP architecture and design being way too easy to fuck up.

In one shop I worked at, they had over a thousand C# interfaces which were never consumed by more than one class… Every single one of their tens of thousands of interfaces was in the same folder and namespace… good luck finding the one you need. You could do that with Go, or any language, but chances are you won’t if you’re not rolling with one of those older OOP clean-code languages. Not doing it with C# especially is harder, because abstraction by default is such an ingrained part of the culture around it.

Personally I have a secret affection for Python shops because they are always fast to deliver and terrible in the code. Love it!





