Monoliths are not dinosaurs (allthingsdistributed.com)
266 points by davidrupp on May 5, 2023 | 234 comments



Software would be 50% better if every developer understood Tesler's Law:

"Complexity can neither be created nor destroyed, only moved somewhere else."

The drive to simplify is virtuous, but often people are blind to the complexity that it adds.

Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The correct solution depends on the circumstances. There are excellent uses of microservices. There are excellent uses of monoliths. There are excellent uses of monorepos. There are excellent uses of ... (wait never mind monorepos are just better).

Understand what is ESSENTIAL complexity and what is INCIDENTAL complexity. Simplify your system to remove as much incidental complexity as possible until you are left with the essential, and be satisfied with that.


Given your last sentence, incidental complexity can be created and destroyed (and is more difficult to destroy than to create).

The quote would probably be more accurate as:

> "ESSENTIAL Complexity can neither be created nor destroyed, only moved somewhere else."


Essential complexity can also be created and destroyed, though sometimes it happens earlier in the design process. Picking the problem you choose to solve is how you control essential complexity.


Essential complexity is inherent to the problem you have. The solution is layered between product design, technical design and implementation. What is essential complexity for a layer can be accidental for the layer above.


That just makes it a tautology. It basically says “essential complexity exists”.


It's often a matter of framing. When you abstract, refactor or move complexity, it should serve to make the rest of the system/application easier to understand, or to make those adding features to the application(s) more productive as a whole. If it doesn't do one of those things, you probably shouldn't do it.

It's one thing to shift complexity to a different group/team, such as one managing microservice infrastructure and orchestration that can specialize in that aspect of many applications/systems. It's very different to create those abstractions and separations when the same people will be doing the work on both sides, as this adds cognitive overhead to the existing team(s). Especially if you aren't in a scenario where your application performance is in imminent danger of suffering as a whole.


It's a frame of mind.

Often a developer will see something big or complex and see it as a problem.

But they should consider whether this matches the size/scope of the problem being solved.


> But they should consider whether this matches the size/scope of the problem being solved

In professional software development projects, especially legacy projects, oftentimes the complexity is not justified by the problem. There is always technical debt piling up, and eventually it starts getting in the way.


> oftentimes the complexity is not justified by the problem.

Oftentimes means not always -- what would you say some projects are doing right so that their complexity is justified by the problem?


Maybe, but it's still useful to know that essential complexity exists and to identify it in your project.


Yes :)

That is the more precise adaptation of Tesler's Law.

Obviously you can always also add superfluous complexity :)


The talk of essential and incidental complexity is, in practice, far less useful than it seems. It's very easy to agree on getting rid of incidental complexity, in the same way that it's easy for people to agree on getting rid of unnecessary, wasteful government programs, until the second you define which ones you mean.

Every time I've had that argument at work, ultimately there's no agreement on what is essential. In one of the worst cases, the big fan of the concept was a CEO who considered themselves very technical, and who decided that everything they didn't understand was obviously valueless and incidental, while everything they personally cared about was really essential complexity, and therefore impossible to simplify... even though the subsystem in question never had any practical applications until the company failed.

So ultimately, either the focus on avoiding incidental complexity is basically a platitude, or just a nice way to try to bully people into your favorite architecture. A loss either way.


The strict definition would be that essential complexity is only the complexity that would be required in an "ideal system" (e.g. no latency, no memory concerns, etc). By that definition you cannot just abandon all incidental complexity (as our systems are nowhere near ideal). Instead, this way of thinking is helpful for keeping the essential complexity implementation isolated and free from the incidental complexity.


The disagreement is on what (sub)systems are even necessary/important. Heck, we usually have trouble agreeing on what goals we are achieving. Once we settle on what truly matters, separating the essential is not that hard.


> Complexity can neither be created nor destroyed, only moved somewhere else.

I have to say that this short quote is not the whole story; for example it's ridiculously common for artificial complexity to be introduced into a system, like using microservices on a system that gets 1k users a day.

In which case, it is sometimes possible to remove complexity, because you are removing the artificial complexity that was added.


I think that quote makes sense if you assume that the complexity is required. Over-engineering is a whole different topic.

The problem is that the shops that move from monoliths to distributed systems are under the impression that all of their problems are now magically over "because microservices".


> (...) like using microservices on a system that gets 1k users a day.

This sort of specious reasoning just shows how pervasive the fundamental misunderstanding of the whole point of microservices is. Microservices solve organizational problems, and their advantages in scaling and reliability only show up as either nice-to-haves or as distant seconds.

Microservices can and do make sense even if you have 10 users making a handful of requests, if those services are owned and maintained by distinct groups.


> Microservices can and do make sense even if you have 10 users making a handful of requests, if those services are owned and maintained by distinct groups.

Maybe, but after the next CEO comes in, those groups would be reorganised anyway :-/

Few companies maintain their org chart for any great length of time. My last place had the microservices maintained by distinct groups when I joined. When I left, a third of the people were gone and half the groups were merged into the other half.

This is not an uncommon thing. Going microservices because you don't want to step on other peoples toes is a good reason, but it's an even bet that the boundaries will shift anyway.


> Maybe, but after the next CEO comes in, those groups would be reorganised anyway :-/

That's perfectly fine, because microservices excel in that scenario: just hand over the keys to the repo and the pipelines, and you're done.


> Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The net gain was composability of microservices, distribution of computing resources, and the ability to wall off implementation details. Just because those requirements were routinely ignored in the era of monoliths doesn't mean the complexity wasn't essential or didn't exist.


Before "microservices" there are services, which are also composable. And in the realm of monoliths there are also modules. Which are the key to composability.

What microservices give you is a hard boundary that you cannot cross (though you can weaken it, you cannot eliminate it) between modules. This means the internal state of a module now has to be explicitly and more deliberately exposed rather than merely bypassed by lack of access control, or someone swapping out public for private, or capitalizing a name in Go. If there's any real benefit of microservices, this is it. The hard(er) boundary between modules. But it's not novel, we've had that concept since the days of COBOL. And hardware engineers have had the concept even longer.

The challenge in monoliths is that the boundary is so easily breached, and it will be over time because of expedient choices rather than deliberate, thoughtful choices.
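To make that concrete, here's a minimal Java sketch (names are hypothetical) of the soft, in-process version of that boundary:

    // The module's only public surface: a narrow interface.
    public interface BillingApi {
        Receipt charge(String customerId, long amountCents);
    }

    // Package-private implementation: callers outside the package
    // can't touch it directly. But in a monolith this holds only by
    // convention -- it's one "public" keyword away from being breached,
    // which is exactly the expedient choice described above. A network
    // boundary can't be breached with a one-word edit.
    class BillingService implements BillingApi {
        public Receipt charge(String customerId, long amountCents) {
            // internal state stays hidden behind the interface
            return new Receipt(customerId, amountCents);
        }
    }

    record Receipt(String customerId, long amountCents) {}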


"The challenge in monoliths is that the boundary is so easily breached, and it will be over time because of expedient choices rather than deliberate, thoughtful choices."

I just doubt that people who don't have the discipline to write decent modular code will do any better with microservices. You will end up with a super complex, hard to change and hard to maintain system.


Over-focusing on the source leads to strange conclusions.

The true remedy in both cases is refactoring. So if a team doesn't have time for refactoring in a monolith, then a switch to microservices would need to free up enough time for the team to start doing it.

Can that even work at the level of a single team?


Exactly. 100% right.


There are tools for enforcing boundaries.

One name for this is "Modulith" where you use modules that have a clear enforced boundary. You get the same composability as micro-services without the complexity.

Here's how Spring solves it: https://www.baeldung.com/spring-modulith

It's basically a library that ensures strict boundaries. Communication has to go through an interface (similar to a service API) and you are not allowed to leak internal logic, such as database entities, to the outer layer.

If you later decide to convert the module into a separate service, you simply move the module to a new service and write a small API layer that uses the same interface. No other code changes are necessary.

This enables you to start with a single service (modulith) and split into microservices later if you see the need for it, without any major refactoring.
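If I remember the Baeldung article right, the enforcement part is typically just a test along these lines (a sketch; Application stands in for your @SpringBootApplication class):

    import org.junit.jupiter.api.Test;
    import org.springframework.modulith.core.ApplicationModules;

    class ModularityTests {

        // Fails the build if any module reaches into another
        // module's internals instead of going through its API.
        @Test
        void verifiesModuleBoundaries() {
            ApplicationModules.of(Application.class).verify();
        }
    }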


> The challenge in monoliths is that the boundary is so easily breached

The biggest challenge with monoliths is the limits of a single process and machine.


Typically the application server is stateless and any persistent state is kept in a database, so you can just spawn another instance on another machine.


Sure, but there are still limits, such as binary size, working memory, etc.


Could you give a concrete example from your experience? I ask because in my experience, services have had a relatively small (say less than a few hundred GB) fixed working memory usage, and the rest scales with utilisation meaning it would help to spawn additional processes.

In other words, it sounds like you're speaking of a case where all services together consume terabytes of memory irrespective of utilisation, but if you can split it up into multiple heterogeneous services each will only use at most hundreds of GB. Is that correct, and if so, what sort of system would that be? I have trouble visualising it.


Let's imagine Facebook: we can partition the monolith by user, but you would need the entire code base (50+ million lines?) running in each process just in case a user wants to access that functionality. I'm not saying one can't build a billion dollar business using a monolith, but at some point the limit of what a single process can host might become a problem.


Things like Facebook and Google are at a level of scale where they need to do things entirely differently from everyone else though. E.g. for most companies, you'll get better database performance with monotonic keys so that work is usually hitting the same pages. Once you reach a certain size (which very few companies/products do), you want the opposite, so that none of your nodes gets too hot/becomes a bottleneck. Unless you're at one of the largest companies, many of the things they do are the opposite of what you should be doing.


I agree that most will be fine with a monolith, I never said anything to the contrary. But let's not pretend that the limits don't exist and don't matter. They matter to my company (and we're not Facebook or Google, but we're far older and probably have more code).


We've been here before with object oriented programming which was supposed to introduce a world of reusable, composable objects, even connected by CORBA if you wanted to run them remotely.


This is a very nice academic theory, but in real life you get half-assed, half-implemented hairballs that suck out your will to live if you have to untangle that.


What is the sequence of events in real life that takes us from nice theory to hairballs, that academics fail to foresee?


Most companies severely underestimate the complexities of a distributed system and the work that goes into a truly resilient, scalable setup of this kind.

An infrastructure of this sort is meant to solve very hard problems, not to make regular problems much harder.


There's also the distribution of work... if one set of teams manages the deployment and communication issues between the microservices while the microservice developers concentrate on the domain, it can be a better distribution of work. Whereas if the same teams/devs are going to do both sides of this, it may make more sense to have a more monolithic codebase/deployment.


Not true. You can make any system arbitrarily complex. And 95% of software developers IMHO are hell-bent on proving that true every single day. Micro-services is a GREAT example of this.


> You can make any system arbitrarily complex.

But isn't that introducing incidental complexity? Not sure if you actually disagree.


Both essential complexity and accidental complexity can be created or destroyed based on how functional scope or technical scope is defined and understood. In a live and evolving system, a lot of complexity also comes from not having a vision for how the product domain or market or ecosystem might evolve, and a coherent shared understanding of it amongst product managers and platform builders. When product direction or technical choices flip-flop too widely, it creates lots of complexity lag/debt which can be very murky to clearly identify and attack.


Do you have a suggestion on how to avoid the flip-flopping problem? Or is there a way to turn with the wind without increasing complexity? Genuinely curious.


Well, flip-flopping is usually due to heavy investment too early – on a poorly formed thesis of a potential product-market fit or business model (i.e., vision). The early days have to be about optimizing aggressively for speed of learning/validation/formation of the thesis.

There has to be an overarching thesis that is well-formed (formed based on experience/tests from adjacent/related markets, key assumptions validated at reduced scale, strong backing by investors/founders etc).

During this vision formation/validation stage, keep things very lightweight and don't over-invest in prolonged tech platforms.

For example, if your product/service has an on-the-field operational part to it, then run that part with manual operations with pen/paper – pen/paper is extremely flexible and universally usable and survives all kinds of surprises in the field – don't wait to build a software solution to test your thesis in the field. Manual ops can scale quite well especially during learning phase (scale is not your goal, learning/validating is). Choose the use-case for your experiments carefully – optimizing for fast learning and applicability of that learning for next adjacent use-case you intend to expand to.

Once you get going, still build the tech systems without dogmas or baking strong opinions in too much. Keep your engineers generalists and first-principles problem solvers, interested in solving both tech and functional-domain problems, and encourage them to be humble and curious – because both the functional and the tech world are constantly changing. Don't hire a huge product management org – every engineer/manager should be thinking about the customer/product first. Over time, parts of your product are more mature and parts of your product are very nascent and still volatile. If your entire team is still thinking customer/product first and builds tech to solve problems from first principles, then they should find the right coupling/cohesion balance between different parts of the system to avoid shared fate, high blast radius or high contagion of that volatility/instability affecting the more mature parts of the system.


This is an incredibly insightful response. Thank you!


> Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The main misconception about microservices is that people miss why they exist and what problems they solve. If you don't understand the problem, you don't understand the solution. The main problems microservices solve are not technical, but organizational and operational. Sure, there are plenty of advantages in scaling and reliability, but where microservices are worth their weight in gold is the way they impose hard boundaries on all responsibilities, from managerial down to operational, and force all dependencies to be loosely coupled and to go through specific interfaces with concrete SLAs and clearly defined ownership.

Projects where a single group owns the whole thing will benefit from running everything in one single service. Once you feel the need to assign ownership of specific responsibilities or features or data to dedicated teams, you quickly are better off if you specify the interface, and each team owns everything behind each interface.


> Once you feel the need to assign ownership of specific responsibilities or features or data to dedicated teams, you quickly are better off if you specify the interface, and each team owns everything behind each interface.

If team A needs a new endpoint from team B, what would a typical dialogue look like under microservices and a modular monolith, respectively?


> If team A needs a new endpoint from team B, what would a typical dialogue look like under microservices and a modular monolith, respectively?


How teams interact is a function of the team/org, not the software architecture.

What microservices easily provide in this scenario, and what is far harder to pull off with a monolith, is that the service owners are able to roll out a dedicated service as a deliverable from that dialogue. Whether the new microservice implements a new version of the API or handles a single endpoint, the service owners can deploy the new service as an extension to their service instead of a modification, and thus can do whatever they wish with it without risking any impact on their service's stability.


Aren't microservices an example of complexity creation over the fundamental base case?


reminds me of that law:

"For something to get clean, something else must get dirty"

and the corollary:

"You can get something dirty without getting anything clean"


How does that apply here? (I’m not being facetious)


I was sort of thinking along the lines of...

You must remove complexity to make understandable code.

You can remove complexity without making anything understandable.


The last paragraph about essential and incidental complexity reminds me of Rich Hickey's "Simple Made Easy" talk.

Talk: https://youtube.com/watch?v=SxdOUGdseq4


I'd add that abstracting complexity should be done where it makes the rest of the system easier to understand over the abstraction. Too much abstraction can make systems harder to understand and work with instead of easier.


Assuming you are not perfect, you must have implemented overly abstracted systems at some point. What went through your head when you did that? What thoughts legitimized that extra layer of abstraction that turned out to be superfluous?


I think the best example I can think of is implementing the Microsoft Enterprise Library Data Access Application Blocks. EntLib came out of MS consulting as a means of normalizing larger scale application development with .Net. DAB in particular entailed creating abstractions and interfaces for data objects and data access components. In practice in smaller projects, this made things far more difficult, as in general the applications only targeted a single database and weren't really doing automated unit testing at the time.

It was kind of horrible, as VS would constantly take you to the IFoo interface definition instead of the DbFoo implementation when you were trying to debug. Not fun at all. It was much earlier in my career.

Probably the most complex system I designed was an inherited permission interface down to a component level through a byzantine mess of users/groups and ownership for a CRM tool in a specific industry. If I had to do it today, it would be very, very different and much simpler. Also much earlier in my career (about 20 years ago).

These days I've really enjoyed using scripting languages, mostly JS/Node, and keeping things simple (I don't care for Inversify, for example). In general the simpler thing is usually easier to maintain and keep up with over time. Yeah, frameworks and patterns can make boilerplate easier, but it's often at the expense of making everything else much harder, versus starting off slightly harder but with everything simpler overall over time.

Aside, been enjoying Rust as well.


Does Tesler’s law apply to lines of code or architecture?

I have absolutely seen complex code that was created (often for perceived “best practices” like DRY) which could be removed by simplifying the code.


I think it applies to problems, not solutions. The complexity of a given problem cannot change. If you try to ignore part of the inherent complexity of a problem (also called essential complexity) in your solution, it does not disappear but someone else must solve it somewhere else, or the problem is not really solved. If you build a solution that is more complex than the problem itself (in other words, if you add incidental complexity), this does not increase the complexity of the problem either, only the complexity of the solution.

I think a good solution to any problem needs to match it in complexity. I regularly use this comparison as a benchmark for solutions.

Of course, you can also see it this way: The complexity you remove from the code by making it cleaner is added to your team communication because you now have to defend your decision. (Only half joking.)


>>Understand what is ESSENTIAL complexity and what is INCIDENTAL complexity

Succinctly put!


Microservices reveal communication.

I believe we do not have the right tools yet.


there is no silver bullet


>Complexity can neither be created nor destroyed, only moved somewhere else.

Not true. I blow up a car. Complexity is destroyed. I rebuild the car from blown up parts. Complexity is created.

There's no underlying generic philosophical point about microservices and monoliths. What we can say is that microservices are not necessarily less complex than monoliths, but this relationship has no bearing on the nature of complexity itself.


> I blow.up a car. Complexity is destroyed.

Wouldn't the blown-up car be more complex than the non-blown-up car?


In that case complexity is created, thereby proving my point anyway.

Typically I define complexity as a low-probability macrostate, meaning low entropy. So debris generated from the explosion has a high probability of randomly occurring, while a new car has a very low probability of randomly occurring.

Following this definition you arrive at a formal definition of complexity that is intuitive.

Imagine a tornado that randomly reconfigures everything in its wake. The more complex something is, the less likely the tornado is to reconfigure everything into that thing. So it is very likely for a tornado to reconfigure things into debris and extremely, extremely unlikely for the tornado to reconfigure everything into a brand new car. A tornado reconfiguring atoms into a car seems to be impossible but it is not; it is simply a low-probability, nearly impossible configuration.

Therefore the car is more complex than the exploded debris.

Think about this definition, because it also aligns intuitively with the effort required for and technical complexity of any object. The more effort and technical complexity an object embodies, the lower the chance of a tornado randomly reconfiguring atoms to form that object. Thus that object is more "complex".

Whatever your definition is, the quote saying complexity cannot be created or destroyed is, from basically any perspective, usually not true. If you want to scope it to just microservices and monoliths, it still doesn't make sense. Who's to say that when converting a monolith to microservices the complexity remains conserved? Likely the complexity rises or lowers a bit or a lot. Complexity doesn't remain the same even if you use informal and inexact fuzzy definitions of complexity.


> So it is very likely for a tornado to reconfigure things into debris and extremely extremely unlikely for the tornado to reconfigure everything into a brand new car.

Is this not a linguistic sleight of hand? There are billions of trillions of states we label with "debris" but only a few thousand we would call "car". So a specific state of debris, then, is equal in complexity to a car?


The technical term for this is macrostate. It is not a linguistic sleight of hand.

It is literally within the formal definition of entropy. Debris is a high-entropy macrostate, while a car occupies a lower-entropy macrostate. There are fewer possible atomic configurations for cars than there are for debris. Each of these individual configurations of atoms is called a microstate.

A macrostate is a collection of microstates that you define. Depending on the definition you choose, that definition has an associated probability. So if you choose to define the macrostate as a car, you are choosing a collection of microstates that has a low probability of occurring.
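For the record, this is just Boltzmann's entropy formula, with Omega the number of microstates in the macrostate:

    S = k_B \ln \Omega

A car corresponds to a tiny Omega (few configurations count as "car") and debris to an astronomically large one, so the car is the lower-entropy macrostate under this definition.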

The second law of thermodynamics says that systems will, over time, gain entropy, meaning they naturally progress to high-probability macrostates. So in other words, complexity is destroyed over time by the specific laws of entropy.

https://en.m.wikipedia.org/wiki/Entropy_(statistical_thermod...

The reason why this occurs is straightforward. As a system evolves and randomly jiggles over time, it trends towards high-probability configurations like "debris" simply because that configuration has a high probability of occurring. Generally, the more billions of microstates a macrostate contains, the higher-entropy that macrostate is.

Through logic and probability and the second law of thermodynamics we have a formal definition of complexity and we see that complexity naturally destroys itself or degrades with time.

This is the thing that confuses people about entropy. Its definition is a generality based on microstates and macrostates you choose to define yourself. It's similar to calling a function with generics in programming, where you choose to define the generic at the time of the call.

But even within this generic world there are laws (like traits in Rust or interfaces in C++) that tell us how that generic should behave. Just like how entropy tells us that the macrostates we define will always trend towards losing complexity.

The heat death of the universe is the predicted end of the universe where all complexity is inevitably lost forever. You can define that macrostate as the collection of all microstates that do not contain any form of organization.


> There are fewer possible atomic configurations for cars than there are for debris.

That doesn't really line up logically. :/


You're mistaken and your intuition is off. It lines up absolutely.

Debris is almost any configuration of atoms that are considered trash or unusable.

There are many more ways you can configure atoms to form useless trash than there are to make cars. Case in point: "you" can manufacture "debris" or "trash" by throwing something into a trash can. Simple.

When's the last time you manufactured a car? Never.


Wow. You're talking atomic scale stuff, but ignoring obvious things like oxidation occurring during an explosion. :/

Seems like a case of you including and excluding things to support your "logic". Ugh.


Thanks. I was vaguely familiar with physical entropy from earlier but this answered multiple questions I've had but not dared ask before!


A good rule of thumb is that if you’re starting a new project and immediately implement microservices, you’ve already failed.

Never seen it work, don't think I ever will. The only way microservices can ever work is if they're extracted over time as the monolith gets fleshed out, likely over the course of years. And even then they're probably a bad idea.


I think that splitting things can be a good idea once in a while. The important part is that you only make a new service because some job fits being its own service.

The microservice syndrome begins once you start splitting services simply because you arbitrarily declare them too large.


But you can make a "new service" in code, in the same binary, communicating through memory. You can then split it out into a separate binary when the network and serialisation overhead is worth it. 80% of the time that moment will never come.
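A sketch in Java (hypothetical names) of what I mean -- the call site never needs to know whether the boundary is in-memory or remote:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The "service" boundary is just an interface for now.
    interface InventoryService {
        int stockLevel(String sku);
    }

    // Today: same binary, plain method call, zero serialisation.
    class LocalInventoryService implements InventoryService {
        private final Map<String, Integer> stock = new ConcurrentHashMap<>();

        public int stockLevel(String sku) {
            return stock.getOrDefault(sku, 0);
        }
    }

    // Later, if the overhead is ever worth it: same interface,
    // backed by an HTTP client to the extracted service.
    // class RemoteInventoryService implements InventoryService { ... }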


Microservices are not only "function calls over TCP" though. There are other concerns, such as a database per service vs a single shared database. There are also security implications that you don't have with a monolith. Designing it that way from the beginning could easily be as complicated as making them separate services.


I wouldn't call that microservices.


> because some job fits being its own service

Determining this at all takes skill. Determining it that far ahead of time requires so much skill and foresight that suggesting that people try is bad advice.


Or... if you're working in an organization that already has a Microservice based infrastructure in place.

Otherwise, I generally agree... I'll usually take a monolith approach and break things off in ways that make sense. Usually starting with long-running processes that can simply be workers off of queues. Sometimes potential bottlenecks with higher compute overhead, such as passphrase hashing and comparison, which is relatively easy to DDoS, but if broken off only affects new logins and password changes.


Isn't passphrase hashing something that should be heavily rate limited?

In order to DoS your typical site through passphrase hashing you would need to:

- have a ton of valid usernames/emails of accounts that need to be checked (because a typical password check will rate limit by account)

- send in a massive torrent of traffic from a ton of IP addresses (because a typical password check will be rate limited by IP, even more than typical IP-based rate limiting)

While this is not impossible if you had those resources, it still might be easier to just DoS the site through standard pages/endpoints by sheer traffic.
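For illustration, both layers are just per-key counters; here's a toy fixed-window version in Java (a sketch -- real setups would use something like a token bucket in Redis):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Fixed-window limiter; key = account id or client IP.
    class RateLimiter {
        private final int maxPerWindow;
        private final long windowMillis;
        private final Map<String, Window> windows = new ConcurrentHashMap<>();

        RateLimiter(int maxPerWindow, long windowMillis) {
            this.maxPerWindow = maxPerWindow;
            this.windowMillis = windowMillis;
        }

        boolean allow(String key) {
            long now = System.currentTimeMillis();
            // Start a fresh window if none exists or the old one expired.
            Window w = windows.compute(key, (k, old) ->
                old == null || now - old.start >= windowMillis
                    ? new Window(now) : old);
            return w.count.incrementAndGet() <= maxPerWindow;
        }

        private static final class Window {
            final long start;
            final AtomicInteger count = new AtomicInteger();
            Window(long start) { this.start = start; }
        }
    }

    // Layered as above: strict per-account, looser per-IP.
    //   perAccount = new RateLimiter(5, 60_000);  // 5 tries/min/account
    //   perIp      = new RateLimiter(50, 60_000); // 50 tries/min/IP
    //   boolean ok = perAccount.allow(user) && perIp.allow(ip);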


Correct, microservices don't magically prevent DDoS attacks. They can actually make things much worse.


I've never done it myself, but it seems startups are successfully deploying using, for example, Auth0. Outsourcing auth like that is using microservices, right?


Kinda... it's breaking off the domain into a different service (SaaS in this case). I usually separate auth out even for localized authentication, as stronger passphrase hashing is a relatively low bar for DDoS attacks in practice. If hashing a passphrase takes 0.5s of a single core, then it wouldn't take THAT many authentication requests to overload a system... and even then, if you use other mitigations, that can have its own design overhead.
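Back-of-the-envelope with that 0.5s figure, assuming an 8-core box:

    0.5 s/hash per core  ->  2 hashes/s/core
    8 cores              ->  ~16 login attempts/s saturates the box

A trickle of traffic that no other endpoint would even notice can pin the CPU.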

So I generally try to design with a separated authentication from the beginning. I'll start with a "dev" auth service that will just have a form you can fill out whatever you want for the "user" and permissions, then sign a token to get into your local/dev environment application. From there it's really easy to create a more robust system or wrappers for external systems (Auth0, Okta, Azure AD, etc).

In general, however, microservices are about completely separating a given context domain from other services in a larger system. Account Management is separate from Assets, etc. In practice this means you will have a much more complex orchestration, deployment and communications system in place, often with some sort of distributed RPC, queue or other layers on top. You also have to be much more concerned with communication between teams, and with service versioning and availability.

This complexity isn't less complex; it just shifts the complexity, which can help with larger organizations, but for smaller teams it can really bottleneck everything. This is why the general consensus is to start with a more monolithic codebase designed to still scale horizontally, then break off systems as the need arises.

The exception being if you are in an organization that has already paid the overhead/debt of setting up for microservice orchestrations.


That's a different thing really. It's like using sendgrid.


Can't we all just go back? Seriously every system I've worked on in the last 10 years seems worse in every metric than what I worked on 2000-2010.


There is no going back. The industry has seen a 5X raise in money and people.

It's only natural that a job that used to take 2 now takes 10 people because of those facts.

And I agree. We've traded safety and a few other things that could have been ironed out in exchange for massive gridlocks, headaches and colossal facepalms. Performance is only on par if you consider the hardware improvements; it's crazy.


If you can have a monolith for $500/month with a team of 2, or a distributed system for $50,000/month with a team of 50, what manager in their right mind would choose the former? Zero prestige in that.


In the past and in other industries you'd choose the former because you'd get hit hard if your manager realized what you were doing. But in the easy-money FAANG world your manager is usually playing the same game, so they actually prefer that you go with the latter.


I agree. These decisions are not made economically, they are made politically. And the big complicated thing is far more prestigious.

Then let's say half of our profession has less than five years of experience (though I would guess it's closer to two years) -- they see what the elders do and copy that, like any good apprentice would. Suddenly, it's all they know.

Wild conjecture, of course. I would like to see it studied!


Not just managers, but the senior engineers too. You can aggregate all the inter-service communication and say that you're managing a 1000 QPS service mesh rather than a meager backend with 10 QPS.


Not to mention the resume padding technology.

You can no longer claim to have expertise in OpenShift, Redis, Kafka and RabbitMQ and machine learning from having implemented a todo app :-(


Recently turned off a service that ran at 40,000 QPS


The first thing I hear from the tech lead/CTO or whatever when I walk into a company is "we are going to use microservices", then a lot of Kubernetes, SQS, Lambda, etc. And THEN they talk about what the project is. I didn't manage to win them over to a monolith for a fraction of the money and time; it was a huge fail; $20m burnt on engineers and AWS, and the end result was a painful burning pile of nuclear waste. They had money to burn so they are still alive, but I should've walked out after the CTO pushed microservices past his team (his team + my team), who said it was nonsense in this case (it's nonsense in all cases, but whatever, I cannot prove that).


Even for a tech lead the former is definitely very bad for your resume. You pretty much kill your own career.


On a vaguely related note, but it still shows how the world works. Back in 1992, my Eastern European computer science teacher told us a story. He used to work as a programmer, doing "ERP" software for communist companies (state-owned, of course). The software would run on machines featuring a Z80 CPU with the CP/M operating system. Anyhow, after the change of regime they got 386 PCs and rewrote the software to run on them. The performance gain was staggering but... users (directors of said companies) complained. The software would cost a lot but seemingly would not "work hard" for it. On the old machines it would take a lot of time to process anything; it was obvious the software was "working hard". On the PCs, the same operation which would take minutes or hours would complete instantly. Obviously the job was too easy and thus much overpriced.

Faced with this serious problem impacting sales, the programmers thought about it and eventually found a solution. They added... SLEEP instructions. All over the code, in key points, sleep(seconds), calibrated so the response time was about the same as on ye olde CP/M machines.

Customer reaction: completely satisfied.

... I think we can draw some conclusions on the universality of human stupidity and incentives to act that way.


This shows what was missed in those optimistic views of the 1930s, like John Maynard Keynes predicting that his grandkids would work just 15 hours a week.

Slaves don't trade productivity for their lives, they trade time.

No matter how many trillions of times more productive human activities become, slaves' time will still belong to the masters, and very little of their life will belong to them.


[flagged]


So using the "wage slave" rethoric is punished now by the Thought Police. Happy happy thoughts and be greatful for the opportunity to trade our lives for a measly pay working for the man.


It's just kinda crap. It's not the same bloody thing. FWIW, I agreed with the general point but disagreed with the language.

I commented to make the OP aware, as I always dislike being downvoted with no explanation. Shrug, it's not like I can pay my mortgage with HN karma.


"Slavery" is a powerful word, understood universally, with high chances of proving the point and making an impact when used.

Your censorship of its usage may have been done with good intentions, but the results are nefarious.

Does it seem to you that we are living in some Nirvana / restored Heaven Of Humanity? Take a look at this; wage slavery makes up 25% of work classifications: https://www.nirandfar.com/wage-slaves/

Confining the word "slave" to some well-in-the-past primitive dawn of humanity and punishing us for using it to depict the deplorable state of humanity today does nothing for improving said state and makes you in the best case one of the "useful idiots" of the establishment ( https://en.wikipedia.org/wiki/Useful_idiot ).

How do you define a "slave"? Think of this definition: "a slave is any person that trades their time in exchange for the right to live". Does it seem to you that Elon Musk is a slave? How about someone working on a computer 1,000,000 times more powerful than 30 years ago and going through even more drudgery, with less stability and less guarantee of making a living out of the productivity increase?

Is it really the best that civilization can throw at the versatility and capability of the human mind? Fixing bugs in the 100-millionth line of crap framework code, with the prospect of being replaced by a statistical model? That's the glorious "non-slavery" world you censor the word for?


What do you mean zero prestige in that? Do you know the difference in salary, importance and status in managing a team of 50 vs one of only two?


I think you misread GP. That was the point they were making.


Because we can't definitively prove which one is better.

There is no formal theory around system design so you get people complaining about how monoliths are error prone and way too complex and you get people complaining about microservices with the exact same argument.

The result is that with a period of about a decade the industry just oscillates between the two methodologies in an endless flat circle.

Monoliths will come in vogue again in the 2020s.


Yeah. I think what it boils down to is inexperienced developers and shortsighted product managers are capable of bad design choices in any paradigm. And handing them a new paradigm that's en vogue will all but guarantee they'll mess it up.

Perhaps the real crux is that large teams don't build good software. Especially not from the ground up.


Fun part is there is no “better” in generic terms.

Another fun part is that people on the internet complain without merit. I cannot judge some project because I usually don't know the project. It might well be that the team, monolith or not, was having high attrition, or there might be many other things that would affect the project.


>Fun part is there is no “better” in generic terms.

No. It goes further than this. We actually don't even know if one is better or not.

There is no formal theory that allows us to prove these designs. For example, we have formal theory behind algorithmic complexity that quantifies runtime cost, but no such theory exists for engineering design patterns. We aren't even sure what to quantify.

But just because a formal theory doesn't yet exist doesn't mean one can't be developed in the future. Until then these system designs will be like history... always repeating itself.


We will be going back. After the last binge, torching through piles of cash to do resume-driven development, companies are now looking to be lean. You can't really survive if you are doing microservices and trying to actually build something with a team cut by 75%.

I am already seeing this newly discovered love of simple, fast monoliths and the developer ergonomics they offer. What is old is new again - we are at the beginning of this cycle. The older generation learned that "complexity kills", the new generation is beginning to get it as well.


All the laid off tech workers disagree


People doing development/marketing/sales of software for money will not go back, but end users who can write software could go back.

The systems can be worse because of what other commenters mention, the financial incentives, but also the companies profiting from worse systems must ensure that expectations are gradually lowered amongst people who write software and that these peoples' "skills" are progressively dumbed down. That is how they ensure that status quo is maintained and that people will never "go back".

Meanwhile, hardware and networking improvements have been amazing.


Agree. Very few systems benefit from the increased complexity and decreased productivity.

DHH has a couple of posts on the topic recently: https://world.hey.com/dhh/how-to-recover-from-microservices-...


Rare case of DHH being cogent, I wonder if this was a result of him rediscovering Traefik and Docker Swarm?


He's always been pushing monoliths.


Then I ask forgiveness for ever doubting him. Can he help us turn back from this Cloud Hellscape?


He is attempting it! I've been following his journey moving his apps out of the cloud, and I'll just say it: I want to try some of the deployment tooling he's making as a result.


Kubernetes makes Websphere 5 development feel refreshing.


HN could be a little less pessimistic. People aren't choosing microservices merely because of the hype.

Here's why I'd choose microservices for a large project:

1. People don't produce uniform code quality. With microservices, the damage is often contained.

2. Every monolith is riddled with exceptional cases in a few years. Only a few people know about corner cases after a few years, and the company becomes dependent on those developers.

3. It's easier for junior developers to start contributing. With a monolith you'd need to be rigid with code reviews, whereas you could be a little lax with microservices. Again, ties into (1) above. This also allows a company to hire faster.

4. Different modules have different performance and non-functional requirements. For example, consider reading a large file. You don't want such an expensive process to compete for resources with say a product search flow. Even with a monolith, you wouldn't do this - you'd make an exception. In a few years, the monolith is full of special cases which only a few people know about. When those employees leave, the project sometimes stalls and code quality drops. Related to (2).

5. Microservices have become a lot easier thanks to k8s and docker. If you think about it, microservices were becoming popular even before k8s became mainstream. If it was viable then, it's a lot easier today.

6. It helps with organizing teams and assigning responsibility.

7. You don't need super small microservices. A microservice could very well handle all of a module - say all of payments (payment processing, refunds, coupon codes etc), or all of authentication (oauth, mfa etc).

8. Broken Windows Theory more often applies to monoliths, and much less to microservices. Delivery pressure is unavoidable in product development at various points. Which means that you'll often make compromises. Once you start making these compromises, people will keep making them more often.

9. It allows you the agility to choose a more efficient tech/process when available. Monoliths are rigid in tech choices, and don't easily allow you to adopt a different programming language or a framework. With Microservices, you could choose the stack that best solves the problem at hand. In addition, this allows a company to scale up the team faster.

Add:

10. It's difficult to fix schemas, contracts and data structures once they're in production. Refactoring is easier with microservices, given that the implications are local compared to monoliths.


I very much agree with these points.

1 & 8: This is something I've experienced first-hand quite a bit. When code quality starts slipping in service, it accelerates and inevitably leads to more hacks built on hacks. The containment afforded by service separation helps keep these hacks quarantined and minimizes the pollution and spread.

6. There's no ambiguity or question of who owns what when services/packages are separated and have distinct owners. Transferring ownership means transferring ownership of the service. In large companies with some teams working on secret projects, it's much easier to have viable workflows with service separation than with a monolith. Permissions management and so many other things are easier.

7: I think this is an excellent point. Another way of looking at this is as having multiple focused monoliths.

I'd also add that there's a benefit in blast radius minimization. Updating a library or merging in a risky change doesn't risk breaking literally everything all at once. One service can update something and you know for a fact that it can't break anything other than that service and its dependents.


Yes! Finally! People keep making the assumption that going for Services/Microservices is merely technical. It’s almost all about people and organization.

Point 7 is the most important of the technical considerations: just make your services big enough to make sense as a separate unit and small enough to not be another unchangeable monolith. Yes, it's not sane to have 300 Lambdas that add one number to another talking to each other over the network, so just don't do it.

Microservices the “Netflix way”, with the huge graph of services, gives a bad name to the idea of factoring out modules into independently deployable parts, which always made sense. Kubernetes just helps with deployment, but how coarse or fine you factor is on you.


For "large projects" there are merits to microservices, but large projects are the exception - most projects aren't large. Unfortunately, a few years ago the consensus seemed to be that microservices should be applied to every project size, because they were always better - which generated so many problems.

The simplified view I hold is this: if your main problem isn't a lack of engineers, but how to split work between hundreds, or even thousands of engineers, then microservices make a lot of sense; that's true at FAANG and some other companies. If your main problem is how to get the maximum out of a limited number of engineers you can afford to hire, then microservices is not the solution. The latter is by far the most common case.


The truthfulness of this comment depends very heavily on four words: "for a large project". This is the problem with the microservices hype. It is so very easy to ignore this four-word restriction and start applying them everywhere. As such, your post is a contribution to the problem and not the solution.


Microservices are rebranded SOA.

We weren't doing Sun RPC, PDO, CORBA and DCOM everywhere for fun, yet with some rebranding, SOA is suddenly the solution for everything.


I prefer the SOA terminology because as the post you're replying to says, you don't need to make these services too small, whereas "micro" implies that they are.

Monoliths vs microservices is really a spectrum.


I fully agree with that.


tldr:

Microservices trade design complexity for operational complexity, to address the shortcomings and patterns of the modern IT sector, at both worker and management levels.

This was an excellent choice for consulting (companies like ThoughtWorks, who were pushing microservices), as it optimized their workflows and bottom line: the answer to every client's question of "what's the architecture?" was "Microservices". The companies pimping m.s. were internally optimized for this architecture - running a consulting shop where the time between "here is our project" and "here is our proposal" approaches zero is profitable.

And the best part -- just like all those "GPT-processes" now being pushed by companies built around an API -- is that by the time clients encounter the operational complexity of running microservices, the consultants have long left the building.


I would argue microservices trade 100% design complexity for 100% operational complexity plus 20% design complexity. Every time I saw a microserviced design, I always wondered how they would debug it easily and re-operationalize it. Sometimes it is better to let the entire system die and spawn it back instead of having something running to make it look like it's alive.


Or more than 20%.


> Microservices trade design complexity with operational complexity

Yup!

And it annoys me so much how architects/consultants get away with just proposing 'microservices' without being held responsible for all the increased operational complexity. It's left as something for devops to figure out later.


I think the first step toward sanity is to stop factoring services by team sizes - “we have 100 people so require 20 microservices”.

Instead, factor services along natural fault lines. These are areas in the solution that scale differently from other parts and can tolerate communicating over http or message queue.

It is fine to have lots of people work on a single service. We compose things using 3rd party libraries all the time. Just treat internal code a bit more like 3rd party libraries.


“we have 100 people so require 20 microservices”.

More like 3 people requiring 13 microservices


I'm convinced that micro-service hell happens when the primary app calcifies and becomes too difficult to modify.


Micro-services are currently on trend, so hard to argue with unless you are a founder. Maybe the best thing to do for the industry in general is go absolutely nuts with microservices. Factor projects so every method is a separate service/lambda. If anyone complains, use the single responsibility principle to snuff out all opposition.

Once we've explored this to its logical conclusion the pendulum will swing back to something more logical. Of course, it will swing too far and the cool kids will only want to work on monoliths even when it doesn't make sense (kind of like what is happening now with SQL maximalism).


I got tired of arguing against microservices just like I got tired of arguing against SPAs and multi-repos.

And at least with one of them, the pendulum has already started swinging back. The cool-kids (aka "javascript hipster devs") have re-discovered the concept of a monorepo and are pushing for it hard, making it seem like some sort of new concept with fancy tooling. It'd be funny if it wasn't so caricatured and a sad state of our industry.


>> re-discovered the concept of a monorepo

Monorepo and monolith are somewhat orthogonal. The desire to use a monorepo likely stems from wanting to emulate Google.


What are the specific decisions along this course of events? I guess nobody says "Our primary app has calcified and become too difficult to modify. Let's make microservice hell!"

But what do people say that leads down that road, in your experience?


Pretty sure this would be fairly common:

    Welcome to your new job as Architect for our software!

    What are some of the main bullet points you have in mind
    for our next major release... ?


Micro service hell happens with new apps that are designed from scratch.


You're right that factoring into services should be based on natural domain boundaries. That said, it feels like a bit of a strawman to suggest that people are driving their architecture with naive math like this. I've definitely seen journeyman engineers coming out of FAANG and other big companies proposing overly complex service topographies, but there is always at least a veneer of semantic justification.

That's not to say there's no relationship between team size and the applicability of a service-oriented architecture. Microservices are a way of drawing hard boundaries around blocks of logic. These boundaries come with cognitive and operational costs, so they represent significant overhead, but they are a tool for abstracting both logic and physical operations to the maximum degree possible (100% is never possible for a single application). In order to get a net benefit, you have to have enough engineers that they can be experts in a subset of services, and the interfaces to their peer services have to be reliable and stable enough that they can be productive without knowing anything about their internals. So while I agree with you that there is no universal floor on the number of microservices that makes sense based purely on team size, there definitely is a ceiling.


>> there definitely is a ceiling

Is there though? Operating systems have 10K - 20K devs based on a casual internet search. Of course there are many services / processes but they are not communicating using an http API or message queue. Similarly, Postgres has 350 people. I don't think team size is a factor at all.


I was thinking of common server-based applications serving millions of clients. Obviously lower-level applications that must run on a single box need different techniques to encapsulate logic and enable scaling of their teams.


Agree. However, this seems to be pointing back to the problem again, not team size.


Could you elaborate on what you mean by "natural fault lines"? The rest of that paragraph seems to refer to performance -- is that the criterion? Do the natural fault lines shift if you manage to optimise the performance of a component, so that it starts scaling at the same rates as its neighbours?


It is hard to say without understanding the system. In my own case it is auth, output (1D,2D,3D), logging and sim but that is meaningless outside of my application's context.


> "we have 100 people so require 20 microservices"

Is this something that actually happened? Not heard from some third party - actually has someone experienced this as a decision in a technical team? It seems... unlikely.


It's usually a consequence of something like Bezos' API mandate https://api-university.com/blog/the-api-mandate/

Multi-team ownership of a single deployable has downsides to trade off against the complexity of more services and harder boundaries.


That doesn't translate from the mandate, which is what I'm trying to cautiously call out. There's a big difference between "our company will not have a god database" and "we need to make more services just because we have more people". The link also mentions that the mandate was "something along these lines" - we don't know exactly what the split rules were.

I think people bring up an exaggerated idea of extreme microservices being forced on people when it rarely if ever happens. If not, I'm happy to hear actual first hand experiences.


It does if there's an upper bound on the number of people in an effective team.


It's not "there's an extra person, now we need to create a new service". It's more likely "we need this extra service and that means some people will need to take care of it". It's an inverse of what the comment I'm disputing said.

Anyways, it's still not an example of this happening. It's a recollection of a letter that maybe implies this result.


This is an application of Conway's law. If we want small five-person teams then there are 20 teams and it makes sense to simplify communication by decomposing the system into 20 parts.

(Not saying it's the right way to do things, just the natural way to optimise if you need to keep 20 teams busy working on one system. I think the incorrect assumption is that all 20 teams need to be busy doing the same thing.)


However, decomposing the system into libraries is just as effective from this perspective as decomposing into services. The thing a typical HTTP API gives you is stability (documentation, versioning, and a deprecation policy), and that can be done with a library as well. We know this works because third-party libraries work well with no communication with their authors needed.

This is why I suggest team size is not a valid reason to break a problem into microservices. The problem itself is the thing to analyze.
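To make the library point concrete, here is a minimal sketch in Python (the billing example and all names in it are made up): a team-owned library can publish a documented, semver-versioned interface that consumers pin and call in-process, exactly as they would a third-party package, with no network hop.

    # billing_lib.py - a hypothetical team-owned library with a stable,
    # documented public interface (all names are illustrative).
    from dataclasses import dataclass

    __version__ = "2.1.0"  # semver: breaking changes need a major bump
                           # plus a deprecation window, like an HTTP API

    @dataclass
    class Invoice:
        customer_id: str
        amount_cents: int

    def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
        """Public entry point: the in-process equivalent of a versioned
        HTTP endpoint, minus the network hop."""
        return Invoice(customer_id, amount_cents)

A consumer team pins the major version and needs no communication with the authors, which is exactly the property the HTTP boundary usually gets credit for.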


I personally know of a two-person team that claims to have over 100 microservices. I think most are just Lambda functions, however.


Sure, but the question was - are they forced to split new microservices just because they have more people? Or is it just how they decided to split up the app in a logical way.


You know, I'm pretty sure you could build a PHP monolith in 2023 without a framework and it would do what it needed to do.


Bad coding is infinitely worse than a “bad” language.

I don’t know how many of you are musicians, but many people are surprised to find that the tone quality of a guitar has way more to do with the person playing it than the way it was built.


Bad languages, more than anything else, frustrate your expectations and encourage developers to do things that are difficult to maintain. Sure, a great basketball player can dribble with gloves on his hands, but he’s making the task harder on himself, and anyone who’s less than great is less likely to be able to hold on to the ball at all in that case.


A great player can make a Squier sound like a Studio Strat. Indeed. The type of strings makes a difference too.


What do you mean "the type of strings"? In pretty much every programming language, String is one of the most fundamental types. There's no "different types of string".

(yes, this is a joke, a bad one at that. badum tish)


When I worked at Google, they had their own string type in place of std::string for various reasons.

Python has byte strings, Unicode strings, format strings, regular expression strings, and probably a few others!

Rust has str and String, depending on ownership.
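A quick illustration of the Python flavors, since they're all just literal prefixes over two underlying types (str and bytes):

    text = "naïve"           # str: Unicode text
    data = b"\x00\xff"       # bytes: a byte string
    name = "world"
    greet = f"hello {name}"  # f-string: formatted at runtime
    pattern = r"\d+"         # raw string: the usual way to write regexes

    assert greet == "hello world"
    assert text.encode("utf-8").decode("utf-8") == text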


Honestly, the difference between basic types like String in different contexts, in different languages and DBs, is enough to give me PTSD, lol.


> Rust has str and String, depending on ownership.

Why I hate Rust


C++ has string and string_view. Python has memoryview. It seems like a necessary evil if you don't want folk working in raw pointers?
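Python's memoryview is exactly that bargain, for example: a zero-copy slice over a buffer with no raw pointers in sight. A rough sketch:

    data = bytes(range(256))        # some underlying buffer
    view = memoryview(data)[16:32]  # no copy: a reference plus offsets

    assert view[0] == 16            # reads go through to the original bytes
    assert len(view) == 16
    copied = bytes(view)            # a copy is made only when you ask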


In some languages it's two or more of the fundamental types. Obviously you haven't been using the big brained languages like Rust, Haskell, or Elixir.

(Also a bad joke.)


Obviously you haven't seen the Substring [1] type of Swift.

[1] https://developer.apple.com/documentation/swift/substring


But this is a slice. So technically a []char?


Not really. It's only a reference to the original string, a start index, and an end index. It doesn't store a second (partial) copy of the string.
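A toy sketch of that shape in Python (not Swift's actual implementation, just the idea):

    from dataclasses import dataclass

    @dataclass
    class Substring:
        base: str   # reference to the original string, not a copy
        start: int
        end: int

        def __str__(self) -> str:
            # a partial copy is materialized only on demand
            return self.base[self.start:self.end]

    s = "hello world"
    sub = Substring(s, 6, 11)  # stores (reference, 6, 11) and nothing else
    assert str(sub) == "world"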


there's String, string, std::string, and the satanic char*. Just like there's Slinky, Super Slinky, Extra Slinky, Heavy, and Pro. Nickel-wound (ref counted), steel-wound (raw array), and classical nylon (a fucking pointer).


What are gut strings then? Indexed addressing?


lol, those would be raw pointers as well, but at the asm level with registers. Joking aside, gut strings are extremely rare outside of professionals. I'm not surprised to see them mentioned with this crowd. Cheers for tickling this music nerd's interests.


Meh, "text" is where it's at. "varchar()" is so Oracle. :p


Surprising! Will look into it.


Can confirm. judyrecords is built on a nearly framework-less PHP monolith + 2 search servers. Searches complete within 100 milliseconds, and SRPs return in less than 15 milliseconds on average. It runs a trimmed-down version of CodeIgniter 2 (released in 2012), updated to run on PHP 8.2. Still got the CI 2 profiler.

https://postimg.cc/TLbvvSzm

https://news.ycombinator.com/item?id=30481230


I wonder how many people did their own CodeIgniter updates. I’ve also almost completely transformed the thing, but the base is still CodeIgniter.


Curious, if you don't mind sharing, what site/app!?


A question for every 100 dynamic websites: how many can truly outgrow PHP + SQLite + jQuery?


I get your point on PHP and jQuery, but why SQLite over Postgres or MySQL? Is SQLite just easier to set up?


It's also entirely self-contained and doesn't require a separate daemon.
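For example, in Python the entire "database server" is the stdlib plus one file (PHP's pdo_sqlite driver is the same idea):

    import sqlite3

    # No daemon to install, configure, or monitor: the database is a file.
    con = sqlite3.connect("app.db")
    con.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)")
    con.execute("INSERT INTO posts (title) VALUES (?)", ("hello",))
    con.commit()
    print(con.execute("SELECT title FROM posts").fetchall())
    con.close()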


Yes, I recently built a small PHP example project for educational purposes, and I intentionally avoided all external dependencies to demonstrate how you would solve things without a framework or library. You can do almost everything you need with plain PHP without too much effort, with a few exceptions that you shouldn't waste your time on. Most of those are tooling rather than part of the application itself - unit testing, error tracking and logging, database migrations - and for those you of course use libraries.


Sure, but framework-less PHP incentivizes poor architecture (hey, we can do everything in 1 file to KISS, x 100 endpoints = microservice hell). It requires an expert and disciplined engineer to execute well.


Sure you could, but it would take a much longer time and you'd miss lots of functionality. How long would it take you to build user registration / verification / password recovery / 2fa / sso / profile editing / spam registration prevention / etc. ? How long would it take instead to include some Laravel module which does all of that? If you're building a service with some purpose, do you really want to spend days writing the first option instead?


How many hours spent on battling Laravel to do what you want? How many hours spent on upgrading Laravel versions?

A framework solves a general problem, but we are trying to solve a specific problem, so there will always be a mismatch. Frameworks usually paper over that with a combination of magic and opaque abstraction layers.

Frameworks work best for short-running projects; long-running projects tend to outgrow the framework.

Prefer libraries over frameworks.


That works better for some things than others. For example, user management is hard to do as a library, because now you need to hook up all those actions and pass the data to them manually. Then make sure you handle all the headers correctly. Then handle the sessions/invalidation yourself. Potentially implement your custom authz as well, because that's also happening throughout your app. There's still a lot of glue code involved here - it may be worth trading it for the occasional Laravel upgrade.


Kind of depends what you’re building.


100%; we do that (see profile).


PDO databases are pretty nice.


Now that PHP has lost its massive deployment advantage, step up to Rails or Python.


Rails and Python aren't a step up from PHP for web servers. Both are massive steps down in terms of real-world performance: https://www.techempower.com/benchmarks/#section=data-r21

Personally, if I were "stepping up" from PHP and wasn't going to use something powerful (or didn't need all that power), I'd be stepping up into JavaScript and Next.js and stuff like that.


But PHP is blazing fast compared to either of those (hot code ready to accept traffic in all scenarios so we can ignore warm-up times)


I hear about Python all the time. Rarely hear about Rails now. I think the world picked one over the other.


I was at RailsConf 2022 and it was packed, even considering the strict covid rules. There were lots of people there from teams transitioning from React+Go to Rails.


GitHub & Shopify both run substantially on Rails.


Werner's advice in my own words:

1. Don't pick an architecture because it's all the rage right now.

2. Don't pick an architecture that mimics your org's structure. Aka don't fall prey to Conway's law: https://en.wikipedia.org/wiki/Conway%27s_law.

3. Don't pick an architecture that your team can't operationalize--e.g. due to lack of skills or due to business constraints.


> Aka don't fall prey to Conway's law

You can't not fall prey to Conway's law. You can only choose how you organize your people and their interfaces.


You can't do either. How a system is structured drives how the organization running it is structured; how an organization running a system is structured drives how the system is structured.

This is natural and even beneficial: it reduces communication and coordination costs, which is the costliest and hardest part of any organization or system. Perhaps that sometimes lands you in a suboptimal local minimum, but escaping that is usually quite costly in the short term. Think about how often top-down reorgs actually generate value.


Right! Decide on the shape of the software, then rearrange your organization to match.


This is probably good advice for CTOs and directors who manage software products, but it seems to imply the rest of us have to make software that already matches the political structure.


Try building software that doesn’t match the political structure. I guarantee you will regret it.

You’ll constantly be dealing with all the problems because nobody else will care.


Sounds like a way to spend a lot of your time doing “cross functional work.”


I never thought about that as a euphemism for "you're doing it wrong." Makes me rethink my priorities...


It’s not, and it can be very useful to the org. But I find it a huge pain in the ass.


Another way of saying that is to be strategic about where you go against the grain of the organization. I’ve seen that work well when it was something important where the value was high and poorly when someone was either just being dogmatic or overestimated their ability to negotiate.

A long time ago I remember hearing a good bit of advice that for a new project you should innovate the product or the tech stack but not both. I think this is similar: unless you have a lot of resources (people + political clout), don’t buck the existing org chart in more than one area if you can avoid it.


Orgs will be orgs… and I find it hard to disagree with you :(

The only counterexample to Conway that I have seen working is “away teams”:

https://pedrodelgallego.github.io/blog/amazon/operating-mode...


Conway's Law and Murphy's Law. They'll get you every time.


(Some of) Werner's advice in his own words (from the linked article):

* "I always urge builders to consider the evolution of their systems over time and make sure the foundation is such that you can change and expand them with the minimum number of dependencies."

* "There are few one-way doors. Evaluating your systems regularly is as important, if not more so, than building them in the first place."


> Evaluating your systems regularly is as important, if not more so, than building them in the first place.

This is useful advice to people who can make high level org decisions. But they don’t necessarily know what’s going on at a software level.

The people who do know what’s going on often have a very hard time getting buy in for refactoring. Many of them (rightfully) conclude it’s better to just make management happy churning out features.


Like most articles on distributed systems, this makes wild assumptions about the most important layer, i.e. the human one. I would bet $100 that this was written by the same sort of person who thinks "Managers – what do they do all day exactly?" [1]

> If you hire the best engineers....

Guess what, there is no broad consensus on what "best engineer" means. I bet your org is rejecting really good engineers right now because they don't know what Kubernetes is. The same goes for literally any other technology that has been part of a hype cycle (Java in 2001, Ruby on Rails in 2011, ML in 2011; no, the precise years don't matter).

> ...trust them to make the best decisions.

A lot of work encapsulated there in less than ten words. If you hire a bunch of people and tell them "you are the best", you think they are going to sit around and run the Raft protocol for consensus on deciding how to architect the system? No, each of them is going to reinvent Kubernetes, and likely not in an amazing way.

Microservices are often best deployed when there is a mixture of cultural and engineering factors that makes it easy to split up a system into different parts that have interfaces with each other. It has little to do with the superiority of the technical architecture.

----------------------------------------

[1] Looks like the article was written by the CTO of Amazon, which...surprises me a bit. Then again, from all accounts, Amazon's not exactly known as the best place to work; so maybe I'm right? In any case, anything written by Amazon is not directly applicable to the vast majority of small-to-medium companies.


> Managers – what do they do all day exactly?

In companies I'm familiar with projects are run by technical leads and product leads.

Managers don't interact with anyone doing actual work, which is probably for the best.


Their point on hiring the best engineers was specifically about not following the hype, and allowing engineers to engineer.

Also, they never claimed technical superiority of microservices, quote:

> For example, a startup with five engineers may choose a monolithic architecture because it is easier to deploy and doesn’t require their small team to learn multiple programming languages.


> Their point on hiring the best engineers was specifically about not following the hype, and allowing engineers to engineer.

That’s weird. Any time I’ve seen anyone let their engineers engineer, those engineers have immediately started following the hype.


I have many times seen management hire what they thought were "the best engineers" and let them completely loose for months or years straight to eventually deliver... nothing.

Literally nothing. Or at least nothing that actually works in any way.


> Also, they never claimed technical superiority of microservices, quote:

No, but they are arguing against the strawman of the perceived technical inferiority of monoliths. Look at the title of the article.

I am simply calling out that strawman as such.

If they wanted to convey the message of "It all depends, use the best architecture for the job", they should reflect that in the title.


Maybe we have radically different experiences, I don't see that as a strawman, the industry as I have seen it largely does view microservices as superior.


I can’t believe it’s news that someone said this. I thought everyone understood: you don’t try to do microservices until you have to. Before you get to that point you make your monolith modular enough that if you ever need microservices you’re prepared to break them out.


A lot of people are focused on microservices as a way to address scaling (of load, team size, etc.), but there are other reasons to choose a microservice. A pretty basic one is when an existing microservice already does what you need and no suitable module or library exists. In that case, there's no "until". Start with a microservice. There are a number of open source projects that are pluggable microservices, for example.


This is not well understood at all - and a lot of frameworks don't really lend themselves to simply splitting a piece off.

I've seen terrible spaghetti code apps where literally nothing can be refactored because of the model / view, God object dependency stuff all over the place.


This is really it. You can keep everything in a monolith, scalable and modular, by following good architectural practices.

If the need arises, you could move into SOA, and break out a couple of your larger domain modules. Continue following these same rules though.

If the need arises still, you could move into microservices and break out more or all of your domain modules. Truly understand whether you actually need this first.

But the "let's do micro services this monolith is old junk" trope, abandonwaring the codebase, building out a bunch of services without strong, fundamental domain knowledge, and then complaining when shit is expensive and fragile and broken- it's getting tiring.



Neat!

It would be awesome if you could set up a parallel “monolith-stories.com” site with the exact same content, except for inverting the sentiment scale :)

Seriously!!


Bookmarked; thanks for that link. Bonus points for sentiment analysis.


Nice, love it


"Since its launch in 2006 with just a few microservices, (AWS) S3 has grown to over 300," I would love if articles like this would give a bit more context on the size of codebase and team.

Like is it two pizzas per microservice or 10 microservices per one pizza.


> Like is it two pizzas per microservice or 10 microservices per one pizza.

Nearly spat out my coffee. Thank you for making my morning.


I think it's a reference to a team small enough to be fed with pizza at a take-out event.

Increase the number of pizzas and you get bigger teams.

Increase the number of microservices and you assign more of them to a single team.


There was a perceptive comment on HN a few days ago [1] to the effect that microservices are a useful way to package a body of code when the team consuming the code doesn't trust the team that built it. This brings to mind Conway's law – "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure" [2] – and also that meme from a few years back about the internal structure of tech companies [3]. So you can argue that monolith vs. non-monolith is not a purely tech-driven consideration.

[1] Too lazy to keep looking right now

[2] https://en.wikipedia.org/wiki/Conway%27s_law

[3] https://www.reddit.com/r/ProgrammerHumor/comments/6jw33z/int...


Microservices were a zero-interest-rate phenomenon that benefited no one but cloud service marketing teams.

As money becomes more expensive and as we inch further towards a massive economic crisis, companies that have allowed their R&D budget to bloat out of proportion with needlessly-distributed architectures are NGMI.


The monolith/microservice dichotomy is a red herring.

What even is a microservice?

There are other distinctions, borders and splits that are more important to consider.

Here's a koan that highlights a few of these considerations:

> If you run a kubernetes cluster on a single physical server, is it a monolith or is it a microservice architecture?


So how does all of that help me, when I'm seeing "microservices AWS/GCP/Azure EDA" job ads exclusively and not a single regular job ad anymore? It's all hype shit, shit, shit.


Microservices are, like Prozac, to help with a very specific problem and not without side effects. Don't do Prozac if you don't need it.


A lot of false comparisons here.

It’s not “monolith vs distributed”.

It’s “good monolith vs bad monolith or big ball of mud vs domain-driven design”

It depends on what your primary domain is, level of complexity, number and makeup of enterprise integrations, and more.

Some monoliths are very bad.

Some distributed systems are very bad.

My rundown is:

- Is it a simple CRUD system? -> monolith

- Otherwise: model it, identify bounded contexts, and proceed accordingly.


For many years to come, for better or for worse, people will point to the Prime team's blog post as the definitive proof that microservices are inferior. And instead of a nuanced perspective, it'll be used for absolutist arguments. I'm already tired of it in anticipation...


I imagine there are not many video companies running on AWS.

It's really only Amazon themselves that can afford to do so.


Netflix is one of AWS largest customers.


True, but I'd say there's zero relationship between the AWS price list and what Netflix pays.

So Netflix doesn't really count as an example of how AWS is a practical option for video hosting.


Because Netflix doesn’t do video hosting on AWS. “Running” on AWS isn’t necessarily hosting on AWS.


The whole debate is rather silly.

Just pick the right architecture for the given problem. Sometimes it's monolith, sometimes it is not. The end.


This really is the issue. When you have no informed opinion, it may be better to start with a monolith. The microservice culture was all about "let me reduce the design complexity by using microservices".


Definitely fewer footguns too. The thing that makes me lean towards microservice thinking overall is the abundance of free-tier building blocks.

Obviously that doesn't scale, but it's alluring anyway.


> My rule of thumb has been that with every order of magnitude of growth you should revisit your architecture, and determine whether it can still support the next order level of growth.

The last hyper-growth startup I worked at grew 10x in scale every 2-3 quarters for nearly 3 years, at meaningfully large scale (millions of monthly transacting users). In that time, the number of different business lines/categories and the number of functional flows and their intersecting/overlapping complexity also grew severalfold.

So, we were adding whole new things and throwing away old things and basically refactoring everything every 18 months. Without consciously realizing it, the superpower we had was our ability to refactor large live systems well. In hindsight it became clear to me that this ability hinged on a few different things:

1. A critical mass of engineers at both senior and junior levels understood the whole systems and flows end to end. A lot of engineers stayed with their own team developing strong functional-domain understanding. Similarly a good number of senior engineers rotated across teams.

2. The devops culture was extreme – every team (of 10-12 engineers) managed all aspects (dev-qa-ops-oncall etc) of building and operating their systems. This meant even very junior engineers were expected to know both functional and non-functional characteristics of their systems. Senior engineers (5-10 yr experience) were expected to evaluate new tech stacks and make choices for their team and live with the consequences.

3. Design proposals were openly shared, and critical feedback was actively sought. Technical peer reviews were rigorous. Engineers were genuinely curious to learn, ask, understand, and challenge/debate things. A strong emphasis on first-principles thinking/reasoning and on actual end-to-end problem-solving, without being territorial or dogmatic, was strongly encouraged, and the opposite was strongly discouraged.

4. Doing live-migrations – we mastered the art of doing safe live migrations of services whose API schema or implementation was changing and of datastore whose schema or underlying tech was changing. We had a lot of different database tech migrations – from monolith SQL dbs, to NOSQL clusters to distributed SQL dbs and their equivalent in-memory dbs and caches.

Surprisingly, the things we didn't do so well but didn't really hurt our ability to refactor safely were:

1. Documentation – we had informal whiteboard diagrams photographed and stuck in wiki pages. We didn't have reams and reams of documentation.

2. Tests – we didn't have formal, rigorous test coverage. We had end-to-end tests for load testing, and we had a small manual QA team doing end-to-end integration testing for critical flows. These came about a bit later, and trying to scale them effectively proved very challenging, but they were not seen as hurdles for doing refactors.

3. Formal architecture councils and formal approval processes – we didn't have these. Instead we had strong people-to-people connections and strong team-level accountability – culturally, people owned up to their mistakes and did everything they could to fix things and do better next time. Humility was high.

Later, I worked at a large, mature company with very large scale – and everything was exactly flipped on all the points above: any major refactor was a serious pain, and migrations took forever and never actually completed. The contrast was very eye-opening, and in hindsight it made the contributing factors above clear to me.


"Cloud computing is not a religion" could be another good title.


For context, this is most likely a response to DHH's post [1], where he came down heavily on AWS's serverless offerings.

He's been ranting against the cloud too.

[1] https://world.hey.com/dhh/even-amazon-can-t-make-sense-of-se...



