How microservices are still a default systems design architecture in anything but the largest orgs puzzles me.



I feel the same way about most cloud-native services.

Sure, Lambda is fine for that small app, but I once inherited a $100k/month mess of SQS, Step Functions, API Gateway, Cognito, Lambda, ECS, AppSync, S3 and Kinesis that made me want to go into carpentry.

It wasn't simple, it wasn't quick to make, it wasn't cheap, it wasn't fast, and no: it did not scale (because we reached the limit of Step Functions).


Unless you've asked for a limit increase _multiple_ times, I can guarantee you haven't reached the limit of Step Functions.

The default limits are _very_ conservative in large regions.

(Admittedly, by the time you've asked for those limit increases you should probably reconsider what you're doing, you're bleeding $$$ at this point)


When I was growing up there was a shop a couple towns over that didn’t have better prices than the local one, but the owner ran discounts that made people feel good, and so they got suckered into driving a half hour away to get bilked. Even my dad, who initially complained.

Feeling like you’re getting a special deal overrides objective thought. There’s a bunch of this stuff in AWS and it all feels dirty and wrong.


I think the complexity literally brings in more money for the hyperscaler.

Serverless monolith ftw


Serverless monolith gang checking in!

I wrote a bit on how to achieve this with .NET (but probably applicable to many other frameworks/runtimes): https://chrlschn.dev/blog/2024/01/a-practical-guide-to-modul...

(It's inspired by the Google paper, but obviously a much simpler implementation appropriate for most non-Google scale teams)


Conway's Law was written about 57 years ago.

Theoretically, microservices allow for each team to deploy independently, so the choice is made up front, before any part of the system is designed, because it looks like it reduces the effects of inter-team communication lag.

i.e. Docker lets you better push the org chart into production.


It makes poorly defined boundaries of responsibility in your app/org harder to spot, and harder to address once you do.

At the biggest place I ever worked, I came to believe that their chaos worked because it was self-organizing. They’d split a large project into parts, and the groups that didn’t work well would find the boundaries of their mandate constantly eroded by their more capable neighbors upstream and downstream. Eventually all the gaps would fill in, which is why the company worked.

But it meant many orgs and applications did work that would have made more sense at a different step in the process, were it not for incompetence or bandwidth. Things would happen here or there not because of some waterfall design but because of where the task was in the development flow and who had more bandwidth at the time.

They kept a lot of old guys around not because they were effective but because they were historians. They knew where the bodies were buried, and who was the right person to ask (not just which team but who was helpful on that team). We had a greybeard who basically did nothing but was nice to be around and any time you had a problem he knew who to introduce you to.


> We had a greybeard who basically did nothing but was nice to be around and any time you had a problem he knew who to introduce you to.

This is absolutely a feature and this guy probably deserves his salary.


> Theoretically, microservices allow for each team to deploy independently
You can still do that with a monolithic codebase. A Google team published a related paper: https://dl.acm.org/doi/10.1145/3593856.3595909

> When writing a distributed application, conventional wisdom says to split your application into separate services that can be rolled out independently. This approach is well-intentioned, but a microservices-based architecture like this often backfires, introducing challenges that counteract the benefits the architecture tries to achieve. Fundamentally, this is because microservices conflate logical boundaries (how code is written) with physical boundaries (how code is deployed).
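
A minimal sketch of the paper's core idea in Python (all names here are hypothetical, not the paper's API): code is written against a logical interface, and whether a call crosses a process boundary becomes a deployment decision rather than a code change.

    import json
    from typing import Protocol
    from urllib.request import urlopen

    class OrderService(Protocol):
        """Logical boundary: how the code is written."""
        def total(self, user_id: str) -> float: ...

    class LocalOrders:
        """Colocated implementation: a plain method call."""
        def total(self, user_id: str) -> float:
            return 42.0  # stub for illustration

    class RemoteOrders:
        """Same logical interface, physically deployed elsewhere (hypothetical URL)."""
        def __init__(self, base_url: str) -> None:
            self.base_url = base_url

        def total(self, user_id: str) -> float:
            return json.load(urlopen(f"{self.base_url}/total/{user_id}"))

    def checkout(orders: OrderService, user_id: str) -> None:
        # The caller is identical either way; the physical boundary
        # is chosen at deploy time, not baked into the logic.
        print(orders.total(user_id))

    checkout(LocalOrders(), "u1")  # monolith deployment, no network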


What usually happens is the same team ends up owning 5 microservices that all have weird interdependencies with leaky abstractions, shared code, and unwritten interface contracts between them.


I always view it as a very good sign when senior leadership is aware of Conway’s Law.


I feel the same way about SPA.

At work the decision was made to rewrite it all in React because it was supposedly easier to find people who knew React, not because it was a good product fit.


Easy decision to make if it's not your money you're spending, I guess.


Most of the strange things in the software business can be explained by the combination of

1. susceptibility to fads

2. path dependency,

or, to borrow a term from evolutionary biology, punctuated equilibrium.


Because it gives teams the illusion of fast progress, without being burdened by pesky things like “a consistent API,” or “not blowing up shared resources.”


> How microservices are still a default systems design architecture in anything but the largest orgs puzzles me.

A system made out of smaller single-purpose programs that are all composable and talk to each other over a standard interface is not exactly an unproven idea.


Composable single-purpose modules that communicate over a standard interface can be more easily achieved without involving a network and the complexity that comes with it.
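
As a minimal sketch in Python (the `Notifier` interface and its implementations are made up for illustration), the same "small pieces behind a standard interface" shape works fine inside one process:

    from typing import Protocol

    class Notifier(Protocol):
        """The 'standard interface': any notifier must implement send()."""
        def send(self, user_id: str, message: str) -> None: ...

    class EmailNotifier:
        def send(self, user_id: str, message: str) -> None:
            print(f"email to {user_id}: {message}")

    class SmsNotifier:
        def send(self, user_id: str, message: str) -> None:
            print(f"sms to {user_id}: {message}")

    def notify_all(notifiers: list[Notifier], user_id: str, message: str) -> None:
        # Composition is a plain function call: no serialization,
        # no retries, no partitions; a failure is just a stack trace.
        for n in notifiers:
            n.send(user_id, message)

    notify_all([EmailNotifier(), SmsNotifier()], "u123", "order shipped")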


IMO, there are only a few cases where the added network traversal makes sense.

1. There's some benefit to writing the different parts of the system in different languages (e.g. Go and Python for AI/ML)

2. The teams are big enough that process boundaries start to form.

3. The packaging of some specific code is expensive. For example, the Playwright Docker image is huge so it makes sense to package and deploy it separately.

Otherwise, agreed, it just adds latency and complexity.


It's actually really weird, if you think about it, that point 1 should involve the network. We should be able to just call a function in one language from a function in another language.

Actually this happened to me once. We had two components that needed to talk to each other - one with an Erlang server and C client library that communicated over a socket with a proprietary protocol - and the other in node.js. The first attempt was to write a C translator that took requests over another socket with a different proprietary protocol, but this one was proprietary to us so we could use it directly from node.js. The second, much better attempt was to learn node's C++ module interface and write a C++ node module wrapper around the C client library.

This third-party Erlang component benefited from being an independently restartable process and therefore needing some RPC, but we also had a mess of C/C++ components inter-connecting over RPC that in reality probably didn't need to be separate processes, but for some reason we'd already decided that architecture before we started writing them.


> It's actually really weird, if you think about it, that point 1 should involve the network. We should be able to just call a function in one language from a function in another language.

If you have two languages that are both not C or C++ and have more involved runtimes, how well does this work? I know for some cases you have things like JRuby or IronPython, but say mixing a JVM language and a CLR language?


For those cases you have to bring the runtimes with you.

With JVM and CLR you can use JNI and COM to generate SOs/DLLs, and both of them can consume any SOs/DLLs via FFI. There are also IKVM and Jni4Net, which let Java code run in .NET (or at least used to; I last used them 15 years ago). Results may vary.
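
As a minimal sketch of the FFI route in Python (assumes a Linux system where libm.so.6 exists; the library name differs per platform):

    import ctypes

    # Load a shared library (an SO/DLL) and call into it in-process,
    # no sockets involved.
    libm = ctypes.CDLL("libm.so.6")

    # Declare the C signature: double cos(double)
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0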

For other languages it can be a bit more involved: if there's no such thing as exposing as a library, you must embed the interpreter, which typically involves using C++.

It's not fun. This is why people end up using network requests.

If you can have a text-only interface, or even involve files, you can also just invoke the other app as a process.
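
A minimal sketch of that last option in Python, using the standard sort utility as a stand-in for "the other app":

    import subprocess

    # Talk to another program over the plainest interface there is:
    # stdin/stdout text. No shared runtime, no FFI, no network.
    result = subprocess.run(
        ["sort"],
        input="banana\napple\ncherry\n",
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout)  # apple, banana, cherry on separate lines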


The level of reductionism of that comment is honestly quite amusing given the topic. Maybe we can use it as an unintended warning of not going too far in the pursuit of simplicity.


Separation of concerns is the false promise of all these so-called "architecture patterns." Their advocates make you believe that their architecture will magically enable separation of concerns. They offer blunt knives to make rough slices, and these slices always fail at isolating relational concerns, inviting entirely new layers of complexity.

You had a relational database, designed to store and query a relationship between a user and their orders. Now, you have a user management service and an order service, each wrapping its own database. You had a query language. Now, you have two REST APIs. Instead of just dealing with relational problems, you now face external relation problems spread across your entire system. Suddenly, you introduce an event bus, opening the gates to chaos. All this resulting madness was originally sold to you with the words, "the services talk to each other."
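
To make that concrete, a sketch in Python (the database schema and the two service URLs are hypothetical): one relational query becomes two API calls plus a hand-rolled join in application code.

    import json
    import sqlite3
    from urllib.request import urlopen

    # Before: one query answers the question.
    db = sqlite3.connect("shop.db")
    rows = db.execute(
        "SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id"
    ).fetchall()

    # After: two REST calls (hypothetical endpoints) and a manual join,
    # and every failure mode in between is now your problem.
    users = json.load(urlopen("http://user-service/users"))
    orders = json.load(urlopen("http://order-service/orders"))
    by_user = {u["id"]: u["name"] for u in users}
    joined = [(by_user.get(o["user_id"]), o["total"]) for o in orders]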

Who ever claimed that REST services compose well? Because they can "talk to each other"? Really? Only completely disconnected architects could come up with such an idea. REST services don’t compose well at all. There aren’t even any formal composition rules. Instead, composing two REST services requires a ton of error-prone programming work. A REST service is the worst abstraction possible because it’s never abstract—it’s just an API to something extremely concrete. It doesn’t compose with anything.

Microservices aren’t micro. They’re hundreds of large factories, each containing just one small machine. Inputs need to be packaged and delivered between different factories in different locations, adding complexity every step of the way. This is what happens when enterprise architects "rediscover" programming—but from such a disconnected level that the smallest unit of composition becomes a REST API. Rather than solving problems, they create a far larger problem space in which they can "be useful," like debating whether a new microservice should be created for a given problem, and so on.

The same critique applies to "hexagonal architecture." In the end, with all of these patterns, you don’t get separation of concerns. The smallest unit of the architecture was supposed to be the isolation level where your typical problems could be addressed. But your problems are always distributed across many such units, making them harder to solve, not easier. It’s a scam. The truth is, separation of concerns is hard, and there’s no magical, one-size-fits-all tool to achieve it. It requires significant abstraction work on a specific, concrete problem to slice it into pieces that actually compose well in a useful and maintainable way.


Because microservices have a granularity that provides a sort of architectural distinction a big ball of mud cannot. In the microservices case, the sign of bad design is that the services are far too chatty, but that is not a bright-line distinction: it is always subjective whether the services are chatting too much, or when the messaging becomes spaghetti. With a monolith, the mere fact that you developed it into a big ball of mud is bad design made manifest. So microservices make bad design harder to identify. Designing a modular monolith from the ground up will feel like overengineering to many, until you arrive at the big ball of mud and it is too late.

Sadly, simplistic is often seen as an effective replacement for the difficult achievement of simple.


Mostly because people are isolated from the consequences of their shitty architectures through the joys of being employable somewhere else.


Microservices are about workforce rotation more than anything else.



