
That is EXACTLY what I have observed, time and time again.

If you have trouble managing ONE application, what makes you think you will be better at managing multiple?

Also, running a distributed system is way more complicated than having all the logic in a single application. Ideally you want to delay switching to a distributed system until it is inevitable that you will not be able to meet demand with a monolithic service.

If your application has problems, don't move to microservices. Remove all unnecessary tasks, let your team focus on solving the problems first, and then automate all your development, testing, deployment, and maintenance processes.

Or call me and I can help you diagnose and plan :)




Monoliths are also distributed systems: they will run on multiple hosts, most probably coordinating on some sort of shared state (and that state management will need to take care of concurrency and consistency). Some hosts will go down. Service traffic will increase.

I understand your point. You are using "distributed" in the sense of "how is one big piece of work distributed"; you probably also hate overly "Object Oriented" code for similar reasons.

But distributed systems are a well understood thing in the industry. If I call you and you tell me this, then you're directly responsible for hurting my chances of success by giving me a misleading sense of what a distributed system is.


> But distributed systems are a well understood thing in the industry.

Wait, what?

Distributed systems are one of the most active areas in CS currently. That's the opposite of "well understood".

It's true that most systems people create are required to be distributed. But they are coordinated by a single database layer that satisfies approximately all of the requirements. What remains is an atomic facade that lets developers write code as if each client were the only one. There is a huge difference between that and a microservices architecture.


Distributed systems are well understood, though. We have a lot of really useful theoretical primitives, and a bunch of examples of why implementing them is hard. That doesn't make the problems easier, but it's an area that, as you say, has a ton of active research. Most engineers writing web applications aren't breaking new ground in distributed systems; they're using their judgement to choose among tradeoffs.


Well understood areas do not have a lot of active research. Research aims exactly to understand things better, and people always try to focus it on areas where there are many things to understand better.

Failure modes in distributed systems are understood reasonably well, but solving those failures is not, and the theoretical primitives are far from universal at this point. (And yes, they are hard too, where "hard" means "generalize badly" more than "hard to implement", as the latter can be solved by reusing libraries.)

The problem is that once you distribute your data into microservices, the distance between well-researched, solved ground and unexplored ground where even researchers don't dare to go is extremely thin, and many developers don't know how to tell the difference.


Correct. That doesn't make monolithic systems "not distributed".

Secondly, I don't know why you say "distributed systems are an active area of research" and use this as some sort of retort.

If I ask "Is a monolithic app running on two separate hosts a distributed system or not?" and your answer is "We don't know, it's an active area of research" or "It's not; only microservices are distributed", then you're simply using the term incorrectly.


Hmm... I don't think you understood what I said.

Most of what people call monolithic systems are indeed distributed. There are usually explicit requirements for them to be distributed, so it's not up to the developer.

But ACID databases provide an island of well understood behavior in the hostile territory of distributed systems, and most of those programs can get by with just an ACID database and no further communication. (Now, whether your database is really ACID is another can of worms.)
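
To make that concrete, here is a minimal sketch of the atomic-facade idea, using Python's built-in sqlite3 as a stand-in for any ACID database (the accounts table and transfer function are hypothetical):

    import sqlite3

    # Hypothetical schema; sqlite3 here is just a stand-in for any ACID database.
    conn = sqlite3.connect("app.db", isolation_level=None)  # manage transactions manually
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT OR IGNORE INTO accounts VALUES (1, 100), (2, 0)")

    def transfer(conn, src, dst, amount):
        # Both updates commit or roll back as one unit; the database
        # serializes concurrent writers, so this code can be written as
        # if it were the only client: the "atomic facade".
        try:
            conn.execute("BEGIN IMMEDIATE")
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
            conn.execute("COMMIT")
        except Exception:
            conn.execute("ROLLBACK")
            raise

    transfer(conn, 1, 2, 25)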


Different kinds of distributed systems have wildly different levels of complexity in the kinds of fun their distributed nature can cause. If you have a replicated set of monoliths, you typically have fewer exciting modes of behaviour and failure.

Consider how many unique communication-graph edges and multi-hop causal chains of effects you have in a typical microservice system versus a set of replicated copies of a monolith, not to mention the several reimplementations, or slightly varying versions and behaviours, of the same logic.


I don't even consider a replicated set of monoliths a distributed system.

If you've done your work correctly, you get almost no distributed-system problems. For example, you might pin your users to a particular app server, or maybe you use Kafka and it is the Kafka broker that decides which backend node gets which topic partition to process.
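
As an illustration, here is a minimal sketch of pinning users to a server (the backend hostnames are hypothetical, and a real setup would use proper consistent hashing so that adding or removing a node doesn't reshuffle every user):

    import hashlib

    # Hypothetical backend list; the point is that a given user always
    # routes to the same node, so per-user state never spans hosts.
    BACKENDS = ["app-1.internal", "app-2.internal", "app-3.internal"]

    def pin_user(user_id: str) -> str:
        # A stable hash of the user id picks a stable backend, much like
        # a Kafka broker assigning topic partitions to consumers.
        digest = hashlib.sha256(user_id.encode()).digest()
        return BACKENDS[int.from_bytes(digest[:8], "big") % len(BACKENDS)]

    assert pin_user("alice") == pin_user("alice")  # deterministic routing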

The only thing you need then is to talk to your database properly (an app server talking to a database is still a distributed system!) and use database transactions, or maybe optimistic locking.
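
For the optimistic-locking option, a minimal sketch, assuming a hypothetical items table with a version column:

    def update_with_optimistic_lock(conn, item_id, new_name, expected_version):
        # Classic optimistic locking: the UPDATE only applies if nobody
        # bumped the version since we read the row. Zero rows updated
        # means we lost the race and should re-read and retry.
        cur = conn.execute(
            "UPDATE items SET name = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_name, item_id, expected_version),
        )
        return cur.rowcount == 1  # False: concurrent modification, retry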

The fun starts when you have your transaction spread over multiple services and sometimes more than one hop from the root of the transaction.
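
To see why: once there is no single atomic commit, you end up hand-rolling something like a saga with compensating actions. A rough sketch (the do/undo framing is my own illustration, not a standard API):

    def run_saga(steps):
        # steps: list of (do, undo) pairs, one per remote service call.
        # There is no atomic commit across services, so on failure we
        # run best-effort compensations in reverse order instead.
        completed = []
        try:
            for do, undo in steps:
                do()
                completed.append(undo)
        except Exception:
            for undo in reversed(completed):
                undo()  # compensation may itself fail; real systems log and retry
            raise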


> Monoliths are also distributed systems and will run on multiple hosts

... not necessarily. Although the big SPOF monolith has gone out of fashion, do not underestimate the throughput possible from one single very fast server.


Well, no matter how fast a single server is, it can't keep getting faster.

You might shoot yourself in the foot by optimizing only for a single server, because eventually you'll need horizontal scaling, and it's better to think about that at the beginning of your architecture.


> eventually you'll need horizontal scaling

This is far from inevitable. There are tons of systems which never grow that much (not everyone works at a growth-oriented startup) or which grow in ways that aren't obvious when initially designing them. Given how easily you can get massive servers these days, you can also buy yourself a lot of margin for part of one developer's salary.


Whatever happened to premature optimization being bad?


Even in a contrived situation where you have a strict cache locality constraint for performance reasons or something, you'd still want to have at least a second host for failover. Now you have a distributed system and a service discovery problem!
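
Even the toy two-host version forces those questions. A minimal sketch, assuming a hardcoded host list standing in for real service discovery (the hostnames are hypothetical):

    import socket

    HOSTS = [("primary.internal", 8080), ("standby.internal", 8080)]

    def pick_live_host(hosts, timeout=0.5):
        # Crude failover: the first host that accepts a TCP connection
        # wins. Real discovery (DNS, Consul, etc.) replaces the static list.
        for host, port in hosts:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return (host, port)
            except OSError:
                continue
        raise RuntimeError("no live host found")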



