
Could someone please point me in the direction of some solid documentation about deciding when and how to split out microservices? In so many of the cases where I see them used they're overkill, and they just make development and devops far more complicated for the scale of the application and the amount of data/users being processed. I find myself usually comfortable with splitting out Auth (Cognito?), Payments (Stripe?), and not much else.



My favorite way to think about this is to apply a reverse Conway's law. Architect your microservices like your org chart.

One of the main benefits of decoupling a monolithic application is getting the freedom to release, scale and manage different components separately. Each team becomes empowered to define their own practices, procedures and policies that are appropriate to their business requirements.

Have a mission-critical component that needs 5 9s reliability (auth or billing)? Great, release that carefully over multiple days with canarying across failure zones. Working on an experimental API for a new mobile app? Awesome, ship every commit.


Architecting microservices like your org chart would be Conway's Law, correct?


Yeah, I don't think it's "reverse" either.

The common quote "if you have four groups working on a compiler, you’ll get a four-pass compiler" clearly has architecture following from organizational structure.


It's reverse because Conway's Law, as stated, says that code structure inadvertently arises from org structure, i.e. code = f(org). The reversal is to architect the org structure to ensure optimal code structure, i.e. org = f(code).


That is reverse, yes.

I think your original comment was unclear, if not misstated. I had read the "like" in "[a]rchitect your microservices like your org chart" as something like "to resemble", and the rest of the comment seems to follow just fine. Other readings seem possible but I don't see one that gives the sense you want without what feels like some serious contortion.

In any case, no worries, we're clearly on the same page now - just trying to figure out what went wrong.


The way I've generally thought about it and have seen it done successfully in practice is to create a microservice for each component that scales independently. Auth and payments make sense because they scale independently. You may get authorization requests and financial transactions at a different rate than traffic to your application itself.

Similarly, if you run a website that does batch processing of images, for example, the image processor would be its own microservice, since it scales independently of website load. You might need to process 100 or even 1,000 images per user on average, and it doesn't make sense to scale your whole application when the bulk of the processing is image processing.
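
Rough sketch of what I mean (the queue name, job fields, and thumbnail size are all made up): the website just enqueues a job, and a separate worker pool does the image work, so only the workers need to scale with image volume:

    # Hypothetical image worker; deployed and scaled separately from the web app.
    import json
    import redis              # pip install redis
    from PIL import Image     # pip install Pillow

    r = redis.Redis(host="localhost", port=6379)

    def process(job: dict) -> None:
        # Downscale the source image and write a thumbnail next to it.
        img = Image.open(job["src_path"])
        img.thumbnail((800, 800))
        img.save(job["dst_path"])

    while True:
        # BLPOP blocks until the web app pushes a job onto the list, so you
        # can run as many copies of this worker as the backlog requires.
        _, raw = r.blpop("image_jobs")
        process(json.loads(raw))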


That might be a good criterion, but it still depends on your application. Most web apps have hardly any overhead when under load, so it's essentially just as efficient to load the whole codebase as a monolith onto each node as you scale up.


Correct, the hierarchical breakdown of services is orthogonal to the scaling unit of code. If every node in the cluster can execute every function, there is no need to split things out.

When deployment and coordination become an issue, that is when _deployment_ needs to get split up. But given our current RPC mechanisms, deployment and invocation are over-coupled so we have to consciously make these decisions when they could be made by the runtime.
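
Rough illustration of that coupling (service name and URL are invented): the call site has to change depending on whether the code lives in-process or behind an RPC boundary, instead of the runtime deciding:

    import requests  # pip install requests

    def resize_local(image_bytes: bytes) -> bytes:
        # Same deployment unit: invocation is just a function call.
        ...

    def resize_remote(image_bytes: bytes) -> bytes:
        # Separate deployment unit: the caller now owns serialization,
        # endpoints, timeouts and failure handling.
        resp = requests.post("http://image-svc.internal/resize",
                             data=image_bytes, timeout=5)
        resp.raise_for_status()
        return resp.content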


You probably won't find much that will help you, because there really is no "right time". I've done a lot of surveys, and what I've found is that if you're running microservices, about 25% of your engineering time will be spent managing the platform and DevOps rather than writing product code. That time can either be 25% of every engineer's time, or the full time of 25% of your engineering staff.

In either case, the best time to do it is when you feel the increased speed and agility outweighs the added overhead. The speed and agility come from small teams being able to operate mostly independently.

There are of course a ton of exceptions to this. For example, if you're using AWS Lambda or Google Cloud Functions, they do a lot of that 25% for you, as long as you're willing to do it their way, so you have an incentive to go microservices sooner. Going microservices will also probably allow you to scale up faster if you've done it right, so if you expect a huge spike in traffic, that's a good time to make the move.
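
For example, with a Python Lambda this is roughly all you write (the standard handler shape; the event fields are whatever your trigger sends), and the platform owns scaling, patching and routing:

    import json

    def lambda_handler(event, context):
        # AWS invokes this per request/event; no servers or deploy pipeline to run.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello {name}"}),
        }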

There is lots of good material on the pros and cons of microservices, but when to actually make the switch, or whether you should start out with microservices, is a very situation-specific question and relies on a lot of external factors.

The best I can say is look at the pros and cons and figure out what that costs or gains you in your particular situation.


Microservices solve a business-organization need. They bring nothing to the table from a technical standpoint except complexity and overhead, which might be warranted if you run a large company.

I would say, don't use Microservices unless you have 100 or more employees.


Look into domain-driven design. It's all about the data and understanding that microservices benefit from data redundancy.


Domain-Driven Design by Eric Evans is a great resource for learning how and where to separate out services.

You will learn how to model complex domain models and how to decompose them into Aggregates. The Aggregate is the key to where to create new microservices, as each aggregate should only contain domain objects that relate to itself.

Payments is a great example of an Aggregate.
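
A minimal sketch of the idea (names are mine, not from the book): the Payment aggregate enforces its own invariants and refers to other aggregates, like the customer, only by ID:

    from dataclasses import dataclass, field
    from decimal import Decimal
    from uuid import UUID, uuid4

    @dataclass
    class Refund:
        amount: Decimal

    @dataclass
    class Payment:
        customer_id: UUID          # other aggregates are referenced by ID only
        amount: Decimal
        id: UUID = field(default_factory=uuid4)
        refunds: list = field(default_factory=list)
        status: str = "pending"

        def refund(self, amount: Decimal) -> None:
            # The invariant lives inside the aggregate:
            # never refund more than what remains of the payment.
            remaining = self.amount - sum(r.amount for r in self.refunds)
            if amount > remaining:
                raise ValueError("refund exceeds remaining balance")
            self.refunds.append(Refund(amount))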


I'd say a bounded context is a better border. When doing REST you can then begin by implementing each of your aggregate roots within the context as a resource.
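
A sketch of that mapping, using FastAPI purely as an example (the URLs and in-memory store are stand-ins): within the payments context, each aggregate root becomes a resource:

    from fastapi import FastAPI, HTTPException   # pip install fastapi

    app = FastAPI()
    payments = {}   # stand-in for the bounded context's own data store

    @app.post("/payments")
    def create_payment(body: dict):
        payments[body["id"]] = body
        return body

    @app.get("/payments/{payment_id}")
    def get_payment(payment_id: str):
        if payment_id not in payments:
            raise HTTPException(status_code=404)
        return payments[payment_id]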


I've never been able to make sense of including more than one aggregate root in a bounded context. What are your thoughts?


I asked this once at a K8s presentation, and the speaker answered that in his experience migrating to microservices started to make sense at a team size of about 35-40 engineers.

#anecdata


The boundaries are between logical components of your application in your business domain. The two you identified (auth and payments) are a good fit, especially because they also rely on external services a lot.

The other heuristic to follow is how your company is organized. If you have a separate team working on a major feature, then perhaps that could be its own service so they can own the full stack.

Otherwise, stick to the monolith for the best results.


I am reading "Building Microservices" by Sam Newman. He has some really good insights into how to break up a monolith into microservices.


I choose to split out a new service if I need

- independent deployment of this service

- independent scaling of this service

- independent implementation stack for this service


When you split off a team to work on something, you can consider splitting it off into a microservice.


The key is pre-emptively identifying cases where that will happen before the need actually arises, since splitting out microservices from a monolith retroactively can be exceedingly painful and complex. You have to strike a balance between the short-term and the long-term.


Simple test: is a component of your application, which you could logically separate into its own thing, going to be used by at least “the app” + 1 other app?


That's one heuristic, but not the only one you should use. It can make sense to break out microservices that only have one consumer for various reasons.



My main reasons to split are the following:

- security - minimize privilege escalation and access to infrastructure

- scalability - different workloads need different metrics to scale (CPU-bound vs. connection-bound)

- dependencies - why burden devs with every dependency, especially ones that are difficult to set up dev environments for

- a service per backend DB, etc. (overlaps with the above)

- domain - can place domain specific skills on a team


Security? Really? Too many moving parts, too many holes and places to exploit... fixing the same security problem in 30 different places doesn't sound like great security.


More like the difference between giving the web-facing application full access to the payment database vs. having a separate internal service with a very limited API. A problem with the frontend does not always immediately compromise the database.
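
Rough sketch of the difference (service URL and fields are invented): the frontend only gets a narrow HTTP API, while the DB credentials stay inside the payment service:

    import requests  # pip install requests

    # What the web frontend is allowed to do: call a narrow internal API...
    def charge_customer(customer_id: str, amount_cents: int) -> str:
        resp = requests.post("http://payments.internal/charges",
                             json={"customer_id": customer_id,
                                   "amount_cents": amount_cents},
                             timeout=5)
        resp.raise_for_status()
        return resp.json()["charge_id"]

    # ...instead of this, where compromising the frontend compromises the DB:
    # conn = psycopg2.connect("dbname=payments user=app password=...")
    # conn.cursor().execute("SELECT card_number FROM cards")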


Split out anything that can be expressed as:

kafka -> process message -> kafka
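
i.e. anything shaped like a stateless consume -> transform -> produce step. A sketch of that shape using kafka-python (topic names and the enrichment logic are made up):

    import json
    from kafka import KafkaConsumer, KafkaProducer   # pip install kafka-python

    consumer = KafkaConsumer("orders.raw",
                             bootstrap_servers="localhost:9092",
                             group_id="order-enricher")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    def process(order: dict) -> dict:
        # whatever transformation this service owns
        order["total_cents"] = sum(i["price_cents"] * i["qty"]
                                   for i in order["items"])
        return order

    for msg in consumer:
        result = process(json.loads(msg.value))
        # no shared state with the rest of the system, so it can be deployed
        # and scaled (up to the partition count) entirely on its own
        producer.send("orders.enriched", json.dumps(result).encode())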



