
I've taken the approach of building the monolith first with an architecture that allows me to easily pull parts of it out into services when I hit constraints and need that scalability.


This is the approach we used at a certain budget shaving startup and it worked very well.

The boosts to dev speed from a monolith / monorepo over a more "scalable" approach are huge, and they let you focus on business problems instead of self-imposed tech challenges.


How?


If you push to keep your code organized into modules with simple function APIs, and make sure that code outside a module doesn't use the package internals directly, then your function APIs can easily be extracted into REST/gRPC calls, and the only thing that changes (to a first approximation) is that internal service (function) calls become external service (API) calls.

Obviously you now need to add an API client layer, but in terms of code organization, if your packages are cleanly separated then you've already done a lot of the work. (Transactions are the obvious piece you'll often have to rethink when crossing a service boundary.)
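A minimal sketch of the idea, with hypothetical names (`create_invoice`, `BillingClient` are illustrative, not from any real codebase): other packages only ever call the module's function API, so extraction later means swapping the in-process function for a client with the same signature.

```python
# billing/api.py -- the only entry point other packages may import.
# ORM models and helpers stay private to the package.
def create_invoice(customer_id: int, amount_cents: int) -> dict:
    return {
        "customer_id": customer_id,
        "amount_cents": amount_cents,
        "status": "pending",
    }


# After extraction, callers get an object with the same method signature;
# nothing outside the module has to change.
class BillingClient:
    def __init__(self, base_url: str):
        self.base_url = base_url

    def create_invoice(self, customer_id: int, amount_cents: int) -> dict:
        # Would POST to the remote billing service here instead of
        # calling in-process code.
        ...
```

The point is that the call sites are identical either way; only the wiring behind the API changes.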

The advantage of this approach is that it's much easier to refactor your internal "services" while they are just packages than after you've extracted them into microservices and set them free, since upgrading microservice APIs requires more coordination: you can't upgrade clients and servers all in one go, as you often can inside the same codebase.


I've been there. This only seems to 'work' until you try to raise throughput and reliability and lower errors and latency. What ends up happening is that the module boundaries that made sense in a monolith stop making sense as microservices once communication is allowed to fail. Typically a module is the source of truth for some concern, with consumers layered above it. That is the worst pattern with microservices: a synchronous request has to pass through several layers to reach a bottom-level service. With microservices you want to serve requests from self-contained slices of information that are updated asynchronously. The boundaries are then not central models but rather one part of a workflow.
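A toy sketch of the "self-contained slice" idea, with made-up names: instead of making a synchronous hop to a user service on every order lookup, the order service keeps a denormalized local view and folds in change events asynchronously.

```python
# Local, denormalized read slice: everything needed to answer an order
# request, owned by the order service itself.
order_view = {}  # order_id -> denormalized order data

def on_user_renamed(event: dict) -> None:
    """Async consumer: apply a user-rename event to the local slice."""
    for view in order_view.values():
        if view["user_id"] == event["user_id"]:
            view["user_name"] = event["new_name"]

def handle_get_order(order_id: int) -> dict:
    # Served entirely from local state: no synchronous call to another
    # service, so another service being down can't fail this request.
    return order_view[order_id]
```

The trade is eventual consistency for availability: the slice can be briefly stale, but a request never blocks on a chain of upstream services.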


Or you can do poor man's microservices: run the same monolith with different production flags and load balance across the roles.

Keep all your code in one repo, deploy that codebase to multiple servers, but have it acting in different capacities.

1 email server, 5 app servers dishing out HTML, 2 API servers, etc.

It works very well and handled spikes of traffic during Super Bowl ads without any problems.
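A sketch of the flag approach under assumed names (`SERVER_ROLE` and `build_app` are illustrative, not a real convention): one codebase, one deploy artifact, and each server mounts only the handlers for its role.

```python
import os

def build_app(role: str) -> dict:
    """Return the route table for this server's role.

    Every role ships in the same binary/codebase; the flag just decides
    which capacities this particular server takes on.
    """
    handlers = {}
    if role in ("app", "all"):
        handlers["/"] = lambda: "<html>home</html>"
    if role in ("api", "all"):
        handlers["/api/things"] = lambda: '{"things": []}'
    if role in ("email", "all"):
        handlers["/jobs/send-email"] = lambda: "queued"
    return handlers

# e.g. SERVER_ROLE=api on the two API boxes, SERVER_ROLE=app on the five
# app boxes, SERVER_ROLE=email on the mail box.
app = build_app(os.environ.get("SERVER_ROLE", "all"))
```

Scaling a hot role is then just adding more servers with that flag; no service extraction needed.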


This is the biggest favor you can do for yourself. The developer experience is as simple as production, without descending into container-induced madness.


Testing is a breeze too, because you use the same tools across the board.

I don't know why it fell out of fashion, but for your average web app it is the gold standard imo.


This is exactly it.


Elixir and Phoenix. The contexts pattern used in Phoenix is the most modular, easily microserviced way of structuring apps I’ve ever used. I slapped myself on the forehead when I first saw it. Duh. It’s really fantastic. Highly recommend


Do you have a good reference link to learn more about what you're talking about?



Service Oriented Monolith. I.e., you can organise a monolith in pretty much the same way you would organise a microservice architecture.


"Service Oriented Monolith", I'm loving it! ;)


Bingo.

A properly organized codebase scales very well when partitioned into services.

It is my default approach for all new projects for the last 10 years or so.


https://medium.com/@dan_manges/the-modular-monolith-rails-ar...

His talk was pretty cool and helpful if you want to split a monolith!



