The Neomonolith (inconshreveable.com)
20 points by inconshreveable on Oct 8, 2015 | 7 comments



I've seen a variant of the "Neomonolith" where you run the same code base on every server, but use a proxy layer (Zuul/Apache/Nginx) to dedicate sets of servers to specific parts of the code base for specific, high-traffic use cases. That way you're still only building one image, but you get some of the performance-tuning / concurrency benefits of microservices without all the headache. It still doesn't come close to solving all the problems of the monolith, but it's another band-aid you can apply if a microservices architecture doesn't make sense...


Have you got any examples of this? I'd like to see one if you do.


Not very difficult to set up -- just have the reverse proxy route on certain parts of the path (e.g. one set of reverse-proxy entries for /api/foo and another for /api/bar, each pointing at its own pool of servers). That way if the foo API gets hammered, the performance of bar is unaffected (assuming your proxy layer is adequately sized).
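For illustration, here's a minimal sketch of that kind of path-based routing, written in Go with net/http/httputil standing in for Zuul/Apache/Nginx. The pool hostnames are hypothetical, and both pools would be running the exact same application image:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // pool builds a reverse proxy that forwards requests, path intact,
    // to one group of identically-built application servers.
    func pool(target string) *httputil.ReverseProxy {
        u, err := url.Parse(target)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(u)
    }

    func main() {
        // Hypothetical pools: both run the same image, only the routing differs.
        foo := pool("http://foo-pool.internal:8080") // servers reserved for hot /api/foo traffic
        bar := pool("http://bar-pool.internal:8080") // servers for everything else

        mux := http.NewServeMux()
        mux.Handle("/api/foo/", foo)
        mux.Handle("/api/bar/", bar)
        mux.Handle("/", bar) // default: remaining traffic goes to the general pool

        // If /api/foo gets hammered, only the foo pool saturates;
        // /api/bar keeps its own capacity.
        log.Fatal(http.ListenAndServe(":80", mux))
    }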

Note this doesn't help if your persistence layer gets overwhelmed (since in a monolith you'll still probably have a single persistence layer). Separating the persistence layer is by far the hardest part of moving from a monolithic to a microservices architecture.


This looks very similar to Paul Hammant's "cookie cutter scaling" pattern (http://paulhammant.com/2011/11/29/cookie-cutter-scaling/), which I find equally baffling. Perhaps I've just never encountered the conditions which are favorable to it, but what I like about microservices is how they decouple every service's rate of change, and going back to lock-step deploys throws that all away.

> In addition, with a monolith, the work of monitoring, alerting, configuration, and a local development is paid once. But with a microservice design, that cost must be paid for every service.

The second sentence is only true if _every service_ actually rolls its own monitoring, alerting, etc. I cannot imagine a situation like that in practice. The standardization of those things (and more, such as build and deployment pipelines) is necessary to enable microservices in the first place. It's true that plenty of organizations committed to their microservice framework will also run some things outside of it, but I've never seen the ratio of services inside the framework closer to 0 than to 1.


Developing in Erlang/Elixir gives you a variant of this, in that your code is broken up into many small "processes" which message each other even when you're only deploying a single app to a single server. Then, if you need to distribute and scale, introducing a network between the components is a relatively small change (Sun's fallacies of distributed computing mean it's never a zero-cost change, however).


This is the missing piece for a Yeoman generator I've been working on: https://github.com/zbyte64/generator-batteries

The issue I bumped into is getting /api to talk to /auth; OP recommends making them talk over RPC.


This approach often works for third-party services, too. For example, you can put a mongoconf/mongod/mongos triplet on every box. Your app connects to the mongos on localhost, which then forwards requests to whichever mongod happens to be the primary. The same approach works for Redis and Redis Sentinel.
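As a rough sketch of what the application side might look like under that setup (Go, using the official mongo-driver and go-redis packages; the Sentinel master name "mymaster" and the ports are assumptions), the app only ever addresses localhost and lets mongos / Sentinel find the right remote node:

    package main

    import (
        "context"
        "log"

        "github.com/redis/go-redis/v9"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    func main() {
        ctx := context.Background()

        // Talk to the mongos running on this box; it routes each request
        // to the appropriate mongod behind the scenes.
        mc, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://127.0.0.1:27017"))
        if err != nil {
            log.Fatal(err)
        }
        defer mc.Disconnect(ctx)

        // Ask the local Sentinel where the current Redis master is,
        // rather than hard-coding a remote master address.
        rdb := redis.NewFailoverClient(&redis.FailoverOptions{
            MasterName:    "mymaster", // assumed Sentinel monitored-master name
            SentinelAddrs: []string{"127.0.0.1:26379"},
        })
        defer rdb.Close()

        if err := rdb.Ping(ctx).Err(); err != nil {
            log.Fatal(err)
        }
        log.Println("connected via local mongos and local Sentinel")
    }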



