Build the Modular Monolith First (fearofoblivion.com)
144 points by kiyanwang on Nov 13, 2022 | 86 comments



There are more than two architectures.

I agree: if you don’t have organizational scale, microservices solve problems you don’t have.

But there are at least three separate things that are ‘monolithic’ about a classical monolith. 1) the codebase, 2) the database, and 3) the build and release process.

And you can certainly modularize your codebase into libraries or independent modules, but retain monolithic builds and releases against a monolithic relational database - a ‘modular monolith’.

But I’m not sure that’s the interesting part to break up, especially if you are not trying to solve the organizational problems microservices help with.

On the other hand, modularizing your build and deployment process could be helpful even if you are a solo developer - because it helps you reduce cycle times, and make smaller changes which means that you can trace bugs more quickly.

And modularizing data access is better for security and reducing blast radius of bugs or errors.

Modularizing the codebase seems much more like a problem you only need to tackle when your organizational complexity increases.


Similarly, I feel like microservices are implemented to help with "scale", but it's often fuzzy which metric is being scaled. You can scale:

1. Number of developers in your org
2. Number of users on your platform
3. Amount of data or compute used

Not all products need to scale along all of these dimensions, and a microservice architecture may or may not help much. I'd argue that you probably get better results for #2 by optimizing database queries and implementing a good caching layer.
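A caching layer for #2 can be as little as cache-aside in front of hot reads; a minimal Python sketch, with all names (`fetch_user_from_db`, `CacheAside`, the TTL) hypothetical:

```python
import time

# Hypothetical stand-in for an expensive database query.
def fetch_user_from_db(user_id, db):
    return db[user_id]

class CacheAside:
    """Cache-aside: check the cache first, fall back to the DB, store the result."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def get_user(self, user_id, db):
        hit = self.store.get(user_id)
        if hit is not None and hit[0] > time.monotonic():
            return hit[1]  # cache hit: no DB round trip
        value = fetch_user_from_db(user_id, db)
        self.store[user_id] = (time.monotonic() + self.ttl, value)
        return value

db = {"u1": {"name": "Ada"}}
cache = CacheAside()
first = cache.get_user("u1", db)   # misses the cache, reads the DB
second = cache.get_user("u1", db)  # served from the cache
```

The tradeoff is staleness: within the TTL the cache keeps serving the old value even if the DB row changes, which is often acceptable for read-heavy user-facing pages.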


Microservices are primarily good at scaling the human organization (#1).

Plenty of monoliths handle large numbers of users, large data sets and large amounts of computing resources.


> Microservices are primarily good at scaling the human organization (#1).

I agree but this is what worries me about them; micro-services encourage team scale up and that is really difficult to do well.

In theory you'd have separate services with well defined APIs. You'd know upfront what behaviour belongs where and could get the right people to deliver changes on schedule.

The reality is often a form of disguised waterfall development and it rarely works out well unless you are very careful.

Instead you've gained the problem of refactoring multiple services, in multiple languages, owned by multiple competing teams, each with their own management and agenda. Productivity takes a nosedive and more bodies get added in an attempt to make up for it. The result is the opposite[1].

You're much better off striving to keep a small team as productive as possible with adding team members a last resort. Project management at scale is such a difficult task that if you can delay doing it, you should.

Startups have no choice but to use small teams. As a consequence they have good productivity and low barriers to communication without really thinking about it. The problem in larger enterprises is that the highest status project manager is often the one with the biggest team and not the highest productivity.

Personally, I'd rather have a few guys with massive servers than a massive team with lots of small services.

-

1. This is nothing new; it's what Fred Brooks was talking about in The Mythical Man-Month back in the 1970s.

https://en.wikipedia.org/wiki/The_Mythical_Man-Month


Agreed. I've seen product say "we need to make sure it scales [to our dream number of users]", and have engineering respond "we'll use microservices to scale" seemingly without addressing what kind of scale they'll help with.

A lot of orgs end up using the same word to describe completely different needs and outcomes


The core question “what is this in service of?” is what separates good developers from mediocre ones.


I would add 4) documentation to that list. A wiki full of docs is the closest analog I've seen to a monolith ball of mud codebase.


Any team starting with microservices on an unvalidated concept likely hasn't built a big project from the ground up before

If it's a small engineering team, there is nothing more optimal than working on a big scrappy vertical codebase in the early stages

In the fortunate situation you need to start scaling -- breaking that out into MS later is usually low effort and fun

If you break it up too early you often end up with logic ghettos forming in the wrong stack that become near impossible to relocate later

I was talking to a startup last year who were hiring several hundred engineers to build a handful of microservice stacks in anticipation of the traffic they might get at launch (success expected because of previous unrelated founder experience), and wanting to make it easier to deploy vast engineering resources on it -- they've still not launched anything


> breaking that out into MS later is usually low effort and fun

In my experience, breaking down 5+ year old / multiteam monoliths has always been painful, frustrating, and required huge efforts, especially when nobody remembers why certain things are the way they are. In addition, getting enough support from the business to do it was hard. I really find it quite surprising to hear someone share the opposite opinion, and I wonder what kind of environment and size of project you were in.


100% agreed there. I've never seen this go smoothly, and usually the monolith lives forever, slowly shedding functionality but never quite disappearing.


I think an important thing to keep in mind when discussing monoliths or Microservices is that you can build modular code without needing multiple binaries, processes, or server instances. If you follow best practices when creating a monolith, creating well-defined modules with clear input and output boundaries, then it should be relatively easy to split those modules into separate programs and create ways for those modules to communicate, if it later becomes necessary to move towards a micro-services architecture.
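One way to sketch that in-process boundary (all names here are hypothetical): the caller depends only on an interface, so the in-process implementation could later be swapped for a network client without touching the caller.

```python
from abc import ABC, abstractmethod

# The module boundary: callers see only this interface.
class BillingModule(ABC):
    @abstractmethod
    def invoice_total(self, order_id: str) -> float: ...

# In-process implementation used inside the monolith today.
class LocalBilling(BillingModule):
    def __init__(self, orders: dict):
        self._orders = orders

    def invoice_total(self, order_id: str) -> float:
        return sum(self._orders[order_id])

# If billing is later split out, only this class changes -- it would
# make an HTTP/RPC call instead of reading local state.
class RemoteBilling(BillingModule):
    def invoice_total(self, order_id: str) -> float:
        raise NotImplementedError("would call the billing service over the network")

def checkout(billing: BillingModule, order_id: str) -> str:
    # The caller never knows whether billing is local or remote.
    return f"total: {billing.invoice_total(order_id):.2f}"

result = checkout(LocalBilling({"o1": [10.0, 2.5]}), "o1")
```

The same shape works in any language with interfaces; the point is that the split-out decision stays local to one class.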


I think a (but not the only) central conceit of a microservices architecture is that most development teams cannot be trusted to maintain proper separation of concerns, and enforcing at a technical level the inability to call Any Old Function is a big part of what you’re buying into.


This always seemed strange to me. If your team can't be trusted not to make spaghetti in a monolith, what stops them from making distributed spaghetti in microservices? In theory the extra work of making an API call would give you smaller bowls of spaghetti. However, once you add some abstraction to making these calls it seems like developers are empowered to make the same mess. Except now it is slower and harder to debug.


> This always seemed strange to me. If your team can't be trusted not to make spaghetti in a monolith, what stops them from making distributed spaghetti in microservices?

It's far harder to update multiple services to handle requests they should not handle, let alone update a deployment to allow those requests to happen.

Walls make great neighbors, just like multiple services make teams great at complying with an architecture constraint.


> Walls make great neighbors

I think you're trying to solve a communication problem with a technical solution, which is a recipe for trouble.

If multiple teams working on interdependent components can't communicate well enough to keep from stepping on each other's toes, imposing technical barriers probably isn't going to make things better. Especially once you inevitably realize that you put the walls in the wrong place and functionality has to move across borders, which is now a major pain because you've intentionally made it hard to change.


> If multiple teams working on interdependent components can't communicate well enough to keep from stepping on each other's toes, imposing technical barriers probably isn't going to make things better.

But it actually does, and there is a lot of data to prove it. When you have a big project and a bunch of teams, the first thing you build is boundaries / walls. Then you get to defining interfaces between interdependent services. And this frees them up to get hacking on their modules in parallel - without stepping on each others' toes. Communication would definitely have helped, but it is way easier for smaller teams to own and operate their services than to get a big organization to plough through a big mess.

That said, microservices are just one way to solve a problem, and not always the right way. But there is always some place where you look at the problem and the organization tasked with solving it, and they fit just right in.


been there, done that.


> It's far harder to update multiple services to handle requests they should not handle, let alone update a deployment to allow those requests to happen.

I'm not sure I follow this. Doesn't this just mean that it is harder to make changes? Why would it be harder to make bad changes and not harder to make good changes?

> Walls make great neighbors, just like multiple services make teams great at complying with an architecture constraint.

I'm not sure I follow this either. Why would multiple services make teams great at complying with an architecture constraint?


Most compiled languages will allow a project to be split into multiple modules, and an `internal` access modifier can then be used to limit the use of an element to that module.


But what defines "best practices", in this case?


Law of Demeter, constraining responsibility in functions while avoiding tortellini code, clear variable names etc.


The assertion that monoliths are taboo is a bit odd; “monolith first” has been pragmatic best practice for years: https://martinfowler.com/bliki/MonolithFirst.html.


I think this depends on who you listen to, but I'm a huge fan of the monolith and wouldn't break one up unless I had a specific tech need (e.g. some weird library), a performance issue unaddressable in the monolith, or a big organizational people problem, in that order.


The majority of people I work with think that there are only two architectures: monolith and microservices. Microservices is the good architecture; monolith is the bad architecture.

Such a sad trend :( but seems to be very prevalent


Exactly; Sam Newman says the same thing in his book "Building Microservices".

I don't know where people got this idea, but I've noticed that the less experienced someone is, the more noise they make about it.


Microservices is too catchy a name. It somehow implies less complexity to people, but the reality is quite the opposite. Calling them "a lot of little monoliths that we have to keep always running and compatible with each other when updating" would reflect more appropriately what they are. A good analogy is replacing wheels with better ones while driving. In a non-microservice system you stop, change the wheels, and start driving again with the new wheels. With microservices you need to do it while driving, and it's going to be more complicated.


Imo there's a huge distinction between microservices in a monorepo or microservices where there are 100s or thousands of git repos.

Monorepo can be much easier as you can at least verify the given build is consistent.

Monoliths don't completely avoid the problem of version compatibility; they just reduce it to a clearer layer of external clients and compatibility between the (N-1)th and Nth releases. That's certainly worth it in the early stages of a project.


Microservices vs. monoliths is a false dichotomy in the present day.

If you put microservices in a monorepo with a good build tool (like NX), put the common auth/logging/types/dtos/etc. functions in reusable libs, and version/release all the apps together, you get the best of both worlds.

At a small scale, you can deploy your whole containerized stack on two big HA instances (monolithic infrastructure, so much easier). You can leverage common libraries without versioning hell/cross-repo coordination. If you have mediocre developers on the team and very aggressive development timelines, the monorepo structure helps enforce modular organization. If you are a good developer, your workflow gets very, very fast, and you can manage a lot of complexity.

Add: additionally, if you have ever-changing requirements and some are obviously bolt-on one-offs that aren't going anywhere, you can limit damage to the code quality by splitting those features into separate microservices (making it much easier to safely deprecate/remove them in the future).


A monorepo of microservices is the best pattern, but only if you have a dedicated team that keeps the monorepo buildable, builds tooling for it, and enforces best practices and the right culture. If you don’t do that, you will end up with a huge mess — I’ve seen it happen.

For companies that can’t dedicate the resources to do a monorepo properly, a repo per team is the best approach. The true value of microservices is decoupling teams so they can move independently without blocking each other.

Also, needing to release the services together, or in a certain order, is a very bad and unscalable pattern. Teams need to be able to move independently. This requires a commitment to avoiding breaking API changes no matter what kind of repo structure you use — and for the love of God, never let more than one service access a database table! A table should only ever have one service that accesses it, and API boundaries need to be enforced as the only way other services get to the data. Do those things and you will be better off no matter what repo structure you use.
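The table-ownership rule above can be sketched like this (all names hypothetical): only the owning service touches the table; everyone else goes through its API.

```python
# The "users" table, owned exclusively by UserService.
class UserService:
    def __init__(self):
        self._users_table = {}  # private: no other service reads this dict

    # The API boundary: the only way other services reach user data.
    def create_user(self, user_id: str, email: str) -> None:
        self._users_table[user_id] = {"email": email}

    def get_email(self, user_id: str) -> str:
        return self._users_table[user_id]["email"]

class BillingService:
    def __init__(self, users: UserService):
        self._users = users  # depends on the API, not on the table

    def invoice_recipient(self, user_id: str) -> str:
        # Correct: ask UserService. Wrong: reach into users._users_table,
        # which would couple billing to the table's schema.
        return self._users.get_email(user_id)

users = UserService()
users.create_user("u1", "ada@example.com")
billing = BillingService(users)
recipient = billing.invoice_recipient("u1")
```

The payoff is that UserService can change its schema (or move to its own database) without breaking BillingService, as long as `get_email` keeps its contract.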


These are very good points. A few responses:

- The monorepo tooling I prefer at startup scale (NX) does have a learning curve, but it has been pretty easy to maintain using automation (good linting, good build/test pipeline, etc.). For me, learning and leveraging the right monorepo tooling is way easier than having to enforce consistency across large numbers of repos with a small team or watch a low-tooling monolith decay into tightly-coupled spaghetti. I am also in a particular situation where the need for eventual scale is obvious, will come very quickly (clients are big), and the thought of having to rush a scale-inexperienced team (management and devs!) through a monolith-to-microservice migration on a tightly coupled codebase while meeting SLAs is so unpleasant that it makes me a bit sick to my stomach to think about it.

- The monorepo framework/tooling matters a lot. For instance, NX is small-scale friendly and largely can be overseen by senior Typescript devs. Bazel, on the other hand, requires a hefty context switch and has a very steep learning curve.

- You are right that releasing all services together does not scale past a certain point. My point is that microservice monorepos let you pivot quickly between all-together (monolithic) releases and individual app releases as appropriate for your scale. With good caching and parallel blue/green deployments, adding new services to an all-at-once build/deploy pipeline just uses a bit more compute for the pipeline without meaningfully impacting the pipeline run times.

- Having to release services serially in a certain order obviously indicates something is seriously wrong under the hood.

- You are very right about monorepo tooling/best practices, but I would extend that to any project that eventually will need to scale. Someone has to take ownership and enforce good practices. Spaghetti code can be harder to prevent in a feature-heavy monolith and certainly is harder to deal with for me if it is in many different repos (i.e., several monoliths).

- You can have more than one monorepo for teams that are very different or have specific needs (i.e., different languages, mobile apps, etc.).

- I wish I were in a situation where my team could commit to avoiding breaking API changes, but it just is not so because of aggressive development timelines. (I am sure this will change in the future as we scale.) All-together versioning/releasing helps deal with (planned, intentional) breaking API changes very efficiently, because (as long as you can tolerate a failed API call once in a while as a blue/green deployment switches traffic) you're basically performing the deployment as if it is a monolith.

Add: 1000% agree about db access. That is very easy to enforce at the architecture/team level.


> If you put microservices in a monorepo (...)

What exactly do you gain with this approach, other than pinning versions of all microservices?

Any competent team working on services does not suffer from "versioning hell" because APIs are stable and tracked with integration tests and smoke tests, and you version the API and not the code.


It vastly reduces the complexity of things like dependencies, testing, CI/CD, versioning.

Dependencies can be relative paths. Relative path dependencies will break the build on your machine, so you don't get devs pushing broken stuff. CI/CD servers don't get plugged up with billions of repos building because one changed. CI/CD servers don't need to wait for other builds to finish and push artifacts to do their job. Testers don't need to say "I used version 10.2 of x and 11.5 of y", they can have a single version number for the whole stack.

All these things are possible with microservices in different repos. But they require a lot more time to set up and require regular maintenance. If your company isn't large enough to need that, then it makes no sense to move away from a monorepo.

Arguably no company is big enough; Google has all its code in a single 80-terabyte monorepo. But I can understand why that wouldn't scale well either...


> It vastly reduces the complexity of things like dependencies, testing, CI/CD, versioning.

Did it, though?

The only conceivable advantage is version pinning. It makes absolutely no difference in terms of test coverage whether you use a monorepo or not. Dependency-wise, the only thing a monorepo saves you is releasing individual components, which ultimately is not an advantage but a drawback and a liability.

So exactly what's the upside, if any?


what is the upside of separating them?


Exactly. Now it just needs a catchy name so vocal juniors have one word to refer to it; they currently know only two words for architecture: bad monolith and good microservices.


Isn't that just a monolith with super slow function calls?


I think there's an order to abstraction: variables, functions, classes / modules, class libs / packages, microservices, etc. With each bump up comes maintenance overhead, version management, integration testing, monitoring, etc., so it has to be warranted. There are obviously very legitimate and pragmatic cases for microservices, but just as monoliths are a magnet attracting all code to an ever-growing codebase (i.e. it's easier to just add it to the monolith), microservices tend to breed microservices.

Conway's law is also a real thing. Sometimes microservices are a practical choice given team structures and ownership rather than product reasons... likewise the monolith. Sometimes, with smaller team(s), the monolith is the most pragmatic option, as increasing product complexity due to all the above needs reduces the capacity for other work.

Sometimes it's a good thing to embrace Conway's law and run with it, as opposed to discovering it as a side effect.


The whole monolith vs. microservice discussion revolves around a false dichotomy and highly subjective, context-dependent definitions. For example, what if a monolith is integrated into a larger system landscape (e.g. due to an enterprise merger)? Is it still a monolith?


Indeed. Most companies which are cited as 'running the whole system as a monolith' aren't really a monolith. They don't actually run their CMS, their online comment system, their payment processing, their payroll, their recruitment databases, their build pipelines, their bug tracker, and their hardware asset management system, all out of one codebase with a single RDBMS on the backend.

People will laugh at that concept, but go look at how mainframe systems work, and you might be surprised how close some old banking systems are to that level of monolithicness. There really are companies who run essentially all of those systems off a single ERP platform. Or worse, on Sharepoint.


A monolithic architecture isn't about being the one and only compute tier. It's about defining how many distinct things any tier does, which is why microservices go well with partitioned teams. Smaller compute business domains mean specialization and easily divisible work functions.


I think more startups need to consider building monoliths first.

Micro-service architecture definitely has the advantage at scale, but the advantage of monoliths is the speed that you can build and improve them. Microservices require much more planning and architecture discussions compared to monoliths.

Microservices can also have incredible performance advantages over monoliths (being able to scale or tailor each microservice to its specific needs, as opposed to the "one size fits all" of monoliths). So yes, they can be right-sized in your cloud more easily and have tailored performance, but it comes at the cost of complexity in the deployment and management of the system. By comparison, monoliths can sometimes be managed without a dedicated SRE/DevOps/cloud engineering team.

Monoliths tend to be cheaper to deploy (up to a certain point).

Startups should consider building on monoliths more often. For proof-of-concept and MVP tools, monoliths are the way to go. Solo developers are also much better served by monoliths than by microservices. But I will say, if you scale big enough it will eventually make sense to leave monoliths behind and move towards microservices. Yes, there are exceptions (someone will write, "but XYZ company is worth 9 gajillion dollars and runs on a monolith"), but generally speaking I'd recommend starting with a monolith and moving to microservices after you hit market acceptance for your product.


They are better at scale (hundreds of developers or machines needed to run the system).

They are better at avoiding downtime during deployments because everything is built around supporting it - but at huge cost. Offline migrations - which may take just seconds or minutes - are so much easier to do. We're talking about an order-of-magnitude difference in the complexity of adding new features.

In the TypeScript world, a monorepo with shared packages where some are dockerized services (all under the same monorepo-wide version) is more than enough for most projects. The microservice crowd will call it a monolith (because services can't be deployed independently, they likely share a SQL database, etc.), but it's a set of services. You can run it from Docker/Swarm/k8s - scalability is not really a problem here until you're HUGE (hundreds of developers or hundreds of machines to deploy to). Refactoring, adding new features, dropping stuff, etc. is all easy; usually the type checker will guide you to all the places that need changing. A single set of migrations. Straightforward e2e. Fast local development where you can spin up the whole thing, etc. Why do people like to complicate their lives when they can have this?


> Microservices also can have incredible performance improvements over monoliths

At this point it's like "Mongo is Web scale" meme.

It's so false on so many levels without proper context. It's not surprising that inexperienced devs start using microservices for "pet shops" because "microservices are fast", and fail miserably.


During the Covid pandemic, I watched from a distance a group of 6 people start and destroy a project.

After the 6 months, they didn't even have an MVP.

This was run on a 6 month fund from the local government. They didn't know each other but they all knew that after 6 months it'd be over unless they'd secure more funding.

They spent the time faffing around, configuring things, splitting things, ... effectively making things harder for themselves.


That's what we're currently working on.

Only 20% of the work now goes into actually improving the product.


That may be better than 100% of the work going into improving the product at 1/5 the speed it should.


We're still doing 100% of the work but with 20% of the quality.


I 100% agree w/ this and play an enterprise architect type role at my employer but I would totally get lambasted if I proposed this. Honestly a lot of the ppl I work w/ aren't developers and never were. They just have big mouths and climbed the ranks. Sadly they're the ones who write my reviews and could get me canned so I have to play ball.


Been there, done that, caused a burnout after a long time. 0/10, wouldn't recommend. Pay was good though.


what about playing ball elsewhere, the voice your opinion with your feet motif, and leaving the posers to rot?


This is written like it's a novel take bucking the trend, but I feel like most things I have seen on this topic for at least several years now have the same observations and conclusions?

If anyone wants to actually speak up for microservices, I feel like that's what needs a defense at this point!


Microservices to me are solving two problems: an organizational problem and a technical problem. The organizational problem is more obvious: well-factored monolithic codebases go poorly with large, interdependent groups of teams. The technical problem is that while it’s true that you can write a monolith in a modular style, in my experience enforcing the single responsibility principle in a monolith demands a level of discipline that is not feasible to maintain indefinitely in an environment with personnel turnover. When architectural coupling requires making coordinated changes to multiple services, the barrier is high enough that people will tend to avoid it instead.


Wholeheartedly agree. I also feel like people overestimate how much discipline you can collectively have in an organization. And if you create a monolith with hard enough boundaries to enforce modularization, then you might as well create multiple services (and not necessarily the one-microservice-per-entity kind of architecture - just as many services as there are modules).


I think anyone who has taken a large monolith through a significant platform version upgrade or change, like .NET 2 to .NET 4, or Python2 to Python3, or Angular to React, would need a very persuasive argument to make them believe that starting a new project with a monolithic design was a good idea.


I’d tell anyone who thinks that that it’s a very shortsighted view. If they had started with microservices, there’s a good chance the project wouldn’t have been around long enough to even go through that transition.


So don't start with microservices. But also don't start with a monolith!

If you have offline batch processes, don't make the mistake of implementing them in the same codebase as your website just because you already have the DB access and build/deploy tooling set up. If you have an admin portal and a public website that listen on different ports, don't run both listeners in the same process.

'don't build microservices yet' does not have to mean 'start with a monolith'.


Having upgraded both kinds of projects to Python 3, the monoliths were far simpler to deal with.


As someone who has done this, both for version upgrades and for substantial refactors of the codebase, I have to prefer monoliths. Atomic commits are cheaper than adding and releasing version support across a system that doesn't have it.


A colleague of mine is working on one of the worst software systems I have ever seen or heard of. It is a 20+ year old micro-services architecture: more than 50 services, all interacting with each other in mysterious, time-dependent ways. It is almost impossible to debug or reason about. It took him most of a year to figure out how to reliably and automatically build and boot the system from scratch.

I have worked on systems with the same functionality but architected as a single monolith + database. Maintaining that system was a walk in the park by comparison.


I think team size / responsibility is a big part of choosing how much you need to go down the micro service route as well. I've recently launched on a ~2 dev team, and leaning more and more towards monolith the longer we go.

The worst teams I have been on are the ones with many times (~5x) as many repos & running services as developers.

Huge swathes of repos would go without commits, builds or releases for years. Inevitably the next time you had business requirements on that repo, something in CI/CD or environment had changed enough that you had to do a bunch of devops/infra/non-business work to get the plumbing working again to even start the task.

Even in teams where we eventually had nice central utilities & libraries with versioned dependencies, you never had the chance to do enough "catch up" releases to keep all 100 repos on some reasonably recent version of the core libraries. So the cost to activate some new observability function from your central libs was huge because you needed to make small changes on 50+ repos otherwise the observability was pointless since it had low coverage.


You can try to build a monolith that is modular enough to break up later. But I have never seen it happen, and I’ve been around for a while now.

What actually happens, 100% of the time in my personal experience, is that you end up with both the old monolith and new microservices, the monolith never gets fully broken up, and now you need to support two development paradigms forever.

Has anyone here ever seen a monolith be successfully 100% broken up into microservices?


Yes, and I've gone the other way too, i.e. taking a bunch of microservices and rewriting them as a monolith, saving the company over a million dollars a year in the process.

It isn't one or the other, the 'skill' is knowing which to apply when.

To make most of these types of projects work, the sponsor needs to demand an incremental approach that delivers real value, piece by piece, all the way up to retiring the original system(s). Tie-out is also necessary for many projects, and all the associated testing. It is a big grind, which is probably why so many of these projects fail.


I think this is a very solid point, but perhaps the reasoning underlying this type of issue is somewhat more complex. For example, we run what is now v5 of our API as a bunch of microservices, and have been running it as microservices since v4, about 4 years now. These are "children" of v3, our monolith. The monolith is still alive and kicking in a much-reduced state, but the reason we cannot decom it has nothing to do with design, time, or priorities: the monolith is sitting in a data centre and is the only regulatory-permitted way (in our case) to access the mainframe. So it sits there as a gatekeeper and will do so for the foreseeable future...


Not only broken up, but merged again after a while as well.


Why does the "old monolith" need to be broken up?

I'm all for ending old software when the time comes but not everything needs a rewrite to microservices.

Supporting two development paradigms isn't really a thing any reasonably sized organisation should be concerned about.

Especially when you're talking about adding one more build process to hundreds across all your services.


> you end up with both the old monolith and new microservices, the monolith never gets fully broken up, and now you need to support two development paradigms forever.

It's not a bad thing, it's actually a quite pragmatic approach. Some folks call it The Citadel:

https://m.signalvnoise.com/the-majestic-monolith-can-become-...

Building a monolith is the best way to learn about the domain and its boundaries. A monolith is far more forgiving of your mistakes.

Going all in on microservices is like believing you've already learned everything there is to know. Some folks believe that; most of the time they're wrong.


Completely agree! We build our .NET projects in a similar way too.

My current SaaS company has 8 web projects and 1 core project. Some of the web projects are SPAs using BFF and others are APIs for customers or our mobile app. They aren't tiny sites. We're up to ~500 controllers. All in a single repo and automatically deployed as a monolith. We have a super small but very productive team which we attribute to this simple design.


BFF ?


Backends for Frontends. An alternative to making "one API to rule them all". Far less time spent trying to model an API in an abstract way that makes sense to many consumers.

Instead, an API and its endpoints are designed for a specific client (e.g. a mobile app or SPA). For us, this also meant a more RPC-based API, where reusability is managed after the network hop.

https://samnewman.io/patterns/architectural/bff/
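A minimal sketch of the idea (hypothetical names, plain Python rather than our actual stack): the mobile BFF composes calls to internal services into one response shaped exactly for that client's screen, instead of exposing a generic API that every consumer has to adapt to.

```python
# Stand-ins for internal services (hypothetical names and shapes).
def fetch_user(user_id):
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}

def fetch_orders(user_id):
    return [
        {"id": 101, "total_cents": 1999, "status": "shipped"},
        {"id": 102, "total_cents": 450, "status": "pending"},
    ]

# The mobile BFF endpoint: one RPC-style call returning only the
# fields the app's home screen actually renders.
def mobile_home_screen(user_id):
    user = fetch_user(user_id)
    orders = fetch_orders(user_id)
    return {
        "greeting": f"Hi, {user['name']}",
        "recent_orders": [
            {"id": o["id"], "total": o["total_cents"] / 100}
            for o in orders[:3]
        ],
    }
```

The web SPA would get its own endpoint with a different shape; neither client pays for the other's needs.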


thanks


Backend for frontend! Specialized microservices that serve some frontend-consumable APIs e.g. when other internal APIs are too cumbersome to use or change too frequently.


Backend for frontend


thanks


An interesting consequence of microservice architecture is its effect on release management - every deployable always has to be fully [user] tested => so let's reduce what's being deployed.

Microservices make post-release bugfixing a breeze.


I have long running ruby and java apps that I use for this purpose. They have build scripts, test frameworks, etc. When I get an idea for something new I add them to one of these existing projects, but I do it in a way that I can easily excise it in the future.


Curious to hear more. Are these personal projects or serving users? What are some examples of unrelated ideas you’ve tacked on to existing app servers?


It’s mostly enterprise stuff. I have a local rails app and local springboot app that I keep around and am constantly trying stuff out in. If it starts to look like it would be generally useful I will rip it out, put it in its own repo, set it up in a jenkins pipeline, and then have the jar/gem published to an internal artifactory instance. Then I can just include the jar or gem in other codebases as needed. Works great and keeps me from doing a bunch of boilerplate for something which may go nowhere.


Oh I see. So you avoid the boilerplate of setting up a new project and configuring it how you like just to try stuff out. That's smart as that part usually takes people down rabbit holes that distract from the main goal.


Megalith!


The issue is not that microservices are complicated. It's the CI/CD complexity.


Ditto. You should start with a monolith with a vision of how to break it down into microservices (if ever needed).

When my company got a contract for a new project, my colleague created a prototype, a bare-bones solution that had 9 web projects communicating over REST, because microservices. I suggested starting with a monolith. Guess whose design was accepted because it was sexier. It never got to production, for multiple reasons, but part of it was that development was awfully slow. Orchestrating changes across services while development is in flux is extremely hard.


This. I am working on a personal project and definitely initial design and dev is easier in a monolith. Microservices, at the stage my project is at, will bring too much overhead to solve a problem I don't have (scale).


I would go much further:

Don't build a modular monolith first, build a spaghetti monolith first.

You need to not die. That's your first prioritization. A "modular monolith" like the article talks about is still worrying about success. You should worry about failure!



And then keep building on it, because you're most likely part of a project that will never have any requirement that needs anything else.


One of my first ever blog posts actually was about modular monoliths: https://blog.kronis.dev/articles/modulith-because-we-need-to...

On one hand, monoliths will always be simpler to work with, at least until their complexity grows to the point where working with them is a drag on everyone's morale and productivity.

Depending on how they're built, they could still scale horizontally, unless you've written a "singleton" application where having more than one instance and routing traffic between those would break something.
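A toy illustration of the "singleton" trap (hypothetical example, in Python): per-process state silently breaks the moment a second instance takes some of the traffic, whereas state kept in shared storage does not.

```python
# Anti-pattern: a rate limiter with module-level state. Behind a load
# balancer, each instance counts only the requests it happens to see,
# so the effective limit is multiplied by the instance count.
_hits = {}

def allow_in_process(client, limit=3):
    _hits[client] = _hits.get(client, 0) + 1
    return _hits[client] <= limit

# Horizontally scalable variant: the counter lives in shared storage
# (a dict here, standing in for Redis or a database), so every
# instance observes one global total.
def allow_shared(store, client, limit=3):
    store[client] = store.get(client, 0) + 1
    return store[client] <= limit
```

With `allow_shared`, two "instances" handed the same `store` enforce one combined limit; with `allow_in_process`, each enforces its own.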

I'd say that a lot of the benefits of microservices and adjacent approaches are actually cultural: having multiple decoupled components lets you focus on whatever the "main" ones are at any given point in development, mostly doing maintenance work on others, whilst still limiting breakages.

And should it come to pass that some of the more boring components will eventually rot away (e.g. be stuck on JDK 8 or Python 2 or whatever) and will need to be rewritten based on what EOL you might run into, then you can mostly do that without affecting the rest of the system. The opposite also applies - you can upgrade 9 out of 10 components to JDK 11 even when one cannot be upgraded, instead of that one module holding everything else back in some monolithic system.

That said, there are lots of complexities to microservices and it's easy to mess things up - I've seen projects where I'm the first person that figures out that some shared code should actually be a Maven/pip/npm package in Nexus/Artifactory instead of just copying and pasting code across codebases, because nobody has put in the work to set everything up properly before me. Don't even get me started on day 2 concerns like tracing and debugging, which the article touches upon.

Other times, even modular monoliths might run into issues because people aren't good enough with writing decoupled code without too many common dependencies (unless forced by the language and its mechanisms to have proper separation), or aren't up to par with utilizing feature flags and such properly.

I think that at the end of the day, all code rots and eventually is hard to work with, which also affects entire frameworks or even languages: https://earthly.dev/blog/brown-green-language/

Your architecture will sometimes have concrete technical benefits or characteristics to take into account, but a lot of the time most of the actual effects will be organizational - how you will or won't be able to build everything, who will be responsible for what and what risks any particular deployment or version bump will carry.



