> Microservices are a solution to a people problem, not a technical problem. They are a solution when in a large organization groups find it is easier to focus on a small problem space and communicate with other groups less - and also have a bit more room to choose their own path.
Microservices are a solution for more than that. For one thing, they let you isolate components, which is rarely a bad thing. If something is going to die, I would much rather lose a single component of the application than the entire application. The component that failed may not be integral to every use of the application (e.g. perhaps the email-importing portion of a helpdesk app dies while ticket handling keeps working).
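To make the isolation point concrete, here's a minimal sketch (the service name, URL, and helper functions are hypothetical, purely for illustration): the helpdesk front end treats the email importer as a separate service and degrades gracefully if that one component dies, rather than the whole app going down with it.

```python
import urllib.request
import urllib.error

# Hypothetical internal endpoint for the email-importing microservice.
EMAIL_IMPORTER_URL = "http://email-importer.internal:8080/recent"

def fetch_imported_emails():
    """Ask the email-importer service for recently imported messages.

    If that one component is down, the rest of the helpdesk keeps working;
    we just render the dashboard without the imported-email feed.
    """
    try:
        with urllib.request.urlopen(EMAIL_IMPORTER_URL, timeout=2) as resp:
            return resp.read().decode("utf-8")
    except (urllib.error.URLError, TimeoutError):
        # The importer died or is unreachable -- degrade, don't crash.
        return None

def render_dashboard(tickets):
    emails = fetch_imported_emails()
    if emails is None:
        print("Email import is temporarily unavailable.")
    else:
        print("Imported email:", emails)
    print("Open tickets:", ", ".join(tickets))

if __name__ == "__main__":
    render_dashboard(["#101 printer on fire", "#102 VPN down"])
```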
Beyond that, it helps with deployments to some degree. I deal with this all the time: it's much easier to schedule and execute a deployment if I only have to coordinate with one group of people. Each additional person or group adds requirements around the deployment, and more often than not, getting to the deployment itself takes longer because of the amount of communication that has to happen.
> Do them when you have a problem, tighter coupling in your app stack and having less types of images means a much easier deployment, and probably more efficient code with less moving parts.
It also means there's a much greater surface area to investigate when something breaks, although proper logging can help mitigate that. It's easier to find a bug in 2,000 lines of code than in 10,000. The separation of services can also be useful for debugging: if an issue impacts all of your services, it points to something shared, like the OS (if you're using containers or config management) or a library the services have in common. If you get a strange error with little indication of the cause on a more monolithic service, what does that tell you? Not much. The code may be more efficient, but the claim about moving parts is dubious. In total, a monolithic application will most likely have fewer moving parts; per app, however, the count is almost certainly lower with microservices.
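On the logging point, one low-effort way to keep that larger surface area searchable is to tag every log line with the service name, so you can tell at a glance whether an error shows up in one service or across all of them. A minimal sketch, with the service name and format just illustrative:

```python
import logging
import os

# Hypothetical convention: each service/container sets SERVICE_NAME in its environment.
SERVICE_NAME = os.environ.get("SERVICE_NAME", "helpdesk-web")

logging.basicConfig(
    level=logging.INFO,
    # Prefix every line with the service name so aggregated logs can be
    # filtered per service, or scanned for an error common to all of them.
    format=f"%(asctime)s {SERVICE_NAME} %(levelname)s %(message)s",
)

log = logging.getLogger(__name__)
log.info("connected to database")
log.error("TLS handshake failed")  # appears in every service? suspect a shared library or base image
```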
> An argument for things like containerization is often one of utilization. I’ll tell you here that this is a false one. Why? Autoscaling, or even manual scaling. If you build an app stack that has a lot of small parts, yes, you don’t want to dedicate whole instances to those small parts. However, as you incur load as your company grows, eventually these will scale out to want larger instances. In the end, a somewhat more (but not totally) monolithic app will eventually consume a larger instance, and even more, as it scales out horizontally. Eventually you are AT utilization for every component in your stack.
This is true of both architectures. Yes, as your userbase grows, the utilization of your app increases. What this doesn't mention is guest density. Docker relies on the same kernel containment mechanisms as LXC, and [LXC has much higher guest density than KVM](https://insights.ubuntu.com/2015/05/18/lxd-crushes-kvm-in-de...). Most of the results I've seen indicate it's several times the density for low-load guests; Canonical claims it's about 15 times, though as the primary developers I'm skeptical of their numbers.
I can't believe an article about how Docker is bad doesn't even mention the difference in deployment. That's what I've always seen as the advantage of Docker: you don't ship an app, you ship a fully functional, self-contained system. You don't have to deal with annoyances like diverging environments (e.g. somebody installed something in dev but not prod, or some service has slightly different parameters in cert, etc.). It also gives developers a lot more control over their own deployments. The way my company does them, the developers send me a document explaining how to do the deployment, but they're essentially flying blind: nearly all of them can't log in to the prod servers, and the ones who can usually don't have the permissions to read things like config files. Not to mention that it vastly simplifies deployments. Rather than having to know how to deploy 300 different applications, Docker (especially with the tooling) gives you a pretty uniform deployment regardless of what's actually running on the server.
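To illustrate the "uniform deployment" point, here's a rough sketch of what that looks like in practice; the image name, tag, and port are made up, and this isn't anyone's actual deploy script. The same steps (pull the image, replace the running container) apply whether the image holds a Rails app, a Java service, or anything else:

```python
import subprocess

# Hypothetical image and settings -- the point is that the procedure is the
# same no matter what application is inside the image.
IMAGE = "registry.example.com/helpdesk-web:1.4.2"
NAME = "helpdesk-web"
PORT = "8080"

def deploy():
    # Fetch the exact image the developers built and tested.
    subprocess.run(["docker", "pull", IMAGE], check=True)
    # Remove the old container, if any, then start the new one.
    subprocess.run(["docker", "rm", "-f", NAME], check=False)
    subprocess.run(
        ["docker", "run", "-d", "--name", NAME, "-p", f"{PORT}:{PORT}", IMAGE],
        check=True,
    )

if __name__ == "__main__":
    deploy()
```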