Just like in Unix, we see the limits of composability: you can build some very basic, generic tools that are composable and easy to use, the greps and finds of the world. But there are relatively few of them out there, surrounded by a bunch of ugly, hard-to-read, hard-to-reuse glue. That's only natural: each piece of glue isn't used many times, and its task isn't all that easy to define.
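To make that concrete, here's a sketch of the kind of glue I mean (the log file and field positions are made up for illustration): the individual tools compose beautifully, but the pipeline as a whole is one-off plumbing you'd never reuse.

    # The reusable, well-defined pieces: grep, awk, sort, uniq, head.
    # The glue: this exact combination of pattern and field position
    # only makes sense for this one log format.
    grep 'ERROR' app.log \
      | awk '{ print $3 }' \
      | sort \
      | uniq -c \
      | sort -rn \
      | head -n 10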
So, taking this to microservices: while we can build a bunch of little services that get reused everywhere, the large majority of a system will be this kind of glue. You can't make every single piece of code you write as well defined as grep just by giving it a service interface. And while you will be able to track what the little pieces do, the bigger command-and-control pieces will always present a problem. We can't wish complexity away, no matter how hard we try.
So designing a bunch of microservices and hoping most of your problems will be solved is like trying to build something in Unix without Perl and shell scripts. But I see companies, today, that think it's a silver bullet. They've not read Brooks enough.
I don't think you can quite call these patterns of failure mistakes.
They are an effective, reliable way to get certain things up and running in a given time frame and in a decentralized fashion. If you have limited resources and need things working within that time frame, decentralized services can be the right decision even if they give you problems later.
Further, being a fairly reliable way to do things, they have appeal even when your time frame stretches far enough ahead to see the problems coming.
Coming from a .NET background, I had an interesting path. I started with DOS-based imperative programming, then databases, then OOP/OOAD, then finally functional programming with F#.
Once I truly got on the functional programming bandwagon, I started asking myself what all this scaffolding was for. Why didn't I just build composable functions that passed formatted files around?
This is 180 degrees from the way I used to code, but damn, I like it. A lot. I can use the OS as an integration tool, and the entire deploy/monitor/change cycle is a million times easier.
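A rough sketch of the shape I mean, with hypothetical program names: each stage is a small pure transform that reads stdin and writes stdout, and the OS does all the integration.

    # Each stage is its own tiny program: pure transform, stdin in,
    # stdout out. The shell pipe is the composition operator.
    ./parse-orders < orders.csv \
      | ./filter-paid \
      | ./total-by-customer \
      > report.txt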
I wonder how many other OOP guys are going to end up in my shoes in another 10-20 years or so?
Note: I see other commenters are talking about how you can't solve your problems simply by using micro-services. I'd agree with that, with one caveat: if you've coded your solution in pure FP, you've solved your problem in a way that's by definition composable. You can certainly decompose that solution into microservices. I think the question is whether or not you have to "re-compose" them into one app in order to make changes.
This is my intuition too: we can go an awful lot further than might be supposed with reusable components if we are sensible about the interfaces and streams/lists.
If you're writing pure transforms, you're already creating the micro-services. It's just a matter of where they live. But if you start to play fast and loose with imperative programming, sure, you're going to need some industrial-strength glue. Even then it's going to be a mess.
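One way to see "it's just a matter of where they live" (with a hypothetical transform called normalize, and an assumed HTTP endpoint in the last line): the composition doesn't change whether the transform runs locally, over ssh, or behind a service interface. Only the plumbing differs.

    # Same pure transform, three homes.
    normalize < input.json > output.json               # local process
    ssh worker1 normalize < input.json > output.json   # remote, via ssh
    curl -s --data-binary @input.json \
      http://worker1:8080/normalize > output.json      # remote, as a service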
It'd be interesting to have a pure FP language where you could either compile the entire code as one piece, or automatically split it up into chunks and deploy separately. You could keep the code in one place and the only thing you'd need to tweak would be the chunking. (You could also layer in some DevOps on top of that where certain pieces would talk to other pieces on a schedule, or across a wire, and that could be specified in the code. You could even meld this into a puppet/ansible-style system where not only do you code the solution, but you code the deployment as well. Neat idea. Somebody go make that.)
The compensation, or sometimes overcompensation, for our failures leads to interesting outcomes. Perhaps the answer lies not in monolithic vs. micro approaches so much as in knowing what the evolution between the two looks like over time.
> ...composability does not and can not exist everywhere simultaneously. It just won’t scale. Although the flexibility that a composable infrastructure provides is vital during times of rapid innovation, such that pieces can be mixed and matched as desired, it also sticks users with a heavy burden when it’s unneeded.
This is a good reminder. There is a lifecycle. Know when and where it starts, and when it ends.
If you're going to nitpick a strawman this hard, you have to say the whole command should just be replaced with "tail file". But presumably the author intended it as an example where, in real life, there would be options supplied to sed, and those options might change the number of lines, necessitating that the tail come after the sed, not before it.
God, I wasn't nitpicking -- I even said my comment was off-topic. I was throwing advice into the void, for the author if they read it or the commenters here.
And it's fairly important stuff for me on a regular basis. At work we generate hundreds of gigs of logs daily, and doing things in the right order with tail, grep, etc. is often the difference between a script working or not, or between it taking seconds and taking minutes.
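For example (file name made up), the ordering decides how much data gets read, and whether you get the answer you meant:

    # Correct but slow: grep scans the entire file before tail sees a line.
    grep 'ERROR' huge.log | tail -n 20

    # Fast: tail seeks near the end and reads only the last chunk; but
    # NOT equivalent if the matches you want are older than those lines.
    tail -n 100000 huge.log | grep 'ERROR' | tail -n 20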
cat helps with readability, as in having the input at the start. It's premature optimization to complain about it; I'm quite sure any performance difference is negligible.
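That is, these two do the same thing, and the first reads left to right with the input up front:

    cat access.log | grep 'ERROR'    # input first, then the transform
    grep 'ERROR' access.log          # one process fewer, same output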