Oh boy. I'm currently battling with Kubernetes, and I mean it.
Compared to k8s, systemd is simple and easy.
I can wholeheartedly say I hate k8s and its guts.
Everything is so overly complicated.
A bazillion configurations for every single little detail.
And she's a touchy little princess.
Help is hard to find, courses are expensive, and the documentation site isn't great: it kind of explains the components, but not really in detail, and there's no complete configuration reference.
And no matter which setup method you choose, something is always wrong.
It's a tool to drive you to expensive public cloud offerings.
For the small price of only ~$2600 per month you can have your 5-node k8s cluster on GCP, cheap cheap.
Burn money money burn money money burn.
Managing and maintaining k8s is a full-time job.
In comparison, systemd is well documented, and you don't really need to ask people for help.
You can easily use the shell, and you don't have to battle autogenerated nginx configurations that came out wrong, because you wrote your configs yourself and you know what you're doing.
Fleet was cool, but Red Hat bought CoreOS and killed Fleet. Can't have a simple, effective system; it has to be complex and enterprise so you can sell services and tutelage.
Fucking IT people.
Many, if not most, companies don't need the things Kubernetes is designed for, though. It's interesting tech and I can see why people are drawn to it, but I feel like some people pick it more because they want to use Kubernetes rather than it solving a real problem a company or organisation is facing.
To this day, I cannot tell what Kubernetes is designed to do. I hear about it constantly from this website, and based on the conversations you would think it was designed to do anything and everything, and all at the same time.
There's a ton of hype around k8s and tons of people positing it as a solution for everything under the sun. It's not. It's a good building block for everything under the sun, if and when you need the scale of "More than one team that doesn't want to get paged in the middle of the night". A ton of the solutions built on top of it are terrible.
What K8s is good at: running collections of stateless application servers. If you have a dozen copies of the same process that are all identical, K8s is right for you. There's a lot more it can do, but that's the most common case.
If you need to host (hundreds of) thousands of servers across the globe with an uptime indicated by more nines than sense, Kubernetes makes a lot of sense. You still need to write the tool that configures Kubernetes (you can't get away with manually written YAML at scale), but it solves a lot of problems, like "what happens when the master node goes down" and "how do we redirect to a fallback server without overloading the nodes", at the cost of spare, reserved capacity and compute+networking+memory overhead.
If you're hosting things like emergency services support, the extra spend and complexity can be worth it. It can even be worth it as a band-aid if your application isn't particularly stable and you want to increase uptime while you fight for developer capacity to fix the underlying design problems. If all of your IT team already understands Kubernetes, it may even be worth it to run it in scenarios where you want to set up and tear down quick development/test environments, assuming your company doesn't mind spending extra on Kubernetes specialists once the current IT team leaves.
It kind of was designed to be able to do anything and everything if you plug enough components into each other. That's probably why it's complex to the point of unusability; a framework that's designed to support an IRC server network as much as it's designed to support MRI machines is very demanding for the people configuring it.
I think the problematic part is that many people portray it as "just fancy Docker that does most of the work for you". Once the cluster is set up, that's practically what it does, but the first-time setup of Kubernetes is YAML spaghetti hell, and learning about tools upon tools upon YAML.
Yep. Needlessly complicated and completely dev-centric, it loses sight of the goal. Pretty hilarious, as the whole argument for outsourcing infrastructure to developers was to make delivery faster, simpler and easier. What a joke.
CoreOS switched attention to K8s over Fleet fairly early in Fleet's life span, so it never really had the chance to develop into a seriously used tool. Red Hat killed most things CoreOS did after Fleet was dead. etcd lives on, and so does Fedora CoreOS. It's not quite the same distribution CoreOS was, but it follows the same principles and kept some really talented engineers. I wish it got more attention.
Fleet's code is still around if the pain of Kubernetes outweighs the benefits. The problem, in this case, is that systemd is not exactly minimalist, and Fleet built on top of it. I've used it in the past and it felt complex as well, especially when debugging problems.
For self-hosting I've found https://k3s.io, from the SUSE people, to be really good. It works on basically any Linux distro and makes self-hosting k8s not miserable.
I honestly fail to see in what way Kubernetes is poorly documented. It is complex, yes, but just about every aspect I've come across is documented. I think one reason the documentation at kubernetes.io is kept in a rather short format may be to keep it from becoming overwhelming.