Putting on my Asbestos Longjohns: The more I look into the k8s ecosystem, the more I'm convinced that it's one of those things that suits FAANG etc., but the regular Joe developer has caught on to the fad and wants to add it to his repertoire, even though it's overkill. After all, no one ever got fired for buying IBM and recommending Kubernetes. Most teams need the simpler deployment strategies that others have succinctly described elsewhere on this page.
Kubernetes solves a very real and significant problem. But before you start using it, make sure you have the problem it solves.
If you're looking to have your small app eventually grow into a large one, read up on K8s and just make sure you're not blocking future-you from making your app work on it. E.g., work well in a container (which is useful for automated testing, deps management, etc.), have a simple 'ping' endpoint to confirm the app is up, have a better config story than "recompile to change these variables", use a logging library, and tolerate the other services you depend on sometimes being down.
All useful things for a grown-up app to do anyways, all a bit of a PITA, and all better than trying to operate an app that doesn't do them.
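For illustration, here is a minimal sketch of how those habits pay off later - a pod spec that leans on exactly them. The image name, port, and /ping path are placeholders, not anything from a real app:

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # hypothetical image built from your container
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /ping          # the simple "is it up?" endpoint mentioned above
              port: 8080
            periodSeconds: 10
          env:
            - name: LOG_LEVEL      # config via the environment instead of recompiling
              value: "info"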
Exactly this. Kubernetes is a service orchestrator, not a hosting platform. Those getting caught up in it just want a hosting platform, but those getting value out of it want a service orchestrator.
If you have one monolithic backend service (and most web applications really should start out this way), Kubernetes offers almost no benefits over alternatives.
Putting on my tinfoil hat: it suits FAANG to have potential competitors burn their runways on baroque tech fads like Kubernetes or <insert-react-state-management-architecture-of-the-month-here>. Extra credit if they end up hosting their overly complex solution on your platform.
I was once told that development teams smaller than 20 developers have no business using k8s, due to the complexity it brings. If something as essential as the infra is so complex that it is not readily understood by everyone on the team, a few (more than one) team members need to become the experts on the matter. For small teams this is simply not worth it.
As part of a small team currently using Kubernetes, I suspect it’s more about how you use it - the tools and ecosystem have matured immensely in the couple of years since I first started using it.
I don’t think it suits all teams and use cases, but for us it’s absolutely fantastic and without going down the rabbit-hole of cloud-provider specific tools and recreating half the issues it solves, I’m not super sure what we’d use.
Agreed. I'm a solo technical founder and have been using k8s for all my hosting for 3+ years. It's so easy (for me) that I'm fine paying a premium for the managed service (GCP) since it saves me lots of time, my most valuable resource.
I've already climbed most of the learning curve so YMMV, but as a team of one running dozens of WordPress, MySQL, and bespoke app servers, Kubernetes makes ops manageable so I can spend time on things that really matter.
Deploying new web apps is trivial, declarative manifests are easy to reason about, TLS certs are issued and renewed automatically (cert-manager), backups are cheap and reliable (daily GCP snapshots), making changes to the cluster via declarative terraform is a breeze, etc etc. No way I could manage all the ops without leaning so heavily on the core foundation provided by k8s.
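To make that concrete, this is roughly what one of those declarative manifests looks like - a sketch of an Ingress that cert-manager picks up to issue and renew the TLS cert; the issuer name, host, and backend service here are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: example-site
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes a ClusterIssuer with this name exists
    spec:
      tls:
        - hosts:
            - example.com
          secretName: example-site-tls   # cert-manager creates and renews this secret
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: example-site
                    port:
                      number: 80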
As in - a single workload that can't fit on a single physical machine? No I don't, although I certainly could if I needed to. Most of my workloads are either low-traffic WP sites or bespoke web-based business tools for clients with very bursty traffic.
Most of the value I get from k8s is the hands-off nature of it - I get slack notifications (prometheus+alertmanager) if anything is happening I need to address (e.g. workload down, node down, API not responding, etc). Otherwise I can safely ignore my cluster and know everything's good. Spinning up a new WP site takes 10m with backups, TLS, monitoring, etc built in.
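For anyone curious, the Slack piece is just a small bit of Alertmanager configuration along these lines - a sketch only, with the webhook URL and channel obviously made up:

    route:
      receiver: slack-notifications
      group_by: [alertname]
    receivers:
      - name: slack-notifications
        slack_configs:
          - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook URL
            channel: '#ops'
            send_resolved: true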
I’ve run a production kubernetes cluster that was hosting a DGraph cluster of 3 machines on its own, some ML workloads, and 4-5 products (each consisting of multiple services) and that was more than a single machine would have been able to handle.
Well _technically_, sure, we could have run a bunch of those products on a single machine, but there goes your durability, and the memory overhead on some of them was quite high; properly fitting them onto a single machine would have required more optimisation and technical skill than the devs I was working with had or were inclined to apply.
Definitely; if you as a company want to do Kubernetes or even cloud services (beyond an easy managed service like Beanstalk or GCE), you need to have a dedicated expert on it. Or, more abstractly, one full-time unit (which can be distributed). If it's some guy's part-time hobby it will not work.
I think this is what went wrong with k8s. I saw lots of interest from hobbyists, and from people proposing k8s in small teams. It became common to hear "you do containers in production? use k8s!" That's just a big disappointment waiting to happen.
If something as essential as the infrastructure is so complex that you need a dedicated expert on it, it's bad infrastructure. To take an offhand analogy, you don't need a dedicated highway maintenance engineer in order to drive your car.
I think Kubernetes in principle gets a lot of things very right - but it has over time grown into this huge amorphous blob of complexity that makes it very easy to shoot yourself in the foot with, as many people said :)
That issue is not endemic to Kubernetes, but rather to any larger system past a certain age: you learn stuff as you go along and would do things differently if you started again today - but you can't easily, because you cannot break compatibility for everybody using your stuff.
As a concrete example from the Kubernetes world, there is a talk by Tim Hockin [1] about how today, they would fundamentally design the api-server differently and base pretty much everything on CRDs.
The industry and the k8s project are still figuring out the right way to do things that don't require the organization, size, and technical choices Google made.
A friend of mine is a contributor to k8s itself, and of course, this all comes incredibly easy to them. Following their recommendation, I gave it a shot for my single-person, single-node (!) homelab, all without using MicroK8s, k3s or similar.
After a week of almost full-time work, I threw in the towel. Admittedly, I also had to learn concepts like reverse proxies alongside, too, so I was by no means well-equipped to begin with.
Yet, tossing together some docker-compose.yml files and "managing" them with a Python script has worked very well. Kubernetes really scarred me in that sense, but I have healed! Also, Caddy has helped me actually enjoy configuring the webserver.
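For a sense of scale, the compose files are nothing fancy - roughly this shape, with Caddy as the only thing exposed to the network (service names, images, and paths here are invented for illustration):

    services:
      caddy:
        image: caddy:2
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile   # the hand-written config mentioned above
          - caddy_data:/data
      app:
        image: ghcr.io/example/app:latest      # placeholder app image
        expose:
          - "8080"                             # reachable by Caddy, not by the internet
    volumes:
      caddy_data: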
Ah, ok. For single-node homelab setups I just throw everything on hostNetwork; second choice is NodePort (if there are port conflicts). In general, k8s ingress on bare metal requires a deeper understanding of its network design.
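Roughly, those two shortcuts look like this (names and images are placeholders; pick one or the other, not both):

    # Option 1: hostNetwork - the pod binds host ports directly.
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      hostNetwork: true
      containers:
        - name: web
          image: nginx:stable
    ---
    # Option 2: NodePort - useful when host ports would collide.
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80
          nodePort: 30080   # reachable on <node-ip>:30080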
I would (probably) spin up an ingress-controller on ports 80 and 443 using hostNetwork, then use Ingresses from then on. As it's a single-node cluster, I'd just create a wildcard DNS A record pointing at the IP the ingress-controller is running on (and possibly an anchor record for other CNAMEs to point at, depending on the DNS server).
Does mean that anything that upsets the ingress controller is an outage, but for experimentation, that's probably OK.
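Once the controller and the wildcard record are in place, each new service is just another Ingress along these lines (the class name, hostname, and backend are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: grafana
    spec:
      ingressClassName: nginx              # assumes an ingress-nginx controller on hostNetwork
      rules:
        - host: grafana.lab.example.com    # any *.lab.example.com already resolves to the node
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: grafana
                    port:
                      number: 3000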
Yeah, of course running random docker compose files and containers from the internet and blissfully exposing your mongodb or whatnot service unsecured to the whole world seems like an easy, non-complicated alternative. Kubernetes has a few shitty defaults, like mounting a service account token into every pod by default or allowing pod image tags to be mutated, but most of the functionality it provides is a must-have when you actually care about your SLA. Rolling updates with health checks and a configured back-off time? Separate ingress for OAM and live traffic with automatic HTTPS, etc.? I could go on.
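The rolling-update point, for example, is a few lines of Deployment spec - roughly the following sketch, with names and image invented; new pods have to pass their health check before old ones are taken away:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api
    spec:
      replicas: 3
      minReadySeconds: 10        # a new pod must stay Ready this long before the rollout continues
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0      # never drop below the desired replica count
          maxSurge: 1
      selector:
        matchLabels:
          app: api
      template:
        metadata:
          labels:
            app: api
        spec:
          containers:
            - name: api
              image: registry.example.com/api:2.0.0
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 8080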
I am talking about a homelab, a single server at home, for home use. It's much safer now, with Docker compose, because I understand it and I wrote the core exposed part's configuration, the Caddyfile, myself, manually. I know exactly what's exposed, and it's exactly right the way it is!
The remaining risk comes from the services themselves having security holes, but k8s has that very same risk.
From my experience it's actually sold as a simpler alternative to other infra provisioning. So you end up with situations where a team deploys whatever with a helm chart, and it sets up the stuff like magic and they build on it. Then when something goes wrong they literally have no idea how to fix anything and it becomes a waking nightmare.
As a longtime and frequent user of k8s, I stay away from helm charts. I tried them out when they first got popular but I found they introduced more friction than they solved on the whole.
Not every addon/tool for the k8s ecosystem is worth it. I also don't bother with the ever-growing list of service meshes... not enough value to me for the overhead.
K8s is definitely the simpler alternative for me but there is still a lot of essential complexity in k8s due to the nature of the problems it's trying to solve. Mostly I like building on top of a solid foundation of standardized k8s API objects (pods, services, volumes, etc).
tl;dr: Bring in only the add-ons and tools you really need so you don't add more complexity than necessary. Don't get swept up in the hype and marketing from other devs and cloud vendors.
This is such a great point and so frequently skimmed over in k8s discussions. We as tech folks tend to focus on the front page blog posts about 1000s of nodes and all of the orchestration that goes into complicated top 1% high-traffic/high-complexity use-case setups. In reality there are a lot of profitable businesses out there happily running a simple cluster set up with a single-digit number of deployments chugging along on it with zero down-time.
Really, when looking at tools in the k8s ecosystem, it's better to approach them as you would importing a new library into your application. Most decent devs wouldn't blindly import a new lib just so they can copy/paste a single line of code they found online for a business-critical function, and k8s tools should be no different. We must ask what value a given tool brings, and whether it's worth the cost of learning and maintenance. Sometimes the answer is a resounding "yes", but too often the question isn't even asked.
I like Kubernetes. I don't overly like Helm charts because, yes, they work, but you can install one without having to think about what it's putting in your cluster.
> The more I look into the k8s ecosystem, the more I'm convinced that it's one of those things that suits FAANG etc., but the regular Joe developer has caught on to the fad and wants to add it to his repertoire, even though it's overkill.
You are absolutely spot on because this is how not to pass the behavioral interview for Engineering Manager.
> but the regular Joe developer has caught on to the fad and wants to add it to his repertoire, even though it's overkill
There are very strong financial incentives for every individual developer and sysadmin to adopt Kubernetes, regardless of the impact it has on the organisation as a whole. In a sense, this is engineering reaching the level of corporate maturity of the sales department, who will optimise everything for their commission regardless of the organisation's ability to deliver it at a profit, or even at all.
I'm sure there's a name for this phenomenon. Companies want stable software, and Regular Joe wants better pay, but companies won't pay more unless Joe starts doing crazy complex stuff that complicates things further.
Regular Joe learns complex stuff at your expense. He then leaves for greener pastures and higher pay thanks to the boost to his resume. You are then left with complex stuff you need to maintain and so you have to hire another Regular Joe for a higher salary than your first Regular Joe.
> There are very strong financial incentives for every individual developer and sysadmin to adopt Kubernetes, regardless of the impact it has on the organisation as a whole.
Then that organization is doing a terrible job of aligning incentives. I'm guessing their pay structure isn't terribly merit-based, nor high enough to keep people from constantly thinking about other jobs.
If this is about FAANG (your comment wasn't, but others were), perhaps part of this is exposing larger problems in many smaller orgs. (note: I'm ex-FAANG and happily so)
Sorry to have to be the one to tell you: sometimes architectural decisions are driven by factors other than YAGNI. Right now you have throngs of young developers paying $50k+ a year for the privilege to learn how to use Docker and Kubernetes while in college, and when 90% of them inevitably get rejected from FAANG after graduation, you'll be able to hire them on the cheap and entice them with development stacks they're comfortable with.
In my opinion, k8s starts to shine when you have to manage hundreds of containers. When you have just dozens of them it's overkill, but there's no way to smoothly slot in another solution between "docker-compose up -d" and spinning up a k8s cluster: you will (or think you will) hit a maintainability ceiling again and have to migrate to k8s.
There actually is: HashiCorp Nomad fits solidly in between those two options.
Nomad is way simpler to get a cluster up and running with, has a great configuration syntax (I'll take HCL over YAML any day), and has first-class Terraform/Consul/Vault integrations.
Onboarding devs is fairly straightforward, if they can write a docker-compose.yml, it's an easy transition to a nomad job specification.
It took me, by myself, ~4 months to get our current hashistack (Vault/Consul/Nomad) stood up using Terraform+Ansible. Two members of my team have been working to replace the hashistack with a self-hosted k8s deployment; they just went past the 1-year mark and we still do not have something capable of hosting the workload currently running on the hashistack.
This got a little long-winded, but I feel like this "it's docker compose or k8s, take your pick" mentality has led to a bunch of needless time being spent by smaller teams/companies on solutions that just aren't right for them.
What I think K8s (EKS, GKE, DO hosted environments at least) provides is a nice way to integrate things like GitLab. This gives you a really easy-to-use CI/CD pipeline for very little work and configuration, which lets you deploy production from your main branch and spin up feature branches that can be tested by the people who requested the feature very easily. This requires no additional effort once the system is set up.
Also, you can get red/green deployments and rolling deployments with little to no effort, which can be very nice to have.
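As a rough sketch (not a drop-in config), a deploy job in .gitlab-ci.yml can be as small as this, assuming the runner already has cluster credentials (e.g. via GitLab's Kubernetes integration); the image, paths, and deployment name are placeholders:

    stages:
      - deploy

    deploy_production:
      stage: deploy
      image: bitnami/kubectl:latest          # any image with kubectl would do
      script:
        - kubectl apply -f k8s/              # declarative manifests kept in the repo
        - kubectl rollout status deployment/my-app
      environment:
        name: production
      rules:
        - if: $CI_COMMIT_BRANCH == "main"    # deploy production from the main branch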
I think that an important distinction is between deploying on k8s and operating it. For a small team (not measured in the dozens), the latter is unaffordable but the working style of the former is still powerful.
This feature helps a lot with that problem by bringing GCP closer to where AWS has been with Fargate. k8s will still be more work than using AWS ECS but it might also be preferable if you dislike using the provider’s components and want the control of, for example, doing your own load balancing and storage management.
k8s and its ecosystem represent a data center in software. Data centers are fairly complex constructs. It is then to be expected that this complexity will shine through in k8s' API, UI, UX. k8s' main mission seems to be to provide a complete digital data center, not an easy to use one, and I would argue that that is exactly the right choice. Over time, as the core of the beast is figured out, there will be (as there have been) more and more opportunities taken to actually help users navigate that complexity and/or resolve it into more natural and less error-prone interfaces. But in the meantime it seems like it's mostly on the community to provide (usually temporary) solutions for the most pressing usability concerns.