
I find each new development in the field of deploying Kubernetes to be grimly humorous. We had lots of techniques to package and deploy things before Kubernetes, but they were complex and inadequate in some ways, and Kubernetes fixes some of those issues. But it turns out Kubernetes itself is substantially more complicated to package and deploy than the old solutions, if you're deploying it yourself rather than using some proprietary cloud. Oops!

What I just can't decide is whether we'll successfully put another layer of container-container-containers on top of Kubernetes, or whether the whole effort will eventually collapse and we'll extract the good parts out into something simpler.




> But it turns out Kubernetes itself is substantially more complicated to package and deploy than the old solutions

Not really. It's pretty much the same level of complexity if you use the same components. You could use all the same tools: Chef, Puppet, Ansible, etc. Once you have it running, though, other applications become easier to deploy.
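For what it's worth, the "same tools" route really is just more configuration management. A minimal sketch with Ansible, assuming kubeadm and hosts that already have the Kubernetes package repo configured (playbook details are illustrative, not a complete install):

    # Sketch only: install and bootstrap a control plane node with kubeadm.
    - hosts: control_plane
      become: true
      tasks:
        - name: Install kubeadm, kubelet and kubectl
          ansible.builtin.apt:
            name: [kubeadm, kubelet, kubectl]
            state: present
        - name: Initialize the control plane if it isn't already
          ansible.builtin.command: kubeadm init --pod-network-cidr=10.244.0.0/16
          args:
            creates: /etc/kubernetes/admin.conf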

At any rate, this tool provides something entirely different. It lets you image the entire data center and reproduce it somewhere else. Not sure how you would've done that before.


A Kubernetes cluster is indeed hard to operate. The techniques we had for deploying things before Kubernetes, package managers for example, don't always translate to operating a distributed system across hundreds or thousands of servers.

Some of this complexity is definitely incidental: the components don't always coordinate well with each other, e.g. Docker and the API server, or the networking layer during upgrades.

On the other hand, a lot of this complexity is essential: K8s is a distributed system with a database, a network, and a container orchestration layer, solving a hard problem.


> we'll extract the good parts out into something simpler

It's already been done and a couple of solutions exist: Docker Swarm and Nomad, depending on your stack's complexity.

The problem you've described arises when you use the wrong tool for the job, e.g. trying to use Kubernetes for simple projects with small teams and no dedicated ops or SRE team.

When you have large and complex infrastructure (e.g. GitLab) with complex networking and load balancing, which is where Kubernetes-like tools bring the most value, you actually win by making your ops team work like Uber drivers (a little bit exaggerated): making standard decisions in a standard environment. You just check the licence (certifications) and your infrastructure just works. No need for customised solutions anymore.


Kubernetes isn’t just a way to deploy your apps, though.

It’s also doing a ton of orchestration that many people simply weren’t doing before, or had a human at a keyboard doing it. There’s a lot of value in that.

All of the packaging and deployment tooling is moving very quickly because the variety of organizations using k8s is growing fast, and they keep finding new use cases for it.

Things like application distribution were never a focus of Borg, because Google doesn’t really distribute applications.


> We had lots of techniques to package and deploy things before Kubernetes, but they were complex and inadequate in some ways

They worked very well for 30 years. And the new container-based tools inevitably end up making the same mistakes and rediscovering the same solutions, until we're back where we started.

> turns out Kubernetes itself is substantially more complicated to package and deploy than the old solutions

Exactly, and you can't solve a problem by making it more complex.


> Exactly, and you can't solve a problem by making it more complex.

Sure you can. A resource scheduler is more complex than not having one, but it solves the single-point-of-failure problem and the bin-packing CPU+memory problem.

A more complex infrastructure means you can have dumber apps.

And there are lots of areas where this is true: TCP is a complex protocol which makes it easier to build reliable communication, CPUs have complex caches which make simple code faster, RAID makes multiple disks behave like a single disk to improve reliability or performance, compression is very complex (esp for audio/video) but dramatically reduces the size...

The implementation of kubernetes may be flawed, but the idea of kubernetes makes a lot of sense. It solves real problems.


> A resource scheduler is more complex than not having one, but it solves the single-point-of-failure problem and the bin-packing CPU+memory problem.

And yet various FAANGs choose not to use a smart scheduler, because it does not improve efficiency and reliability enough to justify its complexity, and it scales poorly.


Kubernetes was based on Google's Borg [1].

Netflix uses Titus and Mesos [2].

I'm not sure what Amazon uses, but I'm sure they have some sort of system to do this. They offer plenty of managed solutions for customers (including EKS).

Apple's more of a product company, but they seem to use Kubernetes for some things [3].

And finally facebook apparently has something called Tupperware [4].

So all the FAANGs use something like Kubernetes to manage infrastructure.

[1] https://en.wikipedia.org/wiki/Borg_(cluster_manager)

[2] https://queue.acm.org/detail.cfm?id=3158370

[3] https://jobs.apple.com/en-us/details/200120515/virtualized-c...

[4] https://www.slideshare.net/Docker/aravindnarayanan-facebook1...


I worked on some of the technologies you mentioned but I cannot disclose more.

> So all the FAANGs use something like Kubernetes to manage infrastructure.

No, you cannot just hand-wave that they are "like Kubernetes".


I find all the complexity quite unnecessary. I think the "package managers" attempt to do too much, handling some sort of use case that doesn't affect me. My highest success rate with randomly deploying someone else's software comes from software that just provides a bunch of API objects in YAML form that you just apply to the cluster. My second highest success rate comes from just writing my own manifest for some random docker image. Finally, operators tend to do pretty well if the development team is sane.
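For the "write my own manifest for some random docker image" case, it's usually not much more than this (name and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: some-app                          # placeholder name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: some-app
      template:
        metadata:
          labels:
            app: some-app
        spec:
          containers:
            - name: some-app
              image: example.com/some-app:1.2.3   # the random docker image
              ports:
                - containerPort: 8080

kubectl apply that, add a Service if it needs one, and you're mostly done.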

At this point, I think the fundamental problems are:

1) People desire to give you a "works out of the box" experience, so they write a packaging system that can install everything. The app depends on Postgres? Fuck it, we'll install Postgres for you! This is where things start to go wrong because self-hosting your own replicated relational database instance is far from trivial, and it requires you to dial in a lot of settings. And, of course, it requires even more settings to say "no no, I already have an instance, here is the secret that contains the credentials and here is its address."

2) Installing software appears to require answering questions that nobody knows the answers to. How much CPU and memory do I give your app? "I dunno we didn't load test it just give it 16 CPUs and 32G of RAM, if it needs more than that check your monitoring and give it more." "I only have 2 CPUs and 4G of RAM per node." "Oh, well, maybe that's enough or maybe it isn't. It won't blow up until you give it a lot of users though, so you will get to load test it for us while your users can't do any work. Report back and let us know how it goes!"
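Both problems tend to surface in the same place: the values file of whatever chart or package you're installing. A sketch of the kind of thing you end up writing (key names here are made up for illustration, not from any particular chart):

    # Hypothetical values.yaml -- illustrative key names only.
    postgresql:
      enabled: false                        # "no no, I already have an instance"
    externalDatabase:
      host: db.internal.example.com
      existingSecret: app-db-credentials    # secret containing the credentials
    resources:
      requests:
        cpu: 500m                           # a guess until someone load tests it
        memory: 512Mi
      limits:
        cpu: "2"
        memory: 2Gi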

I also noticed that when security people get at the project, it tends to become unusable. I used to be a big fan of Kustomize for manifest generation. Someone decided to build it into kubectl by default, and that it should support pulling config from random sites on the Internet. So now if you use it locally, you can't refer to resources that are in ../something, because what if a remote manifest specified ../../../../../etc/shadow as the config source? Big disaster! So now it doesn't work. (They also replaced what I thought was the best documentation in the world, a kustomization.yaml file that simply used every available setting (with comments), with a totally unusable mass of markdown files that don't tell you how to use anything.)

Obviously security is a problem, but they should have said "just git clone the manifest yourself and review it" instead of "you can't use ../ on your local manifests that are entirely written by your own company and exist all inside the same git repository that you fully control". But they didn't, and now it sucks to use.
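For anyone who hasn't hit this: the failure mode is a kustomization that loads a file from a parent directory, which the default load restrictor now rejects. Something like this (layout is illustrative):

    # overlays/prod/kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - deployment.yaml
    patchesStrategicMerge:
      - ../../common/labels-patch.yaml   # rejected: file is outside the kustomization root

Standalone kustomize has a --load-restrictor LoadRestrictionsNone flag to turn the check off (at least in recent versions; check yours), but as far as I know there's no clean way to opt out of it in the copy built into kubectl.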



