Kubernetes is our generation's Multics (oilshell.org)
589 points by genericlemon24 on July 21, 2021 | 538 comments



I'd be curious what a better alternative looks like.

I'm a huge fan of keeping things simple (vertically scaling 1 server with Docker Compose and scaling horizontally only when it's necessary) but having learned and used Kubernetes recently for a project I think it's pretty good.

I haven't come across too many other tools that were so well thought out while also guiding you through how to break down the components of "deploying".

The concepts of a pod, deployment, service, ingress, job, etc. are super well thought out and flexible enough to let you deploy many types of things, and the abstractions are good enough that you can also abstract away a ton of complexity once you've learned the fundamentals.
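To make that concrete, here's roughly what the raw objects underneath look like: a Deployment that manages the pods and a Service in front of them (the names and image below are made up). The Helm chart mentioned next basically just templates these out for you:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp                  # hypothetical app name
    spec:
      replicas: 3
      selector:
        matchLabels: {app: myapp}
      template:
        metadata:
          labels: {app: myapp}
        spec:
          containers:
            - name: web
              image: registry.example.com/myapp:1.0   # made-up image
              ports: [{containerPort: 8080}]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector: {app: myapp}
      ports: [{port: 80, targetPort: 8080}]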

For example, you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart. That's complete with running DB migrations in a sane way, updating public DNS records, SSL certs, CI/CD, live-preview pull requests that get deployed to a sub-domain, zero-downtime deployments and more.


> once you set up a decently tricked out Helm chart

I don't disagree but this condition is doing a hell of a lot of work.

To be fair, you don't need to do much to run a service on a toy k8s project. It just gets complicated when you layer on all the production-grade stuff like load balancers, service meshes, access control, CI pipelines, o11y, etc. etc.


> To be fair, you don't need to do much to run a service on a toy k8s project.

The previous reply is based on a multi-service production-grade workload. Setting up a load balancer wasn't bad. Most cloud providers that offer managed Kubernetes make it pretty painless to get their load balancer set up and working with Kubernetes. On EKS with AWS that meant using the AWS Load Balancer Controller and adding a few annotations. That includes HTTP to HTTPS redirects, www to apex domain redirects, etc. On AWS it took a few hours to get it all working, complete with ACM (SSL certificate manager) integration.
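Roughly what that looks like on the Ingress side, give or take (annotation names are from memory, so double-check them against your controller version; the host and certificate ARN are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp                                      # hypothetical
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
        alb.ingress.kubernetes.io/certificate-arn: <your ACM cert ARN>
        alb.ingress.kubernetes.io/ssl-redirect: '443'  # HTTP -> HTTPS
    spec:
      rules:
        - host: www.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80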

The cool thing is when I spin up a local cluster on my dev box, I can use the nginx ingress instead and everything works the same with no code changes. Just a few Helm YAML config values.

Maybe I dodged a bullet by starting with Kubernetes so late. I imagine 2-3 years ago would have been a completely different world. That's also why I haven't bothered to look into using Kubernetes until recently.

> I don't disagree but this condition is doing a hell of a lot of work.

It was kind of a lot of work to get here, but it wasn't anything too crazy. It took ~160 hours to go from never using Kubernetes to getting most of the way there. This also includes writing a lot of ancillary documentation and wiki style posts to get some of the research and ideas out of my head and onto paper so others can reference it.


o11y = observability


You couldn't create a parody of this naming convention that's more outlandish than the way it's actually being used.


Yes you can! Accessibility gets abbreviated to a11y, which is about as inaccessible as it gets.


Only if you've never seen it before. The word "accessibility" is incredibly inaccessible to non-native speakers and native speakers with learning disabilities or dyslexia. There are some double characters in there, but which ones? Also, it sounds like there's an "a" or "uh" sound in there, but somehow it's all "i"s except one is an "e"? "a11y" is four letters (well, two of them are digits but who's counting?) and clearly refers to one particular concept.

Likewise "i18n" (internationalization/internationalisation) and "l10n" (localization/localisation) avoids confusion of whether it's "ize" or "ise", which is literally the problem those concepts try to solve.

I can somewhat excuse "k8s" with "nobody can remember how kubernetes is spelled, let alone pronounced" (Germans insist on pronouncing the "kuber" part the same way "kyber/cyber" is pronounced in other Greek loanwords, with a German "ü" umlaut), but I admit that one is a stretch, and "visual puns" like "k0s" ("minimal", you see?) and "k3s" (the digit 3 looks like half of an 8 so it's "lightweight", right?) are a bit beyond the pale for me.


>The word "accessibility" is incredibly inaccessible to non-native speakers

There are at least a dozen languages where the English word "accessibility" translates to the same word spelled slightly differently.


I'm not sure what your point is. I qualified my claim very explicitly and what you said doesn't contradict any of it.

I'm not saying it's difficult to understand. I'm saying it's an unwieldy word and "a11y" is easier to remember and write correctly.


You specifically called it out as being "inaccessible" (ie, difficult to understand) to non-native speakers (of English).

Also, "a11y" looks too much like the English word "ally". That, IMO, is more likely to cause reading difficulties, particularly with non-native speakers and people with dyslexia.


o11y? In my head it sounds like it's a move in "Tony Hawk: Pro K8er"


It's the Wingdings of naming conventions.


You don't like n7ms?


I was originally confused because I thought the debugger `ollydbg` was being referenced.

https://en.wikipedia.org/wiki/OllyDbg


You still have to do all that production-grade stuff; K8s just creates a cloud-agnostic API for it. People can use the same terms and understand each other.


> That's complete with DB migrations in a safe way

How?! Or is that more a "you provide the safe way, k8s just runs it for you" kind of thing, than a freebie?


Thanks, that was actually a wildly misleading typo haha. I meant to write "sane" way and have updated my previous comment.

For saFeness it's still on us as developers to do the dance of making our migrations and code changes compatible with running both the old and new version of our app.

But for saNeness, Kubernetes has some neat constructs to help ensure your migrations only get run once even if you have 20 copies of your app performing a rolling restart. You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete. This translates to only 1 pod ever running the migration while other pods hang tight until it finishes.
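Stripped way down, the shape of it looks something like this (image names, commands and timeouts are made up, and I'm glossing over how the Job itself gets created as part of the release, e.g. via a chart hook):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: myapp-migrate                        # hypothetical
    spec:
      backoffLimit: 0
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: registry.example.com/myapp:1.4.2   # same image as the app
              command: ["./manage", "migrate"]          # whatever your stack uses

Then the app's pod template gets an initContainer that blocks until the Job completes (here with `kubectl wait`, but a small polling loop works too):

    # in the app Deployment's pod spec
    initContainers:
      - name: wait-for-migrations
        image: bitnami/kubectl                   # any image with kubectl on it
        command:
          - kubectl
          - wait
          - --for=condition=complete
          - --timeout=300s
          - job/myapp-migrate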

I'm not a grizzled Kubernetes veteran here but the above pattern seems to work in practice in a pretty robust way. If anyone has any better solutions please reply here with how you're doing this.


Hahaha, OK, I figured you didn't mean what I hoped you meant, or I'd have heard a lot more about that already. That still reads like it's pretty handy, but way less "holy crap my entire world just changed".


> You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete.

A much simpler way is to run the migration in the init container itself. Most SQL migration frameworks know about locks and transactions, so concurrent migrations won't run anyway.


I thought about doing that for a while too.

I think the value in the init+job+watcher approach is you don't need to depend on a framework being smart enough to lock things which makes it suitable and safe to run with any tech stack worry free. It also avoids potential edge cases if a framework's locking mechanism fails, and an edge case in this scenario could be really bad.

But it does come at the cost of a little more complexity (a 30-line YAML Job plus ClusterRole/ClusterRoleBinding resources for the watcher's RBAC); fortunately that's a one-time thing to set up.
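For reference, the RBAC side of it is small, something like this (names are made up; a namespaced Role/RoleBinding works too if the Job lives in the app's namespace):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: job-watcher                # hypothetical
    rules:
      - apiGroups: ["batch"]
        resources: ["jobs"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: job-watcher
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: job-watcher
    subjects:
      - kind: ServiceAccount
        name: myapp                    # the app's service account
        namespace: default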


It's simpler than that for simple scenarios. `kubectl run` can set you up with a standard deployment + service. Then you can describe the resulting objects, save the YAML, and adapt/reuse it as you need.


> For example you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart.

I understand you might outsource the Helm chart creation, but this sounds like a big oversimplification to me. Then again, maybe I'm spoiled by running infra/software in a tricky production context and I'm too cynical.


It's not too oversimplified. I have a library chart that's optimized for running a web app. Then each web app uses that library chart. Each chart has reasonable default values that likely won't have to change, so you're left changing only the options that vary per app.

That's values like the number of replicas, which Docker image to pull, resource limits and a couple of timeout related values (probes, database migration, etc.). Before you know it, you're at 15ish lines of really straightforward configuration like `replicaCount: 3`.
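The exact keys depend on how your library chart is written, but the per-app values file ends up on the order of this (everything here is illustrative):

    # values.yaml for one app; the library chart fills in the rest
    replicaCount: 3
    image:
      repository: registry.example.com/myapp    # made-up registry/app
      tag: "1.4.2"
    resources:
      requests: {cpu: 250m, memory: 256Mi}
      limits: {memory: 512Mi}
    probes:
      readinessPath: /healthz
      timeoutSeconds: 5
    migrations:
      timeoutSeconds: 300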


> I'd be curious what a better alternative looks like.

https://github.com/purpleidea/mgmt/

It's just not finished yet. With < 0.01% of the funding kube has, it has many times more design and elegance. Help us out. Have a look and tell me what you think. =D


My two cents is that docker compose is an order of magnitude simpler to troubleshoot or understand than Kubernetes but the problem that Kubernetes solves is not that much more difficult.


As a Kubernetes outsider, I get confused why so much new jargon had to be introduced. As well as so many little new projects coupled to Kubernetes with varying degrees of interoperability. It makes it hard to get a grip on what Kube really is for newcomers.

It also has all the hallmarks of a high-churn product where you need to piece together your solution from a variety of lower-quality information sources (tutorials, QA sites) rather than a single source of foolproof documentation.


> I get confused why so much new jargon had to be introduced.

Consider the source of the project for your answer (mainly, but not entirely, bored engineers who are too arrogant to think anybody has solved their problem before).

> It also has all the hallmarks of a high-churn product where you need to piece together your solution from a variety of lower-quality information sources (tutorials, QA sites) rather than a single source of foolproof documentation.

This describes 99% of open source libraries used. The documentation looks good because auto-doc tools produce a prolific amount of boilerplate documentation. In reality the result is documentation that's very shallow, and often just a re-statement of the APIs. The actual usage documentation of these projects is generally terrible, with few exceptions.


> Consider the source of the project for your answer (mainly, but not entirely, bored engineers who are too arrogant to think anybody has solved their problem before).

This seems both wrong and contrary to the article (which mentions that k8s is a descendant of Borg, and in fact if memory serves many of the k8s authors were borg maintainers). So they clearly were aware that people had solved their problem before, because they maintained the tool that had solved the problem for close to a decade.


Kubernetes docs are pretty good, detailed, and kept up to date - a lot more than just API auto-documentation.


I find it's low quality libraries that tend to have poor documentation. Perhaps that's 99% of open source libraries.


I second this. I like "silent" new tech, which doesn't need to introduce dozens of new "concepts".

- containers focus on what you can do, easy to understand and you can start in 5 minutes

- kubernetes is the opposite, where verbose tutorials spend time explaining to me how it works, rather than what I can do with it.


I always find it surprising that I have yet to see or touch Kubernetes (and I work as an SRE with container workloads for several years now), and yet HN threads about it are full of people who apparently think it's the only possible solution and are flabbergasted that people don't pray to it nightly.

https://news.ycombinator.com/item?id=27910185

https://news.ycombinator.com/item?id=27910481 - weird comparison to systemd

https://news.ycombinator.com/item?id=27910553 - another systemd comparison

https://news.ycombinator.com/item?id=27913239 - comparing it to git


I think one part of this is the lack of accepted nomenclature in CS - naming conventions are typically not enforced, unlike in engineering, where you'd have to produce a drawing and have it conform to a standard.

For engineering, the common way is to use a couple of descriptive words + a basic noun, so things do get boring quite quickly but are very easy to understand - say something like Google 'Cloud Container Orchestrator' instead of Kubernetes.


If only branding wasn't involved.


The Kubernetes documentation site is the source of truth, and pretty well written, though obviously no set of docs is perfect.

The concepts and constructs do not usually change in breaking ways once they reach beta status. If you learned Kubernetes in 2016 as an end user, there are certainly more features but the core isn’t that different.


So the basic problem with *nix is its permission model. If we had truly separable security/privilege/resource domains then Linux wouldn't have needed containers and simple processes and threads could have sufficed in place of Borg/docker/Kubernetes.

There's a simpler and more powerful security model: capabilities. Capabilities fix 90% of the problems with *nix.

There's currently no simple resource model. Everything is an ad-hoc, human-driven heuristic for allocating resources to processes and threads, and it's a really difficult problem to solve formally because it has to go beyond algorithmic complexity and care about the constant factors as well.

The other *nix problem is "files". Files were a compromise between usability and precision but very few things are merely files. Devices and sockets sure aren't. There's a reason the 'file' utility exists; nothing is really just a file. Text files are actually text files + a context-free grammar (hopefully) and parser somewhere, or they're human-readable text (but probably with markup, so again a parser somewhere).

Plenty of object models have come and gone; they haven't simplified computers (much less distributed computers), so we'll need some theory more powerful than anything we've had in the past to express relationships between computation, storage, networks, and identities.


Containers never solved the permission model. They solved the packaging and idempotency problem.

I really dislike when people assume containers give them security, it’s the wrong thing to think about.

Containers allowed us to deploy reproducibly, that’s powerful.


Absolutely true.

Docker replaced .tar.gz and .rpm, not chroots.

Most of the time the chroot functionality of Docker is a hindrance, not a feature. We need chroots because we still haven't figured out packaging properly.

(Maybe Nix will eventually solve this problem properly; some sort of docker-compose equivalent for managing systemd services is lacking at the moment.)


Er, just as a historical note, one of the primary uses of chroots was for packaging. Just like how Docker does it. That, in a sense, was even the original motivation. The security usage of chroots was a later innovation.


I mean, containers can provide isolation. Linux has had a hard time getting that to be reliable because it started with the wrong model: building containers subtractively rather than additively. Though even starting with the right model, until you have isolation for every last bit of shared context that the OS provides (harder to identify than it may seem at first blush!) you won't have a complete solution. And yes, software-based containers will tend to have some leakage. Even sharing hardware with hardware isolation features might not be enough (hello row hammer).

It would be good to have containers aim to provide the maximum possible isolation.


> Containers never solved the permission model. They solved the packaging and idempotency problem

Disagree. Containers are primarily about separation and decoupling. Multiple services on one server often have plenty of ways to interact and see each other, and are interdependent in non-trivial ways (e.g. if you want to upgrade the OS, you upgrade it for all services together). Running each service in its own container provides separation by default.

OTOH, containers as a technology have nothing to do with packaging, reproducibility and deployment. It's just that these changes arrived together (e.g. with Docker), so they are often associated, but you can have e.g. LXC containers that are managed the same way as traditional servers (by ssh-ing into the container).


LXC, FreeBSD jails & Solaris zones et al. are not the same as Docker containers though.

The former were built with security in mind. The latter was most assuredly not.


> I really dislike when people assume containers give them security, it’s the wrong thing to think about.

To be fair, there is lots of published text around suggesting that this _is_ the case. Many junior to semi-experienced engineers I've known have at some point thought it's plausible to "ssh into" a container. They're seen as light-weight VMs, not as what they are - processes.

> Containers allowed us to deploy reproducibly, that’s powerful.

And it was done in the most "to bake an apple pie from scratch, you must first create the universe" way possible.


But you can ssh into a container.

You just need to install sshd and launch it. You also need to create a user and set a password if you want to actually log in.

Why? Because containers aren't a single process. It's a group of processes sharing a namespace.

And you can totally use a container as a light-weight VM. While most containers have bash or your application as pid 1, there is nothing stopping you launching a proper initrd as pid 1, and it will act much like a proper OS.

Though, just because you can, doesn't mean you should.


I think you mean init, not initrd. An initrd is a RAM disk image loaded by Linux containing kernel file system and network drivers and is typically used to help minimize the size of the main kernel image.


It is possible to do that though. I'm perhaps getting too caught up on 'plausible'.


> There's a simpler and more powerful security model: capabilities. Capabilities fix 90% of the problems with *nix.

What do you think about using file descriptors as capabilities? Capsicum (for FreeBSD, I think) extends this notion quite a bit. Personally I feel it is not quite "right", but I haven't sat down and thought hard about what is missing.

> we'll need some theory more powerful than anything we've had in the past to express relationships between computation, storage, networks, and identities.

Do you have any particular things in mind which point in this direction? I would like to understand what the status quo is.


I haven't looked at Capsicum specifically, but from the simple overview I read, it sounds like it is more similar to dropping root privileges when daemonizing and not the basis for a whole-OS security model. E.g. there isn't (in my limited reading) a way to grant a new file descriptor to a process after it calls cap_enter. Consider a web browser that wants to download or upload a file; there should be a way for the operator to grant that permission to the browser from another process (the OS UI or similar) after it starts running.

To be effective capabilities also need a way to be persistent so that a server daemon doesn't have to call cap_enter but can pick up its granted capabilities at startup. Capsicum looks like a useful way to build more secure daemons within Unix using a lot of capability features.

I also think file descriptors are not the fundamental unit of capability. Capabilities should also cover processes, threads, and the objects managed by various other syscalls.

> Do you have any particular things in mind which point in this direction? I would like to understand what the status quo is.

Unfortunately I don't have great suggestions. The most secure model right now is seL4, and its capability model covers threads, message-passing endpoints, and memory allocation (subdivision) and retyping as kernel memory to create new capabilities and objects. The kernel is formally verified, but afaik the application/user level is not fleshed out as a convenient development environment nor as a distributed computing environment.

For distributed computing, a capability model would have to solve distributed trust issues, which probably means capabilities based on cryptographic primitives, which for practical implementations would have to extend full trust between kernels on different machines for speed. But for universality it should be possible to work with capabilities at an abstraction level that allows both deep-trust distributed computers and more traditional single-machine trust domains, without having to know or care which type of capability to choose when writing the software, only when running it.

I think a foundation for universal capabilities needs support for different trust domains and a way to interoperate between them.

   1. Identifying the controller for a particular capability, which trust domain it is in, and how to access it.
   2. Converting capabilities between trust domains as the objects to which they refer move.
   3. Managing any necessary identity/cryptographic tokens necessary to cross trust domains.
   4. Controlling the ability to grant or use capabilities across trust domains.
A simple example: a caller wants to invoke a capability on a utility process which produces an output, and the caller wants to receive a capability to read that output.

   The processes may not live on the same machine.
   The processes may not be in the same trust domain.
   The resulting object may be on a third machine or trust domain.
   The caller may have inherited privacy enforcement on all owned capabilities that necessitates e.g. translating the binary code of the second process into a fully homomorphically encrypted circuit which can run on a different trust domain while preserving privacy and provisioning the necessary keys for this in the local trust domain so that the capability to the new object can actually read it.
   The process may migrate to a remote machine in a different trust domain in the middle of processing, in which case the OS needs to either fail the call (making for an unfortunately complicated distributed computer) or transparently snapshot or rollback the state of the process for migration, transmit it and any (potentially newly encrypted) data, and update the capabilities to reflect the new location and trust domain.

Basically, if the capability model isn't capable of solving these issues for what would be very simple local computing, then it's never going to satisfy the OP's desire for a simpler distributed computation model.

I think it's also clear why *nix is woefully short of being able to accomplish it. *nix is inherently local, has a single trust domain, and forces userland code to handle interaction with other trust domains, except in the very limited model of network file systems (and in the case of NFS, essentially an enforced single trust domain with synchronized user/group IDs).


Windows has capabilities. It's the combination of handles (file, process, etc.) and access tokens.

But you'll note no one is really deploying Windows workloads to the cloud. Why? Well, because you'd still have to build a framework for managing all those permissions, and it hasn't been done. Also, you might end up with the SVCHOST problem, where you host many different services/apps/whatever in one very threaded process because you can.

Capabilities aren't necessarily simpler. Especially if you can delegate them without controls -- now you have no idea what the actual running permissions are, only the cold start baseline.

No, I think the permissions thing is a red herring. Very much on the contrary, I think workload division into coarse-grained containers is great for permissions, because fine-grained access control is hard to manage. Of course, you can't destroy complexity, only move it around, so if you end up with many coarse-grained access control units, you'll still have a fine-grained access control system in the end.

Files aren't really a problem either. You can add metadata to files on Linux using xattrs (I've built a custom HTTP server that takes some response headers for static resources, like Content-Type, from xattrs). The problem you're alluding to is duck-typing as opposed to static typing. Yes, it's a problem -- people are lazy, so they don't type-tag everything in highly lazy typing systems. So what? Windows also has this problem, just a bit less so than Unix. Python and JS are all the rage, and their type systems are lazy and obnoxious. It's not a problem with Unix. It's a problem with humans. Lack of discipline. Honestly, there are very few people who could use Haskell as a shell!

> Plenty of object models have come and gone;

Yeah, mostly because they suck. The right model is Haskell's (and related languages').

> so we'll need some theory more powerful than anything we've had in the past ...

I think that's Haskell (which is still evolving) and its ecosystem (ditto).

But at the end of the day, you'll still have very complex metadata to manage.

What I don't understand is how all your points tie into Kubernetes being today's Multics.

Kubernetes isn't motivated by Unix permissions sucking. We had fancy ACLs in ZFS in Solaris and still also ended up having Zones (containers). You can totally build an application-layer cryptographic capability system, running each app as its own isolated user/container, and to some degree this is happening with OAuth and such things, but that isn't what everyone is doing, all the time.

Kubernetes is most definitely not motivated by Unix files being un-typed either.

I hope readers end up floating the other, more on-topic top-level comments in this thread back to the top.


The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn. To learn alternatives, it takes years, and applications built on alternatives will be tied to one cloud.

See prior discussion here: https://news.ycombinator.com/item?id=23463467

You'd have to learn AWS Auto Scaling groups (proprietary to AWS), Elastic Load Balancer (proprietary to AWS) or HAProxy, blue-green deployment or phased rollouts, Consul, systemd, Pingdom, CloudWatch, etc. etc.


Kubernetes uses all those underlying AWS technologies anyway (or at least an equivalently complex thing). You still have to be prepared to diagnose issues with them to effectively administrate Kubernetes.


At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix. Moving providers with a k8s system can be a weeks-long project rather than a years-long project, which can easily make the difference between surviving and closing the doors. It's not a panacea, but it at least doesn't make your system dependent on a single provider.


If you can literally pick up and shift to another cloud provider just by moving Kubernetes somewhere else, you are spending mountains of engineering time reinventing a bunch of different wheels.

Are you saying you don't use any of your cloud vendor's supporting services, like CloudWatch, EFS, S3, DynamoDB, Lambda, SQS, SNS?

If you're running on plain EC2 and have any kind of sane build process, moving your compute stuff is the easy part. It's all of the surrounding crap that is a giant pain (the aforementioned services + whatever security policies you have around those).


I use MongoDB instead of DynamoDB, and Kafka instead of SQS. I use S3 (the Google equivalent since I am on their cloud) through Kubernetes abstractions. In some rare cases I use the cloud vendor's supporting services but I build a microservice on top of it. My application runs on Google cloud and yet I use Amazon SES (Simple Email Service) and I do that by running a small microservice on AWS.


Sure, you can use those things. But now you also have to maintain them. It costs time, and time is money. If you don't have the expertise to administrate those things effectively, it may not be a worthwhile investment.

Everyone's situation is different, of course, but there is a reason that cloud providers have these supporting services and there is a reason people use them.


> But now you also have to maintain them.

In my experience it is less work than keeping up with cloud provider's changes [1]. You can stay with a version of Kafka for 10 years if it meets your requirements. When you use a cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence. You are at their mercy. I am not saying it is always better to set up your own equivalent using OSS, but I am saying that makes sense for a lot of things. For example Kafka works well for me, and I wouldn't use Amazon SQS instead, but I do use Amazon SES for emailing.

[1] https://steve-yegge.medium.com/dear-google-cloud-your-deprec...


While in general I agree with your overall argument, when it comes to:

> cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence

AWS S3 and SQS have both gone down significantly in price over the last 10 years and code written 10 years ago still works today with zero changes. I know because I have some code running on a Raspberry Pi today that uses an S3 bucket I created in 2009 and haven't changed since*.

(of course I wasn't using an rPi back then, but I moved the code from one machine to the next over the years)


But "keeping up with changes" applies just as much to Kubernetes, and I would argue it's even more dangerous because an upgrade potentially impacts every service in your cluster.

I build AMIs for most things on EC2. That interface never breaks. There is exactly one service on which provisioning is dependent: S3. All of the code (generally via Docker images), required packages, etc are baked in, and configuration is passed in via user data.

EC2 is what I like to call a "foundational" service. If you're using EC2 and it breaks, you wouldn't have been saved by using EKS or Lambda instead, because those use EC2 somewhere underneath.

Re: services like SQS, we could choose to roll our own but it's not really been an issue for us so far. The only thing we've been "forced" to move on is Lambda, which we use where appropriate. In those cases, the benefits outweigh the drawbacks.


It’s time and knowledge.

It can be simple but first you have to learn it.

Given that life is finite and you want to accomplish some objective with your company (and it's not training devops professionals), it's quite valuable to be able to outsource a big part of the problems that need to be solved to get there.

Given this perspective, it's much better to use managed services. That lets you focus on the code (and maintenance) specific to your problem.


And don't you have specific YAML for "AWS LB configuration option" and stuff? The concepts in different cloud providers are different. I can't imagine it's possible to be portable without some jQuery-type layer expressing concepts you can use that are built out of the native concepts. But I'd bet the different browsers were more similar in 2005 than the different cloud providers are in 2021.


Sure, there is configuration that goes into using your cloud provider's "infrastructure primitives". My point is that Kubernetes is often using those anyway, and if you don't understand them you're unprepared to respond in the case that your cloud provider has an issue.

In terms of the effort to deploy something new, for my organization it's low. We have a Terraform module that creates the infrastructure, glues the pieces together, tags stuff, and makes sure everything is configured uniformly. You specify some basic parameters for your deployment and you're off to the races.

We don't need to add yet more complexity with Kubernetes-specific cost tracking software; AWS does it for us automatically. We don't have to care about how pods are sized and how those pods might or might not fit on nodes. Autoscaling gives us consistently sized EC2 instances that, in my experience, have never run into issues because of a bad neighbor. Most importantly of all, I don't have the upgrade anxiety that comes from having a ton of services stacked on one Kubernetes cluster, which may all suffer issues if an upgrade does not go well.


> At least with building to k8s you can shift to another cloud provider if those problems end up too difficult to diagnose or fix.

You're saying that the solution to k8s is complicated and hard to debug is to move to another cloud and hope that fixes it?


> You're saying that the solution to k8s is complicated and hard to debug is to move to another cloud and hope that fixes it?

Not in the slightest. I'm saying that building a platform against k8s lets you migrate between cloud providers, because the cloud provider's system might be causing you problems. These problems are probably related to your platform's design and implementation, which is causing an impedance mismatch with the cloud provider.

This isn't helpful knowledge when you've only got four months of runway and fixing the platform or migrating from AWS would take six months or a year. It's not like switching a k8s-based system is trivial but it's easier than extracting a bunch of AWS-specific products from your platform.


It takes almost as much time and effort to move K8s as it does to reimplement one cloud implementation on another cloud, and your system engineers still have to learn an entirely new system of IaaS/PaaS. You don't really save anything. The only thing K8s does for you is allow the developers' operation of the system to be the same after it's migrated.


> The only thing K8s does for you is allow the developers' operation of the system to be the same after it's migrated.

I mean, yeah, that’s exactly what’s required to happen, and it’s a good thing because only your system engineers need to do most of the legwork. If you have a team of system engineers, you probably have a much bigger cohort of application engineers.


Indeed. When we did a cloud migration, we first moved all our apps to a (hosted) k8s cluster, and then to a cloud k8s cluster. This made the migration so much easier.


Only the k8s admins need to know that though, not the users of it.


"Only the k8s admins" implies you have a team to manage it.

A lot of things go from not viable to viable if you have the luxury of allocating an entire team to it.


Fair point. But this is where the likes of EKS and GKE come in. It takes away a lot of the pain that comes from managing K8s.


That hasn't been my experience. I use Kubernetes on Google cloud (because they have the best implementation of K8s), and I have never had to learn any Google-proprietary things.


In my experience, Kubernetes on AWS is always broken somewhere as well.

Oh it's Wednesday, ALB controller has shat itself again!


Cloud agnosticism is, in my experience, a red herring. It does not matter, and the effort required to move from one cloud to another is still non-trivial.

I like using the primitives the cloud provides, while also having a path to - if needed - run my software on bare metal. This means: VMs, decoupling the logging and monitoring from the cloud services (use a good library that can send to CloudWatch, for example; prefer open source solutions when possible), doing proper capacity planning (and having the option to automatically scale up if the flood ever comes), etc.


> The alternatives to Kubernetes are even more complex. Kubernetes takes a few weeks to learn.

Learning Heroku and starting to use it takes maybe an hour. It's more expensive and you won't have as much control as with Kubernetes, but we used it in production for years for a fairly big microservice-based project without problems.


This feels like a post ranting against systemd written by someone who likes init.

I understand that K8s does many things, but it's also how you look at the problem. K8s does one thing well: manage complex distributed systems, such as knowing when to scale up and down if you so choose and when to start up new pods when they fail.

Arguably, this is one problem that is made up of smaller problems that are solved by smaller services, just like systemd works.

Sometimes I wonder if the Perlis-Thompson Principle and the Unix Philosophy have become a way to force a legalistic view of software development, or are just outdated.


I don't find the comparison to systemd to be convincing here.

The end result of systemd for the average administrator is that you no longer need to write finicky init scripts that run to tens or hundreds of lines. They're reduced to unit files which are often just 10-15 lines. systemd is designed to replace old stuff.

The result of Kubernetes for the average administrator is a massively complex system with its own unique concepts. It needs to be well understood if you want to be able to administrate it effectively. Updates come fast and loose, and updates are going to impact an entire cluster. Kubernetes, unlike systemd, is designed to be built _on top of_ existing technologies you'd be using anyway (cloud provider autoscaling, load balancing, storage). So rather than being like systemd, which adds some complexity and also takes some away, Kubernetes only adds.


> So rather than being like systemd, which adds some complexity and also takes some away, Kubernetes only adds.

Here are some bits of complexity that managed Kubernetes takes away:

* SSH configuration

* Key management

* Certificate management (via cert-manager)

* DNS management (via external-dns)

* Auto-scaling

* Process management

* Logging

* Host monitoring

* Infra as code

* Instance profiles

* Reverse proxy

* TLS

* HTTP -> HTTPS redirection

So maybe your point was "the VMs still exist" which is true, but I generally don't care because the work required of me goes away. Alternatively, you have to have most/all of these things anyway, so if you're not using Kubernetes you're cobbling together solutions for these things which has the following implications:

1. You will not be able to find candidates who know your bespoke solution, whereas you can find people who know Kubernetes.

2. Training people on your bespoke solution will be harder. You will have to write a lot more documentation whereas there is an abundance of high quality documentation and training material available for Kubernetes.

3. When something inevitably breaks with your bespoke solution, you're unlikely to get much help Googling around, whereas it's very likely that you'll find what you need to diagnose / fix / work around your Kubernetes problem.

4. Kubernetes improves at a rapid pace, and you can get those improvements for nearly free. To improve your bespoke solution, you have to take the time to do it all yourself.

5. You're probably not going to have the financial backing to build your bespoke solution to the same quality caliber that the Kubernetes folks are able to devote (yes, Kubernetes has its problems, but unless you're at a FAANG then your homegrown solution is almost certainly going to be poorer quality if only because management won't give you the resources you need to build it properly).


Respectfully, I think you have a lot of ignorance about what a typical cloud provider offers. Let's go through these each step-by-step.

> SSH configuration

Do you mean the configuration for sshd? What special requirements would have that Kubernetes would help fulfill?

> Key management

Assuming you mean SSH authorized keys since you left this unspecified. AWS does this with EC2 instance connect.

> Certificate management (via cert-manager)

AWS has ACM.

> DNS management (via external-dns)

This is not even a problem if you use AWS cloud primitives. You point Route 53 at a load balancer, which automatically discovers instances from a target group.

> Auto-scaling

AWS already does this via autoscaling.

> Process management

systemd and/or docker do this for you.

> Logging

AWS can send instance logs to CloudWatch. See https://docs.aws.amazon.com/systems-manager/latest/userguide....

> Host monitoring

In what sense? Amazon target groups can monitor the health of a service and automatically replace instances that report unhealthy, time out, or otherwise.

> Infra as code

I mean, you have to have a description somewhere of your pods. It's still "infra as code", just in the form prescribed by Kubernetes.

> Instance profiles

Instance profiles are replaced by secrets, which I'm not sure is better, just different. In either case, if you're following best practices, you need to configure security policies and apply them appropriately.

> Reverse proxy

AWS load balancers and target groups do this for you.

> HTTPS

AWS load balancers, CloudFront, do this for you. ACM issues the certificates.

I won't address the remainder of your post because it seems contingent on the incorrect assumption that all of these are "bespoke solutions" that just have to be completely reinvented if you choose not to use Kubernetes.


> I won't address the remainder of your post because it seems contingent on the incorrect assumption that all of these are "bespoke solutions" that just have to be completely reinvented if you choose not to use Kubernetes.

You fundamentally misunderstood my post. I wasn't arguing that you had to reinvent these components. The "bespoke solution" is the configuration and assembly of these components ("cloud provider primitives" if you like) into a system that suitably replaces Kubernetes for a given organization. Of course you can build your own bespoke alternative--that was the prior state of the world before Kubernetes debuted.


That's not really any different for Kubernetes.

You still need to figure out where your persistent storage is.

You still have to send logs somewhere for aggregation.

You have the added difficulty of figuring out cost tracking in Kubernetes since there is not a clear delineation between cloud resources.

You have to configure an ingress controller.

You want SSL? Gotta set that up, too.

You have to figure out how pods are assigned to nodes in your cluster, if separation of services is at all a concern (either for security or performance reasons).

Kubernetes is no better with the creation of "bespoke solutions" than using what your cloud provider offers.

Compare this tutorial for configuring SSL for Kubernetes services to an equivalent for configuring SSL on an AWS load balancer. Is Kubernetes really adding value here?

https://blog.karmacomputing.co.uk/kubernetes-cluster-with-ss... https://aws.amazon.com/premiumsupport/knowledge-center/assoc...


Kubernetes is far better for each of the above tasks because it is a consistent approach and set of abstractions, rather than looking through the arbitrary "everything store" of the cloud providers. I really don't have any interest in relying on 15 different options from cloud providers; I want to get going with a set of extensible, composable abstractions and control logic. Software should not be tied to the hardware I rent or the marketing whims of said entity.

Yes, there is choice and variety among Kubernetes extensions, but they all have fundamental operational assumptions that are aligned because they sit inside the Kubernetes control and API model. It is a golden era to have such a rich set of open and elegant building blocks for modern distributed systems platform design and operations.


Well, first of all, note how much shorter your list is than the original. So vanilla Kubernetes is already taking care of lots of things for us (SSH configuration, process management, log exfiltration, etc). Moreover, we're not talking about vanilla Kubernetes, but managed Kubernetes (I've been very clear and explicit about this) so most of your points are already handled.

> You still need to figure out where your persistent storage is.

Managed Kubernetes comes with persistent storage solutions out of the box. I don't know what you mean by "figure out where it is". On EKS it's EFS, on GKE it's FileStore, and of course you can use other off-the-shelf solutions if you prefer, but there are defaults that you don't have to laboriously set up.

> You still have to send logs somewhere for aggregation.

No, these too are automatically sent to CloudWatch or equivalent (maybe you have to explicitly say "use cloudwatch" in some configuration option when setting up the cluster, but still that's a lot different than writing ansible scripts to install and configure fluentd on each host).

> You have the added difficulty of figuring out cost tracking in Kubernetes since there is not a clear delineation between cloud resources.

This isn't true at all. Your cloud provider still rolls up costs by type of resource, and just like with VMs you still have to tag things in order to roll costs up by business unit.

> You have to configure an ingress controller.

Nope, this also comes out of the box with your cloud provider. It hooks into the cloud provider's layer 7 load balancer offering. It's also trivial to install other load balancer controllers.

> You want SSL? Gotta set that up, too. ... Compare this tutorial for configuring SSL for Kubernetes services to an equivalent for configuring SSL on an AWS load balancer. Is Kubernetes really adding value here?

If you use cert-manager and external-dns, then you'll have DNS and SSL configured for every service you ever create on your cluster. By contrast, on AWS you'll need to manually associate DNS records and certificates with each of your load balancers. Configuring LetsEncrypt for your ACM certs is also quite a lot more complicated than for cert-manager.
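Concretely, once cert-manager and external-dns are installed and you've defined a ClusterIssuer, each new service only needs its Ingress to carry a host and one annotation, roughly like this (issuer and host names are made up). external-dns creates the DNS record from the host and cert-manager provisions the TLS Secret:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp                                          # hypothetical
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # a ClusterIssuer you define once
    spec:
      tls:
        - hosts: [myapp.example.com]
          secretName: myapp-tls       # cert-manager fills this Secret with a certificate
      rules:
        - host: myapp.example.com     # external-dns picks this up and creates the record
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80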

> Kubernetes is no better with the creation of "bespoke solutions" than using what your cloud provider offers.

I hope by this point it's pretty clear that you're mistaken. Even if SSL/TLS is no easier with Kubernetes than with VMs/other cloud primitives, we've already addressed a long list of things you don't need to contend with if you use managed Kubernetes versus cobbling together your own system based on lower level cloud primitives. And Kubernetes is also standardized, so you can rely on lots of high quality documentation, training material, industry experience, FAQ resources (e.g., stack overflow), etc which you would have to roll yourself for your bespoke solution.


Right, I really dislike systemd in many ways ... but I love what it enables people to do, and accept that for all my grumpiness about it, it is overall a net win in many scenarios.

k8s ... I think is often overkill in a way that simply doesn't apply to systemd.


If you have to manage a large distributed software code base or set of datacenters, Kubernetes is a win in that it provides a consistent, elegant solution to a nearly universal set of problems.

Systemd comparatively feels like a complete waste of time given the heat it has generated for the benefit.


> The end result of systemd for the average administrator is that you no longer need to write finicky init scripts that run to tens or hundreds of lines.

Wouldn't the hundreds of lines of finicky, bespoke Ansible/Chef/Puppet configs required to manage non-k8s infra be the equivalent to this?


In my work, absolutely yes. Using Kubernetes has saved us sooo much nonsense. Yes we have a mix of Terraform and k8s manifests to deploy to Azure Kubernetes Service, but it works out pretty well in the end.

Honestly most of the annoyance is Azure stuff. Kubernetes stuff is pretty joyful and, unlike Azure, the documentation sometimes even explains how it works.


I can't say I have had the same experience.

Kubernetes cluster changes potentially create issues for all services operating in that cluster.

Provisioning logic that is baked into an image means changes to one service have no chance of affecting other services (app updates that create poor netizen behavior, notwithstanding). Rolling back an AMI is as trivial as setting the AMI back in the launch template and respinning instances.

There is a lot to be said for being able to make changes that you are confident will have a limited scope.


Does Kubernetes infrastructure also not require some form of configuration?

Yes, there is a trade off here. You are trading a staggeringly complex external dependency for a little bit of configuration you write yourself.

The Kubernetes master branch weighs in at ~4.6 million lines of code right now. Ansible sits at ~286k on their devel branch (this includes the core functionality of Ansible but not every single module). You could choose not to even use Ansible and just write a small shell script that builds out an image which does something useful in less than 500 lines of your own code, easily.

Kubernetes does useful stuff and may take some work off your plate. It's also a risk. If it breaks, you get to keep both of the pieces. Kubernetes occupies the highly unenviable space of having to do highly available network clustering. As a piece of software, it is complex because it has to be.

Most people don't need the functionality provided by Kubernetes. There are some niceties. But if I have to choose between "this ~500 line homebrew shell script broke" and "a Kubernetes upgrade went wrong" I know which one I am choosing, and it's not the Kubernetes problem.

Managed Kubernetes, like managed cloud services, mitigate some of those issues. But you can still end up with issues like mismatched node sizes and pod resource requirements, so there is a bunch of unused compute.

TL;DR of course there are trade-offs, no solution is magic.


Fair, I was just pointing out that there was more to the analogy. Systemd, like init, also requires configuration, though it is more declarative than imperative, similar to k8s. Some people may prefer this style and consider it easier to manage; however, my opinions here are not that strong.


Kubernetes removes the complexity of keeping a process (service) available.

There’s a lot to unpack in that sentence, which is to say there’s a lot of complexity it removes.

Agree it does add as well.

I’m not convinced k8s is a net increase in complexity after everything is accounted for. Authentication, authorization, availability, monitoring, logging, deployment tooling, auto scaling, abstracting the underlying infrastructure, etc…


> Kubernetes removes the complexity of keeping a process (service) available.

Does it really do that if you just use it to provision an AWS load balancer, which can do health checks and terminate unhealthy instances for you? No.

Sure, you could run some other ingress controller but now you have _yet another_ thing to manage.


Do AWS load balancers distinguish between "do not send traffic" and "needs termination"?

Kubernetes has readiness probes and liveness probes for a reason. The readiness probe is a gate for "should receive traffic" and the liveness probe is a gate for "should be restarted".
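In pod-spec terms that distinction is two separate probes, e.g. (paths, port and thresholds are made up):

    # in a container spec
    readinessProbe:              # failing -> removed from the Service endpoints
      httpGet: {path: /healthz/ready, port: 8080}
      periodSeconds: 5
      failureThreshold: 3
    livenessProbe:               # failing -> the container gets restarted
      httpGet: {path: /healthz/live, port: 8080}
      periodSeconds: 10
      failureThreshold: 3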


If that’s all you use k8s for, you don’t need it.

Myself, I need to set up a bunch of other cloud services for day 2 operations.

And I need to do it consistently across clouds. The kind of clients I serve won’t use my product as a SaaS due to regulatory/security reasons.


Multi-cloud is one of the few compelling use cases I can think of for Kubernetes.

That said, there are relatively few organizations that actually require it.


> K8s does one thing well: manage complex distributed systems, such as knowing when to scale up and down if you so choose and when to start up new pods when they fail.

K8s does the very simple stateless case well, but anything more complicated and you are on your own. Stateful services are still a major pain, especially those with leader election. There is no feedback to K8s about the application state of the cluster, so it can't know which instances are less disruptive to shut down or which shard needs more capacity.


> I understand that K8s does many things, but it's also how you look at the problem. K8s does one thing well: manage complex distributed systems, such as knowing when to scale up and down if you so choose and when to start up new pods when they fail.

Also, in the sense of "many small components that each do one thing well", k8s is even more Unix-like than Unix in that almost everything in k8s is just a controller for a specific resource type.


I'm not sure that "fewer concepts" is a win. "Everything is a file" went too far with Linux, where you get status from the kernel by reading what appears to be various text files. But that runs into all the complexities of maintaining the file illusion. What if you read it in small blocks? Does it change while being read? If not, what if you read some of it and then just hold the file handle. Are you tying up kernel memory? Holding important locks? Or what?

Orchestration has a political and business problem, too. How does Amazon feel about something that runs most jobs on your own bare metal servers and rents extra resources from AWS only during overload situations? This appears to be the financially optimal strategy for compute-bound work such as game servers. Renting bare iron 24/7 at AWS prices is not cost effective.


> "Everything is a file" went too far with Linux

Having had a play with a few variants on this theme, I think kernel based abstractions are the mistake here. It's too low level and too constrained by the low-level details of the API, as you've said yourself.

If you look at something like PowerShell, it has a variant of this abstraction that is implemented in user mode. Within the PowerShell process, there are provider plugins (DLLs) that implement various logical filesystems like "environment variables", "certificates", "IIS sites", etc...

These don't all implement the full filesystem APIs! Instead they have various subsets. E.g., some providers only implement atomic reads and writes, which is what you want for something like kernel parameters, but not for generic data files.


I feel like we've already seen some alternatives and the industry, thus far, is still orienting towards k8s.

Hashicorp's stack, using Nomad as an orchestrator, is much simpler and more composable.

I've long been a fan of Mesos' architecture, which I also think is more composable than the k8s stack.

I just find it surprising an article that is calling for an evolution of the cluster management architecture fails to investigate the existing alternatives and why they haven't caught on.


We had someone explore K8s vs Nomad and they chose K8s because the Nomad docs are bad. They got much further with K8s in the same timeboxed spike.


Setting up the right parameters/eval criteria to exercise inside of a few-week timebox (I'm assuming this wasn't a many-month task) is extremely difficult to do for a complex system like this. At least, to me it is--maybe more ops-focused folks can do it quicker.

Getting _something_ up and running quickly isn't necessarily a good indicator of how well a set of tools will work for you over time, in production work loads.


It was more about migrating the existing microservices, which run in Docker Compose today, than some example app. Getting the respective platforms up was not the issue. I don't think weeks were spent, but they were able to migrate a complex application to K8s in less than a week. They couldn't get it running in Nomad, which was tried first due to its supposed simplicity over K8s.


Several years ago -- so pre-K8s too -- I was tasked with setting up a Nomad cluster and failed miserably. Nomad and Consul are designed to work together but are also designed distinctly enough that it was a bloody nightmare trying to figure out what order of priority things needed to be spun up in and how they all interacted with each other. The documentation was more like a man page, where you'd get a list of options but very little guidance on how to set things up, unlike K8s, whose documentation has a lot of walk-through material.

Things might have improved massively for Nomad since but I honestly have no desire to learn. Having used other Hashicorp tools since, I see them make the same mistakes time and time again.

Now I'm not the biggest fan of K8s either. I completely agree that it's hugely overblown for most purposes despite being sold as a silver bullet for any deployment. But if there's one thing K8s does really well, it's describing the different layers in a deployment and then wrapping that up in a unified block. There's less of the "this thing is working but is this other thing?" when spinning up a K8s cluster.


For me when exploring K8s vs Nomad, Nomad looked like a clear choice. That was until I had to get Nomad + Consul running. I found it all really difficult to get running in a satisfactory manner. I never even touched the whole Vault part of the setup because it was all overwhelming.

On the other side, K8s was a steep learning curve with lots of options and 'terms' to learn, but there was never a point in the whole exploration where I was stuck. The docs are great, the community is great, and the number of examples available allows us to mix and match lots of different approaches.


There is a trap in distributed system design - seeking to scale up from a single-host perspective. An example: we have Apache and want to scale it up, so we put it in a container and generate its configuration so we can run several of them in parallel.

This leads to unnecessarily heavy systems - you do not need a container to host a server socket.

Industry puts algorithms and Big O on a pedestal. Most software projects start as someone building algorithms, with deployment and interactions only getting late attention. This is a bit like building the kitchen and bathroom before laying the foundations.

Algorithm-centric design creates mathematically elegant algorithms that move gigabytes of I/O across the network for every minor transaction. Teams wrap commodity resource schedulers around carefully tuned worker nodes, and discover their performance is awful because the scheduler can’t deal in the domain language of the big-picture problem.

I think it is interesting that the culture of Big O interviews and k8s both came out of Google.


Do you have any examples/ideas of what a non algorithm-first approach might look like?


Not sure if this is helpful, but there are some notes at cthulix.com.


The problem is the devops culture that has burdened development teams with having to juggle a lot of complexity. The solution is some separation of concerns. Development teams should not have to spend a lot of time on devops. That's something you buy from someone and that should just work. You pay for the privilege of doing more interesting things.

Kubernetes becomes a problem when you have people who are not operations people with many years of experience trying to run it while learning how to do it at the same time. The related problem is that having people spend time on this is orders of magnitude more expensive than running an actual cluster, which is also not cheap.

A week of devops time easily equates to months or years of cloud hosting for a modestly sized setup using e.g. Google Cloud Run. And let's face it, it's never just a week. Many teams have full-time devops people costing $100-200K/year, each. Great if you are running a business generating millions in revenue. Not so great if you are running a project that has yet to generate a single dollar of revenue and is a long time away from actually getting there. That describes most startups out there.

I actually managed to stay below the Cloud Run free tier for a while, making it close to free. It took me 2 minutes to set up CI/CD. It comes with logging, autoscaling, alerting, etc. Best of all, it freed me up to do more interesting things. Technically I'm using Kubernetes. Except of course I'm not. I spent zero time fiddling with Kubernetes-specific config. All I did was tell Google Cloud Run to create a CI/CD pipeline from this git repository and scale it. A 3-minute job to click together. The service was up and running right after the build succeeded. Great stuff. That's how devops should be: spend a minimum of time on it in exchange for acceptable results.


"Development teams should not have to spend a lot of time on devops. That's something that should just work that you buy from someone."

This is the fundamental disagreement. DevOps was a reaction to developers who built software that was nearly impossible to operate because they treated Ops as servants paid to do the dirty work, rather than as peers with a set of valuable skills covering a scope beyond what many Dev teams have. And it was a reaction to Ops being ground down into becoming the "department of no", when really they should be at the table with the development team as a way towards a collaborative reality check. A model where one team gets to completely ignore the complexities of operational reality is a broken, inhumane, and unsustainable model.

That said, it's also unsustainable to expose all complexity to dev teams that don't have the skills or incentive to manage it. Progressive disclosure and composable abstractions are the tools to remedy this. Kubernetes was never intended to be exposed directly to app developers; it was a system developer's platform toolkit. Exposing it is misunderstanding + laziness on the part of some operations teams. The intent was always to build higher PaaS-like abstractions such as Knative (which is what Google Cloud Run is based on).
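For a sense of what that higher-level abstraction looks like, here is a minimal Knative Service sketch (the service name, image, and env var are hypothetical); it is roughly the shape of manifest that Cloud Run accepts, and it hides Deployments, ReplicaSets, Services, and autoscaling behind a single resource:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello                                # hypothetical service name
    spec:
      template:
        spec:
          containers:
            - image: gcr.io/my-project/hello     # hypothetical image
              env:
                - name: TARGET
                  value: "world"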


As a frontend developer, I love to run applications in production, being able to get a terminal to my server, set up metrics, and do all these devopsy things.

But it is a totally different experience doing this with App Engine, Heroku, Tsuru, etc. than with a custom in-house Kubernetes plus a thousand custom homemade tools, 10 different repositories with undocumented YAML files, and another 3000 "gotchas" of things that don't work yet ("we're on it", "we need to migrate to the new version", etc.).

So I sympathize with the parent comment in the sense that, with this custom-built mountain of stuff, I don't want to do devops... if you give me an easy-to-use, well-tested, well-documented, stable production infrastructure like the ones I mentioned, then I'm all in.

I also agree with you on your last paragraphs about not exposing the raw thing to the developers. This is the key.

The problem is when the systems gurus want you to understand everything they understand to the same level, your frontend coworkers want you to be on the latest version of every library, your product manager wants you to perfectly understand the product, your manager expects you to be the best at dealing with people, and you still have to smile and be happy about team building... oh, and don't forget the Agile Coach expecting you to also be good at all the team dynamics and card games.

I'm all in on operating the applications my team builds. Having to operate custom in-house Kubernetes clusterfucks is not my job.


100%. I spent 5+ years of my life helping Cloud Foundry take off, and saw the enormous benefits of having your own private Heroku.

But the market overwhelmingly decided it wanted to play with a lower level foundation (those CF instances mostly are still chugging along running hundreds of thousands of containers, but they’re in their own world… “legacy”?).

Let’s own it and not delude ourselves that the current state of Kubernetes is the end state. It’s like saying the Linux syscall interface is too complex for app developers. Well yes! It’s for system developers. We as an industry are working to improve that.


Treating ops as a separate janitorial service and how that goes south is nicely captured in this article:

https://machinesplusminds.blogspot.com/2012/08/the-carpets-a...


> Great if you are running a business generating millions of revenue.

It's not even great in that situation. Millions in profit, perhaps, but that $200k+ would probably be better spent elsewhere - enhancing functionality, increasing sales, support, etc.


One point where the analogy fails is that Multics was never particularly popular. Although it was historically influential (especially, but not purely, through its influence on Unix), it was only ever a small player in the market. It was positioned as an operating system for high-end multi-million-dollar mainframes, but in that market IBM was king (with thousands of sites); Multics wasn't even near second place (with a mere 80 sites at its peak). Even for its vendor, GE/Honeywell, it was an also-ran – Honeywell ended up preferring GCOS as the solution for that market, which is part of why it killed Multics off. GCOS was no doubt technically inferior, but it was a simpler system which made more frugal use of system resources.

By contrast, k8s is wildly popular. I have no idea how many installations of it exist in the world, but it probably numbers into the millions.


I'm pretty biased since I gave k8s trainings and operate several kubes for my company and clients.

I'll take two pretty different contexts to illustrate why for me k8s makes sense.

1- I'm part of the cloud infrastructure team (99% AWS, a bit of Azure) for a pretty large private bank. We are in charge of security and compliance for the whole platform while trying to let teams be as autonomous as possible. The core services we provide are a self-hosted Gitlab along with ~100 CI runners (Atlantis and Gitlab-CI, that many for segregation), SSO infrastructure and a few other little things. Team of 5; I don't really see a better way to run this kind of workload with the required SLA. The whole thing is fully provisioned and configured via Terraform along with its dependencies, and we have a staging env that is identical (and the ability to pop another at will or to recreate this one). Plenty of benefits like almost zero-downtime upgrades (workloads and cluster), off-the-shelf charts for plenty of apps, observability, resource optimization (~100 runners mostly idle on a few nodes), etc.

2- Single-VM projects (my small company infrastructure and home server) for which I'm using k3s. Same benefits in terms of observability, robustness (at least while the host stays up...), IaC, and resource usage. Stable, minimalist, hardened host OS with the ability to run whatever makes sense inside k3s. I had to set up similarly small infrastructures for other projects recently with the constraint of relying on more classic tools so that it's easier for the next ops to take over, and I ended up rebuilding a fraction of k8s/k3s features with much more effort (did that with Docker and directly on the host OS for several projects).

Maybe that's because I know my hammer well enough for screws to look like nails, but from my perspective, once the tool is not an obstacle, k8s standardized and made available a pretty impressive and useful set of features, at large scale but arguably also for smaller setups.


99% AWS? You can do Gitlab runners and pretty much everything else with ECS+Fargate. You wouldn't even need to maintain any nodes, clusters, etc!


We have both Nomad (Consul + Vault + Nomad) and Kubernetes (hosted and on prem) running, both excel at different things.

I love Nomad's flexibility and ease of use: with a simple HCL file I (and all the devs) can debug and understand what is going on with a deployment without wasting a whole sprint; debugging and understanding the system is trivial. However, I agree parts of the documentation should be fixed and can confuse people who want to get started, and it's also relatively "new" insofar as there is only a small but growing community around it. I love Kubernetes because of the community: if there's a Helm chart for a service, it's going to work in 80% of cases. If, however, there are bugs in the Helm chart, or something is not quite on the beaten path, then good luck. Most of the time wasted on Kubernetes was due to the inexperience of the operators and also the esoteric bugs that can happen now and then. Building on top of things that have been done before is a great way to save time and gain flexibility, but it shouldn't be an excuse not to understand them (Helm charts being an example).

In both cases, you always need an ops team to take care of the clusters. For Nomad, 2-3 people are enough. For Kubernetes you will need 5+ people depending on the size and locality of the cluster, if you want to do things right, that is. If your dev team is managing them, it's already game over and just a question of time until you've made yourself more real problems than you initially had.

What bugs me the most, however, is the cargo-culting around the tools, serving as a "beating around the bush" technique to avoid actual work. They're just that, tools: if you have to deploy a Rails or Django app with an SQLite database, just do it on metal with a two-liner "CI/CD" and grow from there. If it gets bigger, sure, go for Kubernetes to manage the deployments and autoscaling, but be damn sure that you can debug anything that goes wrong within minutes/hours. If things go wrong and there's no hit on your googled error code, you essentially fall from your highest level of abstraction and are at the mercy of consultants who will both waste your time writing requirements and waste your money by taking more time than was initially planned and agreed upon (my experience, sample size N=6).


One of the most relevant and amazing blogs I have read in recent times.

I have been working for a firm that has been onboarding multiple small-scale startups and lifestyle businesses onto Kubernetes. My opinion is that if you have a Ruby on Rails or Python app, you don't really need Kubernetes. It is like bringing a bazooka to a knife fight. However, I do think Kubernetes has some good practices embedded in it, which I will always cherish.

If you are not operating at huge scale, in terms of operations and/or teams, it actually comes at a high cost in productivity and tech debt. I wish there were an easier technology that would bridge going from a VM to a bunch of VMs, and from a bunch of containers to Kubernetes.


> Kubernetes is our generation's Multics

Prove it. Create something simpler, more elegant and more principled that does the same job. (While you're at it, do the same for systemd which is often criticized for the same reasons.) Even a limited proof of concept would be helpful.

Plan9 and Inferno/Limbo were built as successors to *NIX to address process/environment isolation ("containerization") and distributed computing use cases from the ground up, but even these don't come close to providing a viable solution for everything that Kubernetes must be concerned with.


I can claim electric cars will beat out hydrogen cars in the long run. I don't have to build an electric car to back up this assertion. I can look at the fundamental factors at hand and project out based on theoretical maximums.

I can also claim humans will have longer lifespans in the future. I don't need to develop a life extending drug before I can hold that assertion.

Kubernetes is complex. Society still worked on simpler systems before we added layers of complexity. There are dozens of layers of abstraction above the level of transistors; it is not a stretch to think that a more elegant abstraction has yet to be designed, without it having to "prove" itself to zozobot234.


Claiming Kubernetes is Multics, and that UNIX is around the corner, is a worthless claim without actual data or an argument to back it up.

To me, Kubernetes is the new UNIX, centered around a small number of core ideas: controller loops, Pods, level-triggered events, and a fully open, well-standardized, declarative, and extensible RESTful API.

The various clouds and predecessor cloud orchestrators were the infinitely complicated beasts.

OP just linked to a few rants about the complexity of the CNCF ecosystem (not Kubernetes), and an extended cranky rant / thought exercise by the MetalLB guy. The latter is the closest to an actual argument against Kubernetes, but there’s a LOT to disagree with in that post.


What are the "fundamental factors at hand" with Kubernetes and software orchestration? How do you quantify these things?


> comments are intended to add color on the design of the Oil language and the motivation for the project as a whole.

Comments are also easier to write than code. He really does seem obligated to prove Kubernetes is our generation's Multics, and that's a good thing.


The successor will probably be a more integrated platform that provides a lot of the stuff you currently need sidecars, etc. for.

Probably a language with good IPC (designed for real distributed systems that handle failover), some unified auth library, and built-in metrics and logging.

A lot of real-life k8s complexity comes from trying to accommodate many supplemental systems for that stuff. Otherwise it's a job scheduler and haproxy.



Nomad also doesn't have a lot of features that are built into Kubernetes, features that otherwise require other HashiCorp tools. So now you have a Vault cluster, a Consul cluster, a Nomad cluster, then HCL to manage it all, and probably a Terraform Enterprise cluster. So what have you gained, besides the same amount of complexity with fewer features?


I think Nomad sounds like the direction the OP blog post is proposing to move in: a set of largely independent tools which can each address some aspect of the problem kubernetes is trying to solve.


> a set of largely independent tools which can each address some aspect of the problem kubernetes is trying to solve.

But Kubernetes is already this. Sure, the core is a lot bigger than something like Nomad, but some of it is replaceable, and there are plenty of simpler alternatives to the built-in components.

And anyway, my point still stands. What's the point of having 20 different independent systems that address the aspects K8s is trying to solve versus one big system that addresses all the headaches? To me having 20 different systems that potentially have many fundamental differences is more complex than a single system that has the same design philosophies and good integration across the board.


AWS's cloud primitives are certainly better. Of course they're not FOSS, but they prove orchestration can be done more simply.

https://ably.com/blog/no-we-dont-use-kubernetes

For local development (a must, imo), just rock a docker-compose.yml that emulates your cloud, which is orchestrated with Terraform/CloudFormation.
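A minimal sketch of that local-dev compose file, assuming a web app plus a managed database and some AWS-style services in the cloud (service names, images, and ports are hypothetical stand-ins):

    version: "3.8"
    services:
      web:
        build: .
        ports:
          - "8080:8080"
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app   # mirrors the managed DB connection string
        depends_on:
          - db
      db:
        image: postgres:13                # stand-in for RDS / Cloud SQL
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app
      localstack:
        image: localstack/localstack      # optional: emulates S3/SQS-style AWS services locally
        ports:
          - "4566:4566"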


This is absolutely not an alternative, not even close. AWS is exactly that: Amazon Web Services. Do you need to host your stuff somewhere else one day? Good luck re-inventing everything from scratch.

I am sort of a k8s hater myself, because I've seen very simple and straightforward production pipelines, reasonably well understood by admins, turn into over-complicated shit with buggy deploy pipelines, literally 10 times slower, that no one really understands. All of this to manage maybe 10 nodes per service. All of that said, I cannot deny that these new solutions are something that the previous generation of Ansible scripts and AWS primitives were not. Now we can move all of it to pretty much any infrastructure without changing much. And as much as I hate it, I don't really have an answer to "what else, if not Kubernetes?" that doesn't feel a little bit dishonest. I seriously would like to hear one.


Comment on your first point— I have done the work you speak of (porting AWS-specific code to other cloud providers). It is absolutely possible and relatively painless if you design for that feature at the outset. Almost all of the lower level AWS services have a counterpart in the other ecosystems.

So if you build the right interface abstractions around those components, it gets you a long way.


If you are running, say, a monolith in a container on Fargate, fronted by an ALB, talking to RDS Postgres or Aurora, there is not much complexity in moving that anywhere.


Needs to have a really serious branding first.

Like Yolodyne Cybernetrix


I feel like k8s sits in the same space as git: one of those tools that is ridiculously complex, obtuse, and user-unfriendly, but at the same time worth sucking it all up, because the win from consolidating your knowledge into something that is an industry standard is far greater than whatever particular things one doesn't like about how it works.

It is a fascinating dynamic, however, that generates these outcomes where a large number of people collectively settle on something that the majority of them seem to hate.


> A distributed OS that follows the Perlis-Thompson Principle would have fewer concepts.

Kubernetes is a relatively simple system with few concepts. You have manifests stored in etcd, behind the API server, and various controllers that act on these manifests. Some controllers (Deployment, StatefulSet, etc.) come standard out of the box, some are custom and added later. The basic unit of computation is a Pod, and DNS is provided with Services. Cluster administrators need to worry about the networking and storage layers, not cluster users. Honestly, that's pretty much it! Really not so complicated.

Now, does that help you write a manifest for the Deployment controller? No, and neither does it help you autoscale the Deployment by writing a manifest for the HorizontalPodAutoscaler controller, or set up a load balancer by writing a manifest for the Ingress controller. But I wouldn't call the UNIX model complex because Linux distributions and package managers add complexity.
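For reference, a minimal sketch of those core objects -- one manifest the Deployment controller reconciles, and one Service that gives the Pods a DNS name (names and image are hypothetical):

    apiVersion: apps/v1
    kind: Deployment               # the Deployment controller reconciles this manifest
    metadata:
      name: web
    spec:
      replicas: 3                  # desired state; the controller keeps 3 Pods running
      selector:
        matchLabels:
          app: web
      template:                    # the Pod template -- the basic unit of computation
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example.com/web:1.2.3   # hypothetical image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service                  # gives the Pods a stable DNS name inside the cluster
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080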


Kubernetes gets a lot of shade, and rightfully so. It’s a tough problem. I do hope we get a Ken Thompson or Rich Hickey-esque solution at some point.


I see the shade thrown at k8s... but honestly I don't know how much of it is truly deserved.

k8s is complex not unnecessarily, but because k8s is solving a large host of problems. It isn't JUST solving the problem of "what should be running where". It's solving problems like "how many instances should be where? How do I know what is good and what isn't? How do I route from instance A to instance b? How do I flag when a problem happens? How do I fix problems when they happen? How do I provide access to a shared resource or filesystem?"

It's doing a whole host of things that are often ignored by shade throwers.

I'm open to any solution that's actually simpler, but I'll bet you that by the time you've reached feature parity, you end up with the same complex mess.

The main critique I'd throw at k8s isn't that it's complex, it's that there are too many options to do the same thing.


I think part of the shade throwing is that k8s has a high lower bound of scale/complexity "entry fee" where it actually makes sense. If your scale/complexity envelope is below that lower bound, you're fighting k8s, wasting time, or wasting resources.

Unfortunately unless you've got a lot of k8s experience that scale/complexity lower bound isn't super obvious. It's also possible to have your scale/complexity accelerate from "k8s isn't worthwhile" to "oh shit get me some k8s" pretty quickly without obvious signs. That just compounds the TMTOWTDI choice paralysis problems.

So you get people that choose k8s when it doesn't make sense and have a bad time and then throw shade. They didn't know ahead of time it wouldn't make sense and only learned through the experience. There's a lot of projects like k8s that don't advertise their sharp edges or entry fee very well.


> I think part of the shade throwing is that k8s has a high lower bound of scale/complexity "entry fee" where it actually makes sense. If your scale/complexity envelope is below that lower bound, you're fighting k8s, wasting time, or wasting resources.

Maybe compared to Heroku or similar, but compared to a world where you're managing more than a couple of VMs I think Kubernetes becomes compelling quickly. Specifically, when people think about VMs they seem to forget all of the stuff that goes into getting VMs working which largely comes with cloud-provider managed Kubernetes (especially if you install a couple of handy operators like cert-manager and external-dns): instance profiles, AMIs, auto-scaling groups, key management, cert management, DNS records, init scripts, infra as code, ssh configuration, log exfiltration, monitoring, process management, etc. And then there's training new employees to understand your bespoke system versus hiring employees who know Kubernetes or training them with the ample training material. Similarly, when you have a problem with your bespoke system, how much work will it be to Google it versus a standard Kubernetes error?
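As one illustration of how much of that list collapses into a few annotations once those operators are installed -- a sketch, assuming cert-manager with a ClusterIssuer named letsencrypt-prod and external-dns configured to watch Ingress hosts (names and hostname are hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager issues and renews the TLS cert
    spec:
      rules:
        - host: app.example.com            # external-dns creates the DNS record for this host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80
      tls:
        - hosts:
            - app.example.com
          secretName: web-tls              # cert-manager stores the issued certificate here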

Also, Kubernetes is really new and it is getting better at a rapid pace, so when you're making the "Kubernetes vs X" calculation, consider the trend: where will each technology be in a few years. Consider how little work you would have to do to get the benefits from Kubernetes vs building those improvements yourself on your bespoke system.


Honestly, the non-k8s cloud software is also getting excellent. When I have a new app that I can't containerize (network proxies mostly) I can modify my standard terraform pretty quickly and get multi-AZ, customized AMIs, per-app user-data.sh, restart on failures, etc. with private certs and our suite of required IPS daemons, etc. It's way better than pre-cloud things. K8s seems also good for larger scale and where you have a bunch of PD teams wanting to deploy stuff with people that can generate all the YAML/annotations etc. If your deploy #s scale with the number of people that can do it, then k8s works awesomely. If you have just 1 person doing a bunch of stuff, simpler things can let that 1 person manage and create a lot of compute in the cloud.


K8s is the semi truck of software: great for semi-scale things, but often used when a van would do just fine.


To me, usefulness is less to do with scale and more to do with number of distinct services.

If you have just a single monolith app (such as a wordpress app) then sure, k8s is overkill. Even if you have 1000 instances of that app.

It's once you start having something like 20+ distinct services that k8s starts paying for itself.


Especially with 10 distinct development teams that all have someone smart enough to crank out some YAML with their specific requirements.


Kubernetes is an aircraft carrier, where most people just need a skiff.


> how many instances should be where?

Are you referring to instances of your application, or EC2 instances? If instances of your application, in my experience it doesn't really do much for you unless you are willing to waste compute resources. It takes a lot of dialing in to effectively colocate multiple pods and maximize your resource utilization. If you're referring to EC2 instances, well, AWS autoscaling does that for you.

Amazon and other cloud providers have the advantage of years of tuning their virtual machine deployment strategies to provide maximum insulation from disruptive neighbors. If you are running your own Kubernetes installation, you have to figure it out yourself.

> How do I know what is good and what isn't?

Autoscaling w/ a load balancer does this trivially with a health check, and it's also self-healing.

> How do I route from instance A to instance b?

You don't have to know or care about this if you're in a simple VPC. If you are in multiple VPCs or a more complex single VPC setup, you have to figure it out anyway because Kubernetes isn't magic.

> How do I flag when a problem happens?

Probably a dedicated service that does some monitoring, which as far as I know is still standard practice for the industry. Kubernetes doesn't make that go away.

> How do I fix problems when they happen?

This is such a generic question that I'm not sure how you felt it could be included. Kubernetes isn't magic, your stuff doesn't always just magically work because Kubernetes is running underneath it.

> How do I provide access to a shared resource or filesystem?

Amazon EFS is one way. It works fine. Ideally you are not using EFS and prefer something like S3, if that meets your needs.

> It's doing a whole host of things that are often ignored by shade throwers.

I don't think they're ignored; I think you assume they are because those things aren't talked about. They aren't talked about because they aren't an issue with Kubernetes.

The problem with Kubernetes is that it is a massively complex system that needs to be understood by its administrators. The problem it solves overlaps nearly entirely with existing solutions that it depends on. And it introduces its own set of issues via complexity and the breakneck pace of development.

You don't get to just ignore the underlying cloud provider technology that Kubernetes is interfacing with just because it abstracts those away. You have to be able to diagnose and respond to cloud provider issues _in addition_ to those that might be Kubernetes-centric.

So yes, Kubernetes does solve some problems. Do the problems it solves outweigh the problems it introduces? I am not sure about that. My experience with Kubernetes is limited to troubleshooting issues with Kubernetes ~1.6, which we got rid of because we regularly ran into annoying problems. Things like:

* We scaled up and then back down, and now there are multiple nodes running 1 pod and wasting most of their compute resources.

* Kubernetes would try to add routes to a route table that was full, and attempts to route traffic to new pods would fail.

* The local disk of a node would fill up because of one bad actor and impact multiple services.

At my workplace, we build AMIs that bake in their Docker image and run the Docker container when the instance launches. There are some additional things we had to take on because of that, but the total complexity is far less than what Kubernetes brings. Additionally, we have the side benefit of being insulated from Docker Hub outages.


I think a large part of the problem is that systems like Kubernetes are designed to be extensible with a plugin architecture in mind. Simple applications usually have one way of doing things but they are really good at it.

This begs the question of whether there is a right or wrong way of doing things, and whether a single system can adapt fast enough to the rapidly changing underlying strategies, protocols, and languages to always be at the forefront of what is considered best practice at all levels of development and deployment.

These unified approaches usually manifest themselves as each cloud provider's best-practice playbooks, but each public cloud is different. Unless something like Kubernetes can build a unified approach across all cloud providers and self-hosting solutions, it will always be overly complex, because it will always be changing for each provider as they maximize their interest in adding their unique services.


Having used Kubernetes for a while, I'm of the opinion that it's not so much complex as it is foreign; when we learn Kubernetes we're confronted with a bunch of new concepts all at once, even though each of the concepts is pretty simple. For example, people are used to Ansible or Terraform managing their changes, and the "controllers continuously reconciling" model takes a bit to wrap one's head around.

And then there are all of the different kinds of resources and the general UX problem of managing errors ("I created an ingress but I can't talk to my service" is a kind of error that requires experience to understand how to debug because the UX is so bad, similarly all of the different pod state errors). It's not fundamentally complex, however.

The bits that are legitimately complex seem to involve setting up a Kubernetes distribution (configuring an ingress controller, load balancer provider, persistent volume providers, etc) which are mostly taken care of for you by your cloud provider. I also think this complexity will be resolved with open source distributions (think "Linux distributions", but for Kubernetes)--we already have some of these but they're half-baked at this point (e.g., k3s has local storage providers but that's not a serious persistence solution). I can imagine a world where a distribution comes with out-of-the-box support for not only the low level stuff (load balancers, ingress controllers, persistence, etc) but also higher level stuff like auto-rotating certs and DNS. I think this will come in a few years but it will take a while for it to be fleshed out.

Beyond that, a lot of the apparent "complexity" is just ecosystem churn--we have this new way of doing things and it empowers a lot of new patterns and practices and technologies and the industry needs time and experience to sort out what works and what doesn't work.

To the extent I think this could be simplified, I think it will mostly be shoring up conventions, building "distributions" that come with the right things and encourage the right practices. I think in time when we have to worry less about packaging legacy monolith applications, we might be able to move away from containers and toward something more like unikernels (you don't need to ship a whole userland with every application now that we're starting to write applications that don't assume they're deployed onto a particular Linux distribution). But for now Kubernetes is the bridge between old school monoliths (and importantly, the culture, practices, and org model for building and operating these monoliths) and the new devops / microservices / etc world.


I have borg experience and my experience with k8s was extremely negative. Most of my time was spent diagnosing problems self-inflicted by the k8s framework.

I've been trying nomad lately and it's a bit more direct.


I think that's because Borg comes with a team of engineers who keep it running and make it easy.

I've had a similar experience with Cassandra. Using Cassandra at Netflix was a joy because it always just worked. But there was also a team of engineers who made sure that was the case. Running it elsewhere was always fraught with peril.


yes several of the big benefits are: the people who run borg (and the ecosystem) are well run (for the most part). And, the ability to find them in chat and get them to fix things for you (or explain some sharp edge).


I have borg experience and I think Kubernetes is great. Before borg, I would basically never touch production -- I would let someone else handle all that because it was always a pain. When I left Google, I had to start releasing software (because every other developer is also in that "let someone else handle it" mindset), and Kubernetes removed a lot of the pain. Write a manifest. Change the version. Apply. Your new shit is running. If it crashes, traffic is still directed to the working replicas. Everyone on my team can release their code to any environment with a single click. Nobody has ever ssh'd to production. It just works.

I do understand people's complaints, however.

Setting up "the rest" of the system involves making a lot of decisions. Observability requires application support, and you have to set up the infrastructure yourself. People generally aren't willing to do that, and so are upset when their favorite application doesn't work their favorite observability stack. (I remember being upset that my traces didn't propagate from Envoy to Grafana, because Envoy uses the Zipkin propagation protocol and Grafana uses Jaeger. However, Grafana is open source and I just added that feature. Took about 15 minutes and they released it a few days later, so... the option is available to people that demand perfection.)

Auth is another issue that has been punted on. Maybe your cloud provider has something. Maybe you bought something. Maybe the app you want to run supports OIDC. To me, the dream of the container world is that applications don't have to focus on these things -- there is just persistent authentication intrinsic to the environment, and your app can collect signals and make a decision if absolutely necessary. But that's not the way it worked out -- BeyondCorp style authentication proxies lost to OIDC. So if you write an application, your team will be spending the first month wiring that in, and the second month documenting all the quirks with Okta, Auth0, Google, Github, Gitlab, Bitbucket, and whatever other OIDC upstreams exist. Big disaster. (I wrote https://github.com/jrockway/jsso2 and so this isn't a problem for me personally. I can run any service I want in my Kubernetes cluster, and authenticate to it with my FaceID on my phone, or a touch of my Yubikey on my desktop. Applications that want my identity can read the signed header with extra information and verify it against a public key. But, self-hosting auth is not a moneymaking business, so OIDC is here to stay, wasting thousands of hours of software engineering time a day.)

Ingress is the worst of Kubernetes' APIs. My customers run into Ingress problems every day, because we use gRPC and keeping HTTP/2 streams intact from client to backend is not something it handles well. I have completely written it off -- it is underspecified to the point of causing harm, and I'm shocked when I hear about people using it in production. I just use Envoy and have an xDS layer to integrate with Kubernetes, and it does exactly what it should do, and no more. (I would like some DNS IaC though.)

Many things associated with Kubernetes are imperfect, like Gitops. A lot of people have trouble with the stack that pushes software to production, and there should be some sort of standard here. (I use ShipIt, a Go program to edit manifests https://github.com/pachyderm/version-bump, and ArgoCD, and am very happy. But it was real engineering work to set that up, and releasing new versions of in-house code is a big problem that there should be a simple solution to.)

Most of these things are not problems brought about by Kubernetes, of course. If you just have a Linux box, you still have to configure auth and observability. But also, your website goes down when the power supply in the computer dies. So I think Kubernetes is an improvement.

The thing that will kill Kubernetes, though, is Helm. I'm out of time to write this comment but I promise a thorough analysis and rant in the future ;)


Helm's biggest problem is...

Let me rephrase that. ONE of Helm's biggest problems is that it uses text-based templating, instead of some sort of templating system that understands the thing it's actually trying to template.

This makes some things much MUCH harder than they should need to be.

It makes it really hard to have your configuration bridge things like "you have this much RAM" or "this is the CPU you have" to flags or environment variables that your code can understand.

It also makes it hard to compose configuration.

As much as I don't like BCL, it is depressingly good at being a job configuration language for "run things in the cloud".


I think you actually touch on three good points here. One is that "foo: {{ var }}" is not a hygienic template. If var is equal to "bar\nbaz: quux", you've injected hard-to-debug additional keys into the output. The next is that there are common pieces that Kubernetes defines, and they are all demoted to map[string]interface{}. For example, a lot of charts have "resources" attached to applications, and those are (in Go land) v1.ResourceRequirements. But it could be anything in Helm, it's just a JSON object. So Helm itself can't say "you typed 1000M cpu, but probably meant 1000m cpu". And finally, each chart has total latitude to name anything whatever it wants. One chart could say "myapp: { cpu: 42 }" and another configures that as "yourapp: { resources: { requests: { cpu: 42 } } }". You get to learn Kubernetes all over again for each app. With zero documentation, usually, except a values.yaml to cut-n-paste from. (My success rate is low. Every Helm app I've installed has required me to read the source code to get it to do what I want. But, other people have better luck, to be fair.)

On top of all that, the value that Helm delivers to people is "you don't have to read the documentation for Deployment to make a Deployment". But then you have to debug that, and you have another layer of complexity bundled on top of your already weak understanding of the core.

Like I get that Kubernetes asks you a lot of questions just to run a container. But they are all good questions, and the answers are important. Just answer the questions and be happy. (Yes, you need to know approximately how much memory your application uses. You needed to know that in the old pet computer era too -- you had to pick some amount to buy at the memory store. Now it's just a field in a YAML file, but the answer is just as critical. A helm chart can set guesses, and if that makes you feel better, maybe that's the value it delivers. But one day, you'll find the guess is wrong, and realize you didn't save any time.)
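A tiny sketch of the hygiene problem described above, using a hypothetical chart and Helm's Go-template syntax; the chart name and key are made up, the behavior is just what text substitution does:

    # templates/configmap.yaml -- the value is spliced in as raw text
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: myapp-config
    data:
      foo: {{ .Values.foo }}

    # values.yaml:
    #   foo: "bar\nbaz: quux"
    #
    # Rendered output: the embedded newline escapes the scalar, so a stray
    # "baz: quux" line appears as an unintended key (or breaks parsing entirely):
    #   data:
    #     foo: bar
    #   baz: quux
    #
    # {{ .Values.foo | quote }} (or toYaml with indent) would have kept it a single string,
    # but nothing in the tooling forces the chart author to do that.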


And crucially, once you have given a resource limit, there's no way to (trivially) feed that back into an environment variable or flag to signal that to the app runtime (which, IIRC, is Really Handy for Java-based apps and can seriously improve the performance of Go-based ones).


Twice today I had to explain to coworkers that "auth is one of the hardest problems in computer science".

For gRPC and HTTP/2: are you doing end-to-end gRPC (i.e., the TCP connection goes from a user's browser all the way to your backend, without being terminated or proxied)?


I don't think I have raw HTTP/2 streams from user to service anywhere. My preference is to have Envoy in the middle doing routing/statistics, and so the TCP session is not preserved from frontend to backend. Each request/response could be handled by a different backend instance. (I don't think Envoy strictly requires this, however; upgrade/websockets work somehow. But maybe only on HTTP/1.1.) This is generally what people want their load balancer to do; a common complaint is that gRPC opens long-lived streams (channels, actually, using their term), and so one client can overload one backend, when the other 100 replicas could happily handle their request/replies. (gRPC's mechanism for state between requests and replies is server stream/client stream/bidirectional stream, which is different than channels. The individual messages in streams can't be split between backends, and so the load balancer won't interfere with that.)

At work we have a service that communicates to clients over gRPC (the CLI app is a gRPC client). We typically deploy that as two ports on the load balancer, one for gRPC and the other for HTTPS. Again, the TCP connection isn't actually preserved while transiting the load balancer, but it's logically a L4 operation -- one client channel is one server channel. If the backend becomes unhealthy, you'll have to open a new channel to the load balancer to get a different backend. (This doesn't really come up for us, because people mostly run a single replica of the service.)


There are some attempts to gradually find alternatives to Helm while remaining compatible with it. See https://carvel.dev/ for example.

There is a lot of innovation possible in this space.


> The thing that will kill Kubernetes, though, is Helm. I'm out of time to write this comment but I promise a thorough analysis and rant in the future ;)

Too much of a cliffhanger! Now I want to know your POV :)


Ever since Microsoft acquired the company behind Helm and https://news.ycombinator.com/item?id=11922299 (try clicking the article link), it has been used as a showcase when onboarding Azure customers, to somehow prove that "yeah, Azure is hip and we love open source".

So, yes, we need to know.


I don't know why anyone uses Helm. I've done a fair amount of stuff with k8s and never saw the need. The built-in kustomize is simple and flexible enough.


I use Helm because I haven't found another tool that deletes resources in the cluster when I delete them from the YAML. kubectl apply --prune is unstable and super buggy. I would love to ditch Helm. Is there a tool I should know about that covers this?


Take a look at Kapp on https://carvel.dev/ for this, possibly.


kpt live apply prunes resources


Ditto.

Granted, I have to assume that borg-sre, etc. etc. are doing a lot of the necessary basic work for us, but as far as the final experience goes?

95% of cases could be better solved by a traditional approach. NixOps maybe.


If Antoine de Saint-Exupery was right that "perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away," then IT as an industry is heading further and further away from perfection at an exponentially accelerating rate.

The only example I can think of where a modern community is actively seeking to simplify things is Clojure. Rich Hickey is very clear on the problem of building more and more complicated stuff and is actively trying to create software by composing genuinely simpler parts.


I'd argue that achieving perfection is not a linear process. Sometimes you have to add way too many things before you can remove all of the useless ones.

Nobody is puppeteering some grand master plan, we're on a journey of discovery. When we're honest with ourselves, we realize nobody knows what will stick and what won't.


Absolutely, but dogma and "best-practices" anchor design discussions around today's norms. People get very defensive about tools they've invested in and that kind of dogma stunts imagination for different and better solutions.

Discovery is very rarely an accidental process so we can't take for granted that it will be inevitable.

I think it's important to recognize that most people are not interested in discovery at all. Practitioners are often not explorers, and that's okay. They may find incremental improvements through their practice, but paradigm shifting innovation comes from those willing to swim against the stream of popular opinion.

Discovery has to be an intentional pursuit of those brave enough to imagine a future beyond Multics/Kubernetes/etc despite the torrent of opinionated naysayers telling them they are foolish for even trying.


I guess I completely disagree. Discovery is nothing except a series of accidents and happenstance.

Nobody gets anything difficult right on the first try, and there’s an arrogance in thinking we could.


If you understand the quote to mean that the process of achieving perfection can only consist of removing things rather than adding them, how do you know whether you've really achieved perfection or just reached a local optimum?


Jonathan Blow has also been vocal in that regard.


Consider looking into Fuchsia's component framework for thoughts on what a distributed application looks like inside an operating system. https://fuchsia.dev/fuchsia-src/concepts/components/v2/intro...


Okay, right off the bat, the author is already giving himself answers:

> Essentially, this means that it [k8s] will have fewer concepts and be more compositional.

Well, that's already the case! At its base, k8s is literally a while loop that converges resources to their desired states.

You CAN strip it down to your liking. However, as it is usually distributed, it would be useless to distribute it with nothing but the scheduler and the API ...

I do get the author's point. At a certain point it becomes bloated. But I find that when used correctly, it is adequately complex for the problems it solves.


After reading the title I worried this was going to be yet another k8s bashing post. Pleasantly surprised to see this take because it’s a refreshing look at kube and I strongly agree. I think it’s the absolute best way to deploy large systems today, especially if you’re a polyglot organization. But it can be tough to grok without lots of labbing and experimentation - it’s hard to approach.

We are really at the infancy of containerization. Kube is a springboard for doing the next big thing.


It looks to be getting more complex too. I understand the sales pitch for a service mesh like Istio, but now we're layering something fairly complicated on top of K8s. Similar for other aspects like bolt-on secrets managers, logging, deployment, etc., run through even more abstractions.



Whatever Kubernetes' flaws, the analogy is clearly wrong. Multics was never a success and never had wide deployment, so Unix never had to compete with it. Once an OS is widely deployed, efforts to get rid of it have a different dynamic (see the history of desktop computing, etc). Especially, getting rid of any deployed, working system (OS, application, language, chip instruction set, etc.) in the name of simplicity is inherently difficult. Everyone agrees things should be stripped down to a bare minimum, but no one agrees on what that bare minimum is.


Agreed; I think a better analogy for Kubernetes is XML. So many wasted meetings about where to split up namespaces and whether every last thing should be an attribute or a subtag; none of that added business value. JSON took all those decisions off the table. And yes, huge industrial users validly complained that JSON didn't cover X or Y or Z, but for most users JSON is a much better solution than XML.

Kubernetes reminds me a lot of XML; there are too many decision points adding unnecessary complexity for the average user's needs. Too many foot guns. Too many unintuitive things.

People keep on describing it as "declarative", which seems to be about as true as saying that Java is a functional language. Hopefully someday we'll have something actually declarative, and much more intuitive, something more like AWS's CDK.


How is Kubernetes not declarative?

I don’t disagree about the exposed complexity, that’s a fundamental decision Kubernetes made about openness and extensibility. Everything is on a level playing field, there are no private APIs.


As I recall, running "kubectl edit deployment..." doesn't do anything except edit the definition of the config. Instead, to have it take effect you seem to have to manually kill pods, and the new pods will come up with the edited config. If it were declarative, it should detect what needs to be changed, and automatically update accordingly. Same thing with editing a config. It's possible it was the funnel my local DevOps forced on me (and lacking needed permissions at every turn), but my experience was that if you removed deployments, configs, etc on the next deployment, nothing would be cleaned up and you had to manually remove. Again, that's not declarative.

In my experience Terraform and CDK are much more declarative: you never issue commands to delete a pod or a load balancer or similar. Instead you describe what you want, and their engine figures out what it needs to add, remove, or change to get to that state.


That’s not accurate; kubectl edit (or an apply on an existing resource) does immediately detect what needs changing.

For example if you edit a deployment, it will create a new ReplicaSet and new pods and do a gradual rollout from the old one.

There’s corner cases where a controller won’t let you edit certain fields of a resource because they didn’t cover that case, but that’s relatively rare.

Deleting a pod, which IME isn’t too common day to day but can be useful to recover from some failure conditions (usually low-level problems with the node, storage, or network), is also a demonstration of declarative reactions at work: if it was created by a controller it will be immediately recreated. Pods are meant to be ephemeral.

Terraform certainly is declarative, but it isn’t typically used as an engine that enables high availability and autoscaling by scanning its declarative state and comparing it to the real world. This is what Kubernetes excels at - continually scanning and reacting to changes in the world. Terraform, I have found, is tricky to run continuously; any out-of-band state change can lead to it blowing away your resources.


That's not been my experience at all. Have had to manually delete pods all the time. Is it possible that this was something fixed in newer versions?

Example case: DevOps pushed out a new version of Istio (without talking with anyone) and even though the container configs are referencing the new version of Istio, only half of the pods in the namespace got restarted, so we get paged because a number of services can't make any network connections with the other services. Had to manually delete all the pods, and then the new pods all came up with the right version of Istio and are able to communicate again.

On a side note: how is it at all acceptable to have a networking "mesh" that isn't backwards compatible? I can count on no hands the number of times that my fargate/lambda services couldn't communicate because half of my fleet is running a different version of VPC. Thus far my experience with Istio is that it has never added any business value (for projects I've been involved in), and only adds complexity, headaches, and downtime.

Back to the declarative thing: I'm fairly confident I've edited service configs, added service configs, edited the container image, and container environment variables, and never saw kubernetes restart anything automatically; had to manually delete.


Istio is a whole different and very advanced beast, maintained outside of the Kubernetes core, and not for the faint of heart.

The issue there is that it literally needs to rewrite the pod YAML to inject the sidecar envoy proxy. So say you want to upgrade Istio. Well Istio needs to change the Pod spec, and it doesn’t do this automatically. If you look at the upgrade instructions here: https://istio.io/latest/docs/setup/upgrade/in-place/#upgrade...

Step 6 is “After istioctl completes the upgrade, you must manually update the Istio data plane by restarting any pods with Istio sidecars:

$ kubectl rollout restart deployment”

Istio can be useful (most security teams want it for auto-mTLS, it could also save you from firewall hell by using layer 7 authorization policies, and it can do failover across DCs pretty well) but it is crazy to use on its own as unsupported vanilla OSS without a distro like Solo, Tetrate, Tanzu, Kong, etc., or without significant automation to make upgrades transparent. Istio is often very frustrating to me because of cases like yours: it’s too easy to make a mess of it. There are much easier approaches that cover 80% (an ingress controller like Contour or nginx + cert-manager).

On editing configs, one area Kubernetes does NOT react to is ConfigMaps and Secrets being updated. Editing an Image or Env var in a ReplicaSet or Deployment will definitely trigger a pod recreate (I see this daily).

Though take a look at Kapp (https://carvel.dev/kapp/), which provides clearer rollout visibility and can version ConfigMaps and trigger reactions to them updating; there is also Reloader: https://github.com/stakater/Reloader
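For the ConfigMap/Secret case, a sketch of how Reloader is typically wired up, assuming the project's stock reloader.stakater.com/auto annotation (workload names and image are hypothetical); with this in place, editing the referenced ConfigMap triggers a rolling restart instead of requiring manual pod deletion:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
      annotations:
        reloader.stakater.com/auto: "true"    # Reloader restarts this workload when its ConfigMaps/Secrets change
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example.com/web:1.2.3    # hypothetical image
              envFrom:
                - configMapRef:
                    name: web-config          # editing this ConfigMap now triggers a rollout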


It's called "Images and Feelings", but I quite dislike using a the Cloud Native Computing Foundation's quite busy map of services/offerings as evidence against Kubernetes. That lots of people have adopted this, and built different tools & systems around it & to help it is not a downside.

I really enjoy the Oil blog, & was really looking forward, when I clicked the link, to some good real criticism. But it feels to me like most of the criticism I see: highly emotional, really averse/afraid/reactionary. It wants something easier and simpler, which is so common.

I cannot emphasize enough, just do it anyways. There's a lot of arguments from both sides about trying to assess what level of complexity you need, about trying to right size what you roll with. This outlook of fear & doubt & skepticism I think does a huge disservice. A can do, jump in, eager attitude, at many levels of scale, is a huge boon, and it will build skills & familiarity you will almost certainly be able to continue to use & enjoy for a long time. Trying to do less is harder, much harder, than doing the right/good/better job: you will endlessly hunt for solutions, for better ways, and there will be fields of possibilities you must select from, must build & assemble yourself. Be thankful.

Be thankful you have something integrative, be thankful you have common cloud software you can enjoy that is cross-vendor, be thankful there are so many different concerns that are managed under this tent.

The build/deploy pipeline is still a bit rough, and you'll have to pick/build it out. Kubernetes manifests are a bit big in size, true, but it's really not a problem; it really is there for basically good purpose & some refactoring wouldn't really change what it is. There are some things that could be better. But getting started is surprisingly easy, surprisingly not heavy. There's a weird emotional war going on, it's easy to be convinced to be scared, to join in with reactionary behaviors, but I really have seen nothing nearly so well composed, nothing that fits together so many different pieces well, and Kubernetes makes it fantastically easy imo to throw up a couple of containers & have them just run, behind a load balancer, talking to a database, which covers a huge amount of our use cases.


I like this title so much I am finally going to give this shell a try. One thing I notice right away is readline. Could editline also be an option? (There are two "editlines", the NetBSD one and an older one at https://github.com/troglobit/editline) The next thing I notice is the use of ANSI codes by default. Could that be a compile-time option, or do we have to edit the source to remove it?

TBH I think the graphical web browser is the current generation's Multics. Something that is overly complex, corporatised, and capable of being replaced by something simpler.

I am not steeped in Kubernetes or its reason for being but it sounds like it is filling a void of shell know-how amongst its audience. Or perhaps it is addressing a common dislike of the shell by some group of developers. I am not a developer and I love the shell.

It is one thing that generally does not change much from year to year. I can safely create things with it (same way people have made build systems with it) that last forever. These things just keep running from one decade to the next no matter what the current "trends" are. Usually smaller and faster, too.


Kubernetes is designed similarly to the shell: the APIs are a uniform interface, designed for stabilization, while resources are composable and extensible through it.

If you use the stable APIs, your code will run for decades. My hypothetical deployment from 2016 will not need touching (beyond image updates for CVEs) to keep running in 2026 or 2036.
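
As a rough illustration, this is the kind of manifest that stays entirely within the stable APIs (apps/v1 has been GA since Kubernetes 1.9; the image name is made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.0   # hypothetical image
            ports:
            - containerPort: 8080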


I think that all this boils down to a rather simple dilemma for modern cloud-native infrastructure platforms [in terms of developer experience, i.e., external APIs etc., not internal architecture; and this is not even limited to this class of systems - it is a general concept for all software systems]: a) universal, highly configurable & complex (K8s), OR b) highly opinionated and [relatively] simple (e.g., Nomad/Waypoint, Heroku, Apollo, CapRover, Dokku, Porter, AWS Elastic Beanstalk, Digital Ocean's App Platform, Fly, Render). Obviously, there exists a middle-ground category as well: relatively simple, but still opinionated and moderately or highly (e.g., OpenShift) configurable platforms. Thus, the optimal choice depends on the relevant team's or organization's priorities with respect to those attributes (configurability, complexity, level & scope of opinionation) as well as the level of organizational standardization for IT environments, economic factors, vendor lock-in considerations and, perhaps, something else that I forgot to mention.


No, Multics was easier to understand, easier to manage, and more reliable.

However Multics didn't offer automatic/elastic cloud scaling, which seems to be the main selling point of modern, usually very complicated, container orchestration systems, nor was it designed for building distributed systems.

However, if modern Linux had a Multics-style ring architecture, it could replace many of the uses for virtualization and containers.


Add the two cents of http://adamierymenko.com/ports.html

"Since we chose the path of virtualization and containerization we've allowed the multi-tenancy facilities in Unix to atrophy and it would take a little bit of work to bring them back into form."


I wish that were so.

Multics made a big splash in the literature but in terms of use it was an obscure os on an obscure mainframe. It had nothing on TOPS-20 or VM/CMS.

Unfortunately many of us are suffering with Kube.


Hi Andy: if you see this, I'm the other 4d polygon renderer! I read the kubernetes whitepaper after RC and ended up spending a lot of the last year on it. Maybe if I had asked you about working with Borg I could have saved myself some trouble. Glad to see you're still very active!


Hi :) Yeah I think it's an interesting topic, and I'm not saying anyone should necessarily be doing something different. But if it "feels wrong", then that's not too surprising to me :) I'd be interested in hearing about any k8s experiences.


Sure, you don't have to use k8s. You can roll your own solutions to what it solves.

Your own custom-built solution will work, but what about in 5 years? 10 years? When it all becomes legacy, what then?

Will you find the talent who'll want to fix your esoteric environment, just like those COBOL devs?

Will anyone respond to your job posts to fix your snowflake environment? Will you pay above-average wages to fix your snowflake ways of solving problems that k8s standardized?

I bet your C-level is thinking this. What's to say they won't rip out all of your awesomeness and replace it with standard k8s down the line as it dominates the market share?

When you're laid off in the next recession, is your amazing problem-solving on your snowflake environment going to help you when everyone else is fully well versed with k8s?


Whoa dude, ease up on the Kool-Aid


Personally, I think this is an extremely mild version of the dire situation that most teams working with legacy systems often find themselves in.


Is it really that complex compared to an operating system like Unix though? I mean there's nothing simple about Unix. To me the question is, is it solving a problem that people have in a reasonably simple way? And it seems like it definitely does. I think the hate comes from people using it where it's not appropriate, but then, just don't use it in the wrong place, like anything of this nature.

And honestly its complexity is way overblown. There are like 10 important concepts, and most of what you do is run "kubectl apply -f somefile.yaml". I mean, services are DNS entries, deployments are collections of pods, and a pod is a self-contained server. None of these things are hard?
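
To make the Service point concrete, here's a minimal sketch (names are made up): the object below becomes a stable in-cluster DNS name that load-balances over whatever pods match its selector, and you apply it with "kubectl apply -f service.yaml".

    apiVersion: v1
    kind: Service
    metadata:
      name: web            # resolvable in-cluster as "web" (web.<namespace>.svc.cluster.local)
    spec:
      selector:
        app: web           # routes to any pod carrying this label
      ports:
      - port: 80           # port the Service exposes
        targetPort: 8080   # port the pods actually listen on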


What's complex about *nix? All you need to understand are device files, POSIX permissions and ACLs, cgroups, tcp/udp sockets, nginx/haproxy, thread/process scheduling, (virtual) memory, PAM, dbus, syslog, pipes, unix sockets, 30 filesystem options, nfs, userspace vs. kernel space, sysvinit or 10 flavors of systemd files, iptables/ufw, networkmanager, ssh, selinux, chroot, flatpak, snaps, rpm, deb, ansible/chef/puppet.

Oh deploying on the cloud? Cloudformation/AzureRM as well.

Pretty easy. No damn complex k8s needed.


The irony in your comment is that tools like NetworkManager, snaps, and systemd are Kubernetes-like, and are severely disliked by experienced Unix admins due to their needless complexity and poor usability.


Well, given that Multics was much more secure than UNIX ever was, and written in a proper systems programming language that everyone (except UNIX folks) is trying to get back to, it probably isn't that bad after all.


> proper systems programming language that everyone (except UNIX folks) is trying to get back to

Wikipedia: Written in PL/I, Assembly language

????


I advise you to learn about the safety capabilities regarding strings, arrays, pointer manipulation and references, numerics and enumerations in PL/I versus C.

Additionally, you can go over to Multicians and read the security assessment reports of Multics vs UNIX done by the DoD, back in the day.


I agree a lot with his premise, that Kubernetes is too complex, but not at all with his alternative to go even lower level.

And the alternative of doing everything yourself isn't much better either: you need to learn all sorts of cloud concepts.

The better alternative is a higher level abstraction that takes care of all of this for you, so an average engineer building an API does not need to worry about all these low level details, kind of like how serverless completely removed the need to deal with instances (I'm building this).


That sounds like Knative.


I haven't heard of that. Took a look and it still seems too low level. I think we need to think much bigger in this space. Btw, we're not approaching this from a Kubernetes angle at all.


My problem with k8s is that you learn OS concepts, and then k8s/Docker shits all over them.


Yes, this is a core part of the design issue and argument I'm making.

The new concepts are leaky abstractions -- they wrap the old ones badly. You still have to understand both to understand the system. Networking in k8s seems to really suffer from this.

And the new concepts and old concepts don't compose. They create combinatorial problems, i.e. O(M*N) amounts of glue code.


It's a double whammy: you get the complexity of Kubernetes, and then you get to exec into a Docker image that has been stripped of any useful debugging tools under the guise of security.

It's even better when it's a BusyBox-based image, for that Linksys-router/'80s-Unix troubleshooting experience.


K8s abstracts away much more complexity than it exposes, which is the hallmark of a great API. History will surely view it amongst the greatest APIs of all time.


Anyone want to fill me in on what this "Perlis-Thompson Principle" is?


I still have to explain it properly, but there is a pretty good sketch on a recent blog post, linked from this comment. (You will probably end up chasing a lot of comment threads, but it's mostly there.)

https://news.ycombinator.com/item?id=27914632

It's an argument about avoiding O(M*N) glue code. O(M*N) amounts of code are expensive to write, and contain O(M*N) numbers of bugs.



I had to Google it and scroll a blog post.


This whole article is, well, a little silly. It says that Kubernetes will disappear and be replaced by something simpler, because it's very difficult to create reliable systems that use it.

But...there are tons of reliable systems at Google, all using Borg, and that has a lot of features Kubernetes doesn't have.

Stripping down Kubernetes doesn't reduce complexity. It just shifts it.


I don't agree. I worked at Google for over 10 years, during the time when SREs started to make as much or more money than SWEs. There's a reason for that.

I also disagree that the systems are reliable. From the outside, most of the stateless services are fast and reliable; the stateful ones less so. From the inside, no: internal services were unreliable and slow. (This could have changed in the last 5 years, but there was a clear trend in one direction in my time there.) There were many more internal services on Borg than external ones.


I thought that Kubernetes is our generation's JCL (Job Control Language on IBM mainframes); there is a remote similarity in how we write descriptors for tasks, then submit them for execution and wait till the mainframe has considered our specification. (Suddenly feeling old because of this comparison...)


Yup lol, I've had this same thought. It's like neo-Tuxedo which is basically a mainframe TPS for UNIX

https://en.wikipedia.org/wiki/Tuxedo_(software)


It's funny when you think of it: most of this distributed-system magic was already there on the old mainframe, in some form. And it was there for ages...


Eh. Kubernetes is complex, but I think a lot of that is that computing is complex, and Kubernetes doesn't hide it.

Your UNIX system runs many daemons you don't have to care about. Whereas something like lockserver configuration is still a thing you have to care about if you're running Kubernetes.


Related: https://www.youtube.com/watch?v=3Ea3pkTCYx4

Key insight can be summarized as "code the perimeter"


(author here) Yes exactly! This is what I'm calling the Perlis-Thompson principle, although it still needs to be fully formed and explained. There are obvious objections to it (which I have some answers to).

Sketch of the argument here, with links: http://www.oilshell.org/blog/2021/07/blog-backlog-1.html#con...

Here's my comment which links to the "Unix vs. Google" video (and I very much agree, based on my first-hand experience with Google's incoherent architecture, which executives started to pay attention to in various shake-ups).

https://lobste.rs/s/euswuc/glue_dark_matter_software#c_sppff...

It links to my comment about the closely related "narrow waist" idea in networks and operating systems. That concept is about scaling your "codebase" and interoperability.

I have been looking up the history of this idea. I found a paper co-authored by Eric Brewer which credits it to Kleinrock:

http://bnrg.eecs.berkeley.edu/~randy/Papers/InternetServices... (was this ever published? I can't find a date or citations)

But I'm not done with all the research. I'm not sure if it's worth it to write all this, but I think it's interesting and I will learn something by explaining it clearly and going through all the objections.

I'm definitely interested in the input of others. I have about 10 different resources where people are getting at this same scaling idea, but I can use more arguments / examples / viewpoints.


Going to post a lovely update for Docker Swarm here - Swarm simplifies/reduces the possibility space compared to K8s, but I consider that a feature, not a drawback. With Mirantis actively hiring and extending support for SwarmKit, it should be considered a viable 'batteries included' alternative to K8s:

https://github.com/docker/roadmap/issues/175#issuecomment-82...


Two amazing quotes that really resonate with me:

> The industry is full of engineers who are experts in weirdly named "technologies" (which are really just products and libraries) but have no idea how the actual technologies (e.g. TCP/IP, file systems, memory hierarchy etc.) work. I don't know what to think when I meet engineers who know how to setup an ELB on AWS but don't quite understand what a socket is...

> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.


This is bound to happen. The more complicated the stack that you use becomes, the fewer details you understand about the lower levels.

Who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

All of these were table stakes at some point in time. The key is not to understand all layers perfectly. The key is to know when to stop adding layers.


Totally get your point! But I worry the industry is becoming bloated with people who can glue a few frameworks together building systems we depend on. I wish there was more of a focus on teaching and/or learning fundamentals than frameworks.

Regarding your points, I actually would expect a non-junior developer to be able to write a library in their main language and understand the basics of OS internals (to the point of debugging and profiling, which would include troubleshooting *nix processes). I don't expect them to know assembly or C, or be able to write a compiler (although I did get this for a take-home test just last week).


I think learning the fundamentals is a worthy pursuit, but in terms of getting stuff done well, you realistically only have to grok one level below whatever level of abstraction you're operating at.

Being able to glue frameworks together to build systems is actually not a negative. If you're a startup, you want people to leverage what's already available.


I agree. An ideal is far from reality.

I like to get deep into low level stuff, but my employer doesn't care if I understand how a system call works or whether we can save x % of y by spending z time on performance profiling that requires good knowledge of Linux debugging and profiling tools. It's quicker, cheaper and more efficient to buy more hardware or scale up in public cloud and let me use my time to work on another project that will result in shipping a product or a service quicker and have direct impact on the business.

My experience with the (startup) business world is that you need to be first to ship a feature or you lose. If you want to do something then you should use the tools that will allow you to get there as fast as possible. And to achieve that it makes sense to use technologies that other companies utilise because it's easy to find support online and easy to find qualified people that can get the job done quickly.

It's a dog-eat-dog world and startups in particular have the pressure to deliver and deliver fast since they can't burn investor money indefinitely; so they pay a lot more than large and established businesses to attract talent. Those companies that develop bespoke solutions and build upon them have a hard time attracting talent because people are afraid they won't be able to change jobs easily and these companies are not willing to pay as much money.

Whether you know how a boot process works or how to optimise your ELK stack to squeeze out every single atom of resource is irrelevant. What's required is to know the tools to complete a job quickly. That creates a divide in the tech world where on one side you have high-salaried people who know how to use these tools but don't really understand what goes on in the background and people who know the nitty-gritty and get paid half as much working at some XYZ company that's been trading since the 90s and is still the same size.

My point is that understanding how something works underneath is extremely valuable and rewarding but isn't required to be good at something else. Nobody knows how Android works, but that doesn't stop you from creating an app that will generate revenue and earn you a living. Isn't the point of constant development of automation tools to make our jobs easier?

EDIT: typo


IMO the problem with this is when you go from startup -> not a startup, you go from creating an MVP to something that works with a certain amount of uptime, has performance requirements, etc. Frameworks will still help you with those things, but if you need to solve a performance issue it's gonna be hard to debug if you don't know how the primitives work.

Let's say you have a network performance issue because the framework you were using was misusing epoll, set some funky options with setsockopt, or left Nagle's algorithm on. A person can figure it out, but it's gonna be a slog, whereas if they had experience working with the lowest-level tools they could have an intuition about how to debug the issue.

An engineer doesn't have to write everything with the lowest-level primitives all the time, but if they have NEVER done it then IMO that's an issue.


I agree with what you said, but isn't the goal to survive the seed stage to find product-market fit and customers at all costs? If you get that, you can raise money and hire engineers to rewrite your stack. If you fail to get customers, you might have a really maintainable codebase but no money and hence bankruptcy.

The point being that maybe it’s fine if there are a lot of people who only know how to glue frameworks together if they know enough to build useful products. Let all of them try; some of them might very well make it.


This totally matches my experience from two different perspectives.

1. Working as a programmer perspective: I worked at a company with good practices but so-so revenue. What happens: horribly underpaid salary, nice laptop (but not the one I want), nice working conditions. I am now working at a company with pretty great revenue and mediocre practices. What happens: good salary, I get the laptop I want (not the one I need), working conditions are mediocre.

2. UX perspective (I did a bootcamp for fun): UX'ers make throwaway prototypes all the time in order to validate a certain hypothesis. When that's done, they create the real thing (or make another bigger throwaway prototype).

I feel this is the best approach, from a business standpoint. This also means you have different kind of developers and it depends on the stage what kind they are. I'd separate it as prototype stage, mid-stage and massive scale stage.


That's exactly what was covered in the Systems track of my CS undergrad. I'm always confused when people dismiss their own degree as irrelevant or primarily mathematical… we were coding and debugging toy schedulers, virtual memory managers, file systems, TCP stacks, IRC and mail servers, locking primitives, etc. in C.


I really like the way you've put it: "glue a few X together".

This is what most software development is becoming. We are no longer building software; we are gluing/integrating prebuilt software components or using services.

You no longer solve fundamental problems unless you have a very special use case or do it for fun. You mostly have to figure out how to solve higher-level problems using off-the-shelf components. It's both good and bad if you ask me (depends on what part of the glass you're looking at).


I also would have loved discovering electricity or information theory. It's convenient that people standing on each other's shoulders across a few generations made processors from that, but it sadly puts the bar pretty high to go further nowadays.

Thankfully I can use these cool processors to build the next CandyCrush and shine in our modern and innovative society.


I can't show numbers for this, but it seems likely that the absolute number of jobs where people do "build software" has increased with time; it's just that the number of "gluing frameworks" jobs has increased by a lot more, so you're probably just in the wrong category. It seems difficult to think that there aren't thousands of network engineers keeping the internet backbone humming along.


It's like building a house. Should I have the HVAC guy do the drywall and the drywall guy do the HVAC? Clearly software engineering isn't the same as building a house, but if you have an expert in JAX-WS/SOAP and a feature needs to connect to some legacy SOAP healthcare system... have him do that, and let the guy that knows how to write an MPI write the MPI.


At the risk of falling down an analogy rabbit hole, I'll be upset if the HVAC guy assumes that air will flow freely throughout the house and has no understanding of walls, or if the drywall guy blindly screws into my air ducts. No abstraction is perfect; some knowledge of the other layers is necessary to do a proper job. Unfortunately, in software, it seems like our abstractions are particularly leaky, and knowledge of other layers is frequently necessary to do a proper job. In house building, issues are usually contained by physical proximity, whereas the same is obviously not true in software, particularly networked software.


The HVAC guy does not know how drywall is made and would struggle to produce a piece of drywall. As a matter of fact, the drywall guy would struggle too. They don't build their own materials; they use materials they buy from Home Depot.


This isn't a bad analogy. Like modern houses, software has gotten large, specific, and more complex in the last 30-some-odd years.

Some argue it's unnecessary complexity, but I don't think that's correct. Even individuals want more than a basic GeoCities website. Businesses want uptime, security, flashiness, etc... in order to stand out.


I've been (unfortunately) through a few house refurbishments by now, and the good workers are the ones that also know a bit about the other domains in house refurbishing. The HVAC guy will know about wiring, and the good drywall guy will know a bit of the layman's job as well. They don't necessarily have to, but the good ones will.


> How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

That's what I expect from someone who graduated from a serious CS/Engineering program.


You're mixing up having an idea of how the OS works (i.e. conceptual/high level) with having working knowledge and being able to hack into the OS when needed. I know this may sound like moving the goalposts, but it really does not help me that I know conceptually that there is a file system if I don't work with it directly and/or know how to debug issues that arise from it.


> having working knowledge and being able to hack into the OS when needed.

I'm going to parrot the GP: "That's what I expect from someone who graduated from a serious CS/Engineering program."

I know there are a lot of really bad CS programs in the US, but some experience implementing OS components in a Systems course, so that they can "hack into the OS when needed", is exactly what I would expect out of a graduate from a good CS program.


I think your expectations are out of alignment with what's happening. I know software engineers who graduated with CS degrees from schools like MIT, Urbana-Champaign, and Stanford who took operating systems classes but could not realistically "hack into the OS". If those programs aren't consistently imparting that knowledge to students without an explicit interest, I don't see how others can be expected to...


> I know software engineers who graduated with CS degrees from schools like MIT, Urbana-Champaign, and Stanford who took operating systems classes but could not realistically "hack into the OS".

That's surprising. Recent grads?


By "into" I assume you meant "on". The OS courses at UIUC (not a wine, btw :)), MIT, and Stanford def prepare you for some kernel hacking if needed.


"into" was quoting an earlier poster and hasty typos abound :)

The discussion centers on the following expectation of graduates from strong CS programs.

> having working knowledge and being able to hack into the OS when needed.

Now, the courses at the listed schools may prepare some students, but I am simply reporting that I have met numerous graduates who state very explicitly:

- they are not comfortable with a variety of operating system concepts

- they are not comfortable interacting with operating systems in any depth

I don't have a big diverse data set, but the impression given is that if you expect this level of expertise you will be disappointed regularly. If the strongest CS programs pre-selecting for smart and driven students can't reliably impart that skillset, why would I expect other schools to?


IDK, I think the convo is hard to have without explicit goalposts.

For context, the original quote was:

> How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Writing a compiler, writing a library for their fav language, and troubleshooting a misbehaving *nix process are all examples of things I would definitely expect a CS major to have done at some point.

A SoTA compiler for Rust or whatever? Ok, no. But, you know, a compiler.

Ditto for library -- better than the standard lib? Ok, no. But, you know, a standard lib that's good enough.

Ditto for debugging *nix processes. Not a world-class hacker, just, you know, capable of debugging a process.

I guess the other examples in that quote seem to suggest that "OS internals" probably means something like "knowledge at the level of a typical good OS course".

And who knows what those people meant by "comfortable interacting with operating systems in any depth". There could also be some reverse D-K effect going on here... "I got a B- in CMU's OS course" still puts you very well into the category of "understand the OS internals", IMO.


> who ... understand the OS internals? ... How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Ex-Amazon here. You are describing standard skills required to pass an interview for an SDE 2 in the teams I've been in at Amazon.

Some candidates know all the popular tools and frameworks of the month but do not understand what an OS does, or how a CPU works or networking and do not get hired because they would struggle to write or debug internal software written from scratch.

[added later] This was many years ago when the bar raiser thing was in full swing and in teams working on critical infrastructure.


LoL. Also ex-Amazon here. I can tell you for a fact that most SDE2s I've worked with had zero clue how the OS works. What you're describing may have been true 5-10 years ago, but I think it is no longer true nowadays (what was that? "raising the bar" they called it). A typical SDE2 interview will not have questions around OS internals in it. Before jumping on your high horse again: I've done around 400 interviews during my tenure there and I don't recall ever failing anyone due to this.

Also, gate-keeping is not helpful.


> Also, gate-keeping is not helpful.

This term is really getting overused. The purpose of job interviews is to decide who gets to pass through the gate. It is literally the keeping of a gate.


The term is perfectly apt and descriptive here, because gatekeeping isn't about the keeping of a gate; it's about the inappropriateness of the criteria that are used.

Software engineers, even the ones that are so superpowered that they :gasp: got a job at Amazon once in their life, can go an entire successful career without knowing how to use a kernel debugger, or understand iptables or ifconfig, or understand how virtual memory works.

Some engineers might need to know some of those things, but it is absolutely bonkers to claim that you could never progress past level 2 at Amazon without knowing such things. I know this because I once taught a senior principal engineer at Amazon how to use traceroute.

For many roles in Amazon (particularly the tens of thousands of SDE positions that will end up working with the JVM all day long), asking such low-level questions about how OSes work is about as useful a gatekeeping device as asking them whether white cheese tastes better than yellow cheese. And that's why the term gatekeeping is used.


Yikes. Do you think Amazon engineers are overall just dumber or just less used to the lower abstractions? After all, I can’t even ssh into the machines my code runs on nowadays.


Newer engineers are less used to lower-level abstractions. Anecdotal, but that's what I observed.


Yes they do. There is too much software to be written. A person with adequate knowledge of higher abstractions can produce just fine code.

Yes, if there is a nasty issue that needs to be debugged, understanding the lower layers is super helpful, but even without that knowledge you can figure out what's going on if you have general problem-solving abilities. I certainly have figured out a ton of issues in the internals of tools that I don't know much about.

Get off your high horse.


Says one guy. Sorry, there are lots of people who make a living writing software who don't know what an OS does. Gatekeeping helps nobody.


Current big tech here (not Amazon), and very few know lower-level things like C, systems, or OS stuff. Skillsets and specializations are different. Your comment is incredibly false. Even on mobile, if someone is, for instance, a JS engineer, they probably don't know Objective-C, Swift, Kotlin, or Java, or any native APIs. And the guys who do work on native mobile can't write JavaScript to save their lives and are intimidated by it.


I agree with you, as opposed to the other ex-Amazon comments you've had (I had someone reach out to interview me this week, if that counts? ;)).

Playing devil's advocate, I guess it depends on what sort of software you're writing. If you're a JS dev then I can see why you might not care about pointers in C. I know for sure that as a Haskell/C++ dev I run from JS errors like the plague.

However, I do think that people should have a basic understanding of the entire stack from the OS up. How can you be trusted to choose the right tools for a job if you're only aware of a hammer? How can you debug an issue when you only understand how a spanner works?

I think there's a case for an engineering accreditation (which isn't a CS degree) as we become even more dependent on software.


But the value isn't equal. If you think of the business value implemented in code as the "picture" and the runtime environment provided as the "frame", the frame has gotten much larger and the picture much smaller, as far as what people are spending their time on. (Well, not the golang folks that just push out a systemctl script and a static binary, but the k8s devops experts.) I have read entire blogs on k8s and so on where the end result is just "hello world." In the old days, that was the end of the first paragraph. Now a lot of YAML and Docker files and so on are needed just to get to that hello world. Unix was successful initially because it was a good portable abstraction for managing hardware resources (compute, storage, memory, and network) over a variety of actual physical implementations. Many, many of the problems people are addressing in k8s and running "a variety of containers efficiently on a set of hosts" are similar to problems Unix solved in the 80s. I'm not really saying we should go back; Docker is certainly a solution to "dependency control and process isolation" when you can't have a good static binary that runs a number of identical processes on a host, but the knowledge of what a socket is or how schedulers work is valuable in fixing issues in Docker-based systems. (I'm actually more experienced in Mesos/Docker than k8s/Docker, but the bugs are from containers spawning too many GC threads or whatever.)

If someone is trying to debug that LB and doesn't know what a socket is, or to debug latency in apps in the cluster without knowing how scheduling and perf engineering tools work, then it's going to be hard for them, and extremely likely that they will just jam 90% solution around 90% solution, enlarging the frame to do more and more, instead of actually fixing things, even if their specific problem was easy to fix and would have had a big payoff.


Kubernetes is complicated because it carries around Unix with it and then duplicates half the things and bolts some new ones on.

Erlang is[0] what you can get when you try to design a coherent solution to the problem from a usability and first-principles sort of idea.

But some combination of Worse is Better, Path Dependence, and randomness (hg vs git) has led us here.

[0] As far as what I've read about its design philosophy.


Who is using K8s for Hello World levels of complexity?

Complex problems often have complex solutions; the algorithm we need to run as developers is: what's the net complexity cost of my system if I use this tool?

If the tool isn't removing more complexity than it's adding, you probably shouldn't use it.


(author here) The key difference is that a C compiler is a pretty damn good abstraction (and yes Rust is even better without the undefined behavior).

I have written C and C++ for decades, deployed it in production, and barely ever looked at assembly language.

Kubernetes isn't a good abstraction for what's going on underneath. The blog post linked to direct evidence of that which is too long to recap here; I worked with Borg for years, etc.


K8s may have its time and place, but here is something most people are ignoring: 80% of the time you don't need it. You don't need all that complexity. You're not Google, you don't have the scale or the problems Google has. You also don't have the discipline AND the tooling Google has to make something like this work (cough cough, Borg).


For the things that are 1:1 comparable, the Borg abstraction leaks in pretty much the same places as the Kubernetes abstraction. In slightly different ways. The "kubernetes abstraction" spans a larger space than the Borg abstraction does (note, I count "Chubby" and "GSLB" as "not Borg"), so there are more abstraction leaks as a whole in Kubernetes.

Source: I was a Google SRE for 5 years (Ads, Traffic). I ran the in-house Kubernetes clusters at a company for 3 years (so, no, no hosted Kubernetes; we stood them up either on pretty naked VMs or bare metal).


Assembly aside, all the things you mention are things I would expect a software engineer to understand. As an engineer in my late twenties myself, these are exactly the things I am focusing on. I'm not saying I have a particularly deep understanding of these subjects, but I can write a recursive descent parser or a scheduler. I value this knowledge quite highly, since it's applicable in many places.

I think learning AWS/kubernetes/docker/pytorch/whatever framework is buzzing is easy if you understand Linux/networking/neural networks/whatever the underlying less-prone-to-change system is.


Is there a networking-for-developers style course that you would recommend?


The one at your local university. Either one named something like "Introduction to Networking" or "Introduction to Distributed Systems", depending on what you want to learn.

You could also read some books. Rami Rosen's "Linux Kernel Networking - Implementation and Theory" is quite detailed.

The "UNIX and Linux System Administration Handbook" (Nemeth et al.) covers a lot superficially and will point you in the right direction to continue studying. It's very practical-minded.

For low-level socket programming, you can probably read "Advanced Programming in the UNIX environment". It might be more detail than you need though.

At the other extreme, if you want to study distributed systems, you could read van Steen & Tanenbaum's "Distributed Systems".


disclaimer: I don't mean this to come across as arrogant or anything (I'm just ignorant).

I'm totally self-taught and have never worked a programming job (only programmed for fun). Do professional SWEs not actually understand or have the capability to do these things? I've hacked on hobby operating systems, written assembly, worked on a toy compiler and written libraries... I just kind of assumed that was all par for the course


The challenge is that lower level work doesn't always translate into value for businesses. For instance, knowledge of sockets is very interesting. On one hand, I spent my youth learning sockets. For me to bang out a new network protocol takes a few weeks. For others, it can take months.

This manifested in my frustration when I led the building of a new transport layer using just sockets. While the people working with me were smart, they had limited low-level experience to debug things.


I understand that that stuff is all relatively niche/not necessarily useful in every day life (I know nothing about sockets or TCP/IP) - I just figured your average SWE would at least be familiar with the concepts, especially if they had formal training. Guess it just comes down to individual interests


I think you may have missed the point I was trying to make (as probably a lot of people did). It's one thing to know what assembly is and even be able to dabble in a bit of assembly; it's another thing to be proficient in assembly for a specific CPU/instruction set. It's orders of magnitude harder to be proficient and/or actually write tooling for it vs. understanding what a MOV instruction does or conceptually getting what CPU registers are.

Professional SWEs are professional in the sense that they know what needs to happen to get the job done (but I am not surprised when someone else does not get or know something that I consider "fundamental").


Yes, some intermediate devs I've worked with are unable to do almost anything except write code, e.g. unable to generate an SSH key without assistance or detailed cut-and-paste instructions.


Shit, I google or manpage or tealdeer ssh key generation every single time....

Pretty much any command I don't run several times a month, I look up. Unless ctrl+r finds it in my history.


Maybe I should apply for some senior dev roles then :)


Many/most senior devs do not have the experience you described. But there are often a lot of meetings, reports, and managing other devs.


Yes, you absolutely should, unless you are already making a ton of money in a more fulfilling job.


It's extremely common. And many of them are fairly productive until an awkward bug shows up.


> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process? All of these were table stakes at some point in time.

All of these were still table stakes when I graduated from a small CS program in 2011. I'm still a bit horrified to discover they apparently weren't table stakes at other places.


> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

Any one of the undergraduates who take the systems sequence at my University should be able to do all of this. At least the ones who earn an A!


And maybe to learn the smell of a leaking layer?


> who, today, can write or optimize assembly by hand? How about understand the OS internals? How about write a compiler? How about write a library for their fav language? How about actually troubleshoot a misbehaving *nix process?

But developers should understand what assembly is and what a compiler does. Writing a library for a language you know should be a common development task. How else are you going to reuse a chunk of code needed for multiple projects?

You certainly also need to have a basic understanding of Unix processes to be a competent developer, I would think.


There is a huge difference between understanding what something is and actually working with it / being proficient with it. Huge.

I understand how a car engine works. I could actually explain it to someone that does not know what is under the hood. Does that make me a car mechanic? Hell no. If my car breaks down I go to the dealership and have them fix it for me.

My car/car engine is ASM/OS internals/writing a compiler/etc.


While I will not pretend to be an expert at any of those, having at least a minimal understanding of all of them is crucial if you want to pretend to be a software engineer. If you can't write a library, or figure out why your process isn't working, you're not an engineer; you're a plumber, or a code monkey. Not to say that's bad, but considering the sheer number of mediocre devs at FAANG calling themselves engineers, it really shines a terrible light on our profession.


Abstraction layers exist for this reason. As much of a sham as the 7-layer networking model is, it's the reason you can spin up an HTTP server without knowing TCP internals, and you can write a webapp without caring (much) about whether it's being served over HTTPS, HTTP/2, or SPDY.


I would make a big distinction between 'without knowing' and 'without worrying about.' Software productivity is directly proportional to the amount of the system you can ignore while you are writing the code at hand. But not knowing how stuff works makes you less of an engineer and more of an artist. Cause and effect and reason are key tools, and not knowing about the TCP handshake or windowing just makes it difficult to figure out how to answer fundamental questions about how your code works. It means things will be forever mysterious to you, or interesting in the sense of biology, where you gather a lot of data, rather than mathematics, where pure thought can give you immense power.


To be an engineer, you need the ability to dive deeper into these abstractions when necessary, while most of the time you can just not think about them.

Quickly getting up to speed on something you don't know yet is probably the single most critical skill to be a good engineer.


All true. The problems start getting gnarly when Something goes Wrong in the magic black box powering your service. That neat framework that made it trivial to spin up an HTTP/2 endpoint is emitting headers that your CDN doesn't like and now suddenly you're 14 stack layers deep in a new codebase written in a language that may not be your forte...


While I wouldn't judge someone for not knowing anything about layer 1 or 2, knowing something about MTUs, traffic congestion, and routing is something that should be taught at any basic level of CS school. Not caring if it's served over HTTP/2? Why the hell would you? Write your software to take advantage of the platform it's on, and the stack beneath it. The simple fact of using HTTP/2 might change your organisation from one fat file served from a CDN into many files that load in parallel and quicker. By not caring about this, you just... waste it all to make yet another shitty-performing webapp. In the same way, I don't ask you to know the TCP protocol by heart, but knowing just the basics means you can open up Wireshark and debug things.

Once again: if you don't know your stack, you're just wasting performance everywhere, and you're just a code plumber.


> knowing something about MTUs

Isn't that why MTU discovery exists?

> Write your software to take advantage of the platform it's on, and the stack beneath it

Sure, but usually those bits are abstracted away still. Otherwise cross-compatibility or migrating to a different stack becomes a massive pain.

> The simple fact of using http2 might change your organisation from one fat file served from a CDN, into many that load in parallel and quicker.

Others have pointed out things like h2 push specifically; that was kind of what I meant with the "(much)" in my original comment. Even then, with something like nginx supporting server push on its end, whatever it's fronting could effectively be HTTP/2-unaware and still reap some of the benefits. I imagine it won't be long before there are smarter methods to transparently support this stuff.


But this does matter to web developers! For example, HTTP/2 lets you request multiple files at once and adds server push support. If you don't know this, you might not implement it and end up with subpar performance. HTTP/3 is going to be built on the UDP-based QUIC, won't even support http://, will need an `Alt-Svc:` header, and removes the HTTP/2 prioritisation stuff.

God knows how a UDP-based HTTP is going to work, but these are considerations a 'Software Engineer' who works on web systems should think about.


Someone writing the framework should absolutely be intimately familiar with it, and should work on making these new capabilities easy to use from a higher level where your typical web dev can make use of it without much thought, if any.


Err, no. Look at most startups and tell me how many of them care if they’re serving optimized content over HTTP/2?


You know, deep down inside: we are all code monkeys. Also, as much as people like to call it software engineering, it's anything but engineering.

In 95% of cases, if you want to get something/anything done, you will need to work at an abstraction layer where a lot of things have been decided already for you and you are just gluing them together. It's not good or bad. It is what it is.


This reminds me of Jonathan Blow's excellent talk on "Preventing the Collapse of Civilization":

https://www.youtube.com/watch?v=ZSRHeXYDLko


I honestly can't tell if this is sarcasm or not.

Which says a lot about the situation we find ourselves in, I guess.


It's not sarcasm. A lot of things simply do not have visibility and are not rewarded at the business level; therefore the incentives to learn them are almost zero.


Likewise I don’t know what to think when I meet frequent flyers who don’t know how a jet turbine functions! :)

It is a process of commodification.


The people flying the airplane do understand it though. At least they are supposed to. Some recent accidents make one wonder.


Pilots generally do have some level of engineering background, in order to be able to understand possible in-flight issues, but they're not analogous to software engineers. They're analogous to software operators. Software engineers are analogous to aerospace engineers, who absolutely do understand the internals of how turbines work because they're the people who design turbines.

The problem with software development as a discipline is its all so new we don't have proper division of labor and professional standards yet. It's like if the people responsible for modeling structural integrity in the foundation of a skyscraper and the people who specialize in creating office furniture were all just called "construction engineers" and expected to have some common body of knowledge. Software systems span many layers and domains that don't all have that much in common with each other, but we all pretend we're speaking the same language to each other anyway.


I really like your analogy; I'm stealing it. As a pilot (devops), during interviews I'm often asked deep aeronautics internals (some graphs/trees question) about whatever plane that aeronautical (software) engineer built, and it's always annoyed me that that's a game I have to play. Same realm but completely different fields, that are somewhat related and yet closely intertwined. This happens quite frequently.

I sometimes hate-joke/fantasize about nailing an SE candidate with an obscure BGP or esoteric DNS question and then being outwardly disappointed in his response, watching him realize he's going to lose this job over something I found completely reasonable to ask, but ultimately entirely useless to his position.


It doesn't help that most of it is completely abstract and intangible. You can immediately spot the difference between a skyscraper and a chair, but not many can tell the difference between an e2e-encrypted chat app and a support chat app. They're both an 'app', but they are about as different as a chair and a skyscraper in architecture and systems.


Software has been around for longer than aeroplanes

Developers who can only configure AWS are software operators using a product, not software engineers. There’s nothing wrong with that but if no one learns to build software, we’ll all be stuck funding Mr Bezos and his space trips for a long time.


> Software has been around for longer than aeroplanes

Huh?


Ada Lovelace wrote the first program in 1842; it was another 61 years before the Wright brothers' inaugural flight.


But it was never actually executed. Too tightly coupled to the hardware layer :/


I think the important point here is that even pilots don't know the full mechanics of a modern jet engine (AFAIK at least; I don't have an ATPL, so I'm not 100% on the syllabus). They may know basics like the Euler turbine equation and be able to run some basic calculations across individual rows of blades, but they most likely will not fully understand the fluid mechanics and thermodynamics involved (and especially not the trade secrets of how the entire blades are grown from single crystals).

This is absolutely fine, and one can draw parallels in software, as a mid-level software engineer working in an AWS-based environment won't generally need to know how to parse TCP packet headers, despite the software/infrastructure they work on requiring them.


> and especially not the trade secrets of how the entire blades are grown from single crystals

Wait, what? Are you telling me that jet turbine blades are one single crystal instead of having the usual crystal structure in the metal?


I'm not a materials guy personally so won't be the best person to explain the exact science behind them, but they're definitely a really impressive bit of engineering. I had a quick browse of this article and it seems to give a pretty good rundown of their history and why their properties are so useful for jet engines https://www.americanscientist.org/article/each-blade-a-singl...


Wow... Mind-blowing stuff. Long but worth reading.


They are grown as single metal crystals in order to avoid the weaknesses of grain boundaries. They are very strong!



Yes and no: for a private pilot license you are taught through intuition and diagrams. No Navier-Stokes, no Lattice Boltzmann, no CFD. The FAA does not require you to be able to solve boundary-condition physics problems to fly an aircraft.


Modern jet pilots certainly know much less about airplane functions than they did in the 1940s, and modern jet travel is much safer than it was even a decade ago.


Software today is more like jets in the 1940s than modern day air travel. Still crashing a lot and learning a lot and amazing people from time to time.


Many of them know the checklists for their model of aircraft. The downside of the checklists is that they sometimes explain the "what" and not the "why". They are supposed to be taught the why in their simulator training. Newer aircraft are going even further in that direction of obfuscation to the pilots. I expect future aircraft to even perform automated incident checklist actions. To your point, not everyone follows the checklists when they are having an incident as the FDR often reports.


Most pilots probably don't know how any specific plane's engine works beyond what inputs give what outcomes and a few edge cases. Larger aircraft have most of their functions abstracted away, with some models effectively pretending to act like older ones to ship them out faster (commercial pilots have to be certified per plane IIRC, so a more familiar plane = quicker recertification), which has led to a couple of disasters recently as the 'emulation' isn't exact. This is still a huge net benefit, as larger planes are far more complicated than a little Cessna and much harder to control with all that momentum, mass, and airflow.


Perhaps it is not about a jet engine, but I find this beautiful presentation extremely fascinating:

https://www.faa.gov/regulations_policies/handbooks_manuals/a...


"I don't know what to think when I meet engineers who know TCP/IP but don't quite understand how photons are transmitted over fiber."

"I don't know what to think when I meet engineers who know UNIX but don't quite understand assembly."

What you quoted is tantamount to the lament of a dinosaur that has ample time to observe the meteor approaching and yet refuses to move away from the blast zone.

Less facetiously: the history of progress in most domains, and especially computing, is in part a process of building atop successive layers of abstraction to increase productivity and unlock new value. Anyone who doesn't see this really hasn't been paying attention.


> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

Can we provide an example that isn't also a big company? I can't really think of big companies that don't either dogfood their own tech or rely on someone bigger to handle things they don't want to (Apple spends $30M a month on AWS, as an example [0]). You could also make the argument that no matter what route you take, you're "relying on" some big player in some big space. What OS are the servers in your in-house data center running? Who's the core maintainer of whatever dev frameworks you subscribe to? (Note: an employee of your company being the core maintainer of a bespoke framework that you developed in house and use is a much worse problem to have than being beholden to AWS ELB, as an example.)

This kinda just sounds like knowledge and progress. We build abstractions on top of technologies so that every person doesn't have to know the nitty gritty of the underlying infra, and can instead focus on orchestrating the abstractions. It's literally all turtles. Is it important, when setting up a MySQL instance, to know how to write a lexer and parser in C++? Obviously not. But lexers and parsers are a big part of MySQL's ability to function, right?

[0]. https://www.cnbc.com/2019/04/22/apple-spends-more-than-30-mi...


I guess I don’t really understand what a socket is? It’s a magic thingy that allows two computers/processes to communicate and sometimes has trouble with NAT.

I know how to use it certainly, but how the hell it is implemented is more or less black magic to me.

Now that’s not to say I couldn’t learn how a socket works. It’s just never been at all relevant to performing my job.


Yes, but you should at least have some basic troubleshooting skills, like running netstat to see a socket stuck in SYN_SENT or whatever, to get an idea of whether there is a network connectivity issue to your endpoint.


The second quote resonates well with the old Joel Spolsky blog post "Fire and Motion" [1]. Chasing new technologies is something your huge competitors want: you kept adopting XML databases and CORBA (in the olden days), NoSQL just a few years ago, and today it is Kafka, crypto, AI, and a kirjillion AWS products, instead of working on your business.

[1] https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


I hope you and the author realise that sockets are a library. And used to be products! They're not naturally occurring.


Most of this stuff is completely over my head, and I'm certainly no Kubernetes expert, but I'm working on a project that's deployed with Kubernetes, and one of the steps in our process is running our e2e tests, also in a separate Kubernetes deploy. These tests (using Cypress) have proven to be extremely flaky on the server. Locally they work fine, though. I was wondering if Cypress is simply crap, but this article makes me wonder if Kubernetes might be the real culprit here.


Kubernetes, for sure. But it will force you to write more resilient software. Since we migrated to Kubernetes, we have had to implement automatic retry strategies for every network exchange (HTTP requests, database transactions) because the managed Kubernetes of a major cloud provider is a train wreck.


If you're enjoying your Kubernetes, then have at it, but in my opinion it sounds like Stockholm syndrome.

The thing is so complicated that even the guys who wrote it probably can't figure it out.

I myself would rather sew together .BAT files, CORBA and COBOL into a shambling software frankenstein before I'd even consider using Kubernetes and get sucked into that mess.

But seriously, 99 percent of us, even on HN, don't have the problems that kubernetes is trying to solve.

Why do we put ourselves through this when we should know just looking at the thing that it's just going to be a nightmare when things go wrong?


.JCL files, not .BAT files and: yes.


I dislike the deification of Ken Thompson. He's great, but let's not pretend that he'd somehow will a superior solution into existence.

The economics and scale of this era are vastly different. Borg (and thus, Kubernetes) grew out of an environment where 1 in a million happens every second. Edge cases make everything incredibly complex, and Borg has solved them all.


Much as I am a fan of borg, I think it succeeds mostly by ignoring edge cases, not solving them. k8s looks complicated because people have, in my opinion, weird and dumb use cases that are fundamentally hard to support. Borg and its developers don't want to hear about your weird, dumb use case and within Google there is the structure to say "don't do that" which cannot exist outside a hierarchical organization.


Interesting. Perhaps k8s is succeeding in the real world because it is the only one that tries to support all the weird and dumb use cases?

> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features.

> Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP.

Instead of seeing k8s as the equivalent of "cover fire" or Windows XP, a more apt comparison is probably Microsoft Office, with all kinds of features to support all the weird and dumb use cases.


I thought the same as well, but then I went down the Docker/container route to build something similar to k8s, and it turned out to be just reimplementing k8s badly. The reason k8s is so complicated is the horrible infatuation with walls of YAML and CRDs; think of YAML and CRDs as XML and XMLNS, for those of us who lived through XML.


Claiming Kubernetes is Multics, and that the UNIX equivalent is around the corner, is a worthless claim without actual data or an argument to back it up.

To me, Kubernetes is the new UNIX, centered around a small number of core ideas: controller loops, Pods, level-triggered events, and a fully open, well-standardized, declarative, and extensible RESTful API.

Kubernetes has its complexities - just like UNIX, because it's trying to solve two big problems: shifting the fundamental unit of computation into an immutable / ephemeral unit (rather than mutable), i.e. the Pod, and having a single open API for controlling almost every aspect of IT using control systems theory as the philosophy.
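To make the "immutable / ephemeral unit" point concrete, a minimal Pod manifest looks roughly like this (the name and image are placeholders, purely for illustration):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello              # illustrative name
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.21      # any container image works here
        ports:
        - containerPort: 80

You declare the desired state and the controller loops converge the cluster toward it, which is the control-systems framing above.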

The various clouds and predecessor cloud orchestrators (Azure ARM, AWS Cloud Formation, etc) are (to me) the infinitely complicated beasts.

This article didn't have an argument beyond "I don't understand it, and therefore I don't like it". He just linked to a few rants about the complexity of the CNCF ecosystem (which is like complaining that "IT is complicated"; it is a reflection of reality, not of Kubernetes), and an extended cranky rant / thought exercise by the MetalLB dude. The latter is the closest to an actual argument against Kubernetes, but there's a LOT of things to disagree with in that post. THAT would be an interesting debate.

The biggest issue with Kubernetes is the insularity of the culture to reject anything that doesn't think like Kubernetes (as defined by whoever might be running any given SIG). That is also its greatest strength. But if it doesn't compromise this vision in some respects, such as developer experience, it will be self-limiting.


I really love how kubernetes decouples compute resources from actual servers. It works pretty well and handles all kinds of sys-ops-y things automatically. It really cuts down on work for big deployments.

Actually, it has shown me what sorts of dev-ops work are completely unneeded.


This post brings up a good question - how does one get better at low-level programming? What are some good resources?


...except people actually use K8S.


Kubernetes is fantastic if you're running global-scale cloud platforms, ie, you are literally Google.

Over my past five years working with it, there has been not a single customer that had a workload appropriate for kubernetes, and it was 100% cargo-cult programming and tool selection.


Your case is def not the norm. We're not Google-sized, but we are taking big advantage of k8s, running dozens of services on it, from live video transcoding to log pipelines.


I just want cloud-agnostic FaaS with first-class triggers and outputs.

Something like Lambda and Azure Functions without feeling locked in.


No one has used the words "docker swarm" in the comment section.

Fill in the words

Kubernetes is to Multics as ____ is to docker swarm


People love to pooh-pooh "complicated" things like unit tests, type systems, Kubernetes, GraphQL, etc. Things that are solving a specific problem for LARGE SCALE ENTERPRISE users.

I will quote myself here: A problem does not cease to exist just because you decided to ignore it.

Without Kubernetes, you still need to:

- Install software onto your machines

- Start services

- Configure your virtual machines to listen on specific ports

- have a load balancer directing traffic to and watching the health of those ports

- a system to re-start processes when they exit

- something to take the logs of your systems and ship them to a centralized place so you can analyze them.

- A place to store secrets and provide those secrets to your services.

- A system to replace outdated services with newer versions ( for either security updates, or feature updates ).

- A system to direct traffic to allow your services to communicate with one another. ( Service discovery )

- A way to add additional instances to a running service and tell the load balancer about them

- A way to remove instances when they are no longer needed due to decreased load.

So sure, you don't need Kubernetes at an enterprise organization! Just write all of that yourself! Great use of your time, instead of concentrating on writing features that will make your organization more money. (For a sense of how the list above maps onto Kubernetes objects, see the sketch below.)
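For illustration only (names, image, and ports are made up), roughly this much YAML covers starting the service, restarting it when it dies, rolling out new versions, scaling the instance count, injecting secrets, and service discovery:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                       # add or remove instances by editing one number
      strategy:
        type: RollingUpdate             # replace outdated versions without downtime
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.2.3   # placeholder image
            ports:
            - containerPort: 8080
            livenessProbe:              # restart the process when it stops responding
              httpGet:
                path: /healthz
                port: 8080
            envFrom:
            - secretRef:
                name: web-secrets       # secrets provided to the service
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web                         # service discovery: other pods reach it as "web"
    spec:
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080

External load balancing and log shipping still need an Ingress (or a Service of type LoadBalancer) and a log agent, but those are declared the same way rather than hand-rolled.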


(author here) The post isn't claiming that you should be doing something differently right now.

It's claiming that there's something better that hasn't been discovered yet. Probably 10 years in the future.

I will be really surprised if anyone genuinely thinks that Kubernetes, or even AWS, is going to be the state of the art in 2031.

(Good recent blog post and line of research I like about compositional cloud programming, from a totally different angle: https://medium.com/riselab/the-state-of-the-serverless-art-7...)

FWIW I worked with Borg for 8 years on many applications (and at Google for over a decade), so this isn't coming from nowhere. The author of the post I quoted worked with it even more: https://news.ycombinator.com/item?id=25243159

I was never an SRE, but I have written and deployed code to every data center at Google, as well as helping dozens of people like data scientists and machine learning researchers use it, etc. It's hard to use.

I gave this post a modest title since I'm not doing anything about this right now, but I'm glad @genericlemon24 gave it some more visibility :)


This article really resonated with me. We are starting to run into container orchestration problems, but I really don't like what I read about K8s. Apart from anything else, it seems designed for much bigger problems than mine, and requires the kind of huge mental effort to understand which, ironically, will make it harder for my business to grow.

I’m curious if you’ve taken a look at Nomad and the other HashiCorp tools? They appear focussed and compositional, as you say, and this is why we are probably going to adopt them instead of K8s - they seem to be in a strong position to replace the core of K8s with something simpler.


I use Nomad a lot in my company and I really like it.

Our team tried to migrate to AWS ECS a few times and found it much harder to abstract stuff from devs / create self-service patterns.

That said, it's not a walk in the park. You will need to scratch your head a little bit to set up consul + nomad + vault + a load balancer correctly.


Thanks. We're going to start small with just nomad, then vault, and as our needs grow we will probably adopt consul (we already use terraform so hopefully not a huge stretch) and maybe boundary.

This is the thing I like about the HashiCorp tools. You don't have to eat the whole cake in a single sitting.


There are some good Ansible playbooks on GitHub for nomad, consul and vault. I personally don't use vault because it's overkill for the product I'm working on at the moment.

To avoid the pain of managing a CA and passing out certificates for TLS between services, I use a wireguard mesh and bind nomad, consul and vault to these wg interfaces. This includes all the chatter of these components, as well as the services I deploy with nomad. It's configured such that any job can join the "private" wireguard network or "public" internet gateway.

It takes a few days to set up, but it's very easy to manage.


Do you have somewhere to point me to set things up in this configuration?

I’m a freelancer that hosts client stuff and I need something between “SSH into server” and “kubernetes.”

No, I never did buy the docker hype. Seem to be doing okay.


Have you looked into Fly.io or AWS Fargate?


>You will need to scratch your head a little bit to setup consul + nomad + vault + a load balancer correctly.

I've been wondering, would it make sense to try to package all that into a single, hopefully simple and easily configurable, Linux image? And if it might be, why hasn't anyone done that yet?


I've only looked at the HashiCorp tools, not really used them. My understanding is they originated in a VM-based world (?), and I've worked almost exclusively with containers. I'm sure that has changed over time.

I will say that I looked at HCL and it looks very nice:

https://github.com/hashicorp/hcl

But somehow it's not as popular as a mess of YAML and Go Templates? That genuinely leaves me scratching my head. I guess it's because people pick platforms and not languages? (BTW, in 2009 I designed and implemented the template language that Go templates are based on, and I find their common application pretty bizarre, e.g. in some Helm charts I looked at from this thread)

Oil is growing a config dialect that looks a lot like HCL (although it's convergent evolution; I've never used it.) I think there is a lot of room for mixing declarative and imperative; as far as I can see HCL is mostly declarative (defining data structures).

Anyway I'd be interested in reading about HashiCorp stuff but for some reason in my neck of the woods I don't hear too much about it. Maybe that's because they're paid services and the open source Kubernetes seems attractive by comparison? Or is it more of a VM vs. container thing?


All of the Hashicorp products are primarily open source products. While there are enterprise features and cloud-hosted versions of some of them, FOSS is the foundation of the company.


10 years ago there wasn't a Docker (released in 2013), and AWS was a tiny side player with most established businesses operating their own data centers.

I think it's safe to say that if the next 10 years are anywhere near as disruptive as the last 10 we will surely be doing a lot of things very differently.


Things have already changed since the first release of Kubernetes. Specifically hosted Kubernetes, aka GKE/EKS/AKS, is a marked step forwards from running Kubernetes yourself, that I think doesn't get enough recognition. We'll see what the future holds, but my prediction is that the future holds more layers of indirection, and the future of running web services is on AWS Lambda/Azure Functions/Google Cloud Functions, and other fully-managed PaaS, like Heroku, with more vendor agnosticism. Running Kubernetes, in addition to the technical benefits, also enables a company to treat AWS/GCP/Azure as a commodity, and can credibly threaten to move clouds when the contract is up for renewal.


Back in 2003 we had Solaris Zones (now called Solaris Containers). Same concept as Docker, but we didn't know exactly why it was such a good idea, and the hardware was expensive.

What made Docker take off was being able to use commodity hardware and push to production with the same exact environment and behavior.

You could have done the same with Solaris, if you developed on a Sun Ultra 5 workstation and published the application in a zone on the server. But 2003 was a different world and not everyone had a SPARC box nearby to develop on.


IMO the HashiStack's Nomad provides a better development experience. The complexity is gradual and it doesn't try to do "everything"; it can stay focused on workload orchestration (whether it's a container, a VM, or even a process) and delegates coordination out to specific services better suited for it (Consul for service discovery, Vault for secrets, etc.).


Totally agree. Simple things stick, complicated things die.

If you're explaining, you're losing.


You're mixing together useful complexity with useless complexity.

Plus at the very least, I'd be very careful about putting type systems into the same basket as Kubernetes. One is a basic language feature used offline and before deploying. The other is a highly complex interwoven web of tools that might take your systems offline if used incorrectly.

Without Kubernetes, you need Debian and its Apache and MySQL packages. It's called a LAMP stack, and for many production deployments, that's good enough. Because without all that "cloud magic", a $50 per month server running a bare metal OS is beyond overpowered for most web apps, so you can skip all the scaling exercises. And with a redundant PSU and a redundant network port, 99.99% uptime is achievable. A feat so difficult, I'd like to mention, that Amazon Web Services or Heroku rarely manage to...

Complexity has high costs. Just because you don't see Kubernetes' complexity now, doesn't mean you won't pay for it through reduced performance, increased bug surface, increased downtime, or additional configuration nightmares.


> You're mixing together useful complexity with useless complexity.

> Complexity has high costs

Complexity management is the central theme of building any large, valuable system. We would probably find that the more complex (and correct) a system, the more valuable it becomes on a relative basis to other competing solutions. The US tax code is a pretty damn good example of complexity intentionally taken to the extreme (for purposes of total market capture). We shouldn't be surprised to find other technology vendors framing problems & marketing their wares under similar pretenses.

The best way to deal with complexity is to eliminate it or the conditions under which it must exist. For example, we made the engineering & product choice that says we do not ever intend to scale an instance of our application beyond the capabilities of a single server. Consider the implications of this constraint when reviewing how many engineers we actually need to hire, or if Kubernetes even makes sense.

I think one of the biggest failings in software development is a lack of respect for the nature and impact of complexity. If we are serious about reducing or eliminating modes of complexity, we have to be willing to dig really deep and consider dramatic changes to the ways in which we architect these systems.

I know its been posted to death on HN over the last ~48 hours, but Out of the Tar Pit is the best survey of complexity that I have seen in my career so far:

http://curtclifton.net/papers/MoseleyMarks06a.pdf


Couldn’t agree more! The crazy, ridiculous Rube Goldberg machines that are being strung together from those AWS components to solve the most mundane problems are getting ridiculous.


Absolutely agree with you. I have seen the debate between accidental and necessary complexity very often. It actually depends upon the stage of the organisation. In my opinion, many devs in startups and smaller orgs try to accommodate future expectations around the product and create accidental complexity. Accidental complexity becomes necessary complexity when an organisation scales out.


Another case of premature optimization, really


You really only need a single interview to find out companies care a lot about premature optimization.

They’re always asking if you’ve worked with the latest tech stacks, never if you can set up a really well optimized nginx server.


I see this as Resume Driven Development. Yak shaving is fun and at some point complex DevOps becomes necessary, but most CRUD apps benefit from a simple approach that allows high feature velocity. It's a balancing act between productivity and technical debt.

I worked for one startup led by a Java Architecture Astronaut, and working with the byzantine patterns and build systems made adding even simple features a morale-draining slog. It killed the product.


> And with a redundant PSU and a redundant network port, 99.99% uptime is achievable.

It's really tempting to believe that with the right hardware, we can put everything on one powerful and inexpensive box. A couple of problems with that:

1. What happens when you have to reboot to apply a kernel update?

2. The geographic location of that single box is itself a gap in redundancy. This is one thing I like about AWS and the other hyperscalers, with their regions that each have multiple data centers connected by a private network, with load balancers and other things spanning the region.


This is a question that I have spent a ridiculous amount of time pondering.

My conclusions so far are this:

Single node application systems are by far the most reliable and manageable from a business logic standpoint. At no point does spreading a problem across more than 1 computer make that problem easier to solve.

If you are concerned about latency, you need to get really abstract with your problem and ask what is even possible in information theoretic terms. If you are truly constrained to 1 serialized, synchronous context (i.e. a competitive counterstrike match or a stock exchange), there is little you can do to alleviate the root problem as your users get further from the server. You can certainly look at using some consensus protocol like multi-paxos, but then your transaction latency goes from microseconds (if you were clever) to milliseconds, representing orders of magnitude slowdown in the typical case.

The best solution I can come up with is a synchronously-replicated append-only log store which is utilized in a primary/sync-witness/async-witness/... configuration. The first tier of resilience would be synchronous and provided by a set of witness nodes which must ack as a majority to progress primary. These nodes would ideally be within 1-2ms of the primary. The async witnesses could be in orbit and/or on mars. These are more about extreme geological disaster recovery. The witness nodes would also use a separate consensus protocol to decide when the primary needs to be taken down and replaced with a sync (or god forbid async) witness. They would be able to elect an emergency leader separate from the primary who would be authorized to stop the bad primary in the hypervisor, and edit any relevant DNS records to ensure traffic stops hitting the bad system.

For the customers I work with, it is wayyy easier to build & sell a system that operates on a single box with sync replication + manual failover. Our customers are tolerant to production having a brief outage for a few minutes during the business day. Especially considering the fact that I have still never had to do this exercise in a production setting. The hardware we run this stuff on is so ridiculously stable.


> Single node application systems are by far the most reliable and manageable from a business logic standpoint.

Manageable? Maybe. Reliable? No. Most companies don't need it but if a pipe bursts above the server room and now all of your medical records for your hospital are unavailable you're going to have a bad time.

> At no point does spreading a problem across more than 1 computer make that problem easier to solve.

I don't think anyone claimed it was easier (yet). The main thing people strive for is:

1. Dynamically reshaping your compute needs to match any needs.

2. Surviving failures of nodes or crashes in your application.

3. Managing all of the BS that goes with obtaining this (logs, etc).

> Especially considering the fact that I have still never had to do this exercise in a production setting. The hardware we run this stuff on is so ridiculously stable.

I hope this isn't the case but this sounds like quite the death flag.


I’ve only seen one burst pipe bring down a computer (Pitney Bowes, 1995), but I’ve witnessed several S3 outages.


You should spend a week working on a infra / ops team then.


I worked with multiple ops/infra teams, and have seen standard server rooms and advanced server rooms. They’re not supposed to have pipes leaking on the racks, that’s ridiculous. I know many stable redhat servers that didn’t need to reboot for years. I also maintain websites on hetzner servers that haven’t needed to reboot in years, under pretty high load. On the other hand, I got a hostnoc dedicated server and that thing went offline three times a month. So yeah, you have to know what you’re doing regardless of whether you’re building a server room or a k8s cluster.


More modern data centers have liquid pumps available for low-cost cooling.


> I know many stable redhat servers that didn’t need to reboot for years.

That's not actually something to brag about, as it implies they weren't receiving important kernel updates, particularly security updates.


So, we do ~daily restarts on our customers' systems. We have a solid 12 hours per day to patch up a server before it needs to be in production again.

That said. We communicate directly with other systems on the same premises that probably haven't been rebooted in over a decade.

Building something that can run 10 years without downtime is not happenstance. It is very intentional and deliberate engineering.


https://en.wikipedia.org/wiki/Kpatch

Allows you to patch the kernel without reboot.


As mentioned above, they were; both Red Hat and Ubuntu now support kernel updates without a restart.


Completely aligned with my parent: day-to-day operations are often slightly more complex than making sure your server runs your app properly. I'd argue that our services are increasingly dependent on others (in both directions; dependencies multiply and are more and more critical). It's also by interacting more with external entities that they bring more value.

> The best solution I can come up with is a synchronously-replicated append-only log store which is utilized in a primary/sync-witness/async-witness/... configuration. The first tier of resilience would be synchronous and provided by a set of witness nodes which must ack as a majority to progress primary. These nodes would ideally be within 1-2ms of the primary. The async witnesses could be in orbit and/or on mars. These are more about extreme geological disaster recovery. The witness nodes would also use a separate consensus protocol to decide when the primary needs to be taken down and replaced with a sync (or god forbid async) witness. They would be able to elect an emergency leader separate from the primary who would be authorized to stop the bad primary in the hypervisor, and edit any relevant DNS records to ensure traffic stops hitting the bad system.

This part was what I felt deserved a counter-point, though. Consensus is indeed at the core of the issue once you want distributed fault tolerance. However, I think you'll quickly hit two things with your approach. First, keeping witnesses within 1-2ms of latency means they are physically close to the primary, so I fear highly correlated failures in the "first tier of resilience". Moreover, with the "second tier" being much farther away, keeping it in consensus implies harsh trade-offs: if you use synchronous consensus protocols, you'll drastically slow down the "first tier" (assuming you want consistency); if you go for asynchronous replication (not consensus, this matters...) then the second tier can't really intervene on leader election or failover without risking a partition on a false positive (and if you try to be conservative there, your RPO will suffer).

If you're into these (fundamental) issues, I'd recommend Leslie Lamport's work (i.e. https://www.college-de-france.fr/site/en-martin-abadi/semina... or http://www.lamport.org/), the paper pointing a disappointing impossibility: https://dl.acm.org/doi/abs/10.1145/3149.214121 and its generalization (https://dl.acm.org/doi/abs/10.1145/167088.167119).


I appreciate the commentary. I feel most of this effectively boils down to "it depends". Clearly, there is no one-size-fits-all solution that can make everyone happy all the time, especially considering all the constraints & variables present across the entire vertical.

Where your datacenters are geographically located is usually a big first step in even starting these types of conversations. Whether sync replication is feasible or a liability is often a conversation about the geography of a region and the statistical likelihood of certain disasters impacting multiple sites simultaneously.

Some customers can't ever afford to lose a single transaction no matter what; some just need it to be reasonably stable but incredibly fast (e.g. gaming vs banking).

Will definitely be spending some time reviewing Lamport's works again. Establishing the notion of stable time between all participants is a fascinating way to solve a lot of problems in distributed systems.


1. I reboot it and my users will need to wait. 99.99% uptime is achieved with 53 minutes of downtime every year, largely enough for the occasional kernel update here and there.

If needed, I can choose to apply the update in the middle of the night in the timezone of most of my visitors.

If it really is a sensitive app and I can't afford any downtime, I just add another inexpensive box and put a load balancer in front (a Cloudflare load balancer would work fine). And since I now have 2 servers, I need a way to manage them without having to manually log in to each of them each time. Enter Ansible. And that's it.

2. Now that I have two cheap boxes, nothing prevents me from having them in two separate data centers and two separate providers.
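For what it's worth, the Ansible half of that two-box setup stays small as well. A hedged sketch, with the host group, package, and file names invented for the example:

    # site.yml; run with: ansible-playbook -i inventory.ini site.yml
    - hosts: webservers              # the two boxes, listed in inventory.ini
      become: true
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
            update_cache: true
        - name: Deploy the app's nginx config
          template:
            src: app.conf.j2         # hypothetical template kept in the repo
            dest: /etc/nginx/conf.d/app.conf
          notify: reload nginx
      handlers:
        - name: reload nginx
          service:
            name: nginx
            state: reloaded

Rerunning the same playbook against a fresh host is also how you rebuild a replacement box if one of them dies.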


>"It's really tempting to believe that with the right hardware, we can put everything on one powerful and inexpensive box"

This is what I have. 2 boxes only. One on my own premises in Canada and another on Hetzner in Europe. Each one can go down, no sweat. Also I do not strive for "live uninterrupted updates". All my applications and processes allow for short disruptions not to interfere with the main course.

Also I write native servers in C++ for backend and they process thousands of requests per sec without breaking much sweat. Way more than my business would ever need.

Same situation with my clients as well, with the difference that for legal reasons they rent computers, or whatever goes as such, from Amazon or Azure.


Correct me, but I'd dare say admins have been rebooting their machines and running services without geographic redundancy for decades. Uptime was still very high.


1. Planned maintenance in the middle of the night. Even Amazon and Heroku need to do it sometimes. At least we had a forced Postgres version update recently which was 30 minutes of downtime.

2. The latency within AWS is easily as high as US-EU backbone latency. Also, most web apps these days need 2000ms to load all the tracking and advertising crap, so 100ms of location latency is negligible in comparison. Plus, for most real companies, you'll have one website per country anyway. One server for .com and one server for .eu.


Point 2 is relevant, point 1 not really, unless one isn't willing to pay for the specific OS version that allows live kernel updates.


This argument is often made and is ridiculous.

No one should or is using Kubernetes to run a simple LAMP stack.

But if you have dozens of containers and want them to be managed in a consistent, secure, observable and maintainable way, then Kubernetes is going to be a better solution than anything you build yourself.


> No one should or is using Kubernetes to run a simple LAMP stack.

Yes they are. Some developer got all excited about the capabilities of k8s, and had an initial larger scope for a project, so they set it up with GKE or EKS, and it managed to provide just enough business value to burrow in like a tick that won't be going away for years.

Developers get all excited for new shiny tools and chuck it into production all the time, particularly at smaller orgs.


I have seen smaller RoR, Django or LAMP-stack apps being deployed on Kubernetes for exactly the reasons you mentioned. It is often pitched as a silver bullet for the future.


When the boss says "my idea is gonna be HUGE, so make this go fast", you can either spend 4 hours optimizing some DB queries, or you can spend 40+ hours on a broadly scoped "conversion" project and have a new thing to add to your resume, and then spend 4 hours optimizing some DB queries...


One of the worst bazooka-in-a-knife-fight moments was when I interviewed at a medium-scale direct-to-consumer streaming service that had built a custom resource to create one pod per user; their worst-case scenario was around 1000 concurrent users.


I worked in a Python web-scraping team doing 3 million requests per day. Their solution to manage concurrency was also one pod per crawl, each with its own DB connection. It really struck me as crazy that everyone thought this was a good idea.


Strikes me as someone buying a massive SUV, to only use it to move one bag of groceries from the store every week. Some day in the future they might suddenly have a family of 6 to feed or have to haul a boat somewhere, so better get the big, overkill car.


I have personally experienced the other end of that problem, and it applies just as well to the k8s discussion. When all the kids are gone it's hard to admit you're not a soccer mom anymore and adjust to grandma mode. It can be a hard thing to trade in the Suburban for the Lincoln. Oh wait, Lincoln's aren't the cool thing so much anymore. That whole world changed while I was driving Suburbans.


This describes ~70% of adults in the southeastern US.


Heh, as someone who came very close to doing this (i.e. using k8s for a LAMP-stack type app at a startup), it's not just "shiny object syndrome" driving people to do this. Here's what our progression looked like:

1. Start with a basic LAMP app in git that's manually deployed to an EC2 instance

2. Add in CI / CD + CodeDeploy

3. Create a staging environment

4. Dockerize the local environment to keep dev environments in sync and onboard easier (really, this part's a gamechanger for a small company; sketched below)

5. Ok so now we have Docker for local dev environments, but stage and prod are managed separately. Can we just run our Docker containers in stage / production?
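For anyone wondering what step 4 amounts to, a minimal sketch follows; the service names, image, and paths are placeholders rather than anyone's actual setup:

    # docker-compose.yml for local dev (step 4); names and images are illustrative
    version: "3.8"
    services:
      app:
        build: .
        ports:
          - "8080:80"
        volumes:
          - .:/var/www/html          # mount local code into the container
        depends_on:
          - db
      db:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: dev-only-password
        volumes:
          - dbdata:/var/lib/mysql
    volumes:
      dbdata: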

When I researched step 5, the options were basically k8s or Docker swarm but Docker swarm didn't seem battle tested (for prod). k8s was clearly a nightmare for a small team to maintain so we started looking into GKE / EKS -- but EKS was still in beta. Thus we punted. We've actually started using ECS for a newer project and I'd likely go that route for step 5 instead.


It tends to happen with startups I think because developers know the startup is likely not going to be around for long, or they will move on in a couple years anyway. So why not shoehorn whatever tech you want and have that skill in your resume? Of course now the startup will need to hire for that tech when you leave and the cycle continues...


> initial larger scope for a project

So they weren't trying to run a simple LAMP stack then.


Trying and doing can be very different things.


I agree that you probably shouldn’t but if you think no one “is”, I’d point to my last job, an enterprise that went to k8s for a single-serving php service that reads PDFs.

I recently asked a friend who still works there if anything else has been pushed to k8s since I left (6 months ago). The answer: no.


Sounds familiar.


Alas, a lot of people are. One of the reasons there's such a backlash against k8s - other than contrarianism, which is always with us - is that there are quite a few people who have their job and hobby confused, and inflicted k8s (worse yet, raw k8s) on their colleagues not because of a carefully thought out assessment of its value, but because it is Cool and they would like to have it on their CV.


> No one [...] is using Kubernetes to run a simple LAMP stack.

Au contraire! This is very common. Probably some combo of resume-driven devops and "new shiny" excitement.


or if you have a small team that must manage hundreds of these lamps…

i started by simple wrapper around docker 8 or so years ago. over the years we’ve moved to k8s b/c it provided essentially the same api we had home grown. this reduced the LoC and moved the complexity of something like dynamic reverse proxy via nginx downstream into kubernetes ingress abstraction backed by nginx-ingress-controller.


That's not resume driven development though, that's picking the best tool for the job when you've outgrown the current solution.


I love the way that the Kubernetes debate always immediately devolves into Kubernetes vs. DIY where Kubernetes is the obviously correct answer.

Two groups of people shouting past each other.


My argument is that much of web development now involves an insane amount of tooling, infrastructure and complexity, all for the sake of delivering a cloud-hosted, scalable, virtualized, dockerized and otherwise over-processed equivalent of "hello world".

And 99% (a rhetorical number, of course) of needed solutions would be served by a single real server (rent one on Hetzner or wherever) with zero need to ever upgrade.


Another interesting angle on the single real server is reliability: it'll eventually break from some physical cause; that happens, and it hurts your uptime.

But sometimes with the big microservice/kubernetes solution, the configuration space is so big that miscommunications/mistakes could potentially take you down for more hours/year than the downsides of being single-hosted would. So now you invested all of that effort and for what?

One company I worked for wrote a distributed scheduled jobs system that was immune to single-machine failures, it went down like 4 times in a year, messing up my team each time. I was like "guys, if we just provisioned one machine with cron and no failover, it would have better reliability".


At a previous company we had Series C financing and enough people to take over two floors of an office building... and our entire SaaS offering for hundreds of white-label business customers ran fine on a couple of big load-balanced EC2 VMs and one big Postgres database with some read replicas.


Yep, HA clusters/load balancing have been around for a long time and will get you pretty far even in some pretty large environments. Hell, my company's main LOB app is running off a single Microsoft SQL Server (I know, I know, we have a lot of tech debt we're working through atm and it's on the list to set up HA). The longest downtime we've had was when we had to take it down to migrate it to our new hardware cluster, and that was because the data transfer took 20 hours over a 1 GB port.


That doesn't look good on a resume though and definitely won't get you hired at a FAANG company.


I design and implement products. That is what my resume says along with the list of said products and some references. This is usually enough to land me a contract. Been on my own 20 years already and make money from some products of my own or creating those for clients. The last thing I need is a job at FAANG.

Besides, if it warms the cockles of your heart so much, you can always take any of my products, shove it in a container and run it under k8s.


> You're mixing together useful complexity with useless complexity.

Or, to channel Fred Brooks, essential and accidental complexity.


> useless complexity

Which items fall in this category?


Most of the time, K8s


Useless is a strong word in this context. Which parts or features of k8s are useless complexities? It's not like some random junior dev pulled a bunch of features out of a hat and implemented them. There was a ridiculous amount of thought put into its features and I can't think of a single complexity that is useless or even one that is useful but could be done in a more elegant /less complex way.

Yes, it is complex, but there are lots of use cases where it is the most elegant and least complex solution. Yes, it definitely does not make sense to use it for a lamp stack deployed to one server, but there are use cases where it's a huge improvement (e.g. Spark on Hadoop is extremely complex and clunky when compared to spark on kubernetes).


Like you said, if you're good with a LAMP stack and one server, then K8S is probably useless complexity.

The issue is that most people nowadays have never worked with a bare metal LAMP server, so they grossly underestimate how large their company can grow before needing any distributed HA solution. I'd wager 90% of startups go bankrupt or are acquired before outgrowing a single LAMP server.


"Write it all yourself"

- Install software onto your machines

Package managers, thousands of them.

- Start services

SysVinit, and if shell is too complicated for you, you can write totally not-complicated unit files for SystemD. For most services, they already exist.

- Configure your virtual machines to listen on specific ports

Chef, Puppet, Ansible, other configuration tools, literally hundreds of them etc.

- have a load balancer directing traffic to and watching the health of those ports

Any commercial load balancer.

- a system to re-start processes when they exit

Any good init system will do this.

- something to take the logs of your systems and ship them to a centralized place so you can analyze them.

Syslog has had this functionality for decades.

- A place to store secrets and provide those secrets to your services.

A problem that is unique to kubernetes and serverless. Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?

- A system to replace outdated services with newer versions ( for either security updates, or feature updates ).

Package managers.

- A system to direct traffic to allow your services to communicate with one another. ( Service discovery )

This is called an internal load balancer.

- A way to add additional instances to a running service and tell the load balancer about them

Most load balancers have built-in processes for these.

- A way to remove instances when they are no longer needed due to decreased load.

Maybe the only thing you need to actively configure, again in your load balancer.

None of this really needs to be written yourself, and these assumptions come from a very specific type of application architecture which, no matter how much people try to make it one, is not a one-size-fits-all solution.


So instead of knowing about K8s services, ingresses and deployments/pods, I have to learn 15 tools.

Ingresses are not much more complicated than an nginx config, services are literally 5 lines each, and pods and deployments are roughly as complicated as a 15-line Dockerfile.
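To put a number on that: a basic Ingress routing one hostname to a Service is about as long as the equivalent nginx server block. A hedged sketch with made-up names:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
      - host: app.example.com          # placeholder hostname
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web              # the Service to route to
                port:
                  number: 80

And the Service it points at really is just a name, a selector, and a port mapping.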


If you're familiar with Linux (which should be considered required reading if you're learning about containers), most of this stuff is handled perfectly fine by the operating system. Sure, you could write it all in K8s and just let the layers of abstraction pile up. Or, most people will be suited perfectly fine by the software that already runs on their box.


I work in a small company, we don't have a sysadmin, so mostly we want to use managed services. Let's say we want a simple load balanced setup with 2 nodes. Our options are:

- Run our own load balancing machine and manage it (as said, we don't want this)

- Use AWS/GCP/Azure, setup Load Balancer (and rest of the project) manually or with Terraform/CloudFormation/whatever scripts

- Use AWS/GCP/Azure and Kubernetes, define Load Balancer in YAML, let K8S and the platform handle all the boring stuff

This is the simplest setup and already I will always go for Kubernetes, as it's the fastest and simplest, as well as the most easily maintainable. I can also easily slap on new services, upgrade stuff, etc. Being able to define the whole architecture in a declarative way, without actually having to manually do the changes, is a huge time-saver. Especially in our case, where we have more projects than developers - switching context from one project to another is much easier. Not to mention that I can just start a development environment with all the needed services using the same (or very similar) manifests, creating a near-prod environment.
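As an illustration (details made up, not an actual manifest), the "define Load Balancer in YAML" part is essentially:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer       # the managed platform provisions the actual load balancer
      selector:
        app: web               # placeholder label on the backend pods
      ports:
      - port: 80
        targetPort: 8080

On the managed platforms, the cloud controller notices this object, provisions the provider's load balancer, and keeps its targets in sync.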


I think the argument there is that it's only simple because the complexity of k8s has been taken away. I don't think anybody has claimed deploying to a k8s cluster is overly complex; running it well, handling upgrades, those are huge time sinks that need the requisite expertise.

Much like Multics was "simple" for the users, but not for the sysadmins.


That's the point though right? A good (couple of) sysadmins can run a k8s cluster that can be leveraged by dozens (even hundreds) of dev teams. Instead of every team having to re-invent the wheel you get a common platform and set of deployment patterns that can fit most any use case. Of course if you don't have multiple different teams (or every team is running their own k8s cluster) then that is definitely a problem. But just because a handful of teams make an ill-advised investment in k8s when they could do easily with something much simpler doesn't mean that k8s is "too complex." Too complex for that use case sure, but for the vast majority of k8s deployments I would wager that it does add a lot of value and subsume a lot of the inherent complexity of running distributed, fault-tolerant, multi-tenant applications.


Taking the complexity of k8s away was just gonna happen. As someone who built everything from scratch at a previous company, I chose eks at a start-up because it meant that the one-man-systemsguy didn't have to worry about building and hosting every single cog wheel that is required for package repos, OS deployment, configuration management, consul+vault (minimum), and too many other things that k8s does for you. Also, you can send someone on a CKA course and they know how your shit works. Try doing that with the hodge-podge system you built.


Training is a great point, and I think that's why major clouds are going to be stickiest (in terms of using them vs migrating to new things).

The central problem of most companies has been finding / affording people who can maintain their stuff.

If Amazon / MS / Google can make it simple enough that skilled people can be quickly cross trained, and then have enough architecture knowledge to be productive, that's a huge win over "require everyone to spend 6 months muddling through and learning our stack we built ourselves and partially documented."


Set up servers at linode and use the linode node balancer?

> Being able to define the whole architecture in a declarative way

With k8s (and other 'cloud' stuff) you seem to need to know a whole mess of a lot of the tool's stuff up front, vs a "progressive enhancement" way of doing one thing, getting it working, doing something else, getting it working, etc.


You run a small company, I'd argue that you aren't "the average user". For you, Kubernetes sounds like it integrates pretty well into your environment and covers your blind spots: that's good! That being said, I'm not going to use Kubernetes or even teach other people how to use it. It's certainly not a one-size-fits-all tool, which worries me since it's (incorrectly) marketed as the "sysadmin panacea".


I have been professionally working in the infrastructure space for a decade and in an amateur fashion running Linux servers and services for another decade before that and I am pretty certain that I would screw this up in a threat-to-production way at least once or twice along the way and possibly hit a failure-to-launch on the product itself. I would then have to wrestle with the cognitive load of All That Stuff and by the way? The failure case, from a security perspective, of a moment's inattention has unbounded consequences. (The failure case from a scaling perspective is less so! But still bad.)

And I mean, I don't even like k8s. I typically go for the AWS suite of stuff when building out systems infrastructure. But this assertion is bonkers.


Why? You still need to manage all that for your server even if you are running kubernetes on top of it.

I can’t imagine anyone with root access to a kubernetes server is any less dangerous that root on a simple webserver.


No, I don't, because I can yawn dramatically and I can go to any cloud provider and get a k8s cluster with generally consistent and at worst a moral-equivalent set of standard building-block cloud tools already set up. It won't cost me much, it will work mostly-predictably out of the box, and there's support right there for when it fails. Like, that's what k8s is there for. I use AWS pretty exclusively so this doesn't appeal to me, but what does is doing the moral equivalent and having ECS just...there. (Or even better, Fargate, if I can't solve the bin packing problem by myself.)

I haven't "managed a server" outside of my house for a few years now, and I quite like it. I theoretically have had root to ECS clusters, but I've never logged into them. Why would I? Amazon is going to be better at it than I am. Not only do I have more important things to be doing, but I'll do a worse job of it than they will. And to be clear: I consider myself pretty kinda really good at this stuff. But not good enough to make it a competitive advantage unless it's what I want to sell, and I sure as heck don't.

And the article's point, that whatever comes next will probably be better and might even be The Real Thing--I think that is wise.


> Why would I? Amazon is going to be better at it than I am.

Until it's not. Then suddenly you're trying to decipher cryptic cloud provider error messages in a service that made a false promise to you that its abstraction was so air-tight you'd never have to learn the underlying technology at all.

Then suddenly, you do need to know the underlying implementation, and quickly.


Yup! I used to feel exactly as you do, and I still make it my business to understand what is below the abstraction, partly because old habits die hard (and because I just like this stuff, tbh). But I started working at places with the kind of conservatism and pre-testing that make that much less critical. Those organizations also pay a great deal of money for the kind of support that makes knowledge a habit of curiosity and personal fulfillment rather than save-the-worlding.

I haven't needed to do something like that in production, as opposed to pre-production deployment suss-out, since 2017 (and I went and checked to be sure). Though, to be fair, I've been working in devrel since last August, so call it four years of rooting around in the trenches, not five. ;)


> most of this stuff is handled perfectly fine by the operating system

No, you have to write or adopt tools for each of these things. They don't just magically happen.

Then you have to maintain, secure, integrate.

k8s solves a broad class of problems in an elegant way. Since other people have adopted it, it gets patched and improved. And you can easily hire for the skillset.


Okay, so let's add a couple of things.

How do you do failover?

Sharing servers to save on costs?

Orchestrate CI/CD pipelines, preferably on the fly?

Infrastructure as Code?

Eventually you reach a point where the abstraction wins. Most people will say "but AWS...", but the abstraction is quicker, easier to use, and runs on multiple providers, so I think it's going to keep doing well, personally.


Not the OP here.

We aren't really comparing apples to apples in all the cases that have been talked about in the larger thread. Some of the comparisons seem to be between "self hosted LAMP stack" vs. "kubernetes as a service on AWS". These are vastly different things. We should compare "self hosted LAMP stack" vs. "hosted in cloud LAMP stack" for example, or "self-hosted kubernetes" vs. "self-administered kubernetes on EC2" vs. "kubernetes as a service on AWS". All of these will have vastly different characteristics, pros and cons depending on your company and teams' realities.

Failover is something that a load balancer does automatically for you. Your services just need to provide a health check. Now, where you actually run those nodes is a different thing. These might be slow-to-procure servers hosted at your provider. Or these might be manually set up EC2 instances, or terraformed EC2 instances. Dunno what everyone uses as load balancers nowadays, but a previous place, for example, had F5s and we had our own vSphere farm.

Sharing servers: I don't think this is a good idea at all, except if you mean internally, and if you do that then there are good and bad ways (see above on the vSphere farm). If one project caused another to starve performance-wise because of what was running on the same physical machines, it was easy to resolve. If this was virtual servers at a traditional hoster, good luck. AWS is probably somewhere in between with EC2 and especially their storage.

Dedicated CI/CD pipelines: This is an awesome one to have and can cost an arm and a leg. I enjoy this very much at my current place w/ EC2 CI agents that scale with the number of devs currently working and dedicated "complete copy of Prod" dev environments (basically a kubernetes namespace for each dev/QA person/e2e test run to play with as they like).

Infra as code: Does not require Kubernetes at all, but can be implemented with Kubernetes. If you already used Docker to run stuff anyway, for example, and you can "abstract away" the Kubernetes complexities to your SRE team and/or AWS, go ahead and use Kubernetes. But be aware that if nobody at your place actually knows Kubernetes because you just relied on the hosted version of it, you're at the whim of their support people when something blows up in production. You may not be big enough to have your own SRE team to take care of this, but then you might also just not really benefit enough from Kubernetes' complexity, and a simpler arrangement could have been easier for the people you do have to actually understand.


I think you've missed the point I was making.

Essentially if you work back from the desired state of having IaC, CI/CD, test environments per MR, you likely see something like k8s as a framework that helps you achieve that.

Of course, if you start from "I just need a LAMP stack" you might have a very different conclusion. But when you reach the same endgame (actually, I need an environment for every MR), you've probably incrementally built something more complex and bespoke.

This will explain why there are dozens of us who are quite happy with the product. The only real question is, do you already know it and do you find it much harder to ship a deployment to a managed k8s cluster vs systemd unit files?

If not, it might be an abstraction worth having. If you don't already know how though, then you might have better things to be doing with your time.


This really depends on how many boxes you have.


> I have to learn 15 tools.

kubectl, kustomize, helm, istio, grafana, various flavors of ingress controllers, overlay networks, service meshes, storage controllers, etcd, etc.

from tfa: https://landscape.cncf.io/

you still have to learn 15 tools, just now they are hidden behind the scenes, and you still have to understand the underlying systems to reason about your containers.

this isn't for or against k8s - i'm a right tool for the job guy - but as a tool kubernetes doesn't solve problems, it encapsulates them and shifts them around.


Plus all the cloud tools are immature with terrible error handling and logging.

So after an enjoyable time crafting a 30-level-deep JSON file, you get a failed Helm deployment with an error message like "timed out waiting for the condition".


15 mature, well-documented tools are a lot easier than 15 kludged, ill-thought-out Kubernetes definitions.

Any serious Kubernetes environment is not 5 lines per pod; it's the hell of RBAC and pod security policies and all sorts of overly cryptic cruft.


“ For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.”

Or… you could not.

https://news.ycombinator.com/item?id=9224


The difference is that Dropbox is user-facing software, while Kubernetes is software-engineer-facing. Dropbox has to be usable by tech-illiterate people. Tech-illiterate people have no idea what a Kubernetes is.

There is value in creating a vertically integrated solution in a space, similar to what Dropbox did, so if you find yourself building many of the pieces of Kubernetes internally, it's worth considering if adopting Kubernetes wouldn't be a more efficient use of resources.


That comment has aged brilliantly.

Thanks for that!


how is quoting this here relevant? nobody's saying k8s isn't successful or going to be successful; the argument is whether its complexity and layers of abstraction are worthwhile. dropbox is a tool, k8s is infrastructure. the only similarity between this infamous post and the argument here is that existing tools can be used to achieve the same effect as a product. the response here isn't "that'll never catch on" (because obviously it has); rather it's "as far as infrastructure for your company goes, maybe the additional complexity isn't worth the turnkey solution"


"You don't need Kubernetes, for a Linux user you can already build a custom solution quite trivially by setting up a custom package repo then build and distribute your application using apt, then configuring SysVinit to monitor your services, whilst using Ansible to configure iptables rules in combination with a simple load balancer you can manage yourself, then use syslog to monitor logs across all your machines whilst hand-waving away secrets management as a problem with 'serverless'"

Yes, you could. Some people do. Others don't, because even if you need a small portion of the features a turnkey solution is likely a better choice in the long run than hand-rolling your own mix of 15+ different technologies to achieve the same thing.


Confounded why sshfs wasn't chosen.


So you have a version of Kubernetes that is as easy to use as Dropbox? Where do I sign up for the beta?



I'm personally glad that Kubernetes has saved me from needing to manage all of this. I'm much more productive as an applications engineer now that I don't have to stare at a mountain of bespoke Ansible/Chef scripts operating on a Rube Goldberg machine of managed services.


Instead, you can now admin a Rube Goldberg machine of Helm charts, which run a pile of Docker containers, each of which is its own microcosm of outdated packages and security vulnerabilities.


> Rube Goldberg machine of Helm charts

I love k8s but I do want to say that I hate the 'standard' way that people write general purpose Helm charts. They all try to be super configurable and template everything, but most make assumptions that undermine that idea, and I end up having to dig through them to make changes anyway.

I have found much more success by writing my own Helm charts for everything I deploy, and putting in exactly the amount of templating that makes sense for me. Much simpler that way. Doing things this way has avoided a Rube Goldberg scenario.


your argument seems to be "it's ok if you're rube goldberg"

just wait till you have a predecessor


That's the opposite of my argument. I'm saying that the predominant style is Rube Goldberg, but Helm charts don't have to be written that way. Instead of writing an unreadable mess that is 90% template, just template the 5% that you need, and the whole thing is very readable.
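
A minimal sketch of what that looks like (chart name, image, and values are illustrative, not from any real chart): only the replica count and image tag are templated, everything else stays plain YAML.

    # templates/deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: {{ .Values.replicas }}
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              # the tag is the only thing that varies per environment
              image: "registry.example.com/myapp:{{ .Values.imageTag }}"
              ports:
                - containerPort: 8080

    # values.yaml
    replicas: 2
    imageTag: "1.0.0"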


that's what you are hearing. what everyone else is hearing: "_my_ code is self-documenting, so obviously it's more legible!"


This x10. Each such setup is a unique snowflake of brittle Ansible/Bash scripts and unit files. Anything slightly different from the initial use case will break.

Not to mention operations. K8s gives you for free things that are a pain to set up otherwise. Want to autoscale your VMs based on load? Trivial in most cloud-managed k8s.
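
For example (a sketch; the name and thresholds are made up): a HorizontalPodAutoscaler scales the pods on CPU load, and on most managed offerings the node-pool autoscaler then adds or removes the underlying VMs to fit them.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web              # hypothetical deployment
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70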


> Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?

Yep, I remember when I deployed insecure apps to prod and copied secrets into running instances, too.


Remember how the ops team kept installing Tomcat with the default credentials?


This was the funniest point in that comment to me.

Read the intended way, it's borderline wrong.

Read as "remember when people assumed security without knowing" is basically most of computing the further back in time you go.


Have you ever tried to package things with .deb or .rpm? It's a f** nightmare.

A place to store secrets and provide those secrets to your services.

"A problem that is unique to kubernetes and serverless. Remember the days of assuming that your box was secure without having to do 10123 layers of abstraction?"

I remember 10 years ago things were not secure, you know, when people baked their credentials into SVN, for example.


lol. as someone who has packaged stuff I can tell you that K8S is orders of magnitude more complicated. Also, once you figure out how to package stuff, you can do it in a repeatable manner - vs K8s which you basically have to babysit (upgrades/deprecations/node health/etc.) forever and pay attention to all developments in the space.


    FROM python:3.8
    # apt needs an update first and -y to run non-interactively
    RUN apt-get update && apt-get install -y libsomething
    ENV RELEASE=production
    WORKDIR /app
    COPY . .
    # poetry isn't in the base image, so install it before the project deps
    RUN pip install poetry && poetry install
    CMD ["poetry", "run", "server"]
What would be the rpm/deb equivalent of those few lines? Would it work on macOS?


let’s unpack this for a while.

what is python:3.8? is this reproducible?

what is apt? where is the install coming from?

What about poetry?

Yeah it’s cool for shits and giggles but when this thing breaks you’re going to be in a world of pain

for the rpm equivalent:

python setup.py bdist_rpm

wat?


Oh my. I'm not sure that I'd use Python to make a point about easy interop with distro package managers. It quickly descends into a nightmarish hellscape if you have more than a few dependencies, different versions of Python, or, god forbid, C extensions.


bdist_rpm isn’t equivalent to the Dockerfile above. The Dockerfile can be made reproducible with a few changes (locking the upstream image to a hash, locking the apt package versions), but that’s likely overkill. Because when it breaks you’re not in for a “world of pain” at all, you just have a failing CI for an hour.

I take it from the lack of an answer to the question that the equivalent non-docker packaging would be much more complex.


.deb packages are literally just compressed archives with a folder structure that mostly mimics your folder structure on the hard drive. You've got some pre- and post-hooks where you can write some shell script to do fancy stuff, and a signing process to ensure authenticity. Autostart is a SysV init script or systemd unit file away. How is that a f* nightmare?


Checkinstall makes packaging pretty easy for anything you aren't trying to distribute through the official distro channels.

https://help.ubuntu.com/community/CheckInstall


I can set up a Kubernetes cluster, a container registry, a Helm repository, a Helm file and a Dockerfile before you are finished setting up the infrastructure for an Apt repository.


Exactly, an autoscaling cluster of multiple nodes with everything installed in a declarative way with load balancers and service discovery, all ready in about 10 minutes. Wins hands down.


My experience is the opposite - an APT repo is just files on disk behind any webserver, a few of them signed.

Setting up all the infra for publishing APT packages (one place to start: https://jenkins-debian-glue.org ) is far easier than trying to understand all the rest of the things you mention.


I mean, Kubernetes is just some Go binaries; you can have it up and running in literal seconds by installing a Kubernetes distribution like k3s. This is actually what I do personally on a dedicated server; it’s so easy I don’t even bother automating it further. Helm is just another Go binary, you can install it on your machine with cURL and it can connect to your cluster and do what it needs from there. The Docker registry can be run inside your cluster, so you can install it with Helm, and it will benefit from all of the Infra as Code that you get from Kubernetes. And finally, the Helm repo is “just files” but it is less complex than Apt.

I’ve been through the rigmarole for various Linux package managers over the years and I’m sure you could automate a great deal of it, but even if it were as easy as running a bash script (and it’s not,) setting up Kubernetes covers like half this list whereas setting up an Apt repo covers one item in it.


Yeah I don't understand where all this fictional .deb and APT "complexity" is coming from. Everything uses standard abstractions that are decades old at this point... oh no, you have to make some directories! You have to put a manifest file in the right place! Oh my god, now you have to run a command!


Now make it not-brittle and prone to falling over, without using hosted k8s. ;)


... but then you could pay a fraction for bare metal cloud hosting instead of paying out the nose for managed K8S at Google or AWS.

Its complexity and fragility are features. It's working as intended.


no. you cannot.


This is supposed to be an argument against Kubernetes?


Nope, just an argument against the "you must write all of this yourself" line. :)


There was some project where someone wrote all of that (essentially what Kubernetes does) in like 8k lines of bash script. Brilliant, yes. But there is no way I want anything similar in my life.

I am not the biggest fan of the complexity Kubernetes is, but it solves problems there is no way I want to solve individually and on my own.


I think the point of the blog post in the OP is that it should be a bunch of bash scripts with very few interdependencies, because most of the requirements in the grandparent comment are independent of each other, and tying them all together in a tool like kubernetes is unwieldy.


Some of these are decent points, but a couple are misleading.

The security one is the big one. Things were just not as secure (and did not need to be as secure) “back then”. K8s has a lot of complexity, and security should definitely be simpler so it’s harder to misconfigure, but not doing anything is not viable.

Saying “Package Managers” is fine until you realise they solve only part of the problem. The mainstream ones are good tools to update a package (and its dependencies) from version X to Y. When you’re running a distributed system, it’s often not that simple if you want to be reliable. Coordinating a slow global update of your application from version X to Y (safely) is pretty tricky, and I’m not aware of good self-contained solutions to this.


You're making their point for them.


That escalated quickly. Unit tests and type systems are not complicated at all, and are applied by solo developers all the time. GraphQL and Kubernetes are completely different beasts, technologies designed to solve problems that not all developers have. There really isn't a comparison to be made.


Almost every team I've worked on has needed to deploy multiple services somewhere, and almost every app has run into escalating round trip times from nested data and/or proliferating routes that present similar data in different ways. While it's true to say not all developers have those problems, they're very common.


That's a very SaaS-centric way of looking at software development.

Unit tests and type systems are useful across the whole stack. Systems developers, application developers, embedded developers, mobile developers, even sysadmins and IT people - they all have a use for these basic principles of how to design a piece of software.

GraphQL and Kubernetes, on the other hand, are solutions designed exclusively for web services deployed into the cloud, and they're primarily useful in situations where there are many different teams each working on different services, with differing release schedules and engineering priorities. These situations might seem very common in large companies, but I don't think they represent common aspects of software development in general.


I agree. GraphQL is conceptually straightforward, even if certain implementations can be complex. Any developer familiar with static typing is going to get it pretty easily.

I’m far from an expert, but ISTM that Kubernetes is complex both conceptually and in implementation. This has implications well beyond just operational reliability.


Sure, but k8s isn't the only way to do any of those things, and it's certainly a heavyweight way of doing most of them.

It's not a question of k8s or bespoke. That's a false dichotomy.

I see way too many young/inexperienced tech teams using k8s to build things that could probably be hosted on a couple of AWS instances (if that). The parasitic costs are high.


I see way too many young/inexperienced tech teams STILL using an unmaintainable process of just spinning up an EC2 instance for random crap because there is no deployment strategy at the company.


Yup, k8s is at least standardized in a way that’s somewhat sane.

Before k8s every org I worked for had an absolute mess of tangled infrastructure


Not sure why this is being downvoted.

"We can do it ourselves!" attitude by people who are unskilled is the source of many legacy hell-webs sitting in companies all over the world that are desperately trying to be maintained by their inheritors.


Not responsive to the argument. k8s is maybe a "deployment strategy", but it's certainly not the only one. Or the best one for all circumstances.


"Large scale enterprise" is the key here.

Kubernetes was made by Google. Google is not your startup, it has millions of servers serving billions of users, of course it needs complex systems, and it has thousands of people to maintain them.

In a small company, you probably don't need much of what's in that "need to" list. Rent a server, maybe a second one for redundancy, install your packages, run your app, and if you did things well, you can do quite a lot with a single machine.

But a lot of people think they are Google, and get ready to scale to a level they will never reach, and do it badly.

I think that's where most of the pooh-poohing comes from: the use of overly complicated solutions for your scale.


That's where the Thompson/Unix way wins: KISS, and it still works from small to large scale.


Comparing Kubernetes to type systems is like comparing a shack to a gothic cathedral. Type systems are incredibly stable. They have to be proved both sound and complete via meticulous argumentation. Once proven such, they work and their guarantees exist... no matter what. If you avoid the use of the `unsafe...` functions in languages like Haskell, you can be guaranteed of all the things the type system guarantees for you. In more structured languages like Idris or Coq, there is an absolute guarantee even on termination. This does not break.

Whereas on kubernetes... things break all the time. There is no well-defined semantic model for how the thing works. This is a far cry from something like the calculus of inductive constructions (the basis of Coq), for which there is a well-understood 'spec'. Anyone can implement the CIC in their language if they understand the spec. You cannot say the same for kubernetes.

Kubernetes is a nice bit of engineering. But it does not provide the same guarantees as type systems. In fact, of the four 'complicated' things you mentioned, only one has a well-defined semantic model and mathematically provable guarantees behind it. GraphQL is a particular language (and not one based on any great algebra, unlike SQL), Kubernetes is just a program, and unit tests are just a technique. None of them are abstract entities with proven, unbreakable guarantees.

Really, comparing Kubernetes to something like System FC or the CIC is like comparing Microsoft Word to Stokes' theorem.

The last thing I'll say is that type systems are incredibly easy. There are a few rules to memorize, but they are applied systematically. The same is not true of Kubernetes. Kubernetes breaks constantly. Its abstractions are incredibly leaky. It provides no guarantees other than an 'eventually'. And it is very complicated. There are myriad entities. Myriad operations. Myriad specs, working groups, etc. Type systems are relatively easy. There is a standard format for rules, and some proofs you don't really need to read through if you trust the experts.


Your post reads like a teenager yelling "you don't understand me" at parents who were also teenagers at one point. You really think those are new and unique problems? Your bullet points are like a list of NixOS features. I just did all of that across half a dozen servers and a dozen virtual machines with `services.homelab.enable = true;` before I opened up HN, and it's still deploying while I write this. I'm not surprised that you can't see us lowly peasants from your high horse, but many of us have been doing everything you mentioned, probably far more reliably and reproducibly, for a long time.


> Your post reads like a teenager yelling "you don't understand me" at parents who also were teenagers at one point.

I don’t understand teenagers any more, and I’m barely 30. I don’t think this analogy really works.

I agree with your point though.


You don't have to understand teenagers to understand that their problems are the same that they have always been, except in different settings.


Yep, we used to set up these things with a bunch of different systems using our collection of Ansible playbooks. The playbooks are complex, so as to handle all kinds of edge cases. Furthermore, since they were developed over a long period, the coding conventions are not uniform; it's quite hard to teach new hires how to use and contribute to the playbooks.

We probably replaced tens of thousands of lines of Ansible code with a few thousand lines of K8s code. We found the new code easier to maintain: because K8s is much stricter than Ansible, it's harder to deviate from the norm. Granted, we might be biased because K8s is all new and shiny, but so far we haven't regretted moving to K8s.


OK, true ... but if you do all that yourself, then "they" can never fire you, because no one else will know how the damn thing works. (Just be sure not to document anything!)


Unit tests and "type systems" have very little in common with Kubernetes and GraphQL.


Also, GraphQL is not complex. You can learn the basics in an hour or so.


Tangential to your point, but how did 'unit tests' end up in your list of complicated things? They are conceptually easy to understand, and they are certainly not only for large scale enterprise users. Granted, it takes years to learn how to write nice tests... maybe that is what you mean?


>> "complicated" things like unit tests, type systems, Kubernetes, GraphQL, etc.

Those are not even in the same ballpark in terms of how complicated they are. Unit tests and type systems are not complicated at all. GraphQL not really either. But Kubernetes very much is.


"People like to ..."

Perhaps the reason people question these complicated things is because they are, whether intentionally or not, being marketed to an audience on HN that includes small scale non-enterprise users.

I shall paraphrase others here: A problem does not exist for you simply because it exists for LARGE SCALE ENTERPRISE users.

What I would add to that is that there is nothing particularly noteworthy about a large organisation's IT work simply because it is a large organisation or making billions in ad revenue, unless one is also working in a similar organisation. If some organisations are writing the next "Multics", it really should not be interesting to everyone. A single person who can do all the individual tasks you listed is likely to think critically when presented with "news" of organisations where no single individual can do those things. It's like asking how many Initech Corporation employees it takes to screw in a lightbulb.

I find some of the most interesting work is found in projects started by individual programmers working alone. luajit for example.


The Kubernetes marketing team has definitely gotten to you. The investment in DevRel is really paying off if people are unironically arguing that you _must_ use K8s or you're wasting money and time.

I'd be very curious to find your proposed cost savings after accounting for those teams of engineers tasked with maintaining a company's K8s clusters. There is no free lunch.


Even at smaller scale, dealing with any distro + k8s + helm can be simpler than doing ops on a bare Ubuntu install.

If your dependencies are few and well supported, you may get good results with Nix.


Most people who are afraid of doing all of those things haven't actually done them, and would probably be shocked to find that they aren't actually that complex. Kubernetes actually makes them more complex, but simpler in the aggregate. In other words, most of the apparent value in kubernetes vanishes when you do a real bake-off between kubernetes vs rolling your own infrastructure. There is still SOME benefit, but it's usually exaggerated.

Another problem with kubernetes is the flexibility it gives you. Look at five engineering teams using kubernetes, and you'll see five wildly different setups. Within that maneuverability, in a project that ostensibly makes things "simple", hide the devils that will bite you when you least expect it.


Comparing type systems to kubernetes seems like an incredible category error to me. They have essentially nothing in common except they both have something to do with computers. Also, there are plenty of well-designed and beautiful type systems, but k8s is neither of those.


I do all of the above with my own jars I've written all by myself, and I feel as if it was 10x faster than just scratching the surface of Kubernetes.

Resume-driven development is very real if people can't write their own load balancer.


Recently I had cause to try Kubernetes… it has quite the rep, so I gave myself an hour to see if I could get a simple container job running on it.

I used a GCP Autopilot k8s cluster… and it was a slam dunk. I got it done in 30 minutes. I would highly recommend it to others! And the cost is totally reasonable!

Running a k8s cluster from scratch is def a bigco thing, but if you’re in the cloud then the solutions are awesome. Plus you can always move your workload elsewhere later if necessary.
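
For reference, a "simple container job" really is just a short manifest (a sketch; the name and image here are placeholders):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: hello-job
    spec:
      backoffLimit: 2            # retry a couple of times on failure
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: hello
              image: busybox:1.36
              command: ["sh", "-c", "echo hello from the job"]

Apply it with kubectl apply -f job.yaml, read the output with kubectl logs job/hello-job, and that's roughly the whole exercise.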


This is also my experience, k8s can get harder but for simple stuff it’s pretty dang easy


I’ve got a different experience with Kubernetes, because from what I’ve seen it fails to provide most of the features you described out of the box. Or when it does, there are major issues with how to safely deploy and maintain them over time. I assume you mean all those services can be installed and configured on Kubernetes, once you have Kubernetes itself up and running. But that’s not the same thing.


You have to do those things WITH Kubernetes.

It doesn’t configure itself.

I focused on fixing Kubernetes problems at my last job (usually networking). How is that supporting the business? (Hint: it didn’t, so management forced us off Kubernetes.)

No piece of software is a panacea, and shilling for a project that’s intended to remind people Google exists is not really spending time on anything useful either.


My issue with Kubernetes and DevOps is companies that combine DevOps with development. As a developer, it is already hard enough to keep up with new frameworks. Now these companies want their devs to do DevOps, two vastly different areas of expertise. Not sure how common it is in the industry, but I know enough developers who are now half-assing DevOps.


In my book DevOps is a set of practices that aims at improving the collaboration between Devs and Ops. I know that the term is now often used to label a role (or even a job description), but I think something important is lost in the switch.

According to what I put behind the concept, involving Devs is at the core. You built it, you run it!


> In my book DevOps is a set of practices that aims at improving the collaboration between Devs and Ops.

I think that's what sold me on DevOps.

> You built it, you run it!

This adds too much responsibility for devs and also makes it hard to find good enough developers who can also manage deployments and infrastructure. I have never seen a happy and competent Dev+DevOps person. There is just too much cognitive load for the same person to do these two things right at the same time. The Hello World of Kubernetes deployment is easy in the cloud, but anytime you need to do something a bit more complex, the learning curve increases tremendously.

What seems to work is each team having one or more dedicated DevOps people. Or, as I have seen in large orgs, a dedicated DevOps team managing infrastructure for many other teams.


"Poeple like to ..."

Perhaps the reason people question these complicated things is because they are, whether intentionally or not, being marketed to an audience on HN that includes small scale non-enterprise users.

I shall quote thyself here: A problem does not exist for you simply because it exists for LARGE SCALE ENTERPRISE users.


Best of luck finding an engineer who can understand and do all that stuff today. It's possible, but it's hard. Everyone comes to the table with "Hey, Terraform and Helm/k8s" :D :D


If you don't write that yourself, you still have to understand how someone else wrote it so you can configure and use it properly, and understand how to debug it when it's not working.


> Just write all of that yourself!

At least then you’ll have a shot at actually understanding it. I can’t trust kubernetes when anything goes wrong because the system just isn’t very transparent.


The advantage of Kubernetes is price, plus being a standard makes staff hiring easier.

F5, AutoSys, Splunk, etc. are all much better products for the tasks you mentioned, but they cost $$$ vs Kubernetes.


> Things that are solving a specific problem for LARGE SCALE ENTERPRISE users.

So, just like mainframes?

/s

Seriously, though, I think that was the point. The future needs to be a much less complicated tool.


I like Chef Habitat. It pretty much does all this. It's how I've kept away from k8s this long.


Unit tests, GraphQL and type systems aren't complicated, or at least don't need to be.


Right. But that doesn’t mean that Kubernetes is the right solution for this set of problems.


Not all of those things are actually needed at small to medium size.


We did all that on AWS, and do it now on GCE. Load balancers, instance groups, scaling policies, rolling updates... it's all automatic. If I wasn't on mobile I'd go into more detail. Config is ansible, jinja, blah blah the usual yaml mess.


It's not K8S or nothing. It's K8S or Nomad, which is a much simpler and easier to administrate solution.


This is partially true. If the only Kubernetes feature you care about is container scheduling, then yes, Nomad is simpler. The same could probably be said about Docker Swarm. However, if you want service discovery, load balancing, secret management, etc., you'll probably need Nomad+Vault+Consul+Fabio/similar to get all the basic features. Want easy persistent storage provisioning? Add CSI to the mix.

Configuring these services to work together is not at all trivial either (considering proper security, such as TLS everywhere), and there aren't many solutions available from the community (or managed) that package this in an easy way.


While this is not false, I don't think many of the posts critical of K8s hitting the front page are advertising for Nomad, or focusing on drawbacks that don't apply to Nomad.


[flagged]


What makes you so sure that the downvotes aren't because all you posted was a comedic reference?


I didn't know pooh-pooh was a genuine logical fallacy before now!


thank you lol. Hello World engineers will never stop criticizing K8s.


Yes, but you don't need many or any of those things to launch a Minimum Viable Product.

So Kubernetes can become invaluable once you need to scale, but when you are getting started it will probably only slow you down.


If you want your MVP to be publicly available and your corporation's ops/sec people to be on board with your plans, then Kubernetes is an answer as well. Even if your MVP only needs a single instance and no scaling. Kubernetes provides a common API between developers and operations, so both can do the job they were hired for while being in each other's way as little as possible.


Pre-MVP, development and ops are likely the same people.


With Pre-MVP you mean installing it on your laptop, right? It all really depends on your company's size and the liberties you are given. At a certain size, your company will have dedicated ops and security teams which call all the shots. For a lot of companies, Kubernetes gives developers liberties they would normally only get with a lot of bureaucracy or red tape.


I have a Windows server and use .NET.

I press right-click, Publish, and for prod I have to enter the password.

Collecting logs uses the same mechanism as backups. They go to a cloud provider and are then easy to view in a web app.

I've never needed more than this for after-hours work, other than perhaps upgrading a server instance that was running too many apps.


> solving a specific problem

The problem to me is that Kubernetes is not solving a specific problem, but a whole slew of problems. And some of them it's solving really poorly. For example, you can't really have downtime-free deploys in kubernetes (you set a longish timer after SIGTERM to increase the chance that there's no downtime).

Instead I'd rather solve each problem in a good way. It's not that hard. I'm not implementing it from scratch, but with good tools that exists outside of kubernetes and actually solve a specific problem.


Why can you not have downtime-free deploys? You tell your applications to drain connections and gracefully exit on SIGTERM. https://pkg.go.dev/net/http#Server.Shutdown

If your server is incapable of gracefully exiting, that's not a K8s problem.


> Why can you not have downtime-free deploys? You tell your applications to drain connections and gracefully exit on SIGTERM. https://pkg.go.dev/net/http#Server.Shutdown

> If your server is incapable of gracefully exiting, that's not a K8s problem.

Also whatever load balancer/service mesh you have can be configured for 503 rerouting within DC as necessary too.


> You tell your applications to drain connections and gracefully exit on SIGTERM.

The problem is that k8s will send requests to your application after SIGTERM. So you have to wait some amount of time before shutting down to allow for that.

This was at least the case last time I used k8s, and it seemed like it was due to the distributed architecture, so something that was more than a mere bugfix away.
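
The usual mitigation (a sketch, not a claim that it removes the race entirely; the name and image are placeholders): keep the old pod serving briefly after SIGTERM with a preStop sleep so endpoint removal can propagate, give it a long enough grace period to drain, and gate rollouts on readiness.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                      # hypothetical app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      strategy:
        rollingUpdate:
          maxUnavailable: 0          # never drop below desired capacity
      template:
        metadata:
          labels:
            app: web
        spec:
          terminationGracePeriodSeconds: 45   # SIGKILL only after this
          containers:
            - name: web
              image: registry.example.com/web:1.2.3   # placeholder image
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 8080
              lifecycle:
                preStop:
                  exec:
                    # keep accepting traffic while endpoints catch up
                    command: ["sh", "-c", "sleep 10"]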


K8s has like, probably the most complete support for readiness/no downtime deploys in the whole damn industry so it's surprising to hear that...

https://cloud.google.com/blog/products/containers-kubernetes...


You can actually renew/upgrade your whole cluster with no downtime if you care enough to tackle the annoying bits that cost a few minutes in YOLO mode.


I think we will end up with a "simple" Distributed OS in the same way we have internal combustion engines: they're very hard to build, complicated to repair, moderately easy to maintain, very easy to use.

Here's the things I think we need in order to make a "simple" Distributed OS:

Cutting edge tech. If developers don't want to use it, it dies, period. It needs to be trendy.

Novel interaction of different versions of different software components. The model we use today is 40+ years old and does not scale past a single system. We have to make it easy for different versions of software to interact in any way, without making people think hard about it or use hacks. (There are solutions for this already but nobody uses them; we need a trendy blog post and some new code conventions to make them take off)

Novel network stack. Distributed systems have been twisting themselves into pretzels for decades to get Component A to talk to Component B over a network. You can have upwards of a dozen different components in between, all just dedicated to getting two components to talk to each other. The thing holding this up is the lack of integration between all the layers, and across hops.

Distributed Tracing. You can't troubleshoot a distributed system effectively without it. Lack of debugging tools means the systems won't be used seriously and the effort will die on the vine.

Distributed Computing Health Metrics as a higher level abstraction than "is this host-specific resource running out". Basically this requires a gossip network of health metrics and some fancy math to estimate probabilities of health.

Distributed Shared Memory for Threaded Applications. Yes, I went there. Building distributed systems will continue to be a pain without it. We have to make these systems stupid easy to program and use; if it takes a PhD or a two-pizza team of amateurs to program for it, it's just not gonna take off. (applies to the "Images and Feelings" part of OP)

Versioned Immutable Operating Models. Basically, distributed systems today are not immutable, because various layers of the "stack" that makes them up are not immutable or version-controlled. To reliably operate even a non-distributed system, you need this. It's especially important for SaaS, PaaS, IaaS, etc. We have built whole ecosystems of tools because many parts of a distributed system simply have bad operational models. You can start with building such a model for regular-old software, and each layer of software (and hardware!) around it should also develop such a model. A complete stack with that model will be very determinate, easy to operate, & easy to reason about. I estimate this will make 50% of the current distributed computing ecosystem redundant.

Federation, Encryption, Fine-Grained Access Control by default. We need any component to be able to talk to any component in a secure manner, again without jumping through a lot of hoops.

Distributed Control and Data Plane separation by default. This is both a novel I/O model, and a novel control plane for all components.

Resource Reservation. Software needs to specify the kind and amount of resources it will need before it even runs. This is necessary to prevent the inevitable resource exhaustion churn, ex. when competing pods spin up and die in a loop.

Distributed Networking Safety Conventions. The best practice stuff to prevent network storms on crowded resources. Throttling, backoff, jitter, quotas, etc.

Distributed Scheduler. Simple idea, difficult implementation. A generic scheduler that is smart enough to schedule all kinds of weird things across distributed systems.

Almost all of these things already exist, but that's not the hard part! The hard part is combining them all together in a way people want to use. The only way that's gonna happen is if we start up another research project ala Plan9.


>“Our generation's Multics”

What, because they’re both complex?

Multics was never successful enough to be used much outside of Honeywell.

They’re both considered complex, but there are so many other examples of industry-relevant technologies he could’ve used: products that have actually gained communities and a large enough user base to draw insight from.

I get it, you hate using GooberBoobies or whatever.


Eh. I think people overcomplicate k8s in their heads. Create a bunch of Dockerfiles that let your code run, write a bunch of YAML files describing how you want your containers to interact, get an endpoint: the end.
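
The "get an endpoint" part really is about this much YAML (a sketch; the names and ports are placeholders): a Service gives anything labelled app=myapp a stable in-cluster address, and an Ingress or LoadBalancer goes on top when you want it public.

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp            # matches the pods from your deployment
      ports:
        - port: 80            # stable service port
          targetPort: 8080    # container port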


I mean, people overcomplicate software engineering in their heads. Write a bunch of files of code, write some other files: the end. /s


Why the "/s"? -- sounds right.


... That's also a little bit like saying it's super simple to develop an app using framework X because a "TODO list" type of app could be written in 50 loc.



