
If you're familiar with Linux (which should be considered required reading if you're learning about containers), most of this stuff is handled perfectly fine by the operating system. Sure, you could write it all in K8s and just let the layers of abstraction pile up, but most people will be served perfectly well by the software that already runs on their box.



I work at a small company; we don't have a sysadmin, so we mostly want to use managed services. Let's say we want a simple load-balanced setup with 2 nodes. Our options are:

- Run our own load-balancing machine and manage it (as noted, we don't want this)

- Use AWS/GCP/Azure, set up the Load Balancer (and the rest of the project) manually or with Terraform/CloudFormation/whatever scripts

- Use AWS/GCP/Azure and Kubernetes, define the Load Balancer in YAML, and let K8s and the platform handle all the boring stuff

This is the simplest possible setup, and even here I will always go for Kubernetes: it's the fastest and simplest option, as well as the most easily maintainable. I can also easily slap on new services, upgrade stuff, etc. Being able to define the whole architecture in a declarative way, without having to make the changes manually, is a huge time-saver. Especially in our case, where we have more projects than developers, switching context from one project to another is much easier. Not to mention that I can just start a development environment with all the needed services using the same (or very similar) manifests, creating a near-prod environment.
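
Just to give a sense of scale, here's a rough sketch of what that looks like as manifests (the name "web", the image, and the ports are placeholders, not from a real project; the cloud platform provisions the actual load balancer behind the Service):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2                    # the "2 nodes"
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example/web:1.0   # placeholder image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer             # the platform provisions the LB and wires it up
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 8080

Apply roughly the same two objects (different replica count, maybe a different image tag) to a dev cluster and you get the near-prod environment I mentioned.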


I think the argument there is that it's only simple because the complexity of k8s has been taken away from you. I don't think anybody has claimed deploying to a k8s cluster is overly complex; running it well and handling upgrades, those are huge time sinks that need the requisite expertise.

Much like Multics was "simple" for the users, but not for the sysadmins.


That's the point though, right? A good sysadmin (or a couple of them) can run a k8s cluster that can be leveraged by dozens (even hundreds) of dev teams. Instead of every team having to re-invent the wheel, you get a common platform and set of deployment patterns that can fit almost any use case. Of course, if you don't have multiple different teams (or every team is running their own k8s cluster), then that is definitely a problem. But just because a handful of teams make an ill-advised investment in k8s when they could easily get by with something much simpler doesn't mean that k8s is "too complex." Too complex for that use case, sure, but for the vast majority of k8s deployments I would wager that it does add a lot of value and subsume a lot of the inherent complexity of running distributed, fault-tolerant, multi-tenant applications.


Taking the complexity of k8s away was just gonna happen. As someone who built everything from scratch at a previous company, I chose EKS at a start-up because it meant that the one-man systems guy didn't have to worry about building and hosting every single cog required for package repos, OS deployment, configuration management, Consul+Vault (at minimum), and too many other things that k8s does for you. Also, you can send someone on a CKA course and they'll know how your shit works. Try doing that with the hodge-podge system you built.


Training is a great point, and I think that's why major clouds are going to be stickiest (in terms of using them vs migrating to new things).

The central problem of most companies has been finding / affording people who can maintain their stuff.

If Amazon / MS / Google can make it simple enough that skilled people can be quickly cross trained, and then have enough architecture knowledge to be productive, that's a huge win over "require everyone to spend 6 months muddling through and learning our stack we built ourselves and partially documented."


Set up servers at Linode and use the Linode NodeBalancer?

> Being able to define the whole architecture in a declarative way

With k8s (and other 'cloud' stuff) you seem to need to know a whole lot of the tool's concepts up front, vs. a "progressive enhancement" way of doing one thing, getting it working, doing something else, getting it working, etc.


You run a small company, I'd argue that you aren't "the average user". For you, Kubernetes sounds like it integrates pretty well into your environment and covers your blind spots: that's good! That being said, I'm not going to use Kubernetes or even teach other people how to use it. It's certainly not a one-size-fits-all tool, which worries me since it's (incorrectly) marketed as the "sysadmin panacea".


I have been professionally working in the infrastructure space for a decade and in an amateur fashion running Linux servers and services for another decade before that and I am pretty certain that I would screw this up in a threat-to-production way at least once or twice along the way and possibly hit a failure-to-launch on the product itself. I would then have to wrestle with the cognitive load of All That Stuff and by the way? The failure case, from a security perspective, of a moment's inattention has unbounded consequences. (The failure case from a scaling perspective is less so! But still bad.)

And I mean, I don't even like k8s. I typically go for the AWS suite of stuff when building out systems infrastructure. But this assertion is bonkers.


Why? You still need to manage all that for your server even if you are running kubernetes on top of it.

I can’t imagine anyone with root access to a kubernetes server is any less dangerous than root on a simple webserver.


No, I don't, because I can yawn dramatically and go to any cloud provider and get a k8s cluster with a generally consistent (or at worst morally equivalent) set of standard building-block cloud tools already set up. It won't cost me much, it will work mostly predictably out of the box, and there's support right there for when it fails. Like, that's what k8s is there for. I use AWS pretty exclusively so this doesn't appeal to me, but what does is doing the moral equivalent and having ECS just...there. (Or even better, Fargate, if I can't solve the bin-packing problem by myself.)

I haven't "managed a server" outside of my house for a few years now, and I quite like it. I theoretically have had root to ECS clusters, but I've never logged into them. Why would I? Amazon is going to be better at it than I am. Not only do I have more important things to be doing, but I'll do a worse job of it than they will. And to be clear: I consider myself pretty kinda really good at this stuff. But not good enough to make it a competitive advantage unless it's what I want to sell, and I sure as heck don't.

And the article's point, that whatever comes next will probably be better and might even be The Real Thing--I think that is wise.


> Why would I? Amazon is going to be better at it than I am.

Until it's not. Then suddenly you're trying to decipher cryptic cloud provider error messages in a service that made a false promise that its abstraction was so air-tight you'd never have to learn the underlying technology at all.

Then suddenly, you do need to know the underlying implementation, and quickly.


Yup! I used to feel exactly as you do, and I still make it my business to understand what is below the abstraction, because some old habits die hard (and because I just like this stuff, tbh). But I started working at places with the kind of conservatism and pre-testing that makes that much less critical. Those organizations also pay a great deal of money for the kind of support that makes that knowledge a matter of curiosity and personal fulfillment rather than save-the-worlding.

I haven't needed to do something like that in production, as opposed to pre-production deployment suss-outs, since (and I went and checked to be sure) 2017. Though, to be fair, I've been working in devrel since last August, so call it four years of rooting around in the trenches, not five. ;)


> most of this stuff is handled perfectly fine by the operating system

No, you have to write or adopt tools for each of these things. They don't just magically happen.

Then you have to maintain, secure, and integrate them.

k8s solves a broad class of problems in an elegant way. Since other people have adopted it, it gets patched and improved. And you can easily hire for the skillset.


Okay, so let's add a couple of things.

How do you do failover?

Sharing servers to save on costs?

Orchestrate CI/CD pipelines, preferably on the fly?

Infrastructure as Code?

Eventually you reach a point where the abstraction wins. Most people will say "but AWS...", but the reality is that k8s is quicker, easier to use, and runs on multiple providers, so personally I think it's going to keep doing well.


Not the OP here.

We aren't really comparing apples to apples in all the cases that have been discussed in the larger thread. Some of the comparisons seem to be between "self-hosted LAMP stack" vs. "kubernetes as a service on AWS". These are vastly different things. We should compare "self-hosted LAMP stack" vs. "LAMP stack hosted in the cloud", for example, or "self-hosted kubernetes" vs. "self-administered kubernetes on EC2" vs. "kubernetes as a service on AWS". All of these will have vastly different characteristics, pros, and cons depending on your company's and teams' realities.

Failover is something that a load balancer does automatically for you; your services just need to provide a health check. Now, where you actually run those nodes is a different question. These might be slow-to-procure servers hosted at your provider, or they might be manually set up or terraformed EC2 instances. Dunno what everyone uses as load balancers nowadays, but a previous place, for example, had F5s and we had our own vSphere farm.
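
In the kubernetes flavor of this, "provide a health check" is just a probe on the container spec, and pods that fail it get pulled out of the rotation; a minimal sketch (the image, path, and port are made up):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web                     # a Service selecting this label does the actual routing
    spec:
      containers:
      - name: web
        image: example/web:1.0       # placeholder image
        readinessProbe:              # failing this removes the pod from the Service endpoints
          httpGet:
            path: /healthz           # hypothetical health endpoint
            port: 8080
          periodSeconds: 5
          failureThreshold: 3
        livenessProbe:               # failing this repeatedly gets the container restarted
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10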

Sharing servers: I don't think this is a good idea at all, except if you mean internally, and even then there are good and bad ways to do it (see above on the vSphere farm: if one project caused another to starve performance-wise because of what was running on the same physical machines, it was easy to resolve; if this were virtual servers at a traditional hoster, good luck). AWS is probably somewhere in between with EC2 and especially their storage.

Dedicated CI/CD pipelines: This is an awesome one to have and can cost an arm and a leg. I enjoy this very much at my current place w/ EC2 CI agents that scale with the number of devs currently working and dedicated "complete copy of Prod" dev environments (basically a kubernetes namespace for each dev/QA person/e2e test run to play with as they like).
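
The "namespace for each dev/QA person" bit is roughly this much YAML per environment, assuming the prod manifests can just be re-applied into it (names and quota numbers here are made up):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: dev-alice                # one per dev / QA person / e2e run
      labels:
        purpose: dev-environment
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: dev-quota
      namespace: dev-alice
    spec:
      hard:                          # keep a throwaway copy of prod from starving the cluster
        requests.cpu: "4"
        requests.memory: 8Gi

Tearing the environment down is just deleting the namespace, which takes everything in it along.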

Infra as code: Does not require kubernetes at all, but can be implemented with kubernetes. If you already used docker to run stuff anyway, for example, and you can "abstract away" the kubernetes complexities to your SRE team and/or AWS, go ahead and use kubernetes. But be aware that if nobody at your place actually knows kubernetes because you just relied on the hosted version of it, you're at the whim of their support people when something blows up in Production. You may not be big enough to have your own SRE team to take care of this, but then you might also not benefit enough from kubernetes to justify its complexity, and a simpler arrangement could have been easier for the people you do have to actually understand.


I think you've missed the point I was making.

Essentially if you work back from the desired state of having IaC, CI/CD, test environments per MR, you likely see something like k8s as a framework that helps you achieve that.

Of course, if you start from "I just need a LAMP stack" you might have a very different conclusion. But when you reach the same endgame (actually, I need an environment for every MR), you've probably incrementally built something more complex and bespoke.

This would explain why there are dozens of us who are quite happy with the product. The only real question is: do you already know it, and do you find it much harder to ship a deployment to a managed k8s cluster vs. systemd unit files?

If not, it might be an abstraction worth having. If you don't already know it, though, then you might have better things to be doing with your time.


This really depends on how many boxes you have.



