I have to disagree.
It doesn't introduce burden and complexity. They're already there whether you use Kubernetes or not.
The difference is that if you build it all by hand, as the author describes, and the project ever does need to scale, you're going to have twice the work to get there.
It's all a question of: do I think my software will succeed?
If it's a hobby project that will never get big, it's not worth the hassle.
If it actually has a chance of succeeding, the small added complexity of Kubernetes will pay dividends extremely quickly when the system needs to scale.
Even with as few as two machines, I'd argue k8s is already adding more value than managing them by hand. People may say otherwise because they're used to doing it manually, but familiarity isn't the point of the discussion.
The author also talks about Ansible, which is another piece of complexity comparable to doing it in k8s. I'd argue you end up with less YAML with k8s than with Ansible for a small project.
The only argument I see for doing anything by hand today is if it's a plaything.
I see what you mean, but I don't really agree. You can introduce that burden and complexity whenever you want. If you spend the start of your project working on this, you'll be prepared for scaling (if you ever need it), but you could have spent that time actually working on the project and finding out whether you'll ever need to scale at all.
I don't know about your projects, but in my case most (all) of them don't really need any kind of scale. Hell, this blog runs on a tiny $5 DO machine and is still happily serving traffic from HN. I do understand not all projects are small blog instances, though :)
I guess in my case I prefer to keep it simple and see how far I get with that setup, rather than spending time building the perfectly scalable project that never serves more than 2 requests per second. If it ever grows, I'll have to do some work to make it scale, sure, but on the other hand that's a good problem to have.
Anyway, I understand this is pretty subjective and depends on how you think about your projects and your requirements, so I do understand there will be people both in agreement and disagreement.
I completely agree it's overkill to run your own cluster. That'd be good for a learning experience, but way too complex to use/maintain otherwise.
However, I read somewhere that you already have experience with Kubernetes, right? That means there's no extra work to learn the technology.
Now let's take your blog as an example. I'm going to guess there's an official Docker image for whatever software you use, and you could create an image plus a deployment + service + ingress for it in less than an hour (pretty much every example out there is about how to set up a blog, heh).
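A minimal sketch of what that could look like, assuming the blog runs from an off-the-shelf image (Ghost's official image, which listens on 2368, is used here purely as an example; the hostname is made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: blog
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: blog
      template:
        metadata:
          labels:
            app: blog
        spec:
          containers:
            - name: blog
              image: ghost:latest        # example official image; pin a real version in practice
              ports:
                - containerPort: 2368    # Ghost's default port
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: blog
    spec:
      selector:
        app: blog
      ports:
        - port: 80
          targetPort: 2368
    ---
    apiVersion: extensions/v1beta1       # Ingress was still beta in this era; newer clusters use networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: blog
    spec:
      rules:
        - host: blog.example.com         # made-up hostname
          http:
            paths:
              - path: /
                backend:
                  serviceName: blog
                  servicePort: 80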
If you have to do all of that manually over SSH, I'd argue it takes pretty much the same time and the complexity is the same. You're simply trading one set of tools and concepts for another, except you won't get bitten by manual gotchas.
Which is less complex? Which one is beta and thus liable to change over time (it happens a lot)? Which one requires major (and breaking) infrastructure updates every 3 months?
> I'd argue you end up with less YAML with k8s than with Ansible for a small project.
Since you typically need one YAML document per K8s resource, and can describe multiple independent and idempotent actions in one Ansible document, I think this is easily demonstrable as false for small projects (and likely for big projects as well).
The first one is a complete configuration you could `kubectl apply` to a cluster, and it sends traffic to a backend service that may be running across multiple instances on multiple machines.
The second is a configuration fragment that is useless by itself and would send all traffic to a single instance running on localhost.
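For readers who don't have the original snippets in front of them: the first is a full set of resources like the Deployment/Service/Ingress sketched earlier in the thread, while the second kind of fragment is roughly an nginx location block along these lines (address and port made up):

    location / {
        proxy_pass http://127.0.0.1:8080;   # everything goes to one local instance
    }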
Ok, great... I have a bunch of simple projects like that, one web instance running on a single host.
How do I safely upgrade it without downtime? Ensuring that the new version starts up properly and can receive traffic before sending it requests and stopping the old one?
With k8s: `kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1` (or just use `kubectl apply -f`)
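To watch the rollout land, or back it out, the standard follow-ups are:

    kubectl rollout status deployment/my-deployment   # blocks until the new pods are ready (or the rollout fails)
    kubectl rollout undo deployment/my-deployment     # revert to the previous revision if something goes wrong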
That moves the goalposts a bit. We've gone from a simple service to a fleet of highly available services on multiple hosts with zero downtime requirements.
Still: a single service on a single machine, how do I safely upgrade it?
A rolling deployment in k8s works the same way on a single host with minikube as it does on a 1000-node cluster, but I'm still just asking about the single-host case.
> Still: a single service on a single machine, how do I safely upgrade it?
Depending on the init script your OS package provides (here Ubuntu), it's as simple as `service nginx reload` (or `service nginx upgrade` if upgrading nginx itself).
Or skip the init script entirely with `/usr/sbin/nginx -s reload`.
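In practice you'd usually validate the config first, something like:

    /usr/sbin/nginx -t && /usr/sbin/nginx -s reload   # only reload if the new config parses cleanly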
You could do "blue-green" deployments with port numbers... service rev A is on port 1001, service rev B is on port 1002... deploy new rev... change nginx config to point to 1002... roll back, you repoint to 1001...
So I need to write some code to keep track of which of A or B is running, then something else to template out an nginx configuration to switch between the two. Then I need to figure out how to upgrade the inactive one, test it, flip the port over in nginx, and flip it back if it breaks.
At minimum it requires good-enough health checks so that k8s can detect if the new config doesn't work and automatically roll back; otherwise you're looking at a "no downtime except when there's a mistake" situation.
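In k8s terms that usually means a readinessProbe on the container, roughly like this (the path and port are hypothetical; the app has to expose some health endpoint for this to mean anything):

    readinessProbe:
      httpGet:
        path: /healthz        # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10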
And to really check that the health check and everything else in your .yaml file actually works, you will probably have to spin up another instance just so you can verify your config, unless you like debugging broken configs in production. Well, of course you can always fix your mistake and `kubectl replace -f`, but that kind of goes against the requirement of "no downtime".
I grant that k8s makes it easier to spin up a new instance for testing.
I'm pretty sure I could write a shell script to do that, with a bunch of grep/sed hackery, in under an hour or two. For a single-server personal project, this is probably the simpler approach for me.
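A rough sketch of that kind of script, assuming two systemd units (myapp-a on 1001, myapp-b on 1002), a marker file, and a templated nginx upstream, all of which are made up for illustration:

    #!/bin/sh
    set -e

    ACTIVE=$(cat /etc/myapp/active)                     # "a" or "b": which rev has traffic now
    if [ "$ACTIVE" = "a" ]; then IDLE=b; PORT=1002; else IDLE=a; PORT=1001; fi

    systemctl restart "myapp-$IDLE"                     # bring up the new rev on the idle slot
    sleep 2
    curl -fsS "http://127.0.0.1:$PORT/healthz"          # crude health check; aborts the script on failure

    # Point nginx at the idle slot and reload it.
    sed "s/__PORT__/$PORT/" /etc/nginx/upstream.conf.tmpl > /etc/nginx/conf.d/myapp.conf
    nginx -t && nginx -s reload

    echo "$IDLE" > /etc/myapp/active                    # the idle slot is now the active one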
The new version fails to restart properly. Your site is now down. You upgraded the package, so the old version is no longer on disk and you can't easily start it again.
Let's not pretend that blindly upgrading a package is the same thing as safely rolling out the new version of a service.
I think this is proving my point. They're basically the same, but one is completely dynamic and the other will have to be changed as soon as anything changes.
We've been using Kubernetes in production for almost two years now and I have yet to face a big API change that breaks everything. The core APIs are stable. There's a lot of new stuff added, but nothing that breaks backwards compatibility.
As you just said, if it's a simple project, you can upgrade the infrastructure by clicking "upgrade" on GKE. We've only ever hit problems when upgrading while using bleeding-edge features, plus the occasional bug (one, between Kubernetes 1.1 and 1.10, for a large Rails app).
Regarding the one YAML document per resource: I mean... that's spaghetti Ansible. If you want a proper Ansible setup, you will have separate tasks and roles. And if we're going down the route of just having fewer files, you can put all the Kubernetes resources in a single YAML file. I would definitely not recommend that, though.
While Kubernetes is a lot more verbose, it is light years better than Ansible's weird jinja2 syntax. Even someone who has never heard of Kubernetes can read that Ingress resource and guess what it does. And if we're being really picky, the proxy_pass should point at some "test" upstream, which would already make the nginx config more obscure.
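For comparison, the Ansible/jinja2 side of that tends to look like a templated upstream block along these lines (the variable and group names are made up):

    upstream {{ app_name }} {
    {% for host in groups['web'] %}
      server {{ hostvars[host]['ansible_host'] }}:{{ app_port }};
    {% endfor %}
    }

    server {
      listen 80;
      location / {
        proxy_pass http://{{ app_name }};
      }
    }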
I feel like people just hate Kubernetes to avoid change. These arguments are never about Kubernetes' strengths or faults, but about why "I think my way is better". You can always do things in a million different ways, and they all have trade-offs. The fact is: the ROI on Kubernetes is a lot higher than on any kind of manual setup.
I'll repeat what I said in another comment: the only reason to do anything by hand today is if it's a plaything.
Are you using GKE for your production environment? Kubernetes has lots of great features and definitely imposes some good operational patterns, but it's no panacea, and for those of us who can't put our applications in the cloud (or don't want to), Kubernetes can be a complex beast to integrate.
Are you running databases in Kubernetes? Do you have existing on-prem infrastructure that you want to utilize (or are required to, because of sunk costs) and integrate with Kubernetes?
Is your company a startup with no legacy application that you first need to figure out how to containerize before you can even think about getting it into Kubernetes? We've seen benefits from running it, for sure, but I'm honestly not sure the amount of work it took (and still takes) to make it work for us was worth the ROI.
Sometimes I feel like Kubernetes is a play by Google to get everyone using their methodology, while only they can provide the seamless experience of GKE.
I think the key element here is that you are using GKE. Managed cloud Kubernetes and self-hosted on-prem Kubernetes are two different beasts. Yes, it's easy when you don't actually have to run the cluster yourself.
Ansible has its warts, but it is great for managing and configuring individual servers.
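For example, a single short playbook covers several idempotent steps on a plain server (the package, template, and handler names here are just illustrative):

    - hosts: web
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Deploy the site config
          template:
            src: mysite.conf.j2
            dest: /etc/nginx/conf.d/mysite.conf
          notify: reload nginx
      handlers:
        - name: reload nginx
          service:
            name: nginx
            state: reloaded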
Yes, I agree. But in the context of this article we're talking about small projects, so I wouldn't expect the need for an on-prem setup.
In fact, I've used Kubespray[1] to set up a cluster with Ansible before, with mild success. Nothing production-ready, but it's actually a good tool for that job. At the end of the day you can't run Kubernetes on Kubernetes :D
I guess it depends on your definition of "small projects". I agree with the article that if you are interested in getting something out there for people to use and see what kind of interest you get, then adding Kubernetes to the mix doesn't really get you there faster. If anything, I think it would slow you down, unless we are talking about a very trivial app.
I was responding more to the comments I had been reading, not the premise of the article.
That's true. Stateless apps are a lot easier.
I myself have had to put a stateful service in K8s, even before StatefulSets existed, and it wasn't a walk in the park.
However doing what you described is hard... anywhere.
Even if you do it all by hand it's going to be hard and most likely brittle. It might take a bit more time/effort to do it on Kubernetes, but I'd say you end up with a better solution that can actually sustain change.
As stated before by many, Kubernetes is not a magic wand. But it does force you to build things in a way that can sustain change without a big overhead.
We all know that different tools have different purposes, and I'm not claiming it's perfect by any means. All I'm trying to say is that the idea a lot of people have, that Kubernetes is 10x more complex than doing X by hand, is an illusion.