
I don't really see how nomad is operationally simpler than k8s. To run a service behind something like traefik on k8s I would:

  bootstrap a CA for cluster tls
  run etcd
  run k8s (apiserver, etc., kubelets)
  run your application
  run the traefik ingress controller
To run a service behind something like traefik on nomad you would:

  bootstrap a CA for cluster tls
  run consul
  run nomad (servers, clients)
  run your application
  run traefik somehow
I think the only thing that is really more complicated in k8s is the networking, but that's only because it has more features: cluster DNS is configured automatically, every pod gets its own IP address (which means every service can bind to port 80 without conflicts), and there's policy enforcement.
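
For instance, the per-pod IPs and automatic cluster DNS mean something like this just works out of the box (a sketch using stock kubectl; the names are illustrative):

  # every pod gets its own IP, so the container can listen on :80
  # without host port conflicts
  kubectl create deployment web --image=nginx
  kubectl expose deployment web --port=80
  # cluster DNS resolves the service name automatically
  kubectl run -it --rm --restart=Never dns-test --image=busybox \
    -- nslookup web.default.svc.cluster.local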



We are talking about different things. I'm talking about keeping Kubernetes/Nomad alive and breathing and happy. The ops part of devops. You are talking about running stuff under them. I agree they are similar in running applications under them.

Operationally simple:

* 1 binary, for both servers and agents.
* 1 config file each (2 config files total across consul & nomad).
* Upgrades are simple: bring down a node, replace the binary, start it back up.
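
In practice that upgrade dance looks roughly like this (a sketch; the node ID, paths, and service manager are illustrative, and the drain flags assume a reasonably recent Nomad CLI):

  # migrate allocations off the node before touching it
  nomad node drain -enable -yes <node-id>
  systemctl stop nomad
  cp /tmp/nomad-new /usr/local/bin/nomad
  systemctl start nomad
  # let the node accept work again
  nomad node drain -disable <node-id>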

Docs are straightforward, and it's easy to understand how Nomad works; it's not complicated. You can probably get your head around the server/agent split, nodes, and everything else in an hour (for both nomad AND consul). k8s is a very complex beast, with many, many binaries and configs. There are helpers that get you set up, but they all have their own pros and cons, and sharp edges. Chances are you would not want to use a helper in a production setup, which means you have to understand all those moving parts.
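
To make the "one binary, one config file" point concrete, a minimal sketch of the two roles (values are illustrative; real setups need more, TLS included):

  # server.hcl (server role)
  data_dir = "/var/lib/nomad"
  server {
    enabled          = true
    bootstrap_expect = 3
  }

  # client.hcl (agent role)
  data_dir = "/var/lib/nomad"
  client {
    enabled = true
  }

  # then run the same binary with either file:
  #   nomad agent -config=server.hcl   (or client.hcl)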

Keeping a k8s cluster running nicely and upgraded consistently requires many full-time admins. Keeping a nomad cluster running requires very little work (I do it part time; maybe an hour a month in a busy month).

Arguably for dev/testing under consul & nomad, you would do this:

  consul agent -dev &
  nomad agent -dev &
  nomad run myapp.nomad
  nomad run traefik.nomad
Adding vault to the mix for secrets:

  vault server -dev &
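
From there the dev-mode workflow is just (a sketch; the secret path and values are made up, and `vault kv` assumes a recent-ish Vault CLI):

  export VAULT_ADDR=http://127.0.0.1:8200
  vault kv put secret/myapp db_password=hunter2
  vault kv get secret/myapp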

For production use it's obviously more involved, but most of that is just around understanding your operational requirements and getting them working with nomad, not really about nomad itself.


No, I'm talking about operations too. If your consul and nomad deployments are only one binary and one config file, then you're not using TLS. Half the effort of setting k8s up is bootstrapping the CA and certs for etcd and the k8s components.
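
To give a feel for what that bootstrapping involves, here's the shape of it with plain openssl, per component (a sketch; names and lifetimes are illustrative):

  # cluster CA
  openssl genrsa -out ca.key 2048
  openssl req -x509 -new -nodes -key ca.key -subj "/CN=cluster-ca" -days 3650 -out ca.crt
  # one key/cert pair per component (etcd shown; repeat for apiserver, kubelets, ...)
  openssl genrsa -out etcd.key 2048
  openssl req -new -key etcd.key -subj "/CN=etcd" -out etcd.csr
  openssl x509 -req -in etcd.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out etcd.crt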

> k8s is a very complex beast, with many, many binaries and configs

Because it's much more in the unix toolset philosophy: let 1 tool do 1 thing well. Is that a bad thing now?

hyperkube does put all the server-side components in a single binary; there's still a bit of configuration, though. A lot of the options are repetitive. I bet one could wrap hyperkube with a single config file and some defaults, and the end result would look a lot like nomad.
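
For reference, that looks roughly like this (a sketch; the subcommand names and required flags varied across releases, so treat these as illustrative):

  hyperkube apiserver --etcd-servers=https://127.0.0.1:2379 --service-cluster-ip-range=10.0.0.0/24
  hyperkube controller-manager --master=http://127.0.0.1:8080
  hyperkube scheduler --master=http://127.0.0.1:8080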


You keep going on about setting k8s up, and not about maintenance. How much time in a week do you take to babysit your k8s cluster? Do you have an HA setup?

OK, TLS takes 3 files: 2 for the key and cert, and 1 for the config. If you get your TLS certs out of the vault PKI backend, it's very, very simple (https://www.vaultproject.io/docs/secrets/pki/index.html); the linked page covers the complete steps.
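
The flow from that page boils down to something like this (a sketch with a recent-ish Vault CLI; domain names and TTLs are illustrative):

  vault secrets enable pki
  vault write pki/root/generate/internal common_name=example.com ttl=87600h
  vault write pki/roles/nomad allowed_domains=example.com allow_subdomains=true max_ttl=72h
  # returns a fresh key/cert pair signed by the internal CA
  vault write pki/issue/nomad common_name=server.example.com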

Again, I keep talking about maintaining Nomad/k8s for years. I've been running nomad in production for a few years now; I've had no downtime from nomad, and I spend about an hour doing upgrades every once in a while. I don't worry about nomad; it's just there and works. I run 3 nomad servers per data center for an HA setup. k8s doesn't even test their HA setup in development (source: https://kubernetes.io/docs/admin/high-availability/building/). There is no way it works out well in real life if they don't even test it yet.
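
If you want to sanity-check that HA setup, Nomad makes it easy to inspect (assuming a reasonably recent CLI):

  # shows the 3 servers and which one is the raft leader
  nomad server members
  nomad operator raft list-peers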

Nobody I know that runs k8s pretends it's easy to keep running for years. Most places that run k8s have dedicated engineers to babysit k8s. I babysit our nomad, and lots of other infrastructure, and I do development of applications as well.


> How much time in a week do you take to babysit your k8s cluster? Do you have an HA setup?

I don't have a k8s cluster... so zero :-)

I don't have a nomad cluster either, because every time I look at it and start planning out what I would need to do to bootstrap consul+nomad and secure it, it starts to look more like a k8s install.

> There is no way it works out well in real life,

Except that every cluster on GKE, or created using kops, kubespray, or even Kubernetes the Hard Way, is HA, so it's not like no one is running an HA cluster. I think from the k8s point of view there isn't much to test, as etcd is doing all the work.
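
For example, with kops an HA control plane is more or less one flag (a sketch; the zones and cluster name are illustrative, and exact flag names depend on the kops release):

  # one master per listed zone gives a 3-node control plane
  kops create cluster --master-zones us-east-1a,us-east-1b,us-east-1c \
    --zones us-east-1a,us-east-1b,us-east-1c k8s.example.com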


Setup and install is the least of your issues when running something like nomad/k8s in production. The part that matters more is what it's like to babysit it and keep it running.

I agree people are running k8s HA in production, but there is a reason those people are dedicated k8s engineers: it's a giant pain in the ass to keep it running. Hence what I mean when I say it's "operationally complex".

Most people using GKE don't actually operate the k8s cluster, they let GKE run it for them. They just use it.

Using k8s and using nomad are similar from a developer perspective. Operationally they are night and day different.

Anyways, I suggest you go play with both systems, and try them out, put some non-important stuff in production under both of them.



