Yes, true enough. Arguably, network policies are outside of Nomad's scope. It's a resource scheduler. Nomad doesn't turn up interfaces for you, or do routing of IP traffic.
Nomad is not a one-stop shop like k8s tries to be; it does resource scheduling in a nice declarative manner, and that's about it.
It's much more in the unix toolset philosophy: let 1 tool do 1 thing well, and make integration as painless as possible.
k8s is much more in-line with the systemd way of thinking, it owns all the things and can be the only shiny in the room.
I'm not sure I'd agree that turning up k8s is as easy as turning up consul, nomad and vault. K8s in my experience tends to require lots of babysitting. Consul can require some babysitting, but vault and nomad require basically no babysitting, except the occasional, mostly painless upgrade.
For me it's really about operational simplicity. I have to keep this stuff running tomorrow and next year. Shiny, super-fast-moving, be-all-and-do-all stuff like k8s, OpenStack, etc. tends to be a giant pain that requires loads of babysitting (most notably at upgrade time), as the edge cases tend to hurt a lot when you hit them.
I don't really see how nomad is operationally simpler than k8s. To run a service behind something like traefik on k8s I would (rough sketch after the list):
* bootstrap a CA for cluster tls
* run etcd
* run k8s (apiserver, etc, kubelets)
* run your application
* run the traefik ingress controller
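Roughly, and assuming kubeadm for the bootstrap (it handles the CA, etcd, and the control-plane components for you; the manifest names here are hypothetical):

    kubeadm init                                 # control-plane node: CA, etcd, apiserver, etc.
    # on each worker, with the token/hash that kubeadm init printed:
    kubeadm join 10.0.0.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    kubectl apply -f traefik-ingress.yaml        # hypothetical ingress controller manifests
    kubectl apply -f myapp.yaml                  # hypothetical application manifest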
To run a service behind something like traefik on nomad you would (again, sketch below):
* bootstrap a CA for cluster tls
* run consul
* run nomad (servers, clients)
* run your application
* run traefik somehow
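Presumably "run traefik somehow" would be a Nomad job along these lines; this is a guess, not a tested config, and the image tag and consul flags are assumptions:

    # traefik.nomad
    job "traefik" {
      datacenters = ["dc1"]
      group "lb" {
        task "traefik" {
          driver = "docker"
          config {
            image        = "traefik:1.7"
            network_mode = "host"
            # watch consul's service catalog for backends
            args = ["--consulcatalog", "--consulcatalog.endpoint=127.0.0.1:8500"]
          }
          resources {
            network {
              port "http" { static = 80 }
            }
          }
        }
      }
    }

Then nomad run traefik.nomad.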
I think the only thing that is really more complicated in k8s is the networking stuff, but that's only because it has more features: cluster dns is configured automatically, every pod gets its own ip address (which means every service can bind to port 80 without conflicts), and there's policy enforcement.
We are talking about different things. I'm talking about keeping Kubernetes/Nomad alive and breathing and happy. The ops part of devops. You are talking about running stuff under them. I agree they are similar in running applications under them.
Operationally simple:
* 1 binary, for both servers and agents.
* 1 config file (2 config files if you run both consul & nomad).
* Upgrades are simple: bring down a node, replace the binary, start it back up. (A sketch of both is below.)
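A minimal sketch of that one config file, assuming a 3-server HA setup run under systemd; the paths and datacenter name are illustrative:

    # /etc/nomad.d/nomad.hcl
    datacenter = "dc1"
    data_dir   = "/var/lib/nomad"
    server {
      enabled          = true
      bootstrap_expect = 3   # three servers per datacenter for HA
    }

    # start it:
    nomad agent -config=/etc/nomad.d/nomad.hcl

    # upgrade, one node at a time:
    systemctl stop nomad
    cp /tmp/nomad-new /usr/local/bin/nomad   # hypothetical new binary
    systemctl start nomad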
The docs are straightforward, and it's easy to understand how Nomad works; it's not complicated. You can probably get your head around the server/agent split, nodes, and everything else in an hour (for both nomad AND consul). k8s is a very complex beast, with many, many binaries and configs. There are helpers that get you set up, but they all have their own pros and cons, and sharp edges. Chances are you would not want to use a helper in a production setup, which means you have to understand all those moving parts.
Keeping a k8s cluster running nicely and upgraded consistently requires many full-time admins. Keeping a nomad cluster running requires very little work (I do it part time, maybe an hour a month on a busy month).
Arguably for dev/testing under consul & nomad, you would do this:
    consul agent -dev &
    nomad agent -dev &
    nomad run myapp.nomad
    nomad run traefik.nomad

Adding vault to the mix for secrets:

    vault server -dev &
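For reference, a hypothetical myapp.nomad can be as small as this (the image name and the traefik tag are placeholders):

    # myapp.nomad
    job "myapp" {
      datacenters = ["dc1"]
      group "web" {
        task "myapp" {
          driver = "docker"
          config {
            image = "myorg/myapp:latest"   # hypothetical image
          }
          service {
            name = "myapp"
            port = "http"
            tags = ["traefik.enable=true"]   # so traefik finds it in consul
          }
          resources {
            network {
              port "http" {}
            }
          }
        }
      }
    }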
For production use it's obviously more involved, but most of that is just around understanding your operational requirements and getting them working with nomad, not really about nomad itself.
No, I'm talking about operations too. If your consul and nomad deployments are only one binary and one config file, then you're not using TLS. Half the effort of setting k8s up is bootstrapping the CA and certs for etcd and the k8s components.
> k8s is a very complex beast, with many, many binaries and configs
Because it's much more in the unix toolset philosophy: let 1 tool do 1 thing well. Is that a bad thing now?
hyperkube does put all the server-side components in a single binary, though there's still a bit of configuration. A lot of the options are repetitive; I bet one could wrap hyperkube with a single config file and some defaults, and the end result would look a lot like nomad.
You keep going on about setting k8s up, and not about maintenance. How much time in a week do you take to babysit your k8s cluster? Do you have an HA setup?
OK, TLS takes 3 files: 2 for the key and cert, and 1 for the config. If you get your TLS certs out of the vault PKI backend, it's very, very simple (https://www.vaultproject.io/docs/secrets/pki/index.html); the linked page covers the complete steps.
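A condensed sketch of those steps; the mount path, role name, and domain here are assumptions, so follow the linked docs for the real flow:

    vault secrets enable pki
    vault write pki/root/generate/internal common_name=example.com ttl=87600h
    vault write pki/roles/nomad allowed_domains=example.com allow_subdomains=true max_ttl=72h
    # one call returns the cert, private key, and CA chain:
    vault write pki/issue/nomad common_name=server.dc1.example.com

Point nomad's tls stanza at the three resulting files and you're done.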
Again, I keep talking about maintaining Nomad/k8s for years. I've been running nomad in production for a few years now, I've had no downtime from nomad, and I spend about an hour doing upgrades every once in a while. I don't worry about nomad; it's just there and works. I run 3 nomad servers per data center for an HA setup. k8s doesn't even test its HA setup in development (source: https://kubernetes.io/docs/admin/high-availability/building/). There is no way it works out well in real life, if they don't even test it yet.
Nobody I know that runs k8s pretends it's easy to keep running for years. Most places that run k8s have dedicated engineers to babysit k8s. I babysit our nomad, and lots of other infrastructure, and I do development of applications as well.
> How much time in a week do you take to babysit your k8s cluster? Do you have an HA setup?
I don't have a k8s cluster... so zero :-)
I don't have a nomad cluster either, because every time I look at it and start planning out what I would need to do to bootstrap consul+nomad and secure it, it starts to look more like a k8s install.
> There is no way it works out well in real life,
except that every cluster on GKE, or created using kops, kubespray, or even Kubernetes The Hard Way, is HA, so it's not like no one is running an HA cluster. I think from k8s's point of view there isn't much to test, as etcd is doing all the work.
Setup and install is the least of your issues when running something like nomad/k8s in production. The part that matters more is what it's like to babysit it and keep it running.
I agree people are running k8s HA in production, but there is a reason those people are dedicated k8s engineers: it's a giant pain in the ass to keep it running. Hence what I mean when I say it's "operationally complex".
Most people using GKE don't actually operate the k8s cluster, they let GKE run it for them. They just use it.
Using k8s and using nomad are similar from a developer perspective. Operationally they are night and day different.
Anyways, I suggest you go play with both systems, try them out, and put some non-important stuff in production under both of them.