
Swarm works (more or less) if one has just 2-3 hosts, uses Docker for packaging, and wants a semi-unified view of those machines.

Kubernetes seems to be quite highly opinionated toward "clouds" and "microservices". I just wasn't able to wrap my head around its concepts' applicability to my "I just have one server that uses Docker for packaging, and now want to throw in another, for resiliency" case.




Kubernetes isn't particularly opinionated at all. It runs containers, and doesn't care what those containers are or how they behave. Microservices and clouds not required.

Its core data model, simplified, is that of pods. A pod specifies one or more named containers that should run together as a unit. A pod's config can specify many things, such as dependencies (volumes, secrets, configs), resource limits, ports, and health checks. You can deploy single-container pods, and this is the norm, but it's entirely feasible to run a whole bunch of containers that conceptually belong together.
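
For illustration, a minimal pod manifest sketch (the mylittlepod name is borrowed from the service example below; the web container and nginx image are just placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: mylittlepod
      labels:
        app: mylittlepod        # label used later to select this pod
    spec:
      containers:
      - name: web               # one named container; list more to run them together
        image: nginx:1.25
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 250m           # resource limits are set per container
            memory: 128Mi
        livenessProbe:          # health check; the container is restarted if it fails
          httpGet:
            path: /
            port: 80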

To expose a pod's ports to the world or to other pods, you define services. These simply specify which ports should go to which pods, and Kubernetes will assign a persistent, internal IP address to each service. Kubernetes will (typically) configure iptables so that the service is round-robin-balanced at the network level across all pods that it serves; the idea is that the service should be reachable from any other pod in the cluster. Together with KubeDNS, which resolves service names, you can do things like call http://mylittlepod/ to reach a pod.
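
A matching service sketch for that hypothetical pod; with KubeDNS, other pods in the same namespace can then reach it as http://mylittlepod/:

    apiVersion: v1
    kind: Service
    metadata:
      name: mylittlepod         # KubeDNS resolves this name to the service IP
    spec:
      selector:
        app: mylittlepod        # route to every pod carrying this label
      ports:
      - port: 80                # the service's stable port
        targetPort: 80          # the container port behind it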

To achieve resilience, Kubernetes lets you define replica sets, which are rules that say "this pod should run with N replicas". K8s will use the scheduler to enforce this rule, ensuring that a pod is restarted if it dies and always has N replicas running, and it can automatically ensure that pods are spread out evenly across the cluster. Replica sets can be scaled up and down, automatically or manually.
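
A replica set sketch along the same lines (same placeholder names as above); the pod spec simply moves under the template key:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: mylittlepod
    spec:
      replicas: 3               # the scheduler keeps 3 copies running
      selector:
        matchLabels:
          app: mylittlepod      # which pods this replica set owns
      template:                 # the pod to replicate
        metadata:
          labels:
            app: mylittlepod
        spec:
          containers:
          - name: web
            image: nginx:1.25

Scaling it manually is then something like "kubectl scale replicaset mylittlepod --replicas=5".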

There are other objects, such as deployments (handle rolling upgrades between one version of a pod and another), ingresses (configure load-balancers to expose HTTP paths/hosts on public IPs), secrets (encrypted data that pods can mount as files or envvars), persistent volumes (e.g. AWS EBS volumes that can be mounted into a pod), and so on, but you can get by with just pods and services, at least to start.
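
To pick one of those: a deployment manifest looks almost exactly like the replica-set sketch above, just with kind: Deployment, and bumping its image tag triggers a rolling upgrade. An ingress sketch, assuming a hypothetical example.com host pointing at the mylittlepod service:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mylittlepod
    spec:
      rules:
      - host: example.com           # hypothetical public hostname
        http:
          paths:
          - path: /                 # expose the whole service at the root path
            pathType: Prefix
            backend:
              service:
                name: mylittlepod   # the service defined earlier
                port:
                  number: 80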

Kubernetes is a bit pointless with a single server, but adds convenience even if you have just two or three.


> "I just have one server that uses Docker for packaging, and now want to throw in another, for resiliency"

Yes, when you only have a couple of servers, that is not the sweet spot for K8s.

But few people stop at 2 servers. A few months in, someone asks for a staging environment, and/or QA environment. Someone eventually realizes that they need to regularly test their fail-over and backups. Someone hires a contractor, and wants to give them a copy of the setup that won't block anyone else. Someone realizes we can centralize the logs from all these environments... And so it goes.

Even with one server, sometimes you go to do an upgrade and find your "one server" is actually a tightly coupled bunch of services. (Made-up example: I want to upgrade Varnish, but it requires a newer library that is incompatible with my WordPress version.) That one server could instead be one box for the database, one to run the cron jobs, a few for the cache layer, etc. If you break those up into different boxes, you can scale them better -- instead of one big beefy server, you can have each layer at its own scale (one or more wimpy boxes, dynamically adjusted).

You don't do this to save money directly. But by simplifying things, you make the setup easier to maintain. That saves labor and prevents problems (and makes it easier to hire and train ops).

When you have just a few servers, it looks manageable. As you grow, it gets a lot harder to manage. K8s helps.



