
Most single-instance, single-zone failover scenarios can be handled with shell scripts, the AWS API, and cron.
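
For instance, a minimal sketch of that approach, assuming an Elastic IP fronting a primary/standby pair (the instance ID, IP, and health endpoint below are made-up placeholders):

    #!/bin/sh
    # failover.sh -- run from cron, e.g.: * * * * * /usr/local/bin/failover.sh
    # Hypothetical values; substitute your own standby instance ID and Elastic IP.
    STANDBY_ID="i-0fedcba9876543210"
    ELASTIC_IP="203.0.113.10"

    # Primary still answers its health check? Then there's nothing to do.
    if curl -fsS --max-time 5 "http://$ELASTIC_IP/healthz" > /dev/null; then
        exit 0
    fi

    # Otherwise look up the EIP's allocation ID and move it to the standby.
    ALLOC_ID=$(aws ec2 describe-addresses \
        --public-ips "$ELASTIC_IP" \
        --query 'Addresses[0].AllocationId' --output text)
    aws ec2 associate-address \
        --allocation-id "$ALLOC_ID" \
        --instance-id "$STANDBY_ID" \
        --allow-reassociation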

But the parent's comment is missing the point. K8s is not for failover. K8s is literally just a giant monolith of microservices for running microservices. It's not intended to provide failover for non-microservices; it's intended only to run microservices, and as a side effect of needing to scale them, it inherently provides some failover mechanisms.



Given GKE or AKS, not quite sure how "shell scripts, the AWS API, and cron" is "much simpler" than:

   $ docker build -t gcr.io/google-samples/hello-app:1.0 .
   $ docker push gcr.io/google-samples/hello-app:1.0
   $ kubectl run hello-server --image gcr.io/google-samples/hello-app:1.0 --port 8080
   $ kubectl expose deployment hello-server --type "LoadBalancer"
where the Dockerfile content is:

   # Build stage: compile the Go binary with the full toolchain
   FROM golang:1.8-alpine
   ADD . /go/src/hello-app
   RUN go install hello-app

   # Runtime stage: copy only the compiled binary into a minimal image
   FROM alpine:latest
   COPY --from=0 /go/bin/hello-app .
   ENV PORT 8080
   CMD ["./hello-app"]
https://cloud.google.com/kubernetes-engine/docs/quickstart

https://github.com/GoogleCloudPlatform/kubernetes-engine-sam...


The difference in simplicity is not in the interface that is presented to you as a user. The difference is that your shell script will have a couple hundred lines of code, while the docker and kubectl commands from above will pull in literally hundreds of thousands of lines of additional code (and therefore complexity) into your system.

I'm not saying that is a bad thing by itself, but there is definitely a huge amount of added complexity behind the scenes.


That's nice and all as long as it works. If there are any problems with it (network, auth, whatever), have fun even diagnosing the bottomless complexity behind your innocuous lil' kubectl command.


The error messages are quite good. There are extensive logs. There are plenty of probes in the system. Just the other day we ran `kubectl exec -it <pod_name> -- /bin/bash` to test network connectivity for a new service.
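
For example, a typical triage pass looks like this (the pod name is a placeholder):

    $ kubectl get pods                            # which pods are unhealthy?
    $ kubectl describe pod hello-server-abc123   # events: scheduling, image pulls, failed probes
    $ kubectl logs hello-server-abc123           # the container's stdout/stderr
    $ kubectl exec -it hello-server-abc123 -- /bin/bash   # shell in to poke at the network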

In my experience so far, Kubernetes is a well-engineered system.


The point is that automation software for admin tasks is a zero-sum game: the more a tool does for you, the less your devops staff get to leverage their own know-how. The more magic your orchestration performs, the less your ops will be able to fix prod problems themselves. And to get anything done with k8s, you'll need a whole staff of very expensive experts.


Look at it this way. Kubernetes is a Tesla Model X, and scripts/aws/cron is an electric scooter.

You can try to go cross-country with either of them. One was engineered to survive extreme environments, protect you from the elements, and move really fast. The other was engineered to leave you exposed, operate in a moderate climate, and go much slower.

If you have problems with the former, it's time to call the technicians. If you have problems with the latter, you might fix it yourself with a pocket screwdriver.


CodeDeploy has been doing this for ages.

https://docs.aws.amazon.com/codedeploy/latest/userguide/gett...

Distelli (now Puppet Pipelines) too:

https://www.youtube.com/watch?v=ZZlYADohS4Q

And then there are the PaaS options like Heroku, Beanstalk, AppEngine, Lambda, Serverless, and Apex.


> But the parent's comment is missing the point

That was my point. I wanted to point out that while some people have only or first heard about failover in the context of kubernetes, it is not something specific to kubernetes, or even the problem that kubernetes was built to solve.

Of course it is not designed to be a failover solution specifically and using it (exclusively) as such would be ill-advised; I was just trying to be diplomatic while pointing that out.



