
I think the recommendation is not to use another orchestration service, but to keep the infrastructure simple and not use any orchestration or service discovery at all.

Adding service discovery and container orchestration will probably not make your product better. Instead it will add more moving parts that can fail and make operations more complex. So IMO a "containerized microservice architecture" is not a "feature" that you should add to your stack just because. It is a feature you should add to your stack once the benefits outweigh the costs, which IMO only happens at huge scale.

Most people know that "Google {does, invented} containers". What not so many developers seem to realize is that a Google Borg "container" is a fundamentally different thing from a Docker container. Borg containers at Google are not really a dependency management mechanism; they solve the problem of scheduling heterogeneous workloads developed by tens of thousands of engineers on the same shared compute infrastructure. This, however, is a problem that most companies simply do not have, as they are just not running at the required scale. At reasonable scale, buying a bunch of servers will always be cheaper than employing a Kubernetes team.

And if you do decide to add a clever cluster scheduler to your system, it will probably not improve reliability, but will actually do the opposite. Even something like Borg is not a panacea; you will occasionally see weird interactions between two incompatible workloads that happen to get scheduled on the same hardware, i.e. operational problems you would not have had without the "clever" scheduler. So again, unless you need it, you shouldn't use it.

I think the problem that Docker does solve for small startups is that it gives you a repeatable and portable environment. It makes it easy to create an image of your software that you are sure will run on the target server without having to talk to the sysadmins or ops departments. But you don't need Kubernetes to get that. And while I can appreciate this benefit of using Docker, I still think shipping a whole Linux environment for every application is not the right long-term way to do it. It does not really "solve" the problem of reliable deployments on Linux; it is just a (pretty nice) workaround.
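That "repeatable image" property boils down to something like this (a minimal sketch; the binary name and base image are made up):

```dockerfile
# Hypothetical service; names and versions are illustrative.
FROM debian:bookworm-slim
COPY myapp /usr/local/bin/myapp
# Everything the binary needs ships inside the image, so the target
# server only needs a container runtime, not the app's dependencies.
ENTRYPOINT ["/usr/local/bin/myapp"]
```

Build it once, and the same image runs on any host with a container runtime, whatever distro or library versions the host has.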




Without service discovery, you end up generating configuration all over the place just to wire together your services, letting everybody know about which IP address and port where - super tedious. Add in multi-tenancy, and you're in a bunch of pain. Now try and take that setup and deploy it on-prem for customers that can't use cloud services - you rapidly want something to abstract away the cluster and its wiring at the address allocation level.
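The "generating configuration all over the place" pain looks roughly like this: a hand-maintained map of which service lives at which address, rendered into wiring config for every deployment (all names and addresses below are made up for illustration):

```python
# Hypothetical hand-maintained service map for one deployment.
# Without service discovery, a file like this has to be kept
# accurate per environment, per tenant, per on-prem customer.
SERVICE_MAP = {
    "auth":    ("10.0.1.12", 7001),
    "billing": ("10.0.1.13", 7002),
    "search":  ("10.0.2.40", 7003),
}

def render_env(service_map):
    """Render the map as NAME_ADDR=host:port lines to hand to each service."""
    return "\n".join(
        f"{name.upper()}_ADDR={host}:{port}"
        for name, (host, port) in sorted(service_map.items())
    )

print(render_env(SERVICE_MAP))
```

Every address change means regenerating and redistributing this for every consumer of the moved service, which is exactly the tedium a discovery layer abstracts away.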


I'm not sure I understand this. You have a product that is split into many different components, and when you deploy this product to a customer site, each component runs on different hosts, so you have a bunch of wiring up of service addresses to do for every deployment?

Could something like mDNS be a lightweight solution to that problem?

And also I am genuinely curious how Kubernetes would solve that. When you install Kubernetes on all of these machines, don't you have to do that configuration manually as well? So isn't it just choosing to configure Kubernetes instead of your own application for each deployment? If it is that much simpler to set up a Kubernetes cluster than your app, maybe the solution is to put some effort into the configuration management part of your product?


I'm not sure you understand the value k8s proposes based on your comments throughout this entire thread.

Managing many nodes is the reason orchestration software exists. Your suggestion to "put some effort into configuration management" is effectively naive, homegrown orchestration.

Build or buy? That's the same argument - except k8s is free and available from many providers.


Kubernetes service discovery is just DNS, so it sounds like you're doing the same thing, but with existing CNAMEs.


Ah, if you look under the covers, Kubernetes service discovery is actually kernel-space virtual load balancers running on each node in your cluster. The DNS record just points to the "load balancer".
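Roughly, for a Service named `web` in a namespace `shop` (names are illustrative): the Service gets a stable virtual ClusterIP, kube-proxy programs packet-rewriting rules for that IP on every node, and cluster DNS maps `web.shop.svc.cluster.local` to it:

```yaml
# Illustrative ClusterIP Service. Cluster DNS will resolve
# web.shop.svc.cluster.local to the Service's virtual IP, which
# kube-proxy implements as iptables/IPVS rules on each node.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: shop
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

So the DNS answer is stable while the actual pod endpoints behind the virtual IP come and go.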


If your docker containers all build off of the same OS and then you run them on a host using that OS, won't the containers share the lower filesystem layers with the host?



