
> because a lot of hard problems at medium scale and above just go poof with K8s.

No, they don't? I don't know why anybody would assume that something as complex as Kubernetes would just run flawlessly once you actually try to run it on thousands of servers. It must be something to do with Google PR, because people definitely don't seem to assume the same for e.g. Hadoop or OpenStack. Take a guess at how many people large companies have to employ just to keep their smart cluster scheduler running.



I was commenting on a specific usage context:

>when I started my new job on a 22 node GKE cluster

>problems at medium scale

vs

>Make a guess at how many people large companies have to employ to actually keep their smart cluster scheduler running

K8s obviously is not a silver bullet and of course there's ops work to be done. I can't comment on whether operating K8s clusters in other contexts makes economic sense, but I know it does for our current team.


> makes economic sense [...] for our current team.

But the reason for that is not that it makes "hard problems go poof at scale". The reason is that you're using a hosted service where somebody else (in this case, Google themselves) takes care of the problems for you for a fee, and - at small scale - you only have to pay them a fraction of a single operations engineer's salary for it.

So of course it makes economic sense for you to use a hosted service where economies of scale kick in, but your recommendation to use Kubernetes because it solves hard technical problems at medium scale does not follow from that.


> [Google themselves] takes care of the problems for you for a fee...

GKE's services are, as far as I can tell from their pricing page[0], free. The compute running on top of GKE is charged at the GCE rate; the master node spun up by GKE is free.

Disclaimer: I work at Google Cloud, but nowhere near these offerings.

[0] https://cloud.google.com/kubernetes-engine/pricing
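
To make the pricing model above concrete, here's a rough sketch (illustrative only - the cluster name, zone, node count, and machine type below are made up, not taken from the thread or the pricing page): the worker node VMs you ask for are what show up on the bill at normal GCE rates, while the master that GKE provisions for you is not a billable resource.

  # Hypothetical example: create a small GKE cluster. The three
  # n1-standard-2 worker VMs are billed like ordinary GCE instances;
  # the master that GKE spins up does not appear as a line item.
  gcloud container clusters create demo-cluster \
      --zone us-central1-a \
      --num-nodes 3 \
      --machine-type n1-standard-2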


How does this relate to my point that you're transitively paying somebody else to do ops for you? Maybe Google's pricing model rolls this into the normal VM price? Or maybe it's currently offered at a loss to gain traction? Or are you saying they are not paying the SREs anymore and they work for free now?


Per their pricing, GCP doesn't charge for the master node VM that does the orchestration - you just get charged the normal price for VMs as if you'd provisioned them yourself. Thus, given that the price of GKE is $0, the only thing I could see you "transitively paying somebody else" is experience - Google engineers become more versed in running managed Kubernetes and you don't. If the fee you perceive is dependency, I'd agree - but I'd also opine that many startups/SMBs would be willing to accept that fee rather than onboard engineers to do it themselves.

As to why they're taking the loss on the master node VM, I don't know. I had previously expected it to be a cost and was, quite frankly, pleasantly surprised that it wasn't - it seems like the most obvious thing to charge for. If I had to guess why it isn't, my best assumption would be that there's far more to be gained in getting companies comfortable with scaling, from angles that go beyond the strict monetary benefit of them going from 3 compute instances to 300.


If you are running on Google Compute Engine anyway, then I don't think you pay anything extra for GKE. As far as I can tell, you are only paying for the VMs that you use, so the price of hosted K8s is already baked in whether you use it or not.


Yes, when you use GKE Google is operating K8s for you. Now ask yourself why Google and AWS and Azure offer K8s but they don't offer Swarm, Mesos, Nomad, etc. Maybe K8s solves some problems after all.


It was a response to the "what about GKE?" question, so it follows in that context.


No, it doesn't. You cannot extrapolate from the fact that the hosted version "just works" that Kubernetes at scale would also "just work" (Kubernetes here being the open source product that you run yourself). Especially if the hosted version is offered by Google, and everybody knows they employ truckloads of very good engineers to keep their stuff running.


The original comment asked “what about hosted kubernetes?” and the reply was discussing the benefits of that hosted solution. They didn’t say that running your own cluster would be the same. They didn’t even imply it.

Even if they had, it doesn’t justify your aggression. Maybe take a break from the internet for a bit to clear your head.


Well, I am sorry if you felt the wording of my comments was too "aggressive"; the personal angle is weird, though. Also, I think you are trying pretty hard to misunderstand what I am trying to say. Obviously, using Google's hosted version will make all your problems go "poof" at any scale, because they are Google's problems now. But that is precisely the point I am trying to make: it's not a feature of using Kubernetes - it's a feature of paying Google!


Or Azure. Or AWS. Or probably, in the future, pretty much anyone. If you want to operate your own cluster you will be able to; for everyone else it will come from their cloud provider as a "cost of getting your business." This is a great thing.


Yes, and that was their point: That the hosted solution did that. They didn’t say it was a feature of kubernetes. The entire thread started with a question about the hosted solutions and they talked about their experiences with that. Can you specifically point out where they say the hosted version’s benefit should translate to the self-hosted version?



