> additionally: if your node becomes unhealthy then the workloads would be rescheduled on another node.
Well of course, but you're likely going to run into that issue on all of the nodes where the offending service lives.
> But let’s not argue things that aren’t true.
If what I've said is untrue, looking at open GitHub issues and the Kubernetes documentation is certainly no indication. That's a massive problem all by itself.
The first issue you've linked concerns quota support for ephemeral storage requests/limits - which is not about the limits themselves, but the ability to set limit quotas per tenant/namespace. E.g., team A cannot use more than 100G of ephemeral storage in total across the cluster. EDIT: No, sorry, it's about using underlying filesystem quotas for limiting ephemeral storage, vs. the current implementation, see the third point below. Also see KEP: https://github.com/kubernetes/enhancements/tree/master/keps/...
The second is a tracking issue for a KEP that has been implemented but is still in alpha/beta. This will be closed when all the related features are stable. There's also some discussion about related functionality that might be added as part of this KEP/design.
The third issue is about integrating Docker storage quotas with Kubernetes ephemeral quotas - ie., translating ephemeral storage limits into disk quotas (which would result in -ENOSPC to workloads), vs. the standard kubelet implementation which just kills/evicts workloads that run past their limit.
I agree these are difficult to understand if you're not familiar with the k8s development/design process. I also had to spend a few minutes on each one of them to understand what the actual state of the issues is. However, they're in a development issue tracker, and the end-user k8s documentation clearly states that ephemeral storage requests/limits work, how they work, and what their limitations are: https://kubernetes.io/docs/concepts/configuration/manage-res...
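For context, the documented, already-working mechanism is just the standard resources block on a container. A minimal sketch (the name, image, and values here are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9  # placeholder image
    resources:
      requests:
        ephemeral-storage: "1Gi"   # used by the scheduler for node placement
      limits:
        ephemeral-storage: "2Gi"   # kubelet evicts the pod if usage exceeds this
```

Note that, as described in the third issue above, exceeding the limit results in eviction by the kubelet, not an -ENOSPC error inside the container.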
So... why are these issues open?
https://github.com/kubernetes/enhancements/issues/1029 https://github.com/kubernetes/enhancements/issues/361 https://github.com/kubernetes/kubernetes/issues/54384