> with each allocation limited to 1/4th of the available resources so no individual peak can choke the system.
This assumes that the scheduled workloads are created equal, which isn't the case. The app owners have no control over what else gets scheduled on the node, which introduces uncontrollable variability in the performance of what should be identical replicas and environments. What helps here is limits. The requests-to-limits ratio lets application owners reason about the variability risk they are willing to take relative to the needs of the application (e.g. imagine a latency-sensitive workload on a critical path vs. a BAU service vs. a background job that only cares about throughput -- for each of these classes, the ratio would probably be very different). This way you can still overcommit, but not by a rule of thumb set centrally by the cluster ops team (e.g. "aim for 1/4"); the decision is distributed across each workload owner (i.e. application ops), where it can be made far more accurately and with better results.
This is what the parent post is also talking about.
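To make the ratio idea a bit more concrete, here's a rough sketch in plain Python with made-up class names and numbers (illustrative only, not a recommendation):

```python
# Hypothetical workload classes on one node, each choosing its own
# requests-to-limits ratio. All names and numbers are invented.
workload_classes = {
    # class: (cpu request in millicores, replicas, request/limit ratio)
    "latency-sensitive": (500, 4, 1.0),   # limit == request: no noisy-neighbour surprises
    "bau-service":       (250, 6, 0.5),   # may burst to 2x its request
    "background-batch":  (100, 5, 0.1),   # throughput-only: happy to burst 10x
}

node_allocatable_cpu = 4000  # millicores the node can hand out

total_requests = total_limits = 0
for name, (request, replicas, ratio) in workload_classes.items():
    limit = request / ratio
    total_requests += request * replicas
    total_limits += limit * replicas
    print(f"{name:18s} request={request}m limit={limit:.0f}m ratio={ratio}")

# The scheduler bin-packs by requests only, so limits are free to
# oversubscribe the node; the overcommit factor becomes the sum of
# per-workload decisions rather than one central rule of thumb.
print(f"requests: {total_requests}m of {node_allocatable_cpu}m allocatable")
print(f"limits:   {total_limits:.0f}m "
      f"(overcommit x{total_limits / node_allocatable_cpu:.2f})")
```

With these made-up numbers the node is fully booked by requests but limits add up to 2.5x its capacity, and each class decided for itself how much of that risk it was willing to carry.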
1/4th was merely an example for one resource type; a suitable limit may be much lower depending on the cluster and workloads. The point is that a limit set to 1/N (for N workloads on the node) guarantees wasted resources, and it should be set significantly higher based on realistic workloads, while still ensuring that it takes several workloads peaking at once to consume all resources, so the risk of peak demand collisions is averaged out.
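To put rough numbers on that (a throwaway Python sketch, all figures invented):

```python
# Back-of-the-envelope comparison of per-workload limits on one node.
# Numbers are invented; only the shape of the trade-off matters.
capacity = 16_000     # millicores on the node
n_workloads = 16      # pods scheduled onto it

for fraction in (1 / n_workloads, 1 / 8, 1 / 4):
    limit = capacity * fraction
    peaks_to_saturate = capacity / limit  # simultaneous peaks needed to exhaust the node
    print(f"limit={limit:.0f}m ({fraction:.0%} of node): "
          f"a pod can never burst past {fraction:.0%}, "
          f"and it takes {peaks_to_saturate:.0f} simultaneous peaks to saturate the node")
```

The 1/N row can never oversubscribe the node, but it also caps a pod that could usefully soak up idle capacity; the higher rows trade that waste for a (hopefully rare) chance of multi-way peak collisions.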
> This assumes that the scheduled workloads are created equal which isn't the case.
This particular allocation technique benefits from scheduled workloads not being equal, as equality would increase the likelihood of peak demand collisions.
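A toy Monte Carlo (Python, with an invented demand model) makes the intuition visible: when peaks share a trigger, the node saturates far more often than when the same number of peaks fire independently.

```python
import random

# Toy Monte Carlo, not a benchmark: a node running N identical workloads
# whose peaks share one trigger, vs N dissimilar workloads whose peaks
# fire independently. All numbers are invented.
random.seed(0)

N = 8                  # workloads on the node
CAPACITY = 10.0        # arbitrary capacity units
BASE, PEAK = 0.5, 2.0  # per-workload demand off-peak / at peak
P_PEAK = 0.2           # probability of peaking (or of the shared trigger firing)
TRIALS = 100_000

def saturated(correlated: bool) -> bool:
    if correlated:
        # "Equal" workloads: one shared trigger, so everyone peaks together.
        peaking = [random.random() < P_PEAK] * N
    else:
        # Dissimilar workloads: each peaks on its own schedule.
        peaking = [random.random() < P_PEAK for _ in range(N)]
    return sum(PEAK if p else BASE for p in peaking) > CAPACITY

for label, correlated in (("identical, shared peaks", True),
                          ("dissimilar, independent peaks", False)):
    rate = sum(saturated(correlated) for _ in range(TRIALS)) / TRIALS
    print(f"{label:30s} node saturated in {rate:.1%} of trials")
```

With these made-up numbers the shared-trigger case saturates the node roughly 20% of the time versus about 1% for independent peaks, which is the averaging-out effect the parent is relying on.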