If you use the Kubernetes "Guaranteed" QoS class (with the static CPU manager policy), its CPU resources are kept distinct, via cpusets, from the ones used by the riff-raff. This is a good way to segregate latency-sensitive apps, where you care about tail latency, from throughput-oriented stuff, where you don't.
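As a sketch of what that takes: exclusive cpusets require the kubelet's static CPU manager policy plus a pod whose requests equal its limits, with an integer CPU count. Names and resource values below are illustrative:

```yaml
# kubelet config (fragment): the static policy is what hands out
# exclusive cpusets to Guaranteed pods with integer CPU requests.
# kind: KubeletConfiguration
# cpuManagerPolicy: static

apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive          # illustrative name
spec:
  containers:
  - name: app
    image: example/app:latest      # illustrative image
    resources:
      requests:
        cpu: "4"                   # integer CPUs, requests == limits
        memory: 8Gi
      limits:
        cpu: "4"
        memory: 8Gi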
1. Neighbours can be noisy to the other hyperthread on the same physical core. For example, heavy use of AVX-512 and other vectorized instructions can slow down a tenant running on the sibling hyperthread of the same core. You can disable hyperthreading, but then you are making the same tradeoff again: sacrificing efficiency for low tail latency.
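To reason about who shares a core with whom, Linux exposes sibling lists in sysfs. A minimal sketch, assuming the standard topology path; the parser handles the `0,32` and `0-3` list formats sysfs uses:

```python
from pathlib import Path

def parse_cpu_list(s: str) -> set[int]:
    """Parse a sysfs CPU list like '0,32' or '0-3,8' into a set of ints."""
    cpus: set[int] = set()
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

def hyperthread_siblings(cpu: int) -> set[int]:
    """Logical CPUs sharing a physical core with `cpu` (including itself)."""
    p = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list")
    return parse_cpu_list(p.read_text())
```

If a latency-sensitive pod is pinned to CPU 2 and `hyperthread_siblings(2)` returns `{2, 34}`, then whatever runs on CPU 34 is the potential noisy neighbour.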
2. Certain locks in the kernel can be exhausted by the behaviour of a single tenant. For example, on kernel 5.15 there was one global kernel lock for cgroup resource accounting. If a tenant is constantly hitting its cgroup limits, it increases lock contention in the kernel, which slows down every other tenant on the system that touches the same locks. This particular issue with cgroup accounting has been improved in later kernels.
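One way to spot a tenant that is constantly banging against its cgroup limits is to watch the counters in the cgroup v2 `memory.events` file. The file format ("key value" per line) is real; the cgroup path in the usage comment is illustrative:

```python
def parse_memory_events(text: str) -> dict[str, int]:
    """Parse cgroup v2 memory.events ('key value' per line) into a dict."""
    events: dict[str, int] = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value:
            events[key] = int(value)
    return events

# Usage (path is illustrative):
#   with open("/sys/fs/cgroup/kubepods.slice/.../memory.events") as f:
#       ev = parse_memory_events(f.read())
# A rapidly growing ev["max"] count means the tenant keeps hitting
# memory.max, i.e. it is exactly the kind of workload that drives
# accounting-lock contention described above.
```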
3. If your latency-sensitive service runs on the same cores that service IRQs, tail latency can greatly increase under heavy IRQ load, for example from high-speed NIC interrupts. You can isolate those CPUs from the pool offered to pods, but then you are dedicating 4-8 CPUs to just processing interrupts. Ideally you could run the non-Guaranteed pods on the CPUs that service IRQs, but that is not supported by Kubernetes.
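You can see which CPUs are absorbing NIC interrupts in `/proc/interrupts` (and steer them via `/proc/irq/<n>/smp_affinity_list`). A sketch of a parser for the per-CPU counts, assuming the standard column layout of that file:

```python
def irq_counts(text: str) -> dict[str, list[int]]:
    """Map IRQ label -> per-CPU interrupt counts from /proc/interrupts text."""
    lines = text.splitlines()
    n_cpus = len(lines[0].split())          # header row: CPU0 CPU1 ...
    table: dict[str, list[int]] = {}
    for line in lines[1:]:
        fields = line.split()
        if len(fields) < 1 + n_cpus:
            continue
        label = fields[0].rstrip(":")
        counts = fields[1:1 + n_cpus]
        if all(c.isdigit() for c in counts):
            table[label] = [int(c) for c in counts]
    return table
```

Summing the counts per column over an interval tells you which CPUs to keep your pinned, latency-sensitive threads away from.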
4. Under whole-node memory pressure, the kernel does not respect memory.min and will reclaim pages from Guaranteed-QoS workloads.
5. The current Memory QoS implementation does not adjust memory.max on the Burstable pod slice, so Burstable pods can consume all of the free memory in the kubepods slice, starving new memory allocations from Guaranteed pods.
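A toy model of the starvation in points 4 and 5: if the Burstable slice has no memory.max of its own, the only ceiling is the kubepods slice, so what is left for a new Guaranteed pod is whatever the Burstable tenants have not already taken. The helper and the numbers below are illustrative, not kubelet code:

```python
def free_for_guaranteed(kubepods_max: int, guaranteed_usage: int,
                        burstable_usage: int) -> int:
    """Memory left in the kubepods slice for new Guaranteed allocations.

    Simplified model: with no memory.max on the Burstable slice, Burstable
    usage is bounded only by the kubepods slice itself.
    """
    return kubepods_max - guaranteed_usage - burstable_usage

GIB = 1024 ** 3
# 60 GiB allocatable; Guaranteed pods use 20 GiB; Burstable pods balloon
# to 38 GiB because nothing below the kubepods slice caps them.
left = free_for_guaranteed(60 * GIB, 20 * GIB, 38 * GIB)
# Only 2 GiB remain for Guaranteed pods to grow into.
```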
There isn't any way on Linux to properly isolate processes that create lots of dirty pages. It is folly to try. The only way to deal with them is to put the I/O-heavy stuff on a whole box/node by itself and outlaw block I/O on all other nodes.
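In Kubernetes terms, that segregation can be expressed with a taint on the dedicated I/O node plus a matching toleration and node selector on the I/O workloads. Node names, labels, and images below are illustrative:

```yaml
# Taint the dedicated node (one-time, via kubectl):
#   kubectl taint nodes io-node-1 dedicated=block-io:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: bulk-io-job                # illustrative
spec:
  nodeSelector:
    dedicated: block-io            # label applied to the I/O node
  tolerations:
  - key: dedicated
    operator: Equal
    value: block-io
    effect: NoSchedule
  containers:
  - name: io-heavy
    image: example/etl:latest      # illustrative image
```

Everything without the toleration is repelled from the I/O node, and the I/O jobs are pinned onto it, which is the "outlaw block I/O elsewhere" policy in practice.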