First: they force Kubernetes into a position where the pods can't be evicted.
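For concreteness, one way to make pods effectively un-evictable is a PodDisruptionBudget that allows zero voluntary disruptions; this is just an illustrative sketch, and the names/labels below are made up:

```yaml
# Hypothetical sketch: a PDB that blocks voluntary evictions
# (node drains, autoscaler scale-down) for pods labeled app: stream-worker.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: stream-worker-pdb
spec:
  maxUnavailable: 0        # never allow a voluntary disruption
  selector:
    matchLabels:
      app: stream-worker
```

Pods can additionally carry the `cluster-autoscaler.kubernetes.io/safe-to-evict: "false"` annotation to keep the cluster autoscaler from draining the node they're on.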
Second: they use a version of node ports that bypasses the CNI, so you connect directly to the process living on the node. This means there are no CNI hiccups when another node (or pod) that had nothing to do with your process gets descheduled.
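One reading of "bypasses the CNI" is a pod running on the host network, so clients hit the node's IP and land straight on the process rather than going through the Service/CNI path. A rough sketch, with placeholder names and ports:

```yaml
# Hypothetical sketch: the pod shares the node's network namespace,
# so the listener is bound directly on the node IP (no kube-proxy/CNI hop).
apiVersion: v1
kind: Pod
metadata:
  name: stream-worker
  labels:
    app: stream-worker
spec:
  hostNetwork: true          # bypass the pod network entirely
  containers:
    - name: worker
      image: example.com/stream-worker:latest   # placeholder image
      ports:
        - containerPort: 1935   # e.g. an RTMP listener exposed on the node itself
```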
In most cases, web services will be fine with the kinds of hiccups I'm talking about (even websockets); however, UDP streams will definitely lose data - and raw TCP ones may fail depending on the implementation.
What you're describing sounds like implementation bugs in the specific CNIs you've used, not anything to do with the k8s networking design in general. At a former gig I ran a geo-distributed edge with long, persistent connections over Cilium, and we had no issues sustaining 12h+ RTMP connections while scaling/downscaling and rolling pods on the same nodes. I've consulted for folks who did RTP (for WebRTC), which is UDP-based, also with no issues. In fact, where we actually had issues was cloud load-balancing infra, which in a lot of cases is not designed for long-running streams...
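The usual cloud-LB culprit for long-running streams is the idle timeout. The exact knob depends on the provider and LB controller; as one illustration (names are placeholders, and the annotation shown is the legacy in-tree AWS classic-ELB setting, so treat it as an assumption for your setup):

```yaml
# Illustrative only: raises the classic ELB idle timeout so long-lived
# RTMP/TCP streams aren't cut off; other providers expose their own knobs.
apiVersion: v1
kind: Service
metadata:
  name: rtmp-ingest
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep traffic on the node running the pod
  selector:
    app: stream-worker
  ports:
    - name: rtmp
      port: 1935
      targetPort: 1935
      protocol: TCP
```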