Hacker News

This is actually interesting. We chose to use an overlay network because Kubernetes was a new, complex system that we initially had no in-house experience with, so we wanted to isolate any problems it could create as much as possible, and to ensure that teams working on the Kubernetes deployment weren't blocked waiting on network engineering time.

This meant we felt an overlay network was the most pragmatic (works out of the box) solution, even though IPIP has some drawbacks: it adds complexity within the Kubernetes cluster, and it hashes poorly across router ECMP and NIC RX queues (neither looks into the inner IP packet as part of hashing). That was definitely a concern of ours, considering we'd very intentionally avoided IPIP in our load balancer (https://github.blog/2018-08-08-glb-director-open-source-load... has a write-up of why).
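To make the hashing problem concrete, here's a small illustrative sketch (not GitHub's code; the hash function and bucket count are toy stand-ins). ECMP and RSS implementations typically hash on the visible 5-tuple; with IPIP, the outer header carries only the two node IPs and protocol 4 with no L4 ports, so every inner flow between a pair of nodes collapses into one bucket.

```python
# Toy demonstration of why IPIP defeats 5-tuple ECMP/RSS hashing.
# Hardware typically hashes on (src_ip, dst_ip, proto, src_port, dst_port);
# an IPIP outer header exposes only (node_a, node_b, proto=4), no ports.
import hashlib

def ecmp_bucket(src_ip, dst_ip, proto, sport=0, dport=0, nbuckets=8):
    """Toy stand-in for a hardware hash over the visible header fields."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % nbuckets

# 100 distinct TCP flows between the same pair of Kubernetes nodes.
flows = [("10.0.0.1", "10.0.0.2", 6, 40000 + i, 443) for i in range(100)]

# Without encapsulation the hash sees each inner 5-tuple, so flows
# spread across buckets (router links / NIC RX queues).
inner = {ecmp_bucket(*f) for f in flows}

# With IPIP, only the outer header is hashed: every flow between these
# two nodes lands in the same single bucket.
outer = {ecmp_bucket("192.0.2.10", "192.0.2.20", 4) for _ in flows}

print(len(inner), len(outer))
```

With many nodes (hence many distinct outer src/dst pairs), the outer-header hash still spreads reasonably well in aggregate, which is the effect described below for large clusters.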

We've considered the alternative of announcing a /24 or /25 (or similar) per Kubernetes node via BGP, which systems like Calico support, but so far the limitations of IPIP haven't caused an issue: our Kubernetes clusters are large enough that traffic hashes evenly even with the overlay network, so we haven't needed to migrate. It's definitely an interesting complexity trade-off overall, though: simple routed networks are much easier to understand than another layer of encapsulation.
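For reference, the routed alternative above might look roughly like this in Calico — a hedged sketch, with the pool CIDR, peer IP, and AS number purely illustrative: each node gets a /25 block from the pool and announces it to the physical fabric over BGP, with IPIP disabled entirely.

```yaml
# Hypothetical Calico config (addresses/ASNs illustrative, not GitHub's).
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: routed-pool
spec:
  cidr: 10.128.0.0/14
  blockSize: 25        # per-node /25, announced as a plain route
  ipipMode: Never      # no overlay; pod IPs are routed natively
  natOutgoing: false
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: tor-rack-1
spec:
  peerIP: 192.0.2.1    # top-of-rack router (illustrative)
  asNumber: 64512
```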



