Cilium 1.4 released (cilium.io)
71 points by rochacon on Feb 14, 2019 | 18 comments



Support for multi-cluster routing is huge. One of the main pain points of managing Kubernetes in the large-deployment or cross-cloud use case is how to bridge multiple clusters. SIG Multicluster works on related architecture issues/designs full time, and there are lots of other solutions out there, but they're all kind of large and bulky.

If one "thing" (Cilium) can give me local-node breakneck speed (via eBPF magic), cross-cluster routing (v1.4), and node <-> node routing, I would absolutely drop everything and switch to it (my deployment is very, very small).

All that said, I'm currently very happy with kube-router[0], because it gave me all-in-one node & service/pod routing (as in, you can actually get rid of kube-proxy). Cilium is now much higher on my short list of things to check out as well.

[0]: https://www.kube-router.io/
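
For context, kube-router's all-in-one behavior is driven by a few flags on its DaemonSet container. A rough sketch (the three `--run-*` flags are from kube-router's docs; the rest of the spec is illustrative, not a complete manifest):

```yaml
# Illustrative excerpt of a kube-router DaemonSet container spec.
containers:
  - name: kube-router
    image: cloudnativelabs/kube-router
    args:
      - --run-router=true          # BGP-based pod networking
      - --run-firewall=true        # NetworkPolicy enforcement
      - --run-service-proxy=true   # IPVS service proxy; this is what lets you drop kube-proxy
```

Turning off `--run-service-proxy` and keeping kube-proxy is also a supported mix-and-match setup.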


Another killer feature is that it can enforce network policies on FQDNs, not only on IPs. That's not new in 1.4, but it's outstanding nonetheless.
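
For anyone curious what that looks like: a minimal sketch of an FQDN-based egress rule, using the `toFQDNs` field of Cilium's CiliumNetworkPolicy CRD (the policy name, label, and hostname here are made up for illustration):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-to-api        # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: frontend                # hypothetical pod label
  egress:
    - toFQDNs:
        - matchName: "api.example.com"   # hypothetical hostname
```

Cilium resolves the name via DNS and allows traffic to the resulting IPs, so you aren't hand-maintaining IP lists for external services.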


Whoa, I didn't know that; that's huge as well. There are some overlays that don't even come with NetworkPolicy support at all, so I had no idea they supported FQDNs too.


Cilium also does Service IP routing, ala the kube-router feature you're referring to.


Unless I'm misunderstanding you, I'd consider that table stakes -- you don't have much of an overlay network if it can't do at the very least pod<>pod and pod<>svc routing.

Thinking aloud, by "Service IP" routing do you mean that actual services can have external facing IPs? I had no idea kube-router supported that, maybe I'm on a version that's too old?

[EDIT] Looking back at the kube-router docs, I see no mention of what I was discussing above so I'm going to assume "Service IP" means the usual IPVS/LVS stuff that kube-router does. I find that feature doesn't really affect me directly as a cluster operator though... Though it's probably saving me some latency (I assume you can skip a DNS resolution or two?)


Sorry, I meant "Service ClusterIP", which is the common name of the feature you're talking about that provides VIPs and load balancing for internal services.

Anyway, I was just pointing out that it is also available via Cilium. To put it another way, Cilium can replace kube-proxy, just as kube-router does, while also providing all of the other mesh networking functionality that k8s needs.

Put a final way, I still don't know of anything kube-router does, or any role it can take, that Cilium can't also do or provide. But I'm not deep on either of them and am mostly replying here in the hopes that you will fill a gap in my knowledge. :). Thanks!


How does kube-router compare to Project Calico?


Calico or Cilium? ;)

Compared to Cilium, I would spontaneously say the differences are BPF and FQDN-based network policies.


On what axis? kube-router is closer to Canal (Calico + flannel); it adds NetworkPolicy support and the ability to remove kube-proxy completely from your setup.


For those of us that don't know everything, it is sometimes helpful to put a description of a project in the title when announcing a new release.

In case anyone is ignorant like me:

> Cilium is open source software for transparently providing and securing the network and API connectivity between application services deployed using Linux container management platforms like Kubernetes, Docker, and Mesos.


How does Cilium compare to HAProxy or Traefik? Are they operating in the same realm, or is that on another layer?


They're completely different products for different uses -- Cilium is an overlay networking solution (so, more than one program working together) for use inside container orchestration frameworks like Kubernetes/Mesos/Nomad, whereas HAProxy/Traefik are applications that do reverse proxying and load balancing.

Cilium may happen to do some proxying and balancing, but you don't do things like "add a backend to Cilium" -- you "add a backend" to Kubernetes. Cilium is handling your networking, so it sees that, and ensures your frontend (which is already deployed, let's say) inside the same Kubernetes cluster can speak to the backend when it reaches out for it.
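
To make that concrete: "adding a backend" just means creating ordinary Kubernetes objects, and the CNI plugin (Cilium here) wires up the routing underneath. A minimal, hypothetical Service (all names and ports are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend            # hypothetical service name
spec:
  selector:
    app: backend           # matches the labels on the backend pods
  ports:
    - port: 8080           # the port exposed on the ClusterIP VIP
      targetPort: 8080     # the container port traffic is forwarded to
```

Nothing in this manifest mentions Cilium at all; that's the point.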

HAProxy is a TCP/HTTP reverse proxy and load balancer, similar to NGINX.

Traefik is a reverse proxy and load balancer (currently HTTP-only, I believe) built relatively recently, with modern features like an HTTP administration interface, OpenTracing-compliant (IIRC) request tracing, Let's Encrypt for certs, and automatic polling of a container orchestration layer (if you have one) to populate backends/frontends.


All cloud native networking options, including Cilium: https://landscape.cncf.io/category=cloud-native-network&form...

All service proxies, including HAProxy, NGINX, and Traefik: https://landscape.cncf.io/category=service-proxy&format=card...


This seems cool, but maybe I don't understand something... Wouldn't your frontend and your backend be in the same kubernetes cluster?


Yeah, they would be -- Cilium handles your in-cluster routing first and foremost (making sure those two things can talk to each other and each have a unique IP, which is a requirement of Kubernetes).

Some companies that are trying to do multi-cloud or really large Kubernetes installations (or just multi-tenancy) are finding that they have to split things up into multiple clusters for isolation or availability, so now Cilium offers a solution for that as well.
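
If memory serves from the 1.4 cluster-mesh docs, each cluster is given a name and a numeric ID in Cilium's ConfigMap so that endpoints stay routable and distinguishable across clusters. Roughly (key names from the docs; the values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  cluster-name: cluster-1   # must be unique per cluster (illustrative value)
  cluster-id: "1"           # numeric ID, also unique per cluster (illustrative value)
```

The clusters then share etcd endpoints so each Cilium agent can discover the others' endpoints.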


Thank you for explaining!


No problem! Counting on the HN community to speak up and correct any mistakes I may have made :)


Sorry about that, it's the first time I've submitted something to HN; I'll keep that in mind for the future.




