
I thought I understood this, and that it replaced (and no doubt did a better job of) what I'd already done - WG to get nodes on the same network, CNI on top.

But requirement 2 confused me:

> A Kubernetes cluster with IPAM enabled (--pod-network-cidr= when using kubeadm based install)

So, do node machines need to already be on the same network or not?



Sorry for the confusion: this requirement isn't about the node networking, it's about assigning subnets to each node for the pod IP addresses.

Kubernetes includes an optionally enabled IPAM that carves a large cluster subnet into a /24 per node. When enabled, each node's allocation shows up as the spec.podCIDR field on the node object.
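
For instance, you can read the allocation straight off the node object (the node name and value here are just placeholders):

    kubectl get node node-1 -o jsonpath='{.spec.podCIDR}'
    # prints something like 10.100.1.0/24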

Instead of implementing another IPAM for the node CIDRs, wormhole uses the built-in IPAM within kubernetes. The only catch is that the IPAM needs to be turned on. On a kubeadm-based install, this is done by setting the --pod-network-cidr flag during init, which sets --cluster-cidr / --allocate-node-cidrs on kube-controller-manager.
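
As a rough sketch (the CIDR is just an example value):

    kubeadm init --pod-network-cidr=10.100.0.0/16

    # which ends up running kube-controller-manager with roughly:
    #   --allocate-node-cidrs=true --cluster-cidr=10.100.0.0/16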


But what's the value of that flag if I'm going to use Wormhole only afterwards?

I set up a Wireguard /17 (iirc) mesh, and then kubeadm'd with --pod-network-cidr as that node's wg endpoint /24.

I suppose what I'm not understanding is how I can set up the cluster sufficiently to be able to kubectl apply wormhole if I haven't already set up a wireguard (or...) network between the nodes?


Ok, I think I might understand my misunderstanding. If I understand correctly, you're running your kubernetes API / connectivity over a WireGuard network established between hosts?

Because the kubernetes APIs are protected by mTLS between nodes, the API connectivity doesn't need to be over WireGuard, unless you don't have bidirectional connectivity for other reasons (like NAT). That said, I don't generally recommend splitting nodes between disparate networks, as it's not a good failure model for kubernetes.

The way this is modelled: when setting up a kubernetes cluster, you establish node trust using the installation method, over whatever network already exists between the hosts.

This creates a kubernetes installation, but doesn't set up the pod network. Passing the flag to kubeadm writes it into the kubeadm config file and turns on the kubernetes subnetting. So say you were to configure 10.100.0.0/16 as the pod-network-cidr. That would cause the kubernetes controller manager to assign a /24 to each node for pods. So node 1 gets 10.100.1.0/24, node 2 gets 10.100.2.0/24, etc. Addresses from each node's CIDR are then assigned to pods by the CNI plugin as they're scheduled. The wormhole plugin then takes care of setting up all the networking that covers 10.100.0.0/16, because kubernetes has effectively been granted management of this subnet.
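
A quick way to see the result of that allocation (node names and output below are illustrative):

    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

    NAME     PODCIDR
    node-1   10.100.1.0/24
    node-2   10.100.2.0/24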

So, yes, this assumes some sort of network exists between nodes, which is the same network that was used to establish mTLS trust between kubelet/api on all members.



