Thanks for writing this up. I saw your original comment a while back and was intrigued, and I got it working last weekend on a 2-vm vagrant private network setup, but only by constructing the network prefix from a 32-bit ULA prefix concatenated with the full 32-bit IPv4 address. I think it worked (hooked wireshark up to a tcpdump of the vm's interface on the private network and saw the 6-in-4 packets), but I didn't need to pass the `6rd-relay_prefix` argument to ip tunnel. Do you have any idea why that might be? I suspect 32+32 is the default (vs. your 40+24), and if you want to use only the lower 24 bits of the IPv4 address, you need to tell the kernel what netmask you're using on your IPv4 network (the "6rd relay prefix"). I guess that would buy you either more randomness in your ULA prefix or some more IPv6 subnets per IPv4 vm.
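For reference, here's roughly what I ran (the ULA prefix and addresses are made up; assumes the vagrant host-only network is 192.168.56.0/24):

```
# 32-bit ULA prefix + full 32-bit IPv4. The kernel's default relay prefix is
# 0.0.0.0/0, so all 32 bits of the local address get embedded and no
# 6rd-relay_prefix is needed:
ip tunnel add tun6rd mode sit local 192.168.56.11 ttl 64
ip tunnel 6rd dev tun6rd 6rd-prefix fd4a:1c2b::/32
ip link set tun6rd up
# 192.168.56.11 = c0a8:380b, so this host's delegated /64 is
# fd4a:1c2b:c0a8:380b::/64; addressing it with the /32 mask also routes the
# other 6rd hosts through the tunnel:
ip addr add fd4a:1c2b:c0a8:380b::1/32 dev tun6rd

# For the 40+24 split, you'd tell the kernel to embed only the low 24 bits:
# ip tunnel 6rd dev tun6rd 6rd-prefix fd4a:1c2b:3d00::/40 6rd-relay_prefix 192.0.0.0/8
```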
You can explore this a bit further: set up some net namespaces, connect them to the vm host via a bridge device, assign them IPv6 addresses from the vm host's /64 network, and then add a route inside each container for the /32 or /40 you're using via the vm host's address on the host-local /64 (rough sketch below). It's cool to see one container on host 1 talk to another on host 2. You're basically subnetting the /32 or /40 per host; each host is analogous to a household in 6rd, and the containers are like your devices on your lan with globally routable IPv6 addresses (except we're using a ULA prefix instead of a globally routable IPv6 prefix assigned to an ISP, so our containers are not globally addressable - whatever). I'd guess this is what the CNI thing you mention does under the hood. For others who are interested in this, it's basically combining OP's article with https://iximiuz.com/en/posts/container-networking-is-simple/ from a couple months ago.
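A rough sketch of the namespace plumbing on one vm host (all names and addresses hypothetical, continuing the fd4a:1c2b::/32 example from above):

```
# Host needs to forward between the bridge and the 6rd tunnel:
sysctl -w net.ipv6.conf.all.forwarding=1

# Bridge that the containers hang off; the host owns ::1 on its delegated /64:
ip link add br0 type bridge
ip link set br0 up
ip addr add fd4a:1c2b:c0a8:380b::1/64 dev br0

# One "container": a netns joined to the bridge with a veth pair.
ip netns add c1
ip link add veth-c1 type veth peer name eth0 netns c1
ip link set veth-c1 master br0 up
ip -n c1 link set lo up
ip -n c1 link set eth0 up
ip -n c1 addr add fd4a:1c2b:c0a8:380b::10/64 dev eth0
# Route the whole ULA /32 (or /40) via the host; its 6rd tunnel does the rest:
ip -n c1 route add fd4a:1c2b::/32 via fd4a:1c2b:c0a8:380b::1
```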
The Teredo part was very interesting as well; I hadn't thought to worry about intermediate systems that can't handle anything besides UDP/TCP.
A ULA prefix is, properly, 48 bits long -- `fd` plus 40 bits of random data. Using the full 32 bits of IPv4 means that you'll be using only 24 bits of randomness there.
Does this matter? Probably not. But when I have the option, I like to conform to the RFCs.
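If you want a properly random one, something like this produces an RFC 4193-style /48 (assuming openssl is handy):

```
# fd + 40 random bits, formatted as fdXX:XXXX:XXXX::/48
printf 'fd%s:%s:%s::/48\n' $(openssl rand -hex 5 | sed 's/^\(..\)\(....\)\(....\)$/\1 \2 \3/')
```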
It really should have been IPv6 from the start, and if you needed IPv4 then you could add an IPv4 ingress or egress controller. Had Kubernetes and/or Docker been IPv6-native, we wouldn't even need the overlay networks.
> Not that there's any Teredo going on in the article...
I'm not sure exactly what you mean, but note that using 6to4 with FOU produces packets that are exactly the Teredo protocol. This is why some packet analysis tools (for example Wireshark) are able to recognize the UDP-encapsulated packet as IPv6-in-UDP.
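The FOU setup is just a plain sit tunnel with a UDP wrapper; a minimal sketch (port and addresses invented):

```
# Tell the kernel to decapsulate UDP port 5555 payloads as IP protocol 41
# (IPv6-in-IPv4):
ip fou add port 5555 ipproto 41
# A sit tunnel whose 6in4 packets get wrapped in UDP on the way out:
ip link add tunf type sit local 192.0.2.1 remote 192.0.2.2 \
    encap fou encap-sport auto encap-dport 5555
```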
Teredo involves 2001:0::/32, Teredo servers and Teredo relays; connections over Teredo involve doing things like NAT traversal and asking the Teredo server to send a ping on your behalf to the target IP first. Teredo might use IPv6-in-UDP packets but that doesn't mean that every instance of IPv6-in-UDP is Teredo.
Since you're not using 2002::/16 it's not 6to4 either. It's 6rd, except tunnelling 6rd's 6in4 packets over UDP makes it incompatible with that. I was going to suggest "6rd-UDP" but https://tools.ietf.org/html/draft-lee-softwire-6rd-udp-02 exists/existed and it's different, so maybe something else.