I have used tinc [0] for this scenario successfully for 15 years. It not only supports full mesh but automatic full mesh. It will use UDP for the data stream when possible, supports RSA and Ed25519, and supports transporting either IP or Ethernet frames.
I used the mode where it supports Ethernet frames to merge the VLANs of two datacenters across the WAN (with some additional ebtables to prevent some kinds of frames) to add the ability to migrate systems from one datacenter to another during a partial outage.
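For reference, the switch-mode setup is only a few lines per node. A minimal sketch (the netname "dcmesh", node names, and addresses are all made up, and the ebtables rule is just one example of the kind of filtering I mean):

    # /etc/tinc/dcmesh/tinc.conf on node dc1
    Name = dc1
    Mode = switch        # transport Ethernet frames instead of IP packets
    ConnectTo = dc2      # tinc discovers the rest of the mesh from here

    # /etc/tinc/dcmesh/hosts/dc2 ("tincd -n dcmesh -K" appends the key block)
    Address = dc2.example.com
    Port = 655

    # example ebtables guard: keep DHCP replies from crossing the WAN
    # (assumes the tinc interface is named after the netname)
    ebtables -A FORWARD -i dcmesh -p IPv4 --ip-protocol udp \
        --ip-destination-port 68 -j DROP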
Is tinc okay security-wise? IIRC, last time I looked the older crypto was really iffy-looking, and the newer crypto (Ed25519 rocks) was only in the dev/unstable version.
There were some issues in 1.0 that are fixed in 1.1, whose protocol is not yet finalized although the beta releases are stable. The lack of a final release is annoying, since upgrades are not guaranteed to work with older clients until the protocol is finalized.
I was curious about that too. I know there were some WireGuard/OpenVPN benchmarks a while back. It'd be interesting to see these technologies compared for speed.
The published WireGuard results are questionable: the reported 1011 Mbps on a documented 1-Gbps NIC is impossible even before considering protocol/framing overhead, which for WireGuard is within 2 bytes of IPsec ESP using AES-GCM-128 with the standard nonce.
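Rough per-packet numbers behind that overhead claim (my own arithmetic, assuming IPv4 and UDP-encapsulated ESP as is typical through NAT):

    WireGuard:  IP(20) + UDP(8) + type/reserved(4) + receiver index(4)
                + counter(8) + Poly1305 tag(16)                 = 60 bytes
    ESP (GCM):  IP(20) + NAT-T UDP(8) + SPI(4) + sequence(4)
                + IV(8) + pad length/next header(2) + ICV(16)   = 62 bytes

(ESP additionally pads the payload to a 4-byte boundary, so the exact difference varies with packet size.)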
Yeah, ZeroTier is awesome! Works great on every platform, simple authentication scheme, and it's always connected. I use it for access to remote servers (have the zt subnet set to bypass firewall for accessing various debug servers) for a nonprofit project and for all kinds of personal uses as well. The free hosted version has up to 100 devices on a network and you can self host as well.
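For anyone curious how little there is to it, the client side is basically this (the network ID below is a placeholder; you still have to authorize the member at my.zerotier.com or on your self-hosted controller):

    zerotier-cli join 8badf00d12345678    # the 16-hex-digit network id
    zerotier-cli listnetworks             # check status / assigned address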
Is it possible to use cjdns for that kind of thing? I'm genuinely asking as I don't understand it.
Once I tried to set up a VPN using some odd Windows software (OpenVPN, maybe?) and the results were disastrous. I didn't understand any of the jargon the program used, and I don't think I grasped its main use case (it most certainly wasn't what I was trying to do, which was creating a local subnet between two computers or two LANs).
Then some months later I tried ZeroTier and was able to understand everything, it seemed a perfect fit.
But still, people call ZeroTier a VPN. So why is it so different? And why does it use such different jargon?
I think ZeroTier is meant to be easy, whereas OpenVPN is versatile and can be configured in many ways.
One use case of a VPN is to connect two local networks together. Another is to have your traffic appear to be coming from a different geographical area.
In some cases, such as commercial VPN services, you wouldn't want the local networks to be able to talk to each other, for security reasons.
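To make that concrete in WireGuard syntax (just as an illustration; keys and endpoints elided), the two use cases differ mainly in what you route into the tunnel:

    [Peer]   # connect two LANs: only the remote subnet uses the tunnel
    AllowedIPs = 192.168.2.0/24

    [Peer]   # commercial-VPN style: send all traffic out via the peer
    AllowedIPs = 0.0.0.0/0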
I ran cjdns for 5 years but recently switched off it. For one, it seemed to generate too much traffic over my limited outbound link when things should have been mostly idle. Also, the commit logs were a little unnerving for code I'm trusting with my network security.
Also, it's IPv6-only, which took some fussing. Performance was fine, even on a Raspberry Pi. Re-establishing links was slow and sometimes required restarting the daemon. Daemons would occasionally get wedged. Log output is unconventional and needs a special reader tool (a la adb for Android). Occasionally routes wouldn't be chosen correctly (two home computers would route via an external cloud box), which I'd fix by restarting the right daemons.
I laboriously switched to OpenVPN (two networks) and haven't worked out all the routing and hot-switching for my roaming phone+laptop yet. Now I'm considering Vita, tinc, ZeroTier, or WireGuard. Probably I'll try ZeroTier first to see if it works out of the box, then try WireGuard if I'm going to have to do a lot of configuration anyway, since WG seems to be the most unix-toolbox, do-one-thing of all of them.
Check out Yggdrasil - https://yggdrasil-network.github.io/ - we've tried very hard to solve the problems that cjdns has, seem to be much more reliable in real world conditions and we send/receive much less idle traffic to do it. We also have Wireguard-like crypto-key routing for both IPv4 and IPv6. (I am one of the developers.)
Yes, some machines on different networks in my house plus a roaming laptop all got to talk to each other without hassle. They got random-looking ipv6 addresses, so I don't know if 'subnet' is the right word here.
Before, I had to act differently on the laptop based on where I was, and the raspi nodes on my guest wifi couldn't reach the influxdb server that was not exposed to the wifi net.
IPsec is pretty much universal in networking hardware and cloud provider networks nowadays. There's a better chance it'll work for you if you can't or don't want to control both ends of the connection.
Also, some software environments have better support for IPsec than Wireguard; a glance at the Algo docs (https://github.com/trailofbits/algo) suggests that Windows and OpenWRT are both in this category today.
FWIW, I work for Google, I haven't configured IPsec in forever, and I'll probably reach for Algo first the next time I think I need IPsec; I don't think I have enough endpoints in my home network to need hardware offloading :)
The last time I configured IPsec it was so horrible, really, truly horrible, that I will never touch it again with a ten-foot pole. Not only was the software hard to configure, it was also hard to find working (current) configuration examples, let alone secure ones. It never felt right after setting it up and I did not want to spend any more time on it; WireGuard has been a blessing in that respect.
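For contrast, here is roughly everything a working WireGuard endpoint needs (a sketch; the keys, addresses, and endpoint are placeholders):

    # /etc/wireguard/wg0.conf -- bring up with: wg-quick up wg0
    [Interface]
    PrivateKey = <this host's private key>
    Address = 10.9.0.2/24
    ListenPort = 51820

    [Peer]
    PublicKey = <the other side's public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.9.0.0/24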
Fair enough. My answer was mostly that it's far from the first time I see this "Why bother developing X, Y exists" argument, and it's rarely a good one. Alternatives drive innovation. Relying too much on a single solution can be dangerous.
Yes, but security requires near perfection to maintain its guarantees. It is very difficult to have both perfection and diversity, because resources get spread thin.
When you see that "argument" made in HN comments, I do not think it is for the reasons others in this thread are suggesting.
These commenters are potential or existing users of X, not the authors of X. The people writing X would never try to argue that people should not write Y.
As for the psychology that drives these sort of statements from HN commenters, that is left as an exercise for the reader. One theory is that some people do not like to make choices. They would rather be told what to do. Alternatives may mean choices must be made.
In this case, Snabb discloses that Vita was funded by NLnet.
NLnet once funded an alternative DNS library and various programs (nsd, drill, unbound, etc.) that I would bet many former BIND users are now happily using. OpenBSD made the switch years ago and NetBSD is making the transition as well.
I think these "Why not use X" comments are a sign of users who dislike decision-making and want to be told what to do. I would bet some commenter was asking "Why not use IPsec?" when Wireguard was being introduced.
WireGuard is of course very Linux-specific. As of today, I still cannot use it reliably on BSD. That means that in order to experiment with WireGuard, 3xblah's router has to run Linux rather than BSD, 3xblah's preferred router OS. I am not anti-Linux, but that choice being made for me is significant.
I interpret the configuration choices that GNU/Linux distributions make as being told what to do. With BSD, NetBSD for example, programs that interact with the network are generally off by default. It is up to the user to decide which ones to start. IME, this is different from the Linux distros where programs are pre-configured to start without any input from the user.
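Concretely, on NetBSD a daemon only runs if you opt in through /etc/rc.conf, e.g.:

    # /etc/rc.conf -- everything defaults to =NO in /etc/defaults/rc.conf
    sshd=YES
    ntpd=YES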
Alternatives that work on a variety of OS are helpful for some users.
I think the "Why bother..." questions come from users who cannot be bothered with decision-making. More alternatives may mean more decision-making.
The reaction of these users might be "Why bother developing..." in which case they are upset that now with the arrival of new software they feel there is a choice to be made, or it could be "Why should I switch to..." which signifies they want someone else to make the decision for them.
I can't really agree with you. I understand the premise, and I certainly agree that some users genuinely don't like choice. But I think a much more reasonable explanation of that attitude fits the following two points.
1. "Why not use [X]" is a perfectly valid question for gaining insight into the merits of new tech. If I have problems that I've solved with [X] and you're now implying that I should use [Y], I want to know what makes [Y] different or better than [X] in your opinion. I want to know what problems you're solving, so I can compare them to mine. I want to know what benefits you're getting that I may not have considered. That's a totally valid approach to understanding a new product.
2. Assuming you aren't solving new problems with [Y] and you're really just directly competing with [X], and I already use [X]... I may be inclined to think it's a waste of effort because I won't get any benefit from your work (I'm already using [X]!). Worse, you could have been making [X] better instead of just competing with it!
Of the two, I think both play a role in the mindset of people making that argument. I'm not very sympathetic to the second, but the first is fine.
"... and you're now impying that I should use [Y]..."
You perceive someone is "implying that [you] should use" [Y].
It is as if you believe the mere publication of software is somehow didactic.
As if the author by the mere act of writing and sharing a program is telling you what to do.
If the question in 1. were phrased as something like "How does Y compare with X" then that is not what I am addressing.
I am addressing this idea that someone (who?) is "implying" that you should use Y. What if that was not actually the case and it was just your interpretation?
What if the authors are not telling you what software to use?
What if it was simply a case of a person or group writing some software, e.g., maybe to scratch their own itch, and then publishing it in case others may want to use it?
Keep in mind I am now referring to the general case, e.g., each program source published on Github, not Vita.
It would be interesting if users, without any financial contribution, could tell software authors what programs to write or refrain from writing, but that is not what I see when I look at the large amount of software published on the internet.
If the authors of Vita were getting paid to write it, then I doubt they would view it as "wasted effort".
Dude, you are so incredibly far off in strawman land right now I can't even...
You got stuck on the word "imply" and just couldn't come back.
Even assuming they're just sharing their work with the world entirely out of generosity (and they're not; it's sitting in a company-owned repo and the tagline is literally "Software Bureau. Hire us to work on code."), then yes, I'd still say publicly espousing the merits of your product is a pretty strong implication that I should use it. That's also, I expect, why you shared it.
Lots of vying alternatives result in development effort being spread out, with no single solution getting all the features and clean-up. Plus, "more" does not equal "better" when it comes to security development; not that many people can write secure apps well.
No, but if it has more lines of code (not tests) than a competing product, then it is more complex, and has a higher probability of defects.
In security, defects have serious consequences, so it is in everyone's best interest to have the lowest possible complexity, much more so than with other types of software. The only "stricter" category would be software related to the operation of equipment whose failure could cause direct physical harm.
Based on empirical observation, the number of bugs grows with lines of code on average. There's also a point where something is too big for the human mind, whether the developer's or a reviewer's, to understand in its entirety. To say something is secure, you have to understand everything it will and won't do. Those analyses grow exponentially with size due to input/output ranges and combinations of paths (i.e., combinatorial explosion). So, smaller is better; ideally small like WireGuard, so that the most thorough analysis is possible.
Let's look at your counterexample, SQLite. It's smaller than most databases, as I advise. Following the other rule, it was treated as untrustworthy by default, with the developers adding piles of tests to uncover bugs, increase confidence, and make changes with less breakage. Nonetheless, the CompSci papers I read on static analysis and fuzzing often test SQLite and find bugs. Even such a well-tested application still had plenty of bugs over time due to its complexity, most of which might be intrinsic to the kind of features they're developing.
Back to VPNs: you want to know the software will maintain security policy (especially correct information flows) under all inputs in all states, normal and failure. And if no progress can be made, you want it to fail safe. So making it six times larger without need is definitely worse than not making it six times larger. The larger product will, based on empirical data like SQLite's, have more bugs, with more code injections or data leaks following from them. I advise a minimal, careful, rigorously-evaluated implementation of a formally-verified protocol. WireGuard is closest to that right now.
Tinc could still make sense as a control plane for the WireGuard VPN though. There have been talks about WireGuard as a backend for tinc [0], hope that sees some progress.
Sadly, using WireGuard would come with some notable drawbacks, since its protocol isn't as flexible as tinc's.
First, while the control plane could stay on TCP (it's low-traffic), the data plane would become UDP-only, leading to situations where the control plane works but the data plane does not. tinc currently starts the data plane over TCP and migrates it to UDP once it finds that UDP works, migrating back if it later discovers it has stopped working.
Second, WireGuard only transports IP packets, while tinc can transport Ethernet frames or IP packets depending on the mode. The Ethernet mode is useful for, for example, sending IEEE 802.1Q VLAN-tagged traffic over the VPN interface. This use case could be migrated to VXLAN, but that would break the existing tinc contract with its users.
Third, WireGuard does not support RSA keys, which are the primary mechanism in tinc 1.0. Switching would either be a breaking upgrade for all users or require a long migration period in which RSA keys are replaced with WireGuard-compatible ones, with both key types still supported while WireGuard is not yet in use.
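On the second point, the VXLAN workaround would look roughly like this (a sketch, assuming an existing wg0 tunnel between 10.9.0.1 and 10.9.0.2):

    # on one side; mirror with local/remote swapped on the other
    ip link add vxlan100 type vxlan id 100 \
        local 10.9.0.1 remote 10.9.0.2 dstport 4789 dev wg0
    ip link set vxlan100 up
    # then bridge vxlan100 with the local VLAN interface to carry L2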
ZeroTier is great, but hosting the server is very complicated (I'm not going to stop using it now, since it's so easy and is already set, but it's good to know of alternatives).
It's based on Snabb, a user-space networking platform, so it'll need direct hardware access and supports only a few specific NICs. But in exchange for that, it should be really fast.
A static symmetric key is one of the ways to authenticate an IPsec tunnel. IPsec also supports certificates for authentication, or fully unauthenticated (but still encrypted) connections.
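With strongSwan's classic config, for instance, the static-key variant is one line in ipsec.secrets (addresses and the key are placeholders), plus authby=secret on the conn in ipsec.conf:

    # /etc/ipsec.secrets
    203.0.113.1 203.0.113.2 : PSK "use-a-long-random-string-here"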