+1 for tinc. I've used it for years in VPS providers and from home to VPS to cloak DNS from the ISP eyes and tampering.
It's not as fast as strongswan or wireguard, but it has dynamic mesh routing. If one of my nodes is down, I route through the others automagically, all in user space without having to enable forwarding on any nodes. This is handy when backbone providers are having issues.
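For anyone curious what that looks like in practice, here's a minimal sketch (the network name, node names, hostnames, and subnets are all placeholders): each node lists one or two ConnectTo bootstrap peers, and tinc learns the rest of the mesh and reroutes on its own.

```
# /etc/tinc/mynet/tinc.conf on node "alpha"
Name = alpha
Mode = router
ConnectTo = beta
ConnectTo = gamma

# /etc/tinc/mynet/hosts/alpha (this file gets exchanged with the other nodes)
Address = alpha.example.com
Subnet = 10.10.0.1/32
```

Each node only needs a reachable path to some member of the mesh; the metadata connections handle route discovery from there.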
In the UK residential ISPs are required by law to store logs of your browsing data, and to make it available to ~50 government departments with no warrant or any other kind of oversight required.
> In the UK residential ISPs are required by law to store logs of your browsing data, and to make it available to ~50 government departments with no warrant or any other kind of oversight required.
Super scary. It would be cool if someone currently working at one of those 50+ government agencies would make an AMA, maybe using a throwaway account.
Could you be more specific as to where the poster is going wrong? Is it merely a correction that it's the domains stored and not the individual pages? (not that that's possible over SSL anyway). Or is there more to what you say has been misrepresented?
"required communication service providers (CSPs) to retain UK internet users' "Internet connection records" – which websites were visited but not the particular pages and not the full browsing history – for one year;[41]"
>are vps providers somehow more trustworthy than ISPs?
Random small providers from lowendtalk or whatever may not be, but yeah, the vast majority of hosting providers will be far more trustworthy than any residential ISP.
However, life tends to be much easier if you avoid VPS providers and just get a cheap dedicated server from somebody like OVH instead.
1) Incentives: residential ISPs obviously have a far bigger incentive to try to monetize your traffic.
2) No lock-in for hosting products, a way more competitive hosting market, and hosting companies have an incentive to provide better service than residential ISPs do.
3) Residential ISPs tend to have a far bigger attack surface and less-trained staff. I've hacked many of the world's biggest ISPs and hosting companies, and the ISPs were always running decade-old Solaris.
One reason is that you and your ISP are almost certainly under the same jurisdiction. So your ISP is more likely subject to coercion by your government, compared with VPS providers in other jurisdictions. Further, you can choose VPS providers in jurisdictions where such coercion will not likely be successful.
Absolutely. Depending on your use case, you can queue downloads to your VPS nodes, then prune out things you don't need and even compress data before pulling it to your home.
I agree, VPS providers don't inherently provide any added security or privacy. There's not much stopping these providers from jumping at an offer to hand over your data for a pile of cash. If you want privacy, use Tor. If you want to beef up security, check the SSL certificate and URL before entering creds. If you need to evade corporate firewall rules, a VPN can do the trick.
For my use case, yes. My DNS requests are cached/forwarded on my VPS nodes, so my ISP can neither see nor tamper with them. I do something unorthodox and set a min-TTL both on my home Linux router and on my VPS resolvers using Unbound DNS.
I should add that my ISP used to mess with my traffic ages ago; then laws changed to prevent that. Those laws were recently changed again, allowing ISPs to start mucking about once more. Maybe they won't, but I'll stick with my current traffic model.
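For reference, the min-TTL trick is a one-liner in Unbound (values are illustrative; raising cached TTLs deliberately trades DNS freshness for fewer upstream queries):

```
# unbound.conf: floor cached TTLs at one hour, cap them at a day
server:
    cache-min-ttl: 3600
    cache-max-ttl: 86400
```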
Are you also doing UDP encapsulation and any additional NATs? I'm using transport mode with strongSwan. I get about 3% overhead with strongSwan and about 5% with tinc, but tinc's throughput caps out much sooner for me than strongSwan's when dealing with high RTT. Possibly it's the tun driver in CentOS causing my issues.
Interesting. I usually just SSH tunnel (I liked the idea of https://github.com/apenwarr/sshuttle/), but I like the idea of making things a bit easier on myself, so I'm gonna have to check out tinc.
Another interesting alternative to tinc is ZeroTier ( https://www.zerotier.com/ ). I am using it to remotely play Steam games over the Internet, and it is surprisingly easy to set up, probably due to the existence of a centralized hub.
Also softether is extremely underrated for what it can do.
Most VPN tunnels use a single connection; SoftEther can use 16. So for overseas links, where you tend to see slow single-connection throughput, this can be a game changer. It's also backed by a great university.
Just putting these two out there.
Meshbird project (golang) is also very interesting but not production ready.
SoftEther is an impressive product, but I can't tell if it's ever received a security audit.
Also, I'm a weirdo and run a fair number of services at home with AD authentication. SoftEther has AD support native in the Windows server, which is great, but as far as I can tell there's no way to add two-factor auth.
Like other people answered, I am connecting from my MacBook to my Windows gaming PC at home and playing games on it. This feature is called 'Steam In-Home Streaming' and it works only over a local network (or a VPN, in my case). 30Mbit is sufficient to play remotely at full HD.
It doesn't seem like exactly what OP is describing, but people buy cheap region-locked licenses to arbitrage different geographic pricing (i.e., it may be cheaper to buy a steam key in eastern Europe than in the US), and then activate the games over a VPN in the original region.
Most big companies will check the PayPal recipient address for a country match (yes, even for virtual addresses), or do a CC AVS address match for country, or a card BIN lookup. Spotify, Netflix, etc. This is often for compliance, for example VAT collection in the EU, or license rights that vary by geography. IP geolocation is meh; I'd even prioritize the Accept-Language locale over it.
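To make the precedence concrete, here's a purely illustrative sketch: payment-instrument country first, then the Accept-Language locale, and only then IP geolocation. All names and signals here are hypothetical, not any provider's real API.

```python
# Hypothetical precedence of geo signals, strongest to weakest.
def infer_billing_country(card_bin_country=None, paypal_country=None,
                          accept_language=None, ip_geo_country=None):
    if card_bin_country:        # BIN lookup / AVS country match
        return card_bin_country
    if paypal_country:          # PayPal recipient address country
        return paypal_country
    if accept_language:         # e.g. "en-GB,en;q=0.9" -> "GB"
        first = accept_language.split(",")[0].split(";")[0].strip()
        if "-" in first:
            return first.split("-")[1].upper()
    return ip_geo_country       # weakest signal, used last

print(infer_billing_country(accept_language="en-GB,en;q=0.9",
                            ip_geo_country="US"))  # GB
```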
It is used in the VoIP market. Always behind NAT, yes. We found zerotier much easier to manage, for example it's so quick for support to join/leave networks.
Another use case is our Docker Swarm which runs completely on zerotier, most nodes are on premise but some are in the cloud to make the system publicly accessible.
ZeroTier is different from every other VPN. I would say a large additional job of ZeroTier is facilitating direct connections between two endpoints, rather than routing all traffic through a server.
If you have two computers behind NAT, ZeroTier will help you punch through your NAT and let the computers talk to each other directly. It does this extremely well, and I haven't seen anything like it.
Cool thing is, it can do everything that a normal VPN can. When the other commenters talk about ZeroTier hosting the 'server', they're talking about configs, etc. Traffic doesn't usually go through their servers, only in rare cases where your ISP is really hell-bent on preventing you from UDP hole punching.
People always forget about ZeroTier's network flow rules. In a little text file/field, you have a full-on software-defined networking appliance, with filters on any kind of Layer 3-4 information, and a capability model. You could regulate a medium corporate network in about 50 lines, giving people capabilities as required or segmenting areas with tags. And it would work exactly the same whether laptops were inside the building or not. And you can do mad stuff like 'copy all TCP traffic with dport X to some machine running tcpdump'. The whole thing is a dream. I love it.
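For a flavor of that rules language, here's a rough sketch (the monitoring-node address is a placeholder, and the rules engine manual has the exact syntax, so treat this as illustrative):

```
# allow only IPv4 / ARP / IPv6 ethertypes on the network
drop
  not ethertype ipv4
  and not ethertype arp
  and not ethertype ipv6
;

# mirror all TCP traffic with dport 443 to a monitoring node
tee -1 deadbeef01 ipprotocol tcp and dport 443;

accept;
```

Rules are evaluated top to bottom and pushed from the controller, so every member enforces the same policy wherever it is.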
I personally use it as a replacement for AWS VPN Gateway using a ZT managed route and a couple of VPC route table entries. I detail that setup in my ZeroTier Terraform plugin: https://github.com/cormacrelf/terraform-provider-zerotier
AFAIK you can run your own ZeroTier controller for free. It's just not documented too well and also it's missing the web UI for managing your networks.
I ended up writing a CLI to do it that's relatively full-featured. At some point, I intend to move functionality to a shared library between a CLI and a Web frontend, but for now, the CLI works tremendously well for my use cases:
It's $100/month for licensing our web interface for controller management. You're free to set up your own network controller and write your interface for managing it :)
I know this is old, but there's one thing that is a big advantage for tinc, and it's that it supports TCP P2P
If you're behind a restrictive firewall, ZeroTier won't be able to punch through it and will fall back to forwarding packets (encrypted) through ZeroTier's servers, even if a TCP connection could have been made to the other client (because their firewall supports UPnP or has a port forwarded), which would create a tunnel directly between them.
Note that UDP is generally better for encapsulating TCP, but P2P TCP is better than TCP through an external server with limited bandwidth.
I'm a ZeroTier user, though, and I've only encountered this problem once. Still, it's nice to know it'll always work well.
I love tinc for another reason too. I've been using it for many, many years, and the one feature of the re-routing that always amazes me is:
I'm on the laptop, connected to Gb ethernet--- I do work on remote servers (via tinc).
I pull the cable, the lappy's network reconfigure to wifi, tinc re-connects and...
my connections to the remote servers have not skipped a beat. As far as x2go, ssh, or VNC are concerned, the 'ethernet' they're using is still up; I might have lost a couple of packets, but that's it.
Have been using tinc to create a private network between a few servers at a host that has its servers in the same datacenter but offers only public IPs, no private networking.
Transfers of quite large files, as well as MySQL/Redis connections, work amazingly well. The CPU gets loaded quite a bit, but overall it is fast (for such a setup) and easy to configure.
This is as good a time as any to point people in the direction of sshuttle, which is a very simple and elegant VPN that can use any SSH server as an endpoint.
* No configuration required for endpoints - any SSH server that you have a login on will work.
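Typical invocations look like this (the hostname is a placeholder; sshuttle only needs a normal login on the remote side):

```shell
# route one subnet's TCP traffic (plus DNS) through the SSH host
sshuttle --dns -r user@gateway.example.com 10.0.0.0/8

# or route everything
sshuttle -r user@gateway.example.com 0/0
```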
"sshuttle does not tunnel UDP out-of-the-box. It only works on Linux ..."
We (rsync.net) sponsored work to get UDP functionality working with sshuttle on FreeBSD. It is my understanding that it is committed to FreeBSD ... you might need to wait for 11.2 ?
From the docs, it appears to redirect new TCP sessions through an established SSH session, so performance would be on par with a simpler SSH port forward.
The main feature that sets tinc apart from the competition is automatic and reliable upgrading of proxied connections to direct connections through NAT hole punching.
This is the number one reason that I use tinc, and something that competitors (including WireGuard as promoted at the top of this thread) don't have without additional work.
I currently run OpenVPN on a $5 Raspberry Pi, powered off the computer's USB port. Works great. I haven't given tinc or WireGuard a try but will experiment. I see a lot of suggestions for SSH here; it's best to use a VPN.
All incoming TCP traffic to my VPN host is blocked, and it doesn't respond to ICMP, so it pretty much looks like a dead host unless you know there's a VPN there. To connect, you need both the key and a password, so it's quite secure. I can then SSH to my internal network. A nice way to access my home network without exposing it directly to the net.
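A rough sketch of that "dead host" posture with iptables (the UDP port is a placeholder for whatever OpenVPN listens on; adapt before use):

```shell
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p udp --dport 1194 -j ACCEPT             # the VPN itself
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP  # no ping replies
iptables -A INPUT -j DROP                                   # everything else dies
```

With no TCP listeners reachable and pings dropped, a scanner sees nothing but the one UDP port, which itself only answers to a valid key.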
Do you have good config files for this? I've tried all the prebuilt stuff and I have serious issues with it. Would love to have a cheap VPN termination point at my office and at home.
Tinc is pretty amazing. My only beef with it is that once a node is connected to the network, if you ever wish to revoke access, you have to update all nodes on the network to ensure the revoked node is now gone.
Right -- the trouble is tinc only halfway adheres to this. It will happily distribute a key. This means if you delete a node from 3/4 of your network, eventually that node's key is redistributed across the network.
Yeah, there are a number of out of band mechanisms available to manage keys, but the issue still remains that you have to rely on an out of band mechanism to revoke access. If nodes reside in multiple administrative zones the situation gets even more awkward.
This is as good a time as any to point people in the direction of WireGuard, Jason Donenfeld's modernized VPN:
* It inherits strong, modern crypto from Trevor Perrin's Noise Protocol Framework.
* It's designed to be extremely simple to configure for the common case.
* It has a microscopic trusted code base --- 4,000-5,000 lines compared with hundreds of thousands for strongSwan --- and the protocol was specifically designed to enable that; for instance, the protocol makes specific allowances to enable implementations without any dynamic allocation.
* It's probably the fastest available VPN.
You can only use it on Linux at the moment, but that will change this year.
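To give a sense of how simple the common case is, here's a minimal server-side sketch (keys, addresses, and port are placeholders):

```
# /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

`wg-quick up wg0` brings the interface up and installs the routes; that's the whole setup on the server side.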
WireGuard is so good that people at my company spin up Vagrant images on their MacBooks to use it. Check it out:
Benjamin Dowling and Kenny Paterson (a name you might be familiar with) just completed and published a formal analysis of WireGuard; the results are complicated (the eCK model WireGuard was proven under doesn't contemplate separate key exchange and data transport phases) but here's a TLDR from Kenny:
I also want to point out that Manuel Schoelling has been working on and released a p2p [1] distributed hash table [2] built on top of WireGuard to create a VPN mesh. He talked about this recently [3] at FOSDEM.
Interesting! I really like this from their description: "Compared to behemoths like *Swan/IPsec or OpenVPN/OpenSSL, in which auditing the gigantic codebases is an overwhelming task even for large teams of security experts, WireGuard is meant to be comprehensively reviewable by single individuals."
It's so rare and so pleasant to see people treating small code base size and high comprehensibility as a major benefit, rather than a nice-to-have or a negative.
When looking at it from a business perspective, comprehensibility only makes sense to a certain degree. It often takes a lot of time to get to a certain degree of quality, which could otherwise be used elsewhere.
I think that part fits with the "nice to have", and the "small code base size" is sometimes viewed as either a nice to have or a negative. It's reasonable to abbreviate it this way instead of saying "people viewing a high comprehensibility as a nice-to-have or small code base size as a nice-to-have or a negative".
Well, they never phrase it that way. Instead they see its opposites, complexity and mystery, as positives. E.g., the complicated, innovative new architecture that turns out to be a boat anchor. Or the people who, intentionally or not, write themselves 200kloc of job security and treat the resultant incomprehensibility as a byproduct of their genius. Rather than their failure to apply that genius to making things clear and approachable to others.
Wireguard sounds intriguing, but it seems to lack one of tinc's main features, which is presenting a fully-reachable network regardless of the underlying connectivity.
I really like being able to roam with my laptop(s) and have my routing table look exactly the same, as opposed to having a "home network" and then having to explicitly "connect to the vpn" when away from it.
With a quick reading, it looks like the Wireguard protocol wouldn't preclude adding this functionality via userspace forwarding daemons on the better-connected hosts. It's just not there right now, especially in the just-works package tinc provides.
The amazing thing about WireGuard is its ease of use once you understand its concepts. Just read the 'Cryptokey Routing' section on the homepage and you're good to go. Also see this page[1] if you want a more sophisticated setup (like remote SSH to your computer behind NAT without affecting your normal browsing).
If anyone wants help with their setup, ask here[2]. I can help anyone out with questions. I didn't create the sub, but should be okay.
I was going to create it, but saw that someone else just did. You can try asking the person for mod access.
I don't mind getting help on IRC, but often there are problems that many people run into which can be solved by looking at other people's answers. Reddit is great for that, while StackOverflow hates being tech support and closes such questions.
I understand if you want people to use the mailing lists for normal tech support, but I wonder if WireGuard's increasing adoption will bring a lot more people than everyone would like there.
Also, the average person would find mailing lists daunting, and tends not to love IRC.
WireGuard is likely going to be merged into the Linux kernel[1]. And it's already supported by the latest systemd, 237[2].
If you're comfortable with beta/dev software versions, or you use Arch Linux/Debian Sid/another distro with the latest packages, there's no reason not to rely on WireGuard today. Otherwise, wait until it arrives on your system, built into the mainline kernel. I would also like it to be audited by an independent third party.
It's reasonable to want to see an audit report but bear in mind:
1. There aren't many audit firms qualified to do that audit, and only a subset of the people at most of the qualified firms are themselves qualified.
2. As a result, none of WireGuard's competition has been meaningfully audited --- all of them have had audits, but the projects are pretty much treated as wells that we can keep going back to for more bugs.
The only exception to that rule is probably OpenSSH, which despite the very complex code base has received pretty significant coverage --- not so much from formal audits (it's had some, but they're the same kind as I just described above) but from a decade of close scrutiny.
Against the desire for an audit, I'd also bank:
- The author is a Linux kernel vuln researcher
- The codebase is deliberately tiny
- The protocol was streamlined specifically to make it possible to implement as simply as it was
> 1. There aren't many audit firms qualified to do that audit, and only a subset of the people at most of the qualified firms are themselves qualified.
I know of one (and we're hosting the dude who wrote the Wireguard go implementation this summer (hey Mathias))
I'd rely on this code before I relied on literally any other VPN codebase. Jason should change the wording here, which I think might be old. Like I said: the protocol just got a set of formal proofs, in addition to the Tamarin prover work Jason did for it.
Just because the protocol has a formal set of proofs doesn't mean it's production ready. The very fact that the only releases are snapshots, not eligible for CVEs, makes me wary of using this outside a testing environment.
But how does a set of proofs let you know that it's not going to cause a kernel panic or stop forwarding traffic in edge-cases that aren't well-tested (e.g. lossy and/or high-latency links).
If you have nation-states for enemies, probably not. For most other uses, I'd say give it a try (I am typing this over a Wireguard connection right now, to a Streisand[1] vm at DO).
For me, it'd be almost the reverse: state-level adversaries almost certainly have remotes for strongSwan and OpenVPN, but due to its design, it's unlikely that they do for WireGuard.
Very interesting, thanks. A question: how does it run on embedded systems? I was thinking of two very small boards, such as a Nano/Orange Pi or similar, implementing a portable VPN box that can be used both to connect PCs/tablets etc. and/or to offer encrypted VoIP communications: plug one here, the other one there, give them each other's IP addresses (using encrypted mail?), and you get a secure pipe to talk or send data through. (I assume the keys are hardcoded.)
Finding each other's IP could be achieved by using an external trusted email server; each box could have a button to send a mail with an id:ip pair (or a daemon doing that every time it changes), so that the other box(es) know what is connected from where and keep the connection alive. Doable?
I'm not sure how complicated (or how stupid) this question is, but what's the difference in security between using this versus an SSH tunnel as a proxy?
* The SSH codebase is much, much more complicated than WireGuard's (but it has a very strong track record at this point).
* The underlying SSH protocol dates back to the 1990s, is cryptographically inferior to WireGuard, and does not have an especially strong track record (its record is similar to that of TLS).
* SSH is opt-in secure for a selection of ports; WireGuard (really, any real VPN) is default secure for all traffic, which is why you use it.
Mostly, though, the reason you'd use a VPN instead of SSH is that VPNs are easier to use. The reason people use SSH instead of VPNs is that most VPNs are hard to set up. That's a big part of what WireGuard fixes.
Another stated reason is that SSH runs on TCP and running TCP apps on a TCP VPN is inefficient. It’s two layers of reliable delivery. It’s better to use UDP in the VPN which is what WireGuard does.
That's not the entire truth. TCP over TCP is not a good mix, but SSH tunnels are different.
SSH tunnels are not VPN tunnels. SSH will act as the endpoint for your TCP application and only tunnel the application payload over SSH (TCP). On the other side a new TCP session will be opened over which that payload is sent. So you never do TCP over TCP.
It's more of a proxy than a tunnel, and that avoids the real issue, which is nested congestion control. In addition, SSH should be able to do this with less overhead (in bytes) than a VPN (which must forward the IP and TCP headers intact), though I don't know how well SSH takes advantage of that.
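To make the distinction concrete, here's a minimal, self-contained sketch (plain Python sockets on loopback, not SSH itself) of what a port forward does: it terminates the client's TCP session locally and opens a fresh TCP session to the target, so only the payload crosses the hop and the two congestion-control loops never nest.

```python
# Minimal illustration: an SSH-port-forward-style relay. Two independent
# TCP sessions; the relay copies payload between them, never TCP-over-TCP.
import socket
import threading

def echo_server(port):
    # stand-in for the remote service; replies with the payload uppercased
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    def run():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024).upper())
        conn.close()
    threading.Thread(target=run, daemon=True).start()

def relay(listen_port, target_port):
    # the "ssh -L"-like hop
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    def run():
        client, _ = srv.accept()                  # terminate client's TCP here
        upstream = socket.create_connection(("127.0.0.1", target_port))
        upstream.sendall(client.recv(1024))       # fresh TCP session onward
        client.sendall(upstream.recv(1024))       # copy the reply back
        client.close(); upstream.close()
    threading.Thread(target=run, daemon=True).start()

echo_server(9901)
relay(9900, 9901)
c = socket.create_connection(("127.0.0.1", 9900))
c.sendall(b"hello via relay")
reply = c.recv(1024)
print(reply.decode())  # HELLO VIA RELAY
```

A VPN, by contrast, forwards the client's IP and TCP headers intact inside its own transport, which is why running one TCP stream inside another causes the nested-retransmission problem.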
SSH tunneling can outperform OpenVPN running in TCP or UDP mode.
By SSH tunneling, I assume you mean SSH port forwarding (-R/-L). Recent OpenSSH also has Tunnel (-w) which provides an IP-level VPN, in which case it does do TCP over TCP.
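For completeness, the tun mode looks roughly like this (addresses are placeholders; the server needs `PermitTunnel yes` in sshd_config, and both sides need root):

```shell
# creates a tun device on each end
ssh -w 0:0 root@server.example.com

# then assign addresses on each side:
ip addr add 10.1.1.1/30 dev tun0   # client
ip addr add 10.1.1.2/30 dev tun0   # server
```

Because the tun traffic rides inside SSH's TCP stream, this variant does suffer from TCP-over-TCP, unlike plain -L/-R forwards.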
Would you say this is true even if you configured openssh to only use the more modern options?
In recent years they've added a chacha20+poly1305 option as well as Curve25519 for key exchange and ed25519 for host and user authentication.
This would seem to bring it up to about an equivalent level, cryptographically speaking. In terms of application security it's definitely more complicated, but it's also one of the most proven pieces of software around, and much of that complexity is post-auth. The WireGuard site itself pretty clearly states that it hasn't seen much in the way of field testing, though it does look extremely promising. I'll definitely be keeping an eye on it once it's available on more platforms.
Yes. The primitives aren't where protocols tend to go wrong; it's the joinery that's the problem. WireGuard is Noise, which was designed with 20 years of hindsight into what breaks transport protocols.
But, unfortunately for me, step 0 for Android is to install a new kernel. Whilst other people have already done the hard work to build kernels for my device (Pixel XL), it would create a significant maintenance headache for me to use one of them. I rely on the device manufacturer's security team to release updates to keep my device safe (particularly as I use mobile banking apps on it). But installing a custom kernel would create a large hole in that, and I can't dedicate sufficient time to reviewing the code patches and building the kernel myself. (Of course, better programmers could do that faster, so the trade-off would be easier.)
Do you happen to know whether WireGuard support will be in the mainline kernel in future?
SSH supports a wider array of features while having effectively the same cost in implementation and use. The only difference is SSH doesn't support dual IP roaming and IP over UDP. Probably didn't need to start a whole new project when they could have added this to SSH, but then crypto nerds couldn't reinvent the wheel (and SSH probably wouldn't accept the patches).
And this baffles me:
"In the server configuration, each peer (a client) will be able to send packets to the network interface with a source IP matching his corresponding list of allowed IPs. For example, when a packet is received by the server from peer gN65BkIK..., after being decrypted and authenticated, if its source IP is 10.10.10.230, then it's allowed onto the interface; otherwise it's dropped."

"system administrators do not need complicated firewall extensions, such as in the case of IPSec, but rather they can simply match on 'is it from this IP? on this interface?', and be assured that it is a secure and authentic packet."
If this is on a dedicated tun/tap interface, shouldn't it only be transmitting secure and authentic packets anyway?? Why is IP traffic on an interface just assumed to be authentic information, if for example you aren't sure what put that packet on that interface? And you still need firewalls, so this duplicates configuration. This is basically an .rhosts file for a VPN.
If WireGuard is as easy to set up as SSH, why not use SSH? Because you want a VPN. But if you want a VPN, don't you want VPN features? WireGuard doesn't have VPN features. So you have to get a real VPN, or use SSH.
To put it another way: if you only want roaming IP and public keys, use WireGuard. For any other use case, you're going to get a real VPN or use SSH.
Certainly not. However, I see how you might think that: the code base and design is deliberately small and readable. It's very far from being a toy, but it aims to be as auditable as a toy would be.
>If this is on a dedicated tun/tap interface, shouldn't it only be transmitting secure and authentic packets anyway?
I was confused about this too, but it made sense when I looked at it another way. The network/IP address you put there is also added to the routing table, so traffic to that network is routed via the WireGuard interface to the specific endpoint, using the key associated with the destination address. While it might seem it doesn't have an important place on the receiving side (I believe it does, especially when multiple hosts share a key), I feel it vastly simplifies things: you don't have to worry about where a packet is getting routed if you look at the output of `wg` (provided you keep the ACL minimal).
When the config is used with wg-quick (which you can set up as a service with systemd), it adds the address to the routing tables automatically, so there's less work for you to do.
>If WireGuard is as easy to set up as SSH, why not use SSH
Because SSH can't do many of the things WireGuard can, and there's also speed. Especially since I believe it will be merged into the kernel; AFAIK Greg Kroah-Hartman is all for it[1], so I don't think it'll have any trouble.
Also, I'm interested in what you consider a 'real' VPN. What do you think this cannot do, compared to, say, OpenVPN?
I will never understand how people go through the trouble of adding cryptographic tunneling components to their network and manage to put "security" at the bottom of their desiderata.
You're right, there is no guarantee. Just that kernel folk aren't entirely opposed to the idea, or I think we'd have heard about it by now.
I think what convinced me at least was the argument that larger codebases like IPSec were merged into the kernel, so why would WireGuard have any trouble?
Of course, someone upstream might just decide to say no, and we'll have to live with the out-of-tree kernel module, which is just as fast, especially since it tends to reach line rate.
It looks to me like this is useful for these reasons:
When sending: a single interface may have many peers configured, so WG needs to be told which peer to tunnel a packet to, or it wouldn't know. It can be viewed as WG's internal routing table, in addition to the ordinary IP routing table.
When receiving: although the packet is proven to be from a configured peer, that peer may have used any IP as the source of the encapsulated traffic. Even if you trust that all your peers are who they should be, this is a potential security risk: a peer could spoof its source IP to imitate another peer. You could patch this hole with iptables rules, but this way is more convenient and has a better chance of being correct, since it's simpler to configure.
If you don't want any of this, I suppose you could just set up multiple WG interfaces instead, configuring only a single peer for each interface and setting AllowedIPs to 0.0.0.0/0. That should give you the same behavior as what I think you mean by a "real VPN".
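A client-side sketch of what this looks like (keys and endpoint are placeholders): AllowedIPs serves double duty as the outbound routing entry and the inbound source filter.

```
[Interface]
PrivateKey = <client-private-key>
Address = 10.10.10.230/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# route this subnet out via the peer; accept only these source IPs back in
AllowedIPs = 10.10.10.0/24
PersistentKeepalive = 25
```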
Wireguard is blazingly fast - faster than ssh tunnels or other vpn solutions. Because it's tiny, well designed and baked into the kernel, it can really fly.
My intended goal was to evade the GFW for a two-week work trip, after which I was deleting the droplet anyway. I didn't know which protocols would be effective against the GFW, and I had neither the time nor the capacity to set them up manually. Moreover, as the sibling mentioned, you can pick which ones you install.
I agree that minimising the attack surface is the right course of action, but on balance the increased risk of running several protocols didn't outweigh the convenience, given my purpose and the temporary nature of the setup.
I fell back to global roaming data if I needed to do anything involving logging in, which I avoided doing unless strictly necessary anyway (e.g.: no banking, etc.).
All this is moot, however, as I was citing it for the point at hand rather than suggesting it over a manually set-up install of WireGuard.
Can it do mesh routing in the same way as tinc? Otherwise it's a poor replacement. It was not immediately evident in the three or so minutes I took to look at the project's web page.
So, I think WireGuard can't, but it can be used as a part of the puzzle. I'm running a mesh using WG as the security layer, and then l2tp to provide a layer 2 on which to run batman-adv. I have a bunch of machines getting DHCP addresses from my home router that way, and (I assume) batman gives me good routing. Of course, cobbling together your own "secure" mesh probably isn't ideal, but it works surprisingly well.
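Very roughly, one link in such a setup could be plumbed like this (tunnel IDs, ports, addresses, and interface names are placeholders; the 10.0.0.x addresses are assumed to be the WireGuard tunnel endpoints):

```shell
ip l2tp add tunnel tunnel_id 1 peer_tunnel_id 1 encap udp \
    local 10.0.0.1 remote 10.0.0.2 udp_sport 5000 udp_dport 5000
ip l2tp add session tunnel_id 1 session_id 1 peer_session_id 1
batctl if add l2tpeth0          # hand the layer-2 link to batman-adv
ip link set up dev l2tpeth0
ip link set up dev bat0         # bat0 carries the meshed traffic
```

The l2tp session gives you an ethernet device over the layer-3 WireGuard tunnel, and batman-adv then meshes those devices across all the links.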
Mind providing more details on your setup here? I tried throwing together an overlay network using OSPF, but never really got it off the ground. I'd love to hear what you've got here!
I have SoftEther running at home, which is the open-source, free enterprise version (development driven by a Japanese university, from what I gather). It offers enterprise features and supports OpenVPN. This looks functionally rather poor compared to SoftEther...
OpenVPN is still going to be the best choice for some time. There's nothing else as well supported across platforms and it does everything most people want, including allowing connected clients to communicate directly. It doesn't have mesh support but that's probably a good thing in my experience.
If you need to securely connect servers that have only a public network, I suggest you give Weave Net a try. It's developed for Docker and also runs on Docker, but the private IPs can be exposed to the host machine, so you can also use it as a VPN between the hosts. It's super easy to set up and reasonably fast, since it uses ESP packets, which are encrypted at the kernel level.
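A two-host sketch, assuming only public IPs (the password and peer address are placeholders):

```shell
# on each host, pointing at the other's public IP
weave launch --password "$WEAVE_SECRET" 203.0.113.7
weave expose            # give the host itself an IP on the overlay
eval $(weave env)       # point the local docker client at weave
```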
Weave Cloud is the commercial service. It's a management platform for developers creating and operating Kubernetes-based applications. It helps you do CI/CD, observability, monitoring and networking/security[0]. It's across the whole 'developer experience' not just networking.
Weave Net is completely OSS and usable to create overlay networks between docker nodes or hosts. There's an extensive user guide [1] and project on Github [2].
You wouldn't need the commercial service for this sort of usage. Business users buy a subscription to get support for complex networking.
+1 for tinc, I can login to my server and see the other three computers, it's nice for ssh-ing or tunnelling.
With tinc, I could only keep a Windows Remote Desktop connection up for about 45 seconds before it lost the connection. I'm guessing OpenVPN might not have this issue. Tinc is way easier to configure than OpenVPN, though. I recommend it :)
In my experience, for a simple connection between two hosts, a port forwarded over SSH was significantly faster than a tinc VPN. Obviously an SSH port forward is not a network, but sometimes you don't need a network.
The thread you link to has one performance test, with 750MB for OpenVPN and 870MB for SSH: how is that a factor of 6 to 8? And that's just iperf/UDP traffic, which is hardly a good indicator of real-world performance.
There's similar software called PeerVPN that I use. It encrypts traffic as well, but I don't know about its cryptographic strength or implementation, so I don't rely on it: only HTTPS/already-encrypted data goes through the tunnels.
No citation, but IIRC the tinc 1.0 branch has little/no protection against replay attacks, at least in UDP mode.
I recall the 1.1 branch improving the protocol in this regard, though it's been a long time since I looked, and I can't vouch for its overall security. I'm surprised it hasn't been officially released yet; the branch was cut over a decade ago! (Also a little annoyed; I contributed improvements to tincctl back then, and they still haven't been released...)
Have you read the protocol and compared it with the techniques used to build any widely used cryptographic transport? I did.

Given that this is not a point-to-point protocol, you cannot just assume that the author used a stock protocol (TLS or IPsec), because there's no such thing. And unless there was an analysis that confirmed the protocol's strength, or the author is a recognized cryptographer, you cannot assume he did a good job.
> And unless there was an analysis that confirmed the protocol's strength or the author is a recognized cryptographer, you cannot assume he did a good job.
It was a few years ago, so I don't remember the details, but there were things like a lack of integrity protection for the control messages sent between nodes, or keys with a very long lifetime shared by all the nodes.
It’s all about your use case and threat model. For me, I am just using it to get around traffic shaping by my shitty ISP. Easy setup and speed are the only parts that matter to me. If the government wanted to sniff my traffic I’m sure they could find a way, but my ISP is too dumb to do it on their own which solves my problem.
Weren't the tinc developers caught adding NSA backdoors into it and getting paid to do it last year? I remember something like this remotely from the Snowden leaks.