Hacker News

The feature list reminded me of Tailscale so I went looking and found this on their website: https://www.netmaker.io/resources/tailscale-vs-zerotier

Their comparison chart at the bottom seems to indicate that the differentiating features between their product and Tailscale are that Tailscale can't be self-hosted (ignoring the existence of Headscale) and that its WireGuard support is limited. I believe the latter point refers to the default Tailscale configuration, which connects every node to every other node, whereas NetMaker allows different network topologies.

However, Tailscale ACLs should allow you to reconfigure the network into the shape you want, so I'm not sure that criticism still applies. Their claim that "data will pass through their relay (DERP) servers fairly regularly" also seems suspect: that's only the case on networks where direct UDP connectivity between clients can't be established even with STUN-assisted NAT traversal, which is rare in practice.
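For what it's worth, shaping the network that way in Tailscale is just a policy file. A minimal sketch (the tag names here are made up; Tailscale's ACL files are HuJSON, so comments are allowed):

```json
{
  "acls": [
    // laptops may reach servers on SSH and HTTPS only
    {"action": "accept", "src": ["tag:laptop"], "dst": ["tag:server:22,443"]},
    // admin hosts may reach everything
    {"action": "accept", "src": ["tag:admin"], "dst": ["*:*"]}
  ]
}
```

Anything not matched by a rule is denied by default, so this already gives you something other than the any-to-any mesh.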

The only advantages I can find are that NetMaker has a richer free plan and that it uses the WireGuard kernel module where possible. I'm not sure why they didn't lead with that.




1. thanks for linking that

2. the comparison:

- protocol: wireguard is now baseline

- speed: netmaker is faster because it uses kernel wg - this is not going to hold true for all system configurations, certainly doesn't for macos

- flexibility: feels like it does the same as tailscale, marketed slightly differently as they list common use-cases – egress and ingress gateways; network shaping with acls is also possible in ts; maybe someone will write up an unbiased comparison on self-hosting

- price: ts offer seems to be good enough for most users and their limits are "soft" anyway – i pay $45 a year because i want sustainable, not free

i'm looking for reasons to change and can't find any


> - speed: netmaker is faster because it uses kernel wg - this is not going to hold true for all system configurations, certainly doesn't for macos

Unless something changed super recently, tailscale (per their own claims) are faster than (linux) kernel wg: https://tailscale.com/blog/throughput-improvements/

  Surprisingly, we improved the performance of wireguard-go (running in userspace) enough to make it faster than WireGuard (running in the kernel) in the best conditions. But, this point of comparison likely won’t be long-lived: we expect the kernel can do similar things.


Huh, would be interesting to see some non-Tailscale benchmarks of this. Assuming the kernel implementation is actually optimized, shouldn't it be theoretically impossible for userland wg to beat it?


I ran the same benchmarks they listed here[0], and did some practical tests. As of a week after the article was written, Tailscale was faster than kernel WireGuard.

[0] https://tailscale.com/blog/more-throughput/


What’s your approach to routing the mesh? Static? BGP? Something else?


i don't route any traffic besides the odd exit-node, and tailscale takes care of that for me

the exit-node routing is for firewall circumvention - if i find myself unable to connect to git over ssh, i simply activate routing through the exit-node and continue working

the exit-node sits at home where i set the rules
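for the curious, flipping that on and off is one command each way (the node name here is made up):

```shell
# send all traffic through the box at home
tailscale up --exit-node=home-box

# go back to direct connections
tailscale up --exit-node=
```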


Tailscale doesn't offer an official self-hosted control server, right? That seems to be an advantage of NetMaker.

It has no equivalent to tailnet lock, though, as far as I can tell.


It isn't official, but headscale exists: https://github.com/juanfont/headscale


Is the kernel module how they claim the 5x performance over Tailscale? I haven't really gathered any Tailscale performance metrics myself, but I can't see how else they could claim this (unless there are infrastructure performance differences).


Afaik, tailscale recently made changes to their go user space implementation that actually made their version faster than the kernel implementation, at least in some cases.

I remember reading a blog post on Tailscale's website about it and how they are pushing their changes upstream (to the wg kernel module and the official wg go user space implementation).

Can't find the post now though.


I think this is the post you're referring to.

https://tailscale.com/blog/more-throughput/


Yes that's the one.


Go is a bit too slow for performant networking - not sure why people are hell-bent on forcing it into such spaces where it doesn't fit.


This is not true. When it comes to networking, most languages (Go included) are bottlenecked by the context-switch overhead of syscalls, not by language speed.
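A quick way to see that overhead in isolation, as a rough sketch (this moves the same bytes both times; only the syscall count differs):

```shell
# copy 100 kB with one read()+write() pair per byte: ~200,000 syscalls
time dd if=/dev/zero of=/dev/null bs=1 count=100000

# copy the same 100 kB in a single block: a handful of syscalls
time dd if=/dev/zero of=/dev/null bs=100000 count=1
```

The first run is dramatically slower even though no per-byte "work" differs, which is why batching techniques like sendmmsg/recvmmsg and GSO/GRO are where VPN throughput gains tend to come from, regardless of implementation language.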


Please, elaborate or provide sources.


My hazy understanding from previous conversations was that the claimed performance advantages basically reflect how things were "out of the box", in certain situations. However, I've seen people claim the differences quickly close once someone who knows what they are doing optimizes a setup, and in other situations the numbers are much more similar.

The numbers Netmaker posted likely come from: https://medium.com/netmaker/battle-of-the-vpns-which-one-is-...

Note that a spreadsheet with raw data and command used (which is all `iperf3 -c <IP>`) is here: https://docs.google.com/spreadsheets/d/1Qy27zEERSqisdV1u-YUc...
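The spreadsheet's methodology is simple enough to replay on your own nodes; roughly (the IPs are placeholders):

```shell
# on one node: start the iperf3 server
iperf3 -s

# on another node: test over the mesh address,
# then over the underlying network address as a baseline
iperf3 -c <mesh-ip>
iperf3 -c <underlay-ip>
```

Comparing the two runs gives you the tunnel's overhead on your own hardware rather than on a vendor's test rig.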

While it's a good idea to be skeptical of a company tooting its own horn, there does seem (or at least has seemed) to be a consistent performance advantage, which benefits those who aren't paid to be (and don't enjoy being) a network admin beyond setting a proper MTU: https://techoverflow.net/2022/08/19/iperf-benchmark-of-zerot...

However in light of https://tailscale.com/blog/more-throughput/ (thanks to FabHK for bringing this to my attention) testing should probably be redone.


They do highlight that they use kernel-space WireGuard, as opposed to Tailscale, which uses user-space WireGuard.

https://medium.com/netmaker/battle-of-the-vpns-which-one-is-...

https://techoverflow.net/2022/08/19/iperf-benchmark-of-zerot...


Hi, worth noting another point on this. Netmaker has "Client Gateways", which allow you to generate and modify raw WireGuard config files. This is extremely useful for integrating custom WireGuard setups. For instance, generate a config file, modify it, put it on a router, and boom, site-to-site. https://www.netmaker.io/features/ingress
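As a sketch of what that enables (all keys, names, and subnets below are placeholders, not actual Netmaker output), the edited file on the router could be an ordinary WireGuard config with the remote site's LAN added to AllowedIPs:

```ini
[Interface]
PrivateKey = <router-private-key>
Address = 10.20.0.10/32
ListenPort = 51820

[Peer]
# the client gateway
PublicKey = <gateway-public-key>
Endpoint = gateway.example.com:51820
# mesh subnet plus the remote site's LAN -> site-to-site routing
AllowedIPs = 10.20.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```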


I think it's worth doing your own investigation on how often traffic is getting relayed via Tailscale. We don't have numbers on it, but have had users who experienced very high latency with Tailscale, and after doing some traffic analysis, discovered it was getting relayed halfway across the country. Tailscale does a fantastic job at NAT traversal, but it's still a worthwhile consideration.


You seem very familiar with this topic... have you ever evaluated 'tinc'?


Tinc is dormant these days. Very little development going on.

Cool concept, but limited performance and limited uptake. Their approach to mesh was neat at the time.


Dormant, or stable?

I've been running a tinc mesh network for eons w/ my systems and it's never given me any trouble. I use git to check in the 'hosts/' folder and add/remove hosts as needed, pull down to all the nodes, and they can all connect.

I do wish the encryption + transport could be as performant as wireguard, but for my needs, I haven't been pushing it hard enough that it's a concern for me.


Tinc works, but is not really stable for my use case: a strange network environment, thanks to my school. It frequently falls into infinite loops, dropping all packets and pegging a CPU core (on Windows; Linux looks fine). It seems stable on an all-Linux network, but the moment a Windows client is added, things can go wrong.

It also does not really have a decent mobile client.


That's very strange. I use tinc-vpn exclusively for my network, across Linux, Win32, and FreeBSD. Everything is very stable.

But I have to admit, I have a private fork of tinc-vpn specifically for Win32. AFAIR I made only minor changes to the TAP driver initialization and to how scripts are executed: they have their own thread now, and additionally there is a script called tinc-pre to handle IP initialization before the TAP interface is up. Because, meh, Windows network interfaces work strangely :)


I somewhat agree with you.

I went with dormant as 1.1 has been in development for years with no official stable release. Changes to 1.1 are the odd PR here and there; nothing really from the main author anymore.

Yeah, things like improving the encryption are what I would expect even from a stable but active project like this, especially given ChaCha20's widespread adoption and optimisation these days. Likewise I would have liked to have seen decent tinc phone clients (iOS, Android), but that is a failing of a lot of VPN clients.

Just look at the release interval on their news page [1].

Honestly I think the likes of WireGuard, Tailscale, etc. took the steam out of the developer. It's a shame, because I do prefer to have a diverse range of VPNs rather than just OpenVPN, IPsec, and WireGuard.

1. https://www.tinc-vpn.org/news/


It's not so much the other VPN products out there as it is having no time and no other core developers. There have been lots of people contributing, some much more than others, but usually it was just to scratch their own itch, after which they move on (which is perfectly fine).

I'm not sure how to revitalize development if there is not a large interest from developers, and I don't want to turn this into something commercial like OpenVPN did.


Yes, completely understand you re: time. Just one of those things that happens with life.

Thanks for creating tinc, truly an awesome approach to meshed VPNs!


Same here. I'm very happy with tinc-vpn. It can easily push 100 Mbit/s of traffic through my servers, and that's enough for my needs. Also, auto-mesh is a very nice feature: I do NOT want to forward traffic via my central hubs.

Note: I use tinc-vpn only in switch (L2) mode. For routing, good old quagga (forked) does its job.



