

It's definitely lacking on the unit-testing side, and we should do more there, but we think integration tests are the more important of the two in this scenario, because the most fragile bits are the interactions between client and client, and between client and server. It's a lot harder to self-contain those tests without deploying the platform and clients on various infrastructures.


I'd argue that's exactly what you want to unit test. Any breakage of the client against a unit-test server, or the server against a unit-test client, should be a big no-no. Otherwise you easily risk backwards-compatibility issues.
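
To make it concrete, here's a minimal sketch in Go of what that could look like: pin the client-server exchange against a stubbed server, so a wire-format change fails a fast unit test instead of a full integration run. The endpoint path and JSON shape here are hypothetical placeholders, not Netmaker's actual API.

    package clienttest

    import (
        "encoding/json"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // TestCheckInWireFormat pins the client->server check-in exchange
    // against a stubbed server. The path and response shape below are
    // hypothetical stand-ins for the real protocol.
    func TestCheckInWireFormat(t *testing.T) {
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.URL.Path != "/api/v1/checkin" {
                t.Errorf("unexpected path: %s", r.URL.Path)
            }
            w.Header().Set("Content-Type", "application/json")
            w.Write([]byte(`{"network":"mynet","peer_count":3}`))
        }))
        defer srv.Close()

        resp, err := http.Get(srv.URL + "/api/v1/checkin")
        if err != nil {
            t.Fatalf("check-in request failed: %v", err)
        }
        defer resp.Body.Close()

        var got struct {
            Network   string `json:"network"`
            PeerCount int    `json:"peer_count"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&got); err != nil {
            t.Fatalf("decoding response: %v", err)
        }
        if got.Network != "mynet" || got.PeerCount != 3 {
            t.Errorf("unexpected response: %+v", got)
        }
    }

If either side changes the format, this fails in milliseconds, with no deployed infrastructure required.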


Worth noting, the WireGuard creator has specifically mentioned these sorts of management features (user auth, automated configuration and coordination, ACLs) as out of scope for the WireGuard project. He wanted to keep it as simple as possible, and left it to third parties to develop VPN platforms using WireGuard.


You can lock it down a good amount:

- 80 is only required for Caddy to request certificates. If you BYO certs, you can take that off.

- TURN is optional, so if you disable TURN you don't need 3479 or 8089.

- The remaining ports are only for specific features (EMQX and the Prometheus exporter), which are not enabled by default.

So really, you could get it down to just 443. However, this should be better documented.

Also worth noting these are all server-side requirements. The actual WireGuard clients do not need these ports open.
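
If you want to sanity-check the lockdown from outside, a quick TCP probe like this sketch can confirm what still answers (the hostname and port list are placeholders; note that a TCP dial won't see TURN if it's listening on UDP):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Dials each server port from outside the host to confirm that,
    // after locking things down, only 443 still answers. Replace the
    // hostname with your own server.
    func main() {
        host := "netmaker.example.com" // placeholder
        for _, port := range []string{"80", "443", "3479", "8089"} {
            addr := net.JoinHostPort(host, port)
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%-6s closed/filtered (%v)\n", port, err)
                continue
            }
            conn.Close()
            fmt.Printf("%-6s open\n", port)
        }
    }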


I think it's worth doing your own investigation on how often traffic is getting relayed via Tailscale. We don't have numbers on it, but have had users who experienced very high latency with Tailscale, and after doing some traffic analysis, discovered it was getting relayed halfway across the country. Tailscale does a fantastic job at NAT traversal, but it's still a worthwhile consideration.
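
One rough way to eyeball it: compare round-trip latency to a peer over its mesh address versus its public address; a big gap suggests the mesh path is being relayed. (Tailscale's CLI can also report per-peer whether a connection is direct or relayed.) A sketch, with placeholder addresses, assuming the peer has something listening on the port (e.g. SSH):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // rtt measures TCP connect time to addr as a crude stand-in
    // for round-trip latency.
    func rtt(addr string) (time.Duration, error) {
        start := time.Now()
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            return 0, err
        }
        conn.Close()
        return time.Since(start), nil
    }

    func main() {
        for _, addr := range []string{
            "100.64.0.2:22",   // peer's mesh address (placeholder)
            "203.0.113.10:22", // same peer's public address (placeholder)
        } {
            d, err := rtt(addr)
            if err != nil {
                fmt.Printf("%-18s unreachable: %v\n", addr, err)
                continue
            }
            fmt.Printf("%-18s %v\n", addr, d)
        }
    }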


Netmaker also has a relay setup for when you don't want to do P2P: https://www.netmaker.io/features/relay


SSPL is a bit more restrictive, but politics also have a lot to do with it. "Open Source" is just a term; technically anyone could call their license open source, but most people only consider a license open source if the OSI (a foundation) specifically approves it. Mongo tried and failed to get the SSPL approved by the OSI: https://blog.tidelift.com/what-i-learned-from-the-server-sid...


Interesting. According to the article, it seems like the biggest complaint was that Mongo was a for-profit company and couldn’t be trusted? I agree that for-profit companies can’t be trusted, but I’m not sure I agree with the statement “that’s not open source because the license was written by a for-profit company”.


That article is obfuscating why the license was going to be rejected by the OSI. The OSI's definition of open source does not allow field-of-use restrictions (no discrimination against fields of endeavor). It had nothing to do with the fact that the license was drafted by a commercial company; the OSI has approved plenty of licenses drafted by for-profit companies (e.g., Intel, IBM, Microsoft).

https://blog.opensource.org/the-sspl-is-not-an-open-source-l...

https://opensource.org/osd/


How come the AGPL doesn't run afoul of #10 of the OSD, which is:

> 10. License Must Be Technology-Neutral

> No provision of the license may be predicated on any individual technology or style of interface.

Under AGPLv3, if I have AGPLv3 code on a computer and users can interact with it, the requirements depend on the technology used by the users to interact with the program.

It has a provision that only applies to users who are "interacting with it remotely through a computer network".

So... if my users are at the same location as the server and interacting through a command-line interface on serial terminals [1], those provisions do not apply.

I want to add a few more terminals in a nearby room but don't want to actually run serial lines from all of them to the server. Instead, I run Ethernet to the room the new terminals will be in, and at each terminal I place an RPi, with the terminal connected to the RPi and the RPi connected to Ethernet. The RPi runs software that connects to the server via SSH and then exposes that SSH session on the terminal, so users can use the command-line interface to that AGPL program.

Now the users are interacting with the server via a computer network and so those provisions of AGPL might now apply, depending on whether or not this counts as interacting "remotely".

Same thing, but now I provide an app that users can run on their phones that makes a hard-coded connection to my server and runs a terminal emulator over that to a terminal session on the server, where the users can use the AGPL program. There doesn't seem to be any question that this is now definitely remote, and it is over a computer network, so those AGPL provisions now definitely apply.

And so we have essentially one thing, users using a terminal interface to interact with an AGPL command-line program on my server, where the license provisions that apply between me and any given user depend on just what technology is used to carry their typed text from their keyboard to my server, and to carry the program's output text from my server to their display.

[1] https://en.wikipedia.org/wiki/ADM-3A


That provision is intended to exclude licenses that say things like "you must show a modal GUI window crediting this software", which would exclude the software from being used in a server application, library, or command-line interface. Or a clause like "you must provide a copy of this software on a Zip disk if asked", which over time would become increasingly hard to do.

The AGPLv3 is not predicated on a particular technology or interface, so it doesn't run afoul of this. You can use it in networked software or un-networked software. If a license said something like "you cannot use this for software that users interact with over a network", then it would violate this principle.


Thanks. That makes sense.


Kilo is cool! And it works. It will be similar; it just sort of depends on what you're comfortable with and what sort of management features you need. Some people like a UI where they can see all their nodes and troubleshoot without having to SSH in, which is a primary advantage of Netmaker, but Kilo is probably better for smaller setups that are purely for Kubernetes.


Netmaker guy here, and I'll be the first to tell you that if you have a static setup of, let's say, 5 machines or fewer, then there's no need for something as complex as Netmaker. It's really useful for people who have many machines, or machines that will move around dynamically, or if you need to route traffic through a NAT gateway. A static setup is fine for technical people and small networks; it's just not scalable. As an analogy, you wouldn't run Kubernetes if you just need to deploy 3 Docker containers, but as the complexity grows, you need a management system.


A NAS should work if it's Linux- or FreeBSD-based. For mobile, you (currently) have to use our "client gateway", which accesses the network using just a regular WireGuard config file; you can scan it from the Netmaker UI using the WireGuard app on your phone. It works well, but we've also got a mobile app in the works.
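
For reference, what the client gateway hands out is just a standard WireGuard config; something like this, with placeholder keys, addresses, and endpoint:

    [Interface]
    PrivateKey = <phone's private key>
    Address = 10.20.30.5/32
    DNS = 10.20.30.1

    [Peer]
    PublicKey = <gateway's public key>
    AllowedIPs = 10.20.30.0/24
    Endpoint = gateway.example.com:51821
    PersistentKeepalive = 25

The WireGuard app imports this directly, either as a file or via the QR code from the UI, as mentioned.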

