You may not need Cloudflare Tunnel. Linux is fine (kiwiziti.com)
279 points by mcovalt on April 8, 2022 | 167 comments


What's actually being done here is buried in a mass of analogies. AFAICT it's:

* Exposing a server running on the home network (behind NAT on a dynamic IP) to the internet.

* Doing so by renting a cheap VPS and using wireguard to forward traffic to the server at home.

I love WireGuard and use it continuously. My phone has always-on WireGuard to my home network, so all my phone traffic goes through my home router/DNS, I can access the various private servers I have at home, get DNS-based ad blocking, etc. I use an ISP that gives me a static IP, so it was easy to set up. It works like a dream.

That said, when I want to run a public server, I just rent a VPS and run it on that. I don't want anything I don't own initiating connections to anything on my home network in any way.


> That said when I want to run a public server I just rent a VPS and run it on that

I have a setup similar to the OP and there's a really good reason for it: cost. For $2k in hardware (one time) and a VPS + electricity at $20/mo, I can run a 64-core, 192GB, 24TB server. Let's call that $4400 over the course of 10 years. You would burn through that budget in roughly one month to get the same specs on AWS.

Obviously I'm neglecting the impact and cost of the network; if it's a hobby you can just reuse your existing connection, but you can always buy a dedicated line (and adjust the cost calculation accordingly).

In terms of security, you can mitigate the attack surface by running a reverse proxy on the VPS. I've got nginx on the front line, which does TLS and then proxy_passes to my basement server at its WireGuard IP address. So it's strictly limited to HTTP: no direct database or SSH access.
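For the curious, a minimal sketch of that kind of front-line nginx config (the hostname, cert paths, and the 10.0.0.2 WireGuard address are placeholders, not the actual setup):

  server {
      listen 443 ssl;
      server_name myproject.example.com;
      ssl_certificate     /etc/letsencrypt/live/myproject.example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/myproject.example.com/privkey.pem;
      location / {
          # decrypted traffic goes to the basement box over the tunnel
          proxy_pass http://10.0.0.2:8080;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }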

I don't know if I'd ever run a "real" website this way but it's great for hobbies and side projects.


Would be surprised if you were still using that server actively after 10 years.

I wonder how you build such hardware for just $2k (the 3990X alone seems to cost $3.5k), but for €94/mo you can get a 16C/32T, 128GB, 8TB server from Hetzner[1]. It has a Zen 3 5950X, which has better single-core performance than your 64-core chip (assuming you mean the 3990X), and often single-core performance matters more than total performance because you don't use all cores all the time.

I estimate such a configuration at $2.5k. With $2.5k you can keep the Hetzner server for 2 years, with network and electricity provided. You can also terminate it and rent another, better server at any time.

[1]: https://www.hetzner.com/dedicated-rootserver/matrix-ax


> Would be surprised if you would still be using that server actively after 10 years.

I thank the death of Moore's Law. Compute power becomes obsolete very slowly these days. Back in the 90s I was buying new equipment every year or two.

My primary hobby server is now 12 years old and going strong. My youngest server is 8 years old.

Every now and then I start shopping for an upgrade but eventually decide I'll spend the money later. It's still good enough for my needs and works well.


Most things don't need that much power. My homelab, NextCloud, and family Plex server is a 10-year-old i7 with 32GB of RAM. The only upgrade in 10 years was a PCIe NVMe adapter so I could boot from faster storage.


My search engine is self-hosted off hardware with those rough storage specs (well, sans ECC).

I do not regret one bit going for this set-up, although my next step toward scaling this up is indeed going to be some sort of hosted solution, the primary reason being the BTUs and decibels involved in running an actual server from my apartment.

Running a search engine or not, anyone with a serious interest in programming should have a server. I really can't recommend it enough. The ability to run large jobs that chew through hundreds of gigabytes of data for weeks if need be, without considering the cost, dramatically increases the level of ambition you can have in your projects.

* I had an idea for a re-skinned Wikipedia. So I processed all 40 gigabytes of Wikipedia articles from an OpenZIM file. Took a week. No big deal. Later I used the same corpus to calculate language data models. That took another several days.

* I had an idea a while back. It required me to screengrab 500,000 websites. So I wrote a Python script. It takes anywhere between 0.5 and 120 seconds per site. It's still not finished by any stretch, but I'm up to about 322,000 screenshots now.

It's just great.


Two years in, it's working fine and has already paid for itself. I hope it makes it to 10 years. Why do you think it won't?


I wouldn’t necessarily argue that the cloud is cheaper as a rule, but I would say that this comparison misrepresents how you’re supposed to use cloud computing and what its strengths are.

That 64 core server is probably sitting nearly idle most of the time. You don’t just spin up big systems on AWS and leave them idle. The whole point of cloud computing is the on-demand scalability.

You can basically spend nothing until a request comes in, and most hobby projects are low traffic.

For example, I found a project that runs plex media server encoding jobs as Kubernetes pods: https://github.com/munnerz/kube-plex

When you’re not encoding anything, you’re not running much compute.

I think I’ve heard of game servers that wait for players to attempt to connect before starting the instance. If all you’re doing is playing a multiplayer game with friends that instance is going to be off 20+ hours a day.

Still, I haven’t done any napkin math on what applications represent cost savings.


> The whole point of cloud computing is the on-demand scalability.

True, my system is fairly static. But it scales when I need it. As mentioned, it's for my personal use. "on-demand scalability" is a bullshit buzzword in this context. My goal was to buy a machine capable of several specific tasks.

> That 64 core server is probably sitting nearly idle most of the time
> ... plex media server
> ... encoding
> ... game servers
> ... that instance is going to be off 20+ hours a day.

Not those tasks :-) I have no trouble maxing out my disk IO and memory; it's slightly more challenging to keep my CPU busy, but it's holding an average load of 4-8.

If you can't keep a machine busy, just turn it off. Simple as that. I would not have bought such a beast if I didn't have a plan to keep it busy!


Explain how you got a 64-core, 192GB RAM, 24TB server or workstation for $2k, please.


LGA 2011, DDR3, Easystore shucks or data center decomms (8x3TB was $200).

I got 48 vCPUs and 96GB DDR3 for <$300 in 2016 off eBay. It isn't the most powerful, but it was cheap.


vCPUs are very different from dedicated cores. My 8-core machine could have 48 vCPUs as well; it doesn't say anything.


Not OP, but I have an R620 running in my garage I use for self hosting. It has 12 cores (Intel Xeon E5-2665), 96GB of RAM, 10x2.5" drive bays (came empty). The SSDs I use I shuck out of old external USB drives.

Together with a full standing rack on wheels, I bought them for $600 cash (plus U-Haul rental to get it home).

My trick is craigslist in a city with datacenters (PHX). There are resellers here who buy used hardware from local datacenters and resell them to smaller businesses. I've had it running 24x7 for almost 3 years now and it is still going strong.

https://www.reddit.com/r/homelab/ has some good references for acquiring hardware for a home lab setup.


That makes sense. 2 x 6-core Xeons were cheap even back in the day. And it came without storage. Again, as someone who actually buys straight from FB and Google when they dispose of servers (as do the companies you buy from on eBay), I'd really like to know where he got that specific 64-core (which would be a quad-socket if it's older than 2015), 192GB, 24TB server, because I'd like to order more than a few. Your server is not even worth 1/20th of what his is worth.

The closest I can get to a decommissioned server with those specs is at least $8k.


Not ideal, but you can go for used rack-mounted servers. You can easily get even better specs for less than $2k with reasonably recent CPUs too (E5-xxxx v3 and up).

At least that was the case when I set up my home server rack ~2 years ago. (Though 24TB might be too much, since server-grade SAS storage is very expensive even used. So what I did was buy servers with as little storage as possible and then put in SATA HDDs that I can get dirt cheap.)


Refurbished Supermicro from The Server Store; there were plenty 2 years ago when I bought it - the market has probably changed since then.


Same; that's going to be like 4x that price refurbished in the best case...


eBay, I would guess. Requires a bit of luck though.


A machine like that is only $2k? Can you break down the cost into parts? I'm guessing the CPU is the Threadripper 3990X.

Is the RAM ECC? Are the drives in a RAID configuration? Is it kept in a server rack? What router, switch, and firewall are you using?


It's a refurb Supermicro 2U. DDR4 ECC RAM, 6 spinny disks in a ZFS pool. It's older-gen hardware and it's really loud. I put it on a makeshift rack and put it far away, heating a corner of my basement! I've got a consumer wifi router with DD-WRT; the router has never been the bottleneck (that would be Comcast).


Are you running Linux, FreeBSD or Solaris?


> I have a setup similar to the OP and there's a really good reason for it: cost. For $2k in hardware

I think in your case a colocation data center would be an alternative solution.


I would, but my local colo offers max 25 Mbps (!) networking unless you get the enterprise plan. I'm guessing there are better colo options near a big city?


You don’t have to colo locally.


> I just rent a VPS and run it on that. I don't want anything I don't own initiating connections to anything on my home network in any way.

I do this for my personal web server, but also set up the network rule so that its SSH port is only reachable from my VPN.

IMO, this is super convenient. I can keep my public servers out of my home network (so there's a clear separation of private/public networks), but a VPN connection is still required to log into any of my servers.
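For reference, a rough sketch of that kind of rule with ufw, assuming WireGuard is on wg0 and HTTPS is the only public service (the interface name and ports are assumptions):

  ufw default deny incoming
  # SSH is only accepted when it arrives over the VPN interface
  ufw allow in on wg0 to any port 22 proto tcp
  # the public-facing service stays reachable from anywhere
  ufw allow 443/tcp
  ufw enable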


What do you do if your VPN goes down and you need access to your server to debug it? Is physical access required or do you have other contingency plans?


> What do you do if your VPN goes down and you need access to your server to debug it

Here is exactly what I would do:

1. Sign in to AWS console (with my Yubikey).

2. Click "Lightsail > instances > <my-server>"

3. Click "Networking > Allow Lightsail browser SSH/RDP"

4. Click "Connect using SSH".

5. Do debugging!

In short, you can use your cloud provider's web interface as an escape hatch. This just works as long as you manage the firewall using your cloud provider's network filter, such as a Security Group, rather than, say, iptables.


Some providers like DigitalOcean use a serial connection for the terminal, so it works even when the network configuration is completely broken somehow (totally not because you set the input policy to drop without making an accept rule first).


EC2 has this too: https://aws.amazon.com/about-aws/whats-new/2021/03/introduci...

Works even if you mess up the boot process somehow

Disclaimer: I work here


These days, your ISP would (hopefully) give you at least a whole (static) /56, so you can always reserve some prefixes for your not-really-home networks?

https://www.ripe.net/publications/docs/ripe-690#4-2-3--prefi...


Hahahahaha your ISP supports IPv6 already?


What do you mean, "already"? In Europe it has been available for at least a decade. Even in Brazil v6 has been standard since 2014 or so...


Don't hold Europe up like a shining beacon...

It's a coin toss whether BT will properly allow IPv6 traffic each time the router reboots.

I've had to make provisions to force IPv4 on my machines because, although composer et al. run fine on my VPS and elsewhere, if they try to access it via IPv6 at home it more often than not hangs indefinitely.


> Europe
> BT

*shakes head sadly in Brexit*


"In Europe"? Europe is made of many countries and all ISPs have a national structure (even if they operate in several countries) and they all decide to roll out or not ipv6 independently.


My bad. I forgot that one of the many issues I have with the EU includes the fact that it is also Big Telco's little bitch, and they only institute block-level policies when it suits Deutsche Telekom/Vodafone/Telefonica.


I don't understand. Are you suggesting that the EU should mandate that telcos provide users with IPv6? Why? And who is going to pay for that? Telcos will just pass the price on to end users. I don't need IPv6.


Pushing IPv6 would more likely than not decrease the cost for end users, and it could allow competition from independent ISPs.


Well sure, if you already have your own IPv4 address(es) - but not everyone has!

You forget about the opportunity cost that comes from our broken Internet, where you cannot assume end-to-end network connectivity because you might have to deal with NAT and especially CGNAT! (How many protocols end up overcomplicated (= expensive) and dead in the water because of that?)


In the UK we already pay BT tax money to keep our network up to date; IPv6 should be included in that.

Most people want IPv6 so they can get more IPs / static IPs, but from a legislation perspective it's more about future-proofing and staying relevant.


Most people want ipv6? What % of people even know what ipv6 is?


They don't know it by name, but if you tell them that with IPv6 they could ditch their mobile phone service (their home router can run a SIP trunk just fine, so all they need is a non-NATed connection), or that you could create a FON-style mesh network where you could make money by becoming a hotspot provider, they would be more interested.


> Most people want ipv6 so they can get more ips / static ips

My statement was clearly about the use case of IPv6 - a direct answer to your question. "Why?", as in why would people want IPv6.

Anyway have fun playing dumb.


Most people want more ips or static ips? How many people know what an ip is even? Jesus.


Still not what I said. Gargamel.


I'm in Europe, and I have yet to use a residential (or even corporate!) network with any form of IPv6 support. Of all the networks I've interacted with over the years, only my cell carrier supports IPv6.


Many years-old routers had a single external IPv4, and behind the NAT each computer got an internal IPv4, as a way to never exhaust the pool of IPs each ISP has. But that changed a while ago:

Most of the home networks I use are still behind a NAT: they use a single IPv6 to interface with the world, and within the network each computer also has an internally assigned IPv4, just in case some operating system isn't compatible with IPv6.

There are still many old routers out there using IPv4 only, as not many people care to renew their hardware as long as their WhatsApp and Netflix work.


Well, if governments really care about IPv6 (and the previous campaigns to push it were not just about "look how progressive we are"), they'll do something similar to what they did with digital TV broadcasting:

they would first ban the sale of new devices not compatible with IPv6, then forbid ISPs from advertising IPv4-only connections as "Internet", then ban the sale of new IPv4-compatible devices.

(In a few years we're going to reach the point anyway where IPv6-only connections (mostly in Asia) outnumber IPv4-only ones (mostly in Africa), which sucks because they're the least able to afford an upgrade - but then being forced behind CGNAT sucks too.)


OK, at least in Germany, I've noticed since I moved here that all residential routers are dual-stack. I even ended up disabling v6 because I wanted to run some things on my home network and I wasn't too sure about the firewall configuration.


Something that I find incredible, but have seen no one contradict yet, is that when ISPs started to roll out residential IPv6 by default some years ago, they at first had no firewall on IPv6, and/or later it was disabled by default (so at best only a few % of users would enable it). (Not sure if the situation has changed?)

Now, IPv6 (when properly implemented, which is another failure mode) comes with much better safety out of the box (like not being able to scan all the suffixes in a reasonable amount of time to find computers on the local network to target), but I'm still impressed that we now seemingly have hundreds of millions of personal computers "directly" connected to the Internet with at best only the OS firewall as protection (when one exists), and it hasn't resulted in major hacking issues! (Yet.)


The difference is that a directly IPv6-connected Windows 7-11 is actually prepared to be on the internet in a way that Windows 95 never was.

There never really was any such thing as a safe network, but it used to be acceptable to assume LAN traffic was safe. Now we know you might be on airport wifi or a large corporate network with compromised systems, and the OS and systems software need to handle that.


Right - and pre-Windows Vista, you would have to install IPv6 support yourself?

I guess the hackpocalypse will have to wait for when low-end computers (like those in TVs, cameras, personal assistants, connected doorbells, home automation, refrigerators) - you know, the "Internet of Shit" that you can't really expect to feature its own firewall - finally get upgraded to IPv6? (How does it look today?)


Germany is one of the better countries when it comes to IPv6 adoption, and even then it's only about ~55%

https://www.google.com/intl/en/ipv6/statistics.html


Comcast in the US does (if you use your own router and configure it correctly.)

There's this goofy band of people who know just enough to bring their own router but don't understand why a misconfigured AAAA record can mess up the Happy Eyeballs algorithm, and are convinced Comcast is "censoring" small business. It's funny and kind of sad to read comments from people who are certain their misconfigured networks are the result of some "international conspiracy" against them. That's probably why Comcast has it turned off on the routers they loan to customers.


Comcast Business requires customers to use a gateway provided by Comcast to support static IP addresses. I was thrilled to finally get IPv6 support, but it seemed impossible to prevent Comcast from inserting their own nameserver in IPv6 leases, which broke my network. I tried running my own DHCPv6 and radvd servers to no avail. This took me way out of my comfort zone and much deeper into the weeds of IPv6 than I ever expected to go. I finally disabled IPv6 (and I can't even tell you how I did it) to get things working properly again. Maybe I don't have the required skillset to implement IPv6 properly, but I really got the feeling that Comcast introduces unnecessary obstacles for business customers with slightly advanced needs in order to serve the lowest common denominator (aside from my special requirements, IPv6 mostly worked out of the box, which is probably their goal).


Seconding this. It's a very irritating implementation detail of Comcast Business service. It can be worked around, of course: if you have a VPS somewhere close-ish with a static IP, you can do the whole WireGuard tunnel thing at the cost of some extra moving parts and maybe a minor latency hit. And while overall their business offering is certainly vastly better than their consumer shit, if you're on their router they're still jerks. I've noticed their own pushed WiFi keeps switching itself back on, for example. Overall it reinforced my desire to use absolutely anything but Comcast whenever possible, though when that's not possible I'd still pick business service over consumer service.


Yeah, that was typically the case a few years ago around here - you could have more than a single /64 on a residential connection (and better than the bare minimum of other IPv6 functionality), but it required getting your own router (and sometimes even spending time with tech support, because they wouldn't test that use case as much as the router you'd get from them by default).


> These days, your ISP would (hopefully) give you at least a whole (static) /56

You must not have met many ISPs.


My ISP might give me IPv6, but my mobile can't connect to it.


You might already be in the minority here (even in the USA!), as it looks like the cell carriers there now overwhelmingly run on IPv6 and tunnel IPv4, which doesn't always work well?

https://old.reddit.com/r/tmobile/comments/97ifsv/issues_with...


I am in the EU - Czechia. I have 5G, but I am unable to connect to any IPv6. :-/

I have the same network ISP for my apartment and they give me only IPv6.

I hate it.


That's weird... have you tried to find out the cause of the missing IPv6 on 5G (!!)?


In many parts of the world there's no IPv6 on mobile, just CGNAT with probably thousands of customers sharing one IP.


I have a static IPv4 address, but were that not the case, yes, I'd take the IPv6 plunge.

Edit: Which I could do, as my ISP and mobile network both support IPv6. Y'all need better ISPs :D


> Y'all need better ISPs :D

If only there was one around here, but alas, they're all basically the same 2-3 ISPs - a couple of main ISPs, then smaller ones who are just using their network and renting bandwidth.


However, VPS and cloud providers generally have bad IP reputations or block ports. We created a service called Hoppy Network that addresses this. It provides a /32 IPv4 and a /56 IPv6 over WireGuard, costs $8/month, and can be used to host anything from web servers to SMTP servers. Here's a link:

https://hoppy.network


Yes, this is precisely how I connect all my computers. I use a VPS as a relay solely for its public static IP. Then every device I own uses that relay to connect via WireGuard. It's been a magical experience having 16 bits of private address space populated with VMs, RPis, PCs, phones, etc.
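For illustration, a device-side WireGuard config in that kind of relay setup might look something like this (the keys, hostname, and 10.0.0.0/16 range are placeholders, not the actual setup):

  [Interface]
  Address = 10.0.1.5/16
  PrivateKey = <device-private-key>

  [Peer]
  PublicKey = <relay-public-key>
  Endpoint = relay-vps.example.com:51820
  # route the whole private /16 through the relay
  AllowedIPs = 10.0.0.0/16
  PersistentKeepalive = 25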


I do it even more simply: I have ZeroTier running and everyone is on the same "network".

You do need to look for an ISP that gives you a static IP for that, which isn't even possible where I live.


I recently did a deep dive into network overlays (like ZeroTier), mostly because I have a few self-hosted services, and overlays seem to be the hot topic in the self-hosting community. I've come away with the feeling that most people using overlay networks at home are doing it completely wrong and opening themselves up to a world of hurt.

First, network overlays are not easier to set up than VPNs. Installing and configuring a network overlay client on every device is much more work than setting up a single VPN tunnel for every network you want to access. Overlay networks are just easier to plan, because there is no planning. But they're not easier to implement.

Second, and far more important, meshing all your devices into a single flat network is dangerous. There is a reason why networks are designed with isolation strategies. Introducing an overlay into your networks breaks down these barriers for you, but also for an attacker.

The only overlay network that has built-in firewall capabilities is Nebula. When I started configuring its firewall rules, I found myself just recreating my existing segmented networks, but in a much more obtuse way. Instead of configuring a central firewall, I was configuring firewall rules on each device.

After all my research, I'm still running the same segmented network I was running before my overlay experiments. But I would like to give some praise to both Nebula and Yggdrasil. IMHO, these are the two most exciting projects coming out of this space right now.


> Second, and far more important, meshing all your devices into a single flat network is dangerous. There is a reason why networks are designed with isolation strategies. Introducing an overlay into your networks breaks down these barriers for you, but also for an attacker.

What prevents you from meshing the individual services on a per-need basis? The fact that they're overlays makes them even more convenient for such isolation.

> The only overlay network that has built in firewall capabilities is Nebula.

Are those built in firewall capabilities really missing from the other networks?

As far as I know, ZeroTier is more of an SDN than a simple overlay (that's what made me interested in ZeroTier in the first place). If conventional L4 transport-layer firewall/routing capabilities are enough for real networks, then ZeroTier's SDN capabilities are probably enough for its virtual networks. Granted, it doesn't have some nice built-in high-level generic L7 application-level firewall capabilities, but I don't trust those in the first place.


I'm not suggesting overlay networks are useless (Slack uses one to connect thousands of machines around the world!). My comment is aimed at the self-hosting community using them as a VPN replacement for remote management/access. I don't see how they are any better than VPNs for this. They're probably worse once you start connecting all the devices you aim to remotely manage into a flat network (mixing internet-facing devices that should be DMZ'd with internal devices).


My ISP only offers static IPs in the business packages, which require an actual business and are very expensive.


I have a business package, but I can't get a static IP because the managed switch is unavailable for the foreseeable future due to component shortages. I was told I'd be lucky to get one before my contract expires.


I just cron a curl to update my A records once an hour.
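Something like this (the DNS provider's API endpoint and token are hypothetical placeholders; ifconfig.me just echoes your public IP):

  # crontab entry: push the current public IP to the DNS API hourly
  0 * * * * curl -fsS -X PUT "https://dns.example-provider.com/v1/zones/example.com/records/home/A" -H "Authorization: Bearer $TOKEN" -d "$(curl -fsS https://ifconfig.me)"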


This feels very much like the classic Dropbox vs ftp+svn comment. I don't think the point of CF Tunnel is some novel technical capability or performance, but convenience: having a service you don't need to worry about and not having a server that you need to maintain.


It's also a pattern that commodifies things that used to be basic knowledge for any systems administrator. That is both good and bad: new generations of computer people don't know how their systems work or how to actually do certain things; on the other hand, it makes these capabilities accessible to more people.


It's also worth noting that evicting this common knowledge from our collective sysadmin brains doesn't just leave our brains unoccupied. Now we can worry about a new class of things. Making meat available at the grocery store didn't mean our lack of hunting knowledge suddenly left a void of stupidity - we learned new things to fill the time and energy we used to spend on common tasks.


That is true as well, but as with most double-edged swords there can be a downside in that people that used to have the deep knowledge can now be swapped out with "cheaper" people that do not, and will not (by choice or by policy) do something in addition to the remaining shallow tasks.

That said, a classic entry-level "sysadmin" job was just a "jargon translation" job between a vendor and the local implementation of the vendor's products quite often anyway. That doesn't have to be a bad thing, but I do think it makes for a waste of human potential just to keep the anonymous cogs of an organisation running.


Absolutely. Look at ngrok. One command and boom I have a public address. People pay to avoid the headache.


Ngrok is one of those things that I know isn't that complex but I gladly pay for it year after year because I just don't want to deal with that stuff and it rarely lets me down.


Exactly. I'm running Cloudflare's tunnel as a sidecar container in the Kubernetes pods that need to be reachable from the internet. It's a very convenient way of doing so, Cloudflare can even load-balance it on their side, and it has been very stable.

It’s the convenience of it that is the big selling point to me.
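For anyone curious, a minimal sketch of such a sidecar in a pod spec (the image tag, secret name, and app container are assumptions, not the actual setup):

  containers:
    - name: app
      image: registry.example.com/my-app:latest  # hypothetical app container
      ports:
        - containerPort: 8080
    - name: cloudflared
      image: cloudflare/cloudflared:latest
      # remote-managed tunnel; the token comes from the Cloudflare dashboard
      args: ["tunnel", "--no-autoupdate", "run", "--token", "$(TUNNEL_TOKEN)"]
      env:
        - name: TUNNEL_TOKEN
          valueFrom:
            secretKeyRef:
              name: cloudflared-token
              key: token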


Yep. The hardest parts of a k8s cluster at home are ingress and storage. Previously I was using MetalLB and port forwarding, which worked OK, but not very reliably for various reasons. A Cloudflare tunnel sidecar completely solved the ingress issues.


To me it's basically a time vs money choice.

As I get older, I'm less inclined to look into a DIY way to solve a technical problem, even if it's "not too complex". When I was younger and had more time to kill (aka stay up all night), that was cool. Sometimes I just wanna get a full night of sleep and am fine paying a small fee for access to a tool or service that I don't have to maintain or think about too much.


"Get yourself a cheap VPS near you and make sure you get two IPv4 addresses."

If you're going to rent a server, why not just put your stuff there?


I ran a Minecraft server off a Raspberry Pi like this for a while. I had the VPS for another project, so stuff was running there, but Minecraft was just too much for it. On the other hand, it could run on my Pi, but my router wouldn't let me open that up to the internet. So I used SSH to tunnel from the VPS to the Pi, and it worked pretty well.
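For reference, a rough sketch of that kind of reverse tunnel (the host and user are placeholders; 25565 is Minecraft's default port):

  # run on the Pi: publish the local Minecraft port on the VPS
  # (binding non-locally needs "GatewayPorts yes" in the VPS's sshd_config)
  ssh -N -R 0.0.0.0:25565:localhost:25565 user@vps.example.com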

It's also a lot easier to show off demo projects like this - you don't need to copy everything to your VPS and figure out how to run it; you just need to have it running on your local machine (e.g. your development laptop) and let other people access that. Obviously that's not a great system for anything long-term, but if you just want to show a friend something you've made, it's quite useful.


What kind of VPS is less powerful than a pi?


A Pi 4 has a dedicated quad-core ARM Cortex-A72 @ 1.5GHz and up to 8GB of RAM.

A $5 VM from DigitalOcean has 1 vCPU and 1GB of RAM.


Can confirm: for the specific task of a Minecraft server, a Pi 4 beats the pants off the first few tiers of DO VPS. You have to get up to a cost equivalent of nearly buying a (bottom-tier) Pi per month to get close.

One problem with these VPSes is contention for physical CPU between multiple tenants. Lots of CPU context switching. Kills performance. You can get dedicated-CPU VPSes, but at that point you're basically renting a fraction of a real server and the prices tend to be high.

A $100-$150 old x86 workstation or server off eBay will do even better (I run several things, including a Minecraft server, on mine, and it performs great for all of it), but your power use will be much higher than with a Pi.


Also, a Raspberry Pi 4 (Model B) costs ~35 USD as a one-time cost; the 5 USD VM from DigitalOcean is per month. If you're planning to run something for longer than ~7 months, you'll save money (and get better CPU/IO [though probably not network] performance) by going with the Pi instead of DigitalOcean.


The SD card in the Raspberry has to be purchased, and it wears out pretty fast if you utilize it.

The VPS's power bill is already included in its price. For the Raspberry you have to pay the power bill yourself.

Here in Germany we now pay 40 euro cents/kWh. 5 watts 24/7 is 17.52 euros/year; 10 watts 24/7 is 35.04 euros/year. The Raspberry is somewhere in between.

That is the reason why I replaced my Dell T30 server with two Contabo VPS servers. I also don't have to worry about my ISP screwing up my connection.


> The SD card in the Raspberry has to be purchased + wears out pretty fast if you utilize it.

Buy a high-endurance card or simply use external media if you have I/O heavy services you run on it.

> The VPS power bill is already paid with its price. For the Raspberry you have to pay the power bill.

True, but that cost is small. Not sure if those German prices are typical elsewhere, but where I live it's nowhere near 0.40 EUR/kWh, so the cost of electricity per year is marginal at worst, unnoticeable at best.

> That is the reason why I replaced my Dell t30 Server with two Contabo VPS servers. I also don't have to worry about my ISP screwing up my connection.

Taking a look at Contabo (never seen them before), it seems their "Cloud VPS" is all virtual CPUs (not dedicated ones), so it's not really comparable.


That SD cards quickly wearing out might only be relevant to Raspberry Pis before v4, because they could (try to) draw up to 15 W, while USB 2 could only deliver 2.5 or 7.5 W at best?


I don't see how you can run a Minecraft server on those specs. Just turning up the draw distance will cause problems. Add a couple of users and you are fucked.


It's fairly trivial to overclock a Raspberry Pi 4 (Model B) to ~2GHz if you have active cooling, and with a single one I've run Minecraft servers for ~5 players without hiccups. Of course it's not gonna be able to host large servers, as the specs are low, but for the price it works out well for small friend groups.


It's hard! I only had two consistent users (myself and a friend, plus occasional guests, but they taxed the system a lot so it wasn't very often), and I still ended up having to fiddle around with it a lot.

* As someone else said, overclocking helped, and I had a reasonable passive cooling case to help there.

* I used Paper instead of the normal Minecraft server, and I ended up spending a decent amount of time optimising the configuration. Paper by default comes with a bunch of optimisations; I enabled some more, although I also disabled some that were interfering with the more technical areas of Minecraft that I enjoy most.

* Whenever things started lagging, I went on a killing spree through our main farms, and that tended to work well enough. Most of our contraptions were turned off by default, or designed not to be too laggy. I also restarted the server every night, which worked reasonably well as a sort of ultimate GC.

If I were going to do it again more seriously, I'd probably get a cheap mini PC and use that instead, but for what it was - me and a friend rediscovering Minecraft after not having played it in probably 5-10 years - the Pi 4 held up pretty damn well.


VPSes usually have "virtual CPUs", meaning they are shared between the people who rent "virtual CPUs" at that hosting company. When you have a Raspberry Pi, you most certainly have a dedicated CPU. This makes a big difference in terms of consistent performance: you always know how fast it can go, and it won't suddenly perform worse because your "neighbor" at the VPS host is also using a lot of CPU.

This is also why dedicated instances are usually way better for performance-sensitive hosting compared to beefier VPS instances.


> I had the VPS for another project, so stuff was running there, but Minecraft was just too much for it

Presumably the Pi is just running Minecraft.


A t4g.nano has 2 vCPUs at 5% "baseline utilization" (i.e. sustainable utilization). That is not very much. Also, only 0.5GB of memory.


Wouldn't the game need a UDP connection, and isn't that not doable with an SSH tunnel?


Last time I checked, Minecraft uses TCP for its connections, which explains a lot of the performance issues with multiplayer Minecraft. But games generally use UDP, so it's a fair assumption to make.


IIRC the original Java version uses TCP and the Bedrock one uses UDP.


Well... the cheapest VPS vendor I know offers something as low-spec as 256MB RAM + 10GB disk [1]. You might meet the OOM killer just running the Docker install script; don't even think about putting a Minecraft server on it.

1: https://virmach.com/cheap-kvm-linux-vps-windows-vps/

I only need its IP address and network, nothing else, to set up a stateless reverse proxy. Which I guess is the best use case for such a VPS.


Thanks. I have used Vultr's $3.50/month before, but this is even cheaper at $2/month; over a long period, the difference will get me a pizza.

Are they good?


It really comes down to what you want to achieve. Based on my experience, they're good enough as proxies, but I wouldn't install a database on there.

Keep in mind, you get what you pay for. Many low-cost VPS vendors are also using low-cost IDCs to host their hardware. Some of those IDCs might be heavily sanctioned by other online services (say, Google will always want to verify you're a human if the IP of the VPS lands in a sanctioned range).

Other than that, they're fine. My VPS with Virmach has been up for 464 days; I consider it stable enough (again, as a proxy) for my application.

If you're interested in low-spec VPSes, I would point you to LowEndTalk (https://lowendtalk.com/), which is a forum for low-cost VPS vendors and consumers.


Because in the proxy setup you get to keep your TLS private key on your own physical infrastructure. The VPS is just passing opaque packets.

If you value the privacy/integrity of other data, then that is also more protected.
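A minimal sketch of that kind of opaque pass-through with nginx's stream module (the home server's 10.0.0.2 WireGuard address is an assumption):

  # goes at the top level of nginx.conf, outside the http {} block
  stream {
      server {
          listen 443;
          # relay raw TLS bytes to the home box; the private key never leaves home
          proxy_pass 10.0.0.2:443;
      }
  }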


Primarily for storage, at this point. I run some CI/CD on my local network, and the artefacts produced by the build process are massive. Paying for that is not worth it.


Because putting your stuff there is expensive because of the necessary disk space, not to mention the company or law enforcement can take your data from there easily.


If you're going to sell gasoline in downtown Manhattan, why rent a little filling station when you could just build a whole refinery there?


Cloudflare tunnels are INCREDIBLE for local development. Get a domain name for testing on Cloudflare, give your developers access to it, boom everyone gets an internet-accessible hostname wherever they are without having to mess with firewall rules. You can test external API calls coming into your apps and let other folks access your dev environment, all self-service for your devs.


AutoSSH and port forwarding. You can even forward ports behind a NAT. I use this technique to get to "my" box on "your" network: no firewall config changes, no issues, it "just works".

Now, I've never tested this with a public/high(er)-volume service, but it lets me pen-test internal networks just like I'm sitting in the NOC. And my "VPS" host can handle dozens of simultaneous connections to dozens of endpoints. I have SSH listening on a non-standard port (eliminates 95% of the script-kiddie noise) and cert auth. That's the only listening service on the VPS box.
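For reference, a rough sketch of that kind of persistent reverse forward (hosts and ports are placeholders):

  # VPS port 2222 reaches this box's SSH; -M 0 relies on the
  # ServerAlive options for liveness checks and reconnection
  autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 tunnel@vps.example.com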

I am familiar with some "TCP-in-TCP" problems, but I've never had any. If it falls down, it just reconnects when traffic can pass again.

So what am I missing?


I use this technique as well, and don’t understand enough about wireguard to know why it would be better.

AutoSSH has been 100% reliable for me, with any lost connection restarting without conflicts, duplication, or error. My AT&T connection is definitely not five nines, so any tunnel needs to deal with restarts very well.


I don't really get the point of fronting, and even caching, for 99% of the people out there. The analogies don't really do it for me; do you expect to get DDoSed on a daily basis? That has not been my experience in 15 years of home hosting. If you have FTTH you will be OK in most situations.

The only upside I can see is that it can protect against targeted attacks on the crappy modem provided by my ISP. But if such an attack is widespread it will probably hit me anyway.


The advantage of Cloudflare Tunnel is that you can block all incoming requests via your firewall, since the tunnel only makes outbound connections. That alone drastically reduces your attack surface. And your actual location and IP are obfuscated by Cloudflare.

I'd imagine these two points are much more important than DDoS protection and caching for most people.


I believe that to be true for most cloud offerings these days. If you find the open-source project that they are using internally, a cheap Linux VPS is all you need to replicate it.

Yes, it won't scale to a million visitors, but then again, your purse won't scale to a million cloud visitors either.


> Yes, it won't scale to a million visitors

His simple static page seems to be taking the load of making the front page of HN just fine.


I think people sometimes forget that tons of high-traffic sites used to be served from what would today be laughably weak machines and very simple architectures (say, just a failover second machine, or an LB between just two servers, or sometimes even just a single YOLO server).

No, they didn't get the traffic of a modern Google, say, because not as many people were online, but they did receive as much traffic as an upper-mid-tier modern site, and served it with machines weaker than a lot of modern phones.

Just serving HTML and small media files is something computers are very good at, if you get out of their way.


Another factor 20 years ago was the Apache web server. Everyone used that software because it was available and seemed to do the job. Load balancing probably got developed 5-10 years earlier than it would have if Nginx had existed back then.


That's probably only 2k-3k unique visitors, though.


Per minute.


Yes? The whole point is it's barely any traffic for a static site.


3k unique visitors per 10 hours



You could put a CDN that does caching in front of the site and not worry about a power outage.


Hey look, my crush I wrote about! Naw, that's no fun. The better solution is buying a Honda generator or maybe trying to build one.


Sorry, I couldn't resist.


A CDN like CloudFlare with "Always Online"?


Almost everything Cloudflare does can be self-hosted; it's mostly a matter of your time vs your money.


That applies to a huge amount of SaaS and cloud.


I wish we would all switch to IPv6 ASAP so that we can avoid tunnels, NATs, and all the mess that comes with them.


No mention of forwarding a port on your home router + dynamic DNS here in the comments yet. I would appreciate recommendations for dynamic DNS providers.


I did that in the past, but I'm kind of glad these free tunnels exist now. I always felt a bit uncomfortable forwarding ports in my home setup, especially if it wasn't to a Linux server that I could harden reliably. For me, NAT was always a layer of security, much like a firewall.


I use Cloudflare for my DNS. ddclient has built-in support for updating Cloudflare DNS, or if you have to use a device which only supports the dyndns protocol, the DNS-O-Matic service can be used to update Cloudflare DNS. Also, many ACME clients have built-in support for using Cloudflare for DNS-01 challenge verification to get certificates from Let's Encrypt.
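A rough /etc/ddclient.conf sketch for this (the zone, hostname, and token are placeholders; exact field names vary between ddclient versions, so check your version's docs):

  protocol=cloudflare
  use=web              # discover the current public IP via a web service
  zone=example.com
  login=token
  password=YOUR_CLOUDFLARE_API_TOKEN
  home.example.com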


Basically any decent normal DNS provider (e.g. Porkbun or Gandi) will have shorter TTLs than most forwarding nameservers anyway, and APIs that are fairly straightforward.


This solution doesn't require any port forwarding. The "home" device connects out to the VPS, letting the VPS talk to it over the WireGuard connection.



Namecheap's DNS provides dyndns support. It's now just an add-on to your DNS or domain name provider.


I do dynamic DNS by updating Gandi LiveDNS entries thanks to their API and a botched shell script.


I've set up DDNS with DigitalOcean.


This is actually very simple in concept and is just as simple or even simpler to do with tinc (https://tinc-vpn.org).

Since I can use tinc in bridge mode, I can run tinc on the upstream server and on one local machine, which then provides access to several physical machines without running extra software on each of them - particularly useful for machines that are resource-limited, like my Macintosh LC II and LC III+:

http://elsie.zia.io/

It'd be nice if it weren't so difficult to get public addresses.


I wouldn't use tinc, particularly version 1.0. Its protocol is weak. [1]

(Every time I see tinc mentioned, I'm frustrated 1.1 hasn't been released. I made contributions to it 15 years ago that still haven't been released.)

[1] https://www.tinc-vpn.org/pipermail/tinc-devel/2006-January/0...


I'm running 1.0.36, but I do see that 1.1pre18 is an option. I'll have to try that out some time.


You are missing the fact that Cloudflare Tunnel is free.

You don't need to pay for a VPS or an extra IP; plus, you don't need to learn tunneling software such as WireGuard or ZeroTier.


Free as in dollars, but not free as in privacy, and it solidifies a monopoly.

Although, clicking the pricing tab: it says hobby/personal use is free; "For professional websites that aren't business-critical" is $20/month; "For small businesses operating online" is $200/month; and there's custom pricing for non-small businesses.

Although I did not see Tunnel there specifically, and the Tunnel page just has a "download the paper" CTA - so it's hard to know what price one should be paying, on top of the first two things of course.


I do something similar with a reverse proxy on a DO droplet (Pomerium, so I don't have to think about certs or SSO) which is on a ZeroTier network along with a box at home. I probably wouldn't have bothered setting it up if Tunnel had been free at the time, but it's very convenient to have a random box to do stuff on outside the network (and to be able to access services at home without having to install ZeroTier).


This looks neat, but I don't really understand how it works. I imagine the DNS record points at the VPS, and the VPS just forwards all traffic to the actual server via WireGuard?


Pretty much, yes.
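Concretely, a minimal sketch of the two ends (addresses, keys, and the forwarded port are placeholders; the VPS also needs net.ipv4.ip_forward=1):

  # VPS: /etc/wireguard/wg0.conf
  [Interface]
  Address = 10.0.0.1/24
  ListenPort = 51820
  PrivateKey = <vps-private-key>
  # send public port 443 down the tunnel; masquerading means the home
  # box sees the VPS's tunnel IP rather than the real client IP
  PostUp = iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
  PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
  PostDown = iptables -t nat -D PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
  PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

  [Peer]
  PublicKey = <home-public-key>
  AllowedIPs = 10.0.0.2/32

  # Home server: /etc/wireguard/wg0.conf (it dials out, so no port
  # forwarding is needed on the home router)
  [Interface]
  Address = 10.0.0.2/24
  PrivateKey = <home-private-key>

  [Peer]
  PublicKey = <vps-public-key>
  Endpoint = vps.example.com:51820
  AllowedIPs = 10.0.0.0/24
  PersistentKeepalive = 25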


Well, this is what I believed, but I can't make WireGuard work for my setup:

https://serverfault.com/questions/1098093/how-setup-wireguar...

The combination of WireGuard + firewalls and the complexity of iptables is not intuitive at all...


I see you use NixOS. Check out these NixOS modules for setting up what I'm talking about: https://gist.github.com/mcovalt/c1fc476385bd2b65513809c5bc68...


Ok, internal is the server and external is the dev machine?


Sorry, that was confusing. It's the other way around.


If you run something like Headscale, Netmaker, or Netbird (WireGuard mesh network managers), then all your traffic is direct, point to point, and you don't need to care about the limits imposed by a VPS.


I was hoping there wouldn't be a rented VPS in the story.


I'd use a Tor Onion service over Cloudflare.


Tor is underrated for breaking through NAT and providing a link home. It's not really an alternative if you want to host stuff yourself, though.


I used a Tor hidden service to access an SSH port on a router in my home network, and it works great... as long as Tor works. Being constantly under attack and censorship makes Tor unreliable.


A downside with Tor is that it is slow (or was slow about five years ago when I tried it, even with a low hop count).


A port forwarding utility like rathole/tunnelto (tobaru/frp…) seems simpler than the proposed WireGuard setup.


You can also just use SSH, which I have found to be not only easier but also better-performing for the simple port-forward case.


You can do that with iptables if you don't need traffic to be encrypted between the VPS and the internal server.
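Something like this on the VPS, assuming the internal server is reachable at some address (203.0.113.7 here is a placeholder) and net.ipv4.ip_forward=1 is set:

  # forward public port 443 straight to the internal server, unencrypted
  iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 203.0.113.7:443
  iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.7 --dport 443 -j MASQUERADE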


> Get yourself a cheap VPS near you and make sure you get two IPv4 addresses.

IPv4 isn't cheap these days, so those two requirements are not as easy to attain. Given they specifically mention Hetzner, who significantly increased their prices for additional addresses in the middle of 2021, I'm going to assume this page was written some time ago.

Using a single IPv4 should be fine - just port forward that over the VPN. Given most of what people want to publish this way these days is wrapped in HTTP(S), if you want something both local to the VPS and back on your home⁵ server, use nginx or similar as a proxy to split traffic by [sub]domain (see the sketch below). You probably want SSH both to the VPS (to manage it) and to the proxied home server, but that can be done many ways using just SSH¹², or better still, use wireguard to connect to the VPN from your remote location and simply route SSH to the home machine over its VPN connection³.
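A rough sketch of that nginx split (hostnames, upstream addresses, and ports are placeholders):

  server {
      listen 443 ssl;
      server_name on-the-vps.example.com;
      location / { proxy_pass http://127.0.0.1:8080; }  # service on the VPS itself
  }
  server {
      listen 443 ssl;
      server_name at-home.example.com;
      location / { proxy_pass http://10.0.0.2:8080; }   # home server over the VPN
  }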

But using something like wireguard is the way to go; many similar examples use SSH tunnels, which, while fine for some things (I use them all the time), will have additional performance issues in some cases due to TCP-in-TCP congestion management conflicts, and do not deal with temporary connectivity blips (not uncommon on home connections) as gracefully.

----

[1] Though most of these suffer from the TCP-in-TCP issues. That might be less significant than for hosting an app or other service, but you are already using wireguard/similar, so why not use it some more?

[2] The pure SSH options, which have different [dis]advantages depending on key management, interaction with other tools that wrap SSH, and so forth, include: just manually double-hopping, using the -J option to jump through in one command, configuring an alternate named host in your ~/.ssh/config using ProxyCommand for the second hop, and at least one other that has slipped my mind ATM. (See the sketch below.)
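For footnote 2, a sketch of the jump-host variants (host names and addresses are placeholders):

  # ~/.ssh/config
  Host home
      HostName 10.0.0.2            # the home box's VPN address
      User me
      ProxyJump vps.example.com    # or the older ProxyCommand equivalent:
      # ProxyCommand ssh -W %h:%p vps.example.com

  # equivalent one-off command:
  #   ssh -J vps.example.com me@10.0.0.2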

[3] I would still be inclined to have a pure SSH option available as well, in case the VPN is blocked when I find myself constrained by a funky network at a client/other site that isn't locked down enough to also block SSH⁴.

[4] If you want to go a little more hacky to deal with networks that block SSH completely but are fairly open wrt HTTPS, there are a couple of options. I've used shell-in-a-box previously, though that seems to be unmaintained ATM; Bastillion may be a better option, though I've not tried it myself. Be careful how you secure these tricks if you use them…

[5] I've referred to a “home server” throughout as that is the most common use for this sort of thing in my experience, but it all applies to any other situation where you want to host something on a box that is NAT encumbered and/or not on a fixed IP address.


Nebula is probably a better choice for something a little longer-term - bandwidth isn't funneled through one VPN gateway.

Once the clients talk to the lighthouse to build the tunnel, they communicate directly.

https://github.com/slackhq/nebula


I've done this; I really wish I could recommend it.

When the NAT punching works it's great. However (AFAIK) there's no option to fall back to relaying through the lighthouse when NAT punching fails, and when NAT punching inevitably fails it just doesn't work, even when everything can talk to the lighthouse.


Ah, interesting!

I've only used this on very conventional networks, so I haven't quite noticed this difficulty. With this in mind, it is a little harder to generally recommend.


In China you need a VPN in order to be able to consume information that's blocked by the great firewall. In America you need a VPN to be able to serve information that isn't blocked by the great firewall. Which system is the more reliable?


The Chinese firewall is a government run censorship apparatus. The American one you refer to is standard network security (or CGNAT out of necessity) on networks you don't 100% control, but I'd hardly call it a "great firewall."

Those aren't really comparable. What does "reliability" mean in this context?


> In America you need a VPN to be able to serve information that isn't blocked by the great firewall.

I'm pretty sure this is true in China too. What point are you trying to make?


> In America you need a VPN to be able to serve information that isn't blocked by the great firewall.

Empirically false.



