I wish my web server were in the corner of my room (interconnected.org)
589 points by flobosg on Oct 11, 2022 | 436 comments



In 2003 I had my web server in my college apartment bedroom. This is back when AOL Instant Messenger was popular.

I had a URL on my website called moo.html that wasn't indexed. My friends had it bookmarked, and when they visited it they got a picture of a cow, and it also played a mooing sound out loud in my bedroom. It was a nudge to come online and be social.

The End.


Similar story: In college much more recently (2019), I had a linux server running at my boyfriend's apartment since he was off campus and we were blocked from doing anything like that on the school's network. Sometimes, I would say hi to him or wish him goodnight by playing a little tune on the PC speaker hooked up to that computer. He'd always text me back with a smiley face or something like that. Feels like that kind of interaction is really rare on the web these days, but we had fun with it for a little while.


Wow cool, but that's bizarro world to me. In my day, college was where everything awesome was happening because it had fast and basically unrestricted internet. A lot of the Napster and other P2P stuff that followed was being seeded from someone's dorm. The best game servers, etc. On IRC in the early 00s, I did a lot of trading of video (live music footage), and one kid in a dorm somewhere could host an enormous amount of content by most home internet standards. Once I got off dialup, I could easily download more than I could afford to store. The cheapest thing for me to do was buy a massive stack of CD-Rs and start burning. If I remember correctly, the largest HDD at the time was about 40GB.


Our school's IT department used to go around with wireless scanners to make sure nobody was running networks without the school's permission. I knew people who got busted for stuff like that, but my roommates and I eventually hacked a way around this by naming our network "Dave's iPhone Hotspot" and never had any issues. At that point, the webserver moved from my boyfriend's place back to my own until we moved off campus the following year.


That's exactly why schools started filtering data.

The school I was in had to do a complete wipe of 5 TB of random shares (back then that was a lot of data), and they expelled a bunch of students for pirating and hosting extreme content.

After that, network filtering was standard policy.


In my college days the filtering started because half of the IT students were playing World of Warcraft from the school network. Which obviously triggered a cat and mouse game of introducing new blocking measures and trying to get around them. When we were not trying to get WoW to work, we were busy showing off our Compiz rotating desktop cubes. Those were the days.


In those days, the profs were the ones playing the video games.

They were "testing the newly pulled fibre optics" by trying to saturate it with WoW, Diablo, Counter-Strike, Doom, Quake, etc.

They would book the computer labs of 60+ computers and tell everyone to boot up a copy of (ahem, pirated) Half-Life, get everyone on games, and then run a traceroute/ping/packet trace, etc. to measure what was happening.

Those were the days...


Games generally use very little bandwidth so that was not a clever test imo... :)


Games tend to need low latency though, so it's not a dumb test.


DR-DOS / Novell DOS actually shipped with a basic multiplayer space sim (https://en.wikipedia.org/wiki/NetWars) in the box, with the official explanation for its existence being "to test network topology and configuration".


Similar memories and tales here :) I had a 100Mbit unfiltered connection in my dorm room in 2001 (!). Pretty wild for the time. I guess it took until about 2018 before I finally surpassed that speed at home. Nuts really.

Lots of pirating but also, thanks to it having a public static IP, a fabulous way for me to learn all about running and configuring my own web server, irc server, mail server, dns server, ftpd etc. etc. It was my gateway into experimenting with linux, discovering luminaries like DJB & RS, learning about RFCs, appreciating the open source/free software movement and probably a major reason why I ended up on a sysadmin/ops/infra trajectory after uni that still serves me well to this day.


Yeah I started and sold a web hosting company around that time based on the skills I picked up with all those things. I didn’t host it from my dorm but just commodity servers with a shared hosting app installed. If I remember correctly I had about 5000 users and sold because I didn’t want to hire help when it started to encroach a bit too far into my leisure time.

A few of my customers were running legit and decent sized businesses and they had no idea it was being hosted by a college kid who actually wasn’t even monitoring things all that much. When I encountered a problem I couldn’t figure out from searching the internet, I’d usually just write some script and cron job to restart the service periodically. It also served as my everything (nameservers, support, e-mail, etc) so I remember one time there was a fire in my data center and it went down for about 36 hours. I had no way to communicate with my customers and the site was unavailable. Luckily I think only about 10 customers even noticed.


I remember seeing lots of jokes on the Internet about guys going to college "for the bandwidth"


In the early 2000s we used to send each other messages using Query Strings or X-Headers....


or MSN Messenger or ICQ


Then again, that kind of interaction was rare even in its heyday.


One night in the 90s I woke up at 1am because the server next to my bed started making a lot of noise! I quickly log in and see a process by user "nobody" taking up 100% CPU! I'm being hacked! Quickly pull the network cable out of the wall, wide awake.

Turns out there is a cron job that updates the locate command's index.


At a web startup I worked at in 2008, we had some automated emails sent to all our users. We didn't have sendmail or postfix or whatever properly configured and so the emails came from nobody@ourdomain.com. Our CEO was pissed because he didn't understand that it wasn't like some intentional joke by our engineering team.


My first real job was to work with another intern to write a “tool to keep track of who’s been trained on what” since they were doing it with a spreadsheet. This was in 2001.

We wrote it on the LAMP stack which gave us the full suite of whatever you could find on a Linux CD at the time.

Fancy graphs? No problem.

Send reminder emails? Sure boss! We dutifully started hacking and testing and hacking to get that function in.

To test, we decided to send emails to jackfrost, santa, and so on from our own sendmail server to the corporate mail server. It worked fine because we weren’t spamming it, we were just sending a few messages every now and then as we debugged.

Turns out there’s really a Jack Frost that worked for us.

He was not pleased.

We were amused.

I think we apologized, and I forget how we figured out he was a real person.


I also had a server in my bedroom.

The problem was that I only had mechanical hard drives back then and they would spin up all the time and make noise.

In 2001 I started to systematically search for the causes of the disk noises and document it at https://www.agol.dk/quietlinux/

E.g., I spent a lot of time finding out that CUPS was generating a new certificate every 5 minutes. Nowadays it is a lot easier to investigate things like that. Back then I was patching the kernel.
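
(These days something like fatrace gets you that answer in one command; a sketch, assuming a Linux box with the tools installed:)

  # report file accesses system-wide, with the process responsible
  sudo fatrace
  # or watch only the processes actually doing disk I/O right now
  sudo iotop -o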


I, too, got bothered by fan noise at night, and my server was being hacked.

Well. Hacked as in someone banging their password list against an SSH server that only accepts public key logins, at maximum speed with tens of simultaneous connections, pegging the CPU (and I was running junk hardware, I found that 800MHz Intel Coppermine CPU literally from a mixed waste trash bin). Usually the scripts doing this had the decency to keep the attempts to something like 1 per second which would be unnoticeable.


fail2ban could notice that.
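
A minimal sketch of that, assuming a Debian-ish box (values illustrative):

  printf '[sshd]\nenabled = true\nmaxretry = 5\nbantime = 3600\n' \
    | sudo tee /etc/fail2ban/jail.local
  sudo systemctl restart fail2ban
  sudo fail2ban-client status sshd   # lists currently banned IPs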


Yep!

  updatedb


One day when I had just started using linux, this never happened to me either ;-)


I had the exact same experience :-) it probably wasn't that unique.


I remember mine (a Pentium 2 with a SCSI disk) would go off at 6 AM, just as I was about to fall asleep (aah, that student life). One day I had an aha moment: I could change the cron timer to make it run at 9 AM.


In 2001 I had an account set up for my girlfriend, now wife, so that she could telnet (openssh wasn't really widespread then!) to my desktop and it would play a sound and blink a light as part of the login procedure.

The light was controlled by an X10 "firecracker" module. Neat stuff, for the time.

Anyway, she would do that to get my attention if I wasn't by the PC and she wanted to chat via ICQ.
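
The login hook itself was nothing fancy; roughly this in the account's ~/.profile (the sound file is made up, and the exact BottleRocket invocation for the CM17A is from memory, so check its docs):

  aplay ~/sounds/chime.wav &      # the sound
  br A1 ON; sleep 1; br A1 OFF    # blink the X10 lamp (syntax assumed)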


In the early 2000s I had a program on my girlfriend's, now ex-wife's, family computer that would kill the AOL process when sent a specific string over a TCP port so that I could call her when her sister was using the computer. That combined with a DynDNS client let me call anytime.

Her sister caught on but couldn't prove it.
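
A latter-day sketch of the same trick (port, magic string, and process name are all made up; netcat flags vary by flavor):

  # wait for the magic string, then kick the dial-up app offline
  while true; do
    nc -l -p 5555 | grep -q 'hang-up-now' && pkill -f aol
  done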


Have a similar story around the same time, probably ~2002. Cell phones weren't that popular yet either.

As a high school student I helped my school do some sys admin stuff, and one day I was stuck in a server(?) room while the guy who had keys etc. was away in another room and floor. So I ssh-ed into the machine he was likely working on and ejected the CD ROM back and forth until I caught his attention :D
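
Roughly this, once ssh-ed in (device path assumed):

  # open and close the tray until someone notices
  while true; do eject /dev/cdrom; sleep 2; eject -t /dev/cdrom; sleep 2; done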


Free cup-holder!


I still use one of those firecracker modules to toggle a set of Christmas-type lights from the command line. I've gotten a ton of fun out of that little module in the 20ish years I've owned it.


Did you get it as part of a promotional deal for about $5 in the late 90s?


Yep! Money well spent.


Nice. If I remember correctly it was their "Powerhouse" deal and you just paid for shipping.

I still have the modules around here somewhere!

No native serial ports, though, and the restriction of things needing to be on the same circuit kind of puts a damper on things. In my college dorms everything in my room was on the same circuit.


Yeah-- that was the one. I've found additional modules here and there in thrift stores and garage sales. The stuff was always just flaky enough that I never wanted to trust it with anything serious. Turning on lamps and strings of xmas lights was fine because the occasional "freak out" that the modules inevitably would fall victim to, requiring a power-cycle to overcome, never caused any major inconvenience. Their hard-wired more "serious" brethren, though, scared the heck out of me. I can't imagine they were at all reliable over the long haul.

re: the serial ports - If I remember correctly the signaling to the module was done on the handshake pins (because serial data could "pass thru" the module). They'd probably be pretty easy to bit-bang from any 5V logic source.


I did a similar thing with my family: I'd hooked a GNU/Linux box up to the family Hi-Fi system to play our various music libraries, and when I was living overseas I'd "call them" by ssh-ing in and asking mpd to start playing something. They'd come online and call me using Google Talk (the very first one, probably, because it was good, simple, built on open standards, and long dead).
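
The whole "call" was essentially one line (hostname and track are placeholders; mpc is the usual mpd client):

  ssh family-hifi 'mpc clear && mpc add some/album && mpc play'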


Some relatives of mine have internet-connected RGB lamps that they use in a similar fashion. When one sets the color, the others automatically synchronize. It seems like a pretty neat low-stress way to keep in touch.


What's their encoding scheme?


Obligatory: https://xkcd.com/530/

I did something similar when I lost my phone but it was still connected to the network. Ssh into it and `while true; do espeak "I am here"; done`. Related: http://bash.org/?5273


The xkcd reminds me of a friend who was locked out of her car. The battery in her remote key fob had run down so the door would not unlock when she pushed the unlock button on it. She was still trying to figure out online how to get a new battery when I took her key from her and opened the door by inserting it in the lock. She was so embarrassed that she wouldn't talk to me for a few days.


I’m fairly certain we recently fought to open a rental car because the key fob died and the way to extract the key from the fob was non-obvious.

Then when we finally got inside, the car didn’t have a keyhole to start it at all. We ended up calling the rental agency, who showed us how to invoke the magic sequence by holding the (empty) fob in front of the start button for a few seconds before pressing it. I guess it does passive RFID or something?

Anyway, that’s the point where I decided modern cars are not my thing.


That's the immobilizer chip. It's a little pill-shaped RFID-like thing that's been inside keys since long before remote locks and push-to-start. Basically any key that has some plastic instead of being entirely metal. The reader is located immediately next to the ignition key hole on the steering column, and that location is sometimes used even in push-to-start cars although apparently in your case it was near the start button instead. Distance is limited to a couple centimeters max. Car won't start unless the immobilizer's reader sees the correct key. When a push-to-start fob's battery is in working order, the distance is moot because it uses full blown RF instead.


Modern is a subjective thing. I have a Golf 5 from 2005 (TDI). One day I had to go somewhere, and it turns out the car would start and then shut itself down immediately... The ground connection to the dash was intermittent. The dash, for some unknown reason, is the place where the immobiliser code/certificate is stored, so when you start the car the ECU has to talk to both the immobiliser coil next to the key and the dash.

I'll leave it to readers' imagination how long it took me to troubleshoot the issue.


I did that, recently. My fob battery died, I unlocked the car with the key, opened the door and... the car alarm went off. I'm not sure what the designers were thinking.


You turn the alarm off by starting the car, because the ignition has an rfid-like close-range reader which only requires passive circuitry in the key. That's how you differentiate between a break-in and the legitimate owner.


My car has push start (like many new ones) & has no keyhole inside (it has one in door to open the door). Although it has a seat/slot for the whole key to go in, in case of low battery. I assume that will stop the alarm. :-|


My car is also push start and I have to hold the fob in front of the start button for a short while before turning on the car if the fob battery is out.


> Obligatory: https://xkcd.com/530/

Plot twist: It's a Ring doorbell and wi-fi is down.


I enjoyed the (brief?) times when the client would do string interpolation on the URL and tell you the screen name of the person viewing it.


You had to put a link in your profile that contained "%n", and the client would replace %n with the screen name of the person clicking the link. They never took that away as long as I was using AIM, but there was no way to see anyone simply viewing your profile without clicking a link as far as I can remember.


Could you have added an <img src="http://tracking.example.com/pixel.gif?name=%n"> to the profile, tracking them via the weblogs, or having a era-appropriate CGI script sitting there that kept track of them?
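
On the receiving end, an era-appropriate logger could have been as small as this CGI sketch (paths hypothetical):

  #!/bin/sh
  # cgi-bin/pixel.cgi -- log who loaded the image, then serve a 1x1 GIF
  echo "$(date): $QUERY_STRING" >> /var/tmp/aim-hits.log
  echo "Content-Type: image/gif"
  echo ""
  cat /var/www/1x1.gif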


I'm pretty sure you couldn't embed images in your profile.


Ah that's right. I remember when it was still not widely known you could catch some people, but I think people caught on eventually.


I wasn't an AOL user so it took me a few reads to get the concept. What this must mean is something like:

    [Joe]  what's up <a href="//example.net/?username=
    [Jane] nm, wbu
    [Joe]  ">join my chess game?</a>
Which could show on Jane's screen, if there is no HTML escaping at all, as:

    [Joe] what's up
    [Joe] join my chess game? (<-link)
Jane's message would have looked like it got swallowed because it was inside the HTML tag, but so long as Jane doesn't know what's up and ignores it, clicking the link instead, the owner of example.net would see a pageload of https://example.net/?username=%0A%5BJane%5D%20nm%2C%20wbu%0A... and thus learn that the other person is called Jane. Then again, for this to work it would already have to be on the screen of the person clicking the link, but not of the person who sent the link, or there would be no point. So I feel like I'm still missing something.


Less clever than that. jaywalk's comment got it. You could put a link in your away message/status/profile and see which people clicked it and/or were "stalking" you.


Something similar used to work on Facebook... and still does apparently!

At least if you are this person: http://facebook.com/profile.php?=73322363

(This link redirects to the profile of whoever clicks it)


In '89 we had a 9600 baud direct connection from the dorms to the computing center… We’ve come a long way.

Though I did have a Linux PPC box serving a website under my desk in the early 2000s... until they blocked port 80.

https://web.archive.org/web/20020124163137/http://www.linuxp...


I miss these days of the internet.


I'm seeing this sort of thing a lot lately (perhaps I'm looking for it?). A lot of people expressing, in one form or another, that we, our society, have somehow gone down the wrong path. Even in this article there is a subtext of change ... for the worse.

If you want to be dismissive, call it nostalgia — or point out that every generation feels this way about the way the world is compared to the way it was when they were younger.

But I think there is something truly broken in the world and I think people feel it too. The phrase that kept popping into my head when I was feeling particularly down about the way of things was, "No one would have chosen this." That is, in reference to the way the Western world is today.


Make Arpanet Great Again!


Thanks, I lolled.


Embrace Gopher, Gemini and IRC/Jabber again.


What mechanism tied an inbound HTTP request to the moo?


I was using a log watcher that could run a command on a regex match, but I remember having an elaborate .htaccess that would shell out all kinds of things... many ways to tie them together, all very hacky.


thank you for this insight.

it might not even be that hacky to be honest. in some ways modern log aggregation isn’t that different, just insulated by more steps and safe guards. less moos though.


Years ago I had /var/www/lights_on.sh that turned the lights on in my room. It was only hardened against RCE by the Wi-Fi password, so an attack was possible. It broke later. The real problem was that browsers sometimes prefetched it.


I'm trying to imagine what was popular back then. A Perl CGI script?


Perhaps showing my age, but that is still how I would do it. It's dead simple.
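
Whether Perl or plain shell, the whole moo handler fits in a few lines; a sketch with paths assumed:

  #!/bin/sh
  # cgi-bin/moo.cgi -- play the moo out loud, then serve the cow page
  aplay /home/me/moo.wav >/dev/null 2>&1 &
  echo "Content-Type: text/html"
  echo ""
  cat /var/www/cow.html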


Even simpler is tailing the access log into a script that makes sound upon matching the correct path.
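
Something along these lines (log path and sound file are assumptions):

  tail -F /var/log/apache2/access.log \
    | grep --line-buffered 'GET /moo.html' \
    | while read -r _; do aplay ~/moo.wav; done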


I love this story! A few years ago I had a server in my apartment with a similar 'dingdong.html' that would ring an old elevator bell I had salvaged from the junk. I didn't end up giving people the URL; I just made an ESP8266 button that would send a request.

If it had been a bit more reliable I would have kept using it but I had some issues with either the bell coil or the relay and it kept sticking. All in all the project ended up being more expensive than a wireless doorbell, but I enjoyed the experience (and the smartwatch notification when someone was at the door).


A quite similar approach was sharing your floppy disk A: drive. Any attempt to access it via the network would make that loud empty-drive noise, which meant someone was waiting for you in the chat.


This! This era is nostalgic for me and I am convinced we've only downgraded from the handiness and thoughtfulness of the tools we used back then.


this reminds me of the Yo! app


OMG, I’d managed to remove it from my mind. Ooof, the old times in the app world.


i wonder how often they refreshed it lol


Speaking as someone who hosts multiple websites, email, etc. in the corner of a room:

> [it should] be reliable if I kick a cable out of the wall

Right, if you want it to be reliable but also be able to cut its cables, then you will need a secondary host outside the home.

> or in the unlikely event that I get a bunch of traffic.

Are you serving media (music or video of more than a few seconds)? If not: DSL or mobile data (if your data cap allows) is fine for HN front page. Judging by the current page weighing 100KB, you can have 10 visitors every second at 1 MiB/s upload. (HN reaches that rate only in spikes, even at a top three position.)

> I’d also like it to be quick!

It's currently not quick at DigitalOcean (2 seconds for TLS setup, 12 seconds for HTML, 8 seconds for JavaScript, etc... 27 seconds total). It can only get better!

I can recommend something beefier than a Raspberry Pi, though, or at least than the Pi 1-3 speeds that I'm used to. I personally use an old laptop which is plenty fast for, well, anything you'd also ask of a daily driver, except it now doesn't need to render a GUI, which speeds things up a lot. They can peak up to 100W depending on the model, but are usually very low power when nothing is being asked of them.

> Oh, and I don’t want to have my home network hacked.

Then install unattended-upgrades, put admin panels (phpmyadmin, wp-admin) behind basic authentication, don't host things you don't trust (random code written by 'someone on the internet' that has never been tested by anyone), put it in a VLAN if you want to be extra cautious, and you'll be fine. It never hurts to keep your phone and other systems on the LAN up-to-date anyhow so they should be secure as well, even if someone does get in.
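
For reference, the unattended-upgrades part is two commands on Debian/Ubuntu:

  sudo apt install unattended-upgrades
  sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic job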


Pretty much! As engineers we all sweat sleepless nights mulling over five 9's and we conflate these valid business needs with our hobbies and personal art/projects.

It doesn't have to be this way! Put it on a Pi and have fun; if not for your sanity, at the very least do it for your second most valuable resource, your time. If all a person wants is a website on their RPi that plays a piezo buzzer when someone visits, they should just write that damned code; they shouldn't feel the need to worry about all the nitty gritty when all they wanted to do is have fun!


89.9999% has five nines too, just sayin' ;-)


0.99999% as well


~3.65 days of a year. I suspect a lot of small projects nail this.


Look how much electricity we would save if unused servers turned on only when users actually need them. We need WoL via browser. https://en.wikipedia.org/wiki/Wake-on-LAN
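
The LAN half already exists: any always-on box (say, the router) can raise a sleeping server with a magic packet (MAC address made up):

  wakeonlan 00:11:22:33:44:55   # or: etherwake 00:11:22:33:44:55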


You mean spinning up resources as needed?

I think I've heard of some services like that...


That's more like billing resources as needed, rather than actually spinning them up. The resources are always spinning.


Yep, exactly that. I know most hardware can go into power-save idle states, but it's still drawing some 5-15W of energy doing nothing. We need some kind of "e-ink" for web resources that don't change much and just need to be reachable occasionally/on-demand, with a slight unsuspend delay of course.


Hey, that's my Raspberry Pi when I find this one thing that looks fun, try it out, and give up a few days later. Extremely efficient uptime; it's 100% when I need it.


When a bunch of people ping you because the Plex isn't running right, you find that obsessing over the 9s early saves you headaches.


> I personally use an old laptop which is plenty fast

If connected on wifi to your router this of course solves the "kick a cable out" problem too, even if the battery is really old you'll almost certainly still have a few minutes.

> Then install unattended-upgrades, put admin panels (phpmyadmin, wp-admin) behind basic authentication

I'd go as far as restricting the directory to only allow access from the local network, and using WireGuard to reach the machine.

It's likely a server in the corner of the room will cost more than a VPS, certainly in my country. A server drawing 25 Watts costs more than the $3/month I pay. (That said, I also have a pihole running on a 1B - my parasitic house load is about 100W for the fridge, router, wifi, etc.)


> even if the battery is really old you'll almost certainly still have a few minutes

Very true! Battery from like 2015 still manages to keep it running for about two hours I think, which is frankly amazing. I was constantly dealing with taking the battery out of the laptop when not in use (98% of the time, it was connected to a charger, either in a classroom or at home, so I'd need only to bridge the stand-by/suspend/sleep period in the train). At the time, it didn't seem to have an effect as the battery still decreased in capacity and I was disappointed with the results, but I gotta say, it is certainly doing a good job since then!

Unfortunately, external drives on the 'server' are not on uninterruptible power and having two of them in a btrfs mirror caused me more headaches than I like to admit. Even after I figured out which one had the more recent data after going out of sync, I misunderstood the phrasing of the man page and mixed up the arguments for the device to be recovered and the device to recover from. 2/7 would not recommend btrfs on devices without UPS, or if you don't want to shell out the money to buy three instead of two large drives so you can have a 1:1 disk image of the known good device before starting to operate on it (which is what btrfs was supposed to do in the first place, but alas).

> A server drawing 25 Watts costs more than the $3/month I pay.

With the screen and keyboard backlight and such turned off, it should draw less than 25W unless you're actively making use of it (and thus it being worth it), but yes that's ballpark correct.

I also get a lot more value out of it than what I expect to get for $3/month, though :). LAN speed transfers can be nice, no network latency (at least not beyond of your control) when you host a game server, access control is all up to you, dedicated hardware, you can choose to upgrade to 16GB RAM at will (perhaps you got a new DDR4 machine and have no use for the old DDR3 RAM that still fits in this 'server') without having to pay extra every month for those gigabytes forever, buying storage basically at cost price...


> I'd go as far as protecting the directory to only allow access from local network, and use wireguard to reach the machine.

Or, you know, only allow access from the attached hardware and reach the machine the old-fashioned way: By walking.

Regarding costs, it's useful to know the cost of a watt: For my electric rates, the equation runs:

    $0.118/watt-month = $0.162/kWh x 730 hours/month / 1000 watts/kilowatt
So at least in my area the 25W server would not quite cost more than $3/month.


I roughly equate 1 W ~ $1/year, a bit more now.


I thought I had made a mistake when I calculated the cost of 100W incandescent lighting to be the awfully coincidental number of almost exactly 100€/year. Finding this to be correct was quite the revelation: makes estimating the cost of anything in the house so easy because I already knew the wattages :)

(The landlord had installed these sensor-activated ancient bulbs in the hallway, where I pass through to the cellar / power meter, and I was trying to track down this mysterious 100W that seemed to be always running, without fail. Turns out, it was only running when I was checking the meter! We then did the math with a better runtime estimate and still went out to buy LED bulbs at our earliest convenience. They're brighter than before (we erred on the high side), just as warm light, and use 2.5x less power.)


No matter how common it is, I never know what "2.5x less than some reference number" means. Is it "divide the reference number by 2.5"?


Correct, i.e. 40W instead of 100W. It sounded more impressive than "40% of the original value" so I went with "2.5x less". Not the best measure to choose one's words by, admittedly.


40W LED? Wow that's big! I think the biggest LED bulb I have is 11W


2x20 (notice plural 'bulbs' in the original message a few steps up the thread), and this is actually measured whereas iirc the box said a bit less

And yes, in my opinion we erred on the high side, but it's not far off from what the original incandescents drew (apparently 2x50W, measured).


You managed to say that immediately. Were there other serious explanations that came to mind? If not then you need to have more self-confidence because you do know!


I second the isolated VLAN approach. I host all my public-facing sites in a VLAN specifically made for that, which grants no access to anything private.


I third. I've got our computers and phones on one VLAN, everything else is on a separate VLAN (streaming boxes, cameras and other smart home crap, guest devices, etc).


I did buy a /slightly/ more expensive web-managed switch with the precise idea of playing with VLANs... Needless to say, I never "found time" to actually do it :)

One day I will. In the meantime, could you please drop some links to some good introductory pages?

Thanks!


I'm using a ubiquiti managed switch and three ubiquiti access points. Basically, you set up VLANs in the Unifi management software and then you can assign ports on the switch and Wi-Fi profiles to a particular VLAN.

From there, any device that connects to one of those ports or wi-fi networks will use the assigned VLAN. The access points are great because you can create several wi-fi networks and then the software provisions them to however many access points and switches you have, so you don't have to log in to each one and set them up separately.

Any links I sent would be specific to Ubiquiti, but happy to do so if you plan to use their hardware.


How did you solve the problem of getting a stable mapping from DNS name to IP address?

For me, that's the big challenge; all I have is home internet on a dynamic IP provided by one of the big cable monopolies in the US.


My ISP simply gives everyone a static IP by default.

I know of only one ISP in the Netherlands that uses CGNAT and there you can ask support to fix it, which takes them 24 hours. I learned that the hard way when wanting to have a gaming night, hosting a factorio server in my student room. No gaming night for me, or so the ISP thought while rubbing their hands. It took me a bit but I eventually managed to proxy the UDP traffic somehow, not sure anymore if I used hole punching or somehow encapsulated it in TCP and reverse SSH tunneled or something. (Edit: on second thought, pretty sure I asked the other participants if they had IPv6 -- they did not -- and then proxied the traffic from my server via IPv6 using iptables. /edit)

We are quite fortunate with having had an early ISP community that managed to gobble up all the IP addresses we'd need for a good long while, and our population is relatively stable compared to other parts of the world. I know not everyone is this fortunate. (Hello ipv6...)

Even in a place like Germany, it seems one needs a business connection to get this service; it's simply not offered to consumers at all, at least not that I could find in one town in NRW. This is why I'm so happy the Netherlands has ISPs like Freedom (successor of XS4ALL) and Tweak, who care about more than being cheap. Even if you don't use Tweak or Freedom, I feel like it keeps the local competition sharp.


You can use something like dynamic DNS updaters[0]. They run on the box, and when they detect that your ISP has changed your IP, they update the DNS records accordingly.

[0] https://github.com/timothymiller/cloudflare-ddns


Here are several things that you can do (from more to less affordable):

- Set up public IP updating. Your server runs a daemon that updates the DNS record automatically. You can do that with NameCheap. ($)

- You can pay $5 for a DigitalOcean droplet that acts as a reverse proxy and just forwards traffic to your real server. ($$)

- You can pay for "enterprise" service and get a static IP. ($$$)


One option would be to use Cloudflare Tunnel [1]

You would run a program on your system which connects to Cloudflare. The traffic goes to Cloudflare first, and then gets forwarded to your system.

[1] https://blog.cloudflare.com/tunnel-for-everyone/


I keep being amazed how the self-hosting community loves to recommend "just send all your traffic through cloudflare". It's the antithesis of self hosting.


Cloudflare Tunnel can be a step in the right direction. That said, I maintain a list of selfhosted alternatives here:

https://github.com/anderspitman/awesome-tunneling


Nice, thanks for the list! Do you have any recommendation how to tunnel from a VPS to my home server if I'm already using Tailscale? Just use any old reverse proxy like Nginx/Traefik/Caddy?


Yep, exactly. Just use the Tailscale IPs or domains in your reverse proxy config.


Caddy supports .ts.net domains and will pull the cert from the running Tailscale daemon on your system. And even better integration is coming soon; Tailscale is working on things.


Sounds great. I'm not in a rush, so maybe I'll just wait until Tailscale releases whatever they are working on.


I'd say that "self-hosting" is defined by where your processing and data reside, and who controls them.

But if you want to be accessible to the outside world, you need to direct your traffic outside; I don't see a substantial difference between routing your traffic through Cloudflare, Comcast, Equinix, or any other major connectivity provider.


> I'd say that "self-hosting" is defined by where your processing and data reside, who controls these.

And I would mostly agree so long as you're the only one who has access to said data. There will always be "ISPs", of sorts, that your data needs to pass through; that's simply how the internet works.

The nitpick about Cloudflare is that they are starting to act as a gateway to the internet. Maybe you can turn their fronting off if they start giving you trouble, or maybe your registrar also runs behind Cloudflare. Anyway, bit of a philosophical discussion how much power to vest in one company.

The real trouble is that their main offering involves giving them the private keys to your traffic. I don't know if that's also the case with this Tunnel product, but at least for regular websites they process your actual data, as with the bank example (a colleague at said bank was not happy).


Cloudflare tunnel even lets me host a vanity website (potateaux.com) from a NAT'd LTE uplink using a regular phone hotspot. Game-changer, especially given the price!


I like ngrok


There are free dynamic dns services available. dns.he.net is one.

Try not to worry too much about what happens when your IP is reassigned before you can update the name.


You can rig up your own dynamic DNS pretty easily. Most DNS services have some simple API you can use, so usually it's just a curl line in your crontab to run every minute.
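
With a Duck DNS-style update URL, for example, it really is one crontab line (domain and token are placeholders):

  * * * * * curl -fsS 'https://www.duckdns.org/update?domains=example&token=YOUR_TOKEN&ip=' >/dev/null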


Personally, I host my DNS with dyn.org, and use something like ddclient (which runs on my Linux firewall/router) to update my DNS records with Dyn in the rare event it changes. I've never had issues with it.


I have a cron that updates the DNS entries on Cloudflare with my current IP address. This runs every five minutes.


Mind sharing your script? Just want to compare :)


This is to update a record named jellyfin. It's a Python script.

  import requests

  IP_API = 'https://api.ipify.org?format=json'
  CF_API_KEY = 'YOUR_API_KEY'    # Cloudflare API key
  CF_EMAIL = 'you@example.com'   # Cloudflare email address
  ZONE_ID = 'YOUR_ZONE_ID'       # Zone ID
  RECORD_ID = 'YOUR_RECORD_ID'   # Record ID for this DNS entry

  # Look up the current public IP.
  resp = requests.get(IP_API)
  ip = resp.json()['ip']

  # Point the A record at it.
  resp = requests.put(
      'https://api.cloudflare.com/client/v4/zones/{}/dns_records/{}'.format(
          ZONE_ID, RECORD_ID),
      json={
          'type': 'A',
          'name': 'jellyfin',
          'content': ip,
          'proxied': False
      },
      headers={
          'X-Auth-Key': CF_API_KEY,
          'X-Auth-Email': CF_EMAIL
      })
  resp.raise_for_status()


I have a cron job that updates my domain's records at digitalocean every hour via their API. But in practice my ISP only actually seems to issue a new IP if I restart my router.


If you're lucky and your ISP supports dynamic DNS updates: Get a router/gateway capable of running OpenWRT (alternatively some routers might support this natively, or you could setup an old PC for routing), use the appropriate client and set it up to adjust the DNS record [0].

[0] https://openwrt.org/docs/guide-user/services/ddns/client


> How did you solve the problem of getting a stable mapping from DNS name to IP address?

I technically have a dynamic IP, as the ISP also sells an upgrade to a guaranteed static IP which I don't pay for.

However, I've been getting assigned the same IP consistently since 2013. I used to use a dynamic DNS service to keep track of it but stopped doing that since it never changes.


It never changes until it does. And eventually it will. Just be ready for it to change if anything you count on relies on it. I know this from experience (and far more than just mine).

P.S. - I pay for a static IP from my ISP. Don't count on that never changing either. I know this from experience.


You can set up your domain to have very short TTL (like 2 minutes) and have a script polling your external IP every 2 minutes to watch for IP changes, then have the script change the records of your domain when the IP does change.

Most nameservers provide a REST API for updating records so this is very easily done.


But don't set your TTL too low or many caching resolvers will ignore it and use a default instead!


In my country the dynamic IPs at most fiber providers are so long-lived / stable that we can basically treat them as static. You have to phone them if you want to force-renew your IP address, because doing it from our side we end up with the same public address. I appreciate this behaviour from my provider. Boring & predictable. Just check once a month that it is still the same.


I moved to an ISP that provides a static IP for $5 extra a month.

But before that I created a service for looking up my IP address and hosted it for free at Fly [1]. Then I set up a script in cron to update my DNS every 5 min if it had changed.

[1] https://fossil.chillfox.com/echo_ip/index


I guess this depends, but most ISPs where I live will do a static IPv4 for residential. Mine also does a /56 IPv6 allocation if you ask.


Once upon a time I ran a local Shoutcast radio server on Winamp 2 and used no-ip.org to configure a DNS name dynamically


Most ISPs offer a static IP address as an add-on or higher-cost service. Might vary depending on where you live, though.


Dynamic DNS has been a thing since the first dotcom boom. Your router probably already supports at least one service.


I'd call your ISP, because mine is not small and offers "business" class service which costs the same as residential, reserves a static ip, and slightly boosts uplink speeds.


Dynamic DNS as others have mentioned. Or, many ISPs will provide static IPs for an additional cost, but you may need to switch to their business service.


I didn't see this as an answer, but use Tor (: It has the side benefit that it's harder to discover your service(s) on the wider Internet.


Good suggestion! I used this in 2014. Blog post about it: https://lucb1e.com/?p=post&id=120 (Btw, I no longer vouch for the quality or correctness of an 8-year-old post of mine.) I remember that the latency wasn't amazing or anything, but I apparently found it acceptable enough to use it for SSH.


What does a static IP cost over there? It was a US$7.50 one off charge here in New Zealand.


Duckdns.org


ddns tools like noip.


> I can recommend something beefier than a raspberry pi, though, or at least than the pi 1-3 speeds that I'm used to.

Thin clients are perfect for such tasks. They're more powerful than a Pi, fanless, use little power, and come with a proper network card. Second-hand HP T620s are cheap.


> Then install unattended-upgrades, put admin panels (phpmyadmin, wp-admin) behind basic authentication, don't host things you don't trust (random code written by 'someone on the internet' that has never been tested by anyone), put it in a VLAN if you want to be extra cautious, and you'll be fine.

Also default security measures such as disabling root login via SSH, disabling password login via SSH, using fail2ban (also works for wp-admin and the like!), and installing a firewall that only opens the services you want to access from the outside.
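
The SSH part boils down to two sshd_config directives; a sketch assuming a modern OpenSSH with a conf.d directory:

  printf 'PermitRootLogin no\nPasswordAuthentication no\n' \
    | sudo tee /etc/ssh/sshd_config.d/hardening.conf
  sudo systemctl reload ssh   # service is 'sshd' on some distros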


Old laptop at your own place + second old laptop at a home lived in by family or friend would probably work great for this.

Hell, now I want to try this with two old but decent Android phones - they would sip power, have a built-in UPS, and would blow an RPi out of the water speed-wise. Throw a USB-C to Ethernet adapter on each and set up for HA (or if you were really lazy, just a simple round-robin DNS setup). Put one at a friend's house, have them both set up with the free Cloudflare proxy thing, and you would not even need to open any ports on your firewall.


The problem with using old phones is that the battery swells up after a few months left continuously on charge. You could extend the life using Tasker and a smart plug, but you'd lose a lot of the UPS's potential.


Old phones don't get security/kernel/driver updates, careful...


To solve the redundancy problem I wonder if running something like Hashicorp's Nomad on a few raspberry pis split across some friends houses could work nicely. Each site gets hosted at multiple houses for redundancy but no one person needs multiple devices.


Not familiar with Nomad; how does that work under the hood? Do you proxy all traffic through this third party who then load balances with a regular old http proxy, or is it actually self-hosted by the set of friends? With multiple DNS A records, this shouldn't work (it'll just fail in 1/N cases if 1 of the N IPs is down), so I'm curious how this is different from just hosting with Hashicorp directly.


Each friend's Nomad "client" (a node in the cluster) would accept a "job" which is the HTTP server. The missing piece is communicating back to some authority which clients are running the service, and on which ports. This could be done with DNS SRV records. Most commonly, Consul is used for DNS in a Nomad cluster.

This article is likely to give you a good idea of the architecture: https://developer.hashicorp.com/nomad/tutorials/load-balanci...


So if I understand it correctly, this haproxy they suggest is the new central point of failure? Sorry for being skeptical but I'm not really understanding the advantage.


You can also run multiple haproxies with identical config. One machine goes down? Your existing proxies still balance load to jobs.


But people would need to know which other domains run the other proxies. Might as well manually type in the other domain.

The only way I see to avoid a SPOF is with anycast, which I think involves running your own ISP to be able to arrange your own BGP sessions and announce at multiple locations.


IIRC, Nomad doesn't do well with having clients on distant networks; you could have a server/client at each friend's house, but I don't recall off the top of my head how service discovery would work in that case. I'm sure it's supported by consul in some way though.


Hi there. I have an R-Pi v2, and I'm wondering how well you think it would hold up for a basic blog site. It doesn't need to be super fancy, but I would like it to at least be a little nicer looking and just a little more complex than, for example, Ycombinator.

Is this worthwhile to do? I'm concerned about using a Pi, because micro-SD cards seem to be notoriously prone to corrupting data in less-than-ideal power situations.


> I have an R-Pi v2, and I'm wondering how well you think it would hold up for a basic blog site.

Whether it could survive a lot of pageloads, like when submitting to HN and it gets traction, 100% depends on the blog software.

It won't run Wordpress well because that software is ridiculously heavy, and I frankly don't have good examples of database-based blog software aside from something I wrote myself. Mine still does multiple mariadb queries per pageload, so it's not as though it's extremely lightweight, but page generation comes out to a few milliseconds on a laptop with a CPU from 2012 inside. This software would survive the HN homepage easily.

I would estimate that anything that can handle >10 requests per second will survive the HN homepage, but if you're on the edge of 10 r/s then perhaps it might be slower during the busiest minutes. If you use static page generation (like Jekyll; though I'm personally not a fan of how it works, it's the most well-known example) then the Pi will definitely survive serving those static pages easily.
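
If in doubt, it's easy to measure before going live, e.g. with ApacheBench against the Pi (URL hypothetical):

  ab -n 500 -c 10 http://raspberrypi.local/blog/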

> I would like it to at least be a little nicer looking

The looks don't make it heavier or less heavy. Having a few style rules applied to the page is a few extra bytes per pageload, but they're not going to make the difference between it working or it not working.


You can always plug in a spinning disk. I used an RPi Model A to stream HD video off an HDD for around a year and it worked just fine. Keep an image of your SD card in case it gets corrupted and you need to reinstall.


Okay, so forgive me if this is a dumb question, but I thought the R-pi 2 requires the OS to be on the micro-sd to boot and can't use USB or PCI adapter connected drives?


Yes, that’s true. What I was suggesting:

* Put the OS on the SD card.

* Put all the data on an external drive that is more reliable.

* Take a snapshot image of the SD card when setup is complete. If your SD card is ever corrupted you can use the image to restore the server.
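
For the snapshot, dd from another machine is enough (device name varies; check lsblk first):

  sudo dd if=/dev/sdX of=pi-sdcard.img bs=4M status=progress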


Oh, okay. Thanks.


It's possible to keep just a bootloader on the SD card, and have everything else on USB storage, so if the SD fails the data is safe and you just need to set up the bootloader again.


I hosted a phpbb board out of my room during high school. Our school board had just done the "one laptop per kid" thing, and the machines were all locked down and most of the fun sites were blocked, but not my site, because IT didn't know about it. So everyone went there to chat. We had an IRC server. People became friends that otherwise were in different cliques irl.

One time we were supposed to be doing work during class, but everyone was on IRC chatting. The classroom was completely silent. Somebody wrote "somebody say penis" in the channel and the whole classroom started laughing at the same time, for seemingly no reason. The teacher was confused. It was a good time to be a 15-year-old dorking around with computers.


> We had an IRC server.

How long ago was this? You'd think the school had a strict filter on outside connections, especially to IRC servers.


> "somebody say penis"

Edgy...


Nothing "edgy" about kids finding words that describe genitals funny. It is like that everywhere on the planet and it always has been.


we were like 14 my friend


The "who can say penis the loudest without getting in trouble" game was very much a thing with my friends in middle school.


I graduated HS in 2012. There were a few guys who would literally yell the word "penis" at the top of their lungs in class. I now recall a time when one kid was working in a group with one of these troublemakers, and the problematic one raised his voice to say, "Why are you drawing a picture of a penis 'John'?" when in fact John was merely working on his own part of the project.

Bunch of clowns :P


> So… practically: how to achieve this in 2022?

I'll paraphrase myself from a few days ago[0]:

The reality is that we've let you down. Self-hosting shouldn't be any more complicated or less secure than installing an app on your phone. You shouldn't need to understand DNS, TLS, NAT, HTTP, TCP, UDP, etc, etc. Domain names shouldn't be any more difficult to buy or use than phone numbers. Apps should be sandboxed in KVM/WHPX/HVP-accelerated virtual machines that run on Windows, Mac, and Linux and are secure-by-default. Tunneling out to the public internet should be a quick OAuth flow that lets you connect a given app to a specific subdomain, with TLS certs automatically obtained from Let's Encrypt and stored locally for end-to-end encryption.

The technology exists to do all of these things, but no one has taken the time to glue it all together in a truly good UX (we're working on it). Pretty much every solution in this space is targeted at the developer market, not self-hosters.

[0]: https://news.ycombinator.com/item?id=33098471


Sandstorm.io glued this all together in 2014 and it's still available today. https://sandstorm.io


Wish it still worked... on a BRAND new VM:

  GPG signature is valid.

  ** INSTALLATION FAILED **

  The downloaded package seems to be more than a month old. Please verify that your computer's clock is correct and try again. It could also be that an attacker is trying to trick you into installing an old version. Please contact security@sandstorm.io if the problem persists.

  Hmm, installation failed. Would it be OK to send an anonymous error report to the sandstorm.io team so we know something is wrong? It would only contain this error code: E_PKG_STALE [yes]

  Sending problem report... Submitting error report failed. Maybe there is a connectivity problem.

  You can report bugs at: http://github.com/sandstorm-io/sandstorm


My fault. The install script refuses to install a release more than 30 days old in order to prevent downgrade attacks, but I failed to push a new release in the last 30 days.

Yeah, that provision of the install script is absurdly paranoid. No one will ever actually try to perform a downgrade attack on Sandstorm. OTOH the time limit has had the side effect of forcing me to push regular releases over the last 5.5 years (since the company shut down). So, I hesitate to just remove the check, for fear it will make me lazy.

I should have a release up in an hour or two, or you can edit the shell script as ocdtrekkie points out.

EDIT: Done. Installer should work now.


Looks like we're a couple days late on a release. (The current release is 32 days old.) If you edit the install script about here: https://github.com/sandstorm-io/sandstorm/blob/master/instal... to go for say, 45 days instead of 30, it'll work just fine.


Sandstorm is awesome, and still way too hard for my dad to use.


I am fairly technically savvy so I tried using it today and followed their installation guide.

1) I had to change their code to accept a release that is more than 30 days old. 2) It had me pick my domain, said it started, and gave me a sandcats URL to go to that doesn't work. Feels like something is missing from the installation instructions.

I wouldn't say it's too hard for your dad to use, it's just poorly documented.


1) Fixed

2) This is kinda the hard part because it depends on where you are hosting it. If you ping your Sandcats address, does it return the IP of your server/location? Is there a firewall, router, etc. where you may need to open or forward a port?

Agreed that the parent posts suggest this should be easier. It really probably should be, but it's hard to do that securely without depending on some outside service. Sandcats and Let's Encrypt remove a lot of difficulty, but CGNAT and port forwarding and such might be best defeated by autoconfiguring something like Tailscale or Cloudflare Tunnel.


I would have expected the installation page to at least maybe mention that port forwarding was required. I assumed you were using something like ngrok.

"The installer will offer you free dynamic DNS and valid HTTPS via sandcats.io, a service maintained by the Sandstorm development team. "

https://sandstorm.io/install


We have some ideas where to go on that, in general and especially for setup, though I am generally in favor of small community hosts, such that your dad should be able to use your server, instead of having to run his own.


I do like the federated approach for many services, but for many others I think it should be individual. Sandstorm needs to run on Windows and Android, and support tunneling for those behind CGNAT et al.


CGNAT is definitely a scary thing I don't presently have to deal with, but ideally Sandstorm should get some sort of solution for it, yeah.

I think it's important for self hosting solutions to not run Android or Windows: People tend to take those platforms out and about. But obviously the x86 server requirement is (currently) a big limitation for sure.


> I think it's important for self hosting solutions to not run Android or Windows

Not sure I follow. Those are the two most widely deployed operating systems. If you want people to be able to upcycle their old devices for selfhosting, I think that's where efforts should be focused.


Mmmm, you're thinking upcycling. My concern was you wouldn't want someone running their self hosted cloud on say, their phone or laptop which they might take with them out of their home.

I mean, technically we could probably eventually get Sandstorm and similar platforms on WSL? Android is radioactive garbage and nobody should go near it. Especially for an older device already abandoned driver-wise, you'd never manage to do anything secure or stable with it long-term.


You don't even have to use WSL (which requires Windows Pro I believe). Accelerated QEMU already has experimental support on Windows hosts via WHPX, on both Home and Pro.
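
e.g. a current QEMU build for Windows takes the accelerator flag directly (disk image name made up):

  qemu-system-x86_64 -accel whpx -m 2048 -drive file=server.img,format=raw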

Android doesn't currently support virtualization, but there's hope it will eventually, which should mitigate many security concerns.


You mentioned phones, which reminds me how much I wish there was a nice toolchain that would allow for hosting a webserver or maybe a federated social network of some sort on old android hardware. There are millions of old smartphones sitting in junk drawers and it's a shame they can't be put to good use.


What I want:

1. GP quote: "Domain names shouldn't be any more difficult to buy or use than phone numbers."

2. Your quote: "federated social network of some sort on old android hardware."

Put 1 and 2 together.

The only reason Facebook exists is as a middleman between people trying to pass messages to each other.

If people could easily find each other and run trusted non-proprietary software: (a) there'd be no ads, and (b) all comms would be direct, so government agencies couldn't simply compel access from a single source.


The tricky part is people have to be willing to pay for the plumbing in this case. I think that paradigm shift can take place, but we have to show them why it's worth it first, which is difficult due to network effects.


Google could set up a domain and simply provide people freely with "[usersPublicKey].domain" subdomains updated by users with a dynamic DNS client. Even if this were a Google special DNS service not part of the global DNS, this could work.

Google could also then provide a messaging app to use this service, but if some other open source app were to become the de facto standard and make Facebook irrelevant, that is still a big win for Google. Advertising $$ with one less big competitor.

If Google marketed this correctly then they could be seen as a champion of privacy too.


What you are looking for is abandoning the Android OS on them and flashing something akin to postmarketOS. This can still come with a workable touch-based GUI, but also lets you just install things like web servers and PHP runtimes via a package manager. Surprisingly broad support: https://wiki.postmarketos.org/wiki/Devices


Exactly what I was thinking: I don't see the point of running a server on an old Android device. Just install Linux on it, right? And for that, pmOS seems like a promising project!


I've done some work on this. Android is a very toxic environment for this sort of thing, primarily due to draconian filesystem permissions and aggressive killing of services. It's all in the name of security and battery life, but I wish there were an easy way to turn that all off for selfhosting.

I've also seen people mention that apparently the flash memory doesn't do well with server type workloads, but a lot of that could probably be mitigated with logging to RAM, using a CDN, etc.


It's a lot easier to buy domains than phone numbers, sadly.


Technically true, but you have to create an account with a company that is targeted at very technical customers. And using them requires understanding DNS, which is an insane prerequisite. We need a consumer domain registrar.


I just saw that icloud.com has a domain registrar built in (for receiving emails). I would say that is as "consumer" as it gets, no?


That's good, but should every service have to implement their own registrar? We don't all have the resources of Apple. Plus, what if you want to host other services on subdomains? Even if you can manually set DNS records, you shouldn't have to. I should be able to use the registrar of my choice, and icloud should use an OAuth flow for me to approve them having control over a subdomain, and they make changes via a standardized protocol.

There's some previous work in this space and I've also dabbled myself[0].

[0]: https://takingnames.io/blog/introducing-takingnames-io


It would not be too hard to use a Cloudflare Tunnel (free) or NoIP or similar. Really depends on what you want to host exactly, though.


Cloudflare Tunnel solves part of the problem, but not nearly all of it. Plus it's targeted towards developers and operates as a loss-leader product.

But I think a company that's similar in a lot of technical ways to Cloudflare but targeted towards self-hosters instead of developers could be successful.


If you go the IPv6-only route it can still be very simple. Also, if you buy a device such as a NAS, it often comes with its own webserver. On the other hand, I would strongly advise anyone not to expose a NAS to the internet...


With IPv6 the user still has to understand firewalls, and IPv6...


Almost no individual user has an internet connection that allows self-hosting.


That's either one hell of a generalization or a USA specific thing. There are definitely some ISPs that don't prohibit it and even give you the tools for it - static IP, unlimited gigabit upload.

I doubt mine would say anything even if I pushed 100TB a month through it. All their congestion issues are on the download side, thanks to residential traffic being mostly download (Netflix etc).


Are you referring to reachability or bandwidth? Reachability is solved by tunneling[0] and SNI routing. 1Mbps upload is plenty for many self-hosting uses. Or are you talking about something else?

[0]: https://github.com/anderspitman/awesome-tunneling


Probably TOS. My ISP technically bans running any type of server, but it hasn't been an issue for me.


Out of curiosity, if I may ask: where do you live?

(Because I've never heard of such a thing.)


Los Angeles, but I've had similar clauses everywhere I've lived, and with multiple ISPs (Starry, Charter, Time Warner, Verizon, university housing, etc.)


While my ISP, Comcast/Xfinity, does have a "Business Plan" that allows you to have a server, the normal residential plans prohibit it.


Ah that makes more sense. Also very sad. Hopefully as fiber becomes more prevalent that will become less common.


ISPs used to block ports 80 and 443, but it seems they've relaxed that restriction for quite some time now. Maybe it's regional.


IMO you should really be using tunneling anyway. I don't want anyone knowing my residential IP.


Cox in Nevada just started blocking port 80 during the last year or two.


Every time a port is blocked an MBA gets his wings.


All ISPs I had allowed it. UPC requires a phone call, as by default they do CGNAT on their IPv6 configuration and need to switch you to IPv4 if you want incoming traffic. (If someone can explain the reason behind such an approach, I would be thankful.)


I’m on comcast and self host. ¯\_(ツ)_/¯


Realistically, anyone with an IP connection already self hosts a wide assortment of IP packets. As long as it isn't commercial or abusive, they are never going to know or care.


This is false. I got nastygrams from my residential ISP in the US accusing me of running servers because I rsynced 3TB of photos offsite as a backup.

It was not a server, not commercial, and not abusive. I was threatened with disconnection.


What did you do to deal with those nastygrams? I'd probably try to feign ignorance, blame it on a computer virus or something, and avoid that kind of massive transfer in the future. I run my own server from home so I'm curious if I could get away with that, or if I should consider alternative solutions.


3TB is not massive. I know professionals who shoot that much in a year; this was all my digital photos from 1997-2021.


Wow, that seems pretty extreme. What's your ISP?


Cox. I also pay extra each month for unlimited data transfer.


ISPs don't like paying those data egress fees


I've had one at home for over 25 years. (Currently, I have to pay extra for a business cable connection, however!)


I'm a huge fan of running web servers in the house - but they don't have to be connected to the Internet to be useful and fun! An Apache instance on my always-on box in the basement [0] serves an incredible number of uses and can be connected to from any computer-like thing on my home network. Old-school CGI scripts can be written almost as quickly as terminal scripts and HTML forms make super quick interfaces. A home web server is probably STILL the easiest way to get files to heterogeneous computers and phones and tablets and...

[0] https://ratfactor.com/setup2
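To give a sense of how quick those CGI scripts can be, here's a minimal bash sketch (the cgi-bin path is the Debian/Apache default; yours may differ):

    #!/bin/bash
    # Save as /usr/lib/cgi-bin/status.sh and chmod +x it;
    # assumes Apache with mod_cgi enabled.
    echo "Content-Type: text/html"
    echo ""
    echo "<html><body><h1>$(hostname)</h1>"
    echo "<p>Uptime: $(uptime -p)</p>"
    echo "<p>Disk: $(df -h / | tail -1 | awk '{print $4}') free</p>"
    echo "</body></html>"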


Our (decades old) house web server has a home page with useful links, and in particular to a simple wiki on the same box. Without any pushing (that never works) the rest of the house has slowly learnt to use it, so the calendar, the wish lists, the pet histories, holiday ideas, all sorts of stuff are on it. The server also hosts simple apps like JS clocks, calculators and of course the [0] pewpew attack map (maybe a little less funny these days, but hey).

Edit: ref CGI, there are a few apps on there that do that as well (e.g. a fish tank temperature monitor). The nice thing about a small private network is being able to do CGI scripts in bash/whatever without having to worry too much.

[0] https://github.com/hrbrmstr/pewpew


How do you give your intranet site an internal domain? Or do you make the family use the IP address?


You probably already have this. Nearly every ISP has been delivering home gateways with DHCP and DNS built in, and DHCP-registration into the local DNS cache. So <your-computer>.lan or <your-computer>.home are likely candidates. Check your settings to find out.

Besides DNS-based naming there is Multicast DNS (Bonjour/Avahi/ZeroConf) and NetBIOS naming (which still exists and works on most operating systems that have Samba or something similar).

In any case, you don't need a remote service like Cloud9 or Tailscale for any of this. Normal networking has done this for decades.

The next step beyond this is running a more capable DNS system in your home network. Generally this takes the shape of a DNS forwarder service running on a router or server. It could be as simple as a PiHole or OpnSense firewall, or however complicated you might want to make it.
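For that more capable setup, a minimal dnsmasq sketch (the names and addresses here are made up for illustration):

    # /etc/dnsmasq.conf on the router or a Pi
    domain=lan                      # DHCP clients get names under .lan
    local=/lan/                     # never forward .lan queries upstream
    expand-hosts                    # /etc/hosts entries also get the .lan suffix
    dhcp-range=192.168.1.100,192.168.1.200,12h
    address=/nas.lan/192.168.1.10   # pin a name to a static server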


See also .home.arpa which is designated for this purpose.


You can use mDNS [0] to publish an internal domain to others on the same LAN. Alternatively, you can use something like a Pi-Hole [1] to be the DNS server for your LAN. Pi-Hole gives you GUI way to point any domain to any IP [2].

[0] https://wlog.viltstigen.se/articles/2021/05/02/mdns-for-linu...

[1] https://pi-hole.net

[2] https://docs.callitkarma.me/posts/PiHole-Local-DNS/


Not the OP, but for a small local network it is easy enough to sneakernet hosts files around. (On a USB drive if not a properly classic floppy.)

Also, somepcname.local mDNS works on most operating systems today (once you grant firewall permissions to it; on Windows, for instance, by setting your home network as a "Private" network when it asks Public or Private).


We have a lot of computers, so DNS is easier than hosts files (also easier for dynamic updates, e.g. random Pis given a hostname will update DNS via DHCP, so no need to find the IP address and update other hosts).


It runs DNS and DHCP as well (so we have a domain that's the same as the house name); the DNS is primarily caching, so for most sites it's just stock internet (except a bit faster due to the caching). It's also authoritative for a small number of domains that serve ads/do tracking (it's amazing how much better that makes the internet; even the kids comment on how fast it is compared to their friends' - and we're out in the sticks on a relatively slow connection).


Tailscale MagicDNS [1] can also do this, which you can also setup with TLS certs using their Let's Encrypt integration [2].

1. https://tailscale.com/kb/1054/dns/

2. https://tailscale.com/kb/1153/enabling-https/


This might be overkill, but you can host an internal domain using public DNS.

I've got a domain, and I've added multiple A records pointing to IPs of servers in my 192.168.X.Y NAT. This has a downside, though: with a short enough TTL, you may not be able to access your server during intermittent connectivity problems.

I'm using letsencrypt through traefik for the certs.
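For anyone wanting to copy this, the records are ordinary A records that just happen to carry RFC 1918 addresses (domain and IPs below are placeholders):

    ; in the public zone for example.com
    nas.home.example.com.    300  IN  A  192.168.1.10
    media.home.example.com.  300  IN  A  192.168.1.11

One wrinkle: since Let's Encrypt can't reach a private address over the HTTP-01 challenge, certs for names like these are typically issued via the DNS-01 challenge, which traefik supports.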


I personally use avahi (mDNS) as many other replies have suggested.

I use NixOS, so it was easy to make a function to abstract over the config. In each computer's config, all I do is specify a hostname. This function does the work (or really, some nixpkgs committer did):

    { hostName }:

    {
      services.avahi = {
        enable = true;
        nssmdns = true; # Allows programs like ssh to resolve .local domains via avahi
        inherit hostName;
        openFirewall = true;
        publish = {
          enable = true;
          addresses = true;
          workstation = true;
        };
      };
    }


Configure following items on your router:

- desired hostname and search domain (can be bogus, though not recommended)

- DHCP server parameters with the router's IP as primary DNS

- DHCP static assignment for each server

- DNS static assignment such as "yourserver.bogusdomain.tld 192.168.10.10"

- (optionally) domain names, ddclient, certbot

"Proper" classical router/firewall OSs like Cisco IOS, Juniper JunOS, VyOS, RouterOS, OpenWrt, all easily do it like they do a cigarette, but good gamer routers and some NASs also can do it okay in many cases.


Edit the internal dns server(s).


I am asking this out of ignorance, not knowledge. Isn’t that why the Lord gave us a hosts file?


Or run a local DNS in your router, so you don't have to set each client device up.

(How would you even add hosts to an iPhone or something?)


But of course. Thanks.


you'd have to edit the hosts file on every single device you want to access that domain. personally, i wouldn't even know how to do that on any of my mobile devices.


TIL. Thank you


Network router with DNS resolver, internal domain, all DHCP clients get registered with a name as a subdomain. mycomputer.networkname.lan - I use pfsense, but lots of others support this.

You could have your own top level domain as well.


if you're using pi-hole, you can actually do all of this within the admin panel itself. they added Local DNS a couple releases ago.


If you have a Pi hole, you are already running a dns server. Otherwise, it's not too hard to set one up.


See https://pibox.io/ + https://kubesail.com/ for a low-energy, small, Raspberry Pi-driven, quiet option. I have had one of them running in our basement for about a month. KubeSail, the startup that sells them, offers DNS and backup services, but the box has been designed to keep running even if the company eventually disappears.


Thank you for the shoutout. We’re working our butts off to make home hosting fun. I hope we don’t disappear - but we want to sell real tech and real skills. No black boxes, no moats, no lock-in. Investors hate it! :P


> A home web server is probably STILL the easiest way to get files to heterogeneous computers and phones and tablets and...

Similarly for printing, I would love a local web app that I could submit PDFs to and get a printer to print the pages. I could imagine scanning working in reverse. I tried googling a bit but alas it seems no one has done it.


Some higher-end printers have HTTPS or LPD (or even FTP) printing built into them. As far as using a web app to queue to a printer attached to a local Linux machine, the web app could be as simple as a file upload form backed by incron, with the right command assigned to the event, I think.

https://www.geeksforgeeks.org/incron-command-in-linux-with-e...
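Something like this incrontab entry would be the whole backend, assuming CUPS (`lp`) and incron are installed; the spool directory is a made-up path your upload form would write into:

    # incrontab -e
    # $@ = watched directory, $# = name of the file that triggered the event
    /var/spool/webprint IN_CLOSE_WRITE /usr/bin/lp $@/$#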


For dumb printers we use CUPS, even cheap printers (Oki B412dn here) just plug into the network and are found by most things (even Windows these days).


I also use CUPS on a Pi to put a dumb printer on the network, but I still routinely have issues with my devices not finding the printer or not scaling the page properly.

This is why I was thinking that a plain web app with a known good driver could solve these problems.


The problem with this is that we really can't trust the home network any more. You need to make sure that any services you run on it are zero-trust - i.e., they don't just assume that anyone inside the domestic firewall is a friend.

Because inside the firewall are a bunch of phones and laptops and things that are accessing random webpages and running random apps; and (depending on your level of home network paranoia) maybe a bunch of internet-of-things things, or networked speakers, or televisions, etc., etc.

So even your basement server for home-only use really needs a cert, and client auth, and obviously needs to stay patched... lest it become a monster inside the firewall itself.


This takes me back. My dad worked for IBM and had access to many broken ThinkPads (mostly broken displays), so he would bring them home for me to tinker with. In the end I installed Debian on them, installed ISPConfig, and rented out webspace from the laptops running under my bed.

Laptops are awesome for servers since they have built-in UPSes and are not very power hungry.

It was a fun experience and got me started on my road to becoming an MSP.


Yes! This was actually my first shot at home servers as well - same rationale and all. Old Pentium laptops seemed to get thrown out every few weeks, easily ran Windows Server 2003 (I hadn't seen the light yet), were reasonably easy to repair, used very little power, and were practically silent in operation.

Someday I'd like to chronicle how my homelab evolved, but at the end of the "laptops" generation and immediately prior to the "VMware on a desktop" generation, I had an old DEC (Intel), an AST, and a Gateway laptop, all running under my parents' couch.


Sounds like a post of its own! Do you have a blog somewhere?


yes I even wrote about the laptop servers here https://blog.haschek.at/2015-my-company-just-turned-10.html


Good read, thanks! I follow your RSS feed now.


I liked this article, what a nice story :)

Did you get into your "hobby teaching" through your school sysadmin job?


funny enough yes I did. A school was looking for an IT admin and I got the job and after a year the headmaster asked me to teach too


Cloudflare already does this: https://github.com/cloudflare/cloudflared

It works with all NATs/CGNATs by connecting from the Pi over a bidirectional WS connection. Pi <-> WS <-> Cloudflare. SSL is done in the cloud, not on the Pi.

Install any web server on the Pi and "cloudflared" to proxy it.

https://developers.cloudflare.com/cloudflare-one/connections...
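For reference, the named-tunnel flow is only a few commands (the hostname below is a placeholder; check the linked docs in case the CLI has changed):

    cloudflared tunnel login
    cloudflared tunnel create home
    cloudflared tunnel route dns home www.example.com
    cloudflared tunnel run --url http://localhost:8080 home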


> SSL is done on the cloud, not on the pi.

And then the third-party doctrine applies and they can tap you without a warrant and nail you for your private friend message board where you discussed smoking a joint or getting an abortion, if cloudflare complies. A main advantage of self-hosted in the home is supposed to be basic 4th amendment protections.

It's illegal for the federal or state gov to read US mail correspondence or tap phone lines without a warrant and should be for digital stuff too, but the third-party doctrine totally undercuts it.


You shouldn't need "cloudflared". You could set up a 486 on an ISDN line and serve traffic without worrying about SSL, being DDoSed, and all the rest. The WWW has become so unsettled that SSL & Cloudflare are now basic requirements for a static webserver. Not dissing the tech, but it all makes the overhead so much more complicated.

I used to run an IRCd on 56k where it would disconnect you after three hours. With two modems timed just right, you could dial back in to the ISP on the second modem just as the other disconnected, and have about a second of downtime with no connection loss. Some of the best Quake 3 servers I played on were on 56k; truly a magical experience, now lost.


I run my home site off a 3-node Kubernetes cluster with 16-core CPUs and some tens of GBs of RAM each, off a business-class Internet connection with a static IP, and I'm still under no pretense that some arbitrary new restriction on behalf of Internet vendors won't render my setup useless one day.


okay grandpa, time for bed!


Looks good. I guess that doesn't put much workload into home routers, which I assume is the real bottleneck with FTTH connections.


You can stick the cloudflared tunnel exit on the machine doing the hosting; then the router performance is largely irrelevant.


Yea this works great!


I'm surprised no-one has mentioned Unraid here yet (https://unraid.net).

You install it to a USB, put the USB into whatever you want as a server, and boot it up from the USB. The OS and config gets loaded into a ram drive. You now have a linux server ready to go, with access to thousands of apps and add-ons (which are basically just open source things you already know on various docker containers).

So you can install postgres, redis, whatever, and you are off to the races if you want to host your own side projects. Or you can install Tailscale, plex, and whatever OSS dropbox / media server clones to have access to your media anywhere.

The magic is that it all 'just works' and has a nice web GUI to install and configure things, and you don't even need to know that Docker is handling all the things running under the hood if you don't want to. The only tricky bit can be mounting volumes from the various containers to get things working together, for which there are many many guides and youtube videos. You can always bring your own docker compose config if you want to keep it in code.

The uptime for one of my servers was well over half a year, and only a storm & power outage brought the server down... now I've got it on a UPS with automated safe shutoff (again configured in the web GUI, took about 5-10 mins) if power drops again.

Unraid just runs and is rock solid and stays out of your way for whatever things you want to run.


That's cool. Thanks for the shout.


This is one of my favorite blogs that I read consistently. Some of my favorites:

Micromorts: units to measure risk of death— https://interconnected.org/home/2020/09/01/microcovids

First words— https://interconnected.org/home/2020/10/12/first_words

State sponsored fashion— https://interconnected.org/home/2022/08/16/fashion

Speaking with dolphins— https://interconnected.org/home/2020/07/20/dolphins

Bottling the overview effect— https://interconnected.org/home/2021/07/20/overview_effect


The comments seem to be in conflict with the content of the blog post. The author seems to be lamenting what is more or less feasible, but seems uninterested in putting in the extra effort to keep up with or anticipate the expectations and demands of the modern web. It's almost as if his nostalgia is at war with whatever tastes he has acquired technologically since his college days. Maybe he can compromise by caring less about the demands and expectations of the modern web.


This. I don't think anyone expected high availability from their living room web servers; if they did, they either discovered that was not feasible, or moved their web server somewhere else. And that was FINE. Sometimes, a server with a forum or a website you really liked would go down. Then there'd be an update with a quickly-written note in HTML announcing that the server had some failure and restoration was in process. Then it'd come back when the owner had the time and cash to get it back up and running, maybe along with a story about what happened.


It's interesting how people used to do this back in ~2005 but now don't; however, nowadays computers are much, much faster and stronger than they were in 2005, so it ought to have become more feasible, since a normal laptop today should be akin to a small cluster back in those days.


Not all have given up.

I have a web server in the corner of my room since the beginning of 2004.

Besides being a firewall/router/switch and hosting a web server, it hosts more than a dozen other services, including an e-mail server, NTP server, DNS servers, DHCP & TFTP servers, etc.

In 18 years it did not have any down time, except for a few minutes every 3 to 5 years, when I have upgraded the hardware.

I could have upgraded the hardware less frequently, but I have replaced it whenever I could reduce the power consumption without decreasing the performance.

Now it is at the 6th hardware version. It started as a big Pentium 4 pedestal server consuming over 200 W, but by now it has been reduced to an Intel NUC with a 4.5 GHz 4-core Coffee Lake U CPU, together with 4 USB-to-Ethernet adapters used to increase the number of Ethernet ports to 5, consuming not much above 10 W, while being much faster than the oldest servers.

A laptop has the advantage of incorporating a UPS, but I would not trust most of them to work 24/7 for years, like an Intel NUC, or preferably some fanless small computer (with an external UPS).


>In 18 years it did not have any down time, except for a few minutes every 3 to 5 years, when I have upgraded the hardware.

I wish I had that reliable of a power source. Even with a UPS, I've had tornados, snowpocalypse, etc where the power loss has lasted longer than any UPS I have.


I'm more impressed by the internet connection. Mine is down for at least a few minutes every week. And that's only counting when I'm at home to notice it.


Though I am an individual user, I have paid since the beginning for a "business" internet connection, in order to obtain some (8) static public IPv4 addresses.

It has cost me about $60 per month, which is significantly more than non-business connections of similar speed (currently around 400 Mb/s) cost around here.

Paying for a business connection has been the main expense for having my own e-mail and web server. Except for the first server, all the later upgrades have been done by reusing computers that had been originally bought and used for other purposes. With the quickly declining power consumption of the newer servers, the cost of the electrical energy has become negligible.

A Raspberry Pi is not a good choice for a firewall/router and/or Web server, but there are small computers similar in size and price, e.g. NanoPi R5S (fanless and with 3 Ethernet ports, including two of 2.5 Gb/s for LAN and one of 1 Gb/s for WAN; 2 USB ports can be used to increase the number of Ethernet ports to 5), which should be good enough for most people.


I have power interruptions from time to time, but fortunately they are not long.

Now, with only an Intel NUC connected to a UPS that could power a big server for half an hour, the NUC might work for a day from the UPS without having to shut down.

Where I live, the "snowpocalypses", which were frequent when I was a child, have disappeared completely. On the other hand, tornadoes, which were completely unknown previously, have started to appear, so they might become a cause of problems in the future.


It's also easier and faster to make your own butter today than it was 100 years ago, but most people don't because it's even easier to just buy some at the store.


oh, but that handmade butter tastes soooooo much better!


Thing about a laptop - I am not going to keep it running 24/7. So I probably want to buy another computer for the hosting. And by that point it is cheaper to use a free netlify/github hosting type thing. You have to be quite motivated to run your own. Even in 2005 there was very cheap shared hosting, I wouldn't have chosen to self host unless it is for hobby/experimentation reasons.


People still do it - it's called self-hosting these days.


I still do it, but for private, non-indexed stuff.


For those interested, https://indieweb.org/POSSE may be of use.

The idea is you'd publish on your own web server, and syndicate to other services that can hold up under pressure, etc.

I think that for many people, setting it up at home is "good enough", and if you get slashdotted, well, then you can deal with it at that point.


I once took down our corporate T1 because I was hosting a site on a work webserver and it got slashdotted. My boss was really cool about it though; he said, “wow, I’ve never known an internet celebrity before!”

This was in 2001, so its meaning has changed significantly since then.


My home DSL connection years ago started being slow - so I checked my home server.

A single image was the top result for "Japanese robot death cat" or something on Google Images, so I was getting pounded. A quick robots.txt update and a few days later everything was calm again.


The no-hot-linking option works well too


Remember the hot linking wars?

Small sites would get pissed when sites like the NYT hotlinked, and would serve them porn instead.


as well they should. NYT has a budget. Small sites typically don't. F'em if they can't take a joke!


A home web server is the equivalent of running out of toilet paper. You never designed a Service Level Agreement for either, and it's frustrating, but you will survive it.


Piling-on to the "my old web server back in the day" comments:

Back in 2000-01 I had a box at home hosting a webcam pointed at my cats' food bowls. I was working a lot so I liked to see them. I could also make sure they still had food when I had an extended 36 hour work session. It just pulled still frames. I never did work out streaming.

I installed an X10 module and wired it up to a CGI so I could turn on the lamp by the bowls when it was dark. The click of the relay would reliably bring the cats to investigate. Hit the page, see blackness. Press the "Turn on light" button. Refresh the page and see the light on. Count to 30, refresh again, and one or two cats would be in the frame sniffing around the relay and food bowls.

Going back to BBS days:

I wrote a BBS "door" for a friend's board that captured a 320x200 frame from a composite video feed, stored it as a 16 color grayscale GIF, and sent it to the caller via Zmodem.

The frame grabber had a crappy command-line program that required a keypress to initiate capture. I hacked the program to NOP out the loop and just immediately capture.

At 9600 baud or higher the download time for the GIF was pretty tolerable-- 30 seconds or less. If I recall properly GIF89 had better compression and sped things up a bit too.

The composite video feed came from a surveillance camera in the gym my friend ran downstairs. I added a call to a command-line VOC file player to play a chime out of the speakers in the gym a couple of seconds before the capture began.

Later we added an option to allow the caller to upload a VOC file to be played in the gym and optionally receive 5 seconds of recorded audio from the gym along with their video frame.

Fun times.


> I hacked the program to NOP out the loop and just immediately capture.

This is a level of freedom that I wish we still had today. I'm not talking so much about the open source, "free as in speech" freedom, but more so about the "free as in simplicity" type. Nowadays it feels like I need valid TLS and a PGP signature just to host some service in my own home, the ecosystems around them having decided that the ability to tinker was (perhaps justly) a far second to security and correctness.

(For reference, I tried building and inheriting my own Terraform providers a few months ago. It's not possible to host them off plaintext HTTP, nor is it possible to skip TLS validity checks when downloading them, nor is it possible to install them without checking for a valid signature.)


> Nowadays it feels like I need valid TLS and a PGP signature just to host some service in my own home

You don't need all of that. You can just host. I was hosting back in the early 00s and it isn't any harder today. Actually it's easier, because most DNS providers now have a reliable API, so I can have a cron job update the DNS entries when my IP address changes (a sketch below). Back in the 00s I used no-ip.org and had a subdomain from them.
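A sketch of that cron'd updater; the endpoint and token are hypothetical stand-ins for whatever update API your DNS provider documents:

    #!/bin/sh
    # run from cron every few minutes
    IP=$(curl -fsS https://ifconfig.me) || exit 1
    [ "$IP" = "$(cat /var/tmp/last_ip 2>/dev/null)" ] && exit 0
    curl -fsS -X PUT "https://dns.example.com/zones/example.com/records/home" \
         -H "Authorization: Bearer $DNS_TOKEN" \
         --data "{\"type\": \"A\", \"content\": \"$IP\"}" \
      && echo "$IP" > /var/tmp/last_ip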


I've done this for 10+yrs. Started with a single core Intel Atom powered netbook when those were still a thing, then moved to a quad core Atom desktop, to now where I have a 2nd Gen Core i3 desktop that will soon be combined with a similarly old 2nd gen i5 laptop. Runs half-a-dozen VMs, and like 10 or so different services, probably half of which are custom. At one point when I was still in school it even had a 5 person heavily modded Minecraft server (barely) running on it.

I'm basically the only user now. It's been a great learning tool.

Public access used to be through exposing the proper ports to the Internet, but now it's through a Cloudflare tunnel and Tailscale.


Oh man. I relate so much to this.

When I was 15, my friends and I really liked playing online MMOs. We used to enjoy chatting on a VoIP program, but this software required a server which all clients would need to connect to.

We always thought it would be cool to host our own servers for this VoIP software instead of paying someone else to host one for us so I decided to dig out an old computer and set it up in the corner of my bedroom to use as a server.

We got the server software installed and then realised we could probably sell these online if we knew how to build a website.

To cut a long story short, we ended up teaching ourselves how to create a website with HTML, which eventually led to learning how to program in PHP so we could communicate with the VoIP software programmatically via Telnet and send emails, then eventually how to take payments.

It took us a few months in total, but we did it. And this back before YouTube tutorials or useful programming blogs. You were mostly trying to work things out on your own so it felt like a real achievement.

One of the best moments of my life was receiving our first paid subscriber. I'll never forget the night my friend called me to tell me the news. And this was back when us teens had pay as you go phones so it was odd to get a call - especially that late at night.

Funnily enough, we probably used that old computer in the corner of my room as our server for about a year, until one night someone hacked into it. Never really worked out what they were trying to do, but they managed to install some remote desktop software on there, because I got woken up one night by the computer restarting and then someone remotely controlling it. It was kinda spooky at the time.

As you can imagine, we paid for a dedicated server in the end, but it was such a fun adventure, and that's why I'm here on HN today. The idea that a couple of 15 year olds could set up a server in their bedroom and make some money was really inspiring.

Things are different now, I think. We were one of just a handful of VoIP hosts back then. Today we would be buried by Google, and people would probably complain about the server taking 50ms too long to respond. You'd need to spend $1,000 on AdWords and have EC2 instances around the world just to be in with a chance.


I'm running https://text-generator.io from my house, with two 3090s right now powering it. It allows the service to undercut OpenAI around 10x on text/code generation and Google over 8x on speech to text. A Cloudflare tunnel points to it running locally. It makes development very fast too. It's a bit tricky to keep purchasing new hardware to spin up new instances, but that's getting easier with practice, and autoscaling cloud providers don't necessarily work that well either.

I think companies should seriously consider this, or at least consider adding everyone's development machines to the prod cluster while they sleep, which is what we did to render movies when I was at Weta Digital. Thousands of developer machines are pretty valuable if put to good use.


curious what Speech to text you're running on it


I run a webserver (a k3s cluster, actually) from home, but considering how generous the free tiers of cloud providers are (Google Cloud in my case), why waste your home internet bandwidth for a personal site?

In terms of my home server, I mostly point subdomains at it to test projects running on my laptop (via an nginx proxy_pass), or share photos/music with friends. I used to use it a lot more when I was working away from home.

Outside of web-facing uses, it's nice to have a central place to store and retrieve files from multiple devices. I'm using an older i5 Intel NUC, and it works great.


> considering how generous the free tiers of cloud providers are (Google Cloud in my case), why waste your home internet bandwidth for a personal site?

Because arbitrary ToS "violations" are a thing, and good luck getting that fixed with them.


In the case of static sites, it can be as simple as copying the latest version to a new server and updating your DNS records. I would try to avoid lock-in not only for the reasons you stated but also to be able to freely shop around for better options at any point.


I remember home hosting fondly too.

Every so often I think about doing it again, but security paranoia keeps me from it. What if they broke out somehow? I could DMZ it I guess.


Tailscale is nice; you don't even need to open any ports to have your device accessible from anywhere. Works really great, and literally (!) takes <10 min to set up. (On mobile: download the app, log in with a 3rd party identity provider (I chose GitHub). On the server: curl some script (it will use apt or yay or dnf when it detects them), click a link, and boom, both devices can find each other on unique IP addresses.)

I do open ports, for NextCloud (to be able share stuff) and some websites. But Home Assistant is only accessible from the Tailnet for example, as are my ssh servers.


Cloudflare has a similar (free) service as well. It's quite useful.


Excellent, thanks. I've never used Tailscale before.


It's free for the first 20 devices, I guess there is no limit if you want to selfhost the server part (called headscale). They are committed to the free tier though, they wrote a lot about that. I like the company a lot, I hope they don't change. They are very freedom/opensource oriented, for now :)

Once you use Tailscale you realize this is how a VPN should be, it's really something.


DMZ or second IP/connection is the way to go.

It can be a bit tricky with hairpin routing, but you can make the DMZ seem to be "on the internet" even to the home network.

Use Tailscale or something similar for actual "access my home network from far away"


I'd recommend running your selfhosted stuff in a VM (or docker) and using tunneling[0] to access it.

[0]: https://github.com/anderspitman/awesome-tunneling


I host all of marginalia.nu out of my living room. Very little hassle. A UPS is kinda important though.


My web site (taoofmac.com) used to be hosted at home behind a DSL line. I ran it on anything from an NSLU2 (look it up, it was one of the first easy-to-procure, easily hackable ARM machines) to PHP+MySQL on Windows Server (don't ask), and after a while I had Snort and all sorts of stuff running alongside to secure it.

Whenever I was linked from Slashdot I would pretty much lose connectivity, so I started using Coral CDN, moved it to a colo, then to Linode, and on and on through some 6 or 7 providers as technology changed and I tried new things.

It's been 20 years now (just wrote about that last week), and I sort of miss those days, but on the other hand I really don't--keeping the server alive and secure (even in Linode) was a bit of a chore, so the writing was pretty much on the wall that it would eventually become just a set of static pages on an Azure storage account. Zero worry about keeping the site secure, no runtime issues, and plenty of opportunities to be creative (like this: https://taoofmac.com/static/graph)

And boy, do I have plenty of in-house web servers and Raspberry Pis to make up for it--but none are public, and I just have a couple of cores spinning on each major provider for toy projects.


Mine is. It has been for 20+ years. It works great. As others have said, POSSE. A repository webserver (nginx) serving static files is incomparably less of a security risk than, say, running a modern web browser with JavaScript enabled. But if you go .php/whatever, yeah, that's risky.


Funnily enough I had the same wish some time ago, so nowadays I do most of my computing in "fatcity":

https://fatcity.it


>> little Raspberry Pi 4 server that I run from my home ISP, for no reason other than to have some fun

This.

I run mine on an RPi 3B+ with a 4 running the database. I reverse proxy to my site via a cloud VPS instance for $4 a month. I switched to the cloud after years on NO-IP when 1) I noticed my IP never changed and 2) my home IP address was public via a lookup of my domain name.

On another 3B+ I have a VPN so I can SSH in.

Some day I will get around to doing a roll-your-own-ngrok [0] so I don't have to open any ports, but I have yet to do it. I did do it for a project I was working on where I needed to make the local dev server accessible to a 3rd party. Pretty slick, and it saves a bunch of time and hassle over having to put the code on the server. (As an aside: Does anyone else dislike the term "grok"? For whatever reason it annoys the hell out of me.)

I really have nothing important on there and go months or years without doing anything to it then get a burst of creativity or what not and update the site or just tinker with it.

[0] https://jerrington.me/posts/2019-01-29-self-hosted-ngrok.htm...
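The core of the linked roll-your-own-ngrok approach is just an SSH reverse tunnel; a minimal sketch, with the VPS hostname and ports as placeholders (autossh simply keeps the tunnel alive; plain ssh works too):

    # On the VPS, in /etc/ssh/sshd_config (then restart sshd):
    #   GatewayPorts yes
    # On the home machine: expose local port 3000 as vps.example.com:8080
    autossh -M 0 -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -R 0.0.0.0:8080:localhost:3000 user@vps.example.com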


> my home IP address was public via a look up of my domain name.

If you're very concerned about privacy: SMTP headers generally contain IP address info...


> SMTP headers generally contain IP address info...

It's puzzling that email services don't redact the originating (often home/residential) IP address from authenticated clients for privacy reasons.


Good to know, thanks!


If you're looking for selfhosted ngrok functionality you may also be interested in https://github.com/anderspitman/awesome-tunneling


Thanks...bookmarked it.


> most of my computing

What does this involve? Are you tunneling a browser through ssh? Are you doing development work?

Also, the status page is a rather beautiful bit of text. Did you do that yourself?


The Raspberry Pi is attached to my home router (1Gb fiber connectivity); then I can access it like a local server (so even by SSH) from everywhere with Tailscale[0]. The rest of the world is proxied by a Cloudflare Tunnel[1].

Yes, remote dev work is done mostly with Visual Studio Code Remote SSH[2] (but I wish something similar existed for Sublime Text).

[0]: https://tailscale.com/

[1]: https://developers.cloudflare.com/cloudflare-one/connections...

[2]: https://code.visualstudio.com/docs/remote/ssh

Edit: Yes, I hacked together the status page, something similar welcomes me when I ssh into the machine.

Edit 2: Some benchmark here: https://pibenchmarks.com/benchmark/62022


I’m intrigued, care to share more?


Please see the sibling reply: https://news.ycombinator.com/item?id=33166455

And feel free to ask me anything.


A few weeks ago I set up a Stable Diffusion webui on my home linux box and used a Cloudflare tunnel to host it on a url and gate access to just my company's email domain. I started a slack channel for AI Art and we started holding a daily contest, it's been really fun.

Shout out to Cloudflare, setting up an access protected tunnel took like 10 minutes.


I host a website with 20k daily visitors from my living room. If you want something that feels as small and convenient as a pi but with a little more muscle to it, mini PCs are your best friends.


Interestingly, the background colour of this site seems to change over time very subtly, and it's done by CSS with no JavaScript: The "changingbg" parts in https://interconnected.org/home/static/styles/interconnected...


Both the choice of background colors and the constant shifting thereof, for me, are extremely distracting and make the website hard to read.

Like HTML's marquee and blink tags, just because you can do something on your web site doesn't necessarily mean you should.


it screws up my darkmode extension i tell you hwat


This reminds me of setting up a file hosting server at home in high school so I could work on projects from school without constantly burning CDs or dealing with terrible thumb drives. Sketchy PHP, no authentication, no sanitization. Just browse to a file and click upload. In hindsight it's kind of shocking it wasn't taken over.


If you built it yourself, it's highly likely nobody ever found it. Even back then most of the "script kiddies" on the internet were using pre-packaged exploits for known software, not searching every single possible IP for forms with upload buttons.


As someone who was a highschooler 2008-2012 who built their own simple PHP apps for things: Script kiddies of the time definitely were scanning for arbitrary forms. Not necessarily trying to exploit the code, but just anything that would allow them to post spam.


I had a big data loss event back in 2008ish when someone found out, I'm guessing, that they could upload a PHP file to an upload-anything form on my home server. I thought I was keeping it secure by disallowing ".php" files, but I think some MultiView option I had set in Apache allowed them to upload .php.somethingelse and still have it get executed, blowing away, sadly, all my Subversion repos. Switched everything I could salvage to Git after that and never looked back. Also I no longer trust Apache to directly serve user-uploaded files. :P

Long story short, someone apparently went to a non-zero amount of effort to hack my homebrew file-upload form.


Perhaps a good opportunity to ask: for a long while now I've been hoping that some manufacturer would take on the task of producing a good server suitable for this / homelab purposes. Something that allows a ton of RAM (512 GB at least?) to run VMs, a middle-of-the-road CPU with a ton of cores but an energy-, heat- and noise-friendly frequency, SSD, and all in a tiny, quiet, and attractive shell the size of a router that sits on a bookshelf? One can dream. But point me kindly to something that isn't a rack mount pizzabox that sounds like a jet?


Do you really need more than … 128GB of RAM? Most desktops can do 64GB, and some ITX and most ATX boards can be populated to 128GB; beyond that requires server platforms with >2 DIMM channels or LR/RDIMMs.

Most people should be fine with an office mini-desktop like the ThinkCentre Tiny line, sketchy (sorry!) Docker features on a NAS kit, or even an Amazonian Celeron mystery box.



SuperMicro has Xeon-D 1700/2700 boards and matching mini tower cases for up to 20 cores, 512 GB ECC RAM, and redundant 25 or 10 GbE plus 1 GbE ports on board - not cheap, though: https://www.supermicro.com/en/products/embedded/servers The prebuilt servers have smaller cases with noisy small fans, but you can combine some boards with the mini tower and a larger fan.


Running servers on home connections can get your broadband disconnected now for ToS violation.

Cox now blocks port 80, making LE certs harder to get.

The monopoly situation (enabled by regulators) means if you lose your connection you are probably offline completely. There are no alternatives or competition.

Even if you tunnel/VPN, uploading too much, even on a pay-extra “unlimited” plan, they will accuse you of running a server and threaten disconnection. This happened to me when I rsynced a few TB of photos offsite for backup.


At this point I now host my small projects (less than 10k users) exclusively on boxes in the corner of my room ha.

AWS and Heroku are quite expensive for small projects, and performance isn't great. Dynamic IP is not a problem these days either (it's also quite surprising how infrequently your IP changes, fwiw).

If you're looking for Heroku-like interfaces, check out Dokku (or other open source PaaS platforms).
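For anyone curious, the Dokku flow looks roughly like this (version and hostnames are illustrative; check the Dokku docs for the current install command):

    # on the server
    wget https://dokku.com/install/v0.30.0/bootstrap.sh
    sudo DOKKU_TAG=v0.30.0 bash bootstrap.sh
    dokku apps:create myapp

    # on your laptop: deploy with a git push, Heroku-style
    git remote add dokku dokku@yourserver.example:myapp
    git push dokku main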

After this tier of usage I think I'd consider moving many things to cloudflare workers.


Is there a way to run a little web server on our phones? It's a device that's always on, and usually on Wi-Fi


Most modern phone OSes today try to limit background services to squeeze battery life out of idle states. Even though "always on", some of the idle states are extreme battery misers. For instance, even the iPhone 14 with its "always on display" is doing some really interesting idle stuff: the "always on display" itself refreshes at 1 frame per second or slower (sometimes one frame per minute! as the clock is the only thing guaranteed to update, once a minute). It just seems like the device is always responsive due to how "instantly" it wakes from idle states.

All of which are a lot of very interesting reasons why you can't just run a web server on your current phone with its current modern OS and expect it to have 24/7 up time even though it feels to you like your phone has 24/7 responsiveness uptime.

It's a solvable problem if there were enough interest: light web hosting is something that could be added to the list of system services that can wake the device from idle states (in similar ways to how notification services get prioritized, or trickle data feeds like Find My Services). It's not likely a problem that current phone OSes are incentivized to support, though, because there's currently no reason for millions of people to want websites served from their pockets.

Maybe one day there will be an interesting P2P data "hosting" protocol that would be useful for modern OSes to prioritize in that way.


You can but it would drain the battery fast. You can use an old phone that stays plugged in, but my old abandoned phones were all abandoned for a reason, usually because some critical part of them doesn't work anymore.

Search for "fanless mini pc" on Amazon or whatever and you'll be surprised. There's a lot of cool devices available that are approximately the size of a phone, use very low power (under 15 W) like a phone, are under $200 brand-new, have wired ethernet built in, USB ports, HDMI ports, can run normal Linux, and unlike Raspberry Pi they are actually in stock.


See https://news.ycombinator.com/item?id=31841051 ("Repurposing an old Android phone as a web server")


I have an old phone set up here, running Octo4a. It's working great.

https://github.com/feelfreelinux/octo4a


The folks at IconFactory recently published a web server for iOS. https://apps.apple.com/us/app/worldwideweb-mobile/id16230068...


Oh nice! And it's from IconFactory :D


I once ran some GNU/Linux distro on Android, and then Tomcat on it :)


Termux (Android) can run Python, Node, Docker and more, but you should have a static IP or some tunneling like Cloudflare/Tailscale/ZeroTier.
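E.g., a quick static file server in Termux (enough for LAN use; pair it with one of the tunnels above to reach it from outside):

    pkg install python
    # serve the current directory on port 8080
    python -m http.server 8080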


I bought a $300, quite old, but many-core workstation from eBay a few years ago.

Plugged it in, installed a Linux and gave it a static IP.

We're building out our bootstrapped startup on that box for $0 per month.

It's the fastest CI service I've seen in years, admin takes a couple hours every couple months, and the minimal webapp that no one visits yet is crazy fast.

He's named Blue.

We love you Blue!

The good: No monthly fees, extremely fast compared to small/medium/large instances, full control and we retain local access when the internet goes down.

The bad: I live in an (un)developing country (FL) without reliable power or internet, so I had to buy a UPS and still get occasional external outages.

Everything on it (that I care about) is backed up in Git or S3.

It's been a real pleasure to spend a few hours setting up my own thing and not having to chase a bunch of service deprecations, API changes and general bullshit that still seems to plague cloud environments.

Obviously, I wouldn't scale up anything huge with this "stack," but I have every intention of running it until we absolutely must migrate and I suspect we'll go a lot further than expected with Old Blue.


I meet more and more people these days who are so used to working with Big Cloud that they have no idea how easy it actually is to run your own hardware. AWS never raises their prices, but hardware keeps getting cheaper, faster, smaller, and more energy efficient. You could probably host a simple site that did not have crazy traffic on a pair of old Android phones with full HA and keep it in a shoebox!


AWS effectively raises their prices for some non-US users because they price in USD regardless of region. It's significant now, though not due to anything AWS did.


>But what I remember feeling most magical was the idea that there was somebody visiting that server on my desk. There was somebody coming from a long way away and going inside. An electronic homunculus.

You can relive this feeling by seeding a few torrents. I sometimes check up on my torrents and try to imagine the person behind the Moroccan IP address grabbing my Drop Dead, Gorgeous discography.


Mine still is. I wish my mailserver still was, it was for more than 20 years.. but these days, getting to send mail out onto the net from a normal internet connection is pretty much impossible. Self-hosting is dead.


Disclaimer: I've been running Umbrel for a year, and I think personal servers are on the cusp of a revolution.

Pretty much like how Google, mail, and social networking websites served as the gateway drug for client-server apps and cloud servers, Bitcoin/Lightning/Ethereum are increasingly looking like the gateway drug for home server apps and home servers themselves. And Umbrel has completely convinced me of this.


Anyone have a simple, idiot-proof way to make sure a hacked webserver can't hit your internal network? I have two routers (effectively a DMZ), but there must be a better way than two levels of NAT.


That DMZ is fine already, assuming they can't start hacking your routers.

What you ideally want is network segmentation: use VLANs and put devices in their own isolated network, only allowed to talk to the router/firewall, which only allows incoming traffic and doesn't allow the web server to initiate connections to the internet, except for NTP, software updates and DNS (fixed IPs).
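A sketch of that policy in iptables, with placeholder addresses (10.0.50.10 for the DMZ'd web server, 10.0.50.1 for the router, 203.0.113.7 standing in for a package mirror):

    # return traffic for connections we already allowed (e.g. inbound web)
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # the server may initiate only DNS (to the router), NTP, and updates
    iptables -A FORWARD -s 10.0.50.10 -d 10.0.50.1 -p udp --dport 53 -j ACCEPT
    iptables -A FORWARD -s 10.0.50.10 -p udp --dport 123 -j ACCEPT
    iptables -A FORWARD -s 10.0.50.10 -d 203.0.113.7 -p tcp --dport 443 -j ACCEPT
    # everything else it tries to start, LAN included, gets dropped
    iptables -A FORWARD -s 10.0.50.10 -j DROP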


Yeah I actually had a Ubiquiti Edgerouter doing this but I was never confident enough it was set up properly, hence the other solution.


I have the router for the internal network (a Linux box) act as a bridge. So it is all the same network and no extra NATing is required. That router blocks connections into the internal network.


In college we'd run a Plex/backups/Minecraft server in an old HP box on the floor. It survived a very hostile environment and was very educational to work with.


I was just about to write that "today" is the best time to run servers in your room, due to Raspberry Pis and low power usage... then I remembered that it's practically impossible to buy one, and that the media is already preparing us (here in the EU) for power restrictions... so yeah... :/


Sorry, I have to comment on this, cheesy as it may sound.

Don't give in to the fear. See if there are alternative power sources you can play with for your Raspberry Pi, and see if there are creative ways to buy them (used, other countries, etc).

Re power sources: what can you do with a solar-powered battery? Is there a DIY power system you can build? One that takes in mainline power when available, and solar or battery when not? Talking about small hobby panels that can charge a battery during the day and discharge at night. I used power banks for that purpose.

In this context, if my lifestyle is under threat, I want to lifestyle even harder. I sold a car and, instead of buying a replacement, I will install solar panels. I know it's a fortunate case, but if I can lifestyle a little harder and rely less on grid energy then I will do so (not waste energy, but if power gets cut because of the actions of a certain dictator, then I can still plug my phone in to criticise said dictator… even harder).

tl;dr: I'd look for creative solutions just so I can stick my two fingers up at the current situation.


RockPro64s are easily available. I have 2 running in a closet. It's not as generally supported as RPis, but if you just stick to the supported OS you won't have any issues.


The problem with non-Raspberry-Pi SBCs is that, after a few years, they become unsupported, and it's hard to get a recent OS with a recent kernel and packages... I can still get the newest OS for the Raspberry Pi 1 (which is ancient by now), while the RockPro64 has this:

> The ROCKPro64 4GB board designated as LTS (long Term Supply) model, PINE64 committed to supply at least for 5 years until year 2023 and beyond.

2023 is almost here :)


Pretty sure RK3399 is on the mainline kernel nowadays, isn't it?


Festival TTS (Text-to-Speech synthesis), which the article mentions, is part of many Linux distros nowadays, and it was originally developed at the University of Edinburgh by Alan Black and team (Black et al., 1999; Taylor et al., 1998).

http://src.gnu-darwin.org/ports/audio/festdoc/work/festdoc-1...

https://era.ed.ac.uk/bitstream/handle/1842/1032/Taylor_1998_...


Sounds like the people visiting the website are reduced to a form of entertainment for the author, like a reverse-zoo, where the animals are watching the people that come visit.

I imagine an evolved version of this, where the computer speaks the location of every visitor, their OS, browser, etc. Maybe tied into an Ad Network you could get the visitor's name and address spoken aloud, maybe even their picture. Voyeuristically watching the people coming to your website, from your bedroom. Hmm, that one was cute, let's send them a message.


I started, like many others, around 2003, with an old Pentium 1 that my uncle was about to throw away.

I put it under my bed, installed Slackware on it (it was still the time when it took about 11 floppy disks to get it installed), and started playing with Apache+PHP. I used to run a quite popular software development forum on it, later expanded with a wiki, links to my projects (GitHub wasn't around yet) and a small hacking game.

Two decades later, I still run my own web server in my room. I wouldn't put big publicly-accessible websites on it (I don't want my bandwidth at home to drop because somebody is downloading a lot of images or videos); a couple of Linode instances do a better job handling that use case. But I do use it to run my personal blog, my Nextcloud instance, my mopidy-iris frontend for playing music in the house, my Jellyfin server to stream my media wherever I am, my Miniflux server to keep track of the RSS feeds and podcasts I follow, Ubooquity to keep a collection of my ebooks, and much more. It used to run on a RPi; now I've upgraded it to a mini pc with 8 GB of RAM and 10 TB of physical storage attached.

It does really feel like it's your own space, in your own house, accessible from anywhere, with no storage limits and with nobody who can show you a bill for the storage, bandwidth or CPU that you consumed.


>Seeing your website’s actual server is the virtual equiv of the Overview Effect and I want to have that feeling the whole time!

That's a perception shift I've never really had. Servers have always just been computers with dial up modems, then networks, then the internet, for me. I've always known that I can, if I want, have a node on the internet that I fully control. It's just not been a priority for me in decades now.

I bet it really is a bit of a rush for someone the first time they make it work.


Mine is in the corner of the room I’m in right now. It’s a little NUC under an armchair. I have a tiny ec2 instance which provides my permanent IP and forwards web and certain ssh requests using a VPN connection and iptables. This allows me to have a beefier machine here, keep logs etc local, run alternative OS (smartos), and just generally tinker.

The ec2 fronting technique I stole from the Helm home email appliance/service. Paying three years up front it worked out to less than $3/month.


Would you happen to have time to provide some more details about using EC2 to get permanent IP? I've been thinking of using wireguard to connect an old PC to my VPS to run video game servers, so this is very interesting to me!


Happy to help, although it was ~3 years ago that I set this up, and it uses OpenVPN as I have not switched over to WireGuard yet (been meaning to).

I do recall that setting up port forwarding and NAT on both sides was the biggest pain (I do not regularly do network admin!), exacerbated by the fact that the client side is SmartOS, which uses a different system (ipfilter) than Linux (iptables), so there were two cryptic network filtering DSLs to learn. The VPN part was relatively easy as it's just a point-to-point connection with the local machine as the client, configured to reconnect when the connection is lost and on boot.

On the ec2 side this is (approximately) my iptables setup (1234 and 5678 are stand-ins for ports I use to ssh into the local machine from anywhere on the internet, I have two because there are multiple (smartOS/Solaris) zones on the machine):

  sudo iptables -L
  Chain INPUT (policy ACCEPT)
  target     prot opt source               destination         
  ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
  ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
  ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:1234
  ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:5678

  Chain FORWARD (policy ACCEPT)
  target     prot opt source               destination         
  ACCEPT     tcp  --  anywhere             ip-10-4-0-2.ec2.internal  tcp dpt:http
  ACCEPT     tcp  --  anywhere             ip-10-4-0-2.ec2.internal  tcp dpt:https
  ACCEPT     tcp  --  anywhere             ip-10-4-0-2.ec2.internal  tcp dpt:1234
  ACCEPT     tcp  --  anywhere             ip-10-4-0-2.ec2.internal  tcp dpt:5678

  Chain OUTPUT (policy ACCEPT)
  target     prot opt source               destination         
  ACCEPT     tcp  --  anywhere             anywhere             tcp spt:http
On the ec2 side, openvpn conf:

  dev tun1
  ifconfig 10.4.0.1 10.4.0.2
  verb 5
  secret local.key
  cipher AES-256-CBC
  keepalive 10 60
  persist-tun
  persist-key
On the local side, openvpn:

  remote [ec2 ip adr here]
  dev tun1
  ifconfig 10.4.0.2 10.4.0.1
  verb 5
  secret ec2.key
  cipher AES-256-CBC
  keepalive 10 60
  persist-tun
  persist-key
On the local side, ipf conf in ipnat.conf. This is abbreviated, as most of the stuff in there is just forwarding amid the zones, which is not relevant to a simple Linux setup without zones. In addition to figuring out the iptables equivalent, I believe you'd want to replace the 102 address (which in this case is a zone) with your local machine (like 0.0.0.0/0 or whatever):

  map net0 10.0.0.102/32 -> 0/32
  map tun1 10.0.0.102/32 -> 10.4.0.2
(not sure if the first line is even relevant or not, it's been a while)
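
For a plain Linux box without zones, the forwarding itself would live in the nat table, which `iptables -L` doesn't show. Something like this (a sketch only; eth0 and the 10.4.0.2 peer address are assumptions based on the configs above):

  # DNAT public traffic arriving on the ec2 box to the VPN peer
  sudo iptables -t nat -A PREROUTING -i eth0 -p tcp \
       -m multiport --dports 80,443,1234,5678 \
       -j DNAT --to-destination 10.4.0.2
  # rewrite the source so replies come back through the tunnel
  sudo iptables -t nat -A POSTROUTING -o tun1 -j MASQUERADE
  # and allow the kernel to forward packets at all
  sudo sysctl -w net.ipv4.ip_forward=1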


Wow, thank you so much! This leaves me no excuse to procrastinate on implementing my plans any longer.


I not only run my blog on a computer in the corner of the room, it's solar-powered as well. At night it is supported by a bunch of lead acid batteries[0].

If you can, you should host your own blog/website on your own physical computer at home. Especially for blogs, availability and redundancy are just not critical. And if you do a little bit of preparation you can recover quickly from any failure. It is fun, you may learn a few things, and it makes things more tangible. (Maybe dig into VLANs, or a firewall with multiple interfaces that allows you to separate your home network from the server.)

My blog is a static HTML site and it has survived many HN surges of 20k+ visitors on a Raspberry Pi 3B+. It has since been upgraded to a Pi 4, but it doesn't really matter. My 50 Mbit upload capacity was never really taxed at all.

I'm currently working for a customer fighting the Azure cloud and it's abysmal in every way possible. The simplest tasks of provisioning resources take forever to complete. It makes me fond of my 8-10 year old 20-core DL380 server that allows me to spin up a huge infrastructure in the same time Azure can spin up a small web app.

[0] https://louwrentius.com/this-blog-is-now-running-on-solar-po...


I remember back in the 2000's I had a Debian box running some stuff next to me by my desk. One day my ears picked up on a regular little click-clack from the drive which on investigation turned out to be my first experience of being SSH brute forced. I guess it was the log entries being written for each attempt. That was early in my Linux days... these days I would just pipe it to /dev/null (joking)


> I could hear the hard drive spin up if somebody accessed the machine, and a little chug-chug-chug while Festival (the open source text-to-speech engine I’d installed) generated the voice. Like footsteps approaching before the door opens.

I recently put a 4TB 7200RPM Seagate hard drive into my dev box, split it into multiple partitions, and mounted them on /var/lib/docker, /var/lib/rancher/rke2, and a local NFS server (which is used as an NFS persistent volume in my local RKE2 Kubernetes installation), and it's loud! It's like I'm going back to the 2000s. Accessing any services hosted in the local Kubernetes platform or building a Docker image triggers a storm of clicking noises. Heck, even at idle, Kubernetes is pretty chatty and triggers clicking noises every so often.
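
Roughly, the mounts look like this in /etc/fstab (device names and the NFS export path here are placeholders):

  # /etc/fstab -- one noisy 7200RPM disk, three partitions
  /dev/sdb1  /var/lib/docker        ext4  defaults  0 2
  /dev/sdb2  /var/lib/rancher/rke2  ext4  defaults  0 2
  /dev/sdb3  /srv/nfs               ext4  defaults  0 2

  # /etc/exports -- expose the third partition to the cluster
  /srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)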

I also have an instance of Home Assistant accessible from the internet, but it's on a Raspberry Pi so it's silent. My ISP uses CGNAT though, so I had to route all connections via my VPS through ZeroTier.


Hmm. I have a web server in the corner of my home office. It's still doable. Stick an Atom board or a hobby ARM board in a case with a fan, add some storage, done.

Said server is mostly private use for me and a few other people. If I had need for a public one I'd probably rent something for $5/month first.

Oh wait. I am renting something for $5/month. It runs backup DNS and MX.


At the moment it's offline because I'm between homes, but normally I have a cluster of Raspberry Pis running Kubernetes to host my blog and a few other services. Unfortunately, the Pis need static IP addresses, which requires admin access to the router, which I lack as I'm staying at an Airbnb, so in the meanwhile my site is running on an EC2 spot instance.


Get a VPN from a reputable ISP, or tunnel to a VPS. As a bonus, it's much easier to host mail as you can customize reverse DNS. You also get a "clean" IP, IPv6 regardless of your current ISP, and a static IP.

It's also possible to host a static website on IPFS and point DNS records to Cloudflare or another public gateway to let them handle the web server part.
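
The DNS glue for that is DNSLink: a TXT record names the IPFS content, and a CNAME points the hostname at the gateway. A sketch (the CID and gateway hostname are placeholders, and gateways differ in how they handle custom domains):

  ; _dnslink TXT record names the IPFS content root
  _dnslink.www.example.com.  IN  TXT    "dnslink=/ipfs/<cid-of-site-root>"
  ; point the site's hostname at a public gateway
  www.example.com.           IN  CNAME  <public-gateway-hostname>.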


Yeah, there are a lot of options for static site hosting. I'm not doing it on a Kubernetes Raspberry Pi cluster because it's easy! To be clear, the static IPs for my Raspberry Pis aren't public addresses; I already proxy tunnel through a VPS, but Kubernetes gets cranky if a node's private IP address changes, so I need static private addresses (and in the interim, I've just put the whole blog onto the VPS). There's probably a better solution, but I haven't struck upon it yet.
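
On Raspberry Pi OS, pinning a node's private address without touching the router can be done in /etc/dhcpcd.conf (a sketch; the addresses are placeholders, and you risk colliding with the router's DHCP pool):

  # /etc/dhcpcd.conf -- pin this node's private address
  interface eth0
  static ip_address=192.168.1.50/24
  static routers=192.168.1.1
  static domain_name_servers=192.168.1.1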


Self hosting has to evolve. It's that simple. All the devices we now seamlessly use in our house were once kits and DIY. I built my own PCs as a teenager, servers in my first job and then software after. My gut says running stuff in your house has to be absolutely dead simple, plug and play with a set of purpose built services that anyone can use from a web or mobile app.

Do we have this already? No I think most superior experiences are saas and cloud hosted e.g Apple/Google. They own the hardware and the software stack. The current state of open source + raspberry pi is not appealing to consumers. It might be the right way forward but only if totally packaged together in a box you just plug in.

Huge undertaking. Entirely doable. Just requires the right motivation.

Oh, and websites don't have to be on the public internet; using a Tailscale-like VPN with shared keys is a good way, since they do MagicDNS.


I have Comcast Gigabit Pro which comes with a 6 Gbps symmetrical fiber connection and a separate 1 Gbps symmetrical ethernet connection, each connection having their own block of 5 IP addresses. I've been considering moving my colocated server back home and putting it on the 1 Gbps line, my only hesitation is that the IP addresses between the two connections are so similar. If I was only serving https traffic to the public that wouldn't be too much of a concern, since I could just stick Cloudflare in front of it. However, I'm also hosting game servers on there, and those don't seem as straightforward in masking my IP address. Should I be this paranoid about that? They would be completely separate networks, there's no route from the publicly accessible server into my home network.


You could use something like Cloudflare Tunnel, which wouldn't expose your IP but would still route the traffic back to your machine.
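
Roughly, with the cloudflared CLI (tunnel name, hostname, and local port here are placeholders):

  # authenticate and create a named tunnel
  cloudflared tunnel login
  cloudflared tunnel create home
  # publish a DNS record that routes through the tunnel
  cloudflared tunnel route dns home www.example.com
  # run it, forwarding tunnel traffic to the local web server
  cloudflared tunnel run --url http://localhost:8080 home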


Ooo that looks promising! I will look into it, thank you for the suggestion.


Cloudflare Tunnel is a solid service. Self-hosted options are also available: https://github.com/anderspitman/awesome-tunneling


Or stick a layer 4 HAProxy at a cheap no-bandwidth-fee VPS provider.
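
A minimal haproxy.cfg sketch for that, assuming the VPS reaches the home box over a VPN at 10.0.0.2 (in TCP mode TLS passes through untouched, so the certificates stay at home):

  # haproxy.cfg -- layer 4 pass-through
  defaults
      mode tcp
      timeout connect 5s
      timeout client  60s
      timeout server  60s

  frontend www
      bind *:443
      default_backend home

  backend home
      server homebox 10.0.0.2:443 check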


I have 1gbps symmetric fiber with static IP so I run some of my backends from home. Works fine for years.


The upstream on my cable modem is about 1/6th what my college dorm-room[1] upstream was, and I'm not sharing it with 1000s of other people.

1: It was two T3 lines, but only half of the second line was provisioned, so ~67 Mbps vs today's 12 Mbps.


Ditto. No complaints.


My first server was some racks I pilfered from school and put in my closet. The 15,000 RPM SCSI drives and 90 dB fans did not make for easy sleeping.

I still think you can go a long way with bare metal you might have laying around. I find hosted VMs to be woefully underpowered for their specs.


It is a nice feeling having a local Linux server running in your house. One that you not only fully control, but one that is on 24/7/365, that you can run services/scripts on or just screw around with in general, without paying monthly fees for compute and storage.


Hmm, I don't have a problem self-hosting from home. Here's my setup:

1.) Cable (DOCSIS 3.0) Internet connection with a dynamic IPv4 address.

2.) Registered domain(s).

3.) Domain hosted via Dyn.org (for quick updates in the event my IP changes).

4.) Linux-based firewall/router that runs ddclient (to update the public DNS records should my IP change, which is very rarely).

5.) All port 80/443 traffic is forwarded to an LXD container running nginx as a reverse-proxy, where TLS encryption/decryption is handled.

6.) Unencrypted HTTP traffic is then forwarded off to whichever LXD container is hosting the actual site.

Unless my Internet connection actually goes down (which is rare thanks to a good provider and everything being on a UPS), the site stays up.
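
For illustration, the ddclient piece (item 4 above) might look something like this; the provider, credentials, and hostname are placeholders:

  # /etc/ddclient.conf -- push the current public IP to DNS
  daemon=300                      # re-check every 5 minutes
  use=web, web=checkip.dyndns.org
  protocol=dyndns2
  server=members.dyndns.org
  login=myuser
  password='mypassword'
  myhost.example.com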

Hope this helps!


It sounds wonderful, but doesn’t sound (to borrow the author’s phrase) ‘turnkey’.

And certainly not on a Raspberry Pi running Linux - it sounds like a day of frustration, trial-and-error, and many many google searches!

I would pay good money for (let’s say) a Pi with all of the hard work done - just plug it in to your router and it’s already serving pages online.

Edit: also, dyn.org doesn’t seem to exist?


I'm not interested so much in a personal web server as in having, as a NORMAL service, a global IPv6 address on every connection, every crappy ISP router refusable or configurable in bridge mode, and everyone normally owning a personal domain name, or more than one.

Some subdomains dedicated to personal services, etc. The web server is just one part of the game, not the point in itself.

Technically there are NO reasons that justify "cloud computing" despite the claims; the only real justifications are the business interests of some against the interests of others. Despite all the IPv6 growing pains, there is no reason not to offer global addresses, etc. The real issue is that most people simply have next to no idea about IT or how to benefit from it in their own lives. Those who do know don't have much choice...


My first server was my Windows XP desktop. I had a free subdomain at servepics.com and a dynamic DNS client, and we had a 0.25 Mbit DSL connection. Being 12, I couldn't figure out how to get an HTTP server running, so I just hosted pictures with caesarFTP. I had to ask people to visit ftp://whateveritwas.servepics.com . I took a lot of party pictures with the digital camera my brother lent me. 2003, I think. This was a big deal at the time because there were no sites we knew of that allowed uploading images for free; that was a premium feature of LunarStorm, the social network everyone was using back then.


Nowadays with SSDs you no longer hear if a PC is working, especially if its fans are quiet.

Products like the DreamCheeky USB gadgets (looks like they went out of business in 2017 https://web.archive.org/web/20170821050810/http://www.dreamc... ) or the more recent Divoom pixel art gadgets could be used to signal activity. There are probably a lot more solutions to this problem if you have a server sitting on or below your desk ;-)


I do this, I host my website on my desktop. It's nice to have just a single computer instead of many. http://catern.com/computers.html


I have hosted my own web server, both the physical box and the code (open source), since 2014.

It's on a Raspberry Pi 2 cluster:

http://host.rupy.se

Since 2016 I have had my own database, also hosted on the same cluster and coded from scratch:

http://root.rupy.se

We need to implement HTTP/1.1 with less bloat. A non-blocking C web server that can share memory between threads is probably the most interesting project for humans right now; is anyone working on that?


I worked for a major satellite TV provider. In the early days the website was just information and directions to the nearest installers - no bill pay or buy flows. The website ran from under the boss' desk.


Through most of the 2000s, I had an ever-growing server sitting in my apartment closet. I upgraded it from IDE (a couple gigs) to SCSI drives (25gigs!!!) and spent a lot of time learning Linux throughout. It was ugly to navigate NATing, etc at that point but I ran eGroupware for a long time.

Now I have a couple of small devices for monitoring, logging, and sharing and run them behind ngrok. They're quick and easy and I don't have to set up anything else.

Disclosure: I work for ngrok (as of last year) but used it since ~2014 already.


Need to mention here that yunohost.org is a great, easy solution for your RPi or any other hardware or VPS. It is maintained by a great community that takes care of most of the essentials and provides a great web UI for installation and maintenance. Some of the built-in features: domain management with an NGINX reverse proxy and Let's Encrypt certs, Fail2Ban brute-force protection, and easy install and upgrade of many free server apps. I love looking over at my little RPi in the corner serving my friends and family.


> Perhaps there’s a way to host my website at home, but have the static bits served by Cloudflare if the Raspberry Pi isn’t available (using a global CDN as a UPS), and the dynamic bits always visit my home – but there’s a graceful “come back later” message if the Pi is down?

I feel like this is what IPFS and similar are made for. I could see a home user appliance configured with something like that, plug it in and your site is up, unplug it and it was replicated to other opt-in hosts.


My web server still is! It's a super cheap old Dell desktop that draws precious little power :) I really want to get an IPv6 block assigned from my ISP and kick the tyres on using public IPv6 addresses for it, but I keep putting it off because I still don't really understand how it all works (and my router's port forwarding gives me some small measure of comfort that I haven't accidentally exposed everything).


Not sure how your router is configured, but mine (Netgear Orbi, so not a niche router with one-off software) still has terrible IPv6 support. The only options are to allow public routing to each of my devices, or to block it entirely. No port forwarding or firewalling all but a single port. So confirm your router works the way you expect before you try migrating from IPv4.


Yeah that's exactly what makes it complicated. It's an Asus RT-AX86U which does have some support for IPv6 handling, but I still don't understand it fully which is why I keep putting it off, but I also don't want to keep paying $5 AUD a month for a static IPv4 address


I'm running a server in my house. It's an Ethereum PoS node. It is oddly satisfying to look at it in the flesh.

It's another pet to me: I have to make sure it has a constant supply of electricity and internet. And requires maintenance from time to time.

Some photos here: https://vedantsopinions.medium.com/eth2-node-at-home-without...


These days the ESP32 could be good enough. It could host a decent website, which could be a portal to do fancy intercom stuff and take photos at the press of a button.


In 1999 I wrote a piece of PHP trouble ticket tracking software called Ticketsmith which eventually morphed into the foundations of ubersmith.com. I put the first tarball on my home PC (running Linux) and linked that URL to Freshmeat.net. It was so thrilling to sit there that evening, watching TV but looking over to see the Apache log tail process stream out as each person downloaded it to check it out. Very visceral.


https://web.archive.org/web/20221011170702/https://interconn... because it takes 25 seconds to load at the moment (not that archive.org couldn't use a speed boost)


When I was a teenager, I ran a BBS in my bedroom on an Atari 130XE with 6 disk drives hooked up to it (mostly containing single-file cracked games). I used a 64K ram disk to make the BBS not glacially slow.

I would get so excited when people called up, I got to spy on what they were doing and if they were from far afield I chatted with them :) Those were the days...


>This is way back in 2000 so before smartphones, and before texting,

I remember sending my first text on the first widely available Nokia GSM handset in the fall of 1994.

SMS wasn't advertised as a feature until the turn of this century, when networks figured out how to charge for the service; billing had been impractical largely due to base station logging constraints of disk space and latency.


I have my own webserver in the spare bedroom. Earlier it was a tower computer, but now it's just an old laptop. I've been using VMware ESXi for years now, so moving the server is just a matter of moving a few image files.

Works like a charm. I pay a (very) small amount extra per year to get a static IP address.


Way back in the mists of time, we set up our first corporate website. We were using Website Pro, and the box was under a desk. There was an option to make the machine beep with each hit, and for a while it was thrilling to hear those beeps — once an hour or so, maybe a cluster of a few in a row. The physicality!


I was a huge fan of newsgroups in the past, but unfortunately NNTP was blocked on my university campus network. So I set up an NNTP server at my parents' house, running as an OpenVZ VM on an old PC, and I was able to access it through SSH. I never understood why SSH was permitted and NNTP forbidden.


The general issue is that some protocols have a hundred uses and if only one of them is mission critical, then it can't be blocked. SSH surely fits that description.

NNTP only has one purpose: sharing information (human to human). It's doubtful that any mission-critical application needed it. Once IT saw that NNTP carried significant traffic and was throttling other traffic (or was just expensive), they knew they could shut it down easily, and therefore they did.


They should restrict SSH to only the permitted sources and destinations, then. Leaving SSH open is a big security risk, because you are effectively permitting any kind of traffic through it. I think the people managing the university network at that time were not so great: for a couple of years, the DHCP assigned you a public, unfiltered IP address, and the password policy for accessing the Internet was very easy to infer (take the last digit of your student ID, remove it, and sum it with the first digit). And you could easily find someone else's student ID online.

I remember people putting desktop PCs in the lockers with tons of warez, serving P2P to the world under the ID of someone studying architecture or medicine.


The optimal setup I can think of (and am planning to do) is to separate a Raspberry Pi onto a VLAN and combine it with a cheap hosted reverse proxy from a third party. The reverse proxy part might be a luxury, but it's just in case you don't want to expose your home network.


This line from the article sums up my feelings pretty well:

> I’m pretty technically capable but I’m not sure I can be bothered.

All this sounds fun and a cool throwback, but it's also rather more work than I'm willing to put up with right now.


I want the same thing but for different reasons. I want 1TB of HDD storage for a one-time payment of $50, instead of having to shell out $80/mo. for the same thing in a datacenter.


I still host some things from home, but Linode, Scaleway, etc. are so cheap for tiny machines it might make more sense to build some APIs that the webserver can call on a machine running from your house.


visit http://i.reddit.com/r/selfhosted to join hundreds of thousands of people hosting at home.


It's a great community for learning, but I think they focus too much on teaching each other and not enough on lowering the barrier of entry.


Not necessarily a "web server", but selfhosting is getting very popular nowadays, and -yes- you can get started with just a Raspberry Pi. Recommend looking over r/selfhosted


My ideal state of the internet is one where companies sell powerful all-in-one servers. Each household would have such a server for its daily needs - email, messaging, social networking, gaming, etc.


Funny, I just spun back up my kaaik.local the other night.

Still working through some things, but everything basically works the way it should. Updating the firewalls might not be a bad idea, though.


I have the same feelings. When I ran a single-line BBS from my bedroom as a kid, I would get excited every time someone would dial in and I'd see activity.


I have a full server rack in the corner of my apartment. I’m doing a rebuild right now, but I’ve had that rack for the last 7 years. It’s definitely possible.


I recently had mine in my bedroom corner along with all the network gear.

With all the LEDs and flashing lights, I couldn't sleep.


Mine is and has been for a few decades. Different machines, but yeah.

I run it behind a cheapo VPS for geolocation reasons.


Is it scalable and how do you deal with the noise and cooling/power requirements?


From experience, if you're a heavy-sleeping teenager like I used to be, the noise is less of an issue ;) I don't think I could cope with the sound of two fans and three HDs spinning nowadays, but back then it was a tiny price to pay for the coolness of having a real server in my bedroom.

Nowadays I just run an RPi3, which is silent and takes very little power.


If those are your requirements for the websites you host, the point of this article is not relevant for you in that context.


I find an Intel NUC is more than capable of good selfhosting, and is nearly silent and uses very little power.


Isn't it kind of explicitly about not being scalable?


You could automatically scale it into the cloud in case of being hugged by HN, for example.


Wonderful. That spirit is what we aim for at our youth centre http://jugendhacktlab.qdrei.info/. Raspis all the way down.


I like the prose poetry of this article. It flows in a nice way. Well done.


Mine is, on my ODroid; though something beefier would be nice :)


The fans are rather loud; I wouldn't advise it.


1. Start web server

2. Add AAAA record from domain to RPi IPv6 address

3. Profit
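
A minimal sketch of steps 1 and 2 (the address is the IPv6 documentation prefix; substitute your Pi's real one):

  # step 1: any web server will do (port 80 needs root or a capability)
  sudo python3 -m http.server 80
  # step 2: the zone-file AAAA record
  #   www  IN  AAAA  2001:db8::1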


You want to host my plex server there? ;P


Two key words: WireGuard and a cheap VPS.
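
A sketch of the pairing (keys, addresses, and the VPS IP are placeholders; the VPS then DNATs public traffic to 10.0.0.2, much like the EC2 setup upthread):

  # VPS side: /etc/wireguard/wg0.conf
  [Interface]
  Address = 10.0.0.1/24
  ListenPort = 51820
  PrivateKey = <vps-private-key>

  [Peer]
  PublicKey = <home-public-key>
  AllowedIPs = 10.0.0.2/32

  # home side: Endpoint plus keepalive so it survives NAT
  [Interface]
  Address = 10.0.0.2/24
  PrivateKey = <home-private-key>

  [Peer]
  PublicKey = <vps-public-key>
  Endpoint = <vps-public-ip>:51820
  AllowedIPs = 10.0.0.0/24
  PersistentKeepalive = 25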


Back in my day we ran BBSes...


So do I, but after my ISP got eaten by another, larger ISP, it became impossible to access remotely.

Long live the free market: free for institutional entities to step on individual humans.


I've wanted to do this for years, but just can't stand the security hassle. One solution I've often thought about is renting a small office in the neighborhood and setting up there; obviously that adds a lot of expense.



