Then what exactly is the point of posting it?
Not calling you out specifically, but there have been quite a few similar ones, always unavailable because they don't scale beyond one user. So there's nothing to see and little to discuss.
Consider putting it behind a CDN like CloudFront or Fastly (I think both have free plans) - it's a good way to illustrate that you can serve a lot of traffic with very little backend power.
> it's a good way to illustrate that you can serve a lot of traffic with very little backend power.
What? It demonstrates that CloudFront or Fastly can handle a lot of traffic, since they'll cache just about everything if you put them in front of a static website...
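For illustration, the origin mostly just has to send cacheable responses and the edge does the rest; here's a rough sketch using Python's standard http.server (the port and max-age are arbitrary choices, not anything the site in question actually uses):

    # Minimal static-file server that marks every response as cacheable,
    # so a CDN (CloudFront, Fastly, etc.) in front of it can answer most
    # hits from its edge cache instead of the origin box.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CacheableHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Tell any cache in front of us it may keep responses for a day.
            self.send_header("Cache-Control", "public, max-age=86400")
            super().end_headers()

    if __name__ == "__main__":
        # Serves the current directory on port 8000; point the CDN at this.
        HTTPServer(("", 8000), CacheableHandler).serve_forever()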
That's also true if you employ a ten-year-old laptop, as I do: it's quite power-efficient and has a built-in UPS (todo: get one for the router). No CDN or anything, and the couple of page loads per second (at peak) that the HN homepage generates barely raises the load average into the point-tens.
The software/service you run makes most of the difference in whether it can run on standard at-home hardware or needs some distributed autoscaling system (when we're talking about HN-homepage levels of traffic). Of course, if you're serving video or large images/audio, you're going to need more than a few megabytes per second of uplink. Or if your site trains a custom neural network for every visitor, that's a special case. But for regular websites...
True, but that's a strange point: by that logic you could use free shared hosting (like Netlify) as an example that you don't need any paid resources at all to take a website live. The point of this submission seems to have been taking a website live on your very own hardware, so mentioning CDNs feels out of place.
I think I have CloudFlare on, and I am seeing requests with a CloudFlare header, but I am still getting a huge amount of traffic. I am not sure what the issue is. Maybe the site is so busy that CloudFlare can’t even load it to cache it, haha. Anyway, I have to leave for an exam now, so let’s hope it manages to work itself out.
Changing to CloudFlare probably meant updating your DNS settings. Depending on the TTL of the record and whether clients respect it, the change can take a while (even days) to propagate around the world.
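One way to see how long stale answers might stick around is to check the TTL a resolver currently reports; a quick sketch, assuming the third-party dnspython package and using example.com as a stand-in for the actual hostname:

    # Query a public resolver for the site's A record and print the TTL,
    # i.e. how long caches may keep serving the old answer after a change.
    # Requires the third-party "dnspython" package: pip install dnspython
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["1.1.1.1"]             # ask a specific public resolver
    answer = resolver.resolve("example.com", "A")  # replace with your hostname
    print("current answer:", [r.address for r in answer])
    print("remaining TTL (seconds):", answer.rrset.ttl)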
Oh, no doubt, but I hold out hope that somehow, some way, somebody will find a way to handle the hug of death. Perhaps it's simply impractical on such modest hardware.
I think it would have fared much better if I had waited until I got my ISA Ethernet card in the mail. But serving this over serial at 38400 baud and watching it try to keep up was tempting. I’ll have to see how well it fares with real networking hardware. At that point I might post it again, although I will have to add enough content to it to justify a repost.
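For a sense of how tight that serial link is, here's a rough back-of-the-envelope, assuming 8N1 framing and a ~10 KB page (both assumptions, the post doesn't say):

    # Rough ceiling on what a 38400 baud serial link can push, assuming
    # 8N1 framing (8 data bits + start + stop = 10 bits per byte).
    baud = 38400
    bytes_per_sec = baud / 10          # ~3840 bytes/s
    page_bytes = 10 * 1024             # hypothetical ~10 KB page, headers included

    print(f"throughput: {bytes_per_sec:.0f} bytes/s")
    print(f"seconds per 10 KB page: {page_bytes / bytes_per_sec:.1f}")
    # ~2.7 s per page, so even a handful of simultaneous visitors queues up fast.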
Are you able to measure the latency of processing one request when there is no load?
Just back of the envelope: if it takes 200,000 instructions to handle a request and we assume 6 cycles per instruction, that's about 1.2 million cycles per request; assuming a clock of around 30 MHz, that's about 25 requests per second.
HN is roughly 50K requests over 6 hours, so that's roughly 2 requests per second on average. I would imagine it peaks at about 25. So in theory you should be able to handle the traffic.
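Spelling that out, with the ~30 MHz clock being the assumption that makes the numbers above land at ~25/s:

    # Back-of-the-envelope CPU budget, using the numbers from this thread.
    clock_hz = 30_000_000                 # assumed ~30 MHz 386
    instructions_per_request = 200_000
    cycles_per_instruction = 6

    cycles_per_request = instructions_per_request * cycles_per_instruction
    max_rps = clock_hz / cycles_per_request
    print(f"max ~{max_rps:.0f} requests/s")        # ~25

    # HN front page: roughly 50K requests over 6 hours.
    avg_rps = 50_000 / (6 * 3600)
    print(f"average ~{avg_rps:.1f} requests/s")    # ~2.3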
>Perhaps it's simply impractical on such modest hardware
I imagine it's not just the hardware limitations but the available software. This one, for example, is running MS-DOS, which never really had any serious server-side HTTP/TCP implementations.
On the other hand, there were very busy BBS systems running on DOS, which had benefited from years of various optimizations.
It loaded fast just now, but it's only a wall of text with 3 links, which makes it seem less impressive. But it is cool! I enjoy older hardware.
The increased speed is probably because of CloudFlare. I know it’s kind of cheating, but it was the only way to avoid the hug of death. The good news is that the 386 is still serving lots of incoming requests, just not every single one. If I can think of some more authentic way to keep the site up, I’ll try that.