
It worked ridiculously well before I posted it to Hacker News. ;)



Loaded fine a couple seconds ago. Took about a minute to start transferring data, during which it didn’t appear to be responding.


Then what exactly is the point of posting it? Not calling you out specifically, but there have been quite a few similar ones, always unavailable because they don't scale beyond one user. So there's nothing to see and little to discuss.


Just fun, really. There is not much serious about it.


Consider putting it behind a CDN like CloudFront or Fastly; I think both have free plans - it's a good way to illustrate that you can serve a lot of traffic with very little backend power.


> it's a good way to illustrate that you can serve a lot of traffic with very little backend power.

What? It demonstrates that CloudFront or Fastly can handle a lot of traffic, since they'll cache just about everything if you put one in front of a static website...


I think the point is that an individual doesn't need a lot of paid resources to take a website live with their own hardware.


That's also true if you employ a ten-year-old laptop as I do: it's quite power-efficient and has a built-in UPS (todo: get one for the router). No CDN or anything, and the couple of page loads per second (at peak) that the HN homepage generates barely raises the load average into the point-tens.

The software/service you run makes most of the difference in whether it can run on standard at-home hardware or needs some distributed autoscaling system (when speaking of HN-homepage levels of traffic). Of course, if you're serving video or large images/audio, you're going to need more than a few megabytes per second of uplink. Or if your site trains neural networks custom for every visitor, that's a special case. But for regular websites...


True, but that's a weird point: you could just as well use free shared hosting (like Netlify) as an example that you don't need any paid resources at all to take a website live. The point of this submission seems to have been about taking a website live on your very own hardware, so mentioning CDNs feels out of place.


Ah yes, the very little backend power that Cloudflare runs.


I think I have CloudFlare on, and I am seeing requests with a CloudFlare header, but I am still getting a huge amount of traffic. I am not sure what the issue is. Maybe the site is so busy that CloudFlare can’t even load it to cache it, haha. Anyway, I have to leave for an exam now, so let’s hope it manages to work itself out.


Changing to Cloudflare probably meant updating your DNS settings. Depending on the TTL of the record and on clients respecting it, the change can take a while (up to days, even) to spread around the world.
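If you're curious how long resolvers are allowed to keep handing out the old record, you can check the advertised TTL directly. A rough sketch, assuming the third-party dnspython package and a placeholder hostname:

    # Rough sketch: print the A records and advertised TTL for a name.
    # Assumes dnspython (pip install dnspython); "example.com" is a placeholder.
    import dns.resolver

    answer = dns.resolver.resolve("example.com", "A")
    print("addresses:", [rr.address for rr in answer])
    print("TTL (seconds):", answer.rrset.ttl)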


I do believe you have to actually set some headers on your origin HTTP server, to tell the CF caches how long to cache for.
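For a static site, that can be as little as having the origin send a Cache-Control header; whether and how long the CDN honors it depends on your cache settings there. A minimal sketch in Python, with an illustrative max-age and port:

    # Minimal sketch: a static file server whose responses carry a
    # Cache-Control header so a CDN in front of it knows it may cache them.
    # The max-age value (1 hour) and port are illustrative only.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CachingHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Allow shared caches (e.g. a CDN edge) to keep this response.
            self.send_header("Cache-Control", "public, max-age=3600")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), CachingHandler).serve_forever()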


Oh, no doubt, but I hold out hope that somehow, some way, somebody will find a way to handle the hug of death. Perhaps it's simply impractical on such modest hardware.


I think it would have fared much better if I had waited until I got my ISA Ethernet card in the mail. But serving this over serial at 38400 baud and watching it try to keep up was tempting. I’ll have to see how well it fares with real networking hardware. At that point I might post it again, although I will have to add enough content to it to justify a repost.


Are you able to measure the latency of processing one request when there is no load?

Just back of the envelope, if it takes 200,000 instructions to handle a request and we assume 6 cycles per instruction, then that’s about 25 requests per second.

HN is roughly 50K requests over 6 hours, so that's roughly 2 requests per second on average. I would imagine it peaks at about 25. So in theory you should be able to handle the traffic.
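Spelling out that arithmetic also makes the hidden assumption visible: 25 requests per second follows from those figures only if the 386 runs at roughly 30 MHz, which is my guess rather than something stated above.

    # Back-of-the-envelope check of the numbers above.
    # The 30 MHz clock is an assumption (a plausible 386 speed); the
    # instruction count and cycles per instruction come from the comment.
    clock_hz = 30_000_000
    instructions_per_request = 200_000
    cycles_per_instruction = 6

    cycles_per_request = instructions_per_request * cycles_per_instruction
    print(clock_hz / cycles_per_request)   # ~25 requests per second

    # HN-frontpage estimate from the comment: ~50k requests over 6 hours.
    print(50_000 / (6 * 3600))             # ~2.3 requests per second on average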


The 38400 baud might be the biggest bottleneck.

http://bettermotherfuckingwebsite.com/ is 20k, ignoring HTTP overhead.

Maybe strip that to 15k after compression, and cut some content.

Still, 38400/15000 ≈ 2.5, so at most about 3 responses per second.

Add in serial parity overhead and HTTP overhead, and you might be able to sustain 2 rps with enough cleverness.

That leaves no room for bursts.


Don’t forget that 38400 baud serial is 3840 bytes/s…


38400 baud should be 38.4 kbit/sec, or 4800 bytes/sec, no?

[edit] never mind, you're right, assuming an 8N1 configuration - 8 bit bytes, 1 start bit, 1 stop bit, and no parity bits.


Yup definitely forgot bits vs bytes lol.

So 3kB/s, if your site is 15kB you're looking at 5 seconds per request, or 0.2 rps.

Plus overhead. I thought my original math sounded too fast.
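For completeness, here is the corrected throughput math with the 8N1 framing from upthread spelled out; the 15 kB page size is the earlier estimate, not a measurement.

    # Serial throughput sketch assuming 8N1 framing:
    # each byte costs 10 line bits (1 start + 8 data + 1 stop).
    baud = 38_400                # line bits per second
    line_bits_per_byte = 10
    page_bytes = 15_000          # page size estimate from upthread

    bytes_per_second = baud / line_bits_per_byte      # 3840 B/s
    seconds_per_page = page_bytes / bytes_per_second  # ~3.9 s
    print(bytes_per_second, seconds_per_page, 1 / seconds_per_page)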


A nitpick, though, from someone who actually used a 300 baud modem without a dialing function: baud != bps [0]

[0] https://www.linuxjournal.com/files/linuxjournal.com/linuxjou...


The content could be performance metrics from before and after the Ethernet card.


> Perhaps it's simply impractical on such modest hardware

I imagine it's not just the hardware limitations, but also the available software. This one, for example, is running MS-DOS, which never really had any serious server-side HTTP/TCP implementations.

On the other hand, there were very busy BBS systems running on DOS, which had had years for various optimizations to accumulate.


It is probably the modem honestly. The 386 is well capable of saturating a 38.4k modem many times over.


It loaded fast just now, but it's just a wall of text with 3 links, which makes it seem less impressive. But it is cool! I enjoy older hardware.


The increased speed is probably because of CloudFlare. I know it’s kind of cheating, but it was the only way to avoid the hug of death. The good news is that the 386 is still serving lots of incoming requests, just not every single one. If I can think of some more authentic way to keep the site up, I’ll try that.


A cluster of load balanced 386 boxes should do it!


Well, that and the page is 4 KB with no JavaScript.



