Show HN: My website, hosted on a 386 25 MHz, 4 MiB of RAM, 38400 baud internet (serentty.com)
234 points by serentty on April 16, 2022 | 170 comments



This uses CloudFlare and the page is cached (CF-Cache-Status: HIT header). Kinda disappointing that we're not receiving data from your computer but rather from a CloudFlare edge machine.


If you had visited a few hours earlier it would have been direct. Then the hug of death hit.

If you want to visit the page directly without CloudFlare, go to http://trombone.zapto.org instead.


I love watching the text load in discrete chunks!


Gave me a flashback to bbs days!


Still decent, considering.


I don't have that. My response headers:

    HTTP/1.1 200 OK
    Server: mTCP HTTPServ Mar  7 2020
    Date: Sun, 17 Apr 2022 00:30:00 GMT
    Content-Type: text/html
    Content-Length: 5355
    Expires: Fri, 14 Oct 2022 00:30:00 GMT
    Last-Modified: Sun, 17 Apr 2022 00:02:00 GMT
    Connection: keep-alive
The page loads in an interesting way: each paragraph arrives separately, one at a time down the page. I'm guessing that's because of the bandwidth limitation.


I don’t know too much about how CloudFlare works, but it looks like you got a direct connection! Congratulations! My guess is that your DNS does not have the CloudFlare entries yet.


Same for me from Slovenia's T-2 dnslj1.t-2.net. I see that the remaining TTL is only 41 seconds, so it won't be like that by the time I post this message.

Really neat, especially the slow loading. When I saw cloudflare on the page, I thought it was faked with javascript, but it was real!

Interesting how Chromium will start rendering no matter how little information it has (:


Yeah, I find the slow loading really fun. Unfortunately, when I install the Ethernet card I imagine it will get a fair bit faster. But on the bright side, I think it will reduce the need for CloudFlare, which helps in terms of authenticity.


I find it interesting how little computing power you need to host a heavily accessed website by using CloudFlare. When 386s were cutting edge, it would have taken a carload of computers plus some fancy cutting-edge load-balancing software. All very expensive.


You could probably put that entire web page in a Worker and not even bother with an origin (or use CF Pages).

Disclaimer: I work on R2


The 386 was far from cutting edge by the time Al G^H^H^H^H Tim Berners-Lee got around to inventing the Inter^H^H^H^H^H World Wide Web.


The web was invented in 1989, the same year the 80486 hit the market. So the 80386 wasn't that far from the cutting edge.


I am disappointed too, I wanted my request to be served from that machine :/


If you come back tomorrow, I can probably turn CloudFlare off. Although I understand not being enthusiastic enough about my novelty website to bother.


Please for the love of kittens turn off Cloudflare and never put it on again. For me and other Tor users that awful service is a guaranteed way of ensuring we'll never see your site (which I am very interested in BTW).


Yeah, sorry about that. I am planning on finding a way to keep things interesting and authentic with the hosting, but also accessible to everyone.

In the meantime, if you want to visit the page directly without CloudFlare, go to http://trombone.zapto.org instead.


I use a text based browser from my editor and that loads as close to instantly as you'd ever hope for! A VIC-20 version sounds even more awesome.


If you care about privacy, using a text based browser from an editor is a terrible idea for many reasons, lol.


Sounds interesting. Do share. What's your threat model?


Assuming emacs, attacking major mode detection, or triggering lisp execution.

Edit: but also identifying heuristics like lack of JS execution, or potentially client-specific HTTP client behaviour.


Turning Cloudflare off has DoS implications for a sanely hosted site, let alone a 386. You turning off Tor when required would be a better option.

I hope you don’t regularly make such selfish requests to potential DoS victims due to your own arrogance / ignorance.


I’m dumb enough to do it anyway. ;)


Extra points for being able to insult both the OP and myself in one sentence. If you've nothing constructive to add how about you leave us insane, ignorant and arrogant grown-ups in peace to work out our own stuff?


I don’t think I insulted op? You’re not a grown up, you’re a self entitled privacy weeb.


Are there any tricks a user can do to bypass Cloudflare or force a cache invalidation?


That would entirely defeat the purpose of cloudflare's DDoS protections.


Yeah, I imagine it would. I can invalidate the cache myself, but it would not make sense for a user to do so.


generally, adding random query params like ?1, ?2, ?12345 helps with the default settings of including that in the cache key.

that will also work in this instance.

you won't however see it slowly send the response as you do on http://trombone.zapto.org/, as cloudflare seems to block until it received the full response from the backend.
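if you want to check it yourself, here's a quick sketch using only the Python standard library (the random number just makes the cache key unique; this only works while the site keeps the default of including the query string in the cache key):

    # Fetch with a random query string to (probably) bypass the edge cache,
    # then check which way the request went. Illustrative sketch only.
    import random
    import urllib.request

    url = f"http://serentty.com/?{random.randrange(10**9)}"
    with urllib.request.urlopen(url) as resp:
        print(resp.headers.get("CF-Cache-Status"))  # expect MISS on a fresh param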


You're not wrong, but all of that behavior is configurable, so it may work on some sites and not others. The account owner can tell Cloudflare whether to consider query params different or the same for cache hit purposes. You can also configure whether Cloudflare streams or buffers (although some of it does require the enterprise plan).

No affiliation with cloudflare other than I use them for several sites.


indeed, hence

> helps with the default settings of including that in the cache key

I didn't know about response streaming being configurable, it seems to be enabled by default and configurable for enterprise customers: https://support.cloudflare.com/hc/en-us/articles/206049798-S...

I assume due to the (relatively) small response size of this page it buffers regardless.


If you want to visit the page directly without CloudFlare, go to http://trombone.zapto.org instead.


Nice! Thanks for serving me. It was snappier than expected.


Somehow getting the IP address of the origin server (in this case 174...*) would enable you to connect directly. Websites such as crimeflare.org crawled the net to gather those addresses, probably by scanning, but that site seems to have been shut down.


A site that really wants Cloudflare's protection would ignore all traffic that doesn't come from Cloudflare though. In practice, many origins probably aren't locked down in this fashion.


I just like to go to these to see if they actually are working. Sadly, but predictably, they never are.


It worked ridiculously well before I posted it to Hacker News. ;)


Loaded fine a couple seconds ago. Took about a minute to start transferring data, during which it didn’t appear to be responding.


Then what exactly is the point of posting it? Not calling you out specifically but there's been quite a few similar ones, always unavailable because they don't scale beyond one user. So there's nothing to see and little to discuss.


Just fun, really. There is not much serious about it.


consider putting it behind a cdn like cloudfront or fastly, i think both have free plans - it's a good way to illustrate that you can serve a lot of traffic with very little backend power.


> it's a good way to illustrate that you can serve a lot of traffic with very little backend power.

What? It demonstrates that Cloudfront or Fastly can handle a lot of traffic, since they'll cache just about everything if you put it in front of a static website...


I think the point is that an individual doesn't need a lot of paid resources to take a website live with their own hardware.


That's also true if you employ a ten year old laptop as I do, quite power-efficient and has a built-in UPS (todo: get one for the router). No CDN or anything and the couple of pageloads per second (at peak), as the HN homepage generates, barely raises the load average into the point-tens.

The software/service you run makes most of the difference between whether it can run on standard at-home hardware or if it needs some distributed autoscaling system (when speaking of HN homepage types of traffic). Of course, if you're serving video or large images/audio, you're going to need more than a few megabytes per second of uplink. Or if your site trains neural networks custom for every visitor, that's a special case. But for regular websites...


True, but that's a weird point: by that logic you could use free shared hosting (like Netlify) as an example that you don't need any paid resources at all to take a website live. The point of this submission seems to have been taking a website live on your very own hardware, so mentioning CDNs feels weird.


Ah yes, the very little backend power that cloudflare runs


I think I have CloudFlare on, and I am seeing requests with a CloudFlare header, but I am still getting a huge amount of traffic. I am not sure what the issue is. Maybe the site is so busy that CloudFlare can’t even load it to cache it, haha. Anyway, I have to leave for an exam now, so let’s hope it manages to work itself out.


Switching to Cloudflare probably meant updating your DNS settings. Depending on the TTL of the record and clients respecting it, the change can take a while (up to days, even) to spread around the world.


I do believe you have to actually set some headers on your origin HTTP server, to tell the CF caches how long to cache for.
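For illustration (the real origin here is mTCP HTTPServ on DOS, and the one-week lifetime is just an example value), this is roughly what "setting headers on your origin" means, sketched as a toy Python server:

    # Toy origin that tells downstream caches (Cloudflare included) how long
    # to keep the page. Illustrative only: the real origin in this thread is
    # mTCP on DOS, and the one-week max-age is an arbitrary example.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"<html><body>hello from the origin</body></html>"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(PAGE)))
            self.send_header("Cache-Control", "public, max-age=604800")
            self.end_headers()
            self.wfile.write(PAGE)

    HTTPServer(("", 8080), Handler).serve_forever()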


Oh, no doubt, but I hold out hope that somehow, some way, somebody will find a way to handle the hug of death. Perhaps it's simply impractical on such modest hardware.


I think it would have fared much better if I had waited until I got my ISA Ethernet card in the mail. But serving this over serial at 38400 baud and watching it try to keep up was tempting. I’ll have to see how well it fares with real networking hardware. At that point I might post it again, although I will have to add enough content to it to justify a repost.


Are you able to measure the latency of processing one request when there is no load?

Just back of the envelope, if it takes 200,000 instructions to handle a request and we assume 6 cycles per instruction, then that’s about 25 requests per second.

HN is roughly 50K requests over 6 hours, so that’s roughly 2 requests per second on average. I would imagine it peaks to about 25. So in theory you should be able to handle the traffic.
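Redoing that envelope as a quick sketch (all the inputs are the guesses above, not measurements):

    # Back-of-envelope CPU throughput for a 25 MHz 386.
    # Inputs are guesses from the comment above, not measurements.
    clock_hz = 25_000_000
    instructions_per_request = 200_000
    cycles_per_instruction = 6

    rps = clock_hz / (instructions_per_request * cycles_per_instruction)
    print(f"{rps:.1f} requests/second if CPU-bound")             # ~20.8

    # HN front page: ~50k requests over 6 hours on average.
    print(f"{50_000 / (6 * 3600):.2f} average requests/second")  # ~2.3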


The 38400 baud might be the biggest bottleneck.

http://bettermotherfuckingwebsite.com/ is 20k, ignoring http overhead.

Maybe strip that to 15k after compression, and cut some content.

Still 15000/38400 <= 3 responses per second.

Add in serial parity overhead, http overhead. Might be able to sustain 2 rps with enough cleverness.

Leaves no room for bursts


Don’t forget that 38400 baud serial is 3840 bytes/s…


38400 baud should be 38.4kbits/sec or 4800 bytes/sec no?

[edit] never mind, you're right, assuming an 8N1 configuration - 8 bit bytes, 1 start bit, 1 stop bit, and no parity bits.


Yup definitely forgot bits vs bytes lol.

So 3kB/s, if your site is 15kB you're looking at 5 seconds per request, or 0.2 rps.

Plus overhead. I thought my original math sounded too fast.
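Putting the corrected math in one place (assuming 8N1 framing, so 10 line bits per byte, and the hypothetical 15 kB page from above):

    # Serial throughput at 38400 baud with 8N1 framing:
    # 1 start bit + 8 data bits + 1 stop bit = 10 line bits per byte.
    baud = 38_400
    bytes_per_second = baud / 10          # 3840 B/s

    page_bytes = 15_000                   # hypothetical compressed page from above
    seconds_per_page = page_bytes / bytes_per_second
    print(f"{bytes_per_second:.0f} B/s, {seconds_per_page:.1f} s/page, "
          f"{1 / seconds_per_page:.2f} requests/s before any overhead")
    # ~3840 B/s, ~3.9 s per page, ~0.26 requests/s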


A nitpick, though, from someone who actually used a 300 baud modem without a dialing function: baud != bps [0]

[0] https://www.linuxjournal.com/files/linuxjournal.com/linuxjou...


The content can be performance metrics before and after the Ethernet card.


>Perhaps it's simply impractical on such modest hardware

I imagine it's not just the hardware limitations, but the available software. This one, for example, is running MS-DOS, for which there were never really any serious server-side HTTP/TCP implementations.

On the other hand, there were very busy BBS systems running on DOS where there had been time for years of various optimizations to happen.


It is probably the modem honestly. The 386 is well capable of saturating a 38.4k modem many times over.


It loaded fast just now, but it's just a wall of text with 3 links. Just seems less impressive because of that. But it is cool! I enjoy older hardware.


The increased speed is probably because of CloudFlare. I know it’s kind of cheating, but it was the only way to avoid the hug of death. The good news is that the 386 is still serving lots of incoming requests, just not every single one. If I can think of some more authentic way to keep the site up, I’ll try that.


A cluster of load balanced 386 boxes should do it!


Well that and the page is 4KB with no javascript.


It's up now.


It has gotten pretty busy. In case you can’t manage to load it live from the machine, here is a snapshot of it:

https://archive.ph/WUdgc


Yeah, even a local cache in front of the serial link would go a long, long way toward making this more feasible.


For the record, the machine is running the mTCP web server: http://www.brutman.com/mTCP/mTCP_HTTPServ.html

I wrote it back in 2014 and I've hosted brutman.com on it at times. My usual machine to run it on for torture testing is a PCjr with a hard drive and Ethernet. This 386 would be so much happier with an Ethernet card; the serial port connection is really hurting the performance.

A new version of mTCP is in the works; look for it in the next two months.


Oh wow, I know you from the README! It really is something seeing you here. It is not a very impressive site at the moment but the immense amount of traffic that I have been getting is encouraging me to grow it into something with some actual content. Probably something retro related in some way.


Pets; not cattle. That’s the way I like it.


> I have gotten Rust to compile for 386 DOS machines before

I don't know why I laughed so hard at this. Why would any sane person do this?


It’s fun. Lots of people like doing retro game dev and the like, but often you are stuck with ancient C toolchains that suck even if you like C itself. Getting Rust working is a great way to make it more enjoyable from a hobbyist perspective, I think.


Why not?

If you talk to the spooky GCC people with beards they'll point out that they support much much weirder chips than that (albeit with little to no library support).


You may not believe me, but there are relatively modern 386-compatible embedded systems. Some of them run DOS for the sake of very old applications, such as certain industrial control software. Targeting them for modernization without gutting the hardware (which can be perfectly functioning and highly durable) is a valid goal.


You're a cat, a 386 DOS machine is a tiny box on the floor. You jump in the box and slowly settle all of your limbs into the box till you can sit. Now you fit.

Is it sane? No. But you're a cat.


> Why would any sane person do this?

Perhaps the same kind of people who got GCC to compile for 386 DOS machines (also using DPMI). Which might sound useless, since it's an underpowered operating system running on underpowered hardware, but it was used to build some very popular software, like the original Quake game (source: http://www.delorie.com/djgpp/history.html).


The original Doom, yes. Even though it was barely playable on a 386 25 MHz.

The original Quake was barely playable on a 486 DX4 100 MHz, and was fine on a Pentium.


The FastDoom port slims down some floor textures and such and makes Doom playable (and not stamp sized) on a 386.

On the 386, you could read SICP as a GNU Info file blazingly fast while running a Scheme compiler on another TTY, or under GNU Screen. You could typeset with Groff and Mom, too.

Now try running LaTeX on that. Yes, Donald Knuth is the ur-creator of CS typesetting, but hey, sometimes you don't need a full 3 GB TeX Live suite to typeset a book or a notebook of math equations.


I guess you'll laugh even harder at the guy trying to get it to work on a Psion 3a.


Ditch cloudflare and write the web server in assembly and see if it holds up to raw HN traffic. Bonus points if you have an L1 cache and can fit and serve your entire website from it.


I think the 38400 baud connection is a bigger issue than the CPU load. When I get Ethernet working I think there is a decent chance that I won’t need CloudFlare—especially if I can find a full 32-bit server that really uses the potential of the 386. Now to wait for that card to arrive in the mail. ;)

Edit: If you want to visit the page directly without CloudFlare, go to http://trombone.zapto.org instead.


I guess the next challenge is gigabit ethernet (!!!) for the 386


There are 100 Mbps (Fast Ethernet) cards for the ISA bus, which in practice can apparently be around 50% faster than 10 Mbps Ethernet over such a bus. However, they seemed pretty expensive compared to the ubiquitous 3com Etherlink III that are available quite cheaply. I think Ethernet will still make a huge difference in terms of serving this site.


Poor 386. 25 MHz ... an SX, I'd guess?

Man, you should get that upgrade to 8 megs of RAM. Windows 3.1 benefited greatly, and even OS/2 Warp, which already worked fine with 4 megs, saw some improvements in performance.


It is indeed an SX! I am upgrading it to 16 MiB soon, as well as putting an Ethernet card in it. At that point I think it might end up a much more capable host. I am not sure if I want to find some way to keep this page up long-term yet. If there ends up being a reason to, then I will look around for some suitably retro way to host it, while still allowing me to use the 386 for other things.


The SX makes it practically a 286, right?


Sort of. 32-bit internally, 16-bit externally. Not as fast as the DX, but it had all the processor modes, so it could run 32-bit protected-mode software, unlike the 286.

One place I worked, we sold a 286 upgrade board that had a 386SX and a few support chips. Underneath was a 286-style socket. You removed your 286 and put this thing in its place. They worked OK and provided a decent speed boost.


Amstrad's last great PC series (the 3x86 series made with very standard components, unlike the unusual 2x86 series) used this strategy for the 3386, I think.


The 286 is a 16-bit CPU, whereas the 386 is a 32-bit CPU.

This alone made the 386 far more capable and compatible with modern software (Linux requires at least a 32-bit CPU).


It's a good reminder that you can!

My issue with these projects is how much energy they use for whatever task they can do.

I have a 10-year-old machine that was high end in its day being lapped by a high-end laptop on all tasks, except for GPU tasks, thanks to the card I slammed into the PCIe slot. The power consumption is honestly sad. It's not that it's an energy cost thing, since it would take decades to break even on lower electricity bills after buying a new machine specifically for lower energy use; it's more of a self-conscious thing.

OPs machine should just be a compute instance virtualized in some gigantic memory cluster somewhere that was already running.


>OPs machine should just be a compute instance virtualized in some gigantic memory cluster somewhere that was already running.

Then it would no longer be OP's.


Why aren't there ASICs for hosting web sites


"Johnson, why can't we push this security update?"

"Well boss, we thought it would be more webscale to use our own chip design, so we're kinda stuck, but the good news is that we've negotiated the cost of new masks down to 3 million dollars"


Related question: why aren't there ASICs to accelerate compilation?


Because there's no point. A single server can push hundreds of Gbps of traffic already; using an ASIC would just make it less agile.


You might be able to bake some preprocessing of e.g. HTTP into the NIC, doing incrementally more work using well defined standards before delivering packets to the host. Especially with an FPGA.


This might be possible in the near future using smart NICs that support eBPF offload.


IP and TCP often are.


> Why aren't there ASICs for hosting web sites

Isn't this what ESP32 variants are trying to achieve? [1]

[1] https://en.wikipedia.org/wiki/ESP8266


No, that’s just a general purpose computer. ASICs are dedicated chips for a given task, where that task’s “program” is built into the chip and it can only do that.


While there may not be HTTP-specific ASICs, there certainly are SSL/TLS-offloading ASICs and general TCP-processing ASICs, found on higher-end network cards, in routers, etc.


And none of those are an ESP32


ESP32 can do it.


That's a general purpose microcomputer, not application specific!


This is what my first PC was, my freshman year of college back in 1990. It felt damn good that I got the 65 MB hard drive instead of the typical 40 MB that most machines had back then.


Heh, this has an 80 MB hard drive and it is killing me with how small it is. I’m constantly transferring stuff back and forth.


Luxury!

I had a 40MB hard drive on my Cyrix 386, and double-spaced it to "80"MB

It's crazy to think how little 40MB is now, I've edited PowerPoint decks larger than that.


I still remember all the tricks to make a floppy disk fit more data and get more ram. This was long before websites like https://downloadmoreram.com were invented.


Thanks to CloudFlare, it's working now...


Yep, I figured it was the only way to get it to keep working. Interesting that it seems to work for you already. It isn’t yet going through CloudFlare for me when I try to visit it. Maybe DNS cache?


It could be your local DNS cache. Testing with a private browser window usually does the trick, otherwise, just flush your DNS cache ;)


PSA: if you want to see how the site loads directly from the machine (eg, bypass cloudflare), just give it any random query string param. Such as: http://serentty.com/?t=13432 or http://serentty.com/?t=356561649

If you look at the headers, you'll see "CF-Cache-Status: MISS".

And you'll see the site load, just as you remember from the BBS days.

You are welcome.


I actually added a link to the page to bypass CloudFlare entirely. I wanted the page to remain accessible so people could see what was on it regardless, and CloudFlare helps with that, but I also know that seeing the page load slowly from the real machine is an experience that many people are here for, so I tried to find a way to have my cake and eat it too.


ah! nice! sorry about that, I will admit (although I think you already know), that I didn't initially read the contents of your page :)


Enjoyably deadpan :-)


Big ooof, of course it's down. I'm hosting a simple static site on a Core 2 Duo + 4GB RAM and it's amazing just how slow the hardware is. It can barely handle 100 concurrent visitors (as tested with JMeter), and if it starts swapping on the spinning rust inside... that's it, forget it, come back the next day. I don't know how a 386 can even start with modern software.


Why so slow? As a "this will never work" stop-gap when my laptop died, I put the latest Manjaro Linux on a Core Duo iMac (early 2008, 6 GB) to use for work that day. I was blown away by how well it worked, and I was able to do everything I normally do without problems (well, except I had to use the Outlook web client for email/calendar, but that is just Linux and not hardware). Months later, I still use it almost every day. It is my preferred spot to take Zoom / MS Teams meetings due to the large screen. I run Docker and Distrobox containers on it. I built a toy compiler in .NET on it. I play Diablo and StarCraft on it from time to time. I have it running Plex, and it serves movies and TV episodes to my family (often to more than one device at a time). I toy around with SerenityOS on it in QEMU (in a VM) and it runs great.

I have not tried to host a static website but it surprises me that 100 visitors would give that hardware trouble. I will have to try that now.


It shouldn't be a problem. Checking the classic C10K article at http://www.kegel.com/c10k.html ...

> In 1999 one of the busiest ftp sites, cdrom.com, actually handled 10000 clients simultaneously through a Gigabit Ethernet pipe

HTTP isn't much heavier than FTP (if at all), so 100 visitors fetching static content shouldn't be a challenge for 2008 hardware.


> I play Diablo

Try Flare RPG, and, next, Slashem and/or Nethack.


To be fair, it runs Apache. I know, I know...


I assume you're doing more than just serving up static HTML if your machine can't handle 100 simultaneous connections with a C2D.


It could also be thermals. On old hardware the thermal paste is often completely fried. I've seen C2Ds barely able to handle anything once they got hot.


Yeah, or dust - my C2D Macbook Pro used to peg all its cores with kernel_task at the slightest hint of work, to the point where the UI thread started hitching because it was getting preempted, and the mouse would skip around the screen. Took me a bit of research to figure out that they use kernel_task for thermal throttling (great naming, guys). Opening it up, there were mats of dust that looked like felt pads between the fans and the heatsinks. Took out the felt pads, suddenly everything was smooth as butter.


A regular static site shouldn't be a problem on a C2D with 4GB, and it shouldn't be swapping either (unless you're doing more than web hosting on that machine). I'm assuming you're literally serving static pages and not running much server-side.

Many people host static sites on 1GB SBCs and 1GB/1T VMs with no issues, and you can make do with even less.

Update: I tried some tests on my secondary server, which is likely slower than your C2D (AMD G-T48E). I simply ran ApacheBench on the Proxmox Backup Server web interface login page since that's the only web service I have running on it. The two machines were connected over Gigabit LAN and HTTPS was used.

This is 1000 requests at a time for a total of 5000 requests. While it was running I was able to make a connection to the login page in my browser as well (it took six seconds to load, but it loaded). I think I did the test properly but let me know if I should try something else; it's been a while since I've done this kind of stuff.

  ~$ ab -n 5000 -c 1000 https://<PBS server>:8007/
  This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
  ...
  Benchmarking <PBS server> (be patient)
  ...
  Finished 5000 requests
  ...
  Server Software:        
  ...
  SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
  Server Temp Key:        X25519 253 bits
  ...
  Document Path:          /
  Document Length:        1940 bytes
  Concurrency Level:      1000
  Time taken for tests:   34.274 seconds
  Complete requests:      5000
  Failed requests:        0
  Total transferred:      10215000 bytes
  HTML transferred:       9700000 bytes
  Requests per second:    145.88 [#/sec] (mean)
  Time per request:       6854.761 [ms] (mean)
  Time per request:       6.855 [ms] (mean, across all concurrent requests)
  Transfer rate:          291.06 [Kbytes/sec] received
  
  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:       21 2863 2225.0   2535   10907
  Processing:    98 3275 1836.3   3142   10434
  Waiting:        1 3275 1836.4   3142   10434
  Total:        118 6138 3078.9   5655   12545
  
  Percentage of the requests served within a certain time (ms)
    50%   5655
    66%   6733
    75%   6965
    80%   7569
    90%  12324
    95%  12469
    98%  12504
    99%  12517
   100%  12545 (longest request)


It isn’t running modern software. It is running MS-DOS 6.22. If it’s not loading for you, you can try the snapshot.


You are certainly doing something wrong. I'm serving multiple WordPress and static sites from a former thin client, passively cooled, with a Celeron N3010. The only real difference from a Core 2 Duo is AES-NI.


Are those wordpress pages cached, or generated for every pageload? Because if you generate it once and then basically just serve static content, yeah that works fine on any potato (and that's how static hosting should be, so that's great)

I tried running Wordpress with some plugins that required fresh page generation for every pageload for a friend on an Intel Atom... D525 I think it was. A single pageload was more than twenty seconds if I'm not mistaken. Without looking up that model number I'd guess this Celeron probably has similar performance, so your being able to host 'multiple' of those sounds like there's more in play


If the site takes twenty seconds on a D525, it's probably pretty demanding server-side and will require solid hardware to deliver many requests at good performance. Imagine if you're using a machine twenty times faster than the D525 (i.e. a modern desktop CPU), assuming linear speedup you can generate the page in one second. That's one second per pageload per user, and that modern machine is going to likely choke up too if the site gets decent traffic.


Which sums up why WordPress blogs go down a lot when the author runs them on shared hosting or a VPS and doesn't bother to set up caching until the site literally can't handle the traffic anymore.


Perhaps it's the OS and software?

I'm running an AlphaServer DS25, and I can host the heck out of static sites. It even runs php sites like Wordpress decently. Then again, I'm running NetBSD.


IIRC, there was a post on here about Redbean being able to serve a wild amount of requests, on minimal hardware. But perhaps it was a different piece of software than I recall.


I used to run a web forum with hundreds of concurrent users on a much lower-specced machine in the early 2000s, so this sounds like a software problem.


Your machine should be able to handle way more than that?

What's running on it and what's your connection?


Takes 80ms to load. From Europe. That's insane, if not altogether impossible.

But the IP resolves to 188.114.96.3, which is Cloudflare.

So regrettably it doesn't look like I was actually fetching anything off the 386 per se.


When I first posted it a few hours ago, I didn’t have CloudFlare. You can see lots of people unable to load the page at all in the earliest comments. Unfortunately it got to a point where pages weren’t just slow to load, but the connections were dropped entirely. So I figured that between the options of enabling CloudFlare or just having it be completely inaccessible, the former was preferable even at the cost of authenticity. I would love to find a better idea for how to keep the site online, if you have any.


But but..., going through Cloudflare makes it lose everything related to the context of being hosted on a 386SX etc... :(


Yeah, maybe posting it on Hacker News was a bad idea. I should have tried somewhere with fewer visitors first.


Hehe, sooner or later you would have ended up here anyway => same problem, just later.


It took 96ms to load on an Indonesian island. This is the internet we deserve.


Wouldn't load. Title checks out. :)


Why does the title have the 'i' in 4 MiB of RAM? That's 4 megabytes, right? Traditionally 4 MB?

What does the extra lowercase letter indicate? I'm not used to that, and I've been dealing with bytes from the kilo-size to the peta-size for 40 years.


MiB is an attempt to disambiguate what used to be unambiguous before the hard drive companies set their marketing teams on it.

Specifically, MiB is 1024^2 bytes. MB is either that or 1000^2 bytes, depending on who you ask and if they're trying to sell you something.


MiB refers to mebibyte, which is in base-2 (2^20 bytes).

MB is canonically base 10 (10^6 bytes).
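In concrete numbers, just evaluating the two definitions (so the 4 MiB in the title is slightly more than 4 MB):

    print(2**20)                  # 1 MiB = 1,048,576 bytes
    print(10**6)                  # 1 MB  = 1,000,000 bytes
    print(4 * 2**20 - 4 * 10**6)  # 4 MiB is 194,304 bytes more than 4 MB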


I guess we could host sites on phones now too. That'd be really cool.


That was my first thought when I got unlimited mobile data back in ~2006. I could be always on IRC, seed torrents infinitely... and that's when the teenage lucb1e realized that there is no port forwarding on mobile networks and why that is a problem.

To this day I wonder why nobody seems to care at all about that. It's like being on the real internet except you can't reach each other, you have to always go through some third party that is on the internet proper.

At least we got "net neutrality" now, which doesn't apply to SYN packets for some reason but at least it applies in the other direction, so no more 'blocked site' page on buienradar.mobi because KPN wanted to sell its expensive SMS weather service instead of this newly popular weather radar site.

For what it's worth, I did compile and run a bitcoin miner on my phone ten years ago. Running services on it isn't exactly a new idea, but now that they're so powerful, it also means we can't supply enough power from the battery or dissipate enough heat while in a pocket.


For a long time AT&T would allow phone to phone IPv4 traffic.

When T-Mobile rolled out IPv6 you could get to your phone from anywhere else with IPv6.


Do they not anymore? That’s horrible. Deliberate reduction of the internet into a platform for consumption instead of bidirectional communication.


/me broadly gestures at every well known tech co


I bet someone is trying to access this website using similar hardware.


I guess it explains why it doesn't load for me, or anyone.


What's amazing to me is that my tiny ESP32 D4 Pico has 8 MB of attached PSRAM, compared to the 386 powering this with only 4 MB… I bet my RAM is faster too…


Makes me wonder if you could do it on a Cortex-M!


What is the power consumption? Curious how it compares against a Raspberry Pi Zero.


Loaded and now I want to know more about Rust->C64 (kickass and acme here)


This fork is handy for that. It’s fun to play with, but you need to do a decent amount yourself to get it set up.

https://github.com/mrk-its/rust-mos


It took a second or two to get going but it did load, albeit slowly!


I hope you've disabled call-waiting on your phone line :)


Is it just me… this website has the snappiest page load!


That is what happens when you have a page which is genuinely tiny even for the early days of the web, served over a modern internet connection, and it is cached so it does not have to go all the way to the server. The web could be so much faster than it is.


It loaded for me just now, took a minute or so.


Link doesn’t work


It could be that it is genuinely not working, but it could also be that it takes a while to start loading. The server can only serve 8 clients at once, over a bandwidth of about 4 KiB/s. So maybe try again and give it some time to load in another tab.
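Rough numbers for why patience helps, taking the Content-Length from the headers posted earlier in the thread and treating the link as fully shared (a simplistic model, but it gives the right order of magnitude):

    # Rough wait-time estimate; a simplistic shared-link model.
    link_bytes_per_s = 3_840     # 38400 baud with 8N1 framing
    page_bytes = 5_355           # Content-Length from the headers above
    clients = 8                  # the server's connection limit

    alone = page_bytes / link_bytes_per_s
    full_house = alone * clients
    print(f"~{alone:.1f} s alone, ~{full_house:.0f} s with all 8 slots busy")
    # ~1.4 s alone, ~11 s under full load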


it's loading fine here


Cloudflare free plan exists for a good reason.


delete


Please see the comments on this thread. It was directly internet facing when I posted it initially. Hacker News managed to bring it down. I had to add a cached reverse proxy. If you have any ideas about how to more authentically keep the site online, please let me know.


Fair enough, I'll admit I was a bit triggered by the title. I'll delete my original comment. There is a realistic bound to the number of requests you can serve per second on given hardware... I'm not doubting you've done things to increase that.


Your machine is smoking right about now… might want to open a window.


They can't open any windows. It's only running DOS.


lol .. it's always the same story with these .. lol



