
Big ooof, of course it's down. I'm hosting a simple static site on a Core 2 Duo + 4GB RAM and it's amazing just how slow the hardware is. It can barely handle 100 concurrent visitors (as tested with JMeter), and if it starts swapping on the spinning rust inside... that's it, forget it, come back the next day. I don't know how a 386 can even start with modern software.



Why so slow? As a “this will never work” stop-gap when my laptop died, I put the latest Manjaro Linux on a Core Duo iMac ( early 2008 ) to use for work that day ( 6 GB ). I was blown away by how well it worked and I was able to do everything I normally do without problems ( well, except I had to use the Outlook web client for email / calendar — but that is just Linux and not hardware ). Months later, I still use it almost every day. It is my preferred spot to take Zoom / MS Teams meetings due to the large screen. I run Docker and Distrobox containers on it. I built a toy compiler in .NET on it. I play Diablo and StarCraft on it from time to time. I have it running Plex and it serves movies and TV episodes to my family ( often to more than one device at a time ). I toy around with SerenityOS in QEMU on it and it runs great.

I have not tried to host a static website but it surprises me that 100 visitors would give that hardware trouble. I will have to try that now.
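When I do, the quickest sanity check is probably something along these lines; just a sketch (the hostname is a placeholder, and ab is ApacheBench, from the apache2-utils package on Debian-likes), but 100 concurrent clients against a single static page roughly matches the load described above:

  # 5000 requests total, 100 at a time, against one static page
  ~$ ab -n 5000 -c 100 http://old-imac.local/index.html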


It shouldn't be a problem. Checking the classic C10K article at http://www.kegel.com/c10k.html ...

> In 1999 one of the busiest ftp sites, cdrom.com, actually handled 10000 clients simultaneously through a Gigabit Ethernet pipe

HTTP isn't that much heavier than FTP (if it's heavier at all), so 100 visitors for static content shouldn't be a challenge for 2008 hardware.


> I play Diablo

Try Flare RPG, and after that SLASH'EM and/or NetHack.


To be fair, it runs Apache. I know, I know...


I assume you're doing more than just serving up static HTML if your machine can't handle 100 simultaneous connections with a C2D.


It could also be thermals. On old hardware the thermal paste is often completely fried. I've seen C2Ds barely able to handle anything once they got hot.


Yeah, or dust - my C2D Macbook Pro used to peg all its cores with kernel_task at the slightest hint of work, to the point where the UI thread started hitching because it was getting preempted, and the mouse would skip around the screen. Took me a bit of research to figure out that they use kernel_task for thermal throttling (great naming, guys). Opening it up, there were mats of dust that looked like felt pads between the fans and the heatsinks. Took out the felt pads, suddenly everything was smooth as butter.


A regular static site shouldn't be a problem on a C2D with 4GB, and it shouldn't be swapping either (unless you're doing more than web hosting on that machine). I'm assuming you're literally serving static pages and not running much server-side.

Many people host static sites on 1GB SBCs and 1GB/1T VMs with no issues, and you can make do with even less.

Update: I tried some tests on my secondary server, which is likely slower than your C2D (AMD G-T48E). I simply ran ApacheBench on the Proxmox Backup Server web interface login page since that's the only web service I have running on it. The two machines were connected over Gigabit LAN and HTTPS was used.

This is 1000 requests at a time for a total of 5000 requests. While it was running I was able to make a connection to the login page in my browser as well (it took six seconds to load, but it loaded). I think I did the test properly but let me know if I should try something else; it's been a while since I've done this kind of stuff.

  ~$ ab -n 5000 -c 1000 https://<PBS server>:8007/
  This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
  ...
  Benchmarking <PBS server> (be patient)
  ...
  Finished 5000 requests
  ...
  Server Software:        
  ...
  SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
  Server Temp Key:        X25519 253 bits
  ...
  Document Path:          /
  Document Length:        1940 bytes
  Concurrency Level:      1000
  Time taken for tests:   34.274 seconds
  Complete requests:      5000
  Failed requests:        0
  Total transferred:      10215000 bytes
  HTML transferred:       9700000 bytes
  Requests per second:    145.88 [#/sec] (mean)
  Time per request:       6854.761 [ms] (mean)
  Time per request:       6.855 [ms] (mean, across all concurrent requests)
  Transfer rate:          291.06 [Kbytes/sec] received
  
  Connection Times (ms)
                min  mean[+/-sd] median   max
  Connect:       21 2863 2225.0   2535   10907
  Processing:    98 3275 1836.3   3142   10434
  Waiting:        1 3275 1836.4   3142   10434
  Total:        118 6138 3078.9   5655   12545
  
  Percentage of the requests served within a certain time (ms)
    50%   5655
    66%   6733
    75%   6965
    80%   7569
    90%  12324
    95%  12469
    98%  12504
    99%  12517
   100%  12545 (longest request)


It isn’t running modern software. It is running MS-DOS 6.22. If it’s not loading for you, you can try the snapshot.


You are certainly doing something wrong - I'm serving multiple WordPress and static pages from a passively cooled former thin client with a Celeron N3010. The only difference compared to a Core 2 Duo is AES-NI.


Are those WordPress pages cached, or generated for every pageload? Because if you generate it once and then basically just serve static content, yeah, that works fine on any potato (and that's how static hosting should be, so that's great).

I tried running WordPress for a friend with some plugins that required fresh page generation for every pageload, on an Intel Atom... D525 I think it was. A single pageload took more than twenty seconds if I'm not mistaken. Without looking up that model number, I'd guess this Celeron has roughly similar performance, so your being able to host 'multiple' of those sounds like there's more in play.
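(For what it's worth, a quick way to measure that kind of uncached generation time without guessing is a curl timing one-liner along these lines; the URL is just a placeholder:)

  ~$ curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' https://example.org/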


If the site takes twenty seconds on a D525, it's probably pretty demanding server-side and will need solid hardware to serve many requests with good performance. Imagine you're using a machine twenty times faster than the D525 (i.e. a modern desktop CPU): assuming linear speedup, you can generate the page in one second. That's one second per pageload per user, and even that modern machine is likely to choke up if the site gets decent traffic.


Which sums up why WordPress blogs go down a lot when the author runs them on shared hosting or a VPS and didn't bother to set up caching until the site literally can't handle the traffic anymore.
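The fix is usually a single caching layer so repeat visitors never hit PHP at all. As a sketch (assuming wp-cli is installed and the plugin slug is still wp-super-cache), it can be as little as:

  ~$ wp plugin install wp-super-cache --activate

or a full-page cache in the reverse proxy in front of it; either way the box goes back to serving what is effectively static content.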


Perhaps it's the OS and software?

I'm running an AlphaServer DS25, and I can host the heck out of static sites. It even runs PHP sites like WordPress decently. Then again, I'm running NetBSD.


IIRC, there was a post on here about Redbean being able to serve a wild number of requests on minimal hardware. But perhaps it was a different piece of software than I recall.


I used to run a web forum with hundreds of concurrent users on a much lower-specced machine in the early 2000s, so this sounds like a software problem.


Your machine should be able to handle way more than that?

What's running on it and what's your connection?



