Kids today. The Raspberry Pi has a 700MHz processor and at least 256MB of memory. In 1999 I was running an mp3 search engine that processed 200k daily queries at its peak, on a Pentium II with 128MB.
Serving static pages at the rate HN can generate requires fewer resources than playing any game on your phone.
I don't think this is a totally fair comparison, because anyone who knows how to code their own site (/service) is already at a major advantage.
Now, if you downloaded an off-the-shelf MP3 search engine package and merely hosted it, it'd be different.
I also don't think that surviving a sudden upsurge in traffic is a particularly impressive feat just because it's Wordpress. If anything, it highlights how inefficient Wordpress is, which may be a result of trying to be easy to set up. And a lot of the people who use it don't realise how much worse they make it by adding plugins.
Maybe this is a better take-away when comparing the two:
This is a showcase of the progress of inexpensive, accessible, and miniaturized (embedded) hardware over the course of a little more than a decade - back when we needed much bulkier and more expensive PCs. Yes, there are better examples of things that can fit in your pocket and do the things of yesteryear N times better, but I still like seeing such raw and transparent examples. Extrapolate this out another decade and it's nice to imagine what's coming next.
I agree that the site holding up to traffic is nothing special. However, the server runs on less than 5 watts, costs under $50, boots from a flash disk (still faster/bigger than disks back then), and fits in your palm - that is what is really impressive.
So, you blogged about your blog, posted the blog about your blog, then blogged about the traffic you got from that post, then posted that blog as well.
Not quite everywhere. When you have enough traffic, the load spikes resulting from putting cache generation in the user path become seriously painful, so often a separate process is responsible instead.
(Imagine 5,000 threads all deciding they want exactly the same data at exactly the same time, then trying to write it to exactly the same location. Now imagine 50,000 more threads trying to do exactly the same thing because of the delays caused by the first set. Now imagine your web site is down and your mobile phone is ringing)
Yes, the thundering herd problem. While the site may be briefly less responsive, if the traffic is all for a single piece of content then as long as one request goes through you'll end up with the content in cache, and the load immediately drops.
The bigger problem is when your entire cache is cold (e.g. memcached was restarted) and there's a ton of traffic to lots of different content. A single piece of content should not be that crippling, unless it's stupidly slow to render and cache.
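The usual mitigation for the dogpile described above can be sketched in a few lines: on a cache miss, only one thread takes a lock and regenerates the entry while the rest wait for it rather than all hitting the database at once. This is a minimal in-process sketch, not any particular caching plugin's implementation; `render_page` and the dict standing in for memcached are hypothetical.

```python
import threading

cache = {}                     # stand-in for memcached
rebuild_lock = threading.Lock()

def render_page(key):
    # Hypothetical stand-in for an expensive render (SQL queries, templating).
    return f"<html>content for {key}</html>"

def get_page(key):
    page = cache.get(key)
    if page is not None:
        return page            # cache hit: the common, cheap path
    # Cache miss: one thread rebuilds while the others block briefly,
    # instead of 5,000 threads all rendering the same page at once.
    with rebuild_lock:
        page = cache.get(key)  # re-check: another thread may have filled it
        if page is None:
            page = render_page(key)
            cache[key] = page
    return page
```

Real deployments refine this with per-key locks, timeouts, and serving stale data while rebuilding, but the double-check under a lock is the core idea.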
Caching plugins are a prerequisite for Wordpress to survive heavy traffic though, even on much more powerful servers. Anyone who knows what they're doing is using one.
One thing to consider is sending all of your logs over UDP syslog to a collector like Splunk, or another server that just listens on UDP and writes them to disk. That will save you the trouble of hooking an external disk up to the Raspberry Pi, and it starts you down the path of centralization and management.
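The "server that just listens on UDP" half of that is tiny. A rough sketch (the port and filename are arbitrary choices; syslog's conventional UDP port is 514, which requires root to bind, and the `max_msgs` parameter exists only so the loop can terminate in a demo):

```python
import socket

def run_collector(host="0.0.0.0", port=514, logfile="remote.log", max_msgs=None):
    """Listen for UDP syslog datagrams and append them to a file.

    A real collector loops forever; max_msgs is only for demonstration.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    received = 0
    with open(logfile, "a") as f:
        while max_msgs is None or received < max_msgs:
            data, addr = sock.recvfrom(8192)  # one syslog message per datagram
            f.write(f"{addr[0]} {data.decode(errors='replace')}\n")
            f.flush()
            received += 1
    sock.close()
```

On the Pi side you'd point the syslog daemon at the collector, e.g. with rsyslog a line like `*.* @collector-host:514` (a single `@` means UDP).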
Not the OP, but it seems that my Comcast IP stays pretty stable (for months on end). Also, I know that Namecheap (and possibly others) provides a dynamic DNS-style API to update their nameservers if your IP does change.
Certainly you wouldn't want to host anything too important on your home network, but the odd Mumble server or webpage shouldn't be too big of a problem.
I do exactly this--I use namecheap's dynamic DNS thing and a Comcast home connection to serve my personal website. Namecheap lets you update DNS with a GET request to a URL, so I just set up a cron job to do that every night. Works great.
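That kind of updater is a one-liner of URL building plus a GET. A sketch, assuming Namecheap's dynamic-DNS endpoint at `dynamicdns.park-your-domain.com` (check their current docs for the exact parameters; the host/domain/password values here are placeholders):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def build_update_url(host, domain, password, ip=None):
    """Build the GET URL that updates one dynamic-DNS record.

    Endpoint and parameter names follow Namecheap's dynamic-DNS docs;
    if ip is omitted the service uses the requester's address.
    """
    params = {"host": host, "domain": domain, "password": password}
    if ip:
        params["ip"] = ip
    return "https://dynamicdns.park-your-domain.com/update?" + urlencode(params)

def update_dns(host, domain, password):
    # Run from cron, e.g. nightly:  0 3 * * * /usr/bin/python3 update_dns.py
    with urlopen(build_update_url(host, domain, password)) as resp:
        return resp.read().decode()
```

Since the service sees the IP the request came from, the cron job doesn't even need to know its own public address.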
I think there's a lesson here about how proper caching really gives life to weaker hardware. I only bring this up because I remember when the term "the digg effect" was used for the first time. I also remember people testing services like MediaTemple to see if it could stand up against the incoming traffic. There was so much focus on having beefy hardware when really all everyone needed was better caching.
Caching is only needed in the first place because Wordpress generates hundreds of SQL queries on a page load. It probably runs more queries than Amazon's homepage.