Hacker News

Most apps just don't have computation patterns where RAM usage could even be a problem; most apps are IO-bound in some way. The companies I've worked for have deployed new servers because of high load averages (in the unix-load sense), not because of RAM shortages.


That's only true because most companies admit defeat before trying: they hit the disk when serving. The big Internet companies (Google, Facebook, LiveJournal, hell, even Hacker News and Plenty of Fish) all serve out of RAM: they keep everything a user is likely to hit in main memory so that a request never needs to perform I/O other than network. In this situation you're absolutely RAM-constrained.
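The serve-out-of-RAM pattern described above can be sketched as follows. This is a toy illustration, not any of those companies' actual architecture; `InMemoryStore` and its methods are made up for the example:

```python
# Minimal sketch of serving entirely from main memory: the working set
# is bulk-loaded once at startup, so the request path never touches disk
# and the only remaining I/O is the network.

class InMemoryStore:
    def __init__(self, records):
        # One upfront bulk read (e.g. from a DB dump or replication log),
        # after which everything is answered from this dict.
        self._by_key = dict(records)

    def get(self, key):
        # Pure memory lookup -- no disk seek, no rotational latency.
        return self._by_key.get(key)

# Load once at startup, then serve requests out of RAM.
store = InMemoryStore([("user:1", {"name": "alice"}),
                       ("user:2", {"name": "bob"})])
print(store.get("user:1"))  # {'name': 'alice'}
```

The trade-off, of course, is that your working set now has to fit in (and be paid for in) RAM, which is why this approach makes you RAM-constrained.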

I remember trying to optimize some financial software a couple jobs ago and hitting a brick wall: the latency floor was the disk's rotational speed. We ended up buying an iRAM (battery-backed RAM disk) and sticking the DB on it. You can get this a lot cheaper by avoiding the DB and using a RAM-based architecture if you're willing to sacrifice fault-tolerance under power outages (or if you have some other architectural solution for fault-tolerance, like writing to multiple computers).
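The "writing to multiple computers" alternative mentioned above can be sketched like this. `ReplicatedStore` and its in-process "replicas" are purely illustrative stand-ins for RAM on separate machines, not a real replication system:

```python
# Toy sketch of replicated in-memory writes as a substitute for a
# battery-backed RAM disk: a write is considered durable only once it
# has been applied to every replica, so losing one machine to a power
# outage loses no data.

class ReplicatedStore:
    def __init__(self, replica_count=3):
        # Each dict stands in for the RAM of a separate machine.
        self.replicas = [dict() for _ in range(replica_count)]

    def write(self, key, value):
        # Synchronous fan-out to all replicas before acknowledging.
        for replica in self.replicas:
            replica[key] = value

    def read(self, key, failed=()):
        # Any surviving replica can answer the read.
        for i, replica in enumerate(self.replicas):
            if i not in failed:
                return replica.get(key)
        raise RuntimeError("all replicas down")

store = ReplicatedStore()
store.write("balance:42", 1000)
# Even with replica 0 lost to a power outage, the data survives:
print(store.read("balance:42", failed={0}))  # 1000
```

A real system would fan out over the network and handle partial failures during the write itself; the point here is just that redundancy in RAM across machines can replace durability on a single disk.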


It's not that they admit defeat, it's that they admit success. Yes, there are ten companies that are pushing hardware so hard that every bit counts again because if it didn't they'd be in danger of exhausting the earth's supply of elemental silicon. But everyone else can make a good living without going there.



