Ask HN: What's necessary re: server RAM?
17 points by zimbabwe on June 26, 2009 | 23 comments
I'm signing up for Slicehost right now because I want more freedom with my server. Storage and bandwidth I don't care about much, because I don't do very much transfer-heavy work on my own sites, but how do I judge how effective their RAM offerings are? How much memory does a single hit to a site take? How many concurrent threads can be run with, say, 256MB of RAM versus a full GB? I don't recall ever reading about this before (other than "X site goes down thanks to the Digg effect") and I was wondering if anybody had advice about figuring out which would work best.



You may also want to take a look at http://www.linode.com; they give more RAM for the same price and have a slick, albeit not as pretty, control panel.


I use Linode and have had absolutely no problems (they even upgraded our storage and RAM a while back for free). I have a 360MB and a 720MB and both scream for what I use them for (mostly Catalyst).


Also, Linode has a chat that often has a very good level of discourse, even when no members of the staff are present. Sometimes I ask a question there, and sometimes I just listen :)


Or, if you wish to go further down the cheapness route, check out http://www.gandi.net/hosting/proposal/ (they offer a lot of bandwidth plus extras like Gandi Flex) and http://prgmr.com/xen/ (a lot of RAM), both of which seem solid.

Going further down the cheapness route, you arrive at http://www.lowendbox.com/, which specifically lists providers that offer VPSes for under $7, plus there are how-tos like this:

"Yes, You Can Run 18 Static Sites on a 64MB Link-1 VPS" http://www.lowendbox.com/blog/yes-you-can-run-18-static-site...


Do you have experience with PRGMR? It looks great, but I don't know how trustworthy it is.


Also, http://www.chunkhost.com is, AFAIK, still running a beta deal where you can get a 512MB chunk for $15/mo, which is pretty sick. Hell, during the beta period it costs nothing.

Hello, permanent screen session and homepage!


For some reason, they only allow beta testers in the U.S. (verified by credit card).

Come to think of it, I'm not sure I want to trust my credit card number to a new and unknown company on the internet (they only accept payment directly by card).


Have you tried it out, out of curiosity?


I am running a Rails app with 100,000 page views/month on an 86MB slice (not at Slicehost, though) with <2sec response times. The pages are pretty heavy dynamic pages.


Well, 100k/month is only about one view every 26 seconds, so while it sounds impressive in aggregate, I wouldn't like to be around if you ever get Dugg. But I'm impressed you can run anything at all in 86MB, really. What is that, a single Mongrel instance, nginx, and mysqld?
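
Back-of-envelope, assuming a 30-day month (just a sketch; the interval is an average, not a peak rate):

    # rough request-rate math for 100k page views/month (assumes a 30-day month)
    views_per_month = 100000
    seconds_per_month = 30 * 24 * 60 * 60               # 2,592,000
    print(seconds_per_month / float(views_per_month))   # ~25.9 s between views on average
    # a front-page spike compresses a large share of a day's traffic into an hour
    # or two, so the peak rate, not the monthly average, is what hurts a small slice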


Let's see. I have 3 slices, 2 of which are publicly accessible.

512 MB: Apache2 / PHP, serves a dozen websites last time I checked, most notably a forum that gets a few thousand hits a day and my blog, which, when it gets spiky, goes up to X0,000 hits in a day.

256 MB (until recently): Bingo Card Creator, which is Nginx => 2x Mongrel.

I bumped the 256 up to 512 (a process that took me about 5 minutes and a restart) when I started developing the new version, so that I could fit a staging server on the same box. Currently it has about 6 Rails instances running on it at any given time (2x Mongrels for the live site, 2x for staging, 2x for DelayedJob or consoles). After I release the new version I'm probably going to put the staging server on another box and reallocate the saved memory to more instances for the production site.
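
If you want to sanity-check how that fits in 512MB, here's a rough budget (the per-process sizes are ballpark guesses, not measurements of my actual processes):

    # hypothetical memory budget for a 512MB slice (per-process sizes are guesses)
    rails_instances = 6      # 2x live Mongrels, 2x staging, 2x DelayedJob/consoles
    mb_per_instance = 70     # a Rails Mongrel often sits somewhere around 60-90MB RSS
    other_mb = 80            # nginx, mysqld, sshd, cron, OS overhead, etc.
    print(rails_instances * mb_per_instance + other_mb)   # ~500MB -- tight on 512MB
    # which is why moving staging off the box later frees room for more
    # production instances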


Do they give you private IPs to connect non-publicly-accessible nodes? (Linode does.) Otherwise, you'd be using up your traffic limit on this.

How much traffic do your slices use?


I have never come anywhere close to my limit. I think the highest I've ever gotten was a bit over 100 GB a month from the sites, with another 50 GB through my (private) web proxy (I have a bad Amazon TV episodes habit). My limit is 500 GB at the moment, so I'm not too worried.


They do give you private IPs so that your "slices" can talk to each other; this traffic doesn't count against your bandwidth quota.

http://www.slicehost.com/questions/#private-ips


That's exactly the sort of scenario I was wondering about. Thanks a lot for your response! (Also, out of curiosity, what's your app?)


What's your setup? nginx/mongrel?


That's pretty crazy. I am running a similar-sized app on a 2GB slice, although it can probably take quite a bit more (10x the views).


It completely depends on your application/stack. Something with static content and simple dynamic pages could scream at 256MB, or even less. Something with RAM-hungry frameworks and heavier code might run best with 1GB+.

A nice thing with Slicehost is that you can start with the smallest slice and then, if it turns out you need more RAM, restart the same image on a bigger one.

You will know if you need more RAM by watching your busy server with 'top' over time, and among other things, ensuring that it never begins swapping.
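
A minimal sketch of that check, assuming a Linux slice with /proc/meminfo (watching top or free over time tells you the same thing):

    # report swap in use by reading /proc/meminfo (Linux-only sketch)
    def swap_used_mb():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0])   # values are reported in kB
        return (info["SwapTotal"] - info["SwapFree"]) / 1024.0

    print("swap in use: %.1f MB" % swap_used_mb())
    # if this keeps climbing under normal load, the slice needs more RAM
    # (or the app needs fewer/smaller processes)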

Almost any app can benefit from more RAM, especially if you use the various levels of caching effectively (from the automatic Linux disk cache, through framework-level fragment or page caching, to HTTP-server/proxy caching).

So when you're sure things work OK, but then want to make them snappy, throwing RAM at the problem (with a little engineering to use it wisely) is usually a good idea.


Okay, thanks! So you have experience with Slicehost? Moving up to larger slices is a painless task?


Slicehost makes everything as painless as could be asked for. I had a DNS problem on a new slice yesterday; their folks noticed it within 3 minutes.

My understanding is that Xen is pretty tricky to admin, but you can set up your own Ubuntu or CentOS VM under VMware, give it X MB of RAM, throw ab, JMeter, or trample at your app, and see what happens.
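
If you just want a first rough number without installing anything, here's a crude stand-in for ab (a sketch only; ab/JMeter give far better statistics, and the URL below is a placeholder for your test VM):

    # crude concurrent load generator -- a stand-in for ab/jmeter, not a replacement
    import time
    import urllib.request
    from threading import Thread

    URL = "http://localhost:3000/"   # placeholder: point this at your test VM
    THREADS = 10
    REQUESTS_PER_THREAD = 50

    def worker(times):
        for _ in range(REQUESTS_PER_THREAD):
            start = time.time()
            urllib.request.urlopen(URL).read()
            times.append(time.time() - start)   # list.append is thread-safe in CPython

    times = []
    threads = [Thread(target=worker, args=(times,)) for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("%d requests, avg %.3f s" % (len(times), sum(times) / len(times)))
    # watch top/free on the VM while this runs to see where the memory goes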


About half an hour of downtime, in my experience. No work needed.


It was a matter of minutes for me both times I went from a 256 to 512 slice.


Yes.

My one bump up was a tiny, almost vanilla image and took even less time.

And, if continuity of service is important, it's possible to launch the larger clone first, then do a near-instantaneous cutover in various ways (esp. if your site can be read-only for a short window, and you can forward hits from the old machine if DNS takes a while to update).



