An EC2 XL instance with 15 GB of memory runs roughly $500 a month.
A 90 GB database fits on a single $150 SSD. You can get 1 TB of SSD storage for $3,000.
"One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation. " [1]
You can get a 6-core AMD Phenom II that runs at 3.2 GHz for $180.
16 GB of RAM will set you back $400.
From the sounds of what they went through, spending $10k on decent hardware might have saved them a man-year or two of developer time (rough math below).
Granted that's not nearly as fun or sexy as trying to use MongoDB, Cassandra or HBase in production. And, saying that you're going to use actual hardware is soooo old school.
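For what it's worth, here's the back-of-envelope version in Python. The EC2, CPU, RAM, and SSD figures are the ones quoted above; the $600 for motherboard, PSU, case, and a spare disk is my own guess, and colo/power/bandwidth are ignored:

    # Rough 2010-era figures from this thread; the "extras" line and the
    # omission of colo/power costs are my own simplifying assumptions.
    ec2_xl_per_month = 500.0   # EC2 XL with ~15 GB RAM

    cpu = 180.0      # 6-core 3.2 GHz Phenom II
    ram = 400.0      # 16 GB
    ssd = 150.0      # 90 GB-class SSD
    extras = 600.0   # motherboard, PSU, case, spare disk (assumed)

    hardware_total = cpu + ram + ssd + extras
    months = 12

    print("EC2 XL for %d months: $%.0f" % (months, ec2_xl_per_month * months))
    print("One owned box, up front: $%.0f" % hardware_total)

Even if you double the parts list for redundancy, against those figures the box pays for itself in well under a year compared with a single XL instance.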
You are very right; I should have pushed us to move to physical hardware sooner. Definitely one of the things I would have done differently, in hindsight.
The larger EC2 instances (especially for always-on systems like primary databases) get quite a bit cheaper with 1-year reserved instances, so if you are on EC2, be sure to buy those as soon as you're at a somewhat stable point.
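A quick way to sanity-check when a reservation wins is to amortize the upfront fee over however long you expect to keep the instance. The rates below are made-up placeholders, not real AWS pricing; plug in the current numbers:

    # All three rates are hypothetical placeholders, not actual AWS pricing.
    ON_DEMAND_HOURLY = 0.68     # assumed on-demand $/hr
    RESERVED_UPFRONT = 1820.0   # assumed 1-year reservation fee
    RESERVED_HOURLY  = 0.24     # assumed $/hr with the reservation

    HOURS_PER_MONTH = 730  # running 24/7

    def avg_monthly_cost(months_kept, upfront, hourly):
        # Amortize the upfront fee over how long you actually keep the instance.
        return upfront / months_kept + hourly * HOURS_PER_MONTH

    on_demand = ON_DEMAND_HOURLY * HOURS_PER_MONTH
    for months in (3, 6, 12):
        reserved = avg_monthly_cost(months, RESERVED_UPFRONT, RESERVED_HOURLY)
        print("%2d months: on-demand $%.0f/mo, reserved $%.0f/mo"
              % (months, on_demand, reserved))

For an always-on box you keep most of the year the reservation wins comfortably; for something you might kill after a couple of months it may not.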
I think the introduction of the SSD is pushing site owners to "leave the cloud". I haven't seen any cloud host that lets you choose SSD-backed disks yet, probably because demand is low and the cost is high. The throughput of a single SATA 3 SSD blows away any RAID setup you can put together on EC2 (and it's easy enough to measure; see the sketch below).
Also, I don't think you even need $10k in hardware. It sounds like you could do just fine with a $3k 1U server; $3k can still get you 16 cores with 32 GB of memory and SSD drives.
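On the throughput point, a minimal sequential-read benchmark in Python looks something like this. The file path is a placeholder; use a file bigger than RAM (or drop the page cache first), otherwise caching will flatter the numbers:

    import time

    TEST_FILE = "/mnt/ssd/testfile"   # placeholder: a multi-GB file on the disk under test
    CHUNK = 8 * 1024 * 1024           # 8 MB reads

    def sequential_read_mb_per_s(path):
        # One pass of sequential reads over the whole file; returns MB/s.
        total = 0
        start = time.time()
        with open(path, "rb", buffering=0) as f:
            while True:
                block = f.read(CHUNK)
                if not block:
                    break
                total += len(block)
        return (total / (1024.0 * 1024.0)) / (time.time() - start)

    print("%.1f MB/s" % sequential_read_mb_per_s(TEST_FILE))

Run it once against a local SSD and once against your EBS RAID volume and compare.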
SSDs make a stunning difference in performance for large, complex joins that won't fit in RAM. However, I imagine that in 5-10 years' time 'the cloud' will use them too, even if only as part of a larger storage pool.
Ehhh, those SLC drives are still mighty pricey; if you want a decent number of them in a RAID, you could probably eat up nearly $10k. And the nice Xeons are expensive.
Even if they didn't want to buy their own hardware (colocation costs money too), they could have used decent dedicated servers, which are usually more reliable, faster, and cheaper than EC2. Most startups also grow more slowly than the time it takes to rent and tune a new dedicated machine for increased traffic. I don't believe many people are firing up hundreds of instances one day and killing them the next, which is what would really justify using EC2. I think it's just a habit: people start with EC2 because they think they'll need ultra-scalability tomorrow, and then there's lock-in - the cost of switching to another infrastructure is just too big. In the meantime the ugliness and performance problems of EBS start to bite them, but by then it's too late.
ref: [1] http://aws.amazon.com/ec2/faqs/#What_is_an_EC2_Compute_Unit_...