As a start, be present in multiple EC2 availability zones (not just US-east-2, basically) and regions (this is harder). Cross-region presence needn't be active-active, just a few read-only database slaves and some machines to handle SSL termination ("points of presence") for your customers on the east coast. Perform regular "fire drills" where you actually fail over live traffic and primary databases from one AZ/one region to another.
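To make the "fire drill" part concrete, here is a rough sketch of what the DNS leg of such a drill could look like, using boto3 and Route 53; the hosted zone ID, record name and standby IP are placeholders, and promoting the standby database and draining connections are separate steps not shown:

    # Hypothetical fire-drill helper: repoint the public A record at the standby
    # region's front end via Route 53. Zone ID, record name and IP are made up.
    import boto3

    HOSTED_ZONE_ID = "ZEXAMPLE123456"    # placeholder hosted zone
    RECORD_NAME = "www.example.com."     # placeholder record
    STANDBY_IP = "203.0.113.10"          # documentation-range IP for the standby POP

    def fail_over_dns(target_ip):
        """UPSERT the A record so traffic shifts to the given IP (low TTL assumed)."""
        route53 = boto3.client("route53")
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={
                "Comment": "scheduled failover drill",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "A",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": target_ip}],
                    },
                }],
            },
        )

    if __name__ == "__main__":
        fail_over_dns(STANDBY_IP)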
"Building your own" is also something very few people (including Amazon itself up until fairly late, probably after the IPO) do: you can use a managed hosting provider (very common, usually cheaper than EC2) or lease colo space (which doesn't imply maintaining on-site personnel in the leased space: most colos provide "remote hands"). You can still use EC2 for async processing and offline computation, S3 for blob storage, etc... or even S3 for "points of presence" on different US coasts, Asia/Pacific, Europe, but run databases, et al in a leased colo or a managed hosting provider.
Yes, these options are more expensive than running a few instances in a single EC2 AZ, but that's the price of offering a high-availability SLA to your customers. It's a business decision.
We run gear in multiple physical locations, but both the application and data are stored/backed up in S3. This gives us the redundancy of S3 without the cost and fragility of EC2.
Not to mention that unless you have very unusual traffic patterns (spinning up lots of servers for short periods of time), colo/dedicated servers will usually be vastly cheaper than EC2, especially because with a little bit of thought you can get servers that are a substantially better fit for your use.
E.g. I'm currently about to install a new 2U chassis in one of our racks. It holds 4 independent servers, each with dual 6-core 2.6GHz Intel CPUs, 32GB RAM and an SSD RAID subsystem that easily gives 500MB/sec throughput.
Total leasing cost + the cost of a half rack in that data centre + 100Mbps of bandwidth is ~$2500/month. Oh, and that leaves us with 20U of space for other servers, so every additional one adds $1500/month for the next 7-8 or so of them (when counting some space for switches and PDUs). The amortized cost of putting a 2U chassis with 100Mbps in that data centre is more like $1700/month.
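For anyone who wants to check that amortization, here is the same arithmetic in a few lines of Python, using only the figures quoted above ($2500/month for the first chassis including the half rack and 100Mbps, $1500/month for each additional 2U chassis):

    # Back-of-the-envelope amortization, using the figures from the comment above.
    FIRST_CHASSIS = 2500    # $/month, includes half rack + 100Mbps
    EXTRA_CHASSIS = 1500    # $/month per additional quad-server 2U chassis
    for extra in (7, 8):
        total = FIRST_CHASSIS + extra * EXTRA_CHASSIS
        chassis = 1 + extra
        print(f"{chassis} chassis: ${total}/mo total, ~${total / chassis:.0f}/mo per 2U")
    # -> ~$1600/mo per 2U here; with switch/PDU space and other overheads it lands
    #    closer to the ~$1700 figure quoted above.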
Amazon doesn't have anything remotely comparable in terms of performance. To be charitable to EC2, at the low end we'd be looking at 4 x High-Memory Quadruple Extra Large instances + 4 x EBS volumes + bandwidth, and we'd end up in the $6k region (adding the extra memory to our servers would cost us an extra $100-$200/month in leasing cost, but we don't need it). But the EBS I/O capacity is simply nowhere near what we see from a local high-end RAID setup with high-end SSDs, and disk I/O is usually our limiting factor. More likely we'd be looking at $8k-$10k to get anything comparable through a higher number of smaller instances.
I get that developers like the apparent simplicity of deploying to AWS. But I don't get companies that stick with it for their base load once they grow enough that the cost overhead could easily fund a substantial ops team... Handling spikes or bulk jobs that are needed now and again, sure. As it is, our operations cost in man hours spent, for 20+ chassis across two colos, is ~$120k/year: $10k/month, or ~$500 per chassis. So consider our fully loaded cost per box to be ~$2200/month for a quad-server chassis of the level mentioned above with reasonably full racks. Let's say $2500 again, to be charitable to EC2...
This is with operational support far beyond what Amazon provides, as it includes time from me and other members of staff who know the specifics of our applications and handle backups, configuration, deployment, etc.
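Spelling out that ops-overhead arithmetic with the numbers above ($120k/year across 20+ chassis, ~$1700/month amortized hardware per 2U):

    # Fully loaded cost per chassis, using only the figures quoted in the comment.
    OPS_PER_YEAR = 120_000    # $/year in man hours across both colos
    CHASSIS = 20              # "20+ chassis", using 20 to be conservative
    ops_per_chassis = OPS_PER_YEAR / 12 / CHASSIS    # ~$500/mo
    hardware_per_chassis = 1700                      # amortized 2U figure above
    fully_loaded = ops_per_chassis + hardware_per_chassis
    print(f"~${fully_loaded:.0f}/mo fully loaded per quad-server chassis")
    # vs. the ~$6k-$10k/mo EC2 estimate above for comparable capacity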
I've so far not worked on anything where I could justify the cost of EC2 for production use for base load, and I don't think that'll change anytime soon...
If disk performance is important you can also take a look at the High IO instances, which give you 2x 1TB SSDs, 60GB of RAM and 35 ECUs across 16 virtual cores. At 24x7 for 3 years you end up with ~$656/mo per instance, plus whatever you would need for bandwidth. By the time you fill up an entire rack it still ends up being slightly more expensive than your amortized 2U cost, but you also don't need to scale it up in 2U increments.
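For reference, the arithmetic behind a figure like that ~$656/mo is just the upfront reservation fee amortized over the term plus the 24x7 hourly charge. The upfront and hourly numbers below are illustrative placeholders, not actual AWS prices, chosen only to land near the quoted figure:

    # Generic 3-year reserved-instance arithmetic; placeholder prices, not real ones.
    HOURS_PER_MONTH = 24 * 365 / 12    # ~730 hours

    def effective_monthly(upfront, hourly, term_months=36):
        """Effective $/month for an instance running around the clock."""
        return upfront / term_months + hourly * HOURS_PER_MONTH

    print(f"~${effective_monthly(10_000, 0.52):.0f}/mo")   # ~$657/mo with these placeholders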
Completely agree: building your own is cheaper, gives more control, etc. What's more, you do NOT lose the ability to use the cloud for added reliability: it's pretty cheap to have an EC2 instance standing by that you can fail over to.
If you are very database-heavy and want to replicate that to the cloud in real time, it does get expensive; but if you can tolerate a little downtime while the database gets synced up and the instances spin up, that's cheap too.
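A minimal sketch of that "cold standby" idea, assuming boto3 and a pre-built, stopped instance; the instance ID and region are placeholders, and syncing the database and repointing DNS happen after this:

    # Hypothetical cold-standby spin-up: start a stopped EC2 instance and wait for
    # it to come up. Instance ID and region are placeholders.
    import boto3

    STANDBY_INSTANCE_ID = "i-0123456789abcdef0"   # placeholder standby instance

    def spin_up_standby(instance_id, region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        ec2.start_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
        desc = ec2.describe_instances(InstanceIds=[instance_id])
        return desc["Reservations"][0]["Instances"][0]["PublicIpAddress"]

    if __name__ == "__main__":
        print("standby up at", spin_up_standby(STANDBY_INSTANCE_ID))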
We have SQL Server 2008 boxes with 128GB+ of RAM; we're able to run all of our production databases right out of memory. This would be cost-prohibitive in a virtualized environment such as AWS, Linode, etc.
Did you know that many websites operated BEFORE Amazon Web Services existed? Perhaps going back to 2008 could give us some ideas for alternate deployment methodologies...
For the very early stage, perhaps. Once you're dealing with more than a handful of instances, it is extremely likely you'd save a substantial amount of money moving your base load off EC2.