EC2 micro instances throttle down very heavily if you use CPU for more than a few seconds at a time. Other instances allow you to burn 100% CPU all day, but not the micros.
Very unfair to benchmark a t1.micro instance; they're only meant to give very short bursts of power and don't come with any dedicated hardware (so quoting the hardware specs is pointless).
Do a new benchmark comparison vs. an m1.small and it'd be interesting. I bet the small wins by an absolute mile.
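If anyone wants to see the throttling for themselves before running a full benchmark, a minimal sketch like this (the chunk size and 60-second run length are arbitrary, picked just to make the effect visible) should show the per-chunk time climbing on a micro once the CPU burst runs out:

    # Time the same fixed chunk of work repeatedly; on a t1.micro the
    # per-chunk time should climb sharply after the first several seconds.
    import time

    def burn(n=5_000_000):
        total = 0
        for i in range(n):
            total += i * i  # fixed amount of arithmetic work per chunk
        return total

    start = time.time()
    while time.time() - start < 60:  # run for a minute
        t0 = time.time()
        burn()
        print("%5.1fs elapsed, chunk took %.2fs"
              % (time.time() - start, time.time() - t0))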
While true, it generally means buying bigger hosting, moving to a new provider, or some other significantly manual step. If they haven't done it before, it could be hours before it's completed, if they're even aware of the problem yet.
"Can be" does not mean "is". Unless it is built to do so, why expect it to? You have no idea what they're running, nor on what kind of machine or bandwidth allotment, why are you making these huge assumptions and insulting people who don't meet them?
Why? Maybe they optimized for their own productivity (putting words online) instead of (buying, configuring, and) over-building their server. What if they're not a sysadmin? What if they hate sysadmin work? Seriously, it's a WordPress site.
There is a bit of overhead since you are in a virtualized environment on EC2, plus the "micro" instance does not really give you a dedicated processor core - you're sharing CPU time with other instances.
We just finished doing some video-encoding testing on a few different platforms, and EC2 (along with EC2-based offerings) is considerably slower and more expensive. Despite costing 10x more than a 3930K box, a cc2.8xlarge instance was only 1.75x faster.
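Back-of-the-envelope math on those two figures (which are the only inputs here) makes the gap concrete:

    # Price/performance from the numbers above: ~10x the cost for
    # ~1.75x the speed means each unit of encoding work costs ~5.7x more.
    cost_ratio = 10.0    # cc2.8xlarge price relative to the 3930K box
    speed_ratio = 1.75   # cc2.8xlarge throughput relative to the 3930K
    print("EC2 costs ~%.1fx more per unit of work" % (cost_ratio / speed_ratio))
    # -> EC2 costs ~5.7x more per unit of work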
I think EC2 is almost always going to be more expensive than bare metal servers, unless you are taking advantage of the ability to pay by the hour, or leveraging flexible pricing. Reserved instances will save quite a bit of money, but spot pricing can do even better. Check out this post from a few months ago from someone who is using spot instances for core services and estimates 70% savings vs. on-demand pricing.
How about comparing EC2 to a Virtual Private Server? That's a bit more of an apples-to-apples comparison.
Serverbear notes that Amazon's 7.5GB Large instances (which cost $180+/month) benchmark at ~650 on UnixBench... with 30 MB/s of disk I/O. In comparison, an 8GB VM from Digital Ocean only costs $80/month. I don't have the numbers for the 8GB VM, but the smaller $20/month 2GB instance has a UnixBench score of ~1900 with over 300 MB/s I/O from its solid-state drive.
(I presume the larger instances get more CPU power / priority as the VM sizes scale up.)
That is half the cost for triple the CPU performance and 10x the disk performance. Other, smaller providers, such as RamNode, offer extremely fast I/O with RAID 10 solid-state drives in their Virtual Private Servers (500+ MB/s).
Amazon vs Digital Ocean
serverbear.com/239-large-amazon-web-services
serverbear.com/1990-2gb-ssd--2-cpu-digitalocean
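Turning the quoted numbers into cost per UnixBench point (crude, since UnixBench is a blunt instrument, but it's the data we have):

    # Cost per UnixBench point from the Serverbear figures quoted above.
    boxes = {
        "AWS 7.5GB Large": {"price": 180, "unixbench": 650,  "disk_mb_s": 30},
        "DO 2GB":          {"price": 20,  "unixbench": 1900, "disk_mb_s": 300},
    }
    for name, b in boxes.items():
        print("%s: $%.3f per point, %d MB/s disk"
              % (name, b["price"] / b["unixbench"], b["disk_mb_s"]))
    # AWS 7.5GB Large: $0.277 per point, 30 MB/s disk
    # DO 2GB:          $0.011 per point, 300 MB/s disk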
To be fair though, Amazon's CPUs are more consistent... consistently bad, but consistent. VPS CPU and I/O performance is affected by neighboring VMs, while Amazon seems to have removed that uncertainty. Nonetheless, in practice, you will always get better-performing CPU and I/O from other providers.
And if we compare both to bare-metal servers, obviously bare metal wins on price/performance, but it's harder to maintain, so it's hard to do an apples-to-apples comparison. But Digital Ocean VMs can be spun up/down just like Amazon instances... although Amazon has more load balancers and other infrastructure. (Nothing is stopping you from setting up HAProxy on a front-end VM to load-balance a cluster of Digital Ocean VMs; see the sketch below. Even then, other VPS providers like Linode offer load balancers as part of their infrastructure now.)
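The HAProxy setup really is that simple. A minimal haproxy.cfg sketch (the backend IPs are placeholders; health checks and timeouts would need tuning for real traffic):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend app_servers

    backend app_servers
        # round-robin across two hypothetical Digital Ocean droplets
        balance roundrobin
        server app1 203.0.113.10:80 check
        server app2 203.0.113.11:80 check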
It's hard for me to see the case for Amazon's cloud offerings. Their price/performance just isn't competitive. At all ends of the spectrum, low end to high end, VPS providers such as Digital Ocean offer more vertical scalability as well as cheaper prices than all of Amazon's offerings.
Unless you need some specialized VM from Amazon (i.e. GPU compute), or are locked into their vendor-specific API (oh, I feel sorry for you), there is no reason to use Amazon's services IMO.
In the next few months we will be migrating a number of servers to EC2. The only reason is to take advantage of latency-based routing; we really, really need to reduce latency as far as possible.
Anyway, there's your reason.
The other reason is that big businesses just don't care. Margins on software are high enough that the extra cost of EC2 over another provider is outweighed by the benefit of existing infrastructure, developer experience, and the risk limitation of choosing AWS.
Fair enough. I consider that part of the "specialty" kind of service, however. I still wouldn't touch their EC2 compute stuff though, even if I'd use Amazon's DNS services. I know that you can use Amazon's CDN with other providers' VPSes or your own dedicated boxes somewhere.
And certainly, for the small 2- or 3-server clusters that a small startup uses, Amazon's prices are significantly higher than other providers'.
Anyway, I'd have to check out the latency-based routing thing and how it differs from the typical GeoDNS or anycast DNS that is offered by a number of providers. My bet is that it's just Amazon marketing speak for GeoDNS or anycast technology.
EDIT: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Cre... As far as I can tell, Amazon's "latency-based routing" is just GeoDNS with a much better marketing name. It's all about reducing latency, but at the end of the day, it is no different from GeoDNS.
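For reference, a latency-based record in Route 53 is just a normal record set plus a Region and a SetIdentifier; you create one per region and Route 53 answers with whichever has the lowest measured latency to the resolver. A sketch of the structure, as you'd pass it to something like boto's change_resource_record_sets (the domain and IP are placeholders):

    # Hypothetical latency-based record set; create one of these per region.
    record_change = {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.example.com",     # placeholder domain
            "Type": "A",
            "SetIdentifier": "us-east-1",  # must be unique per record set
            "Region": "us-east-1",         # this is what makes it latency-based
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
        },
    }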
That said, Route53 does seem to be a good DNS service from Amazon. $0.75 per million anycast queries per month + $0.50 per zone is a good price methinks.
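Quick sanity check on what that pricing works out to (the query volume and zone count are made up):

    # Hypothetical monthly Route 53 bill at the quoted rates.
    queries_millions = 10   # 10M queries/month (made-up volume)
    zones = 3               # made-up zone count
    print("$%.2f/month" % (0.75 * queries_millions + 0.50 * zones))
    # -> $9.00/month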
So while I'd never use a compute instance at Amazon, I'd definitely keep their Route53 service on my list. Looks pretty nice from what I can tell.
As noted above, it is hard to do a "fair" comparison between Amazon and the others because Amazon offers a bit more consistency. Linode and Digital Ocean benchmarks are all over the place depending on how much CPU or I/O their neighbors are using.
Another thing to consider is the number of mistakes a company has made. While Amazon and Linode have been around for years... Amazon had the Virginia fiasco this past year (the Netflix outage), and Linode had the bitcoin hack. Digital Ocean has only been around for a few months, so their security / reliability is basically untested.
With those caveats in mind, it is then possible to look at the inherently flawed benchmarks and work off of them. Serverbear is a good resource for comparing those things.
A Raspberry Pi is considered by many to be a minimum viable computer of sorts, and the bottom of what one would consider acceptable performance.
Therefore, seeing how Amazon compares to that is an interesting exercise. I was personally floored by how poor the performance of some EC2 instances is for some types of tasks (Java/Clojure-related things among them).
I quickly decided Amazon was not able to serve my needs within the price range I was willing to pay.
Generally you never want to use JFFS2 or other flash file systems on a flash device that has a separate controller. This is because the filesystem's own wear leveling will usually confuse the hell out of the controller, and it can sometimes slow the device down or shorten its life because the controller won't level things properly. Whether ext4 is better than, say, XFS for this, I have no clue though.
FWIW, CascadeLink is a high-speed apartment-building ISP in the Seattle area (or at least, that's why I recognize the name). Some of my friends get 30/30 for $40/mo.
http://gregsramblings.com/2011/02/07/amazon-ec2-micro-instan...