Ask HN: AWS or dedicated server?
78 points by bkrausz on Jan 4, 2009 | 40 comments
So there seem to be some major trade-offs between AWS and dedicated servers, the most obvious being that AWS seems much more difficult to configure but easier and cheaper to scale.

Considering I only have experience with setting up dedicated servers, I was wondering if someone with experience setting both up could comment on whether the difficulties of using AWS outweigh the benefits.




I've been setting up dedicated machines for years and looked into switching to AWS for TicketStumbler. I determined that it was actually considerably more expensive to obtain the same amount of resources (i.e. cpu/ram) on AWS because the pricing scheme doesn't lend itself well to having many always-on images.

In the end I chose to run Xen on top of dedicated hardware, which has essentially bought us the best of both worlds: simple scaling and low costs. Granted, it would probably take me a couple hours to start up a dozen more VMs (I'd need to requisition new hardware) as opposed to a few minutes and S3 is still cheap for mass storage, but neither of these points had any relevance to our situation.

As mentioned by others, it all comes down to what your project needs.


This seems like a very unusual configuration. Can you explain why you chose it? If you control the hardware and software, how does the VM layer help you scale better? Why not just install whatever packages you need on whatever servers you have, and skip the added complexity and performance cost of virtualization? Is there an assumption that at some point you'll move part of your operations to AWS or a similar service? Or is some of your software buggy enough that it needs to be contained within a VM?

I'm not trying to judge, this just seems like a weird choice and I'd like to know what motivated it.


Sure (I really should do a real write-up on this...):

So the main purposes are simple horizontal scaling and efficient use of hardware. Virtualization makes horizontal scaling simple because it's just a matter of cloning a particular machine (or machines). It makes efficient use of hardware because there's no need to have a dozen physical machines for a dozen different purposes, unless each of them uses a full machine's worth of resources.

Let's say I want to add a new (non-static) Web server. Well, Apache is on its own VM; I can clone and migrate it to a new physical box (or just have two on the same hardware). I could also simply add more resources dynamically. If I need to scale the database, same deal. The biggest win here is that when I scale one of those, nothing else comes with it. There's no DNS server on the Web box. There is no NFS server on the database box.

Before virtualization, you basically had two choices: Throw a whole bunch of packages on a single box or spread it out over different physical machines. The first completely ruins encapsulation, thus adding unnecessary complexity, while the second is really uneconomical unless you're using all those resources out of the gate.

Then, what happens when you need to scale? All that crap needs to be set up again! Hopefully we were smart about it and made it as simple as possible, but I have never made a system as scalable as my current one-click-cloning mechanism.
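
To give a flavor of it, the cloning step itself can be a script about this size (just a rough sketch of the idea, not the exact tooling; the file-backed disk images, paths, and guest names are placeholder assumptions, and an LVM-snapshot-based setup would look a little different):

    import shutil, subprocess

    def clone_vm(source="web1", clone="web2"):
        # Copy the guest's disk image (assumes simple file-backed storage).
        shutil.copyfile("/srv/xen/%s.img" % source, "/srv/xen/%s.img" % clone)
        # Derive a Xen config for the clone from the original's.
        cfg = open("/etc/xen/%s.cfg" % source).read().replace(source, clone)
        open("/etc/xen/%s.cfg" % clone, "w").write(cfg)
        # Boot the new guest with the xm toolstack.
        subprocess.check_call(["xm", "create", "/etc/xen/%s.cfg" % clone])

    clone_vm()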

Right now we only have a single physical machine (16 cores, 32gb ram, iSCSI); using your recommendation of "whatever packages on whatever servers" I would end up with this clusterfuck of a server that does a dozen different things at once. What I have now is exactly that, except encapsulated into VMs with their own resources and their own purpose.

Yes, virtualization has a slight performance cost (though bare-metal virts like Xen have a pretty marginal one), but I'll gladly accept it for the massively easier scaling and efficient use of hardware. And, yes, if one VM happens to go insane for some reason, it doesn't affect anything else. For instance, on one of our older servers, the MySQL VM's "drive" had a tendency to become corrupt randomly. I never really did figure out why, but I imagine it was because I make fun of MySQL all the time, but I digress -- the point is, it never affected the rest of the machine and, since MySQL was used for things of little importance (Wordpress), it didn't even take down the website when it happened.

That was a pretty rambling explanation, but hopefully it covers your questions. If not, let me know.


Thanks! That's a pretty thorough reply, and I have to agree that it's hard to imagine an easier way to deploy new server instances than copying a VM. I think you can get pretty darn close with a package system like DEB or RPM, but in terms of being able to bring up and test an environment on a development machine, and then be assured the same environment will work exactly the same in production, this is a reassuring approach. I think it makes a lot of sense if you are expecting to require rapid scaling at some point in the future.

I'm a little unsure why you wouldn't just use EC2 in that case, although it might make sense to use a dedicated box for as long as you can and supplement that with EC2 instances when appropriate. Obviously you can get a lot of EC2-instance-equivalents out of a 16-core 32gb box, and your approach would likely be easy to integrate with EC2 when the time comes.

I'm not sure what your load trends look like, but if you're not likely to surpass what your dedicated server can handle within the next few months, this would seem like overengineering. I know you said it "ruins encapsulation" and makes a "clusterfuck of a server" but I don't quite see it. If this is a web app running on a modern gnu/linux distro, with maybe some DNS and cron jobs and email and blogs, we're talking about a very common setup that's running successfully on zillions of boxes. On the other hand, two or three times in five years I have had buggy Apache modules bring down a machine by leaking processes, and it would have been nice if that didn't take e.g. email down with it.

I think my biggest reservation about your approach is that it's weird. It's basically hand-made custom EC2, right? Is there an open source project somewhere to package up the tools to do this on one's own servers? (If not, maybe you should start one... :)


"It's basically hand-made custom EC2, right? Is there an open source project somewhere to package up the tools to do this on one's own servers?"

There are two that I know of that support the EC2 wire protocol: Nimbus and Eucalyptus.

I am the primary developer of Nimbus. We had an EC2-like system released before EC2 existed, only later adapting to their protocol because our users wanted to use the EC2 client tools.


Yeah, we should surpass it. As you said though, it's also very much about simplification (following the initial learning, anyway). EC2 may have simplified a little more eventually, but it was also even more expensive and difficult for me to learn up front.

As for a custom EC2, it's really the other way around; EC2 is a custom Xen setup. Since I learned virtualization before AWS and other cloud services existed, EC2 is the strange one to me. I'd only ever known "DIY EC2", so to speak.


"EC2 is the strange one to me. I'd only ever known "DYI EC2", so to speak."

Those EC2-like systems do have their place. There is a lot more to do and think about when you are allowing others to run VMs on your infrastructure. That is often not the case and not your situation either, it sounds like.


This is a very serious question as you clearly know what you're talking about from experience: how do you find it cheaper to run dedicated hardware? The reason I ask is because I've priced out 4-core servers with 16GB of RAM at SoftLayer and ThePlanet and they come out to around $700/mo with 2 drives and RAID 1. Amazon charges $750 for an Extra-Large instance (15GB RAM).

There is the potential that you don't want to delve into what you're paying for stuff too much, but it just seems like AWS is charging similar rates to ThePlanet and SoftLayer which are the two dedicated hosts that seem to have the most credibility in the community. Even if you were provisioning your own 1.7GB instances on a larger dedicated box, you would still only fit about 8 or 9 of them in 16GB of RAM (leaving room for Xen and such) which would make it the same price as AWS. The only thing I can see is that the included bandwidth could save some money. Maybe I'm not good at looking for dedicated server deals.


We've been running these numbers ourselves lately as well.

We find AWS much more expensive.

For instance, we bought a Dell 2950 / 2x Quad Core / 12 GB RAM / 4x500GB RAID 5 w/hot spare for ~$4500. We have it in a colo where it costs about $150 a month for the space + bandwidth.

This is about equivalent to the $750/mo extra-large instance. There will also be additional AWS fees for transfer, storage, etc, but we'll go with $750 for simplicity.

That's a $600/mo premium, or $7200 a year. So I pay for the hardware within 8 months and after that it's $600 a month savings.
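
As a sanity check, here's that arithmetic spelled out (using only the figures quoted above and ignoring transfer/storage on both sides):

    hardware_upfront = 4500.0   # Dell 2950, 2x quad core, 12GB RAM, RAID 5
    colo_per_month = 150.0      # colo space + bandwidth
    aws_per_month = 750.0       # extra-large instance

    monthly_savings = aws_per_month - colo_per_month      # 600.0
    payback_months = hardware_upfront / monthly_savings   # 7.5

    print("Payback in %.1f months, then ~$%.0f/mo saved"
          % (payback_months, monthly_savings))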

There is a lot of value in being able to provision extra servers quickly, use CloudFront, etc., but it comes at a high price, IMO.


Well, there are a few considerations here (all of this is in reference to Softlayer, whom we use):

- Depending on your storage requirements, 2 drives + RAID 1 (which is more of a convenience than anything and I almost never recommend getting) is often more expensive than an iSCSI LUN, which is far superior and offers zero-setup cross-country replication and snapshots (if we're going to pretend that RAID 1 is some kind of backup solution).

- When ordering, if you choose the lowest clock speed CPUs, you're practically guaranteed to get the highest-rated (more expensive) ones for free. This is either due to scarcity of low-end CPUs or because Softlayer loves me. I have ordered numerous boxes from them and this has always been the case.

- They always have "specials" which are usually pretty ridiculous. For instance, 16 of the 32GB of RAM we have was free, as in beer. Right now (and most of the time) they have free double RAM and HDD. Kiss the cost of one of those RAID drives away.

- There are non-monetary considerations, such as support. Softlayer has without a doubt the best technical support I have ever been provided, and I've been through countless hosts in my tenure. We're talking about an Unmanaged host that has better techs than any Managed host I've come across. Not to mention conveniences such as automated OS reloads, private network, inter-DC OC3 backbones, VPN, secure backups, optional CDN, etc. (AWS has most of these afaik, minus VPN, but this goes to equivalence)

- Your 4-core server, if you don't make use of deals, would likely be equivalent to AWS. Once you start getting into high core-counts, that changes fast. As a huge proponent of parallelization, many of the processes run for TicketStumbler make use of multiple CPUs; this means a lot of what we do is CPU-bound, thus the need for higher core counts.

- 2TB of bandwidth is included; I also have no idea how this affects the cost overall. Edit: I added a couple TB of transfer to the AWS calculator, plus 80GB of storage: $854.10 per Extra Large. The difference in cost between this and our machine now amounts to nearly nothing.

So, at the end of the day, the hardware we have is nearly identical in cost (within $100, IIRC) to the Extra Large Linux instance you reported, while having twice the number of CPU cores and twice the amount of RAM. We're also afforded all the other luxuries that come with the myriad services and support the conventional dedicated host provides.

The dedicated hosting environment also allows me to setup and administer the hardware in the method I described in my previous reply; i.e., I don't have to setup a single Extra Large Instance (well, technically two) to handle a dozen different jobs.

Hope this helps! Let me know if you have any other questions.


Hosting on EC2 is stupid; it's way more expensive than Linode or Slicehost. I accidentally left one of their extra-large instances running for a month, and it cost almost $600. You can get real metal for those prices.

That said, if you have no problems setting up dedicated servers, then you won't have any problems with EC2. Use RightScale's free interface to manage instances; the rest is what you already know.


Hosting on EC2 might not be cost effective when you only need one server, but once you start needing multiple servers the advantages quickly add up.

I'm currently a customer of Amazon, Softlayer, Serverbeach, and a few cheap VPS providers elsewhere. I find that each has its advantages.

Need cheap bandwidth? Nothing beats Serverbeach (YouTube ran their CDN on Serverbeach boxes until they went to Google). If you need a small number of dedicated boxes, Softlayer's support is worth the extra money (a Softlayer box usually runs me about $50 more than the Serverbeach equivalent -- and Softlayer includes the private network in that cost).

EC2 is great if you are going to be scaling up/down or have interesting synergies with S3. (filesystem snapshotting, free bandwidth, ...)

Small VPSes are great when you are building to prove an idea - a couple bucks a month while you have no traffic.

I also find using EC2 for one time tasks preferable. Spin up 10 instances to do a massive amount of computation or to do load testing.
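
If it helps, that pattern is only a few lines with the boto library (the AMI ID, key name, and instance type below are placeholders, not anything specific):

    import boto

    ec2 = boto.connect_ec2()  # picks up AWS credentials from the environment

    # Spin up 10 identical workers for the one-off job.
    reservation = ec2.run_instances("ami-12345678", min_count=10, max_count=10,
                                    key_name="batch-key", instance_type="c1.medium")

    # ... dispatch the computation / load test to the new instances ...

    # Shut them all down when the job finishes so the hourly billing stops.
    ec2.terminate_instances([i.id for i in reservation.instances])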


Do you have any experience with 10tb.com (or heard anything about them)? Their listed bandwidth prices are better than Serverbeach's.


AWS, no hesitation.

AWS isn't that hard to configure. ElasticFox puts a nice GUI to it and while it will take a short while to get used to the AWS way of doing things, you're better off.

With AWS, you get nice spray-files-everywhere storage in S3, EC2 provides lots of RAM and CPU muscle, EBS provides RAID-level reliable persistent storage for EC2 that can be backed up to multiple data centers with a single API call, and CloudFront even gives you the chance to have static files served from 12 different locations around the world, making your latency very small. If you need more servers, no problem: just wait a few minutes for them to boot. If you need more bandwidth, it's automatic. If you need more storage, S3 is infinite and EBS can always give you more (you can even stripe the drives so that you could have terabyte after terabyte of storage as a single drive).

Dedicated servers have little upside. You're relying on physical hardware in a very acute fashion. While AWS runs on real hardware, there's an abstraction level which helps a lot. Let's say you're small and want a single box. That box fails, you call your host and get a new one in a couple hours, you restore from backups for another couple hours maybe and you're back online. Of course, many often don't test their disaster recovery scenarios that well and are often met with little problems. With AWS, you simply boot another machine off that image and you're good. Worst case, your EBS gets trashed and you say, "hey, S3, rebuild that EBS drive". Easy by comparison.
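
To make that concrete, the "boot another machine and rebuild the drive" path is roughly this with the boto library (the AMI, snapshot ID, volume size, and device name are placeholder assumptions):

    import boto

    ec2 = boto.connect_ec2()

    # Boot a replacement machine off the saved image.
    reservation = ec2.run_instances("ami-12345678", instance_type="m1.xlarge")
    instance = reservation.instances[0]

    # Recreate the EBS volume from its S3-backed snapshot and attach it.
    volume = ec2.create_volume(size=80, zone=instance.placement,
                               snapshot="snap-12345678")
    volume.attach(instance.id, "/dev/sdf")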

Real boxes are a pain. You have to deal with RAID, backups, how fast your company can provision new boxes, bleh! AWS (or even Slicehost and Linode) isolate you from a lot of that mess. There's a reason virtualization is the hot new topic.

AWS isn't that hard to use. It's definitely different, but it makes so many other things so much less painful. If you want some of the benefits of AWS with a "simple as dedicated" feel, try Slicehost. You can get instances with as much as 15.5GB of RAM and they just give you the instance with your choice of Linux on it. From there, you can install Apache, MySQL, and whatever else you need. And you get benefits like cheap and easy backups - they just store an image of the machine. Then, if you need more capacity, you can boot one of those images as a new instance, and now you have another server. If one of their servers fails, they can easily migrate your instance to another box. RAID10 is already set up. Easy.

If you're worried about AWS' management being a little different, don't worry too much. It's not that bad once you start using it - just a tad hard to imagine without trying it. If you're still worried, Slicehost will give you instances that will work like you're used to dedicated hosting working, but with many of the advantages of AWS.


I find EC2 to be a bit pricey for webapps, if that's what we're going for here. It seems to be optimized for high-performance, high-availability compute clusters, not for persistent webapps. A traditional VPS (like Slicehost) is a more economical solution for that, IMHO.


Slicehost can be cheaper because of the included bandwidth, but AWS can be really helpful too.

Here are two examples:

Database server. EBS is a high-performance disk system offering speeds of 70MB/s in the real world. Databases love fast disks. Slicehost's disks are slow (at least for high performance databases). Xen doesn't have great disk access speed and they're just plain old local disks. So, if you need to scale up a database, AWS is going to accommodate that better.

Infinidisk! With Slicehost, you have a fixed disk size and you can't buy more. With EBS, you can just keep getting more and more disk - you can even stripe volumes so that they show up as a single disk to your instance.
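
For the striping bit, a minimal sketch looks something like this (it assumes four EBS volumes are already attached at /dev/sdf through /dev/sdi and that mdadm and xfsprogs are installed; the device names and mount point are just examples):

    import subprocess

    devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

    # Stripe the attached EBS volumes into one RAID-0 device.
    subprocess.check_call(["mdadm", "--create", "/dev/md0", "--level=0",
                           "--raid-devices=%d" % len(devices)] + devices)

    # Put a filesystem on it and mount it where the database keeps its data.
    subprocess.check_call(["mkfs.xfs", "/dev/md0"])
    subprocess.check_call(["mount", "/dev/md0", "/var/lib/mysql"])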

Life is about trade-offs. I love Slicehost and use them for all my stuff. I've set up EC2 stuff for other people who wanted it. So, it really depends on what you need.


I use and recommend Slicehost.

I'll tweak just one thing: SH offers backups only for slices up to 2GB - for the 4/8/15GB slices, there is no backup option. I am not sure why, though.


All other things being equal (which I don't know for a fact), Linode is significantly cheaper than Slicehost, considering the differences in memory usage between x86 (32 bit) and x86_64:

http://journal.dedasys.com/2008/11/24/slicehost-vs-linode


I asked Slicehost and they say that because of the way they currently do backups, they can't guarantee that backups of slices larger than 2GB will get done in a reasonable amount of time.


If you ask support, they'll tell you they've got some backup options in the works supposedly.


OK, I'm sold on the ease of use. However, I'm looking to host a SAAS application that requires a separate database for each subscribed client. How does AWS handle the isolation and privacy of the database(s) - so that the likelihood of one of our clients accessing another client's data (even accidentally) is minimized, and access from outside our framework (our support or clients)? There's also the issue of minimizing downtime in the event of a restore request to a specific client - so not all clients are affected. Finally, we have health care and public sector clients who have very stringent data protection and privacy policies and issues. Does that kind of protection exist in AWS, or any shared environment, or does it require a dedicated solution?


mdasen, I frequently learn from your infrastructure comments, and find myself wanting more. I'm sure all sysadmins wish you had a blog/book in your profile :D


the upside of dedicated servers (that you own) is that after a few months they are dramatically cheaper. Personally, I think that if you are small enough to need less than a full physical box, a VPS usually makes sense. If you are large enough to need more than one physical box, it usually makes sense to use ec2 or other VPS providers for the stuff you need right now, and then build out dedicated servers for your longer-term needs.


I've done AWS, dedicated, and colo. Each one has their own tradeoffs.

AWS is daunting at first -- but then so is Debian. Once you figure out the keys thing and your base image it's fairly easy (also see ElasticFox, S3Browser). You might as well learn it, even if you stick with dedicated hosting for other reasons.

Things not to sneeze at:

* elastic: start up a dozen servers in a few minutes
* free access to S3 storage
* crazy awesome pay-as-you-go bandwidth (250mbps)
* expensive for the CPU/RAM you get
* virtual disks can be slow on random seeks
* sorry, no cPanel (though see RightScale)
* poor locality of servers


So, just to comment on the disk reads: the local, non-persistent disks don't offer great speeds (you're right). However, that shouldn't matter too much now. All of your application code should fit in memory once it's been read off the disk and so you shouldn't be hitting the local disks much after boot. Static files should be on S3 and databases should be stored on EBS.

EBS has great performance. RightScale noted that they got over 70MB/s with sysbench and over 1000 I/O operations per second. If you want more performance, you can even stripe across EBS volumes.

EBS really helped EC2's viability a ton. EC2 users now have access to cheap, reliable, and fast storage.


I'll have to try it out, thanks. A quick search doesn't turn up any bonnie tests or similar, so I'll do them and put them online.

(edit) this suggests that EBS is slower than the virtual disk for open/seek/write/flush: http://bizo-dev.blogspot.com/2008/11/disk-write-performance-...


Related to an earlier post today about backup: S3 snapshots of the virtual disks provide easy backups (just make sure you have it set up right - e.g., it isn't a backup solution unless you test restoration).

Snapshots are also useful for testing new ideas and out-of-band processing (spin up an instance that processes data from a snapshot instead of hitting the main DB).

Amazon consistently improves their offerings - so if it doesn't fit right now it might next month.


I'd say that if it isn't obvious why you would need AWS, then you don't need AWS and should go with a standard dedicated server provider.

Configuration should be the least of the reasons to make the decision. The many other factors are much more important than configuration.


As always, the right solution depends on what you're building.

"Dedicated server" sounds like you're asking about one or maybe a few machines. If that's the case, you're better off with dedicated hardware from a vendor you're familiar with.

AWS is an entirely different way to build an application infrastructure. You don't keep state on any one instance because you plan for redundancy. Rather than have 1 or 2 front-end machines, you're running 1 or 2 load balancers in front of N front-end machines. If something goes awry with a front-end instance, you take it out of the loop and start another to add to the loop without downtime. It's the kind of power that up to now only companies with large IT budgets have enjoyed.


I built a 140-machine farm at Softlayer, ~6 months ago. Here are some observations (that may not mean much south of 10 boxes.)

a) AWS feature advantages (mostly instascale, in our case) fade with the high cost of every additional dedicated box.

b) It's nice to virtualize the map of services to boxen, but at some level of scale, each box has a single task and you want the ability to run it _flat out_. If so, you have to decide if Xen overhead is worth labor somewhere else, and alternately, if Xen source compatibility holds you back from new kernel features. (Pick your VM technology.)

c) We still wanted instascale with our own software distribution, so in something less than a week I hand-tooled a PXE-based provisioner that initialized from a live exemplar (Gentoo, whee). It took some work to find the right propeller heads at Softlayer, but eventually we understood each other and the BOOTP listeners got turned off for our subnets. "Insta" became 2-hour hardware activation, which was ok for us. You might consider Puppet in the same way (except for Gentoo's long from-scratch build time).

d) Virtually all provider admin is automated and this still works and makes sense at scale. The SOAP API that backs it is not quite fully baked, but is very useful. Paired with box-level IPMI pokes, you can replicate AWS control over hardware.

e) Whatever AWS provides, at scale you still have a custom setup at some level of abstraction, so putting in place exactly the right hardware saves labor.

f) Substantial discounts can be had.


AWS -- we run our entire insurance company, and its multiple applications, using the EC2, EBS, S3, SQS, and FPS services provided by AWS. I think the only AWS service we haven't used to date is Mechanical Turk. We highly recommend AWS: we started with dedicated hardware years ago, migrated to an excellent virtual host, and then finally moved to AWS last year when they implemented EBS and provided the SLA. Once you have multiple servers, it makes increasing financial sense to use AWS.


What exactly do you want to do - surely you need to ask that question before deciding which route to go (or at least let us know and we'll try to help). We use AWS extensively for scaling up and down and it is AMAZING for this. We couldn't do what we now do without it - well, without a huge amount of investment anyway. It enables us to sell our products at a decent price point. If you are just wanting to host websites then a dedicated server might be your thing. How much bandwidth are you going to consume? Sometimes a dedicated account will give you a better deal for bandwidth.


Two more things that AWS has that no other hosting company offers: queuing and billing. The Amazon message queue service (SQS) handles all those messages you pass from virtual machine to virtual machine. Plenty of companies have spent years working on message queues for proper scaling (RabbitMQ, AMQP), and Amazon just has one there that works for you off the bat. It makes scaling a lot easier.
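
A tiny example of what that looks like with the boto library (the queue name and message body are arbitrary):

    import boto
    from boto.sqs.message import Message

    sqs = boto.connect_sqs()
    queue = sqs.create_queue("work-items")

    # Producer: drop a job on the queue.
    m = Message()
    m.set_body("resize image 42")
    queue.write(m)

    # Consumer (on any other machine): pull a job off and process it.
    job = queue.read()
    if job is not None:
        print(job.get_body())
        queue.delete_message(job)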

Also, they have a bill-pay system for charging credit cards for access to your systems. Just another boring bit of code you don't have to write yourself, included free with the AWS service.


Yes, but see here for an example of a smarter solution: using RabbitMQ on AWS to get the best of both worlds. Link to the AWS blog: http://aws.typepad.com/aws/2008/12/running-everything-on-aws...


I should definitely try out AWS. It's a bit of work, but the docs are good and I think it's a useful experience to at least know a bit about how it actually works.

Since you pay for AWS by the hour (and bandwidth), you can more easily switch to a dedicated server from AWS than the other way around.

If your application doesn't need to deal with peaks of traffic or potentially scale up fast, going with a (couple of) dedicated servers is probably a more cost effective option.

If you do need to deal with peaks or fast scaling, also check out http://scalr.net/


I would say "do both" - aws is awesome if your site is running slow 'cause you are out of capacity, or you otherwise need a box 'right now' or for only a short period of time. spin up another instance and be done with it. But for the boxes you leave on all the time, you are probably better off buying and co-locating your own server. Usually the capital cost difference is made up in only a few months.

The times when a Xen host makes long-term sense are when you want a box that is smaller than optimal. Right now, I buy dual quad-core opterons w/ 32G ram and 2x1TB disk... assuming I am ok with moderate-speed low-power opterons, it costs about $3K up front. Hosting, say, another $150/month. That's a whole lot of ec2 instances. At those prices, well, AWS is pretty expensive over the long term.

But yeah. AWS is awesome for the servers you don't need on all the time, or servers you don't have time to setup (or your whole ball of wax if your margins are such that paying more for computers won't break your business model.)


Base hosting for an AWS small image (if that's still the correct terminology -- equates to about a 1.8GHz Xeon with 512MB of RAM or so) is $72.50 a month in machine time. That's to keep the machine running only, not counting bandwidth. Their bandwidth is confusing to me, so I can't really speak to that, and I've only been dealing with me and the machines so far (no users), so I can't speak to how that works out at all.

That said, $75 a month or so can get you a small dedicated server in some places that includes a fixed amount of bandwidth, or more predictably priced hosting at slicehost or somewhere similar.

If your application doesn't need to scale, then AWS probably doesn't make sense. If it does, then it does.

As an AWS noob myself, the only confusion I had was with the very initial setup (using the provided keys to authenticate and whatnot) -- and with the initial server configuration. The major differences you'll need to be aware of are as follows:

- The AMI image (basically just a virtual image) is static. You can't save files to this and expect them to exist after a reboot. That took a second to get my head around, after configuring Apache and rebooting, wondering where it all went.

- Set up your base OS, then save the AMI. It was confusing to me figuring out exactly what needed to go where, and remapping my server between 'fixed' and 'dynamic' content and making sure that they were in appropriate places. This includes your web server configurations, disk mounts, /var/ directories, etc. User generated data, SQL data, and (probably) your website data will be stored on either an elastic block or to an S3 bucket. The important thing to note here is that you configure your OS to look how you want it to be every time you wipe it clean. Perhaps you put your web application on it, perhaps you don't. I could see using AMIs as a sort of version control for your apps, but I don't know your use case.

- The elastic IPs threw me. Don't release them on production instances. lol. Effectively, it maps an IP address to your machine virtually, which means it can be moved around. Your DNS points at the EIP which can be a single apache instance, or later, a load balancer -- all configurable within a couple minutes.
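
In boto terms, the whole elastic IP dance is just a couple of calls (the instance ID below is a placeholder):

    import boto

    ec2 = boto.connect_ec2()

    address = ec2.allocate_address()                        # reserve an elastic IP
    ec2.associate_address("i-12345678", address.public_ip)  # point it at an instance

    # Later, repoint the same IP at a replacement instance or a load balancer
    # with another associate_address() call; your DNS never has to change.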

Other than that, it took me less than $10 worth of AWS resources to configure a couple servers, deploy my app and get it configured to how it would be in the real world if I were to migrate, so you should definitely check it out. There's no major upfront commitment like there is with dedicated hosting, so there's really no excuse not to familiarize yourself with it.

Also, you definitely want the elasticfox plugin if you're going to do anything with it. I'd point you at the following resources, which got me up and running within a few hours.

- ElasticFox Plugin - http://developer.amazonwebservices.com/connect/entry.jspa?ex...

- ElasticFox Owner's Manual (PDF) - http://ec2-downloads.s3.amazonaws.com/elasticfox-owners-manu...

- Configuring MySQL to use ElasticBlock storage - http://developer.amazonwebservices.com/connect/entry.jspa?ex...


Actually, a small AWS image comes with 1.7GB of RAM. That's a big difference.

In terms of the processor metric, that's harder to gauge. Right now, Amazon says one EC2 compute unit is roughly equivalent to a 1.0-1.2GHz 2007 Xeon or Opteron (or a 1.7GHz 2006 Xeon, which was their original documentation).

Think of it this way: Amazon is putting you on a beefy server with some other people. I'd guess these servers are 4-core boxes running at around 2GHz+ with 16GB of RAM. So, with the Extra-Large instance, you're basically getting the whole box (15GB of RAM, 4 cores with 2 compute units per core, or roughly 2-2.4GHz per core). With the Large instance, you're getting half of the server (2x 2GHz Xeon cores), and with the Small instance you're probably getting one core at half speed.
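
Spelled out with the core and compute-unit counts Amazon publishes for each size (and their own rough GHz estimate), that guess works out like this:

    # 1 EC2 compute unit ~= a 1.0-1.2GHz 2007 Xeon/Opteron core (Amazon's estimate).
    ecu_low, ecu_high = 1.0, 1.2

    instance_types = {          # (cores, compute units per core)
        "small": (1, 1),
        "large": (2, 2),
        "extra-large": (4, 2),
    }

    for name, (cores, ecu_per_core) in instance_types.items():
        print("%-12s %d core(s) at ~%.1f-%.1fGHz each"
              % (name, cores, ecu_per_core * ecu_low, ecu_per_core * ecu_high))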

And that's really as much as most people will need especially since I'm guessing there's a bit of bursting ability to the CPU capacity.

Hope that helps make Amazon's CPU situation a little more understandable.


Ah yes. Thanks for the clarification on the numbers, yours sound more right (and more generous), and do a fair job of making AWS services even more competitive than I'd thought they were.


From the AWS blog: "CloudFront Management Tool Roundup": http://aws.typepad.com/aws/2009/01/cloudfront-management-too...



