Instead of 12 physical cores, 96GB of RAM and a 2TB SSD array pushing 1M IOPS on dedicated hardware for my PostgreSQL database servers, I'd need 1TB of RAM in an AWS box because I'll be lucky if I can even break 10K IOPS.
Does the price make sense then?
I have yet to see any significant AWS deployment that doesn't feel like it could be done better, more reliably, and much more cheaply as a co-located setup.
You can't really create a massive co-located setup on demand for big jobs, then tear it down ... use the right tool for whatever job you're doing. EC2 being more expensive for your Postgres deployment doesn't mean it's an expensive toy.
If you really have a need to set up and tear down a bunch of cores for the occasional large batch, nobody's questioning that the cloud is an economical way to do that.
But I've never had to do that. Not in a way that would make the development time involved economical anyway. Wait two hours for this once-in-a-blue-moon processing job to finish, or spend a day setting up a process to handle such jobs quickly in the future?
It'd really take something exceptional (again, with low IOPS demands) to have that make much sense unless I was already hosting in the cloud and had invested money in making such a task quick and cheap.
> Instead of 12 physical cores, 96GB of RAM and a 2TB SSD array pushing 1M IOPS on dedicated hardware for my PostgreSQL database servers, I'd need 1TB of RAM in an AWS box because I'll be lucky if I can even break 10K IOPS.
> Does the price make sense then?
Does it? That's the point of analysis. It may be that you can scale up your organization on AWS, and then make the decision to scale something like a database vertically once you actually need it.
The choice should be based on a rational cost analysis, however: not just money, but developer time, productivity, and a potential loss of focus on the core competency -- which should be your product, not your commodity infrastructure.
> I have yet to see any significant AWS deployment that doesn't feel like it could be done better, more reliably, and much more cheaply as a co-located setup.
That's a stretch. Especially the "cheaply" part. The operational costs involved in building and maintaining a significant co-located deployment are huge, not to mention the capital expenditure involved in enterprise networking hardware, servers, cages, racks, PDUs, etc.
I don't firmly fall on either side of the debate -- one must balance the requirements and costs, like anything else.
However, I do firmly believe that we should have programmers automating the entire software system administration job away, leaving only the question of hardware provisioning. That's why we have "devops" style teams nowadays, and I only expect that trend to grow.
No, it doesn't. You're throwing up a spectre. These are pretty easy numbers to come by. The point of this one example is that vertical scaling options are pretty constrained on something designed to effectively scale only cores and RAM.
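As a rough sketch of the arithmetic behind the original comparison (the IOPS figures are the commenter's, and the cache-hit model is a simplifying assumption, not a benchmark):

```python
# Back-of-envelope sketch: if the cloud storage layer delivers ~100x
# fewer IOPS, the cache (RAM) has to absorb the difference, which is
# why RAM replaces the SSD array in the comparison.

DEDICATED_IOPS = 1_000_000   # SSD array on dedicated hardware (commenter's figure)
CLOUD_IOPS = 10_000          # "lucky to break 10K" on EC2 (commenter's figure)

# Fraction of random reads that must be served from RAM for the cloud
# box to keep up with the same disk-bound workload:
required_hit_rate = 1 - CLOUD_IOPS / DEDICATED_IOPS
print(f"required cache hit rate: {required_hit_rate:.1%}")  # 99.0%
```

With a 2TB data set needing a ~99% cache hit rate, the hot set effectively has to live in memory, hence the jump from 96GB to ~1TB of RAM.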
> That's a stretch. Especially the "cheaply" part.
All this is a straw man. I really have a hard time believing "I don't firmly fall on either side of the debate". In fact, I call BS.
Who actually has to wire their own PDUs unless they want to? Or is forced to buy cages, racks, etc?
If you want to, I suppose you can, but I haven't hit a Tier 1 data center where that's even an option -- unless you were to buy an unfurnished cage on terms that let you hire your own people for the build-out.
Otherwise, for most deployments, even for something like Reddit, you're talking about stacking a couple of 48-port switches, racking a few servers, and making sure you don't screw up airflow with bad cabling. Just spend $500 on an experienced cable guy to wire it up. Cabling isn't the most fun.
Staffing costs are a drop in the bucket. Racking a couple dozen systems and configuring your ports is pretty trivial.
If you have a business with dependable positive cash flow affording you the luxury of signing a three year hardware lease, it doesn't make cash sense to do anything else for 99% of deployments.
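A toy version of how such a three-year cost comparison might be laid out -- every figure below is a hypothetical placeholder, not a real quote; plug in actual numbers before drawing any conclusions:

```python
# Illustrative (hypothetical) 3-year cost comparison. All prices are
# made-up placeholders for the shape of the analysis, not real quotes.

MONTHS = 36

# Hypothetical co-lo: hardware lease + rack/colo fees + remote hands
colo_monthly = 1500 + 800 + 300
# Hypothetical cloud: large memory-optimized instance + provisioned IOPS storage
cloud_monthly = 4000 + 1200

colo_total = colo_monthly * MONTHS
cloud_total = cloud_monthly * MONTHS
print(f"co-lo 3yr: ${colo_total:,}")   # $93,600
print(f"cloud 3yr: ${cloud_total:,}")  # $187,200
```

The point isn't the specific totals; it's that a dependable cash flow lets you amortize the lease, so the comparison has to be run with your own numbers.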
The "core competency" stuff is just a salve for developers who want to live in a homogeneous environment. Life gets easier with a competent IT person. Not harder. You still have to monitor processes, set up syslog servers, archive logs, monitor disk space, load, available RAM, and bandwidth, trace down abusers, set up mail servers for at least internal monitoring, create backup procedures, automate deployments, and figure out how to compile some old library.
All of this stuff is where your IT staff effort goes. Not in the once-in-a-blue-moon-we-have-to-rack-some-servers tasks. You aren't better off because Amazon is handling your power requirements instead of a good colo. Those things are details you never have to think about either way. Saying so is a definite straw man.
The problem is that it's touted as a great way to scale web apps. Since DB performance is usually the limiting factor in webapp scaling, this doesn't appear to stack up. Needing a lot of IOPS is pretty par for the course.
I guess my question is why go for reasonable when you can get incredible performance per dollar with RAID-10 SSDs? A few dedicated DB machines with SSDs and many cores can get you absolutely monstrous throughput without going for any of the more exotic DB solutions.
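For a rough sense of why a RAID-10 SSD set delivers that kind of throughput, here's a sketch using the standard mirroring write-penalty rule of thumb (each logical write costs two physical writes, one per mirror); the per-drive IOPS figure is an assumed placeholder, not a measured spec:

```python
# Rule-of-thumb RAID-10 throughput estimate: reads stripe across all
# drives; each logical write lands on both halves of a mirror pair,
# so writes cost 2 physical I/Os.

def raid10_iops(drives: int, drive_iops: int, read_fraction: float) -> float:
    """Estimate deliverable IOPS for a RAID-10 set of `drives` disks."""
    write_fraction = 1 - read_fraction
    # Average physical I/Os per logical I/O: reads cost 1, writes cost 2
    return drives * drive_iops / (read_fraction + 2 * write_fraction)

# e.g. 8 SSDs at an assumed 90K random IOPS each, 70/30 read/write mix:
print(f"{raid10_iops(8, 90_000, 0.70):,.0f} IOPS")  # 553,846 IOPS
```

Even with conservative per-drive numbers, a handful of mirrored SSDs lands orders of magnitude above the ~10K IOPS ceiling being discussed.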