
I'm curious, have you benchmarked both of these and seen whether there's any real-world difference? I've seen major performance differences between clouds on what is supposedly similar hardware. If AWS isn't faster at anything, then why it's so expensive would be a very good question indeed.

My initial guess would be that AWS is relying on the fact that users who are currently using AWS heavily are unlikely to switch clouds if they suddenly have to go from VMs to dedicated servers, and thus it can profit from that capture. E.g. 7x the cost is probably fine when you consider what it might cost to employ people for a cloud migration.
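
As a back-of-envelope illustration (every number below is made up, not taken from anywhere), the premium can come out smaller than the migration project it avoids:

    # Hypothetical numbers only: is a 7x hosting premium worth avoiding a migration?
    dedicated_monthly = 2_000                      # assumed dedicated-server bill
    aws_monthly = 7 * dedicated_monthly            # the "7x" premium
    annual_premium = (aws_monthly - dedicated_monthly) * 12   # 144,000 per year

    engineers = 3                                  # assumed migration team size
    fully_loaded_annual = 200_000                  # assumed cost per engineer-year
    migration_months = 6
    migration_cost = engineers * fully_loaded_annual * migration_months // 12   # 300,000 one-off

    print(annual_premium, migration_cost)          # 144000 vs 300000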



A lot of it also comes down to reliability, security (not just at the platform level but also the tools for you to build on), and most importantly integration with other services which you’d have to build and maintain on your own.

You don’t buy AWS because it’s cheaper than a dedicated server but because you can get that Linux server using the same management services (API, authentication, monitoring & logging, etc.) that you use to get serverless functions & containers, object storage, managed ML services, normal and reporting databases, development tools, etc.
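
A minimal sketch of that point (a boto3 example I'm adding for illustration; the region is a placeholder): the same SDK, credential chain, and IAM model reach the plain VM, the serverless functions, and the object storage alike.

    import boto3  # one SDK and credential chain for every service

    session = boto3.Session(region_name="us-east-1")   # placeholder region

    ec2 = session.client("ec2")      # the plain Linux servers
    lam = session.client("lambda")   # serverless functions
    s3 = session.client("s3")        # object storage

    # Same auth, same API conventions, same place to hang monitoring/logging:
    print(ec2.describe_instances()["Reservations"])
    print(lam.list_functions()["Functions"])
    print(s3.list_buckets()["Buckets"])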

That's not to say that either one is wrong, just different - it's like asking why someone used a hotel for a conference when you had a great tent site on your vacation to the mountains.


The question is how much money Amazon spends on developing those services: what part of the price is hardware expenses, and what part is software expenses.

My gut feeling tells me that it's not much.


There's definitely a healthy margin, but don't underestimate how much time goes into making everything work smoothly. The major cloud providers do a lot of things like handling zero-downtime migration away from failed physical hosts, monitoring for infrastructure issues, firmware testing/security/updates, etc., all of which is completely transparent to you until you realize that you haven't spent time in years on hardware or environmental failures (stuff like that SSD 40k-hour bug which showed up here recently, for example) or on dealing with things like Spectre.

The big confound here is scale: AWS can amortize the cost of someone really digging into firmware security over millions of customers, but a smaller provider can't. You might not notice that until you either need to get your environment certified or some kind of vulnerability / supply-chain attack against a hardware vendor is revealed. That can mean both that AWS is making a hefty profit reselling those engineers' work to all of its customers and that it's still cheaper to pay them than it is for you to do that work yourself.
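
Roughly (again with invented numbers, just to show the shape of the amortization):

    # Invented numbers: amortizing specialist firmware/security work over customers
    team_cost = 20 * 250_000      # assumed 20 engineers, fully loaded
    customers = 1_000_000         # assumed active customers
    per_customer = team_cost / customers   # ~5 per customer per year

    diy_cost = 250_000            # even one such engineer on your own payroll
    print(per_customer, diy_cost) # 5.0 vs 250000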



