> As the fortunes of AWS et al. rose and rose and rose, I kept looking at their pricing and features and kept wondering what I was missing. They seemed orders of magnitude more expensive for something that was more complex to manage and would have locked us into a specific vendor's tooling. But everyone seemed to be flocking to them.
In 2006, when the first AWS instances showed up, it would take you two years of on-demand bills to match the cost of buying the hardware from a retail store and using it continuously.
Today it's anywhere from two weeks for ML workloads to three months for mid-sized instances.
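As a back-of-the-envelope sketch of that break-even, here's the arithmetic with hypothetical prices picked to roughly mirror those claims (none of them are real quotes):

```python
# Break-even between renting on demand and buying outright.
# All prices below are illustrative placeholders, not real quotes.
HOURS_PER_MONTH = 730

def breakeven_months(hardware_cost, hourly_rate):
    """Months of continuous on-demand usage that add up to the hardware price."""
    return hardware_cost / (hourly_rate * HOURS_PER_MONTH)

# 2006-ish: ~$0.10/hr on demand vs. a ~$1,750 commodity box
print(breakeven_months(1750, 0.10))   # ~24 months, i.e. two years

# ML today: ~$12/hr for a GPU instance vs. a ~$4,500 GPU workstation
print(breakeven_months(4500, 12.0))   # ~0.5 months, i.e. roughly two weeks
```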
AWS made sense at a big corp where it would take you six months to get approval to buy the hardware and another six for the software. Today I'd only use it for a prototype, and I'd move it on-prem the second it looked like it would make it past one quarter.
AWS is useful if you have uneven loads. Why pay all year for the number of servers you need at Christmas? But if your load is more even, it doesn't make as much sense.
The business case I give is a website with a predictable spike in traffic that tails off.
In the UK we have a huge charity fundraising event called Red Nose Day, and the public can donate online (or by telephone if they want to speak to a volunteer).
The website probably sees 90% of its traffic on the day itself - millions of users - with the remaining 10% tailing off over a few days. Then nothing.
The elasticity of the cloud allows the charity to massively scale its compute power for ONE day, then reduce it for a few days, and drop back down to a skeleton infrastructure until the next event - in a few years' time.
(FWIW I have no clue whether Red Nose Day actually uses the cloud, but it's a great example of a business case requiring temporary high-capacity compute to minimise costs.)
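To put rough numbers on that elasticity argument, here's a sketch with made-up fleet sizes and a placeholder hourly rate (not real AWS pricing):

```python
# Why paying for peak capacity all year loses to scaling elastically.
# Every number below is a made-up placeholder for illustration.
HOURLY_RATE = 0.20          # assumed price per server-hour
HOURS_PER_YEAR = 8760

peak_servers = 200          # fleet needed on the donation day itself
baseline_servers = 4        # skeleton infrastructure the rest of the year
spike_hours = 24            # the event day
taper_hours = 72            # a few days of tail-off at half the peak fleet

# Always-on: provision for the peak and pay for it all year
always_on = peak_servers * HOURS_PER_YEAR * HOURLY_RATE

# Elastic: baseline all year, plus the spike and the taper
elastic = (baseline_servers * HOURS_PER_YEAR
           + peak_servers * spike_hours
           + (peak_servers // 2) * taper_hours) * HOURLY_RATE

print(f"always-on: ${always_on:,.0f}/yr")   # always-on: $350,400/yr
print(f"elastic:   ${elastic:,.0f}/yr")     # elastic:   $9,408/yr
```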
Only consumer businesses scale up for the holidays; most other industries scale down. The more customers a cloud provider has, the more even its overall demand becomes.
Also, every unused resource goes into the spot market. They just have a bigger spot market during the rest of the year.
And lastly, that's why they charge a premium: they amortize the cost of the spare hardware across all their customers.
Plus, bidding on spot instances used to be far less gamed, so if you had infrequent batch jobs (just an extreme version of low-duty-cycle loading), there was nothing cheaper or easier.
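For anyone who never saw the old bidding model, a minimal boto3 sketch of a one-time spot request for a batch job looked something like this; the AMI ID, bid price, and instance type are placeholders, and credentials/region are assumed to come from the environment (AWS has since replaced per-request bidding with a simpler max-price model, but the call itself still exists):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One-time spot request: run the batch job if capacity is available
# at or below our max price, then let the instance go away.
response = ec2.request_spot_instances(
    SpotPrice="0.05",        # max we're willing to pay per hour (placeholder)
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-00000000000000000",   # placeholder AMI
        "InstanceType": "c5.large",           # placeholder instance type
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```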
I've been out of that "game" for a bit, but Google Compute used to have the cheapest bulk-compute instance pricing if all you needed was a big burst of CPU.
It's all changed if you're running ML workloads though.
"buying the hardware from a retail store." Never buy wholesale and never develop on immature hardware, I have seen c** with multiple 9 y.o. dev servers. I could shorten the ROI to less than 6 months.