Avoiding scaling the stateful part is also a path to hammer-nail syndrome: you start using less and less of the database system, because you keep pulling things out of it since that's the only place you've established the ability to add CPU capacity, and with that comes a host of new and old issues.
I can't decide if I would enjoy reading real examples of these cycles. Just knowing how precarious the supply of natural rubber is, and how inadequate artificial substitutes are (including prospective ones from Continental), makes me uneasy, but I'll be damned if there isn't some kind of allure there as well.
It seems almost certain that climate change is going to severely disrupt several of these cycles within a decade or two. A slightly different failure mode from the one you allude to in C. is that our tech and processes become capped before we adequately scale up renewables, overshoot kicks in, and we end up in a runaway feedback loop that certainly doesn't just stop with computers.
I live in Canada, where things don't quite work like the US in terms of ownership incentives, but generally nobody here is renting out a house for less than it costs to finance and maintain it. So rather than "renting with extra steps", an entire generation of people are being told they cannot afford to own a home, but they can go ahead and pay the bills of the person who does.
Buying a house ten years ago has added about $450,000 to my net worth, without my making any extra payments. Had we stayed in a rented apartment, saving the difference would have produced about $57,000 at 8% annualized, and renting a comparable house would have yielded nothing.
Buying a house here can be highly leveraged: you start off paying only 5, 10, or 20 percent of the price, and until very recently houses appreciated faster than the interest on the mortgage. Furthermore, as you pay your mortgage down, you can borrow against a portion of the house's value and invest that money elsewhere, while deducting the interest on that loan from your taxes. Never actually paying your house off is a thing for both the rich and the poor, but with very different outcomes.
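To make the leverage point concrete, here's a back-of-envelope sketch; every number in it (the price, both rates, the 10% down payment) is made up purely for illustration:

    # Back-of-envelope leverage math; every number here is hypothetical.
    price = 500_000                # purchase price
    down = 0.10 * price            # 10% down payment
    appreciation = 0.06            # annual home price growth
    mortgage_rate = 0.03           # annual mortgage interest
    years = 10

    gain = price * ((1 + appreciation) ** years - 1)   # compounds on the FULL price
    interest = (price - down) * mortgage_rate * years  # rough, ignores amortization
    print(f"~${gain - interest:,.0f} equity gain on a ${down:,.0f} down payment")
    # ~$260,000 of gain on $50,000 down: the appreciation compounds on the
    # whole asset, while you only put up 10% of the capital.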
Honestly, looking at Oracle Cloud, I think the best thing they could do is spin it off and completely remove any trace of the name Oracle. There are some absolutely fantastic products there being criminally neglected because people won't go anywhere near the name, and can you blame them?
Most if not all AWS services are really just HTTP APIs. A Lambda invocation is really just a POST to a public AWS endpoint. You can absolutely come up with login flows that obtain temporary STS credentials allowed only to invoke your "API" function. (Agreed, this is not most workloads.)
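A sketch of what that flow could look like with boto3; the role ARN and function name are placeholders, and the assumed role's policy would allow only lambda:InvokeFunction on that one function:

    import json
    import boto3

    # Sketch: trade a login for temporary STS credentials that can only
    # invoke one function, then call Lambda's public HTTPS endpoint directly.
    creds = boto3.client("sts").assume_role(
        RoleArn="arn:aws:iam::123456789012:role/invoke-api-fn-only",
        RoleSessionName="web-login",
    )["Credentials"]

    lam = boto3.client(
        "lambda",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    resp = lam.invoke(FunctionName="api-fn", Payload=json.dumps({"path": "/hello"}))
    print(json.loads(resp["Payload"].read()))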
I responded further down, but dude! 2,000 requests a second is hardly anything at all, unless the application server is doing some seriously heavy lifting, in which case the architecture is wrong.
You should redo the calculations with 1 GB of memory for Lambda; on the server side, even 30 machines would be generous.
Concurrency is key. Requests don't cost much when they're just waiting for other things, but Lambda continues to pile costs on for every increase in concurrency.
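A rough illustration of why that matters, using Lambda's old list prices ($0.0000166667 per GB-second plus $0.20 per million requests); the 500 req/s and 200 ms figures are invented:

    # Cost of IO-bound requests on Lambda: billed on wall time x memory,
    # for every concurrent execution.
    GB_SECOND = 0.0000166667
    REQ_FEE = 0.20 / 1_000_000
    MONTH = 2_592_000  # seconds in a 30-day month

    def lambda_monthly(req_per_sec, wall_secs, mem_gb):
        reqs = req_per_sec * MONTH
        return reqs * (wall_secs * mem_gb * GB_SECOND + REQ_FEE)

    # 500 req/s, each spending 200 ms mostly waiting on a downstream call:
    print(f"${lambda_monthly(500, 0.2, 1.0):,.0f}/mo")  # prints ~$4,579/mo
    # A single async server can hold thousands of those idle requests
    # open for a flat instance price.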
APIs should use only a tiny fraction of actual CPU time per request; perhaps the BBC's are different. To make a fair comparison and properly predict what they would need in servers requires more detail than you have available, but I think your estimates are off by a significant amount.
I stopped reading at "3,000 cores"; there is a lot of money to be made mopping up disasters like that, and it's clearly even something of a growth industry. We had one machine push 2,400 requests/sec average over election night without even touching 30% capacity, costing around $600/mo including bandwidth. Its mirror in another region costs slightly more, at $800/mo. As a side note, those folk always pad their estimates by inventing extra employees who supposedly wouldn't be required in the serverless world, yet in every serverless project I've ever seen, those people absolutely still existed, because they had to.
The price-performance ratio between Lambda and EC2 is obscene, even before accounting for Lambda's 100 ms billing granularity, per-request fees, provisioned capacity, or API Gateway. Assuming one request to a 1 vCPU, 1,792 MB worker that lasted all month (impossible, I know), that comes to around $76, compared to (for example) a 1.7 GB, 1 vCPU m1.small at $32/mo on-demand or $17.50/mo partial-upfront reserved.
Let's say we have a "50% partial-reserved" autoscaling group that never scales down. This gives us a $24.75/mo blended equivalent VM cost against a single $76 Lambda worker, or around a 3x markup, rising to 6x if the ASG scaled down to half its size for the entire month. That's totally fine if you're running an idle Lambda load where no billing occurs, but we're talking about the BBC, one of the largest sites in the world...
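For anyone who wants to check the arithmetic, a quick sketch using the prices quoted above and a 30-day month:

    # Reconstructing the numbers above.
    GB_SECOND = 0.0000166667
    MONTH = 2_592_000
    lam = 1792 / 1024 * GB_SECOND * MONTH     # one 1,792 MB worker, busy all month
    print(f"Lambda worker: ${lam:,.0f}/mo")   # ~$76

    on_demand, reserved = 32.00, 17.50        # m1.small $/mo
    blended = (on_demand + reserved) / 2      # 50% reserved ASG, never scales down
    print(f"blended VM: ${blended:.2f}/mo, markup: {lam / blended:.1f}x")  # ~3.1x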
The BBC actually publish some stats for 2020: their peak month was 1.5e9 page views. Counting just the News home page, this translates to what looks like 4 dynamic requests per view, or 2,280 requests/sec.
Assuming those 4 dynamic requests took 250 ms each and pegged the VM's CPU at 100%, that still only works out to 570 VMs, or $14,107/mo. Let's assume the app is not insane and on average expect 30 requests/sec per VM (probably switching the m1.small out for a larger size taking proportionally more load); now we're looking at something much more representative of a typical app deployment on EC2: $1,881/mo in VM time. Multiply by 1.5x to account for a 100% idle backup ASG in another region and we have a final sane figure: $2,821/mo.
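Reconstructing those figures for anyone following along; all inputs are the thread's own numbers:

    # The thread's own inputs, chained together.
    MONTH = 2_628_000                          # ~30.4-day month
    rps = 1.5e9 * 4 / MONTH                    # ~2,283 req/s (rounded to 2,280 above)
    blended_vm = 24.75                         # blended m1.small $/mo from upthread

    worst = rps / 4 * blended_vm               # 4 req/s/VM: 250 ms of pegged CPU each
    typical = rps / 30 * blended_vm            # 30 req/s/VM
    print(f"worst: ${worst:,.0f}/mo, typical: ${typical:,.0f}/mo, "
          f"with idle backup region: ${typical * 1.5:,.0f}/mo")
    # ~$14,127 / ~$1,884 / ~$2,825 -- within rounding of the figures above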
As an aside, I don't know anyone using 128 MB workers for anything interactive, not because of memory requirements, but because the CPU timeslice scales with memory. For almost every load I've worked with, we ended up on 1,536 MB slices as a good latency/cost tradeoff.
Just for completeness, here are updated Lambda estimates for the parent comment's scenario, not counting provisioned worker costs and assuming no request takes more than 100 ms.
Note the "1 req/vCPU" case would require each request to burn 250 ms of pure CPU (i.e. not sleeping on IO), which in the equivalent scenario inflates Lambda's billed CPU by 3x due to the 100 ms billing granularity, i.e. an extra $30,000/month.
That's an 87% reduction in operational costs in the ideal (and not uncommon!) case, and a minimum of a 59% reduction in the case of a web app from hell burning 250 ms CPU per request.
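The parent's full table isn't reproduced here, but plugging the thread's assumptions into Lambda's published pricing of the time lands in the same ballpark as those percentages; the exact figures depend on rounding and the memory size assumed:

    # Lambda side of the comparison, under the same assumptions.
    # Pre-2020 prices: $0.20/1M requests, $0.0000166667/GB-s, 100 ms rounding.
    GB_SECOND, REQ_FEE = 0.0000166667, 0.20 / 1e6
    reqs = 6e9                                 # 1.5e9 views x 4 dynamic requests
    mem_gb = 1792 / 1024                       # 1,792 MB workers

    def lam(billed_secs):
        return reqs * (billed_secs * mem_gb * GB_SECOND + REQ_FEE)

    ideal = lam(0.1)                           # each request fits one 100 ms slice
    hell = lam(0.3)                            # 250 ms of CPU bills as 300 ms
    print(f"ideal: ${ideal:,.0f}/mo -> {1 - 2821 / ideal:.0%} saved on EC2")
    print(f"hell:  ${hell:,.0f}/mo -> {1 - 21161 / hell:.0%} saved on EC2")
    # ~$18,700 (85% saved) and ~$53,700 (61% saved) respectively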
Totally agree. Lambda needs to cut its costs to a tenth, or start billing for real CPU time and drop the invocation overheads, to really compete at these scales.
Now, I have dozens of serverless projects for smaller things, because there is still a point where the gross costs just don't matter (as in, if my employer were worried about Lambda vs EC2 efficiency, there are probably a few meetings we could cancel, or trim the audience of, that would make up for it).
Lambda has huge potential for well-defined workloads. In this case, I don't get it. You mentioned the idea of having 3 regions with 3,000 cores: are you doing ML on K8s? Another aspect is the caching, both with a CDN and internally; I don't get that either.
The calculations are still a little more complicated than that. I think serverless is the future, but I also think we need to keep putting pressure on AWS to lower costs.
Lambda and servers are not equal; you can't just calculate the number of servers one would need for an equivalent Lambda load. It's entirely possible that they could get away with significantly fewer servers than you think.
Your cost calculation assumes 128 MB provisioned. You cannot run an API with 128 MB Lambdas; try 1 GB or even 1.5 GB. It's not that you need that much memory, of course, but if you want p98 execution and initialization times that are palatable, you need the proportional CPU speed that comes with the additional memory.
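A sketch of why that trade often costs nothing: CPU share scales with memory up to a full vCPU at 1,792 MB, so for CPU-bound work the GB-seconds (and thus the bill) stay roughly flat while latency drops. The 200 ms of full-vCPU work is an invented figure, and the 100 ms billing rounding is ignored here:

    # Bigger Lambdas, same cost: duration shrinks as the CPU share grows,
    # so memory x time stays constant for CPU-bound work below 1,792 MB.
    GB_SECOND = 0.0000166667
    cpu_work = 0.2  # seconds of work at a full vCPU (hypothetical)

    for mem_mb in (128, 512, 1536):
        share = min(mem_mb / 1792, 1.0)            # fraction of a vCPU
        duration = cpu_work / share                # wall time for the same work
        per_million = mem_mb / 1024 * duration * GB_SECOND * 1e6
        print(f"{mem_mb:>4} MB: {duration * 1000:>5.0f} ms, ${per_million:.2f}/1M reqs")
    # 128 MB: 2800 ms; 1536 MB: 233 ms -- same ~$5.83/1M, 12x better latency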
And no, you won't need API Gateway, because you'd likely run your own gateway in your cluster, and it will handle far more load without needing nearly as much autoscaling as the app servers.
Lambda autoscales too - it's not instant, and there are steps it goes through as it ramps up.
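A toy model of the documented ramp-up behaviour (circa 2020: an initial burst of 500 to 3,000 concurrent executions depending on region, then 500 more instances per minute; the 10,000 cap below is an arbitrary stand-in for an account limit):

    # Toy model of Lambda's documented concurrency ramp-up.
    def max_concurrency(minutes, burst=3000, per_minute=500, limit=10_000):
        return min(burst + per_minute * minutes, limit)

    for m in (0, 2, 5, 14):
        print(f"t={m:>2} min: up to {max_concurrency(m):,} concurrent executions")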
If Lambda removed the per-invocation overhead and billed for actual CPU time used, not "executing" (wall) time, I think that would be fantastic. Again, I still think it's the future, but it has a ways to go before it's appropriate for certain use cases and load profiles.
Edit: oh, and I think the ROI of "managed" is also case-by-case. Do you already have people who know how to run a cluster for you? Completely different conversation.
I will also say that Lambda is still not maintenance-free, either.