I still just don’t understand why I want serverless, especially as my main request-serving mechanism. Every investigation I’ve done into it revealed a lot of issues around function management and latency that always seemed harder to deal with than just writing a server in <language> and deploying it on ECS or Fargate.
My experience with serverless (mostly AWS Lambda) is that I've found 3 major use cases where it's been a very successful choice:
1. as a cron-style job (e.g. download a file every hour and put it in S3, or connect to a DB and do some smaller processing task)
2. as a responder to (or processor of) cloud-based events (e.g. receiving from a stream, reacting to an instance shutdown notification or an alarm)
3. as a backend for a small REST API (especially for heavily cacheable APIs)
For all 3 cases, assuming the task isn't hugely inappropriate and you've got a bit of infrastructure-as-code lying around that can be repurposed, serverless has led to massive time savings for me on several tasks, for very little money and with basically no maintenance effort required. (Case 1 is sketched below.)
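To make case 1 concrete, here's roughly what one of those handlers looks like in Python. This is only a minimal sketch: the URL, bucket, and key are placeholders, and it assumes an EventBridge (CloudWatch Events) schedule is what triggers it.

```python
# Minimal sketch of use case 1: an hourly scheduled Lambda that fetches a file
# over HTTP and drops it into S3. Bucket, URL, and key names are made up.
import os
import urllib.request

import boto3  # bundled with the AWS Lambda Python runtime

s3 = boto3.client("s3")

SOURCE_URL = os.environ.get("SOURCE_URL", "https://example.com/report.csv")
BUCKET = os.environ.get("DEST_BUCKET", "my-archive-bucket")


def handler(event, context):
    # EventBridge invokes this on a cron schedule, e.g. rate(1 hour);
    # the event payload itself isn't needed for this kind of job.
    with urllib.request.urlopen(SOURCE_URL, timeout=30) as resp:
        body = resp.read()  # fine for small files; stream to /tmp for larger ones

    key = "reports/latest.csv"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    return {"uploaded": f"s3://{BUCKET}/{key}", "bytes": len(body)}
```

The whole thing is a dozen lines of real code plus a scheduling rule, which is most of the appeal.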
There's definitely a tendency towards smaller tasks, though. Ultimately serverless necessarily means giving up control of your infrastructure and removing a lot of customization or specialization options; that means that at a certain scale or level of complexity, it just isn't an appropriate choice either for cost or performance reasons - but that's fine, it doesn't have to solve all problems. It has its niche, and it's quite easy to go from a quick Lambda to a container-based or VM-based alternative.
As others have said, if you're gonna be reaching that kind of timeout, the use case would come under this caveat I mentioned:
> assuming the task isn't hugely inappropriate
We have a couple of cases where reasonably small files (< 100MB but it'd work with larger) need to be downloaded from one place and placed in another, potentially with an ETag check to prevent redundant uploads/downloads. Lambda is perfect for that.
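For what it's worth, the ETag check is only a few lines. This is a sketch with made-up names, and it assumes one particular approach (stash the source's ETag as S3 object metadata and compare against it on the next run), which isn't the only way to do it:

```python
# Sketch of the ETag-guarded copy: only re-transfer the file when the source's
# ETag differs from the one recorded on the S3 object. URL/bucket/key are
# hypothetical placeholders.
import urllib.request

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

SOURCE_URL = "https://example.com/data/export.zip"
BUCKET = "my-mirror-bucket"
KEY = "mirrored/export.zip"


def handler(event, context):
    # Ask the source for its ETag without downloading the body.
    head = urllib.request.Request(SOURCE_URL, method="HEAD")
    with urllib.request.urlopen(head, timeout=30) as resp:
        source_etag = resp.headers.get("ETag", "")

    # Compare against the ETag we stored as metadata on the last upload.
    try:
        stored = s3.head_object(Bucket=BUCKET, Key=KEY)["Metadata"].get("source-etag")
    except ClientError:
        stored = None  # destination object doesn't exist yet

    if source_etag and source_etag == stored:
        return {"skipped": True, "etag": source_etag}

    # ETag changed (or unknown): download and re-upload, remembering the new ETag.
    with urllib.request.urlopen(SOURCE_URL, timeout=60) as resp:
        body = resp.read()  # <100MB fits in memory; larger files could stream to /tmp

    s3.put_object(Bucket=BUCKET, Key=KEY, Body=body,
                  Metadata={"source-etag": source_etag})
    return {"skipped": False, "etag": source_etag, "bytes": len(body)}
```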
If you're downloading a file that takes anywhere near that long every hour then it's probably a bad choice for cost reasons too. Most files aren't big enough to take 15 minutes to download though.
If you're at the point where you're worried about the timeout, you should also be worried about the disk space available to your Lambda. The file should be stored in and served from S3.
Serverless backends tend to work quite well when paired with a Single Page Application on the frontend – e.g. Vue.js or React. That way your frontend can be served from a static host – e.g. S3 or GitHub Pages – almost instantly. And so the perceived performance of your application isn't harmed (as much as you'd think) by the latency of your backend, since other aspects of the application's interface can load and continue to be responsive.
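As a rough illustration of the backend half of that pairing (the details here are assumptions, not anyone's actual setup): the SPA loads from S3 or GitHub Pages and then fetches JSON from something like this Lambda behind an API Gateway proxy integration, with cache headers doing a lot of the latency hiding for the heavily cacheable case.

```python
# Sketch of the backend half of that setup: a Lambda behind API Gateway
# (proxy integration) returning JSON that the statically hosted SPA fetches.
# Route, payload, and cache lifetime are illustrative assumptions.
import json


def handler(event, context):
    items = [{"id": 1, "name": "example"}]  # would normally come from a datastore

    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            # Let CDN/browser caches absorb repeat requests, hiding backend latency
            "Cache-Control": "public, max-age=300",
            # Needed because the SPA is served from a different origin (S3/Pages)
            "Access-Control-Allow-Origin": "*",
        },
        "body": json.dumps(items),
    }
```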
To me, that does not seem to describe anything specific to "serverless"?
You've always been free to serve your static assets in any way you like, so I'm unclear as to how the way the backend is architected comes into play here.
I think they're suggesting that bad latency from a serverless architecture is mitigated by a SPA, since the application can render and appear functional while it's fetching data.
Still, not sure how damage control for poor performance is a "pro" of serverless design.
Well, there's very little (if anything) that "serverless" can do that other techniques can't accomplish. It's about costs & benefits, not whether or not you can do something.
Though I tend to agree: I've yet to hear a really compelling description of why I should move very much into it. Some of this may be because I tend to write in a style that makes it fairly easy to mix & match bundles of application functionality anyhow, so to me adding some tiny function to a running app isn't that big a deal. (I don't use Erlang directly, but Erlang is where I learned this from.) If you're in an environment where deploying a single new REST handler or some recurring service is much harder, though, I could see where it comes in handy for certain things.
I guess. I fail to see how this is better than sticking an API server onto Heroku or similar, especially given that the engineering hours spent will easily dwarf any potential hosting cost differences.
Not sure if any of the serverless offerings are 100% there yet, but I have a hard time not seeing this as the future for most things.
I just want to quickly deploy code in a friction-free, maintenance-free manner with zero compromises on scalability, latency, flexibility, or reliability; what's the problem?