There's a warmup delay the first time a function executes in a fresh container, up to ~2 seconds depending on which libraries you include (and you pay for this time [1]). After the first execution, your function environment stays "warm" for 10-15 minutes, and it only takes 10-150 ms to execute your function on a new event.
The other big downside is that every time your function goes "cold" you pay the cost of reloading any extra libraries you need (anything not in the standard lib, ImageMagick, or the AWS Node.js SDK).
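The cold/warm split above is why the usual advice is to do expensive setup at module scope. A minimal sketch, assuming a Python handler (the thread mentions Node.js, but the pattern is the same): everything at module level runs once per cold start, and only the handler body runs on warm invocations. The handler name and payload here are hypothetical.

```python
# Hypothetical Lambda-style handler illustrating the cold/warm split.
# Module scope runs once per container: this is where the cold-start
# cost of big imports and SDK client setup is paid.
import json
import time

# Expensive setup (heavy imports, clients, config parsing) goes here,
# so it happens once per cold start rather than on every event.
_BOOTED_AT = time.time()

def handler(event, context=None):
    # Only this body runs on a warm invocation, typically 10-150 ms.
    return {
        "container_age_s": round(time.time() - _BOOTED_AT, 3),
        "echo": json.dumps(event),
    }
```

On a warm container, `container_age_s` keeps growing between events while the module-level setup is never repeated.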
The upside is that you only pay for the time your code actually runs. I've replaced a cronjob server and saved ~90% on the bill to run my jobs. Mostly they were scheduled backups and other misc integrity checks on AWS and 3rd-party infra, so nothing that intense.
1: the cost for 2 seconds of execution time on a 512-MB execution environment is 0.00001668 USD, so it's not likely to break the bank. If you have a high-traffic function it's likely to stay "warm" pretty much all the time. And if your function is low-traffic, it's likely you fit inside the free tier.
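The footnote's number falls out of Lambda's per-100 ms billing. A quick worked check, assuming the then-published rate of $0.000000834 per 100 ms for a 512 MB function:

```python
# Reproducing the footnote's arithmetic: Lambda bills in 100 ms units,
# and the published rate for a 512 MB function was $0.000000834 per unit.
price_per_100ms_512mb = 0.000000834  # USD, assumed from the pricing page
billed_units = 2_000 // 100          # 2 seconds = 20 billing units
cost = price_per_100ms_512mb * billed_units
print(f"{cost:.8f} USD")  # matches the 0.00001668 USD figure above
```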
Ha! A cronjob to keep functions from going cold! If everyone starts doing that, it will wreck the cost structure of Lambda for Amazon, I wager. One wonders why they didn't consider the possibility of users doing just that.
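The keep-warm trick is usually a scheduled rule (e.g. a CloudWatch Events cron every few minutes) that invokes the function with a marker payload, and a handler that short-circuits on the ping. A minimal sketch; the `keep_warm` marker key is hypothetical:

```python
# Hypothetical "keep-warm" handler: a scheduled rule invokes the
# function with {"keep_warm": true} every few minutes, and the handler
# returns immediately, keeping the container warm without real work.
def handler(event, context=None):
    if isinstance(event, dict) and event.get("keep_warm"):
        # The scheduled ping: do nothing expensive, just stay warm.
        return {"warmed": True}
    # ... real event handling would go here ...
    return {"warmed": False, "processed": event}
```

Note that each ping is itself a billed invocation, so this only pays off when cold-start latency actually matters.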
+1. One of the draws of Lambda is its easy integration with other AWS tools (Kinesis, DynamoDB, SNS, even CloudFormation), and it's really low-cost since you only pay for execution time.
And Amazon API Gateway, potentially the request routing layer for a heterogeneous set of Lambda-backed endpoints. That seems like it may have been the particular service jenkstom was referring to. https://aws.amazon.com/api-gateway/
True, but in this particular case, the cost of time going (temporarily) backwards would only be connecting to a potentially suboptimal disque node, and that would be remedied once the subjective machine time caught up with the previous subjective time.
Python makes it easy to do lots of things -- including very inefficient ones. Profiling your existing stuff and trying to optimize in pure Python often gets you pretty far.
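The standard-library profiler is usually enough to find those hot spots. A small sketch using `cProfile`/`pstats` on a classic pure-Python inefficiency (quadratic string building vs. `join`); the function names are made up for the example:

```python
# Profile first, then optimize: cProfile shows where time actually goes,
# and the fix is often a small pure-Python change.
import cProfile
import io
import pstats

def slow_concat(n):
    # Quadratic: each += may copy the whole string so far.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n):
    # Linear: build the pieces, join once.
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_concat(20_000)
fast_concat(20_000)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
print(buf.getvalue())  # per-function call counts and cumulative times
```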