Hacker News

Here's why I love serverless.

I cannot tell you the number of times I have implemented "upload your photo and it'll get resized to (profile avatar size from design specs)". It's ridiculous; it's one of those things where everyone burns time implementing their own hook into ImageMagick. Now I have one lambda that gets pointed at a new record stream from an S3 bucket, and I'm done.
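To make that concrete, the whole thing fits in one small handler. A minimal sketch, where the output prefix, the 128x128 avatar size, and the Pillow dependency are my own assumptions, not from any real project:

```python
# Sketch of an S3-triggered avatar-resize Lambda.
# AVATAR_SIZE and DEST_PREFIX are illustrative placeholders.

AVATAR_SIZE = (128, 128)          # stand-in for "profile avatar size from design specs"
DEST_PREFIX = "avatars/resized/"  # hypothetical output location

def destination_key(source_key: str) -> str:
    """Map an uploaded object key to where the resized copy should go."""
    filename = source_key.rsplit("/", 1)[-1]
    return DEST_PREFIX + filename

def handler(event, context):
    import io
    import boto3           # lazy imports: only needed when running in AWS
    from PIL import Image  # assumes a Pillow layer is attached to the function

    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        img = Image.open(io.BytesIO(body))
        img.thumbnail(AVATAR_SIZE)  # resize in place, preserving aspect ratio
        out = io.BytesIO()
        img.save(out, format="PNG")
        s3.put_object(Bucket=bucket, Key=destination_key(key), Body=out.getvalue())
```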

I cannot tell you the number of times I have implemented "when this user signs up, send them a welcome email." It's one of those things where you construct your email, point it at your MTA, do a ton of configuration, then it may work. Now I have one lambda that gets pointed at a new record stream from Dynamo, which calls SES, and I'm done.
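The email one is barely longer. A sketch, assuming the table has "email" and "name" attributes and a verified SES sender address (both my inventions):

```python
# Sketch of a DynamoDB-stream-triggered welcome email.
# The attribute names and SENDER address are illustrative assumptions.

SENDER = "welcome@example.com"  # hypothetical verified SES sender

def welcome_message(record: dict):
    """For a newly inserted user record, return (to, subject, body); else None."""
    if record.get("eventName") != "INSERT":
        return None
    image = record["dynamodb"]["NewImage"]
    email = image["email"]["S"]
    name = image.get("name", {}).get("S", "there")
    return email, f"Welcome, {name}!", f"Hi {name}, thanks for signing up."

def handler(event, context):
    import boto3  # lazy import: only needed inside AWS
    ses = boto3.client("ses")
    for record in event["Records"]:
        msg = welcome_message(record)
        if msg is None:
            continue  # skip MODIFY/REMOVE stream events
        to, subject, body = msg
        ses.send_email(
            Source=SENDER,
            Destination={"ToAddresses": [to]},
            Message={"Subject": {"Data": subject},
                     "Body": {"Text": {"Data": body}}},
        )
```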

I cannot tell you the number of times I have implemented "clear the Redis cache if a user changes their preferences". You write a clearUserCache hook into your DAO, or you paste it manually into your CRUD functions, and you always forget something, and six months down the line you start getting bug reports of people's ZIP codes not updating, or something. Now I have one lambda that takes record streams from Dynamo, removes a key from ElastiCache, and I'm done.
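And the cache one is the smallest of the three. A sketch where the "user:{id}:prefs" key scheme, the "userId" table key, and the cluster endpoint are all hypothetical:

```python
# Sketch of the cache-invalidation Lambda.
# Key scheme and endpoint are illustrative assumptions.

def cache_key(record: dict):
    """Map a DynamoDB stream record for a preferences change to its Redis key."""
    if record.get("eventName") not in ("MODIFY", "REMOVE"):
        return None  # an INSERT has nothing cached yet
    user_id = record["dynamodb"]["Keys"]["userId"]["S"]
    return f"user:{user_id}:prefs"

def handler(event, context):
    import redis  # assumes the redis client library is bundled with the function
    r = redis.Redis(host="my-cluster.cache.amazonaws.com")  # hypothetical endpoint
    for record in event["Records"]:
        key = cache_key(record)
        if key:
            r.delete(key)
```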

It's not that you couldn't do this before serverless, of course you could and you still can. It's that it makes that level of code reuse that much simpler. You have all of these helper infrastructure functions that you implement for every single project you work on, and reusing that glue code is so, so much easier in Lambda/GCF/AF/etc.



> It's that it makes that level of code reuse that much simpler. You have all of these helper infrastructure functions that you implement for every single project you work on, and reusing that glue code is so, so much easier in Lambda/GCF/AF/etc.

That sounds great until you need to add a feature or fix a bug in the reused code. Then you deploy a change to a Lambda function that impacts X other projects immediately, with no chance to test each of them individually.


Lambda has pretty good support for versioning and aliasing, so you can control that sort of rollout.
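Concretely, cutting a new immutable version and moving an alias onto it is two API calls. A sketch with boto3 (the "prod" alias name is my assumption), written so the client can be stubbed out in tests:

```python
def promote(lam, function_name: str, alias: str = "prod") -> str:
    """Publish the function's current code as a new immutable version and
    repoint the alias at it. `lam` is a boto3 Lambda client (or a stub)."""
    version = lam.publish_version(FunctionName=function_name)["Version"]
    lam.update_alias(FunctionName=function_name,
                     Name=alias,
                     FunctionVersion=version)
    return version
```

Triggers and callers invoke the alias ARN rather than $LATEST, so each consuming project stays pinned to its old version until you repoint its alias.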


Sure, you can do that, and it works OK in some scenarios. But there are some problems.

In a scenario like:

> Now I have one lambda that gets pointed at a new record stream from an S3 bucket, and I'm done.

Ok, so you got AWS set up to fire your Lambda when an object is created in an S3 bucket. You decide you need another "stream", we'll call it, so you start dumping stuff into another prefix. How does one go about testing that the right function is invoked?

A smart person will probably say that they have a dev environment and they manage infrastructure with Terraform. Great! That's probably the best solution there is.

But that still leaves a massive, glaring problem: it's quite difficult to implement any sort of automated testing of this Lambda function setup. In all likelihood, you're just pushing a file up to an S3 bucket in dev and watching it run through.

Let's say you made a pass at automated testing, and let's continue with the example of creating resized avatar images. The end product of this Lambda resizing process is probably a different file somewhere in S3. So you fire off the automated test and it fails. How did it fail? Well, if you're lucky, the Lambda function actually had an execution that errored out. Then it's up to you, or your automation, to look up logs in CloudWatch to troubleshoot the failure. What if it didn't error out, and instead just put the file in the wrong place?
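Even the happy path of that automated test ends up as a poll-and-timeout loop against S3. A sketch with the client injected so it can be exercised against a stub (the bucket/key names are placeholders):

```python
import time

def wait_for_object(s3, bucket: str, key: str,
                    timeout: float = 30.0, interval: float = 1.0):
    """Poll until the Lambda's output object appears; return its metadata,
    or None on timeout. `s3` is a boto3 S3 client (or a stub in tests)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            return s3.head_object(Bucket=bucket, Key=key)
        except Exception:  # boto3 raises ClientError while the object is missing
            time.sleep(interval)
    return None
```

Note what a timeout tells you: nothing. The function may have errored, or it may have succeeded and written to the wrong place; distinguishing those still means digging through CloudWatch.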

This kind of stuff is where Lambda falls over. Running Docker images on EC2 in some fashion puts way more sanity around testing as a whole. You have real Docker artifacts that you ran tests in, not just some zipfile abomination that does nothing to create a good local development environment.


Sure. Lambda isn't great. There are trade-offs.

I'm not gonna try to say that it should replace containerized applications (I would choose those 9 times out of 10, given the choice). But there are cases where there isn't a choice, or where Lambda is still a good fit (like maybe the grandparent's image-resize thing; it probably depends on a lot of things).

I tend to think of a Lambda as a custom piece of cloud infrastructure. So, in addition to unit tests, I just test them like I would any other Terraform module. I use Terratest to deploy a stack containing the resource under test and a surrounding harness. In this case, maybe the Lambda, a source bucket, a destination bucket, a DLQ, logs, etc. Then execute my test cases, poll for results, do assertions, etc. When it's done, Terratest destroys the stack.


I don't know, it sounds like you've just implemented all of those things n+1 times.


And practically every web framework out there provides exactly this type of reusable functionality, not tied to a specific vendor.


I think a lot is hidden inside "which calls SES" versus "point it at your MTA, do a ton of configuration, then it may work."

Email is unfortunately a moving target. What worked before, gradually stops working as the big providers put up increasing obstacles to your own MTA doing a successful delivery.

I've heard Amazon SES also has delivery problems, so take this with a pinch of salt. But I would hope they generally try to maintain it.


I agree, but then I think we're comparing using SES (or Mailgun/Sendgrid/Mandrill/whatever) versus maintaining your own MTA, which is orthogonal to the serverless vs non-serverless debate.


It would be nice if someone made a machine that would run all three of those discrete, generalized computing tasks on a single general purpose "thing". That way, you could infinitely scale your work up or down within available memory based on what needed to be done. It would even intelligently free something from memory if it wasn't being used.

Servers.


I was hoping you'd end with "LAMP" for a second there. Many of the non-critical-path use cases that come up here sound like something you'd have deployed to a shared PHP host at the end of the 90s. Does a bunch of PHP files on one of those count as a serverless API gateway yet? For those use cases it doesn't seem all that different, just with added buzzwords, nicer languages, and better developer ergonomics. At the cost of replacing skills in maintaining and debugging what's going on in the stack with maintaining and debugging whatever abstraction cloud provider X put on top of it.


You can with lambda too. You map each of these to the same lambda function and branch on the path, just like a server.
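A sketch of what that dispatch might look like for the three tasks upthread, branching on the shape of the incoming event (the handler names are hypothetical):

```python
# One Lambda fronting several event sources, routed by event shape.

def route(event) -> str:
    """Classify an incoming Lambda event by its source."""
    records = event.get("Records", [])
    if records:
        src = records[0].get("eventSource", "")
        if src == "aws:s3":
            return "resize"
        if src == "aws:dynamodb":
            return "invalidate"
    if "httpMethod" in event or "routeKey" in event:  # API Gateway payloads
        return "http"
    return "unknown"

def handler(event, context):
    kind = route(event)
    if kind == "resize":
        pass  # resize_avatars(event)   -- hypothetical helper
    elif kind == "invalidate":
        pass  # clear_user_cache(event) -- hypothetical helper
    elif kind == "http":
        pass  # serve_request(event)    -- hypothetical helper
```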


I favorited your comment, but I'll be curious to see how rosy your outlook is in say 10 years time. My prediction: all the use cases you're using it for are not the ones that it's designed for (i.e. which pay AWS's bills). And historically, unintended/illegible customers have a way of being caught out as the vendor shifts between strategies.

Certainly, I hope you can be the remora to this shark for a long time to come. Just be aware of the benefits and drawbacks of the position you're taking.


I don’t think cloud functions at any of the providers are going anywhere. In fact, seeing how Google is investing in Cloud Run (Docker images as cloud functions) and other providers are catching on, this will be a growing trend.

The ability to take whatever crazy code with spaghetti dependencies, freeze it into an image, and have a cloud provider auto-scale it from 0 to ludicrous in seconds is phenomenal.

I love cloud functions. They make the perfect webhooks.


You misunderstand; I'm not saying cloud functions will disappear. My claim is that the _details_ of how cloud functions work will gradually mutate in strange and subtle ways you can't anticipate today, in a manner akin to the way that Google Chrome's behavior has gradually mutated. Just because successful services will tend to follow their most lucrative customers' use cases. The long tail running tiny lambdas will lose influence over time.

This may take many forms. Pricing models may change. Use cases that gradually see diminishing use may get discontinued (Google Chrome). You might get on some sort of treadmill of having to update details every so often (Facebook API). I can't predict what exactly will happen, but I believe that if your use case doesn't fit "we run a bazillion Lambdas and send tens of thousands of dollars (at least) into Amazon's bank account every month," any service you receive is accidental and contingent.


Sorry, but AWS Lambda (and the related event-based constructs on AWS) is built exactly for the use cases GP is using it for.

I use it for similar use cases to the ones GP pointed out. It is a breeze to set up, but the best part is there's no DevOps and no SRE (pretty much set it and forget it), which is pretty great for something that's highly available yet not expensive at all, even for the smallest of businesses.


Sounds like before you could run your code on a bunch of different providers, and now you can only run it on one that has S3, SES, and Dynamo.


This sounds like libraries but with an extra spot where you can be charged money if something breaks.



