My favorite language is C#, and has been for years. I'm also a big user of AWS in almost all my projects. And even though I know C# much better than Python or Node, I still choose Node or Python for my Lambda functions when I write them. C# I stick with for EC2 instances, typically as Windows services, where it serves me well.
But C# support for Lambda has always seemed like the poor stepchild at AWS.
Is it because of the deployment overhead or something else?
I've inherited a pretty big C# project at work and we've just started porting it from Framework to Core. Definitely feels like there are some opportunities for serverless in there.
> Is it because of the deployment overhead or something else?
In my experience, two things. The smaller one is JIT overhead. The bigger one is the haphazardness of the other AWS libraries for .NET: there are sometimes edge cases where, if you're running in a Lambda, you have to do something special, set a special flag, etc. And often the solution is buried in an SO post or a closed GitHub issue.
As much as I love Node and Node on Lambda, AWS's unwillingness to stick to any kind of predictable timetable for new LTS versions of Node, or at least communicate one, has me bemoaning the platform every time a new even-numbered release comes out.
Yeah, as others have said, it's a shared-nothing request model, but also single-threaded, with built-in memory and time limits, basically exactly how Lambda runs (you can of course share memory and multithread if you want to, but it's pretty uncommon practice in PHP).
I don't really see it as a "limitation", since it makes grokking the code much easier than in other languages. Probably >95% of libraries out there make this assumption as well, so it's not just "how it is used" but also the culture and existing code that make this assumption.
What’s old is new again. It’s like, hey, let’s go back to the CGI model and reinvent it. Never mind there are still languages using it! Forget them! /s
I've been using .NET Core 3.1 for quite a while to run the "batch-like" jobs that get kicked off when events happen in my system (file upload in S3 -> event -> Lambda to process it). Plus a few other things, basically anything where I needed some kind of asynchronous processing to occur.
The upside for me is that the system is "sorta serverless": the main website is a Docker container with a .NET Core 3.1 web service in it. This is now set up so that a simple "dotnet ecs deploy-service" copies the container up to AWS and triggers the load balancer setup etc. without me having to do anything.
Similarly, the Lambda functions are deployed with "dotnet lambda deploy-function", which just replaces the existing function.
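For reference, those `dotnet lambda` commands read their defaults from an aws-lambda-tools-defaults.json file next to the project; a minimal sketch (every value below is a placeholder, matched to a .NET Core 3.1 setup like this one):

```json
{
  "region": "us-east-1",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "function-runtime": "dotnetcore3.1",
  "function-handler": "MyApp::MyApp.Function::FunctionHandler",
  "function-name": "my-processor",
  "function-memory-size": 256,
  "function-timeout": 30
}
```

With that in place, "dotnet lambda deploy-function" needs no extra arguments.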
Obviously this is all more complicated than doing stuff with Python, but having a single .NET library that accesses all my stuff and can be executed either within the Docker container or as part of a Lambda function makes developing an absolute breeze. There don't seem to be any downsides to using .NET for this compared to anything else.
Throw in S3 for file storage and Aurora as a shared database and you have something that doesn't cost a bomb to run (Aurora is the most expensive bit), is wicked fast on minimal requested hardware, and is bonkers reliable. The only real downside is that Lambda functions are a bit hard to test/debug, but there are localised execution tools you can use to simulate the AWS events that trigger Lambdas.
Overall I've been surprised how good it turned out to be. I originally envisaged my system to be running on a dedicated server but this is much, much better.
We are. A big chunk of our app was already written in .NET from a previous failed attempt, including the C# libraries that talk to our CRM system over SOAP. So it was a no-brainer to just port the C# code to Lambda. Also, Node is terrible at SOAP.
We also have a layer of Node lambdas that can aggregate/transform the more granular C# lambdas, but they aren't always necessary, e.g. an endpoint in API Gateway can go straight to a C# lambda if the response already fits its needs.
20M req/day is roughly $2,000/month in API Gateway fees alone. I can imagine it depends on the performance profile, but at a previous job, I replaced a service (lots of GETs, highly cacheable) with a similar number of daily requests with 2 ec2 instances and an elb, with automatic setup / blue green etc. in a matter of hours.
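(Back-of-envelope for that figure, assuming the REST API rate of roughly $3.50 per million requests; HTTP APIs are cheaper:)

```python
# Rough API Gateway cost estimate (REST API tier: ~$3.50 per million requests)
requests_per_day = 20_000_000
price_per_million_usd = 3.50

monthly_requests = requests_per_day * 30
monthly_cost_usd = monthly_requests / 1_000_000 * price_per_million_usd
print(f"${monthly_cost_usd:,.0f}/month")  # → $2,100/month
```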
Company policy. Devops team uses X so that's what we have to use. When you keep getting truckloads of cash from investors no one seems to really care much about costs.
Ah yeah, I figured it was something like that. Seems to me that when you get into the thousands of req/sec you are still better served by simpler setups.
> I replaced a service (lots of GETs, highly cacheable) with a similar number of daily requests with 2 ec2 instances and an elb, with automatic setup / blue green etc. in a matter of hours.
Most companies do not have access to an engineer capable of doing this in a matter of hours (or in infinite hours, at some firms). $2k/mo is a business-trivial amount of money to solve the problem. Ignoring the other capabilities of API gateway, that's the niche.
> 2 ec2 instances and an elb, with automatic setup / blue green etc. in a matter of hours
> > Most companies do not have access to an engineer capable of doing this in a matter of hours (or in infinite hours, at some firms).
This would take ~30 min (and at no additional cost) using Elastic Beanstalk[1]. I'm sure there are better, non-free (but not expensive) EB alternatives that can target AWS resources, though.
This is what I ended up with. It's easy to script, and we were able to ship a Docker container to it, leaving us a nice migration path to other systems if necessary.
I should really upload my 500-line bash script that does this to a cluster of machines; nothing is tied to AWS. It can provision a cluster with a single command line. All you need is ssh access to the machines and you're done, complete with log aggregation (custom, not yet released), on-demand file storage (using Longhorn), an S3-compatible filesystem (using Garage), automated and encrypted backups, SSL termination, automated OS upgrades, etc.
You only need ssh for provisioning and when things break (like that one time etcd went crazy from a full network buffer).
I run some C# lambdas using the HTTP API, and that post is not accurate. Our CloudWatch numbers are way lower, with less memory used.
Ways to get better C# startup in production (AWS or on-prem):
* Compile in Release mode
* Test Ahead-Of-Time (AOT) mode
* Limit code size. The JIT reading 200 MB+ of code will be slow compared to a 15 MB deployment.
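A sketch of how the last two points might look in a project file (a hypothetical excerpt; PublishReadyToRun is the standard MSBuild property for ahead-of-time compilation on .NET Core 3.0+, and PublishTrimmed strips unused assemblies to shrink the deployment):

```xml
<PropertyGroup>
  <!-- Precompile to native code at publish time so the JIT has less to do -->
  <PublishReadyToRun>true</PublishReadyToRun>
  <!-- Trim unused assemblies to keep the deployment small -->
  <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
```

The first point is then just publishing with `dotnet publish -c Release`.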
I'm curious why that is. I've spent a bit of time benchmarking startup for plain old executables, and .NET does OK there (even without AOT compilation). Not sure what would make it slower than other languages on Lambda. https://twitter.com/reillywood/status/1459332721205936128
Ah, I think I see what's going on. It's trivial to precompile .NET code with ReadyToRun today, but the Thundra post was written before that was possible. And it looks like the post above forgot to enable ReadyToRun (see the comments - not sure why it wouldn't be on by default).
That said, in everyday console apps on modern hardware the JIT is super quick at startup. I guess Lambdas are running in more constrained environments...
Currently have 570 lambdas in .NET and know many people using them; the feedback we've had from AWS is that there are a lot of people using .NET. So who really knows.
I used it. The company was using it more like a showcase to prove that they had experience with all the hottest technology, rather than as something intrinsically advisable.
I think Lambda in general is kind of a pain and likely not worth it for most of the use-cases that it is applied to.
But I do think the Lambda SDK for .NET was very good. Honestly easier than working with .NET on Azure Functions.
We're using it as part of an SQS flow. We add the item to the queue from our backend and then process it with a lambda function. Right now we're using containerized NET6. Works well enough.
We're building on .NET 5 with Docker; unfortunately, with the cold starts it's not a good fit for client-serving APIs. Looking forward very much to dropping Docker and using native Lambda support.
Unless that person you spoke to is on the Lambda team breaching all sorts of NDAs telling you the metrics across all accounts, their experience is definitely anecdotal.
I've been playing with it for running PowerShell lambdas, which is maybe even more unusual than running it for just C# or F#. The only thing I don't like is that the runtime really starts choking with less than 256 MB of memory. It can still complete, but it will be much slower. Slow enough that it's actually cheaper in memory-seconds to just give it the extra memory.
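That tradeoff is easy to sanity-check against Lambda's GB-second billing. The durations below are hypothetical, just to show the shape of it (more memory also means proportionally more CPU, so runs finish faster):

```python
# Lambda bills allocated memory (GB) x duration (s), i.e. GB-seconds.
# The durations here are made-up examples, not measured numbers.
def gb_seconds(memory_mb, duration_s):
    return memory_mb / 1024 * duration_s

starved = gb_seconds(128, 8)  # choking runtime, slow run: 1.0 GB-s
roomy = gb_seconds(256, 3)    # enough headroom, fast run: 0.75 GB-s
assert roomy < starved        # the bigger allocation is the cheaper one here
```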
Depends on the migration path. I would have thought the generic directive of "get everything out of the data center and into AWS" would usually start with replicating existing infrastructure, but after the bills start coming in you need to deal with "why aren't we using this more effectively".
That's where the Lambda stuff becomes interesting. A lot of systems don't need a great deal of re-designing to shift workloads to serverless type processing if they are already doing some kind of batch system.
If you need to get out quickly then yes, but if you have a few years there is ample guidance that says 'lift and shift' is a way to really hose yourself in the cloud.
Here's my experience as a consultant who has worked with several .Net shops:
A significant percentage of .Net teams (in more enterprisey companies) have tech teams who haven't seen anything other than Windows. Deploying on Linux gets shot down in meetings by senior decision-making folks who again have seen only Windows. It's a bit of self-preservation, plus fear of the unknown.
Not necessarily Lambda candidates, but from what I have seen: COM dependencies, or native specialised assemblies (in my case math libraries and internal enterprise components that aren't under development anymore), anything to do with System.Drawing. The crypto library is also very integrated with Windows; .NET Core only partially implemented the API.
>What dependencies do you imagine code running in lambdas would have that preclude Linux as a deployment candidate?
The context was "Shit-ton of .Net running in on-prem corporate environments still", not code already running in a Lambda. There are plenty of Windows dependencies/assumptions that are not GUI code. Porting a .Net service, for example, which would be pretty common. And just lots of mundane stuff that's still work, like logging, file locations, etc.
.NET 6 is fairly new and after the .NET Core reunification so I wouldn't say moving to Linux is exactly the complicated part of upgrading legacy .NET apps.
It can be. There’s a bunch of windows-specific and “this was a mistake to build” APIs that aren’t available in modern .NET. And for good reason! The big legacy .NET Framework apps all inevitably use some of them. I got a good taste of this when building the try-convert tool, and that was only focused on converting project files and package references.
Depends on how far you're moving. Already on Core 3.1? Probably not an awful lift. If you're still trying to get off Framework and are using certain libraries/namespaces (System.Data, WCF, System.Web, System.AddIn), there is a lot more pain involved.
The move from 3.1 core to .NET 6 was reasonably painless for me. A few of the setup classes on a web app changed but that was about it. It was much easier than .NET core 2 -> 3.1
Surely .NET usage on AWS Lambda is nowhere close to Node.js's, and Node.js 16 LTS has been out for almost a year with still no support for it.
Amazonian here! (Speaking for myself, not for the company.) You can use any runtime you like for your Lambda functions, even if it's not supported as a first-class native runtime. The simplest way is to build a container image that has the runtime you want to use. All you need to do is to include the Lambda Runtime Interface Client with your image, and set the proper ENTRYPOINT.
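A minimal sketch of that approach for, say, Node 16 (base image and handler file are placeholders; aws-lambda-ric is the published Runtime Interface Client package for Node):

```dockerfile
# Any base image works; Lambda only needs the Runtime Interface Client
# to drive the handler. Example: Node 16, which has no native runtime yet.
FROM node:16-slim
RUN npm install -g aws-lambda-ric
WORKDIR /function
# Hypothetical handler file exporting a `handler` function
COPY app.js ./
ENTRYPOINT ["/usr/local/bin/aws-lambda-ric"]
CMD ["app.handler"]
```

Build, push to ECR, and point the function at the image as you would any container-image Lambda.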
Doesn't using a container image hurt the Lambda experience a lot? Longer cold start times, needing to maintain the userland dependencies (and not just the language-level dependencies) in the container image?
Just speculation, but there may be different teams (or subsets of teams) responsible for different kinds of bindings, and it may be easier to support the latest .NET LTS than the latest Node LTS.
It'll be interesting to see how this performs vs. the offering in Azure: Azure Functions. AF was very, very slow (especially cold) early on, but it seems much better now. Still not ideal, however, and I wonder if Lambda will be faster.
AWS Lambda doesn't run a container the way one might expect; it treats the files in the Docker image basically the same as if they were a regular AWS Lambda runtime rootfs plus your application code. So if there are differences in cold start time, the reasons could be that a Docker image is fetched from ECR instead of S3 (as regular Lambda packages are) and that a Docker image is likely larger than the usual Lambda package.
You can package a lambda as a container image, but lambda doesn’t itself run the container like it would in dockerd or kube etc, it’s just a packaging mechanism I believe.
The biggest difference is that you have to use a lambda compatible entrypoint that basically communicates with AWS receiving the input and returning the output.
For sure. I have a couple of non-standard OpenJDK containers for Lambda. But it isn't like you just use a random container. It has to be custom setup for AWS Lambda for it to work.