Hacker News

Former CTO at Arist (YC S'20) here.

Jets literally got us off the ground and to the point where we could handle spikes: scaling up 1000x to handle hundreds of thousands of messages per second, then immediately scaling back down 1000x, because of the spiky nature of our workload.

Almost all of the traditional issues you encounter running a Rails app in production vanish when you build on top of Jets: scaling just becomes something that happens automatically, without you worrying too much about it beyond the database level.

One thing that was particularly impressive about Jets is its ApplicationJob system, which provides an easy-to-use API for writing Lambda fan-out routines; we used it as the crux of our message scheduling and processing system: https://docs.rubyonjets.com/docs/jobs/
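To make the fan-out idea concrete, here's a plain-Ruby sketch of the pattern (hypothetical names like `fan_out` and `CHUNK_SIZE` are mine, not the Jets API): a scheduler slices a large recipient list into chunks and hands each chunk to its own worker invocation, which in Jets would be a separate Lambda triggered via something like `perform_later`.

```ruby
# Hypothetical sketch of Lambda fan-out, not the actual Jets internals.
CHUNK_SIZE = 100

# Slice the work into chunks and invoke one worker per chunk.
# In Jets, the block body would be a separate Lambda invocation
# (e.g. a job's perform_later call) rather than an in-process call.
def fan_out(recipients, &invoke_worker)
  recipients.each_slice(CHUNK_SIZE).map do |chunk|
    invoke_worker.call(chunk)
  end
end

# 250 recipients fan out into workers of size 100, 100, and 50.
sizes = fan_out((1..250).to_a) { |chunk| chunk.size }
puts sizes.inspect # => [100, 100, 50]
```

The win is that each chunk runs concurrently in its own Lambda, so throughput scales with the number of chunks instead of a single worker's speed.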

Anyway, I mostly work in Rust now but still am and will always be a huge fan of the project and Tung Nguyen, its creator :)




How much more do you estimate, very roughly, it costs to use the serverless stuff in "normal times" outside of spikes compared to a more traditional system?


Normal times were well within the free tier, and this was even with a pre-warm job turned on for our endpoints. Even the spikes were not very expensive. Back then 80% of our bill was RDS, and I think we were paying under $50/mo for beyond-free-tier usage of Lambda. It was a tiny fraction of what the cost was when we had EC2 clusters.

If you're ok with cold starts on less-used routes, you could probably run any low-to-medium-traffic app totally free on Jets (other than the db).
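If I remember right, Jets ships prewarming as a config option rather than something you build yourself; roughly like the below (option names are from memory, so verify them against the Jets docs before relying on them):

```ruby
# config/application.rb (sketch; check exact keys in the Jets docs)
Jets.application.configure do
  config.prewarm.enable = true       # schedule periodic warm-up pings
  config.prewarm.rate = "30 minutes" # how often to ping each function
  config.prewarm.concurrency = 2     # warm instances to keep per lambda
end
```

Turning this off for rarely-hit routes is what trades cold-start latency for staying inside the free tier.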


I guess this is a serverless thing rather than Jets-specific? Has anyone had this experience in NextJS using serverless erm... servers? Maybe Vercel themselves (or Lambda etc.)


What's really special about Jets in particular is that it lets you have your cake and eat it too: locally you have what feels like a pretty normal monolithic Rails app, but when you deploy, every endpoint magically becomes its own Lambda...


To be fair, that’s exactly what NextJS does too. In dev everything runs locally; when you deploy to Vercel or a similarly capable host, each non-static route becomes a lambda.


How is making each endpoint a lambda helpful?


He says quite clearly in the GP. It solved all their scaling problems and massively reduced their costs. With a note that their traffic was particularly bursty.


Does the “lambda per api endpoint” part help with the application boot part or something? Otherwise I don’t quite see what advantage you would get with that over just one lambda for the entire monolith.

OC also mentions that they run a job to keep the lambdas warm. One disadvantage with the lambda per endpoint is that he has to keep dozens of lambdas warm instead of just one.


Dead code/dependency removal. "Tree shaking". If you only serve one route per lambda, you can be (somewhat) sure about what you need and compile the minimal optimized code path for that.

Less "stuff" in each route, less boot/warmup time for each lambda.


I tend to agree that in a lot of cases just sticking the whole monolith in a single lambda is fine, and that's probably how I would do it if I had to do things manually. But as things grow, it becomes much better to tree-shake unused routes, though dead code detection in Ruby might as well be a pseudo-science with all that Ruby can do lol



Thanks Sam!


isn't that just cloud?



