After using Rocket in production for a year now, I'd really recommend Actix Web. Don't get me wrong, Rocket has some really nice features and a good UI, but a couple of things have proved to be real pain points:
- Middleware ("fairings" in rocket parlance) can't respond to requests. This means your access control has to be replicated on every route as a guard.
- Guards can result in errors (for example, if the request doesn't have a valid Auhorization header), but you can't set the response body without a nasty workaround that causes other issues [1]
- Guards also have tricky performance gotchas. When multiple handlers match the same route, rocket will try them all in priority order. Often these variants have the same or similar sets of guards, but guards don't cache by default. Doing your own caching gets tricky especially in the presence of workaround [1].
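For anyone who hasn't hit this, here's a minimal sketch of the per-request caching pattern, assuming Rocket 0.5's `FromRequest` and `local_cache_async` API. The `AuthedUser` type and `lookup_user` helper are hypothetical stand-ins for whatever validation your app actually does.

```rust
use rocket::http::Status;
use rocket::request::{FromRequest, Outcome, Request};

// Hypothetical guard type; holding one means the request carried a valid
// Authorization header.
struct AuthedUser(String);

#[rocket::async_trait]
impl<'r> FromRequest<'r> for AuthedUser {
    type Error = ();

    async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        // local_cache_async runs the future once per request and reuses the
        // result, so several handler candidates sharing this guard only pay
        // for the lookup once.
        let user: &Option<String> = req
            .local_cache_async(async {
                req.headers().get_one("Authorization").map(lookup_user)
            })
            .await;

        match user {
            Some(name) => Outcome::Success(AuthedUser(name.clone())),
            None => Outcome::Forward(Status::Unauthorized),
        }
    }
}

// Stand-in for the real header parsing / database lookup.
fn lookup_user(token: &str) -> String {
    token.to_owned()
}
```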
All this to say: Rocket works well, but it has some unique problems. Some come from its early design decisions (e.g. the guard error messages, which have an open issue) and some from ongoing decisions (i.e. the project seems committed to the fairing model over standard middleware).
Many of these problems are fixable by community members like you, but Actix avoids several of them and has a larger developer base. I've heard good things about Axum and the docs look great, but I haven't had much experience with it, so I can't offer a strong recommendation.
The first two points will be addressed in the next major release with typed catchers. The latest release notes [1] call this out, and in fact reference the same GitHub issue.
Additionally, it looks like they're even addressing one of the most common criticisms - having to declare requirements/guards on every handler. See "Associated Resources".
But honestly, “middleware”? That was already taken years ago, long before the web took off. In fact, it was first used before the web was even invented!
I'm using Rocket for a small production application for my PhD project. One supervisor recently took over the project and asked me where he could connect a log stream to detect crashes. I said: "I don't know. I never have crashes". At the same time, maximum memory consumption is about 15 MB (3% of 512 MB) and CPU load 0.1% on a Starter instance on Render.
So credit to the Rocket maintainers for making Rocket such a reliable piece of software!
These numbers are fine and all, but what really matters is how many requests per second it can handle, combined with response time over a timeline where you gradually increase the number of clients (fake or real).
Observing where your server starts to hurt, watching how it deals with it, and seeing how (and whether) it recovers is super useful.
Benchmarking tools to pound your servers are plentiful, and these stats (connections/sec and average response time), combined with the hardware spec, are what make a good praise or diss of a web server's performance.
Maybe you'll never intend to draw a huge load for real. I'm just saying its performance can be measured better than by its CPU and RAM usage under low or no load, even if that is a useful aspect in some sense.
Unless this is serverless of course. New ballgame entirely.
Learning Rust can be challenging, especially if you're accustomed to Python or Kotlin. The Rust compiler is strict, which can be demanding for programmers. However, the effort can be rewarding due to Rust's advantages. Expect a slower pace of progress, as tasks that are straightforward in Python or Kotlin may require more time in Rust. The difficulty level is higher, but with motivation, it's certainly achievable.
In my experience, the more programming languages and frameworks you know, the easier it is to learn a new one. So it depends on how much you know already and how much you want to learn.
From a quick glance it seems still vulnerable to trivial slowloris’ing D:
Is anybody actually exposing their rust-based websites to the internet? I want to, but it seems that for some reason every rust web framework keeps TCP connections open _forever_, meaning that even with file descriptors bumped to 64000, my web server runs out of FDs and needs to be killed and restarted every 3 hours or so. The standard advice seems to be “don’t do that, put your rust app behind a reverse-proxy written in C or Go, and let the proxy handle the complexities of TCP”, but it seems so sad to add an extra layer of latency and complexity just to close idle connections ;(
On the one hand, it would be nice to be able to handle client connections directly, for a lot of special use cases or performance-sensitive situations.
On the other hand, I don't know if it makes sense to have to expose all those security or rate-limiting settings for every application. Every app would have its own way to set supported ciphers, rate limits, request duration and size limits, revocation lists, etc.
How do you handle HTTPS if you don't? Do you use certs directly in code? Also, do you use Let's Encrypt or do you actually pay for certs?
I have been developing websites all my adult life and I always put them behind a reverse proxy. That has never been the culprit of any slowdowns in my experience, and nginx is very, very fast and supports everything you might want.
Nowadays I usually reach for Caddy, just because it's so much simpler and fast enough.
Two async functions - one listens on port 80, and forwards requests into the business logic; one listens on port 443 (grabbing a certificate from Let’s Encrypt if it doesn’t have an up-to-date one in the cache), decrypts the SSL, and forwards requests into the business logic.
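Roughly what that layout looks like, as a minimal sketch using plain tokio; the TLS handshake, the Let's Encrypt certificate handling, and the shared business logic are elided, and the ports, redirect target, and function names are just placeholders.

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::TcpListener;

// Port-80 task: in the real setup this forwards into the business logic;
// here it just answers every connection with a redirect to HTTPS.
async fn listen_http() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:8080").await?; // 80 in production
    loop {
        let (mut stream, _) = listener.accept().await?;
        tokio::spawn(async move {
            let _ = stream
                .write_all(b"HTTP/1.1 301 Moved Permanently\r\nLocation: https://example.com/\r\nConnection: close\r\n\r\n")
                .await;
        });
    }
}

// Port-443 task: this is where the Let's Encrypt certificate cache lookup and
// the TLS handshake (e.g. via rustls) would wrap the stream before handing
// the decrypted request to the same business logic.
async fn listen_https() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:8443").await?; // 443 in production
    loop {
        let (stream, _) = listener.accept().await?;
        tokio::spawn(async move {
            // TLS and request handling elided in this sketch.
            drop(stream);
        });
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    tokio::try_join!(listen_http(), listen_https())?;
    Ok(())
}
```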
Before I gave up and wrote my own software I tried various combinations of nginx, varnish, hitch, haproxy, squid, traefik, and I’m sure more that I’m forgetting. Most of them worked ok in most cases (and I’m still happily using `varnish -> nginx -> app server` for other parts of the site) -- but for one reason or another they each had issues handling tens of thousands of requests per second on a tiny potato of a server D: (If any of them worked then yes I would go ahead and use them - but it wouldn’t stop me feeling bad about needing to have a whole extra reverse-proxy layer just because my web framework doesn’t know how to close idle TCP connections :P)
(Incidentally if somebody knows of a CDN or cloud service that’ll serve ~3Gbps of NSFW content for <$800/mo, I would be more than happy to quit writing my own software to run on hand-managed bare-metal servers :P)
Cloudflare R2 might work well for you for serving the image files themselves; there's a per-request fee ($0.36/million GETs) but no bandwidth fees. AFAIK there's no restriction on NSFW content on any Cloudflare service, as long as it's legal.
Disclaimer: I work for CF, but not on a team related to R2. I'm just speaking as a CF enthusiast here.
Interesting~ Last time I spoke to somebody from CF we were too big for the regular plans and too small for the “call us on the phone and we’ll discuss a custom contract” plan, but it has been a couple of years so maybe worth looking at the newer services :)
I'd say R2's definitely worth a look, since it works quite well standalone (without other CF services). If the public pricing works well for you, there's no real benefit to an Enterprise contract or anything; the product should Just Work and arbitrarily scale.
Some of our infra at FastComments handles the SSL termination itself; it's really nice owning that in the app layer and removing another component. Yes, we use Let's Encrypt. Those are Java Vert.x apps. Good thing I didn't move them to Rust, I guess? But this seems like too weird of an issue to be true.
Rocket's documentation and overall 'dev experience' are very pleasant. In the past, I felt like I was making a significant tradeoff when using Rocket over more actively developed frameworks (Axum and Actix Web). With v0.5 (finally) coming out, and the Rocket Web Framework Foundation, that feels like much less of a concern.
A great example of Rocket in use is the official Rust website, which is open source.
> While Rocket is still in the 0.x phase, the version number is purely a stylistic choice.
> If you're coming from a different ecosystem, you should feel comfortable considering Rocket's v0.x as someone else's vx.0.
I didn't think there were libraries in the Rust world that objected to semver. It seemed quite ubiquitous, since it's built into Cargo. Very disappointing.
I know they recently released 0.5, which finally works with stable Rust, among other things, but it really is "too little, too late" for me, as I've since moved on, first to Actix Web and then to Axum. There is something to be said for rapid development rather than saving up big releases every other year. This is similar to the discussion of Elm a few days ago: I left it because we needed critical bugs to be fixed and features to be added, so we migrated to a framework that does have such a rapid development cycle.
Rocket is a delight. Been using it for a year now and the docs and dev experience and stability are all exceptional.
Request Guard Transparency[1] is something I’ve only seen in Rocket:
> When a request guard type can only be created through its FromRequest implementation, and the type is not Copy, the existence of a request guard value provides a type-level proof that the current request has been validated against an arbitrary policy. This provides powerful means of protecting your application against access-control violations by requiring data accessing methods to witness a proof of authorization via a request guard. We call the notion of using a request guard as a witness guard transparency.
Basically, your endpoints can require access to a protected service via a parameter, and you're guaranteed that your code will only execute for valid and authorized requests. For example, imagine a UserService and a TeamAdminService, each with their own methods appropriate for their user type. Request guards are used to validate that the request headers and database entries are correct before constructing these services. And since you can only construct them from a request, simply having a service listed as a parameter in your endpoint guarantees that the proper access control has been enforced before your code runs (see the sketch below).
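A minimal sketch of the pattern, assuming Rocket 0.5; the TeamAdminService fields, the validate_admin helper, and the route itself are made up for illustration.

```rust
use rocket::http::Status;
use rocket::request::{FromRequest, Outcome, Request};

// The only way to obtain a TeamAdminService is via FromRequest below, so
// holding one is type-level proof that the admin check already ran.
pub struct TeamAdminService {
    admin_id: i64,
}

impl TeamAdminService {
    // Sensitive operations live on the service, so they can't be reached
    // without first passing the guard.
    pub fn delete_team(&self, team_id: i64) {
        let _ = (self.admin_id, team_id); // real deletion elided
    }
}

#[rocket::async_trait]
impl<'r> FromRequest<'r> for TeamAdminService {
    type Error = ();

    async fn from_request(req: &'r Request<'_>) -> Outcome<Self, Self::Error> {
        // Hypothetical check of the Authorization header and the admin table.
        match req.headers().get_one("Authorization").and_then(validate_admin) {
            Some(admin_id) => Outcome::Success(TeamAdminService { admin_id }),
            None => Outcome::Error((Status::Unauthorized, ())),
        }
    }
}

// Stand-in for the real header/database validation.
fn validate_admin(_token: &str) -> Option<i64> {
    None
}

// Listing the service as a parameter means this body only runs for requests
// that passed the admin guard.
#[rocket::delete("/teams/<team_id>")]
pub fn delete_team(admin: TeamAdminService, team_id: i64) -> Status {
    admin.delete_team(team_id);
    Status::NoContent
}
```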
We’ve structured our app so that every sensitive operation goes through these services, thereby sidestepping entire classes of security concerns and missteps. I sleep better as a result and our security reviews are much more enjoyable.
I’d love to see this discussed more and adopted by more frameworks.
The main selling point for me was that Rocket made security threats (XSS, SQL injection, etc.) impossible by its guarantees, which was a bit mind-blowing imo. It's a bit like database guarantees, which are insanely useful as an application grows. Is this still the case with Rocket, and if so, how come it's not all over their web page as the unique selling point? I mean, security is getting more and more important, and if we can solve many preventable security threats permanently by a technical choice, this seems like the obvious way to go for the future.
I mean, Rust is fast and secure and makes my software "unhackable" for the price of a bit slower development? Seems like an obvious choice then.
When I browse their website, I don't understand if this is still the case. Coming from a PHP/Node background, this would be the main driver for me, since language performance is very rarely a concern of mine, and coding Rust is... slow, and requires a lot of learning because of the weird syntax that makes me think about memory when I don't really care.
What I also really, really like about Rust, Go and other similar languages is the deployment option that becomes available: just deploy a single binary file. That is super simple and awesome. But the one thing that made me look away from Rust is the foundation's weird rules and the new draft policy saying you cannot use the word Rust in URLs, courses, etc., and if you like guns?
I don't know how it's been developing so far but the Rust foundation seems to do some crazy stuff that makes it hard to trust.
> I mean, Rust is fast and secure and makes my software "unhackable" for the price of a bit slower development? Seems like an obvious choice then.
Something being written in Rust doesn't mean it's unhackable. It lowers the likelihood of memory safety errors to the point of them being negligible, and a lot of libraries will have APIs that encourage correct usage by default. But your application can still have a bug, and that bug can still be exploitable. For example, if you're making a file transfer application where the sender has control over the path to be written, and you don't make an explicit check for path traversal on the receiver, a malicious sender can overwrite any file the receiver process has write access to in the entire filesystem. Rust didn't protect you there from an exploitable bug. You could make an API where that won't happen unless you opt into arbitrary path traversal, but that is not what most libraries, including std, do.
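To make the path-traversal example concrete, a small sketch using only std (the upload directory and function names are just for illustration): join happily accepts whatever path the sender supplied, so the check is entirely on the application.

```rust
use std::path::{Component, Path, PathBuf};

// Map a sender-supplied relative path to a destination inside `upload_dir`,
// refusing anything that would escape it.
fn dest_path(upload_dir: &Path, sender_supplied: &str) -> Option<PathBuf> {
    let candidate = Path::new(sender_supplied);
    // Reject absolute paths and any `..` components before joining;
    // without this check, "../../etc/passwd" escapes `upload_dir`.
    if candidate.is_absolute()
        || candidate
            .components()
            .any(|c| matches!(c, Component::ParentDir))
    {
        return None;
    }
    Some(upload_dir.join(candidate))
}

fn main() {
    let dir = Path::new("/srv/uploads");
    assert!(dest_path(dir, "photos/cat.png").is_some());
    assert!(dest_path(dir, "../../etc/passwd").is_none());
}
```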
I don't say this to mean "Rust bad", but rather "don't be misled by imprecise language about what Rust gives you". You can write bugs in any language. The extent of the blast radius is different per language.
I know; I think you misread what I meant. Even if it isn't unhackable, due to bugs you can still introduce at runtime, the fact that it can guarantee things like no SQL injection and possibly no XSS if it compiles is pretty amazing.
But maybe that's pretty much the same as what frameworks in other languages provide; the difference is that it's not guaranteed by the compiler.
introducing a new term when an established term exists seems to add another overhead. "but bro, you'll get used to it in no time." i hear you, buddy. it just leaves a bit of a bad taste.
anyway, i can't believe i'm saying this, but i think dhh's point about the tradeoff between squeezing performance out of an ecosystem and "just throw more hardware at it" is a tradeoff where i can clearly pick a side.
Every little speed bump that's thrown in your way has a real cost.
When those speed bumps are on the main path that you travel every single day, all those little sources of friction add up to a lot of drag. As an industry we've collectively established this nomenclature over the last 2-20+ years, and throwing it away so you can keep the cute rocket analogy reflects some really bad decisions being made.
Hey, at least everything isn't randomly named after Lord of the Rings characters though.
I mean, you’re right if it’s something that they plan on maintaining for 10+ years and want to use a language specifically designed for the Web with a stable and mature ecosystem and plenty of developer support.
If it’s a small little hobby project then sure, use whatever little Web framework you want for Rust, Go, Zig, or whatever else the flavor of the day is.