
> If we use Rust as an example, it's super easy to include high quality libraries (just add a line in Cargo.toml and you're good to go).

Assuming a web server: which library? Which version? How many commits does it have? When was the last one? How long will it be supported? Is it async? How many deps will it pull in? Will it be superseded by a fork? Does that fork have a different API? Will I have to bump the Rust version to match it in the future? Ok, I've gone to another team and their app uses a completely different library; what's the answer to all those questions again? Etc.

With Go, the answer is and always has been:

  include "net/http"
I love Rust, but its DIY libraries can be quite a barrier to overcome when starting a project.
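
To be concrete, the whole stdlib-only starting point is roughly this (a minimal sketch; the handler, port, and response text are arbitrary):

  package main

  import (
    "fmt"
    "log"
    "net/http"
  )

  // hello is a trivial handler; everything used here ships with Go itself.
  func hello(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "hello")
  }

  func main() {
    http.HandleFunc("/", hello)
    // Note: this default server has no timeouts configured (worth tuning
    // before running it in production).
    log.Fatal(http.ListenAndServe(":8080", nil))
  }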


I'm not quite ready to announce it yet, but I'm working on a site that will provide a curated guide to the Rust ecosystem. There are packages that are well supported and de facto standards, you just need to know which ones they are (in this case the answer is "axum").

In the future the answer may be: go to <SITE-URL>, look up the "HTTP server" category, and either read through a few one-line descriptions or just use the recommended package.


That is great, and as someone just starting to dabble with Rust I'd love a resource like that.

But: no matter how well-curated your site is, no matter how high-quality those libraries are, and no matter how well maintained, "batteries-included, in the stdlib, backed by a compatibility promise that the dev team is willing to uphold" carries some serious weight.


It does, but it's not without its downsides either. stdlib modules might end up being abandoned or deprecated because the design can't be changed and it doesn't work well anymore or there are unfixable security issues. Python's stdlib has plenty of such modules for example.

Considering that the Rust dev team is mostly volunteers, and that many of the most important non-stdlib libraries in the Rust ecosystem are maintained by those same people, I feel like there's less of a distinction than is made out to be.


How do I get notified when this is available? Sounds great :)


If you email hn@my-hn-username.com then I will be happy to notify you. Otherwise I'll probably submit it here and on /r/rust. Whether it makes the front page, who knows.


As a sign that things might not be that easy, I'd say that the answer is rather "actix-web". Axum has tokio's momentum, but a much shorter track record than actix.


So "web servers" is a particularly competitive category and there are number of options that are all solid choices. For most categories I have 1 or 2 options with 1 recommended. For web servers I have 6: axum, actix-web, warp, rocket, tide, and poem. With a couple of sentences for each explaining why you might or might not want to choose that option.

For me Axum should be the default recommendation because it's the least quirky option. It's also a very thin layer on top of hyper, and built by the tokio team, both of which have a very long track record.

Actix-web is a fine choice, but it's a bit of an odd duck in things like not being based on hyper, which makes it a little harder to integrate with the rest of the ecosystem.


Thanks, I appreciate the detailed response. I do believe, though, that "least quirky", while definitely important, is not the deciding factor here.

Axum is very young (it was announced less than a year ago); it has good momentum, is based on hyper, and is built by the tokio team. On the other hand, it has few actual projects built with it, and I couldn't find a guide for it like actix or even Rocket have (it has some documentation on docs.rs, but it is pretty minimal). Crucial questions like how to handle configuration, database integration, and autoreload are left to the examples at best.

I have no doubt that these things will come if the framework matures. I'm just questioning if it should be the default choice right now, as opposed to in a year.

(Btw, even though API-wise Rocket clicked the best with me, I would absolutely recommend against it right now, until it gets more maintainers and development resumes.)


That's definitely a fair point on the documentation for actix-web being better (and Rocket's, but I agree with you on Rocket's maintenance status being problematic). Once I've got my site into a launchable state, my intention is to open up contributions on the GitHub repo so that we can capture the wisdom of the community from all its diverse perspectives. I definitely don't have all the answers myself.


A curated list doesn't have to be "The Best(tm)" in every choice. It just has to recommend one of the top 3 for every choice and never recommend something that sucks.


The issue with Go is that "net/http" is rife enough with footguns that, for example, Cloudflare has an entire blog post on how to handle timeouts. The defaults will cause production issues.

https://blog.cloudflare.com/the-complete-guide-to-golang-net...
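
For anyone skimming: the short version on the server side is that you construct your own http.Server instead of relying on the zero-timeout defaults of http.ListenAndServe. A minimal sketch (the port and durations are placeholders to tune, not recommendations):

  package main

  import (
    "log"
    "net/http"
    "time"
  )

  func main() {
    mux := http.NewServeMux() // register your handlers on mux
    srv := &http.Server{
      Addr:              ":8080",
      Handler:           mux,
      ReadHeaderTimeout: 5 * time.Second,   // cap time spent reading request headers
      ReadTimeout:       10 * time.Second,  // cap time spent reading the full request
      WriteTimeout:      30 * time.Second,  // cap time spent writing the response
      IdleTimeout:       120 * time.Second, // close idle keep-alive connections
    }
    log.Fatal(srv.ListenAndServe())
  }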


I’m mostly an infrastructure guy. I mostly fix C or identify bugs, and write integrations in Go or Python.

But I spend probably 50% of my time dealing with production problems due to this in Java business applications.

Nearly every client library in the Spring ecosystem has the worst of all possible defaults, and all internet code examples are “look at how easy X is” without anything around making it real.

Devs chuck code out without HTTP connection pooling or pipelining requests, let alone reasonable timeouts. Hikari has an awful default DB pooling config, and on and on. Inevitably it gets reported as “latency” when a quick look at [APM tool] shows all the time is spent waiting on a connection, or a stack trace shows which library chucked the timeout.

And often devs just bump connection pools higher when they see a bottleneck and there’ll be 1000 idle connections wasting DB resources and not helping the problem (inefficient query, missing index, etc).
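
(As a Go-flavored aside, since that's what I write integrations in: the same pool-sizing knobs exist on database/sql, and they're a tuning exercise rather than something to inherit. A rough sketch; the driver, DSN, and numbers below are made-up placeholders:)

  package main

  import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq" // any SQL driver works; Postgres is assumed here
  )

  func main() {
    db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
    if err != nil {
      log.Fatal(err)
    }
    // Size the pool against the database and the SLOs, not against the last
    // bottleneck you saw; these values are placeholders, not recommendations.
    db.SetMaxOpenConns(25)                  // hard cap on open connections
    db.SetMaxIdleConns(25)                  // idle connections kept for reuse (<= max open)
    db.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically
    db.SetConnMaxIdleTime(5 * time.Minute)  // drop idle connections instead of hoarding them
  }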

I complain because the defaults here are masochistic. But there does need to be a cultural change where using a client or server library assumes a tuning exercise. And yeah, good testing would be great too.

There are no one-size-fits-all values. You have to look at the use case and the SLOs of the app and its up- and downstream dependencies. You don't want optimizing a config in isolation to break a larger system either.

Software development is hard. It requires understanding. No language or library can solve that for this stuff. I just wish all of these libraries' docs called that out up front instead of projecting plug-and-play. It's all there, but buried in the godoc, where you have to know what you're looking for. Others are worse.

This is especially the case with http libraries, since it’s usually building a client or server object/whatever correctly and not just a config tweak you can slap in at 2am.

And back to the original point: when there are 12 implementations, that can mean having to relearn this activity for each one. That said, if I'm using Go, that usually means building something with net/http and handing it to the library to use instead of the default. Other languages without that base layer baked in don't get that advantage.
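
To make that last bit concrete, "building something with net/http and handing it to the library" looks roughly like this (a sketch; the numbers are placeholders to tune per use case):

  package main

  import (
    "net/http"
    "time"
  )

  func main() {
    transport := &http.Transport{
      MaxIdleConns:          100,
      MaxIdleConnsPerHost:   10, // the default of 2 is a common bottleneck
      IdleConnTimeout:       90 * time.Second,
      TLSHandshakeTimeout:   5 * time.Second,
      ResponseHeaderTimeout: 10 * time.Second,
    }
    client := &http.Client{
      Transport: transport,
      Timeout:   15 * time.Second, // whole-request deadline
    }
    _ = client // hand this to whatever SDK or wrapper accepts an *http.Client
  }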

Back to Java/Spring: Tomcat (Catalina) vs Netty/Reactor Core are two totally different worlds, and often dev teams don't know which one they picked, or that they switched when they generated a new project and pulled their code forward from Spring Boot 1.5 to 2.x.


When you code Go by sticking to the standard lib like that, updating becomes as smooth as compiling with the new version and deploying. Really quite sweet.

You -CAN- go the route of using a well-supported, well-maintained library, but I have seen some code rot on larger projects in the Go ecosystem for stuff that tried to include more batteries.


I'm on the opposite side here: the slim standard library is something I enjoy with Rust. I can pick and choose from a multitude of different libraries with different trade-offs, strong suits and so on, without being locked to one single implementation. I don't know what the guarantees for Go's standard library are, but Rust's are very strict: you had better be damn sure a design is near perfect and thought through before it goes into `std`, and there are already mistakes, deprecations and design flaws in Rust's standard library that will be there forever. Similar to how Python has both `urllib` and `urllib2`, and others like it, I'm of the opinion that a big standard library is where good libraries go to die once the stability guarantees come into play; though being able to do so much when scripting with Python is certainly a boon.


On top of all those questions, how safe against supply chain attacks is the library?



