In a typical web app you probably wouldn't notice any difference, but in the niche where you're handling millions of requests per second, even slight improvements could translate into non-trivial cost savings. On the other hand, if you've already scaled to that degree, those savings wouldn't matter that much. So what's left is educational or research value: it's good to have an idea of where the upper bound is while still maintaining a decent API. There's value there, and it's a great feat regardless.
At millions of requests per second, the thin contract of the HTTP request/response internals will never be the layer with any fruit left to pick for performance gains.
You're making a statement here that is only true if you assume the developers of such a service have already optimized their framework. If instead they're using one of the slower frameworks, they'll pay proportionally; you can't assume that just because a service handles Mrps it must be highly complex beyond HTTP. Often the exact opposite is true: they get to that scale by keeping the service very, very simple. In such cases, HTTP processing becomes *more* of their latency budget.
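The latency-budget point can be made with quick arithmetic. All of the numbers below are illustrative assumptions, not measurements; the point is only that a fixed parsing cost is a bigger share of a small budget than of a large one.

```python
# Back-of-envelope: share of per-request latency spent on HTTP parsing.
# All costs below are assumed values for illustration, in microseconds.

parse_us = 1.0     # assumed cost of HTTP parsing per request
simple_app_us = 2.0    # assumed app logic of a very simple service
complex_app_us = 50.0  # assumed app logic of a complex service (DB calls, etc.)

simple_share = parse_us / (parse_us + simple_app_us)
complex_share = parse_us / (parse_us + complex_app_us)

print(f"parsing share, simple service:  {simple_share:.0%}")   # ~33%
print(f"parsing share, complex service: {complex_share:.0%}")  # ~2%
```

Same parser, same cost per request, but in the simple service it's a third of the budget instead of rounding error.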
Yes, and the top frameworks are already bandwidth-limited on the simplest benchmarks. The big gains are now made in DB access, which is a much more complicated topic than "ORM is bad for performance".
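A rough sanity check of the bandwidth-limited claim (the response size, throughput, and NIC capacity below are assumed round numbers, not results from any specific benchmark):

```python
# Does a minimal-response workload saturate the NIC before the CPU?
# All figures are assumptions chosen for illustration.

resp_bytes = 150         # assumed size of a tiny HTTP response (headers + short body)
rps = 7_000_000          # assumed requests/second for a top framework
nic_gbps = 10            # assumed NIC capacity, gigabits per second

needed_gbps = resp_bytes * 8 * rps / 1e9
print(f"outbound bandwidth needed: {needed_gbps:.1f} Gbit/s "
      f"of a {nic_gbps} Gbit/s NIC")
```

With numbers in that ballpark, the wire is nearly full before the HTTP layer becomes the bottleneck, which is why further framework micro-optimization stops paying off.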