It's useless because a real-world application takes a non-trivial amount of system resources to actually serve each request, and that, not the HTTP module, is the actual bottleneck.
So, for example, let's say your application has to parse a JSON POST body, talk to a database, and then serialize a JSON response. You'll be lucky to get 1k req/sec of throughput. At that point it doesn't matter whether your HTTP module can handle 65k req/sec or 1 million req/sec, because you will never be able to serve that many anyway. If your HTTP module did manage to pick up 65k req/sec from clients, they would all just time out.
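A back-of-envelope sketch makes the point. The latency figures below are illustrative assumptions, not measurements, but they're in the right ballpark for the steps described above (HTTP parsing is microseconds; a database round trip is milliseconds):

```python
# Assumed per-step latencies, in seconds (illustrative, not measured):
http_parse = 5e-6        # parsing the HTTP request: ~5 microseconds
json_parse = 50e-6       # parsing the JSON POST body: ~50 microseconds
db_roundtrip = 1e-3      # one database round trip: ~1 millisecond
json_serialize = 50e-6   # serializing the JSON response: ~50 microseconds

per_request = http_parse + json_parse + db_roundtrip + json_serialize

# A single fully-serialized worker can do at most 1 / per_request requests/sec:
max_rps = 1 / per_request
print(round(max_rps))  # ~905 req/sec

# Even with an infinitely fast HTTP module, the ceiling barely moves,
# because HTTP parsing was never the dominant cost:
max_rps_free_http = 1 / (per_request - http_parse)
print(round(max_rps_free_http))  # ~909 req/sec
```

Making the HTTP layer free buys roughly 0.5% more throughput here; the database round trip dominates everything else combined.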
These benchmarks reach those numbers by doing nothing but serving a tiny static string, which isn't what happens in real life. In summary, these benchmarks are interesting, but it's optimization in an area that isn't actually what's holding back most backend servers from serving more requests per second.