Depends on your use case. I've written quite a few backends that didn't use a database, and there are a number of cases where the pure HTTP benchmark is the relevant one: pass-through proxies with injected behaviour, in-memory key-value storage APIs, memory-mapped files, etc. IMO not every shop offloads its state to the DB. Once you add the database you're really testing the DB driver - and that adds a lot more variance to the test. Maybe the difference you measure is just the driver for that particular DB?
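To make the no-DB case concrete, here's a minimal sketch of one of those shapes, an in-memory key-value API, in Go's standard library (the `/kv/` route and handler names are illustrative, not from any particular project). A benchmark against this exercises only the HTTP stack, a lock, and a map lookup, which is exactly what the "Hello World" numbers are a proxy for:

```go
package main

import (
	"io"
	"net/http"
	"strings"
	"sync"
)

// store is the whole application state; an RWMutex keeps
// concurrent GETs cheap while serializing writes.
var (
	mu    sync.RWMutex
	store = map[string]string{}
)

func handle(w http.ResponseWriter, r *http.Request) {
	key := strings.TrimPrefix(r.URL.Path, "/kv/")
	switch r.Method {
	case http.MethodGet:
		mu.RLock()
		val, ok := store[key]
		mu.RUnlock()
		if !ok {
			http.NotFound(w, r)
			return
		}
		io.WriteString(w, val)
	case http.MethodPut:
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		mu.Lock()
		store[key] = string(body)
		mu.Unlock()
		w.WriteHeader(http.StatusNoContent)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/kv/", handle)
	http.ListenAndServe(":8080", nil)
}
```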
A fast "Hello World" benchmark implies that the HTTP/transport layers of the framework are very fast. That's your base and the lower bound to your best performance potential. As a real world example if .NET ASP NET Core has the best request/response benchmark and its better than say nginx (a popular reverse proxy) it might be better to have all your gateways using a reverse proxy implemented with that as its base instead. Over your whole network depending on your scale that could be a big cost and latency saving measure. I wouldn't be surprised if Microsoft or it's community have started writing one.
It's more interesting to see the results of the high-load DB tests, for example:
https://www.techempower.com/benchmarks/#section=data-r20&hw=...