If it takes 100ms of CPU time to process 10 promises and 1 request involves creating 10 promises, then each request costs 100ms of CPU time, so the server can only handle 10 requests per second at theoretical max.
IO time doesn't matter as much (for throughput) because IO is asynchronous, so the server is free to process other requests while waiting on it. But CPU time can only be spent on one thing at a time.
So to get e.g. 10k rps out of a node server at 10 promises per request, handling 100,000 promises must take well under a second of CPU time. Looking at the benchmark results, you can see how many implementations will bottleneck on this.