
Aren't you still capping the throughput at the query rate of your connection pool, though? By limiting that, you limit the application as a whole, i.e. your benchmark is bound by the speed of your database and has (almost) nothing to do with the performance of a specific Python implementation.
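To make that cap concrete: with a pool of N connections and an average query time of t seconds, the pool alone bounds throughput at N/t requests per second, by Little's law. A minimal sketch (hypothetical numbers, with a sleep standing in for the DB round trip):

    import asyncio
    import time

    POOL_SIZE = 10      # hypothetical connection pool size
    QUERY_TIME = 0.01   # hypothetical mean query latency, seconds

    async def handle_request(pool: asyncio.Semaphore) -> None:
        # Each request holds one "connection" for the duration of its query.
        async with pool:
            await asyncio.sleep(QUERY_TIME)  # stand-in for the DB round trip

    async def main() -> None:
        pool = asyncio.Semaphore(POOL_SIZE)
        n = 2000
        start = time.perf_counter()
        await asyncio.gather(*(handle_request(pool) for _ in range(n)))
        elapsed = time.perf_counter() - start
        # The pool caps throughput at POOL_SIZE / QUERY_TIME
        # (10 / 0.01 = 1000 req/s here), however fast the app code is.
        print(f"observed {n / elapsed:.0f} req/s, "
              f"cap {POOL_SIZE / QUERY_TIME:.0f} req/s")

    asyncio.run(main())

Running it, the observed rate sits just under 1000 req/s, the pool's ceiling.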


Only if there are spare resources left to saturate the connection pools, which didn't seem to be the case.

If the system as a whole is well saturated, and the Python processes dominate the system load with a DB load proportional to the requests served, then I don't think we would hit any external bottlenecks.
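One way to sanity-check where the load actually sits during a run (a rough sketch, assuming psutil is installed and using hypothetical process names): if the Python workers are pegged while the DB process idles, the benchmark is measuring the Python implementation, not the database.

    import psutil  # assumed installed: pip install psutil

    def cpu_by_process(names, interval=1.0):
        procs = [p for p in psutil.process_iter(["name"])
                 if p.info["name"] in names]
        for p in procs:
            p.cpu_percent(None)        # prime the per-process counters
        psutil.cpu_percent(interval)   # block for one sampling interval
        return {p.pid: (p.info["name"], p.cpu_percent(None)) for p in procs}

    # Hypothetical names; substitute whatever the benchmark actually runs.
    print(cpu_by_process({"python", "gunicorn", "postgres"}))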

The benchmarks performed are not that great (virtualized, all components on the same machine, etc.), but I don't think the errors are enough to throw off the result. Note, of course, that such results are not universal, and some workloads might perform better under async.


If the benchmark is bound by the database speed, wouldn't the expected result be that all implementations return roughly the same number of requests per second?



