
Using a non-blocking server with Python does make sense if you use non-blocking code for the things that take a long time. Here's an example: a server takes 400ms to handle a request. Of that time, 300ms is spent waiting for an HTTP response to an external service. The other 100ms is spent accessing the local database and other blocking operations. If the server does not block waiting for the HTTP response, then an instance of the server can handle 10 requests a second (1000ms / (400ms - 300ms)). If the server does block, then an instance of the server can handle 2.5 requests per second (1000ms / 400ms). An instance of the non-blocking server has 4x the throughput of the blocking server.
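A rough sketch of what the non-blocking half looks like in Tornado's coroutine style (the URL and handler here are placeholders, not any real service):

    import tornado.ioloop
    import tornado.web
    from tornado.httpclient import AsyncHTTPClient

    class ProxyHandler(tornado.web.RequestHandler):
        async def get(self):
            # The ~300ms external call is awaited, so the IOLoop is free
            # to serve other requests in the meantime.
            response = await AsyncHTTPClient().fetch("http://upstream.example.com/api")
            # The remaining ~100ms of blocking work (database access, etc.)
            # still runs on the main thread and is what caps throughput.
            self.write(response.body)

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/", ProxyHandler)])
        app.listen(8888)
        tornado.ioloop.IOLoop.current().start()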



My argument is that most things that potentially take a lot of time are blocking by default in Python, which makes it a bad fit for building non-blocking servers... I am not arguing that blocking servers have better performance than non-blocking servers...


I am arguing that only a small number of things take a long time in a web server. Can you give some examples of things that potentially take a long time in a web server and are not already covered by Tornado (HTTP client, long polling)?


Database drivers (Redis, MySQL, Tokyo Tyrant, etc.) are blocking, and a bad query or an overloaded database server can easily take over a second to execute, which would basically stall your server every time such a query runs. File handling and system calls are blocking in Python too. E.g. if a system call is expensive, say you are processing an image via ImageMagick, which is very common for web applications, it will block your server.
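A minimal sketch of that situation in a Tornado handler, with an ImageMagick call standing in for the expensive system call (filenames are placeholders); handing the call to a thread pool is one common workaround, assumed here rather than taken from the comment above:

    import subprocess
    import tornado.ioloop
    import tornado.web

    class ThumbnailHandler(tornado.web.RequestHandler):
        async def post(self):
            # Run directly, this stalls every request served by the process
            # until ImageMagick finishes:
            #   subprocess.run(["convert", "in.png", "-resize", "100x100", "out.png"])
            # Handing it to a thread pool keeps the IOLoop responsive:
            await tornado.ioloop.IOLoop.current().run_in_executor(
                None,
                subprocess.run,
                ["convert", "in.png", "-resize", "100x100", "out.png"],
            )
            self.write("done")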


Blocking for short periods of time is fine if you run more than one instance of the Python application per core.
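One way to get there is Tornado's pre-fork pattern, roughly as older versions of its docs showed it (the handler is just a stand-in):

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("hello")

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/", MainHandler)])
        server = tornado.httpserver.HTTPServer(app)
        server.bind(8888)
        # Fork one worker per CPU (pass an explicit count for more),
        # so a request that blocks briefly only stalls that worker.
        server.start(0)
        tornado.ioloop.IOLoop.current().start()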

If database access is taking more than a second, then I recommend fixing that problem first.



