A take on Tornado and "modern" web servers (audidude.com)
43 points by pufuwozu on Sept 14, 2009 | 15 comments



He recognizes (correctly) that Apache's MPM idiom is crap, but he still seems to think that its model of loading multiple language runtimes into one monolithic process is a good idea.

Why does it matter if you use 4 or 8 event-handling processes on one machine, each with its own database connection? If that's your scaling bottleneck, you'd be running on many machines anyway, so a small constant multiplier doesn't change the story. Besides, you could use crap like ODBC to have one persistent DB connection per machine.
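For what it's worth, Tornado itself supports roughly this pattern; a rough sketch, assuming the HTTPServer.bind()/start(0) pre-fork API from its docs, where start(0) forks one worker per CPU core:

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("hello")

    application = tornado.web.Application([(r"/", MainHandler)])

    server = tornado.httpserver.HTTPServer(application)
    server.bind(8888)
    server.start(0)  # fork one single-threaded event-loop process per core
    tornado.ioloop.IOLoop.instance().start()

Each forked worker runs its own event loop, so anything per-process (a DB connection, say) only costs you that small constant multiplier.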


Exactly. Stuffing everything into one process is a whole lot of complication for very little benefit.


I'm with you on loading multiple language runtimes into one process: I find it hard to envision a situation where that would be a benefit. The only situations I can think of have nothing to do with scaling (multiple smaller websites bound, for some reason, to legacy versions of an interpreter).

The other point (a small number of processes accepting work) is a good idea though, I think: it gets the number of processes doing actual work closer to the number of cores.

After all, even if it is a 'small multiple', it is a divisor when it comes to how much hardware you need, and less hardware = better.

The DB may be a bottleneck, it may not be, that very much depends on the workload of the server.


But he's arguing against the idea of having a small number of processes -- he seems to want a single monolithic process per machine, with a native threadpool and multiple interpreters to get around GILs.

The idea of rolling multiple interpreters into a single process is especially bad for Python, as it has a real garbage collector and os.fork()-ed processes will share a lot of memory via copy-on-write. It makes some sense for Ruby, since its assy mark-and-sweep will touch (and so un-share) every single page on the next collection after a fork().


I tried to argue that this is an implementation detail with sacrifices no matter which direction you go. One could also argue that it's problematic to deal with runaway runtimes with memory issues. The GC does present an interesting problem I didn't consider, especially GCs that over-utilize OS signals. Perhaps Python was a poor choice for this.

You can still separate it into two fundamental shifts. The first is making a fully asynchronous/event-driven module/handler API. This should be done regardless of the runtime module.
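Something along the lines of Tornado's own non-blocking handler API; a sketch, assuming its @tornado.web.asynchronous decorator and AsyncHTTPClient:

    import tornado.httpclient
    import tornado.web

    class ProxyHandler(tornado.web.RequestHandler):
        @tornado.web.asynchronous
        def get(self):
            # Start the backend request and return immediately; the
            # IOLoop invokes the callback when the response arrives.
            client = tornado.httpclient.AsyncHTTPClient()
            client.fetch("http://example.com/", callback=self.on_response)

        def on_response(self, response):
            self.write(response.body)
            self.finish()  # connection stays open until finish() is called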

The massively scaled websites I've seen would tip over with 4-8 times the SQL connections. SQL Server is especially bad at this. It also makes it hard to simply throw additional hardware at the problem.

I guess you could yield on resources from a master process if you have the logic to know their status.


@audidude: FYI - your server (www.audidude.com) is either being hit really hard right now or is completely down.

Without access to the original post there, this noob can only assume (hope) the discussion is about the Tornado Web Server (http://www.tornadoweb.org/). I'm interested in alternatives to Apache, so am looking forward to you being back online. Gracias.


... or you could use Erlang and it takes care of a lot of that crap for you.

That said, I think the big pain point is really the data store in any case. Yeah, it's nice to handle more with less, but in the end, adding django/rails/php/whatever machines is easy and a known quantity. It's tougher to scale up the data store.


Copying my comment from my comment page.

Well, like I mentioned, I believe that is an implementation detail. For example, if you go with subprocesses you have two fundamental designs. The first is where the master process still manages the client socket (so data is transferred from the worker back to the master). The second is where you pass the client socket to the worker (over a UNIX socket with sendmsg()) and allow the worker to flush the buffers and close the socket. The problem with the first is that you increase the number of wake-ups your event loop needs to do by 2x (since it needs to handle data in and out for the worker), which could increase your handling latency. This is a no-go in some applications. The problem with the second model is that you lose the ability to have connections live longer than their request (otherwise they are restricted to that worker's affinity, which will not be evenly balanced).
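(For that second design, the fd hand-off itself is just SCM_RIGHTS ancillary data over a UNIX socket; a minimal sketch, assuming Python 3.3+'s socket.sendmsg()/recvmsg(); on older Pythons you'd need a C helper or a third-party module:)

    import array
    import socket

    def send_fd(unix_sock, fd):
        # One byte of payload plus the descriptor as SCM_RIGHTS ancillary
        # data; the kernel duplicates the fd into the receiving process.
        unix_sock.sendmsg([b"x"],
                          [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                            array.array("i", [fd]))])

    def recv_fd(unix_sock):
        fds = array.array("i")
        msg, ancdata, flags, addr = unix_sock.recvmsg(
            1, socket.CMSG_LEN(fds.itemsize))
        for level, ctype, data in ancdata:
            if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
                fds.frombytes(data[:fds.itemsize])
                return fds[0]
        raise RuntimeError("no file descriptor received")

The same call works in the other direction if a worker wants to hand a keep-alive connection back to the master.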

ODBC is an option (or SQL Relay), but it adds latency without providing the ability to yield on the resource being ready. For example, even with the models I described above to reduce over-subscription of resources, your container may not be correct in its assumption that the connection has little contention. So you effectively add latency and reduce the correctness of your ability to run efficiently.


The second is where you pass the client socket to the worker (over a UNIX socket with sendmsg()) and allow the worker to flush the buffers and close the socket. ... The problem with the second model is that you lose the ability to have connections live longer than their request (otherwise they are restricted to that worker's affinity, which will not be evenly balanced).

Remove this problem by having the worker process pass the file descriptor back to the master process as an open (doesn't need to be accept()'ed) socket to be reprocessed or closed as necessary.


But why do the otherwise-independent app server processes need to be sharing 'the' socket at all? If you're already going to be running the same app on more than one machine, there's no reason left to stick to port 80.

It seems like you're stuck on the idioms of Apache and 'real web-servers' (as you put it).


If you don't share the client sockets at a level that can distribute work to all of the processes, you will get an unbalanced number of keep-alive connections amongst the workers.


But we have this nice little abstraction called HTTP! The app servers don't have to know anything about load balancing.

You're persistently parochial in how you're approaching these things.


I'm surprised there has been no discussion around NT's I/O Completion Ports on this topic. IOCPs have this magic where the scheduler makes sure the number of threads running from a worker pool is optimal for the number of cores on the system. They lend themselves well to writing high-perf networking apps where you're waiting on a bunch of objects using MsgWaitForMultipleObjects or some equivalent.
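(In Python terms, the closest thing is asyncio's proactor event loop, which drives I/O through an IO Completion Port on Windows; a rough sketch, assuming a much later Python (3.8+) where WindowsProactorEventLoopPolicy is available:)

    import asyncio
    import sys

    async def handle(reader, writer):
        data = await reader.read(4096)
        writer.write(data)          # echo back whatever arrived
        await writer.drain()
        writer.close()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        if sys.platform == "win32":
            # The proactor loop submits reads/writes to an IOCP and lets the
            # kernel complete them, rather than polling for readiness.
            asyncio.set_event_loop_policy(
                asyncio.WindowsProactorEventLoopPolicy())
        asyncio.run(main())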


That and IIS7's integrated pipeline.


Sorry to kungfudoi for posting this half an hour after he/she posted it:

http://news.ycombinator.com/item?id=820962



