
Sit down in front of an Amiga from the 1980s or early 1990s sometime and you will get a whole new perspective on what constitutes "good enough latency to be interactive". In particular, the UI was always perfectly responsive, even with multiple processes and disk accesses going on in the background, on a 680x0 machine running at tens of megahertz. Once you start spawning enough background tasks and pegging the poor kernel's I/O scheduler, your multi-gigahertz Linux box can't make the same claim.

select(2) is still nowhere near as flexible as Windows I/O completion ports, for instance. With a select loop you cannot kick off an I/O operation, do some other processing, and then wait for the I/O to complete before using its results. Instead you do a little processing, pump and poll, do a little more processing, pump and poll, and so on. You waste CPU time compared to the interrupt-driven way, and if you have many processes going at once this adds up in battery power used and latency suffered. Remember that select(2) and poll(2) are system calls, with all the overhead that goes with that.

The X server, in its effort to implement a GUI, which is by nature event-driven, historically contained all sorts of hacks to simulate the asynchrony that should come from the OS. And it has always been slow and laggy compared to its counterparts on Windows and Mac OS, let alone the Amiga, where interrupt-driven programming is the norm and the GUI has always been flawlessly responsive even when other tasks are going on.

There's also the fact that nonblocking I/O IS NOT ASYNCHRONOUS I/O. If the kernel needs time to perform an I/O operation, for example a read from or write to disk, it WILL block your process while doing so. All "nonblocking" really means is "read until the read buffer is drained, write until the write buffer is full, then return". Your process is still stopped during the read/write! With better primitives such as Windows I/O completion ports, which map more closely to the asynchronous I/O machinery of the underlying hardware (DMA, interrupts, etc.), that becomes a non-issue: the I/O happens transparently in the background from the application's (and maybe even the kernel's) perspective, and nothing is blocked.

In short, nearly anything at all beats the Unix model in throughput, in latency, and in naturalness of programming style. Unix is actively regressive here.



