
That's not necessarily true. Adding network latency on top of disk latency is clearly a net loss. However, network latency and disk seek latency stack differently, and most servers don't really hit their disk all that hard. Which opens arbitrage opportunities between those who need R/W and those who need disk space.
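
To put rough numbers on the stacking point (every figure below is an assumed, back-of-the-envelope value, not a measurement):

    # Illustrative latency stacking; every constant here is an assumption.
    NET_RTT_US  = 100    # assumed intra-datacenter round trip
    HDD_SEEK_US = 8000   # assumed HDD seek + rotational latency
    SSD_READ_US = 100    # assumed SSD random read

    remote_hdd = NET_RTT_US + HDD_SEEK_US   # 8100 us, ~1% over local
    remote_ssd = NET_RTT_US + SSD_READ_US   # 200 us, 2x local

    print(f"remote HDD read: {remote_hdd} us ({remote_hdd / HDD_SEEK_US:.2f}x local)")
    print(f"remote SSD read: {remote_ssd} us ({remote_ssd / SSD_READ_US:.2f}x local)")

The network hop is nearly free next to a seek, which is what makes trading idle spindle capacity plausible at all.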



Network and disk I/O differ in other characteristics too, the most obvious being block vs. packet size. Lose a single packet and the block device has to ... block.
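
A quick sketch of that head-of-line blocking, with assumed figures (worst case of an RTO-driven retransmit; fast retransmit can do better):

    # Why one lost packet stalls a whole block read; all constants assumed.
    BLOCK_BYTES        = 64 * 1024
    TCP_PAYLOAD_BYTES  = 1448       # typical per-packet payload at 1500 MTU
    WIRE_US_PER_PACKET = 1.2        # assumed 10 GbE serialization + switching
    TCP_RTO_US         = 200_000    # assumed minimum retransmission timeout

    packets  = -(-BLOCK_BYTES // TCP_PAYLOAD_BYTES)   # ceil: 46 packets
    clean    = packets * WIRE_US_PER_PACKET
    one_loss = clean + TCP_RTO_US   # in-order delivery: the block waits

    print(f"clean transfer:   {clean:.0f} us")
    print(f"with one timeout: {one_loss:.0f} us ({one_loss / clean:.0f}x slower)")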


Inside a data center, packet loss is practically zero.

More importantly, you probably don't want the full TCP/IP stack for disk access inside a data center. At the OS level, disk I/O has become fairly abstracted; read up on Native Command Queuing (http://en.wikipedia.org/wiki/Native_Command_Queuing) or even TCQ to get some idea of what's already going on. That opens a lot of doors for optimization and makes it hard to generalize when it comes to network failures.
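
For a feel of what NCQ-style reordering buys you, here's a toy elevator sweep (a sketch of the idea, not the actual SATA protocol):

    # Toy elevator scheduling: service queued reads in LBA order from the
    # current head position instead of arrival order, cutting seek distance.
    def elevator_order(queued_lbas, head_pos):
        up   = sorted(l for l in queued_lbas if l >= head_pos)
        down = sorted((l for l in queued_lbas if l < head_pos), reverse=True)
        return up + down   # sweep outward, then come back

    queue = [9000, 120, 5500, 300, 7700]          # arrival order
    print(elevator_order(queue, head_pos=4000))   # [5500, 7700, 9000, 300, 120]

The drive does this (and more) below the OS, which is one reason "network storage = just a slower disk" reasoning breaks down.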


I'd agree that packet loss is generally zero. But dirty fiber, bad ports, bad devices, congestion, etc. are pretty dang frequent in a large deployment. And those "edge cases" are the ones that are going to kill your poor network block IOPS.
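
Rough arithmetic on how a "rare" edge case shows up in the mean (all rates and timeouts assumed):

    # A small loss rate inflates mean block-read latency badly; constants assumed.
    p_loss           = 0.001     # assumed 0.1% loss from a dirty fiber / bad port
    clean_read_us    = 200       # assumed clean remote block read
    tcp_rto_us       = 200_000   # assumed retransmission timeout
    packets_per_read = 46        # 64 KiB block at ~1448 B per packet

    p_stall = 1 - (1 - p_loss) ** packets_per_read   # ~4.5% of reads hit a loss
    mean_us = clean_read_us + p_stall * tcp_rto_us

    print(f"{p_stall:.1%} of reads stall; mean latency {mean_us:.0f} us "
          f"vs {clean_read_us} us clean")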

I do agree on the "why TCP?" point. Funnily enough, everyone is running the other way and wrapping IP inside yet another abstraction layer: NVGRE, VXLAN, etc. are slipping one more complication under your "simple" block device.
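
Stacking the headers makes that concrete (standard header sizes; a hypothetical VXLAN-encapsulated TCP block-storage packet):

    # Bytes of encapsulation in front of each chunk of block data under VXLAN.
    OUTER_ETH, OUTER_IP, OUTER_UDP, VXLAN = 14, 20, 8, 8
    INNER_ETH, INNER_IP, INNER_TCP        = 14, 20, 20

    overhead = (OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN
                + INNER_ETH + INNER_IP + INNER_TCP)
    print(f"{overhead} bytes of headers per packet")   # 104 bytes
    # ...plus another UDP/ECMP layer underneath that can reorder or drop
    # packets without the block device ever knowing why.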



