
Huh. Data please.

Part of the reason I wrote lmbench was to make sure that what you are saying is not true. And it is not true on Linux: kernel entry and exit is well under 50 nanoseconds. Passing a token back and forth, round trip, over an AF_UNIX socket is 30 usecs. A ping over gig ether is 120 usecs.

Unless I'm completely misunderstanding, you are saying that the OS overhead should be "orders of magnitude" more than the network time; that's not at all what I'm seeing on Linux.

I guess what you are saying is that given an infinitely fast network, the overhead of actually doing something with the data is going to be the dominating term. Yeah, true, but when do we care about an infinitely fast network in a vacuum? We always want that data to do something, so we have to pay something to actually deliver it to a user process. Linux is hands down the best at doing so; it's not free, but it is way closer to free than any other OS I've seen.
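
(For reference, the AF_UNIX number comes from a token round-trip test. Here's a minimal sketch of that kind of measurement, not lmbench itself; the round count and the socketpair/fork shape are just illustrative.)

    /* Minimal sketch of a token round trip over AF_UNIX (not lmbench itself).
     * Parent and child ping-pong a 1-byte token over a socketpair; the
     * figure printed at the end is the average round-trip time. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <time.h>

    #define ROUNDS 100000

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        int sv[2];
        char tok = 'x';

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
            perror("socketpair");
            return 1;
        }

        if (fork() == 0) {                  /* child: echo the token back */
            for (int i = 0; i < ROUNDS; i++) {
                if (read(sv[1], &tok, 1) != 1) _exit(1);
                if (write(sv[1], &tok, 1) != 1) _exit(1);
            }
            _exit(0);
        }

        double start = now_us();
        for (int i = 0; i < ROUNDS; i++) {  /* parent: send token, wait for echo */
            write(sv[0], &tok, 1);
            read(sv[0], &tok, 1);
        }
        double elapsed = now_us() - start;

        printf("avg round trip: %.2f usec\n", elapsed / ROUNDS);
        wait(NULL);
        return 0;
    }

The average it prints is the same kind of round-trip figure quoted above.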




Linux sounds fast; 50ns is blazingly fast. The benchmarks I surveyed show figures around 1-10 usec, but they are old and processors are faster now.

Kernel entry and exit are of course contributors to I/O overhead. Also include data copy time (2K through all those main-memory flushes), IP stack time (well into scores of usec now) and driver overhead.

Add the latency of the result being signalled to your user-mode application: interrupt latency, user-mode task scheduling time and, if it's a receive, data copy time again.

Of course router delays are negligible on the backbone, but your local cable modem etc will add something.

I think we're over 16 usec now, which, if I did the math right, is the wire time for the 2K.
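
The arithmetic behind that 16 usec figure, serialization time only with no framing or stack overhead (a back-of-the-envelope sketch):

    /* Serialization time for a 2K transfer, payload bits only. */
    #include <stdio.h>

    int main(void)
    {
        double bits = 2048 * 8;                          /* 2K payload in bits */
        printf("gigE : %.1f usec\n", bits / 1e9 * 1e6);  /* ~16.4 usec */
        printf("fastE: %.1f usec\n", bits / 1e8 * 1e6);  /* ~163.8 usec */
        return 0;
    }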

Another way to estimate all this is to benchmark achieved transfer rate peer-to-peer on an otherwise idle link. Folks report anywhere from 100 Mbit to 300 Mbit depending on other bottlenecks (disk speed, bus, etc.), but that's often with very large block sizes, not our 2K. Even so, we see most of the gigabit rate whittled away.


This may be ignorant, and please correct me if I am off base, but given the same physical medium, isn't sending 2K across the network the same cost whether you are at fastE or gigE, assuming the network is not saturated?


fastE is 100Mbits/sec, gigE is 1000Mbits/sec, so given the same size packet, gigE is in theory 10x faster.

However, to make things work over copper I believe that gigE has a larger minimum packet size, so it's not quite apples to apples on pings (latency).

For bandwidth, the max size (w/o non-standard jumbo frames) is the same, around 1500 bytes, and gigE is pretty much linear: you can do 120MB/sec over gigE (and I have, many times) but only 12MB/sec over fastE.
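
As a sanity check on those numbers, here's the back-of-the-envelope theoretical TCP goodput over gigE with standard 1500-byte frames; the framing and header sizes are the standard ones (preamble+SFD, MAC header, FCS, inter-frame gap, then IP and TCP headers with no options assumed):

    /* Rough theoretical TCP goodput over gigE with standard 1500-byte frames. */
    #include <stdio.h>

    int main(void)
    {
        double wire_bytes  = 1500 + 8 + 14 + 4 + 12; /* frame + preamble/SFD, MAC hdr, FCS, IFG */
        double tcp_payload = 1500 - 20 - 20;         /* minus IP and TCP headers */
        double frames_per_sec = 1e9 / 8 / wire_bytes;

        /* ~118.7 MB/sec, about the ceiling without jumbo frames */
        printf("gigE TCP goodput ~ %.1f MB/sec\n", frames_per_sec * tcp_payload / 1e6);
        return 0;
    }

Jumbo frames push that closer to the raw 125MB/sec.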


Some day I need to do a post about what I learned from SGI's NUMA interconnect. I used to think big packets were good; that interconnect taught me that bigger is not better.


If I have 1000 Mbits to send, then gigE is 10 times faster. But in the article we are transferring only 2K across the network. We're mixing latency and bandwidth here. The latency to send 2K across an empty network isn't 10 times greater on a fastE versus a gigE network, right?


2K is going to be 2 packets, a full-size packet and a short one, roughly 1.5K and .5K.

For any transfer there is the per packet overhead (running it through the software stack) plus the time to transfer the packet.

The first packet will, in practice, transfer very close to 10x faster, unless your software stack really sucks.

The second packet is a 1/3rd-size packet, so the overhead of the stack will be proportionally larger.
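
To put rough numbers on it, here's a toy model of the 2K transfer as those two packets plus a fixed per-packet stack cost; the 5 usec overhead is an assumed figure purely to show the proportions, not a measurement:

    /* Toy model: 2K sent as a 1500-byte and a 548-byte packet, with an
     * assumed fixed per-packet stack cost. Numbers illustrate proportions only. */
    #include <stdio.h>

    static double usec(double bytes, double bits_per_sec)
    {
        return bytes * 8 / bits_per_sec * 1e6;
    }

    int main(void)
    {
        double overhead_us  = 5.0;              /* assumed per-packet stack cost */
        double rates[]      = { 1e8, 1e9 };     /* fastE, gigE */
        const char *names[] = { "fastE", "gigE" };

        for (int i = 0; i < 2; i++) {
            double t = 2 * overhead_us + usec(1500, rates[i]) + usec(548, rates[i]);
            printf("%-5s: %.1f usec total for 2K\n", names[i], t);
        }
        return 0;
    }

With those assumptions gigE comes out around 6-7x faster end to end rather than 10x, and the fixed cost looms much larger for the short packet than for the full-size one.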

And it matters _a lot_ whether you are counting the connect time for a TCP socket. If this is a hot-potato type of test then the TCP connection is hot. If it is connect, send the data, disconnect, that's going to be a very different answer.
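
A sketch of the two test shapes, client side only; HOST, PORT and the echo service on the far end are assumptions for illustration, and error handling is omitted:

    /* Sketch of the two test shapes: one hot connection doing 2K round trips
     * vs. connect/send/close per message. HOST, PORT and the TCP echo service
     * on the far end are assumed; error handling is omitted. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <time.h>

    #define HOST   "192.168.1.2"   /* assumed peer running a TCP echo service */
    #define PORT   7
    #define ROUNDS 1000

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    static int dial(void)
    {
        struct sockaddr_in sa;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(PORT);
        inet_pton(AF_INET, HOST, &sa.sin_addr);
        connect(fd, (struct sockaddr *)&sa, sizeof(sa));
        return fd;
    }

    int main(void)
    {
        char buf[2048];
        double t0;
        int fd;

        memset(buf, 'x', sizeof(buf));

        /* Hot connection: connect once, then time 2K round trips. */
        fd = dial();
        t0 = now_us();
        for (int i = 0; i < ROUNDS; i++) {
            write(fd, buf, sizeof(buf));
            size_t got = 0;
            while (got < sizeof(buf)) {          /* echo may come back in pieces */
                ssize_t n = read(fd, buf + got, sizeof(buf) - got);
                if (n <= 0) return 1;
                got += n;
            }
        }
        printf("hot connection : %.1f usec per 2K round trip\n",
               (now_us() - t0) / ROUNDS);
        close(fd);

        /* Connect, send 2K, disconnect: pays the TCP handshake every time. */
        t0 = now_us();
        for (int i = 0; i < ROUNDS; i++) {
            fd = dial();
            write(fd, buf, sizeof(buf));
            close(fd);
        }
        printf("connect per msg: %.1f usec per 2K message\n",
               (now_us() - t0) / ROUNDS);
        return 0;
    }

The second loop pays a TCP three-way handshake (and teardown) for every message, which is why the two answers diverge so much.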

Not sure if I'm helping or not here, ask again if I'm not.



