At the speed of light in a vacuum, New York to London is about 19ms. That's the physical limit of what a network could ever hope to accomplish; in real fiber it's apparently closer to 28ms.
At 60fps we need to present a new set of pixels to the screen every 16ms.
So in the time that a packet goes in one direction to Europe we could have presented almost two full frames of pixels.
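For the curious, a quick back-of-the-envelope sketch of those numbers (TypeScript, since that's closest to what I write all day). The ~5,570 km great-circle distance and the ~1.47 refractive index for fiber are assumptions on my part, and real fiber routes are longer than the great circle, so treat the outputs as ballpark figures:

    // Rough check of the latency numbers above.
    const C_KM_PER_MS = 299_792.458 / 1000;  // speed of light in vacuum, km per ms
    const DISTANCE_KM = 5_570;               // New York -> London great circle (approx.)
    const FIBER_FACTOR = 1 / 1.47;           // light travels slower inside glass

    const vacuumOneWayMs = DISTANCE_KM / C_KM_PER_MS;                  // ~18.6 ms
    const fiberOneWayMs  = DISTANCE_KM / (C_KM_PER_MS * FIBER_FACTOR); // ~27.3 ms

    const frameBudgetMs   = 1000 / 60;                      // ~16.7 ms per frame at 60fps
    const framesPerOneWay = fiberOneWayMs / frameBudgetMs;  // ~1.6 frames, "almost two"

    console.log(vacuumOneWayMs.toFixed(1), fiberOneWayMs.toFixed(1), framesPerOneWay.toFixed(1));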
Even on the rather slow old SoC in the Google Nest Home Hub platform that I work on, we're able to do quite a bit of pixel crunching in that 16ms. Even with code written in JavaScript, or Dart. Enough to make our users mostly happy.
John Carmack is much smarter than me, so I can't believe he meant this literally; either that, or it's been taken out of context.
“The bad performance on the Sony is due to poor software engineering. Some TV features, like motion interpolation, require buffering at least one frame, and may benefit from more. Other features, like floating menus, format conversions, content protection, and so on, could be implemented in a streaming manner, but the easy way out is to just buffer between each subsystem, which can pile up to a half dozen frames in some systems.”
What he’s talking about is (in his opinion) unnecessary buffering that causes a delay in the pixel actually appearing on screen.
He blames the driver and the display’s internal software, so his argument could be made out to support OP, but I think the situation is a bit more complex than Wirth’s law here.
Well, yes, I suspected it was something like this.
I'm well aware of this kind of bloating (guess I should have said something in my comment to avoid the downvotes...) but it still doesn't support the OP's comment.
Network latency is not only high, but there's literally nothing that can be done about it -- because of the speed of light!
(I am somewhat lucky to work on something where we can optimize away much of the crap you're talking about here, as we own the whole package.)
The person you initially negatively responded to said "networking is not the bottleneck," and if it's possible to have a meaningful negative reaction to that, it might involve asking "the bottleneck in what system?" I think he's right, but it's a blanket statement and it's fair to ask for more context.
More context: typical network latency is good enough that video games rendered on a remote server are becoming practical, or at least salable. "Network latency is high" is a vague enough statement that it could mean anything, but if being able to render video games remotely and stream the output to the client doesn't make you reconsider, I question what you would ever consider network latency that's not too high.
The kicker with these games, that perhaps speaks to the original, crazy post by horsawlarway, is that it's normal for a TV set and set of controls to introduce a lot more latency than the network connection itself: the network is not the bottleneck. There's a good excuse for the latency involved in networking, rooted in physics, but this is not true for the hardware and the software stack.
Perhaps you can see why that makes your comments all the more baffling? It's understandable that you might view the network as a UI bottleneck since you were working on an application that relies maximally on low-latency networking, but you must realize how unusual that is, and how fast typical networking actually is in order to make your work possible at all. (and to the original point, how lame an excuse network latency is for those who can't manage to cobble together a fast implementation of a much, much simpler application)
It's just that a round trip to Europe from North America (especially western North America) is actually ... an eternity; it can easily drift past 100ms -- and it's not something I can optimize away. Whereas I can work my way down the software stack, find bottlenecks, and deal with them.
Yes, I can't control what TV manufacturers do; that is a wild card. But the quote, taken out of context, is more than a little inflammatory -- the network has a hard physical limit that the local device does not.
FWIW I'm just as dissatisfied with software bloat as the next person. Retro computing is one of my hobbies, and the latency measurements there are something to be envious of.
ChromeOS generally does better in latency measurements than other platforms; much effort was made there, much of it by people I know.
A new set of pixels every 16ms is about throughput and we do have that. But the latency today is worse than in the VGA era.
If you have a completely black screen and want to draw a tiny white circle in the middle of it, it will take your processor or GPU less than a microsecond to change the bytes. Less than 16 milliseconds later (an average of 8ms, actually) the updated bytes will be flowing through the HDMI wires and into the monitor. There they will be stored into a local buffer. If there is a mismatch between the image format and the LCD panel then it will probably be copied to a second buffer. Some DMA hardware will then send the bytes to the drivers for the LCD and the light going through those pixels will change. All that can easily add up to 50ms or more.
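To make that concrete, here's a rough tally of the stages in that pipeline. Every per-stage number below is an illustrative assumption for a typical 60Hz LCD with one extra conversion buffer, not a measurement of any particular display:

    // Illustrative latency budget from framebuffer write to visible photons.
    const stagesMs: Record<string, number> = {
      gpuWritesPixels:        0.001, // changing the bytes in the framebuffer
      waitForNextVsync:       8,     // average wait; worst case ~16 ms at 60Hz
      scanoutOverHdmi:        8,     // until the middle of the frame has crossed the wire
      monitorInputBuffer:     16,    // one full frame buffered inside the display
      formatConversionBuffer: 16,    // extra frame if the format doesn't match the panel
      lcdPixelResponse:       5,     // the liquid crystal actually changing state
    };

    const totalMs = Object.values(stagesMs).reduce((a, b) => a + b, 0);
    console.log(`~${totalMs.toFixed(0)} ms from framebuffer write to visible photons`);

With those assumptions the total comes out around 53ms, which is why "50ms or more" is easy to hit even when the GPU's work took less than a microsecond.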
You haven't even tried to take the shocking amount of latency introduced by the hardware and the software stack into consideration, and it's what prompted Carmack to make his comment.
The network is definitely a bottleneck.