
Part of the answer is that different applications require different latencies. VoIP and video conferencing, for example, have hard latency requirements that even other high-bandwidth activities like viewing YouTube or Netflix do not.

As a user, I would prefer that my video calls be the highest priority, whereas I wouldn't care if my YouTube videos downloaded in batches in the background, as long as I didn't encounter buffering.



But wouldn't that use case be best solved by your router giving priority to some kinds of traffic over others? (Most routers come with QoS, which is exactly that.)

Why is it better to give all the power to the ISP? You lose your ability to choose if what you need isn't included in a package.


QoS needs to be configured on the ISP's network to really work. Your home router can only impact your outbound traffic, not the bottleneck on your inbound traffic 4 routers up.


As another user said, QoS on my router won't much matter when I have only one computer connected to it. Content-aware load balancing/QoS at higher stages, if done correctly/morally, leads to better service for everyone.

There's a counterargument to this that you should just be able to purchase a dedicated/guaranteed 1 Gb/s, and I can buy that idea, but currently that kind of thing is really expensive (and I expect there's a dirty little secret that the service provider will still use "your" bandwidth when you aren't using it).


> Content-aware load balancing/QoS at higher stages, if done correctly/morally, leads to better service for everyone.

"Content-aware" isn't really what's required. You can get decent QoS by just putting CoDel on every buffer, so that congestion doesn't lead to unreasonable latency. Your latency-sensitive traffic will still get its share of packet drops in proportion to how much bandwidth it's using, but the end result will be that your latency-sensitive traffic won't be affected by congestion as much as the high bandwidth bulk file transfers that are causing most of the congestion.

If you want to go further to protect the latency-sensitive traffic from packet drops in the event of congestion, you can upgrade from CoDel to fq_codel. Then the traffic flows that are using most of the bandwidth will get the packet drops first, and your low-rate latency-sensitive traffic will be unaffected until it's taking up more than its fair share of bandwidth.
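The flow-queueing part is just a hash of the 5-tuple into per-flow CoDel queues served round-robin; a sketch built on the CoDelQueue above (the packet fields are hypothetical stand-ins, and real fq_codel uses deficit round robin with a byte quantum plus a new-flows list that gets sparse flows served first):

    NUM_QUEUES = 1024

    class FQCoDelQueue:
        def __init__(self):
            self.queues = [CoDelQueue() for _ in range(NUM_QUEUES)]
            self.active = []  # round-robin order of backlogged queues

        @staticmethod
        def flow_of(pkt):
            # The only "content awareness": deciding which packets
            # belong together, via the (hypothetical) 5-tuple fields.
            return hash((pkt.src, pkt.dst, pkt.sport,
                         pkt.dport, pkt.proto)) % NUM_QUEUES

        def enqueue(self, pkt):
            i = self.flow_of(pkt)
            self.queues[i].enqueue(pkt)
            if i not in self.active:
                self.active.append(i)

        def dequeue(self):
            while self.active:
                i = self.active.pop(0)
                pkt = self.queues[i].dequeue()  # CoDel runs per flow
                if self.queues[i].q:
                    self.active.append(i)  # still backlogged: back of line
                if pkt is not None:
                    return pkt
            return None

A bulk transfer builds a long per-flow queue, so CoDel's drops land on it; a sparse, latency-sensitive flow's queue is almost always empty, and its packets go out on its next round-robin turn.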

None of that requires being explicitly content-aware. Traffic flows on port 80 get treated with the same rules as traffic flows on port 500. UDP flows get the same rules as TCP flows (except that your UDP traffic almost certainly doesn't support ECN). Flows originating from Netflix get the same treatment as flows originating from Windows Update. And it all works well for the users, while being content-blind in every way except for deciding which packets are related to each other.

Not only does it work well, but it almost always works better than any manually-crafted QoS ruleset that singles out certain ports and protocols and endpoints and applications. Those rulesets are inherently fragile and riddled with corner cases, and require expert maintenance to keep up with changing usage patterns.


Does such a scheme continue to work well for high-bandwidth, low-latency traffic (videoconferencing, potentially livestreaming)?

Essentially, I believe such a scheme would break down if you had, for example, 4N bandwidth, 3 users using an average of N with the ability to buffer, and one user using an average of N but fluctuating by ±0.5N, and without the ability to buffer. I don't think the scheme you described would work in such a case, but an "intelligent" provider could give everyone in this situation a "perfect" experience.
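(To spell out the arithmetic in my example: at the fourth user's peak, aggregate demand is 3N + 1.5N = 4.5N against 4N of capacity, so the link is oversubscribed by 0.5N precisely when the one user who can't buffer is bursting.)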

Granted, I'm not sure how realistic what I just described is, but still.


> if you had, for example, 4N bandwidth, 3 users using an average of N with the ability to buffer, and one user using an average of N but fluctuating by ±0.5N, and without the ability to buffer.

Are you referring to buffering in the endpoint, as for video streaming? The buffers I was referring to are the queues in the network itself. Properly managed queues can absorb bursts of traffic but will otherwise maintain a steady state of minimal buffer occupancy, so packets spend minimal time waiting in the buffer even when the line is running at full capacity. Even when a user is experiencing packet drops as a congestion signal, the packets that make it through the bottleneck will do so without undue delay.

If I understand your hypothetical correctly, user number 4 has higher latency sensitivity than users 1-3, but they're all trying to use at least their full fair share of the bandwidth. Furthermore, users 1-3 are transferring at a fairly steady rate, indicating that their traffic is being managed by a relatively intelligent endpoint that is using something like TCP BBR.

Depending on the timescale of user 4's traffic volume fluctuations, his experience will vary. Short bursts of traffic will get buffered, and so the tail end of the burst may experience a few milliseconds of delay (and also induce a few milliseconds of delay on the neighbors' traffic), but if the burst is large enough that it would monopolize the line for tens of milliseconds, packets will start getting dropped as a congestion signal, and user 4 will experience most of those drops. On a longer timescale of seconds, if user 4 is still trying to use more than his fair share of bandwidth in spite of having had enough time for congestion signals to make a round trip, then user 4's packets are going to get dropped as much as necessary to keep them under control, because user 4's traffic is behaving badly.

In the real world, Netflix-style video streaming tends to be fairly well-behaved, dropping to lower resolutions in response to congestion. It is also fairly latency-tolerant because of client-side buffering. Interactive videoconferencing is more latency sensitive but has similar congestion response and is almost as loss-tolerant. Video games, DNS lookups, and early-stage connection handshakes are all relatively unresponsive to congestion signals and very latency-sensitive. But because those are almost never the traffic flows that are using the most bandwidth, they're never first in line to be dropped in the event of congestion and they usually are the first packets to be forwarded by a fq_codel style traffic manager.


Your ISP prioritizing your video call over your YouTube videos on your link according to your wishes is not a violation of NN. The problem is when they prioritize your video call over your neighbour's YouTube videos.



