Exactly – this is both easier to implement and has the advantage of not incentivizing people to use something like a VPN to get more bandwidth at the expense of everyone else.
Per-user limits are not easy to implement; it may be easier to see why if you visualize the setup: there are a bunch of packet gateways sitting behind a load balancer, and each HTTP session may end up on a different server. There is no entity that counts live bandwidth usage on a per-user basis, let alone controls it. Billing and metering are done on a session basis through logs. So from T-Mo's point of view it is much easier to detect an HTTP session as video and just throttle that session.
It is very easy to implement this (I have worked on such a limiter before). You pick the point of entry into your network (wi-fi connection, ISP connection), keep packet and byte counts for every such point of entry, and limit them.
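For illustration, a minimal sketch of that idea in Python, assuming you key a token bucket on each point of entry (in practice this runs in the kernel or dataplane, and the rates here are made up):

    import time
    from collections import defaultdict

    class TokenBucket:
        """Classic token bucket: allows `rate` bytes/sec with bursts up to `burst` bytes."""
        def __init__(self, rate, burst):
            self.rate = rate          # refill rate, bytes per second
            self.burst = burst        # maximum bucket size, bytes
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            # Refill for the time elapsed since the last packet, capped at the burst size.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True           # forward the packet
            return False              # drop (or queue) it

    # One bucket per point of entry (e.g. per subscriber line or source address).
    buckets = defaultdict(lambda: TokenBucket(rate=1_000_000, burst=100_000))  # ~8 Mbit/s

    def on_packet(entry_point, nbytes):
        return buckets[entry_point].allow(nbytes)

Nothing in there ever looks at what the bytes are, only where they came from and how many there were.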
ISPs have it especially easy, because they can be assured of being able to distinguish traffic from a given user (hardware control of the medium). It's a bit harder in wireless scenarios, since a client can spoof multiple different IDs, but it's hard for a client to keep a TCP connection open under those conditions.
Just because your cheap home router does it doesn't mean it scales to thousands of users on one router. Some home routers are actually quite capable AND very unsaturated. I'm not trying to defend carriers but it is a very apples to oranges comparison.
We did that on low-end PC hardware 15 years ago for conference and guest networks (800+ simultaneous active users, LAN and WiFi).
I find it unlikely that Linux or FreeBSD have gotten less efficient since then, and the hardware has made enormous improvements, far outpacing common uplink speeds.
The auto-adjustment is a problem for fast but traffic-limited connections: it detects that the connection is fast, switches to HD or even 4K, and the traffic limit gets used up faster even if the user doesn't need the better-quality stream.
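Rough numbers to make the effect concrete (the bitrates are assumed typical values, not anyone's actual encoding ladder):

    # Back-of-the-envelope: how long does a data cap last at a given stream bitrate?
    cap_gb = 10                                   # hypothetical monthly data cap
    for label, mbps in [("SD", 1.5), ("HD", 5), ("4K", 16)]:
        hours = cap_gb * 8 * 1000 / mbps / 3600   # GB -> Mbit, then seconds -> hours
        print(f"{label} ({mbps} Mbit/s): cap lasts about {hours:.1f} hours")
    # SD (1.5 Mbit/s): cap lasts about 14.8 hours
    # HD (5 Mbit/s): cap lasts about 4.4 hours
    # 4K (16 Mbit/s): cap lasts about 1.4 hours

So an automatic upgrade from SD to 4K can cut the usable viewing time by roughly a factor of ten.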
Because that would require actual effort from ISPs, which would be in conflict with their current business practice that can be summarized as "to us, you are all equally worthless".
The problem here is: from whose perspective is there "limited bandwidth"? To the ISP there is limited bandwidth available, but to the customer, the service they pay for is being purposefully degraded because the ISP doesn't want to deliver on what it has marketed.
> Now, assuming bandwidth increase is difficult to achieve, they would be forced to keep increasing the price instead.
And they should. That ISPs commonly use deceptive practices in pricing and delivering service is not a good thing. It reduces market information. There should be much more granular pricing based on what's actually delivered, but the major ISPs don't want to go that way, because then they would have to account for how often what they deliver falls below what they market.
For example, when I moved into my newly built house a couple years back and got a 25 Mbit Comcast connection set up, the following conversation happened:
Installer: Wow, you have the best signal I've ever seen, actually.
Me: Really? That's good. So what throughput am I seeing?
Installer: Let me check. (Installer does a speed/circuit test). About 14 Mbit.
Me: Didn't I order 25 Mbit?
Installer: Yes, but lots of things can affect that, such as line quality...
Me: (Having worked at a local ISP multiple times in the past for years, cuts him off, realizing the futility of this conversation). Okay, that's fine.
In what reality do "the best signal I've ever seen" and 56% of the advertised throughput coincide? (This was not because the connection was overused by others in the neighborhood either; it was fairly consistent at 14 Mbit.)
> This would help in cases such as airplane flights. One person watching HD cat videos is going to consume more bandwidth than 20 people doing work.
So would separate tiers of connectivity. If you are doing business, you may be happy with a guaranteed minimum throughput, while other people (such as those streaming) might be fine to take up the slack or excess (since you can cache future video). We've had this for a long time through QoS.
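One way to picture the "guaranteed minimum plus take up the slack" idea, as a toy allocation rather than any particular vendor's QoS implementation:

    def allocate(capacity, classes):
        """Give each traffic class its guaranteed rate first, then share whatever
        is left in proportion to weight -- roughly the HTB-style borrowing model."""
        alloc = {name: min(c["demand"], c["guarantee"]) for name, c in classes.items()}
        spare = capacity - sum(alloc.values())
        total_weight = sum(c["weight"] for n, c in classes.items() if c["demand"] > alloc[n])
        for name, c in classes.items():
            if spare > 0 and total_weight and c["demand"] > alloc[name]:
                extra = spare * c["weight"] / total_weight
                alloc[name] = min(c["demand"], alloc[name] + extra)
        return alloc

    # A 100 Mbit/s uplink: business traffic gets a hard floor, streaming takes the slack.
    print(allocate(100, {
        "business":  {"guarantee": 40, "weight": 1, "demand": 25},
        "streaming": {"guarantee": 10, "weight": 1, "demand": 90},
    }))
    # {'business': 25, 'streaming': 75.0}

The business class never drops below its floor, and the streamers soak up whatever it isn't using.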
> Prioritizing non-video content is challenging if all content is encrypted.
So don't prioritize based on content, prioritize based on connection.
FWIW, every time (about 6 times in 3 years I think) I've complained about slow connections to my ISP, with documentation from speedtest.net that I'm getting less bandwidth than promised, they've given me that month free of charge.
Yeah, I thought about it, but we recently got the decent-speed option included almost-free with our cable TV (15 Mbps for $4/month), so I couldn't be bothered anymore. Used to be on 30 Mbps for $40/month.
In which case you do what should have been done all along: rate limit without regard to packet contents. People who still need better QoS than that can pay for a higher tier to get a bigger chunk of the pie. With effective rate limiting in place, the video bandwidth can be dropped by the server as necessary to maintain the stream. The infrastructure in the middle doesn't have any reason to meddle further.
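To sketch why the middle doesn't need to inspect anything: adaptive-streaming clients already pick a rendition that fits whatever throughput they measure, so a content-agnostic cap is enough to push the stream down to a sustainable bitrate. A simplified selector (the bitrate ladder is invented for illustration; real players also smooth the estimate and watch buffer occupancy):

    # Pick the highest rendition that fits the observed throughput, with some headroom.
    LADDER_KBPS = [400, 1000, 2500, 5000, 8000, 16000]   # SD ... 4K

    def choose_rendition(measured_kbps, safety=0.8):
        affordable = [b for b in LADDER_KBPS if b <= measured_kbps * safety]
        return affordable[-1] if affordable else LADDER_KBPS[0]

    print(choose_rendition(20000))  # uncapped link        -> 16000 (4K)
    print(choose_rendition(3500))   # capped at 3.5 Mbit/s -> 2500

Cap the connection and the player downgrades itself; nobody has to guess which encrypted flows are video.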
This would help in cases such as airplane flights. One person watching HD cat videos is going to consume more bandwidth than 20 people doing work.
Now, assuming bandwidth increase is difficult to achieve, they would be forced to keep increasing the price instead.
Prioritizing non-video content is challenging if all content is encrypted.