"What is zero rating?" you might ask. (I did.) It's the practice of ISPs offering free carriage of particular traffic only: e.g. T-Mobile's "unlimited [downsampled] YouTube" offering, Facebook's free internet (offered in India, IIRC), or Amazon's Whispernet for the Kindle.
It's a neat psychological trick to get people accustomed to non-content-neutral pricing strategies by mobile network operators.
Basically, it is a counter-strategy to the net neutrality people. While net neutrality is usually argued for as a proven way to get the best possible outcome for the market as a whole, zero rating counters this by appealing to the individual greed of the small-minded ("But I like free YouTube now more than your lofty it's-gonna-be-better-for-all-in-the-end future utopia!").
While I strongly detest it, using this strategy in this context is a stroke of genius. The base strategy is already proven to work well across all target demographics, but here it's applied so that the modern, urban, don't-need-to-own-stuff-because-the-sharing-economy-and-streaming-exist metropolitans, who are traditionally rather opposed to old-school big-corp power grabs, actually get something immediately valuable to them out of the deal, which further boosts its effectiveness. A big fraction of the people who might otherwise take part in the movement to advance the net neutrality cause are now placated by endless Spotify and Netflix on their phones.
What exactly is “anti-competitive” about data caps? They’re just a form of congestion pricing. Congestion pricing is now widely considered desirable for things like roads. Ideally, we would have fine-grained congestion pricing, with fees kicking in based on tower occupancy level, but customers would probably find that too unpredictable.
Moreover, congestion isn’t the only cost to account for. Take the total dollars of capital and maintenance costs of the network over its useful life and divide by the total number of bits sent during that life. That produces a cost per bit that seems quite reasonable to apply to customers based on how many bits they send.
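To make that arithmetic concrete, here's a quick sketch; every number in it is invented purely for illustration:

```python
# Back-of-the-envelope cost per bit. All figures below are assumptions
# chosen for illustration, not real network economics.
capex_and_maintenance_usd = 100e6   # total network cost over its life (assumed)
useful_life_years = 10              # assumed useful life
avg_throughput_bps = 50e9           # average bits/second carried (assumed)

seconds = useful_life_years * 365 * 24 * 3600
total_bits = avg_throughput_bps * seconds
cost_per_bit = capex_and_maintenance_usd / total_bits
cost_per_gb = cost_per_bit * 8e9    # 8e9 bits per decimal gigabyte

print(f"cost per GB: ${cost_per_gb:.4f}")   # ~$0.05/GB with these inputs
```

With these made-up inputs, the fully-loaded cost works out to about five cents per gigabyte; the interesting part is how sensitive the result is to the assumed average utilization.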
The physical RF spectrum is the local road. There are no more roads to take. No amount of economic policy supporting competition is going to bend the laws of physics, unless you're willing to shovel herculean amounts of resources into microcells (even smaller than 5G cell sites) to reduce contention (essentially replacing fiber and wifi).
Cell companies can more or less directly translate $ = bandwidth with no meaningful limit.
You have effectively unlimited RF bandwidth assuming ever smaller cell locations. Bluetooth is the equivalent of a new cell every 10 meters or less. And that’s with a tiny slice of the RF spectrum.
Forcing lower bandwidth versions of video and audio streams reduces local spectrum use. Do I really need anything above 480p on my iPhone? Probably not, so the carrier incentivizes me with zero rating.
What's anti-competitive is zero rating and violation of net neutrality. Data caps are driving it. But to go one step further, all this sickening situation is caused by lack of competition between ISPs themselves. That's oligopoly for you.
And how do you know the congestion level of the cell?
With LTE you can deduce it from RSRQ, but that won't tell you if congestion is happening at the next hop.
I'm not sure data caps are a disease, as such. It costs money to move data around. Caps on "unlimited" deals are obviously a dodgy sales technique, but I don't think that's what's being discussed.
Up to $100/6GB of mobile data. The next 9GB of mobile data are at normal up/down rates, then everything over 15GB is metered for the rest of the billing cycle.
The natural data cap is the actual line speed of the connection.
That would mean a 10Mbit connection == 3.24TB/month
That's definitely a cap that is directly related to line speed. But these 150GB caps are purely because the internet companies are also content companies - and their content channels are zero rated.
And with a 10Mbit connection, it only takes 33 hours to exceed their arbitrary cap over the whole month.
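Both figures fall out of the same unit conversion. A quick sketch (decimal units, 30-day month):

```python
MBIT = 1e6  # bits per megabit (decimal)

def monthly_bytes(mbps, days=30):
    """Bytes deliverable if the line runs flat out for the whole month."""
    return mbps * MBIT / 8 * days * 24 * 3600

def hours_to_cap(mbps, cap_bytes):
    """Hours of full-speed use needed to burn through a byte cap."""
    return cap_bytes * 8 / (mbps * MBIT) / 3600

print(monthly_bytes(10) / 1e12)   # ~3.24 TB/month on a 10 Mbit line
print(hours_to_cap(10, 150e9))    # ~33.3 hours to hit a 150 GB cap
```

So a 150 GB cap on a 10 Mbit line amounts to about 4.6% of what the line can actually carry in a month.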
That would only make sense if everyone used a totally consistent amount of bandwidth 24/7. The reality is that most bandwidth is used between 5-10pm. Bandwidth caps cause people to use bandwidth sparingly and ease congestion. To allay your complaints about unfairness, you could structure it so that the caps are looser in off hours like 3am, but then again, that wouldn’t benefit most people because they don’t use the internet at 3am.
The best system would be to charge based on bandwidth usage, and raise and lower prices based on congestion per tower. But most ISPs don’t have the billing capability to support that, and bandwidth caps sort of approximate it.
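As a sketch of what per-tower congestion pricing could look like; the function shape, base price, and surge multiplier are all hypothetical:

```python
# Hypothetical per-tower congestion pricing: the per-GB price rises
# smoothly as the tower approaches saturation. Numbers are invented.
def price_per_gb(tower_utilization, base_price=0.01, surge_multiplier=10):
    """Quadratic surge: near-free when idle, ~10x base when saturated."""
    if not 0 <= tower_utilization <= 1:
        raise ValueError("utilization must be in [0, 1]")
    return base_price * (1 + surge_multiplier * tower_utilization ** 2)

print(price_per_gb(0.1))   # near-idle tower: barely above base price
print(price_per_gb(0.9))   # congested tower: ~9x base price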
>To allay your complaints about unfairness, you could structure it so that the caps are looser in off hours like 3am, but then again, that wouldn’t benefit most people because they don’t use the internet at 3am.
On the other hand, something like this might incentivize developers to take advantage of lower pricing during periods of lower congestion. Mobile OSes tend to provide the user a choice between downloading updates anytime or only when on WiFi, but if the pricing structure made this useful, there’s no reason they couldn’t download updates overnight too.
Back in the early 2000s, before most plans moved to "unlimited", some ISPs in Portugal did exactly that: they had rather small caps, but unlimited data between 1am and 7am.
And we did have developers take advantage of that; for example, there was a popular fork of eMule that had extra scheduling features, so it could automatically run just in that period.
Then they realized they could get away without even that modicum of respect for their clients, because, well, they run a legal monopoly. Either you have only one option (fiber, or even cable in some markets), or the 3 to 4 offerings have the same "deals" across the board.
I try very hard to be civil on HN, but all the people defending artificial data caps are a bunch of idiots (in the true sense of the Greek word: a person who can't live in society and should be voted to be ostracized).
>I try very hard to be civil on HN, but all the people defending artificial data caps are a bunch of idiots
Not that hard I guess.
Data caps are the only reason you have affordable consumer internet because they allow a significant amount of oversubscription which matches the mostly-idle bursty behavior of consumers.
You can get leased lines from ISPs with no caps easily. You just won't like the real price that comes with guaranteeing a customer that kind of bandwidth.
And why should content creators also be the ones who control the very last mile of the internet in the USA? You don't think there's something very wrong with that arrangement?
Back in the day, we used to take a dim view of companies that amassed horizontal or vertical monopolies, and we broke them up. Well, they're like the replicators from Stargate SG-1 and have reassembled into something even uglier.
And yet when we discuss this, you fall back on caps and justification by contention ratio... But why are their services zero rated, eh? Of course we know why - it's to kill the opposing services like Netflix, Hulu, or others... except for their services!
Logically, the actual lines should belong to the people (the government). The money we've (royal we) paid has been above and beyond what an REMC-style internet-ification would have cost. I don't want the US or state govts being ISPs themselves, but muni-style arrangements have worked out well in the past - except where Comcast and ATT have lobbied them out of existence.
> And yet when we discuss this, you fall back on caps and justification by contention ratio... But why are their services zero rated, eh? Of course we know why - it's to kill the opposing services like Netflix, Hulu, or others... except for their services!
I mean, there is also the fact that TV and broadband run on entirely separate infrastructure that just happens to share a physical wire. On FiOS, for example, TV is a separate wavelength of laser and is a broadcast signal, not an IP signal.
How can it be entirely separate infrastructure if it uses the same physical wire? Using those wavelengths for that leaves less carrying capacity for internet data. It's in direct contention for the same resource.
The video signal is muxed onto the fiber after the OLT, so the only shared portion of the infrastructure is the passive fiber. That’s effectively an uncontended resource. The single mode fiber has vastly more capacity than you can actually use with the active components of the network. Your limit in data speeds is from the OLT, and the Ethernet ports those OLTs are plugged into, none of which is shared with TV.
"The fiber has plenty of capacity" seems like an argument against data caps, considering that getting/keeping the fiber in the ground is the major cost.
The whole concept of separation seems like a dodge. Is there any real advantage to using separate equipment for TV vs. using the same hardware resources to provide more data capacity and using IP multicast? Or is that what they are doing, and just doing it on separate hardware in order to claim separation?
> "The fiber has plenty of capacity" seems like an argument against data caps, considering that getting/keeping the fiber in the ground is the major cost.
No, a huge part of the cost is the active equipment, which also requires maintenance and upgrades. Have you priced aggregation routers with 10G ports recently?
> Is there any real advantage to using separate equipment for TV vs. using the same hardware resources to provide more data capacity and using IP multicast?
Yes, injecting the TV signal as RF over glass is vastly simpler, cheaper, and more reliable than using IP multicast. There is also the fact that it pre-dates streaming services and is an outgrowth of the analog CATV distribution systems that were in place for decades before streaming Internet TV.
> No, a huge part of the cost is the active equipment, which also requires maintenance and upgrades. Have you priced aggregation routers with 10G ports recently?
At something like $1000/port, for 100 Mbps symmetric broadband customers that's $10/customer divided by the oversubscription ratio, per hardware refresh cycle (say three years). The percentage of the customer's monthly bill rounds to zero.
And the maintenance cost may be significant, but it also shouldn't really be proportional to the traffic level.
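For what it's worth, the arithmetic above can be reproduced directly; the 20:1 contention ratio here is my own assumption, not a figure from the comment:

```python
# Reproducing the per-customer port-cost arithmetic with assumed figures.
port_cost_usd = 1000          # per 10G aggregation-router port (assumed)
port_capacity_mbps = 10_000   # one 10G port
customer_speed_mbps = 100     # the 100 Mbps symmetric tier discussed
oversubscription = 20         # illustrative 20:1 contention ratio

# 100 customers fit at 1:1; oversubscription multiplies that.
customers_per_port = port_capacity_mbps // customer_speed_mbps * oversubscription
cost_per_customer = port_cost_usd / customers_per_port
refresh_years = 3             # assumed hardware refresh cycle
monthly_cost = cost_per_customer / (refresh_years * 12)

print(customers_per_port)              # 2000 customers share one port
print(cost_per_customer)               # $0.50 per customer per refresh cycle
print(round(monthly_cost, 4))          # ~$0.014/month: rounds to zero on a bill
```

That is the "$10/customer divided by the oversubscription ratio" claim made explicit: $1000 over 100 customers at 1:1, then divided by 20.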
> Yes, injecting the TV signal as RF over glass is vastly simpler, cheaper, and more reliable than using IP multicast.
It just seems like the opposite of how everything else is going. Virtualization, SDN, etc. Get everything running on the same kind of hardware so everything is fungible, failed hardware can be swapped out with universal replacements, idle resources can be reallocated to other workloads without changing hardware, etc.
> There is also the fact that it pre-dates streaming services, and is an outgrowth of the analog CATV distribution systems that were in places for decades before streaming Internet TV.
This would be easier to believe from Comcast than Verizon.
> At something like $1000/port, for 100 Mbps symmetric broadband customers that's $10/customer divided by the oversubscription ratio, per hardware refresh cycle (say three years). The percentage of the customer's monthly bill rounds to zero.
If you're only building enough active equipment to support 100 Mbps service, then your point about sharing capacity on the fiber itself with RF TV is entirely irrelevant--you've got nowhere near enough active equipment to exceed the capacity of even a single pair of wavelengths.
Compare that to something like Comcast's Gigabit Pro. Each user gets their own port on a 10G aggregation router. The CPE is a Juniper ACX-2100. And you need a beefy router after that to do traffic shaping. (Due to the way the traffic policing works, Comcast will happily deliver 25 MB at 10 Gbps before the policer kicks in, which completely overwhelms any consumer router you put after the Juniper.) That's thousands of dollars per user in active equipment--and you're still using just a single pair of wavelengths.
> It just seems like the opposite of how everything else is going. Virtualization, SDN, etc. Get everything running on the same kind of hardware so everything is fungible, failed hardware can be swapped out with universal replacements, idle resources can be reallocated to other workloads without changing hardware, etc.
There is nothing cheap, simple, or reliable about SDN. Flexible, yes, but not those other things. It takes enormous amounts of hardware to build, e.g., an SDN switch with VPP that can compare with a $1,000 Juniper 10G switch.
RF over Glass is super easy and cheap. You get a feed from a head-end, and you inject it into a passive fiber network. There is no IP, no routing, nothing active in the delivery network until you get to the CPE.
They don't have to. It's only because the alternative is to invest in CDNs, which some don't want to do. Real Internet video providers do exactly that, however (YouTube, Netflix, and so on).
Either way, it doesn't mean they aren't competing. The legacy TV business hates Internet video, even though they know full well they're finished: either they'll go bust or they'll need to switch to Internet video themselves.
We used to have a worldwide CDN messaging service: Usenet.
Early on (1998-1999) ISPs axed their newsgroup servers. Nobody paid them for those CDNs. But soon after, CDN companies paid ISPs to install their content.
> The natural data cap is the actual line speed of the connection.
The natural data cap is the line speed of the ISP's peering links to upstream providers divided by the number of active customers. But you don't need monthly limits for that; it's inherently true all the time.
It's also kind of a scam on the ISP side because content providers are generally happy to peer with ISPs at no cost to the ISP, so it's not as if there's a non-artificial bottleneck there.
> purely because the internet companies are also content companies
I have no doubt this is a reason, but most ISPs also oversell their bandwidth. If every customer with a 10Mbit line is only using it 10% of the time, your upstream lines only 'need' to be 10% of 10Mbit * the number of customers (plus whatever margin for spikes).
It's called contention. Typical ratios are about 20:1 - for every 20Mbits of throughput they sell to customers, they have about 1Mbit of throughput to the internet. Leased lines are uncontended, but they're also vastly more expensive than conventional broadband services.
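The sizing arithmetic behind contention is simple enough to sketch:

```python
# Upstream capacity an ISP provisions under a contention ratio:
# sold capacity divided by the ratio, since most lines sit idle.
def upstream_needed_mbps(customers, speed_mbps, contention_ratio):
    """Returns the upstream throughput to provision, in Mbps."""
    return customers * speed_mbps / contention_ratio

# 1000 customers sold 10 Mbit each at 20:1 needs only 500 Mbit upstream.
print(upstream_needed_mbps(1000, 10, 20))   # 500.0
```

The economics of consumer broadband versus leased lines is essentially this one division: a leased line sets the ratio to 1.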
I don't care how they do contention. That's a business plan.
What I do care about is fraudulent business practices. I expect a minimum speed alongside a maximum speed. And if their contention ratio is 10:1 then I expect 1Mbit-10Mbit for that connection.
But no, the content/internet media companies play insane games: zero rating their own stuff, enforcing arbitrary 'kill Netflix' limits, and doing evil layer 7 filtering. They need to be broken up: lines owned by the state, service over those lines sold by whoever provides it (an ISP or a content company), and customers leasing the lines, like how power works.
These megagiant media corps should have never owned the physical connections. At all.
The problem is there's simply not enough capacity for _everyone_ to use their 10 Mbit connection at the same time. That's a necessary property of efficient packet-switched networks.
As a result of this, people who use more total data generally incur higher infrastructure costs to the provider than those who use less; even if those two groups of people have the exact same link speed.
It doesn't really cost money to move data around. The biggest cost by far is building the infrastructure. If the capacity is available, the cost of using it is minimal.
Infrastructure has finite capacity, especially on cellular networks. You need some way of managing network utilisation fairly, otherwise everyone's experience will be severely degraded. To my mind, it seems obviously unfair to force light users to cross-subsidise heavy users.
The problem is simultaneous usage, not "heavy" usage as in consuming a lot of data each month. If the network can't handle increased simultaneous usage, no amount of data caps will fix that.
The only thing that can be viewed as network management is bandwidth limiting based on the current network load. I.e. your realtime bandwidth limit.
Limiting monthly data has nothing to do with network management IMHO and is simply a method to fleece users and push them to pay more for less-capped plans.
Limiting bandwidth for everyone just makes the experience worse for everyone. If someone downloads 1 GB over an hour or over a minute, it makes no difference to your network. The amortized load on your network is the same.
This is why the trick is to deliver data asap to low usage users, and throttle high usage users to limit the amount of total load that they take up. This is of course a problem if they lie and advertise "unlimited" data.
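One way to implement that trick is to prioritize by recent usage at the moment of congestion, rather than capping monthly totals. A hypothetical sketch; the function, field names, and numbers are all invented:

```python
# Hypothetical congestion-time scheduler: when demand exceeds link
# capacity, grant bandwidth to the lightest recent users first.
def allocate(capacity_mbps, demands):
    """demands maps user -> (requested_mbps, recent_gb_used).
    Returns the Mbps granted to each user."""
    # Sort by recent usage, lightest first.
    queue = sorted(demands.items(), key=lambda kv: kv[1][1])
    grants, remaining = {}, capacity_mbps
    for user, (requested, _) in queue:
        grant = max(0, min(requested, remaining))
        grants[user] = grant
        remaining -= grant
    return grants

print(allocate(100, {"light": (50, 1), "heavy": (100, 500)}))
# → {'light': 50, 'heavy': 50}: the light user is served in full,
#   the heavy user gets whatever is left.
```

Note the throttle only bites while the link is actually congested; during off-peak hours the heavy user would get everything they asked for.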
> Limiting bandwidth for everyone just makes the experience worse for everyone.
Sure it does, but not limiting it will make the network die even faster if it can't handle said simultaneous usage.
Data caps on the other hand can't help with that at all. I.e. users with such caps connecting at the same time will produce the same negative effect as ones without caps.
The only proper solution is to build up the network, if it's plagued by congestion all the time. Once it's congested - it's already too late and you can only make it degrade gracefully until it's built up.
Simultaneous usage and heavy usage are the same thing in practice. Most people watch YouTube videos at the times of day when everyone else is watching YouTube too. Data caps make users consider whether or not to watch a video over their wireless connection, and hence reduce peak usage.
Telecom companies could work around this by making it cheaper to use data in off hours, but that makes it complicated. Most electricity providers don't offer that to consumers either, even though it's actually super expensive to temporarily shut down many forms of electricity generation (and they can't keep them all running either, because that would fry the grid).
It's not the same, though. The idea that data caps prevent simultaneous usage is a fiction. They might affect it, but they can't prevent it. So they're not a solution, but simply a money grab.
Paying for electricity per kWh is just fleecing the customers, too, then? After all, the problem is simultaneous usage, not "heavy" usage as in consuming a lot of electricity each month.
I mean, seriously? Just because it is possible for someone to buy a 1 TB package and then use it all on the first day of each month ... doesn't mean that most average people don't have relatively predictable usage patterns and relatively constant use throughout the month, does it?
> The all-in costs of whatever you had to burn (literally or not) to produce that kWh?
OK, so in the case of wind and solar power, what is burned (not literally) is hydrogen (being fused into helium), which no one is paying for, so there is zero cost for what is being burned to produce each kWh of wind or solar power. So, if you are being billed per kWh for wind or solar power, that's fleecing the customer then?
Also, for coal, the coal per kWh costs about 3 cents, you pay at least three times that for electricity, usually more--is that also fleecing the customer?
> Packets do not have a meaningful marginal cost. Metering is a crap model.
What do you mean by "meaningful"? I mean, either they have a marginal cost, or they don't right?
> So, if you are being billed per kWh for wind or solar power, that's fleecing the customer then?
At any time other than a load emergency, yes. The ideal pricing mechanism for a 100% renewable grid would be to pay a flat rate based on your service size (100 amp, 200 amp, etc.) and then get paid by the grid for dropping your load when the grid is overloaded. That would give the power company the incentive to prevent overloads by providing adequate capacity.
> Also, for coal, the coal per kWh costs about 3 cents, you pay at least three times that for electricity, usually more--is that also fleecing the customer?
That price doesn't include externalities. It can make sense to use pricing to discourage usage of resources with otherwise unpriced negative externalities.
> What do you mean by "meaningful"? I mean, either they have a marginal cost, or they don't right?
Network equipment may use more electricity when forwarding packets than not. But when this is in the amount of pennies per TB of data, the accounting overhead of keeping track of it would exceed the measured dollar amount.
> At any time other than a load emergency, yes. The ideal pricing mechanism for a 100% renewable grid would be to pay a flat rate based on your service size (100 amp, 200 amp, etc.) and then get paid by the grid for dropping your load when the grid is overloaded. That would give the power company the incentive to prevent overloads by providing adequate capacity.
So, do I understand you correctly that someone who uses their electric oven (so, ~ 3 kW) once a day for half an hour but uses no other electricity should pay the same total as someone who runs some 3 kW machine 24/7, unless the grid is overloaded, in which case the latter pays less?
> That price doesn't include externalities. It can make sense to use pricing to discourage usage of resources with otherwise unpriced negative externalities.
Except that's not what is happening here?
> Network equipment may use more electricity when forwarding packets than not. But when this is in the amount of pennies per TB of data, the accounting overhead of keeping track of it would exceed the measured dollar amount.
Except that that's a minor part of the marginal cost of moving a packet?
> So, do I understand you correctly that someone who uses their electric oven (so, ~ 3 kW) once a day for half an hour but uses no other electricity should pay the same total as someone who runs some 3 kW machine 24/7,
Yes, exactly. Because they both want to come home and use 3 kW for a half an hour at the same time, so the grid needs 6 kW of capacity just then. But if it has 6 kW just then with a source that generates 24/7 the same amount at no marginal cost, the person who is also using that amount the whole day isn't costing the power company anything more. The capacity was needed for that half hour regardless of what you do the rest of the day, so why should what you do for the rest of the day change what you pay?
> unless the grid is overloaded, in which case the latter pays less?
No, you get compensated for not using what you paid for, not for not using what you normally use. If they're both not using then they both get paid. Obviously this gives the power company a good incentive not to have an overloaded grid.
Now in practice what's going to happen is that the person who uses 3 kW for a half hour a day isn't going to buy 30 amp service for that, they're going to get 10 amp service and a 5 kWh battery that can run their oven or clothes drier from time to time and then slowly charge back up over 24 hours.
The analogy for broadband is you pay for 50Mbps service but it will burst up to 500Mbps for a few seconds at a time, so the person who doesn't use 24/7 has de facto 500Mbps instantaneous service for the price of 50Mbps.
And there are some complexities in power generation that don't apply to broadband, e.g. grid-scale batteries and the fact that solar doesn't actually generate 24/7. But even that doesn't really change that much, especially if you tie the service level to time of day, e.g. you can order 10 amp from 4PM to 10PM and 100 amp from 10PM to 4PM and that costs a lot less than 100 amp 24/7, but it means you're not allowed to use more than 10 amps during peak hours.
And it would make sense to sell broadband that way as well, with Mbps rather than amperes. Why can't I get 10Mbps during peak hours and gigabit the rest of the time? And then someone who needs 20Mbps during peak hours can get that and pay twice as much as me, even if I'm using a thousand times more data than them during off peak times.
> Except that's not what is happening here?
It's not administratively what's happening with power generation, but it's de facto what's happening, and so there isn't a lot of cause to change it just because we're doing something sensible and calling it something else.
By contrast, there is no comparable negative externality caused by sending data.
> Except that that's a minor part of the marginal cost of moving a packet?
That's the entire marginal cost of moving a packet. The infrastructure cost is a fixed cost. You don't get it back if you don't send the packet and the network goes underutilized.
> Because they both want to come home and use 3 kW for a half an hour at the same time, so the grid needs 6 kW of capacity just then.
Except that's not how it works on the large scale. If you have 1000 consumers coming home and using their 3 kW ovens for half an hour "at the same time", you very reliably get nowhere near 3 MW of load on the grid. For one, people do not in fact come home at the exact same time, and then, the exact switching intervals of the oven thermostats are essentially random, so the actual simultaneous load on the grid is pretty close to the thermal loss of all ovens combined, rather than the total peak power that they could consume if synchronized.
> No, you get compensated for not using what you paid for, not for not using what you normally use. If they're both not using then they both get paid. Obviously this gives the power company a good incentive not to have an overloaded grid.
So, the power company calculates the maximum energy that you could use given the service that you are buying, makes up some per-kWh price, multiplies the two, and then bills you that amount minus the kWh that you didn't use, and that gives the power company a good incentive not to have an overloaded grid?
Could you provide an example where the total of that bill wouldn't be identical to just billing the same per-kWh price multiplied with the kWh used?
> Now in practice what's going to happen is that the person who uses 3 kW for a half hour a day isn't going to buy 30 amp service for that, they're going to get 10 amp service and a 5 kWh battery that can run their oven or clothes drier from time to time and then slowly charge back up over 24 hours.
Well, that depends on the pricing? If it costs the same to buy 30 amp service and use 10 kWh per day as it costs to buy 10 amp service and use 10 kWh per day, why would I buy a 5 kWh battery?
> The analogy for broadband is you pay for 50Mbps service but it will burst up to 500Mbps for a few seconds at a time, so the person who doesn't use 24/7 has de facto 500Mbps instantaneous service for the price of 50Mbps.
So, the analogy to a hard rate limit where you have to operate your own energy storage is a setup where you can burst the service? I am not sure I get what you are trying to say.
> And it would make sense to sell broadband that way as well, with Mbps rather than amperes. Why can't I get 10Mbps during peak hours and gigabit the rest of the time? And then someone who needs 20Mbps during peak hours can get that and pay twice as much as me, even if I'm using a thousand times more data than them during off peak times.
Well, sure, that certainly could make some sense, but is completely orthogonal to whether billing happens based on Wh/bytes or W/bps? You also could have a time-dependent per-kWh or per-GB price, couldn't you? (And in the case of electricity, you usually can.)
> That's the entire marginal cost of moving a packet. The infrastructure cost is a fixed cost. You don't get it back if you don't send the packet and the network goes underutilized.
Well, infrastructure costs are fixed costs, yes, but infrastructure costs aren't fixed, as any given infrastructure has limited capacity. So, the marginal cost of one additional packet while you are under the capacity limit is a few femtocents (whatever the additional energy cost of moving that packet is, I haven't done the calculation), but the marginal cost of the next packet after you reach the capacity limit of some component is a few, possibly many, thousand bucks. Thus, the marginal cost that has to be used for calculating customer prices is the average over the range of realistic infrastructure expansion, which is quite a bit higher than the energy cost for an additional packet on existing infrastructure.
Also, wouldn't it follow from your argument that internet connectivity should be provided for free to most people? I mean, once you have the network built, it's a fixed cost, and providing service to an additional customer has nearly zero cost in most cases, so shouldn't people be able to sign up for free?
> Except that's not how it works on the large scale. If you have 1000 consumers coming home and using their 3 kW ovens for half an hour "at the same time", you very reliably get nowhere near 3 MW of load on the grid. For one, people do not in fact come home at the exact same time, and then, the exact switching intervals of the oven thermostats are essentially random, so the actual simultaneous load on the grid is pretty close to the thermal loss of all ovens combined, rather than the total peak power that they could consume if synchronized.
Which is why they don't actually need 30 amps of capacity, only something less than that and a battery to smooth out the load. Or for the power company to sell "10 amps capacity" as a five-minute average that allows for temporary surges above that capacity as long as the average stays below it.
You're also focusing on one thing, and not the many other things that are synchronized. Grids often have problems on hot days, why? Because everyone wants to run their A/C at the same time. It'd be fine if half the people would do it at night or three days from now, but the grid needs that much capacity now. If you don't have it right now you can't make it back up over the rest of the month no matter how much people don't use.
> So, the power company calculates the maximum energy that you could use given the service that you are buying, makes up some per-kWh price, multiplies the two, and then bills you that amount minus the kWh that you didn't use, and that gives the power company a good incentive not to have an overloaded grid?
> Could you provide an example where the total of that bill wouldn't be identical to just billing the same per-kWh price multiplied with the kWh used?
You only get money back for not using when the grid is overloaded, which if the power company is doing their job should rarely if ever happen. And if it does it's because most people are using their full capacity, so the people who stop get paid to stop and the people who weren't to begin with get paid to not start.
> Well, that depends on the pricing? If it costs the same to buy 30 amp service and use 10 kWh per day as it costs to buy 10 amp service and use 10 kWh per day, why would I buy a 5 kWh battery?
Which is why per-kWh pricing is problematic. It's better for the grid for you to buy the battery and reduce your peak usage, and $-per-kWh gives you no incentive to do that. So then the grid needs more capacity because you consume more at peak times, which is more expensive for everyone. Meanwhile the cost of that would go on the person who is productively using a lot of zero-marginal-cost power during off-peak hours.
> So, the analogy to a hard rate limit where you have to operate your own energy storage is a setup where you can burst the service? I am not sure I get what you are trying to say.
Suppose you have 10 amp service, but a 5 kWh battery that will charge whenever you're using less than 10 amps. Then if you've been using only 3 amps for 20 hours, your battery is charged and you can run your 3 kW oven for half an hour even though it uses more than 10 amps.
Suppose you have 50 Mbps service, but the rate cap is a rolling average over 60 seconds. Then if you haven't downloaded anything in 60 seconds and you go to download a 100 MB file, you get the whole thing at link speed (e.g. 1000 Mbps) because your 60 second average is still below 50 Mbps so the cap hasn't kicked in yet.
I suppose the closer analogy is that you can do exactly the same for both, i.e. you get a 50 Mbps rolling average or a 10 amp rolling average and you can go over for a few seconds at a time as long as the short-term average stays below the rated capacity.
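A rolling-average cap like this can be sketched in a few lines. This is a simplified model, not any real ISP's shaper: the 60-second window, 50 Mbps cap, and 1000 Mbps link speed are the figures from the example above, and the per-second bookkeeping is my own assumption:

```python
from collections import deque

WINDOW = 60      # seconds in the rolling window
CAP_MBPS = 50    # allowed average rate, megabits per second
LINK_MBPS = 1000 # physical link speed (bursts run at this rate)

def burst_budget_mb(recent_mb_per_sec):
    """Megabytes that may still be sent right now without the rolling
    average exceeding the cap. `recent_mb_per_sec` holds the megabytes
    sent in each of the previous WINDOW-1 seconds."""
    # Cap expressed as total megabytes allowed per window:
    # 50 Mbit/s * 60 s / 8 bits-per-byte = 375 MB per window.
    allowed_mb = CAP_MBPS * WINDOW / 8
    return max(0.0, allowed_mb - sum(recent_mb_per_sec))

# After 60 idle seconds the whole window budget is free, so a 100 MB
# file fits in a single burst at full link speed before the cap bites.
idle_window = deque([0.0] * (WINDOW - 1), maxlen=WINDOW - 1)
print(burst_budget_mb(idle_window))  # 375.0 -> the 100 MB download is uncapped
```

This is the same shape as the battery example: a reservoir of unused capacity accumulates while you're below the rated average and can be drained in a burst.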
> Well, sure, that certainly could make some sense, but is completely orthogonal to whether billing happens based on Wh/bytes or W/bps? You also could have a time-dependent per-kWh or per-GB price, couldn't you? (And in the case of electricity, you usually can.)
Only the ISPs aren't doing that. And even then, it would mean that the off-peak price per-kWh or per-GB should be zero, which it isn't.
But even that doesn't quite capture it, because it's not just about peak hours in a given day, it's about peak hours in a given year. The grid has more load at 7PM in the fall than at noon in the fall, but it has more load at noon on the hottest day of the year than at 7PM in the fall.
If you want to run your A/C on the hottest day of the year or live stream the Super Bowl then the network needs that much capacity on that day, which means it has that much capacity on every day. But none of the other days require capacity to be expanded further to support them.
> Well, infrastructure costs are fixed costs, yes, but infrastructure costs aren't fixed, as any given infrastructure has limited capacity. So, the marginal cost of one additional packet while you are under the capacity limit is a few femtocents (whatever the additional energy cost of moving that packet is, I haven't done the calculation), but the marginal cost of the next packet after you reach the capacity limit of some component is a few, possibly many, thousand bucks. Thus, the marginal cost that has to be used for calculating customer prices is the average over the range of realistic infrastructure expansion, which is quite a bit higher than the energy cost for an additional packet on existing infrastructure.
What you're really getting at is that marginal cost when the network is below capacity is much different than marginal cost when the network is at capacity. But that's the point -- almost all of the time, the network is below capacity and there is immaterial marginal cost.
> Also, wouldn't it follow from your argument that internet connectivity should be provided for free to most people? I mean, once you have the network built, it's a fixed cost, and providing service to an additional customer has nearly zero cost in most cases, so shouldn't people be able to sign up for free?
This is what we generally do with roads. The main cost is the fixed cost and the marginal cost is trivial, so the government pays for them from taxes and everyone can use them for free. This is also why toll roads are very stupid -- you pay the same fixed cost and then discourage use of an already paid for resource with a trivial marginal cost, by charging a non-trivial marginal cost.
But the fixed costs still have to be paid somehow. Having the first customer pay ten billion dollars and all the others pay nothing doesn't exactly work in practice. It's also not how the cost structure works, because a huge part of the infrastructure cost is per-customer -- if you want to service twice as many customers you have to wire twice as many streets.
On the other hand, having everyone who wants the same capacity pay the same monthly fee works pretty well. It still discourages people from signing up compared to the public roads model, and it would be better if it didn't, but probably doesn't discourage very many because the value of having internet service is much greater than the cost.
By contrast, charging high prices per byte at anything other than the all-time peak consumption period does in practice discourage productive use for no benefit.
Sorry, I understand less and less what your suggested billing models would be, and it seems like some of your suggestions are self-contradictory.
On the one hand, you suggest that billing should be based on peak power, because it is supposedly better for the customer to run their own battery, but then it would also be OK for the power company to offer bursting with billing based on peak 5 minute average power, which essentially means that the power company is selling you use of their battery. But then, if they are selling you use of their battery, why only for 5 minutes? What is wrong with them selling use of their battery for all your storage needs? In particular when the power company often has options at their disposal that are far cheaper than actual batteries to achieve the same goal, such as simple averaging over a large number of customers, or moving loads that aren't time critical to otherwise low-load times, which don't need any actual storage at all.
You seem to be focused on incentivizing everyone to flatten their own load curve as a supposed method for optimizing the utilization of capacity. While it is obviously true that if everyone had a flat load curve, global utilization would be maximized, you seem to be simply ignoring the fact that there are some real-world requirements behind people's load curves, so the actual power or bandwidth consumption of each individual cannot be flattened. At best people could pay for some sort of storage (in the case of electricity) that transforms their actual load curve into a flat curve on the grid.

But at the same time, it is perfectly possible to flatten the global load curve without the need to flatten every individual load curve. In fact, you can flatten the global load curve by making some individual load curves more bursty. If you sell electricity cheaper at night, that incentivizes some people to install storage heating, causing them to create an artificial load peak at night--leading to the global load curve becoming flatter.
The ability to increase utilization by combining different load curves is one of the great possibilities of many users sharing a common infrastructure, so why would we possibly want to disincentivize that?!
> On the one hand, you suggest that billing should be based on peak power, because it is supposedly better for the customer to run their own battery, but then it would also be OK for the power company to offer bursting with billing based on peak 5 minute average power, which essentially means that the power company is selling you use of their battery. But then, if they are selling you use of their battery, why only for 5 minutes? What is wrong with them selling use of their battery for all your storage needs? In particular when the power company often has options at their disposal that are far cheaper than actual batteries to achieve the same goal, such as simple averaging over a large number of customers, or moving loads that aren't time critical to otherwise low-load times, which don't need any actual storage at all.
It's two different use cases. It's confusing that we keep talking about the same load, so let's change that.
On the one hand, you have an electric oven. It uses 30 amps, but only while it's heating, which it only does for 30 seconds out of every 90 even when it's in use. So the average is 10 amps. The power company sells you 10 amp service, your average over a few minutes is indeed 10 amps and the power company can average this out with other customers with similar loads, so you don't need to pay more for 30 amp service even though you're periodically using 30 amps for a few seconds at a time.
On the other hand, you have a data center. The servers use 30 amps at all times. It has a UPS, so you've already paid for an inverter, and now you buy twice as many batteries as you would have. Then you order 5 amp peak and 40 amp off-peak service, which is much cheaper than 30 amp all the time service, and run the datacenter on the batteries during peak usage hours. The power company couldn't have averaged this over other customers because the average customer wants to use more at peak hours than off peak, by definition.
And the method by which they get loads to move to other times is by not limiting usage at other times at all, which is what's happening here -- when you order 5 amp service you get 5 amps during peak hours, but during off peak hours you can use however much you like, including to charge your batteries to reduce the peak consumption rate you'd otherwise need to buy.
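The arithmetic in both examples above checks out; here is a quick back-of-the-envelope in code (the 240 V supply and the six-hour peak window are assumptions I've added, everything else is from the comment):

```python
# Oven: draws 30 A, but only 30 s out of every 90 s while cooking.
oven_draw_amps = 30
on_seconds, cycle_seconds = 30, 90
average_amps = oven_draw_amps * on_seconds / cycle_seconds
print(average_amps)  # 10.0 -> a 10 amp service rating covers it on average

# Data center: constant 30 A load on "5 amp peak / 40 amp off-peak"
# service, riding through the daily peak on its own batteries.
load_amps = 30
grid_amps_during_peak = 5
battery_amps_needed = load_amps - grid_amps_during_peak  # 25 A from batteries
peak_hours = 6     # assumed length of the daily peak window
volts = 240        # assumed supply voltage
battery_kwh = battery_amps_needed * volts * peak_hours / 1000
print(battery_kwh)  # 36.0 kWh of storage cycled per day
```

The difference between the two cases is exactly the point: the oven's bursts average out across customers, while the data center's steady load can only be shifted off-peak by someone paying for storage.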
> While it is obviously true that if everyone had a flat load curve, global utilization would be maximized, you seem to be simply ignoring the fact that there are some real-world requirements behind people's load curves, so the actual power or bandwidth consumption of each individual cannot be flattened.
"Making the curve flatter is more efficient" is true even if you don't ever actually completely flatten it. If people use a total of 500GWh during peak hours and 200GWh during off peak hours, and you can get that to 450GWh and 300GWh, you've reduced the required generation capacity by more than 10%.
There are a lot of pricing structures that can achieve this. A lot of them are really just the same thing using different terms. One of the better ones is to price based on "maximum average consumption rate during peak hours", i.e. the most of the resource you're entitled to use over a few minute period during peak hours. What you're going to need to A/C your place during peak hours on the hottest day of the year. Because that's how much capacity they need to build, and then have all the rest of the time too.
But the single thing that all of the good pricing strategies will have in common when there is no marginal cost and you're just amortizing the fixed cost of building capacity is that off peak usage is unmetered. Which is specifically the thing that "GB per month" caps get completely wrong.
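A minimal sketch of what such a tariff could look like in code, assuming a fixed evening peak window and a made-up $12-per-peak-kW rate (neither figure is from the comment; the point is only that off-peak draw never enters the bill):

```python
PEAK = range(17, 22)  # assumed peak window: 5 PM to 10 PM

def monthly_charge(hourly_kw, rate_per_peak_kw=12.0):
    """Bill for the highest draw observed during peak hours.
    Off-peak usage is deliberately unmetered."""
    peak_draws = [kw for hour, kw in hourly_kw if hour in PEAK]
    return max(peak_draws, default=0.0) * rate_per_peak_kw

# A customer drawing 2 kW at 6 PM and 9 kW at 3 AM pays only for the 2 kW:
usage = [(18, 2.0), (3, 9.0)]
print(monthly_charge(usage))  # 24.0 -> the 3 AM draw costs nothing
```

The 9 kW draw at 3 AM is deliberately ignored, which is the "off peak is unmetered" property; a "GB per month" style cap would have charged for it.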
> There are a lot of pricing structures that can achieve this. A lot of them are really just the same thing using different terms. One of the better ones is to price based on "maximum average consumption rate during peak hours", i.e. the most of the resource you're entitled to use over a few minute period during peak hours. What you're going to need to A/C your place during peak hours on the hottest day of the year. Because that's how much capacity they need to build, and then have all the rest of the time too.
Well, it may well be one of the better ones, depending on what you are comparing it to, I guess, but I would think it has a pretty serious flaw: Anyone who has the option to not use any power during peak hours would get to extract value from the grid without contributing anything to its construction or maintenance, so those people or businesses who are in the unlucky position of needing power at certain fixed (peak load) times would end up paying the full cost of building and maintaining the grid that everyone is using. That doesn't exactly sound like a fair way of sharing the costs, does it?
While the cost of building the grid is determined by peak load, it's not like building a grid that could only deliver the base load would be free to build. So, while it makes sense to have a price structure so that users who cause the load to exceed the base load pay for the additional costs of building a higher power grid, I don't see why it is appropriate to make them pay the total cost of building the grid, let alone how that leads to optimal use of resources.
Now, your argument might be that anyone is free to just buy a bunch of batteries for their peak-load needs and thus avoid paying for their electricity (up to the point when everyone does so, so the global load curve becomes flat, thus it's peak load time 24/7, and everyone will start paying based on the energy taken from the grid after all ...)--which is true, of course. But what's the point of that? What is the problem with the grid having storage built-in and billing you for the use of that capability, rather than incentivizing/forcing you to install your own storage? In particular when some forms of storage can be cheaper than batteries, but completely unrealistic for personal use (such as pumped-storage hydroelectric).
And all of that is completely ignoring that a flat load curve isn't actually desirable anymore with renewable sources, as the generation capacity is just not capable of providing that (without massive overprovisioning). In particular the A/C example that was a big problem for traditional power grids has the nice property that you need A/C roughly proportionally to the intensity of sunshine--and luckily, the generation capacity of solar panels is also roughly proportional to the intensity of sunshine. And not only does this mean that additional power from that source is available exactly when it is needed for that purpose--it doesn't even need a stronger distribution system in the case of roof-top solar, as the power is generated right where it is needed, so no need to move it long distances.
> But the single thing that all of the good pricing strategies will have in common when there is no marginal cost and you're just amortizing the fixed cost of building capacity is that off peak usage is unmetered. Which is specifically the thing that "GB per month" caps get completely wrong.
Well, yes, "GB per month" caps don't create any incentive towards a particular shape of the customer's load curve on any scale smaller than a month, that's maybe not quite optimal. But off-peak usage being unmetered is a pretty bad solution as well in that regard, as that obviously doesn't amortize anything, and creates an incentive to avoid certain useful investments because of a free-rider problem (you won't start a business that needs bandwidth at peak times if the fact that you have to subsidize other users of the infrastructure makes your business unprofitable).
Also, while that approach doesn't solve that problem, that doesn't make it useless. Most consumers as a matter of fact have a relatively flat load curve on the scale of a month (both for electricity and for bandwidth), and a cap influences the amplitude of the curve, and thus does influence infrastructure costs. And realistically, at least most bandwidth uses of consumers have little opportunity for incentivizing a flatter load curve. Much of the consumer traffic is videos and streams on demand, which users generally don't want to watch at 3 am, and also don't want to pre-order to watch the next day. Of course, it would still be nice to have the option of buying cheap traffic during the night for uses that can profit from that, which obviously would also benefit the ISPs to some degree.
That cost is only a small part of a power plant. Most of the cost is amortizing the build of the plant, just like an ISP. This is especially true with wind, solar, and nuclear.
It does cost money to move data around, even if it is an upfront cost when you build the infrastructure. When your users move enough data that you are getting close to the capacity of your infrastructure you need to invest more money to extend your infrastructure.
Or to express it slightly differently: if there weren't any data to move you wouldn't have to build any infrastructure and you wouldn't have any costs. If you want to move data you have to build the infrastructure, which costs money, which means that it costs money to move data.
That's no reason to ignore the cost of the infrastructure. And the costs aren't even fixed. You'll eventually have to upgrade the infrastructure to allow for the ever-growing data transfers on ISPs' networks.
Not every single bit adds to the overall cost, but in general, more data sent means more spending.
It's perfectly reasonable to charge for data. Though that doesn't mean Comcast isn't charging for anti-competitive reasons.
> You'll eventually have to upgrade the infrastructure to allow for the ever-growing data transfers on ISPs' networks.
Given the obscene prices that ISPs are already charging, they have enough money to upgrade the infrastructure. They just need to line their pockets less and actually spend money on upgrades.
Internet infrastructure upgrades are very unevenly distributed, and you can find horror stories all over the US. But when I got online in the 1990s, a T1 (1.544 Mb/s symmetric) was extravagant and almost impossible to imagine for a home user. I now have 1 Gb/s symmetric at home.
I doubt whether the entire uplink of my first ISP was 1 Gb/s. Many colleges' uplinks sure weren't.
Somebody's out there making Internet infrastructure better some of the time.
Over the last 20 years, data speeds have increased much faster than most other technologies we generally consider to be the subject of massive competitive investment and improvement.
And most ISPs were sitting doing nothing, until Google Fiber introduced gigabit to the masses. Major execs said on record that "no one needs Gigabit", only to deploy it not long after, because they got scared of GF.
Too bad GF fizzled out, but it had a good positive effect in disrupting slumbering monopolists.
That’s a complete fever dream. FiOS and U-verse fiber launched in 2005 and 2006. In 2002 Comcast’s top speed tier was 3 mbps. In 2009, it was 20, a 7x increase in 7 years. Google Fiber launched in 2010. From 2009 to 2016, Comcast went from 20 to 150, about the same factor of increase as before Google Fiber. Verizon and Comcast announced gigabit upgrades ... after Google Fiber stopped expansion in 2016.
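For what it's worth, that growth claim is easy to check against the speeds cited in the comment (Comcast's top tier: 3 Mbps in 2002, 20 in 2009, 150 in 2016); the compound annual growth rate comes out roughly the same before and after Google Fiber's 2010 launch:

```python
def annual_growth(start_mbps, end_mbps, years):
    """Compound annual growth rate between two speed tiers."""
    return (end_mbps / start_mbps) ** (1 / years) - 1

before_gf = annual_growth(3, 20, 7)    # 2002-2009, pre-Google-Fiber
after_gf = annual_growth(20, 150, 7)   # 2009-2016, after its launch
print(f"{before_gf:.0%} vs {after_gf:.0%}")  # 31% vs 33% -- roughly the same rate
```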
Yeah, Verizon jumped to gigabit to match Google’s marketing. It was a non-event. (After about 100 mbps the shittiness of the modern web stack makes further upgrades pointless unless you’re a huge downloader. I’ve got gigabit FiOS load-balanced with 2-gigabit Comcast fiber at my house. I literally cannot tell the difference from the 150 mbps FiOS I had before, except in speed tests.)
Google Fiber really has nothing to do with these increases. That’s total make believe. There is an upgrade treadmill for DOCSIS that the industry follows, just like for CPU fabrication. And like improvements in fabrication technology, staying on the treadmill requires massive continual investment (faster versions of DOCSIS only work if you keep building fiber closer to the house and decreasing the amount of coax and the degree of fan-out). ISPs were spending that money before Google Fiber, and continue to spend that money now Google Fiber is on life support.
You sound like McAdam. First, he conceitedly claimed that no one needs or will need gigabit so customers can get lost expecting it. And then he "magically" changed his tune. Which is totally the indirect effect of Google Fiber, which affected Comcast and AT&T which affected Verizon. McAdam had to swallow his conceit, shut up and deploy gigabit.
All this "no one needs" bunk falls apart very quickly even at the slightest sight of looming competition. Imagine what could have been if there was real one around.
I'd assume ISPs are taking infrastructure into their cost calculations, even in a case where the initial build out was government subsidized... things break, there are ongoing costs.
And my totally uninformed understanding is that the ongoing maintenance costs, and the costs of expanded service are both pretty minimal.
Both are incorrect. Almost no ISP infrastructure is “government subsidized.” Almost all the subsidies are urban ISPs subsidizing rural ones. There was a tiny bit of actual government subsidy under Obama as part of the post-recession stimulus.
And ongoing maintenance and support costs are very high. Even if you don’t trust Verizon’s SEC disclosures (showing 5% or less in operating profit for wireline) look at the financial statements for something like Chattanooga’s EPB. The vast majority of revenue goes to ongoing costs, before you even get to paying down the initial build out.
Given the severe lack of competition among ISPs in the US, I don't buy the bogus argument that current prices are fair. Simple market logic suggests that they overcharge because they can. Therefore they do have more than enough money for their upgrades.
And on top of that, most simply prefer to pocket the money instead of investing in the network, with the "no one needs it" excuse. Something they would never have done with healthy competition.
Not specifically a symptom of caps! Even in a billing structure that has a fixed cost per unit of data transferred, regardless of usage (thus no cliff-like data caps), a zero rating exemption can still influence user behavior in anti-competitive ways.
It's analogous to product dumping.
(Did you want to say that it's a symptom of the disease of any price structure under which users get charged extra on top of their subscription fee, according to some function of their data use?)
Good thing ISPs don't charge per unit of data transferred. That would be even worse. So in practice, zero rating in the context of ISPs is very much related to data caps. When there are no caps, zero rating has no meaning.
It wouldn't be terrible if they charged per unit of data transferred _at a sane level_, we're just all aware that if they switched to this model (which exists in some places like South Africa) then they'd gouge us over prices... it'd also introduce an interesting social dynamic since sites that push megs of ads on you would literally be taking money from your pocket, instead of just wasting your time.
It'd actually be kind of neat if bloatware were discouraged this way, since currently there is no cost to bloatware as long as whatever it is remains within the acceptably performant range.
$10/GB is high. Purposely high, because the point of Google Fi is to use WiFi as much as possible and avoid cellular. So it's there if you need it but you don't want to need it much.
This hardly works for the connection which is actually providing your WiFi, or if you want to try to use cellular exclusively.
I totally agree that data caps are insane with today's technology and need to die for good. However, at least some zero-rating programs are "category-wide" and will enroll any streaming provider on request.
For some reason, there are places where this necessary evil doesn't seem to be, well, necessary. Neither my wired nor my 4G connection has a data cap.
Taking "necessary" out of necessary evil, leaves just evil.
By that logic, API rate limits are a method to fleece API users... which is not true at all. Rate limits are put in place to protect infrastructure from overloading; that's been the main reason I have implemented circuit breakers and rate limiting.
Except it's fallacious logic. ISP networks can perfectly handle the load already. There is no "data flood apocalypse" or anything like it. ISP execs have said so much explicitly. They point blank admitted that data caps are not driven by technical needs but simply by greed.
In all fairness, T-Mobile’s zero rating is not anti-competitive. Any video carrier can sign up for it and no money changes hands. There were some porn sites that signed up for it - i.e., T-Mobile didn’t discriminate.
It usually is "free-as-in-beer" since they are allowed to sell user statistics that way (the Dutch T-Mobile zero rated music being an exception due to law). There is no way to opt out of that practice as it is included in most contracts by default. Setting up these methods are also not trivial for companies. The selection process is hidden (there is no information available), looking at the participating services most probably don't bother either due to cost, restrictions or administrative reasons. Streaming your own library is excluded in the contract, so it is not music, only what T-Mobile says is music. I call that censorship.
> It usually is "free-as-in-beer" since they are allowed to sell user statistics that way (the Dutch T-Mobile zero rated music being an exception due to law).
Why would the providers need to buy statistics on their customers? They already know who is listening to and watching what content.
> There is no way to opt out of that practice as it is included in most contracts by default.
T-Mobile has a setting where you can turn it off and on.
> Setting up these methods is also not trivial for companies.
Setting up adaptive streaming based on bandwidth available has been a solved problem since RealVideo in the late 90s. All providers do it now. Anyone can set this up with WireCast.
> The selection process is hidden (there is no information available), looking at the participating services most probably don't bother either due to cost, restrictions or administrative reasons.
Every streaming provider in the US took advantage of it.
> Streaming your own library is excluded in the contract, so it is not music, only what T-Mobile says is music. I call that censorship.
You can stream your own audio through Apple Music through the Music Match (?) Service.
This is all really a moot point now that T-Mobile only sells unlimited plans, and if you really want to opt out of compressed video you can pay $10 more.
The need to "sign up" and go through the gatekeeper is a problem already. The whole point of net neutrality is to make sure ISPs aren't going to become such gatekeepers. Whether they charge for passage or not is irrelevant.
From a customer's standpoint, I don’t see the problem. If a company cares about reaching their users they can sign up for the service.
Wireless is different, no matter how much money a wireless carrier is willing to spend, there is a finite limit of how much data can be transferred through the air and only a limited amount of spectrum that is good for cellular service.
The video/music producer can opt out and be treated like all other data and watch their customers prefer providers who went through the trouble of filling out some paperwork or they can be on the same level playing field as their competitors.
From a customer's standpoint, this is exactly the problem.
". In comparison, the countries with prevalent zero rating practices from their wireless carriers consistently saw data prices increase. This makes sense; carriers have an incentive to raise the costs of exploring alternatives in order to make their preferred, zero-rated choice of content more attractive."
Did you even RTFA, or did you just come here to defend T-Mobile?
Yes I did both. The classic zero rating started with T-mobile in the US and no matter what the “F’ing article” said, that in fact didn’t happen with T-mobile. T-mobile is in fact the cheapest of the big four carriers in the US and they have unlimited data. The data cost has gone down over the years.
India does not have Facebook's free internet. We do have a billion people and some of the most competitive mobile service companies. For ₹200 ($3) a month you can have unlimited calls, SMS, and 1 GB of daily data.
This is a relatively recent development! Pretty remarkable the pace of competition within the wireless market, which was relatively stagnant only a few years ago.
I may be overly cynical here but my understanding is:
Facebook tried to build a walled-garden development platform on top of TCP/IP, bundle it with free low-bandwidth internet necessities (consisting of Facebook and several other deliberately non-Google properties), and ram it down the throats of a billion impoverished and technologically unfamiliar new internet users in India.
At that time in 2013[0], FB's market cap was $100B; GOOG's was $282B. FB had 1.1B users with an ARPU of $1.63; GOOG had 1.3B users with an ARPU of $10.09. Looking to avoid a market correction, FB aimed to add 1B new users from India while simultaneously preventing them from becoming new Google users, and disguised the scheme as philanthropy. It didn't work.
FB was then forced to move fast and break: data access control policies, respect for their users, expectations of privacy, and lots of pesky regulations. By distributing user data for free as an investment in the future, then buying the competition to control the demand, FB cemented their position as a gatekeeper of the online commons and dictator of social media.
Insights gained from this freely available, or loosely guarded, user data helped explode demand for the user-manipulation-as-a-service offering FB had newly monopolized.
Gloves now off, FB leveraged this position and achieved hockey-stick profit growth after just one US congressional election season, a midterm year at that.
Unfortunately it's not always represented accurately! This is a self link but if anyone else is curious about some of the specifics of zero rating, its justifications and issues, I wrote a piece on it a little while back:
You missed step 0: Regulate RF spectrum such that setting up a new provider is almost impossible but spectrum is extremely valuable so big providers will have incentives to buy smaller ones to stop competition.
Here is the deal with that, though. Spectrum is finite. There is only so much to go around. There absolutely needs to be regulation, as it cannot be a free-for-all. Just look at wifi in large hotels or apartment/office complexes to see what happens when there are too many devices talking over the air at once.
Sure, but it doesn't mean that the method of regulating it that's used is best - should spectrum be transferred on sale of a company? Is selling spectrum to the highest bidder the best thing for a competitive market? Perhaps larger portions of spectrum could be reserved for smaller players, and bigger players could be left to make more efficient use of their existing spectrum?
Additionally, in any given location most of the spectrum is unused but reserved, smarter devices could take advantage of this, but current regulations prohibit this from becoming a reality. With certain portions of cellular spectrum in particular this could be hugely advantageous to consumers, at the cost of governments who wouldn't be able to make money selling that spectrum.
Yes, we want to utilize spectrum as efficiently as possible. But do we have any better mechanisms to efficiently allocate resources than markets? I'm sure we could improve the spectrum market. Maybe renting for a limited time instead of owning would work better and make squatting more expensive. But that kinda should have been priced in at the sale and I'd not be surprised if that would create perverse incentives in some way as well. I just don't see easy fixes...
Governments are both creating and enforcing a monopoly here, that doesn't sound like any efficient market I've ever heard of. You wouldn't call it efficient if a government came in and decided that they're going to sell the rights to be the only company that can sell cars tomorrow. Yes, cars don't suffer from interference, but the basic point still stands - the way the current regulatory system works leads directly to government enforced monopolies.
There are alternative solutions - like opening up wide swaths of spectrum to smarter devices that are able to share that spectrum broadly.
Government enforced monopolies present the worst of both corporate and regulatory worlds - a disaster for the consumer. Shouldn't we care more about the market for services the consumers get than the market for spectrum that providers buy?
> You wouldn't call it efficient if a government came in and decided
That is what happens for every natural resource. Your gov establishes rules for land and ground water just like that. Actually, even for IP and patents...
I don't see how you get from "gov regulated process of resource allocation" to "gov enforced monopolies". Yes, we should care about the value generated to consumers. The theory is that companies use exactly this money from consumers to get the spectrum.
> There are alternative solutions - like opening up wide swaths of spectrum to smarter devices that are able to share that spectrum broadly.
That only works with strong regulation (I'd guess you don't want gov involved in details?) or in situations where cooperation will always win out. Otherwise you'd usually end up with some kind of Tragedy-of-the-commons / Prisoner's-dilemma situation. Like every neighbor here upping their WLAN power...
I didn't mean to suggest that no regulation was the superior option - just that the market isn't efficient, because it can't be operated efficiently. The limitations on distribution of spectrum make it an inherently inefficient market. Unlike with another natural resource like, say, tree pulp or oil, I can't just import some cell service if I don't like your prices; that's why I call this a government enforced monopoly. If a government decides to distribute an immovable resource in a way unfriendly to competition, the result is a broken market.
Patents and copyrights are an intentional breaking of this market to encourage innovation - explicitly with the intention of creating a temporary monopoly. One can hope that government doesn't go out with the goal of creating a monopoly on cell service.
However there is good and there is bad regulation. During the UMTS frequency auction the German finance minister joked about UMTS meaning "Unerwartete Mehreinnahmen zur Tilgung von Schulden" (unexpected additional income to repay debt) and tried to maximize financial gain.
A good regulation would be one which ensures competition, for instance by ensuring infrastructure in rural areas can be used by multiple companies instead of making entry into the market expensive.
The ITU bands (e.g. 900MHz, 2.4 GHz) are way too small. If we had larger, national or global, unregulated bands, it would drive wireless innovation to even greater heights.
As it is now we have "innovators" who have access to private bands, and should know better (the cellular telecoms) threatening to trash the ITU bands with 5G coverage. It's a travesty since the result will be cellular Big Co basically squashing your Wi-Fi and the smaller players who can't afford private spectrum.
What's wrong with low prices, though? Those fighting anti-trust enforcement claim that it restricts the free market, while in fact these liars are themselves restricting it through monopolization.
How much of this is due to the fact that in places with cheap wireless, zero rating won't be as effective at attracting customers as it would be in a place with expensive wireless - hence you are more likely to see it offered in places with expensive wireless?
For instance, T-Mobile's "Music Freedom" zero-rates a whole bunch of music streaming services. In the US, where data is expensive, that could easily cause someone to pick T-Mobile over one of the other providers, if they listen to a lot of music. With "Music Freedom" I can get by on the smallest data plan. Without it, I'd have to step up, maybe even to unlimited.
In a country where data is cheap, something like "Music Freedom" wouldn't make much difference, and so I could see fewer ISPs bothering with the technical and administrative overhead of having such a program.
This is putting the cart before the horse. Competition is what drives down prices. When companies aren't allowed to zero rate content then they're all offering more or less the same product so they have to compete with each other on price.
Also keep in mind that zero rating is itself an explicit admission that network capacity and overhead aren't factors in the price. The whole deal is that the wireless company lets customers on those plans use unlimited data at no extra charge as long as it's for zero rated content. Allowing customers at that same price point to use that same unlimited data without arbitrary restrictions would ultimately be just as profitable.
Isn't low speed steady traffic easier for the network to handle than bursty high speed traffic? Someone streaming music for 8 hours will use about a gigabyte of data total but only needs a 256 kbits/second connection. The network could deliver that over older 3g infrastructure.
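A quick back-of-the-envelope check of those numbers:

```python
# Data used by an 8-hour music stream at 256 kbit/s.
bitrate_bps = 256_000            # 256 kbit/s
seconds = 8 * 3600               # 8 hours = 28,800 s
total_bits = bitrate_bps * seconds
total_gb = total_bits / 8 / 1e9  # bits -> bytes -> gigabytes
print(f"{total_gb:.2f} GB")      # -> 0.92 GB, i.e. "about a gigabyte"
```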
The same thing could be accomplished without zero-rating if they made their data plans something like N gig of high speed data and unlimited low speed data, and then the phones provided some way for applications to specify whether a given connection should use high speed data or low speed data.
For some applications that would be easier for the developer. Music streaming apps could always ask for a low speed connection. But what about file download apps? Whether they should use my limited high speed data or my unlimited low speed data is probably not something the app can determine on its own, because it depends on how much of a hurry I'm in. So a lot of apps would probably need to expose this decision making to the user.
That wouldn't have any net neutrality issues, but I bet it would be a UI nightmare.
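As a sketch of what that app-facing choice might look like (all names here are hypothetical; no carrier actually exposes such an API):

```python
from enum import Enum

class DataClass(Enum):
    HIGH_SPEED = "high"  # metered: draws from the N-gig high-speed bucket
    LOW_SPEED = "low"    # unmetered but throttled (e.g. 256 kbit/s)

def choose_class(user_in_a_hurry: bool) -> DataClass:
    """A file downloader can't decide alone, so it surfaces the choice."""
    return DataClass.HIGH_SPEED if user_in_a_hurry else DataClass.LOW_SPEED

def open_connection(url: str, data_class: DataClass = DataClass.LOW_SPEED) -> dict:
    """Hypothetical carrier hook: tag a connection with the bucket it bills to."""
    return {"url": url, "bucket": data_class.value}

# A music streaming app can always default to the cheap bucket:
conn = open_connection("https://stream.example/track")

# A download app asks the user whether to burn high-speed quota:
fast = open_connection("https://files.example/big.iso",
                       choose_class(user_in_a_hurry=True))
```

The UI problem is visible even in this toy version: every app that can't guess the right bucket has to bother the user with the question.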
(Actually a lot of carriers do kind of do that. A lot of unlimited plans are N gigs high speed unlimited low speed, except rather than trying to optimize which is used on a per connection basis it simply uses high speed until you've run out and then uses low speed for the rest of the month).
>Also keep in mind that zero rating is itself an explicit admission that network capacity and overhead aren't factors in the price.
No, that's not what that means. You can easily take special means to get direct peering to zero rated partners or install CDNs so that zero rated traffic doesn't have any impact on peering links. Congestion at the last mile is only a small part of what an ISP deals with.
For wireless carriers it’s all about the last mile. You can only get a certain amount of data within a certain amount of spectrum. Yeah I know I’m butchering the explanation. It’s been over 20 years since I studied it.
If zero rating wasn't allowed then the ISP would still be doing that sort of thing with popular content providers anyway, just the ones that their users prefer instead of the ones their users are being railroaded onto by the ISP itself, so as far as I'm concerned it's a wash.
Why is data expensive in the USA? T-Mobile could charge less instead of zero-rating. Why don't they? (Answer: they want to soak businesspeople, who can expense more than home/consumer users, and people with obscure Internet hobbies are collateral damage?)
The US is a big ass country with a relatively small population. When you get a data plan from t-mobile, they promise it will work well even in many remote rural areas. That increased coverage has a cost.
You have it all backwards. "Data" isn't a thing that somehow has a natural location-dependent cost. The cost for a mobile provider is infrastructure. They have to buy devices, install them somewhere, and supply them with energy. None of those is in any way inherently much more expensive in the US than in other developed countries. It's only the price that you pay that is expensive, and the reason for that is zero-rating.
While it is interesting to read this, there are a lot of confounding variables. Chances are countries which would allow zero rating in the first place would also be more tolerant of other incarnations of excessive market power.
Note that in spite of my opposition to net neutrality, I strongly support using traditional antitrust mechanisms to prevent firms' excessive market power and last mile monopolies from leading to unfair prices.
I had the same thought. The first paragraph concludes, "And the evidence is in that it conclusively makes broadband more expensive," but this seems a bit much given the confounding variables you mention.
I get the feeling that the authors of this report are not entirely honest. For example, part-way through they make this claim about Portuguese operator MEO's plans:
"Using applications participating in the DPP is two up to 77-fold cheaper compared to using applications via general data volume. This strong incentive for customers to use participating applications infringes on the rights of consumers to use applications of their choice and the rights of CAPs to provide services independent of the origin of their users."
Up to 77 times more for neutral data than data to their partners - sounds scary, but how do they get that figure? Well, they take MEO's smallest month-to-month contract which offers 250 minutes + SMS + 500 MB of data + free in-network calls, divide the total cost by the amount of data, and compare this with the nominally 10 GB Smart Net addon which only offers data to the included services. That is, they're treating the phone and SMS part of the all-internet plan as though it costs nothing when it definitely does not.
I think the two-fold cheaper figure on the lower end is wrong too - on paper the non-neutral Smart Net is more like three times cheaper than comparable prepaid data, at least for people who make good use of the Smart Net data limit. Bear in mind that as I understand it each Smart Net plan is for access to one of Messaging, Social, Video, or Music, which includes a handful of the main sites in that category. I imagine most people will have usage that is relatively low and spread across multiple categories plus some outside-of-package usage, in which case a general internet access plan will work out cheaper.
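To make the accounting objection concrete, here's a sketch with purely illustrative numbers (the real MEO tariffs aren't given in this thread, so the prices below are made up; only the method mirrors the report's):

```python
# Hypothetical tariffs, for illustration only.
bundle_price = 20.0  # EUR/month: 250 min + SMS + 0.5 GB bundle
bundle_gb = 0.5
addon_price = 7.0    # EUR/month: 10 GB single-category addon
addon_gb = 10.0

# Report's method: attribute the WHOLE bundle price to the 0.5 GB of data.
naive_ratio = (bundle_price / bundle_gb) / (addon_price / addon_gb)

# Fairer method: attribute only part of the bundle price to data,
# since the bundle also includes calls and SMS (here, half, arbitrarily).
data_share = 0.5
fair_ratio = (bundle_price * data_share / bundle_gb) / (addon_price / addon_gb)

print(naive_ratio, fair_ratio)  # the naive ratio is exactly double the fair one
```

Whatever share you assign to voice and SMS, pricing it at zero inflates the per-gigabyte ratio by exactly that factor.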
Does anyone know who regulates Comcast/Xfinity in California? I'm a cord cutter and with 4K video becoming more popular, I've almost hit my 1TB data cap twice in the last year. Comcast/Xfinity is illegally promoting their video services by delivering it over the same network, but zero rating their content, while charging customers overage fees for using 3rd party video services like Netflix, Hulu, and Amazon Prime Video.
I filed a complaint with the California PUC and they told me they don't regulate Comcast/Xfinity because they are not a landline telephone service.
It seems horrible that there might not be any regulator that is keeping Comcast/Xfinity from harming consumers like myself.
It seems like the problem as you're phrasing it is that they are deliberately harming consumers -- it costs them the same to deliver the content whether via Netflix/Hulu or their own services, but they are choosing to overcharge customers to force them into their own service.
An alternative way of looking at it is that their own service is offering a subsidy -- bumping you up implicitly to a higher tier in exchange for using their service.
In the latter case, it's not really abusive; they're offering an enhancement; the "actual" price is the next tier up, but you're getting a "discount" in exchange for preferring their own video service.
Which is to say that this is awful behavior, and nothing is more frustrating than the fact that (the lack of) competition makes it so that it is nearly impossible to switch to another internet provider that does not engage in this kind of monkey business. The other problem is that I think many customers prefer this model -- they're willing to make the sacrifice and get the subsidized package rather than shell out the extra money. I'm certainly guilty of this; avoiding paying minuscule subscription fees for websites even though I hate the ads. The value of my hatred is still lower than the cost of the subscription.
I'm aware of net neutrality, but I'm simply asking what recourse we as consumers have.
The current FCC may be too industry friendly, but it will not be that way forever. We, as consumers, need to continue to fight against these monopoly providers that are harming us.
> Comcast/Xfinity is illegally promoting their video services by delivering it over the same network, but zero rating their content, while charging customers overage fees for using 3rd party video services like Netflix, Hulu, and Amazon Prime Video.
Given the repeal of federal net neutrality regs and California putting its net neutrality rules on hold pending the result of a federal lawsuit, under what active law is this illegal?
It may not be illegal, but they're using their monopoly provider position (they are the only broadband provider in my city) to harm consumers, so it needs to stop.
IMHO net neutrality and zero rating are just the tip of the regulatory iceberg.
The really hard work of the regulator is to ensure that telcos don't abuse their access to spectrum and other resources. IMHO the best way to do this is to force telcos to give each other access to their infrastructure at a reasonable price. For example, when margins* are high enough, new virtual telcos must be able to start up with minimal infrastructure.
The consumer side does not need a lot of regulation. If there is enough competition, consumers will vote with their money.
That is not the best way. Defining “reasonable price” is extremely difficult and that’s one thing the market does much better than the government. The history of regulation in the 20th century, not just in the US but all over the world, is a pattern of governments ditching the idea that they can calculate the “reasonable price” and impose price controls, and moving to more market-oriented mechanisms for regulation.
In the case of wireless, where there is no natural monopoly, the best approach is to simply open up lots of spectrum and ensure there are a sufficient number of competing carriers. There is a ton of spectrum being wasted for things like television that could be used for broadband instead.
It just seems quite wasteful to me that so much telco infrastructure is duplicated (redundant): for example, digging along the same street more than once to lay fiber, or having cellular towers from different companies next to each other.
On my last visit to the US, I used both "Straight Talk" and "Trac Phone". These are virtual telcos that use Verizon/AT&T and T Mobile infrastructure. New customers can choose a SIM card before activating the service.
Surely the prices paid by these virtual telcos are set by the regulator.
Here in South Africa, the third and fourth mobile operators roam on the first and second mobile networks. AFAIK the regulator forces these roaming agreements upon the operators. (Here I can get 50 GB of prepaid data for only R500 = $38. Much cheaper than the US!)
> It just seems quite wasteful to me that so much telco infrastructure is duplicated (redundant): for example, digging along the same street more than once to lay fiber, or having cellular towers from different companies next to each other.
That was the basis of a lot of early 20th century regulatory thinking. That was thoroughly discredited because it turns out that government price controls are worse than duplicated infrastructure.
Governments only have an incentive to keep prices low, which is not the same as the economically efficient price. From the government's point of view, it's better to have cheap 3G networks forever than to have prices that lead to investment into 4G and 5G networks. Low prices kill innovation, which is why all the innovation is happening in iPhones and Macs and Surface tablets and not cut-rate Acer and HP products.
MVNO pricing is not set by regulators. The FCC’s last major foray into rate regulation, in connection with DSL loop unbundling, ended up killing DSL as a viable competitor to cable because wholesale prices were set so low there was no incentive to upgrade DSL networks.
> That was the basis of a lot of early 20th century regulatory thinking. That was thoroughly discredited because it turns out that government price controls are worse than duplicated infrastructure.
You don’t have to take my word for it. Read up on the history of the Interstate Commerce Commission and the insane things it used to do (regulate rates for air and truck freight). Read how companies like UPS came into existence after those things were deregulated. Read how most of the western world followed suit in the 1980s and 1990s.
Especially when you consider that in many western countries people are experiencing congested lines at peak hours. Duplicated infrastructure in these cases is not a negative but a positive.
So without duplicated infrastructure, we would be experiencing even more congestion - so I don't get what you're not buying.
Obviously price per gigabyte isn't the only parameter. Things like latency, uptime and maximum throughput should also be specified. Densities in rural areas are lower, so the reasonable price per gigabyte will be higher.
Ideally wholesale prices should be set before the infrastructure is built. Then telcos know what they get themselves into.
--
In South Africa, DSL was never unbundled. But it's still in decline due to competition from 4G and fiber.
--
IMHO a lot of innovative business models are built on zero rating. Based on what the article says about prices and your thoughts on innovation, zero rating may actually be a good thing.
That’s not great either. It leads to a lowest common denominator approach, because government is over-sensitive to affordability. That’s what happens with our water and sewage systems. Rates are about half of what they need to be in order to adequately maintain and upgrade that infrastructure. The result is kids being poisoned with lead and raw sewage being dumped into rivers. But nobody in an elected position wants to raise water rates on grandma with her fixed income.
That's the thing, the government isn't charging people directly, they just manage the hardware, individual companies can charge their own rates on top of that hardware.
But the government has to build the hardware. And that costs money, and a massive amount of evidence shows that the government will underinvest in that hardware.
When I travel to Japan or Germany, it makes me cry to come back to American public roads and trains. But not privatized American broadband. We’re doing something wrong, and it isn’t that we give government insufficient involvement in the construction of infrastructure.
In Germany we had state-owned train infrastructure for the longest time, and the process of privatization isn't complete yet. The problem with privatization is that unprofitable regional areas are not going to get any investment and will basically be excluded.
While there were some improvements, customers in general do not get better conditions today. I think overall using a train got more expensive - to the degree that you should think about using a plane instead.
Infrastructure investments were low before and are low right now.
> IMHO the best way to do this is to force telcos to give each other access to their infrastructure at a reasonable price.
I believe this is what we in America had with dial-up and DSL up until about 2005, when DSL was reclassified as a title I information service. I seem to recall having a hell of a lot more ISP choices/competition in those days.
Yeah, but they just plain don't like to invest in infrastructure, period. It is an expense which takes a long time to pay off, and publicly traded corporations are heavily pressured into short-termism.
Reading the comments below, it's really funny to hear all this bull about caps being there to protect the providers and their lack of capacity.
I have a 300 Mbit link for 50 USD and no caps.
I have full LTE and a 2 TB downlink cap for 25 USD.
But we do have strong competition, and it seems it was never about capacity. It's about who offers more. It's obvious that they can afford this, since nobody is losing money - and all of this on a very small and marginal market where ISPs' purchasing power is small.
What if it's that countries with more expensive wireless have zero rating? If wireless is cheap, you can afford to pay for it, so zero rating would have no 'clients'.
This report is particularly suspect. First of all, zero-rating is not banned in the EU, and it's not clear which countries are included in the "has zero-rating" basket. The more likely interpretation of their data is that "in countries with shitty internet, providers tend to offer a lot of zero-rating offers". E.g. the internet is vastly better in Romania than in Greece, yet they both have a lot of "differentially rated" offers.
Then they only show two years, 2015 vs 2016, where there is a slight increase of 2% in prices, without error bars. Then there is this:
> we repeated our analysis for zero-rating offers introduced in 2016 or 2017. However, initially this did not produce statistically significant results in any category. Closer examination of the data however revealed Finland to be an outlier market, in which the replacement of a single offer significantly changed the prices in almost all data volume baskets. This is likely due to the fact that unlimited data plans, which do not sensibly admit a price per gigabyte calculation, are prevalent in Finland. We therefore repeated the analysis but excluded Finland from our dataset. In this case, we found a statistically significant result (p=0.04) for markets in which zero-rating was introduced between 2015 and 2016. These markets showed a 1% price increase between 2016 and 2017, whereas markets without zero-rating in both cases showed a 10% price decrease.
I think they are stretching it with p=0.04 on a cherry-picked sample of n=30, and they present a rather peculiar conclusion about their data. Zero rating is obviously marketing garbage, but I am very unconvinced that it is the reason why ISPs are not investing in their networks.
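For what it's worth, how fragile an outlier-dependent p-value can be is easy to demonstrate with synthetic data. The sketch below (pure stdlib; the price-change numbers are made up and have nothing to do with the report's actual data) runs a simple two-sample permutation test, then adds a single Finland-like outlier and re-runs it:

```python
import random

def perm_test_pvalue(a, b, iters=10_000, seed=0):
    """Two-sided permutation test for a difference in group means (stdlib only)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(iters):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / iters

# Synthetic year-over-year price changes (%), roughly n=30 as in the report.
zero_rating = [1, 3, -2, 4, 0, 2, 5, -1, 2, 3, 1, 0, 4, 2, -3]
no_zero_rating = [-10, -8, -12, -5, -9, -11, -7, -6, -10, -4, -8, -9, -13, -6, -7]

p_clean = perm_test_pvalue(zero_rating, no_zero_rating)

# Add one extreme Finland-like market and re-test: the p-value inflates badly.
p_outlier = perm_test_pvalue(zero_rating + [-60], no_zero_rating + [0])
print(p_clean, p_outlier)
```

A single extreme market swamps the variance and can drag a result across the 0.05 line in either direction, which is exactly why "significant only after excluding Finland" deserves skepticism.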
(It also took 10 minutes to download their 5MB pdf - talk about bad internet ;) )
It's been slow but Telia (first proper zero rating court case in the EU afaik) lost in the national courts in September, with the courts referring to EU regulation from 2015.
After reading this article, I have come to view zero-rating as a form of branding. Basically, the internet providers are trying to take a step away from the forces of commoditization. Once I viewed it that way, it's pretty predictable that the price for the same exact service will be higher than if zero-rating were disallowed.
I have my doubts whether this is an apples-to-apples comparison. The EU tends to have extreme regulation and also lofty subsidies. Nevertheless, I agree with the assertion that zero-rating is an anticompetitive practice. Actually, rating in general just kind of stinks. Billing this way has led to the current situation.
Highly anecdotal but while visiting Lithuania I had a prepaid SIM card with 200 local call minutes and 6GB of (quite fast) LTE data + unlimited Facebook (including Messenger) and Spotify for 3 Euros for 30 days, SIM card included.
It can certainly be much cheaper. I pay 17€/month for "unlimited" LTE (includes 9GB EU roaming) and this at ~75 Mbit/s up/down speeds in most cases. Now it's hard to say what "unlimited" actually means in numbers, as it's just defined as reasonable usage in the terms of service. However as another data point, the next cheapest plan is 100GB LTE for 15€/month, so I assume unlimited is at the very least above 100GB.