
It doesn't really cost money to move data around. The biggest cost by far is building the infrastructure. If the capacity is available, the cost of using it is minimal.



Infrastructure has finite capacity, especially on cellular networks. You need some way of managing network utilisation fairly, otherwise everyone's experience will be severely degraded. To my mind, it seems obviously unfair to force light users to cross-subsidise heavy users.


The problem is simultaneous usage, not "heavy" usage as in consuming a lot of data each month. If the network can't handle increased simultaneous usage, no amount of data caps will fix that.

The only thing that can be viewed as network management is bandwidth limiting based on the current network load, i.e. a real-time bandwidth limit.

Limiting monthly data has nothing to do with network management IMHO and is simply a method to fleece users and push them to pay more for less-capped plans.


Limiting bandwidth for everyone just makes the experience worse for everyone. If someone downloads 1 GB over an hour or over a minute, it makes no difference to your network. The amortized load on your network is the same.

This is why the trick is to deliver data ASAP to low-usage users, and throttle high-usage users to limit the total load they take up. This is of course a problem if ISPs lie and advertise "unlimited" data.
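A toy sketch of the idea (the interface and numbers are made up for illustration, not any ISP's actual scheduler): weight each user inversely to their recent usage and split a congested link accordingly.

    def allocate(capacity_mbps, demands_mbps, recent_gb):
        # Weight each user inversely to recent usage, then share the link in
        # proportion to weight, never giving anyone more than they asked for.
        # (Leftover capacity from capped users isn't redistributed in this toy.)
        weights = {u: 1.0 / (1.0 + recent_gb[u]) for u in demands_mbps}
        total = sum(weights.values())
        return {u: min(demands_mbps[u], capacity_mbps * weights[u] / total)
                for u in demands_mbps}

    # 100 Mbps link, three users each asking for 80 Mbps: the light user gets
    # full speed, the heavy users absorb the throttling.
    print(allocate(100, {"a": 80, "b": 80, "c": 80},
                   {"a": 1, "b": 50, "c": 500}))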


> Limiting bandwidth for everyone just makes the experience worse for everyone.

Sure it does, but not limiting it will make the network die even faster if it can't handle said simultaneous usage.

Data caps on the other hand can't help with that at all. I.e. users with such caps connecting at the same time will produce the same negative effect as ones without caps.

The only proper solution, if the network is plagued by congestion all the time, is to build it up. Once it's congested, it's already too late and you can only make it degrade gracefully until it's built out.


Simultaneous usage and heavy usage are the same thing in practice. Most people watch YouTube videos at times of the day when everyone else is watching YouTube too. Data caps make users consider whether or not to watch a video via their wireless connection, and hence reduce peak usage.

Telecom companies could work around this by making it cheaper to use data in off hours, but that makes it complicated. Most electricity providers don't have that for consumers either, even though it's actually super expensive to temporarily shut down many forms of electricity generation (and they can't keep them running either, because that would fry the grid).


It's not the same, though. The idea that data caps prevent simultaneous usage is a fiction. They might affect it, but they can't prevent it. So they're not a solution, just a money grab.


Paying for electricity per kWh is just fleecing the customers, too, then? After all, the problem is simultaneous usage, not "heavy" usage as in consuming a lot of electricity each month.

I mean, seriously? Just because it is possible for someone to buy a 1 TB package and then use it all on the first day of each month ... doesn't mean that most average people don't have relatively predictable usage patterns and relatively constant use throughout the month, does it?


Electricity has a real per-kWh cost to produce.

Packets have, at best, a nominal per-Mbps cost to transmit.


Renewable energy is making that less the case.


> Electricity has a real per-kWh cost to produce.

Like, what? What do you consider a "real" cost of producing a kWh?


The all-in costs of whatever you had to burn (literally or not) to produce that kWh?

Packets do not have a meaningful marginal cost. Metering is a crap model.


> The all-in costs of whatever you had to burn (literally or not) to produce that kWh?

OK, so in the case of wind and solar power, what is burned (not literally) is hydrogen (being fused into helium), which no one is paying for, so there is zero cost for what is being burned to produce each kWh of wind or solar power. So, if you are being billed per kWh for wind or solar power, that's fleecing the customer then?

Also, for coal, the coal itself costs about 3 cents per kWh, and you pay at least three times that for electricity, usually more. Is that also fleecing the customer?

> Packets do not have a meaningful marginal cost. Metering is a crap model.

What do you mean by "meaningful"? I mean, either they have a marginal cost or they don't, right?


> So, if you are being billed per kWh for wind or solar power, that's fleecing the customer then?

At any time other than a load emergency, yes. The ideal pricing mechanism for a 100% renewable grid would be to pay a flat rate based on your service size (100 amp, 200 amp, etc.) and then get paid by the grid for dropping your load when the grid is overloaded. That would give the power company the incentive to prevent overloads by providing adequate capacity.

> Also, for coal, the coal itself costs about 3 cents per kWh, and you pay at least three times that for electricity, usually more. Is that also fleecing the customer?

That price doesn't include externalities. It can make sense to use pricing to discourage usage of resources with otherwise unpriced negative externalities.

> What do you mean by "meaningful"? I mean, either they have a marginal cost or they don't, right?

Network equipment may use more electricity when forwarding packets than not. But when this is in the amount of pennies per TB of data, the accounting overhead of keeping track of it would exceed the measured dollar amount.
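To put a rough number on that, both inputs below are assumptions for the sake of illustration, not measured values:

    # Suppose forwarding one extra GB costs the network on the order of
    # 0.1 Wh of electricity, bought at $0.10 per kWh (assumed values).
    wh_per_gb = 0.1
    usd_per_kwh = 0.10
    gb_per_tb = 1000

    usd_per_tb = (wh_per_gb * gb_per_tb / 1000) * usd_per_kwh
    print(round(usd_per_tb, 4))   # 0.01: about a cent per TB moved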


> At any time other than a load emergency, yes. The ideal pricing mechanism for a 100% renewable grid would be to pay a flat rate based on your service size (100 amp, 200 amp, etc.) and then get paid by the grid for dropping your load when the grid is overloaded. That would give the power company the incentive to prevent overloads by providing adequate capacity.

So, do I understand you correctly that someone who uses their electric oven (so, ~ 3 kW) once a day for half an hour but uses no other electricity should pay the same total as someone who runs some 3 kW machine 24/7, unless the grid is overloaded, in which case the latter pays less?

> That price doesn't include externalities. It can make sense to use pricing to discourage usage of resources with otherwise unpriced negative externalities.

Except that's not what is happening here?

> Network equipment may use more electricity when forwarding packets than not. But when this is in the amount of pennies per TB of data, the accounting overhead of keeping track of it would exceed the measured dollar amount.

Except that that's a minor part of the marginal cost of moving a packet?


> So, do I understand you correctly that someone who uses their electric oven (so, ~ 3 kW) once a day for half an hour but uses no other electricity should pay the same total as someone who runs some 3 kW machine 24/7,

Yes, exactly. Because they both want to come home and use 3 kW for a half an hour at the same time, so the grid needs 6 kW of capacity just then. But if it has 6 kW just then with a source that generates 24/7 the same amount at no marginal cost, the person who is also using that amount the whole day isn't costing the power company anything more. The capacity was needed for that half hour regardless of what you do the rest of the day, so why should what you do for the rest of the day change what you pay?

> unless the grid is overloaded, in which case the latter pays less?

No, you get compensated for not using what you paid for, not for not using what you normally use. If they're both not using then they both get paid. Obviously this gives the power company a good incentive not to have an overloaded grid.

Now in practice what's going to happen is that the person who uses 3 kW for a half hour a day isn't going to buy 30 amp service for that, they're going to get 10 amp service and a 5 kWh battery that can run their oven or clothes drier from time to time and then slowly charge back up over 24 hours.
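Rough numbers for that setup, assuming 240 V service (the voltage and the once-a-day oven run are my assumptions):

    oven_kwh = 3.0 * 0.5            # 3 kW oven for half an hour = 1.5 kWh
    battery_kwh = 5.0               # covers several oven runs
    service_kw = 10 * 240 / 1000    # 10 amp service at 240 V = 2.4 kW
    recharge_kw = oven_kwh / 24     # spread over a day: ~0.06 kW average

    print(oven_kwh, battery_kwh, service_kw, recharge_kw)
    # Topping the battery back up is a rounding error next to 2.4 kW of
    # service, so the 10 amp customer never notices the oven.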

The analogy for broadband is you pay for 50Mbps service but it will burst up to 500Mbps for a few seconds at a time, so the person who doesn't use 24/7 has de facto 500Mbps instantaneous service for the price of 50Mbps.

And there are some complexities in power generation that don't apply to broadband, e.g. grid-scale batteries and the fact that solar doesn't actually generate 24/7. But even that doesn't really change that much, especially if you tie the service level to time of day, e.g. you can order 10 amp from 4PM to 10PM and 100 amp from 10PM to 4PM and that costs a lot less than 100 amp 24/7, but it means you're not allowed to use more than 10 amps during peak hours.

And it would make sense to sell broadband that way as well, with Mbps rather than amperes. Why can't I get 10Mbps during peak hours and gigabit the rest of the time? And then someone who needs 20Mbps during peak hours can get that and pay twice as much as me, even if I'm using a thousand times more data than them during off peak times.

> Except that's not what is happening here?

It's not administratively what's happening with power generation, but it's de facto what's happening, and so there isn't a lot of cause to change it just because we're doing something sensible and calling it something else.

By contrast, there is no comparable negative externality caused by sending data.

> Except that that's a minor part of the marginal cost of moving a packet?

That's the entire marginal cost of moving a packet. The infrastructure cost is a fixed cost. You don't get it back if you don't send the packet and the network goes underutilized.


> Because they both want to come home and use 3 kW for a half an hour at the same time, so the grid needs 6 kW of capacity just then.

Except that's not how it works on the large scale. If you have 1000 consumers coming home and using their 3 kW ovens for half an hour "at the same time", you very reliably get nowhere near 3 MW of load on the grid. For one, people do not in fact come home at the exact same time, and then, the exact switching intervals of the oven thermostats are essentially random, so the actual simultaneous load on the grid is pretty close to the thermal loss of all ovens combined, rather than the total peak power that they could consume if synchronized.
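A quick simulation makes the point; the one-third duty cycle and the sample count are made-up illustration values:

    import random

    OVENS = 1000
    RATED_KW = 3.0
    DUTY_CYCLE = 1 / 3       # heating element on a third of the time (assumed)
    SAMPLES = 10_000         # independent snapshots of the grid

    worst = 0.0
    for _ in range(SAMPLES):
        on = sum(random.random() < DUTY_CYCLE for _ in range(OVENS))
        worst = max(worst, on * RATED_KW)

    print(f"worst sampled load: {worst:.0f} kW out of a possible "
          f"{OVENS * RATED_KW:.0f} kW")
    # Typically lands somewhere around 1100 to 1200 kW, i.e. near the
    # 1000 kW average, nowhere near the 3000 kW you'd see if every oven
    # switched on at once.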

> No, you get compensated for not using what you paid for, not for not using what you normally use. If they're both not using then they both get paid. Obviously this gives the power company a good incentive not to have an overloaded grid.

So, the power company calculates the maximum energy that you could use given the service that you are buying, makes up some per-kWh price, multiplies the two, and then bills you that amount minus the kWh that you didn't use, and that gives the power company a good incentive not to have an overloaded grid?

Could you provide an example where the total of that bill wouldn't be identical to just billing the same per-kWh price multiplied with the kWh used?

> Now in practice what's going to happen is that the person who uses 3 kW for a half hour a day isn't going to buy 30 amp service for that, they're going to get 10 amp service and a 5 kWh battery that can run their oven or clothes drier from time to time and then slowly charge back up over 24 hours.

Well, that depends on the pricing? If it costs the same to buy 30 amp service and use 10 kWh per day as it costs to buy 10 amp service and use 10 kWh per day, why would I buy a 5 kWh battery?

> The analogy for broadband is you pay for 50Mbps service but it will burst up to 500Mbps for a few seconds at a time, so the person who doesn't use 24/7 has de facto 500Mbps instantaneous service for the price of 50Mbps.

So, the analogy to a hard rate limit where you have to operate your own energy storage is a setup where you can burst the service? I am not sure I get what you are trying to say.

> And it would make sense to sell broadband that way as well, with Mbps rather than amperes. Why can't I get 10Mbps during peak hours and gigabit the rest of the time? And then someone who needs 20Mbps during peak hours can get that and pay twice as much as me, even if I'm using a thousand times more data than them during off peak times.

Well, sure, that certainly could make some sense, but is completely orthogonal to whether billing happens based on Wh/bytes or W/bps? You also could have a time-dependent per-kWh or per-GB price, couldn't you? (And in the case of electricity, you usually can.)

> That's the entire marginal cost of moving a packet. The infrastructure cost is a fixed cost. You don't get it back if you don't send the packet and the network goes underutilized.

Well, infrastructure costs are fixed costs, yes, but infrastructure costs aren't fixed, as any given infrastructure has limited capacity. So, the marginal cost of one additional packet while you are under the capacity limit is a few femtocents (whatever the additional energy cost of moving that packet is, I haven't done the calculation), but the marginal cost of the next packet after you reach the capacity limit of some component is a few, possibly many, thousand bucks. Thus, the marginal cost that has to be used for calculating customer prices is the average over the range of realistic infrastructure expansion, which is quite a bit higher than the energy cost for an additional packet on existing infrastructure.

Also, wouldn't it follow from your argument that internet connectivity should be provided for free to most people? I mean, once you have the network built, it's a fixed cost, and providing service to an additional customer has nearly zero cost in most cases, so shouldn't people be able to sign up for free?


> Except that's not how it works on the large scale. If you have 1000 consumers coming home and using their 3 kW ovens for half an hour "at the same time", you very reliably get nowhere near 3 MW of load on the grid. For one, people do not in fact come home at the exact same time, and then, the exact switching intervals of the oven thermostats are essentially random, so the actual simultaneous load on the grid is pretty close to the thermal loss of all ovens combined, rather than the total peak power that they could consume if synchronized.

Which is why they don't actually need 30 amps of capacity, only something less than that and a battery to smooth out the load. Or for the power company to sell "10 amps capacity" as a five-minute average that allows for temporary surges above that capacity as long as the average stays below it.

You're also focusing on one thing, and not the many other things that are synchronized. Grids often have problems on hot days, why? Because everyone wants to run their A/C at the same time. It'd be fine if half the people would do it at night or three days from now, but the grid needs that much capacity now. If you don't have it right now you can't make it back up over the rest of the month no matter how much people don't use.

> So, the power company calculates the maximum energy that you could use given the service that you are buying, makes up some per-kWh price, multiplies the two, and then bills you that amount minus the kWh that you didn't use, and that gives the power company a good incentive not to have an overloaded grid?

> Could you provide an example where the total of that bill wouldn't be identical to just billing the same per-kWh price multiplied with the kWh used?

You only get money back for not using when the grid is overloaded, which if the power company is doing their job should rarely if ever happen. And if it does it's because most people are using their full capacity, so the people who stop get paid to stop and the people who weren't to begin with get paid to not start.

> Well, that depends on the pricing? If it costs the same to buy 30 amp service and use 10 kWh per day as it costs to buy 10 amp service and use 10 kWh per day, why would I buy a 5 kWh battery?

Which is why per-kWh pricing is problematic. It's better for the grid for you to buy the battery and reduce your peak usage, and $-per-kWh gives you no incentive to do that. So then the grid needs more capacity because you consume more at peak times, which is more expensive for everyone. Meanwhile the cost of that would go on the person who is productively using a lot of zero-marginal-cost power during off-peak hours.

> So, the analogy to a hard rate limit where you have to operate your own energy storage is a setup where you can burst the service? I am not sure I get what you are trying to say.

Suppose you have 10 amp service, but a 5 kWh battery that will charge whenever you're using less than 10 amps. Then if you've been using only 3 amps for 20 hours, your battery is charged and you can run your 3 kW oven for half an hour even though it uses more than 10 amps.

Suppose you have 50 Mbps service, but the rate cap is a rolling average over 60 seconds. Then if you haven't downloaded anything in 60 seconds and you go to download a 100 MB file, you get the whole thing at link speed (e.g. 1000 Mbps) because your 60 second average is still below 50 Mbps so the cap hasn't kicked in yet.

I suppose the closer analogy is that you can do exactly the same for both, i.e. you get a 50 Mbps rolling average or a 10 amp rolling average and you can go over for a few seconds at a time as long as the short-term average stays below the rated capacity.
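A sketch of that kind of rolling-average cap (the class and numbers are illustrative, not any ISP's actual shaper):

    from collections import deque

    class RollingCap:
        """Allow traffic at link speed as long as the average over the
        last window_s seconds stays under cap_mbps."""

        def __init__(self, cap_mbps=50, window_s=60):
            self.budget_bits = cap_mbps * 1e6 * window_s
            self.window_s = window_s
            self.history = deque()          # (timestamp, bits) of recent sends

        def allow(self, now, bits):
            while self.history and now - self.history[0][0] > self.window_s:
                self.history.popleft()      # forget traffic older than the window
            used = sum(b for _, b in self.history)
            if used + bits <= self.budget_bits:
                self.history.append((now, bits))
                return True                 # send at full link speed
            return False                    # rolling average exceeded: throttle

    cap = RollingCap()
    # A 100 MB (800 Mbit) burst after a quiet minute fits in the
    # 50 Mbps * 60 s = 3000 Mbit budget, so it goes at link speed.
    print(cap.allow(now=0.0, bits=800e6))   # True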

> Well, sure, that certainly could make some sense, but is completely orthogonal to whether billing happens based on Wh/bytes or W/bps? You also could have a time-dependent per-kWh or per-GB price, couldn't you? (And in the case of electricity, you usually can.)

Only the ISPs aren't doing that. And even then, it would mean that the off-peak price per-kWh or per-GB should be zero, which it isn't.

But even that doesn't quite capture it, because it's not just about peak hours in a given day, it's about peak hours in a given year. The grid has more load at 7PM in the fall than at noon in the fall, but it has more load at noon on the hottest day of the year than at 7PM in the fall.

If you want to run your A/C on the hottest day of the year or live-stream the Super Bowl, then the network needs that much capacity on that day, which means it has that much capacity on every day. But none of the other days require capacity to be expanded further to support them.

> Well, infrastructure costs are fixed costs, yes, but infrastructure costs aren't fixed, as any given infrastructure has limited capacity. So, the marginal cost of one additional packet while you are under the capacity limit is a few femtocents (whatever the additional energy cost of moving that packet is, I haven't done the calculation), but the marginal cost of the next packet after you reach the capacity limit of some component is a few, possibly many, thousand bucks. Thus, the marginal cost that has to be used for calculating customer prices is the average over the range of realistic infrastructure expansion, which is quite a bit higher than the energy cost for an additional packet on existing infrastructure.

What you're really getting at is that marginal cost when the network is below capacity is much different than marginal cost when the network is at capacity. But that's the point -- almost all of the time, the network is below capacity and there is immaterial marginal cost.

> Also, wouldn't it follow from your argument that internet connectivity should be provided for free to most people? I mean, once you have the network built, it's a fixed cost, and providing service to an additional customer has nearly zero cost in most cases, so shouldn't people be able to sign up for free?

This is what we generally do with roads. The main cost is the fixed cost and the marginal cost is trivial, so the government pays for them from taxes and everyone can use them for free. This is also why toll roads are very stupid -- you pay the same fixed cost and then discourage use of an already paid for resource with a trivial marginal cost, by charging a non-trivial marginal cost.

But the fixed costs still have to be paid somehow. Having the first customer pay ten billion dollars and all the others pay nothing doesn't exactly work in practice. It's also not how the cost structure works, because a huge part of the infrastructure cost is per-customer -- if you want to service twice as many customers you have to wire twice as many streets.

On the other hand, having everyone who wants the same capacity pay the same monthly fee works pretty well. It still discourages people from signing up compared to the public roads model, and it would be better if it didn't, but probably doesn't discourage very many because the value of having internet service is much greater than the cost.

By contrast, charging high prices per byte at anything other than the all-time peak consumption period does in practice discourage productive use for no benefit.


Sorry, I understand less and less what your suggested billing models would be, and it seems like some of your suggestions are self-contradictory.

On the one hand, you suggest that billing should be based on peak power, because it is supposedly better for the customer to run their own battery, but then it would also be OK for the power company to offer bursting with billing based on peak 5 minute average power, which essentially means that the power company is selling you use of their battery. But then, if they are selling you use of their battery, why only for 5 minutes? What is wrong with them selling use of their battery for all your storage needs? In particular when the power company often has options at their disposal that are far cheaper than actual batteries to achieve the same goal, such as simple averaging over a large number of customers, or moving loads that aren't time critical to otherwise low-load times, which don't need any actual storage at all.

You seem to be focused on incentivizing everyone to flatten their own load curve as a supposed method for optimizing the utilization of capacity. While it is obviously true that if everyone had a flat load curve, global utilization would be maximized, you seem to be simply ignoring the fact that there are some real-world requirements behind people's load curves, so the actual power or bandwidth consumption of each individual cannot be flattened. At best people could pay for some sort of storage (in the case of electricity) that transforms their actual load curve into a flat curve on the grid. But at the same time, it is perfectly possible to flatten the global load curve without the need to flatten every individual load curve. In fact, you can flatten the global load curve by making some individual load curves more bursty. If you sell electricity cheaper at night, that incentivizes some people to install storage heating, causing them to create an artificial load peak at night, leading to the global load curve becoming flatter.

The ability to increase utilization by combining different load curves is one of the great possibilities of many users sharing a common infrastructure, so why would we possibly want to disincentivize that?!


> On the one hand, you suggest that billing should be based on peak power, because it is supposedly better for the customer to run their own battery, but then it would also be OK for the power company to offer bursting with billing based on peak 5 minute average power, which essentially means that the power company is selling you use of their battery. But then, if they are selling you use of their battery, why only for 5 minutes? What is wrong with them selling use of their battery for all your storage needs? In particular when the power company often has options at their disposal that are far cheaper than actual batteries to achieve the same goal, such as simple averaging over a large number of customers, or moving loads that aren't time critical to otherwise low-load times, which don't need any actual storage at all.

It's two different use cases. It's confusing that we keep talking about the same load, so let's change that.

On the one hand, you have an electric oven. It uses 30 amps, but only while it's heating, which it only does for 30 seconds out of every 90 even when it's in use. So the average is 10 amps. The power company sells you 10 amp service, your average over a few minutes is indeed 10 amps and the power company can average this out with other customers with similar loads, so you don't need to pay more for 30 amp service even though you're periodically using 30 amps for a few seconds at a time.

On the other hand, you have a data center. The servers use 30 amps at all times. It has a UPS, so you've already paid for an inverter, and now you buy twice as many batteries as you would have. Then you order 5 amp peak and 40 amp off-peak service, which is much cheaper than 30 amp all the time service, and run the datacenter on the batteries during peak usage hours. The power company couldn't have averaged this over other customers because the average customer wants to use more at peak hours than off peak, by definition.

And the method by which they get loads to move to other times is by not limiting usage at other times at all, which is what's happening here -- when you order 5 amp service you get 5 amps during peak hours, but during off peak hours you can use however much you like, including to charge your batteries to reduce the peak consumption rate you'd otherwise need to buy.
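Rough arithmetic for that datacenter example, assuming a six-hour peak window (the window length is my assumption):

    load_amps = 30                     # steady server load
    peak_amps, offpeak_amps = 5, 40    # service levels bought
    peak_h, offpeak_h = 6, 18          # assumed split of the day

    battery_drain_ah = (load_amps - peak_amps) * peak_h              # 150 Ah
    recharge_headroom_ah = (offpeak_amps - load_amps) * offpeak_h    # 180 Ah
    print(battery_drain_ah, recharge_headroom_ah)
    # 150 <= 180, so the batteries can be refilled every night.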

> While it is obviously true that if everyone had a flat load curve, global utilization would be maximized, you seem to be simply ignoring the fact that there are some real-world requirements behind people's load curves, so the actual power or bandwidth consumption of each individual cannot be flattened.

"Making the curve flatter is more efficient" is true even if you don't ever actually completely flatten it. If people use a total of 500GWh during peak hours and 200GWh during off peak hours, and you can get that to 450GWh and 300GWh, you've reduced the required generation capacity by more than 10%.

There are a lot of pricing structures that can achieve this. A lot of them are really just the same thing using different terms. One of the better ones is to price based on "maximum average consumption rate during peak hours", i.e. the most of the resource you're entitled to use over a few minute period during peak hours. What you're going to need to A/C your place during peak hours on the hottest day of the year. Because that's how much capacity they need to build, and then have all the rest of the time too.

But the single thing that all of the good pricing strategies will have in common when there is no marginal cost and you're just amortizing the fixed cost of building capacity is that off peak usage is unmetered. Which is specifically the thing that "GB per month" caps get completely wrong.
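For concreteness, a sketch of billing on the worst few-minute peak-hour average (the price, window and sampling interval are placeholders):

    def peak_hour_bill(peak_kw_readings, price_per_kw=10.0, window=5):
        """peak_kw_readings: one kW reading per minute, taken only during
        the defined peak hours over the billing period. Bills on the worst
        few-minute average seen; off-peak usage never enters the bill."""
        if len(peak_kw_readings) < window:
            return 0.0
        worst = max(sum(peak_kw_readings[i:i + window]) / window
                    for i in range(len(peak_kw_readings) - window + 1))
        return worst * price_per_kw

    # Someone whose worst peak-hour draw averaged 4 kW over five minutes
    # pays 4 * $10 = $40, no matter how much they used off-peak.
    print(peak_hour_bill([1, 1, 4, 4, 4, 4, 4, 1, 1]))   # 40.0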


> There are a lot of pricing structures that can achieve this. A lot of them are really just the same thing using different terms. One of the better ones is to price based on "maximum average consumption rate during peak hours", i.e. the most of the resource you're entitled to use over a few minute period during peak hours. What you're going to need to A/C your place during peak hours on the hottest day of the year. Because that's how much capacity they need to build, and then have all the rest of the time too.

Well, it may well be one of the better ones, depending on what you are comparing it to, I guess, but I would think it has a pretty serious flaw: Anyone who has the option to not use any power during peak hours would get to extract value from the grid without contributing anything to its construction or maintenance, so those people or businesses who are in the unlucky position of needing power at certain fixed (peak load) times would end up paying the full cost of building and maintaining the grid that everyone is using. That doesn't exactly sound like a fair way of sharing the costs, does it?

While the cost of building the grid is determined by peak load, it's not like building a grid that could only deliver the base load would be free to build. So, while it makes sense to have a price structure so that users who cause the load to exceed the base load pay for the additional costs of building a higher power grid, I don't see why it is appropriate to make them pay the total cost of building the grid, let alone how that leads to optimal use of resources.

Now, your argument might be that anyone is free to just buy a bunch of batteries for their peak-load needs and thus avoid paying for their electricity (up to the point when everyone does so, so the global load curve becomes flat, thus it's peak load time 24/7, and everyone will start paying based on the energy taken from the grid after all ...)--which is true, of course. But what's the point of that? What is the problem with the grid having storage built-in and billing you for the use of that capability, rather than incentivizing/forcing you to install your own storage? In particular when some forms of storage can be cheaper than batteries, but completely unrealistic for personal use (such as pumped-storage hydroelectric).

And all of that is completely ignoring that a flat load curve isn't actually desirable anymore with renewable sources, as the generation capacity is just not capable of providing that (without massive overprovisioning). In particular the A/C example that was a big problem for traditional power grids has the nice property that you need A/C roughly proportionally to the intensity of sunshine--and luckily, the generation capacity of solar panels is also roughly proportional to the intensity of sunshine. And not only does this mean that additional power from that source is available exactly when it is needed for that purpose--it doesn't even need a stronger distribution system in the case of roof-top solar, as the power is generated right where it is needed, so no need to move it long distances.

> But the single thing that all of the good pricing strategies will have in common when there is no marginal cost and you're just amortizing the fixed cost of building capacity is that off peak usage is unmetered. Which is specifically the thing that "GB per month" caps get completely wrong.

Well, yes, "GB per month" caps don't create any incentive towards a particular shape of the customer's load curve on any scale smaller than a month; that's maybe not quite optimal. But off-peak usage being unmetered is a pretty bad solution as well in that regard, as it obviously doesn't amortize anything, and it creates an incentive to avoid certain useful investments because of a free-rider problem (you won't start a business that needs bandwidth at peak times if having to subsidize the other users of the infrastructure makes your business unprofitable).

Also, while that approach doesn't solve that problem, that doesn't make it useless. Most consumers as a matter of fact have a relatively flat load curve on the scale of a month (both for electricity and for bandwidth), and a cap influences the amplitude of the curve, and thus does influence infrastructure costs. And realistically, most consumer bandwidth uses have little opportunity for incentivizing a flatter load curve. Much of the consumer traffic is videos and streams on demand, which users generally don't want to watch at 3 am, and also don't want to pre-order to watch the next day. Of course, it would still be nice to have the option of buying cheap traffic during the night for uses that can profit from that, which obviously would also benefit the ISPs to some degree.


That cost is only a small part of a power plant's total cost. Most of the cost is amortizing the construction of the plant, just as with an ISP. This is especially true with wind, solar, and nuclear.


How much are they paying for power in PON networks? Users are even paying the bills for their ONTs' power usage themselves.

Data caps are, without any question, fleecing. They are not a need-driven idea.


How much is who paying who for what?


It does cost money to move data around, even if it is an upfront cost when you build the infrastructure. When your users move enough data that you are getting close to the capacity of your infrastructure you need to invest more money to extend your infrastructure.

Or to express it slightly differently: if there weren't any data to move, you wouldn't have to build any infrastructure and you wouldn't have any costs. If you want to move data you have to build the infrastructure, which costs money, which means that it costs money to move data.


That's no reason to ignore the cost of the infrastructure. And the costs aren't even fixed. You'll eventually have to upgrade the infrastructure to handle the ever-growing traffic on ISP networks.

Not every single bit adds to the overall cost, but in general, more data sent means more spending.

It's perfectly reasonable to charge for data. Though that doesn't mean Comcast isn't charging for anti-competitive reasons.


> You'll eventually have to upgrade the infrastructure to handle the ever-growing traffic on ISP networks.

Given the current obscene prices that ISPs are already charging, they have enough money to upgrade the infrastructure. They just need to line their pockets less, and actually spend money on upgrades.


Internet infrastructure upgrades are very unevenly distributed, and you can find horror stories all over the US. But when I got online in the 1990s, a T1 (1.544 Mb/s symmetric) was extravagant and almost impossible to imagine for a home user. I now have 1 Gb/s symmetric at home.

I doubt whether the entire uplink of my first ISP was 1 Gb/s. Many colleges' uplinks sure weren't.

Somebody's out there making Internet infrastructure better some of the time.


I have symmetric gigabit at home too. But that happened only thanks to Google Fiber making gigabit the norm in common perception.


Over the last 20 years, data speeds have increased much faster than, say, CPU speeds (something we generally consider to be the subject of massive competitive investment and improvement).


And most ISPs were sitting doing nothing, until Google Fiber introduced gigabit to the masses. Major execs said on record that "no one needs Gigabit", only to deploy it not long after, because they got scared of GF.

Too bad GF fizzled out, but it had a good positive effect in disrupting slumbering monopolists.


That's a complete fever dream. FiOS and U-verse fiber launched in 2005 and 2006. In 2002 Comcast's top speed tier was 3 Mbps. In 2009, it was 20, a 7x increase in 7 years. Google Fiber launched in 2010. From 2009 to 2016, Comcast went from 20 to 150, about the same factor of increase as before Google Fiber. Verizon and Comcast announced gigabit upgrades ... after Google Fiber stopped expansion in 2016.

Yeah, Verizon jumped to gigabit to match Google's marketing. It was a non-event. (After about 100 Mbps the shittiness of the modern web stack makes further upgrades pointless unless you're a huge downloader. I've got gigabit FiOS load-balanced with 2-gigabit Comcast fiber at my house. I literally cannot tell the difference from the 150 Mbps FiOS I had before, except in speed tests.)

Google Fiber really has nothing to do with these increases. That’s total make believe. There is an upgrade treadmill for DOCSIS that the industry follows, just like for CPU fabrication. And like improvements in fabrication technology, staying on the treadmill requires massive continual investment (faster versions of DOCSIS only work if you keep building fiber closer to the house and decreasing the amount of coax and the degree of fan-out). ISPs were spending that money before Google Fiber, and continue to spend that money now Google Fiber is on life support.


> That’s total make believe.

You sound like McAdam. First, he conceitedly claimed that no one needs or will need gigabit, so customers expecting it could get lost. And then he "magically" changed his tune. Which is totally the indirect effect of Google Fiber, which affected Comcast and AT&T, which in turn affected Verizon. McAdam had to swallow his conceit, shut up and deploy gigabit.

All this "no one needs" bunk falls apart very quickly at even the slightest sight of looming competition. Imagine what could have been if there were real competition around.


I'd assume ISPs are taking infrastructure into account in their cost calculations, even in cases where the initial build-out was government subsidized... things break, and there are ongoing costs.

And my totally uninformed understanding is that the ongoing maintenance costs, and the costs of expanded service are both pretty minimal.


Both are incorrect. Almost no ISP infrastructure is “government subsidized.” Almost all the subsidies are urban ISPs subsidizing rural ones. There was a tiny bit of actual government subsidy under Obama as part of the post-recession stimulus.

And ongoing maintenance and support costs are very high. Even if you don't trust Verizon's SEC disclosures (showing 5% or less in operating profit for wireline), look at the financial statements for something like Chattanooga's EPB. The vast majority of revenue goes to ongoing costs, before you even get to paying down the initial build-out.


Given the severe lack of competition among ISPs in the US, I don't buy the bogus argument that current prices are fair. Simple market logic suggests that they overcharge because they can. Therefore they do have more than enough money for their upgrades.

And on top of that, most simply prefer to pocket the money instead of investing it in the network, with the "no one needs it" excuse. Something they would never have done with healthy competition.


"Driving to work is free, if you ignore the cost of the car, the fuel and the roads."



