Is there any cost that an ISP company has related to the amount of data transferred? All the costs that I'm aware of are related to the bandwidth, not the total transmitted data (eg.: IX connections and such).
The ISP companies want to do this because it makes them more money, plain and simple, and it's usually followed by awful things like exemptions for certain services.
In Brazilian Facebook IT groups it's already common to see people from Angola using the group to google things, because Facebook usage is out of the data cap. Some Brazilians even created groups specialized in this "facebook googling" for Angolans.
It makes me sad to see how far ISPs are able to push such bad things.
- edit to add more info about the current state in Brazil:
Mobile internet plans all have data caps and most exempt WhatsApp and Facebook, while home plans are forbidden from doing so by Brazil's telecom regulator (ANATEL). Unfortunately, the ISPs are lobbying pretty hard to change that.
Yes, the costs are proportional to the bandwidth, but people want a fast connection at a low price, so ISPs sell a fast connection at a discount and limit how often you can use it through traffic shaping and/or data caps to still turn a profit on it. The business plans are often unlimited, and typically priced at a multiple of the home plans despite being the exact same technology underneath. If you want to know what unlimited bandwidth really costs, look at the price of the business plans.
I think ISP's are often undeservedly painted as the bad guys. Yes, some are profiteers, but most are just turning a normal profit, just like any business. The only way for them to compete on price with other ISP's is to also impose caps, because what they charge is just not enough to fund the infrastructure for a dedicated connection with guaranteed bandwidth.
If it was some sort of a natural law that shipping 1 byte cost $x on the margin, then the PSTN never would have survived the shift to unmetered long distance service. Somehow ISPs have been making big profits selling unmetered internet service during a period of hypergrowth. How is it possible that they didn't go broke during this process?
The real story here is that "power user" is a proxy for "cord cutter". If a residential user downloads 1TB per month, then they're a thoroughgoing "over the top" video consumer. The traffic patterns are different for streamers vs. pirates, but it's still people watching TV and not paying cableco for the privilege.
They're just doing price discrimination, rolling out bandwidth caps with punitive fees for violations, and waiting for the 4K frog to boil. Roll it forward, and you're back in the world of all you can eat at a higher price point. Cablecos don't really have to care how you split your bill between internet and video, as long as ARPU stays in the $70-$100 range. "Dumb pipes" are higher margin than cable channels, and don't need contract renegotiations with steely-eyed squeezers like Disney.
Let DIS roll their price increases out directly to consumers with their streaming channels, while you just sit there running a toll road. It's like a regulated natural monopoly, without the burden of appropriate regulation. This is a nice business.
> If you want to know what unlimited bandwidth really costs, look at the price of the business plans.
The premium there is for uptime guarantees, better service, and a certain amount of legislative corruption (without which no discussion of telco/cableco practices would be complete). They are allowed to discriminate between commercial & residential customers, and refuse residential service to commercial addresses.
The ISP needs to build infrastructure to support peak instantaneous load. If the current network is at 50% capacity and I turn on Netflix there's basically no cost to the ISP, even if I've already used my 1TB of data for the month. If they are at 95% capacity then things start to matter a lot more.
If the ISPs were interested in fair pricing, they would offer pricing like the power companies do these days: charging more during the day/evening and less during the night when fewer people are generating traffic. You would be throttled during the afternoon when everyone else is on, or pay for the "fast lane".
But they don't operate like that, because $, making them the bad guys.
I assumed the extra cost was not because of the unrestricted bandwidth (granted, I have Xfinity in Chicago and only pay $15 a month for unlimited on gigabit) but because of the static IP(s) associated with the service. In addition, I would assume that trouble calls get elevated priority over those of a home user, with quicker response times.
Just looking at business service vs. home service and assuming the price difference maps 1:1 to unlimited bandwidth misses a lot; there's more going on that differs between the two.
Until a few years ago the cost to get additional IP addresses on most business ISPs was pretty minimal. The ISP my company uses does not even charge for additional IP addresses, you just need to give good reasons for why you need a larger IP space. With a previous ISP, additional IP addresses were about $10/mo but the cost of service was easily 3-4x similar rated speeds as a residential connection. Most of the additional cost is due to increased capacity planning and faster service times.
You can get gigabit fiber throughout Asia and Europe for as low as $20 a month with no cap. There is no "discount" for fast connections in the US. The ISP cartel price gouges the whole country and because of their regional monopolies have no incentive to invest in actually having the provisioning to support the speeds they advertise.
> Is there any cost that an ISP company has related to the amount of data transferred? All the costs that I'm aware of are related to the bandwidth, not the total transmitted data
Bandwidth is strongly correlated to total transmitted data, as the amount of time in a day isn’t changing any time soon. It may be reasonable to offer some kind of off-peak pricing for customers that can time-shift their internet needs, but the complexity is apparently not worth the benefits.
In particular, if the usage over time of a high-data customer generally looks like a scaled version of the same chart as a low-data customer, there won't be much effective difference in the two models, and consumers understand quantity billing much more easily than bandwidth-percentile billing, which is how the wholesale market is priced.
> Consumers understand quantity billing much more easily than bandwidth-percentile billing, which is how the wholesale market is priced.
"Data used between the hours of 11pm and 9am do not count against your data cap." Not difficult to understand, cell phone companies used to do it all the time with minutes.
(Times made up, adjust for real-world peak.)
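A sketch of how that metering could work (the window hours and the helper function are invented for illustration, not any carrier's actual scheme):

```python
from datetime import datetime

OFF_PEAK_START = 23  # 11pm, assumed start of the free window
OFF_PEAK_END = 9     # 9am, assumed end of the free window

def counts_against_cap(timestamp: datetime) -> bool:
    """Only traffic outside the off-peak window is metered against the cap."""
    hour = timestamp.hour
    in_off_peak = hour >= OFF_PEAK_START or hour < OFF_PEAK_END
    return not in_off_peak

# A 2am download is free; a 2pm download counts against the cap.
```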
Personally, I think it's relatively obvious that data caps on cable internet connections are not related to the cost of service.
>Personally, I think it's relatively obvious that data caps on cable internet connections are not related to the cost of service.
They’re related, but there may not be a direct 1:1 correspondence. Offering unlimited anything at a fixed price is a terrible idea for any company because there will be at least a few outliers that throw their modeling out the window, and there is ample precedent for ill-advised promotions bankrupting companies (see the Hoover flight article that was on the front page a while ago).
The issue isn’t with the existence of data caps, it’s with the laws that allow nominally unlimited things to be actually limited and the lack of competition that might produce the kinds of promotion you’ve suggested in the fight for market share.
> Is there any cost that an ISP company has related to the amount of data transferred?
I'm not sure if it's at a noticeable level, but I keep wondering: aren't there costs that are directly proportional to the amount of data transferred? Flipping bits costs energy, both at the level of fundamental physics and (more importantly) at the level of engineering - all the electric and optical signals aren't free, and packet processing isn't either. E.g. I'd expect my power bill to be larger if my Raspberry Pi saturated the network by streaming /dev/urandom than if it were sitting idle.
But aside that, I think that beyond the "we can charge you for data because you can", the per-data-transferred costs come from the overprovisioning ISPs do. The fraction of the advertised to actually supported bandwidth per user they can get away with is dependent on average amount of traffic users generate.
How do data caps help with that though? People are still going to use most of their internet during peak hours, because... well, those are the times people use the internet the most, by definition.
It's like trying to combat rush hour traffic problems by limiting the number of times people can use a highway. People are still going to use it during rush hour, they just are going to avoid using the highway during non-rush hour periods, which accomplishes nothing.
They help with demand management because if they're set low enough, people can't use the highway 5 times a day; they have to limit their usage to just once a day, or maybe a few times a week.
In Australia the bandwidth caps used to be specifically set up with a different peak and off-peak cap to help with this even more.
In the UK most ISPs will instead perform traffic shaping where they will slow down your connection during peak times after a certain small amount of data transfer. This still seems to be disliked by consumers just the same.
Yes. Everyone still needs to go to work at the same time so the resources are still going to be used at the same time. It's charging extra for standard expected usage that the ISPs have known exists for years. If this were really an issue there wouldn't be so much dark fiber in the US hooked into copper cable local loops.
If we had a competitive market in either place, it would be interesting to see what consumers choose. I think I would prefer the traffic shaped example, since the current tiering situation appears to have _both_ limits in the fine print with Comcast.
If there weren't municipal companies with better service, faster speeds, no caps, all for cheaper, then I might agree. But these companies are greedy, and they keep crying wolf while increasing their profits.
WRT the home example, what if it's a router? I'd very much expect the router to use less power idling than when handling 1 Gbps - especially since routers tend to look into packets, which is computation, which uses power. Also, if the router does a media conversion (e.g. Ethernet on one side, fiber on the other), I'd expect even more power usage when busy.
Indeed there is. ISPs don't run a non-oversubscribed network. If they did, your bill would be much higher. The costs are in building cables, maintaining cables, buying and maintaining local POP network equipment, building and maintaining whatever backbone they run, installing and maintaining network equipment in the meet-me rooms where they connect with transit providers, etc.
I think the suggestion is more along the lines of you buy a set bandwidth with a certain contention ratio (or ratios) and then the ISP lets you contend for the bandwidth as you like.
Having a bandwidth cap is basically the ISP punishing you for contending too often.
That said, assuming the ISP wants to keep an average speed higher than the minimum their contention ratio would offer, people who download a lot would have a greater impact on that, and could therefore cause the ISP to have to invest in more hardware and back-haul.
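Back-of-the-envelope, the provisioning math under a contention ratio might look like this (the function and numbers are illustrative, not any ISP's real model):

```python
def required_backhaul_mbps(subscribers: int, plan_mbps: int,
                           contention_ratio: int) -> float:
    """Backhaul needed when capacity is shared at the given contention ratio.

    A 50:1 ratio means 50 subscribers share one subscriber's worth of
    guaranteed bandwidth. Heavy downloaders raise actual utilisation,
    forcing the ISP to buy more backhaul to keep average speeds up.
    """
    return subscribers * plan_mbps / contention_ratio

# 1,000 subscribers on 100 Mbps plans at 50:1 needs ~2,000 Mbps of backhaul.
```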
The counter is that the ISP isn't upgrading their network anyway, they're just juicing the market by charging some people more.
I agree with what I think you are stating implicitly: that bandwidth caps are a blunt instrument. ISPs don't really have a lot of knobs to turn without setting off the network neutrality crowd. I wish there were more QoS knobs that could be turned to allow for lower latency and traffic prioritization, as that would probably have some interesting applications.
Most ISPs are constantly upgrading parts of their network. Earlier this spring AT&T came through my complex and installed fiber in every apartment. Average broadband speeds have increased significantly over the last decade, and for that to have happened significant network upgrades would have to have taken place.
I don't think this directly impacts a cost per unit of data transferred, as the poster was wondering. I agree that maintaining the network and ensuring it can meet capacity is a cost that the ISP passes on to their customers, but I would point out that this is why people pay a monthly subscription to their ISP.
To my mind the question would be "if an ISP has network with a fixed capacity, do their monthly operating costs differ between a month where they consistently utilize 50% of that capacity compared to another month where they utilize 80%?"
I suspect the cost to the ISP is the same in either case: they have purchased the equipment and bandwidth to support the network and those costs are fixed, regardless of how much of that bandwidth they are actually using. If that is true, this idea that there is only so much data per month to go around and if you use more then your neighbor then you should be charged more money would be fundamentally dishonest.
Consumer ISPs are effectively bandwidth resellers: they buy in bulk at wholesale rates, split it up, and sell retail quantities to individuals. Wholesale bandwidth is generally billed based on the peak transfer rate during the billing period, and not a fixed price.(1) You also can’t assume a fixed network capacity- ISPs will buy enough equipment to maintain service to their customers instead of having brownouts. This capex cost is directly caused by higher aggregate usage from new and existing customers, so it makes sense to charge them by usage as well.
(1) This is overly simplistic; these are individually-negotiated B2B contracts, so details vary widely.
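For the curious, a minimal sketch of the 95th-percentile ("burstable") calculation that wholesale billing is based on (index convention simplified; real contracts spell out the details):

```python
def billable_mbps(samples_mbps: list[float]) -> float:
    """95th-percentile billing: sort the month's 5-minute usage samples,
    throw away the top 5%, and bill for the highest remaining value.
    Short bursts are free; sustained usage is what you pay for."""
    ordered = sorted(samples_mbps)
    index = int(len(ordered) * 0.95) - 1  # last sample at/below the 95th percentile
    return ordered[max(index, 0)]

# With 100 samples, the top 5 are discarded and the 95th-highest is billed.
```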
I would think that would mostly apply to DSL ISPs that are just using a telcos infrastructure. I suppose AT&T Fiber may be charged by AT&Ts backbone for traffic transiting their network or something like that. The major ISPs AT&T, Verizon, Comcast, etc... are building their own networks at least in part.
Yes, the truly big backbone ISPs aren’t generally paying for interconnection to the others. Instead, they have the capital expenses of maintaining and extending that backbone to handle growing internet traffic in general. If they don’t keep up with demand, they’ll lose customers and potentially their settlement free agreements with the other backbone providers.
Where I live there are no competitors for the one cable internet ISP (DSL is not available and satellite is not competitive). As such, they have no need to worry about losing customers to _competitors_.
Based on this article, I suspect that ISPs like Comcast and Charter are not extending their backbone to handle growing internet traffic. It seems they are charging exorbitant fees to discourage increased usage instead (in this article, the subscriber canceled their Hulu account and lowered the resolution of video supplied by Netflix).
Having only one ISP locally is typically either caused by market size or local regulation. Comcast does offer a national backbone and sells transit, cdn and other similar services through a subsidiary called Comcast Technology Solutions. The backbone at a national level has definitely received upgrades and they are advertising multiple terabits of capacity. Of course upgrading the transit network does not help if your local link is saturated, which is a problem in some areas.
If there is truly demand for unmetered internet someone will step in and fill the void. Some ISPs also use total transfer as a way to discriminate between business internet and residential internet.
Those are both consumer ISPs, and thus not subject to the same market pressure as I described for Tier 1 ISPs (without naming them as such). Similarly, consumer divisions of Tier 1 ISPs are driven more by local market forces than anything else.
I don't understand how this answers the question "Is there any cost that an ISP company has related to the amount of data transferred?". This all sounds like maintenance and they'd have that if I use 1GB or 1TB.
If every customer can use 1TB, it is more likely that many customers will be using more bandwidth at the same time, therefore requiring more infrastructure to ensure their connections still provide their advertised bandwidth
I don't see how that would affect anything. If we assume that most people consume the most data in the evenings (streaming Netflix, etc.), then no matter the cap, you would see issues if bandwidth were the constraint. If I'm downloading Linux ISOs at 100% every day while people are at work, big deal, right? But if everyone is streaming at the same time, and there was going to be an issue, it would happen whether caps existed or not, because at some point during the month you're going to have nearly all your customers with data to use.
> But, if everyone is streaming at the same time, if there was going to be an issue, it would happen if caps existed or not because at some point during the month you're going to have nearly all your customers with data to use.
It's a giant bucket of 'it depends'. If the CDN network serving the content has placed hardware within the ISPs network, it's likely a non-issue. This was a major issue before folks like Netflix started moving their CDNs closer to the customer. You might have a hard time finding the articles, but I recommend looking at the pissing match Level 3 and Verizon got into years back.
>it would happen if caps existed or not
I'm not convinced. You are essentially arguing that scarcity of a resource has no effect on consumption.
The way I've seen this handled in the UK (a while ago, but I think some ISPs still do it) is to have caps at certain times (either by having a limit or throttling bulk P2P downloads), but then allow unlimited downloads at other times, especially overnight
> Is there any cost that an ISP company has related to the amount of data transferred?
Data caps allow ISPs to run oversubscribed networks, which are very profitable. If you're streaming 24/7, then that bandwidth isn't available to sell to your neighbor. In that sense, cost is indeed directly related to amount of data transferred.
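To make the oversubscription point concrete, here is a toy binomial model of congestion risk (the activity probability is an assumption, not measured data):

```python
from math import comb

def p_congested(subscribers: int, p_active: float, capacity_slots: int) -> float:
    """Probability that more simultaneously-active users than the shared
    link can carry at full speed are online at once (simple binomial model).

    p_active is the assumed chance a given subscriber is using their full
    rate at a random instant; 24/7 streamers push it up, which is exactly
    why they cost the ISP sellable capacity.
    """
    return sum(
        comb(subscribers, k) * p_active**k * (1 - p_active)**(subscribers - k)
        for k in range(capacity_slots + 1, subscribers + 1)
    )
```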
I remember someone writing a proxy that ran over Facebook, maybe it was specifically chat, in response to that. However, I am not able to find it right now.
edit: I found it here[0].
> The idea of this project is to tunnel Internet traffic through Facebook chat (packets are sent as base64), the main component is tuntap and also the Google's Gumbo parser which does the interaction with Facebook (login, send/receive messages, etc.).
I really hate that there's some whiny wiki editor complaining that those Angolans are exploiting their fundamentally exploitative zero-rating system. I'm all for it, good on them.
It is not exploitative at all to give people access to wikipedia (or even facebook) without giving them general internet access.
I can accept that it would be better for the health of the internet as a whole to give people full access or not at all. But it cannot be exploitative to give something of value for free to someone, just because that someone wants something more/something else.
> But it cannot be exploitative to give something of value for free
One of the (many) criticisms of Nestle is that they went to 3rd-world countries and gave new mothers free supplies of bottled formula. Then, after the mothers switched to formula and stopped lactating, they took away the free offer and forced them to pay through the nose for a life-essential product that their bodies would have otherwise produced naturally.
Just like with Nestle, giving away limited Internet that can only access Facebook/Wikipedia has the effect of forcing those services to become critical architecture. It denies nations the autonomy to decide how their own networks should be run, and puts them at a massive disadvantage when competing on those networks or building their own services.
In exchange for giving away access to their product (and only their product) for free, Facebook gets a monopoly-level stranglehold on how information is transmitted across those regions -- and that's not just bad for the Internet as a whole, it's bad for the countries and their citizens.
Have heard the worst things about Nestle, and that baby formula fact does not seem surprising.
I agree it could be in the national interest of a country to forbid Nestle formula given like that (or, for that matter, basic internet given by Facebook). Also, the Nestle case seems more clearly exploitative to me, because they meant to deprive women of something by giving them the formula.
Facebook has no other reason to supply them with their zero service other than to get valuable data out of them. Unless FB is the most well disguised charitable organization ever
> Is there any cost that an ISP company has related to the amount of data transferred?
If the data is coming from outside the ISPs network, it might be on a paid connection. Although I think it's less popular today, it was common for paid connections to be billed based on the physical capacity as well as the amount of transfer (95th percentile billing used to be ubiquitous).
Within the ISPs network, or on unpaid connections, the marginal cost per byte is discontinuous: most of the time it's so low it's probably hard to measure, but when you hit the capacity of a link, the cost can be high to increase capacity.
An ISP, especially a residential ISP, is not expected to have capacity for all its users to transmit data across the internet at the full speed of their last-mile connections at all times. There is oversubscription, and it's OK and reasonable. To the extent that someone is using a residential connection at full speed all day every day, that's not really reasonable, and they should be on a different account type. However, ISPs play a large part in this: they should make this clearer, their data caps should be sized to bandwidth, they should offer transfer-rate capping in case of overage, and the overage fees should be more reasonable.
>Is there any cost that an ISP company has related to the amount of data transferred?
I mean, there's a physical limitation to how much data a given amount of cable can transfer at any time. Some bandwidth caps likely exist to prevent people from streaming as much data as they can 24/7; otherwise considerably more cable would have to be run, not unlike adding more pipes when more housing additions are added to an area (think of SimCity, where you'd need to add more water infrastructure when you had more buildings).
Data usage is actually increasing at a mind-boggling rate, there are plenty of good articles about this on the internet.
If you look at the numbers you see that at the current growth rates, there really is a valid reason for providers to limit data usage as much as possible via data caps. While their current cabling can likely handle the demand, they're almost certainly going to reach a point in the near future where they simply can't deploy enough cable to keep up with the annual increases.
>Is there any cost that an ISP company has related to the amount of data transferred? All the costs that I'm aware of are related to the bandwidth, not the total transmitted data (eg.: IX connections and such).
The more data your customers transfer, the more bandwidth you need to have available to be able to offer them some desired throughput.
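A quick back-of-the-envelope conversion shows how small a monthly cap is compared to advertised speeds (the function is just the arithmetic, spelled out):

```python
def cap_as_sustained_mbps(cap_gb: float) -> float:
    """Average rate needed to hit a monthly cap if usage were spread evenly.

    1 TB/month is only ~3 Mbps sustained - far below advertised speeds,
    which suggests caps are really about peak-hour behaviour, not totals.
    """
    seconds_per_month = 30 * 24 * 3600
    bits = cap_gb * 1e9 * 8
    return bits / seconds_per_month / 1e6

# cap_as_sustained_mbps(1000) is roughly 3.1 Mbps.
```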
There is some cost to transfer between ISPs or to higher tier ISPs, though generally that cost is negligible. They always pay for accepting traffic and generally they have agreements that only the difference between out-in is paid. At my current host, the peering cost of data is about 5€/TB outgoing transfer, though recently they removed this altogether and cover it from other profits.
The real world cost of a TB of data is basically the cost of the energy moving it, which is increasingly cheap.
Since no one answered your question even remotely: no, there is no cost to the ISP related to the amount of data transferred. It's basically how much pipe do you want, not how much water are you going to flow. So at the ISP I worked at, we'd buy a fiber connection from, say, Sprint, and it would be 10 gig speed. There were no costs involved with how much we transferred: 0 bytes or hitting that line at 100% saturation 24x7 costs the same.
What are you talking about, basically every tier 1 carrier is going to charge you based on usage. Usually per average mbps or gbps with a minimum commit (so you'd buy, say, a 100G link with a 10G commit, then you pay $x per gbps using 95th percentile[1] with a minimum of 10gbps).
The only exception is if you peer and have a bilateral agreement, in which case you pay nothing anyway.
If the ISP you worked at used other ISPs (tier 2 or 3) as upstreams then yeah, you get the same service as business class internet (flat rate for a capped link speed) but once you're one of the big boys it's absolutely metered and charged by usage.
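A sketch of that commit-plus-95th-percentile pricing (the rate and figures are invented):

```python
def monthly_transit_bill(p95_gbps: float, commit_gbps: float,
                         rate_per_gbps: float) -> float:
    """Tier-1 style billing: you pay for your 95th-percentile usage,
    but never less than the committed minimum."""
    return max(p95_gbps, commit_gbps) * rate_per_gbps

# With a 10G commit at a hypothetical $500/Gbps: 6 Gbps of real usage
# still bills as 10G ($5,000); bursting to 14 Gbps bills as 14G ($7,000).
```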
Since infrastructure is the same initial cost to everyone, I would guess that electricity is the main cost. Driving continuous max bandwidth should require more energy and thus cost more.
I say this without knowing too much about it; but I do know the cable line is separate from the power line... so the cable company needs to drive the bits through the coaxial with their own electricity, correct?
A long, sustained, high bitrate seems like it would use more, no?
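For a rough feel of the numbers, here is the energy arithmetic with the intensity figure left as an explicit assumption (published kWh/GB estimates for networks vary by orders of magnitude):

```python
def energy_cost_per_tb(kwh_per_gb: float, price_per_kwh: float) -> float:
    """Rough electricity cost of moving 1 TB, given an assumed network
    energy intensity. The kwh_per_gb input is the whole argument here,
    so treat the output as illustrative only."""
    return 1000 * kwh_per_gb * price_per_kwh

# At an assumed 0.01 kWh/GB and $0.12/kWh, 1 TB costs about $1.20 in energy.
```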
In all fairness AWS will charge you something like $90 for 1TB of traffic per month. I am all for cheap unlimited data plans but I can see why an ISP would create tiers of service and charge a premium to multi-terabyte users (and why non multi-terabyte users shouldn't pay for multi-terabyte users).
It's not a good comparison; AWS charges this to lock AWS users into their ecosystem. It's free from the internet into AWS and extremely expensive out to the internet. This is of course by design, to make sure you can't have some services outside AWS.
I think if it becomes a common problem for most of the USA, it could finally push the monopoly conversation to the forefront. People being frustrated with charges for a service that should have been next to free is what broke up Bell Systems.
I don't know about Facebook, but I use vk.com to share video files (~100 GiB/month) with a friend who only has access to a mobile plan with 10 GiB data per month and unlimited data to a few social networks.
I put a directory into a multi-volume .7z archive (sliced into 200 MiB chunks, their per-file limit), upload the chunks through their API (solving a simple captcha every 5-10 GiB), and that's it.
I believe I've read about people in Philippines (?) using Facebook for sharing files using a similar process.