Protecting Net Neutrality and the Open Internet (blog.mozilla.org)
159 points by sarreph on May 10, 2014 | 54 comments



I can't help but think this is one potential super cluster of unintended consequences, if it ever happens. Net neutrality implementation, or enforcement, has been in limbo for a very long time, and exploitation resulting from the lack of such action has been mostly theoretical. Then there are all these proposals that don't consider many things, like this one from Mozilla which excludes interconnects and peering: how would we deal with discrimination exercised through simply not peering or interconnecting? Or, if that gets to be 'made neutral', what's the point of a CDN apart from reducing latency? Then, if I run a video service from Seattle, will an ISP in Miami be forced to make sure they have adequately sized interconnects with me, or will a provider one tier above them have to? Also, it's not like there's no discrimination right now: how many domestic access providers allow outbound access on port 25, or multicast, or BGP? What about all the research with game-theoretical models about this? What about other countries that have no neutrality legislation on the horizon? What about the fact that the commercial internet started less neutral (before PPP/SLIP), with limited access to limited protocols, and ended up more neutral over time without intervention apart from customer demand? What if the last mile is wireless and certain services would become viable only if it's restricted? What if I want to provide $1/month access to only Wikipedia? To me, personally, it sounds like the problem net neutrality is supposed to deal with is badly defined, or a symptom of another set of problems, and the solutions are defined even worse and have the potential to create even more problems.


> how would we deal with discrimination exercised through simply not peering or interconnecting?

The simple solution is to require last mile ISPs to do settlement-free peering with anyone who brings traffic for their customers into their facilities.

> Or, if that gets to be 'made neutral', what's the point of a CDN apart from reducing latency?

The point of a CDN is that it's more efficient. By having facilities closer to the end users, a CDN has lower costs and could charge customers less to deliver the same amount of content than you would have to pay e.g. Level 3 for that amount of transit.

> Then, if I run a video service from Seattle, will an ISP in Miami be forced to make sure they have adequately sized interconnects with me, or will a provider one tier above them have to?

If you run a video service from Seattle then you still have to somehow bring your traffic to Miami before you could peer with an ISP in Miami. If you do that by paying a transit provider then obviously the transit provider will have to supply the amount of capacity you're paying them to get and they'll charge you accordingly.

> Also, it's not like there's no discrimination right now: how many domestic access providers allow outbound access on port 25, or multicast, or BGP?

Blocking port 25 by default is an anti-spam measure. In general, if you call your ISP and ask them to unblock it, they will. They're not blocking it because they don't want you to use it, they're blocking it because you want them to. BGP is pretty much the same deal. And requiring last mile ISPs to support multicast would probably be a good thing.

> What about all the research with game theoretical models about this?

Such as?

> What about other countries that have no neutrality legislation on the horizon?

If we had to do what everybody else is doing then we would have to have government-operated ISPs or local loop unbundling.

> What about the fact that commercial internet started less neutral (before PPP/SLIP) with limited access to limited protocols and ended up more neutral over time without intervention apart from customer demand?

That happened in the dial up days when there was more competition between ISPs because the "last mile" was literally the phone network. It was a result of de facto local loop unbundling.

> What if the last mile is wireless and certain services would become viable only if it's restricted?

What services would those be?

> What if I want to provide $1/month access to only Wikipedia?

An enterprise that builds a last mile ISP solely to provide access to Wikipedia for $1/month is not a thing in any danger of occurring.


CDNs also handle traffic spikes. If an MMO pushes a >100MB update to several million subscribers at the same time, no single network connection could handle it. CDNs make it possible to push massive amounts of information fast, and they limit the risk of service disruptions.

The benefits of CDNs are thus mostly unaffected by net neutrality.


URL changed from http://www.forbes.com/sites/emmawoollacott/2014/05/06/mozill.... HN prefers original sources.


Thanks for the catch on this :)


I doubt we'll ever get the net neutrality we want until we swallow the pill of paying for what we use—like we do for other utilities—instead of some imaginary total capacity. How do we make sure transfer amounts are monitored fairly? How do we make sure our software is accessing the network only when we want it to? Those are technical challenges we'll need to solve. But if we want ISPs to play fair, then we must absorb the simple concept: If you stream Netflix all day, you should pay more than the lady next door who checks Facebook a couple times per week, even if the bits are flowing at the same rate.


Most of the cost is in building infrastructure, not delivering bits. A sensible metering approach would still have a high monthly fee, with a very small additional charge for bandwidth.

Further, I don't want to live in a nickel-and-dime world where every single thing I do has the cognitive overhead of worrying about metered usage.


> Most of the cost is in building infrastructure, not delivering bits.

But there's significant cost in building infrastructure that can deliver more bits.

> Further, I don't want to live in a nickel-and-dime world where every single thing I do has the cognitive overhead of worrying about metered usage.

So stop worrying about it. How much cognitive effort do you spend factoring in the cost of electricity, gas and water when you're making spaghetti?


> But there's significant cost in building infrastructure that can deliver more bits.

Here's the problem with that: You have to build capacity for the peak load you want to be able to support. But the reason peak load is peak load is that everybody is using bandwidth then. If you buy a fast connection and you expect that fast connection to be fast the one day a year when they're streaming the Superbowl, you're contributing at least as much to the need to build infrastructure as the guy who watches Netflix nine hours a day. Since everybody wants to use the internet when everybody is using the internet, everybody should pay the same amount.

> How much cognitive effort do you spend factoring in the cost of electricity, gas and water when you're making spaghetti?

You make spaghetti every week, it has roughly the same utility and cost every week, and there is no real risk that you could quadruple your electric bill by not paying enough attention to how much spaghetti you make.

Bandwidth doesn't work like that. Most people have no idea how many bits it takes to do something on the internet, it's extremely difficult to determine whether something is worth seeing until after you've already seen it, and it's incredibly easy to sit down at a computer for "five minutes" only to realize you've been watching YouTube videos for 14 hours.


The people who insist on watching the Superbowl during that peak load moment are contributing to that load much more than someone browsing a few web pages, or someone like me who decides to watch the game later (if at all). They are even contributing to peak load more than people who watch 100x as much Netflix, if those people happen to not be watching much Netflix at that exact moment. There is something unique in character to that Superbowl experience that makes it fundamentally more expensive to the world than other possible uses of bandwidth, and that needs to be modeled as a cost to someone.

So, much like electricity (which is difficult to store and thus has to be modeled as real-time capacity), bandwidth needs to be priced differently during "peak hours". This is obviously much more complex, as peak hours are defined by some decentralized notion of "what is popular right now", which is why the current approximation turns into something like "if you create this weird peak demand by offering the Superbowl on your website for coordinated live viewing, you should pay for this peak load increase, not the guy whose usage is small and distributed over time".

Sadly, that approximate solution leads to seemingly-unfortunate consequences (hence all the arguments for net neutrality), so what we need is to really think through how to get users, as opposed to content providers, to pay for this in a content-unaware fashion. If you seriously don't want to address this and insist that bandwidth should be flat-rated like this, you are not going to be able to achieve net neutrality :(.

As for the spaghetti argument: you really really should be thinking about your bandwidth usage. The reason we don't have horribly wasteful electronic devices is partly because they would cause electricity bills to go up for the people using them. If you had a washer/dryer that used a thousand times more electricity than other appliances you owned, or you had a holographic television that used a thousand times more electricity than a lightbulb does, you would start to limit how often you washed clothes and watched holovision, and this would be a good thing: it would lead to market incentives to create similar devices that used less electricity.

Effectively, someone has to pay for the increased electricity cost of the world as it transitions to holovision: either users need to pay more for their electricity bill each month or the holovision manufacturers need to subsidize electricity for them. The latter is what we don't want (we want net neutrality), so either you increase everyone's flat rate or you bill just the users using the holovision. The former penalizes people who don't have a holovision unit and leads to inefficient electricity usage. To bill users, either you bill people who own holovision (which is largely equivalent to what we don't want) or you just bill by the actually-relevant metric: usage. The best option is billing users for usage of electricity.


But bandwidth isn't a scarce resource in the same way electricity is. There's very little incremental cost to delivering each extra bit, so it doesn't really matter if bandwidth usage is wasteful. All that matters is there's enough peak capacity, and nearly everybody contributes equally to peak usage. The incentive to minimize waste is the fact that performance will suffer otherwise.


I feel like you ignored the brunt of my argument and the demonstration for my statements, and instead are just arguing with a soundbite from my comment. I will try again, maybe more explicitly: bandwidth is actually very much like electricity, because both have a peak capacity and a noticeable cost of equipment maintenance, but very little incremental cost. You don't experience a blackout or a brownout because you ran out of coal: maybe that will happen in the distant future, but today the problem will be that the immediate amount of power needed by the users exceeded the maximum capacity of either the plant burning coal (which likely has lots of coal in reach that it can't burn fast enough) or, at best, the rate at which you can remove it from a mine to get it to the plant.

Now, if you could somehow cheaply lay cable and setup cell towers constantly and everywhere, and if they didn't degrade or get obsoleted by new standards, sure: maybe you could keep building out infrastructure and accumulate capacity to handle increasing usage forever. But finding places to put cell towers is hard, laying new cable is destructive, and the reality is that you have to maintain and upgrade your infrastructure constantly. The peak load that can be supported is thereby a key cost, and there should be incentives for everyone to keep that cost low.

As for your "peak usage is contributed to by everyone equally" assertion, I just don't understand why this is not clear: if the Superbowl being streamed is the highest usage load, then if you are watching the Superbowl (or doing something else equally bandwidth intensive at that exact moment) you contributed to the max total peak usage for the month in a way that someone who wasn't using the Internet that day did not. I just don't see how you can argue against that: it seems self-evident :(. Some people are simply going to be using more bandwidth than other people during the peaks (and some services are going to encourage people to use lots of bandwidth, especially simultaneously, while others do not.)

To put this another way, you are saying that my grandmother sending a few emails every month is contributing to peak usage as much as someone who streamed the Superbowl... or that someone who only uses Netflix at 3am (or whatever the most non-peak time happens to be) is also contributing to peak usage as much as someone who streamed the Superbowl... at some point this argument becomes gratuitously preposterous. If everyone were my grandmother then the world would need fundamentally less networking infrastructure to keep up with demand: she should not have to bear the cost of people who insist on streaming the Superbowl live over the Internet.

And again, with the concept of "wasteful", this should also be really obvious: let's say there are two ways to stream the Superbowl--two codecs that could be used to compress the stream--and one uses half as much bandwidth as the other. This one decision affects peak usage more than any other decision: someone somewhere has to bear the price of this decision, and to make the wrong choice is "wasteful". If someone insists on using the worse codec (or, in a network neutral "bill the users" model, chooses to use services that employ the worse codec), they should somehow be forced to bear the infrastructure buildout and upkeep cost of that decision.

Maybe you could work this out with some math: can you demonstrate how all these different kinds of users are contributing equally to the peak load? You seem to believe that this should be as obvious to me as I think this should be to you. Whether you measure a ratio of the peak, or the probability of increasing the peak, or even metrics that are purposely flawed in an attempt to force equality, like "whether the user was using any bandwidth at all during the peak", any and all analyses show that the person watching the Superbowl stream in this example is contributing to the peak much more than my grandmother is, much more than someone watching Netflix in the middle of the night is, and even more than someone using a "less wasteful" codec to perform the same activity is... please show me what metric you are using where these people actually work out to "contribute equally".
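To make that concrete, here's a toy sketch of the peak-contribution point. Every number below is invented for illustration; the only claim is that whoever is drawing bandwidth during the peak hour is the one contributing to peak load:

```python
# Hypothetical hourly bandwidth demand (Mbps) for three users over one day.
# All figures are made up to illustrate "contribution to peak load".

HOURS = 24

# grandma: a trickle of email mid-morning
grandma = [0.0] * HOURS
grandma[10] = 0.1

# night owl: heavy Netflix, but only at 3am (off peak)
night_owl = [0.0] * HOURS
night_owl[3] = 5.0

# superbowl viewer: a big live stream at 8pm, the busiest hour
superbowl_viewer = [0.0] * HOURS
superbowl_viewer[20] = 8.0

users = {
    "grandma": grandma,
    "night owl": night_owl,
    "superbowl viewer": superbowl_viewer,
}

# Total network load per hour, and the peak hour capacity must be built for.
load = [sum(u[h] for u in users.values()) for h in range(HOURS)]
peak_hour = max(range(HOURS), key=lambda h: load[h])

# Each user's contribution to the peak is their usage during the peak hour.
for name, usage in users.items():
    share = usage[peak_hour] / load[peak_hour]
    print(f"{name}: {usage[peak_hour]:.1f} Mbps at peak ({share:.0%} of peak load)")
```

Under this toy model the night owl moves far more total bits than the Superbowl viewer, yet contributes nothing to the peak that drives infrastructure cost, which is exactly the asymmetry being argued about.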


Unfortunately with all the projects I'm working on, my previous comment was stated poorly and I don't have the time and energy to craft a satisfactory response.

Fundamentally, I believe that any current problems with bandwidth are entirely manufactured by the last mile ISPs, and that further ISP consolidation will give them greater leeway to fabricate more crises in the future.

I also believe that a premature focus on morally loaded concepts like "fair shares" of capital expenses is a manipulative tactic used by ISPs to divide the public, turn innovators into the enemy, and cripple technology's progress.

As other posters have explained, the vast majority of the expense of an ISP is not in delivering the next gigabyte of data, but in establishing the physical connection. Once the cable is laid, new technology can be added to the endpoints (like DOCSIS 3) for a fraction of the original cost. Grandma's web browsing cost just as much to lay the cable as everyone else's Netflix, and Grandma already has the option to buy a cheaper connection if she wants.


> There is something unique in character to that Superbowl experience that is making it fundamentally more expensive to the world than other possible uses of bandwidth, and that needs to be modeled as a cost to someone.

If this was actually a problem there is a much better way to model it than metering: Sell different speeds at different times of the day. Instead of selling a connection which is allegedly 100Mbps all day and all night, sell a connection which is 1Mbps during peak hours and 100Mbps off peak. Then sell a more expensive connection which is 10Mbps during peak hours and 100Mbps off peak, etc.

This is a much better mapping to costs than metering because it takes into account the persistence of capacity expansions. With metered billing, the people who only tune in on Superbowl Sunday aren't actually paying as much as their usage is requiring capacity to be expanded. Peak usage that day is higher than peak usage any other day so more capacity was needed just for the day, but once built it sticks. Just paying the “normal” metered pricing that day would let them off the hook for their contribution. But increasing the price to the accurate level, putting the whole capacity expansion cost onto that one day, could create such high prices that customers would revolt, and would probably make live streaming of popular events uneconomical. The better solution is to make the correct people pay (namely those who buy the plan with sufficient peak hours performance to play the stream), but pay by subscribing to a plan that spreads the cost out over the whole year and in exchange provides superior peak usage performance all year to those paying for the capacity expansion that allows it.
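A toy sketch of what such time-of-day speed tiers could look like; plan names, hours, and prices are all invented for illustration:

```python
# Hypothetical time-of-day speed tiers: each plan guarantees a lower rate
# during peak hours and a shared high rate off peak. All figures invented.

PEAK_HOURS = range(18, 23)  # assume 6pm-11pm counts as peak

PLANS = {
    "basic":   {"peak_mbps": 1,  "offpeak_mbps": 100, "price": 20},
    "plus":    {"peak_mbps": 10, "offpeak_mbps": 100, "price": 40},
    "premium": {"peak_mbps": 50, "offpeak_mbps": 100, "price": 80},
}

def speed_at(plan: str, hour: int) -> int:
    """Rate (Mbps) a subscriber on `plan` gets at a given hour of day."""
    p = PLANS[plan]
    return p["peak_mbps"] if hour in PEAK_HOURS else p["offpeak_mbps"]

print(speed_at("basic", 20))  # during peak: only the guaranteed peak rate
print(speed_at("basic", 3))   # off peak: the full shared rate
```

The point of the structure is that the price difference between plans tracks peak-hour capacity, the thing that actually costs money to build, rather than total bytes moved.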

> The reason we don't have horribly wasteful electronic devices is partly because they would cause electricity bills to go up for the people using them. If you had a washer/dryer that used a thousand times more electricity than other appliances you owned, or you had a holographic television that used a thousand times more electricity than a lightbulb does, you would start to limit how often you washed clothes and watched holovision, and this would be a good thing: it would lead to market incentives to create similar devices that used less electricity.

The incentive exists without metering. If you had a version of Netflix that used a thousand times more bandwidth than the existing one, you would need a connection which is a thousand times faster in order to use it. You can even get such a connection: Just pay AT&T or Level 3 to dig up the street in front of your house and install 10 or 100Gbps fiber just for you. They'll do it for the right price, but that price is prohibitive for pretty much everyone. Which is why Netflix doesn't even offer to stream uncompressed 4K 3D video at ~10Gbps per stream.

You're also making the assumption that the price of metering would be high enough to have a significant deterrent effect on usage. But accurately priced metering probably wouldn't. The significant majority of an ISP's expenses are not strongly correlated with the amount of data transmitted. They're paying accounting, customer service, marketing, linemen to repair weather damage, property tax, electricity, etc. Even a significant proportion of expansion-related costs are uncorrelated with future consumption, because by the time you break ground to do any expansion whatsoever, the digging becomes a sunk cost and the cost to install 2X or even 10X as much fiber into the open hole is extremely modest. Expanding capacity by 500% can cost in the same ballpark as expanding capacity by 50%.

The marginal cost of increased usage that could be deterred by accurately priced metering, i.e. the cost to the provider of usage increasing by say 55% instead of 65% over the same period, is so small that trying to measure it isn't even worth doing. It may even be zero – upgrading links from 1Gbps to 10Gbps gives you ten times as much capacity regardless of how much you actually needed.

The concept of trying to conserve bandwidth is the application of false analogies. It's not like electricity; there is no generation cost. Bandwidth is use it or lose it. If you're paying for a high speed plan and then not using it, that isn't the fault of someone paying for a high speed plan who is using it. A single user using more than average costs nothing because it isn't enough to require an overall capacity upgrade; only the average user increasing average usage does, and if that happens to a sufficient degree then all it means is the average user will have to pay a higher rate. People who don't use as much and don't want to pay as much can buy slower connections – but if it turns out that the actual cost of increased usage is a sufficiently small proportion of the cost of providing internet service that slower connections cost almost as much as faster connections, that's hardly the fault of people who send more bits.


You seem to believe that it makes sense to buy a fixed guaranteed speed connection: I am starting from the assumption that no one would ever want that, as it is inherently wasteful, and that any ISP that bills like that will be defeated instantly by an ISP that understands the idea of time sharing. As a user, I don't want to buy a connection that is guaranteed to be able to transfer a bunch of bits but lies dormant 99% of the time even when I'm actively browsing. However, when I am transferring something, I want to be able to take advantage of the faster rate. If you work out the math on some of the examples I've given in this thread (the people trying to transfer a minute of video over different time frames), you will see that "total number of bits" is actually a reasonable approximation, and a fundamentally more reasonable approximation than buying fixed-size connections. I also have already stated there are more accurate billing models, but they start to look as complex as Amazon Glacier, and they require your browser to purposely rate limit itself, which is tech we don't have deployed.

To address your comments on the pricing of "peak usage", metering is an approximation of your probability of increasing the peak load. This works better than you would expect (and even better I feel than for electricity) because bandwidth is designed to gracefully degrade: if it takes twice as long for me to download tiny files from the Internet I am unlikely to even notice. If the speed slows down tremendously, then rather than having to increase the price for that bandwidth, we simply decrease everyone's quality of service.

This happens automatically and naturally, and while it might make you angry that you feel you were paying for something you weren't, what you were actually paying for was always this "probability distribution of speed assuming characteristic usage patterns under an expected load distribution". If you thought your flat-billed bandwidth was anything different you haven't thought through the economics enough: you would have to be paying much much more than you do under the current model, so anyone who actually billed like that would look so expensive as to be "insane".

As people are already effectively paying more during peak load (as they are getting slower service for the same amount of money) that part is already actually fine: the issue is now how to deal with the fact that some people are more likely to increase load due to their usage patterns, and while switching to Amazon Glacier style billing would be "extremely accurate", just billing on total usage is fundamentally more accurate than a flat rate.

As for electricity, I covered this already many times: you are just wrong about the costs. The costs associated with power are dominated by the overhead of the plant, not the costs of generation. If you run a plant "below capacity" you are often better off just shutting it down, as the overhead is so high (you mostly do that in order to have "spinning reserve", so you can obtain more power quickly in the near future). If you are simply unwilling to believe that the cost of coal is largely irrelevant to our power costs, think about the economics of hydroelectric, which provides 16% of the world's power. In fact, electricity is mostly a game of "how many plants do I need to have built to have capacity to satisfy the load requirements", which is exactly the game with bandwidth. You don't have a blackout because someone ran out of coal: you have a blackout because more people are using power than is capable of being generated, we are using power faster than we can burn our copious available coal (which was probably more obvious back in the days when brownouts were more acceptable). If you are capable of generating power and you aren't, in a similar manner to bandwidth you are just "losing it": it isn't like we can store power at these kinds of levels; with some kinds of plants you get to save a little fuel, but again, that is a minor cost and doesn't apply to renewable sources, including hydroelectric.

All that aside, though: I want to point out that you are also not arguing for the current state of affairs. You also think people need to "pay for what we use", you just have a different concept for how that "usage" must be monitored. As it stands, most people only want to pay for "lots and lots of bandwidth at a flat price". They don't appreciate the idea of an incentive and they aren't modeling any costs to the ISP at all. In a way, and I think in a very fundamental way, what you are saying is actually agreeing with the overall point I was defending: the statement upthread that we need to pay for what we use to not run into network neutrality issues. If people who wanted to watch Netflix actually were paying much much more to get that kind of dedicated service than the people who were browsing web pages, there would also not be this network neutrality issue.

I argue that this would be a wasteful and expensive way to do the billing (as people browsing web pages--who would like their web pages to download quickly but are also downloading so dispersedly that time slicing is super efficient for them--either end up paying too much or having to live with downloading things needlessly slowly, and people who watch Netflix erratically are forced to spend on the same guaranteed bandwidth as someone watching it constantly despite also being able to timeslice better), but at least your different way of billing is also a way of billing compatible with network neutrality.


I think the argument is that there's nothing about that tradeoff that's "fair", because it doesn't reflect reality. Unlike electricity, gas, or water, they aren't creating bits, storing bits to ensure supply, or depleting their supply of bits by transmitting them to you.

Charging by the bit would just be a good way to increase telco revenue.

Now, if you think it would be a good bargaining chip for getting them to accept being dumb pipes, that might be a good argument, but we should call it what it is if that's the case.


Electricity companies can only sort of store electricity: they produce electricity, and it is consumed at some rate by users; this is why electricity is priced differently in many areas during "peak usage hours": because electricity is more like bandwidth than you seem to realize. If you have a difficult time modeling this, try to model electricity in a world where everyone is using renewable energy sources like wind or solar (and if you find that "preposterous", hydroelectric is very common as a primary electricity source in many areas). I can see a strong argument for switching to some kind of billing based on the peak load you contributed to, but that turns into a pricing model almost as complex as Amazon Glacier ;P.


It's possible to export electricity, unlike bandwidth.


Non-peak hours for one area are by-and-large non-peak hours for nearby areas. It would be awesome if we could efficiently transfer power halfway around the globe, but as we can't, the ability to export power is only a negligible change in the model.


> they aren't creating bits, storing bits to ensure supply, or depleting their supply of bits by transmitting them to you

Their value proposition is not "bits" per se, but capacity, which they do in fact deplete when they transmit bits to you.


> Their value proposition is not "bits" per se, but capacity, which they do in fact deplete when they transmit bits to you.

Capacity in this case is a rate, not a quantity. The same capacity is required to deliver a web page at 50 Mbit/s as to deliver video at 50 Mbit/s.


A video has a minimum viable rate (past which point it will no longer be real time), while a web page only sort of does. A video is a sustained download, while a web page is relatively instantaneous. If a thousand people are clicking links and browsing web pages even fairly quickly, they will likely not notice each other's usage of the network. The same cannot be said of a thousand people streaming a video over the same network.

To model this another way: you are muddling together the capacity of something with its speed. These are highly related ideas, as past the latency of a link the bandwidth does manage to model the speed a single user can obtain over the connection. However, users are sharing that speed over a specific duration: it absolutely matters how much data is being transferred. The web page is simply fewer bits than a video, so more of them can be downloaded over the same capacity link per unit time.

To make that time requirement more concrete: a thousand people who need to download a minute of video within the next minute requires more capacity than a thousand people downloading a minute of video who are willing to wait up to two minutes to watch it, both of which require more bandwidth than a thousand people downloading a minute of video who just need it to arrive before tomorrow.
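The arithmetic behind that comparison, with made-up numbers (a hypothetical 5 Mbit/s video stream):

```python
# Aggregate capacity needed for 1000 users to each receive one minute of
# video, under different delivery deadlines. The 5 Mbit/s rate is invented.

video_mbit = 5 * 60  # one minute of 5 Mbit/s video = 300 Mbit per user
users = 1000

def required_capacity_mbps(deadline_seconds: float) -> float:
    """Total network capacity (Mbit/s) for all users to finish in time."""
    return users * video_mbit / deadline_seconds

print(required_capacity_mbps(60))     # must watch live: arrive within a minute
print(required_capacity_mbps(120))    # willing to wait two minutes: half that
print(required_capacity_mbps(86400))  # "before tomorrow": a tiny trickle
```

Same bits in every case; halving the urgency halves the capacity the network has to be built for, which is the whole point about deadlines driving cost.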

(Putting this all together, you hopefully see how flow control becomes relevant: peak load is informed by peak capacity, which causes the speed per user to automatically slow down as the bandwidth capacity is reached. But this is more of an advanced thought that is probably not needed to appreciate that a thousand people streaming video need larger pipes than a thousand people working at some email: that is hopefully intuitive.)


If you're the sole user of the internet, then this is true.


> until we swallow the pill of paying for what we use—like we do for other utilities

What other utility gets to charge the manufacturer of whatever you use to consume their service? What power company gets to charge Lenovo because I plugged a Lenovo laptop into an outlet?

You only touch on a portion of what the issue is. And quite frankly, to deal with what you did mention: I pay for internet access, I expect internet access. Why should I be charged more because I then choose to use it? Why should I be penalized for the ISP oversubscribing and underestimating the usage?

I shouldn't because that's not how they sold their service.


That's the point: as users we have to pay for what we use in order to avoid carriers trying to bill website operators. If you aren't willing to pay more for using more bandwidth than your neighbor, because you use Netflix and your neighbor only haphazardly checks email, then the carrier is going to try to bill Netflix, and we don't want that to happen.


Honest question, if a very naive one (since I haven't thought about it very hard): why wouldn't a very simplistic metering be acceptable by that definition?

I'd be totally fine having my internet metered, if the costs were commensurate to the service provided, and the service was as regular as other metered services (e.g. water/power)

I've heard why the common carrier argument isn't "the best solution", but I don't fully understand why metering hasn't been championed as a way to have the two sides (consumers and the companies) come to some common ground, or at least "call the bluff" of the companies using justifications like the above ("people use internet differently", "people will never use this amount"); so why not just see, and charge accordingly?


> why wouldn't a very simplistic metering be acceptable by that definition?

It would be perfectly acceptable (assuming I've understood you). I think the reason nobody is championing it is twofold, ISPs don't like it because it would make them that much more utility-like, which they strenuously want to avoid. Active internet users don't like it because their bandwidth is largely subsidized by less-active subscribers, and leveling the playing field would be a net loss for them. To me it looks like pure selfishness on both accounts.


Active internet users don't like metered billing because they understand it is a ruse. If I use my connection at full capacity all day long, except during the evening prime time, I might rack up several TB of data each month. I'm not causing any network congestion or costing the ISP any money though. The basic user who 'only' streams several shows on Netflix each evening might only use several dozen GB of data each month, but they cost the ISP much more in peak bandwidth congestion.


So have a peak rate and an off-peak rate.

That's exactly what my ISP (Andrews and Arnold[0]) does. I pay for a specific number of 'units' which are consumed faster at peak times (09:00-18:00) than at all other times.

A&A's policy[1] is to never be the bottleneck in the network, and they also kick up a stink with their transit providers (mainly BT) when they detect a problem (which is very often, as they monitor their network's health closely). They even publish statistics[2] on dropped packets classified by their uplink - for instance, over the last 28 days they have a 98.48% success rate at a 1-second granularity on one of BT's backhauls, and 100% on TalkTalk's backhaul.

They do work out to be more expensive than other ISPs (I pay ~£33 a month for a 40mbit FTTC connection with 4 units) but you pay for a good service and competence.

[0] http://aaisp.net.uk/ [1] http://aaisp.net.uk/broadband-speed.html [2] http://clueless.aa.net.uk/linkreport.cgi
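As a rough sketch of how unit-based billing like this works (the rates and peak hours here are illustrative, not A&A's actual tariff):

```python
# Hypothetical sketch of "units" billing where usage burns units faster
# at peak times. Rates and hours below are assumed, not A&A's real tariff.

PEAK_HOURS = range(9, 18)    # 09:00-18:00
PEAK_UNITS_PER_GB = 1.0      # assumed peak rate
OFFPEAK_UNITS_PER_GB = 0.2   # assumed off-peak rate

def units_consumed(gb, hour):
    """Units deducted for transferring `gb` gigabytes at `hour` (0-23)."""
    rate = PEAK_UNITS_PER_GB if hour in PEAK_HOURS else OFFPEAK_UNITS_PER_GB
    return gb * rate

print(units_consumed(10, 14))  # 10.0 units: 10 GB in the afternoon
print(units_consumed(10, 2))   # 2.0 units: the same 10 GB overnight
```

The same transfer costs five times as much at peak, which is exactly the incentive to shift bulk downloads off-peak.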


Metering is used in Canada. Many have criticized the policy as backwards, unfair, and hampering creativity compared to other countries. http://www.theglobeandmail.com/technology/tech-news/a-metere...

Though I do agree that, at first glance, it seems to correlate extremely well with the cost of the service provided.


They present a counterargument of the "heavy user", dismiss it, and don't explain why it's not a fair example of why metering is useful. The main complaint of the article seems to be simply that the overage charges are insane, as they are for cell plans. I agree. I also think it's "throwing the baby out with the bathwater" to suggest an entire reversal of policy (given what unmetered use has led to in the US) when a more iterative response would be: "why don't we regulate costs to ensure that this pseudo-utility is actually accessible at a basic level (one that grows with the times) to all people?"

I'll have to tag another question on here, but is there any system in place to prevent, say, my water company from charging me $300 per gallon? I'd assume there's simply some degree of regulation; why not apply that fix elsewhere and have a "hybrid utility" rather than treating it as all black and white?


Utility rates are highly regulated. They can only charge what a government utility commission allows them to charge. Every few years, they generally plead the case to raise rates slightly, so they can keep up with rising costs or inflation.


It's called the Insurance Effect. People are willing to pay more for flat rate, which includes "carrying" high volume users, to insure against bill shock if their usage patterns change. Humans are bad at estimating bandwidth use, and pricing that removes that requirement has real value.


"Insurance" is a form of risk mitigation. The reality of bandwidth is that we know someone who likes using FaceTime, or who has a Netflix subscription, is going to use more bandwidth than someone else. Sure: it makes sense if you aren't certain how large a web page will be to buy insurance on it, but you don't want to be in the same insurance bracket as a true "heavy user": I am happy to take part in a pool where everyone subsidizes the risk of accidental trauma, but I know someone who insists on smoking is going to increase their health costs, and they should be forced to take on that increased burden. This is especially evident when the playing field changes: the cost of insurance for all users goes up dramatically when something like FaceTime or Netflix is released, because the people using it are using dramatically more bandwidth than the people who aren't. If the bandwidth companies said "ok, thanks to Apple's FaceTime service, everyone's connection is now twice as expensive even if you don't own Apple hardware", people would be pissed. This is why insurance charges people differently for their inherent risk, and why you can't use insurance as a metaphor here (well, unless you want to make the opposite argument ;P).


It may be hard to predict bandwidth usage, but electricity, gas, and water aren't perfectly predictable either, and unlike bandwidth we're incentivized to estimate them and to self-throttle to the extent reasonable (e.g. turn the thermostat down at night).


The scarcity lies in apportionment of the pipe at any given moment, not in the total data transferred. Therefore, the only way that charging users (whether in money or some other form of opportunity cost) will alleviate scarcity is if it's assessed against the former (apportionment of the pipe), not the latter (total data transferred).

Simply charging per unit of data downloaded just makes people cut back on total use, without concern for when they cut back. This does nothing for what you really want to discourage, which is usage at peak times.

(My preferred solution is to reduce the fraction of the pipe that you get in proportion to how much you use at peak times. That way you have some incentive to cut back, but not in a way that forces you to eye your wallet.)
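A toy version of that proposal, with made-up weights, might look like:

```python
# Toy sketch of the parent's proposal: when the pipe is congested, each
# user's share shrinks with their recent peak-time usage, in favor of
# lighter users. The inverse weighting below is a made-up choice.

def allocate_shares(pipe_mbps, peak_usage_gb):
    """Split pipe_mbps across users, inversely weighted by each user's
    recent peak-time usage in GB."""
    weights = [1.0 / (1.0 + gb) for gb in peak_usage_gb]
    total = sum(weights)
    return [pipe_mbps * w / total for w in weights]

# A 100 Mbit/s pipe shared by a light user (1 GB) and a heavy user (99 GB):
light, heavy = allocate_shares(100, [1, 99])
print(round(light, 1), round(heavy, 1))  # the light user gets most of the pipe
```

The whole pipe is always handed out, so nobody pays money; heavy peak users just wait longer, which is the opportunity cost being priced.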


Usage during peak times is already discouraged by the network degrading during peak times: rather than charge more you simply get less value, which is equivalent. This causes traffic to naturally smooth out some over time. You thereby in fact do need some way to cut back on the total usage, as a bunch of people randomly using Netflix really does cost more than a bunch of people randomly browsing the web. The goal of the network is not to support the peak load: this seems to be a key misconception in these arguments :/. It just isn't economically efficient to build a system that can support the peak load. If people really do want to have a system that can stream Netflix to everyone's house at 7pm when the traffic throughout the rest of the day is nothing like that amount, then someone has to bear those costs: either Netflix needs to subsidize the weird bandwidth requirement, customers need to opt in to "supports Netflix" plans, or we have to give up entirely on pricing models that don't take into account usage in some way. You might disagree with how that usage is calculated, but right now the problem is that most people just want "lots and lots of bandwidth for a flat rate", and that is inherently incompatible with a demand for network neutrality :(.


I'm aware of all that and don't see how it refutes my assumptions or solution.

Yes, the network is degraded at peak times naturally. My idea is to further penalize heavy users (weighted by time of use) but do it in favor of lighter users (which avoids the rent seeking problem).

As you note, this inevitably penalizes total use, but does it in a way that accounts for when the use is bad.


So, first off, I want to point out that I think we are largely in agreement, as I certainly would not be bothered with your suggestion were it implemented: the key thing is that people are charged for their usage in some manner as opposed to expecting a flat rate to cover "lots and lots of bandwidth".

However, you are also stating that simply charging people by the byte is incapable of having a similar effect--that your modification is the "only way" to gain a benefit. I was showing that because there is already a difference in the price during peak vs. non-peak times you don't need that modification.

I also argue (generally in this thread, though not in that comment, as it hadn't seemed necessary; now it does) that even though per-byte billing has almost the same overall result (again, due to peak capacity being inherently lower than peak desired load), it has a key benefit: the cost is at least remotely predictable. Charging people based on a constantly fluctuating measure of "how much bandwidth are other people using" is a price fluctuation that can't be effectively predicted until after you are accruing the cost: it requires users not only to predict their usage, but to keep track that "these four hours are the Superbowl, so everything is going to cost more".

I guess this could be solved by having a live "surge pricing" indicator in your status bar next to the network icon, but that requires infrastructure to traverse that across NATs that we don't have currently. One more point on that: it would also be valuable to do this kind of minute fluctuation with relation to power, but we don't: we just bill at best a difference for "typically peak" vs. "typically non-peak", even if some days are fundamentally more energy-intensive than others (as in addition to it being a weekday someone scheduled a massive lights show across the entire city or something).

That is why I still feel like I responded to what you said: while I did not in any way attempt to address whether your model was more or less "accurate" (I have yet to decide whether that is really the case, but it certainly doesn't sound wrong or anything), I was only trying to show that it was not by any means "necessary".


Alright, I agree that "cannot" was too strong. Nevertheless, charging for total usage is discouraging the wrong thing, even though it correlates with the right thing. If you want the roads to run smoothly, you don't charge everyone a dollar; you charge peak users $4 and everyone else nothing.

And weighting data price by "peakness" is not significantly harder than not weighting it; in both cases it's "just another metric" for the user to monitor.

(The broader point is that, in no case does it justify the standard ISP practice of singling out a particular content provider, even if the content provider is responsible for 99% of usage.)


Sure. Then ISPs need to start listing what they can actually fulfill, under load, for all customers in a given area at any given time.


If they billed by the byte instead of by some nebulous concept of connectivity then it wouldn't matter what they "can actually fulfill, under load": if they weren't able to get you all the bytes you wanted, you would be paying them less, and they would make less money. The maximum amount of money they could make from their infrastructure per hour would then be exactly their per byte cost times the number of bytes they could move per hour, leading to obvious incentives to increase their capacity.
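The arithmetic here is simple; with a made-up per-GB price:

```python
# Under per-byte billing, hourly revenue is capped by the bytes the ISP
# can actually deliver, so added capacity means added revenue.
# The price below is a hypothetical figure, not any real ISP's rate.

PRICE_PER_GB = 0.05  # hypothetical $/GB

def max_hourly_revenue(capacity_gbps):
    """Upper bound on revenue per hour for a link of capacity_gbps Gbit/s."""
    gb_per_hour = capacity_gbps / 8 * 3600  # Gbit/s converted to GB/hour
    return gb_per_hour * PRICE_PER_GB

print(max_hourly_revenue(10))  # a 10 Gbit/s link
print(max_hourly_revenue(20))  # doubling capacity doubles the ceiling
```

The revenue ceiling scales linearly with capacity, which is the incentive the parent is pointing at: congestion directly costs the ISP money.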


Please tell me exactly what the person streaming Netflix is taking away from the little old lady checking Facebook.


In many parts of the US, Netflix usage saturates neighborhood cable modem networks in the evenings, making things painfully slow for everyone.


It's not like this is novel. Everyone with any knowledge of the situation (and no financial ties to ISPs) has been calling on the FCC for the telecommunications-service classification for years. As I remember it, the striking down of the FCC's neutrality rules would have been totally avoidable if they had set that classification to begin with.

Will Mozilla "officially" proposing this make any real waves? I guess as a large organization, maybe their example will at least open up the door for other companies to join Mozilla's proposal?

Edit: I get the "last mile" distinction in their proposal, but if the entire idea of Internet delivery was classified under common carrier laws, that would cover last-mile.


Edit: I get the "last mile" distinction in their proposal, but if the entire idea of Internet delivery was classified under common carrier laws, that would cover last-mile.

"Common carrier" worked when we had a (true monopoly) telephone system where touchtone was considered a radical, once-in-a-decade technical advancement. What incentive does Comcast have to, say, push standard bandwidth to 100Mbit while being paid regulated, low margin rates as a dumb pipe provider?

Mozilla's proposal is interesting, but I think it's probably too clever by half: Even if the FCC wanted to get into the business of setting "fair" rates for "remote delivery", I think Congress would prefer they didn't, and that matters.


> What incentive does Comcast have to, say, push standard bandwidth to 100Mbit while being paid regulated, low margin rates as a dumb pipe provider?

To be more explicit: Mozilla's proposal solves this. ISPs get to charge whatever prices they want to consumers, which covers the cost of 100 Mbit bandwidth and technological innovation, but to companies they have to act like a dumb pipe and ensure equal access. This means Comcast has an incentive for innovation even in a monopolistic situation: they can charge users more money.

I'm not sure if this would solve the problem with BitTorrent though...


1. Someone needs to explain this topic better. 2. There's too much confusion right now, and I think it's an important topic. 3. Right now, I don't trust Comcast in any scenario.


Plus, there's paid shills here on HN attempting to confuse the issue.


I think the far distant future internet will distinguish between cached and uncached data, especially for international and interplanetary communication.

So any kind of data that can be easily hashed and disseminated over ad hoc networks will be basically free and unstoppable (BitTorrent and Netflix will be the same thing).

But data that has to be sent from scratch instead of referencing a hash will have to pay some kind of toll, albeit a small one.

It's simple to see the endgame of this: there will be an economic incentive to favor storage formats that reference existing data whenever possible. What’s really remarkable to me is that ISPs don’t seem to understand this, and by killing net neutrality they are hastening the demise of their own business model.
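The hash-referencing idea can be sketched as a toy content-addressed cache (everything here is illustrative):

```python
# Toy content-addressed store: data named by its hash can be served from
# any nearby copy, while a cache miss would have to pay the long-haul toll.
import hashlib

cache = {}  # hash -> bytes, standing in for the ad-hoc local network

def store(data: bytes) -> str:
    """Publish data under its content hash; returns the reference."""
    ref = hashlib.sha256(data).hexdigest()
    cache[ref] = data
    return ref

def fetch(ref: str) -> bytes:
    """Cached data is effectively free; a miss means fetching from origin."""
    if ref in cache:
        return cache[ref]
    raise KeyError("uncached: must fetch from origin (pays the toll)")

ref = store(b"popular video segment")
print(fetch(ref) == b"popular video segment")  # True
```

The key property is that the reference is derived from the content itself, so any copy anywhere on any network is equally authoritative; this is the same principle behind BitTorrent infohashes and IPFS.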

It’s going to be a little rough for 5-10 years, but when everyone’s cell phone has p2p gigabit wifi, I find it a little hard to imagine that ISPs will even exist as a business, outside of the government or things like banking where a hard line/low latency is desirable. So writing this out now, I see that latency is everything, so a few decades from now, what low latency industries are they hoping to capitalize on? Maybe gaming, surgery, telepresence, high frequency trading.. Netflix is just a cover story.


We need a revolt to protect the internet. Washington is overrun by money, but if people raise enough of a ruckus they do somewhat listen, cf. SOPA. If net neutrality is too big a leap for the so-called "libertarian", anti-regulation crowd, then we need to break up Comcast, AT&T, Verizon, etc. so we can have some actual last-mile competition.


Not to be too contrarian here, but isn't there a massive moral hazard (e.g. one that Netflix takes advantage of) in enforcing Net Neutrality?


Could someone explain like I'm five? This is fairly technical for anyone not familiar with telecom regulations.


With this "fast lane", could I theoretically get "for free" 1gbps while paying for 1mbps?



