It's too bad that Comcast chose to have inferior upload speeds, about 6 Mbps on standard plans in my region, no matter whether it's 100 Mbps down or 1,000 Mbps down. When we're talking work from home, online learning, and video conferencing, upload speed matters. Comcast tries to bury the upload number; I'm kind of surprised to see it printed in this article, and when I realized the reality I thought there must be some mistake. There is no option for a higher upload speed at less than $300/mo, which is the price of a full fiber connection. It's great to have an option, but too bad Comcast chose to make their service inflexibly inferior in this way.
Just got fiber deployed to my small city here in France, and I now pay 40 euros a month for 900 down / 500 up, with a free phone line and IPTV access. We're barely 8,000 people and surrounded by farmers' fields for kilometers around.
And I'm still jealous of Eastern European countries like Romania, where they get better speeds for cheaper.
I don't know how such a highly developed country as the US can tolerate those ridiculous prices, bandwidth limits, and data caps.
This is a bit of an old trope: "size matters". I'm not defending the American ISPs' abominations, like interfering with municipal fiber build-outs, or denying access to poles so that new competitors can't use public infrastructure to build a competing service without unreasonable barriers to entry. All this anti-competitive crap... Right on, I'm with you.
However... America is ENORMOUS compared to any European state... and mostly unoccupied. And our government only recognizes the decaying Telco system as a public utility that we all need access to...
Never mind. I don't understand it either. It's just fully stupid. The size will make it cost more... Not recognizing that it's worth it... For that there are no excuses.
Europe is over 3x as population dense as the USA. Europe also doesn't have American suburbs, so even small European towns are very population dense within the city limits. America is too spread out for the capex of fiber to be worth it outside of major cities.
Yes, by lowering the download speed, at least on DOCSIS 3.0.
DOCSIS 3.0 supports up to 8 upload channels for a total of 216 Mbps (at 6.4 MHz per channel). That's shared among all users on the CMTS, which could be a lot, depending on the setup. For comparison, with 32 channels the total download speed for all users is 1600 Mbps on EuroDOCSIS 3.0, and 1216 Mbps on DOCSIS 3.0.
DOCSIS 3.1 adds multiple OFDMA upload channels for up to 2 Gbps of shared upload capacity, and DOCSIS 4.0 allows for symmetric 10/10 Gbps of shared capacity with no fixed DS:US split (you can have the same number of channels in each direction). It uses extended RF spectrum on the cable and also needs new amplifiers and splitters.
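As a rough back-of-the-envelope check on those figures (a minimal Python sketch; the per-channel rates are the usual usable throughputs for 64-QAM upstream and 256-QAM downstream, assumed here rather than quoted from the spec):

    # Approximate usable per-channel rates after overhead (assumed values)
    up_per_channel = 27    # Mbps per 6.4 MHz upstream channel (64-QAM)
    down_docsis = 38       # Mbps per 6 MHz downstream channel (256-QAM)
    down_eurodocsis = 50   # Mbps per 8 MHz downstream channel (256-QAM)

    print(8 * up_per_channel)      # 216 Mbps shared upstream
    print(32 * down_docsis)        # 1216 Mbps shared downstream (DOCSIS 3.0)
    print(32 * down_eurodocsis)    # 1600 Mbps shared downstream (EuroDOCSIS 3.0)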
No, the maximum upload is fixed and independent of the download speed. I meant they can offer 6/6 Mbps, or 12/6 Mbps, for a 1:1 or 2:1 ratio. Basically as a joke.
As I have said here: https://news.ycombinator.com/item?id=22572536 , DOCSIS 3.0 supports 216 Mbps usable upload capacity with 8 upstream channels, halved to 108 Mbps with 4 upstream channels. That speed is, however, shared with ALL users on a CMTS, which could be dozens or a hundred, since the coaxial infrastructure is based on a shared medium where all devices connect to one physical cable that leads to the CMTS and share the available RF bandwidth.
You could sell a single user 100 Mbps upload with 4 channels, and have just that one user on the CMTS. You can also sell 6 Mbps upload to 36 users, guaranteeing them 100% of the capacity, or to 72 users, guaranteeing them 50% of the capacity.
It is not feasible to sell packages with more than 50 Mbps upload on most coaxial networks (even HFC, if there are many users, i.e. more than one apartment building connected to a node), because users will not be able to use the full capacity. The speed will inevitably drop to the sub-10 Mbps level, and users will be unsatisfied, as they are paying for and expecting higher speeds.
The maximum download speed for DOCSIS 3.0 is 1216 Mbps, and that's only if you use 32 channels. If you were selling 200/6 connections to 72 users then you would be only guaranteeing them each ~8% of the download capacity (and yet 50% of the upload capacity, using 8 upstream channels).
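To make that oversubscription arithmetic explicit, here's a minimal Python sketch using the shared DOCSIS 3.0 capacity figures from above (the function name is just for illustration):

    # Fraction of the sold speed each subscriber is guaranteed if everyone
    # transmits at once: shared capacity / (subscribers * sold speed), capped at 1.
    def guaranteed_share(shared_mbps, subscribers, sold_mbps):
        return min(1.0, shared_mbps / (subscribers * sold_mbps))

    print(guaranteed_share(216, 36, 6))     # 1.00 -> 36 users at 6 Mbps up: 100%
    print(guaranteed_share(216, 72, 6))     # 0.50 -> 72 users at 6 Mbps up: 50%
    print(guaranteed_share(1216, 72, 200))  # ~0.08 -> 72 users at 200 Mbps down: ~8%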
To have the same level of oversubscription for upload as download, you would give them all 35Mbps upload with the 200Mbps download. But you don't even have to do that, because who says they all need the same plan? Sell a quarter of them 100Mbps up connections and the others still get 17Mbps up.
No way to justify why it should only be 6 up if it can be 200 down. (And if they're offering 200Mbps down without that much oversubscription by using DOCSIS 3.1 or 4.0, even less excuse for 6Mbps up.)
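Quick check on that 35 Mbps figure (a sketch; it just scales the sold download speed by the upstream:downstream shared-capacity ratio quoted above):

    # Upload with the same oversubscription ratio as a 200 Mbps download plan,
    # given 216 Mbps shared upstream and 1216 Mbps shared downstream (DOCSIS 3.0).
    print(200 * 216 / 1216)   # ~35.5 Mbps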
There is only 37 MHz available for upstream, from 5 MHz to 42 MHz (the basic upstream frequency allocation). There is 890 MHz available for downstream, from 112 MHz to 1002 MHz.
Therefore, given that there is 24x more bandwidth in the downstream direction compared to upstream, it would make sense to offer 20:1 or higher DS:US speed ratios to customers. Of course, if your HFC network is "good", i.e. has fewer users, you can sell higher speeds, or just grossly over-sell the service and "hope" not everyone uses the upload at the same time.
Also, a correction: any given cable modem can support up to 32 bonded downstream channels for ~1200 Mbps per CPE, while the cable itself has capacity for ~130 channels, which is theoretically ~4900 Mbps of usable capacity, if there are no TV channels on the cable. However, there is still about ~200 Mbps of upload bandwidth on the cable for all customers, which is limited by the number of channels in the return path RF range. You can assign multiple customers to a single channel, but you can also segment the channels so that all customers get some channels out of the available range, but not exactly the same ones, which would increase the overall capacity of the network.
> Therefore, given that there is 24x more bandwidth in the downstream direction compared to upstream, it would make sense to offer 20:1 or higher ratios of DS:US speed to customers.
And yet 200/6 is >33:1, much less 1000/6 at 167:1.
Moreover, to even get to 24x you're using the full theoretical allocation for downstream but not for upstream, which would be 80 MHz, from 5-85 MHz (the extended frequency allocation), according to your link. That would make it ~11x, implying (on average) 200/18 and 1000/90.
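The spectrum arithmetic behind those ratios, as a quick sketch (band edges taken from the two comments above):

    upstream_basic = 42 - 5        # 37 MHz return path (5-42 MHz)
    upstream_extended = 85 - 5     # 80 MHz return path (5-85 MHz)
    downstream = 1002 - 112        # 890 MHz forward path

    print(downstream / upstream_basic)      # ~24x
    print(downstream / upstream_extended)   # ~11x
    ratio = downstream / upstream_extended
    print(200 / ratio, 1000 / ratio)        # ~18 and ~90 Mbps up, on average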
Also, who decided it should be impossible to allocate more channels to upload? I realize that's more a question of "who designed this crap" than "why aren't they doing this now" but when the parties responsible are still Comcast et al, that makes it hard to want to give them a pass for it. (Though credit where credit is due, DOCSIS 4.0 supports symmetrical uploads.)
Easy for them to do because it costs them nothing. VPNs use so little bandwidth. What a joke to get some good PR, and sad they are exploiting this pandemic.
Yes as maybe it’d draw attention to the absurdity of the limits vs advertised speeds.
It’s similar to the security theater at airports. I personally dislike the entire TSA precheck concept specifically because it works so well. The lack of pain for frequent travelers has kept the entire system from being replaced with what’s now the TSA precheck process.
I think the point they were trying to make (and it's one I share, based upon talking to operators of a few of the largest broadband networks in the US about their usage) is that the WFH crowd is not moving the needle in terms of driving traffic. Except for the 10th, when Call of Duty updates did congest networks starting in the morning, the peak is still post-5 PM. Now, with schools being closed, that can certainly change and lead to what many broadband networks see during spring break (the peak coming earlier in the day).
Seems like there might be increases in total traffic [1] from more streaming. Not sure if Netflix has released numbers, though that wouldn't be as much of a burden to ISPs given their colocated Open Connect appliances.
but... but... don't they enforce this data cap to protect their infrastructure? I don't get it! What's gonna happen to their infrastructure if they do that??!!
A lot of this going on in Spain right now. Higher data caps for mobile connections (landlines have no data caps here), and some ISPs that have a TV service have opened it up for free.
"Less safe" is less true than it was back when that answer was written: a LOT of traffic has gone TLS-only, and the operating systems most people use are secure by default. Yes, there's always a chance of an exploit, but these days I'd be more worried about what links people are clicking on than where they're sitting.
Congestion is also an interesting challenge: in some cases that's a problem (imagine having an AP next to a school), but since the hardware limits have gone up substantially, we're probably at a threshold point where geographic separation is enough to avoid that problem for a large fraction of places. The public library has open WiFi, but there are only so many people who are going to camp out there.
> in some cases that’s a problem (imagine having an AP next to a school)
Really it's the opposite. You put an AP next to a school and it gets the whole school's traffic off the cell tower and onto a local high bandwidth coax/fiber network. You save a ton of wireless spectrum by having low power wifi APs everywhere instead of needing high power cell towers. (And obviously in that case the school itself would be the best candidate to be operating the open APs instead of or in addition to whoever lives next door.)
You don't really get a tragedy of the commons either, because the range is so short. You could go to any given place and find open wireless, but the best way to get a good signal in your own home is to have your own AP. The exception would be high density housing where you're actually close enough to share, but then you do just that -- have all the neighbors chip in to get a really fast connection and share it. You can keep it open to the public as long as the other neighbors pay their share, which is cheaper for each of them than paying for a whole connection themselves as would happen if they defect and cause you to stop offering it. Or, more realistically, in those situations HOAs or landlords could install the AP and pass on the cost as fees/rent.
> Really it's the opposite. You put an AP next to a school and it gets the whole school's traffic off the cell tower and onto a local high bandwidth coax/fiber network. You save a ton of wireless spectrum by having low power wifi APs everywhere instead of needing high power cell towers.
Note that I was talking about a single AP - not a planned large rollout - and just the point that there are a few high-density applications where you actually have to worry about the number of simultaneous users.
Hotspot range is small, and competing networks slow things down. You do not get a tragedy of the commons by making the networks open.
And "open" and "unencrypted" being the same thing is a historical artifact. You can have each user encrypted with a separate key, and WPA3 in fact does this.
WPA3 adoption is still limited, and transition mode will be a necessity on any public access network for many years. “Historical artefact”, eh, like COBOL.
Many things that are in current use and without replacements are historical artifacts, too.
The prompt was not about using the exact software we already have, it was about a world where things were mildly different. In that world, with open hotspots given priority, I would expect the security problem to have been solved long ago.
It takes a fantasy wonderland for increased usage to mean a basic feature might get coded sooner? Or to have a few more good crypto people working on the standard?
This is a feature that already exists and will be in most devices relatively soon. Using it in a hypothetical is not magic. Especially because we could sidestep your whining just by setting the hypothetical in the year 2023, because the exact timing wasn't the point of it!
Welcome, fellow Americans, to the club of normal citizens who don't even know the words "data cap". I wish you could stay longer than 2 months in this club. Personally, I have been a member of this club for over two decades (as has my entire country, btw).
>AT&T was willing to tap into their Strategic Packet Reserve yesterday and now Comcast has to follow suit. Let those extremely rare and finite in number packets flow!
Chill and maybe take it slower so you understand things better. The previous post was simply quoting a (rather amusing) joke that appeared in the comments.
The joke is a reference to the Strategic Petroleum Reserve in the US:
"The Strategic Petroleum Reserve (SPR) is an emergency fuel storage of petroleum maintained underground in Louisiana and Texas by the United States Department of Energy (DOE). It is the largest emergency supply in the world, with the capacity to hold up to 727 million barrels (115,600,000 m3)."[1]
The comment is satirically suggesting the telecom industry has similar reserves for such emergencies. These carriers are notoriously stingy and known for their data caps.
While the US's Strategic Petroleum Reserve is the most well known, Canada's Strategic Maple Syrup Reserve is way more amusing and fascinating, especially after they had a massive theft.
It was a fluid situation but they caught the sticky fingered thieves and delivered sweet sweet justice.
A lot of people moaning about how the data caps are unnecessary don't seem to understand how bandwidth works. If you get fiber or high-speed cable to your home, yes, you may have a buttload of potential bandwidth. If everyone in a large metro area tried to use all 1 Gbps of their bandwidth at once, connections for many of them would crawl to about the speed of a 28.8 modem.
There simply isn't enough backbone capacity for everyone to use all of their potential bandwidth all the time. But beyond just not having enough raw capacity, the closer you get to capacity, the more knock-on effects from buffer overruns and collisions and retries and latency and all sorts of shit start to slow connections further. In order to keep speeds usable near capacity, you are forced to use traffic shaping to artificially squeeze capacity, to keep things from going dog-slow and falling into an unusable tailspin.
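To illustrate what "traffic shaping" means in practice, here's a minimal, generic token-bucket sketch in Python (not any ISP's actual implementation): a packet is only sent when enough tokens have accumulated, which caps the sustained rate while still allowing short bursts.

    import time

    class TokenBucket:
        """Cap a flow at rate_bps sustained, with burst_bytes of headroom."""
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0     # refill rate in bytes per second
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                # send now
            return False                   # over the shaped rate: queue or drop

    # e.g. shape a flow to ~10 Mbps with 128 KB of burst allowance
    shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=128 * 1024)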
The caps are there to keep people within practical, usable limits, to prevent knock-on effects on edges of the network more vulnerable to bandwidth problems. To reduce that possibility they'd have to invest more money in unprofitable sections of infrastructure. Charging you more for bandwidth is effectively a way for them to not invest, because if they did invest, they would subsequently charge you more money to cover it. The caps basically artificially lower your own bill by getting you to choose to use less data. It's the choice of "do we piss them off with higher prices or worse service?"
Do they want to charge you more? Of course. Do they know most people won't use more than 1TB of data per month? Of course; they have trending usage metrics, they do a calculation, and this is what works to balance what people want with what they need, and how the provider can afford to pay for maintaining it all.
If they weren't absolutely enormous multi-headed-hydra conglomerates, service would be cheaper, better quality, and faster. But they're enormous, and as such they are inefficient, and as such, very expensive for what you get. If you want better service, lobby your local government to make municipal internet legal, because local private providers will never be able to compete.
I don't think you understand what oversubscription means, how tiered network topologies' links increase in performance nearer to the IX, or that someone, usually the large ISP itself, often owns the network all the way to the IX. Cable internet providers in the US get away with extreme oversubscription in certain low-average-bandwidth areas and don't publish these details, as they would be required to in other countries.
>I don't think you understand what oversubscription means, how tiered network topologies' links increase in performance nearer to the IX, or that someone, usually the large ISP itself, often owns the network all the way to the IX.
Getting to the IX doesn't mean you're home free, though. Your transit providers are also oversubscribed and links further down the chain could get congested.
In this case, Comcast is a nearly-Tier-1 transit provider that spends almost $0 on transit outside of interconnect/switch-port costs. This argument doesn't really apply. In fact, companies pay them for transit to connect/peer because they can't afford not to.
>Comcast argued that it could be considered a Tier 1 itself, as less than one percent of its traffic requires transit.
>As Comcast's market power continued to increase and consumers had less choice, they actually started demanding payments for connectivity. A larger Comcast will be able to demand even greater payments.
As for T-Mobile/ATT, they are Tier 1 providers who again get transit for free. Spectrum is the only resource that is scarce in that scenario.
The same rules don't apply to the larger ISPs. This market is not fair.
A data cap has almost nothing to do with congestion.
Network congestion is a lot like road traffic, in that it requires the participation of others. A data cap is like trying to reduce traffic by saying that each driver can go only so many miles. If all of a person's data usage is at off-peak times, the data cap is completely useless.
You're right, it doesn't. However, that doesn't mean it doesn't achieve its intended goal. If users use less of their internet connection to avoid hitting the data cap, it stands to reason that congestion would decrease because overall network usage is down. Of course, it's not perfect, as it wrongly penalizes some edge cases: for instance, someone who downloads/uploads terabytes of data, but only during off-peak hours (overnight).
That said, I don't see how the competing solutions are better. You could do QoS, but that gets flak because of net neutrality. You could divide the available bandwidth evenly across all users, but that would penalize infrequent users. A gamer who doesn't use much bandwidth most days, but has to download huge patches every month would get the same speed as someone who watches 4K streams every night.
Throttle speeds after you've reached a very high threshold, maybe only during peak hours to avoid unnecessary underutilization. Why is that not one of your options?
I like the idea of throttling speeds during peak times only, and only after a certain amount of daily data use. That way neither low-data users nor off-peak users are penalised.
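A sketch of what that policy could look like (hypothetical thresholds and hours, just to make the idea concrete):

    # Hypothetical: full speed unless the user has already moved a lot of data
    # today AND it's a peak hour; light users and off-peak use are never throttled.
    DAILY_THRESHOLD_GB = 50
    PEAK_HOURS = range(17, 23)        # 5 PM to 11 PM local time

    def allowed_speed_mbps(plan_mbps, used_today_gb, hour):
        if used_today_gb > DAILY_THRESHOLD_GB and hour in PEAK_HOURS:
            return plan_mbps * 0.25   # throttled, but still usable
        return plan_mbps

    print(allowed_speed_mbps(200, used_today_gb=80, hour=20))  # 50.0 (throttled)
    print(allowed_speed_mbps(200, used_today_gb=80, hour=3))   # 200 (off-peak, full speed)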
This applies for the shared medium of legacy coax, but not the "backbone" in general. Essentially Comcast/Time Warner are camping on the aging asset of the cable TV network, extracting as much as they can until it's obsoleted by fiber (municipal, or competitive, depending on population density).
And yes, if you want to fix this, support municipal fiber. 1Gbps symmetric, no caps, no throttling, low ping. Likely for less than you're paying with the incumbents' ridiculous fees and price-jacking/contract games.
Comcast is fiber to the CMTS, then it's coax the rest of the way (the last mile). In my neighborhood they terminate about 4-6 customers per CMTS. BTW, Comcast offers me the lowest ping; both Verizon and AT&T backhaul to a city 250 mi away.