I think there should be more focus on latency and upload improvements as well. With all the realtime streaming/communication going on, really low latency is crucial for the future. Also we are sending a lot more data these days, so broadband with 100/6 Mbps or 200/8 Mbps is very unbalanced.
You don't really need more than 25 Mbps downstream to stream even 4K to a single client (compressed, of course). However, 100+ Mbps with <10 ms latency allows for things like real-time game streaming in high quality, making expensive hardware at home obsolete.
Our home line is 110/3.5 - if a service could serve content faster than 110 Mbit/s, we would be limited by the rate at which we can upload TCP ACKs, not by the nominal download limit on our connection.
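A rough back-of-envelope sketch of that constraint (Python; assumes ~1500-byte segments, delayed ACKs of one per two segments, and ~64 bytes per ACK on the wire - real framing overhead varies):

    # Upstream bandwidth consumed by TCP ACKs while downloading at full speed
    # on an asymmetric line. All constants are assumptions, not measurements.
    MSS_BYTES = 1500        # downstream segment size
    ACK_BYTES = 64          # per-ACK size on the wire (headers + framing)
    SEGMENTS_PER_ACK = 2    # delayed ACKs: one ACK per two segments

    def ack_upstream_mbps(downstream_mbps):
        segments_per_sec = downstream_mbps * 1e6 / (MSS_BYTES * 8)
        return segments_per_sec / SEGMENTS_PER_ACK * ACK_BYTES * 8 / 1e6

    print(ack_upstream_mbps(110))   # ~2.3 Mbit/s of ACKs for 110 Mbit/s down

So ACKs alone eat most of a 3.5 Mbit/s upstream before any actual upload traffic starts.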
What we should really aim for is 1 Gbps symmetric connections. Then the cloud is your LAN, and only then can you seriously consider working on files stored or backed up in the cloud.
Right now the best upload speed you can get on fiber from BT in the UK is 30 Mbit/s. Not a technical limitation, purely a commercial one.
I actually don't understand why ISPs do not offer better upload speeds. I thought the whole system of peering relied on having balanced upload and download flows, which creates problems for datacenters and ISPs, whose traffic is by nature one-way (end users downloading data from datacenters). I would assume both ISPs and datacenters would want to encourage flows in the other direction, for example by having people back up their data to the cloud.
Most users pull far more data than they push, so it's a simple reaction to the market -- with technologies like DOCSIS you have a set number of channels and you assign them to either downstream or upstream bandwidth. More users benefit from a fatter downstream pipe. ADSL has the same sort of bandwidth allocation. Having a symmetrical connection generally means limiting the downstream connection.
My cable modem pulls 120Mbps down, pushes 10Mbps. 24/7. DOCSIS 3.1 promises up to 10Gbps down, 1Gbps up (again, because it biases the bandwidth to down). It's a signal on a medium.
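To put rough numbers on that trade-off (a Python sketch following the simplification above of a shared channel pool; per-channel rates are ballpark DOCSIS 3.0 figures and vary with the actual plant):

    # Illustrative only: how assigning channels to downstream vs upstream
    # trades off capacity. Per-channel rates are ballpark assumptions
    # (256-QAM downstream, 64-QAM ATDMA upstream).
    DOWN_PER_CHANNEL_MBPS = 43   # ~6 MHz downstream channel
    UP_PER_CHANNEL_MBPS = 27     # ~6.4 MHz upstream channel

    for down_ch, up_ch in [(8, 4), (6, 6)]:
        down = down_ch * DOWN_PER_CHANNEL_MBPS
        up = up_ch * UP_PER_CHANNEL_MBPS
        print(f"{down_ch} down / {up_ch} up channels -> {down}/{up} Mbit/s")
    # 8/4 split: 344/108 Mbit/s; 6/6 split: 258/162 Mbit/s -- symmetry costs downstream.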
And when we talk about fiber optics, 9 times out of 10 it isn't really fiber optics. The vast majority of BT "fiber optic" installs are copper twisted pair to the home, with exactly the issue I mentioned. But the junction box, just like in cable internet deployments, has fiber optics going to it, for what that's worth. A real, pure fiber optic connection, which is rare, is usually symmetric because that compromise didn't need to be made.
Indeed, I recently got a fully fiber optic connection up to my flat, and it is symmetric 300/300 Mbps down/up. As I was looking at the plans, there wasn't even an option for a non-symmetric connection.
> I actually don't understand why ISP do not offer better upload speeds.
ISPs usually have business/enterprise offers with more upload bandwidth for $$$. They don't want to cannibalize this with more upload speed for residential users.
Yes. In Switzerland we have an awesome company that is rolling out symmetric 1 Gbps fiber. The price is only about $67 per month (that's the same you pay for 100 Mbps down / 10 Mbps up with other companies).
They are rolling out slowly. Can't wait until they arrive in my city.
BT sells FTTH in new developments with 300 Mbps down but only 20 Mbps up. On exactly the same fiber I get 1 Gbps symmetric from Hyperoptic. It's amazing how much they constrain themselves.
Requirements for 5G are getting equally ridiculous [1]. 5 ms latency and 100 devices per square meter? Whenever I hear something like that, I like to outline a meter-by-meter square of empty floor space and ask people what they imagine 100 devices would be doing in that particular spot.
What are you really going to do with a 1 Gbps residential connection that doesn't sound as tentative and outlandish as my plan to push a shopping cart full of iPhones, each streaming its own episode of GoT, down the street? (It's art, don't ask.)
This isn't about having a feasible plan to exhaust the new standard using current technology and use cases, it's about pushing the new standard as far as possible (within technological limits), so it can last as long as possible without needing to be redesigned. Also, reality has a nasty habit of catching up to the possible much faster than anticipated.
It's funny how each standard is made to "last as long as possible without needing to be redesigned". LTE stands for "long term evolution" - as in "we made this standard so that it will be possible to evolve it according to demands without total redesign". It has been in commercial use for what, 4 years? As far as I know, current plans for 5G are "screw LTE, we'll redesign it from the ground up".
Maybe I'm miscalculating, but I think you're off by an order of magnitude.
According to [1], a dense open space office in Manhattan has 100-120ft² per employee; let's take 108ft² which is ~= 10m². That means each m² has ~1/10 of an employee. Even if you multiply by 100 floors, you're only getting ~10 employees/m².
I think your miscalculation might be devices per person, not people per square meter.
My desk area in Tokyo (so maybe not 120ft²; let's call it 60ft²) has a whole lot more devices than people. 4 computers + personal/work phones + 1 watch + 3 tablets + wireless camera + let's say 5 other random items with an IP address that might be paying a visit (think health/fitness devices, teledildonics, wifi-enabled geiger counters, etc.).
I may not be typical in April 2016, but I'm also not that much of an outlier. 10+ devices per person seems to be the direction we are heading, so... at 100 floors, yeah 100+ devices per square meter.
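Writing out the arithmetic from this subthread (Python, using the [1] figure above and the 10-devices-per-person assumption):

    # Reproducing the subthread's arithmetic; every input is an assumption
    # taken from the comments above.
    sqm_per_employee = 108 * 0.0929          # 108 ft^2 ~= 10 m^2 per employee
    employees_per_m2_per_floor = 1 / sqm_per_employee   # ~0.1
    floors = 100
    devices_per_person = 10

    employees_per_m2 = employees_per_m2_per_floor * floors   # ~10
    devices_per_m2 = employees_per_m2 * devices_per_person   # ~100
    print(employees_per_m2, devices_per_m2)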
While I do think you're an outlier, remember that we're specifically talking about 5G-enabled devices. "Wifi-enabled geiger counters" don't really, er, count :)
>I may not be typical in April 2016, but I'm also not that much of an outlier. 10+ devices per person seems to be the direction we are heading, so... at 100 floors, yeah 100+ devices per square meter.
Not sure where you get this "100 devices per square meter" again.
Even if it were 10+ devices per person (which is more realistically 3 or 4), there are hardly ever more than 1 or 2 people per square meter anyway.
And across floors there are one or several meters of empty space -- above people's heads and before the ceiling.
Well, the example I was replying to was for a 100-story building, with a total of ~10 employees/m².
I think this absurdly specific hypothetical building is getting in the way of my point, which is simply that we humans have an increasing number of networked devices on and around us, and ~10 devices is already common for some people.
Yeah, I know, but that's the specific calculation I was replying to — icebraining's figure of people per square meter of land.
This subthread veered off from the original linked post about broadband speed in general to discuss how upcoming "5G" cellular service is expected[1] to support 100 devices per square meter - that means per square meter of coverage area, regardless of how many floors are stacked on that square meter (the same way area capacity targets are expressed in Mbps/m²).
Telcos hate low latency. High latency is in their DNA. Nobody has ever been able to sell low latency except to HFT people, and until they started asking, nobody knew what a circuitous route the cables took from Manhattan to New Jersey.
The first research project Bell Labs did was a test to see how much latency they could get telephone users to tolerate.
It makes sense just fine. The research about latency tolerance informed the design constraints for packet-switched digital phone networks, which had been proposed since the 1950s and began to replace POTS in practice in the 1980s (e.g. System X in London in 1980).
Latency requirements constrain routing and buffering specs. Packet switched voice could not work until the system could meet human tolerance of latency.
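A rough one-way delay budget for packet voice makes that constraint concrete (Python; the component values are illustrative assumptions, and the often-cited ITU-T G.114 guideline is to keep one-way delay under roughly 150 ms):

    # Rough one-way delay budget for packet-switched voice.
    # All component values are assumptions for illustration.
    budget_ms = {
        "codec framing (20 ms packets)": 20,
        "serialization + switching": 5,
        "propagation (1000 km of fiber)": 1000 / 200,  # ~200 km per ms in fiber
        "jitter buffer": 40,
    }
    print(sum(budget_ms.values()))   # ~70 ms used; every queue on the path
                                     # eats into the remaining tolerance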
While that is true, at a certain point more latency in a working system costs money, especially in the earlier switched systems.
But yeah, they needed to do research on it; I just doubt it was "the first research project Bell Labs did", since Bell Labs predates digital phone networks.
Sure, but no-one is ever talking about deliberately over-long latency. There's a sweet spot where packet-switched is fast enough, while cheap because of shared bandwidth and routing flexibility. It costs more to go faster, but it's game over to go slower than humans will accept.
And it's unlikely that the poster was referring to direct (non-packet) connections since these have been rare for a long time.
I understand that; I'm just saying that not every country has insane distances like the US, and even most US content is mirrored on CDNs that are usually pretty close.
I think the mirrored thing is mostly for assets -- js, images etc.
For most smaller websites it's one server somewhere (for the majority of them actually co-hosted with other sites in a single location/box).
And for most large websites outside of the very big ones (e.g. FB and Google), it's at best a few locations around the world, e.g. US east/west/mid, one in Europe, one in Asia, etc., and often even fewer. So the distances involved do get large there too.
Could quantum physics hold the solution? I keep a particle here that I move around, and you have its entangled partner on the East Coast and watch it wiggle. Somehow turning that into bit flips.
It does not. At a practical level, you can't "wiggle" one entangled particle to be in any particular classical state without destroying the entanglement.
At a theoretical level, the no-communication theorem in quantum mechanics forbids it absolutely.
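The gist of that theorem, sketched in LaTeX (the standard statement, not a full proof): Bob's reduced state is unchanged by any trace-preserving local operation Alice performs, so his measurement statistics carry no signal.

    % No-communication theorem, in one line: for any local operation E_A,
    \[
      \rho_B' = \operatorname{Tr}_A\!\bigl[(\mathcal{E}_A \otimes \mathrm{id}_B)(\rho_{AB})\bigr]
              = \operatorname{Tr}_A\!\bigl[\rho_{AB}\bigr]
              = \rho_B ,
    \]
    % so "wiggling" Alice's particle changes nothing Bob can observe.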
99 percent of IoT use cases are data aggregation. Latency is not crucial here; you can still generate precise timestamps on the device side and have a very precise global view even with ridiculously large latency.
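A minimal sketch of that pattern (Python; all names are hypothetical): the device timestamps each sample locally and uploads whenever connectivity allows, so the server's reconstructed timeline is precise no matter how late the data arrives.

    import time

    # Hypothetical sketch: timestamp at the source, upload later.
    buffer = []

    def record_sample(value):
        buffer.append({"ts": time.time(), "value": value})  # timestamped locally

    def flush(upload):
        # Called whenever a link is available -- seconds or hours later;
        # the server orders everything by "ts", so latency doesn't matter.
        upload(list(buffer))
        buffer.clear()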
On the latency front, CoDel exists for consumer/edge routers.
Backbones solve their latency concerns by always having excess bandwidth available. AFAIK, there have been no network queuing advancements that allow for link saturation without multiplicative latency increases.
My 12Mbit/768Kbit ADSL2+ connection has ~15ms latency when idle and ~35ms average (~55ms peak) during upload saturation because of CoDel & traffic-shaping. Without CoDel = ~650ms.
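That ~650 ms is about what you'd expect from an unmanaged buffer on a slow uplink; a back-of-envelope in Python, assuming a ~64 KB modem buffer:

    # Worst-case queuing delay when a dumb buffer fills on a slow uplink.
    BUFFER_BYTES = 64 * 1024     # assumed modem/driver buffer depth
    UPLINK_BPS = 768_000         # 768 kbit/s ADSL2+ upstream

    print(BUFFER_BYTES * 8 / UPLINK_BPS * 1000)   # ~680 ms of queue
    # CoDel keeps the standing queue near its 5 ms target instead of
    # letting it grow to the full buffer depth.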
I have 15/1. Everything is latency-city compared to everyone else out there. I would literally trade 15/1 for 10/5 if I could, even though that means I would lose SuperHD on Netflix, which is the largest consumer of bandwidth in my house (and virtually Bluray quality when SuperHD is enabled).
Your upstream bandwidth isn't your latency. It's how fat your upload pipe is. Swapping 15/1 for 10/5 isn't going to do anything for your latency UNLESS you're already maxing out your upstream.
That's the problem, I'm on ADSL2. The more upstream bandwidth you use, the worse your latency gets.
Let's say I want to stream a multiplayer game. This is now an impossible situation because I barely have enough bandwidth to do a 360p stream as it is. Even if I very carefully keep my upstream bandwidth under 80% utilization, my latency is through the roof.
Or let's say an automated off-site sync happens for backups. There goes my latency again. Or I take a photo and upload it. Or I upload music I purchased to Google Music. Or I video chat with someone.
So, yes, when you have that little upstream, anything relevant maxes it out and your latency turns to shit.
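To see why, here's a quick Python sketch with assumed numbers: on a ~1 Mbit/s upstream, every full-size packet already queued ahead of an interactive packet adds about 12 ms before it even reaches the wire.

    # Serialization/queuing delay on a thin upstream; numbers are assumptions.
    UPLINK_BPS = 1_000_000
    PACKET_BYTES = 1500

    per_packet_ms = PACKET_BYTES * 8 / UPLINK_BPS * 1000   # ~12 ms each
    for queued in (1, 5, 20):
        print(f"{queued} packets ahead -> +{queued * per_packet_ms:.0f} ms")
    # A short burst from a backup or photo upload queues dozens of packets,
    # so game/voice traffic behind it sees hundreds of extra milliseconds.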
Simple: it's critical to an industry that saw existing hardware sales stalling and needs a boost - ergo 4K and VR. Now they just have to convince consumers their lives are miserable without them.
While the underlying devices themselves aren't so critical, considering that Nest thermostats shut off your freaking furnace when the internet is unavailable, once you do have them connectivity is pretty critical.
Frankly this is just one more reason I don't want IoT garbage.
If they were the exception I'd agree with that, but nearly every post I see on HN about IoT devices is either about a way they're easily hacked, about them phoning home to God-knows-where servers with no documentation as to why and no way to stop it, or about how the default configuration broadcasts information to the internet so that they can easily be found if you know what to look for.
Right now it's blatantly obvious that to the vast majority (if not the entirety) of IoT companies, subscriber counts and ease of use are prioritized way, way higher than anything you could even call security.
The title says that broadband speeds are becoming a critical need. It doesn't really matter what it is that the broadband is used for, the point is that the demand needs to be met. I'm sure you could do without a VR headset, but then again I don't know if you would have had a similar opinion at the beginning of the rise of streaming video. One of the most amazing aspects of technology are unanticipated uses and developments.
And one of the worst is that it always changes and overshoots what people actually want, just to sell more stuff -- and turns into a rat race.
Both number of devices and bandwidth could be conserved if we had a standard protocol for these things to talk to a local hub that would (a) proxy interactions with the outside, (b) enforce privacy rules and (c) handle management overhead.
As it is now, a house can easily find itself with three brands of "smart" light bulbs, a thermostat, a power meter, six video systems and a camera: all of them demanding their own IP address and exposing varying levels of private information.
I expect there are six standards for such hubs already. Insert XKCD here...
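For concreteness, a hypothetical sketch of the kind of hub I mean (Python; nothing here corresponds to an existing standard, and all names are made up): devices report only to the hub, the hub strips fields the privacy rules don't allow, and a single outbound channel proxies the rest.

    # Hypothetical hub: (a) proxies outbound traffic, (b) enforces privacy
    # rules, (c) is the single management point. All names are made up.
    ALLOWED_FIELDS = {"thermostat": {"temperature"}, "bulb": {"state"}}

    class Hub:
        def __init__(self, cloud_upload):
            self.cloud_upload = cloud_upload   # the only path to the outside

        def report(self, device_type, payload):
            allowed = ALLOWED_FIELDS.get(device_type, set())
            filtered = {k: v for k, v in payload.items() if k in allowed}
            if filtered:
                self.cloud_upload(device_type, filtered)

    hub = Hub(cloud_upload=lambda d, p: print("->", d, p))
    hub.report("thermostat", {"temperature": 21.5, "occupancy": True})  # occupancy stripped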
I am not sure it is a good idea to put such devices in a special class of their own. It risks giving devices permissions and trust that they don't actually deserve. Better to treat them as internet servers and use existing protocols (OAuth, CORS, TLS, WebSockets, etc.). As far as I know, all those protocols work perfectly well on a local network behind a NAT. More importantly, browsers already have well-understood restrictions built in to prevent XSS.
It's not difficult to see that AR/MR/VR will alter our current conception of IoT; imagine the functionality that can be layered onto objects/devices with MR alone. The nature of IoT as embedded robotic systems will become much more apparent, and spatial rendering, along with other potentially bandwidth-intensive sensing capabilities, will push the need for higher upload bandwidth.
As an aside, is there any practical delineation between "augmented reality" and "mixed reality" that requires the separate terms, outside of a marketing drive to differentiate products?
For me it's simply that I think MR is a better description, but I recognize that AR hasn't been replaced as the preferred term yet, so I include them both. Augmenting our experience of reality is certainly part of the use case, but Mixed seems to capture the overall paradigm change more accurately.