IoT, 4K, virtual reality signal a critical need for broadband speed (fierceonlinevideo.com)
74 points by PaulHoule on April 4, 2016 | 75 comments



I think there should be more focus on latency and upload improvements as well. With all the real-time streaming and communication going on, really low latency is crucial for the future. Also, we are sending a lot more data these days, so broadband at 100/6 Mbps or 200/8 Mbps is very unbalanced.

You don't really need more than 25 Mbps downstream to stream even 4K to a single client (compressed, of course). However, 100+ Mbps with <10 ms latency would allow, for example, high-quality real-time game streaming, making expensive hardware at home obsolete.


Unbalanced is a nice way to put it :)

Our home line is 110/3.5 - if a service can serve content faster than 110 Mbit/s, we end up limited by the rate at which we can upload TCP ACKs, not by the download cap on our connection.

Another good anecdote: Netflix has been cited as using 9.5% of the internet's upstream traffic on ACK alone. (source: http://arstechnica.com/information-technology/2014/11/netfli...)
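
Back-of-the-envelope sketch of the ACK cost (the packet sizes and delayed-ACK ratio below are assumptions, not measurements):

    def ack_upstream_bps(download_bps, mss=1448, segments_per_ack=2, ack_bytes=64):
        """Rough upstream bandwidth consumed by TCP ACKs for a given download rate.

        Assumes full-size segments, delayed ACKs (one ACK per two segments),
        and ~64 bytes on the wire per ACK (headers plus framing).
        """
        segments_per_sec = download_bps / 8 / mss
        acks_per_sec = segments_per_sec / segments_per_ack
        return acks_per_sec * ack_bytes * 8

    # Saturating the 110 Mbit/s downstream of a 110/3.5 line:
    print(f"{ack_upstream_bps(110e6) / 1e6:.1f} Mbit/s of ACKs upstream")
    # ~2.4 Mbit/s -- a large chunk of a 3.5 Mbit/s uplink before any real upload starts.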


What we should really be aiming for is 1 Gbps symmetric connections. Then the cloud is your LAN, and only then can you seriously consider working on files stored or backed up in the cloud.

Right now the best upload speed you can get on optic fibre with BT in the UK is 30 Mbit/s. Not a technical limitation, purely a commercial one.

I actually don't understand why ISPs do not offer better upload speeds. I thought the whole system of peering relied on having balanced upload and download flows, which creates problems for datacenters and ISPs, whose traffic is by nature one flow only (end users downloading data from datacenters). I would assume both ISPs and datacenters would want to encourage flows the other way, by means of people backing up their data in the cloud.


Most users pull far more data than they push, so it's a simple reaction to the market -- with technologies like DOCSIS you have a set number of channels and you assign them to either downstream or upstream bandwidth. More users benefit from a fatter downstream pipe. ADSL has the same sort of bandwidth allocation. Having a symmetrical connection generally means limiting the downstream connection.
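
A rough illustration of that trade-off; the per-channel figures are approximate DOCSIS 3.0 numbers and only assumptions for the sketch:

    # Approximate usable rate per bonded DOCSIS 3.0 channel (assumed values).
    DOWN_PER_CHANNEL_MBPS = 38   # 6 MHz downstream channel, 256-QAM
    UP_PER_CHANNEL_MBPS = 27     # 6.4 MHz upstream channel, 64-QAM

    def plant_capacity(down_channels, up_channels):
        """Aggregate (down, up) capacity in Mbps for a given channel split."""
        return (down_channels * DOWN_PER_CHANNEL_MBPS,
                up_channels * UP_PER_CHANNEL_MBPS)

    print(plant_capacity(8, 4))   # (304, 108) -- the usual download-heavy split
    print(plant_capacity(6, 6))   # (228, 162) -- more upload, but downstream shrinks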


Yes, but I don't expect this sort of performance from cable or DSL. We are talking about optic fibre, really.


My cable modem pulls 120Mbps down, pushes 10Mbps. 24/7. DOCSIS 3.1 promises up to 10Gbps down, 1Gbps up (again, because it biases the bandwidth to down). It's a signal on a medium.

And when we talk about fiber optics, 9 times out of 10 it isn't really fiber optics. The vast majority of BT "fiber optic" installs are copper twisted pair, with exactly the issue I mentioned. But their junction box, just like in cable internet installations, has fiber optics going to it, for what that's worth. For a real, pure fiber optic connection, which is rare, the service usually is symmetric because that compromise didn't need to be made.


Yes but by Optic Fibre I meant FTTP, not FTTC. FTTC is nothing more than a thick layer of lipstick on the DSL pig.


Indeed, I recently got a fully fiber optic connection to my flat and it is a symmetric 300/300 Mbps down/up. As I was looking at the plans, there wasn't even an option to go with a non-symmetric connection.


> I actually don't understand why ISP do not offer better upload speeds.

ISPs usually have business/enterprise offers with more upload bandwidth for $$$. They don't want to cannibalize this with more upload speed for residential users.


Yes. In Switzerland we have an awesome company that is rolling out 1 Gbps up and down fiber. The price is only about $67 per month (that's the same you pay for 100 Mbps down / 10 Mbps up with other companies).

They are slowly rolling out. Can't wait until they arrive in my city.


BT sells FTTH in new developments, with 300 Mbps down but only 20 Mbps up. On exactly the same fiber I get 1 Gbps symmetric from Hyperoptic. It's amazing they constrained themselves so much.


30 Mbit up, not 20. Yes, it very much looks like pure product differentiation.


>What we should really achieve is 1Gbps symmetric connections. Then the cloud is your LAN

Joke's on you, because my LAN is 100. Also I'm pretty sure all 4 ports on my el cheapo switch are shared.


If you've got Cat 5 installed, then ~$10 per port can fix that for you.


Cat 5e does 1 Gbps


I don't know if you're familiar with Riot Games' infrastructure investments, but I figure it might interest you.

They're essentially building their own low-latency network across the US. It's really impressive.

https://engineering.riotgames.com/news/fixing-internet-real-...


wow that is super interesting!


<10ms latency?

The speed-of-light limit for US coast to coast (Boston to LA) is ~20 ms.

Roundtrip, since you also need to ask for something, is double that.

And that's just the speed of light.

Forget <10ms latency as long as physics stand, even for Illinois to California.


Light in fiber travels about 30% slower than in a vacuum. http://physics.stackexchange.com/questions/80043/how-fast-do...
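
For a sense of the physical floor (the distance is a rough assumption; a refractive index of ~1.47 is typical for silica fiber):

    C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
    FIBER_INDEX = 1.47        # typical refractive index of silica fiber

    def min_rtt_ms(distance_km):
        """Lower bound on round-trip time over fiber, ignoring routing,
        serialization and queuing delays."""
        one_way_s = distance_km / (C_VACUUM_KM_S / FIBER_INDEX)
        return 2 * one_way_s * 1000

    print(f"Boston-LA (~4,200 km): {min_rtt_ms(4200):.0f} ms RTT minimum")
    # ~41 ms -- before a single router hop, and assuming the fiber runs straight.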


Requirements for 5G are getting equally ridiculous [1]. 5 ms latency and 100 devices per square meter? Whenever I hear something like that, I like to outline a meter-by-meter patch of empty floor space and ask people what they imagine 100 devices will be doing in that particular spot.

[1] https://5g-ppp.eu/wp-content/uploads/2016/02/BROCHURE_5PPP_B...


What are you really going to do with a 1 Gbps residential connection that doesn't sound as tentative and outlandish as my plan to push a shopping cart full of iPhones streaming their own episode of GoT down the street? (it's art, don't ask)

This isn't about having a feasible plan to exhaust the new standard using current technology and use cases, it's about pushing the new standard as far as possible (within technological limits), so it can last as long as possible without needing to be redesigned. Also, reality has a nasty habit of catching up to the possible much faster than anticipated.


It's funny how each standard is made to "last as long as possible without needing to be redesigned". LTE stands for "long term evolution" - as in "we made this standard so that it will be possible to evolve it according to demands without total redesign". It has been in commercial use for what, 4 years? As far as I know, current plans for 5G are "screw LTE, we'll redesign it from the ground up".


100 devices per square meter is normal in tall-ish buildings. Each apartment has 2-3 phones. Offices are even more packed.


Maybe I'm miscalculating, but I think you're off by an order of magnitude.

According to [1], a dense open space office in Manhattan has 100-120ft² per employee; let's take 108ft² which is ~= 10m². That means each m² has ~1/10 of an employee. Even if you multiply by 100 floors, you're only getting ~10 employees/m².

[1] http://www.nytimes.com/2015/02/23/nyregion/as-office-space-s...


I think your miscalculation might be devices per person, not people per square meter.

My desk area in Tokyo (so maybe not 120ft²; let's call it 60ft²) has a whole lot more devices than people. 4 computers + personal/work phones + 1 watch + 3 tablets + wireless camera + let's say 5 other random items with an IP address that might be paying a visit (think health/fitness devices, teledildonics, wifi-enabled geiger counters, etc.).

I may not be typical in April 2016, but I'm also not that much of an outlier. 10+ devices per person seems to be the direction we are heading, so... at 100 floors, yeah 100+ devices per square meter.
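
Spelling out the arithmetic being debated (all inputs are the hypothetical figures from this subthread, not measurements):

    def devices_per_m2_of_footprint(m2_per_person, devices_per_person, floors):
        """Devices per square meter of ground footprint, summed over all floors."""
        people_per_m2_per_floor = 1 / m2_per_person
        return people_per_m2_per_floor * devices_per_person * floors

    # ~10 m2 per office worker, 10 devices each, 100 floors:
    print(devices_per_m2_of_footprint(10, 10, 100))   # 100.0
    # The same building at today's ~3 devices per person:
    print(devices_per_m2_of_footprint(10, 3, 100))    # 30.0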


While I do think you're an outlier, remember that we're specifically talking about 5G-enabled devices. "Wifi-enabled geiger counters" don't really, er, count :)


>I may not be typical in April 2016, but I'm also not that much of an outlier. 10+ devices per person seems to be the direction we are heading, so... at 100 floors, yeah 100+ devices per square meter.

Not sure where you get this "100 devices per square meter" again.

Even if it was 10+ devices per person (which is at best 3 or 4), there are hardly more than 1 or 2 persons per square meter still.

And across floors there are one or several meters of empty space -- above people's heads and before the ceiling.


Well, the example I was replying to was for a 100-story building, with a total of ~10 employees/m².

I think this absurdly specific hypothetical building is getting in the way of my point, which is simply that we humans have an increasing number of networked devices on and around us, and ~10 devices is already common for some people.


>Well, the example I was replying to was for a 100-story building, with a total of ~10 employees/m².

Across 100+ meters of height? I don't think that's what people intuitively understand/mean when they talk about "devices/m²".

It's not like a device on the 15th floor messes with another on the 5th floor...


Yeah, I know, but that's the specific calculation I was replying to — icebraining's figure of people per square meter of land.

This subthread veered off from the original linked post about broadband speed in general to discuss how upcoming "5G" cellular service is expected[1] to support 100 devices per square meter - that means per square meter of coverage area, regardless of how many floors are stacked up on that square meter (the same way traffic-density targets are quoted in Mbps/m²).

[1]: http://spectrum.ieee.org/telecom/wireless/telecom-experts-pl...


Telcos hate low latency. High latency is in their DNA. Nobody has ever been able to sell low latency except to HFT people, and until they started asking, nobody knew what a circuitous route the cables took from Manhattan to New Jersey.

The first research project Bell Labs did was a test to see how much latency they could get telephone users to tolerate.


I would like to see a source for that, because this makes no sense.

Latency in analog circuits is inherent AND very difficult to change either way.

In digital circuits you might have things adding latency, like encoders and filters, plus some effects of the network and the packaging of voice data.

Unless you're blaming AT&T for not being able to go faster than the speed of light.


It makes sense just fine. The research about latency tolerance informed the design constraints for packet-switched digital phone networks, which had been proposed since the 1950s and began to replace POTS in practice in the 1980s (e.g. System X in London in 1980).

Latency requirements constrain routing and buffering specs. Packet switched voice could not work until the system could meet human tolerance of latency.


While that is true, at a certain point more latency in a working system costs money, especially in the earlier switched systems.

But yeah, they needed to do research on it; I would just doubt "The first research project Bell Labs did", since Bell Labs predates digital phone networks.


Sure, but no-one is ever talking about deliberately over-long latency. There's a sweet spot where packet-switched is fast enough, while cheap because of shared bandwidth and routing flexibility. It costs more to go faster, but it's game over to go slower than humans will accept.

And it's unlikely that the poster was referring to direct (non-packet) connections since these have been rare for a long time.


> The first research project Bell Labs did was a test to see how much latency they could get telephone users to tolerate.

Do you have a source for that? I genuinely enjoy reading about things like this.


I agree with __john ; reading about this stuff is cool. Source please?


We could always try to put the servers into the center of the USA. Apart from gaming, household to household latency is not particularly important.


Why would you ever have them in one place?


Not unheard of in Europe. I had a connection with <10 ms pings to Quake3 servers in Frankfurt, Germany (~300 km away) 7 years ago.


Well, obviously, for 300km yes, since light does that in 2ms roundtrip.

But that's for stuff mirrored locally like those Quake3 servers. When you get stuff from US datacenters etc you'll suffer even more than 20ms.


I understand that; I'm just saying that not every country has these insane distances like the US, and even most US content is mirrored on CDNs that are usually pretty close.


I think the mirrored thing is mostly for assets -- js, images etc.

For most smaller websites it's one server somewhere (for the majority of them actually co-hosted with other sites in a single location/box).

And for most large websites outside of the very big (e.g. FB and Google), it's at best a few locations around the world, e.g. US east/west/mid, one in Europe, one in Asia, etc., and often even fewer. So the distances involved do get large there too.


Could quantum physics hold the solution? I keep a particle here that I move around, and you have its entangled partner on the East Coast and watch it wiggle. Somehow turning that into bit flips.

Is it feasible?


Referred to as superluminal communication: https://en.m.wikipedia.org/wiki/Superluminal_communication


Faster than light communication is impossible.


Isn't that exactly what Quantum particle pairing solves for?


It does not. At a practical level, you can't "wiggle" one entangled particle to be in any particular classical state without destroying the entanglement.

At a theoretical level, the no-communication theorem in quantum mechanics forbids it absolutely:

https://en.wikipedia.org/wiki/No-communication_theorem


99 percent of use cases of IoT are data aggregation. Latency is not crucial here; you can still generate precise timestamps on the target side and have a very precise global view even with ridiculously large latency.
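
A minimal sketch of that pattern (the field names are made up; the point is only that ordering comes from the capture timestamp, not from arrival time):

    import time

    def capture_reading(sensor_id, value):
        # Timestamp at the moment of measurement, on the device itself.
        return {"sensor": sensor_id, "value": value, "captured_at": time.time()}

    def aggregate(readings):
        # The backend orders by capture time, so upload latency (seconds,
        # minutes, even hours) doesn't blur the global picture.
        return sorted(readings, key=lambda r: r["captured_at"])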


On the latency front, CoDel exists for consumer/edge routers.

Backbones solve their latency concerns by always having excess bandwidth available. AFAIK, there have been no network queuing advancements that allow for link saturation without multiplicative latency increases.

My 12Mbit/768Kbit ADSL2+ connection has ~15ms latency when idle and ~35ms average (~55ms peak) during upload saturation because of CoDel & traffic-shaping. Without CoDel = ~650ms.
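
The ~650 ms figure is about what an unmanaged FIFO buffer in the modem would produce; the buffer size below is an assumption chosen to match, not a measured value:

    def queue_delay_ms(buffer_bytes, uplink_bps):
        """Queuing delay added once a FIFO buffer fills and drains at the uplink rate."""
        return buffer_bytes * 8 / uplink_bps * 1000

    # A ~60 KB buffer drained at 768 kbit/s:
    print(f"{queue_delay_ms(60_000, 768_000):.0f} ms")   # 625 ms
    # CoDel keeps the standing queue to a few packets instead, hence ~35-55 ms.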


I agree with down being insufficient.

I have 15/1. Everything is latency-city compared to everyone else out there. I would literally trade 15/1 for 10/5 if I could, even though that means I would lose SuperHD on Netflix, which is the largest consumer of bandwidth in my house (and virtually Bluray quality when SuperHD is enabled).


Your upstream bandwidth isn't your latency. It's how fat your upload pipe is. Swapping 15/1 for 10/5 isn't going to do anything for your latency UNLESS you're already maxing out your upstream.


That's the problem, I'm on ADSL2. The more upstream bandwidth you use, the worse your latency gets.

Let's say I want to stream a multiplayer game. This is now an impossible situation because I barely have enough bandwidth to do a 360p stream as it is. Even if I very carefully keep my upstream bandwidth under 80% utilization, my latency is through the roof.

Or let's say an automated off-site sync happens for backups. There goes my latency again. Or I take a photo and upload it. Or I upload music I purchased to Google Music. Or I video chat with someone.

So, yes, when you have that little upstream, anything relevant maxes it out and your latency turns to shit.


You, my friend, need to apply some QoS on that ADSL link and have it prioritize the ACK packets going upstream, which is what is killing your latency.


I already use QoS. It does not fix it unless I hard-limit upstream to half of my already useless 1 Mbit.

That is not a fix.


What's "critical" about IoT, 4K and VR though?

At best they could enable some important new functionality -- maybe.

But that's hardly "critical". Desirable, nice to have, etc, yes.


Simple: it's critical to an industry that saw existing hardware sales stalling and needs a boost - ergo 4K and VR. Now they just have to convince the consumer that his life is miserable without them.


Trying to recreate the success story of the Virtual Boy and the Power Glove.


While the underlying devices themselves aren't so critical, considering Nest thermostats shut off your freaking furnace when the internet is unavailable, once you do have them it is pretty critical.

Frankly this is just one more reason I don't want IoT garbage.


Just because Nest did a terrible job being ready for the real world doesn't mean that the idea behind the product isn't sound.


If they were the exception I'd agree with that, but nearly every post I see on HN about IoT devices is about either a way they're easily hacked, how they phone home to God-knows-where servers with no documentation as to why and no way to stop it, or how the default configuration broadcasts information to the internet such that they can easily be found if you know what to look for.

Right now it's blatantly obvious that to the vast majority (if not the entirety) of IoT companies, subscriber counts and ease of use are prioritized way, way higher than anything you could even call security.


> phoning home to God-knows-where servers in East Asia/Russia

> East Asia/Russia

I'd leave this part out, since phoning home to the "West" isn't any better.


That's fair, edited.


The title says that broadband speeds are becoming a critical need. It doesn't really matter what it is that the broadband is used for, the point is that the demand needs to be met. I'm sure you could do without a VR headset, but then again I don't know if you would have had a similar opinion at the beginning of the rise of streaming video. One of the most amazing aspects of technology are unanticipated uses and developments.


And one of the worst is that it always changes and goes over the top of what people actually want, just to sell more stuff -- and it turns into a rat race.


4K I can understand, virtual reality as well, but IoT? How much data do you expect to send to saturate a 3mbit/s upload line?


I'd assume IoT concerns are about the number of devices rather than bandwidth, which probably translates to "switch everyone over to IPv6 already".


Both number of devices and bandwidth could be conserved if we had a standard protocol for these things to talk to a local hub that would (a) proxy interactions with the outside, (b) enforce privacy rules and (c) handle management overhead.

As it is now, a house can easily find itself with three brands of "smart" light bulbs, a thermostat, a power meter, six video systems and a camera: all of them demanding their own IP address and exposing varying levels of private information.

I expect there are six standards for such hubs already. Insert XKCD here...
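
A minimal sketch of what such a hub could look like (hypothetical interface, not any existing standard):

    class LocalHub:
        """Collects readings from LAN devices and talks to the outside world on
        their behalf, so each gadget needs neither its own public address nor
        its own cloud account."""

        def __init__(self, privacy_filter):
            # privacy_filter drops fields the owner hasn't opted to share.
            self.privacy_filter = privacy_filter
            self.pending = []

        def report(self, device_id, reading):
            # Devices only ever talk to the hub on the local network.
            self.pending.append({"device": device_id, **self.privacy_filter(reading)})

        def flush_upstream(self, send):
            # One aggregated, policy-checked upload instead of N chatty devices.
            if self.pending:
                send(self.pending)
                self.pending = []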


I am not sure that it is a good idea to put such devices in a special class of their own. It risks giving devices permissions and trust that they don't actually deserve. Better to treat them as internet servers and use existing protocols (OAuth, CORS, TLS, WebSockets, etc.). As far as I know all those protocols work perfectly well on a local network behind a NAT. More importantly, browsers already have well understood restrictions built in to prevent XSS.


There is no incentive for any kind of product that acts as a filter or proxy. There is little demand, outside of a few Kickstarters.

You and I might move towards egress filtering but I suspect most people won't.


It's not difficult to see that AR/MR/VR will alter our current conception of IoT; imagine the functionality that can be layered onto objects and devices with MR alone. The nature of IoT as embedded robotic systems will become much more apparent, and spatial rendering, along with other potentially bandwidth-intensive sensing capabilities, will push the need for higher upload bandwidth.


As an aside, is there any practical delineation between "augmented reality" and "mixed reality" that requires the separate terms, outside of a marketing drive to differentiate products?


For me it's simply that I think MR is a better description, but I recognize that AR hasn't been replaced as the preferred term yet, so I include them both. Augmenting our experience of reality is certainly part of the use case, but Mixed seems to capture the overall paradigm change more accurately.


CCTV, nanny cams, any video stream.


Critical need for whom? Businesses who need to sell these products?



