WebRTC: Not Quite Magic (alexfreska.com)
120 points by alexfreska on March 30, 2014 | hide | past | favorite | 56 comments



As far as I understand it, this is not a problem with WebRTC. It's a problem with the Internet itself. We ran out of IPv4 addresses years ago and now the Internet is heavily hacked together with various kinds of NAT which all work differently and sometimes break things. It seems a lot of engineers/IT admins out there have designed their networking setups assuming people only connect to central servers which don't use NAT, and either ignored peer-to-peer connections or went ahead and broke them anyway. This makes innovation with peer-to-peer tech more difficult and expensive, because now to get things working for everyone you need to configure additional relay servers and pay for the bandwidth of everyone who needs to use them. That could rule out some services being made free and could mean the extra cost being passed on to customers when it could have just run through their ISPs.

I think the solution is IPv6. Once every device on the Internet is uniquely addressable again, we can do away with these NAT hacks and two endpoints should be able to reliably connect to each other again, no matter where they are. Of course, that's assuming we don't get more short-sighted engineering that breaks things again...


Work is in progress to make webrtc implementations work over ipv6 but it's not ready yet. Chrome 34 will implement it behind a feature flag (googIPv6:true) [1], and hopefully Firefox will follow suit.
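For the curious, a minimal sketch of what flipping that flag might look like in script, assuming the legacy goog-prefixed constraint name from the linked issue (the constraints shape and the prefixed constructor are era-specific Chrome details, not a stable API):

```javascript
// Sketch (assumed API): asking Chrome to gather IPv6 candidates via
// the legacy goog-prefixed optional constraint.
var constraints = { optional: [{ googIPv6: true }] };

// In a page you would then pass it to the peer connection, e.g.:
// var pc = new webkitRTCPeerConnection({ iceServers: [] }, constraints);
```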

When they do, most pain points caused by NATs will go away, and that's not webrtc specific. While you'll always encounter some (intentionally?) broken network which only allows 80/tcp and 443/tcp from time to time, there's not much you can do about it, and webrtc can't do much about it.

[1] http://code.google.com/p/webrtc/issues/detail?id=1406


> When they do, most pain points caused by NATs will go away, and that's not webrtc specific.

This is a naive statement since it assumes IPv6 support amongst the clients. At least here in the US, such support is fairly minuscule.


In the US, today, one in 15 clients accessing google.com / yahoo.com / facebook.com is doing it from an IPv6 address.

And it's more than double compared to a year ago [1].

While indeed this still qualifies as "relatively small", I think it grew out of the "minuscule" :-)

[1] http://6lab.cisco.com/stats/cible.php?country=US


I think mobile operators are going to drive the biggest near-term uptick in IPv6 adoption.

My local ISP (small, independent) has no current IPv6 plans, which is actually a little bit annoying.


Thanks for the heads up, this does look like it will help a lot! (Assuming IPv6 reaches most people eventually)


NAT has been around for twenty years now, so there's no excuse for having anything less than robust, any-direction-any-path in every new protocol. I'm honestly more than a bit disappointed that WebRTC has pushed this down into the application leaving every application to work it out for itself.

The smart thing to do would have been to make the signalling layer use IPv6 and insist on configured 6in4 gateways (or similar).


Maybe NAT has been around a long time but there is still a lot of broken NAT, e.g. symmetric NAT on corporate networks (where one user might appear to come from two different IPs from the outside). This is not WebRTC's fault, it makes any peer-to-peer connections over the Internet hard, and given IPv6 adoption stands at around 3% right now insisting on IPv6 support would probably have been sufficient to prevent widespread adoption.


TURN/ICE is supposed to work just fine on symmetric NATs. Chrome even supports single-socket TCP relays, which can't be broken easily by any sane firewall. I personally have a few boxes testing it with only TCP port 80 allowed and it still finds a way to connect. A few tricks on the TURN side have to be employed but it's pretty robust. Whatever issue the OP has, it has to be a bug or a configuration issue. There are known problems and I still see a lot of freezes, protocol or hardware compatibility issues, CPU overload and so on, but straight Chrome-to-Chrome connections seem robust.


Not all people have Chrome or the latest version of Chrome, and I think a fairly recent version is required to relay over TCP.


I'm not convinced.

WebRTC is supported by huge corporations with massive resources. If they had decided to use IPv6 instead of SDP for addressing, I can't imagine there would be any difficulty in user adoption, and they could only improve service adoption.


Don't most users of the internet not have IPv6 addresses? I know my FIOS and business Comcast lack IPv6 support... I can only imagine that means everyone in a very large radius around my hometown is in the same boat of zero IPv6... That said, I agree it would be nice if the browser vendors handled networks that block all traffic that is not over TCP port 80 or TCP port 443, but I'm certain that it's because WebRTC is non-trivial and its usefulness goes beyond peer-to-peer...


WebRTC isn't just non-trivial: It's hard, and the hardest part is the fact that every service provider has to implement a complex network configuration that is difficult to test and exercise.

Even tunnelling IPv6 would be preferable, but my point was about making browser vendors do "the hard work" instead of making service providers.


SDP is a session description protocol, not an addressing protocol. WebRTC will be able to use IPv6 addressing in future, too, and supporting IPv4 where possible means it's practical to use now as well.


> SDP is a session description protocol, not an addressing protocol

SDP is most certainly an addressing protocol. It's a very complicated one that uses many binary tokens including IPv4 addresses (or IPv6 addresses) and port numbers, cookies, and keys.

That some of the binary tokens used in the SDP addresses are discovered using ICE/STUN (sometimes) complicates service providers greatly.
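To make the "addressing" point concrete, here is what one of those tokens looks like in practice: a server-reflexive ICE candidate line inside an SDP blob, with a toy parser. The addresses are documentation placeholders, not real endpoints, and the field layout follows the standard candidate-attribute grammar:

```javascript
// A server-reflexive candidate line as it might appear in SDP
// (203.0.113.x / 192.168.x.x are placeholder addresses).
var line = 'a=candidate:842163049 1 udp 1677729535 203.0.113.5 40821 ' +
           'typ srflx raddr 192.168.1.2 rport 40821';

// Pull out the fields that actually address the peer.
function parseCandidate(line) {
  var parts = line.slice('a=candidate:'.length).split(' ');
  return {
    foundation: parts[0],
    component: parseInt(parts[1], 10),
    transport: parts[2],
    priority: parseInt(parts[3], 10),
    ip: parts[4],          // the address discovered via STUN
    port: parseInt(parts[5], 10),
    type: parts[7]         // host, srflx (reflexive) or relay
  };
}

var c = parseCandidate(line);
// c.ip is '203.0.113.5', c.port is 40821, c.type is 'srflx'
```

Every one of those fields has to survive the trip through the signalling channel for the connection to come up, which is part of what makes this hard for service providers.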


How is this supposed to work? Unless you want to route all non-native-IPv6 traffic (i.e. almost all traffic) through the gateway, which would defeat the goal of peer-to-peer, you need a way to do NAT traversal over UDP, and at that point assigning IPv6 addresses would be somewhat redundant.


Users can arrange NAT traversal or tunnelling as appropriate, and have a better success rate than services which have to anticipate all of the different NAT configurations (and test them), without pissing off users. It's clear that the browser can help a lot here.

WebRTC is deployed on somewhere around 40% of browsers and is potentially supported on some 80-90% of those browsers, and works on maybe half of the services. Your mileage may vary, but there's not going to be a Skype competitor built on WebRTC unless Google makes it and tunnels to Google's servers (which they do).

Requiring servers implement IPv6 correctly is a lot easier than making every server do the STUN/TURN/conntracking dance. It's probably easier than making one server work with WebRTC everywhere, since the only WebRTC app that seems to work for me is Google.


You make the problem sound like it's lack of IPv4 address space.

That is true in some respects because the method of allocation of these addresses is not straightforward and has its own set of problems.

But in actuality "the problem" the OP is encountering is that for the RTC developer/user, a publicly reachable IPv4 address block is too expensive.

With a publicly reachable IP address (most ISP's will provide one, often for an additional fee), you can do peer-to-peer quite easily.

UDP hole punching works fine, save for when both peers are behind the same NAT, in which case you need a peer outside the NAT to forward traffic.

And it's easy to simulate a LAN over the internet using encapsulation.

Gamers have been successfully hole punching for many years.

I'd say up until recently, gamers have really been the only group that has demanded peer-to-peer connectivity and made it work.

One wonders if every ISP customer were willing to pay the extra fees (if any) and requested a publicly reachable IP address, could the ISP's meet the demand?


UDP hole punching does not work with all types of NAT, e.g. symmetric NAT where a user can appear from the outside to be coming from multiple different IP addresses. (This makes the STUN server useless for telling the other peer where to connect.) Symmetric NAT is common in corporate networks, which I suppose is not the first place you'd find gamers (or you could just run LAN games with co-workers). I guess WebRTC is the first place you'd run in to networking problems with NAT outside of gaming, torrents and bitcoin - and is perhaps the first developer-accessible peer-to-peer tech which is obviously useful in the workplace.
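A toy model (not real networking, just the mapping logic) of why symmetric NAT defeats the STUN-learned address:

```javascript
// Toy model: a symmetric NAT allocates a fresh external port per
// (internal endpoint, destination) pair, so the mapping the STUN
// server observes differs from the one used toward the actual peer.
function makeSymmetricNat() {
  var next = 50000, map = {};
  return function translate(internal, destination) {
    var key = internal + '->' + destination;
    if (!(key in map)) map[key] = next++;
    return map[key];
  };
}

var nat = makeSymmetricNat();
var portSeenByStun = nat('10.0.0.5:40000', 'stun.example.net:3478');
var portSeenByPeer = nat('10.0.0.5:40000', '198.51.100.7:41234');
// portSeenByStun !== portSeenByPeer: the peer sends to the port STUN
// reported, and those packets never match a mapping. A full-cone NAT
// would key the map on the internal endpoint alone, making them equal.
```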


"UDP hole punching does not work with all types of NAT..."

I think what you mean to say is the STUN and TURN solutions do not work.

What else have you tried?

I personally do not use those solutions.

The last resort is for a peer outside the problem NAT to forward traffic. This is not true peer-to-peer (IMO) but it does work.

If what you suggest were true, that reliable peer-to-peer is "impossible" because of some types of NAT, then how do you explain the success of Skype?

If you give specifics about what exactly you were trying to do, and what exactly you did to try to accomplish this, maybe someone could offer suggestions.

I already knew that STUN and TURN have problems. That's why I do not even bother with those "solutions".


Skype are a commercial company that has hundreds of millions of dollars in yearly revenue. Doubtless they have their own relay servers which they pay to run and connect everyone up. However it would be great if hackers and startups didn't have to deal with all of that.

Ideally it would not be necessary to try anything else and peers would just be able to directly connect. In practice TURN is a standardised way of doing relay, and what else are you going to do? Invent your own? How will you test it across the 1000s of different network configurations and know with any confidence that it actually works across the open Internet? Skype might have the resources to pull that off, but the next peer-to-peer startup might not.


I may be wrong but I don't think Skype works without at least a browser extension.


Skype does not require a web browser to do NAT piercing.

Nor does NAT piercing require a web browser.


So Skype has a 100% success rate piercing NATs and does not require any fallbacks? Also you mentioned that you do not use STUN/TURN, what other solutions are there that allow for p2p data streams between browsers?


> One wonders if every ISP customer were willing to pay the extra fees (if any) and requested a publicly reachable IP address, could the ISP's meet the demand?

Well no, that is why all ISPs are using DHCP. Most offer static IPs for extra fees, but like fractional reserve banking that only works if all possible subscribers aren't trying to claim unique IPs each.


You may have misunderstood what I mean when I use the term "publicly reachable".

It is not only the static nature of the IP address that make it more suitable for peer-to-peer.

It is the ability to accept unsolicited inbound connections on any port.

The key word is "unsolicited".

Essentially, the user pays extra for the "privilege" of less stringent firewall rules.


> Once every device on the Internet is uniquely addressable again, we can do away with these NAT hacks and two endpoints should be able to reliably connect to each other again, no matter where they are.

IMHO, this is a common misconception. IPv6 doesn't magically solve the problem.

In an IPv6 world, we will all need stateful firewalls (imagine a typical human's home router). These will generally be configured to allow all outgoing connections, and block all incoming connections - just like a NAT router effectively does today.

Now, you have the same problem all over again. How does the firewall know what new inbound connections to accept, and which to reject? We're back into the realms of packet inspection ("ALG") or protocols to explain to the NAT router what is required, such as NAT-PMP, uPnP etc.

Sure - each endpoint will have a unique address, and this is useful. But a direct peer-to-peer connection between these endpoints will be firewalled by default, except via the same (equally bad) solutions that currently solve the problem (badly) in a NAT world.
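The default-deny behaviour I mean can be sketched in a few lines; note it involves no address rewriting at all, which is why it survives the move to IPv6:

```javascript
// Toy stateful firewall: outbound traffic creates flow state; inbound
// traffic is accepted only if it matches a previously seen flow.
function makeFirewall() {
  var flows = {};
  return {
    outbound: function (local, remote) { flows[local + '|' + remote] = true; },
    inboundAllowed: function (local, remote) {
      return !!flows[local + '|' + remote];
    }
  };
}

var fw = makeFirewall();
fw.outbound('2001:db8::10.5000', '2001:db8::beef.443');        // we dialed out
fw.inboundAllowed('2001:db8::10.5000', '2001:db8::beef.443');  // true: reply traffic
fw.inboundAllowed('2001:db8::10.5000', '2001:db8::cafe.6881'); // false: unsolicited
```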


Aren't you talking more about firewalls than NAT? I don't see any problem with having uniquely addressed devices behind a single device implementing a firewall blocking incoming connections by default - that can still be done without modifying the addresses or ports (which NAT does). We could also do away with particularly nasty kinds like symmetric NAT which breaks STUN.


> Aren't you talking more about firewalls than NAT?

Yes, I am, but this is exactly the distinction I'm saying is being conflated in discussions about the original problem. Problems are being attributed to NAT (and it is being assumed that IPv6 thus will solve it), when instead they should be attributed to the necessity of firewalls (and so IPv6 will not solve the underlying problem).

> I don't see any problem with having uniquely addressed devices behind a single device implementing a firewall blocking incoming connections by default

The problem is that peer-to-peer connections will fail by default, and we would like them to Just Work in cases when the user has initiated it and approves of it.

> We could also do away with particularly nasty kinds like symmetric NAT which breaks STUN.

Fair enough, but that will not make a peer-to-peer connection work when a firewall blocks the connection.


Yeah, IPv6 is valuable, but it seems NAT is the smallest inconvenience since a p2p connection can usually still be made. Briefly giving it thought, I don't think IPv6 could eliminate the need for TURN servers or close the other 5% of anomalies. Am I wrong?


NAT only exists to workaround IPv4 address exhaustion. I think it's more likely the 5% edge cases have firewalls that block anything that doesn't look like normal HTTP traffic.


> NAT only exists to workaround IPv4 address exhaustion.

Untrue; people were using NATs long before they were concerned with running out of IPv4 addresses. I think it was a bit of paranoia combined with lack of trust in firewalls: corporate sysadmins just didn't want their internal networks to have routable addresses.

This seems to have been mostly calmed by the explosion of "cloud" IaaS offerings, which need publicly-routable addresses to do much of anything.


I don't think that's so. A NAT also prevents random hosts from directly opening connections to the machines that it's obfuscating (modulo port forwarding), which serves a nice security purpose.


You're confusing NAT with the firewall. Without a firewall, a pure-NAT will often let you route to the internal network addresses from the outside. There isn't really much "obfuscation" of the LAN addresses either, as they are almost certainly a 1/256-guess away on the 192.168.1.x network.

This confusion is very common, probably because it's incredibly rare to find NAT by itself. Every home router is basically guaranteed to have a basic stateful firewall in addition to providing NAT.


And every modern IP stack also provides a firewall, be it iptables or whatever the Windows Firewall is. I don't think the "NAT is necessary because putting our computers on the public internet is scary" is anything close to a reason to keep the hodge podge mess we have.


I suspect I am confused. If you're feeling generous (and notice this comment 6 days later, heh), would you mind correcting my mental model and slicing up the difference between NAT and firewall?

Suppose that there is one external ip, we'll say 1.1.1.1 which is NATting two internal ips, 10.0.0.1 and 10.0.0.2. When a packet comes in, say a TCP packet attempting to open a connection on port 80, what are the options that a non-firewalling NAT has to figure out which internal ip to route it to? Assume that both ips are running webservers.

I know of two answers to this question, one is port forwarding, where the NAT is explicitly configured to forward incoming port 80 traffic to one of the internal hosts (meaning that only one of them can listen for traffic on a given port). The other doesn't work for new incoming connections but just has the NAT watching for outgoing traffic and allowing incoming traffic to come back (using (foreign ip, receiving port, sending port) to route packets to the internal IP that started that conversation). My understanding is that NAT traversal techniques typically try to first ask the NAT to forward a port for them (uPnP / NAT-PMP), and if that fails then they try to exploit the second method using ICE.

What am I missing?
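For concreteness, my mental model of those two options is something like this (a toy, for a NAT at 1.1.1.1 hiding 10.0.0.1 and 10.0.0.2):

```javascript
// Toy pure-NAT routing decision: static port forwards plus a
// connection-tracking table built from outgoing traffic.
function makeNat() {
  var forwards = {};   // static forwards: external port -> internal endpoint
  var conntrack = {};  // (remote endpoint, external port) -> internal endpoint
  return {
    forwardPort: function (extPort, internal) { forwards[extPort] = internal; },
    outbound: function (internal, remote, extPort) {
      conntrack[remote + '|' + extPort] = internal;
    },
    routeInbound: function (remote, extPort) {
      return conntrack[remote + '|' + extPort] || forwards[extPort] || null;
    }
  };
}

var nat = makeNat();
nat.forwardPort(80, '10.0.0.1:80');                  // explicit port forward
nat.outbound('10.0.0.2:5000', '8.8.8.8:53', 30000);  // 10.0.0.2 dialed out
nat.routeInbound('8.8.8.8:53', 30000);  // '10.0.0.2:5000' (reply traffic)
nat.routeInbound('2.2.2.2:9999', 80);   // '10.0.0.1:80' (forwarded)
nat.routeInbound('2.2.2.2:9999', 443);  // null (nowhere to send it)
```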


While I think WebRTC is so fantastic it's nearly magical, I find it incredible how people using/creating it overlook networks' topologies problems that SIP and other VoIP folks already had to deal with (and went crazy doing so) a whole decade ago. NAT issues, STUN, all that is long known and yet I've never seen a fully working solution for peer-to-peer communication that won't be stopped by the simplest firewalls. Does anyone know any other "fix" for these kind of scenario as described in the post?


Because of the low barrier to entry, you get people who never had to learn about jitter, network latencies, BSD sockets, etc.

That is the promise of WebRTC -- just a few lines of JS and you've got yourself Google Hangouts.

One can argue this happens in web frameworks and other technologies that make it easy to get started with, which is a good thing.

But unfortunately a lot of these easy abstractions can't completely abstract away things like speed of light latencies, limitation of network bandwidth, funky NAT setups and so on.

So far there are very few peer-to-peer technologies that work reliably and are successful.


The problem is that peer-to-peer isn't a very efficient way of doing group video chat to begin with...much better and simpler to just use a central server in that case.


Yeah. Sadly, in order to protect users against servers MITMing their video chats, the WebRTC spec was crippled to require that the central server in a group video chat have the keys required to MITM it.

I'm unfortunately not joking. The original spec allowed Javascript to directly provide an encryption key; this was removed because someone working for one of the browser vendors argued it would allow companies to MITM video chat (I think it was Google?) In order to make group video chat feasible, this was then replaced with a new feature where the central server sent out a copy of the encryption key over the encrypted RTP channel, meaning it now needed to have the keys to decrypt all the video passing through it.


I think that's the whole point of firewalls, to control incoming and outgoing connections. So peer-to-peer sometimes is simply impossible. TURN actually is just a proxy as far as I know so it's not p2p.

That's why BitTorrent asks to open specific ports in your router, because otherwise you simply can't do p2p.


Tip if you feel like you have to maintain a lot of infrastructure with your TURN server: You can scrap your STUN server, since TURN is an extension of STUN. The TURN server will produce the same server reflexive candidate as your STUN server.

What port(s) were you using on your TURN server endpoint? Keep in mind that port 80 is often filtered, and many corporate networks often block non-HTTP(S) ports. At the WebRTC-utilizing service https://appear.in, we have found that using port 443 plays nicely with most restrictive networks.
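A hedged sketch of what that looks like in the peer connection config, using placeholder hostname and credentials (the `?transport=tcp` suffix is the part Chrome understands; the single `url` field was the spelling of the day):

```javascript
// Hypothetical iceServers config: one TURN entry on TCP 443 (it also
// answers plain STUN binding requests, so no separate STUN server).
var config = {
  iceServers: [{
    url: 'turn:turn.example.com:443?transport=tcp', // placeholder host
    username: 'demo-user',                          // placeholder creds
    credential: 'demo-secret'
  }]
};
// In a page: var pc = new webkitRTCPeerConnection(config);
```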


TURN over port 443 specifically using TCP not UDP should get around most firewalls.

If a provider is using DPI and trying to disable ssl network wide, you will still have a problem.


I agree this is a problem. From what I understand SCTP can be configured to be reliable like TCP, but cannot be proxied via a TURN server since it's UDP only. I am experimenting with WebRTC and I built a fallback over websockets, but I find myself encrypting the data client side to avoid trust issues because I don't have access to the encryption mechanisms baked into WebRTC. I think there needs to be a pluggable fallback mechanism for when all the ICE servers are exhausted.


Turn servers can also send data over TCP (e.g. https://code.google.com/p/rfc5766-turn-server/ does). The browser needs to support this option as well; Chrome seems to, with a ";transport=tcp" at the end of the turn server configuration url.


My mistake, I see that now [1]. It seems both sides need to specify that, which isn't a problem. Also it seems FF is on board [2][3].

1 - https://groups.google.com/forum/#!topic/turn-server-project-...
2 - https://bugzilla.mozilla.org/show_bug.cgi?id=891551
3 - https://bugzilla.mozilla.org/show_bug.cgi?id=906968


Interesting, yes I agree, as much should be exposed as possible to interface with any creative fallbacks people come up with for their system.


I'm interested in the security of WebRTC. Is it end to end? Can it be easily MITMed? What are its main flaws from a security design point of view?


I worked as an intern with Mozilla last summer on helping to improve the issues around security in WebRTC specifically related to authentication and the possibility of MITM attacks. You can watch my intern presentation at [1] which goes through a very high level overview of the state of authentication in WebRTC and an example implementation of how WebRTC could be built into the browser to include authentication:

[1]: https://air.mozilla.org/intern-presentation-seys/

TLDR: E2E encryption is included, but authentication is currently non-existent, allowing for pretty easy MITM attacks if you have control of the relaying website.


Eric Rescorla does a good job of describing the security architecture and concerns around webrtc here:

http://tools.ietf.org/id/draft-ietf-rtcweb-security-arch-09....

and

http://tools.ietf.org/id/draft-ietf-rtcweb-security-06.txt


It's still a work in progress. Right now if you trust the service to connect you to the right person, everything works out. However, they aim to make things work even if you don't, and the jury is still out about whether or not they can make a UI that reflects this safely. Furthermore, there are some anonymity issues related to the use of RSA to secure the SRTP connections.

The identity provider portion involves trying to sandbox javascript in new and interesting ways. However, most of the pieces are fairly well understood, and the breaks will happen because identity providers mess up.


Some of the security is explained in an overview at Google IO. Here is a video: http://youtu.be/p2HzZkd2A40?t=22m18s


It's end-to-end, but you need a way to validate that you're talking to the peer that you think that you are - there's no certificate authority verifying client certs (in the normal case, I haven't explored traditional client-side certs for webrtc).


Signaling and NAT traversal are two interesting pieces, as outlined.

It can get even more involved if you want to use peer identity and authenticate against an IDP. WebRTC is still in draft stage and browser implementations are still writing more code.

That said, it does provide a platform for interesting apps.

On a related note - I am curious how many people will use WebRTC directly or wrappers like those provided by twilio and similar.


I don't think even IPv6 without NAT would guarantee 100% connectivity (not that we can count on IPv6 coverage in the next decade anyway).

1. Stateful IPv6 firewalls still need hole punching to connect peers, thus a STUN server.

2. Think about a corporate network with complex routes and potentially multiple firewalls (for load balancing or route optimization). No guarantee you use the same gateway to get to the STUN server and the other peer. Thus, punching fails.

At least that's my uneducated guess. PS: there's NAT for IPv6 too (Linux supports it). Misery is not going to end.




