> When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has ossified. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control.
> It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.
On the other hand, I've done a fair bit of work getting TCP-based applications to behave properly over high-latency, high-congestion links (usually satellite or radio), and QUIC makes me nervous. In the old days you could put a TCP proxy like SCPS in there and most apps would get an acceptable level of performance, but now I'm not so sure. It seems like everybody assumes you're on a big fat broadband pipe now and nobody else matters.
I significantly benefit from QUIC. My home network is exceptionally lossy... and exceptionally latent. ICMP pings range from 500ms (at best) to 10 seconds (at worst) with an average somewhere in the 1-2 second range. Additionally I am QOS-ed by some intermediary routers which appear to have a really bad (or busy) packet scheduler.
Often, Google sites serving via QUIC are the only sites I can load. I can load HTML5 YouTube videos despite not being able to open the linkedin.com homepage. Stability for loading HTTP over QUIC in my experience is very comparable to loading HTTP over OpenVPN (using UDP) with a larger buffer.
> It seems like everybody assumes you're on a big fat broadband pipe now and nobody else matters.
This is intentional. The powers that be have an interest in moving everyone to faster networks, and they effectively control all new web standards, and so build their protocols to force the apps to require faster, bigger pipes. This way they are never to blame for the new requirements, yet they get the intended benefits of fatter pipes: the ability to shove more crap down your throat.
It's possible to cache static binary assets using encrypted connections, but I am not aware of a single RFC that seriously suggests its adoption. It is also to the advantage of the powers that be (who have the financial means to do this) to simply move content distribution closer to the users. As the powers that be don't provide internet services over satellite or microwave, they do not consider them when defining the standards.
There's a technical reason for this. The Internet2 project spent a lot of time and effort working on things like prioritized traffic to deal with congested links. They found that it was easier and more cost effective to just add more bandwidth than it was to design and roll out protocols that would deal with a lack of bandwidth.
> In those few places where network upgrades are not practical, QoS deployment is usually even harder (due to the high cost of QoS-capable routers and clueful network engineers). In the very few cases where the demand, the money, and the clue are present, but the bandwidth is lacking, ad hoc approaches that prioritize one or two important applications over a single congested link often work well. In this climate, the case for a global, interdomain Premium service is dubious.
"Premium service" in this document refers, basically, to an upgraded Internet with additional rules to provide quality of service for congested links.
(I'm not personally claiming that all these conclusions are correct and that they still apply today, just that there's some backstory here.)
I think more bandwidth is better in every case except geostationary satellites due to their unavoidable latency. And in theory those satellites are going to be obsoleted by LEO ISPs.
Almost certainly that answer is Google. Notice they are behind multiple of the new protocols here (HTTP/2 and QUIC), and are used as an example of how bundling DOH with an existing major player can prevent blocking of DNS.
Google is effectively the actual determiner of Internet standards. As the article notes, Google implemented QUIC on their servers and their browsers, and therefore, 7% of Internet traffic is already QUIC-based, despite it not being officially accepted at this point. This is essentially the same as what happened with SPDY at the time.
As Google controls both the primary source of Internet traffic (up to 35-40% of all Internet traffic, depending on who you ask) and a browser share of roughly 65%, it can implement any new protocol it wants, and everyone else needs to support it or be left out.
Arguably, the IETF is no longer the controlling organization here: Google is. Should the IETF not approve Google's proposals, Google will continue to use them, and everyone else will continue to need to support them.
As the parent notes, Google both essentially determines these standards and has an interest in faster networks so they can shove fatter payloads down the pipe. This is in part due to the ability to implement pervasive tracking, and of course, they operate a lot of content distribution products like YouTube.
Note that every method here that makes it harder for governments to censor and ISPs to prioritize also makes it harder for people to detect, inspect, and filter out Google's pervasive surveillance.
For sure the "we've implemented this in Chrome already" aspect is a huge aspect in the standards process.
I do think it's important to say that this isn't a sure-fire deal. We had stuff like NaCl (native code in the browser) that basically died off, and other options as well.
Conversely, SPDY is not what the standard ended up being; Google instead pulled what it learned into HTTP/2. This seems like a positive aspect to me.
It's important to be wary of things that Google won't bring up, but overall I feel like we're getting a lot of benefit from having an implementer "beta test" stuff in this way.
On the upside they provide real world prototyping and testing of protocols at a scale that's never before been available. Standards they submit for consideration will have had a lot of real world usage to iron out kinks and be battle tested. This is a good thing, IMO.
I don't think it is Google's fault that they run so much of the Internet and also have most of the browser and mobile phone share.
Google using the standards process to release new things is exactly what we should want to see. You make it sound like it is some nefarious behavior. Geeze.
Google has offered things for standardization that were changed by the standards group, and Google adopted the changes.
Google could just keep it all closed if they wanted to. Honestly, with posts like yours, I wonder whether they'll keep opening things up in the future. Why do it?
I'm very much torn on this phenomenon, and it depends on an org like Google being more or less benevolent when pushing new protocols and tech.
Designing a new protocol to be used at internet scale is really hard. Having the ability to test that on a significant amount of traffic before refining and standardizing is a huge advantage that few groups on the planet are able to make use of. From the IETF's perspective, I would look skeptically upon a new standard that someone dreamed up but had little to no practical real-world data to show how it behaves in practice.
But I also want them to avoid ramming standards down the IETF's throat. If, after due consideration, the IETF says "no" to a new standard, I want Google to stand down and abandon it. If the IETF thinks it's a good idea and is willing to work with Google to iron out issues and standardize in a public manner, then that's the ideal outcome.
While everything you state here is correct, and is clearly of serious concern, my view is that the general public doesn't really have the capacity to fight both of these battles at once.
Right now, I'm convinced we need Google's help to make it harder for governments and ISPs to censor and prioritize.
As of today, Google with its BBR + QUIC + Chrome combo and Facebook with their Zero protocol, combined with their extensive data center networks, can get a much better end-user experience than any other service provider, regardless of any government.
We can talk about network neutrality all we want, but these companies have built the technological capability to take a "little bit bigger share".
So after some time it will probably be too late to deal with Google :)
> As of today, Google with its BBR + QUIC + Chrome combo and Facebook with their Zero protocol, combined with their extensive data center networks, can get a much better end-user experience than any other service provider, regardless of any government.
What is your preferred course here? Google has provided free as in freedom source code for all of the above, and in some cases pushed the code upstream. Granted, it will take small players longer to see the benefits of these techs, since they can't afford to integrate it themselves and must wait for vendors to include it. But small players usually have worse user experience regardless of the mechanism. Proposed standards are just one way that happens.
I don't understand why people are more afraid of regional monopolies which have only a portion of one country as their scope, than of a global operation like Google.
I think it's far more important to deal with Google than hand them the keys to the kingdom while fighting smaller fish.
I agree with you, and it feels dirty, but we're going to end up with neither if we don't start with one. If you have a better idea that can even plausibly attempt to get both, I'm all ears.
I have no better idea, but it's a problematic plan. If you go with it, you will be surprised by back-room deals that will stop that division, and they will very likely mislead you.
The good part is that it focuses on fixing the government. That's the correct entity to fix.
Content creators make this assumption way more than any other party. Webpage bloat is already several years past qualifying as obscene; new words need to be coined to describe it today.
You see the same thing in simple data consumption patterns. It's become normal for your average app to suck down tens or even hundreds of megabytes a month, even if it barely does anything.
It's so normalized I figured there wasn't a whole lot I could do about it, until I noticed Nine sync'ing my Exchange inbox from scratch for something like 3MB. Then I noticed Telegram had used 3MB for a month's regular use, while Hangouts had used 10MB for five minutes of use.
Despite living in the first world, I'm kind of excited for Android's Building for Billions. There's so much flagrant waste of traffic today, assuming as you say that you have a big fat broadband pipe, with no thought to tightly metered, high latency, or degraded connections.
(I switched to an inexpensive mobile plan with 500MB, you see)
There’s little real impetus to change widely used protocols. Job security, product/support sales, and developer “consumerism” novelty aren’t valid reasons, yet they often get pushed with spurious justifications to fulfill agendas, despite their cost.
The only demonstrable needs are bugfixes and significant advances, because imposing new complexity on every implementation or breaking backwards compatibility is insanely costly in terms of retooling, redeployment, customer interruptions, advertising, learning curve, interop bugs and attack surfaces.
I work in the app / WAN optimisation space, and periodically do some work with SCPS -- I'm guessing you're referring to the tweaks around buffers / BDP, aggressiveness around error recovery and congestion response?
I think there'll be a few answers to the problem: first, it'll be slow to impact many of the kinds of users currently on satellite; second, for web apps it's either internal (so they can define the transport) or external (in which case they are, or can be, using a proxy and continuing TCP/HTTP over the sat links).
Later on I expect we'll get gateways (in the traditional sense) to sit near the dish ... though I also would expect that on that timeframe you'll be seeing improvements in application delivery.
Ultimately I hope - hubris, perhaps - that the underlying problem (most of our current application performance issues are because apps have been written by people who either don't understand networks, or have a very specific, narrow set of expectations) will be resolved. (Wrangling CIFS over satellite, for example.)
To what extent are the TCP problems you can solve by tweaking manually with a proxy the same problems that QUIC solves automatically? If there's a big overlap, it may not be a real problem.
For example, you mention latency, but QUIC is supposed to remove unnecessary ordering constraints, which could eliminate round trips and help with latency.
Another interesting protocol, perhaps underused, is SCTP. It fixes many issues with TCP; in particular it offers reliable datagrams with multiplexing, avoiding head-of-line blocking. I believe QUIC is supposed to be faster at connection (re)establishment.
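For anyone who hasn't touched SCTP: on Linux the kernel exposes it through the ordinary sockets API, so a one-to-one association looks much like TCP. The sketch below assumes kernel SCTP support is present; the host and port are placeholders, and the per-message stream IDs that actually avoid head-of-line blocking would require sendmsg ancillary data or a helper library, which is omitted here.

```python
# Minimal sketch of a one-to-one SCTP association on Linux (assumes kernel
# SCTP support is available). "example.net" and port 9000 are placeholders.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
try:
    # SCTP's four-way handshake (INIT / INIT_ACK / COOKIE_ECHO / COOKIE_ACK)
    # happens under the hood of connect().
    sock.connect(("example.net", 9000))
    sock.sendall(b"hello over SCTP")
finally:
    sock.close()
```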
SCTP is a superior protocol, but it isn't implemented in many routers or firewalls. As long as Comcast / Verizon routers don't support it, no one will use it.
It may be built on top of IP, but the TCP / UDP levels are important for NAT and such. Too few people use DMZ and other features of routers / firewalls. It's way easier to just put up with TCP / UDP issues to stay compatible with most home setups.
Why do the routers involve themselves at the transport layer? Can't they just route IP packets and leave the transport alone?
Firewalls -- whose firewalls are we talking about here? If a client (say, home user) tries to initiate an SCTP connection to a server somewhere, what step will fail?
> Why do the routers involve themselves at the transport layer? Can't they just route IP packets and leave the transport alone?
Because they have to do NAT, at least on IPv4.
> Firewalls -- whose firewalls are we talking about here? If a client (say, home user) tries to initiate an SCTP connection to a server somewhere, what step will fail?
Because the connection tracking that's needed to even recognize whether a packet belongs to an outbound or an inbound connection needs to understand the protocol.
I haven't tried it and am not terribly familiar with SCTP, but from skimming a few references I suspect it would fail when the NAT logic decides it's never heard of protocol number 132 and drops the incoming INIT_ACK on the floor.
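A toy model of that failure mode, using the real IANA protocol numbers but a made-up table layout (real NAT/conntrack implementations are far more involved than this):

```python
# Toy model of why a NAPT box drops transports it doesn't understand.
# This only illustrates the lookup failure for IP protocol 132 (SCTP).
TCP, UDP, SCTP = 6, 17, 132

def rewrite_inbound(packet, nat_table):
    if packet["proto"] not in (TCP, UDP):
        # The box can't find port fields it knows how to parse, so it has no
        # mapping to consult: the incoming INIT_ACK is silently dropped.
        return None
    key = (packet["proto"], packet["dst_port"])   # external port chosen at SNAT time
    return nat_table.get(key)                     # -> (inside_ip, inside_port) or None

nat_table = {(UDP, 40001): ("192.168.1.10", 5000)}
print(rewrite_inbound({"proto": SCTP, "dst_port": 40001}, nat_table))  # None: dropped
print(rewrite_inbound({"proto": UDP, "dst_port": 40001}, nat_table))   # forwarded
```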
This is an improvement --- it was dumb of SCTP to try to claim a top-level IP protocol for this --- but only marginally, since lots of firewalls won't pass traffic on random UDP ports either.
The only "advantage" to declaring an IP protocol is that you might save 8 bytes and a trivial checksum. Mostly, though, declaring an IP protocol is a vanity decision.
I'm wondering the reverse: what's the advantage to building on UDP, other than passing routers that for some reason are inclined to pass UDP but reject IP protocols they don't know? You said that this was "what UDP was for", and I was hoping for some more detail there on why UDP helps. As you said, it's just 8 bytes and a trivial checksum.
The advantage to building on UDP is that it gets through (some) middleboxes; that's what the 4 bytes of port numbers buys you. That, and the fact that UDP is designed to be the TCP/IP escape hatch for things like SCTP that don't want TCP's stream and congestion control semantics.
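For concreteness, the entire UDP header really is 8 bytes, and the first 4 are the port numbers that middleboxes key on. A quick sketch (checksum left as zero, which for UDP over IPv4 means "not computed"; the real checksum also covers an IP pseudo-header):

```python
# The UDP header is 8 bytes: source port, destination port, length, checksum,
# each 16 bits. The two ports are what NATs and firewalls use to track flows.
import struct

src_port, dst_port, payload = 54321, 443, b"quic-ish payload"
header = struct.pack("!HHHH", src_port, dst_port, 8 + len(payload), 0)
print(len(header))  # 8
```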
No, another advantage is that you can do UDP entirely in userland without privilege.
What allocating a new IP protocol says is pretty close to "all bets are off, and we're carefully taking responsibility for how every system that interacts with TCP/IP headers will handle these packets". Since SCTP doesn't need that, there's no upside. It's vanity and bloody-mindedness.
Very true. I'd of course run tests, but I would guess port 80 would work these days because of QUIC. Even port 53 is probably locked down to whitelisted hosts.
Yes. But I don't think that corporate firewalls matter when it comes to user adoption. If you need to work with SCTP in your company, you'll get your firewall rule; if not, you don't.
If they want to only allow proxied HTTP, that's their decision and developers should respect that, and not mask everything under HTTP. Their administrators made a substantial effort to forbid everything, so why would an honest person try to overcome their effort?
Home routers are another matter; home users don't make a conscious decision about it, but in my experience UDP works just fine (because a lot of games use it) and it should be good enough for any protocol.
Because in many cases, those administrators don't even realize they're breaking the network. They just run a botched config that they inherited from their predecessors or something like that.
You might say that they're clueless and shouldn't be allowed near networking equipment (and you might even be right) but it's not going to change a thing. For the foreseeable future, working around broken environments is all we can do.
To your point: IPv6 has been around for 20 years, the whole time we've known we're running out of IPv4 addresses, and adoption is still around 20%.
However, the high turnover for mobile phones has allowed more aggressive changes to the networking stack. Perhaps this, in addition to IPv6, would make something like SCTP easier to adopt widely?
"Still" seems a bit disingenuous when considering the current trajectory IPv6 adoption is on. [0] Yah it's happening slowly, but it does seem to be pushing ahead.
That graph has a couple doglegs that make it look exponential.
The last dogleg was January 2015. And since then it’s been linear (with a little stall this month) at about 5% of the Internet converting per year. That’s another 15 years to convert the rest, unless there’s a new dog leg up.
Also percentages don’t work the way humans think they do. Especially when the number of devices is constantly climbing. That may just indicate that some fraction of new hardware is ipv6 but little old hardware is being updated.
We may well have ~2 billion machines on ipv4 pretty much indefinitely, slowly being diluted by addition of new hardware.
You're absolutely right. I've read in other articles that IPv6 jumps up on weekends. IPv6 adoption is incredibly skewed. It's much higher in things like cellular networks and in developing countries. I think the weekend jump is attributed to cellphones (not sure if I'm connecting the dots or if that's what the consensus is).
How is that disingenuous? I don't think it matters what the curve looks like when it spans 10 years and ends up at 20%. There were implementations released 10 years previous to the graph's start. I wasn't implying it would never happen, just that even for something that's inevitable like IPv6, adoption is glacial.
I think disingenuous is the wrong word; but the curve does matter: 20% in 10 years indicates 100% in 50 years if the curve's a line, but it's not.
I dove into the world of curve fitting (wee!) and my prediction[1] for 95% IPv6 adoption is around the year 2025: https://imgur.com/a/LyBJn (fitted to the logistic curve[2], x=0 is basically 2010, y is percent adoption)
[1] Which you should completely trust because I've been doing this for all of 20 minutes!
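For anyone who wants to reproduce that kind of fit, a sketch with scipy is below. The data points are illustrative placeholders, not the actual Google adoption measurements, so treat the output as a demonstration of the method rather than a forecast.

```python
# Sketch of fitting adoption data to a logistic curve, in the spirit of the
# parent comment. The sample points below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0):
    return L / (1 + np.exp(-k * (x - x0)))

years = np.array([0, 1, 2, 3, 4, 5, 6, 7])                       # x = years since ~2010
adoption = np.array([0.2, 0.4, 0.8, 1.8, 3.5, 6.0, 10.0, 16.0])  # percent (illustrative)

(L, k, x0), _ = curve_fit(logistic, years, adoption, p0=(100, 0.5, 10), maxfev=10000)
print(f"fitted ceiling {L:.0f}%, midpoint at year {x0:.1f}")
```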
Let's say 20% adoption means we're 40% of the way through the transition. The slope for IPv6 looks better and better every day, but overall it's not a great adoption story for tech that inevitably has to happen. (Not to discourage or minimize all the hard work done in getting IPv6 this far.)
Which is frankly horrifying considering the reference implementation was released in FreeBSD 7. That really ought to scare people from purchasing any of those routers/firewalls that don’t support it.
It's not a matter of time in the wild, it's a matter of adoption and cost priorities. Until the past several years, most SoHo routers were super constrained wrt CPU, memory, and ROM, so adding support for a new transport layer protocol would involve an unacceptable cost increase in what has rapidly turned into a race-to-the-bottom commodity industry.
So you end up with a chicken-and-egg problem: router manufacturers aren't going to add support for it unless there's sufficient demand, and there can't be sufficient demand because very few people can use and rely on it.
It seems that widely deploying TLS 1.3 and DOH can provide an effective technical end-around the dismantling of net neutrality. So we should be promoting and trying to deploy them as widely as possible.
Of course, they can still block or throttle by IP, so the next step is to increase deployment of content-centric networking systems.
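To make "DNS over HTTPS" concrete: from the network's point of view a DOH lookup is just another HTTPS request to an ordinary web host, which is exactly what makes it hard to single out. A sketch using Google's public JSON resolver endpoint as one example deployment (the DOH draft itself carries binary DNS messages, but the blocking problem is the same); this assumes the third-party requests package.

```python
# Sketch of a DNS query carried over HTTPS, via Google's JSON resolver API.
# To an on-path observer this is indistinguishable from other HTTPS traffic
# to the same host.
import requests

resp = requests.get("https://dns.google/resolve",
                    params={"name": "example.com", "type": "A"},
                    timeout=5)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```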
It seems to me that all of the changes described in this story will contribute to thwarting intermediaries and their agendas. HTTP/2 and its "effective" encryption requirement are proof against things like Comcast's nasty JavaScript injection[1]. QUIC has mandatory encryption all the way down; even ACKs are encrypted, obviating some of the traditional throttling techniques. And as you say TLS 1.3 and DOH further protect traffic from analysis and manipulation by middlemen.
Perhaps our best weapon against Internet rent seekers and spooks is technical innovation.
It is astonishing to me that Google can invent QUIC, deploy it on their network+Chrome and boom! 7% of all Internet traffic is QUIC.
Traditional HTTP over TCP and traditional DNS are becoming a ghetto protocol stack; analysis of such traffic is sold to who knows whom, the content is subject to manipulation by your ISP, throttling is trivial and likely to become commonplace with Ajit Pai et al. Time to pull the plug on these grifters and protect all the traffic.
But I, as a user, want to be able to block domains, inject scripts and see what Chrome is sending to Google on my own devices (which is what Google doesn't want me to do). That's why I can't support these protocols...
You, as a user, absolutely can. An ISP or network administrator who does not control the endpoints, on the other hand, cannot, by design. That's a feature.
What if I want to use my router to block telemetry domains? Or other malware sites? It’s looking like the only way forward is running my own CA to mitm all encrypted traffic.
> It’s looking like the only way forward is running my own CA to mitm all encrypted traffic.
Correct. Middleboxes should be presumed hostile; if you control the endpoints you can install a MITM CA, but it's safer to put what you want directly on the endpoint.
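For the record, minting such a CA is not much code; a sketch using the cryptography package is below. Installing the certificate on each device and wiring the key into an intercepting proxy (e.g. mitmproxy) are separate steps not shown here.

```python
# Sketch: minting a private root CA with the `cryptography` package.
# The resulting cert would be installed on devices you control.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Network CA")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name).issuer_name(name)            # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)
with open("home-ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```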
Which will fail if apps check public keys manually, and is also not very efficient. I think we'll need to patch applications directly, but the good news is that since many people will need this, those patches will probably be developed.
And you can put your CA into the system CA store if you have root. (You can build your own Android image, so technically the requirement is an unlocked - or unlockable - bootloader.)
Unlocking the bootloader makes the device permanently fail the strictest SafetyNet category.
Apps can and will refuse to run in that situation.
Modifying /system will make every SafetyNet check fail, as result Netflix, Snapchat, Google Play Movies, and most banking apps will refuse to run.
I can decide to install the app or not? How do I go about replacing Google's system apps with my own, without preventing the above-mentioned apps from running? I can't. And I can't buy reasonable devices without Google Android, due to the non-compete clause in the OEM contracts.
You can walk into your bank and access the services. Or call them. Or use their browser based service, right?
Google and a lot of developers made the choice to restrict user freedom for more security.
I don't agree with it, but it is what it is. A trade-off.
Of course, you can sign your own images and put the CA into the recovery DB and relock the bootloader on reasonable devices. ( https://mjg59.dreamwidth.org/31765.html )
I know that I could edit the source code and recompile the program. I know I could disassemble the binary, find out addresses of functions and then use things like uprobes to dump/modify registers/memory. I know that in theory I could write my own version of mitmproxy that supports QUIC. But I don't have time to do all that, and that's why I speak against those protocols (which changes nothing anyway)
> It seems that widely deploying TLS 1.3 and DOH can provide an effective technical end-around the dismantling of net neutrality.
If you don't think about it, it may seem that way. But until everyone sends all their data over tor, or some other system that obscures which IP you're trying to get to, it's still easy to filter.
There's (within epsilon of) zero motion I've seen towards obscuring IP addresses, for good reason.
Yes, IPFS, but actually there are a huge number of perhaps lesser-known projects that do all sorts of things with content-oriented networking. It's been a pretty big research topic.
Unfortunately, not really; net neutrality mostly revolves around the somewhat bigger services, which in most cases will have at least one of a dedicated AS number, dedicated IP ranges, or dedicated physical network links whose capacity can be limited. Which is traditionally how the game has been played.
Think Netflix/Comcast... no hiding what that traffic is.
Let's just hope that future innovations (and, more perniciously, "innovations") reinforce the end-to-end principle. A major weakness of the 2017 Internet is its centralization.
The DNS-over-HTTP discussion in this post mentions that in passing, though I wonder if this treatment might not be worse than the disease.
The DOH example, in particular, only delivers its benefits if centralized onto something governments are hesitant to block. This is an example of "innovation" specifically designed to centralize. There are maybe a handful of companies with enough influence that countries would hesitate to block them just in order to block DOH.
This is just depressing. Sure, sell us out to big corporations by not implementing proper features in protocols like HTTP/2 so we can get tracked for decades to come. Yet, represent freedom by yet another cool way to "fool" governments. When historians look back at what happened to the Internet, or even society, they are going to find that organizations like the IETF were too busy with romantic dreams of their own greatness to serve the public. It's like people learned nothing from Snowden.
Authentication, mostly. The lack of it is the major reason why most of us are still typing passwords into boxes in the browser and sending them over the Internet, in contradiction of best practices. Doing away with that would potentially solve a lot of problems, like phishing, but also replace cookies. That would make it much harder to track users across the Internet, threatening not only the revenue of the major players but also their dominance, since being able to handle security issues is a major advantage for them. So instead of fixing the problem at the source, we have security people recommending password managers and the EFF making cookie blockers.
Essentially every geek I have ever talked to supports standards, decentralization, community efforts, etc. Yet here we have the company that has more influence than anyone else over the Internet almost single-handedly designing the protocol.
There's already a protocol for that[0], just almost nobody's using it. Which is a real shame, because with a cleaner UX and more adoption it could be a serious win.
I quote myself: “It really is no surprise that Google is not interested in this, since Google does not suffer from any of those problems which using SRV records for HTTP would solve. It’s only users which could more easily run their own web servers closer to the edges of the network which would benefit, not the large companies which has CDNs and BGP AS numbers to fix any shortcomings the hard way. Google has already done the hard work of solving this problem for themselves – of course they want to keep the problem for everybody else.”
I would also like to see SRV record support in HTTP/2, but IIRC Mozilla did some telemetry tests and found that a significant number of DNS requests for SRV records failed for no reason (or probably for reasons mentioned in this submission). Unfortunately I can't find a source link for that claim right now.
I know of two rather large users of SRV records already: Minecraft servers and (the big one) Microsoft Office 365. I’m less than convinced that resolution of SRV records is that broken.
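For reference, the lookup those clients do is a one-liner with a DNS library. A sketch using the third-party dnspython package, with a placeholder zone name (a hypothetical HTTP SRV scheme would look the same with a `_http._tcp.` prefix):

```python
# Sketch of the SRV lookup clients like Minecraft already perform.
import dns.resolver  # pip install dnspython

answers = dns.resolver.resolve("_minecraft._tcp.example.com", "SRV")
for rr in answers:
    # priority/weight let the zone owner steer and load-balance clients;
    # target/port mean the service need not live on the apex IP at a fixed port.
    print(rr.priority, rr.weight, str(rr.target), rr.port)
```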
Yeah, but the services you mentioned are used mostly by enterprises. It's still possible that SRV lookups are broken for a large number of consumers who are not on enterprise networks.
> Finally, we are in the midst of a shift towards more use of encryption on the Internet, first spurred by Edward Snowden’s revelations in 2015.
Personally, I'd say it was first spurred by Firesheep back in 2010, but the idea of encrypting all websites, even content-only websites may have been Snowden related.
I'm really struck by how hostile to enterprise security these proposals are. Yes, I know that the security folks will adapt (they'll have to), but it still feels like there's a lot of baby+bathwater throwing going on.
DNS over HTTP is a prime example: blocking outbound DNS for all but a few resolvers, and monitoring the hell out of the traffic on those resolvers, is a big win for enterprise networks. What the RFC calls hostile "spoofing" of DNS responses, enterprise defenders call "sinkholing" of malicious domains. Rather than trying to add a layer of validation to DNS to provide the end user with assurance that the DNS response they got really is for the name they asked for (and, in theory, allow the enterprise to add their own key to sign sinkhole answers), DOH just throws the whole thing out... basically telling enterprise defenders "fuck your security controls, we hate Comcast too much to allow anyone to rewrite DNS answers."
"Fuck your security controls, we hate Comcast" is, I think, a bad philosophy for internet-wide protocols. (That's basically what the TLS 1.3 argument boils down to also...and that's a shame.)
As implemented, all these "enterprise security" things are mostly indistinguishable from malicious attacks. Of course they break when you start tightening security.
Forging DNS responses is a horrible idea (and already breaks with DNSSEC). I have a hard time comprehending how this can be considered a reasonable security measure.
> I have a hard time comprehending how this can be considered a reasonable security measure.
OK, let's walk it through.
Task: block access to "attacker.com" and all its subdomains.
Reason: Maybe it's a malware command and control, maybe it's being used for DNS tunneling, whatever. Blocking a domain that's being used for malicious behavior is a reasonable thing for an enterprise to want to accomplish.
Option 1: Block by IP at the firewall.
Problems: Attackers can simply point the domain to another IP, so you're constantly playing whack-a-mole and constantly behind the attacker. Also, if it's a DNS tunnel the DNS answer is what's interesting, not the traffic to the actual IP.
Result: Fail, doesn't solve the problem.
Option 2: Block by DNS Name at the firewall.
Problems: Requires the firewall to understand the protocols involved, which they have shown themselves to be inconsistent at, at the best of times. Also, doing regex on every DNS query packet (in order to find all subdomains) doesn't scale.
Result: Fail, doesn't scale.
Option 3: Block with local agent.
Problems: Tablets, phones, appliances, printers can't run a local agent.
Result: Fail, not complete coverage.
Option 4: Block outbound DNS except for approved resolvers, give those resolvers an RPZ feed of malicious domains.
Problem: Clients have to be configured to use those resolvers, but otherwise none.
Result: Pass. It's standards-compliant, and DNSSEC isn't an issue since the resolver never asks for the attacker's DNS answer, so they never get the chance to offer DNSSEC.
That's why option 4 (or some variant of it) is popular in enterprises. It accomplishes the task in a standards-compliant way, and covers the entire enterprise in a way that scales well.
DOH blows this up. So, the question becomes: in a world with DOH, how is an enterprise supposed to completely and scalably block access to "attacker.com" and all its subdomains? So far, the answer has been "you don't." I think that is a really shitty answer to someone who's trying to accomplish something reasonable.
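For what it's worth, the suffix matching that option 4 pushes into the resolver is cheap there, which is exactly why it scales where per-packet regex doesn't. A toy sketch of the idea (not real RPZ syntax or a real resolver; the sinkhole address and blocklist are placeholders):

```python
# Toy sketch of RPZ-style blocking of "attacker.com" and all its subdomains:
# suffix-match at the resolver, one check per label, no packet inspection.
BLOCKED_ZONES = {"attacker.com"}
SINKHOLE = "10.0.0.53"   # placeholder sinkhole address

def resolve(qname, upstream_lookup):
    labels = qname.rstrip(".").lower().split(".")
    for i in range(len(labels)):                 # c2.evil.attacker.com -> attacker.com -> com
        if ".".join(labels[i:]) in BLOCKED_ZONES:
            return SINKHOLE                      # rewritten answer; query never leaves the org
    return upstream_lookup(qname)

print(resolve("c2.evil.attacker.com.", lambda q: "93.184.216.34"))  # -> 10.0.0.53
print(resolve("example.com",           lambda q: "93.184.216.34"))  # -> 93.184.216.34
```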
Yes, generally they're some combination of technically overcomplicated, difficult to use without layers and layers of heavy dependencies, poorly thought out, or aimed at Google-specific use cases.
Well, the complexity is a problem, but I don't really see that as Google's fault. The only chance to evolve the network is by building on stuff that works despite all the hostile middle boxes, and that necessarily requires quite a bit of complexity, unfortunately. In the long term, it seems to me like QUIC is a better idea than everyone individually having to work around idiocies all over the internet, as that is not exactly a zero-complexity game either.
I'm pretty excited about DNS over TLS. Ahaha no, that's so tacky, I meant DNS over QUIC of course. Sorry, I meant iQUIC. Ah no, it's not even there, but it will suck compared to DOH, DNS over HTTPS.
> For example, if Google was to deploy its public DNS service over DOH on www.google.com and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
Which will result in all of Google being blocked by schools, businesses, and entire nations. Which, as Google is relied upon more and more, means less access to things like mail, documents, news, messaging, video content, the Android platform, etc.
Nah, many of them can't -- won't -- block Google over this.
A huge number of them are absolutely reliant on Google, for things like (org-wide) Google Mail, Google Docs, ChromeBook deployments, and so on -- not to mention basic Google search.