DNS over HTTPS–What Is It and Why Do People Care? [pdf] (congress.gov)
119 points by signa11 on Oct 21, 2019 | 87 comments



Encrypted DNS is a good idea; I just wish all of the basic protocols were getting encryption in place, instead of being reinvented over HTTPS.

First HTTP/3 (née QUIC) reinvented TCP and ports over HTTPS; now DoH is reinventing DNS over HTTPS... Sigh. Both of these would be better off evolving and modernizing their respective existing protocols. And HTTP is a client/server protocol with a _heavy_ handshake cost, so why are we using it for a quick one-off request like DNS?


I don't disagree.

But the problem began decades ago, when sysadmins started using firewalls to control what employees could access. During the early 2000s I was involved in moving a lot of apps that used a bespoke port over to port 80/443, just to make sure our apps and services didn't have any client hiccups due to (rightly so?) belligerent sysadmins.

All this has really done is make sysadmins' lives harder because of packet inspection. So all app developers, and now infrastructure solution devs, must run through 443, otherwise uptake wouldn't happen. The internet is effectively running on one port nowadays.


> But the problem began decades ago when sysadmins started using firewalls to control what employees could access.

If the sysadmins were told "We don't want people doing X on company time: stop it.", that is hardly the sysadmins' fault.

Do you think IT / Helpdesk wants the drama of extra calls because certain things are blocked? That they're sitting in their cubicles twirling their sinister mustaches thinking of ways to make people's lives more difficult?


Honestly, I've met a handful of IT bureaucrats who enjoyed the power trip of holding up dozens of people on a multimillion dollar project over an outdated policy.

These people are the exception, not the norm, but it only takes one.


> holding up ... over an outdated policy

https://en.wikipedia.org/wiki/Slowdown#Rule-book_slowdown

When used carefully, it can be a very effective tactic.


The Bastard Operator From Hell! :-)


I did a talk on this phenomenon once. It's a form of Red Queen's Race: admins block stuff, protocols not blocked grow to subsume blocked functionality, repeat.

In the end we will have a virtualized Internet 100% encrypted and tunneled over port 443, rendering the physical IP header worthless legacy baggage.

To IT admins: the more you tighten your grip, the more protocols will slip through your fingers.


It's the consequence of a design flaw in IP: every piece of information meant only for the final host should have been encrypted.

Nobody knew better at the time, but now we do. Fortunately, it will end at HTTPS; it's just a matter of optimizing it well enough (in HTTPS4, or 5, or however many it takes) to make it behave like an internet-level protocol and replace IP.


The early 2000s were also when email got encryption between the mail client and the server. Belligerent sysadmins may try to block all ports except 80 and 443, but users get a bit upset when 143, 993, 25, or 465 get blocked and mail stops working.


Why not extend the existing protocol and keep it on port 53, which DNS already uses?


Middleboxes like to drop any data they don't understand. No doubt there is one out there checking that all data on the DNS port looks like DNS, so when encrypted DNS is used it gets dropped. Using HTTPS for everything means middleboxes can't tell what is going on and just allow it.


DNSCurve is specifically designed to look like UDP DNS. The problem isn't that the packets don't validate as DNS, it's that some of the middleboxes modify them, or don't send them to the appropriate server at all and try to answer the query themselves. The protocol then detects that MitM attack and rejects the response -- as well it should, but detecting the attack doesn't get you to working DNS.

HTTPS can't save you from that, though, because the same networks that modify DNS queries to third party DNS servers also do things like require you to install a root certificate and then MitM your TLS connections, and drop connections that don't accept their root certificate.

In both cases it's the same thing. You can detect the attack but that doesn't get you through.


Because some companies run with an effective

    iptables -t filter -A FORWARD -p udp --dport 53 -j DROP
And prevent anything from their own internal resolvers from accessing outside DNS services, especially now that encrypted SNI is moving more web filtering to the DNS layer.


But that one is OK. DNS over HTTPS doesn't exist to circumvent enterprise content filtering (hence why you can signal via a DNS entry that you don't want it to run, which also makes it vulnerable to downgrade attacks).

Easier to make the org's internal recursive resolver "do the right thing" than all clients, too.


> DNS over HTTPS doesn't exist to circumvent enterprise content filtering

Replace "enterprise" with "ISP" or "nation state", and that's closer to what the DoH aims are.


Sure, if you have internal only sites, you pretty much have to, and this is used even in small and medium shops.

DoH will cause so much drama when internal sites won't resolve any more.


It's perfectly OK to let your internal sites be resolved outside your network? abc.internal resolving to 192.168.0.101 is fine?


How do you do PTR records (what hostname is 10.12.43.122)? How do you do split DNS (inside my network I want it pointing to my internal server)?

Now you could argue that the fault in both of those cases lies with IPv4 and the NATing that comes with it.
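To make the PTR case concrete: a reverse lookup just flips the octets into the in-addr.arpa namespace, so a public DoH resolver gets asked for a name it has no internal zone data for. A quick sketch:

```python
def ptr_name(ipv4: str) -> str:
    """Reverse-DNS query name for an IPv4 address (RFC 1035 in-addr.arpa)."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa"

print(ptr_name("10.12.43.122"))  # → 122.43.12.10.in-addr.arpa
```

A resolver on the public internet has no authoritative data for that name when the address is RFC 1918 space, which is exactly the split-DNS problem.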


I don't get the PTR problem. My internal sites already have PTR records. My DNS server sits facing the internet, PTR records too.

But yes, if you have a domain that is supposed to resolve differently depending on the source IP, that'd be... interesting. Now you need a website that displays different content depending on source IP, or something like that...


If your machine is pointing to DoH on the internet, how does a reverse lookup for an RFC 1918 address work?

One issue with DoH is that it's inconsistent: some applications use one resolver, some use another. The network (via DHCP) can't hint at which resolver to use.


I'd prefer not to. We work for a number of different clients, and it's common to have pg11-clientname.* and node-clientname.* ...

I have over 200 domains internally that I don't have externally.

But it's also flexibility. When we develop, we often want to redirect certain domains to our endpoints, for testing or whatever, and we do that in DNS so that not everyone has to edit hosts files, etc.

We have pfSense internally, with a number of people who can and do add/remove hosts/domains on a daily basis, but we host our external DNS on CF, and only 2 people have access to that.


What's the risk?


My home ISP, for example, will replace anything you request on port 53 with their own DNS results, complete with ads and sites blocked by incompetence.


Because some totalitarian governments block "illegal" websites like wikipedia.org.


HTTP had persistent connections ever since HTTP/1.1, and HTTP/2 supports parallel streams within the same persistent connection.

HTTP/3 is still an IETF draft, but it is already being deployed by pretty much all the big sites. It supports zero-round-trip (0-RTT) requests, even after the client's IP has changed.

DoH can essentially match the latency of traditional UDP queries, while also encrypting the channel and traversing any gateways that let HTTPS through.


You can't get 0-RTT with TCP because the TCP handshake itself isn't 0-RTT. HTTP/3 accomplishes it by switching to UDP -- but then you can't get through the legacy middleboxes that only allow HTTPS over TCP port 443.


> First HTTP/3 (nee QUIC) reinvented TCP and ports over HTTPS

This doesn't sound right. QUIC is built on UDP and TLS 1.3, not HTTPS. HTTP/3 doesn't go over itself.

TLS 1.3 also makes the handshake significantly cheaper with the option for 0-RTT resumptions.

DNS isn't really being reinvented either - it's the old wire format with a different framing.
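To illustrate the framing point: per RFC 8484, a DoH GET request just base64url-encodes the unchanged RFC 1035 wire format into a `dns` query parameter. A rough sketch (the resolver hostname here is made up):

```python
import base64
import struct

def build_query(hostname, qtype=1):
    """Minimal DNS query in the classic RFC 1035 wire format (QTYPE 1 = A)."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID=0 per RFC 8484, RD=1
    qname = b"".join(bytes([len(l)]) + l.encode() for l in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)    # QTYPE, QCLASS=IN

def doh_url(resolver, hostname):
    """Wrap the wire-format query in an RFC 8484 GET URL ('dns' parameter,
    base64url without padding)."""
    b64 = base64.urlsafe_b64encode(build_query(hostname)).rstrip(b"=").decode()
    return f"https://{resolver}/dns-query?dns={b64}"

print(doh_url("dns.example", "example.com"))
```

The bytes on the wire are the same ones a port-53 query would carry; only the outer transport (an HTTPS request) is new.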


In my experience, it's not about the protocols; it's about all the existing middleboxes blocking/intercepting traffic on non-443 ports. UDP traffic is notoriously blocked by routers. The only viable option being: HTTPS.


> The only viable option being: HTTPS.

But why not DoT on port 443?

The story about office workers, sysadmins, and service providers trying to make life harder for each other and complicating things for everyone seems unfortunate, but makes sense, yet I don't quite see why HTTP has to be involved.


HTTP2 isn't significantly more overhead than TLS in my experience.

Also, using DoT over 443 would mean that traffic isn't distinguishable for routing purposes at the edge. By making DNS on 443 DoH, all HTTP2 routers/load balancers/etc. can work the same regardless of whether it's a Web request or a DNS request, because then they are both Web, i.e. HTTP, requests.


Probably because in your corporate/controlled environment you're supposed to use the company DNS for a good reason.


If you're not using a VPN, DNS is actually pretty easily spoofed. DoH and DoT make that much harder.


> First HTTP/3 (nee QUIC) reinvented TCP and ports over HTTPS,

Well, multi-streaming and multi-homing are part of SCTP, but no one seems to have bothered implementing it.

> now DOH is reinventing DNS over HTTPS... Sigh.

DoT was already invented when the Web folks decided to go and invent DoH:

* https://en.wikipedia.org/wiki/DNS_over_TLS


DNSCrypt uses the standard DNS mechanism, and just encrypts the content beforehand. Quick one-off requests. No handshake cost. It uses UDP, or TCP for large responses, exactly like standard DNS. In fact, it can even share port 53 with standard DNS. But can be configured to use TCP/443 if you're on a network where this is the only port that works.


There's DoT as well: DNS over TLS, which doesn't use HTTP.

It's on your Android phone now!


The big concern with DoT is that firewalls are more likely to block the new port on outbound requests than DoH requests over 443. I think the main concern here is that it would slow uptake of the new protocol similar to how long IPv6 has taken to get fully supported.


I feel that this everything over HTTPS trend is a scam, I don't know how or why yet, I just feel it.

Perhaps it makes the surveillance tooling more uniform and easier to develop/maintain.

The argument that HTTPS is always available and not filtered is just wrong, anyone who has experience working with large corporations knows that clients use local * certificates and everything is decrypted by the firewall and then re-encrypted. Making HTTPS slow af and sometimes plain broken.

I guess someone will implement a full HTTPS stack in js and announce HTTPS over HTTP to go around this "problem".


> The argument that HTTPS is always available and not filtered is just wrong, anyone who has experience working with large corporations knows that clients use local * certificates and everything is decrypted by the firewall and then re-encrypted. Making HTTPS slow af and sometimes plain broken.

Yeah, and implementing everything over HTTPS is just going to make things harder or impossible to filter. There's a reason that large corporations block certain things, so if those get reimplemented over HTTPS, that will just be a security hole. It would be nice if an actual encrypted DNS protocol were created.


My only gripe with DNS over HTTPS is that it seems to somehow be coupled with making it harder for me to actually force everything to use a particular DNS at the OS level, so apps can do things like circumvent your pihole regardless of how you configure your device's DNS settings.


They could do that already. There is nothing requiring that your app uses the OS set dns server


> They could do that already.

Before Mozilla's drama with DoH, which app(s) did that?

Now that Mozilla has shown people that it's 'okay' to override the OS I'm worried that more things will do that same cockamamie thing.


Not an app, but Chromecasts have 8.8.8.8 hard coded.


> Before Mozilla's drama with DoH, which app(s) did that?

Chrome, for one?


But your firewall could block port 53. If everything goes over https you can't do that any more either.


That is by design though. You want your dns requests to blend in with regular traffic on hostile networks / ISPs. The solution is to not have proprietary spyware devices in your network that don't let you set your own dns.


You could redirect these requests at your router though, essentially performing a MITM on your own device.

That will no longer be possible, which is a good and bad thing depending on the circumstances.


Sure, but they mostly didn't.

Now we will have two major browsers that might or might not resolve internal domains correctly. And unless you have AD and the ability to push config to clients, you will have to go to each and every computer and set it manually. And hope that updates won't break it further.


I have Firefox with DoH enabled and it still works with the company's internal domain names. I'm pretty sure nothing breaks, because it falls back to regular DNS if the lookup fails, so internal domains still resolve.


Why won't internal domains resolve correctly? abc.private is 192.168.0.123 regardless of who's doing the name resolution and over which ports/protocols??


I'm not a fan of apps doing that. I agree it's sort of tangential.


This was very well written. I am pleasantly surprised.

> Another concern is that DOH will complicate content delivery to users. Today, content delivery networks (CDNs) host multiple instances of web content on geographically dispersed servers. This creates resiliency for web services and helps to deliver content to users more quickly. If ISPs lose the ability to view users’ DNS queries, they will still be able to route users to a CDN, but not necessarily the closest or most efficient CDN. Technical measures that may alleviate this concern include sharing some user data (like general geolocation data) and CDN load management tactics.

Is this a real concern? Don't ISPs just route on IP addresses?

> Other potential implications of DOH implementation involve issues such as international data flow and advertising competition.

What on earth is this referring to?


DNS lookups for CDNs will return a different (local) IP for clients in different regions. The proper way to handle this is using EDNS Client Subnet, which is what Google DOH does. CloudFlare DOH doesn't support this, but instead handles this by making the DNS request from a server near to the end user. This is only roughly accurate, and can't for example point a user's request to their ISP's on-prem edge cache: https://samknows.com/blog/dns-over-https-performance
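For reference, the ECS option mentioned above is a small TLV carried inside the EDNS0 OPT pseudo-record. A sketch of the RFC 7871 wire format, with an illustrative example address:

```python
import ipaddress
import struct

def ecs_option(client_ip, prefix_len=24):
    """EDNS Client Subnet option (RFC 7871, option code 8).
    Only the leading prefix_len bits of the address are included."""
    addr = ipaddress.ip_address(client_ip)
    family = 1 if addr.version == 4 else 2
    nbytes = (prefix_len + 7) // 8                      # address truncated to the prefix
    payload = struct.pack(">HBB", family, prefix_len, 0) + addr.packed[:nbytes]
    return struct.pack(">HH", 8, len(payload)) + payload

def opt_record(options=b"", udp_size=4096):
    """Wrap EDNS options in an OPT pseudo-record (RFC 6891, type 41)."""
    return b"\x00" + struct.pack(">HHIH", 41, udp_size, 0, len(options)) + options

rr = opt_record(ecs_option("203.0.113.7", 24))
```

Note that only the leading prefix bits of the client address are sent (here a /24); that truncation is the privacy/accuracy trade-off ECS makes, and it's what a resolver like Google's can forward so the authoritative server picks a nearby CDN node.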


That refers to the fact that DoH gives the same old monopolies on the web (Google, Cloudflare) additional web-visit signal data, worsening the already appalling state of an internet economy captured by Google and Facebook for more than a decade, with naive nerds cheering in the name of technical progress. It also refers to ad blocking no longer being able to block ads based on domain names and IP addresses.


Other providers can offer DNS over HTTPS too. It doesn't affect client-side ad blocking, and you can still point your browser to a regular DNS server (like some kind of DNS-based adblock solution); just the default changed.


For how long though? I don't necessarily trust either Chrome or Firefox to quietly accept my "why yes, use normal DNS" choice, so I've just gone ahead and set up DoH at home (since I'm already running DNS at home). But then again, I can.


Are you saying DOH does not affect client side ad blocking because it can be shut off?


Of course.

If you blocked ads using browser addon like uBlock, that's going to be neutered with manifest v3 in Blink (i.e. Chrome/Chromium/Edge).

If you blocked ads using pi-hole like setup, now that's sidestepped.

If you wanted to enforce your DNS, now you cannot, because it will be mixed with other https traffic. For now, you can do DPI on the SNI, but that's going to end with eSNI too.


It's referring to the ability of ISPs in the USA (as of last year) to use your DNS requests for advertising purposes. The argument for this is that it expands competition in ad-based products by letting ISPs join the game that large web properties currently dominate.


God forbid there's a piece of infrastructure that can't show ads.


> The Congressional Research Service (CRS), a federal legislative branch agency located within the Library of Congress, serves as shared staff exclusively to congressional committees and Members of Congress. CRS experts assist at every stage of the legislative process — from the early considerations that precede bill drafting, through committee hearings and floor debate, to the oversight of enacted laws and various agency activities.

https://crsreports.congress.gov/Home/About


By its own definition[1], DOH forbids recursive resolution of queries. The client "MUST NOT use a different [DOH resolver] simply because it was discovered outside of the client's configuration"[2].

The protocol seems to be designed to require clients to send all of their DNS traffic to a single upstream provider. This may be similar to your current DNS configuration, and your network may even be limiting your ability to use the internet with bad policies on broken middleboxen. That's unfortunate for you, but please don't presume everyone has similar limitations.

>> "But we need to overload port 443 to hide our DNS traffic!"

It's unfortunate your ISP or national infrastructure requires such obfuscation. In those situations, the current DOH protocol could be a good workaround.

However, protecting against a malicious upstream server sending bad results is very different than protecting against large institutions being able to eavesdrop on your DNS traffic to build a model of your pattern-of-life. The latter is only stopped by not giving a single entity all of your DNS traffic, which DOH explicitly requires.

If you recursively resolve DNS queries locally - ideally in a future where traffic to authoritative servers is encrypted (DoT?) - only the first request goes to a centralized server. Most traffic goes to the domain's authoritative server, which is probably controlled by the same entity you are about to connect to with HTTPS.

[1] https://news.ycombinator.com/item?id=21110296

[2] https://tools.ietf.org/html/rfc8484#section-3


> Today, DNS queries are generally sent unencrypted. This allows any party between the browser and the resolver to discover which website users want to visit. Such parties can already monitor the IP address with which the browser is communicating, but monitoring DNS queries can identify which specific website users seek. As more services move to cloud computing infrastructure, this distinction becomes increasingly important, because multiple websites may be consolidated under a few IP addresses, rather than each having a unique IP address.

This is super misleading. Even with DoH, any party on the network can see which websites you're talking to, because the hostnames are sent in the clear via SNI. ESNI fixes this, but it's not clear to me whether the major cloud providers are going to go for it, and if they don't, it's not going anywhere.
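This is easy to verify locally with Python's ssl module: drive the client side of a handshake into memory BIOs and look for the hostname sitting in plaintext in the ClientHello (no network needed; the hostname below is made up):

```python
import ssl

# Drive a TLS client handshake into memory BIOs so we can inspect the
# raw ClientHello bytes without opening a socket.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="secret-site.example.com")
try:
    tls.do_handshake()        # raises: no server reply is available yet...
except ssl.SSLWantReadError:
    pass                      # ...but the ClientHello has been written out
client_hello = outgoing.read()

# The SNI extension carries the hostname unencrypted, even in TLS 1.3:
assert b"secret-site.example.com" in client_hello
```

So with DoH but without ESNI, a passive observer never sees your DNS query yet still reads the hostname out of the very next packet.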

https://news.ycombinator.com/item?id=21264814 was a good discussion of the actual security benefits of DoH.


I wonder if Mozilla will change their rollout plan now that the UK has dropped their porn-filtering plans[0]

0. https://www.bbc.com/news/technology-50073102


They still will want to block piracy sites.

https://en.wikipedia.org/wiki/List_of_websites_blocked_in_th...


DNS is heavily cached. What are the caching implications for encrypting DNS?


Only the point-to-point transports are encrypted, not the queries or responses themselves. That is, queries and responses are encrypted in transit but are still cached in the clear on the resolvers, for any other queries to benefit from. The caches will work the same as they do today.


Not much any more, with load balancers/geo routing/CDNs generating ephemeral host names, and everybody using very low TTLs.


It's only the traffic that is encrypted. It's still cached on the server and the client.


I wish it was cached by Firefox. But it's not. I'm running a DoH setup at home, and from the logs, it looks like Firefox is NOT doing any caching of results it gets.

At least my DNS server is caching the results though.


This links directly to a PDF. Reader beware.


What do you mean? This is pretty common here. Nevertheless, it should be included in the title, if that is what you are getting at.


Yes, putting PDF into the title is considered polite.


On the other hand, if you don't want to open pdfs by accident, use "Always Ask" .. https://support.mozilla.org/en-US/kb/disable-built-pdf-viewe...


What's wrong with links to PDFs? A concern with malware?


I'm not sure about these days, but I do remember being hesitant about opening PDFs 12+ years ago, before I switched to MacOS (Safari has always had very good PDF display ability via OS X), and before all the major browsers had built-in PDF display (PDFs would often be downloaded as a file, which would trigger the opening of some PDF viewer shareware pre-installed on Windows). Terrible UX back then.


Yeah, that was true back then. Opening a PDF link on Windows 2000 or XP was always a pain in the ass. I think those days are mostly over now, though.


It used to be considered polite to append "(pdf warning)" to direct links to PDFs. I still like it, even though it's way less of a PITA now than it used to be.


PDFs are often big, which is a concern for those on metered connections.


FYI: South Korea uses SNI to block porn sites.


Encrypted SNI is on the way. Alternatively, if the site is on a CDN or hosting platform you can usually send any SNI you want and it still resolves to the correct site.


Major cloud providers started blocking domain fronting last year.


Firefox users can enable ESNI from about:config. If the web site supports it then it will work.


Disclaimer: This will be a wildly unpopular opinion.

I do not believe that DoH was created first and foremost to protect the privacy of people. I believe that it was created to use the frog in boiling water methodology of silently pushing millions of people into centralized logged DNS that can be used for whatever purpose those companies see appropriate, in my personal opinion.

I do not believe this opinion is far fetched. No company is going to just provide a large amount of infrastructure out of the kindness of their hearts. I am not saying that philanthropists do not exist. They do, but not here. This is a data grab first and foremost, in my opinion.

Another factor in my opinion is the lack of support for corporate infrastructure. Some companies may manage some facets of user settings in Chrome and Firefox via AD policies, but I believe that is the exception rather than the norm. Companies will be leaking even more internal infrastructure topology than they do today. It isn't like ISPs manage browser settings, nor would they want to.

Nation states? DoH will not affect them at all. They will simply null route all of the DoH hosts like they do with existing proxies and VPN providers. This is what I had to do in my home network so that I could maintain control of my DNS.


I think you're arguing against the wrong thing here. You want decentralization of DNS. I don't see DoH really impacting this issue in any clear direction.

On one hand, you have centralized DNS (based on your ISP). DoH gives you some choice over that now through your browser. On the other, you have only a handful of DNS providers to choose from. DoH is just a technology, there's nothing preventing ISPs from still providing their DNS services over HTTPS.


> I don't see DoH really impacting this issue in any clear direction.

See my other posts[1][2] about why DoH explicitly requires centralization.

> you have centralized DNS (based on your ISP)

You might, I don't, because DNS doesn't require that the client delegate recursive resolution to an upstream server.

[1] https://news.ycombinator.com/item?id=21110296

[2] https://news.ycombinator.com/item?id=21309811


This is a strange definition of decentralization. I suppose it gives you redundancy. But aren't you broadcasting information about your request to multiple parties if you're doing recursive resolution from different providers?

But I agree that this should be a choice on the client-side.


(sorry for the late response!)

> This is a strange definition of decentralization.

DNS is - by definition[1] - a distributed database.

    Name servers store a distributed database consisting of the
    structure of the domain name space, the resource sets associated
    with domain names, [...]
> I suppose it gives you redundancy.

The distributed database is about administrative boundaries[2].

    Authority is vested in name servers.  A name server has
    authority over all of its domain until it delegates authority
    for a subdomain to some other name server.
Redundancy in the DNS system is provided by a requirement that at least two authoritative nameservers must be listed when delegating authority to a subdomain.

> But aren't you broadcasting information about your request to multiple parties

Recursive resolution only involves multiple parties when the authoritative nameserver for a DNS zone[3] isn't known and again when the TTL (time-to-live) for the zone's NS record expires. Once the NS records are cached locally, queries only involve one party. DNS is a very flexible system that is designed to allow queries at any level and easy caching.

Also, consider that TTLs for different record types are often very different. The only records that necessarily must be requested from the TLD servers (".org") or another central server (like 8.8.8.8) are the NS records for a zone. Using the IETF's rfc server in [1] as an example, to look up the A record for "tools.ietf.org", we first (unfortunately) leak information to .org (or 8.8.8.8, etc.) to discover the domain's authoritative servers:

    ;; SERVER: 8.8.8.8#53
    ;; QUESTION SECTION:
    ;tools.ietf.org.                        IN      NS

    ;; ANSWER SECTION:
    tools.ietf.org.         1209600 IN      NS      heroldrebe.levkowetz.com.
    tools.ietf.org.         1209600 IN      NS      zinfandel.levkowetz.com.
    tools.ietf.org.         1209600 IN      NS      dechaunac.levkowetz.com.
    tools.ietf.org.         1209600 IN      NS      dunkelfelder.levkowetz.com.
    tools.ietf.org.         1209600 IN      NS      durif.levkowetz.com.
We then cache that locally for 1209600 seconds (14 days!). While this does leak the fact that you asked about something in the ".ietf.org" zone to a central server, approximately two of these requests per month doesn't reveal much about your pattern-of-life.

The TTLs of A (and AAAA) records tend to be much shorter, sometimes unnecessarily short for better surveillance resolution. In this case, IETF is using a fairly standard 10 minutes:

    ;; SERVER: 4.31.198.61#53 (durif.levkowetz.com)
    ;; QUESTION SECTION:
    ;tools.ietf.org.                        IN      A

    ;; ANSWER SECTION:
    tools.ietf.org.         600     IN      A       4.31.198.61
    tools.ietf.org.         600     IN      A       4.31.198.62
    tools.ietf.org.         600     IN      A       64.170.98.42
For comparison, the A record for graph.facebook.com (their big "analytics"/spyware ingress server) seems to have a random TTL between ~5 and 59 seconds. They are effectively forcing a DNS query on every analytics event. That's insane, but my point is that doing the recursive resolution locally means that almost all of those DNS queries are sent to dns.facebook.com only. The upstream DNS at the ISP or 8.8.8.8 only gets ~two UDP packets per month. With DoH, all of that still happens, but it has to be routed through Cloudflare first!

[1] RFC 882, page 13, "NAME SERVERS" https://tools.ietf.org/html/rfc882#page-13

[2] RFC 882, page 14, "Authority and administrative control of domains" https://tools.ietf.org/html/rfc882#page-14

[3] From [1], "In general, a name server will be an authority for all or part of a particular domain. The region covered by this authority is called a zone."
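The caching argument above is simple to model. A toy TTL-honoring cache (illustrative names only) shows why, with a 14-day NS TTL, the central server sees roughly two queries a month while everything else stays local:

```python
import time

class TTLCache:
    """Toy DNS record cache: entries expire after their TTL (illustrative only)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}

    def put(self, name, records, ttl):
        self._store[name] = (records, self._clock() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None  # miss: the only case that goes upstream
        records, expires = entry
        if self._clock() >= expires:
            del self._store[name]
            return None  # expired: re-query upstream
        return records

# Simulated clock: the NS entry answers locally for the full 14 days.
now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.put("tools.ietf.org", ["heroldrebe.levkowetz.com."], ttl=1209600)
assert cache.get("tools.ietf.org") is not None   # served locally
now[0] = 1209600.0
assert cache.get("tools.ietf.org") is None       # only now does upstream see a query
```

Every short-TTL A-record query in between hits the zone's own nameservers, not the central resolver.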



