Getting Started with DNS over HTTPS on Firefox (medium.com/nykolas.z)
78 points by nykolasz on July 20, 2018 | hide | past | favorite | 77 comments


I'm interested to see how the implementation performs in practice, but I don't see DNS over HTTPS as better than some of the other solutions out there. Some have been around for a while and are well-tried but failed to gain wide adoption, like DNSSEC. Others are new kids, unproven but with lots of promise on paper, like IPFS service discovery.

In no particular order, here are some alternative technologies. As always, YMMV: the proof is not just in the technical implementation of the protocol, but also in the policies and politics around its adoption. A good chunk of these overlap with DNS's goals, but only partially.

* DNSSEC - https://www.icann.org/resources/pages/dnssec-qaa-2014-01-29-...

Various Distributed Hash Table (DHT) based approaches:

* IPNS - https://medium.com/@yaniv_g/hosting-websites-on-ipfs-with-ip...

* Telehash - http://telehash.org/

Various cryptocurrency approaches:

* Namecoin - https://bit.namecoin.org/

* DomainToken - http://www.domaintoken.io/

* Steemit - https://steemit.com/

If you know of others, please comment with the name and a link.


The point of DNS over HTTPS is enhanced privacy. Currently, DNS packets are unencrypted, but they can be signed (with DNSSEC). DNSSEC does nothing to hide the content of DNS requests/responses, but it makes sure that they can't be tampered with. Adoption (client-side DNSSEC validation) is currently at around 14% [1], which is indeed low.

There is the DNS Privacy project, which proposes some solutions to the privacy issue [2].

The alternatives you're mentioning are really not alternatives at all (except namecoin, but only for .bit). How the other solutions (e.g. Steemit) are connected to the problem at hand is beyond me.

[1]: https://stats.labs.apnic.net/dnssec/XA?c=XA&x=1&g=1&r=1&w=7&... [2]: https://dnsprivacy.org/wiki/display/DP/DNS+Privacy+-+The+Sol...


Enhanced privacy if you consider what's between your device and the DNS operator. But the DNS operator itself learns more about you: they can fingerprint your devices more accurately with DoH than with regular DNS, due to HTTP sessions, HTTP headers, and TLS tickets.

This concern started being addressed in the latest DoH draft.


Right now there are no benefits, though, because the browser sends the domain name you contact unencrypted in the TLS handshake due to SNI. So someone listening in on your communication will learn the hostname anyway.

I know people are working on encrypted SNI but that will take time.


Of course there are benefits.

Assume a large entity willing to do some mass surveillance (NSA, ...). With unencrypted DNS, this entity just has to MITM a link on the last hop of a few DNS providers (Google, Cloudflare) and voilà, the IPs of the clients and the domains visited come pouring in.

With encrypted DNS, to get the same amount of information, the entity needs to MITM a much larger number of links.

Though I agree the benefits are clearly limited, the idea is to eliminate all weak links. If there are 2 broken windows in your house and you can fix one - why not do it?


> Right now there are no benefits though.

In terms of privacy, I would mostly agree. Using an authenticated channel to your resolver still protects against many common MitM vectors, so there's definitely a benefit there. Unlike DNSSEC, you're not dependent on the target domain being in the small subset of DNSSEC-enabled domains, not to mention that most client resolvers won't validate DNSSEC anyway.


Encrypted SNI will take years before it is in common usage.

If you're that concerned about privacy, you better use a VPN.


Not necessarily.

CDNs have done, and keep doing, a great job at pushing new things forward.

Fastly, Cloudflare and Akamai already have implementations and test websites.


Unless I misunderstand how it works, DoH also provides protection against an attacker spoofing replies.

I have no clue how often this attack vector has been used in the real world, but last time I read about it, I got the impression that it would not be very hard for a skilled attacker to pull off. (DNSSEC would work, too, but as you say, most clients do not make use of it.)


Over the past year or so I've done a complete re-evaluation of my home network and online activity with an eye to privacy and safety, after having the epiphany that the cloud is just a fancy term for other people's computers. Why would I trust my modest compute needs and most personal data to someone else in this age of cheap hardware and virtualization?

To concentrate purely on the DNS side, I set about re-engineering things for privacy, safety and speed (in that order). I'll only address my local resolver, and not the DNS I'm serving to the world at large for my personal domains.

I run Palo Alto's free MineMeld server in order to get realtime threat lists. Any medium-or-higher threat level domains are fed to:

an unbound caching resolver on my OpenBSD edge firewalls. These threat domains (along with adware domains from someonewhocares) are blackholed to 0.0.0.0. Any queries that are not in the cache are forwarded to:

a BIND 9 server on a VM that has no direct access to the internet or the rest of my LAN. It will either answer authoritative queries for my internal LAN or forward queries that require an external authoritative answer to:

Six DNSCrypt proxies in a round-robin scheme. Each proxy was chosen because it (claims it) doesn't log, and will also pass back DNSSEC failures. OpenDNS doesn't!
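The blackhole-then-forward step on the unbound side might look roughly like this (the domain and forward address are placeholders, not from the post):

```
# unbound.conf fragment: blackhole a threat-list domain, forward the rest
server:
  local-zone: "tracker.example." redirect
  local-data: "tracker.example. A 0.0.0.0"

forward-zone:
  name: "."
  forward-addr: 10.0.0.53   # placeholder IP for the internal BIND 9 server
```

In a real deployment the local-zone/local-data pairs would be generated from the MineMeld output and included from a separate file.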

Notes:

My BIND server verifies DNSSEC. I also have a bunch of known-good/bad DNSSEC domains that my Nagios server checks constantly, verifying that DNSSEC is succeeding/failing as expected. I also have DNSSEC/TLSA/DANE for all my domains and services. Thank you, letsencrypt!

My OpenBSD pf firewall forces ALL DNS queries to my unbound resolvers, so regardless of what server an internal client attempts to use, it ends up going through all my security and privacy apparatus. Malware is unable to use its own DNS servers to bypass my blackholing.
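A pf rule for that forced redirection could look something like this sketch ($int_if is a placeholder macro for the internal interface):

```
# pf.conf sketch: redirect all outbound DNS from the LAN to local unbound,
# no matter which resolver the client thinks it is talking to
pass in quick on $int_if proto { tcp, udp } from any to any port 53 \
    rdr-to 127.0.0.1 port 53
```

With this in place, even hard-coded resolvers like 8.8.8.8 transparently hit the local unbound instance.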

I have not gone the extra step of using Tor. Although this seems like it would improve my privacy, I can't shake the feeling it's an NSA honeypot and does more to mark you as a target of interest than it does to protect you.

One feature I would like, which I have found impossible to implement on my own, is a fresh-DNS cooldown, to prevent brand-new domains from resolving for x number of hours. I like the idea that malware using dynamically generated domains could be thwarted with this, but there isn't any central list/mechanism to figure this out. whois info is too unreliable and unstructured.


You can also configure unbound to prefetch records that are about to expire that the user has recently requested. This can reduce human pattern recognition and correlation. Do this on your upstream servers as well. Read up on "target-fetch-policy:"

Unbound also allows you to set a min-ttl, which is taboo in the DNS admin realm, but very useful for adding a small amount of privacy, at the risk of poorly engineered websites being unreachable for a small period of time. See "cache-min-ttl:" Consider keeping that under 20 minutes.

I also log all query responses, then sort by request count and do additional prefetching of names I use, plus batches of random domain names to add some noise as others suggested. "log-queries: no" and "log-replies: yes". Logging this data can also help you spot websites that try to enumerate a user's DNS and real IP by using unique A records per client.

Also make sure you don't have the "subnetcache" module loaded, as it will by default send client-subnet data, allowing enumeration of your private network.
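Put together, the options mentioned above make an unbound.conf fragment along these lines (the specific values are illustrative, not the poster's):

```
# unbound.conf fragment: privacy-oriented tuning discussed above
server:
  prefetch: yes                     # refresh popular records before expiry
  target-fetch-policy: "3 2 1 0 0"  # illustrative value; see unbound docs
  cache-min-ttl: 900                # 15 minutes, under the 20-minute suggestion
  log-queries: no
  log-replies: yes
  # keep "subnetcache" out of module-config so client-subnet data isn't sent
  module-config: "validator iterator"
```

Note that cache-min-ttl overrides what the zone owner published, which is exactly why it's considered taboo.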


SpamEatingMonkey offers a set of RBLs allowing you to look up domains which have been registered in the last N days with a simple DNS query (where N is one of 5, 10, 15 or 30).

https://spameatingmonkey.com/services

I've not used this myself, so I'm not recommending it. I only know that it exists.


Thanks! I'll check it out, but it looks like there isn't a downloadable list, and inserting an extra DNS lookup as a check in my caching/resolving infrastructure isn't something I have been able to figure out how to do. Looks pretty sweet as part of an anti-spam filter tho!

Edit: I've emailed them asking about the possibility.


That looks interesting, but I would also like to see a read-only rsync endpoint that I could grab the zones (raw) or formatted zone data.


Respect! Thanks for posting! Could you do a blog post or a more technical write-up about your implementation?

I have my own DNS with min-ttl set. Furthermore, I have firewalled all DNS lookups except to a certain provider.

On top of this I have installed a local CA on all devices.

Blocked domains are redirected to a local nginx that answers with an empty gif or a 204, even when the request is over HTTPS (domains spoofed).

The access.log is very interesting to analyse, and sometimes you find bugs in iOS apps that crash because they could not send their usage data to flurry.com or other domains.


Cool project! I wish I had that much dedication for implementing this at home :)

You could also take inspiration from the NSA and actually perform random DNS queries & HTTP requests to various sites to disguise the true queries.

Another improvement would be actually offloading the resolver to another location via a VPN and querying from there.


Thanks!

Random queries sound like a good extra tactic. It would prevent any of my 6 (unrelated afaik) DNSCrypt proxies from knowing with certainly about where I'm going. The queries would have to be somewhat credible and randomly timed to stop the noise from being easy to filter out, but that doesn't seem insurmountable.
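A toy sketch of that randomly-timed decoy idea (the domain pool and the mean interval are made up for illustration):

```python
import random

# example pool of plausible decoy domains (invented, not from the thread)
DECOY_DOMAINS = ["wikipedia.org", "bbc.co.uk", "stackoverflow.com", "nasa.gov"]

def next_decoy(rng: random.Random) -> tuple[str, float]:
    """Pick the next decoy domain and a jittered delay before querying it.

    Exponentially distributed inter-arrival times are harder to filter
    out than fixed intervals, since real browsing is also bursty."""
    domain = rng.choice(DECOY_DOMAINS)
    delay = rng.expovariate(1 / 30.0)  # mean 30 s between decoys (assumption)
    return domain, delay
```

A driver loop would sleep for `delay`, then resolve `domain` through one of the proxies, so each proxy sees a mix of real and decoy traffic.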

I think the VPN aspect is already taken care of by making all queries via those independent proxies. Is there something else a VPN would help with that I'm missing?


Essentially, you're now trusting not 1 entity but 6: if any of those 6 logs / leaks data, you're offering them 1/6th of your traffic. The probability of that happening is significantly larger than with a single provider (if you want to calculate it, there's the binomial distribution).

That being said, if any of those providers would then leak your traffic patterns, all the attacker would get is your VPN IP address and not your home IP address. So essentially, you're making it even harder to correlate DNS queries to you.


It's important to note that DNSSEC does not provide encryption. Additionally, very few client resolvers validate DNSSEC. In typical MitM scenarios, DNS over TLS or HTTPS provides much better protection. If the resolver happens to validate DNSSEC, you're probably adding a bit of protection for other scenarios, but overall I'd still rather not see DNSSEC succeed.

The confidentiality win is not huge; most use cases will let an attacker make a good guess at the query through the destination IP (for IPs serving only one site) or via SNI. Still, there's definitely a win for other types of queries, and it'll get more useful once we figure out how to encrypt SNI.


"overall I'd still rather not see DNSSEC succeed"

Can you explain why it's important _not_ to have DNSSEC ?

In the most threatening MitM scenarios that we see, an adversary controls IP traffic to their victims (usually only relatively briefly). If we can use DNSSEC this gets them a denial of service and nothing further. But with your preferred options they are able to silently interpose as the victim, and tools you've worked on like Let's Encrypt will help "re-assure" the public that nothing is wrong. I have no doubt that ISRG would say they have no liability when the relying parties are screwed over - that's after all exactly what their commercial equivalents say all the time - but it'd be an easier argument to make if you weren't here saying that when it comes to the one thing that _would_ work you'd "rather not see it succeed".

Do you at least have something _instead_ you think would give people these benefits ?


I'll bite. Reasons that you might not only not want to use DNSSEC yourself, but also believe it's important for the protocol not to find any success:

* It creates a global tree-based PKI rooted in world governments (and, on the 2018 Internet, particularly the "Five Eyes" governments), many of which have already demonstrated an eagerness to manipulate Internet infrastructure to capture intelligence.

* Simply in order to reach parity with the (poor) security UX we have today, it will require major forklift-grade updates to libraries and code already deployed, because virtually all Internet software is written with the assumption that DNS lookups fail only because of network failures or user errors, and DNSSEC introduces a third basic class of failures. All this new code will be costly and that money could be spent on better things, and all that new code will be buggy.

* We now know from years of niche experience that our predictions about DNSSEC and reliability were right, that DNSSEC is rarely deployed reliably (major providers routinely screw it up), and that it creates frequent outages that wouldn't occur were DNSSEC not deployed in the first place.

Against these and other concerns you have to weigh the benefits of DNSSEC. But unfortunately for the protocol, the last 25 years of protocol development has proceeded based on the premise that the DNS isn't trustworthy to begin with --- you have to draw a line in the protocol "stack" somewhere, with "insecure" stuff below and "secure" stuff above, and we've essentially spent the last quarter century with that line drawn above DNS. As a result: the benefits of DNSSEC are marginal, bordering on vanishing.

It used to be that there was a coherent argument for DNSSEC around email; DNSSEC would enable MTAs to establish secure connections reliably. But major email providers have given up on DNSSEC even for that application: SMTP STS doesn't depend on DNSSEC.


We know that a very large group of internet users are directly or indirectly using Google's public resolvers, which do DNSSEC validation.

Therefore, any DNS stub library in common use already has to know about DNSSEC failures, because in some countries, like the Netherlands or Sweden, a very large fraction of domains are DNSSEC signed.

Due to those Google public resolvers, broken DNSSEC doesn't go unnoticed. So broken DNSSEC is actually quite rare these days.


I didn't just make that up; there was a measurement study at last year's Usenix. It's a debacle.

https://www.usenix.org/system/files/conference/usenixsecurit...


I have domains with deliberately broken DNSSEC. Does that count toward DNSSEC being broken? Same for TLS; I have websites with broken certs.

If you want to say something about how the internet is broken, then look at production traffic. Don't just take the list of all .com domains. Because many of them will never see any traffic.

Of course, nobody is going to report on the alexa 1 million. Because that would be completely boring.


In the modern Web PKI, it seems to me that the only thing DNSSEC is good for is mitigating BGP hijacking and similar attacks (in combination with CAA and either some contractual agreement with your CA or the ACME CAA extension).

That's an important issue to solve, but you wouldn't need anything with the complexity and baggage of DNSSEC to do that.

DNSSEC, to a degree, stands in the way of a better solution. And while I have no problem with people using DNSSEC as an additional layer of defense as such, I feel like there's a risk that people will use it as the only layer of defense if it becomes a success, effectively blindly trusting DNS.


Wait, how would DNSSEC possibly mitigate BGP hijacking?

I heard this last time there was a publicized BGP-hijacking incident and it made no sense to me then and still doesn't now. Attackers who control BGP control IP addresses themselves, the things DNS records point to. They don't even need to touch the DNS to intercept traffic.


The magic sauce would be CAA. This is under the assumption that all publicly trusted CAs respect CAA and have no bypass bugs (CAs probably aren't ... too far off target) and that they properly validate DNSSEC (good luck with that, I don't think there's even consensus on what the Baseline Requirements consider compliant).

Anyway, in this mostly hypothetical world where all of that works as intended, you could use a CAA record with just one or two CAs with which you have some kind of contractual relationship requiring out-of-band verification. Alternatively (and, IMO, preferably, since the controls are technical), you can use the ACME CAA extension[1] to lock down the validation methods to just DNS-based ones, or bind the whole validation flow to a key. Let's Encrypt is working on this currently[2].

[1]: https://tools.ietf.org/html/draft-ietf-acme-caa-05#section-4

[2]: https://community.letsencrypt.org/t/acme-caa-validationmetho...


My understanding is (a) that the current interpretation of the BRs does not require DNSSEC, and (b) that validation of DNSSEC among CAs is not the norm; CAs have disabled it because it proved unreliable.


It's certainly not an interpretation I'd punish a CA for, the language in section 3.2.2.8 is rather ambiguous.

Let's Encrypt is running a fail-close setup for DNSSEC, so I wouldn't quite say it's too unreliable in this particular context. Still, it's quite clear that DNSSEC's complexity is getting in the way of things here. A solution just for this specific use-case would be much simpler overall, and I'm not buying into any of the other benefits DNSSEC claims to bring, so I'd rather just see it die.


> DNSSEC, to a degree, stands in the way of a better solution.

When no such "better solution" exists this is wishful thinking, we might make the same argument for any number of core systems and protocols.

In particular you've offered no evidence whatsoever that a system could be developed (much less _has_ been developed and thus offers a viable alternative) which avoids "blindly" trusting authoritative answers from name owners about their names...

Historically the rationale for why we can't do DNSSEC on clients is that it won't work because of middleboxes. The present topic is of course about deliberately altering clients so as to bypass middleboxes and so actually this is a world where DNSSEC works much _better_ than ever before.


This doesn't make any sense. First, it's plainly obvious that we could design replacements for essentially any protocol. Second, the fundamental design problems that would motivate and be corrected in a new DNSSEC are obvious:

* Offline signing was a mistake, since virtually all leaf authority servers are online signers due to the NSEC debacle.

* A protocol with less than 20% saturation in 2018 that expects to be deployed would be based on a modern signature scheme, not RSA.

Just altering those two elements of the design would allow for a radically improved protocol.

I'm 41 years old. The DNSSEC design being promoted right now was started when I was still in high school, during the phlogiston era of cryptographic protocol design. It is lunacy to believe that it is the best thing we can come up with.


Major resolvers do DNSSEC validation. The real problem is the very low number of zones that are actually signed.

Which shouldn't be an excuse for not doing DNSSEC validation if only because your own zones can be signed. You know, the ones you will be constantly ssh'ing to, and blindly accept new ssh server keys from because hey, it's your zone and you trust it.


A good reason not to do DNSSEC validation is that it adds significant unreliability to your network (DNSSEC deployment failures happen all the time, with what appears to be higher frequency than TLS certificate failures, and DNSSEC failures are more dramatic and disabling than TLS failures) while adding only marginally to security, and then only in the best case --- there are cases where it reduces security.

The security of my SSH servers is not influenced at all by the integrity of my DNS queries. If I had a fleet of servers and a large organization of people to secure, such that I was concerned about the security of user introductions to SSH servers, I would address that concern directly with certificates, not indirectly with DNSSEC.

What's more, there are other good things that fall out of adopting SSH CAs (simplified provisioning being the most noticeable, short-lived credentials being the most important). Whereas there is basically no upside at all to adopting DNSSEC, and a lot of downside.

DNSSEC is bad, which is why nobody uses it.


Note that I was referring to client/stub resolvers specifically. Last time I checked, it was rather uncommon for them to perform their own DNSSEC validation rather than trusting the AD bit sent by their upstream resolver. In practice that means any MitM between you and your DNS resolver can spoof DNS regardless of DNSSEC status. DoH/DoT, on the other hand, mitigates this.


IPNS lookups take over a minute: https://github.com/ipfs/go-ipfs/issues/3860

DNSSEC is just as vulnerable to MITM attacks, because an attacker can simply strip the DNSSEC records, and the client has no way to know whether the resolver just doesn't support DNSSEC.

The cryptocurrency ones don't even warrant a response; they are incompatible with anything that happens today.


1. sudo apt install unbound

2. Configure network settings to use 127.0.0.1 as the resolver

You now have end-to-end DNSSEC-protected DNS (when the domain supports it, and support is admittedly low). The best a MITM can do is block your lookups, in the same way they can also block your HTTPS connections.

[edit] Of course, the above will now fail by default in Firefox, because they won't use your local system resolver. Every other app on your system will be secured, though. Until they also start getting their own custom-made name resolution systems too.
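On Debian-based systems the packaged unbound validates out of the box; the relevant pieces look roughly like this (the trust anchor path is the Debian default, and may differ elsewhere):

```
# unbound.conf fragment: DNSSEC validation with the root trust anchor
server:
  auto-trust-anchor-file: "/var/lib/unbound/root.key"
  module-config: "validator iterator"
```

You can sanity-check validation with `dig @127.0.0.1 dnssec-failed.org`, a deliberately broken test zone that should come back SERVFAIL if validation is working.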


The reason DNSSEC is vulnerable to that kind of MITM attack is because browsers refuse to do client side validation.

For privacy reasons, DNS over TLS and DNS over HTTPS are still a good idea. So even with DNSSEC you would need one of those.


Browsers tried to do DNSSEC validation. It didn't work. DNSSEC features in Chrome, OS X, and Firefox were rolled back.

Also: it is part of the architecture of DNSSEC for clients not to do full validation, which requires that they act as their own recursive resolvers and eliminates shared caches.


Any pointers to which browser versions did do DNSSEC validation and is there any documentation of what didn't work? My experience with running the validator plugin for a couple of years is that it just works.

Obviously, in the early days of DNSSEC there were more improperly signed zones. But since Google's public resolvers do DNSSEC validation, that is mostly a thing of the past.

Then there might be the rare case of middle boxes breaking DNSSEC, but as far as I can tell, that is extremely rare.

There is no connection between DNSSEC validation and being a full recursive resolver. You can easily have a stub resolver that does DNSSEC validation. Or, if it is easier, have a local validation recursive resolver that forwards to another recursive resolver.


Google [why not dane in browsers].


I'm sure you are aware that Google has different results for different people. For me, the first 10 hits don't seem to include any reference to browser versions that did DNSSEC validation, and don't have any reference to studies using those browser versions to see what breaks.

In short, 'Just google it' often means that somebody doesn't actually have the relevant references.


You can make your resolver verify the entire chain.


There are TLDs that are unsigned, and many domains that are unsigned. You cannot verify the entire chain, because it is possible that the domain you are looking up is in fact unsigned. You can maybe pin some of the TLD keys, but you can't pin all the domains under them.


> You cannot verify the entire chain, because it is possible that the domain you are looking up is in fact unsigned.

Then you conclude that it's okay that the domain is unsigned. Else - hard fail.


I don't know what the figures are today, but in 2016, 89% of TLDs were signed using DNSSEC - https://www.internetsociety.org/blog/2017/01/state-of-dnssec...


This is probably kind of a dumb question, but can I run my own DoH server? If so, where can I find a tutorial?


Install rust-doh: https://github.com/jedisct1/rust-doh

There are also tutorials on how to set up your own DNSCrypt server: https://github.com/jedisct1/dnscrypt-proxy/wiki/How-to-setup...


Thank you!


That is not a dumb question at all. Running your own servers gives you control over your DNS, your cache and where it is directed upstream.


I have been running BIND9 on my home network, both as an authoritative name server for my local zone, and as a recursive resolver, for more than ten years. ;-)

But for DoH to make sense, the server must be outside of my private network and my ISP's network.


Exactly. You can forward traffic from DoH to TLS upstream servers, or servers sitting in tier-1 networks that are well outside of your ISP.


So is this where we are going, application-level DNS implementations?


I think we can also say that SSL is "application level". I mean, the network isn't secure, so the application (ie. a web browser) must do something to encrypt data in transit.


It doesn't have to be. On iOS, DNSCloak will provide system-level DNS authentication (plus filtering, caching, etc). On Android, AdGuard Pro also does. On Windows, Simple DNSCrypt does. On Unix, dnscrypt-proxy does.


Seems the sensible place to start, no? We're a long way off secure DNS from the OS.


As part of releasing 1.1.1.1, Cloudflare implemented DNS-over-HTTPS proxy functionality into one of their tools: cloudflared, also known as argo-tunnel. You can install it and configure the OS to do lookups against cloudflared on localhost instead of some outside DNS server.


Android P (beta) has it in system settings. I am using it right now. (Although I have no idea how to test if it's really working)

There's also this interesting app: https://play.google.com/store/apps/details?id=app.intra


And by a long way, you mean you can do it right now on pretty much all Linux systems.

https://github.com/systemd/systemd/pull/8849



> GET https://dns.google.com/experimental?ct&dns=AAABAAABAAAAAAAAB... HTTP/2.0

I guess the stuff in the dns= bit is a query to look up the IP of dns.google.com? ;)

I'm not sure if I think trusting certs for ip addresses (as opposed to domain names) is a great idea. And how else would this bootstrap?
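For what it's worth, the dns= parameter is just the base64url-encoded wire-format DNS query (per RFC 8484). Building one by hand is short enough to sketch (nothing here is from the article; it's a minimal reconstruction):

```python
import base64
import struct

def build_doh_query(name: str, qtype: int = 1) -> str:
    """Build a DNS wire-format query and base64url-encode it for an
    RFC 8484 GET request (the ?dns= parameter, padding stripped)."""
    # Header: ID=0 (RFC 8484 suggests 0 for cache friendliness),
    # flags=0x0100 (RD=1), one question, no other records
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()

# e.g. GET https://<doh-server>/<path>?dns=<value below>
print(build_doh_query("example.com"))
```

And yes, the bootstrapping question stands: the client has to resolve (or hard-code) the DoH server's address by some other means first.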


I installed doh-client from https://github.com/m13253/dns-over-https onto my EdgeOS router, then pointed dnsmasq at doh-client and, well, it works and I have nothing else exciting to report. One less thing for AT&T to snoop.




What is the point of using DNS over HTTPS if you use google's DoH server?


Preventing your ISP from logging your DNS queries?


Well I suppose that this is a concern, and with recent regulatory roll-backs even more so today. I'm creeped out now...


I have enabled DNS over HTTPS on Android P (it has a built-in system-wide capability) with Cloudflare.

Problem is that I have no idea how to test if it is really working :-)


Intercept the traffic somewhere and inspect it with Wireshark. You can decrypt the HTTPS packets if you can manage to log the keys on Android (see https://jimshaver.net/2015/02/11/decrypting-tls-browser-traf...).



Nice! Didn't know about that.

For a reference, this is what I get when I set my dns as 1dot1dot1dot1.cloudflare-dns.com on Android P:

https://cloudflare-dns.com/help/#eyJpc0NmIjoiWWVzIiwiaXNEb3Q...


Install stubby on your mac or linux host. Ask the same questions on both. See what happens.


Google does no evil. Yep :)


For many of us that use local DNS (Pi-hole and similar technology), this is not an option. On the other hand, I feel more secure with my local ISP than with a mega ad-corporation like Google.

I think that DNS over HTTPS is loved by the ad community: no local DNS that can disturb or block user-generated data. Don't get fooled, people.

#DeleteGoogle


There's no reason you can't use DNS over HTTPS and pi-hole, just use dnscrypt-proxy, and point your pi-hole at that: https://github.com/jedisct1/dnscrypt-proxy
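A minimal dnscrypt-proxy.toml sketch for that setup (the port choice is arbitrary):

```
# dnscrypt-proxy.toml fragment: listen locally so Pi-hole can forward to it
listen_addresses = ['127.0.0.1:5300']

# only select upstream resolvers that claim DNSSEC support and no logging
require_dnssec = true
require_nolog = true
doh_servers = true
dnscrypt_servers = false
```

Then set Pi-hole's custom upstream DNS server to 127.0.0.1#5300.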


Thanks for tip! Will look into it.


As I just learned a few minutes ago, you can run your own DNS over HTTPS server. Then you do not need to trust either Google or your ISP.


Just another thing that you need to remember to configure on each of the applications that don't use the system resolver, on each of your systems.

I guess for me that equates to Firefox on 3 systems. Although I have three separate Firefox profiles, so I guess that's 9 times it needs configuring. And at some point 3 copies of Chrome, and another 3 Chromiums I guess. And then whatever other apps decide they want their own custom name resolution. That's assuming I'm actively notified that they have been changed to work that way, and assuming I remember to do it after each install.

Because by the sound of it, without remembering to configure my systems, I'm going to end up in a situation where I'm now shipping off a list of all the websites I visit to some American corp for safe keeping, by default. Along with everyone else in the World.

I can't even sit here in the UK, visiting a UK website, without asking an American corp for the IP. At least today, I'm asking my ISP for the IP. A company based in my legal jurisdiction, with whom I have a business relationship.

[edit] I forgot Firefox on my phone too. Thankfully I don't have a tablet.



