Statement on DNS Encryption [pdf] (root-servers.org)
80 points by moviuro on April 21, 2021 | 42 comments



To clarify a bit:

The root servers already use DNSSEC (which is not DNS query encryption). Try

$ dig rrsig . @c.root-servers.net

if you want to check this. (I chose c-root just for fun.)

DNS query encryption has become a more and more popular idea because of things like spy agencies conducting surveillance of DNS queries for espionage purposes, ISPs collecting them for commercial purposes, and network censors using them for site blocking. If you can query resolvers in an encrypted way that the network can't see, you can make it less likely that they will know what sites you visit or be able to block particular ones (especially in certain cases where an adversary is on-path for DNS queries but not for the resulting HTTP connection, which is not always true but is sometimes true).
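For example, once your client points at a public DoH resolver, the lookup just rides inside an ordinary HTTPS request (Cloudflare's JSON endpoint is used here purely as an illustration; any DoH resolver works the same way):

    $ curl -s -H 'accept: application/dns-json' \
        'https://cloudflare-dns.com/dns-query?name=example.com&type=A'

To an on-path observer this looks like any other HTTPS traffic to that resolver, rather than a plaintext port-53 query naming the site.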

This letter seems to respond to suggestions that the root servers ought to enable a query encryption mechanism of some sort so that you could query them without revealing (to the network) what the content of the query was. This is of varying interest and relevance depending on your threat model (and who and where your recursive resolver is), but it could be important in some threat models.

The letter basically argues that it isn't really necessary for the root servers to do this, because (1) in principle the information they serve isn't that sensitive (and even the fact that someone is interested in a particular part of it might not be that sensitive), (2) other people are in a position to mitigate the privacy issues, and (3) other people can reasonably do this right away without requiring the root servers to change anything.

The QNAME minimization thing in particular is like: if you want to look up the IP address of controversialsite.example.com, this might end up generating a query "IN A controversialsite.example.com." (and/or "IN NS controversialsite.example.com.", "IN SOA controversialsite.example.com.") that goes unencrypted from you/your network/your recursive resolver to the root servers. But this is unnecessary. Instead, this query could ask just for "IN NS com." to the root servers, and neither the root server operators, nor your ISP, nor governments tapping Internet backbones, will know that you were interested in com because of a lookup for controversialsite.example.com as opposed to google.com. (The TLD servers such as gtld-servers.net would still need to offer query encryption in this case.)
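(If you run your own recursive resolver this behavior is typically a one-line setting; e.g. in Unbound, assuming a version that implements RFC 7816:

    server:
        # send only "com. NS"-style questions toward the root and TLD servers
        qname-minimisation: yes

Recent releases of Unbound and Knot Resolver enable it by default, if I recall correctly.)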

A further argument made by the root server operators is that recursive resolvers can do better caching to make it much less likely that they'll need to query the root servers directly for the huge majority of individual queries that are sent to the recursive resolver.
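The delegation data the root hands out is also extremely cacheable. For example:

    $ dig +norecurse NS com. @a.root-servers.net

returns the .com delegation with a 172800-second (two-day) TTL, so a resolver that caches properly only needs to bother the root about .com every couple of days, no matter how many client queries it is absorbing.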


Thanks for the great explanation.

Possibly a lookup for google.com isn't a completely innocent action, depending on the country, especially on this topic


I know DNSSEC has a ton of flaws and has received a lot of justified criticism. But I'd still desperately love to have good standards in place such that my possession of a given domain also allows me to operate a generally recognized private CA for said domain (as well as eliminating a variety of DNS attacks, of course). There shouldn't be any need for third parties there in principle; it should be enough to put a CA root public cert up in DNS and have everything automagically work from there. It'd make a host of things like RADIUS and future enhanced distributed auth email-likes a lot more convenient and less fragile. I love Let's Encrypt, but it's a bandaid for the fundamental issues with DNS. I hope increasing recognition of the centrality of DNS and the need for improvements there will ultimately result in something modern and better than DNSSEC that can move a lot of stuff forward.


I'm not exactly sure this is germane to this thread because none of the query encryption mechanisms the root operators are writing about here really address the same things as DNSSEC.

This debate has happened in various places (here on HN; on mozilla.dev.security.policy; on the Let's Encrypt forum; on IETF WG mailing lists; presumably inside browser vendors' security teams), but I haven't really seen an authoritative statement of both sides' views about the status quo and their visions of the future for the web PKI.

I was involved in setting up Let's Encrypt, and I noticed that many of my colleagues were, to varying extents, themselves PKI skeptics (people who had been critical of existing CAs' practices, but also of the fragility of what certificates can even really attest to).

I've also seen (partly from a colleague who went to a bunch of meetings about these things) that generally e-mail people really like DNS and somewhat mistrust X.509, while commonly web people grudgingly respect X.509 and somewhat mistrust DNS. This was especially visible in the debate about DANE (a mechanism for putting public keys and statements about them in the DNS), and also in related debates about the role of DNSSEC. Reputedly, many of the avid DNSSEC users and DANE developers and proponents are e-mail software operators or implementers. We've also seen Thomas Ptacek here criticizing DNSSEC and arguing that it has no important role for Internet security (especially in terms of cryptographic protections against government eavesdropping).

During the DANE standardization, if I understand correctly, the developers hoped that it would eventually be on par with X.509 certificates as a way of validating public keys for TLS connections (both STARTTLS and HTTPS, as well as other applications). But I think the Chrome developers at some point made a clear statement to the effect of "nope, sorry, we're not going to do that"!

My intuition supporting any or all of "DNS registrars should be [name-constrained] CAs"/"DNS registries should be [name-constrained] CAs"/"DANE should replace X.509" was basically that DNS records are the fundamental source of ground truth for CAs issuing DV certificates. If you look at the CA/B Forum's list of approved methods for doing domain validation, they're all about cross-checking information provided by DNS registrars, and likewise Let's Encrypt's domain validation methods (https://letsencrypt.org/docs/challenge-types/) all directly rely on DNS; the most secure of them (the DNS-01 challenge) is literally about the ability to place a TXT record into a specified DNS zone. So DNS registrars and registries are directly being trusted in the DV issuance path -- every time.
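Concretely, the DNS-01 challenge boils down to publishing one TXT record that the CA then looks up (placeholder value here; the real one is derived from the ACME key authorization):

    _acme-challenge.example.com.  300  IN  TXT  "<token-derived-from-the-ACME-key-authorization>"

If you can write that record into the zone, Let's Encrypt will issue for the name -- which is exactly the sense in which the DNS operators are already in the trust path.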

The main counterarguments I remember were along these lines:

* DNS registries and registrars are (despite their huge importance) less security-conscious than CAs in numerous ways, and operate under less stringent security precautions and procedures. [But this argument is a little weird when CAs will issue certificates based on whatever the registries tell them.]

* DNS is less transparent than CAs, especially given the mandatory use of Certificate Transparency, which allows detection and investigation of misissuance. (Also if we relied on DANE without X.509, there would be no public or permanent record of the DANE DNS RRs that were used in a particular trust decision, so fraudulent ones might never be detected either contemporaneously or retrospectively, and a root or TLD keyholder could construct a completely fraudulent DNS signature chain that would be accepted by a particular relying party but never stored anywhere else.)

* You can choose a relatively security-conscious registrar, and then the registry's security precautions should ensure that attacks against other registrars (or negligence or malice on their part) don't affect your domain's zone contents; but if registrars could directly issue certificates, arbitrary registrars could misissue for domains that aren't even registered through them. But registries couldn't usually issue certificates because they have no relationship with subscribers, no means of authenticating them, and no user interface through which certificate issuance could be requested or performed.

I think there are others. Maybe I could approach some people and get them to write up their different views of all of this for the record, not as part of a debate or flamewar.

Edit: the sibling comment reminds me that there are also different proposed use cases -- one is the belt-and-suspenders method where you need an X.509 certificate and a valid DNSSEC-signed DANE record confirming that the subject key is legitimate. Other proposals have been that the DANE record could be just-as-good as the X.509 certificate, or that it could eventually supplant it entirely.


Chrome found that the Internet as it actually exists, with a diversity of broken middleboxes, made DANE impossible to deploy effectively. You can't count on DANE lookups succeeding, because middleboxes shoot down DNSSEC lookups, which look weird compared to ordinary lookups. You can't distinguish between a broken middlebox shooting down your DANE lookup and an adversary deliberately suppressing DANE lookups. So, in effect, all DANE bought you was an additional single point of failure; you still had to trust the rest of the WebPKI infrastructure.
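(You can see the fragility from any network by asking for the DNSSEC material explicitly and watching whether it survives the path:

    $ dig +dnssec A example.com

On a clean path you get RRSIG records back, and a validating resolver sets the "ad" flag; behind a bad middlebox the extra records or the larger EDNS0 responses simply vanish, and the client has no way to tell that apart from an attack.)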

It's worth keeping in mind that proof of domain ownership doesn't have to depend on the DNS; you can also just ask registrars directly, via something like RDAP. If we're going to keep pinning Internet trust to name ownership, there's probably no reason we have to involve the DNS at all. Moreover: if you can accomplish WebPKI trust using an RDAP-type lookup, that pretty much obviates the need to forklift in a cumbersome, pre-outmoded signing system for the DNS itself.


> Chrome found that the Internet as it actually exists, with a diversity of broken middleboxes, made DANE impossible to deploy effectively. You can't count on DANE lookups succeeding, because middleboxes shoot down DNSSEC lookups, which look weird compared to ordinary lookups.

Yikes!

> It's worth keeping in mind that proof of domain ownership doesn't have to depend on the DNS; you can also just ask registrars directly, via something like RDAP. If we're going to keep pinning Internet trust to name ownership, there's probably no reason we have to involve the DNS at all.

Do you think this would be useful? I'm not part of the Let's Encrypt team anymore, but I still know almost all of them (but, to my knowledge, nobody from any registrars).

I've kind of wished for something like this in the past (on the basis that it would obviate other kinds of attacks against the domain validation process, such as routing or DNS attacks), but I'm not sure what the user interaction flow would look like, or whether it could be made compatible with Let's Encrypt's desire to automate almost all certificate issuance and renewal steps.

It seems like maybe the registrars would have to give out proof-of-ownership RDAP challenge API credentials (that a server could use to ask the registrar to serve a particular value via RDAP?).
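(For anyone who hasn't poked at RDAP: the data registrars and registries already publish is easy to query today, e.g. through the rdap.org redirector --

    $ curl -sL https://rdap.org/domain/example.com

-- though a proof-of-ownership challenge would obviously need something beyond this public, read-only view.)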


From an absolute security perspective, I'm not sure I see how RDAP is much better than a DNS/DNSSEC-based solution, but if you can make RDAP work, you can get most (maybe more!) of the purported benefit of DNSSEC (vis a vis the WebPKI) with a tiny number of deployments, compared to the billions (I think, if you do the math over, say, 10 years, assuming wide deployment --- which won't happen, but, arguendo --- you can get there) we'll spend on DNSSEC...


I mean, given how just about all DNSSEC domains are bootstrapped through domain registrar interfaces and their APIs (which have been compromised in attacks before), that might not be the best idea. As a counterpoint, I think that despite its flaws, removing any form of certificate/key pinning from browsers was a mistake, rather than learning from said issues (i.e. maybe provisioning pinning of the public keys through an HTTP header isn't the best idea).


The contents of the root.zone have always been available by means other than DNS queries. That is to say, one does not need DNS, or encrypted DNS, to retrieve all the DNS data that the root.zone contains. For example, the data can be retrieved by FTP or HTTPS, the latter being a TLS-encrypted stream of DNS data preceded by an HTTP header.

    curl https://192.0.32.9/domain/root.zone

If there really is serious interest (I seriously doubt it) in encrypting DNS queries sent to authoritative servers, I would be happy to start a TLD that accepts encrypted UDP DNS packets and returns encrypted UDP DNS packets, using CurveDNS, a time-tested implementation of DNSCurve. DNSCurve is per-packet (individual query) encryption, not the same as "DoT" or "DoH". Whether this hypothetical TLD gets listed in ICANN's root.zone is of little consequence; there are easy workarounds.
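For the curious: DNSCurve needs no new RR types because the server's public key rides in the NS hostname itself -- the name starts with "uz5" followed by a base32 encoding of the key. Roughly like this, with a placeholder key:

    example.  IN  NS  uz5<base32-encoded-server-public-key>.ns.example.net.

A DNSCurve-aware resolver sees the uz5 label and encrypts its queries to that server; everything else just treats it as an ordinary delegation.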


Wait, what? httpS with a raw IP address. Where does the TLS certificate come from?

Curl has the same complaint that I do:

    $ curl https://192.0.32.9/domain/root.zone
    curl: (60) SSL: no alternative certificate subject name matches target host name '192.0.32.9'


The IP address for that FTP (and now HTTP) server changed so infrequently over the years, on the order of decades, that in the past IANA/ICANN published advance notice to the public before changing it. (A non-DNS way of changing an IP address.) It is probably hardcoded into lots of DNS software, as this address is where root.hints is served. This is how ICANN DNS is bootstrapped. From a single IP address. There are a few IP addresses I have memorised, 198.41.0.4, 192.5.6.30 and this one. Yes, I still use FTP to fetch the root.zone.

In the days of ye olde internet it used to be called "ftp.internic.net". Today, I believe it is still "internicftp.vip.icann.org".

To make curl work:

    curl -k https://192.0.32.9

or

    curl https://www.internic.net/domain/root.zone

I use curl in examples on HN because so many people love it, but TBH it is not a program I use myself. I use custom programs I wrote for doing HTTP requests. Finer controls than curl.


The certificate is issued to internic.net, www.internic.net, ftp.internic.net, wdprs.internic.net, and reports.internic.net

The IP address isn't directly validated, but you get something even better.


What is the better thing that we get.


Proof that it's internic.net is much better than proof of IP address.


How is it much better. I am honestly curious about the answer.


You can verify it with less preexisting knowledge, and it gives you much more confidence that you're in the right place: a signed certificate for "internic.net" says more than one for "yes, this is the IP you typed, hope you typed the right one".

And it can't go out of date as easily. Like you said, the IP changes sometimes. internic.net doesn't change.


I have the IP address memorised. Is that what you mean by "preexisting knowledge". How does a certificate give me more confidence that I am in the right place. ICANN could choose any name it wants for this IP address. It might cease to use "www.internic.net". That name is only a CNAME at this point. If I look up the actual name "internicwww.vip.icann.org" it does not even return this IP address. It returns 192.0.47.9. Both addresses serve via FTP as well as HTTP. The only thing that gives me confidence that 192.0.32.9 is still the "right" address is 1. it is in a block that remains registered to ICANN and 2. the server continues to offer root.zone, arpa.zone and root.hints.

Am I understanding this correctly. You are concerned that the IP address might change. As I said, if that happened, they would not change it without notifying the public in advance. This IP address is used to bootstrap DNS. Thus, no one should need DNS to find it. AFAIK, outside of EV certificates, CAs rely on domain name registration as their "verification" mechanism. Seems like one has to trust the DNS in order to trust a CA. And why trust a CA.

I am quite certain I will be dead before this IP address ever changes again. It used to be 208.77.188.26. This is going to be the "right" IP address for the foreseeable future, TLS cert or not. I use FTP to get the root.zone, not TLS. Verisign's zone file access program used to offer .com and .net zone files only via FTP. Even if you can use TLS to get them now, I'll bet you can still use FTP.


> How does a certificate give me more confidence that I am in the right place.

> Seems like one has to trust the DNS in order to trust a CA. And why trust a CA.

You're arguing that certificates are useless?

They're not, because you might be on a hostile network, and it's much easier to attack one person than to attack domain verification.

And even if certificates are 99% useless, that doesn't affect my argument about which kind of certificate is better.

> You are concerned that the IP address might change.

That was only one of the things I said. Other than hostile networks, you might make a typo, and there are various reasons you might not immediately notice that you got the wrong file -- a nice verifiable "internic.net" label helps with that.


I never said certificates are "useless". I am commenting only on this one IP address and one specific use of it, downloading zone files. How do you extrapolate that to be a general statement about certificates.

In this case you are downloading a file that is served at a number of other known, unchanging IP addresses, "root servers." Even more, the RRs in the file have been signed, "DNSSEC".

All I am saying is that a "bare IP address" in this case is still useful, even without a domain name and certificate.


> I never said certificates are "useless". I am commenting only on this one IP address and one specific use of it, downloading zone files.

I meant useless in this scenario. So same.

> All I am saying is that a "bare IP address" in this case is still useful, even without a domain name and certificate.

What? Then we don't disagree at all. You need to reread my earlier comments. I said a certificate with a name is better than a certificate with an IP. I never said anything about the value of the IP address itself, only what's validated.


"I said a certificate with a name is better than a certificate with an IP."

I think that's debatable. IMO, it depends in part on the perceived value of the ICANN "domain name business" as some sort of vetting mechanism. In this case the party being vetted is ICANN itself. Although they did pay for "EV".

https://censys.io/ipv4/192.0.32.9/raw

Does Globalsign still offer certificates that are tied to IP addresses.

https://support.globalsign.com/customer/portal/articles/1216...

Here is a question for the TLS certificate fanatics: Why do CAs provide plain-HTTP (not HTTPS) URLs to their CA certificates in the certificates they sell. For example,

http://cacerts.digicert.com/DigiCertTLSRSASHA2562020CA1.crt

I suspect it is a bootstrapping issue. At the top of the chain there is some notion of implicit trust. No different than with DNS. Trust should be decided by the end user, not by developers, nor by some self-appointed "authority".


I think that DoH has basically turned DNSCurve into a dead letter design. There's a Betamax-vs-VHS level argument that DNSCurve is better, but DoH is what's going to be deployed.


DoH will use TLS.

TLS 1.3 mandates use of X25519, Ed25519, X448, and Ed448.

Who is responsible for designing X25519, Ed25519, X448 and Ed448 and introducing these algorithms to TLS developers. The same person who designed and introduced DNSCurve. More diplomatically, a team led by the same author as DNSCurve.


I have no idea what you're trying to say here. DNSCurve is not DoH, nor is TLS, nor is TLS the same as Curve25519. IIRC, Bernstein designed a secure transport of his own, for an OS project at UIC. It was not TLS 1.3.


I did not say DNSCurve is DoH, DNSCurve is TLS nor that TLS is the same as Curve25519. I said that DoH relies on TLS and TLS 1.3 mandates cryptography designed by the same person who designed DNSCurve. I am not sure what point you are trying to make. Constructing a straw man, perhaps. We are each stating facts. The reader must draw their own observations/conclusions.

The observation I make from the facts is that while DoH may win a "popularity contest" over DNSCurve (VHS vs Betamax) in terms of what most people will use, TLS, and therefore DoH, nonetheless is or soon will be relying on the work of the author of DNSCurve. Whether anyone else besides me thinks this is noteworthy, I have no idea. I mean, the author could have just intended the work to be used in DoT or whatever was the current trend (VHS), but the fact that he demonstrated how it could be used in a different way (Betamax), encrypting each packet, to me that is not a coincidence. It was not meant to be ignored, IMO.

As an end user, I am not interested in what is popular, I am interested in what is best. But that's only me. Readers can decide for themselves.


In case it isn't clear from reading the article PDF or this HN thread, DNSSEC doesn't encrypt DNS. If you want encrypted DNS, DNSSEC isn't the answer, because DNSSEC doesn't encrypt DNS. The main thing DNSSEC does is create outages.


> The main thing DNSSEC does is create outages.

It was not that long ago that the same thing was being said about TLS.

I like my DNS to be validated. I don't see what the big problem with that is. Just don't enable the checkbox in your DNS resolver if you hate it that much.


First, TLS outages pop a warning box up on your screen, and DNSSEC outages make your site entirely disappear from the Internet. So that's a difference.

But the bigger difference is, TLS gets you an encrypted secure channel between the client and the server, and DNSSEC gives you... a signed name lookup.

DNSSEC's risks are larger, and the rewards far smaller.


A TLS outage shows all of your visitors a big, red warning message saying "Attackers might be trying to steal your information from example.com (for example, passwords, messages, or credit cards)". Such a warning is a great way of scaring away any business's visitors for good.

A DNSSEC outage shows a message saying "this site cannot be reached at the moment, try again later". I'd much rather have people see the latter than the former.

Signed DNS lookups are quite important for any use of DNS for domain metadata storage, for example in systems like DKIM or SPF. Neither is encrypted, neither is signed in any other way, yet together they control whether you can send email at all, or whether your server will get (near) permanently banned from Outlook's spam filters. There are also various PTR records for auto-configuration of various protocols that could easily redirect traffic to other hosts if a victim were served a spoofed version.
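Both of those live in completely ordinary TXT records, roughly like this (the selector and DKIM key are placeholders):

    example.com.                  IN TXT "v=spf1 mx -all"
    mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"

Anyone who can spoof those answers to a receiving mail server gets to decide whose mail looks "authentic", which is why having them signed matters.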

We might have abandoned DANE, but there's still important information in DNS. Until we abandon DNS somehow (and I shudder to think what the alternative would be with the way the internet is ruled by Google and Facebook right now), I disagree that the risks are far smaller as you say.


A DNSSEC outage does not show a message that "the site cannot be reached at the moment, try again later". It shows the same message that you'd get if the site doesn't exist at all, because that's the result of a failed DNSSEC resolution: the site doesn't exist.


> First, TLS outages pop a warning box up on your screen, and DNSSEC outages make your site entirely disappear from the Internet. So that's a difference.

Meh. If you handle your site correctly, you have at least HSTS, in which case, TLS errors are also game-over. If you don't have HSTS, well I have bad news for you.

The other point stands though (risks are higher and rewards lower for DNSSEC).


> TLS gets you an encrypted secure channel between the client and the server, and DNSSEC gives you... a signed name lookup.

I respectfully disagree with that second part. The risks for DNSSEC/DANE might be higher, but the rewards are bigger too.

TLS gives me a secure channel only when I connect to the right (i.e. expected) server. A TLS encrypted and validated connection to a wrong party is the threat here.

As a user, I don't know which CA is the right one for any given domain. And Chrome only caches a small subset. Otherwise we wouldn't need CAs or DANE at all; self-signed certificates would suffice ;-)

Either I have to trust that there are vigilant parties monitoring all CT logs for fake certificates, or my user agent has to do that for every connection, blocking when it finds duplicate certificates.

With DNSSEC and DANE, my agent fetches the address and CA for the site and validates these against the TLS handshake.
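For concreteness, the DANE record my agent would fetch looks roughly like this ("3 1 1" meaning DANE-EE, SubjectPublicKeyInfo, SHA-256; the hash is a placeholder):

    _443._tcp.example.com.  IN  TLSA  3 1 1 <sha256-of-the-server-SPKI>

    $ dig +dnssec TLSA _443._tcp.example.com

and the +dnssec part is what lets the agent check the signature chain before trusting the record.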

From there, HSTS will protect future lookups so it becomes a TOFU issue.

Besides, the middle-box problem is being resolved with DoH over TLS 1.3, isn't it?


It would be great if we could solve the problem by backing up and addressing a few preliminary things first. Like, DNS is a very old protocol, and it's a good protocol. But will we keep using it this way 100 years from now? It may be counter-productive to only focus on backwards-compatible fixes.

Domain registration and name serving are pretty similar, yet we treat them very differently. For both security reasons and functional reasons, it would be nice if we had one unified way to manage them both.

Other services depend greatly on DNS names, like PKI. It would be very advantageous, again for security and functional reasons, if there were better (not necessarily tighter, but better) integration between the two.

Secure connections all pretty much share the same properties. Virtually every system you can think of can be thought of as one that (at some point) requires authentication, authorization, encryption, and data integrity. So, perhaps there is a unified way we can provide all of this, the same way TCP and UDP provide a unified way of transporting either streams or datagrams. Maybe even making them OS primitives the way the TCP/IP stack is today [didn't used to be!].

If we step back and start from first principles, and design protocols that provide all the functionality we know we need (now and in the future), we could start getting ready for computing in the next century and beyond. It doesn't have to be a pipe dream, especially if we start building support for experimental protocols into devices today.


> encrypted UDP DNS packets and returns encrypted UDP DNS packets, using CurveDNS

Somebody ought to tell the root zone operators about DJB's DNSCurve. Evidently from this PDF they are not aware of it, since it does not suffer from any of the three issues they enumerate as blockers (it is UDP-based, connectionless, and requires no server state):

https://en.m.wikipedia.org/wiki/DNSCurve

The only additional overhead relative to plain DNS is two ECC operations and two salsa20 operations per query. Hardware capable of doing this at line rate is really not a budget-buster for them -- if you can't afford four crypto ops per packet you ought to reconsider whether you should be running one of the root servers.


DNSCurve requires online decryption as the authoritative server needs to be able to unwrap incoming queries using its private key. Similarly, the server has to generate a whole response from scratch every query, whereas DNS was originally designed so that you can just echo the query header and tack on a cached answer.

Contrast that with DNSSEC, where you can sign zones offline. Not only is this more secure as hacking the server doesn't expose your private key, but you can also keep serving the same immutable data. It really doesn't matter how fast Curve25519 and Salsa20 are, they're significantly more work than just spitting out the same answer to everybody.
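That offline property is visible in the tooling: with BIND's utilities, for instance, the whole zone can be signed on a machine that never faces the network and only the signed file is shipped out (paths and names here are just illustrative):

    # run on an offline signer; the private keys stay in /keys
    $ dnssec-signzone -o example.com -K /keys example.com.zone
    # produces example.com.zone.signed, which is what the public servers load

The authoritative servers then serve static, pre-signed data and hold no private key at all.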

These benefits are easy to dismiss for leaf zones, but mean everything when you're serving the root or top-level zones in an increasingly hostile environment.


DNSCurve is something different from DNSSEC. People like to argue for one or the other but there is no rule that both cannot be used at the same time. They each do different things.

The "wrapping" and "unwrapping" in DNSCurve is done by a forwarder, a separate server. People have written such forwarders many years ago. No DNS software needs to be rewritten.


> DNSCurve is something different from DNSSEC.

Yes, I know. I was contrasting their use of crypto and the degree to which they fit traditional name server architectures.

> The "wrapping" and "unwrapping" in DNSCurve is done by a forwarder, a separate server

Whether the wrapping is done on a [reverse] forwarder or integrated into the authoritative name server is entirely irrelevant. (Perhaps you were thinking of the querier, which would be even more irrelevant from the perspective of root and TLD servers.)

I like DNSCurve. But I was contesting the point that DNSCurve was effectively zero cost. It definitely is not zero cost, neither in terms of CPU nor operationally. The cost may be de minimis in most contexts, but root and TLD zones are certainly the exception.


I misread your comment. Sorry. Thank you for clarifying.

Has anyone, e.g., at Verisign, ever debated this computational/operational cost of using DNSCurve at a large TLD.


Stub resolver = your client device.


Given the amount of crap traffic the root servers get I can see why they’d be hesitant to just turn on DNSSEC. “Let’s see you all stop sending us stupid shit queries first” sounds entirely reasonable.


The PDF is about DNS Encryption not DNSSEC. Some (all?) of the root servers already support DNSSEC.


All, since 2010.

https://www.iana.org/dnssec/archive

Although this is a big "just" because of the amount of fanfare and (literal) ceremony, DNSSEC support on the server side is just about signing zones and being willing to serve the associated RR types.
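You can see both halves of that directly, e.g.:

    $ dig DNSKEY . @a.root-servers.net

returns the root's signing keys (the zone has been signed since 2010, per the archive above), and the signatures themselves come back if you add +dnssec to any query against a root server.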




