Hacker News

Encrypted DNS is a good idea, I just wish all of the basic protocols were getting encryption in-place, instead of being reinvented over HTTPS.

First HTTP/3 (née QUIC) reinvented TCP and ports over HTTPS, now DOH is reinventing DNS over HTTPS... Sigh. Both of these would be better off evolving and modernizing their respective existing protocols. And HTTP is a client/server protocol with a _heavy_ handshake cost; why are we using it for a quick one-off request like DNS?




I don't disagree.

But the problem began decades ago when sysadmins started using firewalls to control what employees could access. During the early 2000s I was involved in moving a lot of apps that used bespoke ports over to ports 80/443, just to make sure our apps and services didn't have any client hiccups due to (rightly so?) belligerent sysadmins.

All this has really done is make sysadmins' lives harder because of packet inspection. So all app developers, and now infrastructure solution devs, must run through 443, otherwise the uptake wouldn't happen. The internet is effectively running on one port nowadays.


> But the problem began decades ago when sysadmins started using firewalls to control what employees could access.

If the sysadmins were told "We don't want people doing X on company time: stop it.", that is hardly the sysadmins' fault.

Do you think IT / Helpdesk wants the drama of extra calls because certain things are blocked? That they're sitting in their cubicles twirling their sinister mustaches thinking of ways to make people's lives more difficult?


Honestly, I've met a handful of IT bureaucrats who enjoyed the power trip of holding up dozens of people on a multimillion dollar project over an outdated policy.

These people are the exception, not the norm, but it only takes one.


> holding up ... over an outdated policy

https://en.wikipedia.org/wiki/Slowdown#Rule-book_slowdown

When used carefully, it can be a very effective tactic.


The Bastard Operator From Hell! :-)


I did a talk on this phenomenon once. It's a form of Red Queen's Race: admins block stuff, protocols not blocked grow to subsume blocked functionality, repeat.

In the end we will have a virtualized Internet 100% encrypted and tunneled over port 443, rendering the physical IP header worthless legacy baggage.

To IT admins: the more you tighten your grip, the more protocols will slip through your fingers.


It's the consequence of a design flaw in IP: any information meant for consumption only by the final host should have been encrypted.

Nobody knew better at the time, but now we do. Fortunately, it will end at HTTPS, and it's just a matter of optimizing it well enough (in HTTPS4, or 5, or however many it takes) to make it behave like an internet-level protocol and replace IP.


The early 2000s was also when email got encryption between the mail client and the server. Belligerent sysadmins may try to block all ports except 80 and 443, but users get a bit upset when 143, 993, 25, or 465 get blocked and mail stops working.


Why not extend the existing protocol and keep it on port 53, which DNS already uses?


Middleboxes like to drop any data they don't understand. No doubt there is one out there checking that all data on the DNS port looks like DNS, and when encrypted DNS is used it gets dropped. Using HTTPS for everything means middleboxes can't tell what is going on and just allow it.


DNSCurve is specifically designed to look like UDP DNS. The problem isn't that the packets don't validate as DNS, it's that some of the middleboxes modify them, or don't send them to the appropriate server at all and try to answer the query themselves. The protocol then detects that MitM attack and rejects the response -- as well it should, but detecting the attack doesn't get you to working DNS.

HTTPS can't save you from that, though, because the same networks that modify DNS queries to third party DNS servers also do things like require you to install a root certificate and then MitM your TLS connections, and drop connections that don't accept their root certificate.

In both cases it's the same thing. You can detect the attack but that doesn't get you through.


Because some companies run with an effective

    iptables -t filter -A FORWARD -p udp --dport 53 -j DROP
and prevent anything other than their own internal resolvers from accessing outside DNS services, especially now that encrypted SNI is moving more web filtering to the DNS layer.


But that one is OK. DNS over HTTPS doesn't exist to circumvent enterprise content filtering (hence why you can signal via a DNS entry that you don't want it to run, making it vulnerable to downgrade attacks).

It's easier to make the org's internal recursive resolver "do the right thing" than all the clients, too.


> DNS over HTTPS doesn't exist to circumvent enterprise content filtering

Replace "enterprise" with "ISP" or "nation state", and that's closer to what the DoH aims are.


Sure, if you have internal-only sites you pretty much have to, and this is used even in small and medium shops.

DoH will cause so much drama when internal sites won't resolve anymore.


It's perfectly OK to let your internal sites be resolved outside your network? abc.internal resolving to 192.168.0.101 is fine?


How do you do PTR records? (What hostname is 10.12.43.122?) How do you do split DNS? (Inside my network I want it pointing to my internal server.)

Now you could argue that the fault in both of those cases lies with IPv4 and the NATting that comes with it.
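For reference, the PTR name a resolver has to answer for an address like that can be derived mechanically; a quick Python sketch (using the standard-library ipaddress module):

```python
import ipaddress

# A PTR lookup for an IPv4 address queries the reversed octets under
# in-addr.arpa. A public DoH resolver has no records for RFC 1918 space,
# so only an internal resolver can usefully answer these.
addr = ipaddress.ip_address("10.12.43.122")
print(addr.reverse_pointer)  # 122.43.12.10.in-addr.arpa
print(addr.is_private)       # True: RFC 1918, internal-only
```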


I don't get the PTR problem. My internal sites already have PTR records. My DNS server sits facing the internet, PTR records too.

But yes, if you have a domain that is supposed to resolve differently depending on the source IP, that'd be... interesting. Now you need a website that displays different content depending on source IP, or something like that...


If your machine is pointing at DoH on the internet, how does a reverse lookup for an RFC 1918 address work?

One issue with DoH is that it's inconsistent: some applications use one resolver, some use another. The network (via DHCP) can't hint at which resolver to use.


I'd prefer not to. We work for a number of different clients, and it's common to have pg11-clientname.* and node-clientname.* ...

I have over 200 domains internally that I don't have externally.

But it's also about flexibility. Often when developing, we want to redirect certain domains to our endpoints, for testing or whatever, and do that in DNS so that not everyone has to edit hosts files, etc.

We have pfSense internally, with a number of people who can and do add/remove hosts/domains on a daily basis, but we host our external DNS on CF, and only 2 people have access to that.


What's the risk?


My home ISP, for example, will replace anything you request on port 53 with their own DNS results, with ads, and with sites blocked through incompetence.


Because some totalitarian governments block "illegal" websites like wikipedia.org.


HTTP had persistent connections ever since HTTP/1.1, and HTTP/2 supports parallel streams within the same persistent connection.

HTTP/3 is still an IETF draft, but is already being deployed by pretty much all big sites. It supports zero-round-trip (0-RTT) requests even after the client's IP has changed.

DoH can essentially match the latency of traditional UDP queries, while also encrypting the channel and traversing any gateways that let HTTPS through.


You can't get 0-RTT with TCP because the TCP handshake itself isn't 0-RTT. HTTP/3 accomplishes it by switching to UDP -- but then you can't get through the legacy middleboxes that only allow HTTPS over TCP port 443.


> First HTTP/3 (née QUIC) reinvented TCP and ports over HTTPS

This doesn't sound right. QUIC is built on UDP and TLS 1.3, not HTTPS. HTTP/3 doesn't go over itself.

TLS 1.3 also makes the handshake significantly cheaper with the option for 0-RTT resumptions.

DNS isn't really being reinvented either: it's the old wire format with a different framing.
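As a rough illustration of the "old wire format, different framing" point (the resolver hostname here is a placeholder, and this assumes RFC 8484's GET framing): the DoH payload is an ordinary DNS query packet, just base64url-encoded into a URL parameter.

```python
import base64
import struct

def build_dns_query(name: str) -> bytes:
    """Build a minimal DNS query in the classic RFC 1035 wire format:
    12-byte header, QNAME as length-prefixed labels, QTYPE=A, QCLASS=IN."""
    # ID=0 (RFC 8484 recommends a fixed ID for cacheability), flags=RD,
    # QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        struct.pack("B", len(label)) + label.encode()
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

query = build_dns_query("example.com")
# RFC 8484 GET framing: the same bytes, base64url-encoded without padding.
dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode()
url = f"https://dns.example/dns-query?dns={dns_param}"
```

The packet that goes over the wire is byte-for-byte what a UDP resolver would receive; only the transport and framing change.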


In my experience, it's not about the protocols; it's about all the existing middleboxes blocking/intercepting all traffic on non-443 ports. UDP traffic is notoriously blocked by routers. The only viable option being: HTTPS.


> The only viable option being: HTTPS.

But why not DoT on port 443?

The story about office workers, sysadmins, and service providers trying to make life harder for each other and complicating things for everyone seems unfortunate, but makes sense, yet I don't quite see why HTTP has to be involved.


HTTP/2 doesn't add significantly more overhead than bare TLS, in my experience.

Also, using DoT on 443 would mean the traffic isn't distinguishable for routing purposes at the edge. By making DNS on 443 DoH, all HTTP/2 routers/load balancers/etc. can work the same regardless of whether it's a web request or a DNS request, because then they are both web, i.e. HTTP, requests.
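A minimal sketch of that point (the backend names and dispatch function are hypothetical): to an HTTP-aware edge, a DoH query is just another request, routed by path and media type like any other.

```python
# Hypothetical edge dispatch: RFC 8484 DoH uses the
# application/dns-message media type, conventionally at /dns-query,
# so an ordinary HTTP routing rule is all the edge needs.
def pick_backend(path: str, content_type: str = "") -> str:
    if path == "/dns-query" or content_type == "application/dns-message":
        return "doh-backend"
    return "web-backend"

print(pick_backend("/dns-query"))   # doh-backend
print(pick_backend("/index.html"))  # web-backend
```

A DoT stream on 443, by contrast, would need the edge to sniff the protocol inside the TLS session before it could route it.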


Probably because in your corporate/controlled environment you're supposed to use the company DNS for a good reason.


If you're not using a VPN, DNS is actually pretty easily spoofed. DoH and DoT make that much harder.


> First HTTP/3 (née QUIC) reinvented TCP and ports over HTTPS,

Well, multi-streaming and multi-homing are part of SCTP, but no one seems to have bothered implementing it.

> now DOH is reinventing DNS over HTTPS... Sigh.

DoT was already invented when the Web folks decided to go and invent DoH:

* https://en.wikipedia.org/wiki/DNS_over_TLS


DNSCrypt uses the standard DNS mechanism, and just encrypts the content beforehand. Quick one-off requests. No handshake cost. It uses UDP, or TCP for large responses, exactly like standard DNS. In fact, it can even share port 53 with standard DNS. But can be configured to use TCP/443 if you're on a network where this is the only port that works.


There's DoT as well: DNS over TLS, which doesn't use HTTP.

It's on your Android phone now!


The big concern with DoT is that firewalls are more likely to block the new port on outbound requests than DoH requests over 443. I think the main concern here is that it would slow uptake of the new protocol similar to how long IPv6 has taken to get fully supported.


I feel that this everything over HTTPS trend is a scam, I don't know how or why yet, I just feel it.

Perhaps it makes the surveillance tooling more uniform and easier to develop/maintain.

The argument that HTTPS is always available and not filtered is just wrong, anyone who has experience working with large corporations knows that clients use local * certificates and everything is decrypted by the firewall and then re-encrypted. Making HTTPS slow af and sometimes plain broken.

I guess someone will implement a full HTTPS stack in js and announce HTTPS over HTTP to go around this "problem".


> The argument that HTTPS is always available and not filtered is just wrong, anyone who has experience working with large corporations knows that clients use local * certificates and everything is decrypted by the firewall and then re-encrypted. Making HTTPS slow af and sometimes plain broken.

Yeah, and implementing everything over HTTPS is just going to make things harder or impossible to filter. There's a reason that large corporations block certain things, so if those things get reimplemented over HTTPS, that will just be a security hole. It would be nice if an actual encrypted DNS protocol were created.





