Encrypted DNS is a good idea; I just wish all of the basic protocols were getting encryption in place, instead of being reinvented over HTTPS.
First HTTP/3 (née QUIC) reinvented TCP and ports over HTTPS, now DoH is reinventing DNS over HTTPS... Sigh. Both of these would be better off evolving and modernizing their respective existing protocols. And HTTP is a client/server protocol with a _heavy_ handshake cost; why are we using it for a quick one-off request like DNS?
But the problem began decades ago when sysadmins started using firewalls to control what employees could access. During the early 2000s I was involved in moving a lot of apps that used bespoke ports over to port 80/443, just to make sure our apps and services didn't have any client hiccups due to (rightly so?) belligerent sysadmins.
All this has really done is make sysadmins' lives harder because of packet inspection. So all app developers, and now infrastructure solution devs, must run through 443, otherwise uptake wouldn't happen. The internet is effectively running on one port nowadays.
> But the problem began decades ago when sysadmins started using firewalls to control what employees could access.
If the sysadmins were told "We don't want people doing X on company time: stop it.", that is hardly the sysadmins' fault.
Do you think IT / Helpdesk wants the drama of extra calls because certain things are blocked? That they're sitting in their cubicles twirling their sinister mustaches thinking of ways to make people's lives more difficult?
Honestly, I've met a handful of IT bureaucrats who enjoyed the power trip of holding up dozens of people on a multimillion dollar project over an outdated policy.
These people are the exception, not the norm, but it only takes one.
I did a talk on this phenomenon once. It's a form of Red Queen's Race: admins block stuff, protocols not blocked grow to subsume blocked functionality, repeat.
In the end we will have a virtualized Internet 100% encrypted and tunneled over port 443, rendering the physical IP header worthless legacy baggage.
To IT admins: the more you tighten your grip, the more protocols will slip through your fingers.
It's the consequence of a design flaw in IP: any information meant for consumption only by the final host should have been encrypted.
Nobody knew better at the time, but this time we know. Fortunately, it will end at HTTPS, and it's just a matter of optimizing it well enough (in HTTPS 4, or 5, or however many versions it takes) to make it behave like an internet-level protocol and replace IP.
The early 2000s were also when email got encryption between the mail client and the server. Belligerent sysadmins may try to block all ports except 80 and 443, but users get a bit upset when 143, 993, 25 or 465 get blocked and mail stops working.
Middleboxes like to drop any data they don't understand. No doubt there is one out there checking that all data on the DNS port looks like DNS, so when encrypted DNS is used it gets dropped. Using HTTPS for everything means middleboxes can't tell what is going on and just allow it.
DNSCurve is specifically designed to look like UDP DNS. The problem isn't that the packets don't validate as DNS, it's that some of the middleboxes modify them, or don't send them to the appropriate server at all and try to answer the query themselves. The protocol then detects that MitM attack and rejects the response -- as well it should, but detecting the attack doesn't get you to working DNS.
HTTPS can't save you from that, though, because the same networks that modify DNS queries to third party DNS servers also do things like require you to install a root certificate and then MitM your TLS connections, and drop connections that don't accept their root certificate.
In both cases it's the same thing. You can detect the attack but that doesn't get you through.
# drop all forwarded plain-DNS traffic on UDP/53
iptables -t filter -A FORWARD -p udp --dport 53 -j DROP
And prevent anything other than their own internal resolvers from accessing outside DNS services, especially now that encrypted SNI is moving more web filtering to the DNS layer.
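Spelled out a bit further, that policy tends to look something like this (10.0.0.53 is a made-up address standing in for the internal resolver):

# let the internal resolver out to do its upstream lookups
iptables -t filter -A FORWARD -p udp -s 10.0.0.53 --dport 53 -j ACCEPT
iptables -t filter -A FORWARD -p tcp -s 10.0.0.53 --dport 53 -j ACCEPT
# everything else trying to reach outside DNS gets dropped
iptables -t filter -A FORWARD -p udp --dport 53 -j DROP
iptables -t filter -A FORWARD -p tcp --dport 53 -j DROP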
But that one is OK. DNS over HTTPS doesn't exist to circumvent enterprise content filtering (which is why you can hint, via a DNS entry, that you don't want it to run, making it vulnerable to downgrade attacks).
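For example, Firefox's canary-domain mechanism works roughly this way (with 10.0.0.53 as a hypothetical internal resolver): if the resolver answers the canary name with NXDOMAIN or withholds its records, Firefox keeps using plain system DNS instead of its default DoH, which is exactly the downgrade path mentioned above.

# the network signals "don't use DoH here" by filtering this name
dig use-application-dns.net @10.0.0.53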
Easier to make the org's internal recursive resolver "do the right thing" than all the clients, too.
I don't get the PTR problem. My internal sites already have PTR records. My DNS server sits facing the internet, with PTR records too.

But yes, if you have a domain that is supposed to resolve differently depending on the source IP, that'd be... interesting. Now you need a website that displays different content depending on source IP, or something like that...
If your machine is pointing at DoH on the Internet, how does a reverse lookup for an RFC 1918 address work?
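Concretely, with a made-up internal address and 10.0.0.53 standing in for an internal resolver:

# a public resolver (reached over DoH or plain UDP, same data) has no view of your
# internal reverse zones, so this returns nothing useful
dig -x 10.0.0.15 @1.1.1.1
# only the internal resolver knows the internal PTR records
dig -x 10.0.0.15 @10.0.0.53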
One issue with DoH is that it's inconsistent: some applications use one resolver, some use another. The network (via DHCP) can't hint at which resolver to use.
I'd prefer not to. We work for a number of different clients, and it's common to have pg11-clientname.* and node-clientname.* ...

I have over 200 domains internally that I don't have externally.

But it's also flexibility. When we develop, we often want to redirect certain domains to our endpoints, for testing or whatever, and do that in DNS so that not everyone has to edit hosts files etc.

We have pfSense internally, with a number of people who can and do add/remove hosts/domains on a daily basis, but host our external DNS on CF, and only 2 people have access to that.
HTTP has had persistent connections ever since HTTP/1.1, and HTTP/2 supports parallel streams within the same persistent connection.
HTTP/3 is still an IETF draft, but is already being deployed by pretty much all big sites. It supports zero-round-trip (0-RTT) requests even after the client's IP has changed.
DoH can essentially match the latency of traditional UDP queries, while also encrypting the channel and traversing any gateways that let HTTPS through.
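As a rough illustration of how lightweight a one-off DoH lookup can be, here's a query against Cloudflare's resolver using its JSON endpoint (the RFC 8484 wire format uses application/dns-message with a base64url-encoded query instead, but the shape is the same):

# a single HTTPS GET that returns a DNS answer as JSON
curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=example.com&type=AAAA'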
You can't get 0-RTT with TCP because the TCP handshake itself isn't 0-RTT. HTTP/3 accomplishes it by switching to UDP -- but then you can't get through the legacy middleboxes that only allow HTTPS over TCP port 443.
In my experience, it's not about the protocols; it's about all the existing middleboxes blocking/intercepting traffic on non-443 ports. UDP traffic is notoriously blocked by routers. The only viable option left: HTTPS.
The story about office workers, sysadmins, and service providers trying to make life harder for each other and complicating things for everyone seems unfortunate but makes sense; still, I don't quite see why HTTP has to be involved.
HTTP/2 isn't significantly more overhead than TLS in my experience.
Also, using DoT over 443 would mean that traffic isn't distinguishable for routing purposes at the edge. By running DNS on 443 as DoH, all HTTP/2 routers/load balancers/etc. can work the same way regardless of whether they're handling a web request or a DNS request, because both are then web, i.e. HTTP, requests.
DNSCrypt uses the standard DNS mechanism, and just encrypts the content beforehand. Quick one-off requests. No handshake cost. It uses UDP, or TCP for large responses, exactly like standard DNS. In fact, it can even share port 53 with standard DNS. But can be configured to use TCP/443 if you're on a network where this is the only port that works.
The big concern with DoT is that firewalls are more likely to block its dedicated port (853) on outbound requests than DoH requests over 443. I think the main worry is that this would slow uptake of the new protocol, similar to how long IPv6 has taken to get fully supported.
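For reference, a DoT lookup with kdig (from Knot DNS) goes to that dedicated port, which is exactly what makes it easy for a firewall to single out:

# DNS over TLS to a public resolver on its own well-known port, 853
kdig @1.1.1.1 +tls example.com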
I feel that this everything-over-HTTPS trend is a scam; I don't know how or why yet, I just feel it.
Perhaps it makes the surveillance tooling more uniform and easier to develop/maintain.
The argument that HTTPS is always available and not filtered is just wrong; anyone who has experience working with large corporations knows that clients use local * certificates and everything is decrypted by the firewall and then re-encrypted, making HTTPS slow af and sometimes plain broken.
I guess someone will implement a full HTTPS stack in JS and announce HTTPS-over-HTTP to get around this "problem".
> The argument that HTTPS is always available and not filtered is just wrong; anyone who has experience working with large corporations knows that clients use local * certificates and everything is decrypted by the firewall and then re-encrypted, making HTTPS slow af and sometimes plain broken.
Yeah, and implementing everything over HTTPS is just going to make things harder or impossible to filter. There's a reason that large corporations block certain things, so if those get reimplemented over HTTPS, that will just be a security hole. It would be nice if an actual encrypted DNS protocol were created.