Because applications can't ship OS level support, but want to experiment with it and add it now. That doesn't mean OS-level support can't be a thing, but it's a different level (waiting until the OS vendor gets around to it, or users explicitly installing and setting up tools for it)
This is my beef with the implementation (my understanding may be incomplete, in which case I apologise).
Internal DNS and split-brain DNS aren't catered for without disabling support? I don't want my internal names leaking to the internet, nor are they necessarily the same for external resolvers. Now yes, the latter is a hack, but it's one widely used still today.
The idea is laudable. But it feels hostile. I can disable support, but for how long?
I guess it would be possible to run my own local DNS server that connects to these DoH servers. Does any DNS server support DoH? This could also allow the user to override domains using their /etc/hosts file in case DoH on Firefox doesn't support it.
I'm running my own DNS-over-HTTPS instance at home. I have Apache, with HTTP/2 support, running, some self-signed certificates, and a CGI script that accepts the DoH request and makes a DNS call to my local instance of BIND. I found RFC 8484 quite easy to follow, and I've set network.trr.mode to 4 (use native DNS, but also send DoH queries for testing) and network.trr.allow-rfc1918 to true (so local addresses can be resolved locally).
I do occasional tests with network.trr.mode set to 3 (DoH only), but I seem to have issues resolving GitHub. I haven't looked that far into it.
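For anyone curious what such a CGI relay can look like, here's a minimal Python sketch of the idea (my guess at the setup, not the author's actual script): it reads the RFC 8484 POST body and shovels the raw DNS message over UDP to a resolver on 127.0.0.1:53.

```python
#!/usr/bin/env python3
# Hedged sketch of a CGI-based DoH relay: accept an RFC 8484 POST body
# (application/dns-message) and forward the raw DNS message over UDP to a
# local resolver. The resolver address and names here are assumptions.
import os
import socket
import sys

RESOLVER = ("127.0.0.1", 53)  # local BIND instance (assumed)

def relay(query: bytes, resolver=RESOLVER, timeout=3.0) -> bytes:
    """Forward a wire-format DNS query over UDP and return the raw answer."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(query, resolver)
        answer, _ = s.recvfrom(4096)
        return answer

# CGI entry point: only runs when invoked by the web server,
# which sets CONTENT_LENGTH for POST requests.
if "CONTENT_LENGTH" in os.environ:
    body = sys.stdin.buffer.read(int(os.environ["CONTENT_LENGTH"]))
    answer = relay(body)
    sys.stdout.write("Content-Type: application/dns-message\r\n")
    sys.stdout.write("Content-Length: %d\r\n\r\n" % len(answer))
    sys.stdout.flush()
    sys.stdout.buffer.write(answer)
```

Since the relay just passes opaque wire-format messages through, it doesn't need to understand DNS at all; TLS and HTTP/2 are Apache's job.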
Thanks, I'll have to look it up and give it a read. I'll be honest, I've not read the actual RFC in this instance and pieced together what I know from articles, reported behaviour, etc.

I know it's lazy and I should've done more work. But: burnout.
In that case you are better off running local DNS and using a different subdomain (internal.companyname.com or whatever) for internal DNS entries; the DNS-over-HTTPS query will go out, fail, and then Firefox will fall back to traditional UDP DNS on port 53, hit the local resolver on the LAN, and away you go. It will presumably cause a short delay the first time a host is queried, but after that I assume Firefox is smart enough to cache the result, so unless you have absurdly short TTLs the performance impact should be pretty low.
The positives certainly outweigh the negatives of inconveniencing some IT admins who, as you correctly point out, are implementing a dirty hack anyway.
You completely missed the point of the parent, which is to NOT let internal hostnames out of the network.
> The positives certainly outweigh the negatives of inconveniencing some IT admins who, as you correctly point out, are implementing a dirty hack anyway.
This is a perfect example of the irritating attitude I see from people pushing hostile features like this. Everyone wants their network to operate the way they want, and yet you think you know better than the actual owners of those networks.
You seem to forget that domain name resolution became a problem after the more generic name resolution (i.e. Novell/LAN Manager/NetBIOS). The generic name resolution system used lmhosts, which became hosts, to more easily associate IPs and names. [0]
> Originally these names were stored in and provided by a hosts file but today most such names are part of the hierarchical Domain Name System (DNS).
The lack of trust I mentioned was about ISP provided DNS servers. You don't own your WAN network and the majority of people use the DNS provided by their ISP.
On your own network, if you feel like doing a DNS lookup to what amounts to a public address book is unethical then don't allow arbitrary clients on the network.
If you want to do blocking based on a DNS list, configure your firewall to do that.
Knowing better than the owners is a matter of tradeoff.
There are whole ISPs and even countries (including, shortly, the UK) which mess with DNS requests. Helping the millions of users who are in that situation, and don't even know what DNS is, seems like a net good. As you say, experts can choose to disable it.
As long as they can. The problem with these ideas is that it gets increasingly difficult to work around them. How many hoops do you have to jump through to pcap your own software on your own machines, now that certificate pinning is becoming popular? What happens when someone has the bright idea of implementing certificate pinning for DoH inside browsers, "because security"?
(I could live with the choice between having to somehow acquire Chrome Enterprise Edition vs. switching to Firefox, to have a browser I can control. I'm worried now that Firefox might be turning into Chrome, though.)
If you're implying the porn filter, no, the porn filter has been shelved 'indefinitely' because a) it's against EU law, b) it was May's personal project (she pushed heavily for it when she was Home Secretary, and it became a thing under her PM-ship).
Once Firefox starts to ignore DNS resolvers configured at the OS level, other apps are sure to look at it and think it must be a good idea because Firefox is doing it. Soon there will be a multitude of applications needing this disabled, each in its own unique way.

If the Mozilla Foundation sees this as an issue, they should instead be developing a separate solution that provides it system-wide. If you must bundle it with Firefox, offer to install it at browser installation or upgrade time. Don't install it by default, and certainly don't enable it without user permission.
Here is an example of how one can use DoH "like ping or nslookup". This example uses HTTP POST and cloudflare-dns. For "OS-level" DoH, maybe check out "stubby"; currently I think it only does DoT, but DoH is planned.
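A sketch of such a one-off DoH lookup tool (build_query and doh_lookup are illustrative names, not a published utility): it hand-builds a minimal DNS wire-format query per RFC 1035 and POSTs it to Cloudflare's RFC 8484 endpoint with the application/dns-message media type.

```python
# One-off "nslookup over DoH" sketch: build a wire-format DNS query by hand
# and POST it to Cloudflare's RFC 8484 endpoint. Illustrative only.
import struct
import urllib.request

def build_query(name: str, qtype: int = 1) -> bytes:
    """Wire-format DNS query for `name`; qtype 1 = A record, class IN."""
    # id=0 (as RFC 8484 suggests, for HTTP cache friendliness), RD set,
    # one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def doh_lookup(name: str,
               url: str = "https://cloudflare-dns.com/dns-query") -> bytes:
    """POST the query and return the raw wire-format DNS answer."""
    req = urllib.request.Request(
        url, data=build_query(name),
        headers={"Content-Type": "application/dns-message",
                 "Accept": "application/dns-message"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The answer comes back in the same binary wire format, so a real tool would still need a small response parser (or a library) to print the addresses.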
Because the OS (getaddrinfo(), gethostbyname(), etc.) doesn't implement DoH; it implements an /etc/hosts parser and a DNS-over-UDP client.
I wrote a glibc plugin that implements a caching DoH client for glibc, which can replace the DNS client or fall back to it - https://github.com/dimkr/nss-tls.
The criticism (which you seemed to miss) is that everyone is rushing to implement this at the application level, instead of contributing to get it implemented, once, at the OS level, and having a fix in place for everyone.
> Not to mention that DNS over HTTP is one of the class of features where you might want to override sysadmin policy as a user.
I don’t buy that argument at all.
Why should we special case policies of one internet-protocol over all the others?
Also: implementing/marketing DoH as a way to bypass enterprise control and policies is a surefire way to find it permanently blocked at firewall level in said enterprises.
I.e. your attempt at subverting control won't gain you anything but deserved distrust.
Hi Dima - I'm assuming you're aware of dnscrypt-proxy and wrote nss-tls because you wanted a lighter weight implementation of a subset of dnscrypt-proxy's features on a specific platform (linux/glibc, for example this won't work on linux/musl afaik)? I use dnscrypt-proxy happily but was interested in nss-tls, yet couldn't find a rationale/comparison in the readme.
This is doable on linux/unix through an NSS plugin (and has been linked to in this discussion), but the vast majority of Firefox users are on Windows (and a minority on Android) where this cannot be done as easily.
For the users, 99% of whom live in the self-updating browser these days, this is much better than waiting for an OS patch that they may or may not know how to install.
> At least on Linux, isn't DNS all at the application level anyway? There is no system level DNS lookup
Nearly all applications use the standard library, i.e. getaddrinfo(3) or the old gethostbyname(3) or something that wraps them. Which itself uses the services configured in /etc/nsswitch.conf, one of which is DNS which will in turn query the DNS server(s) configured in /etc/resolv.conf.
You can also have other services configured in nsswitch.conf like "mdns" (multicast DNS for names of devices on the LAN) and "files" for /etc/hosts, or any other name resolution system. The general result is that you can change the settings for the whole system and even add completely new name resolution services (like, for example, DoH) and have substantially everything automatically use them.
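Concretely, the relevant line in /etc/nsswitch.conf typically looks something like this (the exact module list varies by distribution, and mdns4_minimal is only present when something like Avahi is installed):

```
# /etc/nsswitch.conf (illustrative)
hosts: files mdns4_minimal [NOTFOUND=return] dns
```

Each word names an NSS plugin library tried in order, so a DoH module would just be another entry in that list.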
What the parent poster means is that each application does its own DNS lookup separately and independently. The family of functions you linked to, plus the newer getaddrinfo family of functions, is implemented in the C library within each process, not as a system call or as a separate daemon. These functions read the /etc/nsswitch.conf file, load the C library plugins listed there, and call each one in sequence - still within the same process. The most common setting is a variation of "hosts: files dns", which first reads /etc/hosts, then reads /etc/resolv.conf and connects directly to the DNS servers listed there, without using any system level "DNS lookup" daemon (unless you have nscd enabled).
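For what it's worth, you can watch this path from a scripting language too: Python's socket.getaddrinfo is a thin wrapper over the C library's getaddrinfo(3), so the lookup below runs through nsswitch.conf, /etc/hosts, and /etc/resolv.conf inside the Python process itself.

```python
# socket.getaddrinfo wraps the C library's getaddrinfo(3), so this lookup
# goes through the nsswitch.conf machinery inside this very process.
# "localhost" is normally answered by the "files" backend (/etc/hosts),
# without any DNS packet leaving the machine.
import socket

for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "localhost", 80, type=socket.SOCK_STREAM):
    print(family, sockaddr)  # e.g. AddressFamily.AF_INET ('127.0.0.1', 80)
```

Swapping "localhost" for a public name would instead hit the "dns" backend and the resolvers in /etc/resolv.conf, all still within the same process.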
At least on Linux, Go's native resolver follows a sane subset of glibc conventions, like parsing /etc/nsswitch.conf, /etc/resolv.conf, and /etc/hosts [1]. As long as your DNS configuration is defined there, you won't notice much of a difference between Go programs using Go's resolver and programs making glibc library calls for DNS.