DoH /is/ a bad technology on a technical level. On a modern network DNS requests come in pretty much constantly, and I've never seen so many DNS timeouts and slow lookups as when I tried running a DoH proxy for my LAN. The head-of-line blocking of HTTP/TCP is horrible, and my router was running at 100% CPU from all the TLS overhead.
I'm all for authenticated and encrypted DNS but routing it over HTTPS is just a nasty hack.
> Nice. Remember the days when IT professionals would exclaim that this was a bad idea?
It has made some things more difficult. In the old days, when I had problems with a remote IMAP server, I could watch each command and response going over the wire. It made troubleshooting dead simple. When a POP3 mailbox got hung up on a single huge message, you could just telnet in and delete the offending message in a few seconds. It's crazy to suggest that encrypting everything hasn't made things more complicated than they were. It hasn't been an insurmountable problem, and in an age where everyone wants to sell your browsing habits the rewards have been greater than the pain, but it did make things harder.
> I could watch each command and response going over the wire.
AFAIK, Wireshark supports decrypting TLS traffic if you give it the private keys.
> When a POP3 mailbox got hung up on a single huge message you could just telnet in
Use “gnutls-cli” or “openssl s_client” – transparent TLS for your terminal. Both commands also have options for protocols that upgrade via STARTTLS.
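For example, something like this should work (mail.example.com is just a placeholder, and exact option names can differ between versions):

    # direct TLS on the POP3S port
    openssl s_client -connect mail.example.com:995 -quiet

    # upgrade from plaintext via STARTTLS on port 110
    openssl s_client -connect mail.example.com:110 -starttls pop3

    # roughly the same with gnutls-cli
    gnutls-cli --starttls-proto=pop3 --port 110 mail.example.com

Once connected you can type USER/PASS/LIST/DELE by hand just like you used to over telnet.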
For a modern TLS session Wireshark needs the session keys, which have to be exported separately for each connection because they change every time.
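The usual trick for that is a key log file: set SSLKEYLOGFILE and browsers (plus curl, when built against a TLS library that supports it) will append each session's secrets to it as connections are made, and Wireshark can be pointed at that file via the tls.keylog_file preference. Paths here are just examples:

    export SSLKEYLOGFILE=/tmp/tls-keys.log
    firefox &                        # or: curl https://example.com/
    wireshark -o tls.keylog_file:/tmp/tls-keys.log capture.pcap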
Private keys in modern TLS are used only to prove who you are; they aren't used to decrypt anything. Instead, random ephemeral secrets are chosen by both sides and a Diffie-Hellman (ECDH) key agreement is used to derive a shared secret from those ephemeral secrets.
As a result of this design the connection is encrypted and delivers integrity and confidentiality protection before either side knows who they're talking to.
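A rough illustration of that ephemeral key agreement using the openssl command line (X25519 picked purely as an example; TLS does the equivalent inside the handshake):

    # each side generates a throwaway key pair
    openssl genpkey -algorithm X25519 -out alice.pem
    openssl genpkey -algorithm X25519 -out bob.pem
    openssl pkey -in alice.pem -pubout -out alice.pub
    openssl pkey -in bob.pem -pubout -out bob.pub

    # both derivations print the same shared secret
    openssl pkeyutl -derive -inkey alice.pem -peerkey bob.pub -hexdump
    openssl pkeyutl -derive -inkey bob.pem -peerkey alice.pub -hexdump

The certificate's long-term key only signs the handshake; it never touches these secrets, which is why holding the server's private key alone doesn't let you decrypt a capture.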
In the early 2000s almost any traffic that didn't involve financial services or ecommerce was plain HTTP. Gradually, HTTPS became an option (remember encrypted.google.com?) and more sites used it for login (but not for all pages, even ones that carried cookies).
This meant that MITMs were a lot more effective. Hell, even today Comcast and some other ISPs will MITM you to send notifications when they can do so on a plaintext HTTP connection.
A lot of IT departments also used this to be able to block unwanted traffic and perform monitoring. Now a lot of that relies on DPI techniques like analyzing SNI, or intercepting DNS. DoH and encrypted SNI work together to close both gaps, and widespread deployment of them would largely kill the ability to MITM or monitor consumer devices without modifications.
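curl already gives a feel for what that looks like: with --doh-url the name resolution rides inside an HTTPS connection, so an on-path resolver never sees the query (cloudflare-dns.com is just one public resolver used as an example):

    # resolve example.com over DoH, then fetch it
    curl --doh-url https://cloudflare-dns.com/dns-query https://example.com/

    # or poke the resolver's JSON interface directly
    curl -H 'accept: application/dns-json' \
        'https://cloudflare-dns.com/dns-query?name=example.com&type=AAAA'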
In modern times the cost of TLS certificates and the overhead of TLS encryption have dropped to effectively zero, so that ship has sailed, and nobody even remembers there was any concern to begin with. Maybe this time it will be different, due to the lack of other options for MITM.
I imagine in the future there will be similar concerns about protocols that encrypt session layer bits like CurveCP.
I can't find anything specific at the moment, but anecdotally I remember seeing this and being told it hurt performance to encrypt everything. The "solution" was to only encrypt sensitive pages, like forms for credit cards.
I'm sure there was some substance to it back when computers, networks and browsers were slower, but I completely ignored that advice at the time and always used SSL everywhere on sites I set up.
I've never managed a very high-traffic site, so any extra overhead from SSL was negligible for us.
When people were concerned about HTTPS overhead? Both in terms of increased latency when establishing a connection and AES overhead for the duration of the connection. Hardware TLS accelerators used to be a thing.
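You can still get a rough feel for that overhead with openssl's built-in benchmarks (numbers are obviously machine-dependent, and exact algorithm names vary a bit between versions):

    # bulk symmetric encryption throughput
    openssl speed -evp aes-128-gcm

    # the asymmetric operations paid per handshake
    openssl speed rsa2048 ecdsap256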
Back in the 1990s and early 2000s, it was very common to have "transparent proxies": your router or the ISP's router was configured to transparently redirect all connections to TCP port 80 to a Squid caching proxy or similar running on a nearby server. This meant that images, CSS, JS, or even whole pages (the web was much less dynamic back then) were transparently cached and shared between all users of that router. That could save a lot of bandwidth. Encrypting the HTTP connections completely bypassed the caching proxy; to make it worse, IIRC some popular browsers didn't cache content from encrypted connections as well, so every new page view would have to come from the origin server. Obviously, the IT professionals who set up these caches didn't like it when most sites started switching to HTTPS, since it made the caches less useful.
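A minimal sketch of that kind of setup, assuming Squid runs on the router itself (directive names are from memory and have changed over the years; older Squids spelled the port option "transparent" rather than "intercept"):

    # squid.conf
    http_port 3128 intercept
    cache_dir ufs /var/spool/squid 10000 16 256

    # on the router: silently divert outbound port-80 traffic into Squid
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j REDIRECT --to-port 3128

Port 443 is untouched by the redirect, and even if you diverted it the proxy would only see opaque TLS, which is exactly why the caches lost their usefulness.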
A common problem with those caches back then was that in their common configuration they would limit the maximum upload size to a few megabytes... which would manifest itself as a broken connection when such an upload was attempted.
We regularly had to tell customers: "Can you try whether uploading works with this HTTPS link? Now it suddenly works? Okay, use that link from now on and complain to your network admin/ISP."
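If memory serves, that behaviour typically came from a cap along these lines in squid.conf (the 8 MB figure is only an example; the default is actually unlimited):

    # requests with larger bodies than this are refused or die mid-transfer
    request_body_max_size 8 MB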
Seems like it's a cyclical thing. DNS over HTTPS is now the big bad technology.