
No, haha. When was that a thing?



In the early 2000s, almost all traffic that didn't involve financial services or e-commerce was plain HTTP. Gradually, HTTPS became an option (remember encrypted.google.com?) and more sites used it for login pages (but not for all pages, even ones that set cookies).

This meant that MITM attacks were a lot more effective. Hell, even today Comcast and some other ISPs will MITM you to inject notifications when they can do so on a plaintext HTTP connection.

A lot of IT departments also relied on this to block unwanted traffic and perform monitoring. Now much of that depends on DPI techniques like inspecting the SNI field, or on intercepting DNS. DoH and encrypted SNI work together to close both gaps, and widespread deployment of them would largely kill the ability to MITM or monitor consumer devices without modifying the devices themselves.
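
To make the DoH part concrete, here's a minimal sketch of a DNS-over-HTTPS lookup from the client side, using one public resolver's JSON API (the resolver and the requests dependency are just illustrative choices, not anything specific to the above). The point is that the query rides inside an ordinary HTTPS connection, so an on-path box can't read or rewrite it the way it could with plain UDP DNS.

    # Hedged sketch: a DNS-over-HTTPS lookup via a public resolver's JSON API.
    # Requires the third-party "requests" package.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",      # public DoH endpoint (illustrative choice)
        params={"name": "example.com", "type": "A"}, # the record we're asking for
        headers={"accept": "application/dns-json"},  # request the JSON answer format
        timeout=5,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])        # resolved A records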

In modern times the cost of TLS certificates and the overhead of TLS encryption have dropped to effectively zero, so that ship has sailed, and nobody even remembers there was any concern to begin with. Maybe this time it will be different, due to the lack of other options for MITM.

I imagine in the future there will be similar concerns about protocols that encrypt session-layer bits, like CurveCP.


I can't find anything specific at the moment, but anecdotally I remember seeing this and being told that it hurt performance to encrypt everything. The "solution" was to encrypt only sensitive pages, like credit card forms.

I'm sure there was some substance to it back when computers, networks, and browsers were slower, but I completely ignored that advice and always used SSL everywhere on sites I set up.

I've never managed a very high-traffic site, so any extra overhead from SSL was negligible for us.


Back when people were concerned about HTTPS overhead? Both in terms of the increased latency of establishing a connection and the AES overhead for the duration of the connection. Hardware TLS accelerators used to be a thing.
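
To put rough numbers on the latency part, here's a quick sketch (not a proper benchmark; the host is just a placeholder) comparing the time to open a plain TCP connection with the time for TCP plus a full TLS handshake. Back then the handshake also meant expensive RSA operations on the server, which is what the accelerator cards were offloading.

    # Rough sketch, not a benchmark: plain TCP connect vs. TCP + TLS handshake.
    # Uses only the standard library; HOST is a placeholder for any HTTPS host.
    import socket, ssl, time

    HOST = "example.com"

    def tcp_only():
        t0 = time.perf_counter()
        with socket.create_connection((HOST, 80), timeout=5):
            pass
        return time.perf_counter() - t0

    def tcp_plus_tls():
        ctx = ssl.create_default_context()
        t0 = time.perf_counter()
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            # the socket is already connected, so the TLS handshake runs here
            with ctx.wrap_socket(sock, server_hostname=HOST):
                pass
        return time.perf_counter() - t0

    print(f"TCP connect:     {tcp_only() * 1000:.1f} ms")
    print(f"TCP + TLS shake: {tcp_plus_tls() * 1000:.1f} ms")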


Back in the 1990s and early 2000s, it was very common to have "transparent proxies": your router or the ISP's router was configured to transparently redirect all connections to TCP port 80 to a Squid caching proxy or similar running on a nearby server. This meant that images, CSS, JS, or even whole pages (the web was much less dynamic back then) were transparently cached and shared between all users of that router, which could save a lot of bandwidth. Encrypting the HTTP connections completely bypassed the caching proxy; to make it worse, IIRC some popular browsers also didn't cache content from encrypted connections, so every new page view had to come from the origin server. Obviously, the IT professionals who set up these caches didn't like it when most sites started switching to HTTPS, since it made the caches less useful.


A common problem with those caches back then was that, in their typical configuration, they would limit the maximum upload size to a few megabytes... which would manifest itself as a broken connection when such an upload was attempted.

We regularly had to tell customers: "Can you try whether uploading works with this HTTPS link? Now it suddenly works? Okay, use that link from now on and complain to your network admin/ISP."


Verizon used to tell people that until this year.




