
Back in the 1990s and early 2000s, it was very common to have "transparent proxies": your router or the ISP's router was configured to transparently redirect all connections to TCP port 80 to a Squid caching proxy or similar running on a nearby server. This meant that images, CSS, JS, or even whole pages (the web was much less dynamic back then) were transparently cached and shared between all users of that router. That could save a lot of bandwidth. Encrypting the HTTP connections completely bypassed the caching proxy; to make it worse, IIRC some popular browsers didn't cache content from encrypted connections either, so every new page view had to come from the origin server. Obviously, the IT professionals who set up these caches didn't like it when most sites started switching to HTTPS, since it made the caches less useful.
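For anyone who never ran one of these: the "transparent" part was usually a NAT rule on the router plus an intercepting port in Squid. A minimal sketch of that setup (interface name, ports, and the modern `intercept` keyword are my assumptions, not from the comment above; older Squid versions used `transparent` instead):

```shell
# Hypothetical router rule: silently redirect all outbound port-80
# traffic arriving on the LAN interface to a local Squid on port 3128.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-port 3128

# Matching squid.conf line: tell Squid this port receives
# intercepted (not explicitly proxied) connections.
#   http_port 3128 intercept
```

The browser never knows a proxy is involved, which is exactly why an HTTPS connection sails straight past it: the redirect only matches port 80, and Squid can't read the TLS stream anyway.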



A common problem back then with those caches was that their typical configuration limited the maximum upload size to a few megabytes... which manifested itself as a broken connection whenever such an upload was attempted.

We regularly had to tell customers "Can you try whether uploading works with this HTTPS link? Now it suddenly works? Okay, use that link from now on and complain to your network admin/ISP."
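The limit being described is likely something like Squid's `request_body_max_size` directive; a sketch of the kind of line that produced those mysteriously broken uploads (the 4 MB value is an illustrative assumption):

```shell
# squid.conf (hypothetical): reject request bodies over 4 MB.
# A POST/PUT upload larger than this gets cut off by the proxy,
# which the end user just sees as a failed or hung upload.
request_body_max_size 4 MB
```

Since the transparent redirect only intercepted port 80, switching the upload URL to HTTPS routed around the proxy entirely, and the limit along with it.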



