I was really expecting a serious discussion about useless and dangerous flags, outdated encryption, expensive and dangerous renegotiations... Instead I got a one-line complaint about "network traffic" (read up on the difference between latency and bandwidth!), caching, and bad tooling (go learn some better tooling; it's out there).
There are plenty of things to complain about in TLS, but the article touches none of them. What a bummer.
I'm personally still trying to figure out where the hordes of new users who just install Apache are. These days, if you can install Apache 2 on a computer permanently connected to the internet, you can probably also install Caddy or Certbot.
I really hope I'm not the only person who mentally groans whenever I see yet another "X considered Y" clickbait title. It's the tech equivalent of "this one weird trick" or "X Happened And You Won't Believe What Happened Next".
clickbait (google search): "(on the Internet) content, especially that of a sensational or provocative nature, whose main purpose is to attract attention and draw visitors to a particular web page."
I think this safely fits under the sensational/provocative attention-grabbing umbrella.
The problem with this argument is that there are very high-security pages on the Internet --- things that protect people's bank accounts or most sensitive personal information --- and they're not going away. The junction, at the protocol level, between insecure web sites and secure ones is a major design weakness; we would have fewer attack vectors in the long run if we could count on uniform encryption across the web.
This is precisely my thought on SSL. I'm no expert (correct me if I'm wrong), but if I understand the technology correctly: if your HTTP website links to an HTTPS login page, what is to stop someone from spoofing that link and pointing it at a fake login page on the HTTP website?
This is an annoying problem – even if your entire site uses HTTPS, users could be vulnerable if they ever follow an HTTP link or their browser's autocomplete doesn't update to use HTTPS.
Strict Transport Security was designed to solve that problem partially by telling the browser to never make an insecure HTTP request:
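```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

(The one-year `max-age` and the `includeSubDomains` flag here are just an example policy.)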
That's widely supported now, but it does require the browser to have previously visited the legitimate site within the stored `max-age` period. Preloading (https://hstspreload.appspot.com/) can close that gap further, but it increases the organizational commitment: your web developers must never break the all-HTTPS contract by mistake.
EV certificates may improve a user's awareness of a spoofed page, but cannot do anything to make it more technically difficult to execute.
Providing an HTTPS login on an otherwise-HTTP site is vulnerable to redirection to HTTP or to another site.
There is plenty of evidence suggesting that in this configuration, cookies are often not set up properly (i.e. flagged secure-only) and can therefore be transmitted, and stolen, over HTTP.
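For reference, the fix on the server side is a one-attribute change when the cookie is set (the cookie name and value here are just placeholders):

```
Set-Cookie: session=<opaque value>; Secure; HttpOnly
```

The `Secure` attribute keeps the browser from ever sending the cookie over plain HTTP; `HttpOnly` is a separate hardening step against script access.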
Totally agree: encryption by default is the best option. However:
And if SSL is offered perhaps it should still be possible to access non-security-critical pages by plain old HTTP.
I often thought about setting up a forwarding proxy for my sites that listens on `un{encrypted,secure}.mydomain.tld` and provides plain old, non-optimized and unencrypted HTTP/1.1 access for those who wish to access the sites that way. I've experienced the issue with slow loading times when roaming (bandwidth is expensive) and/or in remote areas (very low bandwidth and spotty coverage) a couple of times, and having low-profile variants for browsing essential stuff would've helped a lot.
However, I'm not sure if this will open attack vectors for the unencrypted sites. I.e. I'd imagine that a man-in-the-middle-style attack could show the main site / URL (www.mydomain.tld) with another cert and just forward the un{encrypted,secure} content … but then again, this is not a new technique and can be prevented by public key pinning (though, https://news.ycombinator.com/item?id=12434585 :-/), DNSSEC/DANE (no browser vendor buy-in yet), etc.
Any idea whether this would be a sane approach for other sites as well? With enough momentum, browser vendors could detect un{encrypted,secure}. sub-domains and display them specially.
> Seems to me a bit like equipping everyone with armour to make shooting them more difficult. Solving the problem the wrong way?
I don't know, making humans immune to bullets would be an elegant solution to the gun control debate which doesn't involve disagreements over the second amendment, and would make everyone win.
To be clear: I was proposing that, if it were even possible, it would actually be a good way to resolve this debate.
"Yes, you can keep your guns, they're just totally ineffective at harming people now".
I wasn't taking a stance for/against gun control, or more broadly for/against the US Constitution. Telling someone to "just move to [some other country]" is needlessly hostile; the message it sends is, "You aren't welcome here."
The author of this post proposed a straw man of a "crazy sounding idea" to illustrate "solving the wrong problem"; what I'm saying is that it would be the right problem to solve if it were even possible.
(As far as my actual politics go on this matter: I'd like to see mandatory gun-safety training in places where guns aren't illegal, to prevent accidental misuse. Friends of friends have lost their lives to mishandled firearms. That's all you'll get out of me on HN.)
The problem with SSL/TLS is that it is binary. There's currently a very strong pro-binary movement in the ranks of Internet infrastructure engineers, probably originating at Google. Yes, binary protocols are marginally more efficient, but they are inherently harder to understand, debug, and generally inspect, especially under high-stress conditions when something fails in production. Binary protocols are more complex than text protocols, and more complexity leads to negligence and security problems (e.g. the recent OpenSSL bugs). Secure systems are simple systems (OpenBSD gets this right).
Text-based protocols are the greatest thing that UNIX brought to the world. There should be more of them, especially in security sensitive areas.
Text-based protocols are simple for humans to read, but anything requiring parsing is a security smell.
Even if you are sure your buffer handling is free of bugs (reasonable in newer languages, though the known sizes of binary fields have been a security strength for them), the ambiguity of text is dangerous.
Interpreting data as text easily corrupts embedded binary if you aren't careful, and escaping bloats the size of what's already the largest part of your message.
Many security bugs have been triggered by implementations disagreeing about when they interpret UTF-8 and when they don't. UTF-8-encoded ASCII characters, for example, may cause one parser to recognize a keyword that another ignores; never mind the differing sets of accepted whitespace characters.
You could define a very strict encoding and delimiting scheme, but at that point you can't trust text editors to edit it, making it effectively a needlessly complicated binary protocol.
Well, OpenPGP works, and it is not terribly complicated.
But yes, UTF-8 is a huge can of worms.
(An accepted solution to this problem is an obligatory "canonicalization" step: transforming the message into an agreed canonical format before transmitting it. The result is still viewable and editable with text editors, with no need to worry about parsing errors. The canonicalization step can itself be error-prone, but the attack surface is much smaller than every possible parser in the world.)
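A toy sketch of what such a canonicalization step might look like (purely illustrative; real formats such as OpenPGP's cleartext signatures define their own precise rules):

```python
import unicodedata

def canonicalize(message: str) -> str:
    """Reduce a text message to one agreed-upon form before signing or parsing it."""
    # Normalize Unicode so visually identical strings have identical bytes
    message = unicodedata.normalize("NFC", message)
    # Agree on one line-ending convention and drop trailing whitespace on each line
    lines = [line.rstrip(" \t") for line in message.replace("\r\n", "\n").split("\n")]
    # Drop trailing blank lines and end with exactly one newline
    while lines and lines[-1] == "":
        lines.pop()
    return "\n".join(lines) + "\n"
```

Both sides run the same function before comparing, signing, or verifying, so disagreements between parsers have far less room to hide.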
Also, some sort of "parsing" (normalizing) is always required even for binary messages. Endianness, alignment conventions, etc. — just mapping the network bytes onto memory is inviting trouble.
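To illustrate that last point: decoding should be explicit about byte order rather than relying on whatever the local machine happens to use. A small Python sketch (the wire format here is invented):

```python
import struct

# Hypothetical wire format: a 32-bit length followed by a 16-bit flags field,
# both in network (big-endian) byte order.
payload = b"\x00\x00\x00\x2a\x00\x01"

length, flags = struct.unpack(">IH", payload)   # explicit big-endian: (42, 1)

# The same six bytes read with the other byte order give entirely different values;
# this is the kind of silent disagreement that "just map it onto memory" invites.
assert struct.unpack("<IH", payload) == (704643072, 256)
```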
amusingly, the "one-line" server is not only "not really one line", but also contains a number of errors and other incongruities:
1. there's no reason to put `:` at the start
2. `z=aa` is the same length as `z=$r`
3. there are double quotes where there shouldn't be, and none where there should be
4. the `sed` quoting is wrong and only works because file names cannot be empty
5. useless use of subshells
6. won't work with `echo` implementations that don't parse escape sequences or don't accept `-e`
7. parsing `ls` output
but most importantly, the whole first part can easily use TLS with `openssl req -x509 -newkey rsa:4096 -nodes -subj /CN=localhost -keyout server.pem -out server.pem; openssl s_server`.
There are several other very important reasons missing from this article, which I think invalidate part of the argument.
One is the widespread use of open wifi networks. I know many people don't bother to route traffic through a VPN when on open wifi, which means anyone on the network can monitor their traffic. This might be mostly innocuous, but at worst they can steal login credentials and personal info.
The second is ad/analytics tracking networks. By using SSL, you force your trackers to use SSL as well. Small comfort for those who despise them anyway, but it's better than these networks moving plain-text identifiers and info about you around, where they can be monitored as you surf the web.
I believe the third is widespread government surveillance/mass spying. By using SSL you do two things: prevent (or at least complicate) third-party interception of data, and decrease the signal-to-noise ratio for the eavesdropper (making it less likely that any given encrypted stream is actually something valuable and worth breaking).
Hopefully the argument about back-and-forth traffic in SSL will soon be obsolete once Zero-RTT handshakes are implemented in TLS 1.3. Surely this would then be comparable to standard HTTP requests?
Total clickbait. More like websites with black backgrounds and bright green monospace fonts considered unreadable.
No major browser will be supporting the insecure mode of http/2. I don't think I'm alone in thinking that is a good thing. I like to know that the page I'm interacting with hasn't been tampered with, whatever website I'm on. Nefarious certificate authorities aside, TLS is the way to do that.
Besides, connections (especially mobile) are getting faster all the time. I'd say encouraging better connectivity is a more worthwhile pursuit than allowing everyone to turn off TLS.
I prefer proxying SSL (and automatic generation of Let's Encrypt certificates) in containers, so that my web servers don't have to worry about that aspect of configuration.
This post focuses only on the technical costs of TLS. The reality that we currently live in contains a hostile network where unarmoured packets are the easiest of targets. The movement to put TLS on everything is a reaction to the hostility and is overwhelmingly driven by #1: A legitimate interest in security.
SSL/TLS is bloated but that's not a reason not to use it.
Rather it's a reason we need some TLSv2 that just removes the crap and focuses only on three encryption/authentication modes:
* Desktop: High throughput, lots of CPU, minimal latency
* IoT: small throughput, very little CPU, latency acceptable
* Mobile: small to medium throughput, some CPU, minimize latency
A lot of bloated protocols are still good; they're bloated because of backwards compatibility and because everyone and their kitchen sink needs to be able to decode them.
It seems to make more sense to just have ONE that can accommodate all those scenarios in a secure way. One doesn't solve bloat by introducing more bloat.
I'd say more can be won by removing, e.g., ASN.1 and X.509 for certificate handling and encoding, which are very difficult (impossible?) to get right, and switching to something simple that solves the 99% use case of current TLS.
Precisely - and just like one line of code is enough to spawn an HTTP server, a different line of code could be enough to spawn an HTTPS server.
It's a matter of improving the tooling. No one is advocating disabling HTTP today; what browser vendors are trying to do is get the ecosystem to a point where that's possible without a significant increase in cost for site operators.
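For example, with stock Python 3 it's already only a handful of lines once a certificate exists on disk (say a `server.pem` containing both key and certificate, like the one the openssl command quoted earlier in the thread produces; the file name is just an assumption):

```python
import http.server
import ssl

# A plain file-serving HTTP server, with its listening socket wrapped in TLS.
httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem")               # combined certificate + private key
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)

httpd.serve_forever()                           # now serving https://localhost:8443/
```

Not one line yet, but the gap is tooling, not the protocol.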
Similarly, HTTP/2 has been tuned specifically for (typically high-latency, low-bandwidth) mobile connections and is practically (at least) indistinguishable from the speed of (optimized) HTTP. This is only going to improve with things like Zero-RTT handshakes coming with TLS 1.3.
It uses Netcat, and it would be much more readable if it were split across more lines.
I suppose you could take some small web server written in C and just remove all the newlines, but what's the point, really?
The article seems to overlook one aspect of https: it's much easier for the site operator to just serve everything over https. We tried the whole "switch the user to https when needed" approach years ago. It was cumbersome, and we often got it wrong and exposed traffic that was supposed to be encrypted.
The point is well made. We don't NEED https all the time. It's just easier and most of our connections and devices don't care about the overhead.
Anything can be done in one line when using existing tools that are made of thousands of lines. This is also a one-line web server: `python -m SimpleHTTPServer 8080`
In my perfect world, you'd receive a certificate from your ISP when it assigns you one of the IP addresses it was itself assigned, and you'd receive a certificate from your registrar when you purchase a domain name. The former certificate would be good for the duration of your IP assignment; the latter for the duration of your domain ownership.
The IP-level certificate would be used for IPsec; the DNS-level certificate would be used for HTTP and other protocols; if you needed some other, stronger sort of identity verification then you'd need to take other measures.
This would solve the accessibility problem.
As for proxying, I think that HTTP had a really interesting idea with proxying, but it just doesn't work in practice. Proxies are untrustworthy, so it doesn't make sense to use them.
As for speed, I don't think SSL is noticeably slow from a modern phone.
The author lists legitimate motivations for why people want to see 100% SSL adoption.
The CA system also began with such good intentions. But the profit motive enveloped those good intentions. Certificates became a business, and the quality of the software became an afterthought.
The same may be happening, or is already happening, with SSL/TLS deployment. With a function such as encryption, one cannot ignore software quality. Poor quality can defeat the whole purpose of the software. There is no point in using bad encryption software.
One of the good intentions the author cites is that people want ubiquitous encryption. Is encryption synonymous with SSL? Why? SSL is not the only system ever written to encrypt internet traffic. And it is probably far from the best one that could be written.
Nothing wrong with the good intentions. But is SSL an asset or a liability? There is a cost to taking on SSL's baggage of complexity, and maybe it's only worth it if the benefit achieved is real and not illusory.
If SSL can so easily be exploited, then the false sense of security its name inspires may cause more problems than SSL solves. But that's only for users. Others with purely commercial goals stand to profit immensely from SSL adoption, the same way businesses did from CA certificates.
SSL was not created with the intent to protect non-commercial communications. It was created in the 1990s by Netscape to allow for "e-commerce" using their browser. It served its purpose.
SSL is old, and people are attempting to retrofit it with "improvements", such as being able to host multiple sites, each with its own certificate, on one IP address. This is a hack. It's called SNI and it breaks a lot of software. People should consider why such a "feature" even needs to be implemented. Is it for the benefit of the user? The CA business has become nothing more than an impediment for many people.
Costs vs benefits. Not just for business but for users.
Missed the biggest point, which is cognitive overhead. HTTP is simple to understand, and it has thrived because of this. What a pain it is to get Wireshark to decode TLS traffic, which is not just cognitive overhead but debugging overhead too.
> It stops proxies from caching responses between different clients. There is no way to fix this.
There is, at least in corp environments. We have, via proxy.pac, a couple of ordinary proxies which act as regular caches with low TTLs, and additionally a huge (read: multiple TB of storage) proxy which caches, with extremely high TTLs, the auto-updaters from Apple, MS, Debian and Ubuntu, as well as the media CDNs of some major newspapers.
It works because our machines have its CA certificate locally installed.
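For anyone who hasn't seen one, the proxy.pac half of that setup is just a routing function. A simplified sketch (the host names are made up):

```javascript
function FindProxyForURL(url, host) {
  // Big, rarely-changing downloads go to the long-TTL multi-terabyte cache
  if (dnsDomainIs(host, ".windowsupdate.com") ||
      dnsDomainIs(host, ".debian.org") ||
      dnsDomainIs(host, ".ubuntu.com")) {
    return "PROXY bigcache.corp.example:3128";
  }
  // Everything else goes through the ordinary low-TTL caches
  return "PROXY cache1.corp.example:3128; PROXY cache2.corp.example:3128";
}
```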
Somewhat related: I went to check something on my home router for the first time in months and learned that:
a) it uses an old version of SSL to serve up its admin page
b) all modern browsers refuse to load that page and no longer offer an override
I had to dig up and load an old unpatched browser so that I could turn off SSL completely on the router and continue to administer it. Am I more secure now? I'm not sure.
A better option would have been to use something like sslstrip/sslsplit/mitmproxy to strip/bump the SSL connection. Admittedly the situation is a bit unfortunate, but there aren't really many good solutions when dealing with broken crypto.
As a compromise between SSL and plain http, wouldn't it be enough for most of the content to be signed? E.g. background images don't necessarily have to be encrypted. They can be sent in plain sight with a signature which ensures that the image hasn't been modified. The signature has to be computed only once, so the overhead is negligible.
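A rough sketch of the idea using the Python `cryptography` package (how the public key and signature would actually be distributed to clients is hand-waved away here, and that distribution problem is most of the hard part):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Site operator: sign the static asset once, then serve image + signature in the clear.
signing_key = ed25519.Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

image = b"...the raw bytes of background.png..."   # placeholder for the real asset
signature = signing_key.sign(image)                # computed once, cached forever

# Client: verify that the bytes weren't modified in transit.
try:
    public_key.verify(signature, image)
except InvalidSignature:
    print("asset was tampered with on the way here")
```

This is essentially what signed software repositories already do, which is one reason plain-HTTP caching of distro updates (mentioned upthread) still works for them.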