RC4 is kind of broken in TLS (cryptographyengineering.com)
92 points by B-Con on March 12, 2013 | 60 comments



Attackers can use JavaScript and/or browser plugins to coerce browsers into making millions of requests to (say) Gmail, crafting the URL so that the session cookie lines up at a specific point in the plaintext. Similar techniques animated BEAST, CRIME, and Lucky 13.

Now, for sites that use RC4 (to mitigate BEAST and Lucky 13), attackers can take advantage of a flaw in RC4: RC4's keystream output has biases throughout the first 256 bytes. Over millions of trials, these biases make it possible to use basic statistics to predict cookies, from the vantage point of a passive attacker.
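A toy sketch of what "basic statistics" means here (this is my own simplification, not the paper's full maximum-likelihood attack, and it assumes we already know the dominant keystream bias at a given byte position, e.g. the well-known bias of the second keystream byte toward 0x00):

```python
# Toy sketch: if byte position r of the RC4 keystream is biased toward the
# value k_r, then over millions of encryptions of the same plaintext under
# fresh keys, the most frequent ciphertext byte at r is roughly p_r ^ k_r.
from collections import Counter

def recover_plaintext_byte(ciphertexts, r, biased_keystream_value):
    counts = Counter(ct[r] for ct in ciphertexts)
    most_common_ct_byte, _ = counts.most_common(1)[0]
    return most_common_ct_byte ^ biased_keystream_value
```

The real attack combines the full empirical keystream distribution at every position rather than a single likely value, but the principle is the same: enough samples turn a small bias into a reliable vote.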

This is one of the easier-to-understand TLS flaws of the last few years. RC4 is simple: you key it and it spools out a hard-to-predict stream of random bytes, which are XOR'd with the plaintext. If those bytes turn out to be predictable in some way, the attack is obvious. Now that (a) we know how to get browsers to generate millions of sessions (hello, BEAST) and (b) half the Internet is now using RC4 (thanks, BEAST), these RC4 attacks have become germane.
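For readers who want the mechanics, here is a minimal, purely illustrative RC4 in Python (not hardened, not constant-time); the whole cipher really is just a keyed byte-swapping permutation spooling out a keystream that gets XOR'd with the plaintext:

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4_encrypt(key: bytes, plaintext: bytes) -> bytes:
    ks = rc4_keystream(key)
    return bytes(p ^ next(ks) for p in plaintext)
```

The problem is that the early output of the PRGA is not uniformly distributed, which is exactly what the biases above exploit.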

My guess is that these attacks are (1) noisy, (2) slow, (3) unreliable, and (4) expensive --- not in a "need a grid of PS3s to exploit" kind of way, but in a "nobody will cast a broad net across the Internet with this attack" sort of way.

But the vulnerability is also plainly unacceptable, in a way that I don't think is true of "Lucky 13" (which isn't intractable to fix in application code, just very annoying to fix).

It looks like we're finally reaching a place where TLS compat is going to stop dictating what attacks we mitigate and how, because unless we opt to accept the risk of poor "Lucky 13" fixes, there's no compatible way out. We need a widely deployed authenticated ciphersuite.


> unless we opt to accept the risk of poor "Lucky 13" fixes, there's no compatible way out.

What particular class of fixes are you referring to by 'poor "Lucky 13" fixes'? Do you mean that getting Google, Yahoo, Facebook, Amazon, Microsoft, and Mozilla Foundation all to stop using a particular cipher is unworkable, or that TLS is just too broken and needs to be replaced, or both?


Adam Langley does a much better job explaining this than I can: http://www.imperialviolet.org/2013/02/04/luckythirteen.html

As I understand it, the AES-CBC ciphersuites in TLS are not fundamentally unworkable.


When one is in a game of whack-a-mole, something is probably broken somewhere. It seems like your position comes down to this:

We programmers mostly know that ciphers are hard to design and should only be designed by experts.

We programmers mostly know that protocols are also hard to design and should only be designed by experts. In fact, it turns out they're even harder than ciphers to get right.

What our community still isn't getting and dealing with correctly is that implementing ciphers, protocols, and security software with current techniques is hard and should only be done by experts.

It all smacks of the kind of problem newb programmers have if they haven't studied concurrency and they decide to build their first concurrent system. This makes me wonder if better tools could be made on the language level, like a functional language and formal system designed to enable provably secure systems, also accounting for timing attacks. Such languages would probably have to leave out embedded systems, but might stand a reasonable chance of covering desktop, mobile, and browser applications.


if people are skimming this thread, the link above is quite entertaining. a nice example of how much care is needed to thwart timing attacks.


I had assumed that TLS didn't use plain RC4, but rather a variant of RC4 that discards the first several hundred bytes of the keystream to avoid this problem. But I just checked and apparently TLS uses straight-up RC4. Sigh...
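For reference, the "discard the first N bytes" variant (often called RC4-drop[n]) is a one-line tweak on the RC4 sketch earlier in the thread; the drop count of 768 is just one commonly suggested value, not something TLS specifies:

```python
def rc4_drop_keystream(key: bytes, drop: int = 768):
    # RC4-drop[n]: throw away the first `drop` keystream bytes, where the
    # strongest biases live, before any of it touches plaintext.
    ks = rc4_keystream(key)   # rc4_keystream as sketched earlier in the thread
    for _ in range(drop):
        next(ks)
    return ks
```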

I'm not sure why people so fervently recommend disabling AES-CBC (and SSLLabs knocks you a whole letter grade if you haven't) when modern browsers and TLS clients work around BEAST with "1/n-1" record splitting [1]. I figure out-of-date browsers are probably vulnerable to something that lets you hijack sessions without needing BEAST anyway.

[1] http://www.imperialviolet.org/2012/01/15/beastfollowup.html


[I am the author of SSL Labs.]

Not all major browsers implement 1/n-1 record splitting. In particular, the last time I checked, Apple had not implemented it in Safari or on iOS devices. In that light, downgrading the grade to a B is a reasonable way to indicate that your configuration is not secure for a potentially large portion of the user base.

The 1/n-1 split is the main reason that the grade is just reduced and sites are not simply failed (as is the case with insecure renegotiation, for example). The other reason is that the BEAST attack is not exactly easy to execute.


Wow, thanks for this reply (esp the info about Safari/iOS).

What are your thoughts on the RC4 attack? Do you plan to take it into account in the SSL Labs test?


Knowing what we know today, attacks against RC4 are not yet practical, and thus there is no reason to panic. But we must act now. Given the huge incentive for researchers to continue to break RC4, it's reasonable to expect that the attacks will continue to improve.

Yes, SSL Labs will start to penalize RC4 at some point, but not just yet. Later today we will start warning people about the problems.

I've just published the recommendations here:

RC4 in TLS is Broken: Now What? https://community.qualys.com/blogs/securitylabs/2013/03/19/r...


First, the record split problems caused compat issues.

Second, even with the record splitting fix, you still have Lucky 13, which is really annoying to fix completely, and people are wary of deploying a crypto fix to their SSL library that in 5 years we'll just find out was window dressing.


> First, the record split problems caused compat issues.

It did, but I think we're through it now. IE, Firefox and Chrome have all had it on for a while.


What scares you more: AES-CBC with best-effort fixes for Lucky 13, or RC4?


CBC at the moment, I think. There's still the possibility of clients without record splitting, there are timing issues in AES itself, and there are lots of possibilities for bad server padding implementations.

I have TLS 1.2 and AES-GCM working in NSS on my desktop, but we don't currently have the NSS reviewer time to get it landed. We also have to deal with the issue that an active attacker can trigger a version downgrade somehow. (I don't love AES-GCM either, but it's the nearest port in this storm.)

In the meantime, I'll probably tweak RC4 in NSS to send one-byte records at the beginning of the connection and thus burn off the first 256 bytes of keystream by encrypting MACs. That still leaves a handful of bytes vulnerable, but half of them will be "GET ".
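Back-of-the-envelope arithmetic for that idea (my own sketch of the accounting, not NSS code, and it assumes an RC4 + HMAC-SHA1 ciphersuite, i.e. a 20-byte MAC per record):

```python
MAC_LEN = 20          # HMAC-SHA1 tag appended to each record's plaintext
BIASED_PREFIX = 256   # keystream bytes with the strongest single-byte biases

records = 0
keystream_used = 0
while keystream_used < BIASED_PREFIX:
    keystream_used += 1 + MAC_LEN   # 1 application byte + MAC, all RC4-encrypted
    records += 1

print(records, keystream_used)      # about 13 one-byte records, 273 bytes burned
```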

But I doubt that's fully sufficient so AES-GCM is the medium term goal.


Is this what web security is reduced to? On one hand, openssl not releasing TLS 1.2 support for over 3 years until a semi-panic over BEAST generated enough interest to get it done? On the other, major browser vendors not implementing TLS 1.2 (okay, MS did, but disabled by default) for even longer because a lack of server support, compatibility issues, or fear of protocol downgrade attacks made it an uninteresting or risky proposition? Nobody was interested enough to say screw chicken-and-egg and simply get it done, on either side?

OS maintainers aren't helping either. RHEL/CentOS 6.4, just released, is still stuck with openssl 1.0.0, which doesn't support TLS 1.1 or 1.2, so there's another few years of webservers running an "enterprise" OS not having TLS 1.2 support. But hey, OpenSSL 1.0.0 is "stable"!

I guess there's not much hope of getting Salsa20 or ChaCha, and better curves like curve25519 (I see a draft proposing some Brainpool curves), into TLS 1.2? (with the intent of discouraging many of the less desirable ciphersuites after a few years... not adding more ciphersuites just to add more ciphersuites.)


What is the advantage of getting Salsa20 or curve25519 into TLS 1.2?


If some catastrophic problem is discovered where there's a practical attack on block ciphers as used in the TLS ciphersuites, with no clever spec-compatible mitigation techniques, it would be nice to have a stream cipher in TLS (and implemented in the major TLS libraries) that isn't known to be problematic like RC4 is.

How long would it take to fix TLS implementations for high security applications in that scenario? And what would be the probable response? I think Salsa20 has a good chance of being the "fix"; maybe it wouldn't have to go through a truncated RFC process, and the major stakeholders would simply agree to add it to their TLS implementations and ship, but even then it would take significant time to implement and review.

You and others complain about the hodgepodge of ciphersuites in TLS.[1] South Korea gets their obscure block cipher into TLS, and yet the only stream cipher currently in TLS is known to be flawed and hasn't been ripped out and replaced?

Curve25519 was an afterthought before I looked and noticed 3 brainpool curves seem to be on track for inclusion, but Curve25519's parameters are more cautiously selected than the curves already in TLS implementations (secp256r1 for instance being the closest equivalent, and a common default curve for ECDH).

[1] I can see that the design-TLS-by-committee strategy doesn't seem to be working optimally. Someone credible ought to step up and demand a return to sanity. TLS 1.2 is barely even supported yet in practice, and yet there are 32 AES TLS 1.2 ciphersuites (by openssl 1.0.1e's count, and more that it hasn't implemented), 16 of which are RSA-based?! To take one example, TLS 1.2 RSA ciphersuites that use plain DH... what are they for? The parameters in TLS-enabled applications are traditionally hardcoded for 1024 bit DH, which decreases the security of sites using 2048+ bit rsa keys... and who uses 2048+ bit DH in practice? It's slow, and ECDH is the obvious choice. Then there are lots of less popular ciphersuites (most not even supported by openssl or nss), and as far as I can tell, that's driven by software and hardware companies who want a particular ciphersuite for their commercial products and want it in the official TLS spec, even though that does no good because major TLS libraries will not support obscure ciphersuites.


djb says they're awesome!

But more seriously, Salsa20 is very efficient, so if RC4 is still being used for performance reasons rather than security reasons, it would seem to be a decent replacement.


RC4 is fast, but it's being used because Google was forced by circumstance to use it.


Whoah, neat. I didn't realize the RC4 workaround was that straightforward. Easy to forget you still have the record layer when you're using a stream cipher. Thanks, this was an excellent comment.


So what exactly are the NSS reviewers doing instead? Also, could you reorder HTTP headers to put cookies last? And at what point do we say "Screw it, TLS is dead"?


That's a pretty amazing workaround and delightfully ironic that it's made possible by MAC-then-encrypt.


Apparently the compat issues with record splitting have been resolved sufficiently well that all the major browsers have turned it on.

Yeah, there's Lucky 13 but you said above that this RC4 vulnerability is unacceptable in a way that Lucky 13 isn't. Basically I'm wondering what the point is in fervently recommending RC4 over AES-CBC when they both suck.


>We live in a world where NIST is happy to give us a new hash function every few years. Maybe it's time we put this level of effort and funding into the protocols that use these primitives? They certainly seem to need it.

This is a great point. Are there any modern reasonable alternatives to TLS to use in applications? On the one hand developers are told to not implement crypto directly and use something like TLS. Yet on the other hand it seems most TLS implementations suck (don't check the keys for example) and the standard itself has a bunch of holes.


No. Developers should continue to use TLS.

If you look at the last few years of TLS --- which have been rocky, to be sure --- you have flaws that are really difficult to exploit and (usually) straightforward to mitigate. If you look at a representative sample of non-TLS transport protocols, you get clownish flaws:

* Block ciphers deployed in the default mode (ECB), which allows straightforward byte-at-a-time decryption (a quick sketch of the ECB leak follows this list)

* Error-based CBC padding oracles for which off-the-shelf tools will do decryption

* Unauthenticated ciphertext --- not "used a MAC in the wrong order", like Lucky 13 exploits, but "literally no integrity checks at all", so attackers can trivially rewrite packets

* RSA implemented "in the raw" with no formalized padding or PKCS1.5 padding

* Key exchanges with basic number theoretic flaws

* Repeated IVs and nonces that allow whole message decryption by analyzing captures of just a few hundred messages
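To make the first bullet above concrete, here is a minimal illustration (it assumes the Python "cryptography" package, purely for demonstration) of why ECB is such a clownish default: identical plaintext blocks encrypt to identical ciphertext blocks, which is the structural leak that byte-at-a-time decryption builds on.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
import os

key = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

pt = b"A" * 16 + b"A" * 16          # two identical 16-byte blocks
ct = enc.update(pt) + enc.finalize()
print(ct[:16] == ct[16:32])         # True: ECB leaks block equality
```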

The list goes on and on. Not only that: two of the four recent TLS problems (BEAST's chained CBC IVs and CRIME's compression side channel) are equally likely to affect custom cryptography --- they aren't the product of any weird SSL/TLS requirement. Chained CBC IVs also happened in IPsec; compressing before encryption was IIRC an _Applied Cryptography_ recommendation. The only reason the RC4 bug is unlikely to apply is that nobody outside of TLS server operators would choose RC4.

To be sure: your best options (PGP and TLS) are creaky and scary looking. But they are nowhere near as scary as the "new" cryptosystems people deploy. What's especially annoying about the new stuff is that it follows a release cycle that conceals how terrible it is:

* Initial release with great fanfare about the new kinds of applications they'll enable, press coverage

* Security researchers flag unbelievably blatant flaws in crypto constructions

* Blatant flaws are fixed, cryptosystem is rereleased, now with promotional text about the external security testing it has

For a cryptosystem published by someone without a citation record in cryptography, a basic crypto flaw should be considered disqualifying; it's a sign that the system was designed without an understanding of how to build sound crypto. But that's not how things actually work, because everyone wants to believe that cryptographic protection is the Internet's birthright and that we're all just a few library calls away from "host-proof" or "anonymous" communications.

If you're really worried about TLS security but have the flexibility of specifying arbitrary crypto, why not use a library that does TLS with an AEAD cipher, like AES-GCM?


> why not use a library that does TLS with an AEAD cipher, like AES-GCM?
Some possible reasons:

* lack of confidence in TLS's design and designers (for example, TLS1.2 still allows compression and fails to counsel against its use).

* TLS has far too many options. I want a secure channel. I don't want a secure channel toolkit.

* TLS tends to be paired with a broken and discredited root-of-trust infrastructure (which often gives the misleading impression that TLS itself was broken).

(nb. I don't have any evidence that the AEAD TLS1.2 ciphersuites are broken, I'm playing devil's advocate here.)

Regarding your 'new cryptosystems' point: I agree, and it's completely and frustratingly hopeless. But that's why the world needs a decent secure channel standard with good security bounds, no knobs on the side that break confidentiality or integrity, and no way to fall back to insecure legacy behavior.


* You should have even less confidence in new cryptosystems.

* Downthread, I suggested that TLS doesn't have as many extraneous options as it appears.

* If you can specify AES-GCM, it is even easier to specify not using default CA roots.

* Using an AEAD cipher removes crypto logic from the SSL protocol (the order of operations and message formatting for getting a block cipher to work with a hash-based MAC) and moves it into the block cipher mode, which (unlike TLS) is NIST-standardized.


Is there a problem with the body content of an http response being compressed, or is it mainly a header thing?


There can be if a secret is on the same page as text under the attacker's control. The attacker can run a hidden JavaScript reload attack on the page, then fiddle with the text under their control until compression is maximized.
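A rough sketch of that idea with zlib (the cookie value and page layout are made up for illustration; real CRIME/BREACH-style attacks average over many requests and add padding to resolve ties in the byte-granular output):

```python
import zlib

secret = "sessionid=7f3a9c"                 # hypothetical cookie in the page
known_prefix = "sessionid="

def compressed_len(guess: str) -> int:
    page = f"...{secret}... attacker-controlled: {known_prefix}{guess}"
    return len(zlib.compress(page.encode(), 9))

# The correct next character tends to compress no worse than wrong guesses,
# because it extends the repeated substring; the attacker repeats this per
# position to pull the secret out byte by byte.
sizes = {c: compressed_len(c) for c in "0123456789abcdef"}
print(sorted(sizes.items(), key=lambda kv: kv[1])[:3])
```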


I wasn't suggesting that because TLS is as you say "creaky and scary looking" I'm going to go off and write something from scratch. I think a good job has been done of making developers fear writing their own cryptosystems.

What I'm wondering is whether there's any serious effort out there that could replace TLS in the near future. Like the article was saying, NIST has promoted new crypto primitives; who can be trusted to create the next generation of crypto systems?


Browsers are going to rely on a secure transport that features a directory-style PKI and session resumption for the foreseeable future --- CAs aren't going anywhere, and handling millions of inbound connections is going to be a requirement.

As long as we need a directory-based PKI and a session feature, what complexity can we really cut out of TLS? The record layer is sane and simple; it's more than HTTPS needs, but isn't hard to implement. The handshake is complicated, but it's complicated because it addresses 15+ years of downgrade attacks.

After thinking about that, ask, what's the real benefit of having two (really three, including SSH) mainstream encrypted transports? No matter what happens in any other protocol, a vulnerability in the transport used by browsers is going to be a hair-on-fire emergency. So why not just have everyone use the transport the browser uses?

The last point I'd make is, it's 2013. SSL 3.0 goes back to, what, 1996? The vulnerabilities we're finding in SSL are protocol flaws, and they've taken more than a decade to surface. Who feels better about new protocols?


The NIST contests are great, but I would expect that running such a contest for a hash or a cipher is easier than doing so for most protocols. It is possible to define exactly what a cryptographic hash must do in order to be considered a cryptographic hash. I think one would have a harder time making an analogous characterization of the solution space for HTTP security.

Whatever eventually replaces TLS, I doubt it will be something that could have emerged from a limited-duration contest.


NIST has weighed in on how to use TLS: http://tools.ietf.org/html/rfc6460

I am not aware of any proposed attacks on the approved cipher suites that are anywhere near feasible. TLS deployment is far behind known best practice. We should do something about that.


For sure! I was responding more to the lament that TLS is different from e.g. SHA3. As I see it, this difference is inevitable.


> Are there any modern reasonable alternatives to TLS to use in applications?

We're using NaCl, and love it. With libsodium out now, hopefully more people will give it a try.
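For anyone curious what that looks like in practice, here is a minimal secretbox example. I'm assuming PyNaCl as the binding purely for illustration (libsodium's C API is analogous); SecretBox is XSalsa20-Poly1305, so it's an authenticated cipher with no mode, padding, or MAC-ordering decisions left to the caller:

```python
import nacl.secret
import nacl.utils

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
box = nacl.secret.SecretBox(key)

ct = box.encrypt(b"attack at dawn")   # a random nonce is generated and prepended
print(box.decrypt(ct))                # b'attack at dawn'; tampering raises CryptoError
```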


As a short-term workaround, the client/server could randomly change the order of the request/response headers, or move the cookie to near the end of the request/response (where it is harder to recover).

They could also add "invalid" headers of random length to push the cookie around, making its position hard to find and inconsistent, and increasing the number of requests/responses the attacker would need to sniff in order to break it.
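Something like this, perhaps (the header name and length range are made up; this is just a sketch of the proposal above, not an endorsement of it):

```python
import os
import random

def pad_request_headers(headers: dict) -> dict:
    # Prepend a junk header of random length so the cookie's byte offset
    # shifts from request to request, smearing out position-specific biases.
    junk = os.urandom(random.randint(8, 64)).hex()
    padded = {"X-Padding": junk}          # hypothetical header name
    padded.update(headers)                # cookie now lands at a varying offset
    return padded
```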

The nice thing about this solution is that it could be done in the browser (e.g. Chrome) when it is connected to an RC4 site without any involvement of the server administrator.

It is also backwards compatible.

PS - Yes, I know, STOP USING IT - but in the real world if you told people today then they'll still be using it ten years from now...


This seems like an excellent solution.

edit: I've just noticed that this is something someone with no experience in crypto would say. Sometimes things actually get worse with randomization; for example, if there is a flaw that always reveals bytes 160 and 161, then randomizing the cookie's position means the whole cookie eventually gets revealed instead of just two bytes of it. Before actually implementing this someone with a few crypto publications should take a look at it ;)


It's a nice idea, but if I understand this correctly[1], the predictable bytes are very early in the request, as early as the second byte, which is 'E' (for a request which starts with something like `GET / HTTP/1.1`). So moving the cookie headers might not make much of a difference.

[1]http://security.stackexchange.com/a/31873/7306

EDIT: I think I might be wrong. Trying to read a little more, it seems like the first few bytes are the easiest to predict and then it gets harder... but with an HTTP request, the first few bytes are kinda known anyway (`GET / ...`), so this doesn't give much advantage to the attacker. Perhaps randomizing the position of the cookie header, or perhaps adding more NO-OP headers could help against this kind of attack after all?


I thought that post was saying "right now we can only get the first line, but as we learn more we expect to get more and more data out of the request header, including potentially cookies!"


The problem is the client's headers, not the server's.


I addressed the client's headers. Modifying the browser alters the client's headers.


If you can modify the browser, modify it to use a ciphersuite that doesn't have these problems!

As a shorthand: workarounds are only fair game if they don't require software updates by Microsoft or Mozilla. So, for instance, having Rails treat session tokens as one-time-use does mitigate this flaw (somewhat) and is fair game. But having Firefox randomize client headers is not useful, compared to getting Firefox to reliably do AES-GCM (which I think Firefox may be close to doing already).


I've seen you recommend GCM in a couple of places in this topic. I'm not a crypto guy, so I rely on people like you for this stuff.

Other experts I've read (Colin Percival, Thomas Pornin) have mentioned that GCM (and other encrypt-and-mac) implementations are more likely to have chosen-ciphertext vulnerabilities with respect to CTR-mode-then-MAC.


Can you cite either Colin or Pornin on that? It's easy to find Pornin saying positive things about the AES-GCM TLS ciphersuite.

I don't know what you mean by "chosen-ciphertext vulnerabilities". Authenticated encryption is inherently less vulnerable to chosen-ciphertext, because the ciphertext is integrity-checked. You can't choose an arbitrary new ciphertext, because it won't pass the MAC. In fact, it's the opposite construction --- MAC-then-encrypt --- that causes chosen-ciphertext flaws; a MAC-then-encrypt construct is what got us Lucky 13.
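As a concrete illustration of that point (assuming the Python "cryptography" package purely for the demo): with an AEAD mode like AES-GCM, a tampered ciphertext is rejected before any plaintext is produced, which is exactly the property MAC-then-encrypt designs keep fumbling.

```python
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                    # must never repeat under one key
aead = AESGCM(key)

ct = aead.encrypt(nonce, b"GET / HTTP/1.1", b"associated-data")
tampered = bytes([ct[0] ^ 1]) + ct[1:]
try:
    aead.decrypt(nonce, tampered, b"associated-data")
except InvalidTag:
    print("tampered ciphertext rejected; no plaintext ever released")
```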


Pornin:

I was wrong on Pornin. I was remembering a crypto.stackexchange question in which Pornin participated, but he did not say anything specifically about GCM or encrypt-and-mac modes. It does look like he's referencing another answer that is no longer there. The quote I remembered was from Vennard (below) and this answer was marked as accepted by Pornin.

Vennard:

under encrypt-and-mac method:

> No integrity on the ciphertext again, since the MAC is taken against the plaintext. This opens the door to some chosen-ciphertext attacks on the cipher, as shown in section 4 of Breaking and provably repairing the SSH authenticated encryption scheme: A case study of the Encode-then-Encrypt-and-MAC paradigm. This may not apply specifically to GCM; I'm not sure if the MAC validates plaintext or ciphertext.

http://crypto.stackexchange.com/questions/202/should-we-mac-...

Percival:

> Why use a composition of encryption and MAC instead of a single primitive which achieves both? Because people are very good at writing bad code.

http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....


GCM authenticates the ciphertext, not the plaintext.

Colin doesn't marshal a specific argument against GCM here, but rather a philosophical one. And his argument is wrong: if you look at the histories of SSL/TLS, SSH, and Tor, you find that the stuff that goes wrong is in code that tries to do simple stuff like combine a block cipher with a hash MAC (which is exactly what he's arguing for here).

GCM, on the other hand, is a NIST standard; you don't have the degrees of freedom with how you e.g. handle padding, or nonce generation, or when you apply a MAC that you do with bespoke crypto.

Obviously, I agree with Colin that generalist developers shouldn't be writing their own AES-GCM libraries. Where Colin and I differ is that I think generalist developers shouldn't be writing crypto code at all. Leave that stuff to the Adam Langleys of the world.


I think Thomas Pornin's [reply](http://security.stackexchange.com/questions/20464/when-authe...) (and our subsequent back-and-forth) to a question of mine on the security StackExchange highlights exactly how hard it is to Encrypt-Then-MAC yourself properly.

You should use a separate key for the encrypt phase and MAC phase. You must MAC the ciphertext, the IV, the authenticated data, and possibly a specifier for the encryption algorithm. You must also construct the MAC'd string in a way that prevents tampering with field boundaries.
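A hedged sketch of that checklist (my own illustration, not a vetted construction; it assumes the Python "cryptography" package for AES-CTR): separate keys, every field length-prefixed, and the MAC computed over a version tag, the IV, the associated data, and the ciphertext:

```python
import hashlib
import hmac
import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _lp(field: bytes) -> bytes:
    # Length-prefix each field so boundaries can't be shifted by an attacker.
    return struct.pack(">I", len(field)) + field

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, aad: bytes, pt: bytes):
    iv = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ct = enc.update(pt) + enc.finalize()
    tag = hmac.new(mac_key, _lp(b"v1") + _lp(iv) + _lp(aad) + _lp(ct),
                   hashlib.sha256).digest()
    return iv, ct, tag
```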


Just to be clear, these are all details GCM takes care of.


Yes, absolutely. Use GCM (or XSalsa20Poly1305) whenever possible.


And if it breaks connections to servers set up to use RC4 specifically?

Sure, the browser should stop "suggesting" they use RC4. That is the browser's right. But if the server decides to use it anyway then they use it.

Also you kind of break your own rule. If we cannot suggest things which Microsoft or Mozilla have to do then we cannot suggest they alter their ciphersuite either...


The point of my rule is that if you're going to push new client code, you push a real fix, not a workaround.


Why not push both?

Why can't browsers "suggest" not using RC4 any more, and, when they still end up using RC4 (as they almost certainly will), apply the workaround?


Then it seems like the right solution is to push TLS 1.2 + AES-GCM along with fixes for Lucky 13, and use CBC for everything before 1.2 and GCM for everything after it.


> However, recent academic work has uncovered many small but significant additional biases running throughout the first 256 bytes of RC4 output.

Didn't we know about the RC4 weaknesses of the first few bytes since WEP?


That was the first handful of bytes. This is the first 256 bytes.


Does this require the authentication cookie to be constant? If, for example, I issue a new cookie to the client every connection then this is mitigated?


No, there isn't a reason why a session cookie needs to remain constant forever. I think rotating the cookie on every request would be challenging (because, at any given time, there may be several requests active), but it's very easy to rotate the cookie every couple of minutes. Such rotation would mitigate all attacks that rely on forcing browsers to submit thousands of requests.

Web applications that use HTTP Authentication cannot be fixed in this way, because you cannot change the password regularly. Other protocols that carry plain-text passwords (over SSL) may be even more vulnerable, for the same reason. For example, authenticated SMTP may be the worst case if the attacker can consistently force an automated client to reconnect and try again.
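One hypothetical way to implement the "rotate every couple of minutes" idea (the names and window length below are made up): derive the cookie from a server-side secret and a time bucket, and accept the current and previous buckets so in-flight requests still validate.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"hypothetical-server-secret"   # in practice: random, stored server-side
WINDOW = 120                                 # rotate roughly every two minutes

def cookie_for(user_id: str, bucket: int) -> str:
    return hmac.new(SERVER_KEY, f"{user_id}:{bucket}".encode(),
                    hashlib.sha256).hexdigest()

def current_cookie(user_id: str) -> str:
    return cookie_for(user_id, int(time.time()) // WINDOW)

def cookie_is_valid(user_id: str, presented: str) -> bool:
    now = int(time.time()) // WINDOW
    return any(hmac.compare_digest(presented, cookie_for(user_id, b))
               for b in (now, now - 1))      # accept the previous bucket too
```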


That probably does mitigate the attack, with the proviso that a MITM can keep cookies from rotating by preventing requests from hitting the target.


If the MITM can do that, it doesn't need to attack cookies, does it? It can just impersonate the remote site and steal user-entered credentials. Sharp-eyed users or up-to-date browsers might notice the missing HTTPS on popular sites, and 2FA limits the damage, but in general a malicious WAP has many options.

Or I could be very wrong about this. Please advise.


No, the MITM can be choosy about what traffic it relays and allow the attack to run without causing any of the connections to complete. Think network-layer MITM instead of transport-layer MITM.


The other catch with that approach is that requests can be simultaneous, fail, and/or arrive out-of-order, so the server would need to accept any authentication cookie that it had recently sent to the client, not just the last one.

Not impossible, but not easy.



