Overclocking SSL (imperialviolet.org)
86 points by n-named on Oct 30, 2010 | 21 comments



The article states that SSL with a 1024-bit RSA key is computationally inexpensive, taking less than 1% of the CPU time on his current web servers. This is correct; however, 1024-bit RSA is no longer considered secure enough. It is within practical reach of modern factoring attacks.

In fact, NIST has recommended that all SSL systems be upgraded to 2048-bit keys by January 1st, 2011. This recommendation will have "teeth" for financial institutions that must comply with PCI guidelines, and for healthcare organizations that must comply with HIPAA, ARRA, and HITECH regulations.

The problem is that encryption with larger keys is much more than 2x as expensive. Compute cost scales roughly with the cube of key size, and in practice encryption with a 2048-bit private key can be as much as 30x more expensive than with a 1024-bit key.

I agree that website operators should encrypt everywhere; however, encryption with weak keys might soon be almost as bad as no encryption at all, by giving people a false sense of security.

If you are running a serious-volume website, you pretty much need SSL accelerators to handle the traffic with 2048-bit encryption. 1-2% CPU load on your web servers can be managed; 30-60% cannot.


> The problem is that encryption with larger keys is much more than 2x as expensive. Compute cost scales roughly with the cube of key size, and in practice encryption with a 2048-bit private key can be as much as 30x more expensive than with a 1024-bit key.

How'd you get 30? From what you said, it sounds like it should be 2^3 = 8 times as expensive.


  1024^3 = 1073741824
  2048^3 = 8589934592
  8589934592 / 1073741824 = 8

It depends on the implementation. Some implementations are more computationally expensive than others.

See: http://devcentral.f5.com/weblogs/macvittie/archive/2010/09/1...

An average 5x reduction in transactions per second when moving from 1024-bit to 2048-bit keys.


I'm also curious where the 30x comes from. If cost is indeed the square of key size, I would expect:

  Cost(1024) = (1024*x)^2
  Cost(2048) = (2048*x)^2 = (2*1024*x)^2 = 4 * (1024*x)^2
Which is a 4x increase in cost, not 30x.


Cube, not square. Also, implementation matters.

2^3, or 8 times more expensive.
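
As a quick empirical sanity check (a rough sketch; the exact ratio depends on the bignum implementation, and real RSA implementations use CRT, which changes the constants):

  import secrets, timeit

  def modexp_time(bits):
      # Random full-width odd modulus and exponents; illustrative
      # stand-ins for an RSA modulus and private exponent, not real keys.
      n = secrets.randbits(bits) | (1 << (bits - 1)) | 1
      d = secrets.randbits(bits) | (1 << (bits - 1))
      m = secrets.randbits(bits - 1)
      return timeit.timeit(lambda: pow(m, d, n), number=20)

  ratio = modexp_time(2048) / modexp_time(1024)
  print(f"2048-bit vs 1024-bit modexp: {ratio:.1f}x slower")

On CPython this usually lands close to the theoretical 8x, a bit lower where subquadratic multiplication kicks in.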


That's an interesting point: as computers get faster at negotiating SSL sessions, they also get faster at breaking SSL keys.

I do hope the growth rates of those two are not the same, though! So eventually one side will win.


Complexity of breaking (presumably prime factorisation, in the case of RSA) is super-polynomial in key size (the best known factoring algorithms are sub-exponential, but still far worse than polynomial), whereas encryption has polynomial complexity. We'll be fine.


Would the codebreakers winning really be a better outcome than a constant arms race? :)


I don't know how cryptography works, so I'd like to know why the key length for all cryptography schemes seems to go up in powers of 2. What's the reason one couldn't have a 1300 bit RSA key, for example?


Fast modmuls for crypto keys are done using Montgomery multiplication; since the Montgomery constant R is chosen as a power of two, the mod-R and divide-by-R operations are all masks and shifts after the initial transformation.
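
A minimal sketch of the idea in Python (names are made up for illustration; real implementations work on word arrays, not Python ints):

  def montgomery_reduce(T, N, R, N_prime):
      # Computes T * R^-1 mod N, where R = 2^k > N, N is odd,
      # and N_prime = -N^-1 mod R is precomputed once per key.
      m = ((T & (R - 1)) * N_prime) & (R - 1)   # "mod R" is just a mask
      t = (T + m * N) >> (R.bit_length() - 1)   # "divide by R" is just a shift
      return t - N if t >= N else t

  N = 97                           # small odd modulus, for illustration
  R = 128                          # the power of two just above N
  N_prime = (-pow(N, -1, R)) % R   # Python 3.8+ modular inverse
  a, b = 42, 57
  aR, bR = (a * R) % N, (b * R) % N                 # into Montgomery form
  abR = montgomery_reduce(aR * bR, N, R, N_prime)   # product, still in form
  assert montgomery_reduce(abR, N, R, N_prime) == (a * b) % N

The transformation in and out of Montgomery form is paid once per exponentiation, while the mask-and-shift reduction runs on every multiply inside the square-and-multiply loop.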


I don't see any reason why you couldn't. It seems like a 1280-bit key should be secure for at least a few more years.


The article seems to contradict this: "An abbreviated handshake saves the server performing an RSA operation, but those are cheap anyway." Given that they are performing aggressive session caching, perhaps that 1% is mostly symmetric crypto.


The problem is that even if you cache sessions, you still have to do the math to encrypt every packet. Encryption cost scales with the cube of key size. Please, contradict me; if you can, many cryptographers and mathematicians will thank you... :-)

Say what you will, but the difficulty of brute-forcing a key is based on its length. You might be able to optimize encryption at a given length, but the key will still be vulnerable if it is only 1024 bits (as the article suggests).


For SSL, aren't the individual packets encrypted with a symmetric cipher, e.g. AES, RC4, 3DES, etc.?


Yes, the data stream is encrypted using a symmetric cipher, as the linked article says. The overhead of this is trivial.

It's the key exchange at the start of a session that requires the server to perform expensive RSA operations, which is why session caching helps.
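
A rough sketch of the two cases with Python's ssl module (3.6+), using example.com as a stand-in host:

  import socket, ssl

  ctx = ssl.create_default_context()
  host = "example.com"  # stand-in host

  # First connection: full handshake, including the RSA operation.
  with socket.create_connection((host, 443)) as raw:
      with ctx.wrap_socket(raw, server_hostname=host) as tls:
          session = tls.session  # keep the negotiated session around

  # Second connection: abbreviated handshake, reusing the cached session.
  with socket.create_connection((host, 443)) as raw:
      with ctx.wrap_socket(raw, server_hostname=host, session=session) as tls:
          print("resumed:", tls.session_reused)  # True if the server cooperates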


There are some server-side implementation notes here, focused on Apache: http://journal.paul.querna.org/articles/2010/07/10/overclock...

http://www.imperialviolet.org/2010/09/05/blacklisting.html is also interesting; it says Chrome will use a blacklist to avoid using False Start on some domains.

Also, does anyone know which older clients require that the server cache the session information? The article doesn't say.


Does the extra ~1 KB added by OCSP stapling put enough pressure on the handshake to blow the initial TCP congestion window? It seems impossible to use a 2048-bit key, include an intermediate certificate, and turn on OCSP stapling while staying under 4 KB.
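
Rough back-of-envelope, assuming a typical initcwnd of 3 segments and a 1460-byte MSS (both common defaults at the time):

  3 segments x 1460 bytes        ~ 4380 bytes in the initial window
  2048-bit leaf certificate      ~ 1.5 KB
  + intermediate certificate     ~ 1.5 KB
  + stapled OCSP response        ~ 1 KB
  total                          ~ 4 KB, before any other handshake overhead

So yes, it looks very tight; spill over and you pay an extra round trip.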


Interesting article; it led me on a bit of a side tangent from my usual Monday morning.

Based on people saying that 1024-bit RSA is no longer considered secure, I hunted down what I believe is at least one source from nist.gov.

http://www.nist.gov/manuscript-publication-search.cfm?pub_id...

With the recommended minimum due to go up in less than a few months, I was a little surprised to see that Google, having recently trumpeted their "encrypted" search, is using 1024 bits.

https://encrypted.google.com/

If you check their certificate you will see it is 1024-bit.

I was hoping to compare it to https://duckduckgo.com, yet it turns out they too are running 1024-bit keys.

I hasten to add that I don't really understand the significance of the different ciphers used, etc.; this is purely based on key size in bits.


Elliptic curve cryptography can get away with fewer bits in its keys than RSA at the moment; we do a better job of breaking RSA than elliptic curves. I don't know about Diffie-Hellman, which is based on discrete logarithms.
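
For reference, the rough NIST equivalences (from SP 800-57; treat the ranges as approximate):

  Symmetric   RSA/DH modulus   ECC key
  80 bits     1024 bits        160-223 bits
  112 bits    2048 bits        224-255 bits
  128 bits    3072 bits        256-383 bits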


Can someone explain this statement: "remember that there's no point using AES-256 with a 1024-bit public key"

Why not?


Because once the 1024-bit RSA is cracked, it doesn't matter how strong the symmetric algorithm is, as you'll have its key. Analogy: a virtually uncrackable safe (AES-256) whose combination is written on a piece of paper stuck in a box protected by a weak padlock (1024-bit RSA).
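
A hedged sketch of that hybrid pattern, using the third-party "cryptography" package (the padlock analogy in code; the payload is illustrative):

  import os
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  aes_key = AESGCM.generate_key(bit_length=256)

  # The bulk data is encrypted symmetrically (cheap and strong)...
  nonce = os.urandom(12)
  ciphertext = AESGCM(aes_key).encrypt(nonce, b"the actual payload", None)

  # ...but the AES key itself is only as safe as the RSA wrapping it.
  wrapped_key = rsa_key.public_key().encrypt(
      aes_key,
      padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                   algorithm=hashes.SHA256(), label=None))

Crack the RSA and wrapped_key gives up aes_key; AES-256's strength never comes into play.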



