There are tables at the end describing the algorithms' key sizes. Be mindful that the unit is "(Size in bytes)", not bits. They cover that these algorithms use bigger keys, but we're talking about 4 kilobytes.
Thank you for the update. This is really useful. It would be really great if you could commit to an update a few years down the road at the latest. E.g. "I will release an update no later than August 15th 2027". Three years in this fast-changing world shouldn't be such a burden, and it would help settle many discussions somewhat reasonably with an appeal to authority :-D No seriously, having something that can be considered current advice would be great.
There's a string of these posts going back to 2009. Not "updated every 3 years", but it looks to me like we at least get an update when important advice has changed. I may have missed some, but from my bookmarks I have:
So not every 3 years, but if you read through you'll notice a _lot_ of each update pretty much says "use the same advice as last time."
It's not clear who wrote the most recent Latacora post, but it's Thomas Ptacek's company, and the original 2009 post was by Colin Percival. If you've been around here for a while you'll probably recognise those names; they're #1 and #60 here: https://news.ycombinator.com/leaders At least in my head, both have serious credibility over many years in this subject space.
The 2018 Latacora post says:
"This content has been developed and updated by different people over a decade. We’ve kept what Colin Percival originally said in 2009, Thomas Ptacek said in 2015, and what we’re saying in 2018 for comparison. If you’re designing something today, just use the 2018 Latacora recommendation."
I started Latacora with Erin and Jeremy in 2016, and wrote the last "Right Answers" post with their name on it, but Erin and I haven't worked there since 2020.
Strong pre-shared keys will remain secure, even against a quantum computer.
WireGuard, for example, lets you add a pre-shared key per peer, which it mixes into the key exchange. WireGuard sessions recorded under such a configuration should remain safe against a future quantum computer, assuming the pre-shared keys stay secret.
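For intuition, a toy Python sketch of the idea (this is not WireGuard's actual Noise_IKpsk2 construction, which uses HKDF over BLAKE2s): mixing the PSK into the key derivation means an attacker who later breaks the Diffie-Hellman step still needs the PSK to recover the session key.

```python
# Toy sketch only: shows "mix the PSK into the KDF", not WireGuard's real handshake.
import hashlib, hmac, os

def extract(salt: bytes, ikm: bytes) -> bytes:
    """One HKDF-extract-style step: HMAC the input keying material under a salt."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

dh_secret = os.urandom(32)   # stand-in for the X25519 shared secret
psk = os.urandom(32)         # the pre-shared key, distributed out of band

chaining_key = extract(b"handshake chaining key", dh_secret)
session_key = extract(chaining_key, psk)   # the PSK is mixed in here
```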
Pre-shared keys are just inconvenient to handle safely.
> Pre-shared keys are just inconvenient to handle safely.
You can transfer PSKs safely and easily with OpenSSH 9.0 (released 2022-04-08) or later, which uses sntrup761x25519-sha512@openssh.com as the default key exchange method.
If your threat model includes someone with a quantum computer intercepting all of your traffic and storing it to decrypt later, you probably don't want to share your keys over a non-PQC channel unless you can guarantee that they haven't started eavesdropping on your traffic yet.
While sntrup761x25519-sha512 is a quantum-safe key exchange, sending a key over it doesn't count. It's not really a "pre-shared" key unless the sharing is done using organic, locally sourced sneakers. Unless FIPS, and then it's boots.
The NSA has a copy of your ciphertexts on their disks today. What could stop them from trying to decrypt it in 5 years' time? It's not like they will be held back by any Terms & Conditions.
The only way you can do any "not after X time" decryption even for honest-ish users is if the decryption involves getting extra key material from some server that erases it or shuts down at some point. But even that doesn't help if someone can break the crypto.
I don't think that is true. Current PFS algorithms are probably all just an inconvenience post-quantum, but I think they suggest strategies where you have to hold a key at the time of the negotiation, or even take part in the negotiation itself, to ever recover the session key, as long as the parties discard it afterwards.
Have crypto agility, so that when you want to transition algorithms, the move is as seamless as possible. You can start using blended or hybrid crypto today, where you use both classical and PQC algorithms simultaneously. For your classical algorithms, adjust your keys' security strength in accordance with your threat model. See CNSA 2.0 for a starting reference. For data in motion, you can use two VPNs, configured appropriately.
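As a rough illustration of the hybrid idea (assumptions: X25519 from the pyca/cryptography package for the classical half, and a placeholder byte string standing in for whatever shared secret your PQC KEM library returns): both secrets feed one KDF, so the session key only falls if both algorithms fail.

```python
# Rough hybrid-combiner sketch; the PQC half is a placeholder, not a real KEM call.
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

ours, theirs = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = ours.exchange(theirs.public_key())  # X25519 shared secret
pq_secret = os.urandom(32)                             # placeholder for an ML-KEM shared secret

# Both inputs go into one derivation; breaking either alone is not enough.
session_key = hashlib.sha256(classical_secret + pq_secret).digest()
```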
There are a number of things you can do today, more than I listed. I suggest you discuss with an appsec person who is familiar with your threat model.
Excellent post; I've always referred people to this series.
I'm curious what the general opinion is on the production-readiness of these solutions. Open Quantum Safe, for example, discourages its use in production, and recompiling nginx to use PQC-BoringSSL feels risky when I'm not intimately familiar with either project ("did I miss a --enable-security flag?").
> the PQ keys are 4 orders of magnitude larger
For McEliece, perhaps, but the algorithms in the tables are "only" 2 orders of magnitude larger.
One interesting thing is that if you look at what companies that want to prepare for Store Now Decrypt Later are doing (see links at bottom) they're pretty much all using the non-production ready OQS. If you believe in hybrid encryption this is mostly okay since a failure in the PQC portion should not cause a breakage in the classical portion. Assuming that OQS has implemented the hybrid protocol correctly.
True, and hybrid cryptography is definitely the way to go.
But there's more to it than just resistance to cryptanalysis: crashes, memory leaks, disabled security features (e.g., ASLR), irregular performance, supply chain attacks...
PQC requires extra code, and every added instruction carries some risk.
“Classical cryptography” used to refer to historical ciphers, Vigenère and the like, tapering off after the World War 2-era cipher machines, and it was definitely not used to describe asymmetric algorithms. There should be a different term for pre- (non-?) quantum cryptography from the modern era. We already suffered the redefinition of “crypto”.
It's a reasonable point but this would probably be a losing battle; at this point terms like "classical security", "classical adversaries", etc. are common in the literature.
To me what's worse is "zk" used to describe applications of verifiable computation with no secrets involved, but that seems like a losing battle also.
Honestly, most cryptographic vulnerabilities today exist because things are stuck in NIST and FIPS regulation. Most vulnerable building blocks are still shipped in order to keep their certification in the first place. Why is there still excitement about their work?
I've always found it a bit disquieting how many times people feel the need to update these "cryptographic right answers" blog posts.
This is what, a fourth or fifth version since 2009?
Meanwhile everything from ubuntu's apt-get to my connection to HN is secured with 2048-bit RSA - an algorithm invented in 1977 and in widespread use since at least 1995.
Am I getting crypto advice that will keep my data safe for 30+ years, if the advice changes every 3 years?
Yes. It's past time for this concept to be put to rest. It started out as a joke and has taken on a life of its own; moreover, it has looped back onto itself, to the point where it's advocating a sort of DIY SOTA vibe that is going to get people hurt --- the opposite of what the joke was going for originally.
I suspect that cperciva, who wrote the first "cryptographic right answers" post in 2009 and doesn't seem to have intended it as a joke, would disagree.
Yes, and I wrote the next two. The joke was about Colin's post --- "these 'right answers' are neither common best practices nor what cryptographic hipsters are advocating". The joke has always been "these should be titled Colin's Right Answers or Thomas' Right Answers [eek]" or whatever.
Nobody has deliberately included bad advice. That's not what I'm saying.
> Meanwhile everything from ubuntu's apt-get to my connection to HN is secured with 2048-bit RSA - an algorithm invented in 1977 and in widespread use since at least 1995.
That’s a bit misleading, considering RSA is only used for certificate verification. Key exchange and symmetric encryption are handled by somewhat more recent algorithms (ECDH / AES-GCM).
The right answer is not always about straight-out security: 2048-bit RSA is not broken and won't be broken for the foreseeable future, but we know that it is much less efficient and more error-prone than e.g. ECDH. So why suggest the former when the latter is a better alternative?
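A toy illustration of that split (not TLS itself; X25519 and AES-GCM via the pyca/cryptography package, with plain SHA-256 standing in for TLS's key schedule): the ephemeral exchange supplies the session key and the AEAD does the bulk encryption, while an RSA certificate would only sign the handshake to authenticate it.

```python
# Toy sketch of "ECDH for key agreement, AES-GCM for the data", not real TLS.
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

client, server = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared = client.exchange(server.public_key())      # ephemeral ECDH
session_key = hashlib.sha256(shared).digest()      # stand-in for TLS's HKDF schedule

nonce = os.urandom(12)
record = AESGCM(session_key).encrypt(nonce, b"GET / HTTP/1.1", None)
assert AESGCM(session_key).decrypt(nonce, record, None) == b"GET / HTTP/1.1"
```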
You should consider these "right answers" as if the question were, "I want to develop a new product today. What cryptographic primitive should I use?"
Even that is more subtle. RSASSA-2048-PKCS#1v1.5 is fine if leaking that you signed the same plaintext more than once isn't a threat. If that is a threat, then you need RSASSA-2048-PKCS#1v2 (a.k.a. RSA-PSS-2048).
RSAES-2048-PKCS#1v1.5 has implementation-dependent security; implementations keep getting broken due to padding oracle attacks. RSA-KEM-2048 is fine, though slower than ECDH.
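For concreteness, a minimal sketch of the randomized PSS variant using the pyca/cryptography package (SHA-256 and max salt length are just common defaults here, not a requirement from the comments above):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"the same plaintext, signed twice, yields different signatures under PSS"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# verify() raises InvalidSignature on failure and returns None on success.
private_key.public_key().verify(signature, message, pss, hashes.SHA256())
```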
Perhaps the meta-message here is that you absolutely have to design for cryptographic agility.
You may not need to jump to the next best thing every 3 years, but as certain constructs are proven weak, you’ll need to start migrating systems and data off of them to modern equivalents.
> you absolutely have to design for cryptographic agility
Yes, but for heaven’s sake don’t design something with “cipher suite negotiation” which has been an endless source of vulnerability over the years in SSL/TLS, IPsec, PGP…
Instead one should advance the version of the entire protocol or file format when you need to upgrade the cryptography. Then you deprecate old versions as quickly as possible. WireGuard and age have no algorithm negotiation at all.
The best way to do cryptographic agility is to associate the algorithm with the key and negotiate keys (from a given set) only. Google’s Tink library does this very well. See https://neilmadden.blog/2018/09/30/key-driven-cryptographic-... for some more background. Version numbers are just algorithm identifiers in another form.
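A minimal sketch of that key-driven idea (not Tink's actual API; AEADs from the pyca/cryptography package, with invented names): the message carries only a key ID, and the keyring entry, which the defender controls, pins the algorithm, so there is nothing for a peer to negotiate.

```python
# Hypothetical keyring: rotating algorithms means adding a key, not negotiating.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

KEYRING = {
    "k1": (AESGCM, os.urandom(32)),             # older key, still decryptable
    "k2": (ChaCha20Poly1305, os.urandom(32)),   # current primary key
}
PRIMARY = "k2"

def encrypt(plaintext: bytes) -> tuple[str, bytes, bytes]:
    aead_cls, key = KEYRING[PRIMARY]
    nonce = os.urandom(12)
    return PRIMARY, nonce, aead_cls(key).encrypt(nonce, plaintext, None)

def decrypt(key_id: str, nonce: bytes, ciphertext: bytes) -> bytes:
    aead_cls, key = KEYRING[key_id]   # the key record, not the message, picks the algorithm
    return aead_cls(key).decrypt(nonce, ciphertext, None)

assert decrypt(*encrypt(b"hello")) == b"hello"
```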
> Am I getting crypto advice that will keep my data safe for 30+ years, if the advice changes every 3 years?
If you drill into the details, a lot of things haven't changed. E.g., the key size and HMAC recommendations go back to the original version (among others).
The “right answer” also depends on the attack vector. If you are trying to protect against nation-state-level tampering or data leaks for two full generations, the enemy moves much quicker, and so you will need to advance much more quickly to stay ahead of whatever you think they might do in the future.
If you are only trying to prevent your ISP from seeing your traffic, which they are not trying particularly hard to do, then that level of protection would be overkill.
> If you are trying to protect against nation-state-level tampering or data leaks for two full generations, the enemy moves much quicker, and so you will need to advance much more quickly
It's supposed to protect my users' data for two full generations, but the advice is only good for 3 years?
On the one hand, there is good in "This is the best prediction of the future we have right now and we update it as our knowledge changes".
They also have a consulting product to sell you. When you build your entire society on "greed as a virtue", it is reasonable to assume it is a primary motivation for a profit-seeking entity.
The issue here is that MD5 and SHA-1 are broken for collisions, but no one has figured out an actual attack on HMACs based on them. The linked paper is an attempt to explain why.
I think the phrasing in this post could be better, but the basic observation is sound: if the last use of a weak hash function in your codebase is in HMACs, then it’s better to upgrade to a stronger underlying hash function and apply a blanket ban to the weak ones. Similarly, in a greenfield codebase, there’s no reason to pick an HMAC construction based on a weaker hash when collision-resistant ones are universally available.
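Concretely, with Python's standard library the underlying hash is a one-argument choice, which is part of why there's no reason to keep MD5 or SHA-1 around in new code:

```python
import hashlib, hmac

key, msg = b"shared secret key", b"message to authenticate"

# HMAC-SHA-256 instead of HMAC-MD5/HMAC-SHA-1: same API, collision-resistant hash.
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Verify with a constant-time comparison.
expected = hmac.new(key, msg, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```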
What we now have as new standards is already the result of multiple iterations of key-size reductions and performance improvements. But you'll likely not get drop-in replacements for existing public-key crypto in post-quantum variants. Signatures appear to be even more challenging size-wise than encryption in the post-quantum world.
It may be worth noting that the algorithms now chosen, which are primarily lattice-based, are already kind of a compromise. They're not the ones whose security is most trusted, which would have been McEliece and SPHINCS+ (though the latter has been standardized as a fallback). But those come with key and signature sizes that are entirely impractical for the most common use cases.
It appears most of the crypto community has come around to thinking that the somewhat smaller lattice algorithms are "almost certainly secure". But surely there's at least one famous cryptographer raising his voice to say he still has concerns.
> "Asymptotically, the size of an mKyber multi-recipient ciphertext is 16 times smaller than the sum of the sizes of N Kyber ciphertexts.”
There’s a whole zoo of useful variants on the “KEM” idea, but sadly NIST decided to standardise the least flexible variant. See my blog from a few years ago for some background on the literature: https://neilmadden.blog/2021/02/16/when-a-kem-is-not-enough/
AIUI part of the problem with size is due to hybrid setups. Size could probably be reduced if you only used a PQC algorithm instead of combining it with existing crypto.
Great post! I was worried about this for a long time, as I'm also working in the DeFi field. It's great that governments are taking the quantum computing threat seriously.