Excuse the prolix reply; I have a flight to catch.
As I wrote in my blog post, I have a lot of respect for Thomas. He's who I usually point people at when they want their code audited. I really hate reading other people's code and I trust Thomas (well, Matasano) will do a good job.
two different kinds of practical cryptography: cryptographic design and software design
Agreed.
Colin happens to work on both levels. But most people work on one or the other.
I'm generally writing for an audience of people who already know how to write software, but want to know something about crypto. So I take one as given and focus on the other.
modern cryptographic software developers work from a grab-bag of '80s-'90s-era primitives
Right, and that's exactly what I'm trying to change through blog posts and conference talks. We know how to do crypto properly now!
This is an AES CTR nonce reuse bug in Colin's software from 2011. Colin knew about this class of bug long before he wrote Tarsnap, but, as with any bug, it took time for him to become aware of it.
To be fair, that was not a crypto bug in the sense of "got the crypto wrong" -- you can see that in earlier versions of the code I had it right. It was a dumb software bug introduced by refactoring, with catastrophic consequences -- but not inherently different from accidentally zeroing a password buffer before being finished with it, or failing to check for errors when reading entropy from /dev/random. Any software developer could have compared the two relevant versions of the Tarsnap code and said "hey, this refactoring changed behaviour", and any software developer could have looked at the vulnerable version and said "hey, this variable doesn't vary", without needing to know anything about cryptography -- and certainly without knowing how to implement attacks.
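To make "this variable doesn't vary" concrete, here is a hypothetical sketch of the bug class (not the actual Tarsnap code): a refactoring hoists nonce generation out of a loop, so every chunk gets the same nonce. Spotting it requires no cryptography, only noticing that a value meant to change never does. The `encrypt_block` placeholder stands in for a real AES-CTR call; the bug is independent of it.

```python
import os

def encrypt_block(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Placeholder for a real AES-CTR encryption call.
    # The bug below does not depend on what this does internally.
    pad = (nonce * (len(data) // len(nonce) + 1))[:len(data)]
    return bytes(d ^ p for d, p in zip(data, pad))

def encrypt_chunks_correct(key: bytes, chunks: list) -> list:
    out = []
    for chunk in chunks:
        nonce = os.urandom(16)  # fresh nonce generated per chunk
        out.append((nonce, encrypt_block(key, nonce, chunk)))
    return out

def encrypt_chunks_buggy(key: bytes, chunks: list) -> list:
    # BUG: a refactoring hoisted nonce generation out of the loop,
    # so this variable doesn't vary -- every chunk reuses one nonce.
    nonce = os.urandom(16)
    out = []
    for chunk in chunks:
        out.append((nonce, encrypt_block(key, nonce, chunk)))
    return out
```

Diffing the two versions, any developer can see the behaviour change: the correct version emits a distinct nonce per chunk, the buggy one emits the same nonce for all of them.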
Unfortunately, the population of people who can spot bugs like this in 2010s-era crypto code is very limited, because, again, people don't learn how to implement attacks.
Taking my personal bug out of the picture and talking about nonce-reuse bugs generally: You still don't need to learn how to implement attacks to catch them. What you need is to know the theory -- CTR mode provides privacy assuming a strong block cipher is used and nonces are unique -- and then verify that the preconditions are satisfied.
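Why the nonce-uniqueness precondition matters can be shown in a few lines. The sketch below uses a hash-derived keystream as a stand-in for AES-CTR (so it runs with only the standard library); the property demonstrated is the same: encrypting two messages under one (key, nonce) pair means the keystreams cancel, and the XOR of the ciphertexts equals the XOR of the plaintexts, with no key required.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Stand-in for AES-CTR keystream generation: a deterministic
    # stream derived from (key, nonce) plus a running counter.
    out = b""
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out += block
        counter += 1
    return out[:length]

def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"0" * 32
nonce = b"fixed-nonce"  # BUG: the same nonce is used for both messages
p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"

c1 = ctr_encrypt(key, nonce, p1)
c2 = ctr_encrypt(key, nonce, p2)

# Keystreams cancel: (p1 ^ ks) ^ (p2 ^ ks) == p1 ^ p2.
xor_c = bytes(a ^ b for a, b in zip(c1, c2))
xor_p = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_c == xor_p
```

Verifying the precondition "nonces are unique" is exactly the kind of check that follows from knowing the theory, without ever implementing a full attack.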
To be fair, that was not a crypto bug in the sense of "got the crypto wrong" -- you can see that in earlier versions of the code I had it right. It was a dumb software bug introduced by refactoring, with catastrophic consequences
Isn't that the entire basis of 'tptacek's argument, though? That even you, as an expert in both software development and cryptography, accidentally got something wrong? An engineering fault occurred, even to an expert practitioner. This seems to suggest this sort of thing is not just a function of pure science.
EDIT: On a more serious note, isn't crypto both science and engineering? We have the theoretical aspects, etc... Then we have the practical aspects of implementing these systems in production within an ecosystem that is constantly fighting entropy. I declare a draw.
You still don't need to learn how to implement attacks to catch them.
Implementing attacks is a good way to internalize the idea that "Oh shit, this isn't just a theoretical attack, I better be super careful when doing X, Y, and Z."
There is definitely a kind of crypto attack that isn't very practical to learn. For instance, you don't need to understand differential cryptanalysis unless you plan to implement your own block cipher algorithm, which you should never do anyway.