
If not expert cryptographers, then at least expert C programmers. I was trying to be diplomatic, but blunt is perhaps better for clarity: no expert C programmer should have any trouble reading and comprehending the syntax of the code in question, which is mainly what the parent poster was calling out. https://www.gitorious.org/gnutls/gnutls/commit/0fba2d908da6d...

It's probably tempting to think that unit testing might save us from security failures. People have written many words on both sides. But how many of us have actually tried it? No one who hasn't tried (and tried hard) to see whether unit testing works has any business arguing for (or against) unit testing. We simply don't have enough data to know whether it's valuable. And the fact that unit testing works in domains other than security isn't evidence that unit testing would produce any kind of benefit in computer security.

What's getting lost in the noise here, I think, is that it's important to zoom out and consider how broad and difficult computer security really is. Think about all the different situations that require security. Even something as seemingly simple and straightforward as an OS-level copy-paste mechanism requires thinking about security.

Could you write unit tests to cover all security situations? How about all possible situations? If not all possible situations, then why? What's the difference between a situation that you can unit test and one that you can't? Is it possible to write a unit test that checks whether there's a sequence of steps a user could take to exploit an OS-level copy-paste mechanism? Can you unit test the efficacy of a TCP/IP stack? Is it possible to unit test whether the Debian OpenSSL PRNG is being fed with sufficient bits of entropy? Can you unit test whether the colors of your website are pleasing?

The most interesting answer to those questions isn't "yes" or "no." The most interesting answer is "maybe." Because "maybe" means nobody knows, and that's fundamentally interesting. It may well be possible to write unit tests for domains that nobody has thought of yet. (Yes, even tests that automatically infer whether your website's colors are pleasing to humans.)

Could unit testing have saved us from this one particular case? Maybe. But is it possible to write unit tests in the general case? Maybe not. But you, I, and everyone else don't know until we actually try. It could be impossible, or if not impossible then infeasible, or if not infeasible in a dynamic language then infeasible in C libraries. On the flip side, if you investigate and come up with a way of writing unit tests in the general case for security libraries, then you may have just discovered a breakthrough in how cryptographers can write security libraries. Because if it's possible to write useful unit tests in the general case for security libraries, then you will have demonstrated something that most cryptographers generally disbelieve, the same way that people disbelieve that it's possible to unit test whether colors are pleasing.

So get to work! Nobody is stopping anyone from going, right now, and trying to write those unit tests. Try it, write a blog post about your experience and the hardships, and write about what you've learned. I guarantee it will be interesting.

All I'm saying is that we should be experienced before we advocate or dismiss an idea, and the only way to get experience is to try it out.




I haven't looked at the gnutls code in depth, but I looked at Apple's 'goto fail' code. I have a write-up here: http://pgcode.blogspot.de/2014/02/lots-has-been-said-about-a...

Cryptographic code introduces new concerns (e.g., is the code safe from timing attacks, or from leaking information via cache usage?). But it is also still code, and susceptible to common coding errors. Both this bug and the Apple one are simple coding errors.
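To make the timing concern concrete, here's a minimal sketch (illustrative only, not from either codebase) of why a naive byte comparison leaks information while a constant-time one doesn't:

    #include <stddef.h>
    #include <stdint.h>

    /* Naive comparison: returns as soon as a byte differs, so the
       running time leaks how many leading bytes of, say, a MAC
       matched. */
    int naive_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Constant-time comparison: touches every byte no matter what,
       so timing reveals nothing about where the inputs differ. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t n)
    {
        uint8_t diff = 0;
        size_t i;
        for (i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }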

It is definitely possible to write unit tests that can help catch errors like this. I wrote some unit tests for the Apple code while refactoring it. Anyone trying to write unit tests to cover all execution paths in SSLVerifySignedServerKeyExchange would have caught that bug (since there was unreachable code). I suspect the same is the case here.
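To make that concrete, here's a stripped-down sketch (hypothetical names; only the control-flow structure mirrors Apple's bug, not their actual API) of how any test that exercises the must-fail path trips over the stray goto:

    #include <assert.h>

    /* Hypothetical stand-in for the buggy check. The duplicated
       "goto fail;" makes the signature check below dead code, and
       err is still 0 when we jump. */
    static int verify_exchange(int hash_ok, int sig_ok)
    {
        int err = 0;
        if ((err = hash_ok ? 0 : -1) != 0)
            goto fail;
            goto fail;                     /* the stray line */
        if ((err = sig_ok ? 0 : -1) != 0)  /* unreachable */
            goto fail;
    fail:
        return err;
    }

    int main(void)
    {
        assert(verify_exchange(1, 1) == 0); /* happy path passes */
        /* A bad signature must be rejected; this assert fires on
           the buggy code, exposing the unreachable check. */
        assert(verify_exchange(1, 0) != 0);
        return 0;
    }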

The bigger take-away for me was that the code could be improved a lot. If you have a look at the code after my refactoring, it is dramatically simpler, in my opinion.


Your blog post is an excellent example of how code like this should be cleaned up.


Unit testing is not terra incognita; it's industry best practice. Personally, I write at least twice as many lines of code in tests as I do in the implementation under test. That's the nature of test-driven development.

If you doubt that TLS can be tested, may I direct your attention to the extensive suite of unit tests in Go's crypto package? For example, http://golang.org/src/pkg/crypto/x509/x509_test.go and http://golang.org/src/pkg/crypto/x509/verify_test.go among many other source files.


Nobody claimed that we can fix all security problems with unit testing. Unit testing should be used in addition to auditing, reviewing, and other techniques. However, unit testing can dramatically improve the quality of code, including avoiding security pitfalls and mistakes.

What we need to do is make unit testing mandatory for all security-critical code, and ensure good coverage.

This particular bug would easily have been caught by anyone writing rudimentary unit tests.

It's sad that there are no regression tests for these changes either, meaning we remain susceptible to such bugs in the future.
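For what it's worth, a regression test for this class of bug doesn't even need real certificates. As I understand it, a helper that should answer true/false leaked a negative error code, which callers then treated as "true". A sketch with made-up names:

    #include <assert.h>

    /* Hypothetical stand-in: should answer "is this cert a CA?"
       with 1 or 0. */
    static int check_if_ca(int parse_result)
    {
        if (parse_result < 0)
            return parse_result;   /* bug: should fail closed and
                                      return 0 here */
        return parse_result != 0;  /* 1 if the cert is a CA */
    }

    static int cert_is_trusted(int parse_result)
    {
        /* Caller treats any non-zero return as "is a CA". */
        return check_if_ca(parse_result) != 0;
    }

    int main(void)
    {
        assert(cert_is_trusted(1));   /* CA: trusted */
        assert(!cert_is_trusted(0));  /* non-CA: rejected */
        /* The error path must fail closed; this assert fires on
           the buggy code because -1 != 0 reads as "trusted". */
        assert(!cert_is_trusted(-1));
        return 0;
    }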



