It's interesting to think about, but I'm not as bullish.
Stipulate that we're just talking about X.509 validation. (You can still have "goto fail" with working X.509, but whatever).
Assume we can permute every field of an ASN.1 X.509 certificate. That's easy.
Assume we're looking for bugs that only happen when specific fields take specific values. That's less easy; now we're in fuzzer territory.
Now assume we're looking for bugs that only happen when specific combinations of fields take specific combinations of values. That puts you in hit-tracer, coverage-guided fuzzing territory, at best. The current state of the art in fault injection can trigger these types of flaws (i.e., when Google builds a farm to shake out bugs in libpng or whatever).
Does standard unit testing? Not so much!
Would any level of additional testing help? Absolutely.
But when we talk about building test tooling to the standard of trace-enabled coverage fuzzers, and compare it to the cost of adapting the runtime of a more rigorous language --- sure, Haskell is hard to integrate now, but must it be? --- I'm not so sure the cost/benefit lines up for testing our way to security.
For whatever it's worth to you: I totally do not think code audits are the best way to exterminate these bugs.
I appreciate that testing a TLS stack is a major pain. But I'm a bit confused about "move to Haskell" in this context; sure, it would cut down on the buffer overflows, but TLS stacks usually fall to logic errors and timing attacks, not to buffer overflows. "goto fail" can occur in Haskell too: a chain of conditions can still incorrectly short-circuit.
Also note that Haskell doesn't exactly help in avoiding timing attacks. In a sane cryptosystem, you might be able to implement AES, ECDSA and some other primitives in a low-level language and use Haskell for the rest; but as you know, TLS involves steps like "now check the padding, in constant time" (https://www.imperialviolet.org/2013/02/04/luckythirteen.html). You could certainly implement those parts in C, too, and then carefully ensure that no input to e.g. your X509 parser can consume hundreds of MB of memory, and so forth, but you're going to lose some elegance in the process. (Those problems would admittedly be smaller in OCaml, Ada or somesuch.)
I'd be more interested in something like Colin's spiped - competently-written C implementing a much simpler cryptosystem. If only because even a perfect implementation of TLS would still have lots of vulnerabilities. ;-)
(I think the case for writing applications in not-C is considerably stronger, if only because TLS stack maintainers tend to be better at secure coding than your average application programmer. Like you, I do like writing in C, though.)