More importantly, if you're dealing with anything critical, start with the assumption that your code is full of bugs and will fail in ways you haven't even thought of. Then make sure you have fail-safes and procedures in place outside of your code to catch these failures and deal with them as early as possible.
In some ways this is worse than writing code for something like the Therac-25: there are hostile people out there who will actively try to destroy you. If they can cause you $10 million in damages to get $10,000 for themselves, they will consider that a good day.
This was the guy who wrote his own SSH server (in PHP, FWIW) and put it into production, right? This whole thing is a disaster waiting to happen.
Go kind of makes you do this. Last weekend I was writing some code, ran the tests, and saw that they failed. I debugged my test harness for a while, only to find that the bug was actually in my real code. (Overall I kind of like the strategy; I'd much rather debug my own for-loop-over-test-data than someone else's. But it does lead the mind down different paths than when you use something like JUnit/Hamcrest.)
Incidentally, this is why "test first" is more than just a methodology for selling high-priced consultants. At least it lets you see your tests fail and then pass, rather than just pass. Lots of common patterns pass in the presence of incorrect code.
An example that a coworker was complaining to me about recently:
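Roughly this shape (a sketch of the pattern, not the exact code; the names come from later in the thread):

    public function testFooBarException() {
        try {
            thisShouldThrowFooBarException();
        } catch (FooBarException $e) {
            // this is the exception we wanted
        }
    }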
Can you spot the bug? The test still passes even if thisShouldThrowFooBarException doesn't throw an exception. Oops.
I personally avoid this by checking that I can make the test fail when I expect it to fail, by editing some values or commenting something out. But that doesn't scale, that only saves you once.
With PHPUnit you can write:

    /**
     * Tests that thisShouldThrowFooBarException throws FooBarException
     *
     * @expectedException FooBarException
     */
    public function testFooBarException() {
        thisShouldThrowFooBarException();
    }
but I much prefer:

    /**
     * Tests that thisShouldThrowFooBarException throws FooBarException
     */
    public function testFooBarException() {
        $this->setExpectedException('FooBarException');
        thisShouldThrowFooBarException();
    }
Much more explicit, and you're less likely to miss it when trying to grok someone else's unit test.
Now you're only asserting that *some* statement in the test throws the exception, anywhere in the code under test. That's significantly weaker, and I've seen it mask real problems in code.
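For example (doSomeSetup() is a hypothetical helper, just to show the shape of the problem):

    public function testFooBarException() {
        $this->setExpectedException('FooBarException');
        doSomeSetup();                    // hypothetical setup; if this happens to throw
                                          // FooBarException for an unrelated reason...
        thisShouldThrowFooBarException(); // ...the test passes without this call ever running
    }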
I think using plain try something/fail/catch is clearer, or using Closures and an assertThrows if your language supports it.
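PHPUnit doesn't ship an assertThrows, but a closure-based one is only a few lines (a sketch; the helper name is borrowed from JUnit and it lives on the test case itself):

    // Hand-rolled helper: run the callable and assert that it throws
    // an exception of the expected class.
    public function assertThrows($expectedClass, $fn) {
        try {
            $fn();
        } catch (Exception $e) {
            $this->assertInstanceOf($expectedClass, $e);
            return;
        }
        $this->fail("Expected $expectedClass was not thrown");
    }

    public function testFooBarException() {
        $this->assertThrows('FooBarException', function () {
            thisShouldThrowFooBarException();
        });
    }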
You can always revert to the old try/catch/fail if you need to test something more complex (e.g. testing the exception's message), but most exception tests fall into the simple case shown above.
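For reference, that long form with a message check looks something like this (the message string is made up):

    public function testFooBarExceptionMessage() {
        try {
            thisShouldThrowFooBarException();
            $this->fail('Expected FooBarException was not thrown');
        } catch (FooBarException $e) {
            // anything the exception carries can be checked here
            $this->assertEquals('foo bar went wrong', $e->getMessage());
        }
    }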
> avoid this by checking that I can make the test fail when I expect it to fail, by editing some values or commenting something out.
Whenever I do this, I feel awful, because:
> But that doesn't scale, that only saves you once
You need to fuzz test your tests. (I always forget the name of the awesome Java tool for this....) If you can randomly mutate the code under test (negate a boolean conditional, etc.) and your tests still pass, then your tests have failed you.
I often deliberately break something and make sure it fails before accepting that my test actually tests for what I think it does. Not perfect, but it gets you some of the way there.
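With the example from upthread, that means temporarily sabotaging the code under test and checking that testFooBarException goes red (the function body below is made up purely for illustration):

    // Temporarily sabotaged version of the function under test: it no longer throws.
    function thisShouldThrowFooBarException() {
        // throw new FooBarException('foo bar went wrong');  // real behaviour, commented out
        return;
    }

If testFooBarException still passes against that, it isn't testing what you think it is.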
EDIT: NIH is pretty bad overall, but it's especially terrible in crypto, where 'useless' and 'useful' take on something akin to binary values, rather than a continuum.
Remember, when working with bitcoin, that transactions are _irreversible_. Even in the real billion-dollar finance world, transactions have been reversed because of serious software errors. Not in bitcoin.