Automated tests are the closest thing to actual engineering in software. While communication practices like agile methods are important, to me they still fall outside the scope of actually engineering software. I've found that one of the most significant factors in the quality of the software I've produced is the test coverage of the code, and not just lines of code but the different branches of a function, which honestly feels nearly impossible to cover completely, especially when exceptions or threads are involved.
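To make the line-versus-branch distinction concrete, here is a small illustrative sketch (the function and its name are my own invention, not from any particular codebase). Exception handlers add branches that plain line coverage can easily miss: a test suite can touch every line of this function while never exercising the `except` path at all.

```python
def parse_port(s):
    """Parse a string into a TCP port number, or return None.

    This function has more branches than its line count suggests:
    the int() call can raise (one branch), and the range check
    splits into in-range and out-of-range branches.
    """
    try:
        port = int(s)
    except ValueError:
        # This branch only runs for non-numeric input; a suite that
        # tests "80" and "70000" covers every *line* above and below
        # but still misses this exception branch.
        return None
    if 0 < port < 65536:
        return port
    return None
```

Full branch coverage here needs at least three inputs: a valid port, an out-of-range number, and a non-numeric string, which is exactly the kind of combinatorial growth that makes branch coverage feel impossible once exceptions are involved.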
Some celebrate TDD, but to me it mainly appears to help people avoid dropping the tests altogether.
Automatic static analysis, whether via compiler warnings or dedicated static analyzers, is a really good and quick partial solution, but what I'm really hoping for is a fully automated dynamic analysis solution. What's really exciting is that there are already early attempts at that approach, like american fuzzy lop (http://lcamtuf.coredump.cx/afl/), which actively tries to find new branches in the application being tested.
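The core idea behind AFL's approach, coverage-guided fuzzing, can be sketched in a few lines. This is a toy illustration of the feedback loop, not AFL itself: the target function and branch instrumentation are hypothetical stand-ins, and real fuzzers instrument compiled binaries rather than returning branch sets by hand.

```python
import random

def target(data: bytes) -> set:
    """Toy instrumented target: returns the ids of branches it executed.

    The nested checks mimic a 'magic bytes' condition that random
    inputs almost never satisfy in one step, but that a
    coverage-guided loop can discover incrementally.
    """
    branches = set()
    if len(data) > 0 and data[0] == ord("F"):
        branches.add(1)
        if len(data) > 1 and data[1] == ord("U"):
            branches.add(2)
            if len(data) > 2 and data[2] == ord("Z"):
                branches.add(3)
    return branches

def fuzz(rounds=20000, seed=0):
    """Minimal coverage-guided fuzzing loop.

    Mutate a random input from the corpus; if the mutant reaches a
    branch not seen before, keep it as a new seed. This feedback is
    what lets the fuzzer climb through nested conditions one byte
    at a time instead of guessing all of them at once.
    """
    rng = random.Random(seed)
    corpus = [b"AAAA"]
    seen = set()
    for _ in range(rounds):
        parent = rng.choice(corpus)
        child = bytearray(parent)
        child[rng.randrange(len(child))] = rng.randrange(256)
        child = bytes(child)
        new_branches = target(child) - seen
        if new_branches:
            seen |= new_branches
            corpus.append(child)  # promote inputs that found new coverage
    return seen, corpus
```

Without the coverage feedback, stumbling on `b"FUZ"` by blind mutation of a four-byte input is a roughly one-in-sixteen-million event per attempt; with it, each prefix byte is discovered independently, which is the insight that makes this style of dynamic analysis practical.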