The first step is to create an empty test suite with at least one test that runs. That way, when you go to add a real test later, there is no barrier: the suite already exists.
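As a minimal sketch of what that can look like (assuming Python with pytest as the runner; the file name `test_smoke.py` is just an example), the whole "suite" can start as one trivial test:

```python
# test_smoke.py -- the smallest possible suite: one test that always runs.
# Its only job is to prove the test runner is wired up, so that adding the
# first real test later is a one-line change, not a setup project.

def test_suite_runs():
    assert True
```

Running `python -m pytest` should report one passing test; from then on there is always a suite to add to.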
You then need to follow two rules. First, write tests for all new features. Second, before you fix any bug, add a failing test that exposes it.
By doing these two things you will end up testing the two most important areas of your code: new code (which is always the buggiest) and code that has recently proven to be buggy.
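A sketch of the second rule in practice (the names here, such as `parse_price` and the `prices` module, are hypothetical): suppose a bug report says empty input crashes a parser. The first move is a test that reproduces the failure:

```python
# test_parse_price.py -- regression test written *before* the fix.
# Hypothetical bug report: parse_price("") raises ValueError instead of
# returning None. This test fails on the buggy code, passes once the fix
# lands, and afterwards guards against the bug coming back.
from prices import parse_price  # hypothetical module under test

def test_empty_input_returns_none():
    assert parse_price("") is None
```

Run it and watch it fail for the right reason, then fix the code; the test stays behind as a tripwire if the bug ever reappears.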
This is so incredibly important. So many people do not spend time up front getting projects structured and set up for builds, testing, deployment, and the appropriate automation for each. I should not be copying and pasting out-of-date scripts from random folders on a network share to build or package something, or grabbing an EXE from one place and a DLL from another to run my unit tests.
People don't like writing tests, not only because it's not as fun as writing code, but also because tests require a different setup, configuration, and flow to get them executing. Figuring that out and making it easy to execute those tests is important in getting people to want to write them.
> Figuring that out and making it easy to execute
> those tests is important in getting people to
> want to write them.
This is so incredibly important. Anyone on your team should be able to run the test suite from the command line by invoking a single command. The easier and quicker the suite is to run, the more often people will run it.
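A sketch of what "a single command" can mean (assuming pytest again; `run_tests.py` is a hypothetical name, and a Makefile target or shell alias would serve the same purpose):

```python
#!/usr/bin/env python3
# run_tests.py -- one memorable entry point for the whole suite.
# Team members only need to know "python run_tests.py"; which runner and
# which flags to use live here, not in everyone's heads.
import subprocess
import sys

# Delegate to pytest in quiet mode, passing extra arguments through
# (e.g. "python run_tests.py -k parser" to run a subset).
result = subprocess.run([sys.executable, "-m", "pytest", "-q", *sys.argv[1:]])
sys.exit(result.returncode)  # propagate failure so CI and scripts can react
```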
Also, spending a day to set up a CI server (Jenkins, circleci.com, or similar) is a good use of your time. You want the CI server to run the test suite after every commit to your mainline branches in SCM.
Another key aspect of testing is generating clear and concise error output. One thing I find very frustrating is when a CI build fails but it is not clear why. Most CI systems are very good at listing all the passing tests, or at reporting the few tests that fail in the expected way. But when tests fail hard (exceptions thrown, processes failing to start or dying mid-run), I often have to read through raw build logs to find out what went wrong and why.
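One inexpensive countermeasure is to put the diagnosis into the assertion message itself, so the one-line failure summary is enough to start debugging. A sketch, with a hypothetical `server` test fixture:

```python
# Vague: on failure the summary shows little beyond the failed expression,
# and you are off to the build logs to reconstruct what happened.
def test_startup_vague(server):
    assert server.start()

# Clear: the failure message says what failed and why, right in the
# CI summary line.
def test_startup_clear(server):
    started = server.start()
    assert started, (
        f"server failed to start on port {server.port}: "
        f"last log line was {server.last_log_line()!r}"
    )
```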