
Ignore it. Don't write any tests. Then, when things start to break, write tests that would have caught the problem.

In general, you want to write the smallest test that will do the job. Unit tests are always better (and if you're lucky, you won't need to stub or mock anything).

Note that your problem might be your software. If you write relatively decoupled and composable code, testing it should be easy (and you can go a long time without mocking anything). Fat models, for example, are very easy to test.
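To illustrate the point about decoupled code needing no mocks, here's a minimal sketch (the `Order` class and its 10% discount rule are hypothetical, not from the thread): when the logic lives on a plain object with no database or HTTP dependency, the smallest test that works is a bare assertion.

```ruby
# A tiny "fat model" style class: the discount logic lives on the
# model itself, with no external dependencies to stub out.
class Order
  attr_reader :subtotal

  def initialize(subtotal)
    @subtotal = subtotal
  end

  # Hypothetical rule: orders over 100 get 10% off.
  def total
    subtotal > 100 ? (subtotal * 0.9).round(2) : subtotal
  end
end

# Because the logic is decoupled, the smallest possible test is a
# plain assertion -- no mocks, no test harness required.
raise "discount not applied" unless Order.new(200).total == 180.0
raise "small order changed"  unless Order.new(50).total == 50
```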



I never thought I'd hear "don't write any tests", since that seems to be a horrible sin to most people


It's a horrible sin for people who have to debug other people's code (read: team projects). Normally this team will have style guidelines on how to test, what to test, etc.

My view is that for personal projects and limited-scope code you don't have to write any tests. I'm part of an operations team and nothing from the immediate team has test coverage. It's all Perl and shell scripts. Never have I thought "we need more test coverage", and I'm pretty sure only 1-2 people know of TDD.

By the time you write the tests (for whatever reason it always takes me forever to get a test harness working with my code and IDE), you could have finished the code, tested it manually, and used it. If anything, I'd go back and write the test coverage before you redesign it, so you ensure functionality is the same before and after.


Let me clarify. Nearly all programs should be tested, and have good test coverage. However, in this particular case you're getting analysis paralysis from thinking too much about testing.

To overcome the analysis paralysis, just don't test. For now. Once you've started to get code out there and understand what you really need to test (by seeing what gets broken in production), it should be simpler to understand what you want to test and how to do it.

The most important thing is to ship. If testing is preventing you from shipping, skip the testing. The trade-off is that, eventually, lack of testing will be what prevents you from shipping. At that point, it will be essential to improve your testing.
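The "write tests that would have caught the problem" approach above amounts to regression testing: after a production breakage, pin the exact failure down in a test so it can't ship again. A minimal sketch (the `parse_quantity` helper and the comma bug are hypothetical examples, not from the thread):

```ruby
# Suppose production broke because "1,000" parsed as 1: String#to_i
# stops at the first non-digit, so the comma truncated the value.
# The fix strips separators; the assertions below are the regression
# tests that would have caught the original bug.
def parse_quantity(str)
  str.delete(",").to_i
end

raise "regression: comma-separated input" unless parse_quantity("1,000") == 1000
raise "plain input broke" unless parse_quantity("42") == 42
```

The point is that the test is written against a failure you actually observed, so you know it pays for itself, instead of guessing up front which tests matter.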


While there can be analysis paralysis, having a clearly defined workflow and writing a test for that flow before even implementing the feature worked great for us.

Writing a functional test at this point helps you understand the problem space and the interaction with the service quite well. And with the functional test in place, it is a lot easier to see which other parts of the new feature need unit tests to make it very stable.

At least, that has worked very well for us for a long time now.


In my opinion it is. You should start by writing functional tests that exercise the application from the user's perspective. Capybara/Selenium/Cucumber, for example, are a nice combination for this.

Unit tests are great for catching specific small issues, but you always want to make sure that your users can go through the most important steps in your application. These need to be thoroughly tested.
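To show the shape of a flow-level test like the ones described above: real Capybara drives a browser (`visit`, `click_button`, and so on), which can't run here, so this sketch substitutes a hypothetical in-memory `Cart` while keeping the same idea — test the user's most important journey end to end rather than one method at a time.

```ruby
# Hypothetical stand-in for the application under test. With Capybara
# you'd drive the real UI; the flow being checked is the same.
class Cart
  def initialize
    @items = []
  end

  def add(name, price)
    @items << [name, price]
  end

  # Returns the order total and empties the cart, as checkout would.
  def checkout
    total = @items.sum { |_, price| price }
    @items.clear
    total
  end
end

# The critical user journey: add items, check out, cart is emptied.
cart = Cart.new
cart.add("book", 12)
cart.add("pen", 3)
raise "checkout total wrong" unless cart.checkout == 15
raise "cart not emptied"     unless cart.checkout == 0
```

A test like this is coarse, but it fails whenever any step of the important flow breaks, which is exactly the guarantee the comment above is asking for.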




