Ask HN: How did you learn to properly build tests?
6 points by PLejeck on Oct 14, 2013 | 11 comments
There are so many testing techniques out there that I can't seem to find a resource that describes mocking, stubbing, etc. in the real world. I get a lot of theory about what they're supposed to be, but none of the how-do-I-apply-this-to-my-Rails-app.

Can anyone recommend resources on building unit, integration, and other tests, and what to test, what I should be looking for, etc.?



The most important thing with testing is getting started. Over time you can build a better test infrastructure and tooling, but just start with it.

At Codeship we focus on functional tests first. We use Cucumber/Capybara/Selenium a lot to test the user facing functionality. This way we can be sure that the feature works on the highest level. For some parts you might need to go down to unit tests, but start with functional tests first.

If you want to get started with testing your system try the following:

Everyone on your team writes down their seven most important workflows in the application from a user's perspective and ranks them. Then put all of the workflows together and agree as a team on the seven most important ones. Then find a tool that helps you test those from a user's perspective. Build out the whole toolchain (testing tools on every developer machine, a continuous integration server/service, ...) so that adding new tests is trivial.

Now there is no more excuse not to write tests.

For mocking take a look at a screencast we did a while ago:

http://blog.codeship.io/2013/06/11/testing-tuesday-9-stubbin...
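To make the stubbing idea concrete, here is a minimal plain-Ruby sketch using Minitest's built-in `stub` helper. The `WeatherClient` and `Forecast` classes are invented for illustration; the point is that the slow or unreliable call gets replaced with a canned answer just for the duration of the block:

```ruby
require "minitest/mock"

# Hypothetical service object whose network call we want to avoid in tests.
class WeatherClient
  def current_temp(city)
    # imagine a slow HTTP call here
    raise "network unavailable in tests"
  end
end

class Forecast
  def initialize(client)
    @client = client
  end

  def summary(city)
    temp = @client.current_temp(city)
    temp > 25 ? "hot" : "mild"
  end
end

# Stub: within the block, current_temp returns 30 instead of hitting the network.
client = WeatherClient.new
result = client.stub(:current_temp, 30) do
  Forecast.new(client).summary("Berlin")
end
puts result  # => "hot"
```

Outside the block the original method is restored, so the stub can't leak into other tests.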

But still, the most important thing with testing is getting started and having the whole workflow in place. Even if there is just one test, getting to a point where it is easy to add new tests needs to be priority #1.


Don't.

I spent a lot of time trying to test "the right way" in Rails. There is none.

There is only a right way for you/your product/team/business right now. Do what works, and iterate and refactor when you find you spend too much time testing (that is, above 50%).

Source: I spend a lot of time working with testing at https://circleci.com


I know there's no single right way, but I still can't figure out even the most basic of tests. The guys in #rails tell me to stub this, mock that, but then they never explain the terms or how to even stub Rails features. I get what I need to do, but I'm not sure how to do it.
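For what it's worth, the two terms are simpler than they sound: a stub just returns a canned answer, while a mock additionally lets the test verify that a call actually happened. A hand-rolled sketch with no framework at all (the `Signup` and mailer classes are made up):

```ruby
# Invented example: a signup flow that sends a welcome email.
class Signup
  def initialize(mailer)
    @mailer = mailer
  end

  def call(email)
    @mailer.deliver(email)
    :ok
  end
end

# A stub: returns a canned answer and nothing more.
class StubMailer
  def deliver(_email)
    true
  end
end

# A mock: also records the call so the test can verify it happened.
class MockMailer
  attr_reader :delivered_to

  def deliver(email)
    @delivered_to = email
  end
end

mock = MockMailer.new
Signup.new(mock).call("a@example.com")
puts mock.delivered_to  # => "a@example.com"
```

Libraries like Minitest::Mock or RSpec doubles just automate what these two tiny classes do by hand.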


Ignore it. Don't write any tests. Then, when things start to break, write tests that would have caught the problem.

In general, you want to write the smallest test that will work. Unit tests are always better (and if you're lucky, you don't need to stub or mock anything).

Note that your problem might be your software. If you write relatively decoupled and composable code, testing it should be easy (and you can go a long time without mocking anything). If you're using fat models, for example, those are very easy to test.
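The "decoupled code needs no mocks" point can be shown with a tiny invented example: when the logic is a plain Ruby object with no database or network dependency, you just call it and check the answer.

```ruby
# Invented example: pure logic needs no stubs or mocks at all.
class Invoice
  def initialize(line_items)
    @line_items = line_items  # array of { price:, qty: } hashes
  end

  def total
    @line_items.sum { |li| li[:price] * li[:qty] }
  end
end

invoice = Invoice.new([{ price: 5, qty: 2 }, { price: 3, qty: 1 }])
puts invoice.total  # => 13
```

In a Rails app this kind of calculation often lives on a model, but nothing about testing it requires Rails to be loaded.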


I never thought I'd hear "don't write any tests", since that seems to be a horrible sin to most people.


It's a horrible sin for people who have to debug other people's code (read: team projects). Normally this team will have style guidelines on how to test, what to test, etc.

My view is that for personal projects and limited scope code you don't have to write any tests. I'm part of an operations team and nothing from the immediate team has test coverage. It's all Perl and shell scripts. Never have I thought "We need more test coverage", and I'm pretty sure only 1-2 people know of TDD. By the time you write the tests (for whatever reason it always takes me forever to get a test harness working with my code and IDE), you could have finished the code, tested it manually, and used it. If anything, I'd go back and write the test coverage before you redesign it, so you ensure functionality is the same before and after.


Let me clarify. Nearly all programs should be tested, and have good test coverage. However, in this particular case you're getting analysis paralysis from thinking too much about testing.

To overcome the analysis paralysis, just don't test. For now. Once you've started to get code out there and understand what you really need to test (by seeing what gets broken in production), it should be simpler to understand what you want to test and how to do it.

The most important thing is to ship. If testing is preventing you from shipping, skip the testing. The result of that is that in future, lack of testing will prevent you from shipping. At that point, it will be essential to improve your testing.


While there can be analysis paralysis, having a clearly defined workflow and writing a test for that flow before you even implement the feature worked great for us.

Writing a functional test at this point helps in understanding the problem space and interaction with the service quite well. And with the functional test in place it is a lot easier to see which other part of the new feature needs to have unit tests in place to make it very stable.

At least that has worked very well for us for a long time now.


In my opinion it is. You should start by writing functional tests that exercise the application from the user's perspective. Capybara/Selenium/Cucumber, for example, are a nice combination for this.

Unit tests are great for catching specific small issues, but you always want to make sure that your users can go through the most important steps in your application. These need to be thoroughly tested.


Testing is just a means to an end, like the code you write. It creates a certain confidence level that your code is doing what it's supposed to do based on the instructions you gave it.

Like the other dude said, start with model tests because they are generally the easiest to work with.

Maybe start off with something basic. Write down, in plain English, a list of validation constraints that are necessary for the model you're testing.

Example, maybe you would do something like this for a Profile model.

  "expect error when first name is empty"
  "expect error when last name is empty"
  "expect error when e-mail address is empty"
  "expect error when e-mail address is an invalid format"
To get these working with the built-in Rails 4 test framework, you just need to change them so they read:

  test "the description would go here" do
    # insert test code here
  end
I'll fill in 1 test for you and you can do the rest.

  test "expect error when first name is empty" do
    @profile = Profile.new
    @profile.first_name = ""
    refute @profile.save
  end
If you have no validations in your Profile model then this test will fail, because the record WILL save. So your goal is to make it pass. Now you would write the validation in the model like so:

  validates :first_name, presence: true
If you re-run the test it should pass, because the validation now prevents saving a profile with a blank first name. The test uses refute, which passes when the expression is false (the opposite of assert); when a validation fails, save doesn't raise an error, it simply returns false.
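For the "invalid format" case from the list above, the Rails side would be something like `validates :email, format: { with: EMAIL_RE }`, and the regex itself can be exercised in plain Ruby. `EMAIL_RE` here is a deliberately loose, made-up pattern for illustration, not Rails' own:

```ruby
# Made-up, intentionally simple pattern: something@something.something,
# with no whitespace or extra @ signs anywhere.
EMAIL_RE = /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/

def valid_email?(address)
  !!(address =~ EMAIL_RE)
end

puts valid_email?("user@example.com")  # => true
puts valid_email?("not-an-email")      # => false
```

Testing the regex in isolation like this is often easier than debugging it through a failing model test.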

IMO just read through this: http://guides.rubyonrails.org/testing.html


Take a simple piece of functionality, decide what you expect it to do, and write a test to check that it does it. It isn't complex: start with the simplest thing that works. Minitest ships with Ruby, so use that. Write a single test for a method on one of your models. Build it up from there.
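The whole thing can fit in one file. A sketch of the smallest possible Minitest setup, with a made-up `Article` PORO standing in for a model (in a real app the class would live in app/models and the test under test/models):

```ruby
require "minitest/autorun"

# Stand-in for a model; invented for this example.
class Article
  attr_reader :title

  def initialize(title)
    @title = title
  end

  # Turn "Hello World" into a URL-friendly "hello-world".
  def slug
    title.downcase.strip.gsub(/\s+/, "-")
  end
end

class ArticleTest < Minitest::Test
  def test_slug_replaces_spaces_with_dashes
    assert_equal "hello-world", Article.new("Hello World").slug
  end
end
```

Run it with `ruby article_test.rb` and Minitest prints the pass/fail summary; no further setup required.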



