
What I mean by feedback is that when a developer works on functionality, he gets feedback from somewhere: the code has to do what it is supposed to do, and whether or not the developer understood the requirements, the users will point out any gap with bug reports or incidents.

On the other hand, if the developer misunderstood the requirements or completely bungled the unit tests (for example, they always pass), then there is no feedback to tell you: everything looks fine as long as the tests pass. Of course, one could say the test should first be written so that it fails, and then made to pass by implementing the feature, but who is going to check that? There is no easy way to tell whether developers are doing a good job with tests other than delving into each feature, understanding it, and looking at its tests.
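To make the "test should fail first" point concrete, here is a minimal sketch of that red-green cycle in Python. The `slugify` function and its test are hypothetical, not from the thread; the idea is that the test is run once before the feature exists (it fails), and only then does the implementation make it pass.

```python
# Hypothetical red-green example: the test below is written first and
# observed to fail, then slugify() is implemented until it passes.

def slugify(title):
    # Implementation written only after the test was seen failing.
    return title.strip().lower().replace(" ", "-")

def test_slugify_joins_words_with_dashes():
    assert slugify("Hello World") == "hello-world"

test_slugify_joins_words_with_dashes()
```

A test that has never been seen failing gives you no evidence it can fail, which is exactly the "tests that always pass" trap.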

Also, unit tests cover components that are typically specified by the developer (so he writes the spec, the component, and the unit test). With functional testing, at least one of those is verified by the user, and you get some feedback: if the user reports that the service is not working properly, you write a functional test to replicate the issue and then fix the service so that it behaves correctly according to that test.
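As a sketch of that replicate-then-fix loop, assume a hypothetical bug report: "search returns nothing when the query has trailing spaces". You first write a test that reproduces the complaint, then fix the service until it passes:

```python
# Hypothetical service and bug report, for illustration only.
ITEMS = ["apple pie", "banana bread", "apple tart"]

def search(query):
    # The fix: normalize the query (strip whitespace, lowercase)
    # so "apple  " behaves the way the user expected.
    q = query.strip().lower()
    return [item for item in ITEMS if q in item]

def test_search_ignores_trailing_whitespace():
    # Replicates the user's report; fails against the unfixed service.
    assert search("apple  ") == ["apple pie", "apple tart"]

test_search_ignores_trailing_whitespace()
```

The test then stays in the suite, so the user-reported behavior is pinned down even after later refactors.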

If the user is another development team it is entirely possible, in a mature org, that the other team can write the test.



> It has to do what it is supposed to do whether the developer understood the requirements or didn't and the users are going to point it out with bug reports or incidents.

This sounds like you're saying that one should leave testing of the code to the end user, which obviously makes no sense. I'm going to assume it really means just "things the developer missed".

So at that point, we have the developer testing for things the developer thought of. We're in agreement there.

When I write code, I tend to make sure the individual pieces work (sort actually sorts, it doesn't blow up on null values, maybe it has to maintain order for equivalent items, maybe it has an option to sort null values first or last). As time goes on, I may find a bug or a change (or a misunderstanding of requirements). When that happens, I need to go modify the code to account for it. When I do that, I want to make sure that everything that worked before still works.
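Those sort checks can be written down as small unit tests. A sketch in Python, with a hypothetical `sort_items` helper (Python's built-in `sorted` is stable, which covers the "maintain order for equivalent items" case):

```python
# Hypothetical helper: tolerates None values instead of blowing up,
# and exposes a nulls_last option.
def sort_items(items, nulls_last=False):
    def key(x):
        if x is None:
            # Rank Nones before or after everything else.
            return (1 if nulls_last else 0, 0)
        return (0 if nulls_last else 1, x)
    return sorted(items, key=key)  # sorted() is stable

def test_sort_items():
    assert sort_items([3, 1, 2]) == [1, 2, 3]          # it actually sorts
    assert sort_items([3, None, 1]) == [None, 1, 3]    # no crash on None
    assert sort_items([3, None, 1], nulls_last=True) == [1, 3, None]

test_sort_items()
```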

If I wrote automated tests originally, all I need to do is rerun my tests to get back to "everything I knew to work before still works". If I did not, then I need to manually retest everything. Both are forms of regression testing; one of them is a lot faster.



