
If the code were open source I would show it to you. There were definitely cases that weren't covered during the initial creation of the service and only came up in QA/prod. The entire process was a war story in and of itself. The tests aren't perfect, but I have done everything in my power to make sure they cover all of the use cases that have appeared, include samples of bad data from production and bad data I forced to happen in other services, assert failure cases, and (most importantly) are vigilantly kept up to date. I also, on my own initiative, found people internally to help run a small QA study where they interacted with this system on a daily basis for about two months.

The senior engineers were not supportive of other forms of testing in any way (even the user testing). They flat out refused my proposal to create an integration test suite covering its communication with other services as well as other services consuming its data (I started this over the weekend and was told not to continue).

tl;dr: You're absolutely correct but I still think my testing procedures were better than nothing.



>I still think my testing procedures were better than nothing.

I am sure they were, but you also said:

>take the time to understand why they want 100% coverage

which to me indicates that they are under the false impression that this gives some sort of completeness, i.e. "make sure this plastic fence covers the entire stretch of mountain road".

We have a coverage threshold of 90% and it is helpful because it works as a reminder if you try to check in code that isn't covered by tests. I think that pushing that to 100% would add little value but a lot of overhead. Never tried it though so I fully admit that it's just my hunch.
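A threshold like that is usually enforced by the coverage tool itself rather than by hand. As a sketch (assuming a Python stack with coverage.py — the original comment doesn't say what tooling is in use), the check-in gate could be a `.coveragerc` like:

```
# .coveragerc — hypothetical config; fail the coverage report
# (and thus CI) if total coverage drops below 90%
[report]
fail_under = 90
```

With this in place, `coverage report` exits non-zero below the threshold, which is what turns it into the "reminder if you try to check in code that isn't covered" described above.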


I feel for you. I argued for unit testing for ten years before it was instituted, and even then it was half-hearted.



