It is when the focus is on hitting 100% coverage rather than on test quality. When 100% becomes the metric, it tends to get gamed pretty heavily: tests pile on so many mocks that they become useless, or they do just enough to execute a branch without actually verifying anything, because that branch is just a log statement.
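A hedged sketch of what that gaming looks like (the function and the "test" are hypothetical): every line and branch gets executed, so the coverage tool is satisfied, but nothing is asserted.

```javascript
// Hypothetical function with a log-only branch.
function processOrder(order, logger) {
  if (!order.items.length) {
    logger.warn('empty order'); // the branch people "cover" but never verify
    return null;
  }
  return order.items.reduce((sum, item) => sum + item.price, 0);
}

// Coverage-gaming "test": mocks everything, asserts nothing.
const mockLogger = { warn: () => {} };
processOrder({ items: [] }, mockLogger);              // hits the empty branch
processOrder({ items: [{ price: 5 }] }, mockLogger);  // hits the happy path
// 100% coverage of processOrder, zero actual verification.
```

A real test would assert on the return values and on what the logger was called with; the coverage number alone can't tell the two apart.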
I work closely with the CR-time coverage/enforcement tool. The tool expects 70% coverage of new lines. The number was chosen arbitrarily, but individual teams/managers can opt out of the rule or set a different threshold.
It's not ridiculous to expect some, *configurable*, amount of test coverage for newly generated code, is it?
No, reasonable levels of testing are part of the job. It's when you get hard and fast rules from leadership like "don't lower coverage, reward increasing coverage" that things start breaking down. We had that rule, and someone gamed it up to 100% coverage using the methods I described.
everybody on my team. Are you seriously implying I was put on some kind of special "extra testing required" probation system where the unit test coverage requirement was upped from 95% to 100%?
It sounds kind of absurd really. I've never once been asked to sign a contract that required me to hit 100% code coverage. As others have noted in the thread, it's like some mythical number anyway. It sounds to me like this wasn't going to end well, since the expectations were wonky from the start.
It is absolutely possible to set the coverage threshold in Jest to 100% for all categories, and they are set to that in the codebase I worked on. I would sometimes spend hours trying to get that last 0.02% covered.
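For anyone who hasn't seen it, a minimal sketch of what that looks like in a `jest.config.js` (Jest's documented `coverageThreshold` option; the rest of the config is assumed):

```javascript
// jest.config.js — enforce 100% across all four coverage categories.
// Jest fails the run if any global threshold is not met.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};
```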
I still don't understand why that would cause testing to be hard, but since you won't explain it... I guess it leaves me thinking the problem might not be entirely them.
It's quite possible to get line coverage to 100%. It doesn't really mean anything because it might be covering those lines getting executed in one very particular order - and all bets are off if that order is different.
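A minimal sketch of that order-dependence (the module is hypothetical): every line below is executed, so line coverage reads 100%, but the result only holds because the calls happen in one particular order.

```javascript
// Hypothetical module with hidden shared state.
let cache = null;

function init(value) {
  cache = value;
}

function get() {
  // Throws a TypeError if init() was never called first.
  return cache.toUpperCase();
}

// Run in this order, every line is "covered" and everything works:
init('hello');
console.log(get()); // 'HELLO'

// Swap the order (call get() before init()) and the same fully-covered
// code throws — the coverage report said nothing about call order.
```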
He's a digital nomad who took off to Mexico (and Belize) and seems a bit lost in where he is going (he says it himself). I've been there myself (except I moved to Vietnam), so I understand it.
This is a lot deeper than just writing tests or 40% internal tooling issues. I also suspect that being remote, he feels like his hands are tied in a company that isn't used to working remotely. Or at least, if I were struggling with tooling, I'd work to find a way to make it better.
I also never complain about writing tests. I can't tell you how many times tests have saved my bacon or resulted in writing cleaner code.