No, as a sibling comment to mine shows, it's actually easy to achieve 100% coverage with bad tests, since those tests never challenge the implementation to handle edge cases.
It's easy to achieve 100% coverage with happy-path code and low quality shallow tests, agreed.
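For instance, here's a minimal sketch (Python/pytest assumed, divide() is a hypothetical helper) of how a single happy-path test reaches 100% line coverage while never exercising the edge case:

    # Hypothetical helper: every line is executed by the test below.
    def divide(a, b):
        return a / b

    # Happy-path test: passes, and the coverage report shows 100%.
    def test_divide_happy_path():
        assert divide(10, 2) == 5

    # Yet divide(1, 0) still raises ZeroDivisionError; the metric says
    # "full coverage" even though the edge case was never challenged.

Run under pytest with a coverage tool and the number looks perfect, which is exactly the problem.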
AFAIK, «high coverage» means different things to different people. For me it means «high quality»; for others it means «high percentage», e.g. «full coverage» or «80% coverage», which is easy to turn into an OKR.
It's the fact that this can even mean different things that makes it a useless metric: defining 'quality' or 'coverage' is subjective. The majority of tests written are meaningless noise, and serve mainly to distract from covering 'critical' failures. Again, a subjective measure, in the sense that what is critical to you and to me may not be the same thing.
Which is what makes this whole concept of code coverage so much toxic nonsense...
Not to argue against writing 'quality' tests, but chasing high 'coverage' actually decreases quality, since erroneous coverage serves negative purposes: it obscures the tests that matter and enshrines bugs within the test suite.
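A sketch of what I mean by enshrining a bug (hypothetical price_with_discount(), pytest assumed): the test asserts the buggy output, so the suite stays green and the bug is now protected by a 'covering' test:

    # Bug: should be price * (1 - percent / 100).
    def price_with_discount(price, percent):
        return price - percent

    # This test "covers" the function 100% but asserts the wrong result,
    # so any later fix of the bug will break the build.
    def test_price_with_discount():
        assert price_with_discount(200, 10) == 190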
I would make the case here that CodePilot and all such 'AI' tools should be banned from production, at least until they solve the above problem, since as things stand they will serve to shovel out piles of useless or, worse, incorrect tests.
It is also important to remember what AI does, i.e. produce networks that generate results optimized for the chosen metrics; if those metrics are wrong or incomplete, you produce and propagate bad design.
So yes, people use it now as a learning tool (fine) and it will get 'better' (sure). But as a tool, when it does get better it will constrain more, not less, along whatever lines have been deemed 'better', and it will become harder, not easier, to adjust.