It would be awful to write every test using Copilot, but there is potential there for a certain kind of test. If I'm writing an API, I want fresh eyes on it, not just tests written by the person who understands it best (me). For example, a fresh user might try to apply a common pattern that my API breaks. Copilot might be able to act like such a tester: by writing generic tests, it could mimic the developers who start using an API before they've understood it (i.e., most of them).
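To make that concrete, here's the kind of generic test I have in mind. All the names (Registry, find) are made up, just a stand-in for a real API:

    # Toy stand-in for my API. It breaks a common pattern on purpose:
    # find() raises instead of returning None when nothing matches.
    class Registry:
        def __init__(self, items):
            self._items = list(items)

        def find(self, name):
            for item in self._items:
                if item == name:
                    return item
            raise KeyError(name)  # my convention; many APIs return None here

    # The kind of generic test a fresh user (or Copilot) might write,
    # assuming the more common convention:
    def test_find_missing_returns_none():
        reg = Registry(["alpha", "beta"])
        assert reg.find("gamma") is None  # fails with KeyError

The test fails, but the failure tells me my API surprises people who expect the common convention, which is exactly the feedback I want.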
If you can find an example of Copilot coming up with a test you wouldn't have thought of, I'd be very interested to see it.
Even if that happened, which I'm not expecting, I think the need is better met by simpler, more effective means, e.g., a good tester writing up a checklist of things they test about APIs: https://www.sisense.com/blog/rest-api-testing-strategy-what-...
Copilot as defined would not be "fresh eyes"... it would be the "old, tired eyes of every code writer who uploaded stuff to GitHub, not knowing whether they made off-by-one errors or other mistakes in their code".
I mean fresh eyes with respect to my new API. Having seen a lot of other code is a benefit. I expect most tests that Copilot writes to fail, but I would hope some would fail in interesting ways. For example, off-by-one errors might encourage me to document my indexing convention, or to use a generator rather than indexing.
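A toy sketch of the kind of change I mean by "a generator rather than indexing" (all names made up):

    # Index-based paging: the caller has to know my convention (1-based
    # here) and compute bounds, so there are two chances for an
    # off-by-one mistake.
    def get_page(items, page, per_page):
        start = (page - 1) * per_page  # easy to get wrong if you assume 0-based
        return items[start:start + per_page]

    # Generator-based paging: no index convention for the caller to get wrong.
    def iter_pages(items, per_page):
        for start in range(0, len(items), per_page):
            yield items[start:start + per_page]

    # Usage: just iterate; the off-by-one error has nowhere to live.
    for page in iter_pages(["a", "b", "c", "d", "e"], per_page=2):
        print(page)  # ['a', 'b'], ['c', 'd'], ['e']

A batch of failing Copilot tests that all assume 0-based pages would be a strong hint to either document the convention prominently or switch to the generator.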