> Once we prove efficacy for multiple use cases, we can at least remove the "oh you computer scientists don't get it"
No, you can't. Stating this is clear proof that you don't understand what you're dealing with. In medical ML/AI, efficacy is not the issue. What you are detecting is not what's clinically relevant. That's the issue. But I know I won't convince you.
They are detecting what they are testing for. But in most cases that's irrelevant to what happens to the patient afterwards, because it lacks major connections to the clinical situation, connections that have to be filled in by a human expert.
So it does in fact work. Unfortunately, only in trivial cases.
Maybe, but then the problem isn't an issue with AI/ML, it's that humans just suck at math.
We're terrible at Bayesian logic. Especially when it comes to medical tests (and doctors are very guilty of this too), we ignore priors and take what should only be a Bayes factor as the final truth.
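A minimal sketch of the base-rate mistake, with made-up numbers (not from any real test):

```python
# Hypothetical numbers, purely illustrative: a test with 90% sensitivity
# and 95% specificity for a condition with 1% prevalence.
sensitivity = 0.90   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)
prevalence = 0.01    # prior P(disease)

# Bayes' theorem: P(disease | positive test)
p_pos_given_disease = sensitivity
p_pos_given_healthy = 1 - specificity
posterior = (p_pos_given_disease * prevalence) / (
    p_pos_given_disease * prevalence + p_pos_given_healthy * (1 - prevalence)
)
print(f"P(disease | positive test) = {posterior:.1%}")  # ~15.4%, not 90%
```

Reading the 90% sensitivity as "the patient is 90% likely to have the disease" is exactly the "Bayes factor taken as final truth" error: with a 1% prior, a positive result only gets you to roughly 15%.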
We're terrible at Bayesian logic all right, but still better than machines that lack most of the data picture. That's why the priority is not to push lab model efficiency but to push for policy changes that encourage sensible gathering of data. And that's _far_ more difficult than theorizing about model efficiency vs. humans.