AI has had the opposite effect on false positives. Statistically, most patients aren't (that) sick and don't have zebras.
Taking radiology as an example (because that's my specialty), ~90% of studies are normal, and some types (e.g. CT for pulmonary embolism, CT for transient ischemic attack/vertigo) are closer to 98-99% normal.
Every diagnostic AI application I've seen implemented as of 2023 that merely replicates the work of a human has done nothing but increase false positives.
The extreme class imbalance makes this a non-trivial problem.
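To make the base-rate problem concrete, here's a quick back-of-the-envelope positive predictive value calculation (the sensitivity/specificity figures are illustrative assumptions, not measurements from any real system): even a classifier that looks good on paper flags mostly normal studies when 98-99% of the population is negative.

```python
# Illustrative PPV calculation via Bayes' rule.
# Sensitivity/specificity values below are assumed for the sketch,
# not taken from any specific diagnostic AI product.

def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value: P(disease | positive flag)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.10, 0.02, 0.01):  # 90%, 98%, 99% of studies normal
    for sens, spec in ((0.90, 0.90), (0.95, 0.95)):
        print(f"prevalence {prev:4.0%}  sens/spec {sens:.0%}/{spec:.0%}"
              f"  ->  PPV {ppv(prev, sens, spec):5.1%}")
```

At 1% prevalence with 90% sensitivity and 90% specificity, the PPV comes out around 8%, i.e. roughly 9 out of 10 positive flags are false positives, which is why merely replicating the human read tends to add work rather than remove it.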