> Trying to find a zebra in the hoof beats of horses when the number of patients quickly outstrips your department’s capabilities is a fool’s errand, because the workup required will overwhelm throughput to the point that the delay in care will put other patients at risk.
This is where I hope (as a non-physician) that AI, used carefully, should actually be able to help. A well-designed ML system should have a decent chance at distinguishing a zebra from a horse because it has read absolutely everything, has perfect recall of that corpus, and has some ability to contextualize that knowledge to the situation at hand. I suspect that a good proportion of ER doctors already have those characteristics, but surely not all of them do, and surely not consistently.
An AI-assisted system will still false-positive: the computer is still just a tool, tools are never perfect, and designers of safety systems tend to err on the side of false positives. However, a thoughtful pop-up that displays only when the situation really may warrant it is surely more helpful to a physician than one that cries wolf constantly?
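To make the cry-wolf trade-off concrete, here is a minimal sketch (entirely synthetic numbers, not any real model or EHR) of how moving the alert threshold on a risk score trades alert volume against the chance of catching the rare case:

```python
# Synthetic illustration of alert-threshold trade-offs; every number
# here is made up. A rare condition (1% prevalence) and an imperfect
# risk model whose score distributions overlap.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
truth = rng.random(n) < 0.01  # 1% of patients truly have the condition
# Sick patients score somewhat higher on average, with heavy overlap.
scores = np.where(truth,
                  rng.normal(0.70, 0.15, n),
                  rng.normal(0.40, 0.15, n))

for threshold in (0.5, 0.6, 0.7, 0.8):
    alerts = scores >= threshold
    hits = (alerts & truth).sum()
    sensitivity = hits / truth.sum()
    ppv = hits / max(alerts.sum(), 1)  # fraction of alerts that were right
    print(f"threshold={threshold:.1f}  "
          f"alerts per 1000 pts={alerts.mean() * 1000:5.0f}  "
          f"sensitivity={sensitivity:.0%}  PPV={ppv:.0%}")
```

A low threshold fires constantly for a tiny positive predictive value (the cry-wolf regime); a high one is quiet but starts missing the zebras. The design question is where on that curve a popup stops being noise.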
My optimism assumes that there's fundamentally enough information available to make the diagnosis, however. If you're actually saying that finding the zebra would require gathering so much more information for each patient that it would lower overall outcomes for all patients, then I guess we're stuck.
Perhaps, but it should be noted that sepsis detection is one of the places where ML has been applied for a very long time. A lot of these popups are built off those models.
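For anyone outside the field: before (and alongside) the ML versions, many of these popups were simple rule screens like the SIRS criteria. A toy sketch of that kind of rule follows, purely for illustration and not any vendor's actual alert logic:

```python
# Toy sketch of a SIRS-style rule screen -- the kind of simple logic
# many pre-ML sepsis popups used. Not any vendor's actual alert.
# (SIRS also allows PaCO2 < 32 mmHg and >10% band forms; omitted here.)
def sirs_count(temp_c: float, heart_rate: int,
               resp_rate: int, wbc_thousands: float) -> int:
    """Count how many SIRS criteria the patient meets."""
    return sum([
        temp_c > 38.0 or temp_c < 36.0,               # fever or hypothermia
        heart_rate > 90,                              # tachycardia
        resp_rate > 20,                               # tachypnea
        wbc_thousands > 12.0 or wbc_thousands < 4.0,  # abnormal WBC
    ])

# Classic screen: two or more criteria fires the alert.
patient = dict(temp_c=38.6, heart_rate=104, resp_rate=18, wbc_thousands=9.5)
if sirs_count(**patient) >= 2:
    print("SEPSIS SCREEN: consider sepsis workup")  # the popup
```

Rules like this are famously nonspecific in an ED population, which is part of what the ML models were meant to fix.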
I have been in the informatics space for some time, and none of these ML-generated sepsis alerts are helpful in the ED.
We can readily tell who has sepsis when confronted with patient appearance, vital signs, and workup.
The issue is trying to find a signal in all of this data for the patients who have an occult condition that we are not yet observing. These folks get sick real bad and real quick, and we often do miss this!
Trying to find that signal with our current medical technology is difficult, but there are some immune markers that could potentially alert us. These tests are now coming online and should be prevalent within the next few years. Hopefully, more research will show whether they are helpful or not.
Sometimes, even the best tests or calculators or ML-generated alerts do not measure up to physician gestalt.