Having spent some time with a hallucinating ChatGPT, and some time with doctors over the course of my life, I've come to the humble opinion that it should be made illegal for a doctor to make a diagnosis without consulting an LLM fine-tuned on all the medical research and literature available.
Ah, but copyright/patents/IP. Well, IP was created to foster the production of useful immaterial stuff. If you now want to use it to hinder the production of useful immaterial stuff, you can go f*ck yourself, if you ask me.
Ah, but lawyers and liability. I propose only that the doctor be required to consult the LLM. That's easy to log and verify. All liability stays with the doctor who makes the final diagnosis.
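For what "easy to log and verify" might mean in practice, here is a minimal sketch of a hypothetical append-only audit record. The field names and the `log_consultation` helper are invented purely for illustration; they are not taken from any real EHR or LLM API.

```python
# Hypothetical sketch: an append-only audit record proving an LLM was
# consulted before a diagnosis. All names and fields are illustrative.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ConsultationRecord:
    case_id: str          # internal case identifier (not patient PII)
    model: str            # which model/version was consulted
    prompt_sha256: str    # hash of the prompt actually sent
    response_sha256: str  # hash of the model's answer
    physician_id: str     # who signs off and carries the liability
    final_diagnosis: str  # the human's final call, not the model's
    timestamp: str

def log_consultation(path: str, case_id: str, model: str, prompt: str,
                     response: str, physician_id: str, final_diagnosis: str) -> None:
    record = ConsultationRecord(
        case_id=case_id,
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        response_sha256=hashlib.sha256(response.encode()).hexdigest(),
        physician_id=physician_id,
        final_diagnosis=final_diagnosis,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a") as f:  # append-only JSONL audit trail
        f.write(json.dumps(asdict(record)) + "\n")
```

A regulator (or malpractice lawyer) would only need the log to confirm that a consultation happened; the diagnosis itself, and the liability for it, stays with the physician.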
I'm a physician and use ChatGPT extensively for coding, writing, and general knowledge inquiries.
With correct rates of 60-70% on most training sets and 0.63 critical errors per report, this is more of a liability than an asset for any physician not well versed in the limitations of LLMs. Some of the biggest barriers to care are cognitive, such as anchoring or availability biases. LLMs in their current state will only muddy the waters.
Good physicians already know about and use these tools; bad ones will only get worse. A legal mandate will not benefit care.
Doubtless these models will progress to the point where this calculus changes. The only benefit of a mandate now that I can foresee is accelerating fine-tuning by forcing widespread reinforcement learning from physicians, but that is a different discussion.
> it should be made illegal for a doctor to make a diagnosis without consulting an LLM
> All liability stays at the doctor who makes the final diagnosis
Sorry, what? You’d force clinicians to use a specific technology (a specific “how” for finding their answer) and also make them liable for the correctness of that answer?
You seem to have a strange idea of the law if, in the same breath as making something illegal, you sigh with exasperation at lawyers and liability.
I feel like this is unfair, because Epic Systems could recommend ICD-10 codes based on the listed symptoms. The doctor would pick the final diagnosis code from the recommended list or choose their own. That doesn't mean Epic shoulders the liability.
What if the case is so straightforward (you've seen thousands of these, hundreds a year, the entire system is built around them) that you know the diagnosis in less than the blink of an eye?
What if it's emergent, and you have no time to think, like a major hemorrhage? Not only is it obvious, but you must act now, right now?
What if there is a highly studied, routinized process (e.g. cardiac arrest) where you're managing a team going through the diagnostic procedure and treatment, which, over decades, have become a carefully interleaved dance performed at staccato pace, and, again, there is no time to consult an LLM?
We should require software engineers to do the same. I've reviewed so much garbage code that would have easily been resolved had the SE just "asked an LLM".
Maybe we can legislate this into existence as well?
A blunt way to word it (and you’re going to get a lot of pushback), but overall I agree with the sentiment. The medical field, due to various regulations and special interests, is quite likely the field where humanity lags furthest behind where it could be, given the technology we currently have.
I’m a data scientist, and it stuns me how diagnoses are made today compared to how they could be made if we had a large worldwide dataset of symptoms and other observations to draw correlations from, especially with regard to preventative medicine.
Not only that, but there are a lot of bad doctors out there. If you go to four different doctors with an even slightly obscure problem, there’s a good chance you will get four different diagnoses. If we applied rigorous statistical tests to the assessments made in the medical industry, I think everyone would be unsettled by how inconsistent and irreproducible everything is (as applied to medical practice, not necessarily academic medical research).
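One concrete version of such a "rigorous statistical test" would be inter-rater agreement, for example Cohen's kappa between two doctors diagnosing the same patients. The patients and diagnoses below are entirely made up, just to show the calculation:

```python
# Toy sketch: Cohen's kappa as a measure of how consistently two doctors
# diagnose the same patients. The data is invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where both gave the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical: the same 8 patients, diagnosed independently by two doctors.
doc1 = ["migraine", "tension", "migraine", "sinusitis", "migraine", "tension", "cluster", "migraine"]
doc2 = ["tension",  "tension", "migraine", "migraine",  "migraine", "cluster", "cluster", "tension"]
print(f"kappa = {cohens_kappa(doc1, doc2):.2f}")  # ~0.27: raw agreement is 50%, barely above chance
```

A kappa near 1 means the doctors largely agree beyond chance; values in the 0.2-0.4 range, as in this toy example, would be exactly the kind of inconsistency the comment is pointing at.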
Because the parent commenter "spent some time" with ChatGPT and doctors, we should change our entire paradigm of modern medical care, carefully refined and honed over 500 years.
Yeah bro, you know better than all of modern medical science because you played around with ChatGPT for an afternoon.
I feel “require” is a bit strong. But I think there are some interesting possibilities.
Advertising your medical practice as an “LLM-consulting” one (better name needed), in the same way there are “Montessori” schools, could be interesting.
Another option I think would be interesting: make the LLM patient-facing and required as one of the check-in “docs”, then attach the results to the patient’s file for the medical professional to view (a rough sketch of that intake step is below).
Could also be great for pre-screening and/or suggesting a virtual visit if appropriate.
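Here is a rough sketch of how that check-in step might hang together, purely as an illustration: `ask_llm` is a hypothetical stand-in for whatever chat-completion client the practice actually uses, and none of the prompt or field names come from a real EHR or vendor API.

```python
# Rough sketch of a patient-facing LLM intake "doc" at check-in.
# Everything here is illustrative; swap ask_llm for a real client.
import json

INTAKE_PROMPT = (
    "You are an intake assistant. Summarize the patient's check-in conversation "
    "as JSON with keys: chief_complaint, history, red_flags, suggested_triage "
    "(one of: 'virtual visit', 'in-person', 'urgent')."
)

def ask_llm(system_prompt: str, transcript: str) -> str:
    # Placeholder: plug in your model/client of choice here.
    # Returns a canned response so the sketch runs end to end.
    return json.dumps({
        "chief_complaint": "headache, 3 days",
        "history": "no prior migraines, no recent trauma reported",
        "red_flags": [],
        "suggested_triage": "virtual visit",
    })

def run_intake(transcript: str, patient_chart: dict) -> dict:
    """Summarize the check-in conversation and attach it to the chart."""
    summary = json.loads(ask_llm(INTAKE_PROMPT, transcript))
    # Attached for the clinician to review; it never replaces their judgment.
    patient_chart.setdefault("intake_notes", []).append(summary)
    return summary

if __name__ == "__main__":
    chart = {"patient_id": "demo-001"}
    print(run_intake("Patient: I've had a headache for three days...", chart))
```

The `suggested_triage` field is where the pre-screening / virtual-visit suggestion mentioned above would surface, with the clinician free to override it.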
Utility here depends on what type of practice it is. For an ED doc on no sleep with only 10 minutes per patient, maybe. But it would be useless in something like psychiatry.