As someone who built an EMR that sold to Epic, I think I can say with some authority that these studies don't suggest this is ready for the real world.
While tech workers are unregulated, clinicians are highly regulated. Ultimately the clinician takes on the responsibility and risk of relying on these computer systems to treat a patient; tech workers and their employers don't. Clinicians do not take risks with patients because they have to contend with malpractice lawsuits and licensing boards.
In my experience, anything that is slightly inaccurate permanently reduces a clinician's trust in the system. This matters when it comes time to renew your contracts in one, three, or five years.
You can train the clinicians on your software and modify your UI to make it clear that a heuristic should be taken only as a suggestion, but that will also result in a support request every time. Those support requests have to be resolved pretty quickly because they're part of the SLA.
I just can't imagine any hospital renewing a contract when the answer to their support requests is some form of "LLMs hallucinate sometimes." I used to hire engineers from failed companies that built non-deterministic healthcare software.
(Co-founder of PicnicHealth here; we trained LLMD)
Accuracy and deployment in appropriate use cases are key for real-world use. Building guardrails, validation, continuous auditing, etc. is a larger amount of work than the model training itself.
We don't deploy in EHRs or sell to physicians or health systems. That is a very challenging environment, and I agree that it would be very difficult to appropriately deploy LLMs that way today. I know Epic is working on it, and they say it's live in some places, but I don't know if that's true.
Our main production use case for LLMD at PicnicHealth is to improve and replace human clinical abstraction internally. We've done extensive testing (only alluded to in the paper) comparing and calibrating LLMD performance vs. trained human annotator performance, and for many structuring tasks LLMD outperforms human annotators. For our production abstraction tasks where LLMD does not outperform humans (or where regulations require human review), we use LLMD to improve the workflow of our human annotators. It is much easier to make sure that clinical abstractors, who are our employees doing well-defined tasks, understand the limitations of LLM performance than it would be to ensure the same for users in a hospital setting.
We train on real records, and even though they are de-identified in training we still have to keep the model closed and under careful management to protect against the possibility of information leaking.
We are, though, definitely invested in this corner of research, and want to be able to work with others to push medical AI forward.
Given that, the best model for us is to collaborate on an engagement-by-engagement basis. For now we'd look to find ways to do the work directly involving LLMD within our systems.
If you research in the field and have some ideas, I'd love to chat!
> We find strong evidence that accuracy on today's medical benchmarks is not the most significant factor when analyzing real-world patient data, an insight with implications for future medical LLMs.
I interpreted this as challenging whether answering PubMedQA questions as well as a physician is correlated with recommending successful care paths based on the results (and other outcomes) shown in the sample corpus of medical records.
The analogy is a joke I used to make about ML where it made for crappy self-driving cars but surprisingly good pedestrian and cyclist hunter-killer robots.
Really, LLMs aren't expert-system reasoners (yet), and if the medical records all contain the same meta-errors that ultimately kill patients, there's a GIGO problem where the failure mode of AI medical opinions is making the same errors faster and at greater scale. LLMs may be really good at finding how internally consistent an ontology made of language is, where the quality of their results is an effect of that internal logical consistency.
There's probably a Pareto distribution of cases where AI is amazing for basic stuff like "see a doctor" and then conspicuously terrible in some cases where a human is obviously better.
An often-ignored/forgotten/unknown fact about utilizing LLMs is that you really need to develop your own benchmark for your specific application/use-case. It’s step 1.
“This model scores higher on MMLU” or some other off-the-shelf benchmark may (likely?) have essentially nothing to do with performance on a given specific use-case, especially when it’s highly specialized.
They can give you a general idea of the capabilities of a model but if you don’t have a benchmark for what you’re trying to do in the end you’re flying blind.
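To make that concrete, here's a minimal sketch of what a use-case-specific benchmark harness could look like; the `call_model` hook and the example cases are hypothetical stand-ins, not anything from the paper:

```python
# Minimal sketch of a use-case-specific benchmark harness (hypothetical).
# `call_model` is a stand-in for whatever inference API you actually use;
# the cases should come from YOUR real task, not a generic QA set.

CASES = [
    {"prompt": "Extract the visit date from this record: ...", "expected": "2021-03-14"},
    {"prompt": "Which facility issued this lab report? ...", "expected": "Example Medical Center"},
]

def run_benchmark(cases, call_model):
    correct = 0
    for case in cases:
        answer = call_model(case["prompt"]).strip()
        if answer == case["expected"]:
            correct += 1
    return correct / len(cases)

# Example: score a trivial baseline that always answers the same thing.
print(f"{run_benchmark(CASES, lambda prompt: 'unknown'):.1%}")
```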
I think that's very true -- and it felt like one of the real opportunities we had in the paper: that we have real production tasks whose results we need to stand behind, and so we can try to explain and show examples of what matters in that context.
One of the sentences near the end that speaks to this is "...[this shows] a case where the type of medical knowledge reflected in common benchmarks is little help getting basic, fundamental questions about a patient right." Point being that you can train on every textbook under the sun, but if you can't say which hospital a record came from, or which date a visit happened as the patient thinks of it, you're toast -- and those seemingly throwaway questions are way harder to get right than people realize. NER can find the dates in a record no problem, but intuitively mapping out how dates are printed in EHR software and how they reflect the workflow of an institution is the critical step needed to pick the right one as the visit date -- that's a whole new world of knowledge that the LLM needs to know, which is not characterized when just comparing results on medical QA.
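As a toy illustration of that gap (the printout snippet and heuristic below are invented, not from our pipeline): an NER-level pass finds every date easily, but picking the visit date depends on knowing how that particular EHR lays out its printouts.

```python
import re

# Hypothetical EHR printout: three dates, but only one is the visit date.
record = """
Printed: 05/02/2023   Example Medical Center
Patient DOB: 01/15/1960
Office Visit - 03/14/2021
"""

# The NER part is easy: find every date.
dates = re.findall(r"\d{2}/\d{2}/\d{4}", record)
print(dates)  # ['05/02/2023', '01/15/1960', '03/14/2021']

# A naive rule like "take the most recent date" returns the page-print date,
# not the visit date. Picking correctly depends on knowing that this EHR
# stamps a print date on every page and labels encounters "Office Visit" --
# exactly the workflow knowledge that plain NER doesn't carry.
```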
Giving examples of the crazy things we have to contend with is something I can (and will!) gladly talk about for hours...
One other interesting comment in there -- the note about how people think the worst records to deal with are the old handwritten notes. But actually, content-wise they tend to be very to-the-point. Clean printouts from EHR software have so much extra junk and redundancy that you end up with much lower SNR. Even just structuring a single EHR record can require you to look across many pages and do tons of filtering that doesn't come into play on the old handwritten notes (once you get past OCR).
Long way of saying: I feel for today's clinicians. EHRs were supposed to solve all problems, but they've also made things harder in a lot of ways.
Have you seen/heard of Abridge[0]? Long story short their secret sauce comes in two main forms:
1. Accurate speech rec, diarization, etc to record a clinician-patient encounter. No notes, no scribes, no "physician staring at Epic when they should be looking at and talking to you".
2. Parsing of transcripts to correctly and accurately populate the patient EHR record - including various structured fields, etc.
Needless to say you're in this space so I don't have to tell you - every Epic/Cerner install is basically a snowflake so there's a lot going on here, especially at scale.
>LLMD-8B achieves state of the art responses on PubMedQA over all models
Hang on -- while this is a cool result, beating a limited number of models that you chose to include in your comparison does not qualify LLMD-8B as SOTA. (For example, Claude 3 Sonnet scores 10 percentage points higher.)
>This result confirms the power of continued pretraining and suggests that records themselves have content useful for improving benchmark performance.
In support of this conclusion, it would be informative to include an ablation study, e.g. evaluating a continued pre-training data set of the same total size but omitting medical record content from the data mix.
Thanks for reading! We'll definitely include our Sonnet results in the next revision. It's worth pointing out that we're comparing accuracy on text responses and not log probability based scoring, which I think is the number you're referring to (based on Section E of this paper https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bb...). But if I'm mistaken and you have a direct pointer, that'd be super helpful! In general, we've been basing our comparisons against the models in the Open Medical LLM leaderboard here: https://huggingface.co/spaces/openlifescienceai/open_medical...
Also definitely a good idea on the ablation study. We had some results internally based on a production-tuned version of our model that includes a much higher weighting of records-data. It's an imperfect ablation, but it supports the story -- so I think it's there, but you're right that it would be more complete to develop and include the data directly.
I can't understand your methods without example prompts or code, so it's hard for me to interpret the data in figure 6. It will be important to document the methodology carefully to avoid concerns that your "text response" methodology is unfairly punishing other models.
In any case, since the methodology that Anthropic applied is documented and straightforward, it would be possible to do an apples to apples comparison with your model.
(I'm also very curious to know how 3.5 Sonnet performs.)
Is your text methodology based on CoT (like the "PubMedQA training dataset enriched with CoT" you trained on) or a forced single token completion like Anthropic used in their evaluation? In the latter case, I'm not sure how "text responses" differ from log probabilities at Temperature T=0 (i.e., isn't the most likely token always going to be the text response?)
A few thoughts -- with some color on 'the why', because we'd love to get your input on how best to get the story and data across. Any thoughts you have would be great.
So, for the method: we did NOT force single-token responses. Our goal was to ask "if we use [model x] to serve an app for [this task], how accurate would it be?" -- so we wanted to get as close as possible to pasting the prompt in directly and just grading whether the output was correct. In some cases, that works directly; in others, we had to lightly adjust the system prompt (e.g. "Answer ONLY yes, no, or maybe"); and in some cases, it required significant effort (e.g. to parse stubbornly verbose responses).
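A rough sketch of that style of grading (the system-prompt wording and the lenient parse below are illustrative guesses, not the exact ones we used):

```python
# Sketch: grade free-text responses rather than log-probabilities.
# `generate` stands in for whichever model API is under test.

def grade_pubmedqa_style(question, expected, generate):
    system = "Answer ONLY yes, no, or maybe."  # the light system-prompt nudge
    response = generate(system=system, user=question)
    words = response.strip().lower().split()
    first = words[0].strip(".,:") if words else ""
    # Lenient parse: take the leading word if it's a valid label;
    # verbose or off-format responses count as wrong.
    return first in ("yes", "no", "maybe") and first == expected
```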
For the models like GPT-4o, Llama3-70B, and Sonnet that have great instruction following behavior, this works in a straightforward way (and is something we should be able to just add in an appendix). We were surprised how hard this was for a fair number of the domain-specific models with great log-prob benchmark results on the leaderboard -- ultimately a huge gap between numbers saying 'this is a great medical AI model!' and the ability to use it in production -- and to us that was an important part of the story.
For this set of models where a ton of engineering was required to get workable responses, sharing code is the best we can do. I worry a little about rabbit holing on details of how we could improve tuning or output parsing, because if a model requires so much bespoke effort to work on a task it's been built to perform (in the log-prob terms), the point still stands that you couldn't be confident using it across different types of tasks.
Stepping back, for us this method supported our experience that benchmark performance is pretty disconnected to how a model did with records. This behavior was a big piece of that puzzle that we wanted to show. I think there's some nuance though in how we get this across without getting tied up in the details and options for benchmark hacking.
To your question about the difference between our results and log-prob with T=0: behaviorally, I think of a model like Grok that is tuned to be funny, and perhaps it heavily downweights 'yes' or 'no' on a task like this in favor of saying something entertaining; it may have excellent log-probability benchmark performance, but it would be a much worse choice to power your app than the benchmark scores suggest. We wanted our accuracy to be more reflective of that reality-in-production.
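To make that distinction concrete, here's a sketch assuming access to per-token log-probabilities; the numbers and the "Well" token are invented for illustration:

```python
import math

# Invented first-token log-probabilities for a chatty, entertainment-tuned model.
first_token_logprobs = {"yes": -2.1, "no": -2.3, "maybe": -2.7, "Well": -0.4}

# Log-prob benchmark scoring: restrict attention to the answer labels.
labels = ("yes", "no", "maybe")
benchmark_pick = max(labels, key=lambda t: first_token_logprobs.get(t, -math.inf))
print(benchmark_pick)  # 'yes' -- looks fine on the leaderboard

# Text-response scoring: at T=0 the model emits its single most likely token,
# which here starts a digression rather than an answer, so the app-level
# response is wrong (or needs heavy parsing) despite the good label-restricted
# log-prob score.
greedy_token = max(first_token_logprobs, key=first_token_logprobs.get)
print(greedy_token)  # 'Well'
```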
And to your comment about using the phrase state-of-the-art: for us, we _didn't_ want to say "you can get the best model for PubMedQA by doing xyz like we did"; instead, we wanted to say "even if you fully invest in getting great benchmark performance, it doesn't do much for your ability to work with records." So for us, s-o-a is more shorthand for saying "we appropriately exhausted what one can do to tune benchmark performance, and here's a top line number that shows that, so we can stand by the relationship we see between benchmarks and performance on records."
Finally, a last note on something I was seeing yesterday when pawing through some structuring and abstraction tasks that GPT-4o got wrong but LLMD did well. It really is amazing how many different pockets of necessary domain bias/contextual bias the records are teaching the model. One obvious example I was seeing was GPT-4o is undertrained to interpret whether "lab" means "lab test" or "laboratory facility." LLMD has picked up on the association that a task asking for a reference range is referring to a lab test, and that behavior is coming from pre-training and instruction fine-tuning (I suspect more the latter). In contrast, if we don't tune the prompt to be explicit, GPT-4o will start dropping street names into the lab-name outputs, etc.
To me, the implication is that you could take a whack-a-mole approach, load the prompt with ultra-precise instructions, and improve performance on records. But based on what we saw in the paper, that likely _only_ works on the big models like GPT-4o and Sonnet, and not on the domain models that are so hard to coerce into giving reasonable responses. But also, there's a long tail of such things that would drown you, and so you really have no choice but to train on records data. Another tiny example we saw a few weeks ago that has a huge impact on app-level performance: the unit for the MCV test is so often wrong in records, but the answer can be assumed to be fL in most cases. So we'd need to add tons of rules like that if we didn't have records to train on.
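For a flavor of what one entry in that long tail looks like as a hand-written rule (a made-up sketch, not our production code; the plausible range is an assumption):

```python
# One tiny rule out of a long tail: MCV is essentially always reported in fL,
# so a missing or implausible unit can be coerced -- but every rule like this
# has to be discovered and hand-written if the model never sees real records.

PLAUSIBLE_MCV_FL = (50.0, 150.0)  # rough physiologic range in fL (assumption)

def normalize_mcv_unit(value, unit):
    if (unit or "").strip().lower() in ("", "%", "fl"):
        if PLAUSIBLE_MCV_FL[0] <= value <= PLAUSIBLE_MCV_FL[1]:
            return "fL"
    return unit or "unknown"

print(normalize_mcv_unit(88.0, "%"))  # 'fL'
```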
tldr; you need to train on records; if you can't and you have a very well defined purpose/input space, use a big model like GPT-4o and load on the prompt to be very precise -- that should work well; pursuing benchmark performance doesn't get you much practically; if you need to work in an unconstrained environment, you have to train on records to pick up all those small biases that matter.
There's so much good stuff here, and I agree it's an important message for you to get across.
I think trying to convey these ideas through a quantitative benchmark result (particularly a benchmark which has a clear common interpretation that you're essentially redefining) risks 1) misleading readers, and 2) failing to convey the rich and detailed analysis you've included here in your HN comment.
I'd suggest you restrict your quantitative PubMedQA analysis to report previously published numbers for other models (so you're not in the role of having to defend choices that might cripple other models) or a very straightforward log-probs analysis if no outside numbers are available (making it clear which numbers you've produced vs sourced externally). Then separately explain that many of the small models with high benchmark scores exhibit poor instruction following capabilities (which will not be a surprise for many readers, since these models aren't necessarily tuned or evaluated for that), and you can make the point that some of them are so poor at instruction following that they're very hard to deploy in contexts that require instruction following; you could even demonstrate that they're only able to follow an instruction to "conclude answers with 'Final Answer: [ABCDE]'" on x% of questions, given a standard prompt that you've created and published. In other words, if it's clear that the problem is in instruction following, analyze that.
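Measuring that could be as simple as the sketch below (the format instruction and regex here are placeholders for whatever standard prompt you actually publish):

```python
import re

# Sketch: measure how often a model follows a published output-format
# instruction, independent of whether its answers are actually correct.

FORMAT_INSTRUCTION = "Conclude your answer with 'Final Answer: [X]' where X is one of A-E."
FINAL_ANSWER_RE = re.compile(r"Final Answer:\s*\[?([A-E])\]?\s*$", re.IGNORECASE)

def instruction_following_rate(questions, generate):
    followed = 0
    for q in questions:
        response = generate(f"{q}\n\n{FORMAT_INSTRUCTION}")
        if FINAL_ANSWER_RE.search(response.strip()):
            followed += 1
    return followed / len(questions)
```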
(Not all abstraction pipelines leveraging an LLM need it to exhibit instruction following, and in your own case, I'm not sure you can claim that your model follows instructions well on the basis of its PubMedQA or abstraction performance, since you've fine-tuned on (prompt, answer) pairs in both domains. You'd need a different baseline for comparison to really explore this claim.)
Then I'd suggest creating a detailed table of wrong/surprising stuff that frontier models don't understand about healthcare data, but which your model does understand. Categorize them, show examples in the table, and explain them in narrative much like you've done here.
Steve here, one of the co-authors. Totally valid on OpenBio. I will say that comparison numbers for this paper were such a challenge, in part because we found that a lot of the LLMs on the Medical LLM leaderboard struggled to follow even slight changes in instructions. On one hand it felt inaccurate to just print '[something very low]% Accuracy' on structuring/abstraction tasks and call it a day, but it also seemed like the amount of engineering effort needed to get non-trivial results from those LLMs was saying something important about how they worked.
I think that's especially true when you look at how well GPT-4o worked out of the box -- it makes clear what you get from the battle-hardening that's done to the big commercial models. For the numbers we did include, the thought was that the most meaningful signal was that going from 8B to 70B with Llama3 actually gives you a lot in terms of mitigating that brittleness. That goes a step towards explaining the story of what we're seeing, more so than showing a bunch of comparison LLMs fall over out of the box.
In the end, we presented those models that did best with light tuning and optimization (say a week's worth of iteration or so). I anticipate that we'll have to expand these results to include OpenBio as we work through the conference reviewer gauntlet. Any others you think we definitely should work to include? Would definitely be helpful!