By David Clain, HDAI Chief Product Officer and VP of Corporate Strategy, and Dr. Scott Cullen, HDAI Senior Advisor
In the rush to implement AI solutions in healthcare, many organizations are turning to Large Language Models (LLMs) like GPT-4 to summarize patient charts, generate clinical notes, and support decision-making. While these tools show remarkable capabilities, we’re discovering that LLMs alone may not be the complete solution for healthcare’s most pressing challenges.
The Allure of the Quick Fix
It’s easy to understand the appeal. A clinician faced with reviewing dozens of charts before rounds could theoretically ask an LLM to “summarize this patient’s history” and receive a concise, well-structured narrative in seconds. The promise of this technology is tantalizing, especially when physicians spend an average of six hours per shift in the EHR, with nearly two hours dedicated to chart review alone.
But this approach comes with significant risks that should give healthcare leaders pause.
The Hidden Dangers of General Summarization
Consider this sobering example from a Bloomberg report last summer: A nurse received an AI-generated handover report that completely omitted a patient’s critical drug allergies. This wasn’t because the AI was poorly designed, but because asking a general question (“summarize this chart”) of a complex dataset (thousands of clinical notes) creates an environment ripe for critical omissions.
Even the most advanced LLMs today share three fundamental limitations when tasked with general clinical summarization:
- They hallucinate. When processing large amounts of text, LLMs sometimes generate plausible-sounding but factually incorrect information.
- They omit critical details. As in the drug allergy example, important information can be left out entirely.
- They lack transparency. General summaries don’t provide clear traceability to source data, making verification time-consuming and reducing clinician trust.
In healthcare, where decisions impact lives, these limitations aren’t merely inconvenient—they’re dangerous.
A Better Approach: Hybrid AI Systems
Rather than abandoning AI technology, forward-thinking healthcare organizations are adopting hybrid approaches that pair LLMs with other AI techniques to create more reliable systems. Here’s what this looks like in practice:
1. Structured Data First
Before involving LLMs, extract and utilize all available structured data from the EHR. If a diagnosis, lab result, or medication list already exists in structured form, there’s no need to rely on text extraction.
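As a minimal sketch of what “structured data first” can look like, assuming the EHR exposes a standard FHIR REST API: coded resources such as active medications and recent labs are pulled directly, so no model ever has to rediscover information the EHR already stores in structured form. The endpoint URL, patient identifier, and helper names below are illustrative, not any specific vendor’s interface.

```python
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical FHIR endpoint

def get_structured_medications(patient_id: str) -> list[dict]:
    """Pull the active medication list from structured FHIR data, so no
    text extraction is needed for information the EHR already codes."""
    resp = requests.get(
        f"{FHIR_BASE}/MedicationRequest",
        params={"patient": patient_id, "status": "active"},
        timeout=30,
    )
    resp.raise_for_status()
    return [entry["resource"] for entry in resp.json().get("entry", [])]

def get_recent_labs(patient_id: str, loinc_code: str) -> list[dict]:
    """Fetch coded lab observations (by LOINC code) rather than asking a
    model to find the value somewhere in free text."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": loinc_code, "_sort": "-date", "_count": 5},
        timeout=30,
    )
    resp.raise_for_status()
    return [entry["resource"] for entry in resp.json().get("entry", [])]
```

Only when a question cannot be answered from these structured sources does note text need to be involved at all.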
2. Purpose-Built LLM “Agents” for Specific Tasks
Instead of asking one LLM to summarize everything, deploy multiple purpose-built LLM agents that each address a specific, constrained question:
- “Does this patient have food insecurity?”
- “What was the most recent ejection fraction recorded?”
- “Have there been medication changes in the last 7 days?”
By limiting both the question scope and the text provided to each agent, we dramatically reduce the risk of hallucinations and omissions.
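One way to picture a constrained agent, shown here only as an illustrative sketch: each agent receives a single narrow question, a small set of relevant note excerpts, and an explicit instruction to answer from that text alone. The `call_llm` argument is a stand-in for whatever model client an organization uses; nothing below reflects a specific product.

```python
def build_agent_prompt(question: str, note_excerpts: list[str]) -> str:
    """Constrain the task: one narrow question, a limited set of source
    excerpts, and an instruction not to answer beyond the provided text."""
    context = "\n\n".join(
        f"[Note {i + 1}] {text}" for i, text in enumerate(note_excerpts)
    )
    return (
        "Answer the question using ONLY the excerpts below. "
        "If the excerpts do not contain the answer, reply 'NOT FOUND'. "
        "Cite the note number(s) that support your answer.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )

def run_agent(question: str, note_excerpts: list[str], call_llm) -> str:
    """call_llm is a placeholder for any model interface; the safeguard is the
    narrow question scope and the small, pre-filtered context."""
    return call_llm(build_agent_prompt(question, note_excerpts))

# Example: one agent per constrained question, each with its own filtered excerpts.
# run_agent("What was the most recent ejection fraction recorded?", echo_note_excerpts, call_llm)
```

Because each agent sees only the notes relevant to its one question, an omission or hallucination is both less likely and far easier to catch.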

3. Predictive Models for Risk Stratification
LLMs aren’t designed to predict future events. Purpose-built predictive models trained on outcomes data are far more effective at identifying patients at risk for specific complications, readmissions, or clinical deterioration.
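For contrast with an LLM, a conventional risk model looks more like the hedged sketch below: a supervised classifier trained on structured outcomes data (here, scikit-learn logistic regression on placeholder features), producing a probability that can drive risk stratification rather than generated text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder structured features (e.g., age, prior admissions, ejection fraction, lab trends)
# and placeholder outcomes; a production model uses a validated feature set and real outcomes data.
X = np.random.rand(1000, 4)
y = np.random.randint(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# The output is a probability of the outcome (e.g., 30-day readmission),
# which can be thresholded or tiered to prioritize outreach.
readmission_risk = model.predict_proba(X_test)[:, 1]
```

The design point is that prediction is a different job from summarization, and it is best handled by models built and validated for exactly that job.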
4. Transparency Throughout
Every piece of information presented should be traceable to its source, allowing clinicians to verify and explore further when needed. This builds trust and provides necessary context.
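One simple way to make traceability concrete, sketched here with hypothetical field names: every fact surfaced to a clinician carries a pointer back to the structured record or note it came from, so verification is a click rather than a search.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SourcedFinding:
    """A single fact surfaced to the clinician, always paired with its provenance."""
    statement: str     # e.g., "Documented penicillin allergy"
    source_type: str   # "structured" or "note"
    source_id: str     # EHR record or note identifier
    source_date: date  # when the source was recorded
    excerpt: str       # verbatim text the statement is based on

finding = SourcedFinding(
    statement="Documented penicillin allergy",
    source_type="structured",
    source_id="AllergyIntolerance/12345",
    source_date=date(2024, 3, 2),
    excerpt="Allergy list: penicillin (rash)",
)
```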
Real-World Results
Early testing of this hybrid approach has shown promising results. In transitional care management, nurses experienced a 50% reduction in chart review time without sacrificing quality. More importantly, they reported greater satisfaction with the process—finding what they needed faster and with less effort.
A heart failure registry manager noted that summaries generated through this hybrid approach were not only faster to review but more thorough than traditional manual chart review.
What This Means for Healthcare Leaders
As healthcare organizations evaluate AI solutions, they should look beyond the impressive demos of general LLM capabilities and ask:
- Does this solution provide transparency and traceability?
- Does it use structured data appropriately?
- Does it constrain LLM use to specific, well-defined tasks?
- Does it complement LLMs with purpose-built predictive models?
- Has it been rigorously validated in clinical settings?
The future of clinical decision support isn’t about replacing human judgment with a single AI system, but augmenting it with multiple specialized tools working in concert—each doing what it does best.
By embracing this hybrid approach, healthcare organizations can realize the benefits of AI while mitigating its risks, ultimately delivering what matters most: better care for patients and a better experience for clinicians.
HDAI partners with leading health systems to implement hybrid AI solutions that combine predictive modeling, specialized LLM agents, and human clinical intelligence—all embedded directly within existing EHR workflows. Contact us to learn more about our approach to AI-assisted chart review and risk stratification.