This is Part 1 in our FHIRside Chat Series.
Healthcare executives today talk about AI the way hospitals once talked about EHRs: as a looming inevitability. The promise is familiar: less burnout, more efficiency, better care.
At a recent event hosted by IKS Health, we discussed what organizations are discovering after actually trying to deploy agentic AI.
The most important takeaway wasn’t about model performance, safety frameworks, or prompt engineering. It was simpler: most health AI initiatives don’t fail because the models are weak, but because the systems around them can’t assemble a complete picture of the patient.
FHIR (Fast Healthcare Interoperability Resources) is a Health Level Seven International® standard for exchanging healthcare information electronically. As the healthcare community adopts this next-generation exchange framework to advance interoperability, it’s critical that healthcare organizations are prepared and knowledgeable.
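In practical terms, FHIR exposes patient data as typed resources over a REST API. As a minimal sketch (the base URL and patient ID below are placeholders, not a real server), here is how software might pull a patient’s active medication orders as standard FHIR R4 resources:

```python
import requests

# Hypothetical FHIR R4 server; the base URL and patient ID are placeholders.
FHIR_BASE = "https://fhir.example-health.org/r4"
PATIENT_ID = "example-patient-id"

# Fetch the patient's active medication orders as a FHIR Bundle.
# MedicationRequest is a standard FHIR R4 resource type.
resp = requests.get(
    f"{FHIR_BASE}/MedicationRequest",
    params={"patient": PATIENT_ID, "status": "active"},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()

# Each Bundle entry wraps a MedicationRequest resource; print the medication.
for entry in bundle.get("entry", []):
    med = entry["resource"].get("medicationCodeableConcept", {})
    print(med.get("text", "unknown medication"))
```

The point is that the data comes back as structured, machine-readable resources rather than scanned documents, which is exactly what software needs in order to reason over a record.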
The real risk isn’t bad AI — it’s incomplete context
Every clinician knows this scenario.
A patient arrives for a follow-up after a hospitalization. Your chart shows warfarin. The patient’s pill bottle shows apixaban. Outside cardiology changed the medication. Urgent care drew labs. The emergency department adjusted dosing. None of those results are visible. Some documents exist somewhere in the media tab. Others are in a payer portal. Some are only in the patient’s memory.
This is not a clinical reasoning problem. It’s an information problem. Clinicians compensate for incomplete data by asking more questions, ordering repeat labs, and delaying decisions. We know the chart is incomplete, so we practice cautiously.
AI does not practice cautiously. AI treats the dataset as the patient. When key history is missing, the model does not recognize uncertainty – it confidently reasons over an incomplete record. In other words, AI doesn’t just make mistakes when data is missing. It makes plausible mistakes.
The danger is not that AI will hallucinate. It’s that it will produce recommendations that sound clinically reasonable but are built on a partial picture of the patient.
Many health systems are puzzled that promising AI pilots don’t scale. The algorithms work in demonstrations but struggle in daily workflows. The reason is rarely model accuracy. It’s fragmentation.
Patient information in most organizations lives simultaneously in the structured chart, scanned documents, outside specialist records, health information exchanges, and payer systems. Humans know these gaps exist and mentally adjust. AI cannot. When the system cannot reliably access medication history, outside imaging, prior authorizations, or longitudinal clinical narrative, AI’s outputs become unreliable in exactly the cases where help is most needed — complex patients. Organizations then conclude that AI “isn’t ready.” In reality, the infrastructure isn’t ready.
Healthcare has spent decades digitizing records, but not decades making data computable.
When AI has complete longitudinal data, its role changes dramatically. Instead of generating answers, it becomes an attention system. It can surface relevant prior labs, identify medication interactions, summarize outside encounters, and anticipate administrative barriers. For example, identifying that an insurer requires six weeks of physical therapy before approving an MRI allows the clinician to order the correct first step immediately rather than after a denial and weeks of delay.
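Here is a minimal sketch of that shift, with the caveat that the interaction table, helper name, and data shapes below are hypothetical stand-ins (a real system would draw on a curated clinical knowledge base): the program doesn’t generate answers, it filters an assembled record and flags what deserves a clinician’s attention.

```python
from datetime import datetime, timedelta

# Hypothetical, hard-coded interaction/duplication pairs. A real system
# would consult a curated clinical knowledge base, not a literal like this.
KNOWN_INTERACTIONS = {frozenset({"warfarin", "apixaban"})}

def surface_attention_items(medications, labs, now=None):
    """Return flags for a clinician to review; never a recommendation.

    medications: list of dicts like {"name": str, "source": str}
    labs: list of dicts like {"name": str, "taken_at": datetime, "source": str}
    """
    now = now or datetime.now()
    flags = []

    # Flag medication pairs with a known interaction or duplication,
    # e.g. warfarin in the chart and apixaban from an outside record.
    names = [m["name"].lower() for m in medications]
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if frozenset({a, b}) in KNOWN_INTERACTIONS:
                flags.append(f"Possible interaction/duplication: {a} + {b}")

    # Surface recent labs drawn outside the local EHR that the
    # clinician may not have seen.
    for lab in labs:
        recent = now - lab["taken_at"] < timedelta(days=90)
        if recent and lab["source"] != "local_ehr":
            flags.append(f"Recent outside lab: {lab['name']} ({lab['source']})")

    return flags
```

Note what the sketch depends on: the flags are only as good as the record behind them, which is why completeness, not model sophistication, is the limiting factor.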
None of this requires superintelligence. It requires access to the record. The most immediate value of agentic AI is not autonomous medicine. It is restoring clinical awareness inside a fragmented system.
What AI is revealing about healthcare
The lesson from early AI deployments is not that medicine requires superintelligent models. It’s that medicine requires complete context. Clinicians already understand this. When we suspect the chart is incomplete, we delay decisions, repeat labs, and call outside offices. We practice cautiously because we know the record rarely tells the entire patient story.
AI does not have that instinct. It assumes the record it sees is the record that exists.
Which leads to a more troubling realization.
If safe AI requires a complete longitudinal view of the patient, then the limiting factor is no longer the model — it is whether healthcare organizations can actually assemble that view at all. In many systems today, critical pieces of the patient story live in scanned documents, outside specialist records, payer portals, and vendor-controlled systems. The information exists, but it is not reliably accessible in a way software can reason over.
So before healthcare asks how powerful AI will become, it must answer a simpler question:
Can we actually access and account for the full patient record?
In Part 2, we’ll explore why that question is harder than it sounds — and why many organizations discover they don’t truly control the data they depend on.