Part 2: Do Healthcare Organizations Actually Control Their Own Data?

  • Ajai Sehgal, Chief AI Officer

    Ajai Sehgal is Chief AI Officer at IKS Health, where he drives enterprise AI strategy that leverages data and advanced analytics to accelerate innovation and improve healthcare outcomes.

  • Ben Crocker, MD, SVP, Care Design and Innovation

    Dr. Crocker is responsible for designing new products and solutions, and for refining and improving existing services, within the Clinical Care Solutions pillar of IKS Health’s Provider Enablement Platform for both current and prospective clients.

This is part 2 in our FHIRside Chat Series.

At a recent event hosted by IKS Health, we explored what organizations are discovering after actually trying to deploy agentic AI. In Part 1, we discussed why AI systems struggle in healthcare: not because the models are weak, but because the data they rely on is incomplete.

That leads to a deeper question. If healthcare runs on data, who actually controls it?

Most healthcare leaders would answer: “We do.”

In practice, that’s often not true. Consider a simple requirement for safe clinical AI: you must be able to audit what AI saw before it made a recommendation. If AI flags a patient as high risk, you need to verify which medications it considered, which labs were included, what history it used, and what history it missed.

In many organizations, this is difficult or impossible.
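To make the auditability requirement concrete, here is a minimal sketch, in Python, of what capturing "what the AI saw" at inference time could look like. The class and field names are hypothetical, not part of any EHR or FHIR API; the point is that the exact inputs behind a recommendation are recorded and fingerprinted so they can be verified later.

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInputAudit:
    """Hypothetical audit record: the inputs a model saw, captured at inference time."""
    patient_id: str
    medications: list       # medication identifiers the model considered
    labs: list              # lab results included in the model's context
    history_sources: list   # e.g. encounter notes, outside records, payer data
    captured_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def fingerprint(self) -> str:
        """Stable hash of the inputs, so the exact context can be re-verified later."""
        payload = json.dumps(
            {
                "patient": self.patient_id,
                "meds": sorted(self.medications),
                "labs": sorted(self.labs),
                "sources": sorted(self.history_sources),
            },
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

audit = AIInputAudit(
    patient_id="example-123",
    medications=["metformin", "lisinopril"],
    labs=["HbA1c=8.1%", "eGFR=52"],
    history_sources=["encounter-notes", "payer-claims"],
)
print(audit.fingerprint()[:12])  # short fingerprint stored alongside the recommendation
```

Storing a record like this next to every AI recommendation is what turns "the model flagged this patient" into something an organization can actually audit. Which is exactly where many organizations get stuck.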

The EHR allows clinicians to view records, but it does not always allow the organization to computationally extract the complete longitudinal dataset the AI used. Portions of the chart exist in scanned documents, encounter notes, outside records, and payer systems. Some data can be viewed but not bulk accessed. Some can be exchanged but not analyzed.

You can read the chart. You cannot reliably compute over it.

That is the difference between having a record and controlling data.

FHIR (Fast Healthcare Interoperability Resources) is a Health Level Seven International® (HL7®) standard for exchanging healthcare information electronically. The healthcare community is adopting it as the next-generation exchange framework, which makes it essential for healthcare organizations to understand what it does and does not solve.

Why EHR interoperability is not enough

Healthcare has spent years pursuing interoperability — the ability to send patient records between organizations. Interoperability helps clinicians exchange documents. It does not allow software to reason over clinical history.

When another hospital sends a Continuity of Care Document (CCD), a physician can read it. But an AI system cannot easily incorporate thousands of those documents into a consistent, queryable history. The information exists, but it is not addressable at scale.

This is why AI pilots work in demonstrations but fail in operations. AI requires not just access to records, but structured, computable, longitudinal data. Interoperability moves documents. Integration exposes information. Healthcare has largely achieved the first and is only beginning the second.

Once AI participates in care delivery, data access is no longer a technical issue. It is a responsibility issue. If a model contributes to a clinical decision, the organization must be able to audit inputs, reproduce outputs, identify missing context, and detect bias or failure modes.

You cannot govern an AI system whose inputs you do not fully control. This is why “data ownership” matters. Ownership is not about possession of records. It is about the ability to independently access, analyze, and verify them.

Without that capability, healthcare organizations are accountable for decisions made using systems they cannot fully evaluate.

Where FHIR enters the conversation

FHIR matters because it changes how the medical record can be used. Instead of functioning only as documentation for humans, the record becomes accessible to software agents in real time. Clinical history, medications, labs, and encounters become queryable rather than merely viewable.
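Here is a minimal sketch of what "queryable rather than merely viewable" means in practice: pulling a patient's active medications out of a FHIR searchset Bundle as structured data instead of reading a document. The Bundle below is a hand-built toy example; a real system would fetch it from the EHR's FHIR endpoint (e.g. `GET [base]/MedicationRequest?patient=...&status=active`).

```python
# Toy FHIR R4 searchset Bundle, shaped like what an EHR's FHIR API returns
# for a MedicationRequest search on one patient.
bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [
        {"resource": {
            "resourceType": "MedicationRequest",
            "status": "active",
            "medicationCodeableConcept": {"text": "Metformin 500 mg"},
            "authoredOn": "2024-03-01"}},
        {"resource": {
            "resourceType": "MedicationRequest",
            "status": "stopped",
            "medicationCodeableConcept": {"text": "Lisinopril 10 mg"},
            "authoredOn": "2022-07-15"}},
    ],
}

def active_medications(bundle: dict) -> list[str]:
    """Return display text of active MedicationRequest resources in a searchset Bundle."""
    meds = []
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        if res.get("resourceType") == "MedicationRequest" and res.get("status") == "active":
            meds.append(res.get("medicationCodeableConcept", {}).get("text", "unknown"))
    return meds

print(active_medications(bundle))  # ['Metformin 500 mg']
```

The same pattern applies to labs (Observation), problems (Condition), and encounters: software filters, joins, and audits the record as data, rather than a human scanning a rendered document.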

This is what enables safe agentic AI. AI does not require perfect data. It requires transparent data. When organizations can independently access their own longitudinal clinical information, they can audit model behavior, manage risk, integrate operational workflows, and use automation safely.

The key shift is not adopting AI. It is moving from documentation systems to computable systems.

Healthcare’s AI future will not be determined by which model is most advanced. It will be determined by which organizations can access and govern their own clinical information.

Until that shift happens, AI will remain impressive in pilots and unreliable in practice – not because it lacks intelligence, but because it lacks a reliable view of the patient.
