From The Editor | April 20, 2026

HEOR, AI, And The Future Of Clinical Trial Technology


By John Oncea, Chief Editor, Clinical Tech Leader

This is the first in a three-part series based on a conversation with Rob Abbott, CEO of ISPOR, about the organization’s 2026-2027 Top 10 HEOR Trends Report. Part two, coming April 22, examines real-world evidence and wearables; part three, coming April 24, covers HTA integration, digital twins, and what it all means for how clinical technology is built and procured.


Rob Abbott, CEO, ISPOR

Today’s trial tech stack is being reshaped by health economics and outcomes research (HEOR), health technology assessment (HTA), and AI governance, and clinical tech leaders who don’t understand these forces are facing an uphill battle.

Rob Abbott, CEO of ISPOR, which he describes as “the world’s largest and, I would say, most influential society advancing research excellence in HEOR and then, crucially, translating that research into policy impact,” has thought about this a lot. With roughly 20,000 members across 100 countries, ISPOR exists to improve health outcomes, and Abbott is direct about what stands in the way: misalignment between the people who build clinical tools and the people who need evidence from them.

“If HEOR isn’t in the room,” he told Clinical Leader Chief Editor Dan Schell, “then you’ve already created the misalignment ClinOps keeps encountering later.”

That sentence is worth sitting with. It doesn’t just describe a process problem. It describes a design problem, one baked into clinical technology at the architectural level, in which the people asking, “Does this work, for whom, and at what cost?” are brought in after the build rather than before.

What HEOR Actually Is, And Why It Matters To Tech Teams

Think of HEOR, Abbott says, as a two-part question: does what we’re studying actually help people, and is it worth the cost? It’s not just an academic framework; it’s a practical scorecard that asks not only whether a drug or device works in a controlled trial, but how it performs in real life, how it affects quality of life, and what it costs patients, payers, and health systems.

Take an inhaler study. A HEOR-informed approach wouldn’t just compare two inhalers on lung function improvement. It would capture emergency room visits, treatment adherence, and total healthcare spending: the full picture of which option delivers genuine value. That kind of analysis helps clinical researchers make their studies relevant beyond the trial setting, and it shapes which endpoints will matter to payers and providers long after the trial ends.
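HEOR analyses typically condense that full picture into a summary measure such as the incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of benefit, often measured in quality-adjusted life years (QALYs). A minimal sketch, with entirely hypothetical per-patient numbers for the inhaler comparison above:

```python
# Illustrative only: hypothetical numbers, not data from any real study.
# The ICER answers "is it worth the cost?" in a single figure:
# incremental cost divided by incremental benefit (here, QALYs).

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained when choosing B over A."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Hypothetical per-patient totals that include downstream costs
# (ER visits, adherence support), not just the drug price.
standard = {"cost": 4_200.0, "qalys": 1.60}
new_inhaler = {"cost": 5_400.0, "qalys": 1.68}

ratio = icer(standard["cost"], standard["qalys"],
             new_inhaler["cost"], new_inhaler["qalys"])
print(f"ICER: ${ratio:,.0f} per QALY gained")  # ICER: $15,000 per QALY gained
```

Payers then compare that ratio to a willingness-to-pay threshold, which is exactly why endpoints like ER visits and adherence must be captured during the trial rather than estimated afterward.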

For clinical technology specifically, HEOR does something even more important: it provides a framework for evaluating whether a tool, be it a device, a diagnostic, or a digital health platform, improves outcomes enough to justify its cost and integration burden. It also surfaces adoption barriers early: usability, adherence, workflow impact, and whether the technology will actually be used consistently once the study ends. A researcher studying a remote monitoring device, for instance, should be asking, before procurement rather than after, whether it reduces hospital visits, improves quality of life, and saves staff time.

AI Jumps To Number One

When ISPOR published its Top 10 HEOR Trends report two years ago, AI ranked third. In the 2026-2027 edition, it’s number one. I asked Abbott why.

“The pace of change, and the acceleration, has really been quite dramatic,” he said. “We can see the potential for smarter and faster patient recruitment for clinical trials. We can then leverage artificial intelligence for better trial design. There’s the potential for real-time monitoring during the trial, as opposed to after the fact. There’s the potential for predicting trial outcomes.”

But Abbott was clear that efficiency gains like faster data automation and smarter recruitment aren’t the real reason AI climbed to the top spot. The real reason is more fundamental: AI makes possible a shift from what he called “linear clinical trial execution” to “dynamic adaptive trial design and execution, where you’re able to leverage AI to enable continuous learning during the trial itself.”

That means real-time adaptation as data comes in. It means parallel decision analysis instead of sequential. It means the mental model around trial design changes, not just the speed of execution. That’s a different kind of value proposition than “AI makes things faster,” and it’s why clinical technology platforms that haven’t started thinking about adaptive design infrastructure are going to find themselves behind the curve faster than they expect.
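To make the shift from linear to adaptive execution concrete, here is one well-known pattern it can take (a sketch I am supplying for illustration, not something Abbott or ISPOR prescribes): Bayesian response-adaptive randomization, where each new patient's assignment uses everything observed so far, so the trial "learns" while it runs. All response rates below are hypothetical.

```python
# Illustrative sketch of response-adaptive randomization via Thompson
# sampling: assignment probabilities shift toward the better-performing
# arm as outcomes accumulate, instead of staying fixed 50/50.
import random

random.seed(0)

# Beta(1, 1) priors on each arm's response probability.
arms = {"A": {"succ": 1, "fail": 1}, "B": {"succ": 1, "fail": 1}}
TRUE_RESPONSE = {"A": 0.30, "B": 0.45}  # hypothetical; unknown in practice

for patient in range(200):
    # Thompson sampling: draw a plausible response rate for each arm
    # from its current posterior, assign to the arm with the highest draw.
    draws = {name: random.betavariate(a["succ"], a["fail"])
             for name, a in arms.items()}
    chosen = max(draws, key=draws.get)

    # Observe the (simulated) outcome and update that arm's posterior.
    responded = random.random() < TRUE_RESPONSE[chosen]
    arms[chosen]["succ" if responded else "fail"] += 1

for name, a in arms.items():
    n = a["succ"] + a["fail"] - 2
    mean = a["succ"] / (a["succ"] + a["fail"])
    print(f"arm {name}: {n} patients, posterior mean {mean:.2f}")
```

This is the infrastructure implication in miniature: the randomization engine, the data pipeline, and the statistical model all have to talk to each other during the trial, not in a batch analysis after database lock.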

The Governance Problem

None of that promise matters without guardrails, and Abbott is emphatic on this point.

“The fundamental thing is we need human oversight to ensure accuracy and to ensure that every patient’s needs are met,” he said. ISPOR is actively evaluating the creation of an AI collaborative to define what legal, ethical, and practical guardrails look like for HEOR and health technology. The PALISADE checklist – ISPOR’s guidance framework for using machine learning in HEOR in a way that’s transparent and trustworthy – has already been cited by NICE in its AI position statement, signaling that regulators are watching what the research community builds.

The worst-case scenario, in Abbott’s framing, is a “Wild West” outcome in which excitement about AI overrides judgment, and decisions are made without adequate human oversight. The best case is that the research community establishes guardrails early enough to realize AI’s full potential, particularly its ability to absorb, digest, and analyze very large datasets in a fraction of the time it takes human researchers, while humans retain the role of evaluating that data and rendering final judgments.

“It has to be done responsibly,” Abbott said. “We want to ensure that humans remain at the helm.”

For sponsors and CROs building or procuring clinical technology right now, that means data quality isn’t optional infrastructure; it’s the foundation everything else is built on. AI can synthesize and surface, but it cannot compensate for poor underlying data. “Data quality is foundational,” Abbott said, and it’s worth treating it as such in every vendor evaluation, platform build, and protocol design process that touches AI.

The MAHA Commission report – cited in the HEOR Trends document – illustrated the stakes: it referenced clinical studies that don’t exist. Hallucinations in AI-assisted literature review are a documented, real risk. A responsible verification workflow isn’t bureaucratic overhead. It’s the difference between evidence that holds up and evidence that doesn’t.

Part two of this series examines where real-world evidence meets clinical trial technology, and what the convergence of RCTs and RWD means for the platforms being built to support it.