Why HEOR Keeps Arriving Too Late — And How ClinOps Can Fix It
By Dan Schell, Chief Editor, Clinical Leader

When I interviewed ISPOR CEO Rob Abbott, I started with a question I hear constantly from clinical operations leaders: If health economics and outcomes research (HEOR) is supposed to shape value and real-world relevance, why does that input so often show up after a protocol is already locked? Abbott didn’t disagree with the premise. In fact, he was surprisingly blunt about it. As he put it early in our conversation, “Clinical ops teams aren’t wrong; there really is an ideal window for HEOR, and we usually miss it.”
HEOR is new territory for me, so I appreciated having Abbott walk me through what needs to change, why it hasn’t changed yet, and how ClinOps teams can finally bring HEOR into alignment with trial design. He also gave a realistic view of how AI will reshape evidence work, and what humans will still need to do.
Bring HEOR Upstream
Abbott believes the biggest HEOR misconception is that it’s primarily a downstream function. In reality, its highest value comes early, long before endpoints are selected or sites are activated. HEOR should be part of target product profile development, alongside regulatory, commercial, and early clinical planning.
That’s where companies define the fundamentals: key value drivers, target indications, meaningful comparators, populations that matter, and outcomes that will influence not only regulators but HTA bodies and payers. According to Abbott, “If HEOR isn’t in the room then, you’ve already created the misalignment ClinOps keeps encountering later.”
He stressed how much opportunity gets lost when teams wait until Phase 2 or later to bring HEOR into planning. Once a protocol is finalized, the door to meaningful changes is essentially closed. What follows instead is a predictable sequence: patchwork evidence generation, protocol amendments, competing timelines, and last-minute scrambles to satisfy payer expectations that no one considered early enough.
ISPOR has been working to shift that mindset by helping companies better define the business value of HEOR and by publishing good practices on key topics such as real-world data (RWD) and real-world evidence (RWE). (For readers who want to dig deeper, check out ISPOR’s Good Practices Reports & More page.)
Where HEOR Sits — And Why It Matters
Over the past year and a half, Abbott has noticed a wave of reorganizations in large biopharma companies. HEOR teams aren’t being eliminated, but they are being shuffled. Some are being moved into medical affairs; others are being merged with policy or market access functions. Each move brings trade-offs.
Abbott’s concern isn’t political; it’s practical. HEOR’s influence depends on access to decision-makers and visibility into planning. If it sits too far down in the organization, it can’t contribute early enough. If it sits too close to commercial, it may lose scientific credibility. And if it sits solely within medical affairs, rigor and independence can suffer.
What’s causing the reshuffling? He thinks some of it may be self-inflicted. “For years, HEOR focused heavily on methods. We weren’t telling the story of our business impact inside companies. And when you don’t articulate your value, someone else ends up deciding where you belong.”
To correct that, ISPOR is now leading an empirical ROI study across several life sciences organizations. The goal is to quantify how HEOR reduces risk, improves launches, and influences everything from reimbursement outcomes to trial design efficiencies. That work is still underway, but Abbott expects it to give executives the clarity they’ve been missing.
Fixing the Communication Gaps
One repeated theme was the disconnect between regulatory, ClinOps, HEOR, and HTA expectations. Each group is driven by its own timelines, its own deliverables, and a different understanding of what “good” evidence looks like. That separation creates predictable breakdowns.
Regulatory teams plan to satisfy FDA or EMA expectations. HEOR teams build for payers and HTA bodies. ClinOps tries to navigate both worlds while juggling feasibility, timelines, and operational realities. Meanwhile, the three groups often don’t talk to each other until decisions have already been made.
Abbott outlined several reasons for the misalignment. HEOR often sits outside core decision-making bodies. Evidence generation is siloed, with no single owner of integrated evidence strategy. And each function creates its own assumptions about timing and priority without fully understanding the dependencies upstream or downstream.
Don’t get me wrong; he wasn’t suggesting a massive organizational overhaul. Instead, he proposed some practical, testable steps:
- Create early cross-functional checkpoints before protocol drafting.
- Use joint endpoint matrices that show where regulatory, clinical, payer, and HTA needs overlap or conflict.
- Build a shared, integrated evidence plan across functions.
- Designate a lead for value and evidence strategy who can manage cross-functional alignment.
Abbott said some companies are piloting these approaches on individual programs, but very few have adopted them as a standard operating model. He believes the companies that do adopt these measures will likely reduce amendments, avoid redundant evidence work, and gain a clearer picture of what their data must accomplish for real-world success.
AI’s Role in Evidence Work
Prior to our interview, I had read the article Raising The Speed Limit For Health Economics And Outcomes Research, which Abbott coauthored in January with Mitch Higashi, Ph.D., associate chief science officer at ISPOR. I wanted to ask him about a topic everyone in the industry is wrestling with: How will AI actually automate evidence generation? I was pleased to hear his answer was far more pragmatic than the hype swirling around social media today.
He explained that AI can absolutely accelerate screening, data extraction, table drafting, and technical modeling tasks. It will speed up survival-curve fitting, help identify real-world cohorts, and handle the most time-consuming parts of evidence synthesis. But it won’t replace health economists or evidence strategists, and no major HTA body is ready to accept fully automated systematic reviews. “In general, for whatever kind of document or project you’re working on, AI can probably generate about 80% of your final product,” he said. “But humans still own — and are needed for — that last 20%. That’s because that last 20% is where the judgment lives.”

He sees AI as a way to free HEOR and ClinOps teams from tedious manual work so they can focus on conceptual modeling, causal reasoning, strategy development, and interpretation — the parts of evidence generation that determine whether a trial’s results will stand up to regulatory scrutiny, reimbursement negotiations, and real-world variability.