From The Editor | April 22, 2026

Real-World Evidence And Wearables Are Reshaping Trial Tech


By John Oncea, Chief Editor, Clinical Tech Leader

This is part two of a three-part series based on a conversation with Rob Abbott, CEO of ISPOR, about the 2026-2027 Top 10 HEOR Trends Report. Part one covered HEOR fundamentals and AI governance. Part three (coming April 24) covers HTA integration, digital twins, and the technology procurement implications of all three.


Here’s a number worth paying attention to: the European Medicines Agency reported a 47.5% year-over-year increase in regulator-led real-world evidence (RWE) studies. If that doesn’t prompt a conversation in your organization about whether your clinical data infrastructure is ready for a world where RWE is routine rather than supplemental, it should.

Real-world evidence moved to number two in ISPOR’s 2026-2027 Top 10 HEOR Trends Report. Rob Abbott, ISPOR’s CEO, frames this not as a trend competing with randomized controlled trials, but as the emergence of something more integrated and more demanding of the technology that supports it.

From Separate Tracks To A Continuous Evidence Ecosystem

“We are moving toward a more integrated evidence ecosystem where clinical trials and real-world data coexist,” Abbott told me. “These are no longer separate things, but part of a continuous approach to understanding value.”

That reframe matters for clinical technology in a specific way. If RWE and RCTs are two separate workflows, you can build two separate systems – or buy two separate platforms – and stitch them together later. But if they’re part of a continuous evidence ecosystem, the data infrastructure has to support both from the start. That means interoperability, data quality standards, and integration capabilities can’t be retrofitted. They have to be in the design.

RCT-DUPLICATE – a study demonstrating that RCTs can be effectively emulated using real-world data – is one signal of where this is heading. The question is no longer whether RWE can approximate RCT findings under certain conditions. The question is whether your platform can actually handle the data flows required to support that kind of analysis.

HARPER+ And What Requirements Mean For Vendors

HARPER+ is now required by CMS for device studies using real-world evidence. For anyone who thought RWE protocols were still aspirational guidelines, that development should be clarifying. When a protocol template becomes a regulatory requirement, the specifications for what counts as good practice stop being advisory and start being enforceable.

Abbott described HARPER as establishing “the first generation of guardrails on the wise use of RWE,” a framework that encourages transparency about the data informing decisions and a commitment to progressively improving data quality. The first ripples in the pond, as he put it, address data quality from electronic health records. The next generation will grapple with wearable data, which is messier, more variable, and harder to validate.

“Not all data is created equally,” Abbott said. “The evidentiary package is going to be tied to the quality of the data that informed it.”

For sponsors evaluating vendors and platforms, that means asking harder questions about data quality specifications. When a protocol becomes a requirement, Abbott noted, “the things that an individual CRO would need to do with respect to patient recruitment and patient populations become much more well-defined.” Platform vendors who can’t meet those specifications clearly and demonstrably are going to find themselves at a disadvantage in a market where requirements are only going to tighten.

Wearables: Signal, Noise, And The Data Quality Problem

Patient-centricity and wearables ranked among the top HEOR trends for a reason that goes beyond the technology itself. Wearables – smartwatches, continuous glucose monitors, wearable ECGs – integrate the patient’s voice into research in a “more thoughtful, continuous, and data-driven way,” as Abbott put it, than has historically been possible.

A 2025 Lancet Digital Health study illustrated the direction this is heading: researchers used multimodal wearable sensors, host response biomarkers, and machine learning to predict systemic inflammation following controlled exposure to influenza without relying on patient-reported symptoms. Abbott called it “a perfect storm of wearable technology, machine learning, and biomarkers coming together,” and said to expect more of it.

But wearables come with documented challenges that clinical technology platforms have to be designed to handle. Abbott ticked through them directly: inconsistent data accuracy, poor integration in some cases with clinical systems, technical limitations, and ethical considerations that can’t be waved away by enthusiasm about the technology. There’s also an equity problem that trial designers and platform builders should take seriously. “The people who are most likely to be using wearables today tend to be in upper-income countries and/or of a higher socioeconomic status,” Abbott said. “That could create health inequities.”

That’s not a small caveat. If wearable-dependent trial designs systematically underrepresent lower-income populations, the real-world evidence generated from those trials will have the same blind spots. Platform architecture that doesn’t account for participant diversity in digital access is building bias into the evidence from the start.

The Interoperability Gap

Data fragmentation remains one of the core structural challenges in RWE. DARWIN EU – the EU’s regional aggregation effort – represents one attempt to address it at scale, but Abbott was candid about how much work remains.

“How do we get those data sets to speak to one another?” he asked. The gap between data from an EHR and data from a wearable isn’t just technical; it’s methodological. ISPOR has been collaborating with ISPE (the International Society for Pharmacoepidemiology) on articulating what data quality standards for RWE should actually look like, which is meaningful groundwork. But HL7 FHIR and similar interoperability standards, while important, haven’t solved the problem of making fundamentally different data sources coherent enough to inform clinical decisions.

For clinical technology builders, this is a design brief, not a background condition. The platforms being built today will either have the integration and data harmonization capabilities to participate in a continuous evidence ecosystem, or they won’t. There’s less and less room for “we’ll figure out interoperability later.”
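The harmonization work described above can be sketched in miniature: mapping a wearable export and an EHR extract into one shared observation schema, with provenance attached, before any analysis or quality check runs. The sketch below is illustrative only; the field names (`bpm`, `pulse`, `recorded`) and the plausibility threshold are hypothetical, and a real pipeline would target a standard such as an HL7 FHIR Observation resource or the OMOP common data model rather than a hand-rolled dataclass.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A minimal common schema. Real systems would map to HL7 FHIR
# Observation resources or the OMOP CDM; this just shows the shape
# of the problem: different sources, one target, provenance preserved.
@dataclass
class Observation:
    subject_id: str
    metric: str          # e.g., "heart_rate"
    value: float
    unit: str
    observed_at: datetime
    source: str          # provenance: "wearable" or "ehr"

def from_wearable(subject_id: str, sample: dict) -> Observation:
    """Map a hypothetical smartwatch export (epoch seconds, bpm)."""
    return Observation(
        subject_id=subject_id,
        metric="heart_rate",
        value=float(sample["bpm"]),
        unit="beats/min",
        observed_at=datetime.fromtimestamp(sample["ts"], tz=timezone.utc),
        source="wearable",
    )

def from_ehr(record: dict) -> Observation:
    """Map a hypothetical EHR vitals row (ISO 8601 timestamp)."""
    return Observation(
        subject_id=record["patient_id"],
        metric="heart_rate",
        value=float(record["pulse"]),
        unit="beats/min",
        observed_at=datetime.fromisoformat(record["recorded"]),
        source="ehr",
    )

# Once both sources land in one schema, quality rules apply uniformly
# instead of being reimplemented per data source.
def plausible(obs: Observation) -> bool:
    return 25.0 <= obs.value <= 250.0  # drop physiologically implausible readings

observations = [
    from_wearable("p001", {"ts": 1735689600, "bpm": 72}),
    from_ehr({"patient_id": "p001", "pulse": 68, "recorded": "2025-01-01T08:00:00+00:00"}),
]
clean = [o for o in observations if plausible(o)]
```

The design choice worth noting is the `source` field: keeping provenance on every record is what lets downstream analysis weight or stratify evidence by data quality, which is exactly the direction the HARPER-style guardrails point.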

“Wearables are not going away,” Abbott said. “I expect them to get better. I expect them to get more refined in their ability to monitor certain health metrics and report those accurately.”

The question for clinical technology is whether the infrastructure being built right now will be able to absorb that refinement, or whether it will need to be rebuilt once wearable data becomes a first-class evidentiary input.

Part three of this series looks at HTA integration, digital twins, software as a medical device, and what one mandate, if Abbott could issue it, would change about how clinical technology is built and procured.