From The Editor | April 2, 2026

AI In Clinical Trials: Real Impact, Real Limits, What's Next

By John Oncea, Chief Editor, Clinical Tech Leader

In Part One, we looked at how Anthropic’s new ClinicalTrials.gov connector works and what Claude can do with it. Here, with the help of author and patient recruitment consultant Ross Jackson, we assess what it actually changes for those running trials and what it doesn’t.

What Claude Can Do For You

For sponsors, CROs, investigators, and site coordinators, the connector has meaningful implications across four areas.

Literature and feasibility research come first. Before designing a trial, teams spend significant time surveying the competitive landscape: what’s been tried, what endpoints are used, and who the key investigators are. That work traditionally takes hours of manual searching; with this connector, it collapses into a few questions. Competitive intelligence that used to require a dedicated analyst can now be done in a conversation.
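
Anthropic hasn’t published the connector’s internals, but the records it reads are the same ones exposed through ClinicalTrials.gov’s public v2 REST API, so the kind of feasibility pull described above is easy to approximate. Here is a minimal Python sketch, assuming the documented v2 field paths (worth verifying against the current API docs):

    # Illustration only: approximates the connector's landscape query
    # using the public ClinicalTrials.gov v2 API, not Anthropic's tooling.
    import requests

    API = "https://clinicaltrials.gov/api/v2/studies"

    def landscape(condition, page_size=50):
        """Snapshot of actively recruiting trials for one condition."""
        params = {
            "query.cond": condition,
            "filter.overallStatus": "RECRUITING",  # v2 status enum
            "fields": ",".join([
                "protocolSection.identificationModule.nctId",
                "protocolSection.identificationModule.briefTitle",
                "protocolSection.designModule.phases",
                "protocolSection.sponsorCollaboratorsModule.leadSponsor.name",
            ]),
            "pageSize": page_size,
        }
        resp = requests.get(API, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json().get("studies", [])

    for study in landscape("triple-negative breast cancer"):
        p = study.get("protocolSection", {})
        print(p.get("identificationModule", {}).get("nctId"),
              p.get("designModule", {}).get("phases"),
              p.get("sponsorCollaboratorsModule", {}).get("leadSponsor", {}).get("name"))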

Next, site selection and investigator identification. Finding qualified PIs and active sites for a new trial involves a lot of manual cross-referencing. The search_investigators capability can surface this information quickly by condition, geography, and institution — useful for CROs and sponsors during the site activation phase.
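
The tool’s internals aren’t public either, but a search_investigators-style lookup can be imitated by tallying the overall officials named on matching study records. A sketch under the same v2-schema assumptions:

    # Hypothetical approximation of an investigator search; the role enum
    # value comes from the public v2 schema (verify against current docs).
    import requests
    from collections import Counter

    def top_investigators(condition, country="United States", limit=10):
        resp = requests.get(
            "https://clinicaltrials.gov/api/v2/studies",
            params={
                "query.cond": condition,
                "query.locn": country,
                "fields": "protocolSection.contactsLocationsModule.overallOfficials",
                "pageSize": 100,
            },
            timeout=30,
        )
        resp.raise_for_status()
        tally = Counter()
        for study in resp.json().get("studies", []):
            officials = (study.get("protocolSection", {})
                              .get("contactsLocationsModule", {})
                              .get("overallOfficials", []))
            for official in officials:
                if official.get("role") == "PRINCIPAL_INVESTIGATOR":
                    tally[(official.get("name"), official.get("affiliation"))] += 1
        return tally.most_common(limit)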

Then there’s protocol design benchmarking. The analyze_endpoints tool lets researchers look at what outcome measures peer trials are using. This can inform endpoint selection during protocol writing, which is typically a time-intensive collaborative process.
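
To give a sense of what that benchmarking involves (analyze_endpoints’ own logic isn’t public), a simple frequency count of primary outcome measures across peer trials might look like this:

    # Sketch of endpoint benchmarking: tally primary outcome measures
    # across trials for one condition via the public v2 API.
    import requests
    from collections import Counter

    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={
            "query.cond": "chronic kidney disease",
            "fields": "protocolSection.outcomesModule.primaryOutcomes",
            "pageSize": 200,
        },
        timeout=30,
    )
    resp.raise_for_status()

    measures = Counter()
    for study in resp.json().get("studies", []):
        outcomes = (study.get("protocolSection", {})
                         .get("outcomesModule", {})
                         .get("primaryOutcomes", []))
        for outcome in outcomes:
            measures[outcome.get("measure", "").strip().lower()] += 1

    for measure, count in measures.most_common(10):
        print(f"{count:3d}  {measure}")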

Finally, patient recruitment support. Site coordinators often help patients and physicians understand what trials are available. A conversational AI that can match patients by age, sex, condition, and eligibility keywords could meaningfully augment how patients are screened and referred.
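
It’s easy to picture the coordinator-side pre-filter involved. The sketch below is coarse triage on structured registry fields only, not a clinical screen; registry age strings look like “18 Years,” and the parsing here is deliberately simplified:

    # Coarse pre-screen on structured eligibility fields. Interpreting the
    # free-text criteria remains a job for trained clinical staff.
    import re

    def age_in_years(text, default):
        """Parse registry age strings such as '18 Years' or '6 Months'."""
        if not text:
            return default
        match = re.match(r"([\d.]+)\s*(Year|Month|Week|Day)", text, re.IGNORECASE)
        if not match:
            return default
        per_year = {"year": 1, "month": 12, "week": 52, "day": 365}
        return float(match.group(1)) / per_year[match.group(2).lower()]

    def maybe_eligible(patient, eligibility):
        if eligibility.get("sex", "ALL") not in ("ALL", patient["sex"]):
            return False
        low = age_in_years(eligibility.get("minimumAge"), 0)
        high = age_in_years(eligibility.get("maximumAge"), 200)
        return low <= patient["age"] <= high

    patient = {"age": 62, "sex": "FEMALE"}
    eligibility = {"sex": "ALL", "minimumAge": "18 Years", "maximumAge": "75 Years"}
    print(maybe_eligible(patient, eligibility))  # True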

It’s worth being clear about the limits:

  • It doesn’t touch trial operations – no EDC, no randomization systems, no eClinRO/ePRO platforms. The data flowing inside a running trial is completely separate.
  • It’s read-only – nothing here can submit, modify, or register a trial on ClinicalTrials.gov.
  • It doesn’t replace regulatory expertise – interpreting eligibility criteria or protocol decisions still requires trained clinical staff.

This connector is a small piece of a larger shift in clinical research technology: AI being layered on top of existing data infrastructure to reduce the cognitive and time burden of information retrieval. The underlying database hasn’t changed; what’s changed is the interface to it. That’s meaningful for productivity, but the transformative technology shifts in clinical trials (decentralized trials, wearable data capture, AI-assisted monitoring) are happening at the operational layer, not the public database layer.

The Real-World Impact: What Changes And What Doesn’t

Jackson has spent years being called in when trials go wrong, so he’s well-positioned to assess what a tool like this actually changes and what it doesn’t.

He sees genuine value in the competitive density analysis that Claude can now perform. “I’ve seen sites running overlapping studies simultaneously, where the competition for coordinator time, not patients, was what broke enrollment,” he says. AI can surface that kind of density instantly: which trials are targeting a similar patient population, how closely eligibility criteria overlap, whether the same investigators are involved, and where geographic concentration creates friction. “AI accelerates the map,” Jackson says. “It does not replace the navigator.”
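
To make the density idea concrete, here is a toy scoring function, not Jackson’s method or anything the connector ships; the weights are placeholders chosen purely for illustration:

    # Toy competitive-density score between two trials. Weights are
    # arbitrary; a real model would be calibrated against enrollment data.
    def keyword_overlap(a, b):
        """Jaccard similarity on eligibility keyword sets."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def density_score(t1, t2):
        score = 0.0
        if t1["condition"] == t2["condition"]:
            score += 0.4  # same patient population
        score += 0.3 * keyword_overlap(t1["keywords"], t2["keywords"])
        if t1["investigators"] & t2["investigators"]:
            score += 0.2  # same PIs means the same coordinators
        if t1["cities"] & t2["cities"]:
            score += 0.1  # geographic concentration
        return score

    a = {"condition": "NSCLC", "keywords": {"egfr+", "stage iv", "no prior tki"},
         "investigators": {"Dr. A"}, "cities": {"Boston"}}
    b = {"condition": "NSCLC", "keywords": {"egfr+", "stage iv"},
         "investigators": {"Dr. A"}, "cities": {"Boston", "Chicago"}}
    print(round(density_score(a, b), 2))  # 0.9: these two trials will collide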

He’s also cautiously optimistic about what better pre-launch intelligence could mean for sponsor-CRO relationships. If sponsors had an independent AI tool to benchmark a CRO’s projected timeline against comparable trials in the database, it could give them a more informed basis for challenging assumptions, something Jackson sees as long overdue.

“One of the long-standing issues in this space is that timeline forecasts can be accepted too passively,” he says. But he’s quick to add that comparable doesn’t always mean equivalent: “A site projected at three patients per month based on a similar trial from five years ago is not a sound benchmark if the competitive landscape has shifted, the eligibility criteria have tightened, or the patient community has more options than it did.”

His bigger concern is the false sense of security that better tooling can create. “AI can improve visibility. It does not automatically improve trial design or execution.” Most struggling trials, he argues, don’t fail because nobody looked at the data. They fail because practical barriers to enrollment were never removed – restrictive eligibility criteria, too many endpoints, too much burden on patients and sites, weak follow-up processes, and unrealistic assumptions about site capacity. If a sponsor uses Claude to produce a thorough competitive landscape analysis and then mistakes that for recruitment readiness, the tooling has done more harm than good.

If sponsors had been using AI tools like this five years ago, Jackson believes planning would have improved more than delivery. “Teams might have challenged assumptions earlier, benchmarked more intelligently, and understood the competitive landscape faster. That matters. But recruitment problems are often execution problems as much as information problems. Unless the industry also changes how trials are designed, how sites are supported, and how patient burden is managed, I do not think the overall track record would look radically different.”

What The Future Holds

The ClinicalTrials.gov connector gives AI read access to public trial data, but Jackson is clear-eyed about where the ceiling is. The operational data that really matters – site logs, pre-screening records, referral pathways, screening failure patterns, vendor performance – lives inside private CRO systems, and it isn’t going to become openly accessible any time soon. “That data is commercially valuable, messy, and often fragmented,” he says.

There are early signs of progress. Anthropic’s Medidata connector, announced alongside the ClinicalTrials.gov integration in January 2026, gives Claude access to historical enrollment data and site performance metrics for Medidata customers, a meaningful step toward the operational layer.

But Jackson distinguishes between structured, aggregated data shared with sponsor permission and the kind of real-time, site-level intelligence that would move the needle. What he expects instead is that larger sponsors, CROs, and integrated site networks will build increasingly powerful internal AI layers on top of their own proprietary data, potentially widening the gap between organizations with strong data infrastructure and those without.

What would a genuinely transformative AI tool look like? Jackson has a clear answer: early warning. “Not just reporting what has already gone wrong but identifying – before a trial launches or very early after launch – the specific features most likely to cause recruitment underperformance.”

That means connecting protocol complexity, site capability, patient burden, and competing demand into a practical risk model. “The most useful AI would not just describe the landscape. It would show where a recruitment strategy is most likely to break and why.”

That tool doesn’t exist yet. What does exist is a meaningful improvement in how clinical researchers access and interrogate public data, and that’s worth something, even if it’s only the first layer of a much larger problem. As Jackson puts it, “Building that bridge between insight and action is still a human job and, in my experience, one that the industry still underinvests in.”