From The Editor | May 14, 2026

How AI Is Changing Clinical Trial Design, Not Just Speed


By John Oncea, Chief Editor, Clinical Tech Leader


The conversation around AI in clinical trials keeps pulling everything toward the same talking points: faster timelines, lower costs, leaner operations. Sponsors repeat it. Vendors sell it. Conference agendas are built around it.

Victoria Gamerman isn’t buying it. Or at least, she isn’t satisfied with it.

“True change in basic assumptions isn’t in doing the same things faster,” says Gamerman, Global Head of Digital Transformation for Clinical Development Operations at Boehringer Ingelheim. “It’s in asking entirely new questions.”

That reframe matters. If the industry’s benchmark for AI success is cycle time reduction, it will get exactly that and miss something far more significant in the process.

Simulating Outcomes Before Enrollment

Gamerman’s thesis is that clinical development is undergoing a fundamental shift in orientation from looking backward at what happened to looking forward at what might. The clearest expression of that shift is pre-enrollment trial simulation.

By connecting historical trial data, real-world data, and mechanistic patterns across linked data sources, AI can model how different protocol designs will perform before a single site is selected. A sponsor, for instance, could simulate how changing inclusion criteria affects enrollment velocity, dropout risk, and statistical power, then test several variations against each other before finalizing the protocol. That isn’t merely a faster version of what statisticians already do; it shifts the question from “how do we run this trial” to “what trial should we be running.”
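To make the idea concrete, here is a back-of-the-envelope Monte Carlo sketch of comparing two protocol variants before enrollment. Everything in it is an invented assumption for illustration – the accrual rates, dropout probabilities, and crude power proxy are not Boehringer Ingelheim’s models, and real trial simulation draws on far richer linked data.

```python
import random
import statistics

def simulate_trial(enroll_rate, dropout_prob, effect_size, n_target,
                   n_sims=500, z_crit=1.96, seed=0):
    """Toy simulation of one protocol variant: average months to reach
    the enrollment target, plus a crude power proxy based on how many
    randomized patients complete the trial."""
    rng = random.Random(seed)
    months, powered = [], []
    for _ in range(n_sims):
        enrolled, m = 0, 0
        while enrolled < n_target:  # month-by-month accrual with noise
            enrolled += max(0, round(rng.gauss(enroll_rate, 0.2 * enroll_rate)))
            m += 1
        months.append(m)
        # completers after per-patient dropout
        completers = sum(rng.random() > dropout_prob for _ in range(n_target))
        n_per_arm = max(completers // 2, 1)
        # crude proxy: does the detectable signal clear the z threshold?
        powered.append(effect_size * (n_per_arm / 2) ** 0.5 > z_crit)
    return statistics.mean(months), statistics.mean(powered)

# Broad inclusion criteria: faster accrual, more dropout.
broad = simulate_trial(enroll_rate=40, dropout_prob=0.25,
                       effect_size=0.4, n_target=300)
# Narrow criteria: slower accrual, cleaner retention.
narrow = simulate_trial(enroll_rate=25, dropout_prob=0.10,
                        effect_size=0.4, n_target=300)
```

Running both variants side by side is the point: the sponsor sees the enrollment-speed and power trade-off before the protocol is final, rather than discovering it mid-trial.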

Embedded in that capability is something traditional methods routinely miss. Pre-specified analysis plans, fixed sample sizes, and known population patterns are the grammar of conventional trial design – rigorous, but constraining. Gamerman points out that AI can surface novel patient subpopulations that don’t fit neatly into those pre-written frameworks; subgroups that would never emerge from conventional methods because they weren’t embedded in the assumptions to begin with.

That’s a meaningful expansion of what clinical development can accomplish. Gamerman has built her career around connecting the dots among patient-centricity, digital health, and real-world evidence to evolve clinical research through innovation and digital transformation. Trial simulation is where those threads converge most visibly, and where the productivity narrative around AI starts to look like an undersell.

AI Explainability In FDA Clinical Trial Submissions

During the Clinical Leader Live event, AI In Clinical Trials: What's New & What's Hype?, an attendee poll produced a result Gamerman found telling: transparency ranked last among AI priorities. In a regulated environment, that isn’t a philosophical gap – it’s a submission risk.

Her diagnosis is blunt. The industry is treating AI as a technology implementation rather than a business transformation. Technology implementations get evaluated on performance metrics. Business transformations get evaluated on whether the organization can defend its decisions to regulators, clinicians, and ultimately patients. Those are different tests, and the industry is mostly studying for the wrong one.

The explainability problem is structural. The teams building the models are often walled off from the teams who will eventually have to defend them to the FDA, EMA, or other authorities. When those two groups don’t share a common language, transparency becomes a checkbox appended to a submission package rather than a design principle baked into the model from the start.

The appeal of opacity is real. A black box that promises fast results is seductive when development timelines are measured in years and budgets in billions. But Gamerman draws a hard line: if a clinician cannot explain why an algorithm recommended a specific cohort, they shouldn’t trust it. If a data scientist cannot articulate the model’s assumptions and limitations, a regulator shouldn’t accept it.

Practically, this means explainability documentation – detailing the training data, assumptions, and known limitations of any AI model used in trial design – should be a standard deliverable, not an optional supplement. The organizations building that infrastructure now will have a structural advantage when regulatory scrutiny intensifies. And it will intensify.
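What a “standard deliverable” could look like in its simplest form is a structured model card that refuses to be incomplete. The sketch below is a hypothetical structure, not a regulatory form – the field names are assumptions chosen to mirror the training-data, assumptions, and limitations documentation described above.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative explainability record for an AI model used in
    trial design. Field names are invented, not an FDA/EMA template."""
    model_name: str
    intended_use: str
    training_data_sources: list = field(default_factory=list)
    key_assumptions: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

    def missing_sections(self):
        """Sections still empty -- gaps a reviewer would flag."""
        return [name for name, value in vars(self).items() if not value]

card = ModelCard(
    model_name="cohort-ranker-v2",
    intended_use="Rank candidate cohorts for a phase II protocol",
    training_data_sources=["historical phase II trials, 2015-2023"],
    key_assumptions=["site performance is stationary year over year"],
)
# missing_sections() flags that known_limitations was never documented
```

The design choice matters more than the code: making the documentation a typed artifact that can be checked, versioned, and shipped with the submission is what turns transparency from a checkbox into a design principle.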

Why AI Fails Without Integrated Clinical Data

If there’s a single concept tying Gamerman’s thinking together, it’s this: AI is an amplifier. It will amplify the quality of your data architecture, whether that quality is excellent or poor.

Sophisticated algorithms running on top of disconnected, context-poor data don’t produce better results; they produce wrong answers faster. Clinical trial data in one silo, real-world data in another, genomic data somewhere else, safety databases somewhere else entirely – that fragmented architecture doesn’t become a coherent data ecosystem just because AI is pointed at it.

Gamerman has described AI’s promise as something that “can be stalled by data that lacks context,” advocating for a trifecta of success: deep context through knowledge graphs, predictive analytics that de-risk development, and a clear business case with measurable ROI.
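The “deep context through knowledge graphs” idea can be sketched in miniature: link records across silos via shared identifiers, so a single traversal assembles the context an AI model needs. The entities and relations below are invented for illustration.

```python
# Toy knowledge graph: nodes are entities, keys are (source, relation).
graph = {
    ("patient:P1", "enrolled_in"): ["trial:NCT-0001"],
    ("patient:P1", "has_condition"): ["condition:T2D"],
    ("condition:T2D", "treated_by"): ["drug:metformin"],
    ("trial:NCT-0001", "studies"): ["drug:candidate-X"],
}

def neighbors(node):
    """All entities one hop from `node`, with the linking relation."""
    return [(rel, tgt)
            for (src, rel), tgts in graph.items() if src == node
            for tgt in tgts]

def context(node, depth=2):
    """Breadth-first expansion: the connected context around one entity."""
    seen, frontier = {node}, [node]
    for _ in range(depth):
        frontier = [t for n in frontier
                    for _, t in neighbors(n) if t not in seen]
        seen.update(frontier)
    return seen

ctx = context("patient:P1")
# Two hops out, the patient's record is already linked to both the
# investigational drug and a background therapy from a different silo.
```

The same query against siloed tables would require joins that nobody wrote; in a graph, the connections are the data model.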

The diagnostic question she would have sponsors ask before any AI investment is simple: Is our data connected enough, and is it of sufficient quality with enough context, to give AI something meaningful to work with? If the honest answer is no, deploying a more sophisticated algorithm doesn’t solve the problem – it defers it to a much more expensive moment downstream.

For clinical tech leaders evaluating AI platforms right now, that means the due diligence question isn’t just “what does this tool do?” It’s “what does this tool require from our data architecture, and do we have it?” Those are different conversations, and the second one is the one that determines whether the first one ever pays off.

Victoria Gamerman, Ph.D., is Global Head of Digital Transformation, Clinical Development Operations, at Boehringer Ingelheim. She is also the principal of RWD Insights, LLC.