AI And Clinical Trial Recruitment: Can It Fix The Funnel?
By John Oncea, Chief Editor, Clinical Tech Leader

Clinical trials often fail not because the science is weak, but because of errors and oversights during patient recruitment. Or, as Ross Jackson, author and patient recruitment consultant, says, “Strong science alone doesn’t guarantee enrollment success.”
While AI can’t guarantee success either, it is fair to ask whether AI tools can improve the competitive intelligence and site-selection work that underpin recruitment strategy. One such tool aimed at “fixing” the patient recruitment problem is Claude, Anthropic’s conversational AI, which can be used for complex writing, research, data analysis, coding, and problem-solving, and, since January of this year, clinical research.
Claude has expanded its focus to clinical trial operations, adding a connector – a bridge that links it to external applications, databases, and tools – to ClinicalTrials.gov. The connector gives Claude direct access to data on drug and device development pipelines, supporting patient recruitment planning, site selection, and protocol design.
The ClinicalTrials.gov connector is part of Anthropic’s expanded Claude for Life Sciences suite, announced at the JPMorgan Healthcare Conference in January 2026 alongside a separate Claude for Healthcare launch. Anthropic believes this connector will make Claude a more productive research partner for scientists and clinicians and help those in industry bring new scientific advancements to the public.
Jackson agrees, saying, “I’ve been looking quite closely at how AI tools are starting to surface clinical trial information, and this move is more significant than it might first appear from a patient recruitment and visibility perspective. Rather than just improving access to ClinicalTrials.gov, it effectively changes how patients and even clinicians encounter trials in the first place, moving from search-based discovery to AI-curated answers.”
Jackson knows whereof he speaks, sitting right at the intersection of trial strategy and operational reality. He’s not a technologist; he’s a fixer who knows where the real friction is, and he provides a practitioner’s take on whether AI-assisted trial discovery changes anything.
Claude & ClinicalTrials.gov – Connected & Ready
Anthropic’s ClinicalTrials.gov connector queries, analyzes, and summarizes data from the U.S. NIH registry of over 500,000 registered clinical studies.
Jackson uses ClinicalTrials.gov selectively rather than as the center of an audit. “It’s useful for quickly checking the basics: inclusion and exclusion criteria, listed sites, geography, sponsor details, study design, and what else is publicly visible,” he says.
Still, while ClinicalTrials.gov can tell you what is planned, it rarely tells you what is going wrong.
“When a trial is genuinely struggling, the most important answers are usually not sitting in the public record,” notes Jackson. “They sit in the trial design, the site mix, the recruitment pathway, and the gap between what was planned and what is happening on the ground.”
This is where AI can come into play, helping to provide a rapid picture of practical competition, including what other studies are targeting a similar patient population, how closely eligibility criteria overlap, whether the same sites or investigators are involved, and where geographic concentration may create friction.
“In other words,” Jackson says, “not just what trials exist, but where might this trial run into friction in the real world? I’ve seen sites running overlapping studies simultaneously, where the competition for coordinator time, not patients, was what broke enrollment. AI could surface that kind of density instantly. What it cannot tell you is which investigator will deprioritize your study when the pressure is on. That still requires human intelligence. AI accelerates the map; it does not replace the navigator.”
Enter Claude, and its ability to access and analyze live data from ClinicalTrials.gov, including endpoints, eligibility criteria, trial sponsors, study phases, and recruitment statuses. Instead of simple summaries, users can prompt Claude to perform comparative analyses, such as comparing protocols or identifying common exclusion criteria across multiple trials.
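To make the comparative-analysis idea concrete, here is a minimal sketch of one such task: finding exclusion criteria shared across several trials. The trial records below are hypothetical stand-ins for what a ClinicalTrials.gov query might return; the NCT IDs, field names, and criteria are illustrative, not real data.

```python
# Sketch: identify exclusion criteria common to a set of trials.
# Records and field names are hypothetical examples, not live
# ClinicalTrials.gov output.

def common_exclusions(trials):
    """Return the set of exclusion criteria shared by every trial."""
    sets = [set(t["exclusion_criteria"]) for t in trials]
    return set.intersection(*sets) if sets else set()

trials = [
    {"nct_id": "NCT00000001",  # hypothetical ID
     "exclusion_criteria": ["prior immunotherapy", "ECOG > 2",
                            "active CNS metastases"]},
    {"nct_id": "NCT00000002",
     "exclusion_criteria": ["ECOG > 2", "active CNS metastases",
                            "pregnancy"]},
    {"nct_id": "NCT00000003",
     "exclusion_criteria": ["active CNS metastases", "ECOG > 2",
                            "prior chemotherapy"]},
]

print(sorted(common_exclusions(trials)))
# ['ECOG > 2', 'active CNS metastases']
```

The point of the sketch is the shape of the task, not the code: the tedious part is normalizing free-text criteria across protocols, which is exactly where a conversational layer over the registry saves analyst time.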
The connector allows users to automate tasks such as identifying gaps in study design, benchmarking trial endpoints, and drafting patient-friendly, simplified inclusion/exclusion criteria. It also operates alongside other integrations, including Medidata for trial site performance data and bioRxiv/medRxiv for pre-publication research.
All of this helps with tasks such as evaluating the competitive landscape for specific indications, drafting clinical trial protocols that align with FDA and NIH requirements, and identifying appropriate trials for patient profiles.
This development shifts clinical trial research from manual, labor-intensive searches to immediate, automated intelligence. Anthropic has built Claude with HIPAA-ready infrastructure, allowing for the secure handling of sensitive data without using user health information to train its models.
What Claude Can Do For You
At this point, I’m going to turn this article over to a guest writer: Claude. I asked the AI to provide some examples of how it can be used by those of us in clinical research, and here’s what he … she? … it said:
The ClinicalTrials.gov database has always been publicly accessible. What the connector adds is the ability to query it conversationally through an AI – so instead of navigating the website’s search interface or writing API calls, you can ask me in plain language and get structured results back instantly.
- Search for Trials (search_trials) — The main workhorse. You can find trials by:
  - Condition — e.g., “Find trials for Alzheimer’s disease”
  - Intervention/drug — e.g., “What trials are testing pembrolizumab?”
  - Location — e.g., “Clinical trials in Erie, PA”
  - Phase — Phase 1, 2, 3, 4
  - Status — actively recruiting, completed, etc.
  - Sponsor — e.g., “Trials run by Pfizer”
- Patient Eligibility Matching (search_by_eligibility) — Find trials a specific patient might qualify for based on age, sex, condition, biomarkers, and eligibility criteria keywords. Great for questions like “Find recruiting breast cancer trials for a 55-year-old woman with BRCA mutation.”
- Get Full Trial Details (get_trial_details) — If you have an NCT ID (like NCT04567890), I can pull the complete record — full eligibility criteria, endpoints, all study locations with contact info, sponsor details, and dates.
- Search by Sponsor (search_by_sponsor) — Explore a pharma company’s or institution’s full trial portfolio. Useful for competitive intelligence or pipeline analysis (e.g., “What is Moderna working on?”).
- Find Investigators (search_investigators) — Discover principal investigators and research sites by condition, institution, or location — useful for academic networking or site selection.
- Analyze Endpoints (analyze_endpoints) — Dig into the primary and secondary outcome measures across a set of trials.
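For readers who want to see what sits underneath tools like search_trials: ClinicalTrials.gov exposes a public REST API (version 2) that anyone can query directly. The sketch below builds a search URL for recruiting Alzheimer’s trials using standard v2 parameters such as `query.cond` and `filter.overallStatus`; it constructs the request without sending it, and the exact fields returned should be checked against the official API documentation.

```python
# Sketch: build a ClinicalTrials.gov v2 study-search URL, the kind of
# request a connector tool like search_trials would issue. The URL is
# constructed but not fetched.
from urllib.parse import urlencode

BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_search_url(condition, status=None, page_size=10):
    """Construct a study-search URL for the ClinicalTrials.gov v2 API."""
    params = {"query.cond": condition, "pageSize": page_size}
    if status:
        params["filter.overallStatus"] = status  # e.g. "RECRUITING"
    return f"{BASE}?{urlencode(params)}"

url = build_search_url("Alzheimer disease", status="RECRUITING")
print(url)
# Fetching this URL (e.g., with urllib.request.urlopen) returns JSON
# containing a "studies" list of matching trial records.
```

The conversational layer’s value is that you never have to remember parameter names like these; you just ask.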
For sponsors, CROs, investigators, and site coordinators, these capabilities have some meaningful implications, from literature and feasibility research to site selection, protocol benchmarking, and patient recruitment support. In Part Two, we look at where those implications are real, where they fall short, and what the future of AI in clinical trials might actually look like.