Why Patients Drop Out And How To Stop Them
By John Oncea, Chief Editor, Clinical Tech Leader

Shortly after being named Chief Editor of Clinical Tech Leader in February – I’m talking less than a week after – I received a call from a recruiter at Square 1 Clinical Research asking if I’d be interested in taking part in a clinical trial the site was recruiting for.
Being a naturally curious person, I’d have said yes, even without having taken on the role of Chief Editor. But, c’mon ... being asked to enroll in a clinical trial a week after taking a job writing about clinical trials? Seems pretty serendipitous, right?
That trial, A Phase 3, Randomized, Double-Blind, Placebo-Controlled Clinical Study Evaluating the Safety, Tolerability, Immunogenicity, and Efficacy of a Variant-Adapted BNT162b2 Vaccine in Healthy Participants 50 Through 64 Years of Age, carries the ClinicalTrials.gov identifier NCT07300839. According to Tammy King, RN, CRC at Clinical One Research, it is a vaccine study whose primary endpoint is reported COVID-19 infections.
It’s sponsored by BioNTech SE in collaboration with Pfizer, and the plan was to enroll approximately 25,500 patients at 208 sites across the U.S. King told me enrollment is currently closed, and the randomized subjects are continuing according to protocol.
The trial’s primary outcomes include safety and reactogenicity measures, along with COVID-19 case incidence over the follow-up period. Secondary outcomes include confirmed COVID-19 incidence after vaccination and immunogenicity measures such as geometric mean ratios from baseline to post-vaccination time points.
Great Expectations
In addition to getting an inside view of how a clinical trial works, I was excited to interact with the technologies that would be used. After all, that’s my job – writing about clinical research tech. I was hoping to see wearables, sensors, and remote patient monitoring in action. I practiced my signature for the moment it came time to provide eConsent. I thought we might even meet virtually!
King set me straight during my screening visit, explaining that if I wanted to see trial technology in action, I’d need to enroll in a medical device study, as they typically utilize tech more. “Given this is a single-dose, placebo-controlled vaccine study, it will be about as straightforward as could be,” said King. “There will be no complex dosing regimens, serious illnesses, or intensive monitoring protocols to worry about. The study visits are few, the procedures are simple (a shot, some questionnaires, and a couple of blood draws), and the follow-up is self-reported via electronic diary.”
In addition, BioNTech and Pfizer tend to do well when working together, and this was far from the pair’s first collaboration. Their original BNT162b2 trial (NCT04368728) saw them build the operational machinery needed to run protocols like this at a massive scale, so the consent processes, data collection systems, and site training are mature. All of this leads to smoother participant experiences compared to first-time sponsors or smaller trials.
Even in a straightforward vaccine trial, a healthy participant could plausibly run into a few issues, but this one has been smooth. From here on out, all I have to do is complete the follow-up period, report any COVID symptoms, complete any check-in visits or surveys, and not get another COVID vaccine.
Why Patients Walk Away
While my trial is going swimmingly, participants in other trials do sometimes encounter bumps – from sites being slow to confirm appointments as they get up to speed, to clunky apps or portals, to reminder fatigue – that cause them to quit. “Patients drop out for all sorts of reasons,” said King. “They moved out of the area, lost interest, experienced adverse events, or even died (unrelated to the study).”
Then there’s site variability, especially in large trials where a well-staffed academic medical center runs very differently from a small private research clinic. These frictions, while minor in isolation, can snowball into one of clinical research’s most persistent and costly problems: patient dropout.
Many clinical trials report dropout rates around 25–30%, and some studies have reported substantially higher attrition depending on trial design and population, according to the National Center for Biotechnology Information (NCBI). The consequences of poor retention are severe, from delayed timelines and escalated costs to compromised data quality and even outright trial failure.
Excessive trial complexity and overly stringent eligibility criteria may limit patient enrollment and retention. In a survey, patients who described site visits as stressful were more likely to drop out than those who did not, according to the NCBI.
Travel is another quiet killer. Only a small share of Americans have ever participated in clinical trials, and geographic distance remains a major access barrier for many patients, according to Deloitte. Flexible visit windows, proactive scheduling support, and alternative visit models help patients balance trial participation with real-life responsibilities. Where appropriate, in-home visits, remote assessments, or hybrid trial models can significantly reduce time burden while maintaining data quality.
Financial stress is equally underappreciated. Transportation, time off work, and childcare are silent dropout drivers that coordinators may never hear about unless they proactively ask. Clear reimbursement processes, timely payments, and support for accompanying caregivers reduce financial stress and uncertainty. When patients understand what will be reimbursed, how, and when, they can focus on participation, not paperwork or cash flow concerns.
Then there is the communication gap. Most dropouts are predictable if you track signals: missed reminders, delayed callbacks, repeated confusion, transportation complaints, caregiver instability, and financial stress. Sites that build a retention rescue playbook and empower coordinators to act on early warning signs consistently outperform those that wait for participants to go quiet.
“Usually, the subject will let us know when scheduling the next visit. Involvement is voluntary, so if the subject really wants to drop out, it is their right,” King said. “If there is a concern about a procedure or test, it is talked out, and then the decision is made by the subject.”
Crucially, retention is won before the first visit. If patients feel surprised, confused, or unsupported early, the dropout clock starts ticking. King agrees, saying, “If all expectations are transparent at the time of informed consent and questions are answered correctly, the subject usually is very prepared for the length of the study and the requirements of the study. When recruitment teams oversell convenience, retention collapses later – the pitch and the reality have to match.”
Technology To The Rescue
The industry’s response to the retention problem has evolved well beyond the first generation of decentralized clinical trial (DCT) tools, though that progress depends on the infrastructure supporting it. “EDC, randomization, and registering lab samples reduce workload,” says King, “but not all technology is great, and without WiFi or internet, the study will shut down until it is back up and running.”
One important shift underway is the move from reactive retention management to more predictive intervention. According to NCBI, some AI-enabled systems can analyze multiple data streams, including patient-reported outcomes, wearable outputs, and eDiary logs, to help flag early signs of disengagement and support more targeted retention efforts.
The architecture behind these systems is increasingly sophisticated. Modern attrition models operate across three parallel clocks: an engagement clock tracking daily signals such as diary completion, app opens, and wearable syncs; a visit clock monitoring weekly signals including appointment deltas, reschedule rates, and transport reliability; and a clinical clock capturing biweekly signals like adverse event density and dose modifications. Each layer feeds a hazard model trained on survival-curve logic — and the output isn’t a risk score; it’s a routing decision that directs the right support resource to the right participant at the right time.
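As a hypothetical sketch of the routing idea described above – the signal names, weights, and thresholds below are illustrative assumptions, not drawn from any specific vendor’s model – the three clocks might feed a scoring-and-routing layer like this:

```python
from dataclasses import dataclass

@dataclass
class ParticipantSignals:
    # Engagement clock (daily signals)
    diary_completion_rate: float   # fraction of expected eDiary entries completed
    days_since_app_open: int
    # Visit clock (weekly signals)
    reschedule_rate: float         # fraction of visits rescheduled
    transport_complaints: int
    # Clinical clock (biweekly signals)
    adverse_event_count: int

def dropout_risk(s: ParticipantSignals) -> float:
    """Toy weighted hazard score in [0, 1]; weights are illustrative."""
    score = 0.0
    score += 0.35 * (1.0 - s.diary_completion_rate)
    score += 0.20 * min(s.days_since_app_open / 14.0, 1.0)
    score += 0.20 * min(s.reschedule_rate, 1.0)
    score += 0.15 * min(s.transport_complaints / 3.0, 1.0)
    score += 0.10 * min(s.adverse_event_count / 5.0, 1.0)
    return min(score, 1.0)

def route(s: ParticipantSignals) -> str:
    """Return a support action, not a bare risk score."""
    if dropout_risk(s) < 0.3:
        return "no action"
    if s.transport_complaints > 0:
        return "offer travel support"
    if s.diary_completion_rate < 0.5:
        return "coordinator check-in call"
    return "flag for retention review"
```

The point of the sketch is the final step: rather than surfacing a number for a coordinator to interpret, the model’s output is mapped directly to a concrete support action matched to the likeliest driver of disengagement.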
Across studies, AI-driven tools have shown promise in improving recruitment and operational efficiency. Still, the size of the benefit varies widely by use case and study design, according to ScienceDirect. For site teams facing coordinator burnout and compressed timelines, these numbers aren’t academic; they represent real capacity that has been recovered.
Digital Biomarkers And Passive Data Capture
Perhaps the most structurally significant technology shift is the maturation of digital biomarkers as potential trial endpoints and supportive measures, according to the Digital Medicine Society (DiME). As ICH E6(R3) continues to shape more flexible and risk-based trial conduct, digital biomarkers are increasingly being explored for endpoint sensitivity, proactive safety monitoring, and broader trial access.
The shift from active to passive data capture is potentially meaningful from a retention standpoint. Digital biomarkers derived from wearable sensors and smartphones can provide more continuous data streams than periodic clinic visits alone, though their use still depends on validation, context, and the endpoint being measured, according to the Association of Clinical Research Professionals (ACRP). When data collection is ambient rather than episodic, the burden on participants decreases, and so does the friction that drives dropout.
Technology companies are using machine learning and AI to support clinical research in this space, and the FDA finalized guidance in 2023 on the use of digital health technologies for data acquisition in clinical investigations, including validation, data integrity, patient safety, and regulatory considerations. The European Medicines Agency has similarly signaled engagement with DHT integration as part of its regulatory science strategy.
The commercial trajectory reflects growing interest in digital biomarkers, with sponsors, CROs, and device manufacturers increasing their investment and positioning in the space, according to MarketsandMarkets.
Generative AI And LLMs Across The Trial Lifecycle
Retention begins at the protocol level, and generative AI is now being explored there, too. Retrieval-augmented generation systems grounded in curated databases of historical protocols, regulatory documents, and scientific literature may assist in drafting and reviewing trial designs. And, according to ClinicalTrials.gov, some AI-assisted screening tools have shown promise in helping identify potentially eligible patients from clinical notes, but performance claims should be tied to the specific study and not presented as a general rule.
Downstream, LLMs are beginning to be tested in active study workflows. In some AI-augmented screening studies, staff have been able to review more potentially eligible patients than with manual screening alone, while maintaining similar eligibility rates, according to the NCBI.
Generative AI-driven chatbots and digital messaging tools are also being explored for participant engagement, including efforts aimed at underserved communities, according to the NCBI. For IRBs and sponsors attentive to diversity, equity, and inclusion mandates, this is no small thing: if AI-assisted engagement can systematically reduce the dropout differential across demographic groups, it addresses one of the field’s most persistent and least-discussed data quality problems.
Agentic AI And What Comes Next
The frontier is agentic AI — systems that can move beyond simple responses to plan and execute multi-step tasks across tools and data sources. Unlike standard generative AI, agentic AI can be designed to break down complex problems, reason through multiple steps, and support workflow execution, with potential applications in protocol review, eligibility matching, document automation, and safety surveillance, according to the CDC.
Long-term, digital twins are moving from a research concept toward broader operational interest. Advances such as TWIN-GPT have demonstrated the ability to synthesize patient trajectories from sparse datasets, imputing missing values and enabling more complete personalized twin generation even when real-time data inputs are limited, a direct response to one of the field’s most stubborn data integrity challenges, according to the University of Minnesota.
Regulatory frameworks are working to keep pace. The FDA’s draft guidance on AI tools in drug development emphasizes defining the model’s context of use, managing risk, and maintaining controls for updates, signaling that AI can be used if systematic controls are in place. The message to sponsors and CROs building these capabilities now is clear: governance infrastructure is essential.
The Bottom Line
My experience in the BioNTech/Pfizer C4591081 trial has been seamless: one shot, a couple of blood draws, and an eDiary. But that simplicity is the exception, not the rule. For the millions of participants in more complex trials – longer durations, more demanding protocols, more vulnerable populations – the forces pushing toward dropout are real and well-documented.
The clinical research industry is responding, and the technology exists to make trials significantly more patient-friendly. The challenge now is consistent adoption: ensuring that the tools transforming trials at the largest sponsors filter down to the hundreds of smaller sites running studies. When that gap closes, trials and the medicine they produce will be better for everyone.