Digital Endpoints Are Ready. Clinical Development Isn't
By John Oncea, Chief Editor, Clinical Tech Leader

For years, digital endpoints occupied the “promising but emerging” tier of clinical development: interesting enough to pilot, not trusted enough to drive. That framing has expired. Regulatory guidance from the FDA is clearer than it has ever been. The underlying sensor and wearable technologies have matured substantially. Validation evidence is accumulating across therapeutic areas. And patients are consistently telling trial designers they want measures that reflect how they actually live, not just what shows up in a clinic visit.
Yet digital endpoints are still rarely used in pivotal trials.
That gap – between readiness and adoption – is the most important problem in clinical research technology right now. And according to Jennifer Goldsack, CEO of the Digital Medicine Society (DiMe), the answer is not hiding in a regulatory dossier. “The scientific, regulatory, and economic foundations for digital endpoints are largely in place,” she says. “What slows progress is how decisions get made across clinical, statistical, regulatory, digital, and outcomes teams.”
That is a precise and important diagnosis. The field is not waiting on science. It is waiting for organizations to act like the science is ready.
A Maturity Problem, Not A Viability Problem
The most significant conceptual shift in this space is one that many clinical development leaders have been slow to make: the central question is no longer whether digital endpoints can work; it’s whether organizations are structured to use them well.
Goldsack has argued publicly that the field needs to spend less energy debating whether digital endpoints are still “emerging” and more on implementation, execution, and real-world application. For sponsors and CROs that are still treating digital measures as exploratory or peripheral, that is a direct challenge to the default posture.
The evidence supports the challenge. The FDA has issued final guidance on decentralized clinical trials and digital health technologies. DiMe and partner organizations through DATAcc have built open-access frameworks, validation playbooks, and endpoint libraries specifically designed to reduce implementation burden. Peer-reviewed literature is growing. The tools exist. The guidance exists. What often does not exist is an internal operating model capable of using them.
This distinction matters for how clinical development leaders invest their energy. If the bottleneck were regulatory uncertainty, the answer would be more dialogue with the FDA. If it were technology immaturity, the answer would be more vendor evaluation. But if the bottleneck is organizational – how decisions are made, who owns what, and when functions align – then the fix is structural, not technical.
Where Adoption Actually Stalls
Goldsack is specific about the friction points, and her precision is useful. “Adoption stalls at the interfaces between functions,” she says. Clinical, statistical, regulatory, digital, and outcomes teams each own a piece of the problem. What they rarely have is a mechanism for bringing those perspectives together early enough to make a confident, coordinated decision.
That creates two predictable failure modes.
The first is alignment failure. Teams cannot agree on what “fit for purpose” means for a given endpoint, what evidence is needed to support it, or who is responsible for generating that evidence. Without a shared framework, programs default to familiar endpoints, not because those measures are scientifically optimal, but because they feel lower risk. That risk perception, Goldsack argues, is often more organizational than scientific. “In high-risk environments like clinical development, lack of alignment is interpreted as risk, and risk defaults to precedent.”
The second failure mode is execution fragmentation. Once a direction is set, validation planning, data quality requirements, and regulatory strategy often proceed in separate silos rather than as a coordinated effort. That introduces delays, creates avoidable uncertainty, and erodes confidence in the approach, even when the underlying endpoint is well-supported.
Both failure modes reinforce each other. A team that struggles to align early will struggle even harder to execute later. And every program that stalls reinforces institutional skepticism about whether digital endpoints are worth the effort.
Why Intention Is Still The Deciding Factor
A theme running through Goldsack’s thinking is that organizational intention – genuine commitment to doing something differently – remains the separator between programs that advance and programs that do not. Tools can reduce friction once that intention exists, but they cannot manufacture it.
“Do digital technologies offer enormous promise to transform the way that we develop new medical products? Yes, absolutely,” she said. “Do I think that shoehorning technologies into every nook and cranny of clinical research is suddenly going to create an industry that’s sustainable in the current environment? Not without an awful lot of intention.”
That qualifier – intention – is doing real work in her argument. It explains why organizations with access to the same guidance, the same technology vendors, and the same regulatory environment can produce such different outcomes. The teams that move forward are not operating with better information. They are operating with a clearer commitment to acting on the information they have.
This is also where digital endpoint adoption differs from many other clinical technology challenges. The hard part is not acquiring software or sensors. It is the organizational discipline to define what a meaningful endpoint looks like, decide whether the evidence supports it, and work backward through validation and execution with enough cross-functional alignment to sustain momentum.
The Case For Moving Now
Goldsack’s argument about competitive dynamics is worth taking seriously. The organizations that build internal capability to design and execute digital endpoint programs now will accumulate an advantage that compounds. Earlier and more efficient trials, stronger patient-centered evidence, and better-informed regulatory conversations are not just operational improvements; they are strategic ones.
The economic dimension reinforces the urgency. DiMe and allied groups have published work showing digital endpoints can improve trial efficiency and economics. For clinical development leaders under pressure to reduce cost and cycle time, that is not an abstract benefit. It is a direct answer to one of the most persistent pressures in the industry.
The flip side is equally important. Organizations that treat digital endpoints as an exploratory category, something to revisit when the field matures further, are operating on an assumption that the field no longer supports. As Goldsack puts it, “The organizations that will continue to develop drugs sustainably into the future are the ones that move on this now. Organizations that hesitate, whether out of conservatism that no longer reflects regulatory signals or simple resistance to change, will fall behind.”
For clinical development leaders, that makes the next move less about waiting for more proof and more about building the internal conditions that allow existing proof to become practice.
The field is not still becoming ready. It has been ready. The question now is whether organizations are.
The Digital Medicine Society’s sDHT Adoption Navigator, developed with FDA funding, is available as a free, open-access resource at dimesociety.org.