How AI-Enabled eTMF Systems Are Impacted By The EU AI Act (Part 1)
By Donatella Ballerini, GCP consultant

As AI becomes increasingly embedded in eTMF systems, organizations across the clinical research ecosystem are entering a new and complex regulatory landscape shaped by the European Union Artificial Intelligence Act (EU AI Act).
The EU AI Act is not a technical guideline or a voluntary best practice framework. It is a binding horizontal regulation that applies across all industries and sectors, including life sciences and clinical research. Its core objective is to ensure that AI systems placed on or used within the EU market are safe, transparent, trustworthy, and respectful of fundamental rights, including data protection, nondiscrimination, and human oversight.
For organizations already operating under GCP, GMP, GDPR, and quality system regulations, the EU AI Act introduces a very familiar regulatory philosophy — but applies it to a new object of control: AI systems themselves. In essence, the EU AI Act treats AI not simply as software functionality but as a regulated capability that must be governed throughout its entire life cycle.
This analysis reflects a regulatory, risk-based interpretation of the EU AI Act as applied to AI-enabled eTMF systems. Formal classification will ultimately depend on regulatory guidance, implementing acts, and enforcement practice. However, risk governance frameworks must be designed for the highest plausible regulatory exposure, not for minimal interpretations.
What The EU AI Act Is — And What It Is Not
The EU AI Act establishes (not a surprise!) a risk-based regulatory framework for AI systems, meaning that the level of regulatory control is proportional to the level of risk an AI system poses to individuals, society, and public interests such as health and safety.
This approach is conceptually aligned with frameworks already well known in clinical research:
- Risk-based monitoring under ICH-GCP
- Criticality assessments in TMF management
- Risk classification of computerized systems under GAMP
- Impact-based assessments under GDPR
However, the EU AI Act differs in one critical way: It explicitly regulates AI decision-making, even when AI is used in support functions rather than direct clinical interventions.
The act:
- defines what qualifies as an AI system
- classifies AI systems into risk categories
- imposes mandatory obligations based on that risk
- assigns legal responsibilities to different actors (providers, deployers, importers, distributors)
- introduces enforcement mechanisms and penalties comparable to GDPR.
What the EU AI Act does not do is ban AI innovation. Instead, it creates a structured regulatory environment in which AI can be deployed responsibly — particularly in regulated domains such as clinical trials, where data integrity, traceability, and patient protection are paramount.
The EU AI Act: Core Principles And AI Risk Categorization
Under the risk-based approach of the EU AI Act, AI systems are regulated according to how they are used, what decisions they support, and the potential consequences of their outputs, rather than their mere existence as software.
The level of regulatory control, therefore, depends on the context, purpose, and degree of autonomy of the AI system. An AI tool that supports administrative tasks with no impact on regulated decisions will be subject to minimal obligations, while an AI system that influences compliance, safety oversight, or fundamental rights will face significantly stricter requirements.
This approach ensures that regulatory obligations are proportionate to the potential harm an AI system could cause. High-risk uses are tightly governed to protect fundamental rights, health, and safety, while low-risk applications are allowed to operate with fewer constraints, encouraging faster innovation where the impact is limited.
The EU AI Act identifies four categories of risk:
- Unacceptable Risk – AI practices judged to pose a clear threat to health, safety, or fundamental rights are prohibited outright. Examples include manipulative AI, social scoring, and real-time remote biometric identification in publicly accessible spaces.
- High Risk – AI systems that could significantly affect health, safety, fundamental rights, or legal outcomes are permitted but subject to stringent requirements, such as risk management, transparency, human oversight, documentation, conformity assessment, and ongoing monitoring. An example is an AI system supporting recruitment by automatically screening job applications and generating a ranked shortlist of candidates for interview. The AI analyzes CVs, cover letters, and application forms to assess candidates against predefined criteria such as qualifications, experience, skills, and employment history. In this case, the AI system directly influences access to employment, which is a protected legal and social outcome; it can significantly affect individuals' fundamental rights, including nondiscrimination and equal opportunity; and it shapes human decision-making in a legally regulated context, even when final decisions remain formally human-led.
The potential for bias, lack of transparency, or systematic exclusion — even without malicious intent — is sufficient to trigger high-risk classification.
- Limited Risk – AI systems that pose limited potential for harm (e.g., simple chatbots) must meet transparency obligations so that users are informed they are interacting with AI.
- Minimal or No Risk – AI systems with negligible effects on individuals or society are largely unregulated by the AI Act, though best practices still apply. An example might be an organization that uses an AI system to automatically categorize and route internal IT support tickets submitted by employees. The AI analyzes the text of each request to:
  - identify the technical issue type (e.g., password reset, software access, hardware malfunction)
  - assign a priority level based on keywords
  - route the ticket to the appropriate IT support team.
The system operates exclusively within internal IT service management and does not influence employment decisions, performance evaluations, access to rights, or regulatory outcomes.
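To make the contrast concrete, the minimal sketch below shows what such a low-stakes automation might look like. The issue types, keywords, and team names are hypothetical, and a production system would more likely rely on a trained text classifier than on keyword rules; the point is simply that nothing in this logic touches rights, employment, safety, or regulated outcomes.

```python
# Illustrative sketch only: keyword-based triage of internal IT tickets,
# the kind of minimal-risk automation described above.
# Issue types, keywords, priorities, and team names are hypothetical.
TICKET_RULES = {
    "password reset": {"keywords": ["password", "locked out", "reset"], "team": "Identity & Access"},
    "software access": {"keywords": ["license", "install", "access to"], "team": "Application Support"},
    "hardware malfunction": {"keywords": ["laptop", "monitor", "keyboard", "broken"], "team": "Desktop Support"},
}
HIGH_PRIORITY_KEYWORDS = ["urgent", "cannot work", "outage"]

def triage_ticket(text: str) -> dict:
    """Categorize a ticket, assign a priority, and route it to a support team."""
    lowered = text.lower()
    issue_type, team = "other", "Service Desk"  # default routing if no rule matches
    for name, rule in TICKET_RULES.items():
        if any(kw in lowered for kw in rule["keywords"]):
            issue_type, team = name, rule["team"]
            break
    priority = "high" if any(kw in lowered for kw in HIGH_PRIORITY_KEYWORDS) else "normal"
    return {"issue_type": issue_type, "priority": priority, "routed_to": team}

print(triage_ticket("Urgent: I am locked out and need a password reset"))
# {'issue_type': 'password reset', 'priority': 'high', 'routed_to': 'Identity & Access'}
```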
Where does an AI-enabled eTMF fit in this framework? It is important to distinguish between AI as a technology and AI as a regulated function. Not all AI embedded in an eTMF system will automatically qualify as high-risk under the EU AI Act. Remember that risk classification is determined by use case, decision impact, and regulatory function, not by the presence of AI itself.
Consider some practical examples. An AI capability that supports basic administrative tasks, such as improving search functionality, may present limited regulatory risk and therefore be subject to lighter obligations. By contrast, AI capabilities that automatically flag inspection readiness risks or influence oversight decisions directly affect how regulatory compliance is demonstrated and can therefore attract much stricter requirements.
At the same time, the framework provides clarity and flexibility for organizations. It clearly links compliance obligations to defined risk categories, helping companies understand what is expected of them, while remaining adaptable to technological evolution. As AI systems change, mature, or are used in new ways, their risk classification — and the associated obligations — has to be reassessed, ensuring that regulation remains relevant without stifling responsible innovation.
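One way an organization might operationalize this use-case-driven triage internally is sketched below. This is an assumed governance aid, not an implementation of the Act's legal tests: the attributes, tiers, and use-case names are hypothetical, and formal classification would still depend on regulatory guidance and context.

```python
# Illustrative sketch only: recording an internal, use-case-driven risk triage
# of eTMF AI capabilities. Tiers are working assumptions for governance purposes,
# not official EU AI Act classifications.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    influences_regulated_decisions: bool   # e.g., inspection readiness, oversight actions
    relied_upon_in_regulated_process: bool

def indicative_risk_tier(use_case: AIUseCase) -> str:
    """Assign a provisional tier based on decision impact, not on the presence of AI."""
    if use_case.influences_regulated_decisions or use_case.relied_upon_in_regulated_process:
        return "treat as high-risk until formally assessed"
    return "limited/minimal risk - lighter obligations expected"

portfolio = [
    AIUseCase("document search assistance", False, False),
    AIUseCase("automated document classification and filing", False, True),
    AIUseCase("inspection-readiness risk flagging", True, True),
]

for uc in portfolio:
    print(f"{uc.name}: {indicative_risk_tier(uc)}")
```

Recording the triage in a structured form like this also makes it straightforward to rerun the assessment whenever a capability's autonomy, reliance, or intended use changes, which is exactly the reassessment the Act expects.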
Why The EU AI Act Matters For eTMF
Forget the TMF as a passive repository of documents: from a regulatory perspective, it is the primary structured evidence base demonstrating that a clinical trial has been planned, conducted, monitored, and reported in accordance with ICH-GCP, applicable regulatory requirements, and ethical standards. Regulators assess compliance by reviewing what is documented in the TMF. For this reason, the TMF acts as the proxy for trial conduct: if an activity is not adequately documented in the TMF, regulators may conclude that it did not occur, or that it occurred without appropriate control.
When AI is embedded into an eTMF system, it begins to actively shape this regulatory evidence base. At present, most AI capabilities in eTMF systems focus on automated document classification and filing, but given how quickly AI is evolving, the next generation of AI-enabled eTMF systems will be able to perform the following (see the sketch after this list):
- metadata extraction and population
- detection of missing, late, or inconsistent documentation
- risk scoring of TMF completeness or quality
- pattern identification and analysis
- predictive signals for inspection readiness.
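To see why such capabilities carry regulatory weight, the sketch below illustrates, in deliberately simplified form, the kind of missing-document detection and completeness scoring they perform. The artifact names and weights are hypothetical simplifications; real eTMF systems work from study-specific expected-document lists and far richer models. What matters is that the output feeds directly into judgments about TMF quality and inspection readiness.

```python
# Illustrative sketch only: detect missing expected TMF artifacts and produce
# a simple weighted completeness score. The expected list, filed documents,
# and weights are hypothetical.
EXPECTED_ARTIFACTS = {
    "signed protocol": 1.0,
    "IRB/IEC approval": 1.0,
    "signed investigator CV": 0.5,
    "monitoring visit report": 0.8,
}

def tmf_completeness(filed: set[str]) -> dict:
    """Flag missing expected artifacts and compute a weighted completeness score."""
    missing = [a for a in EXPECTED_ARTIFACTS if a not in filed]
    total = sum(EXPECTED_ARTIFACTS.values())
    present = total - sum(EXPECTED_ARTIFACTS[a] for a in missing)
    return {"missing": missing, "completeness_score": round(present / total, 2)}

print(tmf_completeness({"signed protocol", "IRB/IEC approval"}))
# {'missing': ['signed investigator CV', 'monitoring visit report'], 'completeness_score': 0.61}
```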
These functions go beyond operational efficiency. They influence decisions such as:
- whether a study is considered inspection-ready
- whether a site or country is flagged as high risk
- whether oversight actions are triggered or deprioritized
- whether gaps in patient safety documentation are detected early or missed.
From a regulatory standpoint, this moves AI in eTMF into the realm of decision support for GCP-critical processes, which is why regulators increasingly expect organizations to demonstrate control, transparency, and human oversight over AI-supported TMF activities. If an AI system influences the structure, quality, completeness, prioritization, or interpretation of TMF content, it directly influences how compliance and patient protection are demonstrated. Under the EU AI Act, high-risk classification is not determined by whether a system is labeled "administrative," "supportive," or "assistive." It is determined by whether the system:
- supports or influences decisions affecting compliance, safety, or fundamental rights
- is relied upon in regulated processes
- alters human behavior, prioritization, or oversight actions.
An AI-enabled eTMF becomes high-risk when its outputs are used to make or defer decisions, including:
- declaring inspection readiness
- prioritizing or deprioritizing oversight
- assessing TMF quality or risk levels
- identifying (or failing to identify) gaps in safety documentation.
At that point, the AI system is no longer operational support. It is decision support for regulated outcomes.
GCP is based on the principle that:
- inability to demonstrate oversight is itself a compliance failure
- latent risk is sufficient to constitute noncompliance.
If AI:
- masks quality deficiencies
- dilutes safety-relevant signals
- produces false assurance of control
…then the sponsor cannot demonstrate that:
- safety risks were identified in time
- oversight was adequate
- patient protection was actively managed.
High-risk classification does not require actual harm, only credible potential to affect protected interests — which include patient safety, rights, and data integrity.
Conclusion
Integrating AI into eTMF systems marks a structural shift in how clinical trial compliance, oversight, and patient protection are demonstrated. As this article has shown, the EU AI Act does not introduce an unfamiliar regulatory philosophy for the life sciences sector. Rather, it extends well-established principles of risk-based oversight, accountability, and life cycle governance to a new and powerful object of control: AI-driven decision support.
When AI is embedded in an eTMF, it no longer operates at the periphery of trial operations. It actively shapes the regulatory evidence base on which inspectors rely to assess GCP compliance, sponsor oversight, and the protection of participant rights and safety. In use cases where AI influences TMF quality assessment, completeness evaluation, inspection readiness, or oversight prioritization, it meets all functional criteria of a high-risk AI system under the EU AI Act.
This first part of the article establishes why AI-enabled eTMF systems fall within the scope of the EU AI Act and when they should be treated as high-risk. The next and more practical question is therefore unavoidable: What must organizations do about it?
In Part 2, we will move from regulatory interpretation to operational implementation. We will examine:
- the concrete EU AI Act compliance requirements applicable to high-risk AI in eTMF contexts
- the roles and responsibilities of key stakeholders, including sponsors, service providers, and technology vendors
- practical implementation steps to achieve compliance with the EU AI Act.
Understanding the regulatory rationale is the foundation. Translating it into compliant, inspection-ready practice is the real challenge — and the focus of what comes next.
About The Author:
With over 20 years of experience in the pharmaceutical industry, Donatella Ballerini is a senior clinical quality and documentation expert specializing in GCP compliance, eTMF governance, and inspection readiness. After establishing a strong foundation in rare diseases and neonatology at Chiesi Farmaceutici, she progressed to leadership roles where she spearheaded the transition from paper to electronic TMF models and headed the GCP Compliance and Clinical Trial Administration Unit. Currently, as the head of eTMF Services at Montrium and an independent GCP consultant, she leads global consultancy in process optimization and TMF risk management. Donatella contributes to the CDISC TMF Standard Model, while also lecturing at the University of Parma and authoring several industry-leading e-books. Recently, she has expanded her impact by leading AI implementation projects to ensure the ethical and compliant adoption of new technologies within clinical operations.