Smarter Tools, Stricter Rules: EU AI Regulatory Webinar Part 1

August 12, 2025
We apologize for the recording issues during the first webinar. Please find the meeting notes below.
Key takeaways
- The EU AI Act applies to machine-based systems that operate with a degree of autonomy, may exhibit adaptiveness, and generate outputs that influence environments or people
- Companies must implement harmonized governance frameworks that align with both GDPR and EU AI Act requirements
- Human oversight is mandatory for high-risk AI systems, requiring documented evidence that qualified individuals can understand and override AI decisions
- Explainability is a legal requirement under both the EU AI Act (Articles 13 and 50) and GDPR (Articles 12-15 and 22)
- Risk classification determines compliance requirements: high-risk AI systems face stricter obligations
- Data quality, traceability, and representativeness must be embedded from the ground up in AI system design
Discussed topics
EU AI Act Overview and Relevance to Clinical Trials
- Jeanette: The EU AI Act wasn't written specifically for life sciences, but its core principles (risk classification, transparency, oversight) are highly relevant to clinical trials because of the personal health data they handle
- Cindy: Article 3 defines what constitutes an AI system under the regulation: a machine-based system that operates with a degree of autonomy and generates outputs that can influence environments or people
- Nikoo: Although the AI Act formalizes many principles already covered by GCP and GDPR, it pushes into new territory, especially for high-risk AI systems
AI Act and GDPR as Combined Compliance Framework
- Jeanette: These regulations form a shared compliance framework, with GDPR ensuring proper data handling and the AI Act structuring how that data is processed
- Nikoo: Treating the two regulations as separate silos risks redundant processes, conflicting documentation, and compliance gaps
- Cindy: Both regulations address risk but serve different purposes: the AI Act focuses on system risk, while GDPR addresses personal-data risk
Data Governance and System Design
- Nikoo: Article 10 mandates that training, validation, and testing data must be relevant, representative, free of errors, complete, and appropriate to the intended purpose
- Jeanette: Modern AI audit trails should extend beyond traditional compliance logging by embedding AI-specific details, including training versions, input prompts, and human review checkpoints (see the sketch after this list)
- Nikoo: Companies must implement a harmonized, risk-based, and contractually enforceable governance model with external partners
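To make the data-governance and audit-trail points above more concrete, here is a minimal sketch in Python. It is illustrative only and was not presented in the webinar: the metrics, field names, and the `AiAuditRecord` structure are assumptions, not a prescribed implementation of Article 10 requirements.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from hashlib import sha256
from typing import Optional

import pandas as pd  # assumed available; any tabular tooling would do


def article10_style_checks(df: pd.DataFrame, required_columns: list) -> dict:
    """Rough data-quality signals in the spirit of Article 10 (completeness,
    error-freeness, basic representativeness). Metrics here are illustrative
    assumptions, not regulatory criteria."""
    present = [c for c in required_columns if c in df.columns]
    return {
        "all_required_columns_present": len(present) == len(required_columns),
        "missing_value_rate": float(df[present].isna().mean().mean()) if present else 1.0,
        "duplicate_row_rate": float(df.duplicated().mean()) if len(df) else 0.0,
        "row_count": int(len(df)),
    }


@dataclass
class AiAuditRecord:
    """One AI-specific audit-trail entry: model/training version, the input
    prompt, a hash of the generated output, and the human review checkpoint."""
    system_name: str
    model_version: str                      # training/model release identifier
    input_prompt: str
    output_text: str
    reviewer: Optional[str] = None          # qualified human who reviewed the output
    review_decision: Optional[str] = None   # e.g. "approved" or "overridden"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def output_hash(self) -> str:
        # Store a hash when the full output is too sensitive to log verbatim
        return sha256(self.output_text.encode("utf-8")).hexdigest()
```

In practice, a record of this kind would be written at each generation and review step so that, for any given output, the model version, prompt, and human decision can be reconstructed on request.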
Governance Blind Spots in AI Document Creation
- Jeanette: For informed consent forms, additional consent may be needed for data processed by AI systems; reproducibility and "memory" issues also present challenges for protocol generation
- Cindy: AI-driven simplification can go too far, sacrificing scientific precision and regulatory compliance; hidden bias and intellectual-property risks are also concerns
Human Oversight and Explainability
- Nikoo: Meaningful human oversight requires documented evidence that qualified individuals understand the system, can influence decisions, and override AI at key points
- Nikoo: Sponsor responses to questions about AI decisions must be technically accurate, regulator-ready, patient-safe, and aligned with both GCP and EU AI Act obligations
- Jeanette: Explainable AI reveals its logic through feature-attribution methods (SHAP, LIME), decision-tree visualization, audit logs, and system-generated QA reports (see the sketch below)
- Cindy: Explainability is a legal requirement under both the EU AI Act (Articles 13 and 50) and GDPR (Articles 12-15 and 22)
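As a concrete illustration of the feature-attribution methods mentioned above, the following Python sketch uses SHAP with a generic scikit-learn classifier on a stand-in public dataset. It is an assumption-laden example for orientation only, not the presenters' implementation or a validated clinical setup.

```python
# Minimal feature-attribution sketch (SHAP) on a hypothetical tabular model.
# Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in public dataset; in a trial context this would be governed study data
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer attributes each individual prediction to the input features
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Per-prediction attributions can be logged next to the decision as
# explainability evidence, or summarized for human reviewers
shap.summary_plot(shap_values, X_test, show=False)
```

Attribution logs and plots of this kind are one way to support the transparency expectations in Articles 13 and 50 and in GDPR; which artifacts actually satisfy a regulator will depend on the system's risk class and intended use.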
Risk Classification and Implications
- Cindy: Determining whether an AI system is high-risk or low-risk is critical because it drives the compliance requirements; Annex III and Article 6 work together to determine the risk classification
- Jeanette: Underclassifying a system can lead to insufficient validation, weak change control, and compliance blind spots
- Nikoo: Underclassification risks include insufficient validation, weaker change control, holes in data integrity, inadequate vendor oversight, and ultimately weaker control over the AI system
Follow-up Webinar
- Lisa: Conducted a poll showing 80% of attendees are interested in a part 2 webinar
- Lisa: The follow-up session will be scheduled within the next 1-2 months
Challenges
- Ensuring proper risk classification of AI systems to avoid underclassification and subsequent compliance issues
- Implementing meaningful human oversight that meets regulatory requirements while maintaining efficiency
- Aligning data governance frameworks with both GDPR and EU AI Act when working with external partners
- Addressing governance blind spots in AI-generated documentation, particularly for informed consent and protocols
- Providing sufficient explainability of AI decisions to regulators and internal stakeholders
Action items
- All attendees
- Complete feedback form to help improve future webinars
- Send any additional questions via email to Lisa or Stacey
- Request training certificates if needed
- Webinar hosts
- Schedule part 2 of the EU AI Regulatory webinar within the next 1-2 months
- Distribute slide deck and feedback form to attendees
- Prepare responses to any questions submitted via email