Safe Use of Large Language Models (LLMs) in Regulation

The EMA sets guiding principles for large language models (LLMs) in regulatory science. Learn practical safeguards and how the MyRBQM® Portal aligns.

Safe Use of Large Language Models (LLMs) in Regulatory Science

The European Medicines Agency (EMA) has issued comprehensive guidance on the safe, responsible and effective use of large language models (LLMs) in medicines regulation. As AI-driven tools like GPT and other domain-trained models move from experimental to enterprise use, regulators emphasize that trust, transparency and governance are now non-negotiable.

For study teams, this alignment means they can rely on MyRBQM® Portal to support decisions transparently, consistently, and in line with modern expectations for trustworthy and explainable AI.

Why it matters:

LLMs can transform tasks such as document review, data summarization and regulatory knowledge mining. But without proper controls, they risk generating misleading outputs, mishandling sensitive data or creating audit gaps. The EMA underlines that human oversight, prompt governance and traceable decision logic must guide LLM deployment.

Key Principles for Responsible Use:

  • Understand the model’s nature (open source vs proprietary), training limits and deployment scope. 
  • Ensure input data is protected and prompts are engineered to avoid bias or inadvertent disclosure. 
  • Require expert review of outputs to validate accuracy, relevance and compliance. 
  • Maintain audit logs of AI-driven workflows, decisions and governance steps. 
  • Establish ongoing training and knowledge sharing, fostering a culture of safe AI adoption. 
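To make the last three principles concrete, here is a minimal, illustrative sketch of what a governed LLM step might look like in code. All names (`governed_llm_call`, the stubbed model and reviewer functions) are hypothetical, not part of any EMA guidance or the MyRBQM® Portal; the point is simply that the model call, the expert review, and the audit entry travel together as one controlled workflow.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class AuditRecord:
    """One traceable entry per AI-assisted step."""
    timestamp: str
    prompt: str
    model_output: str
    reviewer: str
    approved: bool

def governed_llm_call(prompt: str,
                      llm: Callable[[str], str],
                      review: Callable[[str], tuple],
                      audit_log: list) -> Optional[str]:
    """Run an LLM task inside a controlled workflow:
    the output is gated by expert review, and every step
    (prompt, output, reviewer decision) is logged for audit."""
    output = llm(prompt)                 # model generates a draft
    reviewer, approved = review(output)  # human expert validates it
    audit_log.append(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        prompt=prompt,
        model_output=output,
        reviewer=reviewer,
        approved=approved,
    ))
    # Unapproved output never leaves the workflow
    return output if approved else None

# Stubs standing in for a real model endpoint and a human reviewer
fake_llm = lambda p: f"Draft summary for: {p}"
fake_review = lambda out: ("j.doe", "Draft summary" in out)

log: list = []
result = governed_llm_call(
    "Summarize protocol deviations, site 042", fake_llm, fake_review, log)
print(json.dumps(asdict(log[-1]), indent=2))
```

The design choice worth noting is that the audit record is written regardless of the review outcome, so rejected outputs remain traceable rather than silently discarded.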

Implications for Clinical Trial Oversight:

In the world of clinical trials, oversight is evolving from site visits to data-driven, cross-functional decision frameworks. The guidelines for LLMs align with the broader shift represented by ICH E6(R3), where quality must be built in and decisions must be traceable. When LLMs assist in drafting monitoring summaries, generating risk registers or supporting queries, they must be embedded in controlled workflows, not used in isolation.

LLMs offer real operational advantages — but only when they are governed, audited and human-anchored. Organisations that adopt them with clarity and structure will gain a competitive edge in regulatory readiness and operational agility. 

See MyRBQM® Portal in Action

Explore how MyRBQM® Portal integrates AI-friendly workflows into oversight frameworks, ensuring LLM-enabled tools support (rather than replace) decision-makers. 

Stay Informed with Us

MyRBQM® Portal Onboarding Plan

Download the MyRBQM® Portal Onboarding Plan — a realistic 3–6-month roll-out blueprint covering pilots, integrations, governance, validation, and hypercare. Adaptable to in-house, hybrid, and outsourced delivery models....

AI-Driven Predictive Analytics in Risk-Based Monitoring

Discover how AI-driven predictive analytics is reshaping risk-based monitoring under ICH E6(R3), enabling earlier intervention and stronger oversight. ...

Media Inquiries

Need a quote, speaker, or more info about Cyntegrity? Reach out directly to our media contact for timely assistance.
