Advancing the Credibility of AI Models in Drug and Biologic Development: FDA's Proposed Framework

January 21, 2025

The U.S. Food and Drug Administration (FDA) has released draft guidance on the use of AI in drug and biological product development, emphasizing model reliability and safety. This marks the agency's second effort to provide AI-specific recommendations: earlier guidance addressed developers of AI-enabled medical devices, while this document focuses on AI's role in drug and biologic development.

Transforming Every Stage of Drug Development

AI technologies are being increasingly adopted across all stages of drug development, from non-clinical research and clinical trials to manufacturing and post-market surveillance. Recognizing this, the FDA's Center for Drug Evaluation and Research (CDER) has worked to ensure that AI tools contribute meaningfully to the development of safe and effective treatments.

This draft guidance, titled ‘Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products’, builds on years of collaboration with industry experts, academic researchers, and technology developers. Since 2016, the FDA has reviewed over 500 AI-related submissions, highlighting the growing applications of AI in the field. The draft also reflects the collective efforts of the FDA’s human and animal medical product centers, the Oncology Center of Excellence, and the Office of Combination Products. Together, these entities aim to create consistent and clear guidelines for AI’s application in drug and biologic development.

Key Aspects of the Draft Guidance

The FDA’s draft guidance introduces several key principles to support the development and evaluation of AI models for regulatory decision-making.

The draft guidance emphasizes the need to define the context of use for AI models. This involves identifying the specific application of a model, such as analyzing clinical trial data, optimizing manufacturing processes, or predicting patient outcomes. By clearly establishing the purpose, the FDA can better assess how robust and credible the AI model must be to meet regulatory standards.
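
As an illustration only, a context-of-use statement could be captured as a structured record. The fields below are hypothetical and are not taken from the FDA guidance; they simply show the kind of information a sponsor would pin down up front:

```python
from dataclasses import dataclass

# Hypothetical structure for a "context of use" statement.
# Field names are illustrative, not prescribed by the FDA guidance.
@dataclass
class ContextOfUse:
    question_of_interest: str   # the regulatory question the model informs
    model_role: str             # what the model specifically does
    development_stage: str      # e.g., "clinical trial", "manufacturing"
    decision_scope: str         # how the model's output feeds the decision

cou = ContextOfUse(
    question_of_interest="Which trial participants need intensive safety monitoring?",
    model_role="Predict risk of a known adverse event from baseline covariates",
    development_stage="clinical trial",
    decision_scope="Model output triggers additional monitoring, not exclusion",
)
print(cou)
```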

Central to the guidance is the concept of model credibility: ensuring that AI models perform reliably and accurately for their intended use. Credibility assessment is guided by factors such as the relevance of the training dataset and the transparency of the model, organized within a risk-based framework. Under the FDA’s proposed framework, high-risk applications of AI require more rigorous testing, while lower-risk uses may involve less extensive evaluation. This approach balances the need for safety with the flexibility to foster innovation.
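
The draft guidance frames model risk in terms of how much the model's output influences a decision and how consequential that decision is. A minimal sketch of that idea follows; the tier labels, scores, and thresholds are our own invention, not values from the guidance:

```python
# Hypothetical sketch of a risk-based tiering scheme. The two axes
# (model influence, decision consequence) echo the draft guidance's
# framing; the scoring and tier cutoffs are illustrative only.

INFLUENCE = {"low": 0, "medium": 1, "high": 2}    # weight of model output in the decision
CONSEQUENCE = {"low": 0, "medium": 1, "high": 2}  # impact if the decision is wrong

def risk_tier(influence: str, consequence: str) -> str:
    """Map model influence and decision consequence to a risk tier."""
    score = INFLUENCE[influence] + CONSEQUENCE[consequence]
    if score >= 3:
        return "high"    # e.g., model output alone drives a patient-safety decision
    if score == 2:
        return "medium"
    return "low"         # e.g., model output is one of many supporting inputs

# A model that strongly drives a high-consequence decision lands in the
# highest tier and would face the most rigorous credibility evaluation.
print(risk_tier("high", "high"))   # -> "high"
print(risk_tier("low", "medium"))  # -> "low"
```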

Responsibilities for Sponsors

The term “sponsors” refers to organizations, companies, or individuals submitting applications for drug or biological product approval to the FDA. Under the new guidance, sponsors are expected to:

  • Define the AI model’s purpose: Clearly articulate how the AI model will be used.
  • Evaluate performance: Sponsors are expected to rigorously test AI models for accuracy, reliability, and freedom from bias. Performance metrics should align with the model’s intended use and be validated against relevant datasets (a sketch of this step follows the list).
  • Document comprehensively: Sponsors must provide the FDA with thorough records of the AI model’s development, including its design, training datasets, evaluation processes, and testing outcomes. Such transparency is critical for regulatory review.
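
To make the evaluation step concrete, here is a minimal, hypothetical sketch using scikit-learn. The dataset, model choice, metrics, and acceptance threshold are all placeholders of our own choosing, not anything the FDA prescribes:

```python
# Hypothetical sketch: validate a model against a held-out dataset and
# record the results. The model, metrics, and the 0.85 threshold are
# illustrative placeholders, not FDA-prescribed values.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real clinical dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Metrics chosen to match the model's intended use (here, binary risk prediction).
report = {
    "accuracy": accuracy_score(y_test, model.predict(X_test)),
    "auc": roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]),
    "n_test": len(y_test),
}
print(report)

# An acceptance criterion would be pre-specified for the context of use;
# the 0.85 bar below is purely illustrative.
assert report["auc"] >= 0.85, "Model fails the pre-specified performance bar"
```

In practice, the printed report and the pre-specified acceptance criteria would form part of the documentation package submitted to the FDA.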

Sponsors are encouraged to engage with the FDA early in the development process to align expectations and ensure compliance with regulatory requirements. Early consultation can streamline the submission process and help address potential concerns proactively.

Broader Implications for AI in Healthcare

This draft guidance aligns with other FDA initiatives, such as the agency's recent draft guidance on AI-enabled medical devices. By addressing AI across a range of healthcare applications, the FDA is demonstrating a comprehensive approach to regulating emerging technologies.

Transparency is a key principle underpinning these efforts. The FDA aims to provide clear guidelines that promote trust and understanding among stakeholders. By collaborating across its medical product centers, including those focused on drugs, biologics, and medical devices, the FDA hopes to encourage the responsible and ethical use of AI in healthcare.

As AI technologies evolve, the FDA remains committed to updating its policies. These updates will strike a balance between fostering innovation and maintaining the safety and efficacy of medical products, ensuring that AI continues to benefit patients and healthcare providers alike.