FDA Draft Guidance on Considerations for AI Use

05/21/2025

In January 2025, the US Food and Drug Administration (FDA) published a draft guidance titled “Considerations for the Use of Artificial Intelligence (AI) to Support Regulatory Decision-Making for Drug and Biological Products” for trial sponsors and other stakeholders intending to use AI to generate data and clinical evidence. AI in this guidance is defined as a machine-based system that can make predictions, recommendations, or decisions. In a clinical research setting, AI can be an effective tool for streamlining administrative processes such as patient recruitment and selection, drafting essential documents like trial protocols and study reports, and collecting and managing data.

With these beneficial uses also come challenges and hazards to data quality that should be considered before implementing AI, such as the potential introduction of bias from the datasets the AI model is trained on (the training process known as machine learning). An AI model continues to learn as new data is fed in, and this learning can cause the model’s performance to change over time, a phenomenon known as data drift, which needs to be monitored throughout a trial’s life cycle. AI models should be fit for use, meaning they are trained only on datasets relevant to the research being conducted. The AI model should also be credible, or trustworthy, for a particular context of use (COU), which defines the specific role and scope of the AI model being used.
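The guidance does not prescribe how data drift should be monitored. As a minimal, hypothetical sketch (the accuracy metric, the batch of recent data, and the 5% tolerance are our illustrative choices, not from the guidance), a sponsor might periodically compare a model’s performance on newly collected, adjudicated trial data against the baseline established during validation:

```python
# Illustrative sketch only: the FDA guidance does not prescribe a
# drift-monitoring method. Metric, batch size, and threshold are hypothetical.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(baseline_accuracy, recent_predictions, recent_labels,
                    tolerance=0.05):
    """Flag possible data drift when recent performance falls more than
    `tolerance` below the accuracy established during model validation."""
    recent_accuracy = accuracy(recent_predictions, recent_labels)
    drifted = recent_accuracy < baseline_accuracy - tolerance
    return recent_accuracy, drifted

# Example: a model validated at 92% accuracy, re-checked mid-study on a
# new batch of adjudicated trial data.
recent_acc, drifted = check_for_drift(
    baseline_accuracy=0.92,
    recent_predictions=[1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    recent_labels=[1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
)
print(f"recent accuracy: {recent_acc:.2f}, possible drift: {drifted}")
```

In practice, the monitored quantity would depend on the model’s COU; the point is simply that detecting drift requires a predefined baseline, a monitoring cadence, and a threshold for action.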

The considerations for AI use center on a risk-based credibility framework, which is broken down into seven steps:

  1. Define the question of interest that will be addressed by the AI model: The guidance provides hypothetical examples of scenarios and the corresponding question of interest for each. It also advises that all evidentiary sources (e.g., in vitro testing, in vivo animal testing) used in conjunction with AI models to answer the question be identified and included in the following two steps.

  2. Define the COU for the AI model: The COU defines the specific role and scope of the AI model used to address the question of interest, detailing what will be modeled and how outputs will be used. This step should also include a statement about the evidentiary sources. Examples are also provided in the guidance.

  3. Assess the AI model risk: Model risk is the possibility that the AI model output may lead to an incorrect decision that results in an adverse outcome for a trial participant. Model risk is determined by assessing model influence, which is how much the AI output contributes to a decision compared to other evidentiary sources, and decision consequence, which is the significance of an adverse outcome resulting from an incorrect decision (a hypothetical illustration of combining these two factors appears after this list).

  4. Develop a plan to establish AI model credibility within the COU: Also referred to as a credibility assessment plan, this plan should include:

  • A description of the model development process, and

  • A description of the model evaluation process.

This plan should be shared with the FDA to ensure adequacy.

  5. Execute the plan

  6. Document the results of the credibility assessment plan and discuss deviations from the plan: These results should be included in a report that can be made available to the FDA upon request.

  7. Determine the adequacy of the AI model for the COU: At this step, the sponsor should review the report and adjust the AI model accordingly.
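The guidance describes model risk (step 3) qualitatively, as a function of model influence and decision consequence, rather than prescribing numeric scores. As a hypothetical illustration (the level names and the influence-by-consequence matrix below are invented for this sketch), the two factors can be combined in a simple lookup table:

```python
# Hypothetical illustration of step 3: combining model influence and decision
# consequence into a risk tier. The level names and the matrix itself are
# invented for this sketch; the FDA guidance does not prescribe numeric scoring.

LEVELS = ("low", "medium", "high")

# Rows: model influence (how much the AI output drives the decision relative
# to other evidentiary sources). Columns: decision consequence (significance
# of an adverse outcome if the decision is wrong).
RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",
    ("low", "high"): "medium",   ("medium", "low"): "low",
    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",
    ("high", "high"): "high",
}

def model_risk(influence: str, consequence: str) -> str:
    """Look up the risk tier for a given influence/consequence pairing."""
    if influence not in LEVELS or consequence not in LEVELS:
        raise ValueError(f"levels must be one of {LEVELS}")
    return RISK_MATRIX[(influence, consequence)]

# Example: the AI output is the primary evidence (high influence) for a dosing
# decision where an error could seriously harm a participant (high consequence).
print(model_risk("high", "high"))  # -> "high"
```

Because the framework is risk-based, a higher risk tier would call for correspondingly more rigorous credibility assessment activities in steps 4 through 6.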

The full guidance is available on the FDA website. Although the comment period has ended, late comments are currently being accepted. The use of AI in the clinical research setting has been increasing in recent years and will certainly continue to grow. To stay up to date on clinical research topics such as this, please visit our website and sign up for our free blog and newsletter.

-The Clinical Pathways Team

Enjoy this blog? Please like, comment, and share with your contacts.