On June 2, 2025, FDA announced the launch of Elsa, a generative AI tool designed to “help employees—from scientific reviewers to investigators—work more efficiently.” Per FDA, the tool “modernizes agency functions and leverages AI capabilities to better serve the American people.” While Elsa may add efficiencies to FDA’s review processes, it also raises a number of questions for regulated industry.
What Is Elsa and How Will It Be Used?
According to FDA, Elsa is a high-security, large language model–based AI tool hosted in the GovCloud environment “that offers a secure platform for FDA employees to access internal documents while ensuring all information remains within the agency.” The model is already being used by the agency to “accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.”
It can also be used to summarize adverse events in support of product safety reviews, perform label comparisons, and generate code to assist in the development of databases for nonclinical applications. FDA further plans to integrate the use of AI into more of the agency’s functions: in its announcement of the initial pilot project, FDA stated that the tool will “allow scientists and subject-matter experts to spend less time on tedious, repetitive tasks that often slow down the review process.”
Importantly, FDA has conveyed that “[t]he models do not train on data submitted by regulated industry, safeguarding the sensitive research and data handled by FDA staff.” This is significant as specific FDA rules apply to the agency’s ability to use application information and there are confidentiality rules for both pre-market and post-approval products.
Implications for Regulated Industry
While the use of AI may help to build efficiencies into FDA processes, it also gives rise to a number of considerations for regulated industry:
Strategies for Responding to AI-Generated Review Findings
FDA has stated that AI will be used to assist in scientific and safety evaluations of products, raising the questions of to what degree AI-generated findings (and which findings) will be vetted by FDA staff and to what extent the AI’s performance will be monitored. These questions matter because AI is not perfect: models are known to hallucinate, and they are also known to drift, meaning their performance changes over time.
Accordingly, as AI is used by FDA staff in the review of data and information submissions, including in support of marketing authorizations, companies must be ready to respond to AI-generated analyses.
If the AI flags a concern or discrepancy, sponsors should be prepared to explain or contest it as appropriate. In practice, this means providing clear, well-documented data so that any AI conclusion can be traced and validated, along with clear explanations for why a finding may lack merit. Companies should also begin tracking whether the accuracy or quality of FDA communications declines. That record can help identify areas where the AI’s performance is lacking and serve as an important data point in responses to FDA inquiries.
Submission Format and Data Quality
FDA’s AI tools will rely on parsing electronic documents. To support an efficient review, especially given FDA’s recent staffing limitations, companies should ensure submission files are well structured and complete so the AI application can interpret them. Companies should pay careful attention to data formatting and labeling: inconsistencies or missing metadata may be more easily flagged by the AI, or may compromise the review itself.
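To make this concrete, a sponsor could run a simple completeness check over submission metadata before filing. The sketch below is purely illustrative: the required fields and record layout are hypothetical assumptions for this example, not drawn from any FDA or eCTD specification.

```python
# Hypothetical pre-submission check that flags documents with missing
# metadata before files are sent to FDA. The field names below are
# assumptions for illustration, not an FDA requirement.

REQUIRED_FIELDS = {"study_id", "document_type", "version", "date"}

def find_metadata_gaps(documents):
    """Return (document_type, missing_fields) for each incomplete record."""
    gaps = []
    for doc in documents:
        missing = REQUIRED_FIELDS - doc.keys()
        if missing:
            gaps.append((doc.get("document_type", "unknown"), sorted(missing)))
    return gaps

# Example submission manifest with one incomplete record.
submission = [
    {"study_id": "S-001", "document_type": "protocol",
     "version": "1.0", "date": "2025-01-15"},
    {"study_id": "S-001", "document_type": "clinical-study-report",
     "version": "2.1"},  # "date" is missing
]

print(find_metadata_gaps(submission))  # [('clinical-study-report', ['date'])]
```

Running a check like this before submission surfaces the kinds of gaps an AI-assisted review might otherwise flag, letting the sponsor correct them in advance.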
Review Timeline and Fee Programs
Bringing regulated products to market under the traditional review process can take many years. FDA expects AI to significantly shorten that timeline; however, whether it will remains to be seen. Lingering questions include whether AI-driven efficiencies will actually improve review times and how those efficiencies align with current user-fee performance goals.
Sponsors should watch for FDA guidance and statements on how these new tools interact with fee-funded review commitments and whether (and how) the FDA user-fee programs are modified in the upcoming reauthorization cycle. In any case, FDA intends the AI tool to speed up decisions, so companies may see shorter review cycles, requiring them to be ready to reply quickly to FDA inquiries and information requests.
FDA’s Broader AI Goals
The launch of Elsa is part of a broader “AI-forward” strategy. In January 2025, the agency issued a draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biologics. FDA has also issued a number of guidances for medical devices, including a draft guidance on lifecycle management and marketing submission recommendations for AI-enabled device software functions. These guidances and internal programs, among other FDA AI efforts, signal that FDA is committed to integrating AI both into its own processes and into its oversight of regulated industry.
Key Takeaways
FDA emphasizes that AI will augment, not replace, human experts; reviewers remain responsible for directing the AI assistant and verifying its outputs. Going forward, however, sponsors should account for FDA’s use of AI during product reviews. By anticipating these changes, manufacturers can adapt their submission strategies.