A recently issued Food and Drug Administration (FDA) Warning Letter citing a drug manufacturer for improper use of artificial intelligence (AI) suggests FDA’s scrutiny of AI is expanding. This is not FDA’s first Warning Letter related to AI, but prior letters focused on issues surrounding the regulatory status of AI systems themselves, namely whether a given AI system was a medical device subject to FDA oversight. This latest Warning Letter, however, indicates FDA is now scrutinizing the use of AI in other contexts, such as regulated product manufacturing and quality (in this case, for pharmaceuticals).
As life sciences companies rapidly deploy AI across their FDA-regulated business operations, they should bear in mind that they remain fully responsible for any AI-generated outputs and work product, including any errors, omissions, or oversights.
FDA’s Findings: AI Use and Compliance Failures
FDA’s Warning Letter indicates that the drug manufacturer informed FDA that it had used an AI tool to generate “drug product specifications, procedures, and master production or control records” intended to satisfy FDA requirements. FDA cited the company for several failures related to its use of AI, including:
- Failure to ensure that AI-generated documents are adequately reviewed and validated by the company’s quality unit for accuracy and compliance with the relevant current Good Manufacturing Practice (cGMP) requirements
- Overreliance on the AI tool for compliance. In one telling example, company representatives allegedly attributed their lack of awareness of certain process validation requirements to the failure of their AI system to flag such requirements
As noted above, this is the first time FDA has issued a Warning Letter related to a company’s use of AI as a compliance tool, demonstrating that the agency’s focus on AI has expanded beyond AI as a regulated product and that other FDA centers (beyond the Center for Devices and Radiological Health, or CDRH) are also paying attention to AI. This Warning Letter sends an unambiguous message: Reliance on AI is not a defense against regulatory violations. AI can be used as a tool (e.g., in document creation or compliance support), but the ultimate responsibility for compliance lies with the regulated entity.
Implications and Recommendations
For any company deploying AI, this Warning Letter should serve as a wake-up call, not only because FDA is watching, but because it brings to the forefront broader considerations about what it means to appropriately and responsibly deploy AI in a regulated industry.
Three takeaways merit particular attention:
- Human oversight is non-negotiable: AI can be a valuable tool for enhancing compliance, but it cannot act as a substitute for the expertise and judgment of qualified human professionals. Any AI-generated compliance documents, procedures, or recommendations must be thoroughly reviewed and approved by authorized personnel in accordance with applicable laws and FDA regulations.
- Accountability cannot be outsourced to technology: Manufacturers remain accountable for compliance failures, even when those failures stem from technology-driven processes. Companies should critically examine their current use of AI and other automated systems in compliance functions to ensure that appropriate human validation and oversight mechanisms are in place.
- AI governance is a compliance imperative: Companies should ensure that they have robust AI governance frameworks, including clear policies, defined roles, and meaningful training programs, to guide the appropriate and effective use of AI across their organization.
Conclusion
The recent Warning Letter demonstrates that FDA is scrutinizing companies’ use of AI and serves as a reminder of the risks associated with overreliance on AI. As AI adoption accelerates across the pharmaceutical and life sciences sectors, companies must ensure that their personnel are exercising proper judgment instead of deferring unreservedly to AI-generated outputs.
Ultimately, the lesson of this Warning Letter is straightforward: FDA is watching and will continue to hold companies and their personnel responsible for regulatory compliance. It is the company and its employees that will bear the consequences should something go wrong. AI is a tool, and it should be used to support, rather than supplant, human oversight and expertise.