As Prescribed

YOUR GO-TO SOURCE FOR ANALYSIS OF ISSUES AFFECTING THE PHARMA & BIOTECH SECTORS
A recently issued Food and Drug Administration (FDA) Warning Letter citing a drug manufacturer for improper use of artificial intelligence (AI) suggests that FDA's scrutiny of AI is expanding. Although this is not the first FDA Warning Letter related to AI, prior Warning Letters focused on the regulatory status of the AI systems themselves, namely whether a given AI system was a medical device subject to FDA oversight.
FDA released updates to two guidance documents on January 6: General Wellness: Policy for Low Risk Devices (General Wellness) and Clinical Decision Support Software (CDS). FDA did not issue a traditional press release; instead, FDA Commissioner Makary took to social media to announce the updates in a video, touting them as "promot[ing] more innovation with AI and medical devices" and stating that FDA has "a clear lane for medical grade products" but needs "to adapt with the times and be proactive with guidance." Consistent with the Commissioner's messaging, the updates to the General Wellness guidance expand the types of products that qualify for enforcement discretion (i.e., products that do not need to comply with FDA's device requirements). The updates to the CDS guidance, by contrast, do not appear to significantly modify FDA's interpretation of the CDS exemption. Nonetheless, both updated guidance documents signal FDA leadership's willingness to ease regulatory burdens for digital health products and wearables.
On June 2, 2025, FDA announced the launch of Elsa, a generative AI tool designed to "help employees—from scientific reviewers to investigators—work more efficiently." Per FDA, the tool "modernizes agency functions and leverages AI capabilities to better serve the American people." While Elsa may make FDA's review processes more efficient, it also raises a number of questions for regulated industry.