The applications of artificial intelligence (AI) are seemingly innumerable, and the reported benefits are expanding as quickly as the technology itself, including in the healthcare space. To keep pace, the US Food and Drug Administration (FDA) has been issuing evolving guidance and regulation around AI and machine learning (ML) technologies that developers and users of healthcare applications must consider.
Here we highlight FDA’s current regulatory scheme for AI/ML-based software, potential FDA enforcement discretion that may apply, and recent FDA developments impacting AI/ML technologies.
- FDA’s regulation of digital health products (including AI/ML) has evolved over the past several decades and will continue to evolve as the technology expands. FDA recognizes that its current tools may not be suitable for regulating software that incorporates AI/ML and is working with industry and consumers to establish a regulatory program for these devices.
- Because a large swath of software products may meet the definition of a “medical device,” FDA is using a risk-based approach to focus its oversight on software that poses a significant risk to patient health and safety.
- FDA has issued several guidance documents on the regulation of digital health. Each guidance document describes regulatory pathways that a manufacturer can potentially use to market its software. The challenge is determining which pathway applies to a given software product and weighing the potential risks and rewards that pathway presents for the manufacturer.
For more insights on this topic, check out our AI Boot Camp webinar, “Software as a Medical Device: US FDA Regulatory and Legal Framework.”