Three Considerations for AI Healthcare Application Developers and Users

February 01, 2021

The applications of artificial intelligence (AI) are seemingly innumerable, with reported benefits as prolific as the technology behind them, including in the healthcare space. To keep pace, the US Food and Drug Administration (FDA) has been issuing evolving guidance and regulation around AI and machine learning (ML) technologies that developers and users of healthcare applications must consider.

Here we highlight FDA’s current regulatory scheme for AI/ML-based software, potential FDA enforcement discretion that may apply, and recent FDA developments impacting AI/ML technologies.

  • FDA’s regulation of digital health products (including AI/ML) has evolved over the last several decades and will continue to evolve as the technology expands. FDA recognizes that its current tools may not be suitable for regulating software that encompasses AI/ML and is working with industry and consumers to establish a regulatory program for these devices.
  • Because a large swath of software products may meet the definition of a “medical device,” FDA is using a risk-based approach to focus its oversight on software that poses a significant risk to patient health and safety.
  • FDA has issued several guidance documents on the regulation of digital health. Each guidance document describes regulatory pathways that a manufacturer can potentially use in marketing its software. The challenge lies in determining which pathway applies to a given software product, and what risks and rewards that pathway may hold for the manufacturer.

If you are interested in Software As a Medical Device: US FDA Regulatory and Legal Framework, part of our Artificial Intelligence Boot Camp, we invite you to subscribe to Morgan Lewis publications to receive updates on trends, legal developments, and other relevant areas.