The risks associated with the growth of AI in the healthcare and life sciences industries, as well as recent federal and state activity and enforcement actions, emphasize the importance of understanding and implementing a robust AI compliance program.
Artificial intelligence (AI) has the potential to transform the healthcare industry by offering groundbreaking possibilities in diagnosis, patient care, operational efficiency, and more. As AI technology is rapidly integrated across the industry, however, this evolution also poses significant legal and regulatory challenges.
Healthcare providers, life sciences companies, and other stakeholders are now forced to consider how best to navigate a complex and quickly evolving landscape where the potential benefit of AI is matched by its risks.
The adoption of AI in the healthcare industry has been bolstered by advancements in predictive analytics, machine learning algorithms, and ambient AI products. From AI-assisted surgery to real-time diagnostic tools, these technologies have the potential to improve patient safety and reduce clinician burnout. For instance, ambient AI has demonstrated immense potential for enhancing patient outcomes and streamlining clinical workflows.
However, if used improperly or without proper monitoring, ambient AI and other AI technologies also pose significant risks (including but not limited to misuse, hallucinations, model degradation, monitoring failures, inadequate oversight, disclosure/transparency concerns, and over- and under-utilization) that could expose the healthcare provider or entity to not only patient care concerns but also potential liability under the False Claims Act (FCA) and other federal and state laws and regulations.
Diagnostic Accuracy
AI-powered diagnostic tools, while innovative, are imperfect and therefore should not replace a healthcare provider’s clinical judgment. Misdiagnoses or delayed diagnoses could threaten patient safety and lead to significant liability exposure for the user. Because these tools rely on probabilistic algorithms, they may also fail to account for rare or complex medical conditions, further underscoring the need for human oversight.
Without continuous validation and monitoring by a human, AI-powered diagnostic tools could generate erroneous outputs, compromising the tool’s reliability and its ultimate effectiveness in clinical settings. As AI continues to evolve, maintaining transparency in how these tools generate diagnostic results and how such results are validated and monitored by clinicians will be critical to preserving trust in the tool and ensuring patient safety.
Failure to Monitor AI Tools
All AI tools used in the healthcare setting, not just those focused on diagnostics, must be regularly monitored by humans to ensure the tool is functioning properly and as intended. Failure to monitor an AI tool can result in undetected errors or deviations from its intended function, which could, in turn, delay critical interventions. For example, an AI tool might hallucinate and generate false or misleading outputs. If such hallucinations are not caught through regular monitoring and are used to diagnose a patient or formulate treatment recommendations, they could lead to harmful interventions.
Additionally, AI tools may degrade in performance over time. Without sufficient monitoring and regular updates when needed, a degraded AI tool can raise patient safety concerns as well as other regulatory compliance issues.
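To illustrate what such monitoring could look like in practice, the following Python sketch, offered purely as a hypothetical example rather than a prescribed standard or any vendor’s actual interface, compares a deployed tool’s recent outputs against clinician-confirmed results and flags the tool for human review when performance drops below a validated baseline. The function names, the accuracy metric, and the 5% tolerance are assumptions made for illustration.

```python
# Illustrative sketch only: a periodic performance check for a deployed AI tool.
# The threshold, metric, and review workflow are hypothetical assumptions, not
# requirements drawn from any statute, regulation, or vendor documentation.

from dataclasses import dataclass
from datetime import date


@dataclass
class PerformanceCheck:
    """Result of comparing recent model performance against a validated baseline."""
    check_date: date
    recent_accuracy: float
    baseline_accuracy: float
    degradation_flagged: bool


def evaluate_recent_performance(
    predictions: list[str],
    clinician_confirmed_labels: list[str],
    baseline_accuracy: float,
    allowed_drop: float = 0.05,  # hypothetical tolerance before escalation
) -> PerformanceCheck:
    """Compare the tool's recent outputs with clinician-confirmed results.

    If accuracy falls more than `allowed_drop` below the validated baseline,
    the check is flagged so a human reviewer can investigate possible
    degradation before the tool continues to inform care decisions.
    """
    if len(predictions) != len(clinician_confirmed_labels) or not predictions:
        raise ValueError("Need a non-empty, matched sample of predictions and labels.")

    correct = sum(
        1 for pred, actual in zip(predictions, clinician_confirmed_labels) if pred == actual
    )
    recent_accuracy = correct / len(predictions)
    flagged = recent_accuracy < (baseline_accuracy - allowed_drop)

    return PerformanceCheck(
        check_date=date.today(),
        recent_accuracy=recent_accuracy,
        baseline_accuracy=baseline_accuracy,
        degradation_flagged=flagged,
    )


if __name__ == "__main__":
    # Hypothetical monthly sample reviewed by clinicians.
    check = evaluate_recent_performance(
        predictions=["sepsis", "no sepsis", "sepsis", "no sepsis"],
        clinician_confirmed_labels=["sepsis", "no sepsis", "no sepsis", "no sepsis"],
        baseline_accuracy=0.90,
    )
    if check.degradation_flagged:
        print("Performance degradation detected; route to human review.")
    else:
        print("Performance within tolerance of validated baseline.")
```

In this sketch, a flagged result does not shut the tool down automatically; it simply routes the question to a human reviewer, consistent with the human-oversight theme discussed throughout this piece.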
Patient Care and Human Oversight
The use of AI in patient care must be accompanied by human oversight; overreliance on AI tools in patient care can lead to critical errors. For instance, ambient AI tools designed to monitor patients’ vitals must themselves be supervised to prevent lapses in care, and an ambient AI tool designed to listen to and take notes on a patient’s interaction with their healthcare provider should not be relied upon to formulate a diagnosis.
Relying solely on an AI tool for clinical decision-making without the additional incorporation of human judgment may lead to a diagnosis that overlooks nuanced patient factors and/or symptoms. As such, users must ensure proper oversight of AI tools, including (but not limited to) critically evaluating and validating the accuracy of all outputs and addressing biases.
Data Integrity
Since AI relies on datasets to function effectively, substandard or biased data can produce flawed outputs, including coding errors, that jeopardize patient safety. For example, an ambient AI tool used to triage emergency patients may under-prioritize certain demographics due to inherent biases in its training data.
These compromises to data integrity not only jeopardize patient safety but could also expose healthcare organizations to potential FCA liability if inaccurate coding leads to improper billing. Additionally, mistakes in a patient’s care that stem from flawed or inaccurate AI-generated data could erode trust among patients and providers, hindering the adoption of otherwise beneficial AI technologies.
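As a purely illustrative sketch, the following Python snippet shows one simple way a compliance team might audit an AI triage tool’s outputs for demographic disparities of the kind described above. The record format, group labels, and 10% disparity threshold are hypothetical assumptions rather than a legal or clinical standard.

```python
# Illustrative sketch only: a simple disparity check on triage prioritization rates.
# The group labels, records, and 10% disparity threshold are invented to show the
# kind of audit a compliance program might run; they are not a legal standard.

from collections import defaultdict


def high_priority_rates(records: list[dict]) -> dict[str, float]:
    """Compute the share of patients flagged high priority within each demographic group.

    Each record is expected to carry a 'group' label and a boolean 'high_priority'
    flag produced by the AI triage tool.
    """
    totals: dict[str, int] = defaultdict(int)
    flagged: dict[str, int] = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["high_priority"]:
            flagged[record["group"]] += 1
    return {group: flagged[group] / totals[group] for group in totals}


def disparity_exceeds(rates: dict[str, float], threshold: float = 0.10) -> bool:
    """Flag the tool for human review if prioritization rates diverge beyond the threshold."""
    return (max(rates.values()) - min(rates.values())) > threshold


if __name__ == "__main__":
    sample = [
        {"group": "A", "high_priority": True},
        {"group": "A", "high_priority": True},
        {"group": "A", "high_priority": False},
        {"group": "B", "high_priority": False},
        {"group": "B", "high_priority": False},
        {"group": "B", "high_priority": True},
    ]
    rates = high_priority_rates(sample)
    print(rates)
    if disparity_exceeds(rates):
        print("Prioritization disparity exceeds tolerance; escalate for bias review.")
```

A real audit would rely on validated demographic categories and statistically appropriate methods; the point of the sketch is only that disparity checks can be routinized and documented as part of a compliance program.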
Coding Errors
Coding errors, which can occur during the development and/or implementation of AI algorithms, may result in faulty decision-making processes, inaccurate predictions, or misinterpretation of medical data. In healthcare settings, such errors can lead to dire consequences, including incorrect diagnoses, inappropriate treatment recommendations, and/or delays in critical interventions. Addressing this risk may require regular testing, validation, and monitoring of the AI tool to ensure it functions as intended.
A certain level of collaboration between a tool’s developers and its target users (i.e., clinicians) may be needed to ensure that algorithms are designed and implemented in a way that aligns with clinical realities and patient needs.
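As one hypothetical illustration of such testing and validation, the sketch below shows a small regression test for a diagnosis-to-billing-code mapping step. The map_diagnosis_to_code function and the expected code pairs are invented for demonstration; an actual validation suite would be built around the real tool with input from its developers and clinician users.

```python
# Illustrative sketch only: a regression test for a hypothetical AI-assisted coding step.
# The map_diagnosis_to_code function and expected ICD-10 pairs are invented for
# demonstration; a real validation suite would be designed around the actual tool.

import unittest


def map_diagnosis_to_code(diagnosis: str) -> str:
    """Hypothetical post-processing step that maps a diagnosis string to a billing code."""
    lookup = {
        "essential hypertension": "I10",
        "type 2 diabetes mellitus": "E11.9",
    }
    try:
        return lookup[diagnosis.strip().lower()]
    except KeyError:
        raise ValueError(f"No validated code mapping for diagnosis: {diagnosis!r}")


class CodingValidationTests(unittest.TestCase):
    """Run on every model or pipeline update so coding errors are caught before billing."""

    def test_known_diagnoses_map_to_expected_codes(self):
        self.assertEqual(map_diagnosis_to_code("Essential hypertension"), "I10")
        self.assertEqual(map_diagnosis_to_code("Type 2 diabetes mellitus"), "E11.9")

    def test_unrecognized_diagnosis_is_rejected_rather_than_guessed(self):
        with self.assertRaises(ValueError):
            map_diagnosis_to_code("unrecognized condition")


if __name__ == "__main__":
    unittest.main()
```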
Privacy and Security Risks
AI tools are also being used to collect and process sensitive patient data, which increases the risk of data breaches and unauthorized access and exposes the user to potential HIPAA violations. With the rise of cloud-based AI platforms, the risk of third-party breaches or improper data sharing becomes even greater, especially if these platforms lack robust security and privacy compliance measures.
A recent rise in FCA enforcement actions against major government contractors alleged to have inadequate cybersecurity measures further illustrates the importance of maintaining adequate safeguards and underscores the need for healthcare organizations to address the privacy and security risks associated with using an AI tool that collects and processes patient data.
Transparency and Disclosure Requirements
Patients may have a legal right to know when an AI tool is being utilized by their healthcare provider to formulate a diagnosis or treatment plan. Transparency is therefore critical, and failure to disclose such use could undermine trust in the healthcare system as a whole and lead to ethical and legal challenges for the user.
These risks are closely tied to the principle of informed consent, which is a fundamental ethical and legal standard across medical practices. Informed consent requires that patients be provided with sufficient and material information about their diagnosis and treatment options—which arguably includes disclosure of any AI tools, technologies, or methodologies being used—to make knowledgeable and voluntary decisions about their care. Transparency also plays a vital role in educating patients about the limitations of AI, such as its reliance on probabilistic algorithms and the potential for errors, which can help manage expectations and build trust.
In October 2023, the Biden administration, with the goal of ensuring safe and effective use of AI in the healthcare industry, issued an Executive Order (No. 14110) on “Safe, Secure, and Trustworthy Artificial Intelligence,” which emphasized the need for federal agencies to develop frameworks ensuring the safe deployment of AI across the healthcare industry. The order highlighted the importance of addressing biases, ensuring transparency, and prioritizing data security. On the first day of the Trump administration’s second term, Executive Order No. 14110 was revoked; it was subsequently replaced with a new order that prioritizes AI research and innovation.
At the state level, legislative activity has also ramped up, as discussed in greater detail in our recent LawFlash. States such as (but not limited to) California, Virginia, and Utah have already introduced or passed laws addressing AI transparency, bias mitigation, and accountability in the healthcare industry and other related sectors. For instance, Virginia H 2154 requires hospitals, nursing homes, and certified nursing facilities to implement policies on the permissible access to and use of intelligent personal assistants, including AI software, provided by a patient.
As an example of another area of focus in some of these recent state legislative actions, California SB 1120 requires healthcare service plans or disability insurers that use AI for utilization review or management to implement safeguards related to equitable use, compliance with state and federal regulations, and disclosure. This California law, like others that have been enacted and proposed, explicitly requires that determinations of medical necessity be made only by a licensed healthcare provider. Additionally, Utah HB 452 requires any person who provides the services of a regulated occupation (including healthcare professionals) to disclose the use of generative AI in the provision of regulated services.
Moreover, additional states have pending legislation similar to laws already adopted elsewhere. These recently enacted and proposed laws signal heightened scrutiny of AI applications and their potential risks to patient safety and data security.
In recent years, enforcement related to the use of AI in healthcare, particularly under the FCA, has intensified. The FCA imposes liability on entities that submit false or fraudulent claims for payment to the federal government. In the context of AI, enforcement actions have primarily involved situations where healthcare providers or vendors knowingly relied on flawed AI tools that generated inaccurate billing codes or diagnostic results. Some of the recent enforcement actions in this space include the following:
The increase in enforcement actions involving the use of AI in healthcare highlights the critical need for healthcare entities and providers to implement robust AI compliance programs to ensure that all AI tools are used appropriately and in compliance with applicable current and emerging laws and regulations at the federal and state levels.
To address these risks and the exposure to agency enforcement actions, healthcare and life sciences companies should consider developing specialized AI compliance programs. While compliance programs should incorporate the Office of Inspector General’s seven elements of an effective compliance program into their structure, they should also be tailored to the unique challenges posed by AI. Some of the key elements of an effective AI compliance program are as follows:
The rapid growth of AI in healthcare presents novel opportunities but also poses significant risks when AI tools are used inappropriately or simply left unattended. Companies in the healthcare and life sciences sectors must take steps to proactively address these challenges, such as by implementing tailored AI compliance programs. Adopting a structured approach that includes governance, training, oversight, and regular monitoring can help healthcare and life sciences organizations harness the benefits of AI while minimizing legal and regulatory exposure.
As the regulatory landscape continues to evolve, staying informed and adaptable will be essential to a company’s success. To make this ever-evolving landscape more manageable, Morgan Lewis will continue monitoring developments and provide updates as information is released. Morgan Lewis lawyers are seasoned in providing strategic counseling to clients in the healthcare and life sciences industries on the proper use of AI in healthcare settings and in implementing needed AI compliance tools and considerations.
If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following: