LawFlash

AI in Healthcare: Opportunities, Enforcement Risks and False Claims, and the Need for AI-Specific Compliance

July 14, 2025

The risks associated with the growth of AI in the healthcare and life sciences industries, as well as recent federal and state activity and enforcement actions, emphasize the importance of understanding and implementing a robust AI compliance program.

Artificial intelligence (AI) has the potential to transform the healthcare industry by offering groundbreaking possibilities in diagnosis, patient care, operational efficiency, and more. As we continue to see the rapid integration of AI technology across the industry, however, it is important to note that this evolution also poses significant legal and regulatory challenges.

Healthcare providers, life sciences companies, and other stakeholders are now forced to consider how best to navigate a complex and quickly evolving landscape where the potential benefit of AI is matched by its risks.

POTENTIAL OPPORTUNITIES AND RISKS ASSOCIATED WITH THE GROWTH OF AI IN HEALTHCARE

The adoption of AI in the healthcare industry has been bolstered by advancements in predictive analytics, machine learning algorithms, and ambient AI products. From AI-assisted surgery to real-time diagnostic tools, these technologies have the potential to improve patient safety and reduce clinician burnout. For instance, ambient AI has demonstrated immense potential for enhancing patient outcomes and streamlining clinical workflows.

However, if used improperly or without proper monitoring, ambient AI and other AI technologies also pose significant risks (including but not limited to misuse, hallucinations, model degradation, monitoring failures, inadequate oversight, disclosure/transparency concerns, and over- and under-utilization) that could expose the healthcare provider or entity to not only patient care concerns but also potential liability under the False Claims Act (FCA) and other federal and state laws and regulations.

Diagnostic Accuracy

AI-powered diagnostic tools, while innovative, are also potentially imperfect and therefore should not replace a healthcare provider’s clinical judgment. Misdiagnoses or delayed diagnoses could threaten patient safety and lead to significant liability exposure for the user. Because these tools rely on probabilistic algorithms, they may also fail to account for rare or complex medical conditions, further underscoring the need for human oversight.

Without continuous validation and monitoring by a human, AI-powered diagnostic tools could generate erroneous outputs, compromising the tool’s reliability and its ultimate effectiveness in clinical settings. As AI continues to evolve, maintaining transparency in how these tools generate diagnostic results and how such results are validated and monitored by clinicians will be critical to preserving trust in the tool and ensuring patient safety.

Failure to Monitor AI Tools

All AI tools used in the healthcare setting—not just those focused on diagnostics—must be regularly monitored by humans to ensure the tool is functioning properly and as intended. Failure to monitor an AI tool can result in undetected errors or deviations from its intended function, which could, in turn, lead to delays in necessary critical interventions. For example, an AI tool may hallucinate and generate false or misleading outputs. If such hallucinations are not caught through regular monitoring and are relied upon to diagnose a patient or formulate treatment recommendations, they could lead to harmful interventions.

Additionally, AI tools may experience performance degradation over time, and without sufficient monitoring and regular updates when needed, a degraded tool could raise patient safety concerns as well as other regulatory compliance concerns.
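For illustration only, the following minimal Python sketch shows one way an organization might operationalize this kind of ongoing performance monitoring: comparing an AI tool’s recent accuracy on clinician-confirmed cases against its validated baseline and flagging degradation for human review. The function names, tolerance threshold, and data sources are hypothetical assumptions made for the sketch, not a prescribed or legally required approach.

# Illustrative sketch only: a hypothetical periodic check for performance
# degradation in an AI tool. The tolerance threshold, data sources, and
# escalation steps are assumptions to be set by each organization's own
# clinical, compliance, and technical teams.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class DriftCheckResult:
    checked_at: str
    baseline_accuracy: float
    recent_accuracy: float
    degraded: bool


def check_for_degradation(
    baseline_accuracy: float,
    recent_labels: list[int],       # clinician-confirmed outcomes
    recent_predictions: list[int],  # the AI tool's outputs for the same cases
    tolerance: float = 0.05,        # hypothetical allowable drop in accuracy
) -> DriftCheckResult:
    """Compare recent accuracy against the validated baseline and flag drift."""
    correct = sum(1 for y, p in zip(recent_labels, recent_predictions) if y == p)
    recent_accuracy = correct / len(recent_labels)
    degraded = (baseline_accuracy - recent_accuracy) > tolerance
    return DriftCheckResult(
        checked_at=datetime.utcnow().isoformat(),
        baseline_accuracy=baseline_accuracy,
        recent_accuracy=recent_accuracy,
        degraded=degraded,
    )


if __name__ == "__main__":
    result = check_for_degradation(
        baseline_accuracy=0.92,
        recent_labels=[1, 0, 1, 1, 0, 1, 0, 1],
        recent_predictions=[1, 0, 0, 1, 0, 0, 0, 1],
    )
    if result.degraded:
        # Escalate to human reviewers per the organization's AI policies.
        print("Degradation detected; route to clinical and compliance review:", result)
    else:
        print("Within tolerance:", result)

In a sketch like this, the key compliance point is not the specific metric but that the check runs on a regular schedule, relies on human-confirmed outcomes, and routes any flagged degradation to clinicians and compliance staff rather than being resolved automatically.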

Patient Care and Human Oversight

The use of AI in patient care must be accompanied by human oversight; overreliance on AI tools in patient care can lead to critical errors. For instance, ambient AI tools designed to monitor patients’ vitals must be carefully monitored to prevent lapses in care, and an ambient AI tool designed to listen and take notes on a patient’s interaction with their healthcare provider should not be relied upon for purposes of formulating a diagnosis.

Relying solely on an AI tool for clinical decision-making, without also incorporating human judgment, may lead to a diagnosis that overlooks nuanced patient factors and/or symptoms. As such, users must ensure proper oversight of AI tools, including (but not limited to) critically evaluating and validating the accuracy of all outputs and addressing biases.

Data Integrity

Since AI relies on datasets to function effectively, substandard or biased data can result in flawed outputs, including coding errors, that jeopardize patient safety. For example, an ambient AI tool used to triage emergency patients may under-prioritize certain demographics due to inherent biases in its training data.

These compromises to data integrity not only jeopardize patient safety but could also expose healthcare organizations to potential FCA liability if inaccurate coding leads to improper billing. Additionally, mistakes in a patient’s care that stem from flawed or inaccurate AI-generated data could erode trust among patients and providers, hindering the adoption of otherwise beneficial AI technologies.

Coding Errors

Coding errors, which can occur during the development and/or implementation of AI algorithms, may result in faulty decision-making processes, inaccurate predictions, or misinterpretation of medical data. In healthcare settings, such errors can lead to dire consequences, including incorrect diagnoses, inappropriate treatment recommendations, and/or delays in critical interventions. Addressing this risk may require regular testing, validation, and monitoring of the AI tool to ensure it functions as intended.

A certain level of collaboration between a tool’s developers and the target user (i.e., clinicians) may be needed to ensure that algorithms are designed and implemented in a way that aligns with clinical realities and patient needs.

Privacy and Security Risks

AI tools are also being used to collect and process sensitive patient data, increasing the risk of data breaches and unauthorized access and exposing users to potential HIPAA violations. With the rise of cloud-based AI platforms, the risk of third-party breaches or improper data sharing becomes even greater, especially if these platforms lack robust security and privacy compliance measures.

A recent rise in FCA enforcement actions against major government contractors alleged to have inadequate cybersecurity measures further underscores the need for healthcare organizations to address the privacy and security risks associated with utilizing an AI tool that collects and processes patient data.

Transparency and Disclosure Requirements

Patients may have a legal right to know when an AI tool is being utilized by their healthcare provider to formulate a diagnosis or treatment plan. Transparency is therefore critical, and failure to disclose such use could undermine trust in the healthcare system as a whole and lead to ethical and legal challenges for the user.

These risks are closely tied to the principle of informed consent, which is a fundamental ethical and legal standard across medical practices. Informed consent requires that patients be provided with sufficient and material information about their diagnosis and treatment options—which arguably includes disclosure of any AI tools, technologies, or methodologies being used—to make knowledgeable and voluntary decisions about their care. Transparency also plays a vital role in educating patients about the limitations of AI, such as its reliance on probabilistic algorithms and the potential for errors, which can help manage expectations and build trust.

GOVERNMENT OVERSIGHT

In October 2023, the Biden administration, with the goal of ensuring safe and effective use of AI in the healthcare industry, issued an Executive Order (No. 14110) on “Safe, Secure, and Trustworthy Artificial Intelligence,” which emphasized the need for federal agencies to develop frameworks ensuring the safe deployment of AI across the industry. The order highlighted the importance of addressing biases, ensuring transparency, and prioritizing data security. On the first day of the Trump administration’s second term, Executive Order No. 14110 was revoked and replaced with a new order that prioritizes AI research and innovation.

At the state level, legislative activity has also ramped up, as discussed in greater detail in our recent LawFlash. States such as (but not limited to) California, Virginia, and Utah have already introduced or passed laws addressing AI transparency, bias mitigation, and accountability in the healthcare industry and other related sectors. For instance, Virginia HB 2154 requires hospitals, nursing homes, and certified nursing facilities to implement policies on the permissible access to and use of intelligent personal assistants, including AI software, provided by a patient.

As an example of another area of focus in some of these recent state legislative actions, California SB 1120 requires healthcare service plans or disability insurers that use AI for utilization review or management to implement safeguards related to equitable use, compliance with state and federal regulations, and disclosure. This California law, like others that have been enacted and proposed, explicitly requires that determinations of medical necessity be made only by a licensed healthcare provider. Additionally, Utah HB 452 requires any person who provides the services of a regulated occupation (including healthcare professionals) to disclose the use of generative AI in the provision of regulated services.

Moreover, additional states have pending legislation similar to laws already adopted elsewhere. These recently enacted and proposed laws signal heightened scrutiny of AI applications and their potential risks to patient safety and data security.

RECENT ENFORCEMENT ACTIONS

In recent years, enforcement related to the use of AI in healthcare, particularly under the FCA, has intensified. The FCA imposes liability on entities that submit false or fraudulent claims for payment to the federal government. In the context of AI, enforcement actions have primarily involved situations where healthcare providers or vendors knowingly relied on flawed AI tools that generated inaccurate billing codes or diagnostic results. Some of the recent enforcement actions in this space include the following:

  1. In 2024, the US Department of Justice subpoenaed several pharmaceutical and digital health companies regarding their use of generative AI in electronic medical record (EMR) systems to determine whether these tools result in care that is either excessive or medically unnecessary.
  2. FCA investigations into Medicare Advantage plans that use AI tools to identify unreported diagnoses and make coverage decisions.
  3. A Texas attorney general settlement with a company that sold a generative AI tool that created documentation of a patient’s condition and treatment plan in the patient’s chart and was marketed as being “highly accurate,” leading to allegations of false, misleading, and deceptive claims about the accuracy of the AI tool.
  4. Commercial insurance companies are being named in class action lawsuits for allegedly using AI algorithms to override treating physicians’ medical necessity determinations.
  5. A large commercial insurance company is being sued over allegations that an AI tool it uses to predict fraudulent claims exhibits racial bias.

The increase in enforcement actions involving the use of AI in healthcare highlights the critical need for healthcare entities and providers to implement robust AI compliance programs to ensure that all AI tools are used appropriately and in compliance with applicable current and emerging laws and regulations at the federal and state levels.

IMPLEMENTING AN AI COMPLIANCE PROGRAM

To address these risks and the exposure to agency enforcement actions, healthcare and life sciences companies should consider developing specialized AI compliance programs. While compliance programs should incorporate the Office of Inspector General’s seven elements of an effective compliance program into their structure, they should also be tailored to the unique challenges posed by AI. Some of the key elements of an effective AI compliance program are as follows:

  1. AI Governance Committee: Establishment of a multidisciplinary governance committee to oversee all AI-related activities at the company. This committee should include representatives from legal, compliance, IT, clinical operations, and risk management. The committee’s responsibilities should include vetting all AI tools, addressing potential biases, and ensuring alignment with federal and state regulatory requirements. Understanding what AI is being implemented within an organization is imperative.
  2. Written Policies and Procedures: Development and implementation of comprehensive policies and procedures that address AI procurement, deployment, and monitoring. These policies should align with evolving industry standards and regulatory requirements. Policies that help an organization account for the evolving nature of AI are critical to demonstrating its compliance efforts.
  3. Training and Resources for Employees: Development and implementation of regular training programs to educate employees about AI risks, regulatory obligations, and best practices and industry standards. Training and resources are likewise critical to helping an organization grow its AI capabilities while monitoring risks as appropriate.
  4. Routine Monitoring, Auditing, and Risk Assessment: Regular assessment of AI systems for compliance with the company’s internal policies and procedures and all applicable federal and state laws and regulations. Organizations should also conduct periodic risk assessments to identify emerging challenges and refine the AI compliance program accordingly. Failure to conduct routine monitoring or audits could leave healthcare organizations vulnerable to claims that the organization failed to take reasonable steps to ensure compliance or knowingly ignored (through deliberate ignorance or reckless disregard) red flags, such as data degradation. Ongoing monitoring is an aspect that many organizations should consider more fully at the time of implementation and throughout the AI tool’s useful life, and monitoring activities may provide an organization with valuable documentation in responding to allegations of negligence or fraud (a minimal illustrative sketch of such documentation follows this list).
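As a purely illustrative sketch of the documentation element described in item 4 above, the following Python snippet shows one hypothetical way to record an auditable log entry each time an AI tool’s output is reviewed by a human. The field names, the example tool name, and the JSON-lines storage approach are assumptions made for the sketch, not regulatory requirements or a description of any particular product.

# Illustrative sketch only: a hypothetical audit-log record documenting human
# review of AI tool outputs, intended to support routine monitoring and to
# evidence oversight. Field names and storage format are assumptions.

import json
from dataclasses import dataclass, asdict
from datetime import datetime


@dataclass
class AIAuditRecord:
    tool_name: str          # which AI tool produced the output
    tool_version: str       # model/software version in use at the time
    case_id: str            # internal identifier (no patient data in this example)
    ai_output_summary: str  # brief description of the AI-generated output
    reviewed_by: str        # clinician or compliance reviewer identifier
    review_outcome: str     # e.g., "accepted", "accepted with edits", "overridden"
    reviewed_at: str        # timestamp of the human review


def log_review(record: AIAuditRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the review record to a simple JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    log_review(AIAuditRecord(
        tool_name="ambient-scribe",  # hypothetical tool name
        tool_version="2.3.1",
        case_id="case-001",
        ai_output_summary="Draft visit note generated from a recorded encounter",
        reviewed_by="clinician-042",
        review_outcome="accepted with edits",
        reviewed_at=datetime.utcnow().isoformat(),
    ))

Records of this kind, kept over the useful life of a tool, can help an organization demonstrate that outputs were routinely reviewed and that red flags were addressed rather than ignored.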

CONCLUSION

The rapid growth of AI in healthcare presents novel opportunities but also poses significant risks when AI tools are used inappropriately or simply left unattended. Companies in the healthcare and life sciences sectors must take steps to proactively address these challenges, such as by implementing tailored AI compliance programs. Adopting a structured approach that includes governance, training, oversight, and regular monitoring can help healthcare and life sciences organizations harness the benefits of AI while minimizing legal and regulatory exposure.

As the regulatory landscape continues to evolve, staying informed and adaptable will be essential to a company’s success. To make this ever-evolving landscape more manageable, Morgan Lewis will continue monitoring developments and provide updates as information is released. Morgan Lewis lawyers are seasoned in providing strategic counseling to clients in the healthcare and life sciences industries on the proper use of AI in healthcare settings and in implementing needed AI compliance tools and considerations.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

Authors
B. Scott McBride (Houston / Dallas)
Sydney Menack (Washington, DC)