LawFlash

(AI)n’t Done Yet: States Continue to Craft Rules to Manage AI Tools in Healthcare

April 23, 2025

As the regulation of artificial intelligence (AI) tools in healthcare settings rapidly evolves, state medical boards and related agencies are at the forefront of development and enforcement. While some states have taken proactive steps to implement comprehensive frameworks to address ethical use, data privacy, and safety standards, others are focusing on fostering innovation and reducing regulatory burdens on healthcare providers.

The following state bills are of particular importance for healthcare providers and payors seeking to integrate AI tools into the provision and administration of healthcare services.

ENACTED LEGISLATION

State | Type | Bill Number | Description

California | Provider-Focused | AB 3030

Requires health facilities, clinics, physician offices, or offices of a group practice that use generative AI for patient communications to include

  • a disclaimer that the communication was created by generative AI; and
  • clear instructions describing how a patient may contact a human healthcare provider, employee, or other appropriate person.

See our LawFlash for more information.

California | Payor-Focused | SB 1120

Requires healthcare service plans or disability insurers that use AI for utilization review or utilization management to implement safeguards related to equitable use, compliance with state and federal regulations, and disclosure. The bill also requires that determinations of medical necessity be made only by a licensed healthcare professional.

Colorado | General | SB24-205

Requires developers and deployers of “high-risk AI systems”—including any person doing business in Colorado that uses a “high-risk AI system”—to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination by implementing certain safeguards and providing disclosures on the use of AI. Colorado defines “high-risk AI systems” to include systems that make (or are a substantial factor in making) a decision that has a material legal or similarly significant effect on the provision or denial of healthcare services (e.g., approval or denial of care).

Utah | Provider-Focused | HB 452

Requires suppliers of mental health chatbots to disclose the use of AI to users and implement other safeguards to protect personal information.

Utah | General | SB 149/SB 226

Requires any person who provides the services of a regulated occupation—including healthcare professionals—to disclose the use of generative AI in the provision of the regulated services.

Vermont | General | HB 410

Directs the Vermont Agency of Digital Services to inventory all automated decision systems that are being developed, employed, or procured by the state’s government and establishes a Division of Artificial Intelligence and an Artificial Intelligence Advisory Council to review and further advise on the regulation of AI systems in the state.

Virginia | Provider-Focused | H 2154

Requires hospitals, nursing homes, and certified nursing facilities to implement policies on the permissible access to and use of intelligent personal assistants, including AI software, provided by a patient.

Washington | General | SB 5838

Establishes a task force to assess current uses and trends and make recommendations to the legislature regarding AI, including with respect to healthcare and accessibility. The act requires the task force to produce a final report to the governor by July 1, 2026.


PROPOSED LEGISLATION

State | Type | Bill Number | Description

Arizona | Payor-Focused | HB 2175

Prohibits the use of AI to deny a claim or prior authorization for medical necessity, experimental status, or any other reason that involves the use of medical judgment.

Connecticut | Payor-Focused | HB 5587

Prohibits health insurers from using AI as the primary method to deny health insurance claims.

Connecticut | Payor-Focused | SB 447

Prohibits a health carrier from using AI in the evaluation and determination of patient care.

Connecticut | Payor-Focused | SB 817/HB 5590

Prohibits a health insurer from using AI to automatically downcode or deny a health insurance claim without peer review.

Florida | Payor-Focused | SB 794

Requires that an insurer’s decision to deny a claim be made by a qualified human professional and provides that an AI model may not serve as the sole basis for determining whether to adjust or deny a claim.

Illinois | Provider-Focused | SB 2259

Requires a health facility, clinic, physician’s office, or office of a group practice that uses generative AI for patient communications to include

  • a disclaimer that the communication was created by generative AI; and
  • clear instructions describing how a patient may contact a human healthcare provider, employee, or other appropriate person.

Illinois | Payor-Focused | SB 1425

Prohibits an insurer from issuing a denial, or reducing or terminating an insurance plan, based solely on the use of an AI system, and requires disclosure of an insurer’s use of AI.

Indiana | Provider-Focused | HB 1620

Requires healthcare providers to disclose the use of AI technology when AI is used to (1) make or inform decisions involving the healthcare of an individual or (2) generate patient communications.

Indiana | Payor-Focused | HB 1620

Requires insurers to disclose the use of AI technology when AI is used to (1) make or inform decisions involving coverage or (2) generate communications to insureds regarding coverage.

Maryland | Payor-Focused | HB 820

Prohibits a health insurance carrier from using AI tools to deny, delay, or modify health services.

Massachusetts | Payor-Focused | S 46

Requires carriers or utilization review organizations that use AI algorithms or tools for utilization review or utilization management to implement certain safeguards and provide disclosures related to their use. The bill also requires that determinations of medical necessity be made only by a licensed healthcare professional.

Massachusetts | Payor-Focused | H 1210

Requires carriers to disclose whether AI algorithms or automated decision tools will be utilized in the claims review process.

Massachusetts | General | H 94

Requires developers and deployers of “high-risk AI systems”—including any entity using AI systems to make decisions impacting consumers in the state—to implement certain safeguards and provide disclosures to protect consumers against algorithmic discrimination and mitigate risk related to the use of AI systems. Massachusetts defines “high-risk AI systems” to include systems that materially influence decisions that have significant legal, financial, or personal implications for healthcare services.

Massachusetts | General | H 1210

Grants patients and residents of health facilities the right to be informed whether the information they receive is generated by AI, as well as the ability to contact a human healthcare provider in the event the information was not previously reviewed and approved by a provider.

Nebraska | General | LB 642

Requires developers and deployers of “high-risk AI systems”—including any person doing business in Nebraska that uses a “high-risk AI system”—to implement certain safeguards and provide disclosures to protect consumers from the known risks of algorithmic discrimination. Nebraska defines “high-risk AI systems” to include systems that have a material legal or similarly significant effect on the provision or denial of healthcare services without human review or intervention.

New Mexico | General | HB 60

Requires developers and deployers of “high-risk AI systems”—including any person who uses AI systems—to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination by implementing certain safeguards and providing disclosures on the use of AI. New Mexico defines “high-risk AI systems” to include systems that make (or are a substantial factor in making) a decision that has a material legal or similarly significant effect on the provision or denial of healthcare services.

New York | Payor-Focused | A3991

Requires healthcare service plans that use AI algorithms or tools for utilization review or utilization management to implement certain safeguards and provide disclosures related to their use. It also requires that determinations of medical necessity be made only by a licensed healthcare professional.

New York | Payor-Focused | A3993

Prohibits insurers from using clinical algorithms in their decision-making that discriminate on the basis of race, color, national origin, sex, age, or disability.

New York | Payor-Focused | A1456

Requires health insurers to notify enrollees about the use or lack of use of AI-based algorithms in the utilization review process.

New York | General | A3356

Requires developers and operators of “high-risk advanced AI systems” to obtain a license from the state. “High-risk advanced AI systems” include those that manage, control, or significantly influence healthcare or healthcare-related systems, including but not limited to diagnosis, treatment plans, pharmaceutical recommendations, or the storage of patient records.

Oklahoma | Provider-Focused | HB 1915

Requires hospitals, physician practices, and other healthcare facilities responsible for implementing AI devices for patient care purposes to adopt a quality assurance program and establish an AI governance group for the safe, effective, and compliant use of AI devices in patient care.

Rhode Island | Payor-Focused | H 5172/SB 13

Requires health insurers to disclose the use of AI to manage claims and coverage, including the use of AI to issue adverse determinations to enrollees, and requires that any adverse determinations be reviewed by a healthcare professional.

Tennessee | Payor-Focused | HB 1382

Requires health insurance issuers that use AI for utilization review or utilization management to implement safeguards related to equitable use, compliance, and disclosure. It also requires that determinations of medical necessity be made only by a licensed healthcare professional.

Texas | Provider-Focused | SB 1411

Prohibits a physician or healthcare provider, when providing a medical or healthcare service, from using AI-based algorithms to discriminate on the basis of race, color, national origin, gender, age, vaccination status, or disability.

Texas | Payor-Focused | SB 815

Prohibits a health benefits utilization reviewer from using automated decision systems, including AI systems, to make adverse determinations.

Texas | Payor-Focused | SB 1411

Prohibits a health benefit plan issuer from using AI-based algorithms in the issuer’s decision-making to discriminate on the basis of race, color, national origin, gender, age, vaccination status, or disability.

Texas | Payor-Focused | SB 1822

Requires issuers of health insurance policies to disclose to enrollees or any physician or healthcare provider whether the issuer or the issuer’s utilization agent uses AI-based algorithms in conducting utilization reviews.

Texas | General | HB 1709

Requires developers and deployers of “high-risk AI systems”—including any person doing business in the state that puts into effect or commercializes a “high-risk AI system”—to implement certain safeguards and provide disclosures to protect consumers against algorithmic discrimination and mitigate risk related to the use of AI systems. Texas defines “high-risk AI systems” to include systems that are a substantial factor in decisions that have a material legal or similarly significant effect on a consumer’s access to, the cost of, or the terms or conditions of a healthcare service or treatment.


KEY TAKEAWAYS

As states continue to refine their regulatory frameworks regarding AI in clinical settings, the balance between innovation and patient protection is likely to remain a critical focus for legislators. Absent federal preemption, divergent state frameworks may create compliance complexities for multi-state providers and insurers. With increasing regulatory fragmentation, healthcare providers will need to implement additional compliance infrastructure and training to navigate disparate requirements across states.

These efforts will include creating an AI governance committee, developing appropriate policies and procedures, training staff on the use of AI, and evaluating whether staff are knowingly (or even unknowingly) using AI in the performance of their duties. Stakeholders should also note the difference between the regulation of generative AI (e.g., as used in patient communications) and the regulation of broader algorithmic tools used in medical decision-making.
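To make the generative AI disclosure pattern concrete, the following minimal sketch shows how a provider’s messaging pipeline might append the disclaimer and human-contact instructions that laws such as California AB 3030 and Illinois SB 2259 contemplate. The function name, disclaimer wording, and contact details below are illustrative assumptions, not statutory language; actual disclosure text should be drafted to satisfy each applicable statute.

# Minimal sketch (Python), assuming a hypothetical messaging pipeline.
# AI_DISCLAIMER and tag_ai_communication are illustrative names; the
# disclaimer text is a placeholder, not statutory language.

AI_DISCLAIMER = (
    "This message was generated by artificial intelligence. "
    "To reach a human member of your care team, call {contact}."
)

def tag_ai_communication(message: str, ai_generated: bool, contact: str) -> str:
    # Pass human-authored messages through unchanged; otherwise append
    # the AI-use disclaimer and human-contact instructions.
    if not ai_generated:
        return message
    return f"{message}\n\n{AI_DISCLAIMER.format(contact=contact)}"

# Example usage:
print(tag_ai_communication(
    "Your lab results are ready in the patient portal.",
    ai_generated=True,
    contact="(555) 010-0000",
))

Centralizing the disclosure logic in a single function of this kind makes it easier to update the disclaimer text and contact instructions as state requirements diverge.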

At the federal level, in the absence of sweeping legislative change, the US Centers for Medicare & Medicaid Services (CMS) is expected to continue releasing guidance, likely through Physician Fee Schedule updates, on how it will reimburse AI tools. CMS is positioned to serve as a major driver of the implementation of AI tools moving forward, especially as it evolves its payment mechanisms to give greater consideration to supportive tools in medical practice.

How We Can Help

Our healthcare team stands ready to assist organizations in developing policies and protocols, understanding the application of federal and state laws to their business, protecting data, and conducting internal inquiries into the use of AI within their organizations.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

Authors
Jacob J. Harper (Washington, DC)
Rachel L. Lamparelli (Washington, DC)