Insight

AI Enforcement Accelerates as Federal Policy Stalls and States Step In

April 02, 2026

Artificial intelligence is evolving faster than the legal frameworks that govern it. In the United States, comprehensive federal AI legislation remains elusive, yet regulatory and litigation activity is intensifying across multiple fronts. Federal agencies are relying on existing statutes to police AI-related conduct, states are moving aggressively to enact AI laws and expand enforcement under antitrust, consumer protection, and false claims statutes, and private plaintiffs are testing new liability theories across industries.

For companies deploying AI tools, whether for pricing, hiring, healthcare, marketing, or operational efficiency, the absence of a single unified federal AI statute does not necessarily translate to reduced risk. Rather, the enforcement landscape is becoming more complex and more fragmented.

This Insight examines the evolving federal and state AI enforcement landscape and outlines practical considerations for companies deploying AI technologies.

FEDERAL AI OVERSIGHT IN THE ABSENCE OF COMPREHENSIVE LEGISLATION

Despite significant executive and congressional attention, no comprehensive federal AI statute has been enacted. But the absence of overarching legislation has not created a regulatory vacuum. Federal agencies are relying on existing authorities to regulate AI-related conduct:

  • Federal Trade Commission: Section 5 of the FTC Act remains a primary enforcement vehicle for allegedly unfair or deceptive AI practices, including misleading claims about AI capabilities, undisclosed use of AI tools, and data practices tied to automated decision-making.
  • Securities and Exchange Commission: The SEC has focused on so-called “AI washing,” that is, public companies overstating or misrepresenting the use or performance of AI in disclosures to investors.
  • False Claims Act: The Department of Justice has signaled willingness to pursue FCA theories where AI tools are used in government-funded programs, including healthcare reimbursement and cybersecurity compliance contexts.
  • Antitrust Enforcers: DOJ and FTC have taken active positions in cases involving algorithmic pricing and information-sharing allegedly facilitated by AI systems.

In July 2025, the White House released Winning the Race: America’s AI Action Plan, outlining 90 federal policy actions across innovation, infrastructure, and international leadership. Subsequent executive actions signaled a push to limit what the administration views as “onerous” state-level AI regulation and challenge conflicting state laws in court.

The result is a federal posture that favors innovation and centralized policy direction, while relying heavily on existing laws rather than new AI-specific legislation.

A RAPIDLY EXPANDING PATCHWORK OF STATE AI LAWS

In the absence of federal preemption, states have moved to fill the perceived void. State-level activity generally falls within three categories: AI-specific statutes, expanded use of existing consumer protection and antitrust laws, and attorney general investigations coupled with multistate actions.

Targeted AI Laws

Several states, including California, Colorado, New York, and Texas, have enacted AI-specific statutes focused on discrete risks rather than comprehensive cross-sector regulation.

New York’s Algorithmic Pricing Disclosure Act, for example, requires businesses to disclose when individualized pricing is set by an algorithm using a consumer’s personal data. California has adopted AI transparency measures that mandate disclosure of AI-generated content and in certain contexts require documentation regarding training data used to develop generative AI systems.

California and Texas have also imposed healthcare-specific restrictions that limit the use of AI in medical necessity determinations and require meaningful human oversight in clinical decision-making. Colorado’s AI Act, by contrast, targets so-called “high-risk” systems and imposes governance, risk assessment, and documentation obligations for AI tools used in consequential decisions, such as those affecting employment, insurance, and other areas deemed to involve heightened consumer rights or access to services.

Consumer Protection and UDAP Authority

State attorneys general are also deploying broad “unfair and deceptive acts or practices” (UDAP) statutes to investigate AI-related conduct. These statutes are powerful enforcement tools because they often permit per-violation penalties, do not require proof of individual damages, and are frequently structured in ways that make cases difficult to remove to federal court. AI-related marketing claims, disclosures, bias allegations, and data practices all fall within potential UDAP scrutiny.

State Antitrust and Algorithmic Pricing

State AGs have taken a particularly active role in scrutinizing the use of algorithmic pricing tools. This mirrors a broader national trend of heightened state AG antitrust enforcement, particularly in emerging technology sectors.

EMERGING THEORIES IN AI-RELATED PRIVATE LITIGATION

Private litigation has accelerated in step with regulatory activity. Current AI-related lawsuits fall into several recurring categories:

  • Algorithmic pricing and antitrust concerns
  • Copyright and training data scraping
  • AI washing and securities litigation
  • Consumer protection and deceptive marketing
  • Biometric and privacy claims
  • Employment discrimination
  • Deepfakes and harm-to-children allegations

Algorithmic Pricing and Hub-and-Spoke Theories

A central feature of recent AI antitrust litigation and enforcement actions is the allegation that algorithmic pricing software can be used to facilitate a so-called “hub-and-spoke” conspiracy among competitors. In these cases, the AI vendor is characterized as the “hub,” while competing firms—the “spokes”—allegedly share competitively sensitive nonpublic data through the platform. Plaintiffs argue that this conduct enables coordinated pricing behavior and results in artificially inflated prices.

Courts have reached differing conclusions on whether such conduct should be evaluated under the per se rule or the rule of reason, as well as whether plaintiffs have sufficiently alleged an “agreement” among competitors purportedly using the same pricing software.

The Regulation-Litigation Feedback Loop

AI litigation is shaping regulation in real time. Courts interpreting existing statutes, such as copyright law, biometric privacy acts, and consumer protection laws, are effectively defining guardrails for AI deployment.

At the same time, regulatory initiatives create new private rights of action or expand litigation risk. For example:

  • Biometric statutes have generated extensive class action litigation
  • State consumer privacy laws are being used to challenge AI-based profiling and targeted advertising
  • State AG investigations frequently precede parallel private class actions

This feedback loop means companies must monitor both legislative developments and emerging case law.

EMERGING RISK AREAS TO WATCH IN 2026

Based on enforcement signals and litigation patterns, several risk areas are likely to intensify:

  • Privacy and Data Governance: AI systems depend on large volumes of personal data. State AGs are increasingly focused on privacy compliance and transparency, particularly where federal privacy legislation remains stalled.
  • Securities and Disclosure Risk: Public companies must carefully evaluate how they describe AI capabilities and risks. Misleading claims, whether overstating performance or downplaying cybersecurity vulnerabilities, may trigger SEC scrutiny.
  • FCA Exposure in Government-Funded Contexts: Where AI tools are deployed in healthcare reimbursement, defense contracting, or grant-funded programs, inaccurate certifications or overstatements regarding performance, bias controls, or cybersecurity compliance may present FCA risk.
  • Multistate Enforcement: State AGs continue to expand coordinated enforcement efforts, leveraging multistate investigations and task forces. AI-related conduct, particularly involving pricing, marketing, or youth protection, will likely remain a target.

PRACTICAL CONSIDERATIONS FOR COMPANIES DEPLOYING AI

Companies can continue innovating while mitigating risk by embedding governance and compliance measures into their AI development and deployment.

  • Implement Clear Disclosures: Companies should consider identifying where AI is used in customer-facing or employment contexts and ensure that those uses are clearly and accurately disclosed. Transparency regarding AI-generated content and automated decision-making is increasingly expected by regulators and courts. Public statements about AI capabilities should be carefully aligned with actual system functionality to avoid allegations of misleading claims or overstatement.
  • Strengthen Governance and Oversight: Organizations should consider establishing cross-functional AI governance committees that include legal, compliance, technology, and business stakeholders. Robust documentation of model development, training data sources, and validation processes may be critical to demonstrating accountability and defensibility. Where AI systems influence consequential decisions (e.g., hiring, credit, healthcare, pricing), companies should conduct bias and fairness audits and maintain records of those assessments.
  • Address Antitrust Risk in Algorithmic Pricing: Companies using algorithmic pricing tools should avoid sharing competitively sensitive nonpublic information through AI platforms or intermediaries. Pricing decisions should remain unilateral, with clear internal policies reinforcing independent decision-making authority. Businesses should also document the business justifications for their AI tools—for example, increased competition and output, operational efficiencies, improved accuracy, and consumer benefits.
  • Enhance Internal Controls and Training: Personnel should be trained on the appropriate and compliant use of AI systems, including the risks associated with automated decision-making and data inputs. Consider integrating AI-related risks into broader compliance, cybersecurity, and risk management programs. Companies should also review vendor agreements and data-sharing arrangements to ensure that contractual protections and oversight mechanisms appropriately address AI-related legal and regulatory exposure.
  • Monitor Regulatory Developments: Given the pace of change at both the federal and state level, ongoing monitoring is essential. Companies operating nationally should account for the growing state-level patchwork while also tracking executive actions and federal agency enforcement priorities.

LOOKING AHEAD

The regulatory trajectory for AI in the United States is defined less by sweeping federal legislation than by layered enforcement: federal agencies using existing authorities, states enacting targeted laws, and private plaintiffs advancing novel theories. The absence of a comprehensive national AI statute has not slowed enforcement; it has only diversified it.

For companies, the imperative is clear: treat AI governance as a core compliance function, not an afterthought. By embedding transparency, documentation, and cross-functional oversight into AI strategy, businesses can mitigate litigation exposure while continuing to benefit from AI’s transformative potential.