LawFlash

State Attorneys General Escalate Online Platform Scrutiny Over CSAM & AI-Generated Sexual Content

January 26, 2026

State Attorneys General (AGs) nationwide are increasing enforcement activity focused on online child sexual abuse material (CSAM) and the use of artificial intelligence (AI) tools to generate, manipulate, or distribute sexually explicit content, including non-consensual deepfakes involving minors. Recent high-profile investigations underscore a broader and accelerating trend that technology companies, platforms, and AI developers should take seriously.

AN EVOLVING ENFORCEMENT LANDSCAPE

Historically, state enforcement efforts in this space centered primarily on individual criminal prosecutions for possession or distribution of CSAM. However, AG activity is evolving in notable ways:

Expansion to Artificial Intelligence (AI)-Generated Content

As generative AI tools have become more widely available, states have moved to update statutes and charging theories to encompass computer-generated or manipulated sexual imagery, including deepfakes depicting minors. Several AG offices have publicly emphasized that AI-generated content will be treated no differently than traditional CSAM for enforcement purposes, even where statutory nuances cast doubt on such forceful proclamations.

Increased Focus on Platforms & Technology Providers

State AGs are no longer limiting their attention to individual offenders. They are increasingly pursuing platform-level accountability, invoking consumer protection, unfair practices, and child safety statutes to examine whether companies implemented adequate safeguards, moderation tools, and reporting mechanisms.

More Public & Aggressive Investigative Postures

Recent actions—including public announcements of investigations, cease and desist orders, and demands for immediate remedial steps—reflect a shift toward using enforcement actions not only to pursue violations but also to signal expectations for the broader technology sector.

Multistate Coordination & Legislative Momentum

AG offices are increasingly coordinating across jurisdictions, while state legislatures continue to enact new laws targeting AI-enabled sexual exploitation. This combination has expanded both the reach and the complexity of state-level enforcement risk.

CHALLENGES FACING STATE ATTORNEYS GENERAL

Despite this escalation, AG offices face significant hurdles in investigating and prosecuting these matters, including:

  • Attribution and evidentiary challenges: Determining who generated, prompted, or knowingly distributed illicit content is often difficult, particularly where anonymization, reposting, or automated systems are involved.
  • Jurisdictional complexity: Content, users, and infrastructure frequently span multiple states and countries, requiring cross-border coordination, data preservation, and cooperation with federal and international partners.
  • Technical and forensic limitations: Effective investigations increasingly depend on access to model logs, audit trails, and image provenance data, and these resources vary widely across companies and technologies.
  • Resource constraints and constitutional considerations: AG offices must balance aggressive enforcement with due process, statutory limits, and First Amendment concerns, especially as courts are just beginning to interpret newly enacted AI-related laws.

Importantly, while these challenges may slow some cases, they have not dampened enforcement appetite. To the contrary, they heighten regulators’ focus on whether companies acted responsibly and proactively once risks became apparent.

WHAT TECHNOLOGY COMPANIES SHOULD DO NOW

In light of these developments, companies should consider taking the following steps:

  • Evaluate and document safeguards
    • Assess whether existing content moderation, detection, and abuse-prevention tools meaningfully address AI-generated sexual content and CSAM risks; regulators will likely expect companies to demonstrate how safeguards function in practice—not merely that policies exist
  • Prepare for investigation-ready response
    • Establish clear protocols for intake, escalation, preservation, and response to potential CSAM issues, including coordination between legal, trust and safety, security, and executive teams
    • Assume inquiries may be urgent, public, and multijurisdictional, and have your crisis management team, including outside counsel and coordinating vendors, ready before notice arrives
  • Strengthen governance around AI deployment
    • Companies developing or integrating generative AI should review governance frameworks related to model testing, logging, auditability, vendor oversight, and enforcement of user policies—particularly where products may be misused to create non-consensual or abusive content involving minors

CONCLUSION

State AG enforcement in this area is no longer speculative. Investigations and public statements make clear that online child safety and AI-generated sexual content are top enforcement priorities. Companies that take proactive, documented steps now will be better positioned to manage regulatory risk and respond effectively to inquiries when they arise.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

Authors
Ashley R. Lynam (Philadelphia)