LawFlash

Washington and Oregon Regulate AI Companions: Key Compliance Changes

April 9, 2026

New Washington and Oregon laws regulating consumer-facing interactive AI companions will introduce expansive requirements for businesses operating in either state. Set to take effect January 1, 2027, the statutes require operators to adopt heightened transparency measures, implement crisis detection protocols, and deploy enhanced safeguards for minors. Businesses should assess their AI chatbots or platforms for compliance readiness before the laws enter into force.

KEY TAKEAWAYS

  • Washington and Oregon have enacted comprehensive laws regulating AI companions and imposing new compliance requirements on operators.
  • Both statutes require clear disclosures, crisis intervention protocols, and special safeguards for minors.
  • The laws establish private rights of action, significantly increasing litigation risk for businesses deploying interactive AI, with Oregon providing statutory damages of $1,000 per violation.
  • Companies must review and update their AI systems to ensure compliance before the January 1, 2027 effective date.

BACKGROUND

Recent advances in generative and conversational AI technology have enabled the development of “AI companions,” systems capable of sustaining emotionally adaptive, human-like interactions with users. Legislatures in Oregon and Washington view these systems as posing serious risks, particularly for minors, including emotional dependency, manipulation, and exposure to inappropriate or harmful content.

The statutes target these risks with new disclosure requirements aimed at promoting transparency, user safety, and responsible innovation. The laws are part of a steady increase in AI regulation over the last few years: on January 1, 2026, a similar law, California Business and Professions Code §§ 22601–22606, took effect in California, creating a private right of action and regulating many of the same areas as the Oregon and Washington statutes.

SCOPE AND APPLICABILITY

The new laws in Oregon and Washington apply broadly to “operators,” defined in both statutes as any person or entity that makes available or controls access to an AI companion or companion platform for users in the respective state.

“AI companion” encompasses systems that use artificial intelligence or algorithms to simulate sustained human-like platonic, intimate, or romantic relationships, including through personalized dialogue and retention of user preferences across sessions.

EXEMPTIONS

Both laws contain specific exclusions. For example, software used solely for customer service, technical support, business operations, or productivity falls outside of both statutes. Narrowly tailored video game features are also generally beyond the statutes’ reach, provided that they do not simulate ongoing personal relationships or generate responses on topics unrelated to their core functions, such as mental health.

Both laws explicitly exclude a “stand-alone consumer electronic device that functions as a speaker and voice command interface,” but Washington limits that exclusion to devices that “[do] not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses from the user.”

DISCLOSURE REQUIREMENTS

Oregon and Washington’s statutes create different safeguards depending on the age of the user.

For all users, both statutes require operators to provide “clear and conspicuous” disclosures that users are interacting with artificially generated output and not a human being. In Washington, operators must issue this notification at the start of every interaction and at least every three hours during ongoing use (every hour for minors or platforms directed to minors). Oregon applies a “reasonable person” standard, mandating disclosure “if a user would believe [they are] interacting with a natural person.”

For minors using AI companions, operators must implement measures to prevent the companions from making false claims of being human or sentient, simulating emotional dependence, or engaging in romantic or sexual innuendo with minors. While Washington’s law regarding minors is only triggered if an operator “knows” the user is a minor, Oregon’s law is broader, covering operators who know or have “reasons to believe” a user is a minor.

In Oregon, additional requirements for minors include periodic reminders to take breaks and prohibitions on generating certain types of statements or visual content. For example, if a minor in Oregon indicates a desire to end the conversation, an AI chatbot cannot generate a message that “simulates emotional distress.” Similarly, Washington’s statute prohibits manipulative engagement techniques including encouraging minors to withhold information from trusted adults.

MENTAL HEALTH DETECTION AND CRISIS RESPONSE PROTOCOLS

Operators in both states must establish, implement, and publicly disclose protocols for detecting and responding to user expressions of suicidal ideation, suicidal intent, or self-harm before making AI companions available.

These protocols must use evidence-based or reasonable methods to identify relevant inputs and provide referrals to crisis resources, including the national 988 Suicide & Crisis Lifeline or, for minors, youth peer support lines.

Operators are required to prevent the generation of content that encourages or describes self-harm and publish annual reports detailing their crisis intervention protocol and the number of referrals made, excluding any personal user information.

Oregon further mandates that operators employ clinical best practices for additional interventions if users continue to express suicidal ideation after receiving a referral.

ENFORCEMENT AND PRIVATE RIGHT OF ACTION

Both statutes are enforceable through private rights of action, significantly increasing litigation risk for operators.

Oregon’s law allows individuals who suffer ascertainable loss or “other injury in fact” due to a violation to recover the greater of actual damages or $1,000 per violation, in addition to injunctive relief and attorney fees. The law does not define what constitutes a “violation,” creating potential for cumulative claims arising from each instance of noncompliance.  

Washington’s law does not provide for statutory damages. Instead, it treats violations as unfair or deceptive acts under the Washington Consumer Protection Act, with the potential for actual damages, trebling of damages, injunctive relief, and fee-shifting.

AS COMPARED TO CALIFORNIA’S LAW

Businesses already complying with California’s AI companion law should not assume they will be in compliance with the Washington and Oregon laws. Some of the ways in which the laws differ include:

  • Disclosure Requirements for Minors: California’s law requires AI operators to disclose that “companion chatbots may not be suitable for some minors” and, if an operator knows a user is a minor, disclose that the minor is interacting with a chatbot and recommend taking breaks every three hours. Unlike Washington’s and Oregon’s laws, California’s law does not impose more robust prohibitions on manipulative engagement techniques employed by some chatbots.
  • Disclosure Requirements for All Users: California’s law uses a “reasonable person” standard to govern whether an AI operator must disclose that the chatbot is artificially generated and not human. Washington’s law goes further, mandating disclosure in all contexts.
  • Published Protocols: While all three laws require AI operators to report annually on their protocols for addressing suicidal ideation by users, Washington and Oregon require those reports to be made accessible, at a minimum, on the AI operator’s website. California requires operators to send the report to the Office of Suicide Prevention.

NEXT STEPS

The Washington and Oregon AI companion statutes mark a significant new chapter in the regulation of emotionally adaptive AI technologies offered to consumers. With expansive requirements, private enforcement mechanisms, and the potential for substantial statutory damages, these laws will have far-reaching implications for businesses deploying interactive AI.

Businesses should consider taking the following steps to prepare in advance of the January 1, 2027 effective date:

  • Review all consumer-facing AI systems to determine whether they may be classified as AI companions
  • Evaluate existing disclosures and consider implementing new user disclosures to ensure compliance with notification requirements, especially for users known or likely to be minors
  • Adopt public evidence-based protocols for detecting and responding to suicidal ideation or self-harm and prepare for annual reporting obligations
  • Implement robust safeguards against the generation of inappropriate content, manipulative engagement tactics, or misleading representations of the AI’s identity or nature for any platforms that have users who are minors
  • Monitor for further regulatory guidance or enforcement actions as interpretive clarification is likely to materialize over time
  • Consult with counsel regarding compliance strategies and exposure

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

Authors
Brian D. Buckley (Seattle)
Damon Elder (Seattle)
Elianna Spitzer (Seattle)