Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

The first half of 2023 was one of the most active six-month periods yet for legislative and regulatory developments around artificial intelligence (AI). Our colleagues recently noted the European Parliament’s adoption of a draft AI Act as well as significant activity in the United States related to regulating AI at both the federal and state levels. AI is also increasingly giving rise to data privacy concerns.

In this blog post, we collate some key legislative and regulatory developments from the first half of 2023.

United States

In early May 2023, US Federal Trade Commission (FTC) Chair Lina Khan penned an article for The New York Times (subscription may be required) in which she expressed concern about dominant firms unfairly controlling key AI inputs upstream and scammers deceiving individual consumers downstream. Chair Khan stated that the FTC will strive to apply existing laws and frameworks to new AI developments to prevent further market crystallization in favor of a few firms, drawing parallels to the FTC’s approach to the regulation of technology companies two decades ago.

This followed FTC guidance issued in February 2023 on the applicability of the FTC’s enforcement authority to unfair or deceptive AI advertising claims, focusing specifically on whether marketers make false or unsubstantiated claims about AI-powered products.

In April, the FTC and officials from three other federal agencies released a joint statement on enforcement efforts against discrimination and bias in automated systems. The agencies clarified their individual approaches to AI regulation and jointly highlighted three key sources of potentially unlawful discriminatory outcomes: (1) data and datasets, (2) model opacity and access, and (3) design and use of automated systems.

In January, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework Version 1.0, a voluntary, non–sector-specific guide for organizations developing, designing, and using AI-related products and services to manage AI risks and promote trustworthy AI systems. The framework specifically identifies privacy as a significant consideration for both input and output risks. The core of the framework describes four functions to help organizations address the risks of AI in practice:

  1. Govern – maintaining the culture of risk management within an organization;
  2. Map – establishing the context to frame risks related to an organization;
  3. Measure – assessing and monitoring AI risk; and
  4. Manage – allocating resources to mapped and measured risks.

United Kingdom

In July 2023, Nikhil Rathi, Chief Executive of the Financial Conduct Authority (FCA), highlighted in a speech certain risks that generative AI may pose to financial markets and consumers through misinformation. He also stated that the FCA incorporates AI within its supervision technology for firm segmentation, portfolio monitoring, and the identification of risky behaviors. Chief Executive Rathi spotlighted two existing frameworks in place to address many of the issues that come with AI: (1) the new Consumer Duty, which will require firms to demonstrate how all parts of their supply chain secure good consumer outcomes, and (2) accountability requirements under the Senior Managers & Certification Regime.

In March, the UK government published a white paper setting out a “pro-innovation” AI regulatory framework. The framework did not introduce any new legal requirements; instead, it proposed leveraging the existing powers of UK regulators and their domain-specific expertise, i.e., regulating use, not technology. Chief Executive Rathi reiterated this approach in the aforementioned speech: “[W]hile the FCA does not regulate technology, we do regulate the effect on—and use of—tech in financial services.” The feedback period for the framework closed on June 21, 2023, and the UK government has yet to publish a response.

European Union

In June 2023, the European Parliament adopted its negotiating text for the draft EU AI Act and has now begun negotiations with the Council of the European Union (which represents the governments of EU member states) with the goal of agreeing on a final version of the law. The draft AI Act includes a list of prohibited AI practices, a classification and registration framework for AI systems (including “high-risk” applications), and requirements for generative AI systems such as ChatGPT to disclose that content was AI-generated. Following the negotiations, the European Parliament and the Council will finalize a reconciled text that will then return to the European Parliament for formal adoption, possibly in Q4 2023.

Summer associate Cooper Attig contributed to this post.