Tech & Sourcing @ Morgan Lewis

On October 11, 2022, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the UK Financial Conduct Authority (FCA) (together, the Supervisory Authorities) published a discussion paper (DP5/22) on the safe and responsible adoption of artificial intelligence (AI) in financial services (Discussion Paper). The Discussion Paper forms part of the Supervisory Authorities’ wider AI-related program of work, which includes the AI Public-Private Forum, and is being considered in light of the UK government’s broader efforts toward regulating AI.

The purpose of the Discussion Paper is to provide a platform for assessing whether, and how, the adoption of AI technology in UK financial services should be regulated in order to safeguard each of the Supervisory Authorities’ objectives. The BoE’s objectives are to maintain financial stability and to support the UK government’s economic policy. The PRA focuses on promoting the safety and soundness of PRA-authorized firms, protecting insurance policyholders, and facilitating effective competition, while the FCA’s objectives include protecting consumers, protecting and enhancing the integrity of the UK financial system, and promoting effective competition.

To that end, the Discussion Paper invites responses across three main categories:

1. Supervisory authorities’ objectives and remits

The Supervisory Authorities consider that what constitutes AI could be framed in one of two ways: (1) by adopting a more precise legal definition of what AI is (and what it is not); or (2) by viewing AI as part of a wider spectrum of analytical techniques and mapping it by reference to a range of characteristics.

The Supervisory Authorities are considering whether adopting a more precise definition would be helpful and, if so, what that definition should be. They note that a more precise definition could have mixed effects: for example, it could create a common language for firms and regulators, easing uncertainty, but it could also create difficulties if the definition adopted proves too broad in practice.

2. Benefits, risks, and harms of AI in financial services

The Supervisory Authorities classified the benefits, risks, and harms into different categories based on each of their objectives:

  • Consumer protection
  • Competition
  • Safety and soundness of firms
  • Insurance policyholder protection
  • Financial stability
  • Market integrity

For example, in relation to competition, the Supervisory Authorities considered that consumer-facing AI systems, such as those used in Open Banking (the use of open APIs that enable third-party developers to build applications and services around regulated financial institutions), can strengthen competition by improving consumers’ ability to access, assess, and act on information. On the other hand, AI systems could also potentially facilitate collusive strategies among sellers.

The Supervisory Authorities also seek to understand which challenges are novel or specific to the use of AI within financial services and how these may be assessed and mitigated by firms and/or the Supervisory Authorities.

3. Regulation

The Supervisory Authorities are exploring whether the current legal requirements and guidance are sufficient to address the risks and harms associated with the adoption of AI in UK financial services, and what additional intervention may be necessary to support its safe and responsible adoption.

Next Steps

The comment period for the Discussion Paper closes on February 10, 2023. The Discussion Paper can be accessed in full on the BoE’s website.

Trainee solicitor Samuel Omotayo contributed to this blog post.