Following an initial announcement in early 2021, the UK government has now launched its first National Artificial Intelligence (AI) Strategy. The strategy signals that the United Kingdom may diverge from the legislative approach taken by the EU Commission in its “AI package.”
The EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation) which forms part of the Commission’s overall “AI package.” The legal framework for AI addresses the risks generated by specific uses of AI and focuses on imposing prescribed obligations with respect to such high-risk use cases, including obligations to undertake relevant risk assessments, have in place mitigation systems such as human oversight, and provide transparent information to users.
The intention of the EU Regulation is to have a single set of complementary rules, with extra-territorial application. This means that AI providers who make their systems available in the European Union, or whose systems affect people in the European Union or have an output in the European Union, irrespective of their country of establishment, will be required to comply with the EU Regulation. Non-compliance could lead to General Data Protection Regulation-style fines for companies and providers, with proposed fines of up to the greater of 30 million euros ($34.8 million) or 6% of worldwide turnover.
The National AI Strategy does not provide a UK legislative framework for AI, but it does provide some signs that the United Kingdom’s approach will differ from that taken by the EU Commission. Currently, the United Kingdom regulates AI through cross-sector legislation. In 2018, the UK government agreed with the House of Lords’ view that “blanket AI-specific regulation (like the EU’s), at this stage, would be inappropriate” and that “existing sector-specific regulators are best placed to consider the impact on their sector.”
The National AI Strategy outlines four key reasons why a sector-led approach, rather than a European-style overarching approach, is logical:
- The boundaries of the potential harms of AI are blurred and difficult to define in advance.
- Use cases for AI have the potential to be highly complex.
- Empowering regulators and industry to work with innovators in their sectors, and to advise on how existing regulations apply, will enable a much faster response to individual harms.
- It may be difficult to differentiate between the specific impact of AI against other external factors, such as other ongoing technology changes.
In its strategy, the UK government acknowledges that there are challenges to be addressed as part of this sector-specific approach. These include:
- inconsistent or contradictory approaches across sectors,
- overlap between regulatory mandates,
- the potential for issues to fall between the gaps,
- narrow framing of AI regulation around existing legislation, and
- growing international focus on developing cross-sector AI regulations (potentially undermining UK national efforts to build a consistent approach).
These challenges raise the question of whether the United Kingdom’s current approach is adequate. An upcoming White Paper by the Office for Artificial Intelligence will address this, along with consideration of alternative approaches.
In the European Union, the European Parliament and EU member states need to adopt the EU Commission’s proposals on AI for the EU Regulation to become effective.
In the United Kingdom, the upcoming White Paper from the Office for Artificial Intelligence should detail the proposed UK position on governing and regulating AI, as well as the challenges of the sector-specific approach. This is expected to be published in early 2022.
In response to the National AI Strategy and the EU Regulation, the UK Department for Digital, Culture, Media and Sport is running a consultation on potential AI-related reforms to the data protection framework. This consultation is due to close on November 19, 2021.