Tech & Sourcing @ Morgan Lewis


In an age marked by remarkable advancements in artificial intelligence (AI), the question of how to effectively govern this rapidly evolving technology has become increasingly pressing. On August 31, 2023, a significant milestone was reached with the publication of the Governance of Artificial Intelligence: Interim Report by the UK House of Commons Science, Innovation and Technology Committee (the Interim Report).

The Interim Report represents a pivotal moment in the ongoing dialogue surrounding AI governance, shedding light on the progress made thus far and the challenges that lie ahead.

The Interim Report highlights the multifaceted nature of AI governance, recognizing that it cannot be addressed through a one-size-fits-all approach: it calls for a nuanced strategy that considers technical, ethical, legal, and economic aspects, among others.

The Twelve Challenges of AI Governance

While the Interim Report marks a significant step forward in AI governance, it also acknowledges some of the formidable challenges that lie ahead, including in the following areas:

  • Bias: Despite efforts to mitigate bias in AI systems, challenges persist in achieving unbiased and fair AI algorithms. The Interim Report underscores the importance of developing AI systems that prioritize fairness, transparency, and accountability, and it calls for guidelines to minimize bias and discrimination in AI algorithms.
  • Privacy: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public may desire. The report recognizes the significance of data privacy in AI governance and calls for strengthened data protection measures, including robust data anonymization techniques, to ensure that individuals’ privacy rights are upheld.
  • Misrepresentation: AI can allow for the generation of material that may deliberately misrepresent someone’s behavior, opinions, or character.
  • Access to Data: The most powerful AI models require very large datasets, which currently are not easily accessible.
  • Access to Compute: The development of powerful AI requires significant computing power, which likewise is difficult to access.
  • Black Box: Transparency requirements are hindered by AI products that cannot explain why they reach certain results. The report encourages the establishment of mechanisms to track AI systems’ decision-making processes and outcomes.
  • Open Source: Requiring code to be openly available may promote transparency and innovation, while allowing it to remain proprietary could enable more dependable regulation of harms.
  • Intellectual Property: Some AI models and tools make use of other people’s proprietary content, and such rights need to be protected.
  • Liability: Policy must establish whether developers or providers of the technology bear any liability for harms done by third parties using the AI tools.
  • Employment: AI advancement may disrupt the job market, and policy must anticipate and mitigate this disruption.
  • International Coordination: Achieving global consensus on AI governance remains a daunting task. Given the borderless nature of AI technologies, international standards and agreements are crucial to harmonizing regulations and addressing cross-border challenges; however, different countries and regions may have divergent interests and priorities.
  • An Existential Challenge: Some believe that AI poses a major threat to human life. Policymakers must decide how seriously to weigh this risk while striking a balance between fostering innovation and safeguarding against potential harms.


The journey toward establishing robust governance for AI is complex. As AI technologies continue to permeate our lives, from autonomous vehicles to healthcare diagnostics, the need for clear ethical and legal frameworks has never been more critical. The Interim Report acknowledges this imperative and lays the foundation for policymakers and key industry players to address these issues.