The United Kingdom’s Online Safety Act (OSA or the Act), which received Royal Assent in October 2023, establishes a new statutory framework to address harmful online content, protect children, and promote accountability among digital service providers. For counsel advising platforms, publishers, and other organizations operating in the digital space, the OSA introduces a complex set of compliance considerations, particularly given its extraterritorial scope, risk-based obligations, and substantial enforcement powers.
The UK Information Commissioner’s Office has launched two consultations as part of the transition to the Data (Use and Access) Act framework. These consultations will be of particular interest to organisations operating UK-facing websites, analytics tools, and online advertising services.
On 19 June 2025, the UK Parliament enacted the Data (Use and Access) Act 2025 (DUAA), marking the most significant UK data protection reform since the UK General Data Protection Regulation (UK GDPR). Rather than overhauling the current regime, the DUAA introduces targeted amendments to the UK GDPR, the Data Protection Act 2018, and the Privacy and Electronic Communications Regulations (PECR), aiming to support responsible data use while preserving core privacy protections.
In an era when data is everything, everywhere, all at once and computation has almost no limit, ensuring privacy while leveraging data analytics is paramount. The US Department of Commerce’s National Institute of Standards and Technology (NIST) recently published NIST Special Publication 800-226 (the Guidelines), a comprehensive guide for evaluating and achieving differential privacy, a cutting-edge approach to protecting individual privacy when analyzing large datasets.
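The Guidelines are a conceptual and policy-oriented document, but the core mechanism of differential privacy can be illustrated with a minimal sketch: a query result is released only after calibrated random noise is added, so no single individual’s data can be inferred from the output. The function name, parameters, and privacy budget below are illustrative assumptions for this sketch, not drawn from the NIST publication.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float) -> float:
    """Release a counting-query result with epsilon-differential privacy.

    For a counting query, adding or removing one individual changes the
    result by at most 1 (sensitivity = 1), so noise drawn from
    Laplace(scale = 1/epsilon) masks any single person's contribution.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: privately report how many records match some condition,
# using a privacy budget of epsilon = 0.5 (smaller epsilon = stronger privacy).
noisy = laplace_count(true_count=1_204, epsilon=0.5)
print(f"Privately released count: {noisy:.1f}")
```

The key tradeoff the Guidelines discuss is captured in the epsilon parameter: lower values add more noise and give stronger privacy guarantees, at the cost of less accurate analytics.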
In a recent report, a team of Morgan Lewis lawyers discussed enforcement of the US Department of Justice’s (DOJ’s) Data Security Program (DSP). The report outlines critical considerations for companies and entities that may be affected by the extensive requirements of this national security initiative.
In June 2025, cybersecurity researchers discovered a leak of 16 billion passwords in one of the largest data breaches ever, impacting a wide range of platforms and placing billions of users’ information at risk. This incident underscores the urgent need for companies to adopt proactive cybersecurity measures and remain vigilant in the face of evolving threats.
In our March 2024 blog post Study Finds Average Cost of Data Breaches Continued to Rise in 2023, we highlighted the key findings of the Ponemon Institute’s Cost of a Data Breach Report 2023. Each year, the report analyzes a vast dataset of data breaches at hundreds of organizations to identify trends and developments in security risks and best practices. The Ponemon Institute recently published its Cost of a Data Breach Report 2024, showing an increase in data breach costs across many areas of business.
Partners Ksenia Andreeva and Kristin Hadgis and associate Oliver Bell recently authored an Insight titled The Evolving Framework of Data Governance: A Global Perspective. The article explores and summarizes hot topics in data governance, data privacy laws, and the evolving global landscape, including in the United Kingdom, European Union, United States, and Middle East. As the focus on transparency and cybersecurity increases, and as markets become more globalized, keeping up to date on privacy regulations and rules can be a demanding undertaking.
Artificial intelligence (AI) is reshaping modern society, enabling the automation and modification of routine human activities and, consequently, enhancing efficiency and productivity. Like any technological development, AI presents both benefits and risks. Concerns include potential biases, privacy intrusions, and ethical dilemmas.
While artificial intelligence has not quite yet achieved singularity, the last fortnight brought a substantial update to the AI regulatory landscape. As of February 2, 2025, Chapters I and II of the EU AI Act became applicable. This includes Article 5, which prohibits certain AI practices that may intrude upon an individual’s privacy, such as emotion recognition in the workplace, subliminal manipulation, and predictive policing. Separately, EU AI Act obligations relating to AI literacy have also taken effect.