The European Parliament recently published a report (the Report) on the interplay between several key EU digital regulations, assessing overlaps and gaps and highlighting the regulatory complexity that these regimes have collectively imposed on businesses.
Amid press reports of potential revisions to the EU AI Act, the Report identifies specific areas that may be ripe for simplification.
Background
Commissioned by the European Parliament’s Committee on Industry, Research and Energy, the Report considers EU digital regulations including the EU AI Act, General Data Protection Regulation (GDPR), EU Data Act, Digital Services Act (DSA), Digital Markets Act (DMA), and Cyber Resilience Act (CRA), among others.
The Report focuses squarely on whether regulatory complexity could be stifling innovation, noting that AI investment and innovation metrics show relative underperformance in the European Union compared with other regions such as the United States and China. For example, it cites research showing that private AI investment in the United States in 2024 far outstripped that in the EU, and that leading AI companies remain predominantly non-European: only three entries on Forbes’s 2025 AI 50 List were EU-based, as opposed to 42 from the United States.
The Report contrasts the decentralised, multistakeholder, market-driven regulatory model of the United States with the coherent, risk-based framework of EU digital regulation, which has drawn criticism for being difficult to implement consistently across use cases.
Overlap and Gaps
The Report identifies the following key challenges across EU digital regulations:
- Structural Challenges of the EU AI Act: The Report identifies several tensions within the EU AI Act. The act’s applicability to both AI system providers and deployers imposes complex chains of responsibility that may be difficult to navigate, particularly for small to medium-sized enterprises and nonspecialist users. The classification of high-risk AI systems relies on Annex-based lists and subjective assessments of potential harms, which the Report states introduces legal ambiguity.
- Impact Assessments under the EU AI Act and GDPR: The EU AI Act introduces requirements for fundamental rights impact assessments (FRIAs) in cases that often also trigger data protection impact assessments (DPIAs) under the GDPR. FRIAs and DPIAs differ in scope, supervision, and procedural requirements, which can result in duplication and uncertainty. The Report also finds that there is ambiguity over how personal data controllers and AI providers should manage rights of access, rectification, and erasure when personal data becomes embedded in complex AI models.
- Data Portability: The EU Data Act sets out requirements for access to and portability of data generated by cloud-based services and connected products and services. The Report highlights a “significant” cumulative compliance burden arising from the interplay between the EU Data Act and the EU AI Act, which governs the design and deployment of AI systems.
- Cybersecurity: Designers of AI systems must navigate overlapping requirements, with room for interpretative uncertainty, under the EU AI Act and the CRA, which sets out mandatory cybersecurity requirements for all digital products. Further, certain entities deemed essential and important must comply with overlapping risk management frameworks under the NIS2 Directive and the EU AI Act, an overlap that is especially pronounced in incident reporting obligations and the governance of supply chains.
- Online Platforms and Search Engines: There may be interaction between obligations relating to transparency, risk assessments, and moderation of AI-generated content under the DSA and DMA, on the one hand, and the EU AI Act, on the other. The DMA’s provisions on data access, interoperability, and anti–self-preferencing could be relevant to AI APIs or foundation models offered by DMA-designated “gatekeepers.”
Recommendations
The Report notes that many of the obligations arising from the regulations listed above may be reasonably justified in isolation, but that their cumulative application may produce duplicative, inconsistent, or unclear requirements that deter uptake and delay the time to market of new digital and AI-based products.
Looking ahead, the Report encourages joint guidance and coordinated enforcement practices by EU regulatory authorities in the short term, such as mutual recognition of impact assessments.
In the medium term, the Report recommends “light-touch” legislative amendments to clarify key definitions, such as what constitutes a high-risk AI system, and to streamline overlapping obligations, such as those around cybersecurity and fundamental rights assessments. In the long term, the Report recommends further consolidation and simplification of the EU digital regulatory architecture.