Countries in Focus: European Union
Building upon the “Notable Legislation” section of CNTI’s issue primers, find more information here about recent and proposed legislation in key countries across our issue areas. These pages provide additional detail and context at the supranational or country level where there has been a high degree of legislative activity.
The EU’s law on artificial intelligence, the AI Act, is arguably the most developed AI law to date; its regulations assign AI applications to three risk categories. The law bans AI systems posing an “unacceptable” risk (i.e., systems that violate people’s fundamental rights and safety) and imposes heightened regulation on “high-risk” systems. Analyses of whether foundation model providers currently comply with these requirements have found they largely do not.
There is some discussion about the EU AI Act eventually serving as a global standard for AI policy. However, concerns about the act include a lack of clarity about what constitutes AI, a lack of flexibility in the regulation process and unintended legal implications for marginalized communities (e.g., migrants and refugees). The European Parliament voted to adopt the act in June 2023. Updates on its latest status can be found on the AI Act website.
Policymakers in the EU have begun addressing algorithmic transparency as part of the broader Digital Services Act (DSA), which entered into force in November 2022. In particular, the act requires platforms to provide more transparency around their content moderation and algorithmic content curation efforts, to take further steps to address harmful content and to disclose data to independent researchers. Very large platforms must also publish the primary parameters of their recommendation systems in their terms of service. In addition, the DSA requires companies to give users the right to opt out of recommendation algorithms based on personal data processing and instead engage with content not curated by profiling. Concerns about the act often focus on its feasibility at a cross-national scale: how regulators will monitor these standards and whether individual countries will follow through with enforcement.
The European Union’s sweeping 2018 data protection legislation, known as the GDPR, created new means of safeguarding citizens’ personal data but in doing so introduced new concerns around a “splinternet” experience, where one’s location determines what content is accessible. In particular, the GDPR’s ‘right to be forgotten’ risks making fact-based investigative journalism that includes personal data (such as #MeToo stories) subject to removal demands. (Between 2015 and 2021, Google and Bing received over 1 million ‘right to be forgotten’ requests.)
Like the United States, the European Union introduced exemptions in April 2022 for internet service providers to continue operating in Russia despite sanctions following the country’s invasion of Ukraine. The order came in response to warnings from civil society groups that restricting internet access by pulling EU-based software companies and internet providers from the country could risk further isolating Russian citizens and journalists. The order promoted an open flow of information for citizens and independent media.
Meanwhile, the EU’s 2022 ban of the Russian state media outlets RT and Sputnik, imposed over legitimate concerns about the spread of propaganda and disinformation, raised important questions about the risks of closing off the global information space and cutting off the ability of policymakers, journalists, activists and citizens to learn how state media may try to mislead or manipulate public opinion. Additionally, some experts question whether it sets a precedent for further legal media bans in Europe.