How can we ensure that mechanisms to stem disinformation aren’t used to restrict press independence or free speech?
LAST UPDATED: December 01, 2023
Publishers, platforms and policymakers share a responsibility to respond to growing concerns around disinformation. It is increasingly important to understand and navigate challenging trade-offs between curbing problematic content and protecting independent journalism and fundamental human rights. Efforts to stem misinformation must ensure that governments cannot determine the news that the public receives or serve as arbiters of truth or intent. Legislation should articulate high-level goals, recognize that initiatives in one country or online context inherently impact other contexts and delegate enforcement to independent bodies with clear structures for transparency and accountability.
The spread of false and misleading information is not a new problem, and it appears in many forms: online and offline, through public and private channels and across a variety of mediums. But the credibility of the information the public gets online has become a global concern amid growing digital media platforms (where false and misleading information spreads rapidly and is at times amplified), new technologies for digital manipulation, political upheavals in the Global North, coordinated election disinformation and hostile propaganda campaigns, problematic COVID-19 information, rampant denial of climate science and declining trust in institutions and news.
Of particular importance, and the focus of this primer, is the impact of disinformation – false information created or spread with the intention to deceive or harm – on electoral processes, political violence and information systems around the world.
Disinformation is distinct from but often used interchangeably with terms like misinformation (which is also false, but may be benign and can be spread without the intention of harm) or malinformation (which is also intentionally harmful, but is not necessarily false). With this in mind, CNTI chooses to use the term disinformation given its (1) falsity and (2) malicious intent. This primer addresses the opportunities and challenges that come with legislative policy responses to disinformation.
Disinformation campaigns around the world vary widely in scale and impact. Disinformation may come from people doctoring photos and creating memes that go viral, transnational actors and trolls seeking to sow distrust and confusion, and elected officials attempting to win elections, maintain power or incite hatred.
Governments around the world are taking action to curb disinformation: some with the goal of supporting an informed citizenry and others with the goal of undermining it. Actions taken by one entity can impact other countries when groups in one country learn from and model successful efforts elsewhere. Digital platforms and interest groups have also put in place content moderation processes to stem disinformation (though some platforms have begun to move away from these efforts), but these can vary by language and context.
Among the many well-intended legislative proposals to address disinformation, one overarching concern is that the vagueness of what constitutes disinformation (especially the difficulty of interpreting actors’ intent) can result in policy that controls the press and limits free expression. Even legislation aimed at supporting an informed citizenry can potentially lead to restrictions on both the news media and the general public within a country. Further, policies that target disinformation can easily serve as models for authoritarian regimes or antidemocratic actors to exploit. In these cases, the actual – and, at times, intended – effects are restriction of media freedom, censorship of opposing voices and control of free expression.
Thus, it is critical to balance the opportunities and risks of policy responses to the challenge of disinformation.
What Makes It Complex
The term itself is often interchanged with similarly opaque concepts such as misinformation, malinformation, information disorder and the particularly contested term “fake news.” Determining which content fits within each category is subject to disagreement and is often politicized. To the public, these terms are used to encompass anything from poor or sensationalized journalism and fabricated content to political propaganda and hyper-partisan content. As challenging as it may be, it is important to strive for a clear and consistent understanding of what disinformation is. Without such agreement, developing effective measures to support an informed citizenry and safeguard an independent press has the added challenge of needing to withstand differences among these labels.
Addressing disinformation is critical, but some regulative approaches can put press freedom and human rights at great risk.
Governments’ involvement in decisions around content can allow them to serve as the arbiters of what content appears online, with potentially dangerous consequences for an independent press and free expression. Legislation may intentionally target specific groups, including journalists, political opponents and activists, and it may include loopholes that allow for suppression or censorship. Legislation intended to support an informed public may also unintentionally stem speech protected under the basic human right to express ideas. Further, legislation that is effective in one context may introduce different risks in another. Prohibiting acts of expression, particularly using vague or undefined terms, can infringe upon international human rights laws. If measures against disinformation are selectively or unequally enforced — whether by police, regulatory bodies or technology companies — they can be used as a tool for crushing political dissent, impinging upon freedom of opinion and paving the way for illegal surveillance or self-censorship.
For example, when state actors apply blocks and bans to journalists they may harm both the freedom of the press and citizens’ basic right to access and share ideas and information, both of which impact people’s participation in public and political life.
It may not be possible to develop disinformation interventions that suit all digital contexts.
The various forms, contexts and audiences in the online space introduce different (and unequal) harms and risks. For instance, users of encrypted messaging apps such as WhatsApp or Telegram in India, Brazil and elsewhere have differing rights to and expectations of privacy and reach than users of Twitter or Reddit. Even within the same platform, structured spaces can vary from public to private. Dismantling encrypted spaces, in particular, does little to combat disinformation and discourages free expression. Another major challenge is in determining which platforms and which countries fall under the parameters of a piece of legislation. This is further complicated by the fact that U.S.-based technology companies’ levels of cultural expertise and engagement lessen the further they get from the U.S., which means the effectiveness of efforts to combat disinformation vary. Finally, disjointed efforts to combat online disinformation risk contributing to a fragmented internet, in which people’s online experiences vary by country or region. We address this issue in a separate issue primer.
State of Research
In recent years, as governments, platforms and funders turned their attention and investments toward policy and technical solutions to address mis- and disinformation (though this has started to wane), academic and media attention to the topic has dramatically increased. The research to date has produced helpful insights, including putting the scope of mis- and disinformation in context with other online content. The research field also has several shortcomings that reveal the need for a deeper and more global approach:
- Many studies of mechanisms to stem mis- and disinformation, such as fact-checking (which has leveled off), corrections, and media literacy initiatives, are robust in their methods but may be too piecemeal or too limited to lay out the full scope of the issue and what may or may not work to address it. Other interventions such as content labels, classifications, warnings or “pre-bunking” have proven to be counterproductive or inconsistently effective in practice. Even when strategies seem to work, it remains unclear whether the effects last.
- More broadly, drawbacks to existing research on disinformation include: concentration around a few wealthy democracies, disproportionate attention to the Global North and English-speaking communities, inconsistent concepts for and definitions of problematic information, limited understanding of the effects of media and media technologies, an overemphasis on social media platforms like Twitter and an underemphasis on the historical and political contexts of disinformation.
- Some experts openly caution against the outsized influence of the current research on policy and political responses to disinformation, which they say has contributed to problematic actions or policies that threaten press freedom and human rights.
Future work could provide the more systematic global research needed to design more effective measures against mis- and disinformation. This includes studying the scale and impact of mis- and disinformation in countries outside of the U.S. and in comparative contexts. For instance, it is unclear whether strategies proven to be effective in countries with higher education and literacy levels would also apply elsewhere. There is also a need for understanding the agents and infrastructures involved in the spread of mis- and disinformation online and offline, particularly when it comes to video and image-based content as well as messaging applications. Finally, more data and research are needed to understand the effects that laws against disinformation – and related government action against platforms – have on civil liberties.
Harvard Kennedy School Misinformation Review (2023)
Summary: This U.S.-focused research examines strategies for making misinformation interventions responsive to four communities of color, identifying opportunities for more equitable and effective efforts to combat mis- and disinformation.
CNTI’s Takeaway: These findings note the importance of disinformation policy and educational initiatives that account for language diversity, the nuances of ethnic media ecologies and historical contexts for historically marginalized groups.
International Development Research Center of Canada (2022)
Summary: The study of false and misleading information has been dominated by research in the global North. This project maps the actors, approaches, and research landscape in global South countries as of 2022 to identify opportunities for critical future work in these spaces.
CNTI’s Takeaway: This work suggests a range of countries in which to expand misinformation research across the Middle East, Latin America, Sub-Saharan Africa, and Asia, and reminds us that we must take into account the cultural and geopolitical contexts of misinformation in each country.
CAMRI Policy Briefs and Reports (2021)
Summary: These reports analyze misinformation regulations and media literacy initiatives in Sub-Saharan Africa as of 2021, identifying recommendations in each area to promote a healthier information environment.
CNTI’s Takeaway: Systematic research on the types, drivers, and effects of misinformation in sub-Saharan Africa is a necessary next step toward responding to misinformation in the region.
Reuters Institute for the Study of Journalism (2021)
Summary: This paper summarizes existing concepts and empirical research findings related to mis- and disinformation as of 2021, and identifies risks to free expression and press independence based on governments’ and platforms’ proposed and enacted responses to this content.
CNTI’s Takeaway: This helpful resource breaks down several potential evidence-based practical, legal, and platform responses to misinformation.
Internet Policy Review (2021)
Summary: This article argues the legal definitions used as the basis of the European Commission’s disinformation policy are too vague. Examining recent legislation in European Union member states shows huge discrepancies in national approaches to disinformation, often trending toward laws focused on criminalisation that are problematic for free expression.
CNTI’s Takeaway: This offers a useful critique of existing transnational disinformation policy as of 2021 and clearly describes the harmful consequences of vague language in these forms of legislative action.
Summary: Actions to combat disinformation should support, and not violate, free expression and independent journalism. Access to reliable and trustworthy information is a critical counter to disinformation.
CNTI’s Takeaway: This offers useful frameworks for (1) understanding the life cycle of misinformation and (2) assessing whether legislative policy responses to it also protect free expression.
The International Journal of Press/Politics (2020)
Summary: This research studies 18 Western democracies to identify what structural conditions make countries more or less likely to demonstrate high resilience to online misinformation.
CNTI’s Takeaway: A robust, independent press – including high media trust and news consumption levels and strong public service media – helps to mitigate the harms of online misinformation.
European Commission (2018)
Summary: In 2018, the European Commission established an expert group to advise on disinformation policy initiatives; this report summarizes the group’s review of best practices and proposes directions for future research.
CNTI’s Takeaway: Written by a collaborative group of 39 experts and leaders in technology, journalism, and policy, this report offers valuable best practices to consider when developing responses to disinformation.
State of Legislation
The global landscape around what legislators consider harmful content or disinformation is diverse, often complicated and reaches back centuries in some countries. Legislators’ treatment of disinformation has ranged from a desire to protect election integrity against domestic or foreign interference to obvious schemes to stifle political dissent. There has been a considerably greater effort to regulate what can be said and by whom in recent years, particularly in the wake of the COVID-19 pandemic. Efforts to respond to disinformation are critical, but policy must not set the stage for the dismantling of an independent press or an open internet. Specific areas of concern include:
- Both highly democratic countries and authoritarian regimes increasingly regulate online discourse. The latter regularly target critical voices under the banner of tackling disinformation, often by abusing a state of crisis or emergency to justify state censorship, and often without time limits. Measures may either have vague wording and broad scope, thus intentionally or unintentionally creating room for misuse, or may be too narrow in scope to effectively combat disinformation.
- Sanctions within the legislative framework include financial penalties, jail time, bandwidth restrictions, advertising bans and blocking, depending on whether they address individuals or companies. Several challenges remain, including how to enforce regulations across borders (if the source for disinformation comes from a wholly different legal environment) as well as how to prevent “chilling effects” and self-censorship in newsrooms for fear of punishment.
- Even non-authoritarian governments have vastly different approaches toward what they deem illegal content. In the EU, companies hosting others’ data are liable if, upon actual knowledge of it, they fail to act and remove illegal content. This fundamentally differs from existing immunities in countries like the U.S., likely due to the latter’s historical commitment to free speech. This legal patchwork creates further challenges for transnational corporations.
- Despite the wealth of evidence that disinformation flows both top-down and bottom-up, policy attempting to address top-down disinformation (e.g., from domestic politicians or celebrities and foreign governments) has largely been absent while there has been an overwhelming focus on bottom-up mis- and disinformation (e.g., within platforms and their users).
As disinformation receives growing attention from elected leaders and academics, there should be a similar focus on legislative attempts to face a rapidly changing information environment. As much as possible, efforts need to be rooted in a clear understanding of the actors and their differing roles as well as in protection for an independent press, freedom of expression and fundamental human rights. Both the inadequacies and best practices of existing global legislation must be discussed openly. Additionally, in rule-of-law countries, there is a need for more political and public awareness that legislation may be weaponized by authoritarian regimes, worsening already restrictive situations for human rights groups, political opposition and independent news media.
Australia’s parliament is drafting legislation that would give the Australian Communications and Media Authority (ACMA) the ability to fine digital platforms if they fail to remove mis- and disinformation. Currently, news publisher content is excluded from oversight. Concerns include how the bill will define mis- and disinformation, how and to what extent the ACMA would govern this process and what mechanisms will exist for appeals.
Brazil’s PL 2630 (known as the “Fake News” bill) failed a fast-track vote in April 2022 and stalled again in May 2023. Among other features, PL 2630 would require platforms to detect and remove illegal content. Concerns about the bill include its threat to free expression and potential to compensate creators of disinformation. Aggressive platform responses to shape public debate received international criticism, and experts noted concerns with the overall escalation of responses.
In May 2023, the Cyberspace Administration of China launched a campaign to close more than 100,000 online accounts that disseminate what the CAC referred to as “fake news,” including AI-generated impersonations of state-controlled media.
As of late 2023, the Croatian government is considering legislation that would make leaking information before and during a criminal trial illegal. Members of the Croatian press are concerned the proposed legislation could be used to target both whistleblowers and the journalists they contact. The president of Croatia and other government officials claim the proposal is aimed at protecting the validity of investigations and assuring the presumption of innocence.
Approved in November 2022, the EU Digital Services Act (DSA) package aims to update the existing EU legal framework and address issues such as illegal content, transparent advertising and disinformation. It seeks to harmonize various national laws created in recent years. Service providers will have until January 2024 to comply with its provisions.
In 2020, in the midst of the COVID-19 pandemic, Hungary’s parliament passed measures including jail terms for journalists and others spreading pandemic “misinformation.”
In 2020, in the midst of the COVID-19 pandemic, Nicaragua’s congress passed measures regulating what can be published on social media platforms and by news publishers, including jail terms for publishing “false and/or misrepresented information.” The legislation has been condemned as attempts to silence political opposition to President Daniel Ortega.
Papua New Guinea
Proposed in early 2023, the Papua New Guinea government’s draft national media development policy to promote “a professional, ethical, and responsible media industry” would require “media outlets” and journalists to apply for accreditation or a license. The policy notes that it is intended to address misinformation, among other goals, but offers a narrow window for stakeholder feedback and may undermine media independence and freedoms.
In January 2017, amendments to Russia’s № 208-FZ entered into force that made owners of news aggregators liable for spreading so-called “fake news.” Links to news items that originate verbatim from media outlets registered in Russia (and thus state-regulated) are exempt from liability. The law has effectively created a mechanism of media control through existing media regulation structures. Concerns include the amendments’ vague wording increasing state intervention and power.
Taiwan’s government, facing a barrage of foreign disinformation, has established government initiatives and partnerships with technology companies like Line to track and debunk disinformation. Policymakers have considered bills to curb disinformation on TikTok. In February 2023, proposed amendments to Taiwan’s defense mobilization act included setting penalties for spreading disinformation. Concerns include whether this could be used to silence domestic political dissent.
The U.K.’s proposed Online Safety Bill, a complicated and sweeping set of measures, is intended to counter a broad range of digital harms. Measures that would have forced social media platforms to remove “legal but harmful” content were cut after critics argued the legislation threatened free expression.
Resources & Events
Notable Articles & Statements
Korean president’s battle against ‘fake news’ alarms critics
The New York Times (November 2023)
Chilling legislation: Tracking the impact of “fake news” laws on press freedom internationally
Center for International Media Assistance (July 2023)
Most Americans favor restrictions on false information, violent content online
Pew Research Center (July 2023)
Twitter agrees to comply with tough EU disinformation laws
The Guardian (June 2023)
Regulating online platforms beyond the Marco Civil in Brazil: The controversial “fake news bill”
Tech Policy Press (May 2023)
What’s the key to regulating misinformation? Let’s start with a common language
Poynter (April 2023)
Policy reinforcements to counter information disorders in the African context
Research ICT Africa (February 2023)
Lessons from the global South on how to counter harmful information
Herman Wasserman (April 2022)
Why we need a global framework to regulate harm online
World Economic Forum (July 2021)
How well do laws to combat misinformation work?
Empirical Studies of Conflict Project, Princeton University (May 2021)
Rush to pass ‘fake news’ laws during Covid-19 intensifying global media freedom challenges
International Press Institute (October 2020)
Disinformation legislation and freedom of expression
UC Irvine Law Review (March 2020)
Story labels alone don’t increase trust
Center for Media Engagement (2019)
A human rights-based approach to disinformation
Global Partners Digital (October 2019)
Six key points from the EU Commission’s new report on disinformation
Clara Jiménez Cruz, Alexios Mantzarlis, Rasmus Kleis Nielsen, and Claire Wardle (March 2018)
Protecting democracy from online disinformation requires better algorithms, not censorship
Council on Foreign Relations (August 2017)
Key Institutions & Resources
Center for an Informed Public: University of Washington research center translating research about misinformation and disinformation into policy, technology design, curriculum development and public engagement.
Empirical Studies of Conflict (ESOC): Multi-university consortium that identifies global disinformation campaigns and their effects on worldwide democratic elections.
EU Disinfo Lab: Independent nonprofit organization gathering knowledge and expertise on disinformation in Europe.
First Draft: Offers training, research and tools on how to combat online mis- and disinformation.
Global Disinformation Index: Nonprofit organization aiming to provide transparent, independent neutral disinformation risk ratings across the open web.
Laws on Expression Online: Tracker and Analysis (LEXOTA): Coalition of civil society groups that launched an interactive tool to help track and analyze government responses to online disinformation across Sub-Saharan Africa.
LupaMundi: Interactive map presenting national laws to combat disinformation in several languages.
OECD DIS/MIS Resource Hub: Peer learning platform for sharing knowledge, data and analysis of government approaches to tackling mis- and disinformation.
PEN America: Nonprofit organization aiming to protect free expression in the United States and worldwide.
Poynter’s guide to anti-misinformation actions around the world: A global guide to 2018–2019 interventions against, and attempts to legislate against, online misinformation.
Social Science Research Council’s MediaWell: Collects and synthesizes research on topics such as targeted disinformation.
Francisco Brito Cruz, Executive Director, InternetLab
Patrícia Campos Mello, Editor-at-Large and Reporter, Folha de São Paulo
Joan Donovan, Former Research Director, Shorenstein Center on Media, Politics and Public Policy
Pedro Pamplona Henriques, Co-Founder, The Newsroom
Clara Jiménez Cruz, CEO, Maldita.es
Tanit Koch, Journalist, The New European
Vivek Krishnamurthy, Professor, University of Ottawa
Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism
Elsa Pilichowski, Director for Public Governance, OECD
Maria Ressa, CEO, Rappler
Anya Schiffrin, Director of Technology, Media, and Communications, Columbia University
Nabiha Syed, CEO, The Markup
Scott Timcke, Senior Research Associate, Research ICT Africa
Claire Wardle, Executive Director, First Draft
Herman Wasserman, Professor, University of Cape Town
Gavin Wilde, Senior Fellow, Carnegie Endowment for International Peace
Recent & Upcoming Events
Annual IDeaS Conference: Disinformation, Hate Speech, and Extremism Online
April 13-14, 2023 – Pittsburgh, Pennsylvania, USA
June 5–8, 2023 – San José, Costa Rica
Abraji International Congress of Investigative Journalism
Brazilian Association of Investigative Journalism (Abraji)
June 29–July 2, 2023 – São Paulo, Brazil
Cambridge Disinformation Summit
University of Cambridge
July 27–28, 2023 – Cambridge, United Kingdom
EU DisinfoLab 2023 Annual Conference
October 11–12, 2023 – Krakow, Poland
Issue primers have been reviewed at multiple stages by more than 20 global research and industry expert partners, including CNTI advisory committee members, representing five regions. We invite you to send us research, legislation and other resources. Read more about CNTI’s issue primer and other research quality standards.