Addressing Disinformation

How can we ensure that mechanisms to stem disinformation aren’t used to restrict press independence or free speech?


Publishers, platforms and policymakers share a responsibility to respond to growing concerns around disinformation. It is increasingly important to understand and navigate challenging trade-offs between curbing problematic content and protecting independent journalism and fundamental human rights. Efforts to stem disinformation must ensure that governments cannot determine the news that the public receives or serve as arbiters of truth or intent.

Legislation should articulate high-level goals, recognize that initiatives in one country or online context inherently impact other contexts and delegate enforcement to independent bodies with clear structures for transparency and accountability.

The spread of false and misleading information is not a new problem, and it appears in many forms: online and offline, through public and private channels and across a variety of mediums. But the credibility of the information the public gets online has become a global concern amid growing digital media platforms (where false and misleading information is rapidly spread and, at times, amplified), new technologies for digital manipulation, political upheavals in the Global North, coordinated election disinformation and hostile propaganda campaigns, problematic COVID-19 information, rampant denial of climate science and declining trust in institutions and news.

Of particular importance, and the focus of this primer, is the impact of disinformation – false information created or spread with the intention to deceive or harm – on electoral processes, political violence and information systems around the world.

Disinformation is distinct from but often used interchangeably with terms like misinformation (which is also false, but may be benign and can be spread without the intention of harm) or malinformation (which is also intentionally harmful, but is not necessarily false). With this in mind, CNTI chooses to use the term disinformation given its (1) falsity and (2) malicious intent. This primer addresses the opportunities and challenges that come with legislative policy responses to disinformation.

The level and impact of disinformation campaigns around the world vary widely. Disinformation may come from people doctoring photos and creating memes that go viral, from transnational actors and trolls seeking to sow distrust and confusion, and from elected officials attempting to win elections, maintain power or incite hatred.

Governments around the world are taking action to curb disinformation: some with the goal of supporting an informed citizenry and others with the goal of undermining it. Actions taken by one entity can impact other countries when groups in one country learn from and model successful efforts elsewhere. Digital platforms and interest groups have also put in place content moderation processes to stem disinformation (though some platforms have begun to move away from these efforts), but these can vary by language and context.

Among the many well-intended legislative proposals to address disinformation, one overarching concern is that the vagueness of what constitutes disinformation (especially the difficulty of interpreting actors’ intent) can result in policy that controls the press and limits free expression. Even legislation aimed at supporting an informed citizenry can potentially lead to restrictions on both the news media and the general public within a country. Further, policies that target disinformation can easily serve as models for authoritarian regimes or antidemocratic actors to exploit. In these cases, the actual – and, at times, intended – effects are restriction of media freedom, censorship of opposing voices and control of free expression. 

Thus, it is critical to balance the opportunities and risks of policy responses to the challenge of disinformation.

A core challenge in addressing disinformation with policy is a lack of agreement on the definition of disinformation and what kinds of content constitute it.

The term itself is often interchanged with similarly opaque concepts such as misinformation, malinformation, information disorder and the particularly contested term “fake news.” Determining which content fits within each category is subject to disagreement and is often politicized. The public uses these terms to encompass anything from poor or sensationalized journalism and fabricated content to political propaganda and hyper-partisan content. As challenging as it may be, it is important to strive for a clear and consistent understanding of what disinformation is. Without such agreement, measures meant to support an informed citizenry and safeguard an independent press must also withstand differences among these labels.

Addressing disinformation is critical, but some regulatory approaches can put press freedom and human rights at great risk.

Governments’ involvement in decisions around content can allow them to serve as the arbiters of what content appears online, with potentially dangerous consequences for an independent press and free expression. Legislation may intentionally target specific groups, including journalists, political opponents and activists, and it may include loopholes that allow for suppression or censorship. Legislation intended to support an informed public may also unintentionally stem speech protected under the basic human right to express ideas. Further, legislation that is effective in one context may introduce different risks in another. Prohibiting acts of expression, particularly using vague or undefined terms, can infringe upon international human rights laws. If measures against disinformation are selectively or unequally enforced — whether by police, regulatory bodies or technology companies — they can be used as a tool for crushing political dissent, impinging upon freedom of opinion and paving the way for illegal surveillance or self-censorship.

Adopting measures such as blocks or bans to combat disinformation can allow state actors to exercise undue control over the flow of information and can isolate users from an open, global internet.

When state actors apply blocks and bans to journalists, for example, they may harm both the freedom of the press and citizens’ basic right to access and share ideas and information, both of which shape people’s participation in public and political life.

It may not be possible to develop disinformation interventions that suit all digital contexts.

The various forms, contexts and audiences in the online space introduce different (and unequal) harms and risks. For instance, users of encrypted messaging apps such as WhatsApp or Telegram in India, Brazil and elsewhere have differing rights to and expectations of privacy and reach than users of Twitter or Reddit. Even within the same platform, spaces can vary from public to private. Dismantling encrypted spaces, in particular, does little to combat disinformation and discourages free expression. Another major challenge is determining which platforms and which countries fall under the parameters of a piece of legislation. This is further complicated by the fact that U.S.-based technology companies’ levels of cultural expertise and engagement lessen the further they get from the U.S., which means the effectiveness of efforts to combat disinformation varies. Finally, disjointed efforts to combat online disinformation risk contributing to a fragmented internet, in which people’s online experiences vary by country or region. We address this issue in a separate issue primer.

In recent years, as governments, platforms and funders turned their attention and investments toward policy and technical solutions to address mis- and disinformation (though this has started to wane), academic and media attention to the topic has dramatically increased. The research to date has produced helpful insights, including putting the scope of mis- and disinformation in context with other online content. The research field also has several shortcomings that reveal the need for a deeper and more global approach:

Future work could provide the more systematic, global research needed to design more effective measures against mis- and disinformation. This includes studying the scale and impact of mis- and disinformation in countries outside of the U.S. and in comparative contexts. For instance, it is unclear whether strategies proven effective in countries with higher education and literacy levels would also apply elsewhere. There is also a need to understand the agents and infrastructures involved in the spread of mis- and disinformation online and offline, particularly when it comes to video and image-based content as well as messaging applications. Finally, more data and research are needed to understand the effects that laws against disinformation – and related government action against platforms – have on civil liberties.

The global landscape around what legislators consider harmful content or disinformation is diverse, often complicated and reaches back centuries in some countries. Legislators’ treatment of disinformation has ranged from a desire to protect election integrity against domestic or foreign interference to obvious schemes to stifle political dissent. There has been a considerably greater effort to regulate what can be said and by whom in recent years, particularly in the wake of the COVID-19 pandemic. Efforts to respond to disinformation are critical, but policy must not set the stage for the dismantling of an independent press or an open internet. Specific areas of concern include:

  • Both highly democratic countries and authoritarian regimes increasingly regulate online discourse. The latter regularly target critical voices under the banner of tackling disinformation, often by abusing a state of crisis or emergency to justify state censorship, and often without time limits. Measures may either have vague wording and broad scope, thus intentionally or unintentionally creating room for misuse, or may be too narrow in scope to effectively combat disinformation.

  • Sanctions within the legislative framework include financial penalties, jail time, bandwidth restrictions, advertising bans and blocking, depending on whether they address individuals or companies. Several challenges remain, including how to enforce regulations across borders (if the source for disinformation comes from a wholly different legal environment) as well as how to prevent “chilling effects” and self-censorship in newsrooms for fear of punishment.

  • Even non-authoritarian governments have vastly different approaches toward what they deem illegal content. In the EU, companies hosting others’ data are liable if, upon actual knowledge of illegal content, they fail to act and remove it. This fundamentally differs from existing immunities in countries like the U.S., likely rooted in the U.S.’s historical commitment to free speech. This legal patchwork creates further challenges for transnational corporations.

  • Despite the wealth of evidence that disinformation flows both top-down and bottom-up, policy attempting to address top-down disinformation (e.g., from domestic politicians or celebrities and foreign governments) has largely been absent while there has been an overwhelming focus on bottom-up mis- and disinformation (e.g., within platforms and their users).

As disinformation receives growing attention by elected leaders and academics, there should be a similar focus on legislative attempts to face a rapidly changing information environment. As much as possible, efforts need to be rooted in a clear understanding of the actors and their differing roles as well as in protection for an independent press, freedom of expression and fundamental human rights. Both the inadequacies and best practices of existing global legislation must be discussed openly. Additionally, in rule-of-law countries, there is a need for more political and public awareness that legislation may be weaponized by authoritarian regimes, worsening already restrictive situations for human rights groups, political opposition and independent news media.


Notable Articles & Statements

Key Institutions & Resources

Notable Voices

Recent & Upcoming Events

Korean president’s battle against ‘fake news’ alarms critics
The New York Times (November 2023)

Chilling legislation: Tracking the impact of “fake news” laws on press freedom internationally
Center for International Media Assistance (July 2023)

Most Americans favor restrictions on false information, violent content online
Pew Research Center (July 2023)

Twitter agrees to comply with tough EU disinformation laws
The Guardian (June 2023)

Regulating online platforms beyond the Marco Civil in Brazil: The controversial “fake news bill”
Tech Policy Press (May 2023)

What’s the key to regulating misinformation? Let’s start with a common language
Poynter (April 2023)

Policy reinforcements to counter information disorders in the African context
Research ICT Africa (February 2023)

Lessons from the global South on how to counter harmful information
Herman Wasserman (April 2022)

Why we need a global framework to regulate harm online
World Economic Forum (July 2021)

How well do laws to combat misinformation work?
Empirical Studies of Conflict Project, Princeton University (May 2021)

Rush to pass ‘fake news’ laws during Covid-19 intensifying global media freedom challenges
International Press Institute (October 2020)

Disinformation legislation and freedom of expression
UC Irvine Law Review (March 2020)

Story labels alone don’t increase trust
Center for Media Engagement (2019)

A human rights-based approach to disinformation
Global Partners Digital (October 2019)

Six key points from the EU Commission’s new report on disinformation
Clara Jiménez Cruz, Alexios Mantzarlis, Rasmus Kleis Nielsen, and Claire Wardle (March 2018)

Protecting democracy from online disinformation requires better algorithms, not censorship
Council on Foreign Relations (August 2017)

Center for an Informed Public: University of Washington research center translating research about misinformation and disinformation into policy, technology design, curriculum development and public engagement.

Empirical Studies of Conflict (ESOC): Multi-university consortium that identifies global disinformation campaigns and their effects on worldwide democratic elections.

EU Disinfo Lab: Independent nonprofit organization gathering knowledge and expertise on disinformation in Europe.

First Draft: Offers training, research and tools on how to combat online mis- and disinformation.

Global Disinformation Index: Nonprofit organization aiming to provide transparent, independent and neutral disinformation risk ratings across the open web.

International Press Institute (IPI): Monitored media freedom violations, including policies or legislation passed against online misinformation, throughout the COVID-19 pandemic.

Laws on Expression Online: Tracker and Analysis (LEXOTA): Coalition of civil society groups that launched an interactive tool to help track and analyze government responses to online disinformation across Sub-Saharan Africa.

LupaMundi: Interactive map presenting national laws to combat disinformation in several languages.

OECD DIS/MIS Resource Hub: Peer learning platform for sharing knowledge, data and analysis of government approaches to tackling mis- and disinformation.

PEN America: Nonprofit organization aiming to protect free expression in the United States and worldwide.

Poynter’s guide to anti-misinformation actions around the world: A global guide to 2018–2019 interventions and attempts to legislate against online misinformation.

Social Science Research Council’s MediaWell: Collects and synthesizes research on topics such as targeted disinformation.

Francisco Brito Cruz, Executive Director, InternetLab

Patrícia Campos Mello, Editor-at-Large and Reporter, Folha de São Paulo

Joan Donovan, Former Research Director, Shorenstein Center on Media, Politics and Public Policy

Pedro Pamplona Henriques, Co-Founder, The Newsroom

Clara Jiménez Cruz, CEO, Maldita.es

Tanit Koch, Journalist, The New European 

Vivek Krishnamurthy, Professor, University of Ottawa

Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism

Elsa Pilichowski, Director for Public Governance, OECD

Maria Ressa, CEO, Rappler

Anya Schiffrin, Director of Technology, Media, and Communications, Columbia University

Nabiha Syed, CEO, The Markup

Scott Timcke, Senior Research Associate, Research ICT Africa

Claire Wardle, Executive Director, First Draft 

Herman Wasserman, Professor, University of Cape Town

Gavin Wilde, Senior Fellow, Carnegie Endowment for International Peace

Annual IDeaS Conference: Disinformation, Hate Speech, and Extremism Online
IDeaS
April 13-14, 2023 – Pittsburgh, Pennsylvania, USA

RightsCon
Access Now
June 5–8, 2023 – San José, Costa Rica

Abraji International Congress of Investigative Journalism
Brazilian Association of Investigative Journalism (Abraji)
June 29–July 2, 2023 – São Paulo, Brazil 

Cambridge Disinformation Summit
University of Cambridge
July 27–28, 2023 – Cambridge, United Kingdom

EU DisinfoLab 2023 Annual Conference
EU DisinfoLab
October 11–12, 2023 – Krakow, Poland
