Enhancing Algorithmic Transparency

How can public policy enhance algorithmic transparency and accountability while protecting against political or commercial manipulation?


Digital platforms have become central to how people around the world find and share news and information. Currently, each platform operates under its own rules and conventions, including what content is shown and prioritized by algorithmic infrastructures and internal company policies. Establishing legal and organizational policy to promote algorithmic transparency is one critical step toward a structure that allows for more accountability for often-opaque digital platform practices related to content selection and moderation processes. Various stakeholders – including policymakers, researchers, journalists and the public – often have different purposes and uses for transparency, which must be thought through to ensure transparency serves those needs while protecting against the risks of political or commercial manipulation. These considerations also extend to who is designated to regulate transparency requirements.

The internet is enmeshed in virtually every aspect of modern life. Most often, the public’s online experiences flow through digital platforms, which serve as intermediaries between the content being delivered and the individuals consuming it. That provides convenience but also raises serious concerns. A central one is how much control digital platforms have, or should have, over what people see, especially when it comes to news and information, and how transparent digital platforms’ decision-making processes, typically conducted through the use of algorithms, need to be.

Each digital platform, and its algorithmic infrastructure, operates under its own rules and conventions to determine which content is and is not allowed as well as what content is prioritized on the platform. Algorithmic choices have far-reaching implications, ranging from determining user experiences by selecting one news story over another to facilitating hate speech, disinformation and violence.

Policymakers, journalists, social media researchers, civil society organizations, private corporations and members of the public are raising awareness of, and questions about, often-opaque digital platform practices related to content selection and moderation processes. The global news industry in particular – which relies to a significant degree on digital intermediaries to reach audiences – has articulated the need for a better understanding of how algorithms rank, boost, restrict and recommend content (including, but not limited to, news) and target consumers. Without a better understanding of how these algorithms are currently built, it is difficult for publishers to work within their constraints to reach audiences or to play any role in determining how algorithms should be built to promote an informed public.
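To make the concern concrete, the toy sketch below ranks stories by a weighted mix of signals. The signals, weights and posts are entirely hypothetical – they are not drawn from any platform’s actual system – but they illustrate why publishers and researchers ask for disclosure of which signals are used and how they are weighted: small, undisclosed changes to the weights determine which stories surface.

```python
# Purely illustrative toy ranking sketch. The signals, weights and posts here are
# hypothetical and do not reflect any platform's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    predicted_engagement: float    # e.g., modeled click/share likelihood
    source_reliability: float      # internal quality score, opaque to publishers
    hours_since_publication: float


def rank(posts, w_engagement=0.7, w_reliability=0.2, w_recency=0.1):
    """Order posts by a weighted score. Because the weights are undisclosed,
    publishers cannot tell which of these signals actually drives visibility."""
    def score(p):
        recency = 1.0 / (1.0 + p.hours_since_publication)
        return (w_engagement * p.predicted_engagement
                + w_reliability * p.source_reliability
                + w_recency * recency)
    return sorted(posts, key=score, reverse=True)


posts = [
    Post("Investigative report", predicted_engagement=0.3, source_reliability=0.9,
         hours_since_publication=6),
    Post("Viral clickbait", predicted_engagement=0.9, source_reliability=0.2,
         hours_since_publication=1),
]

# Engagement-heavy weights surface the clickbait first; shifting weight toward
# reliability flips the order, without any change visible to outside observers.
print([p.title for p in rank(posts)])
print([p.title for p in rank(posts, w_engagement=0.2, w_reliability=0.7)])
```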

Thus, in addition to ensuring that algorithms are designed to identify fact-based, independent journalism (a topic CNTI addresses in a separate issue primer), it is critical to enhance transparency in and accountability for how digital platform algorithms function.

The key challenge is that regulating or mandating transparency is more complex than it appears, with many elements that need to be thought through. What forms of transparency are necessary and most likely to be effective for protecting and promoting access to high-quality, diverse and independent news? What does transparency mean to policymakers, journalists, researchers and the public? How can we enable algorithmic transparency in ways that protect against political and commercial manipulation or abuse and ensure user privacy? What are its limitations? What systems or entities should be beholden to these levels of transparency? And who should be empowered to set these rules? 

Addressing the questions introduced in this primer is a critical first step toward developing approaches to algorithmic accountability that support an independent, competitive press and an informed society. Policymakers and other stakeholders can benefit from a more nuanced understanding of what transparency can, and cannot, accomplish as well as what new risks could be introduced. Policymakers must be forward-thinking about how and where algorithms may be used in the future, what data could be exposed and who could ultimately gain access to it as a result of increased transparency.

One hurdle in creating valuable algorithmic transparency is ensuring that those on the receiving end are armed with the necessary knowledge to assess these processes.

A good deal of tension has arisen over how technology companies have responded to requests for independent researcher, journalist and public access to algorithmic processes or their digital trace data. Transparency policies must take into account each stakeholder’s differing needs, aims and interests and consider what forms of transparency are necessary and appropriate for each. Because algorithmic processes are complicated and often rely on technical jargon, there is a risk that, even with increased transparency, a lack of understanding of these processes will endure for some stakeholders.

“Open-source” transparency is not enough to achieve an understanding of algorithmic processes or solve the problems of platform accountability, for two reasons. First, some tech leaders have suggested “open-source” algorithms as an answer to transparency and accountability issues, but understanding their complexities requires a level of technical knowledge – and the time and capacity to learn the intricacies of data governance and content moderation – that most stakeholders do not possess. Second, context is critical to learning how algorithms work; without a clear understanding of the internal policies dictating algorithmic behaviors or access to their underlying training data, raw code and information are not interpretable.
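The hypothetical snippet below illustrates the second point. Even if a platform published scoring code like this, the code alone would not reveal what the input features measure, how the weights were learned or why an internal demotion threshold sits where it does; none of those details live in the code itself.

```python
# Hypothetical illustration of why published code alone is not interpretable.
# The feature vector, weights and threshold are placeholders; in practice they
# would come from a proprietary model trained on undisclosed data and from
# internal, unpublished policy decisions.
import numpy as np

learned_weights = np.array([0.42, -1.3, 0.08, 2.1])  # what each feature means is not in the code
POLICY_DEMOTION_THRESHOLD = 0.35                      # set by an internal policy, not explained here


def visibility_score(features: np.ndarray) -> float:
    """Logistic score over opaque features: the math is visible, but the meaning
    of the features, how the weights were trained and why the threshold sits
    where it does are not."""
    z = float(features @ learned_weights)
    return float(1.0 / (1.0 + np.exp(-z)))


def is_demoted(features: np.ndarray) -> bool:
    return visibility_score(features) < POLICY_DEMOTION_THRESHOLD


# Without documentation of the features and policies, this output is uninterpretable.
print(is_demoted(np.array([0.1, 0.9, 0.3, 0.05])))
```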

Determining the parameters of transparency is challenging to do in a single policy.

There is broad agreement that transparency is a critical component of digital platform governance and content moderation, and that “more transparency is better,” but it is not always clear what and where transparency is being sought. Collaboration among experts is critical to addressing the multifaceted impacts transparency policy may have. Further, the aim of transparency differs in market-based versus state-led models, with the former aiming to empower users and the latter aiming to empower regulators. Policymakers, researchers, the news media and civil society must work together to weigh stakeholders’ needs against platforms’ capabilities and consider what regulatory structures will serve to enforce such legislation. Finally, as we address in a separate issue primer, addressing transparency on its own does not resolve disagreements over how to ensure people see fact-based, independent news content on digital platforms, nor does it address the choices individuals themselves make about what they click on or what kind of information they seek out.

Digital platform regulation needs to protect against political and commercial manipulation as a consequence of algorithmic transparency.

As discussed in our issue primer on addressing disinformation, algorithmic transparency (and researcher access to digital trace data) can both help experts understand the spread of online disinformation and how to counter it and, at the same time, introduce new risks for manipulation and abuse. Governments in many parts of the world increasingly pressure platforms to make content decisions that benefit their own political interests. There is also the risk that commercial actors or institutions manipulate algorithms to exploit virality, amplifying large quantities of clickbait and problematic content over fact-based, independent information – particularly by using new technologies like generative AI. Both introduce threats to an independent press and an informed public. Policy must safeguard against these risks and insulate regulation from political and commercial manipulation, both by ensuring oversight remains independent from governments and by separating regulatory bodies from direct involvement in content moderation decisions.

Platforms’ commercial structures are real factors to be considered, both in the way they can negatively drive internal decisions and in the value they bring to our digital environment.

Technology companies are private business entities that operate in a competitive marketplace with a certain level of value placed on their proprietary information and intellectual property. The current concealed nature of algorithmic selection and ranking, however, risks commercial incentives negatively affecting the work of journalists and the content the public receives. Crafting the best transparency policies will require a balance between these two elements, including consideration of potential unintended consequences for innovation or competition, such as deterring startups or smaller players from participation due to costs associated with adhering to regulation.

Debates around transparency introduce important opportunities and risks for the relationships between governments and platforms.

This is particularly important when it comes to who oversees or regulates algorithmic transparency and what can be asked of platforms when it comes to content moderation. Legislation that does not address these critical questions can threaten an independent press and freedom of expression (for instance, via “jawboning” practices or other legal demands of platforms). This is a consistent complexity across many of CNTI’s issue areas and speaks to the importance of considering the balance between legislative and organizational policy (including government-enforced organizational policy) in addressing these challenges.

Policy must balance transparency and accountability against user privacy concerns when applicable.

Digital platforms’ efforts to improve transparency, when they exist at all, are at times negotiated within the context of user privacy and data security concerns. While this does not affect all approaches to algorithmic transparency, questions about what user data digital platforms would be obligated to share and what personal information would be contained in that data will inevitably lead to trade-offs between digital platform transparency and user privacy.
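One way such trade-offs are often handled in practice is data minimization before any sharing takes place. The sketch below is a hedged, simplified illustration (the field names and pipeline are hypothetical): direct identifiers are dropped, timestamps are coarsened and researchers receive only aggregate counts rather than user-level traces. Real data-sharing programs layer further protections such as access controls, differential privacy and contractual limits.

```python
# Hedged sketch of data minimization before researcher access. Field names and
# the pipeline are hypothetical; real programs add further safeguards
# (access controls, differential privacy, contractual limits).
from collections import Counter
from datetime import datetime

raw_events = [
    {"user_id": "u-10293", "ip": "203.0.113.7", "url": "news.example/story-a",
     "timestamp": "2023-06-01T14:23:11"},
    {"user_id": "u-55821", "ip": "198.51.100.4", "url": "news.example/story-a",
     "timestamp": "2023-06-01T16:02:45"},
]


def minimize(event):
    """Drop direct identifiers (user_id, ip) and coarsen the timestamp to a day."""
    day = datetime.fromisoformat(event["timestamp"]).date().isoformat()
    return (event["url"], day)


# Researchers receive per-URL, per-day counts rather than user-level traces.
shared_counts = Counter(minimize(e) for e in raw_events)
print(shared_counts)  # Counter({('news.example/story-a', '2023-06-01'): 2})
```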

Over the past decade, questions surrounding digital platform governance have moved to the forefront of political, legal and academic debates. Recently, a breadth of interdisciplinary research has focused on social media platforms and messaging apps central to global journalistic work as well as on the broader range of software companies and sharing or gig economy apps central to contemporary online commerce. 

Much of this research fits into two categories: governance of social media and governance by social media. Research focused specifically on digital platform algorithmic transparency represents one small segment of work in this field. 

To date, this research has been far more theoretical than empirical, largely due to a lack of data (as we note in greater detail below) and the slow-moving nature of legislative policy, but it still sheds light on the varying forms of transparency different stakeholders expect. For instance, public transparency via disclosures looks different from, and accomplishes different aims than, research transparency via data access. 

Research findings have also noted the limitations of algorithmic transparency alone as a means of establishing digital platform accountability as well as the challenges various legislative efforts face in addressing algorithmic transparency, ranging from potential free expression violations to individual privacy infringements. These limitations have led some experts to call for forms of digital platform accountability beyond algorithmic transparency. 

Collectively, this work also reveals how little has changed, both via research and policy, in almost a decade of calls for algorithmic accountability.

Looking forward, there is much that we do not yet know about digital platforms’ algorithmic infrastructures. Research in this area is critical to inform current debates about internet governance. Experts have called for independent researchers and civil society to be granted more access to the inner workings of digital platform technology companies, the algorithms they develop and the trace data they collect – particularly as digital platforms steadily move away from more open models of free data sharing through their APIs. Of course, what access to give to whom is itself part of the debate about transparency. Still, access, when it has the appropriate safeguards, can help to better inform policy-making and contribute to democratic processes as a form of checks and balances on government and corporate power.

Content moderation is the element of transparency that has, to date, received the most policy attention. A wide range of policy initiatives around the world have begun, albeit slowly, to regulate content moderation processes and/or government takedown requests on digital platforms, which introduce opportunities and risks. We address some of these in a separate issue primer.

Legislative policy around addressing algorithmic transparency and accountability more broadly is still nascent. While some technology companies have taken steps internally to improve organizational transparency, governments are beginning to consider policy initiatives that require specific forms of platform accountability, with a focus on (1) supporting research access to data, (2) protecting user data privacy and (3) disclosing certain content moderation practices. 

Some experts have cautioned against the risk that poorly designed transparency laws could become a mechanism for state intervention in and control over platforms’ editorial policies. In contexts like the U.S. with strong constitutional protections of free expression, it is unclear whether the government can mandate transparency. 

The establishment of international ethical frameworks for platform accountability is complicated by the fact that most institutional structures for oversight, such as review boards or ethics committees, vary greatly by country. Nonetheless, policymakers, particularly in Europe and the U.S., have called for international cooperation when it comes to facilitating transparency and access to cross-platform research. Experts have also called for multi-stakeholder and (supra-)national legislative efforts to govern digital spaces.

Policymakers also need to consider that a legislative framework in any one country, even if not intended to, has the potential to influence regulation globally. For example, a rule-of-law-based approach, while effective in stable democracies, could later serve as a blueprint for suppression elsewhere.

Notable Articles & Statements

YouTube launches new watch page that only shows videos from “authoritative” news sources
Nieman Lab (October 2023)

X changes its public interest policy to redefine ‘newsworthiness’ of posts
TechCrunch (October 2023)

Platform Accountability and Transparency Act reintroduced in Senate
Tech Policy Press (June 2023)

Google opposed a shareholder proposal asking for more transparency around its AI algorithms
Business Insider (June 2023)

Meta explains how AI influences what we see on Facebook and Instagram
The Verge (June 2023)

Beyond Section 230: Three paths to making the big tech platforms more transparent and accountable
Nieman Lab (January 2023)

Declaration of principles for content and platform governance in times of crisis
AccessNow (November 2022)

“This is transparency to me”: User insights into recommendation algorithm reporting
Center for Democracy & Technology (October 2022)

Frenemies: Global approaches to rebalance the Big Tech v. journalism relationship
Techtank/The Brookings Institution (August 2022)

How social media regulation could affect the press
Committee to Protect Journalists (January 2022)

If Big Tech has the will, here are ways research shows self-regulation can work
The Conversation (February 2021)

Competition issues concerning news media and digital platforms
Organisation for Economic Co-operation and Development (December 2021)

Why am I seeing this? How video and e-commerce platforms use recommendation systems to shape user experiences
Open Technology Institute (March 2020)

No more magic algorithms: Cultural policy in an era of discoverability
Data & Society (May 2016)

Key Institutions & Resources

Africa Freedom of Information Centre (AFIC): Pan-African, membership-based civil society network and resource center promoting the right of access to information, transparency and accountability across Africa.

Algorithmic Impact Methods Lab: Data & Society lab advancing assessments of algorithmic systems in the public interest.

AlgorithmWatch: Research and advocacy organization committed to analyzing automated decision-making systems and their impact on society.

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S): Cross-disciplinary national research center for responsible automated decision-making.

Center for Democracy & Technology: Nonprofit organization aiming to promote solutions for internet policy challenges.

Centre for Media Pluralism and Media Freedom (CMPF): European University Institute research and training center, co-financed by the European Union.

European Centre for Algorithmic Transparency: European Commission center aiming to contribute to a safer online environment and to support oversight of the Digital Services Act.

iWatch Africa: Non-governmental media and policy organization tracking digital rights in Africa, including data governance.

Jordan Open Source Association: Nonprofit organization aiming to promote openness in technology and to defend the rights of technology users in Jordan.

Karlsruhe Institute of Technology (KIT): Research and education facility seeking to develop industrial applications via real-world laboratories.

Notable Voices

Chinmayi Arun, Executive Director, Yale Information Society Project

Susan Athey, Economics of Technology Professor, Stanford University

Emily Bell, Director, Tow Center for Digital Journalism, Columbia Journalism School

Guy Berger, Former Director of Policies & Strategies in Communication and Information, UNESCO

Neil W. Netanel, Professor of Law, University of California – Los Angeles

Rasmus Kleis Nielsen, Director, Reuters Institute for the Study of Journalism

Matthias Spielkamp, Executive Director, AlgorithmWatch

Recent & Upcoming Events

Abraji International Congress of Investigative Journalism
Brazilian Association of Investigative Journalism (Abraji)
June 29–July 2, 2023 – São Paulo, Brazil 

RightsCon
Access Now
June 5–8, 2023 – San José, Costa Rica

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Association for Computing Machinery
2024 – TBD
