Helping the Public Navigate the Role of AI in News

Challenging the assumption that the public simply needs information about AI’s role in news, this piece asks whether the public needs to assess that role at all, and if so, why, where, and when. Transparency in news-making is pivotal for trust.

By Felix M. Simon

Many observers believe that AI will fundamentally transform the news and our information landscape. A frequently asked question in this context — including as part of a recent CNTI Convening — is: What information does the public need to assess the role of AI in news and information?

I would like to challenge the question somewhat by complicating the underlying assumption (“The public needs information about the role of AI in news”).

This might well be true, but I think it is more fruitful to ask: Does the public need to be able to assess the role of AI in news and information, and if so, (1) why, (2) where/for what and (3) when?

Why? Because rephrasing the original question allows us to see the implicit assumption it makes more clearly. A different way of asking also allows us to get around firmly held convictions and hopefully arrive at more useful and correct answers. Let me explain.

From a normative point of view within the industry and academia (at least in journalism and communication studies), the broad answer to the original question has been “yes” and is contained in it. By asking “What information does the public need to…” one is essentially saying: The public should know and the news industry should be as transparent as possible about its AI use.

The reason is to be found in the past: Historically, a commitment to (some) transparency over “how the news gets made” belongs to journalism’s authoritative rituals — the routines that allow journalism to distinguish itself from other forms of media work, stress its commitment to the truth, and win and retain legitimacy and trust.

As publishing moved to digital formats, new players entered the public arena, and content became more ephemeral, calls for — and commitment to — transparency increased, the idea being that a “doubling down” would act as a safeguard for news. A recent example is the enhanced bylines of The New York Times.

This also plays out with renewed force with the introduction of AI systems in news work. First, because the normative commitment to transparency means “by default” that the public has a right to know how the news gets made and distributed (and therefore that AI is in the mix) — something that is reflected in many AI guidelines.

Second, because these systems can (or are expected to) shape the work of news organizations in unforeseen ways, can be inscrutable in their automated decision-making and can introduce outside logics into news work. The worry is that this will ultimately limit the autonomy of news organizations and journalists and compromise their ability to inform publics in an unbiased and error-free manner — in turn potentially further eroding audience trust in the news as well as the credibility of news content (with knock-on effects on subscriptions, news consumption, etc.). Here, transparency seems to be seen as a kind of “insurance policy” against these effects.

Third, because of the hope that negative public perceptions of AI writ large, which might predispose audiences against news (co-)created with AI, will dissolve (or at least diminish) if the news industry is transparent about its use of AI.

There are likely other reasons which I am missing here, but broadly speaking current thinking seems to coalesce around these points.

The problem with all this, however, is that the idea that more transparency will help address these issues under all circumstances is, for now, mostly a normative assumption. Empirically speaking, few studies to date have considered audience perceptions of automated, AI-informed news production, and the existing literature on these topics is scant — and inconsistent or observational in nature, with inevitable questions around temporal validity given such a fast-changing phenomenon.

In recent research led by Benjamin Toff, we found what we called “the paradox of AI disclosure for news.” Despite the normative assumption that labeling would increase trust, it did the opposite in our study (although effects were small). For our sample of US users, we found that, on average, audiences perceived news labeled as AI-generated as less trustworthy, not more — even when the articles themselves were not evaluated as any less accurate or more unfair. Furthermore, these effects were largely concentrated among those whose pre-existing levels of trust in news were higher to begin with and among those who exhibited higher levels of knowledge about journalism. We also found that the negative effects on perceived trustworthiness were largely counteracted when articles disclosed the list of sources used to generate the content.

However, this was early work with various limitations and only more work will be able to show if these effects hold (and under which conditions).

Going forward, we need to address several questions — ideally in collaborations between academics, publishers and platforms, as it can be difficult for academics to study these questions without access to audience data or real-world experiments (as opposed to lab settings).

  • How well do audiences understand how AI does/does not come to be used in the news? How does this perception differ from actual uses? And how do different levels of knowledge about AI (in news and/or in general) shape perceptions about its use in news (and with what effects)?
  • What kind of AI uses in the news production and distribution process are acceptable to (different) audiences and why, and which are not? 
  • How do, for example, transparency labels reflect this and how do audiences perceive different transparency labels? What does this in turn mean for when transparency labels need to be/do not need to be used?

One thing we should always bear in mind is the question of temporal validity: Current findings will be influenced by hype around AI and by (negative) public perceptions. This is likely to change over time as AI becomes more normalized and commoditized — which means that, in practice, any measure will need to be re-assessed and adapted continuously, too.

Finding an answer to the original question therefore strikes me as a matter of striking a trade-off between a normative commitment to transparency and the (as yet understudied) empirical reality of how such a commitment will be perceived and what effects it will have — in terms of trust and credibility perceptions, and willingness to consume or subscribe.

In other words: We already know that the normative answer to “What information does the public need to assess the role of AI in news & information?” is implicitly given by the way the question is phrased (Yes, the public needs information!). If we agree that it is worth sticking with this, this still leaves ample room for an evidence-based discussion about the conditions for such a commitment.

Felix M. Simon is a Communication Researcher and Dieter Schwarz Scholar at the Oxford Internet Institute, University of Oxford.