If, When and How to Communicate Journalistic Uses of AI to the Public

Conclusions of a Day-Long Discussion among Global Cross-Industry Experts


“When I hear questions about why, why should we be labeling? My question is, why not? … I think people want to be let in on the process of how journalism works, how news works, and I don’t necessarily see the harm in being more open about how we are doing this work so that they feel more comfortable,” remarked one participant at a Center for News, Technology & Innovation (CNTI) event on communicating uses of AI in journalism to the public, the third in a series on AI in Journalism.

Others argued there should be a clear reason for the label. “We put labels on things and we look at labels on things because we’re concerned about how they’re going to affect us as human beings.” Labels without clear purpose or nuance could result in people ignoring the ones that really matter. 

Labels applied without a clear understanding of how the public will receive them could also backfire or be “very misleading,” others suggested, especially generic one-size-fits-all labels, as evidenced in research cited during CNTI’s second AI convening.

At the same time, multiple research studies reinforce the public’s desire for transparency from journalists when it comes to uses of AI in their work. What is the right path when it comes to AI labels? How do we decide when to apply them and to which elements? 

Participants

  • Akintunde Babatunde, Centre for Journalism Innovation and Development
  • Ludovic Blecher, IDation
  • Alexandra Borchardt, Independent Researcher
  • Madhav Chinnappa, Human Native AI
  • Kristen Davis, CinqC
  • Wahyu Dhyatmika, Tempo.co
  • Craig Forman, NextNews Ventures, CNTI Board Chair (host)
  • Richard Gingras, Google
  • Natali Helberger, University of Amsterdam; AI, Media and Democracy Lab
  • Jeff Jarvis, Stony Brook University; Montclair State University
  • Gábor Kardos, Magyar Jeti Zrt.
  • Lexie Kirkconnell-Kawana, Impress
  • Tanit Koch, The New European (moderator)
  • Verena Krawarik, Austria Presse Agentur
  • Caro Kriel, Thomson Foundation (host)
  • Claire Leibowicz, Partnership on AI; University of Oxford
  • Irene Jay Liu, International Fund for Public Interest Media
  • Helena Martins, Google
  • Karen Mcluskie, Department for Business and Trade (U.K.)
  • Amy Mitchell, CNTI (host)
  • Sophie Morosoli, University of Amsterdam; AI, Media and Democracy Lab
  • Dan Nechita, Transatlantic Policy Network
  • Claire Pershan, The Mozilla Foundation
  • Varun Shetty, OpenAI
  • Felix Simon, Reuters Institute; Oxford Internet Institute 
  • Krishna Sood, Microsoft
  • Anastasia Stasenko, pleias
  • Agnes Stenbom, Schibsted
  • Rayan Temara, Lie Detectors
  • Federica Varalda, Thomson Foundation
  • Lynn Walsh, Trusting News

For more details, see the Appendix.

CNTI set out to explore these questions in our third convening on Artificial Intelligence (AI) in Journalism, this one co-sponsored by Thomson Foundation and held on October 30th in Brussels, Belgium. 

The answers are not easy, but they are important. This day-long discussion among journalists, current and former members of European governments, researchers, technologists, civil society experts, and philanthropists outlined critical components for success and other considerations that help pave a way forward.

The session, held under a modified Chatham House Rule, continued the tone and style that began with CNTI’s inaugural convening in October 2023 (“Defining AI in News”), followed by the second in February 2024 (“Watermarks are Just One of Many Tools Needed”). CNTI uses research as the foundation for these collaborative, solutions-oriented conversations among thought leaders who may not agree on all the elements, but who all care about finding strategies to safeguard an independent, diverse news media and public access to a plurality of fact-based news.

  1. Before Determining Communication Methods, We First Need to Establish Categories for Types of AI Use
  2. Meaningful Communication Relies on Journalists First Being Skilled in and Comfortable with Technology
  3. We Can Learn a Lot about AI Use and Labeling from Other Industries 
  4. Journalists’ Reporting about AI — and Technology More Broadly — Impacts Public Perception and Understanding
  5. The Research to Date Identifies a Consistent Desire for Transparency, but What Transparency Should Look Like is Less Clear
  6. Visuals, Colors, Words: The Details of the Design Matter, and May Need to be Localized
  7. All in All, This is a Time of Seismic Societal Change in Journalism and Beyond 

This was the third in a series of CNTI convenings on enabling the benefits while managing the harms of AI in Journalism. Stay tuned for details about CNTI’s upcoming AI convenings.

To kick things off, participants worked on a general framework for the various ways AI is — or could be — used in journalism. Four main categories emerged. Because the public interacts with each of these aspects differently, what — if anything — gets communicated about the use of AI should be considered for each context. 

  1. Operations: How news organizations implement AI systems and technologies in their internal processes and workflow. As Karen Mcluskie described it, “streamlining the workflow of pretty much anybody connected to the newsroom. It can be used for business intelligence and other things that run the business of news.”
  2. Distribution: The platforms and processes that bring content to an organization’s audience and to the public more broadly. For example, personalizing content and providing recommendations for users based on their prior behavior.
  3. Editorial and Contextual: Uses of AI that support the editorial process, such as helping with story-writing or internal AI chatbots that cull through the newsroom’s archives to add historical context. Sometimes these uses result in public-facing AI-generated content, but not always. 
  4. Investigation: Uses of AI that help journalists examine large amounts of data, cull through historical documents or identify an image or a quote.

In thinking about each context, it is valuable to ask whether an AI use is wholly new or additive. Was a form of this practice already being done using pre-AI — or pre-GenAI — technology? Is it a use of technology that the public is already familiar with or used to? If so, how does that affect how we define responsible communication?

Another aspect to evaluate is the level of editorial and human oversight involved. As Verena Krawarik shared, “If the automation level is very, very low … like spell check … this is nothing we have to disclose,” but, on the other hand, “if you are up for a totally automatic process and pushing content out without editorial human oversight at the end, please label it.” 

Finally, it is important to realize that journalistic uses — even when widely used and understood by journalists — may be quite different from the public’s knowledge and/or experiences with AI. How these uses are explained to the public becomes even more important and may require additional description and articulation. Automated production of video formats, transcription of audio and large language model (LLM) summaries of stories are just a few examples of AI uses specific to journalism.
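
To make these considerations concrete, here is a minimal, purely illustrative sketch of how a newsroom might encode a given AI use (its category from the framework above, its degree of automation and whether a human reviewed the output) and apply a simple disclosure rule of thumb along the lines Verena Krawarik described. The field names, thresholds and the rule itself are hypothetical assumptions for illustration, not anything proposed at the convening.

```python
from dataclasses import dataclass
from enum import Enum


class UseCategory(Enum):
    OPERATIONS = "operations"
    DISTRIBUTION = "distribution"
    EDITORIAL = "editorial"          # editorial and contextual uses
    INVESTIGATION = "investigation"


@dataclass
class AIUse:
    category: UseCategory
    automation_level: int   # 0 = spell-check-style assistance ... 5 = fully automated
    human_review: bool      # was there editorial oversight before publication?
    public_facing: bool     # does the output reach the audience directly?


def needs_disclosure(use: AIUse) -> bool:
    """Hypothetical rule of thumb: disclose public-facing, highly automated uses
    that lack human review; skip low-level assistive uses such as spell check."""
    if not use.public_facing:
        return False
    if use.automation_level <= 1:
        return False
    return use.automation_level >= 4 or not use.human_review


# Example: a fully automated, unreviewed public-facing summary would be labeled.
print(needs_disclosure(AIUse(UseCategory.EDITORIAL, 5, False, True)))   # True
# Example: spell check used inside the newsroom would not.
print(needs_disclosure(AIUse(UseCategory.OPERATIONS, 1, True, False)))  # False
```

Any real newsroom policy would, of course, involve more dimensions and judgment than a boolean check; the sketch simply shows how category, automation level and oversight can be combined into a repeatable disclosure decision.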

Journalists’ ability to make reasoned decisions about the value of a technology — and communicate about it to the public — requires enough technical facility to both try something out and understand how it works. 

Many newsrooms face a lack of openness to — let alone excitement for — new technologies or even change more broadly. Resistance to technology and change can leave journalism behind other industries in adoption and experimentation with technology (including AI), thereby negatively impacting the industry. It is the responsibility, Agnes Stenbom shared, “of the media companies to equip ourselves with those technical skills” to better understand AI tools and how they can be used.

It also became clear through discussion at this session that pressure to innovate cannot come solely from the business side of the organization. Having the business side push to integrate AI tools often fails. Journalists tend to see these motivations as purely about “cost saving” or “efficiency,” rather than about empowering those who produce journalism. Instead, it is important that journalists feel the potential value of these technologies and discover uses themselves. As Gábor Kardos shared, “It’s not just about whether we sell it right internally, [journalists] need to explore it. So the way we try to sell it, so to say, is we try to elevate the successes that [journalists] achieve themselves.” Several attendees agreed that having hands-on experience with AI tools is critical for understanding how these technologies can improve existing workflows and benefit journalists. The need for greater technological knowledge, skill, and — thereby — confidence was also raised in CNTI’s convening in Mexico City on Online Safety and Security, suggesting a wide breadth of benefits. 

But how do we do it? Attendees shared their own experiences with bringing artificial intelligence into news organizations, lessons learned and suggestions for generating new ideas and interest in this technology. One recommendation stood out the most:

Journalists might consider viewing generative AI as a “sparring partner”: bouncing ideas off of it, using it to lightly edit sections of a story or even having it draft several candidate article titles for journalists’ consideration. In other words, AI systems can be viewed as helpful assistants for journalists. Thus, news organizations can develop ways in which, as Ludovic Blecher put it, “tech[nology] is here to serve and not to replace.” Part of this process can be facilitated by news organizations developing internal technical expertise about how to build, update and implement AI models. If one hesitation among some journalists has been relying too much on technology for their core work, this could be a way of shifting the perspective. “[T]he true power of AI,” Richard Gingras offered, “is, how do we use it to allow us to do things that were very hard to do, particularly in the fast process of breaking news?” 

Progress Depends on Improved Relations between Journalism and Technology Companies 

In addition to a general resistance to new technologies, there is a strong distrust of technology companies, stemming from issues as wide-ranging as control of information, content moderation, copyright concerns, a lack of transparency in algorithmic structures and worries over further economic imbalance. Relations tend to be especially tense in North American and Western European markets. These tensions, which are discussed in several of CNTI’s issue primers, were prominent in this discussion, much as they were in other discussions CNTI has hosted. It is critical that we work to improve these relationships (an effort central to CNTI’s mission).

One step technology companies, especially those building AI models, could take is to be more transparent about how journalism content is used and how AI models are built. Not only is this a central element of tension between journalism and technology companies, but it is also important for the relationship journalists build with the public. Varun Shetty shared that “We need to do a really good job of better explaining to both our partners and the public how these models work, what they really do, what they don’t do. One of the things that I spend a lot of time discussing with news partners and other media partners is the fact that these models aren’t queryable databases. They are reasoning engines.” One key takeaway from a series of focus groups CNTI held earlier this year on public thoughts about uses of AI in journalism is that the public needs to know that journalists can vouch for the AI model being used and the content in it. That, in turn, requires journalists to have access to more of that information. Indeed, recent research shared in this session on journalists’ perceptions of AI tools developed by outside technology companies found that many have ethical concerns stemming in part from the lack of transparency being provided by AI model developers. 

It is also important to think about the value of content in the AI era, including journalistic content utilized by AI companies to build their systems. As Madhav Chinnappa stated, “I would counsel the Big Tech companies and the regulators to not fall into that trap of thinking about how it used to work … the point is that we’re in a different age and how that value exchange works is really important because I think what we should be optimizing for is the health of the information ecosystem.”

Journalists can help by sharing their experiences at a global scale. Some geographic areas — often those with fewer financial resources — express more excitement about new technological tools, including AI. Akintunde Babatunde shared one example from West Africa, in which an AI transcription tool, Dubawa, is used by journalists to quickly turn audio interviews from different languages into English text. Journalists in these settings are coming together “with technology professionals, with data scientists … to share learnings,” said Akintunde. Journalists worldwide can learn from one another about what technological resources can be used to tell their stories and the best ways to do so. 

Finally, as new uses emerge, technology companies, policymakers and the journalistic community need to work harder to ensure equal access to the benefits of AI spanning different types of newsrooms and global regions. The major capital requirements for developing AI capacity and rapid developments in AI may — and often do — leave many volunteer and lower-revenue news organizations behind. As Lexie Kirkconnell-Kawana stated, “One of the things I think we should be thinking about is how we ensure that there is cross pollination across the sector, that the larger organizations are able to support the smaller ones and are able to cooperate and share that IP and share the use cases that are emerging from it.” 

In a point that applies to so many parts of society and life, there is much to be learned from what has come before and what colleagues elsewhere are trying. There may be journalistic applications, for example, in predictive AI modeling projects in the scientific community. Scientists are working on a “digital twin” of the Earth, Destination Earth (or DestinE), to model future environmental impacts of climate change. The news industry could apply similar methods “to help us model out what is the future of news, and what is the future of trust? What is the future of value and engagement?” said Kristen Davis. The predictive power of AI has so far been underutilized in the news industry. 

Other attendees referenced methods they learned from public services. In France, for example, government agents use AI to help answer citizens’ questions: they feed each question to an internal AI system and, with its help, are able to respond more quickly and often more thoroughly. 

It also may be necessary — or at least helpful — to bring leaders from other sectors to work with journalism organizations as they think about their future. The journalism industry is in a period of rapid transition. These changes in technology likely require professionals and leaders in the industry to rethink journalism. As Agnes Stenbom shared, “… in order to rethink journalism, we have to bring in people who are not committed to the traditional forms of journalism.”

Past and current uses of labels in other sectors can also help in thinking through what would be valuable to the public. Several attendees likened the role of labels and public disclosure of AI use to existing or past examples from the healthcare, medicine, nutrition, food and science industries. Lessons from these industries are valuable resources for understanding best practices — and potential pitfalls — in labeling. As one participant explained, “We’re actually already incredibly well trained using these kinds of labels.” The public has experience with labels and it’s important to build public awareness, knowledge and literacy using concepts that are already familiar. 

Because labels on medication and food items provide valuable information to the public, attendees considered how these types of labels could be adapted to news and journalism content that uses AI. Several remarked that gradation scales, similar to the Nutri-Score used in parts of Europe, could provide a spectrum that allows audiences to identify fully human-generated content on one end and fully computer-generated content on the other. Content created with the assistance of generative AI but mainly authored by a human journalist would fit somewhere in the middle. Yet designing these labels and educating the public about them would take time; during the current period of rapid technological development and AI integration, labeling practices may not be able to keep pace with emerging uses.
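
As a purely illustrative example of what such a gradation could look like in practice, the short sketch below maps an estimated share of AI-generated content onto a five-step scale from fully human-generated to fully computer-generated, loosely echoing the Nutri-Score idea raised in the discussion. The letter grades, the bands and the very notion of a measurable “AI share” are hypothetical assumptions; as noted above, a real scheme would require agreed definitions and public education.

```python
from dataclasses import dataclass

# Hypothetical gradient, loosely modeled on Nutri-Score-style scales:
# "A" = fully human-generated ... "E" = fully computer-generated.
GRADES = ["A", "B", "C", "D", "E"]


@dataclass
class AIDisclosureLabel:
    ai_share: float  # assumed estimate of how much of the content AI produced, 0.0-1.0

    @property
    def grade(self) -> str:
        # Map the AI share onto five equal bands; defining what counts toward
        # the share is the hard editorial question, not the arithmetic.
        index = min(int(self.ai_share * len(GRADES)), len(GRADES) - 1)
        return GRADES[index]


# Example: a story drafted with generative AI but substantially written and
# edited by a human journalist might land mid-scale.
print(AIDisclosureLabel(ai_share=0.45).grade)  # "C"
# Example: fully automated output with no human authorship.
print(AIDisclosureLabel(ai_share=1.0).grade)   # "E"
```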

A similar theme arose during CNTI’s second convening, where participants discussed organic standards and labeling and how segments of the public have come to value organic labels. 

Attention was also drawn to learning from labeling attempts that failed. One example, provided by Claire Leibowicz from her forthcoming PhD research at Oxford, was California’s Proposition 65, which requires warnings about public exposure to cancer-causing chemicals. Most found the labels overwhelming and unhelpful, though some suggested they were useful when first rolled out because they prompted companies to reduce their use of such chemicals. Another example was an attempt to put novel labels on wine, which was unsuccessful because consumers did not understand what the labels meant or what they signaled — an important consideration for news organizations when creating labels. For labels to be effective, the public must (1) understand what they signal and (2) find value in what they signal. 

Participants also raised the effect that journalistic coverage of AI, and of technology more broadly, has on public perception. Some expressed a sense that much of the coverage to date portrays AI as a negative, threatening technology, cautioning against its use without covering how it can be useful for many industries.

Alexandra Borchardt summarized, “I think that it’s the responsibility of the media to really cover these things in much more breadth and much more constructively than they do.” Covering the nuance of AI — what it can and can’t do — is important for audiences to understand. While a healthy skepticism is valuable, several felt a fuller picture of the potential benefits and potential harms would serve to create a more informed public — and one that is then open to certain uses inside journalism.

Another way to think about how to cover AI is to consider journalists to be teachers. Journalists have an opportunity to serve as valuable teachers for complex topics like AI — filling a similar role that many held when covering the COVID-19 pandemic. As with COVID-19, building AI literacy needs to involve meeting the public where they are and providing various opportunities to learn about AI based on individuals’ interest and preexisting knowledge. Rayan Temara shared Lie Detectors’s approach of “training local journalists to go into a class and train children, aged 10 to 15, and also teachers on how they can tackle disinformation, how they can resist polarization through digital and media literacy,” which facilitates valuable interactions between journalists and their communities. One innovative educational product brought up at the convening was Minecraft’s AI Literacy program targeted at educators and children. 

Journalists’ role in educating the public is complicated because, as attendees pointed out, the way journalists use AI can be — and likely already is — very different from how the public uses it. “It’s completely and fundamentally different to use AI systems such as ChatGPT or Gemini, where you have lots of processing, and standalone open-source models whose usage by professionals have significantly increased,” said Anastasia Stasenko. For journalists to be able to explain their types of uses to the public, they must first understand AI uses thoroughly themselves. 

The public’s reliance on journalists as a kind of middle person between the individual and AI also came up during CNTI’s recent focus group work. Many participants said they expect journalists to be able to vouch for any work produced by AI that goes into their reporting, and to be able to communicate it clearly to their audiences. 

The research community has begun to amass a meaningful amount of data on uses of AI in news. Some studies reveal consistent findings while others present more conflicting evidence, suggesting that this is still a very new and developing technology with a need for continued research.

At the convening, preliminary findings from a Netherlands-based focus group were shared, revealing, according to Sophie Morosoli, that the public has a “strong wish for transparency in the sense that news organizations should be transparent about their use of AI, no matter what.” This finding is supported by Trusting News’s research, shared that day by Lynn Walsh, which finds that 94% of U.S. news consumers surveyed say they want newsrooms to be transparent about their use of AI and over 90% want to know a human was involved in the process of creating content. These findings also comport with both the Reuters Institute’s survey research and CNTI’s recent series of focus groups in Australia, Brazil, South Africa and the U.S.: the majority of individuals in each country expressed acceptance of news organizations using AI if audiences were informed about its use. The public’s desire for transparency is generally consistent across studies and geographies. 

Felix Simon shared recent preliminary research he’d conducted in collaboration with Benjamin Toff that featured a striking finding: “For our sample of U.S. users, we found, on average, that audiences perceived news labeled as AI-generated as less trustworthy, not more, even when articles themselves were not evaluated as any less accurate or unfair.” Yet, when the authors provided users with the sources used to generate the article, these negative effects diminished. Further research is warranted to better understand how the public interprets labels on AI-generated articles and how to best present this information.

What that transparency — or disclosure — should look like is the subject of another body of research, and one with less of a consistent takeaway. Identifying the most appropriate and useful label remains an open question.

It is also important to consider potential unintended effects of transparency and labeling. As Claire Leibowicz shared, “People interpret AI or not [AI] symbols or signals as implying something about truthfulness,” which they are not designed to do. Based on the Partnership on AI’s case studies of technology companies like Meta and Microsoft, the public may interpret AI labels as indicating truthfulness even when these labels do not imply anything about integrity or factuality.

We may see an analogue across these various research findings to the implied truth effect, in which warnings about misinformation have been shown to increase trust in news stories that did not receive warnings. That is, people may incorrectly assume that content without an AI label (but in which AI was used) is fully human-generated. Alternatively, as the public comes to value labels, they may assume the absence of any signal represents inaccuracy or low quality.

The existing research also suggests AI labeling may require unique geographic and linguistic decisions, which raises the question of whether it’s possible to create language-agnostic, global labels recognizable by all. 

Research has also explored how individuals within newsrooms perceive AI tools developed by outside technology companies. Results from the AI, Media and Democracy Lab found many newsroom staff have ethical concerns about using AI. Part of this concern stems from, as highlighted by Claire Pershan, the lack of meaningful transparency being provided by AI model developers. Natali Helberger emphasized the importance of “having a debate on when exactly these tools are, especially when they come from outside companies, trustworthy and safe to use and can be trusted to be in conformity with professional values that are so important in newsrooms.” News organizations also need to continue developing applications of AI with an understanding that these novel tools necessitate testing and likely require human oversight. 

Even as research continues to develop methods for achieving transparency, as one participant stated, “If we don’t have a shared definition of what journalism is, it’s going to be really hard to have shared definitions of where transparency is needed.” The concept of journalism was recently addressed in a CNTI focus group essay and remains relevant to AI labeling.

As we delved into possible designs, views differed on what would be most appropriate and most effective. The bullet points below synthesize the various areas of discussion. 

  • Should the disclosure and/or label come from the news provider or an outside, independent agency? There were differing views about the role of institutions in overseeing disclosure and enforcing the labeling of AI-related content. Sophie Morosoli’s focus group research finds that participants desired labels “linked to a certain sense of institutional accountability … that there is an external institution that can be held accountable. So if you use this label, you know that certain values and certain standards are in place.” Yet not all participants at the convening agreed that an outside institution should be responsible for providing this service; some argued that news organizations’ own brands are important for building trust.
  • A gradient scale model. As discussed above, some in attendance suggested disclosure of AI uses could follow a gradient scale model. On one end of the spectrum is the absence of AI involvement and on the other end is the absence of human involvement. This approach provides users with a better idea of the degree to which AI was used in news and journalism products, but it also requires the public to understand the gradations and underlying scoring. Public understanding would likely necessitate training, and the scoring itself would require updates as AI technologies continue to develop.
  • Should designs include text or not? Another area of discussion centered on whether transparency and disclosure designs should feature text or merely images. While certain design aspects likely necessitate text to explain how AI models and tools were used in the development of a news product, other design aspects can be textless as long as the public understands what the images mean in context. For example, people are unlikely to read several lines of disclaimer text on their produce but will check for labels identifying it as organic or fair trade.
  • Labeling “AI for good.” A related consideration is when and how uses of AI that protect the public need to be disclosed. One participant shared the example of a news organization that used synthetic media to alter the faces of individuals who were sharing personal experiences of sensitive matters (e.g., alcoholism or abuse). This allowed first-person involvement in the story without compromising the individuals’ identities.
  • Labeling human-generated content instead. An intriguing possibility was raised by Dan Nechita, who, during his time in the European Parliament, worked on the E.U.’s AI Act. He contended that, “[o]ne of the solutions will be to actually label human-generated content — to label authenticity.” Those who use generative AI with “ill intent” are not going to label their content; thus, in an information environment that stresses the importance of labeling AI-related content, their lack of labeling “will give them, in a sense, validation for their content,” shared Dan. His perspective aligns with a variation of the implied truth effect. It will be important to consider the implications of labeling AI-generated content given the possibility that not all AI-generated content will be tagged as such.
  • Yet, as Lexie Kirkconnell-Kawana pointed out, “we can’t rely on this presumption that human-generated journalism is any better than AI-generated journalism, because there’s certainly still lots of malpractice and wrongdoing and poor practice that goes on across the industry as well.” 

Karen Mcluskie highlighted the profound uncertainty surrounding AI’s long-term societal impact. Drawing parallels to past technological revolutions, she noted that such advancements often reshape the “human operating system” in ways that become obvious only in hindsight, but are nearly imperceptible in the moment. Relatedly, Krishna Sood shared Microsoft’s perspective that the “transformative potential of AI is akin to electricity, and so hopefully, that starts to reframe the thinking as to whether you should fear it, the questions you should ask about it, and how you should adopt it within your organization.” 

Karen emphasized that governments today have a critical role in expanding our “imagination horizon” to anticipate AI’s effects. She observed that we already foresee challenges like disruptions to the labor market and the proliferation of misinformation, but she was particularly intrigued by the unknowable impacts from a world of computer-generated digital content. “What happens to the human operating system,” she asked, “when we are imbued and surrounded by image and video of other humans that is not real?” It is worth considering the implications of these changes.

For journalism, these changes mean rethinking the role journalism plays in society and the services it provides. As Dan Nechita remarked, “I really believe that the survival of the [journalism] industry will rest on its ability to do something that AI cannot do.” The context he highlighted was that, at present, AI models lack creativity: “It’s [the risk] not even about discriminatory or false, or misinformation or disinformation or hallucinations. It’s also about uniformizing … the body of knowledge and the essence of truth.” 

The modern newsroom may also require, as some expressed, newsroom leaders who embrace technological experimentation to understand where these technologies can be immediately useful and where further research and testing are needed. Ludovic Blecher emphasized that, with effective executive direction, AI integration in news production can empower journalists to create more meaningful content. “I think it’s good sometimes,” he said, “to sell AI by explaining to your newsroom that it will help them to get rid of the boring tasks and focus on what matters the most.”

Governments’ role may also need to adapt to a new era. Even as governments actively consider bills and regulations, today’s fast changing, digital world may necessitate new kinds of regulatory structures.

Finally, in all of this work, it is critical to be forward-thinking — which includes understanding younger generations and their news habits. Younger generations are consuming information in unique ways that we all need to pay more attention to. As Helena Martins from Google stated, “One of the things that is on the top of our minds … is how to reach Gen Z because they have a completely different way of consuming and seeking information.” 

AI development is dramatically shaping the information environment and forward-thinking is needed to understand how these changes will affect news consumption and interactions with journalistic content.

CNTI is excited to build on the convening with a variety of next steps. One is the formation of a research working group to provide a further distillation of what the global research adds up to across the areas of AI transparency, disclosure and labeling — particularly for news and journalism.

Another next step is to continue work on design options, possibly with another working group. CNTI espouses a global approach to research, and it is critical to understand how design approaches will work across different regions, cultures and languages. We encourage individuals interested in being a part of either group to contact CNTI at info@innovating.news.

This convening, co-hosted by Thomson Foundation, is the third in a series of convenings CNTI is hosting on enabling the benefits of AI while also guarding against the harms. It is just one of many initiatives within the larger work CNTI, an independent global policy research center, does to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations. Please visit our website: www.innovating.news to see more of CNTI’s work and sign up to receive updates, and, as always, please contact us with questions and ideas.

See the Appendix below for numerous valuable resources shared by participants of this event as well as other acknowledgements.

Akintunde Babatunde
Director of Programs, Centre for Journalism Innovation and Development
Ludovic Blecher
CEO, IDation
Alexandra Borchardt
Independent Researcher
Madhav Chinnappa
VP of Partnerships, Human Native AI
Kristen Davis
Founder & CEO, CinqC
Wahyu Dhyatmika
CEO, Tempo.co
Craig Forman
Managing General Partner, NextNews Ventures (CNTI Executive Chair)
Richard Gingras
Strategic Advisor, Google (CNTI Board)
Natali Helberger
Professor, University of Amsterdam; Director, AI, Media and Democracy Lab
Jeff Jarvis
Author and Visiting Professor, Stony Brook University; Montclair State University (CNTI Advisory Committee)
Gábor Kardos
CEO, Magyar Jeti Zrt.
Lexie Kirkconnell-Kawana
CEO, Impress
Tanit Koch
Journalist/Co-Owner, The New European (CNTI Advisory Committee, moderator)
Verena Krawarik
Head of Innovation, Austria Presse Agentur
Caro Kriel
Chief Executive, Thomson Foundation
Claire Leibowicz
Head of AI and Media Integrity, Partnership on AI; DPhil Candidate, University of Oxford
Irene Jay Liu
Director of AI, Emerging Tech & Regulation, International Fund for Public Interest Media
Helena Martins
Senior Manager of Government Affairs and Public Policy (Search and AI), Google
Karen Mcluskie
Deputy Director of Technology, Department for Business and Trade (U.K.)
Amy Mitchell
Executive Director, Center for News, Technology & Innovation
Sophie Morosoli
Postdoctoral Researcher, University of Amsterdam; AI, Media and Democracy Lab
Dan Nechita
EU Director, Transatlantic Policy Network
Claire Pershan
EU Advocacy Lead, The Mozilla Foundation
Varun Shetty
Head of Media Partnerships, OpenAI
Felix Simon
Research Fellow in AI and News, Reuters Institute for the Study of Journalism; Research Associate, Oxford Internet Institute (CNTI Advisory Committee)
Krishna Sood
Assistant General Counsel, Microsoft
Anastasia Stasenko
Co-Founder, pleias
Agnes Stenbom
Head of IN/LAB & Trust Initiatives, Schibsted
Rayan Temara
Outreach and Policy Officer, Lie Detectors
Federica Varalda
Managing Director – Development, Thomson Foundation
Lynn Walsh
Assistant Director, Trusting News

CNTI’s cross-industry convenings espouse evidence-based, thoughtful and challenging conversations about the issue at hand, with the goal of building trust and ongoing relationships along with some agreed-upon approaches to policy. To that end, this convening adhered to a slightly amended Chatham House Rule:

  1. Individuals are invited as leading thinkers from important parts of our digital news environment and as critical voices to finding feasible solutions. For the purposes of transparency, CNTI publicly lists all attendees and affiliations present. Any reporting on the event, including CNTI’s reports summarizing key takeaways and next steps, can share information (including unattributed quotes) but cannot explicitly or implicitly identify who said what without prior approval from the individual.
  2. CNTI does request the use of photo and video at convenings. Videography is intended to help with the summary report. Any public use of video clips with dialogue by CNTI or its co-hosts requires the explicit, advance consent of the subject.
  3. To maintain focus on the discussion at hand, we ask that there be no external posting during the event itself.

To prepare, we asked that participants review CNTI’s Issue Primers on AI in Journalism, Algorithmic Transparency and Journalistic Relevance as well as the report from CNTI’s second convening event.

Participants at our convening event shared a number of helpful resources. Many of these resources are aimed at (1) developing AI literacy, (2) learning how to implement AI technologies in newsrooms and (3) developing an understanding of how the public perceives AI use in news. These resources are presented in alphabetical order by organization/sponsor below. 

The AI, Media and Democracy Lab released a recent report in collaboration with the Associated Press that examined how employees in the news industry view and use AI in their daily work. The researchers find that ethical concerns (e.g., about having generative AI produce articles) rank highly among individuals in the news industry. 

Verena Krawarik, of the Austrian Press Agency (APA), shared research conducted with colleagues that develops a taxonomy for how AI can be used in the media industry — and the levels of human oversight required to manage these applications. The authors devise six levels of AI automation and assess over 35 different applications (e.g., text summarization, text-to-speech, business decision support, etc.) based on their usage and technological sophistication.

A transcription tool for African journalists, the Dubawa Audio Platform, was discussed as an example of how technological tools are helping journalists in the region deliver information to their audiences more quickly. The platform was initiated by a Friend of CNTI, the Centre for Journalism Innovation and Development (CJID). 

Alexandra Borchardt shared the European Broadcasting Union’s 2024 report titled “Trusted Journalism in the Age of Generative AI.”

Felix Simon shared a recent column that considered how AI labels may affect audiences’ trust in and perspectives toward news organizations as well as the potential limitations of a “human-in-the-loop” approach to implementing AI tools. 

Human Native AI recently released a helpful overview for learning AI terminology. These concepts and technologies are rapidly developing, so understanding the nuances between key terms is important for having informed, evidence-based conversations. 

One of the takeaways from the Brussels conversation was the need to develop internal expertise about AI within newsrooms. Microsoft has developed partnerships to assist with AI adoption and has also joined collaborative fellowship programs for U.S. news organizations. 

Mozilla has produced a report on AI transparency that was released in the lead up to the E.U.’s AI Act. The authors find that many technology organizations developing AI models and tools do not provide meaningful transparency to end users.

Partnership on AI provides several resources including policy recommendations for AI labeling and important considerations when disclosing AI-related content to audiences.

Relatedly, Trusting News’s 2024 research on AI labeling and disclosure reveals large majorities of the U.S. public desire transparency about uses of AI in news.

We appreciate all of our participants for sharing these resources with CNTI. 

The Center for News, Technology & Innovation (CNTI), an independent global policy research center, seeks to encourage independent, sustainable media, maintain an open internet and foster informed public policy conversations. CNTI’s cross-industry convenings espouse evidence-based, thoughtful but challenging conversations about the issue at hand, with an eye toward feasible steps forward. 

The Center for News, Technology & Innovation is a project of the Foundation for Technology, News & Public Affairs.

CNTI sincerely thanks the participants of this convening for their time and insights. We are grateful to Thomson Foundation, the co-sponsor of this AI convening, and to our venue, Résidence Palace. Special thanks to Caro Kriel, Federica Varalda, Kaspar Loftin and Angela Watt for their support, and to Tanit Koch for moderating such a productive discussion. CNTI is generously supported by Craig Newmark Philanthropies, John D. and Catherine T. MacArthur Foundation, John S. and James L. Knight Foundation, the Lenfest Institute for Journalism and Google.
