Focus Group Insights #2: Perceptions of Artificial Intelligence Use in News and Journalism

Defining News Initiative


By Amy Mitchell, Celeste LeCompte, and Samuel Jens

Artificial intelligence (AI) tools have been used in the news industry for over a decade to assist in automating tasks, including summarizing data from outside sources, crafting financial reports and sports stories, and performing grammatical checks. However, the release of OpenAI’s ChatGPT in November 2022 ushered in an era of generative artificial intelligence that dramatically reshaped this landscape, and newsrooms’ use of AI tools has been increasing. The news industry’s adoption has been spurred by desires for efficiency, hopes of growing readership and general market pressures. These processes and decisions are becoming well documented within academia and the trade press.

However, much less is known about the public’s awareness of and attitudes toward the use of AI in news and journalism. In June 2024, we conducted a series of focus groups with individuals across Australia, Brazil, South Africa and the United States to better understand how the public defines “news,” “journalism” and “journalists” and the role each plays in keeping individuals informed about important events and issues. We also explored how participants understand the role of technology, and AI in particular, in shaping their information access and environment.

This essay — exploring how the public views artificial intelligence (AI) use in news and journalism — is the second in a series that highlights themes from the discussions that will inform additional research. Each focus group included participants with a mix of socio-demographics and levels of interest in keeping up with current events. More about the focus groups can be found at the end of this essay.

Focus group discussions of AI were centered on the use of generative AI in news, specifically. We wanted to know: How does the use of generative AI fit into participants’ understanding of news and journalism? How comfortable are participants with the use of AI in this context? Do they see it as improving or hurting their access to news and information? 

AI is a complex technology that is changing rapidly, and participants had a wide range of awareness of what it is, how it works, how it can be used today, and emerging applications. Participants’ knowledge — including incorrect information — played a role in shaping their opinions, both positive and negative.

Key Observations

  • Participants’ thoughts about AI use in news were tied closely to the attributes they expect of journalists. As discussed in CNTI’s first focus group essay, participants expect journalists to be skilled researchers, independent thinkers and strong communicators. This plays a role in shaping their thoughts about when, how and why AI tools can be used in the context of news.
  • The quality and availability of data sources feeding AI, as well as the AI models themselves, are key considerations. While not always distinguished in the conversation, there were three primary kinds of “sources” of data discussed: training data, journalists’ original data sets and information on the internet more broadly. When it came to the models themselves, there were diverging opinions about their ability to “get it right,” and those with more confidence were more comfortable with uses by journalists. 
  • Meaningful transparency is critical. Participants generally favored disclosure about the use of AI technology. Many felt that disclosures should include details about why AI was used, the sources it relied on and whether or not the information had been reviewed by a human.
  • This is a key moment in a still evolving area. Participants see AI technology as an important emerging technology that will dramatically impact their lives in the future; participants had positive and negative feelings about that fact, but many expressed a willingness to change their opinion about the risks and benefits of AI as the technology evolves.

We delve into these themes in detail and highlight important nuances in how the public thinks about these concepts across different contexts.

Personal Use and General Attitudes About AI

Many participants drew on their own experiences using AI to ground their perspectives about the use of the technology in news. From awareness and use to general attitudes about technology, participants’ context for the discussion varied dramatically across and within groups. Several participants said that they had not used AI tools; some said that they did not feel well enough informed about the topic to weigh in on the discussion.

Of those who did have experience with AI, many were familiar with grammar tools that implement AI to improve language and word choice; several participants had also used tools that allow them to generate text, and described using AI to draft emails, letters and other documents. 

“…But I myself as a student, I have so much to do, I need some help, just a foundation of something that I am going to write, something to add to it.” (Brazil, from Portuguese)

“I’ve used it to write a couple of things that I just honestly didn’t have the time to really sit and think.” (Australia)

Some participants described using AI tools to help them do research and learn new information. In some cases, people described using AI assistants embedded in search platforms and mobile devices, as well as standalone apps like ChatGPT. Some described using these tools to replace traditional search queries, with the resulting output answering a specific question; others focused on their use of AI to get started on exploring a topic or an idea in a new way. 

“If you don’t…quite know how to learn something, you can just chuck a prompt in there.” (Australia)

“If you’re using Edge […] you will then have a chat box which is an AI, right, where you can then communicate as if you are communicating with a human. That way it helps you to narrow down what you’re asking, and it actually searches on your behalf for exactly what you are looking for.” (South Africa)

Few participants said they had used software to generate images, and many expressed negative feelings about AI-generated or -edited images that they had encountered.

“We’ve seen or heard of images being shared online that were completely made from AI, and it’s amazing how many people don’t realize that it’s fake.” (United States)

Participants across multiple groups and countries raised concerns about the impact of increasing AI usage (in general) on critical thinking skills, worrying that use of these tools will encourage intellectual or creative laziness among users. 

“It makes us to not think actively, not think aggressively, but also not become creative in how we think. We then rely on machines to do the work.” (South Africa)

These concerns were almost exclusively directed at other people’s use of AI technologies — and they were often raised in contexts where learning is a key task for the end user: students, trainees, early-career professionals and journalists were all named as examples.

“As an educator, I don’t want my students using AI to write their essays. It is important for students to understand the writing process and be able to express their thoughts through writing.” (United States)

“I’ve used ChatGPT just to ask a general question and I’ve found that the answer generally is very general … The thing is, if you use that in that context, you’re not actually learning because someone’s giving you the answer, whereas you’ve actually learned … analytical skills or research skills or how to analyse the topic, you’ll actually come out better in the end.” (Australia)

Use of AI To Get Informed

Many participants saw clear advantages to using AI-assisted search tools or conversational tools like ChatGPT to get informed on important topics and issues. Many described having AI summarize available, relevant information on a topic, saying it made it “easier” and “faster” to learn new information.

“AI will give me 10x more than what humans can give me.” (South Africa)

In some cases, participants described such AI tools as a good way of getting informed on current events and issues; AI assistants embedded in search tools, in particular, played a role in providing news and information to them. 

“So if you use the engines, using AI, it makes your search reliable and important. So it’s exactly what you’re getting from the radio — what you can get there and even get data and more of the in-depth information.” (South Africa)

“With the AI, co-pilot, I really like it because it’s made life so much easier because what it’s done really, it’s like taking all the data from everywhere and putting it together for you […]. And from that, the AI is generating what we as humans have inputted and then it distributes to everyone, everywhere.” (South Africa)

The majority of participants did not expect full accuracy when using AI tools in this way. They often noted that additional verification steps are needed to confirm that the information they receive from generative AI tools is correct.

“[The] AI is … getting different types of information throughout, sourcing that information for me, and is giving me that information. Then what I need to do, I need to then cross-check it…” (South Africa) 

Some participants referenced following links to sources provided by AI assistants or described evaluating the accuracy of AI-generated information based on their own knowledge. Several participants said that this is similar to the way they check information and news that they get from other non-AI sources, including radio, mobile phones and social media platforms.

A Nuanced View of How Generative AI Can Be Used by Journalists

Although many participants have found generative AI to be helpful for informing themselves, they expressed significantly more discomfort with journalists and news organizations using the tools to create published information. 

“I am skeptical of the ‘news’ generated by AI. Not opposed to it. Just approaching it with a little skepticism.” (United States)

“When you’re creating something for your personal use, it’s okay. But for professionals, journalists, it’s not acceptable.” (Brazil, from Portuguese)

However, participants distinguished between many kinds of uses. The uses with the most frequent support were those most similar to ones participants reported in their own lives, suggesting that familiarity and experience with AI technology help people feel more comfortable with its use by others, an idea supported by existing research.

Some of the discomfort seemed tied to participants’ expectations that journalists be highly skilled researchers and thus do their own research. This informed some participants’ negative attitudes about journalists’ use of AI tools:

“In my opinion, if they are using AI, then what is the need for the reporters in the first place?” (United States)

“Someone who went to school for years, the journalist, someone who went to school for years now, they cannot think for themselves. They use computer software to think for them. I don’t think that’s a good thing.” (South Africa)

“No, we’re talking about people who perform work in journalism. They don’t have to use AI software, because that person already has the content, the person has to write about the content, they are not creating that content, because the creation of content is considered fake news.” (Brazil, from Portuguese)

However, participants often cited those same skills as reasons journalists could use AI, as long as they ensured that the information generated was accurate. 

“I think it’s convenient, it’s easy, but it has to be true. And you have to revise it, and you have to evaluate it. So, you don’t run the risk of being just a copy.”  (Brazil, from Portuguese)

“I am fine with people creating content like that to save time but they need to double check because AI can’t always be reliable. Sharing is fine as well but the person who shares should verify the credibility before sharing. That goes for reporters and people working in news as well.”  (United States)

Some participants said they would become less trusting of the content their local news organizations produced if AI were used to create it without that layer of oversight.

“If I knew that my local news organization was using AI to get their news and write their news stories, it would make me very distrusting of them.” (United States)

“I think that it would somehow lose some credibility because somehow, you’re going to believe that it’s not 100% trustworthy, that you’re going to come across some misinformation there.” (Brazil, from Portuguese)

Two key factors shaped participants’ general sense of when and how generative AI could be used in news: their perception of the quality of the data available to AI, and their perception of AI models’ ability to accurately synthesize information given the specific expectations for news. In the following sections, we explore these concerns.

The Quality of the Data in AI Tools Matters Greatly …

News is about facts for most people, and whether or not AI is used, participants cared deeply about the sources used to produce the information they rely on. They frequently raised questions, voiced concerns or expressed excitement about “sources” as they weighed both their own use of AI tools and use by journalists and news organizations.

“What I’d like to know is, […] Is it generic or is it from a source?” (South Africa)

“I would like to know the sources from which the data was extracted and summarized by the AI.” (United States)

“Depends on where their sources come from, I suppose.” (Australia)

Many participants seemed to understand that AI tools use vast amounts of data to power their models and were broadly aware that the quality and availability of data sources and the models used by AI have an impact on the quality of the outputs. 

While participants did not always explicitly distinguish between them in conversation, there were three primary ways in which they understood data sources as playing a role in the quality of generative AI outputs. 

Training data: Throughout the discussion, participants talking about AI’s “sources” sometimes seemed to be referring to the data used to train AI models, as in discussions of conversations with ChatGPT. As discussed above, many participants felt that this information was insufficiently reliable on its own and requires some degree of verification.

“It can be wrong because AI only uses the data it’s learned or has been fed.” (Australia)

“And so I wouldn’t rely on the journalism that comes from it [AI] because it’s information that is sourced from somewhere else and taken and put into one app.” (South Africa) 

Information on the Internet more broadly: In other cases, participants discussed the use of AI-powered search tools that help them find and summarize information from the Internet. In this case, “sources” often seemed to refer to online content accessed by the search engine. Often, their perception of the quality of information on the Internet affected their sense of the reliability of these tools. 

“When you go to the AI and you ask for something, it gives you all of that information at once, where you don’t have to go and search on this specific site or that specific site that Google is giving you.” (South Africa)

“I have peace of mind because they [AI tools] are basing themselves on information that is online. So, they are real information.” (Brazil, from Portuguese)

“I really just don’t have a good feeling about AI. I haven’t fleshed out all my thoughts on it, but I just know that the Internet has always led people astray when it comes to current events, and I have no idea who is creating the content behind AI, but I don’t trust it.” (United States)

Journalists’ own original data sets: When discussing the use of AI by reporters, participants frequently discussed whether reporters would be providing information to the AI for synthesis and summary. (And indeed, some news organizations are building such models.) This differed from their own use of AI, and was the most common use case in which participants said they would be comfortable with reporters using AI tools: 

“[I]s it okay if you create news content using AI? My answer is, if the AI uses content that is true facts, [if] the person who is asking for that creation feeds that tool with factual information.” (Brazil, from Portuguese)

“I don’t know how artificial intelligence could help in creating a piece of news […] but the facts themselves have to be found and tracked by the journalist.”  (Brazil, from Portuguese)

South African participants in one group also raised the possibility that emerging technology such as drones could provide unique, useful data to AI tools that would make reporting faster and safer for journalists: 

“With the technology that has been built, it allows them to actually give us the coverage all around and live and happening right now. So I think that’s the exciting thing. […] we still will get that excitement of having someone who’s at the scene by using drones, right?” (South Africa)

“With this new technology, you don’t have to be at war. You can use drones. So me personally, I think AI is the better.” (South Africa)

… As Does the Quality of the Models Themselves 

Most participants were uncomfortable with journalists using AI to generate the entire content of a published story, but they had diverging opinions about how reporters could use AI tools for summarization and synthesis. These opinions were informed by perceptions of AI models’ ability to accurately synthesize information from the underlying data, as well as by the data quality concerns described above.

Participants who believed that the AI tools used by journalists would have access to abundant, accurate, and up-to-date information were the most likely to see the value of AI for people creating journalism content. 

“As long as reporters are getting intel from MULTIPLE, trustworthy sources, I think the use of AI in news organizations is okay.” (United States) 

“The computer is able to make suggestions because you’re drawing from a huge pool of information. [A journalist] may have once been able to read one book, but now she’s got a pool of information of the latest research, the latest this, the latest that.” (Australia)

“I would only want AI used when there is a critical mass of information available as opposed to limited or scattered information.” (United States)

In some cases, participants specifically said they would not want journalists and news organizations to use familiar, off-the-shelf AI tools, such as ChatGPT, to generate content, referencing their concerns about the quality of the output of such tools.

“…people who use ChatGPT in helping you formatting a piece of news, it’s okay. But the issue is if you use a text that has already been created as a piece of information. That is true and ChatGPT is not 100% reliable.” (Brazil, from Portuguese)

In some areas of coverage, participants expressed concerns that there may not be sufficient information for AI to leverage in content creation, either because of gaps in the sources AI is trained on — or because of gaps in what is known more broadly by the public. 

“These things are trained to generate information that’s already there. But not information that we don’t know.” (South Africa)

Some participants saw the opportunity for AI tools to help reporters create content if they provided original research and facts for the software to work with:

“…the software can help the person [journalist] but it’s not acceptable to write from scratch.” (Brazil, from Portuguese)

Participants had specific concerns about AI models’ ability to successfully generate information with characteristics that are particularly important in a news context. 

Recency: Multiple participants specifically raised concerns that AI tools would lack access to sufficiently recent information:

“News is often cutting-edge and I am concerned about how much AI had the opportunity to learn about breaking news.” (United States)

“[If] you ask it [AI chatbot] about information that has happened probably this year, you might find that it won’t have full information about it. But if you ask it about something that happened five or six years back, then it can give you the entire information. So for me, there is some danger there.” (South Africa)

Local relevance: Other participants voiced concerns that AI-generated news content would be unable to provide a local perspective for current and recent events. 

“So I don’t think the artificial intelligence has those spices, those local spices where you make news to be more interesting. So we’re going to lose a lot when it comes to journalism.” (South Africa)

Nuance: Some participants expressed concerns about AI-generated content “losing” important context and detail when working with large volumes of data, wondering how divergent information or lesser-known facts are incorporated into summarized data. 

“It gathers, but does not credit the original content creators — and may even combine content that was never written to be associated. AI removes ‘nuance’ — subtle expressions and meanings that are important.” (United States)

Neutrality or bias? Some participants said they believe that AI-generated content could be more neutral than that generated by humans:

“If you really want something to be unbiased, then computer-generated is the way to go because computers cannot be biased, partisan, racist, cranky, or anything else. In finance, a lot of bias has been removed by using AI algorithms for things like underwriting because computers cannot be biased the way people can be, they just compute.” (United States)

However, others raised concerns about the potential for bias in AI-generated content, citing bias in the models themselves or potential bias in the sources available to a particular AI tool.

“I don’t trust many things AI generated because who is generating the information that the AI program is using? What’s their biases? What angle is the information coming from? I’d be more comfortable knowing who and what company the engineer behind the AI program works for.” (United States)

These are widely discussed issues in technology and media circles, but most participants lacked familiarity with the debate. 

An Important Role for Journalists in AI Oversight

“I love what [AI] is doing to us, because it’s bringing technology and humans together. But we need to also manage that as well.” (South Africa)

Because of their concerns about data quality and the limitations of AI models, participants frequently described journalists as playing an important oversight role in a landscape that includes more AI-produced content. 

Journalists are expected to have strong research skills, and many participants saw a need for them to check the accuracy of information provided by AI: fact-checking information, providing nuance and supplying previously unknown information that would not be available to AI tools.

In some cases, they described the potential for journalists to draft television scripts or news articles using AI tools — but often with the caveat that additional research or fact-checking would be required.  

“…it is ok for anchors and organizations to have quick summaries but [they] still [need to be] diligent to do second-level research.” (United States)

“To make me more comfortable with computer-generated content, I would like to know whether it was thoroughly verified and fact-checked by a human.” (United States)

“There are systems that you need to follow. And also just to verify that this information is true and it will give you details in terms of the background of that source, whether it’s fake or is it verified information or not.” (South Africa)

Other participants noted that they would want journalists to review content for a more “human perspective” — to ensure that it doesn’t miss important framing or nuance.

“…it should be used as reference for help but still need human involvement especially on sensitive topic.” (United States)

“It’s great for reporting facts, but it might not be able to give that kind of analysis maybe or human input…” (Australia)

Participants also repeatedly returned to the idea that AI tools depend on journalists to supply the original research and facts the software works with:

“AI might be able to provide a summary of the “ingredients” to this story, but to simply copy/paste what AI generates would be irresponsible and would likely miss important perspectives and information.” (United States)

“[The AI] cannot go alone to collect and present the information. They are programmed. They depend on the journalist. The journalist, they collect the information, they feed the AI, so meaning the AI, they cannot work alone. They need the journalist in order to function…” (South Africa)

Several participants also noted that while AI could be used to summarize general, fact-based information, it was “unacceptable” for it to generate opinion-based content. This aligned with participants’ general definition of news, which they said should not include opinion.

“So, if it’s an opinion-based piece of news or an article or something, then it has to be written by the person. It has to be written by whoever is giving that opinion.” (Brazil, from Portuguese)

In other portions of our focus group discussion, many participants expressed a belief that journalists should be mission-driven and bring a strong moral or ethical approach to their work. This was mirrored in a common theme here: participants were most comfortable with generative AI usage by trusted individuals who would use it “responsibly.”

“…if it’s a person I trust, if that person uses it responsibly, it’s okay.” (Brazil, from Portuguese)

Several individuals also contended that AI was unlikely to be able to provide the necessary detail on topics such as politics and health, where nuance or rapidly changing information is required.

“Unless it was a very cut and dry topic, with only one right “answer”, I wouldn’t trust AI, because let’s be real — what topics are so black and white? Not many.” (United States)

Technical and Aesthetic Improvements Have Broad (But Not Universal) Support

“I think if it’s the manual labor part of things — if you’re putting a video together as a news organization … and it’s the frills and it’s the bows, it’s the presentation…. I think that’s good.” (Australia) 

Journalists are expected to communicate information to the public, and many participants saw potential advantages for reporters using the technology to improve the aesthetic or technical qualities of the work by assisting with drafting, copyedits and graphic design applications. 

As described above, participants are very familiar with the use of AI tools that help them improve their own writing mechanics. Most believed that generative AI technologies are appropriate for tasks such as grammatical checks and drafting emails, which they view as having clear rules that software could follow, with minimal oversight. 

“I don’t really have an issue with, you know, spell check or Grammarly, the little app that you can have to fix your grammar. I think that makes sense.” (Australia)

Many participants also said they are generally comfortable with journalists using generative AI tools to help “find better words” or make content “more professional,” in the words of a few South African participants.

“If you have done your own research, if you’ve written up your whole thing, you know and you’re just using it to edit it, polish it up, stuff like that, you’ve done like the groundwork yourself. I feel like I’m more, I’m more comfortable with that.” (Australia)

Other participants disagreed with the use of AI to edit or “polish” content, arguing that using AI would cause content to lose individuality. 

“For me, when I say it’s lazy, it’s because it doesn’t give you the component of writing your own words to deliver your own version and your own style.” (South Africa)

Visual Content Creation by Generative AI Is Strongly Opposed

Notably, no participants said that they supported news organizations and journalists using generative AI to create or enhance images and videos in their coverage, a finding supported by recent research from the Reuters Institute.

“I definitely like seeing what’s actually happening. I don’t want some AI generated computer-generated stuff in front of me.” (Australia) 

Participants frequently described synthetic, AI-generated images as fake; many had encountered faked images on social media, which increased their distrust of AI-generated imagery.

“It’s not real and shouldn’t be made.” (United States)

In the Australian groups, participants also discussed negative examples of news organizations that had used AI to alter real images (for example, expanding the length of a photo to include details that were not in the original and altering a female lawmaker’s dress to make it more revealing).  One participant also mentioned the use of an AI-generated host on a news program from Chinese state-run media: 

“It’ll be reporting on, like, accurate news, where they’ve an actual figure with the voice generated by AI. So everything is AI — but you can’t tell the difference.” (Australia)

In discussions with participants about their preferences for news content, we previously noted that participants named photos and videos as important elements of “on the scene” reporting that gave them confidence in the quality of news content. Likely because images are perceived as critical to understanding news events, synthetic or AI-altered images were widely seen as not acceptable in news content.

“I don’t think news sources should be using AI to create videos, because I prefer if they gave me the real video or, like, the real pictures of what happened.” (Australia)

Meaningful Transparency Is Critical

“It is okay to use AI as long as it’s stated beforehand. This would give people the ability to make their own choice whether to trust the information or not.” (United States)

The vast majority of participants said that they would feel most comfortable with news organizations using AI if they were informed about its use. Encouragingly, the reasons that people gave for wanting to be informed, and the types of information they would want in such a disclosure, were nuanced. 

  • An opportunity to learn: Many individuals said that labeling AI-generated content would give them the opportunity to evaluate the quality of the information themselves. Because they remain uncertain about the use of AI in news, participants often indicated that labeling would help them exercise their own judgment in an emerging context.

“I think labeling it would be good. It wouldn’t immediately turn me off something. I might be interested to read and see if it’s the same quality…” (Australia)

“…I think that people don’t tell us that they are used to this computer-generated content. […] And I have to know the source, because then I know if I can trust it or not.” (Brazil, from Portuguese)

  • Giving credit where due: Participants in several groups — notably Australia and the US — said that disclosure was required in order to provide a full picture of the various sources and creators of the content. Not disclosing the use of AI would be “taking credit” for work that was done by another entity, or akin to plagiarism.

“…I believe that if the image was created by artificial intelligence, as a consequence of copyrights, it has to be mentioned that it was created by artificial intelligence.”  (Brazil, from Portuguese)

“If you haven’t developed it yourself, you can’t put your name to it. You need to disclose this as the source.” (Australia) 

  • Organizational trust matters: For a few participants, disclosure is an important part of maintaining trust in the organization’s overall transparency and standards.

“Accountability. You need to be able to penalize the company, instead of just blaming it on the algorithm, because the algorithm cannot take responsibility.” (Australia)

“Just be open and honest. If you’re not gonna be honest, then you’re deceiving people and I don’t like that.” (Australia) 

For many participants, this means more than disclosing when AI is used — they were also interested in how AI was used and what benefits or improvements were gained from doing so.

“I just would want to know why you would need to use AI-generated information on a report. Like, what part are you using? Why would you want to do it?” (Australia)

Not all uses need to be disclosed. A few participants who found it acceptable to use AI only for limited technical purposes — spell check, graphic design, and related “polish” activities — said that they did not see a need for disclosure.

“It’s not necessary to inform me that they used artificial intelligence in writing.” (Brazil, from Portuguese)

A Key Moment in a Still Evolving Area

Broadly, participants seem to believe that we are at a moment of transition and that their experience with generative AI tools will change.

“The benefit is more access to information. One thing we must understand about AI is it is still in the beginning stages, so there is obviously going to be wrong things, but we are fixing it as we go. Facebook is not the same that you found in 2007, it is now advanced, so you can’t judge AI now.” (South Africa)

“It has the potential to be dangerous and […] human beings are gullible and take the easy way out. So. I think it’s opened a very dangerous door just the same as the Internet opened a very dangerous door and look, damage it’s done to our kids.” (Australia)

Participants often described the expansion of AI technology as unavoidable, despite their concerns about the implications of its broad use and adoption. They drew parallels to a variety of automation technologies and the spread of the Internet, citing concerns about the ways in which technology can transform society, sometimes for the worse. 

However, they also see ways in which AI can still be improved to avoid negative outcomes. The discussions demonstrated a desire for AI developers to ensure that models and data sources improve accuracy, for journalists to ensure the accuracy and nuance of information in their environment, and for news organizations to use AI in responsible, additive ways. As the technology continues to evolve, public expectations and perceptions will likely evolve as well. 

This essay is the second in a series of insights drawn from these focus groups. Even as CNTI uses the full focus group discussions to inform a much larger quantitative survey in the fall, we felt it was worth sharing some insights now. Forthcoming insight pieces will explore participants’ thoughts on uses of AI in journalism, the role of technology in getting informed, and decision-making around who to rely on and how to verify information.

Additionally, part of CNTI’s mission is to help synthesize research conducted across the community and the globe. To that end, it was a pleasure to see that several of the points discussed above come through in a recent report on AI in News produced by the Reuters Institute. Findings that reinforce one another in this emerging area are especially meaningful and helpful in designing further studies. CNTI will continue to look across the research community to both synthesize and contribute to this important area of work.

About the Defining News Initiative

The Defining News Initiative is an 18-month effort that seeks to understand how concepts of journalism, news and information access are being defined in countries around the world. In three different realms — in legislation, among the public and among journalists themselves — our research and analyses will provide clarity and insight on the importance these definitions play in safeguarding an independent news media, freedom of expression, and the public’s access to a plurality of news in ways that inform policy discussions and decision-making.

How We Conducted This Research

CNTI contracted with Langer Research Associates to recruit participants and focus group moderators for a combination of virtual (synchronous and asynchronous) and in-person focus groups in four target countries: Australia, Brazil, South Africa and the United States.

These countries were selected strategically to capture a range of geographic, cultural and political contexts, as well as different news environments. Our recruitment involved a screening questionnaire that asked potential participants about their information-seeking interest and behavior, prioritizing, but not relying exclusively on, responses from individuals who reported that they keep up with events and issues of the day in some capacity. We recruited a total of 91 participants across the four countries: 22 in Australia (two groups), 25 in Brazil (two groups), 29 in South Africa (three groups) and 15 in the U.S. The focus groups were conducted between June 3 and June 7, 2024. In our recruitment, we were intentional about maintaining diversity in gender and age.

Recruitment and focus group discussion materials were designed by CNTI researchers and reviewed by Langer Research Associates, local vendors and others with research and subject-matter expertise. All focus groups in Brazil were conducted virtually in Portuguese, and one focus group in South Africa was conducted in person, with participants conversing in both Zulu and English. For focus groups conducted in languages other than English, transcripts were translated into English.