CNTI’s Assessment
The impact of AI is massive and widespread, and the landscape is changing rapidly. We are already seeing its effects on news reporting, circulation and consumption. Taking advantage of the benefits while limiting the harms will require a careful balancing act in our deeply complex information environment. For newsrooms, generative AI tools offer benefits for productivity and innovation. At the same time, they risk introducing inaccuracies, raising ethical issues and undermining public trust.
Policy deliberation: Legislation will need to offer clear and consistent definitions of AI categories, grapple with the repercussions of AI-generated content for copyright and civil liberties, and provide accountability for violations. The failure of Canada’s Artificial Intelligence and Data Act (AIDA) to pass also suggests that deliberations will need to include avenues for meaningful public participation.
Public understanding: As newsrooms implement AI, they need to remember that while communicating about how they are using AI is important, transparency alone is not enough. The public largely lacks a nuanced understanding of journalistic practices and needs that context to make sense of AI. That means transparency initiatives must be broader than initially conceived and include information about human journalists’ work.
Governance: Without policy guidance, technology companies’ own decisions will continue to dictate how AI is developed, implemented and used. Further downstream, publishers will also be responsible for establishing transparent, ethical guidelines for AI use and providing education about it. Forward-thinking collaboration among policymakers, publishers, technology developers and other stakeholders is critical to strike the right balance and support public access to information.
The Issue
Early forms of artificial intelligence (prior to the development of generative AI) have been used for years to both create and distribute online news and information. Until the recent wave of accessible generative AI systems (such as DALL-E, ChatGPT and Gemini), public debates around the rise of artificial intelligence largely focused on its potential to disrupt manual labor and operational work. These newer systems, however, have raised concerns about their potential for
- destabilizing white-collar jobs and media work,
- abusing copyright (both against and by newsrooms),
- giving the public inaccurate information and
- eroding trust.
At the same time, these technologies also create new pathways for innovation in news production, such as generating summaries or newsletters, covering local events (with mixed results), finding stories and moderating comments sections.
As newsrooms experiment with using generative AI, some of their practices are being criticized for errors and a lack of transparency. News publishers themselves are claiming copyright and terms-of-service violations by those using news content to build and train new AI tools. In some cases, they are forming coalitions or striking deals with tech companies. They are also concerned that generative AI tools are changing how people search online, shifting traffic away from news content.
These developments introduce novel legal and ethical challenges for journalists, creators, policymakers and social media platforms. These challenges include how publishers use AI in news production and distribution, how AI systems draw from news content and how AI policy around the world will shape both.
What Makes It Complex
Lack of agreement about what constitutes AI makes scoping policy a challenge.
What constitutes “artificial intelligence” is contested and difficult to define, depending on how broad the scope is (e.g., whether it includes or excludes classical algorithms) and whether one uses technical or “human-based” language. This widespread definitional disagreement can lead to misalignment in policy conversations (e.g., policymakers working from different but unarticulated definitions) and to unclearly scoped policies.
In considering legislation, it is unclear how to determine which AI news practices would fall within legal parameters and how news practices differ from other AI uses.
What specific practices and types of content would be subject to legislative policy? How would it apply to other types of AI? The proprietary and fluid nature of AI systems introduces new challenges for regulation. As policymakers attempt to legislate AI, they must consider the future risks of each system. These technologies will continue to evolve at a pace policy cannot reasonably keep up with, and policymakers must determine how best to protect citizens’ and creators’ fundamental rights and safety.
The quantity and type of data collected by generative AI programs introduce new privacy and copyright concerns.
Among these concerns is what data is collected and how it is used by AI tools, in terms of both the input (the online data scraped to train these tools) and the output (the automated content itself). Additionally, publishers have questioned whether their articles are being used to train AI tools without authorization, potentially violating terms-of-service agreements. In response, news publishers have turned to both litigation, seeking penalties against AI companies for using their copyrighted content without permission, and licensing agreements, which allow tech companies to pay to use news content to train generative AI models. Another CNTI primer delves into the copyright challenges of AI, which present ethical dilemmas over profiting from AI models trained on copyrighted creative work without attribution or compensation.
Establishing transparency and disclosure standards for AI practices requires a coordinated approach between legal and organizational policies.
While it may make sense to address some areas of transparency through legal requirements (like current advertising disclosures), other areas are more appropriately addressed organizationally.
In many cases, it will be up to tech companies and publishers to establish their own principles, guidelines and policies for navigating the use of AI within their organizations, ranging from appropriate applications to labeling to image manipulation. But these standards will need to both fit alongside legal requirements and be similar enough across organizations for the public to understand them. Newsroom education and communication will also be critical, as both journalists and the public are often unsure of how, or to what extent, news organizations rely on AI. The public generally assumes newsrooms implement AI without human oversight, which is rare but gets a lot of attention when it occurs. For technology companies specifically, there is ongoing debate over requirements for algorithmic transparency and the degree to which legal demands for this transparency could enable bad actors to hack or otherwise exploit the system in harmful ways.
The use of generative AI tools to create news stories presents a series of challenges around providing fact-based information to the public.
Not only do AI tools produce false or entirely made-up information, but they are also “confidently wrong,” creating convincingly high-quality content and offering authoritative arguments for inaccurate information. To date, licensing deals have largely failed to mitigate this problem. Distinguishing between legitimate and illegitimate content (or even satire) is becoming increasingly difficult, particularly as counter-AI tools have so far been mostly ineffective. Further, it is easy to produce AI-generated images or content, which can be exploited by spammers who churn out AI-generated “news” content or antidemocratic actors who create scalable and potentially persuasive propaganda and disinformation. While it is clear automation offers many opportunities to improve news efficiency and innovation, it also risks further commoditizing news and undermining public trust in it.
Content generators and policymakers need to be aware of inherent biases in generative AI tools and guard against them.
Because AI technologies are usually trained on massive swaths of data scraped from the internet, they tend to replicate existing social biases and inequities. For instance, photo-editing apps can produce hypersexualized and racialized images, and other image generation apps (like DALL-E and Stable Diffusion) can be used to amplify stereotypes and produce fodder for harassment or disinformation. ChatGPT has been shown to generate violent, racist and sexist content (e.g., offering less risky and more patronizing financial advice to users with jobs associated with women). Additionally, using AI systems in social and cultural settings for which they were not intended, along with the human effort in places like Kenya to clean up harmful AI outputs, creates problems for labor and data control. Finally, while natural language processing is rapidly improving, AI tools’ training in dominant languages makes it harder for those who speak marginalized languages to access information. Researchers are working to reduce some of these biases, as are participatory grassroots organizations; however, these biases are deeply rooted and the tools are used globally, so fixing these problems will be difficult.
State of Research
Artificial intelligence is no longer a fringe technology. Research finds three-quarters of companies report AI adoption as of 2024. Experts have begun to document the increasingly critical role of AI for news publishers and technology companies, both separately and in relation to each other. And there is mounting evidence that AI technologies are routinely used both in social platforms’ algorithms and in everyday news work, though the latter is often concentrated among larger and upmarket publishers who have the resources to invest in these practices.
There are limitations to what journalists and the public understand when it comes to AI. Research shows there are gaps between the pervasiveness of AI uses in news and journalists’ understandings of and attitudes toward these practices. Audience-focused research on AI in journalism consistently finds that news consumers often cannot discern between AI-generated and human-generated content. Audiences also perceive certain types of AI-generated news as less biased than human-written news, despite ample evidence that AI tools can perpetuate social biases and enable the development of disinformation.
There is a growing base of empirical research on AI, though it remains more qualitative than quantitative; this allows us to answer some important questions but makes a representative assessment of the situation difficult. Theoretical work has focused on the changing role of AI in journalism practice, the central role of platform companies in shaping AI and the conditions of news work, and the implications of AI dependence for journalism’s value and its ability to fulfill its democratic aims. Work in the media policy space has largely concentrated on European Union policy debates and the role of transparency around AI news practices in enhancing trust.
Future work should prioritize evidence-based research on how AI reshapes the news people get to see, both directly from publishers and indirectly through platforms. AI research focused beyond the U.S. and other economically developed countries would offer a fuller understanding of how technological changes affect news practices globally. On the policy side, comparative analyses of use cases would aid in developing transnational best practices in news transparency and disclosure around AI.
57% of companies based in emerging economies reported AI adoption in 2021 (McKinsey, 2021)
67% of media leaders in 53 countries say they use AI for story selection or recommendations to some extent (Reuters Institute for the Study of Journalism, 2023)
Notable Studies
State of Legislation
The latest wave of AI innovation has, in most countries, far outpaced governmental oversight or regulation. Regulatory responses to emerging technologies like AI vary by country and range from direct regulation to soft law (e.g., guidelines) to industry self-regulation. Some governments, such as Russia and China, directly or indirectly facilitate (and thus often control) the development of AI in their countries, allowing them to collect and use individuals’ data. Others attempt to facilitate innovation by involving various stakeholders. Some, like the EU, actively seek to regulate AI technology and protect the public against its risks.
These differences reflect a lack of agreement over what values should underpin AI legislation or ethics frameworks, and they make global consensus on AI regulation challenging. That said, legislation in one country can have important effects elsewhere: the EU AI Act, for example, inspired legislation in Canada and Brazil. Because of this, it is important that those proposing policy and other solutions recognize global differences and consider the full range of potential impacts without compromising the democratic values of an independent press, an open internet and free expression.
Legislative policies specifically intended to regulate AI can easily be weakened by a lack of clarity around what qualifies as AI, making violations hard to identify and rules hard to enforce. Given the complexity of these systems and the speed of innovation in this field, experts have called for individualized and adaptive provisions rather than one-size-fits-all responses. Recommendations for broader stakeholder involvement in building AI legislation also include engaging groups, such as marginalized or vulnerable communities, that are often most impacted by its outcomes.
Finally, as the role of news content in the training of AI systems becomes an increasingly central part of regulatory and policy debates, responses to AI developments will likely need to account for the protection of an independent, competitive news media. Currently, this applies to policy debates about modernizing copyright and fair use provisions for digital content as well as collective bargaining codes and other forms of economic support between publishers and the companies that develop and commodify these technologies.
Notable Legislation
Archived Materials
CNTI regularly updates its issue primers. For archived research and notable articles for the AI in Journalism issue primer, please visit this page.