Ethical AI in Journalism: A Discerning Eye in Times of War and Elections

The rise of AI in journalism presents both innovation and disinformation challenges. Lessons from Ukraine’s war underscore the importance of ethical AI usage in conflict zones. As the 2024 elections near, safeguarding media integrity is paramount.


By Anna Bulakh

In the rapidly transforming media and information space, the advent of artificial intelligence (AI) cuts two ways: it is a beacon of innovation and a potential harbinger of disinformation. In an age where digital technology blurs the lines between reality and fabrication, journalists need both the tools to identify and dispel falsehoods and the resources to integrate AI ethically into their reporting.

As we have seen around the world, from the war in Ukraine to the political stage in the United States, the rise of digital disinformation is making journalists’ mission to share facts and educate consumers increasingly difficult. We have moved beyond mere “deepfakes” to the challenge of powerful generative AI systems that can create text, audio, video and more capable of fooling even the most discerning of people. But we can learn from these challenges and support journalists’ efforts to act on our behalf as protectors of the truth.

The war in Ukraine serves as a vivid illustration of the critical, and dangerous, role of digital technology in war zones. Here, digital tools have been a double-edged sword – both disseminating crucial information and creating disinformation. 

It has been 10 years since the Maidan revolution, the moment Ukrainian society exercised its democratic right to determine its future – a future aligned with the values of democracy, of the European Union and its Western partners. Several months later, Russian proxies and the Russian military annexed Crimea and launched war in eastern Ukraine, where I’m from. The power of information became existential and still is. At that time, the social media platform VKontakte (not banned in Ukraine until 2017) became a channel for disseminating Russian propaganda and for leaking the personal data of Ukrainian protesters and defense volunteers. It took us eight years to build literacy about privacy, digital vulnerabilities and the importance of civil society’s engagement in building trust in information. Since 2014, we have seen a rise of fact-checking organizations and sustained discussion of the role of social media and of new technologies for information security.

In 2022, after Russia launched a more expansive invasion, we saw a new level of digital power and vulnerability. Messenger channels and new applications became essential to uniting the Ukrainian population and mobilizing the defense forces. The distribution of fact-checked information was critical to avoiding chaos during the first days of the full-scale invasion. While Russia dominated the information space in 2014, Ukraine had learned to cherish and embrace the power of digital tools in the years since. We invested in digitalization and broader mobile connectivity. New applications for reporting suspicious movements in your neighborhood were available in the first days of the full-scale war. The IT community mobilized and started developing new tools, from disinformation trackers to satellite imagery maps that supported information collection.

Those tools proved critical in 2022 when the Russian side released its first high-profile political deepfake: Ukrainian President Volodymyr Zelenskyy asking civilians to lay down their arms to Russian forces, a message he never conveyed. 

In 2023, we saw an improvement in quality with a new deepfake of the commander in chief of Ukraine’s armed forces calling for a coup d’etat. With the rise of AI and broader access to generative AI tools, should we be newly concerned, or should we rely on the same approaches we used to combat disinformation in the past? The answer is that AI will heighten the dilemma for us as societies, citizens and experts: what is the truth, and how should it be verified and disseminated?

These digital deceptions, swiftly debunked, underscore the importance of ethical AI usage in conflict zones where the line between truth and falsehood can significantly impact the societal fabric and morale. But as generative AI becomes more powerful, are journalists armed with the technological tools needed to safeguard society?

As the 2024 elections approach, the role of generative AI stands at a crucial juncture. It wields the power to create hyper-realistic media, yet this capability comes with risks of disinformation and manipulation. The key concern is AI’s ability to craft content indistinguishable from authentic news coverage. The antidote lies in deploying robust provenance, watermarking, and detection tools. These technological sentinels are crucial in assuring the authenticity of media content, a cornerstone in preserving the integrity of the democratic process.

Creating ethical boundaries for AI in the information domain, including journalism, is essential. This task involves formulating a framework rooted in transparency, accountability and rigorous verification of AI-generated content. Media houses and technology firms must collaborate to develop standards and practices for AI’s ethical use, so that the pursuit of innovation in journalism does not compromise the sanctity of truth and integrity.

My involvement with the advisory committee at the Center for News, Technology, and Innovation (CNTI) — a newly launched initiative aimed at fostering collaboration among leaders in journalism, technology, research and policy — stems from this very need. During a recent event on AI in journalism convened by CNTI, a significant insight emerged regarding the prevailing discourse: a common tendency to blur the distinctions between broad challenges inherent to the internet and social media era and those specifically related to AI technologies. Such conflation muddies the waters of understanding, hindering our ability to address the unique and emerging risks associated with the rapidly advancing domain of generative AI.

Effective regulatory oversight is vital in guiding the ethical application of AI in journalism. Yet it succeeds only when there are working technical tools to support trust. Defenders of the truth should be armed. Governments and regulatory bodies can develop effective guardrails only through collaboration, focusing on transparency in AI-generated content and safeguards against its misuse for misinformation. This is particularly relevant in war zones or high-stakes events like elections, where accurate information can significantly influence public perception and policy.

Fact-checkers and social media platforms are pivotal in combating AI-generated misinformation when equipped with the right technical tools. Their critical role in maintaining a factual and trustworthy digital information ecosystem becomes even more pronounced during sensitive periods like elections and conflicts, as Ukraine’s experience shows.

To make this happen, technology and policy players must provide better solutions for verifying information, along with the infrastructure to interact quickly and share reports.

In this digital epoch, where AI in journalism weaves a complex narrative of ethical challenges and opportunities, a vigilant and proactive approach to ensuring AI’s ethical use is imperative. This journey involves developing advanced tools for content verification, establishing clear ethical guidelines, and fostering collaboration between media outlets, technology providers and regulatory bodies. If we draw lessons from Russia’s war in Ukraine and from the challenges facing democratic processes like elections, our commitment to ethics and responsibility becomes not just a choice but a necessity to protect our democracies and societies.