Media, Technology, and Civic Institutions Are Up to the Task of Dealing with Negative AI-Generated Election Content

In a discussion often overwhelmed by bad news, whether criticism of the media or despair about manipulated content, there are glimmers of hope that point to a better and more effective way forward.


By Taylor Barkley

At my organization, the newly launched Abundance Institute, we are tracking the use of artificial intelligence (AI) in the US election. This work responds to years of hype, from tech commentators like Aza Raskin claiming that “2024 will be the last human election,” and from politicians like Senator Richard Blumenthal, who opened a hearing on AI election deepfakes by stating, “AI is already being used to interfere with our elections, sowing lies about candidates and suppressing the vote.”

Overly negative hype about AI in general was in fact the subject of my most recent paper, “The AI Technopanic and Its Effects,” coauthored with Nirit Weiss-Blatt, PhD, and Adam Thierer. The paper argues that overly negative media coverage sours public and policymaker opinion, which can lead to the creation of innovation-stifling regulations. And Dr. Weiss-Blatt’s book, The Techlash and Tech Crisis Communication, provides a rigorous analysis of an overly critical tech media ecosystem.

However, in our study of recent uses of AI in election material, there was a bright spot that bucked this trend. A tech-critical media helped the public get to the truth. The episode also highlighted the important interplay among journalists, technology, social media platforms, and new institutions working together to help the public understand when AI is used in election material.

As we were compiling the summary for our AI and elections tracker, it became clear that a story from 2023 stood out as one of the most prominent uses so far of AI-generated content in election material. An affiliate of Florida Governor and former presidential candidate Ron DeSantis’s campaign used AI-generated images in a video without any label. We described the incident in our Substack post:

  • On June 5, 2023, DeSantis War Room, a communications arm of the former Ron DeSantis presidential campaign, posted a video on X that showed real clips and audio of President Trump explaining why he didn’t fire Anthony Fauci. Interspersed in the video (between 00:24 and 00:30) was an image collage that included both real and apparently AI-generated images of President Trump hugging and showing affection to Fauci. The video did not note that it included AI-generated or manipulated images. According to NPR reporting, the fact-checking organization AFP detected the fake images two days after they were posted. The post received a community note on X. The story was also widely covered by major news organizations. The three fake collage images have distinct characteristics of being AI-generated with a tool like DALL-E or Midjourney, though it is not known which system was used. On January 21, 2024, DeSantis dropped out of the presidential race.

Note the flow of events: a media-funded fact-checking organization identified the content within two days, Community Notes contributors on X labeled the video with context (the exact timing of the note is unclear), and multiple media outlets (Reuters, New York Times, CNN, NPR, The Verge) all reported on the presence of AI content.

The bright spot in the incident described above is the distributed yet collaborative effort among journalists, media companies, businesses, social media platforms, and individual internet users. That collaboration shows that systems already exist for dealing with fake AI-generated content.

Each stage of the verification process highlighted a particular set of institutions that could provide a useful model for dealing with AI-generated mis- and disinformation in the future.

The Third Party Institution

Most of the media stories that reported on this incident credited the fact-checking organization AFP Fact Check with flagging it first. AFP Fact Check is, as its website describes, “…a department within Agence France-Presse (AFP), a multi-lingual, multicultural news agency whose mission is to provide accurate, balanced and impartial coverage of news wherever and whenever it happens in the world on a continuous basis.” Its funding comes from the French government and from corporate clients in media and technology; its operating expenditures in 2021 were 309 million euros. Because of this funding and its place in the global media ecosystem, AFP was likely able to quickly notify wire and news outlets. Other fact checkers and watchdog groups, both for-profit and nonprofit, could easily fill a similar role.

The Role of Platforms and Community Notes

In addition to the fact check from AFP, the original post received a Community Notes notification on X, which remains in place. It reads, “At 0:25 in this video, a collage of photos of President Trump hugging Anthony Fauci appear. These pictures are not real; they are AI-generated images.” Community Notes started as Birdwatch on Twitter in 2021 and was designed to democratize the platform’s content moderation efforts. This increases the capacity of large social media platforms to moderate content, which is crucial given the volume of content posted. After Elon Musk’s purchase of Twitter and its rebrand to X, Birdwatch was reactivated and renamed Community Notes. According to X, there were over 500,000 Community Notes contributors as of May 2024. Contributors must meet certain criteria and are approved by employees of X; not just anyone can participate. Early indications from some studies suggest that it is effectively combating misleading information. Other major platforms like YouTube, encouraged by the success of Community Notes, are experimenting with similar efforts.

Media

Stepping into the discussion alongside fact checkers and X Community Notes were media outlets and wire services. These journalists, companies, and institutions provided the necessary contextual reporting on what happened, including coverage of the Community Notes label and AFP’s fact check. They served as a third source of validation that the DeSantis video included content likely generated by AI. Their reach was likely far wider than that of Community Notes or AFP, so, in addition to serving as validators, they performed the crucial function of spreading the word.

The Path Forward

In the midst of concern and worry about the impact of AI-generated media content in elections, this case study provides a reason for hope. Media outlets play a vital role, providing reach that users of X or dedicated fact-checking organizations do not have.

In the long run, media outlets working in conjunction with platforms, platform users, and fact-checking organizations could help restore lost trust in media sources. The very nature of this multi-institutional effort means media outlets and journalists aren’t alone in their verification work. The landscape also benefits audiences, who might trust one method over another. All of these institutions reciprocally support one another, making the system as a whole stronger.

These positive developments are also happening in the context of government threats to speech on platforms and threats against journalists. The Future of Free Speech at Vanderbilt University has tracked a notable rise in speech-restrictive policies across 22 democracies. As its most recent report says, “Except for 2015, every year witnessed a majority of developments limiting expression, with a noticeable upsurge in 2022.” Journalists around the world face constant challenges and pressure. Here in the US, the Supreme Court’s recent decision in Murthy v. Missouri, although laying some groundwork for restricting government jawboning, did not provide the clarity hoped for by free speech advocates. A system with multiple, different modes of accountability is better able to withstand censorship efforts.

Given these trends, in a global ecosystem of information abundance and new means of producing convincing content with AI tools, there is a greater need than ever for a robust, distributed, multi-institutional system of information verification. This case study provides hope that current systems and efforts can rise to the task of telling the real story. Then it is up to readers whether or not they believe the sources.

Policymakers should take note. Instead of pursuing new laws banning AI-generated content in election material, which would have profound free speech implications, they should take stock of how well current tools are actually performing. At least in the US, those tools are performing well enough to counter calls for government bans on AI-generated content.

Conclusion

There remain many challenges in an information environment as dense as the 21st century’s. This example is by no means meant to imply that all worries are over, whether concerns about an overly critical media industry, disinformation online, or related issues. Much remains to be done. What this example does demonstrate, however, is that there is a place for a multi-institutional effort to spotlight the high-profile instances, which are the ones of greatest concern. In a discussion often overwhelmed by bad news, whether criticism of the media or despair about manipulated content, there are glimmers of hope that point to a better and more effective way forward.

Taylor Barkley is the Director of Public Policy at the Abundance Institute.