Synthetic Media & Deepfakes

How do we protect societies from synthetic media and “deepfakes”?


Deepfakes, a form of synthetic media that uses artificial intelligence (AI) to create realistic depictions of people and events, have proliferated in recent years. They raise many questions about how such content affects journalists, fact-based news and the spread of mis- and disinformation. Addressing these concerns requires weighing both freedom of expression and safety, and policies targeting deepfakes must be clear about what types of content qualify. Detection technologies and provenance approaches are developing rapidly, but they are unlikely to prevent all potential harms from AI-altered content. Additional research should consider (1) what effects deepfakes have on journalism, (2) how content labeling addresses concerns about deepfakes (and what types of labels are most effective), (3) what international standards should be applied to confirm content authenticity and (4) how best to teach the public to identify synthetic media.

Manipulated imagery has existed for over 150 years, but it has reached a new level with “deepfakes.” The term “deepfake” originated in 2017 to describe audio and video manipulated with the assistance of artificial intelligence (AI) to resemble a real person, even when the person portrayed did not say or do what the content depicts. Deepfakes are a subset of “synthetic media,” which includes audio, image, text and video content created with the assistance of AI. Conversations continue about how to differentiate synthetic audiovisual content from deepfake audiovisual content. As with many topics CNTI covers, the definitions of these terms remain a work in progress, and definitional clarity is important when considering policy.

The number of deepfakes online increased tenfold from 2022 to 2023. While some research questions the degree to which harm can be directly attributed to manipulated media, there is evidence of such harm in countries such as Slovakia, the United Kingdom and the United States. This is especially alarming amid a record-breaking number of national elections being held in 2024 and growing global concern about threats to national stability. A March 2022 deepfake, for example, depicted the Ukrainian President falsely ordering his country’s military to surrender.

To date, the highest-quality, most convincing deepfakes require large amounts of training data and considerable time to produce. Most actors intending to cause harm are therefore likely to pursue less resource-intensive means of spreading false narratives, such as the disinformation tactics discussed in a separate CNTI issue primer. But as technological innovations advance, deepfakes are rapidly becoming easier to make and more persuasive.

Alongside worries about the direct impact of any single deepfake is concern about a “liar’s dividend”: the sowing of further distrust in “real news,” the news media and government figures.

Responses to the growth in deepfakes are occurring on several fronts. Many online platforms (e.g., Facebook, Instagram, TikTok, X and YouTube) have begun implementing disclosure policies that require advertisements using AI-created content to be labeled. Others are banning certain types of synthetic material, creating training resources to help combat these images and working on ways to embed content with tags that confirm authenticity.

News organizations have also developed online training courses to help identify deepfakes. However, interviews with expert fact-checkers reveal that, while deepfakes are a concern, many see more critical threats in text, images, audio and/or video taken out of context (i.e., “decontextualized”) and in other forms of manipulated media such as “cheap fakes,” in which information is recontextualized into a false narrative.

Methods to detect synthetic media have advanced over the last several years. These identification technologies examine the shadows, geometry, pixels and/or audio anomalies of suspected synthetic media and look for hidden watermarks to evaluate authenticity. Detection challenges have encouraged researchers and the public to experiment with new ways to accurately identify deepfakes. While these techniques will almost certainly be unable to fully erase the threats of synthetic media, they do offer steps toward establishing guardrails to protect the public’s access to authentic information. To identify how to best implement detection methods, further collaboration across various sectors (e.g., technology, communications, policy, government, etc.) is needed. 
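As a rough illustration of the watermark-checking idea mentioned above, the sketch below implements a toy spread-spectrum scheme in Python: a secret pseudo-random pattern is added faintly to an image, and a verifier who knows the pattern projects the image onto it to test whether the mark is present. Real systems embed marks in transform domains, use cryptographic keys and combine many detection signals; the function names, embedding strength and threshold here are assumptions made only to keep the example concrete.

```python
import numpy as np

def embed_watermark(image: np.ndarray, pattern: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """Add a faint copy of a secret pseudo-random pattern to the image (toy scheme)."""
    return image + strength * pattern

def watermark_score(image: np.ndarray, pattern: np.ndarray) -> float:
    """Project the mean-removed image onto the pattern: ~strength if marked, ~0 if not."""
    centered = (image - image.mean()).ravel()
    p = pattern.ravel()
    return float(centered @ p / (p @ p))

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    original = rng.uniform(0, 255, size=(256, 256))   # stand-in for a real photo
    pattern = rng.standard_normal(original.shape)     # secret key shared with verifiers
    marked = embed_watermark(original, pattern)

    print(f"unmarked score: {watermark_score(original, pattern):.2f}")  # close to 0
    print(f"marked score:   {watermark_score(marked, pattern):.2f}")    # close to 4
    # A verifier would flag media that claims to carry the mark but scores near zero,
    # or treat a strong score as one authenticity signal among several.
```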

Most countries do not have specific legislation on synthetic media, but among those that do, policies fall into two general categories: (1) banning all deepfake content that does not obtain consent from the individual(s) depicted or (2) requiring disclosure and/or labeling of deepfake content. Deepfake-related regulation is particularly complex due to many countries’ protections for freedom of speech and expression. 

Clearly delineating what qualifies as a deepfake is difficult but critical.

Governments and researchers are confronting how best to differentiate deepfakes from other types of synthetic media. For instance, there is a question about whether the material depicted must be deceptive in nature to be classified as a “deepfake.” Other definitional considerations revolve around intent, harm and consent. For example, a 2019 synthetic video of soccer star David Beckham speaking nine languages was intended to disseminate factual information about malaria, but it used deepfake technologies to make the dialogue sound authentic. The intent was not to deceive or cause harm, yet the video is still widely considered a deepfake because of how it was made. On the flip side, altered images have existed for over 150 years without the use of artificial intelligence, and they can be just as intentionally deceptive as AI-generated deepfakes. The spectrum of synthetic media becomes even more complex with the inclusion of “shallowfakes” and “cheap fakes,” forms of manipulated media that do not require advanced technological tools. Better delineating what types of content are classified broadly as synthetic media versus specifically as “deepfakes” (a subset of synthetic media) is crucial to separate benign and beneficial uses (e.g., for education and entertainment) from harmful ones.

While developing software to detect and counter deepfakes requires strong digital infrastructure and financial resources that only certain countries have available, new labeling and disclosure tools are making methods for addressing deepfakes more accessible globally.

Developing independent software for detecting and countering deepfakes is expensive, but tools to identify whether media have been manipulated are becoming available for wider use. One potential strategy is to use trained human graders in combination with pre-trained AI detection models; researchers find these combined approaches can outperform any single detection method. In response to the growing number of deepfakes, content creators and the technology industry have also begun developing ways to tag and label manipulated media. These include both direct and indirect disclosure approaches to maintain transparency and assert provenance (i.e., the origin and history of content) as well as different types of content labeling. Watermarking is one technique that can be visible to users or embedded in media to certify its authenticity. Arriving at a global standard for this type of labeling should be a priority. The Coalition for Content Provenance and Authenticity (C2PA) is one possible standard and has received support from Adobe, Google, Intel and Microsoft, among many other organizations. While technology tools and disclosure and labeling requirements help greatly in addressing deepfakes, they are unlikely to remove all mis- and disinformation from the news ecosystem, so understanding how to mitigate threats from all sources is critical for promoting fact-based news.
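To make the combined human-plus-model approach concrete, the minimal sketch below blends a detection model’s probability with a human grader’s rating in log-odds space, so a confident judgment from either side can outweigh an uncertain one. The equal weighting, the 0-to-1 human rating and the decision threshold are illustrative assumptions, not a published protocol.

```python
import math

def combined_fake_probability(model_prob: float, human_prob: float, model_weight: float = 0.5) -> float:
    """Blend a detection model's probability and a human grader's rating in log-odds space."""
    def logit(p: float) -> float:
        p = min(max(p, 1e-6), 1 - 1e-6)   # clamp to avoid infinities at exactly 0 or 1
        return math.log(p / (1 - p))

    blended = model_weight * logit(model_prob) + (1 - model_weight) * logit(human_prob)
    return 1 / (1 + math.exp(-blended))    # map back to a probability

# Example: the model is fairly confident the clip is synthetic; the reviewer leans that way too.
score = combined_fake_probability(model_prob=0.92, human_prob=0.60)
print(f"combined probability of manipulation: {score:.2f}")
if score > 0.8:                            # escalation threshold chosen for illustration
    print("escalate to a senior fact-checker and consider labeling as likely manipulated")
```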

Efforts to regulate deepfake content must be compatible with laws protecting freedom of speech and expression.

Governments need to determine where to draw the line between legal and illegal deepfake content. In countries that legally protect free speech, deepfakes present difficult questions about what content is legal or illegal. To a degree, sharing false statements is protected under freedom of speech and expression laws, so an outright ban on all deepfake content would likely conflict with those protections, making the regulation of deepfakes particularly complex.

Synthetic media create opportunities for journalists to protect their identity in threatening situations, but deceptive behavior runs counter to many news outlets’ codes of ethics on misrepresentation.

The technologies behind deepfakes may allow journalists and their sources to remain anonymous by altering their appearance and voice when working on sensitive projects. However, this runs counter to many outlets’ codes requiring journalists to be honest and transparent in their reporting, as well as policies about deception. While news organizations may outline rare circumstances in which journalists can protect anonymity or engage in deceptive practices, these are typically allowed only for matters of public interest or personal safety. Determining when journalists can use deepfakes in their work is an important ethical consideration.

While deepfakes are a relatively new technological innovation, much research has explored how individuals interpret false information presented in different forms of media (e.g., text, audio and/or video). In one recent study, even when warned that they would encounter deepfake content, nearly 80% of participants failed to identify the only deepfake in a series of five videos. Other work has found that up to half of respondents in a nationally representative sample cannot differentiate between manipulated and authentic videos. These findings indicate that deepfakes are difficult to counter and that correcting false information is necessary to support a well-informed public.

There are also evidence-based reasons for hope. Research suggests people can be trained to better detect deepfakes. Interventions that emphasize the accuracy of information and the low cost of producing consumer-grade deepfakes (which can now be made in a matter of minutes using apps and websites) show positive results in countering the negative effects of this type of content.

To complement training individuals to detect deepfakes, researchers are also studying how responsive individuals are to informational fact checks and labels on manipulated media:

  • Fact-checking politicians’ false statements has been shown to decrease beliefs that those statements were factual, though further research into how partisanship shapes interpretation of synthetic media is crucial. Developing digital media literacy approaches, such as teaching how to spot false information, will likely be important in helping individuals recognize high-quality, fact-based news.
  • Findings suggest that tagging information as false, while beneficial, also has consequences for true, authentic information: general, broad disclosures about false information can cause viewers to discount accurate news. Whether the public will grow accustomed to looking for labels and other disclosures as a way of judging content’s veracity remains an open question.
  • Untagged false information is interpreted as more accurate than false information that has been tagged as false. This suggests that labels can be effective, but they must be comprehensive and cover all applicable synthetic media.
  • The value of asserting provenance, or tracking the authenticity and origin of content, has also been studied. While the public may not widely grasp provenance as a concept, provenance information has been shown to decrease trust in deceptive media when presented to individuals (a bare-bones sketch of the mechanism follows this list). Further education on the importance of provenance for a fact-based news ecosystem is needed.
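For readers who want a concrete sense of what “asserting provenance” involves, the sketch below shows a bare-bones version of the underlying mechanism: a publisher fingerprints the exact bytes it released and signs that fingerprint, so anyone holding the corresponding key can later check whether a file still matches the record. Standards such as C2PA are far richer (certificate-based signatures, edit histories, embedded manifests); the shared key and record fields here are simplifying assumptions for illustration only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key held by the publisher

def create_provenance_record(media_bytes: bytes, source: str) -> dict:
    """Fingerprint the media and bind it to a claimed source."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode() + source.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "source": source, "signature": signature}

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Re-derive the fingerprint and signature; any edit to the bytes breaks the match."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode() + record["source"].encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

original = b"...original video bytes..."
record = create_provenance_record(original, source="Example Newsroom")
print(json.dumps(record, indent=2))

print(verify_provenance(original, record))                 # True: untouched file
print(verify_provenance(original + b" tampered", record))  # False: content was altered
```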

Future research should continue to study (1) how individuals engage with synthetic content and (2) how persuasive the public finds this content, which is especially relevant given how realistic and life-like these media are becoming. In response to the increased presence of synthetic media, researchers should also consider what techniques – including labeling and disclosure – are most effective for mitigating the negative effects stemming from deepfake content. Understanding how to respond to and “treat” individuals who have encountered deepfake content is an important consideration that can support fact-based news endeavors. Finally, research should also examine how newsrooms will confront the proliferation of deepfake content and its potential harms.

Most countries do not have policies that specifically target deepfakes. Existing legislation grapples with how to accommodate less harmful and/or benign uses of synthetic media (e.g., art, education, comedy) while addressing a broad range of harmful uses (e.g., nonconsensual adult content, deceiving consumers). Defining the harmful, illegal uses of deepfake content is critical for effective policy. Legislators attempting to protect societies from deepfake content must decide whether to ban AI-generated deepfakes outright or to restrict what such content may contain and regulate how it is presented to audiences (e.g., through labeling).

Another crucial consideration for public policy is how to classify deepfake media content. Much of the legislation being debated and passed in U.S. states addresses images, videos and audio. However, some pieces of legislation also address text-based synthetic media, while others focus solely on videos, even those manipulated without generative AI. Global standards for what types of media are classified as deepfake content might be helpful, but only if they fully reflect the ways people receive information and interact with news and allow for future developments.

Current approaches across several U.S. states, the European Union and China involve disclosure requirements or labeling of content that has been generated using AI, including watermarking, content labels and disclaimers. The parties responsible for enforcing these regulations have included government agencies, but concerns persist that the technology is moving more quickly than legislation and oversight. In many countries, existing laws that deepfake content may implicate (e.g., right to privacy, defamation or cybercrime) do not specifically address synthetic media. These gaps make regulating manipulated content difficult, and enforcement is also problematic when the content creator resides outside a country’s jurisdiction.

Experts recommend focusing policies on the general harms of technological innovations rather than on the technologies themselves, as it is likely impossible to detect and/or ban all manipulated synthetic media. There are also concerns that deepfake regulation will curtail freedom of speech and expression. Future legislation should be crafted so that the costs of regulation do not outweigh its benefits; in particular, regulations ought to balance the need for freedom of expression (and an open internet) against the harms of mis- and disinformation.

Notable Articles & Statements

A look at global deepfake regulation approaches
Responsible Artificial Intelligence Institute (April 2023)

Artificial intelligence, deepfakes, and disinformation
RAND Corporation (July 2022)

Deepfakes and international conflict
Brookings Institution (January 2023)

From deepfakes to TikTok filters: How do you label AI content?
Nieman Lab (May 2021)

Increasing threats of deepfake identities
U.S. Department of Homeland Security (n.d.)

Regulating AI deepfakes and synthetic media in the political arena
Brennan Center for Justice (December 2023)

Snapshot paper – Deepfakes and audiovisual disinformation
Centre for Data Ethics and Innovation (September 2019)

Tackling deepfakes in European policy
European Parliamentary Research Service (July 2021)

Key Institutions & Resources

Coalition for Content Provenance and Authenticity: Organization that develops technological standards for identifying authentic media.

Partnership on AI: Non-profit organization that is dedicated to understanding AI through cross-industry discussions and partnerships to promote positive outcomes for society and has an initiative for synthetic media.

Responsible Artificial Intelligence Institute: Non-profit organization focusing on how to assist organizations with responsible AI usage and implementation.

University of North Carolina Center on Technology Policy: Public policy-focused organization addressing current technology issues and providing meaningful policy considerations.

WITNESS: Non-profit organization that provides information about how individuals around the world may use technology and video recordings to improve and secure human rights.

Notable Voices

David Doermann, School of Engineering and Applied Sciences, University at Buffalo

Hany Farid, Electrical Engineering & Computer Sciences and the School of Information, University of California, Berkeley

Henry Ajder, Founder, Latent Space

Matthew Groh, Kellogg School of Management, Northwestern University

Matthew Wright, Department of Cybersecurity, Rochester Institute of Technology

Maura Grossman, Cheriton School of Computer Science, University of Waterloo

Sam Gregory, Executive Director, WITNESS

Siwei Lyu, School of Engineering and Applied Sciences, University at Buffalo

Recent & Upcoming Events

Deepfakes and the Law Conference
University of Leeds and City University, London
May 20, 2024 – London, UK or Online

The Impact of Deepfakes on the Justice System
American Bar Association
January 22, 2024 – Online
