Viral AI-Generated Deepfakes of Biden and Harris Spread Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In the 2024 US presidential race, AI-generated deepfake videos and audio clips falsely depicting President Joe Biden cursing and Vice President Kamala Harris speaking incoherently went viral on TikTok and X, misleading voters. TikTok removed some of the content, while fact-checkers and platforms tagged or debunked the manipulated media.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the creation and viral spread of AI-generated manipulated audio (deepfake) of a political figure, which misleads the public and causes harm to communities by spreading false information. The AI system's use in generating this content is central to the harm. The misinformation has already occurred and caused social harm, meeting the criteria for an AI Incident. The article also notes platform responses but the primary focus is on the harm caused by the AI-generated manipulated media.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy; Safety; Accountability; Robustness & digital security; Respect of human rights

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation; Organisation/recommenders


Articles about this incident or hazard

Fake Kamala Harris Audio Goes Viral on TikTok, Remains on X

2024-07-24
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated manipulated audio (deepfake) of a political figure, which misleads the public and causes harm to communities by spreading false information. The AI system's use in generating this content is central to the harm. The misinformation has already occurred and caused social harm, meeting the criteria for an AI Incident. The article also notes platform responses but the primary focus is on the harm caused by the AI-generated manipulated media.
Fake Kamala Harris Audio Goes Viral on TikTok, Remains on X

2024-07-24
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system or synthetic media technology used to create manipulated audio (deepfake) of a public figure. The manipulated content has been widely disseminated, causing misinformation and potential harm to the community by confusing or deceiving people. This constitutes harm to communities as defined in the framework. Since the harm is realized (the manipulated audio is viral and misleading), this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely about policy or platform responses but about the actual spread and impact of AI-generated manipulated media causing harm.
Kamala Harris deepfakes are going viral on TikTok and Elon Musk's X

2024-07-23
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation with AI-generated audio) whose use has directly led to the spread of misinformation, a form of harm to communities and political discourse. The viral nature and millions of views indicate realized harm rather than just potential harm. The AI system's role is pivotal in creating and disseminating the manipulated content. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by the use of an AI system.
Kamala Harris deepfake removed by TikTok after going viral

2024-07-24
Newsweek
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system used to create a deepfake audio clip. The use of this AI-generated manipulated content has directly led to misinformation and potential harm to the community by misleading voters about a political figure's statements. The widespread dissemination of this deepfake constitutes an AI Incident because the harm (misinformation and its societal impact) is occurring. The platform's removal efforts are complementary but do not negate the incident itself.
Deepfake Video Of Biden Cursing After Dropping Out Of Race Goes Viral

2024-07-22
BOOMLive
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a deepfake video, which is a clear example of AI-generated manipulated content. The video is being actively shared and has the potential to cause harm to communities by spreading false information and misleading the public about a political figure. This constitutes harm to communities through misinformation, which fits the definition of an AI Incident. The harm is realized as the video is viral and actively influencing public perception, not merely a potential risk. Therefore, this event qualifies as an AI Incident.
Kamala Harris Deepfake Removed By TikTok After Going Viral

2024-07-24
MyrtleBeachOnline
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system used to create a deepfake video, i.e. AI-generated manipulated audiovisual content. The use of this deepfake has directly led to misinformation and potential harm to the democratic process by misleading voters, which constitutes harm to communities. The removal of the content by TikTok and fact-checking efforts are responses to this harm. Therefore, this qualifies as an AI Incident because the AI-generated deepfake has directly caused harm through misinformation dissemination.