AI-Generated Fake Trump Audio Spreads Epstein Files Misinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated audio clip falsely depicting Donald Trump demanding officials block the release of Epstein-related documents went viral on social media, amassing millions of views. Disinformation watchdogs confirmed the audio was fake, highlighting the growing use of AI deepfakes to spread political misinformation and cause public confusion in the United States. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to generate a fake audio clip that was widely disseminated, leading to misinformation and disinformation. This misinformation can harm communities by creating confusion, polarizing public opinion, and undermining democratic processes. Since the AI-generated content directly led to the spread of false narratives and public confusion, this qualifies as an AI Incident under the definition of harm to communities. The event is not merely a potential risk but an actual occurrence of harm caused by AI-generated disinformation. [AI generated]
AI principles
Accountability, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


"If I Go Down...": Trump's Fake AI Audio Clip On 'Epstein Files' Goes Viral

2025-11-22
NDTV
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake audio clip that was widely disseminated, leading to misinformation and disinformation. This misinformation can harm communities by creating confusion, polarizing public opinion, and undermining democratic processes. Since the AI-generated content directly led to the spread of false narratives and public confusion, this qualifies as an AI Incident under the definition of harm to communities. The event is not merely a potential risk but an actual occurrence of harm caused by AI-generated disinformation.

AI fake audio clip of Trump 'not releasing' Epstein files goes viral

2025-11-22
South China Morning Post
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate synthetic audio that falsely portrays a public figure making statements that did not occur. This AI-generated deepfake audio contributes to misinformation, which can harm communities by misleading the public and potentially influencing political discourse. Since the content is actively spreading and causing harm, this qualifies as an AI Incident due to harm to communities through misinformation.

Fake AI Trump Audio Clip On 'Epstein Files' Gains Traction

2025-11-22
Channels Television
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the audio clip is AI-generated and that it has been widely circulated, causing misinformation and confusion among the public. The harm is realized as the disinformation is actively spreading and influencing public perception, which fits the definition of an AI Incident due to harm to communities. The AI system's development and use directly contributed to this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Fake AI Trump audio clip on 'Epstein files' gains traction

2025-11-22
Jamaica Observer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (OpenAI's Sora text-to-video model) generating synthetic audio/video content that is falsely attributed to a public figure. The AI-generated clip has been widely shared, causing misinformation and confusion among the public, which constitutes harm to communities. The harm is realized (not just potential), as the disinformation is actively influencing public perception and political discourse. Therefore, this meets the criteria for an AI Incident due to the direct role of AI in generating and spreading harmful false content.

Fake AI Trump audio clip on 'Epstein files' gains traction

2025-11-22
eNCA
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the audio clip is AI-generated using OpenAI's Sora model, confirming the involvement of an AI system. The harm arises from the use of this AI-generated fake content to spread false narratives about a politically sensitive topic, which has gained significant traction and views on social media. This misinformation can disrupt public understanding and trust, which is a form of harm to communities. Since the AI system's use has directly led to this harm, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Fake AI Trump audio clip on 'Epstein files' gains traction

2025-11-22
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
An AI system generated a fake audio clip that is being widely amplified, directly causing harm by spreading false information about a politically sensitive issue. This constitutes harm to communities and to the political environment, fitting the definition of an AI Incident given the realized harm from the AI-generated content.

Fake Trump AI Audio About Blocking Epstein Files Goes Viral

2025-11-22
Bangla news
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating synthetic audio (deepfake) that is used to spread false political information. The widespread dissemination of this misinformation has already occurred, causing harm to communities by creating confusion and undermining trust in information. The AI system's role is pivotal in creating the fake audio, directly leading to the harm described. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Fake AI Trump audio clip on 'Epstein files' gains traction

2025-11-25
jen.jiji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenAI's Sora text-to-video model) to create a synthetic audio clip falsely depicting President Trump. The clip has been widely circulated and amplified, causing misinformation and confusion among the public, which is a form of harm to communities and undermines the integrity of information. This harm is realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident due to the direct role of the AI system in generating harmful disinformation.