AI-Generated Deepfake Videos Falsely Depict Jake Paul Coming Out as Gay

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos using OpenAI's Sora falsely depicted YouTuber and boxer Jake Paul coming out as gay, leading to widespread misinformation and reputational harm. The realistic videos went viral on TikTok, causing distress and raising concerns about the misuse of AI for creating deceptive content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the use of an AI system to create deepfake videos that have directly led to misinformation and reputational harm to Jake Paul, which constitutes a violation of rights and harm to communities. The AI system's use is central to the harm, and the harm is realized (the videos are viral and misleading). Therefore, this is an AI Incident rather than a hazard or complementary information. The article does not focus on responses or governance but on the harm caused by the AI-generated content.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Other, General public

Harm types
Reputational, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Jake Paul responds to deepfake coming-out clips as AI-generated celeb videos go viral - The Mirror

2025-10-09
Mirror
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system to create deepfake videos that have directly led to misinformation and reputational harm to Jake Paul, which constitutes a violation of rights and harm to communities. The AI system's use is central to the harm, and the harm is realized (the videos are viral and misleading). Therefore, this is an AI Incident rather than a hazard or complementary information. The article does not focus on responses or governance but on the harm caused by the AI-generated content.
YouTuber Jake Paul Responds to Fake 'Coming Out' Videos: Deepfake Explained

2025-10-09
Us Weekly
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenAI's Sora) used to generate deepfake videos, which is an AI system by definition. The videos are misleading and have fooled viewers, indicating a potential for harm such as misinformation or reputational damage. However, the article does not document any actual harm occurring, such as injury, legal rights violations, or significant disruption. The event is more about raising awareness of the technology's capabilities and societal responses, including Jake Paul's reaction and industry concerns about intellectual property. This fits the definition of Complementary Information, as it enhances understanding of AI impacts and responses without describing a concrete AI Incident or a plausible AI Hazard leading to harm.
Jake Paul Slams Viral AI Video That Falsely Shows Him Coming Out as Gay

2025-10-08
Mandatory
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake videos that falsely portray Jake Paul in a way that could harm his reputation and cause emotional distress. The AI system's outputs are being used to harass or bully, which constitutes harm to a person. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework's definition of harm to a person or group due to AI misuse.
YouTuber Jake Paul Responds to Fake 'Coming Out' Videos: Deepfake Explained

2025-10-10
Sun Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenAI's Sora) to generate deepfake videos that have been widely viewed and believed by the public, causing reputational harm and misinformation. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities (through misinformation) and violations of rights (use of a likeness without consent). The harm is realized, not merely potential, as viewers have been fooled and the subject's reputation affected. The article also discusses broader intellectual property concerns and responses, but the primary event is the harmful use of AI-generated deepfakes.
Jake Paul responds to AI deepfake videos that show him coming out as gay

2025-10-07
PinkNews | Latest lesbian, gay, bi and trans news | LGBTQ+ news
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake generation) was used to create false videos of Jake Paul, which have been widely disseminated. This constitutes a violation of rights and harm to the individual, fulfilling the criteria for an AI Incident under violations of human rights or harm to communities. The harm is realized, as the videos have gone viral and caused distress; it is not merely a potential risk. Therefore, this event qualifies as an AI Incident.
"People need to get a life": Jake Paul responds to influx of AI deepfake videos of his likeness

2025-10-10
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos that have caused harm to Jake Paul by misrepresenting him and affecting his relationships and business. This constitutes harm to a person (emotional and reputational harm) and harm to communities (through offensive and potentially homophobic content). The AI system's use is central to the incident, as the videos are AI-generated and have led directly to the harms described. Therefore, this qualifies as an AI Incident under the framework.
AI videos showed influencer Jake Paul coming out as gay. His reaction has been surprising.

2025-10-10
LGBTQ Nation
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Sora 2) generating deepfake videos that have been disseminated, causing harm to Jake Paul through the unauthorized use of his likeness and false representation. This constitutes a violation of personal rights and reputational harm, fitting the definition of an AI Incident. The failure of the AI system's safety guardrails and the direct impact on the individual confirm the classification. Although the article also discusses responses and platform safeguards, its primary focus is the harm caused by the AI-generated content, not complementary information or potential hazards.