AI-Generated Deepfake Video of Ghislaine Maxwell Sparks Misinformation in Canada

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI face-swapping system was used to create a viral video falsely showing Ghislaine Maxwell in Quebec City, Canada. The manipulated video led to public confusion and conspiracy theories before being debunked, highlighting the risks of AI-generated misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The video explicitly involves an AI system (face-swapping AI) used to generate manipulated content that led to misinformation and public confusion, which is a form of harm to communities. The AI system's use directly contributed to the spread of false impressions about Maxwell, fulfilling the criteria for an AI Incident. The creator's clarification and labeling do not negate the fact that harm occurred. Therefore, this event is best classified as an AI Incident.[AI generated]
AI principles
Transparency & explainability; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public; Women

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fact File: Viral video of Ghislaine Maxwell in Quebec City made with AI, creator says

2026-02-23
Barchart.com
Why's our monitor labelling this an incident or hazard?
The video explicitly involves an AI system (face-swapping AI) used to generate manipulated content that led to misinformation and public confusion, which is a form of harm to communities. The AI system's use directly contributed to the spread of false impressions about Maxwell, fulfilling the criteria for an AI Incident. The creator's clarification and labeling do not negate the fact that harm occurred. Therefore, this event is best classified as an AI Incident.
Ghislaine Maxwell in Canada's Quebec City? TRUTH behind viral video

2026-02-24
WION
Why's our monitor labelling this an incident or hazard?
The video involves an AI system (face-swapping AI) used to create misleading content. However, the article does not report any realized harm such as injury, rights violations, or disruption caused by the video. The misinformation is identified and debunked, and the main focus is on clarifying the truth and providing context. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information that supports understanding of AI-generated misinformation and its societal implications.
What a viral fake video of Ghislaine Maxwell in Quebec City says about AI deception | CBC News

2026-02-25
CBC News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a deepfake face swap) used to create a misleading video that fuelled misinformation and conspiracy theories, a form of harm to communities. However, the article does not report direct harm such as physical injury, legal violations, or critical-infrastructure disruption caused by the video; instead, it examines the broader implications of AI deception, media literacy, and governance challenges. Although the AI system was pivotal in creating the deceptive content, the article's focus is on societal understanding and response rather than a specific incident of harm. It therefore fits the definition of Complementary Information, providing supporting context and highlighting the role of media literacy in mitigating AI-related harms.
A fake video of Ghislaine Maxwell in Quebec City goes viral [Une fausse vidéo de Ghislaine Maxwell à Québec devient virale]

2026-02-23
Le Soleil
Why's our monitor labelling this an incident or hazard?
An AI system (face-swapping AI) was explicitly used to create the manipulated video, and its use led to misinformation and conspiracy theories, which could amount to harm to communities if widespread and impactful. However, the article emphasizes the satirical intent, the creator's apology, and the addition of AI labels, indicating mitigation efforts, and there is no clear evidence that the misinformation caused significant harm or rights violations. The event therefore does not meet the threshold for an AI Incident, nor does it describe a plausible future harm scenario beyond the current spread of misinformation, so it is not an AI Hazard. Instead, it offers complementary information about AI misuse, public reaction, and labeling practices, fitting the Complementary Information category.
Fact Check: Ghislaine Maxwell Was NOT Spotted In Québec, Canada -- AI Face Swap Video

2026-02-25
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a face-swap video that misleads viewers about Ghislaine Maxwell's location. The AI-generated content directly causes harm by spreading false information, damaging public understanding and trust, a form of harm to communities. The presence and use of the AI system is clear, and the harm was realized through the deception and misinformation spread. This event therefore meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.
No, Ghislaine Maxwell was not walking around Quebec City [Non, Ghislaine Maxwell ne se promenait pas à Québec]

2026-02-22
Radio Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the video was created using AI face-swapping technology, which is an AI system. The manipulated video has been widely viewed and has led to the spread of false beliefs and conspiracy theories about Ghislaine Maxwell's whereabouts and identity. This misinformation harms communities by distorting public understanding and trust. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. Although the creator claims the intent was not to spread disinformation, the viral impact and resulting misinformation constitute realized harm. Therefore, this event is classified as an AI Incident.
What a viral fake video of Ghislaine Maxwell in Quebec City says about AI deception | RCI

2026-02-25
Radio Canada
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a deepfake video by digitally swapping faces, which is a clear AI application. The use of this AI-generated content has directly led to the spread of misinformation and conspiracy theories, which harms communities by misleading the public and undermining trust in factual information. The harm is realized and ongoing, as the video has gone viral and continues to generate false claims. Hence, this meets the criteria for an AI Incident due to harm to communities caused by AI-generated misinformation.
Fake video of Ghislaine Maxwell in Canada manipulated using AI

2026-02-26
Fact Check
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create manipulated video content (face-swapping AI). The AI system's use directly led to the dissemination of false information about a high-profile individual, which constitutes harm to communities through misinformation affecting public discourse and trust. This event is therefore classified as an AI Incident.