Deepfake Video of Princess Leonor Dancing on TikTok Goes Viral

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos falsely depicting Princess Leonor dancing on TikTok have gone viral, using face-swapping technology to mislead viewers and damage her reputation. The videos, which superimpose her face onto footage of another influencer, have been widely debunked but highlight the risks of AI-driven misinformation and reputational harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI system (deepfake technology) to create and spread false video content. The harm is realized in the form of misinformation and reputational damage to Princess Leonor, a public figure, which affects communities and could be seen as a violation of rights. The AI system's use directly led to the spread of false information, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article confirms the video is fake and viral, indicating the harm has occurred.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Transparency & explainability
Robustness & digital security
Safety
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Have you seen Princess Leonor dancing in a TikTok video? It's not her: it's a deepfake

2022-08-02
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deep fake technology) used to generate manipulated video content. Although the current video is acknowledged as fake and no direct harm is reported, the technology's use to create convincing false videos poses a credible risk of harm such as misinformation, reputational damage, or manipulation. Since no actual harm has been reported or confirmed, but plausible future harm exists, this qualifies as an AI Hazard rather than an AI Incident. The article also explains the broader risks of deep fakes, reinforcing the potential for harm.
The video of Princess Leonor dancing on TikTok is exposed as a deepfake

2022-08-02
Vanitatis
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create a realistic fake video, which fits the definition of an AI system. However, the article does not describe any realized harm (such as injury, rights violations, or significant community harm) caused by this video. It also does not present a credible imminent risk of harm from this specific video. Instead, it informs about the existence and detection of deepfakes and their social impact, which aligns with Complementary Information. There is no direct or indirect harm reported, nor a clear plausible future harm from this particular event, so it is not an AI Incident or AI Hazard.
Princess Leonor, victim of a "deepfake": the TikTok video in which she appears dancing

2022-08-05
La Razón
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create and spread false video content. The harm is realized in the form of misinformation and reputational damage to Princess Leonor, a public figure, which affects communities and could be seen as a violation of rights. The AI system's use directly led to the spread of false information, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article confirms the video is fake and viral, indicating the harm has occurred.
No, that is not Princess Leonor dancing on TikTok: it is a deepfake montage

2022-08-04
LaSexta
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated video content. Although the article does not report direct harm occurring from this specific video, the use of deepfakes to spread false or misleading content poses a plausible risk of harm to individuals' reputations and to communities through misinformation. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, even if no harm has yet been realized in this case.
Why you shouldn't believe the TikTok video of Leonor that is circulating

2022-08-03
El HuffPost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create a manipulated video, which is an AI-generated synthetic media. The video is false and misleading, but the article's main focus is on debunking the misinformation and clarifying that the video is not genuine. There is no direct or indirect harm reported such as injury, rights violations, or significant community harm. The event does not describe a new AI Incident or AI Hazard but rather provides supporting information to understand the context of AI-generated deepfakes and their societal impact. Hence, it fits the definition of Complementary Information.
The videos of Princess Leonor dancing on TikTok: they are a fabrication

2022-08-03
Antena3
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake visual content. The article explicitly states that these videos are AI-manipulated and falsely attribute actions to Princess Leonor, which constitutes a violation of rights and harm to her reputation. Since the AI system's use directly led to this harm, the event qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.