Dutch Crown Princess Amalia Targeted in AI Deepfake Pornography Scandal


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake pornographic videos featuring Dutch Crown Princess Amalia and other public figures were created and distributed without consent via the platform MrDeepFakes. Over 20 victims filed complaints, highlighting significant privacy violations and psychological harm. The incident has sparked public and legal concern in the Netherlands.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI technology (deepfake generation) to create non-consensual pornographic videos involving a prominent individual, Princess Amalia. This constitutes a violation of her rights and privacy, a clear harm under the framework's category (c) violations of human rights or breach of legal protections. The AI system's use is central to the harm, making this an AI Incident rather than a hazard or complementary information. The harm is ongoing and has already occurred, as the videos have been created and distributed, and legal actions have been initiated by other victims.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Human wellbeing; Accountability; Robustness & digital security; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Other

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


Princess Amalia became the victim of AI-generated nude videos

2025-08-15
Promiflash.de
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI technology (deepfake generation) to create non-consensual pornographic videos involving a prominent individual, Princess Amalia. This constitutes a violation of her rights and privacy, a clear harm under the framework's category (c) violations of human rights or breach of legal protections. The AI system's use is central to the harm, making this an AI Incident rather than a hazard or complementary information. The harm is ongoing and has already occurred, as the videos have been created and distributed, and legal actions have been initiated by other victims.

Amalia of the Netherlands is a victim in the fake nude video scandal

2025-08-14
T-online.de
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate fake pornographic videos by transferring faces onto other bodies without consent, which is a direct misuse of AI technology causing harm to individuals' rights and dignity. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of human rights and personal harm. The involvement of multiple victims and legal complaints further supports this classification.

Princess Amalia appears in faked sex videos

2025-08-15
Blick.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate deepfake videos, which are fake pornographic videos created by AI face-swapping technology. The harm is realized as the videos have been distributed online, causing reputational and psychological harm to Princess Amalia, a violation of her rights. The incident has led to legal actions and investigations, confirming the direct link between the AI system's malicious use and harm. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm to a person and violation of rights.

Royal AI porn victim: Faked sex videos of Princess Amalia surface

2025-08-15
rtl.de
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create fake pornographic videos without consent, which is a direct violation of personal and intellectual property rights. The involvement of AI in generating these deepfake videos directly led to harm to the individuals depicted, including Princess Amalia. This fits the definition of an AI Incident as it involves violations of human rights and breaches of legal protections due to the AI system's use.

Her own bachelor's thesis dealt with the topic: Princess Amalia is the victim of perfidious deepfakes

2025-08-15
TAG24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake content that has been used maliciously to create fake pornographic videos of a real person without consent. This misuse of AI has directly led to harm in terms of violation of personal rights and potential psychological harm to the individual targeted. Therefore, it qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

Crown Princess Amalia victim of deepfake scandal: AI-generated sex videos

2025-08-15
brisant.de
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated pornographic videos with faces of real individuals superimposed onto other bodies. The harm includes violations of privacy, potential psychological harm, and reputational damage to the victims, including a public figure, Crown Princess Amalia. The distribution of these videos on a large platform with hundreds of thousands of users and millions of views confirms the realized harm. The legal context in the Netherlands criminalizing such acts further supports the classification as an AI Incident. The AI system's use directly led to violations of rights and harm to individuals, fulfilling the criteria for an AI Incident.

Princess Amalia has been the victim of adult videos created with artificial intelligence

2025-08-17
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create fake pornographic videos of Princess Amalia, which have been widely viewed and caused harm. The AI system's use here directly led to violations of personal rights and reputational harm, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized, not just potential, as the videos have been published and viewed extensively. Hence, this is classified as an AI Incident.

Entirely fake adult-content images of Princess Amalia are circulated

2025-08-16
okdiario.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create false adult images of a public figure, which have been widely disseminated, causing harm. The harm includes violation of privacy and potential psychological and reputational damage, which are recognized as violations of human rights. The AI system's use directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Pornographic videos of Princess Amalia made with AI are discovered

2025-08-17
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear use of AI systems to create manipulated videos. The harm is realized as the victim is subjected to non-consensual pornographic material, causing reputational and psychological harm, and violating privacy and potentially other rights. The widespread distribution and the ongoing investigation confirm the direct link between AI use and harm. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.

Amalia of the Netherlands, victim of a digital crime: a portal for pornography manipulated with artificial intelligence circulates fake videos of the princess

2025-08-17
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create manipulated pornographic videos (deepfakes) of a public figure, Princess Amalia. The videos have been widely viewed and have caused harm to her privacy and reputation, which are forms of harm to individuals and violations of rights. The AI system's use in generating and disseminating these videos directly led to this harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to the individual.

Fake porn videos of Princess Amalia of the Netherlands published using artificial intelligence

2025-08-15
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI (deepfake technology) to create false pornographic videos, which have been widely distributed and viewed millions of times, causing significant harm to the victim's reputation and privacy. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to the individual. The involvement of AI is clear, the harm is realized, and legal violations are noted, confirming the classification as an AI Incident.

CONTROVERSY IN THE NETHERLANDS: Fake intimate videos of Princess Amalia made with AI published

2025-08-16
Noticias de Venezuela y el Mundo - Caraota Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create false videos that have been widely distributed, causing harm to the victim's reputation and privacy. The harm is realized and ongoing, as the videos have been viewed millions of times and have led to legal action. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to the individual and community.

Alarm in the Netherlands: pornographic videos of Princess Amalia created with AI are published

2025-08-18
BioBioChile
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create deepfake videos, which are false and harmful representations of a person. The videos have been distributed and viewed extensively, indicating realized harm. This constitutes a violation of rights and harm to the community, fitting the definition of an AI Incident. The involvement of AI in generating the content and its direct link to harm justifies this classification.

Princess Amalia of the Netherlands, victim of pornographic videos created with artificial intelligence

2025-08-18
20 minutos
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation of pornographic videos using AI, which directly leads to harm by violating the privacy and rights of the princess. The AI system's use in generating these fake videos is central to the harm caused. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of human rights and harm to the individual and community reputation.

Amalia of the Netherlands is a deepfake victim after the publication of pornographic photographs created with AI

2025-08-18
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake technology to create and publish false pornographic images and videos of a public figure and others without consent. This constitutes a violation of human rights and privacy, which is a breach of applicable law protecting fundamental rights. The harm is actual and ongoing, as evidenced by the complaints and legal actions taken. Hence, the event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

AI-made pornographic videos of Princess Amalia of the Netherlands published

2025-08-18
Ara en Castellano
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create pornographic deepfake videos of Princess Amalia, which have been widely disseminated online. This use of AI has directly caused harm to the individual by violating her privacy and potentially causing psychological and reputational damage. The harm is realized and ongoing, meeting the criteria for an AI Incident under violations of human rights and harm to communities. The investigation and potential legal actions further support the seriousness of the incident.

Princess Catharina Amalia is 'victim of horrific deepfake porn attack'

2025-08-18
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create deepfake videos of the Princess, which is a direct misuse of AI technology causing harm to her personal dignity and privacy, thus constituting a violation of rights. The harm is realized, not just potential, as the videos were circulated online. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual). The involvement of authorities and the criminal nature of the content further support this classification.

Dutch Princess Catharina-Amalia became a victim of 'deepfake' pornography!

2025-08-18
Haberler.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated deepfake technology to create pornographic videos without consent, which is a direct violation of the princess's rights and causes significant harm. The involvement of AI in creating the deepfake content and the resulting harm to the individual and community meets the criteria for an AI Incident. The response by authorities further confirms the recognition of harm caused by the AI system's misuse.

Future Queen Is the Victim of a Deepfake Porn Attack

2025-08-18
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-generated deepfake technology to create pornographic videos without consent, which is a clear violation of fundamental rights and causes harm to the victim. The AI system's outputs (deepfake videos) directly led to harm to Princess Catharina-Amalia and other women targeted. The FBI and Dutch authorities' involvement to remove the content further confirms the harm caused. Therefore, this qualifies as an AI Incident due to violations of human rights and harm to individuals caused by AI misuse.

Princess among dozens of women targeted in deepfake scandal

2025-08-20
honey.nine.com.au
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create manipulated deepfake videos that caused harm to the individuals depicted, including a violation of their rights and reputational harm. The harm has already occurred through the distribution of these videos, meeting the criteria for an AI Incident. The involvement of AI in generating the deepfakes is explicit, and the harm is direct and realized, not merely potential. Therefore, this event qualifies as an AI Incident.

Dutch crown princess falls victim to deepfake porn attack

2025-08-19
NEWS.am STYLE
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated videos by superimposing faces onto other bodies. The creation and distribution of non-consensual deepfake pornographic content is a clear violation of human rights and privacy, causing harm to the targeted individuals. The article describes realized harm (psychological, reputational) and legal violations, with authorities taking action to stop the distribution. This fits the definition of an AI Incident because the AI system's use has directly led to harm and legal violations.

Princess, 21, falls victim to 'deepfake' porn sickos as cops hunt vile creators

2025-08-18
The US Sun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos, which are manipulated media created by AI. The harm is realized as the princess is a victim of non-consensual explicit content, which is a violation of her rights and causes personal and reputational harm. The involvement of law enforcement and the criminalization of such acts further confirm the recognition of harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

Princess Catharina-Amalia supported by King Willem-Alexander and Queen Maxima amid upsetting news

2025-08-19
HELLO!
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created through AI systems capable of manipulating visual content. The harm caused includes violations of privacy and fundamental rights of the victims, including the princess and other women. The fact that the videos have been widely distributed and required intervention by authorities to remove them confirms that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Future queen of the Netherlands Catharina-Amalia becomes victim of deepfake porn attack for the second time

2025-08-21
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake pornography, which is a direct misuse of AI technology causing harm to the victim's rights and dignity, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized as the videos were circulated online, and authorities intervened to remove the content. The repeated targeting of the princess and the involvement of AI-generated content confirm the AI system's role in causing harm.

Future queen of Netherlands falls victim to deepfake porn attack, morphed videos circulated online

2025-08-21
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake pornography videos, which have been circulated online causing harm to the victims. The harm includes violations of privacy and fundamental rights, as well as reputational and psychological damage. The involvement of law enforcement to remove the content confirms the harm is realized. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to persons and violations of rights.

Future Queen Of Netherlands Targeted In Deepfake Pornography Scandal, Videos Removed By FBI

2025-08-21
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-generated deepfake technology to create and distribute non-consensual pornographic videos, which is a direct violation of personal rights and privacy, thus constituting harm under the AI Incident definition. The AI system's use directly led to the harm experienced by the victims. The involvement of law enforcement and removal of videos confirms the harm has materialized. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Princess Catharina-Amalia Targeted in Deepfake Porn Attack; Cybercriminals Circulate Morphed Videos of Future Queen of the Netherlands

2025-08-21
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated manipulated content. The harm is realized as the videos were circulated, causing violations of privacy and human rights (harms under category (c) and (d)). The involvement of law enforcement and the shutdown of hosting sites further confirm the materialization of harm. Hence, this is an AI Incident rather than a hazard or complementary information.