Protests and Legislative Action in Germany Over AI-Generated Deepfake Sexual Abuse

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Around 10,000 people protested in Berlin against digital sexual violence, following allegations that AI tools were used to create pornographic deepfakes without consent. The German government is preparing urgent legislation to address legal gaps exposed by the incident involving actress Collien Fernandes and her ex-husband.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to create deepfake images with sexual content without consent, which constitutes a violation of rights and harm to the individual involved. The harm has already occurred, as evidenced by the public outcry and legal complaints. The government's legislative response aims to address these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and digital sexual abuse).[AI generated]
AI principles
Respect of human rights
Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Germany: Demonstration against digital sexual violence - Government prepares bill against pornographic deepfakes

2026-03-22
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of AI-generated pornographic deepfakes, which are used maliciously to harm individuals, constituting a violation of rights. However, the article focuses on the protest and the government's legislative plans to address these harms, rather than describing a specific AI Incident where harm has already occurred or a direct AI Hazard with imminent risk. Thus, it fits the definition of Complementary Information, detailing societal and governance responses to AI-related harms.
Berlin: 10,000 take to the streets over "digital rape" - Government prepares "express legislation"

2026-03-22
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images with sexual content without consent, which constitutes a violation of rights and harm to the individual involved. The harm has already occurred, as evidenced by the public outcry and legal complaints. The government's legislative response aims to address these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and digital sexual abuse).
Germany: Demonstration against digital sexual violence - Real.gr

2026-03-22
Real.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create pornographic deepfake images without consent, which constitutes a violation of individual rights and causes harm to the victim and potentially to communities. The AI system's use in generating these images has directly led to harm, fulfilling the criteria for an AI Incident. The article also discusses the legislative response, but the primary focus is on the realized harm from AI misuse.
Germany: Demonstration against digital sexual violence - Government prepares bill against pornographic deepfakes

2026-03-22
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation of pornographic deepfakes using AI tools, which have caused harm to individuals (violation of rights and harm to communities). The government's preparation of legislation to criminalize such AI-generated content addresses a direct harm caused by AI misuse. Since the harm is occurring and the article discusses the legislative response, this qualifies as an AI Incident due to realized harm from AI misuse in digital sexual violence.
Germany / 10,000 people demonstrated against digital sexual violence in Berlin

2026-03-22
Αυγή
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create pornographic deepfakes without consent, which constitutes a violation of personal rights and digital sexual violence. The harm has already occurred as evidenced by the complaint and the public protest. The government's legislative response is a reaction to this realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals.
Berlin: "Earthquake" after the "digital rape" of Collien Fernandes - Express law and 500 million euros to hunt down AI deepfakes - GOVNews.gr

2026-03-23
GOVNews.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create pornographic deepfake images of a person without consent, which is a clear violation of human rights and digital violence. This harm has already occurred, as evidenced by the victim's legal actions and public protests. The article also discusses governmental responses to address this harm through new legislation. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content infringing on rights and causing personal harm.
Deepfake porn case shakes Germany, Berlin prepares a law - News - Ansa.it

2026-03-20
ANSA.it
Why's our monitor labelling this an incident or hazard?
The deepfake content is AI-generated and has directly led to harm to the individual involved, constituting a violation of rights. This meets the criteria for an AI Incident because the AI system's use has directly caused harm. The government's legislative response is complementary information but secondary to the primary incident of harm caused by the AI deepfake. Therefore, the event is classified as an AI Incident.
Scandal in Germany: actress files complaint against ex-husband over fake porn videos, government rushes to respond

2026-03-20
Gazzetta del Sud
Why's our monitor labelling this an incident or hazard?
The article describes an AI system's use (deepfake technology) to produce fake pornographic content, which has directly harmed the actress by violating her rights and causing reputational and personal harm. The government's response to tighten laws against such AI-generated content further confirms the recognition of harm caused by AI misuse. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of an AI system.
Porn deepfake scandal in Germany: "Virtually violated"

2026-03-22
Ottopagine.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (deepfake technology) to create manipulated pornographic videos without consent, leading to significant harm to the victim's personal and professional life. The AI system's use directly caused the harm described, fulfilling the criteria for an AI Incident. The event involves the use and misuse of an AI system resulting in violations of rights and harm to the individual and community, not merely a potential or future risk, nor is it a general update or unrelated news.
A deepfake porn case shakes Germany: Berlin prepares a law

2026-03-20
Tgcom24
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of deepfake pornographic material directly involves AI systems capable of generating realistic fake images and videos. The harm caused includes violation of personal rights and reputational damage, which falls under violations of human rights and breach of applicable laws protecting individual rights. Since the harm is realized and the AI system's use is central to the incident, this qualifies as an AI Incident.
Actress Collien Fernandes a victim of deepfake porn, "virtually raped" for years: it was her husband

2026-03-21
virgilio.it
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create non-consensual pornographic content, which is a direct violation of the victim's rights and causes harm to her as an individual. The harm is realized and ongoing, fulfilling the criteria for an AI Incident. The involvement of the AI system in the creation and dissemination of harmful manipulated content directly led to violations of rights and psychological harm. The legislative response is complementary information but does not change the primary classification of the event as an AI Incident.
"You raped me virtually": Collien Fernandes accuses her husband Christian Ulmen of spreading pornographic videos and photos of her created with deepfakes. The German government takes action

2026-03-23
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create non-consensual pornographic content, which constitutes a violation of human rights and digital sexual violence, thus causing direct harm to the individual. The involvement of AI in the creation and dissemination of harmful content meets the criteria for an AI Incident. The governmental response and legislative efforts are complementary information but secondary to the primary incident of harm caused by the AI system's misuse.
Red-light scandal in Germany: actress Collien Fernandes discovered that her husband...

2026-03-23
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The creation and distribution of deepfake pornographic content using AI systems constitutes a violation of personal rights and causes harm to the individual involved. The AI system's misuse here has directly led to harm (psychological and reputational) to the victim, fulfilling the criteria for an AI Incident. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in generating the fake content. Therefore, this event qualifies as an AI Incident.
The story of deepfakes within a celebrity couple that is shaking Germany

2026-03-24
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake technology to create non-consensual sexual content, which constitutes a violation of human rights and personal dignity. The harm is realized and ongoing, as the victim has suffered harassment and defamation for years. The AI system's role is pivotal in producing the harmful content. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
A famous actress and presenter was the victim of "deepfakes" for years and discovered it was her husband who was posting them online

2026-03-23
Ladepeche.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake videos and fake messages impersonating the victim, which directly caused harm to her psychological health, dignity, and privacy, constituting violations of human rights. The harm is realized and ongoing, not merely potential. The involvement of AI in creating manipulated content is explicit. The article also discusses legal and governance responses, but the primary focus is on the incident of harm caused by AI misuse. Hence, the classification is AI Incident.
"You raped me virtually": for 10 years her husband posted sexual "deepfakes" of her likeness, the appalling case shaking Germany

2026-03-23
Le Parisien
Why's our monitor labelling this an incident or hazard?
The creation and publication of deepfake sexual images using AI technology without the subject's consent is a clear violation of human rights, specifically the right to privacy and protection from defamation and sexual violence. The AI system was used maliciously to generate harmful content, directly causing harm to the victim. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.
They were one of the country's most popular couples: a TV presenter accused by his wife of publishing sexual deepfakes of her

2026-03-23
midilibre.fr
Why's our monitor labelling this an incident or hazard?
The article describes the generation and dissemination of deepfake sexual images and videos using AI technology, which caused direct harm to the victim, including psychological abuse and violation of personal rights. The AI system's use in creating these manipulated images is central to the harm experienced. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and violations of rights.
Scandal in Germany: a famous TV host discovers that her actor husband created and distributed hundreds of explicit videos of her for ten years

2026-03-24
Voici.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to generate harmful and non-consensual sexual videos and fake profiles, which directly caused harm to the victim's personal rights and dignity. The AI system's use here is malicious and has led to violations of human rights and harm to the individual and community. The harm is realized and ongoing, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
The "deepfakes" case compared to the Pelicot affair is causing scandal in Germany

2026-03-23
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The article explicitly states that deepfake videos were created using AI technology to generate false sexual content involving the victim, which were distributed widely causing psychological and reputational harm. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of AI in generating the deepfakes is clear and central to the harm described. The governmental response to criminalize such acts further supports the recognition of harm caused by AI misuse.
"My body was stolen from me for years": in Germany, an actress accuses her ex-husband of "virtual rapes"

2026-03-23
Paris Match
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of deepfake videos and fake profiles using AI technology to impersonate and harass the victim, which has caused direct harm to her. The AI system's misuse has led to violations of her rights and significant personal harm. The involvement of AI in generating manipulated content and the resulting harm meets the criteria for an AI Incident under the framework.
The "virtual rape" of an actress by her husband, a case compared to the Mazan affair, shocks Germany - Elle

2026-03-24
Elle
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create non-consensual pornographic videos, which were then widely distributed to harm the victim. The harm is realized and significant, including psychological trauma and violation of rights. The AI system's use is central to the incident, as the deepfakes are the means by which the harm was inflicted. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to the individual and community.
Victim of sexual "deepfakes", a German TV host denounces her ex-husband's "virtual rapes"

2026-03-24
RFI
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake sexual content without consent, which directly leads to harm to the victim's rights and personal dignity, fitting the definition of an AI Incident under violations of human rights and breach of obligations protecting fundamental rights. The AI system's use here is malicious and has caused realized harm, not just potential harm. The article also covers responses to this incident, but the primary focus is on the incident itself and its consequences.
A vast "virtual rape" scandal grips Germany

2026-03-22
Courrier international
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos and fake profiles, which are AI systems capable of generating realistic synthetic media. The use of these AI systems directly led to harm to the victim's personal dignity, privacy, and psychological well-being, as well as potential violations of legal rights. The malicious use of AI-generated content to harass and defame an individual fits the definition of an AI Incident, as the AI system's use directly caused harm to a person. Therefore, this event qualifies as an AI Incident.
Fighting "deepfakes": Germany wants to criminalize sexual content

2026-03-24
www.paris-normandie.fr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake generation using AI) and the use of such AI systems has directly led to harm, including violations of personal rights and psychological harm to individuals depicted in non-consensual sexual deepfake videos. The article discusses actual harms that have occurred and the legislative response to criminalize and regulate these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities.
"My body was stolen from me": a TV host accuses her ex-husband of publishing sexual deepfakes of her likeness

2026-03-23
www.lejdc.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the generation of deepfakes, which are AI-generated synthetic media, used here to create non-consensual sexual content. This constitutes a violation of personal rights and defamation, which falls under harm category (c) - violations of human rights or breach of obligations protecting fundamental rights. The harm is realized and ongoing, as the victim has suffered identity theft and defamation over years. Therefore, this qualifies as an AI Incident due to the direct and harmful use of AI systems to produce deepfake content causing significant personal harm.
"Virtual rapes": the appalling case between husband and wife shocking Germany

2026-03-24
Madame Figaro
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of deepfake videos and AI-generated synthetic voice content without the victim's consent, which is a direct misuse of AI technology causing harm to the victim's personal rights and psychological well-being. The AI system's role is pivotal in fabricating realistic but false sexual content, leading to violations of rights and significant harm. This meets the criteria for an AI Incident as the harm has already occurred and is directly linked to the AI system's use.
"You raped me virtually": the case shaking Germany and reviving the debate on sexual deepfakes

2026-03-24
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake sexual content without consent, which has caused direct harm to the victim, including psychological and reputational damage, and violations of personal rights. The AI-generated content was widely disseminated, leading to harassment and public outcry. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to the individual and community. The article also discusses legal actions and policy responses, but the primary focus is on the realized harm caused by the AI-generated deepfakes.
Her husband had been creating pornographic deepfakes of her for ten years

2026-03-25
24heures
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated deepfake pornographic content, which is a direct violation of the victim's rights and privacy. The AI system's use here is malicious and has caused harm to the individual, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in producing the falsified content.
2026-03-25
next.ink
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to create manipulated sexual images, videos, and voice content impersonating the victim without consent, which were disseminated over years causing ongoing harm. This use of AI directly led to violations of personal rights, defamation, and psychological harm, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential. The case also highlights the broader societal issue of AI-enabled deepfake sexual abuse, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
Government tightens the law: deepfakes to carry up to two years in prison

2026-03-24
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake technology, which uses AI to create manipulated images, videos, and audio. The article discusses the use and misuse of such AI-generated content causing harm to individuals' rights and reputations, which is a violation of human rights and personal rights. However, the article describes a proposed law to address these harms rather than an actual incident of harm caused by AI systems. Therefore, it does not report a realized AI Incident but rather a governance response to potential and ongoing harms related to AI misuse. This fits the definition of Complementary Information, as it provides societal and legal responses to AI-related harms without describing a new AI Incident or AI Hazard itself.
Court has been investigating Christian Ulmen for four months

2026-03-24
Frankfurter Rundschau
Why's our monitor labelling this an incident or hazard?
The article describes a concrete case where AI-generated deepfake pornography is alleged to have caused harm to a person, constituting digital sexualized violence and violations of privacy and personal rights. The AI system's use in creating and distributing deepfakes is central to the incident. The legal investigation and proposed legislation further confirm the recognition of harm caused by AI misuse. Hence, this is an AI Incident as the AI system's use has directly led to harm (violation of rights and digital sexual violence).
Hubig to present draft law shortly: up to two years in prison for deepfakes and stronger personality rights envisaged

2026-03-24
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The article centers on a planned law addressing harms caused by AI-generated deepfakes and manipulated content, aiming to strengthen personal rights and impose penalties. However, it does not describe a specific event where an AI system's development, use, or malfunction has directly or indirectly caused harm (AI Incident), nor does it describe a plausible future harm from AI systems in a concrete event (AI Hazard). Instead, it reports on a policy initiative and societal response to known issues involving AI-generated content, which fits the definition of Complementary Information as it provides context and governance developments related to AI harms.
Up to two years in prison for deepfakes and stronger personality rights planned

2026-03-24
stern.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic manipulated content. The creation and distribution of pornographic deepfakes constitute a violation of personal rights and can cause harm to individuals, fitting the definition of AI-related harm. However, this article mainly reports on the proposed legislation and the legal response to such harms, not on a new incident or hazard event. Therefore, it is best classified as Complementary Information, as it provides context and governance response to known AI-related harms rather than reporting a new AI Incident or AI Hazard.
Spain as a model: a law against digital violence

2026-03-24
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation) and addresses harms related to violations of rights and gender-based violence caused by AI-generated content. However, the article focuses on the upcoming legislation and plans to introduce penalties rather than reporting an actual incident of harm or a specific event where harm occurred. Therefore, it is a governance and societal response to AI-related harms, providing complementary information about efforts to address AI misuse.
After the Collien Fernandes case: how harshly deepfake creators and perpetrators will soon be punished

2026-03-24
op-online.de
Why's our monitor labelling this an incident or hazard?
The article centers on a legislative initiative to address harms caused by AI-generated deepfakes, which have already caused harm in the case of Collien Fernandes. While the harms from AI-generated content are real and significant, the article's main focus is on the political and legal response to these harms, including proposed changes to criminal law. It does not describe a new AI Incident or Hazard event but rather provides complementary information about societal and governance responses to existing AI-related harms.
After the Collien Fernandes case: how harshly deepfake creators and perpetrators will soon be punished

2026-03-24
Soester-Anzeiger.de
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of deepfake technology used to create sexualized images without consent, which constitutes a violation of personal rights and digital violence. The harm from such AI misuse is established and ongoing, as evidenced by the Fernandes case. However, the article focuses on legislative efforts to address these harms through stricter laws and penalties, rather than describing a new AI Incident or an immediate AI Hazard. It provides important complementary information about societal and governance responses to AI harms, fitting the definition of Complementary Information rather than AI Incident or AI Hazard.
Up to two years in prison for deepfakes and stronger personality rights planned

2026-03-24
unternehmen-heute.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the mention of deepfakes and AI-manipulated content. The proposed law is a governance response to harms caused by the use of AI-generated manipulated media, which can violate personal rights and cause significant harm to individuals. Although the article does not describe a specific realized AI Incident, it focuses on legal and societal measures to address existing harms and prevent future incidents involving AI-generated content. Therefore, this is Complementary Information about societal and governance responses to AI-related harms rather than a new AI Incident or AI Hazard.
Changes to a law: EU Parliament votes to ban AI systems for porn deepfakes

2026-03-26
RP Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate pornographic deepfakes, which constitute a violation of personal rights and privacy, thus a breach of fundamental rights. The creation and distribution of such AI-generated content has already caused harm to individuals, as evidenced by the legal complaint. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights). The legislative response is complementary information but the primary event is the harm caused by AI-generated deepfakes.
Actress of Portuguese descent accuses ex-husband of creating pornographic deepfakes: "You raped me virtually"

2026-03-30
SAPO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and sharing of pornographic deepfake images generated through AI, which directly caused harm to the actress by violating her rights and causing emotional trauma. This constitutes an AI Incident because the AI system's use directly led to harm (violation of rights and emotional harm). The subsequent legal actions and public debate are complementary information but do not change the primary classification of the event as an AI Incident.
Actress of Portuguese descent accuses husband of sharing pornographic deepfakes with her face: "You raped me"

2026-03-29
Revista SÁBADO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the pornographic images are deepfakes generated by AI, which have been shared without consent, causing harm to the actress. The use of AI to create and distribute non-consensual sexual content is a violation of personal rights and constitutes harm to the individual and community. The involvement of AI in generating the harmful content and its distribution directly led to the harm described. Although there is dispute about the perpetrator, the harm from the AI-generated deepfakes is clear and ongoing. Hence, this event meets the criteria for an AI Incident.
"The person closest to me": how did a German actress discover her husband used her image to produce fake pornography for more than 10 years?

2026-03-29
O Globo
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The use of such technology to produce fake pornography without consent constitutes a violation of personal rights and digital violence, which falls under harm to individuals and potentially breaches of human rights. The fact that this has occurred over more than ten years and has led to public and governmental reactions confirms that harm has materialized. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Deepfakes: expert explains legal risks and how to protect yourself after the case of German actress Collien Fernandes - SRzd

2026-03-30
SRzd
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake content, which is explicitly stated. The harm has already occurred, including violation of personal rights and emotional harm, fulfilling the criteria for an AI Incident. The article also covers legal frameworks and societal reactions, but the primary focus is on the realized harm caused by the AI-generated deepfakes. Therefore, this is classified as an AI Incident.
German actress says she suffered years of digital violence from porn deepfakes created by her husband

2026-03-29
Portal meionews.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The creation and distribution of non-consensual pornographic deepfake videos constitute a direct harm to the individual depicted, amounting to digital violence and violation of personal rights. Since the deepfakes were actively shared and caused harm over years, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in the incident.

Pornographic deepfakes spark protests in Germany

2026-03-31
O Antagonista
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and distribution of pornographic deepfake videos generated by AI, which have caused direct harm to the victim, including digital abuse and violation of rights. The AI system's role in producing these fake videos is central to the harm experienced. The event involves the use and misuse of AI systems leading to violations of human rights and digital violence, fitting the definition of an AI Incident. The ongoing legal investigation and public protests further confirm the materialized harm rather than a potential risk, distinguishing it from an AI Hazard or Complementary Information.

Case of actress targeted by AI-generated sexual videos made by her husband shakes Germany: she is compared to Gisèle Pelicot

2026-03-31
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos used maliciously to harm a person, constituting a clear violation of rights and digital abuse. The harm has already occurred, as the actress was victimized by these AI-generated videos. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating and spreading false sexual content.

"I wear a bulletproof vest because of the threats": German TV presenter reports her husband for making porn deepfakes with her face

2026-03-29
LaSexta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate deepfake sexual content without consent, which has directly harmed the individual through violation of rights, emotional distress, and threats to personal safety. The AI system's misuse by the husband is central to the harm caused. This fits the definition of an AI Incident as it involves harm to a person and violation of rights directly linked to the AI system's use.

La Jornada: AI pornography case sparks protests in Germany

2026-04-01
La Jornada
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI-generated deepfake videos were used maliciously to harm an individual, leading to protests and legal action. The AI system's use in generating false sexual videos constitutes a violation of rights and abuse, fulfilling the criteria for harm under the AI Incident definition. The harm is realized, not just potential, and the AI system's role is pivotal in causing this harm.

Actress reported fake pornographic videos made with her image: her husband turned out to be responsible

2026-03-30
Cooperativa
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to generate fake pornographic videos and voice calls impersonating the actress, which is a misuse of AI technology causing harm to her personal rights and dignity. The harm has already occurred and is ongoing, fulfilling the criteria for an AI Incident. The involvement of AI in the creation and dissemination of manipulated content that violates rights is direct and central to the event.

An actress says she discovered who was harassing her online: her husband

2026-03-29
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-generated deepfake images and AI-generated audio to impersonate and harass Collien Fernandes, which is a direct misuse of AI technology causing harm to an individual. The harms include violations of privacy, psychological harm, and abuse, fitting the definition of an AI Incident under violations of human rights and harm to individuals. The involvement of AI in generating deepfake images and audio is clear, and the harm is realized and ongoing, not merely potential. Therefore, this event qualifies as an AI Incident.

Uproar in Germany over the Collien Fernandes case: actress accuses her husband of making AI-generated sexual videos of her - El Sol de México

2026-03-31
OEM
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake sexual videos impersonating a real person, which were then distributed without consent. This constitutes a violation of personal rights and has caused harm to the individual, including reputational and professional damage. The AI's role in creating and spreading false content that led to these harms qualifies this event as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.

Actress reported the creation of fake pornographic content using her image: the person responsible is allegedly her own husband

2026-03-28
T13 (teletrece)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create fake pornographic content (deepfakes) and to falsify the victim's voice in calls, which led to harassment and identity theft. These actions caused direct harm to the victim's rights and personal dignity, fulfilling the criteria for an AI Incident under violations of human rights and breach of obligations protecting fundamental rights. The involvement of AI in the creation and dissemination of harmful content and impersonation is clear and central to the harm described.

Germany marches in support of actress targeted by fake AI porn videos

2026-03-31
CRHoy.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos as the source of false pornographic content causing harm to the actress, including online harassment and threats. This constitutes direct harm to a person and a violation of rights, fitting the definition of an AI Incident. The legal investigations and public protests further confirm the materialized harm caused by the AI system's misuse.

Shock in Germany: actress accuses her husband of creating sexual deepfakes and sharing them online

2026-03-28
24horas.cl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content used maliciously to impersonate and harm an individual, fulfilling the criteria of an AI Incident due to violations of rights and harm to the individual. The AI system's use in creating and spreading false sexual images and voice imitations directly caused harm, meeting the definition of an AI Incident. The legal and political responses further confirm the recognition of harm caused by AI misuse in this context.

AI porn scandal involving TV star sparks national debate in Germany | Sitios Argentina

2026-03-30
Sitios Argentina
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and distributed without consent, causing harm to the individual's rights and personal dignity. The harm includes violation of rights, digital violence, and threats, which are direct consequences of the AI system's misuse. The ongoing investigation and legal actions further confirm the realized harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to the individual and community.

#MeToo movement divides Germany after AI scandal

2026-03-31
Tribuna Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated fake sexual videos (deepfakes) used maliciously to harm a person, which is a direct harm caused by the AI system's outputs. The harm includes violation of personal rights and digital abuse, fitting the definition of an AI Incident. The event is not merely a potential risk but an actual occurrence with realized harm, as the victim has suffered abuse over years. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

Digital violence: how the German government plans to help victims

2026-03-25
inFranken.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-manipulated material such as deepfakes as part of the digital violence the government aims to address through new legislation. However, it does not describe any specific AI incident or harm that has occurred, nor does it report a near miss or imminent threat. Instead, it details ongoing policy planning and legislative proposals to prevent and punish such harms. Therefore, this is best classified as Complementary Information, providing context and governance response to AI-related harms rather than reporting a new AI Incident or AI Hazard.

How the German government plans to help victims of digital violence

2026-03-25
WEB.DE
Why's our monitor labelling this an incident or hazard?
The article mentions AI in the context of deepfakes—AI-manipulated images and videos used in digital violence—but does not describe a concrete event where AI caused harm. The discussion centers on legislative proposals and societal debates, which qualify as governance and societal responses to AI-related issues. Therefore, this is Complementary Information, as it provides context and updates on policy measures addressing AI-related harms without reporting a new AI Incident or AI Hazard.

Digital violence: how the German government plans to help victims

2026-03-25
Heilbronner Stimme
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI incident or hazard where harm has occurred or is imminent due to AI system development, use, or malfunction. Instead, it outlines planned legal reforms and debates around digital violence involving AI-generated content (deepfakes) and surveillance technologies. This fits the definition of Complementary Information, as it provides context on governance responses and policy development addressing AI-related harms but does not report a new incident or hazard itself.

Digital violence: how the German government plans to help victims

2026-03-25
Aachener Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI in the context of deepfakes, which are AI-generated manipulated media, and discusses legislative plans to criminalize their unauthorized creation and distribution. However, it does not report any actual incident of harm caused by AI systems, nor does it describe a specific event where AI use or malfunction has led to harm. Instead, it focuses on the government's policy response and planned legal reforms to address potential harms from AI-manipulated content and digital violence. This fits the definition of Complementary Information, as it provides governance and societal response context to AI-related risks without describing a new AI Incident or AI Hazard.

Sexualized violence online: digital violence: how the German government plans to help victims

2026-03-25
Rhein-Neckar-Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-manipulated material (deepfakes) as a form of digital sexual violence targeted by the proposed legislation, indicating the involvement of AI systems in the context of harm. However, the article does not describe any realized harm or specific event where AI use has directly or indirectly caused harm. Instead, it outlines government plans to address and prevent such harms. Therefore, this is a plausible future risk scenario related to AI systems, qualifying as an AI Hazard rather than an AI Incident. The article also includes discussion of governance and policy responses, but since the main focus is on the potential for harm and legislative measures, it is not merely complementary information.

Collien Fernandes case: Saxony's justice minister warns against hasty tightening of criminal law

2026-03-26
DNN - Dresdner Neueste Nachrichten
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-manipulated deepfake videos as a concern and discusses legislative proposals to criminalize their unauthorized creation and distribution. However, it does not report any specific AI incident where harm has already occurred. The focus is on the potential for harm and the need for well-considered legal frameworks. Therefore, this qualifies as an AI Hazard, since the development and use of AI systems for creating deepfakes could plausibly lead to harms such as violations of privacy and sexualized digital violence, but no concrete incident is described as having happened yet.

German actress reports ex-husband, opening a debate on digital sexual violence

2026-03-24
Terra
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and distribution of pornographic deepfake videos using AI technology, which directly harmed the actress by violating her rights and causing psychological trauma. The AI system's role in generating these videos is central to the harm described. The incident meets the criteria for an AI Incident as it involves realized harm (violation of rights and harm to the individual) caused by the use of an AI system. The subsequent legal and societal responses further confirm the significance of the harm caused.

German actress reports ex-husband for digital sexual violence

2026-03-24
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate deepfake pornographic videos, which is a clear example of an AI system's use causing harm. The harm includes violation of personal rights, digital sexual violence, and psychological harm to the victim. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The subsequent legal and societal responses further confirm the recognition of harm caused by AI misuse in this context.

"The shame must be fought": German government moves to criminalize deepfake distribution after actress accuses ex-husband

2026-03-25
Observador
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, an AI system that generates realistic fake images or videos. The malicious use of this AI system has directly led to harm, including identity theft, defamation, and violation of personal rights, which are breaches of fundamental rights under applicable law. The government's legislative response further confirms the recognition of this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use.

Actress discovers her husband was creating deepfake sexual videos: "My body was stolen"

2026-03-28
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI (deepfake technology) to create non-consensual pornographic videos, which is a direct violation of the victim's rights and constitutes digital sexual violence. The harm has already occurred, as the victim suffered from the distribution of these videos over years. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The article also discusses legal and societal responses, but the primary focus is on the realized harm caused by the AI system's misuse.

German actress reports ex-husband, opening a debate on digital sexual violence

2026-03-24
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake videos, to create pornographic content featuring the actress's image without her consent. This use of AI directly led to harm by violating her rights and causing personal and reputational damage. The involvement of AI in generating the deepfake content and the resulting harm to the individual meets the criteria for an AI Incident under violations of human rights and personal rights.

German actress reports ex-husband for digital sexual violence

2026-03-25
Portal Tela
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography used without consent, which is a direct violation of personal rights and constitutes harm to the individual and communities. The deepfake technology is an AI system that generated harmful content, leading to digital sexual violence. The harm is realized, not just potential, as the actress has suffered ongoing harassment and has filed formal complaints. The government's legislative response and public mobilization are complementary developments but do not negate the primary classification as an AI Incident. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Protesters mobilize in Germany in support of actress targeted by deepfakes

2026-03-31
O Povo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos causing harm to the actress, which is a direct harm resulting from the use of an AI system. The harm includes violation of personal rights and reputational damage, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The protests and legal concerns further underscore the realized harm and societal impact.

Protesters mobilize in Germany in support of actress targeted by 'deepfakes'

2026-03-31
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation) to create and distribute harmful fake videos, directly causing harm to the actress through harassment and violation of her rights. This meets the definition of an AI Incident because the AI system's use has directly led to harm (psychological, reputational, and rights violations). The article focuses on the harm caused and the legal and societal responses, not just on the technology or potential future harm, so it is not a hazard or complementary information.

Thousands protest in Germany in support of actress whose ex-husband shared fake pornographic videos of her

2026-03-31
O Globo
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate fake pornographic videos, which directly harmed the actress by violating her rights and privacy. The dissemination of such deepfakes is a clear example of harm to an individual and community, fitting the definition of an AI Incident. The article reports realized harm, not just potential harm, and thus it qualifies as an AI Incident rather than a hazard or complementary information.

Sexual deepfake scandal triggers #MeToo wave in Germany

2026-03-31
Folha - PE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake pornographic videos (deepfakes) of a person without consent, which is a direct misuse of AI technology causing harm to the victim. This harm includes violation of personal and digital rights, psychological abuse, and reputational damage. The involvement of AI in creating these videos is clear, and the harm is realized and ongoing, as evidenced by the public outcry and legal investigations. Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Protesters mobilize in Germany in support of actress targeted by 'deepfakes'

2026-03-31
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake pornographic videos (deepfakes) of the actress, which is a direct misuse of AI technology causing harm to her personal rights and dignity. The public protests and calls for legal reform underscore the recognized harm and societal impact. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident involving violations of rights and harm to the individual and community.