AI-Generated Deepfake Abuse Leads to Legal Action and Media Consequences in Germany

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Actress Collien Fernandes has accused her ex-husband Christian Ulmen of using AI-generated deepfake pornography and fake profiles to inflict digital violence, identity theft, and emotional harm. Legal proceedings have begun in Spain and Germany, and broadcaster ProSieben removed Ulmen's show following the allegations. The incident highlights AI's role in violations of personal rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI-generated deepfake content that has been distributed and caused harm to Collien Fernandes. The harm is realized and ongoing, as the fake images and videos have been circulating for years, and the victim has filed a legal complaint. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person, specifically violations of rights and reputational harm. The article does not focus on future risks or responses but on the actual harm caused by the AI-generated content.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


After allegations against her ex-husband: Where Collien Fernandes currently is

2026-03-20
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake content that has been distributed and caused harm to Collien Fernandes. The harm is realized and ongoing, as the fake images and videos have been circulating for years, and the victim has filed a legal complaint. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person, specifically violations of rights and reputational harm. The article does not focus on future risks or responses but on the actual harm caused by the AI-generated content.

Nude photos, stalking, deepfakes: How to protect yourself from digital violence

2026-03-20
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images used maliciously to harm an individual, which is a direct harm to the person and a violation of rights. The AI system's role in generating the fake pornographic images is pivotal to the harm described. Therefore, this qualifies as an AI Incident under the framework's definition of harm to persons and violations of rights caused by AI misuse.

Collien Fernandes raises serious allegations: Criminal complaint against her ex Christian Ulmen

2026-03-19
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated or AI-manipulated content (deepfake pornography) to impersonate a person and cause harm through non-consensual sexual exploitation and identity theft. The harm is realized and significant, including violations of personal rights and psychological harm. The AI system's role is pivotal in generating the fake images and videos. Therefore, this event meets the criteria for an AI Incident.

Opinion: News of the day: Collien Fernandes fights digital violence, ideas for more punctual trains, Ukrainian maritime trade

2026-03-19
Spiegel Online
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media, often used maliciously to create fake pornographic videos. The article reports that such AI-generated content has been used to harm Collien Fernandes, constituting digital sexual violence and a violation of her rights. The harm is realized and ongoing, and the AI system's misuse is central to the incident. Hence, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

After Collien Fernandes: Mareile Höppner is also a victim of AI fakes

2026-03-20
Focus
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornography targeting specific individuals, causing harm to their personal rights and dignity. The article mentions that the victims are publicly identified and that legal actions have been initiated, indicating realized harm. The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident due to violation of human rights and harm to persons. Hence, the classification is AI Incident.

Like Collien Fernandes: Mareile Höppner is also affected by AI fakes - "A dirty everyday reality"

2026-03-20
Focus
Why's our monitor labelling this an incident or hazard?
The creation and distribution of AI-generated deepfake pornography is a clear example of harm caused by the use of AI systems, specifically generative AI used to produce fake explicit content without consent. This directly violates the rights of the individuals depicted and causes significant personal and social harm. The involvement of AI in producing these fake images/videos and the resulting legal and social consequences confirm this as an AI Incident. The article focuses on the harm already caused and the responses to it, rather than just potential future harm or general AI news.

Collien Fernandes files a complaint against Christian Ulmen: Allegations of psychological abuse

2026-03-20
20 Minuten
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated pornographic content was used maliciously to harm Collien Fernandes, which is a direct harm to her psychological well-being and personal rights. The AI system's use in generating and distributing harmful content directly led to realized harm, fulfilling the criteria for an AI Incident. The involvement of AI is clear and central to the harm described, and the harm is materialized, not just potential.

Collien Fernandes raises serious allegations against ex-husband Christian Ulmen

2026-03-20
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake technology, which is an AI system capable of generating synthetic media. The harm caused includes violations of personal rights and digital sexual violence through the distribution of deepfake pornographic content, which is a clear harm to the individual and community. The AI system's use has directly led to these harms. Therefore, this event meets the criteria for an AI Incident.

"My body was stolen from me for years": Collien Fernandes files a complaint against Christian Ulmen

2026-03-19
Frankfurter Rundschau
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of pornographic deepfake videos, which are typically generated using AI systems capable of synthesizing realistic fake videos. This use of AI has caused direct harm to Collien Fernandes by subjecting her to online abuse and emotional distress. The harm is realized and ongoing, meeting the criteria for an AI Incident. There is explicit mention of the harm caused by the AI-generated content, and the AI system's role is pivotal in producing the fake videos.

Criminal complaint against ex-husband Christian Ulmen: Collien Fernandes on VOXStimme 2024: "This sexualized violence can ruin lives"

2026-03-20
rtl.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated realistic videos or images. The article reports that Collien Fernandes is a victim of deepfake-based digital sexual violence, which constitutes harm to her personal rights and well-being. This harm is directly caused by the use of an AI system (deepfake generation). Hence, the event meets the criteria for an AI Incident due to realized harm caused by AI misuse.

After the allegations against Christian Ulmen - how do I find out whether deepfakes of me are circulating online?

2026-03-20
rtl.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes causing harm (fake sexual content), which is a form of violation of rights and harm to individuals. However, the article focuses on explaining what deepfakes are, the challenges in detecting them, and advice on how to find such content online. It does not report a new AI Incident or AI Hazard but provides context and guidance related to an existing issue. Hence, it fits the definition of Complementary Information as it supports understanding and response to AI harms rather than describing a new primary harm event.

Silence broken: Fernandes sets off shockwaves

2026-03-20
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake technology to create fake pornographic images and videos of the victim, which were then used to sexually exploit her and contact her personal and professional circles. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident where AI use has directly led to harm. Although the accused is presumed innocent, the harm caused by the AI system's misuse is clear and ongoing.

Complaint filed in Spain! What lies ahead for Christian Ulmen?

2026-03-20
rtl.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated sex videos as part of the allegations, indicating AI system involvement in creating harmful content. The harms described (identity theft, privacy violations, sexualized violence) align with violations of rights and harm to individuals. However, the event is at an early investigative stage with no confirmed incident or harm established. The main focus is on the legal process and potential outcomes, making it an update or complementary information rather than a confirmed AI Incident or AI Hazard. Hence, the classification as Complementary Information is appropriate.

Collien Fernandes files a complaint against ex-husband Christian Ulmen

2026-03-19
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos as part of the online attacks against Collien Fernandes. These deepfakes are pornographic and have been widely viewed, causing harm to her personal dignity and privacy. The creation and dissemination of such AI-generated content directly led to harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating the deepfakes and the resulting violation of rights and harm to the individual is clear and direct.

Democratization of abuse: Why we are losing the fight against deepfake porn

2026-03-19
der Standard
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated using AI systems that synthesize realistic but fake images or videos of individuals. The described case involves the malicious use of such AI tools to create and spread non-consensual explicit content, causing significant harm to the victim's personal rights and dignity. This is a direct harm caused by the use of an AI system, fitting the definition of an AI Incident due to violation of human rights and harm to the individual.

Deepfake porn: Collien Fernandes raises serious allegations against ex-husband Christian Ulmen

2026-03-19
der Standard
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated using AI systems that synthesize realistic but fake images or videos. The creation and dissemination of such content without consent is a violation of human rights and constitutes harm to the individual involved. Since the article details the occurrence of this harm and the AI system's role in producing the deepfake content, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to individuals.

Digital violence: Rules are missing - and politicians are under pressure

2026-03-20
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear from the mention of AI-generated deepfake videos. The harm described (emotional and reputational harm from non-consensual deepfake sexual content) fits under harm to individuals and violation of rights. However, the article does not describe a new specific AI Incident but rather discusses the broader issue and ongoing debate, with a particular case mentioned as context. Therefore, this is best classified as Complementary Information, as it provides context and societal response to existing AI-related harms rather than reporting a new incident or hazard.

SZ podcast: Digital violence - It is about degradation

2026-03-20
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfakes, which are AI-generated manipulated videos and images, causing harm to Collien Fernandes by spreading false and damaging content. This is a direct harm to the individual's dignity and privacy, fitting the definition of harm to persons and violation of rights. The AI system's use (deepfake generation) has directly led to this harm. The legal complaint and ongoing investigation further confirm the seriousness of the incident. Hence, this is classified as an AI Incident.

Collien Fernandes files a complaint against her ex-husband Christian Ulmen

2026-03-20
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-enabled technologies (likely deepfake generation and automated fake account creation) to produce and distribute fake pornographic videos and images, as well as fake social media accounts impersonating the victim. This has directly led to harm including violation of privacy, reputational damage, and psychological trauma to the victim. The involvement of AI is reasonably inferred from the nature of the fake content and accounts. The harm is realized and ongoing, meeting the criteria for an AI Incident under violations of human rights and harm to individuals. The article also discusses legal actions and public reactions, but the primary focus is on the harm caused by the AI-enabled impersonation and fake content.

Nude photos and deepfakes: Digital violence - "It is almost always the ex-partners"

2026-03-20
RP Online
Why's our monitor labelling this an incident or hazard?
The article references digital violence involving deepfakes and fake profiles, which reasonably implies the use of AI systems for generating manipulated images and identities. This involvement can lead to harm such as violations of privacy and psychological harm, fitting the definition of an AI Incident. Although the article mentions allegations and the presumption of innocence, the described harms are consistent with realized harms caused by AI-generated content. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by AI systems in digital violence contexts.

Affected by AI fakes herself: Mareile Höppner stands by Collien Fernandes

2026-03-20
Bunte
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake content (KI-Fakes) causing harm to individuals, which is a direct violation of personal rights and can be considered harm to persons. The involvement of AI in creating these fake videos is clear, and the harm is realized as the victims are publicly acknowledging the impact and pursuing legal remedies. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

The Fernandes case: How Spain prosecutes digital sexual violence

2026-03-21
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake pornographic images and AI-generated voice content used to harass and harm a person, which is a direct harm to the individual's rights and dignity. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and digital sexual violence). The article focuses on the harm caused and legal responses, not just general AI news or potential future harm. Hence, it is classified as an AI Incident.

Allegations of digital violence: "She is by no means an isolated case"

2026-03-20
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create fake pornographic images and videos and to manipulate voice recordings, which are explicitly described as AI-based deepfake technologies. The harm is realized, including psychological distress and violation of personal rights, fulfilling the criteria for harm to persons and communities. The AI system's use is directly linked to the harm, making this an AI Incident rather than a hazard or complementary information. The article also discusses the broader societal impact and legal context, but the primary focus is on the harm caused by AI-generated digital violence.

Digital sexualized violence: How big is the problem?

2026-03-20
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images and videos (deepfakes) that have been used to impersonate and harm individuals, which is a direct violation of personal rights and constitutes digital sexualized violence. The article reports on actual harm caused by these AI-generated materials, including fake social media profiles and pornographic content, which meets the criteria for an AI Incident. The involvement of AI in creating these harmful materials is explicit, and the harm to individuals' rights and dignity is direct and realized, not merely potential. Hence, the classification as AI Incident is appropriate.

Pornographic deepfakes: Collien Fernandes raises serious allegations against her ex-husband

2026-03-20
taz.de
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of pornographic deepfake videos and fake profiles using AI-generated content directly caused harm to Collien Fernandes, including digital violence and violation of her rights. The AI system's misuse is central to the harm, fulfilling the criteria for an AI Incident. The article details ongoing legal and political responses but the primary focus is on the harm already caused by the AI system's outputs, not just potential or future harm or complementary information.

It is about fake porn: Collien Fernandes files a complaint against ex-husband Christian Ulmen

2026-03-19
MOPO.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic synthetic media. The creation and distribution of non-consensual deepfake pornographic videos and fake accounts directly harm the individual impersonated, violating their rights and causing emotional distress. The article describes realized harm through the use of AI-generated content, meeting the criteria for an AI Incident. The involvement of AI is explicit through the mention of deepfakes, and the harm is direct and ongoing, with legal proceedings initiated. Hence, the classification as AI Incident is appropriate.

After the allegations: Series featuring Christian Ulmen no longer available online

2026-03-20
MOPO.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of pornographic AI deepfakes, which are AI-generated manipulated videos, causing harm to an individual by misusing their likeness without consent. This is a clear violation of rights and personal harm, fitting the definition of an AI Incident. The removal of the series is a response to the harm caused. Although legal proceedings are ongoing and the accused denies the allegations, the AI system's role in causing harm is central to the event described.

AI porn, rape! Collien reveals shocking details | Heute.at

2026-03-20
Heute.at
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-generated voice to impersonate the victim, which is an AI system's use contributing to the harm. The harms described include identity theft, psychological and emotional harm, and violations of personal rights, which fall under violations of human rights or breach of obligations intended to protect fundamental rights. Since the AI system's use directly led to these harms, this qualifies as an AI Incident.

Serious allegations! Collien Fernandes files a complaint against her ex Ulmen | Heute.at

2026-03-19
Heute.at
Why's our monitor labelling this an incident or hazard?
The creation and use of fake profiles and realistic pornographic content imply the involvement of AI systems capable of generating such content. The alleged actions constitute violations of personal rights and harm to the individual. Although the allegations are still under investigation and the article reports no legal findings, the direct link between the AI-generated fake profiles and content and the violation of rights supports classifying this event as an AI Incident.

Digital violence: Collien Fernandes accuses ex-husband Ulmen - what to do about deepfakes

2026-03-20
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake technology, which can generate synthetic media. The alleged use of deepfakes to harm a person fits the definition of an AI Incident if harm has occurred. However, since the allegations are unverified and the accused denies the claims, and no confirmed harm or legal ruling is reported, the event currently represents a plausible risk or potential harm rather than a confirmed incident. Thus, it is best classified as an AI Hazard, reflecting the credible potential for harm from AI misuse in this context.

Digital violence: Collien Fernandes accuses ex-husband Ulmen - what to do about deepfakes

2026-03-20
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated media, so the involvement of an AI system is explicit. The alleged creation and distribution of Deepfake images of a person without consent is a violation of personal rights and can be considered digital violence, which is a form of harm to individuals. The article states that the victim has filed a police report, indicating that harm has occurred and legal processes are underway. Hence, the event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

"Virtually raped": Collien Fernandes files a complaint against ex-husband Christian Ulmen

2026-03-20
GameStar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create fake pornographic videos and fake profiles, which have caused direct harm to the individual by violating her rights and causing psychological harm. The AI system's use (deepfake generation) directly led to these harms, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The legal complaint and the described harms confirm that the incident has materialized, not just a potential risk.

Ex-husband allegedly published deepfake porn: The Fernandes case in 5 points

2026-03-20
watson.ch/
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear example of an AI system's use leading to harm. The deepfakes caused direct harm to the victim's privacy, reputation, and mental well-being, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The involvement of AI in creating realistic fake pornographic content without consent is central to the harm described. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

How Collien Fernandes once advocated for victims of deepfakes

2026-03-19
Bunte
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of fake profiles and fake nude photos/videos that appear to be of Collien Fernandes but are not genuine. The creation of such realistic fake media is typically enabled by AI deepfake technology, which is an AI system. The harm includes violation of personal rights and privacy, which falls under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized and ongoing, not merely potential. Therefore, this event meets the criteria for an AI Incident.

Collien Fernandes: "It was like receiving news of a death"

2026-03-20
oe24
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated media, and their use here has caused direct harm to Collien Fernandes through the spread of non-consensual explicit content and related abuses. The article details ongoing harm and legal actions related to this misuse. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to a person (psychological and reputational harm), fulfilling the harm criteria (a) and (c) (violation of rights).

Ulmen and Fernandes: "Virtually raped" - how dangerous is fake porn?

2026-03-20
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake pornographic images and voice simulations, which are used to harass and sexually violate the victim digitally. This use of AI has directly caused psychological harm and violation of rights to the victim, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The harm is realized and ongoing, not merely potential. The involvement of AI in the creation and dissemination of these harmful materials is central to the incident described.

Collien Fernandes goes public about sexualized violence by ex-husband Christian Ulmen

2026-03-19
Kurier
Why's our monitor labelling this an incident or hazard?
An AI system (specifically, AI-generated voice technology) was used by the perpetrator to impersonate Fernandes and commit sexualized violence and harassment. The harm is realized and ongoing, involving violations of personal rights and causing psychological and social harm. The AI system's use was central to the perpetration of these harms, making this an AI Incident under the definitions provided.

The Collien Fernandes case: Do we need an AI ban for men?

2026-03-20
Kurier
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake technology to create pornographic content without consent, which is a direct violation of the victim's rights and causes psychological harm. The AI system's misuse is central to the harm described. The article explicitly mentions deepfakes and the creation of fake online accounts and conversations, indicating AI system involvement in causing harm. Hence, it meets the criteria for an AI Incident.

Collien Fernandes files a complaint against ex Ulmen: "Virtual rape"

2026-03-19
B.Z. Berlin
Why's our monitor labelling this an incident or hazard?
The incident describes the use of fake online profiles and manipulated content to impersonate the victim, which is consistent with AI-enabled identity theft and deepfake or synthetic media generation capabilities. The harm includes violation of personal rights and psychological harm to the victim, meeting the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The involvement of AI or algorithmic systems is reasonably inferred from the nature of the fake profiles and content manipulation described. Therefore, this event qualifies as an AI Incident.

"Die Spur" documentary: The perpetrator was likely very close to Collien Fernandes

2026-03-20
Promiflash.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake pornography, which involves AI-generated synthetic media. The creation and use of these deepfakes have directly harmed Collien Fernandes by violating her rights and causing emotional distress, as well as causing financial harm to a third party through fraudulent requests. The AI system's use in generating these fake videos and profiles is central to the harm described. Hence, this is an AI Incident involving violations of rights and harm to individuals.

AI porn sent out: Collien Fernandes files a complaint against ex Christian Ulmen

2026-03-19
Nau
Why's our monitor labelling this an incident or hazard?
The incident describes the creation and use of fake social media profiles impersonating the actress, sending pornographic videos and engaging in deceptive chats with men. The scale (around 30 men) and the nature of the fake profiles suggest the use of AI systems for generating realistic fake content and managing interactions. The harm is realized as emotional and reputational damage to the victim, constituting a violation of rights. The AI system's misuse directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Interview on the deepfake porn affair: "Fernandes was degraded to an object by Ulmen"

2026-03-20
Berner Zeitung
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic synthetic media. The malicious use of this AI system to create and distribute fake pornographic content without consent directly harms the victim's rights and causes significant personal and social harm. This aligns with the definition of an AI Incident as the AI system's use has directly led to violations of human rights and harm to the individual.

Digital violence: What the government plans against deepfakes - Frankenpost

2026-03-20
Frankenpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic images, which directly harm the individual by violating their rights and causing reputational and emotional damage. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and a violation of rights. The article describes realized harm through the circulation of these images, not just potential harm, and discusses legal implications, confirming the incident classification.

"Digital rape": Collien Fernandes files a complaint against ex-husband Christian Ulmen

2026-03-19
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake pornography, which is generated by AI systems that synthesize realistic fake images and videos. The victim has suffered harm through the unauthorized creation and distribution of these AI-generated images, which is a direct violation of her rights and has caused emotional and psychological harm. The AI system's role in creating the fake content is pivotal to the harm experienced. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

"Digital rape": Collien Fernandes files criminal complaint against ex-husband Christian Ulmen

2026-03-19
Der Bund
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake pornography, which is generated using AI systems that create synthetic but realistic images or videos. The harm includes psychological and emotional violence, violation of personal rights, and non-consensual use of AI-generated content. The AI system's role is pivotal as it enabled the creation of fake pornographic material that was distributed, leading to direct harm to the victim. Therefore, this qualifies as an AI Incident due to realized harm caused by the malicious use of AI technology.

Fake porn of Collien Fernandes: How great is the danger?

2026-03-20
DNN - Dresdner Neueste Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic images and voice simulations without consent, which has directly led to significant emotional and reputational harm to the victim, Collien Fernandes. This constitutes harm to the individual (a form of harm to health and dignity) and a violation of rights. The AI system's use in creating and distributing this content is central to the harm described. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm. The article also discusses the broader societal implications and legal responses, but the primary focus is on the harm caused by the AI-generated content.

"Virtual rape": How great is the danger online?

2026-03-20
GT - Göttinger Tageblatt
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake sexual content and voice simulations without consent, directly causing harm to the individual involved (Collien Fernandes) and potentially others. This constitutes violations of personal rights and emotional harm, which are recognized harms under the AI Incident definition. The article confirms ongoing legal proceedings and real harm, not just potential risk, so it is an AI Incident rather than a hazard or complementary information.

Christian Ulmen: ProSieben cracks down - "jerks" removed from Joyn

2026-03-20
OK! Magazin
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can cause significant harm by violating privacy and personal rights. The article describes an incident where deepfake technology was allegedly used to impersonate and harm an individual, which constitutes a violation of rights and harm to the person. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Collien Fernandes accuses ex-husband Christian Ulmen of "virtual rape"

2026-03-19
Luxemburger Wort
Why's our monitor labelling this an incident or hazard?
The article describes a case where fake pornography and fake profiles were used to harm the actress, which is consistent with AI-generated deepfake content or AI-enabled manipulation. The harm is realized and ongoing, involving violation of personal rights and dignity. The AI system's use in generating or distributing such content is reasonably inferred given the context of fake profiles and fake pornography online. Hence, this is an AI Incident involving violations of human rights and harm to the individual.

Christian Ulmen reported to police: Collien Fernandes speaks of "virtual rape" and more

2026-03-19
OVB Online
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI or AI-enabled technologies to fabricate digital identities and generate fake intimate content, causing significant harm to the victim's personal and digital rights. The harm includes violations of privacy, identity theft, and psychological trauma, which fall under violations of human rights and harm to the individual. The AI system's involvement is reasonably inferred from the creation of fake profiles and videos, which typically require AI-based generative or manipulation tools. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in this context.

"Virtual rape": How great is the danger online?

2026-03-20
Marler Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images and voice mimicking for sexualized content without consent, which has been distributed and caused harm to the victim. This constitutes a violation of personal rights and emotional harm, fitting the definition of an AI Incident. The AI system's use in generating and spreading such content is central to the harm described. Although legal proceedings are ongoing and the accused denies the allegations, the harm from AI-generated content is clearly occurring. Hence, this is not merely a potential hazard or complementary information but an actual incident involving AI harm.

"It is about power, control and the destruction of the other person"

2026-03-20
FOCUS
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is evident through the mention of deepfake pornography and fake profiles, which rely on AI technologies for content generation and identity manipulation. The harm described includes violations of personal rights and digital violence, which are direct harms caused by the use of AI-generated content. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI systems in the form of digital violence and rights violations.

Fernandes on deepfake porn: "You become an object"

2026-03-20
ZDFheute
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated deepfake pornography, which is a clear example of AI misuse causing direct harm to individuals. The harms include violation of personal rights, digital sexual abuse, and psychological trauma, fitting the definition of an AI Incident under violations of human rights and harm to communities. The article reports realized harm rather than potential harm, and the AI system's role is pivotal in enabling the creation and dissemination of the deepfake content. Therefore, this event qualifies as an AI Incident.

The Ulmen case: Justice minister wants to take action against digital violence

2026-03-20
ZDFheute
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is created using AI systems that generate manipulated videos, which constitutes digital violence and harm to individuals. The article reports on actual harm experienced by a person due to AI-generated content, fulfilling the criteria for an AI Incident. The political response and proposed legislation are complementary but secondary to the primary harm described. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated deepfake content.

Collien Fernandes: Serious allegations against ex-husband Christian Ulmen

2026-03-21
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake generation) to create and distribute non-consensual explicit content, which is a clear violation of personal rights and privacy. The harm has already occurred as the content was distributed, and legal action has been taken. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Cathy Hummels expresses solidarity with Collien Fernandes and shares her own experience

2026-03-21
rtl.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated sexual content and fake profiles impersonating the victim, which is a direct use of AI systems to create harmful content. This has led to realized harm including identity theft, sexualized digital violence, and emotional distress, which are violations of personal and possibly human rights. The involvement of AI in generating the harmful content and the resulting impact on the victim meets the criteria for an AI Incident. The article also discusses legal actions and societal responses, but the primary focus is on the harm caused by the AI-generated content, not just complementary information or potential future harm.

Moving words and a powerful image: "THANK YOU, THANK YOU, THANK YOU" - Collien Fernandes responds to "overwhelming support"

2026-03-21
rtl.de
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated sexual content of a person was created and distributed without consent, which is a clear violation of rights and causes harm to the individual involved. The AI system's role in generating the content is central to the harm, making this an AI Incident under the framework's definition of violations of human rights or breach of legal protections.

"THANK YOU, THANK YOU, THANK YOU" - Collien Fernandes responds to "overwhelming support"

2026-03-21
rtl.de
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of AI-generated deepfake content used maliciously, which involves AI technology. However, the main focus is on the social and legal context, public support, and calls for change rather than on the AI system itself causing harm autonomously or malfunctioning. The harm is linked to human misuse of AI, and the article does not detail a new AI Incident or AI Hazard but rather provides context and updates on an ongoing situation involving AI misuse. Therefore, it fits best as Complementary Information.

When Collien reported her ex, police turned her away

2026-03-21
Heute.at
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake technology and fake profiles to impersonate Collien Fernandes, which is an AI system application. The impersonation led to sexual chats in her name and reputational damage, constituting harm to her rights and community. The involvement of AI in generating deepfake content and fake profiles directly led to this harm. Hence, this event meets the criteria for an AI Incident due to realized harm caused by AI misuse.

Collien Fernandes and the "perpetrators' paradise": Why her place of residence matters

2026-03-21
watson.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create and distribute manipulated pornographic content without consent, causing direct harm to Collien Fernandes and potentially many others. This constitutes digital violence and a violation of personal rights, fitting the definition of harm to persons and communities. The article describes realized harm, not just potential risk, and discusses legal and political responses to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Collien Fernandes: "Germany is a perpetrators' paradise" - what is set to change now

2026-03-20
WAZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content without consent, causing real harm to the victim, including psychological trauma and violation of personal rights. The article explicitly describes the harm caused by the AI-generated content and the victim's experience, fulfilling the criteria for an AI Incident. The discussion of legal reforms is complementary but does not overshadow the primary incident of harm caused by AI misuse.

Admission at a demonstration: Sexualized fake images of Luisa Neubauer are reportedly also in circulation

2026-03-22
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated fake sexual images (deepfakes) and identity impersonation online, which directly harms the individuals involved by violating their privacy and potentially causing psychological harm. The AI system's use in creating and distributing such manipulated content is central to the harm described. Therefore, this qualifies as an AI Incident due to violations of rights and harm to persons caused by AI-generated manipulated content.

Allegations against Christian Ulmen: Fahri Yardım breaks his silence in the Collien Fernandes case

2026-03-23
DIE WELT
Why's our monitor labelling this an incident or hazard?
The article mentions deepfake technologies, which are AI systems, in the context of alleged digital sexualized violence and internet abuse. This suggests potential harm linked to AI misuse. However, the article does not provide concrete evidence or detailed description of an AI system's direct or indirect role causing harm, nor does it describe a specific incident with confirmed outcomes. The focus is on public statements and social responses, without clear confirmation of an AI Incident or a plausible AI Hazard. Therefore, the article is best classified as Complementary Information, providing context and societal response to a broader AI-related issue rather than reporting a new AI Incident or Hazard.

Christian Ulmen: The racy confession in an eight-year-old podcast

2026-03-22
DIE WELT
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems to create deepfake pornographic content and AI-generated voice to impersonate a person without consent, leading to serious harm described as 'virtual rape' by the victim. This is a direct harm caused by the use of AI technology, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The involvement of AI in generating the content and voice is explicit and central to the harm caused.

Fahri Yardım: The actor breaks his silence on the Christian Ulmen case - and criticizes his own conduct

2026-03-22
Spiegel Online
Why's our monitor labelling this an incident or hazard?
The article references digital sexualized violence involving deepfakes, which are AI-generated synthetic media, indicating AI system involvement in the harm. The harm (sexualized digital violence) is occurring, and AI-generated content is a contributing factor, which fits the definition of an AI Incident. However, the article mainly reports on public reactions and statements rather than new details of the incident itself. Since the harm is realized and AI-generated deepfakes are central to the abuse, this qualifies as an AI Incident due to violation of rights and harm to individuals through AI misuse.

During her deepfake research - did Collien chat HERE with husband Christian Ulmen?

2026-03-22
rtl.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes and voice fakes to create fake profiles and simulate sexual content without consent, which is a direct violation of personal rights and causes harm to the individual. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident. The involvement of AI is clear, the harm is realized (not just potential), and the harm includes violation of rights and personal harm, fitting the definition of an AI Incident.

Deepfake porn: What drives the perpetrators

2026-03-22
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic fake images or videos. The use of such technology to create pornographic content without consent and to impersonate someone on social media to exploit others directly causes harm to the person involved, including violations of rights and personal harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

Debate on digital violence: Collien Fernandes receives broad solidarity from politics and society

2026-03-22
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic synthetic videos. The article details how such AI-generated deepfake pornographic videos of Collien Fernandes have been created and distributed without her consent, causing harm to her personal dignity and safety. This constitutes a violation of rights and digital violence, which are harms under the AI Incident definition. The article also mentions legal actions and proposed laws addressing this harm, confirming the harm is realized and significant. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Collien Fernandes made a ZDF documentary about deepfakes in 2024: Its content and availability in the media library

2026-03-22
watson.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create harmful content that has directly caused harm to individuals, including the featured person Collien Fernandes. The documentary documents these harms and the ongoing challenges in addressing them. Since the harm is realized and directly linked to the use of AI systems, this qualifies as an AI Incident rather than a hazard or complementary information. The article is not merely about AI technology or a general discussion but focuses on actual harm caused by AI-generated deepfake pornography.

Collien Fernandes and the digital violence directed against her: A chronicle

2026-03-22
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake images, videos, and voice recordings that have been used to harass and harm Collien Fernandes. This is a direct harm caused by the malicious use of AI systems to create deceptive and harmful content, fitting the definition of an AI Incident due to violation of rights and harm to the individual. Although the perpetrator is not legally confirmed, the harm from the AI-generated content is realized and ongoing.

Collien Fernandes: What politicians want to learn from the Ulmen case

2026-03-22
Berliner Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create harmful content (deepfake pornography and fake profiles) that caused direct harm to a person (Collien Fernandes). This constitutes a violation of personal rights and harm to the individual, fitting the criteria for an AI Incident. Although the article focuses on the victim's statement and legal actions, the AI system's misuse is central to the harm described. Therefore, this is classified as an AI Incident.

Fernandes vs. Ulmen: "Celebrity endorsements can backfire on partnering brands"

2026-03-22
Wirtschafts Woche
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-generated deepfake technology to create fake pornographic content of an individual without consent, which is a violation of personal rights and causes harm. The AI system's use in generating fake profiles and manipulated content directly led to harm to the person involved. Therefore, this qualifies as an AI Incident under the framework, as it involves harm to a person and violation of rights due to AI misuse.

Did Collien Fernandes unknowingly chat with her husband?

2026-03-23
Nau
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes and fake voices to impersonate Collien Fernandes, which is an AI system's use leading to harm. The harms include violation of personal rights, emotional harm described as 'virtual rape,' and legal complaints filed. The AI system's role is pivotal in creating the fake profiles and content. Therefore, this qualifies as an AI Incident due to realized harm caused by AI misuse.

Collien's ex Ulmen made phone-sex calls even as a teenager

2026-03-21
Nau
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or AI-like systems to create fake profiles and pornographic content impersonating a real person, which constitutes identity theft and a violation of personal rights. The harm is realized, as the ex-wife has filed legal complaints for identity theft and related offences. The AI system's use in generating fake content is central to the incident, making this an AI Incident rather than a hazard or complementary information. The teenage phone calls mentioned in the article do not involve AI and are background context only.

How Collien Fernandes learned about the AI porn

2026-03-22
Nau
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated fake profiles and explicit content to impersonate and harm a person, which is a direct violation of rights and causes psychological and reputational harm. The creation and dissemination of fake nude photos and videos strongly suggest the use of AI technologies such as deepfake generation. The harm is realized and ongoing, meeting the criteria for an AI Incident. The involvement of AI is reasonably inferred from the nature of the fake content and profiles. Hence, this is not merely a potential hazard or complementary information but a concrete incident of AI misuse causing harm.

Collien Fernandes announces Sunday demonstration against sexualized violence in Berlin

2026-03-21
DER STANDARD
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of deepfake pornographic videos using AI technology without the consent of the victim, which is a clear violation of rights and causes harm to the individual. The AI system's use in generating fake explicit content directly leads to harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. Therefore, this event is classified as an AI Incident.

Virtual violence, real wounds

2026-03-22
DER STANDARD
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated pornographic content used to harm an individual by creating and distributing non-consensual deepfake images, which is a direct violation of personal rights and causes psychological and reputational harm. The AI system's role in generating the content is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The harm is realized, not just potential, as the victim reports ongoing impact.

Pornographic AI fakes: Bremen's interior senator calls for tougher penalties

2026-03-21
buten un binnen
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of pornographic AI-generated fake images directly harms the individual's rights and dignity, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The AI system's use in generating these fake images is central to the harm caused. Therefore, this event qualifies as an AI Incident.

Collien Fernandes reported ex-husband Christian Ulmen for "virtual rape": The legal situation on deepfake porn in Austria

2026-03-22
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornography, which is a direct misuse of AI technology causing harm to an individual's rights and dignity. The harm is realized as the victim has been subjected to non-consensual AI-generated explicit content, leading to legal and societal repercussions. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal protections.

Collien Fernandes: "Digital violence is real violence - Germany is a perpetrators' paradise"

2026-03-21
Der Bund
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content used maliciously to harm an individual, constituting digital violence with real psychological consequences. The AI system's misuse has directly led to harm, fulfilling the criteria for an AI Incident. The article focuses on the harm caused and the legal and societal responses, not merely on potential or future risks or general AI developments. Therefore, this is classified as an AI Incident.

No to sexualized digital violence!

2026-03-21
Rote Fahne News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and dissemination of deepfake pornographic content, which is AI-generated and non-consensual, causing harm to the victim. This fits the definition of an AI Incident as it involves the use of an AI system leading to violations of human rights and harm to the individual. The harm is realized, not just potential, and the AI system's role is pivotal in generating the fake content.

#trending special: Collien Fernandes

2026-03-23
MEEDIA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos and AI-generated voice impersonations, which were maliciously used to harass and harm Collien Fernandes and other women. The harms include violations of privacy, psychological trauma, and reputational damage, fitting the definition of an AI Incident due to direct harm caused by the AI system's outputs. The prolonged and systematic nature of the abuse, as well as the involvement of AI-generated content, clearly meets the criteria for an AI Incident rather than a hazard or complementary information.

More progressive case law: What Germany can learn from Spain in dealing with deepfake porn

2026-03-23
RP Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content without consent, which is a clear violation of personal rights and can be considered a form of harm to the individual and community. The article describes an actual incident where such AI-generated content was created and distributed, leading to legal action. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and online harassment).

Absurd TV scene: Did Collien chat here with her ex-husband?

2026-03-23
Heute.at
Why's our monitor labelling this an incident or hazard?
The article describes a case where deepfake AI technology was used to create fake profiles impersonating Collien Fernandes, leading to virtual abuse and dissemination of pornographic content. This use of AI has directly caused harm to the person involved, including violations of personal rights and psychological harm, which fits the definition of an AI Incident. The involvement of AI in the creation and use of deepfakes is explicit, and the harm is realized, not just potential. Hence, the event is classified as an AI Incident.

Friends of Christian Ulmen distance themselves - reactions to the Fernandes case

2026-03-23
watson.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create and distribute manipulated images and videos that have directly led to harm, including violations of personal rights and digital sexual violence. The harm is realized and ongoing, as evidenced by public outcry and calls for legal reform. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to an individual and the broader community affected by digital sexual violence.

Germany: Well-known actress fell victim to pornographic deepfakes made by her husband - digital violence bill being amended

2026-03-21
NewsIT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create pornographic deepfake images, which directly harmed the actress by violating her rights and causing psychological trauma. This fits the definition of an AI Incident as the AI system's use directly led to harm (violation of rights and harm to the individual). The legislative response is complementary information but does not change the classification of the core event as an AI Incident.

Germany: Actress reported "digital rape" by her ex-husband - government fast-tracks deepfake bill

2026-03-21
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create deepfake images, which are manipulated realistic images generated by AI technology. The use of these AI-generated images for sexual harassment and manipulation directly harms the victim, constituting a violation of rights and digital sexual violence. The harm is realized and ongoing, as the victim has suffered from this abuse for years. The government's legislative response further confirms the recognition of this as a serious AI-related harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to a person.

Digital violence and AI: Germany legislates on deepfakes after "digital rape" case

2026-03-21
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that have been used maliciously to harass and harm an individual, which fits the definition of an AI Incident due to violation of rights and harm to the person. The harm is realized and ongoing, and the AI system's role is pivotal in enabling the creation and dissemination of these fake images. The legislative response and public reaction are complementary information but do not change the classification of the core event as an AI Incident.

Germany votes on bill to protect against digital violence and pornographic deepfakes - "The most frequent victims are women"

2026-03-21
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deepfake technology to create fake pornographic images of a person without consent, which is a direct violation of her rights and causes significant harm. The harm is realized and ongoing, as the victim has suffered digital sexual violence and reputational damage. The legislative response aims to address this harm and prevent future incidents. The AI system's misuse is central to the harm described, meeting the criteria for an AI Incident under the framework.

Germany: Bill on digital violence and pornographic deepfakes after actress's complaint of "digital rape"

2026-03-21
in.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to create pornographic deepfake images of a person without consent, which is a direct violation of personal rights and constitutes harm to the individual. The AI system's use here is malicious and has directly led to harm, fulfilling the criteria for an AI Incident. The legislative response and public outcry further confirm the seriousness and realized harm of the incident. Hence, it is not merely a potential hazard or complementary information but a clear AI Incident.

Uproar in Germany: Actress accused her ex-husband of "digital rape"

2026-03-21
Cretalive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create fake pornographic images and profiles, which were used to harass and abuse the victim. This use of AI directly led to harm to the victim's rights and well-being, fulfilling the criteria for an AI Incident. The article describes actual harm caused by the AI system's misuse, not just potential harm, and the societal and legal responses further confirm the seriousness of the incident.

Germany / Government brings forward a bill on digital violence and pornographic deepfakes

2026-03-21
Αυγή
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake pornographic images, which were used maliciously to harass and harm the victim over a long period. This use of AI directly led to violations of the victim's rights and caused significant personal harm, fitting the definition of an AI Incident. The legislative response and public reaction underscore the seriousness and materialization of harm. Hence, the event is not merely a potential hazard or complementary information but a clear AI Incident involving harm caused by AI misuse.

Fast-track bill in Germany on fake images, videos and digital abuse

2026-03-21
Pelop.gr
Why's our monitor labelling this an incident or hazard?
The article focuses on a legislative response to AI-enabled harms, specifically sexualized deepfakes and digital abuse, which have already caused violations of personal rights and digital violence. The AI system involvement is clear (deepfake generation using AI), and the harms (violation of personality rights, digital abuse) are established and ongoing. However, the article primarily reports on the upcoming law and the societal discussion rather than a new or specific AI Incident or Hazard event. Therefore, this is best classified as Complementary Information, as it provides important context and governance response to AI harms rather than describing a new incident or hazard itself.

Legislative counterattack in Germany after "digital rape" deepfake scandal

2026-03-21
The PressRoom
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit through the use of deepfake technology to create fake pornographic content, which directly caused harm to the victim. This harm includes violations of personal rights and psychological injury, fitting the definition of an AI Incident. The legislative response and societal mobilization are complementary information but do not negate the fact that harm has already occurred due to AI misuse.

Germany to target pornographic deepfakes amid celebrity case

2026-03-20
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating pornographic deepfakes that have directly caused harm to an individual (identity theft, sexualized digital abuse), which fits the definition of an AI Incident due to violations of personal rights and harm to the individual. The article also discusses the legal and societal response to this harm, but the primary focus is on the realized harm caused by the AI system's outputs. Therefore, it is classified as an AI Incident rather than a hazard or complementary information.

Germany to crack down on sexualised deepfakes

2026-03-20
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated pornographic images (sexualised deepfakes) being spread maliciously, causing harm to the victim (a TV personality). This is a direct harm caused by the use of an AI system (deepfake generation). The legal complaint and government response indicate recognition of this harm and the need for prosecution. Therefore, this event qualifies as an AI Incident due to realized harm from the malicious use of AI-generated content violating personal rights and causing digital sexual violence.

Thousands rally in Berlin against online sexual violence, deepfakes

2026-03-22
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create sexualized fake images, which constitutes an AI system's involvement. The harms described include violations of human rights and psychological harm to victims, fitting the definition of harm to persons and communities. The event involves the use and consequences of AI systems (deepfakes) causing real harm, thus qualifying as an AI Incident. The political response to reform laws is complementary but the primary focus is on the ongoing harm and victim solidarity, confirming the classification as an AI Incident rather than merely complementary information or a hazard.

Germany to crack down on sexualised deepfakes

2026-03-20
The Local
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated sexualised deepfake images have been created and distributed, causing harm to the victim (a TV personality). This is a direct violation of personal and possibly human rights, fulfilling the criteria for harm under the AI Incident definition. The involvement of AI systems in generating these images is explicit, and the harm is realized, not just potential. The government's planned legal response is complementary information but does not change the classification of the event as an AI Incident.

Thousands rally in Berlin against online sexual violence, deepfakes

2026-03-22
dpa International
Why's our monitor labelling this an incident or hazard?
The article centers on a protest and political attention regarding online sexual violence involving AI-generated deepfakes, which are a form of AI system misuse causing harm. However, it does not report a specific incident of harm caused by an AI system in a particular event but rather discusses the broader societal issue and legislative efforts. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and governance responses to AI-related harms rather than describing a discrete AI Incident or AI Hazard.

"AI must not become a weapon against women"

2026-03-29
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems to create sexualized deepfake images, which directly harms individuals by violating their personal integrity and rights. The article explicitly mentions the need for legal sanctions against such AI-generated content, indicating that harm is occurring or has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating harmful content against women.

"A weapon against women": Government takes up the fight against deepfakes

2026-03-29
der Standard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) that can be used maliciously to harm individuals, specifically women, through digital violence. Although no specific harm or incident is described as having occurred, the announcement of measures to combat such misuse indicates recognition of a credible risk that these AI systems could lead to harm. Therefore, this qualifies as an AI Hazard, as the development and use of deepfake AI systems could plausibly lead to violations of rights and harm to communities if left unchecked.

Vienna: Sporrer and Holzleitner in the fight against digital violence

2026-03-29
www.kleinezeitung.at
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and distribution of sexualized deepfakes generated by AI, which have caused harm to a person (the actress Collien Fernandes) through non-consensual and damaging content. This constitutes a violation of rights and harm to the individual and community. The involvement of AI in generating the harmful content is clear, and the harm is realized, not just potential. The article also describes legal and policy responses, but the primary focus is on the incident of harm caused by AI misuse. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Female ministers fight against sexualised deepfakes

2026-03-29
oe24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create sexualized deepfakes, which directly harm individuals by violating their rights and dignity, constituting a violation of human rights and potentially causing psychological harm. The article describes realized harm through the creation and distribution of these AI-generated images, as well as ongoing legal and policy responses. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and digital violence).

Sporrer and Holzleitner fight against digital violence

2026-03-29
Vorarlberg Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create sexualized deepfakes without consent, which has directly harmed individuals by violating their rights and dignity, fulfilling the criteria for harm to persons and communities. The article describes actual harm (non-consensual distribution of AI-generated pornographic content) and ongoing legal and policy responses, indicating the AI system's role in causing the incident. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

Sporrer and Holzleitner in the fight against sexualised deepfakes

2026-03-29
nachrichten.at
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute sexualized deepfake content without consent, which has directly harmed the victim and sparked public outrage. The article details the harm caused by the AI-generated content and the legal and political responses to address this harm. Since the AI system's misuse has directly led to violations of rights and harm to individuals, it meets the criteria for an AI Incident. The focus is on actual harm caused by AI misuse rather than potential or future harm, and the article does not primarily discuss responses or broader ecosystem context alone, so it is not Complementary Information.

Sporrer and Holzleitner in the fight against digital violence

2026-03-29
m.noen.at
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and distribution of pornographic deepfakes generated by AI, which is a direct misuse of AI technology causing harm to the victim's rights and dignity. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to the individual). The announcement of measures by government ministers further supports the recognition of this as a significant harm event involving AI.

Actress Says She's Found Her Secret Online Abuser: Her Husband

2026-03-27
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of computer-generated audio to impersonate the actress's voice, which qualifies as an AI system generating synthetic content. The use of deepfake imagery and audio to impersonate and abuse the actress constitutes direct harm to her personal rights and well-being, fulfilling the criteria for an AI Incident. The involvement of AI in the abuse, and the resulting harm to the individual and community through the spread of false and harmful content, justifies classification as an AI Incident rather than a hazard or complementary information; AI-generated content is central to the abuse described, so the event is clearly in scope.

Germany investigates TV star's ex amid sexualised deepfakes uproar

2026-03-29
MM News
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated pornographic deepfake images of a person without consent, which is a violation of personal rights and constitutes digital violence. The AI system's use in generating these images and their malicious spread has directly caused harm to the individual, meeting the criteria for an AI Incident under violations of human rights and harm to communities. The investigation by prosecutors into stalking related to this AI misuse further supports the classification as an AI Incident.

How deepfake porn scandal surrounding TV star rocked Germany

2026-03-29
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and distribution of AI-generated deepfake pornographic content, which is a direct use of AI systems to produce harmful material. The harm includes severe personal and psychological injury to Fernandes, as well as broader societal harm through online abuse and threats. The involvement of AI in generating the deepfake content and its role in causing these harms meets the criteria for an AI Incident. The ongoing investigations and legal actions further confirm the realized harm rather than just potential risk.

German deepfake porn case sparks protests and pressure for change in law

2026-03-26
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content used maliciously to impersonate and sexually exploit an individual, which constitutes a violation of human rights and digital violence. The harm is direct and ongoing, as the deepfakes were posted and shared online, causing personal and social harm. The involvement of AI in generating the content and the resulting legal and social consequences clearly classify this as an AI Incident under the OECD framework.

How an AI Porn Scandal Around a TV Star Has Sparked Outrage in Germany

2026-03-29
TimesNow
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is an AI system generating harmful content. The distribution of such content has caused realized harm to the individual involved (Collien Fernandes) and has sparked societal outrage, indicating harm to communities. This fits the definition of an AI Incident as the AI system's use has directly led to violations of personal rights and harm to communities. Therefore, the event is classified as an AI Incident.

German Deepfake Porn Case Sparks Protests and Pressure for Change in Law

2026-03-26
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article describes an AI system generating pornographic deepfakes that impersonate a person without consent, constituting a violation of personal rights and digital violence. This harm has already occurred, as the actor has filed charges and there are ongoing legal proceedings. The AI system's role in creating the harmful content is central to the incident. Therefore, this qualifies as an AI Incident due to the realized harm to the individual's rights and the societal impact described.

Germany probes TV star's ex-husband amid sexualised deepfakes uproar

2026-03-27
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated pornographic images (deepfakes) being circulated, which constitutes a misuse of AI technology causing direct harm to the individual involved. The harm includes violation of rights and digital violence, fitting the definition of an AI Incident. The reopening of a legal probe and the government's pledge to legislate against such acts further confirm the seriousness and realized nature of the harm. Therefore, this is classified as an AI Incident.

Deepfake porn row rocks Germany as TV presenter accuses ex-husband

2026-03-29
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a clear example of an AI system's use leading to harm through violations of personal rights and causing psychological and reputational damage. The victim suffers direct harm from the distribution of non-consensual AI-generated content, meeting the criteria for an AI Incident under violations of human rights and harm to individuals. The ongoing legal and societal responses further confirm the materialized harm and the AI system's pivotal role in causing it.

German deepfake porn case sparks protests and pressure for change in law

2026-03-26
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake technology to create non-consensual pornographic content, which directly harms the victim and constitutes a violation of rights. The article details the harm caused, legal actions, and societal reactions, indicating that the AI system's use has already resulted in realized harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

Who Is Collien Fernandes? Deepfake Scandal Rocks Germany As TV Presenter Accuses Ex-Husband Of Creating Fake Porn Videos

2026-03-29
NewsX
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a clear example of AI misuse causing harm to an individual's rights and dignity. The harm is realized, as Fernandes has experienced abuse and defamation due to the AI system's outputs. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community. The article also discusses societal and legal responses, but the primary focus is on the incident itself.

Why is Germany talking about deepfakes and sexual violence?

2026-03-26
The Local
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake generation) used maliciously to create and distribute sexualized images of a real person without consent, causing direct harm to the victim (psychological trauma, violation of rights). This meets the criteria for an AI Incident as the AI system's use has directly led to harm. The article also discusses societal and governance responses, but the primary focus is on the incident and its consequences, not just the responses. Hence, it is classified as an AI Incident rather than Complementary Information or AI Hazard.

Germany's Government Faces Pressure to Strengthen Digital Violence Laws | Entertainment

2026-03-26
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake pornographic content, which is a direct misuse of AI technology causing harm to an individual's rights and dignity. The harm includes defamation, threats, and digital violence, which fall under violations of human rights and harm to communities. The ongoing legal proceedings and legislative efforts further confirm the materialization of harm rather than a potential risk. Hence, this event meets the criteria for an AI Incident.

German Deepfake Porn Case Spurs Calls for Legal Reform in Germany

2026-03-26
Global Banking & Finance Review
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography being used maliciously to impersonate and harm a person, which is a direct harm caused by the use of an AI system. The harm includes violation of rights and digital violence, fitting the definition of an AI Incident. The involvement of AI in generating the pornographic content is clear, and the harm is realized, not just potential. The legal and societal responses are complementary information but do not change the classification of the core event as an AI Incident.

Thousands join latest protest in Germany against sexualized deepfakes

2026-03-29
dpa International
Why's our monitor labelling this an incident or hazard?
The article centers on protests and political calls for tougher laws against sexualized deepfakes, which are AI-generated manipulated content. While the deepfakes allegedly caused harm to the individual, the article focuses on the societal and political response rather than detailing a confirmed AI Incident with direct legal or official recognition of harm caused by the AI system. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related harms rather than reporting a new AI Incident or AI Hazard.

"It's the digital Pelicot case": the story of the famous German actress accusing her ex-husband of pornographic deepfakes

2026-04-14
Franceinfo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and dissemination of AI-generated deepfake pornographic videos and voice manipulations targeting Collien Fernandes, which constitutes a direct violation of her rights and has caused her significant psychological trauma and harassment. The AI system's role in generating these fake videos and content is central to the harm described. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person (psychological harm, harassment, violation of rights).

Seen from Europe: "You raped me virtually" - a German actress denounces the sexual deepfakes orchestrated by her ex-husband

2026-04-15
RTBF
Why's our monitor labelling this an incident or hazard?
The article describes the creation and circulation of AI-generated deepfake sexual videos targeting an actress, which constitutes a violation of her rights and causes harm. The AI system's use in generating these videos is central to the harm experienced, fulfilling the criteria for an AI Incident due to violations of rights and harm to the individual. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in the incident.

"The digital Pelicot case": a German actress accuses her ex-husband of distributing fake pornographic videos

2026-04-14
RTL.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and dissemination of deepfake pornographic videos, which are AI-generated falsified content. The harm caused includes psychological trauma and reputational damage to the actress, as well as threats received, which are direct harms resulting from the AI system's misuse. The involvement of AI in generating deepfakes and the resulting harm to the victim meet the criteria for an AI Incident under the framework, as it involves violations of personal rights and harm to the individual and community.

"The digital Pelicot case": actress Collien Fernandes speaks out as she accuses her ex-husband of distributing fake pornographic content

2026-04-13
Nice-Matin
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-generated deepfake videos and manipulated images, which are AI systems creating realistic fake content. The harm is realized and significant, including psychological harm, violation of privacy and personal rights, and reputational damage. The AI system's use in fabricating and distributing these fake contents is central to the harm experienced by the victim. Hence, this is an AI Incident as per the definitions provided.

Germany: "It's the digital Pelicot case!"

2026-04-13
Le Matin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create realistic fake pornographic videos and images (deepfakes) that have been distributed to harm the actress. The AI system's outputs have directly led to significant psychological and reputational harm to the victim, fulfilling the criteria for an AI Incident. The involvement of AI in generating manipulated content that causes violations of rights and personal harm is clear and central to the event. Hence, this is not merely a potential hazard or complementary information but a realized incident of AI harm.

The "digital Pelicot" case: in Germany, sexual deepfakes challenge the law

2026-04-16
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems used to create deepfake images and voice manipulations, which directly led to harm including psychological trauma, violation of personal and sexual rights, and reputational damage. The AI system's use was malicious and deceptive, causing real harm to the victim and others deceived by the content. The article discusses the legal implications and the need for stronger regulation, confirming the realized harm and the AI system's pivotal role. Hence, it meets the criteria for an AI Incident as the AI system's use directly caused violations of rights and harm to the individual and community.

An actress says she discovered the person responsible for harassing her online: her husband

2026-03-27
The New York Times
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-generated deepfake images and AI-generated audio to impersonate Fernandes, which caused direct harm to her by spreading false and abusive content online. The AI system's outputs were used maliciously by her husband to harass and deceive others, leading to violations of her rights and psychological harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to an individual. The event is not merely a potential hazard or complementary information but a realized harm caused by AI misuse.

An actress's complaint in Spain against her ex-husband for sending sexual 'deepfakes' shakes Germany

2026-03-24
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI systems to create deepfake videos and AI-generated voice to impersonate the actress, which were then distributed to multiple individuals, causing psychological, emotional, and reputational harm. This constitutes a violation of human rights and digital sexual violence, fulfilling the criteria for an AI Incident. The involvement of AI in generating fake sexual content and voice impersonation is central to the harm caused. The article also discusses legal responses and societal reactions, but the primary focus is on the realized harm caused by AI misuse.

Scandal in Germany: famous actress accuses her ex-husband of distributing thousands of fake sexual images made with AI

2026-03-27
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (deepfake technology and AI-generated voice) to create and disseminate manipulated sexual content without consent, directly causing harm to the actress's reputation, emotional health, and professional standing. The AI system's use here is malicious and central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The ongoing distribution of this content confirms realized harm rather than potential harm, ruling out AI Hazard or Complementary Information classifications.

Shock at an actress's complaint in Spain against her ex-husband for sending sexual 'deepfakes'

2026-03-27
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated synthetic media. The use and distribution of these deepfakes have caused direct harm to the actress, including harassment and violation of her rights, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The ongoing investigation and legislative responses are complementary information but do not negate the primary classification as an AI Incident due to realized harm.

Collien Fernandes, the actress who reported her ex-husband for distributing fake sexual images and could change German law

2026-03-24
eldiario.es
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake sexual images and videos, which were then maliciously disseminated, causing harm to the individual involved. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to the individual. The article describes realized harm (defamation, violation of privacy, and digital sexual violence) caused by AI-generated content. The legislative response and public protests further confirm the seriousness of the harm. Therefore, this is classified as an AI Incident.

Famous German actress accuses her husband of selling thousands of sexual images of her made with AI

2026-03-24
BioBioChile
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images and voice content impersonating the actress without consent. The distribution of these AI-generated sexual images and videos caused direct harm to the actress's personal and professional life, constituting a violation of rights and digital violence. The AI system's role is pivotal in creating the harmful content, and the harm has already materialized. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

The Collien Fernandes case: deepfakes, digital violence and the complaint against Christian Ulmen

2026-03-27
Excélsior
Why's our monitor labelling this an incident or hazard?
The article describes an AI system's use (deepfake generation) to create and disseminate false sexual content and voice impersonations, which has directly harmed the victim by violating her rights and causing emotional distress. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual). The ongoing legal and political responses are complementary information but do not change the classification of the core event as an AI Incident.

AI-generated pornography case sparks protests to change the law

2026-03-27
La Jornada
Why's our monitor labelling this an incident or hazard?
The article describes an AI system generating pornographic deepfake content that impersonates a real individual without consent, which is a direct violation of personal rights and constitutes digital violence. The harm is realized as the victim has suffered identity impersonation and sexual harassment through AI-generated content. The public protests and governmental pressure for legal changes further confirm the incident's significance. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Actress Collien Fernandes accuses her ex over fake intimate photos

2026-03-27
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images and voice content used maliciously to harm an individual, constituting a violation of rights and harm to the person targeted. The harm is realized and ongoing, with direct involvement of AI systems in the creation and dissemination of false intimate content. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Actress FILES COMPLAINT against her actor ex-husband for distributing AI-made SEXUAL IMAGES of her | El Popular

2026-03-26
Diario El Popular
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used maliciously to create and spread sexual images and videos without consent, causing harm to the individual involved. This constitutes a violation of rights and harm to the person, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's use is central to the incident.

Scandal in Germany: TV star reports fake pornography

2026-03-24
notiulti.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake pornographic images and altered voice content without consent, leading to significant personal harm and violation of rights. The malicious use of AI-generated content to harass and defame an individual fits the definition of an AI Incident, as the AI system's use directly led to harm to a person. The subsequent societal and legal responses are complementary information but do not change the classification of the core event as an AI Incident.

Collien Fernandes drives legal debate over 'deepfakes' in Germany

2026-03-27
El Comercio
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create manipulated images and videos ('deepfakes') that have been distributed, causing harm to the victim's reputation and privacy, which constitutes a violation of rights. The article explicitly links the harm to the AI-generated content and the ongoing legal and societal responses. Therefore, this is an AI Incident because the AI system's misuse has directly led to harm to a person and violations of rights.

German actress Collien Fernandes files a complaint in Spain against her ex-husband for distributing pornographic deepfakes of her

2026-03-28
Artículo 14
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems to generate deepfake pornographic images and AI-generated voice to impersonate the actress, which have been distributed and used to harass and harm her over years. This constitutes a violation of personal rights and digital sexual violence, fulfilling the criteria for harm to individuals and communities. The AI system's use is central to the harm caused, making this an AI Incident rather than a hazard or complementary information. The harm is realized and ongoing, not merely potential.

A victim of sexual deepfakes for 10 years, she discovers her husband is behind them: "My body was stolen from me"

2026-03-28
Ouest France
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of deepfake sexual content, which is generated using AI techniques to manipulate images and videos to falsely depict the victim. This use of AI has directly caused harm to the victim's personal rights, psychological well-being, and reputation. The involvement of AI in producing these manipulated contents is explicit and central to the harm described. Hence, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use in creating non-consensual deepfake pornography.

Germany: Investigation opened against the actor accused by his wife of using her image in porn deepfakes

2026-03-27
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake videos, to create pornographic content without the subject's consent. This has caused direct harm to the victim through harassment and violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to a person, fulfilling the criteria for violations of human rights and harassment.

A case of pornographic "deepfakes" involving actor Christian Ulmen and his ex-wife shakes Germany

2026-03-27
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and dissemination of deepfake videos generated by AI, which have caused harassment and legal complaints. The harm is direct and significant, involving violations of rights and personal harm to the victim. The involvement of AI in generating the harmful content is clear and central to the incident. The ongoing legal investigation and societal reactions further confirm the seriousness of the harm. Hence, this event meets the criteria for an AI Incident.

After the distribution of deepfakes of his wife, actor Christian Ulmen targeted by an investigation

2026-03-27
20minutes
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated deepfake pornographic videos targeting an individual, which is a clear violation of personal rights and constitutes harassment. The AI system's role in generating these videos is central to the harm experienced. The harm is realized, as evidenced by the legal complaint, investigation, and public outcry. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community.

"My body was stolen over the years": in Germany, a sexual-deepfake case shakes society and the political class

2026-03-28
Libération
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake sexual content, which has been disseminated over years causing harm to the individual (psychological and reputational harm) and raising societal and political alarm. The use of AI to create and spread non-consensual sexual deepfakes constitutes a violation of personal rights and can be classified as harm to individuals and communities. Since the harm is realized and ongoing, this qualifies as an AI Incident rather than a hazard or complementary information.

"She is our Gisèle Pelicot": the sexual scandal shaking Germany, explained

2026-03-26
L'Obs
Why's our monitor labelling this an incident or hazard?
The article mentions fake videos with the actress's face and voice, which strongly suggests the use of AI-based deepfake technology. The creation and dissemination of such content without consent constitutes a violation of personal rights and can be considered harm to the individual. Since the AI system's use directly led to this harm, this qualifies as an AI Incident under violations of human rights or breach of obligations protecting fundamental rights.

Deepfakes: Germany in shock over virtual rapes

2026-03-27
Génération-NT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to generate deepfake sexual videos and to clone the victim's voice in order to harass and manipulate others, which directly caused psychological and reputational harm to the victim. This is a clear violation of human rights and personal dignity, fitting the definition of harm under AI Incident (c). The AI system's development and use were central to the perpetration of this harm, which is realized and ongoing, not merely potential, so the event qualifies as an AI Incident rather than an AI Hazard or complementary information.

New accusations from Collien Fernandes: What Christian Ulmen confessed to me

2026-03-26
Bild
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake videos and images, to create fake profiles and distribute manipulated pornographic content. This use has directly harmed the individual by damaging her reputation and causing emotional harm. The involvement of AI in generating deepfakes is central to the harm described. Hence, the event meets the criteria for an AI Incident due to realized harm caused by AI misuse.

Collien Fernandes levels new accusations against Christian Ulmen

2026-03-26
oe24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology to create fake pornographic videos and images impersonating Collien Fernandes, which were distributed to others without consent. This is a direct use of an AI system (deepfake generation) leading to harm (violation of personal rights, reputational damage, and psychological harm). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to a person.