AI-Generated Deepfake Abuse Leads to Legal Action and Media Consequences in Germany

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Actress Collien Fernandes has accused her ex-husband Christian Ulmen of using AI-generated deepfake pornography and fake profiles to inflict digital violence, identity theft, and emotional harm. Legal proceedings have begun in Spain and Germany, and broadcaster ProSieben has removed Ulmen's show following the allegations. The incident highlights AI's role in violations of personal rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI-generated deepfake content that has been distributed and caused harm to Collien Fernandes. The harm is realized and ongoing, as the fake images and videos have been circulating for years, and the victim has filed a legal complaint. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person, specifically violations of rights and reputational harm. The article does not focus on future risks or responses but on the actual harm caused by the AI-generated content.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

After Allegations Against Her Ex-Husband: Where Collien Fernandes Is Now

2026-03-20
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake content that has been distributed and caused harm to Collien Fernandes. The harm is realized and ongoing, as the fake images and videos have been circulating for years, and the victim has filed a legal complaint. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person, specifically violations of rights and reputational harm. The article does not focus on future risks or responses but on the actual harm caused by the AI-generated content.

Nude Images, Stalking, Deepfakes: How to Protect Yourself from Digital Violence

2026-03-20
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images used maliciously to harm an individual, which is a direct harm to the person and a violation of rights. The AI system's role in generating the fake pornographic images is pivotal to the harm described. Therefore, this qualifies as an AI Incident under the framework's definition of harm to persons and violations of rights caused by AI misuse.

Collien Fernandes Raises Serious Allegations: Criminal Complaint Against Her Ex Christian Ulmen

2026-03-19
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated or AI-manipulated content (deepfake pornography) to impersonate a person and cause harm through non-consensual sexual exploitation and identity theft. The harm is realized and significant, including violations of personal rights and psychological harm. The AI system's role is pivotal in generating the fake images and videos. Therefore, this event meets the criteria for an AI Incident.

Opinion: News of the Day: Collien Fernandes Fights Digital Violence, Ideas for More Punctual Trains, Ukrainian Maritime Trade

2026-03-19
Spiegel Online
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media, often used maliciously to create fake pornographic videos. The article reports that such AI-generated content has been used to harm Collien Fernandes, constituting digital sexual violence and a violation of her rights. The harm is realized and ongoing, and the AI system's misuse is central to the incident. Hence, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

After Collien Fernandes: Mareile Höppner Is Also a Victim of AI Fakes

2026-03-20
Focus
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornography targeting specific individuals, causing harm to their personal rights and dignity. The article mentions that the victims are publicly identified and that legal actions have been initiated, indicating realized harm. The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident due to violation of human rights and harm to persons. Hence, the classification is AI Incident.

Like Collien Fernandes: Mareile Höppner Is Also Affected by AI Fakes - "Dirty Everyday Reality"

2026-03-20
Focus
Why's our monitor labelling this an incident or hazard?
The creation and distribution of AI-generated deepfake pornography is a clear example of harm caused by the use of AI systems, specifically generative AI used to produce fake explicit content without consent. This directly violates the rights of the individuals depicted and causes significant personal and social harm. The involvement of AI in producing these fake images/videos and the resulting legal and social consequences confirm this as an AI Incident. The article focuses on the harm already caused and the responses to it, rather than just potential future harm or general AI news.

Collien Fernandes Files Complaint Against Christian Ulmen: Allegations of Psychological Abuse

2026-03-20
20 Minuten
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated pornographic content was used maliciously to harm Collien Fernandes, which is a direct harm to her psychological well-being and personal rights. The AI system's use in generating and distributing harmful content directly led to realized harm, fulfilling the criteria for an AI Incident. The involvement of AI is clear and central to the harm described, and the harm is materialized, not just potential.

Collien Fernandes Raises Serious Allegations Against Ex-Husband Christian Ulmen

2026-03-20
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake technology, which is an AI system capable of generating synthetic media. The harm caused includes violations of personal rights and digital sexual violence through the distribution of deepfake pornographic content, which is a clear harm to the individual and community. The AI system's use has directly led to these harms. Therefore, this event meets the criteria for an AI Incident.

"My Body Was Stolen from Me for Years": Collien Fernandes Files Complaint Against Christian Ulmen

2026-03-19
Frankfurter Rundschau
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of pornographic deepfake videos, which are typically generated using AI systems capable of synthesizing realistic fake videos. This use of AI has caused direct harm to Collien Fernandes by subjecting her to online abuse and emotional distress. The harm is realized and ongoing, meeting the criteria for an AI Incident. There is explicit mention of the harm caused by the AI-generated content, and the AI system's role is pivotal in producing the fake videos.

Criminal Complaint Against Ex-Husband Christian Ulmen: Collien Fernandes on VOXStimme 2024: "This Sexualized Violence Can Ruin Lives"

2026-03-20
rtl.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated realistic videos or images. The article reports that Collien Fernandes is a victim of deepfake-based digital sexual violence, which constitutes harm to her personal rights and well-being. This harm is directly caused by the use of an AI system (deepfake generation). Hence, the event meets the criteria for an AI Incident due to realized harm caused by AI misuse.

After the Allegations Against Christian Ulmen - How Do I Find Out Whether Deepfakes of Me Are Circulating Online?

2026-03-20
rtl.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes causing harm (fake sexual content), which is a form of violation of rights and harm to individuals. However, the article focuses on explaining what deepfakes are, the challenges in detecting them, and advice on how to find such content online. It does not report a new AI Incident or AI Hazard but provides context and guidance related to an existing issue. Hence, it fits the definition of Complementary Information as it supports understanding and response to AI harms rather than describing a new primary harm event.

Silence Broken: Fernandes Causes a Shockwave

2026-03-20
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake technology to create fake pornographic images and videos of the victim, which were then used to sexually exploit her and contact her personal and professional circles. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident where AI use has directly led to harm. Although the accused is presumed innocent, the harm caused by the AI system's misuse is clear and ongoing.

Complaint Filed in Spain! What Does Christian Ulmen Face Now?

2026-03-20
rtl.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated sex videos as part of the allegations, indicating AI system involvement in creating harmful content. The harms described (identity theft, privacy violations, sexualized violence) align with violations of rights and harm to individuals. However, the event is at an early investigative stage with no confirmed incident or harm established. The main focus is on the legal process and potential outcomes, making it an update or complementary information rather than a confirmed AI Incident or AI Hazard. Hence, the classification as Complementary Information is appropriate.

Collien Fernandes Files Complaint Against Ex-Husband Christian Ulmen

2026-03-19
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos as part of the online attacks against Collien Fernandes. These deepfakes are pornographic and have been widely viewed, causing harm to her personal dignity and privacy. The creation and dissemination of such AI-generated content directly led to harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating the deepfakes and the resulting violation of rights and harm to the individual is clear and direct.

The Democratization of Abuse: Why We Are Losing the Fight Against Deepfake Porn

2026-03-19
der Standard
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated using AI systems that synthesize realistic but fake images or videos of individuals. The described case involves the malicious use of such AI tools to create and spread non-consensual explicit content, causing significant harm to the victim's personal rights and dignity. This is a direct harm caused by the use of an AI system, fitting the definition of an AI Incident due to violation of human rights and harm to the individual.

Deepfake Porn: Collien Fernandes Raises Serious Allegations Against Ex-Husband Christian Ulmen

2026-03-19
der Standard
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated using AI systems that synthesize realistic but fake images or videos. The creation and dissemination of such content without consent is a violation of human rights and constitutes harm to the individual involved. Since the article details the occurrence of this harm and the AI system's role in producing the deepfake content, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to individuals.

Digital Violence: Rules Are Lacking - and Politicians Are Under Pressure

2026-03-20
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear from the mention of AI-generated deepfake videos. The harm described (emotional and reputational harm from non-consensual deepfake sexual content) fits under harm to individuals and violation of rights. However, the article does not describe a new specific AI Incident but rather discusses the broader issue and ongoing debate, with a particular case mentioned as context. Therefore, this is best classified as Complementary Information, as it provides context and societal response to existing AI-related harms rather than reporting a new incident or hazard.

SZ Podcast: Digital Violence - It Is About Degradation

2026-03-20
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfakes, which are AI-generated manipulated videos and images, causing harm to Collien Fernandes by spreading false and damaging content. This is a direct harm to the individual's dignity and privacy, fitting the definition of harm to persons and violation of rights. The AI system's use (deepfake generation) has directly led to this harm. The legal complaint and ongoing investigation further confirm the seriousness of the incident. Hence, this is classified as an AI Incident.

Collien Fernandes Files Complaint Against Her Ex-Husband Christian Ulmen

2026-03-20
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-enabled technologies (likely deepfake generation and automated fake account creation) to produce and distribute fake pornographic videos and images, as well as fake social media accounts impersonating the victim. This has directly led to harm including violation of privacy, reputational damage, and psychological trauma to the victim. The involvement of AI is reasonably inferred from the nature of the fake content and accounts. The harm is realized and ongoing, meeting the criteria for an AI Incident under violations of human rights and harm to individuals. The article also discusses legal actions and public reactions, but the primary focus is on the harm caused by the AI-enabled impersonation and fake content.

Nude Photos and Deepfakes: Digital Violence - "It Is Almost Always the Ex-Partners"

2026-03-20
RP Online
Why's our monitor labelling this an incident or hazard?
The article references digital violence involving deepfakes and fake profiles, which reasonably implies the use of AI systems for generating manipulated images and identities. This involvement can lead to harm such as violations of privacy and psychological harm, fitting the definition of an AI Incident. Although the article mentions allegations and the presumption of innocence, the described harms are consistent with realized harms caused by AI-generated content. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by AI systems in digital violence contexts.

Affected by AI Fakes Herself: Mareile Höppner Stands by Collien Fernandes

2026-03-20
Bunte
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake content (KI-Fakes) causing harm to individuals, which is a direct violation of personal rights and can be considered harm to persons. The involvement of AI in creating these fake videos is clear, and the harm is realized as the victims are publicly acknowledging the impact and pursuing legal remedies. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

The Fernandes Case: How Spain Prosecutes Digital Sexual Violence

2026-03-21
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake pornographic images and AI-generated voice content used to harass and harm a person, which is a direct harm to the individual's rights and dignity. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and digital sexual violence). The article focuses on the harm caused and legal responses, not just general AI news or potential future harm. Hence, it is classified as an AI Incident.

Allegations of Digital Violence: "She Is Far from an Isolated Case"

2026-03-20
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create fake pornographic images and videos and to manipulate voice recordings, which are explicitly described as AI-based deepfake technologies. The harm is realized, including psychological distress and violation of personal rights, fulfilling the criteria for harm to persons and communities. The AI system's use is directly linked to the harm, making this an AI Incident rather than a hazard or complementary information. The article also discusses the broader societal impact and legal context, but the primary focus is on the harm caused by AI-generated digital violence.

Digital Sexualized Violence: How Big Is the Problem?

2026-03-20
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images and videos (deepfakes) that have been used to impersonate and harm individuals, which is a direct violation of personal rights and constitutes digital sexualized violence. The article reports on actual harm caused by these AI-generated materials, including fake social media profiles and pornographic content, which meets the criteria for an AI Incident. The involvement of AI in creating these harmful materials is explicit, and the harm to individuals' rights and dignity is direct and realized, not merely potential. Hence, the classification as AI Incident is appropriate.

Pornographic Deepfakes: Collien Fernandes Raises Serious Allegations Against Her Ex-Husband

2026-03-20
taz.de
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of pornographic deepfake videos and fake profiles using AI-generated content directly caused harm to Collien Fernandes, including digital violence and violation of her rights. The AI system's misuse is central to the harm, fulfilling the criteria for an AI Incident. The article details ongoing legal and political responses but the primary focus is on the harm already caused by the AI system's outputs, not just potential or future harm or complementary information.

It Is About Fake Porn: Collien Fernandes Files Complaint Against Ex-Husband Christian Ulmen

2026-03-19
MOPO.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic synthetic media. The creation and distribution of non-consensual deepfake pornographic videos and fake accounts directly harm the individual impersonated, violating their rights and causing emotional distress. The article describes realized harm through the use of AI-generated content, meeting the criteria for an AI Incident. The involvement of AI is explicit through the mention of deepfakes, and the harm is direct and ongoing, with legal proceedings initiated. Hence, the classification as AI Incident is appropriate.

After the Allegations: Series Featuring Christian Ulmen No Longer Available Online

2026-03-20
MOPO.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of pornographic AI deepfakes, which are AI-generated manipulated videos, causing harm to an individual by misusing their likeness without consent. This is a clear violation of rights and personal harm, fitting the definition of an AI Incident. The removal of the series is a response to the harm caused. Although legal proceedings are ongoing and the accused denies the allegations, the AI system's role in causing harm is central to the event described.

AI Porn, Rape! Collien Reveals Shocking Details | Heute.at

2026-03-20
Heute.at
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-generated voice to impersonate the victim, which is an AI system's use contributing to the harm. The harms described include identity theft, psychological and emotional harm, and violations of personal rights, which fall under violations of human rights or breach of obligations intended to protect fundamental rights. Since the AI system's use directly led to these harms, this qualifies as an AI Incident.

Serious Allegations! Collien Fernandes Files Complaint Against Ex Ulmen | Heute.at

2026-03-19
Heute.at
Why's our monitor labelling this an incident or hazard?
The creation and use of fake profiles and realistic pornographic content imply the involvement of AI systems capable of generating such content. The alleged actions constitute violations of personal rights and could be considered harm to the individual. Since the harm is alleged and under investigation, and the article does not confirm realized harm or legal findings, this event is best classified as an AI Incident due to the direct link between AI-generated fake profiles and the violation of rights. The AI system's use in creating fake profiles and content directly led to the alleged harm.

Digital Violence: Collien Fernandes Accuses Ex-Husband Ulmen - What to Do About Deepfakes

2026-03-20
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake technology, which can generate synthetic media. The alleged use of deepfakes to harm a person fits the definition of an AI Incident if harm has occurred. However, since the allegations are unverified and the accused denies the claims, and no confirmed harm or legal ruling is reported, the event currently represents a plausible risk or potential harm rather than a confirmed incident. Thus, it is best classified as an AI Hazard, reflecting the credible potential for harm from AI misuse in this context.

Digital Violence: Collien Fernandes Accuses Ex-Husband Ulmen - What to Do About Deepfakes

2026-03-20
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated media, so the involvement of an AI system is explicit. The alleged creation and distribution of Deepfake images of a person without consent is a violation of personal rights and can be considered digital violence, which is a form of harm to individuals. The article states that the victim has filed a police report, indicating that harm has occurred and legal processes are underway. Hence, the event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

"Virtually Raped": Collien Fernandes Files Complaint Against Ex-Husband Christian Ulmen

2026-03-20
GameStar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create fake pornographic videos and fake profiles, which have caused direct harm to the individual by violating her rights and causing psychological harm. The AI system's use (deepfake generation) directly led to these harms, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The legal complaint and the described harms confirm that the incident has materialized, not just a potential risk.

Ex-Husband Allegedly Published Deepfake Porn: The Fernandes Case in 5 Points

2026-03-20
watson.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear example of an AI system's use leading to harm. The deepfakes caused direct harm to the victim's privacy, reputation, and mental well-being, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of AI in creating realistic fake pornographic content without consent is central to the harm described. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

How Collien Fernandes Once Championed Victims of Deepfakes

2026-03-19
Bunte
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of fake profiles and fake nude photos/videos that appear to be of Collien Fernandes but are not genuine. The creation of such realistic fake media is typically enabled by AI deepfake technology, which is an AI system. The harm includes violation of personal rights and privacy, which falls under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized and ongoing, not merely potential. Therefore, this event meets the criteria for an AI Incident.

Collien Fernandes: "It Was Like Receiving News of a Death"

2026-03-20
oe24
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated media, and their use here has caused direct harm to Collien Fernandes through the spread of non-consensual explicit content and related abuses. The article details ongoing harm and legal actions related to this misuse. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to a person (psychological and reputational harm), fulfilling the harm criteria (a) and (c) (violation of rights).

Ulmen and Fernandes: "Virtually Raped" - How Dangerous Is Fake Porn?

2026-03-20
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake pornographic images and voice simulations, which are used to harass and sexually violate the victim digitally. This use of AI has directly caused psychological harm and violation of rights to the victim, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. The involvement of AI in the creation and dissemination of these harmful materials is central to the incident described.

Collien Fernandes Goes Public About Sexualized Violence by Ex-Husband Christian Ulmen

2026-03-19
Kurier
Why's our monitor labelling this an incident or hazard?
An AI system (specifically, AI-generated voice technology) was used by the perpetrator to impersonate Fernandes and commit sexualized violence and harassment. The harm is realized and ongoing, involving violations of personal rights and causing psychological and social harm. The AI system's use was central to the perpetration of these harms, making this an AI Incident under the definitions provided.

The Collien Fernandes Case: Do We Need an AI Ban for Men?

2026-03-20
Kurier
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake technology to create pornographic content without consent, which is a direct violation of the victim's rights and causes psychological harm. The AI system's misuse is central to the harm described. The article explicitly mentions deepfakes and the creation of fake online accounts and conversations, indicating AI system involvement in causing harm. Hence, it meets the criteria for an AI Incident.

Collien Fernandes Files Complaint Against Ex Ulmen: "Virtual Rape"

2026-03-19
B.Z. Berlin
Why's our monitor labelling this an incident or hazard?
The incident describes the use of fake online profiles and manipulated content to impersonate the victim, which is consistent with AI-enabled identity theft and deepfake or synthetic media generation capabilities. The harm includes violation of personal rights and psychological harm to the victim, meeting the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The involvement of AI or algorithmic systems is reasonably inferred from the nature of the fake profiles and content manipulation described. Therefore, this event qualifies as an AI Incident.

"Die Spur" Documentary: The Perpetrator Was Likely Very Close to Collien Fernandes

2026-03-20
Promiflash.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake pornography, which involves AI-generated synthetic media. The creation and use of these deepfakes have directly harmed Collien Fernandes by violating her rights and causing emotional distress, as well as causing financial harm to a third party through fraudulent requests. The AI system's use in generating these fake videos and profiles is central to the harm described. Hence, this is an AI Incident involving violations of rights and harm to individuals.

AI Porn Sent Out: Collien Fernandes Files Complaint Against Ex Christian Ulmen

2026-03-19
Nau
Why's our monitor labelling this an incident or hazard?
The incident describes the creation and use of fake social media profiles impersonating the actress, sending pornographic videos and engaging in deceptive chats with men. The scale (around 30 men) and the nature of the fake profiles suggest the use of AI systems for generating realistic fake content and managing interactions. The harm is realized as emotional and reputational damage to the victim, constituting a violation of rights. The AI system's misuse directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Interview on the Deepfake Porn Affair: "Fernandes Was Degraded to an Object by Ulmen"

2026-03-20
Berner Zeitung
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic synthetic media. The malicious use of this AI system to create and distribute fake pornographic content without consent directly harms the victim's rights and causes significant personal and social harm. This aligns with the definition of an AI Incident as the AI system's use has directly led to violations of human rights and harm to the individual.

Digital Violence: What the Government Plans Against Deepfakes - Frankenpost

2026-03-20
Frankenpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic images, which directly harm the individual by violating their rights and causing reputational and emotional damage. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and a violation of rights. The article describes realized harm through the circulation of these images, not just potential harm, and discusses legal implications, confirming the incident classification.

"Virtually Raped": Collien Fernandes Files Complaint Against Ex-Husband Christian Ulmen

2026-03-19
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake pornography, which is generated by AI systems that synthesize realistic fake images and videos. The victim has suffered harm through the unauthorized creation and distribution of these AI-generated images, which is a direct violation of her rights and has caused emotional and psychological harm. The AI system's role in creating the fake content is pivotal to the harm experienced. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

"Digital rape": Collien Fernandes files charges against ex-husband Christian Ulmen

2026-03-19
Der Bund
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake pornography, which is generated using AI systems that create synthetic but realistic images or videos. The harm includes psychological and emotional violence, violation of personal rights, and non-consensual use of AI-generated content. The AI system's role is pivotal as it enabled the creation of fake pornographic material that was distributed, leading to direct harm to the victim. Therefore, this qualifies as an AI Incident due to realized harm caused by the malicious use of AI technology.

Fake porn of Collien Fernandes: How great is the danger?

2026-03-20
DNN - Dresdner Neueste Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic images and voice simulations without consent, which has directly led to significant emotional and reputational harm to the victim, Collien Fernandes. This constitutes harm to the individual (a form of harm to health and dignity) and a violation of rights. The AI system's use in creating and distributing this content is central to the harm described. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm. The article also discusses the broader societal implications and legal responses, but the primary focus is on the harm caused by the AI-generated content.

"Virtual rape": How great is the danger online?

2026-03-20
GT - Göttinger Tageblatt
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake sexual content and voice simulations without consent, directly causing harm to the individual involved (Collien Fernandes) and potentially others. This constitutes violations of personal rights and emotional harm, which are recognized harms under the AI Incident definition. The article confirms ongoing legal proceedings and real harm, not just potential risk, so it is an AI Incident rather than a hazard or complementary information.

Christian Ulmen: ProSieben cracks down - "jerks" removed from Joyn

2026-03-20
OK! Magazin
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can cause significant harm by violating privacy and personal rights. The article describes an incident where deepfake technology was allegedly used to impersonate and harm an individual, which constitutes a violation of rights and harm to the person. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Collien Fernandes accuses ex-husband Christian Ulmen of "virtual rape"

2026-03-19
Luxemburger Wort
Why's our monitor labelling this an incident or hazard?
The article describes a case where fake pornography and fake profiles were used to harm the actress, which is consistent with AI-generated deepfake content or AI-enabled manipulation. The harm is realized and ongoing, involving violation of personal rights and dignity. The AI system's use in generating or distributing such content is reasonably inferred given the context of fake profiles and fake pornography online. Hence, this is an AI Incident involving violations of human rights and harm to the individual.

Charges filed against Christian Ulmen: Collien Fernandes speaks of "virtual rape" and more

2026-03-19
OVB Online
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI or AI-enabled technologies to fabricate digital identities and generate fake intimate content, causing significant harm to the victim's personal and digital rights. The harm includes violations of privacy, identity theft, and psychological trauma, which fall under violations of human rights and harm to the individual. The AI system's involvement is reasonably inferred from the creation of fake profiles and videos, which typically require AI-based generative or manipulation tools. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in this context.

"Virtual rape": How great is the danger online?

2026-03-20
Marler Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images and voice mimicking for sexualized content without consent, which has been distributed and caused harm to the victim. This constitutes a violation of personal rights and emotional harm, fitting the definition of an AI Incident. The AI system's use in generating and spreading such content is central to the harm described. Although legal proceedings are ongoing and the accused denies the allegations, the harm from AI-generated content is clearly occurring. Hence, this is not merely a potential hazard or complementary information but an actual incident involving AI harm.

"It is about power, control, and the destruction of the other person"

2026-03-20
FOCUS
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is evident through the mention of deepfake pornography and fake profiles, which rely on AI technologies for content generation and identity manipulation. The harm described includes violations of personal rights and digital violence, which are direct harms caused by the use of AI-generated content. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI systems in the form of digital violence and rights violations.

Fernandes on deepfake porn: "You become an object"

2026-03-20
ZDFheute
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated deepfake pornography, a clear example of AI misuse causing direct harm to individuals. The harms include violation of personal rights, digital sexual abuse, and psychological trauma, fitting the definition of an AI Incident under violations of human rights and harm to individuals. The article reports realized harm rather than potential harm, and the AI system's role is pivotal in enabling the creation and dissemination of the deepfake content. Therefore, this event qualifies as an AI Incident.

Ulmen case: Justice minister wants to take action against digital violence

2026-03-20
ZDFheute
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is created using AI systems that generate manipulated videos, which constitutes digital violence and harm to individuals. The article reports on actual harm experienced by a person due to AI-generated content, fulfilling the criteria for an AI Incident. The political response and proposed legislation are complementary but secondary to the primary harm described. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated deepfake content.