Collien Ulmen-Fernandes Targeted by Deepfake Pornography

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Collien Ulmen-Fernandes, a German actress, discovered deepfake pornography of herself online during a ZDF documentary investigation. AI technology was used to superimpose her face onto explicit images, leading to privacy violations and financial exploitation. This incident highlights the misuse of AI in creating harmful deepfake content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes actual incidents where AI was used to create and disseminate deepfake pornographic images of a real person without consent, infringing on her privacy and dignity. This constitutes a direct AI Incident with realized harm to her rights and reputation.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Accountability; Transparency & explainability; Robustness & digital security

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Women

Harm types
Human or fundamental rights; Economic/Property; Psychological; Reputational

Severity
AI incident

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard

During TV research: Collien Ulmen-Fernandes finds fake porn of herself on the internet!

2024-12-11
TAG24
Why's our monitor labelling this an incident or hazard?
The article describes actual incidents where AI was used to create and disseminate deepfake pornographic images of a real person without consent, infringing on her privacy and dignity. This constitutes a direct AI Incident with realized harm to her rights and reputation.

"Es ist schon heftig": Collien Ulmen-Fernandes geschockt von Pornofotos

2024-12-11
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of an AI system (deepfake face-swap) to create and disseminate pornographic images and videos without consent, causing clear harm (violation of human rights and personal dignity). This meets the criteria for an AI Incident.

Collien Ulmen-Fernandes fights deepfake porn online

2024-12-11
GMX News
Why's our monitor labelling this an incident or hazard?
Deepfake creation and distribution rely on AI systems and have directly resulted in non-consensual pornographic content, doxxing, and identity theft, causing realized harm to the victim. This constitutes an AI Incident under the definitions of direct harm to individuals' rights and reputation.

Collien Ulmen-Fernandes: "Porno-Bild von mir entdeckt"

2024-12-11
Express.de
Why's our monitor labelling this an incident or hazard?
The involvement of AI is explicit: deepfake techniques were used to generate realistic fake pornographic images. The misuse of this AI system has directly led to reputational damage, violations of privacy and personal rights, and attempted financial extortion, constituting actual harm rather than a hypothetical risk.

Collien Ulmen-Fernandes: Faked intimate images of the actress surface online

2024-12-11
News.de
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content: an AI system producing manipulated images and videos. These deepfakes have directly caused harm: reputational damage to the actress, financial harm to fans through ticket scams, and violations of personal rights. This qualifies as an AI Incident because the AI system's use has directly led to multiple harms, including rights violations and harm to communities through fraud.

Pornographic images of Collien Ulmen-Fernandes exist

2024-12-11
Promiflash.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content, a clear violation of personal rights that can cause significant reputational and emotional harm. The AI system's role is pivotal: it enabled the realistic fabrication of images and videos impersonating the actress. This constitutes a violation of rights under applicable law, fitting the definition of an AI Incident. The harm is realized, not merely potential: the actress has discovered and reported the material, and legal steps have been initiated.

Collien Ulmen-Fernandes: German TV presenter finds herself in fake porn

2024-12-11
Blick.ch
Why's our monitor labelling this an incident or hazard?
AI was directly used to produce and distribute pornographic deepfakes and conduct fraudulent activities under the victim’s identity, causing realized harm to her personal rights and reputation. This constitutes an AI Incident.

Deepfake porn - The hunt for the perpetrators

2024-12-11
Zweites Deutsches Fernsehen
Why's our monitor labelling this an incident or hazard?
The article describes ongoing incidents in which AI systems were used to generate non-consensual, pornographic deepfakes of real individuals, directly causing harm (sexualized humiliation and rights violations). This meets the definition of an AI Incident, as the AI system’s use has concretely resulted in harm.

Wednesday, 11 December 2024, 10:15 p.m. / Die Spur / Deepfake porn - The hunt for the perpetrators / 1:00 a.m. / Die Spur / Deepfake porn - The business of abuse

2024-12-09
PRESSEPORTAL
Why's our monitor labelling this an incident or hazard?
AI systems are explicitly used to generate deepfake pornography that sexualizes and humiliates women without their consent. Actual harm—violation of personal integrity and privacy, and non-consensual sexual content—has occurred and is under investigation. This meets the definition of an AI Incident under violations of human rights and personal harm.

Wednesday, 11 December 2024, 1:00 a.m. / Die Spur / Deepfake porn - The business of abuse / A film by Marie Bröckling, Collien Ulmen-Fernandes and Birgit Tanner

2024-12-10
PRESSEPORTAL
Why's our monitor labelling this an incident or hazard?
The deepfake production uses AI to generate and distribute pornographic images without consent, directly violating the victims’ rights and causing tangible harm. This is a clear case of AI-enabled non-consensual pornography and digital sexual abuse, meeting the definition of an AI Incident.

"Deepfake-Pornos - Die Jagd nach den Tätern" bei ZDF nochmal sehen: Wiederholung der Dokureihe online und im TV

2024-12-11
News.de
Why's our monitor labelling this an incident or hazard?
The use of AI to generate non-consensual deepfake pornography directly leads to violations of human rights, specifically privacy and dignity, and causes harm to individuals. Since the AI system's use has directly resulted in harm through the creation and dissemination of such content, this qualifies as an AI Incident under the framework.

ZDFinfo schedule change / Thursday, 19 December 2024

2024-12-10
PRESSEPORTAL
Why's our monitor labelling this an incident or hazard?
The article mentions deepfake pornography, which involves AI systems generating manipulated videos, a known AI Incident category due to violations of rights and harm to individuals. However, the article is about a TV program airing on this topic, not a new incident or hazard itself. Therefore, it fits the definition of Complementary Information as it provides context and awareness about AI harms rather than reporting a new event causing or potentially causing harm.

"Die Spur: Deepfake Pornos" mit Collien Ulmen-Fernandes im ZDF

2024-12-10
DIGITAL FERNSEHEN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake pornographic content without consent, causing harm to the health and dignity of individuals (harm to persons) and violating their rights. The harm is realized and ongoing, as women are digitally abused and struggle to seek justice. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content and violations of rights.

"Deepfake-Pornos - Die Jagd nach den Tätern" bei ZDF im Stream und TV: Folge 1 der Dokureihe

2024-12-11
News.de
Why's our monitor labelling this an incident or hazard?
The article discusses the phenomenon of AI-generated deepfake pornography, which is a known AI-related harm involving violations of rights and harm to individuals. However, the article is primarily a TV program description and does not report a new specific AI Incident or AI Hazard event. It serves as complementary information by raising awareness and providing context about ongoing harms and investigative efforts related to AI misuse.

Wednesday, 11 December 2024, 10:15 p.m. / Die Spur / Deepfake porn - The hunt for the perpetrators / 1:00 a.m. / Die Spur / Deepfake porn - The business of abuse

2024-12-09
firmenpresse.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake pornographic content, which directly harms the victims by violating their rights and dignity. The harm is realized and ongoing: the images are distributed online, causing sexual and reputational harm, and the AI system's role in generating the manipulated content is pivotal. This fits the definition of an AI Incident because the AI's use has directly led to violations of human rights and harm to individuals and communities. The article also discusses societal and legal responses, but its primary focus is the harm caused by the AI-generated deepfakes.

Wednesday, 11 December 2024, 1:00 a.m. / Die Spur / Deepfake porn - The business of abuse

2024-12-10
firmenpresse.de
Why's our monitor labelling this an incident or hazard?
The use of AI to create non-consensual deepfake pornography directly leads to harm by violating individuals' rights and causing psychological and reputational damage. The AI system's role in generating these manipulated images is pivotal to the harm. Therefore, this event qualifies as an AI Incident due to realized harm involving AI-generated content used for abuse and violation of rights.