AI-Generated Deepfake Images Used to Harass Slovenian Activist

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Slovenia, artificial intelligence was used to create and distribute fake nude images and videos of Nika Kovač, director of Inštitut 8. marec. These deepfakes, shared online without her consent, were used to harass and discredit her, highlighting the growing harm of AI-enabled image abuse against women.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate fake intimate content without consent, directly harming individuals' rights and dignity and constituting a violation of human rights and harm to communities. The creation and dissemination of such AI-generated deepfake pornography is a clear AI Incident, as it has already caused harm. The article also includes calls for legal and systemic responses, but its primary focus is the realized harm caused by AI misuse.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women
Civil society

Harm types
Psychological
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

With a manifesto, Inštitut 8. marec draws attention to new forms of violence against women

2026-03-07
MMC RTV Slovenija
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI enabling the creation of non-consensual deepfake intimate content, which constitutes a form of harm to individuals (women) through digital violence. However, the article does not describe a specific AI incident where harm has occurred or a particular event of AI malfunction or misuse causing direct harm. Instead, it presents a manifesto with proposed legislative and systemic measures to address these harms. This fits the definition of Complementary Information, as it provides context, societal response, and governance proposals related to AI harms but does not report a new AI Incident or AI Hazard itself.

Inštitut 8. marec appalled: someone has published nude footage of Nika Kovač

2026-03-07
SiOL
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake intimate content without consent, directly harming individuals' rights and dignity and constituting a violation of human rights and harm to communities. The creation and dissemination of such AI-generated deepfake pornography is a clear AI Incident, as it has already caused harm. The article also includes calls for legal and systemic responses, but its primary focus is the realized harm caused by AI misuse.

Nude photographs and footage of Nika Kovač have appeared online (video) - Svet24.si

2026-03-07
Svet24.si
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI technology used to create deepfake pornographic content without consent, which has directly harmed the individual's reputation and personal life and caused broader harm to women targeted by such AI-generated content. The misuse of AI in this way constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in enabling the creation and spread of these manipulated images and videos.

Photographs of a nude Nika Kovač published online

2026-03-07
Žurnal24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to create fake pornographic images and videos without consent, which is a direct violation of human rights and causes harm to individuals. The harm is realized, as these images are used for the humiliation, extortion, and discrediting of women, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Fake photographs and footage of a nude Nika Kovač online

2026-03-07
Dnevnik
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to create fake intimate images and videos, which are then spread online to harm and discredit women. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized as the content is actively used for online harassment and abuse, not just a potential risk. Therefore, this event qualifies as an AI Incident.