AI-Generated Deepfake Videos Cause Public Misinformation in Montenegro Fugitive Case

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos depicting fugitive Miloš Medenica circulated on social media, causing public confusion and undermining trust in institutions. Forensic analysis by Montenegro's Ministry of Interior confirmed AI manipulation, complicating law enforcement efforts and spreading misinformation about Medenica's status.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to create deepfake videos that misrepresent a real person, leading to misinformation and public confusion. The AI-generated content is actively disseminated and has caused harm by undermining trust in public institutions and complicating law enforcement efforts. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities through disinformation and manipulation, as described in the article.[AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
General public
Government

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AI ili Miloš Medenica? Kada svaka slika može biti laž ("AI or Miloš Medenica? When every image can be a lie")

2026-03-24
Cafe del Montenegro
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos that misrepresent a real person, leading to misinformation and public confusion. The AI-generated content is actively disseminated and has caused harm by undermining trust in public institutions and complicating law enforcement efforts. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities through disinformation and manipulation, as described in the article.
MUP Crne Gore o navodnim snimcima Medenice: Potreban oprez prilikom procjene ("Montenegro's Ministry of Interior on the alleged Medenica recordings: Caution needed in assessment")

2026-03-27
Dnevne novine Dan
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the videos are identified as AI-generated or digitally manipulated content. The use of AI-generated videos to spread misleading information constitutes harm to communities by disseminating false narratives and damaging reputations, which fits the definition of an AI Incident. The event describes realized harm (disinformation and reputational damage) caused by AI-generated content. Although the forensic findings have not been made fully public, the Ministry's analysis and the spread of such content confirm the AI system's role in causing harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
AI ili Miloš Medenica? Kada svaka slika može biti laž ("AI or Miloš Medenica? When every image can be a lie")

2026-03-24
RTCG - Radio Televizija Crne Gore - Nacionalni javni servis
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos, which are AI-generated synthetic media. Although the article does not confirm that the videos have caused direct harm yet, it clearly outlines the plausible risk of harm to communities through misinformation, erosion of trust in institutions, and interference with law enforcement. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm, but no confirmed incident of harm has occurred as per the article. Therefore, the classification is AI Hazard.
AI ili Miloš Medenica? Kada svaka slika može biti laž ("AI or Miloš Medenica? When every image can be a lie")

2026-03-24
Radio Slobodna Evropa
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos that mislead the public about the identity and status of a fugitive criminal. This use of AI has directly led to harm by spreading false information, eroding public trust in institutions, and complicating police operations. These effects constitute harm to communities and violate the public's right to accurate information, fitting the definition of an AI Incident. The article explicitly states that the videos are AI-generated and are causing real-world negative impacts, not just potential harm.
'Potreban oprez prilikom procjene': MUP Crne Gore o navodnim snimcima Medenice ("'Caution needed in assessment': Montenegro's Ministry of Interior on the alleged Medenica recordings")

2026-03-27
Radio Slobodna Evropa
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate manipulated video content (deepfakes). The use of these AI-generated videos has directly led to harm by spreading misleading information that damages reputations and misleads the public. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and reputational harm, which is a significant and clearly articulated harm. The police investigation and forensic analysis confirm the AI involvement and the harm caused by the manipulated content.
MUP o navodnim snimcima Medenice: Potreban oprez prilikom procjene ("Ministry of Interior on the alleged Medenica recordings: Caution needed in assessment")

2026-03-27
Aktuelno
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated videos used to spread false information, which harms the reputation of public officials and misleads the public. The forensic analysis confirms AI involvement in content creation, and the dissemination of such content has already occurred, causing reputational harm and misinformation. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and violation of rights through misinformation and reputational damage.