AI-Generated Deepfake Video Used in Presidential Harassment Campaign Against Journalist in Argentina

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely accusing journalist Julia Mengolini of incest was widely circulated online, leading to a coordinated harassment campaign amplified by Argentine President Javier Milei. The incident resulted in reputational harm, misogynistic abuse, and threats, raising concerns about press freedom and increased risk of violence due to AI-driven disinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI-generated video used to spread false and harmful content about a journalist, which has been widely disseminated and supported by a political figure, leading to harassment and threats. This meets the definition of an AI Incident because the AI system's use has directly led to harm to a person (reputational and psychological harm) and harm to communities (erosion of freedom of expression and increased risk of violence). The involvement of AI in generating the defamatory video is central to the harm described, not merely background information or potential future risk.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Workers; Civil society; General public

Harm types
Reputational; Psychological; Public interest; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Milei's war against the media is evident in an AI-driven campaign

2025-07-02
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated video used to spread false and harmful content about a journalist, which has been widely disseminated and supported by a political figure, leading to harassment and threats. This meets the definition of an AI Incident because the AI system's use has directly led to harm to a person (reputational and psychological harm) and harm to communities (erosion of freedom of expression and increased risk of violence). The involvement of AI in generating the defamatory video is central to the harm described, not merely background information or potential future risk.
In Argentina, AI-driven defamation reveals Milei's war against the press

2025-07-03
Clarin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated defamatory video that falsely accuses a journalist of incest, which has been widely circulated and led to public harassment and incitement by a political figure. The AI system's role in creating harmful false content directly led to reputational harm and increased risk of violence, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The president's amplification of the content further intensifies the harm. Hence, this is not merely a potential risk or complementary information but a realized AI Incident.
For The New York Times, Milei "is eroding press freedom" | "The president's hostility toward journalists increases the risk of violence," says the US outlet

2025-07-03
Página/12
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated fake video used to falsely accuse a journalist, which has circulated widely online. The president's active participation in amplifying this campaign, even if he did not share the video itself, contributed to the harm. The harm includes reputational damage, threats, and a chilling effect on press freedom, constituting violations of human rights and harm to communities. The AI system's role in generating the fake content is pivotal to the incident, meeting the criteria for an AI Incident.
For The New York Times, Milei's attack on journalists "endangers press freedom" and "increases the risk of violence"

2025-07-03
Perfil
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate manipulated videos used in a campaign of harassment and discrediting against journalists. The AI-generated content is a direct factor in the harm caused, including threats, intimidation, and erosion of press freedom. The president's amplification of these AI-generated attacks further exacerbates the harm. The harms include violations of human rights (freedom of expression and press), intimidation, and increased risk of violence, fitting the definition of an AI Incident as the AI system's use has directly led to harm.
"Milei's war against the media" described: Is AI powering the troll and bot attacks?

2025-07-02
PoliticArgentina.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a false video used to attack a journalist, which was widely disseminated and amplified by the president and his followers. The resulting harm includes reputational damage, misogynistic insults, and potential threats of physical aggression, fulfilling the criteria for harm to communities and violations of rights. Therefore, this event qualifies as an AI Incident due to the direct role of AI-generated content in causing harm.
The New York Times said Javier Milei is eroding press freedom

2025-07-04
Diario La Gaceta
Why's our monitor labelling this an incident or hazard?
The article's account of an AI-driven smear campaign against a journalist implies the use of AI systems to generate or amplify harmful content, which directly harms the journalist's rights and potentially the broader community by eroding press freedom and increasing the risk of violence. This constitutes a violation of human rights and harm to communities, fitting the definition of an AI Incident.
Her voice breaking, Mengolini recounted the "torture" she suffered from the libertarian "deepfake" campaign

2025-07-15
La Nueva
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to create deepfake videos that have caused direct harm to a person, including emotional distress, reputational damage, and threats to personal safety. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and violations of rights. The involvement of AI in generating the harmful content and the resulting real-world consequences confirm this classification.
Julia Mengolini broke down while speaking about the "torture" she suffered from the libertarian "deepfake" campaign

2025-07-15
Diario El Día
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos that have been used in a campaign of hate and misinformation against Julia Mengolini. These videos have caused her severe emotional distress and threats, which are direct harms to her health and rights. The AI system's outputs (deepfake videos) have been used maliciously, leading to realized harm. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI-generated content.
Julia Mengolini denounced a campaign of "digital torture" and warned: "They wanted to torture me, and they succeeded"

2025-07-15
El Intransigente
Why's our monitor labelling this an incident or hazard?
The article explicitly states that videos created with AI were used in a campaign of digital torture against Julia Mengolini, including false and pornographic content. This AI-generated content has led to real harm: psychological distress, threats, and the need for personal security measures. The AI system's use in generating harmful deepfake videos directly caused harm to the individual, fulfilling the criteria for an AI Incident under the OECD framework.
Julia Mengolini recounted the "torture" she suffered from the libertarian "deepfake" campaign

2025-07-15
ellitoral.com.ar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create deepfake videos, which are a form of AI-generated content. These videos have been used maliciously to harass and defame the journalist, causing emotional and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person (emotional distress, harassment) and violations of rights (defamation, harassment). The involvement of AI in generating the harmful content and the resulting realized harm clearly classify this as an AI Incident rather than a hazard or complementary information.
Mengolini says she lived through "torture" by "deepfake"

2025-07-15
Telesol Diario
Why's our monitor labelling this an incident or hazard?
The article explicitly states that false videos were created using artificial intelligence (deepfake technology) and spread to harass and harm the journalist Julia Mengolini. The harm is realized and significant, including emotional distress and reputational damage, which fits the definition of an AI Incident under violations of rights and harm to communities. The involvement of AI in generating the harmful content and its role in the campaign of harassment is direct and central to the incident. Therefore, this event qualifies as an AI Incident.
Julia Mengolini and Juan Grabois demanded "trial and severe punishment" for "Milei's terrorism" after the attack on the journalist

2025-07-16
Perfil
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and dissemination of videos generated with artificial intelligence that caused psychological torture and public harm to the journalist. The AI system's use in fabricating false and harmful content directly led to realized harm (psychological, reputational, and social) and violations of rights. The involvement of state actors and the President in promoting this attack further aggravates the harm and legal implications. This fits the definition of an AI Incident as the AI system's use directly led to significant harm and rights violations.
Julia Mengolini filed a complaint against Javier Milei over threats and was given a panic button

2025-07-15
minutouno.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and viral spread of a video made with artificial intelligence that falsely portrays a person in a damaging and intimate scenario. This use of AI-generated deepfake content has directly led to harm, including threats and social harassment, which constitute violations of rights and harm to the individual and community. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused significant harm.
Julia Mengolini filed a complaint against Milei over threats and was given a panic button

2025-07-16
Diario El Cordillerano
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create false videos that were widely spread, resulting in threats and intimidation against the victim. The AI-generated content directly contributed to harm to the journalist's personal safety and dignity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and involves violations of rights and harm to the community through disinformation and harassment. Hence, the classification as AI Incident is appropriate.