AI-Edited Photo Causes Reputational Harm to Argentine Footballer


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-edited photo depicting Enzo Hoyos, a new signing for Ferro Carril Oeste, as heavily overweight circulated widely on social media, drawing public criticism and causing reputational harm. The manipulated image sparked controversy and memes, highlighting the risks of AI-generated misinformation in Argentine sports communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to edit the photo, directly causing reputational harm and social disruption to the individual involved. The harm is realized and directly linked to the AI-generated manipulated content. Because the AI system's use caused reputational and social harm to a person through misinformation and harassment, this event qualifies as an AI Incident.[AI generated]
AI principles
Accountability, Fairness, Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


The blunt response of the Ferro signing who was criticized on social media over his physique

2026-01-15
lagaceta.com.ar

They retouched his photo with AI to make him look overweight, and the player responded: "I saw the photo and...

2026-01-15
La Voz
Why's our monitor labelling this an incident or hazard?
An AI system was used to edit the photo, which was then widely shared and caused reputational harm to the player. The AI-generated manipulated image directly led to reputational and social harm, and possibly infringed the player's right to fair representation. The harm is realized, not merely potential, as the controversy and public reaction have already occurred. Therefore, this event qualifies as an AI Incident.

AI-generated fake news put a Ferro player at the center of attention

2026-01-14
Los Andes
Why's our monitor labelling this an incident or hazard?
While the article involves an AI system (one generating a manipulated image as fake news), it does not describe any actual harm to individuals or communities. The player acknowledges the fake image, treats it with humor, and focuses on his sports performance. There is no evidence of injury, rights violation, or other significant harm resulting from the AI-generated content, so the event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information: it provides context on AI-generated misinformation and the player's response, enhancing understanding of AI's societal impact without reporting a new harm or a credible threat of future harm.