AI-Generated Fake Images of Manolo García's Concert Incident Cause Public Alarm


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

After Manolo García's crowd surfing at a Barcelona concert, AI-manipulated images falsely depicting his injury circulated online, causing public concern and reputational harm. The artist condemned the unauthorized use of his image and the spread of misinformation, highlighting the social impact of AI-generated fake content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to create manipulated images that falsely show the artist injured, which caused real emotional harm and public alarm. The AI system's outputs directly led to misinformation and distress, fulfilling the criteria for harm to communities and individuals. Since the harm has already occurred and is directly linked to the AI-generated content, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Respect of human rights; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Other; General public

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Manolo García angered by the AI manipulation of his image after his fall from the stage

2026-05-12
La Vanguardia
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate manipulated images simulating a harmful event (the fall) that did not in fact cause physical injury. The harm is reputational and emotional, stemming from the unauthorized use of the artist's image and from misleading content. Although this involves a violation of rights (image rights and possibly privacy), the article reports no legal ruling or formal complaint, and describes no direct injury or systemic harm. The event centres on the misuse of AI-generated content, the public concern it caused, and the artist's response, which fits the definition of Complementary Information: it provides context and records the societal response to AI misuse rather than documenting a new AI Incident or AI Hazard.

Manolo García lashes out at the AI manipulation of images of his fall at the Barcelona concert: 'It strikes me as punishable'

2026-05-13
El Periódico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create manipulated images that falsely show the artist injured, which caused real emotional harm and public alarm. The AI system's outputs directly led to misinformation and distress, fulfilling the criteria for harm to communities and individuals. Since the harm has already occurred and is directly linked to the AI-generated content, this is an AI Incident rather than a hazard or complementary information.

Manolo García lashes out at the hoaxes generated after his jump into the crowd in Barcelona

2026-05-12
Diario de Pontevedra
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate the artist's image and to generate false viral content that caused public concern and alarm. This misinformation is a direct harm caused by the AI system's outputs. Although no physical harm occurred, the social and reputational damage and the spread of false information fall within the definition of an AI Incident, under harm to communities and violation of rights. This event therefore qualifies as an AI Incident.