Meta's AI Moderation System Censors Artistic Nude Content by Sculptor Jago


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's AI content moderation algorithm repeatedly misclassified artistic nude images of sculptor Jago's work 'La David' as explicit content, resulting in censorship and account restrictions. This automated action limited Jago's artistic expression and audience reach, raising concerns about AI-driven violations of artistic freedom and fundamental rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Meta's content moderation algorithm) whose use directly violated the artist's rights to freedom of expression and access to cultural content, which can be considered human or fundamental rights. Censoring legitimate artistic content through an automated system, without human oversight or proper context, harms both the artist and the community's access to art. This therefore qualifies as an AI Incident: realized harm caused by the AI system's use in content moderation, leading to unjustified censorship and account blocking.[AI generated]
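The failure mode described above can be made concrete with a minimal sketch. All names, scores, and the threshold below are hypothetical illustrations, not Meta's actual pipeline: a classifier's nudity score alone drives enforcement, so an artistic nude that scores above the threshold is removed automatically, with no human review and no use of artistic context.

```python
from dataclasses import dataclass

# Assumed policy threshold for illustration only, not Meta's real value.
NUDITY_THRESHOLD = 0.8

@dataclass
class Post:
    post_id: str
    nudity_score: float  # classifier output in [0, 1]
    is_artistic: bool    # context the automated pipeline never consults

def moderate(post: Post) -> str:
    """Return the enforcement action for a post.

    The decision uses the raw score only; `is_artistic` is ignored,
    which is exactly how a photo of a marble sculpture gets removed.
    """
    if post.nudity_score >= NUDITY_THRESHOLD:
        return "remove_and_restrict_account"
    return "allow"

# A photo of a marble nude scores high and is removed automatically,
# even though a human reviewer would likely allow it as art.
david = Post("la-david-1", nudity_score=0.93, is_artistic=True)
print(moderate(david))  # → remove_and_restrict_account
```

The sketch shows why "context blindness" matters: the contextual signal that would distinguish art from explicit content exists in the data model but never reaches the decision.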
AI principles
Fairness, Accountability, Transparency & explainability, Respect of human rights

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Consumers, General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


Meta censors art again: post featuring the work of artist Jago deleted and account hidden

2025-08-20
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's content moderation algorithm) whose use directly violated the artist's rights to freedom of expression and access to cultural content, which can be considered human or fundamental rights. Censoring legitimate artistic content through an automated system, without human oversight or proper context, harms both the artist and the community's access to art. This therefore qualifies as an AI Incident: realized harm caused by the AI system's use in content moderation, leading to unjustified censorship and account blocking.

Meta censors sculpture by artist Jago deemed pornographic

2025-08-20
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Meta's algorithm) that automatically flagged and censored artistic nude images, leading to shadowbanning of the artist's profile. This is a direct consequence of the AI system's use in content moderation. The harm involves violation of rights related to freedom of expression and artistic freedom, which falls under violations of human rights or fundamental rights. The harm is realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Social content featuring the nudes of Jago's David blocked, limits placed on the artist's account: 'An incomprehensible choice'

2025-08-22
Tiscali Spettacoli
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Meta's algorithm) involved in content moderation that restricts artistic nude content. The harm is indirect and relates to censorship and limitation of artistic expression on social media. There is no indication of physical harm, legal violations, or significant community harm as defined for AI Incidents. The event focuses on the challenges and responses to AI-driven content moderation, fitting the definition of Complementary Information rather than an Incident or Hazard.

Meta 'censors' Jago, content featuring the nudes of the David blocked - Arte - Ansa.it

2025-08-20
ANSA.it
Why's our monitor labelling this an incident or hazard?
An AI content moderation system is explicitly involved, as the algorithm flagged the artistic nude images and videos, leading to content censorship and account restrictions. This has directly led to harm in the form of limiting the artist's freedom of expression and the community's access to artistic content, which falls under harm to communities and a violation of rights. The AI system's role is pivotal as it automatically flagged the content, triggering the restrictions. Therefore, this qualifies as an AI Incident.

Meta censors sculptor Jago, social content featuring his work "La David" blocked. The artist: "An incomprehensible choice"

2025-08-22
Blitz quotidiano
Why's our monitor labelling this an incident or hazard?
An AI system (Meta's content moderation algorithm) is explicitly involved, as it automatically flagged and limited content based on its interpretation of nudity. The harm here is a violation of rights, specifically artistic freedom and expression, which falls under violations of human rights or breach of obligations protecting fundamental rights. The AI system's use led directly to the restriction of content and account visibility, constituting an AI Incident. Although a human has final decision power, the AI's automated flagging triggered the restriction, making the AI system's role pivotal in causing harm.
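The rationale above notes that a human holds final decision power, yet the automated flag still causes the harm. A hedged sketch (all names hypothetical) of that dynamic: content flagged by the model is hidden the moment it is flagged, so the restriction takes effect before any reviewer acts, and a later human "allow" can only reverse harm already done.

```python
import queue

# Pending-review queue and per-post visibility state (illustrative only).
review_queue: "queue.Queue[str]" = queue.Queue()
visibility: dict[str, str] = {}

def on_model_flag(post_id: str) -> None:
    # The restriction is applied immediately on the model's flag...
    visibility[post_id] = "hidden_pending_review"
    # ...and only afterwards is a human asked to confirm or reverse it.
    review_queue.put(post_id)

def human_review(post_id: str, decision: str) -> None:
    # The human's "final decision power" arrives after the harm occurred.
    visibility[post_id] = "visible" if decision == "allow" else "removed"

on_model_flag("la-david-video")
print(visibility["la-david-video"])  # hidden before any human sees it
human_review("la-david-video", "allow")
print(visibility["la-david-video"])  # restored only after review
```

This is why the monitor treats the AI's role as pivotal even in a human-in-the-loop design: the ordering of flag-then-review means the automated step alone determines what the audience sees in the interim.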

Meta's algorithm hides Jago: artistic nudity back in the crosshairs - La Provincia Di Varese

2025-08-21
La Provincia di Varese, Il quotidiano di Varese online
Why's our monitor labelling this an incident or hazard?
Meta's algorithm, an AI system for content moderation, misclassified artistic nude images as explicit content, resulting in account restrictions and censorship. This automated action caused harm to the artist by limiting their ability to share artistic work and reach new audiences, which constitutes a violation of rights. The harm is realized and directly linked to the AI system's use, fitting the definition of an AI Incident.