Facebook AI Moderation Wrongly Censors Auschwitz Memorial Posts

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Facebook's AI-driven content moderation system mistakenly flagged and removed 21 posts from the Auschwitz Museum honoring Holocaust victims, citing grounds such as nudity and hate speech. The incident drew public and governmental outcry, highlighting the harm algorithmic errors can cause by suppressing important historical memory and violating communities' rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The content moderation system on Facebook is an AI system that automatically reviews and flags content based on learned patterns. Here, the AI system's use has directly led to harm by unjustly removing or flagging posts that honor Holocaust victims, which constitutes harm to communities and a violation of rights to access truthful historical information. The incident involves the AI system's use (content moderation) causing realized harm (censorship and damage to the museum's work and community trust). Therefore, this qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Fairness; Respect of human rights; Transparency & explainability; Robustness & digital security; Democracy & human autonomy

Industries
Media, social platforms, and marketing; IT infrastructure and hosting

Affected stakeholders
Government; General public

Harm types
Public interest; Human or fundamental rights; Reputational; Psychological

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection; Organisation/recommenders

In other databases

Articles about this incident or hazard

Auschwitz Museum: "Facebook's algorithms are erasing history"

2024-04-13
Newsweek Polska
Auschwitz Museum censored by Facebook. Minister: this is a scandal

2024-04-14
Onet Wiadomości
Why's our monitor labelling this an incident or hazard?
Facebook's content moderation system is an AI system that automatically detects and removes content based on learned patterns. Here, the AI system's malfunction or erroneous classification has directly led to harm in the form of censorship of important memorial posts, which can be considered harm to communities and a violation of rights related to freedom of expression and historical memory. The event involves the use and malfunction of an AI system causing realized harm, thus qualifying as an AI Incident rather than a hazard or complementary information.
Facebook insults the memory of Auschwitz victims: a scandal and an illustration of the problems with automated content moderation | Niezalezna.pl

2024-04-13
NIEZALEZNA.PL
Why's our monitor labelling this an incident or hazard?
Facebook's content moderation system is an AI system that automatically classifies and removes posts. The removal of posts commemorating Auschwitz victims constitutes harm to communities and a violation of rights, as it erases historical memory and disrespects victims. The AI system's malfunction (incorrect classification and removal) directly led to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use caused realized harm through inappropriate content moderation.
Deputy Prime Minister Gawkowski: Facebook's hiding of Auschwitz Museum posts is a scandal

2024-04-13
wnp.pl
Why's our monitor labelling this an incident or hazard?
Facebook's automatic content moderation system is an AI system that analyzes and flags content. Here, it erroneously removed posts honoring Auschwitz victims, which is a direct harm to the community's right to remember and honor historical victims, and can be seen as a violation of rights and harm to communities. The incident involves the AI system's malfunction (incorrect flagging and removal) causing realized harm, not just potential harm. Therefore, this qualifies as an AI Incident.
They posted a photo of Holocaust victims. Facebook flagged it for nudity, among other reasons

2024-04-13
nextgazetapl
Why's our monitor labelling this an incident or hazard?
Facebook's content moderation relies heavily on AI systems to detect policy violations. Here, the AI system incorrectly flagged historical photos of Holocaust victims as violating community standards, leading to suppression of important historical content. This constitutes an AI Incident because the AI system's malfunction (false positive moderation) directly led to harm: violation of rights related to historical memory and expression, and harm to communities by erasing or obscuring Holocaust remembrance. The event involves the use and malfunction of an AI system causing realized harm, not just potential harm or complementary information.
Facebook hid Auschwitz Museum posts. Poland's firm response

2024-04-13
nextgazetapl
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-driven automated content moderation systems by Meta (Facebook) that have led to the wrongful removal or hiding of historical and memorial posts. The AI system's malfunction or misapplication in content moderation has directly caused harm to the museum's mission and to communities by suppressing important historical memory and causing distress to survivors and their families. This constitutes a violation of rights related to freedom of expression and access to information, which falls under harm to communities and violations of rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in content moderation.
Meta issued a statement on the Auschwitz Museum's Facebook posts: "We apologize"

2024-04-17
nextgazetapl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's content moderation algorithms) that mistakenly flagged legitimate educational content as violating community standards, leading to temporary suppression of posts commemorating Auschwitz victims. This is a direct harm to the community's right to access important historical information and can be seen as a violation of rights. Meta's apology and restoration of posts confirm the AI system's role in causing the harm. Therefore, this qualifies as an AI Incident due to the AI system's malfunction causing realized harm.
"Hiding posts commemorating Auschwitz victims is a scandal"

2024-04-13
TVN24
Why's our monitor labelling this an incident or hazard?
Facebook's content moderation system is an AI system that automatically flags and removes content based on algorithmic analysis. Here, it has removed posts commemorating Auschwitz victims, which the museum argues is wrongful and harmful. This is a direct harm caused by the AI system's malfunction or misapplication, leading to violation of rights (right to access and share historical information) and harm to communities (erasure of historical memory). The event involves the use and malfunction of an AI system leading to realized harm, fitting the definition of an AI Incident rather than a hazard or complementary information.
Deputy Prime Minister clashes with Facebook over the Auschwitz Museum

2024-04-13
Business Insider
Why's our monitor labelling this an incident or hazard?
Facebook's automated moderation algorithms, which are AI systems, have directly led to the removal and restriction of legitimate historical content posted by the Auschwitz Museum. This constitutes harm to communities by suppressing access to important historical and cultural information, and can be considered a violation of rights related to information access and remembrance. The event describes realized harm caused by the AI system's malfunction in content moderation, not just a potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Facebook censored Auschwitz Museum posts! COMMENTARY

2024-04-13
wpolityce.pl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: Facebook's automated content moderation algorithms. The system's malfunction led to the removal of legitimate posts commemorating Auschwitz victims, which is a form of harm to communities and a violation of rights to access and preserve historical information. The harm is realized, not just potential, as posts were removed and only partially restored after appeal. This fits the definition of an AI Incident because the AI system's malfunction directly caused harm to the museum and the broader community by censoring important historical content. The event is not merely a governance or policy update, nor is it unrelated to AI, so it is not Complementary Information or Unrelated. It is also not a hazard since harm has already occurred.
Facebook censors the Auschwitz Museum. Gawkowski speaks of a scandal

2024-04-13
rmf24.pl
Why's our monitor labelling this an incident or hazard?
Facebook's content moderation system is an AI system that automatically detects and removes content based on certain criteria. The removal of posts commemorating Auschwitz victims under false pretenses indicates a malfunction or misclassification by the AI system. This has led to harm by censoring important historical and cultural information, impacting communities and potentially violating rights to information and remembrance. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction in content moderation.
Facebook hid Auschwitz Museum posts. Deputy Prime Minister announces action

2024-04-14
wiadomosci.radiozet.pl
Why's our monitor labelling this an incident or hazard?
The automated moderation system on Facebook is an AI system that made erroneous decisions to hide posts commemorating Auschwitz victims, which directly led to harm in the form of disrespect and offense to the memory of victims and the community. This is a violation of rights and harm to communities as defined in the framework. The event is not merely a product update or general news but describes a concrete incident where AI system malfunction caused harm. Therefore, it qualifies as an AI Incident.
Facebook hid Auschwitz Museum posts. Gawkowski: This is a scandal

2024-04-13
Dziennik
Why's our monitor labelling this an incident or hazard?
The content moderation system on Facebook is an AI system that automatically reviews and flags content. Here, it malfunctioned by incorrectly labeling legitimate memorial posts as harmful content, leading to their removal. This directly harmed the Auschwitz Museum by undermining its work and disrespecting victims' memory, which is a harm to communities and a violation of rights. The AI system's malfunction is the direct cause of this harm. Hence, this event meets the criteria for an AI Incident.
"A scandal": Facebook censors Auschwitz-Birkenau Museum posts

2024-04-13
Do Rzeczy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Facebook's content moderation algorithms) that has directly led to harm by censoring historical posts from a reputable institution, thereby violating rights to access information and potentially causing harm to communities by suppressing important historical memory. The AI system's malfunction or misapplication in content moderation is central to the incident. The involvement of AI in content moderation and the resulting unjustified censorship fits the definition of an AI Incident due to violation of rights and harm to communities. The political and institutional responses are complementary but do not negate the realized harm caused by the AI system's actions.
Facebook is hiding Auschwitz Museum posts! Deputy Prime Minister Gawkowski speaks of a scandal

2024-04-13
polityka.se.pl
Why's our monitor labelling this an incident or hazard?
Facebook's content moderation system is an AI system that automatically flags and removes posts based on algorithmic analysis. Here, it has mistakenly removed posts that commemorate Holocaust victims, which constitutes harm to communities by erasing historical memory and is offensive to the victims' legacy. The harm is direct and realized, as the posts were removed or hidden, and the museum's work was undermined. This fits the definition of an AI Incident because the AI system's use has directly led to harm (harm to communities and violation of rights).
Facebook dictates the terms again. Auschwitz Museum falls victim to algorithmic censorship

2024-04-15
cyberdefence24.pl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Facebook's content moderation algorithm) whose use has directly led to harm in the form of censorship of legitimate posts by a historical institution. This constitutes a violation of rights related to freedom of expression and access to information, which falls under harm to communities and violations of rights. The algorithm's failure to properly assess appeals exacerbates the harm. Therefore, this qualifies as an AI Incident.