Chinese AI Censorship Erases Tiananmen History

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Leaked documents reveal that Chinese authorities are using advanced AI censorship tools to automatically remove online references to the Tiananmen Square massacre. The system flags even subtle visual metaphors, such as an image of a banana and four apples evoking the famous 'Tank Man' photograph, erasing historical memory and violating free expression.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI systems involved are language models that have been deliberately designed or configured to censor sensitive content related to Tiananmen Square. This censorship constitutes a violation of the right to access information, a human rights issue. Since the AI systems' use directly leads to the suppression of information and thus a violation of rights, this qualifies as an AI Incident under the framework, specifically a violation of human rights (c). The article describes ongoing realized harm through the AI systems' censorship, not just a potential risk or future harm, so it is not a hazard or complementary information.[AI generated]
AI principles
Respect of human rights, Democracy & human autonomy, Transparency & explainability, Accountability

Industries
Government, security, and defence

Affected stakeholders
General public, Civil society

Harm types
Human or fundamental rights, Public interest

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Tiananmen: A Silence in China That Extends to Social Media

2025-06-04
El Nacional
China's Silence on Tiananmen Extends to Social Media and AI Models

2025-06-04
14ymedio
Why's our monitor labelling this an incident or hazard?
The AI systems mentioned (Alibaba's Qwen3, Bytedance's Doubao, and DeepSeek) are explicitly described as AI language models that refuse to provide information about the Tiananmen massacre, indicating their use of AI to enforce censorship. This censorship leads to a violation of human rights, specifically the right to access information and freedom of expression, which is a recognized harm under the framework. Since the AI systems' use directly results in this harm, the event qualifies as an AI Incident. The article documents realized harm caused by the AI systems' operation under government censorship policies, not merely a potential or future risk, so it is not an AI Hazard or Complementary Information.
Chinese AI Models, Riddled with Censorship

2025-06-04
La RepúblicaEC
Why's our monitor labelling this an incident or hazard?
The AI systems involved are language models and chatbots that deliberately censor or refuse to provide information about a significant historical event due to government-imposed restrictions. This use of AI leads to a violation of human rights, specifically the right to information and freedom of expression, as the AI systems systematically omit or censor content on a large scale. Therefore, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to a violation of human rights.
Australian Media: Leaked Documents Reveal Chinese AI Censorship Erasing Historical Memory of the Tiananmen Massacre

2025-06-04
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for content moderation and censorship, including computer vision and natural language processing techniques to filter and remove sensitive content. The AI's role is pivotal in systematically erasing historical information, which is a violation of human rights (freedom of information and expression) and causes harm to communities by creating a false historical narrative. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use in censorship and suppression of information.
Why Does an Image of One Banana and Four Apples Make the CCP So Nervous?

2025-06-04
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for automated content moderation and censorship by the Chinese government. The AI system is used to identify and remove content related to the Tiananmen Square massacre, including symbolic imagery, thereby directly causing harm by suppressing historical facts and restricting access to truthful information. This constitutes a violation of human rights (freedom of expression and access to information) and harm to communities by creating a false historical narrative. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to significant harm.
Australian Media: Leaked Documents Reveal Chinese AI Censorship Erasing Historical Memory of the Tiananmen Massacre

2025-06-04
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used for content filtering and censorship under government directives, which results in the erasure of historical facts about the Tiananmen Square massacre. This censorship suppresses fundamental rights to information and freedom of expression, causing harm to communities by creating a false historical narrative. The AI system's role is pivotal as it automates and scales the censorship process, making it more effective and harder to detect. Hence, the event meets the criteria for an AI Incident due to realized harm linked directly to AI use.
Jumping at Shadows: Why Does an Image of One Banana and Four Apples Make the CCP Nervous?

2025-06-05
看中国
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for real-time content moderation and censorship, which directly leads to violations of human rights by suppressing information about the Tiananmen Square massacre. The AI system's role is pivotal in identifying and removing symbolic content, effectively erasing historical memory and restricting freedom of expression. This constitutes a breach of obligations intended to protect fundamental rights, meeting the criteria for an AI Incident. The article provides concrete examples of AI-driven censorship causing realized harm, not just potential harm, and thus it is not merely a hazard or complementary information.
Australian Media: Leaked Documents Reveal Chinese AI Censorship Erasing Memory of June Fourth; DeepSeek Also Refuses to Answer

2025-06-05
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools used for content filtering and censorship under government directives, which directly lead to the suppression of information about a significant historical event. This suppression harms communities by erasing collective memory and distorting historical facts, infringing on rights to information and freedom of expression. The AI system's role is pivotal in enabling large-scale, systematic censorship that is more sophisticated and less detectable. Therefore, this qualifies as an AI Incident due to realized harm to communities and violation of rights caused by AI-enabled censorship.
Australian Media: Leaked Files Reveal How China Uses Artificial Intelligence to Erase June Fourth History

2025-06-04
Radio Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools used for content censorship on social media platforms, trained and used to remove politically sensitive content. The AI system's role is pivotal in enforcing censorship that suppresses historical facts and political criticism, which is a violation of human rights. The harm is realized and ongoing, as the AI system actively removes such content and causes informational harm to communities and individuals. Hence, this is an AI Incident rather than a hazard or complementary information.