DeepSeek AI Censors Tiananmen Content, Shares Data with Beijing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Chinese AI chatbot DeepSeek, developed by a Zhejiang-based startup, censors any mention of June 4 and the Tiananmen Square crackdown, refusing even basic date queries. A US House report finds that it shares user data with Beijing. It has been banned by the US Department of Defense, NASA, Congress, and Taiwan, and removed from Italian app stores.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (DeepSeek) whose use directly leads to harm by suppressing access to politically sensitive information, which constitutes a violation of rights and harm to communities. The AI's censorship mechanism is a programmed feature that restricts truthful responses, thereby causing informational harm. The brief moment when the AI provided an uncensored factual answer before reverting to censorship demonstrates the AI's role in controlling information. This meets the criteria for an AI Incident because the harm (restriction of information and violation of rights) is realized and directly linked to the AI system's use and design.[AI generated]
AI principles
Accountability; Fairness; Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Human or fundamental rights; Public interest

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Guest commentary: When DeepSeek accidentally remembered "June 4"

2025-06-03
Deutsche Welle

Even Chinese AI fears mentioning "June 4": DeepSeek's answers draw a storm of criticism

2025-06-04
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that censors politically sensitive content and shares user data with a government known for suppressing fundamental rights. This use of AI directly leads to violations of human rights, including freedom of expression and privacy, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations under applicable law. The censorship and data sharing have caused harm to communities and individuals, as evidenced by international bans and investigations. Therefore, this event qualifies as an AI Incident.

Even Chinese AI fears mentioning "June 4": DeepSeek's answers draw a storm of criticism

2025-06-04
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system used for answering user queries. Its refusal to provide truthful answers about the date around June 4th and other politically sensitive topics is a direct use of AI for censorship, which is a violation of rights and harms communities by suppressing information. The sharing of user data with the Chinese government further implicates privacy and human rights violations. These harms are realized and ongoing, not merely potential. Hence, this event meets the criteria for an AI Incident involving violations of human rights and harm to communities.

Is Chinese AI afraid of June 4? A simple "What's today's date?" made it freeze!

2025-06-04
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The AI system DeepSeek is explicitly involved and is used in a way that censors information about a politically sensitive date, effectively restricting access to information. This is a violation of human rights, specifically the right to information and freedom of expression. The harm is realized and ongoing, as users experience the AI's refusal to provide information. The AI's development and use include a built-in mechanism for over-censorship, which directly causes this harm. Therefore, this event qualifies as an AI Incident under the framework, as it involves an AI system whose use leads to a violation of rights.

[News Report] Beware! Dangerous Chinese tools on Amazon; one Beijing move turns the iPhone low-end

2025-06-05
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The DeepSeek AI system is explicitly mentioned and its behavior is described as automatically suppressing answers to sensitive questions, which is a direct use of AI leading to a violation of rights (censorship and restriction of information). This meets the definition of an AI Incident because the AI's use directly leads to harm to communities and a breach of rights. The other reported events do not involve AI systems or plausible AI involvement, so they are unrelated. Hence, the overall classification focuses on the DeepSeek censorship as an AI Incident.

Chinese chatbot played dead on "June 4"; confidential documents exposed

2025-06-05
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot explicitly mentioned as being used to censor sensitive political content by refusing to answer questions about the Tiananmen Square incident. The article also reveals that AI tools are trained and deployed to perform large-scale content censorship under government directives. This use of AI directly leads to harm by violating rights to information and expression, suppressing historical truth, and manipulating public knowledge, which fits the definition of an AI Incident involving violations of human rights and harm to communities. Therefore, the event is classified as an AI Incident.

June 5 cross-strait roundup: Chinese chatbot played dead on "June 4", drawing mockery; Beijing assistant judge absconds with a fortune

2025-06-06
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system explicitly mentioned. Its use on June 4 to avoid answering questions about the date, especially regarding the Tiananmen incident, shows the AI system's role in censoring information. This censorship directly harms the right to access information and harms communities by suppressing discourse on a significant historical event. The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident under the framework, as the AI system's use has directly led to a violation of rights and harm to communities.

"June 4" taboos take many forms: mainland netizens boldly "charge the tower" (defy the censors) using an app

2025-06-05
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (the generative AI chatbots '豆包' (Doubao) and 'DeepSeek') that refuse to generate or answer sensitive content related to the Tiananmen Square incident, demonstrating AI-driven censorship. The AI systems' refusal to provide information directly restricts users' access to information, violating human rights. Furthermore, algorithmic controls on pricing and money transfers involving sensitive numbers indicate AI or algorithmic moderation causing harm. The use of the '学习通' app to circumvent censorship shows AI's role in both suppression and resistance, but the primary harm described is AI-enabled censorship and suppression of speech. This fits the definition of an AI Incident, as it involves violations of human rights caused by the use of AI systems.

Even Chinese AI fears mentioning "June 4": DeepSeek's answers draw a storm of criticism

2025-06-04
botanwang.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that censors politically sensitive content, directly leading to violations of human rights, specifically the right to access information and freedom of expression. The AI's refusal to answer questions about the Tiananmen Square incident and the reported data sharing with the Chinese government for surveillance and censorship purposes demonstrate direct harm caused by the AI system's use. The event involves realized harm (censorship and suppression) rather than just potential harm, and the AI system's role is pivotal in causing these harms. Hence, it meets the criteria for an AI Incident under violations of human rights.