OnePlus AI Writer Temporarily Disabled After Reports of Political Censorship

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OnePlus temporarily disabled its AI Writer feature after users reported it censored politically sensitive topics, particularly those related to China, such as the Dalai Lama, Taiwan, and Arunachal Pradesh. The company attributed the issue to technical inconsistencies and is investigating the AI system's malfunction.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (AI Writer) is explicitly involved as it is the tool generating or editing text. The censorship behavior, where the AI refuses to generate content on certain topics, constitutes a violation of rights (freedom of expression) and informational harm to users. This harm is realized because users are unable to generate content on these topics, which aligns with the definition of an AI Incident. The company's disabling of the feature to fix the issue, and the lack of clarity on whether the censorship was intentional or a technical fault, do not negate the fact that harm occurred. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's censorship behavior.[AI generated]
AI principles
Accountability; Fairness; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Consumer products

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

OnePlus Removes AI Writing Feature to Fix It After Censorship Claims

2025-12-08
PC Mag Middle East
OnePlus and Oppo AI refuses to answer 'Is Arunachal Pradesh a part of India?'; company responds | Mint

2025-12-08
mint
Why's our monitor labelling this an incident or hazard?
An AI system (the AI Writer tool) is explicitly involved, and its malfunction (biased refusal to process certain politically sensitive content) is described. While the AI's behavior reflects censorship aligned with a political stance, the article does not report direct or realized harm such as injury, legal violations, or significant community harm. The company is investigating and has disabled the feature temporarily, indicating a response to a technical issue. Therefore, this event does not meet the threshold for an AI Incident (no clear realized harm) or an AI Hazard (no credible risk of future harm beyond the current issue). Instead, it provides an update on an AI system's problematic behavior and the company's mitigation efforts, fitting the definition of Complementary Information.
OnePlus temporarily pulls AI tool after censorship claims

2025-12-08
Android Authority
Why's our monitor labelling this an incident or hazard?
The AI Writer tool is an AI system generating text content based on user prompts. Its refusal to generate content on specific topics constitutes a form of censorship, which is a violation of users' rights to access information and freedom of expression. This harm affects communities and users who rely on the tool for information generation. The company acknowledging the issue and disabling the tool indicates the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's biased or censored outputs.
OnePlus AI features face flak over its refusal to respond to queries on Arunachal Pradesh, pulls AI tools offline

2025-12-08
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The AI system (OnePlus AI Writer) is involved and malfunctioning in its response to certain queries, but there is no evidence or report of any harm (physical, rights-based, or community-related) resulting from this malfunction. The company is taking corrective action by disabling the feature temporarily. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides an update on the AI system's status and the company's response to a technical problem.
OnePlus temporarily disables a major AI feature following allegations of censoring sensitive geopolitical terms

2025-12-08
Phone Arena
Why's our monitor labelling this an incident or hazard?
The AI Writer is an AI system (a large language model) integrated into a consumer product. Its malfunction or design leads to censorship of politically sensitive terms, which constitutes a violation of rights related to access to information and freedom of expression. The harm is realized as users experience suppression of certain information, which can influence societal understanding and discourse. The event involves the use of the AI system and its outputs directly causing this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
OnePlus suspends AI writer feature over Arunachal Pradesh controversy

2025-12-07
NewsBytes
Why's our monitor labelling this an incident or hazard?
The AI Writer is an AI system exhibiting biased or censored behavior, which is a recognized issue. However, the article does not describe any direct harm or violation that has occurred, only user complaints and the company's suspension of the feature. There is no indication of injury, rights violations, or other harms materializing. The event focuses on the company's response to the issue rather than the harm itself. Thus, it fits the definition of Complementary Information, providing context and updates on an AI system's problematic behavior and mitigation efforts, rather than constituting an AI Incident or AI Hazard.
OnePlus AI 'Chinese Political Censorship' Is a Bug, Company States

2025-12-07
Android Headlines
Why's our monitor labelling this an incident or hazard?
The AI system (OnePlus AI Writer) is explicitly involved as it is the feature blocking political content. The harm is realized as users experience censorship and inability to generate or access certain political topics, which constitutes a violation of rights (freedom of expression and information). The company acknowledges the issue as a technical bug but does not deny the harm caused. The AI system's malfunction or filtering behavior is the direct cause of the harm. Hence, this event meets the criteria for an AI Incident due to violation of human rights caused by the AI system's malfunction.
OnePlus AI serving Chinese agenda? Users report censorship of China-sensitive political topics; tech giant responds

2025-12-07
News24
Why's our monitor labelling this an incident or hazard?
The OnePlus AI Writer assistant is an AI system involved in generating content. Its use has directly led to harm by censoring politically sensitive topics, restricting users' rights to free expression and access to information, which are fundamental human rights. The censorship is unintentional but results from the AI's malfunction or design, causing realized harm. Hence, this event meets the criteria for an AI Incident due to violation of human rights through AI-driven content censorship.
OnePlus AI Censorship Bug: Company Response

2025-12-08
News Directory 3
Why's our monitor labelling this an incident or hazard?
The AI Writer assistant is an AI system whose malfunction or design is causing politically motivated censorship, which is a violation of rights and harms communities by limiting free expression. The widespread disabling of AI features across many apps shows the AI system's role is pivotal in causing this harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction or use has directly led to harm (censorship and restriction of expression).
OnePlus Temporarily Disables AI Writer in Notes App After User Reports

2025-12-11
Gizmochina
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the AI Writer feature) whose malfunction (technical inconsistencies) led to its temporary disabling. However, there is no indication that this malfunction caused any direct or indirect harm to users or others, such as injury, rights violations, or disruption. The issue is being managed as a precautionary measure to maintain quality and reliability. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides an update on the status and response to a known AI system issue without describing realized or plausible harm.