Microsoft Bing's AI-Driven Censorship Enforces Chinese Government Controls Globally


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Microsoft's Bing search engine, powered by AI, enforces Chinese government censorship by filtering politically sensitive content using blacklists. This censorship, intended for China, has at times affected users worldwide, suppressing information on topics like human rights and democracy, and resulting in violations of freedom of expression and access to information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article details how Bing, an AI-powered search engine, is used in China to enforce government censorship by filtering search results based on blacklists of sensitive terms. This use of AI directly leads to violations of human rights, including suppression of freedom of expression and access to information. The accidental extension of censorship to users outside China further demonstrates harm caused by the AI system's deployment. The harms are realized and significant, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a clear case where AI-enabled censorship has caused harm to rights and communities.[AI generated]
AI principles
Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Organisation/recommenders


Articles about this incident or hazard


Microsoft's Search Engine Bing Revealed to Cooperate with the CCP's Firewall - The Epoch Times

2024-03-09
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article details how Bing, an AI-powered search engine, is used in China to enforce government censorship by filtering search results based on blacklists of sensitive terms. This use of AI directly leads to violations of human rights, including suppression of freedom of expression and access to information. The accidental extension of censorship to users outside China further demonstrates harm caused by the AI system's deployment. The harms are realized and significant, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a clear case where AI-enabled censorship has caused harm to rights and communities.

Report: Microsoft's Search Software Bing Aids CCP Internet Censorship | Bing | Blacklists | Self-Censorship | NTD Television

2024-03-10
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article details how Bing's search engine, which uses AI for search and content filtering, actively censors politically sensitive content per Chinese government demands. This censorship extends beyond China, affecting users in other countries, thus causing harm to communities by restricting access to information and violating human rights related to freedom of expression and access to information. The AI system's use in filtering and blacklisting content is central to the harm described. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's use in censorship and suppression of information.

US Media: Microsoft's Search Engine Bing Helps Beijing Maintain the Firewall

2024-03-08
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Bing's search engine with algorithmic filtering and blacklists) whose use has directly led to violations of human rights by censoring politically sensitive content. The censorship is systemic and affects users globally due to the misapplication of the blacklist, causing harm to communities and individuals' rights to information and expression. This meets the criteria for an AI Incident because the AI system's use has directly caused harm through suppression of information and violation of rights.

Revision differences for "Microsoft" - China Digital Space

2024-03-11
China Digital Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Bing, which uses AI technologies such as GPT-4 in its Copilot tool, and describes how it enforces censorship rules mandated by the Chinese government. This censorship suppresses access to information on politically sensitive topics, constituting a violation of human rights (freedom of expression and access to information). The harm is realized and ongoing, affecting users both within China and internationally. The AI system's role in filtering and manipulating search results and autosuggestions is pivotal to this harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.

Microsoft

2024-03-11
China Digital Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Bing's search algorithms and AI-powered tools) to implement censorship and content filtering that suppresses politically sensitive information, including human rights abuses. This censorship is not only active in China but also affects users internationally, indicating direct harm to communities and violations of human rights. The AI system's role is pivotal in enforcing these restrictions, making this a clear AI Incident under the framework's criteria for violations of human rights and harm to communities. The article provides evidence of realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the censorship and its impacts are central to the report.

Bing Revealed to Cooperate with the CCP's Firewall | The Epoch Times Taiwan

2024-03-12
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system—Microsoft's Bing search engine—that uses AI-driven filtering and content moderation to comply with CCP censorship laws. The use of blacklists and real-time filtering by human reviewers supported by automated systems fits the definition of an AI system influencing outputs that affect virtual environments (search results). The censorship leads to violations of human rights (freedom of expression and access to information), which is a direct harm caused by the AI system's use. The article documents actual realized harm, not just potential harm, and thus this is an AI Incident rather than a hazard or complementary information.

How Does Microsoft Bing Help Maintain China's Great Firewall?

2024-03-14
Asr Iran, a news and analysis site for Iranians worldwide, www.asriran.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (machine learning algorithms) used by Microsoft Bing to filter and censor content in compliance with Chinese government demands. This censorship restricts access to information on politically sensitive topics, which is a violation of human rights (freedom of expression and access to information). The censorship is active and ongoing, affecting users both inside and outside China, thus causing realized harm. The involvement of AI in content filtering and the resulting suppression of information meets the criteria for an AI Incident under the OECD framework, as it directly leads to violations of human rights.

The Role of a Big Tech Giant in Internet Filtering

2024-03-14
Khabar Online
Why's our monitor labelling this an incident or hazard?
Bing is an AI-powered search engine that uses AI systems to generate search results and filter content. The article reports that Bing has filtered sensitive content related to the Tiananmen Square protests ('Tank Man') not only in China but globally, effectively censoring information. This use of AI to restrict access to information constitutes a violation of human rights (freedom of expression and access to information). The harm is realized and ongoing, as users worldwide are affected. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in content filtering and censorship.

How Does Microsoft Bing Help Maintain China's Great Firewall?

2024-03-13
Digiato
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems (machine learning algorithms analyzing text, images, and cloud data) in Bing to filter and censor content according to Chinese government regulations. This censorship leads to harm by restricting access to information about human rights violations and political events, which is a violation of fundamental rights. The AI system's deployment and use directly contribute to this harm. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to violations of human rights and harm to communities through censorship and information suppression.

Microsoft is Attracting Growing Criticism for Censoring Bing in China

2024-03-20
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-powered search and content filtering systems (Bing's search engine) that censor information in compliance with Chinese government demands. This censorship results in violations of human rights, including suppression of information about abuses against Uyghurs and other topics, which is a direct harm to communities and a breach of fundamental rights. The AI system's role in filtering and removing content is pivotal to this harm. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to violations of human rights.

Microsoft is Attracting Growing Criticism for Censoring Bing in China

2024-03-21
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the Bing search engine) that censors content in China, restricting access to information on important topics. This censorship is a direct use of AI technology to suppress information, which falls under violations of human rights as defined in the framework. Therefore, this qualifies as an AI Incident.

Microsoft is Attracting Growing Criticism for Censoring Bing in China - BNN Bloomberg

2024-03-20
BNN
Why's our monitor labelling this an incident or hazard?
Bing is an AI system that generates search results based on user queries. The article explicitly describes how Bing's AI-driven search results are censored in China, removing information on critical human rights issues. This censorship is a direct violation of human rights and harms communities by suppressing truthful information about abuses. The AI system's role in filtering and censoring content is pivotal to this harm. The involvement is in the use of the AI system to comply with restrictive laws, leading to realized harm. Hence, this event meets the criteria for an AI Incident.

Microsoft Attracts Growing Criticism for Censoring Bing in China

2024-03-20
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI in Bing's search engine to censor content in China, which results in violations of human rights. The AI system's role in filtering and removing information about human rights and democracy constitutes a breach of fundamental rights. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use in censorship.