Alibaba Faces Backlash Over Uyghur-Identifying Facial Recognition AI

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Alibaba Cloud developed and tested facial recognition software capable of identifying Uyghur individuals, raising concerns about ethnic profiling and human rights violations. Although Alibaba claims the feature was not deployed and has since removed it, the incident highlights the risks of AI-enabled surveillance targeting vulnerable minorities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI facial recognition software developed and demonstrated by Alibaba and Kingsoft Cloud that can identify ethnic minorities, including Uighurs, a group subject to persecution and human rights abuses by the Chinese government. The AI system's use or potential use in surveillance and targeting of these minorities constitutes a violation of human rights, fulfilling the criteria for harm under the AI Incident definition. Even if the companies claim limited or no deployment, the existence and promotion of such technology in this context directly or indirectly contributes to harm. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Fairness
Respect of human rights
Privacy & data governance
Transparency & explainability
Accountability
Democracy & human autonomy

Industries
Government, security, and defence
Digital security

Affected stakeholders
Other

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection

Articles about this incident or hazard

As China Tracked Muslims, Alibaba Showed Customers How They Could, Too

2020-12-16
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI facial recognition software developed and demonstrated by Alibaba and Kingsoft Cloud that can identify ethnic minorities, including Uighurs, a group subject to persecution and human rights abuses by the Chinese government. The AI system's use or potential use in surveillance and targeting of these minorities constitutes a violation of human rights, fulfilling the criteria for harm under the AI Incident definition. Even if the companies claim limited or no deployment, the existence and promotion of such technology in this context directly or indirectly contributes to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Alibaba offered clients facial recognition to identify Uighur people, report reveals

2020-12-17
the Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Alibaba's facial recognition AI system designed to identify Uighur people, which is used to flag individuals for authorities in a context of systemic oppression and human rights abuses. The AI system's use directly leads to violations of human rights (harm category c), including surveillance, detention, and repression of Uighurs. The involvement of AI in enabling these harms is clear and direct, fulfilling the criteria for an AI Incident. The company's claim that the feature was only tested does not negate the fact that the system was developed and offered, and the evidence indicates it was accessible to clients for this purpose. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

As China Tracked Muslims, Alibaba Showed Customers How They Could, Too

2020-12-17
Moneycontrol
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI systems (facial recognition software) developed and demonstrated by Alibaba and Kingsoft Cloud to detect ethnic minorities, including Uighurs. This technology is directly connected to the Chinese government's oppressive surveillance and persecution campaign against Muslim minorities, which is widely condemned as a human rights violation. Even if the companies claim limited or no deployment, the promotion and testing of such AI capabilities for ethnic profiling and surveillance purposes have already contributed to or enabled violations of fundamental rights. Therefore, this qualifies as an AI Incident due to the direct or indirect role of AI in human rights abuses.

Report: China's Alibaba Made 'Uyghur Alert' Facial Recognition System

2020-12-19
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition software) developed by Alibaba that identifies Uyghurs, an ethnic minority subjected to severe human rights abuses by the Chinese government. The AI system's role in enabling surveillance and targeting of Uyghurs directly contributes to violations of human rights and fundamental rights, including forced labor and detention in concentration camps. Although Alibaba claims the system was only used in testing, the existence of similar systems deployed by Huawei and police departments confirms real-world use. Therefore, this is an AI Incident due to the direct link between the AI system's use and violations of human rights.

China's Alibaba "eliminates" ethnic tag that identifies Uighur Muslims

2020-12-18
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed by Alibaba Cloud that could identify Uighur Muslims, an ethnic minority facing documented human rights abuses. The use or potential use of this AI system for ethnic identification and surveillance directly relates to violations of fundamental rights. Although Alibaba claims the feature was not deployed and has been removed, the development and existence of such technology in this context is a direct factor in enabling human rights violations. Hence, this qualifies as an AI Incident due to the direct link between the AI system's development and the harm to human rights.

Alibaba software can recognise Uighurs: report

2020-12-17
The Hindu
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed by Alibaba that can identify Uighurs, a minority group facing documented human rights abuses. The software's capability to recognize ethnicity is an AI feature with direct implications for surveillance and potential harm. Although Alibaba states the feature was only tested and not used operationally, the mere existence and offering of such technology in a context known for rights violations constitutes a plausible risk of harm. There is no confirmed incident of harm caused by the software's deployment, so it does not meet the threshold for an AI Incident. However, the credible potential for harm through misuse or deployment in surveillance justifies classification as an AI Hazard.

Alibaba under fire for offering face recognition software that can identify Uyghurs

2020-12-18
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and tested by Alibaba that can identify Uyghur minorities, a group facing documented human rights abuses. The use or potential use of this AI system for ethnic identification in Xinjiang is directly connected to violations of human rights, fulfilling the criteria for an AI Incident. Even if the feature was only in testing, the development and offering of such technology in this context has already led to significant harm or risk thereof, given the known abuses against Uyghurs. Therefore, this qualifies as an AI Incident due to the direct or indirect role of the AI system in human rights violations.

Alibaba says Uygur-tracking facial recognition violates company values

2020-12-18
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition software) that was developed and promoted with the capability to identify specific ethnic groups, including Uygurs. This use of AI for ethnic identification and profiling is a violation of human rights and fundamental rights, fulfilling the criteria for an AI Incident. Although Alibaba has removed the feature and disavowed its use, the fact that the technology was developed, advertised, and could have been used for such profiling means harm has occurred or is ongoing. Therefore, this is not merely a potential hazard or complementary information but an AI Incident.

China's Alibaba seeks to distance itself from Uighur facial-recognition software

2020-12-18
alaraby
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial-recognition software) developed by Alibaba Cloud that could identify Uighur Muslims, a minority group facing mass detention and surveillance, which is a violation of human rights. The AI system's development and potential use for ethnic targeting directly relate to harm (violation of fundamental rights). Although Alibaba claims the feature was not deployed, the development and testing of such technology in this context is sufficient to classify this as an AI Incident. The involvement of AI in enabling or facilitating surveillance and ethnic identification that contributes to human rights abuses fits the definition of an AI Incident.

China's Alibaba 'dismayed' by Uighur facial-recognition software

2020-12-18
News24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial-recognition software) developed by Alibaba Cloud with an ethnic tag feature targeting Uighurs, a group facing documented human rights abuses. The software was not deployed, so no direct harm has occurred yet, but the development of such technology in this context poses a credible risk of future human rights violations. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article also discusses broader surveillance and repression issues, reinforcing the plausible future harm. There is no indication of remediation or governance response focused on this specific incident, so it is not Complementary Information.

Alibaba 'dismayed' by tech that can identify Uighurs

2020-12-18
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) developed by Alibaba Cloud that can identify Uighurs, a minority group subject to surveillance and alleged human rights abuses. Although Alibaba claims the feature was only for testing and not deployed, the technology's existence and potential use for ethnic identification pose a credible risk of human rights violations. Since no actual harm or deployment causing harm is reported, this is a plausible future harm scenario, fitting the definition of an AI Hazard. The event is not Complementary Information because it focuses on the existence and implications of the AI feature itself, not on responses or updates to prior incidents. It is not Unrelated because the AI system and its potential for harm are central to the report.

As China tracked Muslims, Alibaba showed customers how they could, too

2020-12-16
bdnews24.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems developed by Alibaba and Kingsoft Cloud that can identify ethnic minorities, specifically Uighurs, which are used or intended for surveillance and repression by the Chinese government. This constitutes a violation of human rights and fundamental rights, fulfilling the criteria for harm under the AI Incident definition. The AI system's development and potential use have directly or indirectly led to or enable harm to communities and violations of rights. Even if the companies claim limited or no deployment, the documented testing and promotion of such capabilities in official documentation demonstrate the AI system's pivotal role in enabling these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

China's Alibaba pushed software that identifies Uighurs: Report

2020-12-17
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system designed to identify individuals based on biometric data. Its development and testing for identifying Uighurs directly relate to violations of human rights, as the technology supports surveillance and repression of a minority group. The article indicates that this software was offered and tested, which implies direct involvement in enabling harm. The harms include violations of fundamental rights and harm to communities, fitting the criteria for an AI Incident. Although Alibaba claims the feature was only used in testing, the context of its deployment in a repressive environment and the broader surveillance infrastructure in Xinjiang confirm the AI system's role in causing harm. Thus, the event is classified as an AI Incident.

As China tracked Muslims, Alibaba showed customers how they could, too

2020-12-16
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems developed by Alibaba and Kingsoft Cloud that can detect faces of ethnic minorities, including Uighurs, which are used or intended for use in surveillance linked to persecution and human rights abuses. The AI system's development and potential deployment have directly or indirectly contributed to violations of human rights, fulfilling the criteria for an AI Incident. Even if the companies claim limited or no deployment, the documented testing and promotion of such technology in this context is sufficient to classify this as an AI Incident due to the serious harm involved.

Alibaba Says Uygur-tracking Facial Recognition Violates Company Values, Removes Software

2020-12-18
TheStreet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—that was developed and promoted with the capability to identify people by ethnicity, which is a form of racial profiling and discrimination. This directly relates to violations of human rights, specifically the rights of ethnic minorities such as Uygurs, which is a recognized harm under the AI Incident definition. Although Alibaba has removed the software and denied its use, the fact that it was developed, promoted, and potentially used constitutes an incident rather than a mere hazard or complementary information. The event describes realized harm through the potential and actual use of AI for ethnic profiling, which is a serious violation of rights and thus an AI Incident.

China's Alibaba 'dismayed' by Uighur facial-recognition software

2020-12-18
Northern Ireland News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial-recognition software) developed by Alibaba Cloud that can identify Uighur individuals, a minority group facing documented human rights abuses and mass surveillance. The use or potential use of this AI system for ethnic identification aligns with violations of fundamental rights. Even though the feature was not deployed, its development and testing in this context represent a significant AI-related harm or risk. Given the direct connection to human rights violations, this qualifies as an AI Incident rather than a mere hazard or complementary information.

Alibaba offered clients 'Uighur-detection-as-a-service,' study finds

2020-12-18
The Next Web
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as a facial recognition technology designed to identify Uighurs, which is an AI system by definition. Its development and offering as a service for identifying a persecuted ethnic group directly implicates violations of human rights. The harm is realized or at least strongly implied given the context of persecution and surveillance of Uighurs in China. The company's own documentation and the availability of the system to customers, even if later retracted, indicate the AI system's role in enabling such harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and human rights violations.

China's Alibaba pushed software that identifies Uighurs: Report

2020-12-17
Telangana Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and tested by Alibaba to identify Uighurs, a minority group facing documented human rights abuses. The AI system's role in enabling ethnic identification for surveillance purposes directly contributes to violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized or ongoing given the context of state surveillance and repression in Xinjiang. The software's development and testing for this purpose, even if not fully deployed, is sufficient to classify this as an AI Incident rather than a hazard or complementary information.

China's Alibaba 'Dismayed' By Uighur Facial-recognition Software

2020-12-18
International Business Times
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (facial-recognition software) developed and tested by Alibaba Cloud that could identify Uighurs, a minority group facing serious human rights abuses. The AI system's development and potential use for ethnic identification in a surveillance context could plausibly lead to violations of human rights, fitting the definition of an AI Hazard. There is no indication that the system was deployed or caused direct harm, so it does not meet the criteria for an AI Incident. The focus is on the potential for harm rather than realized harm, and the company's distancing statement confirms the system was not used operationally. Therefore, the event is best classified as an AI Hazard.

As China tracked Muslims, Alibaba showed customers how they could, too

2020-12-19
Honolulu Star-Advertiser
Why's our monitor labelling this an incident or hazard?
The article describes AI systems developed by Alibaba and Kingsoft Cloud that can identify ethnic minorities, including Uighurs, which are targeted by the Chinese government's oppressive surveillance and detention campaigns. The AI system's role in enabling or facilitating ethnic profiling and surveillance that leads to persecution constitutes a violation of human rights. Even if the companies claim limited or no deployment, the development and promotion of such technology with this intended use is directly linked to harm. Therefore, this qualifies as an AI Incident due to the direct or indirect contribution of the AI system to human rights violations.

Uighurs: China's Marketplace Giant Alibaba Admits It Created Software Used To Track Ethnic Minority

2020-12-19
Christianity Daily
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as a facial recognition software capable of identifying Uighurs, an ethnic minority subject to repression and alleged genocide by the Chinese government. The software's development and potential use for ethnic surveillance and targeting directly implicate it in violations of human rights. Even if not deployed, the demonstration and offering of such technology to customers for monitoring purposes is a direct link to harm. This meets the criteria for an AI Incident due to the direct or indirect role of the AI system in human rights violations.

China's Alibaba Pushed Software That Identifies Uighurs: Report

2020-12-17
courthousenews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and offered by Alibaba that can identify Uighurs, a minority group subject to mass surveillance and internment in China. The use or potential use of this AI system for ethnic identification in a context of alleged human rights abuses (internment camps, surveillance) constitutes a violation of human rights, fulfilling the criteria for an AI Incident. Even if the feature was only used in testing, the development and offering of such technology in this context is directly linked to harm to fundamental rights.

'NYT' reports Alibaba Group pushed software that identifies Uighurs

2020-12-17
Taipei Times
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (facial-recognition software) developed and tested by Alibaba that can identify Uighurs, a minority group subject to surveillance and alleged human rights abuses. The AI system's development and testing in this context pose a credible risk of human rights violations if deployed, fulfilling the criteria for an AI Hazard. There is no evidence that the system was used operationally to cause harm, so it does not meet the threshold for an AI Incident. The event is not merely complementary information because it reveals a significant potential for harm linked to the AI system's capabilities and context of use.

Alibaba offered clients facial recognition to identify Uighur people, report reveals

2020-12-17
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system explicitly mentioned. Its use to identify Uighur people and flag them for authorities constitutes a violation of human rights, as it enables surveillance and potential repression of a minority group. This is a direct link between the AI system's use and harm to human rights, qualifying the event as an AI Incident under the framework.

Alibaba 'dismayed' that Alibaba Cloud developed feature allowing firms to identify Uighur minorities

2020-12-18
Hong Kong Free Press HKFP
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed by Alibaba Cloud that can identify a specific ethnic minority group, the Uighurs. Given the documented human rights abuses against Uighurs in China, the AI system's development directly relates to potential violations of fundamental rights. Even though the feature was not deployed, its creation and testing represent a significant risk and implicate the company in enabling technology that could be misused for ethnic surveillance and repression. This meets the criteria for an AI Incident due to the direct link to violations of human rights and the pivotal role of the AI system in this harm context.

Alibaba Facial Recognition Tech Picks Out Uyghur Minorities

2020-12-18
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition with ethnic identification capabilities) developed and offered by Alibaba. The system is used by law enforcement to identify and track Uyghur minorities, leading to violations of fundamental human rights and ethnic repression. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework's definition of harm to human rights caused directly or indirectly by AI system use.

More 'Uighur alarms' found in China -- this time an Alibaba face biometrics feature

2020-12-17
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition with ethnic detection capabilities) developed and potentially used to identify Uighurs, a minority group subject to persecution. This use directly relates to violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. The system's deployment or even testing in this context is not hypothetical but has been documented, and the harms are well-established given the political and social context in China. The removal of the feature from public view does not negate the incident classification, as the harm or risk of harm is real and linked to the AI system's use.

Alibaba latest to face Uygur-recognition row

2020-12-17
The Standard
Why's our monitor labelling this an incident or hazard?
The article describes Alibaba's AI facial recognition system designed to identify Uygurs, which is linked to China's broader surveillance and repression of this minority group. While Alibaba claims the feature was only tested and not used operationally, the technology's capability and context imply a credible risk of enabling human rights violations. This constitutes a plausible future harm scenario (AI Hazard) rather than a confirmed incident, as no direct harm from Alibaba's system is reported. The involvement of AI in ethnic identification and surveillance in Xinjiang is well documented, and the potential for misuse is significant, justifying classification as an AI Hazard.

Alibaba 'Dismayed' Over Ethnic Profiling Feature in Facial Recognition Technology

2020-12-18
CFO
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of an AI system (facial recognition technology) that included ethnicity as an attribute, which can lead to ethnic profiling—a violation of human rights. While the technology was not deployed, the mere development and potential use of such profiling technology in a sensitive context constitutes a plausible risk of harm. Therefore, this qualifies as an AI Hazard because it could plausibly lead to violations of human rights if deployed or misused.

Alibaba 'dismayed' by reports its software was used to identify Uyghurs

2020-12-18
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The article describes an AI system developed by Alibaba that included ethnicity detection, specifically targeting Uyghurs, a minority group facing repression. The system was used in trials and promoted to clients, which implies actual use beyond mere development. This constitutes a violation of human rights and discriminatory profiling, which are harms under the AI Incident definition. Although Alibaba claims the system was not deployed commercially, the promotion and testing of such technology with ethnic profiling attributes directly or indirectly leads to harm to communities and breaches of fundamental rights. Hence, this is an AI Incident rather than a hazard or complementary information.

China's Alibaba 'dismayed' by Uighur facial-recognition software

2020-12-18
themalaysianinsight.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial-recognition software) developed and used by Alibaba's cloud computing unit. The software's capability to identify Uighur individuals implicates it in potential human rights violations, as the Uighur minority in China faces documented repression. Even if direct harm is not explicitly reported, the AI system's role in enabling surveillance and identification of a vulnerable group constitutes a violation or breach of fundamental rights or at least a credible risk thereof. Given the information, this qualifies as an AI Incident due to the direct or indirect link to human rights violations.

China's Alibaba pushed software that identifies Uighurs, says report

2020-12-17
themalaysianinsight.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (face-recognition software) used to identify ethnic minorities, specifically Uighurs. The use of such technology in this context is directly connected to human rights violations, as it enables surveillance and potential persecution of a vulnerable group. Therefore, this constitutes an AI Incident due to the direct link between the AI system's use and violations of fundamental rights.

Alibaba criticised for racial profiling through facial recognition

2020-12-18
Computing
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (facial recognition technology) developed to identify Uyghurs, an ethnic minority, which is linked to serious human rights concerns. Although Alibaba claims the system was only used in testing and not deployed, the mere development and offering of such technology, especially in a context where Uyghurs face alleged oppression, constitutes a credible risk of harm. There is no evidence in the article that the system has caused direct harm yet, so it does not meet the threshold for an AI Incident. However, the plausible future harm from misuse or deployment of this technology justifies classification as an AI Hazard.

Alibaba 'dismayed' by its ethnicity profiling algorithm

2020-12-18
BOLSAMANIA
Why's our monitor labelling this an incident or hazard?
The AI system in question is an ethnicity recognition algorithm, which is explicitly mentioned and involves AI-based facial recognition technology. The event concerns the development and testing of this system, which could plausibly lead to violations of human rights and ethnic discrimination if deployed, especially given the context of government policies targeting Uyghurs. Since no deployment or direct harm has been confirmed, but the risk is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The involvement of AI and the potential for serious harm through misuse or deployment for surveillance and profiling justifies this classification.

Hashtag Trending - Anger over big tech's success during the pandemic; Bitcoin exceeds $23K; Alibaba helped clients identify Uighur people

2020-12-18
ITBusiness.ca
Why's our monitor labelling this an incident or hazard?
The facial recognition software sold by Alibaba is an AI system used to identify ethnic minorities, specifically Uighurs, which is linked to human rights violations given the context of oppression in Xinjiang. This is a direct use of AI technology causing harm to a vulnerable community, meeting the criteria for an AI Incident under violations of human rights. The other parts of the article do not describe AI systems or harms related to AI, so they are not relevant to the classification.

China's Alibaba 'eliminates' ethnic tag that identifies Uighur muslims

2020-12-19
telecomlive.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (face-recognition software) developed and used by Alibaba Cloud that could identify Uighur Muslims, linking it to potential violations of human rights (ethnic profiling and surveillance). Although the feature was removed, the development and initial use of such a system constitutes an AI Incident because it directly relates to a breach of fundamental rights. The company's statement distancing itself does not negate the incident's occurrence but rather is a response to it.

China-rights-ecommerce-Alibaba

2020-12-17
nampa.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (face-recognition software) developed and offered by Alibaba. The software's capability to identify Uighurs directly links it to potential or actual violations of human rights, given the context of China's controversial treatment of this minority group. This constitutes an AI Incident because the AI system's use has directly or indirectly led to a breach of fundamental rights through enabling ethnic identification and potential targeting.
Thumbnail Image

China-rights-ecommerce-Alibaba-telecommunication-retail

2020-12-18
nampa.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial-recognition software) developed and used by Alibaba. The software's capability to identify Uighur individuals directly implicates potential violations of human rights, specifically privacy and ethnic discrimination, which are harms under the framework. Although the article does not specify realized harm, the use of such technology in this context is widely recognized as leading to human rights abuses. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and violations of fundamental rights.
Thumbnail Image

Uyghurs: Chinese market giant Alibaba admits making software used to track ethnic minority

2020-12-20
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event explicitly describes an AI system (facial recognition software) developed by Alibaba that identifies Uyghurs, an ethnic minority group. The system's use for ethnic identification and monitoring is directly linked to human rights violations, including surveillance and potential targeting of Uyghurs, which is a breach of fundamental rights. Although Alibaba claims the software was not deployed, the demonstration to customers and the nature of the technology imply a direct or indirect contribution to harm. This meets the criteria for an AI Incident because the AI system's development and potential use have led or could lead to violations of human rights, fulfilling the definition of harm (c).
Thumbnail Image

Alibaba's facial recognition tech can identify Uighur faces, report says

2020-12-17
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and tested by Alibaba that can identify Uighur faces, an ethnic minority group facing documented human rights abuses. The AI system's use for ethnic identification in Xinjiang is directly linked to surveillance practices that have been internationally criticized for violating fundamental rights. Even though the software was not commercially deployed, its development and testing in this context plausibly could lead to or facilitate human rights violations, meeting the criteria for an AI Hazard. There is no indication that harm has yet occurred from this specific software's deployment, so it is not an AI Incident. The report focuses on the potential and testing phase, not on a response or governance action, so it is not Complementary Information. Therefore, the classification is AI Hazard.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority

2020-12-17
DhakaTribune
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition technology) is explicitly mentioned and is used to identify Uighurs, a minority group subject to alleged forced labor and repression. The system's deployment in content moderation to flag or remove Uighur livestreams constitutes a violation of human rights, fulfilling the criteria for an AI Incident. The harm is realized as the technology facilitates discrimination and suppression of a vulnerable group, not merely a potential risk. Although Alibaba claims the feature was only used in testing, the report indicates it was integrated into a content moderation service, implying actual use and harm.
Thumbnail Image

Alibaba's facial recognition tech specifically picks out Uighur minority, says new report

2020-12-17
Economic Times
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions an AI system (Alibaba's facial recognition technology) that can identify Uighurs and is used in content moderation, which directly leads to violations of human rights by enabling targeted surveillance and censorship. The harm is realized as the system flags Uighur individuals' content for review or removal, contributing to oppression and discrimination. The AI system's development and use are central to this harm, fulfilling the criteria for an AI Incident under the OECD framework.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority

2020-12-17
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition technology) is explicitly mentioned and is used to identify Uighurs, an ethnic minority. This identification capability is linked to content moderation that can lead to censorship or removal of content from Uighurs, which in the context of documented repression and forced labor camps, constitutes a violation of human rights. The AI system's use in this manner has directly led to harm by enabling surveillance and discrimination against a vulnerable group. Therefore, this event qualifies as an AI Incident due to violations of human rights caused by the AI system's use.
Thumbnail Image

Alibaba Facial Recognition Tech Specifically Picks Out Uighur Minority: Report

2020-12-17
MoneyControl
Why's our monitor labelling this an incident or hazard?
The facial recognition AI system explicitly identifies Uighurs, enabling targeted surveillance and content censorship, which directly contributes to violations of human rights as reported by credible sources. The system's role in facilitating ethnic profiling and potential repression is a clear harm to communities and a breach of fundamental rights. Although Alibaba claims the feature was only in testing, the report indicates the system's capability and potential use, which has already led to concerns and accusations of complicity in human rights abuses. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing harm to a minority group.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority - report

2020-12-17
Reuters India
Why's our monitor labelling this an incident or hazard?
The facial recognition AI system explicitly identifies Uighurs, a minority group subject to alleged mass detention and repression, which constitutes a violation of human rights. The AI system's use in surveillance and content moderation that targets this group directly or indirectly leads to harm by enabling discrimination, censorship, and potential persecution. The report confirms the AI system's capability and its integration into Alibaba's Cloud Shield service, which is used for content moderation, thus linking the AI system's use to real-world harms. Therefore, this event meets the criteria for an AI Incident due to violations of human rights caused by the AI system's deployment.
Thumbnail Image

Alibaba facial recognition singles out Uighur minority: report

2020-12-18
Nikkei Asia
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition with ethnicity tagging) is explicitly mentioned and used in a way that directly leads to harm: ethnic profiling of Uighurs, a minority group facing serious human rights concerns. This constitutes a violation of human rights and fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the system is used to flag and possibly remove content from Uighur users, contributing to repression. Alibaba's acknowledgment of the feature and its removal does not negate the incident, as the harm has already occurred. Therefore, this event is classified as an AI Incident.
Thumbnail Image

Alibaba facial recognition tech picks out China's Uighur minority, says report

2020-12-18
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition with ethnicity tagging) is explicitly mentioned and was used in content moderation, which can lead to violations of human rights (harassment, discrimination, surveillance) against the Uighur minority. This constitutes harm to a group of people and breaches fundamental rights. The involvement of the AI system in tagging ethnicity and enabling surveillance is direct and material to the harm described. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's use.
Thumbnail Image

Alibaba unit develops facial recognition tech to identify Uighur people

2020-12-18
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The AI system in question is a facial recognition technology that can identify Uighur individuals, an ethnic minority subject to alleged human rights violations in China. The system's development and potential use for ethnic tagging in surveillance contexts directly implicate violations of human rights, fulfilling the criteria for an AI Incident under the framework. Despite Alibaba's claim that the feature was not intended for deployment, the fact that it was developed and integrated into a content moderation service capable of flagging Uighur individuals for review or removal indicates a direct link to harm. This meets the definition of an AI Incident as the AI system's use has directly or indirectly led to violations of fundamental rights.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority - report

2020-12-18
bdnews24.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software with ethnicity tagging) whose development and potential use directly relate to violations of human rights (ethnic profiling and surveillance of Uighurs). The software's capability to identify Uighurs and flag their videos for review or removal indicates a direct link to harm against a minority group. Even though Alibaba claims the feature was not deployed, the report reveals the system's existence and potential misuse, which constitutes an AI Incident due to realized or ongoing harm through surveillance and discrimination practices. The involvement of AI in tagging ethnicity and the resulting human rights implications meet the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Alibaba facial recognition tech 'picks out Uighurs'

2020-12-17
news.rthk.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition technology) developed and used by Alibaba that can identify Uighurs. The system's use in content moderation to flag Uighur individuals for review or removal directly implicates it in potential human rights violations, including ethnic discrimination and surveillance. Although Alibaba claims the feature was only used in testing, the presence and potential use of such technology in a real-world context linked to repression of a minority group meets the criteria for an AI Incident due to violations of human rights. The harm is direct and significant, involving fundamental rights and freedoms.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority

2020-12-18
Arab News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology with ethnicity tagging) developed and used by Alibaba. The system's capability to identify Uighurs is directly connected to surveillance practices that have been widely reported to contribute to human rights abuses, including forced labor and repression. Even if the feature was not intended for deployment, its development and potential use in content moderation that flags Uighur individuals constitutes a violation of human rights. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use in ethnic surveillance and discrimination.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority - report

2020-12-18
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The facial recognition AI system explicitly identifies Uighurs, a minority group facing documented human rights abuses, including forced labor camps. The AI's role in tagging ethnicity can enable or exacerbate these abuses, constituting a violation of human rights under the OECD framework. The report confirms the AI system's development and potential use in surveillance, which directly or indirectly leads to harm. Alibaba's statement that the feature was never intended for deployment does not negate the harm potential or the fact that the system was developed and possibly used. Hence, this event meets the criteria for an AI Incident due to realized or ongoing harm linked to the AI system's use in ethnic surveillance.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority -report

2020-12-17
Financial Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) developed and used by Alibaba that can specifically identify Uighur individuals, a minority group subject to alleged human rights abuses. The AI system's use in surveillance and content moderation targeting this group directly relates to violations of human rights, fulfilling the criteria for an AI Incident. The report highlights the AI system's role in enabling or facilitating these harms, not merely potential or hypothetical risks, thus it is not a hazard or complementary information but an incident.
Thumbnail Image

Alibaba facial recognition tech specifically picks out Uighur minority - report

2020-12-17
Financial Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) developed and tested by Alibaba that can identify Uighur individuals, a minority group facing documented human rights violations. The technology's capability to single out this group, even if only in testing, is linked to potential or actual violations of fundamental rights, including privacy and freedom from discrimination. Given the context of forced labor camps and surveillance in Xinjiang, the AI system's role is pivotal in enabling or facilitating these harms. Therefore, this qualifies as an AI Incident due to the direct or indirect link to violations of human rights.
Thumbnail Image

Alibaba denies using facial recognition tech to identify Uighurs

2020-12-17
BusinessLIVE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition technology) capable of identifying Uighurs, an ethnic minority, which is linked to serious human rights concerns. The system was reportedly in testing and possibly used in content moderation, which could lead to discrimination, censorship, or repression. Although no direct harm is confirmed, the plausible future harm of ethnic profiling and rights violations is credible given the context of forced labor camps and surveillance. Therefore, this event fits the AI Hazard category rather than an AI Incident, as harm is potential but not confirmed as having occurred due to this AI system.
Thumbnail Image

Alibaba says ethnic tag removed in cloud software said to detect Uighur Muslims

2020-12-18
The Standard
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition with ethnicity tagging) was developed and potentially used in a way that could lead to violations of human rights, specifically targeting Uighur Muslims. The software's ability to identify and flag Uighurs for content moderation could facilitate discriminatory surveillance and repression, which constitutes harm to human rights. Even if the feature was not officially deployed, its development and potential use create a credible risk of harm. However, since the feature was reportedly removed and not deployed to customers, the event is best classified as an AI Hazard reflecting plausible future harm rather than an incident of realized harm.
Thumbnail Image

Alibaba's facial recognition tech could be being used to identify Uighurs

2020-12-17
Charged
Why's our monitor labelling this an incident or hazard?
Alibaba's Cloud Shield facial recognition software is described as capable of identifying Uighurs specifically, and this capability is reportedly being used in a context where Uighurs face forced labor and detention, constituting serious human rights violations. The AI system's use in ethnic profiling and surveillance that facilitates these abuses meets the criteria for an AI Incident under violations of human rights. Although Alibaba claims the feature was only used in testing, the report suggests plausible deployment or at least potential misuse contributing to harm. Given the direct link between the AI system's use and ongoing human rights harms, this event qualifies as an AI Incident.
Thumbnail Image

Alibaba says its technology won't target Uighurs

2020-12-18
podcastfilmreview.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Alibaba's Cloud Shield content moderation technology) that was capable of identifying Uighur minorities, an ethnic group, which is a direct link to potential or actual human rights violations. The use of AI for ethnic tagging and targeting minorities is a violation of human rights and fundamental rights protections. Although Alibaba claims to have removed the ethnic tagging feature, the presence and use of such technology that led to targeting or identification of Uighurs constitutes an AI Incident. The harm is realized or at least directly linked to the AI system's use, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Accomplice in China's surveillance, NYT: Alibaba also developed Uyghur facial recognition

2020-12-17
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI facial recognition systems developed and used by Alibaba and others to identify Uyghur people, facilitating government surveillance and persecution. This use directly leads to human rights violations and harm to minority communities, fulfilling the criteria for an AI Incident. The involvement is in the use of AI systems for oppressive surveillance, and the harm is realized, not just potential. Therefore, this event is classified as an AI Incident.
Thumbnail Image

Abetting persecution? Alibaba revealed to have offered clients software that can 'facially recognize' Uyghurs

2020-12-17
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions an AI system (facial recognition software) capable of identifying Uyghur individuals, a minority group subject to persecution. The use or potential use of this AI system to monitor and report on this group directly relates to violations of human rights and harm to communities. The article reports that the software was demonstrated and tested, and while Alibaba denies deployment, the risk and potential harm are real and significant. This fits the definition of an AI Incident because the AI system's use or development has directly or indirectly led to harm or violations of rights.
Thumbnail Image

After Huawei, NYT: Alibaba developed a tool to detect Uyghurs

2020-12-18
United Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) developed by Alibaba that can identify Uyghur and other minority faces. The system's use for monitoring and potentially alerting authorities about these individuals poses a credible risk of human rights violations and harm to communities. While actual harm has not been confirmed, the plausible future misuse and the high risk of abuse justify classification as an AI Hazard rather than an AI Incident. The removal of the content from Alibaba's website and the company's statement indicate the system has not been deployed beyond testing, supporting the hazard classification.
Thumbnail Image

New York Times: Alibaba offered software for facial recognition of Uyghurs

2020-12-17
UDN Money (Economic Daily News)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and used by Alibaba. The software's capability to identify Uyghur individuals is directly related to surveillance practices that have been widely criticized for human rights violations. Even if the company claims the feature was only used in testing, the demonstration and availability of such technology in a context linked to ethnic profiling and potential police alerts constitutes an AI Incident due to the direct or indirect role of the AI system in violations of human rights and discriminatory practices against a minority group.
Thumbnail Image

Alibaba and other Chinese companies developed facial recognition software to monitor Uyghurs

2020-12-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition software with AI capabilities) developed and deployed to identify Uyghur people, a minority group, for surveillance purposes. This use directly implicates violations of human rights, as ethnic monitoring and targeting of Uyghurs by the Chinese government is widely documented as harmful and oppressive. The AI system's role is pivotal in enabling this surveillance and potential repression. Although some companies claim the software is only for testing, the presence of such technology and its use by government authorities indicates realized harm. Hence, this event meets the criteria for an AI Incident involving violations of human rights.
Thumbnail Image

Alibaba builds facial recognition technology that specifically detects Uyghurs and ethnic minorities

2020-12-17
Newtalk
Why's our monitor labelling this an incident or hazard?
The AI system described is explicitly an AI-powered facial recognition technology used to identify ethnic minorities, which is directly linked to surveillance practices that violate human rights. The system's development and potential use for monitoring and controlling minority populations constitute a breach of fundamental rights and obligations under applicable law. Even if the system has not been confirmed as deployed, the article indicates it was developed and tested, and the context implies a high risk of harm. Therefore, this event qualifies as an AI Incident due to the direct or indirect role of the AI system in human rights violations.
Thumbnail Image

NYT: Alibaba developed a Uyghur facial recognition tool

2020-12-17
RFI
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition software with ethnic identification capabilities) is explicitly mentioned and linked to potential and actual human rights violations (harm category c). The system's development and testing by Alibaba, and its potential use by clients for surveillance and repression of Uyghurs, directly implicate it in causing or enabling harm. Even if deployment is not confirmed, the high risk of misuse and the context of repression make this an AI Incident rather than merely a hazard. The article documents realized or ongoing harm risks tied to the AI system's use, not just potential future harm or general AI ecosystem information.
Thumbnail Image

Alibaba rushes to clarify that its facial recognition does not target specific ethnic groups

2020-12-18
RFI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) that was developed with the capability to identify specific ethnic groups, which could lead to violations of human rights if used. Although the feature was not deployed or used by customers, the development and existence of such technology pose a plausible risk of harm. Therefore, this situation qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving human rights violations, but no direct harm has been reported so far.
Thumbnail Image

Facial recognition tagged Uyghurs; Alibaba says the feature has been deleted

2020-12-18
Ming Pao
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition with ethnic identification capability). While Alibaba claims the feature was only in testing and not used on customers, the capability itself poses a credible risk of human rights violations if deployed, especially given the context of Uyghur surveillance. No direct harm is reported, so it is not an AI Incident. The article does not focus on responses or governance measures but reports on the existence and removal of the feature, indicating a plausible future harm scenario. Hence, it fits the definition of an AI Hazard.
Thumbnail Image

Alibaba builds facial recognition technology that specifically detects Uyghurs and ethnic minorities

2020-12-17
HiNet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) developed to identify ethnic minorities, specifically Uyghurs, which is directly linked to potential human rights violations. While the article notes no confirmed deployment beyond testing, the system's design and intended use for surveillance and alerting authorities about minority individuals pose a credible risk of harm. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of fundamental rights and other harms. There is no indication that harm has yet occurred from this specific system's deployment, so it is not classified as an AI Incident. The article also does not focus on responses or updates to prior incidents, so it is not Complementary Information.
Thumbnail Image

New York Times: Alibaba offered software for facial recognition of Uyghurs

2020-12-17
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—that can identify ethnic minorities, specifically Uyghurs. The use or potential use of such technology in surveillance and ethnic identification in Xinjiang is widely reported to be linked to human rights abuses. Even if Alibaba claims the feature was only tested, the development and demonstration of this capability directly relate to violations of fundamental rights. This meets the criteria for an AI Incident because the AI system's development and potential use have directly or indirectly led to harm or violations of human rights, fulfilling definition (c) under AI Incident. The event is not merely a potential risk (hazard) nor a complementary update; it concerns realized or ongoing harm linked to AI use.
Thumbnail Image

Denying facial recognition of Uyghurs, Alibaba says it does not target specific ethnic groups

2020-12-18
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article describes the development and testing of an AI facial recognition system that included ethnic classification, which is a clear AI system involvement. While no direct harm or incident is reported, the technology's capability to identify and potentially target specific ethnic groups like the Uyghurs poses a credible risk of human rights violations. Since the system was not used by customers and no harm has occurred yet, this situation fits the definition of an AI Hazard rather than an AI Incident. The denial and removal of related content indicate awareness and mitigation efforts but do not change the classification.
Thumbnail Image

Alibaba may land on US blacklist over Uyghur-identifying software

2020-12-17
Vision Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed by Alibaba that detects faces of Uyghur and other minorities for surveillance purposes, which is a direct violation of human rights. The system's use in automated ethnic identification and alerting authorities constitutes a breach of fundamental rights and likely contributes to harm against these groups. The fact that the US is considering sanctions and that the company removed related content after inquiry further supports that this is an AI Incident involving realized harm through the use of AI for ethnic surveillance and repression.
Thumbnail Image

Chinese state firms complicit: not only Huawei, Alibaba also revealed to have developed a Uyghur facial recognition tool

2020-12-18
Radio Taiwan International (Rti)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technology with AI capabilities) developed and used by Alibaba and Huawei to identify Uyghur individuals. The use of these AI systems for ethnic surveillance and automatic reporting to authorities constitutes a violation of human rights and fundamental freedoms. The harms are realized or ongoing, as the systems are actively used for monitoring and control, which fits the definition of an AI Incident due to direct involvement of AI in causing harm to a vulnerable group.
Thumbnail Image

Alibaba developed a facial recognition system that can detect Uyghurs

2020-12-18
Radio Taiwan International (Rti)
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as a facial recognition technology capable of identifying Uyghur individuals, which is an AI system by definition. Its use in monitoring and censoring Uyghur content directly links it to violations of human rights and harm to a minority community. The involvement of the AI system in these activities constitutes an AI Incident because the harm (violation of rights and harm to communities) has occurred or is ongoing. The company's acknowledgment and removal of the feature do not negate the fact that the system was used in a harmful way. Therefore, this event meets the criteria for an AI Incident.
Thumbnail Image

Alibaba and other Chinese firms accused of developing Uyghur identification technology; Huawei overseas executives resign over human rights concerns

2020-12-18
Voice of America
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems developed and tested for automatic identification of Uyghur individuals, a minority group facing documented human rights abuses. The AI systems are used or intended for surveillance and monitoring that supports repression, constituting violations of human rights and harm to communities. The AI system's development and use are directly linked to these harms, meeting the criteria for an AI Incident. The fact that the software is reportedly in testing does not negate the harm, as the development and potential deployment in a repressive context is already harmful. The resignation of Huawei executives over these concerns further indicates the serious human rights implications. Therefore, this event is classified as an AI Incident due to direct involvement of AI in human rights violations.
Thumbnail Image

Alibaba says it will not allow its technology to be used to target and identify specific ethnic groups

2020-12-18
CN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) that was capable of identifying specific ethnic groups, which constitutes a violation of human rights or a breach of obligations intended to protect fundamental rights. Although the company has taken corrective action by removing the ethnic labels, the technology's prior existence and potential use for targeting ethnic groups represent an AI Incident due to the realized harm or risk of harm to human rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Alibaba, Huawei and other Chinese companies develop Uyghur facial recognition tools

2020-12-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technology with AI capabilities) developed and deployed to identify Uyghur individuals, a minority group subject to human rights abuses. The use of these AI systems for surveillance and identification of Uyghurs directly contributes to violations of human rights, fulfilling the criteria for an AI Incident. The involvement is through the use of AI systems, and the harm is realized in the form of enabling oppressive surveillance and potential repression, which is a violation of fundamental rights. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

New York Times: Alibaba offered software for facial recognition of Uyghurs

2020-12-17
RFI
Why's our monitor labelling this an incident or hazard?
The event describes the development and use (even if only in testing) of an AI system (facial recognition software) that can identify Uyghur individuals, a minority group subject to controversial and potentially oppressive treatment. This implicates violations of human rights and fundamental rights, fitting the definition of an AI Incident due to the direct link between the AI system's capabilities and harm to a vulnerable community. The fact that the software was publicly demonstrated and documented by credible sources supports the classification as an AI Incident rather than a mere hazard or complementary information.

NYT: Alibaba Developed a Uyghur Facial Recognition Tool

2020-12-17
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (facial recognition software) developed and tested by Alibaba and Kingsoft Cloud that can identify ethnic minorities, specifically Uyghurs. The use of such AI systems for ethnic surveillance in Xinjiang is linked to serious human rights concerns and potential harm to minority communities. Although the companies claim limited or no deployment beyond testing, the risk of misuse is high and plausible. Since no confirmed direct harm is reported yet, but the potential for significant harm is credible and consistent with the AI systems' capabilities and context, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the existence and potential use of these AI tools, not on responses or updates to prior incidents. It is not Unrelated because AI systems are central to the event and its implications.

Alibaba Rushes to Clarify That Its Facial Recognition Does Not Target Specific Ethnic Groups

2020-12-18
RFI
Why's our monitor labelling this an incident or hazard?
The article discusses the development and potential use of an AI facial recognition system that could identify specific ethnic groups, which raises concerns about possible misuse and human rights violations. However, Alibaba clarifies that the technology was only in testing, has been removed, and was not used in practice. Since no harm has occurred and the technology's use remains hypothetical, this constitutes a plausible risk rather than an actual incident. Therefore, the event is best classified as an AI Hazard, reflecting the credible potential for harm if such technology were used improperly, but no direct or indirect harm has been reported yet.

Accused of Using Facial Recognition to Monitor Uyghurs, Huawei Sees Two Foreign Executives Resign, Reportedly in Protest

2020-12-18
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technology developed by Megvii and tested by Huawei and Alibaba) used to monitor Uyghur people and automatically alert police, which is a clear violation of human rights. The resignations of Huawei executives linked to dissatisfaction with this practice further confirm the AI system's role in the harm. Although Alibaba claims the technology was only in testing and removed, the testing and potential deployment of such AI systems for ethnic surveillance is a realized harm under the definitions provided. Hence, this is an AI Incident involving violations of human rights.

Alibaba May Be Blacklisted by the US over Uyghur Recognition Software (Photo) - Financial News

2020-12-17
Vision Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes Alibaba's development and use of AI facial recognition software to identify Uyghur and other minority faces for surveillance purposes. This AI system's use is linked to human rights violations, as it enables monitoring and potential repression of ethnic minorities, which is a serious harm under the framework. The involvement of AI in facilitating these violations and the resulting sanctions and international criticism confirm this as an AI Incident rather than a hazard or complementary information.

Alibaba Accused of Developing a Uyghur Facial Recognition Tool

2020-12-17
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) developed by Alibaba that can identify Uyghur and other minority faces. This system is connected to the Chinese government's oppressive surveillance practices, which have led to mass detentions and human rights violations. The AI system's development and potential use directly or indirectly contribute to violations of fundamental human rights, fulfilling the criteria for an AI Incident. The company's admission that the feature was tested but not used beyond testing does not negate the harm risk, as the system's existence and potential deployment in such a context is harmful. Therefore, this event qualifies as an AI Incident due to the involvement of AI in human rights violations.

Alibaba Develops a Uyghur Facial Recognition Tool

2020-12-17
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as facial recognition tools capable of identifying ethnic minorities, which is a clear AI system. The development and testing of these tools by Alibaba and Huawei, with the potential for misuse in surveillance and ethnic profiling, could plausibly lead to violations of human rights (harm category c). There is no confirmed direct harm yet, but the credible risk of misuse and the serious nature of potential harm justify classification as an AI Hazard rather than an AI Incident. The removal of the information from websites and the public controversy further support the recognition of potential harm rather than confirmed harm at this stage.

Alibaba and Other Chinese Firms Accused of Developing Technology to Identify Uyghurs; Huawei Overseas Executive Resigns over Human Rights Concerns

2020-12-18
Voice of America
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems developed and deployed for identifying Uyghur individuals, which is directly connected to human rights violations such as mass surveillance, detention, and alleged genocide. The AI systems' use in this context causes harm to a vulnerable ethnic group, fulfilling the criteria for an AI Incident under the definition of violations of human rights. The fact that these technologies are actively used or tested for such purposes, and the international condemnation and sanctions related to these practices, confirm the realized harm. The resignation of Huawei executives due to human rights concerns further supports the assessment of direct involvement of AI in causing harm.

Huawei Overseas Executive Resigns over Xinjiang Human Rights Concerns

2020-12-18
Lianhe Zaobao
Why's our monitor labelling this an incident or hazard?
The AI system involved is facial recognition software capable of identifying Uyghur individuals, a minority group facing documented human rights abuses. The development and testing of this AI system, especially given its potential or actual use in surveillance and repression, directly relates to violations of human rights (definition c). The resignation of Huawei executives over these concerns indicates the seriousness of the issue. Even if the software was only tested and not widely deployed, the AI system's role in enabling or facilitating ethnic profiling and surveillance is pivotal and constitutes an AI Incident. The event is not merely a potential hazard or complementary information but involves realized or ongoing harm linked to AI use.

Alibaba Says It Does Not Allow Its Technology to Be Used to Identify Specific Ethnic Groups

2020-12-18
Lianhe Zaobao
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (facial recognition technology) that was capable of identifying specific ethnic groups, including Uyghurs, which is a clear violation of human rights and privacy. The technology's development and use for such purposes directly relate to harm (violation of rights). Although Alibaba claims to have removed these features and does not allow such use, the fact that the technology was capable and potentially used for this purpose constitutes an AI Incident. The harm is linked to the AI system's use and development, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Chinese Firms Suspected of Developing Technology to Identify Uyghurs; Huawei Overseas Executive…

2020-12-18
iask.ca
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition software with ethnic identification capabilities) developed and tested by major Chinese tech companies. The AI's use or potential use in identifying and monitoring Uyghur people directly contributes to violations of human rights, including ethnic profiling and enabling state surveillance linked to alleged abuses such as forced detention and repression. The harm to a vulnerable minority group is clear and ongoing, meeting the criteria for an AI Incident. The event is not merely a potential risk but documents actual development and testing of such AI systems with real-world implications.

Accused of Targeting a Specific Ethnic Group, Alibaba Issues a Clarification

2020-12-19
Duowei News
Why's our monitor labelling this an incident or hazard?
An AI facial recognition system was developed and tested with the capability to identify specific ethnic groups, including Uyghurs, which is a sensitive and high-risk application given the known human rights issues in Xinjiang. Although Alibaba states the technology was only used in testing and not deployed, the mere development and existence of such a system that can identify ethnic minorities poses a credible risk of human rights violations if used. There is no report of actual harm occurring from the use of this system, so it does not meet the criteria for an AI Incident. The event is not merely complementary information because it highlights a credible risk of harm from the AI system's capabilities. Therefore, the classification is AI Hazard.

China: Alibaba Software Accused of Targeting the Uyghurs

2020-12-17
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and used to identify Uyghurs, which is linked to human rights violations given the context of surveillance and repression in Xinjiang. The AI system's use directly leads to a breach of fundamental rights, fitting the definition of an AI Incident under violations of human rights.

In China, Tech Giant Alibaba Accused of Seeking to Target the Uyghurs

2020-12-17
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and tested by Alibaba to identify Uyghurs, a minority group under heavy surveillance and repression in China. The use of AI for ethnic identification in this context is directly connected to violations of human rights and harm to communities, as documented by international experts and reports of mass internment and surveillance. Even if the system was only tested and not deployed, the development and intent to use such AI in this harmful context meets the criteria for an AI Incident because it directly or indirectly leads to violations of fundamental rights and harm to a vulnerable community.

An Alibaba Program Accused of Targeting Them

2020-12-17
Le Journal de Montreal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and used by Alibaba to identify Uyghurs, a minority group subject to severe state surveillance and alleged human rights abuses. The AI system's role in enabling targeted surveillance and identification of this group is directly linked to violations of human rights, fulfilling the criteria for an AI Incident. The article describes actual use or testing of the system, not just potential future harm, and the harm is significant and clearly articulated (human rights violations).

Uyghurs: After Huawei, Alibaba Accused of Serving Chinese Repression

2020-12-17
LesEchos.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition software) developed by Alibaba that can identify Uyghurs, an ethnic minority subject to mass repression by Chinese authorities. The system's use or potential use to alert authorities or monitor Uyghurs directly relates to violations of human rights and fundamental rights, fulfilling the criteria for an AI Incident. The harm (mass internment, forced labor, repression) is ongoing and the AI system's role in enabling or facilitating this harm is pivotal, even if the company denies deployment beyond testing. Therefore, this event qualifies as an AI Incident due to the direct or indirect contribution of the AI system to serious human rights violations.

China: Alibaba Reportedly Built Software Capable of Identifying Uyghurs

2020-12-17
Capital.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) developed by Alibaba to identify Uyghurs. The use of this AI system is connected to the Chinese government's security policies in Xinjiang, which include mass surveillance and alleged human rights abuses against Uyghurs. These actions constitute violations of human rights and harm to communities. Even if the software was only used experimentally, the development and potential deployment of such AI for ethnic identification in a repressive context meets the criteria for an AI Incident due to the direct or indirect role of the AI system in causing harm.

China: Alibaba Software Accused of Targeting the Uyghurs

2020-12-17
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (facial recognition software) developed by Alibaba that can identify Uyghurs, which is an AI system by definition. The system's development and testing in a context of heavy surveillance and repression of a minority group plausibly could lead to violations of human rights and harm to communities, fulfilling the criteria for an AI Hazard. There is no evidence in the article that the system has been deployed or caused direct harm, so it does not qualify as an AI Incident. The involvement of AI in ethnic identification for surveillance purposes in a high-risk context justifies classification as an AI Hazard rather than Complementary Information or Unrelated.

China: Alibaba Software Accused of Monitoring the Uyghur Minority

2020-12-17
RTL.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed by Alibaba that can identify Uyghurs, a minority group subject to repression. The use or potential use of this AI system for ethnic surveillance and repression constitutes a violation of human rights, fulfilling the criteria for harm under the AI Incident definition. Even if the feature was only tested, the development and promotion of such technology in this context is directly linked to harm. The article describes realized or ongoing harm through surveillance and repression policies in Xinjiang, with AI playing a pivotal role. Therefore, this is classified as an AI Incident.

Uyghur Facial Recognition: Alibaba Under Fire

2020-12-17
La Nouvelle Tribune
Why's our monitor labelling this an incident or hazard?
An AI facial recognition system designed to identify ethnic minorities, especially in a politically sensitive and repressive context like Xinjiang, poses a credible risk of human rights violations. Although Alibaba states the system was only tested and not used operationally, the mere development and potential deployment of such technology can plausibly lead to harm, such as ethnic profiling, discrimination, and enabling state surveillance. Since no direct harm is reported as having occurred yet, but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident.

Alibaba Facial Recognition Software Accused of Targeting the Uyghurs

2020-12-17
7sur7
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) developed by Alibaba to detect Uyghurs and other minorities. The context involves serious human rights concerns, including mass internment and surveillance. Even though the software was only used experimentally, the intended use in this repressive context plausibly leads to human rights violations, fitting the definition of an AI Hazard. There is no indication that harm has yet occurred directly from this software's deployment, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it highlights a credible risk of harm from the AI system's use.

China: Alibaba Software Accused of Targeting the Uyghurs

2020-12-17
RTL Info
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) developed by Alibaba capable of identifying Uyghurs, an ethnic minority subject to severe state surveillance and alleged human rights abuses. The software's development and testing for ethnic identification directly relate to potential violations of human rights (definition c). Although Alibaba states the feature was only used in testing and not deployed, the mere existence and potential deployment of such technology in this context pose a credible risk of harm. No direct harm from this specific software is reported, so it does not meet the threshold for an AI Incident. Instead, it fits the definition of an AI Hazard, as the AI system's development and potential use could plausibly lead to human rights violations. The article's context about surveillance and repression in Xinjiang supports this assessment.

Alibaba's Uighur Facial Recognition Feature Exposed, Drawing Condemnation

2020-12-18
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed by Alibaba that can identify Uighur individuals, an ethnic minority subject to government surveillance and repression. The AI system's use or potential use in targeting this group constitutes a violation of human rights, fulfilling the criteria for an AI Incident. Although Alibaba states the feature was only for testing and not used by customers, the development and existence of such a system, linked to ethnic profiling and to content moderation that could lead to harm, is sufficient to classify this as an AI Incident. The article describes realized harm in the form of surveillance and potential suppression, caused directly or indirectly by the AI system.

Alibaba's 'Uighur' Facial Recognition Feature Draws Controversy

2020-12-18
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition software) developed by Alibaba that can identify Uighur individuals, an ethnic minority subject to government surveillance and repression. The system's capability to detect Uighurs and trigger content review or removal directly implicates it in violations of human rights and ethnic discrimination. Despite Alibaba's statement that the feature was only for testing and not used by customers, the development and potential deployment of such technology in a repressive context constitutes realized harm. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of fundamental rights and harms to a community (Uighurs).

Alibaba Has Facial Recognition Technology That Identifies Uighurs | Republika Online

2020-12-18
Republika Online
Why's our monitor labelling this an incident or hazard?
The AI system involved is a facial recognition technology capable of identifying Uighur individuals, which is explicitly mentioned. The system's use in content moderation that can detect and remove videos based on ethnic identity directly relates to violations of human rights, specifically targeting a minority group facing documented abuses. The harm is realized or ongoing, as the technology enables surveillance and potential suppression of Uighur individuals' content. Alibaba's acknowledgment of the feature and its removal does not negate the fact that the system was developed and used in a way that leads to harm. This meets the criteria for an AI Incident under violations of human rights (c).

Alibaba's Uighur Facial Recognition Feature Draws Controversy | Republika Online

2020-12-18
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and tested by Alibaba Cloud that can identify Uighur individuals. Although the feature is reportedly not in active use, its development and potential deployment pose a plausible risk of harm, specifically violations of human rights through ethnic targeting and surveillance. Since no actual harm has been reported as having occurred yet, but the potential for significant harm is credible and directly linked to the AI system's capabilities, this event fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the controversy and potential misuse rather than a realized harm incident.

Uighur Facial Recognition Feature Draws Controversy - International - koran.tempo.co

2020-12-18
koran.tempo.co
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI facial recognition system that identifies Uighur individuals and influences content moderation decisions, leading to potential suppression of Uighur voices and ethnic profiling. This use of AI directly results in violations of human rights, fulfilling the criteria for an AI Incident. The system's development and deployment have caused harm through ethnic discrimination and surveillance, which are serious breaches of fundamental rights. Alibaba's acknowledgment of the feature as a 'technology trial' does not negate the harm caused by its use. Hence, the event is classified as an AI Incident.

Revealed: Alibaba Used Uighur Facial Recognition Software

2020-12-19
SINDOnews.com
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system capable of identifying individuals' ethnicity, specifically Uighurs. Its use in content moderation to detect and potentially remove videos featuring Uighurs directly implicates violations of human rights and fundamental rights, fulfilling the harm criteria (c). The report indicates that this system has been actively used, not merely a potential risk, thus qualifying as an AI Incident rather than a hazard or complementary information. Alibaba's statement that the feature was a test and not intended for customers does not negate the fact that the system was developed and used, causing harm.

Accused of Helping China Surveil the Uighur Minority, Alibaba Responds

2020-12-22
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (facial recognition by Alibaba Cloud) explicitly designed to identify Uighur individuals, an ethnic minority targeted by the Chinese government for repression. The AI system's use in content moderation to suppress Uighur presence online constitutes a violation of human rights and harm to communities. The involvement of AI in enabling state surveillance and censorship that represses a minority group meets the criteria for an AI Incident under the OECD framework, as the harm is realized or ongoing. Alibaba's acknowledgment of the technology's development further confirms AI involvement.

Alibaba's New Feature Draws Condemnation

2020-12-21
krjogja.com
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition software) is explicitly involved, used to detect ethnicity, which is a sensitive and discriminatory application. The use of this AI system has directly led to harm in the form of ethnic discrimination and potential violation of human rights, as it targets a minority group for content removal or review. This fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights.

Alibaba's "Uighur" Facial Recognition Feature Draws Controversy | Koran Jakarta

2020-12-18
koranmediajakarta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) developed and used by Alibaba to identify Uighur individuals, which is linked to surveillance and potential repression of this ethnic group. This constitutes a violation of human rights (definition c) as the AI system's use directly or indirectly leads to harm against a vulnerable community. The article details the system's capabilities and its application in content moderation that targets Uighur users, fulfilling the criteria for an AI Incident. Although Alibaba states the feature was only for testing, the controversy and potential for harm are realized given the system's design and context of use. Hence, the classification as AI Incident is appropriate.

Alibaba 'dismayed' by its cloud unit's ethnicity detection algorithm

2020-12-18
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition algorithm) developed by Alibaba Cloud that can identify ethnicity, including Uyghurs. The use or potential use of this technology for ethnic profiling and surveillance constitutes a violation of human rights and fundamental rights protections. Although Alibaba claims the technology was not deployed, the development and testing of a system with this capability is sufficient to classify this as an AI Incident, because the AI system plays a pivotal role in enabling ethnic profiling, which is a serious harm. The technology's demonstrated capability, together with its potential misuse for discriminatory surveillance, amounts to a direct violation of rights. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Alibaba says it won't allow its tech to target, identify ethnic groups

2020-12-18
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (facial recognition technology) that was capable of identifying ethnic groups, which can lead to violations of human rights. Although Alibaba has taken steps to remove this feature, the initial existence and potential use of such technology for ethnic identification constitutes an AI Incident due to the direct link to possible human rights violations. The company's statement and corrective action are responses to this incident but do not negate the fact that the AI system's prior capability posed a real harm.

Alibaba says it won't allow its tech to target, identify ethnic groups

2020-12-18
Reuters India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) that was capable of identifying ethnic groups, which can lead to violations of human rights (harm category c). Although the company has taken steps to remove this feature, the initial development and potential use of such technology for ethnic targeting constitutes an AI Incident due to the direct link to possible human rights violations. The company's response is noted but does not negate the incident classification.

Alibaba admits it built facial-recognition-as-a-service to detect oppressed Uyghur minority in China

2020-12-18
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The AI system explicitly involves facial recognition technology with ethnicity as an attribute, used for tagging and potentially targeting Uyghur individuals, a minority group subject to oppression and internment by the Chinese government. Despite the company's claim that the system was only tested and never deployed, the development and offering of such technology inherently violates human rights and legal protections against ethnic profiling. This meets the criteria for an AI Incident because the AI system's development and intended use directly relate to violations of human rights and discrimination, even if actual deployment is denied. The context and credible reports confirm the AI system's pivotal role in enabling such harms.

Alibaba says it won't allow its tech to target, identify ethnic groups

2020-12-18
FashionNetwork.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (facial recognition technology) that was capable of identifying ethnic groups, which raises concerns about violations of human rights and potential misuse for ethnic targeting. Although the company has taken steps to eliminate ethnic tagging, the initial development and potential use of such technology constitutes an AI Incident due to the direct link to possible human rights violations. The harm is related to the potential for discrimination and targeting of ethnic minorities, which is a violation of fundamental rights.

Alibaba 'dismayed' by its cloud unit's ethnicity detection algorithm (Rita Liao/TechCrunch)

2020-12-18
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The facial recognition algorithm developed by Alibaba Cloud explicitly identifies ethnicity, including Uyghurs, which is used in a context associated with human rights abuses by the Chinese government. The AI system's development and use directly contribute to violations of fundamental rights, fulfilling the criteria for an AI Incident under the OECD framework. The harm is realized and ongoing, not merely potential, as it supports profiling that leads to discrimination and repression of ethnic minorities.

Alibaba 'dismayed' by its cloud unit's ethnicity detection algorithm - RocketNews

2020-12-18
RocketNews | Top News Stories From Around the Globe
Why's our monitor labelling this an incident or hazard?
The AI system in question is a facial recognition algorithm capable of identifying ethnicity, which is explicitly mentioned. Its development and testing for ethnicity detection directly relate to profiling a vulnerable ethnic group, the Uyghurs, in a context known for human rights abuses. This constitutes a violation of human rights and fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized or ongoing due to the use of such technology by authorities for profiling and repression. Therefore, this event is classified as an AI Incident.

Alibaba 'dismayed' by its cloud unit's ethnicity detection algorithm.

2020-12-18
sgtalk.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition algorithm) developed and used to identify ethnicity, which directly relates to violations of human rights and fundamental rights by enabling profiling of a vulnerable ethnic group. This constitutes an AI Incident as the AI system's use has directly led to harm in terms of rights violations and societal harm to the Uyghur community.

Code Identifying Uyghur Turks Found in E-commerce Giant Alibaba's Facial Recognition System

2020-12-17
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Alibaba's facial recognition technology with ethnic classification capabilities) whose use directly leads to harm by enabling ethnic profiling and surveillance of Uygur Turks, a vulnerable minority group. This constitutes a violation of human rights and breaches obligations to protect fundamental rights. The AI system's development and deployment for this purpose is central to the harm described. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Is Alibaba Using Code That Specifically Singles Out Uyghur Turks?

2020-12-18
Hürriyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Alibaba's facial recognition software) explicitly designed to identify Uygur Turks, an ethnic minority, and to alert authorities for potential content removal or review. This use of AI directly leads to violations of human rights, specifically ethnic discrimination and surveillance, which are harms under the OECD framework. The article describes realized harm through the system's deployment and its role in ethnic profiling, thus qualifying as an AI Incident rather than a hazard or complementary information.

Chinese e-commerce company Alibaba revealed to be tracking Uyghur Turks - Yeni Akit

2020-12-17
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (facial recognition with ethnic classification) is explicitly mentioned. The system's use to identify and monitor Uygur Turks constitutes a violation of human rights (harm category c). The article describes the system actively functioning to detect and flag Uygur individuals, which directly leads to harm through enabling surveillance and potential repression. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and realized human rights violations.

Claim of 'code identifying Uyghur Turks' in e-commerce giant Alibaba's facial recognition system

2020-12-17
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Alibaba's Cloud Shield facial recognition system) that uses AI to identify and categorize individuals by ethnicity, specifically targeting Uygur Turks. This AI-enabled surveillance leads to direct harm by enabling ethnic profiling and potential human rights abuses, which are violations of fundamental rights under applicable law. The system's use in monitoring and flagging Uygur individuals for content removal or review constitutes a breach of rights and is a clear example of harm caused by AI use. Therefore, this event qualifies as an AI Incident.

Alibaba Alleged to Have Tagged Uyghur Turks

2020-12-18
Webtekno
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition with AI-supported algorithms) used to identify and label Uygur Turks, an ethnic minority, which is a direct violation of human rights. The system's use for ethnic profiling and surveillance is a clear harm to the community targeted. The involvement of AI in development and use for this purpose meets the criteria for an AI Incident. The removal of the code after exposure does not negate the fact that harm occurred or was intended. Hence, the event is classified as an AI Incident.

They did this to the Uyghur Turks too; a statement is awaited from e-commerce giant Alibaba

2020-12-17
Yeni Çağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition and classification to identify Uygur Turks, an ethnic minority, and trigger content moderation actions. This use directly contributes to human rights violations, as it supports surveillance and discrimination against a vulnerable group already subject to repression. The system's role is pivotal in enabling these harms, fulfilling the criteria for an AI Incident under violations of human rights. The article reports realized harm through the system's deployment and its discriminatory targeting, not just potential harm.

China: 'Alibaba' accused of involvement in the repression of the Uyghurs

2020-12-17
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition technology) developed and offered by Alibaba that identifies Uyghur individuals, a minority group subject to repression and human rights abuses in China. The AI system is used to flag Uyghur persons to authorities, enabling surveillance and potential detention, which constitutes a violation of human rights. The harm is realized and ongoing, not merely potential. Therefore, this event meets the criteria for an AI Incident due to the AI system's direct involvement in human rights violations.

Alibaba facial recognition software targets Uyghurs in China

2020-12-17
Boursier.com
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system explicitly mentioned as capable of identifying Uyghurs and flagging their content. This use directly relates to human rights violations, as it facilitates surveillance and potential repression of a minority group. The report indicates the system was operational and used for these purposes; even if Alibaba claims it was only in testing, the harm or risk of harm to rights is realized or imminent. Therefore, this qualifies as an AI Incident under the definition of violations of human rights caused by the use of an AI system.

Alibaba facial recognition software targets Uyghurs in China - Reuters News

2020-12-17
L'usine nouvelle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition software) capable of identifying Uyghurs, an ethnic minority group, and its use in monitoring and potentially censoring their online content. This constitutes a violation of human rights and harm to communities, as it facilitates ethnic discrimination and suppression. The AI system's deployment in this context directly leads to harm, fulfilling the criteria for an AI Incident. Although Alibaba claims the feature was only used in testing, the documented presence and potential use of this AI system in surveillance and content moderation targeting Uyghurs is sufficient to classify this as an AI Incident due to the realized or ongoing harm implied by the report.

Alibaba facial recognition software targets Uyghurs in China | Zone bourse

2020-12-17
zonebourse
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system capable of identifying ethnic characteristics, specifically targeting Uyghurs. Its use in content moderation to detect and flag Uyghur individuals for review or removal directly relates to violations of human rights, as documented by credible reports of repression against Uyghurs in China. The AI system's role in enabling or facilitating this surveillance and censorship constitutes an indirect but pivotal factor in harm to a vulnerable group. Therefore, this event meets the criteria for an AI Incident due to the realized or ongoing harm linked to the AI system's use.

Alibaba's facial recognition technology specifically picks out the Uyghur minority - News 24

2020-12-17
News 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition technology) developed and used by Alibaba that can identify Uyghurs, a minority group subject to alleged human rights abuses. The AI system's use in content moderation to detect and potentially suppress Uyghur individuals' live videos constitutes a violation of human rights (definition c). The system's role is pivotal in enabling targeted surveillance and censorship, which are forms of harm to communities and rights violations. Despite the claim that the feature was only in testing and has since been removed, the report indicates actual use; given the context of repression, the harm is realized or ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Alibaba developed facial recognition system capable of identifying Uyghurs, US research firm says

2020-12-17
JP
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) developed and used by Alibaba to identify Uyghur individuals, a minority group subject to documented human rights abuses. The AI system's use in monitoring and potentially censoring Uyghur content directly implicates it in violations of human rights and fundamental rights protections. The report indicates the system is operational at least in a test environment and has been used to flag content, which constitutes realized harm or at least direct involvement in harm. Therefore, this is classified as an AI Incident due to the direct or indirect harm caused by the AI system's use in ethnic surveillance and repression.

Alibaba reportedly offered a 'facial recognition cloud service that identifies Uyghurs'

2020-12-18
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) explicitly designed to identify Uyghur individuals. The use of this AI system is linked to ongoing human rights abuses, including surveillance, forced detention, and repression of Uyghurs in China. The AI system's development and potential deployment directly or indirectly contribute to violations of fundamental rights, fulfilling the criteria for an AI Incident under the OECD framework. The harm is realized or ongoing given the documented repression and the AI system's role in facilitating it. The event is not merely a potential risk or complementary information but a clear case of AI-enabled harm.

Alibaba says it will not tolerate ethnic identification using its technology, following US firm's findings

2020-12-18
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The event centers on AI facial recognition technology that can identify ethnic minorities, which is directly linked to potential violations of human rights (a recognized AI Incident harm category). Although Alibaba denies intentional use for ethnic targeting and has taken remedial action, the initial development and potential use of such technology for ethnic identification constitutes an AI Incident due to the direct link to human rights concerns. The company's response is part of the incident context but does not negate the incident classification.

China's Alibaba also provided facial recognition technology that identifies Uyghurs, US media reports

2020-12-17
afpbb.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions an AI system (facial recognition software) used to identify Uyghur people, a minority group subject to state repression. The use of this AI system contributes to violations of human rights, fulfilling the criteria for harm (c). The involvement of Alibaba and Huawei in providing or testing such technology for surveillance purposes directly or indirectly leads to harm through enabling state repression. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Challenging the 'undisputed champion' Pocketalk: the scenario Chinese company iFLYTEK, on the US sanctions list, envisions for Japan

2020-12-18
ハフポスト
Why's our monitor labelling this an incident or hazard?
The article describes AI systems (voice recognition and voiceprint identification) developed and used by iFLYTEK, which is sanctioned due to alleged human rights abuses. While the AI technology is used in China for law enforcement, including potentially repressive purposes, the article does not report any realized harm or incident in Japan or elsewhere caused by these AI systems. The concerns about data leakage and political issues are potential risks but not confirmed incidents. Therefore, this is best classified as Complementary Information, providing context on AI development, geopolitical risks, and governance challenges rather than reporting a specific AI Incident or AI Hazard.