
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Chinese AI company SenseTime's facial recognition technology was used to surveil Uyghur populations, leading to U.S. sanctions over human rights violations. A regulatory adjustment left only a subsidiary on the U.S. Entity List, enabling the parent company to largely avoid restrictions and continue operations, raising concerns about ongoing harm.[AI generated]
Why is our monitor labelling this an incident or hazard?
SenseTime's AI facial recognition system has been used by the Chinese government to surveil Uyghur populations, a violation of human rights that prompted U.S. sanctions. The article discusses how the company's subsidiary remains on the Entity List while the parent company does not, allowing business to continue with fewer restrictions. The AI system's use has therefore directly led to harm (human rights violations), and the circumvention of sanctions facilitates ongoing harm. The system's involvement in human rights abuses, combined with the regulatory failure to fully restrict the company, meets the criteria for an AI Incident under the OECD framework.[AI generated]