
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The Chinese government is using AI, including large language models and surveillance systems, to intensify censorship, monitor citizens, and suppress dissent, especially among ethnic minorities. These AI tools automate content control, enable predictive policing, and are being developed in minority languages, leading to widespread human rights violations; the tools themselves may also be exported abroad.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems, explicitly described as large language models and AI censorship tools, developed and used by the Chinese government and tech companies to monitor and control minority language communications. The use of these AI systems directly leads to violations of human rights, including surveillance, censorship, and the suppression of minority groups' communications, fulfilling the criteria for harm under the AI Incident definition. The article details ongoing use and impact, not merely potential risks, confirming it as an AI Incident rather than a hazard or complementary information.[AI generated]