
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A major data breach of China's AI-driven surveillance systems, including facial recognition and Covid tracking apps, exposed personal information of up to one billion citizens. The incident sparked public resistance, lawsuits, and concerns over privacy violations, fraud risks, and inadequate safeguards against misuse of AI-collected data.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition and biometric data collection by Chinese authorities, both of which rely on AI systems. The large-scale breach of a police database containing sensitive personal information directly harms individuals by exposing them to fraud, extortion, and privacy violations, meeting the criteria for harm to persons and violation of rights. The breach stemmed from the government's failure to secure AI-enabled surveillance data, indicating malfunction or misuse of AI systems. Because the harm is realized rather than merely potential, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]