
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Police use of facial recognition AI has resulted in multiple wrongful arrests, particularly of Black individuals, and widespread misidentifications, raising concerns about racial bias and civil rights violations. Despite these harms, law enforcement agencies in the UK and US continue expanding its use, prompting criticism from rights advocates.[AI generated]
Why is our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by police to identify suspects. Its deployment has already led to privacy rights violations and, according to a court ruling, breaches of equalities law, indicating realised harm. Concerns about intrusive surveillance and chilling effects on protest rights further confirm harm to communities and fundamental rights. This event therefore qualifies as an AI Incident, as the direct involvement of an AI system caused violations of human rights and harm to communities.[AI generated]