
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The Metropolitan Police in London used Palantir's AI tool to analyze internal data, uncovering widespread misconduct, corruption, and criminality among hundreds of officers. The AI-led investigation resulted in arrests and disciplinary actions for offenses including fraud, sexual assault, and abuse of authority, prompting consideration of expanded AI use in future policing.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Palantir's software) deployed by the Met Police to detect rule-breaking and criminal behavior among its own officers. The system's outputs directly led to investigations, arrests, and disciplinary action, establishing a causal link between the AI's use and realized harm: legal violations and damage to public trust and institutional integrity. This fits the definition of an AI Incident, in which the use of an AI system directly leads to harm.[AI generated]