
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
West Midlands Police used Microsoft's Copilot AI tool to draft a report containing false information, which led to Maccabi Tel Aviv fans being banned from a football match in Birmingham. The AI-generated inaccuracies prompted a public apology, suspension of the AI tool, and an official review into the incident.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system (Microsoft Copilot) whose malfunction (a hallucination) introduced inaccuracies into an official police report. That report informed a decision that harmed a community: Maccabi Tel Aviv supporters were banned from attending a match on the basis of false information, which constitutes harm to a community and a breach of trust. The police chief's apology and the suspension of the AI tool confirm the system's role in the event. It therefore qualifies as an AI incident because the AI system's malfunction directly led to harm.[AI generated]