
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The opposition parties Die Linke and Die Grünen in Saxony, Germany, have raised serious concerns about a proposed police law that would enable AI-based video surveillance and biometric analysis. Experts warn of potential constitutional violations and threats to civil liberties, and highlight the uncertain legal consequences of deploying AI systems in policing.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned in the context of biometric matching, AI-based video surveillance, and automated recognition technologies. The concerns raised relate to potential violations of rights and freedoms, which would constitute harm if realized. Because the law is still under discussion and has not yet been enacted, no harm has occurred; the situation instead represents a plausible future risk of harm from the use of AI in policing. It therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, because AI is central both to the debate and to the potential harm.[AI generated]