
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A German court ruled that using AI-powered facial recognition to verify students' identities in online university exams violates the GDPR by unlawfully processing biometric data. The court recognized psychological harm to a student and awarded compensation, establishing that such AI proctoring practices breach fundamental rights.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system, explicitly described as 'KI-gestützte Software' (AI-supported software), performing automated biometric facial recognition to verify exam takers' identities. The court found this processing unlawful under the GDPR, constituting a violation of fundamental rights and causing immaterial harm (psychological distress). Because the AI system's use directly caused harm recognized by the court, this qualifies as an AI Incident under the framework, specifically a violation of human rights and immaterial harm to a person.[AI generated]