German Court Bans AI-Based Biometric Checks in Online Exams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A German court ruled that using AI-powered facial recognition for identity verification in online university exams violates GDPR by unlawfully processing biometric data. The court recognized psychological harm to a student and awarded compensation, establishing that such AI proctoring practices breach fundamental rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as a 'KI-gestützte Software' (AI-supported software) performing automated biometric facial recognition to verify exam takers' identities. The court found this processing unlawful under GDPR, constituting a violation of fundamental rights and causing immaterial harm (psychological distress). Since the AI system's use directly caused harm recognized by the court, this qualifies as an AI Incident under the framework, specifically a violation of human rights and immaterial harm to a person.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


Digitaler Durchblick: Ruling: biometric checks banned in online exams

2026-03-25
NWZ Online

Ruling: biometric checks banned in online exams

2026-03-25
Freie Presse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (biometric facial recognition software) used in online exam proctoring. The court ruled that the automated processing of biometric data for identity verification violated GDPR, constituting a breach of fundamental rights. The student suffered psychological harm recognized as immaterial damage, directly linked to the AI system's use. This fits the definition of an AI Incident because the AI system's use directly led to a violation of rights and harm to a person.

Ruling: biometric checks banned in online exams

2026-03-25
Volksstimme.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (biometric facial recognition software) used in online exam proctoring. The court ruled that the automated processing of biometric data for identity verification violates GDPR, constituting a breach of fundamental rights. The plaintiff suffered immaterial harm (psychological distress) due to this unlawful AI use. The AI system's use directly led to a violation of rights and harm, meeting the criteria for an AI Incident. The ruling and damages awarded confirm the harm has materialized, not just a potential risk.

Digitaler Durchblick: Ruling: biometric checks banned in online exams

2026-03-25
Trierischer Volksfreund. Die Zeitung für die Region Trier/Mosel
Why's our monitor labelling this an incident or hazard?
The article involves an AI system in the form of automated biometric verification technology used during online exams. The ruling highlights that such use constitutes processing of biometric data, which is prohibited under GDPR. This relates to a violation of legal obligations protecting fundamental rights (privacy and data protection). Since the event concerns a realized legal violation linked to AI system use, it qualifies as an AI Incident due to breach of applicable law protecting fundamental rights.

Digitaler Durchblick: Ruling: biometric checks banned in online exams

2026-03-25
Rhein-Zeitung
Why's our monitor labelling this an incident or hazard?
The system is explicitly described as using automated facial recognition to process biometric data, which clearly qualifies it as an AI system. Its use led directly to a violation of fundamental rights under GDPR (a breach of applicable law protecting fundamental rights) and caused psychological harm to the student. The court's decision confirms both the harm and the unlawfulness of the AI system's use. Hence, this event meets the criteria for an AI Incident because the AI system's use directly caused harm and legal violations.