Italian University Fined for Unlawful Use of Facial Recognition in Online Courses


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Italian Data Protection Authority (Garante) fined eCampus University €50,000 for unlawfully using facial recognition AI to verify student attendance in online courses. The university processed biometric data without a proper legal basis or a data protection impact assessment, violating privacy law and affecting hundreds of participants.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition) in an educational context. Its use led to a violation of legal protections for biometric data, which is treated as sensitive personal data under privacy law. The unlawful processing of biometric data constitutes a breach of fundamental rights and legal obligations. The harm here is a violation of rights (privacy and data protection), which fits the definition of an AI Incident, and the sanction and investigation confirm that harm has occurred due to the AI system's use. This event therefore qualifies as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


Garante Privacy sanctions eCampus over facial recognition in online training

2026-02-20
Il Sole 24 ORE

Privacy: Garante fines eCampus €50,000 over biometric data processing

2026-02-20
Borsa italiana
Why's our monitor labelling this an incident or hazard?
The facial recognition system qualifies as an AI system because it processes biometric data for identification purposes. Its use led to a violation of privacy rights and data protection laws, which falls under harm category (c) - violations of human rights or breach of legal obligations protecting fundamental rights. The incident is not merely potential but has materialized, as evidenced by the imposed fine and official sanction. Therefore, this event is classified as an AI Incident.

eCampus fined by the Garante Privacy over facial recognition

2026-02-20
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, facial recognition technology, used in an educational context. The use of this AI system led directly to a violation of legal obligations concerning biometric data processing, which is a breach of fundamental rights and applicable law. The harm here is a violation of privacy rights and illegal data processing, which fits the definition of an AI Incident under category (c): violations of human rights or breach of obligations under applicable law. The sanction and investigation confirm the harm has materialized, not just a potential risk. Therefore, this event is classified as an AI Incident.

Garante Privacy * Municipality of Pescara: "No to local police body cams, risk of data transfers to non-EU countries"

2026-02-20
Agenzia giornalistica Opinione
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems such as facial recognition biometric systems and body cams with data processing capabilities. The sanction against the university for illegal biometric data use constitutes an AI Incident because it involves realized harm (violation of privacy rights through unlawful biometric data processing). However, the article covers multiple topics, and its main narrative concerns regulatory actions, opinions, and policy developments rather than a single new incident. The body cam issue represents a potential risk (hazard), but no direct harm is reported, while the facial recognition misuse is a past incident now sanctioned. Given the article's broad scope and its focus on regulatory updates and enforcement, Complementary Information is the most appropriate classification.

Garante Privacy sanctions eCampus, halting facial recognition in online training

2026-02-20
Key4biz
Why's our monitor labelling this an incident or hazard?
The facial recognition system qualifies as an AI system because it processes biometric data to verify identity, a task involving AI-based pattern recognition. The incident stems from the use of this AI system without proper legal basis and safeguards, leading to violations of privacy rights and data protection laws, which are fundamental rights under applicable law. The harm is realized in the form of unlawful biometric data processing affecting many individuals. Therefore, this event meets the criteria of an AI Incident due to the AI system's use directly causing a breach of legal obligations protecting fundamental rights.