Mercadona Fined for Unlawful Use of Facial Recognition AI in Stores


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Spanish supermarket chain Mercadona was fined 2.5 million euros by the Spanish Data Protection Agency (AEPD) for deploying facial recognition AI in dozens of stores to identify individuals with restraining orders. The system violated data protection laws, leading to its removal and the termination of the pilot project.[AI generated]

Why is our monitor labelling this an incident or hazard?

The event describes the deployment and use of a facial recognition AI system that processed personal data without adequate legal basis or safeguards, leading to a violation of data protection laws and fundamental rights. The resulting fine and legal prohibition confirm that harm in the form of rights violations occurred. The AI system's use was central to the incident, fulfilling the criteria for an AI Incident under the framework, as it directly led to a breach of obligations intended to protect fundamental rights.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Accountability
Transparency & explainability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


Facial recognition cameras cost Mercadona a fine of 2.5 million euros

2021-07-23
Genbeta

Supermarket that used facial recognition cameras for security is fined

2021-07-23
Merca2.0 Magazine
Why is our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition technology) used in a real-world setting. The use of this AI system directly led to a violation of privacy rights and data protection laws, resulting in a regulatory fine. The harm here is a breach of legal obligations intended to protect fundamental rights, specifically privacy. The incident is not hypothetical or potential but has already occurred and caused harm, meeting the criteria for an AI Incident. The cancellation of the pilot and the fine are consequences of this harm, not merely complementary information.

Mercadona pays a 2.5-million-euro sanction for its facial recognition cameras and will not appeal "due to the lack of legal clarity around this technology"

2021-07-23
Business Insider
Why is our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by Mercadona. The use of this AI system led to a legal sanction for violating data protection laws, which protect fundamental rights. The sanction and legal rulings confirm that harm in the form of rights violations has materialized. The AI system's deployment without clear legal basis for biometric data processing directly caused this harm. Hence, this is an AI Incident as the AI system's use directly led to a breach of fundamental rights and legal obligations.

Mercadona's facial recognition: the company pays a 2.5-million-euro sanction and halts the pilot project

2021-07-23
Maldita.es — Periodismo para que no te la cuelen
Why is our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) that processes sensitive biometric data. The system was used operationally, constituting use of the AI system. The deployment led to a regulatory sanction (a 2.5 million euro fine) due to violations of data protection laws, which protect fundamental rights. The system's inaccuracies and potential for false positives pose risks of wrongful identification and harm to individuals' rights. These factors meet the criteria for an AI Incident, as the AI system's use directly led to legal and rights-related harms. The event is not merely a potential risk or complementary information but a realized harm with regulatory consequences.

Mercadona will pay 2.5 million euros for the sanction imposed by the AEPD over its customer identification system

2021-07-23
Silicon
Why is our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial recognition technology) used for identification purposes. The use of this AI system led to a violation of data protection laws and privacy rights, which are fundamental human rights protected by law. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as evidenced by the regulatory sanction and court rulings. Therefore, this event qualifies as an AI Incident.