Unauthorized AI Facial Recognition at French Sporting Events

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Reports indicate that AI-driven facial recognition was used without consent during 48 races in France, scanning up to 300,000 individuals, including minors. Companies like PhotoRunning replaced manual processes with automated recognition, leading to significant violations of privacy rights and data protection laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the use of a facial recognition AI system in a way that breaches legal protections (GDPR), thus violating fundamental rights. The harm has already occurred: the recognition was used illegally, directly implicating the AI system's use in the rights violation. This therefore qualifies as an AI Incident due to the breach of legal and fundamental rights caused by the AI system's use.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability

Industries
Consumer services; Media, social platforms, and marketing

Affected stakeholders
General public; Children

Harm types
Human or fundamental rights

Severity
AI incident

Business function:
Sales; Monitoring and quality control

AI system task:
Recognition/object detection


Articles about this incident or hazard

Runners and spectators at the Bordeaux half-marathon were allegedly subjected to illegal facial recognition

2025-03-20
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The article describes the use of a facial recognition AI system in a way that breaches legal protections (GDPR), thus violating fundamental rights. The harm has already occurred: the recognition was used illegally, directly implicating the AI system's use in the rights violation. This therefore qualifies as an AI Incident due to the breach of legal and fundamental rights caused by the AI system's use.
Facial recognition used illegally at the SaintéLyon?

2025-03-20
Lyon Capitale
Why's our monitor labelling this an incident or hazard?
The event describes the use of facial recognition AI technology without consent, violating GDPR rules. This constitutes a breach of legal obligations protecting fundamental rights to privacy and data protection. The AI system's use directly leads to this violation, qualifying it as an AI Incident. The presence of minors and the large scale of data processed further emphasize the seriousness of the harm. Therefore, this is classified as an AI Incident due to realized harm involving rights violations caused by AI use.
SaintéLyon: facial recognition software used illegally on runners and spectators

2025-03-20
France 3 Hauts-de-France
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition software, which is an AI system, employed without proper consent, thus breaching GDPR and personal data rights. This constitutes a violation of human rights and legal obligations protecting privacy and data protection. The harm is realized as the biometric data of a large number of individuals were collected unlawfully, including minors, which is a significant breach. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's illegal use.
Bordeaux: the 13,000 half-marathon runners subjected to facial recognition, whether they want it or not

2025-03-20
actu.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of facial recognition AI technology by PhotoRunning to identify runners and spectators without their consent, which is a violation of data protection laws and participants' rights. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as the participants and spectators' biometric data are processed unlawfully, impacting their privacy rights. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing legal and rights violations.
Un monde de tech - Facial recognition: 300,000 athletes in France scanned without their knowledge

2025-03-20
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for facial recognition that scanned hundreds of thousands of people without their knowledge or consent, including minors, which is illegal under EU law. This unauthorized biometric data collection constitutes a violation of fundamental rights and data protection regulations, fulfilling the criteria for an AI Incident under the OECD framework. The AI system's use directly caused harm by infringing on privacy rights and breaching legal obligations, making this an AI Incident rather than a hazard or complementary information.
"It is clear that this was not transparent": the illegal use of facial recognition at the SaintéLyon draws reactions

2025-03-22
France 3 Hauts-de-France
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial recognition software) used in a way that directly leads to a violation of personal data rights and privacy, which is a breach of applicable law protecting fundamental rights. The use was illegal and non-transparent, affecting a large number of people including minors, constituting an AI Incident under the framework. The harm is realized, not just potential, as the data was collected and processed without proper consent, thus violating rights and legal frameworks.
Facial recognition software used illegally at dozens of races organized in France

2025-03-19
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) in a way that violates legal protections and fundamental rights, constituting a breach of obligations under applicable law. The harm is realized and significant, affecting hundreds of thousands of people, including minors, through illegal surveillance and data processing. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations caused by the AI system's use.
Facial recognition: 300,000 athletes scanned without their knowledge at races in France

2025-03-19
clubic.com
Why's our monitor labelling this an incident or hazard?
The article describes the deployment of an AI-powered facial recognition system to scan approximately 300,000 athletes during races without their explicit, informed, and freely given consent, which is required under GDPR. This constitutes a violation of fundamental rights concerning personal data and biometric information. The AI system's use directly caused this breach of legal obligations protecting individual rights. Therefore, this event qualifies as an AI Incident due to the realized harm of rights violations stemming from the AI system's use.
Running races: participants and public illegally subjected to facial recognition - Next

2025-03-19
Next
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of facial recognition AI technology to identify individuals at public sports events without their explicit, informed consent, which is illegal under GDPR. This unauthorized processing of biometric data constitutes a violation of fundamental rights and legal protections. The harm is realized as it affects a large number of people, including minors, and has led to complaints and regulatory scrutiny. Therefore, this qualifies as an AI Incident due to direct involvement of AI causing violations of rights and legal breaches.
Justice. SaintéLyon, SaintéGones... Facial recognition software used illegally on athletes and spectators?

2025-03-20
Le Progres
Why's our monitor labelling this an incident or hazard?
Facial recognition software is an AI system. The article suggests the software is used without proper legal basis, implying potential violations of privacy or data protection rights, which are human rights. Since no actual harm or legal ruling is reported, but the use is described as illegal or potentially illegal, this constitutes a plausible risk of harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The event is not unrelated because AI is clearly involved.
From casinos to marathons, facial recognition is gaining ground but continues to raise concerns

2025-03-20
France 3 Régions
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (facial recognition) that processes biometric data. The misuse at the marathon—collecting biometric data without consent—constitutes a breach of legal obligations protecting fundamental rights, specifically privacy and data protection rights. This breach has already occurred and caused harm to individuals' rights and privacy. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and legal obligations.
China strengthens security management for facial recognition

2025-03-21
china.org.cn/china.com.cn(中国网)
Why's our monitor labelling this an incident or hazard?
The article discusses new rules and principles for the use of facial recognition AI technology, focusing on security, privacy, and consent. There is no indication that an AI system has caused harm or malfunctioned, nor that harm has occurred or is imminent. Instead, the event is about regulatory measures to prevent potential harms and protect rights. Therefore, it is best classified as Complementary Information, as it provides governance context and societal response to AI-related risks rather than describing an AI Incident or Hazard.