Paris Authorities Authorize AI Video Surveillance Trials Ahead of Olympics

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Paris police authorized SNCF and RATP to conduct large-scale trials of AI-powered video surveillance during major events, including a football match and a concert. The Cityvision AI system analyzes live camera feeds for security threats. While no harm has occurred, the deployment raises concerns about privacy and potential rights violations. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves an AI system—algorithmic video surveillance analyzing real-time camera feeds to detect specific events. While no harm or rights violations are reported, the technology's use in public surveillance inherently carries plausible risks of privacy infringement and potential misuse. The explicit exclusion of facial recognition reduces some risks but does not eliminate the hazard. Since no actual harm has occurred or been documented, the event does not qualify as an AI Incident. It is not merely complementary information because the focus is on the deployment and testing of the AI system with potential implications. Hence, the classification as an AI Hazard is appropriate. [AI generated]
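The rationale above, like the per-article rationales below, applies one consistent triage: is an AI system involved, has harm actually occurred, and if not, is plausible harm (rather than mere context) the focus? As an illustration only, that logic can be sketched as a toy decision procedure — the function name and boolean flags are hypothetical, not the monitor's actual implementation:

```python
def classify_event(involves_ai: bool,
                   harm_realized: bool,
                   plausible_harm: bool,
                   deployment_is_focus: bool) -> str:
    """Toy triage mirroring the taxonomy used in these rationales."""
    if not involves_ai:
        return "Unrelated"            # no AI system in the event
    if harm_realized:
        return "AI Incident"          # harm has directly or indirectly occurred
    if plausible_harm and deployment_is_focus:
        return "AI Hazard"            # credible risk, deployment is the story
    return "Complementary Information"  # context, governance, or debate only
```

On this sketch, the present event (AI involved, no realized harm, plausible privacy risk, deployment as the focus) lands on "AI Hazard", matching the severity assigned below.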
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Democracy & human autonomy

Industries
Government, security, and defence
Mobility and autonomous vehicles
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights
Public interest
Psychological

Severity
AI hazard

Business function
Monitoring and quality control
ICT management and information security

AI system task
Recognition/object detection
Event/anomaly detection


Articles about this incident or hazard

Paris: SNCF and RATP authorized to trial algorithmic video surveillance

2024-04-17
BFMTV
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically algorithmic video surveillance analyzing real-time footage for security purposes. However, it only describes an authorized experiment and planned use, with no mention of any harm or incident caused by these systems. Since the event concerns the potential use of AI systems that could plausibly lead to harms such as privacy violations or misuse, but no harm has yet occurred, it fits the definition of Complementary Information. The article provides context on the deployment and regulatory framework of AI surveillance but does not report an AI Incident or AI Hazard.

Algorithmic video surveillance to undergo two new tests this weekend in the Paris region

2024-04-18
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (algorithmic video surveillance) in a real-world setting, which fits the definition of an AI system. However, the article only reports on authorized experiments and tests without any indication that the AI system has caused or contributed to any harm or incident. There is no mention of injury, rights violations, operational disruption, or other harms resulting from the AI's use. The concerns raised are about potential privacy issues and operational readiness, which are not described as realized harms. Therefore, this event does not qualify as an AI Incident or AI Hazard but rather as Complementary Information providing context on AI deployment, regulatory approval, and societal debate.

France: algorithmic video surveillance trialled again in Paris

2024-04-18
RFI
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system—algorithmic video surveillance analyzing real-time camera feeds to detect specific events. While no harm or rights violations are reported, the technology's use in public surveillance inherently carries plausible risks of privacy infringement and potential misuse. The explicit exclusion of facial recognition reduces some risks but does not eliminate the hazard. Since no actual harm has occurred or been documented, the event does not qualify as an AI Incident. It is not merely complementary information because the focus is on the deployment and testing of the AI system with potential implications. Hence, the classification as an AI Hazard is appropriate.

Ligue 1: What is the algorithmic video surveillance set to be trialled during PSG-OL?

2024-04-18
Ouest France
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (algorithmic video surveillance) being used in a real-world setting. However, there is no indication that the use of this AI system has directly or indirectly caused any harm or violation of rights so far. The article explicitly states that only technical tests have been conducted without resulting in actual arrests or incidents. Therefore, no AI Incident has occurred. The system's use could plausibly lead to future harms such as privacy violations or rights infringements, but the article does not report any such harm or credible risk materializing yet. Hence, this is best classified as Complementary Information, providing context on the deployment and experimentation of AI surveillance technology ahead of the Olympics.

Algorithmic surveillance authorized for SNCF and RATP in Paris this weekend

2024-04-18
Clubic.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (algorithmic video surveillance) actively deployed to analyze camera footage in public transport hubs and stations. However, the article does not report any actual harm or incident resulting from this use. Instead, it describes the planned or ongoing use of the system for security purposes. While there are potential privacy and human rights concerns associated with such surveillance, the article does not mention any realized violations or harms. Therefore, this event represents a plausible risk scenario where AI use could lead to harm (e.g., privacy violations, misuse, or errors leading to wrongful actions), but no harm has yet occurred or been reported. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

SNCF and RATP authorized to launch algorithmic surveillance cameras

2024-04-18
20minutes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithmic video surveillance systems (AI systems) authorized for experimental use. The use is planned and authorized but no harm or incident is reported. The potential for harm exists, such as privacy violations or misuse of surveillance data, but these are not realized in the article. Therefore, this event is best classified as an AI Hazard, as the development and use of these AI systems could plausibly lead to harms, but no direct or indirect harm has yet occurred.

[Podcasts] C'était pas dans l'After - Saturday 20 April 2024

2024-04-20
RMC SPORT
Why's our monitor labelling this an incident or hazard?
The use of algorithmic video surveillance implies the involvement of AI systems performing real-time analysis to detect potential security threats. While the article does not report any realized harm or incident resulting from this use, the deployment of such AI surveillance systems in public spaces could plausibly lead to harms such as violations of privacy rights or other human rights concerns. Therefore, this event represents an AI Hazard, as it could plausibly lead to an AI Incident involving rights violations or other harms, but no harm has yet been reported.

Algorithmic video surveillance to be tested by RATP and SNCF

2024-04-19
Les Numériques
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an algorithmic system (Cityvision) for video surveillance, which qualifies as an AI system under the definition. However, the event is about the authorized testing and deployment of this system, with no indication that any harm (such as injury, rights violations, or disruptions) has occurred or is occurring. The description focuses on the experimental use and legal authorization, with no reported incidents or harms resulting from the AI system's use. Therefore, this is not an AI Incident. It also does not describe a plausible future harm or risk scenario beyond the authorized testing, so it does not qualify as an AI Hazard. The article provides complementary information about the deployment and governance of AI surveillance technology, which is relevant for understanding the AI ecosystem and societal responses but does not report harm or risk of harm. Hence, the classification is Complementary Information.

PSG-OL: 700 Lyon supporters expected at the Parc... and algorithmic video surveillance tested

2024-04-20
RMC SPORT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithmic video surveillance (an AI system) for monitoring during a football match. While the system is deployed and operational, there is no indication of any injury, rights violation, or other harm occurring as a result. The use is experimental and authorized under a legal framework. Given the potential for such surveillance to lead to privacy or rights concerns, this qualifies as an AI Hazard rather than an Incident. It is not Complementary Information because the main focus is on the experimental deployment itself, not on responses or updates to prior incidents.

PSG-OL: a first for algorithmic video surveillance

2024-04-20
Foot Mercato : Info Transferts Football - Actu Foot Transfert
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the article mentions an algorithmic video surveillance program analyzing camera footage to detect crowd behavior. The system is being used (not malfunctioning or under development) to monitor and prevent potential security issues. However, there is no report of any actual harm, injury, rights violation, or disruption caused by the AI system. The event describes a preventive security measure with plausible future benefits and risks but no realized harm. Therefore, it qualifies as Complementary Information, providing context on AI deployment in public security without reporting an AI Incident or AI Hazard.

This cutting-edge and contested video surveillance tested in the Paris metro and train stations: where and when

2024-04-17
actu.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (algorithmic video surveillance) for real-time detection of security-related events. Although no direct harm has occurred yet, the system's use in public spaces with sensitive data collection and retention could plausibly lead to violations of human rights or societal harms (e.g., privacy infringement, surveillance overreach). Therefore, this event constitutes an AI Hazard, as the AI system's deployment could plausibly lead to harms, but no incident or realized harm is described in the article.

SNCF and RATP will monitor you using AI; Paris approves a trial

2024-04-18
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Cityvision') for algorithmic video surveillance, which qualifies as an AI system. The event involves the use of AI (use phase) to analyze surveillance footage. However, there is no indication that this use has directly or indirectly caused any harm such as injury, rights violations, or community harm. The system is being tested under legal authorization with data retention and privacy safeguards, and facial recognition is explicitly excluded. Therefore, no realized harm (AI Incident) is reported. Nonetheless, the deployment of AI surveillance systems with capabilities to track and analyze behavior could plausibly lead to human rights or privacy harms in the future if misused or expanded beyond the trial. Given the current description focuses on a controlled test without reported harm, the event is best classified as Complementary Information, as it provides context on AI use and governance responses rather than reporting an incident or hazard.

Algorithmic video surveillance: what is this technology being tested in Paris this weekend?

2024-04-17
CNEWS
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (algorithmic video surveillance) analyzing real-time camera footage to detect security threats, which fits the definition of an AI system. The event is about the experimental use of this system, with no reported harm or incident occurring so far. However, the deployment of such surveillance technology could plausibly lead to harms such as violations of privacy rights or other human rights, or misuse leading to harm to communities. Since no harm has yet materialized, but there is a credible risk, this qualifies as an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the experimental deployment itself, not on responses or updates to prior incidents. It is not Unrelated because the AI system is clearly involved.

Algorithmic video surveillance: trials continue in Paris this weekend

2024-04-20
La Croix
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system for video surveillance and behavior detection, which is currently being tested experimentally. There is no indication that the AI system has caused any direct or indirect harm yet. The concerns raised relate to the system's reliability and potential for false positives, which could plausibly lead to harm in the future if the system is deployed widely without sufficient accuracy. Therefore, this situation constitutes an AI Hazard, as the AI system's use could plausibly lead to harms such as violations of rights or public safety issues, but no harm has been reported at this stage. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI.

SNCF, RATP: you will be monitored and analyzed by an AI all weekend

2024-04-18
Toms Guide : actualités high-tech et logiciels
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the surveillance algorithm analyzing video footage in real time. The use of this AI system is authorized and actively deployed, which constitutes use of AI. While there are concerns about potential violations of individual rights and privacy, the article does not describe any actual harm or rights violations occurring yet. The event is about the deployment and legal authorization of AI surveillance technology and the societal debate around it, rather than a realized incident or a direct harm. Therefore, it fits best as Complementary Information, providing context on AI use, governance, and societal responses related to AI surveillance during a major event.

The cameras of the future trained on OL supporters on Sunday

2024-04-18
foot01.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the article mentions the use of an AI algorithm analyzing video feeds in real time to detect specific behaviors. The use of this system is authorized and deployed, implying active use rather than mere development. While no direct harm is reported in the article, the deployment of AI surveillance with real-time behavioral analysis raises significant concerns about potential violations of human rights, such as privacy and freedom of assembly, especially given the targeting of specific supporter groups. Since the article does not report actual harm or incidents resulting from this deployment but describes a new use of AI surveillance technology that could plausibly lead to rights violations or other harms, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the deployment and use of the AI system with potential for harm, not on responses or updates to prior events.

Paris continues testing algorithmic video surveillance ahead of the Olympic Games

2024-04-19
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (video surveillance with AI algorithms) and its deployment in real-world settings. However, there is no indication that the AI system has caused any injury, rights violations, disruption, or other harms yet. The article discusses the potential for the system to improve security and the legal framework supporting its use. Since no harm has occurred but the system's use could plausibly lead to incidents (e.g., privacy concerns, misuse, or errors in detection), this qualifies as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI system is central to the event.

PSG-OL: supporters will be monitored with this controversial new system

2024-04-18
Lyon Capitale
Why's our monitor labelling this an incident or hazard?
The use of algorithmic video surveillance cameras constitutes the deployment of an AI system for monitoring and security purposes. However, the article only describes the authorized experimental use and planned testing of this technology, with no indication that any harm has occurred or that the system malfunctioned. The event is about the potential use and regulatory framework for this AI system, implying a plausible future risk but no realized harm yet. Therefore, it qualifies as an AI Hazard, as the AI system's use could plausibly lead to harms such as privacy violations or other security-related harms, but no direct or indirect harm is reported at this stage.

Olympic and Paralympic Games: Paris prefecture authorizes two new algorithmic video surveillance (VSA) trials - Next

2024-04-19
Next
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (algorithmic video surveillance software Cityvision) in public surveillance experiments. While no direct or indirect harm is reported, the deployment of AI surveillance systems inherently carries plausible risks of privacy violations and potential rights infringements. Since the event involves the use of AI systems in a context where harm could plausibly occur but has not yet materialized, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the event, and it is not Complementary Information as it does not update or respond to a prior incident but reports a new experimental deployment with potential risks.

Surveillance cameras and artificial intelligence: first full-scale tests ahead of the Paris Olympics

2024-04-21
TF1 INFO
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the surveillance cameras are coupled with AI algorithms analyzing multiple behavioral patterns in real time. The AI's use is experimental but active, with outputs used to alert human operators. While no direct harm (such as injury or rights violations) is reported as having occurred yet, the deployment raises plausible risks of harm to human rights and privacy, as highlighted by Amnesty International. The event thus represents an AI Hazard, since the AI system's use could plausibly lead to violations of rights or other harms, especially given the concerns about surveillance normalization and potential future misuse. There is no indication of realized harm at this stage, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the AI system's deployment and associated risks.