Santa Fe Launches AI-Powered Surveillance System with Facial Recognition

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The government of Santa Fe, Argentina, is investing over $32 million to deploy an AI-powered video surveillance system, adding 2,000 new cameras with facial recognition and license plate reading capabilities. The system aims to modernize public security but raises privacy and human rights concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI systems (facial recognition, license plate reading, AI-based video analysis) in a public surveillance context, which is known to carry risks of privacy violations and potential human rights concerns. However, the article only discusses the tender and planned deployment, with no indication of actual harm or incidents caused by the AI system so far. Given the potential for future harm from such surveillance systems, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard

Tender issued for Santa Fe's new AI video surveillance system: 2,000 cameras, facial identification, and license plate reading

2026-01-14
Uno Santa Fe
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (facial recognition, license plate reading, AI-based video analysis) in a public surveillance context, which is known to carry risks of privacy violations and potential human rights concerns. However, the article only discusses the tender and planned deployment, with no indication of actual harm or incidents caused by the AI system so far. Given the potential for future harm from such surveillance systems, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

AI security cameras to be added in the provincial capital

2026-01-14
Sin Mordaza
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (facial recognition, behavior analysis, AI alert systems) in public surveillance, which can plausibly lead to harms like violations of human rights or privacy if misused or malfunctioning. However, since the system is still in the procurement and planning phase with no reported incidents or harms, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the planned deployment and its potential implications, not on responses or updates to past events.

A "Google-style" search engine for the cameras: how the Lince system will work in Santa Fe

2026-01-15
Uno Santa Fe
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (BriefCam) used for video analysis and indexing, which qualifies as an AI system under the definitions. However, there is no mention of any harm caused or any malfunction. The system is being implemented with privacy considerations (no biometric identification) and is intended for lawful criminal investigation. The article mainly provides information about the system's capabilities, infrastructure, and operational plans, which fits the definition of Complementary Information. It does not report any realized or potential harm, nor does it warn of plausible future harm. Hence, the classification is Complementary Information.

The Lince system in Santa Fe: two thousand new cameras and artificial intelligence to strengthen security in the city

2026-01-15
Uno Santa Fe
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition, license plate reading, video analysis) in public security. However, there is no indication that the AI system has yet caused any injury, rights violations, or other harms. The article highlights the system's deployment and expected positive impact on security, without mentioning any incidents or materialized risks. Therefore, this is not an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and details about the AI system's integration into public security infrastructure and its anticipated effects, which aids understanding of the evolving AI ecosystem in law enforcement.

Pullaro opens tender for "Lince", the AI system that will expand video surveillance in Santa Fe

2026-01-15
ellitoral.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for video surveillance and identification tasks. The system is in the development and deployment phase, with no mention of any harm or malfunction occurring so far. There is no indication of direct or indirect harm to persons, property, rights, or communities at this stage. The article highlights the potential for improved security but does not discuss risks or incidents. Therefore, this event qualifies as Complementary Information, providing context and updates about AI deployment in public security without reporting an AI Incident or AI Hazard.

Two thousand AI surveillance cameras for the city of Santa Fe

2026-01-15
GRUPO DERF
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for surveillance and identification tasks. However, the article only discusses the planned installation and expected operational timeline, with no indication of actual harm or incidents caused by the system. Since the system's use could plausibly lead to harms such as privacy infringements or misuse in the future, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it does not update or respond to a prior incident or hazard, nor is it unrelated as it clearly involves AI systems with potential societal impact.

The capital will add 2,000 AI video surveillance cameras

2026-01-15
Sin Mordaza
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for video surveillance, facial recognition, and license plate identification, which are known to pose risks of privacy violations and potential human rights infringements. Although no harm has yet occurred or been reported, the large-scale deployment of such AI surveillance technology could plausibly lead to incidents involving violations of rights or other harms. Since the event concerns the planned implementation and not an incident with realized harm, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Pullaro led the tender for the new AI-powered video surveillance system

2026-01-16
GRUPO DERF
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in the video surveillance system for identification and data processing, confirming AI system involvement. However, it only describes the system's procurement and intended deployment, with no indication of any realized harm or incident. Since the system is not yet operational and no harm has occurred, but the nature of the system (mass surveillance with facial recognition) plausibly could lead to harms such as privacy violations or misuse, this qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated since the AI system and its potential impacts are central to the report.