Brazilian Legislative Proposals Prioritize AI Surveillance and Policing


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A report by IDMJR reveals that nearly half of the AI-related legislative proposals introduced in five Brazilian states (RJ, SP, ES, PR, SC) between 2023 and 2025 focus on public security, emphasizing surveillance technologies such as facial recognition and drones. This prioritization raises concerns about potential privacy violations and threats to democratic rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article centers on legislative proposals and societal concerns about AI's role in surveillance and control, which could plausibly lead to harms such as violations of privacy and human rights. However, according to the article, no actual harm or incident has occurred yet. This therefore qualifies as an AI Hazard: it identifies credible risks from the development and use of AI systems in surveillance and policing that could plausibly lead to incidents harming rights and privacy. It is not Complementary Information, since it is not an update on or response to a past incident, nor is it unrelated, as it clearly involves AI and potential harms.[AI generated]
AI principles
Privacy & data governance; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Legislative proposals on AI favor control and surveillance

2026-04-08
IstoÉ Dinheiro

Legislative proposals on AI favor control and surveillance

2026-04-08
O Povo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for surveillance and policing, such as facial recognition and drones, which are AI systems by definition. The legislative proposals aim to expand these uses, increasing the likelihood of misuse or harmful outcomes. While no direct harm has been reported yet, these AI applications in security and surveillance are well known to pose credible risks of human rights violations and community harm. Hence, the event is best classified as an AI Hazard due to the plausible future harm from these AI-enabled systems if deployed as proposed.

Dossier points to prioritization of surveillance over education in AI legislative proposals - Jornal de Brasília

2026-04-08
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of surveillance and facial recognition technologies, which are AI applications. The concerns raised relate to potential violations of privacy and legal rights, which are recognized harms under the framework. However, the article focuses on legislative proposals and their implications rather than an actual event where AI use has caused harm. Therefore, it does not meet the criteria for an AI Incident (no realized harm) or an AI Hazard (no specific plausible imminent harm event). Instead, it provides complementary information about societal and governance responses and critiques regarding AI use in public security, fitting the definition of Complementary Information.

Legislative proposals on AI favor control and surveillance

2026-04-08
O Cafezinho
Why's our monitor labelling this an incident or hazard?
The article centers on legislative proposals involving AI for surveillance and control, which could plausibly lead to harms such as violations of privacy and fundamental rights. However, no actual harm or incident is reported as having occurred yet. The focus is on the potential risks and societal implications of these AI uses, making this an AI Hazard rather than an AI Incident. It is not merely Complementary Information, because it highlights credible risks associated with AI deployment in surveillance and control that could threaten democratic rights and privacy.

Artificial intelligence in the state assemblies focuses on surveillance and control - News Rondônia

2026-04-09
News Rondônia
Why's our monitor labelling this an incident or hazard?
The article centers on the potential societal harms and risks posed by the development and use of AI surveillance technologies, including facial recognition and drones, in public security contexts. It highlights concerns about privacy violations, racial discrimination, and threats to democratic freedoms, which are plausible harms that could arise from these AI systems. However, it does not report a concrete incident of harm or malfunction caused by AI, nor does it describe a realized event of injury, rights violation, or disruption. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to AI incidents involving harm to rights and communities, but no actual harm is reported yet.

Legislative proposals on AI prioritize control and surveillance, dossier finds

2026-04-08
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The article centers on a legislative and societal analysis of AI-related proposals and their implications for privacy and democratic rights. It does not report a specific AI system malfunction, misuse, or harm, nor does it describe a concrete event in which AI has directly or indirectly caused harm. The focus is on potential risks and policy directions, placing it in a governance and societal-response context. It therefore fits best as Complementary Information, providing context and analysis on AI's societal impact and governance challenges, rather than reporting an AI Incident or Hazard.