Swedish Government Proposes Police Use of AI Facial Recognition Surveillance

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Swedish government has proposed legislation allowing police to use AI-powered real-time facial recognition for crime prevention and investigation. The plan, which seeks exemptions from EU prohibitions on real-time biometric surveillance, raises concerns about potential violations of privacy and human rights, though safeguards are promised. No actual harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the intended use of an AI system (real-time facial recognition) by police, which could plausibly lead to violations of personal privacy and human rights if misused or insufficiently regulated. Although no harm has yet occurred, the government's proposal to enable such use despite existing prohibitions and the acknowledged risks constitutes a credible potential for harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights and privacy.[AI generated]
AI principles
Privacy & data governance · Respect of human rights · Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights · Public interest

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard

Regeringen: Polisen ska få använda AI-kameror [The government: Police to be allowed to use AI cameras]

2025-11-28
Di.se
Why's our monitor labelling this an incident or hazard?
The event involves the intended use of an AI system (real-time facial recognition) by police, which could plausibly lead to violations of personal privacy and human rights if misused or insufficiently regulated. Although no harm has yet occurred, the government's proposal to enable such use despite existing prohibitions and the acknowledged risks constitutes a credible potential for harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights and privacy.
Regeringen: Polisen ska få övervaka med AI [The government: Police to be allowed to conduct surveillance with AI]

2025-11-28
gp.se
Why's our monitor labelling this an incident or hazard?
The event involves the proposed use of an AI system (real-time facial recognition) by police, which is currently not in use but planned. The proposal acknowledges potential harms to personal privacy and civil rights, which are fundamental rights, and aims to regulate the use with safeguards. Since the AI system's use could plausibly lead to violations of human rights (privacy and personal integrity) if misused or overused, this constitutes an AI Hazard. There is no indication that harm has yet occurred, so it is not an AI Incident. The article is not merely complementary information about past incidents or governance responses but a description of a future regulatory proposal with potential risks. Therefore, the classification is AI Hazard.
Regeringen: Polisen ska få övervaka med AI [The government: Police to be allowed to conduct surveillance with AI]

2025-11-28
Sydsvenskan
Why's our monitor labelling this an incident or hazard?
The article discusses a legislative proposal enabling police use of AI facial recognition surveillance, which is an AI system. Although no harm has yet occurred, the use of such AI systems for mass surveillance could plausibly lead to violations of human rights and privacy, constituting an AI Hazard. Since the event concerns a proposal and potential future harm rather than an actual incident, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Regeringen: Polisen ska få övervaka med AI [The government: Police to be allowed to conduct surveillance with AI]

2025-11-28
ttela.se
Why's our monitor labelling this an incident or hazard?
The article outlines a policy proposal for the use of AI facial recognition by police, which could plausibly lead to significant harms such as violations of personal privacy and human rights if implemented. However, no actual harm has yet occurred as the proposal is still under consideration and not in active use. Therefore, this situation represents a credible potential risk (AI Hazard) rather than a realized incident. The presence of AI is explicit, and the potential for harm to rights and privacy is clearly articulated, but since the system is not yet deployed or causing harm, it is classified as an AI Hazard.
Regeringen: Polisen ska få övervaka med AI [The government: Police to be allowed to conduct surveillance with AI]

2025-11-28
Norrköpings Tidningar
Why's our monitor labelling this an incident or hazard?
The article discusses a legislative proposal enabling AI use by police but does not report any actual harm or incident caused by AI. The potential for misuse or rights violations exists, but no direct or indirect harm has occurred yet. Therefore, this is best classified as an AI Hazard, as the use of AI in this context could plausibly lead to incidents involving violations of rights or other harms in the future.