Swedish Police Chief Advocates Real-Time AI Facial Recognition



Swedish police chief Petra Lundh supports implementing AI-powered real-time facial recognition to combat serious crime, pending new legislation aligned with EU rules. While intended to help identify suspects, the proposed use raises concerns about potential violations of personal privacy and human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article discusses the intended use and legislative preparation for real-time facial recognition by police, which involves AI systems. However, no actual harm or incident has occurred yet; the event concerns potential future use that could plausibly lead to harms such as privacy violations or rights infringements. Therefore, it qualifies as an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Fairness, Robustness & digital security, Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Public interest

Severity
AI hazard

Business function:
Compliance and justice

AI system task:
Recognition/object detection


Articles about this incident or hazard


The national police chief wants to use facial recognition in real time - Nyheter (Ekot)

2024-10-05
Sveriges Radio

The national police chief wants real-time facial recognition: "I think it would make a difference"

2024-10-05
SVT Nyheter
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (real-time facial recognition) and its potential use by law enforcement. It does not describe any actual incident in which the system caused harm or violated rights; rather, it discusses a legislative proposal and the balance between utility and privacy risks, indicating a credible potential for future harm to privacy and rights. Because the system's use could plausibly lead to harm but no harm has yet occurred or been reported, this qualifies as an AI Hazard.

The police want facial recognition

2024-10-05
SVT Nyheter
Why's our monitor labelling this an incident or hazard?
The article discusses the potential use of an AI system (real-time facial recognition) by law enforcement, which could plausibly lead to harms such as violations of privacy and human rights if misused or improperly regulated. However, since the technology is not yet in active use and no harm has been reported, this constitutes a plausible future risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The police want real-time facial recognition

2024-10-05
Dagens Nyheter
Why's our monitor labelling this an incident or hazard?
The article discusses the planned use of AI-powered real-time facial recognition by police. The event concerns potential use and legal regulation, not an actual incident causing harm. Because deploying such technology could plausibly lead to violations of personal privacy and human rights, but no harm has yet occurred, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The police want real-time facial recognition

2024-10-05
Sydsvenskan
Why's our monitor labelling this an incident or hazard?
Facial recognition systems are AI systems that analyze images to identify individuals. The article describes the police's intention to use such systems to locate suspects in real time. No specific harm or incident is reported as having occurred; the article focuses on the proposal and its regulatory framework. Because real-time facial recognition poses plausible future risks of harm, such as violations of privacy and human rights, this constitutes an AI Hazard.

The national police chief: Wants to see real-time facial recognition

2024-10-05
Omni
Why's our monitor labelling this an incident or hazard?
The article discusses the potential future use of an AI system (real-time facial recognition) by police. While no harm has yet occurred, the deployment of such technology could plausibly lead to violations of personal privacy and human rights, which fits the definition of an AI Hazard. There is no indication that the system is currently in use or causing harm, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential use and implications rather than updates or responses to existing incidents.

The police want real-time facial recognition

2024-10-05
strengnastidning.se
Why's our monitor labelling this an incident or hazard?
The article discusses the planned use of AI-powered real-time facial recognition by police, which is an AI system. Although no incident of harm is reported, the technology's use could plausibly lead to violations of personal privacy and human rights, fitting the definition of an AI Hazard. The event is about the potential impact of deploying this AI system under new legislation, not about an actual incident or a complementary update.

The national police chief wants real-time facial recognition

2024-10-05
Petterssons gör Sverige lagom!
Why's our monitor labelling this an incident or hazard?
The article discusses the police chief's desire to use real-time facial recognition, an AI system, for law enforcement. No actual harm or incident has occurred yet, but the use of such technology could plausibly lead to violations of personal privacy and other rights, which are recognized harms under the framework. Since the event concerns a proposal and potential future use rather than an actual incident, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.