Punjab Police Deploys AI Facial Recognition System for Criminal Identification


Punjab Police has launched the AI-powered 'Face Trace System' to identify and apprehend suspects and criminals using a database of 18 million individuals. While intended to enhance law enforcement efficiency, the system raises concerns about potential privacy violations and human rights risks due to its mass surveillance capabilities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition with AI technology) developed and deployed by Punjab Police. The system's purpose is to assist in identifying and apprehending criminals, which directly relates to law enforcement activities. While the article does not report any harm or misuse resulting from the system's deployment, the system's use could plausibly lead to harms such as violations of privacy or human rights if misused or if errors occur. However, since the article only reports the launch and intended use without any reported harm or incident, it does not qualify as an AI Incident. Instead, it represents a potential risk scenario where harm could plausibly arise in the future, thus fitting the definition of an AI Hazard.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Fairness; Accountability; Democracy & human autonomy; Robustness & digital security; Safety

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest; Psychological; Reputational

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Punjab Police introduces AI-based 'Face Trace System' for the arrest of criminals

2023-07-26
DawnNews
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition AI) in active law enforcement operations to identify and capture suspects and criminals. The AI system's use directly affects individuals' rights and privacy, and its deployment in policing can lead to potential violations of human rights or legal obligations if misused or if errors occur. Since the system is actively being used to identify and arrest individuals, and given the potential for harm related to rights violations or wrongful arrests, this qualifies as an AI Incident under the framework.

Formal launch of Punjab Police's state-of-the-art Face Trace system

2023-07-26
Daily Pakistan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition with AI technology) developed and deployed by Punjab Police. The system's purpose is to assist in identifying and apprehending criminals, which directly relates to law enforcement activities. While the article does not report any harm or misuse resulting from the system's deployment, the system's use could plausibly lead to harms such as violations of privacy or human rights if misused or if errors occur. However, since the article only reports the launch and intended use without any reported harm or incident, it does not qualify as an AI Incident. Instead, it represents a potential risk scenario where harm could plausibly arise in the future, thus fitting the definition of an AI Hazard.

Punjab Police introduces state-of-the-art 'Face Trace technology'

2023-07-26
Daily Pakistan
Why's our monitor labelling this an incident or hazard?
The event involves the deployment and use of an AI system (facial recognition technology) by the police for identifying suspects and criminals. This use of AI directly affects individuals' rights and privacy, potentially leading to violations of human rights or legal obligations if misused or if errors occur. Since the system is actively being used to identify and apprehend individuals, and given the known risks of facial recognition AI (such as misidentification, privacy infringement, and potential abuse), this constitutes an AI Incident due to the direct involvement of AI in law enforcement actions that can lead to harm or rights violations.

Trouble for criminals: Punjab Police introduces the Face Trace System

2023-07-26
Hum News
Why's our monitor labelling this an incident or hazard?
The article describes the deployment of an AI-based facial recognition system by Punjab Police for identifying suspects and criminals. Although no direct harm or incident is reported, the nature of the system and its application in law enforcement plausibly could lead to violations of human rights, privacy breaches, or wrongful arrests, which are recognized harms under the framework. Since no actual harm has yet been reported, but the AI system's use could plausibly lead to such harms, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the system's introduction and planned improvements, not on any realized harm or incident, so it is not Complementary Information.

Inauguration of Punjab Police's advanced facial recognition system, Face Trace

2023-07-25
City 42
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction and intended benefits of an AI facial recognition system without mentioning any realized harm, misuse, or malfunction. There is no indication of direct or indirect harm to individuals, communities, or rights, nor any plausible future harm explicitly stated. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information since it provides context about AI deployment in law enforcement and the system's integration and planned improvements.

Now it is hard for criminals to evade the police: the identification problem is over

2023-07-26
Neo TV
Why's our monitor labelling this an incident or hazard?
The event describes the deployment of an AI system (facial recognition) used by police to identify and apprehend criminals. While the article does not mention any harm occurring yet, the use of AI for mass surveillance and identification of suspects raises plausible risks of harm such as violations of privacy and potential human rights concerns. However, since no actual harm or incident is reported, and the focus is on the introduction and intended use of the system, this qualifies as an AI Hazard due to the plausible future risk of harm from the AI system's use.