AI Biometric Systems Lead to Legal and Human Rights Violations Globally


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered biometric identification systems have caused significant harm, including wrongful arrests, imprisonment, and legal violations. Incidents include a dissident detained via an iris scan in Jordan, lawsuits against companies whose virtual try-on tools violated biometric privacy laws, and a law firm sued over improper use of fingerprint scanning systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems for remote biometric identification, which directly caused harm by enabling the detention of a dissident based on biometric data. This is a clear violation of human rights and political freedoms, fitting the definition of an AI Incident. The article also discusses the broader context of AI-enhanced biometric surveillance enabling authoritarian abuses, confirming realized harm and systemic risks. Therefore, the classification as an AI Incident is appropriate.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Robustness & digital security; Fairness; Transparency & explainability; Accountability; Democracy & human autonomy; Safety

Industries
Government, security, and defence; Digital security; Consumer services; IT infrastructure and hosting; Business processes and support services

Affected stakeholders
General public; Consumers; Business; Civil society

Harm types
Human or fundamental rights; Psychological; Reputational; Economic/Property; Public interest

Severity
AI incident

Business function
ICT management and information security; Marketing and advertisement; Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Why AI-enhanced identification endangers in the Middle East

2023-08-23
Deutsche Welle

AI-enhanced identification: A danger in the Middle East?

2023-08-23
Standard Digital News - Kenya
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced biometric identification technologies that use AI algorithms for pattern recognition and remote identification. The detention of Khalaf al-Romaithi based on iris scan biometric data, which likely involved AI-enhanced identification, directly led to harm (imprisonment) and a violation of human rights. This qualifies as an AI Incident because the AI system's use directly contributed to harm. Additionally, the article discusses the broader risk of AI-enabled mass surveillance and authoritarian misuse, which could be considered AI Hazards, but the presence of a concrete harm incident takes precedence. Therefore, the event is best classified as an AI Incident.

ICO issues draft guidance on biometric data

2023-08-23
Lexology
Why's our monitor labelling this an incident or hazard?
The article discusses the ICO's publication of draft guidance on biometric data, highlighting concerns about risks such as discrimination and security breaches linked to AI and machine learning in biometric processing. However, it does not report any realized harm or a specific event where an AI system caused or could cause harm. The focus is on regulatory consultation and clarifying legal frameworks, which fits the definition of Complementary Information as it supports understanding and governance of AI-related risks without describing a new incident or hazard.

How AI-enhanced identification is endangering civil society in the Middle East

2023-08-23
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for remote biometric identification (RBI) that directly led to the detention and imprisonment of a dissident, constituting a violation of human rights. The AI system's role is pivotal in identifying the individual through iris scans and other biometric data, enabling authoritarian governments to suppress dissent. This meets the criteria for an AI Incident because the AI system's use has directly caused harm to a person (violation of rights and imprisonment). The article also discusses the broader implications of AI biometric surveillance in authoritarian contexts, reinforcing the classification as an AI Incident rather than a mere hazard or complementary information.

What Fashion And Beauty Brands Should Know Before Offering A Virtual Try-on Tool - Privacy Protection - United States

2023-08-24
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of virtual try-on tools that use biometric data (face, hand, or body scans) to generate outputs for users. The lawsuits mentioned indicate that the use of these AI systems has led to legal harm in the form of violations of biometric privacy laws, which are a breach of legal obligations protecting personal data and privacy rights. Although the harm is legal and privacy-related rather than physical, it fits within the definition of an AI Incident as it involves violations of applicable law intended to protect fundamental rights. Therefore, this event qualifies as an AI Incident due to the realized legal harms stemming from the use of AI virtual try-on tools without proper compliance.

Even a Law Firm Is Being Sued Under BIPA - Identity News Digest

2023-08-23
FindBiometrics
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions biometric AI systems (fingerprint scanning, facial recognition) and their use in identity verification and access control. The lawsuit against the law firm for BIPA violations involves the use of an AI biometric system without proper consent, constituting a breach of legal rights (a human rights violation). The wrongful arrest due to facial recognition errors is a direct harm to an individual caused by AI malfunction. These meet the criteria for AI Incidents because the AI systems' use directly led to harm (a legal rights violation and a wrongful arrest). Other mentions of biometric system deployments and product launches do not describe harm or plausible harm and are thus complementary information or unrelated news rather than incidents or hazards.

UK Data Regulator Joins Scrutiny of WorldCoin - dechert.com

2023-08-23
Business Telegraph
Why's our monitor labelling this an incident or hazard?
WorldCoin uses biometric iris scans, which involve AI systems for biometric authentication and identity verification. The ICO and other regulators are investigating potential compliance issues related to data protection and privacy, which could lead to violations of rights if not properly managed. However, the article does not report any realized harm or violations, only ongoing scrutiny and investigations. This event therefore represents a plausible risk of harm arising from the AI system's use, but no confirmed incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.