Iran Uses AI Facial Recognition to Enforce Hijab Laws, Violating Women's Rights


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Iranian authorities are reportedly deploying AI-powered facial recognition technology to identify and penalize women who violate the country's strict hijab laws. The system enables remote surveillance: women are fined or arrested without any direct police interaction, raising serious concerns about privacy and state oppression.[AI generated]

Why's our monitor labelling this an incident or hazard?

Facial recognition technology is an AI system used here for automated surveillance and enforcement of dress codes. The use of this AI system has directly led to violations of human rights (compulsory dress enforcement, targeting women), threats, arrests, and deaths, which constitute harm to persons and communities. This meets the criteria for an AI Incident because the AI system's use has directly caused significant harm and rights violations.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Women

Harm types
Human or fundamental rights, Psychological, Economic/Property, Public interest

Severity
AI incident

Business function:
Compliance and justice, Monitoring and quality control

AI system task:
Recognition/object detection


Articles about this incident or hazard


Use of facial recognition tech in Iran to police women's dress code leaves many exposed to risk - StuffSA

2023-01-13
Stuff
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for automated surveillance and enforcement of dress codes. The use of this AI system has directly led to violations of human rights (compulsory dress enforcement, targeting women), threats, arrests, and deaths, which constitute harm to persons and communities. This meets the criteria for an AI Incident because the AI system's use has directly caused significant harm and rights violations.

Iran's Using Facial Recognition Tech To Penalise Women Defying Strict Hijab Law

2023-01-11
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, an AI system, to identify women violating hijab laws. The technology is used by the government to enforce punitive measures, including arrests, which constitutes a violation of human rights and fundamental freedoms. The harm is realized and ongoing, as women are being identified and penalized based on AI-driven surveillance. Therefore, this qualifies as an AI Incident due to direct involvement of AI in causing harm through rights violations and repression.

Chinese facial recognition technology helping Iran to identify women breaking strict dress code: Report

2023-01-12
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of facial recognition AI systems to monitor and enforce dress codes in Iran, leading to direct harm through human rights violations. The AI system's use results in penalties and surveillance that restrict women's freedoms and contribute to repression. This fits the definition of an AI Incident, as the AI system's use has directly led to violations of fundamental rights and harms to communities.

Iran Says Face Recognition Will ID Women Breaking Hijab Laws

2023-01-10
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of face recognition algorithms to identify individuals violating hijab laws, linking AI system use to direct enforcement actions such as fines and arrests. This enforcement has resulted in significant harm to individuals' rights and freedoms, including documented deaths and mass arrests related to these policies. The AI system's role is pivotal in enabling the government to monitor and penalize women for dress code violations, constituting a clear AI Incident under the framework's definition of violations of human rights and harm to communities.

Iran Is Using Facial Recognition to Enforce Modesty Laws

2023-01-10
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems by Iranian authorities to identify and penalize women for alleged violations of hijab laws. This AI-enabled surveillance has directly led to harm by facilitating arrests and citations without due process or direct interaction, violating fundamental human rights. The involvement of AI in the development and use of biometric identification systems for oppressive enforcement meets the criteria for an AI Incident, as it has directly caused harm to persons and communities through state repression.

Iran wants to use facial recognition to identify women breaking hijab laws

2023-01-11
Metro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems to identify individuals violating hijab laws, which leads to arrests, fines, and restrictions on access to services. This constitutes a violation of human rights and fundamental freedoms, fulfilling the criteria for an AI Incident under the OECD framework. The AI system's use in surveillance and enforcement directly causes harm to individuals and communities, including potential imprisonment and suppression of dissent. Therefore, this event is classified as an AI Incident.

Report: Iran May Be Using Facial Recognition Technology to Police Hijab Law

2023-01-13
VOA Voice of America
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) by a government to enforce religious dress codes, which has directly led to arrests and suppression of women's rights. This is a clear violation of human rights and thus qualifies as an AI Incident under the framework. The involvement of AI in identifying individuals for law enforcement purposes and the resulting harm to fundamental rights is explicit and direct.

Iran to use facial recognition to identify women without hijabs

2023-01-11
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition algorithms to identify individuals violating hijab laws, linking AI system use to direct enforcement actions such as fines and arrests. This enforcement has resulted in significant human rights violations, including suppression of freedom of expression and assembly, and has contributed to harm including deaths and mass arrests. The AI system's role is pivotal in enabling surveillance and targeting of women based on religious dress codes, fulfilling the criteria for an AI Incident under violations of human rights.

Iran plans to use facial recognition technology to identify and prosecute women without hijabs- Technology News, Firstpost

2023-01-12
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, an AI system, by Iranian authorities to identify women not complying with hijab laws. This use has directly led to arrests, fines, and at least one death in custody, which are clear violations of human rights and fundamental freedoms. The AI system's role is pivotal in enabling mass surveillance and enforcement, causing harm to individuals and communities. Therefore, this event meets the criteria for an AI Incident due to direct harm and rights violations caused by the AI system's use.

Iran Apparently Using Facial Recognition to Catch Women Breaking Hijab Law

2023-01-12
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition AI systems by the Iranian state to monitor and enforce hijab laws, which has resulted in arrests and violent consequences for women. This constitutes a violation of human rights and fundamental freedoms, fulfilling the criteria for an AI Incident. The AI system's deployment is not a hypothetical or potential harm but an ongoing reality causing direct harm to individuals and communities.

Is Iran using facial recognition technology to police hijab law?

2023-01-11
Deseret News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition software, an AI system, by Iranian law enforcement to identify women violating hijab laws. This use has directly led to arrests, citations without direct police interaction, and is linked to widespread protests and brutal crackdowns causing deaths and mass arrests. The AI system's role in surveillance and enforcement is pivotal in these human rights violations and harm to communities, meeting the criteria for an AI Incident.

Face Recognition Already Used to Identify Iranian Women Violating Hijab Laws?

2023-01-10
Tech Times
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (face recognition) is explicitly mentioned, and its use to identify individuals remotely is reasonably inferred. The use of this AI system leads directly to harm in the form of human rights violations (enforcement of restrictive dress codes, imprisonment, flogging). The article indicates that women are being identified and penalized based on AI surveillance, fulfilling the criteria for an AI Incident under violations of human rights. Although the Iranian government has not officially confirmed current use, multiple credible reports and the consequences described indicate realized harm linked to AI use.

Chinese facial recognition technology helping Iran to identify women breaking strict dress code: Report - WFIN Local News

2023-01-12
WFIN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, an AI system, by Iranian authorities to enforce dress codes, leading to direct harm in the form of human rights violations against women. The technology is used to identify individuals remotely and issue fines or penalties, which constitutes a breach of fundamental rights and freedoms. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights) and oppression, not merely a potential or future risk.

Iran Says Face Recognition Will ID Women Breaking Hijab Laws - WIRED

2023-01-11
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The use of AI face recognition to enforce hijab laws constitutes a violation of human rights, specifically privacy and freedom of expression, as it enables surveillance and punitive actions against individuals based on their appearance and behavior. The AI system's deployment directly leads to harm through potential arrests and fines, fulfilling the criteria for an AI Incident under violations of human rights.

Iran reportedly using facial recognition to identify women who do not wear the hijab

2023-01-13
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems to identify women violating hijab laws, leading to arrests and social restrictions. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as arrests and penalties have already occurred. Therefore, this event qualifies as an AI Incident due to the direct role of AI in enabling state repression and rights violations.

Iran says facial recognition will identify women breaking hijab laws, a welcome tool for authoritarian regimes around the world

2023-01-13
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as facial recognition technology powered by AI algorithms used by the Iranian government to identify women violating hijab laws. The use of this AI system has directly led to human rights violations, including arrests, fines, and repression of women, which constitute harm under the framework. The article details realized harms, including arrests and deaths linked to enforcement actions facilitated by AI surveillance. Therefore, this qualifies as an AI Incident due to the direct and significant harm caused by the AI system's use in enforcing oppressive laws.

Chinese facial recognition technology helps Iran identify women breaking the strict dress code: report - News 24

2023-01-12
News 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) developed and deployed by Iranian authorities to enforce strict dress codes. The technology is actively used to identify women violating the hijab law, leading to sanctions such as fines and other penalties. This constitutes a violation of human rights and fundamental freedoms, fulfilling the criteria for harm under the AI Incident definition. The article provides evidence that the AI system's use has already caused harm, not just potential harm, making it an AI Incident rather than a hazard or complementary information.

Iran reportedly using facial recognition to identify women without hijab

2023-01-11
L'ADN
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition AI systems to identify and penalize women for not wearing the hijab, which is a direct use of AI leading to violations of human rights and legal obligations protecting fundamental rights. The article reports actual arrests, fines, and social exclusion resulting from this AI-enabled surveillance, indicating realized harm. The AI system's role is pivotal in enabling the government to enforce these laws through automated identification and tracking, which constitutes an AI Incident under the framework's definition of violations of human rights and breach of legal protections.

2

2023-01-13
developpez.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—facial recognition technology—to identify and target women violating hijab laws in Iran. The AI system's use has directly led to arrests, physical abuse, forced confessions, and suppression of human rights, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal protections. The article provides concrete examples of harm resulting from the AI system's deployment, including violent repression and surveillance abuses. Hence, the classification as an AI Incident is justified.