Bias and Privacy Risks in AI Emotion Recognition Technology

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

AI-powered emotion recognition technology is increasingly used in areas like hiring, security, and policing, often without consent. These systems have been criticized for racial bias and privacy violations that produce discriminatory outcomes, raising concerns about their scientific validity and their potential to harm individuals and communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (emotion recognition technology) and discusses their use and inherent biases. It highlights potential harms such as racial discrimination, privacy violations, and misuse in policing and surveillance, which align with violations of human rights and harm to communities. However, it does not describe a concrete event where harm has already occurred, but rather focuses on the risks and controversies surrounding the technology's deployment and scientific validity. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident in the future.[AI generated]
AI principles
Fairness, Privacy & data governance, Respect of human rights, Robustness & digital security, Transparency & explainability, Accountability

Industries
Business processes and support services; Digital security; Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Reputational, Psychological, Economic/Property

Severity
AI hazard

Business function
Human resource management, Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

AI is increasingly being used to identify emotions - here's what's at stake

2021-04-15
The Conversation
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (emotion recognition technology, or ERT) and discusses realized harms such as racial bias leading to discriminatory outcomes and privacy concerns, which fall under violations of rights and harm to communities. These harms are ongoing and documented, making this an AI Incident rather than a mere hazard or complementary information. Although no single discrete event is described, the article's focus on the existing, active use of biased ERT causing harm justifies classification as an AI Incident. The citizen science project and calls for public deliberation are complementary but secondary to the main narrative of harm caused by ERT.

AI Is Increasingly Being Used to Identify Emotions, Here's What's at Stake - Neuroscience News

2021-04-18
Neuroscience News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (emotion recognition technology) and discusses their use and inherent biases. It highlights potential harms such as racial discrimination, privacy violations, and misuse in policing and surveillance, which align with violations of human rights and harm to communities. However, it does not describe a concrete event where harm has already occurred, but rather focuses on the risks and controversies surrounding the technology's deployment and scientific validity. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident in the future.

AI is increasingly being used to identify emotions: What's at stake?

2021-04-16
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ERT) used to detect emotions from facial expressions. It details the biases and inaccuracies of these systems, particularly racial bias, and their potential to be weaponized, which could lead to violations of rights and harm to communities. However, no specific harm or incident is reported as having occurred; rather, the article focuses on potential risks and ethical concerns. This aligns with the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but no concrete incident has yet materialized.