Google and Apple AI Photo Apps Fail to Address Racial Bias in Image Recognition

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In 2015, Google Photos' AI mislabeled Black individuals as 'gorillas,' causing significant harm and controversy. Despite Google's promise to fix the issue, both Google and Apple have since disabled gorilla recognition in their photo apps, highlighting persistent racial bias and unresolved flaws in AI image recognition systems. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (computer vision/image recognition) that have directly led to harm by mislabeling Black people as gorillas, a racist and offensive error. This constitutes a violation of human rights and causes social harm. The companies' decisions to disable certain recognition features are responses to these harms but do not negate the fact that the harms occurred. The article does not merely discuss potential future harm or general AI developments but focuses on concrete past incidents and their social implications. Hence, the classification as an AI Incident is appropriate. [AI generated]
AI principles
Fairness; Respect of human rights; Robustness & digital security; Transparency & explainability; Accountability; Safety

Industries
Consumer products; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection

Articles about this incident or hazard

Google's Photo App Still Can't Find Gorillas. And Neither Can Apple's.

2023-05-22
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (computer vision/image recognition) that have directly led to harm by mislabeling Black people as gorillas, a racist and offensive error. This constitutes a violation of human rights and causes social harm. The companies' decisions to disable certain recognition features are responses to these harms but do not negate the fact that the harms occurred. The article does not merely discuss potential future harm or general AI developments but focuses on concrete past incidents and their social implications. Hence, the classification as an AI Incident is appropriate.

Google's photo app still can't find gorillas, and neither can Apple's

2023-05-23
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems in photo apps have misclassified Black individuals as gorillas, a harmful and offensive error rooted in biased training data. This is a direct harm to human rights and communities, fulfilling the criteria for an AI Incident. The AI system's development and use are central to the harm, and the companies' responses (disabling the feature) do not eliminate the harm but acknowledge its presence. Therefore, this event is best classified as an AI Incident.

Google's Photo App Still Can't Find Gorillas. And Neither Can Apple's.

2023-05-23
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (Google Photos, Apple Photos) that use computer vision to label images. The misclassification of Black people as gorillas is a direct harm linked to the AI system's development and use, reflecting biased training data and resulting in offensive and discriminatory outcomes. This harm falls under violations of human rights and harm to communities. The disabling of the feature to avoid further harm confirms the recognition of the AI system's role in causing harm. Therefore, this event meets the criteria for an AI Incident.

Google turns off ability to search photo collections for gorillas over racist AI

2023-05-22
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for image recognition and search, which have a history of causing harm through racist mislabeling. The disabling of the gorilla search function is a mitigation measure to prevent further harm. No new incident of harm is reported; rather, the article discusses the continued risk and the company's response. This fits the definition of Complementary Information, as it updates on mitigation and ethical considerations related to AI bias, rather than describing a new AI Incident or a plausible future hazard.

Google's Photos App is Still Unable to Find Gorillas

2023-05-22
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image recognition algorithms) and their use in labeling photos. The misclassification of Black people as gorillas in 2015 caused harm related to racial bias and discrimination, which is a violation of human rights. However, Google and Apple have since disabled the ability to label gorillas, deliberately limiting functionality to prevent future incidents. There is no new incident of harm occurring now, but rather a mitigation response to past harm and ongoing AI bias issues. Therefore, this is best classified as Complementary Information, as it provides an update and context on the AI systems' limitations and responses to prior harms rather than reporting a new AI Incident or AI Hazard.

Google's photo app still can't find gorillas, and neither can Apple's

2023-05-24
telecomlive.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google Photos' image recognition) directly caused harm by labeling Black individuals with a racist and offensive term. This is a clear example of an AI Incident because the AI's malfunction or bias led to a violation of human rights and harm to communities. The fact that Google disabled the 'gorilla' label afterward is a response but does not negate the incident itself.

Google's Photo App Still Can't Find Gorillas. And Neither Can Apple's.

2023-05-22
RMOL
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for image recognition and labeling, which have directly caused harm by misclassifying Black individuals as gorillas, a racist and offensive error. This mislabeling harms communities and violates rights related to dignity and non-discrimination. The harm is realized and ongoing, not merely potential, thus qualifying as an AI Incident. The article's focus on the persistence of this problem despite advances in AI confirms the direct link between AI system outputs and harm.

Google's Photos app still can't find gorillas

2023-05-23
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image recognition algorithms in Google Photos and Apple Photos) that have directly led to harm by misclassifying Black people as gorillas, a form of racial bias and discrimination. This is a violation of human rights and causes harm to communities. The companies' workaround of disabling gorilla recognition is a response to this harm but does not resolve the underlying AI bias. The incident is ongoing, as the issue persists eight years later, indicating that the AI system's malfunction or biased training data remains a problem. Hence, it meets the criteria for an AI Incident.