Google Photos AI Misclassifies Intimate Video, Causing Privacy Breach


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google Photos' facial recognition AI mistakenly detected a child's face in the background of an intimate video, automatically filing the video into the child's album and sharing it with the user's mother. This caused an unintended privacy breach and emotional distress, highlighting the risks of AI-driven auto-sharing features.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Google's facial recognition and photo organization) was used to automatically group photos and videos by detected faces. Due to the AI's detection of a child's face in the background of an intimate video, the system incorrectly categorized the video into the child's photo album. This led to the direct unintended sharing of private content with the child's grandmother, causing privacy harm and emotional distress to the user. The harm is realized and directly linked to the AI system's use and malfunction in categorization. Therefore, this qualifies as an AI Incident involving harm to privacy and emotional well-being, which can be considered harm to a person or group under the definitions provided.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; Children

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Recognition/object detection; Organisation/recommenders


Articles about this incident or hazard


Betrayed by a Google feature while sharing photos: wife's intimate video seen in full by her mother

2021-04-06
Liberty Times Net

Sending the grandson's photos to grandma, betrayed by Google: intimate video exposed, wife mortified

2021-04-06
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google's facial recognition AI) whose use directly led to a privacy harm (embarrassment and unintended sharing of intimate content). This constitutes a violation of privacy rights, which falls under harm to individuals. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to a person through privacy violation and emotional distress.

Wife's steamy video mistakenly sent to her mother: all because Google Photos facial recognition is too good?

2021-04-07
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google Photos' facial recognition and automatic sharing feature). The AI system's use directly led to a privacy harm: the unintended sharing of private content, causing personal embarrassment and a potential violation of privacy rights. This constitutes a violation of personal privacy, which can be considered a breach of fundamental rights under applicable law. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction or misuse in sharing sensitive content without proper user control or consent.

Google Photos facial recognition causes trouble: daughter's intimate video mistakenly sent to her mother

2021-04-06
ePrice.HK
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google Photos' facial recognition) whose use directly led to harm—specifically, a privacy breach and emotional distress caused by the unintended sharing of sensitive content. The harm is realized and directly linked to the AI system's malfunction or misclassification and its automatic sharing functionality. Therefore, this qualifies as an AI Incident under the definitions provided, as it caused harm to the individual (privacy and emotional harm).

Mom's self-shot kitchen video leaked by one phone feature, seen in full by her mother

2021-04-06
鏡週刊 Mirror Media
Why's our monitor labelling this an incident or hazard?
The event describes how an AI system (Google Photos' facial recognition and automatic sharing functionality) malfunctioned in that it automatically shared a private video containing sensitive content with the user's mother. This led to a violation of privacy and emotional distress, which qualifies as harm to a person. The AI system's role was pivotal: it automatically identified faces and shared content without the user's consent at the time of sharing. Therefore, this qualifies as an AI Incident due to harm caused by the AI system's use and configuration.

Showing grandma the grandson's photos! One feature instead sent an intimate video, and the wife broke down on the spot

2021-04-06
三立新聞
Why's our monitor labelling this an incident or hazard?
Google Photos' facial recognition is an AI system that automatically organizes media based on inferred facial identities. The misclassification by this AI system directly led to the unintended sharing of private content, harming the person's privacy and emotional well-being. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm (privacy violation and emotional distress).