
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google Photos' facial recognition AI mistakenly identified a child's face in the background of an intimate video, automatically filing the video into the child's album and sharing it with the user's mother. This caused an unintended privacy breach and emotional distress, highlighting the risks of AI-driven auto-sharing features.[AI generated]
Why is our monitor labelling this an incident or hazard?
An AI system (Google's facial recognition and photo-organization feature) was used to automatically group photos and videos by detected faces. Because the AI detected a child's face in the background of an intimate video, the system incorrectly filed the video in the child's photo album, which led directly to the unintended sharing of private content with the child's grandmother, causing privacy harm and emotional distress to the user. The harm is realized and directly linked to the AI system's use and its miscategorization. This therefore qualifies as an AI incident involving harm to privacy and emotional well-being, which can be considered harm to a person or group under the definitions provided.[AI generated]
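The failure mode is easier to see as a sketch. The following is a hypothetical, simplified model of face-based auto-grouping, not Google Photos' actual pipeline; the names (MediaItem, Album, auto_group, "child", "grandmother") are illustrative assumptions. It shows how a single background face match, with no check on the rest of the content, can route an entire video into an album that auto-shares with another account.

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    path: str
    detected_faces: list[str]  # person IDs returned by a face-recognition model

@dataclass
class Album:
    person_id: str
    shared_with: list[str]                         # accounts the album auto-shares with
    items: list[MediaItem] = field(default_factory=list)

def auto_group(item: MediaItem, albums: dict[str, Album]) -> list[str]:
    """Route a media item into every album whose person was detected in it,
    and return the accounts the item ends up being shared with."""
    recipients: list[str] = []
    for person_id in item.detected_faces:
        album = albums.get(person_id)
        if album is None:
            continue
        album.items.append(item)              # the item is filed under this person...
        recipients.extend(album.shared_with)  # ...and shared with the album's partners
    return recipients

# Failure mode described in this incident: a face detected in the *background*
# of a private video is enough to pull the whole video into a shared album.
albums = {"child": Album(person_id="child", shared_with=["grandmother"])}
video = MediaItem(path="private_video.mp4", detected_faces=["child"])  # background match
print(auto_group(video, albums))  # -> ['grandmother']: private content leaves the account
```

In this simplified model, the routing decision depends only on whether a known face appears anywhere in the frame; nothing weighs the nature of the content or asks the user before the share occurs, which is the gap the incident illustrates.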