Apple and Google silently scan mobile photos with AI, raising privacy concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Apple’s Enhanced Visual Search and Google's SafetyCore, new AI-driven features on iOS and Android, analyze users’ photos for object identification and sensitive content without user consent or notification. Their silent deployment and lack of transparency violate privacy rights and raise concerns over intrusive, unconsented AI surveillance on personal devices.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (Enhanced Visual Search and SafetyCore) that analyze images on mobile devices. Their deployment without user consent or notification constitutes a violation of privacy rights, a form of human rights violation under the framework. Although no direct harm such as data leakage is reported, the unauthorized scanning and lack of transparency amount to a breach of obligations protecting fundamental rights. Hence, this qualifies as an AI incident due to the indirect violation of rights caused by the AI systems' use.[AI generated]
AI principles
Privacy & data governance, Transparency & explainability, Accountability, Respect of human rights, Democracy & human autonomy, Robustness & digital security

Industries
Consumer products, Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Apple and Google now scan the photos on your phone without warning: here's how to disable it

2025-02-26
elEconomista.es
Chegg sues Google over AI search summaries

2025-02-24
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI-generated search summaries) and alleges harm (economic and IP-related) caused by its use. However, the harm is presented as an allegation in a lawsuit rather than as a confirmed incident with direct evidence. The main focus is the legal action and its broader implications for AI's impact on content providers, which fits the definition of complementary information: the article describes a governance and societal response to AI deployment, not a realized AI incident or a plausible future hazard.
A new Android feature scans your photos for 'sensitive content': how to stop it

2025-02-25
Notiulti
Why's our monitor labelling this an incident or hazard?
SafetyCore is an AI system that uses machine learning to classify sensitive content on user devices. Its silent installation without explicit user consent, and the inability to opt out or disable it, infringe on users' privacy and control, which are fundamental rights. The system's deployment has directly caused harm in the form of a violation of user rights and privacy, as users are unaware of and unable to control this intrusive scanning. This meets the criteria for an AI incident under violations of human rights or breach of obligations intended to protect fundamental rights. The article describes realized harm (a privacy violation), not merely potential harm, so it is not a hazard or complementary information.