Google Gemini AI Raises Global Privacy Concerns by Scanning Personal Photos and Emails


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini AI now scans users' personal photos and emails to generate personalized content, raising significant privacy concerns. The opt-in feature accesses sensitive data from Google Photos and Gmail, prompting criticism over vague consent processes and potential rights violations. Privacy advocates and regulators are scrutinizing the update for possible misuse of personal information. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Gemini) is explicitly involved, scanning and processing personal photos for image generation. The event stems from the use of the AI system's new feature. While no direct harm or rights violation is reported as having occurred, the scanning of all personal photos could plausibly lead to privacy harms or breaches if the data is misused or exposed. Since the article focuses on potential privacy risks and user advisories without describing realized harm, this qualifies as an AI Hazard rather than an AI Incident. It is more than general AI news because it details a specific AI system's new capability with potential for harm. [AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


Google Starts Scanning All Your Photos As New Update Goes Live

2026-04-18
Forbes
Why's our monitor labelling this an incident or hazard?
An AI system (Gemini) is explicitly involved, scanning and processing personal photos for image generation. While no direct harm or rights violation is reported as having occurred, the scanning of all personal photos could plausibly lead to privacy harms or breaches if the data is misused or exposed. Since the article focuses on potential privacy risks and user advisories without describing realized harm, this qualifies as an AI Hazard rather than an AI Incident.

Google just gave Gemini access to all your photos: Here's how to turn it off

2026-04-18
GEO TV
Why's our monitor labelling this an incident or hazard?
An AI system (Gemini) is explicitly involved, accessing personal photo data to generate content. The event involves the use of the AI system and raises credible concerns about potential privacy harms, which could constitute rights violations if the data is misused. Since no direct harm has been reported yet, but plausible future harm is highlighted by privacy watchdogs, this qualifies as an AI Hazard rather than an AI Incident. The article also provides instructions for disabling the feature, indicating user control but not resolving the potential risk.

Google Gemini Is Now Digging Through Your Private Photos

2026-04-17
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Google Gemini's Personal Intelligence) that uses personal data to generate outputs, involving both AI development and use. The system's operation directly affects users' privacy by mining sensitive personal data without explicit, broad consent, which constitutes a violation of privacy rights, a subset of human rights. The geographic restrictions indicate recognition of legal compliance issues, reinforcing the assessment of rights violations. Although no physical harm is involved, the breach of privacy and potential misuse of personal data are significant harms under the framework. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Nano Banana Google Photos sparks powerful new privacy concerns

2026-04-18
Pune Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini app with the Nano Banana image tool) that uses personal data to generate AI content. However, the concerns raised relate to potential privacy implications and data-usage policies rather than actual harm or violations that have occurred. There is no report of injury, rights violations, or other harms directly caused by the AI system's use or malfunction. The feature is opt-in and includes safety measures, and the article focuses on user concerns and calls for clearer consent and controls. This fits the definition of Complementary Information: it provides supporting context and highlights governance and societal issues related to AI without describing a specific AI Incident or AI Hazard.

Google Gemini Personal Intelligence: AI Now Scans Photos and Emails - News Directory 3

2026-04-19
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini) that analyzes personal data (photos and emails) using generative AI and multimodal models. The system's use raises concerns about privacy and data-protection rights, which fall under violations of human rights and legal obligations. Although the feature is opt-in, the consent process is criticized as vague and insufficient, and regulatory authorities are scrutinizing the practice. No actual harm, such as data misuse or a breach, has been reported yet, so the event does not meet the threshold for an AI Incident. However, the plausible risk of harm through privacy violations and regulatory non-compliance is credible and significant, qualifying it as an AI Hazard.