Google Gemini AI Raises Global Privacy Concerns by Scanning Personal Photos and Emails


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini AI now scans users' personal photos and emails to generate personalized content, raising significant privacy concerns. The opt-in feature accesses sensitive data from Google Photos and Gmail, prompting criticism over vague consent processes and potential rights violations. Privacy advocates and regulators are scrutinizing the update for possible misuse of personal information.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Gemini) is explicitly involved, using AI to scan and process personal photos for image generation. The event stems from the use of the AI system's new feature. While no direct harm or rights violation is reported as having occurred, the scanning of all personal photos could plausibly lead to privacy harms or breaches if misused or if data is exposed. Since the article focuses on the potential privacy risks and user advisories without describing realized harm, this qualifies as an AI Hazard rather than an AI Incident. It is more than just general AI news because it details a specific AI system's new capability with potential for harm.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


Google Starts Scanning All Your Photos As New Update Goes Live

2026-04-18
Forbes

Google just gave Gemini access to all your photos: Here's how to turn it off

2026-04-18
GEO TV
Why's our monitor labelling this an incident or hazard?
An AI system (Gemini) is explicitly involved, accessing personal photo data to generate content. The event involves the use of the AI system and raises credible concerns about potential privacy harms, which could constitute violations of rights if misused. Since no direct harm has been reported yet but plausible future harm is highlighted by privacy watchdogs, this qualifies as an AI Hazard rather than an AI Incident. The article also provides instructions to disable the feature, indicating user control but not resolving the potential risk.

Google Gemini Is Now Digging Through Your Private Photos

2026-04-17
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Google Gemini's Personal Intelligence) that uses personal data to generate outputs, involving AI development and use. The system's operation directly impacts users' privacy by mining sensitive personal data without explicit, broad consent, which constitutes a violation of privacy rights, a subset of human rights. The geographic restrictions indicate recognition of legal compliance issues, reinforcing the assessment of rights violations. Although physical harm is not involved, the breach of privacy and potential misuse of personal data are significant harms under the framework. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Nano Banana Google Photos sparks powerful new privacy concerns

2026-04-18
Pune Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini app with Nano Banana image tool) that uses personal data to generate AI content. However, the concerns raised are about potential privacy implications and data usage policies rather than actual harm or violations that have occurred. There is no report of injury, rights violations, or other harms directly caused by the AI system's use or malfunction. The feature is opt-in and includes safety measures, and the article focuses on user concerns and calls for clearer consent and controls. This fits the definition of Complementary Information, as it provides supporting context and highlights governance and societal issues related to AI without describing a specific AI Incident or AI Hazard.

Google Gemini Personal Intelligence: AI Now Scans Photos and Emails

2026-04-19
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini) that analyzes personal data (photos and emails) using generative AI and multimodal models. The system's use raises concerns about privacy and data protection rights, which fall under violations of human rights and legal obligations. Although the feature is opt-in, the consent process is criticized as vague and insufficient, and regulatory authorities are scrutinizing the practice. No actual harm such as data misuse or breaches is reported yet, so the event does not meet the threshold for an AI Incident. However, the plausible risk of harm through privacy violations and regulatory non-compliance is credible and significant, qualifying it as an AI Hazard.

Google Starts Scanning All Your Photos As New Update Goes Live

2026-04-20
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI with Personal Intelligence) that processes personal photos to generate AI images. Although the feature is opt-in and Google claims not to train models directly on private photos, the scanning and use of intimate personal data could plausibly lead to privacy harms and violations of rights if misused or if data is exposed. Since no actual harm is reported as having occurred yet, but the potential for significant privacy harm is credible and foreseeable, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the implications and risks of this new AI feature rather than reporting a realized harm or incident.

Gemini for Android Auto Is Here. Users Wish It Weren't

2026-04-20
autoevolution
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Gemini) deployed in a safety-critical context (driving via Android Auto). Users report that the AI assistant malfunctions by misunderstanding commands, providing incorrect location data, and delivering distracting, lengthy responses, degrading the user experience and plausibly creating safety risks while driving. These are direct harms related to the AI system's use and malfunction. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Google Now Lets Gemini Generate Images From Your Google Photos

2026-04-20
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article details a new AI-powered feature that uses personal data for image generation, clearly involving an AI system. However, it does not describe any realized harm or direct risk of harm stemming from this feature. There is no mention of misuse, malfunction, or any negative consequences. The information primarily serves to inform about a new AI capability and its privacy considerations, fitting the definition of Complementary Information rather than an Incident or Hazard.

Gemini can now personalise images using your Google data

2026-04-20
Tech Advisor
Why's our monitor labelling this an incident or hazard?
The article details a new AI feature that personalizes image generation using user data, which involves an AI system. However, it does not report any realized harm or incident resulting from this feature. The privacy concerns are acknowledged and addressed, but no violation or harm is described. Therefore, this is not an AI Incident or AI Hazard. It is a general product announcement with some contextual information about privacy and user control, which fits the definition of Complementary Information as it provides context and updates about AI system use and governance without describing harm or plausible harm.

Google Photos scans your library to power new AI image tools

2026-04-20
Diaspora Digital Media (DDM News)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI) that processes personal photo data to generate content, which fits the definition of an AI system. However, the article does not report any direct or indirect harm resulting from this AI use, such as privacy breaches, data misuse, or violations of rights. The privacy risks are potential and the feature is opt-in with user controls, indicating plausible future risks but no current incident. Therefore, this qualifies as an AI Hazard because the development and use of this AI system could plausibly lead to privacy harms or other incidents if misused or if protections fail, but no harm has yet occurred.