Exposed Google API Keys Enable Unauthorized Access to Gemini AI and Data

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers discovered that legacy Google Cloud API keys, previously considered safe to embed in public code, now grant unauthorized access to Gemini AI endpoints. This exposes private data and lets attackers run up significant charges against the key owners' accounts, affecting thousands of organizations, including Google itself. The incident highlights a critical security vulnerability in Google's AI integration.[AI generated]
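The mechanics behind the exposure are simple: Google Cloud API keys are bearer credentials, so a key scraped from public code can be replayed verbatim against the Gemini REST endpoint, with usage billed to the key owner's project. A minimal sketch of the request a key holder could construct (the endpoint path follows Google's documented Generative Language API; the model name and key value are illustrative placeholders):

```python
import json

# Google API keys are bearer credentials: possession of the key is the only
# thing the endpoint checks, so a key embedded in public code authorizes
# requests (and the resulting billing) for anyone who finds it.
def build_gemini_request(api_key: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a Gemini generateContent call."""
    url = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/gemini-2.0-flash:generateContent?key={api_key}"
    )
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]}).encode()
    return url, body
```

The key travels as a plain query parameter, which is why embedding one in client-side or open-source code is equivalent to publishing the credential itself.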

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google's generative AI service Gemini AI) and its integration with cloud API keys. The misuse of these keys can lead to unauthorized access to AI services, resulting in potential harm such as data exposure (harm to property and possibly to communities) and financial damage (mounting AI bills). This constitutes harm directly linked to the use of an AI system, fulfilling the criteria for an AI Incident.[AI generated]
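Since the rationale above turns on misused credentials, one standard mitigation is scanning code for Google's well-known key format (the literal prefix "AIza" followed by 35 URL-safe characters) before it is published. A minimal sketch, not any specific vendor's scanner:

```python
import re

# Google API keys share a recognizable shape: "AIza" plus 35 characters
# drawn from [0-9A-Za-z_-]. A pre-publish scan for that pattern catches
# most accidental embeddings before they reach a public repository.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_exposed_keys(source: str) -> list[str]:
    """Return all substrings of `source` that look like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(source)
```

Scanning complements, but does not replace, applying API and application restrictions to the keys themselves so that a leaked key cannot reach unintended services.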
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security

Affected stakeholders
Business

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

Business function
ICT management and information security

AI system task
Content generation


Articles about this incident or hazard

Generative AI Rollout Exposes Hidden Risk in Google Cloud API Keys

2026-02-28
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI service Gemini AI) and its integration with cloud API keys. The misuse of these keys can lead to unauthorized access to AI services, resulting in potential harm such as data exposure (harm to property and possibly to communities) and financial damage (mounting AI bills). This constitutes harm directly linked to the use of an AI system, fulfilling the criteria for an AI Incident.
Hackers Could Exploit Exposed Google API Keys to Access Gemini AI

2026-02-27
Windows Report | Error-free Tech Life
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, Gemini AI, accessed via Google API keys. The exposure and misuse of these keys can directly lead to unauthorized access to AI services and private data, constituting a violation of data privacy and potential financial harm. Although no specific harm is reported as having occurred yet, the article documents active exploitation risks and the potential for significant harm. Therefore, this qualifies as an AI Incident because the misuse of the AI system has directly or indirectly led to harm or risk of harm, including unauthorized data access and financial loss. The company's mitigations and disclosures are complementary information but do not negate the incident classification.
Public Google API keys can be used to expose Gemini AI data

2026-02-27
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini AI) and describes how the misuse of API keys can directly lead to unauthorized access to AI data and financial harm, which fits the definition of an AI Incident. The harm includes violations of data security and potential financial damage, significant harms in which the AI system's role is pivotal. The incident stems from the use and misconfiguration of AI system credentials, which turned a latent risk into realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Google API Keys Expose Private Data Silently Through Gemini

2026-02-27
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Gemini AI API) and describes how the misuse or misconfiguration of API keys leads to unauthorized access to private AI data and billable AI services, causing direct harm including privacy breaches and financial damage. The involvement of the AI system is central to the incident, and the harms are realized, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Google API keys for Gemini AI pose security risk

2026-02-27
SC Media
Why's our monitor labelling this an incident or hazard?
The Gemini AI assistant is an AI system, and the exposed API keys enable unauthorized access to it, leading to potential financial harm and data breaches. The misuse of these keys directly results in harm, fulfilling the criteria for an AI Incident. The article reports harm already realized through the misuse of AI authentication keys, not just potential future harm or general information, so it is classified as an AI Incident.
Thousands of Google accounts could be misused by hackers: Report

2026-02-28
The News International
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of API keys granting access to an AI system (Gemini AI endpoints). The misuse has directly led to financial harm (unauthorized billing charges) and potential data exposure, fulfilling the criteria for harm to property and possibly communities. The AI system's role is pivotal as the API keys provide access to AI endpoints. The harm is realized, not just potential, as evidenced by reported cases of large unauthorized charges. Hence, this is classified as an AI Incident.
Publicly Exposed Google Cloud API Keys Gain Unintended Access to Gemini Services - IT Security News

2026-03-01
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Google's Gemini AI platform, accessed via compromised API keys. The misuse of these keys has directly led to harms including unauthorized data access (harm to property and possibly privacy rights) and financial harm due to quota abuse. The involvement of AI services in the harm and the realized consequences meet the criteria for an AI Incident. Although Google has taken mitigation steps, the harm has already occurred, so this is not merely a hazard or complementary information.