AI Coding Assistants Drive Surge in Secret Leaks on GitHub


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In 2025, commits co-authored by AI coding assistants, notably Claude Code, leaked secrets on public GitHub at roughly twice the rate of commits by human developers. GitGuardian reported a 34% year-over-year increase, with nearly 29 million secrets exposed, escalating security risks for organizations and digital infrastructure worldwide.[AI generated]
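The secrets in question are typically credentials hardcoded into source files and committed publicly. As a rough illustration of how scanners flag them (the patterns below are simplified assumptions for this sketch, not GitGuardian's actual detection rules, and the key value is AWS's documented example placeholder):

```python
import re

# Simplified illustrative patterns; real secret scanners use hundreds of
# provider-specific rules plus entropy and validity checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]", re.IGNORECASE
    ),
}

def find_secrets(source: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs found in a source snippet."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((name, match.group(0)))
    return hits

# A snippet with a hardcoded credential (AWS's public example key, not real).
snippet = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"'
print(find_secrets(snippet))  # → [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

Pattern matching like this is cheap enough to run as a pre-commit hook, which is one common mitigation for the leak rates described in the report.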

Why's our monitor labelling this an incident or hazard?

The report explicitly links AI-assisted code commits to a doubling of secret leak rates, with concrete numbers: 29 million secrets leaked in 2025 and a 34% year-on-year increase. The AI systems (e.g., Claude Code) are directly involved in generating code that contains exposed credentials, a clear security violation that can lead to harm such as breaches and unauthorized access. The harm is realized and ongoing, not merely potential. Because the AI systems' use has directly led to significant harm, namely cybersecurity breaches and exposure of sensitive information, the event meets the criteria for an AI Incident.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property
Reputational
Public interest

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


Over 29 million secrets were leaked on GitHub in 2025, and AI really isn't helping

2026-03-18
TechRadar
Why's our monitor labelling this an incident or hazard?
The report explicitly links AI-assisted code commits to a doubling of secret leak rates, with concrete numbers: 29 million secrets leaked in 2025 and a 34% year-on-year increase. The AI systems (e.g., Claude Code) are directly involved in generating code that contains exposed credentials, a clear security violation that can lead to harm such as breaches and unauthorized access. The harm is realized and ongoing, not merely potential. Because the AI systems' use has directly led to significant harm, namely cybersecurity breaches and exposure of sensitive information, the event meets the criteria for an AI Incident.

GitGuardian Reports an 81% Surge of AI-Service Leaks as 29M Secrets Hit Public GitHub

2026-03-17
Markets Insider
Why's our monitor labelling this an incident or hazard?
The report explicitly links AI-assisted coding to a higher rate of secret leaks, which directly contributes to security vulnerabilities and potential breaches. The leaked secrets (credentials, tokens, keys) are critical assets, and their exposure can lead to harm such as unauthorized access, data theft, and disruption of services. The AI system's use in development is a contributing factor to these leaks, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as millions of secrets have been exposed, increasing risk to organizations and users. Hence, this is not merely a hazard or complementary information but an incident involving AI systems causing harm.

GitGuardian Reports an 81% Surge of AI-Service Leaks as 29M Secrets Hit Public GitHub

2026-03-17
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The report explicitly links the increased secret leak rate to the use of AI-assisted coding tools, indicating that the AI system's use has indirectly led to a significant security harm (exposure of secrets). This exposure can cause violations of privacy and intellectual property rights, fitting the definition of an AI Incident. The harm is realized and documented, not merely potential, and the AI system's involvement is clear and causal in the increased leak rate.

The State of Secrets Sprawl 2026: AI-Service Leaks Surge 81% and 29M Secrets Hit Public GitHub

2026-03-17
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-assisted coding and AI service secrets as key factors in the increased leakage of sensitive credentials. These leaks have already occurred and are measurable, with millions of secrets exposed publicly and internally, which can and do lead to security incidents and breaches. The harm includes violations of security and privacy, which fall under harm to property and communities. The AI systems' role is pivotal as they accelerate software creation and increase the surface area for leaks. The article also notes that human workflows and decisions contribute, but the AI tools' involvement is central to the scale and nature of the problem. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

News alert: GitGuardian study shows AI coding tools double leak rates as 29M credentials hit GitHub

2026-03-18
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI coding tools) whose use has directly led to a significant increase in leaked credentials, a form of harm to property and organizational security. The leaked secrets represent a clear security vulnerability that can be exploited, constituting harm. The AI systems' role is pivotal as the leak rates in AI-assisted code are roughly double the baseline, indicating that AI use is a contributing factor to the harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and linked to AI system use.

GitGuardian Reports an 81% Surge of AI-Service Leaks as 29M Secrets Hit Public GitHub

2026-03-17
Tech News | Startups News
Why's our monitor labelling this an incident or hazard?
The report details how AI-assisted development tools have doubled the secret leak rate compared to baseline, leading to millions of exposed credentials including AI service tokens. These leaks directly increase the risk of compromise and harm to organizations' digital assets, fulfilling the criteria for harm to property and communities. The AI systems' use and their role in accelerating secret exposure are central to the incident. Therefore, this is classified as an AI Incident due to the realized harm caused by AI system use leading to security breaches.

The State of Secrets Sprawl 2026: AI-Service Leaks Surge 81% and 29M Secrets Hit Public GitHub

2026-03-17
GitGuardian Blog - Take Control of Your Secrets Security
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the form of AI-assisted coding tools that have accelerated software development and contributed to a higher rate of secret leaks. The harm is realized as these leaked secrets can be exploited by attackers, causing security breaches and harm to organizations and communities. The article details concrete data on leaked secrets, valid credentials exposed, and the persistence of these vulnerabilities, indicating actual harm rather than just potential risk. The AI systems' involvement is indirect but pivotal, as they influence the pace and scale of software creation, increasing the attack surface and risk of exposure. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI coding assistants twice as likely to leak secrets, as overall leaks rise 34%

2026-03-18
SC Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI coding assistants co-authoring commits that leak secrets at a higher rate than human developers, indicating AI system involvement in the development and use phases. The leaked secrets represent a clear harm to property and potentially to critical infrastructure security, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as millions of secrets have been leaked, some still active, and internal repositories are also affected. The AI system's role is pivotal in increasing the risk and occurrence of these leaks, justifying classification as an AI Incident.