
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
GitHub Copilot, an AI coding assistant, faces backlash from developers over a security vulnerability (CVE-2025-53773) that could allow remote code execution, an increase in code errors, and intrusive integration that disrupts workflows. These issues raise significant privacy, security, and productivity concerns among users.[AI generated]
Why is our monitor labelling this an incident or hazard?
GitHub Copilot is an AI system that generates code suggestions using AI models. The article reports a security vulnerability (CVE-2025-53773) that could allow remote code execution, a direct harm to users' security (harm to persons and property). The system's suggestions have also been linked to increased bugs (41% more errors), indicating harm to software quality and developer productivity. Its forced, intrusive integration further harms users by disrupting workflows and raising privacy concerns. Together, these factors demonstrate realized harms caused directly or indirectly by the AI system's use and malfunction, fitting the definition of an AI incident.[AI generated]