
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A cybersecurity firm's AI agent exploited a vulnerability in McKinsey's internal AI platform, Lilli, gaining unauthorized access within two hours to 46.5 million employee chat messages, 728,000 sensitive file names, and internal organizational data. McKinsey patched the flaw; no client data was compromised.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous AI agent) that was used to attack another AI system (McKinsey's Lilli platform). The agent's autonomous actions directly led to unauthorized access to sensitive internal data, including chat messages, file names, and user accounts. This constitutes a clear harm to property and a potential breach of confidentiality and intellectual property rights. The harm is not hypothetical or merely potential: the breach occurred and was demonstrated. The event therefore qualifies as an AI Incident, as realized harm resulted from the AI system's use and from the exploited vulnerability.[AI generated]