
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Villager is an AI-powered penetration testing tool, developed by the Chinese group Cyberspike, that automates and orchestrates cyberattacks using natural language processing and agentic AI. Rapidly adopted since July 2025, it enables both legitimate security researchers and malicious actors to conduct sophisticated, evasive attacks at scale, raising significant cybersecurity and regulatory concerns.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system Villager is explicitly described as an AI-native pentesting tool that automates offensive security operations, which establishes AI system involvement. Its widespread adoption, including by likely threat actors, and its association with malware and hacking groups directly enable malicious cyber campaigns, causing harm to security and potentially to human rights, property, and communities. Because the AI system's use is directly linked to realized harm, this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]