
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Cybercrime group Hive0163 deployed AI-generated malware, Slopoly, in ransomware attacks during early 2026. Developed with large language models, Slopoly enabled persistent unauthorized access and data theft, demonstrating how AI accelerates malware creation and amplifies harm in extortion campaigns. IBM X-Force researchers uncovered the incident.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (a large language model) being used to develop malware that was then deployed in real ransomware attacks. The AI system's involvement in the malware's development and use directly led to harm: financial extortion, data theft, and persistent unauthorized access. This fits the definition of an AI Incident, since the AI system's use directly harmed persons and groups through financial loss and the disruption of systems.[AI generated]