AI Hallucinations Exploited to Spread Malware

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Recent studies and expert warnings reveal that AI coding tools can hallucinate non-existent package names. Malicious actors may exploit this by uploading fake packages to official repositories, posing significant risks to software security. Researchers emphasize the need for proactive measures to prevent potential exploitation of these AI-generated errors.[AI generated]
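Because the attack begins with a package name that does not (yet) exist on the index, one practical first check is to resolve every AI-suggested dependency against the registry before installing it. Below is a minimal sketch against PyPI's public JSON API; the suggestion names in the loop are hypothetical stand-ins for assistant output, not names taken from the reports.

```python
# Minimal sketch: resolve an AI-suggested dependency against PyPI before
# installing it. The suggestion names below are hypothetical examples.
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # never published -> likely hallucinated
            return False
        raise

for suggestion in ["requests", "fastjson-utils-pro"]:  # hypothetical assistant output
    if package_exists_on_pypi(suggestion):
        print(f"{suggestion}: found on PyPI")
    else:
        print(f"{suggestion}: NOT on PyPI -- possible hallucination")
```

Note the limitation the coverage implies: once an attacker has registered the hallucinated name, the package does exist, so an existence check alone only catches names nobody has squatted yet. Provenance signals such as package age or download history are needed on top.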

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (large language models used as code assistants) whose hallucinated outputs (non-existent package names) are exploited by attackers to distribute malware, causing harm to software supply chains. The harm is realized, not just potential, as malicious packages have been published and installed, leading to security risks. The AI system's malfunction (hallucination) is a direct contributing factor to the incident. This fits the definition of an AI Incident because it involves harm to property and communities (software supply chain security) directly linked to AI system use and malfunction.[AI generated]
AI principles
Robustness & digital security, Safety, Transparency & explainability, Accountability, Privacy & data governance, Respect of human rights

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers, Business, General public

Harm types
Economic/Property, Reputational, Public interest, Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security, Research and development

AI system task:
Content generation, Interaction support/chatbots

Articles about this incident or hazard

AI code suggestions sabotage software supply chain

2025-04-12
theregister.com

AI threats in software development revealed

2025-04-13
ScienceDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) used in software development that generate hallucinated package names, which attackers can exploit to distribute malware. This leads to direct harm to users' property and security. The researchers demonstrate that this is a real and ongoing issue, not just a theoretical risk, with quantified data on hallucination rates and examples of how the attack works. The AI system's malfunction (hallucination) and its use in code generation directly cause the harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI Hallucinations Create "Slopsquatting" Supply Chain Threat

2025-04-14
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs generating code) and describes how their hallucinations, when maliciously exploited, have directly led to a new supply chain attack vector. The harm is realized or highly plausible given the widespread use of AI-generated code and the potential for malicious packages to be introduced into software projects, causing harm to property (software integrity) and communities (users relying on compromised software). The article details the mechanism and the supporting research evidence, confirming the AI system's role in causing this harm. Therefore, this is classified as an AI Incident.

Slopsquatting: The Emerging Supply Chain Threat Fueled by AI Hallucinations

2025-04-14
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of generative AI systems (LLMs) that hallucinate package names, which attackers exploit to deliver malicious code through supply chain attacks. The harm includes potential and realized injury to organizations' operations, data breaches, and reputational damage, fulfilling the criteria for harm to property, communities, or organizations. The AI system's malfunction (hallucination) and its exploitation directly lead to these harms. The article describes actual incidents and their impacts, not just potential risks, so it is an AI Incident rather than a hazard or complementary information.

Slopsquatting: One in five AI code snippets contains fake libraries

2025-04-14
THE DECODER
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI models like ChatGPT-4 and others) producing code snippets that include fake library names. This AI behavior indirectly leads to a plausible security risk where attackers can exploit these hallucinated names to distribute malicious code. Although no specific harm has yet been reported, the described scenario clearly outlines a credible risk of harm to software supply chains and users relying on AI-generated code. Therefore, this qualifies as an AI Hazard because the AI system's malfunction (hallucination) could plausibly lead to an AI Incident involving harm to property or communities through cyberattacks.
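THE DECODER's headline figure, roughly one in five snippets referencing fake libraries, suggests a simple way to reproduce this kind of measurement: collect the package names an assistant suggests and count how many fail to resolve on the index. A minimal sketch, with a hypothetical sample list rather than data from the study:

```python
# Minimal sketch: estimate what share of assistant-suggested package names
# actually resolve on PyPI. The sample list is hypothetical, not study data.
import urllib.error
import urllib.request

def exists(name: str) -> bool:
    """Return True if the project resolves on PyPI."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError:
        return False

suggested = ["numpy", "flask", "hyperstream-parser", "pandas", "quicknet-ai"]
missing = [n for n in suggested if not exists(n)]
print(f"{len(missing)}/{len(suggested)} unresolved "
      f"({len(missing) / len(suggested):.0%}): {missing}")
```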

"Slopsquatting" attacks are using AI-hallucinated names resembling popular libraries to spread malware

2025-04-14
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves generative AI systems hallucinating software package names that could be exploited by cybercriminals to trick developers into installing malicious packages. Although no confirmed incidents have occurred, the article clearly outlines a plausible future harm scenario where AI's behavior could lead to malware distribution and consequent harm to users and systems. Therefore, this qualifies as an AI Hazard because it describes a credible risk of harm stemming from AI system use, but no realized harm has yet occurred.
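The TechRadar piece stresses that hallucinated names often resemble popular libraries. That resemblance can itself serve as a warning signal: flag any suggestion that is a near miss of a well-known package. A minimal sketch using the standard library's difflib; the "popular" list and the sample suggestions are illustrative stand-ins, not a vetted allowlist:

```python
# Minimal sketch: flag suggested dependencies whose names are near misses of
# popular packages -- a typosquatting-style signal. The POPULAR list is an
# abbreviated stand-in, not a vetted allowlist.
import difflib

POPULAR = ["requests", "numpy", "pandas", "scipy", "django", "flask", "pillow"]

def near_misses(name: str, cutoff: float = 0.8) -> list[str]:
    """Popular packages that `name` closely resembles without matching exactly."""
    matches = difflib.get_close_matches(name.lower(), POPULAR, n=3, cutoff=cutoff)
    return [m for m in matches if m != name.lower()]

for suggestion in ["reqeusts", "numpy", "pandsa"]:  # hypothetical assistant output
    hits = near_misses(suggestion)
    if hits:
        print(f"'{suggestion}' resembles {hits}; verify before installing")
```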

New GenAI Supply Chain Threat: Code Package Hallucinations

2025-04-15
The Cyber Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (code-generating LLMs) whose malfunction (hallucination of non-existent packages) directly leads to a security vulnerability that attackers can exploit, harming software supply chains and the property and communities that rely on software integrity. The researchers' findings and the described exploitation scenario indicate that harm is either occurring or highly plausible, meeting the criteria for an AI Incident. The event is not merely a potential risk (hazard) or a general update; it documents a concrete security issue with real-world implications and evidence of exploitation potential.
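Since the exploitation scenario hinges on an attacker registering a previously non-existent name, a package's age is a useful heuristic: a dependency an assistant recommends that only appeared on the index days ago fits the slopsquatting profile. A minimal sketch against PyPI's JSON API; the 30-day threshold and the example name are assumptions for illustration, not parameters from the research:

```python
# Minimal sketch: a package-age heuristic. A name that an assistant suggests
# but that only appeared on PyPI very recently fits the slopsquatting profile.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

def first_upload(name: str) -> datetime | None:
    """Return the timestamp of the project's earliest file upload on PyPI."""
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
        data = json.load(resp)
    times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return min(times) if times else None

uploaded = first_upload("requests")  # substitute an assistant-suggested name
if uploaded and datetime.now(timezone.utc) - uploaded < timedelta(days=30):
    print("Project is under 30 days old: treat the suggestion with suspicion.")
else:
    print(f"Earliest upload: {uploaded}")
```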

AI Code Tools Widely Hallucinate Packages

2025-04-14
darkreading.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (code-generating LLMs) and their use in generating fictitious package names. The hallucinations have directly led to harm by enabling attackers to upload malicious packages under those names, which developers might unknowingly install, resulting in security compromises. This fits the definition of an AI Incident because the AI system's use has directly led to harm to property and communities through security breaches. The study also highlights the systemic nature of the problem and its real-world consequences, confirming the incident classification rather than a mere hazard or complementary information.