Hackers Use AI-Generated Code to Obfuscate Malware in Phishing Attacks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hackers leveraged AI-generated code to obfuscate malware payloads in phishing campaigns, primarily targeting US organizations. The AI-created code mimicked legitimate business documents and dashboards, making detection difficult and enabling credential theft and data breaches. Microsoft researchers identified the sophisticated use of AI as key to the attacks' success.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system used by attackers to develop sophisticated phishing malware; Microsoft Security Copilot identified the malicious code as AI-generated. The malware's use directly caused harm by stealing login credentials and tracking users, fulfilling the criteria for an AI Incident. The AI's role in generating complex obfuscation was pivotal to the phishing attack's success, leading to realized harm.[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety, Transparency & explainability

Industries
Digital security

Affected stakeholders
Workers, Business

Harm types
Economic/Property, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Watch out - hackers are using AI to make phishing emails even more convincing

2025-09-26
TechRadar

AI Hackers Craft Phishing Emails and Hide Malware in SVG Files

2025-09-26
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the malicious creation of phishing emails and malware obfuscation, which has directly led to harm through data theft and increased phishing success rates. The article details realized harms such as credential theft and successful phishing attacks, which constitute harm to individuals and communities. Therefore, this qualifies as an AI Incident because the AI system's use in the attack campaign is a direct contributing factor to the harm caused.
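Several of these reports describe malware smuggled inside SVG attachments that masquerade as document previews. As an illustrative sketch only (not any vendor's actual detection logic), a mail filter can cheaply flag SVG files that carry script-capable elements; the tag list and function name here are assumptions made for the example.

```python
import xml.etree.ElementTree as ET

# Assumed heuristic: an SVG posing as a static document has no business
# carrying executable content such as <script> or <foreignObject>.
SUSPICIOUS_TAGS = {"script", "foreignObject"}

def flag_suspicious_svg(svg_text: str) -> list[str]:
    """Return the suspicious tag names found in an SVG payload."""
    root = ET.fromstring(svg_text)
    found = []
    for elem in root.iter():
        # Drop the XML namespace, e.g. '{http://www.w3.org/2000/svg}script'
        tag = elem.tag.rsplit("}", 1)[-1]
        if tag in SUSPICIOUS_TAGS:
            found.append(tag)
    return found

sample = (
    '<svg xmlns="http://www.w3.org/2000/svg">'
    '<script>/* obfuscated payload */</script>'
    '</svg>'
)
print(flag_suspicious_svg(sample))  # ['script']
```

A real gateway would need to handle malformed XML, CDATA tricks, and event-handler attributes (`onload` etc.); this sketch only shows why script-bearing SVGs are an easy first-pass signal.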

Hackers Obfuscated Malware With Verbose AI Code

2025-09-24
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the malware code was likely generated by a large language model or AI tool, with detailed analysis showing AI characteristics in the code structure and naming. The AI-generated code was used to obfuscate malicious payloads, directly contributing to the phishing attack's success. The phishing campaign involves deception, credential theft, and malware delivery, which are harms to individuals and organizations. Since the AI system's use directly led to realized harm through malware obfuscation and phishing, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
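The "verbose AI code" observation above — unusually long, descriptive identifiers in the obfuscation layer — suggests a crude heuristic. The following is a hypothetical signal sketched for illustration, not the analysts' actual method: hand-obfuscated scripts tend toward short or random names, while the LLM-generated samples described in reporting used verbose, business-flavored ones.

```python
import re
from statistics import mean

def mean_identifier_length(js_source: str) -> float:
    """Average length of identifier-like tokens (3+ chars) in a script body.

    An unusually high mean is only a weak, illustrative signal of
    machine-generated code, never proof on its own.
    """
    idents = re.findall(r"[A-Za-z_$][A-Za-z0-9_$]{2,}", js_source)
    return mean(len(i) for i in idents) if idents else 0.0

terse = "var a=q(x);function q(x){return x^k}"
verbose = ("function decodeBusinessDashboardPayload(encodedInvoiceData)"
           "{return transformEncodedSegments(encodedInvoiceData)}")
print(mean_identifier_length(terse) < mean_identifier_length(verbose))  # True
```

In practice such a score would be one feature among many (entropy, string tables, call-graph shape), since attackers can trivially rename identifiers.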

Hackers Leverage AI-Generated Code to Obfuscate Its Payload and Evade Traditional Defenses - IT Security News

2025-09-26
IT Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated code by hackers to conceal malware in phishing attacks. This is a clear example of AI system use (AI-generated code) in a malicious context that directly leads to harm (cybersecurity breaches, potential data theft, and harm to users). Therefore, it qualifies as an AI Incident due to the realized harm caused by the AI system's use in cybercrime.

Hackers Leverage AI-Generated Code to Obfuscate Its Payload and Evade Traditional Defenses

2025-09-26
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated code was used to obfuscate malware payloads, which directly facilitated a phishing campaign targeting organizations and individuals, leading to credential theft. This is a clear case where the AI system's use directly led to harm (credential theft and potential broader security breaches). The involvement of AI in generating the malicious code and the resulting realized harm meets the criteria for an AI Incident rather than a hazard or complementary information.

Microsoft security team blocked phishing emails using AI-generated attachments disguised as PDFs

2025-09-30
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves AI on both sides: attackers used AI-generated code to create obfuscated phishing attachments, and defenders used AI systems to detect and block the attack. The phishing campaign aimed to harvest credentials, a violation of rights and a security harm. The attack was active and targeted real organizations, so harm was realized, meeting the criteria for an AI Incident. The AI system's development and use directly contributed to the harm, even if mitigated. Therefore, this is classified as an AI Incident.

Microsoft Sniffs Out AI-Based Phishing Campaign Using Its AI-Based Tools

2025-09-29
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (large language model) by attackers to generate phishing code, which directly led to a cybersecurity threat aimed at stealing credentials, a form of harm to persons. The AI system's use in the attack is explicit and central to the incident. The campaign was detected and blocked, but the phishing attempt itself constitutes an AI Incident because the AI system's use directly led to a harmful event. The article also discusses the use of AI by defenders, but the primary focus is on the AI-enabled phishing attack and its detection, which fits the definition of an AI Incident.

AI-Generated Code Used in Phishing Campaign Blocked by Microsoft

2025-09-29
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The phishing campaign involved AI-generated code to obfuscate malicious payloads, which is a direct use of an AI system in a harmful context. The attack targeted organizations and attempted to deceive users into providing credentials, which is a clear harm to people and organizations. Even though the attack was blocked, the event describes an actual harmful use of AI, not just a potential risk. Therefore, it qualifies as an AI Incident due to the realized malicious use of AI-generated code in a phishing attack.

Microsoft Uncovers AI-Obfuscated Phishing in SVG Files Mimicking PDFs

2025-09-29
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs) used maliciously to generate obfuscated phishing code, which directly led to harm through credential theft. The AI system's use in crafting the attack was pivotal to its success and evasion of traditional detection, fulfilling the criteria for an AI Incident. The report also details the harm caused and the AI system's role in the attack's development and use, not just potential harm or general AI-related news. Hence, it is classified as an AI Incident.

Microsoft Reveals AI-Driven Phishing Campaign Targeting US Organizations

2025-09-29
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI (large language models) was used to generate complex malicious code that bypasses security filters, resulting in phishing attacks targeting organizations and leading to credential theft. This is a direct harm caused by the AI system's use in the attack. The harm includes violations of security and privacy, which fall under harm to communities and property. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm through cybercrime.

Microsoft Thwarts AI-Driven Phishing Attack Hiding in SVG Files

2025-10-01
WebProNews
Why's our monitor labelling this an incident or hazard?
The phishing attack involved AI-generated obfuscated scripts hidden in SVG files, which directly aimed to steal credentials, constituting harm to users' security and privacy (a violation of rights). The AI system was used maliciously to create sophisticated payloads that traditional defenses might miss, and the attack was active until blocked. This meets the criteria for an AI Incident because the AI system's use directly led to harm (credential theft attempts) and the event describes realized harm rather than just potential risk. Microsoft's AI-driven defense is a response but does not negate the incident classification.

Novel AI-powered phishing campaign uncovered

2025-09-30
SC Media
Why's our monitor labelling this an incident or hazard?
The phishing campaign explicitly involves the use of an AI system (LLM) to generate malicious SVG files used to deceive victims and steal credentials. This use of AI directly leads to harm through cybercrime, including unauthorized access to business email accounts and potential data breaches. The harm is realized, not just potential, as the campaign has been active and targeted organizations. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm.