AI-Powered Cyberattacks Target Autonomous AI Agents and Enterprises

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

CrowdStrike's 2025 Threat Hunting Report reveals that cybercriminals and nation-state actors are weaponizing generative AI to scale attacks, automate malware creation, and conduct sophisticated social engineering. Adversaries are also targeting autonomous AI agents within enterprises, leading to breaches, credential theft, and malware deployment, significantly expanding the attack surface and causing real-world harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI by threat actors to scale and accelerate attacks, including attacks on autonomous AI agents, amounts to direct malicious use of AI systems, causing harms such as disruption of critical infrastructure and violations of security. This constitutes an AI Incident because the AI systems' use has directly led to harmful cybersecurity events.[AI generated]
AI principles
Robustness & digital security, Safety, Privacy & data governance, Respect of human rights, Accountability

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Business, Workers

Harm types
Economic/Property, Reputational, Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Threat Actors Using AI to Scale Operations, Accelerate Attacks and Attack Autonomous AI Agents - IT Security News

2025-08-04
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The use of AI by threat actors to scale and accelerate attacks, including attacks on autonomous AI agents, amounts to direct malicious use of AI systems, causing harms such as disruption of critical infrastructure and violations of security. This constitutes an AI Incident because the AI systems' use has directly led to harmful cybersecurity events.

Threat Actors Exploit AI to Scale Attacks and Target Autonomous Agents - IT Security News

2025-08-04
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The use of AI by adversaries to enhance cyberattacks constitutes the malicious use of AI systems leading to harm. The article describes realized harm through scaled attacks and targeting of autonomous AI agents, which can disrupt enterprise operations and potentially critical infrastructure. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing harm through malicious use.

2025 CrowdStrike Threat Hunting Report: Adversaries weaponise and target AI at scale - Express Computer

2025-08-04
Express Computer
Why's our monitor labelling this an incident or hazard?
The report explicitly states that adversaries are using AI systems maliciously to accelerate attacks and target autonomous AI agents, leading to direct harms such as credential theft and malware deployment. The involvement of AI in both the attack methods and the targeted systems, combined with the resulting harms, meets the criteria for an AI Incident rather than a hazard or complementary information.

2025 CrowdStrike threat hunting report: Adversaries weaponize and target AI at scale

2025-08-05
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The event involves the use and targeting of AI systems (GenAI and autonomous AI agents) by threat actors to conduct cyberattacks that have directly led to harm such as unauthorized access and malware deployment in over 320 companies. This fits the definition of an AI Incident because the AI systems' use and exploitation have directly caused harm to organizations' security and operations, which can be considered harm to property and disruption of critical infrastructure.

Weaponized AI is making hackers faster, more aggressive, and more successful

2025-08-04
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously by hackers to conduct more effective cyberattacks, including exploiting vulnerabilities in AI agent-building tools. This constitutes the use and misuse of AI systems leading to harm in the form of cybersecurity breaches, which can be considered harm to property and potentially to communities relying on secure digital infrastructure. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to realized harms through cyber intrusions and malware deployment.

Hackers weaponize GenAI to boost cyberattacks - BetaNews

2025-08-04
BetaNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and exploitation of AI systems (GenAI and autonomous AI agents) by threat actors to conduct and scale cyberattacks, including insider threats and malware deployment. These actions have directly caused harm through unauthorized access, malware, ransomware, and disruption of enterprise operations, fitting the definition of an AI Incident. The AI systems' development and use have been weaponized maliciously, leading to realized harm rather than just potential risk.

CrowdStrike: Threat Actors Increasingly Lean on AI Tools

2025-08-04
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI models, AI code assistants, deepfake technologies) by malicious actors to conduct cyberattacks and social engineering that have already caused harm. The report details concrete incidents of AI-enabled infiltration and exploitation, including unauthorized access to company data and use of AI vulnerabilities for remote code execution. These harms fall under violations of rights and harm to communities and property. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are realized and ongoing.

Agentic AI a target-rich zone for cyber attackers in 2025 | Compute...

2025-08-04
Computer Weekly
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit: generative AI and autonomous AI agents are both weaponized and targeted by attackers. The malicious use of AI to automate insider attacks, create deepfakes, and craft phishing emails directly leads to harms such as breaches of organizational security, violations of privacy, and harm to communities through propaganda. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The article reports ongoing attacks and realized harms, not just potential risks, so it is classified as an AI Incident.

Adversaries Weaponize and Target AI at Scale

2025-08-04
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used maliciously by adversaries to automate and scale cyberattacks, including insider threats, phishing, malware generation, and ransomware deployment. It also notes attackers exploiting AI agents as new attack surfaces, leading to unauthorized access and persistence in enterprise environments. These activities constitute direct or indirect harm to organizations and potentially critical infrastructure, fitting the definition of an AI Incident. The harms are realized, not merely potential, as evidenced by operational malware and successful attacks. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

CrowdStrike: AI-Powered Adversaries Target Autonomous AI Agents

2025-08-04
CIOL
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (autonomous AI agents and generative AI) and their use and misuse in cyberattacks. The adversaries' actions have directly led to harm in the form of unauthorized access, credential theft, and malware deployment, which disrupt enterprise operations and compromise security. This fits the definition of an AI Incident as the AI systems' development, use, or malfunction has directly or indirectly led to harm (disruption of critical infrastructure and violation of security).

Threat Actors Using AI to Scale Operations, Accelerate Attacks and Attack Autonomous AI Agents

2025-08-04
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems, including generative AI and machine learning algorithms, are being weaponized by threat actors to conduct cyberattacks that have already caused harm to over 320 companies. The use of AI-generated malware, deepfakes, and AI-driven social engineering directly contributes to breaches and unauthorized access, fulfilling the criteria for an AI Incident due to realized harm (violations of rights and harm to communities). The AI involvement is central and pivotal to the attacks' success, not merely potential or speculative.

AI revolution: Hackers increasingly taking advantage of GenAI tools to code malware and more

2025-08-04
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by hackers to create malware and phishing attacks, as well as AI systems being targeted and compromised to deploy malware and gain unauthorized access. These actions have directly caused harm through cyber intrusions and malicious campaigns. The involvement of AI in both the offensive tools and as targets of attacks is clear, and the harms described (cybersecurity breaches, malware deployment, misinformation) fit within the definitions of AI Incidents. Hence, this event qualifies as an AI Incident due to realized harm caused by AI system use and exploitation.

North Korean Hackers Are Using AI to Get Jobs at U.S. Companies and Steal Data

2025-08-05
Inc.
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI is used by North Korean adversaries to automate and scale cyberattacks, including fraudulent job acquisition and data theft. These actions have directly led to harm, including financial losses and security breaches affecting numerous companies. The AI system's role is pivotal in enabling these sophisticated attacks, fulfilling the criteria for an AI Incident as the harm is realized and directly linked to AI use.

Threat Actors Increasingly Leaning on GenAI Tools

2025-08-05
ITPro Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI (an AI system) by malicious actors to improve their offensive cyber operations. While no specific harm is detailed as having already occurred, the increased sophistication and scale of attacks enabled by AI tools plausibly lead to significant harms such as cyber intrusions, phishing, and scams. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to AI Incidents involving harm to persons, rights, or infrastructure.

CrowdStrike report shows AI tools both targeted and used in new cyberattacks | Back End News

2025-08-06
Back End News
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI systems (Generative AI, AI agents) being used maliciously and targeted by attackers, resulting in actual cyberattacks causing harm to organizations' security and operations. These harms include unauthorized access, malware and ransomware deployment, and exploitation of AI systems themselves. This fits the definition of an AI Incident because the AI systems' use and compromise have directly or indirectly led to significant harms (violations of security, harm to property and operations).

2025 CrowdStrike Threat Hunting Report: Adversaries Weaponize and Target AI at Scale

2025-08-04
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI, autonomous AI agents) being used maliciously by threat actors to conduct cyberattacks that have resulted in actual harm, including credential theft, ransomware deployment, and operational disruption. The AI systems are both tools for attackers and targets themselves, and their exploitation has led to realized harms such as property damage, disruption, and violations of security. The detailed examples of attacks and their consequences confirm that harm has occurred, not merely that it could occur, ruling out classification as an AI Hazard or Complementary Information. Hence, the classification is AI Incident.

Generative AI becomes both a weapon and a target of the most advanced cyberattacks | Silicon

2025-08-04
Silicon
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, autonomous agents) being used offensively by cybercriminals to cause harm (phishing, ransomware, malware) and also being attacked themselves, leading to unauthorized access and persistence. These events have directly led to harms such as security breaches, operational disruption, and increased cybercrime. The involvement of AI in both causing and being targeted in these attacks meets the definition of an AI Incident, as the harms are realized and the AI systems' role is pivotal.

Cybercriminals are already exploiting AI agents and using AI...

2025-08-04
Europa Press
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how cybercriminal groups are using AI systems, including generative AI and autonomous AI agents, to carry out harmful cyber operations that have materialized in increased cyberattacks, credential theft, and malware deployment. These harms fall under violations of rights and harm to property and communities. The AI systems' development and use have directly led to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Cybercriminals are already using generative AI to scale attacks and compromise autonomous agents

2025-08-04
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The involvement of generative AI and autonomous AI agents in cyberattacks is explicitly described, with concrete examples of harm including unauthorized access, ransomware deployment, and sophisticated phishing attacks. These harms fall under violations of security and potentially disruption of critical infrastructure (cloud services and AI agents supporting business operations). Therefore, this event qualifies as an AI Incident because the development and use of AI systems have directly led to realized harms through cybercrime activities.

Hackers use AI to scale cyberattacks globally

2025-08-04
El Vocero de Puerto Rico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and large language models by hacker groups to automate and enhance cyberattacks, including phishing and malware deployment. These activities have already resulted in increased successful intrusions and credential theft, which constitute harm to property, communities, and potentially critical infrastructure. The AI systems' development and use are directly linked to these harms, qualifying this event as an AI Incident under the OECD framework.

Hackers use generative AI and autonomous agents in cyberattacks - Fortuna Web

2025-08-04
FORTUNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, autonomous agents, large language models) being used by cybercriminal groups to carry out attacks that have already occurred, causing harm such as misinformation, unauthorized access, and targeted phishing. These harms fall under violations of rights and harm to communities, and possibly disruption of critical infrastructure. Since the harms are realized and AI is a pivotal factor in these attacks, this qualifies as an AI Incident rather than a hazard or complementary information.

The new Trojan horse of cyberwarfare: "Every AI agent is a superhuman identity"

2025-08-05
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI, large language models) being used by malicious actors to perpetrate cybercrimes and misinformation campaigns. These uses have resulted in actual intrusions, ransomware attacks, and propaganda dissemination, which constitute harms to property, organizations, and communities. Therefore, the event qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.

AI agents, the new arsenal of cybercriminals

2025-08-06
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI, large language models, AI agents) being used maliciously by cybercriminals to conduct attacks that have caused harm, including fraud, phishing, malware deployment, and cloud intrusions. The harms are direct and ongoing, fulfilling the criteria for an AI Incident. The AI systems are central to the attack methods and the resulting harms, not merely potential or hypothetical risks. Hence, this is an AI Incident rather than a hazard or complementary information.

This is how AI became a weapon in the hands of hackers

2025-08-24
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how generative AI systems are being used by hackers to create malware and conduct cyberattacks, which directly cause harm to data security and privacy. This constitutes a violation of rights and harm to communities through cybercrime. The involvement of AI in these malicious activities is clear and ongoing, making this an AI Incident rather than a potential hazard or complementary information.

AI opens the door to a new generation of cyberattacks

2025-08-24
Hespress
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems, particularly generative AI and large language models, are being used by malicious actors to create malware, fake identities, and conduct cyber intrusions. These activities have already resulted in actual cyberattacks and data theft, which are harms to property and communities. The AI involvement is central to the incident, as it enables attackers to innovate and scale their operations. Therefore, this is an AI Incident due to realized harm caused by AI-enabled cyberattacks.

How hackers in China and North Korea are using AI to launch attacks

2025-08-24
Al-Araby Al-Jadeed
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative AI used to create malware, fake identities, and phishing attacks) whose use has directly led to harms such as data theft, malware infections, and cybercrime escalation, which constitute violations of rights and harm to communities. The article describes realized harms caused by AI-enabled cyberattacks, not just potential risks. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to significant harms.

A warning about the dangers of AI-powered hacking

2025-08-24
Al Bayan
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-powered malware and AI-generated deceptive content (e.g., fake resumes and interviews) have been used by threat actors to successfully infiltrate organizations and steal data. This constitutes direct harm caused by AI systems in use. The harms include violations of privacy, security breaches, and potential economic and reputational damage to victims, fitting the definition of an AI Incident. The involvement of AI is clear and central to the harm described, not merely a potential or future risk, thus excluding classification as a hazard or complementary information.

Generative AI... "malicious software"

2025-08-24
Al Bayan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems being used by hackers to create malware and steal information, which involves AI system use leading to potential harm to property and information security. Although specific harms are not detailed as having occurred in a particular event, the report indicates that AI-enabled cyberattacks are already happening or imminent, constituting a plausible risk of harm. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents involving harm to property and data security.

Generative AI opens a new era of cybercrime

2025-08-25
Arab 48
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how generative AI is being used by malicious actors to develop advanced malware and conduct cyberattacks that have already occurred, causing harm such as data theft and misinformation campaigns. These harms fall under property and community harm categories. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The call for defensive AI agents further supports the recognition of ongoing harm and response.

Beware of the danger of AI-powered hacking tools and how to protect yourself - Nabaa Al-Arab

2025-08-25
Nabaa Al-Arab
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems, particularly generative AI and large language models, are being used by malicious actors to create malware, fake identities, and disinformation campaigns that have already caused harm such as data breaches and fraud. The involvement of AI is central to the incident, enabling new forms of cybercrime that impact individuals, companies, and communities. The harms include violations of privacy and security, theft of credentials, and societal disruption through misinformation, fitting the definition of an AI Incident. The article does not merely warn of potential future harm but documents ongoing malicious use of AI, confirming realized harm.