AI-Augmented Cyberattack Compromises 600+ FortiGate Firewalls Globally

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Russian-speaking, financially motivated threat actor used commercial generative AI tools to automate and scale cyberattacks, compromising over 600 FortiGate firewalls across 55 countries in early 2026. The attacker exploited weak credentials and exposed management ports, demonstrating how AI lowers the barrier for large-scale cybercrime.[AI generated]
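The summary above notes the attacker relied on internet-exposed management ports. As a purely defensive illustration (the hostname in the usage comment is a hypothetical placeholder, not from the reporting), a minimal self-audit sketch in Python that checks whether a given management port accepts connections from an external vantage point:

```python
import socket


def management_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if `host` accepts a TCP connection on `port`.

    A True result for a firewall's management port, when checked from
    outside the network, suggests the interface is internet-exposed
    and should be restricted to trusted hosts.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable.
        return False


# Usage (hypothetical address):
# exposed = management_port_open("fw.example.com", 443)
```

This only confirms reachability; whether the listening service is actually a management interface, and whether its credentials are strong, still has to be verified separately.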

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI-generated scripts and tools were used by the attacker to parse data, conduct reconnaissance, and facilitate lateral movement within networks. The AI involvement was integral to the attack's success, enabling the hacker to breach hundreds of firewalls and extract sensitive credentials and configurations. The harm includes unauthorized access, data theft, and exploitation attempts, which are direct violations of security and privacy rights and cause harm to property and communities. Since the harm has already occurred and AI was a pivotal factor, this event is classified as an AI Incident.[AI generated]
AI principles
Robustness & digital security
Safety

Industries
Digital security

Affected stakeholders
Business

Harm types
Economic/Property
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Russian hacker uses multiple AI tools to break hundreds of firewalls

2026-02-23
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated scripts and tools were used by the attacker to parse data, conduct reconnaissance, and facilitate lateral movement within networks. The AI involvement was integral to the attack's success, enabling the hacker to breach hundreds of firewalls and extract sensitive credentials and configurations. The harm includes unauthorized access, data theft, and exploitation attempts, which are direct violations of security and privacy rights and cause harm to property and communities. Since the harm has already occurred and AI was a pivotal factor, this event is classified as an AI Incident.

Amazon says Russian-speaking hacker used AI to hit one of world's most deployed network firewalls in 5 weeks - The Times of India

2026-02-23
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used in the attacker's operations to automate and scale cyberattacks, which directly led to harm including unauthorized access to organizations' Active Directory environments, credential theft, and targeting backup infrastructure. These actions constitute harm to property and organizations and represent a breach of security rights. The AI's role was pivotal in enabling a single attacker to achieve operational scale previously requiring a larger skilled team. Hence, this is an AI Incident due to realized harm caused by AI-augmented cybercrime.

AWS says 600+ FortiGate firewalls hit in AI-augmented attack

2026-02-23
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI tools) used in the development and execution of cyberattacks, which directly led to harm by compromising network security and enabling unauthorized access to sensitive systems. The AI's role was pivotal in automating and scaling the attack, which caused realized harm to multiple organizations across many countries. This fits the definition of an AI Incident because the AI system's use directly led to violations of security and potential breaches of rights, as well as harm to property and communities. The article describes actual harm occurring, not just potential harm, so it is not an AI Hazard or Complementary Information.

600+ FortiGate Devices Hacked by AI-Armed Amateur

2026-02-23
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to facilitate a large-scale cyberattack compromising hundreds of firewall devices. The AI system's use directly contributed to the harm by enabling an unsophisticated actor to scale attacks that resulted in unauthorized access and potential damage to critical infrastructure and data. This fits the definition of an AI Incident because the AI system's use directly led to violations of security and harm to property and organizational operations. The harm is realized, not just potential, and the AI system's role is pivotal in the attack's scale and success.

AI Let 'Unsophisticated' Hacker Breach 600 FortiGate Firewalls, AWS Says, As AI Lowers 'The Barrier' For Threat Actors

2026-02-23
CRN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI and large language models) used by attackers to enhance their capabilities and scale the cyberattack. The AI's role was pivotal in planning, tool development, and operational assistance, directly leading to the compromise of numerous firewalls and networks, credential theft, and potential ransomware threats. These outcomes represent realized harm to property and security, fulfilling the criteria for an AI Incident. The report details actual harm caused, not just potential risk, and the AI system's involvement is central to the incident.

AI-Assisted Threat Actor Targets 600 FortiGate Firewalls

2026-02-23
TechNadu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by the threat actor to conduct and scale cyberattacks, which directly led to unauthorized access and data breaches affecting critical infrastructure devices (FortiGate firewalls). The use of AI in generating scripts, analyzing network data, and planning lateral movement demonstrates AI's pivotal role in causing harm. The harm includes compromise of network security, exposure of sensitive credentials, and potential disruption of infrastructure management, fitting the definition of an AI Incident. The harm is realized, not just potential, and the AI system's involvement is central to the attack's success.

Hacker used commercial AI to breach 600 firewalls, AWS reveals

2026-02-23
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of commercial generative AI systems to conduct cyberattacks that resulted in unauthorized access to numerous firewall devices and internal networks, causing harm to property and organizational security. The AI's role was pivotal in scaling the attack and lowering the skill barrier for the attacker, directly leading to the incident. The harm is realized, not just potential, and the AI system's involvement is central to the incident's occurrence. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Attacker Breached 600 FortiGate Appliances in AI-Assisted Campaign: Amazon

2026-02-23
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI and large language models) in the development and execution of cyberattacks, which directly led to unauthorized access and theft of credentials from critical network infrastructure. This constitutes harm to property and communities (organizations and their data security). The AI's role was pivotal in enabling the attacker to scale and automate the attack, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the breaches occurred and credentials were stolen, making this an AI Incident rather than a hazard or complementary information.

AI-powered hacker breaches 600 FortiGate firewalls, Amazon warns

2026-02-23
Computing
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models and AI-assisted scripts) in the malicious use of AI to conduct cyberattacks. The AI system's use directly led to harm by enabling unauthorized access to critical network infrastructure, theft of sensitive data, and potential disruption of services. This constitutes an AI Incident because the AI's role was pivotal in lowering the technical barrier for large-scale cyber intrusions causing realized harm to property and potentially to communities relying on these networks. The description clearly indicates realized harm, not just potential risk.

Hackers Leveraging Multiple AI Services to Compromise 600+ FortiGate Devices

2026-02-21
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (commercial large language models) used as operational tools by the threat actor to conduct and scale cyberattacks. The AI's role was pivotal in automating reconnaissance, credential extraction, and attack planning, which directly led to the compromise of numerous devices and networks. The harms include breaches of security, unauthorized access to sensitive data, and disruption of critical IT infrastructure, fitting the definition of an AI Incident. The AI's involvement was through its use in the attack, and the harms are realized, not just potential; hence this is classified as an AI Incident.

AI-powered campaign compromises 600 FortiGate systems worldwide

2026-02-23
Security Affairs
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (commercial generative AI tools and large language models) to automate and scale cyberattacks, which directly led to the compromise of over 600 FortiGate devices across 55 countries. The harm includes unauthorized access, credential theft, and potential ransomware deployment, which constitute harm to property and disruption of critical infrastructure management. The AI's role was pivotal in enabling an unsophisticated actor to conduct widespread attacks at scale. This fits the definition of an AI Incident because the AI system's use directly led to significant harm.

AI helps novice threat actor compromise FortiGate devices in dozens of countries

2026-02-23
Cybersecurity Dive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by a threat actor to conduct cyberattacks that resulted in the compromise of numerous devices and theft of sensitive data. The AI's role was pivotal in enabling an unsophisticated actor to scale attacks and cause harm. The harms include unauthorized access, data theft, and potential ransomware deployment, which are direct harms to property and organizational security. Hence, this meets the criteria for an AI Incident as the AI system's use directly led to realized harm.

Threat group leverages LLMs to compromise 600 FortiGate firewalls

2026-02-23
SC Media
Why's our monitor labelling this an incident or hazard?
The article explicitly states that commercial generative AI (LLMs) were used to automate and scale cyberattacks, enabling a low-skilled attacker to compromise numerous firewalls by exploiting weak security configurations. This directly led to harm (unauthorized access to critical infrastructure devices), fulfilling the criteria for an AI Incident. The AI system's role was pivotal in the attack's scale and success, and the harm is materialized, not just potential. Therefore, this event is classified as an AI Incident.

Suspected abuse of generative AI: over 600 FortiGate devices compromised across 55 countries worldwide

2026-02-25
ITmedia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (commercial large language models) to automate and enhance cyberattacks, which directly led to unauthorized access and credential theft affecting numerous organizations worldwide. The harms include violations of security and property, and the AI's role was central in enabling the scale and sophistication of the attack. This meets the criteria for an AI Incident as the AI system's use directly contributed to significant harm.

WithSecure begins providing EDR/MDR through Fujifilm Business Innovation's IT Expert Services, strengthening the two companies' partnership

2026-02-25
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-related cybersecurity technologies (EDR and MDR) that use advanced monitoring and detection capabilities, which can be reasonably inferred to involve AI systems. However, there is no indication of any harm caused or any incident involving these AI systems. The content is about the launch and strengthening of security services to prevent cyber threats, which is a governance and societal response to AI-related cybersecurity challenges. Thus, it fits the definition of Complementary Information rather than an Incident or Hazard.

Zoho/ManageEngine, which researches and develops its own AI models and LLMs, envisions a world of "zero false positives"

2026-02-25
@IT
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems explicitly, including AI models and LLMs developed in-house for cybersecurity and IT operations. However, it does not report any actual harm, malfunction, or misuse of these AI systems. The content centers on the company's AI research, product features, and plans to reduce false positives and improve security operations. There is no indication that these AI systems have caused or could plausibly cause harm. Thus, the event does not meet the criteria for AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on AI development and deployment in cybersecurity without reporting any harm or risk of harm.

2,516 targets in 106 countries reconnoitered with Claude Code and other tools: attack aimed at FortiGate comes to light

2026-02-25
ITmedia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs such as Claude Code) integrated into the attack workflow, automating key stages of the cyber intrusion. The AI's role was pivotal in organizing reconnaissance data, generating attack plans, and executing commands that led to unauthorized access and data exfiltration. The harms include breaches of confidentiality, integrity, and availability of critical infrastructure and corporate networks, affecting multiple countries and sectors. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harms including violations of rights and harm to property and communities (organizations and their stakeholders).

Acronis publishes its Cyberthreats Report for the second half of 2025: cyberattacks surge with the rise of AI-driven threats alongside phishing and ransomware

2026-02-26
CNET
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used by cybercriminals to enhance and scale attacks such as ransomware and social engineering, which have directly caused harm to organizations and individuals. The report documents actual incidents and impacts, not just potential risks, fulfilling the criteria for an AI Incident. The AI's role is pivotal in accelerating and expanding the scope of cyberattacks, leading to significant harm including financial loss and disruption of critical infrastructure sectors. Hence, it is not merely a hazard or complementary information but a confirmed AI Incident.

Acronis publishes its Cyberthreats Report for the second half of 2025: cyberattacks surge with the rise of AI-driven threats alongside phishing and ransomware

2026-02-26
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by cybercriminals to enhance and accelerate attacks, including ransomware negotiations and social engineering scams that have caused actual harm. The harms include financial damage, disruption of business operations, and psychological harm to victims. Since these harms are realized and directly linked to the use of AI in cyberattacks, this qualifies as an AI Incident under the OECD framework. The report is not merely a warning or potential risk but documents ongoing, active harms caused by AI-enabled cyber threats.

Sponsoring the "BOXIL EXPO Corporate IT & Security Exhibition, Spring 2026"

2026-02-26
CNET
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI incident or harm that has occurred, nor does it report a particular AI hazard event causing or plausibly leading to harm. Instead, it provides complementary information about AI-related cybersecurity risks and governance measures, including references to official reports and upcoming security evaluation frameworks. The focus is on education, awareness, and preparedness rather than reporting a new incident or hazard. Therefore, it fits the definition of Complementary Information.

Over 600 FortiGate devices compromised: what actions should AWS users and corporate IT teams take?

2026-02-25
TechTarget Japan
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI tools and AI-assisted scripting) in the malicious use phase to automate and scale cyberattacks against FortiGate devices. The attackers used AI to generate scripts and automate credential abuse and reconnaissance, which directly led to unauthorized access and data theft, causing harm to property and organizational security. The article explicitly states the use of AI in the attack process, and the harm has materialized with over 600 devices compromised. This fits the definition of an AI Incident, as the AI system's use directly led to harm.

Ransomware attacks in 2025, a threat amplified by AI, and key countermeasures (2): the emerging threat of ransomware attacks that abuse AI

2026-02-25
Mynavi News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI is used to create realistic phishing emails and to assist attackers in discovering and exploiting software vulnerabilities faster, which has led to increased ransomware attacks causing data loss, business disruption, and financial damage to companies. These harms fall under injury to groups of people (businesses and their stakeholders), harm to communities, and violations of rights (e.g., data breaches). The AI system's role is pivotal in enabling these attacks, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

"Breaches begin outside the company": threats lurking in the cloud and at business partners, and the security redesign now required

2026-02-25
@IT
Why's our monitor labelling this an incident or hazard?
While the article clearly involves AI systems in the context of cybersecurity threats and defenses, it does not report a concrete AI Incident or AI Hazard. There is no description of realized harm caused by AI systems, nor a specific event where AI use plausibly led to harm. Instead, the article provides a broad overview of risks, evolving threats, and strategic responses, which fits the definition of Complementary Information. It enhances understanding of AI's role in cybersecurity threats and the necessary governance and operational responses without reporting a new incident or hazard.

Attackers' use of AI is now "par for the course": Japan ranks third worldwide in ransomware detection rate

2026-02-28
ITmedia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that attackers are using AI to enhance the effectiveness and scale of ransomware and phishing attacks, which have resulted in thousands of incidents worldwide and significant harm to targeted organizations, including managed service providers and critical sectors like manufacturing and healthcare. This constitutes direct harm caused by AI-enabled malicious use, fitting the definition of an AI Incident. The report also highlights the need for improved detection and mitigation, indicating that harm is ongoing and significant.