US Military Deploys AI-Enabled LUCAS Suicide Drones Against Iran

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US military, via its Task Force Scorpion Strike, deployed AI-enabled LUCAS suicide drones—reverse-engineered from Iran’s Shahed-136—in combat against Iranian targets. These autonomous, low-cost drones were used for the first time in large-scale strikes, demonstrating direct harm caused by AI systems in military operations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI services from Anthropic in a military attack involving advanced weapons and suicide drones. The suicide drones are AI-enabled systems used in combat, which directly relates to harm through lethal military action. The involvement of AI in the operation, even if the exact role is not fully detailed, is clearly linked to the use of autonomous or semi-autonomous weaponry causing or capable of causing injury or death. This fits the definition of an AI Incident because the AI system's use in the attack has directly led to harm in a conflict setting.[AI generated]
AI principles
Accountability
Respect of human rights

Industries
Government, security, and defence
Robots, sensors, and IT hardware

Affected stakeholders
Government
General public

Harm types
Physical (death)
Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

US Strike on Iran Deploys Advanced Weapons, Suicide Drones Used in Combat for the First Time | 聯合新聞網

2026-03-02
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI services from Anthropic in a military attack involving advanced weapons and suicide drones. The suicide drones are AI-enabled systems used in combat, which directly relates to harm through lethal military action. The involvement of AI in the operation, even if the exact role is not fully detailed, is clearly linked to the use of autonomous or semi-autonomous weaponry causing or capable of causing injury or death. This fits the definition of an AI Incident because the AI system's use in the attack has directly led to harm in a conflict setting.

US Deploys 'LUCAS' Against Iran for the First Time; Former Defense Official Reveals the Key to Winning Future Battlefields

2026-03-03
中時新聞網
Why's our monitor labelling this an incident or hazard?
The LUCAS drones are AI systems as they are autonomous or semi-autonomous combat drones capable of making decisions to attack targets. Their deployment in the airstrike directly led to the death of an Iranian leader, which is injury or harm to persons. This meets the criteria for an AI Incident because the AI system's use directly caused harm. The article does not merely discuss potential or future harm but reports on an actual military operation with lethal consequences involving AI systems.

Airstrikes on Iran: US Deploys Cheap, Easy-to-Build 'Shahed-Clone' Kamikaze Drones

2026-03-01
中時新聞網
Why's our monitor labelling this an incident or hazard?
The "LUCAS" kamikaze drones are described as having high autonomy, long-range navigation, and networked swarm capabilities, all indicative of AI systems. Their use in active combat operations against Iran directly leads to harm, including physical destruction and potential loss of life, fulfilling the criteria for an AI Incident. The article details actual deployment and use, not just potential or future risks, so it is not merely a hazard or complementary information. The harm is direct and material, stemming from the AI system's use in warfare.

'LUCAS' Drone's First Deployment: Low Cost, High Effectiveness

2026-03-01
中時新聞網
Why's our monitor labelling this an incident or hazard?
The "LUCAS" drone is an AI system as it involves autonomous or semi-autonomous decision-making capabilities, real-time data sharing, and electronic warfare resistance, which are indicative of AI use. Its deployment in a military operation that resulted in lethal strikes and destruction constitutes direct harm to persons and communities, fulfilling the criteria for an AI Incident. The article explicitly states the drone's use in combat and the resulting harm, so this is not merely a potential hazard or complementary information but a realized incident involving AI.

US Strike on Iran Deploys Advanced Weapons, Suicide Drones Used in Combat for the First Time | International | 中央社 CNA

2026-03-02
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI services in a military operation that has caused harm through attacks on Iranian targets. The AI system's role, while not fully specified, is linked to the development or use phase of the operation. The harm includes injury or harm to people and damage to property, fulfilling the criteria for an AI Incident. The presence of AI in the operation and the resulting harm meet the definition of an AI Incident rather than a hazard or complementary information.

US Military Uses New 'LUCAS' Suicide Drone in Combat for the First Time - International

2026-03-01
星洲日报
Why's our monitor labelling this an incident or hazard?
The 'LUCAS' drone is an AI system as it is an autonomous or semi-autonomous unmanned combat attack system capable of making decisions to engage targets. Its use in a real military strike directly leads to harm (injury, death, destruction), fulfilling the criteria for an AI Incident. The article explicitly states its deployment in combat, confirming realized harm linked to the AI system's use. Hence, it is not merely a hazard or complementary information but an incident involving AI causing harm.

US Military's 'Reverse-Engineered' Drone Makes Combat Debut Striking Iran | Suicide | Clone

2026-03-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The LUCAS drones are AI systems as they operate autonomously with capabilities for coordination and network-centric tactics, indicating AI-driven decision-making. Their deployment in a military strike against Iran has directly led to harm through physical attacks, fulfilling the criteria for an AI Incident. The event involves the use of AI systems in causing harm, not just potential harm, and thus is classified as an AI Incident rather than a hazard or complementary information.

US Military's 'Reverse-Engineered' Drone Makes Combat Debut Striking Iran | Suicide | Clone

2026-03-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they have autonomous operational capabilities and coordination features indicative of AI use. Their deployment in a military strike that causes harm to targets in Iran meets the definition of an AI Incident, as the AI system's use has directly led to harm (physical destruction and potential injury or death). Although specific details of damage or casualties are not disclosed, the nature of suicide drones used in combat implies realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

UK, France, Germany, Italy, and Poland Invest in Developing a Low-Cost Autonomous-Drone Air Defense System

2026-03-02
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of autonomous drone systems for military defense, which are AI systems by definition due to their autonomous operational nature. While the article does not report any actual harm or incident, the deployment of such systems in conflict zones could plausibly lead to AI incidents involving injury, disruption, or other harms. Therefore, this event fits the definition of an AI Hazard, as it describes the credible potential for harm stemming from the use of AI-enabled autonomous weapons systems in the near future.

A Taste of Their Own Medicine: US Military's Reverse-Engineered 'Shahed' Drone Makes Combat Debut Against Iran

2026-03-01
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system used in a military attack, which has directly caused harm by destroying enemy infrastructure. The article explicitly states the deployment of this AI-enabled weapon system in combat, indicating realized harm. The harm includes damage to property and disruption of military operations, fitting the definition of an AI Incident. The AI system's development and use are central to the event, and the harm is actual, not just potential.

[News Close-Up] The Ford Disappears: Inside the US Military's Big Gamble | Suicide Drones | Dual-Carrier Deployment | Attack on Iran | 新唐人电视台

2026-03-01
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The LUCAS suicide drone is an AI system as it autonomously identifies and attacks targets, representing an AI system's use in a military context. Its deployment in actual combat has directly led to harm (destruction, potential casualties), fulfilling the criteria for an AI Incident. The article details the operational use and effects of this AI system, not just potential risks or future hazards. Other content about military strategy and political statements does not change the classification but provides context. Therefore, this event is classified as an AI Incident.

Fighting Fire with Fire: US Uses Suicide Drones Against Iran for the First Time | International Focus | International | 經濟日報

2026-03-02
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-enabled autonomous or semi-autonomous drones (AI systems) in active military operations causing direct harm to persons and property, fulfilling the criteria for an AI Incident. The drones are described as expendable, low-cost, and capable of attack missions, indicating AI system use in causing harm. The article explicitly states these drones have been deployed in combat against Iran, so harm is realized, not just potential. Therefore, this is an AI Incident due to the direct use of AI systems in causing harm through military strikes.

US Military Copies Iranian Drone, 'Returns the Favor' in Strike on the Revolutionary Guard - International

2026-03-01
明報新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the form of autonomous suicide drones with long-range operational capabilities. These drones are reverse-engineered and mass-produced for military use, and their deployment in attacks has directly led to harm to persons and military infrastructure. The article explicitly states their use in combat operations causing damage, which fits the definition of an AI Incident as the AI system's use has directly led to harm. Therefore, this is classified as an AI Incident.

US Military's First Combat Use of the LUCAS Drone: Iranian Technology Reverse-Engineered and Turned Against It | TVBS新聞網

2026-03-01
TVBS
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is described as a low-cost, long-range attack drone developed through reverse engineering an Iranian drone. Such drones typically incorporate AI systems for navigation, targeting, and attack execution. The article states that the US military has deployed these drones in actual combat operations, which directly involves the use of AI systems causing harm through military strikes. This fits the definition of an AI Incident because the AI system's use has directly led to harm (injury or harm to persons in conflict).

Revealed: US Confirms First Combat Use of LUCAS Kamikaze Drones in Airstrikes on Iran

2026-03-02
Gamereactor China
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous or semi-autonomous unmanned aerial vehicle designed for attack missions. Its deployment in an airstrike that targets military infrastructure and personnel directly leads to harm, fulfilling the criteria for an AI Incident. The article describes actual use and harm caused by the AI system, not just potential or hypothetical risks, so it is not an AI Hazard or Complementary Information. The involvement of AI in the drone's operation and the resulting harm from its use in combat clearly classify this as an AI Incident.

Iran Once Developed Drones of Every Kind; America's 'Copy' Becomes Its New Weapon | International | 三立新聞網 SETN.COM

2026-03-01
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous drones capable of searching and attacking targets without human intervention) in active military operations, which directly causes harm through lethal attacks. The article explicitly mentions the autonomous attack capability of the drones, indicating AI system involvement. The harm is realized as these weapons are used in combat, causing injury or death. Therefore, this qualifies as an AI Incident due to the direct use of AI-enabled autonomous weapons causing harm.

US Military's 'Reverse-Engineered' Drone Makes Combat Debut Striking Iran

2026-03-01
botanwang.com
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is described as having autonomous operational capabilities and networked coordination, indicating AI system involvement. Its use in a military strike causing physical harm to targets constitutes direct harm caused by an AI system. Therefore, this event qualifies as an AI Incident due to the direct use of an AI-enabled weapon system in combat resulting in harm.

US Military Admits to Using New Equipment

2026-03-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The LUCAS drone system is described as having autonomous coordination and advanced tactical capabilities, indicating AI system involvement. Its deployment in combat and the resulting strikes on targets constitute direct harm to persons and property. The article explicitly states that the U.S. military has used these AI-enabled drones in real combat, fulfilling the criteria for an AI Incident due to direct harm caused by the AI system's use.

Epic Operation, US Military Excels on Both Offense and Defense! LUCAS Suicide Drone Makes Combat Debut; Iran's Chinese- and Russian-Made Interception Systems Fail - 民視新聞網

2026-03-02
民視新聞網
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous or semi-autonomous unmanned aerial vehicle used for attack missions. Its deployment in combat and use in military strikes directly leads to harm (injury or death) and destruction, fulfilling the criteria for an AI Incident. The article reports on actual use in warfare, not just potential or future risks, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the AI system's use has directly caused harm in a military conflict context.

A Copy of Iran's 'Kamikaze Drone' Makes Its Combat Debut Against Iran Itself | 壹蘋新聞網

2026-03-01
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI-enabled autonomous or semi-autonomous drone system in active combat, directly causing harm to persons and property. The LUCAS drone's design is based on the Iranian Shahed-136, which uses pre-programmed coordinates for targeting, indicating AI or algorithmic decision-making in its operation. The use of these drones in warfare has directly led to harm, fulfilling the criteria for an AI Incident. Although the article does not detail specific casualties, the deployment of lethal autonomous drones in combat inherently involves injury or harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

B-2s and F-35s Strike in Concert: US Military Reveals the Full Picture of Operation Fury's First Day of Air Combat - 自由軍武頻道

2026-03-03
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI-enabled systems such as suicide drones (LUCAS) and electronic warfare aircraft that use sensor fusion and signal collection to conduct attacks and suppress enemy defenses. These systems' use in active combat operations has directly led to harm, including destruction of infrastructure and ongoing conflict. The AI systems' development and use have directly contributed to harm, fulfilling the criteria for an AI Incident under the OECD framework.

A Surprise Move Against Iran! US LUCAS Drone Makes Combat Debut; Experts Reveal the Key to Battlefield Victory | TVBS新聞網

2026-03-03
TVBS
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous combat drone capable of targeting and attacking without direct human control. Its use in the airstrike directly led to injury and death, fulfilling the criteria for harm to persons. Therefore, this event qualifies as an AI Incident because the development and use of the AI system directly caused significant harm in a military conflict context.

US Suicide Drone LUCAS, Like Iran's Shaheds, Has These Unique Qualities

2026-03-03
NDTV
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an uncrewed combat attack system with autonomous capabilities. Its deployment in combat and use in strikes means it has directly led to harm (injury or death) as part of warfare. This fits the definition of an AI Incident because the AI system's use has directly caused harm to persons and communities. The article does not merely discuss potential harm or future risks but confirms actual use in combat, indicating realized harm.

Video: Did US Kamikaze Drone Fail Mission? Iraqis Play With Intact LUCAS

2026-03-03
NDTV
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI-enabled kamikaze drone used in combat, implying AI system involvement in autonomous attack missions. The drone's failure to reach its target and being found intact indicates a malfunction or unsuccessful use of the AI system. The event involves the use and malfunction of an AI system in a military operation, which is directly linked to harm in the context of armed conflict. Hence, it meets the criteria for an AI Incident as the AI system's malfunction has directly led to a failure in a combat mission, which is a form of harm in this context.

US debuts suicide drone in Iran after fast-tracked Pentagon procurement

2026-03-03
The Indian Express
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it involves autonomous or semi-autonomous decision-making for combat attack missions. Its deployment in Iran and use in combat means the AI system's use has directly led to harm (injury or death, destruction of property). The article explicitly states the drone was used in combat, indicating realized harm rather than potential harm. Hence, this is an AI Incident rather than a hazard or complementary information. The rapid deployment and use in combat operations meet the criteria for an AI Incident involving harm to persons and communities.

Russian war bloggers are watching the US' new LUCAS drones -- and fretting about Starshield terminals

2026-03-03
Business Insider
Why's our monitor labelling this an incident or hazard?
The LUCAS drone system uses AI-enabled satellite communication terminals (Starshield) to maintain jam-resistant, precise control during combat operations. This technology directly enhances the effectiveness of military strikes, which are inherently harmful to people and communities. The article discusses the actual deployment of these drones in combat against Iran, indicating realized harm rather than just potential harm. The involvement of AI in the drone's guidance and communication system is explicit and central to the event. Hence, this qualifies as an AI Incident due to the direct link between the AI system's use and harm in warfare.

LUCAS Drone Revolutionizes Modern Warfare Strategy | Technology

2026-03-03
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous or semi-autonomous drone capable of carrying diverse payloads and communications, reflecting AI-enabled military technology. The article does not mention any actual harm caused by the drone yet, but the deployment of such AI-enabled weapon systems in conflict zones plausibly leads to injury, harm to communities, or property damage. Since no harm has yet been reported but the risk is credible and inherent, this event qualifies as an AI Hazard rather than an AI Incident.

US debuts suicide drone in Iran after fast-tracked Pentagon procurement

2026-03-03
Reuters
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it involves autonomous or semi-autonomous decision-making in combat operations. Its deployment in Iran for lethal strikes means the AI system's use has directly led to harm (physical injury or death), fulfilling the criteria for an AI Incident. The article explicitly states the drone was used in combat, implying realized harm. Although the article also discusses development and procurement aspects, the key factor is the actual use of the AI system in combat causing harm, which takes precedence over potential hazards or complementary information.

US Debuts Suicide Drone in Iran After Fast-Tracked Pentagon Procurement

2026-03-03
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous or semi-autonomous combat drone capable of making decisions about strikes. Its deployment in Iran for combat operations means it is being used in a context where harm to persons and property is occurring or highly likely. The article explicitly states the drone has been used in combat, which directly involves the AI system in causing harm. This meets the criteria for an AI Incident, as the AI system's use has directly led to harm: (a) injury or harm to persons, and (d) harm to property or communities. The rapid deployment and use in a conflict zone further emphasize realized harm rather than a potential hazard. Hence, the classification is AI Incident.

US Launched Kamikaze Drones Against Iran, Reflecting Lessons Learned From Ukraine

2026-03-03
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The kamikaze drones described are AI systems as they perform autonomous attack functions. Their deployment in combat operations has directly caused harm to military targets, fulfilling the criteria for an AI Incident. The article explicitly states their use in strikes and shows evidence of crashed drones, confirming active use rather than hypothetical risk. Therefore, this event is classified as an AI Incident due to the direct involvement of AI systems in causing harm during military operations.

US debuts low-cost suicide drone LUCAS in Iran after Pentagon procurement

2026-03-03
Economic Times
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous combat drone capable of conducting lethal strikes. Its deployment in combat in Iran means the AI system's use has directly led to harm (physical injury or death) to persons or groups, which is a clear AI Incident under the framework. The article explicitly states the drone was used in combat, indicating realized harm rather than potential harm. Therefore, this is an AI Incident involving the use of an AI system causing direct harm.

$35K kamikaze drone modelled on Tehran's own Shahed design -- all about LUCAS, deployed by US against Iran

2026-03-04
ThePrint
Why's our monitor labelling this an incident or hazard?
LUCAS is an AI system as it incorporates advanced autonomy, target identification, and swarming capabilities, which are indicative of AI-driven decision-making in real-time combat scenarios. Its deployment in active combat, where it strikes targets and detonates, directly causes harm to persons and communities, fulfilling the criteria for an AI Incident. The article explicitly states its use in combat operations, confirming realized harm rather than potential harm. Hence, this event qualifies as an AI Incident due to the direct involvement of an AI system causing harm through its lethal autonomous functions.

US-Israel, Iran War: What Is LUCAS, The One-Way Drone US Launched In Operation Epic Fury?

2026-03-03
Republic World
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it autonomously identifies and attacks targets without recovery, indicating AI-based decision-making and control. Its deployment in combat operations against Iran has directly led to harm through military strikes, fulfilling the definition of an AI Incident due to injury or harm to persons and harm to property. The article explicitly states the operational use of these autonomous drones in combat, confirming realized harm rather than potential harm. Therefore, this event is classified as an AI Incident.

Iran a testing ground for new weapons - The Tribune

2026-03-03
The Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous attack drones) in active combat, which directly leads to harm through strikes and attacks. The drones' autonomous capabilities and operational deployment in warfare meet the definition of an AI system causing direct harm (injury, death, destruction). The article explicitly states the use of these AI-enabled weapons in combat, confirming realized harm rather than potential harm. Hence, it is classified as an AI Incident.

US Serves Iran Taste Of Its Own Medicine In Kamikaze Attacks

2026-03-03
The Daily Caller
Why's our monitor labelling this an incident or hazard?
The LUCAS drones are AI systems as they are autonomous or semi-autonomous combat drones making decisions to conduct attacks. Their deployment in active conflict and use in kamikaze-style attacks directly involves AI systems causing harm to persons and property. The article explicitly states their use in combat and their role in delivering attacks, which meets the criteria for an AI Incident. The harm is direct and realized, not merely potential, and the AI system's use is central to the event. Hence, the classification as AI Incident is appropriate.

Phoenix-made 'kamikaze' drone debuts in U.S. combat against Iran

2026-03-03
AZfamily.com
Why's our monitor labelling this an incident or hazard?
The LUCAS drones are AI systems as they are autonomous or semi-autonomous unmanned combat attack systems. Their deployment in strikes against Iran has directly led to harm (destruction of targets), fulfilling the criteria for an AI Incident. The article describes realized harm from the use of these AI systems in combat, not just potential harm. Although concerns about future misuse are mentioned, the primary focus is on the actual combat use and resulting harm, which classifies this as an AI Incident rather than a hazard or complementary information.

A Taste of Its Own Medicine? US Confirms First Combat Use of Shahed Clones in Iran

2026-03-03
The Defense Post
Why's our monitor labelling this an incident or hazard?
The LUCAS drones are described as having built-in autonomy, indicating AI system involvement. Their use in combat strikes directly leads to harm to persons and property, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but an actual use of AI systems causing harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system's use is central to the event.

US Debuts Suicide Drone in Iran After Fast-Tracked Pentagon Procurement

2026-03-03
GV Wire
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it involves autonomous or semi-autonomous decision-making capabilities for combat attack missions. Its deployment in combat in Iran has directly led to harm through its use as a lethal weapon. The article explicitly states its use in combat, implying realized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing harm through its use in warfare.

Cutting-Edge LUCAS Drone Debuts in Combat: A Game-Changer for U.S. Defense | Technology

2026-03-03
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system given its advanced autonomous capabilities and complex communication systems. Its deployment in combat implies use of AI in a context with high potential for harm (injury, death, escalation of conflict). Although no specific harm or incident is reported yet, the nature of the system and its combat use plausibly lead to AI incidents. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the drone's AI system is central to the event.

US Debuts Suicide Drone in Iran After Fast-Tracked Pentagon Procurement

2026-03-03
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous combat drone capable of making decisions about strikes, which fits the definition of an AI system. Its deployment in combat in Iran has directly led to harm, fulfilling the criteria for an AI Incident. The article explicitly states its use in strikes, implying injury, harm, or destruction. The rapid fielding and operational use in a conflict zone confirm that the AI system's use has caused realized harm, not just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Pentagon Pushes Low-Cost Drone Warfare With Iran Deployment - TV360 Nigeria

2026-03-03
TV360 Nigeria
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an unmanned combat attack system capable of autonomous or semi-autonomous operation. The article focuses on its deployment in combat, which involves the use of AI in a lethal context. Although no specific harm or incident is reported, the deployment of such AI-enabled weapons plausibly leads to injury or harm to persons and communities, fulfilling the criteria for an AI Hazard. There is no indication of an actual incident or realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it highlights a significant development with potential for harm.

US debuts suicide drone in Iran after fast-tracked Pentagon

2026-03-03
The Business Standard
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it involves autonomous or semi-autonomous decision-making capabilities for combat attack missions. Its deployment in combat in Iran means it has directly led to harm to persons or groups, fulfilling the harm criteria for an AI Incident. The article explicitly mentions its use in warfare, which inherently involves injury or harm. The rapid deployment and integration with AI-enabled control software further confirm AI involvement. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

US debuts suicide drone in Iran after fast-tracked Pentagon procurement

2026-03-03
1470 & 100.3 WMBD
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it involves autonomous or semi-autonomous decision-making capabilities for combat attack missions. Its deployment in Iran constitutes the use of an AI system that has directly led to harm (destruction or injury) in a military conflict. The article explicitly states the drone was used in combat, indicating realized harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons or property in a conflict zone.

US debuts suicide drone in Iran after fast-track buy

2026-03-03
The New Arab
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it involves autonomous operation and complex control software enabling combat attacks. Its deployment in combat in Iran means it has directly led to harm to persons and communities, satisfying the definition of an AI Incident. The article does not merely discuss potential risks or future harm but reports actual use in warfare, which involves injury or harm to people. Therefore, this event is classified as an AI Incident.

Men Seen Playing With American Suicide LUCAS Drone In Iraq Amid Airstrikes In Middle East | Watch

2026-03-03
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system used in military operations with autonomous targeting capabilities. The video shows the drone in the hands of civilians, indicating a malfunction or loss of control. While no injury or damage is reported, the event plausibly could lead to harm if the drone is used maliciously or accidentally, fulfilling the criteria for an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it involves a specific AI system and a credible risk of harm.

LUCAS: The U.S. Military Is Using Its Own Version of Iran's Shahed-136 Drones to Swarm Strike Iran

2026-03-03
19FortyFive
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system due to its autonomous coordination and swarm attack capabilities. Its deployment in combat operations has directly caused harm to Iranian military infrastructure, fulfilling the criteria for harm to property and communities. The event involves the use of an AI system in a military context leading to realized harm, thus it is classified as an AI Incident rather than a hazard or complementary information.

BRUTAL CLASH! 'That's A 34200% Rise': Nancy Mace Vs Tim Walz On Autism Fraud In Minnesota

2026-03-04
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled autonomous attack drones used in combat and their development for swarm warfare, which involves AI systems. Although no specific harm or incident is reported, the use and proliferation of such weapons plausibly could lead to significant harm, including injury or death, making this a credible AI Hazard. There is no indication of a realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential risks of these AI systems.

US debuts LUCAS kamikaze drone in Iran after fast-tracked Pentagon procurement

2026-03-04
CNA
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous combat drone capable of making attack decisions. Its deployment in combat in Iran means it has directly led to harm (injury or death) to persons or groups, fulfilling the criteria for an AI Incident. The article explicitly states its use in combat, indicating realized harm rather than potential harm. Hence, this is not merely a hazard or complementary information but an AI Incident.

Viral Video Shows Locals Fiddling With Alleged Intact US LUCAS Kamikaze Drone In Iraq As Iran-US War Escalates

2026-03-04
NewsX
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it autonomously flies and attacks targets. The event involves the use and potential malfunction or capture of such a system. Although no direct harm from this specific drone is reported, the presence of an intact autonomous kamikaze drone in a conflict zone and its handling by locals plausibly could lead to harm, including injury or escalation of conflict. The event does not describe an actual incident of harm caused by the AI system but highlights a credible risk. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The US Military Is Using Iran's Own Drones Against It

2026-03-04
The National Interest
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it operates autonomously and can conduct complex attack missions including swarm tactics. Its deployment in strikes against Iran has directly caused harm to people and property, meeting the definition of an AI Incident. The article explicitly states the drone's autonomous capabilities and its use in combat operations resulting in harm, confirming direct AI involvement in causing harm. Therefore, this event is classified as an AI Incident.

LUCAS - Shahid@USA

2026-03-04
GlobalSecurity.org
Why's our monitor labelling this an incident or hazard?
LUCAS is an AI system as it autonomously navigates and executes strike missions with minimal human input, including networked coordination and swarm tactics. Its operational use in combat has directly led to harm through attacks on military targets, fulfilling the harm criteria (d) harm to property and communities, and potentially (a) injury or harm to persons. The article explicitly states the system was used operationally in a military campaign, causing real harm. Therefore, this event is an AI Incident rather than a hazard or complementary information.

US Ramps Up Drone Warfare, Fires Improved Iranian Weapons Back at Iran

2026-03-06
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an unmanned combat vehicle capable of autonomous or semi-autonomous operation. Its deployment in combat and use in attacks against Iran directly cause harm to persons and communities, fulfilling the criteria for an AI Incident. The article explicitly states the drones have been fired in combat, causing harm, so this is not a potential hazard but an actual incident. The involvement of AI in the drone's operation and the resulting harm from its use in warfare justify classification as an AI Incident.

Captured, redesigned, deployed: How Iran's Shahed inspired America's LUCAS Kamikaze drone

2026-03-05
Zee News
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it performs autonomous loitering, target identification, and attack functions. Its deployment in active conflict zones means it is directly causing harm to people and property, fulfilling the criteria for an AI Incident. The article details the use of this AI system in warfare, which inherently involves injury, harm, and destruction, thus constituting an AI Incident rather than a hazard or complementary information.

Copycat drone wars: Iran copied the US, then the US copied Iran

2026-03-06
Economic Times
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it is an autonomous or semi-autonomous unmanned aerial vehicle capable of loitering and attacking targets, which involves AI for navigation, targeting, and decision-making. The article explicitly states that the drone is being used operationally to strike Iranian targets, so its use has directly led to harm to persons and property (injury or death, destruction of property) in a conflict setting. The article does not merely discuss potential or future harm, nor is it focused on governance or complementary information. Hence, the classification is AI Incident.

Drone boomerang: US turns Iran's own Shahed playbook against Tehran - The Times of India

2026-03-06
The Times of India
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it performs autonomous combat attack functions. Its deployment in active military operations directly leads to harm through strikes on targets, which can cause injury or death and damage to property and communities. The article explicitly states that the drone has been used in combat, indicating realized harm rather than potential harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing harm in a conflict setting.

Has the US reverse-engineered Iran's Shahed drone to deploy a cheaper version in war?

2026-03-06
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of autonomous AI flight controls and GPS-denied inertial navigation in the LUCAS drones, confirming AI system involvement. The drones are actively deployed in combat, causing direct harm to Iranian military installations, which qualifies as harm to property and communities. The event stems from the use of AI systems in military operations, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. The event is not merely complementary information or unrelated news, but a clear case of AI system use causing harm.

Arizona Startup's Replica Drone Shifts Warfare Dynamics in U.S.-Iran Conflict

2026-03-07
Chosun.com
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system because it autonomously navigates to targets based on input coordinates and detonates on impact, which involves AI decision-making and autonomous operation. Its deployment in combat and use in airstrikes against Iran directly leads to harm (injury, death, destruction), fulfilling the criteria for an AI Incident. The article explicitly states the drone's use in warfare and its operational characteristics, confirming AI involvement and realized harm. Therefore, this event is classified as an AI Incident.

Why billion-dollar defenses have trouble stopping $35,000 drones: The 'slow & lethal' secret of Iran's Shahed drones

2026-03-07
The Financial Express
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they autonomously navigate and strike targets based on input coordinates, demonstrating AI system involvement. The attacks have directly led to harm to property and communities, fulfilling the criteria for an AI Incident. The article details actual harm caused by these AI-enabled drones, not just potential harm, and discusses the military and strategic implications of their use. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.

Cheap drones are making the Iran war a new kind of conflict - The Boston Globe

2026-03-07
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems with autonomous navigation and targeting capabilities. Their deployment in combat has resulted in actual physical harm and disruption, including destruction of infrastructure and economic destabilization. The article explicitly states these harms have occurred and attributes them to the use of these AI-enabled drones. Hence, this is a clear case of an AI Incident due to the direct harm caused by the AI systems' use in warfare.

U.S. "Indispensable" Drones Give Iran a Taste of Its Own Medicine - Why LUCAS Is Invaluable in Op. Epic Fury

2026-03-07
Latest Asian, Middle-East, EurAsian, Indian News
Why's our monitor labelling this an incident or hazard?
The LUCAS drone is an AI system as it operates autonomously with GPS/INS navigation, can function in GPS-denied environments using visual navigation, and supports autonomous swarm operations via a mesh network. Its use in active combat operations against Iran, delivering attacks on military infrastructure, directly leads to harm (destruction of property and potential injury or death). The event involves the use of AI systems in a military context causing direct harm, fitting the definition of an AI Incident. The article clearly describes realized harm from the use of these AI-enabled drones, not just potential harm or future risk, so it is not an AI Hazard or Complementary Information.
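The GPS/INS navigation mentioned in this rationale rests on inertial dead reckoning: between external fixes, acceleration is integrated twice to estimate position, and the drift that accumulates is why inertial systems are typically fused with GPS or visual navigation. A minimal illustrative sketch follows; the function name and the 2D Euler integration are our own simplification for exposition, not anything drawn from the LUCAS system itself.

```python
def dead_reckon(start_pos, start_vel, accel_samples, dt):
    """Integrate 2D accelerometer samples into a position track.

    Hypothetical illustration of inertial dead reckoning: each step
    updates velocity from acceleration, then position from velocity.
    Sensor noise makes the position error grow over time, which is
    why INS is usually corrected by GPS or visual fixes.
    """
    x, y = start_pos
    vx, vy = start_vel
    track = [(x, y)]
    for ax, ay in accel_samples:
        vx += ax * dt      # velocity update from acceleration
        vy += ay * dt
        x += vx * dt       # position update from velocity
        y += vy * dt
        track.append((x, y))
    return track

# Example: constant 1 m/s eastward velocity, no acceleration, 10 one-second
# steps -> the estimate ends 10 m east of the start.
path = dead_reckon((0.0, 0.0), (1.0, 0.0), [(0.0, 0.0)] * 10, 1.0)
```

In a real system this integration runs at high rate on gyro/accelerometer data and is fused (e.g. via a Kalman filter) with absolute position fixes whenever they are available.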

Society should stand for women, laws alone not enough for safety of women: T'gana CM

2026-03-07
english.varthabharati.in
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled autonomous drones used in warfare that have been deployed in real combat missions, causing destruction of enemy military assets. The drones' autonomous guidance and attack capabilities qualify them as AI systems. Their deployment and use have directly led to harm (destruction of property and potential injury or death), fulfilling the criteria for an AI Incident. Although the article also discusses the development and strategic implications, the realized harm from their use in combat makes this an AI Incident rather than a hazard or complementary information.

Iran's low-cost Shahed drones inspired US's LUCAS. What's in India's drone armoury?

2026-03-09
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered swarm technology and autonomous strike capabilities in the Indian drones inspired by the Shahed-136 design, indicating AI system involvement. The drones are used or intended for military strikes, which inherently carry risks of injury, property damage, and disruption. However, the article does not describe a specific incident where these drones have caused harm but rather discusses their development, deployment, and strategic implications. This aligns with the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but no specific harm event is reported. The article also includes contextual information about past conflicts where similar drones caused harm, but the main focus is on the current and future development and deployment of such drones by India and others, not on a new incident. Hence, the classification is AI Hazard.

'Designed to wreak havoc': Cheap drones are shaping war with Iran

2026-03-08
Business Standard
Why's our monitor labelling this an incident or hazard?
The drones described are autonomous or semi-autonomous systems that use AI for navigation and targeting, as indicated by references to satellite communication systems and software advances for autonomous systems. Their use in combat to hit infrastructure and overwhelm defenses has directly caused harm, fulfilling the criteria for an AI Incident. The harm includes damage to property and communities due to military strikes. The article clearly links the AI system's use to realized harm, not just potential harm, so it is not merely a hazard or complementary information.

US copies Iran Shahed drone design to deploy low cost LUCAS weapon in war

2026-03-09
The Telegraph
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they autonomously navigate and deliver explosive payloads based on input coordinates, influencing physical environments. Their use in attacks on infrastructure and civilian areas has caused direct harm to communities and property, fulfilling the criteria for an AI Incident. The article details realized harm from the deployment and use of these AI-enabled drones, not just potential harm or future risks. Hence, the event is classified as an AI Incident.

FLM 136: America's cheap Iran-designed Shahed drone clone

2026-03-10
Euronews English
Why's our monitor labelling this an incident or hazard?
The FLM 136 is an autonomous unmanned aerial vehicle, which qualifies as an AI system due to its autonomous takeoff, landing, and attack capabilities. Its deployment in combat operations directly leads to harm (physical injury and death) as it is used for kamikaze strikes. The article explicitly states these drones are being launched in combat, indicating realized harm. Therefore, this event meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities in a military conflict context.

Iran's Drone Advantage

2026-03-11
Foreign Affairs
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled autonomous drones (the Shahed-136 and the U.S. LUCAS drone) being used in military strikes that have caused damage to buildings and infrastructure in multiple countries, including attacks on the U.S. embassy. This constitutes harm to property and communities, fulfilling the criteria for an AI Incident. The drones are AI systems as they perform autonomous attack and surveillance functions. The harm is realized and ongoing, not merely potential, and the AI system's use is pivotal to the harm described. Hence, the classification as an AI Incident is appropriate.