US Deploys AI-Powered Merops Anti-Drone Systems to Middle East to Counter Iranian Threats

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US is urgently deploying Merops, an AI-driven anti-drone system previously tested in Ukraine, to the Middle East to counter Iranian drone attacks. Merops autonomously detects and intercepts hostile drones, addressing gaps in existing missile defenses amid escalating regional tensions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Merops counterdrone system is an AI system as it autonomously seeks and locks onto targets using AI. The event involves the use of this AI system in an active military conflict where Iranian drones have caused deaths and damage, thus harm has occurred. The AI system's deployment is directly linked to countering these harms. Therefore, this qualifies as an AI Incident because the AI system's use is directly involved in a situation with realized harm to persons and property (military personnel deaths and damage to radar systems).[AI generated]
AI principles
Accountability, Safety

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Physical (injury), Physical (death)

Severity
AI incident

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

US To Deploy Anti Drone System From Ukraine To Middle East To Counter Iran

2026-03-07
NDTV
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI-enabled anti-drone defense system explicitly described as using AI to detect and intercept drones. The article focuses on the deployment of this system to counter Iranian drones, which pose a significant threat. Although the system is being deployed to prevent harm, no actual harm caused by the AI system or its malfunction is reported. The article discusses the potential for harm from Iranian drones and the need for effective AI-based countermeasures. This aligns with the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident, even though it is intended to prevent or mitigate harm from drone attacks. It is not Complementary Information because the main focus is on the deployment and potential impact of the AI system, not on updates or responses to past incidents. It is not an AI Incident because no harm caused by the AI system is described. It is not Unrelated because the AI system and its role are central to the event.

U.S. Army Sends Ukraine-Tested Drones to Hit Iran's Drones

2026-03-07
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
The Merops counterdrone system is an AI system as it autonomously seeks and locks onto targets using AI. The event involves the use of this AI system in an active military conflict where Iranian drones have caused deaths and damage, thus harm has occurred. The AI system's deployment is directly linked to countering these harms. Therefore, this qualifies as an AI Incident because the AI system's use is directly involved in a situation with realized harm to persons and property (military personnel deaths and damage to radar systems).

The US is sending a new drone-killer to the Middle East. It's logged over 1,000 Shahed intercepts over Ukraine.

2026-03-07
Business Insider
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system as it uses artificial intelligence to navigate in jammed environments and autonomously intercept hostile drones. Its use in active conflict zones (Ukraine and the Middle East) directly relates to harm prevention from drone attacks, which are military harms involving injury, disruption, and potential loss of life or property. The AI system's deployment and operational use in these contexts means it is directly involved in an event where AI's role is pivotal to harm outcomes. Although the article focuses on the system's defensive role, the context of military conflict and drone warfare inherently involves harm, making this an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential future harm or governance responses but reports on active use of AI systems in harm-related military operations.

US to Send Anti-Drone System to the Mideast After Successful Use in Ukraine, Officials Say

2026-03-06
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The Merops system is explicitly described as using AI to identify and navigate against drones, which are a threat causing harm in conflict zones. The article discusses the system's deployment in response to active drone attacks, which have caused harm or pose imminent harm. The AI system's use is directly linked to managing and mitigating these harms. Therefore, this event involves the use of an AI system in a context where harm is occurring or has occurred, qualifying it as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential future harm or general AI developments but focuses on the operational use of an AI system in an active harm context.

Middle East crisis: US to deploy Ukraine-tested interceptors to counter Iran's cheap drones - The Times of India

2026-03-07
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Merops) used for defense against drones, indicating AI system involvement. However, the AI system is being deployed to prevent harm from Iranian drones rather than causing harm itself. There is no indication that the AI system malfunctioned or caused injury, rights violations, or other harms. The article focuses on the strategic deployment and capabilities of the AI system as a response to existing threats, which fits the definition of Complementary Information. It updates on societal and technical responses to AI in a military context without reporting a new incident or hazard.

US to deploy 'Merops' anti-drone system in Middle East to deter Iranian attacks: Know about it

2026-03-07
India TV News
Why's our monitor labelling this an incident or hazard?
The 'Merops' system is explicitly described as AI-powered and used to intercept hostile drones, which are a security threat. The deployment is a response to ongoing drone attacks that have caused vulnerabilities and potential harm to US and allied forces and infrastructure. The AI system's role is pivotal in detecting and neutralizing these threats, directly linked to preventing injury, harm to critical infrastructure, and harm to communities. Since the system is being deployed in an active conflict context where drone attacks have already caused harm, this qualifies as an AI Incident due to the AI system's involvement in harm prevention and response to existing threats. The article does not merely discuss potential future risks but the active use of AI systems in a context of ongoing harm and defense.

U.S. will send anti-drone system to Mideast after successful use in Ukraine, officials say

2026-03-07
PBS.org
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system used for defense against drones, indicating AI system involvement. However, the article focuses on the deployment and strategic use of this AI system to counter drone threats, not on any harm caused or plausible harm caused by the AI system itself. There is no indication of malfunction, misuse, or harm resulting from the AI system's development or use. The article mainly provides context on the evolving defense capabilities and strategic responses involving AI technology. Hence, it fits the definition of Complementary Information, as it updates on AI system deployment and defense responses without reporting new harm or plausible harm caused by AI.

US plans to send Ukraine-proven anti-drone system to West Asia

2026-03-07
Firstpost
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system explicitly described as using artificial intelligence to detect and intercept drones. The article focuses on its deployment and planned transfer to a new region to counter drone threats. There is no report of any harm caused by the AI system; instead, it is presented as a defensive tool. Since the system's use could plausibly lead to harm (e.g., escalation of conflict or unintended consequences) or prevent harm, but no actual harm is reported, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article's main focus is on the system's deployment and potential impact, not on updates or responses to past incidents. It is not Unrelated because the AI system and its implications are central to the report.

US to send anti-drone system to the Mideast after successful use in Ukraine, officials say

2026-03-07
Star Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Merops) used for anti-drone defense, confirming AI system involvement. The system is being deployed to counter existing drone threats that have caused harm, but the AI system itself is not causing harm or malfunctioning. The article focuses on the strategic use and effectiveness of the AI system in mitigating harm, which aligns with Complementary Information as it updates on AI system deployment and defense capabilities without reporting new harm or plausible future harm caused by the AI system. Hence, the classification is Complementary Information.

Ukraine Agrees to Share US System to Stop Iran's Drones

2026-03-06
Newser
Why's our monitor labelling this an incident or hazard?
The anti-drone system likely involves AI for detection and interception of drones, indicating AI system involvement. The article focuses on the deployment of this system to counter Iranian drones, which could plausibly lead to harm if the system malfunctions or is used in conflict scenarios. However, no actual harm or incident is reported. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the system involves AI and defense against drones, which is relevant to AI harms.

How could the Merops anti-drone system counter Iranian Shahed drones?

2026-03-07
WION
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system explicitly described as being used to counter Iranian Shahed drones that have caused damage by hitting targets. The article discusses the harm caused by these drones and the AI system's role in defense and harm mitigation. Since the AI system's use is directly related to preventing injury, harm to assets, and disruption caused by drone attacks, this qualifies as an AI Incident. The harm is realized (drones have hit targets), and the AI system is actively used to address this harm, thus meeting the criteria for an AI Incident rather than a hazard or complementary information.

U.S. will send anti-drone system to Mideast after successful use in Ukraine, officials say

2026-03-07
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (anti-drone system) used in military defense. There is no direct or indirect harm caused by the AI system reported in the article, so it is not an AI Incident. However, the deployment of such AI-enabled systems in conflict zones plausibly could lead to harm, including injury, disruption, or escalation of conflict. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

US to send anti-drone system to the Mideast after successful use in Ukraine, officials say

2026-03-07
Newsday
Why's our monitor labelling this an incident or hazard?
Merops is an AI system designed to autonomously detect and intercept drones, which are a military threat capable of causing harm. The article discusses its successful use in Ukraine and planned deployment in the Middle East to counter Iranian drones, which have been causing damage and vulnerability to U.S. targets. The AI system's role is pivotal in defense against these threats, directly linked to preventing harm to people and infrastructure. Since the system is actively used to counter ongoing drone attacks, this constitutes an AI Incident rather than a mere hazard or complementary information.

US to send anti-drone system to the Mideast after successful use in Ukraine, officials say

2026-03-06
thepeterboroughexaminer.com
Why's our monitor labelling this an incident or hazard?
The Merops system is explicitly described as using AI to identify and navigate against drones, which are a threat causing harm and disruption in the Middle East and Ukraine. The article discusses the system's deployment as a response to real and ongoing drone attacks, indicating that harm has occurred and the AI system is actively used to counter it. This fits the definition of an AI Incident because the AI system's use is directly linked to addressing harm caused by drones, which are weapons causing injury, disruption, and potential harm to people and infrastructure. The article does not merely discuss potential future harm or general AI developments but focuses on the AI system's operational use in a harm context.

US to send anti-drone system to the Mideast after successful use in Ukraine, officials say

2026-03-06
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (anti-drone system) whose deployment is planned based on prior successful use. There is no indication of any harm or incident caused by the AI system in this context. The article focuses on the strategic deployment and defense enhancement, which could plausibly lead to future incidents if the system is used in conflict, but no such harm is reported or implied as having occurred yet. Therefore, this qualifies as an AI Hazard due to the plausible future risk associated with the deployment of AI-enabled military defense systems, but not an AI Incident or Complementary Information.

US to send anti-drone system to the Mideast after successful use in Ukraine, officials say

2026-03-06
CHAT News Today
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system used for defense against drones, involving AI for navigation and identification under challenging conditions. The article does not describe any harm caused by the AI system or its malfunction but focuses on its deployment to counter drone threats, which are a significant security concern. The AI system's use is intended to prevent harm from hostile drones, but the deployment in conflict zones implies a plausible risk of incidents involving harm related to AI-enabled drone defense. Since no direct or indirect harm from the AI system is reported, but plausible future harm related to its use in conflict zones exists, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

RaillyNews - USA Sends Drone Systems to Middle East

2026-03-07
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system explicitly described as using AI algorithms for detection, threat assessment, and countermeasures against hostile drones. The article details its deployment in response to real and escalating drone threats from Iran and regional adversaries, which have targeted military and civilian infrastructure. This constitutes a direct use of AI in mitigating harm to persons and property, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or future hazards but reports on an active operational deployment addressing ongoing harms, thus qualifying as an AI Incident rather than a hazard or complementary information.

What is Merops anti-drone system and why US is deploying it amid Israel-Iran war?

2026-03-07
News24
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the Merops system as using artificial intelligence to detect and intercept hostile drones with counter-drones, indicating the involvement of an AI system. The deployment is a response to a security threat posed by Iranian drones, which have caused challenges for existing missile defense systems. The system's use directly addresses a security harm—potential drone attacks that could cause injury, disruption, or damage—thus the event involves the use of an AI system to mitigate harm. Since the system is being deployed in an active conflict region due to credible threats, and the AI system's use is directly linked to preventing harm from drone attacks, this qualifies as an AI Hazard because the article describes a credible risk of harm that the AI system is intended to counter. However, since no actual harm caused by the AI system or its malfunction is reported, and the system is being deployed as a preventive measure, it is not an AI Incident. The article focuses on the deployment and capabilities of the AI system in response to plausible threats, fitting the definition of an AI Hazard.

US to send anti-drone system to Middle East after successful use in Ukraine, officials say - ExBulletin

2026-03-07
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Merops) that autonomously identifies and counters drones, indicating AI system involvement. The system is being deployed in conflict zones where drone attacks pose risks to people and infrastructure, so the AI system's use is linked to potential harm. However, the article does not describe any actual harm or malfunction caused by the AI system itself, only its deployment and intended use. Thus, it does not meet the criteria for an AI Incident but fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the context of military conflict.

US supplying Ukraine-tested anti-drone systems to Middle East partners, WSJ reports

2026-03-08
The Kyiv Independent
Why's our monitor labelling this an incident or hazard?
The Merops system qualifies as an AI system because it involves autonomous or semi-autonomous drones that detect, track, and destroy other drones, implying real-time decision-making and adaptive behavior characteristic of AI. The use of this system directly relates to harm prevention in conflict zones, specifically countering drone attacks that have caused harm to cities and infrastructure. While the article does not report a new incident of harm caused by the AI system itself, it describes the system's deployment in active conflict environments where harm is occurring and the system's role in mitigating such harm. The article primarily reports on the use and transfer of this AI system to new regions, reflecting ongoing use rather than a new incident or hazard. It does not describe a malfunction or misuse causing harm, nor does it focus on potential future harm from the system. Therefore, this is best classified as Complementary Information, providing context on AI system deployment, adaptation, and international dissemination in response to existing conflict-related harms.

US deploys Merops interceptor drone system to Middle East conflict

2026-03-08
Intellinews
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI-enabled autonomous drone interceptor system actively used in conflict zones, which qualifies as an AI system. The article discusses its deployment and operational success in shooting down hostile drones, which relates to harm mitigation rather than harm caused by the AI system itself. There is no indication of malfunction, misuse, or harm caused by the AI system; rather, it is used to prevent harm from enemy drones. The article primarily provides information about the system's deployment and strategic use, which fits the definition of Complementary Information as it updates on AI system use and its impact in a military context without describing an AI Incident or AI Hazard.

How Ukrainian Interceptor Drones Are Being Sent To U.S. Troops

2026-03-10
Forbes
Why's our monitor labelling this an incident or hazard?
The Merops drones are AI-equipped autonomous interceptor drones used to counter hostile drones, which are a direct threat to U.S. troops and military infrastructure. Their deployment and operational use directly influence physical environments and have a clear impact on safety and security. Since the AI system's use is integral to preventing harm from enemy drones, this constitutes an AI Incident involving harm prevention in a critical infrastructure and personnel protection context.

U.S. Army Deploys Ukrainian Merops Anti-Drone System Against Iranian Shahed Drones in Middle East

2026-03-09
Army Recognition
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system as it autonomously detects and intercepts drones using AI-driven sensors and decision-making. The article discusses its deployment in an active conflict zone where drone attacks are causing harm, but the AI system is used defensively to prevent or reduce harm. There is no indication that the AI system itself caused harm or malfunctioned, nor that it poses a plausible risk of causing harm. The article mainly provides context on the strategic use and effectiveness of the AI system in countering drone threats, which fits the definition of Complementary Information rather than an Incident or Hazard.

Merops: The cheap anti-drone system built by the Ukrainians threatens Iranian Shaheds in the Middle East

2026-03-08
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system used for countering drones, which are autonomous or semi-autonomous aerial vehicles. Its deployment aims to prevent harm from drone attacks, which are potentially deadly and destructive. Although the article does not report a specific incident of harm caused by or prevented by the AI system, the context implies the system's use to mitigate ongoing threats. Since the article focuses on the deployment and use of an AI system with the potential to prevent harm, but does not describe an actual harm event caused or prevented, this qualifies as an AI Hazard. The system's use could plausibly lead to an AI Incident if it fails or is misused, or it could prevent harm, but no realized harm is described here. Therefore, the event is best classified as an AI Hazard.

USA: Anti-drone systems tested in Ukraine are being sent to the Middle East marked "urgent"

2026-03-08
Newpost.gr
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system as it autonomously detects and engages drone targets using AI for target locking and activation of an explosive mechanism. Its deployment is in response to lethal drone attacks that have caused deaths and damage, so the AI system's use is directly linked to preventing harm in an active conflict zone. The article reports actual harm (deaths and damage) from the attacks the system counters, and the AI system's role is pivotal in this military defense context. Hence, this event meets the criteria for an AI Incident due to direct involvement of an AI system in a situation with realized harm to persons and property.

The US sends the cheap anti-drone Merops that locks onto Iranian Shaheds - Ukraine becomes a military laboratory in the Middle East - HuffPost

2026-03-08
HuffPost Greece
Why's our monitor labelling this an incident or hazard?
The Merops system is explicitly described as using AI to autonomously identify and neutralize hostile drones, which are causing harm through attacks. The article discusses the system's deployment and operational use in real conflict scenarios where harm is occurring or has occurred due to drone attacks. The AI system's use is directly linked to preventing or reducing harm, fulfilling the criteria for an AI Incident. Although the article does not describe a malfunction or failure of the AI system, the involvement of AI in an active harm context (military defense against drone attacks) qualifies this as an AI Incident rather than a hazard or complementary information. The event is not unrelated as it clearly involves AI systems and harm mitigation in a military context.

Merops: The "Ukrainian solution" against Iranian drones

2026-03-08
Offsite
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system as it autonomously detects and engages hostile drones using AI algorithms. The article reports that Iranian drones have caused lethal attacks, including deaths and damage to military infrastructure, which constitutes harm to persons and communities. The AI system is actively used to counter these threats, so its use is directly linked to ongoing harm and conflict. This meets the criteria for an AI Incident, as the AI system's use is directly involved in a context where harm to persons and communities is occurring. The article does not merely discuss potential harm or future risks but describes an active deployment in a conflict with realized harm, excluding classification as a hazard or complementary information.

Merops: The cheap anti-drone system built by the Ukrainians threatens Iranian Shaheds in the Middle East

2026-03-08
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The Merops system is explicitly described as using artificial intelligence to autonomously identify and engage hostile drones, which are weapons causing lethal harm. The AI system's deployment is directly linked to ongoing military conflicts where harm to persons and property has occurred due to drone attacks. The article details the system's operational use to counter these threats, indicating the AI system's involvement in harm mitigation in an active conflict environment. Therefore, this qualifies as an AI Incident because the AI system's use is directly connected to harm (death and destruction) in conflict zones.

Merops: The Ukrainians' cheap anti-drone system threatens Iranian Shaheds

2026-03-08
TYPOS
Why's our monitor labelling this an incident or hazard?
The Merops system is explicitly described as using AI to autonomously detect and engage hostile drones, which are weapons causing harm in conflict zones. The system's deployment and use in active military operations against Iranian Shahed drones, which have caused deaths and damage, directly relates to harm involving injury and death. The AI system's role in intercepting these drones is central to the event. Hence, this is an AI Incident because the AI system's use is directly linked to harm in a conflict setting, fulfilling the criteria of injury or harm to persons and disruption of critical infrastructure.

Meet Merops: The AI Hunter Turning Iran's Shahed Suicide Drones Into Sitting Ducks

2026-03-14
News18
Why's our monitor labelling this an incident or hazard?
The Merops drone is explicitly described as an AI system with autonomous capabilities for real-time decision-making in hostile environments. Its deployment and active use to intercept and neutralize Shahed suicide drones directly prevent harm to civilians and critical infrastructure, which constitutes harm to people and property. The AI system's role is pivotal in this harm prevention, and the event involves the use of AI systems leading to realized harm prevention in a conflict setting. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

How US and Israel plan to stop Iran's Shahed-136 drones with Merops drone killers

2026-03-14
MoneyControl
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as AI-enabled systems that autonomously identify and neutralize hostile drones, which are a direct threat to military personnel and infrastructure. Their deployment in an active military operation to counter Iranian drone attacks indicates the AI system's use is directly linked to preventing injury or harm to people and disruption of critical infrastructure. This fits the definition of an AI Incident because the AI system's use is directly involved in harm prevention in a conflict scenario, which is a significant and clearly articulated harm context. Although the article focuses on the deployment and capabilities rather than a specific failure or malfunction, the use of AI in an active conflict zone with direct implications for harm qualifies it as an AI Incident rather than a hazard or complementary information.

U.S. Rushes 10,000 Ukraine-Proven Merops AI Drones to Middle East to Counter Iranian Shahed Swarms

2026-03-14
Latest Asian, Middle-East, EurAsian, Indian News
Why's our monitor labelling this an incident or hazard?
The Merops drones are AI systems explicitly described as using AI for autonomous targeting and interception of hostile drones. Their deployment and operational use have directly led to harm mitigation by destroying Iranian Shahed drones, which are used in attacks against US and allied forces. This constitutes an AI Incident because the AI system's use is directly linked to harm prevention in a conflict involving physical harm risks. The article details realized use and impact, not just potential harm, so it is not an AI Hazard or Complementary Information. It is not unrelated as the AI system's role is central to the event.

Interceptor drones used in Ukraine will repel Iranian strikes

2026-03-15
ФАКТЫ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used in active military defense roles, including intercepting hostile drones and engaging enemy targets. The use of these AI-enabled drones in conflict zones directly relates to harm through military engagement and destruction, which falls under harm to persons, communities, or property. Therefore, this constitutes an AI Incident as the AI systems' use has directly led to harm in an armed conflict context.

US is using Ukraine-proven drones to counter Iranian "Shaheds", Bloomberg reports

2026-03-14
InternetUA
Why's our monitor labelling this an incident or hazard?
The drones Merops are explicitly described as equipped with AI elements and are used in active military operations to intercept Iranian drones. The deployment of AI-enabled weapon systems in conflict zones directly relates to harm to persons and communities, fulfilling the criteria for an AI Incident. The article reports on actual use, not just potential use, and the harm associated with military drone operations is well established. Hence, this is an AI Incident due to the direct involvement of AI systems in causing or preventing harm in a conflict setting.

Phones 'Ringing Off the Hook' for Ukraine Defense Firms as Mideast Seeks Help

2026-03-13
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled interceptor drones and defense technology developed and used by Ukraine, indicating the presence of AI systems. However, it does not describe any specific harm or incident caused by these AI systems, nor does it highlight a credible risk of future harm beyond the existing conflict. The focus is on the strategic deployment and increasing demand for these AI systems, which informs understanding of the AI ecosystem and geopolitical implications. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

To Fight Iran's Drones, U.S. Taps Ukraine's Hard-Earned Knowledge

2026-03-13
The New York Times
Why's our monitor labelling this an incident or hazard?
Merops is an AI system involved in autonomous or semi-autonomous drone interception, which is a clear AI system by definition. Its deployment is directly linked to preventing harm to military personnel from hostile drones, and the article references deaths caused by drone attacks that the system aims to counter. Therefore, the event involves the use of an AI system that has a direct role in harm prevention in an active conflict context. This qualifies as an AI Incident because the AI system's use is directly connected to harm (deaths and injuries) caused by drones, and the system's deployment is a response to that harm. The article does not merely discuss potential harm or future risks but addresses an ongoing situation where harm has occurred and the AI system is actively used to mitigate it.

US ignored Zelensky's offer to help counter Iran's Shahed drones months ago. Now, it is being called 'tactical error'

2026-03-11
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Shahed drones and interceptor drones) and their use or potential use in military defense. The harm (deaths of US service members) has already occurred due to Shahed drones, but the article focuses on the US ignoring Ukraine's AI-enabled drone defense offer months ago, which is now seen as a tactical error. There is no new AI Incident described, nor a new AI Hazard, but rather a reflection on past decisions and their consequences. This fits the definition of Complementary Information, as it provides supporting context and evaluation of AI-related military developments and responses without reporting a new incident or hazard.

NDTV Speaks To Ukraine's Drone Pilots: Why US Needs Their Shahed Hunters Now

2026-03-13
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-guided interceptor drones used by Ukraine to shoot down Shahed drones, which have been launched extensively by Russia causing damage and harm. The AI systems are directly involved in the use phase, actively intercepting and destroying attack drones, thereby mitigating harm to people and infrastructure. The harm (damage to cities and infrastructure) is realized and ongoing, and the AI system's role is pivotal in countering these attacks. Hence, this qualifies as an AI Incident under the framework, as the AI system's use is directly linked to harm in a conflict setting.

US sends intercept drones used in Ukraine to blunt Iran strikes

2026-03-13
MoneyControl
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as AI-enabled interceptor drones actively used to counter Iranian drone attacks, indicating AI system involvement. However, the article does not describe any harm caused by these AI systems or any malfunction; rather, it describes their use to prevent harm. There is no indication of plausible future harm from these drones themselves. The article mainly provides information about the deployment and operational use of these AI systems in a military context, which fits the definition of Complementary Information. It updates on AI system use and military strategy without reporting an AI Incident or AI Hazard.

Missed warning? US turned down Ukraine's anti-Shahed drone offer months before Iran conflict

2026-03-11
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled drones used in military conflict, with the Ukrainian counter-drone system designed to detect and intercept Iranian attack drones. The refusal to adopt these AI countermeasures earlier indirectly contributed to harm, including deaths and increased military costs. The development and use of these AI systems, and the failure to deploy effective countermeasures in time, are central to the harm described. Hence, this is an AI Incident due to realized harm linked to AI system use and deployment decisions.

Iran Battles Will Spotlight Ukraine's World-Leading Role In Drone War

2026-03-13
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous or semi-autonomous drones used in military conflict, which have directly led to harm including deaths of soldiers and destruction of military assets. The involvement of AI in the development, use, and deployment of these drones and interceptors is clear. The harms include injury and death to persons (soldiers), harm to military property, and broader harm to communities and geopolitical stability. Hence, this is an AI Incident as per the definitions provided.

Why Ukraine's Drone Defense Ecosystem Is In Demand

2026-03-12
Forbes
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses detection networks, command-and-control software, and interceptor drones that rely on AI for autonomous or semi-autonomous operation in drone defense. The use of these AI systems has directly contributed to preventing harm from drone attacks, which is a positive impact rather than a harm. There is no indication of any injury, violation of rights, disruption, or other harms caused by these AI systems. The article is primarily about the development, deployment, and export of AI-enabled defense technology and its strategic implications, without reporting any incident or hazard related to AI causing harm or plausible future harm. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It also is not merely general AI news or product launch, but rather provides contextual and strategic information about AI defense systems and their impact, which fits the definition of Complementary Information.

US Sends Intercept Drones Used in Ukraine to Blunt Iran Strikes

2026-03-13
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled interceptor drones actively used in defense to prevent harm from Iranian drone attacks, indicating AI system involvement in a real-world operational context. However, there is no indication that the AI system caused any harm or malfunctioned, nor is there a plausible risk of harm stemming from the AI system's development or use described here. The focus is on the deployment and strategic use of AI systems to counter threats, which aligns with providing additional context and updates about AI applications in defense. Hence, the classification as Complementary Information is appropriate.

Phones 'ringing off the hook' for Ukraine defence firms as Middle East seeks help

2026-03-13
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled drone interceptor systems developed and used by Ukraine to shoot down hostile drones threatening critical infrastructure. The AI systems are actively deployed and operated, with real interceptions occurring, thus directly influencing physical environments and preventing harm. The harm category (b) disruption of critical infrastructure is directly relevant here, as the AI systems protect oil facilities and shipping from drone attacks. The involvement of AI in the development, use, and operational deployment of these systems is clear. Since the AI systems have directly led to harm prevention in a conflict setting, this qualifies as an AI Incident rather than a hazard or complementary information.

Ukraine finds new role as protector of US, Gulf allies amid Iran war

2026-03-13
news24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Ukraine's deployment of advanced drone defence systems and automated command systems that generate real-time operational tracking and reports based on combat data. These systems involve AI capabilities such as automation, data analysis, and decision support in military operations. Their use directly impacts the defence against Iranian drone attacks, which constitute harm to persons and infrastructure. Hence, this is an AI Incident because the AI systems' use is directly linked to ongoing harm and conflict mitigation.

Ukraine enters the war: Iran suddenly realizes its drone dominance is finished - RFU News

2026-03-13
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of Ukrainian interceptor drones, which are AI systems designed to autonomously or semi-autonomously intercept and destroy hostile drones. These systems are actively deployed in a conflict zone to defend against Iranian drone attacks, which have caused harm to cities and infrastructure. The AI systems' deployment directly influences the ongoing conflict and the protection of critical infrastructure, fulfilling the criteria for an AI Incident. The harm prevented or mitigated is related to injury, harm to communities, and disruption of critical infrastructure. The involvement is in the use phase of the AI systems. Therefore, this event is best classified as an AI Incident.

Ukraine waiting for US to sign proposed drone production deal, Zelensky says (Kyiv Independent)

2026-03-13
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly through drone and air defense technologies, which typically incorporate AI for autonomous operation, detection, and interception. Although no harm has occurred yet, the potential for these AI-enabled systems to be used in conflict and cause injury or property damage is credible. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated since the focus is on a pending deal with potential AI-related military applications and associated risks.

What are Shahed drones and what makes them decisive in modern warfare?

2026-03-13
@businessline
Why's our monitor labelling this an incident or hazard?
Shahed drones are AI systems as they are unmanned aerial vehicles capable of autonomous or semi-autonomous operation, including navigation and targeting. Their deployment in warfare has directly led to harm through destruction of infrastructure and military targets, fulfilling the criteria for an AI Incident. The article details actual use of these drones causing harm, not just potential risks, and discusses their strategic impact and ongoing development, confirming the presence of realized harm linked to AI systems.

Zelensky needles US as he sends teams to Middle East to help American troops

2026-03-12
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-enabled drone technology and counter-UAV systems in an active conflict zone where harm to human life has occurred (deaths and injuries of U.S. service members). The deployment of Ukrainian drone teams to assist American forces involves the use of AI systems in defense against drone attacks, directly linked to harm (injury and death). The AI systems' development, use, and deployment are central to the event, and the harm is realized, not just potential. Thus, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to persons.

Ukraine's Expertise In Countering Iranian-Designed Shahed Drones Attracting Growing International Demand - Analysis

2026-03-14
Eurasia Review
Why's our monitor labelling this an incident or hazard?
The Shahed drones are loitering munitions that operate autonomously or semi-autonomously, implying AI system involvement. Their use by Russian forces has directly led to harm, including civilian injuries, fulfilling the criteria for an AI Incident. The article also highlights Ukraine's countermeasures involving AI-enabled detection and interceptor drones, which are actively mitigating harm. The international demand for this expertise and technology is a response to an ongoing AI Incident rather than a mere hazard or complementary information. Therefore, the event is best classified as an AI Incident due to the direct link between AI-enabled drone use and harm.

Trump Blew Off Zelenskyy's Drone Defense Pitch -- Then Iran's Strikes Changed His Mind - Inquisitr News

2026-03-10
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of drones and drone defense technologies that use AI for targeting and interception. The harm includes deaths of U.S. service members caused by Iranian drones, which could have been mitigated by the AI-enabled Ukrainian defense systems initially dismissed. The AI system's use and the failure to adopt it earlier directly and indirectly led to harm. Therefore, this qualifies as an AI Incident due to realized harm involving AI systems in military conflict.

Ukraine offered Trump a battle-tested anti-drone system months before the Iran war, but the administration dismissed it and is now begging for it back | Attack of the Fanboy

2026-03-11
Attack of the Fanboy
Why's our monitor labelling this an incident or hazard?
The Ukrainian anti-drone system qualifies as an AI system because it involves interceptor drones and sensors that likely use AI for autonomous detection, tracking, and interception of attack drones. The event describes the use and development of this AI system in a military context, where its deployment or lack thereof has directly or indirectly led to harm (deaths of U.S. service members and financial costs). The initial refusal to adopt the system and the subsequent reversal highlight the AI system's pivotal role in the harm and its mitigation. Therefore, this event is classified as an AI Incident due to the realized harm linked to the AI system's use and deployment decisions.

Drones reshape the Gulf battlefield as Ukraine's anti-Shahed expertise draws interest

2026-03-13
eutoday.net
Why's our monitor labelling this an incident or hazard?
The Shahed drones are AI systems as they are autonomous loitering munitions capable of navigation and attack. The Ukrainian interceptor drones and electronic warfare systems also involve AI for detection and disruption. However, the article does not describe a specific event where these AI systems have directly or indirectly caused harm or damage (AI Incident), nor does it describe a near miss or credible imminent threat that has not yet materialized (AI Hazard). Instead, it focuses on the strategic and economic aspects of drone warfare and defense innovations, which is contextual and informative. Therefore, this is Complementary Information as it provides important context and updates on AI-enabled military technology and responses without reporting a new incident or hazard.

To Fight Iran's Drones, U.S. Taps Ukraine's Hard-Earned Knowledge

2026-03-13
DNYUZ
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI-enabled anti-drone defense system that has been used operationally in Ukraine and is now being deployed by the U.S. military to counter Iranian drones that have killed American troops. The article explicitly links the AI system's use to the prevention of harm and saving lives, indicating direct involvement of AI in mitigating lethal threats. This meets the criteria for an AI Incident because the AI system's use has directly led to harm reduction (injury and death prevention) in a military conflict context. Although the article also discusses broader strategic and technological implications, the core event is the operational use of an AI system with direct impact on human safety.

Phones 'Ringing Off the Hook' for Ukraine Defense Firms as Mideast Seeks Help

2026-03-13
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled drone interceptor systems used to counter hostile drones, which are a form of AI system performing autonomous or semi-autonomous decision-making and control in real-time. The use of these systems is directly linked to defense against attacks on critical infrastructure and property, which falls under harm categories (a) and (d). The article describes active deployment and operational use, not just potential or hypothetical risks, so it is not merely a hazard. It is not complementary information because the main focus is on the use and impact of these AI systems in defense, not on governance or responses. It is not unrelated because the AI systems are central to the event. Hence, the classification is AI Incident.

In Iran fight, US scrambles to adapt in its 1st major drone war

2026-03-12
ABC News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of drones, which are AI systems capable of autonomous or semi-autonomous operation, in military conflict leading to real harm, including the deaths of seven American troops and injuries to many others. The harm is directly linked to the use of these AI systems in warfare. The article also discusses the development and deployment of drone interceptors and tactics, but the primary focus is on the realized harm caused by AI-enabled drones. Hence, this is an AI Incident rather than a hazard or complementary information.

Ukraine waiting for US to sign proposed drone production deal, Zelensky says

2026-03-12
The Kyiv Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI-enabled drone systems and air defense technologies, which qualify as AI systems due to their autonomous or semi-autonomous operational capabilities. However, since the agreement is still pending and no actual use or malfunction of these AI systems has occurred, there is no direct or indirect harm reported. The article highlights a plausible future scenario where these AI systems could be used to counter drone attacks, implying potential future impact but not an incident. Therefore, this qualifies as an AI Hazard because the development and potential deployment of these AI-enabled defense systems could plausibly lead to AI incidents in the future, but no incident has yet occurred.

Did US Ignore Ukraine's Warning On Iranian Drones? What Were 'Tactical Errors' That Exposed America's Defence System

2026-03-11
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The Shahed-136 drones are AI-enabled autonomous or semi-autonomous weapons systems that have caused direct physical harm (deaths of US service members). By declining Ukraine's offer of counter-drone technology, the US indirectly contributed to the harm by failing to prevent or mitigate the attacks. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to persons, and the event centers on the use of AI systems, and the absence of effective countermeasures, in a military context.

Meet the Shahed-136, Iran's Cheap Drone That Has the US Overwhelmed

2026-03-07
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The Shahed-136 drones are autonomous or remotely controlled systems with AI capabilities for navigation and targeting. Their deployment in attacks against US and allied targets has caused direct harm, including damage to infrastructure and strategic military challenges. The article details realized harm from the use of these AI systems in conflict, meeting the criteria for an AI Incident due to direct harm caused by the AI system's use in military aggression.

The US Could Be Finished: Iran's Cheap Killer Weapon Is Hard to Counter

2026-03-06
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The Shahed-136 drone is an AI system because it involves autonomous or semi-autonomous operation for targeting and navigation. Its use in the Russia-Ukraine conflict and by Iran has directly caused harm, including physical damage and psychological pressure, fulfilling the criteria for an AI Incident. The article describes actual use and impact, not just potential risks, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the harm is realized and linked to the AI system's deployment in warfare.

Copying Iran's Weapons, Then Using Them to Attack Iran

2026-03-10
tirto.id
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of autonomous kamikaze drones, which are AI systems capable of independently seeking and attacking targets. The deployment of these drones in military operations has resulted in tens of thousands of deaths, constituting direct harm to people and communities. The AI system's development (reverse engineering and enhancement) and use in combat directly led to these harms. Hence, this event meets the criteria for an AI Incident.

To Counter Iranian Drones, US Sends Ukraine-War Anti-Drone System to the Middle East

2026-03-07
investor.id
Why's our monitor labelling this an incident or hazard?
The Merops system is explicitly described as using AI for navigation and target discrimination to counter hostile drones, which are AI-enabled weapons causing harm. The article discusses ongoing harm from Iranian drones in conflict zones, and the deployment of Merops is a direct response to this harm. Since the AI system's use is directly linked to preventing or mitigating harm from AI-powered drone attacks, this qualifies as an AI Incident. The event is not merely a potential risk (hazard) or a general update (complementary information), but an active deployment addressing realized harm caused by AI systems (drones).

List of Iran's Latest Drones and Missiles 2026: Specifications, Strengths, and Weaknesses

2026-03-08
Media Indonesia
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of AI-enabled military drones and hypersonic missiles by Iran, which constitute AI systems due to their autonomous or semi-autonomous operational capabilities. While no actual harm or incident is reported, the article clearly indicates that these systems pose a credible threat that could plausibly lead to injury, harm to communities, or disruption in conflict situations. Therefore, this qualifies as an AI Hazard, as the AI systems' use could plausibly lead to significant harm, but no specific harm has yet occurred or been reported in this article.

US 'Star Wars' Laser Weapon Burns Iranian Shahed Drone

2026-03-06
detikInet
Why's our monitor labelling this an incident or hazard?
HELIOS is an AI-enabled high-energy laser weapon system used operationally to destroy enemy drones. The article explicitly describes its use in combat, where it successfully burned and downed drones, which are physical property and military assets. This fits the definition of an AI Incident because the AI system's use directly led to harm (destruction of property) in a real-world event with military implications. The event is not merely a potential hazard or complementary information but a realized incident involving AI.

The Potency of Iran's Cheap Drone, Dubbed the 'Poor Man's Missile'

2026-03-07
detikInet
Why's our monitor labelling this an incident or hazard?
The Shahed-136 drone is an AI system or at least an autonomous weapon system with AI components (navigation, anti-jamming, targeting). Its deployment in warfare has directly caused harm to people and property, fulfilling the criteria for an AI Incident. The article details actual use and harm, not just potential risks, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the harm is realized and linked to the AI system's use.

Zelensky's Tactic: Bartering Iranian-Drone Destroyers for Expensive Missiles

2026-03-09
detikInet
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems because they perform autonomous or semi-autonomous tasks such as detecting, chasing, and neutralizing enemy drones, which require AI capabilities. Their deployment in active conflict directly contributes to harm (harm to persons and communities through warfare). The article details the operational use and impact of these AI systems, not just potential or future risks, so it is an AI Incident rather than a hazard or complementary information. The geopolitical and trade aspects do not negate the direct harm caused by the AI systems' use in conflict.

Ukraine's Dirt-Cheap Weapon for Downing Iranian Kamikaze Drones

2026-03-10
detikInet
Why's our monitor labelling this an incident or hazard?
The Ukrainian interceptor drones are AI systems as they autonomously detect and neutralize hostile drones, a complex task involving AI capabilities. Their deployment in active conflict has directly contributed to harm, including the destruction of enemy drones targeting civilians and infrastructure, and the death of military personnel from drone attacks. The article details realized harm and the operational use of these AI systems, not just potential risks. Hence, this is an AI Incident rather than a hazard or complementary information. The event is not unrelated as it clearly involves AI systems and their impact in a military conflict context.

US Sends New Anti-Drone Weapon to the Middle East! A Hunter of Iranian Drones

2026-03-09
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The Merops system is an AI system due to its autonomous detection, tracking, and engagement capabilities. The article discusses its deployment in an active conflict zone where drone attacks have caused harm, but it does not report new harm caused by the Merops system itself. The focus is on the use and strategic deployment of this AI system in warfare, providing context and insight into AI's role in modern military conflicts. This fits the definition of Complementary Information, as it provides supporting data and context about AI systems in use and their implications, without describing a new AI Incident or AI Hazard.

Specifications of the Merops Anti-Drone System, Arch-Nemesis of Iran's Shahed Drones, That the US Will Deploy in the Middle East

2026-03-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in the Merops system for detecting and intercepting drones, confirming AI system involvement. However, there is no indication of any harm caused or any malfunction or misuse leading to harm. The system is described as effective and protective, with no reported incidents of failure or misuse. The article focuses on the system's deployment and capabilities, which informs understanding of AI applications in military defense but does not describe an AI Incident or AI Hazard. Hence, the classification as Complementary Information is appropriate.

The Advantages of the Merops Drone System: Why Is It the Arch-Nemesis of Iran's Shahed?

2026-03-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI component in the Merops drone defense system used for navigation in contested environments. The system is being deployed in an active conflict zone to counter drone attacks, which are harmful events. However, the article does not describe any malfunction, misuse, or harm caused by the AI system itself. The AI system's involvement is in its use for defense, which could plausibly lead to incidents involving harm in the future, given the military context. Since no actual harm or incident caused by the AI system is reported, and the focus is on potential future harm, the classification as AI Hazard is appropriate.

Ukraine to Send Drone Experts to the Middle East to Help the US Face Iran

2026-03-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous or semi-autonomous drones (Shahed-136) used in military attacks, which have caused harm in conflict zones. The deployment of experts to counter these drones is a response to an ongoing AI-related military threat. Since the drones have been actively used in attacks causing harm, this qualifies as an AI Incident due to the direct or indirect harm caused by AI systems in warfare.

US Transfers 10,000 Ukrainian Interceptor Drones to the Middle East to Defend Against Iranian "Shaheds" - Bloomberg

2026-03-13
OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-equipped drones (Merops) used for military defense against Iranian drones, confirming AI system involvement. However, it does not describe any harm caused by these AI systems, nor any malfunction or misuse leading to harm. The AI systems are used to prevent harm from enemy drones, which is a positive application rather than a harmful incident. There is no indication of plausible future harm from these AI systems themselves; rather, they mitigate existing threats. The article mainly provides information about the deployment, capabilities, and strategic use of these AI systems, which fits the definition of Complementary Information rather than an Incident or Hazard.

US Sends 10,000 AI-Equipped Interceptor Drones Tested in Ukraine to the Middle East

2026-03-13
unian
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems used in military operations to intercept and counter enemy drones, an activity that inherently carries risks of injury, death, and escalation of conflict. The article explicitly states these AI drones are deployed and used in active operations. Although it does not describe a specific harmful event, their deployment and use in conflict zones with lethal capabilities constitute direct involvement in harm or its immediate risk, qualifying this as an AI Incident rather than a mere hazard or complementary information.

Proven by Ukraine: US Sends Merops Interceptor Drones to the Middle East

2026-03-13
РБК-Украина
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as equipped with AI elements and used in active military defense to intercept and destroy enemy drones. This use of AI systems in armed conflict directly leads to harm (destruction of drones, military engagement) and thus qualifies as an AI Incident under the framework. The article reports on actual use and impact, not just potential or future risks, so it is not merely a hazard or complementary information.

US has sent 10K interceptor drones to Mideast to thwart Iranian attacks, army chief says

2026-03-13
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled interceptor drones (Merops) being deployed and used to counter Iranian drone attacks, which have resulted in casualties among US forces. The AI system's use in defense operations is directly linked to harm to persons, fulfilling the criteria for an AI Incident. The involvement is in the use of the AI system in active military defense where harm has occurred, not merely a potential or future risk. Hence, this is an AI Incident rather than a hazard or complementary information.

US to Use Drones Tested in Ukraine -- What Is Known

2026-03-13
ФОКУС
Why's our monitor labelling this an incident or hazard?
The drones described are AI-supported systems used in military operations to intercept attacks, placing them directly in harm prevention and active conflict. The article explicitly reports their deployment and use in active defense, meaning the AI systems are already involved in potentially harmful conflict situations. This fits the definition of an AI Incident because the systems' use directly leads to harm, or its prevention, in a conflict context involving injury, disruption, and harm to communities and critical infrastructure. Since the article describes actual deployment rather than potential or future risks, it is not an AI Hazard or Complementary Information; nor is it unrelated, because AI systems are central to the event.

US Transfers 10,000 Interceptor Drones Battle-Tested in Ukraine to the Middle East

2026-03-13
ZN.UA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI elements integrated into the interceptor drones (Merops), which are actively used in military operations to counter hostile drones. The deployment of these AI systems in a conflict zone where harm to people and property is a direct consequence of military actions fits the definition of an AI Incident. The AI system's use is not hypothetical or potential but actual and operational, with direct implications for harm in warfare. Hence, it is not merely a hazard or complementary information but an incident involving AI systems causing or mitigating harm in a real-world conflict.

US Sends 10,000 Interceptors Battle-Tested in Ukraine to the Middle East

2026-03-13
Європейська правда
Why's our monitor labelling this an incident or hazard?
The drones described are explicitly AI-enabled systems used in active military operations, which have been combat-tested and deployed in conflict zones. Their use directly involves AI systems whose operation can cause injury, destruction, or other harms associated with warfare. The article details their deployment and effectiveness, implying realized or ongoing harm. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm in a conflict context. The event is not merely a potential hazard or complementary information but a concrete instance of AI system use with associated harms.

US Sends 10,000 Drones Tested in Ukraine to the Middle East

2026-03-13
espreso.tv
Why's our monitor labelling this an incident or hazard?
The drones described are explicitly equipped with AI systems and are used in military operations, which inherently carry risks of harm. The article does not report a specific harm event caused by these drones but highlights their deployment and potential impact in conflict. Given the plausible risk of injury, death, or other harms from AI-enabled weapon systems in active conflict, this situation fits the definition of an AI Hazard rather than an AI Incident. There is no indication of a realized harm event yet, so it is not an AI Incident. It is not merely complementary information because the focus is on the deployment of AI systems with potential for harm, not on responses or ecosystem context. It is not unrelated because AI systems are central to the event.

US Sends 10,000 Merops Interceptor Drones to the Middle East

2026-03-13
Gazeta.ua
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as equipped with AI systems and have been used in combat operations, implying AI-driven autonomous or semi-autonomous interception of enemy drones. Their deployment in an active conflict zone and their role in military operations directly link the AI system's use to potential harm to persons and communities, fulfilling the criteria for an AI Incident. The article describes actual use rather than just potential risk, so it is not merely a hazard; nor is it complementary information, since the main focus is the deployment and use of AI drones causing or preventing harm in military conflict. Hence, the classification is AI Incident.

US Deploys 10,000 Interceptor Drones Tested in Ukraine to the Middle East

2026-03-13
LB.ua
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems used in military defense operations. Their deployment directly involves the use of AI systems to protect critical infrastructure and military assets from attacks, which falls under harm category (b) - disruption of critical infrastructure management and operation. Since the drones are actively used in defense operations, this is not merely a potential risk but an actual use of AI systems with direct implications for harm prevention and military conflict. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI systems in a context of conflict and defense, with potential for injury or harm in the broader military engagement context.

US Sends 10,000 Interceptor Drones to the Middle East

2026-03-13
5 канал
Why's our monitor labelling this an incident or hazard?
The Merops drones are AI-enabled systems used in active military operations to intercept hostile drones. Their deployment and use in conflict zones directly involve AI systems whose operation can lead to injury or harm to persons and disruption of critical infrastructure (military assets). The article describes actual deployment and use, not just potential or planned use, indicating realized involvement of AI systems in a context with significant harm potential. Hence, this qualifies as an AI Incident due to the direct involvement of AI systems in military conflict with associated harms.

US deployed 10,000 AI-powered drones to Middle East

2026-03-13
Azeri - Press Informasiya Agentliyi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered drones deployed in a military operation, which are designed to detect and counter enemy UAVs. The deployment of such AI-enabled autonomous or semi-autonomous weapon systems in an active conflict zone presents a credible risk of harm to persons, property, and communities. Although the article does not report specific incidents of harm caused by these drones, the scale and nature of deployment imply a plausible risk of harm. Therefore, this event qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving injury, disruption, or violations of rights. There is no indication that harm has already occurred directly from these AI systems, so it is not classified as an AI Incident.

Drones from the Ukrainian Front Sent to the Middle East

2026-03-13
ipress.ua
Why's our monitor labelling this an incident or hazard?
The event involves AI systems integrated into military drones used in active conflict zones, where their deployment and use directly contribute to military engagements that can cause injury or harm to people. The article explicitly mentions AI-equipped drones used for intercepting enemy drones, which is a direct use of AI in a context that leads to harm. Therefore, this qualifies as an AI Incident under the definition of harm to persons resulting from the use of AI systems in military operations.

US Sends 10,000 Interceptor Drones Tested in Ukraine to the Middle East

2026-03-13
Лига Новости
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (interceptor drones with AI capabilities) actively used in military defense to counter hostile drones that are causing harm in conflict zones. The AI systems' deployment and use directly relate to preventing harm from attacks, fulfilling the criteria for an AI Incident through their role in ongoing military conflict and harm mitigation. The article reports actual deployment and use in conflict, linked to harm and defense against harm, rather than merely discussing potential risks or future hazards.

US Sends 10,000 Interceptor Drones Developed for Ukraine to the Middle East -- online.ua

2026-03-13
Украина Онлайн
Why's our monitor labelling this an incident or hazard?
The Merops drones are AI-supported interceptor systems actively used in military operations, which inherently carry risks of injury, harm, or disruption. Although no specific harm event is reported, the deployment of AI-enabled weapon systems in conflict zones plausibly leads to harm, meeting the criteria for an AI Hazard. The article focuses on the use and deployment of these AI systems with potential for harm rather than reporting a realized harm incident or a response to past harm, so it is best classified as an AI Hazard.

US Sends 10,000 Interceptor Drones Battle-Tested in Ukraine to the Middle East, - Bloomberg

2026-03-13
censor.net
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as AI-enabled systems used in military operations, which inherently carry risks of harm to persons and property. The article describes no realized harm or malfunction, focusing instead on the deployment and potential use of these AI systems in conflict zones. It therefore fits the definition of an AI Hazard: the AI system's use could plausibly lead to harm (injury, death, or property damage) in the context of armed conflict. With no indication that harm has yet occurred, it is not an AI Incident; and because the deployment of AI-enabled military drones with lethal potential is a credible risk event, it is more than complementary information.

US sends 10,000 interceptor drones tested in combat in Ukraine to Middle East - Bloomberg

2026-03-13
censor.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-equipped interceptor drones being used in military operations, which qualifies as AI system involvement. The use of these drones in active conflict zones implies a plausible risk of harm (injury, death, escalation), meeting the criteria for an AI Hazard. There is no direct report of harm caused by the AI system in this article, so it does not meet the threshold for an AI Incident. The event is not merely complementary information or unrelated, as it concerns the deployment of AI systems with potential for harm.

US Sends Interceptor Drones Used in Ukraine to Counter Iran's Attacks - HotNews.ro

2026-03-14
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as AI-enabled interceptor drones used in an active military conflict to counter Iranian drone attacks. Their deployment and use directly affect military operations and the management of critical infrastructure, which falls under harm category (b) - disruption of critical infrastructure or military operations. The AI system's use is central to the event, and the article discusses real operational deployment, not just potential or future use. Hence, this is an AI Incident rather than a hazard or complementary information.

The Cheap Weapon With Which the US Wants to Stop Iran: 10,000 Drones from Ukraine Sent to the Middle East Front

2026-03-14
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as AI-enabled systems used in military operations to intercept Iranian drones. Deployed in an active conflict zone where attacks and counterattacks occur, the AI system's use is directly linked to harm (injury, destruction, or disruption) in the conflict. Because these systems directly affect the conflict dynamics and potential harm, the event qualifies as an AI Incident under the definitions of harm to persons and disruption of critical infrastructure. The article describes the actual use of AI systems in a conflict causing or preventing harm, not merely potential or future harm.

US Sends 10,000 Interceptor Drones Developed in Ukraine to the Middle East to Counter Iranian Attacks

2026-03-14
RFI
Why's our monitor labelling this an incident or hazard?
The Merops drones are explicitly described as AI-enabled interceptor drones used in military operations to counter Iranian drone attacks. The use of AI in autonomous or semi-autonomous weapon systems in an active conflict zone presents a credible risk of harm to persons and communities. Although no specific harm or incident is reported in the article, the deployment of such AI systems in warfare plausibly leads to AI Incidents (injury, death, or escalation). Since the article focuses on deployment and potential impact rather than a realized harm event, it fits the definition of an AI Hazard rather than an AI Incident.

Why Trump Says He Doesn't Need Ukraine: The Americans Have Sent 10,000 Interceptor Drones to the Middle East

2026-03-14
spotmedia.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions drones equipped with artificial intelligence being used in active military defense operations to intercept and destroy enemy drones. The AI systems are integral to the drones' operation and their deployment is part of a military conflict scenario where harm to people and property is a direct consequence. This fits the definition of an AI Incident because the AI system's use has directly led to harm (or the potential for harm) in a conflict environment. The article does not merely discuss potential risks or future hazards but describes active use in a conflict setting, which involves realized harm or risk of harm. Hence, the classification as AI Incident is appropriate.

US Sends Interceptor Drones Used in Ukraine to Counter Iran's Attacks

2026-03-14
News.ro
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems used in military defense, which inherently carry risks of harm through their use in conflict. The article does not report any realized harm or incident caused by these drones but highlights their deployment and potential impact on military operations. Given the plausible risk of injury, escalation, or other harms from AI-enabled military drones, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Why Trump Says He Doesn't Need Ukraine: The Americans Have Sent 10,000 Interceptor Drones to the Middle East - Stiripesurse.md

2026-03-14
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The drones described are explicitly equipped with artificial intelligence and are actively used in military defense operations to intercept hostile drones, a use of AI systems that directly bears on military operations and security. However, the article reports no realized harm or incident caused by the AI systems themselves, and no injury, violation of rights, or other harm; it reports only their deployment and strategic use. Because it provides context and updates on AI system deployment and military use without reporting an AI Incident or AI Hazard, it is best classified as Complementary Information.