UAE Unveils AI-Powered Multi-Layered Anti-Drone Defense System


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Khaldes Holding, based in Abu Dhabi, has unveiled DAMITA, the UAE's first fully integrated, multi-layered defense system against drones. DAMITA uses artificial intelligence for autonomous detection, command, and neutralization of individual and swarm drones, aiming to protect cities and critical infrastructure from evolving aerial threats.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as using AI for autonomous detection and neutralization of drone threats. However, the article does not report any actual harm or incident caused by the AI system or its malfunction. Instead, it presents the system as a defensive measure against plausible AI-enabled drone threats. Therefore, this qualifies as an AI Hazard because the system's development and deployment relate to potential future harms from AI-enabled drones, but no incident has occurred yet. It is not Complementary Information because the article is not about responses or updates to a prior incident, nor is it unrelated as it clearly involves AI systems with security implications.[AI generated]
AI principles
Accountability, Transparency & explainability, Safety, Robustness & digital security, Respect of human rights, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (injury), Physical (death), Economic/Property

Severity
AI hazard

AI system task
Recognition/object detection, Event/anomaly detection, Goal-driven organisation


Articles about this incident or hazard


Khaldes presents DAMITA: the UAE unveils a revolutionary defense system against drone swarms

2025-12-03
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for autonomous detection and neutralization of drone threats. However, the article does not report any actual harm or incident caused by the AI system or its malfunction. Instead, it presents the system as a defensive measure against plausible AI-enabled drone threats. Therefore, this qualifies as an AI Hazard because the system's development and deployment relate to potential future harms from AI-enabled drones, but no incident has occurred yet. It is not Complementary Information because the article is not about responses or updates to a prior incident, nor is it unrelated as it clearly involves AI systems with security implications.

Unveiling of the first integrated Emirati system for countering unmanned aircraft

2025-12-03
24.ae
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for detection, discrimination, and command and control in a defense context against drones. The system's purpose is to protect critical infrastructure and urban environments from drone threats which, if realized, would constitute harm to property, communities, or critical infrastructure. Although no harm has yet occurred, the system's autonomous capabilities and potential use in military or security operations create a plausible risk of harm, including accidental damage or escalation. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.

Unveiling of the first integrated multi-layered Emirati counter-drone system

2025-12-03
Al Bayan
Why's our monitor labelling this an incident or hazard?
The system explicitly uses AI for detection and control functions, indicating the presence of an AI system. However, the article does not report any actual harm caused by the system or its malfunction. Instead, it presents the system as a defensive capability to counter drone threats, implying potential future use to prevent harm. Therefore, this event represents a plausible future risk scenario related to AI-enabled defense technology, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

Akhbarak Net | Khaldes unveils the first integrated Emirati counter-drone system

2025-12-03
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (DAMITA) designed for defense against AI-enabled drones, involving AI in detection, decision-making, and engagement. The system's development and intended use relate to military defense against autonomous drone threats, which is a credible AI hazard scenario given the potential for harm in conflict situations. However, since no actual harm or incident is reported, and the article focuses on the system's capabilities and strategic importance rather than any realized harm or malfunction, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Khaldes unveils the first Emirati counter-drone system | Al Khaleej newspaper

2025-12-03
Al Khaleej newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system designed for autonomous detection and neutralization of drones, indicating clear AI system involvement. There is no indication of any realized harm or incident resulting from the system's use or malfunction. However, the nature of the system—a military defense tool with autonomous capabilities—implies plausible future harm risks, such as accidental targeting, misuse, or escalation of conflict. According to the definitions, the mere development and unveiling of such AI-enabled autonomous weapons systems with high potential for misuse constitute an AI Hazard. Hence, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Khaldes unveils the first integrated Emirati counter-drone system

2025-12-03
Al Etihad News Center
Why's our monitor labelling this an incident or hazard?
The system explicitly uses AI for detection, discrimination, and command and control functions, which qualifies it as an AI system. The event involves the development and use of this AI system for defense against drones, which are described as autonomous and capable of complex tasks. While no actual harm or incident is reported, the system is designed to counter threats that could cause harm. The article focuses on the system's capabilities and strategic importance, implying a plausible future scenario where AI-enabled drones could cause harm if not countered. Therefore, this event represents an AI Hazard, as the AI system's development and deployment could plausibly lead to or prevent AI-related harms in military conflict scenarios. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.

Khaldes unveils the first integrated Emirati anti-drone system - Emirates News

2025-12-03
Emirates News
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system (DAMITA) designed for defense against drones, which could plausibly prevent or mitigate harm to critical infrastructure and communities. However, there is no indication that the system has caused any injury, disruption, or rights violations, nor that any harm has occurred or been averted due to its use or malfunction. The article focuses on the system's capabilities and strategic importance rather than any incident or hazard event. Therefore, this is best classified as Complementary Information, providing context on AI-enabled defense technology and its potential role in managing AI-related aerial threats.