Elbit Systems Unveils AI-Powered Lanius Micro-Suicide Drone for Urban Warfare

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Israeli defense firm Elbit Systems has unveiled the Lanius, a micro-suicide drone equipped with advanced AI for autonomous navigation, mapping, and target identification in urban environments. The drone is designed to distinguish between combatants and civilians, raising concerns about AI-driven use of lethal force and the risk of future harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article details an AI system (Lanius drone) with autonomous navigation, target recognition, and lethal payload delivery capabilities. Although the drone requires human approval for firing, its autonomous functions and lethal intent present a plausible risk of causing injury or harm to persons or groups. The development and potential deployment of such AI-enabled autonomous weapons systems constitute an AI Hazard due to the credible possibility of future harm, even though no incident has yet occurred or been reported.[AI generated]

AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles; Digital security

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights, Public interest, Psychological

Severity
AI hazard

Business function
Research and development, Manufacturing

AI system task
Recognition/object detection, Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard

It could've been a racing drone; it's now destroying enemy targets

2022-11-18
DroneDJ
Why's our monitor labelling this an incident or hazard?
The article details an AI system (Lanius drone) with autonomous navigation, target recognition, and lethal payload delivery capabilities. Although the drone requires human approval for firing, its autonomous functions and lethal intent present a plausible risk of causing injury or harm to persons or groups. The development and potential deployment of such AI-enabled autonomous weapons systems constitute an AI Hazard due to the credible possibility of future harm, even though no incident has yet occurred or been reported.

The racing drone that could kill

2022-11-18
Washington Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Lanius drone) with autonomous capabilities for enemy detection and classification, which is intended for lethal use in warfare. The article highlights the plausible future harm that could arise if the human control element is removed or overridden, allowing the AI to decide to kill autonomously. This constitutes a credible AI Hazard because the development and potential use of such AI-enabled lethal drones could plausibly lead to injury or harm to persons, as well as broader ethical and human rights concerns. Since no actual harm has occurred yet, and the focus is on the potential risks and concerns, the event is best classified as an AI Hazard.

Israel's Elbit Systems unveiled micro-suicide drone, and it has a mother ship

2022-11-17
haaretz.com
Why's our monitor labelling this an incident or hazard?
The described system is an AI system as it uses AI computing devices and deep learning for autonomous navigation and object identification. The event concerns the development and intended use of this AI system for lethal purposes, which could plausibly lead to harm such as injury or death (AI Incident category (a)). Since no actual harm or incident is reported, but the potential for harm is credible and significant, this qualifies as an AI Hazard. The article focuses on the unveiling and capabilities of the system, not on any realized harm or incident.

These Israeli Urban Battlefield Assassin Drones Are Nightmare Fuel

2022-11-15
The Drive
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (LANIUS drone) with autonomous lethal capabilities and AI-powered target identification. Although no actual harm or incident is reported, the system's design and intended use in urban combat environments plausibly could lead to injury, loss of life, or violations of human rights. The autonomous operation and lethal payload delivery capabilities create a credible risk of harm, especially given the challenges in reliably distinguishing combatants from civilians. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.

Elbit Systems Shows Off Micro-Suicide Drone

2022-11-18
The Jewish Press - JewishPress.com
Why's our monitor labelling this an incident or hazard?
The Lanius drone is an AI system with autonomous navigation and threat identification capabilities. Its use in targeted killings involves AI in lethal force decisions, even if operator approval is required. The development and deployment of such AI-enabled autonomous weapons systems pose a credible risk of causing injury or harm to persons, thus fitting the definition of an AI Hazard. Since no actual harm or incident is reported, but the potential for harm is clear, this event is best classified as an AI Hazard.

It could've been a racing drone; it's now blowing up enemy targets

2022-11-18
DroneDJ
Why's our monitor labelling this an incident or hazard?
The Lanius drone is an AI system employing video analytics, SLAM, and advanced AI algorithms to autonomously navigate and identify targets, including distinguishing combatants from civilians. Its lethal payload and autonomous operation capabilities mean it could directly cause injury or death. Although human approval is required for firing, the AI's role in target identification and navigation is pivotal. Since the article does not report an actual incident but highlights the system's capabilities and intended use, it fits the definition of an AI Hazard due to the plausible future harm it could cause if deployed in combat.

Israel's Elbit Systems unveiled micro-suicide drone, and it has a mother ship

2022-11-22
haaretz.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI computing devices for autonomous navigation and object identification in a lethal micro-drone designed for targeted killings. Although the human operator retains the detonation decision, the AI's role in navigation and threat identification is critical. The potential for critical errors in object classification and the lethal nature of the payload imply a credible risk of harm to persons, fulfilling the criteria for an AI Hazard. Since no actual harm or incident is reported yet, but the system's deployment and capabilities pose a plausible risk of harm, the classification as AI Hazard is appropriate.

AI-driven combat drone can search buildings and execute suicide attacks

2022-11-21
New Atlas
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system integrated into lethal autonomous drones designed for combat, with capabilities including autonomous navigation, mapping, target detection, and classification. The system's use in lethal suicide attacks directly relates to potential injury or death (harm to persons) and possible violations of human rights. Although a human operator is involved in the engagement decision, the AI's autonomous functions are central to the system's operation and potential for harm. Since the article discusses the launch and capabilities of the system without reporting actual harm or incidents, it represents a credible future risk (hazard) rather than a realized incident. Thus, the event is best classified as an AI Hazard.

(WATCH) Israel unveils micro-suicide drones capable of hunting down enemies in urban combat

2022-11-22
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The drone is described as an autonomous combat system, which involves AI for decision-making in lethal operations. Although the article does not report any actual harm or incidents caused by the drone, the nature of the technology and its intended use in urban combat plausibly could lead to injury, death, and violations of human rights. The development and unveiling of such AI-enabled autonomous weapons systems constitute a credible risk of future harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event.

Elbit launches Lanius search-and-attack quadcopter

2022-11-23
Janes.com
Why's our monitor labelling this an incident or hazard?
The Lanius quadcopter is an AI system due to its autonomous navigation and environment mapping capabilities using SLAM algorithms. Its intended use as a loitering munition to locate and attack enemy personnel implies potential direct harm to people. Although the article does not report any actual harm yet, the development and deployment of such autonomous weapon systems plausibly could lead to injury or harm, qualifying this as an AI Hazard rather than an Incident at this stage.

Tech Company Develops Racing Drone That Can Target Enemies

2022-11-23
Manufacturing.net
Why's our monitor labelling this an incident or hazard?
The described drone system clearly involves AI for autonomous navigation, threat detection, and classification tasks. Although a human operator authorizes lethal engagement, the AI's autonomous capabilities in target identification and navigation are pivotal. The system's deployment in conflict or security scenarios with lethal payloads inherently carries a credible risk of causing injury or death, as well as potential human rights violations. Since the article does not report an actual incident of harm but presents the system's capabilities and intended use, this qualifies as an AI Hazard due to the plausible future harm the AI system could cause if deployed or misused.

Israel's New Drone Swarms Can Hunt Enemies in Urban Combat

2022-11-23
Middle East Forum
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using advanced AI algorithms for autonomous navigation, target detection, classification, and attack in urban combat scenarios. The drone's capabilities to carry lethal payloads and operate in swarms with AI support indicate a high potential for causing injury or harm to persons or groups, as well as harm to communities in conflict zones. However, the article does not describe any realized harm or incident resulting from the drone's deployment, only its development and intended use. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving injury, harm, or other significant harms in the future.

AI-Driven Suicide Drone

2022-11-23
coolbusinessideas.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system integrated into a lethal autonomous drone designed for military use, with capabilities for autonomous mapping, target classification, and engagement with human oversight. While no actual harm or incident is reported, the nature of the system and its intended use clearly pose a credible risk of harm to people and communities. The presence of AI in autonomous lethal weaponry and the potential for misuse or malfunction aligns with the definition of an AI Hazard. Since no realized harm is described, it does not qualify as an AI Incident. It is not merely complementary information or unrelated news, as the focus is on the AI system's capabilities and potential risks.

Elbit's micro-suicide drone swarms can hunt enemies in urban combat

2022-11-23
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems due to their autonomous search-and-attack capabilities. Although no direct harm is reported, autonomous lethal drones inherently carry plausible risks of injury or death to persons and harm to communities, fulfilling the criteria for an AI Hazard. The article's focus on the technology's potential impact on warfare and on ethical concerns supports classification as a hazard rather than an incident or complementary information.

For urban combat: Flocks of Elbit kamikaze microdrones

2022-11-21
The Quebec provincial newspaper
Why's our monitor labelling this an incident or hazard?
The Lanius drone is an AI system as it uses autonomous navigation, threat detection, and mission execution capabilities. The article focuses on its development and intended use in urban combat, which involves potential lethal applications. No actual harm or incident is reported, so it is not an AI Incident. However, the nature of the system and its intended deployment plausibly could lead to harms such as injury or violations of rights, qualifying it as an AI Hazard. The article does not primarily discuss responses, updates, or broader ecosystem context, so it is not Complementary Information. It is clearly related to AI systems and their potential impacts, so it is not Unrelated.