US DIU Selects Contractors to Develop Autonomous Long-Range Loitering Munitions

The Defense Innovation Unit awarded contracts to AeroVironment, Dragoon, Swan, and Auterion (two of them partnered with Ukrainian firms) to develop and test Project Artemis: autonomous long-range loitering munitions able to operate in contested, GNSS-denied, or low-bandwidth environments. Prototypes, including already-tested Ukrainian strike drones, are due by 2025 for evaluation and potential mass deployment. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems, as the drones described require advanced autonomous navigation, electronic warfare resistance, and adaptability, all indicative of AI integration. The development and planned deployment of long-range attack drones capable of operating in contested environments pose a plausible risk of harm, including injury, disruption, or violations of rights, given their military use. However, no actual harm or incident is reported in the article, which focuses on prototype development and future production. The event therefore fits the definition of an AI Hazard: the AI systems could plausibly lead to an AI Incident in the future, but no direct or indirect harm has yet occurred. [AI generated]
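
Read across this and the article-level rationales below, the monitor's labelling reduces to a short decision rule: no AI system means the story is unrelated; realized harm means an AI Incident; plausible but unrealized harm means an AI Hazard; follow-ups to past events are Complementary Information. A minimal sketch of that rule in Python (the flag names and the classify helper are illustrative assumptions, not the monitor's actual implementation):

    from enum import Enum

    class Label(Enum):
        AI_INCIDENT = "AI Incident"                  # harm has already occurred
        AI_HAZARD = "AI Hazard"                      # harm is plausible but not yet realized
        COMPLEMENTARY = "Complementary Information"  # update/response to a prior event
        UNRELATED = "Unrelated"                      # no AI system involved

    def classify(ai_system_involved: bool,
                 harm_occurred: bool,
                 harm_plausible: bool,
                 update_on_prior_event: bool) -> Label:
        # Hypothetical restatement of the rationale above, not the monitor's code.
        if not ai_system_involved:
            return Label.UNRELATED
        if harm_occurred:
            return Label.AI_INCIDENT
        if harm_plausible:
            return Label.AI_HAZARD
        if update_on_prior_event:
            return Label.COMPLEMENTARY
        return Label.UNRELATED

    # Project Artemis: AI systems are central, no harm is reported yet,
    # but deployment of autonomous munitions makes future harm plausible.
    print(classify(ai_system_involved=True, harm_occurred=False,
                   harm_plausible=True, update_on_prior_event=False))
    # -> Label.AI_HAZARD

Every article below lands in the AI Hazard branch for the same reason: an AI system is clearly involved, harm is plausible given the systems' lethal purpose, and none has yet been reported.
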
AI principles
Accountability; Safety; Robustness & digital security; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

Business function
Research and development; Manufacturing

AI system task
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Two Ukrainian Companies Included in Shortlist for US Long-Range Drone Program

2025-03-18
KyivPost

US Army tests Ukrainian long-range kamikaze drones under classified project Artemis

2025-03-16
Euromaidan Press
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous kamikaze drones) under development and testing. There is no report of actual harm or incidents caused by these systems yet, but their intended use as autonomous strike drones could plausibly lead to injury, disruption, or other harms. The classified nature and ongoing development indicate a credible risk of future harm. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, because the AI system involvement is explicit and central.

DIU awards four companies with drone prototype contracts for Project Artemis

2025-03-18
Shephard Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development of long-range loitering munitions (drones) capable of autonomous operation in complex environments, which reasonably implies the involvement of AI systems. Although no harm has occurred yet, the nature of these AI-enabled weapons systems inherently carries a credible risk of causing injury, death, or other significant harms if they are deployed. The event centres on the awarding of contracts and the development phase, indicating potential future harm rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Four Companies Receive Contracts to Build UAS Prototypes for Project Artemis

2025-03-17
ExecutiveBiz
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as the UAS prototypes are described as software-defined autonomous attack platforms, which reasonably implies AI integration for autonomous operation. The article focuses on the development and planned deployment of these systems, which could plausibly lead to harm such as injury or disruption in military contexts. No actual harm or incident is reported, so it does not meet the criteria for an AI Incident. The potential for significant harm from autonomous weapons justifies classification as an AI Hazard.

AV's Cutting-Edge One-Way Attack UAS Secures DIU Backing Under Project Artemis

2025-03-14
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (autonomous attack UAS) with clear military applications. Although no specific harm or incident is reported as having occurred yet, the nature of the system—a fully autonomous weapon capable of precision strikes—poses a credible risk of harm to persons, communities, and property if deployed. The article emphasizes rapid deployment and operational evaluation, indicating plausible future harm. Therefore, this qualifies as an AI Hazard due to the credible potential for significant harm stemming from the AI system's use in autonomous lethal operations. There is no indication of an actual incident or realized harm yet, so it is not an AI Incident. It is more than complementary information because the focus is on the system's development and deployment with inherent risks, not just updates or responses to past events.

AeroVironment secures contract with Defense Innovation Unit

2025-03-14
Markets Insider
Why's our monitor labelling this an incident or hazard?
The contract involves the development and deployment of autonomous precision munitions, which are AI systems by definition due to their autonomous operational capabilities. Although the article does not report any actual harm or incident, the nature of these AI-enabled weapons systems carries a credible risk of causing injury, death, or other serious harms in the future. The event is thus best classified as an AI Hazard, reflecting the plausible future harm from the use of such AI systems in military contexts.

AV's Cutting-Edge One-Way Attack UAS Secures DIU Backing Under Project Artemis

2025-03-14
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system: AV's autonomous one-way attack UAS with AI components like AVACORE autonomy software and SPOTR-Edge computer vision enabling autonomous navigation and precision strike. The event concerns the development and planned operational evaluation of this AI-enabled weapon system. No actual harm or incident is described, but the system's intended use as a precision-strike autonomous weapon plausibly could lead to injury, death, or other harms. The article focuses on the system's capabilities and upcoming testing, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident in the future. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated since it clearly involves an AI system with potential for harm.

US DIU contracts four companies for Artemis project

2025-03-17
Army Technology
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous UAS platforms capable of operating without GPS and in electronic-warfare-denied environments, indicating advanced AI autonomy and navigation. The systems are intended as loitering weapons, that is, autonomous or semi-autonomous lethal systems. Although no incident or harm has yet occurred, the nature of these AI-enabled weapons systems and their intended use in contested environments present a credible risk of future harm, including injury, death, and the other harms covered by the definition of an AI Incident. Since the event concerns the development and testing phase with no realized harm, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information, because the focus is on the development and potential deployment of these AI systems with inherent risks, not on responses or updates to past incidents.

AV secures DIU contract to advance autonomous strike drone deployment

2025-03-17
SpaceWar
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (autonomous strike drone with AI-based autonomy software and computer vision) being developed and deployed for military purposes. The system's autonomous precision targeting and operation in contested environments indicate AI involvement. The deployment of such autonomous weapon systems poses a plausible risk of harm, including injury or death, disruption, and other significant harms inherent in autonomous weapons use. Although no specific harm has yet occurred, the nature and intended use of the system imply a credible potential for harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the deployment of autonomous strike drones.

AeroVironment's Cutting-Edge One-Way Attack UAS Secures DIU Backing Under Project Artemis

2025-03-14
Green Stock News
Why's our monitor labelling this an incident or hazard?
The article details the development and planned deployment of an AI-enabled autonomous weapon system (one-way attack UAS) intended for precision strikes in contested environments. Although no specific harm or incident has occurred yet, the nature of the system and its intended use in military conflict plausibly could lead to injury, loss of life, or other harms. The AI system's autonomous operation in GPS-denied and electronic warfare environments increases the risk of unintended consequences or malfunction. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to persons or communities in the future. It is not an AI Incident because no harm has yet occurred, nor is it merely complementary information or unrelated news.

One-way attack drone from AeroVironment chosen for DIU's Project Artemis

2025-03-17
militaryembedded.com
Why's our monitor labelling this an incident or hazard?
The AeroVironment one-way attack drone incorporates AI systems such as autonomy software and computer vision for navigation and targeting, which are integral to its operation. The system is designed for precision strikes and autonomous operation in GPS-denied environments, implying significant AI involvement. Although no harm has yet occurred, the deployment of autonomous strike drones with AI capabilities poses a credible risk of harm including injury, disruption, or violations of rights due to their lethal potential. Therefore, this event represents an AI Hazard as it plausibly could lead to AI Incidents involving harm if deployed or misused.

AeroVironment's (AVAV) One-Way Attack UAS Secures DIU Backing Under Project Artemis

2025-03-14
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system: an autonomous, software-defined, one-way attack UAS with advanced autonomy software and computer vision. The event concerns the development and planned deployment of this system, which is intended for military precision strikes. Although no harm has yet occurred, the system's capabilities and intended use in contested environments imply a credible risk of future harm, including injury or other significant harms. The event does not describe any actual incident or harm caused by the AI system, so it is not an AI Incident. It is not merely complementary information because the focus is on the system's development and deployment with potential for harm, not on responses or updates to past incidents. Hence, the classification is AI Hazard.