OpenAI Provides Voice Command AI for Pentagon Drone Swarm Competition


OpenAI is supplying AI technology to translate battlefield voice commands into digital instructions for autonomous drone swarms in a $100 million Pentagon competition. While OpenAI's role excludes direct drone operation or weapons control, the project raises concerns about future risks of AI-enabled autonomous military systems. No harm has occurred yet. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (voice-controlled software for autonomous drones) in a military context, which could plausibly lead to significant harm if misused or malfunctioning, such as injury or disruption in combat scenarios. However, no actual harm or incident has occurred yet, and the role of OpenAI is limited and indirect. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm in the future but does not describe any current incident or harm. [AI generated]
AI principles
Safety, Accountability

Industries
Government, security, and defence

Severity
AI hazard

AI system task
Other


Articles about this incident or hazard


OpenAI joins Pentagon's $100 million drone swarm initiative, Bloomberg reports

2026-02-13
Investing.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (voice-controlled software for autonomous drones) in a military context, which could plausibly lead to significant harm if misused or malfunctioning, such as injury or disruption in combat scenarios. However, no actual harm or incident has occurred yet, and the role of OpenAI is limited and indirect. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm in the future but does not describe any current incident or harm.

OpenAI Tapped for Voice Control Tech in US Drone Swarm Challenge

2026-02-13
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenAI's language model) in a military drone swarm challenge, fulfilling the AI System criterion. The AI is used to translate voice commands, placing its involvement in the system's use phase. No direct or indirect harm has occurred yet, but the article discusses credible concerns about the potential for harm if AI is used to control lethal drone operations without human oversight. The development and deployment of AI-enabled drone swarms for offensive military purposes plausibly could lead to harms such as injury, violations of rights, or harm to communities. Since no harm has materialized yet, and the AI's role is limited to command translation, this event fits the definition of an AI Hazard. The article also includes contextual information about ethical concerns and governance, but the primary focus is on the potential risks of AI in this military application.

OpenAI tapped for voice control tech in U.S. drone swarm trial

2026-02-14
The Japan Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it translates voice commands into digital instructions for drone swarms, which are AI-enabled systems capable of autonomous operation. Although no harm has yet occurred, the technology's intended use in autonomous military drone swarms presents a credible risk of future harm, including injury or violations of human rights. The article does not report any incident or malfunction causing harm, so it is not an AI Incident. It is not merely complementary information because the focus is on the development and trial of potentially hazardous AI technology. Therefore, the event is best classified as an AI Hazard.

OpenAI selected for $100M Pentagon drone swarm competition

2026-02-13
Crypto Briefing
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (OpenAI's voice-to-command software) used in a military context for autonomous drone swarms. While the AI is not directly controlling weapons or drones, its role in commanding autonomous systems in a defense competition implies potential future harm. The event does not describe any realized harm or incident but highlights plausible future risks related to AI-enabled autonomous weapons systems. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI Tapped For Voice Control Tech In US Drone Swarm Trial

2026-02-14
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (OpenAI's language models) used in a military context to translate voice commands for drone swarms. The AI's role is in the use phase, translating commands that could lead to autonomous drone actions with lethal potential. No actual harm or incident has been reported; the project is in development and competition stages. The potential for harm is credible given the offensive military application and concerns expressed by defense officials. Thus, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (e.g., injury, violation of rights) in the future. It is not Complementary Information because the main focus is on the development and potential risks, not on responses or updates to past incidents. It is not unrelated because AI involvement and plausible harm are central to the article.