Shield AI Unveils Autonomous X-BAT Combat Drone with AI-Piloted Capabilities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Shield AI has unveiled the X-BAT, an AI-piloted, vertical takeoff and landing (VTOL) combat drone designed for autonomous military operations. Powered by the Hivemind autonomy software, the X-BAT can execute missions independently or alongside manned aircraft, raising risks of future harm in warfare due to its autonomous capabilities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system (Hivemind) controlling an autonomous fighter jet capable of independent combat missions, which clearly qualifies as an AI system. The event concerns the development and planned deployment of this system, not an incident where harm has already occurred. However, the autonomous nature and lethal military application of the AI system imply a credible risk of future harm, such as injury or death in combat, making it an AI Hazard. There is no indication that harm has yet occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's development with significant implications for future harm potential.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Robots, sensors, and IT hardware; Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

X-BAT, world's first AI-piloted fighter jet with vertical takeoff and landing takes shape as Shield AI joins the US defense industry complex

2025-10-23
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Hivemind) controlling an autonomous fighter jet capable of independent combat missions, which clearly qualifies as an AI system. The event concerns the development and planned deployment of this system, not an incident where harm has already occurred. However, the autonomous nature and lethal military application of the AI system imply a credible risk of future harm, such as injury or death in combat, making it an AI Hazard. There is no indication that harm has yet occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's development with significant implications for future harm potential.

A new autonomous fighter jet just broke cover. It's powered by the same AI brain that flew an F-16 through a dogfight.

2025-10-22
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the Hivemind AI brain) controlling a fully autonomous fighter jet capable of combat operations without human intervention. This AI system's use in lethal military applications directly implicates risks of injury or death (harm to persons) and broader societal harms related to warfare and conflict escalation. The AI's deployment in real-world combat scenarios and its capability to operate independently without human oversight meet the criteria for an AI Incident, as the AI system's use has directly led or will lead to significant harm. The event is not merely a potential risk but an active deployment of AI in a context with high potential for harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

America's 'BAT' man unveils tech built to outsmart a Chinese first strike

2025-10-25
Fox News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous fighter jet with onboard AI) and its intended use in military operations, which could plausibly lead to significant harm in future conflict scenarios. However, no actual harm, injury, violation of rights, or disruption has occurred yet. The article is primarily about the unveiling and potential strategic impact of the AI system, making it a credible AI Hazard due to the plausible future risks associated with autonomous lethal systems. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated since it directly concerns an AI system with potential for harm.

Sky's Deadliest Terminator Revealed: AI Jet Needs No Pilot, No Runway - Takes Off Like Rocket, Strikes Without Mercy

2025-10-24
Zee News
Why's our monitor labelling this an incident or hazard?
The X-BAT fighter jet is an AI system explicitly described as fully autonomous in flight and combat operations, designed to kill without human pilots. Its deployment represents a direct use of AI in lethal military applications, causing or enabling harm to people and communities. The article also references existing AI weapons actively causing destruction in conflict zones, confirming realized harm from AI systems. This fits the definition of an AI Incident, as the AI system's use has directly led or will imminently lead to injury or harm to persons and communities through autonomous lethal force.

This new AI fighter jet doesn't need pilots, runways, or GPS

2025-10-24
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as controlling an autonomous fighter jet drone with lethal capabilities. Although no harm has yet occurred (the first flight is scheduled for next year), the system's intended use as an autonomous weapon with strike capabilities plausibly could lead to injury, disruption, or other harms. The article also references ongoing debates about autonomous weapons and calls for bans, underscoring the recognized risks. Since harm is not yet realized but plausible, this fits the definition of an AI Hazard rather than an AI Incident.

X-BAT -- An AI-powered fighter that neither needs a runway nor GPS

2025-10-22
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the X-BAT autonomous fighter controlled by AI software) whose development and intended use in military operations could plausibly lead to significant harms such as injury, disruption, or violations of rights if deployed. However, since no actual harm or incident has occurred or been reported, and the system is still in development without contracts or deployment, this constitutes a plausible future risk rather than a realized harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Defense tech firm Shield AI unveils X-BAT drone platform for Air Force

2025-10-23
Washington Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous drone platform with AI-enabled software for navigation and operation in complex environments. Although no harm has yet occurred since the drone is still in development and testing phases, the intended use as an armed military drone capable of carrying missiles and bombs implies a credible risk of future harm, including injury or death and disruption of critical infrastructure or military operations. The article does not report any realized harm or incident but highlights the plausible future risks associated with the deployment of such AI-enabled autonomous weapons. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

X-BAT combat drone blends VTOL, stealth and speed

2025-10-23
New Atlas
Why's our monitor labelling this an incident or hazard?
The X-BAT drone integrates an AI system (Shield AI's proprietary AI Hivemind) that enables autonomous operation and collaboration in contested environments, indicating clear AI involvement. Although the drone is still in development and has not caused any harm yet, its intended use as a combat aircraft capable of autonomous lethal operations presents a credible risk of future harm, such as injury to persons, disruption of critical infrastructure, or violations of human rights in warfare. The article does not report any realized harm or incident but highlights the potential for significant impact once deployed. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Shield AI Unveils Fully Autonomous VTOL Fighter Jet

2025-10-24
Military
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Hivemind software controlling the autonomous fighter jet). The article does not describe any direct or indirect harm caused by this AI system yet, but the system's intended use as an autonomous combat aircraft capable of carrying weapons and operating in contested environments clearly presents a plausible risk of harm (injury, disruption, violations of rights). The past injury incident mentioned relates to a different drone model (V-BAT) and is background context, not the main event. The unveiling and planned deployment of the X-BAT autonomous fighter jet thus represent a credible AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.

Shield AI shows off jet-powered VTOL combat drone

2025-10-24
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Shield AI's Hivemind autonomy software) integrated into a combat drone, which is still under development and not yet operational. The article does not report any injury, violation of rights, disruption, or harm caused by the AI system. Instead, it presents the potential capabilities and future deployment plans, implying plausible future harm due to the nature of autonomous weapon systems. Therefore, this qualifies as an AI Hazard because the development and intended use of this AI-enabled autonomous combat drone could plausibly lead to significant harm in the future, but no harm has yet occurred.

Shield AI unveils AI-piloted VTOL stealth drone

2025-10-24
UK Defence Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system controlling an autonomous combat drone capable of operating without human control in contested environments. Although no incident of harm is reported, the nature of the system—a weaponized AI-piloted drone—carries a credible risk of causing injury, violations of rights, or other significant harms in the future. The event is about the development and unveiling of this system, not about an actual harm event. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Shield AI unveils X-BAT autonomous vertical takeoff fighter jet

2025-10-24
Defense News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Shield AI's Hivemind) as the autonomous core of the X-BAT drone, which is designed for combat operations. While no harm has yet occurred, the autonomous weapon system's development and intended use in contested military environments plausibly could lead to AI Incidents involving injury, loss of life, or other harms. The event does not describe any actual harm or malfunction but focuses on the unveiling and capabilities of the system, making it a potential hazard rather than an incident. Hence, the classification as an AI Hazard is appropriate.

Shield AI Unveils High-End, VTOL Collaborative Combat Aircraft

2025-10-22
Aviation Week
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (Hivemind autonomy software) integrated into an autonomous combat aircraft capable of supersonic flight and missile carriage. Although no harm has yet occurred, the nature of the system and its intended military application imply a credible risk of injury, disruption, or other harms if deployed. The article focuses on the unveiling and future plans rather than any realized harm or incident. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

Shield AI launches X-BAT unmanned VTOL strike jet concept

2025-10-22
Janes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development of a fully autonomous strike jet, which qualifies as an AI system due to its autonomous operation and complex decision-making capabilities. Although no harm has yet occurred, the intended military use of the system as a strike jet inherently carries significant potential for harm, including injury or death, disruption of critical infrastructure, and violations of human rights. The event thus fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. There is no indication of realized harm or incident, nor is the article focused on responses or updates to prior incidents, so it is not an AI Incident or Complementary Information.

Tech company unveils new AI-fighter drone

2025-10-24
KTBS
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as controlling an autonomous military drone capable of offensive operations. The development and deployment of such AI-enabled autonomous weapons systems inherently carry risks of causing injury, death, and other harms in warfare. Since the article focuses on the unveiling and intended military use without reporting any actual harm yet, this event fits the definition of an AI Hazard due to the plausible future harm from the AI system's use in combat scenarios.

Shield AI Unveils X-BAT, an AI-Piloted VTOL Fighter Jet for Contested Environments

2025-10-22
IT News Online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Shield AI's Hivemind autonomy software) integrated into a combat aircraft capable of autonomous operations in contested environments. Although no harm or incident is reported, the system's intended use as an autonomous fighter jet inherently carries plausible risks of harm, including injury or death, violations of human rights, and broader conflict-related harms. The mere development and unveiling of such an AI-enabled autonomous weapon system with lethal capabilities qualifies as an AI Hazard under the framework, as it could plausibly lead to an AI Incident in the future. There is no indication of actual harm yet, so it is not an AI Incident. It is not Complementary Information or Unrelated, as the event centers on the AI system's development and potential impact.

Shield AI Unveils X-BAT, a Vertical Take-Off Autonomous Combat Aircraft

2025-10-25
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The X-BAT is an AI system designed for autonomous combat operations, which inherently involves significant risks of harm including injury or death in military conflict, disruption of critical infrastructure, and broader societal harms. Although the article does not report any realized harm or incidents caused by the AI system, the development and deployment of such autonomous combat UAVs plausibly could lead to AI incidents involving harm to persons, communities, or critical infrastructure. Therefore, this event represents an AI Hazard due to the credible risk posed by the autonomous combat capabilities and potential future use in warfare.

Flight Debut for Shield AI X-BAT Could Come Next Fall

2025-10-24
Defense Daily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Hivemind) integrated into an autonomous military drone (X-BAT). The article discusses planned future demonstrations and operational validation, indicating the AI system's development and intended use but no actual incident or harm has occurred yet. Given the nature of autonomous weapon systems, there is a credible potential for future harm, making this an AI Hazard rather than an AI Incident or Complementary Information. The article does not report any realized harm or societal/governance responses, so it is not Complementary Information.

Shield AI Says Development of 2,000 Mile Range X-BAT Underway

2025-10-21
Defense Daily
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Hivemind) guiding an autonomous fighter drone with long-range capabilities. The system's intended use in military operations without GPS or communications suggests autonomous operation in complex environments, which carries inherent risks of harm. Since the article discusses ongoing development and potential deployment but does not report any actual harm or incident, it fits the definition of an AI Hazard. The autonomous weapon system's potential to cause injury, disrupt infrastructure, or violate rights is a credible risk, making this a plausible future harm scenario.

America’s ‘BAT’ man unveils tech built to outsmart a Chinese first strike

2025-10-25
Fox Wilmington
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the Hivemind autonomy) integrated into a military aircraft designed for combat operations. Although the system is still under development and not yet combat-ready, its intended use in warfare implies a credible risk of causing harm (injury, death, destruction) in future conflicts. The AI's role in autonomous decision-making for navigation and target identification, even with human-in-the-loop for lethal decisions, means the AI system's malfunction or use could plausibly lead to an AI Incident. Since no actual harm has occurred yet, the event is best classified as an AI Hazard.

Shield AI Unveils Unmanned Fighter Jet: A New Era for Drone Wingmen and Solo Military Aircraft Operations

2025-10-23
Visegrád Post
Why's our monitor labelling this an incident or hazard?
The X-BAT is an AI system explicitly described as an autonomous unmanned fighter jet with lethal capabilities. Although no harm has yet occurred, the nature of the system and its intended military use imply credible risks of injury, disruption, or other harms if deployed or misused. The article focuses on the unveiling and potential military applications, highlighting the strategic shift towards autonomous AI systems in warfare. Since no actual harm or incident is reported, but plausible future harm is evident, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

New Kind of CCA? Meet the Supersonic, VTOL X-BAT

2025-10-24
Air & Space Forces Magazine
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system, specifically the Hivemind autonomy software enabling autonomous operation of the X-BAT combat drone. The event concerns the development and planned deployment of this AI-enabled system, which could plausibly lead to harms such as injury, disruption, or violations of rights in future combat scenarios. However, no actual harm or incident is reported at this time. The article is primarily an announcement and description of a new AI-enabled military technology, highlighting potential future risks but not describing any realized harm or incident. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

AI-piloted VTOL aircraft for autonomous combat operations unveiled by Shield AI

2025-10-23
militaryembedded.com
Why's our monitor labelling this an incident or hazard?
The Shield AI X-BAT aircraft is an AI system explicitly described as autonomous and capable of combat operations, including strike and electronic warfare. Its deployment in contested environments and autonomous operation in communications-denied conditions indicate a credible risk of harm, including injury or death, disruption of critical infrastructure, or violations of human rights. Although no harm has yet occurred, the nature and intended use of this AI system in autonomous combat roles constitute a plausible future risk of significant harm, qualifying this event as an AI Hazard rather than an Incident or Complementary Information.

This AI-powered fighter jet doesn't need a pilot... or a runway

2025-10-23
Yahoo Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Shield AI's autonomy software) integrated into an autonomous fighter jet drone designed for military use. Although no harm has yet occurred since the system is still in development and not operational, the nature of the system—a weaponized autonomous drone—poses a credible risk of future harm, including injury or violations of human rights. The event is about the development and unveiling of this AI system, which could plausibly lead to an AI Incident once deployed or misused. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Silicon Valley-backed Shield AI enters fighter jet race with new "wingman" drone

2025-10-23
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI-enabled autonomous military drone system, which is explicitly described as capable of autonomous operation. While no harm has yet occurred, the intended use in combat and the autonomous capabilities imply a credible risk of future harm, such as injury or violations of human rights. The article does not report any actual harm or incidents, so it does not qualify as an AI Incident. It is more than general AI news or product announcement because it highlights the potential risks associated with autonomous weapon systems. Hence, it fits the definition of an AI Hazard.

CNBC exclusive: First look at Shield AI's new AI-piloted military fighter drone

2025-10-21
CNBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Hivemind) piloting a military fighter drone capable of combat and missile deployment, which clearly involves AI system development and use. Although a past injury incident is referenced, the article does not report any new harm caused by the AI system. The main focus is on the unveiling and potential of the AI-piloted drone, which could plausibly lead to harm given its combat role and autonomous capabilities. This fits the definition of an AI Hazard, as the development and deployment of such AI systems could plausibly lead to injury, harm to communities, or disruption of critical infrastructure in the future. The article does not describe a realized harm event directly caused by the AI system in this new context, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the AI system and its potential risks are central to the report.

Meet world's first AI fighter jet, which requires neither pilot nor runway to fly, it is..., made by...

2025-10-25
India.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system controlling an autonomous fighter jet with offensive weaponry, which is not yet operational but planned for production and deployment in the near future. The AI system's role in autonomous combat operations implies a credible risk of harm, including injury or death and disruption of critical infrastructure. Since no harm has yet occurred but the potential for significant future harm is clear, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the AI system's development and its plausible future impact, not on responses or updates to past incidents.

Silicon Valley-backed Shield AI enters fighter jet race with new "wingman" drone

2025-10-22
CNA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous military drone with AI-based autonomy software) in development and intended for use in military operations. Although no harm has yet occurred, the autonomous combat drone's potential use in warfare could plausibly lead to injury, disruption, or other harms. The article does not report any actual harm or incident but focuses on the conceptual launch and future plans, fitting the definition of an AI Hazard. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm.

AI-Powered Fighter Jet Takes Off Like a Space Rocket, Needs No Pilot to Fight Battles

2025-10-24
autoevolution
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous fighter jet drone with AI piloting capabilities) whose development and intended use could plausibly lead to significant harms, including harm to persons and communities through military combat operations. Since no actual harm or incident has occurred yet, but the system's deployment would represent a major new AI-enabled military capability with potential for harm, this qualifies as an AI Hazard. The article does not describe any realized harm or incident, nor does it focus on responses or complementary information, so it is not an AI Incident or Complementary Information.

Shield AI develops world's first autonomous VTOL fighter jet

2025-10-22
Defence Blog
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Hivemind) that autonomously pilots a combat-capable fighter jet, which is a clear AI system by definition. The event concerns the development and intended use of this AI system in lethal military operations, which could plausibly lead to injury or death and other harms. No actual harm or incident is described, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it highlights the introduction of a new AI-enabled autonomous weapon system with significant potential for harm. Hence, it fits the definition of an AI Hazard.

Shield AI Introduces 'World's First' AI-Piloted VTOL Fighter Jet 'X-Bat'

2025-10-23
The Defense Post
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as the core technology enabling autonomous flight and combat capabilities of the X-Bat drone. The potential for harm is high given the military application and weaponization, which could plausibly lead to injury, death, or other harms if deployed or misused. Since no harm has yet occurred or been reported, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a product launch without risk, because the nature of the system and its intended use imply credible future risks.

Shield AI unveils new autonomous VTOL fighter jet concept

2025-10-23
Shephard Media
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and unveiling of an AI-enabled autonomous military drone platform, which involves AI systems controlling an armed VTOL UAV. Although no harm or incident has occurred yet, the nature of the system and its intended use as an autonomous fighter jet or loyal wingman drone plausibly could lead to significant harms in the future. Therefore, this event qualifies as an AI Hazard due to the credible risk associated with autonomous weapon systems, even though no direct harm is reported at this stage.