US Army solicits AI robots to build bridges under fire


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Army issued a Small Business Innovation Research (SBIR) solicitation seeking defense contractors to develop autonomous, AI-controlled robotic rafts capable of self-assembling into floating bridges in contested areas. Intended to reduce engineer casualties and the logistics footprint, the untested systems could face GPS jamming, cyberattacks, or failures in combat.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses the development and intended use of AI systems for autonomous bridge-building robots in military combat situations. Although no harm has occurred yet, the application of these AI systems in warfare, and the potential for them to be used under fire, could plausibly lead to significant harms, including injury or death to personnel and disruption of military operations. The AI system's role is pivotal in enabling autonomous operation in dangerous environments, which carries credible risks. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Robustness & digital security; Safety; Accountability; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles; Logistics, wholesale, and retail; Digital security; IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Physical (injury); Physical (death); Economic/Property

Severity
AI hazard

Business function
Research and development; Logistics; ICT management and information security

AI system task
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


The US Army wants AI robots to build bridges under fire

2025-02-16
Business Insider

The US Army wants AI robots to build bridges under fire

2025-02-16
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (autonomous robots) for military bridge-building under fire, a hazardous operation. Although no harm has occurred yet, the article clearly outlines the potential for harm to personnel and military operations if these AI systems fail or are compromised. The AI systems' role is pivotal in enabling autonomous bridge construction in dangerous conditions, which could plausibly lead to injury, disruption, or other harms. Hence, this fits the definition of an AI Hazard: it could plausibly lead to an AI Incident in the future, but no realized harm is reported at this stage.

Army Calling for AI Robots to Build Bridges Amid Combat

2025-02-16
NewsMax
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous robots designed for bridge-building under fire. The solicitation highlights challenges such as GPS jamming and cyberattacks, indicating a hostile operational environment in which failure or misuse of the AI systems could cause harm. Although no incident has occurred yet, the intended combat use and the potential for harm make this an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, because AI involvement is central, and it is not Complementary Information, because it does not update or respond to a past incident but instead describes a future development with plausible risks.

The US Army wants AI robots to build bridges under fire

2025-02-16
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous, AI-controlled robotic rafts intended to perform dangerous military engineering tasks. Although the project is currently in the solicitation and development phase with no reported incidents of harm, its intended use in contested combat environments where lives are at risk means that malfunction, misuse, or failure of these AI systems could plausibly lead to injury or death of personnel, disruption of critical military infrastructure, or other significant harms. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Army Calling for AI Robots to Build Bridges Amid Combat

2025-02-16
WGOW-AM
Why's our monitor labelling this an incident or hazard?
The article focuses on the Army's call for AI-enabled autonomous bridge-building robots, which are not yet deployed or involved in any incident. The development and intended use of the AI system in combat could plausibly lead to harm, such as injury or death in warfare, or an escalation of conflict risks. Since no harm has yet occurred, this qualifies as an AI Hazard rather than an AI Incident. The presence of AI is reasonably inferred from the description of autonomous robots performing complex tasks under fire. Therefore, the event is best classified as an AI Hazard.