U.S. Establishes AI-Powered Autonomous Military Force for Latin America


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The U.S. Army has announced the creation of an AI-enabled autonomous military force to support Southern Command operations in Central America, South America, and the Caribbean. The initiative aims to combat drug cartels and respond to crises, raising concerns about potential future harm from AI-enabled autonomous weapons systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-based autonomous and semi-autonomous systems for military purposes, which qualifies as AI system involvement. The event concerns the development and planned deployment of these systems, not a realized harm. However, autonomous weapons and military AI systems inherently carry credible risks of causing injury, disruption, or other harms. Since no actual harm is reported yet, but the plausible future harm is clear, this fits the definition of an AI Hazard. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it directly involves AI systems with potential for harm.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


United States creates an artificial intelligence-based war force in Latin America

2026-04-21
Clarin
Why's our monitor labelling this an incident or hazard?
(Same explanation as the summary above.)

U.S. creates military force with artificial intelligence

2026-04-22
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and autonomous systems in a military force intended for combat and counter-narcoterrorism operations. Although no incident of harm is reported, the nature of the AI system's intended use in lethal autonomous weapons and military operations inherently carries a credible risk of causing injury, death, or other harms. The event is about the development and planned deployment of such AI systems, fitting the definition of an AI Hazard as it could plausibly lead to an AI Incident involving harm to persons or communities. There is no indication that harm has already occurred, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the focus is on the AI-enabled military force and its potential risks.

U.S. Army to create an autonomous AI-based war force in Latin America

2026-04-21
OEM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and deployment of autonomous and semi-autonomous systems, which are AI systems, for military purposes. While no actual harm or incident is reported, the nature of these systems and their intended use in conflict and security operations imply a credible risk of future harm, including injury, human rights violations, or disruption of security. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

The U.S. Army will create an autonomous AI-based war force in Latin America

2026-04-22
ABC Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the development and intended use of AI systems for autonomous warfare, which inherently carry significant risks of harm including injury or death, disruption, and violations of rights. Although no specific harm has yet occurred, the creation and deployment of AI-enabled autonomous weapons systems with lethal capabilities plausibly could lead to serious harms as defined under AI Hazards. Since the article describes a planned development and deployment without reporting actual harm yet, this qualifies as an AI Hazard rather than an AI Incident.

United States decides to use AI to combat organized crime in South America

2026-04-22
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the development and planned use of AI systems (autonomous and semi-autonomous platforms, AI-enabled warfare groups) for military and law enforcement purposes against organized crime. Although no actual harm or incident is reported, the deployment of AI in lethal military operations inherently carries a credible risk of injury, human rights violations, and other harms. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident involving harm to persons or communities. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the planned AI-enabled military capabilities and their implications.

U.S. Army ordered the creation of a war force to support Southern Command

2026-04-21
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the development and intended use of AI systems in military autonomous operations with potential lethal applications. While no actual harm or incident is reported yet, the creation and deployment of such AI-enabled autonomous warfare forces plausibly could lead to harms such as injury or death, disruption, or violations of rights. Therefore, this event constitutes an AI Hazard due to the credible risk of future harm stemming from the AI systems' use in military conflict and security operations.

United States pushes forward with AI-powered military force in Latin America

2026-04-21
El Siglo de Torreón
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in autonomous and semi-autonomous military systems for combat operations, which qualifies as AI system involvement. Although no direct harm is reported yet, the development and deployment of AI-enabled autonomous weapons systems pose a credible risk of causing injury, harm to communities, or disruption of peace and security. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident in the future. There is no indication of realized harm or incident at this stage, nor is the article primarily about responses or complementary information, so AI Hazard is the appropriate classification.

U.S. plans AI-based military force to confront cartels in Latin America

2026-04-22
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI systems in autonomous military platforms aimed at combatting drug cartels and managing crises. Although no harm has yet occurred, the nature of these AI-enabled military systems and their intended use in conflict and crisis scenarios plausibly could lead to injury, harm to communities, or disruption of critical infrastructure. The article does not report any realized harm or malfunction but highlights a credible potential for future harm due to the deployment of AI in military operations. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The U.S. Army will create an autonomous AI-based war force in Latin America

2026-04-22
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and intended use of AI and autonomous systems for military operations, including lethal force, which could plausibly lead to injury or harm to people and other significant harms. Although the force is not yet operational and no harm has been reported, the nature of the AI system's intended use in autonomous warfare clearly presents a credible risk of future harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.

The U.S. Army will create an autonomous AI-based war force in Latin America

2026-04-21
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in autonomous military systems intended for combat and strategic operations, which inherently carry risks of harm including injury or death, disruption of regional stability, and potential violations of human rights. Although no specific harm has yet occurred, the deployment of AI-enabled autonomous weapons and warfare systems poses a credible and plausible risk of causing such harms in the future. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the development and intended use of AI in autonomous warfare.

Southern Command activates high-tech military deployment (VIDEO)

2026-04-21
Radio y Televisión Martí | RadioTelevisionMarti.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of autonomous and semi-autonomous systems and the integration of AI technologies in military operations, which qualifies as AI system involvement. There is no report of actual harm or incidents caused by these systems yet, so it is not an AI Incident. The focus is on the establishment and future deployment of these AI-enabled systems, which could plausibly lead to harm given the nature of autonomous military technology. Hence, the event is best classified as an AI Hazard, reflecting the credible risk of future harm from the use of AI in military autonomous systems.

U.S. to create autonomous AI-based war force for Latin America

2026-04-21
El Comercio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and autonomous systems in military operations with the goal of increasing lethality and operational effectiveness. The development and deployment of AI-enabled autonomous weapons systems inherently carry a credible risk of causing harm, including injury or death, disruption, and violations of rights. Although no specific harm has yet occurred, the nature of these systems and their intended use plausibly could lead to significant harm. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm from the development and deployment of AI-powered autonomous warfare systems.