Colombian Army Deploys AI-Enabled Drone Battalion Amid Armed Conflict

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Colombian Army has established its first AI-enabled drone battalion to counter armed groups using similar technology. Drones operated by these illegal groups have carried out more than 350 attacks, killing 15 and injuring 170 military personnel and civilians. The army's new drones are capable of facial recognition and vehicle tracking. The case highlights AI's direct role in causing harm during conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The drones described are equipped with AI capabilities such as facial recognition and vehicle tracking and are used in military operations against armed groups. The article reports that drones operated by illegal groups have carried out over 350 attacks causing deaths and injuries, indicating realized harm. The military's use of AI drones to counter these threats is part of the ongoing conflict. This constitutes an AI Incident because the use and hostile misuse of AI systems have directly led to injury and death, fulfilling the criteria for harm to persons and communities.[AI generated]
AI principles
Safety; Respect of human rights; Accountability

Industries
Government, security, and defence

Affected stakeholders
Workers; General public

Harm types
Physical (death); Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Colombia unveils its first drone battalion to combat armed groups [original title: "Colombia presenta su primer batallón de drones para combatir grupos armados"]

2025-10-11
Yahoo!
Colombian Army unveils its first drone battalion [original title: "Ejército de Colombia presenta su primer batallón de drones"]

2025-10-11
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology integrated into drones used by the Colombian army for offensive and defensive military operations. The drones' capabilities include facial recognition and tracking, which are AI functions. The context involves ongoing armed conflict where drones have been used by guerrillas to attack military personnel, causing deaths and injuries, and now the army is deploying AI-enabled drones in response. This direct involvement of AI systems in causing or countering harm in an armed conflict meets the criteria for an AI Incident due to injury and harm to persons and communities. The event is not merely a potential risk but involves realized harm linked to AI system use.
Colombian army unveils its first drone battalion to combat armed groups [original title: "Ejército colombiano presenta su primer batallón de drones para combatir a grupos armados"]

2025-10-11
Prensa Libre
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (drones equipped with AI for autonomous or semi-autonomous attack capabilities) being deployed in a conflict setting. The use of AI-enabled drones for attack purposes directly relates to potential harm to persons (civilians and combatants), which fits the definition of an AI Incident due to the direct or indirect harm caused by AI systems in military conflict. The article describes the deployment and use of these AI systems, not just their development or potential future use, indicating realized harm or at least direct involvement in harm scenarios.
Colombia unveils its first drone battalion to combat armed groups [original title: "Colombia presenta su primer batallón de drones para combatir grupos armados"]

2025-10-11
France 24
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems capable of autonomous or semi-autonomous identification and tracking, used in active military operations against armed groups. The article reports actual harm caused by drone attacks, including deaths and injuries to military personnel and civilians, fulfilling the criteria for harm to persons and communities. The AI system's use in these attacks directly leads to these harms, making this an AI Incident rather than a hazard or complementary information. The presence of AI is explicit, and the harms are realized, not just potential.
Innovation: inside the first drone battalion the National Army will use [original title: "Innovación: así es el primer batallón de drones que usará el Ejército Nacional"]

2025-10-11
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems capable of autonomous or semi-autonomous functions such as facial recognition and vehicle tracking. The article reports over 350 attacks by illegal groups using drones that have caused 15 deaths and 170 injuries, indicating realized harm. The Colombian Army's use of AI drones for offensive and defensive operations is a direct response to these harms. The involvement of AI in these drones and the resulting physical harm to people qualifies this as an AI Incident under the OECD framework.
Colombia unveils first drone battalion to combat organized crime [original title: "Colombia presenta primer batallón de drones para combatir el crimen organizado"]

2025-10-11
Aporrea
Why's our monitor labelling this an incident or hazard?
The article concerns military drones, which can reasonably be inferred to incorporate AI for navigation and targeting. The event involves the use and development of these AI-enabled drones in a conflict setting where harm (deaths and injuries) has already occurred through drone attacks by armed groups. The new battalion's formation is a response to these threats and includes training operators for AI drone systems. Because the article does not report a new AI system malfunction or misuse causing harm, but rather the establishment of a military unit to counter existing threats, it does not meet the criteria for an AI Incident. However, the presence and use of AI drones in conflict qualifies it as an AI Hazard, as these systems could plausibly lead to further incidents of harm in the future.
La Jornada: Colombia unveils AI drones against armed groups [original title: "La Jornada: Colombia presenta drones con IA contra grupos armados"]

2025-10-12
La Jornada
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as drones equipped with AI capabilities like facial recognition and tracking. These AI systems are used in military operations against armed groups that have caused deaths and injuries through drone attacks. The AI system's use is directly linked to harm to persons (military personnel and civilians) in an armed conflict context, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but involves realized harm through the conflict-related use of AI drones.