Israel Unveils AI-Enabled Autonomous Battle Tank 'Barak'



Israel's Ministry of Defense has unveiled the Merkava Mark V 'Barak' tank, equipped with advanced AI systems for autonomous target identification, engagement, and battlefield intelligence. While no harm has yet occurred, the deployment of such AI-enabled weaponry poses credible future risks of injury or disruption in military conflicts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The tank integrates AI systems for autonomous target identification and attack, placing AI directly in military operations. Although no harm is reported to have occurred, deploying AI-enabled autonomous weaponry presents a credible risk of harm in future engagements, including injury to persons and disruption of critical infrastructure. The event therefore qualifies as an AI Hazard due to the plausible future harm from the use of AI in autonomous weapons systems.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Transparency & explainability; Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Super-tank with artificial intelligence: Israel unveils the "Błyskawica" ("Lightning")

2023-09-20
tech.wp.pl

Israel unveils the fifth-generation Barak tank

2023-09-20
TVN24
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into the tank for autonomous target identification and defence, which qualifies as an AI system. However, there is no indication that the AI has caused or contributed to any injury, rights violation, disruption, or other harm. The event concerns the development and presentation of an AI-enabled military system whose future use in conflict could plausibly lead to harm, but no harm has yet occurred. It is therefore best classified as an AI Hazard, reflecting the plausible future risk of deploying AI-enabled autonomous military technology.

"A true revolution on the battlefield." A super-tank with artificial intelligence unveiled to the world

2023-09-20
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into a military tank that autonomously detect and attack targets and provide real-time battlefield intelligence, a clear example of AI use in a weaponised context with direct potential for harm in armed conflict. Although no specific harm or incident is reported yet, deploying such AI-enabled autonomous weapon systems inherently carries a credible risk of injury, death, or other harms in warfare. The event therefore qualifies as an AI Hazard, since it could plausibly lead to AI Incidents involving injury or harm to persons in combat. It is not yet an AI Incident, because no actual harm or malfunction is reported, nor is it merely complementary information or unrelated news.

Israel: new super-tank assisted by artificial intelligence unveiled

2023-09-19
wnp.pl
Why's our monitor labelling this an incident or hazard?
The tank integrates AI systems that directly influence combat operations, including autonomous target identification and engagement, capabilities that could injure or harm persons in conflict scenarios. Developing and deploying such AI-enabled weaponry poses a credible risk of harm, including potential misuse or malfunction in warfare. Although the article reports no specific harm having occurred, the nature of the AI system and its intended use in lethal military operations could plausibly lead to harm, so the event qualifies as an AI Hazard.

A super-tank assisted by artificial intelligence? It was designed in Israel

2023-09-19
forsal.pl
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems integrated into a military tank for autonomous target detection and engagement, which qualifies as an AI system. Although no specific harm or incident is reported, autonomous weaponry capable of independently identifying and attacking targets presents a credible risk of future harm. The event therefore fits the definition of an AI Hazard: use of the system could plausibly lead to injury, rights violations, or other significant harms. With no realised harm, it is not an AI Incident, and it is not merely complementary information or unrelated news, since the focus is on the AI system's capabilities and potential risks in a military context.