Israel Unveils Armed Semi-Autonomous Border Patrol Robot


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Israeli defense company ELTA Systems has introduced a semi-autonomous armed robot capable of patrolling borders, tracking targets, and firing weapons. While proponents claim it enhances troop safety, critics, including human rights groups, warn of risks such as autonomous lethal decisions and an inability to distinguish civilians from combatants. No harm has been reported yet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system in the form of a semi-autonomous armed robot with autonomous navigation and targeting capabilities. While the robot is currently remotely controlled, some functions operate autonomously, which introduces the risk of malfunction or misuse leading to harm. The article highlights concerns from human rights advocates about the inability of such machines to distinguish civilians from combatants, implying a credible risk of harm and rights violations. Since no actual incident of harm is reported but the potential for serious harm is clear and plausible, this qualifies as an AI Hazard rather than an AI Incident.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Respect of human rights; Transparency & explainability; Democracy & human autonomy; Fairness

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


New Israeli armed robot for patrolling borders

2021-09-13
Aljazeera

Israeli company unveils armed robot for patrolling along borders

2021-09-13
Krstarica
Why's our monitor labelling this an incident or hazard?
The robot described is an AI system because it performs autonomous functions such as navigation and surveillance, and can potentially engage targets. The event involves the development and intended use of this AI system in military operations. While no direct harm has been reported yet, the concerns raised about autonomous targeting and inability to distinguish civilians from combatants indicate a plausible risk of serious harm, including violations of human rights. This fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident involving injury or rights violations. Since no actual harm has occurred yet, it is not an AI Incident. The event is not merely complementary information or unrelated, as it centers on the risks posed by the AI system's capabilities.

Israeli company unveils armed robot for patrolling along borders

2021-09-13
vijesti.me
Why's our monitor labelling this an incident or hazard?
The robot described is an AI system with semi-autonomous capabilities, including autonomous movement and surveillance, and potentially autonomous targeting. The event involves the development and use of this AI system in military contexts where harm to people is a direct risk. However, the article does not report any actual harm or incident caused by the robot so far, only warnings and concerns about possible future harm. It therefore fits the definition of an AI Hazard rather than an AI Incident. The concerns about violations of human rights and potential lethal autonomous decisions further support this classification as a hazard.

New Israeli armed robot for patrolling borders

2021-09-13
Bljesak.info
Why's our monitor labelling this an incident or hazard?
The event involves an AI system embedded in a military robot with autonomous capabilities, including navigation and target engagement. Although the company claims fully autonomous lethal functions are not offered to clients, the presence of autonomous movement and surveillance functions, combined with the robot's lethal armament, creates a plausible risk of harm to people and potential violations of human rights. The concerns raised by Human Rights Watch about the inability to distinguish civilians from combatants further support the classification as a hazard. Since no actual harm or incident is reported yet, but the system's use could plausibly lead to harm, this qualifies as an AI Hazard.