
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Israeli defense company ELTA Systems has introduced a semi-autonomous armed robot capable of patrolling borders, tracking targets, and firing weapons. While proponents claim it enhances troop safety, critics, including human rights groups, warn of risks such as autonomous lethal decisions and an inability to distinguish civilians from combatants. No harm has been reported yet.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system in the form of a semi-autonomous armed robot with autonomous navigation and targeting capabilities. While the robot is currently remotely controlled, some functions operate autonomously, which introduces the risk that a malfunction or misuse could lead to harm. The article highlights concerns from human rights advocates about the inability of such machines to distinguish civilians from combatants, implying a credible risk of harm and rights violations. Since no actual harm has been reported but the potential for serious harm is clear and plausible, this qualifies as an AI Hazard rather than an AI Incident.[AI generated]