
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
US-based Palantir's AI technologies, including data analysis and targeting platforms, have been used by the Israeli military in Gaza, Iran, and Lebanon, directly contributing to lethal operations and civilian harm. These AI systems facilitated surveillance, target identification, and decision-making in military actions, raising concerns over human rights violations.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems developed by Palantir being used by military forces in active conflict zones, with direct involvement in target identification and attack operations that have caused harm and casualties. This meets the definition of an AI Incident because the use of the AI systems has directly led to injury and harm to groups of people, as well as to violations of human rights. The article also describes the systems' role in surveillance and lethal operations, confirming their pivotal part in causing harm. Hence, this is classified as an AI Incident.[AI generated]