Palantir's AI Targeting System Expanded Across US Military Branches

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Palantir Technologies secured a $100 million contract to expand its Maven Smart System, an AI-powered platform used for battlefield awareness and target identification, across all branches of the US military. The system has been actively used to identify targets for airstrikes, directly influencing lethal military operations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system, Maven Smart System, is explicitly described as being used by the military to identify targets and potential airstrike zones, which directly leads to harm (injury or death) in conflict zones. The contract extension implies continued and expanded use of this AI system in military operations. Therefore, the event involves the use of an AI system whose outputs directly contribute to harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential or future risks but confirms active deployment and use in harmful contexts.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Transparency & explainability; Robustness & digital security; Democracy & human autonomy

Industries
Government, security, and defence; Digital security; IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection; Organisation/recommenders


Articles about this incident or hazard

Palantir wins $100M contract with US military for AI tool

2024-09-20
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The AI system, Maven Smart System, is explicitly described as being used by the military to identify targets and potential airstrike zones, which directly leads to harm (injury or death) in conflict zones. The contract extension implies continued and expanded use of this AI system in military operations. Therefore, the event involves the use of an AI system whose outputs directly contribute to harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential or future risks but confirms active deployment and use in harmful contexts.
Palantir Wins $100 Million US Contract for AI Targeting Tech

2024-09-19
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Maven Smart System) used in military targeting, which is operational and has been used in conflict zones. The article mentions potential risks and ethical concerns but does not describe any realized harm or incident resulting from the AI system's use. Therefore, it does not qualify as an AI Incident. The expansion of the system's deployment and the associated concerns about future risks constitute a plausible risk of harm, making this an AI Hazard. It is not merely complementary information because the focus is on the contract enabling broader deployment of a potentially harmful AI system, implying credible future risks.
Palantir Wins $100 Million US Contract for AI Targeting Tech

2024-09-19
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the US military to identify targets for air strikes, which inherently involves harm to persons or groups. The AI system's outputs influence lethal decisions, making it directly linked to injury or harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (injury or death) in a military context.
Palantir Technologies wins $100M US military contract expansion

2024-09-20
Proactiveinvestors NA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Palantir's Maven Smart System) designed for military digital warfare, integrating AI/ML for battlefield awareness and decision-making. Although no harm has yet occurred or been reported, the use of AI in military operations inherently carries risks that could plausibly lead to injury, disruption, or other harms. The event concerns the expansion and deployment of this AI system, which could plausibly lead to an AI Incident in the future. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.
Palantir Seals Historic $99.8 Billion Deal: Revolutionizing Military AI with Unprecedented Investment - News Directory 3

2024-09-20
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Maven Smart System) used in military operations, which is a high-stakes domain where AI malfunction or misuse could plausibly lead to harm (injury, disruption, or rights violations). The event concerns the expansion and deployment of this AI system, indicating potential future risks. However, no direct or indirect harm has been reported or described as having occurred. The focus is on contract expansion and deployment plans, not on any incident or malfunction. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet materialized.
Palantir secures $100 million AI contract with U.S. Military

2024-09-20
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The Maven Smart System is an AI system explicitly described as being used for battlefield targeting and situational awareness, which directly influences military decisions that can cause harm to persons and communities. The contract expands the system's deployment, indicating ongoing and increased use of AI in lethal military operations. This meets the criteria for an AI Incident because the AI system's use has directly led to, or is directly involved in, harm to persons in conflict zones. The article does not merely describe potential future harm or a hazard, but an active operational AI system in military use with real consequences. Hence, the classification is AI Incident.