Pentagon and Anthropic Clash Over Military Use of AI Models

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Pentagon, led by ex-Uber executive Emil Michael, is in a standoff with AI company Anthropic over the potential military use of Anthropic's AI models, particularly regarding mass surveillance and autonomous weapons. The Pentagon has labeled Anthropic a supply chain risk, escalating concerns about AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (Anthropic's AI models) and their potential use in sensitive military contexts, which could plausibly lead to harm if misused (e.g., autonomous weapons, mass surveillance). However, no realized harm or incident is reported. The main focus is on the dispute, negotiation, and risk designation, which points to a potential risk rather than an actual incident. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet occurred or been reported.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death)
Human or fundamental rights
Public interest

Severity
AI hazard

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

Pentagon Turns to Ex-Uber Executive in Anthropic Feud Over AI

2026-03-07
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems and their potential military applications, it does not describe any direct or indirect harm resulting from the development, use, or malfunction of AI systems. The concerns about mass surveillance and autonomous weapons represent plausible future risks, but no specific incident or harm has occurred yet. Therefore, the event is best classified as Complementary Information, as it provides context on governance, negotiation, and strategic responses related to AI in defense, without reporting a new AI Incident or AI Hazard.
Pentagon Turns to Ex-Uber Executive in Anthropic Feud Over AI

2026-03-07
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI models) and their potential use in sensitive military contexts, which could plausibly lead to harm if misused (e.g., autonomous weapons, mass surveillance). However, no realized harm or incident is reported. The main focus is on the dispute, negotiation, and risk designation, which points to a potential risk rather than an actual incident. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet occurred or been reported.
Pentagon turns to ex-Uber executive in Anthropic feud over AI

2026-03-07
San Jose Mercury News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI models) and their potential use in sensitive military applications, which could plausibly lead to harm such as violations of rights or misuse in autonomous weapons. However, no actual harm or incident has occurred yet; the situation is a standoff and negotiation phase. Therefore, this constitutes an AI Hazard, as the development and potential use of these AI systems could plausibly lead to significant harm, but no direct or indirect harm has been reported at this time.
Pentagon turns to ex-Uber executive in Anthropic feud over AI

2026-03-07
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI models) and their potential military use, which is a significant governance and ethical issue. However, it does not describe any realized harm or incident caused by these AI systems, nor does it present a clear and immediate plausible risk of harm. The focus is on negotiations, strategic disputes, and the role of a key individual in the Pentagon, which fits the definition of Complementary Information as it informs about governance and societal responses to AI-related challenges without reporting a new incident or hazard.