Anthropic and OpenAI Face Military AI Ethics Crisis in the US

The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

OpenAI and Anthropic are embroiled in controversy over US military use of their AI systems. OpenAI faced public backlash and a leadership resignation after partnering with the Pentagon, while Anthropic, having refused to relax its safeguards, was blacklisted and is suing the US government over military AI restrictions and potential misuse risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (OpenAI's models, Anthropic's Claude, and xAI's Grok) being used by the U.S. military for operational decision-making, including planning strikes and capture missions. The AI systems' use in military operations inherently carries risks of harm to people and communities (e.g., lethal strikes, surveillance). Although the article does not report a specific incident of harm caused by these AI systems, it emphasizes the lack of developer control and ethical concerns, indicating a credible risk of harm. This fits the definition of an AI Hazard, as the AI systems' deployment in military contexts could plausibly lead to injury, violations of rights, or other significant harms. It is not an AI Incident because no direct or indirect harm from AI use is reported as having occurred yet. It is not Complementary Information because the article's main focus is on the risks and ethical concerns of AI use in military operations, not on responses or governance measures. It is not Unrelated because AI systems are central to the described events and their potential harms.[AI generated]
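
The rationale above is, in effect, a four-way decision rule: is an AI system central to the event, has harm already occurred, is future harm plausible, or does the article only provide context? A minimal sketch of that rule follows. Everything in it (the `Label` names, the `classify` function, and its boolean flags) is a hypothetical illustration of the reasoning as stated on this page, not the monitor's actual classification pipeline.

```python
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI Incident"                   # harm from AI use has already occurred
    AI_HAZARD = "AI Hazard"                       # credible, plausible future harm
    COMPLEMENTARY = "Complementary Information"   # governance/context, no harm event
    UNRELATED = "Unrelated"                       # AI is not central to the event

def classify(ai_central: bool, harm_occurred: bool,
             plausible_future_harm: bool) -> Label:
    """Hypothetical decision rule paraphrasing the rationale above;
    not the monitor's actual implementation."""
    if not ai_central:
        return Label.UNRELATED
    if harm_occurred:                  # direct or indirect harm already realized
        return Label.AI_INCIDENT
    if plausible_future_harm:          # e.g. AI used to plan strikes or capture missions
        return Label.AI_HAZARD
    return Label.COMPLEMENTARY         # only context, responses, or governance measures

# This case: AI is central, no realized harm is reported, future harm is plausible.
print(classify(ai_central=True, harm_occurred=False,
               plausible_future_harm=True).value)    # -> "AI Hazard"
```

Applied to this case, the rule lands on AI Hazard, matching the severity label below: the AI systems are central, no harm is reported as having occurred, but military deployment makes future harm plausible.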
AI principles
Respect of human rights
Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Human or fundamental rights

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

With Anthropic in open conflict with the Trump administration, OpenAI chief Sam Altman admits he will not be able to control how the Pentagon uses his company's AI

2026-03-06
BFMTV

OpenAI: head of robotics resigns after controversial Pentagon contract

2026-03-09
Boursier.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (OpenAI's models) being integrated into military systems, which are likely to involve autonomous or semi-autonomous decision-making capabilities. The ethical concerns raised by the robotics director about surveillance and lethal autonomous weapons indicate plausible future harms related to human rights violations and physical harm. No actual incident of harm is described, but the potential for harm is credible and significant. The resignation and public statements highlight the controversy and risk, but no realized harm or incident is reported, so this is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI-Pentagon deal: the head of the AI giant's robotics subsidiary announces her resignation

2026-03-09
France Soir
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems developed and deployed by OpenAI for use by the Department of Defense, including potential applications in autonomous weapons and mass surveillance. The resignation of a senior executive over concerns about inadequate safeguards indicates serious governance and ethical issues. Although no actual harm has been reported yet, the potential for misuse or malfunction of these AI systems in military and surveillance operations could plausibly lead to violations of human rights or other significant harms. Thus, the event fits the definition of an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.

Anthropic takes the US government to court (update)

2026-03-10
ICTjournal - The Swiss information technology magazine for business
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's large language models) and their use in military contexts. The conflict centers on the potential removal of safeguards that prevent harmful uses such as mass surveillance and autonomous weapons deployment. Although no direct harm has yet occurred, the dispute highlights credible risks of future harm if the AI is used unrestrictedly by the military. The legal and political actions, including blacklisting and lawsuits, are responses to this risk. Since the harms are potential and plausible but not realized, and the event focuses on the risk and governance conflict rather than an actual incident, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The Pentagon reportedly first gained access to OpenAI products via Microsoft Azure in 2023, before any official agreement was concluded and despite OpenAI's usage policy

2026-03-06
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (OpenAI models accessed via Azure OpenAI service) and their use by the Pentagon. However, it does not report any direct or indirect harm caused by this use, nor does it describe a plausible future harm event. Instead, it discusses the ethical and policy implications, internal company debates, and the nature of access to AI technology by the military. This fits the definition of Complementary Information, as it provides important context and updates on AI governance and societal responses without describing a specific AI Incident or AI Hazard.

Will the US government nationalize AI companies? The standoff between Anthropic and the Pentagon worries CEOs; Sam Altman says the possibility cannot be entirely ruled out

2026-03-09
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems and their use in military contexts, but it does not describe any direct or indirect harm caused by AI systems. The discussion is about potential government actions, strategic partnerships, and hypothetical future nationalization, which are governance and policy issues rather than incidents or hazards. There is no indication that AI systems have malfunctioned or caused injury, rights violations, or other harms. The article mainly provides complementary information about the evolving AI ecosystem, government relations, and industry responses, without reporting a specific AI Incident or AI Hazard.

OpenAI's robotics division loses its top executive, Caitlin Kalinowski, over a disagreement on military deployment terms

2026-03-07
Benzinga France
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems developed by OpenAI's robotics division and their potential military applications, including autonomous weapons and surveillance. Although no direct harm or incident has occurred, the ethical concerns and the debate over deployment terms indicate a credible risk of future harm. The resignation of a key leader over these concerns underscores the seriousness of the potential hazards. The event focuses on the plausible future harms from AI use in military contexts rather than reporting an actual harm event or incident. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Artificial intelligence: OpenAI sells its soul to the Pentagon, ChatGPT in free fall!

2026-03-08
Maghreb Émergent
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's models) being used by the Pentagon, which is a significant development with potential for harm. However, the harms described are reputational and societal reactions (user backlash, ethical concerns), not direct or indirect harms caused by the AI system's malfunction or use. There is no report of injury, rights violations, or operational disruption caused by the AI. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. Instead, it fits the category of Complementary Information as it details societal and governance responses, public reactions, and ethical debates surrounding AI use in military contexts, enhancing understanding of the broader AI ecosystem and its implications.

Anthropic sues the Trump administration to overturn the US "…" label

2026-03-10
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and its use in military contexts, which is a sensitive area with potential for significant harm. However, the event described is a legal challenge against government restrictions and designations, not an incident where the AI system has caused harm or malfunctioned. The dispute concerns potential future uses and restrictions, making it a matter of governance and legal response rather than a realized AI Incident or an immediate AI Hazard. Therefore, this event is best classified as Complementary Information, as it provides important context and updates on societal and governance responses related to AI use and restrictions, without describing a new AI Incident or AI Hazard.