Public Boycott of OpenAI After Pentagon AI Deal Raises Military AI Ethics Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A mass online boycott campaign, "QuitGPT," has mobilized over 1.5 million people to protest OpenAI's agreement with the U.S. Pentagon to deploy AI models in classified military networks. The campaign highlights public fears of potential misuse, such as autonomous weapons and mass surveillance, and follows Anthropic's refusal to grant similar access.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (OpenAI's models) in a military context, which is explicitly stated. The protest and boycott are reactions to the potential risks associated with this use, such as mass surveillance and autonomous weapons, which are credible and plausible future harms. No actual harm has been reported yet, so it does not qualify as an AI Incident. The article focuses on the potential for harm and societal response rather than reporting a realized harm or incident. Hence, the classification as AI Hazard is appropriate.[AI generated]
AI principles
Respect of human rights
Safety

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Human or fundamental rights

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard


Call for a boycott of ChatGPT after the Pentagon deal - KOHA.net

2026-03-02
KOHA.net

Trump orders the government to halt the use of Anthropic

2026-03-01
Telegrafi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system developed by Anthropic that has been used by U.S. government and military agencies, thus involving an AI system. However, the article does not describe any realized harm or incident caused by the AI system's development, use, or malfunction. Instead, it reports a political and administrative decision to cease use of the AI system due to concerns over access and supply chain risks. There is no indication of direct or indirect harm caused by the AI system, nor a credible imminent risk of harm described. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context about governance and policy decisions related to AI systems in government use, enhancing understanding of the AI ecosystem and responses to AI deployment.

Campaign against ChatGPT gains momentum after the OpenAI-Pentagon military deal

2026-03-02
Telegrafi
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (OpenAI's ChatGPT and AI models) and their deployment in military classified networks, which is a use case with credible potential for harm or ethical violations. The public backlash and statements from Anthropic's CEO emphasize concerns about AI undermining democratic values and unsafe uses. However, no actual harm or incident has occurred yet; the article focuses on the potential risks and societal reactions. Hence, it is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

The reason for the backlash - 'Cancel ChatGPT': Artificial intelligence boycott grows after the OpenAI-Pentagon military deal

2026-03-02
Syri | Lajmi i fundit
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (OpenAI's ChatGPT and Anthropic's Claude) and their deployment in military networks, which is a use case with credible potential for harm, including autonomous weapons and mass surveillance. The public reaction and boycott reflect concerns about these plausible future harms. Since no actual harm or incident is reported, and the focus is on potential risks and ethical concerns, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems and their military use are central to the story.

Top News - War over artificial intelligence: Anthropic refused the Pentagon, Trump punishes the company - Top Channel

2026-03-02
top-channel.tv
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ClaudeAI) and its intended use by a critical infrastructure operator (the Pentagon). The concerns about unrestricted use for autonomous weapons or mass surveillance imply plausible future harm, making this a potential AI Hazard. However, since no actual harm or incident has occurred or been reported, and the main focus is on the dispute and policy decisions, it does not qualify as an AI Incident. It is more than general AI news because it highlights a credible risk and governance conflict, so it is not unrelated or merely complementary information. Therefore, the classification is AI Hazard.

The "QuitGPT" campaign gains massive support after the Pentagon deal - Euronews Albania

2026-03-02
Euronews Albania
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (OpenAI's AI models) and their use in military classified networks, which could plausibly lead to harms such as violations of human rights, security risks, or other significant harms. Since no actual harm or incident has occurred yet, but credible concerns and public mobilization indicate plausible future harm, this qualifies as an AI Hazard. The article also includes societal and governance responses, but the primary focus is on the potential risks and the campaign against the AI use in defense, not on a realized incident or a complementary update to a past incident.