Military Use of AI Sparks International Concerns and Ethical Disputes


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Anthropic and OpenAI have faced disputes with the US military over the use of their AI models, including Claude, in autonomous weapons and surveillance. China has warned of ethical risks as AI is used in military operations, raising concerns about loss of human control and potential harm. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (Anthropic's Claude and OpenAI's ChatGPT) used by the U.S. military in weapons and operations, with disputes over their use in autonomous weapons and surveillance. It also references actual military actions by Israel using advanced AI applications causing physical and human harm. These facts demonstrate direct involvement of AI systems in causing harm to people and communities, fulfilling the criteria for an AI Incident. The political and ethical disputes and contract changes are contextual but do not negate the realized harm from AI use in military conflict. Hence, the classification as AI Incident is appropriate. [AI generated]
AI principles
Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Artificial Intelligence, Politics, and War

2026-03-18
Aljazeera

Sharp Disputes Push the Pentagon to Seek Alternatives: Is Cooperation with Anthropic Coming to an End?

2026-03-18
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) and their use in sensitive military applications, with a clear link to AI system development and use. However, the article does not report any actual harm or incident caused by AI malfunction or misuse. Instead, it focuses on strategic decisions, contractual disputes, and governance measures to mitigate potential risks. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on AI governance and ecosystem developments without describing a specific AI Incident or AI Hazard.

Anthropic Seeks a Chemical Weapons Expert to Prevent Misuse of Its Tools

2026-03-18
Al-Araby Al-Jadeed
Why's our monitor labelling this an incident or hazard?
Anthropic's recruitment of an expert to prevent catastrophic misuse of its AI tools indicates recognition of a credible risk that these AI systems could be exploited to cause serious harm, such as chemical or radiological weapon production. The article does not report any realized harm but focuses on the potential for misuse and the company's efforts to mitigate it. This fits the definition of an AI Hazard, where the AI system's development or use could plausibly lead to harm, especially in military or weaponization contexts. The event is not an AI Incident because no direct or indirect harm has yet occurred, nor is it merely complementary information or unrelated news.

Beijing Urges Washington to Put "Brakes" on Artificial Intelligence in Warfare

2026-03-18
24.ae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Anthropic's Claude model) being used operationally by the US military in active conflict scenarios, including target selection and strike planning, which directly influences life-and-death decisions. The Chinese defense ministry's warnings about loss of human control and ethical concerns further underscore the risks and harms associated with this AI use. The AI's role in military decision-making and lethal operations constitutes direct involvement leading to harm (harm to communities, potential human rights violations). Hence, this is an AI Incident rather than a hazard or complementary information.

AI Companies Enlist Weapons Experts to Prevent Catastrophic Use

2026-03-18
euronews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Anthropic's Claude and OpenAI's models) and their development and use, specifically focusing on preventing misuse that could lead to catastrophic harm. However, the article does not report any realized harm or incident caused by these AI systems. Instead, it details efforts to prevent potential misuse and manage risks, which aligns with the definition of an AI Hazard. The hiring of experts to design controls and monitor threats indicates recognition of plausible future harm but no current incident. Therefore, this is an AI Hazard, not an AI Incident or Complementary Information, as it is not a response to a past incident but a proactive risk management step.