AI-Powered Airstrikes Accelerate Lethal Decision-Making in Iran Conflict

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

U.S. and Israeli forces used Anthropic's AI model Claude to automate and accelerate airstrike planning and execution during attacks on Iran, resulting in around 900 strikes and the death of Iran's Supreme Leader. Experts warn this AI-driven process reduces human oversight, raising ethical and legal concerns over civilian harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used in military targeting and strike planning, which directly led to a missile strike causing civilian deaths and a serious violation of international humanitarian law. This constitutes harm to persons and a breach of legal obligations protecting fundamental rights. Therefore, this is an AI Incident because the AI system's use directly contributed to the harm and legal violations described.[AI generated]
AI principles
Safety, Accountability

Industries
Government, security, and defence

Affected stakeholders
General public, Government

Harm types
Physical (death)

Severity
AI incident

Business function
Other

AI system task
Goal-driven organisation


Articles about this incident or hazard

A British newspaper warns against the use of artificial intelligence in the military strikes on Iran - الوطن

2026-03-03
الوطن
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in military targeting and strike planning, which directly led to a missile strike causing civilian deaths and a serious violation of international humanitarian law. This constitutes harm to persons and a breach of legal obligations protecting fundamental rights. Therefore, this is an AI Incident because the AI system's use directly contributed to the harm and legal violations described.

"Sidelining human decision-making": the war on Iran reveals the acceleration of AI-driven bombing...

2026-03-03
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting and decision-making that have directly led to lethal strikes causing significant civilian casualties, including children. This meets the definition of an AI Incident as the AI system's use has directly led to harm to people and communities (harm categories a and d). The involvement of AI in accelerating and possibly diminishing human decision-making oversight further supports the classification. The harm is realized, not just potential, and the AI system's role is pivotal in the incident described.

"The Iran war" marks the start of AI wars. The Guardian: the US used the "Claude" model in the strikes. Experts: fears of sidelining the human role in decision-making, of reducing military personnel to giving approval, and of capabilities that compress weeks into seconds - اليوم السابع

2026-03-03
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Claude') in military targeting and strike planning, which has directly led to lethal airstrikes causing civilian deaths and the killing of a high-profile target. This constitutes direct harm to people (harm category a) and potential violations of international humanitarian law (category c). The AI system's role is pivotal in accelerating and automating the decision-making process, reducing human oversight and increasing risks of harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Military experts: artificial intelligence tools in the military attacks on Iran mark the start of a new era of airstrikes

2026-03-03
Alwasat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Claude') in planning and executing military strikes that have caused deaths and legal violations. The AI system's involvement in accelerating and automating lethal decisions directly led to harm to people (deaths of civilians and military targets) and breaches of humanitarian law. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people and violations of legal obligations. The article also discusses the ethical and strategic implications of AI in warfare, reinforcing the significance of AI's role in the incident.

اخبارك نت | Artificial intelligence "sweeps" through the American operation against Iran

2026-03-03
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used in military operations that have directly caused harm, including the killing of individuals and disabling of defense systems. The AI's role in accelerating and automating target selection and strike authorization is central to the harm caused. The article also highlights ethical concerns about reduced human oversight, reinforcing the AI's pivotal role. Therefore, this qualifies as an AI Incident due to direct harm caused through AI-enabled military action.

"From days to seconds": how did artificial intelligence shorten the decision time for airstrikes in the Iran war?

2026-03-03
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military operations that have resulted in lethal airstrikes causing civilian deaths and legal violations. The AI system's role in accelerating and recommending targets is a direct contributing factor to these harms. The harms include injury and death to persons and violations of legal protections, fitting the definition of an AI Incident. The involvement is through the use of the AI system in operational decision-making, and the harms are realized, not just potential. Therefore, the event is classified as an AI Incident.

Used in the aggression against Iran: artificial intelligence runs the battles in wars | التلفزيون العربي

2026-03-03
التلفزيون العربي
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used in military operations to select and approve targets for strikes, which have resulted in deaths and destruction. The AI's role is pivotal in accelerating and automating lethal decision-making processes, directly leading to harm (death of individuals, destruction of property, and broader harm to communities). The article also highlights ethical concerns about diminished human control, reinforcing the significance of AI's role in causing harm. Hence, this is a clear AI Incident as per the definitions provided.

The era of hyper-fast wars begins in Iran

2026-03-04
الوفد
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations that have directly led to harm through the execution of strikes on targets in Iran and Gaza. The AI's role in accelerating decision-making and targeting contributes to the harm caused by these strikes, including potential violations of human rights and the ethical implications of reduced human control. Therefore, this qualifies as an AI Incident because the AI system's use has directly contributed to harm (physical and legal) in a conflict context.

America used it in the war on Iran: artificial intelligence runs the battles - وكالة ستيب نيوز

2026-03-04
وكالة ستيب نيوز
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in active military conflict, where the AI's role in targeting and decision-making has directly led to lethal harm (deaths from airstrikes). The AI system's involvement is explicit and central to the incident. The harm includes injury and death to persons, fulfilling the criteria for an AI Incident. The article also highlights the risks of reduced human oversight, reinforcing the direct link between AI use and harm.

AI wars: a new era without ethics? | د. خليل أبو قورة

2026-03-07
MEO
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (including autonomous lethal weapons) being used in military operations that have caused or could cause harm to civilians and raise serious ethical and legal concerns. The involvement of AI in targeting and striking without direct human control, and the resulting ambiguity in responsibility, directly relates to harms to people and violations of rights. The mention of actual military use (e.g., US strikes on Iran and Venezuela, Israeli use in Gaza) supports that harm is occurring or has occurred. Thus, this is an AI Incident rather than a hazard or complementary information, as the harms are realized and the AI's role is pivotal.

Strikes at a speed greater than the "speed of thought" - the "cognitive disconnection" of those making the decisions

2026-03-03
Η Ναυτεμπορική
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's Claude) in military targeting and strike execution, which has directly led to lethal harm and significant military actions. The AI system's role in compressing decision time and recommending targets is central to the harm caused. This meets the definition of an AI Incident because the AI system's use has directly led to injury and harm to persons and communities. The concerns about cognitive disconnection further emphasize the risks and harms associated with the AI system's deployment in this context.

Artificial intelligence in war: how does it work, and what ethical dilemmas arise? | LiFO

2026-03-03
LiFO
Why's our monitor labelling this an incident or hazard?
The event involves explicit use of AI systems in military operations that have directly led to significant harm, including loss of human life and violations of international humanitarian law. The AI system's role in accelerating and automating lethal targeting decisions is a direct causal factor in these harms. The article also discusses ethical dilemmas arising from this use, reinforcing the significance of AI's involvement. Therefore, this is classified as an AI Incident due to the realized harm caused by AI-enabled military actions.

War in Iran / It heralds an era of AI-driven bombing - faster even than "the speed of thought"

2026-03-04
TVXS - TV Χωρίς Σύνορα
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Anthropic's Claude model) by the US military to support targeting and strike decisions that have resulted in lethal attacks, including civilian casualties. The AI system's involvement in accelerating the 'kill chain' and influencing lethal military operations directly leads to harm to persons and breaches of humanitarian law, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in the event.

War in Iran: it heralds an era of AI-driven bombing - faster even than "the speed of thought" - Αγώνας της Κρήτης

2026-03-04
Αγώνας της Κρήτης
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Anthropic's Claude model integrated with Palantir's system) in military targeting and decision-making that has directly led to lethal strikes causing deaths, including civilians. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities (harms a and d). The involvement is not hypothetical or potential but actual and realized. The article also discusses the ethical and operational implications of AI accelerating the kill chain and reducing human control, reinforcing the direct link between AI use and harm. Hence, the classification is AI Incident.

The war in Iran heralds an era of AI-driven bombing, faster than the "speed of thought"

2026-03-03
City Online Free Press
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's Claude) integrated into military decision-making systems to identify and prioritize targets and to support attack authorization. The AI's involvement has directly led to lethal military strikes causing death and injury, including civilian casualties, which are harms to persons and violations of humanitarian law. The AI system's role in accelerating and automating lethal decisions makes it a pivotal factor in these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic and the Pentagon back in negotiations over the military's use of artificial intelligence | LiFO

2026-03-05
LiFO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's Claude) integrated into military targeting and attack systems, which has directly led to lethal military strikes causing deaths and humanitarian law violations. The AI system's role in accelerating and automating targeting decisions is central to the harm described. This meets the criteria for an AI Incident because the AI system's use has directly led to injury and harm to persons and communities (criteria a and d), and possibly violations of legal obligations (criterion c). The article also discusses the ethical and operational risks of such AI use, reinforcing the direct causation of harm. Hence, the classification as AI Incident is appropriate.

AI made the Iran strikes faster than any war in history, but 165 people in a school just paid the price for that speed | Attack of the Fanboy

2026-03-03
Attack of the Fanboy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in the development and execution of military strikes that caused significant loss of life, including civilians. The AI system's involvement in target identification and strike recommendation directly led to harm (deaths and injuries), fulfilling the criteria for an AI Incident. Ethical concerns about decision-making and potential human detachment further support the significance of AI's role. The harm is realized, not just potential, and the AI system's use is central to the event.

How is AI shaping the conflict in Iran?

2026-03-06
ITV Hub
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems like Claude have been used by the US military in operations that involve targeting and attacking Iranian sites, which inherently involves harm to people and communities. The AI's role in processing intelligence and enabling faster military decisions is a direct factor in these harms. The article also discusses ethical concerns and risks of AI in warfare, but the key point is that AI has been actively used in lethal military actions, fulfilling the criteria for an AI Incident under the OECD framework.

Opinion: Iran strike may be the first AI war, and it won't be the last

2026-03-05
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Anthropic's Claude and Palantir's AI-enabled system) being used in military strikes that have caused harm (attacks on Iran). The AI system's role in accelerating targeting and strike decisions indicates its involvement in causing harm to people, fulfilling the criteria for an AI Incident. Although the article is opinion-based and discusses ethical concerns and potential future risks, the described use of AI in lethal operations with possible reduced human oversight constitutes direct or indirect harm under the framework. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.

AI-driven warfare is here, and the Iran strikes show how fast it's advancing

2026-03-03
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Claude AI model) being used in military strike planning and intelligence analysis that led to bombing strikes killing many people, including civilians. The AI's role in accelerating decision-making and targeting directly contributed to physical harm and loss of life, fulfilling the criteria for an AI Incident. The involvement is not hypothetical or potential but realized harm linked to AI use in warfare. Therefore, this event is classified as an AI Incident.

Israel Accused of Using AI to Pick Iran Targets 'Without Any Human Oversight' -- Just Like in Gaza | Common Dreams

2026-03-05
Common Dreams
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military targeting decisions that have directly caused harm to civilians, including deaths and injuries, fulfilling the criteria for an AI Incident. The AI system's autonomous or semi-autonomous role in selecting targets without adequate human supervision has led to wrongful strikes and significant loss of life, which is a clear harm to persons and communities. The article provides concrete examples of such harm and expert commentary on the risks and consequences, confirming the direct link between AI use and realized harm. Therefore, this is classified as an AI Incident.

Reports of AI use in US-Israeli attacks on Iran spark discussion; Chinese expert urges caution on AI military applications

2026-03-05
GlobalSecurity.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Anthropic's Claude, Israeli Lavender system) being used in military strikes that have caused harm, such as bombing campaigns and attacks on Iran. The AI systems are involved in decision-making processes that directly influence lethal actions, fulfilling the criteria for an AI Incident as the AI's use has directly led to harm to people and communities. The discussion about the risks of AI sidelining human control further supports the assessment of realized harm. Therefore, this event is classified as an AI Incident.

What is Anthropic AI? How it helped U.S.-Israel to kill Iran's supreme leader Ali Khamenei?

2026-03-05
Zee News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Anthropic's Claude AI was used to process intelligence and assist in targeting decisions during a military strike that killed a high-profile individual. The AI system's involvement in accelerating decision-making and targeting directly contributed to the lethal outcome, which is a clear harm to persons. This fits the definition of an AI Incident, as the AI system's use directly led to injury or harm to people. The ethical debate mentioned further underscores the significance of the harm caused by AI use in this context.

Claude AI In Action: How Anthropic's Tool Helped US Strike 1,000 Targets In Iran In 24 Hours

2026-03-07
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Claude AI was used to synthesize intelligence, prioritize targets, and compress the kill chain timeline, enabling US and Israeli forces to strike approximately 1,000 targets in Iran within 24 hours. This is a clear example of AI use in a real-world military operation that directly led to harm (destruction of infrastructure and potential casualties). The AI system's role was pivotal in accelerating and shaping these decisions. Although human decision-makers retained final authority, the AI's recommendations materially influenced the strikes. This meets the definition of an AI Incident because the AI system's use directly led to harm (property damage, harm to communities) in a conflict setting. The political context and ethical concerns do not negate the realized harm caused by the AI-enabled targeting.

The Guardian view on AI in war: the Iran conflict shows that the paradigm shift has already begun

2026-03-06
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military targeting and offensive operations that have already resulted in civilian casualties, which constitutes direct harm to people (harm category a) and harm to communities (category d). The AI's role in facilitating mass killings and reducing human oversight and accountability confirms that the AI system's use has directly led to significant harm. Therefore, this qualifies as an AI Incident. The discussion of governance and oversight is complementary but does not overshadow the primary focus on realized harm caused by AI use in warfare.

'Let AI Do It': How Claude-Backed Maven Fired 900 US Strikes On Iran In 12 Hours

2026-03-06
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Maven powered by Claude AI) in military targeting and strike operations that have directly caused harm, including civilian casualties. This meets the definition of an AI Incident because the AI system's use has directly led to injury and harm to people and communities. The article also references the malfunction or limitations of the AI system in complex scenarios and the potential risks of removing human oversight, but the realized harm from AI-assisted strikes is the primary focus. Therefore, this is classified as an AI Incident.

1000 targets in 24 hours: How US military used AI to hit Iran

2026-03-06
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system was used to identify and prioritize targets for missile strikes that resulted in lethal outcomes and destruction. The AI's role was central to the military operation's effectiveness and speed, directly contributing to harm. The involvement of AI in lethal military operations causing death and destruction clearly meets the definition of an AI Incident. The article also mentions the ethical controversy and political responses, but the primary focus is on the realized harm caused by AI-enabled targeting.

How US Military Used Claude AI To Plan And Execute 1,000 Iran Strikes Within A Single Day

2026-03-07
TimesNow
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Maven Smart System with Claude AI) in military operations that resulted in strikes on about 1,000 targets. This clearly involves the use of AI in a context that directly leads to harm (physical destruction and potential injury or death). The AI system's involvement is central to the event, fulfilling the criteria for an AI Incident as the AI's use directly led to harm. Therefore, this event qualifies as an AI Incident.

Inside the US's AI plans for defence and future warfare

2026-03-06
Euronews English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Anthropic's Claude and OpenAI's AI chatbots) in military operations that have led to tangible outcomes, such as the capture of a political leader. This constitutes direct involvement of AI in causing harm or significant geopolitical effects, fulfilling the criteria for an AI Incident. The concerns about AI reliability, ethical issues, and autonomous weapons further underscore the potential for harm. The event is not merely a potential risk or a complementary update but a report of actual AI use in operations with consequential impacts, thus classifying it as an AI Incident.

The new kill chain: America is using AI to bomb targets in Iran

2026-03-06
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to process intelligence, recommend targets, and guide military strikes in the US campaign against Iran. These AI systems have directly contributed to the execution of hundreds of strikes, which inherently cause harm to persons and property. The human-in-the-loop approach does not negate the AI's pivotal role in enabling these harms. Hence, the event meets the criteria for an AI Incident because the AI's use has directly led to injury and harm in a military conflict.

How the US military used AI to attack Iran and hit 1,000 targets within first 24 hours

2026-03-07
India TV News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Maven Smart System with Claude AI) used in a military context to conduct lethal strikes resulting in death and destruction. The AI system's outputs directly influenced targeting decisions and operational execution, leading to realized harm (death, destruction of targets). This fits the definition of an AI Incident because the AI system's use directly led to injury and harm to persons and communities (harm categories a and d). The involvement is not hypothetical or potential but actual and materialized. Therefore, the classification is AI Incident.

The age of AI warfare

2026-03-07
GEO TV
Why's our monitor labelling this an incident or hazard?
The article explicitly details the use of AI systems in active military conflicts, including autonomous drones and algorithmic targeting tools that have already influenced operational decisions and strikes. It discusses the ethical risks of civilian casualties accepted by AI algorithms, the compression of decision timelines reducing human oversight, and the potential for escalation to nuclear conflict. These represent direct and indirect harms to human life, communities, and fundamental rights. The presence and use of AI systems are clear and central to the harms described, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI capability debated as tech guides Iran strikes

2026-03-07
Bangkok Post
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in military targeting that plausibly resulted in civilian deaths, which constitutes harm to people. The potentially mistaken AI targeting of a school, killing 150 people, is a direct harm linked to AI use. The involvement of AI in selecting targets and the resulting harm meets the criteria for an AI Incident, as the AI system's use has directly or indirectly led to injury or harm to groups of people. The article also discusses the moral and legal implications of AI use in warfare, reinforcing the significance of the harm caused. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

The US military is integrating AI into modern warfare: Everything you need to know

2026-03-06
The News International
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in real military operations, which directly relates to the use of AI leading to potential or actual harm (injury, violation of rights, or escalation of conflict). The article mentions the capture of a political leader facilitated by AI, indicating realized impact. The concerns about reliability and autonomous weapons further underscore the risks. Given the direct use of AI in sensitive military contexts and the associated risks, this is best classified as an AI Incident rather than a hazard or complementary information.

AI: The New Frontline Weapon in Autonomous Warfare

2026-03-06
My Northwest
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in active military operations, including autonomous drones capable of striking targets with minimal human control. This use has already resulted in attacks and casualties, such as strikes on embassies and consulates. The involvement of AI in making life-and-death decisions without meaningful human control constitutes a direct link to harm (injury or death to persons, harm to communities). The concerns raised by human rights organizations and international bodies about unlawful killings and violations of humanitarian law further support the classification as an AI Incident. The article also notes the Pentagon's use of AI models in attacks, confirming the AI system's role in causing harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

US military relies on AI as tool to speed Iran operations

2026-03-06
Stars and Stripes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used as tools to assist human decision-making in military operations, indicating AI system involvement. However, it does not report any confirmed harm caused by AI malfunction or misuse. The investigation into civilian casualties does not link AI to the incident. The article mainly provides information on the deployment of AI in military operations, the companies involved, and the ethical and governance debates surrounding such use. Therefore, it does not meet the criteria for an AI Incident or AI Hazard but fits the definition of Complementary Information as it updates on AI use and related governance issues in a sensitive context.

Stockwatch

2026-03-06
Stockwatch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Claude embedded in Palantir's Maven Smart System) used by the military to shorten the kill chain, including target identification and strike launching, which are tasks indicative of AI systems with autonomous or semi-autonomous lethal capabilities. The concerns raised about bypassing human oversight and the potential for fully autonomous weapons align with plausible future harms such as injury, death, and human rights violations. Since no actual harm is reported but the risk is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident.

Questions over AI capability as tech guides Iran strikes

2026-03-07
KTBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to guide military strikes and intelligence analysis, including the Maven Smart System and generative AI models integrated to enhance targeting capabilities. The reported bombing of a school with 150 casualties, potentially due to AI targeting errors, constitutes direct harm to people and communities. The involvement of AI in selecting targets and the resulting civilian harm meets the criteria for an AI Incident, as the AI system's use has directly or indirectly led to injury and harm. The discussion of human control and responsibility further supports the significance of the AI system's role in the harm caused.

Questions over AI capability as tech guides Iran strikes

2026-03-07
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to guide military strikes and intelligence analysis, including the Maven Smart System and generative AI models integrated to enhance targeting. The reported bombing of a school with 150 casualties, possibly due to AI targeting errors, constitutes direct harm to people. The involvement of AI in selecting targets and the resulting civilian harm meets the criteria for an AI Incident, as the AI system's use has directly or indirectly led to injury or harm to groups of people. The discussion of human control and responsibility further supports the assessment of an incident rather than a mere hazard or complementary information.

The AI-powered 'forever wars' start now - Coda Story

2026-03-06
Coda Story
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to select targets and coordinate military strikes, which have resulted in the deaths and injuries of civilians, including children. This constitutes direct harm to people (harm category a) and harm to communities (category d). The AI system's use in lethal military operations and the resulting casualties meet the criteria for an AI Incident, as the AI's development and use have directly led to significant harm. The article also discusses ethical concerns and accountability issues, reinforcing the classification as an incident rather than a hazard or complementary information.

Questions over AI capability as tech guides Iran strikes

2026-03-07
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in military targeting and strike operations, including semi-autonomous drones and AI tools like the Maven Smart System integrated with generative AI models. It reports a bombing incident with civilian casualties that may have resulted from AI targeting errors, indicating direct or indirect harm caused by AI system use. The involvement of AI in causing injury or harm to people (harm to health and communities) meets the criteria for an AI Incident. The article also highlights ongoing debates about human control and responsibility, but the presence of actual harm linked to AI use is sufficient for classification as an AI Incident rather than a hazard or complementary information.

U.S. School Massacre in Iran Raises Questions Over AI Warfare: Anthropic Systems, Pentagon Targeting, and the Precedent of Israel's Algorithmic Bombing in Gaza - Greatreporter

2026-03-06
Greatreporter.com
Why's our monitor labelling this an incident or hazard?
The event involves a mass casualty caused by a military strike where AI systems were integrated into the targeting intelligence workflow. The harm (death of over 160 children) is direct and severe. The AI system's involvement is plausible and under investigation, with the article emphasizing the AI-assisted nature of the targeting process. This meets the criteria for an AI Incident because the AI system's use in military targeting has directly or indirectly led to significant harm to people, fulfilling the definition of an AI Incident under harm category (a).

How AI Is Shaping the Iran War and Future Conflicts - GreekReporter.com

2026-03-07
GreekReporter.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military operations that have caused or contributed to harm, including civilian casualties and cyber disruptions affecting critical infrastructure. It also discusses the use of AI for target identification and autonomous weapons, which have legal and ethical implications. The involvement of AI in these harms is direct or indirect, fulfilling the criteria for an AI Incident. The article also references governance and legal debates but the primary focus is on the realized harms and use of AI in conflict, not just potential risks or responses, so it is not merely Complementary Information or an AI Hazard.

How AI Is Shaping the Iran War and Future Conflicts - thetimes.gr

2026-03-07
thetimes.gr - All the news you need!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Maven Smart System, large language models) used in military operations that have contributed to civilian casualties and cyberattacks disrupting Iranian communications and sensor networks. These are direct or indirect harms to people and critical infrastructure caused by AI use in conflict. The discussion of autonomous weapons and AI contracts further supports the AI system's involvement in harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

US-Israel war on Iran: How Anthropic's Claude AI helped US strike 1,000 targets in Iran within 24 hours of war

2026-03-07
The Financial Express
Why's our monitor labelling this an incident or hazard?
The AI system Claude was used operationally to analyze intelligence data and identify targets that were subsequently struck, indicating direct involvement in causing harm (physical injury and destruction) through military action. This fits the definition of an AI Incident, as the AI's use directly led to harm to persons and property. The ethical concerns further underscore the gravity of the AI's role. Although humans made final decisions, the AI's pivotal role in accelerating and shaping those decisions is clear. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

AI on the Battlefield: Claude Accelerates Military Targeting in Iran Conflict

2026-03-07
The Hans India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) integrated into military intelligence systems to analyze data and recommend targets, which were then prioritized for strikes. This AI involvement directly contributed to military actions that cause harm to persons and property, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in accelerating decision-making and targeting, which has materialized in real military operations. Although human commanders make final decisions, the AI's recommendations significantly influence outcomes. The ethical concerns and political disputes further underscore the significance of the AI's impact. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.

War, too, is decided by an algorithm: alarm over the use of AI in the attacks on Iran

2026-03-08
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in the development and use phases to identify military targets and execute attacks, which have directly led to deaths and injuries, including civilian casualties. The involvement of AI in selecting targets that resulted in harm to people and potential violations of international humanitarian law meets the criteria for an AI Incident. The harm is realized and significant, and the AI's role is pivotal in accelerating lethal decisions and possibly causing erroneous targeting. Therefore, this event is classified as an AI Incident.

Military's AI use has been critical to Iran war plan | Arkansas Democrat Gazette

2026-03-07
ArkansasOnline
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in a military conflict where harm to civilians has occurred, and AI tools are integral to the operational process. Although AI does not make final targeting decisions, its role in data processing and decision support is pivotal. The investigation into civilian casualties, even if AI's role is not confirmed, combined with the use of AI in lethal operations, meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to people. The article also discusses the risks of automation bias and the ethical debate around AI in warfare, reinforcing the significance of AI's involvement in causing or contributing to harm.

Questions over AI Capability as Tech Guides Iran Strikes

2026-03-07
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems used in military targeting and intelligence, including the Maven Smart System and generative AI models integrated to enhance detection and simulation. The reported bombing of a school with 150 casualties, potentially due to AI targeting errors, constitutes direct harm to people. The involvement of AI in selecting targets and the possibility of mistaken strikes fulfills the criteria for an AI Incident, as the AI's use has directly or indirectly led to harm. Although some details remain unconfirmed, the credible report of casualties linked to AI-guided strikes justifies classification as an AI Incident rather than a hazard or complementary information.

The use of AI in the attacks on Iran raises concern over possible errors in target selection - Proceso Digital

2026-03-07
Proceso Hn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in the development and use phases for selecting military targets, which has directly led to harm to civilians (injury and death). The errors in AI-driven target identification have caused significant harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI's role is pivotal in the accelerated and erroneous targeting process. Therefore, this event qualifies as an AI Incident.

Iran war: Apparent AI use in fighting raises daunting questions, expert says

2026-03-04
NZ Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in military targeting that may have contributed to a deadly strike on a school, resulting in over 150 deaths. Although the exact role of AI is not confirmed, the discussion centers on AI's involvement in target selection and the potential for errors or autonomous decisions leading to harm. This fits the definition of an AI Incident because the development or use of AI systems has directly or indirectly led to harm to people. The harm is significant (loss of life), and the AI system's role is pivotal in the chain of events, even if the details remain uncertain. The article also raises legal and ethical questions about responsibility and control, reinforcing the incident's gravity. Therefore, it is not merely a hazard or complementary information but an AI Incident.

Is AI being used by Israel, US in Iran attacks? Expert raises daunting questions

2026-03-05
Khaleej times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the likely use of AI to identify targets in military strikes that have caused civilian casualties, including the death of over 150 people at a school. This constitutes harm to persons and communities (harms a and d). The AI system's involvement is in its use for target selection and attack execution, which directly or indirectly led to these harms. The discussion about the opacity of AI decision-making, potential errors, and lack of human oversight further supports the classification as an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in the event described.

AI in the battlefield raises chilling questions: Who's really deciding when and where to strike?

2026-03-05
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the likely use of AI systems to identify targets and launch attacks, which have resulted in significant harm, including civilian casualties. The AI system's role in accelerating target selection and potentially making lethal decisions without clear human oversight directly links it to harm to persons and possible violations of legal and ethical norms. The discussion of the opacity of AI decision-making and the difficulty in assigning responsibility further supports the classification as an AI Incident. Although some details remain unverified, the described harm and AI involvement meet the criteria for an AI Incident rather than a hazard or complementary information.

Apparent AI use in Iran war raises daunting questions, says expert - Jamaica Observer

2026-03-04
Jamaica Observer
Why's our monitor labelling this an incident or hazard?
The article describes an event where AI systems are reportedly used to select military targets and launch attacks, which has directly led to harm, including civilian deaths. The involvement of AI in the targeting process and the resulting casualties constitute an AI Incident under the framework, as the AI system's use has directly led to harm to people. The discussion of legal and ethical concerns further supports the classification as an incident rather than a hazard or complementary information.

Apparent use of AI in Iran war raises daunting questions, expert says

2026-03-05
The Japan Times
Why's our monitor labelling this an incident or hazard?
The involvement of AI in selecting targets and launching attacks directly relates to the use of AI systems in warfare, which has led to harm to persons (death of Iran's supreme leader and presumably others). This constitutes an AI Incident because the AI system's use has directly led to harm. Although the article uses terms like 'suspected' and 'appeared likely,' the described harm is concrete and significant, and the AI's role is pivotal in the conduct of these attacks. Therefore, this event qualifies as an AI Incident.

Apparent AI use in Iran war raises daunting questions: expert

2026-03-04
SpaceWar
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems likely being used for military targeting, which is an AI system involvement in use. The reported strike on a school causing over 150 deaths is a serious harm to people and communities, potentially linked to AI-driven targeting decisions. However, the article does not confirm that AI caused the harm directly or indirectly, only that AI use is likely and raises serious questions about control and accountability. Since the harm is reported but AI's causal role is not definitively established, the event is best classified as an AI Hazard reflecting plausible future or ongoing risk of AI-related harm in warfare. The discussion of ethical and legal questions, opacity of AI decision-making, and potential for mistakes supports this classification. There is no indication that this is merely complementary information or unrelated news, as the AI system's role is central to the concerns raised.

Apparent AI use in war on Iran raises daunting questions - kuwaitTimes

2026-03-05
Kuwait Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military targeting and attack operations, which have directly resulted in harm to human life, including civilian casualties. The AI system's role in target selection and attack execution is central to the incident, fulfilling the criteria for an AI Incident due to injury and harm to people. The discussion of potential errors, lack of human oversight, and moral/legal questions further supports the classification as an AI Incident rather than a hazard or complementary information.

US military deploys AI in the Iran military operation, presenting prioritized attack targets: will it transform warfare?

2026-03-07
産経ニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system integrated into military operations to analyze intelligence and prioritize attack targets. This AI involvement directly affects decisions that can cause physical harm and other serious consequences. The use of AI in lethal military targeting meets the criteria for an AI Incident because it has directly led to harm or the potential for harm in an active conflict context. The event is not merely a potential hazard or complementary information but a realized use of AI with direct implications for harm.

OpenAI faces backlash over military use amid its agreement with the US Department of Defense: 時事ドットコム

2026-03-08
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT by OpenAI) and its use in military and surveillance contexts. While there is no report of actual harm occurring, the concerns about large-scale surveillance and autonomous weapons imply a credible risk of violations of human rights and ethical harms. The public backlash and internal company dissent highlight the contentious nature of this use. Since the harms are potential and plausible but not yet realized, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and societal reaction to the military use agreement, not on responses to a past incident.

General-purpose AI was underpinning the Iran war | ニフティニュース

2026-03-07
ニフティニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment and active use of an AI system (Anthropic's Claude) in a military operation that involves targeting and destruction, which are forms of harm to persons and property. The AI system's role in processing intelligence and enabling the kill chain means it is directly involved in causing harm. This fits the definition of an AI Incident, as the AI system's use has directly led to harm in a conflict setting.

Anthropic sues the US government / argues the exclusion of its AI is unjust | 四国新聞社

2026-03-09
四国新聞社
Why's our monitor labelling this an incident or hazard?
Although the event involves an AI system and its use restrictions, there is no indication that the AI system's development, use, or malfunction has directly or indirectly caused harm as defined by the AI Incident criteria. The lawsuit concerns alleged unconstitutional government actions and restrictions, which is a governance and legal dispute rather than an AI Incident or Hazard. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses related to AI.

Anthropic sues the US government, calling the exclusion of its AI "retaliatory and illegal"

2026-03-09
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's AI "Claude") and concerns its exclusion from government procurement, which is a legal and governance issue. However, there is no indication that this exclusion has caused any direct or indirect harm such as injury, rights violations, or disruption. The event is primarily about a legal challenge and governance dispute related to AI deployment, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI rather than describing a new harm or plausible future harm.

US government removes all restrictions on the uses of AI it procures: new rules amid the standoff with Anthropic

2026-03-09
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory development concerning AI procurement by the U.S. government, specifically expanding the permitted uses of procured AI systems. There is no indication of any actual harm caused by AI systems, nor is there a clear plausible risk of harm directly resulting from this policy change. The event is about governance and policy response to AI system deployment rather than an incident or hazard involving AI causing or potentially causing harm. Therefore, it fits the definition of Complementary Information, as it provides context and updates on AI governance without reporting an AI Incident or AI Hazard.

Anthropic sues the Department of Defense, citing "retaliation" over the safe use of AI: 朝日新聞

2026-03-09
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The article centers on a lawsuit and policy dispute involving AI safety and military use restrictions, which is a governance and societal response to AI development and deployment. There is no indication of realized harm or direct/indirect causation of harm by the AI system. The event does not describe an AI Incident or AI Hazard but rather a complementary information scenario about legal and policy developments in AI governance.

Anthropic sues the Department of Defense, calling the risk designation "illegal" (Reuters)

2026-03-10
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and its use in sensitive contexts (national security, autonomous weapons). However, the event centers on a legal challenge against government restrictions rather than an incident or hazard involving realized or plausible harm. There is no indication that the AI system has caused or is causing harm, nor that it poses a credible imminent risk of harm. The main focus is on the legal and governance dispute, which fits the definition of Complementary Information as it provides context on societal and governance responses to AI-related issues.

US firm Anthropic sues the Trump administration over the exclusion of its Claude AI, in conflict with the Department of Defense over weapons use (テレビ朝日系(ANN))

2026-03-10
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The article describes a legal dispute involving an AI system and government restrictions due to concerns about its use in autonomous weapons and surveillance, which are potential sources of harm. However, there is no indication that any harm has occurred yet, only that the government considers the AI system a supply chain risk and has excluded it from procurement. The lawsuit challenges this exclusion as illegal. Since no realized harm or incident is described, but there is a plausible risk related to the AI system's use in weapons and surveillance, this qualifies as an AI Hazard. The event is not merely general AI news or a complementary update, but a credible risk scenario involving AI use and government response.

US firm Anthropic sues the Department of Defense, seeking withdrawal of its "supply chain risk" designation: 時事ドットコム

2026-03-10
時事ドットコム
Why's our monitor labelling this an incident or hazard?
While the AI system (Claude) is explicitly mentioned and its military use is central to the dispute, the event does not describe any direct or indirect harm caused by the AI system's development, use, or malfunction. Instead, it concerns a governmental designation and legal challenge regarding supply chain risk and military use restrictions. There is no indication of injury, rights violations, infrastructure disruption, or other harms caused or plausibly caused by the AI system. Therefore, this is best classified as Complementary Information, as it provides context on governance and regulatory responses related to AI but does not report an AI Incident or AI Hazard.

Anthropic could lose billions of dollars in revenue from the risk designation, CFO forecasts

2026-03-10
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
Anthropic is an AI company, so the involvement of AI systems is explicit. The designation by the Department of Defense as a supply chain risk and the resulting financial and operational impacts stem from the use and potential misuse of AI technology, particularly in military contexts. However, the event does not describe any realized harm such as injury, rights violations, or disruption caused by the AI system itself. Instead, it focuses on economic and reputational impacts and legal disputes arising from regulatory actions. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard but rather provides complementary information about governance, regulatory, and business responses related to AI.

Anthropic sues the US government, arguing the exclusion of its AI is unjust

2026-03-09
神戸新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's generative AI) and concerns its use and regulation by the government. However, the event is about a legal challenge to government actions restricting the AI's use, not about any realized harm or direct malfunction of the AI system causing harm. There is no indication that the AI system has caused injury, rights violations, or other harms. Instead, the event focuses on a dispute over policy and procurement decisions. Therefore, this is best classified as Complementary Information, as it provides context on governance and legal responses related to AI but does not describe an AI Incident or AI Hazard.

AI startup sues the US government: procurement exclusion is "unjust"

2026-03-10
神戸新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system developed by Anthropic and concerns its use and exclusion from government procurement. However, there is no indication that any harm has occurred or that the AI system has malfunctioned or been misused to cause harm. The lawsuit is about policy and procurement decisions, not about an incident or hazard caused by the AI system itself. Therefore, this is best classified as Complementary Information, as it provides context on governance and legal responses related to AI but does not describe an AI Incident or AI Hazard.

Anthropic sues the US government | 中国新聞デジタル

2026-03-09
中国新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's generative AI Claude) and concerns its use and government procurement decisions. However, there is no direct or indirect harm caused by the AI system described in the article. The issue is a legal challenge regarding government policy and free speech rights, not an incident or hazard involving realized or plausible harm from the AI system itself. Therefore, this is best classified as Complementary Information about societal and governance responses related to AI.

AI startup sues U.S. government | Saitama Shimbun

2026-03-10
Saitama Shimbun
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (developed by the startup Anthropic) and a legal dispute over its exclusion from government contracts due to security concerns. However, there is no mention of any actual harm caused by the AI system, nor a credible risk of harm that could plausibly lead to an AI Incident. The event is primarily about a legal and policy dispute, which fits the category of Complementary Information as it relates to governance and societal responses to AI, rather than an AI Incident or Hazard.

How can investors ride the rapid growth of 'Claude'? Stocks linked to unlisted Anthropic and investment ideas, by 石塚由奈 | Money Voice

2026-03-09
Money Voice
Why's our monitor labelling this an incident or hazard?
The article centers on the development, investment, and strategic positioning of Anthropic's Claude AI system, including its safety research and business growth. It reports on a service outage due to demand and a contract termination with the US Department of Defense over ethical concerns, but no actual harm or incident caused by the AI system is described. The focus is on the company's approach to AI safety, investment, and market dynamics, which fits the definition of Complementary Information as it provides supporting context and updates without describing a new harm or plausible future harm event.

AI startup sues U.S. government; procurement exclusion 'unjust' | Jomo Shimbun Digital

2026-03-10
Jomo Shimbun
Why's our monitor labelling this an incident or hazard?
The article describes a lawsuit concerning the exclusion of an AI company's products from government procurement, citing national security concerns and alleged retaliation. There is no indication that the AI system's development, use, or malfunction has directly or indirectly caused harm as defined by the AI Incident criteria. The event focuses on governance, legal, and policy issues surrounding AI, which fits the definition of Complementary Information rather than an Incident or Hazard.

[Ibaraki Shimbun] Anthropic sues U.S. government

2026-03-09
Ibaraki Shimbun
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its use and regulation by the U.S. government. However, no direct or indirect harm has occurred or is described as plausibly imminent due to the AI system itself. The event is primarily about a legal challenge and governance dispute, which fits the definition of Complementary Information as it provides context on societal and governance responses to AI rather than reporting an AI Incident or Hazard.

AI startup sues U.S. government; procurement exclusion 'unjust' | Shikoku Shimbun

2026-03-10
Shikoku Shimbun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system developed by the company Anthropic and concerns its exclusion from government procurement, which is alleged to be unlawful retaliation. However, there is no indication that any harm (injury, rights violation, disruption, or property/community/environmental harm) has occurred or is occurring due to the AI system's development or use. The event is primarily about a legal challenge and governance dispute regarding AI procurement policies, not about realized or imminent harm caused by the AI system. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it is best classified as Complementary Information because it provides important context on societal and governance responses related to AI systems and their regulation.

Anthropic sues to block U.S. Department of Defense blacklist designation

2026-03-10
Mynavi News
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI model Claude) and its use policies, but there is no indication that the AI system's development, use, or malfunction has directly or indirectly caused harm. Instead, the event focuses on a legal and political conflict over government restrictions and constitutional rights. There is no mention of actual or potential harm resulting from the AI system's operation or malfunction, nor a credible risk of such harm described. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on governance, legal challenges, and societal responses related to AI risk management and industry dynamics.

U.S. Department of Defense 'retaliates' against Anthropic: Trump administration presses tech companies to 'submit or be destroyed' over military and surveillance uses of AI | JBpress

2026-03-09
JBpress (Japan Business Press)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system developer (Anthropic) and concerns about the use of AI for autonomous weapons and mass surveillance, which are recognized as potential sources of serious harm including human rights violations. The U.S. Department of Defense's designation signals a credible risk that these AI systems could be used in harmful ways. Since no direct harm has been reported yet and the company is actively resisting such uses, the event is best classified as an AI Hazard, reflecting plausible future harm rather than realized harm or a response to past harm.

AI developer Anthropic sues Trump administration (published March 10, 2026) | Nippon TV NEWS NNN

2026-03-09
Nippon TV NEWS NNN
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its use by government agencies, with concerns about misuse in autonomous weapons. However, no direct or indirect harm from the AI system's development, use, or malfunction is reported as having occurred. The event centers on a legal challenge against government restrictions, reflecting a governance and societal response to AI risks rather than an incident or imminent hazard. Therefore, it fits best as Complementary Information, providing context on governance and legal disputes related to AI use and security concerns.

U.S. AI company Anthropic sues Trump administration

2026-03-09
KWP News / News from Kyushu and the World
Why's our monitor labelling this an incident or hazard?
The article focuses on a lawsuit challenging government actions related to AI technology use and military restrictions. While it involves an AI system (Anthropic's Claude) and its military use, the event centers on legal and policy disputes without any reported injury, rights violations, or other harms caused by the AI system itself. There is no indication of direct or indirect harm caused by the AI system's development, use, or malfunction. The event is best classified as Complementary Information because it provides important context on governance, legal challenges, and societal responses related to AI, rather than describing an AI Incident or AI Hazard.

Google provides AI agents to the Pentagon

2026-03-12
Asr Iran (www.asriran.com)
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction and use of AI agents within the Department of Defense for administrative and strategic support tasks. There is no mention of any harm, malfunction, or misuse resulting from these AI systems. The discussion about challenges with another AI company and safety frameworks is contextual and does not describe an incident or hazard. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI adoption and governance in a critical sector without reporting any specific AI-related harm or plausible future harm.

Google provides AI agents to the Pentagon

2026-03-11
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI agents actively used by the Pentagon for a range of tasks, so AI system involvement is clear. While the agents are currently limited to unclassified work, the planned expansion to classified and highly confidential systems suggests a credible risk of future harm. No direct or indirect harm has been reported yet, so this is not an AI Incident. The potential for misuse, security breaches, or operational disruption in military contexts makes future harm plausible, fitting the definition of an AI Hazard. The article does not focus on responses, updates, or broader ecosystem context beyond the deployment and expansion plans, so it is not Complementary Information, and it clearly concerns AI systems in a sensitive domain, so it is not Unrelated.

Ukraine promotes AI training on war data

2026-03-13
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being trained on battlefield data for drone applications, indicating AI system involvement. Although no direct harm or incident is reported, the use of AI in warfare inherently carries significant risks of harm (injury, disruption, violations of rights), and sharing war data to train AI models increases the likelihood of AI-enabled military actions that could cause harm. This event therefore fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future. It is not an AI Incident because no actual harm has yet been reported, nor is it merely Complementary Information or Unrelated.

Machines on the battlefield: how is AI shaping the war with Iran?

2026-03-15
ILNA News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in warfare that have directly led to harm, including the death of high-profile individuals and destruction of military and civilian targets. The AI systems are involved in target identification, decision-making, and autonomous weapon deployment, which have caused injury and harm to persons and communities. The involvement of AI in these lethal operations and the resulting harms meet the criteria for an AI Incident as defined. The article also references ethical and legal concerns but the primary focus is on realized harm caused by AI-enabled military systems.