US Uses Anthropic AI in Lethal Military Strikes on Iran

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During Operation Epic Fury, the US military used Anthropic's AI services, including Claude tools, alongside B-2 bombers and drones in strikes against Iranian military infrastructure. The AI's specific role is unclear, but its deployment contributed to lethal operations causing significant harm in Iran.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that Anthropic's AI tools were used by the US military in strikes that caused deaths and destruction in Iran. The harm is direct and significant, involving loss of life and damage to property and communities. The AI system's involvement in the military operation that led to these harms qualifies this as an AI Incident. The lack of detail on how the AI was used does not negate the fact that its use was part of an operation causing harm. Therefore, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Physical (death)

Severity
AI incident

Business function
Other


Articles about this incident or hazard

US used Anthropic AI in strikes against Iran: Report

2026-03-02
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Anthropic's AI tools were used by the US military in strikes that caused deaths and destruction in Iran. The harm is direct and significant, involving loss of life and damage to property and communities. The AI system's involvement in the military operation that led to these harms qualifies this as an AI Incident. The lack of detail on how the AI was used does not negate the fact that its use was part of an operation causing harm. Therefore, this event meets the criteria for an AI Incident.

Operation Epic Fury: US uses AI, stealth bombers and suicide drones against Iran | Check full list

2026-03-02
India TV News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the operation used suicide drones and low-cost attack drones modeled after Iranian designs, which are likely AI-enabled for autonomous or semi-autonomous operation. The use of these AI systems in military strikes directly causes harm to property and potentially to people, fulfilling the criteria for an AI Incident. The event involves the use of AI systems in a harmful context (military attack), with direct consequences, not merely a potential or hypothetical risk. Therefore, it is classified as an AI Incident.

US Deploys B-2 Bombers, Suicide Drones, Anthropic AI in Iran Strikes

2026-03-02
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Anthropic's AI services in a military strike that caused deaths, including a high-profile target. Although the exact function of the AI is not detailed, its deployment in lethal operations implies its outputs or support contributed to the harm. This meets the criteria for an AI Incident as the AI system's use directly or indirectly led to harm to persons and communities. The involvement of AI in lethal autonomous or semi-autonomous weapon systems or decision support in targeting is a recognized source of AI-related harm.

World News | US Used B-2 Bombers, Suicide Drones, Anthropic AI in Strikes Against Iran: Report

2026-03-02
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Anthropic's Claude tools) in a military operation that resulted in lethal strikes against Iran, causing death and harm. The AI system's involvement in the development or execution of these strikes directly led to harm to persons and communities, fulfilling the criteria for an AI Incident. The article explicitly mentions AI use in the attack, and the harm is realized, not just potential.

Artificial intelligence, stealth jets and suicide drones: Inside the US assault on Iran

2026-03-02
The Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools being used in a military operation that caused harm (deaths and destruction). However, the AI's role is not detailed or shown to be a direct or indirect cause of harm; the harm stems from the military strikes themselves. The AI involvement is reported but not linked to malfunction, misuse, or a causal chain leading to harm. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides additional context about AI's integration in military operations, fitting the definition of Complementary Information.

US used B-2 bombers, suicide drones, Anthropic AI in strikes against Iran: Report

2026-03-02
KalingaTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI services from Anthropic in a military strike operation that caused lethal harm and destruction. The AI system's involvement in the use of lethal force and strategic targeting in warfare directly links it to harm to persons and communities, fulfilling the criteria for an AI Incident. Although the exact role of the AI tools is not detailed, their use in the operation that led to deaths and destruction is sufficient to classify this as an AI Incident rather than a hazard or complementary information.

From B-2 Bombers To Anthropic AI: What US Unleashed In Deadly 'Operation Epic Fury' Against Iran

2026-03-02
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI services in a military operation that caused harm to Iranian military infrastructure and leadership, which falls under harm to persons and property. Although the precise function of the AI tools is not detailed, their deployment in a lethal strike indicates AI's involvement in causing harm. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect role of AI in harm resulting from the operation.

Khamenei killed? The mastermind was not human

2026-03-02
Al Bayan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Claude') in planning and executing a military strike that resulted in the death of a person, which is a direct harm to human life. The AI system was used to analyze complex data and simulate attack scenarios, playing a pivotal role in the operation's success. This meets the definition of an AI Incident as the AI system's use directly led to injury or harm to a person. The involvement is in the use phase of the AI system, and the harm is realized and significant. Hence, the event is classified as an AI Incident.

US military used "Claude" in Iran despite its suspension by the Trump administration

2026-03-01
Al-Araby Al-Jadeed
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named (Claude) used by the U.S. military in intelligence and operational contexts that led to real harm (deaths in a military operation). The AI's use was against the provider's terms and official government directives, indicating misuse or failure to comply with legal and ethical frameworks. The AI system's outputs influenced targeting and operational decisions, directly contributing to harm to people, fulfilling the criteria for an AI Incident. The article also mentions ongoing use despite bans, reinforcing the direct link to harm rather than a potential future risk or mere complementary information.

US military uses an AI tool against Iran

2026-03-01
Alrai-media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military operations that have led to an airstrike in Iran, implying direct harm to persons and infrastructure. The AI system is used for intelligence and targeting, which are critical functions that influence physical harm and conflict outcomes. This fits the definition of an AI Incident, as the AI system's use has directly led to harm (injury or harm to persons and disruption of critical infrastructure). The article also notes the ongoing use and replacement of AI tools in military contexts, but the primary focus is on the realized harm from AI-assisted military action, not just potential or complementary information.

To strike Iran... the US military used an AI tool banned by Trump - Al-Watan

2026-03-01
Al-Watan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in military operations that have directly led to harm (airstrikes in Iran). The AI system was used in target identification and battle simulations, which are critical to the execution of military attacks. This constitutes direct involvement of AI in causing harm to people and property, fulfilling the criteria for an AI Incident. The article does not merely discuss potential or future harm but describes actual use in operations with real consequences. Therefore, the classification is AI Incident.

Akhbarak Net | To strike Iran... America's military used an AI tool banned by Trump

2026-03-01
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in military operations that have caused direct harm, such as airstrikes and capture missions. The AI system is used for intelligence and targeting, which are critical to the execution of these operations. The involvement of the AI system in decisions leading to physical harm to persons and potential disruption of critical infrastructure meets the criteria for an AI Incident. The article describes realized harm resulting from the AI system's use, not just potential harm, and thus it is not merely a hazard or complementary information. Therefore, the classification as an AI Incident is justified.

Despite the ban... Anthropic tools took part in a US military operation against Iran - Step News Agency

2026-03-01
Step News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude' by Anthropic) in military operations that have led to harm, including an airstrike against Iran and the capture of Venezuela's president. The AI system was used for intelligence analysis and operational planning, directly influencing military actions. This constitutes an AI Incident because the AI system's use has directly led to harm (military conflict and associated consequences). The article also highlights the security and political controversies around the AI tool's use, but the primary focus is on realized harm through military operations, not just potential or policy issues.

How did the US military use the Claude AI app to identify targets in Iran? - BBC News Arabic

2026-03-02
BBC
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) in a military context for targeting, which directly contributed to a military attack, thus causing harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (a military strike).

Anthropic's AI plays a pivotal role in the war on Iran

2026-03-02
Aljazeera
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) used by the U.S. military in active war operations against Iran. The AI system is used for coordinating strikes, intelligence, and battle simulations, which directly relates to harm to persons and possibly critical infrastructure. The use of AI in military targeting and operations is a clear case of AI involvement leading to harm or potential harm. The article describes actual use rather than just potential, so this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of the AI system in military operations that cause or could cause injury or death fits the definition of an AI Incident under harm category (a).

Despite Trump's ban... the US military used Anthropic's AI in the attack on Iran

2026-03-02
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's 'Claude') used by the US military in an attack on Iran, which constitutes harm to a community or group of people. The AI system's use in target selection and intelligence directly contributed to the military action, thus directly leading to harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use led to harm (a military attack).

Newspaper: US forces rely on Anthropic's technologies in their strikes against Iran

2026-03-02
Argaam
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's Claude) in military targeting and battle simulations that support airstrikes against Iranian targets. These strikes cause harm to people and property, fulfilling the criteria for harm under AI Incident definitions. The AI system's use in target identification and operational planning directly contributes to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

AI enters the heart of wartime decision-making and redraws the maps of power

2026-03-02
Dostor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models like Claude, advanced analysis tools like 'Maven') being used in sensitive military contexts to analyze intelligence, identify targets, and influence the timing and execution of lethal operations. The death of the Iranian leader following an AI-assisted operation constitutes direct harm to a person and has broader implications for political and military stability. The AI systems' development and use directly contributed to this harm, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or future hazards but reports on realized harm linked to AI use in warfare.

Saraya News Agency: US military used "Claude" in the Iran strikes despite Trump's ban

2026-03-02
Saraya News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) explicitly used in military operations, including intelligence analysis and target selection, which directly contributed to strikes on Iran. These strikes constitute harm to persons and communities, fulfilling the criteria for an AI Incident. The AI's role, while not autonomous weapon control, was pivotal in supporting lethal military decisions. The article describes realized harm linked to AI use, not just potential harm, so this is an AI Incident rather than a hazard or complementary information.

Did AI applications contribute to reaching Khamenei?

2026-03-02
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations that directly led to the death of a political leader, which is a clear harm to a person and political community. The AI systems were integral in intelligence analysis and operational planning, thus their use directly contributed to the harm. This meets the criteria for an AI Incident because the AI's use in targeting and executing a strike caused injury or harm to persons and affected political stability, fulfilling the harm definitions (a) and (d).

Akhbarak Net | How did the US military use the Claude AI app to identify targets in Iran? - BBC News Arabic

2026-03-03
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The use of an AI system for military targeting directly involves it in lethal operations that can cause injury or harm to persons or groups. This constitutes an AI Incident because the AI system's use directly contributed to harm in a conflict scenario. The article explicitly mentions the AI system's use in a military attack, fulfilling the criteria for an AI Incident.

"Anthropic"... the AI company Trump banned and his military used against Iran

2026-03-02
Al Majalla
Why's our monitor labelling this an incident or hazard?
The AI system 'Claude' is explicitly mentioned as being used by the U.S. military for critical functions such as intelligence analysis and target selection, which directly contributed to a military airstrike in Iran. This use of AI has led to real-world harm (physical harm to persons and geopolitical consequences). The conflict over control and ethical boundaries further highlights the AI system's pivotal role in causing harm. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly led to harm and raises significant human rights and security concerns.

Trump bans Anthropic while the US military continues using it in Iran

2026-03-02
Mankish Net
Why's our monitor labelling this an incident or hazard?
The AI system 'Claude' is explicitly mentioned as being used by the U.S. Central Command for intelligence and targeting, which directly relates to military operations that could cause harm. The use of AI in such a context involves the AI system's use leading to potential harm (e.g., harm to persons or communities in conflict zones). Although no specific harm event is described, the ongoing use of AI in military targeting inherently involves risk of harm. However, since the article does not report a realized harm or incident caused by the AI system but rather the continued use despite security concerns, this situation is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use in military operations.

Trump bans Anthropic... and the US military continues using it in Iran

2026-03-02
Asharq News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Claude' by Anthropic) in military operations that have led to real-world harm, such as the U.S. military actions against Iran and the arrest of a foreign leader. The AI system is used for critical decision-making tasks like intelligence analysis and target selection, which directly influence physical and geopolitical outcomes. This constitutes an AI Incident because the AI system's use has directly contributed to harm (military conflict and arrests). The ongoing reliance on the AI system despite official bans highlights the direct involvement and impact of AI in causing harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

US military used AI to direct precision strikes against Iran

2026-03-02
Al Mamlaka TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Anthropic's Claude) in military operations that directly affect physical harm and conflict outcomes, fulfilling the definition of an AI Incident. The AI system's outputs are used for target selection and battle simulations, which are critical to the conduct of warfare and can lead to injury or harm to people (harm category a). The article describes actual use, not just potential, and the AI's role is pivotal in these operations. The continued use despite official orders also indicates a failure to comply with directives, reinforcing the incident classification. Therefore, this is not merely a hazard or complementary information but an AI Incident.

AI on the battlefield... how is it redrawing the military balance of power?

2026-03-03
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in active military operations that have caused harm (e.g., attacks on Iran). The AI system is used for intelligence and operational decision support, directly influencing actions that lead to harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities in conflict. The article also discusses the ethical and legal challenges, but the primary focus is on the realized use of AI in military harm, not just potential future risks or governance responses. Therefore, it is classified as an AI Incident.

AI "sweeps" the US operation against Iran

2026-03-03
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The event involves explicit use of AI systems in military targeting and strike decisions that have directly resulted in deaths and destruction, fulfilling the criteria for harm to persons and communities. The AI systems' role in accelerating and automating lethal decisions is central to the incident. The article also highlights ethical concerns and potential risks of overreliance on AI, but the realized harm (killing of individuals) confirms this as an AI Incident rather than a hazard or complementary information. The AI system's development and use directly led to significant harm, meeting the definition of an AI Incident.

From Venezuela to Iran... AI for hunting leaders hostile to America

2026-03-03
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI language models in military operations that resulted in the capture and killing of political leaders, which is a direct harm to persons. The AI systems are used in serious combat and covert operations, indicating their role in causing injury or death. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons. The article does not merely speculate about potential harm but reports on actual operations where AI was involved, thus excluding AI Hazard or Complementary Information classifications.

The war on Iran... how is AI changing the way strikes are planned and executed?

2026-03-03
Al-Shorouk
Why's our monitor labelling this an incident or hazard?
The event involves explicit use of AI systems in military operations that have directly caused harm to human life and violated international humanitarian law. The AI systems are integral to the planning and execution of strikes, which have resulted in deaths and legal violations. This meets the criteria for an AI Incident because the AI's development and use have directly led to injury and violations of rights. The article also highlights concerns about the reduction of human oversight ('decision pressure'), which further underscores the AI's pivotal role in causing harm.

AI as an "American general" in the Iran war

2026-03-04
Al Bayan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's Claude) in military operations that include targeting and strikes against Iran, which constitute harm to persons and communities. The AI system's outputs are used in decision-making that leads to physical harm, fulfilling the criteria for an AI Incident. The ethical conflict and political decisions around the AI system's use further support the significance of the AI's role in causing harm. Hence, this is not merely a potential hazard or complementary information but a realized AI Incident involving harm through military action.

Marginalizing the human role in decision-making... the strike on Iran heralds a new kind of war - Youm7

2026-03-03
Youm7
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in the conduct of military strikes that have caused deaths, fulfilling the criteria of an AI Incident due to direct harm to persons. The AI system's role in target selection, legal assessment, and strike execution is pivotal and has directly led to lethal outcomes. The article also highlights concerns about the marginalization of human decision-making, reinforcing the AI system's central role in causing harm. Therefore, this is classified as an AI Incident.

How did AI change the rules of the war on Iran?

2026-03-03
24.ae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's 'Claude') in military operations that have resulted in hundreds of air strikes and the killing of a high-profile individual, indicating direct harm caused by AI-enabled decision-making. The AI's role in accelerating and automating lethal strikes, with concerns about reduced human oversight and ethical considerations, confirms that the AI system's use has directly led to injury and harm. This fits the definition of an AI Incident, as the AI system's use has directly caused harm to persons and communities in a military conflict context.

Akhbarak Net | AI on the battlefield... how is it redrawing the military balance of power?

2026-03-03
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the deployment and use of an AI system in military operations that have caused harm through attacks on Iran, thus fulfilling the criteria for an AI Incident. The AI system is involved in intelligence analysis, target selection, and battlefield simulation, directly influencing lethal military actions. The harm includes injury or death to people and disruption of peace and security, which are harms to communities and potentially violations of human rights or international law. Although the article notes that final decisions remain with human commanders, the AI system's role is pivotal in the chain of events leading to harm. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

"CBS": Washington used the "Claude" model in the attack on Iran

2026-03-03
Addiyar
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in military operations, which could plausibly lead to significant harms such as violations of human rights, harm to communities, or escalation of conflict. The article highlights a dispute over usage controls and legal boundaries, indicating potential risks. However, no direct or indirect harm has been reported as having occurred yet. Therefore, this situation constitutes an AI Hazard, as the AI system's use in military attacks and autonomous weapons could plausibly lead to an AI Incident in the future.

AI-augmented warfare... a tool for shedding blood "with precision"

2026-03-04
Aljazeera
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in active military operations that have led to lethal outcomes, including missile strikes and surveillance that impact civilian populations. This constitutes direct harm to people and communities, fulfilling the criteria for an AI Incident. Additionally, the discussion about potential future fully autonomous AI weapons underscores plausible future harm, but since harm is already occurring, the classification prioritizes AI Incident. The involvement of AI in planning, targeting, and execution of attacks confirms the AI system's role in causing harm. The ethical concerns and company-government conflicts further support the significance of AI's impact in this context.

Trump uses AI in the attack on Iran

2026-03-04
Masrawy.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in active military operations that have directly caused harm through targeted strikes in Iran. The AI system's development and use have materially contributed to the harm by enabling rapid and precise targeting decisions. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities. The article does not merely discuss potential or future harm, nor is it solely about governance or responses; it reports on actual harm caused by AI-enabled military actions.

From Venezuela to Iran... how did America employ AI to pursue leaders who oppose it?

2026-03-05
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (large language models) in military operations that resulted in the capture or killing of political leaders, which are clear harms to persons and political communities. The AI systems' involvement is in their use for operational planning and targeting, directly contributing to these outcomes. This fits the definition of an AI Incident because the AI's use has directly led to harm (death and political violence).

From Khamenei's assassination to Maduro's abduction... the technologies in your hands that were used in covert operations

2026-03-04
i24NEWS English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Claude, ChatGPT, Grok) being used by military and intelligence agencies for planning and executing lethal operations, including assassinations and kidnappings. The AI systems were used for intelligence analysis, target identification, and operational planning, which directly contributed to harm to persons (leaders assassinated, military operations). This meets the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to persons and communities. The article also discusses the refusal of Anthropic to remove safety measures, indicating the AI's potential for misuse in violence. Hence, the event is classified as an AI Incident.

The Claude AI program at the heart of the aggression against Iran

2026-03-04
Al-Araby Al-Jadeed
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) in military targeting and execution of strikes that have caused deaths, including a major political figure. This constitutes direct harm to persons, fulfilling the criteria for an AI Incident. The AI system's role in accelerating and automating decision-making in lethal operations is central to the harm caused. Although there are also potential future harms discussed (e.g., cyberattacks), the realized lethal strikes take precedence, confirming this as an AI Incident rather than a hazard or complementary information. The involvement of AI in these military operations and the resulting deaths clearly meet the definition of an AI Incident under the OECD framework.

Washington Post: Trump uses AI to identify targets inside Iran - Al-Ahram Gate

2026-03-04
Al-Ahram
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in active military operations that have directly led to harm (military strikes in Iran). The AI system's development and use have materially contributed to the targeting and execution of attacks, fulfilling the criteria for an AI Incident due to direct harm to persons and communities. The article also discusses ethical and operational implications, but the primary focus is on realized harm caused by AI-enabled military targeting, not just potential or complementary information.

Trump enlists AI to identify targets in Iran

2026-03-04
Mankish Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military targeting and decision-making that directly leads to harm (injury or death, destruction of property) as part of military operations. The AI system's development and use have directly contributed to the identification and prioritization of targets for attack, which constitutes harm under the framework. Hence, this qualifies as an AI Incident due to the direct link between AI use and harm in a military conflict context.
The Pentagon Used Anthropic in the Strikes on Iran Despite the Ban Imposed by Trump

2026-03-01
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The AI system Claude was actively used in military operations for intelligence and target selection, which directly implicates it in decisions that can cause injury or death and violate human rights. The article explicitly states the AI's role in these operations and the associated risks, fulfilling the criteria for an AI Incident. Although the company Anthropic imposed ethical limits, the Pentagon's use of the AI in lethal contexts and the ongoing debate about autonomous weapons highlight realized or imminent harm. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.
'The Pentagon Used Anthropic for the Attack on Iran Despite the Ban' - News - Ansa.it

2026-03-01
ANSA.it
Why's our monitor labelling this an incident or hazard?
The AI system Claude was used in the development and execution of a military attack, specifically for intelligence and target identification, which directly relates to harm (potential injury or death) to persons and disruption of critical infrastructure. The use of AI in this context, especially against explicit prohibitions, constitutes an AI Incident because the AI system's use directly contributed to actions causing harm. The article describes realized use leading to harm, not just potential risk, so it is an AI Incident rather than a hazard or complementary information.
The Pentagon Used Anthropic to Attack Iran. The Startup Had Been Banned by the Pentagon After Hegseth's Offer on 'Ethical Limits'

2026-03-01
Open
Why's our monitor labelling this an incident or hazard?
The AI system Claude was used by the Pentagon for military intelligence and targeting in an attack on Iran, which constitutes direct involvement of AI in an event causing harm to persons and potentially violating legal and ethical standards. The use occurred despite an official ban citing ethical risks, highlighting misuse or disregard of governance frameworks. This meets the criteria for an AI Incident as the AI system's use directly led to harm and breaches of obligations.
How Khamenei Was Killed: The CIA's Involvement, Israel's Bombs, and the Use of Anthropic

2026-03-01
Today
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Pentagon used Anthropic's AI system for intelligence and operational support in a military strike that killed Khamenei and others. The AI system's outputs were pivotal in identifying targets and timing the attack, which directly led to harm (deaths). This meets the definition of an AI Incident because the AI system's use directly led to injury or harm to persons. The involvement is not speculative or potential but actual and consequential. Hence, the event is classified as an AI Incident.
The Pentagon Used Anthropic's AI to Support the Strikes on Iran (Despite Trump's Ban)

2026-03-01
MRW.it
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Pentagon used an AI system (Anthropic's Claude) for intelligence analysis and target selection in military operations that resulted in the death of an individual. This is a clear case where the AI system's use directly led to harm to persons, fulfilling the criteria for an AI Incident. The involvement is not hypothetical or potential but actual and consequential. The ethical concerns and government ban further underscore the significance of the harm caused. Hence, the event is classified as an AI Incident.
Claude Used in Iran Airstrikes Despite Trump's Ban... Can the US Not Wage War Without Anthropic's AI? - Maeil Business Newspaper

2026-03-02
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of the AI system Claude in a real military operation (Iran airstrike), where it was involved in intelligence and target identification tasks. Such use directly influences decisions that can cause injury or harm to people and damage to property, which fits the definition of an AI Incident. The involvement is not hypothetical or potential but actual and ongoing, with the AI system playing a pivotal role. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.
[Video] The CIA Collects, the AI Simulates... America's Formula for Pinpoint Strikes | Yonhap News

2026-03-02
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) in a military context to analyze intelligence and simulate battlefield scenarios, which directly led to a missile strike causing fatalities. This constitutes direct harm to persons caused by the use of an AI system, fitting the definition of an AI Incident under harm category (a).
Despite Trump's Ban... US Used Anthropic AI in Iran Airstrikes

2026-03-02
Asia Economic Daily (Asiae)
Why's our monitor labelling this an incident or hazard?
The AI system 'Claude' was explicitly used by the U.S. military in an airstrike operation against Iran, which is a real-world event involving potential harm to persons and national security. The AI's role in intelligence assessment and target identification directly contributed to the military action. Despite political controversy and directives to halt its use, the AI system was employed, indicating its involvement in harm. This fits the definition of an AI Incident, as the AI system's use directly led to harm or risk of harm in a military conflict context.
Trump Issued a 'Ban Order', Yet... The Technology the US Military Used in the Iran Airstrikes [News Now]

2026-03-03
YTN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Claude') in military operations that have caused or could cause harm, such as airstrikes. The AI system was used for critical functions like target identification and battlefield simulation, which are directly linked to harm to persons and property. The involvement of AI in these military actions meets the definition of an AI Incident, as the AI system's use has directly led to harm or the potential for harm. The political and ethical conflict over the AI's military use further supports the significance of the AI system's role in causing harm. Therefore, this event is classified as an AI Incident.
"US Used Anthropic's Claude in Iran Airstrikes" - AI Fears Spread

2026-03-01
Seoul Economic Daily
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Anthropic's Claude) in military operations that have directly led to harm (airstrikes on Iran). The AI system was used for target identification and battlefield simulation, which are critical to the execution of attacks that can cause injury or death, thus meeting the criteria for an AI Incident. The article describes realized harm linked to the AI system's use, not just potential harm, and discusses the political and security implications, confirming the direct involvement of AI in causing harm.
"Anthropic AI Used in Iran Airstrikes"... AI Deeply Involved in Military Operations Despite Trump's Order to Halt Its Use

2026-03-02
weekly.khan.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in military operations, which is explicitly stated. The AI system's use in intelligence evaluation and target identification directly supports military actions that can cause injury or harm to people, fulfilling the criteria for harm (a). The article describes actual use, not just potential use, so this is an AI Incident rather than a hazard. The political conflict and orders to cease use do not negate the fact that the AI system was used in operations causing or potentially causing harm. Therefore, this event is classified as an AI Incident.
'Claude', the AI Used in the US Airstrikes on Iran... AI Comes Into Full Use in Modern Warfare

2026-03-03
dongascience.com
Why's our monitor labelling this an incident or hazard?
The AI system 'Claude' was explicitly used in the planning and execution of a military airstrike that led to lethal outcomes. The AI's role in intelligence assessment and operational simulation directly influenced the military decision-making process that caused harm to individuals and communities. This meets the criteria for an AI Incident as the AI system's use directly led to harm to persons and communities. The article does not merely discuss potential or future harm but describes realized harm linked to the AI system's deployment in warfare.
Trump's Big Decision: Ban Imposed on Anthropic's AI; Company Stands Firm, Says It Will Challenge the Move in Court

2026-02-28
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's AI technology) and concerns its use by government agencies. While no direct harm is reported, the government's action is based on perceived risks related to supply chain security, implying plausible future harm if the AI technology were used unchecked. The company's legal challenge and the public dispute highlight governance and regulatory responses to AI risks. Since no actual harm has occurred but there is a credible risk leading to regulatory intervention, this event qualifies as Complementary Information about governance and societal response to AI risks rather than an AI Incident or AI Hazard.
US Military Used This AI Tool for Precision Targeting in the Iran Attack

2026-03-02
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) in a military operation that directly led to a precise attack, implying harm or potential harm to persons or property. The AI system's use in targeting constitutes direct involvement leading to harm, qualifying this as an AI Incident. The political ban and continued military use provide context but do not negate the realized harm from AI use in the attack.
After Its Fight With Trump, This AI App Has Taken the US by Storm; Loved by Everyone After the Ban, It Becomes Number One

2026-03-02
LallanTop - News with most viral and Social Sharing Indian content on the web in Hindi
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude AI by Anthropic) and its use and development in military contexts. The conflict centers on ethical and control issues that could plausibly lead to harms such as privacy violations, misuse in autonomous weapons, and broader human rights concerns. Although no direct harm is reported, the ongoing use by the Pentagon despite the ban and the supply-chain risk designation indicate credible potential for future harm. Hence, this is best classified as an AI Hazard rather than an Incident or Complementary Information.
Trump Imposed a Ban, Yet the US Used a Secret AI Tool in the Iran Attack!

2026-03-02
NDTV Gadgets 360 Hindi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) in a military air operation against Iran, which involves potential harm to persons and national security risks. The AI system was used for intelligence and targeting, directly influencing military actions. Despite a ban from the President, the AI was used, indicating a failure to comply with legal or policy directives. The involvement of AI in lethal military operations and the dispute over ethical safeguards highlight direct or indirect harm linked to the AI system's use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
'We Don't Need It': Donald Trump Bans Anthropic Over Military Use of AI

2026-02-28
AajTak
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude AI) and its potential military use, which is being banned due to national security concerns. No actual harm or incident is reported; instead, the government is acting to prevent possible future harms related to AI-enabled autonomous weapons and surveillance. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, and the government is taking steps to mitigate that risk. The event is not merely complementary information because the ban itself is a direct response to the potential hazard posed by the AI system. It is not an AI Incident because no realized harm is described.
Anthropic Refused to Comply... AI Battle Erupts in the US, Trump Issues This Order

2026-02-28
AajTak
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and its intended use by the Pentagon for military and surveillance purposes. The conflict arises from the AI system's use and the refusal to remove safety guardrails, which could lead to violations of privacy and ethical norms. No actual harm has been reported yet, but the potential for significant harm is credible and directly linked to the AI system's use. Hence, this is an AI Hazard rather than an AI Incident. The article also mentions legal and governance responses, but the primary focus is on the plausible future harm from unrestricted AI use in defense.
Donald Trump on Anthropic AI: Trump's Major Order to the US Government on Anthropic AI - CNBC Awaaz

2026-02-28
CNBC Awaaz
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude model) and its use in military contexts, which is a high-risk application. The refusal of the company to allow unconditional military use and the government's threat to enforce compliance under the Defense Production Act indicate a serious dispute with potential for significant harm, such as misuse in autonomous weapons or mass surveillance. No actual harm is reported yet, so it is not an AI Incident. The event plausibly could lead to an AI Incident if the AI is used in ways the company opposes or if government enforcement leads to misuse. Hence, it fits the definition of an AI Hazard.
From Data to Target: What All Did Anthropic's Claude AI Do in Preparing the Iran Attack?

2026-03-01
AajTak
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude AI) explicitly mentioned as being used by the US military for analyzing intelligence data and prioritizing targets in a real military strike against Iran. The AI's outputs directly influenced decisions that led to harm (the airstrike). Although the final decision was made by humans, the AI system's role was pivotal in accelerating and shaping those decisions. This fits the definition of an AI Incident because the AI system's use directly or indirectly led to harm to communities and property. The article also discusses ethical and governance issues but the primary focus is on the realized harm facilitated by the AI system's use in a military operation.
Treasury Ends Use of Anthropic AI Following Trump's Order - By Investing.com

2026-03-02
Investing.com India
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude platform) and its use by government agencies. The cessation of use is due to concerns about national security risks and potential harm, but no actual harm or incident is reported. Therefore, this event represents a governance response to a perceived AI risk rather than an AI Incident or an immediate hazard. It fits best as Complementary Information because it provides context on societal and governance responses to AI-related concerns without describing a realized harm or a direct plausible future harm event.
Why Does Trump Want AI Used in War, What Is the Dispute With Anthropic, and Why Is OpenAI in the Crosshairs?

2026-03-03
AajTak
Why's our monitor labelling this an incident or hazard?
The article centers on the strategic, ethical, and trust-related tensions between AI companies and government partnerships, without reporting any realized harm or a concrete incident involving AI systems causing injury, rights violations, or other harms. It also does not describe a specific event where AI use could plausibly lead to harm but rather discusses broader implications and responses. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on AI ecosystem developments and governance debates without reporting a new AI Incident or AI Hazard.
Anthropic Was Preparing to Control More Than 10,000 Drones at Once; Deal With the US Government Falls Through

2026-03-03
AajTak
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice-controlled AI for drone swarms) developed for military use, which could plausibly lead to significant harms such as injury or disruption in warfare contexts. Anthropic's participation in this defense project, despite the proposal not being selected, indicates the integration of AI in autonomous or semi-autonomous weapons systems. Since no actual harm or incident has occurred yet, but the potential for harm is credible and significant, the event fits the definition of an AI Hazard.
Which AI Technology Did the US Use in Its Attack on Iran?

2026-03-02
jang.com.pk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in active military operations against Iran, which directly implicates potential and actual harm to people and communities. The AI system was used for intelligence and targeting, which are critical components in warfare that can lead to injury or death. The article confirms the AI system's deployment and the resulting political and legal disputes, indicating realized use and associated risks. Therefore, this is an AI Incident because the AI system's use has directly contributed to military actions with potential or actual harm.
Which AI Technology Did the US Military Use for the Strikes on Iran?

2026-03-02
Express Urdu
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military operations that directly led to significant loss of life, which constitutes harm to people (criterion a). The AI system was used for intelligence and targeting, directly influencing lethal attacks. This is a clear AI Incident because the AI system's use was a pivotal factor in causing harm. The article also discusses governance and legal disputes, but these are secondary to the primary incident of AI-enabled lethal military action. Therefore, the classification is AI Incident.
Attack on Khamenei: The US Also Used Artificial Intelligence

2026-03-03
Nawaiwaqt
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military attacks, including intelligence and targeting functions, which directly relate to harm to persons and potential violation of international laws. The AI system's use in these operations, despite a ban, indicates its role in causing or enabling harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in harmful military actions.
US Military Refuses to Obey President Trump's Order! Artificial Intelligence Used for the Attack on Iran

2026-03-02
MM News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude AI) in military operations involving airstrikes on Iran, which inherently carry risks of harm to persons and national security. The AI system was used despite a direct order to stop, indicating a failure of governance or compliance. The AI's role in intelligence analysis and target identification directly supports military actions that can cause injury or harm, fulfilling the criteria for an AI Incident. The event involves the use of AI in a context where harm is occurring or highly likely, and the AI's involvement is pivotal. Therefore, this is classified as an AI Incident.
Artificial Intelligence Used in US Strikes on Iran; Startup Anthropic's Claude Tool Played a Key Role

2026-03-02
dailyaaj.com.pk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Anthropic's AI cloud technology) in US military airstrikes, which involve direct harm to persons and geopolitical conflict. The AI system was used for intelligence analysis and target identification, functions that directly influence military operations and their outcomes. The involvement of the AI system in these operations means it has directly led to harm (injury or harm to persons in warfare). Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Startling Revelation of AI Technology Used in the Strikes on Iran

2026-03-03
Urdu News - Today News - Daily Jasarat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military operations that have already taken place, implying direct involvement of AI in actions that can cause harm to people and communities. The use of AI for intelligence and targeting in warfare can lead to injury, loss of life, and violations of human rights and international law. The internal conflict over the use of this AI system and its legal challenges further highlight the significance of the harm and the AI system's pivotal role. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Startling Revelation of AI Use in the Strikes on Iran - Siasat Daily - Urdu

2026-03-03
Siasat Daily - Urdu
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military operations that have already taken place, which directly or indirectly leads to harm through armed conflict. The AI system was used for intelligence and operational planning, contributing to the execution of attacks. This fits the definition of an AI Incident because the AI system's use has directly led to harm (military attacks). The internal conflict about authorization and continued use does not negate the realized harm. Therefore, this is classified as an AI Incident.
900 Strikes in 12 Hours: Is Artificial Intelligence Changing the Rules of War?

2026-03-04
jang.com.pk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in real military operations that have resulted in numerous attacks, indicating direct harm to people and communities. The AI systems are involved in targeting, decision-making, and execution of strikes, which have already occurred, fulfilling the criteria for an AI Incident. The concerns about reduced human oversight and the speed of AI-driven decisions further support the classification. The event is not merely a potential risk or a complementary update but a report of actual harm linked to AI use in warfare.
Artificial Intelligence Plays a Key Role in the US and Israeli Military Campaign in Iran

2026-03-05
jang.com.pk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in active military operations that have directly led to harm through attacks on targets in Iran. The AI's role in identifying targets and planning attacks is pivotal, and the use of AI in warfare inherently involves risks of injury or death to people, qualifying this as an AI Incident. The article explicitly states AI's central role in these operations, confirming AI system involvement and direct harm.
US and Israeli Military Campaign in Iran: Artificial Intelligence Played a Key Role, US Newspaper Reports

2026-03-06
jang.com.pk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in active military operations that have directly led to harm through attacks on approximately one thousand targets within 24 hours. The AI system's role in target identification and prioritization is pivotal to the harm caused. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm in a conflict setting.
How the US Used Artificial Intelligence in the Iran War, and Who Is Responsible for the Lives Taken?

2026-03-05
Ummat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military operations that have resulted in numerous attacks causing harm. The AI system's role is pivotal in accelerating decision-making and enabling large-scale, rapid attacks, which directly leads to harm to people and communities. The involvement is through the use of AI in operational deployment, not just development or potential use. The harms are realized and significant, including ethical and legal risks. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
US and Israel's Use of Artificial Intelligence in the Iran War Revealed

2026-03-05
dailykhabrain.com.pk
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI systems being used in military operations that involve attacks on targets, which implies direct harm to people and property. The AI's role in proposing hundreds of targets and accelerating war planning shows its involvement in causing harm. The use of AI in autonomous weapons and surveillance further supports the classification as an AI Incident due to the direct link to harm in armed conflict. Therefore, this is not merely a potential hazard or complementary information but an actual incident where AI has contributed to harm.
If Artificial Intelligence Were a Weapon, Who Should Control It?

2026-03-02
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The article centers on the plausible future risks and governance challenges of AI, particularly in military contexts, including the potential for autonomous weapons and ethical conflicts. Since no actual harm or incident has occurred, and the discussion is about possible future dangers and political struggles over AI control, this fits the definition of an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information since it does not provide updates or responses to a past incident. It is not unrelated because it clearly involves AI systems and their potential impacts.
Refusing to Cooperate With the Pentagon... Anthropic, the Ethical Guarantor of Artificial Intelligence

2026-03-02
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems, nor does it describe a plausible future harm scenario. Instead, it details a company's ethical decision to refuse military use of its AI and the resulting governmental response. This is a governance and ethical stance update, providing context and insight into AI ecosystem dynamics without describing an AI Incident or Hazard. Therefore, it is best classified as Complementary Information.
Who Leads Artificial Intelligence? Three Divergent Regimes Shaping the Global Technology Landscape

2026-03-03
24.ae
Why's our monitor labelling this an incident or hazard?
The article is a broad analytical report on the global AI landscape and governance strategies without describing any concrete AI incident or hazard. It does not report any realized harm or plausible imminent harm caused by AI systems. Nor does it focus on responses to specific AI incidents. Therefore, it fits the definition of Complementary Information, as it provides contextual and strategic insights into AI development and governance without reporting a new AI Incident or AI Hazard.
Seven Essential Roles for Artificial Intelligence in Detecting Misleading News and Fighting Rumors

2026-03-04
24.ae
Why's our monitor labelling this an incident or hazard?
The content is a general analytical discussion about AI's roles in fighting misinformation and the importance of human oversight and transparency. It does not describe any realized harm, nor does it report a specific event where AI caused or could plausibly cause harm. It also does not provide updates or responses to prior incidents. Therefore, it fits best as Complementary Information, providing context and understanding about AI's societal impact and governance considerations rather than reporting an AI Incident or AI Hazard.
The Iran War in the Age of AI... 'Code' Wins Faster Than the Generals Can Blink

2026-03-04
Akhbar El Yom Gate
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Anthropic's Claude) used for analyzing data, identifying targets, and executing military operations against Iran. The AI's role in accelerating strikes and cyber operations that disrupt Iranian systems and influence populations indicates direct involvement in causing harm and disruption. This fits the definition of an AI Incident as the AI system's use has directly led to harm (military strikes, disruption of critical infrastructure, psychological operations) and violations of rights. The article also discusses ethical concerns about AI decision-making in warfare, reinforcing the significance of AI's role in the incident.
A Major Boycott Campaign: Should You Quit ChatGPT After the Pentagon Deal?

2026-03-02
euronews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Anthropic's AI) and their use in military applications, which raises credible concerns about potential harms such as autonomous weapons use and mass surveillance. The public protest and campaign to boycott ChatGPT are responses to these plausible future harms. Since no actual harm or incident has been reported yet, but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI systems and their military use are central to the event.
AI Models and the Question of Ethical Constraints

2026-03-02
Al-Ittihad News Center
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems, nor does it report a specific event where AI use or malfunction led to injury, rights violations, or other harms. Instead, it outlines a policy and ethical dispute that could plausibly lead to future harms if AI is militarized without ethical constraints. Therefore, it fits the definition of an AI Hazard, as it discusses credible potential risks and governance challenges related to AI's future use, especially in military contexts, but no actual incident has occurred yet.
How Did the Aggression Against Iran Lead to Deep Shifts Between Washington and AI Companies? | Al Araby TV

2026-03-03
Al Araby TV
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems developed by companies like Anthropic and OpenAI, used by the U.S. military. The use of AI in planning military strikes on Iran constitutes direct involvement of AI in an event causing harm (military aggression). The conflict over ethical constraints and the subsequent military action demonstrate that AI's use has directly led to harm, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or governance responses but reports on actual use of AI in military operations causing harm, thus qualifying as an AI Incident.
The US Military Uses Artificial Intelligence in the War With Iran - Egypt Window

2026-03-04
egyptwindow.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military operations, including missile guidance and decision support, which are AI systems by definition. The involvement is in the use of AI in warfare, with potential for direct harm to people (injury or death) and broader harm to communities and international stability. While no specific incident of harm is described, the credible risk of harm from AI-enabled military actions is clearly articulated. Hence, this is an AI Hazard rather than an AI Incident. The article also discusses governance and ethical concerns but does not focus on a response or update to a past incident, so it is not Complementary Information. It is not unrelated because AI systems are central to the described event.

The Guardian: Trump is using AI to wage his wars, and this is a dangerous turning point

2026-03-04
egyptwindow.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations that have directly led to harm, including casualties and destabilization in the Middle East. The AI system's outputs were used to guide missile strikes and regime change efforts, which are clear harms to people and communities. This meets the definition of an AI Incident because the AI's use has directly led to injury, harm, and disruption. The article's focus on the actual use and consequences of AI in warfare, rather than hypothetical risks or policy discussions alone, supports this classification.

AI news: X rolls out a 90-day monetization ban on AI-generated war videos not labeled as AI

2026-03-04
The Coin Republic
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of generative AI creating war-related videos, and the platform's use of AI detection tools to identify such content. However, the article focuses on the platform's policy response to prevent misinformation spread and protect content authenticity, rather than reporting an actual incident of harm caused by AI. The policy aims to reduce plausible future harm by discouraging undisclosed AI-generated war content, but no realized harm or incident is described. Therefore, this is best classified as Complementary Information, as it details governance and societal responses to AI-related risks without describing a new AI Incident or AI Hazard.

Claude's secret missions: from hunting Maduro to striking Iran

2026-03-05
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) in military operations that have directly led to harm, including targeted strikes and capture operations. The AI system's outputs are used to identify targets and prioritize strikes, which directly contributes to injury and harm to persons and communities, fulfilling the criteria for an AI Incident. The article explicitly states that the AI system was used in operations causing harm and that military leaders rely on it despite company restrictions. Therefore, this is an AI Incident due to direct involvement of AI in causing harm through military actions.

Apple Music moves against the AI chaos in music

2026-03-05
قناة العربية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in music production and the industry's response to it, but it does not describe any realized harm or incident caused by AI. Instead, it focuses on transparency measures and detection efforts, which are governance and societal responses to AI's growing role in music. Therefore, this is Complementary Information as it provides context and updates on AI-related developments and responses without reporting an AI Incident or AI Hazard.

A crisis behind the scenes: why did Anthropic pull back from its Pentagon project?

2026-03-05
صدى البلد
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's AI models) and their intended use in military applications. The tensions and disagreements with the Pentagon about AI use in autonomous weapons and surveillance indicate concerns about potential misuse or premature deployment, which could plausibly lead to harms such as violations of human rights or escalation of conflict. However, no actual harm or incident is reported; the conflict is about policy and readiness rather than a malfunction or misuse causing damage. The mention of geopolitical risks to AI infrastructure also points to plausible future harms but not realized incidents. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The European Central Bank reveals: does AI boost productivity or threaten jobs?

2026-03-05
Dostor
Why's our monitor labelling this an incident or hazard?
The article is primarily an economic and policy analysis discussing the current and potential future impacts of AI on employment and productivity. It does not report any realized harm or incident caused by AI systems, nor does it describe a specific plausible hazard event. The focus is on understanding trends and advising policy responses, which fits the definition of Complementary Information as it provides context and insight into AI's societal implications without reporting a new incident or hazard.

Claude's secret missions: the Pentagon uses a chatbot to kill Iranian leaders

2026-03-05
@Elaph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Claude') explicitly mentioned as integrated into military targeting and decision-making systems, directly influencing lethal operations against Iranian targets and others. The AI's use in identifying and prioritizing targets that led to military strikes causing harm fits the definition of an AI Incident, as the AI system's use has directly led to injury or harm to persons (harm category a). The article describes realized harm, not just potential harm, and the AI's role is pivotal in these outcomes. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

Tools launched to measure risk levels of AI models

2026-03-05
جريدة الأهرام
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses AI models and their security vulnerabilities, and the tools developed to assess these risks. However, there is no indication that any harm has occurred or that an AI system has malfunctioned or been misused leading to harm. The article primarily provides information about new security assessment tools and their role in managing AI risks, which fits the definition of Complementary Information. It supports better risk assessment and mitigation but does not describe an AI Incident or AI Hazard itself.

Widespread controversy dogs ChatGPT: a boycott campaign after OpenAI's collaboration with the Pentagon

2026-03-05
الرسالة نت
Why's our monitor labelling this an incident or hazard?
The article centers on the announcement of a partnership between OpenAI and the Pentagon, which has sparked public debate and user backlash. While the AI system (ChatGPT) is involved, the event does not describe any realized harm or incident caused by the AI system's development, use, or malfunction. Instead, it reports on reactions, ethical concerns, and market effects, which fall under complementary information about AI ecosystem developments and governance issues. Therefore, this event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

Trampling ethical norms? The US used artificial intelligence…

2026-03-04
marica.bg
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude, a large language model) in military operations for target selection, which has directly contributed to lethal attacks causing hundreds of deaths. This constitutes harm to persons (criterion a). The AI's role is pivotal in analyzing intelligence and simulating scenarios to identify targets, thus directly influencing the harm. The article also notes ethical concerns and tensions around this use, but the key point is that the AI's use has already led to significant harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

The US uses AI in selecting targets for strikes against Iran

2026-03-04
It.dir.bg
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude) used by the US military to select targets in military strikes that have resulted in hundreds of casualties, including civilian deaths. The AI's role in target identification and operational planning directly contributes to harm to human life, fulfilling the criteria for an AI Incident. Although the AI is not used in fully autonomous weapons, its outputs influence lethal decisions, making the harm directly linked to AI use. Therefore, this is classified as an AI Incident due to realized harm caused with AI involvement.

Anthropic has resumed negotiations with the Pentagon after the conflict over AI safety

2026-03-05
Investor.bg
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI models) and their potential military use, which could have implications for AI safety and ethical concerns. However, it does not report any realized harm, malfunction, or misuse of the AI systems. The focus is on negotiations, policy, and industry reactions, which are governance and societal responses to AI-related issues. There is no direct or indirect harm described, nor a clear plausible future harm event occurring at this time. Thus, it fits the definition of Complementary Information, providing important context and updates on AI governance and risk management rather than reporting an AI Incident or AI Hazard.

Artificial intelligence is now taking part in war: a new stage in modern conflicts

2026-03-06
Blitz.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) in military strike planning that has resulted in attacks causing civilian deaths, fulfilling the criteria for harm to people (a). The AI system's involvement in decision-making and targeting directly contributed to these harms. Additionally, the use of AI in cyber operations and missile targeting further supports the presence of AI systems causing or enabling harm. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is realized and directly linked to AI use.

OpenAI crossed the red line that Anthropic refused to cross

2026-03-06
Investor.bg
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's Claude AI and OpenAI's models) and their use in military and surveillance contexts, which are areas with potential for significant harm. However, the article focuses on the companies' ethical stances, negotiations, and government actions rather than describing a specific incident of harm or a direct AI-driven event causing injury, rights violations, or disruption. The sanctions and negotiations represent governance and societal responses to AI risks rather than new incidents or hazards themselves. The discussion of potential misuse and ethical boundaries is important but does not describe a concrete AI Hazard event with imminent risk or an AI Incident with realized harm. Thus, the article fits the definition of Complementary Information, providing context and updates on AI ecosystem developments and governance responses.

The Pentagon put AI startup Anthropic on its supplier "blacklist"

2026-03-06
It.dir.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's AI model Claude) used in military applications, indicating AI system involvement. The event stems from the use and governance of AI technology, specifically the DoD's decision to blacklist a supplier due to perceived risks, which is a governance and strategic response rather than a harm event. There is no indication that the AI system caused injury, rights violations, infrastructure disruption, or other harms, nor that it plausibly could lead to such harms imminently. The event focuses on regulatory and contractual decisions, company responses, and market reactions, which align with Complementary Information as per the definitions. Hence, it is not an AI Incident or AI Hazard but a governance-related update enhancing understanding of AI ecosystem dynamics.

Anthropic's chief called Donald Trump a dictator

2026-03-06
It.dir.bg
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (Anthropic's Claude chatbot) is clear, and its use in military operations is noted, which could imply potential risks. However, the article does not report any actual harm, injury, rights violations, or disruptions caused by the AI system. The main focus is on political conflict, regulatory positions, and industry reactions, which are governance and ecosystem developments rather than incidents or hazards. Thus, this qualifies as Complementary Information, as it enhances understanding of the AI ecosystem and governance challenges without describing a specific AI Incident or AI Hazard.

AI on the battlefield… The US deployed Claude in the attacks on Iran

2026-03-04
Hürriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system Claude was used to identify and prioritize targets in military operations that caused deaths and injuries in Iran. The AI system was integrated into a military intelligence platform and actively contributed to planning and executing attacks. The resulting harm includes loss of life and harm to communities, which are direct harms as defined in the framework. Hence, this qualifies as an AI Incident due to the direct link between AI use and realized harm.

The Washington Post drops a bombshell! "Claude" planned the attacks on Iran; here is the US's secret weapon

2026-03-04
Milliyet
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of an AI system (Claude) integrated with the Maven system to plan and execute military strikes that resulted in the deaths of hundreds of people. The AI system's outputs directly influenced targeting decisions and prioritization, which led to real harm (loss of life and harm to communities). This meets the definition of an AI Incident, as the AI system's use directly led to injury and harm to groups of people. The involvement is not speculative or potential but realized harm. Hence, the classification is AI Incident.

WP: The US is making heavy use of Anthropic's AI application Claude in its attacks on Iran

2026-03-04
Haberler
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) explicitly mentioned as integrated into a military intelligence system (Maven) used for real-time targeting and attack planning. The AI's outputs directly influenced the selection and prioritization of targets in military strikes that caused deaths and injuries, fulfilling the criteria for an AI Incident due to direct harm to people. The article reports realized harm (867 deaths) linked to the AI-supported military operations, not just potential harm, confirming it as an AI Incident rather than a hazard or complementary information.

The Claude effect in the Pentagon's use of AI

2026-03-04
Son Dakika
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military operations that have caused deaths and injuries, fulfilling the criteria for an AI Incident. The AI system's outputs were pivotal in selecting and prioritizing targets for attacks that led to casualties. The harm is realized and directly linked to the AI system's use in planning and conducting these strikes. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

The US's secret weapon: the AI Trump banned planned the attack on Iran

2026-03-05
CNN Türk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military planning and execution, which directly led to harm including deaths and destruction during the Iran attacks. The AI system's outputs were pivotal in selecting and prioritizing targets, thus playing a direct role in the harm caused. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people, fulfilling criteria (a) and (d) of the AI Incident harm definitions.

WP: The US is making heavy use of Anthropic's AI application Claude in its attacks on Iran

2026-03-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system Claude was used to identify and prioritize targets in military operations against Iran, which led to actual attacks causing hundreds of deaths, including high-ranking officials. This is a direct causal link between the AI system's use and harm to human life, which qualifies as an AI Incident under the OECD framework. The involvement is in the use of the AI system for military targeting and planning, resulting in realized harm (loss of life and injury).

The Pentagon's secret weapon exposed! Claude itself planned the Iran attack

2026-03-04
A Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) integrated into a military planning system (Maven) to select and prioritize targets for attacks against Iran. The AI's outputs directly influenced military strikes, which inherently involve harm to persons and property. The AI system's role is pivotal in the harm caused by these military operations. This meets the definition of an AI Incident, as the AI system's use has directly led to harm (a) injury or harm to persons and (d) harm to property and communities. The scale and nature of the use confirm this classification rather than a hazard or complementary information.

The algorithm era in warfare

2026-03-04
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system Claude was used to identify, prioritize, and coordinate military targets in attacks that caused hundreds of deaths and injuries. This constitutes direct involvement of an AI system in causing harm to people, fulfilling the criteria for an AI Incident. The harm is materialized and significant, involving loss of life and injury in a military conflict. Hence, this is classified as an AI Incident.

The US's secret weapon! Claude picked the targets in Iran

2026-03-04
Türkiye
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Claude being integrated into a military targeting system (Maven) used by the Pentagon to plan and execute attacks in Iran. The AI system's outputs directly influenced the selection and prioritization of targets, leading to actual military strikes. This constitutes direct harm to people and property, fulfilling the criteria for an AI Incident. The involvement is not hypothetical or potential but realized, with the AI system playing a pivotal role in causing harm. Hence, the classification as AI Incident is appropriate.

The US used AI in striking a thousand targets in Iran

2026-03-05
euronews
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Maven Smart System with Claude AI) in a military operation that directly led to the striking of numerous targets in Iran, which constitutes harm to property and communities. The AI system's role in target identification, prioritization, and operational support is pivotal to the incident. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm in a conflict context.

Anthropic sues the Trump administration over the "block" on military collaborations

2026-03-10
NEWS 24/7
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude chatbot) and its potential military use, which is the subject of a government ban and legal challenge. However, no actual harm or incident caused by the AI system is described. The focus is on the legal dispute, policy decisions, and industry reactions, which are governance and societal responses to AI-related concerns. Therefore, this event is best classified as Complementary Information, as it provides important context and updates on AI governance and legal challenges without describing a realized or plausible harm incident or hazard.

Anthropic sues the Pentagon to overturn the national security list

2026-03-09
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The event involves an AI system developed by Anthropic and its use in military contexts, which has led to a governmental designation restricting its use. However, the article does not describe any actual harm caused by the AI system, nor does it report any incident or malfunction resulting in injury, rights violations, or other harms. Instead, it details a legal and regulatory dispute about the potential risks and controls related to AI technology's military applications. This situation represents a plausible risk scenario regarding AI's use in sensitive areas like national security, but no realized harm or incident is reported. Therefore, it fits the definition of an AI Hazard, as the development and use of AI technology in military contexts could plausibly lead to harms related to national security or other issues if not properly managed.

Anthropic sues the Pentagon over the supply chain risk label. Source: Euronews

2026-03-09
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Claude) and discusses its development, use, and governance in a defense context. The DoD's designation of Anthropic as a supply chain risk implies concerns about potential future harms, but no actual harm or incident is reported. The legal challenge by Anthropic is a response to this designation and the associated restrictions. Since the event centers on legal and governance responses to perceived risks rather than an actual AI Incident or a direct AI Hazard, it fits the definition of Complementary Information. It enhances understanding of AI ecosystem dynamics, governance challenges, and societal responses without describing a new harm or imminent risk.

The future we feared is already here

2026-03-08
in.gr
Why's our monitor labelling this an incident or hazard?
The article centers on the tensions and governance challenges arising from the use of AI systems like Claude in government and military settings, including ethical constraints, contract disputes, and ideological conflicts. While it discusses potential risks and the possibility of harm from misuse or lack of control, it does not describe a concrete event where the AI system directly or indirectly caused harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual and governance-related information about AI development, use, and oversight, fitting the definition of Complementary Information.

Trump demands submission from AI companies: strong-arming them over new tools of war

2026-03-11
in.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems and their use in military and political contexts, with clear potential for significant future harm if unrestricted use is enforced, such as autonomous weapons deployment or mass surveillance. However, the article does not document any realized harm or incident caused by AI systems. The main focus is on the political and regulatory conflict, legal challenges, and potential future risks rather than an actual AI Incident. Therefore, this qualifies as an AI Hazard because it plausibly could lead to AI Incidents in the future, given the nature of the technologies and the government's push for unrestricted use, but no harm has yet occurred as described.

The danger from artificial intelligence is now real

2026-03-10
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude model) and discusses their development and use, including potential misuse and malfunction risks. It highlights credible concerns about future harms such as surveillance overreach, autonomous weapons, data breaches, and catastrophic accidents linked to AI. Although no actual harm has yet materialized, the described circumstances and expert warnings indicate plausible future harm. The focus is on the potential for AI-driven disasters and the geopolitical struggle over AI control, fitting the definition of an AI Hazard. There is no report of a direct or indirect AI Incident (realized harm), nor is the article primarily about responses or updates (Complementary Information), nor unrelated news. Hence, AI Hazard is the appropriate classification.

Is it time for you, too, to say goodbye to ChatGPT?

2026-03-09
in.gr
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT and Anthropic's Claude) and their use in military contexts, which raises ethical and societal concerns. However, no actual harm or incident caused by the AI systems is reported. The boycott and political disputes are reactions to potential or perceived misuse, but no concrete AI Incident or Hazard is described. The main content is about societal and governance responses to AI developments and controversies, fitting the definition of Complementary Information rather than an Incident or Hazard.

The White House is preparing an executive order to remove Anthropic

2026-03-10
LiFO
Why's our monitor labelling this an incident or hazard?
The article centers on a governmental policy action aimed at mitigating perceived risks from an AI system, reflecting a governance response rather than a realized harm or incident. There is no indication that the AI system has directly or indirectly caused harm, nor that harm is imminent or plausible in the immediate term as described. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related concerns without reporting a new AI Incident or AI Hazard.

Lawsuit by the AI company against the Trump administration for labelling it a "risk"

2026-03-09
Cretalive
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI models) and discusses its use and restrictions by government agencies. However, it does not report any realized harm (injury, rights violations, disruption, or other harms) caused by the AI system. The dispute is about policy and legal classification, not about an incident or hazard caused by the AI system. The main focus is on the legal and governance conflict, which fits the definition of Complementary Information rather than an Incident or Hazard.

Anthropic sues the US government over the "supply-chain risk" designation

2026-03-10
Insomnia.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's AI technology) and government actions restricting its use, which impacts the company's operations and contracts. However, no actual harm (physical, rights violations, infrastructure disruption, or community harm) is reported as having occurred due to the AI system's development, use, or malfunction. The focus is on legal and constitutional challenges and government policy decisions, which are governance and societal responses to AI. This fits the definition of Complementary Information, as it updates on legal proceedings and governance issues related to AI without describing a new AI Incident or AI Hazard.

Anthropic sues the Trump administration after being labelled a "threat to national security"

2026-03-09
Newpost.gr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI models) and discusses its development and use in military contexts. The dispute centers on the potential misuse of AI for mass surveillance and autonomous weapons, which are significant harms under the framework. However, the article does not report any actual harm or incident caused by the AI system; rather, it details a legal and regulatory conflict about controlling AI use to prevent such harms. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if the AI were misused or deployed without restrictions. There is no indication of realized harm or incident, so it is not an AI Incident. It is more than complementary information because it concerns a direct conflict over AI use with potential for harm, not just updates or responses. It is not unrelated because AI systems and their risks are central to the event.

Great danger from artificial intelligence

2026-03-10
ekriti
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude model and other AI technologies) and discusses their development, use, and potential misuse. It highlights credible risks such as AI-enabled surveillance, autonomous weapons, and AI-driven cyberattacks that could cause harm to people, communities, and national security. However, the article does not report any realized harm or incident caused by AI but rather focuses on the plausible future harms and the geopolitical tensions that increase these risks. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The discussion about the potential for an "AI Armageddon" and the lack of immediate harm supports this classification.

Lawsuit by AI company Anthropic against the Trump administration for labelling it a "risk"

2026-03-09
ΡΕΠΟΡΤΕΡ
Why's our monitor labelling this an incident or hazard?
The article focuses on a legal and regulatory conflict involving an AI company and government agencies, specifically about the use and control of AI technology. While AI systems are involved, there is no report of actual harm, injury, rights violations, or disruption caused by the AI systems themselves. The designation of 'supply chain risk' and the resulting restrictions are administrative and legal actions, not incidents of harm or hazards of plausible harm. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. Instead, it is best classified as Complementary Information because it provides important context about governance, legal disputes, and policy responses related to AI systems and their deployment in sensitive areas such as national security.

Anthropic in court: the Pentagon's "blacklist" threatens billions in revenue and its reputation

2026-03-10
SofokleousIn.GR
Why's our monitor labelling this an incident or hazard?
The article centers on a legal and regulatory conflict involving an AI company and government classification affecting business operations and reputation. There is no mention of any injury, rights violation, infrastructure disruption, or other harms caused by the AI system's development, use, or malfunction. The event does not describe any realized or plausible future harm directly linked to the AI system itself. Instead, it highlights governance and policy dynamics, which fits the definition of Complementary Information.

Anthropic counterattacks: it sues the Pentagon over the "blacklist" and the Trump administration's persecution

2026-03-10
PCMag Greece
Why's our monitor labelling this an incident or hazard?
The article involves an AI company and its AI technologies, but the harm described is economic and reputational damage to the company due to government sanctions, not harm caused by the AI system's malfunction, use, or development. There is no indication that the AI system itself caused injury, rights violations, infrastructure disruption, or environmental harm. The focus is on legal and political conflict over AI governance and policy, which fits the definition of Complementary Information as it details governance responses and legal proceedings related to AI without reporting an AI Incident or Hazard.

Anthropic sues the Trump administration after being labelled a "national security threat"

2026-03-09
Reporter.gr
Why's our monitor labelling this an incident or hazard?
The article centers on the legal and regulatory conflict involving an AI company and the government, focusing on the consequences for the company rather than harm caused by the AI system. The AI system (Anthropic's models) is involved, but no harm caused by the AI system is reported. The harms described are economic and reputational impacts on the company due to government actions, which do not meet the criteria for AI Incident or AI Hazard. The article mainly provides context on governance and legal responses to AI, fitting the definition of Complementary Information.

Anthropic sues Pentagon over supply-chain risk label

2026-03-09
euronews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Claude) and its use in defense applications, which are critical infrastructure. However, the event is a legal and governance dispute over the use and access to the AI system, not an incident where the AI system has caused harm or malfunctioned. The DoD's designation of Anthropic as a supply chain risk suggests potential future risks, but no actual harm or malfunction has occurred or been reported. The main focus is on the legal challenge and policy implications rather than a realized or imminent AI-related harm. Thus, it fits the definition of Complementary Information, providing context and updates on governance and legal responses to AI in defense, rather than an AI Incident or Hazard.

White House: Draft executive order to remove Anthropic

2026-03-11
Pagenews.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude by Anthropic) explicitly mentioned and its use within federal government systems. The government's consideration of an executive order to remove this AI system is based on concerns about potential risks to national security and operational safety, implying plausible future harm if the system remains deployed. No actual harm or incident has been reported yet, but the situation reflects a credible risk scenario. The article focuses on the potential for harm and regulatory response rather than an actual incident or realized harm, so it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic sues the Pentagon over the supply-chain risk label

2026-03-09
businessnews.gr
Why's our monitor labelling this an incident or hazard?
The article describes a conflict over the use of an AI system (Claude) with potential applications in autonomous weapons and mass surveillance, which are recognized as high-risk uses that could lead to significant harms. The Department of Defense's designation of Anthropic as a supply chain risk reflects concerns about these plausible future harms. However, the article does not report any realized harm or incident caused by the AI system's use or malfunction. Instead, it focuses on the legal and regulatory dispute aimed at preventing such harms. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic vs. the Pentagon: Backing from Google and OpenAI

2026-03-10
SecNews.gr
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI technology) and their potential military use, which inherently carries risks. However, the event centers on a legal dispute and policy decisions rather than an actual incident of harm or malfunction caused by AI. The article highlights concerns about future risks (autonomous weapons, mass surveillance) but does not report any realized harm or direct AI-driven incident. The focus is on governance, ethical considerations, and industry responses, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Anthropic sues the Pentagon and Secretary of War Pete Hegseth

2026-03-09
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI models) and their use in defense, but no direct or indirect harm has occurred. The event is a legal and governance dispute over AI use and control, not an incident or hazard involving realized or plausible harm. It fits the definition of Complementary Information because it details societal and governance responses to AI technology and its regulation, without reporting an AI Incident or AI Hazard.

Anthropic: Trying to avoid the Pentagon's... black list through the courts

2026-03-09
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems developed by Anthropic and their potential use in autonomous weapons and surveillance, which are areas with high risk of harm including injury, violation of rights, and national security threats. The Pentagon's blacklist and restrictions reflect concerns about these risks. Although no actual harm has been reported, the dispute and legal actions revolve around preventing potentially dangerous uses of AI technology. The AI's role is pivotal in the potential harms discussed, but since harm has not yet materialized, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is not on updates or responses to a past incident but on an ongoing conflict about potential future harms. It is not Unrelated because AI systems and their military use are central to the event.

The Pentagon turned Claude into a weapon - and now claims ignorance

2026-03-09
Business Voice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Claude) used by the military to generate target lists that led to a strike hitting a civilian school, causing harm to people and raising human rights concerns. The AI system's involvement in target selection and the resulting harm meet the criteria for an AI Incident. The harm is realized (the school was hit), and the AI system's role was pivotal in producing the target list. Although a human made the final strike decision, the AI's output was essential and the human control was arguably insufficient. Therefore, this event is classified as an AI Incident.

Anthropic: Launches AI app marketplace in the shadow of the dispute

2026-03-09
STARTUPPER
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Anthropic's Claude AI models and AI-powered applications) and discusses their commercial deployment and regulatory scrutiny. However, there is no indication of any harm caused or plausible harm imminent from the AI systems described. The conflict with the Department of Defense and the supply chain risk designation represent governance and legal challenges rather than direct or potential AI-related harm. The launch of the AI app marketplace is a business development rather than an incident or hazard. Thus, the article fits the definition of Complementary Information, providing important context and updates on AI governance and market dynamics without reporting an AI Incident or AI Hazard.

To "beef" στην Τεχνητή Νοημοσύνη

2026-03-12
Παραθυρο
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the debate and policy stances regarding AI use in military and advertising contexts, including concerns about potential misuse and violations of fundamental rights. It does not describe a concrete incident in which an AI system caused harm, nor a near-miss event. Instead, it reports on company positions, government decisions, and industry reactions, which constitute complementary information about AI governance and ethical considerations. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Microsoft: Backs Anthropic in its dispute with the US Department of Defense

2026-03-11
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI technology) and discusses their potential risks and regulatory challenges, but no direct or indirect harm has occurred yet. The focus is on a legal dispute and advocacy to prevent disruption and ensure safe AI use, which fits the definition of Complementary Information. There is no description of an AI Incident (harm realized) or an AI Hazard (plausible future harm from AI use) as the article centers on legal and governance responses rather than an event causing or imminently risking harm.

Palantir continues to use Claude AI amid the Pentagon's clash with Anthropic (Source: Investing.com)

2026-03-12
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude AI) and its use by Palantir and the DoD. However, the article does not describe any direct or indirect harm resulting from the AI system's development, use, or malfunction. The DoD's designation of Anthropic as a supply chain risk suggests potential concerns but does not specify plausible harm or incidents caused by the AI. Therefore, this is not an AI Incident or AI Hazard. The article provides contextual information about AI system use amid organizational disputes, fitting the definition of Complementary Information.

Tech giants and former generals back Anthropic's legal battle against the US (Source: Euronews)

2026-03-12
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI chatbot) and its use in military contexts, but the event centers on a legal challenge against a government designation rather than any harm caused by the AI system. There is no indication of injury, rights violations, or other harms resulting from the AI's development, use, or malfunction. The concerns raised are about potential economic and strategic impacts and government policy, which are governance and societal response issues. Therefore, the event does not meet the criteria for AI Incident or AI Hazard but fits the definition of Complementary Information, as it details governance and legal developments affecting AI deployment and industry dynamics.

Who in tech is backing Anthropic in its dispute with the US?

2026-03-12
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude chatbot) and its use in military settings, but the article focuses on a legal and policy dispute rather than any realized or imminent harm caused by the AI system. There is no indication that the AI system malfunctioned, was misused to cause harm, or that harm has occurred or is imminent. The main narrative is about the government's designation of Anthropic as a supply chain risk, the company's legal challenge, and the support it receives from various actors. This fits the definition of Complementary Information, which includes governance responses, legal proceedings, and societal reactions related to AI, without describing a new AI Incident or AI Hazard.

U.S. Neutralizes Four Soleimani-Class Warships

2026-03-11
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI tools by the U.S. military to compress targeting and strike processes from hours or days to seconds, directly supporting the execution of strikes that have neutralized Iranian warships and destroyed thousands of targets. This involvement of AI in the use phase of military operations has directly led to harm to property and military assets, fulfilling the criteria for an AI Incident. The harm has materialized and is significant, and the AI system's role is pivotal in the operational outcomes described.

US Military Using AI in Attacks on Iran

2026-03-11
NTD
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in active military operations that have caused physical harm and destruction. The AI tools are integral to the military strategy, enabling faster decision-making that leads to attacks causing injury and death, as well as damage to property. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities and harm to property. The article does not merely discuss potential or future harm but describes ongoing military actions involving AI.