AI-Enabled Autonomous Weapons Cause Harm in Modern Warfare


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems integrated into military hardware, such as autonomous drones and weapons, are actively used in conflicts like the war in Ukraine, enabling the selection and attack of human targets without human intervention. This deployment has led to direct harm, raising global security concerns and prompting calls for international agreements to restrict such technologies, though none has yet been reached.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems integrated into autonomous weapons and tactical military software, which are capable of independently identifying and attacking human targets. Although no specific incident of harm is reported, the discussion centers on the credible and significant risks these AI systems pose in warfare, including lethal harm and destabilization of global security. This fits the definition of an AI Hazard, as the development and deployment of such AI systems could plausibly lead to AI Incidents involving injury, death, and broader societal harm. There is no indication of a specific realized harm event, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the risks and implications of AI in military applications.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Transparency & explainability; Robustness & digital security; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

Business function
Other

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


AI in Wars: Autonomous Weapons and Tactical Software

2023-11-16
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons and tactical military software, which are capable of independently identifying and attacking human targets. Although no specific incident of harm is reported, the discussion centers on the credible and significant risks these AI systems pose in warfare, including lethal harm and destabilization of global security. This fits the definition of an AI Hazard, as the development and deployment of such AI systems could plausibly lead to AI Incidents involving injury, death, and broader societal harm. There is no indication of a specific realized harm event, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the risks and implications of AI in military applications.

AI in Wars: Autonomous Weapons and Tactical Software

2023-11-16
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (autonomous weapons and AI-powered tactical software) whose development and use have directly led to or are currently leading to harm in armed conflict, including threats to human life and shifts in global military power. The autonomous weapons can select and attack human targets without human intervention, which constitutes direct harm to people. The tactical AI software enhances military decision-making that can lead to lethal outcomes. The presence of these AI systems in active conflict zones (e.g., Russia-Ukraine war) confirms realized harm rather than just potential. Therefore, this event meets the criteria for an AI Incident due to direct involvement of AI systems causing harm to people and communities in warfare.

Autonomous Weapons, Vehicles, and Tactical Software... AI Will Revolutionize Warfare

2023-11-16
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons and tactical military software that can independently select and attack human targets, which directly leads to harm (injury or death) in armed conflict. The deployment of such systems in active war zones and their potential to cause large-scale lethal harm qualifies this as an AI Incident. The discussion of ethical concerns and strategic implications further supports the significance of the harm caused by these AI systems. Therefore, the event is classified as an AI Incident rather than a hazard or complementary information.

The AI Revolution in Warfare: Human Fears of an Uncertain Future

2023-11-16
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous weapons and military applications, which are capable of independently selecting and attacking human targets. The use of such AI systems in ongoing conflicts (e.g., Russia-Ukraine war) indicates their deployment and operational use, implying a direct link to potential harm. However, the article does not report a specific realized harm event but rather discusses the potential and emerging risks associated with these AI systems. The autonomous weapons' capability to cause mass casualties and escalate conflicts represents a plausible future harm. Hence, the event is best classified as an AI Hazard, reflecting credible risks from the development and use of AI in lethal autonomous weapons and military decision-making systems.

Why Is AI More Dangerous Than the Atomic Bomb?

2023-11-16
Al-Eqtisadiah
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems used in autonomous weapons and military decision-making tools that could lead to significant harm, including lethal attacks and escalation of warfare. While no actual harm or incident is reported, the potential for these AI systems to cause injury, death, and geopolitical instability is credible and significant. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on the risks posed by AI in military applications.

Humans Are No Longer the Dominant Force in War | Al-Arab

2023-11-18
Al-Arab
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons that can identify and attack human targets without human intervention, which is a direct use of AI leading to harm (death and injury) in armed conflict. The mention of their deployment in the Russia-Ukraine war confirms realized harm rather than just potential. Therefore, this qualifies as an AI Incident under the definition of harm to people and communities caused directly or indirectly by AI system use.

AI Has the Power to Revolutionize Warfare | MEO

2023-11-17
MEO
Why's our monitor labelling this an incident or hazard?
The article centers on AI systems integrated into autonomous weapons and military platforms that can independently select and engage human targets, which directly relates to AI systems capable of causing harm. While no specific harm event is described, the article highlights the plausible future harm these AI systems could cause, including mass casualties and destabilization of global security. This fits the definition of an AI Hazard, as the development and deployment of such AI-enabled autonomous weapons could plausibly lead to AI Incidents involving injury, death, and violations of human rights. The discussion of ongoing military use and strategic investments underscores the credible risk. Therefore, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Weapons and More... AI Will Revolutionize Warfare!

2023-11-18
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons and military vehicles that can identify and attack human targets without human intervention, which directly leads to harm to people and communities (harm category a and d). The use of AI in these weapons is not hypothetical but already partially realized (e.g., autonomous drones in Ukraine). The harms include injury and death, as well as broader societal and ethical harms related to warfare. Therefore, this is an AI Incident due to the direct involvement of AI systems in causing harm through autonomous lethal force.

Artificial Intelligence Transforms the Battlefield

2023-11-16
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons capable of lethal action without human control, which could plausibly lead to significant harm including loss of life and geopolitical instability. While no actual harm or incident is reported, the discussion of ongoing development, deployment in conflict zones like Ukraine, and the potential for mass destruction meets the criteria for an AI Hazard. The event does not describe a realized harm (incident), nor is it merely complementary information or unrelated news. Hence, it is classified as an AI Hazard reflecting the credible risk posed by AI-enabled autonomous weapons.

How Artificial Intelligence Is Transforming the Wars in Ukraine and Gaza

2023-11-17
La Nacion
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons and military software that are currently used in active conflict zones like Ukraine and Gaza. These systems have the capability to select and attack human targets autonomously, which directly causes harm to people and communities, fulfilling the criteria for an AI Incident. The discussion of ongoing use and impact in warfare confirms realized harm rather than just potential risk. Hence, this is not merely a hazard or complementary information but an AI Incident due to the direct involvement of AI in causing harm in armed conflict.

Why Could Artificial Intelligence Transform Warfare Like the Atomic Bomb?

2023-11-17
El Tiempo
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems integrated into military hardware and software that are currently used in conflict zones (e.g., autonomous drones in Ukraine) causing direct harm and changing combat conditions, fulfilling the criteria for an AI Incident. It also discusses the broader implications and risks of AI-enabled autonomous weapons, which could plausibly lead to further incidents, but since harm is already occurring, the classification prioritizes AI Incident. The presence of AI systems is clear, their use in warfare is described, and the harms include injury, death, and disruption of military operations, fitting the definition of AI Incident.

Artificial Intelligence, a New Transformation of Warfare

2023-11-16
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into military hardware and software that are currently used in conflict zones, such as autonomous drones and weapons capable of selecting and attacking human targets without human intervention. This use directly leads to harm (injury or death) to people in war, fulfilling the criteria for an AI Incident. The discussion of ongoing deployment in Ukraine and the potential for mass destruction confirms realized harm rather than just potential risk. Therefore, the event is best classified as an AI Incident.

Artificial Intelligence Is Changing the Theater of War: Cheap, Precise, and Emotionless

2023-11-16
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems integrated into military hardware and software, such as autonomous drones and AI-powered tactical planning tools. It discusses their use in ongoing conflicts and the potential for these systems to cause large-scale harm, including lethal autonomous attacks and shifts in power dynamics. However, it does not document a specific event where AI directly or indirectly caused harm (an AI Incident). Instead, it focuses on the plausible future risks and ethical concerns of AI in warfare, fitting the definition of an AI Hazard. The article also mentions ongoing diplomatic discussions about regulating such technologies, reinforcing the hazard perspective rather than reporting a realized incident.

AI Is Transforming the Wars in Ukraine and Gaza

2023-11-20
Diario de Cuyo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into autonomous weapons and military vehicles that are currently used in active conflicts, such as Ukraine, causing direct harm to people and communities. The autonomous nature of these weapons, capable of selecting and attacking human targets without human intervention, fits the definition of AI systems causing direct harm (AI Incident). Additionally, the discussion of ongoing development and potential mass deployment of such weapons also implies plausible future harm (AI Hazard). However, since harm is already occurring, the event is best classified as an AI Incident. The article also touches on ethical and strategic implications, but the primary focus is on the realized and ongoing harm caused by AI-enabled military systems.

Artificial Intelligence, a New Transformation of Warfare | El Deber

2023-11-16
EL DEBER
Why's our monitor labelling this an incident or hazard?
The article focuses on the ongoing development and deployment of AI-enabled autonomous weapons and military systems, which could plausibly lead to significant harms including loss of life, escalation of conflict, and ethical violations. Although no specific AI-caused harm has yet occurred or is reported, the credible risk of such harm is emphasized, making this an AI Hazard. The discussion of potential mass deployment of autonomous weapons and the strategic military advantages they confer supports classification as a hazard rather than a mere general AI news or complementary information. There is no report of an actual AI Incident (harm realized), so AI Hazard is the appropriate classification.

Like Gunpowder and Atomic Energy, Artificial Intelligence Is Revolutionizing Warfare

2023-11-18
El Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into military hardware and software that can autonomously identify and attack targets, which constitutes direct involvement of AI in causing harm (injury or death) to people. The mention of ongoing conflicts where such technologies are used confirms that harm is occurring. Additionally, the discussion of risks such as accidental escalation or mass deployment of autonomous weapons further supports the classification as an AI Incident. Therefore, this event meets the criteria for an AI Incident due to the realized harm caused by AI-enabled autonomous weapons and military systems.

Artificial Intelligence, a New Transformation of Warfare | News from Norte de Santander, Colombia, and the World

2023-11-19
La Opinión
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI's role in military technology, including autonomous lethal weapons, which are AI systems capable of making decisions about lethal force. The discussion about the potential for AI to revolutionize warfare and the lack of a ban on such weapons indicates a credible risk that these AI systems could lead to harm, including injury or death, disruption of security, and geopolitical instability. Although no specific incident of harm is reported, the article clearly outlines a plausible future harm scenario from the development and use of AI in autonomous weapons. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.

Artificial Intelligence, a New Transformation of Warfare - Diario La Tribuna

2023-11-16
Diario La Tribuna
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into military hardware and software that can autonomously select and attack human targets, which constitutes direct involvement of AI in causing or enabling harm to people (harm to health and life). The use of autonomous drones and weapons in active conflict zones like Ukraine demonstrates realized harm or at least ongoing risk of harm. The discussion of AI-enabled lethal autonomous weapons and their deployment in warfare meets the criteria for an AI Incident, as the AI systems' use has directly or indirectly led to harm or lethal outcomes. The article also references ethical and strategic concerns about these systems, reinforcing the assessment of realized harm rather than mere potential.

Artificial Intelligence, a New Transformation of Warfare

2023-11-16
UDG TV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into military hardware and software that can autonomously select and attack human targets, which directly relates to harm to persons and communities. The use of autonomous drones in active conflict (Ukraine) indicates realized harm, qualifying as an AI Incident. The discussion of potential mass deployment of autonomous lethal weapons further supports the presence of direct or indirect harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in lethal military operations causing harm.

Artificial Intelligence Revolutionizes the Lethality of War

2023-11-16
Newsweek México
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into weapons and military platforms that can autonomously select and attack human targets, which constitutes direct involvement of AI in causing harm (injury and death in warfare). The ongoing conflict in Ukraine demonstrates actual use of such AI-enabled systems causing harm, fulfilling the criteria for an AI Incident. The discussion of potential future risks and ethical concerns further supports the significance of the harm. Therefore, this event is classified as an AI Incident due to realized harm from AI-enabled autonomous weapons in active conflict.

"Wars Are Won by Technology": How Will Artificial Intelligence Help Win the War with Russia?

2023-12-04
Ekonomichna Pravda
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in operational military contexts, such as autonomous drones and AI-powered weapon turrets that identify and attack enemy targets, which directly influence physical harm and the dynamics of war. The AI systems are not hypothetical or potential but are actively deployed and contributing to harm in the conflict. This meets the definition of an AI Incident, as the AI's use in warfare has directly or indirectly led to injury or harm to persons and communities. The article does not merely discuss potential risks or general AI development but focuses on actual AI use in combat, causing real-world harm.

The Bletchley Declaration on AI: How Ukraine Can Capitalize on Global Trends | Delo.ua

2023-12-01
delo.ua
Why's our monitor labelling this an incident or hazard?
The content centers on international policy frameworks, ethical considerations, and strategic approaches to AI governance rather than any concrete event involving AI systems causing harm or posing immediate risk. There is no mention of an AI system malfunctioning, being misused, or leading to injury, rights violations, or other harms. The article is primarily informative and contextual, discussing AI risks and regulatory responses as a background for Ukraine's engagement with global AI trends. Therefore, it fits the definition of Complementary Information, as it provides supporting context and updates on AI governance without reporting a new AI Incident or AI Hazard.

The US Holds Up Ukraine as an Example in Digitalization

2023-12-01
ZN.UA
Why's our monitor labelling this an incident or hazard?
The article does not report any AI Incident or AI Hazard. It focuses on policy intentions, international cooperation, and positive examples of digitalization, including AI governance efforts. There is no mention of AI systems causing or potentially causing harm, nor any malfunction or misuse leading to harm. Therefore, it fits best as Complementary Information, providing context and updates on AI governance and digital transformation without describing a specific incident or hazard.

The US Holds Up Ukraine as an Example in Digitalization

2023-12-02
InternetUA
Why's our monitor labelling this an incident or hazard?
The article does not describe any AI system malfunction, misuse, or harm. It reports on strategic policy intentions, international collaboration, and recognition of Ukraine's digital initiatives as a model. These are governance and contextual developments related to AI but do not constitute an AI Incident or AI Hazard. Therefore, the article is best classified as Complementary Information, providing context and updates on AI governance and international cooperation.

Dangerous but Promising: The White House Plans to Develop AI Technologies, and Ukraine Is Cited as an Example in Digitalization

2023-11-30
VOA
Why's our monitor labelling this an incident or hazard?
The article focuses on governmental and organizational strategies to manage AI risks and promote safe AI development, including regulatory frameworks, international cooperation, and cybersecurity efforts. While it mentions concerns about potentially dangerous AI discoveries and cybersecurity threats, it does not describe any actual AI system malfunction, misuse, or harm occurring. The discussion of OpenAI's internal issues and warnings about dangerous AI is presented as a concern or potential risk, not a realized incident. Therefore, the article is best classified as Complementary Information, providing context and updates on AI governance and risk management rather than reporting an AI Incident or AI Hazard.